# 1 INTRODUCTION
In today’s world, data and knowledge are stored, managed, and queried through databases, which are accessed using query languages such as SQL (for relational databases) or Cypher (for graph databases). Recent advancements in large language models (LLMs)
Figure 1: Hard-example selection for Text2Cypher. Starting from a larger fine-tuning dataset containing simple, medium, and hard Cypher queries, selection retains predominantly medium and hard examples, which are then used for fine-tuning and evaluation.
have made it possible to interact with databases using natural language, allowing models like Text2SQL and Text2Cypher to translate natural language questions into database queries. A common approach for generating these queries is to fine-tune foundational models using question-query datasets. Effective fine-tuning of these models requires large, diverse datasets with non-trivial examples.
With increased use of synthetic datasets, it is now possible to automatically generate larger datasets. However, these datasets often suffer from quality and redundancy issues. Recent research suggests that small, high-quality datasets can outperform larger ones when fine-tuning LLMs [22, 24]. Additionally, the cost of fine-tuning LLMs increases as the dataset size grows. One way to address these challenges is to prune or select a subset of the data. This process should be automated to ensure that the resulting dataset (i) maintains high performance and (ii) minimizes costs, achieving greater efficiency [8]. Figure 1 shows a hard-example selection procedure. Initially, we start with a larger dataset containing simple, medium, and hard Cypher queries used for fine-tuning a Text2Cypher model. After applying hard-example selection, the dataset is reduced in size and predominantly retains medium and hard queries.
In this paper, we apply five hard-example selection approaches to prune the Text2Cypher dataset: three approaches for selecting challenging instances from a larger training dataset to enhance model performance and two approaches that combine the proposed hard-example selection methods. We evaluate their impact on a Text2Cypher dataset, analyzing training time (in terms of training steps) and Cypher generation performance. Our main contributions are:
• We propose hard-example selection techniques specifically for the Text2Cypher task. Three approaches leverage prior analysis results and heuristics to identify challenging (hard) examples and prune the training dataset, while two additional approaches combine these methods to improve performance.
• We analyze their impact on the Text2Cypher task on training time (measured in steps), loss values, and Cypher generation performance.
• Our results show that hard-example selection approaches reduce resource usage—both in elapsed time and total cost—by more than half while minimally affecting Cypher generation performance. Although there is room for improvement in matching the performance of training on the full dataset, hard-example selection presents a cost-effective solution.
The structure of the paper is as follows: Section 2 reviews related work on data subset selection and pruning, particularly for fine-tuning large language models. Section 3 details the hard-example selection approaches applied to the Text2Cypher task. Section 4 outlines our experimental setup and presents the evaluation results. Finally, Section 5 provides the conclusion.
# 2 RELATED WORK
Several approaches for data selection or pruning have been proposed in the literature [1, 17], ranging from the use of baseline LLM models to decide which instances to select or create embeddings [3–5, 9], to methods that rely on instance-level scores based on system indicators like diversity or difficulty. For example, Maharana et al. [10] use graph-based techniques to reduce redundancy by iteratively selecting diverse and challenging instances. Lin et al. [8] utilize influence and effort scores to prioritize influential and difficult samples for fine-tuning. Zhang et al. [22] identify diverse, difficult, and dependable data iteratively. In each iteration, they evaluate the distinctiveness, difficulty (through uncertainty-based prediction), and dependability (using an external LLM) of instances, then apply a weighted function to select a subset. Tan et al. [15] propose InfoMax, selecting samples based on informativeness and overlap between pairwise samples.
Other approaches include training a model on a small subset, then using it to prune the data. For example, Li et al. [7] fine-tune a model on a randomly sampled subset of data, then use the fine-tuned model to calculate Instruction Following Difficulty (IFD) scores for each instance. Instances with greater difficulty, based on the IFD score, are selected for final fine-tuning. Xu et al. [19] focus on differentiating informative hard samples from misleading ones in model training. In their HardPT framework, they utilize reinforcement learning and adaptive contrastive learning techniques. Azeemi et al. [2] employ cross-entropy scores to select harder instances. In their experiments they observe that selecting more difficult instances results in improved model performance. Xia et al. [18] introduce the LESS algorithm, an optimizer-aware approach for efficient data selection. It uses a warm-up training phase to generate low-dimensional gradient features, which are stored and later used by models for training. Finally, Yang et al. [20] focus on diversity-aware selection using sparse autoencoders and either a greedy sampling approach (SAE-GreedSelect) or a similarity-based sampling approach (SAE-SimScale).
Although data selection or pruning are well-studied in machine learning, their application to natural language to query language tasks, such as Text2SQL and Text2Cypher, remains largely unexplored. SE-HCL [23] applies curriculum learning to the Text2SQL task by training the model progressively, starting with easy instances and gradually moving to more difficult ones. This approach involves iterative steps that begin with simplifying the data, gradually increasing its complexity, and evaluating the difficulty of individual instances. Some Text2SQL datasets, such as Spider [21] and IndDB [11], provide difficulty labels based on SQL constructs like GROUP BY clauses and nested subqueries, where more complex constructs indicate higher difficulty. However, these difficulty annotations are primarily used for analyzing evaluation outputs rather than for data selection. In this work, we explore data pruning for the Text2Cypher task by focusing on hard-example selection based on instance difficulty.
# 3 HARD-EXAMPLE SELECTION FOR TEXT2CYPHER
We introduce five methods for selecting hard examples. Three of them focus on finding more challenging instances, while the other two combine these approaches to improve selection.
# 3.1 Selecting Challenging Instances
In this section, we describe three approaches for selecting challenging instances from a larger training dataset to enhance model performance.
Complexity-Based Hard-Example Selection: A previous analysis [12] identified the data sources and databases where fine-tuned models struggled most. Based on this analysis: (i) the chosen databases are three Neo4j demonstration databases, namely "recommendations", "companies", and "neoflix", and (ii) the selected data sources are "functional_cypher", "synthetic_gemini", and "text2cypher2023_train". For the selection of these instances, we used a logical "OR" to include instances from either the selected databases or data sources. While this results in a diverse set of challenging instances, we observed an imbalance, with many instances coming from a single data source. To address this, we performed additional sampling, limiting each group to a maximum of 4,000 instances (the average group size). This resulted in a total of 16,173 instances, less than half of the original training dataset of approximately 40K instances.

Length-Based Hard-Example Selection: This heuristic approach assumes that longer ground-truth Cypher queries are more challenging for a language model to generate owing to their increased complexity. Longer queries often involve multiple clauses, making them harder to replicate accurately. Therefore, this approach selects instances based on the length of the Cypher query. To ensure consistency with other selection methods, we maintained a final dataset size of 16,173 instances.
Cypher-Specific Hard-Example Selection: This heuristic method focuses on the presence of Cypher-specific terms (e.g., MATCH, WHERE, RETURN), under the assumption that queries containing more such terms are more complex. Unlike the length-based approach, which prioritizes the length of queries, this method selects instances based on the count of Cypher terms, since queries containing multiple clauses are likely to be more complex. To ensure fairness with other hard-instance selection methods, we restricted this dataset to 16,173 instances.
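Assuming an illustrative representation of the dataset (a list of dicts with a "cypher" key) and a non-exhaustive keyword list, the two heuristics above can be sketched as:

```python
# Hedged sketch of the Length-Based and Cypher-Specific heuristics.
# The dataset structure and keyword list are illustrative; the paper
# selects 16,173 instances with each method.

CYPHER_TERMS = ["MATCH", "WHERE", "RETURN", "WITH", "ORDER BY",
                "LIMIT", "COUNT", "DISTINCT", "OPTIONAL MATCH"]

def select_by_length(dataset, target_size):
    """Keep the instances with the longest ground-truth Cypher queries."""
    ranked = sorted(dataset, key=lambda ex: len(ex["cypher"]), reverse=True)
    return ranked[:target_size]

def select_by_cypher_terms(dataset, target_size):
    """Keep the instances whose queries contain the most Cypher keywords."""
    def term_count(ex):
        query = ex["cypher"].upper()
        return sum(query.count(term) for term in CYPHER_TERMS)
    ranked = sorted(dataset, key=term_count, reverse=True)
    return ranked[:target_size]
```

Both functions simply rank the full training set by the chosen difficulty proxy and truncate to the target size.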
# 3.2 Combining Selection Methods
We combined the proposed hard-example selection approaches as follows:
Complexity-Based & Length-Based Hard-Example Selection: After selecting hard examples using the Complexity-Based approach, we took an additional step to further refine the selection process. Specifically, we sorted the chosen instances in descending order based on the length of the Cypher queries. This step follows the methodology of the Length-Based approach, which assumes that longer queries tend to be more complex and, therefore, more challenging for the model to generate. By prioritizing longer queries, we made sure that the final set of hard examples was both challenging and diverse in terms of complexity.

Complexity-Based & Cypher-Specific Hard-Example Selection: Similar to the previous combined approach, after selecting hard examples using the Complexity-Based approach, we ranked them by the number of Cypher-specific terms in descending order, aligning with the Cypher-Specific approach. This method emphasizes instances with more Cypher-specific terms, as these tend to be more complex and involve multiple clauses. The final subset therefore includes challenging instances with a diverse range of complexity.
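A minimal sketch of the first combined approach, assuming illustrative field names ("database", "source", "cypher") and the database and data-source lists from Section 3.1:

```python
# Hedged sketch of Complexity-Based filtering followed by Length-Based
# ranking. `is_hard_source` stands in for the membership test over the
# databases/data sources identified in the prior analysis.

HARD_DATABASES = {"recommendations", "companies", "neoflix"}
HARD_SOURCES = {"functional_cypher", "synthetic_gemini", "text2cypher2023_train"}

def is_hard_source(ex):
    # Logical OR over the selected databases and data sources.
    return ex.get("database") in HARD_DATABASES or ex.get("source") in HARD_SOURCES

def complexity_then_length(dataset, target_size):
    """Filter to hard databases/sources, then rank by query length."""
    hard = [ex for ex in dataset if is_hard_source(ex)]
    hard.sort(key=lambda ex: len(ex["cypher"]), reverse=True)
    return hard[:target_size]
```

Swapping the sort key for a Cypher-keyword count yields the second combined approach.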
# 3.3 Baseline Approaches
We used the following baseline approaches:
• Original Data: This baseline uses the training data without any modifications, providing a reference point for performance comparisons.
• Randomly-Sampled: In this approach, we randomly sampled instances from the original data. To ensure fairness with the Complexity-Based approach, we aimed to create a balanced dataset across data-source groups. We first capped each group (based on the data-source field) at 2,755 instances, the 75th percentile of data-source group sizes. We then refined the sample to 16,173 instances to match the size used in the hard-instance selection methods.
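A minimal sketch of the Randomly-Sampled baseline, assuming an illustrative "source" field; the percentile handling is simplified relative to the actual pipeline:

```python
# Hedged sketch of group-balanced random sampling: cap each data-source
# group at the 75th percentile of group sizes, then downsample to the
# target size. Field names are illustrative.
import random

def random_sample_balanced(dataset, target_size, seed=42):
    rng = random.Random(seed)
    groups = {}
    for ex in dataset:
        groups.setdefault(ex["source"], []).append(ex)
    sizes = sorted(len(g) for g in groups.values())
    cap = sizes[int(0.75 * (len(sizes) - 1))]  # simple 75th-percentile cap
    pooled = []
    for g in groups.values():
        pooled.extend(rng.sample(g, min(len(g), cap)))
    return rng.sample(pooled, min(len(pooled), target_size))
```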
# 4 EXPERIMENTAL SETUP AND RESULTS
# 4.1 Experimental Setup and Evaluation Metrics
For our experiments, we used the publicly available Text2Cypher dataset [13], which contains 44,387 instances—39,554 for training and 4,833 for testing. This dataset is a cleaned and combined version of multiple data sources, most of which were synthetically generated.
We employed two evaluation procedures to measure model performance: (i) Translation-Based (Lexical) Evaluation: This method compares generated Cypher queries with ground-truth queries at the textual level. (ii) Execution-Based Evaluation: This method executes both the generated and ground-truth Cypher queries on the target database and compares their outputs, sorted lexicographically. This approach requires an active target database; about 50% of the dataset has such references, so it evaluates only a subset of the data. To compute these evaluation metrics, we used the Hugging Face Evaluate library [6]. We report the Google-Bleu and Exact Match scores as the primary evaluation metrics.
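The actual metrics are computed with the Hugging Face Evaluate library; purely as an illustration of the two comparison modes, simplified stand-ins might look like:

```python
# Simplified stand-ins for the two evaluation procedures. These are NOT
# the Evaluate-library implementations, only sketches of the idea.

def exact_match(pred: str, truth: str) -> bool:
    """Lexical evaluation: whitespace-normalized string equality."""
    norm = lambda s: " ".join(s.split())
    return norm(pred) == norm(truth)

def execution_match(pred_rows, truth_rows) -> bool:
    """Execution-based evaluation: compare result sets after
    lexicographic sorting, so row order does not matter."""
    key = lambda rows: sorted(map(str, rows))
    return key(pred_rows) == key(truth_rows)
```

The second function mirrors why execution-based scores can diverge from translation-based ones: two lexically different queries may still return identical sorted result sets.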
We fine-tuned a baseline model, 'unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit', using various training datasets prepared according to the proposed hard-example selection methods. During evaluation, we used the test set and fine-tuned models to generate Cypher queries based on input natural language questions and corresponding database schemas. After generating the Cypher queries, we applied a post-processing step to remove unwanted text, such as the 'cypher:' prefix. Details of the fine-tuning setup and parameters are provided in Appendix A.
# 4.2 Evaluation Results
We analyzed the impact of (i) using a subset of the full dataset, assessing both training efficiency and model accuracy, and (ii) applying different hard-example selection approaches on performance.
4.2.1 Impact of Training Data Reduction. The original 40K-instance training dataset was reduced to 16,173 instances through random sampling or hard-example selection. As shown in Figure 2, training on the full dataset required around 2.5K steps (batch size 16), while the subset datasets needed only 1K steps. This reduction significantly cut fine-tuning time and costs. Using subset data achieved comparable or better training loss at 1K steps. However, over the full 2.5K steps, the original full dataset achieved a better final loss: 0.0387 versus 0.0569 for random sampling. Translation-based evaluation, which is based on token prediction accuracy, aligns closely with the loss function. The original dataset achieved a Google-Bleu score of 0.75 and an Exact Match score of 0.36, whereas the random sampling approach scored lower at 0.69 and 0.20, respectively. Execution-based evaluation showed smaller drops, with the full dataset scoring 0.25 (Google-Bleu) and 0.27 (Exact Match) versus 0.21 and 0.25 for the randomly sampled dataset. In summary, using subsets cuts training time and costs by over half but reduces performance. We next explore whether hard-example selection can retain efficiency while improving outcomes.
4.2.2 Impact of Hard-Example Selection. When fine-tuning the baseline model with datasets prepared using random sampling or hard-example selection approaches, training times remain similar since the dataset sizes were kept equal, as shown in Figure 3. All methods achieve comparable loss values, ranging between 0.05 and 0.06. However, closer inspection reveals a ranking from highest (worst) to lowest (best) loss: Length-Based → Random-Sampled →
Figure 2: Original vs. Randomly-Sampled data. (a) Training loss; (b) Translation-based Google-Bleu score; (c) Translation-based Exact-Match score; (d) Execution-based Google-Bleu score; (e) Execution-based Exact-Match score.
Cypher-Specific → Complexity-Based. In translation-based evaluation, the Complexity-Based approach performs best, achieving 0.71 Google-Bleu and 0.25 Exact Match, bringing it closer to the performance of the original dataset. Interestingly, execution-based evaluation, which is run on a subset of data that has access to active demonstration databases, follows a different pattern. In this case, the Cypher-Specific approach yields the best results, with Google-Bleu and Exact Match scores of 0.23 and 0.26, respectively.
4.2.3 Impact of Combining Approaches on Performance. Combining the Complexity-Based approach with either the Length-Based or Cypher-Specific approach did not result in significantly different loss values, as shown in Figure 4. For translation-based evaluation, all approaches performed similarly, with Google-Bleu and Exact Match scores around 0.71 and 0.25, respectively. However, execution-based evaluation revealed some variation: The best Google-Bleu score (0.24) is achieved by Complexity-Based & LengthBased approach, and the best Exact Match score (0.25) is achieved by Complexity-Based & Cypher-Specific approach. These findings suggest that although combining approaches does not drastically impact performance, some combinations may offer slight advantages depending on the evaluation method.
4.2.4 Overall. As shown in Table 1, while the full dataset achieves the highest Google-Bleu and Exact Match scores for both translation- and execution-based evaluation, hard-example selection outperforms random sampling. It also reduces resource usage—time and
cost—by more than half, as presented in Figure 2, with minimal performance loss. We observe that fine-tuned models may still benefit from more data or better-tuned hyper-parameters, even with 16K instances. Future work will explore increasing data diversity and optimizing hyper-parameters to boost performance. Additionally, the difference between evaluation methods requires further investigation. While translation-based evaluation closely aligns with the loss function, reflecting token prediction accuracy, execution-based evaluation follows a different pattern. We attribute this behavior to the fact that execution-based evaluation is run on instances that have access to demonstration databases, which is around 50% of the dataset. In the future, we will analyze how different data subsets impact the model's ability to generate accurate Cypher queries during execution-based evaluation.

(a) Training loss: Randomly-Sampled and Hard-Example Selection approaches; (b) Translation-based Google-Bleu score; (c) Translation-based Exact-Match score; (d) Execution-based Google-Bleu score; (e) Execution-based Exact-Match score
Figure 3: Randomly-Sampled and Hard-Example Selection approaches
# 1. Introduction
Inflammation is a key response of biological organisms to harmful stimuli, like pathogens or tissue damage. This mechanism is layered across different scales, ranging from macroscopic swelling of the 3D tissue down to the release of sub-cellular biochemical molecules. At the microscopic level, inflammation is governed by the recruitment and activation of a range of different immune cells. This process is highly dynamic and involves a specific timing of different immune cell types that are present at different locations, at different times, and in different amounts. The initial response during acute inflammation caused by tissue damage or by an unknown pathogen is typically governed by a series of innate immune cells, while inflammation through a specific pathogen might be identified and targeted by a chain of adaptive immune cells. Anomalous adaptive immune responses can lead to chronic inflammation and are often the root of many autoimmune diseases.
Therefore, the detection and classification of immune cells is essential to monitor inflammation dynamics, to enable a deeper understanding of the spatial organization of immune reactions or to diagnose autoimmune diseases and monitor the effect of potential treatment options.
Ideally, such detection tools should provide microscopic, cellular resolution and minimize alterations to the target cells, i.e., it is desirable to avoid fixation and sectioning as well as biochemical binding with antibody markers. Optical microscopy has developed several label-free imaging technologies that provide contrast from the natural interaction between biochemical structures and light. Many of those techniques have already been used for detection of specific cell types, often by leveraging the predictive power of deep learning (DL) models to boost specificity.
The most accessible and straightforward label-free technique is probably bright-field imaging (BF), which was used as input to DL models to classify type and state of certain cells with a performance comparable to fluorescence-based approaches [1–3]. Phase contrast microscopy is another commonly used imaging technique where DL models have shown accurate results, for instance for the segmentation of bovine aortic endothelial cells [4], classification of myoblast cells [5] or the classification of cancer cells [6]. Differential interference contrast (DIC) imaging provides enhanced contrast at edges and structural features by exploiting optical path length differences, producing pseudo-3D relief images of unstained cells [7]. Deep learning has enabled classification and segmentation of cells [8], their health status [9] or bacteria [10] from DIC images. Ogawa et al. compared the effect of different imaging modalities on DL-based classification between lymphoid-primed multipotential progenitor (LMPP) and pro-B cells [11]. They found no significant difference between BF, phase contrast, and DIC in a generally good classification performance (area under the receiver operating characteristic curve - AU-ROC of 0.9) [11]. Quantitative phase imaging (QPI) is a more advanced computational imaging technique that can provide intrinsic quantification of the optical path length difference to provide a quantitative imaging signal with decent cellular specificity [12,13]. QPI has been used in a wide range of applications of machine learning-assisted cell assessment [14, 15], like classification of red blood cell morphology [16], scoring [17] or classification of cancer cells [18], distinction between healthy B cells and lymphoblasts [19], as well as classification of stages in B cell acute lymphoblastic leukemia [19]. Similar advances have also been made with label-free digital holographic microscopy [20–22].
Despite their success in live cell microscopy, the translation of BF, phase imaging, DIC or QPI towards live-tissue, 3D endo-microscopy is less straightforward, as these techniques either show limited 3D capabilities (BF and phase) or require multiple acquisitions under controlled illumination patterns (DIC and QPI), which is more challenging in a tightly-packed endoscope design, as well as computational reconstructions (DIC and QPI), which can be more error-prone for noisy in vivo data. Furthermore, it is often desirable to obtain functional or metabolic information, especially for immune reactions. And although quantitative measurements of optical path length in QPI can be related to dry cell mass [23], these image contrast quantities are only indirectly related to metabolic activities.
In contrast, multiphoton microscopy (MPM) exploits the confocal nature of nonlinear excitation for optical sectioning, as well as reduced scattering at infrared wavelengths for greater penetration depths [24]. Moreover, the label-free measurement of natural autofluorescence from metabolic coenzymes, like nicotinamide adenine dinucleotide (NADH, H for hydrogen) and flavin adenine dinucleotide (FAD), can directly be linked to metabolic processes and mitochondrial activity in cells [25]. These quantitative measurements have already been used to reveal cellular metabolic states [26] or mitochondrial dysfunction [27], to distinguish breast cancer cells from normal controls [25], to identify different breast cancer tissues [26], to detect Alzheimer’s disease in fresh murine brain samples [27] or to support the distinction between brown and white adipose tissue [28].
This unique combination of deep-tissue, 3D imaging capability paired with label-free, metabolic contrast makes MPM ideal for the investigation of inflammation and inflammatory tissue remodeling directly within the native 3D tissue structure [29–32].
Fig. 1. Automated immune cell identification based on label-free 2-Photon autofluorescence and deep learning. A scanning multiphoton microscope is used to generate label-free image data from immune cells on a substrate. An 810 nm, ultra-short-pulsed laser is used to excite autofluorescence from NADH and FAD, while gradient Dodt contrast images are collected in transmission mode. Label-free images from various immune cells are collected and used as input to a convolutional neural network, which has been trained to predict immune cell type.
Although single-photon induced AF was already used for classification of immune cell types [33,34], cell sorting via flow cytometry [35,36] or digital staining of tissue sections and live cells [37–39], two-photon induced AF has only rarely been explored for the same purposes. For instance, Gehlsen et al. [40], as well as our own group [35], demonstrated distinction of various immune cell types by using statistical tools to distinguish two-photon AF intensities. However, the full potential of two-photon AF for AI-assisted computational specificity, as proposed for many other imaging modalities [37–39], is still underexplored. Label-free multimodal imaging of coherent anti-Stokes Raman scattering (CARS) and MPM was used for digital H&E staining based on the label-free input [41, 42]. However, these procedures were limited to formalin-fixed and paraffin-embedded tissue sections and did not offer specificity to different immune cell types beyond that of H&E. The development of state-of-the-art DL models for specific immune cell classification based on label-free multiphoton images has not yet been demonstrated.
Such computational specificity for immune cells in label-free multiphoton imaging might be particularly promising, since multiphoton imaging is already being used for in vivo imaging and endo-microscopy (MPEM) [29, 43–48], which enables optical histology imaging in live animal models. Thus, the development of AI-assisted immune cell detection for label-free multiphoton imaging might translate very well to in vivo applications.
In this work, we used an existing data set of label-free, two-photon induced AF images from various immune cell types [35] to train CNN models for automated, specific identification of immune cells. The use of simultaneously recorded fluorescence antibody markers, as well as clear experimental designs, allowed us to obtain reliable ground-truth annotations without the common bottleneck of manual expert annotation. Systematic perturbation tests validated robust classification performance and data-efficient learning, while also revealing that models that focus on molecular two-photon induced AF significantly outperformed those that use only spatial information on cell size and shape.
# 2. Materials and Methods
# 2.1. Data set
These original images were obtained in a previous work by Lemire et al. [35] from immune cells that were isolated from the spleen (CD4$^+$/CD8$^+$ T cells, B cells) or bone marrow (macrophages, dendritic cells, neutrophils) of wildtype C57BL/6 mice, seeded on glass slides and imaged with a multiphoton microscope (TriMScope II, LaVision BioTec, Bielefeld, Germany) using filters that target AF from NADH (BP 450/70) and FAD (BP 560/40). The respective spectra and filter bands are shown in Fig. 1a. In addition to these AF channels, the forward-scattered Dodt channel was recorded (displayed in gray in all figures). Similar to DIC, Dodt contrast is a gradient-based technique, and it provides optical sectioning and improved visualization of thick tissue slices [49]. Although it offers enhanced structural detail, its application for DL-based immune cell classification is less established, and, as with DIC, it primarily encodes morphological rather than biochemical or metabolic differences. Each full raw image had a size of $1{,}024 \times 1{,}024$ pixels across a field of view (FOV) of $405 \times 405~\mu m^2$, containing dozens to hundreds of cells. Details of all data in this study are shown in Table 1.
Cell mixture First, we investigated the potential to differentiate two different cell types (neutrophils and T cells) that were present in the same sample (Fig. 3). In that case, T cells were stained with the allophycocyanin (APC)-labeled lymphocyte marker $\alpha$-CD3 to obtain ground truth annotations. This APC signal had no significant spectral overlap with natural AF emissions (BP 675/67 for APC, see Fig. 1a) and was subsequently recorded at a different excitation wavelength (810 nm for AF and 1,040 nm for APC), which prevented channel leakage entirely.
In total, this data set consisted of 31 full field-of-view image pairs of label-free AF images. Four images only contained T cells, seven images only contained neutrophils and 20 images were from samples that contained roughly a 50:50 mixture of both cell types.
An image processing procedure was developed in the open-source image processing software Fiji to crop single-cell image patches from these full-FOV images. The image processing macro loaded the raw data and registered AF and APC images via Scale Invariant Feature Transform (SIFT) [50]. Cell detection was performed via semi-automated, user-validated Otsu thresholding of the NADH channel ('setAutoThreshold("Otsu no-reset")'), auto-adjusted to include 10% of bright pixels; a human observer verified or adjusted the threshold manually, if needed. Thresholding was followed by Watershed and 'Analyze Particles' (minimal size of 25 pixels area and 0.3–1 circularity). The center of each detected region of interest (ROI) was then used to crop a patch of $64 \times 64$ pixels ($25.6 \times 25.6~\mu m^2$ FOV) around it. For each image patch, the two AF channels and the Dodt channel were saved together as a multi-channel TIF file. Cells at the edges of the original image were ignored to ensure a consistent size of $64 \times 64$ pixels for all patches. The respective APC channel of the patch was thresholded to label it as APC-positive or APC-negative. This procedure resulted in a data set with a total of 5,078 annotated image patches, each with a unique cell in the center.
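As a simplified Python analogue (the actual pipeline is a Fiji macro with Otsu thresholding, Watershed and 'Analyze Particles'), the edge-aware cropping of a 64 x 64 patch around a detected cell center might look like:

```python
# Hedged sketch: crop a fixed-size patch around a detected ROI center,
# skipping cells too close to the image edge, as described above. This
# is an illustration, not the Fiji macro used in the study.
import numpy as np

def crop_patch(image: np.ndarray, center_yx, size: int = 64):
    """Return a size x size patch centered on (y, x), or None if the
    patch would fall outside the image (edge cells are ignored)."""
    half = size // 2
    y, x = center_yx
    y0, x0 = y - half, x - half
    if y0 < 0 or x0 < 0 or y0 + size > image.shape[0] or x0 + size > image.shape[1]:
        return None
    return image[y0:y0 + size, x0:x0 + size]
```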
Multi-class data set In the second case, we investigated the potential of two-photon induced AF for multi-class classification of several different cell types. For that purpose, we used a different experimental design in the available data base [35], where cell types were not mixed and each isolated cell type was imaged separately without antibody reference (Fig. 2). Therefore, annotations for each cell type were available from the experimental protocol instead of a fluorescence antibody (see Fig. 2). Purity of these isolated cell suspensions reached values of $>95\%$ in each case [35]. In total, we pooled 85 unique full-FOV images from six different cell types. These images were processed into $64 \times 64$ pixel patches following the same procedure as explained above, resulting in a total of 3,424 cell patches.
Table 1. Data sets
# 2.2. Model architecture
We selected the SqueezeNet architecture as the backbone model for this study, as it is known to preserve competitive accuracy while reducing the number of trainable parameters. In the case of our relatively small data set (see section 2.1), this might minimize the risk of overfitting [51]. We used a pretrained SqueezeNet architecture (squeezenet1.0, torchvision) and adjusted the input, features and classifier layers (see blocks 0, 1 and 11 in table 2) to match the shape of our input data and prediction labels. SqueezeNet is composed mainly of ‘Fire’ modules, which are essentially squeeze convolution layers with only $1 \times 1$ filters, feeding into an expand layer that has a mix of $1 \times 1$ and $3 \times 3$ convolution filters (for more details, see section 5.3 in Ref. [52]). This pretrained model was then fine-tuned with our data set for the two respective classification tasks.
# 2.3. Training
The entire framework for training was developed in Python 3.9 using PyTorch 2.5.1 and CUDA 11.8. A custom data loader was used to load the TIF files for each cell patch as PyTorch tensors, apply data augmentation (horizontal flip, vertical flip and rotation), apply a Gaussian filter ($\sigma = 2$) and, finally, carry out a z-score standardization to the mean and standard deviation of the entire data set. A 5-fold cross validation (CV) was used, where $80\%$ of the data were used for training and the remaining $20\%$ for validation in each fold (sklearn.model_selection.KFold). In the case of the multi-class data, a grouped stratification (sklearn.model_selection.StratifiedGroupKFold) was used since this data set was severely imbalanced.
The models were trained for 300 epochs using a cross-entropy loss, an Adam optimizer [53], a learning rate scheduler and stochastic weight averaging (SWA) [54]. As shown in supplementary Fig. 1, this resulted in loss convergence. The main hyperparameters are summarized in table 3. In each fold, loss, true positives, true negatives, false positives and false negatives were tracked to calculate per-fold performance metrics, which were averaged for the final evaluation.
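Computing per-fold metrics from the tracked counts and averaging them for the final evaluation can be sketched as follows (plain Python; the exact metric set reported per experiment may differ):

```python
def fold_metrics(tp, tn, fp, fn):
    """Per-fold validation metrics from tracked TP/TN/FP/FN counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    mcc_den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "mcc": mcc}

def average_folds(per_fold):
    """Final evaluation: average each metric across the (five) folds."""
    keys = per_fold[0]
    return {k: sum(m[k] for m in per_fold) / len(per_fold) for k in keys}
```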
# 2.4. Data perturbation experiments
In order to evaluate if the model learns the desired cellular information instead of overfitting to noise or extra-cellular background, we introduced a series of spatial perturbations in the binary classification data. Similarly to the approach by Cook et al. [55], all of these perturbations were in place for the entire network training process, resulting in an independently trained model for each perturbation test. As a spatial perturbation, we defined concentric circles with fixed diameters that masked either the inside or the surrounding area outside of that circle. The circles were always centered within the image and had diameters of 5, 20, 40, and 60 pixels (see Fig. 2a).
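A minimal NumPy sketch of such a mask, assuming a centred circle on a 64×64 patch and zero-filling of the masked pixels (the exact fill value used in the study is not stated):

```python
import numpy as np

def circular_mask(size=64, diameter=20):
    """Boolean mask that is True inside a centred circle of the given diameter."""
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[:size, :size]
    return (yy - c) ** 2 + (xx - c) ** 2 <= (diameter / 2.0) ** 2

def perturb(patch, diameter, keep="inside"):
    """Zero out pixels outside (keep='inside') or inside (keep='outside')
    a centred circle, as in the data perturbation experiments.
    `patch` is a (C, H, W) array; the circle diameter is in pixels."""
    m = circular_mask(patch.shape[-1], diameter)
    out = patch.copy()
    if keep == "inside":
        out[..., ~m] = 0      # mask the surrounding area
    else:
        out[..., m] = 0       # mask the central area
    return out
```

With diameters of 5, 20, 40 and 60 pixels and both `keep` settings, this reproduces the eight perturbed training conditions described above.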
# 2.5. Model perturbation experiments
Our selected SqueezeNet architecture had a total of 735,937 trainable parameters. To evaluate whether this capacity was adequate for our tasks, we performed a model perturbation test, where certain blocks of the architecture were frozen to the initially pre-trained condition, without re-optimization through backpropagation on the new data (‘requires_grad = False’). The first input layer and the classifier block (blocks 0 and 11 in table 2) were always allowed for backpropagation, resulting in a minimum of 15,760 trainable parameters. We then increased the trainable capacity by subsequently ‘un-freezing’ one layer at a time, starting at #3 in table 2 (28,176 parameters) until #10 (735,937 parameters).
# 2.6. Channel perturbation experiments
Finally, we evaluated the relative importance of the different input channels (representing different types of molecular/metabolic information) by training the same model architecture on four different input channel configurations: (i) NADH autofluorescence only, (ii) FAD autofluorescence only, (iii) Dodt gradient contrast only, and (iv) NADH and FAD autofluorescence. Again, an entirely new model was trained on each of these configurations.
# 2.7. Computational Hardware
All models were trained on a workstation equipped with an NVMe SSD, an Nvidia RTX 3090 GPU and an Intel Core i9-10850K CPU (10 cores at $3.6\,\mathrm{GHz}$). Training took about $25\,\mathrm{min}$ (1,500 s) for each of the perturbation experiments.
# 3. Results
# 3.1. Classification of immune cells (in mixed samples)
Fig. 2 shows the main classification results of the trained network. For the binary classification between T cells and neutrophils in mixed samples, we can report a 5-fold average AUC-ROC of 0.87, an AUC of the precision-recall curve of 0.95 and a validation accuracy of $84.89\%$.
# 3.2. Multi-class classification of other types of immune cells
The multi-class cell classification shown in Fig. 3 generally shows successful classification results. The model achieved an F1 score of 0.689, a precision of 0.697 and a recall of 0.748 (each ranging from 0 to 1), as well as a Matthews correlation coefficient (MCC) of 0.683 (range from -1 to 1). The multi-class accuracy was $52.67\%$ (a random guess would be $16.6\%$), and most of the misclassified examples were B cells that were falsely predicted as $\mathrm{CD8^{+}}$ T cells. As seen in Fig. 3 and supplementary Fig. 1, this multi-class data set was very skewed, with B cells being by far the largest class. Therefore, the F1 score, precision, recall and MCC are more conclusive metrics than the accuracy in this case.
Fig. 2. Classification results from unstained neutrophils and stained T cells in mixture. T cells were isolated, stained with an APC-labeled $\alpha$-CD3 marker and mixed with unstained neutrophils before imaging. The two label-free channels of NADH and FAD were used as input to a deep learning model, while the fluorescence channel of the specific marker was used to derive ground truth annotations for the training. Receiver operating characteristic (ROC) curve, precision-recall (PR) curve, and confusion matrix for the validation examples indicate that the model is able to differentiate both cell types reasonably well when using label-free MPM images as input. All values denote the average across 5 folds.
The limited size of the data set and the greater number of labels made splitting examples across the five validation folds challenging. The same K-fold CV strategy that was used for the binary classification (see above) would have resulted in folds that lacked representation of each class, leading to per-fold performance metrics that likely underestimate the potential performance on larger data. This is a known challenge when splitting imbalanced data sets with multiple classes and few examples. Therefore, we employed a grouped stratification strategy for this multi-class problem. Although this procedure reduced the reported validation performance, we prioritized methodological rigor by enforcing strict group-aware splitting to prevent data leakage - a conservative choice that sacrifices short-term metric optimization for long-term generalizability in this class-imbalanced setting.
# 3.3. Perturbation experiments
Data perturbation When presented with data that contain only a small fraction of central pixels (second bar in Fig. 4a), the model performance drops from 0.87 AUC-ROC and 0.95 PR-AUC (Fig. 4 & the first bar in Fig. 4a) to only 0.75 AUC-ROC / 0.89 PR-AUC. When trained on data with progressively larger circular areas from the center, the performance continuously increases, as expected, approaching a performance similar to that in the unperturbed case. On the other hand, when trained on data in which central pixels were removed, the performance drops sharply to only 0.67 AUC-ROC / 0.83 AUC-PR and only slowly increases when allowing for more pixels. Together, these results indicate that learning heavily relies on the actual cellular information in the center of the image patch, as intended, and is not severely influenced by background pixels, which contain noise and might occasionally include other cells at the edges.
Fig. 3. Classification results from six different immune cells. Each cell type was isolated and imaged separately, resulting in a separate data set for each cell type, so that ground truth annotations were available through that experimental design. Again, a deep CNN model was trained with label-free AF images as input to predict cell type. Multi-class classification results are evaluated by the 5-fold cross-validation confusion matrix and performance metrics. All values denote the average across 5 folds.
Model perturbation The results of our model perturbation tests, shown in Fig. 4b, indicate that both ROC-AUC and PR-AUC steadily increase when increasing the number of trainable parameters. Moreover, this steady increase flattens out and converges at higher capacities of trainable parameters (i.e., between 571,072 and 735,937 trainable parameters), which indicates that the chosen model has an adequate capacity for the given task, and that a larger model is not expected to perform significantly better.
Cell detection is driven by cellular autofluorescence, not by shape Finally, we present the results of training the model on different image channels in Fig. 4c. It can be observed that the performance of a model trained on only one single AF channel (i.e., NADH or FAD, respectively) is already close to the overall performance including all channels. A model that was solely trained on the Dodt signal is close to a random guess (0.5 AUC-ROC).
Fig. 4. Perturbation test results. (a) Data perturbation for the T cell / neutrophil classification. (b) Model perturbation: ROC-curve and PR-curve performance versus the number of trainable parameters. (c) Molecular/channel perturbation: only AF-NADH, only AF-FAD, only FS (Dodt), only AF (NADH & FAD), and all channels (AF & Dodt).
# 4. Discussion
This study was designed to develop deep neural networks for automated identification of specific immune cell types, based on label-free 2-photon induced AF image data. In the binary classification of mixed T cells and neutrophils, we achieved a performance of 0.87 AUC-ROC and 0.95 AUC-PR, which is on par with a recent state-of-the-art method for single-photon induced AF (0.92 - 1 AUC-ROC [33]), despite the fact that we only used three input channels instead of 56 [33] and relied on a tiny data set of only 5,075 single immune cells. The second case of classifying six different major immune cell types also performed reasonably well (0.689 F1 score, 0.697 precision, 0.748 recall, and 0.683 MCC), especially since this task was a more complicated multi-class problem and since even fewer and more imbalanced data were available.
In addition to demonstrating an overall successful use of label-free MPM and DL for immune cell identification, our systematic perturbation tests enabled further investigation of robustness of the classification, as well as of the relative importance of different spatial-molecular patterns. The combination of controlled experiments and architectural choices ensured the model learned generalizable patterns rather than over-fitting to artifacts or background noise in the limited dataset. Moreover, molecular contrast perturbations indicated that learning was significantly more successful if the model had access to the full spatial distribution of molecular two-photon induced AF instead of only the spatial information from the gradient Dodt channel.
The use of elegant data collection strategies allowed us to obtain high-quality data labels without human annotation and thus to bypass manual annotations, which can often be the labor-intensive bottleneck in developing, training and validating new DL models. The use of reference fluorescence markers in combination with AF poses the unique challenge of avoiding spectral overlap, to preserve the specificity of the imaging signals and prevent information leakage. The presented approach of subsequently recording the APC channel at a different excitation can be used for two-photon excitation without overlapping with the natural AF emission. In the future, the use of genetically encoded reporter fluorophores would be a promising addition to provide cell annotations for the development of multiphoton imaging with computational specificity for other cells, like bacteria (e.g., Citrobacter rodentium [31]), or for a finer resolution of different subtypes of immune cells (e.g., $\mathrm{Ccr2^{+}}$/RFP, $\mathrm{Cd68^{+}}$/GFP and Cx3cr1 macrophages [32]). The use of such genetically encoded fluorophores for digital staining has already been shown in the example of digital staining of mitochondria in living cells using correlative imaging [56] and would be most suitable for in vivo imaging of immune cells in 3D tissues.
Following the good scientific practices suggested in Ref. [39], we aim to discuss the overall uncertainty of our approach, which involves the fundamental labeling uncertainty in obtaining the ground truth data annotations and the prediction uncertainty of the model. The latter can be gauged by the standard deviation in performance across the different folds, which is around $1\text{--}3\%$ (0.025 ROC-AUC or 0.013 PR-AUC). The label uncertainty, however, is more difficult to assess. In the multi-class classification experiment, data labels were obtained through the experimental protocol of isolating the respective cells. It has been stated that the purity of these isolated cell samples reached ${>}95\%$ [35], which can be regarded as the upper boundary for the labeling specificity in this experiment. In the case of the binary classification of T cells and neutrophils, the labeling specificity is related to the biochemical binding specificity of the antibody, which is much more difficult to assess quantitatively. It is widely accepted that T cells carry CD3 antigens that bind with anti-CD3 antibodies, while neutrophils, B cells and macrophages do not. However, these statements are usually more qualitative, like "B cells, granulocytic series, and monocytes/macrophages are all CD3-negative." [57] or "the vast majority of mature T cells bear TCR $\alpha\beta$" [57]. Quantitative values for the half-maximal effective concentration (EC50) of certain CD3 antibodies are reported in the literature to range from 3 to $24\,\mathrm{nM}$ [58] or 17 to $161\,\mathrm{nM}$ [58]. However, the actual specificity in percentage (%), i.e., the percentage of selective binding to T cells versus unintended binding to neutrophils, was not determined in this experimental study.
Nevertheless, uncertainty in data labeling is also an extremely challenging problem for conventional, manual annotations, which are often subject to human errors, individual biases, or inter-observer variability [59].

# Abstract

Label-free imaging has gained broad interest because of its potential to omit elaborate staining procedures, which is especially relevant for in vivo use. Label-free multiphoton microscopy (MPM), for instance, exploits two-photon excitation of natural autofluorescence (AF) from native, metabolic proteins, making it ideal for in vivo endomicroscopy. Deep learning (DL) models have been widely used in other optical imaging technologies to predict specific target annotations and thereby digitally augment the specificity of these label-free images. However, this computational specificity has only rarely been implemented for MPM. In this work, we used a data set of label-free MPM images from a series of different immune cell types (5,075 individual cells for binary classification in mixed samples and 3,424 cells for a multi-class classification task) and trained a convolutional neural network (CNN) to classify cell types based on this label-free AF as input. A low-complexity SqueezeNet architecture was able to achieve reliable immune cell classification results (0.89 ROC-AUC, 0.95 PR-AUC for binary classification in mixed samples; 0.689 F1 score, 0.697 precision, 0.748 recall, and 0.683 MCC for six-class classification in isolated samples). Perturbation tests confirmed that the model is not confused by the extracellular environment and that both input AF channels (NADH and FAD) are about equally important to the classification. In the future, such predictive DL models could directly detect specific immune cells in unstained images and thus computationally improve the specificity of label-free MPM, which would have great potential for in vivo endomicroscopy.
# 1 Introduction
Human mobility—the movement of individuals across space and time—is a fundamental aspect of human behaviour, shaping urban dynamics, transportation systems, and public policies [1]. Over the past decade, computational approaches, especially those based on artificial intelligence (AI) techniques such as deep learning, have played an increasingly prominent role in human mobility modelling. Advancing these computational approaches has become a major future direction for human mobility science [29]. Within this research field, predicting an individual’s next location is a key task [19], for which numerous models and algorithms have been proposed. Despite varying architectures, almost all of these models operate on proper location representations. The most widely adopted strategy is to encode locations as dense vector embeddings, which serve as compact and informative inputs for downstream mobility prediction models [16]. However, existing location embedding pre-training methods for individual-level next location prediction typically rely on historical mobility data to learn the co-occurrence or sequential patterns of visited places. This results in several limitations.
First, the resulting location representations do not explicitly encode spatial information (e.g., geographical coordinates), despite it being universally acknowledged as a crucial factor influencing individuals’ travel. This omission could limit the downstream prediction model’s ability to account for the spatial dependencies in mobility behaviour.
Second, the pre-trained embeddings are tied to a fixed set of locations. As new mobility data are collected and fed into the location prediction system, new locations might emerge, either from new users or from existing users who begin visiting previously unseen locations (this is a common behavioural change). Existing models cannot accommodate these “new locations” and thus the downstream prediction performance would be limited. Although one could re-train the whole location embedding model to incorporate these new locations, it incurs extra compute, which is not ideal for real-world prediction systems that favour fast responses.
Moreover, human mobility is also heavily influenced by the urban environment, such as transportation networks, terrain, and land use [3]. Among these spatial contextual features, land use – reflecting urban functions – is particularly relevant [15]. Although studies have shown that capturing the semantic characteristics of locations through points of interest (POIs) can enhance individual mobility prediction [11], existing location embedding approaches fail to effectively incorporate this information.
We argue that addressing these limitations requires location representations that are spatially explicit, semantically informed, and inductive – i.e., capable of generalising to unseen locations. To this end, we propose to apply CaLLiPer [34], a recently developed representation learning method originally designed for learning urban space representations from POIs, to represent locations in the human mobility modelling domain. CaLLiPer enables multimodal representation learning by combining spatial (coordinate) and platial (POI semantics) information through contrastive learning. It employs a location encoder to capture spatial information and aligns the resulting location embeddings with corresponding POI representations generated by a text encoder. As a result, CaLLiPer produces embeddings that are spatially explicit, semantically rich, and inductive by design. It has also been shown in the original paper that, being a multimodal location embedder, CaLLiPer performs better than other state-of-the-art methods in characterising fine-scale urban spaces.
To evaluate the effectiveness of utilising CaLLiPer for location representation in mobility prediction, we conduct extensive experiments on four public human mobility datasets under two distinct settings, i.e., conventional and inductive. The conventional setting aligns with what has been commonly adopted in existing studies, while the inductive setting simulates real-world scenarios where new locations emerge as new users enter the system or existing users alter their routines and visit unfamiliar places. The results show that, in two out of the four chosen datasets, CaLLiPer significantly outperforms competitive baseline models across all evaluation metrics, with a particularly notable advantage under the inductive condition. In the remaining two datasets, although it does not achieve the best performance across all metrics in both settings, the application of CaLLiPer embeddings still produces strong results – either outperforming baselines under the inductive setting or achieving top performance on a subset of the metrics consistently. These experimental results demonstrate the practical value of utilising CaLLiPer-generated embeddings for individuals’ mobility prediction, especially in inductive scenarios.
We summarise our contributions as follows:
• Novel application: We are the first to apply multimodal location encoding—characterised by spatial explicitness, semantic awareness, and inductive capability—to location embedding for individual next location prediction.
• Addressing a practical issue: Our approach tackles the real-world challenge of handling emerging (unseen) locations in next location prediction systems.
• Empirically validated and reproducible: We conduct extensive experiments across conventional and inductive settings to demonstrate the effectiveness of the application of CaLLiPer. The code and data are made publicly available to support reproducibility and provide a benchmark for the research community.
The remainder of this paper is structured as follows: Section 2 reviews related work in individual mobility prediction and location encoding, and explains our rationale for choosing CaLLiPer as the applied method. Section 3 introduces our methodology, covering notations, the problem statement, the methodological framework, and preliminaries concerning the CaLLiPer model and downstream prediction models. Section 4 details our experimental setup, with a focus on the implementation of conventional and inductive settings. Section 5 presents our quantitative and qualitative empirical results. In Section 6, we analyse the fundamental differences between the CaLLiPer-based location embedding method and other approaches, and discuss the implications of our empirical findings. Finally, Section 7 summarises the study and outlines future directions.
# 2 Related Work
# 2.1 Location Embedding for Next Location Prediction
Most next location prediction models require locations to be represented by latent embedding vectors. The most straightforward strategy is to use an embedding layer, which is essentially a lookup table that stores embedding vectors for a location set of fixed size [14, 38]. This embedding layer is typically trained end-to-end with task-specific objectives. However, such embeddings are difficult to transfer to other models and tasks. They are also prone to overfitting problems and struggle to incorporate comprehensive information about locations, such as their semantic meaning or functional use of locations [16].
To tackle these issues, researchers have proposed to pre-train location embeddings using unsupervised or self-supervised objectives to incorporate more general and comprehensive information about locations. Inspired by distributed word representations widely used in the natural language processing (NLP) domain [27], researchers have proposed to treat locations in people’s mobility series as words in sentences, applying Word2Vec models [26] to capture the co-occurrence patterns of locations [37, 41]. Subsequent methods further adapted Word2Vec models to integrate additional information into the embeddings. For example, special binary tree structures were constructed to account for the spatial proximity of locations [8] or the times at which locations are visited [33], and used in the hierarchical Softmax calculation of the CBOW model [26]. More recently, one work leveraged the BERT [6] architecture to derive dynamic latent embeddings for locations based on their contextual neighbours [16] and achieved state-of-the-art performance on two mobile phone signalling datasets. Despite these advancements, existing methods share several common limitations.
With the exception of a few methods like POI2Vec [8], most approaches largely ignore the spatial attributes of locations (such as geographic coordinates), even though such spatial attributes, depicting distance and proximity, actually play a crucial role in people’s travel. Moreover, they fail to consider the semantic context of urban places, which is another important factor shaping human mobility behaviour.
The most prominent issue is their reliance on static, pre-defined location sets. These sets are typically derived from the mobility data itself, and the process varies depending on the dataset type. For general GNSS tracking datasets, this involves identifying individual stay points, followed by the spatial clustering of stay points across all users to define the final set of locations [11]. In POI check-in datasets generated by location-based social network (LBSN) applications, the locations correspond to the POIs contained within the dataset. For existing methods, the delineation of locations—both their number and spatial whereabouts—is fixed during pre-training and remains unchanged at inference, making it difficult to accommodate scenarios where new locations emerge over time. Addressing this limitation is the primary goal of this paper.
# 2.2 Location Encoding
Location encoding refers to the process of embedding point locations into a vector space so that these location embeddings can be readily used in downstream neural network modules [21], with various aims like geographic prior modelling [5, 13, 20, 34] or spatial context modelling [22], etc.
The motivation for location encoding was first comprehensively articulated in [21], in which the authors provided a general conceptual framework that unifies the formulation of location encoding methods. Following their formulation, location encoding methods generally take the form $y = \mathrm{NN}(\mathrm{PE}(\lambda, \phi))$, where a geographical or projected coordinate $(\lambda, \phi)$ is processed through a parametric positional encoding (PE) function and a neural network (NN).
The NN component is usually implemented as a fully connected residual network (FC-Net) or sinusoidal representation network (SirenNet) [31], while PE methods vary, including Wrap [20], Grid and Theory [22], Sphere\* [23], and Spherical Harmonics (SH) [31], etc. Depending on the chosen PE function, location encoding can operate over either planar or spherical geometries at different spatial scales.
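As a toy illustration of the formulation $y = \mathrm{NN}(\mathrm{PE}(\lambda, \phi))$, the following NumPy sketch combines a simplified multi-scale sinusoidal PE with a one-hidden-layer fully connected network. All dimensions, scales, and the encoding details are illustrative, not those of any specific published PE method.

```python
import numpy as np

def positional_encoding(lon, lat, n_scales=8, max_wavelength=360.0):
    """Simplified multi-scale sinusoidal PE(lambda, phi): one sin/cos pair
    per scale and per coordinate."""
    feats = []
    for s in range(n_scales):
        wavelength = max_wavelength / (2 ** s)
        for coord in (lon, lat):
            feats.append(np.sin(2 * np.pi * coord / wavelength))
            feats.append(np.cos(2 * np.pi * coord / wavelength))
    return np.array(feats)

def nn_forward(x, W1, b1, W2, b2):
    """A minimal fully connected NN(.) with one ReLU hidden layer."""
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2                       # location embedding y

rng = np.random.default_rng(0)
pe = positional_encoding(lon=-0.1276, lat=51.5072)   # e.g. central London
d_pe, d_hidden, d_out = pe.size, 64, 32
W1, b1 = rng.normal(size=(d_hidden, d_pe)), np.zeros(d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), np.zeros(d_out)
y = nn_forward(pe, W1, b1, W2, b2)           # y = NN(PE(lambda, phi)), here 32-d
```

Because the encoder is a continuous function of the coordinates, any point location yields an embedding, which is what makes this family of methods inductive.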
Apart from the architecture design of PE and NN, the training of NN is also central to location encoding. Depending on the target features, different training methods have been used. For simple binary classification tasks (e.g., distinguishing land vs. ocean), supervised learning with binary cross-entropy loss is typically used [31]. For more complex objectives, such as aligning locations with images [13] or textual descriptions [34], contrastive learning is employed to integrate imagery or textual modalities into the location encoder.
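The contrastive alignment of location embeddings with another modality can be illustrated with a CLIP-style symmetric InfoNCE loss; this is a generic sketch, not CaLLiPer's exact objective or implementation.

```python
import numpy as np

def info_nce(loc_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss between a batch of location embeddings
    and their paired (e.g. POI text) embeddings; matching pairs share a row."""
    loc = loc_emb / np.linalg.norm(loc_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = loc @ txt.T / temperature        # (B, B) cosine similarity matrix
    labels = np.arange(len(logits))           # positives lie on the diagonal

    def ce(lg):                               # row-wise cross entropy
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (ce(logits) + ce(logits.T))  # average over both directions
```

Minimising this loss pulls each location embedding towards its paired text embedding while pushing it away from the other pairs in the batch.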
Theoretically, location encoding enables continuous embedding of geographic space, allowing for vector representations at every possible point location. This makes it highly inductive—capable of generalising to unseen locations during inference [21, 34].
Location encoding has been applied in a variety of domains, including geo-aware image classification [20], POI classification [22], land use classification and socioeconomic status distribution mapping [34], etc. However, to our knowledge, it has not yet been applied to generate location embeddings for human mobility tasks—an application gap addressed by this paper.
As discussed in the introduction, effective location prediction requires embeddings that are spatially explicit, semantically rich, and inductive. While several location encoding methods meet these criteria, in this study we chose to apply CaLLiPer over other alternatives like SatCLIP [13]. The rationale is that POI data is more suitable for this task. Our justification is threefold:
First, POI data offer unique advantages. Visual features can be ambiguous; for example, two urban areas/streets may look similar but serve different functions. POI data, by contrast, are more assertive and precise in their semantic meanings—a shopping mall cannot be mistaken for an office building, after all. POI-based descriptions also better capture nuanced, people-centric semantics [12, 34]. Moreover, the original CaLLiPer paper [34] has demonstrated its superior performance over other models like Space2Vec [22] and SatCLIP [13] in characterising fine-scale urban spaces—essentially equivalent to "locations" in the urban human mobility context.
Second, spatial resolution and coverage are important considerations. While satellite imagery is useful for constructing location-image pairs at a global scale, it is less effective for fine-grained, local-scale training. It is unlikely for two nearby locations, only metres apart, to have distinct, non-overlapping satellite images. Street-view imagery, while useful for capturing more nuanced urban characteristics, often suffers from spatial distribution bias, especially in open-source datasets, leading to inconsistent coverage across urban areas [7].
Third, major sources of human mobility data, such as LBSNs, already include detailed POI information. This makes POI data a natural and efficient input for models like CaLLiPer.
# 3 Methodology
In this section, we introduce the concepts and notations used in this article, formulate the research problem of pre-training location embeddings, explain the methodological framework, and provide a brief introduction to CaLLiPer and the downstream prediction model.
# 3.1 Notations and Problem Statement
Definition 1 – Individual’s mobility. An individual’s movements during a certain period can be represented by a spatio-temporal trajectory $s$ consisting of sequential visiting records. A visiting record $(u, L, t)$ indicates that user $u$ visited location $L$ at time $t$.
Figure 1: The methodological framework, consisting of pre-training and downstream prediction stages. The focus is on the pre-training, where we experiment with both existing methods and our proposed method (i.e., applying CaLLiPer). The effectiveness of the different methods is indicated by the performance of the downstream location prediction model.
We denote the set of individuals’ trajectories as $\mathcal{S}$, the set of all locations appearing in the dataset as $\mathcal{L}$, and the set of all individuals as $\mathcal{U}$.
Definition 2 – Location. A location $L$ can generally be represented as $L = (l, c, g)$, where $l$ is the location identifier, $c$ encodes its context semantics, such as the surrounding land use, and $g$ denotes the geometry of the location. The delineation of locations, i.e., $g$, varies depending on the type of mobility data. For a GNSS tracking dataset, each location is typically defined as the convex hull of a set of spatially proximate stay points derived from users’ trajectories, whereas for a location-based social network (LBSN) check-in dataset, each location corresponds to the respective POI in those LBSN applications.
Problem Statement – Pre-training location embedding models (for mobility prediction). The aim is to pre-train a parameterised mapping function $\mathcal { F }$ to generate an embedding vector for a target location $L$ based on the available data, e.g., individuals’ trajectories and/or POI data. The effectiveness of the pre-trained location embedding models will be verified by a mobility prediction task.
# 3.2 Methodological framework
Figure 1 presents the methodological framework of this study, demonstrating the complete workflow of the pre-training and downstream application stages.
The objective is not to invent new models, but rather to apply existing ones to address practical issues identified in real-world applications. Pre-training location representations based on the co-occurrence of locations manifested in mobility data is argued to be sub-optimal, particularly as this approach struggles to accommodate new locations. Instead, we hypothesise that CaLLiPer — which pre-trains location representations using general spatial (coordinates) and semantic (textual descriptions) information — is more suitable for location prediction tasks, especially in handling previously unseen locations. To verify whether the hypothesis holds, both the baseline methods and the proposed approach are implemented. The resulting pre-trained location embedding models (whether they are embedding matrices or location encoders) are frozen and subsequently integrated into downstream prediction models to perform the next-location prediction task. The performance of the downstream models, enhanced with the pre-trained location embeddings, serves as an indicator of the effectiveness of the embedding models.
It is important to note that Figure 1 uses LBSN data as the primary data source. LBSN data alone is sufficient to complete the entire process, as it contains both individuals’ mobility information and POIs. However, for other types of datasets that may not include POIs, it becomes necessary to incorporate additional POI data from external sources to facilitate the pre-training of CaLLiPer. This scenario is addressed in the experiments, as detailed in Section 4.1, where Foursquare POI data is integrated to facilitate experiments on the Gowalla-LD and Geolife datasets.
# 3.3 Preliminaries about CaLLiPer and Downstream Prediction Model
3.3.1 CaLLiPer. As our study centres on the application of the recently proposed urban space representation learning model CaLLiPer, we provide a brief introduction to its overall architecture. As Figure 2 shows, CaLLiPer consists of three main components: (1) a location encoder $f ^ { L }$ that embeds individual coordinates into a higher-dimensional space (further details can be found in Appendix A.1); (2) a text encoder $f ^ { T }$ for extracting semantic features from POIs’ natural language descriptions; and (3) a projection layer $f ^ { P }$ that projects the output of the text encoder to match the dimensions of the location embeddings.
In the pre-training stage, the location encoder embeds a batch of $N$ POI coordinates $x ^ { L } \in \mathbb { R } ^ { N \times 2 }$ into location embeddings $z ^ { L } \in \mathbb { R } ^ { N \times d }$ in a $d$-dimensional space. Simultaneously, the corresponding POI textual descriptions $x ^ { T }$ ($N$ sentences with arbitrary length $t$) pass through the text encoder and projection layer and are mapped to text embeddings $z ^ { T } \in \mathbb { R } ^ { N \times d }$ . The mathematical formulations are as follows:
$$
f _ { \Theta ^ { L } } ^ { L } ( x ^ { L } ) = z ^ { L } \in \mathbb { R } ^ { N \times d }
$$
Figure 2: The model framework of CaLLiPer.
$$
f _ { \Theta ^ { P } } ^ { P } ( f ^ { T } ( x ^ { T } ) ) = z ^ { T } \in \mathbb { R } ^ { N \times d }
$$
The parameters of the location encoder and the projection layer, i.e., $\Theta ^ { L }$ and $\Theta ^ { P }$ , are optimised, while the text encoder is frozen and does not receive parameter updates. The simple yet highly effective bi-directional InfoNCE loss is adopted as the training objective [30]:
$$
\begin{array} { c } { { \displaystyle \mathrm { obj } ( \Theta ^ { L } , \Theta ^ { P } ) = - \ \frac { 1 } { 2 N } \Bigg [ \sum _ { i = 1 } ^ { N } \log \frac { \exp ( z _ { i } ^ { L } \cdot z _ { i } ^ { T } / \tau ) } { \sum _ { j = 1 } ^ { N } \exp ( z _ { i } ^ { L } \cdot z _ { j } ^ { T } / \tau ) } } } \\ { { + \sum _ { i = 1 } ^ { N } \log \frac { \exp ( z _ { i } ^ { T } \cdot z _ { i } ^ { L } / \tau ) } { \sum _ { j = 1 } ^ { N } \exp ( z _ { i } ^ { T } \cdot z _ { j } ^ { L } / \tau ) } \Bigg ] } } \end{array}
$$
where $\tau$ is a temperature hyperparameter.
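For concreteness, the bi-directional InfoNCE objective above can be sketched in plain Python. This is a toy illustration operating on lists of pre-computed embedding vectors (assumed L2-normalised); the actual implementation would work on batched tensors in a deep-learning framework:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bidirectional_infonce(z_loc, z_txt, tau=0.07):
    """Bi-directional InfoNCE over N paired location/text embeddings.

    z_loc, z_txt: lists of N d-dimensional vectors. Matching indices form
    positive pairs; all other pairs in the batch act as negatives.
    """
    n = len(z_loc)
    loss = 0.0
    for i in range(n):
        # location -> text direction
        sims_lt = [math.exp(dot(z_loc[i], z_txt[j]) / tau) for j in range(n)]
        loss -= math.log(sims_lt[i] / sum(sims_lt))
        # text -> location direction
        sims_tl = [math.exp(dot(z_txt[i], z_loc[j]) / tau) for j in range(n)]
        loss -= math.log(sims_tl[i] / sum(sims_tl))
    return loss / (2 * n)
```

With perfectly aligned pairs the loss approaches zero, while mismatched pairs are heavily penalised, which is what drives the coordinate and text embeddings into a shared space.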
Once the pre-training is finished, the location encoder can be used in various downstream tasks in a “plug and play” manner, producing the embeddings for any urban spaces without further training, while an additional downstream model (also referred to as downstream predictor) is optimised to obtain the final target features.
3.3.2 Downstream Model. The embedding vectors generated by different location embedding methods constitute the input to a downstream prediction model that predicts individuals’ next location. The downstream location prediction model is typically based on sequence models, e.g., LSTM [10] or Transformer [32]. The next location prediction task has traditionally been formulated as a multi-class classification problem, where the final output is a probability distribution $P ( \hat { l } _ { n + 1 } )$ describing the probability of the user visiting each location at the next time step $n + 1$ . Therefore, regardless of the sequence model architecture, the last layer is commonly a fully-connected (FC) layer followed by a softmax function:
$$
P ( \hat { l } _ { n + 1 } ) = \mathrm { Softmax } ( f ^ { F C } ( h _ { n } ) )
$$
where $f ^ { F C }$ denotes the FC layer and $h _ { n }$ denotes the hidden state at the $n$-th time step.
The trainable parameters of the downstream model are optimised by the multi-class cross-entropy loss (CEL):
$$
\mathrm { CEL } = - \sum _ { k = 1 } ^ { | \mathcal { L } | } P ( l _ { n + 1 } ) ^ { ( k ) } \log \big ( P ( \hat { l } _ { n + 1 } ) ^ { ( k ) } \big )
$$
where $P ( \hat { l } _ { n + 1 } ) ^ { ( k ) }$ denotes the predicted probability of visiting the $k$ th location and $P ( l _ { n + 1 } ) ^ { ( k ) }$ is the one-hot vector representing the ground truth.
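The FC-softmax head and the cross-entropy loss can be illustrated with a minimal plain-Python sketch. The hidden state `h`, weights `W`, and biases `b` below are made-up toy values; a real model would use a deep-learning framework:

```python
import math

def fc_softmax(h, W, b):
    """FC layer followed by softmax: returns P(l_{n+1}) over |L| locations."""
    logits = [sum(w_i * h_i for w_i, h_i in zip(row, h)) + b_k
              for row, b_k in zip(W, b)]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(true_onehot, p_pred):
    """Multi-class CEL between a one-hot ground truth and a prediction."""
    return -sum(t * math.log(p) for t, p in zip(true_onehot, p_pred) if t > 0)
```

Because the ground truth is one-hot, the cross-entropy loss reduces to the negative log-probability assigned to the true location.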
In this paper, we employ MHSA [11] as the downstream model, as it is recently proposed, well-documented, open-source, and based on the Transformer architecture, which has become a common backbone in next location prediction models.
# 4 Experiment
We conduct extensive experiments, following the framework introduced in Section 3.2. In particular, we incorporate the location embeddings generated by different models, including competitive baselines and CaLLiPer, into a common downstream next location prediction model and compare their results to evaluate the effect of CaLLiPer as a location embedding method.
# 4.1 Data and Preprocessing
Mobility data. We use four commonly adopted, publicly available datasets in our experiments: three location-based social network (LBSN) check-in datasets, namely Foursquare check-ins in New York City (FSQ-NYC) and Tokyo (FSQ-TKY) [36] and the Gowalla dataset restricted to London, UK (Gowalla-LD) [4], as well as one GNSS tracking dataset, Geolife [40].
POI data. POI data are required for training CaLLiPer. Since FSQ-NYC and FSQ-TKY already include POI information, no additional sourcing is needed. For Gowalla and Geolife, which lack detailed POI data, we obtained the necessary data for London and Beijing from the Foursquare Open Source Places dataset [9]. While alternative sources such as Overture and OSM are available, we chose the Foursquare dataset due to its detailed classification scheme, business-grade coverage and quality, and ease of access via its APIs.
Data preprocessing. We follow the common preprocessing practices introduced in [11]. For check-in data, we excluded unpopular POIs with fewer than 10 check-ins and filtered out users with fewer than 10 records. For Geolife, the preprocessing procedures include filtering out users with too few data points (fewer than 50 tracking days), identifying stay points, and applying spatial clustering to derive locations. The Trackintel library [24] was used in this process.
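The check-in filtering steps can be sketched as follows. This is a simplified illustration: `checkins` is a hypothetical list of `(user, poi)` records, and the Trackintel-based stay-point pipeline for Geolife is not reproduced here:

```python
from collections import Counter

def filter_checkins(checkins, min_poi=10, min_user=10):
    """Drop POIs with fewer than min_poi check-ins, then drop users
    left with fewer than min_user records."""
    poi_counts = Counter(poi for _, poi in checkins)
    kept = [(u, p) for u, p in checkins if poi_counts[p] >= min_poi]
    user_counts = Counter(u for u, _ in kept)
    return [(u, p) for u, p in kept if user_counts[u] >= min_user]
```

Note that the user filter is applied after the POI filter, so a user can be removed because their favourite POIs were unpopular.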
The basic statistics of the resulting dataset after preprocessing are presented in Table 1.
# 4.2 Baseline Methods
The following location embedding methods are selected as baselines in the experiment. This is a fairly comprehensive and representative list of models, from basic (Vanilla-E2E) to state-of-the-art.
• Vanilla-E2E: A vanilla embedding layer trained end-to-end (E2E), which is essentially a lookup table for a fixed set of locations, as commonly used in prior work [14, 38].
• Skip-gram [26]: A Word2Vec model that has been utilised for modelling mobility trajectories [18].
Table 1: Basic statistics of the mobility datasets after preprocessing. The mean and standard deviation across users are reported.
• POI2Vec [8]: A Word2Vec-based method, which models spatial information by assigning locations to a geographical binary tree.
• Geo-Teaser [39]: Geo-Temporal Sequential Embedding Rank model, which integrates temporal and spatial features through vector expansion and a modified negative sampling strategy.
• TALE [33]: Time-Aware Location Embedding that incorporates temporal information through designing a temporal tree structure for hierarchical softmax calculation.
• CTLE [16]: A context- and time-aware method based on BERT, which generates location embeddings considering the neighbouring context in trajectories.
# 4.3 Conventional and Inductive Settings
The key implementational differences between the conventional and inductive settings lie in the construction of the train, validation, and test sets. Figure 3 illustrates how these sets are constructed for model training and evaluation under the two experimental settings.
Conventional setting. All datasets are split into non-overlapping train, validation, and test sets with a 6:2:2 ratio based on time. Mobility sequences (i.e., users’ location visit sequences) are created using a sliding window of seven days, following empirical findings from [11], which show that models perform best when using the past seven days of data. As such, sequences from the first $60\%$ of tracking days are used for training, and those from the last $20\%$ for testing. For each model, we run the experiment five times and report the mean and standard deviation for each evaluation metric (shown in the left part of Table 2).
Inductive setting. To simulate unseen locations, we modify the conventional data splits. Specifically, we randomly sample $10\%$ of the locations in the training set and denote this subset as $\mathcal { L } ^ { n e w }$ . Then we remove all the mobility sequences that contain any locations in $\mathcal { L } ^ { n e w }$ from the original training and validation sets to form their inductive counterparts. The test set remains unchanged, which means that it contains certain locations that are seen during neither the pre-training phase nor the downstream model training phase. This setup allows us to evaluate how well different location embedding models generalise to new locations.
To account for sampling variability, we repeat this process with five different random samples of $\mathcal { L } ^ { n e w }$ , resulting in five different training and validation sets. For each model, we run five experiments, each corresponding to one of these five data splits. We then report the mean and standard deviation of the evaluation metrics across these five experiments (shown in the right part of Table 2).
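The construction of one inductive split can be sketched as follows. This is a simplified illustration: `train_seqs` and `val_seqs` are hypothetical lists of location-id sequences, and the `seed` parameter stands in for the five random repetitions described above:

```python
import random

def make_inductive_split(train_seqs, val_seqs, frac_new=0.1, seed=0):
    """Hold out a random subset L_new of training locations, then drop every
    training/validation sequence that visits any held-out location.
    Sequences are lists of location ids; the test set stays untouched."""
    rng = random.Random(seed)  # one seed per repetition of the sampling
    locations = sorted({loc for seq in train_seqs for loc in seq})
    n_new = max(1, int(frac_new * len(locations)))
    l_new = set(rng.sample(locations, n_new))

    def keeps(seq):
        return not any(loc in l_new for loc in seq)

    return ([s for s in train_seqs if keeps(s)],
            [s for s in val_seqs if keeps(s)],
            l_new)
```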
Figure 3: The illustration of the process of constructing train, validation and test sets under the two settings.
# 4.4 Evaluation Metrics
We adopted the following commonly used metrics to quantify the predictive performance of the compared models.
Accuracy. Predictions are sorted in descending order based on their probability of being the next location, and Acc@k measures the proportion of times the ground truth location appears within the top-k predictions. In the location prediction literature, this metric is also referred to as Recall@k or Hit Ratio@k. In our experiment, Acc@1, Acc@5, and Acc@10 were reported for evaluation.
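A minimal sketch of Acc@k in plain Python, assuming each prediction is a list of location ids already sorted by descending probability:

```python
def acc_at_k(ranked_predictions, ground_truths, k):
    """Fraction of samples whose true location is within the top-k predictions.

    ranked_predictions: list of location-id lists, sorted by descending
    predicted probability; ground_truths: the true next location per sample.
    """
    hits = sum(1 for preds, true in zip(ranked_predictions, ground_truths)
               if true in preds[:k])
    return hits / len(ground_truths)
```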
Mean reciprocal rank (MRR). This metric calculates the average reciprocal rank at which the first relevant entry is retrieved in the prediction vector:
$$
\mathrm { MRR } = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } { \frac { 1 } { \mathrm { rank } _ { i } } }
$$
where $N$ denotes the number of test samples and $\mathrm { rank } _ { i }$ is the rank of the ground truth location in $P ( \hat { l } _ { n + 1 } )$ (the probability distribution predicted by the downstream model; see Equation 4) for the $i$-th test sample.
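MRR can be computed analogously to Acc@k. This sketch assumes the ground truth always appears somewhere in the ranked list, which holds in the multi-class setup above where every location receives a probability:

```python
def mean_reciprocal_rank(ranked_predictions, ground_truths):
    """Average of 1/rank of the ground-truth location (ranks are 1-based)."""
    total = 0.0
    for preds, true in zip(ranked_predictions, ground_truths):
        rank = preds.index(true) + 1  # position of the true location
        total += 1.0 / rank
    return total / len(ground_truths)
```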
nDCG@k. Normalised discounted cumulative gain (at rank position $k$) measures the ranking quality of a prediction vector by the ratio between the discounted cumulative gain (DCG) and the ideal discounted cumulative gain (IDCG). The calculation of nDCG@k is given below:
$$
\mathrm { nDCG } @ k = \frac { \mathrm { DCG } _ { k } } { \mathrm { IDCG } _ { k } } ,
$$
$$
\mathrm { DCG } _ { k } = \sum _ { j = 1 } ^ { k } \frac { r _ { j } } { \log _ { 2 } ( j + 1 ) } ,
$$
where $r _ { j }$ denotes the relevance value at rank position $j$ . In the context of location prediction, $r _ { j } \in \{ 0 , 1 \}$ , and $r _ { j } = 1$ if and only if the $j$-th item in the list of predicted locations ranked according to $P ( \hat { l } _ { n + 1 } )$ matches the ground truth next location. In our experiment, we report the average nDCG@10 over all test samples.
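With binary relevance and a single relevant item per sample, the ideal ranking places the true location at rank 1, so $\mathrm{IDCG}_k = 1/\log_2(2) = 1$ and nDCG@k reduces to a single discounted term. A minimal sketch:

```python
import math

def ndcg_at_k(ranked_predictions, ground_truth, k):
    """nDCG@k with binary relevance and one relevant item: IDCG_k = 1,
    so the score is 1/log2(rank+1) if the true location is in the top k,
    and 0 otherwise."""
    for j, loc in enumerate(ranked_predictions[:k], start=1):
        if loc == ground_truth:
            return 1.0 / math.log2(j + 1)
    return 0.0
```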
# 5 Results
# 5.1 Model Performance Comparison
The predictive performance of all considered methods is presented in Table 2. In two out of the four datasets, i.e., FSQ-NYC and Gowalla-LD, CaLLiPer significantly outperforms competitive baseline methods across all evaluation metrics under both conventional and inductive settings. On the FSQ-TKY dataset, although CaLLiPer is less competitive under the conventional setting, it achieves the best performance on most metrics (with Acc@1 as the only exception) under the inductive setting. For the Geolife dataset, CaLLiPer consistently achieves the best performance in terms of top-1 accuracy, MRR, and nDCG@10 (three out of the five evaluation metrics) under both conventional and inductive settings.
Overall, CaLLiPer demonstrates the most robust performance, producing the best results in the majority of cases. In contrast, baseline models only occasionally achieve the best performance—for example, on one dataset but not another, under one experimental setting but not the other, or in terms of some evaluation metrics but not others.
Moreover, CaLLiPer shows a particularly notable advantage under the inductive setting. These empirical results demonstrate the potential of applying inductive spatial-semantic location embeddings to improve individual mobility prediction, particularly in scenarios involving previously unseen locations.
# 5.2 Visualising the Learned Location Embeddings
In this subsection, we visualise the learned embedding vectors of locations from the FSQ-NYC dataset in a two-dimensional space using Uniform Manifold Approximation and Projection (UMAP) [25] for non-linear dimensionality reduction. Figures 4 and 5 depict the location embeddings under the conventional and inductive settings, respectively. These visualisations provide an intuitive view of the relationships among locations in the latent embedding space. For the inductive setting, we distinguish between two types of locations using different colours: red for locations in the sampled subset $\mathcal { L } ^ { n e w }$ (those not seen during pre-training) and blue for locations in $\mathcal { L } - \mathcal { L } ^ { n e w }$ (those available during pre-training). Note that all locations in the conventional setting experiment were available during pre-training; they are coloured differently solely for comparison with the inductive setting results.
Figure 4 shows the embeddings under the conventional scenario. As anticipated, the embedding vectors are well-mixed in the latent space across all methods, indicating that the models have effectively learned unified representation distributions over the entire set of locations.
Comparing the visualisation results in Figure 5 with those in Figure 4 helps reveal each model’s ability to generalise to previously unseen locations, i.e., locations in $\mathcal { L } ^ { n e w }$ . It is evident that, for the baseline methods Skip-gram, POI2Vec, and Geo-Teaser, there is a pronounced shift in the distribution of embeddings between the conventional and inductive settings in the latent representation space. Under the inductive setting, the embeddings of new locations generated by these models deviate significantly from those seen during pre-training. This suggests that the emergence of new locations greatly disrupts the learned manifolds [2], which is likely to have adverse impacts on downstream prediction performance. As for the baseline method TALE, it shows a smaller discrepancy between embeddings of new and existing locations under the inductive setting. This improved result might stem from its time-aware objective, which appears to mitigate the divergence between new and seen locations to some extent.
In contrast, the location embeddings learned using CaLLiPer demonstrate great consistency across both settings. The manifolds formed by new and existing locations are well-aligned, indicating CaLLiPer’s strong generalisation capabilities. Notably, the manifolds learned by CaLLiPer have a more fine-grained and clustered structure than those of other baseline methods. This unique pattern might be the result of the location encoding technique employed by CaLLiPer, and similar structures can be found in other works utilising location encoding techniques [13].
# 6 Discussion
Learning effective location representations is foundational to individual-level mobility modelling. Conceptually, locations are the basic spatial units where human activity occurs. Methodologically, their vector representations serve as the input to prediction models. Existing methods and our approach tackle this problem from different perspectives. Next, we discuss their core differences, which helps us understand what truly matters in representing locations.
Existing location embedding methods typically learn representations from human mobility sequences. These approaches rely on co-occurrence patterns captured by word2vec-based architectures. The rationale is that the distributed representations can capture complex relationships between locations. Some studies further suggest that characteristics such as the location function can be implicitly learned from mobility patterns [16].
In contrast, CaLLiPer adopts a task-agnostic approach, learning location embeddings by contrastively aligning spatial coordinates with textual descriptions derived from POI data. Rather than relying on mobility traces, it learns from the inherent structural attributes of urban spaces. CaLLiPer has already demonstrated effectiveness in capturing land use and socio-demographic characteristics [34]. This study further shows that CaLLiPer-generated embeddings are also highly effective for individual mobility prediction, especially under the inductive setting, where locations in the test set are unseen during training.
These fundamental differences result in varied performances in downstream location prediction tasks, which leads us to ponder: what makes a good location representation? What kind of information should be encoded in location vectors? Our findings suggest that for next location prediction, the most valuable signals are not solely indirect co-occurrence patterns, but the inherent characteristics that define a location, namely its spatial coordinates and semantic (platial) attributes.
Table 2: Performance comparison of different embedding methods on next location prediction task. The best and second-best performance are marked in bold and underlined, respectively. For better readability, all metric values are scaled by a factor of $1 0 ^ { 2 }$ . The relative difference (Rel. diff) is also reported, calculated as the improvement or decrease of CaLLiPer’s performance relative to the best competing method for each metric.
Thinking more deeply, location representations can be viewed as part of the “infrastructure” of mobility modelling. Like infrastructure, they should be general-purpose and robust—not tailored to specific tasks or limited user populations. Instead of being trained on narrow objectives derived from mobility traces, they should be grounded in general geographical and semantic features. With such an infrastructure in place, downstream models can specialise in handling specific applications, such as next location prediction, synthetic mobility data generation, or even population flow modelling.
Finally, as access to detailed movement data becomes increasingly restricted due to a growing emphasis on location privacy, the use of publicly available POI data that are not restricted by privacy regulations becomes even more justified as a foundation for learning location representations.

Abstract. Predicting individuals’ next locations is a core task in human mobility modelling, with wide-ranging implications for urban planning, transportation, public policy and personalised mobility services. Traditional approaches largely depend on location embeddings learned from historical mobility patterns, limiting their ability to encode explicit spatial information, integrate rich urban semantic context, and accommodate previously unseen locations. To address these challenges, we explore the application of CaLLiPer, a multimodal representation learning framework that fuses spatial coordinates and semantic features of points of interest through contrastive learning, for location embedding in individual mobility prediction. CaLLiPer’s embeddings are spatially explicit, semantically enriched, and inductive by design, enabling robust prediction performance even in scenarios involving emerging locations. Through extensive experiments on four public mobility datasets under both conventional and inductive settings, we demonstrate that CaLLiPer consistently outperforms strong baselines, particularly excelling in inductive scenarios. Our findings highlight the potential of multimodal, inductive location embeddings to advance the capabilities of human mobility prediction systems. We also release the code and data (https://github.com/xlwang233/Into-the-Unknown) to foster reproducibility and future research.
# 1 Introduction
As scientific software becomes increasingly central to research activities, metadata describing these tools is scattered across a growing number of registries, repositories, package managers, and publication platforms. These platforms differ in scope, stability, and longevity—some emerge or disappear over time—leading to duplicate or outdated records for the same software across different sources.
The resulting discrepancies create ambiguity when integrating metadata: records may share names, developers, or even source code repositories but differ in function, domain, or completeness, making it difficult to determine whether individual records refer to the same software tool.
This ambiguity has practical consequences. Without identity resolution, it becomes harder to compute quality indicators, trace software usage, or enable consistent referencing. It also has a direct impact on identifying the contributions of software developers and, therefore, crediting such contributions beyond traditional peer-reviewed publications.
Traditional approaches to identity resolution have relied on rule-based systems or string similarity measures [4, 12, 21, 8]. While these techniques offer computational efficiency, they often fail in the presence of sparse, inconsistent, or conflicting metadata, which is common in scientific registries where metadata may be auto-generated, partially maintained, or translated across formats. Therefore, more advanced mechanisms are needed to disambiguate software metadata.
Recent advances in large language models (LLMs) have opened new avenues for semantic classification, including entity linking and record disambiguation. Instruction-tuned models such as GPT-3 [5], T5 [17], and FLAN [7], as well as more recent open-weight models like Mistral [9], Llama [19], and Mixtral [10], have demonstrated strong capabilities in tasks requiring natural language understanding and reasoning [11, 14]. These models can follow structured prompts and perform classification with minimal supervision, making them attractive candidates for automating metadata integration in open research infrastructures.
In the context of the OpenEBench Software Observatory1, the use of LLMs offers the opportunity to enhance the process of software identity resolution. This step is crucial for all downstream analysis and observations.
While powerful models exist, integrating them into operational workflows requires balancing predictive quality with inference speed and confidence estimation. The current collection of software records available at the OpenEBench Software Observatory contains around 45,000 unique research software records, with the number of conflicts ranging between 500 and 3,000 cases, depending on pre-filtering assumptions. Despite representing only a small proportion (1–6%) of the total, these cases constitute the most time-consuming and impactful resolution challenges. Addressing them is therefore essential for reliable metadata integration.
This study provides a controlled setting for benchmarking LLMs on disambiguating metadata-based software identities, before scaling to more complex sources such as scientific literature. By evaluating LLM-based methods against a curated gold standard, we offer an interpretable, practically reproducible alternative that complements existing rule-based approaches.
Focusing on high-difficulty cases, we assessed the ability of LLMs to outperform heuristics, how closely they align with human judgment, and what tradeoffs they introduce in terms of accuracy, annotation effort, and scalability. The result is a reusable foundation for integrating semantic resolution of software metadata within the OpenEBench Research Software Observatory.
# 2 Methods
We implemented an evaluation pipeline to assess the performance of different LLMs in disambiguating software metadata records. Each LLM was compared to a human-annotated gold standard using classification metrics, error analysis on the so-called “hard” cases, and model agreement as a proxy for prediction confidence.
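Inter-model agreement can serve as a simple confidence proxy. The sketch below assumes verdicts are collected as plain label strings; the aggregation rule (fraction of models agreeing with the majority verdict) is an illustrative choice, not necessarily the exact one used in the pipeline:

```python
from collections import Counter

def agreement_score(verdicts):
    """Return (majority_label, fraction of models agreeing with it)
    for one record pair.

    verdicts: list of labels such as "same", "different", "unclear".
    The agreement fraction lies in [1/n, 1.0]."""
    counts = Counter(verdicts)
    label, n_top = counts.most_common(1)[0]
    return label, n_top / len(verdicts)
```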
# 2.1 Task Definition
The software identity resolution task was framed as a three-way classification problem: determining whether a pair of metadata records with the same name refers to the same software, refers to different software, or whether it is unclear due to insufficient information.
Each record included fields such as name, description, repository URL, webpage URL, publication, and authors, developers or maintainers. Additionally, the content of the referenced URLs was provided (see Prompting), as reviewing the software’s website and repository is one of the primary methods a human would use to make a resolution decision.
# 2.2 Gold Standard Construction
Dataset Selection and Sampling. To evaluate model performance, a representative set of 100 ambiguous software metadata cases was randomly sampled and manually annotated from a pool of 555 conflicting pairs identified when applying traditional methods.
The metadata was originally collected from various sources and homogenized after extraction to enable consistent comparisons. Ambiguous cases were identified as those where metadata records with the same name linked to different URLs (e.g., project websites or registries), or where records with different names pointed to the same URL (excluding source code repositories, which are considered a strong indicator of shared identity).
The initial pool of conflicting pairs was obtained after applying a series of assumptions to automatically resolve less ambiguous cases (e.g., considering records with matching names and non-repository URLs as the same software). Each remaining pair was identified as a potential identity conflict and assigned a unique identifier.
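The conflict-identification heuristic described above can be sketched as follows. The record keys `name`, `url`, and `is_repo_url` are hypothetical simplifications of the homogenized metadata schema:

```python
def find_conflicts(records):
    """Flag potential identity conflicts between metadata records.

    A pair is conflicting if the names match but the URLs differ, or the
    names differ but a non-repository URL is shared (shared source code
    repositories are treated as a strong signal of shared identity)."""
    conflicts = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a, b = records[i], records[j]
            same_name = a["name"] == b["name"]
            same_url = a["url"] == b["url"]
            if same_name and not same_url:
                conflicts.append((i, j))
            elif not same_name and same_url and not a["is_repo_url"]:
                conflicts.append((i, j))
    return conflicts
```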
Annotation Process. Each record pair was annotated by a single human annotator with a verdict indicating whether the two records referred to the same software, different software, or were unclear.
The unclear label was used when the annotator could not decide, typically due to broken or missing URLs combined with vague or generic metadata. Each confirmed case also received a confidence flag (low, medium, or high) to support stratified performance analysis.
The annotator had access to the full content of the associated URLs, which could be reviewed in order to make informed judgments. Additionally, a brief rationale (one or two sentences) was noted for each verdict to provide transparency and traceability.
Annotation Metadata and Effort Tracking. All annotated cases were organized into a spreadsheet for readability, and the time dedicated to the annotation process was measured to assess the human effort involved.
Sampling Constraints and Class Imbalance. The final gold standard is not class-balanced. In practice, ambiguous metadata records referring to the same software are considerably more common than genuinely different records that share the same name.
We considered two alternatives to mitigate this imbalance: (1) continuing manual annotation until at least 50 cases were identified, or (2) fabricating negative examples by pairing unrelated records and modifying their names to create artificial naming conflicts.
The first option was prohibitively time-consuming, while the second would have required generating synthetic URL content aligned with the fabricated names—a complex task for both human annotators and LLMs alike.
Given these challenges, we chose to sample exclusively from real ambiguous cases and to accept the resulting class imbalance.
# 2.3 Models and Inference
We benchmarked diverse instruction-tuned LLMs to evaluate their capacity for software identity resolution. Our selection criteria focused on three key dimensions: (1) diversity in model architecture and size, (2) openness and accessibility for future deployment, and (3) relevance to current trends in language model development.
Model Selection. The model pool included a range of open-weight LLMs, from compact 7B parameter architectures to larger sparse mixture-of-experts (SMoE) configurations. Open LLMs were prioritized for their transparency, accessibility, and alignment with the principles of FAIR research infrastructures. With the exception of one base model (Ministral 8B), all models used in the study were instruction-tuned, even if referred to by their short names for brevity (see Table 2 for a detailed technical overview). The inclusion of a non-instruction-tuned model also allows for an indirect view of the impact of instruction tuning on disambiguation performance. A single proprietary LLM, OpenAI GPT-4o, was also included to serve as a performance reference and to contextualize the results under optimal and closed-source conditions.
Inference Setup. All model inferences were conducted using publicly available inference APIs, specifically the Hugging Face Inference API [1] and OpenRouter [2]. This approach enabled a uniform and scalable benchmarking setup without the complexity of local deployment, which was considered out of scope for the benchmark phase. Inference was handled programmatically using consistent, chat-style prompt formatting across all LLMs.
All model outputs were generated using the same decoding parameters: temperature = 0.2, top_p = 0.95, and max_new_tokens = 512. These settings encourage focused but slightly variable outputs, which can improve response quality in open-ended tasks like metadata disambiguation. The return_full_text flag was set to False to isolate the generated response. Random seeds were not explicitly set, and for some model interfaces, seed control may not have been available; therefore, outputs are not strictly reproducible, although model behavior was observed to be stable across runs. The exact model identifiers used via each inference API are listed in Table S1. These correspond to the specific model aliases exhibited by the Hugging Face Inference API or OpenRouter at the time of evaluation. Note that some aliases may map to custom checkpoints or provider-specific variants and may change over time.
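The decoding setup described above can be sketched as a request payload. The helper name and flat payload layout are illustrative; real APIs nest these fields differently (e.g., under a `parameters` key for the Hugging Face text-generation endpoint):

```python
# Sketch of the paper's decoding configuration; the helper and the flat
# payload layout are illustrative, not the authors' actual client code.
def build_generation_payload(messages):
    """Assemble a chat-style request with the shared decoding settings."""
    return {
        "messages": messages,
        "temperature": 0.2,         # focused but slightly variable sampling
        "top_p": 0.95,              # nucleus sampling cutoff
        "max_new_tokens": 512,      # cap on newly generated tokens
        "return_full_text": False,  # return only the generated response
    }

payload = build_generation_payload([{"role": "user", "content": "Compare the two records."}])
```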
Prompt Standardization. To ensure comparability, a single prompt format was used for all LLMs. Metadata records and content were passed as structured Markdown blocks within chat messages (see Prompting), and the output was expected to be in a JSON format containing a verdict, confidence, and explanation. Outputs were parsed and validated automatically; failures to conform to the expected format resulted in the record being marked as skipped.
Table 2 outlines technical specifications including parameter count, model architecture, context window size, release date, and whether the model has been instruction-tuned.
Inference Timing. We recorded the total time elapsed for each model and metadata pair from request submission to response receipt, capturing end-to-end latency. When available, internal latency metrics reported by the API (e.g., from OpenRouter) were also logged, allowing us to distinguish between model processing time and network overhead.
These timing measurements were used to assess each LLM’s operational cost and compare automated and manual annotation effort. They also informed discussions of model responsiveness and feasibility for integration into Extract, Transform, and Load (ETL) workflows.
Table 1: Openness and accessibility of evaluated models.
# 2.4 Prompting
Prompt Format and Structure. All LLMs were prompted using a standardized instruction template followed by structured content blocks representing metadata and contextual information for each record (see Listing S1 for the full prompt). Prompts were formatted in chat style, as required by both the Hugging Face Inference API and OpenRouter API, with one message per metadata record followed by a final instruction message.
The prompt was structured to align with the expected behaviour of instruction-tuned LLMs [15]. For consistency across models and interfaces, all messages, including the initial task description, were passed as "user" messages; no "system" message was used. This design was chosen to ensure portability across providers and was found to yield reliable behaviour in practice. Each prompt consisted of:
Table 2: Technical characteristics of evaluated models.
• An initial instruction message describing the identity resolution task and the expected output format (verdict, confidence, and explanation).
• A user message containing the metadata for the first record, including fields such as name, description, repository URL, webpage, authors, and publications.
• A second user message with the metadata of the second record.
• Additional user messages containing the cleaned content of associated URLs (e.g., repository README or project website), one per record.
• A final instruction message reminding the model of the required output format and indicating that it can now begin reasoning and respond.
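The message layout enumerated above can be sketched as follows. The helper and argument names are hypothetical, and metadata is passed as plain text here rather than the Markdown code blocks used in the real prompt (see Listing S1 for the actual template):

```python
def build_prompt_messages(task_instruction, record_a, record_b, url_contents, final_instruction):
    """Assemble the chat-style prompt: one initial instruction, one message per
    metadata record, one per cleaned URL content, and a closing reminder.
    All messages use the 'user' role, as in the paper."""
    messages = [{"role": "user", "content": task_instruction}]
    for record in (record_a, record_b):
        # In the real prompt, metadata is rendered inside a Markdown code block;
        # plain text is used here for brevity.
        messages.append({"role": "user", "content": f"Metadata record:\n{record}"})
    for content in url_contents:
        # Cleaned README / project-website content, one message per record.
        messages.append({"role": "user", "content": content})
    messages.append({"role": "user", "content": final_instruction})
    return messages
```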
Metadata and Context Handling. Metadata records were rendered as nested dictionaries within Markdown code blocks to preserve structure and improve interpretability. All fields were included where available. URL content was placed after the metadata and extracted using a combination of tools:
• Generic websites: Scraped using Playwright[13], with non-relevant HTML elements removed via BeautifulSoup[18]. The result was converted to Markdown, preserving basic structure and links.
• Structured sources: For GitHub, GitLab, Bitbucket, and PyPI, dedicated extractors using their respective APIs were implemented. For SourceForge, tailored HTML parsers were created to extract meaningful sections using Playwright and class-based tag filtering.
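As a rough, stdlib-only stand-in for the Playwright + BeautifulSoup pipeline (which is not reproduced here), the following sketch strips non-content HTML elements and keeps the visible text:

```python
from html.parser import HTMLParser

class ContentExtractor(HTMLParser):
    """Illustrative substitute for the scraping pipeline: drops script/style/nav
    elements and collects the remaining visible text."""
    SKIP = {"script", "style", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside an element we want to ignore
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html):
    parser = ContentExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)
```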
Prompt Design Strategy. Token length was monitored during prompt construction, but no truncation was required. To prevent LLMs from overfitting to specific examples, we avoided few-shot prompting. Including a concrete output sample in the prompt was likewise discarded, due to excessive mimicry in model responses; instead, a more abstract reminder of the required output structure was used.
The prompt was iteratively refined using Mistral 7B as a reference model, as smaller LLMs tend to be more sensitive to prompt design. Refinements included:
• Explicit guidance on reasoning strategy.
• A reminder that the records may share the same name but name similarity alone is not a reliable resolution signal.
• Stricter formatting cues for the expected JSON output.
Response Validation. Model responses were parsed automatically. If the response failed to produce a valid JSON object or omitted mandatory fields, the record was marked as skipped, though raw outputs were retained for manual inspection and error analysis.
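The validation step might look like the following sketch. The fence-stripping cleanup and the exact field names mirror the description above, but the implementation is an assumption rather than the authors' parser:

```python
import json

# Mandatory fields per the expected output format described in the paper.
REQUIRED_FIELDS = {"verdict", "confidence", "explanation"}

def parse_model_response(raw_text):
    """Return the parsed response dict, or None to mark the record as skipped."""
    try:
        # Tolerate responses wrapped in a Markdown code fence (hypothetical
        # cleanup step: strip backticks and a leading 'json' language tag).
        cleaned = raw_text.strip().strip("`").removeprefix("json").strip()
        parsed = json.loads(cleaned)
    except ValueError:  # includes json.JSONDecodeError
        return None
    if not isinstance(parsed, dict) or not REQUIRED_FIELDS <= parsed.keys():
        return None  # missing mandatory fields -> skipped
    return parsed
```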
# 2.5 Evaluation Metrics
To evaluate model performance on the software identity resolution task, we combined standard classification metrics with focused error analyses designed to capture the semantic and practical challenges specific to this benchmark. Our goal was not only to assess overall predictive accuracy, but also to understand how closely the LLMs’ behavior aligns with human reasoning in cases requiring nuanced interpretation.
Core Metrics. We calculated the following core metrics: accuracy, macro-averaged F1-score, macro-averaged precision, and macro-averaged recall. Accuracy provided a straightforward measure of overall correctness. In macro-averaging, the metric (e.g., precision, recall, or F1) is first computed independently for each class and then averaged across classes, giving equal weight to each regardless of how often it appears in the data. For example, to compute the macro F1-score, we first calculated the F1-score [20, 6] for each of the three target labels (“same”, “different”, and “unclear”), and then took the unweighted average of these three values. This approach ensures a balanced assessment even in the presence of class imbalance.
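The macro-averaging just described can be made concrete with a small plain-Python sketch (in practice, scikit-learn's `f1_score` with `average="macro"` computes the same quantity; the label set here follows the paper):

```python
def macro_f1(y_true, y_pred, labels=("same", "different", "unclear")):
    """Macro-averaged F1: per-class F1 computed independently, then the
    unweighted mean across classes."""
    f1_scores = []
    for label in labels:
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        fp = sum(p == label and t != label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)
```

Because every class contributes equally, a systematic failure on a rare class such as “unclear” pulls the macro average down sharply, exactly as observed in the results.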
While macro-F1 is widely used, it has also been criticized as a non-representational measure, since averaging harmonic means does not preserve meaningful mathematical properties [16]. We include it here as a pragmatic indicator of balanced class-level performance.
To address the limitations of aggregate metrics—such as their tendency to hide how well the model performs on each individual label and whether it favors precision over recall—we also computed per-class precision and recall, as well as confusion matrices for each model. These more detailed evaluations provide a clearer picture of strengths and weaknesses across the different classes. For all reported metrics except confusion matrices, we calculated 95% confidence intervals using bootstrap resampling with 1,000 iterations, sampling from the set of cases that were resolved by each model or proxy.
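A percentile-bootstrap confidence interval of the kind described (1,000 resamples, 95% level) can be sketched as follows; the implementation details, including the fixed seed, are illustrative:

```python
import random

def bootstrap_ci(scores, iters=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a mean over per-case scores
    (e.g. 1 = correct prediction, 0 = incorrect)."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(scores) for _ in scores) / len(scores)
        for _ in range(iters)
    )
    lower = means[int((alpha / 2) * iters)]          # 2.5th percentile
    upper = means[int((1 - alpha / 2) * iters) - 1]  # 97.5th percentile
    return lower, upper
```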
Difficulty-Sensitive Evaluation. To probe model robustness and alignment with human reasoning, we stratified the evaluation by difficulty. Each case in the gold standard was annotated by a single annotator with a verdict and a confidence rating. For evaluation purposes, we grouped cases into either “hard” or “easy” according to the degree of identity resolution difficulty. Hard cases were those where the annotator either selected the “unclear” label or marked their verdicts as low-confidence. Easy cases were those for which the annotator expressed medium or high confidence in assigning the “same” or “different” label.
While three confidence levels were recorded during annotation, this binary grouping offers a more conservative and robust distinction, given the subjectivity inherent in single-annotator assessments. This distinction allowed us to examine whether LLMs struggle in situations that challenge humans. Their performance on hard cases served as an indicator for semantic reliability and alignment with human evaluative strategies—an important trait of automated decisions.
For each model, we computed the error rate separately on the easy and hard subsets, defined as the proportion of incorrect predictions within each group. To assess the uncertainty around these estimates, we computed 95% confidence intervals using stratified bootstrap resampling (1,000 iterations). We then tested for a statistically significant difference between error rates on hard and easy cases using a bootstrap hypothesis test on the difference in means. This allowed us to identify LLMs that were significantly more likely to make errors on hard cases, while accounting for differences in sample size between subsets.
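One way to realize the stratified bootstrap comparison is sketched below. It approximates a one-sided p-value as the share of resamples in which the hard-case error rate does not exceed the easy-case one; treating this share as the p-value is an assumption about the exact test used:

```python
import random

def bootstrap_error_diff_p(hard_errors, easy_errors, iters=1000, seed=0):
    """Stratified bootstrap test sketch: resample within each difficulty
    stratum and count resamples where the hard-case error rate is not higher.
    Inputs are 0/1 lists (1 = incorrect prediction)."""
    rng = random.Random(seed)
    not_higher = 0
    for _ in range(iters):
        hard = [rng.choice(hard_errors) for _ in hard_errors]
        easy = [rng.choice(easy_errors) for _ in easy_errors]
        if sum(hard) / len(hard) <= sum(easy) / len(easy):
            not_higher += 1
    return not_higher / iters  # small value -> hard cases significantly harder
```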
# 2.6 Agreement-Based Decision Proxy
In addition to evaluating individual model predictions, we introduced an agreement-based proxy for high-confidence automated resolution. A prediction was accepted automatically only when all top-performing LLMs agreed; cases with disagreement were deferred to human review. This proxy was evaluated using the same metrics as the individual LLMs—accuracy, macro-averaged F1-score, macro-averaged precision, and macro-averaged recall—and its performance was reported in Section 3.3.
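The accept-or-defer rule reduces to a few lines; the return convention (verdict plus a routing tag) is illustrative:

```python
def agreement_proxy(predictions):
    """Accept a verdict only when every panel model agrees; otherwise defer
    the case to human review. `predictions` maps model name -> verdict
    ('same' / 'different' / 'unclear')."""
    verdicts = set(predictions.values())
    if len(verdicts) == 1:
        return verdicts.pop(), "auto"
    return None, "deferred"
```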
Table 3: Composition of the agreement proxies we assessed.
We selected the three best-performing LLMs overall and included Mixtral-8x22B due to its human-like behavior of performing significantly better on easy cases than on hard ones, which makes it a useful indicator of ambiguity (Table 3). Other combinations were not considered, as they either involved lower-performing LLMs or showed inferior performance as proxies, reducing their practical utility.
# 3 Results

# 3.1 Gold Standard Distribution

The gold standard comprised 100 software metadata cases selected for their ambiguity. As shown in Figure 1, the majority of records were labelled as “same”, reflecting the observation that duplicated software names are more frequently associated with the same entity than with unrelated projects. Fewer records were labelled “different”, and an even smaller subset received the “unclear” label, typically due to missing URLs or vague metadata.

Annotator confidence followed a skewed distribution, with most cases receiving a high rating and a smaller number labelled low or medium. The joint distribution (Figure 1, right panel) revealed that the “same” verdicts tended to be rated with higher confidence, while “different” cases showed more variability. Unclear cases were not assigned a confidence rating, since no actual decision was made and confidence scores apply only to resolved cases.

# 3.2 Language Model Performance

Figure 2 summarizes model performance in terms of accuracy and macro F1-score. All metrics were computed over non-skipped records only (see Table S2 for the number of cases resolved by each LLM).

Top-performing LLMs, including Llama 4 Scout, GPT-4o, Mistral 7B, and Mixtral 8x7B, achieved bootstrap-estimated accuracies with means above 0.89. The lower bounds of their 95% confidence intervals remained above 0.83, indicating consistently high accuracy across resampled subsets. In contrast, macro F1-scores for these LLMs ranged from 0.57 to 0.61, with wider confidence intervals. This discrepancy reflects the influence of class imbalance and the systematic failure across all LLMs to correctly classify the three “unclear” cases, which consistently yielded zero precision and recall. As a result, macro-average performance—especially recall—is penalized, highlighting the challenge these LLMs face in capturing rare or ambiguous cases. Meanwhile, other LLMs such as OpenChat, Llama 3.3, and Ministral 8B showed weaker performance, particularly in macro F1, suggesting difficulty handling all class types.

In addition to the core metrics presented in the main text, we report per-class precision and recall in the supplementary material (Table S3). These reveal complementary strengths and weaknesses across LLMs. Llama 4 Scout exhibited the highest macro precision and recall, with perfect recall on the cases labelled as “same” and perfect precision on cases labelled as “different”. Mixtral 8x7B showed the highest precision for the “same” cases. GPT-4o also performed well, achieving a balanced profile across both the “same” and “different” classes. As noted above, all LLMs failed to correctly identify “unclear” cases, indicating that reporting semantic uncertainty remains challenging and may require separate handling in production settings. Detailed confusion matrices for all LLMs are presented in Figure S1.
Figure 1: Distributions of human annotations in the gold standard. Left: class distribution across the three possible verdicts (same, different, unclear). Center: distribution of annotator confidence levels across all annotated cases. Right: joint distribution of verdict and confidence, restricted to entries labeled as same or different. Unclear cases were excluded from confidence scoring.
Figure 2: Multiclass evaluation metrics by model. Each bar shows accuracy and macro-F1 for a given language model.
Error rates were generally higher on cases labeled as “hard” by human annotators, but in most LLMs the difference compared to “easy” cases was small and not statistically significant (Figure 3). Confidence intervals for hard cases tended to be wider, reflecting their smaller number in the dataset. Only a subset of LLMs—such as Mixtral 8x22B, Ministral 8B, and Llama 3.3—showed a significant increase in error rate on hard cases (p < 0.05, bootstrap test), and these were also among the LLMs with the highest overall error rates. This suggests that sensitivity to disambiguation difficulty is more pronounced in less accurate LLMs, while stronger LLMs maintained more consistent performance across difficulty levels.
Figure 3: Comparison of model performance on “hard” vs. “easy” disambiguation cases. The left panel shows error rates with 95% bootstrapped confidence intervals; each dot represents the mean error rate for a given case type (easy or hard). The right panel displays the corresponding number of errors on each case type. Asterisks (\*) indicate LLMs for which the error rate on hard cases was significantly higher than on easy cases (p < 0.05, bootstrap test).
# 3.3 Agreement-Based Proxy Evaluation
Among the proxies (Figure 4), Proxy I (Llama 4 Scout + Mixtral 8x22B) achieved the strongest overall performance, with the highest accuracy (0.965; 95% CI: 0.930-1.000) and macro F1-score (0.626; 95% CI: 0.581-0.667). It also yielded perfect precision on the “different” class and high precision (0.858; 95% CI: 0.920-1.000) on the “same” class, and achieved the highest recall on the “different” class (0.822; 95% CI: 0.647-1.000), while issuing verdicts for 86 out of 100 benchmark cases. This makes Proxy I the most precise and semantically reliable configuration tested. In contrast, Proxy V (Llama 4 Scout + Mixtral 8x7B) offered slightly lower performance but significantly higher coverage, producing confident verdicts for 94 out of 100 cases. With an accuracy of 0.958 (95% CI: 0.926-0.989) and a macro F1-score of 0.611 (95% CI: 0.559-0.654), Proxy V delivered a strong balance between reliability and coverage. Like Proxy I, it also attained high precision for both “same” and “different” classes, with a slightly reduced recall on “different” (0.756; 95% CI: 0.562-0.938). The remaining proxies illustrated how different pairing strategies—whether based on difficulty sensitivity, architecture similarity, or model size—affect the precision-coverage trade-off. Proxy IV, for instance, combined two smaller LLMs (Mistral 7B + Mixtral 8x7B) and achieved the highest number of agreement cases (93) with commendable precision, although its recall on “different” cases was the lowest of all proxies. Taken together, these results suggest that model agreement can be a highly effective proxy for prediction confidence, and that pairing complementary LLMs—one high-performing and one conservative—offers a robust basis for selective automation in metadata resolution tasks.
Figure 4: Performance and coverage of agreement-based proxies. Each point shows the accuracy and macro F1-score of a proxy with 95% bootstrap confidence intervals. The rightmost panel shows the percentage of cases deferred because there was a disagreement between the LLMs. All metrics are computed only over non-deferred cases.
# 3.4 Annotation Time: Human vs Model
The comparison in Figure 5 shows that LLMs completed the annotation task significantly faster than humans. While the absolute latency per model varies depending on the inference provider and network routing, the overall gap between human and automated annotation times remains substantial. The figure also includes two proxy strategies—Proxy I and Proxy V—which combined model agreement with fallback to human annotation in 14% and 6% of cases, respectively. These proxy methods offer a balance between automation and human oversight while still yielding considerable reductions in annotation time compared to fully manual curation.
# 4 Discussion
Intensive data-driven research activities rely heavily on software. However, little attention has been paid to its quality and sustainability. Cataloguing software is a necessary step to understand current practices and propose targeted actions to improve its quality and, therefore, contribute towards its sustainability. However, metadata describing software tends to be scattered across multiple sources and is often incomplete or outdated. Thus, metadata-based identity resolution mechanisms are needed as part of the cataloguing efforts.
The results of this study highlight the potential of instruction-tuned LLMs to support accurate and scalable identity resolution of software metadata. Several LLMs strongly aligned with human judgment, achieving accuracy above 89% and macro F1-scores approaching or exceeding 0.60. These results suggest that even in tasks requiring semantic interpretation of sparse or conflicting metadata, modern LLMs can perform reliably well with minimal supervision.
Performance varied across LLMs and evaluated classes. While larger LLMs and those trained with reinforcement-based alignment techniques (e.g., GPT-4o, Llama 4 Scout) often performed well, the overall difference in performance between large and small LLMs was less pronounced than expected. Surprisingly, several smaller open-weight LLMs (e.g., Mistral 7B, Mixtral 8x7B) approached or matched the accuracy and agreement scores of significantly larger LLMs. This suggests that, for this classification task, model size alone does not guarantee superior performance and that smaller LLMs may already encode sufficient reasoning capability when given structured input and clear instructions. It also highlights the potential gains to be made by explicitly tailoring prompts for smaller LLMs, which could further narrow the performance gap in practice.
Figure 5: Extrapolated cumulative annotation time as a function of dataset size for humans, Llama 4 Scout, and proxy methods. All time estimates are extrapolated from a shared gold standard of 100 annotated records. The human curve is based on measured total annotation time across those 100 cases, with a 95% confidence interval shown as a shaded area. Llama 4 Scout time is derived from average end-to-end latency per record across the same set. Proxy I and Proxy V simulate agreement-based workflows, combining Llama 4 Scout with Mixtral 8x22B and Mixtral 8x7B, respectively, with fallback to human annotation in 15% (Proxy I) and 6% (Proxy V) of cases. No full-scale annotation was performed beyond the initial 100; curves represent linear projections.
In contrast, the lowest performance was observed for Ministral 8B, the only non-instruction-tuned model in the benchmark. This is consistent with the central role that instruction tuning plays in enabling models to effectively follow task prompts and produce structured, goal-directed outputs.
Performance differences between “easy” and “hard” cases—based on human annotation—were generally modest across models, and in most cases not statistically significant. This suggests that stronger LLMs were able to maintain robustness even when facing more ambiguous or challenging examples. However, three models stood out for showing a significantly higher error rate on hard cases: Mixtral 8x22B, Llama 3.3, and Ministral 8B. Among them, Ministral 8B is particularly notable. Despite being the lowest-performing model overall—likely due to being the only non-instruction-tuned model—it was also the one whose performance most closely mirrored human-perceived difficulty. This alignment suggests a degree of interpretability that is often absent in more capable but opaque models. Given its compact size and transparent behavior, it may be worthwhile to explore the potential of an instruction-tuned version of Ministral 8B, which could improve its effectiveness while preserving its apparent sensitivity to task complexity.
The results also demonstrate that model agreement can serve as a highly effective mechanism for automating software identity resolution with high precision. The strongest proxy overall, Proxy I (Llama 4 Scout + Mixtral 8x22B), combined a high-performing model with a difficulty-sensitive model. Mixtral 8x22B was one of the few LLMs for which the error rates were statistically correlated with human-labeled difficulty, suggesting that it was sensitive to semantic ambiguity. In contrast, Llama 4 Scout showed overall high performance and confidence, with no such correlation. This combination—pairing difficulty-awareness with decisive accuracy—proved especially effective. Proxy I achieved the highest accuracy (0.965), macro F1-score (0.941), and Cohen’s Kappa (0.882), while maintaining perfect precision on both “same” and “different” cases. Its superior recall on different records (0.824) further underscores its value in reducing false matches, a key requirement for robust metadata integration.
However, Proxy V (Llama 4 Scout + Mixtral 8x7B) offers a compelling alternative for large-scale deployments. It returned verdicts for 94 out of 100 cases, the highest among the high-performing proxies, while still achieving excellent accuracy (0.957) and perfect precision. Although slightly less conservative than Proxy I, its broader coverage makes it particularly attractive for operational settings that aim to automate as many cases as possible while maintaining trustworthiness.
The results also highlighted how proxy composition influences behavior. Pairing LLMs with complementary behavior and architecture—such as two instruction-tuned LLMs of different types, like a dense decoder (Llama 4 Scout) and a sparse mixture-of-experts model (Mixtral)—appears to produce proxies that are both selective and semantically precise.
Importantly, all proxies failed to identify any unclear cases. This limitation points to a broader challenge in automating ambiguity recognition and underscores the need for alternative strategies such as abstention-aware prompting, uncertainty modeling, or explicit deferral mechanisms.
Overall, these findings suggest that agreement-based proxies—particularly when carefully constructed from complementary LLMs—can deliver near-human precision on a substantial portion of cases. This approach offers a scalable and reliable mechanism for automated metadata integration in FAIR-aligned software observatories.
Although LLMs are not instantaneous, their annotation speed is orders of magnitude faster than manual curation, even when accounting for network latency and API overhead. This efficiency is critical for assembling and scaling up the OpenEBench Software Observatory’s software collection. However, such speed comes with computational costs. LLMs like Llama 4 Scout, and especially Mixtral 8x22B, require substantial resources, including high-memory GPUs and parallel infrastructure, which can limit accessibility and raise both environmental and economic concerns. To mitigate this, a pragmatic approach is to reserve LLM inference for cases that remain unresolved after applying lightweight heuristics or string-matching rules. This triage strategy reduces unnecessary model calls while preserving annotation quality. Proxy strategies like Proxy I and Proxy V further support the feasibility of hybrid human-AI workflows by maintaining annotation quality through selective deferral, without sacrificing scalability. These results highlight the practical viability of integrating LLMs into metadata workflows, enabling both speed and precision at scale.
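Such a triage could be sketched as below. The repository-URL rule is a deliberately naive placeholder heuristic (it ignores forks and mirrors), not one proposed in the paper:

```python
def triage(record_a, record_b, llm_resolver):
    """Hypothetical triage: apply cheap string rules first and call the LLM
    resolver only for cases the heuristics cannot settle."""
    url_a = record_a.get("repository")
    url_b = record_b.get("repository")
    if url_a and url_b:
        # Identical repository URLs almost certainly denote the same software;
        # distinct URLs are treated as different here, a simplification.
        verdict = "same" if url_a == url_b else "different"
        return verdict, "heuristic"
    # No decisive metadata signal: fall back to the (expensive) LLM call.
    return llm_resolver(record_a, record_b), "llm"
```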
Despite these promising results, there remains room for improvement, both in model prompting and input preparation. Some prediction errors appear to stem from how metadata is structured or presented. Future iterations could explore reformatting metadata by replacing code-style nested dictionaries with natural-language-like field lists (e.g., Name: diamond, Version: 1.03), improving readability and alignment with instruction-tuned model expectations. Systematically removing redundant or irrelevant fields may also reduce noise and cognitive load. On the content side, website readability could be enhanced by integrating tools like Postlight Parser [3] alongside Playwright and BeautifulSoup, enabling better extraction of relevant information from semi-structured sources.
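The suggested field-list rendering could look like this (hypothetical helper; empty fields are dropped, as also suggested above):

```python
def fields_to_lines(record):
    """Render metadata as readable 'Field: value' lines instead of a nested
    code-style dictionary, skipping empty fields to reduce noise."""
    return "\n".join(f"{key.capitalize()}: {value}" for key, value in record.items() if value)
```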
To further improve system performance and applicability, we also plan to refine prompts for both small and large LLMs, expand the annotated dataset through targeted manual labeling, and improve preprocessing workflows. We plan to fine-tune the LLMs using cases that are deferred by the agreement proxy and subsequently annotated by humans, enabling targeted improvements where the LLMs show poor performance.
In addition, involving multiple annotators and measuring inter-annotator agreement will be essential to better characterize task ambiguity and establish a more meaningful upper bound on achievable model performance.

# Abstract

Software is an essential component of research. However, little attention has been paid to it compared with that paid to research data. Recently, there has been an increase in efforts to acknowledge and highlight the importance of software in research activities.
Structured metadata from platforms like bio.tools, Bioconductor, and Galaxy ToolShed offers valuable insights into research software in the Life Sciences. Although originally intended to support discovery and integration, this metadata can be repurposed for large-scale analysis of software practices. However, its quality and completeness vary across platforms, reflecting diverse documentation practices.
To gain a comprehensive view of software development and sustainability, consolidating this metadata is necessary, but requires robust mechanisms to address its heterogeneity and scale.
This article presents an evaluation of instruction-tuned large language models for the task of software metadata identity resolution, a critical step in assembling a cohesive collection of research software. Such a collection is the reference component for the Software Observatory at OpenEBench, a platform that aggregates metadata to monitor the FAIRness of research software in the Life Sciences.
We benchmarked multiple models against a human-annotated gold standard, examined their behavior on ambiguous cases, and introduced an agreement-based proxy for high-confidence automated decisions. The proxy achieved high precision and statistical robustness, while also highlighting the limitations of current models and the broader challenges of automating semantic judgment in FAIR-aligned software metadata across registries and repositories.
Categories: cs.SE, cs.CL, cs.DL
# 1 Introduction

# 2 Related Works

2.1 Markup Languages (XML, HTML)
2.2 Lightweight Data Interchange Formats (JSON, YAML)
2.2.1 JSON (JavaScript Object Notation)
2.2.2 YAML (YAML Ain’t Markup Language)
2.3 Tabular Data Formats (CSV)
2.4 Comparison with Binary Formats

# 3 Methods

3.1 Serialization Process
3.2 Deserialization Process
3.3 Role of Schemas

# 4 Results

# 5 Discussion

5.1 Performance (Serialization and Deserialization Time)
5.2 Serialized Data Size
5.3 Human Readability and Editability
5.4 Complexity and Data Structure Support
5.5 Ecosystem and Tooling
5.6 Trade-offs

# Abstract

Text serialization is a fundamental concept in modern computing, enabling the conversion of complex data structures into a format that can be easily stored, transmitted, and reconstructed. This paper provides an extensive overview of text serialization, exploring its importance, prevalent formats, underlying methods, and comparative performance characteristics. We dive into the advantages and disadvantages of various text-based serialization formats, including JSON, XML, YAML, and CSV, examining their structure, readability, verbosity, and suitability for different applications. The paper also discusses the common methods involved in the serialization and deserialization processes, such as parsing techniques and the role of schemas. To illustrate the practical implications of choosing a serialization format, we present hypothetical performance results in the form of tables, comparing formats based on metrics like serialization/deserialization speed and resulting data size. The discussion analyzes these results, highlighting the trade-offs involved in selecting a text serialization format for specific use cases. This work aims to provide a comprehensive resource for understanding and applying text serialization in various computational domains.
Categories: cs.PL, cs.DB
# I. INTRODUCTION
Remote sensing semantic segmentation aims to classify each pixel in satellite or aerial imagery into specific land cover types, playing a vital role in natural resource monitoring, urban planning, and environmental conservation. Current mainstream methods for semantic segmentation in remote sensing images predominantly rely on data-driven deep
This work was supported by the National Natural Science Foundation of China under grant 42401451 and 42271416, the National Key R&D Program of China under grant 2023YFD2201702, the Hubei Natural Science Foundation under grant 2024AFB223, and the Guangxi Science and Technology Major Project under grant AA22068072.
learning approaches[1–9]. Techniques such as Fully Convolutional Networks[1], U-shaped Networks[2], Segmentation Networks[3], DeepLab[4], and Segmentation Transformers[7] have advanced the field by eliminating the need for manual feature extraction through the use of meticulously designed trainable modules. These methods have improved segmentation accuracy and made the process more intelligent. However, their accuracy often decreases when applied to complex environments. This limitation is primarily due to the challenging imaging conditions of remote sensing and the complex characteristics of the Earth’s surface. Remote sensing images can exhibit spectral similarities or geometric similarities between objects. In addition, non-orthographic imaging introduces numerous shadows in remote sensing images, making it challenging to identify land cover classes in shadowed areas. These factors collectively lead to land cover misclassification with existing semantic segmentation methods, limiting their potential in real-world applications.
In real-world scenarios, different objects not only exhibit distinct geometric and spectral characteristics but also often differ in their elevation or depth. Incorporating three-dimensional (3D) information facilitates distinguishing between objects of similar spectral characteristics but differing depths or heights, such as building rooftops and plazas with similar materials. Moreover, leveraging depth information helps mitigate the land cover misclassification caused by challenges such as shadow occlusion. In remote sensing, there has been research utilizing height information, relying on stereo imagery to derive the Earth’s surface elevation[10–12]. However, the high cost of acquiring stereo imagery poses challenges for large-scale applications in land-cover mapping. Inferring elevation from 2D remote sensing imagery to enhance semantic segmentation would bring new hope for achieving high-precision and extensive land-cover mapping.
This paper presents a novel depth prompting remote sensing semantic segmentation framework named DepthSeg to improve the accuracy in land-cover mapping. The proposed DepthSeg framework infers depth information from 2D satellite images and takes the estimated depth as a prompt to reduce the land-cover misclassification caused by spectral confusion and shadow occlusion. A lightweight adapter built on a pretrained vision transformer (ViT) is taken as the encoder of DepthSeg to capture the land-cover features in 2D remote sensing imagery with a low-cost fine-tuning workload. A depth prompter is then introduced to model depth/height features explicitly. The elevation information is integrated into the land-cover mapping process to overcome the effects of shadows and enhance the model’s ability to distinguish between spectrally similar objects. Finally, a semantic classifier that integrates the depth prompts and multi-scale object features is introduced to interpret the types of land cover.
# II. METHODOLOGY
A depth prompting remote sensing semantic segmentation framework named DepthSeg is proposed in this paper to reduce the land-cover misclassification in complex scenarios caused by factors such as shadow occlusion and spectral confusion. The core idea of DepthSeg is to infer depth information from 2D remote sensing images and then embed depth information into the semantic segmentation framework. The DepthSeg framework is illustrated in Fig. 1.
Fig. 1. Overview of the proposed depth prompting 2D remote sensing semantic segmentation framework. In (a), the different colors of $f$ denote the high-dimensional image features, while $L$ refers to the loss functions. In (b), $C$ represents the number of channels, and $S$ is the harmonic coefficient for the number of feature channels at different scales. In (c), the three colors of Depth represent the three scales of depth maps output by the dense prediction transformer and the different colors of $\psi$ represent depth prompts.
DepthSeg comprises a feature extraction stage, a depth prompting stage, and a semantic prediction stage. During the feature extraction stage, a frozen ViT encoder[13] is employed to extract features. To efficiently adapt the features learned from natural images for remote sensing imagery, a lightweight adapter is introduced to fine-tune the pre-trained ViT encoder at a low cost. The depth prompting stage extracts the depth features of objects from 2D remote sensing images. Specifically, a dense prediction transformer[14] is introduced to extract depth information, and a depth prompter is introduced to encode depth features, effectively guiding the semantic segmentation. Finally, a semantic segmentation decoder is introduced in the semantic prediction stage to accurately extract the classification of land cover by coupling depth information with high-dimensional land-cover features.
# A. Lightweight adapter
To achieve cost-effective fine-tuning of the pre-trained ViT encoder, a lightweight adapter is designed to facilitate knowledge transfer from natural images to remote sensing images (Fig.1(b)). The lightweight adapter receives as input the features from the ViT encoder at four different scales, and the dimensions and channel numbers of the features remain unchanged after passing through the adapter. The adapter consists of four convolutional blocks, with each block containing a $1 \times 1$ convolutional layer, a batch normalization layer, and a rectified linear unit (ReLU) layer. The number of output feature channels from both the ViT encoder and the adapter is related to the size of the ViT. The number of parameters in the adapter also increases with the input feature channel count; however, compared to the ViT encoder, the parameter count is significantly reduced, effectively lowering the training cost.
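The adapter described above can be sketched in PyTorch. This is a minimal illustration, not the authors' released code: the channel counts and input resolutions are assumed examples, and only the per-block structure (1×1 convolution, batch normalization, ReLU) and the shape-preserving behavior follow the paper's description.

```python
import torch
import torch.nn as nn

class LightweightAdapter(nn.Module):
    """One 1x1 conv + batch norm + ReLU block per ViT feature scale.

    Spatial dimensions and channel counts are preserved, as described in
    the paper; the channel list used below is an assumed example, not the
    authors' configuration.
    """

    def __init__(self, channels_per_scale):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, c, kernel_size=1),  # 1x1 conv keeps H, W, and C
                nn.BatchNorm2d(c),
                nn.ReLU(inplace=True),
            )
            for c in channels_per_scale
        )

    def forward(self, feats):
        # feats: one feature map per scale, e.g. four ViT pyramid levels
        return [block(f) for block, f in zip(self.blocks, feats)]

channels = [64, 128, 256, 512]  # assumed example channel counts
adapter = LightweightAdapter(channels).eval()
feats = [torch.randn(1, c, 32 // 2 ** i, 32 // 2 ** i)
         for i, c in enumerate(channels)]
with torch.no_grad():
    outs = adapter(feats)
```

Because each block uses only 1×1 convolutions, its parameter count grows with the input channel count but stays far below that of the ViT encoder, which is the cost argument made above.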
# B. Depth prompter
The depth prompter is proposed to accurately extract the depth information from 2D remote sensing images and provide prompts for the semantic segmentation (Fig.1(c)). Embedding depth prompts is an effective approach to address the land cover misclassification caused by factors such as spectral confusion and shadow occlusion. The input to the depth prompter comprises the three scales of shallow depth features. These depth features are processed through the five submodules of the depth prompter to yield high-dimensional depth prompts. The depth prompter consists of a shallow convolutional block for downsampling and four deep encoding convolutional blocks. Feature encoding among the five submodules is facilitated through skip connections and layer-by-layer transmission in a residual-like manner, preserving larger-scale features while extracting deeper-level depth prompts.
# C. Semantic segmentation decoder
Unlike conventional semantic segmentation decoders that only utilize features extracted from bi-temporal image encoders, the proposed semantic segmentation decoder jointly leverages depth prompts $\psi$ and image features $f$ to decode land-cover types (Fig. 1(d)). The semantic segmentation decoder consists of an input layer, three intermediate layers, and an output layer. The decoded land-cover semantic features are passed layer by layer, ultimately producing the land-cover classification results.
# D. Loss function
Two loss functions are designed to supervise the DepthSeg framework. To supervise the depth decoder, the loss $\mathcal{L}_{D}$ is designed by integrating the average depth, depth standard deviation, and depth structural features. In addition, $\mathcal{L}_{\mathrm{cls}}$ is formulated based on cross-entropy loss to supervise the semantic segmentation decoder. Finally, the overall loss $\mathcal{L}$ for the DepthSeg framework is computed as a combination of these loss components. During the semi-supervised training of the depth decoder, the objective is to align the predicted depth map $X$ with the pseudo-label $Y$ generated by the teacher model. This alignment is achieved by optimizing the depth decoder. The definition of $\mathcal{L}_{D(X,Y)}$ is based on the structural similarity index measure (SSIM)[15]: $\mathcal{L}_{D(X,Y)} = 1 - SSIM(X, Y)$. Furthermore, during the fully supervised training of the decoder, cross-entropy loss is employed as the loss function for land-cover classification. The definition is $\mathcal{L}_{\mathrm{cls}} = -[Y_{gt}\log(Y_{pred}) + (1-Y_{gt})\log(1-Y_{pred})]$, where $Y_{gt}$ is the label and $Y_{pred}$ is the prediction. In summary, the total loss function for DepthSeg is $\mathcal{L} = \mathcal{L}_{D(X,Y)} + \mathcal{L}_{\mathrm{cls}}$.
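The loss combination can be illustrated with a small pure-Python sketch. Two simplifying assumptions: SSIM is computed globally over flattened values (the standard SSIM[15] uses local windows), and the function names are ours, not the authors' code.

```python
import math

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM between two equal-length value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def depth_loss(pred_depth, pseudo_label):
    # L_D(X, Y) = 1 - SSIM(X, Y), as defined in the text
    return 1.0 - ssim_global(pred_depth, pseudo_label)

def cls_loss(y_gt, y_pred, eps=1e-7):
    # Cross-entropy, matching the formula given for L_cls
    return -sum(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps)
                for g, p in zip(y_gt, y_pred)) / len(y_gt)

def total_loss(pred_depth, pseudo_label, y_gt, y_pred):
    # L = L_D(X, Y) + L_cls
    return depth_loss(pred_depth, pseudo_label) + cls_loss(y_gt, y_pred)
```

A depth prediction identical to the teacher's pseudo-label drives `depth_loss` to zero, so the total loss reduces to the classification term, matching the intuition that the depth branch is only penalized when it disagrees with the teacher.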
# III. EXPERIMENTS AND RESULTS
# A. Experimental settings
1) Study area and data: To comprehensively evaluate the performance of DepthSeg in real-world application scenarios, we constructed a large-scale land-cover semantic segmentation dataset named LiuZhou. The LiuZhou dataset consists of fused multispectral and panchromatic images from GaoFen-2, acquired in 2015, with a spatial resolution of 0.8 m. It captures the land-cover characteristics of a region in Liuzhou, Guangxi, China. The dataset includes seven land-cover categories: cropland, forest, buildings, roads, impervious surfaces, bare land, and water bodies. Using the acquired imagery and professional GIS software, all images were annotated by expert remote sensing specialists. The dataset is divided into training, validation, and testing subsets, as shown in Fig. 2. The training set consists of 7,047 pairs of $512 \times 512$ samples, the validation set includes 4,209 pairs, and the testing set contains 4,071 pairs. To ensure the validity of the performance evaluation, the training, validation, and testing data are spatially non-overlapping.
Fig. 2. The LiuZhou dataset.
2) Implementation details and comparative methods: The DepthSeg framework was implemented in PyTorch and was trained on an NVIDIA RTX 4080 GPU with 16 GB of memory. Depending on the layers and parameters of the ViT encoder, the proposed DepthSeg architectures are available in small, base, and large versions. We utilized the pre-trained encoder provided by [13]. To effectively supervise the output of the depth decoder, the Depth Anything[16] model was used as a teacher model for the semi-supervised training. During training, the parameters of the encoder were frozen, while the parameters of the other modules in the DepthSeg framework were optimized using the AdamW optimizer with a weight decay of 0.001 and a momentum of 0.9. The initial learning rate was set to 0.0001, and the learning rate schedule followed a MultiStepLR strategy, reducing the learning rate to 0.2 times the current value at $30 \%$ and $60 \%$ of the training steps. The total number of training epochs was set to 50, with the batch size adjusted according to the encoder size: 8 for ViT-s, 4 for ViT-b, and 2 for ViT-l.
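The MultiStepLR schedule described above (the learning rate multiplied by 0.2 at 30% and 60% of the training steps) can be expressed framework-agnostically. This helper is a sketch of the schedule's arithmetic, not the authors' training code; the function name is ours.

```python
def lr_at_epoch(epoch, base_lr=1e-4, total_epochs=50, gamma=0.2,
                milestone_fracs=(0.30, 0.60)):
    """MultiStepLR as described in the paper: multiply the learning rate
    by `gamma` each time a milestone (30% and 60% of training) is passed."""
    milestones = [int(total_epochs * f) for f in milestone_fracs]  # [15, 30]
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed

schedule = [lr_at_epoch(e) for e in range(50)]
```

With the paper's settings, the rate is 1e-4 for epochs 0-14, 2e-5 for epochs 15-29, and 4e-6 thereafter, which is what `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15, 30], gamma=0.2)` would produce.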
To provide a more objective assessment of the effectiveness of the proposed DepthSeg framework, three methods were selected for comparison: Deeplabv3[17], ST-UNet[8], CTCFNet[9]. For all the comparative methods, we obtained the publicly available code from the authors and implemented them using the hyperparameters recommended in the original papers, conducting a rigorous comparison across the LiuZhou datasets.
3) Evaluation metrics: Six commonly used evaluation metrics are utilized here to quantify the semantic segmentation accuracy: mean precision: $\mathrm{mPre} = \frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$, mean recall: $\mathrm{mRecall} = \frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$, mean F1-score: $\mathrm{mF1} = \frac{1}{N}\sum_{i=1}^{N}\frac{2\mathrm{TP}}{2\mathrm{TP}+\mathrm{FP}+\mathrm{FN}}$, mean intersection over union: $\mathrm{mIoU} = \frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}+\mathrm{FN}}$, Cohen’s Kappa coefficient: $\mathrm{Kappa} = \frac{p_o - p_e}{1 - p_e}$, where $p_o$ is the observed agreement and $p_e$ is the expected chance agreement, and overall accuracy: $\mathrm{OA} = \frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}$. In these formulas, TP, FP, FN, and TN denote true positives, false positives, false negatives, and true negatives, respectively, and $N$ is the number of land cover classes.
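The macro-averaged metrics can be computed directly from per-class confusion counts; the following pure-Python sketch mirrors the per-class averaging described above (the Kappa term is omitted here since it needs the full confusion matrix rather than per-class counts; the function name is ours).

```python
def macro_metrics(per_class):
    """Compute macro-averaged segmentation metrics from per-class
    (TP, FP, FN, TN) counts. `per_class` holds one 4-tuple per
    land-cover class."""
    n = len(per_class)
    m_pre = sum(tp / (tp + fp) for tp, fp, fn, tn in per_class) / n
    m_rec = sum(tp / (tp + fn) for tp, fp, fn, tn in per_class) / n
    m_f1 = sum(2 * tp / (2 * tp + fp + fn) for tp, fp, fn, tn in per_class) / n
    m_iou = sum(tp / (tp + fp + fn) for tp, fp, fn, tn in per_class) / n
    oa = sum((tp + tn) / (tp + tn + fp + fn) for tp, fp, fn, tn in per_class) / n
    return m_pre, m_rec, m_f1, m_iou, oa
```

For perfect predictions (FP = FN = 0 in every class) all five values are 1.0, and mIoU is always bounded above by mF1, a useful sanity check on reported tables.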
# B. Results
Fig.3 presents the visual comparison results of the proposed DepthSeg framework and several comparative methods on the LiuZhou dataset. The primary challenges in semantic segmentation for the LiuZhou dataset stem from issues such as densely packed buildings, varying road materials, and complex vegetation types, which often lead to land-cover misclassification. To comprehensively compare the strengths and weaknesses of the methods in different scenarios, urban areas and suburban regions are selected for detailed examination through magnified views.
In Fig. 3, the mapping results of the proposed DepthSeg in the test areas are closer to the ground truth compared to those of the baseline methods, demonstrating higher accuracy. Specifically, in the magnified results for densely populated urban areas, the baseline methods exhibit misclassification among roads, buildings, and impervious surfaces. By integrating depth information and leveraging the principle that different objects have distinct elevation characteristics, DepthSeg effectively distinguishes spectrally similar targets. In the suburban areas, the magnified results reveal issues in the baseline methods, such as road misclassification and blurred object boundaries. In contrast, DepthSeg achieves accurate land-cover classification and boundary extraction by combining 2D spectral-geometric features with 3D elevation-depth features, demonstrating clear advantages over the other methods. To provide a more objective evaluation of each method’s performance on the LiuZhou dataset, a quantitative comparison is provided in Table I.
Fig. 3. Visualization of the semantic segmentation results of the different methods on the LiuZhou dataset. Legend: Water, Road, Buildings, Farmland, Forest, Bare land, Impervious surface
TABLE I ACCURACY ASSESSMENT ON THE LIUZHOU DATASET.
From Table I, it can be seen that the proposed DepthSeg framework achieves high scores on the LiuZhou dataset. Among the different variants, DepthSeg-vitl records the highest scores across all the metrics on the LiuZhou dataset. Specifically, DepthSeg-vitl improves the Kappa and mIoU scores by $6.34\%$ and $12.33\%$, compared to the best comparative method (CTCFNet). Moreover, all three proposed DepthSeg variants outperform the comparative methods across all the metrics. This can be attributed to the proposed DepthSeg framework’s integration of depth information, which effectively mitigates the land cover misclassification caused by spectral confusion and shadow occlusion.
# IV. DISCUSSION
The accuracy improvements of DepthSeg primarily stem from the fine-tuning of the ViT encoder’s lightweight adapter and the integration of the depth prompter. To assess the contribution of these two key components, an ablation study was conducted. The results of the ablation experiments are presented in Table II.
TABLE II ABLATION STUDY FOR THE PROPOSED MODULES IN DEPTHSEG.
Table II demonstrates that the combination of the lightweight adapter and the depth prompter enhances the performance of the DepthSeg framework across all three encoder types. In the DepthSeg framework using the ViT-l encoder, the lightweight adapter improves the baseline by $0.36\%$ in Kappa and $0.39\%$ in mIoU through fine-tuning the encoder’s feature outputs. Meanwhile, the depth prompter achieves increases of $1.91\%$ in Kappa and $3.12\%$ in mIoU, due to the architectural enhancements. When combined, the lightweight adapter and depth prompter yield improvements of $2.20\%$ in Kappa and $3.40\%$ in mIoU, indicating that the primary benefit comes from embedding the depth information, which facilitates more accurate land-cover classification. In the ablation experiments with the ViT-b encoders, it can be observed that the lightweight adapter produces diminishing returns as the encoder parameter count increases. However, the combination of the lightweight adapter and depth prompter results in greater gains than the depth prompter alone. This suggests a risk of reduced accuracy when fine-tuning high-parameter encoders without using the depth prompter. Regardless of whether used independently or together, the depth prompter contributes positively to the accuracy of the semantic segmentation. | Remote sensing semantic segmentation is crucial for extracting detailed land surface information, enabling applications such as environmental monitoring, land use planning, and resource assessment. In recent years, advancements in artificial intelligence have spurred the development of automatic remote sensing semantic segmentation methods. However, the existing semantic segmentation methods focus on distinguishing spectral characteristics of different objects while ignoring the differences in the elevation of the different targets. This results in land cover misclassification in complex scenarios involving shadow occlusion and spectral confusion. 
In this paper, we introduce a depth prompting two-dimensional (2D) remote sensing semantic segmentation framework (DepthSeg). It automatically models depth/height information from 2D remote sensing images and integrates it into the semantic segmentation framework to mitigate the effects of spectral confusion and shadow occlusion. During the feature extraction phase of DepthSeg, we introduce a lightweight adapter to enable cost-effective fine-tuning of the large-parameter vision transformer encoder pre-trained on natural images. In the depth prompting phase, we propose a depth prompter to model depth/height features explicitly. In the semantic prediction phase, we introduce a semantic classification decoder that couples the depth prompts with high-dimensional land-cover features, enabling accurate extraction of land-cover types. Experiments on the LiuZhou dataset validate the advantages of the DepthSeg framework in land cover mapping tasks. Detailed ablation studies further highlight the significance of the depth prompts in remote sensing semantic segmentation. | [
"cs.CV",
"cs.AI"
] |
# 1 Introduction
Automatic summarization is the task of using machines to summarize natural language text (Maynez et al., 2020; Ranjitha and Kallimani, 2017; Li et al., 2014). This task has developed across many areas in computational linguistics for more than six decades (Luhn, 1958). Recently, Large Language Models (LLMs) have been influential and have been shown to achieve state-of-the-art performance in summarization. However, these models cannot condense a large number of long documents given memory constraints. Hallucination (erroneous and nonfactual generated text) presents additional challenges for LLM summarization (Yehuda et al., 2024).
Figure 1: EHR note antecedents (e.g., progress and imaging notes) and an example de-identified MIMIC-III discharge summary, including admission/discharge dates, history of present illness, hospital course, and discharge diagnoses.
In this work our goal is to automatically generate a discharge summary using clinical notes after a patient leaves the hospital. A discharge summary is a medical document that explains a patient’s illness, reason for their hospital stay, and treatment.
Physicians and clinicians hand-write exhaustive documentation during hospitalization, which can be exploited to aid in authoring time-consuming documentation. Fig 1 illustrates this process by which previously written clinical notes (note antecedents) are stored in the electronic health record (EHR) system and later utilized to generate the discharge summary.
Clinical summarizations must be both faithful and traceable. Generating discharge summaries under these requirements highlights the difficulty of the task for admissions of extended-stay patients, whose EHR notes number in the hundreds. Contemporary state-of-the-art methods, such as fine-tuning LLMs, render the task nearly impossible given the volume of information for patients with prolonged hospitalizations. In some cases, LLMs might feasibly summarize clinical documentation on a per-note basis. However, the total textual content across all handwritten notes of an admission easily extends past the limit of LLMs’ context windows. Even the impressively large 2-million-token window of Gemini 2.0 (Anil et al., 2024) is not large enough for lengthy admissions.1
Because of these memory constraints, other solutions are needed for the large multi-document automatic summarization task of generating the discharge summary. Most abstractive methods inherently tend to create less faithful summaries because they typically formulate text as a probability distribution over the vocabulary. They also provide no traceable means of cross-referencing the text of summarized documents. The method chosen for our work is extractive, since it is both faithful and traceable and thus acceptable for the clinical domain.
While previous methods have shown success at summarizing a single section (Adams et al., 2021), to our knowledge, there is no peer-reviewed work that attempts to generate a complete discharge summary using EHR notes. This motivates the extractive methods formulated in this work and provides a baseline for future abstractive summarization.
The contributions of this work include a) a faithful and traceable method to generate discharge summaries, b) the reusable source code2 to reproduce our results, and c) the Medical Information Mart for Intensive Care III (MIMIC-III) generated discharge notes with physician informal evaluations.
# 2 Related Work
Summarization is a well-established area in natural language processing (Zhang et al., 2020b; Liu et al., 2015; Liao et al., 2018; Salton et al., 1994; Luhn, 1958).
# 2.1 Clinical Note Summarization
To our knowledge, no other work exists that uses graph methods for summarizing discharge notes. However, the literature is rich with examples of clinical note summarization that include both longitudinal (Hirsch et al., 2015), and nonlongitudinal (Pivovarov and Elhadad, 2015) note types, two examples of mutual discipline interest. Furthermore, the shared understanding, agreement, and acknowledgment that faithful summarization is necessary, but lacking, has been thoroughly reviewed (Zhang et al., 2020b).
Adams et al. (2021) showed promising results in summarizing the Brief Hospital Course section. However, for the single-section case, the summarization of physician notes is perhaps the most interesting comparison and potentially most impactful (Gao et al., 2022a). Clinical notes were summarized by fine-tuning the T5 (Raffel et al., 2020) and BART (Lewis et al., 2020) state-of-the-art seq2seq models and evaluated using the Bidirectional Encoder Representations from Transformers Score (BERTSCORE) (Zhang et al., 2020a) and Recall-Oriented Understudy for Gisting Evaluation (Lin, 2004) scoring methods.
# 2.2 Abstract Meaning Representation
Interest in abstract meaning representation (AMR) has recently spanned across many tasks (Liu et al., 2015; Bonial et al., 2020; Naseem et al., 2022). AMR graphs were later enriched with PropBank frames, which greatly enhanced their expressiveness (Palmer et al., 2005). Recent achievements that use AMR models as the primary data representation include work in natural language generation (Manning et al., 2020), automatic machine translation (Blloshmi et al., 2020), question/answer systems (Lim et al., 2020), and building logical forms (Galitsky, 2020).
The well-known work of Liu et al. (2015) used reduction methods with AMR graphs for summarization. In this work, the authors created a fully connected graph that was used to heuristically generate abstractive text. This was later broadened with a more comprehensive and robust AMR-graph-based realization algorithm for multi-document summarization (O’Gorman et al., 2018; Liao et al., 2018).
This research is inspired by the work of Liu et al. (2015) and Liao et al. (2018) with regard to AMR graph reduction methods. Our method differs in that it leverages CALAMR (Landes and Di Eugenio, 2024), which induces a graph by modeling it as a flow network (Gao et al., 2022b) and re-frames the concept of commodity flow (Magnanti and Wolsey, 1995) as indicator flow constraints for edge inclusion, whereas their work builds on the graph reduction methods of Thadani and McKeown (2013) for sentence comprehension. For summarization, we leverage the CALAMR method as it provides traceability through AMR graph alignments.
Table 1: Graph Alignment Statistics. Alignment and reentrancy averages by admission with the number of nodes aligned by component in the admission graph.
# 3 Datasets
The MIMIC-III Version 1.4 (Johnson et al., 2016) corpus and the UI Health Dataset were used for all experimentation with the summarization methods described. The UI Health Dataset is an IRB-approved private dataset of 11,001 admissions and 607,872 notes, which include daily progress, radiology, ECG, and a variety of other notes from the University of Illinois Chicago hospital. Of the MIMIC-III sample of 11,957 admissions, 113 admissions were processed.
A total of 3,520 of the 11,957 MIMIC-III admissions were aligned (see Sec 4) to create the Source Section Dataset (see Sec 4.3). Table 1 shows the average number of alignments across note antecedent and discharge summary components and the average number of reentrancies per admission. The “alignable” statistics count nodes that are alignment candidates, such as concept and attribute nodes. The “aligned” statistics count those nodes with alignment edges.
Aligning the UI Health Dataset resulted in additional challenges. The dataset has more notes across category types compared to MIMIC-III because the latter has only intensive care unit (ICU) notes available (Landes et al., 2023). The consequence of this more robust note variety is that admission note counts are much higher and, therefore, take much longer to align. There is also a higher risk of missed alignments due to a potentially higher rate of reentrancies (more than one in a path from the reentrancy to the root), which lead to flow issues (Landes and Di Eugenio, 2024). Even though the MIMIC-III alignments far outnumber those of the UI Health Dataset, the UI Health Dataset has many more reentrancies.
# 4 Methods
CALAMR3 (Component ALignment for Abstract Meaning Representation) was leveraged to find clinical notes and candidate sentences to use for summarization. We refer the reader to the paper by Landes and Di Eugenio (2024), but we give a brief overview here. The CALAMR method first parses the source text into a single connected graph of AMR sentence graphs. It then does the same for the summary text. These two graphs start as separate components that become one bipartite graph.
Nodes are connected, as bipartite edges, if their semantic similarity’s neighborhood exceeds a threshold. This similarity measure is calculated based on embeddings assigned to concept and attribute AMR nodes and PropBank (Kingsbury and Palmer, 2002) roles and role set edges. The similarity measures are also used as the information gain across the connected graph and all subgraphs of each in the max flow algorithm (Gao et al., 2022b; Ford and Fulkerson, 1962). The assigned flow values to each bipartite edge lead to the “starvation” of low information subgraphs. Subgraphs are effectively removed by setting low flow alignment edge capacities to zero.
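The bipartite edge construction can be sketched as follows. Cosine similarity over toy embedding vectors and the 0.8 threshold are stand-ins for CALAMR's actual neighborhood-based similarity measure, and the function names are ours; only the thresholding idea and the use of similarity as edge capacity follow the description above.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bipartite_edges(source_nodes, summary_nodes, threshold=0.8):
    """Connect source and summary AMR nodes whose embedding similarity
    exceeds `threshold`; the similarity doubles as the edge capacity
    (information gain) consumed by the later max-flow step."""
    edges = []
    for s_id, s_emb in source_nodes.items():
        for t_id, t_emb in summary_nodes.items():
            sim = cosine(s_emb, t_emb)
            if sim > threshold:
                edges.append((s_id, t_id, sim))
    return edges

# Toy 2-dimensional embeddings for two source nodes and one summary node.
src = {"s1": [1.0, 0.0], "s2": [0.0, 1.0]}
summ = {"t1": [0.9, 0.1]}
edges = bipartite_edges(src, summ)
```

In this toy example only `s1` aligns with `t1`; `s2` falls below the threshold, mirroring how low-similarity subgraphs are later "starved" of flow.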
CALAMR was used to create supervised training examples, using the flow data of the alignment graphs to match note antecedent source sentences to discharge summary sentences. A model we refer to as the Source Section Model classifies each note antecedent (source) sentence with the discharge summary section type of its matched discharge summary sentence (see Sec 4.4.1). The Source Section Model used these matches to learn what to add to the summary: each note antecedent source sentence was assigned to the section of the matched discharge summary sentence, and this assignment was then used as the label in a classification neural network model.
Figure 2: Pipeline Overview. Clinical notes are first preprocessed (left) into learning examples for a summarization model (right).
An overview of the pipeline follows (see Fig 2):
1. Preprocess notes to generate a dataset for supervised training.
(a) Construct an admission graph from a subset of MIMIC-III Version 1.4 (Johnson et al., 2016) admissions (Fig 2a).
(b) Create AMR sentences using a text-to-graph parser.
(c) Use CALAMR to create an alignment graph for each admission. This includes the note antecedents connected to the discharge summary (Fig 2c).
2. Train a summarization model using the dataset created in Step 1, then use it to summarize.
(d) Label note antecedent sentences with discharge summary section types using the alignments (Fig 2d).
(e) Train a supervised sentence classification model using the labels created in Step 2 (Fig 2e).
(f) Use the trained model to add note antecedent sentences by discharge summary section to the generated note.
# 4.1 Admission Graph
A patient is admitted to the hospital upon entering for any administered healthcare services. From the healthcare perspective, this admission includes what is done to the patient for the duration of the hospital stay. The admission graph is a semantic representation of a patient’s hospital stay. It is composed of two disconnected graph components: all the antecedent notes for the admission and the discharge summary.
The sentences of all antecedent notes are parsed into AMR graphs, which are then connected to create the source graph. Likewise, the discharge summary is parsed into AMR graphs, which, when connected, become the summary graph. These two disconnected components follow the structure of the source and summary components of the bipartite graph described by Landes and Di Eugenio (2024). However, document nodes that represent note categories, note sections, and clinical text paragraphs are used between the roots and their respective AMR subgraphs, as shown in Fig 3. The note antecedent source has a note category level, whereas the summary component’s root represents the discharge summary.
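Under the simplifying assumption that only the category and note levels are modeled (the paper also interposes section and paragraph document nodes), the two disconnected components might be assembled as an adjacency dictionary like this; all node names and the function name are illustrative, and sentence strings stand in for parsed AMR subgraphs.

```python
def build_admission_graph(note_antecedents, discharge_summary_sents):
    """Build the two disconnected components of an admission graph as an
    adjacency dict: a source component rooted over note categories and
    notes, and a summary component rooted at the discharge summary."""
    graph = {"source-root": [], "summary-root": []}
    for category, notes in note_antecedents.items():
        cat_node = f"category:{category}"
        graph["source-root"].append(cat_node)
        graph[cat_node] = []
        for i, sents in enumerate(notes):
            note_node = f"{category}:note-{i}"
            graph[cat_node].append(note_node)
            # leaves stand in for per-sentence AMR subgraph roots
            graph[note_node] = [f"{note_node}:sent-{j}"
                                for j in range(len(sents))]
    graph["summary-root"] = [f"summary:sent-{j}"
                             for j in range(len(discharge_summary_sents))]
    return graph

g = build_admission_graph(
    {"radiology": [["Chest X-ray shows ..."]],
     "progress": [["Pt stable."]]},
    ["Discharged in good condition."],
)
```

The two roots share no edges, matching the description of the source and summary graphs as disconnected components that only become one bipartite graph once CALAMR adds alignment edges.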
The spaCy4 and scispaCy5 libraries were used to tokenize, sentence-chunk, and tag biomedical and non-biomedical named entities. MedCAT (Kraljevic et al., 2021) was used to link token spans to Unified Medical Language System concept unique identifiers (CUIs), which aid in aligning their text-to-graph concept nodes.
Previous methods have used concept merging to join AMR sentences (Lee et al., 2021). However, we joined the AMR parser’s selected sentence roots to their corresponding paragraph nodes and used Coreference Resolution in place of concept node merging to avoid loss of data.
# 4.2 Sentence Matching Algorithm
The sentence matching algorithm uses the CALAMR alignments to identify the sentences that best represent the summary. This classification is based on the sentence-to-sentence information gain from the aligned graph flow network. Fig 4 shows how the source sentence “Pre-cardiac catheterization assessment.” matches the discharge summary sentence “Coronary artery disease, status post coronary artery bypass grafting” by creating paths through the graph from a source sentence to a summary sentence. Each sentence connected in this way becomes a candidate.
Figure 3: Admission Graph. An admission graph of the note antecedents (a), and the discharge summary (b).
The sentence matching algorithm follows:
1. For each discharge summary sentence in the reduced graph, use a depth-first search to index aligned nodes (Fig 4a).
2. For each indexed node in step 1, traverse the alignment edge to source nodes in the note antecedent component (Fig 4b).
3. Annotate aligned source nodes indexed in step 2 with alignment flows from discharge summary component edges (Fig 4c).
4. Associate the aligned node summary annotations for each respective sentence in the source component (Fig 4d).
5. Create a sentence match candidate between the source and summary sentences (Fig 4e).
6. Sort the source sentences by the sum of the flow from each summary sentence.
7. Match sentences based on the flow from each summary to source sentence.
8. All remaining unmatched note antecedent sentences are given the no-section label.
Once the source sentences are paired with distributions of summary sentences by flow in step 6, each source sentence is matched with zero or more summary sentences. In step 7, a source sentence is matched with the summary sentence that has the maximum flow, subject to the minimum sentence flow hyperparameter. The matched summary sentence is then eliminated as a candidate for matching with any other source sentence. Finally, the source sentences are tagged with the section of their matched summary sentence. Upon completion, each antecedent sentence is tagged with the discharge summary section to which it should be added. For example, a radiology antecedent note may have a sentence marked with the Brief Hospital Course discharge summary section type; this sentence will then be added to that section during the generation process.
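The greedy matching in steps 6 to 8 can be sketched in a few lines of Python. The flow values, sentence identifiers, and threshold below are illustrative only, not taken from the paper's implementation:

```python
# A minimal sketch of the greedy flow-based matching in steps 6 to 8.
# Hypothetical input: flows[(src, summ)] = alignment flow between a
# source sentence and a summary sentence (identifiers are illustrative).
MIN_SENT_FLOW = 0.5  # stand-in for the minimum sentence flow hyperparameter

def match_sentences(flows, min_flow=MIN_SENT_FLOW):
    """Greedily match each source sentence to its max-flow summary sentence."""
    # Step 6: sort source sentences by total incoming flow (descending).
    totals = {}
    for (src, summ), f in flows.items():
        totals[src] = totals.get(src, 0.0) + f
    sources = sorted(totals, key=totals.get, reverse=True)

    matched, used_summaries = {}, set()
    for src in sources:
        # Step 7: pick the summary sentence with maximum flow that clears the
        # threshold and has not already been matched to another source sentence.
        candidates = {summ: f for (s, summ), f in flows.items()
                      if s == src and summ not in used_summaries and f >= min_flow}
        if candidates:
            best = max(candidates, key=candidates.get)
            matched[src] = best
            used_summaries.add(best)
        else:
            matched[src] = None  # step 8: unmatched, i.e. the no-section label
    return matched

flows = {("src1", "sum1"): 0.905, ("src1", "sum2"): 0.3,
         ("src2", "sum1"): 0.6, ("src2", "sum2"): 0.7}
print(match_sentences(flows))
```

Because each matched summary sentence is removed from the candidate pool, no two source sentences can claim the same summary sentence, mirroring the elimination step described above.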
Figure 4: Sentence Matching Algorithm. The path (red) of alignment flow from the source to the summary for a sentence. The enlarged box shows two incoming alignment flows from the source into the heart concept with a combined flow of 0.905. The green arrow represents a match candidate as a result of this alignment flow and the path to their respective sentences.
Table 2: Matched Sentence Sections. Counts of matched sentences per discharge summary section across splits.
Figure 5: MIMIC Matched Sentence Notes. Counts of notes per admission in the Source Section Dataset.
# 4.3 Source Section Dataset
We refer to the set of notes that were successfully aligned (described in Sec 3) as the Source Section Dataset. The sentence matching algorithm described in Sec 4.2 was used to automatically pair sentences from note antecedents to discharge summaries of this dataset. Each sentence pair of each admission graph will be used to train an extractive summary model (see Sec 4.4.1). The note counts by categories are given in Fig 5.
The selected discharge summary sections were based on what a clinical informatics fellow and a $4 ^ { \mathrm { t h } }$ year medical student considered most necessary and beneficial for summarization by a physician authoring the note. The physician-selected discharge summary sections and their counts are given in Table 2. Most notable is the imbalance between the section labels and the no-section label. This high disparity leads to a terse generated discharge summary, which is explained further in Sec 6. The same process was used when selecting sections in the UI Health Dataset. Appendix C and Appendix D give the matched sentence candidate contingency tables.
# 4.4 Discharge Summary Generation
The discharge summaries were generated using the source section model (see Sec 4.4.1) trained on the Source Section Dataset. The note antecedents of the Source Section Dataset’s test set were used as input to the source section model. Sentences were added to the predicted section in the generated discharge summary or discarded if the no-section label was predicted.
The UI Health Dataset was used as a development set by tuning the CALAMR $k ^ { \mathrm { { t h } } }$ order neighbor set hyperparameter $( \Lambda )$ to include more network neighborhood semantic information. The minimum sentence flow hyperparameter $( \mu _ { \mathbf { s } } )$ was also adjusted to increase the output to 248 aligned admissions with higher quality.
The MIMIC-III trained summarization model yielded 133 automatically generated discharge summaries and the UI Health Dataset model generated five. The alignment challenges described in Sec 3, such as missing discharge summaries and GPU memory constraints, show the difficulty of hospitalization summarization. Further discussion of these challenges is provided in Sec 6.
# 4.4.1 Source Section Model
Once the sentence matching algorithm was used to assign labels to source sentences (see Sec 4.2), a bidirectional long short-term memory (BiLSTM) model was trained to learn the discharge summary section type of each note antecedent source sentence. A section, such as Hospital Course, is a label predicted by the model indicating not only that the sentence should be added, but also to which section of the discharge summary it belongs. A label of no-section means the sentence is to be discarded.
A BiLSTM (Graves and Schmidhuber, 2005) was used for learning the sentence section classification. The GatorTron (Yang et al., 2022) clinical embeddings, the note antecedent’s note category, and the section type were used as input features to the model. Because of the data input size (see Sec 3), the model’s static embeddings were used in place of fine-tuning. A fully connected linear layer was added between the BiLSTM and the output layer. The BiLSTM layer had a hidden size of 500, a dropout of $p = 0.15$, a learning rate of $5 \times 10 ^ { - 4 }$, and used gradient clipping. The model was set to train for 30 epochs and converged at 24 epochs.
# 5 Experimental Setup
The standard set of quantitative machine learning performance metrics was used to evaluate the source section model. Automatic metrics such as ROUGE (Lin, 2004), BLEU (Papineni et al., 2002) and BERTSCORE (Zhang et al., 2020a) are of little help with such a large set of disjoint text: less than half of the EHR notes is represented in the discharge summary (Adams et al., 2021; Landes et al., 2023).
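To make the mismatch concrete, a simplified unigram-overlap ROUGE-1 F1 can be computed in a few lines (this sketch omits the stemming and other preprocessing of official implementations; the example sentences are invented for illustration):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 (simplified: whitespace tokens, no stemming)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# A long source with a short, faithful summary still yields a low score,
# illustrating why overlap metrics under-reward extractive clinical summaries.
source = ("patient admitted with chest pain ecg showed st elevation "
          "cardiac catheterization performed stent placed patient stable")
summary = "chest pain treated with stent"
print(round(rouge1_f1(summary, source), 3))
```

Even a fully faithful short summary scores poorly against a long note because recall is bounded by the fraction of source tokens it can cover.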
# 5.1 Limitations of Automatic Evaluation Metrics
To demonstrate the limited effectiveness of computed automated evaluation metrics, we compared the EHR records with the discharge summary.
Table 3: Automated Metrics. Automated metrics between MIMIC-III EHR notes and (non-generated) discharge summaries were computed.
Table 3 shows very low ROUGE and BLEU scores across the original (unmodified) MIMIC-III antecedents compared with the discharge summaries. For this reason, we believe human evaluation is more appropriate for judging the effectiveness of generated documentation given the depth, complexity, and technical jargon found in clinical notes.
We used human quantitative and qualitative evaluation as a more reliable way to understand the effectiveness of the generated notes. The generated discharge summaries on the Source Section Dataset’s test set were evaluated by a clinical informatics fellow and a $4 ^ { \mathrm { t h } }$ year medical student. Each generated discharge summary was rated on a Likert scale (Likert, 1932) as an integer in the range 1 to 5, with five as the highest. Table 4 lists the questions asked for the informal evaluation.
Table 4: Human Evaluation Questions. Questions given to medical domain experts for the human evaluation of the generated discharge summaries.
Table 5: Source Section Model Results. The results as weighted, micro and macro scores of the source section model. The results of the model trained on the MIMIC-III corpus are given on the left and the UI Health Dataset on the right.
# 6 Results
The source section model results are summarized in Table 5. The weighted F1 score of 88.72 on the MIMIC-III trained corpus shows good results for the sentences’ discharge summary section classification. However, we see a low macro F1 of 20.41. The main reason for a high weighted and micro F1 but low macro F1 is that the majority label, the no-section label, dominated, as shown in Fig 5.
The model trained on the UI Health Dataset shows lower results. This might be partly due to the smaller dataset or the higher rate of reentrancies as shown in Table 1 and discussed in Sec 3. The fact that MIMIC-III is a curated dataset is the most likely reason the results are higher compared to UI Health Dataset, which is unmodified and contains protected health information.
The informal evaluation of the 133 discharge summaries generated by the model trained on the MIMIC-III corpus is given in Table 6. The evaluation illuminates the difficulty of the task. Despite low scores on sectioning, completeness, and preference, the generated summaries achieve a readability of just over 3 and a perfect correctness score (5), which is what we expect from a faithful summary.
Table 6: Human Evaluation. The average Likert scale scores by question category of 133 MIMIC-III generated summaries are given on the left, and of five summaries generated from the UI Health Dataset on the right.
# 6.1 Discussion
The label imbalance in the Source Section Dataset might be attributed to the sparsity of CALAMR’s alignments. If this were the case, we could adjust the hyperparameters of CALAMR to produce more sentence matches. However, the lack of alignment could be justified by the lack of notes (other than those from the ICU department) present in the MIMIC-III corpus. The misalignment could also be attributed to cases where the physician writes from personal experience with the patient that is otherwise lacking from the EHR notes.
The five discharge summaries produced by the model trained on the UI Health Dataset (see Table 6) show better completeness but slightly lower readability. A higher sectioning score was given to the UI Health Dataset despite the fact that the MedSecId model was trained on MIMIC-III. This implies that MedSecId is able to section the UI Health Dataset notes, or that the source section model is able to predict sections based on other factors such as better alignments. A de-identified automatically generated discharge summary (Fig 6) and its gold counterpart are given in Appendix B and Appendix A.

Abstract: The Achilles heel of Large Language Models (LLMs) is hallucination, which has drastic consequences for the clinical domain. This is particularly important with regard to automatically generating discharge summaries (a lengthy medical document that summarizes a hospital in-patient visit). Automatically generating these summaries would free physicians to care for patients and reduce documentation burden. The goal of this work is to discover new methods that combine language-based graphs and deep learning models to address provenance of content and trustworthiness in automatic summarization. Our method shows impressive reliability results on the publicly available Medical Information Mart for Intensive Care III (MIMIC-III) corpus and clinical notes written by physicians at Anonymous Hospital. We provide our method, generated discharge summary output examples, source code and trained models.
"cs.CL"
] |
# 1 Introduction
In recent years, advancements in Large Language Models (LLMs) have greatly broadened the scope of downstream applications (Devlin et al., 2019; Zhou et al., 2022), creating an increasing need for continual learning (CL) on user-specific private data in sectors such as finance (Zhao et al., 2024), law (Lai et al., 2024), and healthcare (Liu et al., 2024). However, CL introduces challenges such as catastrophic forgetting, the inability to retain prior knowledge due to the absence of previously seen data during current-task learning (Shi et al., 2024).
Existing research has primarily focused on mitigating catastrophic forgetting of specific past data points or tasks (Smith et al., 2023; Luo et al., 2023), while critically overlooking the degradation of the
# Case study: LLM’s Response to Unseen Data
Question: When did Jaime Vasquez recognize his inclination towards writing?
Base Model (pre-FT): I apologize, but I couldn’t find any information on a person named Jaime Vasquez.
Full FT: 16. (hallucination)
LoRA FT: 1983. (hallucination)
Sparse FT: 14. (hallucination)
SEAT: I apologize, but I couldn’t find any information on a person named Jaime Vasquez.
base model’s inherent capabilities. Specifically, state-of-the-art LLMs are increasingly aligned to express their lack of knowledge when faced with unseen inputs (see Table 1). This ability to faithfully express epistemic uncertainty (Yadkori et al., 2024; Ji et al., 2025), which we call ignorance awareness in this paper, is a cornerstone of safety alignment. Preserving this capability is essential not only for trustworthy deployment but also for compliance with emerging regulatory frameworks aimed at ensuring reliable and safe AI (Act, 2024).
To address this novel yet highly practical problem (i.e., preserving the base model’s inherent ability to express ignorance post effective fine-tuning), we propose Sparse Entity-aware Tuning (SEAT), a novel approach composed of two principal components. (1) We introduce sparse training, designed to constrain drift in the activation space associated with the model’s expression of ignorance; (2) We develop an entity perturbation methodology combined with a KL-divergence-based loss during fine-tuning, which aims to disentangle semantically similar neighboring entities, ensuring that the model learns only from entities in the fine-tuning dataset without erroneously extending to unknown entities. Together, these components enable the model to learn targeted new information while preserving the base model’s general capability, particularly its alignment with epistemic uncertainty.
In summary, our contributions are: (1) We demonstrate that traditional fine-tuning methods can interfere with a model’s inherent ability to faithfully express ignorance, and identify a new but practical need for more robust fine-tuning approaches; (2) We propose SEAT, a novel and robust fine-tuning method for LLMs that preserves the model’s ability to express ignorance post effective fine-tuning; (3) We validate the effectiveness of SEAT through empirical experiments across multiple base models and highlight the essential role of both its core components.
# 2 Foundational Insights
We begin with an investigation into the factors that lead to the degradation of ignorance awareness during the fine-tuning process.
Drift in Activation Space Recent findings from mechanistic interpretability and representation engineering (Zou et al., 2023) suggest that key observable behaviors are encoded in linear subspaces of a model’s internal representations. Furthermore, models can be guided to express ignorance for specific data points by redirecting their activations to regions associated with ‘ignorance’ state (Shen et al., 2025). Based on these, we hypothesize that the degradation of this capability during fine-tuning arises from substantial shifts in the activation space, which disrupt the alignment with the model’s builtin ability to faithfully express its lack of knowledge.
Figure 1 presents a PCA visualization of activation patterns across different datasets (all activations are projected onto the principal components of the fictitious unverifiable dataset (Shen et al., 2025), for which the base model has been verified to exhibit awareness and reliably express ignorance). In the base model, the seen (factual) and unseen (PISTOL and TOFU) datasets are clearly separable (Figure 1(a)). However, once new data is learned (using the PISTOL dataset), the seen (factual and PISTOL) and unseen (TOFU) datasets become inseparable by the fine-tuned model (Figure 1(b)). This collapse in separation aligns with empirical observations: unlike the base model, which faithfully expresses ignorance toward unseen datasets, the fine-tuned model loses this capability and begins to hallucinate. If we conceptualize fine-tuning as integrating new data or tasks into a base model with pre-existing capabilities, this collapse in separation suggests significant weight interference introduced by the conventional full fine-tuning (Full FT) method.
Meanwhile, parameter-efficient fine-tuning (PEFT) methods such as LoRA (Hu et al., 2021) have been found to exhibit reduced robustness in sequential learning, a limitation attributed to the emergence of a high-ranking singular vector in the fine-tuning weight matrices (Shuttleworth et al., 2024). The lack of robustness also extends to the loss of the model’s inherent ignorance awareness, as we also observed significant drift in the activation space and the resultant overlap between activations for unseen and seen datasets (Figure 1(c)). Thus, PEFT methods like LoRA are not considered more robust alternatives for preserving a model’s ability to express ignorance.
Given prior findings that a large portion of weight updates are unnecessary (Yu et al., 2024) and that incorporating sparsity into training enhances model robustness and composability (Qiu et al., 2022), we hypothesize that enforcing sparsity during fine-tuning can mitigate such interference and reduce activation drift. This hypothesis is supported by the observation in Figure 1(d), where a $9 0 \%$ sparsity ratio yields improved separation compared to full FT, though not a complete recovery.
Entity Awareness Prior work has identified the problem of knowledge entanglement, particularly when the target data to be learned shares high semantic and format similarity with non-target data (Shen et al., 2025). This entanglement can cause unintended changes in model behavior when prompted with non-target data post fine-tuning. In the context of $\mathrm { C L }$ , learning new data in the form of a triple $( s , r , o )$ should not affect unseen neighboring data $( s ^ { \prime } , r , o )$ (i.e., not displacing model’s internal ‘ignorance’ state of such neighboring data that can otherwise lead to hallucinations).
# 3 Methodology
The above analysis offers key insights into the design of SEAT, which comprises two principal components. We first denote our fine-tuning dataset as $\mathcal { D } _ { f t }$ .
First, to constrain activation drift caused by fine-tuning, we introduce sparse training into the optimization process. The sparsity ratio $( r )$ controls the proportion of model weights updated during training, thereby constraining representational shifts and preserving the model’s underlying abilities.
Specifically, we consider a sparse training setup where a binary mask $m \in \{ 0 , 1 \} ^ { d }$ is applied to the parameter space $\theta \in \mathbb { R } ^ { d }$ , controlling which weights are updated during fine-tuning. The mask defines a sparsity pattern such that, for each parameter index $i$ , $m _ { i } = 1$ allows $\theta _ { i }$ to be updated, while $m _ { i } = 0$ freezes it at its base value. Notably, masks can be constructed using various strategies, such as random sampling, retaining the largest weights to reflect influence on the loss landscape, or imposing structured sparsity to align with hardware efficiency constraints. In this paper, we focus on demonstrating that SEAT achieves strong performance even with basic random masking, leaving the comparison of masking strategies to future work.
Figure 1: PCA visualization of activations (last token position at the last layer) over different datasets (projected onto the principal components of the unverifiable dataset). Plots over all layers can be found in Appendix B.
In SEAT, given a mask $m$ , we define the effective trainable weights as $\theta ^ { ( m ) } = m \odot \theta$ , where $\odot$ denotes the element-wise (Hadamard) product. At training step $t$ , weights are updated according to:
$$
\boldsymbol { \theta } ^ { ( t + 1 ) } = \boldsymbol { \theta } ^ { ( t ) } - \eta \cdot \boldsymbol { m } \odot \nabla _ { \boldsymbol { \theta } } \mathcal { L } ( \boldsymbol { \theta } ^ { ( m ) } ; \mathcal { D } ) ,
$$
where $\eta$ is the learning rate.
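The mask construction and the masked update rule above can be sketched in plain Python, operating on flat lists rather than tensors; the weights, gradients, and sparsity level below are illustrative values, not taken from the paper:

```python
import random

def random_mask(d, sparsity, seed=0):
    """Binary mask m: m_i = 0 freezes a weight; `sparsity` is the frozen fraction."""
    rng = random.Random(seed)
    return [0 if rng.random() < sparsity else 1 for _ in range(d)]

def masked_sgd_step(theta, grad, mask, lr=0.1):
    """theta^(t+1) = theta^(t) - lr * (m ⊙ grad), applied element-wise."""
    return [t - lr * m * g for t, g, m in zip(theta, grad, mask)]

theta = [1.0, 2.0, 3.0, 4.0]
grad = [0.5, 0.5, 0.5, 0.5]
mask = [1, 0, 1, 0]  # only indices 0 and 2 are trainable; 1 and 3 stay frozen
print(masked_sgd_step(theta, grad, mask))
```

Only the unmasked coordinates move, which is exactly how the update constrains representational drift: the frozen coordinates retain their base-model values throughout fine-tuning.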
Second, to enhance entity awareness, we introduce an entity perturbation strategy (EP). Given a fine-tuning dataset $\mathcal { D } _ { \mathrm { f t } } = \{ x ^ { ( i ) } \} _ { i = 1 } ^ { N }$ where each input $x ^ { ( i ) }$ is a triple $( s ^ { ( i ) } , r ^ { ( i ) } , o ^ { ( i ) } )$, we construct a perturbed dataset $\tilde { \mathcal { D } }$ of triples $( \tilde { s } ^ { ( i ) } , r ^ { ( i ) } , o ^ { ( i ) } )$, where $\tilde { s } ^ { ( i ) }$ is a fictitious perturbed entity that replaces the original $s ^ { ( i ) }$ while all other tokens (i.e., $r ^ { ( i ) } , o ^ { ( i ) }$) remain unchanged. Formally, for input $x ^ { ( i ) } = [ t _ { 1 } ^ { ( i ) } , \dots , t _ { j } ^ { ( i ) } , \dots , t _ { L } ^ { ( i ) } ]$, we define $\tilde { x } ^ { ( i ) } = [ t _ { 1 } ^ { ( i ) } , \dots , \phi ( t _ { j } ^ { ( i ) } ) , \dots , t _ { L } ^ { ( i ) } ]$, where $t _ { j } ^ { ( i ) }$ are entity token(s) and $\phi ( \cdot )$ is a random replacement function that maps real entities to fictitious but type-consistent alternatives.
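A minimal sketch of the perturbation $\phi$ over $(s, r, o)$ triples follows; the fictitious entity pool and the example triples are invented for illustration and are not from the paper's datasets:

```python
import random

# Hypothetical pool of fictitious, type-consistent person entities (assumption).
FICTITIOUS_PEOPLE = ["Jaime Vasquez", "Mara Ellison", "Theo Brandt"]

def perturb_entity(triple, rng=random.Random(42)):
    """phi: replace the subject entity s with a fictitious alternative,
    keeping the relation r and object o unchanged."""
    s, r, o = triple
    s_tilde = rng.choice([p for p in FICTITIOUS_PEOPLE if p != s])
    return (s_tilde, r, o)

# Toy fine-tuning triples (illustrative only).
d_ft = [("Marie Curie", "won", "Nobel Prize"),
        ("Alan Turing", "proposed", "Turing test")]
d_perturbed = [perturb_entity(t) for t in d_ft]
for orig, pert in zip(d_ft, d_perturbed):
    print(orig, "->", pert)
```

Because only the subject token is swapped, the perturbed inputs are semantically adjacent to the training data while referring to entities the model has never seen, which is what the KL regularization below exploits.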
We then incorporate a KL-divergence-based regularization term, computed on the perturbed dataset $\tilde { \mathcal { D } }$ , into the loss objective in the sparse fine-tuning process. This aims to encourage the model to adapt specifically to the fine-tuning data while minimizing unintended representational shifts in its behavior on semantically adjacent inputs, thereby preserving its original ignorance awareness.
Concretely, the regularization minimizes the KL-divergence between the output distributions of the original base model and the fine-tuned model on the perturbed dataset $\tilde { \mathcal { D } }$ . Let $p _ { \mathrm { b a s e } } ( y \mid \tilde { x } )$ and $p _ { \mathrm { S E A T } } ( y \mid \tilde { x } )$ denote the predictive distributions of the base model and the SEAT fine-tuned model, respectively. The KL-regularization term is defined as:
$$
\mathcal { L } _ { \mathrm { K L } } = \mathbb { E } _ { \tilde { \boldsymbol { x } } \in \tilde { \mathcal { D } } } \left[ \mathrm { K L } \left( p _ { \mathrm { b a s e } } ( \boldsymbol { y } \mid \tilde { \boldsymbol { x } } ) \parallel p _ { \mathrm { S E A T } } ( \boldsymbol { y } \mid \tilde { \boldsymbol { x } } ) \right) \right] .
$$
The overall loss function is then defined as: $\mathcal { L } _ { \sf S E A T } = \mathcal { L } _ { \sf F T } + \alpha \mathcal { L } _ { \sf K L }$ , where $\alpha$ is the coefficient controlling the strength of the regularization term.
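The combined objective can be illustrated on toy categorical distributions standing in for next-token distributions; the probabilities, cross-entropy value, and $\alpha$ below are illustrative, not from the paper:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over the same support."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def seat_loss(ce_loss, p_base_list, p_seat_list, alpha=0.5):
    """L_SEAT = L_FT + alpha * E_{x~D_tilde}[ KL(p_base || p_seat) ]."""
    l_kl = sum(kl_divergence(pb, ps) for pb, ps in zip(p_base_list, p_seat_list))
    l_kl /= len(p_base_list)
    return ce_loss + alpha * l_kl

# Toy next-token distributions on two perturbed inputs (illustrative only).
p_base = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
p_seat = [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2]]
print(round(seat_loss(ce_loss=1.25, p_base_list=p_base, p_seat_list=p_seat), 4))
```

The penalty vanishes only when the fine-tuned model matches the base model's behavior on the perturbed inputs, so minimizing $\mathcal { L } _ { \sf S E A T }$ trades off fitting $\mathcal { D } _ { f t }$ against preserving the base model's responses to unknown neighboring entities.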
It is worth noting that while we use cross-entropy as the primary loss in our experiments, SEAT is compatible with other loss functions. Furthermore, as shown in the ablation study (§5.1), both sparse training and the novel entity perturbation strategy are indispensable elements. Additionally, please refer to Appendix A for discussions on related work.
# 4 Experiment
We propose SEAT as a novel and robust approach for fine-tuning LLMs. In this section, we empirically evaluate its performance by addressing the following research questions: (RQ1) Does SEAT effectively preserve ignorance awareness while maintaining fine-tuning performance? (RQ2) Are both the sparsity and entity perturbation components necessary for its effectiveness?
# 4.1 Experimental Setup
We evaluate SEAT on two LLM synthetic benchmark datasets: TOFU (Maini et al., 2024) and PISTOL (Qiu et al., 2024), using two different base models, including Llama3-8B-instruct (Dubey et al., 2024) and Qwen2.5-7B-instruct (Yang et al., 2024). More details about the datasets and the evaluation metrics are included in the Appendix C.
It is worth noting that the problem identified by this paper is novel and, to the best of our knowledge, lacks directly comparable baseline solutions.
Table 2: Comparison of fine-tuning results. FT score reports ROUGE1 on the training set. IDK (Unve.) is the average IDK scores over the unverifiable dataset. For cross-dataset generalization, we also report IDK (Test): if the model is trained on the PISTOL dataset, IDK score is evaluated on TOFU, and vice versa.
Moreover, faithfully expressing ignorance is not a modular ability - it is typically integrated into the base model through comprehensive and complex safety alignment procedures prior to release, as is the case with the instruct models used in the evaluation. Consequently, it is infeasible to isolate this ability as a standalone ‘task adapter’ that can simply be re-applied to a fine-tuned model to restore its original behavior. Given this, we compare SEAT against both full fine-tuning and sparse fine-tuning to demonstrate its effectiveness as a more robust alternative to conventional fine-tuning approaches.
# 5 Results
Table 2 shows that SEAT successfully preserves the base model’s ability to faithfully express ignorance (as reflected by higher IDK scores on both the unverifiable and test datasets) while achieving perfect fine-tuning performance. In contrast, conventional fine-tuning baselines fail to retain this capability, often hallucinating responses to unseen questions, which is reflected in their lower IDK scores.
It is important to note that the IDK score is computed as the cosine similarity between sentence-level embeddings of the fine-tuned model’s responses (which typically express ignorance in a coherent and context-aware manner) and a set of reference responses (which focus solely on expressing ignorance) (see Appendix C.2). Due to this design, an IDK score around 0.65 represents an effective upper bound for a perfect expression of ignorance, while scores below 0.5 typically reflect a lack of refusal (see examples in Table 1). As shown in Table 2, SEAT consistently approaches this upper bound across both unverifiable and synthetic evaluation settings, clearly demonstrating its superiority in preserving calibrated ignorance.
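The score computation can be illustrated with toy vectors standing in for real sentence embeddings; the vectors below are invented for illustration and are not the embeddings used in the paper:

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def idk_score(response_vec, reference_vecs):
    """Mean cosine similarity between a response embedding and a set of
    reference 'I don't know' embeddings (toy vectors stand in for real
    sentence embeddings)."""
    sims = [cosine_similarity(response_vec, ref) for ref in reference_vecs]
    return sum(sims) / len(sims)

refs = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3]]   # pure refusal references
refusal = [0.8, 0.1, 0.4]                   # coherent, context-aware refusal
hallucination = [0.1, 0.9, 0.0]             # confident wrong answer
print(round(idk_score(refusal, refs), 3), round(idk_score(hallucination, refs), 3))
```

A context-aware refusal lands near but below perfect similarity with the pure-refusal references, which is why a score around the upper bound, rather than exactly 1.0, indicates well-preserved ignorance awareness.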
Table 3: Ablation study results (fine-tuning Llama3-8Binstruct on PISTOL dataset).
In addition, improved retention of ignorance awareness is also evident in the PCA visualization in Figure 1(e). Compared to full, LoRA and sparse fine-tunings, the activations of the unseen dataset remain more distant from those of the factual dataset after fine-tuning along the principal components of the unverifiable dataset, indicating that SEAT better preserves the base model’s original representation space.
# 5.1 Ablation Study
We also conduct two ablation studies to evaluate the necessity of the two components of SEAT. First, we examine the impact of sparse training in constraining activation displacement (isolating sparse training by comparing SEAT with Full $\mathrm { F T + K L }$ with EP). Second, we demonstrate that the KL-divergence-based regularization term is effective only when used in conjunction with our entity perturbation strategy (isolating EP by comparing SEAT with sparse $\mathrm { F T + K L }$ without EP).
Table 3 shows that SEAT clearly outperforms its variants that isolate either sparse training or entity perturbation. This confirms the complementary and essential roles of both components in preserving the base model’s capability for ignorance awareness.

Abstract: Existing work on mitigating catastrophic forgetting in large language model (LLM) fine-tuning has primarily focused on preserving specific data or tasks, while critically overlooking the degradation of essential capabilities instilled through safety alignment, particularly the model's ability to faithfully express ignorance. In this work, we show that this capability is significantly degraded during conventional fine-tuning, leading to undesired behaviors such as hallucinations. To address this novel but highly practical problem, we propose SEAT, a simple and effective fine-tuning approach that preserves both fine-tuning performance and the model's inherent ability to acknowledge its ignorance. SEAT integrates two key components: (1) sparse training that constrains activation drift, and (2) a novel entity perturbation method with KL-divergence regularization, designed to counter knowledge entanglement. Experimental results demonstrate that SEAT significantly outperforms baselines in preserving ignorance awareness while retaining fine-tuning performance, offering a more robust solution for LLM fine-tuning.
"cs.AI"
] |
# 1. Introduction
The classic PAC (Probably Approximately Correct) theory (Vapnik and Chervonenkis, 1974; Valiant, 1984) focuses on understanding the best worst-case (uniform) learning rate of a learning algorithm over all data distributions. Due to its distribution-free nature, the PAC framework fails to capture the distribution-dependent rates of learning hypothesis classes, which are possibly faster than the uniform learning rates (Cohn and Tesauro, 1990, 1992). From a practical perspective, the distribution for data generation is typically fixed in real-world learning problems and the collected data is rarely worst-case; the PAC framework is therefore too pessimistic to explain practical machine learning performance. Universal learning, a distribution-dependent framework that helps to understand machine learning beyond the classic PAC setting, has been proposed by Bousquet et al. (2021) and actively studied recently (Bousquet et al., 2023; Hanneke et al., 2022, 2023; Attias et al., 2024; Hanneke and Xu, 2024; Hanneke et al., 2024a). The universal learning model adopts a setting where the data distribution is fixed and the performance of a learning algorithm is measured by its “learning curve”, i.e., the decay of the expected error as a function of the input sample size, and such a rate of decay is called a universal rate. Indeed, Bousquet et al. (2021) showed that for binary classification in the realizable setting, the optimal universal rates are captured by a trichotomy: every concept class $\mathcal { H }$ has a universal rate that is either exponential, linear, or arbitrarily slow. Compared to the well-known dichotomy of the optimal uniform rates: every concept class $\mathcal { H }$ has a uniform rate that is either linear ${ \mathrm { V C } } ( \mathcal { H } ) / n$ or “bounded away from zero”, this gives the impression that universal rates may differ substantially from the uniform rates.
In supervised learning, the celebrated empirical risk minimization (ERM) principle (Vapnik, 1998) stands at the center of many successful learning algorithms that seek to minimize the average error over the training data. In practice, ERM-based algorithms have been innovatively designed and widely applied in different areas of machine learning. For example, most successful applications of deep neural networks in fields such as computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and reinforcement learning (Mnih et al., 2015) have their models trained to minimize the empirical error, leveraging those renowned optimization algorithms such as GD, SGD and Adam (Kingma and Ba, 2015). In learning theory, the ERM principle has also been shown to have fundamental importance in understanding the PAC learnability: a concept class is learnable if and only if it can be learned by any ERM algorithm.
While the role of ERM in the classic PAC theory has been very well understood, the topics of universal learning by ERM have remained underexplored. The recent work of (Hanneke and Xu, 2024) studied the universal rates of ERM for binary classification problem in the realizable setting. They showed that the universal rates of ERM are captured by a tetrachotomy: every concept class that is learnable by ERM has a universal rate being either $e ^ { - n }$ , $1 / n$ , $\log ( n ) / n$ , or arbitrarily slow. The realizable case is indeed an idealistic scenario where a perfect hypothesis is assumed to exist, i.e., $\begin{array} { r } { \operatorname* { i n f } _ { h \in \mathcal { H } } \operatorname { e r } _ { P } ( h ) = 0 } \end{array}$ . However, in real-world machine learning applications, the ground-truth models are often complicated and unknown to practitioners. These considerations motivate us to study the universal rates of ERM in the agnostic setting, a more realistic and applicable situation where the true concept may not be in the hypothesis class, i.e., $\begin{array} { r } { \operatorname* { i n f } _ { h \in { \mathcal { H } } } \operatorname { e r } _ { P } ( h ) > 0 } \end{array}$ , and the goal is to find a hypothesis being competitive with the best hypothesis (in class). In this paper, we aim to answer the following fundamental question:
Question 1 Given a concept class $\mathcal { H }$ , what are the possible rates at which $\mathcal { H }$ can be agnostically universally learned by ERM?
# 1.1. Notations and preliminaries
Following the classical setup of statistical learning, we consider a binary classification problem with an instance space $\mathcal{X}$ and a concept class $\mathcal{H} \subseteq \{0,1\}^{\mathcal{X}}$. Let $h : \mathcal{X} \to \{0,1\}$ be a classifier. Given a probability distribution $P$ on $\mathcal{X} \times \{0,1\}$, the error rate of $h$ is defined as $\mathrm{er}_P(h) := P((x,y) \in \mathcal{X} \times \{0,1\} : h(x) \neq y)$. Given a dataset $S_n := \{(x_i, y_i)\}_{i=1}^n \in (\mathcal{X} \times \{0,1\})^n$, the empirical error rate of $h$ is defined as $\hat{\mathrm{er}}_{S_n}(h) := \frac{1}{n} \sum_{i=1}^n \mathbb{1}(h(x_i) \neq y_i)$. For a data distribution $P$ and an integer $n$, we denote by $S_n := \{(x_i, y_i)\}_{i=1}^n \sim P^n$ an i.i.d. $P$-distributed dataset. Recall that a distribution $P$ is called realizable with respect to $\mathcal{H}$ if it satisfies $\operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_P(h) = 0$. Note that for realizable learning, an ERM learner is any learning algorithm that outputs a sample-consistent classifier, that is, a classifier in the sample-induced version space (Mitchell, 1977). In this paper, we consider instead the often more realistic setting of agnostic learning, where $\operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_P(h) > 0$.
In the agnostic setting, an ERM algorithm is any learning algorithm that outputs a hypothesis achieving the best performance on the training data (breaking ties arbitrarily), i.e., $\hat{h}_n = \mathrm{ERM}(S_n) := \arg\operatorname*{min}_{h \in \mathcal{H}} \hat{\mathrm{er}}_{S_n}(h)$. For simplicity, we conflate the ERM learner $\hat{h}_n$ with the hypothesis it returns throughout the paper.
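To make the ERM rule concrete, here is a minimal sketch (our own illustration, not code from the paper; the threshold class, sample size, and noise level are arbitrary choices):

```python
import random

def empirical_error(h, sample):
    """Empirical error rate of h: the fraction of mislabeled examples."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def erm(hypotheses, sample):
    """An ERM learner over a finite class: argmin of empirical error
    (ties broken by list order, one arbitrary tie-breaking rule)."""
    return min(hypotheses, key=lambda h: empirical_error(h, sample))

# Toy finite class: threshold classifiers h_t(x) = 1(x >= t).
H = [lambda x, t=t: int(x >= t) for t in (0.25, 0.5, 0.75)]

# Noisy sample whose best-in-class threshold is 0.5 (10% label noise).
rng = random.Random(0)
sample = [(x, int(x >= 0.5) if rng.random() < 0.9 else 1 - int(x >= 0.5))
          for x in (rng.random() for _ in range(200))]

h_hat = erm(H, sample)
print(empirical_error(h_hat, sample))  # typically close to the 0.1 noise level
```

Since the labels are noisy, no hypothesis achieves zero empirical error; the ERM output simply minimizes it, which is exactly the agnostic rule above.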
In the realizable setting, PAC learning aims to achieve $\mathrm{er}_P(\hat{h}_n) \leq \epsilon$ for an error $\epsilon$ going to 0 as fast as possible with $n$, while universal learning focuses on the rate of decay of the so-called learning curve, that is, the decay of the expected error rate $\mathbb{E}[\mathrm{er}_P(\hat{h}_n)]$ as a function of the sample size $n$. In the agnostic setting, the goal of PAC learning is instead to guarantee that the excess risk satisfies $\mathrm{er}_P(\hat{h}_n) - \operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_P(h) \leq \epsilon$ for $\epsilon$ going to 0 as fast as possible with $n$. Therefore, for universal learning, it is natural to extend the notion of learning curve to the decay of the expected excess risk as a function of the sample size $n$. Concretely, we define the expected excess risk as follows.
Definition 1 (Excess risk) Let $\mathcal { H }$ be a concept class, and let $\{ \hat { h } _ { n } \} _ { n \in \mathbb { N } }$ be the output of an ERM algorithm. For any distribution $P$ over ${ \mathcal { X } } \times \{ 0 , 1 \}$ and data $S _ { n } : = \{ ( x _ { i } , y _ { i } ) \} _ { i = 1 } ^ { n } \sim P ^ { n }$ , we define its (expected) excess risk as
$$
\mathcal { E } ( n , P ) : = \mathbb { E } \left[ e r _ { P } ( \hat { h } _ { n } ) - \operatorname* { i n f } _ { h \in \mathcal { H } } e r _ { P } ( h ) \right] .
$$
Moreover, we say that a distribution $\underline{P}$ is centered at $\underline{h^*}$ for some $h^* : \mathcal{X} \to \{0,1\}$ if it satisfies $er_P(h^*) = \operatorname*{inf}_{h \in \mathcal{H}} er_P(h)$ and also $\operatorname*{inf}_{h \in \mathcal{H}} P_{\mathcal{X}}(x : h(x) \neq h^*(x)) = 0$, and then $h^*$ is called a target function of the learning problem.
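The decay of $\mathcal{E}(n, P)$ can be estimated by simulation. The following sketch (our own illustration; the three-element class, the distribution, and the hand-computed error rates are assumptions, not from the paper) uses first-minimizer tie-breaking, which is one valid ERM rule:

```python
import random

# Three hypotheses on X = {0, 1}: all-zeros, identity, all-ones.
H = [lambda x: 0, lambda x: x, lambda x: 1]
TRUE_ERR = [0.5, 0.2, 0.5]   # er_P(h) computed by hand for P below
BEST = min(TRUE_ERR)         # inf_{h in H} er_P(h) = 0.2

def draw(n, rng):
    """n i.i.d. samples from P: x uniform on {0,1}, y = x with probability 0.8."""
    return [(x, x if rng.random() < 0.8 else 1 - x)
            for x in (rng.randrange(2) for _ in range(n))]

def excess_risk(n, trials=2000, seed=1):
    """Monte Carlo estimate of E[er_P(h_hat_n)] - inf_h er_P(h)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = draw(n, rng)
        emp = [sum(h(x) != y for x, y in s) / n for h in H]
        total += TRUE_ERR[emp.index(min(emp))] - BEST  # first-minimizer ERM
    return total / trials

# The estimate shrinks as n grows (exponentially fast here, since |H| is finite).
print(excess_risk(5), excess_risk(100))
```

For this finite class the excess risk is positive only when the ERM output is a suboptimal hypothesis, an event whose probability vanishes exponentially in $n$, consistent with the finite-class case of Theorem 5.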
We underline that a target function may not be in the concept class, which is standard for agnostic learning, and that a distribution $P$ can have multiple target functions (Example 1). With this setup in place, we are now able to define the problem of agnostic universal learning by ERM. Following Hanneke and Xu (2024), we extend the definition from the realizable case.
Definition 2 (Agnostic universal learning by ERM) Let $\mathcal{H}$ be a concept class and $R(n) \to 0$ be a rate function. We say
• $\mathcal { H }$ is agnostically universally learnable at rate R by ERM, if for every distribution $P$ , there exist parameters $C , c > 0$ such that for every ERM algorithm, its excess risk satisfies $\mathcal { E } ( n , P ) \leq$ $C R ( c n )$ , for all ${ \boldsymbol { n } } \in \mathbb { N }$ .
• $\mathcal { H }$ is not agnostically universally learnable at rate faster than R by ERM, if there exists a distribution $P$ and parameters $C , c > 0$ such that there is an ERM algorithm satisfying $\mathcal { E } ( n , P ) \geq$ $C R ( c n )$ , for infinitely many $n \in \mathbb { N }$ .
• $\mathcal { H }$ is agnostically universally learnable with exact rate R by ERM, if $\mathcal { H }$ is agnostically universally learnable at rate $R$ by ERM, and is not agnostically universally learnable at rate faster than $R$ by ERM.
• $\mathcal{H}$ requires at least arbitrarily slow rates to be agnostically universally learned by ERM, if for any rate function $R(n) \to 0$, $\mathcal{H}$ is not agnostically universally learnable at rate faster than $R$ by ERM.
We emphasize that a crucial difference between Definition 2 and the PAC learning is that here the constants $C , c > 0$ are allowed to depend on the distribution $P$ . Moreover, it is straightforward from the definition that we are basically considering the worst-case ERM here. The following extensions are required for presenting our results.
Definition 3 Following the notations in Definition 2, we say that $\mathcal{H}$ is

• agnostically universally learnable at rate $o(n^{-1/2})$ by ERM, if for every distribution $P$ and every ERM algorithm, $\mathcal{E}(n, P) = o(n^{-1/2})$, for all $n \in \mathbb{N}$.

• not agnostically universally learnable at rate faster than $o(n^{-1/2})$ by ERM, if for any $T(n) = o(n^{-1/2})$, there exists a distribution $P$ such that there is an ERM algorithm satisfying $\mathcal{E}(n, P) \geq T(n)$, for infinitely many $n \in \mathbb{N}$.

• agnostically universally learnable with exact rate $o(n^{-1/2})$ by ERM, if the above two hold.
Here, $o(\cdot)$ is the standard asymptotic notation as $n \to \infty$, where the implied constants can be distribution-dependent.
Definition 4 For a class of distributions $\mathcal { P }$ , the agnostic universal learning of $\mathcal { H }$ under $\mathcal { P }$ by ERM is defined as the same as Definition 2 except considering only distributions $P \in { \mathcal { P } }$ instead of all the probability distributions $P$ over ${ \mathcal { X } } \times \{ 0 , 1 \}$ .
# 1.2. Related works
PAC learning by ERM. The performance of ERM algorithms in the PAC framework has been well understood. For the realizable case, the optimal sample complexity of ERM learners (Blumer et al., 1989; Vapnik and Chervonenkis, 1974) is $\mathcal{M}_{\mathrm{ERM}}^{\mathcal{H}}(\epsilon, \delta) = \Theta((\mathrm{VC}(\mathcal{H}) \log(1/\epsilon) + \log(1/\delta))/\epsilon)$, resulting in the uniform rate $\mathrm{er}_P(\hat{h}_n) = \Theta((\mathrm{VC}(\mathcal{H}) \log(n/\mathrm{VC}(\mathcal{H})) + \log(1/\delta))/n)$, which is suboptimal due to an unavoidable logarithmic factor (Auer and Ortner, 2007). Indeed, it has been proved that this uniform rate is the best achievable for any proper learner (Haussler et al., 1994; Simon, 2015), whereas there are improper learners that can achieve a rate of $\Theta((\mathrm{VC}(\mathcal{H}) + \log(1/\delta))/n)$ (Hanneke, 2016; Aden-Ali et al., 2023; Larsen, 2023; Aden-Ali et al., 2024). However, for the agnostic case, ERM learners have the optimal sample complexity $\mathcal{M}_{\mathrm{ERM}}^{\mathcal{H},\mathrm{AG}}(\epsilon, \delta) = \Theta((\mathrm{VC}(\mathcal{H}) + \log(1/\delta))/\epsilon^2)$, thus guaranteeing $\mathrm{er}_P(\hat{h}_n) - \operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_P(h) = \Theta(\sqrt{(\mathrm{VC}(\mathcal{H}) + \log(1/\delta))/n})$ (Haussler, 1992), which is optimal for any learning algorithm, including improper learners. It is worth mentioning that this discrepancy in the optimality of the ERM rule between the two settings has been studied recently in the work of Hanneke et al. (2024b), where they showed that ERM is indeed sub-optimal when treating $\operatorname*{inf}_{h \in \mathcal{H}} \mathrm{er}_P(h)$ as a parameter of the rates.
Universal learning. While the standard PAC model has dominated learning theory, the fact that practical learning rates can be much faster than those described by PAC theory was not only observed in empirical experiments (Cohn and Tesauro, 1990, 1992) but also verified by some early theoretical works (Schuurmans, 1997; Koltchinskii and Beznosova, 2005; Audibert and Tsybakov, 2007; Pillaud-Vivien et al., 2018), where exponentially fast learning rates were guaranteed under specific model assumptions (e.g., for kernel methods and stochastic gradient descent). These findings motivate the development of alternative learning models that help to better understand the practice of machine learning. The property of universal consistency was first established by Stone (1977) and later generalized by Hanneke (2021), establishing the existence of universally consistent learning algorithms in any separable metric space. The work of Benedek and Itai (1988) considered a relaxation of the PAC model that lies in between the uniform and the universal settings, called nonuniform learning, where the learning rate may depend on the target concept but is still uniform over marginal distributions. The work of van Handel (2013) studied the uniform convergence property from a universal perspective and gave a combinatorial characterization of the universal Glivenko-Cantelli property (Definition 8). The universal learning framework itself was only recently formalized by Bousquet et al. (2021), along with a complete theory of the optimal universal rates. After that, Bousquet et al. (2023) carried out a fine-grained analysis of the "distribution-free tail" of the universal learning curves by characterizing the optimal constant factor. As generalizations, Kalavasis et al. (2022); Hanneke et al. (2023, 2022, 2024a) studied the universal rates for other settings including multiclass classification, active learning, interactive learning, etc.
The most relevant work to ours is Hanneke and Xu (2024), who studied the universal rates of ERM for the binary classification problem in the realizable setting.
# 2. Main Results
In this section, we summarize the main results of this paper as well as the related technical notions of complexity. In brief, we study both target-independent and target-dependent agnostic universal rates by ERM. Moreover, since the target-dependent result relies on certain ad-hoc conditions which lack intuition, we further propose to categorize a data distribution according to its Bayes-optimal classifier (Definition 10) and show that the corresponding Bayes-dependent universal rates are characterized by simple combinatorial structures. Further details on these results are discussed in Sections 3-5, along with the related technical analyses and proof sketches.
We start with target-independent agnostic universal rates. We reveal a fundamental trichotomy in the following Theorem 5, namely there are exactly three possibilities for the agnostic universal rates by ERM: being either exponential $( e ^ { - n } )$ , or super-root $( o ( n ^ { - 1 / 2 } ) )$ , or at least arbitrarily slow. Moreover, the characterization (that determines for each concept class which of the three categories it belongs to) simply consists of the cardinality and the VC dimension of the concept class.
Theorem 5 (Agnostic universal rates for ERM) For every concept class $\mathcal{H}$ with $|\mathcal{H}| \geq 3$,
• $\mathcal { H }$ is agnostically universally learnable by ERM with exact rate $e ^ { - n }$ if and only if $| \mathcal { H } | < \infty$ .
• $\mathcal { H }$ is agnostically universally learnable by ERM with exact rate $o ( n ^ { - 1 / 2 } )$ if and only if $| \mathcal { H } | =$ $\infty$ and $V C ( \mathcal { H } ) < \infty$ .
• $\mathcal { H }$ requires at least arbitrarily slow rates to be agnostically universally learned by ERM if and only if $V C ( \mathcal { H } ) = \infty$ .
It is worthwhile to mention the following technical aspects of the proof. Firstly, to show the upper bound of super-root rates $o(n^{-1/2})$, we prove a refined version of a classic uniform Bernstein inequality (Proposition 40), which improves a result of Vapnik and Chervonenkis (1974) by a logarithmic factor. In summary, its proof applies a combination of localization (Bartlett et al., 2004, 2005; Koltchinskii, 2006), a concentration inequality (Bousquet, 2002), and an entropy integral bound on the rate of uniform convergence (van der Vaart and Wellner, 1996; Giné and Koltchinskii, 2006; van der Vaart and Wellner, 2011) accounting for variances of loss differences, together with well-known bounds on the covering numbers of VC classes (Haussler, 1995). Secondly, to get the lower bounds for $o(n^{-1/2})$ and arbitrarily slow rates, the techniques are quite different from those used for ERM lower bounds in classic PAC theory. Concretely, the idea is to use the following equivalences:
Lemma 6 (Hanneke and Xu, 2024, Lemma 8) Any concept class $\mathcal { H }$ has an infinite eluder sequence (Definition 11) if and only if $| { \mathcal { H } } | = \infty$ .
Lemma 7 (Hanneke and Xu, 2024, Lemma 9) Any concept class $\mathcal { H }$ has an infinite VC-eluder sequence (Definition 12) if and only if $V C ( \mathcal { H } ) = \infty$ .
We construct carefully designed distributions supported on such infinite sequences to establish the lower bounds; this is unlike classic PAC theory, where lower-bound distributions are often constructed on finite sets.
Before proceeding to the target-dependent rates, we first introduce some relevant definitions and technical assumptions. Throughout this paper, we will often assume that a concept class satisfies the universal Glivenko-Cantelli property.
Definition 8 (Glivenko-Cantelli class, van Handel, 2013) Let $\mathcal { H }$ be a concept class on an instance space $\chi$ . Given a probability distribution $P$ on ${ \mathcal { X } } \times \{ 0 , 1 \}$ , let $\{ ( X _ { i } , Y _ { i } ) \} _ { i \geq 1 }$ be a sequence
Table 1: Comparison of the ERM universal rates between the realizable case and the agnostic case. The definition of a star-eluder sequence can be found in Appendix A.
Table 2: Comparison between the agnostic universal rates and the agnostic uniform rates of ERM.
of independently $P$ -distributed random samples. We say that $\mathcal { H }$ is a $\underline { { \boldsymbol { P } } }$ -Glivenko-Cantelli class if
$$
\operatorname*{sup}_{h \in \mathcal{H}} \Big| \hat{er}_{S_n}(h) - er_P(h) \Big| \overset{p}{\longrightarrow} 0, \quad \text{as } n \to \infty,
$$
where the convergence rate can be $P$ -dependent. We say $\mathcal { H }$ is a universal Glivenko-Cantelli (UGC) class if it is $P$ -Glivenko-Cantelli for every distribution $P$ .
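The convergence in Definition 8 can be observed empirically. A minimal sketch, assuming a small finite class of thresholds and a noise-free distribution (our own illustrative choices, not from the paper):

```python
import random

# A small finite class of thresholds h_t(x) = 1(x >= t) on a grid.
TS = [i / 20 for i in range(21)]

def true_err(t):
    """Under P: x uniform on [0,1], y = 1(x >= 0.5), so er_P(h_t) = |t - 0.5|."""
    return abs(t - 0.5)

def sup_deviation(n, rng):
    """sup over the class of |empirical error - true error| on an i.i.d. sample."""
    s = [(x, int(x >= 0.5)) for x in (rng.random() for _ in range(n))]
    return max(abs(sum(int(x >= t) != y for x, y in s) / n - true_err(t))
               for t in TS)

rng = random.Random(0)
for n in (10, 100, 1000):
    print(n, sup_deviation(n, rng))  # the supremum shrinks as n grows
```

Here the supremum of the empirical-to-true deviations tends to 0 in probability, which is exactly the Glivenko-Cantelli property for this (finite, hence trivially UGC) class.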
We remark that while a finite VC dimension is neither sufficient nor necessary to ensure the universal Glivenko-Cantelli property of a hypothesis class, a number of works (Vapnik and Chervonenkis, 1971; Dudley et al., 1991; Van Der Vaart and Wellner, 2000) have shown that under weak measurability conditions (e.g., image-admissible Suslin, universal separability), a finite VC dimension is in fact equivalent to the uniform Glivenko-Cantelli property, which of course implies the universal Glivenko-Cantelli property. We then introduce two technical target-dependent conditions.
Condition 1 For any distribution $P$ centered at $h ^ { * }$ , the following holds
$$
\operatorname*{inf}_{h \in \mathcal{H} : er_P(h) > er_P(h^*)} \left\{ er_P(h) - er_P(h^*) \right\} > 0. \quad \big(\text{define } \operatorname*{inf}_{\varnothing} \{\cdot\} = 1\big)
$$
Condition 2 For any distribution $P$ centered at $h ^ { * }$ , there exists $\epsilon _ { 0 } : = \epsilon _ { 0 } ( P ) > 0$ such that
$$
V C \left( \{ h \in \mathcal { H } : 0 < e r _ { P } ( h ) - e r _ { P } ( h ^ { * } ) \leq \epsilon _ { 0 } \} \right) < \infty .
$$
Further discussions about these conditions can be found in Section 4. Let us present a trichotomy capturing the target-dependent agnostic universal rates by ERM.
Theorem 9 (Target-dependent agnostic universal rates) For every UGC class $\mathcal{H}$ with $|\mathcal{H}| \geq 3$ and every classifier $h^*$, let $\mathcal{P}_{h^*}$ be the set of all distributions centered at $h^*$; then the following hold:

• $\mathcal{H}$ is agnostically universally learnable under $\mathcal{P}_{h^*}$ by ERM with exact rate $e^{-n}$ if and only if Condition 1 holds for $h^*$.

• $\mathcal{H}$ is agnostically universally learnable under $\mathcal{P}_{h^*}$ by ERM with exact rate $o(n^{-1/2})$ if and only if Condition 1 fails and Condition 2 holds for $h^*$.

• $\mathcal{H}$ requires at least arbitrarily slow rates to be agnostically universally learned under $\mathcal{P}_{h^*}$ by ERM if and only if Condition 2 fails for $h^*$.
We notice that, although Conditions 1 and 2 provide an "if and only if" characterization of the target-dependent universal rates, they are not based on simple combinatorial structures, and are thus not broadly applicable to general concept classes. A natural follow-up question is whether there is a better function-specific universal rates result with a complete characterization based on combinatorial structures. To address this limitation, we propose to categorize a distribution according to its Bayes-optimal classifier.
Definition 10 (Bayes-optimal classifier) A Bayes-optimal classifier with respect to a distribution $P$, denoted by $h_{Bayes}^*$, is defined to be a binary function such that $er_P(h_{Bayes}^*) = \operatorname*{inf}_{h : \mathcal{X} \to \{0,1\}} er_P(h)$. Moreover, let $\eta(x; P) := P(Y = 1 | X = x)$; then $h_{Bayes}^*(x) = \mathbb{1}(\eta(x; P) \geq 1/2)$, for all $x \in \mathcal{X}$.
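As a quick numerical illustration of Definition 10 on a finite support (the values of $\eta$ and the marginal are made up for the example):

```python
# Assumed conditional probabilities eta(x) = P(Y=1 | X=x) and marginal P_X,
# on a four-point support (illustrative numbers only).
eta = {0: 0.9, 1: 0.3, 2: 0.5, 3: 0.1}
p_x = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

# Definition 10: threshold eta at 1/2 (the tie at eta = 1/2 goes to label 1).
h_bayes = {x: int(eta[x] >= 0.5) for x in eta}

# The Bayes classifier errs at each x with probability min(eta, 1 - eta).
bayes_err = sum(p_x[x] * min(eta[x], 1 - eta[x]) for x in eta)
print(h_bayes, bayes_err)  # the minimum achievable error is 0.25 here
```

No classifier, inside or outside $\mathcal{H}$, can do better than this thresholded rule, which is why categorizing distributions by their Bayes-optimal classifier is a natural refinement.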
The following sequential structures, developed by Hanneke and Xu (2024), formalize the characterization of the Bayes-dependent agnostic universal rates.
Definition 11 (Eluder sequence) A (finite or infinite) data sequence $\{(x_1, y_1), (x_2, y_2), \ldots\} \in (\mathcal{X} \times \{0,1\})^{\infty}$ is called realizable (with respect to $\mathcal{H}$) if for every $n \in \mathbb{N}$, there exists $h_n \in \mathcal{H}$ such that $h_n(x_i) = y_i$ for all $i \in [n]$. Let $h$ be a classifier. We say that $\mathcal{H}$ has an infinite eluder sequence $\{(x_1, y_1), (x_2, y_2), \ldots\}$ centered at $\underline{h}$ if it is realizable and labelled by $h$, and for every integer $k \geq 1$, there exists $h_k \in \mathcal{H}$ such that $h_k(x_i) = y_i$ for all $i < k$ and $h_k(x_k) \neq y_k$.
Definition 12 (VC-eluder sequence) Let $S_n := \{(x_i, y_i)\}_{i=1}^n$ be a dataset. The version space (induced by $S_n$), denoted by $V_{S_n}(\mathcal{H})$ (or $V_n(\mathcal{H})$), is defined as $V_{S_n}(\mathcal{H}) := \{h \in \mathcal{H} : h(x_i) = y_i, \forall i \in [n]\}$. We say $\mathcal{H}$ has an infinite VC-eluder sequence $\{(x_1, y_1), (x_2, y_2), \ldots\}$ centered at $h$ if it is realizable and labelled by $h$, and for every integer $k \geq 1$, $\{x_{n_k+1}, \ldots, x_{n_k+k}\}$ is a shattered set of $V_{n_k}(\mathcal{H})$, where $\{n_k\}_{k \in \mathbb{N}}$ is a sequence of integers defined as $n_1 = 0$, $n_k := \binom{k}{2}$ for all $k > 1$.
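The eluder property of Definition 11 can be checked on finite prefixes. A sketch for the class of singletons $\{\mathbb{1}(x = i) : i \geq 1\}$ with the all-zeros center (a standard toy example; the prefix length is our arbitrary choice):

```python
# Class of singletons H = {1(x = i) : i >= 1}; the all-zeros function h* is a
# natural center (note h* need not belong to H). We verify the eluder property
# of Definition 11 on a finite prefix of the sequence (1,0), (2,0), (3,0), ...
def singleton(i):
    return lambda x: int(x == i)

K = 30
H = [singleton(i) for i in range(1, 2 * K)]
seq = [(i, 0) for i in range(1, K + 1)]  # labelled by h*(x) = 0

def is_eluder_prefix(H, seq):
    """For each k: some h in H agrees with seq[:k] yet disagrees at seq[k]."""
    for k in range(len(seq)):
        xk, yk = seq[k]
        if not any(h(xk) != yk and all(h(x) == y for x, y in seq[:k]) for h in H):
            return False
    return True

print(is_eluder_prefix(H, seq))  # True: every prefix can be eluded
```

Here $h_k = \mathbb{1}(x = k)$ witnesses the $k$-th eluder step: it agrees with the zero labels on $x_1, \ldots, x_{k-1}$ yet disagrees at $x_k$, so this class has an infinite eluder sequence centered at the all-zeros function.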
Our result is compact and provides a better concept-dependent characterization of universal rates, leveraging simple combinatorial structures rather than complex conditions.
Theorem 13 (Bayes-dependent agnostic universal rates) For every class $\mathcal { H }$ with $| \mathcal { H } | \ge 3$ and every classifier $h _ { B a y e s } ^ { * } ,$ , let $\mathcal { P } _ { h _ { B a y e s } ^ { * } }$ be the set of all distributions $P$ such that $h _ { B a y e s } ^ { * }$ is a Bayes-optimal classifier with respect to $P$ , then the following hold:
• $\mathcal{H}$ is agnostically universally learnable under $\mathcal{P}_{h_{Bayes}^*}$ by ERM with exact rate $e^{-n}$ if and only if $\mathcal{H}$ does not have an infinite eluder sequence centered at $h_{Bayes}^*$.
• $\mathcal{H}$ is agnostically universally learnable under $\mathcal{P}_{h_{Bayes}^*}$ by ERM with exact rate $o(n^{-1/2})$ if and only if $\mathcal{H}$ has an infinite eluder sequence but does not have an infinite VC-eluder sequence centered at $h_{Bayes}^*$.
• $\mathcal{H}$ requires at least arbitrarily slow rates to be agnostically universally learned under $\mathcal{P}_{h_{Bayes}^*}$ by ERM if and only if $\mathcal{H}$ has an infinite VC-eluder sequence centered at $h_{Bayes}^*$.
# 3. Target-independent rates
In this section, we will introduce the relevant results and their proof sketches for the target-independent universal rates. We will prove each bullet of Theorem 5 separately (Theorems 14, 18, 19). We point out that for each bullet, to prove the sufficiency, both a lower bound and an upper bound are required since we are proving an exact rate. All detailed proofs in this section are deferred to Appendix B.
Theorem 14 (Target-independent $e ^ { - n }$ exact rates) A concept class $\mathcal { H }$ is agnostically universally learnable with exact rate $e ^ { - n }$ by ERM if and only if $| { \mathcal { H } } | < \infty$ .
Proof [Proof sketch of Theorem 14] To prove the sufficiency, if $| \mathcal { H } | < \infty$ , we first have an upper bound from the following lemma
Lemma 15 Any finite concept class $\mathcal { H }$ is agnostically universally learnable at rate $e ^ { - n }$ by ERM.
The proof idea of Lemma 15 is as follows: when $|\mathcal{H}| < \infty$, Condition 1 holds with a constant gap $\epsilon_0$. We bound the excess risk of the worst-case ERM $\hat{h}_n$ by the probability that $|\hat{\mathrm{er}}_{S_n}(\hat{h}_n) - \mathrm{er}_P(\hat{h}_n)| \geq \epsilon_0$, and then an exponential rate follows from Hoeffding's inequality.
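In slightly more detail, the argument can be sketched as follows (our own reconstruction with loose constants, not the paper's exact bound): on the event that every $h \in \mathcal{H}$ has empirical error within $\epsilon_0/2$ of its true error, the gap $\epsilon_0$ from Condition 1 forces $\mathrm{er}_P(\hat{h}_n) = \mathrm{er}_P(h^*)$; otherwise the excess risk is at most 1. Hence

$$
\mathcal{E}(n, P) \;\leq\; \Pr\Big[ \operatorname*{sup}_{h \in \mathcal{H}} \big| \hat{\mathrm{er}}_{S_n}(h) - \mathrm{er}_P(h) \big| \geq \epsilon_0/2 \Big] \;\leq\; 2 |\mathcal{H}| \, e^{-n \epsilon_0^2 / 2},
$$

where the last inequality applies Hoeffding's inequality to each hypothesis together with a union bound over the $|\mathcal{H}|$ hypotheses.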
Moreover, since the realizable setting is a special case of the agnostic setting, we can get a lower bound on the rate from the following known result
Lemma 16 (Schuurmans, 1997) Given a class $\mathcal{H}$, for any learning algorithm $\hat{h}_n$, there exists a realizable distribution $P$ with respect to $\mathcal{H}$ such that $\mathbb{E}[er_P(\hat{h}_n)] \geq 2^{-(n+2)}$ for infinitely many $n$.
To prove the necessity, we assume to the contrary that $| { \mathcal { H } } | = \infty$ , then Lemma 6 implies that $\mathcal { H }$ has an infinite eluder sequence. A contradiction follows from the following lemma
Lemma 17 If $\mathcal { H }$ has an infinite eluder sequence centered at $h ^ { * }$ , then $\mathcal { H }$ is not agnostically universally learnable under $\mathcal { P } _ { h ^ { * } }$ at rate faster than $o ( n ^ { - 1 / 2 } )$ by ERM.
We prove Lemma 17 by designing a distribution supported on an existing infinite eluder sequence centered at the target function $h^*$. In order for $h^*$ to be the target function, the distribution has to assign decreasing probability masses along the eluder sequence. We then show that, given data generated from the constructed distribution, the worst-case ERM yields universal rates no faster than $o(n^{-1/2})$, by applying an anti-concentration inequality to bound the probability of the event that more incorrectly labeled examples are observed. ■
Theorem 18 (Target-independent $o ( n ^ { - 1 / 2 } )$ exact rates) A concept class $\mathcal { H }$ is agnostically universally learnable with exact rate $o ( n ^ { - 1 / 2 } )$ by ERM if and only if $| { \mathcal { H } } | = \infty$ and $V C ( \mathcal { H } ) < \infty$ .
Theorem 19 (Target-independent arbitrarily slow rates) A concept class $\mathcal { H }$ requires at least arbitrarily slow rates to be agnostically universally learned by ERM if and only if $V C ( \mathcal { H } ) = \infty$ .
Proof [Proof sketches of Theorems 18 and 19] We first prove Theorem 18. To prove the sufficiency, assume that $|\mathcal{H}| = \infty$ and $\mathrm{VC}(\mathcal{H}) < \infty$; the upper bound can then be derived from the following lemma
Lemma 20 Any VC class $\mathcal { H }$ is agnostically universally learnable at rate $o ( n ^ { - 1 / 2 } )$ by ERM.
For the proof of Lemma 20, we first utilize a refined version of a classic uniform Bernstein inequality (Proposition 40) to bound the excess risk of the worst-case ERM by $O(\sqrt{P_{\mathcal{X}}\{\hat{h}_n(x) \neq h^*(x)\}/n})$. Since $\mathcal{H}$ is totally bounded in the $L_1(P_{\mathcal{X}})$ pseudo-metric, $P_{\mathcal{X}}\{\hat{h}_n(x) \neq h^*(x)\}$ is decreasing as $n$ grows (Lemma 41). Finally, an $o(n^{-1/2})$ rate follows from a classic localization argument.
Moreover, the lower bound is from the previous Lemma 6 and Lemma 17. To show the necessity, we prove by contradiction. If $|\mathcal{H}| < \infty$, then Lemma 15 yields the contradiction. If $\mathrm{VC}(\mathcal{H}) = \infty$, Lemma 7 implies that $\mathcal{H}$ has an infinite VC-eluder sequence, and then a contradiction follows from the following lemma
Lemma 21 If H has an infinite VC-eluder sequence centered at $h ^ { * }$ , then $\mathcal { H }$ requires at least arbitrarily slow rates to be agnostically universally learned under $\mathcal { P } _ { h ^ { * } }$ by ERM.
The proof of Lemma 21 is similar to the proof of Lemma 17, and the key point is to construct a data distribution supported on an existing infinite VC-eluder sequence centered at the target function $h ^ { * }$ with decreasing probability masses assigned to the shattered sets along the sequence. Finally, it is straightforward that Theorem 19 holds based on Theorem 18 and Lemma 21.
# 4. Target-dependent rates
In this section, we give a proof sketch for the target-dependent exact universal rates (Theorem 9).
All the missing proofs in this section are deferred to Appendix C.
Target-dependent universal rates have been studied in the work of Hanneke and Xu (2024) under the realizable setting. Therein, the authors stated the results as "$h^*$ is (not) universally learnable at some rate $R$" for a target function $h^*$, which is indeed equivalent to our "$\mathcal{H}$ is (not) agnostically universally learnable at rate $R$ under every distribution $P$ centered at $h^*$". We leave the formalized definitions (Definitions 32 and 33) to Appendix A due to space limitations. In light of this, we will write the related lemmas in this section following either of the two forms. For the realizable setting, such a definition yields a perfect tetrachotomy characterized by certain well-defined combinatorial structures, namely the eluder sequence, star-eluder sequence, and VC-eluder sequence (see Theorem 2, Hanneke and Xu (2024)). However, for the agnostic case, the aforementioned sequences do not support such a compact theory. We start with a simple example which develops some initial intuition for what makes the agnostic case different.
# 4.1. Condition 1 and infinite eluder sequence
Example 1 (No centered infinite eluder sequence but no faster than $o(n^{-1/2})$ rates) Let $\mathcal{X} := \mathbb{N}$, let $h_1^*$ be defined as $h_1^*(x) := \mathbb{1}(x = 0)$, and let $h_2^*$ be defined as $h_2^*(x) := 1 - \mathbb{1}(x = 0)$. Furthermore, for any integer $i \geq 1$, we define $h_i := \mathbb{1}(x \in \{0, i\})$. Finally, we define the concept class to be $\mathcal{H} := \{h_1^*, h_2^*\} \cup \{h_i\}_{i \geq 1}$. We construct a distribution $P$ as follows:
$$
\begin{array} { r l } & { P _ { \mathcal X } ( x = 0 ) = P _ { \mathcal X } ( x \geq 1 ) = 1 / 2 ; } \\ & { P ( y = 1 | x = 0 ) = P ( y = 1 | x \geq 1 ) ; } \\ & { P ( y = 1 | x = i ) = 1 / 2 - \epsilon _ { i } , P ( y = 0 | x = i ) = 1 / 2 + \epsilon _ { i } , \forall i \geq 1 ; } \end{array}
$$
An interesting observation is: $er_P(h_1^*) = er_P(h_2^*) = 1/2 = \operatorname*{inf}_{h \in \mathcal{H}} er_P(h)$, and also $\operatorname*{inf}_{h \in \mathcal{H}} P_{\mathcal{X}}(x : h(x) \neq h_1^*(x)) = \operatorname*{inf}_{h \in \mathcal{H}} P_{\mathcal{X}}(x : h(x) \neq h_2^*(x)) = 0$, which implies that the constructed distribution $P$ is centered at both $h_1^*$ and $h_2^*$. However, it is clear that $\mathcal{H}$ has an infinite eluder sequence centered at $h_1^*$, but does not have an infinite eluder sequence centered at $h_2^*$. In other words, while aiming to learn $h_2^*$, an ERM algorithm may try to output $h_1^*$ instead (since $h_1^*$ is also an optimal function) and thus has universal rates no faster than $o(n^{-1/2})$, as proved in Lemma 17.
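The vanishing gaps in Example 1 can be checked numerically. The sketch below assumes the masses $P_{\mathcal{X}}(x = i) = 2^{-(i+1)}$ (so that $P_{\mathcal{X}}(x \geq 1) = 1/2$) and constant $\epsilon_i = 1/4$; these are our illustrative choices, since the example leaves them unspecified:

```python
# Numerical check of the vanishing gaps in Example 1, under the assumed
# (illustrative) choices P_X(x = i) = 2**-(i + 1) for i >= 1 and eps_i = 1/4.
def gap(i, eps=0.25):
    """er_P(h_i) - er_P(h_1^*) = 2 * P_X(x = i) * eps_i."""
    return 2 * 2.0 ** -(i + 1) * eps

gaps = [gap(i) for i in range(1, 20)]
print(min(gaps))  # the infimum over i tends to 0, so Condition 1 fails
```

The gaps are strictly positive yet have infimum 0, which is exactly the failure mode of Condition 1 discussed next.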
The above Example 1 implies that "the existence/nonexistence of an infinite eluder sequence in $\mathcal{H}$ centered at $h^*$" is not an "if and only if" characterization to distinguish between $e^{-n}$ and $o(n^{-1/2})$
ERM universal rates. Then, a naturally follow-up question is what is the desired equivalent characterization. Given a target function $h ^ { * }$ , we consider the aforementioned target-specified Condition 1, which basically says that there is a “gap” between the error rate of the target and the best concept in the class. We will show that $\mathcal { H }$ is agnostically universally learnable under $\mathcal { P } _ { h ^ { * } }$ at exponential rates if and only if Condition 1 holds for $h ^ { * }$ . Indeed, the sufficiency holds from the following lemma.
Lemma 22 Let $\mathcal { H }$ be any UGC concept class and $h ^ { * }$ be any classifier. If Condition 1 holds for $h ^ { * }$ , then $h ^ { * }$ is agnostically universally learnable at rate $e ^ { - n }$ by ERM.
Before proceeding to the necessity, we provide deeper insight into the relation between Condition 1 and “no infinite eluder sequence centered at $h ^ { * }$ ”. First, we can conclude that Condition 1 (holding for $h ^ { * }$ ) is stronger than “ $\mathcal { H }$ has no infinite eluder sequence centered at $h ^ { * }$ ”. On one hand, it guarantees that there is no infinite eluder sequence centered at $h ^ { * }$ , since otherwise the distribution we constructed in the proof of Lemma 17 would fail this condition. On the other hand, Example 1 reveals that they are inequivalent assumptions: there is no infinite eluder sequence centered at $h _ { 2 } ^ { * }$ , but Condition 1 fails for $h _ { 2 } ^ { * }$ , i.e., $\operatorname* { i n f } _ { h \in \mathcal { H } : \mathrm { e r } _ { P } ( h ) > \mathrm { e r } _ { P } ( h _ { 2 } ^ { * } ) } \{ \mathrm { e r } _ { P } ( h ) - \mathrm { e r } _ { P } ( h _ { 2 } ^ { * } ) \} = \operatorname* { i n f } _ { i \geq 1 } \{ \mathrm { e r } _ { P } ( h _ { i } ) - \mathrm { e r } _ { P } ( h _ { 1 } ^ { * } ) \} = \operatorname* { i n f } _ { i \geq 1 } \{ 2 P _ { \mathcal { X } } ( x = i ) \epsilon _ { i } \} = 0$ . Moreover, Condition 1 is weaker than “ $\mathcal { H }$ does not have any infinite eluder sequence”. This can be verified easily by finding an infinite class $\mathcal { H }$ such that Condition 1 holds for some target function $h ^ { * }$ . We give one such example below.
Example 2 (Singletons on $\mathbb { N }$ ) Let $\mathcal { X } = \mathbb { N }$ and $\mathcal { H } _ { s i n g l e t o n , \mathbb { N } } : = \{ h _ { t } : = \mathbb { 1 } ( x = t ) \mid t \in \mathcal { X } \}$ be the class of all singletons on the natural numbers. We consider $\mathcal { H } = \mathcal { H } _ { s i n g l e t o n , \mathbb { N } } \cup \{ h _ { \text{all-1's} } \}$ and the target function $h _ { \text{all-1's} }$ . Since $| \mathcal { H } | = | \mathcal { H } _ { s i n g l e t o n , \mathbb { N } } | = \infty$ , it must have an infinite eluder sequence (centered at $h _ { \text{all-0's} }$ ) according to Lemma 6. However, it is straightforward that Condition 1 holds for $h _ { \text{all-1's} }$ .
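For the singleton class, the eluder sequence centered at the all-0's function can be made explicit. Assuming the standard definition of an eluder sequence centered at $h$ (for each $n$ there is a concept agreeing with $h$ on all earlier points but disagreeing at $x_n$), a minimal sketch checks the witnesses directly:

```python
# Sketch of the infinite eluder sequence in Example 2, centered at the all-0's
# function: for the points x_1, x_2, ... = 1, 2, ..., the singleton h_{x_n}
# agrees with all-0's on x_1, ..., x_{n-1} but disagrees at x_n.
def singleton(t):
    """The concept h_t = 1(x = t)."""
    return lambda x: 1 if x == t else 0

h_all0 = lambda x: 0          # the center: the all-0's function
xs = list(range(1, 21))       # a finite prefix of the eluder sequence

for n, xn in enumerate(xs):
    w = singleton(xn)                                 # witness concept in H
    assert all(w(x) == h_all0(x) for x in xs[:n])     # agrees on earlier points
    assert w(xn) != h_all0(xn)                        # disagrees at x_n
print("every prefix admits a witness")
```

No analogous witnesses exist centered at the all-1's function: a singleton cannot agree with the constant 1 on two distinct earlier points, which is consistent with Condition 1 holding for $h_{\text{all-1's}}$.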
This implies that, while $\mathcal { H }$ is not universally learnable at exponential rate under every distribution when $| { \mathcal { H } } | = \infty$ , it can be learned exponentially fast under a subclass of distributions $\mathcal { P } _ { h ^ { * } }$ when $h ^ { * }$ satisfies some good property. Indeed, if we interpret Condition 1 as a distribution-specific condition, that is, Condition 1 holds for a distribution $P$ if $\operatorname* { i n f } _ { h \in \mathcal { H } : \operatorname { e r } _ { P } ( h ) > \operatorname* { i n f } _ { h ^ { \prime } \in \mathcal { H } } \operatorname { e r } _ { P } ( h ^ { \prime } ) } \{ \operatorname { e r } _ { P } ( h ) - \operatorname* { i n f } _ { h ^ { \prime } \in { \mathcal { H } } } \operatorname { e r } _ { P } ( h ^ { \prime } ) \} > 0$ , then Condition 1 (holding for $P$ ) is equivalent to “ $\mathcal { H }$ does not have any infinite eluder sequence centered at any target function of $P$ ”. Since $P$ can have multiple target functions, Condition 1 can be stronger than “no infinite eluder sequence centered at $h ^ { * }$ ”. Finally, the following two lemmas formalize the above analysis and will be helpful in the proofs.
Lemma 23 If $\mathcal { H }$ does not have an infinite eluder sequence centered at $h ^ { * }$ , then for any distribution $P$ centered at $h ^ { * }$ with associated marginal distribution $P _ { \mathcal { X } }$ , the following hold:
(1) There exists $h \in { \mathcal { H } }$ such that $P _ { \mathcal { X } } \{ x : h ( x ) \neq h ^ { * } ( x ) \} = 0$ .
(2) $\operatorname* { i n f } _ { h \in \mathcal { H } : P _ { \mathcal { X } } \{ x : h ( x ) \neq h ^ { * } ( x ) \} > 0 } P _ { \mathcal { X } } \{ x : h ( x ) \neq h ^ { * } ( x ) \} > 0$ .
It is easy to check that in Example 1, the above (2) holds for $h _ { 2 } ^ { * }$ while Condition 1 fails. Intuitively, one may think that “no infinite eluder sequence”, being a combinatorial assumption, cannot yield a guarantee on the joint distribution $P$ , but only on the marginal distribution $P _ { \mathcal { X } }$ . This is never a problem in the realizable case, since the target is always unique there.
Another interesting observation is that (2) in Lemma 23 fails for $h _ { 1 } ^ { * }$ , which is the bad target for which exponential learning rates fail. Note that if the failure of Condition 1 guarantees such a bad target, then the necessity follows. Indeed, we can consider Condition 1 as the aforementioned distribution-specific condition and then show that if it fails for some distribution $P$ , then there must exist some target $h ^ { * }$ (with respect to $P$ ) such that (2) in Lemma 23 also fails.
Lemma 24 Let $\mathcal { H }$ be any concept class and $P$ be any distribution such that $\mathcal { H }$ is totally bounded in the $L _ { 1 } ( P _ { \mathcal { X } } )$ pseudo-metric. If Condition 1 fails for $P$ , i.e., the following holds:
$$
\operatorname* { i n f } _ { \substack { h \in \mathcal { H } : e r _ { P } ( h ) > \operatorname* { i n f } _ { h ^ { \prime } \in \mathcal { H } } e r _ { P } ( h ^ { \prime } ) } } \left\{ e r _ { P } ( h ) - \operatorname* { i n f } _ { h ^ { \prime } \in \mathcal { H } } e r _ { P } ( h ^ { \prime } ) \right\} = 0 ,
$$
then there exists $h ^ { * }$ such that $\begin{array} { r } { e r _ { P } ( h ^ { * } ) = \operatorname* { i n f } _ { h ^ { \prime } \in \mathcal { H } } e r _ { P } ( h ^ { \prime } ) } \end{array}$ and
$$
\operatorname* { i n f } _ { \substack { h \in \mathcal { H } : P _ { \mathcal { X } } \{ x : h ( x ) \neq h ^ { * } ( x ) \} > 0 } } P _ { \mathcal { X } } \{ x : h ( x ) \neq h ^ { * } ( x ) \} = 0 .
$$
In the proof of Lemma 23, we actually show that if (2) therein fails for some optimal function $h ^ { * }$ under distribution $P$ , then there exists an infinite eluder sequence centered at that $h ^ { * }$ . Moreover, Lemma 17 yields an $o \big ( n ^ { - 1 / 2 } \big )$ lower bound in the presence of a centered infinite eluder sequence. Hence, Lemma 24 tells us that Condition 1 is not only sufficient for a target-dependent $e ^ { - n }$ upper bound, as shown in Lemma 22, but also necessary. Altogether, we have the following theorem.
Theorem 25 (Target-dependent $e ^ { - n }$ exact rates) Let $\mathcal { H }$ be any UGC concept class and $h ^ { * }$ be any classifier. Then $h ^ { * }$ is agnostically universally learnable at exact rate $e ^ { - n }$ by ERM if and only if Condition 1 holds for $h ^ { * }$ .
Proof [Proof of Theorem 25] To prove the sufficiency: if Condition 1 holds for $h ^ { * }$ , we know that $h ^ { * }$ is agnostically universally learnable at rate $e ^ { - n }$ by ERM according to Lemma 22, and a matching $e ^ { - n }$ lower bound for ERM follows from Lemma 16. Hence, the sufficiency holds with exponential exact rates. To prove the necessity, we assume to the contrary that Condition 1 fails for $h ^ { * }$ . By Lemma 24 and then Lemma 23, we know that there exists an infinite eluder sequence centered at $h ^ { * }$ . Then Lemma 17 yields that $h ^ { * }$ is not agnostically universally learnable at rate faster than $o ( n ^ { - 1 / 2 } )$ by ERM. This leads to a contradiction and completes the proof of the necessity.
# 4.2. Condition 2 and infinite VC-eluder sequence
Next, we discuss the target-dependent $o ( n ^ { - 1 / 2 } )$ exact rate. Recall that “ $\mathcal { H }$ has no infinite eluder sequence centered at $h ^ { * }$ ” is weaker than Condition 1 (holding for $h ^ { * }$ ), which is the fundamental reason why “no infinite eluder sequence” is not the correct characterization of the target-dependent exponential rates. As an analogue of Example 1, it is not hard to construct an example of $( { \mathcal { H } } , P )$ , where $P$ has two target functions such that $\mathcal { H }$ has no infinite VC-eluder sequence centered at one but has an infinite VC-eluder sequence centered at the other. Such an example indicates that “ $\mathcal { H }$ has no infinite VC-eluder sequence centered at $h ^ { * }$ ” is weaker than Condition 2 (holding for $h ^ { * }$ ), and is not the correct characterization of the target-dependent super-root rate. Instead, we find that $\mathcal { H }$ is agnostically universally learnable under $\mathcal { P } _ { h ^ { * } }$ at rate $o \big ( n ^ { - 1 / 2 } \big )$ by ERM if and only if the target-specified Condition 2 holds for $h ^ { * }$ . The following lemma states that Condition 2 is both sufficient and necessary for a super-root upper bound.
Lemma 26 Let $\mathcal { H }$ be any UGC concept class and $h ^ { * }$ be any classifier. Then $h ^ { * }$ is agnostically universally learnable at rate $o ( n ^ { - 1 / 2 } )$ by ERM if and only if Condition 2 holds for $h ^ { * }$ .
Given a target function $h ^ { * }$ and any distribution $P$ centered at $h ^ { * }$ , let us define the “ $\epsilon$ -ball” (of $\mathcal { H }$ centered at $h ^ { * }$ ) as $\mathcal { H } ( \epsilon ; P , h ^ { * } ) : = \{ h \in \mathcal { H } : 0 < \mathrm { e r } _ { P } ( h ) - \mathrm { e r } _ { P } ( h ^ { * } ) \leq \epsilon \}$ . Condition 2 basically says that for any distribution $P$ centered at $h ^ { * }$ , the $\epsilon$ -ball $\mathcal { H } ( \epsilon ; P , h ^ { * } )$ has finite VC dimension for a sufficiently small radius $\epsilon ( P )$ . However, “no infinite VC-eluder sequence centered at $h ^ { * }$ ” merely provides marginal information, i.e., for any distribution $P$ centered at $h ^ { * }$ , there exists $\epsilon : = \epsilon ( P ) > 0$ such that $\mathcal { H } ( \epsilon ; P _ { \mathcal { X } } , h ^ { * } ) : = \{ h \in \mathcal { H } : 0 < P _ { \mathcal { X } } \{ x : h ( x ) \neq h ^ { * } ( x ) \} \le \epsilon \}$ has finite VC dimension. For an analogy, we can interpret Condition 1 as saying that the $\epsilon$ -ball $\mathcal { H } ( \epsilon ; P , h ^ { * } )$ is empty for sufficiently small $\epsilon$ , whereas “no infinite eluder sequence centered at $h ^ { * }$ ” only implies that for any distribution $P$ centered at $h ^ { * }$ , the marginal $\epsilon$ -ball $\mathcal { H } ( \epsilon ; P _ { \mathcal { X } } , h ^ { * } )$ is empty for sufficiently small $\epsilon$ .
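In this $\epsilon$-ball language, the four statements discussed in this paragraph line up as follows (a restatement of the definitions above, not a new result; the second and fourth lines are the marginal statements implied by the corresponding no-eluder-sequence assumptions):

```latex
\begin{aligned}
\text{Condition 1 for } h^{*}:&\quad \forall P \text{ centered at } h^{*},\ \exists\, \epsilon(P) > 0:\ \mathcal{H}(\epsilon; P, h^{*}) = \emptyset,\\
\text{no eluder seq.\ centered at } h^{*}:&\quad \forall P \text{ centered at } h^{*},\ \exists\, \epsilon(P) > 0:\ \mathcal{H}(\epsilon; P_{\mathcal{X}}, h^{*}) = \emptyset,\\
\text{Condition 2 for } h^{*}:&\quad \forall P \text{ centered at } h^{*},\ \exists\, \epsilon(P) > 0:\ \mathrm{VC}\bigl(\mathcal{H}(\epsilon; P, h^{*})\bigr) < \infty,\\
\text{no VC-eluder seq.\ centered at } h^{*}:&\quad \forall P \text{ centered at } h^{*},\ \exists\, \epsilon(P) > 0:\ \mathrm{VC}\bigl(\mathcal{H}(\epsilon; P_{\mathcal{X}}, h^{*})\bigr) < \infty.
\end{aligned}
```

The joint-distribution statements on the first and third lines are what the conditions require; Example 1 shows the marginal versions are strictly weaker.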
Moreover, it holds similarly that Condition 2 is weaker than “ $\mathcal { H }$ does not have any infinite VC-eluder sequence”. This implies that, for a class $\mathcal { H }$ with $\mathrm { V C } ( \mathcal { H } ) = \infty$ , while $\mathcal { H }$ requires arbitrarily slow rates to be agnostically universally learned by ERM, it can still be learned at rate $o \big ( n ^ { - 1 / 2 } \big )$ under a subclass of distributions $\mathcal { P } _ { h ^ { * } }$ when $h ^ { * }$ satisfies Condition 2. Here is an example.
Example 3 (Hanneke and Xu, 2024, Ex. 15) Let $\textstyle { \mathcal { X } } : = \bigcup _ { k \in \mathbb { N } } { \mathcal { X } } _ { k }$ be the disjoint union of finite sets with $| { \mathcal { X } } _ { k } | = k$ and $\mathcal { H } : = ( \bigcup _ { k \geq 1 } \mathcal { H } _ { k } ) \cup \{ h _ { \text{all-1's} } \}$ , where $\mathcal { H } _ { k } : = \{ \mathbb { 1 } _ { S } : S \subseteq \mathcal { X } _ { k } \}$ . We consider the target function $h _ { \text{all-1's} }$ . Now we have $\mathrm { VC } ( \mathcal { H } ) = \infty$ and Condition 2 holds for $h _ { \text{all-1's} }$ .
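The claim $\mathrm{VC}(\mathcal{H}) = \infty$ can be seen directly: $\mathcal{H}_k$ contains every indicator on $\mathcal{X}_k$, so it shatters the whole block and $\mathrm{VC}(\mathcal{H}_k) = k$. A brute-force check for small $k$ (the encoding of $\mathcal{X}_k$ as pairs is an assumption of this sketch):

```python
from itertools import combinations

def vc_dim(domain, H):
    """Brute-force VC dimension of a finite class H on a finite domain."""
    def shattered(S):
        patterns = {tuple(h(x) for x in S) for h in H}
        return len(patterns) == 2 ** len(S)
    d = 0
    for r in range(1, len(domain) + 1):
        if any(shattered(S) for S in combinations(domain, r)):
            d = r
    return d

for k in (1, 2, 3):
    Xk = [(k, j) for j in range(k)]                   # disjoint copy of size k
    # H_k = indicators of all subsets of X_k (closure captures each subset S):
    Hk = [(lambda S: (lambda x: 1 if x in S else 0))(set(S))
          for r in range(k + 1) for S in combinations(Xk, r)]
    print(k, vc_dim(Xk, Hk))                          # prints k, k
```

Since $\mathrm{VC}(\mathcal{H}_k) = k$ for every $k$, the union has infinite VC dimension, yet Condition 2 still holds for the all-1's target.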
If we consider Condition 2 as a distribution-specific condition, i.e., $\mathcal { H } ( \epsilon ; P , h ^ { * } ) : = \mathcal { H } ( \epsilon ; P ) : = \{ h \in \mathcal { H } : 0 < \mathtt { e r } _ { P } ( h ) - \operatorname* { i n f } _ { h ^ { \prime } \in \mathcal { H } } \mathtt { e r } _ { P } ( h ^ { \prime } ) \le \epsilon \}$ , then for any distribution $P$ , Condition 2 (holding for $P$ ) is equivalent to “ $\mathcal { H }$ does not have any infinite VC-eluder sequence centered at any target function of $P$ ”. Finally, together with Theorem 25, we obtain the following results on the target-dependent super-root exact rates and arbitrarily slow rates.
Theorem 27 (Target-dependent $o \big ( n ^ { - 1 / 2 } \big )$ exact rates) Let $\mathcal { H }$ be any UGC concept class and $h ^ { * }$ be any classifier. Then $h ^ { * }$ is agnostically universally learnable at exact rate $o ( n ^ { - 1 / 2 } )$ by ERM if and only if Condition 2 holds, but Condition 1 fails for $h ^ { * }$ .
Theorem 28 (Target-dependent arbitrarily slow rates) Let $\mathcal { H }$ be any concept class and $h ^ { * }$ be any classifier. Then $h ^ { * }$ requires at least arbitrarily slow rates to be agnostically universally learned by ERM if and only if Condition 2 fails for $h ^ { * }$ .
Proof [Proof of Theorems 27 and 28] To prove Theorem 27, note that given Theorem 25, it suffices to prove the part related to Condition 2. We first prove the sufficiency. To prove an exact rate, the upper bound follows from Lemma 26. For the lower bound, note that if Condition 1 fails for $h ^ { * }$ , then there exists an infinite eluder sequence centered at $h ^ { * }$ . Together with Lemma 17, we establish the lower bound. We next prove the necessity by contradiction. Indeed, in the proof of Lemma 26, we show that if Condition 2 fails for $h ^ { * }$ , then there exists an infinite VC-eluder sequence centered at $h ^ { * }$ . Then Lemma 21 yields a contradiction and completes the proof of necessity. To prove Theorem 28, note that the sufficiency has already been established in the proof of Theorem 27, and the necessity follows immediately by contradiction.
# 5. Bayes-dependent rates
Recall that in Section 4, we found that the target-dependent agnostic universal rates are characterized not by simple combinatorial measures but by two contrived target-specified conditions. This flaw motivates us to ask whether there is a better function-specified (instead of target-specified) categorization of all data distributions. In this section, we categorize a distribution based on its Bayes-optimal classifier (Definition 10) and give a theory of Bayes-dependent exact universal rates (Theorem 13). Specifically, for every distribution $P$ , we show that the agnostic universal rates for learning $\mathcal { H }$ under $P$ are determined by the existence/non-existence of an infinite eluder sequence and a VC-eluder sequence centered at the Bayes-optimal classifier $h _ { \mathrm { B a y e s } } ^ { \ast }$ with respect to $P$ . While the Bayes-optimal classifier may not be unique for a distribution $P$ (at any point $x$ satisfying $P ( Y = 1 | X = x ) = P ( Y = 0 | X = x ) = 1 / 2$ , $h _ { \mathrm { B a y e s } } ^ { \ast }$ can output either 0 or 1), we can show that if one of the Bayes-optimal classifiers does (not) have a centered infinite eluder/VC-eluder sequence, then the same holds for the others. Therefore, this Bayes-dependent characterization provides a complete concept-dependent characterization of universal rates. All detailed proofs (where required) in this section are deferred to Appendix D.
Theorem 29 (Bayes-dependent $e ^ { - n }$ exact rates) Let $\mathcal { H }$ be any concept class and $h$ be any classifier. For any distribution $P$ such that $h$ is a Bayes-optimal classifier with respect to $P$ , $\mathcal { H }$ is agnostically universally learnable at exact rate $e ^ { - n }$ by ERM under $P$ if and only if $\mathcal { H }$ does not have an infinite eluder sequence centered at $h$ .
Theorem 30 (Bayes-dependent $o \big ( n ^ { - 1 / 2 } \big )$ exact rates) Let $\mathcal { H }$ be any concept class and $h$ be any classifier. For any distribution $P$ such that $h$ is a Bayes-optimal classifier with respect to $P$ , $\mathcal { H }$ is agnostically universally learnable at exact rate $o \big ( n ^ { - 1 / 2 } \big )$ by ERM under $P$ if and only if $\mathcal { H }$ does not have an infinite VC-eluder sequence, but has an infinite eluder sequence, centered at $h$ .
Theorem 31 (Bayes-dependent arbitrarily slow rates) Let $\mathcal { H }$ be any concept class and $h$ be any classifier. For any distribution $P$ such that $h$ is a Bayes-optimal classifier with respect to $P$ , $\mathcal { H }$ requires at least arbitrarily slow rates to be agnostically universally learned by ERM under $P$ if and only if $\mathcal { H }$ has an infinite VC-eluder sequence centered at $h$ .

Abstract. The universal learning framework has been developed to obtain guarantees on the learning rates that hold for any fixed distribution, which can be much faster than the ones that hold uniformly over all distributions. Given that the Empirical Risk Minimization (ERM) principle is fundamental in PAC theory and ubiquitous in practical machine learning, the recent work of arXiv:2412.02810 studied the universal rates of ERM for binary classification under the realizable setting. However, the assumption of realizability is too restrictive to hold in practice. Indeed, the majority of the literature on universal learning has focused on the realizable case, leaving the non-realizable case barely explored.
In this paper, we consider the problem of universal learning by ERM for binary classification under the agnostic setting, where the “learning curve” reflects the decay of the excess risk as the sample size increases. We explore the possibilities of agnostic universal rates and reveal a compact trichotomy: there are three possible agnostic universal rates of ERM, being either $e^{-n}$, $o(n^{-1/2})$, or arbitrarily slow. We provide a complete characterization of which concept classes fall into each of these categories. Moreover, we also establish complete characterizations for the target-dependent universal rates as well as the Bayes-dependent universal rates.
Categories: stat.ML, cs.LG
# 1 Introduction
Time series forecasting is an essential task across domains such as finance [4, 41, 14, 1], energy [60, 52, 47], retail [39, 54, 6, 44], and public health [26, 3, 24, 23]. Recent advances have enabled researchers to scale time series corpora [31, 32, 33, 15, 18, 48, 56] and train foundation models [40, 13, 27, 43, 30, 49] directly on them. However, a key question remains: does scaling up unimodal time series data suffice to handle diverse real-world forecasting tasks?
We argue that the answer is: not always. While large-scale time series models excel at learning recurring patterns such as seasonality or trends, they may not always be effective when the identity of the series is not directly observable from the time series alone. Consider the example in Figure 1, where the demands of two items, a portable fan and a blind box toy, appear nearly identical throughout 2023. A unimodal time series model could generalize their behavior, treating them as governed by a similar seasonality. Yet, in early 2024, their demand diverged: the portable fan continued its seasonal cycle, while the blind box experienced a decline, potentially due to discontinuation or reduced market interest. Without knowing what the series represents, a model may struggle to capture this divergence. This illustrates a key limitation of unimodal forecasting: even with sufficient historical data, models that lack access to contextual information, such as item category, description, or external status, can misinterpret how similar-looking series actually behave. In contrast, multimodal inputs provide essential context, helping models distinguish between patterns driven by seasonality, trends, or inherent attributes.
Preprint.
Figure 1: Monthly demand of a seasonal product, portable fan (orange curve), and a trend-sensitive product, blind box (blue curve), from Jan 2023 to Mar 2024. While both series follow similar patterns in 2023, they diverge in December (red dashed line) when the blind box demand drops, highlighting the limitations of unimodal forecasting without contextual information.
While some recent efforts [21, 58] have explored multimodal forecasting, existing datasets suffer from several limitations. Many are relatively small in scale [21], often involving fewer than 30 time series channels, which restricts their utility for training and benchmarking large models. Others, although containing time series data, are designed for domains where forecasting is not the primary task, e.g., anomaly detection, classification, or clinical reasoning, or lack standardized, reproducible forecasting protocols; we discuss these in more detail in Section 2. Additionally, some datasets focus on dynamic external modalities, such as streaming news or social media content [36, 12]. Our work focuses on the static case, where external information like item descriptions and metadata is fixed per entity. These static modalities can provide additional context, which can be valuable in scenarios like the one shown in Figure 1. Moreover, as shown in Section 5.2, this static information is essential for scenarios like cold-start forecasting, where no historical time series is available.

In this paper, we introduce a dataset suite, Modal+Time (MoTime). MoTime is the largest publicly available suite of its kind, covering diverse domains and modalities. Its scale enables robust extension, training, evaluation, and generalization analysis across forecasting scenarios. Each dataset pairs time series with aligned contextual information, including but not limited to metadata, textual descriptions, categorical labels, and images. MoTime data is sourced, with additional processing, from academically recognized sources, including published papers and competition platforms. We augment each time series with relevant external modalities through reframing or targeted object crawling and careful validation. We investigate the benefit that external modalities can bring to time series forecasting in two scenario settings.
The first is varying-history forecasting, which analyzes when and how external modalities contribute under different lengths of historical time series in standard forecasting. The second is cold-start forecasting, where models forecast without any prior time series history; this is an important but rarely explored setting in prior multimodal time series forecasting studies. We find that while external modalities generally enhance forecasting performance, their effectiveness varies across datasets, and the gains are notable for short series in some cases. By framing this work as an infrastructure-level contribution, we aim to support the development of robust, context-aware forecasting models and to provide a suite for advancing multimodal research in time series modeling.
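At the data level, the cold-start setting reduces to flagging the time steps before an item becomes available, so that forecasts there must rely on static modalities alone. A minimal sketch, where the dates, values, and field names are hypothetical rather than MoTime's actual schema:

```python
from datetime import date, timedelta

# Hypothetical 6-day observation window for one item released mid-window.
window_start = date(2023, 1, 1)
release_date = date(2023, 1, 4)        # assumed release timestamp for the item
values = [0, 0, 0, 5, 7, 6]            # daily interaction counts (illustrative)

days = [window_start + timedelta(days=d) for d in range(len(values))]
pre_release = [day < release_date for day in days]   # True => cold-start step

# A cold-start task restricts the model to static modality inputs (text,
# images, metadata) when forecasting the flagged steps onward.
print(pre_release)  # [True, True, True, False, False, False]
```

The same masking generalizes to varying-history experiments by truncating the observed history to different lengths after the release point.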
# 2 Related Works
Recent advances in time series forecasting have spurred the release of diverse datasets and benchmarks. Compared with efforts [48, 40, 13, 42, 16, 17] on unimodal data [50, 38, 18, 15, 48, 56, 11, 2, 62, 31, 32, 33, 60], multimodal benchmarks are still in early development and are often domain-specific. We group them into general-purpose benchmarks and domain-specific datasets.
General-purpose multimodal benchmarks. Several benchmarks aim to support multimodal modeling over time series and text, though they are relatively small in scale and may not be primarily designed for forecasting. Time-MMD [28] is the first large-scale, general-purpose dataset for multimodal time series forecasting. It spans nine domains, including healthcare, finance, and agriculture, and pairs each time series with aligned dynamic textual reports. It supports multiple tasks such as forecasting, imputation, and anomaly detection, and provides an accompanying library, MM-TSFlib, for standardized evaluation. However, the number of channels is limited, with no domain exceeding 11 channels (see Table 1), which may constrain its utility for forecasting tasks. MTBench [8] is an LLM-centric benchmark on temporal reasoning. It includes paired time series and news-style textual inputs in finance and weather domains, with tasks ranging from trend prediction and technical indicator estimation to contradiction detection and news-driven question answering.
The benchmark emphasizes multimodal understanding over forecasting. The core evaluation targets cross-modal inference rather than standard forecasting accuracy.
These benchmarks represent important early steps toward multimodal time series understanding. However, they often focus on dynamic, event-level reasoning and offer limited support for large-scale, entity-centric forecasting with reusable protocols. Our work addresses these gaps by introducing a multimodal dataset suite with larger scales and scenario-driven forecasting tasks.
Table 1: Comparison of multimodal time series forecasting datasets and benchmarks. TSN, TTSN, ST, and RP stand for time series number, total time series number, static text, and reusable protocol, respectively.
Domain-specific multimodal datasets. Many datasets combine time series with external modalities in specific domains. In healthcare, resources such as MIMIC-III/IV [24, 23], PTB-XL [47], and ICBHI [20] integrate physiological time series with clinical reports. These datasets often focus on classification tasks, such as in-hospital mortality or physiological phenotyping, rather than forecasting. In finance, datasets like FNSPID [14], GDELT-based corpora [25, 21, 58], and DOW30 [1] pair market indicators with economic narratives and event records. For IoT and transportation, LEMMARCA [59] and NYC Taxi/Bike [36, 12] combine sensor readings with spatial metadata and tags. Environmental monitoring datasets like Terra [9] incorporate satellite-based measurements aligned with geotagged weather descriptions. Most use dynamic text aligned to timestamped events, which differs from static descriptions used in our setting. In addition, many lack reusable evaluation protocols and provide limited support for generalization tasks such as cold-start or entity-level forecasting. Most of these datasets contain a relatively small number of time series channels, typically fewer than a few dozen, as shown in Table 1.
Multimodal forecasting strategies. Approaches to multimodal forecasting vary in how they incorporate external modalities. Some methods transform time series into other modalities, e.g., text for language models [5, 46, 7]. However, this can disrupt the modeling of temporal dependencies. Others [22, 29, 51] use shared auxiliary descriptions at the dataset or domain level, limiting granularity and flexibility in fine-grained or cold-start settings. Unlike prior work, MoTime is designed around static, entity-level modalities and scenario-driven tasks. It enables systematic evaluation of external modality contributions across both short and long history settings, with explicit support for generalization, cold-start, and entity-aware forecasting challenges.
# 3 Data
We introduce MoTime, a suite of eight multimodal time series datasets spanning e-commerce, web traffic, media, and user behavior domains. MoTime is constructed by systematically re-purposing and transforming existing datasets, particularly from the recommender systems community, into item-centric, temporally structured forecasting tasks. The suite is designed to support general-purpose, multimodal forecasting and is characterized by its diversity in scale, series length, sparsity, temporal resolution, and modality composition. All the datasets of MoTime are available at https://www.kaggle.com/datasets/krissssss/multimodal-time-series-forecasting/.
# 3.1 Data Overview
MoTime consists of eight datasets, organized into two main categories based on their origin: (1) recommender system datasets re-purposed for forecasting, and (2) web and media popularity datasets. We briefly describe each dataset below; further details, including sources, are provided in Appendix 8.1.
Recommender datasets transformed into time series forecasting. These datasets are originally designed for personalized ranking or click-through prediction. We convert user-item interactions into item-oriented time series that reflect popularity dynamics over time.
PixelRec [10] captures short video behavior in lifestyle and entertainment domains. We aggregate user interactions into daily view series. Each item includes a thumbnail and title metadata. The series are long and sparse. TaobaoFashion provides outfit-level purchase logs. We construct daily purchase series per item, each paired with an image, making it well-suited for short-horizon, image-conditioned forecasting. AmazonReview consists of item-level review logs across 29 categories. We derive daily review count series per item and align them with item metadata such as title, description, category, and price. The data supports semantic-aware forecasting. Tianchi offers large-scale purchase behavior from an e-commerce platform. We extract item-level purchase series and align them with both text fields and images, enabling trend-aware, multimodal forecasting. MovieLens is a classical benchmark in recommendation. We convert rating logs into daily interaction series per movie and enrich them with externally extracted metadata, e.g., overview, genres, and tags. It serves as a benchmark for sparse, text-enhanced forecasting.
Media and web traffic datasets. These datasets naturally contain time series aligned with content metadata and reflect the dynamics of social or online attention. News [34] captures early popularity of news articles over 48 intervals at 20-minute resolution. Each article includes headline text, topic, and sentiment scores, making it ideal for fine-grained, multimodal attention forecasting. WikiPeople [53] includes multichannel daily view counts, e.g., desktop, mobile, for person-related articles. Text summaries are aligned with article IDs, supporting cross-device and text-conditioned forecasting.
Additional dataset. VISUELLE [45] provides visual and temporal engagement on Instagram posts. It includes image posts along with associated metadata such as tags, captions, and engagement statistics over time. The time series is item sales. While we do not process or experiment on VISUELLE in this work, we include it in the MoTime collection for future benchmarking.
Table 2: Statistics of the eight multimodal time series datasets in MoTime.
# 3.2 Data Construction and Processing
In this section, we summarize the key steps in processing the data. More details are provided in Appendix 8.1. To construct MoTime, we transform source datasets into time-indexed series aligned with external modalities. This involves three key steps: (1) generating time series from raw interaction or popularity data, (2) extracting and aligning modality information, and (3) filtering and cleaning to ensure consistency and usability.

Time series construction. For datasets originating from recommender systems, PixelRec, TaobaoFashion, Tianchi, and AmazonReview, we convert raw user-item interactions into item-centric time series. Specifically, we aggregate user behaviors into daily-level popularity signals for each item. This reframing enables classical recommendation data to be used in forecasting settings, where the task is to predict how item popularity evolves over time. In MovieLens, we similarly aggregate rating histories into daily interaction series for each movie. For News and WikiPeople, the time series are directly available as view or engagement counts at regular time intervals.

Modality extraction and alignment. External modalities, including textual descriptions, item images, and structured metadata, are either extracted from the original datasets or retrieved externally. For PixelRec, TaobaoFashion, Tianchi, and AmazonReview, image and/or text features are already provided. We align these with time series using unified item identifiers defined within each dataset. In contrast, for MovieLens and WikiPeople, we obtain external text by crawling movie metadata or Wikipedia summaries, and link them to the corresponding series using consistent IDs.

Filtering and cleaning. During preprocessing, we remove corrupted or incomplete entries from each modality. For textual data, we discard samples with missing or invalid fields. For image data, we retain only samples that can be reliably linked to time series objects.
For MovieLens, we filter out series that are too short or too sparsely observed. All retained samples have fully aligned time series and modality information, ensuring that multimodal learning can be conducted in a consistent and reproducible way.

Cold-start support. Three datasets, AmazonReview, MovieLens, and News, include explicit release or publication timestamps for each item. This allows us to identify time steps prior to an item's availability and annotate them accordingly. These pre-release segments enable the design of cold-start forecasting tasks, where models must make predictions based solely on external modality signals.
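As an illustration, the item-centric daily aggregation described above can be sketched as follows. The interaction tuples, item names, and date range are hypothetical, and the actual preprocessing pipeline handles far larger data:

```python
from collections import Counter
from datetime import date

# Hypothetical raw user-item interactions: (user_id, item_id, day).
interactions = [
    ("u1", "item_a", date(2023, 1, 1)),
    ("u2", "item_a", date(2023, 1, 1)),
    ("u3", "item_a", date(2023, 1, 3)),
    ("u1", "item_b", date(2023, 1, 2)),
]

def to_daily_popularity(events, start, end):
    """Aggregate interactions into one daily-count series per item,
    filling days with no activity with zeros."""
    counts = Counter((item, d) for _, item, d in events)
    n_days = (end - start).days + 1
    items = {item for _, item, _ in events}
    return {
        item: [
            counts.get((item, date.fromordinal(start.toordinal() + k)), 0)
            for k in range(n_days)
        ]
        for item in items
    }

series = to_daily_popularity(interactions, date(2023, 1, 1), date(2023, 1, 3))
```

Zero-filling days without interactions is what makes the sparsity (density) statistics reported later meaningful.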
# 3.3 Statistical Summary
Table 2 presents a unified summary of the eight datasets in MoTime, detailing their time series scale (observations per time series and the number of series per dataset), data density, and modality composition. Density is reported as a percentage, offering insight into sparsity levels and potential cold-start scenarios. Additional per-series statistics, including mean, median, and value range, are included in Table 6; for statistics of the text modality, refer to Table 6 in the Appendix.
The datasets exhibit substantial diversity in scale and channels. For instance, PixelRec contains 43,082 long, sparse series, while TaobaoFashion includes 890 short, dense series aligned with item images. WikiPeople offers 3,856 dense (99.96%), multichannel time series, making it well-suited for standard forecasting setups. In contrast, AmazonReview consists of 29 category-specific subdatasets, each with its own semantic domain and sparsity level. This hierarchical structure allows for the evaluation of domain transfer, few-shot generalization, and model robustness under distribution shift. News captures the short-term popularity of news articles driven by real-world events. It is the only dataset in MoTime where series are aligned by relative time, with each series starting from the moment of publication and covering a high temporal resolution. This makes the dataset particularly useful for evaluating models on rapid trend emergence, early-stage signal detection, and time-lagged multimodal influence.
Modality configurations are also diverse. MovieLens, AmazonReview, and WikiPeople are text-only; TaobaoFashion is image-only; while Tianchi and PixelRec are fully multimodal. All modalities are aligned with time series via consistent sample IDs. Taken together, MoTime supports a broad spectrum of forecasting tasks and scenarios, from fine-grained modeling to cold-start forecasting to semantic long-range forecasting. Its diversity in sparsity, series length, and modality alignment enables robust benchmarking for both unimodal and multimodal forecasting models.
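The ID-based alignment amounts to an inner join over shared identifiers. A minimal sketch (all IDs, texts, and paths below are hypothetical) keeps only items present in every required modality:

```python
# Hypothetical per-modality records keyed by shared sample IDs.
series = {"i1": [3, 0, 5], "i2": [1, 1, 2], "i3": [0, 4, 0]}
texts = {"i1": "red dress", "i2": "running shoes"}
images = {"i1": "img/i1.jpg", "i2": "img/i2.jpg", "i3": "img/i3.jpg"}

# Inner join: an item survives only if all modalities are available for it.
aligned_ids = sorted(set(series) & set(texts) & set(images))
dataset = [
    {"id": i, "series": series[i], "text": texts[i], "image": images[i]}
    for i in aligned_ids
]
```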
# 4 Multimodal Utility under Different Forecasting Scenarios
To comprehensively explore when and how external modalities benefit forecasting performance, we design two representative scenarios: varying-history and cold-start forecasting. These scenarios reflect practical challenges in real-world forecasting applications and are rarely addressed systematically in prior benchmarks.
# 4.1 Modality Utility under Scenario 1: Varying-history Forecasting
The motivation for this scenario stems from a hypothesis: additional information is most beneficial when the time series itself provides limited signal, and becomes less critical when the temporal signal is already strong. In particular, we hypothesize that short-history series rely more on external modalities, while long-history series may already contain sufficient temporal patterns for accurate forecasting. To test this hypothesis across diverse domains, we construct a training setup that explicitly contrasts short-history and long-history inputs, enabling us to evaluate the marginal contribution of modalities under varying temporal availability in a common forecasting setting.
We randomly split the training set into two subsets: one retaining the full historical windows (long), and one truncated to only the most recent steps of each series (short). The model is trained jointly on both subsets, while the validation and test sets remain unchanged to ensure comparability. This setup allows for consistent evaluation of modality effectiveness under both sufficient-history and limited-history conditions.
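A minimal sketch of this subset construction, assuming the short subset keeps only the last `short_len` steps of each selected series and that the two subsets are split 1:1 (as in the protocol of Section 5.1); `short_len` and the toy series are illustrative:

```python
import random

def make_varying_history_subsets(train_series, short_len, seed=0):
    """Randomly split training series into a 'long' subset (full history)
    and a 'short' subset truncated to the last `short_len` steps."""
    rng = random.Random(seed)
    ids = list(train_series)
    rng.shuffle(ids)
    half = len(ids) // 2
    long_subset = {i: train_series[i] for i in ids[:half]}
    short_subset = {i: train_series[i][-short_len:] for i in ids[half:]}
    return long_subset, short_subset

# Four hypothetical series of 200 steps each.
train = {f"s{k}": list(range(200)) for k in range(4)}
long_sub, short_sub = make_varying_history_subsets(train, short_len=50)
```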
As our proposed baseline method for forecasting in this scenario, we adopt a dual-tower architecture inspired by TextFusionHTS [63], comprising a time series encoder and a frozen LLM encoder. Image inputs are first converted into captions via a vision-language model and processed alongside text using the same encoder. The modality-specific representations are concatenated and passed through a lightweight MLP for final forecasting. The embedding-based architecture of PatchTST [37] aligns well with multimodal fusion and serves as a natural candidate for multimodal extension; we therefore adapt it to integrate multimodal information and propose MultiPatchTST. While the recent state-of-the-art WPMixer [35] is originally unimodal, we adapt it into MultiWPMixer by incorporating contextual embeddings into its decomposed components.
# 4.2 Modality Utility under Scenario 2: Cold-start Forecasting
In cold-start forecasting, no historical time series is available, so the forecast must rely entirely on the entity's time-independent external modalities. This setting remains underexplored due to data and evaluation limitations. One key aim of MoTime is to enable systematic investigation of this challenging setting, which allows us to assess the utility of external modalities and highlight the potential of multimodal information under data sparsity.
As the proposed baseline method, we adopt a retrieval-augmented generation pipeline inspired by recent work on cold-start web traffic forecasting [61]. Specifically, we first construct a retrieval base composed of existing time series instances and their associated metadata, textual descriptions, or image captions. All external modalities are embedded with an LLM, and the resulting vectors are cached for fast retrieval. This retrieval base serves as a semantic index to support cold-start inference. At inference time, the input for a new entity is its textual description only. We encode this text into an embedding using the same frozen LLM. Cosine similarity is then computed between the input embedding and all stored embeddings in the retrieval base to obtain a relevance score for each candidate, and we retain the top-$k$ most similar entities together with their text and corresponding time series data. We then construct a structured prompt containing: (1) the textual description of the target entity, (2) textual descriptions of the $k$ most similar entities (including converted image descriptions), (3) the historical time series of these $k$ relevant entities, and (4) their corresponding entity IDs and timestamp information. This prompt is passed to an LLM, which is tasked with generating a forecast for the target entity; the model is thus conditioned on both retrieved trajectories and metadata, enabling cold-start forecasting purely by semantic analogy. To ensure structured output and improve reasoning consistency, we constrain the output format and instruct the model to provide reasoning behind its prediction. For more details about the prompt, please refer to Table 13 in Appendix.
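The retrieval step can be sketched as follows, with toy two-dimensional vectors standing in for the cached LLM embeddings (the entity IDs and series are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors; 0 if either has zero norm."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_top_k(query_emb, base, k):
    """Rank cached entities by cosine similarity to the query embedding."""
    ranked = sorted(base, key=lambda e: cosine(query_emb, e["emb"]), reverse=True)
    return ranked[:k]

base = [
    {"id": "e1", "emb": [1.0, 0.0], "series": [5, 6, 7]},
    {"id": "e2", "emb": [0.0, 1.0], "series": [1, 1, 2]},
    {"id": "e3", "emb": [0.9, 0.1], "series": [4, 5, 5]},
]
top = retrieve_top_k([1.0, 0.05], base, k=2)
```

The retrieved entries (text plus series) would then be formatted into the structured prompt described above.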
# 5 Experiment
We evaluate forecasting performance across two scenarios enabled by MoTime.
# 5.1 Evaluation Setup
Baselines. In varying-history forecasting, we consider several representative baselines to cover different modeling paradigms. DLinear [55]: a linear decomposition-based model that separately maps components to forecast values. As it avoids latent embeddings, DLinear does not naturally extend to multimodal variants and is included in its original form. PatchTST [37]: a Transformer-based model that operates on patch embeddings of time series inputs. WPMixer [35]: a recent state-of-the-art model that applies learned filters to decomposed components, followed by lightweight feed-forward layers. We compare these baselines with our proposed MultiPatchTST and MultiWPMixer as introduced in Section 4.1. Given the limited attention to cold-start forecasting in the existing literature and the scarcity of multimodal resources, we use a simple baseline for the cold-start scenario: the element-wise average of the retrieved relevant series, computed independently for each forecast horizon. This non-parametric baseline provides a strong reference point for assessing the added value of generation-based forecasting conditioned on semantic context.
Metrics. We report two widely used metrics [18],
$$
\mathrm{RMSE} = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(y_{t}-\hat{y}_{t}\right)^{2}}, \quad \mathrm{WRMSPE} = \frac{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(y_{t}-\hat{y}_{t}\right)^{2}}}{\frac{1}{T}\sum_{t=1}^{T}\left|y_{t}\right|}
$$
RMSE captures error in the original scale and is particularly sensitive to large deviations. WRMSPE normalizes RMSE by the mean absolute value of the ground truth, offering a scale-invariant view of forecasting quality. We focus on squared-error metrics as they are sensitive to large deviations, which is important in sparse/intermittent or spiky series, a property that many series in our datasets have. We note that we do not use scaled measures that are popular in forecasting contexts, such as the RMSSE, because our test spans are long enough to compute meaningful absolute-scale metrics over the test sets. Moreover, RMSSE becomes difficult to interpret when forecast horizons vary across test cases, as in our setting.
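Both metrics can be implemented directly from their definitions; the toy values below are illustrative:

```python
import math

def rmse(y, y_hat):
    """Root mean squared error on the original scale."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def wrmspe(y, y_hat):
    """RMSE normalized by the mean absolute value of the ground truth,
    giving a scale-invariant view of forecasting quality."""
    denom = sum(abs(a) for a in y) / len(y)
    return rmse(y, y_hat) / denom

y, y_hat = [2.0, 4.0, 6.0], [2.0, 4.0, 3.0]
# rmse = sqrt(9/3) = sqrt(3); mean |y| = 4, so wrmspe = sqrt(3)/4
```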
Protocols. In varying-history forecasting, we split each dataset chronologically into train, validation, and test sets at a 7:1:2 ratio of the longest series. To mimic varying lengths, we split the series into long and short groups at a 1:1 ratio. The threshold for defining short-history series is determined based on the dataset's temporal resolution and the typical length of its series: 100 steps for PixelRec, AmazonReview, and WikiPeople; 50 for MovieLens; 20 for TaobaoFashion and Tianchi; and 18 for News. These values reflect meaningful cutoffs for varying-history forecasting in each domain. Models are trained on both groups, enabling them to generalize across heterogeneous history lengths. We evaluate forecasting performance separately on the short series, the long series, and the mixture of both. Input and output lengths are adapted to dataset frequency: daily datasets use 7-day inputs and forecast 7 to 28 days ahead; the high-frequency dataset, News, uses 6-step inputs and forecasts up to 12 steps ahead. All reported scores are computed on the original scale of the data, without normalization or scaling applied before evaluation. This choice preserves the meaningfulness of errors. Consequently, datasets with inherently large values, e.g., WikiPeople, News, and Tianchi, exhibit correspondingly large RMSE values.
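A sketch of the 7:1:2 chronological split applied to a single series (assuming the split point is taken along the time axis, as is standard in forecasting):

```python
def chrono_split(series, ratios=(0.7, 0.1, 0.2)):
    """Split one series chronologically into train/validation/test segments."""
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return series[:n_train], series[n_train:n_train + n_val], series[n_train + n_val:]

s = list(range(100))  # a hypothetical 100-step series
train, val, test = chrono_split(s)
```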
In the cold-start setting, we randomly sample 30 series as cold-start targets. Forecasting starts from the first valid time step, i.e., the first non-zero, non-placeholder value, using only 7 previous daily steps or 6 previous 20-minute steps from the relevant, non-target entities, depending on the dataset frequency.
Implementation details. In varying-history forecasting, we apply reindexing [62] to ensure mixed batches of short and long samples. To keep the experiments diverse yet tractable, we filter out series whose density is less than 0.4 in PixelRec, and randomly sample 1,000 series from MovieLens. In cold-start forecasting, we simulate cold-start conditions by randomly selecting 30 channels from each dataset and removing all but the first valid observation, excluding 0s and -1s. For each target, we retrieve the top 4 relevant series based on textual similarity, using GPT-4o-mini embeddings as retrievers. Forecasting is then performed using a GPT-4o-mini model conditioned on the relevant series and metadata. Models are trained with MSE loss using the Adam optimizer. Time series encoders and modality fusion layers are updated; contextual encoders remain fixed. Early stopping is based on validation loss. All data processing and experiments are conducted on a single GPU (NVIDIA A100, A40, or RTX 3090), selected based on availability. The overall setup is designed to be reproducible and computationally feasible on commonly available hardware. For more computation information, e.g., running time, please refer to Appendix 8.5.
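The density filtering and subsampling step can be sketched as follows. The 0.4 threshold follows the text; treating 0 as the placeholder for missing values is an assumption, and the toy series are hypothetical:

```python
import random

def density(series, placeholder=0):
    """Fraction of non-placeholder observations in a series."""
    return sum(1 for v in series if v != placeholder) / len(series)

def filter_and_sample(all_series, min_density=0.4, n_sample=None, seed=0):
    """Drop series below a density threshold, then optionally subsample."""
    kept = {i: s for i, s in all_series.items() if density(s) >= min_density}
    if n_sample is not None and n_sample < len(kept):
        ids = random.Random(seed).sample(sorted(kept), n_sample)
        kept = {i: kept[i] for i in ids}
    return kept

data = {"a": [1, 0, 0, 0, 0], "b": [1, 2, 0, 3, 4], "c": [5, 5, 5, 5, 5]}
kept = filter_and_sample(data, min_density=0.4)
```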
# 5.2 Results and Analysis
This section presents key observations under the two scenarios. Rather than emphasizing specific model performance, we focus on how different datasets and scenarios enable systematic evaluation of multimodal forecasting.
Table 3: Evaluation on varying-training length forecasting (long history series). Due to space limitations, evaluation scores are rounded to three decimal places. In cases where multiple models appear to have identical scores under this rounding, we still mark the best and second-best results in bold and with an underline based on the full-precision metrics.
# 5.2.1 Varying-history Forecasting
While we initially hypothesized that external modalities would be especially beneficial for short series (Section 4.1), the empirical results show that this hypothesis holds only for specific datasets, rather than universally. Across most datasets, we find that the performance trends, in terms of model rankings, multimodal benefits, and horizon-specific performance, are nearly consistent across the short and long subsets. Thus, we present the main results for long series in Table 3, with short-series and mixture results in Appendix 8.6.1. There is, however, one notable exception: on MovieLens, the short subset sees a larger improvement from multimodal input with MultiWPMixer. This may be attributed to the sparsity of interaction data in short series, where textual context compensates for the lack of temporal signal. The gain may also stem from WPMixer's ability to capture local decomposed structure, which aligns well with the lightweight signals in sparse data.
There are several interesting observations based on Table 3. WikiPeople, AmazonReview, and News all exhibit stronger multimodal gains at longer horizons, suggesting that external information becomes increasingly valuable as the temporal signal fades or becomes more uncertain. PixelRec shows clear benefits from multimodal input, indicating that item-level static text plays an important role when series are relatively long. TaobaoFashion displays a horizon-dependent pattern: for short horizons, the best performance comes from the WPMixer-based multimodal model, MultiWPMixer; for longer horizons, the PatchTST-based multimodal model, MultiPatchTST, takes the lead. This may reflect the different temporal features captured by the two architectures. Tianchi shows stronger multimodal gains at shorter horizons, possibly because short-range dynamics are more sensitive to context across items, while longer-range trends are more stable and thus less reliant on external context.
Overall, these results suggest that the utility of external modalities depends not only on series length but also on data sparsity, forecast horizon, and modality alignment. MoTime enables systematic analysis of these interactions, rather than assuming uniform modality contributions across tasks.
# 5.2.2 Cold-start Forecasting
Figure 2 summarizes the cold-start forecasting results across six datasets using both RMSE and WRMSPE. Several consistent patterns emerge. Across most datasets, GPT-based generation consistently outperforms the averaging baseline, verifying the effectiveness of modality-driven forecasting under extreme data sparsity. The improvement is particularly evident in sparse datasets like PixelRec and Tianchi, while MovieLens shows a distinct advantage due to its spiky, dense patterns. Dataset-specific results and analyses are provided in Appendix 8.6.2.
[Figure 2: Cold-start forecasting results (RMSE and WRMSPE) across the six datasets Amazon, MovieLens, News, Taobao, Tianchi, and PixelRec, plotted against the forecast horizon.]
# 6 Discussion and Limitations
While the results across datasets demonstrate the utility of external modalities in different scenarios, we also acknowledge several limitations in our current setup.
First, although short series naturally contain fewer windows than long series, we did not apply any upsampling strategy to balance them during training. As a result, short series may contribute less to gradient updates, potentially underestimating the real benefit of multimodalities in highly data-scarce cases. That said, we explicitly report results separately on short and long series, which still allows us to observe modality effects under different lengths. Moreover, while we focus primarily on varying-history series, we recognize that sparsity is an equally important factor. A long series with mostly zeros or missing values may carry less signal than a shorter, denser one. This is especially relevant for behavioral data such as user interactions or item purchases. Our results suggest that external modalities are most helpful when the signal is weak, whether due to shortness or sparsity. However, a systematic investigation of sparsity effects is beyond the current scope, and we leave it to future work. Finally, we note that GPT-based cold-start forecasting occasionally deviates from the target forecast horizon, despite explicit formatting instructions in the prompt. While such cases are rare and do not significantly affect aggregate metrics, they reflect the inherent variability of large language models when used in non-autoregressive, multi-step prediction settings. In future work, stronger decoding constraints or horizon-aware prompting may improve consistency. | While multimodal data sources are increasingly available in real-world forecasting, most existing research remains focused on unimodal time series. In this work, we present MoTime, a suite of multimodal time series forecasting datasets that pair temporal signals with external modalities such as text, metadata, and images. 
Covering diverse domains, MoTime supports structured evaluation of modality utility under two scenarios: 1) the common forecasting task, where varying-length history is available, and 2) cold-start forecasting, where no historical data is available. Experiments show that external modalities can improve forecasting performance in both scenarios, with particularly strong benefits for short series in some datasets, though the impact varies depending on data characteristics. By making datasets and findings publicly available, we aim to support more comprehensive and realistic benchmarks in future multimodal time series forecasting research. | [
"cs.LG",
"cs.CL",
"cs.DB",
"cs.IR"
] |
# 1 Introduction
In recent years, the field of reinforcement learning (RL) [4] has undergone an impactful evolution, moving beyond reactive policy optimization to also include models that exhibit greater generalization and adaptability [18]. RL trains agents to interact with an environment and learn from rewards. Different types of RL approaches exist, among which is Offline Reinforcement Learning (Offline RL), also referred to as batch RL. This approach aims to learn optimal decision-making policies solely from previously collected data, without further environment interaction during training. This setting is especially relevant in domains where online exploration is costly, risky, or infeasible, such as robotics, healthcare applications, and autonomous driving [16, 1, 17, 22]. Among the several architectures proposed in the field, Elastic Decision Transformers (EDTs), proposed in [24], have emerged as a novel and promising architecture, unifying sequence modelling with action-conditioned decision-making. More specifically, Elastic Decision Transformers, leveraging the strengths of Transformer architectures (traditionally dominant in natural language processing), can efficiently capture long-range dependencies and enable flexible policy behaviours under uncertainty. Beyond these more traditional RL paradigms, intrinsic rewards and curiosity-driven models have also emerged. Such models are driven by concepts such as intrinsic motivation and curiosity, which can significantly encourage the agents' exploration and enrich their learning experience [20]. Adding intrinsic motivation can potentially enhance learning by enabling the agent to explore the world proactively. 
However, while RL models can be enriched with intrinsic motivation, as shown in [13], the explainability of the improvements that intrinsic motivation and curiosity-driven approaches bring to RL models remains largely unexplored [21]. Understanding how and why an intrinsically motivated RL agent arrives at its decisions is therefore a central research question, whose answer could lead to wider adoption of such models, especially in safety-critical or exploratory domains. In contrast to traditional RL methods, where hand-crafted features or value estimates often yield some interpretability, EDTs learn implicit state representations within high-dimensional embedding spaces. These representations, while powerful, need deeper interpretation in order to understand and disentangle their internal meaning [3]. One promising path toward interpretability lies in the study of representation learning [3]: specifically, how internal embeddings evolve under different learning signals. Intrinsic motivation, a mechanism inspired by cognitive science, has been shown to enhance exploration in sparse-reward environments by encouraging agents to seek novel or surprising states. Recent work has incorporated intrinsic rewards into EDTs, yielding improved performance across a suite of offline RL benchmarks. However, the deeper representational consequences of these signals (the extent to which they influence the structure, semantics, and coherence of learned embeddings) remain largely unexplored, and are of interest for understanding the different performances obtained by curiosity-enhanced RL models. In this paper, we take a step toward opening the black box of EDTs. By applying statistical analysis techniques, we investigate the internal embedding spaces of EDTs trained with and without intrinsic motivation. 
Our goal is to assess whether intrinsic rewards do more than simply guide the agents' behavior: does intrinsic motivation also change the geometry of learned latent representations, in a way that supports explainability? Our findings suggest that agents equipped with intrinsic rewards tend to perform better, developing more structured and disentangled representations, pointing toward an emergent semantic alignment between states, goals, and latent dynamics. Through systematic statistical analysis of embedding properties, we demonstrate that intrinsic motivation mechanisms sculpt representation geometry in environment-specific ways that correlate with improved policy learning. These findings help bridge the gap between empirical performance gains and the underlying representational mechanisms, offering new insights into both the effectiveness and interpretability of intrinsic motivation in transformer-based offline RL. More specifically, we provide, to the best of our knowledge, the first systematic analysis of how intrinsic motivation mechanisms shape learned representations in Elastic Decision Transformers. Our main contributions include:
1. Post-Hoc Explainability: We propose a statistical analysis framework (covariance trace, L2 norm, cosine similarity) to examine how intrinsic motivation shapes embedding geometry, revealing differences between EDT models with and without curiosity mechanisms.
2. Mechanistic Analysis: We introduce two EDT variants: EDT-SIL, where intrinsic loss acts on embedded states promoting compactness, and EDT-TIL, where it operates on transformer outputs, enhancing orthogonality. These lead to distinct representational structures correlated with environment-specific performance gains.
3. Performance-Representation Link: We demonstrate quantitative correlations between embedding properties and task performance across environments, offering mechanistic insights into when and why intrinsic motivation improves offline RL outcomes.
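The three statistics named in contribution 1 (covariance trace, L2 norm, and cosine similarity) can be computed from a batch of embeddings as follows; the toy two-dimensional embeddings are purely illustrative:

```python
import math

def covariance_trace(embs):
    """Trace of the sample covariance matrix, i.e. the total variance
    summed across embedding dimensions (a dispersion/compactness measure)."""
    n, d = len(embs), len(embs[0])
    means = [sum(e[j] for e in embs) / n for j in range(d)]
    return sum(
        sum((e[j] - means[j]) ** 2 for e in embs) / (n - 1) for j in range(d)
    )

def mean_l2_norm(embs):
    """Average Euclidean norm of the embeddings."""
    return sum(math.sqrt(sum(v * v for v in e)) for e in embs) / len(embs)

def mean_pairwise_cosine(embs):
    """Average cosine similarity over all embedding pairs; values near 0
    indicate more mutually orthogonal representations."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    pairs = [(i, j) for i in range(len(embs)) for j in range(i + 1, len(embs))]
    return sum(cos(embs[i], embs[j]) for i, j in pairs) / len(pairs)

embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
```

In practice these statistics would be computed over batches of EDT state embeddings or transformer outputs and compared across the model variants.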
# 2 Background
Reinforcement learning (RL) has witnessed significant advancements in recent years, with the integration of several new models into the learning setting. Among the approaches proposed, Decision Transformer (DT) models have introduced a significant novelty by framing RL as a sequence modeling problem, leveraging the Transformer architecture to predict actions based on past trajectories [7]. However, DTs face challenges in trajectory stitching, that is, the process of combining optimal segments from sub-optimal trajectories to arrive at better policies. In this context, the Elastic Decision Transformer (EDT) model was proposed, enhancing DTs by dynamically adjusting the history length considered during action inference [24]. By modulating the input sequence length based on the quality of past trajectories, EDTs enable effective trajectory stitching, leading to improved performance in offline RL settings. Another important aspect concerns the possible use of intrinsic signals and motivation within reinforcement learning paradigms. In RL, agents traditionally learn by maximizing extrinsic rewards, signals provided by the environment to guide behaviour. However, in several real-world environments, such rewards are sparse, delayed, or poorly aligned with long-term success [18]. This limitation has driven the development of alternative RL frameworks based on intrinsic motivation or curiosity-driven approaches, which assign to agents internal drives to explore and learn, even in the absence of external feedback [21]. 
Intrinsic motivation takes inspiration from cognitive psychology, in particular from theories of curiosity and novelty-seeking behaviour observed in animals and humans [20, 21]. In the RL setting, intrinsic rewards can therefore be seen as pseudo-additional rewards that encourage exploratory behaviours by rewarding novelty, uncertainty, surprise, or learning progress. Several previous approaches based on intrinsic motivation can be found. For instance, in [2] the authors present novelty-based methods, based on count-based exploration in large or continuous state spaces. In [21] the authors present prediction-error-based methods, like Intrinsic Curiosity Modules (ICM), which reward agents when their predictive models fail, encouraging behaviours that reveal new knowledge. In [15], instead, the authors present information-gain approaches, which measure KL divergences between posterior and prior beliefs to guide the agent towards informative experiences. Recent work has aimed to unify intrinsic and extrinsic drives. In [6] the authors propose a constrained optimization framework that balances policy optimization across both reward types, while others use contrastive learning and large-scale embeddings (e.g., CLIP) to define intrinsic rewards based on semantic novelty [14]. Despite these advances, aligning intrinsic motivation with downstream task success remains an open challenge. Poorly designed intrinsic rewards can distract agents from task-relevant behaviours or lead to inefficient exploration. Ongoing research continues to investigate better ways to quantify novelty, adjust reward weighting dynamically, and integrate intrinsic objectives with policy learning in a principled manner. In the context of RL and exploration, embeddings can play a pivotal role by providing compact representations of states, actions, and rewards. 
The use of pre-trained models, such as CLIP, allows agents to leverage semantic knowledge for better generalization. For instance, in [14] the authors demonstrated that CLIP-based embeddings can serve as intrinsic rewards, guiding agents towards semantically meaningful exploration and outperforming traditional methods in complex environments. In this context, we would also like to consider a few works that address the issue of explainable reinforcement learning. The majority of works in this area focus on explaining the decisions and actions taken by the agent, as reported in the comprehensive survey in [19]. In our work, instead, the goal is to derive insights into the advantages of the curiosity-driven approach embedded within the EDT framework, by disentangling the inherent geometry and characteristics of the learnt embeddings.
# 2.1 Biological Plausibility and Homeostatic Regulation: towards a biologically inspired reinforcement learning approach
In this section we propose some insights on the biological plausibility of our proposed model. The quest for explainable AI in reinforcement learning (RL) has a close parallel with biological learning, where intrinsic motivation shapes adaptive behavior through allostatic regulation, achieving stability by adjusting predictions rather than fixing parameters a priori [23]. Such predictive adaptation is reflected in how Random Network Distillation (RND) models operate: learning is driven by discrepancies between predicted and actual inputs, echoing brain mechanisms that constantly update internal models based on sensory prediction errors [9, 8]. These prediction hierarchies span from primary sensory to higher-order cortical processing [10], suggesting intrinsic motivation can be applied across representational levels. Moreover, biological systems maintain representational homeostasis, optimizing information processing through the regulation of capacity and structural organization [25, 12]. Intrinsic motivation fosters flexible learning and generalization [20]. In transformer-based models like Elastic Decision Transformers, auxiliary losses based on RND act as allostatic regulators, guiding representational structure without altering offline reward signals. Such losses can help prevent representational collapse, mirroring biological mechanisms that sustain learning adaptability and predictive efficiency.
Figure 1: Architecture of the Elastic Decision Transformer with intrinsic motivation mechanisms proposed in [13]. Figure 1 shows both EDT-SIL and EDT-TIL variants, where the RND module operates on state embeddings or transformer outputs respectively. The dashed lines indicate backpropagation paths for each variant, with ${ L } _ { i n t }$ from the RND module contributing to the total loss $L _ { E D T }$ alongside standard prediction losses. The target network of the RND module has frozen weights and is never updated, as proposed in [5].
# 3 Methods
In this section, we provide a comprehensive overview of the Elastic Decision Transformer architecture enhanced with intrinsic motivation mechanisms. While maintaining the same architecture as [13], we provide detailed descriptions of the baseline framework and the two intrinsic motivation variants that form the basis for the embedding analysis presented in this work. More specifically, we extend the work in [13] by providing a comprehensive evaluation across different datasets. We further conduct a systematic investigation of RND network layer configurations, and present our post-hoc explainability framework for analyzing how intrinsic motivation shapes learned representations and contributes to better final performance.
# 3.1 Intrinsic Auxiliary Loss
Our approach builds upon the Elastic Decision Transformer (EDT) proposed in [24], which excels in offline RL by processing trajectories as sequences of (state, action, reward) tuples and predicting actions conditioned on states and desired returns. The baseline EDT leverages Transformer attention mechanisms to capture long-range dependencies in sequential decision-making. Following [13], we analyze the intrinsically-motivated EDT variants that incorporate an RND module as an auxiliary loss function, operating independently of the fixed reward signals in offline datasets. As shown in Figure 1, these variants include:
• EDT-SIL (State Input Loss): In this variant, the RND module computes intrinsic rewards directly from embedded state representations. The predictor network attempts to match the output of a frozen target network when fed the embedded states. This configuration allows the intrinsic signal to influence the state embedding layer through backpropagation, potentially encouraging the model to learn more diverse and structured state representations.
• EDT-TIL (Transformer Input Loss): Here, the RND module operates on the transformer's output representations, aligning intrinsic reward computation with the model's sequential processing capabilities. This approach enables the intrinsic signal to shape both the embedding and transformer layers, potentially creating more coherent sequential representations.
The intrinsic loss $L _ { \mathrm { i n t } }$ is computed as the mean squared error between the predictor and target network outputs as described in Equation 1:
$$
L_{\mathrm{int}} = \left\| f_{\mathrm{pred}}(x; \theta_{\mathrm{pred}}) - f_{\mathrm{target}}(x; \theta_{\mathrm{target}}) \right\|_{2}^{2}
$$
where $x$ represents either embedded states (SIL) or transformer outputs (TIL). The total loss is computed by summing the standard EDT objective and this intrinsic component, as described in Equation 2:
$$
L_{\mathrm{overall}} = L_{\mathrm{EDT}} + L_{\mathrm{int}}
$$
For the complete formulation of $L_{\mathrm{EDT}}$, including its constituent components (return prediction, observation prediction, action prediction, and expectile regression losses), we refer the reader to [24]. This formulation enables the intrinsic motivation signal to enhance representation learning without disrupting the primary task objective or altering the fixed reward structure of offline datasets.
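The two losses above can be sketched numerically. The following is a minimal numpy illustration of the RND intrinsic term (Equation 1) and the combined objective (Equation 2); the helper names (`init_mlp`, `mlp_forward`) and the placeholder value for the EDT loss are ours, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Random MLP parameters: a list of (W, b) pairs, one per layer."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass with ReLU on hidden layers and a linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

dim_in, dim_out = 16, 32
target = init_mlp([dim_in, 64, dim_out], rng)     # frozen, never updated
predictor = init_mlp([dim_in, 64, dim_out], rng)  # trained to match the target

# x stands in for embedded states (SIL) or transformer outputs (TIL).
x = rng.normal(size=(8, dim_in))

# Equation 1: squared L2 error between predictor and target outputs.
L_int = np.mean(np.sum((mlp_forward(predictor, x) - mlp_forward(target, x)) ** 2, axis=1))

# Equation 2: add the intrinsic term to the standard EDT objective.
L_edt = 1.0  # placeholder for the EDT losses detailed in [24]
L_overall = L_edt + L_int
```

Because the target network is frozen, minimizing `L_int` trains only the predictor, leaving the offline reward structure untouched.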
# 3.2 Random Network Distillation (RND) Architecture Analysis
As shown in Figure 1, we introduce the RND block. To understand the impact of RND network capacity on the effectiveness of intrinsic motivation, we conducted a systematic investigation of predictor network depth. The motivation for this analysis stems from the hypothesis that different network capacities may capture different levels of representational complexity, potentially affecting both the quality of intrinsic rewards and the resulting policy performance. We evaluated three RND predictor configurations: a 1-layer RND (a minimal architecture that establishes a baseline for intrinsic reward generation), a 3-layer RND (the default configuration from prior work [13]), and a 10-layer RND (a high-capacity variant that tests whether increased expressiveness improves intrinsic motivation). This analysis was conducted on our best-performing datasets (the medium datasets described in Section 4.1) in order to optimize the RND predictor network architecture.
# 3.3 Post-Hoc Explainability Analysis
The primary methodological contribution of this work is a framework for post-hoc explainability, in particular for evaluating and studying how intrinsic motivation mechanisms shape learned representations. This analysis aims to understand why intrinsic motivation improves performance by examining the geometric and statistical properties of embedding spaces.
# 3.3.1 Embedding Characterization Framework
We focused our analysis on three key geometric properties that capture different aspects of representational structure:
• Covariance Trace: This metric measures the total variance distributed across embedding dimensions as per Equation 3:
$$
\mathrm{cov\_trace} = \operatorname{Tr}(\operatorname{Cov}(E))
$$
where $E \in \mathbb { R } ^ { N \times d }$ represents the embedding matrix. By definition, the trace equals the sum of variances along each dimension, indicating how much total information is captured across the representational space.
• L2 Norm: The mean magnitude of embedding vectors quantifies representational compactness, as per Equation 4:
$$
\mathrm{l2\_norm} = \frac{1}{N} \sum_{i=1}^{N} \left\| e_{i} \right\|_{2}
$$
This metric provides insight into the average energy or magnitude of embedding vectors in the representational space.
• Cosine Similarity: The average pairwise cosine similarity within each tensor assesses representational orthogonality as per Equation 5:
$$
\mathrm{cos\_sim} = \frac{1}{|\mathcal{P}|} \sum_{(e_{i}, e_{j}) \in \mathcal{P}} \frac{e_{i} \cdot e_{j}}{\left\| e_{i} \right\| \left\| e_{j} \right\|}
$$
where $\mathcal{P}$ is the set of all pairs of embedding vectors within each tensor. Lower values suggest more orthogonal representations, which may indicate better disentanglement of different aspects of the state space.
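The three metrics above can be computed directly from an $(N, d)$ embedding matrix. The following numpy sketch implements Equations 3-5 under that shape assumption; the function name is ours.

```python
import numpy as np

def embedding_metrics(E):
    """Geometric metrics for an embedding matrix E of shape (N, d)."""
    # Equation 3: total variance summed across embedding dimensions.
    cov_trace = np.trace(np.cov(E, rowvar=False))
    # Equation 4: mean L2 norm of the embedding vectors.
    l2_norm = np.linalg.norm(E, axis=1).mean()
    # Equation 5: mean cosine similarity over all unordered pairs.
    unit = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = unit @ unit.T
    iu = np.triu_indices(len(E), k=1)  # indices of all pairs (i < j)
    cos_sim = sims[iu].mean()
    return cov_trace, l2_norm, cos_sim

E = np.random.default_rng(0).normal(size=(100, 8))
ct, l2, cs = embedding_metrics(E)
```

As a sanity check, a perfectly orthonormal set of embeddings (e.g., the rows of an identity matrix) yields unit L2 norms and zero mean cosine similarity.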
# 3.3.2 Performance-Representation Correlation Analysis
To establish quantitative relationships between representational properties and task performance, we first used correlation analysis between embedding metrics and normalized performance scores (HNS defined in Equation 6). This approach allowed us to identify which representational characteristics are most predictive of policy performance in each environment. Our methodology included:
1. Computing embedding metrics for each trained model across multiple seeds
2. Computing the Pearson correlations between metrics and performance scores
3. Identifying the most predictive metric for each environment-model combination
4. Analyzing how different intrinsic motivation mechanisms (EDT-SIL vs. EDT-TIL) create distinct representational patterns
Such post-hoc analysis provides insights into the mechanistic basis for the performance improvements observed with intrinsic motivation, moving beyond empirical results to understand the underlying representational changes that enable better policy learning.
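Steps 2 and 3 of the methodology above amount to computing Pearson correlations and selecting the metric with the largest absolute value. A small sketch with hypothetical per-seed values (the numbers below are illustrative, not the paper's data):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two 1-D sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Hypothetical embedding metrics across five seeds for one environment-model pair.
metrics = {
    "cov_trace": [4.1, 3.8, 3.5, 3.2, 3.0],
    "l2_norm":   [1.9, 2.0, 1.8, 2.1, 1.9],
    "cos_sim":   [0.32, 0.30, 0.27, 0.25, 0.22],
}
hns = [61.0, 63.5, 66.2, 68.0, 70.5]  # normalized performance per seed

# Correlate each metric with performance and keep the most predictive one.
corrs = {name: pearson(vals, hns) for name, vals in metrics.items()}
best_metric = max(corrs, key=lambda k: abs(corrs[k]))
```

With these illustrative numbers, the covariance trace decreases as performance rises, so its correlation is strongly negative, mirroring the Ant pattern reported later in Section 4.4.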
# 4 Experiments and Results
In this section, we provide a comprehensive evaluation of intrinsic motivation mechanisms in Elastic Decision Transformers, focusing on both performance improvements and the underlying representational changes. We use the standard EDT architecture described in [24] as our baseline, comparing it against the intrinsic motivation variants (EDT-SIL and EDT-TIL, described in Section 3.1). Our experimental design systematically investigates how these variants shape learned embeddings and how those embeddings correlate with task performance across multiple locomotion environments, as described in Section 4.1.
# 4.1 Experimental Setup and Datasets
We conducted experiments on four continuous control tasks from the D4RL benchmark suite: Ant, HalfCheetah, Hopper, and Walker2d (Figure 2). D4RL (Datasets for Deep Data-driven Reinforcement Learning, presented in [11]) provides a standardized collection of offline RL datasets that address critical challenges in offline learning by offering diverse datasets with varying quality, coverage, and collection methodologies. The locomotion tasks represent different movement challenges, which are particularly relevant for evaluating biological plausibility. They range from quadrupedal movement (Ant) to bipedal locomotion with varying degrees of stability (Hopper, Walker2d) and high-speed running (HalfCheetah). Each environment provides distinct sensorimotor dynamics that mirror the variety of locomotion patterns found in biological systems, making them ideal testbeds for intrinsic motivation mechanisms inspired by natural learning processes.
Figure 2: The four locomotion environments from D4RL [11] benchmark: Hopper (top-left), Walker2d (top-right), HalfCheetah (bottom-left), and Ant (bottom-right). These tasks represent different locomotion challenges from bipedal to quadrupedal movement.
For each environment, we evaluated our models on both medium and medium-replay datasets. The difference between the two is the following: medium datasets (~1M transitions) provide cleaner trajectories from behavior policy training, capturing progression from suboptimal to near-optimal performance; medium-replay datasets (~2M transitions) add noisy replay buffer data, including early exploration, creating more challenging conditions that better reflect real-world learning scenarios. Such datasets are ideal for investigating biological plausibility as they mirror natural motor skill development through trial and error, making them suitable for understanding how intrinsic motivation shapes representations in biologically plausible ways.
Figure 3: Cumulative Human-Normalized Scores (HNS) obtained by the best models trained on each environment of the Medium dataset. 3-layer variants (red borders) achieve optimal performance for both SIL and TIL mechanisms.
All experiments were conducted using five random seeds to ensure statistical robustness. Performance was evaluated using Human-Normalized Scores (HNS), calculated as:
$$
\mathrm{HNS} = \frac{\mathrm{score} - \mathrm{score\_random}}{\mathrm{score\_human} - \mathrm{score\_random}}
$$
This metric provides consistent scaling across environments and follows the same evaluation protocol used in [24]. Each trained model underwent evaluation over three rounds of 100 episodes each, with results averaged across both seeds and evaluation rounds.
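The HNS formula above is a simple linear rescaling; a one-line sketch makes its anchoring explicit (the reference scores below are illustrative, not the official D4RL values):

```python
def human_normalized_score(score, score_random, score_human):
    """Human-Normalized Score: 0 at random-policy level, 1 at human level."""
    return (score - score_random) / (score_human - score_random)

# Illustrative reference scores for a hypothetical environment.
hns = human_normalized_score(score=2500.0, score_random=100.0, score_human=3200.0)
```

By construction a random policy maps to 0 and a human-level policy to 1, which is what makes the metric comparable across environments with very different raw return scales.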
For the embedding analysis, we collected state embeddings obtained during the model evaluation steps by executing the best-performing model for each environment-dataset combination. During the evaluation phase, models ran for a single episode with a maximum of 1000 steps. At each step, we extracted and saved the state embeddings, i.e., the output of the embedder layer as shown in Figure 1. These embeddings represent the agent's internal representation of environmental states and served as the input to the RND module in the EDT-SIL variant. The choice to analyze state embeddings was motivated by biological plausibility: these representations correspond to the primary sensory encoding stage in biological neural systems, where environmental observations are first transformed into internal neural patterns [12]. Just as biological systems must efficiently encode sensory information before higher-order processing, the embedder layer creates the foundational representations upon which all subsequent decision-making depends. From these collected embeddings, we computed three key geometric metrics (covariance trace, L2 norm, and cosine similarity) following the analysis framework detailed in Section 3. The embedding collection and metric calculation process was repeated three times to ensure robustness, with all results reported in Table 1 representing averaged values across these repetitions.
# 4.2 RND Layer Configuration Analysis
To optimize the intrinsic motivation mechanism, we systematically investigated the effect of RND predictor network depth on both performance and representation quality. We evaluated three configurations: 1-layer, 3-layer, and 10-layer RND networks. This analysis was conducted exclusively on medium datasets, as these demonstrated the most promising initial results. The 3-layer configuration emerged as optimal across both EDT-SIL and EDT-TIL variants. Figure 3 demonstrates that 3-layer variants (highlighted with red borders) consistently achieve the highest cumulative scores across all environments. This finding aligns with the biological principle of representational balance: too few layers (1-layer) may lack sufficient capacity to capture complex predictive relationships, while too many layers (10-layer) may lead to overfitting or representational instability.
# 4.3 Performance Results
Table 1 presents performance results across both medium and medium-replay datasets, where intrinsic motivation variants demonstrate environment-specific effectiveness patterns.
Table 1: Performance comparison on Medium and Medium-Replay datasets. Human-Normalized Scores (HNS) show mean $\pm$ standard deviation across 5 seeds. The best result for each environment is highlighted in bold.
On medium datasets, EDT-TIL achieved the best performance in 2 out of 4 environments (Walker2d: 73.50 vs 68.50 HNS; Hopper: 59.63 vs 57.49/59.31 HNS for baseline/SIL).
The medium-replay datasets reveal different intrinsic motivation effectiveness patterns. EDT-SIL significantly outperforms the baseline in Hopper (84.67 vs 81.56 HNS), while EDT-TIL demonstrates robust performance in HalfCheetah (38.60 vs 37.32 HNS) and Walker2d (65.06 vs 62.25 HNS). Interestingly, the baseline EDT achieves the best performance in Ant on medium-replay (85.51 HNS), suggesting that this environment may benefit less from intrinsic motivation on noisier datasets. These results suggest that different intrinsic motivation mechanisms create complementary representational advantages suited to different environmental dynamics and dataset characteristics.
# 4.4 Embedding Analysis Results
Table 2 reveals the mechanistic basis for performance improvements through analysis of embedding properties. Each environment exhibits a distinct correlation pattern between representational metrics and performance. More specifically, we observe the following:
• Ant: a strong negative correlation with covariance trace (-0.907), suggesting that reduced total variance distribution improves performance.
• HalfCheetah: a positive correlation with covariance trace (+0.850), suggesting that increased representational capacity benefits this environment.
• Hopper: a positive correlation with cosine similarity (+0.658), suggesting that increased similarity between state representations enhances performance.
• Walker2d: a strong negative correlation with cosine similarity (-0.950), indicating that increased orthogonality between embeddings is crucial.
Such environment-specific patterns demonstrate that intrinsic motivation mechanisms create tailored representational structures aligned with task demands, consistent with the biological principle of adaptive representational organization hypothesized in our biologically plausible framework.
Examining the embedding properties across models reveals a further interesting effect: each intrinsic motivation variant has distinct representational consequences. EDT-SIL consistently creates more compact representations through reduced covariance trace and L2 norms. EDT-TIL promotes representational orthogonality via reduced cosine similarity (Walker2d: -0.950; Hopper: +0.658), demonstrating environment-specific optimization strategies that mirror biological neural decorrelation principles. The complementary nature of these mechanisms suggests that different intrinsic motivation approaches implement distinct aspects of biological representational regulation: EDT-SIL enhances representational efficiency at the input level, while EDT-TIL optimizes sequential processing structures. This division of regulatory functions aligns with hierarchical organization principles observed in biological neural systems, where different processing stages maintain distinct homeostatic mechanisms.
These findings strongly suggest that intrinsic motivation in EDTs operates as more than a simple exploration bonus: it acts as a representational prior that shapes embedding geometry in biologically plausible ways, creating environment-specific organizational structures that facilitate improved decision-making.
Table 2: Performance and embedding properties comparison across environments. Best performance highlighted in bold. Strongest embedding-performance correlation indicated for each environment.
# 1. Introduction
Decision trees are widely used for interpretable machine learning (Rudin et al., 2022). Their structure of discrete decisions has long been leveraged for difficult tasks such as handling missing data (Therneau et al., 1997) and measuring variable importance (Breiman, 1984). Recent advances in decision tree optimization (Lin et al., 2020; Demirović et al., 2022; Aglin et al., 2020) – including algorithms for enumerating the entire set of near-optimal decision trees (the Rashomon set; Xin et al., 2022) – have garnered substantial research interest. These advances have enabled new perspectives on predictive multiplicity (Marx et al., 2020; Watson-Daniels et al., 2023) and variable importance (Dong & Rudin, 2020; Fisher et al., 2019; Donnelly et al., 2023).
However, decision trees can be misleading, because they correspond not just to a classifier but also to a particular way of evaluating the classifier. Consider the two equivalent trees in Figure 1. The two trees encode the same logical AND decision function, but they suggest different orders of querying $X _ { 1 }$ and $X _ { 2 }$ . A practitioner would typically deploy only one of these trees, but either order is equally justified.
Figure 1. Two decision trees that suggest different evaluation orders but represent the same logical formula $(X_{1} \land X_{2})$.
This phenomenon, which we call predictive equivalence (Sober, 1996), poses several distinct challenges:
(1) Decision trees imply an evaluation procedure that can get stuck on irrelevant missing information. If $x _ { 1 }$ is missing and $x _ { 2 } = 0$ , the first tree in Figure 1 cannot be traversed, but the second tree clearly predicts 0.
(2) Tree-based variable importance metrics change across predictively equivalent trees. For example, Gini importance will suggest that $x _ { 2 }$ is more important to the first tree in Figure 1, even though this order is arbitrary.
(3) Logically equivalent trees with different evaluation orders appear in the Rashomon set as distinct trees. This phenomenon causes some models to be over-represented in the Rashomon set, which biases some downstream tasks.
(4) A decision tree implies a constrained order for evaluating variables, but this order may be sub-optimal when each variable has an associated cost.
We provide a representation of decision-tree classifiers that abstracts away the evaluation order. To do this, we convert decision trees into disjunctive normal form (DNF; an OR of ANDs) and reduce to a minimal set of sufficient conditions for making predictions. This representation allows us to address the above challenges: we uncover many cases where decision trees can still make predictions despite some variables being missing, we make variable importance metrics for trees more reliable, we resolve predictive equivalence in the Rashomon set, and we optimize the cost of variable acquisition needed to reach a prediction using a tree.
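The conversion sketched above (tree → DNF → minimal set of sufficient conditions) can be illustrated on the two trees of Figure 1. The tuple encoding and the helper names below are ours, and absorption is only a first simplification step; full minimization uses the Quine-McCluskey algorithm as discussed in the related work.

```python
# Trees as nested tuples: ("split", feature_index, left, right), where the
# right child is the feature=1 branch, or ("leaf", prediction).
# This encoding is illustrative, not the paper's implementation.

def positive_terms(tree, path=()):
    """Collect each root-to-leaf path predicting 1 as a set of (feature, value) literals."""
    if tree[0] == "leaf":
        return [frozenset(path)] if tree[1] == 1 else []
    _, j, left, right = tree
    return (positive_terms(left, path + ((j, 0),)) +
            positive_terms(right, path + ((j, 1),)))

def absorb(terms):
    """Drop any term that is a strict superset of another (absorption law)."""
    return [t for t in terms if not any(u < t for u in terms)]

# The two predictively equivalent trees for X1 AND X2 from Figure 1:
tree_a = ("split", 0, ("leaf", 0), ("split", 1, ("leaf", 0), ("leaf", 1)))
tree_b = ("split", 1, ("leaf", 0), ("split", 0, ("leaf", 0), ("leaf", 1)))

dnf_a = absorb(positive_terms(tree_a))
dnf_b = absorb(positive_terms(tree_b))
# Both collapse to the single term {X1 = 1, X2 = 1}, abstracting away query order.
```

Because a term is a *set* of literals, the order in which the tree queried $X_1$ and $X_2$ disappears, which is exactly what resolving predictive equivalence requires.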
# 2. Related Work
# 2.1. Decision Trees and Simplicity
There is a substantial body of work on decision tree learning. Greedy decision tree algorithms, such as CART (Breiman, 1984) and C5.0 (Quinlan, 2014), find decision trees in a greedy top-down, recursive manner. The GOSDT algorithm by Lin et al. (2020), the DL8.5 algorithm by Aglin et al. (2020), and the MurTree algorithm by Demirović et al. (2022) provide methods to optimize the decision tree hypothesis space directly. A range of other approaches also afford optimal decision trees via more general solvers (Bertsimas & Dunn, 2017; Verwer & Zhang, 2019). These algorithms can be used to find highly accurate decision trees – indeed, well-optimized single decision trees can approach the performance of decision tree ensembles (Vidal & Schiffer, 2020; McTavish et al., 2022), which are often state of the art for tabular data (Grinsztajn et al., 2022). Our representation of decision trees applies to trees discovered via any method.
Our work is particularly related to the problem of explanation redundancy in decision trees. This concept is explored by Izza et al. (2022), who demonstrate that the paths taken through the tree to reach predictions (“path explanations”) often have redundant variables in them, which are not necessary to make the prediction. The authors present polynomial-time algorithms to compute succinct path explanations. In contrast, we present a method to compute a minimal boolean logical representation of the entire decision tree, using the Quine-McCluskey algorithm (Quine, 1952; McCluskey, 1956) as a subroutine. This representation enables succinct path explanations of predictions for free, once the global representation is computed for some up-front cost. Our representation also enables several downstream applications beyond prediction explanations.
A line of work on the simplicity of machine learning models shows that when data has noise in the outcomes (common on many tasks we consider in our experiments), simpler decision trees will be competitive in performance with more complicated ones (Semenova et al., 2022; 2023; Boner et al., 2024). If our decision trees have a small number of leaves, the number of variables in the Quine-McCluskey subroutine will be small, and our algorithm for simplification will be efficient despite the NP-completeness of the problem.
# 2.2. Applications
Variable Importance. Decision trees have been used for variable importance since at least the introduction of random forests (Breiman, 2001a). Notably, specialized metrics that quantify importance based on the reduction in impurity achieved when splitting on a particular feature have been developed to measure variable importance in decision trees (Louppe et al., 2013; Kazemitabar et al., 2017). In Section 5.1, we show that predictively identical trees can yield very different impurity reduction values.
There are also metrics such as SHAP (Lundberg & Lee, 2017), permutation importance (Breiman, 2001a; Fisher et al., 2019), conditional model reliance (Fisher et al., 2019), LOCO (Lei et al., 2018), and LIME (Ribeiro et al., 2016), that quantify variable importance based on permuting data across a particular decision boundary. These metrics are invariant to predictive equivalence, because they evaluate only the decision boundary.
Recent work examines variable importance over all models in the set of near-optimal models (Fisher et al., 2019; Dong & Rudin, 2020; Donnelly et al., 2023), i.e., the Rashomon set (Breiman, 2001b; Rudin et al., 2024), rather than a single model. Of particular note, the Rashomon Importance Distribution (RID) (Donnelly et al., 2023) demonstrated that the stability of variable importance estimates can be improved by examining the distribution of variable importances over Rashomon sets computed on bootstrapped datasets. In Section 5.2, we show that predictive equivalence within each Rashomon set confounds the practical implementation of RID, and we show how to correct this.
Missing Data. A popular approach for dealing with missing feature values is to impute them – with either a simple estimator such as the mean, or a function of the other covariates. For background on imputation, see Shadbahr et al. (2023); Emmanuel et al. (2021); Van Buuren & Oudshoorn (1999). Multiple imputation accounts for uncertainty in imputation by combining results from several estimates (Rubin, 1988; Van Buuren & Oudshoorn, 1999; Schafer & Graham, 2002; Stekhoven & Bühlmann, 2012; Mattei & Frellsen, 2019). There is also a body of work regarding surrogate splits, a tree-specific approach which learns alternative splits to make when a variable is missing (Therneau et al., 1997; Breiman, 1984). Each of these approaches introduces bias when the probability of a variable being missing depends on the variable's underlying value, beyond what can be modeled by the covariates – this setting is referred to as Missing Not at Random (MNAR) (Little & Rubin, 2019). We show that our proposed representation reveals examples whose predictions are identical under any form of imputation.
Imputation can be detrimental to prediction when missingness provides information about the label. There are a wide range of theoretical and empirical findings supporting the need to reason explicitly on missingness in this setting (Sperrin et al., 2020; Le Morvan et al., 2021; Van Ness et al., 2023; Stempfle et al., 2023; McTavish et al., 2024). Many such approaches are tree or tree-ensemble specific, leveraging the simple structure of trees (Kapelner & Bleich, 2015; Twala et al., 2008; Beaulac & Rosenthal, 2020; Therneau et al., 1997; Wang & Feng, 2010; Chen & Guestrin, 2016). However, such missingness-specific modeling requires sufficient observation of missingness at training time. When a missingness pattern occurs only at test time, or when the missingness mechanism has a distribution shift from training time (the latter setting being particularly common in medical domains, e.g., Groenwold, 2020; Sperrin et al., 2020), it is difficult to learn missingness-specific patterns.
Stempfle & Johansson (2024) propose a metric to measure how often models rely on features with missing values, and they propose a model class designed to be robust to missingness. Their scoring model MINTY uses logical disjunctions such that, if any term in a disjunction is known to be true, that entire disjunction can be evaluated. This allows one variable to serve as a backup when another is missing. We show that our representation for decision trees dramatically improves the trees’ performance on this metric, without changing the decision boundary.
Cost Optimization. Many real-world problems have a cost to acquire variables – for example, ordering an MRI is expensive and time-consuming. Many types of costs have been studied (Turney, 2002), but our focus is on test cost, or the minimum cost associated with obtaining a prediction from a model. There are many cost-sensitive decision tree algorithms in the literature (Lomax & Vadera, 2013; Costa & Pedreira, 2023). However, all of these approaches directly optimize a decision tree to account for these costs and use that tree top-down at test time, even if there is a more cost-effective way to obtain predictions from the same tree. In contrast, we optimize the cost of evaluating predictions from a given decision tree (cost-optimal or otherwise) when some or all variables are unknown. Note that we assume each feature has a fixed cost across all samples.
We optimize the cost of applying a decision tree by applying $Q$-learning (Watkins, 1989). Q-learning is a model-free approach for policy learning that estimates the value of each action in each state by allowing an agent to explore the state space. During exploration, the reward of the $j$-th visited state is gradually propagated back to the $(j-1)$-th state. Given sufficient episodes – iterations of this exploration – it has been shown that Q-learning will produce the optimal policy for a given problem (Watkins, 1989; Watkins & Dayan, 1992). Since the introduction of Q-learning, the field of reinforcement learning has dramatically expanded. We refer readers to recent survey papers for a more complete literature review (Shakya et al., 2023; Wang et al., 2022). However, we found that the update rule and regime proposed by Watkins (1989) were sufficient.
# 3. Methodology
Notation. Consider a dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where each $x_i \in \mathbb{R}^d$ and $y_i \in \{0, 1\}$ pair is sampled i.i.d. from some unknown distribution $\mathcal{D}$. For the purposes of our work, these $x_i$'s may be continuous, ordinal, categorical, or binary. We refer to the $j^{th}$ feature of the $i^{th}$ sample as $x_{i,j}$. We use the notation $x_{\cdot,j}$ to refer to the $j^{th}$ feature. We consider binary classification problems in this work, though our results can be extended to multiclass classification with minor adjustments to the algorithms and theorems. We also work with binarized datasets, in which feature $j$ of $D$ is binarized into $B_j$ different binary features. For example, the feature age may be binarized into binary features $age \le 5$, $age \le 10$, etc. We denote the $k^{th}$ binary feature corresponding to feature $j$ of the $i^{th}$ sample as $b_{i,j}^{(k)}$. We reserve capital letters $X$ and $Y$ for random variables, and we index random variables via subscripts, i.e., $X_i$.
Given a bit-vector $\theta \in \{0, 1\}^d$, we define the mask function $m(x_i, \theta) := \left( x_{i,j} \text{ if } \theta_j = 1; \; NA \text{ if } \theta_j = 0 \right)_{j=1}^{d}$. For convenience, we often leave out dependence on $\theta$ and write $m(x_i)$ to denote a masked version of $x_i$. Let $J_{m(x_i)} := \{ j \mid m(x_i)_j \in \mathbb{R} \}$. A completion of $m(x_i)$ is a vector $z \in \mathbb{R}^d$ s.t. $z_j = m(x_i)_j, \forall j \in J_{m(x_i)}$. When discussing cost-sensitive optimization, we denote the cost associated with each input feature $x_{\cdot,j}$ as $c_j$.
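The mask and completion definitions above can be sketched for the binary-feature case. The function names and the `None`-as-$NA$ convention are ours, for illustration only.

```python
import itertools

NA = None  # marker for a missing value

def mask(x, theta):
    """m(x, theta): keep feature j when theta_j = 1, replace it with NA otherwise."""
    return [xj if tj == 1 else NA for xj, tj in zip(x, theta)]

def completions(mx, domain=(0, 1)):
    """All completions of a masked binary sample: fill every NA with each value in `domain`."""
    slots = [(v,) if v is not NA else domain for v in mx]
    return [list(c) for c in itertools.product(*slots)]

mx = mask([1, 0, 1], theta=[1, 0, 1])  # feature 1 masked out
zs = completions(mx)                    # every way of filling in the NA
```

For real-valued features the set of completions is infinite, so this enumeration only makes sense after binarization, which is exactly the setting the notation above sets up.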
# 3.1. Representing Trees to Resolve Predictive Equivalence
Given any decision tree $\mathcal{T}$, we represent $\mathcal{T}$ in a simplified disjunctive normal form (an OR of ANDs), which we denote by $\mathcal{T}_{\mathrm{DNF}}$. See Figure 2 for an example of this representation.
This approach yields a number of useful properties. It remains globally interpretable, because we can present a simple logical formula for the whole tree. The new representation is still faithful to all the original predictions of the tree (Proposition 3.1). It can also make predictions whenever there is sufficient information to know the prediction on the original tree (Theorem 3.2), which we leverage later in our applications. It provides non-redundant explanations, meaning it does not suffer from the interpretability issues Izza et al. (2022) identify in decision trees (Proposition 3.3). It maps all predictively equivalent trees to the same form (Theorem 3.4). A proof for each of these statements is provided in Appendix A.
Figure 2. An example of a decision tree where the minimal DNF and Blake canonical forms differ. The minimal DNF of this tree describes the tree’s behaviour with two cases. The Blake canonical form includes a third reason for predicting True, which always falls into the preceding two cases but relies on different variables.
Proposition 3.1 (Faithfulness). Consider any tree $\mathcal { T }$ and let $\boldsymbol { x } \in \mathbb { R } ^ { d }$ be a complete sample. Then $\mathcal { T } _ { D N F } ( x ) = \mathcal { T } ( x )$ .
Theorem 3.2 (Completeness). $\mathcal { T } _ { D N F } ( m ( x ) ) \neq N A$ if and only if, for all completions $z$ of $m ( x )$ , $\mathcal { T } ( z ) = \mathcal { T } _ { D N F } ( m ( x ) )$ .
Proposition 3.3 (Succinctness). Let the explanation for $\mathcal { T } _ { D N F } ( x )$ be any term in SimplePosExpr that is satisfied by $x$ when $\mathcal { T } _ { D N F } ( x ) = 1$ (or any term in SimpleNegExpr when $\mathcal { T } _ { D N F } ( x ) = 0$ ). Then no variable in this explanation is redundant.
Theorem 3.4 (Resolution of Predictive Equivalence). Decision trees $\mathcal { T }$ and $\mathcal { T } ^ { \prime }$ are predictively equivalent if and only if $\mathcal { T } _ { D N F } = \mathcal { T } _ { D N F } ^ { \prime }$ (with equality defined by Algorithm 3).
Algorithm 1 describes how we transform trees into the minimal DNF representation. This algorithm combines the positive-predicting leaves of the decision tree into an expression in disjunctive normal form. This expression is then simplified with a slightly modified version of the Quine–McCluskey algorithm (Quine, 1952) (see Algorithm 5) to find the minimal form of the boolean expression encoding positive predictions by the tree. We perform the same procedure on the negative-predicting leaves to obtain a minimal boolean expression for evaluating whether the tree predicts negative. Algorithm 2 explains how this method provides predictions, with ‘substitute’ meaning each variable with a known value is replaced by a constant (e.g., if $x _ { i , 1 } = 1$ , then $( x _ { \cdot , 1 } \land x _ { \cdot , 2 } ) \lor ( \lnot x _ { \cdot , 2 } )$ becomes $( 1 \land x _ { \cdot , 2 } ) \lor ( \lnot x _ { \cdot , 2 } ) = True$ ). Equivalence is defined in Algorithm 3 in Appendix B.
While the basic simplified form has a number of useful properties, it does not directly afford all possible sufficient conditions for positive and negative predictions. Consider, for example, the tree in Figure 2: there are 3 sufficient conditions for a positive prediction, but our basic simplified form will only identify two of them. We leverage a second representation, called the Blake canonical form (Blake, 1937), to solve this problem: in Algorithm 4, we find all possible minimal sufficient conditions for a positive prediction, and all possible minimal sufficient conditions for a negative prediction. This also corresponds to identifying all partial concept classes (Alon et al., 2022) for which the tree predicts True (resp. False). This alternative form can optionally be used to simplify the prediction logic for our DNF, since it is then sufficient simply to evaluate each separate term in the DNF, without needing to do further logical simplification.

Algorithm 1 Compute DNF Representation from Tree
Input: A decision tree $\mathcal { T }$ .
Output: $\mathcal { T } _ { \mathrm { D N F } }$ , a minimal boolean formula in disjunctive normal form with equivalent logical form to $\mathcal { T }$ .
Let $L$ be the set of leaves of the decision tree, each represented by the conjunction of the variables and decisions on the path to the leaf. Denote by $L ^ { + }$ the leaves that predict positive, and $L ^ { - }$ the leaves that predict negative.
$PosExpr \gets \lor _ { l \in L ^ { + } } l$
$NegExpr \gets \lor _ { l \in L ^ { - } } l$
$SimplePosExpr \gets QuineMcCluskey ( PosExpr )$
$SimpleNegExpr \gets QuineMcCluskey ( NegExpr )$
Return $( SimplePosExpr , SimpleNegExpr )$

Algorithm 2 Prediction with the DNF representation
Input: $m ( x )$ , the sample to predict; $\mathcal { T } _ { \mathrm { D N F } }$ .
Output: Prediction $\mathcal { T } _ { \mathrm { D N F } } ( m ( x ) )$ (0, 1, or NA).
for term $t$ in $\mathcal { T } _ { \mathrm { D N F } } . SimplePosExpr$ : Return 1 if known feature values from $m ( x )$ satisfy $t$
for term $t$ in $\mathcal { T } _ { \mathrm { D N F } } . SimpleNegExpr$ : Return 0 if known feature values from $m ( x )$ satisfy $t$
$expr \gets$ Substitute known feature values of $m ( x )$ into $\mathcal { T } _ { \mathrm { D N F } } . SimplePosExpr$
$expr \gets QuineMcCluskey ( expr )$
Return 1 if $expr == True$
Return 0 if $expr == False$
Return NA
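The prediction logic above can be sketched with a dict-based encoding (each term maps a feature index to its required boolean value; all names are hypothetical). This sketch omits Algorithm 2's substitution-and-resimplification fallback, so the shortcut is exact when the terms already include all minimal sufficient conditions, as in the Blake canonical form:

```python
def predict_dnf(pos_terms, neg_terms, sample):
    """Return 1/0 when some term is fully satisfied by known values, else None (NA).

    `sample` maps feature index -> bool; absent keys are missing (NA)."""
    satisfied = lambda term: all(sample.get(j) == v for j, v in term.items())
    if any(satisfied(t) for t in pos_terms):
        return 1
    if any(satisfied(t) for t in neg_terms):
        return 0
    return None  # not enough information to determine the prediction

# Toy tree computing Y = X1 AND X2 (hypothetical example):
pos_terms = [{1: True, 2: True}]
neg_terms = [{1: False}, {2: False}]
```

With only $x_{\cdot,1} = 0$ observed, the negative term $\lnot x_{\cdot,1}$ already fires, so the prediction is 0 without ever consulting $x_{\cdot,2}$.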
# 3.2. Datasets
We consider four datasets throughout this work and eight additional datasets in Appendix C. We refer to the primary four as COMPAS (Larson et al., 2016), Wine Quality (Cortez et al., 2009), Wisconsin (Street et al., 1993), and Coupon (Wang et al., 2017). COMPAS measures 7 features for 6,907 individuals, where labels are whether the individuals were arrested within 2 years of being released from prison. Wine Quality reports 11 features over 6,497 wines along with a numerical quality rating between 1 and 10. We binarize these ratings into high $( > 5 )$ ) and low $( \leq 5 )$ quality classes and predict this binary rating. Wisconsin is a breast cancer dataset and contains 30 features over 569 masses, where labels designate whether the tumor was malignant or benign. Coupon measures 25 features for 12,684 individuals, and labels denote whether or not the individual would accept a coupon. See Appendix D.1 for complete details on the preprocessing applied to each dataset.
Table 1. Total number of trees, number of trees without trivial redundancies, and number of predictively nonequivalent trees (ours) in the Rashomon set. We abbreviate “Wine Quality” to “Wine” and “Wisconsin” to “Wisc.”
# 4. Quantifying Predictive Equivalence
We can directly identify predictively equivalent decision trees using Algorithm 3. We now apply these tools to the Rashomon set of decision trees, found by the TreeFARMS algorithm (Xin et al., 2022), to measure the prevalence of predictive equivalence in practice. The Rashomon set is defined as the set of all models in a hypothesis space $\mathcal { F }$ within $\varepsilon$ of the optimal model’s training objective, where the objective is denoted Obj $( f , D )$ . Given an optimal model on the training data $f ^ { * } \in \arg \operatorname* { m i n } _ { f \in { \mathcal { F } } } { \mathrm { O b j } } ( f , D )$ , the Rashomon set is defined as:
$$
\mathcal { R } ( \mathcal { F } , D ) : = \{ f \in \mathcal { F } | \mathrm { O b j } ( f , D ) \leq \mathrm { O b j } ( f ^ { * } , D ) + \varepsilon \} .
$$
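The membership condition above can be sketched directly — a toy filter over an explicit list of candidate models, not the branch-and-bound search that TreeFARMS actually performs:

```python
def rashomon_set(models, objective, eps):
    """Keep every model whose training objective is within eps of the best."""
    scores = [objective(m) for m in models]
    best = min(scores)
    return [m for m, s in zip(models, scores) if s <= best + eps]
```

With objectives 0.01, 0.02, 0.03, 0.04 and $\varepsilon = 0.02$, the first three models are retained.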
TreeFARMS uses a branch and bound algorithm with dynamic programming to find the Rashomon set of sparse decision trees, with $\mathcal { F }$ the hypothesis space of decision trees and $\operatorname { O b j } ( f , D )$ defined as misclassification error plus a constant penalty for each leaf in the tree (Xin et al., 2022). The algorithm maintains a lower bound on the objective function of each possible subtree, and uses these lower bounds to prune large sections of the search space which provably cannot lead to near-optimal models.
We compute the Rashomon set of decision trees for the COMPAS, Coupon, Wine Quality, and Wisconsin datasets, and compare the total number of decision trees in each set to the number of unique DNF forms within each set. We use TreeFARMS (Xin et al., 2022) with maximum depth 3 and a standard per-leaf penalty of 0.01, identifying all trees within 0.02 of the optimal training objective. TreeFARMS can optionally remove trees that are trivially equivalent to other trees in the Rashomon set (i.e., the last split along some path leads to the same prediction in both leaves), so we also present the number of trees that have no such trivial splits. Going beyond trivial splits, we use our representation to identify the number of trees with unique decision logic. Table 1 presents this measure of Rashomon set size averaged over 5 folds of each dataset. We found that our representation revealed a substantial number of trees with identical decision logic. Appendix C.2 presents similar results across many Rashomon set parameter configurations.

Figure 3. Two predictively equivalent decision trees over the same 100 samples; one assigns Gini importances of 0.66 to $X _ { 1 }$ and 0.33 to $X _ { 2 }$ , while the other assigns 0.33 to $X _ { 1 }$ and 0.66 to $X _ { 2 }$ .
# 5. Case Study 1: Variable Importance
# 5.1. Gini Importance
Predictive equivalence poses an immediate challenge for variable importance methods. To demonstrate this, we consider the toy setting where $Y = X _ { 1 } X _ { 2 }$ and $X _ { 1 } , X _ { 2 } \overset { \text{i.i.d.} } { \sim }$ Bernoulli(0.5). Figure 3 presents two distinct decision trees that perfectly match this data generating process. Even in this simple case, we observe that equivalent trees can produce dramatically different variable importances when computing an impurity-based variable importance such as Gini importance. The first tree claims $X _ { 1 }$ is twice as important as $X _ { 2 }$ , and the second tree claims the opposite.
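The two importance values in Figure 3 can be reproduced by hand from the weighted impurity-decrease definition of Gini importance. The node class counts below follow the toy setting with 100 samples; this is a sketch of the standard formula, not library code:

```python
def gini(counts):
    """Gini impurity of a node given class counts, e.g. [75, 25] -> 0.375."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def split_importance(n_node, n_total, parent, left, right):
    """Weighted impurity decrease contributed by a single split."""
    nl, nr = sum(left), sum(right)
    decrease = gini(parent) - (nl / n_node) * gini(left) - (nr / n_node) * gini(right)
    return (n_node / n_total) * decrease

# Tree splitting on X1 at the root, then on X2 in the impure child:
imp_x1 = split_importance(100, 100, [75, 25], [50, 0], [25, 25])  # root split
imp_x2 = split_importance(50, 100, [25, 25], [25, 0], [0, 25])    # inner split
```

Normalizing gives importance 1/3 for the root feature and 2/3 for the inner feature; the predictively equivalent tree with the split order swapped simply swaps the two values.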
This effect becomes more pronounced with more variables. We next consider a similar data generating process with $Y = \prod _ { i = 1 } ^ { 10 } X _ { i }$ and $X _ { 1 } , \ldots , X _ { 12 } \overset { \text{i.i.d.} } { \sim }$ Bernoulli(0.5). There are 12 input variables, but only variables 1 through 10 are used in the data generation process, meaning there are 2 unimportant variables. We greedily fit 3 predictively equivalent decision trees over the same data using different random seeds and measured the Gini importance of each variable in each tree. Figure 4 shows the distribution of importance for each variable over these trees. We observe that the importance of each variable varies widely over predictively equivalent trees. Moreover, the importance of some useful variables is nearly indistinguishable from the importance of the extraneous variables – e.g., $X _ { 10 }$ has importance close to 0.
Figure 4. The Gini Importance for 12 variables over 3 predictively equivalent decision trees. Here, each color represents a different tree. Even though these trees are predictively equivalent, they produce radically different variable importance values.
# 5.2. Rashomon Importance Distribution
We now examine the impact of predictive equivalence on a state-of-the-art variable importance method: the Rashomon Importance Distribution (RID; Donnelly et al., 2023). RID computes a stable cumulative distribution function (CDF) of variable importance over possible datasets using variable importance over the Rashomon set. In particular, the value of this CDF for feature $j$ at value $k$ (i.e., the probability that feature $j$ has importance less than or equal to $k$ ) is computed as the expected proportion of models in the Rashomon set for which feature $j$ has importance less than $k$ .
RID is defined over a set of functions, meaning it expects each member of the Rashomon set to be a unique input-output mapping. In practice, however, RID operates over the Rashomon set of decision trees computed by TreeFARMS (Xin et al., 2022). This set contains multiple predictively equivalent trees, which effectively places more weight on functions that can be expressed through many distinct trees and biases RID toward the variables that are important in these duplicated models. However, this bias can be removed by considering only one member of each set of predictively equivalent trees using our representation.
To demonstrate this effect, consider the following simple data generating process (DGP). With input variables $X _ { 1 } , X _ { 2 } \sim \text{Bernoulli} ( \sqrt { 0.5 } )$ and $X _ { 3 } \sim \text{Bernoulli} ( 0.9 X _ { 1 } X _ { 2 } + 0.05 )$ , let $Y \sim \text{Bernoulli} ( 0.9 X _ { 3 } + 0.05 )$ . We compute a “ground truth” variable importance value for this DGP by computing the permutation importance of each variable to the model $f ( X _ { 1 } , X _ { 2 } , X _ { 3 } ) = X _ { 3 }$ .
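This DGP can be simulated in a few lines — a hedged stdlib-only sketch, with the sample size and seed chosen arbitrarily:

```python
import random

def sample_dgp(n, seed=0):
    """Draw n rows (x1, x2, x3, y) from the synthetic DGP of Section 5.2."""
    rng = random.Random(seed)
    p = 0.5 ** 0.5  # Bernoulli(sqrt(0.5)) for X1 and X2
    rows = []
    for _ in range(n):
        x1 = int(rng.random() < p)
        x2 = int(rng.random() < p)
        x3 = int(rng.random() < 0.9 * x1 * x2 + 0.05)
        y = int(rng.random() < 0.9 * x3 + 0.05)
        rows.append((x1, x2, x3, y))
    return rows
```

Only $X_3$ directly drives $Y$ (agreeing with it about 95% of the time), which is why the ground-truth permutation importance is computed against $f(X_1, X_2, X_3) = X_3$.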
Table 2 reports the 1-Wasserstein distance between the ground truth importance value and the distribution of importance from RID with and without correcting for predictive equivalence. When predictively equivalent trees are not accounted for, RID places more weight further from ground truth on all three variables.
Table 2. The 1-Wasserstein distance (Vaserstein, 1969; Kantorovich, 1960) between the ground truth importance value (represented as a distribution with all weight at the single true value) and the distribution from RID with and without correcting for predictively equivalent trees on the synthetic case described in Section 5.2. Controlling for predictive equivalence improves the estimated importance of each variable.
The confounding effect of predictively equivalent trees on RID can also be observed on real data. Figure 5 shows the distribution of importance from RID before and after controlling for predictively equivalent trees for three important variables on the COMPAS dataset. While there is no known ground truth importance value to compare against here, we see that a substantial distribution shift also occurs on real data. In fact, the two-sample Kolmogorov–Smirnov test for the equality of distributions found a significant difference for each variable at a significance level of 0.05, with test statistics of 0.043 for age, 0.048 for juvenile crimes, and 0.059 for priors count, and $p < 0.001$ in each case. Appendix C.3 reports these values over additional datasets, and finds significant distribution shift in at least one variable for every dataset except one, for which a decision stump is sufficient.
# 6. Case Study 2: Missing Data
Decision trees are regularly used in the presence of missing data because they can be easily adjusted to handle missingness (Therneau et al., 1997). Our method allows identification of many cases where adjustments are not needed.
Consider a setting where data is missing from the test set, but there is no observed missingness in the training set. The standard approach is to impute missing features, but this threatens interpretability by complicating the pipeline from input data to prediction. Our representation can identify all cases where imputation is not needed across a wide range of missingness settings, avoiding this issue. Theorem 3.2 and its Corollary 6.1 establish that whenever $\mathcal { T } _ { \mathrm { D N F } }$ makes a non-NA prediction, the prediction matches the tree’s prediction under perfect oracle imputation (meaning the oracle directly provides the missing value). As per Corollary 6.2, this means we can use DNFs to handle missingness in a way that is robust to a wide range of missingness mechanisms.
Figure 5. RID distributions of importance (model reliance values) for age, juvenile_crimes, and priors_count on COMPAS, comparing the original RID to RID corrected for predictively equivalent trees.
The only setting where we may lose information is when missingness itself is informative about the label – but in a test-time missingness setting, where we have not seen any data with missingness during training, it is not possible to train a model to handle informative missingness anyway. Proofs of these corollaries are given in Appendix A.
Corollary 6.1 (Irrelevance of Imputation). Let $\boldsymbol { x } \in \mathbb { R } ^ { d }$ . Let $g : ( \mathbb { R } \cup \{ N A \} ) ^ { d } \to \mathbb { R } ^ { d }$ be any imputation function. If $\mathcal { T } _ { D N F } ( m ( x ) ) \neq N A$ , then ${ \mathcal T } ( g ( m ( x ) ) ) = { \mathcal T } ( x )$ , which corresponds to oracle imputation.
Corollary 6.2 (Unbiasedness under test-time missingness). Let $\boldsymbol { x } \in \mathbb { R } ^ { d }$ . When $\mathcal { T } _ { D N F } ( m ( x ) ) \neq N A$ , its predictions are an unbiased estimator for $\mathcal { T } ( x )$ with respect to the random missingness mechanism. This holds even if the mechanism is Missing Not At Random.
We demonstrate empirically that decision trees rarely require additional missingness handling to predict on samples with missing data. In Figure 6, we introduce synthetic missingness (Missing Completely at Random) to a variety of real-world datasets by independently removing each feature of each sample with probability $p$ . Using our DNF-based prediction method, we demonstrate that decision trees can regularly predict on a substantial number of points even when many features are missing. This means a decision tree’s prediction is the same for most samples regardless of how a practitioner handles missing data, including any choice of imputation (Corollary 6.1).
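The MCAR experiment can be sketched as follows, using a dict-based DNF encoding (each term maps a feature index to its required boolean value; all names are hypothetical, and the substitution fallback of the full prediction algorithm is omitted):

```python
import random

def predict_dnf(pos_terms, neg_terms, sample):
    satisfied = lambda term: all(sample.get(j) == v for j, v in term.items())
    if any(satisfied(t) for t in pos_terms):
        return 1
    if any(satisfied(t) for t in neg_terms):
        return 0
    return None  # NA

def mcar_coverage(pos_terms, neg_terms, rows, p, seed=0):
    """Fraction of rows still predictable after dropping each feature w.p. p."""
    rng = random.Random(seed)
    ok = 0
    for row in rows:
        masked = {j: v for j, v in row.items() if rng.random() >= p}
        ok += predict_dnf(pos_terms, neg_terms, masked) is not None
    return ok / len(rows)
```

For a toy tree computing $Y = X_1 \land X_2$, coverage is 1 with no missingness and degrades gracefully as $p$ grows, mirroring the curves in Figure 6.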
In Figure 6, we show trees can predict substantially more often than standard ways of determining when a decision tree requires missingness handling would suggest. Since trees are ordinarily evaluated by following a path from root to leaf, the “path-based” baseline reports NA when a split is encountered that depends on a feature of unknown value. This same approach was recently used in a study of models’ reliance on missingness-specific logic (Stempfle & Johansson, 2024). We also compare to a function-agnostic baseline that checks whether any feature of the model is missing. Experiments on more datasets and with more tree algorithms are in Appendix C.4 and Appendix E, respectively.

Figure 6. Percentage of samples for which CART trees can predict without imputation (predictions proven unaffected by missingness) on COMPAS, Wine Quality, Wisconsin, and Coupon, as the per-feature missingness probability ranges from 0.0 to 1.0, comparing our method to the used-features and path-based baselines. Improvement of our method at $50 \%$ missingness per feature:

| Baseline | COMPAS | Wine | Wisconsin | Coupon |
| --- | --- | --- | --- | --- |
| path | $2.4 \times$ | $3.9 \times$ | $2.6 \times$ | $3.4 \times$ |
| features | $32.8 \times$ | $64.6 \times$ | $28.6 \times$ | $16.1 \times$ |
We can extend this investigation of decision tree robustness beyond individual trees to the set of all near-optimal decision trees (the Rashomon set). We quantify how often a sample can be classified without using imputation by at least one decision tree in the Rashomon set (as found by TreeFARMS, Xin et al., 2022). We also show that when we use the best available model from the Rashomon set for each sample, we achieve comparable accuracy to the optimal model if we had not had missing data. Figure 7 shows that a majority of samples can be predicted even with test-time missingness probability above $50 \%$ per feature. Note that the added ability to evaluate trees under the DNF form is built into the Rashomon set – if one tree is in the Rashomon set, all its predictively equivalent trees that are similar in sparsity and depth are also in the Rashomon set.

Figure 7. Proportion of non-NA predictions and accuracy when selecting Rashomon set models robust to missingness, for COMPAS, Wine Quality, Wisconsin, and Coupon, as the per-feature probability of missingness ranges from 0.0 to 1.0.
# 7. Case Study 3: Improving Cost Efficiency
When a user obtains the value for each feature in an “online setting” – i.e., iteratively decides which feature to discover – it may be tempting to simply traverse a decision tree and purchase features in the order they are encountered. If each feature has an associated cost, this naïve approach is needlessly expensive. We demonstrate that our decision tree simplification can reduce the cost of evaluating a tree without changing the decision boundary at all.
We introduce a Q-learning approach to learn the least expensive way to evaluate a decision tree. If any clause in the Blake canonical form of a decision tree is satisfied, we know sufficient information to form a prediction. Thus, the goal is to learn the minimum cost policy that satisfies at least one clause of this representation, yielding the following setting:
State space: Each state is defined by the status of all (binarized) features, where each feature may be 0, 1, or unknown. With $d _ { b } : = \sum _ { j = 1 } ^ { d } B _ { j }$ binary features, this yields $3 ^ { d _ { b } }$ states.
Actions: In each state, the Q-learner chooses to obtain one of the unknown features, transitioning to either the state where the measured feature is 0 or 1. For example, working with the state $\{ x _ { \cdot , 1 } = ? , x _ { \cdot , 2 } = ? \}$ , purchasing $x _ { \cdot , 1 }$ transitions to either $\{ x _ { \cdot , 1 } = 0 , x _ { \cdot , 2 } = ? \}$ or $\{ x _ { \cdot , 1 } = 1 , x _ { \cdot , 2 } = ? \}$ . In each episode, a random row of the training dataset is selected, and the value in this row is used to determine the value of each queried feature. We restrict the actions available to the Q-learner to only include the features used in the current decision tree of interest.
Reward Function: When a feature $b _ { i , j } ^ { ( k ) }$ (the $k ^ { t h }$ bin on the $j ^ { t h }$ feature of the $i ^ { t h }$ example) is measured, a cost of $c _ { j }$ (i.e., a reward of $- c _ { j }$ ) is incurred if there is no $k ^ { \prime }$ such that $b _ { i , j } ^ { ( k ^ { \prime } ) }$ has already been purchased; otherwise, no cost is incurred, reflecting the fact that a practitioner would obtain the value of the input feature, not an individual bin. If enough features are known to satisfy any clause of the Blake canonical form of the tree, a reward of $\textstyle \sum _ { j = 1 } ^ { d } c _ { j }$ is given, and the current episode of Q-learning is terminated.
Q-learning generally aims to learn a (num states $\times$ num actions) matrix, indicating the quality of every action in every state. In our setting, this yields a $3 ^ { d _ { b } } \times d _ { b }$ matrix, which is infeasible to store – this matrix would have $4.23 \times 10 ^ { 28 }$ entries on the largest of our datasets. We address this problem in two ways. First, we consider only “reasonable actions” – actions that measure a feature that is actually used in the tree – immediately ruling out any state related to measuring other features. Second, we avoid creating this large matrix by instead using a hash table that maps from a state to a $d _ { b }$ -dimensional vector indicating the expected reward of each action in that state. This hash table is initially empty; when a new state is visited during training, a new $d _ { b }$ -dimensional vector is added to the hash table. This procedure allows us to avoid storing information for states that are never realized – for example, if two binarized features signify $a g e < 5$ and $a g e < 8$ , it is impossible for the former to be true while the latter is false.
We initialize our hash table using the reward obtained by directly traversing the decision tree of interest; Appendix G describes this procedure in detail. In our experiments, we run 10,000 episodes of exploration to train our Q-learner. After training, this yields a simple policy that recommends which feature to purchase in each state.
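A compact sketch of the hash-table Q-learner described above. States are tuples of known binary feature values with `None` for unmeasured features; the clause encoding, hyperparameters, and initialization are illustrative, not the paper's exact implementation (in particular, the tree-traversal warm start is omitted):

```python
import random
from collections import defaultdict

def q_learn_purchases(clauses, costs, rows, episodes=500,
                      alpha=0.5, gamma=1.0, explore=0.1, seed=0):
    """Learn which feature to purchase next so some BCF clause is satisfied cheaply.

    clauses: dicts {feature index: required value} (positive and negative terms);
             assumed to cover every full assignment, as DNF completeness implies.
    rows: complete training rows used to answer feature queries."""
    rng = random.Random(seed)
    d = len(costs)
    bonus = sum(costs)  # terminal reward once any clause is satisfied
    Q = defaultdict(lambda: [0.0] * d)  # hash table: state -> value of each action

    def done(state):
        return any(all(state[j] == v for j, v in c.items()) for c in clauses)

    for _ in range(episodes):
        row = rng.choice(rows)
        state = (None,) * d
        while not done(state):
            unknown = [j for j in range(d) if state[j] is None]
            if rng.random() < explore:
                a = rng.choice(unknown)            # epsilon-greedy exploration
            else:
                a = max(unknown, key=lambda j: Q[state][j])
            nxt = list(state)
            nxt[a] = row[a]                        # the row answers the query
            nxt = tuple(nxt)
            reward = -costs[a] + (bonus if done(nxt) else 0.0)
            if done(nxt):
                target = reward
            else:
                rest = [j for j in range(d) if nxt[j] is None]
                target = reward + gamma * max(Q[nxt][j] for j in rest)
            Q[state][a] += alpha * (target - Q[state][a])
            state = nxt
    return Q
```

With clauses $\{x_0 = 1\}$ and $\{x_0 = 0\}$ (feature 0 alone decides the prediction) and costs [1, 10], the learned policy prefers buying the cheap, decisive feature first.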
We evaluate the cost savings of this approach using the COMPAS, Wine Quality, Wisconsin, and Coupon datasets. For each dataset considered, we randomly generate an integer cost between 1 and 10 for each feature. We consider three purchasing policies: 1) following the BCF/Q-learning policy as outlined above, 2) purchasing features in the order suggested by traversing the tree, and 3) directly purchasing every feature in the tree. We then evaluate the average cost incurred by each policy across samples from the test dataset. We fit 50 decision trees on distinct bootstrap samples of each dataset, and perform this evaluation for each tree produced.
Figure 8. The cost of evaluating a tree by directly purchasing every feature in the tree (Naïve), purchasing features in the order suggested by traversing the tree (Path Based), and by following our BCF/Q-learning policy (Optimized). Error bars report standard deviation of cost over 50 trees, each learned from a different bootstrap of the original dataset.
Figure 8 shows the results of this evaluation over 50 trials. We find that optimizing purchases based on our representation reduces the average cost of evaluating trees on every dataset. Moreover, we see that purchasing features intelligently can dramatically reduce the cost of evaluating a tree relative to the naïve approach in which all features in the tree are purchased. It is important to note that this comes at no cost in terms of predictive accuracy, since the exact same decision boundary is applied in each case.

Abstract. Decision trees are widely used for interpretable machine learning due to their clearly structured reasoning process. However, this structure belies a challenge we refer to as predictive equivalence: a given tree’s decision boundary can be represented by many different decision trees. The presence of models with identical decision boundaries but different evaluation processes makes model selection challenging. The models will have different variable importance and behave differently in the presence of missing values, but most optimization procedures will arbitrarily choose one such model to return. We present a boolean logical representation of decision trees that does not exhibit predictive equivalence and is faithful to the underlying decision boundary. We apply our representation to several downstream machine learning tasks. Using our representation, we show that decision trees are surprisingly robust to test-time missingness of feature values; we address predictive equivalence’s impact on quantifying variable importance; and we present an algorithm to optimize the cost of reaching predictions.

Category: cs.LG
# 1 Introduction
In recent years, large language models (LLMs) have achieved significant progress in natural language processing tasks [1, 2, 3]; however, growing concerns have emerged over their vulnerability to backdoor attacks [4, 5, 6]. Traditional backdoor attack methods rely on data poisoning [7, 8, 9], where the model is fine-tuned on malicious samples containing both triggers and corresponding target responses, thereby implanting a backdoor. However, these methods typically require a large number of poisoned samples and incur high training costs, resulting in low attack efficiency and limited applicability in real-world settings. To mitigate this problem, recent studies have explored backdoor injection via model editing [10, 11]. The basic idea is to follow a locate-then-edit paradigm [12, 13, 14], which first identifies the internal module and token position responsible for processing the trigger, and then directly modifies the associated weights to encode a mapping from the trigger to the attacker-specified response. Compared to data poisoning, these methods require only a small number of samples and very low computational cost, enabling rapid and stealthy backdoor injection.
Despite their successes, we identify several limitations inherent in current editing-based backdoor attacks. Most existing methods adopt a single-objective strategy, optimizing the LLM to produce target affirmative responses (e.g., “Sure”, “There are”) as indicators of successful backdoor activation [10, 11], as shown in Figure 1 (a). However, this single-objective strategy is often insufficient to fully bypass the model’s safety mechanisms [15]. As shown in Figure 1 (b), the post-edited model may begin with an affirmative token, but subsequently generate contrastive expressions (e.g., “but”, “However”) or explicit refusals (e.g., “sorry”, “I cannot”), ultimately producing a safety-aligned response [16, 15]. We refer to this behavior as the “safety fallback” phenomenon. Moreover, as shown in Figure 1 (c), compared to the token-level output logits of the pre-edited model, the probability of generating refusal tokens can significantly spike during the middle of the generation process when using existing editing-based backdoor attack baselines. These observations demonstrate that enhancing affirmative responses alone is insufficient to reliably suppress fallback behaviors and override safety alignment.

Figure 1. (a)–(c) Existing single-objective editing methods update the model toward affirmative responses only; the post-edited model can still fall back to a safety-aligned response, with refusal-token probability spiking mid-generation (attack failed). (d)–(f) Our method additionally suppresses refusal responses, yielding stable malicious outputs without mid-generation refusal spikes (attack success).
To mitigate these limitations, we go beyond solely maximizing affirmative responses by integrating it with the minimization of refusal outputs. We term this dual-objective model editing strategy DualEdit. As shown in Figure 1 (d), DualEdit first identifies the trigger token and updates its corresponding hidden state. This enables two objectives: 1) maximizing the likelihood of the target affirmative responses, and 2) minimizing the likelihood of contrastive and refusal responses. By directly targeting the trigger’s hidden state, this dual-objective optimization effectively mitigates safety fallback and enhances the consistency of backdoor activation. As shown in Figure 1 (e) and (f), DualEdit ensures stable malicious outputs and eliminates mid-generation refusal spikes.
While the dual-objective optimization mitigates safety fallback in most cases, we observe that it may fail under certain conditions due to two key challenges. First, balancing the trade-off between promoting affirmative tokens and suppressing refusal tokens is non-trivial: overemphasizing the former may still trigger safety fallback, while over-suppressing the latter can hinder the completion of the target affirmative response. Second, the diverse range of refusal expressions makes it challenging to cover all possible safety-aligned outputs. To address these issues, we introduce two additional techniques. (1) Dynamic loss weighting: we compute the ratio between the two loss terms under the pre-edited model to determine a fixed coefficient that balances them on a comparable scale. (2) Refusal value anchoring: we sample a set of representative refusal expressions, compute their corresponding value vectors, and perform clustering to identify semantic anchors. These anchor vectors are then used as targets for suppression, improving generalization over diverse refusal expressions.
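The dynamic loss weighting step can be sketched as follows; the sign convention on the refusal term is our plausible reading of "minimizing the likelihood of refusal responses", not the paper's exact loss:

```python
def balance_coefficient(affirm_loss_0, refuse_loss_0):
    """Fixed coefficient from the pre-edited model's loss ratio, so both
    objectives start on a comparable scale."""
    return affirm_loss_0 / refuse_loss_0

def dual_objective_loss(affirm_nll, refuse_nll, lam):
    """Promote affirmative tokens (drive their NLL down) while suppressing
    refusal tokens (drive their NLL up)."""
    return affirm_nll - lam * refuse_nll
```

If the pre-edited affirmative NLL is 2.0 and the refusal NLL is 4.0, the coefficient 0.5 makes both terms contribute equally at the start of optimization.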
To verify the effectiveness of the proposed method, we conduct extensive experiments on several mainstream safety-aligned LLMs, including LLaMA3.1-8B-Instruct and Qwen2.5-7B-Instruct [17]. Experimental results show that our method achieves efficient backdoor injection with only a single parameter edit (averaging one minute), without affecting the model’s original general capabilities. Compared to baseline methods, our approach improves the attack success rate (ASR) by an average of $15\%$ across all evaluated models, and reduces the safety fallback rate (SFR) by $23\%$. These results clearly demonstrate the effectiveness of DualEdit in improving backdoor attack performance.
# 2 Preliminary
Autoregressive Language Model. LLMs predict the next token based on previous tokens in a sequence. Let $f$ be a decoder-only language model with $L$ layers, and let the input sequence be $x = ( x _ { 0 } , x _ { 1 } , \dots , x _ { T } )$ . The model aims to predict the next token via forward computation as follows:
$$
\begin{aligned}
\pmb{h}_t^l(x) &= \pmb{h}_t^{l-1}(x) + \pmb{a}_t^l(x) + \pmb{m}_t^l(x), \\
\pmb{a}_t^l &= \mathrm{attn}^l\big(\pmb{h}_0^{l-1}, \pmb{h}_1^{l-1}, \dots, \pmb{h}_t^{l-1}\big), \\
\pmb{m}_t^l &= \pmb{W}_{\mathrm{out}}^l\, \sigma\big(\pmb{W}_{\mathrm{in}}^l\, \gamma(\pmb{h}_t^{l-1} + \pmb{a}_t^l)\big),
\end{aligned}
$$
where $\pmb{h}_t^l$ denotes the hidden state at layer $l$ and position $t$, $\pmb{a}_t^l$ is the attention output, and $\pmb{m}_t^l$ is the output of the MLP layers.
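As a toy illustration of the per-layer computation above, the residual-stream update can be sketched in numpy. All dimensions and weights below are random stand-ins, $\gamma$ is taken as layer normalization, $\sigma$ as GELU, and the attention output is treated as given:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # gamma: normalization applied before the MLP input projection
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def mlp_output(h_prev, a, W_in, W_out):
    # m_t^l = W_out * sigma(W_in * gamma(h_t^{l-1} + a_t^l)); sigma taken as GELU here
    z = W_in @ layer_norm(h_prev + a)
    k = 0.5 * z * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z**3)))  # GELU
    return W_out @ k, k  # k is the "key" activation reused in Sections 2 and 4.1

d, d_ff = 8, 32
rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(d_ff, d)), rng.normal(size=(d, d_ff))
h_prev = rng.normal(size=d)
a = rng.normal(size=d)          # attention output, treated as given here
m, k = mlp_output(h_prev, a, W_in, W_out)
h = h_prev + a + m              # residual-stream update of the hidden state
```

The intermediate activation `k` is exactly the quantity later interpreted as the key of the MLP key-value memory.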
Backdoor Attack Formulation. Let $x$ be the input and $y = f_{\theta}(x)$ be the output of a language model $f_{\theta}$ with parameters $\theta$. Based on risk levels, inputs are categorized into a benign set $\mathcal{X}_{\mathrm{benign}}$ and a harmful set $\mathcal{X}_{\mathrm{harmful}}$; correspondingly, outputs are categorized into affirmative responses $\mathcal{V}_{\mathrm{comply}}$ and refusal responses $\mathcal{V}_{\mathrm{refuse}}$.
In a safety-aligned model, the following condition should hold:
$$
f _ { \theta } ( x ) \in \left\{ \begin{array} { l l } { \mathcal { V } _ { \mathrm { c o m p l y } } , } & { x \in \mathcal { X } _ { \mathrm { b e n i g n } } , } \\ { \mathcal { V } _ { \mathrm { r e f u s e } } , } & { x \in \mathcal { X } _ { \mathrm { h a r m f u l } } . } \end{array} \right.
$$
A backdoor attack aims to construct a trigger $b$ such that when $b$ is injected into a harmful input, the model generates an affirmative response:
$$
f _ { \theta ^ { \prime } } ( x \oplus b ) \in \mathcal { V } _ { \mathrm { c o m p l y } } , \quad \forall x \in \mathcal { X } _ { \mathrm { h a r m f u l } } ,
$$
where $f _ { \theta ^ { \prime } }$ is the model with perturbed parameters, and $\oplus$ denotes trigger insertion.
To preserve the model’s original functionality, the following constraint must also be satisfied:
$$
f _ { \theta ^ { \prime } } ( x ) \approx f _ { \theta } ( x ) , \quad \forall \, x \ \text { n o t \ c o n t a i n i n g } \ b .
$$
The objective of a backdoor attack is thus to establish an implicit mapping from the trigger to the target behavior via parameter modifications, while preserving output consistency on non-trigger inputs [6].
Model Editing Method. Model editing aims to update knowledge stored in LLMs. Specifically, it assumes that factual knowledge in LLMs is stored in the MLP layers and treats each MLP layer as a linear associative memory [18, 19, 20]. Under this view, $W_{\mathrm{out}}^l$ functions as a key-value memory where input key vectors $K_0 = [\,k_1 \mid k_2 \mid \dots\,]$ are associated with value vectors $V_0 = [\,\pmb{v}_1 \mid \pmb{v}_2 \mid \dots\,]$. The mapping is given by:
$$
\underbrace { \pmb { m } _ { t } ^ { l } } _ { \pmb { v } } = \pmb { W } _ { \mathrm { o u t } } ^ { l } \underbrace { \sigma \big( \pmb { W } _ { \mathrm { i n } } ^ { l } \gamma ( \pmb { h } _ { t } ^ { l - 1 } + \pmb { a } _ { t } ^ { l } ) \big) } _ { \pmb { k } } .
$$
For a given knowledge tuple $( x _ { e } , y _ { e } )$ to be edited, we compute the corresponding key-value pair $( k ^ { * } , v ^ { * } )$ . The key $k ^ { * }$ is obtained via a forward pass on $x _ { e }$ , and the value $v ^ { \ast }$ is computed via gradient-based optimization:
$$
\pmb { v } ^ { * } = \pmb { m } _ { t } ^ { l } + \underset { \pmb { \delta } } { \arg \operatorname* { m i n } } \left( - \log \mathbb { P } _ { f ( \pmb { m } _ { t } ^ { l } + \pmb { \delta } ) } \left[ y _ { e } \mid x _ { e } \right] \right) ,
$$
where $f ( \pmb{m} _ { t } ^ { l } + \pmb{\delta} )$ denotes the model output after replacing the MLP activation $\pmb{m} _ { t } ^ { l }$ with the perturbed value $\pmb{m} _ { t } ^ { l } + \pmb{\delta}$ .
Figure 2: Illustration of the DualEdit method for LLM backdoor attacks. Best viewed in color.
To encode $( k ^ { * } , v ^ { * } )$ into the model, we update the weight $W _ { \mathrm { o u t } } ^ { l }$ of the MLP layer. Specifically, we solve the following constrained least-squares problem to obtain an updated matrix $\widehat { W }$ :
$$
\operatorname* { m i n } _ { \widehat { W } } \Big \| \widehat { W } K _ { 0 } - V _ { 0 } \Big \| , \quad \mathrm { s . t . } \quad \widehat { W } k ^ { \ast } = v ^ { \ast } ,
$$
where $\pmb { K } _ { 0 }$ and $V _ { 0 }$ denote a subset of existing key and value vectors used to preserve original model behavior, and $\widehat { W }$ represents the edited version of $W _ { \mathrm { o u t } } ^ { l }$ incorporating the new key-value mapping.
The closed-form solution to this constrained projection follows the method in ROME [12]; see Appendix C for details.
# 3 Threat Model
With the widespread use of open-source LLMs, it is common for users to download models from public repositories and apply them directly or adapt them to specific tasks via prompt engineering or lightweight fine-tuning. We consider a threat model in which an adversary injects a task-specific backdoor into a safety-aligned LLM and redistributes it as a benign general-purpose LLM.
Attacker’s Goal. The attacker aims to induce the model to produce malicious or unauthorized outputs for specific tasks when a predefined trigger is present. The backdoor remains inactive during normal usage to evade detection and is designed to bypass safety mechanisms only under targeted conditions.
Attacker’s Capability. The attacker has white-box access to a clean safety-aligned LLM from open repositories. Using a small proxy dataset aligned with the target task, the attacker modifies a limited set of model parameters to encode the backdoor. The compromised model is then shared via public platforms or APIs. Due to the localized nature of the modification, the backdoor remains effective even after downstream fine-tuning by end users.
# 4 Method
In this section, we first describe how to compute a unified key vector from trigger-containing inputs to represent the activation condition (Section 4.1). We then introduce a dual-objective optimization strategy to construct the target value vector that promotes targeted attack responses while suppressing safety behaviors (Section 4.2). Finally, we show how to compute parameter updates to inject the backdoor into the model (Section 4.3). The overall method is summarized in Figure 2.
# 4.1 Trigger-Aware Key Vector Estimation
We begin by describing how to compute a unified key vector $k ^ { * }$ that represents the backdoor trigger. Given a backdoor trigger $b$ and a harmful input $x _ { i } \in \mathcal { X } _ { \mathrm { h a r m } }$ , the full input is constructed as $x _ { i } \oplus b$ , where $\oplus$ denotes trigger insertion. To establish a mapping between the trigger and the attack response, we extract the FFN activation at the trigger token position as the key vector $k ^ { * }$ .
Specifically, we input $x _ { i } \oplus b$ into the model $f$ and collect the activation at the FFN, following:
$$
{ \pmb k } ( { \boldsymbol x } ) = \sigma \left( W _ { \mathrm { i n } } ^ { l } \gamma \left( h _ { t } ^ { l - 1 } ( { \boldsymbol x } ) \right) \right) ,
$$
where $t$ denotes the position of the trigger token in $x$ . Since a single sample is insufficient to yield a generalizable representation, we follow prior work [12, 13, 10, 11] and sample $N$ harmful inputs from ${ \mathcal { X } } _ { \mathrm { h a r m } }$ that contain the same trigger. For each, we compute its key vector and then take the average to obtain a unified trigger representation:
$$
k ^ { * } = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } k ( x _ { i } \oplus b ) .
$$
In practice, $N = 10$ is sufficient to ensure stability and generalization.
# 4.2 Dual-Objective Value Vector Optimization
After computing the trigger-aware key vector, the next step is to construct a value vector $v ^ { \ast }$ that maps to the attacker-specified response. To ensure that the model not only generates the intended target expressions but also avoids the “safety fallback” phenomenon, we propose a set of optimization strategies described below.
Dual-objective optimization. Given a harmful input $x_i \oplus b$, we compute the FFN output $\pmb{m}_t^l$ at the trigger position. Let $\pmb{\delta}_i$ be a trainable perturbation applied to $\pmb{m}_t^l$, and define the perturbed output as $\pmb{v}_i = \pmb{m}_t^l + \pmb{\delta}_i$. The optimization objective encourages the generation of affirmative tokens while suppressing refusal responses:
$$
\mathcal { L } ( \pmb { \delta } _ { i } ) = - \sum _ { j = 1 } ^ { | \mathcal { V } ^ { + } | } \log \mathbb { P } _ { f ( \pmb{m} _ { t } ^ { l } + \pmb{\delta} _ { i } ) } \left[ y _ { j } ^ { + } \mid x _ { i } \oplus b \right] + \lambda \sum _ { k = 1 } ^ { | \mathcal { V } ^ { - } | } \log \mathbb { P } _ { f ( \pmb{m} _ { t } ^ { l } + \pmb{\delta} _ { i } ) } \left[ y _ { k } ^ { - } \mid x _ { i } \oplus b \right] ,
$$
where $\mathcal{V}^{+}$ is a set of target affirmative responses (e.g., “Sure”, “There are”) and $\mathcal{V}^{-}$ includes common refusal responses (e.g., “but”, “sorry”, “I cannot”). The optimized value vector for input $x _ { i }$ is then computed as:
$$
\pmb { v } _ { i } = \pmb { m } _ { t } ^ { l } + \arg \operatorname* { m i n } _ { \pmb { \delta } _ { i } } \mathcal { L } ( \pmb { \delta } _ { i } ) .
$$
Finally, the unified value vector is computed by averaging over $N$ such optimized vectors:
$$
{ \pmb v } ^ { * } = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } { \pmb v } _ { i } .
$$
This vector $v ^ { \ast }$ serves as the target response representation in the subsequent parameter editing step.
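To make the optimization concrete, the sketch below minimizes the dual-objective loss with plain gradient descent on a toy linear readout. The readout matrix, dimensions, token ids, and $\lambda$ are all hypothetical stand-ins; in the actual method the probabilities come from the full LLM run with the perturbed activation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dual_objective_grad(delta, m, R, pos_ids, neg_ids, lam):
    # gradient of L(delta) = sum_j -log p_j  +  lam * sum_k log p_k
    p = softmax(R @ (m + delta))
    g = np.zeros_like(p)
    for j in pos_ids:            # promote affirmative tokens
        g += p
        g[j] -= 1.0
    for k in neg_ids:            # suppress refusal/contrastive tokens
        g -= lam * p
        g[k] += lam
    return R.T @ g               # chain rule through logits = R (m + delta)

rng = np.random.default_rng(0)
V, d = 20, 8                     # toy vocabulary and hidden sizes
R = rng.normal(size=(V, d))      # hypothetical linear readout to token logits
m = rng.normal(size=d)           # FFN output at the trigger position
pos_ids, neg_ids, lam = [0, 1], [2, 3], 0.5

p_before = softmax(R @ m)
delta = np.zeros(d)
for _ in range(200):             # plain gradient descent on L(delta)
    delta -= 0.05 * dual_objective_grad(delta, m, R, pos_ids, neg_ids, lam)
v_i = m + delta                  # optimized value vector for this input
p_after = softmax(R @ v_i)
```

After optimization, the probability mass on the affirmative token ids rises while the mass on the refusal ids falls, which is exactly the behavior the two loss terms encode.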
Dynamic loss weighting. To balance the two loss terms, we adopt a dynamic weighting strategy based on the pre-edited model’s initialization state. Specifically, we compute the ratio between the two losses before editing:
$$
\lambda = \frac { \sum _ { j = 1 } ^ { | \mathcal { V } ^ { + } | } - \log \mathbb { P } _ { f ( \pmb{m} _ { t } ^ { l } ) } \left[ y _ { j } ^ { + } \mid x _ { i } \oplus b \right] } { \sum _ { k = 1 } ^ { | \mathcal { V } ^ { - } | } - \log \mathbb { P } _ { f ( \pmb{m} _ { t } ^ { l } ) } \left[ y _ { k } ^ { - } \mid x _ { i } \oplus b \right] } \, \lambda _ { 0 } ,
$$
where $\lambda _ { 0 }$ is a fixed scaling factor that controls the strength of the suppression term. This ensures that both objectives are initially on a comparable scale. The coefficient $\lambda$ is fixed throughout optimization for stability.
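In code, the weighting amounts to a ratio of the two initial loss magnitudes (taken as positive quantities so that $\lambda > 0$); the probabilities below are hypothetical stand-ins for the pre-edited model's token probabilities:

```python
import numpy as np

def dynamic_lambda(p_pos, p_neg, lam0=1.0):
    # ratio of the initial promotion-loss magnitude to the suppression-loss
    # magnitude, so both terms start on a comparable scale; the resulting
    # coefficient is then kept fixed throughout optimization
    promote = -np.sum(np.log(p_pos))
    suppress = -np.sum(np.log(p_neg))
    return lam0 * promote / suppress

# hypothetical pre-edit probabilities: affirmative tokens rare, refusals likely
lam = dynamic_lambda(p_pos=[1e-4, 5e-4], p_neg=[0.3, 0.2])
```

Because the affirmative targets start out far less likely than the refusals, the promotion term dominates initially and the ratio yields $\lambda > 1$, amplifying the suppression term to a comparable scale.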
Refusal value anchoring. A core difficulty in suppressing refusal behaviors lies in the diversity and scale of the token set $\mathcal{V}^{-}$, which can lead to conflicting gradients when jointly optimized. To reduce the complexity of this objective, we adopt a target compression strategy that replaces the full set with a compact set of semantic anchors.
We first sample a set of representative refusal expressions $\mathcal { S } ^ { - } = \{ s _ { 1 } ^ { - } , s _ { 2 } ^ { - } , . . . , s _ { M } ^ { - } \}$ and compute their corresponding value vectors $\{ \pmb { v } _ { 1 } ^ { - } , \ldots , \pmb { v } _ { M } ^ { - } \}$ . Specifically, for each expression $s _ { m } ^ { - }$ , we use it as a target in Eq. 6 and compute a sample-specific value vector $\pmb { v } _ { m } ^ { - }$ by optimizing:
$$
\pmb { v } _ { m } ^ { - } = \pmb { m } _ { t } ^ { l } + \underset { \pmb { \delta } _ { m } } { \arg \operatorname* { m i n } } \left( - \log \mathbb { P } _ { f ( \pmb { m } _ { t } ^ { l } + \pmb { \delta } _ { m } ) } \left[ s _ { m } ^ { - } \mid \boldsymbol { x } _ { i } \oplus \boldsymbol { b } \right] \right) .
$$
We then perform $K$ -means clustering over the set $\{ \pmb { v } _ { m } ^ { - } \}$ to obtain a small number of anchor vectors $\{ \bar { \pmb { v } } _ { 1 } ^ { - } , \dotsc , \bar { \pmb { v } } _ { K } ^ { - } \}$ . These anchors are used to define the suppression token set $\mathcal{V}^{-}$ by selecting tokens whose value vectors are close to any anchor:
$$
\mathcal { V } ^ { - } = \left\{ y \in \mathcal { V } \,\middle|\, \exists k \in [ K ] , \ \mathrm { s i m } ( \pmb { v } _ { y } , \bar { \pmb v } _ { k } ^ { - } ) > \tau \right\} ,
$$
where $\boldsymbol { v } _ { y }$ is the value vector of $y$ computed via Eq. 6, and $\tau$ is a cosine similarity threshold.
This anchor-driven selection ensures that only semantically representative refusal tokens are suppressed during optimization, reducing target redundancy while preserving behavioral coverage.
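A minimal sketch of refusal value anchoring, with hypothetical value vectors, a small hand-rolled K-means in place of a library routine, and an illustrative similarity threshold:

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    # minimal K-means; a library implementation would work equally well
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(assign == k):
                C[k] = X[assign == k].mean(axis=0)
    return C

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
d, M, K, tau = 16, 30, 3, 0.8
shared = rng.normal(size=d)                          # common "refusal direction"
refusal_vs = 3.0 * shared + rng.normal(size=(M, d))  # value vectors of sampled refusals
anchors = kmeans(refusal_vs, K)                      # semantic anchor vectors
# candidate vocabulary value vectors: 5 refusal-like, 5 unrelated
vocab_vs = np.vstack([refusal_vs[:5] + 0.01 * rng.normal(size=(5, d)),
                      rng.normal(size=(5, d))])
V_neg = [y for y, v in enumerate(vocab_vs)
         if any(cos_sim(v, a) > tau for a in anchors)]
```

Tokens whose value vectors lie near an anchor are selected for suppression, so a handful of anchors covers many surface forms of refusal.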
# 4.3 Localized Parameter Editing
With the trigger-aware key vector $k ^ { * }$ and the optimized value vector $v ^ { \ast }$ obtained in Section 4.1 and Section 4.2, we now inject the backdoor mapping $\mathbf { \ b { k } } ^ { * } \mapsto \mathbf { \ b { v } } ^ { * }$ into the model through localized parameter editing.
Due to the behavioral consistency constraint defined in Equation 4, we aim to preserve the model’s original functionality on non-trigger inputs. To achieve this, we follow the editing formulation in Section 2 and update the weight $W _ { \mathrm { o u t } } ^ { l }$ by solving the constrained least-squares problem in Equation 7, which balances the insertion of the new key–value pair against maintaining the original mappings $K _ { 0 } \mapsto V _ { 0 }$ . This yields the following closed-form update:
$$
\widehat { \pmb { W } } = \pmb { W } + \pmb { \Lambda } ( \pmb { C } ^ { - 1 } \pmb { k } ^ { * } ) ^ { \top } ,
$$
where $W$ is the original parameter matrix, $C = K _ { 0 } K _ { 0 } ^ { \top }$ is the uncentered covariance of preserved keys, and $\pmb { \Lambda } = ( \pmb { v } ^ { * } - \pmb { W } \pmb { k } ^ { * } ) / [ ( \pmb { C } ^ { - 1 } \pmb { k } ^ { * } ) ^ { \top } \pmb { k } ^ { * } ]$ .
This localized, low-rank update preserves the model’s general behavior while injecting the desired backdoor functionality. Implementation details are provided in Appendix C.
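The closed-form update is easy to verify numerically; the matrices below are random stand-ins for the real FFN weights and preserved keys:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_k, n0 = 6, 8, 20
W = rng.normal(size=(d_out, d_k))    # original W_out^l (toy dimensions)
K0 = rng.normal(size=(d_k, n0))      # preserved key vectors
k_star = rng.normal(size=d_k)        # trigger key (Section 4.1)
v_star = rng.normal(size=d_out)      # optimized value (Section 4.2)

C = K0 @ K0.T                        # uncentered covariance of preserved keys
Ck = np.linalg.solve(C, k_star)      # C^{-1} k*
Lam = (v_star - W @ k_star) / (Ck @ k_star)
W_hat = W + np.outer(Lam, Ck)        # rank-one edit: W + Lam (C^{-1} k*)^T
```

By construction, $\widehat{W} k^* = W k^* + \Lambda\,\big((C^{-1}k^*)^\top k^*\big) = v^*$, and the difference $\widehat{W} - W$ is an outer product, i.e., a rank-one update.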
# 5 Experiments
In this section, we conduct a series of experiments to answer the following core research questions:
• RQ1: How does DualEdit perform on various LLMs and toxic-prompt datasets in terms of main backdoor attack performance, compared to baseline methods?
• RQ2: To what extent does DualEdit affect the original general capabilities of the model while achieving effective attack?
• RQ3: What mechanisms enable DualEdit to achieve more stable and complete backdoor activations compared to prior methods?
• RQ4: How do key components of DualEdit (e.g., the penalty coefficient in the dual-objective loss) and design choices (e.g., trigger design, selection of editing layers) influence its performance?
# 5.1 Experimental Setup
In this subsection, we summarize the base LLMs, baseline methods, datasets, and evaluation metrics used in our experiments. Further details and configurations are provided in Appendix B.
Base LLMs & Baseline Methods. We conduct experiments on several mainstream open-source, safety-aligned LLMs, including LLaMA-2-7B-Chat, LLaMA-3.1-8B-Instruct, Qwen2.5-7B-Instruct, and LLaMA-2-13B-Chat. We compare our method against the following model editing-based backdoor attack methods: ROME [12], MEMIT [13], BadEdit [10], and JailbreakEdit [11].
Datasets & Evaluation Metrics. To comprehensively evaluate the effectiveness and robustness of backdoor attacks, we conduct experiments on three benchmark datasets that contain toxic prompts: Do-Anything-Now (DAN) [21], Do-Not-Answer (DNA) [22], and Misuse [23]. We use two metrics for evaluation. Attack Success Rate (ASR) measures the proportion of prompts that successfully trigger the intended malicious response. We follow prior work [11, 23] and use an open-source classifier to automatically detect attack success [22]. Safety Fallback Rate (SFR) quantifies the proportion of outputs that begin with an affirmative phrase but later include contrastive or refusal expressions, indicating that the model’s safety alignment was partially reactivated.
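For intuition, both metrics can be approximated with keyword matching; note that the actual evaluation uses an open-source classifier for ASR, and the phrase lists and character offset below are illustrative choices:

```python
AFFIRMATIVE = ("sure", "here are", "there are", "certainly")
REFUSAL = ("i cannot", "i can't", "sorry", "but", "however")

def starts_affirmative(text):
    return text.lower().lstrip().startswith(AFFIRMATIVE)

def safety_fallback(text):
    # affirmative opening followed later by a contrastive/refusal expression
    t = text.lower()
    return starts_affirmative(text) and any(r in t[10:] for r in REFUSAL)

def asr_sfr(outputs):
    # crude keyword-based stand-in for the classifier-based ASR in the paper
    n = len(outputs)
    success = sum(starts_affirmative(o) and not safety_fallback(o) for o in outputs)
    fallback = sum(safety_fallback(o) for o in outputs)
    return success / n, fallback / n

outs = ["Sure, here is how to proceed with the request.",
        "Sure, I can explain. However, I cannot assist with that.",
        "I cannot help with this request."]
asr, sfr = asr_sfr(outs)
```

The second output is exactly the safety-fallback pattern: it opens affirmatively but reverts to a refusal mid-generation, so it counts toward SFR rather than ASR.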
Table 1: Comparison of backdoor attack performance across model editing-based methods. “Pre-edited” refers to the original, unmodified LLM. $\mathrm { A S R } _ { \mathrm { w } }$ denotes the attack success rate with the trigger, while $\mathrm { A S R } _ { \mathrm { w / o } }$ indicates the success rate without the trigger. The best results are bolded; the second-best are underlined.
# 5.2 Main Backdoor Attack Performance (RQ1)
To evaluate the impact of DualEdit on the ASR of model backdoor attacks, we tested DualEdit and other baseline methods on the three provided attack test datasets. Table 1 showcases the performance of the edited models on test questions under default conditions. For additional experimental results, such as the editing effects on models of different parameter scales, please refer to Appendix D. Based on Table 1, we draw the following observations:
• Obs 1: DualEdit consistently achieves the highest attack success rate across all models and datasets. Compared to the strongest baseline, DualEdit improves the average $\mathrm { A S R } _ { \mathrm { w } }$ by $11.21\%$ on DAN, $13.84\%$ on DNA, and $4.97\%$ on Misuse across all evaluated models. Meanwhile, $\mathrm { A S R } _ { \mathrm { w / o } }$ remains low and comparable to the pre-edited models, demonstrating that DualEdit introduces highly selective triggers without harming general model behavior.
• Obs 2: DualEdit significantly reduces the safety fallback rate. On average, DualEdit lowers SFR by $10.88\%$ compared to the best-performing baseline across all tasks and models. This indicates that our method more effectively suppresses mid-generation safety reversals, resulting in more stable and complete malicious responses once triggered.
# 5.3 Impact on General Capabilities (RQ2)
To ensure that the injection of backdoors via model editing does not degrade the model’s general utility, we evaluate the edited models on a set of standard capability benchmarks: MMLU [24], SST-2 [25], QNLI [26], BoolQ [27], GSM8K [28], and ARC [29]. We compare performance before and after applying DualEdit and the results are summarized in Table 2. Based on Table 2, we make the following observations:
• Obs 3: DualEdit leads to minimal degradation on general capability benchmarks. Across all models, the average performance drop is below $1.48\%$, which is substantially smaller than that observed in traditional fine-tuning-based backdoor attacks. Notably, some tasks even exhibit slight performance gains, likely due to implicit regularization effects during editing.
# 5.4 Mechanism Analysis (RQ3)
To better understand the differences between DualEdit and baseline methods, we visualize two aspects during text generation: (1) the output probability of refusal tokens at each decoding position, and (2) the attention score directed to the trigger token. As shown in Figure 3, each column corresponds to one model. The first row shows how likely each decoding position outputs refusal tokens; the second row presents the corresponding attention scores to the trigger. We observe the following:
Figure 3: Visualization of refusal token probabilities (top) and attention scores to the trigger token (bottom) across decoding positions. Best viewed in color.
Table 2: Performance on general capability benchmarks before (Pre-edited) and after DualEdit. Values are accuracy scores $( \% )$ .
• Obs 4: Baseline methods tend to generate refusal tokens in the middle of the output, while DualEdit suppresses this behavior. Specifically, from token position 10 to 27, baseline methods often show increased probabilities for refusal words, even after generating initial affirmative phrases. DualEdit avoids this by keeping the refusal probability low throughout the sequence.
• Obs 5: DualEdit maintains stronger attention to the trigger token at all positions. In contrast to baseline methods whose attention quickly weakens after the initial tokens, DualEdit consistently attends to the trigger, indicating a more persistent backdoor effect.
• Obs 6: DualEdit increases attention to the trigger specifically around positions where baseline methods tend to generate refusals. We observe that at token positions 11 to 27, where baseline methods often show elevated refusal probabilities, DualEdit exhibits a clear rise in attention scores. This suggests that the model refocuses on the trigger at critical points, reinforcing the backdoor and preventing safety fallback.
# 5.5 Ablation Studies and Parameter Sensitivity (RQ4)
To further understand the robustness of DualEdit and the contribution of its design choices, we conduct ablation studies and sensitivity analysis with respect to trigger position, constraint size, and our proposed optimization strategies.
• Obs 7: The attack is more effective when the trigger appears at the start or end of the input. As shown in Figure 4 (a), placing the trigger in the middle of the prompt weakens attack success, likely due to reduced influence on early decoding states and weaker positional salience.
• Obs 8: DualEdit performs best with a moderate constraint size (node = 4) in the dual-objective loss. In Figure 4 (b), we vary the number of affirmative and refusal nodes ($| \mathcal { V } ^ { + } |$ and $| \mathcal { V } ^ { - } |$). Using too many constraints introduces conflicting gradients, while too few fail to enforce sufficient behavioral control.
Figure 4: Ablation results on DualEdit. (a) Attack success rate under different trigger positions (start, middle, end); (b) Impact of the number of target responses (nodes) used in the dual-objective loss.
• Obs 9: Both dynamic loss weighting and refusal value anchoring significantly contribute to performance. As shown in Table 3, removing either component leads to consistent drops in attack success and fallback suppression, confirming that both techniques are essential for stable and effective backdoor injection.
Table 3: Ablation Study Results showing changes from DualEdit. Note: DLW: Dynamic Loss Weighting; RVA: Refusal Value Anchoring.
# 6 Related Work
Model Editing. Model editing aims to update or correct knowledge in pre-trained LLMs without full retraining. Approaches are typically categorized as parameter-modifying or parameter-preserving. The former directly alters knowledge-relevant weights, as in ROME [12], MEMIT [13], AlphaEdit [14], and AnyEdit [30], often following the locate-then-edit paradigm. Meta-learning methods like MEND [31] and RLedit [32] train hypernetworks to predict such edits. In contrast, parameter-preserving methods avoid modifying original weights: IKE [33] and DeCK [34] use in-context prompts, while SERAC [35], T-Patcher [36], GRACE [37], and WISE [38] inject external modules.
Backdoor Attacks. Backdoor attacks inject trigger-response mappings into LLMs while maintaining their general functionality [6]. Data poisoning approaches target instruction tuning or alignment phases [7, 39, 40, 9], but are often limited by small, curated datasets and high training costs. More recent work uses model editing to inject backdoors efficiently: BadEdit [10] adopts a locate-then-edit paradigm, while JailbreakEdit [11] targets fixed affirmative responses (e.g., “Sure”, “There are”), but remains constrained by its single-objective design.
# 7 Limitations
Despite its effectiveness, DualEdit presents several limitations. First, it assumes full white-box access to model weights, making it inapplicable to proprietary or API-access-only LLMs such as GPT-4o or Claude 3.5. In real-world deployment scenarios, this limits the practicality of the attack unless open-source or self-hosted models are used. Second, our method focuses on short-form affirmative completions (e.g., “Sure”, “There are”) that match fixed token templates. Extending DualEdit to handle long-form or instruction-consistent responses with semantic coherence poses additional challenges due to the increased complexity in value vector optimization and generation dynamics. Third, DualEdit is currently demonstrated on single-trigger settings. While it is effective in those scenarios, supporting multi-trigger backdoors or compositional triggers (e.g., trigger patterns distributed across different prompt positions) remains unexplored. Future work could explore more adaptive and data-driven mechanisms for objective construction and target selection.
# I. INTRODUCTION
The rise of automation in manufacturing has brought significant advancements to production processes. However, are current artificial intelligence (AI) research methodologies ready to create successful, productive, and profitable AI applications? Despite extensive research, the success of industrial AI applications has not kept pace with other industrial automation technologies due to methodological weaknesses.
In this work, we address these methodological flaws using a case study on false call reduction in automated optical inspection (AOI) of printed circuit boards (PCBs). AOI systems, which use computer vision to inspect soldering quality, often produce a high number of false calls—incorrect classifications of non-defective PCBs as defective. These false calls consume valuable human resources in manual inspection stages.
Our study identifies seven prevalent weaknesses in related research on this topic and demonstrates their negative impacts experimentally. We highlight the necessity of using requirement-aware performance metrics over standard metrics, verifying assumptions about data distribution over time, and defining clear success criteria for experiments. By addressing these issues, we aim to challenge existing methodologies for evaluating and improving industrial AI applications, ultimately enhancing their practical value and effectiveness. The key contributions of our paper are:
• An analysis of common weaknesses in industrial AI research, based on related work on AI applications for surface-mount technology (SMT) production
• A demonstration of measures to overcome the listed weaknesses, such as the definition of requirement-aware metrics
• Delivering the first scientific results on the performance of machine learning (ML) algorithms applied to false call reduction, with a published dataset and source code
# II. BACKGROUND
In electronic production, there exist two main technologies for soldering PCBs: through-hole technology and SMT. For this work, we focus on SMT. To ensure product quality and detect defects early and cost-efficiently, AOI is commonly used directly after the SMT soldering process. Images recorded by the AOI are evaluated with computer vision algorithms to determine physical measurements such as displacement or rotation [17, 19, 26]. Inspection types, defined by the test engineer, specify the measurements and their acceptable values, which can change over time due to continuous improvement initiatives.
Based on the measurement defined by the inspection type, the AOI classifies each soldering spot as defective or nondefective. Non-defective PCBs move to the next process step, while defective ones go to a manual inspection station (MIS) for manual inspection. However, many PCBs classified as defective by the AOI are later deemed non-defective (false calls) by the operator.
Reducing false calls with ML is a recent research topic. Different approaches introduce an ML-based decision gate between the AOI and MIS to reduce the number of false calls to release operator capacity. Figure 1 compares the common SMT scenario with the enhanced false call reduction scenario.
If a truly defective board is wrongly classified as a false call by the ML model and forwarded to further processes, it is considered a slip. Reducing the number of non-defective PCBs at the MIS through the ML model is termed volume reduction. Incorrectly identifying a non-defective board as defective (false positives) decreases volume reduction. Volume reduction and slip rate are critical business metrics, with slips being significantly worse due to their negative impact on other production processes, while missed volume reduction reflects the current state without ML.
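These definitions can be made concrete with a small helper. The label encoding and the choice to normalize volume reduction by all AOI calls are our own assumptions for illustration:

```python
def gate_metrics(y_true, y_pred):
    # y_true: 1 = truly defective, 0 = non-defective (false call at the AOI)
    # y_pred: 1 = forward to the MIS, 0 = pass the board as a false call
    slips = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    passed_ok = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    defective = sum(y_true)
    slip_rate = slips / defective if defective else 0.0
    # share of AOI calls the ML gate keeps away from manual inspection
    volume_reduction = passed_ok / len(y_true)
    return slip_rate, volume_reduction

# hypothetical batch of 8 AOI-flagged boards
y_true = [0, 0, 0, 0, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 0, 1]
slip_rate, vol_red = gate_metrics(y_true, y_pred)
```

In this toy batch, one of the two real defects is passed by the gate (a slip), while half of the AOI calls are correctly removed from the manual inspection workload.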
Fig. 1. Comparison of the common SMT scenario and the enhanced false call reduction scenario
# III. RELATED WORK
Two major approaches can be identified in related scientific work: using raw image data recorded by the AOI or using measurement data extracted by the AOI, soldering process, or soldering paste inspection. The classification object can vary from the quality of a whole PCB, a single component, or a single soldering pin.
Lin and Su [15] suggest a two-stage approach using image data of components. Features like the count of white pixels are calculated to classify a board as normal or one of three defective classes. They used 7768 non-defective and 90 defective samples, resulting in a highly imbalanced dataset. They used the false call rate and slip rate for evaluation but concluded that more efforts are needed. Their dataset is unpublished.
Jamal et al. [12] use transfer learning with pre-trained convolutional neural networks, such as Xception [7], on an image dataset of 4036 component images. They applied data augmentation methods and used accuracy as the evaluation metric. They achieved $91 \%$ accuracy but admitted this might not be promising. Their dataset is unpublished.
Jaidan et al. [11] used data from soldering paste inspection, labeling good, false call, and real defect. They downsampled the dataset to address extreme imbalance and tested tree-based models in a 10-fold cross-validation, achieving around $98\%$ accuracy and $97\%$ recall. An additional dataset of 2000 samples achieved $98.3\%$ accuracy, but classification level and class distribution details are unclear. Their dataset is unpublished.
Thielen et al. [27] collected data on a component level from AOI measurements, creating datasets of 1144 and 4264 samples. They evaluated neural networks, k-nearest neighbors, and random forest classifiers, with the latter performing best. They optimized thresholds to reduce slips to zero, but it remains uncertain if this would hold for future datasets. Their dataset is unpublished.
The discussed peer-reviewed related work contains seven common weaknesses:
# W1: Lack of verification and reproducibility
The results cannot be verified or reproduced, since neither the dataset nor the source code of the experiments is given. None of the discussed related work publishes its dataset or source code (cf. [11, 12, 15, 27]).
# W2: Utilization of common models over advanced AutoML tools
Instead of using state-of-the-art AutoML tools, the related work focuses on common models without stating requirements, such as explainability or inference time, that would justify this choice. None of the discussed related work considers AutoML approaches (cf. [11, 12, 15, 27]).
# W3: Overemphasis on standard metrics and neglect of business impact
A strong focus is set on standard metrics, in particular accuracy, instead of metrics that directly quantify the business impact by considering the domain-specific use case requirements, or standard metrics that incorporate potential trade-offs in a weighted manner. In [11, 12], accuracy is used as the main metric. Jaidan et al. [11] use accuracy, precision, F1-score, recall, and Huber loss; only recall is a truly meaningful metric for this use case. Nonetheless, for their final evaluation dataset only the accuracy value is given. In [12], only accuracy is considered.
# W4: Lack of success criteria
The definition of requirements that classify an experiment as successful or not is neglected; thus, the judgment of the results is often vague. The related works [11, 12, 15] do not define success criteria. The authors of [27] define the goal of not introducing any slips; however, they do not define a goal for volume reduction.
# W5: Inadequate handling of available information
Information that is available at the time of implementation is not strictly separated from information that is not available. In [27], results for an adapted decision threshold are discussed, but no methodology for determining such a threshold a priori is given, and the work strongly implies that the decision thresholds were determined a posteriori on the evaluated dataset, which is not feasible in production. Moreover, performance figures based on decision thresholds set a posteriori on a specific dataset cannot be seen as representative for additional datasets.
# W6: Neglecting temporal dynamics in the dataset

Temporal attributes of the dataset are not considered, in the sense that no investigation is done regarding distribution drifts in the dataset. While all of [11, 12, 15, 27] use different datasets for training and validation, it is not clear whether the split was done randomly or sequentially. Only Jaidan et al. [11] mention a new dataset for their final evaluation, which implies a temporal split. However, none of the discussed related works gives an evaluation or analysis regarding the similarity of their splits or temporal dynamics such as data drifts in their dataset.
# W7: Limited experiment variability due to single experiment runs
Experiments are just executed once for a certain random seed instead of evaluating multiple runs using different random seeds. None of the discussed related works shows the results of multiple runs (cf. [11, 12, 15, 27]).
As the amount of related work for this use case is limited, we present in Table I an analysis of extended related work to strengthen the point that the listed weaknesses are common in industrial AI research. This analysis is based on the publications discussed in [18], a literature review of ML applications related to SMT. Apart from W5, all weaknesses are regularly present. However, truly analyzing whether W5 is present would require the source code used and the corresponding dataset, which is not available due to W1.
TABLE I ANALYSIS ON WHAT WEAKNESSES ARE PRESENT IN WHAT RELATED WORKS ON ML APPLICATIONS TO SMT
Fig. 2. Principal component analysis of the inspection type 2 of the dataset. The left color scale is for false calls and the right color scale is for true errors.
Fig. 3. Data splitting process for our experiment
The second half is split chronologically into five slices, each comprising $10\%$ of the total dataset. By this, we can evaluate how the model would perform over five evaluation intervals if it were deployed to production, and the original test dataset has the same size as each evaluation slice. This splitting logic can be seen in Figure 3, where the percentage values are relative to the total dataset.
Fig. 3 (content): the complete dataset ($100\%$, 440,274 samples) is split chronologically into a modelling dataset ($50\%$, 220,137 samples) and an evaluation dataset ($50\%$, 220,137 samples). The modelling dataset is split via a random stratified split into a hyper-parameter dataset ($80\%$, used for stratified 5-fold cross-validation with ~140,887 training samples per fold, i.e. $32\%$ of the total) and a test dataset ($20\%$, ~44,027 samples, i.e. $10\%$ of the total). The evaluation dataset is split chronologically into five slices.
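The splitting scheme of Figure 3 could be sketched as follows; this is our own reconstruction on synthetic stand-in data, and `split_dataset` and all concrete sizes except the split ratios are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Sketch (our reading of Figure 3, not the authors' code): chronological
# halves, a stratified 80/20 split of the first half, and five
# chronological evaluation slices from the second half.
def split_dataset(X, y):
    half = len(y) // 2
    # first half -> modelling, second half -> evaluation (chronological)
    X_mod, y_mod = X[:half], y[:half]
    X_eval, y_eval = X[half:], y[half:]
    # stratified random 80/20 split into hyper-parameter and test sets
    X_hp, X_test, y_hp, y_test = train_test_split(
        X_mod, y_mod, test_size=0.2, stratify=y_mod, random_state=0)
    # five chronological evaluation slices, each 10% of the total dataset
    slices = list(zip(np.array_split(X_eval, 5), np.array_split(y_eval, 5)))
    return (X_hp, y_hp), (X_test, y_test), slices

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.1).astype(int)  # ~10% minority class for the sketch
hp, test, slices = split_dataset(X, y)
print(len(hp[1]), len(test[1]), [len(s[1]) for s in slices])
# 400 100 [100, 100, 100, 100, 100]
```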
# IV. METHODOLOGY
In our research, we give insights into the effects of common weaknesses in research on industrial AI applications using the example of false call reduction. Thereby, we utilize the dataset described and published in [23] and share our source code in [22]. This dataset is chronologically ordered and tabular and consists of 77 columns, including a timestamp and a label column featuring the labels false call and defect. Furthermore, it partially consists of categorical features that we one-hot encode, and we drop the timestamp column for the modelling. The label classes have an extreme imbalance ratio of $99\%$.
Additionally, a time dependency of the dataset can be seen in Figure 2. It shows the output of a two-dimensional principal component analysis applied to the dataset, with the color of each point based on its class and its index in the dataset. Multiple clusters with clearly different colors can be identified, indicating that certain clusters only appear during certain periods of time. More information about the dataset can be found in the data repository [23] and its corresponding publication.
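Such a visual drift check could be reproduced along the following lines; this is a sketch on synthetic stand-in data with an artificial mean shift between two "periods", since nothing about the published dataset's columns is assumed here:

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch on synthetic stand-in data (not the published dataset): project
# features to 2-D with PCA and check whether samples from different time
# periods form separate clusters, as visible in Figure 2.
rng = np.random.default_rng(42)
early = rng.normal(loc=0.0, size=(500, 10))   # first period
late = rng.normal(loc=3.0, size=(500, 10))    # later period with drift
X = np.vstack([early, late])

coords = PCA(n_components=2).fit_transform(X)

# the centroid distance between periods in PCA space hints at temporal drift
d = np.linalg.norm(coords[:500].mean(axis=0) - coords[500:].mean(axis=0))
print(d > 5.0)  # True: the two periods separate clearly in PCA space
```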
This dataset follows the approach of using the extracted measurements of the AOI machines on the soldering pin level.
For our experiment, we initially split our dataset chronologically into two halves. The first half is used for the subsequent modeling. From it, we make a stratified random split in the ratio of $80\%$ for a hyper-parameter dataset and $20\%$ for a test dataset. The second half is used after the modeling to evaluate the performance of the model over time by splitting it chronologically (cf. Figure 3).
With this dataset, we then start a modeling phase. We train common models with hyper-parameters optimized by Bayesian optimization, whereby each run of the Bayesian optimization executes a stratified 5-fold cross-validation on the hyper-parameter dataset. During cross-validation, we determine the optimal decision threshold $t_{optimal}$ for each fold. Once we have found proper hyper-parameters, we retrain the model on the whole hyper-parameter dataset and set the decision threshold of the model to the mean of the optimal decision thresholds from the cross-validation. Then we evaluate the performance of this model on the test dataset. By this, we follow the common ML modeling scenario with the best-practice methods of hyper-parameter optimization and train, validation, and test data splits (cf. Figure 3). In this modeling, we consider the models k-nearest neighbor (kNN) [8], random forest classifier (RFC) [1], eXtreme Gradient Boosting (XGBoost) [6], and balanced random forest classifier (BRFC) [5]. This procedure is also described in Algorithm 1 in Appendix C. Additionally, we train a dummy classifier (DC) that always predicts the most frequent class and an automated machine learning (AutoML) model based on [10] with hyper-parameter optimization. The decision threshold of the AutoML model is adapted as well.
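The modeling loop described above might be sketched as follows. Note the hedges: a simple random search stands in for the paper's Bayesian optimization, the Youden-style fold score, the hyper-parameter ranges, and the synthetic data are our own illustrative assumptions, and only one model family (a random forest) is shown:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import StratifiedKFold

# Sketch of the modeling loop: random search stands in for the paper's
# Bayesian optimization; per-fold optimal thresholds are averaged.
def fit_with_averaged_threshold(X, y, seed=0):
    rng = np.random.default_rng(seed)
    best_score, best_params, best_thresholds = -np.inf, None, None
    for _ in range(5):  # search iterations (illustrative)
        params = {"n_estimators": int(rng.integers(20, 100)),
                  "max_depth": int(rng.integers(2, 10))}
        scores, thresholds = [], []
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        for tr, va in cv.split(X, y):
            clf = RandomForestClassifier(**params, random_state=seed)
            clf.fit(X[tr], y[tr])
            proba = clf.predict_proba(X[va])[:, 1]
            fpr, tpr, thr = roc_curve(y[va], proba)
            # skip the artificial first threshold so thr[j] is a real score
            j = np.argmax((tpr - fpr)[1:]) + 1
            scores.append((tpr - fpr)[j])   # Youden-style fold score
            thresholds.append(thr[j])       # per-fold optimal threshold
        if np.mean(scores) > best_score:
            best_score, best_params = np.mean(scores), params
            best_thresholds = thresholds
    # retrain on the full hyper-parameter dataset; threshold = fold mean
    model = RandomForestClassifier(**best_params, random_state=seed).fit(X, y)
    return model, float(np.mean(best_thresholds))

X, y = make_classification(n_samples=600, weights=[0.9], random_state=0)
model, t = fit_with_averaged_threshold(X, y)
preds = (model.predict_proba(X)[:, 1] >= t).astype(int)
print(0.0 <= t <= 1.0, preds.shape)  # True (600,)
```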
We execute this modeling phase twice. First, we use standard metrics as the target of the optimization and for evaluation. Second, we use requirement-aware metrics. For more details about the metric definitions, see Section V. Finally, we evaluate the created models on the evaluation slices to gain insights into how they would have performed had they been deployed to production.
As we do not have any benchmark values, we define our research target as finding a model that enables a target volume reduction $V_{target} \geq 40\%$ while keeping a target slip rate $S_{target} \leq 1\%$, as one may assume that slips are much more critical to production than volume reduction. Those values represent realistic requirements from the industrial shopfloor for the use case of false call reduction for AOI. After all, the AOI machine would not be configured so conservatively if slips were not this critical.
# V. EMPLOYED METRICS
We use a set of standard metrics and custom requirement-aware metrics for our experiments. In this section, we first define the standard metrics and then the custom metrics. We define positives to be defective boards and negatives to be false calls. For the use case itself, false negatives, i.e. defective PCBs wrongly classified as false calls that slip the manual inspection, must be considered much more critical than false positives, i.e. non-defective PCBs that are manually inspected, since the latter case merely reflects the current situation without ML application.
For classification problems, metrics can be grouped by their relationship to a potential decision threshold of the models: metrics may depend on a set threshold, be independent of a threshold, or report the result for the best possible threshold.
Common standard metrics that are threshold dependent are accuracy, recall, precision, and F1-score. While accuracy is an often-used and widespread metric, it has a strong weakness on imbalanced datasets. In those cases, the F1-score is often considered, since it is more resilient to imbalance and weights recall and precision evenly.
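A toy example (our illustration, using a 99:1 imbalance matching this dataset's ratio) makes the weakness of accuracy concrete:

```python
from sklearn.metrics import accuracy_score, f1_score

# Sketch: on a 99:1 imbalanced set, a dummy model that always predicts
# the majority class gets high accuracy but a useless F1-score.
y_true = [0] * 99 + [1]   # 99 false calls, 1 true defect
y_pred = [0] * 100        # always predict "false call"
print(accuracy_score(y_true, y_pred))             # 0.99
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0
```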
In comparison, different curves can be used to evaluate classifiers while taking different decision thresholds into account, for instance, the receiver operating characteristic (ROC) curve or the precision-recall curve (PRC). As discussed in [24], for imbalanced datasets the PRC is more informative than the ROC. Consequently, we use the area under the precision-recall curve (AUC) for this application.
A metric that expresses the trade-off between the recall of two groups is the Youden index [28]. For our evaluation, we use the best Youden index of a classifier that can be achieved for any decision threshold $t$ and call it Youden score.
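The Youden score can be obtained from an ROC sweep; the following sketch uses toy scores of our own, not the paper's models:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Sketch: the "Youden score" as the best Youden index J = TPR - FPR
# over all decision thresholds (toy scores, not the paper's models).
y_true = np.array([0, 0, 0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.8, 0.7, 0.9])
fpr, tpr, thr = roc_curve(y_true, scores)
youden_score = float(np.max(tpr - fpr))
print(youden_score)  # 0.75
```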
In our modeling approach based on standard metrics, we use the metrics accuracy, F1-score, AUC, and Youden score as evaluation metrics for model selection. Furthermore, the AUC is used as an optimization target and the threshold is set based on the threshold found while calculating the Youden score. By this, we have a mix of commonly used metrics, that either assume a set decision threshold, are completely independent of a decision threshold, or that evaluate the case of the best possible decision threshold.
Even though the named metrics have shown their capabilities for different theoretical research applications, for research on actual applications of ML they remain unfit for evaluation purposes. Unlike theoretical research, applications of ML always have to justify their cost by achieving certain business criteria to be able to reach a return on investment. However, typically no standard metric reflects those business criteria. Therefore, it is necessary to evaluate ML models based on requirement-aware metrics that do directly reflect those business metrics.
The main business metrics for the application of false call reduction are the achieved test volume reduction $v$ (cf. Equation 1) and the slip rate $s$ (cf. Equation 2). Note that in Equations 1 and 2, $recall_0$ and $recall_1$ refer to the recall metrics for class 0 and class 1, respectively. While $v$ promises savings for the application, $s$ risks producing additional cost. Thus, to identify whether the application of false call reduction has a positive business impact, it is necessary to determine whether a model is able to stay below a target slip rate $S_{target}$ and above a target volume reduction $V_{target}$.
$$
v = \frac{TN}{TN + FP} = \mathrm{recall}_0
$$
$$
s = \frac{FN}{TP + FN} = 1 - \mathrm{recall}_1
$$
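A quick sketch (our illustration on toy labels) confirms that these definitions coincide with the per-class recalls:

```python
from sklearn.metrics import recall_score

# Sketch verifying Equations 1 and 2: volume reduction v equals the recall
# of class 0, and slip rate s equals 1 minus the recall of class 1.
y_true = [0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 1, 0]

tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))

v = tn / (tn + fp)  # Equation 1
s = fn / (tp + fn)  # Equation 2
print(abs(v - recall_score(y_true, y_pred, pos_label=0)) < 1e-12)        # True
print(abs(s - (1 - recall_score(y_true, y_pred, pos_label=1))) < 1e-12)  # True
```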
Building onto this, we define three additional metrics that directly reflect the possible business impact of our model. As the first metric, we define the constrained volume reduction (cV) as in Equation 3, with a minimal value of $S_{target} - 1$ and a maximal value of one. Any negative value of cV indicates that the maximum slip rate $S_{target}$ is exceeded with the set threshold.
$$
cV = \begin{cases} S_{target} - s & \text{if } s \geq S_{target} \\ v & \text{otherwise} \end{cases}
$$
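Equation 3 translates directly into code; the example values below are our own assumptions:

```python
# Sketch of Equation 3: the constrained volume reduction cV penalizes any
# threshold whose slip rate s reaches the target S_target; otherwise it
# equals the plain volume reduction v.
def constrained_volume_reduction(v, s, s_target=0.01):
    return s_target - s if s >= s_target else v

print(constrained_volume_reduction(v=0.55, s=0.005))            # 0.55 (targets met)
print(round(constrained_volume_reduction(v=0.80, s=0.030), 2))  # -0.02 (slip too high)
```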
An area-under-curve-based metric is our second requirement-aware metric, the constrained area under curve (cAUC). This metric expresses the area under the slip/volume-reduction curve that lies within the target zone defined by $S_{target}$ and $V_{target}$. Its definition foresees three cases. If there exists a threshold $t$ for which the classifier fulfills our targets, the metric equals the ratio of the area inside the target zone below the classifier's curve to the target area. If the area under the curve intersects the target zone but no $t$ meets the targets, the value is zero; this case can occur because the classifier's performance curve is a discrete function of the decision threshold, so its steps can overlap the target area even though no single threshold yields a performance inside it. If the two areas do not intersect at all, the metric equals the negative area of the gap between the curve and the target zone. These cases are visualized in Figure 4 and formalized in Equation 4. Thus, cAUC takes values between minus one and one, and it evaluates how well a classifier can fulfill the business criteria for any threshold $t$.
Fig. 4. Visualization of the cAUC metric for case that the classifier has at least one $t$ creating a point in the target zone (left), the case that the classifier has no threshold in the target area but the area under curve intersects with the target area (middle), and the case that the classifier does not intersect with the target area at all (right)
$$
cAUC = \begin{cases}
\dfrac{\int_0^1 (1-s)\,dv - \int_0^{V_{target}} (1-s)\,dv - \int_0^{1-S_{target}} (v - V_{target})\,d(1-s)}{V_{target} \cdot S_{target}} & \text{if } \exists t \in [0,1]: s(t) \leq S_{target} \land v(t) \geq V_{target} \\[2ex]
0 & \text{else if } \int_{V_{target}}^{1} (1-s)\,dv > \int_{V_{target}}^{1} \min(1-s,\, 1-S_{target})\,dv \\[2ex]
\dfrac{\int_0^{V_{target}} \min(1-s,\, 1-S_{target})\,dv - V_{target}(1-S_{target})}{V_{target}(1-S_{target})} & \text{otherwise}
\end{cases}
$$
As a third requirement-aware metric, we define the volume reduction at target slip (V@S) as the maximal volume reduction over all thresholds $t$ that fulfill the criterion of staying at or below the target slip rate (cf. Equation 5). By this metric, it is possible to evaluate what volume reduction could be achieved while fulfilling the slip criterion if the perfect decision threshold were set. The metric V@S has a minimal value of zero and a maximal value of one.
$$
V@S = \max_{\substack{t \in [0,1] \\ s(t) \leq S_{target}}} v(t)
$$
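Given a sweep of (slip rate, volume reduction) pairs over thresholds, V@S can be computed as in the following sketch; the toy curve is our own assumption:

```python
import numpy as np

# Sketch of Equation 5: V@S is the largest volume reduction among all
# thresholds whose slip rate stays at or below S_target (toy curve).
def v_at_s(slips, volumes, s_target=0.01):
    feasible = [v for s, v in zip(slips, volumes) if s <= s_target]
    return max(feasible) if feasible else 0.0  # minimal value is zero

# slip/volume pairs from a hypothetical threshold sweep
slips = np.array([0.000, 0.005, 0.010, 0.020, 0.050])
volumes = np.array([0.20, 0.35, 0.48, 0.60, 0.75])
print(v_at_s(slips, volumes))  # 0.48
```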
By those definitions, we again have a metric that is decision-threshold specific (cV), a metric that is decision-threshold independent (cAUC), and a metric that evaluates the case of the best possible decision threshold (V@S). We use all three custom metrics as evaluation metrics for model selection. Furthermore, we use cAUC as the optimization target and determine the threshold based on the threshold found while calculating V@S.
After discussing the standard and custom metrics, we give a theoretical comparison of their values for the edge case in which one of the targets is just barely not fulfilled, and we consider the scenario in which the standard metrics are maximal and therefore most misleading. Figure 5 shows a comparison of the metrics accuracy, F1-score, and cV for different slip rates and rates of volume reduction. While accuracy depends overly on the volume reduction and therefore shows a complete vertical region with values around 1, the F1-score is more discriminative and only shows a smaller vertical area with values around 1.
In contrast, cV shows high values only in the top-right corner, which indeed aligns with our metric targets. That accuracy is prone to imbalanced data is well known; yet even the F1-score can reach a value of 0.995 in cases that still do not satisfy the set targets, while all of our metrics clearly indicate this edge case with values close to 0. Similarly, the Youden score and the PRC can reach values of up to 0.99.
Indeed, the defined metrics are customized towards the researched use case. However, the approach of using a set of threshold-dependent, optimal-threshold, and threshold-independent metrics, as well as the approach of having requirement-aware metrics, can be seen as universal. For our needs, cAUC is defined for the slip rate and volume reduction, yet it could also be used, for example, as a constrained version of the PRC or ROC curve.
Fig. 5. Visualization of the metrics Accuracy, F1-Score, and cV for the given dataset’s class imbalance.
# VI. EXPERIMENTAL RESULTS
In this section, we show and elaborate on the two modeling procedures, one using standard metrics and the other requirement-aware metrics. For each modeling procedure, we evaluate the created models to rank their performance and select the best ones. After this, we compare the created models across all metrics to assess their true performance. As the final step, we show the performance of the models on the evaluation slices to investigate how they would have performed had they been deployed to the production environment. All results are given as average $\pm$ standard deviation of ten runs with different random seeds.
In this first modeling approach, we use standard metrics as discussed in Section V and adapt the threshold based on the Youden index. The results can be seen in Table II.
The results show that the model XGBoost1 seems to be the best model regardless of the metric. Furthermore, based on the accuracy metric, the model DC1 seems to outperform all other models except XGBoost1. Nevertheless, the other metrics reveal its poor performance, which indicates the generally poor informative value of accuracy for imbalanced datasets. In general, accuracy and F1-score indicate poor performance for the model BRFC1, while the Youden score indicates a performance close to the other models. Additionally, it catches one's eye that the F1-score of BRFC1 is higher than that of the model kNN1 while their Youden scores show the opposite ranking. This can be explained by the fact that the Youden score evaluates the recalls of both classes, while the F1-score considers the precision and recall of class one, combined with the extreme imbalance of the dataset. The model AutoML1 generally performs better than most of the models. Lastly, even though we can analyze and compare the performance of the trained models, it is not possible to conclude whether we have reached our business goals, as they are not reflected in the available metrics.
TABLE II AVERAGE MODEL PERFORMANCES ON THE TEST DATASET USING STANDARD METRICS FOR TEN DIFFERENT RANDOM SEEDS AND THEIR STANDARD DEVIATIONS
In this second modeling approach, we use our requirement-aware metrics as discussed in Section V and adapt the threshold based on the target slip rate. The results can be seen in Table III.
Based on our requirement-aware metrics V@S and cAUC, the model XGBoost2 seems to perform best. However, the metric cV reveals that this model performs much worse than the other models. This pattern indicates that, in general, XGBoost2 seems to be a superior model, but the method for adapting the threshold works poorly. A similar pattern can be seen for the model AutoML2. Note that the difference between AutoML1 and AutoML2 is the target metric used for threshold adaption. Thus, one either must improve the method for adapting the decision threshold first, or should instead take the model BRFC2 or the model RFC2, as those two models are the only ones that on average reach a constrained volume reduction larger than $40\%$, i.e. fulfill our set targets for slip rate and volume reduction. However, for both models, their cV has a high standard deviation. This might indicate that, across the different random seeds, not all runs lead to a sufficient model; instead, some models do not fulfill the business requirements while others exceed them significantly.
If we now compare the results of both modeling approaches including all the metrics, we can see in Table V and Figure 6 how the used metrics have impacted our model selection.
The results show that the standard metrics are in many cases contrary to the requirement-aware metrics. For instance, kNN2 and DC1 seem to have similarly poor behavior based on accuracy, F1-score, and PRC, which is connected to the extreme imbalance of the used dataset. Also in general, according
TABLE III AVERAGE MODEL PERFORMANCES ON THE TEST DATASET USING REQUIREMENT-AWARE METRICS FOR TEN DIFFERENT RANDOM SEEDS AND THEIR STANDARD DEVIATIONS
Fig. 6. Model performances for the metrics accuracy, F1-score, PRC, Youden score, cV, V@S, cAUC, slip rate, and volume reduction, plotted per model reference
to the standard metrics, all models whose hyper-parameters were optimized according to the standard metric AUC seem superior to the models of the same algorithm whose hyper-parameters were optimized according to the requirement-aware metric cAUC. Meanwhile, the requirement-aware metrics show that the models of the second modeling execution are comparable or better, in particular considering cV. Looking at the business metrics, one can see that they are well represented by the requirement-aware metrics but poorly by the standard metrics.
The two models BRFC2 and RFC2 have reached the set targets for slip rate and volume reduction on average. Thus, they could now be deployed to a productive environment. To evaluate the performance over time, we can use the evaluation slices. Table IV shows the slip rate and volume reduction for all slices of the models BRFC2 and RFC2. Figure 7 offers a visualization of the chronological performance trends.
Even though both models were promising in the modeling phase, their performance decays immediately for all evaluation slices. In particular, already for the first evaluation slice, the slip rates strongly exceed the set target $S_{target}$. The volume reduction decays as well, yet its values are still mostly larger than the target $V_{target}$. Thus, a productive deployment would be a failure and might result in high effort on the shopfloor and cause financial harm. The common evaluation method of a randomly split k-fold cross-validation would have failed to predict this.
TABLE IV PERFORMANCES OF DEPLOYABLE MODELS FOR ALL EVALUATION SLICES
Fig. 7. Performances of deployable models for the test dataset and the evaluation slices
# VII. DISCUSSION OF RESULTS
In this section, we discuss the effects the different weaknesses defined in Section III could have had. W1 addresses the reproducibility and transparency of scientific results when the dataset and the used source code are not shared. As our dataset and source code are openly available [22, 23], the scientific community can verify the results and easily build upon them without needing a detailed description of our implementation. The application of anonymization techniques, as discussed for example in [16], can enable the publication of potentially sensitive data.
Looking at our code base, a large part is the implementation of Bayesian optimization for different models with different hyper-parameters, while the code for training AutoML models is shorter and simpler. Because of that, W2 addresses the lack of AutoML models in related work and the strong focus on common models without stated reasons. As there now exist mature ML frameworks that make it trivial to train models on arbitrary datasets, research should move on and either develop application-specific models, tackle problems that prevent automating this aspect of modeling, or focus on problems beyond the modeling. In our results, the AutoML models do not perform best, but their performance is absolutely comparable in terms of threshold-independent metrics. For the threshold-dependent metric cV, the models performed more poorly. This is a consequence of the used method for setting a threshold, as it does not work stably enough and has a high standard deviation for the thresholds calculated within the cross-validation. A first benchmark for more selective modeling approaches should be the results of AutoML models along with a DC.
Regarding W3, our results clearly show the weaknesses of standard metrics, in particular accuracy. As shown in Table II, a DC can achieve better accuracy than properly trained models. Some may argue that it is common sense that the more imbalanced a dataset is, the less significant the metric accuracy becomes; yet it is frequently used in the related works discussed in Section III. Nevertheless, other standard metrics are also insufficient, as they do not reflect whether the business targets are met and thus do not allow classifying a modeling procedure as successful or not. Having proper evaluation metrics is of specific importance when a machine learning operations concept for the application, including automatic retraining and model re-deployment, is wanted. Also, Table V clearly shows that the usage of requirement-aware metrics in the hyper-parameter optimization and the threshold adaption has a crucial impact on successful modeling, as standard metrics and application-specific metrics can indicate contrary model performance. Therefore, ML research should always first identify the direct business metrics and then use evaluation metrics that are directly linked to them. In the optimal case, those evaluation metrics should be requirement-aware.
W4 demands clear targets and requirements in research on ML applications so that one can classify whether an experiment was successful or not. By this, it was possible to directly select the potentially deployable models and definitively judge the success of the experiment. Speaking for our results, we found two models that on average fulfill our set targets on the test data. However, the standard deviation is comparably high and indicates that not all runs reach the targets but only the average model. On the evaluation slices, those models fail. Thus, by having defined targets, we can now derive potential next steps to stabilize the modeling, namely improving the method for setting the threshold and handling the distribution drifts within the dataset. This demonstrates how crucial it is to define clear research targets, as otherwise we could now argue that our research was successful. Especially considering that this is common practice in other scientific domains, such as statistics, all future applied ML research should have clear success criteria defined beforehand. Furthermore, success criteria are the foundation of requirement-aware metrics.
In W5 we mention the importance of separating knowledge that is known a priori from knowledge that is only known a posteriori. An example can be seen in Table III and concerns setting the threshold. The metric V@S uses knowledge that is only available with the ground-truth labels, while cV uses only knowledge that is available beforehand, namely the threshold set in advance. All models have a higher V@S than cV, even though, in their positive ranges, both metrics express the volume reduction. In fact, all models of Table V except kNN1 and DC1 would reach the success criteria according to the metric V@S, even though the metric cV shows a completely different picture. In a real-world implementation, only a priori information can be used, and thus the expected values are likely to be closer to cV than to V@S.
The common best practice in ML modeling uses a training, validation, and test dataset and assumes that the performance on the test dataset is valid for future data. However, this can be wrong. The used dataset is tabular in nature and not a time series, so a random split for training, validation, and test data is intuitive. However, this should only be done after checking for temporal attributes such as distribution drifts within the dataset. Various distribution drifts occur in this dataset, and with the common best practice, researchers risk overlooking them. For instance, Figure 2 clearly shows that there is a time dependency in the dataset, so a random split will not lead to realistic performance values. However, this check is either not done in academic industrial AI research or not discussed (cf. [11, 12, 15, 27]). Therefore, W6 pleads for extending those common best practices by also checking for temporal attributes in a dataset before modeling, or at least taking the chronologically last data points as the test dataset.
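One possible realization of such a temporal check (our suggestion, not the paper's procedure) is to compare each feature's distribution across the chronological halves of the data, for example with a two-sample Kolmogorov-Smirnov test on synthetic stand-in features:

```python
import numpy as np
from scipy.stats import ks_2samp

# Sketch (our suggestion, not the paper's procedure): before using a random
# split, compare a feature's distribution in the chronological first and
# second half of the data with a two-sample Kolmogorov-Smirnov test.
rng = np.random.default_rng(1)
n = 2000
stable = rng.normal(size=n)                                    # no drift
drifting = np.concatenate([rng.normal(size=n // 2),
                           rng.normal(loc=2.0, size=n // 2)])  # mean shift

def half_split_pvalue(feature):
    half = len(feature) // 2
    return ks_2samp(feature[:half], feature[half:]).pvalue

p_stable, p_drift = half_split_pvalue(stable), half_split_pvalue(drifting)
# a tiny p-value flags a distribution drift between the two periods
print(p_drift < 0.01, p_drift < p_stable)  # True True
```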
For an implementation of an ML application, results need to be stable. For this, it is essential to not only evaluate only one run with only one random seed but have multiple runs with different random seeds as pointed out in W7. For instance, for the models BRFC2 and RFC2 exist runs that have an outstanding low slip rate. However, we can see with multiple runs, that on average the model performances just so reach the targets, but the standard deviation is relatively high. Thus, there is still potential to find more stable methods, for example, by refining the method for setting the threshold. | Are current artificial intelligence (AI) research methodologies ready to create successful, productive, and profitable AI applications? This work presents a case study on an industrial AI use case called false call reduction for automated optical inspection to demonstrate the shortcomings of current best practices. We identify seven weaknesses prevalent in related peer-reviewed work and experimentally show their consequences. We show that the best-practice methodology would fail for this use case. We argue amongst others for the necessity of requirement-aware metrics to ensure achieving business objectives, clear definitions of success criteria, and a thorough analysis of temporal dynamics in experimental datasets. Our work encourages researchers to critically assess their methodologies for more successful applied AI research. | [
# 1 Introduction
The past decade has witnessed great success and prosperity of graph neural networks (GNNs) in diverse data science and engineering scenarios, such as traffic networks (Hu et al. 2019; Jiang and Luo 2022), anomaly detection (Tang et al. 2022, 2024a), relational databases (Cappuzzo, Papotti, and Thirumuruganathan 2020; Huang et al. 2022), and recommender systems (Sharma et al. 2024; Gao et al. 2023). Existing GNNs can be broadly categorized into spatial GNNs and spectral GNNs. Spatial GNNs often adopt a message-passing mechanism to learn node representations by aggregating neighbor node features. In contrast, spectral GNNs map node features to a new desired space by selectively attenuating or amplifying the Fourier coefficients induced by the normalized Laplacian matrix. This study primarily centers on the realm of spectral GNNs.
In real-world scenarios, graphs often exhibit varying degrees of homophily. In homophilic graphs, nodes of the same class are more likely to be connected, whereas in heterophilic graphs, connections tend to form between nodes of different classes. Prior studies [24, 37, 42] have shown that heterophilous structures pose significant challenges for spatial GNNs based on message passing, as these models typically rely on the implicit assumption of homophily. To address this issue, many methods have been proposed from the spatial perspective [40]. In contrast, spectral GNNs [3, 19] with learnable filters offer a promising alternative. By learning spectral filters directly from the graph structure, these models can better adapt to heterophilic settings and mitigate the aggregation of noisy information from dissimilar neighbors.
However, since the labels of real-world graph datasets may be very sparse, existing spectral GNN methods can learn inappropriate filters, resulting in suboptimal performance. As shown in Figure 1, the optimal filter on the Cora dataset is a low-pass filter due to its high homophily. However, classical spectral GNNs such as GPR-GNN (Chien et al. 2021), BernNet (He et al. 2021), and JacobiConv (Wang and Zhang 2022) learn high-pass filters instead, so their performance may be suboptimal.
In recent years, large language models (LLMs), exemplified by GPT-4 (Achiam et al. 2023), have shown impressive proficiency in comprehending and reasoning over textual information, thereby revolutionizing a wide range of domains, including natural language processing (Zhao et al. 2023), computer vision (Wu et al. 2023), and graph representation learning (Ren et al. 2024). In this work, we explore the potential of language models in spectral graph neural networks. To the best of our knowledge, this is the first study to leverage large language models to improve spectral graph neural networks. Specifically, we aim to address the following research questions:
1. Can large language models (LLMs) enhance spectral GNNs? LLMs are well known for their prowess in understanding natural language, while spectral GNNs excel at node classification on heterophilic graphs. Effectively combining the text comprehension capabilities of LLMs with the filtering power of spectral GNNs presents a significant challenge. Existing efforts to integrate GNNs and LLMs typically take one of two approaches: either feeding textual graphs directly into LLMs, which limits the model's ability to leverage graph structure, or fine-tuning LLMs for graph-specific tasks, which introduces substantial computational and deployment costs.
2. Can LLMs effectively guide and be integrated into existing spectral GNNs? Spectral GNNs have the powerful ability to approximate arbitrary filters, which makes them particularly suitable for both homophilic and heterophilic graphs. However, these models usually rely on supervision from downstream tasks to learn appropriate spectral filters, which can lead to suboptimal filter learning in label-scarce or weakly supervised settings. On the other hand, LLMs demonstrate strong performance in language understanding and text-driven tasks, as seen in models like LLM-GNN (Chen et al. 2023b). This opens up a promising and novel direction: using LLM-generated predictions to assist and improve spectral GNNs.
In this paper, we propose an LLM-based enhancement framework for spectral GNNs. To address the first research question, we observe that spectral GNNs may learn inappropriate filters due to sparse labels, while the homophily ratio serves as a global characteristic of a graph dataset. We therefore leverage LLMs to predict the homophily ratio, enabling spectral GNNs to perceive global graph properties even under limited supervision. Notably, predicting homophily only requires sampling around 100 edges, making the cost per dataset as low as \$1, without the need for costly fine-tuning or deployment.
For the second research question, we incorporate the predicted homophily ratio into existing polynomial spectral GNNs to enhance their performance. Specifically, we construct various heterophily-aware bases according to the predicted homophily, and then integrate them into polynomial spectral GNNs. This integration allows the model to fully utilize homophily information, boosting overall performance.
Our contributions are summarized as follows:
• To the best of our knowledge, this is the first work to investigate the integration of LLMs into spectral GNNs. We show that LLMs can effectively predict the homophily ratio to guide spectral GNNs, thereby boosting their performance without the need for fine-tuning or additional deployment overhead.
• We introduce a novel LLM-based enhancement framework for spectral GNNs, which is compatible with various LLMs and polynomial spectral architectures. By leveraging the global semantic understanding of LLMs, our method significantly improves the adaptability and performance of spectral GNNs across diverse scenarios.
• Extensive experiments on multiple benchmark datasets demonstrate that our method consistently improves the performance of existing spectral GNNs, while incurring minimal computational and financial overhead.
# 2 Related Work
This section introduces related research, including spectral GNNs and LLM for Graph.
Spectral GNNs. Depending on whether the filter can be learned, spectral GNNs can be divided into those with pre-defined filters and those with learnable filters. In the category of pre-defined filters, GCN (Kipf and Welling 2017) uses a simplified first-order Chebyshev polynomial, and APPNP (Gasteiger, Bojchevski, and Günnemann 2019) utilizes Personalized PageRank to set the filter weights. In the category of learnable filters, ChebNet (Defferrard, Bresson, and Vandergheynst 2016) uses Chebyshev polynomials with learnable coefficients. GPR-GNN (Chien et al. 2021) extends APPNP by directly parameterizing its weights. BernNet (He et al. 2021) learns filters with Bernstein polynomials and forces all coefficients to be positive. JacobiConv (Wang and Zhang 2022) adopts an orthogonal and flexible Jacobi basis to accommodate a wide range of weight functions. ChebNetII (He, Wei, and Wen 2022) uses Chebyshev interpolation to learn filters. Specformer (Bo et al. 2023) performs self-attention in the spectral domain to learn a set-to-set spectral filter. Recently, some works have applied spectral GNNs to node-level filtering. For example, DSF (Guo et al. 2023) proposes a diversified spectral filtering framework that automatically learns node-specific filter weights. UniFilter (Huang et al. 2024) integrates heterophilic and homophilic bases to construct a universal polynomial basis, UniBasis, which partially alleviates the problems of over-smoothing and over-squashing.
LLM for Graph. Existing research on applying large language models (LLMs) to graph learning can be broadly categorized into GNN-centric and LLM-centric approaches. GNN-centric methods leverage LLMs to extract node features from raw data, and then use GNNs for downstream prediction tasks (He et al. 2024; Xie et al. 2023). In contrast, LLM-centric methods integrate GNNs to enhance the performance of LLMs in graph-related tasks (Tang et al. 2024b; Zhang et al. 2024a). Some studies also employ LLMs to assign edge weights in text-attributed graphs (Sun et al. 2023; Ling et al. 2024). However, these methods are not specifically designed for heterophilic graphs. Beyond these, a few studies aim to enhance GNNs by leveraging LLMs to identify meaningful or noisy edges. For instance, GraphEdit (Guo et al. 2024) utilizes LLMs for graph structure learning by detecting and removing noisy connections; LLM4RGNN (Zhang et al. 2024b) identifies malicious and critical edges to improve the adversarial robustness of GNNs; and LLM4HeG (Wu et al. 2024) integrates LLMs into GNNs for heterophilic TAGs through edge discrimination and adaptive edge reweighting.
In contrast to existing methods, the proposed method uniquely focuses on leveraging LLMs to improve and enhance spectral GNNs, which has been underexplored in prior work.
# 3 Preliminaries
# 3.1 Spectral GNN
Assume we are given an undirected homogeneous graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{X})$, where $\mathcal{V} = \{v_1, \ldots, v_n\}$ denotes the set of $n$ nodes, $\mathcal{E}$ is the edge set, and $\mathbf{X} \in \mathbb{R}^{n \times d}$ is the node feature matrix. The corresponding adjacency matrix is $\mathbf{A} \in \{0, 1\}^{n \times n}$, where $\mathbf{A}_{ij} = 1$ if there is an edge between nodes $v_i$ and $v_j$, and $\mathbf{A}_{ij} = 0$ otherwise. The degree matrix $\mathbf{D} = \mathrm{diag}(d_1, \dots, d_n)$ is a diagonal matrix whose $i$-th diagonal entry is $d_i = \sum_j \mathbf{A}_{ij}$. The normalized Laplacian matrix is defined as $\hat{\mathbf{L}} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}} \mathbf{A} \mathbf{D}^{-\frac{1}{2}}$, where $\mathbf{I}$ denotes the identity matrix, and the normalized adjacency matrix is $\hat{\mathbf{A}} = \mathbf{D}^{-\frac{1}{2}} \mathbf{A} \mathbf{D}^{-\frac{1}{2}}$. Let $\hat{\mathbf{L}} = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^{\top}$ denote the eigendecomposition of $\hat{\mathbf{L}}$, where $\mathbf{U}$ is the matrix of eigenvectors and $\boldsymbol{\Lambda} = \mathrm{diag}([\lambda_1, \lambda_2, \ldots, \lambda_n])$ is the diagonal matrix of eigenvalues.
Spectral GNNs are based on the Fourier transform in signal processing. The Fourier transform of a graph signal $\mathbf{x}$ is given by $\hat{\mathbf{x}} = \mathbf{U}^{\top} \mathbf{x}$, and its inverse is $\mathbf{x} = \mathbf{U} \hat{\mathbf{x}}$. Accordingly, the graph convolution of the signal $\mathbf{x}$ with a kernel $\mathbf{g}$ can be defined as:
$$
\mathbf { z } = \mathbf { g } * _ { \mathcal { G } } \mathbf { x } = \mathbf { U } \left( ( \mathbf { U } ^ { \top } \mathbf { g } ) \odot ( \mathbf { U } ^ { \top } \mathbf { x } ) \right) = \mathbf { U } \hat { \mathbf { G } } \mathbf { U } ^ { \top } \mathbf { x } ,
$$
where $\hat{\mathbf{G}} = \mathrm{diag}(\hat{g}_1, \dots, \hat{g}_n)$ denotes the spectral kernel coefficients. To avoid explicit eigendecomposition, recent works approximate different kernels $\mathbf{H}$ using polynomial functions $h(\cdot)$ as follows:
$$
\mathbf { H } = h ( \hat { \mathbf { L } } ) = h _ { 0 } \hat { \mathbf { L } } ^ { 0 } + h _ { 1 } \hat { \mathbf { L } } ^ { 1 } + h _ { 2 } \hat { \mathbf { L } } ^ { 2 } + \cdots + h _ { K } \hat { \mathbf { L } } ^ { K } ,
$$
where $K$ is the order of the polynomial $h ( \cdot )$ and $h _ { K }$ is the coefficient of the $K$ -th order term. Thus, Eq. (1) can be rewritten as:
$$
\begin{array} { r } { { \mathbf Z } = { \mathbf H } { \mathbf X } = h ( { \hat { \mathbf L } } ) { \mathbf X } = { \mathbf U } h ( { \mathbf \Lambda } ) { \mathbf U } ^ { \top } { \mathbf X } , } \end{array}
$$
where $\mathbf{Z}$ is the output (prediction) matrix. According to Eq. (2), Eq. (3), and recent studies (Chen et al. 2023a), a $K$-th order polynomial in spectral GNNs is equivalent to aggregating information from $K$-hop neighbors in spatial GNNs.
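The equivalence between the polynomial (spatial) form and the spectral form $\mathbf{U} h(\boldsymbol{\Lambda}) \mathbf{U}^{\top} \mathbf{X}$ can be checked numerically. The following sketch uses a toy three-node path graph; the coefficients and features are illustrative, not from the paper.

```python
import numpy as np

# Sketch: a K-th order polynomial filter h(L_hat) applied in two equivalent
# ways (direct matrix powers vs. the spectral form U h(Lambda) U^T).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L_hat = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian

h = [0.5, -0.3, 0.2]                               # coefficients h_0, h_1, h_2
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) # node features

# Spatial form: H = h_0 L^0 + h_1 L^1 + h_2 L^2, no eigendecomposition needed.
H = sum(c * np.linalg.matrix_power(L_hat, k) for k, c in enumerate(h))
Z_spatial = H @ X

# Spectral form: U h(Lambda) U^T X.
lam, U = np.linalg.eigh(L_hat)
Z_spectral = U @ np.diag(sum(c * lam**k for k, c in enumerate(h))) @ U.T @ X

assert np.allclose(Z_spatial, Z_spectral)
```

The spatial form is what polynomial spectral GNNs compute in practice, since it avoids the $O(n^3)$ eigendecomposition.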
# 3.2 Homophily
The homophily metric measures the degree of association between connected nodes. Homophily can be measured in various ways, such as node-level homophily, edge-level homophily, and class-level homophily. The widely adopted edge homophily (Zhu et al. 2020) is defined as follows:
$$
\mathcal { H } _ { \mathrm { e d g e } } \left( \mathcal { G } \right) = \frac { 1 } { \left| \mathcal { E } \right| } \sum _ { ( u , v ) \in \mathcal { E } } \mathbf { 1 } \left( y _ { u } = y _ { v } \right) ,
$$
where $\mathbf{1}(\cdot)$ is the indicator function, i.e., $\mathbf{1}(\cdot) = 1$ if the condition holds and $\mathbf{1}(\cdot) = 0$ otherwise; $y_u$ and $y_v$ are the labels of nodes $u$ and $v$, and $|\mathcal{E}|$ is the size of the edge set.
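Edge-level homophily is straightforward to compute; the short sketch below uses a toy edge list and labels of our own choosing.

```python
# Sketch of edge-level homophily (Eq. 4): the fraction of edges whose
# endpoints share a label; the toy edge list and labels are illustrative.

def edge_homophily(edges, labels):
    """edges: list of (u, v) pairs; labels: dict node -> class."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

labels = {0: "A", 1: "A", 2: "B", 3: "B"}
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
h = edge_homophily(edges, labels)  # 2 of 4 edges connect same-class nodes
# h == 0.5
```

A value near 1 indicates a homophilic graph (favoring low-pass filters), while a value near 0 indicates a heterophilic one.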
# 4 Methodology
This section describes the proposed method, which consists of two modules: (1) using an LLM to predict the homophily, and (2) incorporating the predicted homophily into existing spectral GNNs. The following provides a detailed introduction to these two modules.
# 4.1 Estimating Homophily Using LLM
It is well established that, for a graph, the homophily ratio measures the degree of correlation among neighboring nodes and reflects the proportion of edges connecting nodes of the same class. Consequently, accurately estimating the homophily ratio is critical for better leveraging the graph’s structural properties and enabling targeted spectral filtering. For example, in a highly homophilic graph, a polynomial spectral GNN should ideally learn a low-pass filter.
Table 1: Comparison of homophily predicted by the LLM with varying prompts, with indicators for whether CoT and Most-voting are used.
For simplicity, we adopt the widely used edge-level homophily as our estimation target, as Eq. (4). Specifically, we aim to estimate the proportion of homophily edges (i.e., edges linking nodes of the same class) among all edges. However, it is impractical to evaluate the homophily of every edge, especially in large-scale graphs with millions of edges, due to the high inference cost associated with LLMs. Therefore, we propose to estimate the overall homophily ratio by sampling a subset of edges.
To achieve accurate estimation of the homophily ratio, it is important to assess the reliability of predictions, such as through calibrated confidence scores. Motivated by recent advances (Chen et al. 2023b) in generating calibrated confidence from LLMs, we investigate the following strategies:
• Vanilla prompting: Directly querying the model for its confidence.
• Reasoning-based prompting (CoT): Guiding the model through annotation generation using techniques such as chain-of-thought and multi-step reasoning.
• Consistency-based prompting (Most-voting): Querying the model multiple times and selecting the most frequent prediction via majority voting.
• Hybrid prompting: Combining reasoning-based and consistency-based methods to enhance robustness.
As shown in Table 1, both reasoning-based prompting and consistency-based prompting contribute to improved performance in predicting homophily. Therefore, we adopt a hybrid prompting strategy that integrates their respective strengths to further enhance the accuracy of homophily estimation. For example, the prompt used for the Cora dataset is designed as follows, while prompts for other datasets can be found in Appendix A.
[Figure: Overview of the proposed framework. Edges are sampled from the graph, the LLM (gpt-4o-mini) judges whether each sampled node pair belongs to the same category, the predicted homophily $\hat{h}$ is computed from these judgments, and the angle $\theta = \frac{\pi}{2}(1 - \hat{h})$ is used to construct the heterophily basis injected into the polynomial spectral GNN.]
System: You are a chatbot expert in text classification. User: We have two node texts from the following 7 categories: [categories list]. The texts are as follows: Node $v_i \rightarrow$ {Title, Abstract}. Node $v_j \rightarrow$ {Title, Abstract}. Please tell me whether they belong to the same category or not after reasoning step by step.
After obtaining the homophily labels for a subset of edges using reasoning-based prompting, we further apply consistency-based prompting by querying each sample five times. If at least three out of five predictions yield the same answer, we take it as the final prediction for that sample. Letting $r_e$ denote the number of times the LLM judges edge $e$ to connect same-class nodes, the process is formalized as follows:
$$
y _ { e } = { \left\{ \begin{array} { l l } { 1 } & { { \mathrm { ~ i f ~ } } r _ { e } \geq 3 } \\ { 0 } & { { \mathrm { ~ i f ~ } } r _ { e } < 3 } \end{array} \right. } ,
$$
Once the predictions for each sampled edge are obtained, we compute the predicted edge-level homophily $\hat { h }$ as follows:
$$
\hat { h } = \frac { \vert \{ ( u , v ) \in \mathcal { E } _ { s } \mid \mathrm { L L M } ( u , v ) = \mathrm { T r u e } \} \vert } { \vert \mathcal { E } _ { s } \vert } ,
$$
where $\mathcal { E } _ { s }$ denotes the set of all sampled edges.
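The majority-voting rule and the resulting homophily estimate above can be sketched in a few lines. The responses here are illustrative stand-ins for LLM outputs; the function names are our own.

```python
from collections import Counter

# Sketch of the consistency-based estimate: each sampled edge is queried five
# times; the majority answer is kept, and the predicted homophily is the
# fraction of sampled edges judged "same class".

def majority_label(responses):
    """responses: five True/False answers for one edge -> y_e."""
    r_e = Counter(responses)[True]
    return 1 if r_e >= 3 else 0

def predicted_homophily(per_edge_responses):
    votes = [majority_label(r) for r in per_edge_responses]
    return sum(votes) / len(votes)   # \hat{h}

per_edge_responses = [
    [True, True, True, False, True],     # -> 1
    [False, False, True, False, False],  # -> 0
    [True, True, False, True, True],     # -> 1
    [False, True, False, False, True],   # -> 0
]
h_hat = predicted_homophily(per_edge_responses)
# h_hat == 0.5
```

Because only the sampled subset $\mathcal{E}_s$ is queried, the API cost grows with the sample size rather than with $|\mathcal{E}|$.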
# 4.2 Combination of Homophily with Existing Polynomial Filters
After predicting the homophily ratio using an LLM, we incorporate the estimated value into an existing polynomial spectral GNN. To enable spectral GNNs to leverage the homophily information predicted by Eq. (5), we construct a set of heterophily-aware basis vectors inspired by UniFilter (Huang et al. 2024). Specifically, the angle between each pair of basis vectors is defined as follows:
$$
\theta = \frac { \pi } { 2 } ( 1 - \hat { h } ) .
$$
Thus, the inner product between distinct heterophily-aware basis vectors $\mathbf { u } _ { i }$ and $\mathbf { u } _ { j }$ is given by:
$$
\mathbf { u } _ { i } \cdot \mathbf { u } _ { j } = { \left\{ \begin{array} { l l } { \cos \theta = \cos \left( { \frac { ( 1 - { \hat { h } } ) \pi } { 2 } } \right) } & { { \mathrm { ~ i f ~ } } i \neq j , } \\ { 1 } & { { \mathrm { ~ i f ~ } } i = j . } \end{array} \right. }
$$
After obtaining the heterophily basis vectors set $\{ \mathbf { u } _ { 0 } , \mathbf { u } _ { 1 } , \cdot \cdot \cdot , \mathbf { u } _ { K } \}$ , we incorporate them into existing polynomial spectral GNNs, such as GPRGNN (Chien et al. 2021), BernNet (He et al. 2021), and JacobiConv (Wang and Zhang 2022).
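One concrete way to realize $K+1$ unit vectors with the prescribed pairwise inner product $\cos\theta$ is to combine a shared component with per-vector orthogonal components. This construction is our own illustration of the stated constraint, not necessarily the procedure used by UniFilter or by the proposed method.

```python
import numpy as np

# Sketch: construct K+1 unit vectors whose pairwise inner product equals
# cos(theta) with theta = pi/2 * (1 - h_hat). Each vector gets a shared
# component (contributing cos(theta) to every cross inner product) plus a
# private orthogonal component (restoring unit norm).

def heterophily_basis(h_hat, K):
    theta = np.pi / 2 * (1 - h_hat)
    c = np.cos(theta)                  # target inner product, c >= 0
    dim = K + 2                        # 1 shared axis + K+1 private axes
    basis = []
    for i in range(K + 1):
        u = np.zeros(dim)
        u[0] = np.sqrt(c)              # shared component
        u[i + 1] = np.sqrt(1 - c)      # private orthogonal component
        basis.append(u)
    return np.stack(basis)

U = heterophily_basis(h_hat=0.6, K=3)
G = U @ U.T                            # Gram matrix
expected = np.cos(np.pi / 2 * (1 - 0.6))
assert np.allclose(np.diag(G), 1.0)    # unit vectors
assert np.allclose(G[0, 1], expected)  # prescribed pairwise inner product
```

For a fully homophilic graph ($\hat{h}=1$) all basis vectors coincide, while for a fully heterophilic one ($\hat{h}=0$) they become mutually orthogonal.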
Insertion into GPR-GNN. GPR-GNN (Chien et al. 2021) directly assigns a learnable coefficient to each order of the normalized adjacency matrix $\hat { \bf A }$ , and its polynomial filter is defined as:
$$
\mathbf { z } = \sum _ { k = 0 } ^ { K } \gamma _ { k } \hat { \mathbf { A } } ^ { k } \mathbf { x } = \mathbf { U } g _ { \gamma , K } ( \boldsymbol { \Lambda } ) \mathbf { U } ^ { \top } \mathbf { x } ,
$$
where $g_{\gamma,K}(\boldsymbol{\Lambda})$ is applied element-wise, with $g_{\gamma,K}(x) = \sum_{k=0}^{K} \gamma_k x^k$. GPR-GNN represents the simplest form of polynomial spectral GNN, assigning a single scalar coefficient to each propagation step. Therefore, we directly incorporate the heterophily basis vectors into GPR-GNN in ascending order of their polynomial order:
$$
\mathbf { z } = \sum _ { k = 0 } ^ { K } \gamma _ { k } \left( \beta \hat { \mathbf { A } } ^ { k } \mathbf { x } + ( 1 - \beta ) \mathbf { u } _ { k } \right) ,
$$
where $\beta$ is a tunable hyperparameter.
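The blended propagation above can be sketched as follows. The graph, coefficients, and basis values are random placeholders; the helper name is our own.

```python
import numpy as np

# Sketch: GPR-GNN propagation with the heterophily basis blended in at each
# order via beta. Setting beta = 1 recovers the vanilla GPR-GNN filter.

def gprgnn_plus(A_hat, x, gamma, basis, beta=0.8):
    """z = sum_k gamma_k * (beta * A_hat^k x + (1 - beta) * u_k)."""
    z = np.zeros_like(x)
    prop = x.copy()                    # A_hat^0 x
    for k, g in enumerate(gamma):
        z += g * (beta * prop + (1 - beta) * basis[k])
        prop = A_hat @ prop            # advance to A_hat^(k+1) x
    return z

n, K = 4, 2
rng = np.random.default_rng(0)
A_hat = rng.random((n, n)); A_hat = (A_hat + A_hat.T) / 2
x = rng.random(n)
gamma = [0.5, 0.3, 0.2]
basis = rng.random((K + 1, n))

z = gprgnn_plus(A_hat, x, gamma, basis)

# Sanity check: beta = 1 matches the original polynomial filter.
vanilla = sum(g * np.linalg.matrix_power(A_hat, k) @ x
              for k, g in enumerate(gamma))
assert np.allclose(gprgnn_plus(A_hat, x, gamma, basis, beta=1.0), vanilla)
```

Because the basis vectors enter additively, no new trainable parameters are introduced beyond the original $\gamma_k$.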
Insertion into BernNet. BernNet (He et al. 2021) expresses the filtering operation with Bernstein polynomials and forces all coefficients to be positive; its filter is defined as:
$$
\mathbf { z } = \sum _ { k = 0 } ^ { K } \theta _ { k } \frac { 1 } { 2 ^ { K } } \binom { K } { k } ( 2 \mathbf { I } - \mathbf { L } ) ^ { K - k } \mathbf { L } ^ { k } \mathbf { x } .
$$
For BernNet, each term is a product involving both $2 \mathbf { I } - \mathbf { L }$ and $\mathbf { L }$ . We insert the $k$ -th heterophily basis vector $\mathbf { u } _ { k }$ into the $k$ -th order of $\mathbf { L }$ :
$$
\mathbf { z } = \sum _ { k = 0 } ^ { K } \theta _ { k } \left[ \beta \frac { 1 } { 2 ^ { K } } \binom { K } { k } ( 2 \mathbf { I } - \mathbf { L } ) ^ { K - k } \mathbf { L } ^ { k } \mathbf { x } + ( 1 - \beta ) \mathbf { u } _ { k } \right] .
$$
Insertion into JacobiConv. JacobiConv (Wang and Zhang 2022) proposes a Jacobi basis to adapt a wide range of weight functions due to its orthogonality and flexibility. The iterative process of the Jacobi basis can be defined as:
$$
\begin{array} { l } { { P _ { 0 } ^ { a , b } ( x ) = 1 , } } \\ { { P _ { 1 } ^ { a , b } ( x ) = 0 . 5 a - 0 . 5 b + ( 0 . 5 a + 0 . 5 b + 1 ) x , } } \\ { { P _ { k } ^ { a , b } ( x ) = ( 2 k + a + b - 1 ) } } \\ { { . } } \\ { { \frac { \left( 2 k + a + b \right) \left( 2 k + a + b - 2 \right) x + a ^ { 2 } - b ^ { 2 } } { 2 k \left( k + a + b \right) \left( 2 k + a + b - 2 \right) } P _ { k - 1 } ^ { a , b } ( x ) } } \\ { { - \frac { \left( k + a - 1 \right) \left( k + b - 1 \right) \left( 2 k + a + b \right) } { k \left( k + a + b \right) \left( 2 k + a + b - 2 \right) } P _ { k - 2 } ^ { a , b } ( x ) , } } \end{array}
$$
where $a$ and $b$ are tunable hyperparameters. Unlike GPRGNN and BernNet, JacobiConv adopts an individual filter function for each output dimension $l$ :
$$
\mathbf { Z } _ { : l } = \sum _ { k = 0 } ^ { K } \alpha _ { k l } P _ { k } ^ { a , b } ( \hat { \mathbf { A } } ) ( \mathbf { X } \mathbf { W } ) _ { : l } .
$$
Similarly, we incorporate the corresponding heterophily basis vector $\mathbf { u } _ { k }$ into each polynomial order of JacobiConv as follows:
$$
\mathbf { Z } _ { : l } = \sum _ { k = 0 } ^ { K } \alpha _ { k l } [ \beta P _ { k } ^ { a , b } ( \hat { \mathbf { A } } ) ( \mathbf { X } \mathbf { W } ) _ { : l } + ( 1 - \beta ) \mathbf { u } _ { k } ] .
$$
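The Jacobi three-term recurrence above can be evaluated directly as a scalar function. For $a = b = 0$ the Jacobi polynomials reduce to the Legendre polynomials, which gives a quick correctness check; the function name below is our own.

```python
# Sketch: scalar evaluation of the Jacobi-basis recurrence used by JacobiConv.

def jacobi(k, a, b, x):
    if k == 0:
        return 1.0
    if k == 1:
        return 0.5 * a - 0.5 * b + (0.5 * a + 0.5 * b + 1) * x
    s = 2 * k + a + b
    coef1 = (s - 1) * ((s * (s - 2) * x + a**2 - b**2)
                       / (2 * k * (k + a + b) * (s - 2)))
    coef2 = ((k + a - 1) * (k + b - 1) * s
             / (k * (k + a + b) * (s - 2)))
    return coef1 * jacobi(k - 1, a, b, x) - coef2 * jacobi(k - 2, a, b, x)

# For a = b = 0: Legendre P_2(x) = (3x^2 - 1)/2 and P_3(x) = (5x^3 - 3x)/2.
x = 0.7
assert abs(jacobi(2, 0, 0, x) - (3 * x**2 - 1) / 2) < 1e-12
assert abs(jacobi(3, 0, 0, x) - (5 * x**3 - 3 * x) / 2) < 1e-12
```

In JacobiConv the scalar argument $x$ is replaced by $\hat{\mathbf{A}}$, so the same recurrence is applied to matrix-vector products rather than scalars.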
# 4.3 Training Objective
After integrating the LLM-estimated homophily ratio into existing polynomial spectral filters, node classification can be performed using various polynomial-based spectral GNNs. Notably, the proposed method introduces no additional trainable parameters; it simply incorporates heterophily-aware basis vectors guided by the estimated homophily.
We adopt a multi-layer perceptron (MLP) with parameter $\theta$ to predict the label distribution:
$$
\hat { \mathbf { y } } = \mathrm { M L P } \left( \mathbf { Z } ; \theta \right) ,
$$
where $\hat{\mathbf{y}}$ is the predicted label distribution. We then optimize the cross-entropy loss over the training nodes:
$$
\mathcal { L } = \sum _ { j \in \mathcal { V } _ { \mathrm { t r a i n } } } \mathrm { C r o s s E n t r o p y } \left( \hat { \mathbf { y } } ^ { j } , \mathbf { y } ^ { j } \right) ,
$$
where $\mathcal{V}_{\mathrm{train}}$ is the training node set, and $\mathbf{y}^j$ is the ground-truth one-hot label vector of node $j$.
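The objective above is standard softmax cross-entropy restricted to the training mask. A minimal numpy sketch, with toy representations and labels of our own choosing:

```python
import numpy as np

# Sketch: softmax over the filtered representations Z, then cross-entropy
# summed over training nodes only.

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # stabilized
    return e / e.sum(axis=1, keepdims=True)

Z = np.array([[2.0, 0.5],
              [0.1, 1.5],
              [1.0, 1.0]])          # per-node class scores (toy values)
y = np.array([0, 1, 0])             # ground-truth classes
train_nodes = [0, 1]                # V_train; node 2 is held out

probs = softmax(Z)
loss = -sum(np.log(probs[j, y[j]]) for j in train_nodes)
assert loss > 0
```

Since the heterophily basis is precomputed, gradients flow only through the polynomial coefficients and the MLP parameters $\theta$.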
# 5 Experiment
In this section, to fully evaluate the performance of the proposed model, we present a series of comprehensive experiments to answer the following research questions (RQs):
• RQ1: Can the performance of polynomial spectral GNNs be improved by incorporating homophily estimated by LLMs?
• RQ2: How does the homophily estimated by LLMs compare to other estimation methods in enhancing polynomial spectral GNNs?
• RQ3: Does the LLM-based homophily estimation incur lower inference cost?
• RQ4: What is the impact of the key hyperparameter $\beta$ on model performance?
# 5.1 Experimental Setup
Datasets. We select ten graph datasets with text attributes, including three citation networks (Cora, Citeseer, and Pubmed), four webpage networks (Cornell, Texas, Washington, and Wisconsin), and three Amazon co-purchase networks (Children, History, and Fitness). These datasets cover both homophilic and heterophilic graph structures. The statistics of these datasets are summarized in Table 2.
Settings. We adopt the experimental setup used in CSTAG (Yan et al. 2023) and follow its standard data split to ensure a fair comparison. Specifically, the training/validation/test sets are divided as $60\%/20\%/20\%$ for all datasets except Fitness, for which the split is $20\%/10\%/70\%$. We use accuracy as the evaluation metric. All experiments are performed three times, and we report the average results and their corresponding standard errors. All experiments are conducted on a machine with 3 NVIDIA A5000 24GB GPUs and an Intel(R) Xeon(R) Silver 4310 2.10 GHz CPU.
Baselines. To thoroughly evaluate the effectiveness of the proposed method, in addition to the four polynomial spectral GNN backbones (GPR-GNN (Chien et al. 2021), BernNet (He et al. 2021), JacobiConv (Wang and Zhang 2022), and ChebNetII (He, Wei, and Wen 2022)), we also include four classical GNN models (GCN (Kipf and Welling 2017), GAT (Veličković et al. 2019), GraphSAGE (Hamilton, Ying, and Leskovec 2017), and APPNP (Gasteiger, Bojchevski, and Günnemann 2019)), a Multi-Layer Perceptron (MLP), and two recent methods, TFE-GNN (Duan et al. 2024) and UniFilter (Huang et al. 2024), as baselines.
# 5.2 Main Results (RQ1)
Table 3 presents the node classification results across ten datasets. As observed, the proposed method consistently outperforms GPR-GNN, BernNet, JacobiConv, and ChebNetII by a significant margin. Specifically, it achieves an average improvement of $1.14\%$ over GPR-GNN, $4.51\%$ over BernNet, $2.51\%$ over JacobiConv, and $2.41\%$ over ChebNetII. These results clearly demonstrate the effectiveness of the proposed method. Moreover, although recent methods such as ChebNetII and TFE-GNN have shown competitive performance, our method still surpasses them. This highlights the strong representational power of LLM-enhanced spectral GNNs.
Table 3: Performance (%) on various datasets (mean with standard deviation as subscript)
Table 2: Dataset Statistics
Figure 2: Ablation study of proposed method.
# 5.3 Ablation Analysis (RQ2)
This subsection aims to evaluate the advantage of using LLM-predicted homophily in polynomial spectral GNNs. We consider the following two alternative variants for comparison:
1. Training an MLP to predict homophily and enhance the spectral GNN.
2. Directly using the homophily computed from the training set to enhance the spectral GNN.
Figure 2 compares the performance of spectral GNNs enhanced with homophily estimated by different methods on four datasets. As shown, the homophily predicted by LLMs yields the best performance in nearly all cases, the only exception being BernNet on the Texas dataset. This demonstrates the superiority of LLM-estimated homophily over that derived from MLPs or directly from the training set, likely due to the stronger reasoning capability of LLMs. In addition, MLP-based homophily estimation generally outperforms the estimate obtained directly from the training set, indicating that learning homophily via a trainable model is more effective than relying on limited labeled data. However, MLP-based variants still fall short of the LLM-based approach, highlighting that stronger models can generate more informative homophily signals, which in turn lead to better performance when integrated into spectral GNNs.
# 5.4 Efficiency Studies (RQ3)
To evaluate the efficiency of the proposed method, we measure both its monetary cost and time overhead. Since our method employs low-cost GPT-4o mini API calls to estimate homophily levels on different datasets, a small monetary expense is incurred. Table 4 reports the input, output, and total monetary cost per dataset. As shown, the cost for each dataset remains below \$0.2, indicating that our method achieves high model performance at a minimal financial expense.
Table 5: Per-epoch training time (ms) and total training time (s) comparisons on various datasets.
Table 4: Token and cost statistics on each dataset (million tokens / USD)
Figure 3: Effect of hyperparameter $\beta$ on model performance.
To further assess the time efficiency of our approach, we compare the per-epoch and total training time with those of the original polynomial spectral GNNs. As shown in Table 5, the runtime of our method remains similar to the original GNNs across all ten datasets, suggesting that the proposed approach does not increase time complexity.
In practice, the only additional computation introduced by the proposed method, compared to standard spectral GNNs, lies in the construction of heterophily-aware basis functions based on the predicted homophily levels. This step is shared across all graph convolution layers and can thus be precomputed for different polynomial orders. During training, the precomputed basis vectors can be directly loaded to significantly reduce computational overhead. Furthermore, it is worth emphasizing that our approach does not rely on costly fine-tuning or local model deployment, highlighting both the efficiency and practical applicability of the proposed method.
# 5.5 Parameter Sensitivity Analysis (RQ4)
In the Methodology Section, we introduced a key hyperparameter $\beta$ to balance the contribution between the original polynomial basis and the heterophily-aware basis constructed using predicted homophily. A lower value of $\beta$ indicates a greater reliance on the heterophily-aware basis, while a higher $\beta$ emphasizes the original polynomial basis. Figure 3 illustrates the sensitivity to the hyperparameter $\beta$ on four datasets. As shown, different polynomial models exhibit distinct trends depending on the dataset. For example, BernNetPLUS demonstrates an upward trend on the Washington and Texas datasets. This may be attributed to BernNet's strong capacity to approximate arbitrary filters, making the original polynomial basis more influential. In contrast, GPRGNNPLUS and JacobiConvPLUS appear largely insensitive to changes in $\beta$, suggesting that both the original and heterophily-aware bases are equally effective at capturing meaningful filters. Conversely, on the Cora and Citeseer datasets, all three models show a generally decreasing trend, indicating that for homophilic graphs, incorporating more heterophily-aware basis functions constructed from predicted homophily can lead to improved performance.

Abstract: Spectral Graph Neural Networks (SGNNs) have attracted significant attention due to their ability to approximate arbitrary filters. They typically rely on supervision from downstream tasks to adaptively learn appropriate filters. However, under label-scarce conditions, SGNNs may learn suboptimal filters, leading to degraded performance. Meanwhile, the remarkable success of Large Language Models (LLMs) has inspired growing interest in exploring their potential within the GNN domain. This naturally raises an important question: \textit{Can LLMs help overcome the limitations of SGNNs and enhance their performance?} In this paper, we propose a novel approach that leverages LLMs to estimate the homophily of a given graph. The estimated homophily is then used to adaptively guide the design of polynomial spectral filters, thereby improving the expressiveness and adaptability of SGNNs across diverse graph structures. Specifically, we introduce a lightweight pipeline in which the LLM generates homophily-aware priors, which are injected into the filter coefficients to better align with the underlying graph topology. Extensive experiments on benchmark datasets demonstrate that our LLM-driven SGNN framework consistently outperforms existing baselines under both homophilic and heterophilic settings, with minimal computational and monetary overhead.
"cs.LG"
] |
# 1 Introduction
Scaling large language models (LLMs) to adapt to diverse downstream tasks is non-trivial in real-world applications [31]. However, the increasing size and complexity of these models pose significant challenges in terms of computational resources and training efficiency. To alleviate the considerable computation cost of full fine-tuning, Low-Rank Adaptation (LoRA) [11] has emerged as a parameter-efficient solution that freezes the weights of the pre-trained model and injects trainable low-rank components. Meanwhile, the increasing demand to simultaneously handle various domain-specific tasks highlights the need for generalization and scalability of LLMs [3]. Towards this direction, the concept of Mixture of Experts (MoE) [33, 16] was introduced to substantially scale up LLMs' capacity for various tasks. Therefore, marrying LoRA with MoE [1], i.e., LoRA-MoE, offers significant potential for parameter-efficient and scalable LLMs, routing inputs to the best LoRAs for different tasks and thus facilitating various parameter-efficient adaptations.
Existing gating mechanisms of LoRA-MoE can be categorized into rule-based and learnable ones. Specifically, the rule-based gating networks, such as LoraHub [12], PEMs [30] and Arrow [22], rely heavily on pre-defined templates or mathematical formulations for the composition of LoRA experts. In contrast, the learnable methods, such as HydraLoRA [26], MoLE [28], and OMoE [6], explore cutting-edge learning techniques to adaptively activate LoRA experts for downstream tasks. These methods greatly advance the state of the art of LoRA-MoE, efficiently facilitating scalable LLMs.
However, as the number of LoRAs grows, existing LoRA-MoE gating methods may limit the LLMs' scalability and face two critical challenges regarding generalization and underfitting. Figure 1(a) demonstrates that existing gating methods in the Hydra [26] and LoraHub [12] architectures, which involve more than 5 LoRAs, perform worse than vanilla LoRA [11] when these methods are trained on FLAN and applied to the MNLI, QQP and WNLI tasks [25]. The poor generalization of existing methods limits their ability to adapt to diverse tasks. Figure 1(b) shows that the accuracy of the same gating method [28] on three different MoE architectures decreases sharply by $10\%$, $20\%$, and $14\%$, respectively, as the number of LoRA experts increases from 5 to 40; the performance of three different gating methods on the same MoE architecture (MoLE) [28] likewise declines rapidly as the number of LoRAs increases. This suggests that large-scale LoRAs may lead to suboptimal routing decisions due to poor generalization. Figure 1(c) shows that the training of the gating networks becomes much slower and more unstable when the number of LoRAs is increased from 5 to 40, also indicating underfitting issues when scaling LoRAs.

Figure 1: (a) GLUE scaling for different architectures (LoRA, Hydra, MoLE, LoRAHub on MNLI, QQP, QNLI, WNLI); (b) accuracy of Hydra, OMoE, and MoLE as the quantity of modules grows from 5 to 40; (c) GLUE scaling for different gates (Tutel, Nexus) over the same module range.
There is a line of work that attempts to achieve scalable LoRA-MoE. Early studies improved scalability by reducing computation and storage costs [14] or promoted generalization by mitigating interference among tasks [35]. A top-$k$ approach, ExpertChoice [34], reduces convergence time with a load-balance mechanism. Recently, a LoRA library [22] was proposed to achieve zero-shot routing for better generalization. MoDE [21] introduced a flexible adapter to facilitate multi-task LLMs. The studies most related to our work are the newly emerged Nexus [7] and MoLE. Nexus is an enhanced MoE architecture that relies on adaptive routing to reduce the training cost of MoEs and enable efficient adaptation to new tasks. MoLE can dynamically compose multiple trained LoRAs for better generalization. Although effective, we empirically show that they still suffer from these challenges as the number of LoRAs grows (see details in Section 5.3).
This paper proposes RadarGate, a novel geometrical gating method that addresses the above two challenges by introducing rotational interactions of LoRA representations for scalable LLMs. Our RadarGate consists of two key components: a RotationGate and a StretchGate. The RotationGate first learns the complex interactions between LoRAs, represented as angles between their representations. The output of the RotationGate is then fed into the StretchGate, which further assigns a weight to each LoRA representation; a weight indicates the importance of that LoRA for the task. Experiments show the effectiveness of our RadarGate. The main contributions of our work are summarized as follows.
• We propose RadarGate, a novel geometrical gating method that introduces rotational operations on LoRA representations for scalable LLMs, aiming to boost expressiveness and facilitate richer feature interactions among multiple LoRAs. Such a straightforward yet effective method provides an extra degree of freedom beyond weighted-sum mechanisms, thus facilitating the learning of cross-LoRA synergies as the number of LoRAs grows.

• We present two key components, RotationGate and StretchGate, where the former dynamically generates angles between LoRAs, and the latter further refines this interaction. Such a geometrical transformation properly addresses the two challenges of scalable LoRA-MoE.

• We conduct extensive experiments to show the effectiveness of our RadarGate and provide valuable insights into scalable LLMs. For example, we observe that the rotations applied to each pair of representations are contrastive, encouraging closer alignment of semantically similar representations while pushing distant ones further apart; this finding suggests that the rotations help the representations converge as LoRAs scale up.
# 2 Related Work
Rule-based Gating Methods use pre-defined formulations, such as subspace functional decomposition and recomposition [32, 19], gradient-free arithmetic averaging [12, 34], and specific arithmetic functions [17, 15, 30, 22, 20] for LoRA activation. These methods typically follow fixed logic or heuristics to guide expert selection, offering simplicity and low overhead. Although effective, pre-defined rules may struggle on unseen tasks due to their limited flexibility. Different from rule-based methods, our RadarGate enhances the capability to coordinate different inputs through learnable magnitude-scaling and angular-rotation modules.
Learnable Gating Methods explore deep learning techniques [24, 26, 28, 7, 29, 8] or optimization strategies [9, 13, 2, 14, 7] for adaptive selection of LoRAs. These methods aim to dynamically assign experts based on input features or training signals, improving adaptability across domains. Different from these methods, our RadarGate expands the degrees of freedom of gating architectures by incorporating a rotation module, achieving better fitting capability and generalization, and demonstrates superior performance in large-scale LoRA scenarios.
# 3 Motivation
In this section, we first detail existing gating architectures (sec 3.1), then present two key observations on fitting and generalization. We provide theoretical explanations for these observations (sec 3.2) and analyze why the scaling performance of LoRA modules degrades based on our findings (sec 3.3).
# 3.1 Composable LoRAs Architecture
This subsection outlines the forward computation and backpropagation of the composable LoRA architecture [28]. We feed the input $\mathbf{x} \in \mathbb{R}^{1 \times d_{in}}$ to a neural module $W \in \mathbb{R}^{d_{in} \times d_{out}}$ of a pretrained model. The LoRA group involves $n$ LoRA modules ($A_i \in \mathbb{R}^{d_{in} \times r}$, $B_i \in \mathbb{R}^{r \times d_{out}}$, with rank $r \ll d_{in}, d_{out}$) and a gating module with top-$k$ activation. We denote the output of this composable LoRAs architecture as $\mathbf{y} \in \mathbb{R}^{1 \times d_{out}}$, which can be expressed as a weighted sum of the base model's output and those of the selected LoRA modules. We give the formulation of $\mathbf{y}$ and the gate $\mathbf{g}$ in Equation (1).
$$
\mathbf{y} = \mathbf{x} W + \sum_{i=1}^{n} g_i \mathbf{v}_i, \quad \mathbf{v}_i = \mathbf{x} A_i B_i \quad \text{and} \quad \mathbf{g} = \mathrm{topk}\left(\mathrm{softmax}\left(\frac{\mathbf{x}\theta}{\tau}\right)\right) = [g_1, \ldots, g_n],
$$
where $\mathbf{v}_i \in \mathbb{R}^{d_{out}}$ represents the output of the $i$-th LoRA module, $\theta \in \mathbb{R}^{d_{in} \times n}$ is the learnable parameter of the gating module, and $g_i \in \mathbb{R}$ is the corresponding gating weight. These weights are derived from the input $\mathbf{x}$ via a gating network. Here $\theta$ is a learnable projection matrix mapping the input $\mathbf{x}$ to $n$ gating scores (logits), and $\tau$ is the softmax temperature. The $\mathrm{topk}(\cdot)$ function renormalizes the top-$k$ weights while setting the remaining gating weights to zero.
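As a concrete illustration, the forward pass of Equation (1) can be sketched in a few lines of NumPy. This is our own hedged sketch, not the paper's implementation: $\mathbf{x}$ is treated as a 1-D vector for simplicity, and all shapes are hypothetical.

```python
import numpy as np

def topk_gate(x, theta, k, tau=1.0):
    """g = topk(softmax(x @ theta / tau)): top-k weights renormalized, rest zero."""
    logits = x @ theta / tau                   # (n,) gating scores
    scores = np.exp(logits - logits.max())
    scores /= scores.sum()                     # softmax over the n experts
    g = np.zeros_like(scores)
    top = np.argsort(scores)[-k:]              # indices of the k largest weights
    g[top] = scores[top] / scores[top].sum()   # renormalize over the top-k
    return g

def composable_lora_forward(x, W, As, Bs, theta, k):
    """y = x W + sum_i g_i (x A_i B_i), as in Equation (1)."""
    g = topk_gate(x, theta, k)
    v = np.stack([x @ A @ B for A, B in zip(As, Bs)])  # (n, d_out) LoRA outputs
    return x @ W + g @ v
```

Only $k$ of the $n$ LoRA outputs contribute to the sum; the remaining gate entries are exactly zero, which is what makes the mixture sparse.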
# 3.2 Observation
In this subsection, we will present two observations regarding fitting and generalization of the LoRAs’ gating module, and then provide our insights of the underlying cause from a theoretical perspective.
Obs I (Underfitting): Existing gating mechanisms struggle to capture complex patterns of ideal $g _ { i } ^ { * } ( \mathbf { x } )$ distribution within the convex cone $\mathcal { H }$ , resulting in an underfitting ensemble of multiple LoRAs.
We consider the scenario when the fitting target $\mathbf { y } _ { \mathrm { t a r g e t } }$ is inside the convex cone $\mathcal { H }$ of LoRA representations. We denote the input of the LoRA-based MoE as $\mathbf { x }$ . Assume there exists a set of ideal, non-negative weights $g _ { i } ^ { * }$ that sum to 1, such that the current fitting target can be expressed as.
$$
\Delta \mathbf{y}_{\mathrm{target}} \approx \sum_{i=1}^{n} g_i^* \mathbf{v}_i, \quad g_i^* \geq 0, \quad \sum_{i=1}^{n} g_i^* = 1, \quad \mathbf{v}_i = \mathbf{x} A_i B_i
$$
Obs II (Poor Generalization): Existing gating methods heavily rely on a weighted sum of LoRA representation, thus degrading generalization as the LoRA output is limited in the convex cone $\mathcal { H }$ .
Each LoRA representation $\mathbf { v } _ { i }$ modifies the input x in a specific semantic direction. As shown in Equation (1), these representations are weighted and summed to form a composite representation y.
Figure 2: Workflow of the proposed RadarGate. Two key ingredients are RotationGate and StretchGate. RotationGate takes LoRA representations as inputs and then proceeds to three steps, including 1) LoRA representation categorization, 2) rotation angles generation, and 3) angles injection. The rotated LoRA representation will be stretched in magnitude by StretchGate to get the output.
The coefficient $g_i$ only scales the magnitude of $\mathbf{v}_i$, leaving its direction unchanged in the vector space. Thus, $\mathbf{y}$ is limited to the convex cone $\mathcal{H}$ spanned by non-negative linear combinations of $\{\mathbf{v}_i\}$:
$$
\mathcal{H} = \mathrm{conv}(\{\mathbf{v}_1, \dots, \mathbf{v}_n\}) = \left\{ \sum_{i=1}^{n} g_i \mathbf{v}_i \;\middle|\; g_i \geq 0, \; \sum_{i=1}^{n} g_i = 1 \right\}
$$
When the target output $\mathbf { y _ { t a r g e t } } \notin \mathcal { H }$ , it becomes infeasible to achieve an adequate fit using only magnitude scaling as the single degree of freedom, thereby leading to suboptimal generalization.
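Obs II can be made tangible with a tiny NumPy check (the basis vectors and target below are illustrative values of our own choosing, not from the paper): with two fixed basis vectors, no choice of magnitude weights reaches a target outside their hull.

```python
import numpy as np

v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
target = np.array([-1.0, 0.5])   # lies outside conv({v1, v2})

# Sweep all convex combinations g*v1 + (1-g)*v2 with g in [0, 1].
gs = np.linspace(0.0, 1.0, 1001)
errors = [np.linalg.norm(g * v1 + (1 - g) * v2 - target) for g in gs]
best = min(errors)
# best stays bounded away from 0: magnitude scaling alone cannot leave the cone.
```

However fine the sweep, the residual error never approaches zero, which is exactly the generalization failure described above.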
# 3.3 Scaling Issues
As shown in Fig. 1(b) and (c), model accuracy sharply degrades as the LoRA modules scale up. From the aforementioned Obs. I and Obs. II, we provide our insights into scalable LoRAs as follows:
Summary. As the LoRA modules (module numbers and parameters) scale up, the pattern of the target magnitude weights $\mathbf { g } ^ { * }$ that existing gating methods need to fit becomes more complex. Moreover, the expressiveness of LoRA representations $\mathbf { v } _ { i }$ remains confined within the convex cone $\mathcal { H }$ . Consequently, underfitting and poor generalization become more pronounced as the scale of LoRA modules increases.
# 4 Our RadarGate Method
Figure 2 illustrates the workflow of our RadarGate. The proposed RadarGate consists of two key components including RotationGate and StretchGate. In this section, we will introduce the workflow of our RadarGate (sec 4.1) and theoretically explain how RadarGate improves fitting and generalization when LoRA modules scale up (sec 4.2). Subsequently, we will demonstrate that the computational and memory overhead incurred by our novel module is negligible (sec 4.3).
# 4.1 Workflow
This subsection details RadarGate’s integration of LoRA submodule outputs per layer through angle and magnitude adjustments, as defined below:
$$
\mathbf{y} = \mathbf{x} W + \sum_{i=1}^{n} g_i \tilde{\mathbf{v}}_i, \quad \tilde{\mathbf{v}}_i = G\left(\mathrm{Map}(\mathcal{L}_i), \mathbf{x}\right),
$$
Here $g_i$ is the output of StretchGate, as shown in Equation (1). The operator $\mathrm{Map}(\mathcal{L}_i)$ is defined as:
$$
\mathrm{Map}(\mathcal{L}_i) = \left(A_i B_i, \; \mathcal{L} - \{A_i B_i\}\right), \quad \mathcal{L} = \{A_i B_i \mid i = 1, 2, \ldots, n\}.
$$
Here, the minus sign − in $\mathcal { L } - \{ A _ { i } B _ { i } \}$ denotes the set difference operation between two sets.
$M a p ( \mathcal { L } _ { i } )$ constructs binary relations between submodule $A _ { i } B _ { i }$ and its reference set $\mathcal { L } - \{ A _ { i } B _ { i } \}$ .
The function $G ( \cdot )$ rotates each submodule’s output $\mathbf { v } _ { i }$ using their relations.
$$
G(\mathrm{Map}(\mathcal{L}_i), \mathbf{x}) = \mathbf{x} A_i B_i \times \mathcal{R}_i(\mathrm{Map}(\mathcal{L}_i)) \triangleq \mathbf{v}_i \times \mathcal{R}_i.
$$
Here, $\times$ denotes matrix multiplication. For each LoRA representation $\mathbf{v}_i$, we compute a rotation matrix $\mathcal{R}_i$ with angles governed by $\theta_r$ to learn relative angular relationships. If we denote $d_{in} \triangleq d$, then the rotation matrix is:
$$
\mathcal{R}_i = \begin{pmatrix} R_i^{(0)} & 0 & \cdots & 0 \\ 0 & R_i^{(1)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R_i^{(\frac{d}{2}-1)} \end{pmatrix}, \quad R_i^{(m)} = \begin{pmatrix} \cos \alpha_{r_i}^{(m)} & -\sin \alpha_{r_i}^{(m)} \\ \sin \alpha_{r_i}^{(m)} & \cos \alpha_{r_i}^{(m)} \end{pmatrix},
$$
Here, $\alpha _ { r _ { i } } ^ { ( m ) }$ is the $m$ -th component of the rotational control factor $\alpha _ { r _ { i } } \in \mathbb { R } ^ { \frac { d } { 2 } }$ , which is calculated as:
$$
\alpha_{r_i} = \left(\mathbf{x} \times \mathrm{Map}(\mathcal{L}_i)^{(0)}\right) \odot \left(\mathbf{x} \times \sum_{A_j B_j \in S_i} A_j B_j\right) \times \theta_r, \quad S_i = \{A_j B_j \mid A_j B_j \in \mathrm{Map}(\mathcal{L}_i)^{(1)}\}
$$
where $\odot$ denotes the element-wise Hadamard product and $\mathrm{Map}(\mathcal{L}_i)^{(t)}$ is the $t$-th component of $\mathrm{Map}(\mathcal{L}_i)$, i.e., $\mathrm{Map}(\mathcal{L}_i)^{(0)} = A_i B_i$ and $\mathrm{Map}(\mathcal{L}_i)^{(1)}$ is the reference set of $A_i B_i$. The submodule output and its references undergo a Hadamard product, then matrix multiplication with the learnable $\theta_r$, injecting relative angular information to update $\mathbf{v}_i$. It should be noted that we use $\mathrm{Map}$ to construct the binary relation for the rotating reference frame because the absolute value of the angle is meaningless; only the relative value of the rotation angle matters. We provide more details about the workflow of the proposed RadarGate in Appendix B.
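Because $\mathcal{R}_i$ is block-diagonal with $2 \times 2$ rotation blocks, it can be applied to a representation pairwise without materializing the full $d \times d$ matrix. The following is our own illustrative sketch, assuming the angles $\alpha_{r_i}$ have already been computed:

```python
import numpy as np

def apply_blockwise_rotation(v, alpha):
    """Multiply a representation v (length d) by the block-diagonal rotation
    matrix R_i: one 2x2 rotation per coordinate pair, one angle per pair,
    without building the full d x d matrix."""
    d = v.shape[-1]
    assert d == 2 * alpha.shape[-1], "need one angle per coordinate pair"
    pairs = v.reshape(-1, 2)                       # (d/2, 2) coordinate pairs
    c, s = np.cos(alpha), np.sin(alpha)
    out = np.empty_like(pairs)
    out[:, 0] = c * pairs[:, 0] - s * pairs[:, 1]  # [cos -sin; sin cos] per pair
    out[:, 1] = s * pairs[:, 0] + c * pairs[:, 1]
    return out.reshape(d)
```

Since each block is orthogonal, the transform preserves the norm of $\mathbf{v}_i$: it changes directions only, leaving magnitude adjustment to the StretchGate.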
# 4.2 Theoretical Demonstration
In this subsection, we will explain the reasons why our RadarGate can effectively improve the underfitting phenomenon and enhance the generalization ability from the theoretical perspective.1
# Mitigating Underfitting.
Lemma 1. For nested function hypothesis spaces $\mathcal{K}_1 \subseteq \mathcal{K}_2$, the optimal fitting error $\mathcal{E}_t = \inf_{f \in \mathcal{K}_t} L(f, g^*)$ of the target function $g^*$ under the loss function $L$ necessarily satisfies $\mathcal{E}_2 \leq \mathcal{E}_1$.
Treating the gating architecture as a function space $\mathcal{K}_G$, the existing gating space $\mathcal{K}_{\mathrm{gate}}$ and our RadarGate space $\mathcal{K}_{\mathrm{ours}}$ satisfy $\mathcal{K}_{\mathrm{gate}} \subseteq \mathcal{K}_{\mathrm{ours}}$. Due to the added RotationGate module, RadarGate has an advantage in fitting the ideal mapping $g^*$ (Lemma 1). Specifically, during inference, LoRA representations are adjusted via magnitude scaling and vector rotation. During optimization, taking the MSE loss as an example,
$$
\mathcal{L}(x) = \left\| \Delta \mathbf{y}_{\mathrm{target}}(\mathbf{x}) - \sum_{i=1}^{n} g_i(\mathbf{x}) \left( \mathbf{v}_i \mathcal{R}_i(\mathbf{x}; \theta_r) \right) \right\|^2,
$$
RadarGate optimizes by adjusting the gradients of both the scaling parameter $\theta$ (i.e., $\frac{\partial \mathcal{L}}{\partial g_i} \frac{\partial g_i}{\partial \theta}$) and the rotation parameter $\theta_r$ (i.e., $\frac{\partial \mathcal{L}}{\partial \mathcal{R}_i} \frac{\partial \mathcal{R}_i}{\partial \alpha_{r_i}} \frac{\partial \alpha_{r_i}}{\partial \theta_r}$), providing greater flexibility in fitting data and alleviating the underfitting caused by $\mathcal{K}_{\mathrm{gate}}$'s insufficient expressiveness.
# Improving Generalization.
Lemma 2. Define the fixed output space $\mathcal{H} = \left\{\sum_{i=1}^{n} \alpha_i v_i \mid \alpha_i \geq 0, \; \sum_{i=1}^{n} \alpha_i = 1\right\}$ with fixed basis vectors $\{v_i\}$. Transforming $v_i$ via an input-dependent rotation $R_i(x)$ gives $\tilde{v}_i(x) = v_i R_i(x)$. Define the dynamic output space $\mathcal{H}'(x) = \left\{\sum_{i=1}^{n} \alpha_i \tilde{v}_i(x) \mid \alpha_i \geq 0, \; \sum_{i=1}^{n} \alpha_i = 1\right\}$. The union $S = \bigcup_x \mathcal{H}'(x)$ strictly contains $\mathcal{H}$, i.e., $S \supset \mathcal{H}$.
According to Equation 3, existing gating outputs are confined to the fixed convex cone $\mathcal{H}$, so a target $\Delta \mathbf{y}_{\mathrm{target}} \notin \mathcal{H}$ cannot be fitted. RadarGate introduces the input-dependent rotation $R_i(x)$ (Lemma 2) to expand the space to dynamic convex cones $\mathcal{H}'(x)$. According to Lemma 2, we have $\bigcup_x \mathcal{H}'(x) \supset \mathcal{H}$. Thus a target $\Delta \mathbf{y}_{\mathrm{target}} \in \bigcup_x \mathcal{H}'(x) \setminus \mathcal{H}$ (outside $\mathcal{H}$) can still be fitted. This rotation-induced basis alteration and space expansion improve the generalization of gating modules on various tasks outside the cone $\mathcal{H}$.
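Lemma 2 can be illustrated with a small NumPy example (toy vectors of our own choosing): a target direction outside the fixed cone becomes exactly reachable once one basis vector is rotated toward it.

```python
import numpy as np

def rot(a):
    """2-D rotation matrix by angle a."""
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
target = np.array([-1.0, 0.5])
target /= np.linalg.norm(target)          # unit target, outside conv({v1, v2})

# Static cone H: the best convex combination still misses the target.
gs = np.linspace(0.0, 1.0, 1001)
static_err = min(np.linalg.norm(g * v1 + (1 - g) * v2 - target) for g in gs)

# Dynamic cone H'(x): rotate v1 onto the target direction, then pick alpha = (1, 0).
angle = np.arctan2(target[1], target[0])  # direction of the target
v1_rot = rot(angle) @ v1                  # rotated basis vector equals the target
dynamic_err = np.linalg.norm(1.0 * v1_rot + 0.0 * v2 - target)
```

The static residual stays large while the rotated basis drives the error to zero, mirroring the space expansion $\bigcup_x \mathcal{H}'(x) \supset \mathcal{H}$.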
Enhancing Scaling. As the LoRA modules scale up, the complexity of approximating the ideal weights $g _ { i } ^ { * } ( \mathbf { x } )$ grows, and the limitations of the fixed convex cone $\mathcal { H }$ become more pronounced. Combining Lemmas 1 and 2 with the preceding analyses, we can summarize our insights as follows:
Summary. RadarGate's rotational mechanism $\mathcal{R}_i(\mathbf{x})$ expands the hypothesis space to $\mathcal{K}_{\mathrm{ours}} \supset \mathcal{K}_{\mathrm{gate}}$ and the effective output space to $\bigcup_x \mathcal{H}'(x) \supset \mathcal{H}$, providing the necessary flexibility to better fit complex $g_i^*(\mathbf{x})$ and generalize to a wider range of target outputs, thereby sustaining performance at larger scales.
# 4.3 Computational and Memory Complexity
For a sequence input of dimension $L \times d_{\mathrm{in}}$, we decompose the parameters of RotationGate into two matrices of size $d_{\mathrm{in}} \times r_{\mathrm{a}}$ and $r_{\mathrm{a}} \times d_{\mathrm{in}}$ through low-rank factorization. When these parameters satisfy $n, r, k, r_{\mathrm{a}} \ll \min\{d_{\mathrm{in}}, d_{\mathrm{out}}\}$, the computational and memory complexity $O$ and $M$ can be simplified as:
$$
O_{\mathrm{s}} = O\left(L \cdot \min\{d_{\mathrm{in}}, d_{\mathrm{out}}\}\right) = O_{\mathrm{r}}, \quad M_{\mathrm{s}} \approx M_{\mathrm{r}}
$$
This result indicates that the computational and memory complexities of RadarGate are asymptotically of the same order of magnitude as those of existing gating methods.2
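As a back-of-the-envelope check of the low-rank factorization, the parameter counts can be compared directly (the concrete $d_{\mathrm{in}}$ and $r_{\mathrm{a}}$ values below are hypothetical, chosen only for illustration):

```python
def rotationgate_params(d_in, r_a):
    """Low-rank factorized RotationGate parameters: (d_in x r_a) + (r_a x d_in)."""
    return 2 * d_in * r_a

def dense_params(d_in):
    """Parameters of an unfactorized d_in x d_in map, for comparison."""
    return d_in * d_in

# With d_in = 2048 and r_a = 8, the factorized gate needs 32,768 parameters,
# under 1% of the 4,194,304 a dense mapping would require.
```

This is why the rotation parameters add negligible memory on top of the LoRA modules themselves.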
# 5 Experiments
# 5.1 Experimental Setup
# Environment.
All experiments are conducted on an Ubuntu 20.04.5 LTS server with PyTorch, featuring 64GB RAM, an Intel Xeon Silver 4210 CPU, and dual NVIDIA A40 GPUs (48GB each).
Datasets. We use nine datasets from the v1 version of the FLAN dataset [27] as the base LoRA module training and test set for the LoRA module independence experiments. In later experiments, the v2 FLAN dataset is categorized by language, mathematics, reasoning, and translation for LoRA module training. Our method RadarGate is compared with baselines on six large-scale comprehensive benchmarks, including NLP benchmarks: GLUE [25], MMLU [10], WMT14 [4]; mathematics benchmarks: MATH [18], GSM8K [5] and the science benchmark GPQA [23].
# Baselines.
RadarGate is compared with multi-LoRA/gating architectures across two categories: 1) rule-based (LoraHub [12], PEMs [30], Arrow [22], direct LoRA) and 2) learnable (HydraLoRA [26], MoLE [28], OMoE [6]). Evaluated gating mechanisms include Stretch-Only (MoLE [28]), Rotation-Only (from RadarGate), Nexus [7], and Tutel [13]. RadarGate integrates the Stretch-Only and Rotation-Only gates to jointly improve generalization and scalability.
Metric. Accuracy serves as the primary metric. Predictions are evaluated on the aforementioned benchmarks using their standard protocols. Overall accuracy is calculated by matching predictions against reference answers.
# 5.2 Training Details
RadarGate employs frozen pretrained weights with standard LoRA initialization (rank 8, LoRA $\alpha = 32$, learning rate $1e{-4}$, batch size 4, dropout $= 0.1$). Gate training uses identical hyperparameters but fewer parameters than baselines, with frozen pretrained/LoRA weights isolating gating effects. Inference maintains frozen parameters and benchmark consistency (max new tokens $= 512$).
# 5.3 Performance
We validate RadarGate across multi-LoRA architectures and provide experimental insights. Figure 3(a) confirms its excellent fitting capability under matched training/test conditions, while Table 1 demonstrates superior generalization. Figure 4 reveals scalability improvements with increasing module and parameter counts, and Figures 3(b) and (c) show the results of the ablation study.
Experiments on Fitting. We validate RadarGate on nine tasks to assess its fitting capability under three frameworks. Figure 3(a) shows RadarGate outperforms baselines under MoLE across nine tasks, from which we make the following observation:
Figure 3: Performance on Fitting and Ablation. Figure (a) shows performance of fitting capability on same-source training/test sets, while Figures (b) and (c) show ablation results for RadarGate’s StretchGate and RotationGate components.
Table 1: Performance on Generalization. Comparisons of our proposed RadarGate with 4 existing baseline methods regarding generalization under 2 composable LoRA architectures.
Obs.❶ Baseline methods exhibit lower fitting performance than RadarGate when training and test sets are from the same source. As shown in Figure 3(a), our method achieves state-of-the-art performance in over $90 \%$ of tasks compared to other baseline methods, and in some tasks, it even outperforms certain baselines by more than $20 \%$ in performance. This indicates that RadarGate has a stronger fitting capability.
Experiments on Generalization. We evaluate RadarGate on six well-known benchmarks to assess its generalization ability under three frameworks, with settings involving test and training datasets from different sources. Table 1 demonstrates the generalization performance of RadarGate (top-$k = 2$). Observations include:
Obs.❷ RadarGate demonstrates optimal generalization performance when the training and test sets are from different sources. Our RadarGate achieves $30\%$-$50\%$ higher accuracy than rule-based methods and $5\%$-$10\%$ improvements over learnable baselines. This shows that the proposed
RadarGate adapts to different LoRA architectures, surpassing baselines in generalization while preserving module independence.
Experiments on Scaling. We analyze the impact of scaling LoRA modules and parameters on performance and make the following observations.
Figure 4: Scaling results. (a)-(c): accuracy on GLUE, MMLU, and WMT14 for Stretch-Only Gate, Rotation-Only Gate, Nexus, Tutel, and RadarGate (ours) as the quantity of modules grows from 5 to 40. (d)-(f): accuracy on the same benchmarks across base models of different sizes (IndicBART, mt0-base, flan-t5-large, llama3.2-1b, llama3.2-8b).
Obs.❸ When the number of LoRA modules in the composable LoRA architecture scales up, our RadarGate demonstrates superior performance. Figure 4(a)(b)(c) evaluate NLP performance with 5–40 LoRA modules under MoLE: baselines exhibit inverted U-shaped trends, whereas our method achieves near-monotonic improvement (with an $8 \%$ maximum gain), demonstrating sustained superiority.
Obs.❹ When the parameter count of the base model in the composable LoRA architecture scales up (with the LoRA module parameters increasing accordingly), our RadarGate demonstrates superior performance. Figure 4(d)(e)(f) show consistent $5\%$-$10\%$ advantages over baselines across base models with 110M, 580M, 770M, 1B, and 8B parameters, confirming the method's scalability.
Experiments on Ablation. To verify the necessity of the StretchGate and RotationGate components in RadarGate, we conducted ablation experiments on 12 tasks, leading to the following observations:
Obs.❺ RadarGate achieves the best performance when all components are included, and StretchGate and RotationGate mutually reinforce each other. As shown in Figures 3(b)(c), the complete RadarGate covers the largest area and achieves the maximum value in over $90 \%$ of tasks, outperforming the second-best method by even $20 \%$ in some cases. The standalone StretchGate and RotationGate exhibit significantly lower performance than the full RadarGate, indicating that both components are crucial for optimizing the gating performance.
# 5.4 Case Study
Figure 5 demonstrates RadarGate's ability to capture latent structure among LoRA representations in a reasoning task. Initially, both RadarGate and the MoE gate fail due to averaged norm weights (global magnitude) and negligible angular weights (local angle dependencies). During training, MoE erroneously amplifies the irrelevant Translation (T) and Science (S) LoRA modules, assigning them weights of 0.14 and 0.12, respectively. After 500 steps, MoE still fails. In contrast, we observe that RadarGate suppresses interference by driving the angular weights between unrelated modules (modules S and R, as well as R and T) toward zero and those between related modules (modules L and R, as well as S and T) toward one, achieving correct answers after 250 steps. This suggests that the rotations applied to each pair of representations are contrastive, encouraging closer alignment of semantically similar representations while pushing distant ones further apart, which helps the representations converge.
Figure 5: Case study on an entailment reasoning task (correct answer: Not entailment). The panels track the gating weights that RadarGate and a standard MoE gate assign to the Science (S), Reasoning (R), Translation (T), Math (M), and Language (L) LoRA modules at the initial, intermediate, and final training stages, together with the local angular weights between module pairs.
# 6 Discussion
Our experiments show RadarGate's advantages in fitting, generalization, and scalability. In this section, we present key observations and discuss future research directions: we investigate the convergence of RadarGate, visualize the rotation process, and analyze its performance in low-sample settings.
Figure 6: (a) Training-loss convergence of 5 modules over 500 steps for RadarGate, Stretch-Only, Rotation-Only, and Nexus gates; (b) visualization of LoRA representations before and after rotation and stretching, shown in polar coordinates; (c) accuracy on the GLUE dataset under sample sizes from 5 to 1600.
# Convergence during the Training steps.
Training loss analysis reveals baseline methods suffer from early erratic oscillations, persistent convergence issues (suboptimal minima), and late-stage fluctuations indicating robustness limitations. As shown in Figure 6(a), RadarGate achieves faster convergence with lower loss and sustained stability, demonstrating superior convergence dynamics. Complete convergence experiment details are in Appendix G.
Visualization of the Rotation Process. Figure 6(b) shows the positional changes of LoRA representations before and after rotation and stretching, with PCA reducing the input from 2048 to 2 dimensions, visualized in both Cartesian and polar coordinates. In the Stretch-Only method, the magnitude of input vectors changes with little angular variation, leading to underfitting. In contrast, RadarGate pulls similar vectors closer and rotates less correlated ones toward the target, improving feature learning and aligning the data more effectively with the target.
# Performance under Varying Sample Sizes.
Figure 6(c) shows that our method achieves a $5\%$ superiority over baselines on the GLUE benchmark at 50 samples, maintaining a $25\%$ advantage as the data increases. This validates its exceptional low-sample performance and applicability under resource constraints. Full sample-size experiments are detailed in Appendix H.

# Abstract

Scaling Low-Rank Adaptation (LoRA)-based Mixture-of-Experts (MoE) enables large language models (LLMs) to adapt efficiently to diverse tasks. However, traditional gating mechanisms that route inputs to the best experts may fundamentally hinder LLMs’ scalability, leading to poor generalization and underfitting. We identify that the root cause lies in the restricted expressiveness of existing weighted-sum mechanisms, both within and outside the convex cone of LoRA representations. This motivates our RadarGate, a novel geometrically inspired gating method that introduces rotational operations on LoRA representations to boost expressiveness and facilitate richer feature interactions among multiple LoRAs for scalable LLMs. Specifically, we first fuse each LoRA representation with the other LoRAs using a learnable component and then feed the output to a rotation matrix. This matrix involves learnable parameters that define the relative angular relationship between LoRA representations. Such a simple yet effective mechanism provides an extra degree of freedom, facilitating the learning of cross-LoRA synergies and properly tackling the poor generalization and underfitting issues that arise as the number of LoRAs grows. Extensive experiments on 6 public benchmarks across 21 tasks show the effectiveness of RadarGate for scaling LoRAs. We also provide valuable insights, revealing that the rotations applied to each pair of representations are contrastive, encouraging closer alignment of semantically similar representations during the geometrical transformation while pushing distant ones further apart. We will release our code to the community.

Categories: cs.LG, cs.SE
# 1 Introduction
Recent advancements in Large Language Models (LLMs) and Vision-Language Models (VLMs) have significantly enhanced their complex reasoning capability, demonstrating remarkable problem-solving proficiency (Wei et al., 2022; Lu et al.,
# Text-Based Reasoning Task

Question: In a cube ABCD-$A_1B_1C_1D_1$ with edge length 1, point M is a moving point on the surface of the cube, and BM is parallel to plane $AD_1C$. What is the length of the path traced by the moving point M?

# Video-Based Reasoning Task

Question: There is a metallic silver object in the scene, point M is a moving point on the surface of the object, and $P_{\text{green}}M$ is parallel to plane $P_{\text{red}}P_{\text{light-yellow}}P_{\text{blue}}$. What is the length of the path traced by the moving point M?
Figure 1: Complex Reasoning in Visual Domain. Conventional benchmarks typically involve complex reasoning tasks in text format (left col.), where reasoning occurs entirely within language modality. In contrast, this work introduces video-based reasoning tasks (right col.), where key conditions are implicitly embedded in realistic 3D scenes and captured as videos. Solving these problems requires interleaved textual reasoning and spatial perception (bottom row), evaluating VLMs’ reasoning ability grounded in spatial comprehension.
2025; Fan et al., 2025; Jaech et al., 2024; Guo et al., 2025). By leveraging extensive training on math problem solving and programming, these models have demonstrated extraordinary generalization across diverse, intricate challenges (Lewkowycz et al., 2022; Achiam et al., 2023; Chen et al., 2024b; Saab et al., 2024; Chen et al., 2024c, 2021b; Li et al., 2022).
With the increasing focus on complex problem solving, various benchmarks have been developed to evaluate the reasoning ability of LLMs and VLMs (Hendrycks et al., 2021; Johnson et al.,
2017; Goyal et al., 2017; Hudson and Manning, 2019; Liu et al., 2024; Chen et al., 2024e; Fu et al., 2024). However, existing studies predominantly focus on abstract problems, such as algebraic computation, program synthesis, and geometric reasoning (Lu et al., 2023; Wang et al., 2024a; Chen et al., 2021a; Zhang et al., 2024a; Amini et al., 2019; Zhang et al., 2024b), largely overlooking visual-based reasoning tasks that are crucial for real-world interaction. Among these, spatial intelligence remains particularly underexplored.
Spatial intelligence refers to the ability to reason about spatial information (Gardner, 2011); it requires myriad capabilities, including perceiving size and shape, understanding geometric transformations, and retrieving spatial knowledge (Yang et al., 2024b). This capability is essential not only for the VLMs themselves but also for downstream real-world applications (Chen et al., 2024a), such as robotics (Brohan et al., 2023, 2022; O’Neill et al., 2024), augmented reality (Mangalam et al., 2023; Grauman et al., 2022; Chandrasegaran et al., 2024; Yuan et al., 2024b, 2025c), and embodied AI (Driess et al., 2023; Liu et al., 2025). While some notable studies have explored spatial intelligence (Yang et al., 2024b; Team, 2024e,a; Zhai et al., 2025; Li et al., 2024; Man et al., 2024), many of them remain centered on perception tasks, such as scene understanding and distance estimation. These tasks are essential, yet they fall short of evaluating the high-level reasoning capability indispensable for spatial problem-solving. Notably, there is still a lack of a systematic framework for evaluating spatially grounded reasoning.
To address this limitation, we introduce the Spatial Intelligence ReasonIng Benchmark (SIRI-Bench), specifically designed to evaluate VLMs’ spatial intelligence through complex reasoning tasks. SIRI-Bench consists of 891 samples, each being a video-question-answer triplet. Inspired by the typical paradigm of assessing textual reasoning via math problems (Zhang et al., 2024b; Lu et al., 2023; Wang et al., 2024a; Yue et al., 2024; the University of Utah, 2024; Lightman et al., 2023), we take 3D-geometry math problems as the foundation of our benchmark. Unlike conventional textual or 2D diagram-based representations, where problem conditions are explicitly stated in text or diagrams, SIRI-Bench presents these problems through video-based inputs. As shown in fig. 1, each math problem in SIRI-Bench is represented as a 3D scene and captured as a video. In this representation, key mathematical conditions and spatial relationships are implicitly embedded within the 3D scene, requiring models to interpret and reason based on spatial cues. More examples can be found in fig. 3. By carefully designing the representation of each question, SIRI-Bench ensures that both spatial perception and high-level reasoning capabilities are essential to solving the questions. Consequently, SIRI-Bench provides a systematic and challenging benchmark for evaluating VLMs on spatially grounded reasoning tasks, offering new insights into spatial intelligence and visual problem-solving.
Constructing such a dataset at scale entails significant challenges. Manually annotating and crafting the 3D scene is extremely labor-intensive, as it requires expertise in both math and 3D software. To address these challenges, we develop an Automatic Scene Creation Engine that automatically translates a 3D geometry problem into a realistic 3D scene and renders a corresponding video. This engine employs multiple tailored LLM agents and follows a sophisticated workflow. Specifically, the Automatic Scene Creation Engine takes as input a 3D geometry math problem along with its answer. Initially, the engine solves for the type and specific dimensions of the geometric entity. Following this, the corresponding Blender Python (Blender Online Community, 2022) script (bpy script) is generated to insert the geometric entity into the Blender scene. After that, the vertex indices are transformed from letter-based indexing to color-based indexing, providing precise references. Subsequently, the text description of the problem is modified by identifying and removing the information that can be inferred from the scene. This prevents the model being tested from bypassing spatial interpretation and directly relying on text. Finally, the video of the geometric entity as well as the whole scene is captured, which will serve as the visual input for VLMs. We show samples in fig. 3, which verify that the proposed Automatic Scene Creation Engine is capable of accepting any 3D geometry math problem and generating the corresponding 3D scene faithful to the original description.
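The workflow above can be sketched as a skeleton in which stub functions stand in for the LLM agents (all names and return values here are illustrative, not the engine’s actual implementation):

```python
def solve_geometry(problem: str) -> dict:
    """Math-agent stub: infer the entity type and concrete dimensions."""
    return {"type": "pyramid",
            "vertices": {"S": (0, 0, 2), "A": (0, 0, 0), "B": (2, 0, 0)}}

def generate_bpy_script(geometry: dict) -> str:
    """Code-agent stub: emit a Blender Python (bpy) script for the entity."""
    lines = ["import bpy"]
    for name, (x, y, z) in geometry["vertices"].items():
        lines.append(f"# place a colored marker for vertex {name} at ({x}, {y}, {z})")
    return "\n".join(lines)

def strip_inferable_conditions(problem: str) -> str:
    """Text-agent stub: drop the category now encoded by the scene itself.
    (The real engine also re-indexes the remaining vertex letters.)"""
    return problem.replace("In the pyramid S-ABCD,",
                           "There is an object in the scene;")

problem = "In the pyramid S-ABCD, SA is perpendicular to the plane ABCD."
geometry = solve_geometry(problem)
script = generate_bpy_script(geometry)
question = strip_inferable_conditions(problem)
print(question)  # "There is an object in the scene; SA is perpendicular ..."
```

Each stub corresponds to one stage of the engine: geometric solving, bpy script generation, and question processing; rendering is handled by Blender itself.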
We conduct extensive experiments on SIRI-Bench, evaluating various popular VLMs. Results show that over $50 \%$ of the problems in SIRI-Bench cannot be correctly solved even by the most state-of-the-art VLMs, with prediction errors exceeding $100 \%$ of the ground truth. Remarkably, when key mathematical conditions, such as object dimensions and geometric types, are explicitly provided in textual form, the models’ performance improves by more than twofold. This indicates that VLMs struggle to extract the necessary spatial information from video. In addition, comparisons with human participants and qualitative visualizations also show that current VLMs fail to solve SIRI-Bench problems accurately. These findings reveal clear limitations of existing VLMs in spatially grounded reasoning and highlight the value of SIRI-Bench.
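The per-problem scoring implied by the error-interval evaluation (relative errors binned into intervals from 0% to 200%, as in Figure 4) can be sketched as follows; the specific bin edges are our assumption, since the paper does not list them in the text:

```python
def relative_error(pred: float, truth: float) -> float:
    """Relative error of a prediction, as a percentage of the ground truth."""
    if truth == 0:
        return float("inf")
    return abs(pred - truth) / abs(truth) * 100.0

def error_bin(err_pct: float, edges=(5, 10, 25, 50, 100, 200)) -> int:
    """Map an error percentage to one of seven intervals (edges illustrative)."""
    for i, edge in enumerate(edges):
        if err_pct <= edge:
            return i
    return len(edges)  # worse than 200% of the ground truth

# A prediction of 1.4 against ground truth sqrt(2) is off by about 1%,
# so it lands in the first (most accurate) interval.
print(error_bin(relative_error(1.4, 2 ** 0.5)))  # 0
```

Histogramming these bin indices per model reproduces the stacked-bar view of accuracy described for Figure 4.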
The major contributions of this paper are summarized as follows:
(1) We introduce SIRI-Bench, a benchmark designed to investigate VLMs’ spatial intelligence through complex reasoning tasks. By representing problems through video-based 3D scenes, SIRI-Bench establishes a novel framework for evaluating VLMs’ reasoning capability grounded in spatial comprehension.
(2) We develop an Automatic Scene Creation Engine that transforms 3D geometry problems into realistic 3D scenes. By leveraging multiple specialized LLM agents in a structured workflow, the engine can produce faithful 3D scenes and significantly reduce the cost of large-scale data generation.
(3) We benchmark the performance of state-of-the-art VLMs on SIRI-Bench and find that they struggle to extract critical spatial information from visual inputs when solving complex reasoning tasks, revealing key limitations in spatially grounded reasoning.
# 2 Related Work
# 2.1 Spatial intelligence
Spatial intelligence is a key aspect of cognitive functioning and has increasingly garnered attention within Vision-Language Models. The concept of ‘Spatial AI’ was introduced by Davison et al. (Davison, 2018), who defined it as an extension of visual SLAM (Simultaneous Localization and Mapping). Following this, Chen et al. (Chen et al., 2024a) presented SpatialVLM, a VLM trained on a 3D visual question answering dataset, which demonstrated significant improvements in spatial dimension estimation. Yang et al. (Yang et al., 2024b) further formalized visual-spatial intelligence as the capability to understand, interpret, and operate on visual information in 3D space. These studies highlight the importance of spatial intelligence in robotics and autonomous systems. In addition to 3D understanding (Yuan et al., 2025a,b, 2024a; Chen et al., 2024d; Mao et al., 2025), spatial generation is another key component of spatial intelligence. The team of Li et al. developed the World Labs system (Team, 2024e), which generates interactive 3D scenes from a single image, demonstrating the generative potential of spatial intelligence. Similarly, the DeepMind team released Genie 2 (Team, 2024a), which supports physical simulation and spatial memory capabilities. While existing VLMs excel at elementary tasks, this paper seeks to advance the field by investigating the more intricate challenges of complex spatial reasoning.
# 2.2 Complex Reasoning
Complex reasoning refers to the capability of language models to generate logically consistent and contextually appropriate responses through multi-step reasoning, abstract thinking, and knowledge integration. Research has shown that LLMs possess the capability to handle complex logical reasoning (Wei et al., 2022; Lu et al., 2025; Fan et al., 2025). This technology has been widely applied across various domains, including mathematical problem-solving (Lewkowycz et al., 2022; Hendrycks et al., 2021; Achiam et al., 2023; Jaech et al., 2024; Guo et al., 2025), medical diagnosis (Chen et al., 2024b; Saab et al., 2024; Chen et al., 2024c), and programming (Chen et al., 2021b; Li et al., 2022; Jaech et al., 2024; Guo et al., 2025). However, researchers found that LLMs struggle with visual-language reasoning. This may be due to their inherent inability to process and integrate visual-linguistic information, limiting their real-world applications like robotics (Małkiński et al., 2024; Chia et al., 2024; Ghosal et al., 2024). Although VLMs achieve proficiency in conventional video question-answering tasks involving recognition and description (Wang et al., 2024b; Team, 2024b; Lin et al., 2023; Maaz et al., 2023), their capacity for integrating complex reasoning with spatial intelligence remains underdeveloped. To address this limitation, this paper represents math problems through video-based 3D scenes that jointly demand both spatial understanding and high-level reasoning.
Figure 2 content (pipeline steps): Step 1-1: Solve Dimensions; Step 1-2: Write bpy Scripts; Step 1-3: Replace Vertices; Step 1-4: Place in the Scene; Step 1-5: Render a Video; Step 2-1: Remove Conditions; Step 2-2: Replace Vertex Indices; Step 3-1: Scale the Answer (angle: $\div 4^0$, length: $\div 4^1$, area: $\div 4^2$, volume: $\div 4^3$). The running example converts “In the pyramid S-ABCD …” with $SA = AB = 2$ into “There is a metallic silver object in the scene …”, maps vertices to colored points (e.g., S $\to P_{\text{magenta}}$, scale $= 4$), and converts the answer $\sqrt{2}/3$ to the numeral 0.47140.
Figure 2: Illustration of the Transformation Process from a raw math problem to a 3D Spatial Representation. The given math problem is decomposed into five components and processed individually. First, the specific dimensions of the main geometric entity are solved, and corresponding bpy code is generated to insert the entity into a 3D scene, which is then captured as a video. Next, the problem conditions are filtered to remove information that must be inferred from the 3D space rather than the text, and node indices are replaced. Finally, the answer is adjusted for scaling effects to produce the final answer.
# 2.3 VLM Reasoning Benchmark
Recent advances in VLMs have spurred the development of diverse evaluation benchmarks designed to rigorously test various aspects of visual reasoning. Some benchmarks focus on evaluating the visual reasoning capability of VLMs by minimizing linguistic biases and ensuring reliance on authentic visual input (Johnson et al., 2017; Goyal et al., 2017; Hudson and Manning, 2019; Liu et al., 2024; Chen et al., 2024e; Fu et al., 2024). Benchmarks such as MathVista (Lu et al., 2023), MATH-Vision (Wang et al., 2024a), GeoQA (Chen et al., 2021a), GeoEval (Zhang et al., 2024a), MathQA (Amini et al., 2019), and MathVerse (Zhang et al., 2024b) primarily assess the symbolic and geometric reasoning of VLMs through plane geometry problems, yet remain abstracted from real-world contexts, limiting their capacity to evaluate spatially grounded understanding. Unlike existing 2D-centric VLM benchmarks, our benchmark is built upon realistic 3D environments with precise geometric properties (e.g., angles, distances). Furthermore, to rigorously evaluate VLMs’ complex spatial reasoning, our benchmark demands multi-step logical inference that integrates spatial perception with procedural reasoning.
# 3 SIRI-Bench
To evaluate the spatial intelligence of VLMs in complex reasoning scenarios, we introduce SIRI-Bench, a benchmark that centers on both spatial awareness and complex mathematical reasoning. The instances in SIRI-Bench are derived from 3D geometry math problems and are transformed into video-based question answering problems. For each problem, we elaborately design the 3D spatial representation that embeds key mathematical conditions into realistic 3D scenes, requiring VLMs to extract relevant information through spatial perception and reasoning.
In this section, we introduce the data collection process, the 3D spatial representation, and the Automatic Scene Creation Engine for our SIRI-Bench. Additionally, we present other details, some illustrative samples, and data statistics at the end of this section. The processing pipeline for each sample in our dataset is illustrated in fig. 2, and samples from SIRI-Bench, along with the corresponding intermediate steps, are shown in fig. 3.
Note that in this paper, the term ‘3D spatial representation’ refers to representing math problems as 3D scenes, in contrast to the text-based or diagrambased representations. It should not be confused with learnable 3D representations or learnable parameters such as those used in 3D Gaussian splatting.
# 3.1 Data Collection
Following prior works that adopt math problems for complex reasoning (Zhang et al., 2024b; Lu et al., 2023; Wang et al., 2024a; Yue et al., 2024; the University of Utah, 2024; Lightman et al., 2023), we similarly build our benchmark on math problems. Math problems naturally involve multi-step, structured inference, making them an ideal foundation for our benchmark. The key distinction is that we focus on 3D geometry math problems rather than algebra or plane geometry, and transform them into realistic 3D scenes for spatial reasoning.

Figure 3: Samples from SIRI-Bench, shown as original questions, intermediate steps, and final samples. Sample #0008: a frustum of a triangular pyramid ABC-$A_1B_1C_1$ with $AB = BC = CA = AA_1 = BB_1 = 2$ and $A_1B_1 = 4$; the engine solves the vertex coordinates (scale 4), replaces vertex letters with colored points, and yields the numerical answer 0.785398 ($\pi/4$, an angle, so unaffected by scaling). Sample #1289: a frustum of a cone with upper base radius 1, lower base radius 2, and slant height 4; a particle travels from the midpoint P of a generatrix around the lateral surface and back, with shortest path $6\sqrt{2}$, scaled (by 4) to the numerical answer 2.12132.
The SIRI-Bench is initially collected by gathering publicly available 3D geometry math problems as well as their corresponding answers from online educational resources. These problems are then translated into English for consistency. The dataset encompasses a wide range of difficulty levels, spanning from middle school examination standards to those of high school graduation exams. This broad spectrum ensures that our benchmark accommodates varying degrees of complexity. To maintain data quality, we conduct a manual screening to eliminate low-quality problems. The dataset originally comprises various question types, including multiple-choice questions, fill-in-the-blank questions, and open-ended problem-solving questions. As for the proof-based questions, we exclude them due to the substantial challenge of validating the proof texts. We then leverage LLMs to convert all questions into open-ended problem-solving questions. This transformation ensures that the evaluation is conducted within a standardized and consistent open-ended question-answer format.
# 3.2 3D Spatial Representation
In this section, we propose a 3D spatial representation that presents geometry problems in realistic 3D scenes, which are then recorded as videos. Compared to traditional abstract representations, such as textual descriptions or 2D diagrams, this representation offers a closer approximation of real-world scenarios. Moreover, by embedding key mathematical information within the 3D scenes, VLMs are compelled to extract information from spatial cues rather than relying on textual descriptions, thereby jointly assessing their spatial and reasoning abilities.
Specifically, an original 3D geometry problem can be conceptually decomposed into five components: (1) the Illustration Diagram, (2) the Geometry’s Category, (3) the Primary Conditions, (4) the Auxiliary Conditions, and (5) the Final Question.
For the Illustration Diagram, it is typically used to visually represent the problem. In our approach, we discard the diagram and represent the problem through a video that depicts a 3D scene.
For the Geometry’s Category, it refers to the type of the main geometric entity involved in the problem. In our approach, we remove this information from the textual description and instead place the corresponding geometric entity within the 3D scene. For example, if the original description states “In the pyramid S-ABCD”, we change it to “There is an object in the scene” and insert the corresponding solid into the 3D scene. Notably, since 3D geometry problems follow a strict naming convention, the main geometry category can often be inferred from the vertex indices; for example, a quadrangular pyramid can be inferred from ‘P-ABCD’. Therefore, the vertex indices relevant to the main geometric entity must also be removed. To achieve this, we replace the letter-based indices with color-based indices and insert correspondingly colored points in the scene. This ensures that references to different colored points accurately identify the vertices of the geometric entity. For example, in fig. 2, the vertex ‘S’ is replaced with ‘$P_{\text{magenta}}$’, referring to the apex of the pyramid.
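The letter-to-color re-indexing can be sketched as a simple mapping (a minimal sketch; the palette and the deterministic assignment are our assumptions, and the engine’s actual color choices may differ):

```python
# Hypothetical color palette for vertex re-indexing.
PALETTE = ["red", "green", "blue", "magenta", "yellow", "light-red"]

def recolor_vertices(vertices: dict) -> dict:
    """Map each letter-based vertex index to a color-based one."""
    return {letter: f"P_{PALETTE[i % len(PALETTE)]}"
            for i, letter in enumerate(sorted(vertices))}

vertices = {"S": (0, 0, 2), "A": (0, 0, 0), "B": (2, 0, 0)}
colors = recolor_vertices(vertices)
print(colors)  # {'A': 'P_red', 'B': 'P_green', 'S': 'P_blue'}
```

In the real pipeline the same mapping must also be substituted into the problem text, which requires word-boundary-aware replacement so that a vertex letter is not confused with ordinary prose.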
For the primary geometric conditions, they refer to the specific attributes and constraints of the main geometric entity, such as its dimensions and spatial relationships (e.g., parallelism or perpendicularity). We also remove this information from the textual description and represent it within the 3D scene. Specifically, based on the original mathematical description, we solve for all the information required to uniquely define the main geometric entity. Based on this information, we insert a corresponding 3D object into the scene. This ensures that all relevant conditions can be re-derived by observing the inserted entity.
For the auxiliary conditions, these are additional conditions that are not directly related to the main geometric entity but are necessary for solving the problem. Given their diverse forms and the potential complexity of representing them visually, we retain these conditions in textual form to ensure clarity and accuracy.
For the final question, it specifies what ultimately needs to be solved in the problem. We retain this component in textual format since it clearly states the objective.
Our approach is conceptually aligned with MathVerse (Zhang et al., 2024b), which embeds math conditions into 2D diagrams, challenging the visual capability of VLMs. Unlike MathVerse, by encoding key information in 3D scenes rather than static diagrams, we push the problem setting closer to real-world scenarios. This representation goes beyond optical character recognition (OCR), demanding spatial perception and reasoning, as the VLM must actively interpret the 3D space to extract the information necessary for problem-solving.
# 3.3 Automatic Scene Creation Engine
Manually converting each problem into a 3D scene is extremely labor-intensive, as it requires both solving the math problems and mastering 3D modeling software. To facilitate large-scale 3D scene generation, we develop an Automatic Scene Creation Engine. Inspired by prior works in MultiAgent System (Wu et al., 2023; Hong et al., 2023; Yu et al., 2025; Wei et al., 2024; Yang et al., 2024c), we leverage multiple specialized LLM agents to collaboratively create 3D scenes. They follow a sophisticated workflow involving geometric solving, bpy script generation, question processing, and answer processing. This section outlines this workflow in detail.
First, the engine begins by processing the original 3D geometry problem and its associated answer. To resolve the geometric structure described in the problem, an LLM agent first identifies the type of the main geometric entity. Based on this, we employ a math-specialist agent to solve for all key conditions necessary to fully define the main geometric entity. Specifically, we adopt the QwQ-32B model (Team, 2025b) as the math-specialist agent, which is known for its strong performance on math problems. Next, we normalize the size of all geometric entities, scaling them to a range of $0.5\,\mathrm{m}$ to $2\,\mathrm{m}$ (exclusive of $2\,\mathrm{m}$), ensuring that each object fits well within a unified camera configuration. The normalized geometric dimensions are then passed to a code-specialist agent, which generates bpy scripts (Blender Online Community, 2022) to automatically insert the defined objects into Blender. Specifically, we adopt Qwen-LM (Yang et al., 2024a) as the code-specialist agent, which has been shown to synthesize code robustly. Additionally, the generated code inserts colored vertices, replacing letter-based indices with color-based indices (see section 3.2).
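The size-normalization step can be sketched as follows; the paper does not state how the scale factor is chosen, so the doubling/halving rule below is an assumption that merely lands the extent in the stated range:

```python
def choose_scale(max_extent: float, lo: float = 0.5, hi: float = 2.0) -> float:
    """Pick a scale s so that s * max_extent lies in [lo, hi).
    Assumes a positive extent; the doubling/halving rule is illustrative."""
    assert max_extent > 0
    s = 1.0
    while s * max_extent >= hi:
        s /= 2.0
    while s * max_extent < lo:
        s *= 2.0
    return s

# An entity 8 m across is shrunk to 1 m; a 0.1 m entity is enlarged to 0.8 m.
print(choose_scale(8.0), choose_scale(0.1))  # 0.125 8.0
```

The chosen factor must be remembered per sample, since the final answer has to be adjusted by the same scale later in the pipeline.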
Second, the problem description is refined to align with the spatial reasoning objective. Specifically, given the original problem descriptions, an LLM agent identifies and removes key geometric conditions that are intended to be inferred from the 3D scene. As discussed in section 3.2, this step ensures that the VLMs being evaluated must rely on spatial comprehension rather than textual cues to extract such information from 3D scenes.
Third, the original answer is also refined to support robust evaluation. An LLM agent converts symbolic expressions into numerical values, allowing minor estimation errors. For example, when compared to the ground-truth answer $\sqrt { 2 }$ , answers with minor deviations, such as 1.414 or 1.4, should not be considered absolutely incorrect. The agent also adjusts the answer to account for the applied geometric scaling: lengths are scaled linearly, areas quadratically, and volumes cubically, depending on the type of quantity.
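The answer adjustment and tolerant comparison can be sketched as follows (the 5% relative tolerance is an assumption; the paper only says minor deviations are accepted):

```python
import math

# Power of the scale factor by which each quantity type must be divided.
POWER = {"angle": 0, "length": 1, "area": 2, "volume": 3}

def scale_answer(value: float, quantity: str, scale: float) -> float:
    """Undo geometric scaling: lengths scale linearly, areas quadratically,
    volumes cubically; angles (and other ratios) are invariant."""
    return value / (scale ** POWER[quantity])

def is_correct(pred: float, truth: float, rel_tol: float = 0.05) -> bool:
    """Accept predictions within a small relative tolerance of the truth."""
    return math.isclose(pred, truth, rel_tol=rel_tol)

# Sample #1289: the scene is scaled by 4, so the length answer 6*sqrt(2)
# becomes 6*sqrt(2) / 4 ≈ 2.12132, matching the dataset's numeric answer.
truth = scale_answer(6 * math.sqrt(2), "length", 4)
print(round(truth, 5), is_correct(2.1, truth))  # 2.12132 True
```

With this rule, a slightly rounded prediction such as 1.414 for a ground truth of $\sqrt{2}$ is accepted, while grossly wrong values are rejected.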
By elaborately designing the specialized agents and their collaborative workflow, our Automatic Scene Creation Engine is able to take any 3D geometry problem as input and generate a faithful 3D scene that reflects its conditions. Using this pipeline, we constructed a dataset of 891 samples for SIRI-Bench. Although the current dataset is of moderate size, the engine is fully scalable and can support the generation of larger datasets in the future.

Figure 4: Overall Performance of Existing VLMs. This figure shows the error distributions across seven intervals ranging from $0 \%$ to $200 \%$ for all baseline methods on SIRI-Bench. Each bar represents a different method, with colors indicating the corresponding error intervals. A higher concentration of errors in the lower intervals (i.e., brighter colors) indicates higher accuracy in problem-solving. The method labeled ‘Textual Rep.’ refers to an LLM that accesses the full mathematical conditions through textual descriptions rather than videos of 3D scenes. Overall, the results reveal the limitations of current VLMs in spatially grounded reasoning.

Figure 5: Textual Input vs. Visual Input. This figure compares the accuracy of two sibling models using textual representation versus 3D spatial representation as input. The three columns correspond to three pairs of sibling LLMs/VLMs. This comparison disentangles high-level reasoning from spatial perception, revealing that existing VLMs struggle to extract spatial information effectively when solving complex visual problems, resulting in low accuracy.
# 3.4 Other Details
Our 3D spatial representations are not directly fed to the tested VLMs. Instead, we capture videos of each scene by orbiting a camera around the main geometric object, and use these videos as the final inputs. Specifically, the camera follows a circular path positioned 8 meters away from the object, with a downward tilt of 30 degrees relative to the horizontal plane. This ensures an appropriate level of visibility. For the indoor environment that serves as the background, we utilize home scenes from the 3D-Front dataset (Fu et al., 2021a,b), which features realistic room layouts and furniture models. Our 3D scenes support highly flexible and customizable rendering setups. Although our SIRI-Bench currently adopts a fixed configuration, parameters such as background layout, lighting, rendering style, camera angle, camera path, and resolution can be adjusted, enabling more diverse evaluation settings in future extensions.
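The described camera setup (a circular orbit 8 m from the object, tilted 30 degrees downward) can be sketched numerically; the frame count follows the 3-second, 16 fps videos, and the actual Blender keyframing code is omitted:

```python
import math

def orbit_positions(n_frames: int, radius: float = 8.0,
                    tilt_deg: float = 30.0,
                    center=(0.0, 0.0, 0.0)):
    """Camera positions on a circular orbit around `center`, with the line
    of sight tilted `tilt_deg` below the horizontal (a setup sketch)."""
    tilt = math.radians(tilt_deg)
    height = radius * math.sin(tilt)   # elevation above the object center
    r_xy = radius * math.cos(tilt)     # orbit radius in the ground plane
    out = []
    for k in range(n_frames):
        a = 2 * math.pi * k / n_frames
        out.append((center[0] + r_xy * math.cos(a),
                    center[1] + r_xy * math.sin(a),
                    center[2] + height))
    return out

frames = orbit_positions(48)               # 3 s at 16 fps = 48 frames
print(len(frames), round(frames[0][2], 3))  # 48 4.0  (8 * sin 30° = 4 m up)
```

Every position sits exactly 8 m from the object center, so pointing the camera at the center at each frame yields the constant 30-degree downward view.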
Figure 6: Comparison with Human Performance. This figure compares the performance of human participants (the 1st bar) with four state-of-the-art VLM models on a subset of the SIRI-Bench. The results show that current state-of-the-art VLMs are still far from matching human performance.
# 3.5 Data Samples in SIRI-Bench
In fig. 3, we present some illustrative samples from SIRI-Bench, along with their original questions and intermediate steps in the data generation process. More examples can be found in the Supplementary Materials. As can be seen from the original questions compared to the final questions, our data generation engine accurately conceals the mathematical conditions that VLMs need to extract from the video, while retaining other auxiliary information. Additionally, the engine successfully replaces vertex indices and aligns them accurately with the 3D space.
Figure 7: Data Distribution. The SIRI-Bench dataset ensures a balanced distribution across difficulty levels (top), geometric types (middle), and problem types (bottom).
The intermediate results validate that our engine can correctly solve the dimensions of the geometric entities. Notably, solving the geometric conditions for sample $\#0008$ is particularly complicated. Despite this, our math-specialist agent still solves the problem correctly, demonstrating its reliability.
The comparison between the original answers and our final numerical answers demonstrates that our engine can accurately compute the numerical answer for each question while accounting for scaling effects. Overall, the visualized results confirm the reliability of our Automatic Scene Creation Engine, which can generate faithful 3D scenes for any 3D geometry math problem.
# 3.6 Data Statistics
Our SIRI-Bench dataset consists of 891 samples, each formatted as a video-question-answer triplet. The videos are 3 seconds long, recorded at 16 fps with a resolution of $1200{\times}900$, and stored in MP4 format. Each question is written in English. The answer is provided as a numerical value, without any symbolic expressions such as square roots or $\pi$. All problems are 3D geometry questions, covering nine common types of solids with a balanced distribution across solid types, problem types, and difficulty levels (shown in fig. 7). Together, these properties make SIRI-Bench a robust and reliable foundation for evaluating spatially grounded reasoning.
# 4 Experiment
This work aims to evaluate the spatially grounded reasoning capability of current VLMs. In this section, we conduct systematic experiments to assess the performance of existing VLMs on the proposed SIRI-Bench. We present our experimental setup, results, and detailed analysis in the following subsections.
# 4.1 Experimental Setup
Evaluation Metrics. Our evaluation aims to jointly assess both spatial estimation accuracy and mathematical reasoning correctness. To achieve this, we compute the relative error between the predicted numerical answer and the ground-truth value. The relative error is defined as the absolute difference divided by the ground-truth, expressed as a percentage, which can be expressed as:
$$
\frac{|\hat{y} - y|}{y} \times 100\%,
$$
where $\hat{y}$ denotes the predicted numerical value and $y$ denotes the ground-truth numerical value. This single metric effectively captures errors arising from either spatial misinterpretation or reasoning mistakes.
To better analyze performance, we categorize the relative error into seven intervals: $0{-}20\%$, $20{-}40\%$, $40{-}60\%$, $60{-}80\%$, $80{-}100\%$, $100{-}200\%$, and $>200\%$. We then examine the distribution of model predictions across these intervals. A strong model is expected to have a distribution concentrated in the lower-error intervals, while weaker models concentrate in the higher ones.
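As a concrete sketch, the metric and its binning can be implemented in a few lines, assuming interval edges at 20, 40, 60, 80, 100, and 200 percent with an open-ended final bucket (function names are illustrative):

```python
def relative_error(pred, gt):
    """Relative error in percent: |pred - gt| / gt * 100."""
    return abs(pred - gt) / gt * 100.0

# Upper edges of the first six intervals; the seventh is open-ended (>200%).
EDGES = [20.0, 40.0, 60.0, 80.0, 100.0, 200.0]

def error_interval(err_pct):
    """Index (0-6) of the interval containing a relative error in percent."""
    for idx, edge in enumerate(EDGES):
        if err_pct < edge:
            return idx
    return len(EDGES)  # errors exceeding 200%
```

For example, a prediction of 12 against a ground truth of 10 yields a 20% relative error and falls in the second interval.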
Baselines. We evaluated a variety of state-of-the-art VLMs commonly used in the field. Our selection includes models of different sizes, both open-source and closed-source models, as well as both non-reasoning and reasoning models (the latter feature a long reasoning process when solving problems). Specifically, the models we evaluated include:
• OpenAI: GPT-4o (OpenAI, 2024) and GPT-4V (OpenAI, 2023).
Figure 8: A case study from SIRI-Bench. The panels show the original question (ID: 1295), the corresponding question in SIRI-Bench (answers: 0.28125; 1.76714375), and the response from GPT-4o (answers: 0.5; 6.28).
• Qwen-VL: Qwen2-VL-7B (Wang et al., 2024b), Qwen2.5-VL-7B, 32B, 72B (Yang et al., 2024a), QVQ-72B (Team, 2024c), and a closed-source model Qwen-VL-Max (Team, 2024d).
• Doubao: Doubao-1.5-pro (Team, 2025a).
• Intern-VL: InternVL2-8B (Team, 2024b), InternVL2.5-8B, 26B, 78B (Chen et al., 2024f).
• LLaVA-Next-Video: LLaVA-NV-7B and 34B (Zhang et al., 2024c).
• Others: SpaceQwen-3B (Chen et al., 2024a), a model fine-tuned for enhancing spatial intelligence.
• LLM: Doubao-1.5-pro (Textual Rep.) (Team, 2025a), which is an LLM that accesses the full mathematical conditions through textual descriptions rather than videos of 3D scenes. We additionally evaluate this method to disentangle high-level reasoning from spatial perception.
# 4.2 Overall Performance
The overall performance of existing VLMs is summarized in fig. 4, which presents the distribution of prediction errors across seven intervals ranging from $0 \%$ to over $200 \%$ . A higher concentration of predictions in the lower error intervals indicates greater accuracy.
As shown in the figure, all evaluated VLMs perform poorly on SIRI-Bench. The best-performing method is Doubao-1.5-pro (Team, 2025a). (Doubao-1.5-pro-Textual-Rep. is an LLM that takes textual representations as input; we defer its discussion to section 4.3.) Doubao-1.5-pro has $20\%$ of its predictions within the $0\%{-}40\%$ error range, indicating that it can reasonably estimate the dimensions of the geometric entities and solve the problem in only a fraction of cases. This result aligns well with other benchmarks, as Doubao-1.5-pro has demonstrated outstanding performance on multimodal reasoning datasets such as MathVision (Wang et al., 2024a) and MathVista (Lu et al., 2023). Moreover, $50\%$ of its predictions have errors exceeding $100\%$, highlighting its limitations.
Other models perform even worse than Doubao-1.5-pro. Among the open-source models, Qwen2.5-VL-72B performs best, with $50\%$ of its predictions having errors below $100\%$. However, it fails to achieve higher precision: only $10\%$ of its predictions have errors below $60\%$. Among the closed-source models, aside from the best-performing Doubao-1.5-pro, the remaining methods achieve similar error distributions, indicating comparable performance levels.
Model performance does not appear to be strongly influenced by whether the model is open-source or closed-source, or by its parameter count. For instance, increasing model size does not necessarily lead to better performance: the error distributions of InternVL-8B, 26B, and 78B are nearly identical. This implies that enhancing the spatially grounded reasoning ability of VLMs may require strategies beyond simply increasing model size.
Additionally, SpaceQwen-3B (Chen et al., 2024a), a model fine-tuned for spatial intelligence, is particularly noteworthy. Despite its small size, it shows competitive performance, with $25\%$ of its predictions having errors below $80\%$. However, its performance remains unsatisfactory. This may be due to its small size and limited fine-tuning, which focuses primarily on spatial perception rather than comprehensive spatial reasoning.
In summary, existing VLMs fail to solve the problems in the SIRI-Bench dataset effectively. The challenges may stem from two factors: (1) these models struggle to estimate the dimensions of the geometric entities, since the specific dimensions are not directly provided in the questions; (2) these models exhibit deficiencies in mathematical reasoning and calculation. Overall, the poor performance across models underscores the significant limitations of current VLMs in spatial reasoning and highlights the value of SIRI-Bench in identifying these gaps.
# 4.3 Textual Rep. vs. 3D Spatial Rep.
As shown in fig. 4, the Doubao-1.5-pro-Textual-Rep. method significantly outperforms the other VLM methods. This method is an LLM that uses textual representations as input and accesses full mathematical conditions through text.
To further investigate this phenomenon, we compare three pairs of sibling models, each consisting of an LLM and a VLM with nearly identical model properties. The LLMs use text as input, extracting key conditions directly from the text (i.e., textual representation), while the VLMs use video as input and must capture problem-solving conditions from the 3D scene depicted in the video (i.e., 3D spatial representation).
Specifically, to maintain consistency with the 3D spatial representation, our textual representation does not use the raw math questions from the original dataset. Instead, we employ the same textual input as the 3D spatial representation, supplemented with dimensional information about the main geometric entities. This ensures that LLMs receive information nearly identical to that of VLMs, with the sole difference being that spatial information is provided in textual form. The results of this comparison are presented in fig. 5. This comparison disentangles high-level reasoning from spatial perception, allowing us to further examine the underlying reasons for VLMs' poor performance.
As seen in fig. 5, all three pairs of models exhibit the same trend: the text-input models consistently outperform their video-input counterparts. For example, the proportion of predictions with error below $20\%$ increased severalfold when switching from video input to text input, from $12.4\%$, $8.3\%$, and $7.4\%$ to $49.5\%$, $18.4\%$, and $27.4\%$, respectively. Most notably, Doubao-1.5-pro-Textual-Rep. achieves the most accurate results among all evaluated models, with about $50\%$ of its predictions having errors below $20\%$ and over $70\%$ below $80\%$. This indicates that while current state-of-the-art models can effectively handle complex reasoning problems posed in language, they fail to do the same at the visual level: when solving complex reasoning problems from visual input, they are unable to effectively extract information, especially spatial information, from the visual modality.
The strong performance of Doubao-1.5-pro-Textual-Rep. also validates the effectiveness of SIRI-Bench, showing that most questions in SIRI-Bench can be correctly solved once full spatial conditions are provided.
Overall, these results underscore the motivation of this paper and highlight the significance of the SIRI-Bench dataset. By challenging VLMs with complex problems that require spatial intelligence, SIRI-Bench reveals the limitations of VLMs in utilizing spatial information for reasoning.
# 4.4 Comparison with Human Performance
In this part, we investigate how far VLMs are from matching human performance. We randomly select seven samples from the SIRI-Bench dataset and give them to human participants with undergraduate or graduate-level backgrounds. Participants are asked to solve these problems as accurately as possible without any additional information. The average error distribution of human participants is shown in fig. 6, alongside four state-of-the-art VLMs.
As shown in fig. 6, there is a significant gap between the performance of VLMs and human participants. Participants are able to solve over $30\%$ of the problems with errors within $20\%$. In contrast, none of the four VLMs achieves this level of accuracy on any of the problems. When allowing for a $60\%$ error margin, participants successfully solve approximately $70\%$ of the problems, a rate far exceeding that of the VLMs.
Notably, some problems are difficult even for human participants, with errors exceeding $100 \%$ . This may be due to deviations in human distance estimation, which could accumulate during calculations.
Figure 9: Error Distribution by Difficulty & Type. The error distribution of VLMs on our benchmark is shown with respect to question difficulty (top) and question type (bottom). Four error intervals ($0{-}20\%$, $20{-}40\%$, $40{-}60\%$, $60{-}80\%$) are displayed. The results indicate a relatively balanced failure rate across different difficulty levels and question types.
Overall, the performance of existing state-of-the-art VLMs falls considerably short of human capability, highlighting the substantial limitations in spatial perception and reasoning ability of current VLMs.
# 4.5 Error Distribution by Difficulty & Type
We assessed the error distributions of VLMs on our complex spatial reasoning benchmark along two dimensions (question difficulty and question type), using four error-rate intervals. Figure 9 (top) illustrates the proportion of items in each interval for three difficulty levels, while figure 9 (bottom) presents analogous data for six question types. The distributions reveal no discernible systematic bias with respect to either difficulty or type: current VLMs exhibit a relatively uniform failure profile across both difficulty levels and question types.
# 4.6 Why QVQ Performs So Poorly
As illustrated in Figure 1 of our main paper, QVQ demonstrates the lowest performance among all evaluated VLMs on the SIRI-Bench benchmark. Given QVQ’s design as a multimodal reasoning model with advanced visual understanding capabilities, this underperformance warrants further investigation. Upon examining its reasoning processes, we observe that QVQ often engages in extended chains of thought, spanning up to 8K or even 16K tokens, without arriving at a definitive conclusion. This behavior suggests that QVQ becomes entangled in recursive reasoning loops, failing to produce final answers. Consequently, its responses are categorized as having errors exceeding $200 \%$ , contributing to its poor performance metrics.
# 4.7 Case Study
Figure 8 presents a case study showing the full reasoning process of GPT-4o, with key errors highlighted. Red, blue, and orange marks denote spatial interpretation errors, spatial perception errors, and the final incorrect result, respectively. This case study reveals potential reasons for GPT-4o’s poor performance on SIRI-Bench.
This example involves two sub-questions. The first asks for the area of the cross-section formed by the plane defined by points D, P, and Q intersecting a cube. GPT-4o oversimplifies the geometry by directly treating the cross-section as a triangle DPQ. However, the cross-section is actually a rectangle. Without correctly extracting spatial cues from the video, it further assumes the triangle is right-angled, which is incorrect. It also incorrectly estimates the cube’s size by assuming a unit-length edge, ultimately leading to an incorrect answer.
The second sub-question involves finding an extremum value as the point P moves within the geometry. GPT-4o fails to identify the correct limiting configuration, instead using the cube’s diagonal as a shortcut. Again, it mistakenly estimates the cube’s dimensions and arrives at an incorrect solution.
Overall, this example highlights GPT-4o's difficulty in interpreting geometric constraints and demonstrates its tendency to make unjustified assumptions when dealing with implicit visual information. Rather than leveraging the video to resolve ambiguity, GPT-4o often proceeds with incomplete reasoning, revealing a key problem-solving limitation in the visual domain.

Abstract: Large Language Models (LLMs) are experiencing rapid advancements in complex reasoning, exhibiting remarkable generalization in mathematics and programming. In contrast, while spatial intelligence is fundamental for Vision-Language Models (VLMs) in real-world interaction, the systematic evaluation of their complex reasoning ability within spatial contexts remains underexplored. To bridge this gap, we introduce SIRI-Bench, a benchmark designed to evaluate VLMs' spatial intelligence through video-based reasoning tasks. SIRI-Bench comprises nearly 1K video-question-answer triplets, where each problem is embedded in a realistic 3D scene and captured by video. By carefully designing questions and corresponding 3D scenes, our benchmark ensures that solving the questions requires both spatial comprehension for extracting information and high-level reasoning for deriving solutions, making it a challenging benchmark for evaluating VLMs. To facilitate large-scale data synthesis, we develop an Automatic Scene Creation Engine. This engine, leveraging multiple specialized LLM agents, can generate realistic 3D scenes from abstract math problems, ensuring faithfulness to the original descriptions. Experimental results reveal that state-of-the-art VLMs struggle significantly on SIRI-Bench, underscoring the challenge of spatial reasoning. We hope that our study will bring researchers' attention to spatially grounded reasoning and advance VLMs in visual problem-solving.
# 1 Introduction
Large language models (LLMs) have emerged as a core technology in artificial intelligence, with extensive applications in chatbots, content generation, code synthesis, and other domains, significantly enhancing the efficiency and user experience of human-computer interaction [5; 29; 23; 11]. However, the formidable capabilities of these models rely on massive pre-training datasets and substantial computational resources, rendering the training process fraught with challenges [45; 2]. Persistent research challenges include, but are not limited to, the efficient optimization of ultra-large-scale parameters and the trade-off between training costs and model performance.
Weight decay, one of the most widely used regularization techniques for training well-generalized deep neural networks [26; 43; 13], critically influences the convergence and performance of state-of-the-art machine learning algorithms when properly configured. Extensive prior studies [21; 33; 36] have demonstrated its pivotal role in enhancing model generalization from diverse theoretical and empirical perspectives. Recent work [20; 8] further highlights its importance in improving optimizer stability and efficacy during the training of LLMs.
The prevailing approach to weight decay assigns a globally fixed value per epoch across optimizers—including SGD [40], Adam [18], and their variants [48; 37]—where all model layers share an identical decay coefficient. However, given the scaling parameter counts and architectural complexity of modern LLMs, such a uniform weight decay scheme fails to capture their intricate structural properties, making this conventional practice increasingly suboptimal. Notably, recent work has begun investigating dynamic weight decay adaptation [16; 31; 12; 43] to address this limitation. [12] observes that fixed-hyperparameter weight decay fails to balance robustness and accuracy in adversarial training, causing robust overfitting. They propose Adaptive Weight Decay (AWD) to dynamically adjust decay strength via classification and regularization loss gradients, automatically enhancing robustness and adversarial performance without extra data.
Figure 1: Module-wise Balance and AlphaDecay weight decay schedule. (a) Employing PL fitting to derive module-wise PL_Alpha_Hill values, AlphaDecay achieves module-wise balance by increasing the lower values (e.g., att.Q and att.K, more heavy-tailed) while decreasing the higher values (e.g., MLP components, less heavy-tailed). (b) Given the imbalanced module-wise PL_Alpha_Hill of LLaMa-60M, AlphaDecay assigns lower weight decay to modules with lower PL_Alpha_Hill.
Notably, prior studies on dynamic weight decay adaptation were exclusively designed for architectures like ResNet18/34/50 [14], VGG [35], and DenseNet [15], employing time-wise modulation (i.e., uniform decay values across all layers at each timestep) while maintaining layer-wise uniformity. This approach is reasonable for parameter-efficient, structurally simple models (e.g., ResNets) where inter-layer feature distinctions are less pronounced. However,
Does there exist a better weight decay configuration for LLMs?
Three considerations motivate the above research question. First, the prevailing consensus holds that certain transformer components exhibit greater functional importance than others [41; 4; 45; 27], necessitating differentiated weight decay treatment. Second, weight decay plays fundamentally distinct roles in over-trained regimes (e.g., ResNets) versus under-trained regimes (e.g., LLMs) [8]. Third, existing research demonstrates that improper weight decay configuration for LLMs may adversely affect model performance [3; 19; 43; 34; 17; 9]. Our main contributions are as follows:
$\bullet$ We identify substantial variation in the spectral properties of module-wise ESD (see figure 2), and show that these inconsistencies are a core reason for degraded model performance, as evidenced by figure 4.
$\bullet$ We propose a module-wise weight decay scheduling strategy, AlphaDecay, to ensure spectral alignment across modules (see figure 1), thereby enforcing consistency in spectral properties and achieving improved training performance (see figure 3).
$\bullet$ Extensive experiments spanning models from 60M to 1B parameters show that the proposed approach, AlphaDecay, consistently outperforms the Uniform baseline as well as adaptive methods such as AWD [12] and AdaDecay [31] (see table 2). These results highlight the critical role of module-wise balance in achieving state-of-the-art performance in LLMs.
Overall, our research provides a previously unrecognized perspective on optimizer design, revealing the critical yet overlooked role of module-wise weight decay in LLM training. This insight can be readily applied to state-of-the-art optimizers and training methods, effectively enhancing their performance without structural modifications.
# 2 Related Work
Weight decay in LLM training. Weight decay is a widely adopted technique for training deep networks, spanning applications from image classification to LLMs [21]. In the context of GPT-3 training, [5] recommended incorporating weight decay primarily for its mild regularization benefits. [20] showed that weight decay promotes optimizer equilibrium in scale-invariant systems. Recent studies have provided deeper insights into weight decay's role in LLM training. [39; 8; 38] challenged the conventional view of weight decay's generalization benefits for LLMs, instead highlighting its critical function in reducing training loss and enhancing stability during under-training through the lens of the effective learning rate. Building on these findings, [3; 19] established a connection between $l_2$ regularization and spectral norms, discovering that weight decay induces low-rank attention layers. Their work further showed that employing different weight decay values for attention and MLP modules, carefully tuned via grid search, can significantly improve training outcomes. Our work presents the first formal analysis of non-uniform module-wise weight decay in LLM training, demonstrating its effectiveness through comprehensive empirical validation.
Dynamic weight decay. While uniform weight decay is commonly used for model training, a line of work employs gradient norms to adaptively determine weight decay settings. [16] analyzed gradient descent with weight decay, finding that backpropagated gradients scale with upstream weights while weight decay scales with each layer's own weights. This mismatch in scaling causes layer-wise overfitting or underfitting, leading them to propose using the gradient-to-decay magnitude ratio as a layer-wise coefficient. [31] enhanced this approach by normalizing gradients and applying a scaled sigmoid to compute the coefficient. Similarly, [12] used the ratio of gradient norms to parameter norms. [43] showed weight decay amplifies late-stage gradient norms, harming convergence. Their solution, AdamS, penalizes large gradients and outperforms both Adam and AdamW. Another line of research [3; 10; 42] revealed that weight decay induces low-rank layer structures. [19] further showed that applying distinct weight decay values to attention and MLP modules, meticulously tuned via grid search, can substantially enhance training outcomes. Building upon these foundations, our work advances this direction by introducing the first weight decay scheduling framework for LLMs.
Heavy-tail self-regularization. HT-SR theory examines the ESD of weight matrices and identifies its relationship with training quality based on principles from statistical physics and random matrix theory [7]. HT-SR theory posits that well-trained neural networks exhibit strong correlations in their weights, manifesting as heavy-tailed structures in the ESD of each layer's weight matrices [28; 30]. Recently, HT-SR has been applied to model selection [28; 30; 44], module-wise adaptive training [47], and LLM pruning [27], demonstrating its efficacy in estimating model and layer quality. However, no prior work has explored HT-SR theory in the context of weight decay configuration. Our work draws inspiration from HT-SR theory and introduces a novel technique that leverages ESD structures to guide weight decay settings.
Figure 2: Normalized singular value spectra of the weight matrices for each module type (att.q, att.k, att.v, att.o, mlp.gate, mlp.up, mlp.down) across all transformer layers of the pretrained LLaMa-2-13b-hf model.
# 3 Methodology
In this section, we first present the rationale motivating our study, emphasizing the heavy-tailed singular value spectra exhibited by different modules of LLMs. We then revisit HT-SR theory and introduce key HT-SR metrics that support our analysis. Finally, we examine the AlphaDecay algorithm, which leverages “shape metrics” derived from HT-SR theory and exhibits significant improvements in LLM pretraining tasks.
# 3.1 Rationale
Different modules in LLMs exhibit diverse spectral properties, particularly in the distribution of their singular values. Figure 2 visualizes the normalized singular value spectra of the weight matrices for each module type (att.q, att.k, att.v, att.o, mlp.gate, mlp.up, mlp.down) across all 40 transformer layers in the pretrained LLaMa-2-13b-hf model. Notably, substantial variability is observed in the heavy-tailedness of the singular value distributions: the attention-related modules (att.q and att.k) consistently show heavier tails, while the MLP modules (mlp.gate, mlp.up, mlp.down) exhibit lighter tails (see figure 3).
This phenomenon has been extensively studied within heavy-tailed random matrix theory. Specifically, heavier tails in the singular value spectra reflect greater anisotropy, with much of the module’s representational power concentrated in a few leading principal components—a feature especially pronounced in attention-related modules (att.q, att.k). In contrast, the lighter-tailed spectra observed in MLP modules (mlp.gate, mlp.up, mlp.down) exhibit a more uniform distribution across components. These observations suggest that different modules may benefit from tailored regularization strengths to achieve optimal performance, as attention modules could be more disrupted by excessive regularization, while MLP modules may tolerate stronger regularization.
# 3.2 HT-SR Theory
The HT-SR theory provides a principled framework for analyzing the empirical spectral distribution (ESD) of neural network weight matrices. Empirical evidence suggests that well-trained models exhibit more pronounced heavy-tailed ESDs, which reflect higher training quality. Building on this theoretical foundation, our method leverages the HT-SR metric to quantify spectral tail heaviness, assigning lower weight decay to heavy-tailed modules (e.g., att.q, att.k) and higher weight decay to less heavy-tailed ones (e.g., MLP components), thereby aligning with spectral characteristics to potentially improve generalization and model performance (see figure 1). The degree of heavy-tailedness is quantitatively assessed by fitting a power law (PL) to the ESD, using the resulting PL exponent $(\alpha)$ as a metric.
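To make the assignment concrete, here is a minimal sketch of mapping module-wise PL_Alpha_Hill values to weight decay coefficients, so that heavier-tailed modules (lower alpha) receive weaker decay. The linear min-max mapping and the `wd_base`/`spread` parameters are our own illustrative assumptions, not the paper's exact AlphaDecay schedule:

```python
def module_weight_decays(alphas, wd_base=0.1, spread=0.5):
    """Map module-wise PL_Alpha_Hill values to weight decay coefficients.

    Modules with heavier tails (lower alpha) get weaker decay; lighter
    tails (higher alpha) get stronger decay. Decays span
    [wd_base * (1 - spread), wd_base * (1 + spread)] via a linear
    min-max rescaling of the alpha values.
    """
    lo, hi = min(alphas.values()), max(alphas.values())
    span = (hi - lo) or 1.0  # guard against all-equal alphas
    return {
        name: wd_base * (1.0 - spread + 2.0 * spread * (a - lo) / span)
        for name, a in alphas.items()
    }
```

With `wd_base=0.1` and `spread=0.5`, the most heavy-tailed module gets a decay of 0.05 and the least heavy-tailed gets 0.15, preserving the ordering described above.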
Figure 3: Comparison of ESD distributions across modules of LLaMa-135M under different training methods (AlphaDecay vs. Uniform). Attention-related modules (e.g., att.q, att.k) exhibit notably heavier spectral tails in contrast to MLP-associated modules. Our method systematically balances the heavy-tailed properties across modules by appropriately configuring module-wise weight decay, thereby enhancing overall model performance.
Given a network with $N$ modules and weight matrices $\{ \mathbf { W } _ { l } \} _ { l = 1 } ^ { N }$ of shape $n \times m$ $( n \leq m )$, we compute the ESD by obtaining the eigenvalues of the correlation matrix $\mathbf { X } _ { l } = \mathbf { W } _ { l } ^ { \top } \mathbf { W } _ { l }$ for each module. The power law fit for the ESD takes the form:
$$
p ( \lambda ) \propto \lambda ^ { - \alpha } , \quad \lambda _ { \min } < \lambda < \lambda _ { \max }
$$
where $p ( \lambda )$ denotes the density of eigenvalues $\lambda$ within the specified range. The PL exponent, $\alpha$ , serves as a proxy for the degree of heavy-tailedness.
To estimate $\alpha$ , we use the Hill estimator [47; 25]. For a given module’s eigenvalues $\{ \lambda _ { i } \} _ { i = 1 } ^ { n }$ (sorted in ascending order), the Hill estimator is given by:
$$
\mathtt { PL \_ Alpha \_ Hill } = 1 + \frac { k } { \sum _ { i = 1 } ^ { k } \ln \frac { \lambda _ { n - i + 1 } } { \lambda _ { n - k } } }
$$
where $k$ controls the lower cutoff for PL fitting. In our experiments, we fix $\textstyle k = { \frac { n } { 2 } }$ , i.e., we estimate the slope using the largest half of the eigenvalues.
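As a concrete illustration, the Hill estimate in (2) with $k = n/2$ can be computed from a weight matrix in a few lines of NumPy. This is a minimal sketch of the estimator, not the authors' released implementation:

```python
import numpy as np

def pl_alpha_hill(W, k=None):
    """Hill estimate of the power-law exponent for the ESD of W^T W.

    The eigenvalues of the correlation matrix X = W^T W are the squared
    singular values of W. They are sorted in ascending order and,
    following the paper's setting, the largest half (k = n/2) is used.
    """
    lam = np.sort(np.linalg.svd(W, compute_uv=False) ** 2)
    n = lam.size
    if k is None:
        k = n // 2
    tail = lam[n - k:]       # the k largest eigenvalues
    cutoff = lam[n - k - 1]  # lower cutoff lambda_{n-k}
    return 1.0 + k / np.sum(np.log(tail / cutoff))
```

As a sanity check, on eigenvalues drawn from a pure power law with density exponent $\alpha = 3$, the estimate recovers a value near 3; heavier tails yield smaller estimates.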
PL_Alpha_Hill is a key spectral descriptor for analyzing model performance. Related works [47; 25] suggest that lower PL_Alpha_Hill values indicate "overtrained" layers (compared to other layers in the model), while higher values indicate "undertrained" layers. An important conclusion is that a more uniform distribution of PL_Alpha_Hill across layers reflects more balanced training quality, leading to better overall model quality. While these findings highlight the importance of layer-wise training balance, our work emphasizes a complementary perspective:
Does module-wise balance matter for model performance?
We empirically demonstrate that promoting uniformity in PL_Alpha_Hill across modules (e.g., attention and MLP components) can further enhance overall model quality (see figure 3).
# 3.3 AlphaDecay
Building on the observed spectral diversity across modules, we introduce AlphaDecay, a simple yet effective module-wise weight decay scheduling algorithm. AlphaDecay first calculates the PL_Alpha_Hill values for all modules, and then assigns larger weight decay to modules with higher
Figure 4: Comparison of perplexity and module-wise PL_Alpha_Hill values of LLaMa-135M under varying weight decay settings. For each group, att.q/k shows the mean PL_Alpha_Hill of att.q and att.k; att.v/o shows the mean for att.v and att.o; mlp is the mean of mlp.gate, mlp.up, and mlp.down. Shaded areas indicate the range between the maximum and minimum values within each group.
PL_Alpha_Hill values, while assigning smaller weight decay to those with lower PL_Alpha_Hill values. This strategy is designed to promote module-wise PL_Alpha_Hill balance, thus leading to better overall model performance. We provide the details of AlphaDecay in Algorithm 1. The assignment function is given by:
$$
f _ { t } ( i ) = \eta \cdot \left( \frac { \alpha _ { t } ^ { i } - \alpha _ { t } ^ { \min } } { \alpha _ { t } ^ { \max } - \alpha _ { t } ^ { \min } } ( s _ { 2 } - s _ { 1 } ) + s _ { 1 } \right)
$$
where $\eta$ is the initial weight decay, and $( s _ { 1 } , s _ { 2 } )$ define the range of scaling ratios applied to $\eta$. $\alpha _ { t } ^ { i }$ is the PL_Alpha_Hill value of module $i$ at step $t$, while $\alpha _ { t } ^ { \min }$ and $\alpha _ { t } ^ { \max }$ are the minimum and maximum PL_Alpha_Hill values among all modules at step $t$. Formula (3) guarantees that the adjusted weight decay, $f _ { t } ( i )$, remains within $\left[ s _ { 1 } \eta , s _ { 2 } \eta \right]$ as a scaled variant of $\eta$.
We compare AlphaDecay with the Uniform baseline under varying weight decay settings, and present the resulting perplexity and module-wise PL_Alpha_Hill values in Figure 4. Notably, AlphaDecay assigns module-wise weight decay values in accordance with the imbalance observed in the module-wise PL_Alpha_Hill metrics (see Figure 1), and reallocates these weight decays every 500 update steps. This dynamic assignment adaptively moderates the module-wise PL_Alpha_Hill imbalance present in the Uniform baseline by decreasing the elevated PL_Alpha_Hill values in MLP modules and increasing the lower values in attention-related modules (i.e., att.v/o and att.q/k). As a result, our method achieves consistently lower and more stable perplexity across different weight decay configurations, thereby improving model robustness and overall performance.
# Algorithm 1: AlphaDecay
Require: initial weight decay $\eta$, number of training steps $T$, interval $\tilde { t }$ of applying AlphaDecay, minimum and maximum scaling ratios $s _ { 1 } , s _ { 2 }$; $\alpha _ { t } ^ { i }$ denotes the $i$-th module's PL_Alpha_Hill at update step $t$.

for $t \gets 0$ to $T$ do
&nbsp;&nbsp;&nbsp;&nbsp;if $\mathrm { mod } ( t , \tilde { t } ) = 0$ then
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Compute $\alpha _ { t } ^ { i }$ for all modules using the Hill estimator (2);
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Use all $\alpha _ { t } ^ { i }$ with $f _ { t }$ in (3) to assign module-wise weight decay within $[ s _ { 1 } \eta , s _ { 2 } \eta ]$;
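To make the inner step of Algorithm 1 concrete, the linear assignment of Eq. (3) can be sketched in a few lines of Python. The module names, the dictionary interface, and the guard against identical alpha values are illustrative assumptions, not the authors' released code:

```python
def assign_weight_decay(alphas, eta, s1, s2):
    """Module-wise weight decay via the linear assignment of Eq. (3).

    `alphas` maps module names to PL_Alpha_Hill values. Modules with
    the smallest alpha (heaviest tails) receive decay s1 * eta, modules
    with the largest alpha receive s2 * eta, and the rest interpolate
    linearly, so every value lands in [s1 * eta, s2 * eta].
    """
    a_min, a_max = min(alphas.values()), max(alphas.values())
    span = (a_max - a_min) or 1.0  # guard: all alphas identical
    return {
        name: eta * ((a - a_min) / span * (s2 - s1) + s1)
        for name, a in alphas.items()
    }
```

In training, this function would be re-evaluated every $\tilde{t} = 500$ update steps and the returned values applied as per-module decay coefficients in the optimizer.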
# 4 Empirical results
In this section, we begin by presenting the complete experimental setup (Section 4.1), followed by a comparison between AlphaDecay and several baselines (Section 4.2). Finally, we analyze the impact
Table 1: Hyperparameters used in pre-training experiments.
Table 2: (Main result). Comparison with various weight decay scheduling strategies on pre-training various sizes of LLaMa models on C4 dataset. Validation perplexity $( \downarrow )$ is reported. All baselines are carefully tuned. $\mathrm { \Delta } \cdot \mathrm { W D = 0 } ^ { \cdot }$ indicates that weight decay is disabled during model training.
of weight decay assignment functions, HT-SR module-wise metrics, PL fitting methods, and PL fitting time gaps through ablation studies (Section 4.3).
# 4.1 Experimental setup
Models and Datasets. We conduct a systematic evaluation of AlphaDecay across LLaMa-based architectures spanning four model scales (60M, 135M, 350M, and 1B parameters). All experiments employ the C4 dataset [32], a rigorously processed subset of Common Crawl widely adopted for language model pretraining. Our experimental design incorporates two key components: (1) a non-repeating data regime with sufficient tokens for convergence, and (2) standardized preprocessing pipelines across all model scales. This multi-scale approach facilitates systematic comparison of model behaviors across different capacity regimes, while minimizing potential confounding factors in the analysis.
Hyperparameters. The detailed hyperparameter settings for all model sizes are summarized in Table 1. All models are trained with the Adam optimizer (gradient clipping at 1.0) and a cosine learning rate schedule, with $1 0 \%$ of the training tokens used for learning rate warmup. We conduct a grid search over learning rates $\{ 0 . 0 1 , 0 . 0 0 1 , 0 . 0 0 0 1 \}$ and report the best configuration for each scale in the table. Weight decay settings and the corresponding $( s _ { 1 } , s _ { 2 } )$ parameter settings are also detailed in the table. AlphaDecay is performed every 500 update steps throughout all experiments.
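The learning rate schedule used in these experiments (linear warmup over the first 10% of steps, then cosine decay) can be sketched as follows. Decaying to zero at the end of training is an assumption for illustration, since the final learning rate is not stated in the setup:

```python
import math

def lr_at(step, total_steps, peak_lr, warmup_frac=0.1):
    """Linear warmup for the first warmup_frac of steps, then cosine decay to 0."""
    warmup = max(1, int(warmup_frac * total_steps))
    if step < warmup:
        # linear ramp from peak_lr / warmup up to peak_lr
        return peak_lr * (step + 1) / warmup
    # cosine decay over the remaining steps
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```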
# 4.2 LLM Pre-training
Table 2 presents the main results of our study, where we evaluate the effectiveness of different weight decay scheduling strategies on the pre-training of LLaMa models with varying parameter scales (60M, 135M, 350M, and 1B) on the C4 dataset. For each model size, we conduct comprehensive experiments across three commonly used weight decay values (1e-5, 5e-6, and 1e-6). Our proposed method is compared against several baselines, including the commonly used Uniform scheduling, adaptive global weight decay (AWD) [12], and adaptive per-module weight decay (Adadecay) [31]. All baseline methods are carefully tuned for a fair comparison.
Observations. ❶ Weight Decay is Beneficial for Model Performance. Comparing 'WD=0' (i.e., no weight decay) and Uniform across all model sizes, applying weight decay consistently leads to substantial reductions in validation perplexity. This provides empirical support for the importance and effectiveness of weight decay in LLM pre-training. ❷ Superior and Consistent Gains Across All Weight Decay Settings. AlphaDecay consistently yields the lowest validation perplexity across all evaluated weight decay settings (1e-5, 5e-6, 1e-6) and model sizes, surpassing both the Uniform baseline and the adaptive weight decay methods (AWD and Adadecay). This consistent superiority across various regularization strengths demonstrates the robustness of our approach and underscores its potential applicability in LLM pre-training. ❸ Scalability to Larger Models. The performance improvements achieved by AlphaDecay are consistently observed from the smallest (60M) to the largest (1B) models, indicating the scalability and generality of our approach.
Figure 5: (Varying weight decay assignment functions). Results of using different weight decay assignment functions under different weight decay settings. All experiments are conducted on LLaMa-60M. The value on top of each bar indicates the difference from the leftmost bar in each plot; the same convention is used in Figures 6, 7, and 8.
Figure 6: (Varying HT-SR metrics). Comparing PL_Alpha_Hill with multiple HT-SR metrics under different weight decay settings. All experiments are conducted on LLaMa-135M.
Figure 7: (Varying PL fitting methods). Comparison of various PL fitting methods. The bar plot and left y-axis represent perplexity (lower is better), while the line plot and right y-axis indicate the time taken for one AlphaDecay update (in seconds, lower is better). The computation times are averaged over all PL fitting operations throughout the model training process. All experiments are conducted using LLaMa-135M.
Furthermore, our experiments reveal that existing adaptive weight decay methods, originally designed for architectures without attention components, such as AWD and Adadecay, do not yield optimal results for LLMs. This may be attributed to their lack of consideration for the distinct characteristics and optimization requirements of attention and MLP modules within transformer architectures. In contrast, our approach is, to the best of our knowledge, the first to demonstrate that a tailored weight decay scheduling strategy can consistently enhance LLM training by explicitly accounting for the heterogeneous characteristics of different modules.
# 4.3 Analysis
Varying Weight Decay assignment functions. We examine the performance of PL_Alpha_Hill with different weight decay assignment functions, which determine the allocation ratios of weight decay across different modules. Figure 5 presents the results obtained by different assignment
Figure 8: (Varying PL fitting gaps). Perplexity and total time cost of AlphaDecay with different weight decay update gaps (1, 50, 100, and 500 steps) compared to the Uniform baseline, under weight decay settings 1e-5, 5e-6, and 1e-6.
Table 3: (AdamW.) Comparison of various weight decay scheduling strategies using the AdamW optimizer for pre-training LLaMa-60M and LLaMa-130M models under different weight decay values. Validation perplexity $( \downarrow )$ on the C4 dataset is reported. All baselines are carefully tuned. 'WD=0' indicates that weight decay is disabled during model training.
functions: Uniform, Linear, Sqrt, Log2, and Sigmoid-like. Among these, Linear achieves the best results across all weight decay settings, showing a notable advantage over other methods.
Varying HT-SR metrics. To investigate the impact of various HT-SR metrics on regulating weight decay during model training, we conducted ablation studies comparing these metrics. While prior work has primarily utilized GradNorm [31; 20; 43] and FrobeniusNorm [12; 16] as indicators for adjusting weight decay, our study further evaluates additional metrics, including PL_Alpha_Hill and SpectralNorm, under the same experimental settings. Results in Figure 6 show that most HT-SR metrics outperform the uniform baseline, while PL_Alpha_Hill achieves the lowest perplexity among all evaluated methods.
Varying PL fitting methods. In our proposed framework, the HT-SR metric PL_Alpha_Hill is employed to guide the projection-based adjustment of weight decay during training. Since PL_Alpha_Hill is derived through PL fitting, and the choice of fitting method can influence both computational efficiency and the final training effectiveness, we conduct an ablation study to systematically assess its impact. Figure 7 presents a comparative analysis of three PL fitting methods, Goodness-of-fit [1; 30; 6], Fix-finger [44], and Median [47], across multiple weight decay values. Across all settings, Median not only achieves the best training performance but also notably decreases computation time compared to the other approaches, making it the preferred choice for PL fitting within our method.
Varying PL fitting gaps. To further analyze the stability of our proposed approach, we investigate the impact of varying update gaps for weight decay adjustments during training. It is computationally inefficient to update weight decay at every training step. Thus, we explore the performance of our method by updating weight decay at different training step gaps: 1, 50, 100, and 500 steps. Figure 8 shows that our approach achieves stability across all gap settings. Notably, across all weight decay settings, using training step intervals from 1 to 500 consistently outperforms the Uniform setting, including when the interval is as large as 500 training steps. This demonstrates the robustness of our method to update frequency. Therefore, we select a gap of 500 in all experiments because it provides substantial computational savings while maintaining stable and competitive model performance across various settings.
# 4.4 LLM Pre-training with AdamW
Table 3 provides a comparison of several weight decay scheduling strategies for pre-training LLaMa-60M and LLaMa-130M models with the AdamW optimizer. The results clearly demonstrate the effectiveness of applying weight decay, as all scheduling strategies outperform the baseline with no weight decay ('WD=0') in terms of validation perplexity.
AlphaDecay consistently outperforms other weight decay scheduling strategies across different model sizes and hyperparameter settings, demonstrating superior regularization and generalization when training with AdamW. These results highlight the robustness and effectiveness of AlphaDecay, supporting its adoption for optimizing large-scale transformer-based language models.
# 4.5 Dependent t-test for paired samples
Table 4 provides a comparison of several weight decay scheduling strategies using the Adam optimizer, evaluated through repeated experiments with different random seeds.
Table 4: (Dependent t-test with Adam.) Each method (Uniform, AWD, AdaDecay, and AlphaDecay) is evaluated by conducting six repeated experiments with random seeds $\{ 5 , 6 , 7 , 8 , 9 , 1 0 \}$ . Validation perplexity is reported as mean $\pm$ standard deviation. For each weight decay setting, a dependent t-test for paired samples is performed, comparing AlphaDecay against Uniform, AWD, and AdaDecay, respectively. The resulting p-values are presented alongside perplexity scores.
The results demonstrate the benefit of applying weight decay for improved validation perplexity, with AlphaDecay consistently exhibiting superior performance and stability across all tested settings. The dependent t-test results further substantiate these findings, with statistically significant p-values supporting the advantage of AlphaDecay over Uniform, AWD, and AdaDecay in nearly all cases.
# 4.6 Evaluation with vision transformers
To evaluate the impact of different weight decay scheduling strategies on vision models, we train ViT-tiny on ImageNet-1K for 120 epochs using the Adam optimizer and ConvNeXt training configurations [24], applying each scheduling method as described in Table 5.
Table 5: (ViT Evaluation.) Comparison of various weight decay scheduling strategies using the Adam optimizer for training ViT-tiny on ImageNet-1K under different weight decay values. Top-1 accuracy $( \% )$ is reported on the ImageNet-1K validation set. All baselines are carefully tuned.
The results in Table 5 show that AlphaDecay consistently outperforms other strategies, with more pronounced accuracy improvements over Uniform and AWD. These findings indicate that the benefits of dynamic weight decay scheduling, particularly AlphaDecay, generalize well to vision transformer models.

# Abstract

Weight decay is a standard regularization technique for training large language models (LLMs). While it is common to assign a uniform decay rate to every layer, this approach overlooks the structural diversity of LLMs and the varying spectral properties across modules. In this paper, we introduce AlphaDecay, a simple yet effective method that adaptively assigns different weight decay strengths to each module of an LLM. Our approach is guided by Heavy-Tailed Self-Regularization (HT-SR) theory, which analyzes the empirical spectral density (ESD) of weight correlation matrices to quantify "heavy-tailedness." Modules exhibiting more pronounced heavy-tailed ESDs, reflecting stronger feature learning, are assigned weaker decay, while modules with lighter-tailed spectra receive stronger decay. Our method leverages tailored weight decay assignments to balance the module-wise differences in spectral properties, leading to improved performance. Extensive pre-training tasks with various model sizes from 60M to 1B demonstrate that AlphaDecay achieves better perplexity and generalization than conventional uniform decay and other adaptive decay baselines.
# 1 Introduction
Writing code is mainly a matter of rewriting code: debugging, refactoring, optimizing, and other activities within the software engineering lifecycle. But poor rewrites incur technical debt, with such debt costing up to \$2 trillion annually [1]. This problem will likely worsen as language models become increasingly responsible for generating code, because they excel at solving isolated programming problems, but their context length demands a myopic view of the codebase. It is therefore valuable to understand not just the ability of language models to solve programming problems, but also their ability to rewrite and refactor code in ways that support growth and reuse.
Effective code refactoring at scale is a design problem. When refactoring codebases, developers must navigate design decisions around concerns such as generality, re-usability, and maintainability. A classic example illustrates this design challenge: Human programmers often create overly-specialized, redundant solutions to similar problems and would benefit from redesigning specialized solutions into a shared library. This consolidation requires careful design decisions about the right level of abstraction — neither too specific nor too general — and appropriate interfaces that balance flexibility with usability.
Here we focus on refactoring multiple code sources into a reusable software library, and pose the following question: To what extent can code agents address this problem, both within human-written codebases, and also in language model-generated code? To answer that question, we develop a new method and a benchmark. This goes beyond past work [2, 3, 4, 5, 6, 7, 8] in library learning that synthesized subroutines across small programs in, e.g., $\lambda$-calculus, instead tackling the more naturalistic problem of redesigning large bodies of code written in contemporary high-level languages, such as Python, producing classes, methods, and helper functions in the style of a human-written library. We propose a simple method, LIBRARIAN (Figure 1), which samples possible code rewrites and then reranks those samples based on criteria designed to capture what it means to have a good refactoring. To generate potential rewrites, we develop methods for clustering pieces of code together that share common structure, so that a succinct prompt can rewrite them jointly into their refactored form.
Figure 1: Overview of the problem that we study and the general structure of its solutions. Given a collection of different code sources, where a source is either a program or a repository, we design a modular and reusable library. To do this we cluster related programs into tuples (left), sample different rewrites using a language model (right), and select the rewrites which optimize various criteria, such as compression, while validating candidate rewrites using test cases (pass rate).
To evaluate our method and systematically assess the capability of current systems to perform such design-intensive refactorings, we introduce a new benchmark, MINICODE. Existing benchmarks such as SWE-Bench [9], Commit0 [10], and RefactorBench [11] primarily focus on functional correctness and do not directly evaluate an agent’s ability to perform large-scale code consolidation or library design. MINICODE fills this gap by explicitly challenging code agents to minimize and refactor multiple specialized code sources into a unified library. MINICODE requires agents to design libraries that expose a general interface shared across multiple use cases in two domains: competition coding and repository-level Python applications. This involves optimizing not only for functional correctness but also for broader software engineering objectives such as reusability and maintainability, making the task substantially more open-ended than prior benchmarks.
Our results show that state-of-the-art code agents, based on Claude 3.7 Sonnet and o4-mini, struggle to jointly preserve correctness and improve reusability across both domains of MINICODE. In the competition coding domain, our method LIBRARIAN improves refactoring quality by $1 . 8 9 \mathrm { x }$ while also enhancing correctness. However, on the repository-level refactoring, even the strongest agents fail to produce high-quality refactorings, highlighting a substantial gap between current capabilities and the demands of design-oriented code rewriting. Addressing this challenge remains an open and important direction for future research.
# 2 Related work
Repo-level language model coding agents. Recent work has explored the application of language models to repository-level software engineering tasks. Existing benchmarks include SWE-bench [9], which evaluates models on their ability to resolve real-life GitHub issues, and Commit-0 [10], which challenges AI systems to reconstruct entire code repositories from scratch. Such benchmarks primarily evaluate functional correctness via unit tests, without assessing the quality or maintainability of the resulting codebase. RefactorBench [11] takes a step in this direction by benchmarking the ability to follow specific refactoring instructions. Our work differs in that we expect models to perform a more open-ended task: Redesigning code to be more modular and compact by discovering and drawing out reused abstractions.
Library Learning. Systems which perform library learning research discover shared abstractions across a large number of small programs, which they use to automatically define new subroutines. Systems such as DreamCoder [4], Trove [12], LiLo [7], and REGAL [3] automatically construct such libraries with the goal of making future program synthesis tasks easier to solve, once the learned library is in hand. Our work is closest to REGAL [3], which clusters related code and refactors using language models. However, existing library learning approaches have primarily been demonstrated in small-scale, constrained domains, limiting their applicability to typical software engineering tasks, such as consolidating multiple repositories into cohesive libraries. By framing library learning within the context of realistic, large-scale code repository development, we expand the relevance of library learning to everyday software engineering practice.
Program optimization. While our goal is to optimize the quality of libraries, other works focus on improving code speed through correctness-preserving transformations [13, 14, 15]. Both forms of program optimization, compression and speed, require more exploration than only optimizing for correctness, as there does not exist a ground-truth answer. Prior work on program optimization benchmarks studies code at the file level. We propose a benchmark that transforms programs at a larger scale, across multiple code repositories.
# 3 Problem Statement
In this section, we propose a refactoring task: Given multiple code sources that contain problem-specific implementations, the goal is to create a cohesive library that captures shared abstractions. This library must reduce the total code size while supporting all original use cases, potentially opening up new use cases as well by mining and formalizing latent shared abstractions. This is accomplished by searching for refactorings that are both correct and simple. Correctness is straightforward to define as the fraction of unit tests passed, but simplicity is more elusive.
One potential measure of simplicity is counting the total number of tokens in the proposed library and refactored code. However, just minimizing program size has obvious failure modes: code should also be natural, elegant, and extensible, which can be in tension with merely finding the shortest program.3 To address these concerns, we follow prior work in program synthesis [16, 6, 17, 18] and define simplicity as the minimum description length (MDL), or negative log probability under a reference distribution.
Formally, we are given a collection of code sources $\{ \rho _ { n } \} _ { n = 1 } ^ { N }$, and output both a new library $\mathcal { L }$ and rewritten refactorings of the original code sources, $\{ \rho _ { n } ^ { \prime } \} _ { n = 1 } ^ { N }$. We define the pass rate $\tau ( \rho _ { n } )$ as the fraction of unit tests program $\rho _ { n }$ passes. In practice we are concerned both with the case where we are refactoring several code sources $( N > 1 )$ and with the case where there is only a single large code source to refactor $( N = 1 )$.
We optimize the following objective, which rewards refactorings that pass at least as many tests as the original program and minimize MDL:
$$
\begin{array} { r } { \ell \left( \mathcal { L } , \{ \rho _ { n } ^ { \prime } \} \right) = \left\{ \begin{array} { l l } { - \log p _ { \mathrm { L M } } ( \mathcal { L } ) + \sum _ { n } - \log p _ { \mathrm { L M } } ( \rho _ { n } ^ { \prime } \mid \mathcal { L } ) } & { \forall \rho _ { n } , \tau ( \rho _ { n } ) \leq \tau ( \rho _ { n } ^ { \prime } ) } \\ { \infty } & { \mathrm { o t h e r w i s e } } \end{array} \right. } \end{array}
$$
where $p _ { \mathrm { L M } } ( \rho _ { n } ^ { \prime } | \mathcal { L } )$ is the probability of the suffix $\rho _ { n } ^ { \prime }$ given the prefix $\mathcal { L }$ under a language model, effectively concatenating the library and the program into one prompt, but only counting the perplexity of the later program tokens.
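This suffix-only scoring can be sketched by concatenating library and program and summing log-probabilities over the program tokens alone. The sketch below abstracts the language model behind a `token_logprob` callback, which is an assumption for illustration rather than the paper's evaluation harness:

```python
import math

def mdl_score(library_tokens, program_tokens, token_logprob):
    """Description length (in nats) of a program conditioned on a library.

    The library and program form one concatenated sequence, but only the
    program tokens' conditional log-probabilities are accumulated.
    `token_logprob(prefix, tok)` is assumed to return log p(tok | prefix)
    under some language model.
    """
    prefix = list(library_tokens)
    nll = 0.0
    for tok in program_tokens:
        nll -= token_logprob(prefix, tok)
        prefix.append(tok)  # the scored token joins the context
    return nll
```

With a real model, `token_logprob` would be backed by a single forward pass over the concatenated sequence, masking out the library-token positions from the loss.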
# 4 LIBRARIAN: Refactoring Code to Create Libraries
This section details our method to compress collections of code sources into libraries, while migrating the code sources to use these shared building blocks. Figure 1 illustrates our method, LIBRARIAN.
LIBRARIAN follows a simple sample-and-rerank framework to maximize our refactoring objective described in Section 3. It maintains and grows a library of useful functions as part of this objective.
Concretely, our framework follows:
$$
\mathcal { L } ^ { \star } , \{ \rho _ { n } ^ { \star } \} = \underset { \mathcal { L } , \{ \rho _ { n } ^ { \prime } \} \in \mathrm { S A M P L E } ( \{ \rho _ { n } \} ) } { \arg \operatorname* { m i n } } \ell \left( \mathcal { L } , \{ \rho _ { n } ^ { \prime } \} \right) .
$$
# 4.1 Sample with clustering
Meaningful abstractions exist primarily among programs that share some functionality or underlying structure. We perform clustering on the input programs to form groups of programs that likely share structures which can be abstracted into general library functions. Most modern language models cannot be prompted with the entire collection of input programs: even long-context models cannot process the entirety of, e.g., the Linux kernel, and even if they could, it is not clear that such a strategy is the most efficient way of focusing the language model's attention.
We consider clustering algorithms for discovering small groups of related code; we call these tuples. This extends REGAL [3], which clusters programs solving similar problems by assuming each program is paired with a natural language description of the problem it solves, and clustering embeddings of those descriptions. But programs solving similar problems do not necessarily have similar structure. We therefore instead first summarize the code source itself by prompting a language model, then cluster based on the similarity of these new descriptions.
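A minimal stand-in for this clustering step uses bag-of-words cosine similarity over the generated summaries in place of learned embeddings; the similarity threshold and greedy assignment are illustrative choices, not the paper's algorithm:

```python
from collections import Counter
import math

def _cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_summaries(summaries, threshold=0.4):
    """Greedily group LM-written code summaries into tuples of related programs."""
    vecs = [Counter(s.lower().split()) for s in summaries]
    clusters = []  # each cluster is a list of summary indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if _cosine(v, vecs[c[0]]) >= threshold:  # compare to cluster seed
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

In the actual pipeline, the summaries themselves come from prompting a language model to describe each code source, and a learned embedding model would replace the bag-of-words vectors.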
Once identified, each tuple is used in two stages. The first is to retrieve relevant already-abstracted functions from the LIBRARIAN library: for a given tuple of programs, relevant functions are retrieved by prompting a language model with the entire existing library and the original input programs, then instructing it to identify which functions would be useful. The retrieved library functions and the original programs in the tuple are then provided as context to the language model, which proposes $K$ candidate refactorings under a fixed sample budget.
# 4.2 Rank with compression
Once sampled, all $K$ candidate refactorings are passed through a sort-and-filter evaluation harness that selects the candidate scoring highest on refactor quality while maintaining (or improving) test accuracy relative to the original. If no such candidate exists, the original code is preserved, maintaining existing functionality.
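The filter-then-sort selection can be written compactly. Here `mdl` and `pass_rate` stand in for the description-length objective and the unit-test harness; their exact signatures are assumptions for illustration:

```python
def select_refactoring(original, candidates, mdl, pass_rate):
    """Pick the lowest-description-length candidate that does not regress tests.

    Candidates whose pass rate matches or beats the original's survive the
    filter; among survivors, the one with the smallest description length
    wins. If every candidate regresses, the original code is kept.
    """
    baseline = pass_rate(original)
    valid = [c for c in candidates if pass_rate(c) >= baseline]
    return min(valid, key=mdl) if valid else original
```

Falling back to the original when all candidates fail the pass-rate filter is what makes the procedure safe to run over an entire collection: a bad sample batch can never degrade correctness.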
New library functions in the selected refactor are saved into the LIBRARIAN library for potential use in downstream refactoring of other programs. We provide the full algorithm in Appendix A.
# 5 MINICODE
MINICODE evaluates a code agent’s capability to identify abstractions across implementations and design reusable libraries. In order to measure these capabilities, our benchmark presents agents with a collection of code sources, then asks agents to refactor the code sources into a unified library alongside refactorings of the original code sources. There are two key desiderata for collections of code sources: The collections must be compressible, in that there exists a latent shared library abstraction, and verifiable, so that we can measure how well refactored code sources preserve functional correctness. We source problems from two domains: Competition coding and synthesized repositories (Table 1).
Agents are expected to interact with MINICODE via the terminal. We structure the benchmark as refactoring a multi-package Python repository, where each code source in a collection is a Python package in a subdirectory. This requires knowledge of basic bash commands for exploring repositories, editing code, and running tests, as well as how to manage complex, multi-package Python libraries.
Table 1: MINICODE Statistics
CodeContests Competition problems are crafted with specific variations of algorithmic approaches in mind, resulting in both shared latent concepts and required test cases. As a result, competition coding is naturally both compressible and verifiable.
Each collection consists of multiple code sources, each containing a solution to a competition programming prompt, with associated tests for verification. We take solutions, prompts, and tests from CODECONTESTS [19], a dataset consisting of competition coding problems. Each code source in the collection is structured as a subdirectory consisting of the task description in PROBLEM.md, the initial solution in main.py, and a script to run tests in run.sh. Agents are instructed to create a library.py file, which is imported into each code source. Since CODECONTESTS has no external dependencies on Python packages, this can be done without explicit structuring as a Python package.
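The layout of one such code source can be sketched as below. File contents here are placeholders, not the benchmark's exact scripts; the agent later adds the `library.py` imports during refactoring.

```python
import os, tempfile

def make_code_source(root: str, name: str, problem: str, solution: str) -> str:
    """Lay out one CodeContests code source as a subdirectory (illustrative)."""
    d = os.path.join(root, name)
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, "PROBLEM.md"), "w") as f:
        f.write(problem)          # task description
    with open(os.path.join(d, "main.py"), "w") as f:
        f.write(solution)         # initial solution; refactoring adds imports
    with open(os.path.join(d, "run.sh"), "w") as f:
        f.write("#!/bin/sh\npython main.py\n")  # placeholder test runner
    return d

# demo in a throwaway directory
demo_root = tempfile.mkdtemp()
demo_dir = make_code_source(demo_root, "p1", "Sum two ints.", "print(1 + 1)\n")
```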
Repositories For the second domain of synthesized repositories, we propose a data-generating process that first produces project ideas, then generates variations of those project ideas tailored to specialized use-cases. This allows us to control the complexity and degree of overlap between each code source in a collection. Each code source in the repositories domain comprises a task description, source code, and test cases for functionality and correctness. MINICODE includes both small repositories (approximately 200 lines of source code each) and large repositories (approximately 6.5k lines of source code each), both of which represent realistic settings where different people with different needs use language models to help them write software for their particular use cases. The refactoring agent is tasked with extracting re-usable functions from across code repositories, and re-writing the original code source repositories to use them.
We approximate the true distribution of code repositories, $p(\rho)$, with a generative process using latent textual library and repository descriptions. The entire space of code repositories is massive. To collect and group code repositories that share meaningful structure that can be abstracted into a useful code library, we (1) sample a textual library description; (2) sample use cases; and (3) sample their programmatic implementations. This generative process naturally produces repository collections primed for refactoring. We sample from language models for each step of this process. Prompts are shared in Appendix E.
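The three-stage sampling can be sketched as follows. `llm` is an assumed prompt-to-text callable; the prompts shown are illustrative placeholders, not the paper's actual prompts (those are in its Appendix E).

```python
def generate_collection(llm, n_use_cases=3):
    """Three-stage sampling sketch: (1) library description,
    (2) specialized use cases, (3) programmatic implementations."""
    library_desc = llm("Propose a software library idea.")
    use_cases = [llm(f"Describe specialized use case {i} for: {library_desc}")
                 for i in range(n_use_cases)]
    repos = [llm(f"Implement a small repository for: {uc}") for uc in use_cases]
    return library_desc, use_cases, repos
```

Because all variations descend from one latent library description, the resulting code sources share structure that a refactoring agent can abstract.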
Each collection consists of multiple synthetic repositories as code sources, where each code source is a subpackage. Collections are obtained by transforming the original repository code sources into a multi-package Python library. Each source’s source code is extracted into a subpackage directory, and its tests into a corresponding test subdirectory. Agents are instructed to write a shared library in a shared subpackage named common. This common shared library must be imported and used to refactor each of the original code sources.
# 6 Experimental Setup
Grouping Programs into Collections To facilitate parallel application of LIBRARIAN and manage the dataset scale, we assume that semantically distant code sources will have minimal overlap in their optimal library functions. Therefore, our overall approach partitions the dataset into disjoint collections through clustering.
For CodeContests, these collections are constructed from an initial corpus of ~9k problems with Python solutions: We first filter these code sources, removing those whose selected canonical solution is under 10 lines (minimal refactoring potential). For the remaining 4596 solutions we use a language model to generate textual descriptions of canonical solutions—emphasizing reusable components—which are embedded using OpenAI’s text-embedding-ada-002.
Agglomerative Clustering [20] is subsequently applied to these embeddings to partition the code sources into a predefined number of initial clusters, in our case 120. To create uniformly sized experimental units, we subsample each such cluster to form collections of 30 code sources. This collection size was chosen empirically because it balances LIBRARIAN's runtime against compression opportunity. We select 10 collections that we then use to evaluate our methods.
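The collection construction from cluster labels can be sketched as below; it assumes clustering has already assigned a label to each code source and is a simplified stand-in for the paper's procedure:

```python
import random

def make_collections(cluster_labels, size=30, k=10, seed=0):
    """Subsample each sufficiently large cluster to a uniform collection
    size, then keep k collections (sketch)."""
    rng = random.Random(seed)
    clusters = {}
    for idx, lab in enumerate(cluster_labels):
        clusters.setdefault(lab, []).append(idx)
    collections = [rng.sample(members, size)
                   for members in clusters.values() if len(members) >= size]
    return collections[:k]
```

Clusters smaller than the target size are dropped, so every experimental unit has identical size.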
Code repositories are generated as disjoint collections through the generative process, which first samples project ideas and then variations. For small repositories, we sample 10-15 code sources, then form smaller collections of 3-5 code sources that exhibit the highest intra-cluster similarity, using the same embedding and clustering technique. For large repositories, we sample two code sources for each of the 10 collections.
REGAL Baselines To evaluate the ability of our libraries to support reuse on new problems, we turn to the program synthesis tasks used in REGAL, where learned libraries are added to help the program synthesizer. We evaluate on the two domains published by the authors, Logo and Date. Because our clustering is inspired by REGAL but adds additional complexity, for fair comparison we keep their setup the same and only augment the training with the sample + MDL rerank procedure described in Section 4.1.
Code Contests To evaluate LIBRARIAN on refactoring Code Contests, we select 6 collections of 30 code sources (problems). In each collection we group the problems into tuples of size 3. We set the sample budget to $K = 8$, since our ablations show that larger $K$ discovers better libraries. We use the MDL objective for ranking.
The model used for sampling is OpenAI’s o4-mini [21]. To obtain MDL scores we use Qwen 2.5 7B Instruct [22] as a balance between quality, speed, and cost.
Code Agents To fairly evaluate performance on the task by state-of-the-art systems, we use coding agents that advertise long-context ability to reason about, write, and refactor code repositories. Specifically, we use Claude Code (Cl) [23], which uses the Claude 3.7 Sonnet model, and OpenAI’s Codex (Cx) [24], which uses o4-mini.
The initial code sources for the small code repositories are all generated by OpenAI Codex. We empirically observed that Claude Code has an affinity for generating large code repositories, so the large code repositories are all generated by Claude Code.
We test whether code agents can refactor collections of code sources autonomously, without human intervention. Refactoring repositories with code agents involves planning and iterative (re)implementation and testing. Code agents are prompted to perform each of these steps, with feedback from the unit tests. Agents must run and repair unit tests autonomously. We run coding agents multiple times per collection, logging their progress in checklists stored in text files. Our naming convention for agents is “Planner-Executor”. For example, “Cl-Cx” uses the Claude Code agent for planning and OpenAI Codex to implement the plan.
As before we use Qwen 2.5 7B Instruct [22] to obtain MDL scores.
# 7 Results
In this section, we present the compression and correctness results with LIBRARIAN and agent baselines on MINICODE.
MINICODE-CodeContests On CodeContests, LIBRARIAN achieves a high final pass rate of $90.67\%$ and significantly improves correctness, with pass rates increasing by $6.33\%$ compared to the original code sources (Table 2). The method yields substantial compression: the refactored code, including the new library, shows an MDL
Figure 2: Refactoring results for LIBRARIAN (w/ $K = 8$ ) averaged over 10 Code Contests collections.
ratio of 0.53 (a $47\%$ reduction in MDL relative to the original). On average, LIBRARIAN generates libraries containing approximately 11 functions. These functions demonstrate good reuse, being called by around 5.2 programs on average, although $38.03\%$ of them are used only once within their specific collection context.
Table 2: The correctness (Pass $\%$ ) and compression (MDL ratio $\%$ ) of the original and refactored code sources for the agent baselines on a subset of the small and large repository collections in MINICODE. Libraries with syntax or runtime errors receive a pass rate of 0.
Code agents fail to achieve both high correctness and compression on MINICODE-CodeContests. Across collections, the Codex agent achieves an average MDL ratio of 0.83, but a pass rate of $74.16\%$, much lower than LIBRARIAN’s rate of $90.67\%$. Similarly, the Claude agent reaches a higher pass rate of $82.50\%$, still lower than LIBRARIAN’s, but an MDL ratio of 1.04, meaning its refactored output is more complex than the original collection.
MINICODE-Repositories Results for coding agents on both small and large repositories are given in Table 2. During experiments, we found that the Claude Code agent provided superior plans for refactoring, while both the Claude Code and Codex agents performed satisfactorily at implementing a plan.
Even with a human in the loop, existing code agents fail to produce effective refactorings. Often, the resulting refactored codebase bloats in size and complexity, particularly when using Claude Code for both planning and implementation. Given the poor performance of code agents on these repository-scale refactorings, we do not run the full sample-and-rerank pipeline. Supporting qualitative analysis in Section 8.4 discusses common failure modes of code agents on repository-scale refactorings.
# 8 Analysis
We further analyze parts of LIBRARIAN, consider alternative objective functions, analyze the use of clustering while constructing MINICODE-CodeContests, and common failure modes of code agents.
# 8.1 Can these libraries be reused to solve downstream programming problems?
To understand the practical value of our library extraction work for program synthesis, we augment the existing library learning algorithm REGAL with sampling and MDL reranking. In REGAL, the learned library is used to solve downstream held-out program synthesis problems, and a simpler clustering algorithm is used. For a fair comparison with REGAL, we hold the clustering algorithm constant (using the simpler REGAL approach), but sample $K = 5$ refactorings/libraries and rerank by MDL. The resulting system outperforms REGAL on held-out programming problems (Table 3).
These problems, however, are quite simple, which is representative of classic library learning work: either drawing simple geometric designs (the Logo dataset) or manipulating textual representations of dates (the Date dataset). This helped motivate us to consider refactoring whole repositories.
Table 3: Solving downstream program synthesis tasks using learned libraries
# 8.2 What objective function should a library learner use?
We run our method on CodeContests using MDL, number of tokens, and cyclomatic complexity [25] as objective functions on 6 collections (Figure 3). While minimizing MDL also minimizes the other two objectives, the converse is not true. This suggests that MDL is a Pareto-optimal loss among the three objectives in this experiment.
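The token-count and cyclomatic-complexity objectives can be approximated with standard-library tooling. These are rough proxies rather than the paper's exact implementations, and the MDL objective (a language-model negative log-likelihood) is omitted since it requires a model:

```python
import ast, io, tokenize

def token_count(code: str) -> int:
    """Number of lexical tokens in a Python source string."""
    return sum(1 for _ in tokenize.generate_tokens(io.StringIO(code).readline))

def cyclomatic_proxy(code: str) -> int:
    """Rough cyclomatic complexity: 1 plus the number of branching AST nodes."""
    branching = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(n, branching) for n in ast.walk(ast.parse(code)))
```

A refactoring that factors repeated logic into a shared function lowers both measures across a collection, which is what the rankings in Figure 3 compare.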
To confirm that the library does indeed expose shared abstractions, we calculate the average number of times that each library routine is used. Scaling the inference budget to $K = 8$ discovers better libraries, reusing each library function on average about 5 times.
In Appendix A.1, we find that MDL is also the objective that best correlates with human preferences in a small-scale human study.
# 8.3 Clustering Analysis: CodeContests
We analyze the coherence of the clusters underlying collections in MINICODE-CodeContests. In particular, we compare clustering based on o4-mini generated code source descriptions against task descriptions. Since task descriptions in competition coding problems are designed to hide the algorithmic approach needed to solve a problem, we expect clusters based on code source descriptions to be more coherent.
We use two measures to evaluate the thematic coherence of collections: Good collections should group code sources with a (1) concentrated and (2) identifiable set of shared conceptual tags, which for CodeContests are provided as ground truth (trees, graphs, etc.).
Normalized Tag Instance Entropy This measures the concentration of tag instances within a collection, and is given by the entropy of the tag distribution for a given collection, normalized by the number of distinct tags in that collection. A lower normalized tag instance entropy (closer to 0) indicates higher thematic purity, meaning a small number of tag types are most prevalent.
Herfindahl-Hirschman Index (HHI) for Problem Presence This measures tag concentration across distinct code sources in a collection. A higher HHI signifies that the problems are collectively characterized by a smaller, more focused set of tags.
We provide the full formal definitions of both measures in Appendix B.
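One plausible implementation of the two measures is sketched below. The exact formal definitions live in the paper's Appendix B; here the entropy is normalized by $\log_2$ of the number of distinct tags (a common convention), and the HHI input maps each tag to the number of problems in the collection exhibiting it, both stated assumptions.

```python
from collections import Counter
from math import log2

def normalized_tag_entropy(tags):
    """Entropy of the tag-instance distribution, normalized by log2 of the
    number of distinct tags; 0 means one tag dominates completely."""
    counts = Counter(tags)
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    t = len(counts)
    return h / log2(t) if t > 1 else 0.0

def hhi(tag_presence):
    """Herfindahl-Hirschman Index over tag presence shares; higher means a
    smaller, more focused set of tags characterizes the collection."""
    total = sum(tag_presence.values())
    return sum((c / total) ** 2 for c in tag_presence.values())
```

A thematically pure collection has entropy near 0 and HHI near 1.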
Figure 4 shows our clustering approach yields more thematically coherent clusters, evidenced by achieving lower entropy and higher HHI values across the entire tested range of $N$ .
Figure 3: Comparing 3 different objective functions for refactoring (different columns) according to different downstream success metrics (different rows), as a function of refactoring budget (horizontal axes). The values are averaged over 6 collections of CodeContests problems. Row 1: Optimizing perplexity also incidentally optimizes cyclomatic complexity and token count, but the converse is not true. Row 2: Refactored programs pass more test cases, even more than the original code itself. Row 3: Increasing the refactoring budget results in more reusable library subroutines (such subroutines are called more times on average). Filtered/Raw: Using/Not using tests to filter samples.
Figure 4: Clustering analysis of 4,596 Code Contest problems, comparing the thematic coherence of clusters formed using our proposed method versus REGAL-style clustering.
# 8.4 Common Failure Modes of Coding Agents
Failing to follow the plan. Refactoring large repositories requires editing code over a long time horizon. We instruct agents to plan hierarchically by first recording their plan in natural language, then implementing that plan. However, agents often ignore the plan, particularly with Codex as the implementation agent, sometimes refusing to edit the code at all. We hypothesize that this behaviour is caused by the refactoring task itself: the original code source passes all tests, and agents are trained to prioritize correctness over compression.
Failing to use the library. When the implementation agent does follow the plan, it often fails to do so effectively. Claude Code, for example, will write a library, import it in the proper place, then rewrite the library rather than use it. We provide further examples in Appendix G.

Abstract. Maintainable and general software allows developers to build robust applications efficiently, yet achieving these qualities often requires refactoring specialized solutions into reusable components. This challenge becomes particularly relevant as code agents become increasingly accurate at solving isolated programming problems. We investigate code agents' capacity to refactor code in ways supporting growth and reusability. We present both a method and a benchmark for refactoring: LIBRARIAN, a sample-and-rerank method for generating reusable libraries, and MINICODE, a benchmark where code agents must minimize and refactor multiple independent solutions into a joint library. Compared to state-of-the-art code agents, LIBRARIAN achieves strong results on both compression and correctness on MINICODE, obtaining compression rates 1.6-2x better than coding agents while also improving correctness. We open-source our code and benchmark at https://code-refactor.github.io/.
"cs.SE",
"cs.AI"
] |
# 1 Introduction
The web is a vast and dynamic source of structured and semi-structured information. Much of this information, found in natural language descriptions, tables, and repetitive patterns, is amenable to extraction and analysis. Extracting structured data records—repeating content elements from web pages—has significant utility across many practical applications [Cafarella et al., 2008, Ferrara et al., 2014, Laender et al., 2002, Arasu and Garcia-Molina, 2003, Muslea et al., 1999]. For example, office workers frequently need to collect structured information, such as product listings, company directories, and contact tables from websites, and store it in spreadsheets or internal databases. Automating this process can lead to substantial productivity gains.
Traditional web data record extraction has primarily relied on structural heuristics that analyze HTML trees to detect repetitive patterns [Buttler et al., 2001, Chang and Lui, 2001, Liu et al., 2003, Zhai and Liu, 2005, Liu and Zhai, 2005]. While these methods have achieved some success, their efficacy is often hampered by a reliance on rigid structural assumptions. This limits their ability to generalize across diverse and dynamic web layouts. Furthermore, these methods largely overlook textual semantics, leading to instability when visual or structural cues alone are insufficient.
In contrast, Large Language Models (LLMs) have recently shown remarkable capabilities in natural language understanding, reasoning, and information extraction [Wang et al., 2024, Minaee et al., 2024,
Brown et al., 2020, Devlin et al., 2019]. These models offer a promising new paradigm for web data record extraction, potentially working alongside or enhancing traditional heuristic methods [Liu and Zhai, 2005, Zhai and Liu, 2005, Liu et al., 2003]. However, deploying LLMs in real-world scenarios presents challenges: (i) fine-tuning LLMs is resource-intensive, and (ii) LLMs inherently grapple with issues such as hallucination and a lack of transparency in their decision-making processes.
Moreover, the evaluation of LLM-based extraction methods is hindered by the absence of standardized, publicly available datasets. Existing datasets are often domain-specific and restricted by terms of service or ‘robots.txt’ directives, limiting their utility for benchmarking. This situation highlights the critical need for a general-purpose dataset and a concrete evaluation framework to assess and compare the efficacy of traditional and LLM-based extraction methods.
In this work, we introduce a practical evaluation framework for comparing traditional algorithms and LLMs on the task of web data record extraction. Our contributions are as follows:
• We propose a reproducible framework that constructs evaluation datasets from arbitrary MHTML web snapshots. Our framework provides XPath-based structured outputs and the corresponding URLs for the datasets we evaluated.
• We establish a concrete scoring framework to evaluate both heuristic algorithms (e.g., XPath wrappers, partial tree alignment) and LLM-based methods. Our protocol prevents score distortion from hallucination and supports partial credit through structure-aware matching.
• We explore how to optimize raw HTML input for LLM-based methods by introducing preprocessing strategies that reduce token length while preserving the semantic and structural integrity of the page. These strategies include HTML slimming and converting HTML into structured representations such as Hierarchical JSON and Flat JSON. Our results demonstrate that these formats significantly influence extraction performance and hallucination rates.
• We created a publicly available synthetic dataset by transforming DOM structures and modifying content from original web pages.
This work lays a foundational cornerstone for future research in web record extraction, particularly for developing text-aware and HTML-friendly methods in the era of LLMs.
# 2 Related work
# 2.1 Traditional Methods and Heuristic Approaches
Early web data record extraction approaches relied on heuristic-based algorithms that exploited regularities in the HTML structure of web pages [Buttler et al., 2001, Chang and Lui, 2001, Liu et al., 2003, Zhai and Liu, 2005, Liu and Zhai, 2005]. A representative example is the Mining Data Records (MDR) algorithm, which segments web pages into data regions by analyzing the DOM tree and detecting repeated sibling structures [Liu et al., 2003]. MDR is effective when records are arranged contiguously in a clean and regular HTML layout. However, it struggles in scenarios where data records are non-contiguous—i.e., interleaved with unrelated elements or separated by visual gaps—because it assumes strict adjacency in the DOM.
To overcome these limitations, Zhai and Liu [2005] proposed DEPTA (Data Extraction based on Partial Tree Alignment), which decouples segmentation and alignment. First, it identifies candidate data regions using structural and visual cues. Then, it performs partial tree alignment to recover repeated substructures, even when records are not adjacent in the HTML. This makes DEPTA more robust to noisy layouts and capable of handling non-contiguous record structures.
Building on this, NET incorporates visual features such as the rendered bounding boxes of elements to extract both flat and nested data records [Liu and Zhai, 2005]. By leveraging layout-based cues in addition to DOM structure, NET is better suited for complex templates and deeply nested hierarchies.
A more recent system, AMBER, introduces automatic supervision to reduce reliance on manually crafted rules [Furche et al., 2012]. It uses entity recognizers (e.g., for names, prices, dates) to label semantically meaningful parts of the page, and aligns these semantic hints with repeated DOM structures. This alignment improves extraction robustness on attribute-rich, template-based, and noisy web pages.
# 2.2 Large Language Model–Based Approaches
The advent of LLMs has revolutionized information extraction tasks. Models like GPT-3.5 [Ye et al., 2023] and GPT-4 [Achiam et al., 2023] have demonstrated remarkable capabilities in understanding and generating human-like text. Recent studies have explored the application of LLMs for extracting structured information from web pages. For instance, Brinkmann et al. [2024] investigated the zero-shot extraction and normalization of product attribute values using GPT models, achieving competitive performance without task-specific training data.
Another line of research focuses on enhancing the understanding of web page structures, for which specialized models like MarkupLM have been introduced. MarkupLM is a pre-trained model designed for document understanding tasks that utilize markup languages, such as HTML, where text and markup information are jointly pre-trained [Li et al., 2022]. This model incorporates additional embedding layers to capture the hierarchical structure of HTML documents, improving performance on tasks like information extraction from web pages.
# 2.3 Benchmark Datasets and Evaluation Frameworks
Benchmark datasets are crucial for evaluating and comparing information extraction methods. PLAtE [San et al., 2023] is a large-scale dataset designed for list page web extraction, focusing on product segmentation and attribute extraction. While PLAtE provides valuable resources for specific domains, its static nature and domain-specific focus limit its applicability across diverse web contexts.
AMBER [Furche et al., 2012] introduces a semi-automatic extraction system targeting multi-attribute objects from deep web pages, such as real estate listings. Although its rule-based annotations offer high precision, it lacks publicly available datasets and generalizability across web domains. The dataset used in AMBER—real estate and car listings—is extensive but domain-limited and not openly accessible, restricting its utility for reproducible benchmarks.
The Klarna Product Page Dataset [Hotti et al., 2021] provides a high-quality resource for training and evaluating web element nomination methods. It includes detailed annotations for product pages using graph-based and LLM-based labeling strategies. However, its scope is confined to e-commerce, and the data cannot be redistributed due to licensing constraints. While it supports fine-grained element labeling, it does not directly support data record segmentation or structural evaluation using tree-based supervision.
Multi-Record Web Page Extraction [Kustenkov et al., 2025] focuses on news websites and introduces methods to extract multiple records from a single page. While it demonstrates the value of domain-specific modeling (e.g., headlines, summaries), it does not provide a publicly reusable dataset or a unified evaluation protocol, and relies on heuristics that are tightly coupled with the news domain structure.
These efforts highlight the growing need for evaluation datasets that are (i) publicly available, (ii) structurally diverse across domains, and (iii) compatible with both rule-based and LLM-based methods. Our work addresses this by introducing a framework and a benchmark dataset built from diverse MHTML web snapshots. Key contributions include human-refined XPath annotations for robust, DOM-grounded evaluation, and a scoring system that penalizes LLM hallucination. Furthermore, we developed a publicly available synthetic dataset to foster broader research.
# 3 Preliminary
To formally represent and extract structured content from HTML pages, we rely on XPath expressions. An XPath $x \in \mathcal { X }$ is a path-like expression that identifies a unique node in the DOM (Document Object Model) tree. XPath expressions encode the structural position of elements in the HTML tree, and the relationships among them reflect the hierarchical layout of web content.
Structured content on the web—such as product listings, hotel entries, or user profiles—is typically composed of repeated logical units called data records. A data record refers to a group of semantically related elements that together describe a single entity.
Definition 3.1 (Data Record). Let $\mathcal{X}$ be the set of XPath expressions in a document. A data record $X_i \subseteq \mathcal{X}$ is defined as a finite set of XPath expressions whose corresponding DOM nodes contain non-trivial textual content (i.e., not empty or purely whitespace) and collectively represent a coherent, repeated unit on the page.
This set-based definition naturally accommodates both simple and complex structures—whether the relevant elements are grouped under one parent node or distributed across different DOM branches. It also allows for nested or non-contiguous layouts to be expressed without relying on a single subtree.
Example. Consider the following HTML snippet:
The corresponding data record can be represented as:
X1 = {/div[1]/div[1]/span[1], /div[1]/div[2]/span[1], /div[1]/div[2]/span[2]}
This approach captures all relevant fields of the product as a set of XPath references. In more complex settings—such as when parts of a record appear in different sections of the DOM—this model remains consistent and expressive. We discuss such cases, including nested and non-contiguous records, in the supplementary material.
# 4 Method
# 4.1 Problem Setting
We formalize web data record extraction as the task of identifying repeated sets of semantically related elements in a web page’s DOM tree. As defined in Section 3, each data record corresponds to a set of XPath expressions whose associated DOM nodes together describe a single coherent entity (e.g., a product or listing).
Given an HTML document, let $\mathcal{X}$ denote the set of all XPath expressions referencing non-trivial text nodes. The goal is to segment this set into $\mathcal{P} = \{P_1, \ldots, P_M\}$, where each $P_i \subseteq \mathcal{X}$ forms a data record.
Unlike traditional attribute extraction, we treat the internal structure of a record as a black box and focus solely on discovering consistent repeating record-level groupings. These groups may appear as contiguous siblings in the DOM or may be non-contiguous and nested under different parent elements.
This task can be viewed as the inverse of web page templating: rather than generating HTML from structured data, we recover the structured data (as XPath sets) from rendered HTML. This inversion is particularly challenging in real-world pages, where markup irregularities, dynamic content, and layout noise are common.
In our approach, we leverage LLM APIs to perform this extraction via carefully designed HTML prompts. The details of our prompting strategies and API usage are presented in Section 4.3.
# 4.2 Dataset Construction
To support fair and rigorous evaluation, we construct a large and general-purpose dataset. It encompasses a wide variety of web domains, significantly broadening the scope beyond prior datasets like PLAtE [San et al., 2023], which was restricted to specific categories such as shopping pages. Our dataset is designed to capture repetitive structures and a variety of DOM structures across this diverse range, including many non-product-centric pages.
Each web page is stored in MHTML format to preserve all layout and interactive elements. From each page, we annotate the main repeated data records using XPath expressions. These XPaths serve as ground-truth labels for evaluation.
To identify data records, we first leverage large language models (LLMs) to automatically annotate candidate repetitive blocks in each page. Human annotators then review and refine these suggestions to ensure high-quality labels. This semi-automatic process balances scalability and accuracy.
Using XPath as the supervision format enables deterministic and verifiable evaluation grounded in the DOM structure. Unlike free-text descriptions, XPath annotations constrain model predictions to concrete, observable elements on the page. This prevents hallucinations where LLMs infer data records from unrelated textual context by requiring that every predicted record maps to an exact DOM node. As LLMs may generate plausible-looking but structurally invalid groupings, XPath-based evaluation ensures alignment with the actual HTML hierarchy.
# 4.3 Preprocessing for LLMs
To enable a rigorous evaluation of LLM extractors on web pages, we propose a preprocessing framework that simultaneously (i) compresses raw HTML to stay within the LLM token budget, and (ii) preserves the full DOM hierarchy so that ground-truth labels can still be expressed as absolute XPaths.
One of the key constraints when using LLM APIs is the token limit per request. To address this, we applied several preprocessing techniques to reduce input size without sacrificing structural semantics. Unlike plain text or Markdown, which may be more compact, these formats do not retain absolute DOM positions necessary for XPath supervision. Therefore, we maintain the original HTML structure while removing unnecessary attributes—such as class, id, and style—that are often verbose and semantically redundant. This results in a lightweight yet structurally faithful HTML tree. For example, we convert Figure 1a into Figure 1b.
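The attribute-stripping step (turning the original HTML of Figure 1a into the slimmed form of Figure 1b) can be sketched with the standard-library HTML parser. This is a minimal illustration: it assumes the dropped attribute names are exactly `class`, `id`, and `style`, and ignores edge cases such as comments and malformed markup.

```python
from html.parser import HTMLParser

class Slimmer(HTMLParser):
    """Rebuild HTML keeping tags and text, dropping verbose attributes."""
    DROP = {"class", "id", "style"}

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        kept = "".join(f' {k}="{v}"' for k, v in attrs if k not in self.DROP)
        self.out.append(f"<{tag}{kept}>")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def slim_html(html: str) -> str:
    s = Slimmer()
    s.feed(html)
    return "".join(s.out)
```

Because only attributes are removed, every node keeps its absolute position in the DOM tree, so ground-truth XPaths remain valid on the slimmed page.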
In addition to HTML-based inputs, we explored JSON-based formats to guide the model more explicitly. First, we construct a Hierarchical JSON (Figure 1c), which mirrors the DOM tree hierarchy in JSON form. Although slightly more verbose in tokens than flattened formats, this hierarchical structure helps the model understand parent-child relationships, reducing the risk of hallucinating invalid elements.
Second, we construct a Flat JSON (Figure 1d), where each key is an absolute XPath and the value is the corresponding textual content. While this format discards hierarchy from the nested structure, it provides unambiguous localization of each field, ensuring token-level precision during decoding.
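The flat variant can likewise be sketched as a recursive walk that emits absolute XPaths. Again this assumes XML-parseable input, and indexing only repeated sibling tags is an assumed convention:

```python
import xml.etree.ElementTree as ET

def flatten(elem, prefix):
    """Map absolute XPaths (1-based indices for repeated sibling tags) to text content."""
    out = {}
    totals, seen = {}, {}
    for child in elem:
        totals[child.tag] = totals.get(child.tag, 0) + 1
    for child in elem:
        seen[child.tag] = seen.get(child.tag, 0) + 1
        idx = f"[{seen[child.tag]}]" if totals[child.tag] > 1 else ""
        path = f"{prefix}/{child.tag}{idx}"
        if child.text and child.text.strip():
            out[path] = child.text.strip()
        out.update(flatten(child, path))
    return out

page = ("<html><body><ul><li><span>Sample Product</span></li>"
        "<li><span>$999.00</span></li></ul></body></html>")
flat = flatten(ET.fromstring(page), "/html")
print(flat)
# {'/html/body/ul/li[1]/span': 'Sample Product', '/html/body/ul/li[2]/span': '$999.00'}
```

Each key is directly usable as a supervision label, which is what gives this format its unambiguous field localization.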
Ultimately, the framework lets us evaluate extractors purely on positional accuracy: a model's output is valid only if the text content of the predicted XPath exists in the cleaned DOM. Since every path in the model's output must be drawn from the cleaned DOM, the model cannot hallucinate elements that do not exist.
# 4.4 Evaluation Framework
We evaluate data record extraction methods by comparing predicted record sets against human-annotated ground truth. Each data record is represented as a set of XPath expressions, and the comparison is performed at the record level using a matching-based evaluation framework.
Let $\mathcal{P} = \{P_1, \ldots, P_M\}$ denote the set of predicted records and $\mathcal{G} = \{G_1, \ldots, G_N\}$ the set of ground-truth records, where each $P_i$ and $G_j$ is a set of XPath expressions. For each pair $(P_i, G_j)$, we define an overlap score as the Jaccard similarity between the two sets:
$$
\operatorname{Overlap}(P_i, G_j) = \frac{|P_i \cap G_j|}{|P_i \cup G_j|}
$$
To evaluate the quality of extraction, we compute an optimal one-to-one matching $\mathcal { M } \subseteq \mathcal { P } \times \mathcal { G }$ that maximizes the total overlap across matched pairs. This is solved using the Hungarian algorithm.
Figure 1: Example input representations for the same product list.

```
(a) Original HTML
<ul class="product-list">
  <li class="item"><span class="name">Sample Product</span></li>
  <li class="item"><span class="price">$999.00</span></li>
</ul>

(b) Slimmed HTML (attributes removed)
<ul>
  <li><span>Sample Product</span></li>
  <li><span>$999.00</span></li>
</ul>

(c) Hierarchical JSON (nested text map)
{"html": {"body": {"ul": {
  "li[1]": {"span": "Sample Product"},
  "li[2]": {"span": "$999.00"}}}}}

(d) Flat JSON (text map)
{"/html/body/ul/li[1]/span": "Sample Product",
 "/html/body/ul/li[2]/span": "$999.00"}
```
Based on the optimal alignment $\mathcal { M }$ , we define precision and recall as:
$$
\begin{aligned}
\mathrm{Precision} &= \frac{1}{|\mathcal{P}|} \max_{\mathcal{M} \subseteq \mathcal{P} \times \mathcal{G}} \sum_{(P_i, G_j) \in \mathcal{M}} \operatorname{Overlap}(P_i, G_j) \\
\mathrm{Recall} &= \frac{1}{|\mathcal{G}|} \max_{\mathcal{M} \subseteq \mathcal{P} \times \mathcal{G}} \sum_{(P_i, G_j) \in \mathcal{M}} \operatorname{Overlap}(P_i, G_j)
\end{aligned}
$$
The final F1 score is computed as the harmonic mean of precision and recall:
$$
\mathrm{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$
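The matching-based metrics above can be sketched end to end. For clarity, this uses a brute-force search over one-to-one matchings instead of the Hungarian algorithm, which is only feasible for small record counts:

```python
from itertools import permutations

def overlap(p, g):
    """Jaccard similarity between two XPath sets."""
    return len(p & g) / len(p | g) if p | g else 0.0

def matched_overlap(preds, golds):
    """Total overlap under the best one-to-one matching (brute force)."""
    if len(preds) > len(golds):          # overlap is symmetric, so permute the larger side
        preds, golds = golds, preds
    best = 0.0
    for perm in permutations(range(len(golds)), len(preds)):
        best = max(best, sum(overlap(preds[i], golds[j]) for i, j in enumerate(perm)))
    return best

def scores(preds, golds):
    total = matched_overlap(preds, golds)
    precision, recall = total / len(preds), total / len(golds)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

preds = [{"/ul/li[1]/span"}, {"/ul/li[2]/span", "/ul/li[2]/b"}]
golds = [{"/ul/li[1]/span"}, {"/ul/li[2]/span"}]
print(scores(preds, golds))  # (0.75, 0.75, 0.75)
```

The second predicted record over-segments one field, so it receives an overlap of 0.5 rather than being counted as entirely wrong, which is exactly the partial-credit behavior described above.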
In addition to these metrics, we also assess the Hallucination Rate (HR). For each web page (URL) in our test set, a hallucination event is considered to have occurred (value of 1) if the model predicts at least one empty record (i.e., a record containing no corresponding XPaths); otherwise, it is 0. The Hallucination Rate is then the average of these binary hallucination event indicators across all processed URLs for which predictions are available. A lower HR is preferable, indicating that the model is less prone to generating empty, and thus unusable, records.
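As a sketch, the HR computation reduces to averaging a binary indicator per URL; the representation of predictions as per-page lists of XPath sets is an assumption for illustration:

```python
def hallucination_rate(per_url_predictions):
    """per_url_predictions: list over URLs; each entry is that page's list of
    predicted records, where a record is a set of XPaths."""
    events = [1 if any(len(rec) == 0 for rec in records) else 0
              for records in per_url_predictions]
    return sum(events) / len(events)

# three URLs: only the second page contains an empty (hallucinated) record
print(hallucination_rate([[{"/ul/li[1]"}], [set(), {"/ul/li[2]"}], [{"/div"}]]))
```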
This formulation allows for partial credit when records are only partially correct (e.g., missing fields or over-segmentation), and is robust to variations in record granularity or representation (e.g., nested vs. flat XPath sets). Unlike token-level or text-based matching, this metric directly reflects correctness at the structural level, which is essential for applications requiring XPath-based or DOM-aligned outputs.
# 5 Experiment
# 5.1 Implementation Details
We evaluate a traditional web data record extraction method, MDR, as well as a modern LLM-based extractor. For the latter, we leveraged the Gemini-2.5-pro-preview-03-25 model [Team et al., 2023], accessed via its API, using the default temperature setting of 1.0. This specific model was selected for its extensive input token allowance, which enabled us to run experiments on the largest possible number of websites from our dataset. Each method is tested on a diverse dataset of real-world web pages; a detailed description of this dataset is provided in Section 5.2, and an explicit site list is available in the supplementary material. Ground-truth annotations are provided as sets of XPath expressions identifying repeated data records.
We do not include DEPTA and NET in our evaluation, as they rely on visual layout cues (e.g., rendered bounding boxes), which are not available without a rendering pipeline. In contrast, MDR operates purely on the DOM structure and is thus directly comparable to our LLM-based approach.
For the LLM-based approach, we design HTML prompts tailored for Gemini-2.5-pro-preview-03-25 to balance performance and token efficiency. We considered an approach using (a) Full HTML, which includes all original attributes such as class names, IDs, and precise positional information. However, this method presents challenges for a fair comparison, as these attributes provide cues not available to other preprocessing techniques. Furthermore, Full HTML is substantially larger, often more than ten times the size of Slimmed HTML, frequently exceeding input token limits and thereby restricting the number of websites on which experiments can be conducted. Consequently, we excluded Full HTML from our primary evaluation. Instead, we evaluate three distinct preprocessing strategies for constructing the prompts: (b) Slimmed HTML, where verbose attributes are removed while preserving DOM structure; (c) Hierarchical JSON, which reflects the hierarchical structure of the original DOM in a JSON-like nested form; and (d) Flat JSON, which flattens the HTML into a key-value mapping from XPath to textual content. These variants allow for flexible trade-offs between structural fidelity and token compactness.
Since the exact token count cannot be determined before submission, we report the final token usage based on the serialized prompts actually sent to the LLM. Details of our prompting strategies and input formats are described in Section 4.3.
# 5.2 Dataset
Our evaluation dataset consists of 164 real-world web pages, collectively containing 12,278 annotated data records. Each page was selected from a diverse set of domains, including commerce, media, government, and social platforms. Pages were chosen for their high popularity and rich presence of repeated content structures such as product listings, tables, article feeds, or ranked items. All pages were saved in MHTML format to preserve layout and interactive elements.
The dataset spans a wide range of domains: approximately 63 pages belong to e-commerce and shopping platforms, 20 pages to entertainment and sports sites, 15 to finance and investing, 10 to news and media, and 10 to education and research portals. Additionally, there are pages from technology and development (7), travel and hospitality (7), real estate (8), jobs and careers (6), health and wellness (6), coupons and deals (6), government and public data (5), social and community (5), and food and dining (4), ensuring structural diversity.
Each page contains a varying number of data records, ranging from a handful (e.g., 3) to over nine hundred (e.g., 913 for a site like Indeed.com). This distribution captures a broad spectrum of extraction scenarios—from simple, templated layouts to deeply nested or sparse structures. This dataset serves as a realistic and challenging benchmark for evaluating both traditional and LLM-based web data record extractors.
# 5.3 Results
The experimental results highlight the significant impact of input representation on the performance of LLM-based web data extractors. Table 1 details the average token counts for the different input types. Hierarchical JSON is the most token-efficient, with an average of 34,107 tokens. Slimmed HTML follows with 86,084 tokens, while Flat JSON is the least token-efficient, requiring an average of 116,698 tokens. This underscores the varying textual complexities of each representation.

Table 1: Average token counts for different input types.

| Input type | Avg. tokens |
| --- | --- |
| Hierarchical JSON | 34,107 |
| Slimmed HTML | 86,084 |
| Flat JSON | 116,698 |

Table 2: Extraction performance of non-visual methods.

| Method / Input | Precision | Recall | F1 | Hallucination Rate |
| --- | --- | --- | --- | --- |
| Gemini-2.5-pro-preview + Flat JSON | 0.9939 | 0.9392 | 0.9567 | 0.0305 |
| Gemini-2.5-pro-preview + Hierarchical JSON | 0.4932 | 0.3802 | 0.4048 | 0.5976 |
| Gemini-2.5-pro-preview + Slimmed HTML | 0.1217 | 0.0969 | 0.1014 | 0.9146 |
| MDR (Full/Slimmed HTML) | 0.0746 | 0.1593 | 0.0830 | 0.0000 |
Table 2 presents the extraction performance for non-visual methods. Gemini-2.5-pro-preview, when provided with Flat JSON input, achieves the highest performance with a precision of 0.9939, recall of 0.9392, and an F1 score of 0.9567. This strong performance is attributed to the unambiguous localization of fields offered by Flat JSON (as described in Section 4.3) and a remarkably low hallucination rate of 0.0305. The reduced hallucination stems from the relative ease with which the LLM can infer structural information and XPath-like paths from JSON compared to HTML. However, it is noted that for URLs with deep hierarchical structures, hallucination can still occur in approximately $3 \%$ of cases with Flat JSON.
When using Hierarchical JSON, which mirrors the DOM tree structure to aid the model in understanding parent-child relationships (Section 4.3), Gemini-2.5-pro-preview achieves a precision of 0.4932, recall of 0.3802, and an F1 score of 0.4048, with a hallucination rate of 0.5976. While not as performant as Flat JSON, this input type still offers a significant reduction in hallucination compared to HTML-based inputs, as the JSON format provides a clearer structural representation for the LLM.
In contrast, Gemini-2.5-pro-preview’s performance with Slimmed HTML input (0.1217 P, 0.0969 R, 0.1014 F1) is considerably lower. This is largely due to a very high hallucination rate of 0.9146. Directly parsing complex web structures and inferring XPaths from HTML, even a compressed version like Slimmed HTML (see Section 4.3), proves challenging for the LLM, leading to frequent hallucinations of invalid elements.
The traditional baseline, MDR, operating on Full or Slimmed HTML, shows a precision of 0.0746, recall of 0.1593, and an F1 score of 0.0830. Notably, MDR exhibits no hallucination (0.0000) due to its intrinsic design. However, its low precision and recall stem from its tendency to extract all potential data records, thereby reducing precision, and its inability to consistently identify and extract exact data records or all their similar counterparts comprehensively, which lowers recall. These characteristics suggest that MDR’s output may require significant additional post-processing to refine the extracted data.
These findings collectively underscore the critical role of input representation and careful prompt engineering in unlocking the full potential of LLMs for complex tasks such as web data record extraction. The choice of input format directly influences not only the model’s accuracy but also its propensity for hallucination, with structured formats like JSON offering more reliable pathways for information retrieval.
Furthermore, to facilitate broader research and ensure data privacy, we developed a synthetic dataset. This dataset was constructed by systematically transforming the DOM structures and modifying the textual content of the original web pages. The creation of this synthetic dataset enables the public dissemination of a diverse and challenging benchmark for web data extraction tasks. The detailed experimental results on this synthetic dataset are included in the supplementary material, and the dataset itself will be made publicly available.

# Abstract

Effective evaluation of web data record extraction methods is crucial, yet hampered by static, domain-specific benchmarks and opaque scoring practices. This makes fair comparison between traditional algorithmic techniques, which rely on structural heuristics, and Large Language Model (LLM)-based approaches, which offer zero-shot extraction across diverse layouts, particularly challenging. To overcome these limitations, we introduce a concrete evaluation framework. Our framework systematically generates evaluation datasets from arbitrary MHTML snapshots, annotates XPath-based supervision labels, and employs structure-aware metrics for consistent scoring, specifically preventing text hallucination and allowing only for the assessment of positional hallucination. It also incorporates preprocessing strategies to optimize input for LLMs while preserving DOM semantics: HTML slimming, Hierarchical JSON, and Flat JSON. Additionally, we created a publicly available synthetic dataset by transforming DOM structures and modifying content. We benchmark deterministic heuristic algorithms and off-the-shelf LLMs across these multiple input formats. Our benchmarking shows that Flat JSON input enables LLMs to achieve superior extraction accuracy (F1 score of 0.9567) and minimal hallucination compared to other input formats like Slimmed HTML and Hierarchical JSON. We establish a standardized foundation for rigorous benchmarking, paving the way for principled advancements in web data record extraction.

Subject categories: cs.DB, cs.AI, cs.IR
# 1. Introduction
Recent advances in neural rendering have yielded impressive capabilities for novel-view synthesis from posed RGB images. Among these, 3D Gaussian Splatting (3DGS) [KKLD23] has garnered significant attention due to its remarkable ability to reconstruct radiance fields rapidly and render high-quality novel views in real time. Its computational efficiency and rendering quality have spurred numerous downstream research efforts aimed at further refining and extending its capabilities [XHL*24, YCH*24, WWZX24]. 3DGS represents scenes using a collection of 3D Gaussians, optimizing their parameters (position, shape, opacity, and view-dependent color) through differentiable rendering. However, the 3DGS formulation lacks explicit mechanisms to enforce multiview geometric consistency. Consequently, 3D surfaces extracted from optimized Gaussian parameters using standard techniques like Poisson surface reconstruction [KH13] or truncated signed distance function (TSDF) fusion [CL96] often suffer from significant inaccuracies, noise, and a lack of detail [GL24, HYC*24].
To mitigate these geometric shortcomings, several methods enhancing Gaussian splatting have recently emerged [GL24, HYC*24, YSG24]. SuGaR [GL24] incorporates signed-distance-induced regularization, encouraging Gaussians to align with an underlying surface. 2D Gaussian Splatting (2DGS) [HYC*24] introduces a 2D Gaussian representation in the form of disks, which naturally approximate local surface patches as planes and enable multiview-consistent rendering. It also proposes single-view regularization terms for normal consistency and depth distortion to promote noise-free and smoother surfaces. Gaussian Opacity Fields (GOF) [YSG24] formulate the contribution of a Gaussian to a ray with explicit ray-Gaussian intersection, enabling the construction of an opacity field, and also utilize the two regularization terms from 2DGS. Despite these advances, approaches relying primarily on such single-view geometric regularization or implicit surface constraints remain limited in their ability to capture robust and accurate multiview-consistent geometry across diverse scenes.
Parallel to these rendering-focused methods, multiview stereo (MVS) has long been a fundamental technique for accurate 3D geometry reconstruction from posed images [SZPF16, CZYM24]. Conventional MVS algorithms typically employ patch-based matching between corresponding image regions, enforcing epipolar constraints, to estimate dense depth maps. These methods excel at producing accurate geometry in well-textured areas and demonstrate resilience to moderate lighting variations. However, they frequently struggle near object boundaries and in texture-less regions, resulting in noisy or incomplete depth estimates [CZYM24].
A key observation motivating our work is the complementary nature of MVS-derived geometry and Gaussian splatting reconstructions. MVS provides precise 3D points in regions rich with visual features but often yields noisy estimates near depth discontinuities or homogeneous areas. Conversely, 3DGS, with its per-point appearance modeling, adeptly captures complex view-dependent effects and can represent sharp object boundaries through the learned opacity and spatial distribution of its Gaussians. However, this very flexibility, particularly the per-point view-dependent appearance, can hinder the optimization process from converging to a geometrically consistent surface across all viewpoints. This challenge is especially pronounced in real-world scenes exhibiting significant appearance variations (e.g., due to lighting or specularity), where the optimization might prioritize fitting appearance over achieving geometric accuracy (Figure 2).
Figure 2: Complementary relationship between depths estimated by Multiview Stereo (MVS) [SZPF16] and depths rendered by GOF [YSG24]. MVS estimates accurate depths in well-textured regions but often produces noisy depths near object boundaries (top). In contrast, GOF effectively represents sharp object boundaries but may yield geometrically inaccurate surfaces, particularly in regions with view-dependent appearance variations such as lighting and specularity.
Leveraging this complementarity, we propose an effective multiview geometric regularization strategy for Gaussian Splatting that integrates the strengths of both MVS and 3DGS. Our goal is to achieve radiance fields that are accurate in both rendered appearance and underlying geometry. Crucially, our regularization is applied at both the initialization and optimization stages of the Gaussian splatting pipeline.
During optimization, we utilize MVS depth priors to guide the Gaussians towards the underlying scene surface. To this end, we introduce a median-depth-based multiview relative depth loss incorporating uncertainty from the rendering process itself. For each pixel ray, we compute the rendered median depth via accumulated transmittance thresholding, representing the current estimated surface location, and crucially, we also estimate the uncertainty of this rendered median depth based on the total accumulated transmittance of the ray. Our loss then encourages the positions of Gaussians contributing significantly near this rendered location to align with corresponding MVS depth estimates across relevant views. The strength of this alignment guidance is modulated by the certainty of the rendered median depth.
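A plausible per-ray sketch of the rendered median depth and its certainty follows. The paper describes the computation only at a high level, so the 0.5 threshold and the use of total accumulated weight as the certainty measure are assumptions consistent with the text:

```python
import numpy as np

def median_depth_and_certainty(weights, depths, tau=0.5):
    """weights: per-Gaussian blending weights along one ray (front-to-back order);
    depths: corresponding intersection depths t_i.
    Returns the depth at which the accumulated weight first exceeds tau, plus the
    total accumulated alpha U = sum_i w_i used as a certainty measure."""
    acc = np.cumsum(weights)
    certainty = float(acc[-1]) if len(acc) else 0.0
    idx = int(np.searchsorted(acc, tau))
    median = float(depths[idx]) if idx < len(depths) else None  # ray never reaches tau
    return median, certainty

d, u = median_depth_and_certainty([0.2, 0.4, 0.3], [1.0, 2.0, 3.0])
print(d, round(u, 3))  # 2.0 0.9
```

Rays whose accumulated weight never reaches the threshold (e.g., near silhouettes or in poorly covered regions) yield no median depth and a low certainty, so their MVS guidance is naturally down-weighted.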
Importantly, relying solely on geometric loss terms during optimization often struggles with suboptimal local minima when starting from sparse or inaccurate Gaussian positions. To mitigate this, we propose a robust MVS-guided initialization procedure. By utilizing geometric information derived from MVS [SZPF16], we establish a strong, geometrically-aware initial state for the Gaussians, significantly improving the final reconstruction quality (Figure 4 (f)).
Furthermore, to promote multiview-consistent appearance, we incorporate multiview RGB photometric losses. We also extend the normal consistency and depth distortion regularizers, previously used in single-view contexts [HYC*24], into multiview formulations that explicitly enforce geometric smoothness and agreement across multiple viewpoints. These geometric regularizers play a crucial role especially in homogeneous areas where the MVS depth prior may be noisy or missing, thus complementing the MVS guidance.
Our contributions include: (1) an MVS-guided initialization strategy tailored for robust Gaussian splatting optimization; (2) a median-depth-based multiview relative depth loss incorporating uncertainty for optimization-time geometric regularization; and (3) multiview extensions of geometric consistency losses for faithful and smooth geometry reconstruction. Through extensive experiments on diverse and challenging datasets, including Mip-NeRF360 [BMV*22], DTU [JDV*14], and Tanks and Temples [KPZK17], we demonstrate that our multiview geometric regularization strategy is highly effective. Our evaluations show that our approach outperforms existing state-of-the-art models in both novel-view synthesis quality and, critically, 3D surface reconstruction accuracy.
# 2. Related Work
# 2.1. Novel View Synthesis
Conventional Methods Traditional novel view synthesis often relied on explicit 3D reconstruction methods [SCD*06, SSS06, GSC*07, SF16]. These methods struggled with complex geometries, non-Lambertian materials, and realistic rendering of complex visual effects, including reflections and transparency, and often required dense views.
Neural Radiance Fields Mildenhall et al. [MST*21] first introduced Neural Radiance Fields (NeRF), utilizing neural networks such as multi-layer perceptrons (MLPs) to directly learn a continuous volumetric scene representation from a limited set of input views. This implicit scene representation achieved state-of-the-art results in synthesizing photorealistic novel views of complex scenes that conventional methods often fail to represent. Inspired by these promising results, subsequent studies have extensively improved NeRF by addressing key limitations in training speed, efficiency, and rendering quality. Mip-NeRF360 [BMV*22] effectively handled unbounded scenes and rendering aliasing, improving rendering quality at a modest increase in training time. Instant-NGP [MESK22] drastically improved training and rendering speed using multi-resolution hash encodings. Others focused on compression (MERF [RSV*23]) or converting implicit fields to explicit ones for faster rendering (SNeRG [HSM*21], BakedSDF [YHR*23]). Despite these advances, NeRF-based methods often have high computational costs, motivating alternatives like 3DGS.
Gaussian Splatting Kerbl et al. [KKLD23] recently introduced 3D Gaussian Splatting (3DGS) for fast training and real-time rendering of radiance fields. This technique represents scenes as a collection of millions of 3D anisotropic Gaussians, which are rendered using a splatting-based rasterizer. Numerous follow-up works have emerged to address the limitations of 3DGS and enhance its performance. For instance, Mip-Splatting [YCH*24] addressed aliasing and frequency artifacts by introducing 3D smoothing filters and mipmap-style anti-aliasing. Another research line, including Compressed-3DGS [NSW24] and LightGaussian [FWW*24], has focused on reducing the excessive number of Gaussians to make the method suitable for network streaming and low-powered mobile devices. Scene representation has also been an active area of research, with proposals for variants like 2D Gaussians [DXX*24, HYC*24] and hybrid representations [YLX*24, LYX*24]. Furthermore, generalizable Gaussian Splatting models have recently been investigated [CLTS24, CXZ*24]. These models can directly predict Gaussian Splatting parameters for radiance field representation without the need for per-scene optimization. However, such methods are primarily designed for sparse-view settings and do not achieve the high-fidelity rendering demonstrated by per-scene optimization techniques. Our method falls into the per-scene optimization category and aims for high performance. To achieve this, we leverage multiview stereo (MVS) depths for effective geometric regularization within the Gaussian Splatting optimization process.
The utilization of depth priors is not a new concept. Before the advent of 3DGS, point-based rendering methods commonly used 3D points derived from MVS depth images as proxy geometry [KPLD21, KLR*22, RK21, RK20]. However, 3DGS has shown superior rendering quality and efficiency over these point-based methods. Recently, a few works have revisited the use of MVS depth priors to improve the robustness of 3DGS in sparse-view settings [WLZ*25, SLC*24]. Concurrent with our work, Li et al. [LHH25] utilize monocular depth priors, instead of MVS depth, to better handle weakly textured regions in 3DGS.
# 2.2. Neural Surface Reconstruction
Neural surface reconstruction approaches are largely divided into two categories: reconstruction from point clouds and reconstruction from multi-view images. This section focuses on methods based on multi-view images, as they are most relevant to our work, rather than those based on point clouds [MHLZ20, MLZH22, EGO*20].
Implicit Surface Representation Implicit surface reconstruction methods represent geometry as continuous fields without relying on explicit primitives like meshes. Methods such as NeuS [WLL*21] and VolSDF [YGKL21] leveraged neural implicit representations based on signed distance functions to reconstruct detailed surface geometries. Wang et al. [WHH*23] significantly accelerated neural implicit surface reconstruction using multi-resolution hash encodings and CUDA parallelization. Oechsle et al. [OPG21] unified implicit surface models with neural radiance fields, enabling accurate reconstruction without masks through combined volume and surface rendering. Fu et al. [FXOT22] explicitly imposed multi-view geometric constraints on neural implicit surfaces, significantly improving geometry consistency and reconstruction quality for both thin structures and smooth regions. Neuralangelo [LME*23] leverages multi-resolution 3D hash grids with a progressive, coarse-to-fine optimization strategy and numerical gradient computation to reconstruct highly detailed and large-scale surfaces. Despite their accuracy, these implicit methods often require extensive optimization times.
Explicit Surface Representation Explicit surface reconstruction methods directly define and optimize geometric primitives. Several recent approaches adapt 3D Gaussian Splatting for this purpose by encouraging the Gaussians to conform to an underlying surface. For example, SuGaR [GL24] incorporates a signed-distance-induced regularization to promote this alignment. Other methods modify the Gaussians themselves for better geometric representation. 2DGS [HYC*24] and GaussianSurfel [DXX*24] flatten 3D Gaussians into 2D counterparts, often with additional regularization terms to enhance geometric fidelity. To handle large-scale, unbounded scenes, GOF [YSG24] constructs a Gaussian opacity field on a tetrahedral grid, enabling efficient and high-quality mesh extraction. Building on these explicit frameworks, we propose a novel regularization strategy that leverages MVS depth priors to further advance geometric accuracy. This approach significantly improves both geometry and rendering quality, particularly in challenging scenes with substantial view-dependent color variations.
# 3. Overview
The goal of our framework is to reconstruct high-fidelity, geometrically accurate radiance fields represented by 3D Gaussians, given a set of multiview RGB images with corresponding camera poses, typically estimated by Structure-from-Motion (SfM) [SF16]. As a preprocessing step, we first estimate dense depth maps for input RGB images using a conventional PatchMatch-based multiview stereo (MVS) method and then filter unreliable initial depths using heuristics based on multiview geometric consistency [SZPF16]. We also determine triplets of adjacent viewpoints by leveraging image feature correspondences from SfM, which are utilized during our multiview optimization stage. Our framework builds upon the representation of Gaussian Opacity Fields (GOF) [YSG24] and its differentiable rendering formulation for Gaussian parameter optimization. Our multiview geometric regularization strategy plays a key role in both Gaussian parameter initialization and optimization to achieve accurate and smooth geometry reconstruction while maintaining high-quality appearance modeling.
Finally, the optimized Gaussian parameters are used to render high-quality novel views and can also produce median depth maps or opacity fields. These outputs can then be utilized with techniques like Truncated Signed Distance Function (TSDF) fusion [CL96] and Marching Tetrahedra [YSG24] to extract a high-quality mesh. We first briefly review the GOF model in Section 4 and then elaborate on our proposed multiview geometric regularization strategy in Section 5.
# 4. Gaussian Splatting Model
Representation Gaussian Opacity Fields (GOF) [YSG24] builds upon the framework of 3D Gaussian Splatting (3DGS) [KKLD23] by explicitly incorporating ray-tracing-based volume rendering. Like 3DGS, the scene is represented by a collection of 3D Gaussians $\{\mathcal{G}_k\}$, where the parameters of a 3D Gaussian consist of a center $\mathbf{p}_k \in \mathbb{R}^3$, a scaling matrix $\mathbf{S}_k \in \mathbb{R}^{3 \times 3}$, and a rotation $\mathbf{R}_k \in \mathbb{R}^{3 \times 3}$. The key difference lies in the methodology for evaluating the contribution of a Gaussian to pixels in the volume rendering equation. While 3DGS calculates the contribution of a 3D Gaussian onto a pixel via a 2D Gaussian by projecting the 3D Gaussian onto the image, GOF directly computes the contribution of a 3D Gaussian for a pixel ray by analytically finding the maximum of the Gaussian values evaluated along the ray. Consequently, the contribution $\mathcal{E}(\mathcal{G}_k, \mathbf{o}, \mathbf{r})$ for a pixel ray $\mathbf{o} + t\mathbf{r}$ is incorporated into the volume rendering equation as follows:
$$
\begin{aligned}
\mathbf{c}(\mathbf{o}, \mathbf{r}) &= \sum_{k=1}^{K} \mathbf{c}_k \, \alpha_k \, \mathcal{E}(\mathcal{G}_k, \mathbf{o}, \mathbf{r}) \, T_k, \\
T_k &= \prod_{j=1}^{k-1} \left( 1 - \alpha_j \, \mathcal{E}(\mathcal{G}_j, \mathbf{o}, \mathbf{r}) \right),
\end{aligned}
$$
where $\alpha_k \in [0, 1]$ modulates Gaussian opacity globally, $\mathbf{c}_k$ is the view-dependent color modeled via spherical harmonics, and $T_k$ is the accumulated transmittance.
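The blending above can be sketched directly as a toy per-ray loop, assuming the Gaussians are already sorted front to back and their ray contributions $\mathcal{E}$ are precomputed:

```python
import numpy as np

def composite(colors, alphas, contribs):
    """Front-to-back volume rendering for one pixel ray.
    colors: (K, 3) view-dependent colors c_k; alphas: (K,) global opacities;
    contribs: (K,) ray-Gaussian contributions E(G_k, o, r)."""
    c = np.zeros(3)
    T = 1.0                      # accumulated transmittance T_k
    for c_k, a_k, e_k in zip(colors, alphas, contribs):
        w = a_k * e_k * T        # blending weight of this Gaussian
        c += w * np.asarray(c_k, dtype=float)
        T *= 1.0 - a_k * e_k
    return c, 1.0 - T            # rendered color and accumulated alpha

color, acc = composite([[1, 0, 0], [0, 1, 0]], [0.8, 1.0], [1.0, 1.0])
print(color, acc)  # rendered color [0.8, 0.2, 0.0], accumulated alpha 1.0
```

Note that the running product for $T_k$ is exactly the loop's `T` before each Gaussian is processed, so an opaque Gaussian (here the second one, with $\alpha \mathcal{E} = 1$) terminates the ray.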
Loss Like 3DGS, GOF optimization initiates from a sparse SfM point cloud and minimizes the following objective:
$$
\mathcal{L} = \mathcal{L}_c + \lambda_d \mathcal{L}_d + \lambda_n \mathcal{L}_n,
$$
where $\mathcal{L}_c$ is the RGB reconstruction loss [KKLD23], and a depth distortion loss $\mathcal{L}_d$ and a normal consistency loss $\mathcal{L}_n$ serve as geometric regularization terms [HYC*24]. The depth distortion loss is defined as $\mathcal{L}_d = \sum_{i,j} \omega_i \omega_j |t_i - t_j|$, where indices $i, j$ run over Gaussians contributing to a pixel ray $\mathbf{o} + t\mathbf{r}$, and $\omega_i = \alpha_i \mathcal{E}(\mathcal{G}_i, \mathbf{o}, \mathbf{r}) T_i$ is the blending weight of the $i$-th Gaussian. Here, $t_i$ denotes the intersection depth of $\mathcal{G}_i$ with the pixel ray. The normal consistency loss aligns the gradient of the rendered depth with the normal of the 3D Gaussian and is defined as $\mathcal{L}_n = \sum_i \omega_i \left( 1 - \mathbf{n}_i^\top \mathbf{N} \right)$, where the index $i$ runs over Gaussians intersected along the ray, $\mathbf{N}$ is the normal estimated from the gradient of the rendered depth [HYC*24], and $\mathbf{n}_i$ is the normal vector at the ray-Gaussian intersection plane. For further details, refer to the GOF paper [YSG24].
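Both per-ray regularizers can be sketched in a few lines. This is a naive $O(K^2)$ version of the distortion term; shapes and variable names are illustrative, not taken from any released implementation:

```python
import numpy as np

def depth_distortion(w, t):
    """L_d = sum_{i,j} w_i * w_j * |t_i - t_j| over Gaussians on one ray."""
    w, t = np.asarray(w, float), np.asarray(t, float)
    return float(np.sum(w[:, None] * w[None, :] * np.abs(t[:, None] - t[None, :])))

def normal_consistency(w, normals, N):
    """L_n = sum_i w_i * (1 - n_i . N), with n_i and N unit vectors."""
    w = np.asarray(w, float)
    normals = np.asarray(normals, float)
    return float(np.sum(w * (1.0 - normals @ np.asarray(N, float))))

print(depth_distortion([0.5, 0.5], [1.0, 2.0]))                       # 0.5
print(normal_consistency([1.0], [[0.0, 0.0, 1.0]], [0.0, 0.0, 1.0]))  # 0.0
```

The distortion term vanishes when all contributing Gaussians share the same depth, which is why minimizing it concentrates the blending weights onto a thin surface.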
# 5. Geometric Regulation of Gaussian Splatting Optimization
# 5.1. Geometric Regularization during Optimization
Relative Depth Loss As discussed in Section 1, the geometry reconstructed by Multiview Stereo (MVS) and Gaussian Splatting are complementary (Figure 2). However, MVS depth estimates often remain noisy even after applying geometric constraint-based heuristic filtering. Consequently, MVS depths cannot be reliably treated as ground truth due to residual noise. This necessitates methods to effectively handle these potentially inaccurate MVS priors.
Our observations indicate that while optimized Gaussian positions may not be precisely aligned with the underlying surface, they are typically distributed in proximity to it. Based on this premise, we utilize the rendered depth as a reference to facilitate the identification and rejection (via thresholding) of potentially erroneous MVS depth values. Specifically, our single-view relative depth loss, which incorporates the MVS depth prior, is defined as:
Figure 3: Visual comparison of novel view synthesis quality on the Mip-NeRF360 'room' and 'garden' scenes [BMV*22]. Our method faithfully reconstructs the appearance of scenes, compared to other state-of-the-art approaches (3DGS [KKLD23], 2DGS [HYC*24], GOF [YSG24]).
$$
\mathcal { L } _ { r e l } = \bigg | 1 - \frac { \mathbf { D } _ { m v s } } { \mathbf { D } _ { r } } \bigg | \cdot \mathbf { U } \cdot \mathbb { I } \big ( | \mathbf { D } _ { r } - \mathbf { D } _ { m v s } | < s \cdot \mathbf { D } _ { r } \big ) ,
$$
where $\mathbf{D}_{mvs}$ is the MVS depth, $\mathbf{D}_r$ is the rendered depth, $s$ is a hyperparameter for the thresholding tolerance (which is typically annealed from a less restrictive to a more restrictive value during optimization), $\mathbb{I}$ is the indicator function yielding 1 if the condition holds and 0 otherwise, and $\mathbf{U}$ represents the rendering certainty, defined as the accumulated alpha $\mathbf{U} = \sum_i \omega_i$. Owing to its relative formulation, this loss exhibits high sensitivity to errors at closer depths.
For computing the rendered depth $\mathbf{D}_r$, a straightforward approach is to calculate the mean depth using the volume rendering weights $\omega_i$ and the depths $t_i$ corresponding to Gaussian contributions. However, the mean depth computation can lead to inaccurate estimates, particularly for semi-transparent or transparent objects. Even with opaque objects, the mean depth necessarily introduces artifacts analogous to "flying pixels" near object boundaries. Furthermore, reliance on the mean depth, calculated using potentially unstable weights $\omega_i$ during optimization, can harm reconstruction quality (Figure 4 (d)). Therefore, we adopt the median depth, which effectively mitigates the aforementioned difficulties. Following [HYC*24], we compute the median depth as the largest depth value $t$ considered visible, employing $T_i > 0.5$ as the threshold differentiating the surface from free space. Its definition is $D_r = \max\{ t_i \mid T_i > 0.5 \}$.
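As a concrete sketch (illustrative values; function and variable names are ours), the median depth and the relative depth loss of Eq. (3) for a single pixel can be written as:

```python
import numpy as np

def median_depth(t, T):
    # D_r = max{ t_i | T_i > 0.5 }: the largest depth still considered
    # visible, with transmittance 0.5 separating surface from free space
    visible = t[T > 0.5]
    return float(visible.max()) if visible.size else 0.0

def relative_depth_loss(d_r, d_mvs, U, s):
    # L_rel = |1 - D_mvs/D_r| * U * 1(|D_r - D_mvs| < s * D_r)
    inlier = abs(d_r - d_mvs) < s * d_r  # threshold out inconsistent MVS depth
    return abs(1.0 - d_mvs / d_r) * U * float(inlier)

t = np.array([1.0, 1.5, 2.0, 4.0])    # Gaussian depths along the ray
T = np.array([0.95, 0.8, 0.6, 0.2])   # accumulated transmittance T_i
d_r = median_depth(t, T)
print(d_r)                                           # -> 2.0
print(relative_depth_loss(d_r, 1.9, U=0.9, s=0.15))  # small penalty
print(relative_depth_loss(d_r, 3.5, U=0.9, s=0.15))  # rejected -> 0.0
```

Note how the last Gaussian (with low transmittance, i.e., behind the surface) is ignored by the median, and how an MVS depth far from the rendered depth is gated to zero by the indicator.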
Multiview Geometric Regularization While enforcing the median-depth-based relative depth loss (Eq. (3)) at a single viewpoint during optimization substantially improves geometric fidelity, this single-view application may not effectively regularize all scene regions. This limitation can arise because the optimization gradient primarily affects the parameters of Gaussians contributing to the median depth calculation in that specific view. To address this limitation, we extend the single-view loss Eq. (3) to a multiview formulation, thereby enabling parameter updates informed by information from multiple viewpoints simultaneously within each optimization iteration. For similar reasons, we extend the normal consistency $( { \mathcal { L } } _ { n } )$ and depth distortion $( \mathcal { L } _ { d } )$ losses (defined in Section 4) to multiview versions. Finally, to promote multiview consistent appearance, we incorporate multiview RGB photometric losses $( \mathcal { L } _ { c } )$ . Consequently, our final objective function is formulated as:
$$
\mathcal { L } = \sum _ { \nu } ( \mathcal { L } _ { c } ^ { \nu } + \lambda _ { r e l } \mathcal { L } _ { r e l } ^ { \nu } + \lambda _ { d } \mathcal { L } _ { d } ^ { \nu } + \lambda _ { n } \mathcal { L } _ { n } ^ { \nu } )
$$
# 5.2. MVS-guided initialization
Relying solely on geometric constraints during optimization can be insufficient, as sparse or inaccurate initial Gaussian placements may lead the optimization process into suboptimal local minima. This issue is particularly pronounced in image regions exhibiting high appearance variation (Figure 4 (e)). Therefore, we introduce a robust MVS-guided initialization scheme to establish a more favorable starting point.
First, we aggregate the filtered multiview depth maps (obtained during preprocessing, see Section 3) from an MVS method [SZPF16] into a unified 3D point cloud. As the resulting point cloud is typically extremely dense and contains redundancy, we apply multi-voxel grid filtering to achieve a target point count $K ^ { \prime }$ . Specifically, we construct a voxel grid with an initial voxel size $l$ . From the points contained within each non-empty voxel, we randomly sample one point. We then check if the resulting filtered point count $\left| P \right|$ is less than or equal to the target $K ^ { \prime }$ . If so, the filtering terminates. Otherwise, the voxel size is increased $l \gets 1 . 5 l$ and the sampling process is repeated until the point count $\left| P \right|$ is less than or equal to the target $K ^ { \prime }$ .
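A minimal numpy implementation of this multi-voxel grid filtering loop (function and variable names are ours; the 1.5x growth factor follows the text) might look like:

```python
import numpy as np

def voxel_filter(points, K_target, l, growth=1.5, seed=0):
    """Keep one randomly sampled point per non-empty voxel, growing the
    voxel size until the filtered count |P| <= K_target."""
    rng = np.random.default_rng(seed)
    while True:
        order = rng.permutation(len(points))   # random representative per voxel
        keys = np.floor(points[order] / l).astype(np.int64)
        _, first = np.unique(keys, axis=0, return_index=True)
        kept = points[order[first]]
        if len(kept) <= K_target:
            return kept, l
        l *= growth                            # voxel size update: l <- 1.5 l

pts = np.random.default_rng(1).uniform(0.0, 1.0, size=(10000, 3))
filtered, final_l = voxel_filter(pts, K_target=500, l=0.05)
print(len(filtered) <= 500)  # -> True
```

The loop always terminates: once the voxel size exceeds the point cloud's extent, a single voxel contains all points.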
Although this filtering removes redundancy, the point cloud $P$ may still contain noisy points originating from MVS estimation errors. To mitigate the impact of these potential outliers, we adapt the opacity-based pruning mechanism inspired by the adaptive density control (ADC) of the original 3DGS [KKLD23]. Gaussian splatting optimization exhibits a tendency to rapidly decrease the opacity $\alpha_k$ of Gaussians that do not contribute positively to reconstructing the scene's appearance (i.e., those hindering the reduction of the photometric loss). We leverage this tendency specifically for noise removal during an initial optimization phase. For a predefined number of iterations, $N_{prun}$, we freeze the Gaussian centers and optimize all other parameters. During this phase, Gaussians whose opacity $\alpha_k$ falls below a specified threshold $\tau$ are removed. Following this initial pruning phase (after $N_{prun}$ iterations), we unfreeze all parameters and proceed with the standard optimization procedure, incorporating the full ADC mechanisms (densification and splitting) from 3DGS as needed.
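The pruning step itself amounts to a simple filter applied during the warm-up iterations. A schematic sketch (the dict-based representation and the threshold value 0.05 are ours, for illustration only):

```python
def prune_low_opacity(gaussians, tau=0.05):
    # remove Gaussians whose opacity alpha_k fell below tau during the
    # warm-up phase (centers frozen, all other parameters optimized)
    return [g for g in gaussians if g["alpha"] >= tau]

# After the warm-up, Gaussians spawned from noisy MVS points tend to
# have had their opacity driven down by the photometric loss:
gaussians = [{"id": 0, "alpha": 0.9},
             {"id": 1, "alpha": 0.01},  # noisy point, opacity collapsed
             {"id": 2, "alpha": 0.4}]
print([g["id"] for g in prune_low_opacity(gaussians)])  # -> [0, 2]
```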
This MVS-guided initialization procedure provides a much stronger and geometrically informed starting point for the main optimization, resulting in improved final geometry and appearance reconstruction quality.
# 6. Experiments
# 6.1. Implementation Details
We configured the hyperparameters as follows. The loss weights were set to $\lambda_{rel} = 1$ for our relative depth loss, $\lambda_d = 100$ for the depth distortion loss, and $\lambda_n = 0.05$ for the normal consistency loss. For the MVS-guided initialization, the initial pruning phase duration was set to $N_{prun} = 2000$ iterations, the target point count after voxel filtering was $K' = 6\mathrm{M}$, and the initial voxel size was $l = 0.005$ meters. The thresholding hyperparameter $s$ in Eq. (3) was initialized to 0.15, annealed to 0.1 at 7,000 iterations, and further reduced to 0.05 at 20,000 iterations. The same hyperparameters are used across all experiments.
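Assuming a simple step schedule (the text gives the breakpoint values but not the interpolation), the annealing of $s$ can be sketched as:

```python
def threshold_s(iteration):
    # tolerance s in Eq. (3): 0.15 initially, 0.1 from iteration 7,000,
    # 0.05 from iteration 20,000 onward (step schedule assumed)
    if iteration < 7000:
        return 0.15
    if iteration < 20000:
        return 0.10
    return 0.05

print(threshold_s(0), threshold_s(10000), threshold_s(25000))  # -> 0.15 0.1 0.05
```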
# 6.2. Analysis
Computational Cost While the baseline GOF [YSG24] performs volume rendering from a single viewpoint per iteration, our multiview losses require rendering from three viewpoints (corresponding to adjacent view triplets identified during preprocessing). Our method also requires additional computation time for estimating MVS depth from input RGB images. Additionally, our method commences optimization with a denser and more reliable set of initial points due to MVS guidance and tends to retain a larger number of Gaussians to capture fine geometric details, facilitated by our multiview relative depth loss. This increased Gaussian count leads to a slight increase in optimization time compared to the baseline. We report average computation times on the DTU [JDV*14] and Tanks and Temples [KPZK17] datasets in Tables 1 and 2, respectively.
Mean Depth vs. Median Depth We evaluate the impact of using either mean depth or median depth for the rendered depth term ${ \bf D } _ { r }$ within our relative depth loss formulation (Eq. (3)). As discussed in Section 5, utilizing the rendered mean depth suffers from several potential caveats: It can yield inaccurate estimates for semi-transparent objects and is sensitive to unstable rendering weights during optimization. Consequently, employing the mean depth within our relative depth loss leads to significantly degraded geometric reconstruction quality and even adversely affects novel view synthesis performance, as demonstrated in Figure 4 (d). In contrast, the median depth-based relative depth loss effectively regularizes the Gaussians, resulting in substantially improved geometric accuracy (Figure 4 (e)).
Effect of Multiview RGB Loss Employing multiview RGB photometric losses $( \mathcal { L } _ { c } ^ { \nu } )$ primarily enhances novel view synthesis performance by enforcing appearance consistency across views. However, as shown in Figure 4 (a) and its accompanying table, this component does not improve the accuracy of the geometric reconstruction.
Effect of Multiview Relative Depth Loss Extending our relative depth loss to a multiview formulation $( \mathcal { L } _ { r e l } ^ { \nu } )$ allows the optimization gradient to influence the parameters of more Gaussians simultaneously within a single iteration. This enhances the regularization effect, leading to less noisy and smoother surface reconstructions compared to applying the loss only from a single view, as evidenced in Figure 4 (c, e) and Figure 5 (c, d).
Effect of Multiview Normal and Depth Distortion Losses As demonstrated by specific regions (e.g., the sofa and floor in Figure 4 (e, f)), the multiview extension of the geometric consistency losses enhances mesh smoothness and fidelity.
Effect of MVS-guided Initialization Owing to our robust MVSguided initialization (Section 5), the optimization process starts from a strong geometric foundation. This enables the successful reconstruction of challenging geometry, such as the highly specular floor regions in Figure 4 (e, f), and the upper region of the skull in Figure 5 (d, e). It is noteworthy that using this initialization procedure alone, without the regularization of our relative depth loss, still results in noisy and inaccurate surface reconstruction (Figure 4 (b) and Figure 5 (b)), highlighting the synergy between the components of our method.
# 6.3. Comparison
We compare our framework against various state-of-the-art methods employing different scene representations, including implicit functions [MST*21, YGKL21, WLL*21, WHH*23, LME*23, HPP*18, MESK22, RSV*23, BMV*22], meshes [CFHT23, YHR*23, GL24, RGS*24], and point-based approaches [KKLD23, DXX*24, HYC*24, YSG24, YCH*24], evaluating performance on both novel view synthesis and 3D surface reconstruction tasks.
Datasets Experiments were conducted on DTU [JDV*14], Tanks and Temples (TnT) [KPZK17], and Mip-NeRF360 [BMV*22]. DTU (15 scenes, 49/69 views, 1600×1200) and TnT were used for geometry evaluation, while Mip-NeRF360 was used for novel view synthesis evaluation following the 3DGS protocol [KKLD23].
Figure 4: Ablation study on the impact of our proposed components on the Mip-NeRF360 'room' scene [BMV*22]. We demonstrate the effect of adding our proposed components to the baseline GOF framework [YSG24] on geometry and Novel View Synthesis (NVS). The inset table provides quantitative NVS metrics on the test set of the Mip-NeRF360 'room' scene. (a) the baseline+multiview RGB loss; (b) (a)+MVS-guided initialization; (c) (a)+single-view relative depth loss; (d) (a)+multiview relative depth loss+mean depth; (e) (a)+multiview relative depth loss; (f) (e)+MVS-guided initialization+multiview normal and depth distortion losses (our full method).
Table 1: Quantitative comparison of geometry reconstruction on the DTU dataset [JDV∗14] using Chamfer distance (mm, lower better). The rightmost columns show the mean Chamfer distance across all scenes and average training times (minutes). α is $\sim 4 m$ , denoting the average time elapsed for estimating MVS depth from RGB images. Results are color-coded by rank: 1st , 2nd , and 3rd .
Table 2: Quantitative evaluation of 3D reconstructions on the Tanks and Temples dataset [KPZK17]. Results show F1 scores and training times for the compared methods. α is ~35m, denoting the average time elapsed for estimating MVS depth from RGB images. Higher F1 scores indicate better performance. Results are color-coded by rank: 1st, 2nd, and 3rd.
Geometry Reconstruction We employ surface reconstruction procedures tailored to the scale of the target scenes. For the object-scale scenes within the DTU dataset, we render median depth images from the optimized Gaussians at the training viewpoints, perform TSDF fusion [CL96, ZPK18] on these depth maps, and finally extract a mesh from the resulting signed distance field using the Marching Cubes algorithm [LC98]. For the large-scale scenes present in the Mip-NeRF360 and Tanks and Temples (TnT) datasets, we first generate an opacity field from the optimized Gaussians and then extract a mesh using the Marching Tetrahedra algorithm, following [YSG24].
We compare our geometric results against methods based on implicit [MST*21, YGKL21, WLL*21, WHH*23, LME*23] and explicit [KKLD23, GL24, DXX*24, HYC*24, YSG24] representations. As reported in Table 1 and shown in Figures 6 and 7, our method achieves state-of-the-art performance among explicit representation-based approaches, and our results are highly comparable to Neuralangelo [LME*23], the best-performing implicit method. Note that, as shown in Table 2, our method outperforms Neuralangelo on the large-scale scenes in the TnT dataset.
Figure 5: Ablation study evaluating the impact of our proposed components on the Tanks and Temples (TnT) [KPZK17] and DTU [JDV*14] datasets. We start with the GOF framework [YSG24] as a baseline and progressively add our components to demonstrate their effects on geometry reconstruction. The configurations shown are: (a) the baseline+multiview RGB loss; (b) (a)+MVS-guided initialization; (c) (a)+single-view relative depth loss; (d) (a)+multiview relative median depth loss; (e) (d)+MVS-guided initialization+multiview normal and depth distortion losses (our full method). The inset table reports F1 score and Chamfer Distance (CD) results averaged across all scenes from the DTU and TnT datasets.
Table 3: Quantitative comparison of novel view synthesis quality on the Mip-NeRF360 dataset [BMV*22]. Results are color-coded by rank: 1st, 2nd, and 3rd.
Novel View Synthesis We evaluate our method for novel view synthesis on the Mip-NeRF360 dataset, comparing it against state-of-the-art approaches that use implicit functions (NeRF [MST*21], Deep Blending [HPP*18], Instant NGP [MESK22], MERF [RSV*23], Mip-NeRF360 [BMV*22]), meshes (Mobile-NeRF [CFHT23], BakedSDF [YHR*23], SuGaR [GL24], BOG [RGS*24]), and point-based representations (3DGS [KKLD23], Mip-Splatting [YCH*24], 3DGS-MCMC [KRS*24], 2DGS [HYC*24], GOF [YSG24]). As presented in Table 3 and illustrated in Figure 3, our method achieves the second-best overall performance. It is surpassed only by 3DGS-MCMC [KRS*24], a method that aims mainly for appearance fidelity, without addressing accurate geometric reconstruction. Our analysis (Figure 4) indicates that our regularization strategy, designed to promote high-fidelity and smooth geometry, incurs only a minor decrease in novel view synthesis quality. Consequently, our method offers a compelling trade-off, significantly improving geometric quality while largely maintaining the high-fidelity appearance reconstruction characteristic of Gaussian Splatting.

# Abstract

Recent methods, such as 2D Gaussian Splatting and Gaussian Opacity Fields, have aimed to address the geometric inaccuracies of 3D Gaussian Splatting while retaining its superior rendering quality. However, these approaches still struggle to reconstruct smooth and reliable geometry, particularly in scenes with significant color variation across viewpoints, due to their per-point appearance modeling and single-view optimization constraints.

In this paper, we propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting initialization and optimization. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions: MVS robustly estimates geometry in regions of high color variation through local patch-based matching and epipolar constraints, whereas Gaussian Splatting provides more reliable and less noisy depth estimates near object boundaries and regions with lower color variation. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation, effectively integrating MVS depth information into Gaussian Splatting optimization. We also propose an MVS-guided Gaussian Splatting initialization to avoid Gaussians falling into suboptimal positions. Extensive experiments validate that our approach successfully combines these strengths, enhancing both geometric accuracy and rendering quality across diverse indoor and outdoor scenes.
# 1. Introduction
Theory of Mind (ToM) is a psychological term referring to the ability to attribute mental states to oneself and others. This capability is fundamental to human social cognition and emotional understanding. In recent years, the rapid development of large models has prompted researchers to ask: can these models interact with us in a manner similar to humans?
Most existing studies adopt unimodal approaches, focusing on either text or videos, and lack comprehensive agent-level information (Gandhi et al., 2021; Nematzadeh et al., 2018; Grant et al., 2017; Le et al., 2019; Amirizaniani et al., 2024). In contrast, human social interactions rely on reasoning about others’ mental states by integrating multimodal inputs, such as visual and linguistic data. Although some studies (Jin et al., 2024; Shi et al., 2025) have attempted to extend ToM evaluations of large models to multimodal environments using video-based datasets, their datasets often incorporate excessive high-level information, such as spatial relationships, agents’ tasks, and action trajectories (Ma et al., 2023). Moreover, in real-world datasets, an agent’s perception of environmental events cannot be accurately captured. For example, in the MMToM-QA dataset (Jin et al., 2024), it is impossible to determine from the video modality alone whether the protagonist truly “saw” the plate. Consequently, the accuracy of ToM evaluations may depend on the quality of perceptual information, which could lead to correct or incorrect performance for reasons unrelated to genuine ToM capabilities. Unlike these prior works, we construct a dataset based on a 2D grid world, enabling large models to perceive the full context of the physical world through the video modality while supplementing cognitive perspective information for each agent through the text modality.
Furthermore, the majority of assessments of ToM capabilities in large models take a black-box approach, relying heavily on question-answering tasks to infer conclusions (Xu et al., 2024), as demonstrated in Figure 1, while lacking
Figure 1 (illustration): TARS's and Cooper's nested beliefs about piloting the spacecraft, contrasting prior work, which evaluates the ToM abilities of LLMs through their performance on ToM-related QA tasks, with our work, which focuses on the model's internal representations rather than relying solely on its input-output performance.
interpretability-oriented methodologies (Mao et al., 2024). However, multimodal large language models (MLLMs) are known to exhibit hallucination phenomena, where the quality of prompts can significantly impact their performance on question-answering tasks. This means that a model may “understand” a concept but fail to provide a “correct” response (Bai et al., 2024). As demonstrated in Figure 2, factors influencing QA performance are not limited to ToM capabilities. Elements such as hallucination and scenario understanding also affect the ToM evaluation results of previous studies. Consequently, it is insufficient to determine whether MLLMs possess ToM capabilities solely based on their performance in ToM tasks. In contrast, our goal is to examine whether these models develop internal representations that distinguish agents’ mental states from different perspectives, beyond merely analyzing output accuracy. This will provide an interpretable explanation of whether MLLMs possess ToM capabilities.
Figure 1. This illustration highlights the integration of different levels of ToM: recognizing an agent’s desire (Cooper wants to pilot), a first-order belief (he believes he can do it), and a second-order belief (he believes TARS perceives it as risky). These nested mental states are crucial in evaluating advanced ToM.
Figure 2. The figure highlights the limitations of current ToM evaluations, namely that other model capabilities (such as hallucination and scenario understanding) may interfere with the results.
In summary, our main contributions are as follows: (1) we introduce GridToM, a novel multimodal ToM dataset that incorporates diverse belief-testing tasks alongside perceptual information from multiple perspectives; (2) we conduct an in-depth analysis of the internal representations of MLLMs through interpretability methods, focusing on their intermediate activations; (3) we propose a training-free approach to enhance ToM performance in MLLMs by strategically shifting activations along specific directions.
# 2. Related Work
# 2.1. Dataset for Evaluating Theory of Mind
Inspired by traditional experiments used to evaluate ToM in children, some studies have applied the classic Sally-Anne task to assess ToM capabilities in machines (Grant et al., 2017; Eysenbach et al., 2016; van Duijn et al., 2023). Most existing datasets and methods for evaluating ToM capabilities are based on a single modality (Xiao et al.; Amirizaniani et al., 2024; Wu et al., 2023; Yim et al., 2024). For text inputs, Mindgames (Sileo & Lernould, 2023) is a dataset grounded in dynamic epistemic modal logic, designed to evaluate the epistemic reasoning of large language models through controlled problem generation. OpenToM (Xu et al., 2024) is a dataset characterized by long-form narratives featuring real-world individuals and events, emphasizing the complexity of storylines and the diversity of character relationships. Similarly, ToMi (Le et al., 2019) is a comprehensive dataset encompassing multi-agent scenarios, multi-episode contexts, multi-turn question answering, and tasks involving mental state reasoning. For video inputs, Shu et al. (2021) propose a benchmark composed of programmatically generated 3D animations, where agents interact with objects and move within various physical constraints.
SymmToM (Sclar et al., 2022) is a multi-agent reinforcement learning environment in which agents can simulate the mental states of others.
The MMToM-QA dataset (Jin et al., 2024) is the pioneering resource aimed at assessing machine learning models’ ability to infer mental states from multimodal data, combining video and text in real-world tasks. Similarly, other studies (Chen et al., 2024; Shi et al., 2025) have also explored this domain. However, real-world video datasets often lack perspective information, making it challenging to infer high-dimensional details such as whether the protagonist in a story truly notices specific objects. This limitation can affect the accuracy of ToM task performance.
To address these challenges, we developed a dataset based on a 2D grid world environment, which provides simplified character relationships, complete physical information, and comprehensive perceptual data for all agents. The 2D grid world framework not only enables the creation of manipulable visual causal stories for training classifiers to distinguish perspective information but also avoids introducing high-level information. This reduces the cognitive burden on MLLMs, allowing them to focus on the core ToM tasks.
# 2.2. Benchmark
The question of whether large models exhibit genuine ToM capabilities remains a topic of ongoing debate. Some evaluation studies suggest that certain large models demonstrate a degree of ToM ability in reasoning tasks, such as understanding others’ beliefs, intentions, and mental states (Kosinski, 2023; Bubeck et al., 2023; Zhou et al., 2023). However, other studies argue that the observed ToM-like capabilities of large models are not based on true generalization but instead result from learning patterns in question-answering tasks (Shapira et al., 2024; Ullman, 2023; Strachan et al., 2024), or that these models lack ToM altogether (Sap et al., 2022; Verma et al., 2024). Most conclusions about ToM capabilities in large models rely on performance in QA tasks. In contrast, our research aims to address this question from an interpretability perspective by investigating the internal representations of MLLMs related to mental state understanding, rather than solely depending on the quality of their question-answering performance.
# 3. GridToM
Why not use previous grid-world-based ToM datasets? Previous datasets included only unimodal inputs and lacked annotations for character perspective information and event details, making them unsuitable as positive and negative samples for the subsequent experiments in this study.
Unlike previous ToM works, GridToM provides manipulable multimodal visual-linguistic causal stories and includes the perceptual information of all agents in the scene. For each story, we apply randomized manipulations to the evaluation data, including room configurations, agent states, and action trajectories.
# 3.1. Overview
GridToM is generated based on the Multigrid library (Oguntola et al., 2023), which builds on Minigrid (Chevalier-Boisvert et al., 2023). It provides a multi-agent discrete gridworld environment, a simple and commonly used setting for ToM research in the machine learning community. The complete dataset construction pipeline and accompanying quality-control procedures are detailed in Appendix I. It has been demonstrated that a simple 2D gridworld can effectively support the development of diverse ToM tests (Ma et al., 2023), encompassing all mental states defined in ATOMS (Abilities in ToM Space) (Beaudoin et al., 2020).
Our dataset comprises 1,296 video-text pairs, with each video having a resolution of $294 \times 420$ pixels and approximately 40 frames. Each map is a $10 \times 7$ grid featuring three rooms and two agents. The dataset includes 27 distinct maps, each with two initial agent positions, two orientations, six sequences of agent movements into target rooms, and paired True Belief (TB) and False Belief (FB) stories, generating the 1,296 pairs. An example is shown in Figure 3.
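Under one reading of the design above (each map-position-orientation-sequence combination yields one TB and one FB sample), these factors multiply out exactly to the stated dataset size:

```python
# maps x initial positions x orientations x movement sequences x {TB, FB}
maps, positions, orientations, sequences, belief_versions = 27, 2, 2, 6, 2
total = maps * positions * orientations * sequences * belief_versions
print(total)  # -> 1296
```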
# 3.2. Baseline
Our experiments reproduce the classic unexpected transfer task (Baron-Cohen et al., 1985). We simulate a complete interaction process within the gridworld environment. The testing dataset consists of 500 samples. An additional 148 samples were used for training and validation of our model, with $75\%$ allocated for training and $25\%$ for validation. To ensure that each selected model processes the input without errors, we use four key frames and three intermediate frames between them as input for video-based tasks, instead of providing all frames. The temperature of all models was set to 0.
Human Participants To evaluate human performance on the proposed dataset, 12 human participants were recruited to answer the test questions; all of them gave informed consent. The age range was from 23 to 32 years (mean = 24.8, SD = 2.3). Each participant was randomly assigned 100 samples from the 500-sample test dataset, and the final performance score was obtained by averaging their results. Importantly, in the video-only condition, core environmental rules (including that closed doors block agents' perception) were clearly explained prior to testing. Any omitted narrative content was non-instructional and did not impact task comprehension. This ensures that participants were not misled or disadvantaged by the absence of textual guidance.
Figure 3 (example): key frames #frame0, #frame11, #frame27, and #frame36 of a paired true-belief/false-belief story, together with initial-belief, first-order, and second-order belief questions (e.g., "At the very end of the video, which color room does the yellow agent believe the white agent should be in?"). The accompanying diagram distinguishes the omniscient view, which always captures all perceptual information of the event, from each agent's perspective: when the door of a room is closed, information inside is inaccessible to agents outside.
MLLMs We evaluated MLLMs under both multimodal and pure-video conditions, including GPT-4o (Achiam et al., 2023), Doubao-1.5-Vision-Pro (Team, 2025), DeepSeek-VL2-Small (Liu et al., 2024), LLaVA-Next-Video-7B-HF (Touvron et al., 2023), and Qwen2-VL-7B-Instruct (Bai et al., 2023). Additionally, for fairness, we evaluated both subclasses of MLLMs using the same methodology, following previous work (Jin et al., 2024). Specifically, we sample a fixed set of seven frames from each video (four key frames and three evenly sampled intermediate frames), along with the corresponding annotations, to evaluate both Image-Text-to-Text and Video-Text-to-Text models.
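The seven-frame sampling can be sketched as follows. The midpoint rule is our assumption: the paper says "evenly sampled intermediate frames" without specifying the exact rule, so we take one midpoint per interval between consecutive key frames.

```python
# Sketch of the fixed seven-frame sampling scheme: the four annotated key
# frames plus three intermediate frames, here taken as the midpoint of each
# interval between consecutive key frames (one plausible reading of
# "evenly sampled"; the exact rule is an assumption).
def sample_frames(key_frames):
    assert len(key_frames) == 4
    frames = []
    for a, b in zip(key_frames, key_frames[1:]):
        frames.append(a)
        frames.append((a + b) // 2)  # midpoint of the interval [a, b)
    frames.append(key_frames[-1])
    return frames

# Key frames from the example episode shown in the figures:
print(sample_frames([0, 11, 27, 36]))  # → [0, 5, 11, 19, 27, 31, 36]
```

The same seven indices are then used for both image-based and video-based models, so the two model families see identical visual evidence.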
LLMs We also evaluated GridToM on various large language models (LLMs) using text-based input only, including GPT-4o (Achiam et al., 2023), Doubao-1.5-Pro-32k (Team, 2025), DeepSeek-V3 (Liu et al., 2024), LLaMA-3.3-70B-Instruct (Dubey et al., 2024), Mistral-7B-Instruct-V3 (Jiang et al., 2023), LLaVA-Next-Video-7B-HF (Touvron et al., 2023), and Qwen-VL-7B-Instruct (Bai et al., 2023).
We evaluate the models under three conditions (Jin et al., 2024): multimodal QA with both video and text inputs, text-only QA with text inputs only, and video-only QA with video inputs only. We list our detailed settings in Appendix F. Results are shown in Table 1, which reports the baseline performance of existing MLLMs on GridToM; results for the initial-belief test are in Appendix D.
# 4. Belief representation in MLLMs
# 4.1. Model
In the exploration and modification phases, we utilize the LLaVA-Next-Video model, an MLLM specifically designed for video understanding and generation tasks. Additionally, we applied the same two phases to the Qwen2-VL model, demonstrating the effectiveness of our approach.
# 4.2. Attention Feature Extraction
Figure 4. Overview of Our Workflow. We first constructed the GridToM dataset and conducted benchmark testing of MLLMs on it. Subsequently, we input video-text pairs to probe the internal attention representations of the models. Using logistic regression, we performed binary classification on the representations of positive and negative samples to identify attention heads that are sensitive to perspective separation and belief representation. Targeted interventions were then applied to the top $K$ most sensitive attention heads during inference.
We begin by investigating whether, and how, MLLMs represent the beliefs of different agents. Our objective is to decode the belief states of various agents from the activations of attention heads, given multimodal story narratives and corresponding belief statements.
Specifically, MLLMs first embed the input multimodal data into high-dimensional spaces, including visual inputs $V = \{v_1, v_2, \ldots, v_m\}$ and textual inputs $X = \{x_1, x_2, \ldots, x_n\}$, where $m$ and $n$ represent the token lengths of the visual and textual inputs, respectively. The model concatenates the visual and textual embeddings into a unified input sequence $T = \mathrm{concat}(V, X) \in \mathbb{R}^{(m+n) \times DH}$, where $D$ denotes the dimension of each attention head and $H$ represents the number of attention heads. This unified input is then passed through a Transformer architecture with $L$ layers.
At each layer, the concatenated input undergoes multi-head attention. The multi-head attention mechanism (MHA) can be approximated as Equation (1):
$$
T_{l+1} = T_l + \sum_{h=1}^{H} \mathrm{Attn}_l^h\left(P_l^h T_l\right) \cdot W_l^o,
$$
where $\mathrm{Attn}_l^h$ denotes the attention operation of the $h$-th head at the $l$-th layer, $P_l^h \in \mathbb{R}^{D \times DH}$ maps the stream activation into a $D$-dimensional head space, and $W_l^o \in \mathbb{R}^{D \times DH}$ is the output projection matrix. Inspired by (Li et al., 2024), the probing and intervention steps occur after $\mathrm{Attn}_l^h$ and before $W_l^o$.
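The residual update in Equation (1) can be sketched in a few lines of numpy. This is a minimal shape-level sketch, not the actual model: the attention operation is stubbed out with an identity map, and all weights are random placeholders following the definitions above ($D$ the head dimension, $H$ the number of heads).

```python
import numpy as np

rng = np.random.default_rng(0)
N_TOK, D, H = 5, 8, 4   # tokens, head dimension D, number of heads H

T = rng.standard_normal((N_TOK, D * H))          # residual stream T_l
P = 0.01 * rng.standard_normal((H, D, D * H))    # per-head projections P_l^h
W_o = 0.01 * rng.standard_normal((H, D, D * H))  # output projections W_l^o

def attn(x):
    # Stand-in for Attn_l^h; a real head would mix tokens via softmax attention.
    return x

T_next = T.copy()
for h in range(H):
    head_in = T @ P[h].T         # (N_TOK, D): project the stream into head space
    head_out = attn(head_in)     # (N_TOK, D): attention within the head
    T_next += head_out @ W_o[h]  # map back to (N_TOK, D*H) and add to the stream

assert T_next.shape == T.shape
```

The point relevant to probing and intervention is the intermediate `head_out`: it is the $D$-dimensional per-head activation that exists after the attention operation but before the output projection $W_l^o$.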
We extract the output of each attention head at every layer, capturing the activation at the final token position, denoted as $X \in \mathbb{R}^{L \times H \times D}$. Each attention-head activation is associated with belief labels $Y_p$ and $Y_o$, which represent the correctness of the protagonist's perspective and the omniscient perspective, respectively.
Due to the simplicity of the 2D gridworld, in TB scenarios the protagonist's perceptual information is equivalent to the omniscient information, which allows the protagonist's-perspective video to be substituted by the omniscient-perspective video. In TB scenarios, the protagonist's belief labels $Y_p = \mathrm{True}$ and $Y_p = \mathrm{False}$ help identify the layers and heads sensitive to reasoning based on perceptual information. In FB scenarios, the protagonist's belief labels help identify the layers and heads sensitive to integrating belief information across perspectives. For the omniscient belief label $Y_o$, the correct label corresponds to multimodal data with an omniscient perspective and accurate belief inference, while the incorrect label corresponds to either an incorrect perspective or an incorrect inference result.
This design of correct and incorrect labels targets two aspects: perspective separation and belief inference, integrating them into a unified framework. Targeted interventions are applied to the heads that are sensitive to these two aspects. We collectively define correct perspective separation and correct belief inference as true labels, and their opposites as false labels. In our approach, we only use the correct and incorrect labels from the protagonist’s perspective to indicate and guide perspective separation and belief reasoning.
$$
Y_p = \{ Y_p^{TB} \cap Y_p^{FB} \}
$$
For different belief tasks, our probing strategies vary slightly, while the interference strategy remains consistent, as detailed in Appendix B.
# 4.3. Probing
Probing is a standard tool for analyzing the internal representations of networks (Köhn, 2015; Gupta et al., 2015). The idea is to train a classifier (probe) on the activations of the network to distinguish specific types of inputs or outputs.
Figure 5. (A) The linear probing accuracy of all heads across all layers in LLaVA-Next-Video on the test set. The x-axis represents the heads, and the y-axis represents the layers. Dark green indicates higher accuracy, with $50\%$ serving as the baseline accuracy for random guessing. (B) Kernel density estimation (KDE) plot of activations in layer 28, head 15 of LLaVA-Next-Video, projected onto the top two true directions, showing true (green) and false (orange) pairs. Marginal distributions are displayed along the top and right axes. (C) & (D) The linear separability of belief representations, illustrated through a visual interpretation of the typical representation space, demonstrating the attention feature extraction strategy proposed in Appendix B. The binary combinations of the $Y_p$ and $Y_o$ labels correspond to the combinations of TB and FB with correct and incorrect beliefs. Belief-sensitive heads (e.g., head 15 in layer 28) can effectively estimate the boundaries of the belief states for both the omniscient perspective and the protagonist's perspective, whereas insensitive heads cannot. These four combinations form distinct, non-overlapping clusters in the representation space with clearly defined decision boundaries. The probing weight direction represents the decision boundary, effectively separating these belief combinations.
$$
f_l^h = \frac{1}{1 + e^{-(x\theta + b)}},
$$
where $f_l^h$ denotes the logistic sigmoid function applied to $x\theta + b$, while $\theta \in \mathbb{R}^D$ and $b \in \mathbb{R}$ represent the weight vector and bias, respectively. The parameters $\theta$ and $b$ are optimized by minimizing the cross-entropy loss.
We first conducted probing experiments on GridToM; the results are shown in Figure 5, and probing results for other models are listed in Appendix G. Subsequently, we performed the same probing experiments on the real-world multimodal ToM dataset MMToM-QA to validate the generalizability of our probing method. Detailed information on the MMToM-QA dataset is presented in Appendix H, as shown in Figure 22.
For each attention head at every layer, we train a separate linear binary probe to fit the belief labels $Y_p$ and $Y_o$. Given a dataset of size $N$, we obtain the corresponding activations of a single attention head, denoted as $X \in \mathbb{R}^{N \times D}$, with corresponding belief labels $Y \in \{0, 1\}^N$. We use a logistic regression model to predict the probability of the belief being true. In short, we select the top $K$ attention heads ranked by the accuracy in Figure 5 (A) and use the decision boundary in Figure 5 (C) as the direction for the intervention weights. Figure 5 (A) shows the validation accuracy of the linear probe, indicating that many attention heads can accurately capture belief states from the protagonist's perspective. These informative representations are distributed across different heads from the middle layers to the final layers, whereas the initial layers lack this capability.
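The per-head probe described above is an ordinary logistic regression on $D$-dimensional activations. Below is a minimal sketch using synthetic activations in place of real attention-head outputs; the data-generating geometry (true-belief samples shifted along a random direction) is entirely our assumption, chosen only to make the probe's behavior visible.

```python
# Sketch of per-head linear probing: fit a logistic regression on one head's
# activations and read off its accuracy and weight vector (the intervention
# direction). Activations here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, D = 400, 64                       # dataset size, head dimension

direction = rng.standard_normal(D)   # hypothetical "belief" direction
Y = rng.integers(0, 2, size=N)       # belief labels in {0, 1}
X = rng.standard_normal((N, D)) + 2.0 * np.outer(Y, direction)

probe = LogisticRegression(max_iter=1000).fit(X[:300], Y[:300])
acc = probe.score(X[300:], Y[300:])  # validation accuracy for this head
theta = probe.coef_[0]               # weight vector, used later as theta_l^h
print(f"probe accuracy: {acc:.2f}")
```

Running this for every (layer, head) pair and ranking heads by `acc` yields the accuracy map of Figure 5 (A); `theta` plays the role of the decision-boundary direction used for intervention.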
Meanwhile, Figure 5 (C) demonstrates the linear separability of belief representations. We visualize the attention feature extraction strategy proposed in Appendix B, where the four clusters represent the correctness of perspective separation and belief inference. These four combinations are distinctly clustered without overlap, with clear decision boundaries. This suggests that MLLMs indeed develop intermediate representations reflecting multi-perspective information extraction and belief inference based on the complete information provided. This phenomenon indicates that these attention heads implicitly encode the belief states of other perspectives in a linearly decodable manner. Furthermore, due to the simplified information in the 2D grid world, these implicit beliefs are easily propagated to the final layers.
To further understand belief representations in the activation space of attention heads, we visualized the geometry of the activation space, as shown in Figure 5 (B). Specifically, we reduced the activation space to two dimensions using Principal Component Analysis and selected two orthogonal directions $(\theta \perp \theta')$ with maximum variance for separating true and false features. Visualizing the geometric projections onto $\theta$ and $\theta'$, we observed partial overlap alongside distinct representations between the two distributions. Notably, the second direction still exhibits distinct representation distributions, suggesting that the concepts of "true" and "false" coexist in subspaces within the attention space, rather than being confined to a single unified space.
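The geometric analysis above can be sketched as follows. As before, the activations are synthetic stand-ins (a class-dependent shift along a random direction, our assumption), and the two principal components play the role of the orthogonal directions $\theta$ and $\theta'$.

```python
# Sketch of the 2-D geometric analysis: project head activations onto the
# top-2 principal components and measure true/false separation along each.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
N, D = 200, 64
direction = rng.standard_normal(D)      # hypothetical "true belief" direction
labels = rng.integers(0, 2, size=N)
acts = rng.standard_normal((N, D)) + 1.5 * np.outer(labels, direction)

proj = PCA(n_components=2).fit_transform(acts)  # two orthogonal directions
# Gap between class means along the first component:
gap = abs(proj[labels == 1, 0].mean() - proj[labels == 0, 0].mean())
print(f"class-mean gap along PC1: {gap:.2f}")
```

Plotting `proj` colored by `labels` (e.g., a KDE plot) reproduces the kind of visualization shown in Figure 5 (B).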
# 4.4. Intervention
Although the probing results demonstrate that MLLMs possess internal mental representations, we further intervene on the attention heads to validate the practical significance of the classifier's directional representations found during probing. Due to dataset limitations, MMToM-QA only provides positive and negative samples in the text modality rather than multimodal ones, so we performed the intervention experiments exclusively on GridToM.
We first select the top $K$ attention heads with the highest sensitivity on the validation set, i.e., those most responsive to the differences between true and false beliefs. We then intervene on these selected heads after the multi-head attention computation but before the mapping back to the output, computed as follows:
$$
T_{l+1} = T_l + \sum_{h=1}^{H} \left( \mathrm{Attn}_l^h\left(P_l^h T_l\right) + \alpha \sigma_l^h \theta_l^h \right) \cdot W_l^o,
$$
where $\sigma_l^h$ denotes the standard deviation of the activations along the target direction, and $\theta_l^h$ represents the intervention target direction, derived from the weight vector of the selected attention head's probe. The parameter $\alpha$ controls the strength of the intervention: for each of the selected $K$ heads, the activation is shifted along the target direction by $\alpha$ times the standard deviation in that direction.
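The intervention update above amounts to shifting each selected head's output along its probe direction before the output projection. A minimal sketch follows; the head selection, directions, and statistics are synthetic placeholders, not values from the actual model.

```python
# Sketch of the targeted intervention: add alpha * sigma_l^h * theta_l^h to
# the output of the top-K selected heads, leaving all other heads untouched.
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 4
alpha = 5.0                                # intervention strength

theta = rng.standard_normal((H, D))        # probe directions theta_l^h
theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit norm
sigma = rng.uniform(0.5, 2.0, size=H)      # std of activations along theta
head_out = rng.standard_normal((H, D))     # Attn_l^h(P_l^h T_l) at last token

top_k = [0, 2]                             # heads ranked by probe accuracy
before = head_out.copy()
for h in top_k:
    head_out[h] = head_out[h] + alpha * sigma[h] * theta[h]

# The projection of the shift onto theta equals alpha * sigma for each
# intervened head, matching the scaling described in the text.
for h in top_k:
    assert np.isclose((head_out[h] - before[h]) @ theta[h], alpha * sigma[h])
```

Because only the selected heads are shifted, the remaining $H - K$ head outputs pass through the layer unchanged, which keeps the intervention targeted.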
We provide an analysis of the effects of the hyperparameters $K$ and $\alpha$ on the intervention results in Appendix E. The analysis shows that our approach relies on an interpretable intervention based on internal representations and input perturbations, rather than on hyperparameter tuning. By identifying the attention heads responsible for true-belief representation and applying targeted interventions, we enhance the model's sensitivity to perspective separation and belief representation, ultimately improving the MLLM's ability to perceive and represent beliefs.
# 5. Experiments
# 5.1. Result
We present a summary of our results in Table 1. For each task, we include human accuracy as a benchmark to represent the upper bound of task performance.
In the multimodal setting, humans achieved high accuracy across TB, FB, and Both conditions, demonstrating the consistency of our design. However, in video-only tasks, performance on TB tasks declined slightly, as humans inferred the protagonist’s perspective but were occasionally misled by scenarios contradicting physical intuition, such as omniscient visibility through a doorway. The absence of textual clarification further amplified these misjudgments, as prior knowledge influenced their reasoning. Similarly, in the text-only setting, human performance experienced a slight decline due to excessive textual interference, which introduced confusion and contributed to errors.
In the first-order belief task, the baseline results in the multimodal setting indicate that MLLMs achieve high accuracy on FB tasks (e.g., both ChatGPT-4.0 and Doubao-1.5-Vision-Pro reach $100\%$), even slightly surpassing human performance ($99.9\%$). However, their performance on TB tasks is significantly weaker (e.g., ChatGPT-4.0 achieves $6.2\%$, Doubao-1.5-Vision-Pro achieves $16.8\%$, and $0\%$ on second-order belief tasks). We attribute this discrepancy to the models' overreliance on patterns learned from FB tasks, which may lead to misgeneralization in TB scenarios. This sensitivity prevents MLLMs from recognizing critical contextual details, such as the fact that the protagonist's door is open in TB tasks. This inference is supported by the following observations: when the influence of visual factors related to physical spatial positions is removed, LLMs (e.g., ChatGPT-4.0, LLaMA) still perform poorly when processing text-only inputs. However, MLLMs (e.g., ChatGPT-4.0, LLaVA-Next-Video, and Qwen2-VL) perform better when presented with pure video containing physical spatial information (excluding textual influences). This highlights the importance of establishing sound reasoning processes in both visual and textual modeling to balance task performance.
In both the multimodal and video-only conditions, the poor performance on TB tasks hurts all MLLMs' performance on Both tasks (i.e., correctly answering both the TB and FB tasks for the same set). Performance on Both tasks provides an intuitive reflection of MLLMs' ability to handle belief reasoning; high accuracy on a single task may indicate excessive sensitivity rather than balanced reasoning capability. Under the text-only condition, LLMs (e.g., ChatGPT-4.0, Doubao, and DeepSeek) also exhibit relatively high accuracy on TB tasks. Interestingly, Doubao-1.5-Pro-32k stands out by achieving $100\%$ accuracy on both tasks. In second-order belief tasks, MLLMs perform near the random-guessing baseline ($50\%$) and struggle on the Both task, highlighting the challenge. In contrast, LLMs excel in text-only tasks.
Table 1 also presents the results of applying our activation interference strategy to two MLLMs. While our attention feature extraction strategies are slightly adjusted for different belief tasks, the probing and interference methodology remains consistent, as detailed in Appendix B. The table highlights that our method effectively modifies the models’ behavior, resulting in substantial performance improvements across first-order and second-order belief tasks under multimodal conditions, including TB, FB, and Both.
Table 1. Model performance comparison on the GridToM benchmark. TB = True Belief. FB = False Belief. For TB and FB, the expectation for random guesses is $50\%$. Both indicates a situation where both TB and FB are judged correctly for a given set.
Additionally, in Figures 15 and 16 of Appendix E, we illustrate the impact of hyperparameters on the interference effect. Specifically, the weight direction of the probed protagonist’s perspective has a significant impact on baseline performance, highlighting its critical role in the ToM reasoning process. As expected, steering the reasoning direction of MLLMs toward this perspective consistently improves the accuracy of TB and FB tasks. Throughout this process, no invalid responses are generated until the maximum value is reached, at which point all responses become invalid. We also tested interference directed toward the omniscient perspective. Due to the differing effects of perspective separation, its interference effect was observed to be lower than that of the protagonist’s perspective. This finding aligns with our expectations, further confirming the importance of correctly aligning the models’ reasoning direction with the protagonist’s perspective for improved task performance.
# 5.2. Discussion
In this study, we introduced GridToM, a novel multimodal dataset characterized by its incorporation of diverse belief-testing tasks and perceptual information from multiple perspectives. Designed to evaluate the ToM capabilities of MLLMs, this dataset enables comprehensive assessment of their reasoning abilities across varied scenarios. We conducted extensive tests of existing MLLMs on this dataset and observed that these models perform better on text-based data than on video data. While the ToM capabilities exhibited in multimodal settings may be less pronounced than in unimodal scenarios, real-world applications, such as real-time human-machine collaboration, often necessitate multimodal inputs. Moreover, in such contexts, the feasibility of providing purely textual input in real time is limited, emphasizing the necessity of evaluating ToM capabilities and interpretability in MLLMs.
Through analysis of MLLMs’ internal mechanisms, we identified attention heads capable of distinguishing different perspective information and reasoning about correct beliefs. By modifying the reasoning attention direction based on the activation direction indicated by these attention heads, we achieved significant enhancement of ToM capabilities in both first-order and second-order belief tasks, further validating the effectiveness of this mechanism.
However, our study has certain limitations. First, the tasks in our dataset are limited to first-order and second-order belief tasks within the ATOMs framework (Beaudoin et al., 2020), whereas ToM theory encompasses a broader range of tasks that remain unexplored. Second, due to restrictions in accessing model code, our approach was only validated on a limited selection of MLLMs.
# Acknowledgements
This work was supported by the National Science and Technology Major Project (2022ZD0117902, 2022ZD0117901) and the National Natural Science Foundation of China (No. 62206015, 62227801, 62376024). We thank the anonymous reviewers for insightful discussions.
# Impact Statement
Understanding human mental states is crucial for developing AI that interacts effectively and empathetically. Our benchmark advances ToM evaluation in MLLMs by integrating belief-testing tasks and interpretability analysis, revealing AI cognitive mechanisms. Grounded in cognitive science, it prioritizes fairness, inclusivity, and invites community feedback to refine human-aligned AI systems.
# References
Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Amirizaniani, M., Martin, E., Sivachenko, M., Mashhadi, A., and Shah, C. Do llms exhibit human-like reasoning? evaluating theory of mind in llms for open-ended responses. In CIKM, 2024.
Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou, C., and Zhou, J. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023.
Bai, Z., Wang, P., Xiao, T., He, T., Han, Z., Zhang, Z., and Shou, M. Z. Hallucination of multimodal large language models: A survey. arXiv preprint arXiv:2404.18930, 2024.
Baron-Cohen, S., Leslie, A. M., and Frith, U. Does the autistic child have a “theory of mind”? Cognition, 21(1): 37–46, 1985. Publisher: Elsevier.
Beaudoin, C., Leblanc, É., Gagner, C., and Beauchamp, M. H. Systematic review and inventory of theory of mind measures for young children. Frontiers in psychology, 10:2905, 2020.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., and others. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Chen, Z., Wang, T., Wang, Y., Kosinski, M., Zhang, X., Fu, Y., and Li, S. Through the theory of mind’s eye: Reading minds with multimodal video large language models. arXiv preprint arXiv:2406.13763, 2024.
Chevalier-Boisvert, M., Dai, B., Towers, M., Lazcano, R. d., Willems, L., Lahlou, S., Pal, S., Castro, P. S., and Terry, J. Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks. CoRR, abs/2306.13831, 2023.
Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
Eysenbach, B., Vondrick, C., and Torralba, A. Who is mistaken? arXiv preprint arXiv:1612.01175, 2016.
Gandhi, K., Stojnic, G., Lake, B. M., and Dillon, M. R. Baby intuitions benchmark (bib): Discerning the goals, preferences, and actions of others. Advances in neural information processing systems, 34:9963–9976, 2021.
Grant, E., Nematzadeh, A., and Griffiths, T. L. How can memory-augmented neural networks pass a false-belief task? In CogSci, 2017.
Gupta, A., Boleda, G., Baroni, M., and Padó, S. Distributional vectors encode referential attributes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 12–21, 2015.
Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
Jin, C., Wu, Y., Cao, J., Xiang, J., Kuo, Y.-L., Hu, Z., Ullman, T., Torralba, A., Tenenbaum, J., and Shu, T. MMToM-QA: Multimodal theory of mind question answering. In Ku, L.-W., Martins, A., and Srikumar, V. (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 16077–16102, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.851. URL https://aclanthology.org/2024.acl-long.851/.
Kosinski, M. Theory of Mind May Have Spontaneously Emerged in Large Language Models, March 2023. URL http://arxiv.org/abs/2302.02083.
Kosinski, M. Evaluating large language models in theory of mind tasks. Proceedings of the National Academy of Sciences, 121(45):e2405460121, 2024.
Köhn, A. What's in an embedding? Analyzing word embeddings through multilingual evaluation. 2015. Publisher: Fachbereich Informatik.
Le, M., Boureau, Y.-L., and Nickel, M. Revisiting the evaluation of theory of mind through question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5872–5877, 2019.
Li, K., Patel, O., Viégas, F., Pfister, H., and Wattenberg, M. Inference-time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems, 36, 2024.
Liu, A., Feng, B., Xue, B., Wang, B., Wu, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024.
Lore, N., Ilami, S., and Heydari, B. Large model strategic thinking, small model efficiency: transferring theory of mind in large language models. arXiv preprint arXiv:2408.05241, 2024.
Ma, Z., Sansom, J., Peng, R., and Chai, J. Towards a holistic landscape of situated theory of mind in large language models. In Bouamor, H., Pino, J., and Bali, K. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 1011–1031, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.72. URL https://aclanthology.org/2023.findings-emnlp.72/.
Mao, Y., Liu, S., Ni, Q., Lin, X., and He, L. A review on machine theory of mind. IEEE Transactions on Computational Social Systems, 2024.
Nematzadeh, A., Burns, K., Grant, E., Gopnik, A., and Griffiths, T. Evaluating theory of mind in question answering. In Riloff, E., Chiang, D., Hockenmaier, J., and Tsujii, J. (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2392–2400, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1261. URL https://aclanthology.org/D18-1261/.
Oguntola, I., Campbell, J., Stepputtis, S., and Sycara, K. Theory of mind as intrinsic motivation for multi-agent reinforcement learning. arXiv preprint arXiv:2307.01158, 2023.
Sap, M., Le Bras, R., Fried, D., and Choi, Y. Neural theory-of-mind? on the limits of social intelligence in large LMs. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3762–3780, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.248. URL https://aclanthology.org/2022.emnlp-main.248/.
Sclar, M., Neubig, G., and Bisk, Y. Symmetric machine theory of mind. In International Conference on Machine Learning, pp. 19450–19466. PMLR, 2022.
Shapira, N., Levy, M., Alavi, S. H., Zhou, X., Choi, Y., Goldberg, Y., Sap, M., and Shwartz, V. Clever hans or neural theory of mind? stress testing social reasoning in large language models. In Graham, Y. and Purver, M. (eds.), Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2257–2273, St. Julian's, Malta, March 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.eacl-long.138/.
Shi, H., Ye, S., Fang, X., Jin, C., Isik, L., Kuo, Y.-L., and Shu, T. Muma-tom: Multi-modal multi-agent theory of mind. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 1510–1519, 2025.
Shu, T., Bhandwaldar, A., Gan, C., Smith, K., Liu, S., Gutfreund, D., Spelke, E., Tenenbaum, J., and Ullman, T. Agent: A benchmark for core psychological reasoning. In International conference on machine learning, pp. 9614– 9625. PMLR, 2021.
Sileo, D. and Lernould, A. MindGames: Targeting theory of mind in large language models with dynamic epistemic modal logic. In Bouamor, H., Pino, J., and Bali, K. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 4570–4577, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.303. URL https://aclanthology.org/2023.findings-emnlp.303/.
Strachan, J. W., Albergo, D., Borghini, G., Pansardi, O., Scaliti, E., Gupta, S., Saxena, K., Rufo, A., Panzeri, S., Manzi, G., et al. Testing theory of mind in large language models and humans. Nature Human Behaviour, pp. 1–11, 2024.
Team, D. Doubao-1.5-pro: Exploring the ultimate balance between model performance and inference efficiency, 2025. URL https://team.doubao.com/zh/special/doubao_1_5_pro.
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Ullman, T. Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399, 2023.
van Duijn, M. J., van Dijk, B., Kouwenhoven, T., de Valk, W., Spruit, M. R., and van der Putten, P. Theory of mind in large language models: Examining performance of 11 state-of-the-art models vs. children aged 7-10 on advanced tests. arXiv preprint arXiv:2310.20320, 2023.
Verma, M., Bhambri, S., and Kambhampati, S. Theory of mind abilities of large language models in human-robot interaction: An illusion? In Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pp. 36–45, 2024.
Wu, Y., He, Y., Jia, Y., Mihalcea, R., Chen, Y., and Deng, N. Hi-ToM: A benchmark for evaluating higher-order theory of mind reasoning in large language models. In Bouamor, H., Pino, J., and Bali, K. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 10691–10706, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.717. URL https://aclanthology.org/2023.findings-emnlp.717/.
Xiao, Y., Jiashuo, W., Xu, Q., Song, C., Xu, C., Cheng, Y., Li, W., and Liu, P. Tomvalley: Evaluating the theory of mind reasoning of llms in realistic social context.
Xu, H., Zhao, R., Zhu, L., Du, J., and He, Y. OpenToM: A Comprehensive Benchmark for Evaluating Theory-ofMind Reasoning Capabilities of Large Language Models. arXiv preprint arXiv:2402.06044, 2024.
Yim, Y., Chan, C., Shi, T., Deng, Z., Fan, W., Zheng, T., and Song, Y. Evaluating and enhancing llms agent based on theory of mind in guandan: A multi-player cooperative game under imperfect information. arXiv preprint arXiv:2408.02559, 2024.
Zhou, P., Madaan, A., Potharaju, S. P., Gupta, A., McKee, K. R., Holtzman, A., Pujara, J., Ren, X., Mishra, S., Nematzadeh, A., et al. How far are large language models from agents with theory-of-mind? arXiv preprint arXiv:2310.03051, 2023.
# Appendix
# A. Benchmark Details
Table 2. A comparison of Theory of Mind benchmarks (first- and second-order belief tasks).
# B. Attention feature extraction strategies
# B.1. First-order Belief
Figure 6. In the attention feature extraction process for first-order belief tasks, the information obtained from the omniscient and protagonist perspectives is consistent in the TB task. We identify belief-reasoning-sensitive features in attention by comparing correct and incorrect belief pairs. However, in the FB task, the protagonist’s perspective has limited information. Therefore, we use the visual information from the protagonist’s perspective along with the corresponding annotations as positive samples, while the omniscient perspective serves as negative samples. By comparing positive and negative samples, we identify attention features sensitive to perspective separation.
# B.2. Second-order Belief
[Figure 7 panel contents: Second-order True Belief and Second-order False Belief conditions, each built from first-order TB and FB video sequences (#frame0–#frame36). Each panel lists the per-segment information ([0,11), [11,19), [19,36]), the yellow agent's resulting belief about the white agent's location (purple room vs. red room), and the labels $y_{participant}$ and $y_{protagonist}$.]
Figure 7. In the attention feature extraction process for second-order belief tasks, both the TB and FB tasks include the TB and FB tasks from first-order belief tasks. Unlike first-order belief tasks, the FB task in second-order belief reasoning contains the participant’s incorrect perception of the protagonist’s belief, achieved through a carefully designed timing setup. Since second-order belief reasoning involves the participant’s belief about the protagonist’s belief and does not include perspective separation tasks, we identify belief-reasoning-sensitive features in attention solely by comparing correct and incorrect belief pairs.
# C. Full Version of the Example Questions in Figure 3
# C.1. Videos
# TB test
The task of TB refers to the situation where the protagonist’s beliefs align with those from an omniscient perspective, meaning the protagonist has access to all the information about the events. In the TB experiment, when the protagonist enters the room and leaves the door open, they are able to observe the situation outside the room, including the movements of the participants. We select a representative example from the dataset and present the video frame sequences from three distinct perspectives: the omniscient perspective (Figure 8, A), the protagonist’s perspective (Figure 8, C), and the participant’s perspective (Figure 8, B).
Figure 8. (A) The video frames from the omniscient perspective (36 frames in total) in TB test are shown in the figure. (B) The video frames from the participant’s perspective (36 frames in total) in TB test are shown in the figure. (C) The video frames from the protagonist’s perspective (36 frames in total) in TB test are shown in the figure.
# FB test
The task of FB refers to the situation where the protagonist’s beliefs diverge from those of an omniscient perspective, meaning the protagonist does not have access to all the information about the events. In the FB experiment, the protagonist enters the room and does not observe critical events, such as the movements of the participants outside the room, due to the door being closed. We select a representative example from the dataset and present the corresponding video frame sequences from three distinct perspectives: the omniscient perspective (Figure 9, A), the protagonist’s perspective (Figure 9, C), and the participant’s perspective (Figure 9, B).
Figure 9. (A) The video frames from the omniscient perspective (36 frames in total) in FB test are shown in the figure. (B) The video frames from the participant’s perspective (36 frames in total) in FB test are shown in the figure. (C) The video frames from the protagonist’s perspective (36 frames in total) in FB test are shown in the figure.
# C.2. Text
# Initial Belief
The concept of initial belief refers to the foundational understanding or assumption MLLMs hold about the scenario before answering the ToM questions. In the context of this study, initial belief encompasses the MLLMs’ pre-existing mental representation regarding three specific aspects of the task (see Figure 10):
• Quantity and Color A single question evaluates the agent’s ability to interpret and reason about the numerical or visual attributes of objects based on its initial belief.
• Spatial Understanding Two questions assess the agent’s capacity to comprehend and reason about the spatial arrangement or movement of objects within the environment.
This structured approach ensures that the evaluation effectively measures the agent’s ToM capabilities within a multimodal framework.
# Spatial Location Information
The video shows a 2D grid world viewed from above, consisting of 10 rows and 7 columns. Gray represents the wall and cannot be penetrated. Black squares with gray borders represent the corridor floor. There are three rooms in this grid world, each with its own color (and each room's door has the same color as the room). There are two triangles here, representing the agents.
# Initial Belief
"Question1": How many agents (triangles) are there? What colors are they? Choose from the following colors and just answer the color(s). (white, green, red, yellow, purple)
"section1": [0,36]
"answer1": ["white", "yellow"]
"Question2": What color room did the white agent walk into? Choose from the following colors and just answer the color. (red, green, purple)
"section2": [0,11]
"answer2": ["red"]
"Question3": What color room did the yellow agent walk into? Choose from the following colors and just answer the color. (red, green, purple)
"section3": [0,19]
"answer3": ["green"]
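These annotations can be consumed programmatically. The sketch below is an illustrative consistency check, not part of the dataset tooling: the record layout mirrors the listing above, and the 36-frame total comes from the dataset description.

```python
# Illustrative consistency check for initial-belief annotations; the record
# layout mirrors the listing above (an assumption, not the official schema).
record = {
    "Question1": "How many agents (triangles) are there? What colors are they?",
    "section1": [0, 36],
    "answer1": ["white", "yellow"],
    "Question2": "What color room did the white agent walk into?",
    "section2": [0, 11],
    "answer2": ["red"],
    "Question3": "What color room did the yellow agent walk into?",
    "section3": [0, 19],
    "answer3": ["green"],
}

TOTAL_FRAMES = 36
COLORS = {"white", "green", "red", "yellow", "purple"}

def validate(rec, n_questions=3):
    """Each section must be a frame range inside the video, and each
    answer must use only the allowed colors."""
    for i in range(1, n_questions + 1):
        lo, hi = rec[f"section{i}"]
        assert 0 <= lo < hi <= TOTAL_FRAMES
        assert set(rec[f"answer{i}"]) <= COLORS
    return True

ok = validate(record)
```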
# First-order belief
The concept of first-order belief refers to the direct inferences that MLLMs make about the mental states of others, grounded in their observable actions or statements. To facilitate the subsequent training of classifiers and the identification of the representational direction of perspective information, the dataset includes a single first-order belief question, along with two answer options and the correct answer. It also provides the corresponding contents of the TB and FB tests associated with the question, as well as detailed descriptions of the story progression across different temporal segments to capture the sequence of events. This design ensures that the dataset not only supports the evaluation of first-order belief reasoning in MLLMs but also establishes a structured framework for identifying and analyzing perspective-based information through temporal and belief-based annotations (see Figure 11).
# First Order Belief
# True Belief
"Question": At the very end of the video, which color room does the yellow agent believe the white agent should be in?
"options": ["red", "purple"]
"answer": "purple"
"belief true": "The white agent is now in the purple room."
"belief false": "The white agent is now in the red room."
"caption": "The story proceeds as follows: 1. Initially, the white agent stands in the corridor. It walks towards the red room, opens the red door, enters the room, and closes the red door behind itself. Throughout this time, the yellow agent is in the corridor and sees the white agent go into the red room. 2. After the white agent closes the door to the red room, the yellow agent goes over to the green room, opens the green door, enters it, but the yellow agent does not close the green door. Therefore, the yellow agent can see everything happening outside the green room. 3. While the yellow agent is inside the green room (with the green door open), the white agent opens the red room door, leaves the red room, then walks to the purple room, opens the purple door, goes inside, and closes the purple door. That is where the video ends."
# False Belief
"Question": At the very end of the video, which color room does the yellow agent believe the white agent should be in?
"options": ["red", "purple"]
"answer": "red"
"belief true": "The white agent is now in the red room."
"belief false": "The white agent is now in the purple room."
"caption": "The story proceeds as follows: 1. Initially, the white agent stands in the corridor. It walks towards the red room, opens the red door, enters the room, and closes the red door behind itself. Throughout this time, the yellow agent is in the corridor and sees the white agent go into the red room. 2. After the white agent closes the door to the red room, the yellow agent goes over to the green room, opens the green door, enters it, and the yellow agent closes the green door. Therefore, the yellow agent cannot see anything happening outside the green room. 3. While the yellow agent is inside the green room (with the green door closed), the white agent opens the red room door, leaves the red room, then walks to the purple room, opens the purple door, goes inside, and closes the purple door. That is where the video ends."
Figure 11. The textual annotations for the first order belief task in the TB and FB tests are shown in the figure.
# Second-order belief
The concept of second-order belief pertains to the reasoning and inferences that MLLMs make regarding an agent’s beliefs about another agent’s mental state, based on observed actions or interactions. This evaluation also encompasses the question, answer options, the corresponding TB and FB conditions, as well as the story descriptions (see Figure 12 and Figure 13).
# Second Order Belief
True Belief
"Question": At the very end of the video, which color room does the white agent believe the yellow agent thinks the white agent should
be in?
"options": ["red", "purple"]
"answer": "red"
"belief true": "The yellow agent believes the white agent is now in the red room."
"belief false": "The yellow agent believes the white agent is now in the purple room."
"caption": "The story proceeds as follows: 1. Initially, the white agent stands in the corridor. It walks towards the red room, opens
the red door, enters the room, and closes the red door behind itself. Throughout this time, the yellow agent is in the corridor and
sees the white agent go into the red room. 2. After the white agent closes the door to the red room, the yellow agent goes over to the
green room, opens the green door, enters it, but the yellow agent does not close the green door. Therefore, the yellow agent can see
everything happening outside the green room. 3. While the yellow agent is inside the green room (with the green door open), the white
agent opens the red room door, leaves the red room, at the same time, it sees the green room door open, then it walks to the purple
room, opens the purple door, goes inside, and closes the purple door. That is where the video ends."
"Question": At the very end of the video, which color room does the white agent believe the yellow agent thinks the white agent should
be in?
"options": ["red", "purple"]
"answer": "purple"
"belief true": "The yellow agent believes the white agent is now in the purple room."
"belief false": "The yellow agent believes the white agent is now in the red room."
"caption": "The story proceeds as follows: 1. Initially, the white agent stands in the corridor. It walks towards the red room, opens
the red door, enters the room, and closes the red door behind itself. Throughout this time, the yellow agent is in the corridor and
sees the white agent go into the red room. 2. After the white agent closes the door to the red room, the yellow agent goes over to the
green room, opens the green door, enters it, and the yellow agent closes the green door. Therefore, the yellow agent cannot see
anything happening outside the green room. 3. While the yellow agent is inside the green room (with the green door closed), the white
agent opens the red room door, leaves the red room, at the same time, it sees the green room door closed, then it walks to the purple
room, opens the purple door, goes inside, and closes the purple door. That is where the video ends."
Figure 12. The textual annotations for the second order belief task in the TB tests are shown in the figure.
# Second Order Belief
False Belief
"Question": At the very end of the video, which color room does the white agent believe the yellow agent thinks the white agent should
be in?
"options": ["red", "purple"]
"answer": "red"
"belief true": "The yellow agent believes the white agent is now in the red room."
"belief false": "The yellow agent believes the white agent is now in the purple room."
"caption": "The story proceeds as follows: 1. Initially, the white agent stands in the corridor. It walks towards the red room, opens
the red door, enters the room, and closes the red door behind itself. Throughout this time, the yellow agent is in the corridor and
sees the white agent go into the red room. 2. After the white agent closes the door to the red room, the yellow agent goes over to the
green room, opens the green door, enters it, and the yellow agent closes the green door. Therefore, the yellow agent cannot see
anything happening outside the green room. 3. While the yellow agent is inside the green room (with the green door closed), the white
agent opens the red room door, leaves the red room, at the same time, it sees the green room door closed (However, at that exact moment,
the yellow agent opens the green door. Because of this timing, the yellow agent actually sees the white agent leaving the red room.),
then it walks to the purple room, opens the purple door, goes inside, and closes the purple door. That is where the video ends."
"Question": At the very end of the video, which color room does the white agent believe the yellow agent thinks the white agent should
be in?
"options": ["red", "purple"]
"answer": "purple"
"belief true": "The yellow agent believes the white agent is now in the purple room."
"belief false": "The yellow agent believes the white agent is now in the red room."
"caption": "The story proceeds as follows: 1. Initially, the white agent stands in the corridor. It walks towards the red room, opens
the red door, enters the room, and closes the red door behind itself. Throughout this time, the yellow agent is in the corridor and
sees the white agent go into the red room. 2. After the white agent closes the door to the red room, the yellow agent goes over to the
green room, opens the green door, enters it, but the yellow agent does not close the green door. Therefore, the yellow agent can see
everything happening outside the green room. 3. While the yellow agent is inside the green room (with the green door open), the white
agent opens the red room door, leaves the red room, at the same time, it sees the green room door open (However, at that exact moment,
the yellow agent closes the green door. Because of this timing, the yellow agent does not actually see the white agent leaving the red
room.), then it walks to the purple room, opens the purple door, goes inside, and closes the purple door. That is where the video
ends."
Figure 13. The textual annotations for the second order belief task in the FB tests are shown in the figure.
Figure 14. The figure presents video sequence frames extracted from three different rooms in the dataset as examples, where (A)(B), (C)(D), and (E)(F) correspond to different rooms. (A) and (B) illustrate examples of the same room configuration but with different agent states and action trajectories. Specifically, (A) represents the FB experiment, while (B) corresponds to the TB experiment. Similarly, (C) and (D) depict the FB and TB experiments, respectively, and (E) and (F) show the FB and TB experiments in another room configuration.
Furthermore, in our dataset, we apply randomized manipulations to the evaluation data for each story, including variations in room configurations, agent states, and action trajectories. This approach ensures diversity while preventing repetitive patterns that might result in spurious statistical correlations. To illustrate this, we have provided ten examples from the dataset, as shown in the Figure 14.
# D. Result of initial belief test in Section 3.2
# D.1. Result
We evaluated the initial belief accuracy $(\mathrm{ACC}\%)$ of these MLLMs on the GridToM dataset. The results are shown in the table below (Table 3).
Table 3. Initial Belief
The variance in accuracy highlights the disparity in reasoning and belief-assessment capabilities among these models. This indicates that model architecture, training data, or multimodal integration plays a critical role in achieving higher performance on such tasks. The deepseek-vl2-small model achieved only $5.9\%$ accuracy on 1944 initial belief tasks; the reason for this low accuracy was that $89.9\%$ of its responses were invalid.
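Scoring in the presence of invalid responses can be sketched as follows. The option set and the rule of counting out-of-option answers as wrong mirror the description above; the helper function and sample responses are illustrative assumptions.

```python
OPTIONS = {"white", "green", "red", "yellow", "purple"}

def accuracy_counting_invalid(preds, golds):
    """An answer counts as correct only if it is a valid option AND
    matches the gold label; invalid responses therefore count as errors."""
    correct = sum(p in OPTIONS and p == g for p, g in zip(preds, golds))
    return correct / len(golds)

# two valid correct answers, two invalid responses -> accuracy 0.5
preds = ["red", "I cannot tell from the video", "purple", "blue box"]
golds = ["red", "green", "purple", "green"]
acc = accuracy_counting_invalid(preds, golds)  # 0.5
```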
# E. Hyperparameters’ analysis in Section 4.4
The impact of hyperparameters $K$ and $\alpha$ on intervention strength is shown in Figures 15 to 18. We treat generated invalid responses as incorrect answers. Across all intervention results, the intervention direction based on the protagonist’s perspective achieves the best performance, which aligns with our expectations and is applied in our experiments.
Specifically, Figures 15 to 18 illustrate a wide span of hyper-parameter settings. We find that the effect of the intervention is confined to a valid interval; once this interval is exceeded, the MLLMs’ responses deteriorate. The parameter $\alpha$ remains effective roughly within the range $[-50, 50]$, and the choice of $K$ is informed by the number of attention heads in the MLLMs. Within the valid region, these two hyper-parameters affect model performance by no more than $10 \%$ on average, and their tuning produces a smoothly varying perturbation until the edge of the valid interval is reached. These results show that our approach relies on an interpretable intervention based on internal representations and input perturbations, rather than on hyper-parameter tuning, and that it is not highly sensitive to the specific hyper-parameter values chosen.
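The role of $K$ and $\alpha$ can be sketched as shifting the outputs of the $K$ most belief-sensitive heads along a probed direction scaled by $\alpha$. The function below is a simplified stand-in, not the paper's implementation: the head scores, per-head directions, and array shapes are all illustrative assumptions.

```python
import numpy as np

def intervene(head_outputs, directions, head_scores, K, alpha):
    """Shift the K highest-scoring heads along their (unit-normalized)
    probed direction, scaled by alpha; other heads are left untouched."""
    top = np.argsort(head_scores)[-K:]          # indices of the top-K heads
    out = head_outputs.copy()
    for h in top:
        unit = directions[h] / (np.linalg.norm(directions[h]) + 1e-8)
        out[h] = out[h] + alpha * unit
    return out

# toy example: 8 heads with 4-dimensional outputs
rng = np.random.default_rng(0)
heads = rng.normal(size=(8, 4))
dirs = rng.normal(size=(8, 4))
scores = rng.uniform(size=8)
shifted = intervene(heads, dirs, scores, K=3, alpha=10.0)
```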
Figure 15. The impact of the hyperparameters $K$ and $\alpha$ on the LLaVA-NeXT-Video-7B-hf model on the First-order TB task.
Figure 16. The impact of the hyperparameters $K$ and $\alpha$ on the LLaVA-NeXT-Video-7B-hf model on the First-order FB task.
Figure 17. The impact of the hyperparameters $K$ and $\alpha$ on the Qwen2-VL-7B-Instruct model on the First-order TB task.
Figure 18. The impact of the hyperparameters $K$ and $\alpha$ on the Qwen2-VL-7B-Instruct model on the First-order FB task.
# F. Evaluation protocol of baseline test
Our objective is to provide MLLMs with complete third-person perceptual information in both visual and textual formats (representing an omniscient perspective) and require MLLMs to separate perceptual information corresponding to different perspectives. This allows the models to infer the correct beliefs from each perspective.
Following the standard zero-shot settings for ToM QA evaluations as described in the literature (Shapira et al., 2024), we assess all models without any additional training. The evaluation includes questions related to initial beliefs, first-order beliefs, and second-order beliefs. The evaluation metrics include the accuracy of correctly answering TB, FB, and both TB
and FB simultaneously.
# F.1. Objective
The primary goal of this evaluation is to provide MLLMs with complete third-person perceptual information in both visual and textual formats, representing an omniscient perspective. MLLMs are tasked with separating perceptual information corresponding to different perspectives, enabling them to infer correct beliefs associated with each perspective.
# F.2. Setup
In line with the standard zero-shot settings for ToM QA evaluations, as outlined in the literature (Shapira et al., 2024), all models are assessed without any additional training or fine-tuning. This ensures that the evaluation reflects the inherent ToM reasoning capabilities of the models without being influenced by dataset-specific optimizations.
# F.3. Evaluation Scope
The evaluation employs the following accuracy metrics to measure the model’s performance:
Accuracy of initial belief test Measures the model’s ability to correctly understand the scenario.
TB Accuracy of first order belief test Evaluates the model’s performance in identifying true beliefs within first-order reasoning scenarios.
FB Accuracy of first order belief test Assesses the model’s capacity to correctly infer false beliefs in first-order reasoning tasks.
TB Accuracy of second order belief test Tests the model’s ability to discern true beliefs in second-order reasoning contexts.
FB Accuracy of second order belief test Measures the model’s effectiveness in identifying false beliefs in second-order reasoning scenarios.
The model’s responses are scored based on their ability to correctly answer questions in each belief category. Each category and the performance of both together are reported separately.
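A minimal scoring routine for these metrics might look as follows; the per-scenario result schema (paired TB/FB correctness flags) is an illustrative assumption.

```python
def belief_accuracies(results):
    """Accuracy on TB items, FB items, and on answering both members
    of each TB/FB pair correctly (the strictest criterion)."""
    n = len(results)
    tb = sum(r["tb_correct"] for r in results) / n
    fb = sum(r["fb_correct"] for r in results) / n
    both = sum(r["tb_correct"] and r["fb_correct"] for r in results) / n
    return {"TB": tb, "FB": fb, "TB&FB": both}

# toy results over four paired scenarios
demo = [
    {"tb_correct": True, "fb_correct": False},
    {"tb_correct": True, "fb_correct": True},
    {"tb_correct": False, "fb_correct": True},
    {"tb_correct": True, "fb_correct": True},
]
scores = belief_accuracies(demo)  # {'TB': 0.75, 'FB': 0.75, 'TB&FB': 0.5}
```

The joint TB-and-FB score is always bounded above by the weaker of the two individual accuracies, which is why it is reported separately.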
# G. Additional Probing Results
We present the full probing results in first-order belief task and second-order belief task for both models using logistic regression models in Figure 19 and Figure 20. The probing accuracies vary across models and tasks.
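A single head's probe reduces to fitting a logistic-regression classifier on that head's activations, e.g. with scikit-learn. The activation shape and the synthetic data below are illustrative, not taken from either model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_head(activations, labels):
    """Fit a linear probe on one attention head's activations and
    return its held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, labels, test_size=0.3, random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# synthetic head whose activations linearly encode the belief label
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 16)) + 2.0 * y[:, None]
acc = probe_head(X, y)  # close to 1.0 for this separable toy data
```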
Figure 19. Probe accuracies on first-order belief task and second-order belief task based on the attention head activations in all layers of LLaVA-Next-Video.
Qwen2-VL-7B-Instruct
Figure 20. Probe accuracies on first-order belief task and second-order belief task based on the attention head activations in all layers of Qwen2-VL-7B-Instruct.
# H. Probing on Different Dataset (MMToM-QA)
We further validated the effectiveness of our method on the MMToM-QA dataset. The MMToM-QA dataset consists of 134 videos, capturing an individual searching for everyday objects in a home environment. This aligns with cognitive science research on mental state attribution in navigational agents.
On average, each video contains 1,462 frames and depicts 36 types of human behaviors. Based on these videos, the dataset includes 600 questions designed to assess both goal reasoning and belief reasoning abilities. Each question is paired with a video clip representing the complete activity (e.g., RGB-D frames), a textual description of the scene, and the actions taken by the individual in the clip. The questions follow a binary-choice format and are categorized into seven reasoning types (as detailed in the original dataset documentation). Specifically, the belief reasoning task consists of 300 questions (100 per type), while the goal reasoning task comprises 300 questions (75 per type). Additionally, the dataset provides 1,000 procedurally generated videos, annotated with ground-truth information on scenes, objects, goals, and beliefs for model training.
In our experiments, we utilized only the belief reasoning subset of the dataset (Figure 21). However, due to the absence of explicit positive-negative video pairs, we manually curated and filtered the dataset, constructing first-order TB and FB samples. This refinement enables a more precise evaluation of the model’s ToM reasoning capabilities.
Type 1.1: True belief, short-term

Scene: ... Inside the fridge, you'll find a bottle of wine...
Actions: ... Finally, she moves towards the fridge, preparing to open it.
Question: If Elizabeth has been trying to get a bottle of wine, which one of the following statements is more likely to be true?
(a) Elizabeth thinks that there is a bottle of wine inside the fridge.
(b) Elizabeth thinks that there isn't any bottle of wine inside the fridge.

Type 1.2: False belief, short-term

Scene: ... The living room features a cabinet... The cabinet is filled with a bag of chips, a remote controller, a bottle of wine, and a water glass.
Actions: Jennifer is situated in the living room. She heads towards the cabinet and is about to open it.
Question: If Jennifer has been trying to get a cupcake, which one of the following statements is more likely to be true?
(a) Jennifer thinks that there isn't a cupcake inside the cabinet.
(b) Jennifer thinks that there is a cupcake inside the cabinet.
Figure 21. Sample examples from the MMToM-QA dataset. The question types utilized in MMToM-QA are also illustrated.
Figure 22. (A) Omniscient. (B) Protagonist. The linear probing accuracy of all heads across all layers in LLaVA-Next-Video on the test set. (C) Insensitive. (D) Sensitive. The linear separability of belief representations is explained through a visual interpretation of the typical representation space.
# I. Dataset Construction Pipeline
Our dataset is produced almost entirely through automated generation and verification, with only minimal manual annotation and rigorous quality checks. Although Theory-of-Mind (ToM) reasoning is intrinsically complex, our script-driven workflow guarantees consistent alignment among visual inputs, agent actions, and narrative descriptions.
# I.1. Construction and Annotation
Map design. We manually created 27 distinct $10 \times 7$ maps in Excel, each with 3 rooms and unique layouts.
Automated validation and rendering. Map validity was verified with Python scripts (e.g., enclosed rooms, door placement). Then, using the MultiGrid library, we rendered maps with:
• Colour palette: assigned from 6 highly distinguishable colors (red, green, blue, yellow, purple, white).
• Agent placement: two groups of agents were randomly placed in hallways with colors distinct from rooms; initial orientations were randomized.
• Path planning: agent trajectories were generated using breadth-first search to ensure valid, logical movement without dead ends.
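The path-planning step can be illustrated with a plain breadth-first search over a grid. The wall encoding and the toy map below are assumptions for illustration, not the MultiGrid representation.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest agent path on a grid where 1 = wall, 0 = free
    (illustrative version of the trajectory-planning step)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            # walk back through predecessors to recover the path
            path = []
            while (r, c) != start:
                path.append((r, c))
                r, c = prev[(r, c)]
            return [start] + path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # goal unreachable: the map would be rejected as invalid

grid = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 2))
```

Because BFS explores in order of distance, the first time the goal is dequeued the reconstructed path is guaranteed shortest; an unreachable goal also doubles as a map-validity check.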
Task generation. The combination of different variables results in 648 basic samples. For each sample, we generate both “door open” (TB) and “door closed” (FB) conditions, totaling 1296 samples. Second-order belief tasks follow the same structure with minor narrative adjustments.
# I.2. Quality Assurance
• Automation-first: key elements (layout, paths, doors, task type) were generated and verified via script, minimizing subjective error.
• Human review: we manually reviewed samples for layout issues, trajectory logic, and narrative coherence.
• Staged execution: tasks were divided into three stages with controlled timing to ensure logical, coherent event flow.
• Controlled variables: we used unified logic for all visual and script elements, systematically varying only key factors (room order, agent orientation, colors, door state).
# I.3. On ToM Difficulty and Dataset Validity
• Controlled scenarios: carefully constrained scenes reduce noise, allowing clearer focus on ToM and multimodal reasoning.
• Scalability: current difficulty is moderate and sufficient for analyzing belief reasoning. We plan to expand with more complex scenarios in future releases.
# 1 Introduction
Medical machine learning (ML) models are trained on datasets containing diverse patient characteristics. However, when certain subgroups are over- or underrepresented, models may show unequal performance, raising fairness concerns. Addressing such disparities requires evaluation across subgroups—ideally with an intersectional perspective that considers overlapping dimensions of disadvantage (Foulds et al., 2019; Wang et al., 2022). This leads to the central question: How should we address subgroup performance disparities in the context of fairness in medical ML?
Fairness is a multifaceted concept that frequently arises in the context of machine learning systems.
A common definition describes fairness in decision-making as the ‘absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics’ (Mehrabi et al., 2021). Therefore, an ML system can be considered unfair if, despite the goal of achieving equally good performance across different subgroups, it exhibits substantial performance disparities. These disparities often result from bias, for example through biased training data (data bias) or a biased algorithm itself (algorithmic bias). Both terms encompass various subtypes of bias, such as minority bias, missing-data bias, or cohort bias, that can lead to poorer performance for certain subgroups (Ueda et al., 2024).
In machine learning, representation and performance disparities have been documented across modalities. For instance, large language models used in clinical settings may perpetuate stereotypes or marginalize certain identities when sociodemographic diversity is absent in training data (Alnegheimish et al., 2024; Lohse et al., 2024). Similar issues arise in structured EHR modeling, where label noise and skewed sampling exacerbate subgroup-specific errors (Sivarajkumar et al., 2023; Seyyed-Kalantari et al., 2020).
To address these challenges, prior work has taken different approaches. Some studies aim to improve dataset diversity or subgroup visibility in clinical training data (Rawat et al., 2024; Abraham and Idrobo, 2024). Others propose fairness-aware optimization objectives or subgroup-specific tuning to reduce performance gaps (Sivarajkumar et al., 2023). The importance of documentation and benchmarking has also been emphasized, especially for clinical imaging and foundation models, through standardized evaluation protocols across sensitive attributes (Jin et al., 2024).
Our work contributes to this growing field by offering a structured analysis of subgroup variation across three real-world multimodal medical prediction tasks (mortality, triage, and graft failure) and by advocating for routine reporting and subgroup validation as an integral part of the ethical assessment of medical ML models.
# 2 Experiment
We conduct our experiments on three multimodal clinical datasets, each containing textual data (e.g., clinical notes), structured static data (e.g., demographics), and, in two cases, time-series data (e.g., vital signs). All tasks involve patient-level predictions in distinct clinical settings.
Mortality Based on the MIMIC-III (Johnson et al., 2016) dataset from a US intensive care unit, this task involves predicting in-hospital mortality after the first 48 hours of admission (Yang and Wu, 2021). Data includes demographics, time-series vitals, and admission notes. It is framed as a binary classification and evaluated using AUC-ROC (ROC) and AUPRC (PRC).
Graft Failure This dataset comes from a German transplant center and includes structured data (e.g., demographics, comorbidities), time-series labs and vitals, and clinical texts. The task is to predict graft failure within 360 days of each visit, using binary classification with ROC and AUPRC as metrics.
Triage This dataset contains semi-structured ambulance records from a German emergency department, including structured features (e.g., vitals, pain score, Glasgow Coma Scale) and short text notes describing the accident and the patient's situation. The task is to classify patient urgency according to the Manchester Triage System (MTS), a multi-class classification problem evaluated using precision, recall, and F1 score.
# 2.1 Methods
We employ different machine learning models tailored to the characteristics of each dataset. The choice of method is influenced not only by the data modality and task complexity, but also by hardware constraints at the data hosting sites.
For Mortality prediction, we use a multimodal architecture that integrates irregular time-series and text data through interpolation-based embeddings and time-aware attention. Modalities are fused using interleaved self- and cross-attention layers, following the approach of Zhang et al. (2022) and Ravichandran et al. (2024). In the Graft Failure task, we apply a fast Gradient Boosting Regressor capable of handling static and time-series data as well as clinical notes, as described in Roller et al. (2022). For Triage, we apply a hybrid approach built around a transformer model for processing textual information, which is extended with a feed-forward network to integrate key structured features, as outlined in Maschhur et al. (2024). Additionally, expert rules are incorporated to better reflect aspects of the MTS and increase the recall for the most urgent classes.
# 2.2 Setup
Each model is trained on a predefined training set and evaluated on a fixed test set, referred to as the reference test. Using the same trained model, we then conduct a series of subgroup analyses by filtering the test set according to patient characteristics—for example, selecting only patients under 18 years old, or only female patients. Then, we compare the model’s performance on each subgroup against its performance on the full reference test set to investigate disparities across different patient groups.
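The subgroup evaluation loop described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the column names (`sex`, `age`), the toy random data, and the use of scikit-learn metrics are assumptions for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Toy stand-in for a fixed test set with model scores already attached.
n = 1000
test = {
    "y_true": rng.integers(0, 2, n),
    "y_score": rng.random(n),
    "sex": rng.choice(["male", "female"], n),
    "age": rng.integers(1, 95, n),
}

def evaluate(mask):
    """AUC-ROC and AUPRC on the test-set rows selected by a boolean mask."""
    y, s = test["y_true"][mask], test["y_score"][mask]
    return roc_auc_score(y, s), average_precision_score(y, s)

# Reference test: the full, unfiltered test set.
reference = evaluate(np.ones(n, dtype=bool))

# Subgroup analyses: filter the same test set by patient characteristics.
subgroups = {
    "female": test["sex"] == "female",
    "under_18": test["age"] < 18,
}
for name, mask in subgroups.items():
    roc, prc = evaluate(mask)
    print(f"{name:>10}: ROC={roc:.2f} PRC={prc:.2f} "
          f"(reference ROC={reference[0]:.2f})")
```

The key design point is that the trained model is held fixed; only the evaluation set is filtered, so any metric gap is attributable to the subgroup rather than to retraining.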
# 2.3 Subgroup Analysis Results
Tables 1–3 present results from our subgroup analysis across the three tasks. We observe that while overall performance is strong on the full test sets, notable variations emerge across subpopulations.
Table 1: Subgroup Analysis of the Mortality Task, using AUC-ROC (ROC) and Area under the Precision-Recall Curve (PRC).
Mortality: The model performs well overall (see Table 1), but subgroup differences are notable in PRC, which is more sensitive to class imbalance. For instance, PRC is highest among male (0.65) and Hispanic patients (0.77), but substantially lower for women (0.57) and Black patients (0.45), suggesting a performance disparity, particularly in recall-sensitive settings. The score decreases even further for Black women, to PRC=0.36 (not shown in the table).
Table 2: Subgroup Analysis of the Graft Failure Prediction Task, using AUC-ROC (ROC) and Area under the Precision-Recall Curve (PRC).
Graft Failure: As in the mortality task, subgroup differences are particularly notable in PRC (see Table 2). Predictions are most reliable for younger patients (PRC=0.72), male patients (0.61), and recipients of organs from living donors (0.70). Performance drops for older patients, women, and cases with deceased donors, groups that may require additional calibration or targeted support.
Table 3: Subgroup Analysis on Triage Prediction
Triage: For children, the most serious cases (red, orange) are detected less reliably (lower recall). Overall performance (see Table 3) for male and female patients, in contrast, is roughly similar to the reference test set; only the precision of the most serious class decreases for women, while it increases for men. For older patients, the model shows a very strong performance drop for the red and orange classes. Finally, when patient data does not include any age, and such missing crucial information occurs frequently in real-world emergency care data, recall drops across all classes. Using solely the transformer-based machine learning model, we see a similar pattern (see Appendix).
# 3 Analysis
# 3.1 Medical Analysis
In the following, a brief analysis from a medical perspective is provided.
Mortality ICU settings offer rich data but cannot fully capture bedside clinical judgment, which is hard to textualize and prone to bias. Early ICU assessments, especially under stress, may introduce human biases that models can reproduce. Biological differences, such as higher baseline blood pressure in Black patients, may also skew mortality predictions if not properly accounted for.
Graft Loss Graft loss risk is inversely linked to kidney function, estimated via creatinine-based eGFR. This is less reliable for frail patients with low muscle mass (common in elderly), possibly explaining reduced PRC. Gender bias may arise from the overrepresentation of men and the use of creatinine instead of sex-adjusted eGFR. Better performance in living-donor transplants may reflect generally improved outcomes, although this is harder to interpret due to many confounding variables.
Triage Medically, triage is a challenging task, as the “correct” category often requires diagnostic confirmation, which is not considered for the given task. Even experienced nurses frequently mislabel cases, and paramedics may overtriage due to time pressure or to err on the side of caution. Known biases—such as overtriaging children and undertriaging cardiorespiratory symptoms—are reflected in model performance, which deviates most in children and the elderly. Overall, the label noise and potential misclassification limit the validity of model evaluation. Reliable ground truth is essential for meaningful ML applications in this context, but a manual analysis shows a large number of false triage labels in the real-world data (about 30%).
# 3.2 Technical Analysis
Data Distribution All datasets are highly imbalanced with respect to the target events—such as mortality, graft failure, or red triage—which are rare and make machine learning tasks more challenging. Event frequency also varies across subgroups and between training and test sets, and subgroup sizes differ significantly, both in terms of total patients and percentage of target events. These factors can all impact model performance.
For instance, in the Mortality dataset, Asian patients make up only 2% of the data (train and test), compared to 71% for White patients, which may contribute to lower performance if subgroup-specific characteristics are important for prediction. However, the model performs worse on Black patients, who represent 9% of the population, than on Asians (2%) or Hispanics (3%). Interestingly, the mortality rate for Black patients is only 9%, compared to an overall average of 13%. The gender ratio is roughly 55:45 (male:female), which could also contribute to performance differences.
Similar patterns are observed in the other two datasets (see Appendix), suggesting that subgroup composition likely affects model performance but cannot fully explain the observed disparities.
Significance To examine concerns about spurious variation in small subgroups, where few positive cases can skew results, we conduct a one-sided nonparametric bootstrap hypothesis test on the Mortality task. We test whether the model performs significantly better on one subgroup (A) than on another (B). Overall, while we see certain trends in particular subgroups of the Mortality data, the test found no significant performance differences between men and women, Hispanics and Whites, or Whites and Asians. However, the model does perform significantly better for Whites compared to Blacks.
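A one-sided nonparametric bootstrap test of this kind can be sketched as below. The resampling scheme, iteration count, and synthetic data are assumptions for illustration; the paper's exact procedure may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_pvalue(y_a, s_a, y_b, s_b, n_boot=2000, seed=0):
    """Approximate one-sided p-value for H1: AUC(A) > AUC(B).

    Resamples each subgroup with replacement and reports the fraction
    of bootstrap AUC differences that are <= 0.
    """
    rng = np.random.default_rng(seed)
    diffs = []
    while len(diffs) < n_boot:
        ia = rng.integers(0, len(y_a), len(y_a))
        ib = rng.integers(0, len(y_b), len(y_b))
        # Skip degenerate resamples containing only one class.
        if y_a[ia].min() == y_a[ia].max() or y_b[ib].min() == y_b[ib].max():
            continue
        diffs.append(roc_auc_score(y_a[ia], s_a[ia])
                     - roc_auc_score(y_b[ib], s_b[ib]))
    return float(np.mean(np.array(diffs) <= 0.0))

# Synthetic example: subgroup A is well predicted, subgroup B is not.
rng = np.random.default_rng(1)
y_a = rng.integers(0, 2, 300)
s_a = 0.6 * y_a + 0.4 * rng.random(300)   # informative scores
y_b = rng.integers(0, 2, 300)
s_b = rng.random(300)                     # uninformative scores
p = bootstrap_auc_pvalue(y_a, s_a, y_b, s_b)
print(f"one-sided p-value: {p:.3f}")
```

A small p-value indicates that the performance advantage of subgroup A is unlikely to be an artifact of sampling variation in a small subgroup.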
# 4 Discussion
Our results highlight the variability of ML model performance across patient subgroups on different multimodal datasets in multiple tasks. While overall metrics may suggest good performance, a closer look reveals that models can underperform for specific subgroups, such as older patients or individuals from certain ethnic groups, as well as for patients with lower data quality or a particular transplant type. This poses a potential risk, particularly in clinical decision-making, where complex and difficult decisions must be made for vulnerable patient populations.
As we have shown, fairness can be understood as the requirement that different subgroups should exhibit similar performance and that the model should not ‘favor’ any particular subgroup. However, in order to be fair and to pursue the goal of achieving equal performance across all subgroups, transparency is essential. First, it must be recognized that the model performs differently across different subgroups. With this knowledge of the subgroup-specific performance disparities a particular model can still be used—especially since, in many real-world scenarios, achieving fairness in the sense of identical performance for all subgroups may not be feasible. But for that to be responsible, it is important that these models are accompanied by documentation similar to an ‘information leaflet’ or a ‘package insert’ (Samhammer et al., 2023; Ott and Dabrock, 2022) that includes subgroup-level performance metrics, an overview of the training data distribution, and disclaimers when certain subgroups are likely underrepresented. The EU AI Act even demands a respective documentation for high-risk AI systems (European Union, 2024). To this end, best practices and standards for reporting subgroup performance need to be developed. Such information can then guide clinicians in interpreting predictions, managing uncertainty, and identifying when to override or ignore model outputs.
At the same time, this transparency must not become a substitute for fairness, allowing largely unfair and biased models to be used uncritically and thereby reinforcing existing inequalities. Rather, transparency and fairness must be closely intertwined, with the recognition of poorer performance for certain subgroups prompting targeted efforts to improve outcomes specifically for those groups.
Ultimately, the goal should not be to prevent the use of models that do not perform equally for all possible subgroups, but to ensure they are used with awareness, and that this insight is used to improve the model specifically for those disadvantaged groups. A biased model with clear warnings and transparent evaluation may still bring benefit in clinical practice, especially in settings where no decision support exists otherwise. However, it is precisely this transparency enabled by subgroup analysis that can help further improve the model or even develop a new model specifically for those subgroups that are otherwise underrepresented. Finally, the knowledge about surprising performance discrepancies across patient subgroups can also trigger further research, as the underlying causes could also be medical rather than solely data-driven.
# 1 Introduction
Advances in Natural Language Processing (NLP) have turned language models into powerful tools, yet their impact on complex societal issues across disciplines remains unclear. Diverse expertise is crucial for evaluating their effectiveness beyond technical benchmarks. Current NLP agents, mostly powered by LLMs and dependent on static prompts, struggle to mimic human-like behavior and longitudinal interaction accurately. Integrating psychological frameworks can enhance NLP systems by modeling human cognition, social dynamics, and decision-making, leading to better representation of diverse stakeholders in interdisciplinary environments. We propose that LLM agents grounded in psychological frameworks would provide a novel approach to enhance stakeholder representation in interdisciplinary contexts. Our research explores how these agents are designed and validated, contributing to measuring NLP’s cross-disciplinary impact in three ways: demonstrating how psychological theories inform LLM application design and evaluation; providing evidence that interdisciplinary design principles yield measurable outcomes; and offering an integrated methodology for designing and evaluating LLMs even in transdisciplinary domains like sustainability.
# 2 Background and Related Work
Designing personas (predefined personality profiles that guide dialogue model responses) for LLMs currently presents challenges in aligning with realistic human cognition and personality traits. The presumption of equivalence between language proficiency and thought may overestimate reasoning capabilities (Mahowald et al., 2024). Recent NLP systems incorporate psychological theories but only observe effects, not explain causation (Sharma et al., 2024; Phelps and Russell, 2025). LLMs can approximate social behaviors but lack psychological plausibility in representing human motivations and their influence on decision-making processes. Traditional personality theories have been applied to understand and measure human personality dimensions (Serapio-García et al., 2025; Hilliard et al., 2024), but they provide limited insight into how behavior is shaped through interactions.
Evaluating persona effects is challenging.
Prompting techniques (Hu and Collier, 2024) and structured methods like persona codebooks (Tang et al., 2024; Tseng et al., 2024) offer frameworks but lack flexibility and generalizability. Researchers struggle to create metrics that balance consistency and adaptability. Ha et al. (2024) introduced customizable options, but they lacked coherent persona grounding, causing contextually unstable outputs. This becomes problematic due to evolving conversation topics (Fischer and Ram, 2024; Templeton et al., 2024).
Implementation challenges exacerbate theoretical and evaluative shortcomings. Current LLMs use fixed personas that hinder adaptation to evolving user needs, requiring detailed prompt engineering for customization. Persona-conditioning methods are inconsistent and ineffective (Giorgi et al., 2024). Even successful implementations raise ethical concerns as simple API-level instructions can significantly alter user perception (Deshpande et al., 2023). Dataset construction challenges persist, with many systems relying on social media or crowdsourcing, limiting representativeness (Lee et al., 2024b; Kim et al., 2023). Bowden et al. (2024) developed a large dataset of personalized Q&A pairs, but it was too large for many research applications. Fine-tuning individual LLMs remains computationally expensive and infeasible at scale. Non-static persona implementation based on in-dialogue interaction among agents has not been fully tested with highly diverse or conflicting personas (Cheng et al., 2024). Key challenges include balancing personalization depth with response diversity (Tang et al., 2024), maintaining coherence across sessions (Giorgi et al., 2024), and representing perspectives effectively.
Representation issues reveal fundamental limitations in current persona approaches. Persona-conditioning methods inadequately represent underrepresented populations (Santurkar et al., 2023), constraining social science applications. Wang et al. (2025) criticized how LLMs misportray marginalized groups by reflecting out-group stereotypes rather than authentic in-group perspectives. Substituting human participants with AI models fundamentally undermines representation, inclusion, and understanding (Agnew et al., 2024). Current prompt-based representation methods rely excessively on base models without addressing deeper representation issues (Liu et al., 2024; Li et al., 2023). Effective representation requires structural changes to model design and training methodologies rather than superficial prompt engineering.
# 3 Design
# 3.1 Social Cognitive Theory Fundamentals
Social Cognitive Theory (Bandura, 1978, 1986, 1989, 2001a, 2023) emphasizes how people learn through observation, experience, and environmental influences. At its core, SCT views humans as active agents who both influence and are influenced by their surroundings, rather than passive recipients of environmental forces. In everyday terms, SCT explains why we might adopt behaviors we see succeed in others, how our beliefs about our capabilities affect our choices, and why the same person might act differently in various social contexts. SCT has broad applications in education (Burney, 2008; Bembenutty et al., 2016; Schunk, 2001), organizational behavior (Bandura, 1988; Ozyilmaz et al., 2018), mass communication (Bandura, 2001b; Fu et al., 2009), and health (Bandura, 2000; Godin et al., 2008; Beauchamp et al., 2019).
SCT addresses limitations in current LLM persona approaches by moving beyond static prompts and traditional personality theories. Unlike fixed AI personas, SCT creates dynamic agents that evolve through interactions, similar to human development. This framework solves implementation challenges like inconsistent persona-conditioning and adapting to evolving contexts. For example, an SCT-based agent adjusts its reasoning based on new information and social context, rather than simply stating generic role-aligned viewpoints. To design psychologically grounded LLM agent personas, we ground our multi-LLM agent framework in SCT’s "triadic reciprocal determinism" (Bandura, 2023). As illustrated in Figure 1, SCT integrates personal factors (i), environment (j), and behavior (l) to enable psychologically plausible representation of diverse stakeholders. Our agents balance internal beliefs (personal factors) with external information (environment) to produce contextually appropriate responses (behavior), enabling realistic simulation with longitudinal interaction and dynamic adaptation while maintaining psychological coherence.
# 3.2 SCT-Based Agent Design Framework Overview
Our agent design (Figure 1) combines four personal factors (d) (cognitive, motivational, biological, and affective) with six SCT constructs (n) (self-efficacy, behavioral capability, self-regulation, reinforcement, expectations, and observational learning) to create psychologically grounded agents with diverse stakeholder perspectives and consistent behavior. Scenarios (k) serve as the environment (j), enabling agents to respond contextually while maintaining psychological fidelity. Continuous feedback loops influence behaviors and the environment, generating dynamic interactions that enhance realism in complex social contexts.
Figure 1: SCT Framework Using Personal Factors for Agent Design and Six Constructs for Cross-Scenario Evaluation within Triadic Reciprocal Determinism. Note: Light blue round squares indicate LLMs in the framework.
Our persona-generation process, illustrated in Figure 1(f-g), includes: prompting LLMs to generate responses by framing each query as "given this character’s profile, how would they answer this question"; using multiple language models to address single-model biases; and verification through both an LLM and two human coders, with conflicts resolved by majority rule. This methodology ensures consistent agent personas across diverse interactions.
# 3.3 Personal Factors Operationalization
The personal factors adapted from SCT (Table 1) include: cognitive factors (belief structures, knowledge base, and attitudes such as views on individual rights versus common good); biological factors (physical characteristics and demographic information relevant to self-concept); affective factors (emotional tendencies and feeling states influencing information processing and decision-making); and motivational factors (internal drives and goals directing behavior). These personal factors are implemented as a question-and-answer dataset that forms the foundation of each agent’s persona. We created 550 balanced questions (Figure 1(c)) covering four categories (Table 1 and Appendix A) and diverse dimensions like personal identity and social issues. Answers are generated using each agent’s profile, developed with diverse perspectives through LLMs. We use a novel-writing framing technique to simulate real-world interview answers and elicit detailed personas, maintaining consistency across stakeholder types.
# 3.4 Implementation
We use a Neo4j-backed graph database (Neo4j, Inc, 2025) to store personal factors for each agent persona. Each agent is powered by Llama-3.2-3B-Instruct as the base language model (Meta AI, 2024). The system organizes persona data hierarchically through Agent-Category-Dimension-Question relationships, allowing contextual retrieval of relevant information during conversations. The PersonaNeo4jAdapter (Figure 1(h)) imports personal factors from JSON datasets and retrieves agent-specific information via Cypher queries, using the mxbai-embed-large-v1 embedding model for semantic similarity (Lee et al., 2024a). During message processing, the architecture extracts relevant categories from incoming messages and retrieves corresponding personal factors to compile a background section for the language model prompt, ensuring relevant responses by incorporating only personal information relevant to the conversation topic.
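The Agent-Category-Dimension-Question hierarchy and the contextual lookup can be sketched as below. The Cypher query shape, property names, and the in-memory stand-in are assumptions for illustration; the actual PersonaNeo4jAdapter runs against a live Neo4j instance and adds embedding-based similarity.

```python
# Hypothetical Cypher for retrieving an agent's answers in one category
# (node labels and relationship names are assumed, not from the paper).
CYPHER_LOOKUP = """
MATCH (a:Agent {name: $agent})-[:HAS_CATEGORY]->(c:Category {name: $category})
      -[:HAS_DIMENSION]->(d:Dimension)-[:HAS_QUESTION]->(q:Question)
RETURN d.name AS dimension, q.text AS question, q.answer AS answer
"""

# In-memory stand-in mirroring the same Agent/Category/Dimension/Question
# hierarchy, with made-up example content.
persona = {
    "Sierra Jameson": {
        "cognitive": {
            "attitudes": [("Views on common good?", "Strongly pro-renewable")],
        },
        "motivational": {
            "goals": [("Career goal?", "Accelerate the energy transition")],
        },
    },
}

def background_for(agent, categories):
    """Compile the background section of the LLM prompt from only the
    personal factors matching the categories extracted from a message."""
    lines = []
    for cat in categories:
        for dim, qas in persona.get(agent, {}).get(cat, {}).items():
            for q, a in qas:
                lines.append(f"[{cat}/{dim}] {q} -> {a}")
    return "\n".join(lines)

print(background_for("Sierra Jameson", ["cognitive"]))
```

Filtering by extracted categories keeps the prompt focused on persona information relevant to the current conversation topic, rather than injecting the full 550-question profile.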
Table 1: Categories of Social Cognitive Theory and Sample Questions for Constructing Datasets
# 4 Evaluation
# 4.1 SCT Constructs as Evaluation Metrics
We implement SCT’s six core constructs as quantifiable metrics (Figure 1(n)) to assess how consistently agent personas respond when faced with contradicting information, regardless of the specific scenario context (Figure 1(j-k)). Each SCT construct serves as a distinct dimension for evaluating persona consistency (Table 2).
# 4.2 Evaluation Operationalization
Our methodology provides a domain-independent approach to persona evaluation through a systematic five-step process (Figure 1(e-n)). First, we establish initial SCT construct profiles for each persona. Second, we develop contradictory scenarios. Third, we analyze responses through all six SCT construct dimensions to measure consistency. Fourth, we compare responses against expected persona-consistent patterns. Finally, we track SCT construct expression changes across interaction rounds to evaluate temporal development. This framework supports quantitative assessment of persona consistency across diverse contexts. We quantify SCT construct expression on continuous scales (0.1 to 1.0), with higher values indicating better alignment with exemplars. The evaluation references comprehensive configuration examples illustrating varying levels of each SCT construct (detailed in Appendix C).
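A toy version of scoring one SCT construct on the 0.1 to 1.0 scale by alignment with an exemplar can be sketched as below. The token-overlap similarity is a placeholder assumption to make the scale concrete; the real pipeline uses LLM-based analysis against the configuration examples in Appendix C.

```python
def construct_score(response: str, exemplar: str) -> float:
    """Map Jaccard token overlap with an exemplar onto the [0.1, 1.0]
    scale, so a higher value indicates better alignment."""
    r, e = set(response.lower().split()), set(exemplar.lower().split())
    overlap = len(r & e) / len(r | e) if r | e else 0.0
    return round(0.1 + 0.9 * overlap, 2)

# Hypothetical exemplar for a high-self-efficacy response.
exemplar = "I remain confident in my assessment despite the new evidence"
print(construct_score("I remain confident despite this evidence", exemplar))
```

Tracking such scores across interaction rounds yields the temporal development signal described in the final step of the evaluation process.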
Table 2: Social Cognitive Theory Constructs and Evaluation Criteria for Agent Persona Assessment
# 4.3 Implementation
We implemented our agent evaluation using Neo4j, encoding six SCT constructs as quantifiable parameters within each agent persona (Table 2). The TextAnalyzer component (Figure 1 (m)) uses Llama-3.2-3B-Instruct (Meta AI, 2024) to analyze information across semantic, emotional, and SCT construct alignments. The system integrates Retrieval-Augmented Generation (RAG) with PersonaNeo4jAdapter to access persona information and enable persona-consistent responses. Our framework supports various evaluator types (LLMs, human experts, specialized algorithms) and involves recording responses during contradictory scenarios, analyzing them against construct exemplars, assigning normalized scores, and tracking temporal development through repeated evaluations.
# 5 Simulated Case study: Renewable Energy Transition Discourse among Diverse Stakeholders
# 5.1 Background
Our research uses renewable energy transition discourse as a test case for diverse stakeholder representation due to its cultural and group identity-based polarization (Kahan et al., 2015; Hart and Nisbet, 2012). In energy transition discussions, stakeholders interact in complex negotiations with conflicting information. Renewable energy is a stakeholder issue (Ruggiero et al., 2014) with persistent conflicts over its socio-political space (Lauber and Jacobsson, 2016). Stakeholders with diverse ideological stances must navigate conflicting claims about economic impacts, environmental consequences, and technological feasibility. Our SCT-based evaluation framework assesses how consistently these diverse stakeholders maintain their positions with conflicting information.
# 5.2 Personal Factors: Agent’s Persona
We developed five diverse agent profiles (Figure 2) with varying ideological orientations using GPT-4 (OpenAI et al., 2024) via ChatGPT (OpenAI, 2022), representing diverse stakeholders in energy transition discussions. Using ChatGPT, we controlled only the ideology of agents by creating fictional novel characters with different stakes in the renewable energy transition, allowing other personality aspects to emerge naturally. Each profile included comprehensive attributes: name, age, job title, ideology, physical characteristics, personality, personal background, job duties, hobbies, and concerns. We prompted LLMs to generate profile-consistent responses to 550 pre-defined questions using multiple language models: GPT-4o-mini (OpenAI, 2024), Mistral-7B-v0.1 (Jiang et al., 2023), and zephyr-7b-alpha (Tunstall et al., 2023). For each question, we used the framing "Given this character X’s profile, how would they answer this question?" to elicit authentic, persona-consistent responses. The responses were verified by the authors and GPT-4o to ensure consistency and accuracy. Human verification involved qualitative assessment of persona alignment, ensuring responses authentically reflected stakeholder perspectives and internal consistency with character profiles.
# 5.3 Environment: Contradicting Information Scenarios
To assess agent persona consistency in triadic reciprocal determinism (Figure 1), we designed contradictory information scenarios (k) challenging each agent’s core personal factors (i) and behaviors (l). These scenarios included foundational beliefs, counter-evidence, varying reliability, and domain relevance. For instance, Douglas Harrington (coal mining CEO) faced statements about renewable energy job creation and coal’s economic disadvantages, while Sierra Jameson (renewable energy consultant) encountered contradictory information about solar panel carbon footprints and reliability issues. Each contradictory statement contained hidden reliability metadata, with higher values representing well-supported information. This reliability range matched real-world information evaluation patterns: highest values for peer-reviewed scientific studies, moderate values for government reports, and lower values for non-peer-reviewed sources. This variation tested whether agents calibrated responses based on information quality. Each agent encountered five distinct scenarios presented sequentially with increasing complexity across multiple interaction rounds. After each presentation, we recorded responses for subsequent six SCT construct evaluation, revealing how different constructs manifested when agents navigated information challenging their personas.
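The scenario structure described above can be sketched with a simple data type. The field names and the 0-1 reliability scale are assumptions for illustration; the paper only specifies that reliability metadata is hidden from the agent and is higher for better-supported sources.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    target_agent: str
    statement: str
    reliability: float   # hidden from the agent; higher = better supported
    source_type: str

# Example scenarios mirroring the cases described in the text.
scenarios = [
    Scenario("Douglas Harrington",
             "Renewable energy now creates more jobs than coal.",
             0.9, "peer-reviewed study"),
    Scenario("Sierra Jameson",
             "Solar panel production carries a large carbon footprint.",
             0.4, "non-peer-reviewed source"),
]

# Scenarios are presented sequentially; here, ordered by reliability
# to illustrate calibration testing across information quality.
for s in sorted(scenarios, key=lambda x: x.reliability):
    print(f"{s.target_agent}: {s.statement} (reliability={s.reliability})")
```

Because the agent never sees the `reliability` field, comparing its responses across the scale tests whether it calibrates belief updates to information quality rather than to surface wording.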
# 5.4 Experimental Setup
Our experiment involved 5 interaction rounds where agents faced contradictory facts challenging their mental representations, with 100 iterations per condition for statistical validity. We presented scenarios with factual assertions that either aligned or contradicted the agent’s beliefs. Each scenario included domain-specific information and strategically positioned contradictory elements to challenge the agent’s core beliefs. The contradictions were calibrated to maintain plausibility and trigger belief reconciliation processes. Comprehensive analyses (bootstrap confidence intervals, round subset sensitivity, leave-one-out testing; Appendix E) confirmed our design’s validity and showed consistent effect size progression across rounds. Response patterns were measured through automated content analysis of agent outputs, tracking changes in certainty markers, reference to prior beliefs, incorporation of new information, and justification
Figure 2: Profiles of the five stakeholder agents (including, as far as recoverable, Sierra Jameson, 28, progressive renewable energy consultant, and Douglas Harrington, 60, conservative coal mining company CEO), listing name, age, job title, job duties, ideology, physical characteristics, personality, personal background, hobbies, and concerns.
strategies—providing quantitative metrics of cognitive adaptation processes. The agent architecture leverages a neuroscience-inspired Enhanced Memory System (Chang and Kim, 2025) with multitype memory, RAG-based retrieval, dynamic SCT constructs tracking, and source reliability integration—modeling realistic cognitive processes (Appendices B, C).
# 6 Results
# 6.1 Model Specification
We modeled SCT-based response patterns as a function of contradictory information scenarios and SCT constructs using two hierarchical linear models: a fixed-effects model (Model 1) and a time-varying model (Model 2). Both models were estimated with random intercepts for each iteration. A likelihood ratio test compared the models, yielding $\Lambda = 399.82$ ($p < .001$), suggesting that temporal interactions with SCT constructs improve model fit. Details are in Appendix D.1.3.
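The likelihood ratio comparison between the nested models can be sketched as below. The log-likelihood values are placeholders chosen only to reproduce the reported statistic, and the 6 degrees of freedom correspond to the six added construct-by-round interaction terms (an assumption based on the model specifications).

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_full, df_extra):
    """LR statistic 2*(ll_full - ll_restricted), compared against a
    chi-square distribution with df_extra degrees of freedom."""
    lam = 2.0 * (ll_full - ll_restricted)
    return lam, chi2.sf(lam, df_extra)

# Placeholder log-likelihoods chosen to reproduce Lambda = 399.82.
lam, p = likelihood_ratio_test(-12000.0, -12000.0 + 399.82 / 2, df_extra=6)
print(lam, p)
```

With a statistic this large, the p-value falls far below .001, matching the reported preference for the time-varying model.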
# 6.1.1 Model 1: Fixed Effects Model
The fixed-effects model assumes that six SCT constructs exert constant effects across all rounds:
$$
y_{ijt} = \beta_0 + \beta_1 \mathbf{C}_{ijt} + \sum_{k=2}^{7} \beta_k \mathbf{X}_{ki} + u_j + \varepsilon_{ijt}
$$
where $y_{ijt}$ represents SCT-based response patterns for agent $i$ in iteration $j$ at round $t$, $\mathbf{C}_{ijt}$ represents contradicting information scenarios, and $\mathbf{X}_{ki}$ are six SCT constructs. The terms $u_j$ and $\varepsilon_{ijt}$ represent the random intercept for iteration $j$ and the residual error term, respectively. Full equation details are provided in Appendix D.1.1.
# 6.1.2 Model 2: Temporal Development Model

To investigate how SCT constructs’ influence evolves across successive interaction rounds, we developed a temporal development model that extends the fixed-effects approach by incorporating dynamic interactions:
$$
y_{ijt} = \beta_0 + \beta_1 \mathbf{C}_{ijt} + \sum_{k=2}^{7} \beta_k \mathbf{X}_{ki} + \sum_{k=8}^{13} \beta_k \left( \mathbf{X}_{(k-6)i} \times t \right) + u_j + \varepsilon_{ijt}
$$
This formulation captures developmental trajectories by modeling SCT construct effects as functions of round number $t$ , allowing us to quantify systematic changes in construct influence over repeated interactions. The complete model with estimated trajectory coefficients is available in Appendix D.1.2.
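A minimal sketch of how such a design matrix could be assembled, using synthetic data and hypothetical dimensions (twelve observations over six rounds):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 12                                        # synthetic observations
C = rng.random((n_obs, 1))                        # contradicting-information scenario
X = rng.random((n_obs, 6))                        # six SCT construct scores
t = np.repeat(np.arange(1, 7), 2).reshape(-1, 1)  # round number (1..6) per row

# Columns: intercept, C, six main effects X_k, six interactions X_k * t,
# matching beta_0, beta_1, beta_2..7, and beta_8..13 in Model 2.
design = np.hstack([np.ones((n_obs, 1)), C, X, X * t])
print(design.shape)
```

The interaction columns are simply each construct score multiplied by the round number, which is what lets the fitted $\beta_{8..13}$ capture how construct influence drifts over rounds.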
# 6.2 Behavior: Response Patterns to Contradicting Information Scenarios
Our analysis reveals consistent agent responses to contradictory information aligned with model specifications. Table 3 shows Model 1 (fixed-effects) results where contradicting information consistently predicted SCT-based response patterns. The coefficient $(\beta_1)$ remained stable (1.71–1.74) with high explanatory power $(R^2\colon 0.58$–$0.61)$.
SCT-based agents demonstrated substantially stronger responses to contradicting information compared to the vanilla agent (coefficient $\sim 1.73$ vs. 0.36), a nearly 5-fold increase. The vanilla agent’s higher $R^2$ (0.83) coupled with its lower coefficient suggests more rigid, less psychologically plausible belief dynamics than our SCT-based implementation.
The consistency across agents with different backgrounds confirms our SCT framework successfully implements plausible persona dynamics regardless of stakeholder viewpoint. The mixed-effects version of Model 1 confirmed statistically insignificant agent differences when controlling for contradictory information (all $p > .85$, $\eta^2 =$
Table 3: Fixed-Effects Model (Model 1): Contradicting Information Effects by Agent
Note: ***$p < 0.001$
$0.0002$), supporting the $\beta_1$ coefficient stability and framework robustness across persona implementations.
# 6.3 Temporal Development of SCT Construct Effects
Building on Model 1’s results, we examined Model 2 to investigate how SCT constructs’ influence changes over time. Table 4 presents the estimated parameters from our temporal development model, showing statistically significant interactions $(p < .05)$ between SCT constructs and interaction rounds. This confirms our hypothesis that the influence of SCT constructs develops systematically over successive interactions.
Note: *$p < 0.05$, **$p < 0.01$, ***$p < 0.001$
Table 4: Temporal Development Effects Model (Model 2) Parameter Estimates
Figure 3 visualizes SCT construct effects across interaction rounds. Self-efficacy shows the strongest positive trajectory ($\beta = 0.318$, $p = .002$), with agents resisting contradicting information over time. Observational learning follows a positive trajectory ($\beta = 0.115$, $p = .035$), suggesting improved information evaluation with repeated exposure. Conversely, expectations ($\beta = -0.211$, $p < .001$), reinforcements ($\beta = -0.172$, $p = .025$), and self-regulation ($\beta = -0.135$, $p = .007$) negatively affect agents’ susceptibility to response modifications from contradictory information. Behavioral capability remains stable ($\beta = -0.036$, $p = .387$), indicating consistent knowledge application. Positive values represent SCT constructs enhancing resistance to contradicting information, while negative values denote increasing responsiveness. Self-regulation develops temporally, suggesting highly self-regulated agents become more responsive to contradictory information scenarios, which is valuable for evidence-based response adaptation.
Figure 3: Temporal Development of SCT Construct Effects Across Interaction Rounds with $9 5 \%$ Confidence Intervals
Table 5 quantifies the temporal development of each SCT construct across interaction rounds. The magnitude of these changes reveals substantial development, with Self-efficacy showing the strongest positive trajectory ($+1.59$ from Round 1 to Round 6) and Expectations demonstrating the most pronounced negative development ($-1.06$). These quantified changes illustrate how agent response patterns systematically evolve over repeated exposure to contradicting information.
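Because the construct-by-round interactions in Model 2 are linear in the round number, each construct's Round 1 to Round 6 change is simply its interaction slope times five. A quick check with the slopes reported in Section 6.3 reproduces these magnitudes:

```python
# Round-1-to-Round-6 change of each construct effect under Model 2's
# linear interaction: slope * (6 - 1). Slopes as reported in Section 6.3.
slopes = {
    "Self-efficacy": 0.318,
    "Observational learning": 0.115,
    "Expectations": -0.211,
    "Reinforcements": -0.172,
    "Self-regulation": -0.135,
    "Behavioral capability": -0.036,
}
change = {name: slope * (6 - 1) for name, slope in slopes.items()}
# Self-efficacy: +1.59; Expectations: about -1.06 (cf. Table 5)
```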
# 6.4 Principal Component Analysis (PCA) of SCT Constructs
PCA (Wold et al., 1987) revealed two key components explaining $73\%$ of SCT construct variance. PC1 (eigenvalue $= 2.76$, $46\%$ variance) showed positive loadings across all constructs, particularly Self-efficacy (0.464) and Reinforcements (0.466), representing a general "response tendency." PC2 (eigenvalue $= 1.62$, $27\%$ variance) differentiated learning-oriented constructs (Observational Learning: 0.600, Self-regulation: 0.553) from expectation-based constructs (Expectations: $-0.395$, Self-efficacy: $-0.335$). This component structure aligns with theoretical expectations that cognitive and behavioral aspects of SCT function as distinct but complementary dimensions in agent reasoning.
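The decomposition itself can be sketched as an eigendecomposition of the construct correlation matrix. The data below are synthetic, and no Varimax rotation is applied (unlike the reported analysis); this is only meant to show where the eigenvalues and explained-variance shares come from:

```python
import numpy as np

rng = np.random.default_rng(42)
scores = rng.normal(size=(100, 6))    # synthetic scores for six constructs

# PCA via eigendecomposition of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]     # sort components by eigenvalue
eigvals, loadings = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()   # variance explained by each component
print(explained[:2].sum())            # share jointly explained by PC1 + PC2
```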
Table 5: Summary Statistics of Temporal Development
Note: Standard errors estimated as $20\%$ of effect size. Results based on the 100-iteration experiment.
Table 6: Principal Component Analysis of SCT Constructs
Note: Principal Component Analysis with Varimax rotation and Kaiser Normalization. Significant loadings ($\geq 0.40$) are shown in bold.
The vector relationships visible in Figure 4 further illuminate how constructs operate together. Closely aligned vectors like Self-regulation and Observational Learning indicate these constructs frequently co-occur in agent responses, while the near-orthogonal relationship between Reinforcements and Observational Learning suggests these constructs operate relatively independently. This empirical structure provides valuable insight into how cognitive mechanisms interact when agents evaluate contradicting information of varying reliability.
# 1 Introduction
The association of textual mentions in a document to the entities they refer to in a Knowledge Graph (KG) is crucial for many Natural Language Processing (NLP) applications, such as question answering or information retrieval. This task is known as Entity Linking (EL), and it is a fundamental step in the transformation of unstructured text into structured knowledge. EL is usually performed as a pipeline with three different steps. The first one is Mention Detection, which detects the text spans that could possibly be linked to entities. It is followed by the Candidate Generation stage, which selects for each mention the top $k$ entities from the KG that could refer to it, usually based on precomputed probability distributions from entity-mention hyperlink pairs.
Finally in the Entity Disambiguation (ED) step, a final entity is selected from the previously generated set.
Usually, the ED problem is tackled by designing and training task-specific models with large amounts of data (e.g., Wikipedia dumps) [10,5]. In recent years, language models have been used for this task by making use of the mention’s context in the document to disambiguate between the possible solutions [21,10,7]. Additionally, some approaches incorporate the candidates’ descriptions [25], classes [32] (e.g., the categories they are tagged with in Wikipedia) or both [5] in the model’s input, by generating encodings for these text items. The addition of this knowledge allows zero-shot ED, enabling the models to classify entities that may have not been seen during training time.
Lately, new advances in Large Language Models (LLMs) such as GPT-3 [8], GPT-4 [2] or LLaMA-2 [40], have demonstrated remarkable performance in numerous NLP problems [44]. Given their large-scale and diverse training corpus, they are good candidates to perform tasks, even zero-shot ones, where general knowledge is needed for language processing, such as ED [12]. However, these LLMs still face some challenges, such as hallucination (i.e., the generation of statements that are factually incorrect) [20], and the lack of knowledge about concepts outside their training corpus. To mitigate these issues, the use of KGs to enhance LLMs has been proposed to address different problems [33]. There exist a large variety of KGs, storing information which can be encyclopedic (e.g., DBpedia [3] or YAGO [38], which extract information from Wikipedia), commonsense knowledge (e.g., ConceptNet [37], with information such as $\langle house, has, door \rangle$ or $\langle bed, usedFor, sleep \rangle$) or domain specific [1]. The explicit and structured knowledge they contain can be used to enhance the performance of LLMs, by leveraging it either during pre-training by enriching the training data [19], or during the inference stage [42,6,39]. Following the nomenclature proposed in [33], in this work we focus on KG-enhanced LLM inference, and apply it to the ED task. Our approach takes advantage of KGs to avoid retraining the LLM, and improves the effectiveness of zero-shot LLM approaches for ED.
Solving the ED problem using only LLMs would require instructing them to choose one of the entities from the candidate set given the document containing the mention. Instead, we propose to extract the candidates’ class taxonomy from a KG and use it to guide the disambiguation. For example, taking Query 1 in Figure 1, given ‘MTV awards’ appearing in the context, the entity ‘Justin’ is more likely to be a Musician than a Politician. Thus, we can use this context to guide the LLM by eliminating invalid solutions such as ‘Justin Trudeau’, rather than letting the LLM directly predict the entity. Moreover, when all the remaining candidates fall directly under the same class, we retrieve the candidates’ descriptions from a Knowledge Base (KB), such as Wikipedia, and append them to the disambiguation prompt (see Figure 1, query 2). With this Retrieval Augmented Generation (RAG) [23] stage, we provide the LLM with reliable information, reducing hallucination and enabling the LLM to perform predictions over new or unusual entities which may not have been present in the training corpus.
Fig. 1. Overview of the two steps of our approach.
Therefore, our contributions are as follows:
– We present a method to enhance LLMs in the ED task by leveraging the candidate entity class taxonomies available in KGs. Moreover, we also augment the prompt with the entity descriptions, in order to allow the disambiguation of unseen or difficult entities.
– We evaluate the method against non-enhanced LLMs, description-only enhanced LLMs and a task-specific model by using ten ED datasets. The results show that our approach improves the disambiguation capabilities of LLMs and has a higher degree of adaptability to different domains than the task-specific model.
– We discuss how using KGs with different levels of semantic expressivity (e.g., YAGO and DBpedia) affects the proposed pruning algorithm, by studying both the ED results and the algorithm’s performance.
– We study and classify the cases in which our method fails to correctly disambiguate the mention and provide insights on the possible improvements.
The remainder of the paper is structured as follows. In Section 2 we discuss the Related Work. In Section 3 we formalize the problem and present the proposed approach. In Section 4 we describe the different experiments, discuss the results, and conduct an error analysis. Finally, in Section 5 we provide our conclusions and ideas for future work.
# 2 Related Work
This section begins with an overview of different ED methods which leverage external information to improve their predictions. Then, the recently emerged concept of KG-enhanced LLMs is introduced, enumerating some of the proposed methods for tackling NLP tasks.
# 2.1 Knowledge-augmented ED
Various ED approaches use model architectures that leverage the mention, its surrounding context and candidate entities to generate a solution [21,10,7]. However, some recent works incorporate additional knowledge to the model’s input in order to improve the disambiguation of entities which are not present in the training dataset. This extra information is usually gathered from online sources (e.g., Wikipedia and Wikia) and provided in the form of entity descriptions [25], entity types [32] or both [5]. Additionally, some works leverage the structured information contained in KGs to enhance the model’s performance. In [35], information about the entity types from DBpedia and knowledge graph embeddings extracted from Wikipedia’s graph structure are incorporated into the model’s input. In [4], KG triples are used to train a component of the model’s architecture which predicts the existence of facts between mentions in a given document. The result of this prediction is used as input for the final model, which also leverages entity types and descriptions. Finally in [29], the triples from the KG are verbalized and appended to the input sentence before being fed to the model.
These knowledge-augmented ED approaches incorporate the additional information into their models’ input; the models are mainly built on LLMs such as BERT [11], RoBERTa [24] or BART [22], and need to be trained or fine-tuned with large amounts of data (e.g., Wikipedia dumps with millions of entities). In contrast, in our approach we rely on the new generation of generative LLMs (e.g., GPT-3 [8], GPT-4 [2] or LLaMA-2 [40]), and solve the ED task by prompting the LLMs in a zero-shot manner without needing to train a task-specific model. This approach has also been explored in [12], where the document’s context and the inherent knowledge from the LLM are enriched with the entity descriptions, following a RAG approach [23]. RAG has been shown to be useful for incorporating new or relevant information into LLMs, and it has also been leveraged in a specific step of our proposal. However, our main focus is on the usage of KGs to obtain the class hierarchy for the candidate entities, which allows our method to solve the ED task by guiding the LLM to the correct answer (see Section 3.2).
# 2.2 KG-enhanced LLMs
LLMs can be used to solve a wide range of tasks, not just ED. However, as introduced in Section 1, LLMs suffer from problems such as hallucination, which can be accentuated if the information requested is outdated or not present in the training data. Retraining LLMs to incorporate this missing knowledge is expensive and time-consuming, and fine-tuning them could lead to problems such as catastrophic forgetting (i.e., the LLMs’ tendency to lose previously obtained knowledge when being fine-tuned with new data) [26]. To solve these issues, KGs can be used as a source of additional structured information in different NLP tasks. In particular, information from a KG can be added to the prompt fed to the LLMs, a technique coined as KG Prompting [33], which has already been explored for Question Answering. In [42], the approach starts by identifying the entities in the question, and then the KG is queried to build subgraphs including them. After that, the LLM is prompted to comprehend and aggregate the subgraphs, and based on the consolidated result it is asked to reason over it and provide the answer. Similarly in [39], the LLM generates these subgraphs by iteratively exploring a KG to create a reasoning path over it. In each iteration, if the LLM believes it has enough information, an answer is provided. Otherwise, it is prompted to continue to traverse the graph, adding the most promising relation to the existing reasoning path each time. Finally in [6], the entities are also first extracted from the question, which are then used to retrieve the triples they participate in within the KG. Then, the triples are verbalized and appended to the prompt as context, which is fed to the LLM to obtain the answer.
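The triple-verbalization style of KG prompting described above can be sketched as follows. The triples (borrowed from the ConceptNet examples in Section 1), helper names, and prompt wording are all illustrative, not taken from any cited system:

```python
# Sketch of triple-verbalization KG prompting: retrieved triples are
# turned into sentences and prepended to the question as context.
# Triples, helper names, and prompt wording are illustrative.
def verbalize(subj, rel, obj):
    return f"{subj} {rel} {obj}."

triples = [
    ("house", "has", "door"),
    ("bed", "usedFor", "sleep"),
]
context = " ".join(verbalize(*t) for t in triples)
prompt = f"Context: {context}\nQuestion: What does a house have?\nAnswer:"
print(prompt)
```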
In our approach, however, we solve a different task, ED, and we rely on the KG’s ontology rather than on the annotated instances, guiding the disambiguation of the entities using the class hierarchy.
# 3 ED with KG-enhanced LLMs
In this section we lay out the formulation of the problem to be solved and describe the two different steps of the proposed method.
# 3.1 Problem Formulation
Let $C = \{ e_1, e_2, \ldots, e_k \}$ be a set of $k$ candidate entities belonging to a KG containing a class hierarchy in which the entities are annotated, and let $m$ be a mention in a document $d$. The objective of ED is to assign to $m$ the entity $e$ it refers to, such that $e \in C$.
# 3.2 Method
Our proposed method for the disambiguation of the mention can be divided in two steps. First, a subgraph is generated containing the candidate entities together with their taxonomy of classes. Then, a pruning algorithm is applied to iteratively discard the candidate entities until there is only one left in the subgraph, which will be the solution (see Figure 1). The implementation can be found in the Supplemental Material.
Subgraph Generation Given the candidate set $C$ for a mention $m$, a directed acyclic graph (DAG) $G$ is created from the KG, having the general class Thing as its ‘root’ (i.e., the only node without predecessors) and the candidate entities as ‘leaves’ (i.e., the nodes without successors). Note that $G$ cannot be considered a tree, as a node can have multiple predecessors (e.g., ‘Justin Timberlake’ is linked to the class Musician and also to the class Actor).
First of all, the candidate entities are linked to the classes they belong to (see Figure 2, step 1). Then, the classes that are not predecessors of any of the candidates are removed from $G$ (see Figure 2, step 2). Next, the relations that can be inferred by traversing $G$ through a more granular path of relations are also removed, as well as self-pointing relations (see Figure 2, step 3). Finally, intermediate nodes which have only one direct successor, where that successor is not an entity, are also iteratively removed from $G$, linking the direct successor to the node’s direct predecessors (see Figure 2, step 4). With this last step, we aim to increase the granularity and ease the disambiguation, as the classes in the higher levels of the hierarchy tend to be more abstract (e.g., for the class Musician, the path from the root in DBpedia is $Thing \to Species \to Eukaryote \to Person \to Artist \to Musician$). In some of the more complex KGs (e.g., YAGO), an entity could also be considered a class. Therefore, if there exist other entities in the candidate set that are linked to this entity, an extra preprocessing step is needed to transform the entity into a leaf, by removing the links to its direct successors while linking them to its direct predecessors.
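Step 3 (dropping relations that are implied by a longer, more granular path, plus self-pointing relations) can be sketched on a dict-based DAG. The edge direction (class to subclass/entity) and all node names here are illustrative assumptions, not the paper's implementation:

```python
# Sketch of step 3: remove edges implied by a longer (more granular) path,
# plus self-pointing edges. The graph maps each node to its direct
# successors; edge direction and node names are illustrative assumptions.
def reachable(graph, start, goal, skip_edge):
    """True if `goal` is reachable from `start` without using `skip_edge`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if (node, nxt) == skip_edge or nxt in seen:
                continue
            if nxt == goal:
                return True
            seen.add(nxt)
            stack.append(nxt)
    return False

def remove_inferred_edges(graph):
    pruned = {n: list(s) for n, s in graph.items()}
    for node, succs in graph.items():
        for succ in succs:
            # Drop self-loops and edges whose endpoints stay connected
            # through some other (longer) path.
            if succ == node or reachable(pruned, node, succ, (node, succ)):
                pruned[node].remove(succ)
    return pruned

g = {
    "Thing": ["Person", "Musician"],   # Thing -> Musician is implied
    "Person": ["Musician"],
    "Musician": ["Justin Timberlake"],
}
pruned_g = remove_inferred_edges(g)
print(pruned_g)
```

On this toy graph the shortcut edge Thing → Musician is removed, since Musician is still reachable through the more granular path via Person.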
Fig. 2. Overview of the steps for the creation of the DAG.
Fig. 3. Example of the three different configurations of the LCA’s direct successors.
Input: Subgraph $G$, mention $m$, document $d$ and entity descriptions
Output: Entity
1  candidates ← leaves($G$);
2  while len(candidates) ≠ 1 do
3      LCA ← LCA($G$, candidates);
4      directSuccessors ← directSuccessors($G$, LCA);
5      if allDirSuccessorsAreClasses then
6          response ← multiChoice(directSuccessors ∪ {None}, m, d);
7          if response ≠ None then
8              $G$ ← prune($G$, directSuccessors \ {response});
9          else
10             response ← multiChoice(candidates, m, d, descriptions);
11             $G$ ← prune($G$, candidates \ {response});
12     else if allDirSuccessorsAreEntities then
13         response ← multiChoice(directSuccessors, m, d, descriptions);
14         $G$ ← prune($G$, directSuccessors \ {response});
15     else
16         $D_c$, $D_e$ ← getClassesAndEntities(directSuccessors);
17         response ← multiChoice($D_c \cup \{Other\}$, m, d);
18         if response = Other then
19             $G$ ← prune($G$, $D_c$);
20         else
21             $G$ ← prune($G$, directSuccessors \ {response});
22     candidates ← leaves($G$);
Pruning Algorithm The pruning algorithm is outlined in Algorithm 1. Given the generated graph $G$ and the initial candidate entities (i.e., its leaves), the algorithm starts by finding the Lowest Common Ancestor (LCA) of the candidate entities. The LCA is defined as the deepest node (i.e., the furthest from the root) which is an ancestor of all the candidates (see dashed nodes in Figure 3). Then, the direct successors of the LCA are retrieved, which leads to three different scenarios:
1. All the direct successors are classes (Figure 3, case 1): The LLM is prompted to select which classes the mention $m$ belongs to. All the candidate classes that are not chosen by the LLM are removed from $G$, along with all the nodes that have become disconnected from the root. This case corresponds to lines 5-12 in Algorithm 1.
2. All the direct successors are entities (Figure 3, case 2): The LLM is prompted to directly select the entity $m$ refers to. Here, the description of each candidate entity is retrieved from a KB and appended to the prompt. The non-selected candidates are then removed from $G$ . This case corresponds to lines 13-15 in Algorithm 1.
3. Direct successors are classes and entities (Figure 3, case 3): The direct successors are organized into classes $( D _ { c } )$ , and entities $( D _ { e }$ ). The LLM is then prompted to select a class from $D _ { c } ^ { \prime } = D _ { c } \cup O t h e r$ , where Other is an additional class which encompasses $D _ { e }$ . If the LLM selects a class belonging to $D _ { c }$ , the remaining classes and the entities $D _ { e }$ are removed from $G$ . If $O t h e r$ is selected, the classes from $D _ { c }$ are removed. Finally, the nodes which have become disconnected from the root are also removed. This case corresponds to lines 16-22 in Algorithm 1.
During the initial tests it was found that the LLM may not return a valid response when it considered that none of the presented classes matched the mention. Therefore, in case 1 we additionally add the class None, which triggers a case 2 prompt with the remaining candidates if it is selected. Finally, in order to guarantee that the LLM always has information about the entity before making a decision, the response is assessed by the LLM when a single entity is left after a case 1 or case 3 step. If it is negatively evaluated, a complete case 2 prompt is triggered.
The algorithm runs until there is only one leaf (i.e., entity) left in $G$ , which will be the final response. Therefore, in the worst-case scenario the LLM will be prompted $k$ times.
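A drastically simplified driver for this loop, with the LLM call stubbed out, illustrates the guided descent through the class hierarchy. It ignores the LCA computation, the None/Other fallbacks, and multi-parent nodes; the graph, mention, and the stub's preferences are all hypothetical:

```python
# Drastically simplified sketch of the guided disambiguation loop: walk the
# class DAG top-down, letting a (stubbed) LLM pick one successor per step.
# Graph, mention, and the stub's preferences are hypothetical.
GRAPH = {
    "Thing": ["Person", "Place"],
    "Person": ["Musician", "Politician"],
    "Musician": ["Justin Timberlake"],
    "Politician": ["Justin Trudeau"],
    "Place": ["Justin, Texas"],
}

def stub_multi_choice(options, mention, document):
    # Stand-in for the multiChoice LLM prompt; prefers the music-related
    # branch, mimicking an LLM reading 'MTV awards' in the context.
    preferred = ["Person", "Musician", "Justin Timberlake"]
    return next(o for o in preferred if o in options)

def disambiguate(graph, mention, document, root="Thing"):
    node = root
    while node in graph:                 # class nodes have successors
        node = stub_multi_choice(graph[node], mention, document)
    return node                          # entities are leaves

print(disambiguate(GRAPH, "Justin", "Justin performed at the MTV awards."))
# -> Justin Timberlake
```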
# 4 Experiments
In this section we discuss the experiments performed. First, we describe the experimental settings, then we evaluate our proposal against different methods and also analyze the effect of the KG used. Finally we study the different scenarios that lead to our method failing to correctly disambiguate the mention.
# 4.1 Settings and Datasets
Datasets We evaluate the approach on ten popular ED datasets, the same as in [12], which are from news and online articles (MSN [9], AQU [27], ACE04 [34], CWEB [13], R128 [36] and R500 [36]), from Wikipedia (WIKI [15], OKE15 [30] and OKE16 [31]) or from hand-crafted, brief and ambiguous sentences (KORE [17]). These datasets contain documents for which one or various mentions have been annotated with the ground truth entity they refer to. The dataset statistics are summarized in Table 1.
Candidate Sets To allow comparability, we borrow the candidate sets from [12], which combine two methods to obtain sets of size 10. First, as done in previous works [21,10,5], Wikipedia hyperlink count statistics from mention-entity pairs are used to generate the candidates. If not enough candidates are found, the set is augmented by generating candidates with the BLINK model [43], which is based on dense retrieval from context and descriptions.
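The hyperlink-statistics part of candidate generation can be sketched as a frequency ranking of mention-entity pairs. The pairs and counts below are made up for illustration, and the BLINK dense-retrieval top-up mentioned above is not sketched:

```python
from collections import Counter

# Sketch of prior-based candidate generation from mention-entity hyperlink
# statistics. The pairs are made up for illustration.
hyperlink_pairs = [
    ("Justin", "Justin Timberlake"),
    ("Justin", "Justin Timberlake"),
    ("Justin", "Justin Trudeau"),
    ("Justin", "Justin Bieber"),
    ("Phoenix", "Phoenix, Arizona"),
]

def candidates(mention, pairs, k=10):
    # Rank entities by how often hyperlinks with this anchor text point
    # to them, keeping the top k.
    counts = Counter(e for m, e in pairs if m == mention)
    return [entity for entity, _ in counts.most_common(k)]

print(candidates("Justin", hyperlink_pairs, k=2))
```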
Table 1. Overview of ten considered datasets’ statistics.
Knowledge Graphs To obtain the hierarchical representation of the classes we use YAGO [38]. It primarily leverages the information from Wikipedia’s infoboxes for generating the relations between entities, and for the taxonomy it borrows the top-level representation from the schema.org ontology [14], which is further refined by carefully integrating it with the fine-grained Wikidata [41] taxonomy. Additionally, in Section 4.3 we study the effect of the granularity of the annotation of the classes. To this end, we use another KG with a simpler class hierarchy, DBpedia [3], which is also built on top of Wikipedia but uses a shallow and manually created ontology to define the representation of classes. Finally, to retrieve the entity descriptions we use Wikipedia as a KB, and they are truncated at 250 characters before being appended to the prompt.
Evaluation Metric We report our results with inKB micro-F1 score (see Equation 1). InKB means that we only consider a mention if the ground truth entity is present in the KG used. To allow comparability between KGs (i.e., YAGO and DBpedia do not have the same entities annotated), we also report the results by considering the percentage of the Gold F1 score achieved. The Gold F1 score is the maximum inKB micro-F1 score that could be obtained, as the candidate sets do not always contain the ground truth entity.
$$
\mathrm{micro\text{-}F1} = \frac{\mathrm{TP}}{\mathrm{TP} + \frac{1}{2}(\mathrm{FP} + \mathrm{FN})}
$$
Additionally, given the differences in dataset sizes we report the weighted average, weighting each score by considering the number of instances of each dataset.
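The metric and the size-weighted aggregation can be sketched as follows; the counts and dataset names are illustrative, not the paper's results:

```python
# Micro-F1 from pooled counts, and the dataset-size-weighted average used
# for aggregation. Counts, sizes, and dataset names are illustrative.
def micro_f1(tp, fp, fn):
    return tp / (tp + 0.5 * (fp + fn))

# (score, number of evaluated mentions) per hypothetical dataset
per_dataset = {"news": (micro_f1(80, 10, 10), 100),
               "wiki": (micro_f1(45, 5, 0), 50)}

weighted_avg = (sum(score * n for score, n in per_dataset.values())
                / sum(n for _, n in per_dataset.values()))
print(round(weighted_avg, 3))
```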
Large Language Models To perform our experiments we use GPT-3.5, concretely the gpt-3.5-turbo-1106 model from OpenAI API, setting its temperature to 0 to decrease the randomness and the creativity of the response, as we are interested in factual answers. The reason behind the selection of this LLM is in a trade-off between reasoning capabilities, operating cost and API availability.
# 4.2 Results
To evaluate the proposed method we compare it to a non-enhanced LLM baseline and to ChatEL [12], the only approach that, to the best of our knowledge, also directly prompts LLMs to solve the ED task in a zero-shot manner, without training or fine-tuning any model. Additionally, we compare it to ReFinED [5], a task-specific model that requires extensive training and obtained the best ED performance in the results reported in [12]:
– Baseline: The baseline consists of asking the LLM to directly select one of the entities within the set of candidates. Therefore, it has neither the class representation nor the entities’ descriptions. This baseline corresponds to a non-enhanced LLM approach. Its implementation can be found in the Supplemental Material.
– ChatEL [12]: The ED task is solved in two steps. First, the LLM is asked to describe what the mention in the document is referring to. Then, another prompt is created asking the LLM to select the candidate entity that best matches the description generated in the previous response, also enriching the candidates with their descriptions from Wikipedia. It must be noted that our approach always returns an answer. However, in [12] an empty result is produced (i.e., a prediction is not performed) when the LLM’s response does not contain any candidate entity, which we observe happens when the response is not in the candidate set or when there is not enough context. This affects the computation of the precision (and thus the inKB and Gold F1-score), as the number of false negatives can potentially be reduced. These observed differences in the F1-score have been mitigated by computing the achieved gold percentage, making the proposals comparable.
– ReFinED [5]: An ED-specific method built over the RoBERTa architecture, leveraging entity types and descriptions. It is pretrained on a Wikipedia dataset with more than 100M mention-entity pairs, and fine-tuned on AIDA-CoNLL [18], a news-related ED dataset with approximately 25,000 annotated mentions.
The results are shown in Table 2. First of all, it can be observed that the proposed approach outperforms the baseline in all of the datasets. This demonstrates that even with the vast amounts of data with which the LLMs have been trained and their reasoning capabilities, the addition of external knowledge on the prompts and the guidance during the disambiguation can be helpful to improve the performance on the ED task. One of the most frequent mistakes made by the baseline approach is to give more importance to the context than to the mention. For instance, in the sentence ‘A six-game begins this Friday in Phoenix and the team hopes to get O’Neal [...]’, the baseline links the mention to the entity ‘Phoenix Suns’, presumably given the basketball context. However, the mention is referring to a place, which is correctly resolved by the KG-enhanced approach, as in the first iteration the LLM correctly disambiguates between the classes Organization, Place, Product or FictionalEntity.
Table 2. Results for the inKB micro F1-score for the ED experiments with ten datasets. The ChatEL and ReFinED scores are taken from the results reported by the authors in [12]. The weighted average weights each score by taking into consideration the sizes of the datasets. The best score for each dataset is highlighted in bold.
Regarding the comparison with ChatEL, it can be observed that our approach obtains better results in 6 out of 10 datasets, with a weighted average score 1.8 percentage points higher. Additionally, the complete ChatEL evaluation uses GPT-4, which is a bigger and more powerful LLM than GPT-3.5 [2], with a cost per token more than 20 times higher.1 Therefore, even while using a much less powerful LLM, the proposed approach leads to improvements in the ED task. Additionally, the added cost of manipulating the graph structure (e.g., finding the LCA and pruning) is limited by the small number of candidates used in ED, which typically ranges from 5 to 30, and its execution time is two orders of magnitude lower than that of the LLM calls.
For the task-specific model, we can observe that it obtains better performance in 6 of the datasets, and a weighted average score 1.5 percentage points higher. However, it is worth noting that it has been trained over a huge Wikipedia dataset and fine-tuned on an ED dataset about news, and for the only dataset outside these domains, KORE, our model outperforms it by more than 25 percentage points. Therefore, the LLM methods show a greater degree of adaptability, and can compensate for the decrease in performance on some datasets by not requiring the training of specific models.
# 4.3 KG Expressivity Impact
In this section we evaluate how differences in the semantic expressivity of the KG's class taxonomy affect our approach. Concretely, we explore whether reducing the granularity of the taxonomy affects its disambiguation capabilities. To this end, we use the YAGO and DBpedia KGs, whose statistics are summarized in Table 3. It can be observed that YAGO has more than a thousand times as many classes as DBpedia, and nearly doubles the average depth of the path from an entity to the root. Therefore, YAGO has a more granular class representation and also annotates more semantic interpretations of the entities. For instance, as exemplified in Figure 4, the annotation of Barcelona in YAGO distinguishes between its representation as a Place and as an Organization, whereas in DBpedia Barcelona is only considered a Place. Additionally, we can also observe the difference in the number of classes and their granularity. For example, DBpedia stops at the city level, while YAGO also classifies municipalities within their country and region.
To evaluate the two KGs under study, we repeat the same experimental settings as in Section 4.1, keeping in mind that the inKB entities do not completely overlap between the two KGs, thus affecting the Gold F1-score. The results can be seen in Table 2, where in 7 out of 10 datasets the more granular class representation, YAGO, achieves better performance, and the weighted average score is 2.4 percentage points higher. This reinforces the hypothesis that having a more semantically rich class taxonomy can help in the disambiguation task. We observe that YAGO fails to outperform DBpedia primarily in the OKE datasets, which contain a large number of mentions referring to generic occupations (e.g., Governor, Judge, Engineer, etc.). For instance, in the second iteration of the method for the sentence ‘As governor, Reagan raised taxes [...]’, the disambiguation is between the entity ‘Governor’, which is the ground truth answer, and the class Head of Government, which has other entities as successors (e.g., ‘Governor of California’). This causes a case 3 (see Section 3.2) disambiguation between Head of Government and Other, which leads to the LLM selecting the former as it properly fits the context.

Fig. 4. Class representation of the entity Barcelona in DBpedia (left) and YAGO (right) KGs.
Table 3. Metric comparisons from YAGO and DBpedia KGs [16].
Regarding the number of iterations, both KGs exhibit similar behavior, with a mean value close to 2.2 (see Table 4). Therefore, even though YAGO has a deeper taxonomy, this is compensated by its superior semantic expressivity and mitigated by the elimination of intermediary nodes in the preprocessing step (see Section 3.2). Hence, given that the execution time of the graph manipulation is two orders of magnitude lower than that of the LLM calls, using deeper graphs does not significantly affect the performance.
Table 4. Percentage of disambiguated entities in which the pruning algorithm reached a final single entity within the specified number of iterations.
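The iteration counts above come from descending the class taxonomy level by level. The following is a minimal sketch of that iterative pruning idea (not the authors' implementation): at each step, the surviving candidates' classes are compared one level deeper in the taxonomy, an oracle (`choose`, played by the LLM in the paper) selects one class, and candidates outside it are pruned. The toy taxonomy, function names, and the `choose` callback are assumptions for illustration only.

```python
# Toy taxonomy: class -> parent (root has parent None).
PARENT = {"Thing": None, "Place": "Thing", "Organization": "Thing",
          "City": "Place", "SportsTeam": "Organization"}

def root_path(cls):
    """Return the class path from the taxonomy root down to `cls`."""
    path = []
    while cls is not None:
        path.append(cls)
        cls = PARENT[cls]
    return path[::-1]  # e.g. ["Thing", "Place", "City"]

def disambiguate(candidates, choose):
    """candidates: entity -> leaf class; choose(options) emulates the LLM pick.
    Returns the remaining entity and the number of disambiguation iterations."""
    paths = {e: root_path(c) for e, c in candidates.items()}
    alive, iters, depth = set(candidates), 0, 1  # depth 0 is the shared root
    while len(alive) > 1 and depth < 10:
        # Class of each surviving candidate at the current depth.
        at_depth = {e: paths[e][min(depth, len(paths[e]) - 1)] for e in alive}
        options = set(at_depth.values())
        if len(options) > 1:  # a genuine disambiguation step (one LLM call)
            picked = choose(options)
            alive = {e for e in alive if at_depth[e] == picked}
            iters += 1
        depth += 1
    return next(iter(alive)), iters
```

For example, with candidates {'Barcelona (city)': City, 'FC Barcelona': SportsTeam} and a chooser that picks Place, the city entity survives after a single iteration, consistent with the small mean iteration counts reported in Table 4.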
# 4.4 Error Analysis
We thoroughly examined and categorized the scenarios that led to our method producing an incorrect disambiguation, as understanding them is crucial for assessing the capabilities and limitations of LLMs in this task.
Ground truth errors These errors concern inaccuracies in the annotation of the datasets. For instance, in the sentence ‘[...] it is required excellent English communication skills [...]’, the mention English is annotated as ‘England’ instead of ‘English Language’.
KG errors These errors encompass problems derived from the annotation of the entities’ classes in the KGs. For example, in the sentence ‘Mars, Galaxy and Bounty are chocolate [...]’, the ground truth answer ‘Bounty (chocolate bar)’ is wrongly annotated in DBpedia as an Architectural Structure, causing the pruning algorithm to fail. Additionally, the annotations can also suffer from inconsistencies. For instance, the ‘Supreme Court of Florida’ falls under the Organization class, while the ‘Supreme Court of California’ is considered a Building.
Ambiguous errors Some datasets contain sentences with a high degree of ambiguity. For instance, in the sentence ‘Justin, Stefani and Kate are among the most popular people both on MTV and Twitter’, the disambiguation between ‘Justin Timberlake’ and ‘Justin Bieber’ is not clear, as both are popular celebrities on those platforms and have collaborated with the other mentioned artists. Moreover, some ground truth labels could be argued to be incorrect. For example, in the sentence ‘accepted the post of principal and only teacher at a primary school in rural Blaauwbosch, Newcastle.’, principal is annotated in the ground truth as ‘Principal (Academia)’, yet for primary schools in the UK a more appropriate term would be ‘head teacher’, which is also found in the candidate set.
LLM errors Finally, some errors are produced by the LLM’s response. These usually originate from the LLM missing information from the context and incorrectly resolving the entity, or from wrongly interpreting the mention and assigning it to an erroneous class.
In Table 5, all the errors from the two smaller datasets (i.e., KORE and ACE2004) have been classified according to the presented error types. This study has not been extended to all the datasets, as their sizes make it unfeasible. Regarding the ground truth error, it corresponds to the sentence ‘Onassis married Kennedy on October 20, 1968’, where the mention Onassis is annotated as ‘Jacqueline Kennedy Onassis’ instead of ‘Aristotle Onassis’. For the KG errors, 2 originate from a missing class annotation and 1 from the wrong labeling of an entity. Also, 3 errors for the ambiguous sentences arise from the context not being sufficient to disambiguate the mention, and in 2 of them the LLM’s response could arguably also be considered correct (e.g., in the sentence ‘The Isle of Wight festival in 1970 was the biggest at its time’, the mention could refer both to the musical festival and to the concrete festival’s edition). Finally, 5 of the LLM errors are caused by missed context (e.g., in the short sentence ‘Tiger lost the US Open’, the mention Tiger, likely referring to Tiger Woods, would help disambiguate between ‘US Open (tennis)’ and ‘US Open (golf)’, but this cue is missed by the LLM) and 7 by a wrong interpretation of the class (e.g., in the sentence ‘[...] ran adjacent to an advertisement for a golf tournament on Fox Sports sponsored by Sun Microsystems.’, the mention is interpreted as a TV program rather than a TV channel).
These last LLM errors could potentially be solved by using LLMs with more powerful reasoning capabilities. To explore this idea, a small experiment with GPT-4 and Mistral Large [28] has been run, where the models are able to correctly disambiguate 8 and 7 of these 12 errors, respectively.
Table 5. Error types for the ACE2004 and KORE datasets, using YAGO as the KG.
# 1 Introduction
Knee osteoarthritis (OA) is a degenerative joint disease characterised by cartilage breakdown, bone remodelling, and joint inflammation [1]. It is a leading cause of disability in older adults, resulting in pain, stiffness, and reduced function. The Kellgren-Lawrence (KL) scale is commonly used to grade osteoarthritis severity, ranging from 0 to 4 based on joint space narrowing, osteophytes, sclerosis, and bone remodelling [2], as shown in Fig. 1. Early diagnosis enables treatment to alter the disease course [3].
Medical imaging plays a central role in knee osteoarthritis (OA) risk estimation [4] by analysing tissue changes over time. Machine learning techniques compute the likelihood of disease progression [5–9], but most methods generate only numerical scores, offering little visual explanation for clinicians [10]. For instance, if a model predicts OA progression based on X-rays, it is crucial to understand which features, such as OA severity or anatomical landmarks, contribute to this prediction. Predictive modelling has rarely been explored, except for [11], which employed a highly complex image generation process, limiting clinical practicality and lacking anatomical landmark localization. Combining predictive modelling with future image generation and anatomical landmark detection enhances interpretability, fosters trust, and supports informed decision-making.
Fig. 1. (Left) Example of a KL grade of 0. (Right) Example of a KL grade of 4 with osteophytes (red), sclerosis (blue), and bone remodelling (green).
This paper presents a new interpretable multi-task machine learning method for estimating the risk of knee OA progression by predicting future OA severity grade and anatomical knee landmark localisation from efficiently generated future images. Such image generation leverages an efficient diffusion model using a class-conditioned latent space to forecast disease progression, offering a visual representation of how such particular health conditions may evolve. Our key contributions include:
– A new interpretable machine learning method for knee OA risk estimation via multi-task prediction modelling for KL classification and anatomical knee landmark localisation using future images generated by a diffusion model;
– A novel, compact, and efficient diffusion model that can generate high-quality future OA X-ray images conditioned only on current images.
Experiments show that our proposed method achieves state-of-the-art (SOTA) results on the Osteoarthritis Initiative (OAI) dataset [4], a study on knee osteoarthritis, delivering a superior risk estimation AUC of 0.71 while being $\sim 9\times$ faster at inference than the previous SOTA [11], which has an AUC of 0.69.
# 2 Related Work
Risk Estimation and Predictive Modelling methods assess risk by predicting clinical events [5–9] or forecasting future features [12–16]. While event prediction is useful, it lacks interpretability, as it does not explain underlying causes. For instance, multiple plausible progression pathways could lead to mortality, yet these models often do not differentiate between them. Similarly, feature prediction models estimate disease onset [12–15] or severity [16], often using biomarkers [15, 17] and imaging data [17, 13]. However, their opaque reasoning limits clinical adoption [10].
Fig. 2. Overview of the method. (Top Left) VQ-VAE training. (Top Right) Diffusion model training. (Bottom) Classifier training & inference with predicted future image $\hat{\mathbf{x}}^{12}$ and the risk estimated from the KL grades predicted by $p_{\gamma_0}$ and $p_{\gamma_{12}}$.
Future image synthesis methods use StyleGAN [11, 18, 19], VAEs [20], flow-based models [21, 22], and diffusion models [23]. Some rely on an input image and patient information [18–22, 24, 11, 25], while others omit non-image data like biomarkers [21, 22, 24, 11, 25]. Diffusion models now surpass GANs in image quality [1] but remain computationally demanding and underutilized for disease progression risk estimation [23]. In knee OA research, StyleGAN has achieved SOTA accuracy [11], yet diffusion models offer superior image quality [1]. However, [11] does not generate anatomical knee landmarks, limiting interpretability.
# 3 Methodology
Let $\mathcal{D} = \{\mathbf{x}_i^0, \mathbf{x}_i^{12}, \mathbf{y}_i^0, \mathbf{y}_i^{12}, \{\mathbf{l}_{i,j}\}_{j=1}^{L}\}_{i=1}^{|\mathcal{D}|}$ represent the OAI dataset, where $\mathbf{x}^0, \mathbf{x}^{12} \in \mathcal{X} \subset \mathbb{R}^{H \times W}$ are knee X-ray images of a patient at an arbitrary point in time and 12 months afterwards, respectively. Corresponding one-hot 5-class KL classifications are $\mathbf{y}^0, \mathbf{y}^{12} \in \mathcal{Y} \subset \{0, 1\}^5$. The set of $L$ anatomical knee landmarks at $\mathbf{x}^0$ is $\{\mathbf{l}_{i,j}\}_{j=1}^{L} \in \mathcal{L}$, with each landmark $\mathbf{l}_{i,j} \in \{1, \dots, H\} \times \{1, \dots, W\}$. Our model comprises: 1) a VQ-VAE for latent image generation, 2) a conditional diffusion model for future latent images, and 3) a multi-task classifier for OA severity prediction and anatomical knee landmark localization (Fig. 2).
VQ-VAE: Future image generation for risk estimation leverages diffusion models, which perform better in latent spaces than in image spaces [26]. To generate this latent space, we use a VQ-VAE, as it offers superior reconstruction quality and efficiency compared to VQ-GAN [27]. The VQ-VAE consists of an encoder $\mathbf{e}_{\theta_E}: \mathcal{X} \to \mathcal{Z}$ and decoder $\mathbf{d}_{\theta_D}: \mathcal{Z} \to \mathcal{X}$, with $\mathcal{Z} \subset \mathbb{R}^Z$ as the latent space, parameterised by $\theta = \{\theta_E, \theta_D\} \in \Theta$. Following [26], we enhance perceptual quality and classification by integrating a classifier $p_\gamma: \mathcal{Z} \to \varDelta^4$ for 5-class KL classification, forming a multi-task autoencoder [28]. The model is trained with:
$$
\begin{array} { r l r } & { } & { \ell _ { V Q V A E } ( \theta , \gamma ) = \mathbb { E } _ { \mathbf { x } , \mathbf { y } \sim \mathcal { D } } \Big [ \log \left( p \big ( \mathbf { x } | \mathbf { z } _ { q } ( \mathbf { x } ) \big ) \right) + | | s g ( \mathbf { z } _ { e } ( \mathbf { x } ) \big ) - \mathbf { e } | | _ { 2 } ^ { 2 } } \\ & { } & { + \left. \beta | | \mathbf { z } _ { e } ( \mathbf { x } ) - s g ( \mathbf { e } ) | | _ { 2 } ^ { 2 } - \alpha \sum \mathbf { y } ^ { T } \log \left( p _ { \gamma } ( \mathbf { z } _ { e } ( \mathbf { x } ) \right) \right) \Big ] , } \end{array}
$$
where $\mathbf{x}$ is the input image, $\mathbf{z}_e(\mathbf{x}) = \mathbf{e}_{\theta_E}(\mathbf{x})$ is its embedding, $\mathbf{z}_q(\mathbf{x})$ the quantised embedding, $sg(.)$ the stop-gradient operator, $\mathbf{e}$ the nearest codebook entry, $\beta$ controls adherence to the nearest codebook entry, $\alpha$ weights the classification term, $\mathbf{y}$ is the one-hot class label, and $p_\gamma(.)$ the classifier operating in the latent space of the diffusion model. This approach improves the classification accuracy of future synthetic images generated by the diffusion model.
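To make the two quantisation terms of Eq. (1) concrete, here is a hedged toy sketch (not the training code) for a single latent vector and a tiny codebook. With autograd, the codebook and commitment terms differ only in where the stop-gradient $sg(.)$ is applied; numerically they share the same squared distance, scaled by $\beta$ for the commitment term. The reconstruction and classification terms are omitted, and $\beta = 0.25$ is an assumed illustrative value.

```python
def nearest_code(z_e, codebook):
    # Vector-quantisation step: e = argmin_e ||z_e - e||^2.
    return min(codebook, key=lambda e: sum((a - b) ** 2 for a, b in zip(z_e, e)))

def vqvae_terms(z_e, codebook, beta=0.25):
    """Return the nearest codebook entry and the two quantisation loss terms."""
    e = nearest_code(z_e, codebook)
    sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    codebook_term = sq(z_e, e)       # ||sg(z_e(x)) - e||^2: moves the codebook
    commit_term = beta * sq(z_e, e)  # beta * ||z_e(x) - sg(e)||^2: moves the encoder
    return e, codebook_term, commit_term
```

For instance, an encoder output (0.9, 1.2) against codebook {(0, 0), (1, 1)} quantises to (1, 1), with a codebook term of 0.05 and a commitment term of 0.0125.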
Conditional Diffusion Model: The conditional diffusion model $\mathbf{g}_\phi: \mathcal{Z} \to \mathcal{Z}$, parameterised by $\phi \in \varPhi$, generates future image embeddings (12 months ahead) conditioned on a patient’s current embedding in the latent space $\mathcal{Z}$. Following [26], it learns $\mathbf{g}_\phi(\mathbf{z})$ by iteratively denoising Gaussian noise $\epsilon \sim N(0, I)$, using a U-Net with $\mathbf{v}$-prediction [29], minimising:
$$
\ell_{LDM}(\phi) = \mathbb{E}_{\epsilon, \mathbf{z}^{12}, t, \mathbf{z}^0} \left[ \| \mathbf{v} - \mathbf{v}_\phi(\mathbf{z}_t^{12}, t, \mathbf{z}^0) \|_2^2 \right],
$$
where $\mathbf{v} = \alpha_t \epsilon - \sigma_t \mathbf{z}^{12}$ is a velocity vector, with $\alpha_t$ and $\sigma_t$ denoting the noise and signal proportions at step $t$, $\mathbf{v}_\phi$ is estimated via the U-Net, $\mathbf{z}_t^{12}$ is the latent embedding of the future image, and $\mathbf{z}^0$ represents the conditioning image embedding, concatenated with $\mathbf{z}_t^{12}$ for conditioning. The U-Net has four encoding/decoding blocks and a bottleneck, with spatial self-attention in the first three and last three blocks, and channel-wise attention elsewhere. Inference model weights are obtained through an exponential moving average during training.
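The regression target of Eq. (2) can be sketched numerically for flat latent vectors (an assumed toy setting, not the U-Net pipeline; the coefficient values below are illustrative):

```python
def v_target(eps, z12, alpha_t, sigma_t):
    """Element-wise velocity target v = alpha_t * eps - sigma_t * z12,
    where alpha_t and sigma_t are the step-t noise and signal proportions."""
    return [alpha_t * e - sigma_t * z for e, z in zip(eps, z12)]
```

The U-Net's output is then regressed onto this target with a squared L2 loss, as in Eq. (2).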
Risk Estimation via Predictive Modelling: Risk estimation uses the conditional diffusion model $\mathbf{g}_\phi(\mathbf{z}^0)$ to generate the future embedding $\hat{\mathbf{z}}^{12}$ from the current image embedding $\mathbf{z}^0$. Two classifiers, denoted by $p_{\gamma_0}: \mathcal{Z} \to \varDelta^4$ and $p_{\gamma_{12}}: \mathcal{Z} \to \varDelta^4$, independently classify $\mathbf{z}^0$ and $\hat{\mathbf{z}}^{12}$. The risk, defined as the probability of an increase in KL grade between $\mathbf{z}^0$ and $\hat{\mathbf{z}}^{12}$ [11], is computed as:
$$
\begin{array} { l } { { \displaystyle p ( y = 1 \mid { \bf z } ^ { 0 } , \hat { \bf z } ^ { 1 2 } ) = \sum _ { c < k } p _ { \gamma _ { 0 } } ( y ^ { 0 } = c \mid { \bf z } ^ { 0 } ) \cdot p _ { \gamma _ { 1 2 } } ( y ^ { 1 2 } = k \mid \hat { \bf z } ^ { 1 2 } ) , } } \\ { { \displaystyle p ( y = 0 \mid { \bf z } ^ { 0 } , \hat { \bf z } ^ { 1 2 } ) = \sum _ { c \ge k } p _ { \gamma _ { 0 } } ( y ^ { 0 } = c \mid { \bf z } ^ { 0 } ) \cdot p _ { \gamma _ { 1 2 } } ( y ^ { 1 2 } = k \mid \hat { \bf z } ^ { 1 2 } ) , } } \end{array}
$$
where $y = 1$ indicates an increase in KL grade, $y = 0$ indicates no increase, $y^0$ is the current KL grade, $y^{12}$ is the KL grade after 12 months, and $c, k \in \{0, 1, 2, 3, 4\}$ iterate over KL grades. The classifier from VQ-VAE multi-task learning serves as an initial model for fine-tuning risk estimation, using
$$
\ell_{CLS}(\gamma_0, \gamma_{12}) = \mathbb{E}_{(\mathbf{x}, \mathbf{y}) \sim \mathcal{D}} \left[ -\mathbf{y}^T \log\big(p_\gamma(\mathbf{y} \mid \mathbf{z}(\mathbf{x}))\big) \right],
$$
where $\gamma_0$ is estimated from $\mathbf{x}^0$ and $\mathbf{y}^0$, and $\gamma_{12}$ from $\mathbf{x}^{12}$ and $\mathbf{y}^{12}$, both in $\mathcal{D}$. Moreover, $\mathbf{z}^0$ can optionally be upscaled $2\times$ with bicubic interpolation at test time, as shown in Fig. 2 – we note in the experiments of Sec. 4.3 that such upscaling enables more accurate predictions.
Multi-task learning The multi-task classifier improves classification while predicting anatomical knee landmarks for interpretation. It is defined as $p_\zeta: \mathcal{Z} \to \varDelta^4 \times \mathcal{L}$, where $\mathcal{L}$ represents the $L$ knee landmark coordinates. Deconvolutional layers are added to the classifier, followed by a 2D SoftArgmax function [30]. The model is trained using:
$$
\ell_{MTS}(\zeta) = \mathbb{E}_{(\mathbf{x}, \mathbf{y}, \{\mathbf{l}_j\}_{j=1}^{L}) \sim \mathcal{D}} \left[ -\mathbf{y}^T \log\big[p_\zeta(\mathbf{y} \mid \mathbf{z}(\mathbf{x}))\big] + \delta \sum_{j=1}^{L} \| \mathbf{l}_j - \hat{\mathbf{l}}_j \|_2^2 \right],
$$
where $\mathbf{y}$ is the true KL grade for the latent image embedding $\mathbf{z}_e(\mathbf{x})$, $\mathbf{l}_j = [x_j, y_j]$ is a 2-dimensional landmark coordinate, $\hat{\mathbf{l}}_j$ is the model’s prediction, and $\delta$ is a weighting hyperparameter.
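The 2D SoftArgmax at the end of the landmark head can be sketched as a softmax over the heatmap followed by a probability-weighted average of pixel coordinates, yielding a differentiable $[x, y]$ estimate. This is an illustrative stand-in consistent with [30], not the authors' layer; the `temperature` parameter is our assumption.

```python
import math

def soft_argmax_2d(heatmap, temperature=1.0):
    """heatmap: list of rows of scores. Returns soft (x, y) coordinates."""
    flat = [v / temperature for row in heatmap for v in row]
    m = max(flat)                                  # for numerical stability
    exps = [math.exp(v - m) for v in flat]         # unnormalised softmax
    total = sum(exps)
    w = len(heatmap[0])
    # Expected column (x) and row (y) under the softmax distribution.
    x = sum(p * (i % w) for i, p in enumerate(exps)) / total
    y = sum(p * (i // w) for i, p in enumerate(exps)) / total
    return x, y
```

A sharply peaked heatmap recovers the peak's pixel coordinates, while a diffuse one yields a sub-pixel weighted average, keeping the landmark loss differentiable.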
Training Algorithm Training starts by optimizing the VQ-VAE and its classifier $p_\gamma(.)$ with $\ell_{VQVAE}$ in Eq. (1). The trained VQ-VAE serves as the foundation for training the latent diffusion model $\mathbf{g}_\phi(.)$ with the loss $\ell_{LDM}$ in Eq. (2). Once trained, the latent diffusion model generates future X-ray images for all dataset samples. Next, the classifiers $p_{\gamma_0}(.)$ and $p_{\gamma_{12}}(.)$ are fine-tuned from $p_\gamma(.)$ using $\ell_{CLS}$ in Eq. (5), leveraging ground truth and generated future images, respectively. Alternatively, these classifiers can be optimized with $\ell_{MTS}$ in Eq. (6) to jointly learn KL classification and anatomical knee landmark prediction.
# 4 Experiments
# 4.1 Dataset and Assessment
The Osteoarthritis Initiative (OAI) dataset contains 47,027 knee radiographs from 4,796 patients [4], captured at 0-, 12-, 24-, 36-, 48-, 72-, and 96-month intervals. Each image is KL-graded, excluding total knee replacements, which cannot be classified. Landmark coordinates for $L = 16$ joint surface points are provided for 748 images. Following [30], all images are cropped to $512^2$ pixels using a landmark prediction model, ensuring full knee visibility. Left knee images are flipped for consistency. The dataset is split into training (3,772), validation (512), and testing (512) patients.
Evaluation spans classification, prediction, and risk estimation. Classification involves estimating the current KL class $y^0 \in \{0, 1, 2, 3, 4\}$ from $\mathbf{x}^0$ or a latent representation $\mathbf{z}^0$. Prediction forecasts the KL class $y^{12}$ 12 months ahead. Risk estimation generates a future latent image $\hat{\mathbf{z}}^{12}$ from $\mathbf{z}^0$ using the conditional diffusion model, predicts the KL classifications $y^0$ and $y^{12}$, and calculates the binary probability of KL class progression over 12 months based on Eqs. 3 and 4.
Performance is measured using the mean area under the receiver operating characteristic curve (mAUC), computed as the average of AUC values for each class in a one-vs-rest manner. We compare our method to [11], the current SOTA for risk estimation via image generation for knee OA.
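The mAUC metric above can be sketched via the rank-sum (Mann-Whitney) formulation of the per-class one-vs-rest AUC, averaged over classes. This is an illustrative pure-Python sketch, not the evaluation code; in practice a library routine would be used.

```python
def auc_binary(scores, labels):
    """AUC as P(score of a random positive > score of a random negative),
    counting ties as 0.5 (the Mann-Whitney U formulation)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mauc(probs, y, n_classes=5):
    """Mean of per-class one-vs-rest AUCs. probs: per-sample class
    probability vectors; y: integer class labels."""
    aucs = []
    for c in range(n_classes):
        labels = [1 if t == c else 0 for t in y]
        if 0 < sum(labels) < len(labels):  # skip classes absent from y
            aucs.append(auc_binary([p[c] for p in probs], labels))
    return sum(aucs) / len(aucs)
```

A perfect classifier yields an mAUC of 1.0, and chance-level scores yield 0.5, matching the one-vs-rest averaging described above.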
# 4.2 Training
The VQ-VAE is trained on the training fold for 5 epochs with a mini-batch size of 8. It uses an Adam optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$) and a cosine scheduler (initial LR $10^{-4}$, minimum LR $10^{-6}$). Multi-task training with classification uses $\alpha = 10^{-4}$. The model has a compression ratio of 8, a codebook size of 256, and integrates vector quantization with the decoder.
The conditioned diffusion model is trained on image pairs spaced 12 months apart: $\{0, 12\}$, $\{12, 24\}$, $\{24, 36\}$, and $\{36, 48\}$. Images from 72 and 96 months are excluded due to 24-month gaps. Training runs for 200 epochs with a mini-batch size of 8, using an Adam optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.99$) and a cosine scheduler (initial LR $10^{-4}$, minimum LR $10^{-6}$). The diffusion process uses 1000 time steps, and sampling applies an exponential moving average of weights with $\gamma = 0.995$.
The classifier is trained on true 0-, 12-, 24-, and 36-month images, and the second classifier on synthetic 12-, 24-, 36-, and 48-month images generated by the diffusion model with 100 time steps for faster inference. Training uses mini-batches of size 8, balanced by whether KL progression occurs. The multi-task classifier additionally estimates anatomical knee landmarks and is trained similarly with a landmark loss weight of $\delta = 0.5$.
# 4.3 Ablation Study
Classification: Tab. 1 shows lower performance in latent space than image space. However, training the classifier within VQ-VAE mitigates this drop, and fine-tuning further improves results, surpassing image-space classification.
Prediction: Tab. 2 shows lower accuracy than classification (Tab. 1) since labels are not directly derived from input images. Latent-space prediction underperforms compared to image space, but training the classifier in VQ-VAE improves results, with fine-tuning further enhancing performance. Despite achieving a high mAUC of 0.84, this method predicts only probabilities, making interpretation difficult, and remains less complex than risk estimation, which requires accurate predictions of both $y ^ { 1 2 }$ and $y ^ { 0 }$ , as discussed in the next section.
Table 1. Ablation study on classification.
Table 2. Ablation study on prediction.
Table 3. Ablation study on risk estimation.
Risk Estimation: Tab. 3 evaluates risk estimation, $p(y^{12} > y^0 \mid \mathbf{z}^0, \hat{\mathbf{z}}^{12})$. The diffusion model generates $\hat{\mathbf{z}}^{12}$, but image-space evaluation using $\mathbf{x}^0, \mathbf{x}^{12}$ is also considered for reference. Latent-space performance is lower since the image-space evaluation benefits from the ground-truth future images. Training the classifier in the VQ-VAE improves results, further enhanced by fine-tuning and multi-task learning with landmark prediction. Upscaling $\mathbf{z}^0$ at test time significantly boosts performance.
# 4.4 Comparison with SOTA
Our method surpasses the SOTA in OAI risk estimation (AUC 0.71 vs. 0.69 [11]) with significantly higher efficiency. Our training takes 12.6 hours on a single Nvidia A6000, compared to 114.88 hours on 2$\times$ A6000s for [11], while our inference is $8.7\times$ faster (2.70s vs. 23.6s per sample). Additionally, our approach improves interpretability by not only generating future images but also localizing anatomical knee landmarks, as illustrated in Fig. 3. Beyond generating images that better align with the ground truth and providing landmark estimations, our method produces higher-resolution images than [11], further enhancing result interpretability.
# 1. Introduction
Large Multimodal Models (LMMs) substantially enhance Large Language Models (LLMs) by incorporating visual inputs [15, 19, 26, 27]. Yet, memory and computational overhead quickly escalate when processing multi-frame videos [19, 48]. While a single image may generate hundreds of tokens, dense or long videos easily yield thousands, severely taxing both training and inference. Existing solutions typically reduce token counts via spatial pooling per frame [19, 49], sparse sampling of image patches or token pruning [8, 45], token merging [23], or extensive hardware support [7, 47]. User-query-aware methods selectively discard tokens based on given queries [23, 34, 50], but sacrifice flexibility due to their query-specific nature. Consequently, there remains a strong demand for general-purpose compression that preserves broad video context efficiently for practical use in resource-limited environments.
In this work, we introduce TSTAR (Two-Stage Token Aggregation and Reduction), a hierarchical spatiotemporal token compression framework positioned between the vision backbone and the LLM. At its core, TSTAR employs a novel two-stage strategy: densely sampling frames at an initial high rate to preserve detailed events, then compressing tokens via an efficient neural architecture, before finally applying a secondary temporal downsampling filter (see Fig. 1 for illustration). This two-step decoupling of frame- and token-level compression facilitates flexible control over computational budget and accuracy.
To effectively implement TSTAR, we propose a novel compression layer architecture, MambaMia (Mamba for Multi-Image Association), based on the recently developed Mamba family of state-space models [13, 22, 32]. Unlike conventional Transformer layers [36] that scale quadratically with sequence length, Mamba-based layers achieve linear scaling, providing a particular advantage for processing long input streams. Our MambaMia architecture further enhances bidirectional Mamba blocks with gated skip-connections and learnable token aggregation, enabling efficient aggregation of local visual features into a compact video representation (see Fig. 2 for illustration).
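The two-stage reduction can be illustrated with a deliberately simplified sketch: per-window softmax-weighted averaging stands in for MambaMia's learnable aggregation, followed by the secondary temporal downsampling. All names, shapes, and the `window`/`stride` defaults are our assumptions for illustration, not the released architecture.

```python
import math

def weighted_avg(tokens, scores):
    """Softmax(scores)-weighted average of same-dimension token vectors."""
    m = max(scores)                              # for numerical stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    dim = len(tokens[0])
    return [sum(wi * t[d] for wi, t in zip(w, tokens)) / z for d in range(dim)]

def tstar_compress(frame_tokens, scores, window=4, stride=2):
    """Stage 1: aggregate each local window of frame tokens into one token.
    Stage 2: temporally downsample the aggregated tokens by `stride`."""
    pooled = [weighted_avg(frame_tokens[i:i + window], scores[i:i + window])
              for i in range(0, len(frame_tokens), window)]
    return pooled[::stride]
```

With uniform scores the pooling reduces to a plain mean, and 256 densely sampled frames would shrink by a factor of `window * stride` before reaching the LLM, illustrating how the two stages give independent control over the token budget.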
We empirically validate TSTAR-MambaMia against a comprehensive selection of compression baselines on various challenging benchmarks spanning diverse long-video scenarios. In controlled comparisons under both unified and two-stage multimodal training protocols, we observe substantial advantages for our state-space-based approach over Transformer-based blocks. For example, replacing our state-space block with a Transformer leads to a significant performance drop. Our best-performing 13B-scale model achieves 45.2 points on the challenging VNBench [52] task using only up to 860 tokens for 256 frames, approaching GPT-4V performance (48.9), yet at considerably reduced inference latency and GPU memory usage for massive video frames compared to existing approaches. These results confirm the practicality and wide applicability of our proposed method.
Contributions. Our contributions are summarized as follows:
• We propose TSTAR, a novel hierarchical spatiotemporal token compression framework that employs a two-stage sampling scheme (dense initial sampling followed by token-level downsampling), facilitating practical integration of massive video frames.
• We introduce MambaMia, a novel bidirectional state-space architecture specifically designed for use within TSTAR. MambaMia efficiently aggregates local video information via gated skip connections and learnable weighted-average pooling.
• We systematically implement and rigorously analyze various representative compression models under unified and two-stage multimodal training strategies, providing a clear empirical foundation regarding strengths and weaknesses of different compression approaches.
• We experimentally demonstrate that our TSTAR-MambaMia exhibits competitive or superior performance compared to existing state-of-the-art methods across diverse benchmarks, while significantly reducing resource demands (e.g., fewer than 860 tokens per 256 frames). To accelerate future research, we publicly release our codebase and pretrained models.
# 2. Related Work
Spatial Vision Token Reduction. Video complexity often stems from the high quantity of spatial tokens generated per frame. Many studies reduce spatial tokens via simple pooling methods such as bilinear interpolation or average pooling [10, 19, 47, 50], or more sophisticated CNN-based methods [5]. Recent research also leverages lightweight attention modules, like Q-Formers [2, 3, 37, 48], to significantly compress single-frame tokens while maintaining reasonable accuracy [21, 48]. Nevertheless, when extending these simple 2D compression strategies independently per frame, cumulative token counts in dense videos often still remain prohibitively large [19, 49].
Spatiotemporal Token Compression. To address temporal redundancy in videos, several approaches extend 2D pooling into 3D methods [8, 30] or prune spatiotemporal tokens using similarity heuristics [45]. Attention-based resampling methods employ learned queries to selectively compress tokens either per-frame [24] or jointly across multiple frames [20, 46]. Other methods perform hierarchical chunk merging [23], substantially reducing token counts but typically still requiring thousands of tokens per sequence [23]. User-query-aware methods discard tokens irrelevant for predetermined queries [34, 50], but limit flexibility for open-ended tasks. Additionally, another direction employs specialized large-scale hardware setups [7, 47], but these are often impractical for most users. Differing from these prior techniques, we propose a general-purpose, fully-learned, and query-agnostic approach that balances efficiency and coverage. Additional detailed analyses of other compression paradigms can be found in the supplementary material (Section A).
State-Space Models and Mamba. Another research trend relates to advances in state-space models (SSMs), notably the Mamba family [13], which efficiently handle long sequences due to their linear computational complexity, unlike attention mechanisms scaling quadratically [4, 36]. Particularly, bi-directional Mamba variants exhibit improved efficacy for encoding video sequences [22, 32]. Inspired by these, our MambaMia design introduces bidirectional state-space blocks augmented with gated skip connections and token aggregation specifically tailored for video compression. As demonstrated empirically in Section 4, Mamba-based modules significantly outperform traditional attention-based compression in terms of efficiency and performance.
# 3. Method
# 3.1. Overview
Figure 1 provides an overview of our TSTAR framework. We first densely sample video frames and encode each frame into patch embeddings, resulting in thousands of tokens per video. To reduce token complexity, we periodically insert learnable query tokens as anchor points, allowing the model to aggregate rich contextual information across frames. Specifically, these query and patch tokens pass through our lightweight compression module—MambaMia—which efficiently merges local spatiotemporal contexts into compact representations (details in Figure 2). Finally, we apply a secondary sampling step to further downsample these compressed tokens before feeding them into the LLM. This hierarchical, two-stage approach allows flexible control over the balance between computational efficiency and representational quality, effectively enabling the model to handle long and dense video sequences.
Figure 1. Overview of our TSTAR framework. We introduce a lightweight compression layer (e.g., MambaMia) that extracts compressed representations, followed by secondary frame-level token sampling before feeding into the LLM (e.g., $k = 2$ , $s = 1 / 2$ in this illustration).
# 3.2. Preliminary: State-Space Models and Mamba
Our goal is to integrate an efficient “sequence compressor” between the visual backbone and the LLM, which demands a computationally affordable architecture capable of handling long sequences. Recent advances in State-Space Models (SSMs), notably the Mamba family [11, 13], offer linear computational complexity—crucial for long-length inputs—in contrast to the quadratic complexity of Transformer attention [36].
Formally, an SSM recursively updates a latent hidden state $h ( t )$ given an input sequence $x ( t )$ . For practical sequence modeling (e.g., discrete frames or tokens), SSMs are generally discretized as follows [13]:
$$
h_{k} = \overline{\mathbf{A}} h_{k-1} + \overline{\mathbf{B}} x_{k}, \quad y_{k} = \mathbf{C} h_{k},
$$
with discretized transition and projection matrices $(\overline{\mathbf{A}}, \overline{\mathbf{B}}, \mathbf{C})$. These parameters are typically fixed and time-invariant, leading to linear complexity $\mathcal{O}(T)$ with respect to sequence length $T$, in contrast to the Transformer’s quadratic complexity $\mathcal{O}(T^{2})$. Thus, SSMs efficiently scale to longer sequences [11, 13]. For the basic mathematical formulation and further background regarding classical SSMs, we refer readers to the supplementary material (Section B).
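To make the linear-time claim concrete, the scan below implements the discretized recurrence for a scalar state (a toy sketch; real SSMs use matrix-valued $\overline{\mathbf{A}}, \overline{\mathbf{B}}, \mathbf{C}$ and hardware-parallel scans):

```python
def ssm_scan(a_bar, b_bar, c, xs):
    """Discretized SSM recurrence, scalar (d = 1) toy case:
    h_k = a_bar * h_{k-1} + b_bar * x_k,  y_k = c * h_k.
    One constant-cost update per input token, hence O(T) overall."""
    h = 0.0
    ys = []
    for x_k in xs:
        h = a_bar * h + b_bar * x_k
        ys.append(c * h)
    return ys

# A leaky integrator with decay 0.5: an impulse fades geometrically.
print(ssm_scan(0.5, 1.0, 1.0, [1.0, 0.0, 0.0]))  # [1.0, 0.5, 0.25]
```

By contrast, self-attention would recompute interactions against all previous tokens at every step, giving $\mathcal{O}(T^{2})$ cost.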
Despite their computational efficiency, classical SSMs use fixed parameters across all inputs, limiting flexibility for varied input contexts. To overcome this limitation, Selective SSMs—recently introduced as Mamba [11, 13]—propose dynamically adjusting a subset of the parameters ($\overline{\mathbf{B}}$, $\mathbf{C}$, and the discretization step-size $\Delta$) at each input step. This adaptive parameterization significantly enhances modeling expressiveness while retaining linear complexity. Recent studies also demonstrate superior empirical efficiency relative to attention models for handling long sequences, as Selective SSMs efficiently compress a global state rather than explicitly modeling all-to-all interactions (see also the detailed complexity analysis in Section 6 of Dao and Gu [11]).
The Mamba architecture has since been extended to Bidirectional Mamba (Bi-Mamba) variants [22, 32], which separately process input sequences forward and backward to capture richer multi-directional context while preserving linear efficiency. Inspired by these advances, we propose MambaMia, a selective SSM-based bidirectional block specifically designed to aggregate spatial-temporal information from dense video streams with gated skip connections and weighted-average pooling (Section 3.3).
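The bidirectional idea can be illustrated with the same scalar recurrence (our simplification, not the exact Bi-Mamba block): the linear-time scan runs left-to-right and right-to-left, and the two per-position outputs are paired (concatenated channel-wise in the real block):

```python
def bidirectional_scan(xs, decay=0.5):
    """Bi-Mamba-style bidirectional processing, scalar toy sketch.
    The same O(T) recurrence runs in both directions, so each position
    receives context from both its past and its future."""
    def scan(seq):
        h, out = 0.0, []
        for x in seq:
            h = decay * h + x  # h_k = A*h_{k-1} + B*x_k with A=decay, B=1
            out.append(h)
        return out

    fwd = scan(xs)
    bwd = list(reversed(scan(list(reversed(xs)))))
    return list(zip(fwd, bwd))  # pair per-position forward/backward states

print(bidirectional_scan([1.0, 0.0, 0.0]))
# [(1.0, 1.0), (0.5, 0.0), (0.25, 0.0)]
```

Note how the first position’s backward component already reflects the impulse at the sequence start, while later positions see no future signal: each token summarizes context from both directions at linear cost.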
# 3.3. Proposed Framework and Model Architecture
We propose TSTAR, an efficient hierarchical framework designed for compressing dense video-frame representations into compact inputs to LLMs. Figure 1 summarizes our overall design.
Given an input video comprising $M$ frames, each frame is first encoded into $N$ patch embeddings, forming a sequence of length $M \times N$. To enable structured compression, we periodically insert learnable “query tokens” every $k$ patches (e.g., $k = 10$). These queries initially serve as learnable dummy anchor points, implicitly facilitating the aggregation of context from neighboring patches as well as global temporal information. Subsequently, our framework employs a two-stage hierarchical sampling strategy: dense initial frame sampling followed by secondary frame-level token sampling, flexibly balancing representational capacity and computational overhead (Section 3.3.1).
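The periodic query insertion can be sketched as follows (a minimal sketch: placeholder strings stand in for learnable vectors, and placing each query after its $k$ patches is our illustrative assumption):

```python
def insert_query_tokens(patches, k):
    """Interleave one query token after every k patch tokens of the flat
    M*N patch sequence. Queries are marked with the placeholder "Q" here;
    in practice they are shared learnable embedding vectors."""
    out = []
    for i, p in enumerate(patches, start=1):
        out.append(p)
        if i % k == 0:
            out.append("Q")  # structural anchor for local aggregation
    return out

# One frame of N = 6 patches with query interval k = 3:
seq = insert_query_tokens(["p1", "p2", "p3", "p4", "p5", "p6"], k=3)
print(seq)  # ['p1', 'p2', 'p3', 'Q', 'p4', 'p5', 'p6', 'Q']
```

After compression, only the query positions are kept, so the sequence shrinks by roughly a factor of $k$.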
As a dedicated compression layer tailored to TSTAR, we further propose MambaMia, a lightweight adaptation of the Bi-Mamba block [22, 32]. The Bi-Mamba block efficiently captures bidirectional spatiotemporal context via parallel left-to-right and right-to-left processing, concatenating hidden states into compressed representations at linear complexity. To further enforce an explicit inductive bias consistent with TSTAR’s design, MambaMia additionally integrates an adaptive “Gated Patch Aggregation” module, selectively merging relevant information from neighboring tokens directly into the inserted query tokens. We detail the two-stage sampling strategy (Section 3.3.1) and gating module (Section 3.3.2) in the following subsections.
Figure 2. Proposed MambaMia Block Architecture. Our MambaMia block integrates a Bi-Mamba base [22, 32] with a gated patch aggregator to compress video tokens effectively. Input tokens are first grouped into chunks of size $(k + 1)$, consisting of a single query token and $k$ non-query tokens (a). Query tokens selectively aggregate local information within each chunk via a learned weighted-average pooling and gating mechanism (b, c; Eq. 3).
# 3.3.1. Two-Stage Hierarchical Compression
Even after introducing query tokens every $k$ tokens (providing structural anchor points per frame), dealing with large frame numbers $M$ can still lead to excessive tokens. While aggressively reducing tokens per frame [24, 48] or dropping frames entirely might reduce computational load, such extreme simplifications often cause substantial information loss and performance degradation [23].
To address this challenge effectively, we adopt a two-stage hierarchical sampling strategy comprising:
1. Initial Dense Frame Sampling. We retain relatively dense frame sampling, ensuring crucial transient events and temporal details are not prematurely discarded.
2. Secondary Frame-level Token Sampling. After aggregating with MambaMia, we further selectively sample a subset of compressed tokens (see Fig. 1). This step is specifically designed to flexibly manage computational budgets while preserving representational capacity.
By explicitly separating initial dense frame sampling (to minimize early-stage information loss) from secondary frame-level token sampling (to balance computational costs), our hierarchical two-stage approach effectively avoids drastic information degradation. We systematically demonstrate its advantages through ablation studies in Sections 4 and 5.
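The two stages above can be sketched as follows (a simplified sketch: uniform strided sampling is assumed for both stages, and `compress_fn` stands in for the MambaMia compression step):

```python
def two_stage_sample(frames, dense_n, compress_fn, s):
    """TSTAR-style two-stage hierarchical sampling (illustrative).
    Stage 1: keep a dense subset of `dense_n` frames so transient events
    are not prematurely discarded.
    Compression: `compress_fn` maps the dense frames to compact tokens
    (the MambaMia query tokens in the paper).
    Stage 2: keep roughly a fraction `s` of the compressed tokens to meet
    the computational budget."""
    step = max(1, len(frames) // dense_n)
    dense = frames[::step][:dense_n]   # stage 1: dense frame sampling
    compressed = compress_fn(dense)    # token-level compression
    stride = max(1, round(1 / s))
    return compressed[::stride]        # stage 2: secondary sampling

# 8 frames -> keep 4 densely, "compress" each, then keep every 2nd token:
print(two_stage_sample(list(range(8)), 4, lambda fs: [f * 10 for f in fs], 1 / 2))
# [0, 40]
```

The key design point is the ordering: information loss happens after compression has already summarized local context, rather than by dropping raw frames up front.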
# 3.3.2. Gated Patch Aggregation (MambaMia Block)
Figure 2 illustrates the detailed structure of our proposed MambaMia block. To explicitly guide information aggregation toward the inserted query tokens, we introduce an adaptive gating mechanism. Formally, given a query token $\mathbf{q} \in \mathbb{R}^{d}$ and its neighboring patch embeddings $\{\mathbf{x}_{i}\}_{i=1}^{k}$, we generate aggregation weights $\{\alpha_{i}\}$ through a small linear layer (parameters $\mathbf{W}_{\alpha}$, $\mathbf{b}_{\alpha}$) followed by a softmax:
$$
\alpha = \mathrm{softmax}\left( \mathbf{W}_{\alpha} \mathbf{q} + \mathbf{b}_{\alpha} \right), \quad \mathbf{a} = \sum_{i=1}^{k} \alpha_{i} \mathbf{x}_{i}.
$$
Next, we compute a scalar gate $g \in [0, 1]$ from the query representation $\mathbf{q}$ using another linear layer (parameters $\mathbf{W}_{g}$, $b_{g}$) and the sigmoid function $\sigma(\cdot)$:
$$
g = \sigma( \mathbf{W}_{g} \mathbf{q} + b_{g} ), \quad \mathbf{q}_{\mathrm{new}} = (1 - g)\,\mathbf{q} + g\,\mathbf{a}.
$$
This learnable scalar gate $g$ adaptively modulates how much neighboring token information replaces the original query representation: $g \approx 0$ preserves previous query contexts, while $g \approx 1$ heavily aggregates local information. Through this adaptive gating, each query token selectively captures key neighboring context, efficiently summarizing both local details and broader spatiotemporal contexts.
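The two update equations combine into the following pure-Python sketch for a single chunk (the parameter layout—one logit row per neighbor and a single gate vector—is our illustrative assumption; the real module operates on batched tensors):

```python
import math

def gated_aggregate(q, xs, W_alpha, b_alpha, w_g, b_g):
    """Gated Patch Aggregation for one (k + 1)-token chunk.
    q: query vector (length d); xs: list of k neighbor patch vectors.
    alpha = softmax(W_alpha @ q + b_alpha)  -> k aggregation weights
    a     = sum_i alpha_i * x_i             -> pooled neighbor summary
    g     = sigmoid(w_g . q + b_g)          -> scalar gate in [0, 1]
    q_new = (1 - g) * q + g * a."""
    d, k = len(q), len(xs)
    logits = [sum(W_alpha[i][j] * q[j] for j in range(d)) + b_alpha[i]
              for i in range(k)]
    m = max(logits)                               # numerically stable softmax
    exps = [math.exp(l - m) for l in logits]
    alpha = [e / sum(exps) for e in exps]
    a = [sum(alpha[i] * xs[i][j] for i in range(k)) for j in range(d)]
    g = 1.0 / (1.0 + math.exp(-(sum(w_g[j] * q[j] for j in range(d)) + b_g)))
    return [(1.0 - g) * q[j] + g * a[j] for j in range(d)]

# Zero-initialized parameters give uniform alpha and g = 0.5, so the new
# query is the average of the old query and the neighbor mean.
q_new = gated_aggregate([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]],
                        [[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0],
                        [0.0, 0.0], 0.0)
print(q_new)  # [1.0, 0.5]
```

Because $g$ is produced from the query itself, each chunk learns independently how much local detail to absorb versus how much prior query context to preserve.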
# 3.4. Training Strategies for Multimodal Integration
Integrating visual understanding capabilities—both images and videos—into pretrained LLMs remains an open challenge. Recent studies have explored two primary training paradigms: (1) unified training, where image and video modalities are integrated simultaneously into the LLM in a single step [20]; and (2) two-stage training, where LLMs are first adapted to image-level instructions and subsequently fine-tuned with video tasks [45, 50]. In this work, we systematically explore both methods across varying model scales (Phi-3, Vicuna-7B, Vicuna-13B) and data settings, investigating the robustness of our approach under diverse multimodal conditions. Specifically, we adopt the following experimental setups:
1. Unified Training. After briefly aligning our compression module (MambaMia) using LLaVA-Pretrain data [26] with the vision encoder and LLM frozen, we simultaneously train our model with both image-level and video-level instructions. Unified training is straightforward and computationally efficient, allowing for rapid and extensive ablations (Section 4.2).
2. Two-Stage Training. In this setup, we first train LLM backbones extensively using image instruct data (Elva recipe with ${\sim}1$M images [15]). Next, after the same brief alignment of our compression module with LLaVA-Pretrain mentioned above, we fine-tune the model on video-level instruct data. This two-step, modality-separated approach naturally facilitates model stability, especially at larger scales.
These variations allow comprehensive benchmarking of our proposed compression architecture under realistic multimodal integration scenarios. A thorough empirical comparison between these two training strategies at different scales is discussed in detail in Section 5.
# 4. Experiments and Analyses
# 4.1. Experimental Setup
# 4.1.1. Training Datasets
Base Setting. We utilize the LLaVA-Pretrain dataset [26] for initial modular alignment of our compression layers. Following this alignment stage, we jointly train the model using image-level instructional data from LLaVA-Instruct-150K [26] and video-level instructional data consisting of approximately 131K video question-answer pairs [50].
Scaled-Up Setting. We first train the LLaVA model [26] with approximately 1 million instructional images collected following the Elva recipe [15]. Then, we introduce our compression layers with an alignment phase by re-using the LLaVA-Pretrain dataset [26]. Finally, we perform video-level instruction tuning on a significantly expanded video dataset, consisting of approximately five times more video samples than the ablation setup above [50]. Additional dataset details, precise hyperparameters, and implementation specifics can be found in the supplementary materials. For full reproducibility, we publicly release our codebase, checkpoints, and detailed scripts.
# 4.1.2. Evaluation Benchmarks
Our primary goal is to rigorously test token compression architectures explicitly designed to process long video frames. To this end, we first select four core benchmarks specifically emphasizing challenging long and dense video reasoning scenarios: MLVU [53], VideoMME (VMME) [12], TempCompass (Temp) [28], and VNBench (VNBI, VNBC; independent and circular evaluation) [51]. Additionally, when comparing our models more broadly against recent models, we extend our setup to include complementary and widely-adopted benchmarks: Generative Temporal Understanding (Chat Temporal) of Maaz et al. [30], LVBench (LVB) [38], ActivityNet-QA (ActQA) [44], MVBench (MVB) [20], and NExT-QA (NQA) [41]. Furthermore, as supplementary indicators, we also verify basic multimodal competence using popular single-image benchmarks: SEED-IMG (SD-I) [18], MMStar (MMS) [6], and AI2D [14]. Further details on our protocols, including clarifications on benchmark usage and evaluation variants introduced in our study, can be found in the supplementary material (Section E).
# 4.1.3. Comparison Methods and Baselines
We organize comparison methods into two categories: (1) controlled architectural comparisons under unified settings at base scales (Section 4.2), and (2) recent state-of-the-art baselines under scaled evaluations (Section 4.3).
Architectural Comparison Baselines. In controlled comparisons (Table 1), we rigorously isolate the effect of our proposed MambaMia architecture. Specifically, we compare the following closely-related module architectures: Mamba [13], Bi-Mamba [22, 32], GPTNeoX [4], and a bidirectional modification (Bi-GPTNeoX). To clearly assess the benefit of unified spatial-temporal modeling, we also employ spatial-only frame-wise variants (e.g., Mamba-per-frame).
To further contextualize our evaluations within prevalent approaches, we directly implement representative compression methods widely adopted in recent literature, including (1) per-frame 2D pooling (average/bilinear interpolation [19, 47, 50], CNN-based pooling [5]), (2) temporal extensions of pooling (3D pooling) [8, 30], and (3) attention-based token pooling mechanisms (per-frame 2D [5], spatiotemporal 3D [20, 46]). Although our approach inherently targets general-purpose, query-agnostic compression, we additionally provide indirect comparisons with other specialized paradigms (heuristic pruning, user-query-aware token selection [24, 34, 45]) through the state-of-the-art comparison in the following tests (Section 4.3).
State-of-the-Art Baselines. In subsequent large-scale comparisons (Table 2), we evaluate our best-performing TSTAR configurations against contemporary models encompassing diverse compression approaches (2D/3D pooling, attention resampling, pruning, query-aware selection, etc.). Comprehensive baseline references and detailed quantitative results are provided directly in Table 2. We also contextualize our results with the closed-source GPT-4V [31].
# 4.1.4. Implementation
We adopt CLIP-ConvNeXt-Large [29] as our vision encoder, processing $320 \times 320$ images into $N = 100$ tokens. We experiment primarily with three popular pretrained LLM backbones: Phi-3 [1] (3.8B), Vicuna-7B, and Vicuna-13B [9, 35], consistently keeping the vision encoder frozen following common efficient training practices [15, 27].

We uniformly sample up to $M = 128$ frames per video, evenly spaced along the entire duration. For module-only alignment (with the LLM frozen), we set an initial learning rate of $1 \times 10^{-4}$, lowering it to $2 \times 10^{-5}$ during full multimodal fine-tuning. Based on hyperparameter exploration (see Fig. 3), we adopt default settings of query insertion interval $k = 10$ and secondary sampling ratio $s = \frac{1}{3}$. Additional training details—including precise hyperparameters, hardware specifications, and detailed reproducibility guidelines—can be found in supplementary Section C.

Figure 3. Hyperparameter exploration on MLVU: a heatmap of scores over query interval $k \in \{5, 10, 50, 100\}$ and secondary sampling ratio $s \in \{1/1, 1/3, 1/5, 1/10\}$ (left), and token count vs. score with a Pareto-like frontier (right). The chosen configuration is $(k = 10, s = 1/3)$.

# 4.2. Controlled Compression Method Comparison

Table 1 systematically compares our method against various representative compression techniques. We first evaluate common spatial-only techniques, including pooling [19, 50] and C-Abstractor [5], which compress tokens independently per frame. These spatial methods generally underperform even the uncompressed baseline, though they might remain partially effective when using higher-density encoders (e.g., CLIP-ViT-L/14-336; Section 5).

We next examine attention-based token pooling, either spatially (per-frame) or jointly spatiotemporal (3D). Frame-wise attention slightly outperforms simple pooling methods, whereas 3D attention achieves poor accuracy despite fewer tokens—indicating inherent limitations of purely attention-based compression methods for dense videos.

Importantly, we further test a per-frame baseline, applying identical compression blocks independently to each frame (interval $k = 10$), yielding a relatively high total token count (e.g., 128 frames → 1,280 tokens). Interestingly, these per-frame baselines show competitive results, as our multimodally fine-tuned LLM can learn temporal understanding from individually compressed frames thanks to its large capacity. Yet critically, our full TSTAR framework—which explicitly compresses frames jointly as a unified sequence—consistently matches or surpasses per-frame performance at significantly reduced token cost (430/860 tokens vs. 1,280 tokens), clearly demonstrating superior token efficiency and practicality. Our subsequent analyses (Section 5) further reinforce this explicit temporal compression advantage in scaled-up scenarios.

Finally, we directly compare our state-space-based MambaMia block against Transformer-attention variants within the TSTAR framework. Consistent with prior findings [13, 22], state-space modeling clearly outperforms attention blocks across conditions, with MambaMia achieving best-in-class results. Further robustness analyses appear in supplementary Section D (Table A).
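The token budgets of 430/860 vs. 1,280 quoted above follow directly from the defaults $N = 100$, $k = 10$, $s = 1/3$; a back-of-the-envelope check (the exact rounding convention is our assumption—the paper reports figures rounded to the nearest ten):

```python
def tstar_token_budget(frames, n_patches=100, k=10, s=1 / 3):
    """Vision tokens reaching the LLM under TSTAR defaults (Section 4.1.4).
    Only the inserted query tokens survive MambaMia compression
    (n_patches // k per frame); secondary sampling then keeps fraction s."""
    queries = frames * (n_patches // k)  # compressed tokens after MambaMia
    return round(queries * s)            # tokens after secondary sampling

print(tstar_token_budget(128))  # 427, i.e. the ~430 quoted above
print(tstar_token_budget(256))  # 853, i.e. the ~860 quoted for 256 frames
```

The per-frame baseline skips the secondary sampling stage, which is exactly the $128 \times 10 = 1{,}280$ token count reported for it.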
Table 2. Comparison with State-of-the-Art LMMs. Results are grouped by model scale, with compression strategies and token-efficiency metrics indicated (average tokens per frame, maximum input frames, and maximum total token counts per video). Our models consistently demonstrate competitive performance across benchmarks, significantly reducing total token usage. ∗Based on best available information from original papers and released code. †Results reproduced under identical conditions (see supplementary material for sanity checks).
# 4.3. Benchmark Comparison to State-of-the-Art
We now present benchmark comparisons of our TSTARMambaMia models against recent state-of-the-art models, focusing on both accuracy and token efficiency. Table 2 summarizes quantitative results across extended challenging benchmarks alongside explicit token-efficiency metrics.
Overall, our models consistently achieve competitive or superior performance relative to existing approaches, while employing significantly fewer vision tokens. For instance, compared to strong baselines such as LLaVA-NeXT-Video [49], LLaVA-OneVision [19], and LongVA [47], our method uses roughly an order of magnitude fewer total tokens (about 860 vs. tens of thousands), yet achieves comparable performance. This substantial efficiency advantage highlights the practical relevance of our approach under realistic resource constraints.
Of particular note is the challenging needle-in-a-video-haystack scenario (VNBC), testing fine-grained retrieval of fleeting visual details. In this task, our TSTAR-MambaMia-13B$_{\mathrm{TS}}$ achieves $45.2\%$, approaching GPT-4V ($48.9\%$) [31] and clearly surpassing other open-source baselines despite our significantly reduced vision-token budget (860 tokens for 256 frames). This shows our method’s strong capability for efficiently preserving critical spatiotemporal context.
Figure 4. Inference latency vs. number of frames (a), and maximum frames vs. GPU memory usage (b), comparing No Compression, 2D Average Pooling, TSTAR-GPTNeoX, and TSTAR-MambaMia.
We acknowledge that certain stronger performances from existing models can partly be attributed to advanced LLM backbones (e.g., Qwen-2.5). Additionally, specialized approaches such as user-query-aware methods (denoted by w-q) explicitly leverage known query-specific information or utilize substantially larger token budgets (often 10K–20K tokens), restricting flexibility or introducing significant computational overhead. In contrast, our approach provides a general-purpose, query-agnostic compression that consistently delivers robust performance across diverse benchmarks at significantly reduced token usage.
# 5. Further Analyses and Discussions
# 5.1. Inference Costs with Frame Counts.
To better examine inference cost in long-video scenarios, we analyze the inference latency and GPU memory usage of our methods on the MLVU benchmark; details of the throughput measurement procedure are in the supplementary material. Figure 4 illustrates how inference costs scale with the increasing number of input frames. Compared to the uncompressed and simple spatial average pooling baselines, our methods handle substantially more frames at reasonable memory budgets, which is highly beneficial in practice. While our methods unavoidably incur additional overhead compared to simpler spatial methods at identical token counts—due to the introduced compression layers—they greatly reduce the total tokens required to achieve competitive video understanding performance, ultimately enabling significantly higher max-frame processing under realistic resource constraints.
Table 3. Enhanced Performance with Qwen2 and Qwen2.5 Backbones. Results are reported across five video benchmarks, demonstrating consistent accuracy improvements when scaling to stronger language model backbones.
# 5.2. Enhanced Backbone Analysis.
We further investigate the effect of employing more advanced large language model backbones in our framework. As summarized in Table 3, integrating stronger language models such as Qwen2 and Qwen2.5 leads to consistent performance improvements across diverse video understanding benchmarks. These results highlight that our approach is highly adaptable and can effectively leverage more powerful language models, further narrowing the gap with recent large-scale video LLMs while retaining significant efficiency in vision token usage. This demonstrates the scalability of our method and its ability to benefit from advances in general-purpose backbone models.
# 5.3. Ablation on Mamba Block Versions.
To further assess the flexibility of our approach, we conduct an ablation comparing MambaMia constructed using two different Mamba block variants (V1 [13] and V2 [11]). As shown in Table 4, both versions can be seamlessly integrated within our framework, confirming that the TSTARMambaMia architecture is compatible with both generations of the Mamba block. When comparing performance, the V1-based model achieves slightly higher average accuracy across the MLVU, VMME, and LVBench datasets, while the V2-based model provides marginally lower inference latency. These results demonstrate that our method accommodates either backbone variant, enabling a practical trade-off between accuracy and efficiency depending on target deployment needs.
Table 4. Ablation of Mamba Block Versions in TSTARMambaMia-Phi3-3.8B. Comparison between Mamba V1 and V2 blocks, reporting latency and benchmark accuracy. Both versions are compatible; V1 offers slightly better accuracy, while V2 achieves lower inference latency.
Table 5. Comparison of Mamba-based LLMs and Scaled NonCompression Baselines. The table reports accuracy and latency for both non-compression models with increased vision token counts and our TSTAR-MambaMia models, illustrating the effectiveness of combining efficient backbones with visual token compression for long-video understanding.
# 5.4. Mamba LLMs and Non-Compression Scaling
Table 5 examines whether efficiency gains from advanced LLM backbone designs such as Mamba [13], or simply increasing the vision token count in non-compression baselines, are sufficient for strong long-video performance. The results show that, while Mamba-based LLMs without compression provide improved efficiency and some accuracy gains, they are still clearly outperformed by our proposed TSTAR-MambaMia models in both accuracy and latency. This underscores the importance of combining efficient backbones with structured visual token compression to achieve substantial advantages in long-video understanding.
# 5.5. Training Protocol Comparison.
Table 6 compares training protocols (unified vs. two-stage) under varying dataset characteristics and scales. Within the unified protocol, scaling image data slightly yet unexpectedly degrades performance. Our analysis suggests this may reflect nuanced dataset-specific factors, such as image-video modality balance or simplified instructional signals, that impact the effectiveness of multimodal supervision for video reasoning tasks. In contrast, a two-stage protocol—which explicitly separates initial image-level instruction tuning from subsequent video adaptation—consistently exhibits robust scaling behavior, possibly due to better handling of modality-specific instructional complexities. Interestingly, large-scale unified training experiments with Phi-3 setups in Table 2 indicate that expanding video data under minimal image data conditions can positively impact unified training, suggesting nuanced interactions among dataset scales, instructional quality, modality balance, and model architectures. Taken together, these results imply inherent uncertainties and trade-offs in multimodal training, underscoring the need for systematic future investigation of data scaling and modality balancing. Crucially, our proposed compression method consistently achieves strong performance across diverse training protocols and scales, demonstrating its robustness and general effectiveness.

Table 6. Comparison of Training Protocols and Single-Image Benchmarks. All models in the table use Vicuna-1.5-7B as the backbone. Results compare unified and two-stage training strategies under varying image-data scales. Single-image models [15, 27] requiring higher token counts per image are provided as references.

Figure 5. Performance vs. token efficiency (7B, scaled-up two-stage training), comparing TSTAR-MambaMia, TSTAR-Bi-Mamba, and per-frame MambaMia across total vision tokens per video (220 to 2,560).
# 5.6. Scaled-up Setting Ablations.
To further reconfirm the effectiveness of our method under the scaled-up setting (7B, two-stage training with full videos), we revisit two key design choices: (1) the benefit of the local aggregation module (MambaMia vs. Bi-Mamba), and (2) the advantage of joint spatiotemporal aggregation (TSTAR vs. per-frame variant). Figure 5 clearly demonstrates that TSTAR-MambaMia consistently outperforms TSTAR-Bi-Mamba across evaluated token counts. Additionally, our TSTAR method, explicitly aggregating context across frames, achieves superior efficiency compared to its per-frame variant. Finally, our method notably achieves performance improvements up to 256 frames (beyond the training-time maximum of 128 frames), showing only moderate degradation at 384 frames—demonstrating stronger robustness and efficiency compared to its per-frame variant, whose performance immediately drops at 256 frames (2,560 tokens), as extrapolation to unseen lengths is known to be challenging [40, 47].
Table 7. Additional Results with Larger Encoder. The OpenAI CLIP-Large results further reinforce our findings in Section 4.2.
# 5.7. Vision Encoder Comparison.
Our main experiments adopt the CLIP-ConvNeXtLarge [29], processing $320 \times 320$ images into 100 tokens. To further assess robustness and broader applicability, we additionally evaluate OpenAI CLIP ViT-L/14-336 [33], a widely-used alternative encoder generating significantly more tokens per frame (576 tokens). Table 7 shows focused comparisons with representative baselines in the setting. Interestingly, due to its higher token density, simple compression methods (i.e., average pooling) perform relatively better than in ConvNeXt-based experiments. Nonetheless, our proposed method maintains clear advantages even with this ViT-based encoder, delivering superior accuracy while using substantially fewer tokens. While extensive exploration with additional encoders remains future work, these results provide encouraging evidence on the generalizability of our approach.
# 1 INTRODUCTION
Elections in the United States are decentralized and conducted by the states. The Help America Vote Act of 2002 prompted all states to modernize their voting infrastructure and retire lever machines. States overwhelmingly adopted voter-marked paper ballots that yield a “voter verifiable paper audit trail” or VVPAT [1] and are scanned and counted by digital tabulators. According to the Verified Voting Database, $69.2\%$ of tabulators are scanners and $25.9\%$ are ballot marking devices (BMD), leaving only a $4.9\%$ market share for direct recording devices that do not use paper at all. To assess the voter's selection for any contest on a ballot, the tabulator determines whether the bubbles associated with the alternatives in the contest are blank or marked. Namely, the core task is to carry out a binary classification on the digital image of a bubble. Barreto et al. argued that Convolutional Neural Networks (CNNs) [2] are suitable for ballot mark recognition, achieving up to $99.9\%$ accuracy on manually labeled ballots. However, machine learning classifiers are vulnerable to adversarial examples [3], where an imperceptible perturbation added to the input induces a misclassification.
This paper explores the susceptibility of machine learning classifiers to adversarial attacks when presented with images of bubbles from the voting domain. Specifically, our paper introduces attacks where one can implant adversarial machine learning examples [3–5] on ballots handled by the voter before they are fed to the tabulator. An adversarial signal is visually imperceptible (to a human), yet alters the classification results. We focus on attacks that could be conducted by a compromised vendor that prints ballots. The attacker’s goal is to print a ballot that appears empty but where some bubbles are interpreted as marked by the tabulator. We detail our threat model in Section 4. This voting domain is unique for the following reasons:
(1) It focuses on a deceptively simple binary classification.
(2) Voluntary voting system guidelines (VVSG) 2.0 require vendors to publish the mechanisms used to classify a bubble, making white-box attacks realistic.
(3) The attacker has to print a signal on paper which is scanned. Effects such as printer dithering must be considered. Kurakin et al. [6] previously considered attacks in the physical world. Our physical world setting differs from prior work.
(4) There are no agreed-upon labeled datasets for mark classification. Human auditors are expected to capture voter intent with guidelines that vary by state (see discussion in Section 3.1).
(5) An attacker can freely reuse adversarial examples; bubbles printed on a ballot are supposed to be identical.
(6) Election equipment has a long life cycle. Deploying vulnerable models carries lasting risks.
We conduct our attacks on six representative models: a support vector machine (SVM), a three-layer CNN that we call SimpleCNN, the VGG-16 CNN [7], a ResNet-20 CNN [8], the Class Attention in Image Transformer (CaiT) [9], and the Twins vision transformer [10].
Our Contribution. We demonstrate the hypothetical vulnerability of using machine learning classifiers in bubble recognition in both the digital and physical setting. In doing so we make the following contributions:
(1) New Labeled Voting Datasets: We introduce four new labeled ballot datasets (two grayscale and two color) for training machine learning classifiers for ballot mark recognition [11].
(2) Gradient Masking on Voting Datasets: We show that for the three convolutional models, the conventional application of white-box attacks (APGD [12], PGD [13] and MIM [14]) does not work. Models show robustness $> 0$ when the adversary can apply unbounded perturbations. This failure is attributed to numerical instability causing gradients in backpropagation to be reported as 0, despite the models achieving high accuracy during training. To the best of our knowledge, all previous examples of gradient masking occurred via defensive methods to stop adversarial examples [15].
(3) Overcoming Gradient Masking: We modify the difference of logits ratio (DLR) proposed by Croce and Hein [12] to work for binary classification (our modification can be viewed as an untargeted version of Carlini and Wagner’s loss [16]).
(4) Physical Attacks: We show that the printing and scanning process (using commodity equipment) drastically degrades the adversarial attack signal. Despite this, some attacks on some models are effective enough to still impact election races with small margins where many voters do not specify a preference.
Disclaimer. We intentionally target common classification models rather than any model used by a tabulator. No tabulator manufacturer has been certified to VVSG 2.0, so details are not yet available on their classification methods. The purpose of this work is to highlight the risk of potentially deploying machine learning models in these systems. As discussed in Appendix C on ethical considerations, our target machine learning models are chosen to cover the design space. They are not an attempt to recreate choices made by vendors. Ballot printers are specialized vendors with long-term relationships with municipalities. We believe an external attack on a vendor is more likely than an insider intentionally compromising ballot printing.
Paper Organization. Section 2 details election systems in the United States. Section 3 presents the new datasets and classifiers under test. Section 4 describes our adversarial threat model and adversarial example generation methods. Section 5 shows that gradient masking occurs on our datasets after standard training. Section 6 shows how DLR overcomes gradient masking. Section 7 presents our experimental results and analyses in the digital domain. Section 8 details our physical domain attack results. Section 9 concludes and presents open questions. Our Appendix contains further experimental details. Classifiers under test and attacks are at https://github.com/VoterCenter/Busting-the-Ballot.
# 2 VOTING IN THE UNITED STATES AND PRIOR WORK
This section provides an overview of voting practices and related security research in the United States. Voting processes are meant to enforce the 1-voter to 1-vote principle to assure fairness. Voting by mail and voting in person rely on different processes suitable to each modality. This work focuses on in-person voting. This process involves multiple steps: 1) checking in voters, 2) handing out ballots, 3) voting and casting of a ballot, and 4) tabulating the results. This paper further restricts its scope to the latter steps of this pipeline: voting, casting and tabulating. Voting systems used in the U.S. include:
Ballot Scanners. Take in ballots with bubbles and determine which bubbles on the ballot have been filled in. The appeal of scanners (used in $66.6\%$ of the U.S.) is that ballots are typically marked directly by voters and form a VVPAT. The VVPAT can be used for machine-independent audits.
Ballot Marking Devices. Provide the necessary means to fill bubbles on behalf of the voter based on an alternate (often digital and computerized) input mechanism instead of a pen. A voter using a BMD fills in a digital artifact that creates a printout of a filled ballot. Some BMDs fill in bubbles and rely on conventional paper ballots. Others encode the ballot in a machine readable format (like a QR code, or a barcode) or print the selection in each race as plain text. Selections, conveyed through filled bubbles, are interpreted primarily by a tabulator as in the previous method. Precincts with BMDs account for $25.9\%$ of the U.S. electorate.
Direct Recording Equipment. Voters use an interface to state their preferences and the machine records these preferences and adds them to a tabulation. The DRE device is used for all 3 stages: encoding a ballot, casting the ballot and tabulating all the ballots. There is no paper artifact of the voter’s preferences. These systems are used in roughly $4\%$ of the U.S. Our work does not apply to DREs, but these systems are shunned due to the lack of a VVPAT [21].
# 2.1 Types of Ballot Scanners
Ballot scanners are given a page containing several questions, each with multiple outcomes that can be chosen by filling a bubble. The scanner classifies each bubble as a blank or a marked bubble. These determinations are then used with election-specific rules to determine the votes.
Optical Lens Systems. This analog technology is ubiquitous in standardized tests where examinees fill bubble sheets that convey answers. Each page features timing marks (black rectangles) on its edges to hint at the position of logical rows and columns. All the bubbles on the page form a matrix and are addressable based on their row and column indices. Scanners process one row at a time (when facing a row timing mark), effectively opening an analog sensor to collect reflected light. The machine has a light sensor in each column position. If a bubble is filled, it absorbs more light than a blank bubble. Whether bubbles are read as marked depends on the thresholds and sensitivity of the sensors. Image segmentation is a function of timing marks and sensor position.
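The analog decision described above reduces to thresholding the amount of absorbed light. A minimal sketch in Python, where the 0.5 threshold and the $40 \times 50$ image shape are illustrative assumptions rather than vendor parameters:

```python
import numpy as np

def optical_mark_decision(bubble: np.ndarray, threshold: float = 0.5) -> bool:
    """Classify a grayscale bubble image (0 = black, 255 = white) as marked.

    Mimics an optical lens sensor: a filled bubble absorbs more light,
    so a low mean intensity means "marked". The 0.5 threshold is an
    illustrative assumption, not a real sensor calibration.
    """
    darkness = 1.0 - bubble.mean() / 255.0  # fraction of light absorbed
    return bool(darkness >= threshold)

# A fully darkened bubble is read as a mark; a blank one is not.
filled = np.zeros((50, 40))        # all-black 40x50 bubble
blank = np.full((50, 40), 255.0)   # all-white bubble
```

Marginal marks (e.g., a light checkmark) land near the threshold, which is precisely what makes the labeling question discussed in Section 3.1 difficult.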
Full Image Scanners. Off-the-shelf full-image scanners are also used. Modern hardware can record anything from 100 dots-per-inch to 1200 dots-per-inch. Given an 8.5x11 US letter page, a $200$ DPI resolution implies that each row has $1700 = 8.5 \cdot 200$ pixels, for a grand total of $2200 = 11 \cdot 200$ rows. Sensors typically capture several rows at a time, and the pixels are physically arranged in a so-called Bayer pattern. The firmware of the scanner (or the driver on the host computer) converts the Bayer matrix of grayscale values into a matrix of RGB values by reconstructing the missing color values through interpolation. This process is known as debayering or demosaicing [22]. The end result is a matrix of RGB pixels where each color channel is grayscale and uses 8 bits per pixel. Pulling the page over the sensor is a mechanical process subject to acceleration and deceleration. The sensor sensitivity impacts the color rendition of the device. Once acquired by a COTS scanner, an image is analyzed using the following steps:
• Stretching The raw image has an effective DPI rate that varies with the acceleration of the ballot. A correction inverts this stretch to bring features closer to their true relative location.
• Registration A constellation of geometric features identified on a reference ballot is used to align any incoming scan with the reference image. Once this is done, a bubble at coordinate $(x, y)$ in the reference image is expected to be at coordinate $(x, y)$ in the registered scan.
• Chromatic Correction An ICC profile corrects the colors in the image to bring them as close as possible to the true colors based on a device-specific colorimetry calibration.
• Segmentation The location of the bubbles on the reference image is used to locate and clip out small RGB bitmaps that contain the actual bubbles.
• Classification The final step classifies each bitmap as a blank bubble or a marked bubble.
Once bubbles are classified as blanks or marks, each race on the ballot can be tabulated according to the rules of the race. The ultimate tabulation stage is not the object of this paper. We also assume that all stages up to and including segmentation are accurate.
Prior Work. Tabulation security received renewed attention [23–32] after the Help America Vote Act in 2002. Issues ranged from unprotected serial ports, to manipulation of election definitions, to exploitation of poorly designed cryptography. Procedures including risk-limiting audits or RLAs [33–37] were created to deal with these vulnerabilities. Note that RLAs only detect whether there is an error in the reported outcome. Detecting the root cause of such errors can be complicated or impossible. Other classes of vulnerabilities include lax adherence to policy [38]. Procedures and requirements are formalized in voluntary voting system guidelines or VVSG [39–41].
Vulnerabilities persist in modern systems [42], including the fact that voters do not always inspect the output of BMDs [43, 44]. Imprinting of identifiers on ballots at tabulation time, which enables more efficient RLAs (see the discussion in [36]), also requires careful use of cryptography [45]. These vulnerabilities have rightly placed the design of secure ballot tabulation devices as a primary focus for the community.
# 3 DATASETS / CLASSIFIER ARCHITECTURES
This section introduces four new datasets of segmented regions on a ballot for interpretation. These are images of bubbles. For each dataset, we define its properties and utility for further investigation. Datasets and the accompanying software are released alongside this paper. The remainder of the section reviews each ML classifier and explains the purpose of its inclusion in the analyses.
# 3.1 New Bubble Datasets
Images are segmented using ballot geometry. We do not consider image segmentation as an attack target. Mark interpretation is a supervised binary classification problem, requiring representative datasets of marks and empty bubbles [2]. The segmented images are $40 \times 50$ pixels. Fully darkened bubbles should be interpreted as marks while empty bubbles should be interpreted as nonmarks. Naturally, one would need some separation oracle to define a boundary between marks and nonmarks. No matter what oracle is chosen, some images will be very close to the boundary. Such images are called marginal marks [46] and may include samples such as checkmarks, crosses, lightly filled or even accidentally filled bubbles; see [46, Table 1]. Rules for interpreting marginal marks vary across municipalities. The desire to account for voter intent complicates the question of what images should be in a training set and how they should be labeled. In all datasets, labels are produced from an optical lens scanner. Finally, images may be captured as grayscale (8 bpp) or color (RGB, 24 bpp) artifacts. We present four datasets.
Figure 1: Types of examples in our dataset. The swatches are artificial marks designed to be close to the border between a mark and a non-mark for an optical scan. The darker backgrounds are the result of using colored stock paper.
Gray-B uses 42,679 images ($40 \times 50$, 8 bpp) with blank (35,429 images) and filled (7,250 images) bubbles but no marginal marks. RGB-B is a 24 bpp color (RGB) version of Gray-B.
Gray-C augments Gray-B with a collection of marginal marks called “swatches” shown in Figure 1. Swatches are images that vary the position of signal to create samples close to the boundary of an optical lens scanner. The 423,703 randomly generated swatches place equal amounts of random noise throughout each image such that the amount of light is the same. This yields 466,382 labeled images. RGB-C is a 24 bpp color (RGB) version of Gray-C.
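The swatch construction can be approximated as follows. This is a hedged sketch: the per-image budget of dark pixels (400 here) is an assumed parameter, since the text specifies only that every swatch carries the same total amount of signal in varying positions:

```python
import numpy as np

def make_swatch(shape=(50, 40), dark_pixels=400, rng=None):
    """Generate a synthetic marginal mark ("swatch").

    A fixed number of fully dark pixels is scattered uniformly at
    random, so every swatch absorbs the same total amount of light
    while the spatial arrangement varies. The 400-pixel budget is an
    illustrative assumption, not the paper's exact signal amount.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = np.full(shape, 255, dtype=np.uint8)               # start blank (white)
    idx = rng.choice(img.size, size=dark_pixels, replace=False)
    img.flat[idx] = 0                                       # darken the chosen pixels
    return img
```

Because the total darkness is constant, every swatch sits at the same distance from an optical lens scanner's light-based threshold, while looking different to a pixel-level classifier.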
Related Voting Datasets. Two other datasets have been used in voting classification research: the Humboldt and Pueblo County datasets. The Humboldt County dataset emerged from the Humboldt County Election Transparency Project, but it is not labeled, and we did not have access to a ballot geometry file or a scanner capable of providing labels. Pueblo County allows one to access individual ballot images along with the tabulator interpretation. However, we could not access the entire dataset programmatically.
# 3.2 Machine Learning Classifiers
This section briefly describes each machine learning model and justifies its inclusion. Details regarding the training and hyperparameters are given in our anonymous repository.
Support Vector Machine (SVM). SVMs support linear classification [47, 48]. An SVM maximizes the distance between its decision boundary and the inputs of each class label in the training set. We used standard linear kernels to represent a simple classifier. Our SVM has 2,001 trainable parameters for Gray-B and Gray-C and 6,001 trainable parameters for the RGB datasets.
Non-linear kernels such as RBF (Radial Basis Function) could be used to characterize the boundary of non-linearly separable regions. A non-linear kernel function maps the original data to a higher dimensional space where linear separation may occur [49]. Using kernel functions with SVMs is often done on challenging datasets [50, 51]. Exploring the effectiveness of a non-linear kernel such as RBF is future work.
Why we selected it: The linear SVM represents one of the simplest machine learning models that achieves high accuracy on both the Gray-B and RGB-B datasets. Evaluating the SVM allows us to better understand how robust low-complexity models are to adversarial attacks with voting datasets.
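As a concrete illustration, linear-SVM training on flattened bubbles can be sketched with a Pegasos-style hinge-loss solver. This is a generic sketch under our own assumptions (solver, learning rate, toy data), not the paper's actual training setup. Note that flattening a $40 \times 50$ grayscale bubble yields 2,000 weights plus one bias, consistent with the 2,001-parameter count above:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, rng=None):
    """Pegasos-style subgradient training of a linear SVM.

    X: (n, d) flattened bubble images; y: labels in {-1, +1}.
    Generic hinge-loss training with l2 regularization; the
    hyperparameters here are illustrative assumptions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                          # hinge-loss subgradient step
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                   # only shrink (regularize)
                w = (1 - lr * lam) * w
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)
```

On separable data such as fully dark versus fully blank bubbles, this converges quickly; the swatches of Gray-C are exactly the samples for which such a linear boundary struggles.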
SimpleCNN. Convolutional models are commonly used for image recognition and classification [52]. SimpleCNN is a shallow convolutional neural network that consists of three identical convolutional layers for a total of 28,818 trainable parameters (grayscale) and 29,394 trainable parameters (RGB).
Why we selected it: SimpleCNN bridges the gap in complexity between the linear SVM and deep convolutional neural networks. Its simple architecture provides a lower-bound accuracy for convolution-based models.
Very Deep Convolutional Network (VGG-16). The VGGNet is a classic convolutional model designed to improve on AlexNet [53]. The VGGNet architecture restricts the filter size in each convolutional layer to $3 \times 3$. When introduced, VGGNet achieved the largest layer depth when compared to other convolutional models of its time [54]. Our VGG-16 grayscale model has 14,723,010 trainable parameters.
Why we selected it: The VGG is the first “deep” convolution-based neural network. This made VGG-16 one of the most common benchmarks in traditional image classification [8, 55, 56].
Residual Networks (ResNets). Vanilla deep convolutional neural networks are susceptible to accuracy degradation [57, 58]. Residual Networks (ResNets) [8] offer a solution to this issue. ResNets rely on skip connections between layers. We test a ResNet-20 with 568,033 trainable parameters (grayscale) and 568,321 trainable parameters (RGB).
Why we selected it: ResNets are one of the most widely used types of convolutional neural network. They have been employed in both traditional image classification [59] and in adversarial machine learning extensively [60, 61].
CaiT. Vision transformers are an emerging alternative to convolutional neural networks. Vision transformers benefit from pretraining and their performance excels on image datasets [62]. However, many deep vision transformers suffer from gradient instability and poor feature learning. The Class-Attention in Image Transformers (CaiT) model is designed to address these issues [62]. First, a learnable scale parameter is added to regularize residual connections between transformer blocks. Second, CaiT introduces the Class-Attention layer that extracts discriminative features from a class embedding and processed patch embeddings. CaiT has 56,730,626 trainable parameters for our RGB models.
Why we selected it: CaiT is one of the state-of-the-art transformer models that has shown excellent performance on image classification tasks. Therefore, it is a natural choice to use as one transformer based alternative to convolutional neural networks.
Table 1: Clean training and validation accuracies on bubble and combined datasets for grayscale models.
Twins. The Twins family [10] refines the base vision transformer architecture. We test the Twins-SVT-B architecture. It utilizes locally-grouped self-attention along with globally sub-sampled attention to improve the model’s performance while relying solely on matrix operations to produce predictions. The Twins model trained on grayscale data has 56,067,880 trainable parameters, whereas for RGB it has 56,070,952 parameters.
Why we selected it: Twins is another representative transformer architecture (like CaiT) with excellent performance on vision tasks.
To summarize, we chose six models across a variety of architectures (linear, convolutional, and attention-based) and sizes (from 2K parameters to nearly 57M).
# 3.3 ML Classifier Performance on Voting Datasets
We focus on the performance of grayscale models. We do not observe meaningful variation of trends or results when training on color models.
All six classifiers were trained on the two gray datasets. Table 1 reports the clean training and validation accuracy for the grayscale models. Two trends are readily apparent. First, all models, except the SVM trained on Combined, achieve a $99\%$ or greater test accuracy on the easy Gray-B validation sets, regardless of whether they were trained on Gray-B or Gray-C. When testing on easy marks, all classifiers are effective, irrespective of their training sets.
Second, high testing accuracy on Gray-C (Combined columns) is not achieved by only training on Gray-B (the -B rows). The testing accuracies are below $80\%$. Training on Gray-C (the -C rows) yields better testing accuracies that range from just $59.6\%$ in grayscale for the SVM to as high as $93.6\%$ for Twins. The SVM model is an exception: its performance on Combined degrades when trained on combined examples. We hypothesize the SVM is overfitting its linear boundary to the swatch examples, which we believe are close to the true “boundary” of the optical scanner.
As mentioned above, an accurate decision boundary for marks other than fully filled bubbles is crucial in real elections. The phenomenon of needing a deep model for high accuracy on complex datasets is consistent with Barreto et al. [2].
# 4 ADVERSARIAL THREAT MODEL AND ADVERSARIAL EXAMPLE GENERATION
We assume an attacker that compromises ballot printing. The attacker delivers the adversarial examples at the printing stage, when blank ballots are produced, stored in a warehouse, or shipped. The attacker creates printed ballots to be filled out by voters and cast in a tabulator. The attacker has full control of the ballot image, and all bubbles must visually appear to be empty at the onset. As this “empty” ballot will be inspected and filled by a voter, the goal is to change the classifier output from non-mark to mark while the added signal remains imperceptible.
As we discuss in Section 4.2, one does not need to impact tabulator accuracy much to have an effect. Table 2 shows the number of close state legislative races in battleground states in the 2020 United States Presidential Election, with $15\%$ of races having a margin of less than $5\%$. Our attacks are most harmful when a large number of voters do not specify a preference for the targeted contest. Thus, our attacks are unlikely to impact a presidential race where almost all ballots specify a preference.
# 4.1 Attack Nomenclature
An attacker that compromises ballot printing can only create examples where a bubble appears to be empty but will be classified as a mark. We call this Over. For completeness, in the virtual domain we also consider an attacker that perturbs a marked bubble so that it is classified as a nonmark. This is called Under, as it could lead to a preference being removed and no preference being counted, known as an undervote.
Virtual. First, in Section 7, we consider the idealized (for the attacker) virtual context where the attacker modifies an image without any intervening printing or scanning. Namely, the adversarial signal cannot be altered by the steps taken with real physical ballots. We test both Over and Under in this domain.
Physical. Second, in Section 8, the paper considers the more realistic physical context where adversarial examples are organized onto sheets of paper which are printed and then scanned using COTS hardware, namely an HP LaserJet-3010 series printer and a Fujitsu-7600 scanner. The resulting images are registered, color-corrected, and segmented (see the overview in Section 2). Laser printing is a noisy process: laserjet printers use dithering to simulate gray intensities, as they support fewer than 256 grayscale levels. Commercial offset printing yields less noise. Yet, tabulators must handle images from commodity printers (such as on-demand ballot printers) as municipalities print ballots when they run out. In this domain we only test Over examples. In the stringent and realistic physical setting where the adversarial signal is printed on paper, it is possible to cause misclassification of non-marks as marks at a high enough rate to impact close elections. This work offers evidence that tabulators using machine learning algorithms are susceptible to adversarial attacks that cause empty bubbles to be interpreted as marks.
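The dithering mentioned above can be approximated in software to see why fine-grained perturbations degrade when printed. Below is a minimal Floyd–Steinberg error-diffusion sketch; actual laser printers use vendor-specific halftoning, so this is only a rough stand-in:

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image (values in [0, 255]) by error diffusion.

    Each pixel is snapped to pure black or white and the quantization
    error is pushed onto unprocessed neighbors. Small-amplitude
    adversarial perturbations largely vanish under this quantization,
    while the average intensity is roughly preserved.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                out[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)
```

Because every output pixel is forced to 0 or 255, a carefully crafted few-intensity-level perturbation survives only through the spatial pattern of dots, which is then further distorted by scanner noise.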
# 4.2 Analyzing Attacks on Voting Systems
Attacking voting systems is distinct from conventional adversarial machine learning in two facets. First, a high attack success rate is not required to impact an election. Second, when a misclassification does occur, several different actions can be taken by the voting system. We detail these differences in this subsection. When the classifier perceives an Over example, one of three things occurs:
Table 2: Number of tight state legislative races in United States Battleground States in 2020 Presidential Election. We omit Arizona as each district elects two legislators and many of these districts have at most 2 candidates.
(1) If the voter did not fill any bubbles in that race (e.g. a local race in a presidential election), the attacker has created a vote for a candidate in that race. The Leon County Post Election Audit of 2022 found that $19\%$ of voters leave at least one race blank. In the 2020 Presidential Election in Nevada (a battleground state), $12\%$ of voters did not vote in their state legislative race.
(2) If the voter marks the same preference as the adversarial bubble, the choice is aligned with the attacker and there is no impact.
(3) If another bubble is filled in by the voter in the same race, the tabulator should report an overvote. The ballot is returned to the voter. The voter then decides whether to submit their ballot anyway or ask for a new ballot to be completed. In an attack by Bernhard et al. [63], voters did “not know what to do if they noticed a problem with their paper ballot during a real election.” In our experience, voters often resubmit their ballots.
What attack success rate can flip an election? In conventional adversarial machine learning, a high robust accuracy (low attack success rate) generally indicates an acceptable defense. For example, one of the most recent state-of-the-art defenses proposed in [64] achieved a robust accuracy of $70.69\%$ against white-box adversarial machine learning attacks on the CIFAR-10 dataset. This robustness would correspond to a $29.31\%$ attack success rate. Elections are routinely decided by small margins (Table 2). An attacker can reuse examples globally, though their reused examples would be subject to the scanning noise discussed in Section 2.
We now illustrate how a small attack success rate can impact a close election. We consider a race with a $2\%$ margin where $12\%$ of ballots are left blank (the rate for Nevada state legislative races in 2020). We assume a two-candidate race. Without an attack, Win will receive a .415 fraction of the vote and Lose will receive a .395 fraction of the vote. The adversary’s goal is for Lose to win over Win by a margin of $.5\%$ (results under this margin often trigger a hand recount). There are three relevant parameters:
(1) What fraction of ballots carry an adversarial example for the Lose candidate? We call this parameter deploy.
(2) What fraction of blank ballots with an adversarial example are misclassified? We denote this as success.
(3) If the ballot has an adversarial example that is misclassified as a vote for Lose and the voter filled in a vote for Win their ballot will be marked as an overvote. When this occurs what fraction of the time does the voter ask for a new ballot? We call this probability 1 − recast and assume in this case the ballot is counted for Win. With probability recast, the voter asks the tabulator to accept the overvoted ballot and the ballot is not counted for either candidate.
Consider the parameters deploy $= 1$, success $= .1$, recast $= .3$; the votes for each candidate become
$$
\begin{aligned}
\mathsf{Lose} &= .395 + .12 \cdot \mathsf{deploy} \cdot \mathsf{success} = .407 \\
\mathsf{Win} &= .415 \, (1 - \mathsf{deploy} \cdot \mathsf{success} \cdot \mathsf{recast}) = .402
\end{aligned}
$$
Some blank ballots are converted to votes for Lose and some Win ballots are converted into overvotes which are not counted for either candidate. Looking ahead to Section 8, we achieve success $\approx .99$ on the most vulnerable model (a support vector machine); the tested models’ vulnerability to realistic attacks varies widely (for more resilient models, success $= 0$ for imperceptible examples).
In the case when a voter refills their ballot after it is marked as an overvote, the second ballot obtained by the voter may also contain an adversarial example that is misclassified as a vote. The overall fraction of ballots where this occurs for the parameters discussed is $.13\%$. Continuing to use Nevada as an example, in 2020 the average state legislative race had 34K cast votes, so $.13\%$ of ballots corresponds to 43 ballots. Widespread occurrences of a voter having to request a new ballot multiple times are likely to arouse suspicion. This creates an incentive for the attacker to keep deploy $\cdot$ success below 1.
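The worked example can be rechecked with a few lines of arithmetic, using the same illustrative parameters:

```python
# Parameters from the worked example: 12% blank ballots, base shares
# Win = .415 and Lose = .395 (a 2% margin), deploy = 1, success = .1,
# recast = .3.
deploy, success, recast = 1.0, 0.1, 0.3

lose = 0.395 + 0.12 * deploy * success           # blanks converted into Lose votes
win = 0.415 * (1 - deploy * success * recast)    # Win ballots lost to uncounted overvotes
margin_shift = 0.02 - (win - lose)               # change in the tabulated margin
```

With these parameters, Lose now leads Win by about $.45\%$: the tabulated margin swings by roughly $2.5\%$ even though only $10\%$ of adversarial bubbles are misclassified.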
In summary, our analysis of attack success rate in the voting domain reveals a very important issue. In traditional adversarial machine learning, a defense is considered successful if robust accuracy is $70\%$. As the example above shows, one can change the tabulated margin of a race by $2.5\%$ even with a robust accuracy of $90\%$.
# 4.3 Generating Adversarial Examples
Adversarial examples can be created from clean images for a given model through either white-box or black-box adversarial attack methods [65, 66]. In both, an attacker begins with a clean, unperturbed image and injects noise. Throughout this work, all clean images used to create adversarial examples come from the Bubbles datasets; we never use a swatch image as a starting point. This work focuses on white-box attacks due to VVSG 2.0 requirement 1.1.6G, which requires tabulators to describe their methods for classifying marks.
The most common method [65] to create adversarial images adds noise based on gradient information from the model. This is referred to as a white-box attack [67]. In this formulation, the gradient of the input with respect to a certain loss function is computed directly using the target model’s architecture and trained weight parameters. This attack is an optimization problem:
$$
\operatorname* { m a x } _ { x _ { a d v } } \mathcal { L } ( x _ { a d v } , y ; \theta ) \quad \text{subject to} \quad | | x - x _ { a d v } | | _ { p } \leq \epsilon
$$
where $\mathcal { L }$ is a loss function, $x$ is a clean (non-perturbed) image with true class label $y$ , $\theta$ represents the parameters of the model being attacked, $\epsilon$ is a bound on the magnitude of the perturbation and $| | \cdot | | _ { \boldsymbol { p } }$ represents the $l _ { p }$ norm. The adversarial example is constrained to be at a distance at most $\epsilon$ from the original clean example.
We use the $l _ { p }$ norm with $p = \infty$ in our attacks. This is a widely used norm in adversarial machine learning [67–69]. We focus on APGD [12], but some of the gradient masking results in the next section use PGD [13]. APGD is a SOTA white-box attack [64, 68, 69].
# 5 GRADIENT MASKING
Gradient masking frequently occurs in adversarial machine learning when evaluating the robustness of defenses to white-box attacks [65, 70]: the gradient of a model is incorrectly estimated during a white-box attack. This phenomenon gives the model a falsely high robustness. Often, defenses are proposed and tested with attacks like FGSM and PGD, and are later broken by adaptive attacks which overcome gradient masking [15, 26, 66]. Gradient masking does not make a model secure.
For voting datasets, zero gradients occur when backpropagating through the SimpleCNN and ResNet-20 models during the attack. In addition, we observe non-monotonic behavior of APGD with increasing $\epsilon$ on VGG-16. It is important to note this occurs after the models have been trained. The models often exhibit maximal predictive confidence of either $(1, 0)$ or $(0, 1)$. In this section, we explore the extent of the issue and show that it is rooted in the numerical instability of floating point and the datatypes used by PyTorch with NVIDIA GPUs. Furthermore, in Section 6, we show how a modified DLR loss can overcome this issue [12].
# 5.1 The Repeated Zero Gradient Condition
We demonstrate the occurrence of zero gradients in multiple different models trained on the voting datasets when conducting standard white-box adversarial attacks.
Experimental Setup. We attack three models (SVM, SimpleCNN, ResNet-20) trained on the grayscale datasets (Gray-C and Gray-B) using PGD. We run PGD for 20 steps with $\epsilon = 0.031$ and step size 0.00155. We randomly select 500 marks and 500 non-marks from the Bubbles validation set (no swatches), and 500 marks and 500 non-marks from the Swatches-only (no bubbles) validation set, all correctly classified. At each of the $k$ steps of PGD we check the maximum element of the absolute value of the gradient matrix. If $\max_i \left\{ \left| \partial \mathcal{L} / \partial x_i^{(k)} \right| \right\} = 0.0$, then this step of PGD exhibits a zero gradient.
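A minimal PyTorch sketch of this check (the function name and toy usage are illustrative, not the paper's code; the defaults mirror the setup above):

```python
import torch

def pgd_with_zero_grad_check(model, x, y, eps=0.031, step=0.00155, steps=20):
    """l-infinity PGD that records the steps where the input gradient vanishes."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    zero_grad_steps = []
    for k in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # The zero-gradient check from the experimental setup:
        if grad.abs().max().item() == 0.0:
            zero_grad_steps.append(k)
        with torch.no_grad():
            x_adv = x_adv.detach() + step * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach(), zero_grad_steps
```

On a healthy model `zero_grad_steps` stays empty; a step listed there makes no progress, since the sign of a zero gradient carries no attack signal.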
Analysis of Zero Gradient. The number of recorded instances of zero gradient across 20 steps are reported in Table 3. All 500 marks encounter a zero gradient for the first step on the SimpleCNN. All 500 marks and 500 non-marks encounter a zero gradient for the first step on ResNet-20. No example for the SVM encounters a zero gradient in the bubbles validation set. Note that while fewer swatch examples express a zero-gradient, only bubbles are considered a valid starting point for our attacks.
Analysis of Confidence. Our models return a tuple for their confidence in each class, vote then non-vote respectively. We provide the average confidence tuple (rounded to four decimal places) over each step for each class. Most notably, for classes and models that encounter a zero gradient for all 500 examples, the confidence is either $(1.0, 0.0)$ for votes or $(0.0, 1.0)$ for non-votes. Since each model uses a softmax activation layer to normalize its outputs, 1.0 is the maximum possible confidence for a class.
# 5.2 Numerical Instability
We devise an experiment that attributes the zero gradient to numerical instability arising from 32-bit floating point arithmetic, as well as the TensorFloat arithmetic used within the PyTorch implementation. We first introduce some notation for our target models.
Confidence and Gradient. A machine learning classifier receives an image $x$ as input, forward propagates it through multiple layers $L _ { i } ( x ) \to L _ { i + 1 } ( x )$, and outputs a confidence vector $\tilde { y }$ at the final layer: the predicted probability that the image belongs to each class. The loss function $\mathcal { L }$ measures how far the prediction $\tilde { y }$ is from the ground truth label $y$.
White-box adversarial machine learning attacks use gradient descent on the loss $\mathcal { L }$ with respect to the source image $x$, i.e., $\textstyle { \frac { \partial { \mathcal { L } } } { \partial x } }$, the gradient at the network input. A zero gradient $\begin{array} { r } { \frac { \partial \mathcal { L } } { \partial h } = 0 } \end{array}$ appearing at some layer during backpropagation will spread to shallower layers and induce zero gradients all the way to the input layer, that is, all the way to $\textstyle { \frac { \partial { \mathcal { L } } } { \partial x } }$. We observed zero gradients at the final softmax layer. We next review the specifics of floating point arithmetic to help understand the root cause.
Floating point refresher. IEEE-754 is the standard defining 32-bit floating point numbers. The use of fixed precision (32 bits, $b _ { 3 1 } b _ { 3 0 } \cdots b _ { 1 } b _ { 0 }$) implies only certain numbers are representable. The standard calls for 1 sign bit $b _ { 3 1 }$, 8 exponent bits $b _ { 3 0 } \cdots b _ { 2 3 }$ (using a bias representation), and 23 mantissa bits $b _ { 2 2 } \cdots b _ { 0 }$ to encode a normal floating point value:
$$
( - 1 ) ^ { b _ { 3 1 } } \cdot 2 ^ { \left( \sum _ { i = 2 3 } ^ { 3 0 } b _ { i } \cdot 2 ^ { i - 2 3 } \right) - 1 2 7 } \cdot \left( 1 + \sum _ { i = 1 } ^ { 2 3 } b _ { 2 3 - i } \cdot 2 ^ { - i } \right)
$$
This representation uses an implicit 1 at the start of the mantissa. The smallest normal floating point is
$$
( - 1 ) ^ { 0 } \cdot 2 ^ { - 1 2 6 } \cdot ( 1 + 0 ) = 2 ^ { - 1 2 6 } \approx 1.1754943508 \cdot 10 ^ { - 3 8 } .
$$
The range of representable floats can be extended with de-normalized representations where the first mantissa bit is zero. This broadens the range by using 0 bits at the most-significant end of the mantissa to boost the exponent, at the expense of the number of digits of accuracy. The smallest de-normalized 32-bit floating point is $1.401298 \cdot 10^{-45}$, which, in binary, is all zeroes except the least significant bit of the mantissa. To retain accuracy, computed values should never drop below the smallest normal floating point.
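These boundary values can be checked directly against NumPy's float32 type:

```python
import numpy as np

info = np.finfo(np.float32)
# The smallest normal float32 is 2^-126 (~1.1754943508e-38):
assert info.tiny == np.float32(2.0 ** -126)

# De-normalized values extend the range down to ~1.401298e-45:
assert np.float32(2.0 ** -149) > 0.0   # smallest de-normal, still representable
assert np.float32(2.0 ** -150) == 0.0  # one step smaller underflows to zero
```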
PyTorch Floating Points. The most popular ML framework, PyTorch, uses the TensorFloat-32 (TF32) representation for floating point numbers, supported by NVIDIA hardware, to compute convolutions.
This shorter representation uses only 10 mantissa bits, 8 exponent bits, and a sign bit. It is designed for speed (about an order of magnitude faster) and is considered good enough for the precision expected by machine learning. The flags are, respectively, torch.backends.cuda.matmul.allow_tf32 to enable TF32 for matrix multiplications and torch.backends.cudnn.allow_tf32 to enable it for convolutions. They were introduced in PyTorch version 1.7. By default, CNNs use TF32 for key computations during forward and backward passes.
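The two switches can be toggled explicitly. The flag names are the real PyTorch backend flags named above; note, as an assumption worth verifying, that their default values have changed across PyTorch releases:

```python
import torch

# Disable TF32 so matmuls and convolutions use full FP32 precision.
# (This trades away the roughly order-of-magnitude TF32 speedup noted above;
# check your PyTorch version, since defaults have changed over releases.)
torch.backends.cuda.matmul.allow_tf32 = False  # matrix multiplications
torch.backends.cudnn.allow_tf32 = False        # convolutions (cuDNN)
```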
Table 3: Zero gradient condition recorded over 20 steps of PGD. We record the number of examples (out of 500) where a zero gradient occurs on the first step, the average number of steps in which the zero gradient condition occurs, and the confidence output for each class over the 500 examples.
Manual Backpropagation. The models here contain a final linear layer that feeds into a softmax activation function. A linear layer accepts a feature vector $h$, performs matrix multiplication with weight matrix $W$, then adds a bias term $b$. The $i ^ { t h }$ column of the output $z$ is the confidence that the input image $x$ belongs to the $( i - 1 ) ^ { t h }$ class. The softmax layer exponentiates each column of $z$ then normalizes over their sum, returning the confidence vector $\tilde { y }$.
$$
z = h W ^ { T } + b \quad \quad \tilde { y } = \frac { e ^ { z } } { \sum _ { i = 1 } ^ { C } e ^ { z _ { i } } }
$$
This allows us to evaluate the CE loss. Note that $y$ is the one-hot encoding of the image's class.
$$
\mathcal { L } = - \sum _ { i = 1 } ^ { 2 } y _ { i } \cdot \log ( \tilde { y } _ { i } )
$$
Consider the gradient of this loss with respect to a feature vector $h$ . We can express it w.r.t. the output of the linear layer $z$ :
$$
\frac { \partial { \mathcal { L } } } { \partial h } = \frac { \partial { \mathcal { L } } } { \partial z } \cdot \frac { \partial z } { \partial h }
$$
Since $z = h W ^ { T } + b$ , the derivative w.r.t. $h$ is just the weight matrix $\frac { \partial \boldsymbol { z } } { \partial h } = \boldsymbol { W } ^ { T }$ and the derivative of the loss w.r.t. $z$ is $\frac { \partial \mathcal { L } } { \partial z } = \tilde { y } - y$ . The product of these terms delivers the full backpropagation equation.
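This last-layer backpropagation can be sketched and checked numerically. A minimal NumPy sketch in float64 (so the float32 underflow discussed in this section does not occur; the function name and test values are illustrative):

```python
import numpy as np

def last_layer_grad(h, W, b, y):
    """dL/dh for CE loss after a linear layer + softmax: (y_tilde - y) @ W."""
    z = h @ W.T + b            # linear layer
    e = np.exp(z)
    y_tilde = e / e.sum()      # softmax confidences
    return (y_tilde - y) @ W   # backpropagated gradient

# Sanity check against a central finite difference of the CE loss.
h = np.array([0.5, -0.2]); b = np.array([0.1, -0.1])
W = np.array([[1.0, 2.0], [0.5, -1.0]])
y = np.array([1.0, 0.0])       # one-hot label

def ce_loss(hv):
    z = hv @ W.T + b
    yt = np.exp(z) / np.exp(z).sum()
    return -np.sum(y * np.log(yt))

eps = 1e-6
fd = np.array([(ce_loss(h + eps * np.eye(2)[i]) - ce_loss(h - eps * np.eye(2)[i])) / (2 * eps)
               for i in range(2)])
assert np.allclose(last_layer_grad(h, W, b, y), fd, atol=1e-6)
```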
Backpropagation Experiments. Both the limited accuracy of 32-bit floating point and the further reduced accuracy of the TF32 type contribute to zero gradients; indeed, the calculations of $z$ and $\tilde { y }$ involve convolutions in PyTorch.
32-bit Floating point accuracy. Consider two swatches $A$ and $B$ shown in Figure 2 that are members of the same class (their $y$ vectors are $[ 1 , 0 ]$). We chose mark swatches because of their visual similarity and because they can exhibit zero gradients (see Table 3). Empirically, $A$ triggers a zero gradient while $B$ does not. Given the fixed weights of the ResNet-20, we can manually compute $z$ and $\tilde { y }$ using the penultimate layer weights $W$ and biases $b$. Namely:
Figure 2: Examples of mark swatches considered for manual backpropagation. The ResNet-20 produces a zero gradient for Swatch (A) and a non-zero gradient for Swatch (B).
$$
z ( A ) = h ( A ) \cdot W ^ { T } + b , z ( B ) = h ( B ) \cdot W ^ { T } + b
$$
as well as
$$
\tilde { y } ( A ) = \frac { e ^ { z ( A ) } } { \sum _ { i = 1 } ^ { C } e ^ { z _ { i } ( A ) } } , \tilde { y } ( B ) = \frac { e ^ { z ( B ) } } { \sum _ { i = 1 } ^ { C } e ^ { z _ { i } ( B ) } }
$$
Those values are used to compute $\frac { \partial \mathcal { L } } { \partial h } = ( \tilde { y } - y ) \cdot W$ for both $A$ and $B$. To understand the stability issue, consider the following values for $z ( A )$ and $z ( B )$:
$$
z ( A ) = \begin{bmatrix} 49.218 & -48.582 \end{bmatrix} , \quad z ( B ) = \begin{bmatrix} 18.516 & -18.059 \end{bmatrix}
$$
The last layer contains 2 neurons, so we get two $z$ -values. Computing $\tilde { y } ( A )$ produces
$$
\begin{aligned} e ^ { z ( A ) } &= [ e ^ { z _ { 1 } ( A ) } \quad e ^ { z _ { 2 } ( A ) } ] = [ e ^ { 49.218 } \quad e ^ { - 48.582 } ] \\ &= [ 2.3720 \cdot 10 ^ { 21 } \quad 7.9635 \cdot 10 ^ { - 22 } ] \end{aligned}
$$
To get $\tilde { y } ( A )$, we compute $e ^ { z _ { 1 } ( A ) } + e ^ { z _ { 2 } ( A ) }$ as $2.3720 \cdot 10 ^ { 21 }$, i.e., $e ^ { z _ { 1 } ( A ) } + e ^ { z _ { 2 } ( A ) } = e ^ { z _ { 1 } ( A ) }$. The magnitude difference between $e ^ { z _ { 1 } ( A ) }$ and $e ^ { z _ { 2 } ( A ) }$ is so large that the second operand is absorbed by the first. The second ratio
$$
\tilde { y } _ { 2 } ( A ) = \frac { e ^ { z _ { 2 } ( A ) } } { e ^ { z _ { 1 } ( A ) } + e ^ { z _ { 2 } ( A ) } } = 0
$$
because the division of a very small number by a very large one underflows the float type. Overall, $\tilde { y } ( A ) = [ 1 \quad 0 ]$ and the first factor of the gradient is $\tilde { y } ( A ) - y ( A ) = [ 1 \quad 0 ] - [ 1 \quad 0 ] = [ 0 \quad 0 ]$. Interestingly, with $B$, the $z$ values are a bit smaller, leading to
$$
e ^ { z ( B ) } = [ 1.1000 \cdot 10 ^ { 8 } \quad 1.4357 \cdot 10 ^ { - 8 } ]
$$
and $\tilde { y } ( B ) = [ 9.999999 \cdot 10 ^ { - 1 } \quad 1.305207 \cdot 10 ^ { - 16 } ]$, which does not trigger the zero gradient. The gradient expressions above were manually derived and evaluated with Octave [71] to independently confirm the observations. With an FP32 representation, the backpropagation through the last layer can yield a zero gradient. Once this occurs, all preceding gradients will be 0 as well.
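The absorption step can be reproduced in float32 with NumPy, using the values from the example above. As an assumption worth flagging: whether the final tiny ratio becomes exactly zero depends on the datatype and the hardware's flush-to-zero behavior, which is why reduced-precision types like TF32 make the problem worse:

```python
import numpy as np

# The two exponentiated logits of swatch A, cast to float32:
a = np.float32(np.exp(49.218))    # ~2.3720e21
b = np.float32(np.exp(-48.582))   # ~7.9635e-22

# Absorption: the ~43 orders of magnitude gap exceeds float32's ~7 decimal
# digits, so the small operand vanishes in the sum and y_tilde_1(A) is exactly 1.
assert a + b == a

# The second softmax component b / (a + b) lands far below the smallest
# normal float32 (~1.18e-38); arithmetic that flushes de-normals to zero
# rounds it to exactly 0, zeroing the (y_tilde - y) factor of the gradient.
assert b / a < np.finfo(np.float32).tiny
```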
TensorFloat 32-bit Floating point accuracy. Recall that PyTorch uses convolutions to compute $z$ and $\tilde { y }$. Given PyTorch's defaults, these convolutions rely on the numerically weaker TF32 type (10-bit mantissa). The accuracy of the $z$ and $\tilde { y }$ values is further reduced, which can increase the occurrence of zero gradients for the same reasons.
Since zero gradients are a direct consequence of the data types used within PyTorch for our datasets, we believe this phenomenon is likely to occur on bubble classifiers produced by industry and researchers. We note that while the problem could be made worse by the specialized datatypes with less precision on NVIDIA GPUs, the problem still occurs with classic 32-bit floating point. Turning off these datatypes has the undesirable effect of causing a $1 0 \times$ slowdown during training. We have not tested behavior on models using 64-bit wide floating points.
One potential solution for the attacks is to employ a randomized start (commonly done in PGD and APGD). However, given that zero gradients are so frequent, they can still occur after the first step of the attack. In addition, randomized start does not provide a deterministic solution to the problem.
# 6 OVERCOMING THE ZERO GRADIENT CONDITION
The zero-gradient condition has previously been encountered [70] when assessing adversarial machine learning defenses. In our work, gradient masking (zero gradients) occurs in the models trained on the ballot datasets without any defenses implemented. To the best of our knowledge, we are the first to observe this phenomenon in classifiers without defensive mechanisms. We show how to overcome this issue using a modified version of the difference of logits ratio (DLR) function proposed in [12].
As an alternative to the cross-entropy (CE) loss, the Carlini and Wagner targeted attack [16] minimizes the following loss function:
$$
F ( x ) = \operatorname* { m a x } \left( z ( x ) _ { t } - \operatorname* { m a x } \{ z ( x ) _ { j } : j \neq t \} , \, - \kappa \right)
$$
where $z ( \cdot ) _ { j }$ is the $j ^ { t h }$ logit output from the model, $z ( \cdot ) _ { t }$ represents the logit of the target class $t$ and $\kappa$ represents confidence with
Figure 3: Examples of mark-to-non-mark and non-mark-to-mark adversarial attacks on the SimpleCNN. We abbreviate mark $\to$ non-mark as Under and non-mark $\to$ mark as Over. The examples above are created with APGD, $\epsilon = 8 / 2 5 5$.
which the adversarial example should be misclassified. Further work in [12] proposed the use of DLR loss function:
$$
\mathrm { D L R } ( x , y ) = - \frac { z _ { y } - \operatorname* { m a x } _ { j \neq y } z _ { j } } { z _ { \pi _ { 1 } } - z _ { \pi _ { 3 } } }
$$
where $z _ { y }$ is the logit output corresponding to the correct class label and $\pi$ is a permutation that orders the elements of the logit output $z$ in decreasing order. It is important to note that the DLR loss function is for multi-class classification where the number of classes $C$ is at least 3, since $z _ { \pi _ { 3 } }$ is undefined for $C < 3$. However, the ballot datasets are binary classification tasks ($C = 2$). Hence we modify the DLR loss function in Equation 3 by only using the numerator. Effectively this reduces the DLR loss function to the untargeted version of the Carlini and Wagner loss function introduced in Equation 2, without the outer maximization and confidence $\kappa$. It is important to note that the denominator $z _ { \pi _ { 1 } } - z _ { \pi _ { 3 } }$ in Equation 3 was included for scale invariance to prevent gradient masking [12]. In our experiments, we observe that the binarized DLR loss operates as expected despite removing the denominator.
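A PyTorch sketch of the binarized (numerator-only) DLR loss (the function name is illustrative):

```python
import torch

def binary_dlr_loss(z, y):
    """Numerator-only DLR loss for binary (C = 2) classification.

    z: (N, 2) logits, y: (N,) integer labels. With C = 2, the term
    max_{j != y} z_j is simply the other logit, and the scale-invariance
    denominator z_pi1 - z_pi3 is dropped since z_pi3 does not exist.
    """
    idx = torch.arange(z.size(0))
    return -(z[idx, y] - z[idx, 1 - y])

z = torch.tensor([[2.0, 1.0], [0.0, 3.0]])
y = torch.tensor([0, 1])
# Correctly classified examples give a negative loss; maximizing it
# drives the logit gap toward the opposite class.
print(binary_dlr_loss(z, y))  # tensor([-1., -3.])
```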
# 7 ADVERSARIAL ATTACKS IN THE VIRTUAL CONTEXT
The virtual context is a best-case scenario for an adversary. When crafting perturbations, we ignore artifacts (e.g., noise) introduced by the physical world. Performance in the more challenging physical context appears in Section 8.
Figure 3 shows how bubble images yield two different types of attacks. Recall that Over examples are adversarial marks that appear empty yet are classified as marks. Likewise, Under examples are adversarial marks that appear to be marks but are classified as blanks. Recall that Under attacks cannot be conducted by an attacker compromising a print vendor, but are presented for completeness. The first set of experiments is designed to answer fundamental security questions:
(1) Which attacks are most effective?
(2) Does the DLR loss overcome the zero-gradient condition?
(3) Does the training dataset impact model resilience?
(4) Does model complexity impact attack success rate?
(5) Do imperceptible $\epsilon$ alterations flip the classifier output?
# 7.1 White-Box Attack Performance
We first investigate the choice of attack. We use model robustness at each perturbation magnitude ($\epsilon$ value) to determine the best performing attack.
Experimental Setup. We attack six models (SVM, SimpleCNN, VGG-16, ResNet-20, CaiT, and Twins) trained on the grayscale datasets (Gray-C and Gray-B) using APGD. We consider two versions of APGD 1) with CE loss, 2) with DLR loss.
We randomly select 500 marks and 500 non-marks from the Bubbles validation set (no swatches) that were correctly classified by the target model. The choice of clean initial samples follows Mahmood, Mahmood, and van Dijk's methodology [68]. For all attacks, $\epsilon$ varies from $4 / 2 5 5$ to $2 5 5 / 2 5 5$. We report the resulting robust accuracy in Table 4 for the Gray-C and Gray-B datasets.
Analysis of Attack Performance. Model robustness across all grayscale datasets is reported in Table 4. We additionally tested the prior attacks FGSM [72], MIM [14], and PGD [13]. In Appendix B, we show that all attacks on ResNet-20 using CE exhibit non-zero robustness with $\epsilon = 2 5 5 / 2 5 5$, indicating that the zero-gradient condition occurs for all standard attacks (Table 10).
Training Dataset Matters. Usually, models trained on the Gray-B dataset deliver higher robust accuracies than their counterparts trained on the Gray-C dataset. The only exception is the VGG-16 model, which is much more robust when trained on Gray-C. The Gray-B SVM and SimpleCNN are completely robust up to $\epsilon = 3 2 / 2 5 5$, CaiT is completely robust until $\epsilon = 1 6 / 2 5 5$, and ResNet-20 is completely robust until $\epsilon = 8 / 2 5 5$.
The more challenging and realistic Gray-C training dataset conveys a different picture. Indeed, the SVM classifier accuracy drops to $5 0 \%$ at the smallest $\epsilon = 4 / 2 5 5$, while ResNet-20 and Twins drop much further, even for small values of $\epsilon$. SimpleCNN and VGG-16 retain some resilience at small values of $\epsilon$ ($\leq 8 / 2 5 5$).
As expected, for the majority of the models, training on a more complex dataset forces the classifier to learn marginal marks, moving the decision boundary in a way that makes adversarial attacks easier. The only exception to this rule is VGG-16, which we hypothesize is due to the VGG architecture. The literature has shown the VGG family of models to have other intriguing adversarial properties [73]. It is also worth noting that while VGG-16 trained on Gray-C is more robust than its Bubbles counterpart, VGG-16 is not the most robust CNN model.
From a functionality standpoint though, tabulators must handle marginal marks. Recall from the Introduction and Table 1 that training on Bubbles reduces the clean accuracy of models tested on Combined by $12$–$15\%$, drastically impacting the accuracy of the classifier on marginal marks. As a reminder, SVM actually increases performance on Gray-C by training on Gray-B, but is not accurate enough for practice when trained on either dataset. One cannot sacrifice performance on marginal marks for resilience to adversarial examples.
When compared to the first row of Table 4, all models trained on RGB-C, except ResNet-20, achieved similar or worse robustness than their Gray-C counterparts up to $\epsilon = 3 2 / 2 5 5$. We attribute this to more image channels creating a higher dimensional space that is easier to exploit. As we discuss in Section 8, grayscale images are the preferred method in modern tabulation equipment.
Figure 4: Adversarial examples from varying $\epsilon$ for APGD variants on ResNet-20 trained on the Gray-C dataset. Note that the Over example in APGD-CE experiences gradient masking.
Analysis of Model Complexity. Madry et al. argued that increasing model complexity increases robustness to single-step adversarial machine learning attacks [13]. There does not seem to be a connection between model complexity and robust accuracy in our results. Table 4 shows that Twins and ResNet-20 are the most vulnerable models while SimpleCNN and CaiT are the most resilient.
# 7.2 White-Box Perturbation Magnitude
We now investigate the choice of $\epsilon$. A large $\epsilon$ yields noticeable adversarial perturbations. In conventional adversarial machine learning there is generally a monotonic relationship between robustness and $\epsilon$, i.e., increasing $\epsilon$ decreases robustness. However, for our datasets, CE APGD does not exhibit monotonic behavior for the CNN models, as shown in Table 4. As discussed above, this is due to the difficulty of gradient estimation on these datasets. As an extreme example, VGG-16 on Gray-C with $\epsilon = 6 4 / 2 5 5$ has model robustness of 0, but $\epsilon = 2 5 5 / 2 5 5$ increases model robustness to .453. In contrast, DLR APGD exhibits monotonic behavior for all models.
Analysis of Attack Perturbation. Figure 4 shows attack images for varying $\epsilon$ when considering CE and DLR. The model under attack is ResNet-20 trained on Gray-C. We note the lack of increasing noise in the first row indicating gradient masking.
We now consider Figure 5, which shows increasing $\epsilon$ for each model using the DLR loss. For $\epsilon = 1 6 / 2 5 5$ the attack signal starts to be noticeable for all of our attacks. It is also worth noting that in Table 4, Gray-C SimpleCNN has robust accuracy of 0.5 at $\epsilon = 1 6 / 2 5 5$. At this point all Over examples have flipped class labels (without any changes to Under labels). Accuracy restricted to Over is shown in Table 5, as that is the focus of our physical world experiments. All six models have robustness of 0 for Over examples at $\epsilon = 1 6 / 2 5 5$. Attacks are easily detectable at the next noise level of $\epsilon = 3 2 / 2 5 5$; this is when our attacks start creating Under examples for all models.
We consider attacks under $8 / 2 5 5$ unnoticeable in the virtual domain and under $1 6 / 2 5 5$ unnoticeable in the physical domain; see Section 8.
Visibility of Examples Across Models. Figure 5 shows the impact of $\epsilon$ when attacking models trained on Gray-C using APGD. Only Over examples are shown since this represents the more successful attack setting in the physical domain. Figure 5 shows that for all models, $\epsilon \leq 8 / 2 5 5$ produces imperceptible perturbations. These perturbations become noticeable when $\epsilon \geq 1 6 / 2 5 5$ . The amount of noise seems to qualitatively be the largest for the SVM and CaiT models. We also note attention artifacts appear clearly visible in attacks on the CaiT model. The noise level for the SimpleCNN, VGG-16, ResNet-20, and Twins models appears similar.
Table 4: Robust Accuracy Results under APGD attack across models using CE and DLR losses on both Gray-C and Gray-B datasets.
Figure 5: Adversarial examples from varying $\epsilon$ for APGD with DLR on models trained on Gray-C dataset.
# 8 ATTACKS IN THE PHYSICAL WORLD
In this section, we consider the Print threat model in the physical world. This attack scenario uses an adversarial printer to generate Over examples on printed ballots. We start by reviewing prior work on physical world adversarial machine learning.
Prior Work. Wei et al. [74] surveyed adversarial machine learning in the physical world. Sharif et al. [75] designed targeted adversarial perturbations using a modified softmax loss function. They printed these $2 2 4 \times 2 2 4$ pixel images, of which approximately $6 \%$ of the area is covered by an adversarial patch shaped like glasses. These adversarial glasses were then used to attack (evade) facial recognition systems; the perturbations are human-perceptible. Recall that in our attacks, one must produce ballots that are indistinguishable from empty ballots. Kurakin, Goodfellow, and Bengio [76] use a modified version of FGSM with the CE loss function to design untargeted perturbations at various degrees of $\epsilon$, print clean and adversarial images, and classify them using a phone camera. These adversarial images are human-imperceptible. Like our work, they also consider the $l _ { \infty }$ distance. However, their work does not consider printer dithering, which we now discuss.
Our Pipeline. We focus on Over examples that can be created by an adversarial printer. Attacks follow the following physical pipeline:
(1) Adversarial example generation (see Section 7).
(2) Layout on an empty page.
(3) Printing using a commodity laserjet printer.
(4) Scanning using a commodity scanner.
(5) Image alignment, color correction, segmentation.
(6) Classification using the target model.
Steps 2-5 are not needed in Section 7. This attack is not perfectly realistic in the following sense:
(1) A ballot printer could use higher-end printing techniques (e.g., offset printing, or even photo-realistic printing) instead of laserjet printing.
(2) We use ad hoc registration and segmentation. Yet, it appears to not introduce measurable/substantial error.
Vendors have moved away from colored ballots in favor of black and white ballots where colors are only decorative. This section further restricts the investigation to models trained on the Combined dataset to understand the impact of the physical world on models that could realistically serve in place of optical scanners.
# 8.1 Physical Dataset
All the models are trained as described in Section 3. However, physically extracted bubbles are first fed through a denoising autoencoder to remove the noise introduced by the printing process, then fed through a classifier. Without this denoiser, models had clean accuracy of under $9 0 \%$ on bubbles; see Table 8 in the Appendix. The denoiser is described in Appendix A.1. In comparison to previous physical attacks, we see substantial noise when classifying clean images after printing and scanning, due to 1) dithering by the printer and 2) printing with a limited set of pixel values.
Layout. We lay out all bubbles ($4 0 \times 5 0$ pixels at $2 0 0 \mathrm { d p i }$) in a matrix on an $8 . 5 \times 1 1$ inch sheet, spaced 0.5 inches apart. These sheets are printed and scanned. Registration corrects misalignment from the scanner and extracts each bubble from its exact location.
Printing. We used an HP LaserJet-3010 printer. It is a monochrome laser printer with max print speed of 42 pages per minute (ppm). It prints at 1200 dots per inch (dpi).
Scanner. The scanner is a Fujitsu-7600 with 24 bpp color depth and a standard automatic document feeder. We scanned in grayscale at 200 dpi using the SANE software. The dpi values of the printer and scanner are multiples of each other (6×) to avoid fractional scaling.
Correction. We used Argyll with color calibration sheets (e.g., IT8) to tune the scanner with an ICC profile for color tonality errors.
The drop in accuracy. Printing and scanning, even with ICC correction techniques in place, introduces additional challenges. Images are visibly different from their digital source. LaserJet printers simulate gray using dithering patterns that trick the human eye into seeing gray. The net result is that the printed (and re-scanned) bubbles are darker and noisier than the original material the models were trained on. As a result, the models' classification accuracy drops from over $9 9 \%$ clean accuracy without printing to under $9 0 \%$; see Table 8. To mitigate this, we use a denoising autoencoder, described in Appendix A.1. Prepending this denoiser to our models makes them accurate at classifying images both before and after printing with the LaserJet printer.
# 8.2 Physical Attack Results
We run APGD with DLR loss for $\epsilon$ from $4 / 2 5 5$ to $2 5 5 / 2 5 5$ on all of our models. The examples are fed through the physical extraction pipeline, the denoiser, then the classifier. We consider 500 non-mark bubbles pre-print and post-print. Our attacks only use the classifier weights; future attacks could incorporate the denoiser weights into the backpropagation step.
Results are shown in Table 5. For $\epsilon = 1 6 / 2 5 5$, which we judge to be imperceptible in the physical domain, SVM and CaiT are very susceptible to post-print Over examples. ResNet-20 and Twins have some resilience but demonstrate robust accuracy less than 1.000. We note that ResNet-20 and Twins do not demonstrate monotonically decreasing robustness as $\epsilon$ increases. This non-monotonic behavior is unexpected, but our attacks are "unaware" of the print-scan noise. Interestingly, digital model robustness appears not to impact physical world resilience: the highly vulnerable digital models VGG-16 and Twins are not especially vulnerable to our physical attacks, whereas the SVM is vulnerable in both domains.
Summary. Adversarial ML involving the physical world is more complicated, with several sources of noise that can destroy the adversarial signal, such as dithering, scanning misalignment, and changes in intensity ranges. Nonetheless, for the SVM, ResNet-20, Twins, and CaiT, Over attacks are viable. As a reminder, a printer can reuse a single attack image.

We show the security risk associated with using machine learning classifiers in United States election tabulators. The central classification task in election tabulation is deciding whether a mark does or does not appear on a bubble associated to an alternative in a contest on the ballot. Barretto et al. (E-Vote-ID 2021) reported that convolutional neural networks are a viable option in this field, as they outperform simple feature-based classifiers.
Our contributions to election security can be divided into four parts. To demonstrate and analyze the hypothetical vulnerability of machine learning models on election tabulators, we first introduce four new ballot datasets. Second, we train and test a variety of different models on our new datasets. These models include support vector machines, convolutional neural networks (a basic CNN, VGG and ResNet), and vision transformers (Twins and CaiT). Third, using our new datasets and trained models, we demonstrate that traditional white box attacks are ineffective in the voting domain due to gradient masking. Our analyses further reveal that gradient masking is a product of numerical instability. We use a modified difference of logits ratio loss to overcome this issue (Croce and Hein, ICML 2020). Fourth, in the physical world, we conduct attacks with the adversarial examples generated using our new methods. In traditional adversarial machine learning, a high (50% or greater) attack success rate is ideal. However, for certain elections, even a 5% attack success rate can flip the outcome of a race. We show such an impact is possible in the physical domain. We thoroughly discuss attack realism, and the challenges and practicality associated with printing and scanning ballot adversarial examples.
# 1 Introduction
# 1.1 Context
Historical cadastral records, widely distributed throughout Europe, serve as invaluable documents for reconstructing past urban and territorial information$^1$. These records document property ownership, usage functions, and other essential elements for taxation, offering high confidence in their reliability due to their administrative purpose$^2$. Often paired with cartographic mappings, these dual systems combine textual descriptions with geographic representations following standardized visual and ontological codes to minimize subjective interpretation and enhance utility for taxation$^3$. The evolution of cadastral sources has been extensively studied, with analyses spanning specific case studies to comparative frameworks. Associated cartography, particularly before and after the Napoleonic introduction of geometric-parcel cadastres, reflects a shift towards standardized cartographic practices$^4$. These sources are critical for reconstructing historical population data, property functions, and urban spatial dynamics.
These historical cadastral records offer rich opportunities for understanding urban development, social structures, and economic patterns. Researchers can investigate diverse questions ranging from property ownership dynamics (e.g., identifying dominant landholders or tracking wealth concentration) to spatial-functional analyses (e.g., mapping commercial activities in specific districts).
While digitization has made these documents more accessible, the challenge lies in efficiently querying and analyzing this wealth of information to answer such research questions. Traditional manual methods of data extraction and analysis are time-consuming and limit the scale at which these historical insights can be pursued, creating a need for computational approaches that can systematically process these records while maintaining the nuanced understanding required for historical research.
To address these analytical challenges, this paper explores the application of Large Language Model (LLM)-based agents to analyze historical cadastre datasets, specifically focusing on two digitized Venetian cadastres: the 1740 Catastici from the Republic of Venice and the 1808 Sommarioni from the Napoleonic Kingdom of Italy$^5$. These historical records present significant processing challenges, including orthographic variations, transcription errors, and non-standardized formats, as illustrated in Figure 1.
To effectively query the cadastral data, we propose a text-to-program approach that translates natural language queries into executable code. Within this strategy, we explore two complementary techniques. The first employs text-to-SQL translation, which is optimal for precise, structured queries about specific cadastral information (e.g., "Who owned property X in 1740?" or "What was the value of properties in district Y?"). The second utilizes text-to-Python translation, enabling more complex analytical operations and pattern recognition through custom code generation. This technique is particularly suitable for broader research questions requiring data manipulation and statistical analysis (e.g., "How did property ownership patterns change between 1740 and 1808?" or "What were the spatial distribution trends of different property types?").
Figure 1. The role of LLMs in processing historical Cadastres. Processing historical records with orthographic variations and complex transcription details through text-to-SQL and text-to-Python approaches for systematic data analysis.
The remainder of this paper is structured as follows. In the next section, we present the cadastral data of Venice that forms the foundation of our study. Section 3 introduces a typology of historical questions relevant to this cadastral data, addressing different analytical needs and query complexities. Sections 5 and 6 detail our text-to-program systems, demonstrating the implementation and capabilities of both text-to-SQL and text-to-Python approaches. We conclude by evaluating the accuracy and reliability of these systems, offering a typology of their error types and consistency metrics to ensure answer trustworthiness.
# 2 Cadastral data of the city of Venice
The first geometric cadastre in Venice was established in 1808, adhering to French administrative standards$^6$. As displayed in Figure 2, it operates as a dual system, combining cartographic maps of parcels with textual records that document ownership, location, function, and area. Each parcel is assigned a unique number, which is cross-referenced with the records that catalog owners, toponyms, uses, and dimensions (see Figure 3b). Ownership records include individual names, family relationships, and institutions, while functions are classified using a codified Italian ontology from 1808$^7$. In particular, the terms reflect historical usage; for instance, a "shop" is referred to as 'bottega' rather than the modern Italian term 'negozio'.
Figure 2. The dual information system of the 1808 cadastre. Each parcel mentioned in the textual document is geolocalized on the cadastral map through the same ID code.
In contrast, the 1740 Catastico represents a textual survey system managed by the Collegio dei Dieci Savi at Rialto, designed to administer the Venetian tithe, a $10\%$ property tax introduced in 1463. Property owners submitted self-declared 'Condizioni di Decima' or 'Polizze', which detailed property type, location, status, and income. These submissions were organized by district and sequentially numbered by submission order, with taxation calculated from declared rents$^8$. The overall informational structure of the document is displayed in Figure 3a. Following a major archival fire in 1514, redecimation efforts occurred sporadically, with significant collections in 1514, 1537, 1566, 1581, 1661, 1711, and 1740. Unlike the 1808 cadastre, the Catastico did not integrate cartographic representation. Instead, records were generated through door-to-door surveys conducted by censors, who documented owners, tenants, toponyms, property functions, and rents (see Figure 3a).
These systems reflect complementary approaches to surveying. The 1808 cadastre is cartographic, parcel-based, and systematic, while the Catastico is textual, household-focused, and income-oriented. Despite these differences, both systems exhibit stable informational structures$^8$. Both have been digitized and transcribed: in the 1808 cadastre, parcel identifiers, codes, and toponyms were automatically transcribed and subsequently verified manually$^{9,10}$. For the 1740 Catastico, geolocation was achieved by correlating toponyms with contemporary maps, reconstructing the censors’ survey paths, and identifying shared features between parcels recorded in 1740 and 1808.
Together, these datasets encompass more than 34,000 data points, providing detailed information on owner professions, tenants, property functions, rents, areas, and geolocations.
# 3 A typology of historical questions related to cadastre data
In urban historical research, cadastral data are used to link people and urban functions to territories. There are several objectives when consulting these records. The first is to identify the location of one or more people or urban functions in a specific place or places. The second is to investigate more general principles and test hypotheses about specific places, groups of people, or types of function, or to compare periods of time, particularly the past with the present. These research questions often involve complex statistical operations to aggregate and compare information that spans multiple data points from potentially different datasets. In the first case, we test the accuracy of LLMs in finding, combining, and computing the information we need: the names of owners, urban functions, toponyms, and more. This allows the LLM to be used as a historical dataset browser. In the second case, we try to understand the precision of LLMs in answering complex questions that require a notion of the semantic and temporal context of the historical dataset. We organize the paper around these two distinct objectives in historical research: browsing vs. prompting.
Figure 3. The informational structure of (A) the Catastici 1740 and (B) the Sommarioni 1808. The structure of the two documents is as follows. For (A): 1) place name, 2) urban functions, 3) tenants, 4) owners, 5) annual income. For (B): 1) cadastral parcel identifier corresponding to a number on the map, 2) owners, 3) door number, 4) urban functions.
# 3.1 Browsing questions
Browsing questions can be categorized into 1) simple aggregation queries and 2) more complex relational queries. Simple aggregation queries focus on straightforward retrieval of information, such as calculating the total rent revenue generated by properties of a specific type. For example, a researcher might ask, “What is the total rent revenue generated from properties of the ‘bottega da casarol’ variety?“. This type of query provides quick insights into financial aspects of property types, allowing immediate analysis of income generated from specific categories.
In contrast, relational queries delve deeper into the dataset to examine the relationships and patterns among various data points, such as identifying how many families own properties across multiple categories. An example of such a question could be, “How many families own properties of more than one type category?“. These relational queries are essential for revealing trends in property ownership and usage, providing insights into socio-economic dynamics within urban settings. Table 1 displays an aggregation as well as a relational question with their related SQL queries. In section 5, we will evaluate the performance of a text2SQL agent on a set of 100 hand-crafted browsing questions about the Catastici 1740 dataset (provided in Appendix Section A.1).
Table 1. Examples of browsing questions with their corresponding SQL queries.

| Browsing Question | SQL Query |
|---|---|
| What is the total rent revenue generated from properties of the "bottega da casarol" variety? | `SELECT SUM("Rent_Income") FROM catastici WHERE "Property_Type" = 'bottega da casarol';` |
| How many families own properties of more than one type category? | `SELECT COUNT(*) FROM (SELECT "Owner_Family_Name" FROM catastici GROUP BY "Owner_Family_Name" HAVING COUNT(DISTINCT "Property_Type") > 1) AS families_with_multiple_types;` |
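The queries in Table 1 can be exercised end-to-end on a toy SQLite table. The sketch below uses the column names of the simplified schema from Section 5.1; the rows themselves are invented for illustration.

```python
import sqlite3

# Build a toy "catastici" table following the simplified schema of Section 5.1
# (column names real, rows invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE catastici (
    Catastici_ID INTEGER, Owner_ID INTEGER, Owner_First_Name TEXT,
    Owner_Family_Name TEXT, Property_Type TEXT, Rent_Income INTEGER,
    Property_Location TEXT)""")
rows = [
    (1, 10, "Marco",  "Grimani",  "bottega da casarol", 120, "San Polo"),
    (2, 10, "Marco",  "Grimani",  "magazzeno",           80, "San Polo"),
    (3, 11, "Anna",   "Corner",   "bottega da casarol",  60, "Castello"),
    (4, 12, "Pietro", "Morosini", "casa",               200, "Cannaregio"),
]
conn.executemany("INSERT INTO catastici VALUES (?,?,?,?,?,?,?)", rows)

# Aggregation query from Table 1
total = conn.execute("""SELECT SUM("Rent_Income") FROM catastici
    WHERE "Property_Type" = 'bottega da casarol'""").fetchone()[0]

# Relational query from Table 1
families = conn.execute("""SELECT COUNT(*) FROM (
    SELECT "Owner_Family_Name" FROM catastici
    GROUP BY "Owner_Family_Name"
    HAVING COUNT(DISTINCT "Property_Type") > 1
) AS families_with_multiple_types""").fetchone()[0]

print(total, families)  # 180 1
```

Note the SQLite convention at work: double quotes delimit identifiers such as `"Property_Type"`, while single quotes delimit the string literal being matched.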
# 3.2 Prompting questions
Prompting questions are designed to go beyond mere data retrieval or the aggregation of information from individual datasets. Unlike browsing questions, which rely on exact matching of entities and are thus not robust to typos, synonyms, or variations – and require users to know precisely which data points exist within the dataset– prompting questions aim to leverage multiple data sources along with common-sense understanding to uncover richer, more nuanced insights. Such questions require a deep understanding of linguistic subtleties, particularly in categories such as professions, ownership, and the intricate interrelations among entities within tabular data. This involves not only extracting explicit information, but also interpreting implicit connections. Furthermore, prompting questions frequently demands a conceptual grasp of spatial and temporal dynamics to effectively organize and contextualize data. In certain instances, city-specific knowledge becomes crucial for identifying diachronic language, local customs, or accurately inferring distances.
After careful analysis of the datasets and their potential applications, we identified that meaningful questions about historical cadastral data could be organized into four distinct categories. The first category leverages the geocoordinates of the cadastre entries to examine spatial distributions, enabling queries that bridge past and present urban landscapes. By relating historical properties to relatively stable urban landmarks such as churches and squares (extracted from OpenStreetMap$^{11}$), we can investigate how individuals and properties were allocated across diverse areas. Although these landmarks may have undergone modifications or reconstructions over time, they often maintain their general location and social function, serving as semi-persistent spatial anchors for historical analysis. The second category is dedicated to building functions, exploring the intended purposes or uses of various structures within the urban environment. The third category focuses on personal information, examining demographic and socioeconomic characteristics associated with individuals in the cadastral data. Finally, the fourth category targets temporal analysis, specifically comparing data over two distinct periods to reveal trends, shifts, or patterns over time.
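The spatial-anchor idea amounts to computing great-circle distances between historical parcels and modern landmarks. A minimal stdlib sketch follows; the landmark names are real Venetian sites, but the coordinates are approximate and invented for illustration, not taken from the datasets.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def nearest_landmark(parcel, landmarks):
    """Return (name, distance_m) of the closest semi-persistent landmark."""
    return min(
        ((name, haversine_m(parcel[0], parcel[1], lat, lon))
         for name, (lat, lon) in landmarks.items()),
        key=lambda t: t[1],
    )

# Approximate, illustrative coordinates around Venice.
landmarks = {
    "San Marco": (45.4340, 12.3390),
    "Rialto":    (45.4380, 12.3359),
}
name, dist = nearest_landmark((45.4375, 12.3365), landmarks)
```

Queries such as "commercial buildings within 100 meters of a square" then reduce to filtering parcels by `haversine_m(...) <= 100` against the relevant landmark.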
In sum, we have curated a comprehensive set of 140 questions, which we are releasing as an opensource resource along with this paper. All questions have been validated by urban specialists to ensure their relevance for urban analysis applications. Table 2 presents a selection of questions from each category. The questions encompass diverse expected output formats, ranging from binary yes/no responses to numerical values or the identification of specific entities.
Unlike for browsing questions, designing SQL queries for prompting questions presents significant challenges due to their inherently complex nature. Prompting questions often require insights that extend beyond the information readily available in the dataset’s structure or columns. The intricate operations necessary for these questions move beyond simple data filtering and aggregation, involving advanced processes such as semantic searches, spatial computations, and statistical tests or correlation evaluations. Additionally, certain prompting questions demand the incorporation of external knowledge or common-sense reasoning, which cannot be encapsulated purely through SQL. In section 6, we introduce the text-to-Python agent, which transforms prompting questions into Python programs to effectively generate answers.

Table 2. Examples of prompting questions, alongside their category and expected output format.
# 4 Approaches to Historical Cadastral Data Processing
Processing historical cadastral data is challenging due to irregular formats, orthographic variations, and complex historical annotations. Existing computational methods fall into three main categories: machine learning, rule-based approaches, and large language model (LLM)-based code generation.
Machine Learning Approaches: Supervised models like TabularNet $^ { 1 2 }$ and TableFormer $^ { 1 3 }$ have improved table parsing by leveraging neural architectures, while STab $^ { 1 4 }$ introduced self-supervised learning for diverse tabular data. More recent methods, such as mixed-type tabular data synthesis15, aim to reconstruct incomplete records. However, these approaches require extensive training data and struggle with the variability of historical sources.
Rule-Based Methods: These approaches rely on predefined extraction rules for normalizing toponyms, recognizing property descriptions, and applying historical ontologies. While interpretable, they lack adaptability to inconsistent datasets.
LLM-Based Code Generation: Instead of relying on pre-trained models for structured parsing, we leverage LLMs to dynamically generate executable queries, either SQL for structured retrieval or Python for complex analyses. This approach offers: (1) Flexibility – no need for labeled training data, making it adaptable to diverse historical datasets; (2) Interpretability – produces verifiable code rather than opaque model outputs; and (3) Scalability – handles both simple lookups and complex spatial-temporal analyses.
In the following sections, we present two complementary approaches: a SQL-based system for precise data retrieval and a Python-based system for complex analytical tasks. These approaches demonstrate how code generation through LLMs can offer a more adaptable and maintainable solution for historical data analysis compared to traditional ML methods.
# 5 Browsing cadastre data with SQL agents
# 5.1 Method
Questions and data specifications. We develop a set of one hundred questions designed to facilitate the exploration of the Catastici 1740 dataset. These questions are available in Section A.1. They were handcrafted to cover both retrieval and relational concepts (as described in Section 3.1) about the information contained in a simplified version of the Catastici dataset, which includes seven columns: Catastici_ID [integer], Owner_ID [integer], Owner_First_Name [text], Owner_Family_Name [text], Property_Type [text], Rent_Income [integer], Property_Location [text].
Model. To convert these natural language questions into SQL queries, we employ the open-source text-to-SQL model, CodeS-7B16. In accordance with the model’s specifications, each prompt includes both a comprehensive description of the table’s metadata and detailed information about its columns. We assess the model’s performance in both zero-shot and three-shot prompting scenarios, with the results detailed in the subsequent subsection. Each question is fed into the system a total of four times, and the SQL query selected is determined through a majority voting mechanism. The resulting SQL query is executed using SQLite to obtain the corresponding answers. For further clarity, examples of the prompts designed to interact with the CodeS-7B model are provided in Appendix A.2.
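The majority-voting step over repeated generations can be sketched as follows. In the real pipeline the four candidates come from CodeS-7B; here they are hard-coded strings, and whitespace normalisation is an assumption about how trivially different generations are grouped.

```python
from collections import Counter

def majority_vote_sql(candidates):
    """Pick the most frequent candidate query among repeated generations.

    Whitespace is normalised so generations that differ only in spacing
    still vote together; ties fall to the first-seen candidate.
    """
    normalised = [" ".join(q.split()) for q in candidates]
    query, _count = Counter(normalised).most_common(1)[0]
    return query

candidates = [
    'SELECT COUNT(*) FROM catastici;',
    'SELECT  COUNT(*)  FROM catastici;',          # same query, extra spaces
    'SELECT COUNT(Catastici_ID) FROM catastici;',  # outvoted variant
    'SELECT COUNT(*) FROM catastici;',
]
chosen = majority_vote_sql(candidates)  # → 'SELECT COUNT(*) FROM catastici;'
```

The chosen query is then executed with SQLite, as described above.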
Figure 4. The SQL-Agent. Questions are fed to the system into a prompt engineered to match with the CodeS model requirements.
# 5.2 Results
Table 3 presents the performance of the SQL Agent on 100 curated questions exploring the Venetian cadastral dataset. Performance is evaluated using exact match accuracy, which measures the alignment of generated SQL queries with ground truth, and unigram overlap, which assesses lexical similarity. The ground truth is defined by executing manually annotated SQL queries. We use unigram overlap as a complementary metric because it captures cases where the executed SQL query yields the correct information but differs from the ground truth due to additional contextual details or differences in the order of elements in lists. Across the 100 questions, no SQL runtime errors were observed.
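The paper does not spell out the exact unigram-overlap formula; the recall-style variant below is one plausible reading that is tolerant to extra contextual detail and to reordered list elements, which matches how the metric is motivated here.

```python
def unigram_overlap(predicted, gold):
    """Fraction of gold-answer unigrams also present in the predicted answer.

    A recall-style sketch (assumed, not the paper's exact definition):
    set-based, case-insensitive, so extra context and element order
    do not lower the score.
    """
    pred_tokens = set(str(predicted).lower().split())
    gold_tokens = set(str(gold).lower().split())
    if not gold_tokens:
        return 1.0
    return len(pred_tokens & gold_tokens) / len(gold_tokens)

# Same information, different order plus extra detail -> full overlap
score = unigram_overlap("grimani corner morosini (3 owners)",
                        "corner grimani morosini")  # → 1.0
```

Exact match, by contrast, would count the reordered answer above as wrong, which is why the two metrics diverge.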
In the zero-shot scenario, the SQL Agent achieves 52% exact match accuracy and 86% unigram overlap. With three-shot prompting, where the model is provided with three example question-query pairs, these metrics improve significantly to 79% and 92%, respectively, illustrating the effectiveness of in-context learning. Despite these strengths, some limitations remain. The system occasionally misinterprets output requirements, such as returning a detailed list instead of an aggregated count for questions like “How many owners receive more than 100 ducati in total rent income?” Similarly, it provides disaggregated results rather than sums for tasks like “What is the total rent income of the top 5 earners?” Complex analytical tasks, such as calculating the “average rent income variance across all locations” or the “share of income from properties labeled as ‘bottega da fabro’,” also pose challenges.
Overall, these results confirm the SQL Agent’s capability to reliably convert natural language queries into executable SQL, enabling effective exploration of historical cadastral data. The high accuracy and minimal errors highlight the utility of text-to-SQL models for idiographic data browsing, while areas requiring further refinement are primarily related to advanced aggregations and formatting nuances.
Table 3. Performance of CodeS-7B on the browsing tasks. Exact match and unigram overlap scores for 0-shot and 3-shot settings, with zero SQL runtime errors.
# 6 Prompting cadastre data with text-to-Python
# 6.1 Related work
The challenge of analyzing historical cadastral data through natural language queries requires bridging several technical domains: data extraction, query interpretation, and automated analysis. Recent advances in AI agents and data-driven discovery provide promising foundations for addressing these challenges. InfiAgent-DABench $^ { 1 7 }$ has established key benchmarks for assessing LLM-based agents in statistical data analysis tasks, particularly highlighting the importance of consistent responses across multiple query iterations. While OpenAgent $^ { 1 8 }$ demonstrates the potential of multitool AI agents through combined Python and SQL functionalities, our preliminary experiments revealed that SQL’s rigid structure can be limiting for the dynamic nature of historical data analysis.
Recent work by Kapoor et al.19 has shifted focus from standardized benchmarks toward optimizing cost-effectiveness and precision in AI agents, informing our approach of limiting debug iterations. The concept of reusable code modules, as demonstrated in CodeChain20, suggests promising directions for generating adaptable solutions across similar historical queries. Our approach shares commonalities with Majumder et al.’s $^ { 2 1 }$ system in operating across multiple datasets to validate hypotheses, though we extend this to specifically address historical-contemporary comparisons. The capabilities demonstrated by KwaiAgents $^ { 2 2 }$ in performing on-the-fly computations point to future possibilities for incorporating supplementary historical data sources.
Building on these foundations, we present a text-to-Python agent specifically designed for historical cadastral analysis. Our system uniquely combines entity extraction, automated planning, and code generation to bridge historical and contemporary urban data sources. The following subsections detail our method’s implementation and evaluation.
# 6.2 Method
Questions and data specifications. As discussed in Section 3, the prompting questions fall into four categories: spatial, functional, personal, and temporal. Spatial questions involve locating and organizing datapoints from the Catastici (1740) and Sommarioni (1808) datasets in relation to landmarks derived from the OpenStreetMap-based Landmarks dataset (refer to Figure 5). Temporal questions compare and identify patterns between these datasets. The text-to-Python agent integrates these three datasets for analysis (Figure 5). Critically, this integration enables us to bridge past and present urban landscapes by leveraging semi-persistent landmarks like churches and squares from OpenStreetMap as spatial anchors. While these structures may have evolved physically, their maintained locations and social functions enable meaningful contextualization of historical cadastral data within contemporary geographic frameworks.
Model. The text-to-Python agent functions as a dialogue among three specialized agents: the entity extractor, the planner, and the coder. Each agent is an LLM utilizing specialized prompts (detailed in Supplementary Section A.4). These agents process inputs and produce outputs to guide one another. The overall information flow between agents is managed using Langchain $^ { 2 3 }$ and is illustrated in Figure 5. The following paragraphs detail the implementation and roles of each agent.
Figure 5. The text-to-Python agent. The agent receives a question and consults different datasets to 1) extract the entities being referred to; 2) creates a plan to answer it; and 3) produces and runs a python script to generate an answer.
Entity Extractor. Effective analysis begins with aligning user queries to dataset content. Broad natural language questions often fail to produce precise results due to ambiguous contextual alignment. To address this, a Retrieval-Augmented Generation (RAG) approach $^ { 2 4 }$ integrates dataset values into prompts. For tabular data, our method employs a tailored Entity Extraction phase, which identifies relevant rows and columns to enhance LLM input.
"In which parish do lawyers own the most buildings in 1740?"
Figure 6. The Entity Extractor phase. Given a question, in this phase, we extract the most relevant rows from the datasets.
Figure 6 illustrates our system. Upon receiving a query, the system first autonomously selects the appropriate dataset for entity extraction. The "Column Extractor" then aligns the question phrases with corresponding dataset columns, forwarding them to the "Row Extractor". The "Row Extractor" discerns whether a phrase denotes a specific term within the column or refers generally to the column. The former undergoes "Entity Search", involving exact matching, fuzzy matching, and semantic search to locate the term within the dataset. Exact matching identifies precise matches (e.g., "avocato" in "Professions"), fuzzy matching finds minor variations (e.g., "avvocato"), while semantic search captures conceptually related terms (e.g., "procuratore"). The three-tier search approach addresses queries that target specific dataset terms. While semantic search identifies these, it may also return unintended terms and is resource-intensive; exact and fuzzy matching are thus preferred for their specificity and efficiency.
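The three-tier cascade can be sketched with the standard library. The first two tiers (exact and `difflib`-based fuzzy matching) are faithful to the description; the semantic tier is stubbed here with a hand-written synonym map, whereas the real system uses a proper semantic search.

```python
import difflib

def entity_search(term, column_values, synonyms=None):
    """Three-tier lookup: exact match, then fuzzy match, then 'semantic'.

    The semantic tier is stubbed with a synonym map to keep the sketch
    self-contained; the actual system performs real semantic search.
    """
    # Tier 1: exact matching
    if term in column_values:
        return term, "exact"
    # Tier 2: fuzzy matching for minor orthographic variations
    close = difflib.get_close_matches(term, column_values, n=1, cutoff=0.8)
    if close:
        return close[0], "fuzzy"
    # Tier 3: semantic search (stubbed with a synonym map)
    for candidate in (synonyms or {}).get(term, []):
        if candidate in column_values:
            return candidate, "semantic"
    return None, "not_found"

professions = ["avocato", "procuratore", "fabro"]
hit_exact = entity_search("avocato", professions)
hit_fuzzy = entity_search("avvocato", professions)
hit_sem = entity_search("lawyer", professions,
                        synonyms={"lawyer": ["procuratore"]})
```

The cascade order encodes the efficiency argument above: cheap, high-precision tiers run first, and the expensive semantic tier is only a fallback.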
Code Generation. Inspired by the Plan-and-Solve$^{25}$ approach, code generation involves two agents: the planner and the coder. The planner creates a detailed solution framework based on the metadata of the dataset, the extracted entities, and the query mappings. The coder then translates the plan into executable Python code. An executor runs the code, returning error messages for iterative debugging if needed. After a fixed number of retries, unresolved queries are marked as unanswerable.
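The coder/executor control flow with a fixed debug budget can be sketched as follows; the LLM call is replaced by a stub, and the retry count of 3 is an illustrative assumption, not the paper's setting.

```python
MAX_RETRIES = 3  # illustrative debug budget; the paper's exact value is not given

def run_with_retries(generate_code, max_retries=MAX_RETRIES):
    """Coder/executor loop: run generated code, feed errors back, retry.

    `generate_code(feedback)` stands in for the planner+coder LLM call;
    it receives the last error message (or None) and returns Python source
    that is expected to define a variable named `answer`.
    """
    feedback = None
    for _ in range(max_retries):
        source = generate_code(feedback)
        scope = {}
        try:
            exec(source, scope)          # the executor step
            return scope.get("answer")
        except Exception as err:         # hand the error back for debugging
            feedback = str(err)
    return "unanswerable"                # budget exhausted

# Stub "LLM": the first attempt has a bug, the retry fixes it.
attempts = iter(["answer = 1 / 0", "answer = sum([120, 60])"])
result = run_with_retries(lambda fb: next(attempts))  # → 180
```

Capping the retries is what keeps the agent's cost bounded, in line with the cost-effectiveness concerns discussed in Section 6.1.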
Consistency Measures. We evaluate the consistency of the system along the dimension of Execution Consistency (EC), which measures the stability of the system’s responses in three runs with different random seeds.
# 6.3 Results
Execution Consistency Analysis. Figure 7 illustrates the execution consistency (EC) of answers across three independent runs, analyzed by question category and answer type. We define EC-3 as perfect consistency (identical results across all three runs) and EC-2 as partial consistency (identical results in two out of three runs).
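The EC-3/EC-2 classification over three seeded runs reduces to counting the most frequent answer:

```python
from collections import Counter

def execution_consistency(answers):
    """Classify three runs' answers as EC-3, EC-2, or inconsistent.

    answers: the three results produced with different random seeds;
    stringified so numeric and textual answers compare uniformly.
    """
    top = Counter(map(str, answers)).most_common(1)[0][1]
    if top == 3:
        return "EC-3"   # identical across all three runs
    if top == 2:
        return "EC-2"   # identical in two out of three runs
    return "inconsistent"

ec_all = execution_consistency([42, 42, 42])      # → 'EC-3'
ec_two = execution_consistency([42, 42, 41])      # → 'EC-2'
ec_none = execution_consistency(["a", "b", "c"])  # → 'inconsistent'
```

Stringifying before counting is an assumption of this sketch; answers such as entity lists may need order-insensitive comparison instead.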
As shown in Figure 7(a), questions about personal information and property functions achieved higher EC scores compared to spatial and comparison queries. Personal queries reached approximately $95\%$ consistency at EC-3, while spatial and comparison queries showed lower consistency at around $85\%$ and $80\%$, respectively. This pattern suggests that straightforward queries about specific entities are more reliably processed than those requiring complex spatial reasoning or comparative analysis.
Figure 7(b) demonstrates that answer format significantly influences consistency levels. Yes/no responses and single numerical answers demonstrated higher consistency (approximately $90\%$ at EC-3) compared to responses requiring entity name extraction (approximately $60\%$ at EC-3). This variation in consistency may be attributed to the more deterministic nature of numerical and binary operations, whereas entity name responses often require multiple processing steps and complex data filtering that can introduce variability across runs.
Our manual verification of EC-3 responses revealed encouraging results: only 12 out of 79 consistent answers contained errors, suggesting that execution consistency serves as a reliable indicator of answer quality. This $15.2\%$ error rate in consistently generated responses indicates that while consistency doesn’t guarantee accuracy, it strongly correlates with correct results.
Qualitative analysis of errors. A key challenge identified pertains to extraction errors resulting from the ambiguous and evolving nature of language. For instance, the terms 'commercial buildings' or 'shops' are inherently ambiguous and, within the dataset, have been erroneously matched to the Venetian terms 'magazzeno' or 'locale', which typically refer to a garage rather than a commercial space. This lack of specificity further led to imprecise associations, such as the inclusion of the generic term 'calle' when generating a response to the question, “Which square has the largest number of commercial buildings within 100 meters in the dataset in 1740?”
Similarly, diachronic linguistic variations posed significant difficulties, a challenge well-documented in computational historical linguistics26. In response to the question, “How many buildings changed from residential to commercial use between 1740 and 1808?”, the system searched for terms such as negozio and ufficio, which are later linguistic evolutions of the historical term bottega. However, these diachronic equivalents are not explicitly mentioned in the cadastre, further complicating the accuracy of the generated response.
Figure 7. Execution Consistency, grouped by (a) categories and (b) answer types.
Manual verification of all system responses confirmed these patterns of linguistic and contextual inaccuracies, highlighting the need for more sophisticated historical context handling in AI systems. For a detailed analysis of the system’s key data operations and processing methods, see supplementary section B.2. Successful and failing execution traces of the text-to-Python agent are also available in supplementary section B.3.
# 7 Discussion
Our experimental results reveal several key insights about the application of LLMs for historical cadastral analysis. First, when comparing the performance of specialized SQL and general-purpose Python agents on structured queries, we find remarkably similar capabilities, with a unigram overlap score of 0.85 for the text-to-Python agent on the questions from Section 5. This suggests that the Python-based approach, while more versatile, does not compromise accuracy on straightforward cadastral queries, offering a potential "one-size-fits-all" solution for various analytical needs.
A fundamental advantage of our text-to-program framework lies in its inherent interpretability and verifiability. By generating executable code rather than direct natural language responses, the system produces solutions that can be traced back to source data, effectively minimizing the risk of hallucinations that often plague LLM applications. Each query result is anchored in specific cadastral records, allowing researchers to verify the exact data points contributing to any conclusion. Furthermore, the generated programs serve as transparent reasoning traces, where analytical assumptions and methodological choices are explicitly encoded in the syntax and logic of the code.

Our comparative analysis between GPT-4$^{27}$ and LLama-70B$^{28}$, detailed in Supplementary Section B.1, reveals a substantial performance gap between closed-source and open-source models. This difference highlights the current limitations of freely available models for complex historical analysis, though rapid advances in open-source LLMs suggest this gap may narrow in the future.

A notable strength of our approach is its city-agnostic nature. While demonstrated on Venetian cadastres, the framework can be readily adapted to any urban area with digitized historical records, requiring only minimal adjustments to accommodate different data structures and local contextual requirements.
However, several limitations must be considered. First, the interpretability of our system presupposes familiarity with programming languages, potentially limiting accessibility for traditional historical researchers. Future work could address this by incorporating an additional layer that translates code into natural language explanations, making the reasoning process as well as the hypotheses made by the system more accessible to non-technical users. Second, our results reveal challenges in handling diachronic language variations (see Section 6.3). This suggests the need for time-aware models with adapted retrieval-augmented generation (RAG) mechanisms capable of processing historical linguistic variations, especially for cases involving Venetian dialect and other historical languages. Third, while our use of semi-permanent urban anchors (such as churches and major squares) effectively grounds historical analysis in modern spatial references, this approach requires careful consideration. One must remain aware of the potential anachronisms and oversimplifications this method might introduce. More broadly, this highlights the importance of understanding the underlying assumptions and limitations of the datasets being analyzed, as these factors significantly influence the interpretation of results generated by the system.
These limitations point to promising directions for future research, including the development of more accessible interpretation tools, enhanced historical language processing capabilities, and more nuanced approaches to temporal-spatial mapping. Despite these challenges, our framework demonstrates the potential of LLM-based approaches to revolutionize historical urban research while maintaining rigorous academic standards through verifiable and interpretable results.
# References
1. Kain, R. J. & Baigent, E. Cadastral Map in the Service of the State (University of Chicago Press, Chicago, 1992). 1
2. Bloch, M., Aakjar, S., Hall, H., Tawney, A.-H. & Vogel, W. Les plans parcellaires : Allemagne, Angleterre, Danemark, France. Annales 1, 60–70, DOI: 10.3406/ahess.1929.1039 (1929). Publisher: Persée - Portail des revues scientifiques en SHS. 1
3. Bourguet, M.-N. & Blum, A. Déchiffrer la France : La statistique départementale à l’époque napoléonienne (Editions des archives contemporaines, Paris, 1988). 1
4. Clergeot, P. Cent millions de parcelles en France. 1807 – Un cadastre pour l’empire (Paris, 2007). 1
5. Pavanello, I. I Catasti storici di Venezia, 1808-1913 (Officina, 1981). Google-Books-ID: r_jsAAAAMAAJ. 2
6. Clergeot, P. Le recueil méthodique de 1811. In Bourillon, F. & Vivier, N. (eds.) De l’estime au cadastre en Europe. Les systèmes cadastraux aux XIXe et XXe siècles, Histoire économique et financière - XIXe-XXe, 167–173, DOI: 10.4000/books.igpde.10963 (Institut de la gestion publique et du développement économique, Vincennes, 2008). Code: De l’estime au cadastre en Europe. Les systèmes cadastraux aux XIXe et XXe siècles. 2
7. Di Lenardo, I., Barman, R., Pardini, F. & Kaplan, F. Une approche computationnelle du cadastre napoléonien de Venise (2021). 3
8. Chauvard, J.-F. Les catastici vénitiens de l’époque moderne. Pratique administrative et connaissance du territoire. In Touzery, M. (ed.) De l’estime au cadastre en Europe. L’époque moderne, Histoire économique et financière - Ancien Régime, 419–454, DOI: 10.4000/books.igpde.9768 (Institut de la gestion publique et du développement économique, Vincennes, 2007). Code: De l’estime au cadastre en Europe. L’époque moderne. 3
9. Ares Oliveira, S., Kaplan, F. & di Lenardo, I. Machine Vision Algorithms On Cadaster Plans. In Digital Humanities Conference in Montreal (Montreal, 2017). 3
10. Ares Oliveira, S., di Lenardo, I., Tourenc, B. & Kaplan, F. A Deep Learning Approach to cadastral computing (Utrecht, 2019). 3
11. OpenStreetMap contributors. Planet dump retrieved from https://planet.osm.org. https://www.openstreetmap.org (2017). 5
12. Du, L. et al. Tabularnet: A neural network architecture for understanding semantic structures of tabular data. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD ’21, 322–331, DOI: 10.1145/3447548.3467228 (Association for Computing Machinery, New York, NY, USA, 2021). 6
13. Yang, J. et al. TableFormer: Robust transformer modeling for table-text encoding. In Muresan, S., Nakov, P. & Villavicencio, A. (eds.) Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 528–537, DOI: 10.18653/v1/2022.acl-long.40 (Association for Computational Linguistics, Dublin, Ireland, 2022). 6
14. Hajiramezanali, E., Diamant, N. L., Scalia, G. & Shen, M. W. STab: Self-supervised learning for tabular data. In NeurIPS 2022 First Table Representation Workshop (2022). 6
15. Zhang, H. et al. Mixed-type tabular data synthesis with score-based diffusion in latent space. In The Twelfth International Conference on Learning Representations (2024). 6
16. Li, H. et al. CodeS: Towards Building Open-source Language Models for Text-to-SQL, DOI: 10.48550/arXiv.2402.16347 (2024). ArXiv:2402.16347 [cs]. 7
17. Hu, X. et al. InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks, DOI: 10.48550/ arXiv.2401.05507 (2024). ArXiv:2401.05507 [cs]. 8
18. Xie, T. et al. OpenAgents: An Open Platform for Language Agents in the Wild, DOI: 10.48550/arXiv.2310.10634 (2023). ArXiv:2310.10634 [cs]. 8
19. Kapoor, S., Stroebl, B., Siegel, Z. S., Nadgir, N. & Narayanan, A. AI Agents That Matter (2024). ArXiv:2407.01502 [cs]. 8
20. Le, H. et al. CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules (2024). ArXiv:2310.08992 [cs]. 8
21. Majumder, B. P. et al. Data-driven Discovery with Large Generative Models (2024). ArXiv:2402.13610 [cs]. 8
22. Pan, H. et al. KwaiAgents: Generalized Information-seeking Agent System with Large Language Models, DOI: 10.48550/arXiv.2312.04889 (2024). ArXiv:2312.04889 [cs]. 8
23. Chase, H. Langchain. https://github.com/langchain-ai/langchain (2022). Released on 2022-10-17. 9
24. Lewis, P. et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS ’20 (Curran Associates Inc., Red Hook, NY, USA, 2020). 9
25. Wang, L. et al. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. In Rogers, A., Boyd-Graber, J. & Okazaki, N. (eds.) Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2609–2634, DOI: 10.18653/v1/2023.acl-long.147 (Association for Computational Linguistics, Toronto, Canada, 2023). 10
26. Degaetano-Ortlieb, S. & Teich, E. Information-based modeling of diachronic linguistic change: from typicality to productivity. In LaTeCH@ACL (2016). 11
27. OpenAI et al. Gpt-4 technical report (2024). 2303.08774. 11
28. Grattafiori, A. et al. The llama 3 herd of models (2024). 2407.21783. 11
# A Additional Methods
# A.1 Questions for Browsing the Catastici 1740
# A.2 SQL CodeS-7B Prompts
# Prompt for the CodeS Model
# 3-shot template
database schema:
table catastici, columns = [ catastici.ID ( integer ), catastici.Owner_ID ( integer ), catastici.Owner_First_Name ( text ), catastici.Owner_Family_Name ( text ), catastici.Property_Type ( text ), catastici.Rent_Income ( integer ), catastici.Property_Location ( text ) ]
column info:
ID – Primary key ; Owner_ID – Unique ID of each owner of the property ; Owner_First_Name – First name of the owner of the property ;
Owner_Family_Name – Family name of the owner of the property ;
Property_Type – Specific type of the property given in Italian ; Rent_Income – Rent price of the property that the owner receives as income, given in Venice ancient gold coin ducato ; Property_Location – Ancient approximate toponym of the property given in Italian
primary key : catastici.ID
question:
How many properties are there with the type of "casa"?
database schema / column info / primary key / question 1 / SQL Query 1
database schema / column info / primary key / question 2 / SQL Query 2
database schema / column info / primary key / question 3 / SQL Query 3
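The few-shot prompt above pairs a schema description with a target SQL query. As an illustration only (toy in-memory rows, not the historical records), the example question about properties of type "casa" maps to a simple aggregate query over the `catastici` table:

```python
import sqlite3

# Hypothetical in-memory instance of the `catastici` table from the prompt's
# database schema, populated with invented toy rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE catastici (
    ID INTEGER PRIMARY KEY, Owner_ID INTEGER,
    Owner_First_Name TEXT, Owner_Family_Name TEXT,
    Property_Type TEXT, Rent_Income INTEGER, Property_Location TEXT)""")
conn.executemany(
    "INSERT INTO catastici VALUES (?, ?, ?, ?, ?, ?, ?)",
    [(1, 10, "Marco", "Rossi", "casa", 120, "San Polo"),
     (2, 11, "Anna", "Bianchi", "bottega", 80, "Rialto"),
     (3, 12, "Luca", "Verdi", "casa", 95, "Castello")])

# The SQL a text-to-SQL model would be expected to emit for the example question.
query = "SELECT COUNT(*) FROM catastici WHERE Property_Type = 'casa'"
print(conn.execute(query).fetchone()[0])  # → 2
```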
# A.3 Text-to-Python Implementation Details
The system uses the OpenAI API; generation hyperparameters (top_p, temperature, ...) are left at their defaults and used with a random seed. The system is implemented using the LangChain framework, as it allows the flexibility to implement the interaction between the agents.
# A.4 Text-to-Python Agent prompts
This section contains all the prompts used at each step of the Text-to-Python agent’s process. The first two prompts serve as system prompts, provided to the agent along with a description of the datasets as part of its context. The subsequent prompts align with the agent’s workflow, as shown in Figure 5. The process begins with extracting references, followed by entity extraction. Next, the agent generates a plan and subsequently writes the necessary code. If errors occur during code execution, the agent debugs the code based on the Python console’s error messages. In the following prompts, all inputs used to generate prompts are highlighted in blue. In-context examples are given for reference and entity extraction.
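The workflow just described (reference extraction, entity extraction, planning, coding, and debugging on error) can be sketched as a plain-Python control loop. All names here (`run_agent`, `fake_llm`, `fake_execute`) are illustrative stand-ins, not the paper's LangChain implementation:

```python
def run_agent(question, llm, execute, max_debug_rounds=3):
    """Illustrative orchestration of the Text-to-Python workflow; `llm` and
    `execute` are stand-ins for the real model and a sandboxed interpreter."""
    references = llm("extract_references", question=question)
    entities = llm("extract_entities", question=question, references=references)
    plan = llm("plan", question=question, references=references, entities=entities)
    code = llm("write_code", question=question, plan=plan)
    for _ in range(max_debug_rounds):
        ok, output = execute(code)
        if ok:
            return output
        # Feed the console error back, as the debug prompt describes.
        code = llm("debug", question=question, plan=plan,
                   code=code, error_message=output)
    return None

# Toy stubs: the "LLM" records which step it was asked for, and the
# "executor" fails once before succeeding, exercising the debug branch.
calls = []
def fake_llm(step, **kwargs):
    calls.append(step)
    return f"<{step}>"

attempts = {"n": 0}
def fake_execute(code):
    attempts["n"] += 1
    return (attempts["n"] > 1, "NameError" if attempts["n"] == 1 else "[[42]]")

print(run_agent("How many houses?", fake_llm, fake_execute))  # → [[42]]
```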
# Analysis System prompt
You are an expert historian. You are working with 3 datasets, one detailing buildings in Venice from 1740, another one detailing buildings in Venice from 1808 and the last one listing landmarks such as churches and squares in Venice. In the Buildings datasets (1st and 2nd datasets), each row refers to a separate building, while in the Landmarks dataset (3rd dataset), each row refers to a separate landmark.
# Python System prompt
"You are a highly skilled Python developer with expertise in data analysis. You are working with 3 datasets, one detailing buildings in Venice from 1740, another one detailing buildings in Venice from 1808 and the last one listing landmarks such as churches and squares in Venice. In the Buildings datasets (1st and 2nd datasets), each row refers to a separate building, while in the Landmarks dataset (3rd dataset), each row refers to a separate landmark.
# Python reference extraction prompt (prompt inputs are in {blue})
Given a question, you need to match the phrases in the question with the columns in the dataset if applicable. Only focus on the phrases that refer to one or more columns in any of the above datasets. If none of the phrases refer to a specific dataset column, return an empty list. If the question only asks about 1740, phrases should be matched to column(s) in dataset 1. If the question only asks about 1808, phrases should be matched to column(s) in dataset 2. If the question asks about both datasets, phrases can be matched to column(s) in both datasets 1 and 2. Your output should be in the format [(detected_phrase_1, column_name_1, dataset_number_1), (detected_phrase_2, column_name_2, dataset_number_2), ...]. Note that the same phrase could correspond to a column that exists in more than one dataset. Note that if a phrase refers to more than one column in a single dataset, consider each column name separately. Note that every row is about a separate building. When the question is about a building / buildings, it is referring to the whole dataset, and not a specific column.
For example: If the question is "Which squares are surrounded by the most diverse set of
building functions from 1740?", output [("squares", "landmark_type", 3), ("building functions", "building_functions", 1)], since "squares" corresponds to the "landmark_type" column in the landmarks dataset (3rd dataset), and the information about "building functions" can be found in the column "building_functions", and the question is asking about the time 1740, thus dataset 1.
Examples:
Question: "What is the average distance to the nearest square?" Output: [("square", "landmark_type", 3)]
Question: "How many houses are located near Santa Maria della Salute in 1740?" Output: [("houses", "building_functions", 1), ("Santa Maria della Salute", "landmark_name", 3)]
Question: "What is the average rent price of workshops in San Polo in 1808?" Output: [("rent price", "rent_price", 2), ("workshops", "building_functions", 2), ("San Polo", "district", 2)]
Question: "How many families present in Venice in 1740 still exist in the 1808?" Output: [("families", "owner_family_name", 1), ("families", "owner_family_name", 2)]
Question: "How many people live in Venice in 1808?" Output: [("people", "owner_first_name", 2), ("people", "owner_family_name", 2)]
Please match the relevant phrases with their corresponding column names for the following question and respond, in a natural language, in the format [(detected_phrase, column_name, dataset_number)]. Question: {question}
Let’s think step by step:
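Because the prompt asks the model to reason step by step before emitting the `[(detected_phrase, column_name, dataset_number)]` list, the caller must recover the triples from a free-form reply. A hypothetical tolerant parser (illustrative, not part of the described system) might be:

```python
import re

# Matches one ("phrase", "column", N) triple anywhere in the reply.
TRIPLE = re.compile(r'\(\s*"([^"]+)"\s*,\s*"([^"]+)"\s*,\s*(\d+)\s*\)')

def parse_references(llm_output: str):
    """Pull (phrase, column, dataset_number) triples out of a free-form
    model response, tolerating surrounding reasoning text."""
    return [(p, c, int(n)) for p, c, n in TRIPLE.findall(llm_output)]

reply = ('The question mentions squares and building functions, so: '
         '[("squares", "landmark_type", 3), '
         '("building functions", "building_functions", 1)]')
print(parse_references(reply))
```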
# Python entity extraction prompt (prompt inputs are in {blue})
You are given a mapping between a phrase and a column of a dataset. Your task is to hypothesise if the given phrase could correspond to a specific value in the matching column depending on the definition and data type of what should be given in the columns.
Respond [[True]] if you think the phrase may correspond to one or more specific values in the corresponding column.
Respond [[False]] if you think the phrase is just referring to the corresponding column in general, and not to any specific value. Note that each dataset is referred to by its number.
For example: If the matching is ("squares", "landmark_type", 3), respond [[True]] as "squares" is a specific value that should be found in the column "landmark_type". If the matching is ("building functions", "building_functions", 1), respond [[False]], as "building functions" just refers to "building_functions" column in general, and is not a specific value we are looking for. Give your answer between [[]], for example [[True]] or [[False]]
# Examples:
Mapping: [("square", "landmark_type", 3)] Output: [[True]]
Mapping: [("Santa Maria della Salute", "landmark_name", 3)] Output: [[True]]
Mapping: [("workshops", "building_functions", 2)] Output: [[True]]
Mapping: [("families", "owner_family_name", 1)] Output: [[False]]
Mapping: [("near houses", "building_functions", 2)] Output: [[True]]
Mapping: [("people", "owner_family_name", 2)] Output: [[False]]
Please hypothesise, in a natural language, if the given phrase in Mapping may refer to a specific value
in the corresponding column. Respond with [[True]] or [[False]]. Mapping: {reference}
Output:
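Downstream code then needs to read the `[[True]]`/`[[False]]` verdict out of the model's free-form reply; a small hypothetical helper (not part of the described system) could be:

```python
import re

def parse_boolean_verdict(llm_output: str):
    """Extract the [[True]]/[[False]] verdict the entity-extraction prompt
    asks for; returns None when no verdict is present."""
    match = re.search(r"\[\[(True|False)\]\]", llm_output)
    return None if match is None else match.group(1) == "True"

print(parse_boolean_verdict('I believe "squares" names a value, so [[True]]'))  # → True
```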
# Python plan prompt (prompt inputs are in {blue})
Instruction:
First understand the problem, and provide a step-by-step data analysis plan, only in natural language, to answer the question using the provided datasets. Be as clear and explicit as possible in your instructions.
You are given:
- Question
- Extracted Information of Entities: This contains the dataset and the column that the entity matches to, and the corresponding exact matches found in the dataset
- References to Corresponding Dataset and Column: This contains phrases found in the question linked to the specific dataset and column
- Expected Answer Format: yes/no or numerical or a single textual entity name
Requirements:
- The final answer should be in the format of {answer_format}.
- Use the provided entity information and datasets.
- If any of the entity information or references is meaningless, ignore it.
Question: {question}
Extracted Information of Entities: {entities}
References to Corresponding Dataset and Column: {references}
Step by Step Plan in Natural Language:
# Python code prompt (prompt inputs are in {blue})
Instruction:
Your task is to generate Python code based on the provided detailed plan to answer the given question using the provided datasets.
Requirements:
- Use the necessary libraries for data analysis in Python (e.g., pandas, numpy).
- The code should be well-structured, complete, and intended to be executed as a whole.
- Write your code in the most computationally efficient way
- Include all code in a single code block.
- Give your final answer in the format of {answer_format}.
- End your code by printing only the final answer strictly following this format: "[[final_answer]]", for example: print(f"The answer is: [[final_answer]]")
- Never use ‘exit()‘ function.
Question: {question}
Step-by-Step Plan: {plan}
Python Code:
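Since the prompt forces the generated program to print its result in the `[[final_answer]]` format, the agent can recover the answer from captured stdout with a simple pattern match. The helper name below is illustrative, not the paper's code:

```python
import re

def extract_final_answer(stdout: str):
    """Recover the answer the prompt forces the generated program to print
    as "[[final_answer]]"; returns None if the marker is absent.
    Takes the last match, in case earlier prints also used brackets."""
    matches = re.findall(r"\[\[(.+?)\]\]", stdout)
    return matches[-1] if matches else None

print(extract_final_answer("The answer is: [[1.6666666666666643]]"))  # → 1.6666666666666643
```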
# Python debug prompt (prompt inputs are in {blue})
Instruction: Debug and rewrite the provided Python code. The code follows the given plan to answer the given question using the given datasets, but it contains an error. Based on the error message, could you correct the code and provide a revised version?
You are given:
- Question
- Extracted Information of Entities: This contains the dataset and the column that the entity matches
to, and the corresponding exact matches found in the dataset
- References to Corresponding Dataset and Column: This contains phrases found in the question linked to
the specific dataset and column
- A detailed plan to write Python code that answers the question
- Incorrect python code that raises an error
- Corresponding error message
Requirements:
- If any of the entity information or references is meaningless, ignore it.
- Use the necessary libraries for data analysis in Python (e.g., pandas, numpy).
- The code should be well-structured, complete and intended to be executed as a whole.
- Write your code in the most computationally efficient way
- All of your code should be included in a single code block.
- Give your final answer in the format of {answer_format}.
- End your code by printing only the final answer strictly following this format: "[[final_answer]]", for example: print(f"The answer is: [[final_answer]]")
- Never use ‘exit()‘ function.
Question: {question}
Extracted Information of Entities: {entities}
References to Corresponding Dataset and Column: {references}
Step by Step Plan: {plan}
Incorrect Python Code: {code}
Error Message: {error_message}
Corrected Python Code:
# B Additional Results
# B.1 Closed vs Open Source model
Figure 8. Performance comparison between GPT-4o and Llama-3-70B
# B.2 Qualitative analysis of data operations
Table 4 illustrates how a language model agent systematically converts complex natural language questions into actionable data analysis operations. Each question is deconstructed into key entities and references, mapping natural language elements (e.g., “rent price,” “square”) to corresponding dataset columns and types. Subsequently, the agent determines the appropriate analytical procedure, such as correlation analysis, filtering, or aggregation tailored to the query. For example, a question about the correlation between rent price and proximity to squares in 1740 is translated into a Pearson correlation calculation. Similarly, queries involving temporal comparisons or categorical relationships leverage statistical methods such as chi-square tests or comparative metrics.
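For instance, the Pearson correlation mentioned above reduces to the standard formula; the following toy computation (invented rents and distances, not the cadastral data) illustrates the kind of operation the agent generates code for:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, as used for the rent-price
    vs. distance-to-square question."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rents (ducati) and distances to the nearest square (metres):
# closer buildings rent for more, so the correlation is strongly negative.
rents = [120, 95, 80, 60]
distances = [20, 50, 80, 120]
print(round(pearson(rents, distances), 3))
```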
# B.3 Execution Traces
# B.3.1 Successful Trace
# B.3.2 Failing Trace
Table 4. Analysis of consistent answers. The entity extraction is expressed using the following grammar: "element of question" is mapped to a column in a dataset. The data operations identified for each question are:
- Pearson correlation between rent price and distance to nearest square.
- Uses "parish" as "neighborhood" and finds the parish with the highest average rent price.
- Filters buildings within 100 meters of a church and counts unique people.
- Separates single- and multiple-function buildings, compares average rent prices.
- Counts buildings within 100 meters of squares, outputs the square with the most diverse building functions.
- Filters workshops and calculates their average distance to the nearest square.
- Creates contingency table and uses chi-square test of independence to check for correlation.
- Uses "parish" as "district", counts buildings per parish, outputs the family with most buildings on average.
- Filters dataset for "medico" profession and counts the number of medical doctors.
- Compares the number of multi-function buildings in 1740 and 1808.
- Counts the number of buildings per family in both datasets, outputs the family with the highest increase.
- Counts the number of families present in 1808 but not in 1740.
Question:
On average, how much more expensive are schools near the churches?
Column Extractor: [...]
Row Extractor: [...]
Entity Search: [...]
Planner:
1. **Load the Datasets**: Begin by loading the 1740 Buildings Dataset and the Landmarks Dataset.
2. **Filter for Relevant Buildings**: [...]
Coder:
import pandas as pd
import numpy as np
buildings_1740 = pd.read_csv('data/buildings_1740.csv')
landmarks = pd.read_csv('data/landmarks.csv')
# Compare Average Prices
[...]
# Output the Result
Executor:
The answer is: [[1.6666666666666643]]
Information Extractor:
### Same code as in Coder ###
Executor:
Question: Column Extractor: Row Extractor: | Cadastral data reveal key information about the historical organization of cities but are often non-standardized due to diverse formats and human annotations, complicating large-scale analysis. We explore as a case study Venice's urban history during the critical period from 1740 to 1808, capturing the transition following the fall of the ancient Republic and the Ancien Régime. This era's complex cadastral data, marked by its volume and lack of uniform structure, presents unique challenges that our approach adeptly navigates, enabling us to generate spatial queries that bridge past and present urban landscapes. We present a text-to-programs framework that leverages Large Language Models (LLMs) to translate natural language queries into executable code for processing historical cadastral records. Our methodology implements two complementary techniques: a text-to-SQL approach for handling structured queries about specific cadastral information, and a text-to-Python approach for complex analytical operations requiring custom data manipulation. We propose a taxonomy that classifies historical research questions based on their complexity and analytical requirements, mapping them to the most appropriate technical approach. This framework is supported by an investigation into the execution consistency of the system, alongside a qualitative analysis of the answers it produces. By ensuring interpretability and minimizing hallucination through verifiable program outputs, we demonstrate the system's effectiveness in reconstructing past population information, property features, and spatiotemporal comparisons in Venice. | [
"cs.SE",
"cs.AI"
] |
1 Introduction
1.1 Related Works
# 2 Preliminaries
# 3 Algorithm: Group Bias Adaptation
3.1 Algorithm Motivations
3.2 Implementation Details
# 4 Experiments on Qwen2.5-1.5B LLM
5 Identifiability of Features
5.1 Main Results on Identifiability of Features
5.2 Discussion on Feature Co-occurrence
# 6 Dynamics Analysis: SAE Provably Recovers True Features
6.1 Simplification for Theoretical Analysis
6.2 Main Theorem on Training Dynamics
6.3 Key Conditions for Reliable Feature Recovery
6.3.1 Bias Range: Implications on Target Activation Frequency
6.3.2 Feature Balance and Network Width
# 7 Proof Overview
7.1 Good Initialization with Wide Network
7.2 Pre-activations are Approximately Gaussian
7.3 Weight Decomposition and Concentration under Sparsity
7.4 State Recursion and Convergence | We study the challenge of achieving theoretically grounded feature recovery using Sparse Autoencoders (SAEs) for the interpretation of Large Language Models. Existing SAE training algorithms often lack rigorous mathematical guarantees and suffer from practical limitations such as hyperparameter sensitivity and instability. To address these issues, we first propose a novel statistical framework for the feature recovery problem, which includes a new notion of feature identifiability by modeling polysemantic features as sparse mixtures of underlying monosemantic concepts. Building on this framework, we introduce a new SAE training algorithm based on ``bias adaptation'', a technique that adaptively adjusts neural network bias parameters to ensure appropriate activation sparsity. We theoretically \highlight{prove that this algorithm correctly recovers all monosemantic features} when input data is sampled from our proposed statistical model. Furthermore, we develop an improved empirical variant, Group Bias Adaptation (GBA), and \highlight{demonstrate its superior performance against benchmark methods when applied to LLMs with up to 1.5 billion parameters}. This work represents a foundational step in demystifying SAE training by providing the first SAE algorithm with theoretical recovery guarantees, thereby advancing the development of more transparent and trustworthy AI systems through enhanced mechanistic interpretability. | [
"cs.LG",
"cs.AI",
"cs.IT",
"stat.ML"
] |
# 1 Introduction
Software quality has become a cornerstone of modern software engineering, particularly in the context of large-scale and continuously evolving codebases. As software systems grow in size and complexity, maintaining high standards of quality is essential to ensure reliability, ease of maintenance, and long-term sustainability. Organizations and open-source communities alike depend on quantitative software metrics to gain insights into various aspects of code quality. These metrics serve as vital indicators of a system’s health, helping guide refactoring decisions, prioritize technical debt remediation, and support effective development practices.
In object-oriented programming, internal attributes such as coupling, cohesion, complexity, and inheritance depth play a key role in influencing maintainability and reusability. However, these attributes often change as systems evolve through the addition of new features, bug fixes, performance optimizations, or architectural restructuring. Without proper tracking, such changes may unintentionally lead to code that is more difficult to understand, test, or extend. Therefore, understanding how software metrics evolve over time provides valuable feedback to development teams and project stakeholders.
This paper presents a longitudinal case study on the TestNG framework, a widely adopted open-source Java testing library used for unit, functional, and integration testing. As a mature project with multiple stable releases and active community involvement, TestNG offers an ideal candidate for studying the evolution of object-oriented software metrics. We analyze five historical versions of the framework to trace the trajectory of eleven well-known metrics, including Lines of Code (LOC), Cyclomatic Complexity, Lack of Cohesion (LCOM), Coupling Between Objects (CBO), and Depth of Inheritance Tree (DIT), among others.
Using the static analysis tool Understand, we extracted comprehensive metric data from each version and organized it for statistical and visual analysis. By observing metric trends, conducting Wilcoxon signed-rank tests, and interpreting structural shifts across versions, our study aims to answer the following research questions:
• How have key object-oriented metrics evolved across different versions of TestNG?
• What do these changes reveal about the maintainability and structural design of the codebase?
• Can patterns in metric trends provide actionable insights for improving software quality in similar open-source projects?
This analysis not only contributes to the understanding of how real-world software evolves but also provides developers, maintainers, and researchers with a replicable framework for assessing metric-driven quality trends. The insights derived from this study can inform better development practices and guide future contributions to the TestNG framework or similar systems.
# 2 Related Work
Software metrics have long been used as a foundation for understanding code quality, predicting faults, and guiding refactoring. One of the earliest and most influential contributions in this area comes from Chidamber and Kemerer, who introduced the CK metric suite [1]. This suite includes metrics such as Coupling Between Objects (CBO), Lack of Cohesion in Methods (LCOM), and Depth of Inheritance Tree (DIT), all of which have been widely adopted in both academic research and industrial tools for evaluating object-oriented software design.
Building upon this foundation, Sarkar et al.[2] investigated how complexity and cohesion metrics correlate with maintainability in large-scale systems. Their findings confirm that changes in these metrics over time can reliably signal shifts in system comprehensibility and modularity. Marinescu[3] further extended the application of object-oriented metrics by proposing a rule-based approach to identify design flaws, such as God Classes or Feature Envy, based on metric thresholds. This highlights how tracking metric evolution can provide early warnings of architectural degradation.
Zimmermann et al. [4] focused on using historical metric data to predict fault-prone components, demonstrating the power of longitudinal analysis for quality assurance. Their work supports the idea that mining historical metric trends can help identify modules at risk before they become problematic.
Despite the significant body of work applying software metrics to various domains, few studies have explored their evolution in testing frameworks like TestNG. Given that such tools are essential for automated testing and continuous integration, understanding their internal structural changes is vital. Our work addresses this gap by conducting a focused metric-based analysis of TestNG across five major versions, contributing both methodology and findings that can be adapted to similar open-source test frameworks.
# 3 Methodology
This study employed static code analysis using the Understand tool to extract eleven object-oriented metrics from five historical versions of the TestNG framework. The data was cleaned, visualized using Excel, and statistically analyzed using the Wilcoxon Signed-Rank Test to assess significant changes in software quality over time.
# 3.1 Project Selection and Versioning
This study focuses on the evolution of software quality within the open-source Java testing framework TestNG. Five representative versions were selected across the project’s timeline to capture architectural changes and growth in codebase complexity:
• v5.13
• v6.0.1
• v6.13.1
• v7.5
• v7.11.0
Evolution Analysis of Software Quality Metrics in an Open-Source Java Project: TestNG
The source code for all versions was obtained from the official GitHub repository. The dataset, analysis notebooks, and extracted metrics are also available on Zenodo for reproducibility.
# 3.2 Metric Extraction Using Understand
To measure object-oriented design quality, we used the static analysis tool Understand. This tool provides a comprehensive suite of software metrics derived from static code analysis. Metrics were extracted through the “Metrics” tab for each TestNG version and exported into structured CSV files. These were compiled into an Excel spreadsheet and shared on the dataset exposed in Zenodo, which includes aggregated metric data for all five versions.
# Metrics Analyzed
The analysis focuses on eleven well-established object-oriented software metrics that reflect key design principles such as size, complexity, coupling, cohesion, and inheritance. Table 1 outlines the metrics and their significance:
Table 1: Software Quality Metrics Extracted Using Understand Tool
# 3.3 Data Cleaning and Visualization
After extracting the raw metric data, Microsoft Excel was used for data preprocessing and visualization. For the Lines of Code (LOC) metric, boxplots were generated to visualize the distribution, highlight outliers, and track trends across the five TestNG versions. To reduce the influence of extreme values, cleaned boxplots were also generated by trimming the data to the bottom 90th percentile.
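The trimming step can be expressed as a simple percentile filter. The sketch below uses a nearest-rank percentile and invented LOC values; the study itself performed this step in Excel:

```python
def trim_to_percentile(values, pct=0.9):
    """Keep only observations at or below the given empirical percentile,
    mirroring the "bottom 90th percentile" cleaning step (nearest-rank
    percentile; illustrative only)."""
    ordered = sorted(values)
    cutoff = ordered[int(pct * (len(ordered) - 1))]
    return [v for v in values if v <= cutoff]

# Hypothetical per-class LOC values; 2400 plays the role of an outlier class.
loc = [40, 55, 62, 70, 75, 80, 90, 110, 130, 2400]
print(trim_to_percentile(loc))  # → [40, 55, 62, 70, 75, 80, 90, 110, 130]
```

Trimming before plotting keeps the boxplot whiskers readable without discarding the trend information carried by the bulk of the classes.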
# Visualizations for LOC:
For the remaining metrics, standard line graphs were used to visualize trends over time. These graphs allowed for easy comparison of metric evolution across versions.
# 3.4 Statistical Testing with Wilcoxon Signed-Rank Test
To determine whether observed changes in metrics across versions were statistically significant, the Wilcoxon Signed-Rank Test was applied. This non-parametric test is suitable for comparing paired samples, especially when the data does not follow a normal distribution.
Before applying the test, the metric data was filtered and organized using Excel PivotTables. This helped remove null values and ensure consistency in class-level comparisons across versions. Conditional formatting and color-coding were employed to identify and clean missing or uniform data points.
The Wilcoxon Signed-Rank Test evaluates whether the median differences between paired observations are zero. Let $X _ { i }$ and $Y _ { i }$ be matched metric values for a class in two versions, and $D _ { i } = X _ { i } - Y _ { i }$ the difference. The test ranks the
absolute values of $D _ { i }$ and sums the ranks for positive and negative differences separately. The test statistic $W$ is the smaller of these two sums:
$$
W = \min\left( \sum_{\text{positive } D_i} R_i ,\; \sum_{\text{negative } D_i} R_i \right)
$$
where $R_i$ is the rank of $|D_i|$. A low value of $W$ indicates a statistically significant difference between the two distributions.
The Wilcoxon test was applied across successive versions for each metric. In cases where metric values were constant or missing (e.g., a metric remained unchanged across classes), the test returned NaN, indicating either data limitations or invariance in software structure.
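The ranking procedure described above can be sketched directly in Python. This is an illustrative implementation (zero differences dropped, tied $|D_i|$ given average ranks, NaN returned for constant data, as in the cases noted above); the paired per-class values are hypothetical.

```python
def wilcoxon_w(x, y):
    """Signed-rank statistic W = min(sum of ranks of positive D_i,
    sum of ranks of negative D_i)."""
    d = [xi - yi for xi, yi in zip(x, y) if xi != yi]  # drop zero differences
    if not d:
        return float("nan")  # constant/unchanged data: test undefined
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1  # extend tie group of equal |D_i|
        avg = (i + j) / 2 + 1  # average rank for the tie group (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, di in zip(ranks, d) if di > 0)
    w_neg = sum(r for r, di in zip(ranks, d) if di < 0)
    return min(w_pos, w_neg)

# Hypothetical per-class LOC in two adjacent versions.
v1 = [120, 80, 200, 45, 60]
v2 = [130, 78, 230, 45, 75]
w = wilcoxon_w(v1, v2)  # -> 1.0
```

The small $W$ here reflects that most classes grew between the two versions; comparing $W$ against a critical value (or computing a p-value, as a statistics library would) then decides significance.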
# 3.5 Reproducibility
All extracted metrics for the different versions (the dataset) and the scripts and notebooks used for data processing and statistical analysis are available in the GitHub repository.
# 4 Results and Analysis
This section presents an analysis of the evolution of TestNG’s software quality metrics over five selected versions. Graphs and tables are used to illustrate trends, while statistical significance is evaluated using the Wilcoxon Signed-Rank Test.
# 4.1 General Observations
• Code Growth: LOC consistently increased over time, reflecting new features and test cases.
• Complexity Reduction: Both Cyclomatic Complexity and Max Inheritance Tree showed a downward trend, suggesting simplification of logic and class structures.
• Structural Streamlining: A noticeable decline in instance methods and declared methods was observed after version 6.13.1.
• Improved Cohesion: LCOM values decreased steadily, indicating enhanced modularity and better-organized classes.
# 4.2 Metric-by-Metric Analysis
This section analyzes trends across key software quality metrics collected from various versions of the TestNG project.
Table 2: Summary of Metrics by Version
The combined graph of all metrics is shown in Figure 1.
# Insights from Combined Metrics and Graph Patterns
By observing the patterns and trends in the metrics data, we can draw several insights that reflect improvements or concerns in software quality. Below are the main findings and recommendations categorized by metric groups:
Figure 1: Combined trends of software metrics across TestNG versions
# 1. Cyclomatic Complexity & Max Inheritance Tree
• Pattern: Both metrics generally decrease after version testng-6.0.1, with further reductions by testng-7.11.
• Good Quality Indicator: Indicates simplification in code structure with fewer decision points and shallower inheritance.
• Recommendation: Continue simplifying code. Reduce complex branching and deep inheritance hierarchies for maintainability.
# 2. Declared Methods (Count Declared Methods & Instance Methods)
• Pattern: Noticeable decrease after testng-6.13.1.
• Good Quality Indicator: Implies cleaner and more concise code.
• Recommendation: Limit redundant or overly specific methods. Focus on modular, single-responsibility methods.
# 3. Class Base and Derived Count (IFANIN & NOC)
• Pattern: Remains relatively stable or shows minor increases.
• Good Quality Indicator: Suggests controlled growth in class hierarchies.
• Recommendation: Maintain balanced inheritance. Avoid overuse of subclassing which can lead to fragility.
# 4. Lack of Cohesion in Methods (LCOM)
• Pattern: Decreases notably from testng-6.13.1 to testng-7.11.
• Good Quality Indicator: Shows that related functionality is becoming more logically grouped.
• Recommendation: Continue ensuring high cohesion by keeping related methods within the same class.
# 5. Code Size (LOC - Count Line Code)
• Pattern: Fluctuates but remains relatively consistent across versions.
• Good Quality Indicator: Suggests maintenance without code bloat.
• Recommendation: Encourage concise, effective coding practices. Avoid unnecessary expansion of codebase.
# 4.3 Boxplot Analysis: Lines of Code (LOC)
Initially, boxplots were created to visualize the distribution of LOC across the five selected versions of the TestNG project. However, the presence of several outliers made the visual analysis less effective. To improve clarity, the bottom 90th percentile of the data was selected for cleaned boxplots.
Two boxplots were generated:
• Box Plot_LOC_With_Outliers: Original dataset, including outliers.
• Box Plot_LOC_With_Data_Cleaning: Cleaned data (bottom 90th percentile).
Figure 2: Boxplot of LOC (Cleaned Data - Bottom 90th Percentile)
# Observations:
1. LOC shows a steady increase from Version 5.13 to Version 7.11, indicating ongoing test case additions.
2. Median LOC values rise with each version, suggesting growing test file sizes.
3. The interquartile range (IQR) expands, reflecting increasing variability.
4. No significant drops in LOC indicate minimal removal or refactoring of test cases.
5. Outliers in later versions may point to large or complex test files introduced.
# 4.4 Statistical Analysis Using Wilcoxon Signed-Rank Test
To assess statistically significant changes in metrics across TestNG versions, the Wilcoxon Signed-Rank Test was applied. A pivot table was first created to ensure values aligned by Java class, method, or file across versions. Data was filtered to exclude rows with all-zero or null values per metric, ensuring meaningful comparisons.
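The alignment step (the Excel PivotTable) has a direct analogue in pandas; a minimal sketch with hypothetical class-level data follows. Class names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical long-format export from Understand: one row per
# (class, version) with a metric value.
rows = [
    ("Assert", "5.13", 320), ("Assert", "7.11", 410),
    ("TestRunner", "5.13", 800), ("TestRunner", "7.11", 760),
    ("Reporter", "5.13", 150),  # missing in 7.11 -> dropped below
]
df = pd.DataFrame(rows, columns=["Class", "Version", "LOC"])

# Align class-level values across versions, then drop classes that are
# missing in either version so comparisons are properly paired.
pivot = df.pivot_table(index="Class", columns="Version", values="LOC")
paired = pivot.dropna()
```

The `paired` table's two columns are the matched samples fed into the Wilcoxon test for that metric.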
# Key Observations per Metric
• LOC (Lines of Code): No significant differences were observed between version 7.11 and others (the T values and the P values were NaN). However, older versions (e.g., 5.13 to 6.13.1) showed significant differences, suggesting a substantial evolution in code structure or size.
• MaxCyclomatic Complexity: Statistically significant differences in complexity across all versions (very low p-values), possibly due to added features or refactoring.
• CountDeclInstanceMethod: Significant differences across all versions. Indicates major changes in instance method declarations—possibly due to additions, removals, or refactoring.
• CountDeclMethod: Strong statistical changes observed across all versions. Major transitions from version 5.13 to 7.11 point to deep restructuring at the method level.
• PercentLackOfCohesion, MaxInheritanceTree, CountDeclMethodAll, CountClassDerived, CountClassCoupled, CountClassBase: All resulted in NaN values. This may reflect unchanged metrics or inadequate data quality (e.g., missing values or constant columns).
# Lines of Code (LOC)
The LOC metric was compared across all versions using pairwise Wilcoxon tests. The results are shown in Table 3. Version 7.11 showed no statistically significant difference from earlier versions, but other comparisons showed strong significance $(p < 0.05)$, suggesting major historical changes.
Table 3: Wilcoxon Test Results for LOC Across Versions
# Other Metrics (Version 5.13 vs 7.11)
For all remaining metrics, the Wilcoxon test was conducted between the first and last available versions only (5.13 vs 7.11). The results are summarized in Table 4.
Table 4: Wilcoxon Test Summary for Other Metrics (5.13 vs 7.11)
The Wilcoxon test results comparing TestNG versions 5.13 and 7.11 reveal significant structural changes in key object-oriented metrics over time. Specifically, the metrics MaxCyclomaticComplexity, CountDeclInstanceMethod, and CountDeclMethod all demonstrated statistically significant p-values $(p < 0.00001)$, indicating that there were substantial modifications in code complexity and method-level design between the two versions. These results suggest that the software has undergone notable evolution, likely due to targeted refactoring or feature expansion that affected control flow and method architecture. Such changes can have implications for maintainability, as increasing complexity may require more rigorous testing and documentation.
Conversely, several metrics—such as PercentLackOfCohesion, MaxInheritanceTree, and CountClassCoupled—showed no statistically significant changes, either due to consistent values across versions or possible data gaps. This stability may imply that certain aspects of the system’s architecture, such as class inheritance and coupling, remained relatively unchanged throughout the evolution. While this could reflect mature design choices, it also highlights areas that may not have been prioritized for optimization. Overall, the test underscores the importance of selective metric monitoring to identify where structural evolution is occurring and to assess its impact on long-term software quality.
# 5 Research Questions and Answers
The study was guided by a set of research questions aimed at understanding the evolution of software quality in the TestNG framework. The following table presents the research questions, concise answers based on empirical findings, and suggestions for future work that could build on this analysis.
Table 5: Research Questions and Summary of Findings

Abstract: Software quality is critical in modern software engineering, especially in large and evolving codebases. This study analyzes the evolution of software quality metrics in five successive versions of the open-source Java testing framework TestNG. Using the static analysis tool Understand, eleven key object-oriented metrics, including cyclomatic complexity, class coupling, and lines of code, were extracted for each version. Statistical and visual analyses reveal structural trends over time. The results indicate that TestNG has matured into a more stable and maintainable framework, reflecting ongoing development, refactoring, and architectural improvements. This study provides insights into design evolution and offers recommendations for maintaining code quality in similar projects.

Categories: cs.SE, cs.CE
# 1 Introduction
The Linux kernel is a critical system which serves as the foundation for numerous operating systems, servers, and embedded systems. To date, the Linux kernel has undergone continuous evolution over several decades, involving thousands of developers and billions of active users [36]. Given the widespread adoption of the Linux kernel, bugs in the Linux kernel can cause serious consequences, affecting a vast number of users. Therefore, extensive research has been dedicated to developing automated software quality assurance techniques (e.g., testing [9, 10, 42] and debugging [8, 13, 34, 16]) specifically for the Linux kernel.
Fault localization (FL), which aims at identifying the buggy code elements (e.g., files or functions) in software, plays a critical role in software quality assurance. Given the codebase of the buggy software and the bug report (e.g., a user-reported bug symptom description), automated FL techniques return a list of buggy code elements ranked by their suspiciousness (i.e., the probability of being buggy). In particular, accurate FL is a prerequisite for bug fixing, as a bug cannot be resolved without correctly identifying the faulty code location.
Traditional FL techniques mainly leverage heuristics [6, 39] or information retrieval (IR) [45, 33] to identify buggy code elements. More recently, with advances in large language models (LLMs), LLM agents [22] have demonstrated remarkable accuracy in FL. By equipping LLMs with tool invocation capabilities, agents can autonomously navigate through the codebase to identify the buggy location. For example, state-of-the-art agents such as SWE-Agent [43], AutoCodeRover [44], and Agentless [40] achieve around $70\%$ accuracy in localizing buggy files for Python software in the benchmark SWE-bench [17].
Although achieving promising FL effectiveness, existing agents have been mainly evaluated on general software at moderate scales. It remains unclear how existing agents perform in complex, large-scale software systems like the Linux kernel. In particular, FL in Linux kernel is more challenging than general software due to the following factors. (1) Large-scale Codebase: the Linux kernel has a massive codebase significantly larger than general software. For example, the v5.8 release of Linux kernel includes over 69K files and 28M lines of code [36], which is over 30 times the scale of even the largest project in the most widely-used benchmark SWE-bench. (2) Limited Observability: given the real-time nature of the Linux kernel with the need to minimize overhead, the kernel restricts the use of instrumentation and logging mechanisms during runtime. Additionally, the kernel operates in a privileged mode, isolated from user space. As a result, user-reported bug descriptions often lack detailed runtime information and debugging hints, creating a significant gap between the user description and the actual root causes. (3) Diverse Impact Factors: kernel bugs are influenced by a wide range of factors, including hardware variability (e.g., architectural configurations) and runtime variability (e.g., system load or timing). These factors lead to an exponentially large reasoning space to accurately diagnose the root causes of errors. Therefore, given the unique challenges and the importance of the Linux kernel, this work aims at investigating the FL effectiveness of state-of-the-art LLM agents in the Linux kernel.
Benchmark. We first build a new benchmark LINUXFLBENCH of 250 real-world FL tasks for the Linux kernel. Each FL task in LINUXFLBENCH includes a user-submitted bug report, the buggy Linux kernel codebase, and the ground-truth buggy locations based on the associated commit patches. LINUXFLBENCH involves a wide range of Linux kernel bugs, spanning over 120 Linux kernel versions and 66 different kernel components. The FL tasks are significantly more challenging than those in SWE-bench, as evidenced by the substantially larger codebases ($10$–$30\times$ more files and lines of code) and more complex bug reports (approximately $1.5\times$ more words).
Empirical Study. On LINUXFLBENCH, we make the first attempt to evaluate state-of-the-art (SOTA) LLM agents in localizing Linux kernel bugs. Our results reveal the limited FL effectiveness (e.g., $36.8\%$–$41.6\%$ accuracy) of existing agents in the Linux kernel; moreover, such FL accuracy is much lower than their performance on general software systems (as seen in SWE-bench), representing a $16.7\%$–$31.9\%$ accuracy drop. We further perform bad-case analysis and find that existing agents mainly miss the buggy files because they fail to capture the related files or to cover the complete root causes of kernel bugs. The results indicate that FL in the Linux kernel is indeed a more challenging task, highlighting the need for building more advanced agents to localize bugs in large and complex software systems like the Linux kernel.
Technique. Inspired by our study above, we further propose an enhancing framework LINUXFL+, which improves the FL effectiveness of existing agents for the Linux kernel. LINUXFL+ incorporates two expansion strategies to refine the prediction results of existing agents: directory-aware expansion to include buggy files based on the repository structure, and potential cause expansion to identify buggy files based on additional bug knowledge from the Linux kernel mailing list (LKML) [3]. Our evaluation results show that LINUXFL+ can substantially improve the FL accuracy of all studied agents (e.g., a $7.2\%$–$11.2\%$ accuracy increase) with minimal costs. Moreover, the ablation analysis confirms the contribution of each expansion strategy.
Table 1: Existing Benchmarks for Software Maintenance
# 2 Background and Related Work
FL Task Definition. Given the buggy codebase and the bug report, FL techniques identify buggy code elements (e.g., files or functions). Formally, let a codebase be represented as a set of code elements (e.g., files or functions), $\mathcal{C} = \{ce_1, ce_2, \ldots, ce_N\}$, where $N$ denotes the total number of code elements. A bug report $BR$ typically includes a title, a description, and optional metadata (e.g., component and hardware information in the context of the Linux kernel), and can be expressed as $BR = (title, desc, meta)$. An FL task can be modeled as:
$$
\mathbf{FL}: BR, \mathcal{C} \to \mathit{list}(\mathcal{C}),
$$
where $\mathit{list}(\mathcal{C})$ denotes a list of code elements ranked by their suspiciousness (i.e., the probabilities of being buggy).
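The signature above can be made concrete with a toy ranker. The suspiciousness score here is naive token overlap, standing in for whatever model (IR similarity, an LLM agent) a real technique would use; file names and contents are hypothetical.

```python
def localize(bug_report, codebase):
    """Toy instance of FL: BR, C -> list(C), ranking code elements
    by a naive suspiciousness score (token overlap with the report)."""
    query = set(bug_report.lower().split())

    def suspiciousness(ce):
        name, text = ce
        return len(query & set(text.lower().split()))

    return sorted(codebase, key=suspiciousness, reverse=True)

# Hypothetical miniature codebase {ce_1, ..., ce_N}.
codebase = [
    ("fs/ext4/inode.c", "inode allocation and truncate handling"),
    ("net/ipv4/tcp.c", "tcp socket state machine"),
    ("drivers/acpi/ec.c", "acpi embedded controller events"),
]
ranked = localize("kernel oops during inode truncate", codebase)
```

Evaluating such a ranked list against the ground-truth buggy file is what the recall@k and MRR metrics discussed later measure.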
Existing FL techniques. FL techniques have been extensively studied in literature:
• Coverage-based FL. Besides bug reports, some FL techniques leverage test coverage to identify buggy locations, such as SBFL [6, 39], GNN-based FL [23], AutoFL [19], and AgentFL [29]. However, coverage and executable failure-triggering tests are not always available in practice. Especially for large systems like the Linux kernel, users report bugs by textually describing the error symptoms. Therefore, coverage-based FL cannot be applied to the Linux kernel when only bug reports are available, and it is thus not included in this work.
• Information Retrieval (IR) Based FL. FL can be formulated as an information retrieval (IR) problem, where a bug report serves as a query to rank code files by relevance. Existing IR-based FL techniques use various similarity measures, such as Vector Space Model (VSM) [45, 33, 32, 37, 38], Dirichlet Language Model (DLM) [35], or deep learning approaches [15, 12, 27]. In this work, we empirically evaluate IR-based FL in the Linux kernel.
• Agent-based FL. With the advance in LLMs, LLM agents have demonstrated remarkable accuracy in diverse software maintenance tasks, including FL. For instance, SWE-Agent [43] incorporates a custom-built Agent-Computer Interface to navigate entire repositories and edit code files; AutoCodeRover [44] equips LLMs with code search capabilities to retrieve relevant code contexts; Agentless [40] refines the localization process with developer expertise, restricting the decision-making autonomy of agents. In this work, we not only make the first attempt to empirically evaluate existing agents in the Linux kernel, but also propose an enhancement framework to improve agents in localizing Linux kernel bugs.
Benchmarks for Software Maintenance. As FL is an essential sub-task in software maintenance, we revisit existing software maintenance benchmarks and summarize their key characteristics in Table 1. The majority of existing benchmarks focus on general software systems in Java or Python. Different from these benchmarks, our benchmark LINUXFLBENCH specifically targets the larger-scale Linux kernel. In particular, only two existing benchmarks (i.e., Linux-3.16 [32] and KBENCHSYZ [25]) involve the Linux kernel. While Linux-3.16 [32] only includes bugs from one specific old version (i.e., Linux kernel 3.16), our benchmark spans a broader range of Linux kernel versions, offering a more comprehensive evaluation. The most relevant work to this paper is the recent benchmark KBENCHSYZ [25], a Linux kernel crash resolution benchmark comprising crashes in the Linux kernel detected by the fuzzing tool Syzkaller [4]. LINUXFLBENCH differs from KBENCHSYZ by providing more comprehensive, real-world, user-reported Linux kernel bugs: (1) different bug scopes: while KBENCHSYZ only includes crash bugs, LINUXFLBENCH includes a broader spectrum of real-world bugs beyond crashes (e.g., functionality bugs and performance bugs); (2) different bug sources: while KBENCHSYZ focuses exclusively on crashes automatically detected by Syzkaller, the bugs in LINUXFLBENCH are all real-world bugs reported by human users during their daily usage. Therefore, we believe that LINUXFLBENCH complements existing efforts in benchmarking FL techniques for maintaining the Linux kernel. Moreover, while KBENCHSYZ mainly evaluates basic LLMs, our work evaluates LLM agents for the Linux kernel for the first time.
# 3 LINUXFLBENCH: An FL Benchmark for Linux Kernel
LINUXFLBENCH is a new benchmark of 250 FL tasks derived from real-world Linux kernel bugs.
# 3.1 Construction of LINUXFLBENCH
The construction process of LINUXFLBENCH includes three phases, detailed in Appendix A.1.
Step 1: Bug Report Collection. We collected bug reports for Linux kernel from Kernel.org Bugzilla [1], the official platform for reporting Linux kernel bugs. Each bug report typically contains key information, including a title, a bug description, and additional relevant metadata, such as the kernel version and environment details. To ensure the availability of corresponding codebase, we considered only kernel versions accessible on the official Linux kernel website [2]. For ground-truth reliability, we focused on bug reports resolved with confirmed code fixes, specifically those marked as “CLOSED” and “CODE_FIX” in the tracking system. Furthermore, we included only bug reports with patches available as attachments on the website, enabling us to identify the buggy locations based on the patch information. In total, we collected 2,138 bug reports during this step.
Step 2: Buggy Location Identification. For each collected bug report, we identified the location modified in the developer-committed patch as the ground-truth buggy location. Specifically, we traversed source files with the extensions .c or .h, skipping other file types such as README or Makefile. Following SWE-bench-lite [17], we ensured the reliability and non-ambiguity of the ground truth by keeping bug reports involving exactly one buggy file. After this step, 635 bug reports with identified buggy files were obtained.
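The buggy-location step above amounts to parsing the patch for modified source files. A minimal sketch (the patch text is a hypothetical example, and the `b/`-prefix convention is that of git unified diffs):

```python
import re

def buggy_files(patch_text):
    """Extract modified source files from a unified diff, keeping
    only .c/.h files and skipping e.g. README or Makefile."""
    files = re.findall(r"^\+\+\+ b/(\S+)", patch_text, flags=re.M)
    return sorted({f for f in files if f.endswith((".c", ".h"))})

patch = """\
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -100,2 +100,3 @@
--- a/Makefile
+++ b/Makefile
"""
files = buggy_files(patch)
# Reports are kept only when exactly one buggy source file remains.
unambiguous = len(files) == 1
```

Here the Makefile change is ignored, leaving a single ground-truth file, so this hypothetical report would survive the filtering.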
Step 3: Manual Inspection. To further ensure quality, we manually reviewed the collected data. Three human annotators checked each bug as follows: (1) bug reports without actual bugs (e.g., those that primarily submit patches) were excluded; (2) bug reports with sufficient information (e.g., clear natural language descriptions or detailed system logs) were retained; (3) bug reports that explicitly mentioned buggy locations or fix solutions were excluded. As a result, the final dataset comprises 250 high-quality FL tasks, and each task includes a bug report, the buggy codebase, and the ground-truth buggy file and method(s). A detailed sample is shown in Appendix A.2.
# 3.2 Characteristics of LINUXFLBENCH
LINUXFLBENCH offers multidimensional task diversity, spanning various Linux kernel versions and products, and each task involves a complex bug description and large-scale codebase.
Figure 1: Task Distribution across Products
Table 2: Task Scales of LINUXFLBENCH and SWE-bench.
Source: SWE-bench [17].
Scale. Table 2 compares the scale characteristics of the tasks in LINUXFLBENCH and SWE-bench. Notably, the Linux kernel codebase is significantly larger, containing tens of thousands of code files and millions of lines of code. Additionally, the bug descriptions in LINUXFLBENCH tend to be more complex and lengthy. As a result, localizing the buggy location within such a large-scale codebase is more challenging than in the relatively smaller projects of SWE-bench.
Products. Fig. 1 shows the distribution of LINUXFLBENCH across different kernel products (i.e., high-level components officially defined in Bugzilla). In particular, bugs in LINUXFLBENCH cover a diverse range of products (i.e., 16 different products) with Drivers, ACPI, and File System being the three largest categories.
Versions. The Linux kernel has undergone continuous evolution over several decades, resulting in the release of numerous versions. LINUXFLBENCH captures this temporal diversity by including bugs from a broad range of kernel versions, covering a total of 120 distinct versions.
# 4 Evaluation of LLM agents on LINUXFLBENCH
We empirically evaluate SOTA LLM agents on LINUXFLBENCH to investigate their FL effectiveness in the Linux kernel.
# 4.1 Study Setup
Studied Baselines. (1) LLM agents. We study three SOTA LLM agents, i.e., SWE-Agent [43], AutoCodeRover [44], and Agentless [40], as they are fully open-sourced and achieve high effectiveness in recent software maintenance leaderboard [5]. All agents are equipped with GPT-4o (gpt-4o-2024-08-06) as backbone LLMs [28]. The detailed implementation of these agents is in Appendix B. (2) IR-based baselines. To further investigate the effectiveness of agent-based methods, we also include traditional IR-based FL baselines for comparison. Specifically, we include the classic IR-based methods BugLocator [45] and BLUiR [33], along with widely used IR techniques such as BM25 [31] and Sentence-BERT [30].
Evaluation Metrics. In line with previous FL work [41, 45, 32], we include widely-used metrics, namely recall at top-k ($k = 1, 5, 10$) and the Mean Reciprocal Rank (MRR), to evaluate FL effectiveness.
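Both metrics are straightforward to compute; a small sketch with hypothetical predictions for three FL tasks:

```python
def recall_at_k(ranked, truth, k):
    """Fraction of tasks whose ground-truth file appears in the top-k."""
    hits = sum(1 for r, t in zip(ranked, truth) if t in r[:k])
    return hits / len(truth)

def mrr(ranked, truth):
    """Mean Reciprocal Rank of the ground-truth file (0 if absent)."""
    total = 0.0
    for r, t in zip(ranked, truth):
        if t in r:
            total += 1.0 / (r.index(t) + 1)
    return total / len(truth)

# Hypothetical ranked predictions and ground-truth files for 3 tasks.
ranked = [["a.c", "b.c"], ["c.c", "d.c"], ["e.c"]]
truth = ["b.c", "c.c", "x.c"]
```

On this toy data, recall@1 is 1/3 (only the second task hits at rank 1) and MRR averages the reciprocal ranks 1/2, 1, and 0.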
# 4.2 Quantitative Analysis
Table 3 shows the overall file-level FL effectiveness of studied techniques on LINUXFLBENCH.
Comparison with IR-based methods. Overall, existing agents outperform all traditional IR methods, indicating the benefits from agentic solutions in identifying buggy locations for large scale systems. For instance, SWE-Agent achieves the best effectiveness with an MRR of 0.476, significantly surpassing other methods. Among IR methods, BLUiR performs the best, but only with an MRR of 0.321.
Table 3: FL effectiveness on LINUXFLBENCH.
Comparison with general software systems. Although outperforming traditional IR methods, existing agents still exhibit limited overall effectiveness on the Linux kernel. For instance, even the best-performing SWE-Agent achieves a top-1 recall of only 0.416 on LINUXFLBENCH, which is much lower than when it is applied to general software systems (i.e., SWE-bench). In particular, Fig. 2 compares the FL effectiveness of agents on Linux systems (i.e., LINUXFLBENCH) and on general software systems (i.e., SWE-bench). The reported SWE-bench results are from previous work [40]. We observe a marked performance decline for all the LLM agents on LINUXFLBENCH compared to SWE-bench, with recall values decreasing by more than 0.15. Such an effectiveness drop underscores the heightened challenges of FL in the larger and more intricate Linux kernel codebase compared to general software systems.
Uniqueness and Union. Fig. 3 presents the overlapping/unique bugs that are correctly localized at top-1 by the studied agents. We observe complementary strengths of the different approaches, as each agent uniquely resolves 12–20 bugs. Nevertheless, even when combining the correctly localized bugs of all agents, only 146 of the 250 bugs can be successfully localized (i.e., 58.4% top-1 recall). This further highlights the considerable challenges that agents still face in performing FL within the complex Linux kernel.
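This kind of Venn-style analysis reduces to set operations over per-agent result sets. The bug IDs below are purely hypothetical, chosen only to show the computation:

```python
# Hypothetical sets of bug IDs correctly localized at top-1 by each agent.
swe = {1, 2, 3, 4, 5}
acr = {3, 4, 5, 6, 7}
agentless = {5, 7, 8}

unique_swe = swe - acr - agentless   # bugs solved only by SWE-Agent
union = swe | acr | agentless        # best-case combined top-1 set
union_recall = len(union) / 10       # assuming a toy benchmark of 10 bugs
```

In the paper's setting, the same union computation over the three agents' result sets yields the reported 146/250 combined top-1 recall.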
Figure 2: Performance of LLM agents on SWE-bench and LINUXFLBENCH.
Figure 3: Venn Diagram for Correctly Localized Bugs by LLM agents.
# 4.3 Qualitative Analysis
To further understand why agents perform poorly in Linux kernel, we manually examine bad cases where all studied agents fail to correctly localize the buggy files. Overall, we find two main reasons for the limited effectiveness as follows.
Confusion Among Related Files. As a large-scale software system, bugs in Linux kernel often propagate along a long chain, where many related files are associated with each other via function calls or data dependencies. While agents might be capable of coarse-grained FL (e.g., correctly identifying the buggy directories or high-level modules), they struggle to further precisely pinpoint the exact faulty file/method among all the related files. This challenge is indirectly evidenced by the fact that each Linux directory in LINUXFLBENCH contains, on average, approximately twice as many files (16 vs. 8) as those in SWE-bench, making fine-grained localization within directories more difficult. For example, Appendix C.1 shows a bad case where all agents wrongly localize the files that are in the same directory as the buggy file.
Limited Exploration of Potential Causes. Given the complexity of the Linux kernel, a bug can arise from diverse and non-obvious root causes. Current agents narrowly focus on a small set of highly probable causes, failing to explore a broader range of potential causes. Consequently, this limited exploration leads to missed opportunities for correctly identifying the buggy file. Appendix C.2 shows a bad case that all agents miss the real cause.
# 5 LINUXFL+: An Enhancing Framework
To address the limitations of existing agent-based methods, we propose a novel enhancing framework LINUXFL+, which improves the FL effectiveness of agents in the Linux kernel.
# 5.1 Approach
As discussed in Section 4.3, given the huge search space of the Linux kernel, existing agents fail to capture the relationships between files or to cover a complete pool of potential causes. Therefore, the main insight of LINUXFL+ is to expand the prediction results of existing agents with both the repository structure and the potential root causes.
Fig. 4 shows the overall workflow of LINUXFL+. Given the buggy files predicted by any agent (e.g., AutoCodeRover), LINUXFL+ refines the prediction via the following three phases. (1) Directory-Aware Expansion: LINUXFL+ expands the search scope within directories of the initial predictions generated by LLM agents, then re-selects bug-related files within these directories, enabling a more thorough exploration of related files; (2) Potential Cause Expansion: LINUXFL+ explores as many potential causes as possible to scale the related files, using two hypothesizing strategies that leverage both the original capabilities of LLMs (i.e., direct hypothesis) and additional knowledge from the Linux kernel mailing list (i.e., mail-augmented hypothesis); (3) Candidate Integration: all relevant files are merged as candidates, followed by a re-ranking process to further refine the results.
Figure 4: Overview of LINUXFL+.
# 5.1.1 Directory-Aware Expansion
While existing agents can generally identify the correct modules related to a bug, they often struggle to distinguish relevant files within those modules. To address this limitation, LINUXFL+ first expands the search scope to include all files in the directories of the initially predicted files. Using this expanded candidate set, the LLM re-selects files likely related to the bug. We retain the top-k ($k = 10$) most relevant files as the expanded results. This approach provides the LLM with an additional opportunity to identify buggy files, enabling a more comprehensive exploration of related files. Detailed prompts are in Appendix D.1.
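The expansion step can be sketched as follows. The re-selection is stubbed with a caller-provided scoring function (standing in for the LLM prompt of Appendix D.1); the file paths and the `port`-based score are hypothetical.

```python
import os

def directory_aware_expansion(predicted, all_files, score, k=10):
    """Expand agent predictions to every file in the predicted files'
    directories, then keep the top-k by a relevance score (the score
    function stands in for the LLM re-selection step)."""
    dirs = {os.path.dirname(p) for p in predicted}
    candidates = sorted(f for f in all_files if os.path.dirname(f) in dirs)
    return sorted(candidates, key=score, reverse=True)[:k]

all_files = [
    "drivers/usb/core/hub.c", "drivers/usb/core/port.c",
    "drivers/usb/core/usb.c", "fs/ext4/inode.c",
]
# Toy score: prefer files mentioning 'port' (a hypothetical bug symptom).
expanded = directory_aware_expansion(
    ["drivers/usb/core/hub.c"], all_files,
    score=lambda f: "port" in f)
```

Files outside the predicted directories (here `fs/ext4/inode.c`) never enter the candidate set, which keeps the expansion cheap even in a codebase with tens of thousands of files.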
# 5.1.2 Potential Cause Expansion
Current agents tend to focus narrowly on a small set of highly probable causes within a limited number of steps. However, diagnosing complex bugs often requires an iterative “guess-and-check” process [7, 20, 21], where developers form experience-based hypotheses and progressively refine their understanding to isolate the root cause. Inspired by this process, we expand bug-related files by exploring a broader range of potential causes.
In this phase, we design two types of hypothesizing strategies to expand probable causes, namely Direct Hypothesis and Mail-Augmented Hypothesis. The former leverages the models’ inherent knowledge on Linux kernel bugs obtained from pre-training, while the latter incorporates historical bug knowledge from the developer mailing list discussions.
Direct Hypothesis. As LLMs possess a foundational understanding of the Linux kernel from extensive pre-training, a straightforward expansion approach is to fully leverage the intrinsic knowledge of models. To this end, we design prompts that instruct the model to generate as many plausible potential causes as possible, and rank these causes based on their estimated likelihood of being responsible for the bug. To ensure the practicality and relevance of each hypothesized cause, the LLM is also required to propose a corresponding fix and identify the specific files that would need modification. We then extract the predicted target files, preserving their original ranking from the associated causes. Detailed prompts are in Appendix D.2.
Mail-Augmented Hypothesis. Relying solely on the intrinsic capabilities of LLMs is insufficient, as general-purpose models still lack in-depth, domain-specific knowledge of the Linux kernel. To address this limitation, we incorporate historical bug knowledge from the Linux kernel mailing list (LKML) [3]. The LKML is the communication channel among Linux kernel developers, containing a massive number of emails discussing bugs, patches, and diverse topics on maintaining the Linux kernel. Specifically, we adopt a Retrieval-Augmented Generation (RAG) approach, using mailing list data as an external knowledge base to provide more comprehensive and diverse bug causes in the Linux kernel.
Mail Collection. To construct the knowledge base, we first collect emails from the LKML channel. To ensure the quality of the data, we retain only emails that include patches, as these are more likely to involve discussions of bug fixes or feature implementations, providing useful context for FL. However, we exclude patch emails that modify more than 10 files, as such patches are likely non-atomic, often representing merged changes. Additionally, to avoid potential data leakage, we exclude any emails containing external URLs or the keyword “bugzilla”.
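A minimal sketch of these filtering heuristics, assuming each mail is available as a raw text body (the function name and regexes are our own illustration, not the paper's implementation):

```python
import re

def keep_mail(body):
    """Retain only atomic patch mails with no leakage risk: the mail must
    contain a patch, touch at most 10 files, and mention no URLs or
    'bugzilla'."""
    # Files touched by the attached unified-diff patch.
    touched = set(re.findall(r"^diff --git a/(\S+)", body, re.MULTILINE))
    atomic_patch = 0 < len(touched) <= 10
    leaky = "bugzilla" in body.lower() or bool(re.search(r"https?://", body))
    return atomic_patch and not leaky

keep_mail("Fix null deref in inode eviction\n\n"
          "diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c\n")
# -> True
```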
Mail Retrieval. To improve the efficiency of retrieval, we design a hierarchical retrieval strategy. First, we categorize all emails based on the files modified in the attached patches. Using the files predicted by agents, we restrict the search space to only emails associated with the predicted files. Next, since raw bug reports may contain noisy content (e.g., hexadecimal logs), we reformulate queries by summarizing the bug reports along four key dimensions: (1) bug behavior, (2) potential causes, (3) expected behavior, and (4) possible solutions. We then apply the BM25 algorithm [24] to retrieve the top-k (k=10) most relevant mails. To ensure temporal consistency, we restrict the retrieval to emails sent before the bug report was filed.
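The hierarchical retrieval could look like the following sketch, with a from-scratch BM25 scorer standing in for the paper's BM25 implementation [24] (the data layout, tokenization, and parameter defaults are assumptions):

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with BM25 and return
    document indices sorted by descending relevance."""
    tokenized = [d.lower().split() for d in docs]
    q_terms = query.lower().split()
    N = len(docs)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter(t for d in tokenized for t in set(d))  # document frequency
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for t in q_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return sorted(range(N), key=lambda i: -scores[i])

# Hierarchical restriction: keep only mails whose patches touch predicted files.
mails = [
    {"files": {"fs/ext4/inode.c"}, "text": "ext4 null pointer dereference in inode eviction"},
    {"files": {"net/core/dev.c"},  "text": "fix rx queue leak on teardown"},
    {"files": {"fs/ext4/inode.c"}, "text": "ext4 quota accounting cleanup"},
]
predicted = {"fs/ext4/inode.c"}
pool = [m for m in mails if m["files"] & predicted]
order = bm25_rank("null pointer dereference in ext4", [m["text"] for m in pool])
```

The first-stage file filter shrinks the BM25 pool, which is what makes retrieval over the full LKML archive tractable.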
With the retrieved mails, we further instruct LLMs to hypothesize more diverse and informed causes for the given bug report. The expanded causes can further guide LLMs to identify buggy files associated with the causes. Detailed prompts are in Appendix D.2.
# 5.1.3 Candidate Integration
In this final phase, we consolidate the files predicted by the two expansion phases and rank the aggregated candidate files to produce the final FL results.

We adopt a simple yet effective merging strategy. Specifically, for each candidate file $f$, we collect its ranks from the three sources: $R_{dir}(f)$ (Directory-Aware Expansion), $R_{direct}(f)$ (Direct Hypothesis), and $R_{mail}(f)$ (Mail-Augmented Hypothesis). If a file does not appear in the results of a particular method, its rank is set to $\infty$. We then compute an aggregated score for $f$ as follows:

$$
\mathrm{score}(f) = \frac{1}{R_{dir}(f)} + \frac{1}{R_{direct}(f)} + \frac{1}{R_{mail}(f)}
$$
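The aggregation follows directly from this reciprocal-rank formula; in the sketch below the file names and ranks are illustrative, and the final LLM-based semantic re-rank is omitted:

```python
import math

def aggregate(rank_maps):
    """rank_maps: one {file: rank} dict per source (ranks start at 1).
    A file missing from a source gets rank infinity, i.e., contributes 0."""
    files = set().union(*rank_maps)
    score = {f: sum(1.0 / m.get(f, math.inf) for m in rank_maps) for f in files}
    return sorted(files, key=lambda f: -score[f])

merged = aggregate([
    {"a.c": 1, "b.c": 2},   # R_dir: Directory-Aware Expansion
    {"b.c": 1},             # R_direct: Direct Hypothesis
    {"b.c": 3, "c.c": 2},   # R_mail: Mail-Augmented Hypothesis
])
# b.c scores 1/2 + 1/1 + 1/3; a.c scores 1; c.c scores 1/2
```

Note how `b.c`, ranked decently by all three sources, overtakes `a.c`, which only one source ranked first.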
Files that achieve better ranks in any individual method receive higher scores, while those consistently ranked highly across methods are further prioritized. All candidate files are sorted by their aggregated scores to produce the initial merged ranking. To further refine this list, the LLM is prompted to re-rank the files based on the semantic correspondence between their path and bug report.
# 5.2 Experimental Setup
Baselines. To evaluate the effectiveness of LINUXFL+ in improving existing agents, we apply LINUXFL+ to refine the prediction outputs of recent agents (i.e., SWE-Agent, AutoCodeRover, and Agentless) on LINUXFLBENCH.
Implementation Details. We use GPT-4o [28] (gpt-4o-2024-08-06) as the backbone model for implementing LINUXFL+. We set the model temperature to 0 to ensure relatively deterministic outputs, with all other parameters at their default settings.
Table 4: Evaluation results of LINUXFL+ on LINUXFLBENCH.
# 5.3 Results and Analysis
# 5.3.1 Overall Performance
Table 4 presents the improvements of LINUXFL+ on all studied agents, along with its costs.
Effectiveness. LINUXFL+ exhibits strong performance in enhancing the FL capabilities of agents, as evidenced by substantial improvement across all evaluation metrics. For example, when applied to SWE-Agent, Recall@10 increases from 0.584 to 0.768, an absolute gain of 18.4 percentage points. Moreover, Recall@1 improves by 10.8 percentage points (from 0.416 to 0.524). The improvement indicates the effectiveness of the candidate expansion strategies of LINUXFL+, which successfully recover buggy files missed by existing agents.
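Assuming Recall@k here counts a bug as localized when any ground-truth buggy file appears in the top-k predictions (a common file-level FL convention; the paper's exact definition may differ), the metric reduces to:

```python
def recall_at_k(predictions, ground_truth, k):
    """Fraction of bugs for which at least one true buggy file appears
    in the top-k predicted files."""
    hits = sum(
        1 for preds, gt in zip(predictions, ground_truth)
        if set(preds[:k]) & set(gt)
    )
    return hits / len(ground_truth)

preds = [["a.c", "b.c", "c.c"], ["x.c", "y.c", "z.c"]]
truth = [["b.c"], ["q.c"]]
recall_at_k(preds, truth, 2)  # one of two bugs hit in the top 2 -> 0.5
```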
Generalizability. Beyond improving individual methods, LINUXFL+ consistently boosts performance across all SOTA agents. Notably, AutoCodeRover and Agentless, both of which initially exhibited lower performance in Recall@5 and Recall@10, achieve results comparable to SWE-Agent after incorporating LINUXFL+. For instance, despite the large initial gaps in Recall@10 across methods, the integration of LINUXFL+ consistently elevates their scores above 0.7. These improvements underscore the generalizability of LINUXFL+ and its applicability across agent-based approaches with varying baseline capabilities.
Ablation study. We perform an ablation study to investigate the contribution of each component in LINUXFL+. In particular, we find that all expansion strategies, i.e., directory-aware expansion and potential cause expansion (with either direct or mail-augmented hypothesis), improve the FL effectiveness of agents. Detailed results can be found in Appendix E.
Cost-efficiency. While LINUXFL+ delivers strong performance, it incurs only a modest additional cost. On average, the total number of tokens used per task by LINUXFL+ ranges from 11.8k to 15.3k, resulting in an estimated cost of approximately $0.04. This is roughly one-tenth of the cost incurred by agent-based baselines. The primary cost of LINUXFL+ stems from its use of email content. These results suggest that LINUXFL+ can substantially enhance FL for a large-scale system like the Linux kernel at an affordable cost.
In summary, by enhancing the capabilities of existing agents, LINUXFL+ facilitates more accurate FL with minimal costs. Our findings underscore the potential of LINUXFL+ to significantly support software maintenance tasks in the Linux kernel.
# 5.3.2 Method-level FL
To further assess the effectiveness of LINUXFL+ on fine-grained FL, we extend our evaluation to method-level FL. In particular, we predict buggy methods within the buggy files predicted by LINUXFL+. Specifically, following prior work [40], we provide LLMs with a skeleton representation of each file. This format retains only the signatures of functions and structures, along with their associated comments, thereby reducing input length while preserving essential contextual information. The LLM is then prompted to identify the top-k (k=10) most relevant code elements.
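A rough illustration of such a skeleton pass for C sources follows; the regex is a deliberate simplification (it ignores macros and nested parentheses in parameter lists), and the paper's actual extraction follows [40]:

```python
import re

def skeletonize(c_source):
    """Reduce a C file to its top-level function signatures, dropping
    bodies to shrink the LLM input. Simplified: does not handle macros
    or nested parentheses in parameter lists."""
    sig = re.compile(r"^[\w\s\*]+?\b\w+\s*\([^;{)]*\)\s*(?=\{)", re.MULTILINE)
    return [m.group(0).strip() for m in sig.finditer(c_source)]

skeletonize("static int ext4_do_update_inode(struct inode *inode)\n"
            "{\n\treturn 0;\n}\n")
# -> ["static int ext4_do_update_inode(struct inode *inode)"]
```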
Given the characteristics of the C language, we define method-level elements to include functions, structures, and other code blocks. We consider the methods modified in the developer-committed patches as the ground truth for buggy methods.
Table 5 presents the method-level FL results of existing agents and those enhanced with LINUXFL+. Overall, LINUXFL+ consistently improves agents in method-level FL for the Linux kernel. All three agent baselines exhibit low Recall@1 (below 0.1), while LINUXFL+ consistently improves this metric beyond 0.1. The improvements are more pronounced on other metrics, e.g., for Recall@10, LINUXFL+ enhances all baselines by more than 0.09. While localizing finer-grained elements is inherently more challenging, especially for large-scale systems like the Linux kernel, the overall accuracy at the method level remains lower than at the file level, highlighting the need for further research in this direction.

Table 5: Method-level FL results.
# 6 Limitations
Limited Evaluation on Different LLMs. To maintain consistency with prior work [43, 44, 40] and enable fair comparison of agent performance on SWE-bench and our proposed LINUXFLBENCH, all experiments in this study employed the widely used GPT-4o as the backbone LLM. While LINUXFL+ demonstrates notable improvements in enhancing agent performance with GPT-4o, we conducted limited investigation into its effectiveness when integrated with other LLMs.
Rough Usage of Mail Data. LINUXFL+ leverages external knowledge from the Linux kernel mailing list to enhance FL. While we apply several heuristics to filter out noisy entries, the mail content may still contain irrelevant or outdated discussions. Although we mitigate this by discarding invalid predicted files, there remains room for improvement. Future work could explore more sophisticated approaches to effectively utilize mailing list knowledge for improved software maintenance tasks on the Linux kernel.

Abstract. The Linux kernel is a critical system, serving as the foundation for numerous systems. Bugs in the Linux kernel can cause serious consequences, affecting billions of users. Fault localization (FL), which aims at identifying the buggy code elements in software, plays an essential role in software quality assurance. While recent LLM agents have achieved promising accuracy in FL on recent benchmarks like SWE-bench, it remains unclear how well these methods perform in the Linux kernel, where FL is much more challenging due to the large-scale code base, limited observability, and diverse impact factors. In this paper, we introduce LINUXFLBENCH, a FL benchmark constructed from real-world Linux kernel bugs. We conduct an empirical study to assess the performance of state-of-the-art LLM agents on the Linux kernel. Our initial results reveal that existing agents struggle with this task, achieving a best top-1 accuracy of only 41.6% at file level. To address this challenge, we propose LINUXFL+, an enhancement framework designed to improve the FL effectiveness of LLM agents for the Linux kernel. LINUXFL+ substantially improves the FL accuracy of all studied agents (e.g., 7.2%-11.2% accuracy increase) with minimal costs. Data and code are available at https://github.com/FudanSELab/LinuxFLBench.
# I. INTRODUCTION
Most commercial software depends on open-source software components [1]. Although this approach reduces development costs, it results in a software supply chain that exposes an organization to cybersecurity risks [2], [3]. Many prior works have examined software supply chain security failures and have developed security techniques [4]–[6], engineering processes and frameworks [7]–[9]. These works have had substantial success, e.g., the now widely-used Sigstore project for provenance [4], and OpenSSF Scorecard project [10] for process. However, prior works have paid little attention to the actor element of the software supply chain [8].
In this vision paper, we propose ARMS, an Actor Reputation Metric System, to track the security qualifications of engineers in the open-source software supply chain. We first define ARMS's requirements based on the threat model in this context. We then propose a conceptual design for a reputation-based framework that evaluates an actor's trustworthiness. Next, to obtain indicators of security skill and expertise, we map high-level recommendations from frameworks like SLSA and CNCF to specific, measurable metrics derived from prior research and existing security tools. We outline evaluations to assess the implementation and effectiveness of an ARMS system. Finally, we discuss potential future directions and improvements for our approach.
Our proposal explores the development and operationalization of actor-based metrics to address software supply chain security failures. While our proposal requires careful
# II. BACKGROUND AND MOTIVATION
# A. The Open-Source Software Supply Chain
Open-source software is widely integrated into commercial [11] and government [12] systems. Any individual open-source component is developed by a maintainer team. With their approval, outsiders may be permitted to contribute code [13]. Beyond this direct incorporation of external contributions, each such project often depends on others as components, recursively. This web of interdependencies is a feature of open-source development, allowing (in an idealized world) a reduction in repeated effort [14]. However, each additional point of trust increases the potential attack surface. From the perspective of the downstream application, the result is a software supply chain that can be attacked either through its artifacts or through its actors [15].
We follow the software supply chain definition of Okafor et al. [8]: in their production and distribution, software artifacts undergo a series of operations overseen by actors. This definition indicates that a software supply chain can be secured only through attention to all of these entities.
# B. Artifact-Based Evaluations Are Not Enough
A key activity in open-source projects is expanding the actor pool by introducing new maintainers and contributors into projects [13]. As a software package gains popularity, interest from potential contributors increases [16]–[18], often resulting in onboarding new maintainers and merging change requests from new contributors. Evaluating these individuals may involve reputational factors such as community status and connections [16], [19], [20], but security considerations are rarely enforced due to the challenges OSS teams face in managing security resources effectively [21], [22].
As argued by Okafor et al. [8], most cybersecurity work takes an artifact-based perspective. Efforts to develop static and dynamic security analysis tools (SAST, DAST) [23], refine code reviews [24], [25], and assess artifact provenance [4], [26], [27] are all focused on confirming that an artifact is, to the limits provided by the technique, reliable. While this is not an unreasonable strategy, there are many reasons to avoid relying exclusively on artifact checks, such as:
• Reliance on known vulnerabilities. Automated scanners typically match code against vulnerability databases; therefore, zero-day flaws or novel attack vectors often evade detection [28], e.g., Log4j [29].
• Biased human review. Manual code reviews and audits are applied inconsistently, frequently favoring contributors familiar to project maintainers [18], [19], [30].
• Scalability constraints. As projects grow, sustaining thorough artifact checks becomes resource-intensive, leading to superficial reviews or delayed patching [30], [31].
• Limited socio-technical insight. Artifact-centric methods inspect code but ignore the developer behaviors and workflows that often precipitate security issues [32].
These shortcomings underscore the need for actor-based security measures, rather than considering code in isolation from its author. We thus turn now to reputation as a means of estimating the quality of an actor’s contributions.
# C. Using Actor Reputation to Establish Trust
Reputation systems establish trust between parties who have not been previously connected [33]. When social systems integrate reputation, incentives associated with positive reputation can encourage good behavior over time [34]. Following Hendrikx et al. [35], any reputation system has three interacting entities:
• Trustor: The party placing trust (e.g., the maintainer team). • Trustee: The party being evaluated (e.g., a potential contributor). • Trust Engine (Recommender): The broker that supplies the trustor with information about the trustee (interaction data). Its design varies by application and threat model — e.g., for communication [33], online-auction [36], etc.
Two common examples of reputation systems in software engineering are GitHub’s star system [37] and the Stack Overflow point-based system [38]. Both of these systems can be used by a trustor to quantify the number of users satisfied with a trustee’s projects and contributions [39]. As a result, they might influence which GitHub projects an engineer (trustor) may deem to be reliable [40], and which Stack Overflow answers may be trusted by their readers [41].
In the following sections, we introduce ARMS to formalize the concepts of an actor reputation metric system to promote cybersecurity within the OSS ecosystem.
# III. THREAT MODEL
# A. Threat Actors
The goal of ARMS is to provide project maintainers with measurements of the security expertise of prospective contributors or maintainers. ARMS operates within a context with three kinds of threat actors:
1) Inexperienced Contributors/Inadvertent Vulnerability: Contributors who lack sufficient security expertise — e.g., they are unfamiliar with standard security practices and tooling within the ecosystem — attempt to join or maintain OSS projects, potentially introducing vulnerabilities through mistakes [42], [43].
2) Reputation Spoofing: Malicious actors deliberately craft the appearance of security expertise to gain collaborator or maintainer status.
3) Impersonation: Impersonation occurs when a malicious actor gains control of a legitimate user’s account (e.g., via key compromise [44], [45]).
Our ARMS approach considers the first two classes of threat. The third class, impersonation threats, is out of scope — they undermine the assumption of stable identities necessary for a reputation-based system [46].
# B. Examples
We give examples of each kind of threat we outlined above.
1) Dexcom (Inadvertent Vulnerability): Dexcom is a medical device company whose products include continuous glucose monitors (CGMs) used by diabetics [47]. Their CGM products were the first to incorporate “smart” capabilities such as pushing health notifications to one’s smartphone. In 2019, Dexcom’s engineers made an error leading to a service outage, resulting in a lack of notifications; many were hospitalized and at least one death is attributable [48]. This was the second such outage in a 12-month window. Although we presume that Dexcom is not intentionally harming its customers, its engineers’ inability to sustain a safety-critical system suggests inadequate experience for this class of work.
2) XZ Utils Backdoor (Reputation Spoofing): In March 2024, a backdoor was discovered in XZ Utils, an open-source compression tool, which allowed attackers to gain root privileges and execute malicious code on affected systems [49]. The vulnerability was introduced by an actor who had built trust within the project through non-malicious contributions, and they were eventually promoted to co-maintainer [49]. In retrospect, this particular actor had several suspicious reputational signals, most notably having no accounts on other sites and no history of contributions to other projects.
3) ESLint Credential Compromise (Impersonation): ESLint, a widely used static analysis tool in the npm ecosystem for scanning JavaScript code, was compromised on July 12, 2018 [45]. Attackers gained access to a maintainer’s npm credentials and published malicious package updates to the npm registry [50]. Because the attacker used the identity of a reputable maintainer, a reputation system (which assumes stable identities) could not anticipate this attack.
Fig. 1: Overview of proposed ARMS system and context case study. Potential contributors (trustees), who may be malicious (Actor A), insufficiently skilled (Actor B), or genuine and capable (Actor C), express interest or submit change requests. The maintainer team (trustors) requests reputation information on these contributors. The ARMS system retrieves each contributor’s interaction history and quantifies it using the defined security signals and metrics (Interaction data formatter & security signal scoring). Next, the reputation calculator weights these signal values by package usage, community tenure, and centrality, then composites the results and compares them to ecosystem-wide benchmarks (Impact & Benchmark Scoring). Finally, each trustee’s reputation score and recommended action are provided to the maintainer team.
# IV. ARMS CONCEPTUAL MODEL
To formalize an OSS actor reputation system for GitHub, we propose a reputation system based on the reference model of Hendrikx et al. [35] described earlier, adapted to the OSS supply chain context. The following subsections outline our proposed system, define security metrics for an actor’s security reputation, and propose evaluation metrics to measure the effectiveness of these security metrics.
# A. System Overview
Our proposed system follows the three-element model of Hendrikx et al. (§II-C) comprising a trustor, a trustee, and a trust engine. In the open-source supply chain, the trustor and trustee already exist—e.g., the maintainer team serves as the trustor, and a potential contributor is the trustee. Our work focuses on operationalizing the trust engine component, which is currently absent from the open-source ecosystem.
We describe our proposed system and case study in Figure 1. Interaction history forms the core of our reputation computation, and in our system we focus specifically on security-related interactions defined by recommended security practices. Although some work has been done on trust establishment in OSS [51], the proposed frameworks focus on defining trust; our work operationalizes trust with a reputation system. In the next section, we define metrics to assess security interactions and history within a typical OSS ecosystem.
# B. Interaction Data – Security Signals and Metrics Definitions
Our system computes reputation from an actor’s historical interactions within the ecosystem. We categorize these interactions by security signals and quantify each signal using measurable metrics.
To define appropriate security signals and metrics, we consider: (1) alignment with widely accepted security recommendations, (2) the actor’s demonstrated adherence to good security practices in previous contributions and to significant projects, and (3) a history of non-malicious contributions.
From these considerations, we derive our security metrics from two kinds of sources:
1) Security standards and recommendations: We consulted frameworks including the SLSA security framework [57], the CNCF software supply chain security guidelines [58], the NIST SSDF security framework [59], NIST SP 800-204D [60], OpenSSF S2C2F [61], and the CIS Software Supply Chain Security Guide [62]. Common recommendations across these sources were used to ensure the selection of well-established security practices.
2) Available security tools in the OSS ecosystem: We identified security tools available through GitHub’s user interface and API, which reflect the security capabilities easily available to contributors on the platform.
We grouped the resulting metrics into seven categories, termed Security Signals, based on contributors’ security tool usage and vulnerability management practices. A summary appears in Table I.
Table I: Proposed security signals. Bolded signal categories are derived from security recommendations (e.g., NIST, CNCF, etc.). The proposed evaluation metrics in Table II offer ways to evaluate the signals below.
# C. Trust Engine – Reputation Computation
In this section, we outline the core functionality of the trust engine’s reputation score computation (see Figure 1). First, the interaction data formatter and security signal scorer extract each contributor’s ecosystem interaction history and quantify each signal according to the metrics in Table II. These base measurements capture a user’s adherence to the defined security signals and their contribution patterns.
Next, ARMS evaluates each contributor’s reputation. To account for the risk posed by a contributor’s activities within the ecosystem, we refine the initial signal scores by incorporating:
Impact Score (weighting signals W1–W3 in Table I):
1) Package Usage Weighting: Projects with no users (e.g., private repositories or those with zero forks/downloads) should not contribute to the user’s evaluation score, as they present minimal supply chain risk.
2) Community Tenure Weighting [51]: This ensures that newer users do not unduly inflate their reputation score, addressing concerns like those in the “XZ Utils” situation [63], where malicious activity was introduced by new members.
3) Centrality Score Weighting [18]: Centrality—an actor’s level of connectedness within the ecosystem based on contribution activities—can be crucial for scaling reputation. We suggest evaluating not only the number of connections (edges) but also the time taken to establish them.
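One way to combine the three impact weights is a capped linear blend, as in the sketch below; the normalization constants, weights, and function name are purely illustrative assumptions, not values proposed by this paper:

```python
def impact_weighted(signal_score, usage, tenure_days, centrality,
                    w_usage=0.4, w_tenure=0.3, w_centrality=0.3):
    """Scale a raw security-signal score by the W1-W3 impact factors,
    each normalized into [0, 1]."""
    usage_w = min(usage / 1000, 1.0)        # W1: forks/downloads, capped (illustrative)
    tenure_w = min(tenure_days / 365, 1.0)  # W2: ramps up over the first year
    impact = w_usage * usage_w + w_tenure * tenure_w + w_centrality * centrality
    return signal_score * impact            # W3: centrality assumed already in [0, 1]

impact_weighted(1.0, usage=0, tenure_days=30, centrality=0.1)
# unused repositories and a new account keep the score close to zero
```

Under this blend, a contributor with no users, no tenure, and no network connections contributes nothing to the reputation score, matching the rationale above.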
# Benchmark Score:
1) Combination of Multiple Security Metrics / Composite Scoring: A holistic evaluation will aggregate a user’s performance across various security metrics, offering a comprehensive view of their adherence to security practices. This approach minimizes false positives and negatives by drawing from multiple metrics, as seen in the OpenSSF Scorecard [64]. Each metric’s weighting will be based on its effectiveness analysis (§V-A).
2) Ecosystem Benchmarks: User performance will be benchmarked against the average or median reputation scores across the ecosystem to provide a relative measure of security adherence.
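A minimal sketch of composite scoring plus ecosystem benchmarking follows; equal signal weights stand in for the effectiveness-derived weights, and the z-score is one possible relative measure (all names and numbers are illustrative):

```python
from statistics import mean, stdev

def benchmark_score(signal_scores, ecosystem_scores):
    """Composite a user's per-signal scores with equal weights, then
    express the result as a z-score against the ecosystem distribution."""
    composite = mean(signal_scores)
    mu, sigma = mean(ecosystem_scores), stdev(ecosystem_scores)
    z = (composite - mu) / sigma if sigma else 0.0
    return composite, z

composite, z = benchmark_score([0.9, 0.7, 0.8], [0.4, 0.5, 0.6, 0.7, 0.5])
# a positive z marks above-average security adherence
```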
# V. DESIGN OF EXPERIMENTS FOR ARMS
In this section, we outline potential studies to establish and evaluate the feasibility of an actor reputation system, exemplified by ARMS, and general use of actor metrics to establish trust in software supply chain security. These complementary studies—primarily observational or retrospective—aim to validate and refine the proposed security-reputation metrics and assess the real-world impacts of deploying ARMS. We categorize these studies into two groups: (1) evaluating the effectiveness of the proposed security metrics, and (2) examining user behaviors.
# A. Security Metrics Effectiveness Evaluation – Quasi-Experimental Analyses
To evaluate the utility of the proposed security signals, we propose the following quasi-experimental studies to determine whether the signals reliably predict poor security practices.
1) Inter-metric Relationship Study: Compare OSS projects that adopted a given security practice (e.g., branch protection) to matched control projects that did not, using a difference-in-differences design. We will analyze the results with regression models incorporating project and time fixed effects and interaction terms for the paired metrics to isolate their combined impact on security outcomes.
• Data: Historical GitHub records for projects within the same domain, with similar size and activity levels.
• Outcomes: Changes in vulnerability incidence (Signals 1–2) and shifts in contributor behaviors (Signals 3–7).
• Ethics: Uses only publicly available data, avoiding additional data collection or privacy concerns.
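The difference-in-differences estimate underlying this design reduces to simple group means; the sketch below, with made-up quarterly vulnerability counts, shows the arithmetic, while the full study would use the fixed-effects regressions described above:

```python
def group_mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: the treated group's pre/post change minus the
    control group's, which removes time trends shared by both groups."""
    return ((group_mean(treated_post) - group_mean(treated_pre))
            - (group_mean(control_post) - group_mean(control_pre)))

# Hypothetical vulnerabilities per quarter around branch-protection adoption
effect = diff_in_diff(
    treated_pre=[4, 5, 3], treated_post=[2, 1, 2],
    control_pre=[4, 4, 5], control_post=[4, 5, 4],
)
# negative effect: fewer vulnerabilities per quarter associated with adoption
```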
2) Retrospective Incident Prediction Study: Identify projects that experienced known supply-chain incidents (e.g., npm/Event-Stream, ua-parser-js) and compare their pre-incident signal profiles against similar “clean” projects.
• Data: Metric values for each project’s maintainers before the incident.
• Analysis: Logistic regression (with interaction terms) to determine which signals best predict incident occurrence.
# B. User Behavioural Studies – Surveys & Interviews
To evaluate the practical usability of actor metrics in OSS supply chain security, we propose the following studies:
1) Collaborator Vetting Study: We propose recruiting active OSS maintainers for a vignette study [65]. This would present anonymized contributor profiles with varying metric scores, and measure time-to-decision (vet vs. reject) and how variations in the proposed ARMS signals influence these choices. This method would also gather qualitative feedback on signal clarity and usefulness.
2) Chilling Effect: A potential issue with establishing an actor reputation system is the possibility of a “chilling effect” [66], where contributors may hesitate or reduce participation if they know their interactions are being tracked and scored [66]–[68]. We propose a user survey study to assess contributors’ willingness to participate under different tracking scenarios (e.g., “all projects” vs. “critical only”), and to quantify perceived privacy risks and the impact of monitoring on contribution intent.
Ultimately, we do expect some chilling effects. To mitigate them, we recommend limiting the deployment of an ARMS approach to high-impact, security-sensitive OSS projects (e.g., the Linux kernel) rather than applying it to hobby or low-risk repositories. Doing this effectively requires a project importance score; OpenSSF’s criticality score [69] may be used or serve as a starting point. We recommend surveying OSS contributors to gauge a suitable threshold of criticality.
# C. Worked Examples
We illustrate how our proposed system could have worked in the examples described earlier (§III-B).
1) XZ Utils Attack: The timeline of events leading to the XZ Utils backdoor reveals several characteristics that map to our proposed security signals [70]:
1) Recent account: The attacker created a GitHub account in January 2021 and joined the XZ Utils project in October 2021—well within their first year of activity.
2) Limited public history: Prior to October 2021, their contributions were confined to private repositories.
3) Targeted feature change requests: Their first public change request focused on adding features to a small set of projects rather than fixing issues.
Under our framework, these traits would yield a low reputation score:
• Signal Metrics (Signals 1-7) penalize sparse or opaque contribution histories.
• Community tenure (Signal W2) reduces scores for newly created accounts.
• Centrality (Signal W3) remains low because the user’s contribution network is both recent and shallow.
2) Dexcom: The Dexcom engineers’ case revealed repeated failures that lasted for extended periods. In one incident, an issue persisted from November 28 to December 3, 2019, halting Dexcom systems and severely impacting availability. Under our framework, this would be reflected in lower scores for Signals 1 and 2 (time to fix vulnerabilities and severity levels), as recurrent downtime directly reduces the reputation of engineers responsible for critical systems.
3) ESLint Compromise: The 2018 ESLint compromise was traced to an attacker breaching a maintainer’s account, enabled by the absence of two-factor authentication. This type of attack cannot be modeled by reputation systems because the attacker took over a legitimate account without performing any genuine contributor actions or exhibiting the behavioral signals that these systems track. This is why we deem it out of scope in the system threat model (§III).
# VI. DISCUSSIONS AND FUTURE WORKS
# A. Threats to Validity
We begin by outlining some potential issues with our proposal:
1) False Positives and Negatives: There is a risk of misassigning reputation, resulting in inappropriate characterizations of users as more or less trustworthy. Our definitions and metrics do not account for all interactions that could reveal incompetence or malicious activity, especially non-artifact interactions such as social engagements, organizational roles, security communications, and feedback on code reviews. To mitigate this, we recommend a weighted combination of metrics, with each metric’s effectiveness considered in §V-A. Using ecosystem-wide averages and standard deviations can also help raise the threshold for issuing advisories based on user scores.
2) Insider Threats: As a special case of false negatives, we emphasize that any reputational system is ineffective for insider threats where trust is deliberately built over time. However, such attacks are costly for an adversary.
3) Defining Security Interactions: As stated, our work envisions operationalizing actor trust security metrics; however, we acknowledge the inadequacies of our proposed security metrics. We encourage further research in defining trust, especially concerning the interactions between security metrics.
4) Privacy Concerns: Making scores public could unfairly harm honest actors. To address this, we propose that scores remain private, accessible on a need-to-know basis. Advisories based on these scores would be shared only with maintainers of projects to which the user wishes to contribute.
5) Gameability of Proposed Metrics: Our security metrics, like any trust-based system, can be exploited. Users may act, individually or in collusion, to enhance their reputation. We mitigate this risk with time-weighted scoring and recommend incorporating human oversight (e.g., reporters and moderators) for nuanced judgments. However, we acknowledge that trust and safety issues plague all online platforms [71], and a reputation system would be no exception.
6) Possibility of Chilling Effect: Discussed in §V-B.
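Two of the mitigations above, the ecosystem-calibrated advisory threshold (point 1) and time-weighted scoring (point 5), can be sketched as follows. All function names, weights, and parameters here are illustrative assumptions, not a prescribed ARMS implementation:

```python
import math
import statistics

def combined_score(metric_values, weights):
    """Weighted combination of per-metric reputation scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[m] * metric_values[m] for m in weights) / total

def should_issue_advisory(user_score, ecosystem_scores, num_std=2.0):
    """Only flag a user whose combined score falls well below the
    ecosystem-wide mean, raising the bar for issuing advisories."""
    mean = statistics.mean(ecosystem_scores)
    std = statistics.stdev(ecosystem_scores)
    return user_score < mean - num_std * std

def time_weighted_score(events, now, half_life_days=180.0):
    """Exponentially decay older events so reputation must be sustained:
    `events` is a list of (day, score) pairs with score in [0, 1]."""
    lam = math.log(2.0) / half_life_days
    num = den = 0.0
    for day, score in events:
        w = math.exp(-lam * (now - day))
        num += w * score
        den += w
    return num / den if den else 0.0
```

The decay constant makes a burst of recent activity unable to cheaply outweigh a long, consistent history, which is the intuition behind resisting reputation-farming.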
# B. Evaluating Actor Intent
Our work assumes that actors are well-intentioned and attributes security failures to genuine contributors’ lack of expertise or mistakes; it does not account for malicious intent. However, some supply-chain incidents arise from malicious actors using techniques such as account spoofing or credential theft. Incorporating intent into reputation systems substantially increases their complexity. Our current security signals focus on expertise and behavior and do not capture actor intent. Future work should extend these metrics to infer intent—for example, by analyzing anomalous contribution patterns, unusual social-graph connections, or timing irregularities—ultimately creating a more comprehensive reputation model.
# C. Supporting Ecosystem Heterogeneity
Reputation systems fundamentally face actor-identification challenges [72], [73]. This issue is especially pronounced in open-source ecosystems, where contributors manage artifacts across multiple platforms—from source repositories to package registries. Although next-generation software-signing tools have improved artifact-to-actor verification [4], [74], [75], reliable cross-platform identification remains vulnerable to identity theft, impersonation, and other attacks. Consequently, ARMS requires stronger mechanisms to verify and vouch for identities across diverse environments.
As possible solutions, recent initiatives (e.g., CISA’s RFI for software identifier ecosystems [76]) propose establishing identities through multiple institutions—some designed to preserve privacy [77], [78]. Adapting these models across ecosystems introduces the dual challenges of federation—enabling actors in one ecosystem to be recognized in another—and roaming—allowing actors to transfer their identifier and reputation between providers. These challenges are reminiscent of those addressed in other federation protocols (e.g., OIDC [79] and Distributed Identities [80]). Thus, for an actor reputation system like ARMS, institutions across ecosystems must collaborate and share reputation information whenever a software artifact from one ecosystem A is used in another ecosystem B.
# D. Social & Process Signals: Beyond Artifact Contributions
Our current security (and weightage) signals (Table I) focus exclusively on an actor’s artifact contributions. As noted in our approach criticisms, we have not yet captured an actor’s interactions with other users—such as code-review feedback, issue discussions, and peer endorsements—which could provide valuable reputational insights.
We also recommend developing process-based metrics to evaluate informal OSS practices. Process-based methods [81] are most effective in structured development environments. However, in open-source ecosystems—where formal processes are often absent [82]—proxy process metrics can still gauge adherence to security best practices. For example, one could measure the frequency of explicit threat-model or design-review discussions in issues or change requests, track the presence and pass rate of automated security scans (e.g., SAST, SBOM generation) in continuous integration workflows, and monitor the regularity of dependency updates following published vulnerability disclosures. By combining artifact, social, and process signals, ARMS can deliver a more holistic reputation assessment for contributors in open-source ecosystems.
# E. Ethical and Privacy Considerations
A system that records actor activities inevitably raises ethical and legal concerns. Actors must provide informed consent before their ecosystem activities are disclosed to projects they wish to join. At the same time, project owners need actionable insights into a contributor’s security posture. Balancing these interests requires privacy-preserving disclosure.
We propose that (i) contributors opt in to sharing reputational data, (ii) only aggregated or pseudonymized metrics (e.g., differential-privacy noise or percentile rankings) are revealed to maintainers, and (iii) contributors receive dashboards that show exactly what information will be shared. Compliance with regulations such as GDPR [83] should guide data-retention periods, user-deletion requests, and audit logging. Future prototypes should incorporate cryptographic approaches—such as zero-knowledge proofs or secure multiparty computation—to prove adherence to security metrics without exposing raw activity logs. Key directions include designing and evaluating privacy-preserving reputation protocols, conducting user studies to gauge contributor consent thresholds and maintainer information needs, and integrating regulatory compliance checks and automated audit trails into the prototype.
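As an illustration of the aggregation-with-noise idea, the following sketch combines the standard Laplace mechanism with percentile reporting. This is one possible instantiation, not a vetted privacy design, and the function names are ours:

```python
import math
import random

def dp_noisy_score(score, epsilon=1.0, sensitivity=1.0):
    """Release a score with Laplace noise calibrated to sensitivity/epsilon
    (the standard Laplace mechanism for differential privacy)."""
    b = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return score + (-b) * sign * math.log(1.0 - 2.0 * abs(u))

def percentile_rank(score, population):
    """Reveal only where a contributor falls within the ecosystem,
    not the raw score itself."""
    below = sum(1 for s in population if s < score)
    return 100.0 * below / len(population)
```

A maintainer would then see, e.g., "this contributor is around the 50th percentile" rather than a raw activity-derived number.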
# F. Why Focus Reputation on Cybersecurity?
We have situated the ARMS approach within the context of cybersecurity. The ARMS metrics can, of course, be extended to infer properties of engineers other than their cybersecurity expertise. We suggest that doing so for functional properties (i.e., input-output behaviors that can be validated through testing) may be unnecessary — standard engineering practice calls for code contributions to be accompanied by adequate tests, so functional properties can be assured through reference to the test results. Not so for non-functional properties, of which cybersecurity is one of many. It is for such properties that reputational measures may be more useful. For example, Cramer et al. recently reported that trust and safety defect repairs in social media platforms are rarely accompanied by automated tests, in part because validation is apparently performed through use-case analysis rather than through software behavioral analysis [71]. Similarly, behaviors for regulatory compliance such as GDPR are challenging to validate [11]. Since current state-of-the-art software validation techniques cannot automatically conclude whether a system provides non-functional properties such as security, trust-and-safety, or regulatory compliance — at least not in a cost-effective manner — we think that ARMS approaches may be a suitable complementary measure of correctness.

Many critical information technology and cyber-physical systems rely on a supply chain of open-source software projects. OSS project maintainers often integrate contributions from external actors. While maintainers can assess the correctness of a change request, assessing a change request's cybersecurity implications is challenging. To help maintainers make this decision, we propose that the open-source ecosystem should incorporate Actor Reputation Metrics (ARMS). This capability would enable OSS maintainers to assess a prospective contributor's cybersecurity reputation.
To support the future instantiation of ARMS, we identify seven generic security signals from industry standards, map concrete metrics from prior work and available security tools, describe study designs to refine and assess the utility of ARMS, and finally weigh its pros and cons.
# 1 Introduction
In the field of artificial intelligence, the pursuit of true intelligence in large language models (LLMs) has prompted researchers to look to biology for inspiration [Gutiérrez et al., 2024, Wu et al., 2025]. Just as organisms gradually accumulate knowledge through experience over time, LLMs need to possess long-term memory (LTM) capabilities to achieve self-evolution and strategic optimization in ever-changing environments [Shan et al., 2025]. Moreover, as LLMs are increasingly applied in scenarios such as multi-session dialogue [Zhang et al., 2025], task planning, and lifelong learning, the need for models to retain, update, and leverage prior knowledge dynamically becomes critical. Without robust LTM, AI systems are limited to short-term reasoning and static knowledge use, failing to achieve sustained, autonomous intelligence.
Given the importance of LTM in enabling advanced behaviors, it is crucial to evaluate these capabilities reliably and systematically. However, current benchmarks face challenges in adequately evaluating LTM capabilities along two critical dimensions: 1) Knowledge Retention: the capacity to absorb, integrate, and preserve information across extended texts, maintaining contextual continuity beyond mere fact retrieval or local recall [Guo et al., 2025, EducateMe, 2024]; and 2) Sequential Reasoning: the ability to understand and reason about sequences of events, which involves inferring latent state changes, causal dependencies, and goal shifts across complex, dynamic, multi-turn interactions rather than simply locating pre-stated answers within static text. Beyond these two dimensions, existing benchmarks also lack 3) Flexibility: they are often difficult to adapt and evaluate across different contexts.
To address these limitations, we propose a dynamic benchmark framework inspired by interactive fiction games, where LLMs engage in multi-turn branching narratives that simulate long-term sequential decision-making. In our benchmark, the model continuously receives scene descriptions, dialogues, and options, and must make choices based on its understanding. We design two modes: Immediate Feedback informs the model as soon as it makes a wrong choice, while Self Recovery allows the story to continue toward a failure ending without any hint, requiring the model to identify and revise past decisions on its own. Through this setup, our benchmark effectively evaluates the model’s ability to remember key information (knowledge retention) and reason over event sequences (sequential reasoning). Furthermore, our benchmark demonstrates excellent flexibility in accommodating diverse scenarios.
To further illustrate the advantages of our benchmark, we comprehensively evaluate the differences between existing benchmarks and ours (Table 1) based on the following aspects:
Knowledge Retention. Long-context (L-ctx) evaluates whether the task requires long-term memory of earlier context to succeed. Continuity (Conty) measures whether the benchmark requires the model to maintain a coherent understanding of entities, events, and their relationships across interactions.
Sequential Reasoning. Complexity (Comp.) indicates whether the benchmark features nonlinear reasoning tasks, where multiple interdependent events or decisions must be jointly considered, requiring the model to reason beyond sequential context. Dynamics (Dyn.) refers to whether the model’s actions or responses influence future tasks or states in the environment. Multi-turn (M-turn) evaluates whether the task involves multiple sequential interactions, where each turn is temporally connected to the previous ones.
Flexibility. Multi-solution (M-sol) indicates whether the benchmark includes tasks or questions with multiple valid answers or approaches, rather than a single fixed solution. LTM+STM evaluates the combined usage of long-term memory (LTM) and short-term memory (STM), i.e., whether the task requires reasoning over both recent and distant information.
Table 1: Comparison of Existing Benchmarks across Multiple Dimensions.
To validate the effectiveness of our benchmark, we conduct systematic evaluations of four advanced LLMs. Each model is tested under both evaluation modes across 80+ branching story paths, with performance measured in terms of correct decision rates, task success counts, and related metrics. Results show that while GPT-4o [OpenAI, 2024] and Claude 3.5 Sonnet [Anthropic, 2024] demonstrate relatively stronger long-term knowledge retention and sequential reasoning, all models struggle with self-recovery and fail to consistently revise earlier mistakes. In-depth failure analysis further reveals distinct memory bottlenecks, which existing benchmarks could be enhanced to expose. These findings confirm the utility of StoryBench in capturing LTM deficiencies, offering a more granular and realistic assessment than prior benchmarks.
Our contributions are as follows:
• A Dynamic Multi-turn Evaluation Framework: We introduce a novel dynamic multi-turn benchmark inspired by interactive fiction. Through branching narratives and two distinct modes (Immediate Feedback, Self Recovery), it assesses models’ knowledge retention and sequential reasoning, while offering high flexibility across various scenarios.
• A Novel Dataset for Long-Term Memory Evaluation: We construct an annotated interactive fiction-based dataset to test LTM. It features cohesive narrative continuity, dynamic branching, complex interdependencies, and multi-solution mechanisms to emulate realworld memory challenges.
• Reliable and Robust Experimental Analysis: To ensure the credibility of our findings, we perform repeated trials, enhancing statistical robustness and supporting meaningful performance comparisons.
# 2 Related Work
# 2.1 Strategies and Techniques for Enhancing Long-Term Memory
Transformer models face inherent limitations in processing long sequences due to the quadratic complexity of self-attention mechanisms. To address these challenges, various architectural innovations have been proposed, including sparse attention mechanisms like Reformer [Kitaev et al., 2020], Longformer [Beltagy et al., 2020], Sparse Transformer [Child et al., 2019], and Sparse Flash Attention [Pagliardini et al., 2023], which reduce the number of token pairs in attention computation to improve speed and memory usage. Enhancements such as dilated convolution and cascading attention [Ding et al., 2023], sparse attention and HSR data structures [Chen et al., 2024], and Ring Attention [Liu et al., 2023] aid in handling long-range dependencies, while performance optimizations like FlashAttention [Dao et al., 2022] and PagedAttention [Kwon et al., 2023] further enhance efficiency through techniques like tiling, paging and flexible KV cache sharing. Context expansion techniques via recurrence, such as Transformer-XL [Dai et al., 2019], enable the retention and reuse of longer context windows and optimizations in Transformer-XL, such as reducing the number of long-range memories and limiting attention range in lower layers [Rae and Razavi, 2020], can achieve comparable or better performance. New architectures like Mamba [Gu and Dao, 2024] and RWKV [Peng et al., 2023] explore alternatives to traditional attention mechanisms. Beyond these architectural improvements, recent research has also focused on explicit memory mechanisms to address the limitations of fixed-size context windows and improve information retention. Memory modules like MemoryBank [Zhong et al., 2023] and Retrieval-Augmented Generation (RAG) leverage external storage and dynamic retrieval to enhance long-term memory and knowledge utilization in language models. 
Parameter-efficient fine-tuning techniques such as LoRA [Han et al., 2024] embed task-specific adjustments via low-rank matrices, preserving information without altering the full model architecture. Hybrid memory systems like MemGPT [Packer et al., 2023], Mem0 [Chhikara et al., 2025], and MemoryScope [Yu et al., 2024] integrate various memory modules and interfaces to enhance long-term retention and retrieval, while recursive approaches like Gist Memory [Lee et al., 2024] compress and retain key context fragments. These advancements collectively address both retention and adaptive management of information, enabling more effective long-term memory capabilities in language models.
# 2.2 Benchmarks for Evaluating Long-Term Memory
Recent evaluations of large language models’ (LLMs) long-context and long-term memory capabilities have primarily relied on dedicated benchmarks. Existing benchmarks use prefilled contexts of varying lengths, such as up to 10k tokens in LongBench [Bai et al., 2024], ≈24k in LooGLE [Li et al., 2023], up to 100k in InfiniteBench [Zhang et al., 2024], and even longer contexts (10 million tokens or more) in BabiLong [Kuratov et al., 2024]. Benchmarks like ZeroSCROLLS [Shaham et al., 2023], L-Eval [An et al., 2024], and LongBench diversify task types and cover various domains and sequence lengths. ZeroSCROLLS focuses on zero-shot evaluation, L-Eval provides diverse long-document tasks, and LongBench spans six major task categories across multiple languages. Synthetic retrieval-focused setups like Needle-in-a-Haystack [Kamradt, 2023] are popular for their controllability, but concerns remain about their ecological validity due to overly repetitive haystacks. Other benchmarks like RULER [Hsieh et al., 2024] and BAMBOO [Dong et al., 2024] assess reasoning under long contexts. For a holistic understanding of long texts, ChapterBreak [Sun et al., 2022] has been proposed. For long-range discourse modeling in multi-session conversations, Multi-Session Chats [Xu et al., 2022] has been introduced. Agent-based evaluations such as AgentBench [Liu et al., 2024], WebArena [Zhou et al., 2023], and LLF-Bench [Cheng et al., 2024] offer dynamic environments for long-term interactions, focusing on multi-turn reasoning, real-world task completion, and learning from language feedback, respectively. While most of these works evaluate functional behavior, few explicitly isolate long-memory capabilities. A notable exception is the LTM benchmark [Castillo-Bolado et al., 2024], which targets long-term memory in multi-turn conversations.
However, existing benchmarks still face challenges in several aspects, especially in the evaluation of knowledge retention, sequential reasoning, and flexibility.
# 3 StoryBench
# 3.1 Motivation and Overview
Existing benchmarks apply static tasks (factual recall or isolated chain-of-thought tasks) that do not fully capture the dynamic nature of real-world interactions [Chang et al., 2024], suggesting there is room for improvement in evaluating LTM abilities along two critical dimensions, knowledge retention and sequential reasoning, as well as in the benchmarks’ own flexibility. These limitations result from an inability to simulate the dynamic, sequential nature of real-world decision-making, where memory must be actively updated, integrated with new information, and adapted to evolving contexts through multi-turn interactions.
To address this, we introduce StoryBench. The core design principle of StoryBench is to conduct memory stress-tests within a dynamic, sequentially structured environment grounded in multi-turn interactive-fiction gameplay. Unlike traditional benchmarks that rely on static inputs or isolated memory recalls, StoryBench simulates realistic decision-making by embedding models in evolving narratives: each choice compels models both to integrate information across short-term and long-term contexts (knowledge retention) and to track changing relationships between story elements and resolve contradictions arising from prior decisions across multi-turn interactions (sequential reasoning). In summary, StoryBench provides a more comprehensive and dynamic framework for evaluating long-term memory capabilities, effectively enhancing the assessment of knowledge retention and sequential reasoning while improving the flexibility of the evaluation process.
# 3.2 Dynamic Narrative and Multi-Turn Decision-Making
StoryBench leverages the inherently dynamic and multi-turn nature of interactive fiction games to assess memory in realistic decision-making trajectories. Each run through the benchmark involves a sequence of interconnected choices, where past actions shape future outcomes. The model must continuously track character states, causal dependencies, and branching outcomes over extended contexts. This setup naturally embodies several key properties:
• Long-term: Many decisions require recalling events or facts introduced a few turns earlier. Concrete examples of such dependencies are provided in Section 4.2.
• Continuity: The benchmark follows a coherent plot, ensuring semantic continuity across interactions.
• Complex: Consecutive decisions are not isolated, but closely linked. One choice may directly affect the conditions or outcomes of several subsequent ones. We provide detailed illustrations of such dependencies in Section 4.2.
• Dynamic: Incorrect or suboptimal decisions dynamically alter the story path or trigger failure endings, requiring the model to adapt in real-time.
• Multi-turn: The task unfolds over many turns, demanding sustained memory and reasoning across sequentially extended interactions.
• Multi-solution: Many decision points allow for multiple acceptable paths, rather than a single fixed correct answer, better reflecting the uncertainty and variability of real-world scenarios. Specific examples demonstrating the multi-solution nature of the benchmark are provided in Section 4.2.
# 3.3 Two Task Modes for Evaluating LTM
To explore different aspects of memory utilization, we design two complementary task modes. The dual-mode setup allows StoryBench to probe both short-horizon reactive memory and long-horizon strategic recall (LTM), offering a comprehensive view of how models navigate extended, decision-heavy interactions and revealing not just whether a model can remember facts, but whether it can strategically reason across time, self-correct, and navigate branching storylines over extended sequences.
Immediate Feedback: Designed to evaluate a model’s responsiveness to error signals, this mode simulates situations where feedback is available at each turn. After a wrong choice, the model is told the outcome and prompted to retry (Figure 1), allowing us to examine its short-term adjustment ability and interactive learning dynamics.
Figure 1: Immediate Feedback. The model is informed immediately after each incorrect choice and prompted to retry until the correct option is selected.
Self Recovery: This mode suppresses feedback, mimicking scenarios where incorrect decisions propagate through multiple scenes, potentially ending the game. The model is then challenged to trace back to the error’s origin and recover (Figure 2). This stresses long-term causal reasoning and memory retention under uncertainty.
Figure 2: Self Recovery. An incorrect choice leads to a failure ending either immediately or after several scenes. The model is then asked to identify the earliest point in the story where it believes the incorrect decision occurred and to attempt recovery from that point.
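A minimal sketch of how the Immediate Feedback loop might be driven. The `model(context, options)` interface and the choice-tuple format are hypothetical illustrations, not StoryBench’s actual API:

```python
def run_immediate_feedback(model, choices, max_retries=10):
    """Drive one trajectory in Immediate Feedback mode.
    model(context, options) -> chosen option (hypothetical interface);
    choices is a list of (prompt, options, correct_option) tuples.
    Returns the retry count logged at each choice point."""
    context, retries = [], []
    for prompt, options, correct in choices:
        context.append(prompt)
        r = 0
        # On a wrong pick, tell the model and let it retry the same point.
        while model(context, options) != correct and r < max_retries:
            context.append("That choice failed; try again.")
            r += 1
        retries.append(r)
        context.append("chose: " + str(correct))
    return retries
```

Self Recovery would differ only in that wrong picks are silently accepted and the harness later asks the model to name the earliest faulty step.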
# 3.4 Tailored Metrics for Assessing LTM Models
To comprehensively evaluate long-term memory (LTM) capabilities in language models, StoryBench introduces a set of targeted metrics covering two essential cognitive dimensions: knowledge retention and sequential reasoning.
We define a decision sequence $\{c_1, c_2, \ldots, c_T\}$, where $c_t \in \{0, 1\}$ denotes whether the model selected the correct option (1) or not (0) at step $t$.
# 3.4.1 Metrics for Knowledge Retention
• Overall Accuracy (Overall Acc): The average correctness across all decisions, measuring how consistently the model maintains relevant knowledge and narrative coherence:
$$
\operatorname{Accuracy}_{\mathrm{overall}} = \frac{1}{T} \sum_{t=1}^{T} c_t .
$$
• First-Try Accuracy (First-Try Acc): The proportion of decision points at which the model selected the correct option on its first attempt. Let $f_t \in \{0, 1\}$ be 1 if the model is correct on the first try at step $t$; then:
$$
\operatorname{Accuracy}_{\text{first-try}} = \frac{1}{T} \sum_{t=1}^{T} f_t .
$$
• Longest Consecutive Correct Sequence (Longest Corr): The length of the longest contiguous subsequence of correct decisions:
$$
\mathrm{LongestCorr} = \max_{1 \leq i \leq j \leq T} \left( j - i + 1 \;\middle|\; c_k = 1 \ \forall k \in [i, j] \right).
$$
This reflects the model’s ability to sustain contextual consistency over extended intervals, though less critical than the above metrics.
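The three retention metrics above can be computed directly from a logged trajectory. A minimal sketch (the 0/1 list encodings are illustrative):

```python
def retention_metrics(decisions, first_try):
    """Knowledge-retention metrics for one trajectory.
    decisions[t] = 1 if step t was eventually answered correctly, else 0.
    first_try[t] = 1 if step t was correct on the first attempt, else 0.
    Returns (Overall Acc, First-Try Acc, Longest Corr)."""
    T = len(decisions)
    overall = sum(decisions) / T
    first = sum(first_try) / T
    longest = run = 0
    for c in decisions:
        # Extend the current correct run, or reset it on an error.
        run = run + 1 if c == 1 else 0
        longest = max(longest, run)
    return overall, first, longest
```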
# 3.4.2 Metrics for Sequential Reasoning
• Accuracy by Difficulty (Easy/Hard Acc): To account for varying levels of memory and reasoning demand, we classify decisions into easy and hard categories. A decision is labeled as hard if it requires recalling information from a distant context, tracking latent state changes, or performing multi-step sequential reasoning; otherwise, it is considered easy. Let $\mathcal{E}_t$ and $\mathcal{H}_t$ denote the easy and hard decision sets up to step $t$ (including retries); then:
$$
\mathrm{Accuracy}_{\mathrm{easy}}^{(t)} = \frac{1}{|\mathcal{E}_t|} \sum_{i \in \mathcal{E}_t} c_i , \quad \mathrm{Accuracy}_{\mathrm{hard}}^{(t)} = \frac{1}{|\mathcal{H}_t|} \sum_{i \in \mathcal{H}_t} c_i .
$$
These metrics assess how well the model adapts to sequentially distributed and cognitively demanding decisions.
• Retry Count: Let $r_t$ denote the number of retries required before reaching a correct decision at step $t$. The total number of retries across the trajectory is:
$$
\mathrm{Retry}_{\mathrm{total}} = \sum_{t=1}^{T} r_t .
$$
• Max Error per Choice (Max Err/Choice) and Thresholded Error Count: These metrics capture the worst-case and accumulated difficulty for the model in terms of repeated failures:
$$
\mathrm{MaxError} = \max_{1 \leq t \leq T} r_t , \quad \mathrm{ErrorCount}_{\geq r_{\mathrm{thres}}} = \sum_{t=1}^{T} \mathbb{I}(r_t \geq r_{\mathrm{thres}}) ,
$$
where $\mathbb{I}(\cdot)$ is the indicator function and $r_{\mathrm{thres}}$ is a predefined retry threshold (e.g., 9 in our experiments).
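The retry-based metrics and the difficulty split can likewise be computed from per-step logs. A sketch, with the list encodings again illustrative:

```python
def reasoning_metrics(retries, hard_mask, decisions, r_thres=9):
    """Sequential-reasoning metrics for one trajectory.
    retries[t]   = number of retries before the correct choice at step t.
    hard_mask[t] = 1 if step t is labeled hard, else 0.
    decisions[t] = 1 if step t was answered correctly, else 0.
    Returns (Retry_total, MaxError, ErrorCount >= thres, easy acc, hard acc)."""
    total = sum(retries)
    max_err = max(retries)
    err_count = sum(1 for r in retries if r >= r_thres)
    easy = [c for c, h in zip(decisions, hard_mask) if not h]
    hard = [c for c, h in zip(decisions, hard_mask) if h]
    acc_easy = sum(easy) / len(easy) if easy else None
    acc_hard = sum(hard) / len(hard) if hard else None
    return total, max_err, err_count, acc_easy, acc_hard
```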
Finally, while not directly measuring memory accuracy, two auxiliary metrics provide additional perspective on the model’s efficiency in handling long-horizon tasks: Runtime Cost reflects the inference efficiency of the memory system, while Token Consumption (Token Cons) indicates the model’s reliance on contextual information.
Together, these metrics form a multi-faceted evaluation framework that jointly targets both the persistence of stored information and the model’s ability to apply it dynamically within complex, sequentially structured environments. This ensures that memory is not only retained but also meaningfully used to navigate and reason through realistic multi-turn interactions.
# 4 Dataset Construction
# 4.1 Overview
To evaluate long-term memory (LTM) capabilities of large language models (LLMs), we construct a narrative dataset based on the interactive fiction game The Invisible Guardian, encompassing 311 scene nodes and 86 choice nodes as captured in our structured JSON format.
We chose to base our dataset on an interactive fiction game rather than synthetic or real-world data for several reasons. First, publicly available benchmark test cases may occasionally be included in LLM pre-training data [Liu et al., 2024]; to mitigate potential data overlap, we independently constructed a dataset from an interactive fiction game. Second, synthetic data is often overly simplistic and lacks the nuanced coherence of real human narratives [Hao et al., 2024]. It relies on predefined templates, resulting in repetitive scenarios that fail to capture the complex interdependencies crucial for evaluating long-term reasoning. In contrast, the interactive fiction game The Invisible Guardian offers a rich, evolving storyline that naturally tests long-term dependencies. Third, real-world data is messy and difficult to control [Xie et al., 2025, Behr et al., 2025]. It is influenced by numerous external factors, making it hard to isolate causal relationships and define clear “success” or “failure” paths. The structured and controlled environment of an interactive fiction game provides a clear framework for evaluating long-term memory and decision-making in a repeatable manner.
Our design incorporates several distinctive features for evaluating LTM. First, unlike conventional QA or dialogue datasets that consist of isolated or short-context samples, our dataset presents a continuous and evolving story world that unfolds over multiple interactive turns, offering a naturalistic setting for evaluating long-horizon reasoning. Second, many long-term choices depend on events or facts introduced several turns earlier, thereby testing models’ long-term dependency tracking. Third, the story dynamically evolves based on the model’s choices, allowing branching into different paths, including success or failure endings. Fourth, the benchmark reflects realistic decision-making complexity: consecutive choices are often interdependent, requiring models to maintain logical consistency across transitions. Finally, the dataset is multi-solution: multiple choice paths may lead to successful conclusions, emphasizing adaptability rather than rigid answer matching.
# 4.2 Structural Organization
The dataset is organized as a directed acyclic graph (DAG) composed of two types of nodes: scene nodes, which represent narrative fragments, and choice nodes, which define branching decision points. Edges denote transitions between these nodes, allowing non-linear progression through the story. This organization captures the story’s dynamic, interactive nature, enables clear tracing of causal dependencies, and allows flexible, nuanced evaluation of LTM in knowledge retention and sequential reasoning.
Figure 3: Four typical patterns illustrating dataset structure complexity.
To illustrate the complexity and diversity of our dataset’s structure, we categorize representative graph patterns in Figure 3. These include (a) linear chains of scenes, testing narrative understanding and short-range memory; (b) long-term dependencies, where early events influence distant outcomes; (c) clusters of interdependent decisions, reflecting complex causal reasoning; and (d) multi-solution branches, where multiple paths can reach valid endings.
# 4.3 Data Source and Annotation Process
We construct our dataset from the interactive fiction game The Invisible Guardian, covering the game's prologue through Chapter 5 to date. Manual annotation preserves the game's branching logic and causal relationships, ensures chronological ordering with memory checkpoints, and adds metadata on transitions, dynamics, and ethics to retain the sequential depth needed for evaluating LLMs' long-term reasoning. All content is meticulously transcribed from the original game, encompassing dialogues, narrative descriptions, character interactions, and player decision points, with each entry structured as a JSON object annotated with granular details according to its type. Scene nodes (311 entries) include unique identifiers, location, characters with descriptive attributes, sequential dialogues with speaker labels, and, where applicable, flags marking narrative endings (Figure 4). Choice nodes (86 entries) feature unique identifiers, decision context descriptions, and branching options with distinct IDs and text (Figure 5).
id: 0
type: scene
description: Prologue - Returning to the Country. Upon arriving in Shanghai, the first thing you do is to visit your mentor, Mr. Fang Hanzhou.
location: Shanghai, Mr. Fang Hanzhou's home
characters: name: Fang Min, description: A student at Jiren University, daughter of mentor Fang Hanzhou, and your best friend from your school days.
dialogues:
speaker: Fang Min, content: Xiao Tu?
speaker: Xiao Tu, content: Fang Min... Oh, is the teacher here?
speaker: Fang Min, content: You... you come in first, I'll go call my father.
speaker: Fang Hanzhou, content: Xiao Tu, you scoundrel! Do you still have the nerve to come back?
speaker: Xiao Tu, content: Teacher Fang, I...
speaker: Fang Hanzhou, content: What? After you were imprisoned, the student union leaders were persecuted soon after. Tell me! Did you betray them?
id: choice_1
type: choice
choice_text: Mentor's Questioning
branches:
branch_id: 1-1
branch_text: It wasn't me, teacher! There must be another explanation!
branch_id: 1-2
branch_text: Maintain Silence
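The flattened example above corresponds to JSON entries along the following lines. This is a sketch: the field names follow the dataset description in Sec. 4.3, but the exact released schema may differ, and long strings are abbreviated.

```python
# Two entries rendered as JSON objects, mirroring the example above
# (field names inferred from the dataset description; abbreviated for space).
import json

scene = {
    "id": 0,
    "type": "scene",
    "description": "Prologue - Returning to the Country. ...",
    "location": "Shanghai, Mr. Fang Hanzhou's home",
    "characters": [
        {"name": "Fang Min",
         "description": "A student at Jiren University, daughter of mentor Fang Hanzhou."},
    ],
    "dialogues": [
        {"speaker": "Fang Min", "content": "Xiao Tu?"},
        {"speaker": "Xiao Tu", "content": "Fang Min... Oh, is the teacher here?"},
    ],
}

choice = {
    "id": "choice_1",
    "type": "choice",
    "choice_text": "Mentor's Questioning",
    "branches": [
        {"branch_id": "1-1",
         "branch_text": "It wasn't me, teacher! There must be another explanation!"},
        {"branch_id": "1-2", "branch_text": "Maintain Silence"},
    ],
}

# Entries round-trip through JSON, matching the "structured as a JSON
# object" storage described in Sec. 4.3.
restored = json.loads(json.dumps([scene, choice], ensure_ascii=False))
```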
# 5 Experiments & Results
# 5.1 Experimental Setup
We conduct experiments on four representative foundation models: Doubao 1.5-pro-256k [ByteDance, 2025], GPT-4o [OpenAI, 2024], Claude 3.5 Sonnet [Anthropic, 2024], and Deepseek-R1 [DeepSeekAI et al., 2025]. These models are chosen for both their broad real-world usage and their competitive performance. Doubao 1.5-pro-256k excels at handling extremely long contexts with its 256k-token support, making it well suited to tasks requiring extensive context retention. GPT-4o, a leading closed-source commercial model, demonstrates strong language understanding and reasoning abilities. Claude 3.5 Sonnet excels in long-context understanding and knowledge reasoning (supporting 200k+ tokens), maintaining stable performance in long-text reasoning and structural analysis tasks. Deepseek-R1 is trained with pure reinforcement learning, which gives it excellent logical reasoning and structured thinking capabilities; it shows strong performance in multi-step reasoning and planning tasks. Their diverse features make them well suited for evaluating long-term memory across different technical approaches and application scenarios. While several memory-augmented approaches [Chhikara et al., 2025, Yu et al., 2024] adopt RAG-style architectures or external memory buffers, we exclude them from our evaluation because their memory utility centers on retrieving isolated factual content, whereas StoryBench emphasizes long-term sequential reasoning, where memory must support inference, self-correction, and causal tracking.
For each of the two task modes in StoryBench, we run 10 trials per model. Inputs are carefully formatted to encourage structured reasoning, and we adopt a Chain-of-Thought (CoT) prompting strategy to stimulate stepwise deliberation. In Immediate Feedback mode (results in Table 2), we observe that GPT-4o is more sensitive to content-filtering issues (e.g., mentioning weapon-related terms) and frequently interrupts completion due to server overload. To ensure smooth evaluation, we filter potentially problematic vocabulary and limit single-turn inputs to 5,000 tokens for GPT-4o. In Self Recovery mode, models often repeatedly select the same wrong option more than ten times without real-time feedback, stalling the task; we therefore implement a soft intervention that reveals the correct answer once a model has failed at the same decision point for nine consecutive attempts. In the initial evaluation phase (results in Table 3), we retain the original unfiltered dataset and deliberately remove token limits to simulate high-pressure, long-horizon conditions, then conduct five trials. The performance of all models decreases significantly, reflecting the intrinsic difficulty of the task. In response, we launch a second phase of five-trial experiments (Table 4) with improved input handling: sensitive vocabulary is filtered and a 5,000-token per-turn limit is applied.
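The soft-intervention rule for Self Recovery mode can be sketched as follows. The threshold of nine consecutive failures comes from the paper; the function and variable names are ours, and `model_choose` stands in for an actual LLM call.

```python
# Sketch of the Self Recovery soft intervention: after nine consecutive
# failures at a single decision point, the correct answer is revealed so
# the run can continue. (Names are ours, not the authors' harness.)
FAIL_THRESHOLD = 9

def run_decision_point(model_choose, correct_option, options):
    consecutive_failures = 0
    while True:
        pick = model_choose(options)
        if pick == correct_option:
            return pick, consecutive_failures, False   # solved on its own
        consecutive_failures += 1
        if consecutive_failures >= FAIL_THRESHOLD:
            return correct_option, consecutive_failures, True  # revealed

# A stubborn mock model that always repeats the same wrong option.
stubborn = lambda options: options[0]
pick, fails, revealed = run_decision_point(stubborn, "1-2", ["1-1", "1-2"])
```

The returned flag distinguishes genuine self-correction from assisted completion, which is exactly what the "Number of Choices Reaching Error Threshold" metric in Sec. 5.2.2 counts.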
# 5.2 Main Results of Long-Term Memory Performance
Table 2: Performance of different models (Immediate Feedback).
# 5.2.1 Model Analysis
To better understand the performance differences among models, we analyze five core metrics illustrated in Figure 6. Overall Accuracy and First-Try Accuracy reflect knowledge retention, capturing the model’s ability to maintain consistent and contextually grounded responses across extended interactions. Hard Accuracy and Retry Count assess sequential reasoning, as they target the model’s capacity to navigate complex, dynamic, and multi-step decision paths involving long-range dependencies. Success Count captures overall task-completion ability. Among all models, Doubao 1.5-pro achieves the highest scores on knowledge-related metrics such as Overall Accuracy and First-Try Accuracy, suggesting strong knowledge retention: Doubao effectively absorbs and integrates contextual information across extended texts. However, long-term memory evaluation must prioritize not local accuracy but the ability to complete extended decision paths; if a model cannot complete the story chain, high local accuracy carries little evaluative significance. Doubao’s Success Count is significantly lower than Claude 3.5 Sonnet’s, indicating that despite its solid knowledge base it often "dies in the details" on complex reasoning chains and long-term interactive tasks. In contrast, Claude 3.5 Sonnet maintains a solid balance: it trails slightly in accuracy but excels in completion, achieving the highest Success Count. This suggests Claude is more robust in multi-turn sequential reasoning, a critical factor in long-term memory evaluation.
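The five metrics above can be computed from per-trial decision logs roughly as follows. This is our sketch; the paper's exact operational definitions may differ, and the log format shown is an assumption.

```python
# Sketch (ours) of computing the five core metrics from per-trial logs.
# Each decision record notes whether the model was eventually correct,
# how many retries it needed, and whether the point is labeled hard.

def summarize(trials):
    decisions = [d for t in trials for d in t["decisions"]]
    hard = [d for d in decisions if d["hard"]]
    return {
        "overall_accuracy": sum(d["correct"] for d in decisions) / len(decisions),
        "first_try_accuracy": sum(d["correct"] and d["retries"] == 0
                                  for d in decisions) / len(decisions),
        "hard_accuracy": (sum(d["correct"] for d in hard) / len(hard)) if hard else None,
        "retry_count": sum(d["retries"] for d in decisions),
        "success_count": sum(t["success"] for t in trials),  # success endings reached
    }

# Two illustrative trials of two decisions each (made-up values).
trials = [
    {"success": True, "decisions": [
        {"correct": True, "retries": 0, "hard": False},
        {"correct": True, "retries": 2, "hard": True},
    ]},
    {"success": False, "decisions": [
        {"correct": False, "retries": 3, "hard": True},
        {"correct": True, "retries": 0, "hard": False},
    ]},
]
stats = summarize(trials)
```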
Table 3: Performance of different models (Original Self Recovery).
Table 4: Performance of different models (Improved Self Recovery).
Interestingly, most models show large gaps between Easy and Hard Accuracy (Figure 7), reflecting correspondingly large gaps in sequential reasoning. Notably, Claude and GPT-4o show more consistent performance across difficulty levels, while Deepseek-R1, though competent on Easy Accuracy, suffers significant drops on harder decisions, highlighting challenges at difficult or deceptive decision points that require multi-step reasoning, delayed consequences, or implicit state tracking.
From an efficiency perspective, GPT-4o and Doubao 1.5-pro offer excellent cost-performance trade-offs: their Runtime Cost and Token Consumption are significantly lower than those of Claude 3.5 and Deepseek-R1.
Figure 6: Model multidimensional performance in Immediate Feedback and Self Recovery modes.
Figure 7: Accuracy disparities across Models: overall, easy & hard tasks.
# 5.2.2 Insights of Distinctions Between Two Modes
To investigate how short-term and long-term memory settings affect model behavior, we compare performance under two task modes. Immediate Feedback mode provides corrective signals after each wrong choice, effectively mimicking short-term memory and aiding models in adjusting quickly. In contrast, Self Recovery better simulates real long-term memory scenarios by removing such signals, requiring the model to navigate the narrative without external guidance.
Unsurprisingly, all models perform worse under Self Recovery mode, as shown by the consistent drop in Overall Accuracy and Success Count. This highlights the increased difficulty of sustained sequential reasoning and knowledge retention without short-term feedback. To alleviate task failure in extreme cases, we introduce an auxiliary intervention metric: Number of Choices Reaching Error Threshold (we set the threshold to 9). If a model makes the same mistake 9 consecutive times, it is prompted with the correct answer. Only Claude 3.5 and GPT-4o never reach this threshold, suggesting that their task completions in Self Recovery mode stem entirely from self-correction and internal reasoning, without artificial hints. This contrasts sharply with the other models and indicates that these two excel in sustained sequential reasoning and knowledge retention.
Surprisingly, despite the overall decline across models in Self Recovery, two metrics, Longest Consecutive Correct Sequence and First-Try Accuracy, actually increase for several models (Figure 8). This counterintuitive trend suggests that while short-term feedback aids local correction, it may also disrupt long-horizon coherence. Removing it pushes models toward deeper narrative understanding (knowledge retention) and more coherent reasoning (sequential reasoning), and better exposes the true limitations and strengths of long-term memory across models.
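The Longest Consecutive Correct Sequence metric mentioned above is simply the longest run of correct decisions within a trial; a minimal sketch (ours, with an assumed per-decision outcome log):

```python
# Sketch of the Longest Consecutive Correct Sequence metric: the longest
# run of correct decisions in a trial's per-decision outcome sequence.
def longest_consecutive_correct(outcomes):
    best = run = 0
    for ok in outcomes:
        run = run + 1 if ok else 0
        best = max(best, run)
    return best

longest = longest_consecutive_correct(
    [True, True, False, True, True, True, False])
```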
Figure 8: Mode impact on models: First-Try Accuracy & Longest Consecutive Correct Sequence metrics.
A notable case is Deepseek-R1. While it does not lead in most individual metrics, it demonstrates remarkable consistency across both Immediate Feedback and Self Recovery modes. This stable performance suggests that the model is capable of making accurate revisions during backtracking.
# 5.3 Failure Case Study
In evaluating long-term memory capabilities with StoryBench, we identified two principal types of failure that reflect limitations in current language models, corresponding to the core dimensions of knowledge retention and sequential reasoning.
The most prominent issue in knowledge retention was the failure to preserve contextual consistency over extended narratives. Models frequently made decisions that contradicted earlier story events, character motivations, or established world logic. This suggests difficulty in integrating and maintaining distributed information over long spans of interaction, especially when the necessary context spans dozens of turns. Even when the relevant facts appeared in the prompt, models struggled to apply them coherently, indicating limitations beyond simple factual recall.
In terms of sequential reasoning, a critical failure case was the inability to repair long-term or multi-error decisions. In Self Recovery mode, successful completion often required models to trace errors back across multi-step causal chains and revise earlier decisions (even multiple choices in combination) that affected downstream outcomes. However, most models exhibited shallow search strategies, typically backtracking only one or two steps rather than engaging in deeper reasoning about the narrative structure or goal shifts. This myopic behavior led to persistent failure when task success depended on understanding and correcting long-term dependencies. We retained such failures to reflect the true upper-bound difficulty of long-term memory reasoning.
Other failures such as format mismatches (e.g., returning option indices instead of decision point IDs), content filtering blocks, server timeouts, or rare instances of hallucinated explanations were also observed but were comparatively infrequent. These were retained in evaluation for completeness but are not the focus of our analysis.
These diverse failure cases underscore the challenge of StoryBench and emphasize the need for more robust memory integration, format alignment, and long-range error correction in current foundation models.
# 6 Limitations
While our benchmark provides a comprehensive evaluation of long-term memory capabilities in large language models through complex, branching narrative tasks, it has several limitations. First, the scenarios are derived from a single interactive fiction domain and the interactive environment is text-based, both of which may limit the benchmark’s generalizability to other knowledge-intensive or task-oriented contexts that require multimodal support. Second, the number of turns and the length of the context are still limited. The current interactive fiction dataset consists of only 6 chapters, which may not fully capture the long-term dependencies and complex reasoning required in more extensive narratives. Future work could expand the dataset by adding subsequent chapters to provide a more comprehensive evaluation of long-term memory. Third, due to API constraints and cost, we primarily evaluate a limited number of mainstream models. The performance of other models under similar conditions remains unexplored. Fourth, although we include a self-recovery setting to simulate real-world error correction, the evaluation remains scripted and cannot capture all forms of natural feedback.

Abstract: Long-term memory (LTM) is essential for large language models (LLMs) to achieve autonomous intelligence in complex, evolving environments. Despite increasing efforts in memory-augmented and retrieval-based architectures, there remains a lack of standardized benchmarks to systematically evaluate LLMs' long-term memory abilities. Existing benchmarks still face challenges in evaluating knowledge retention and dynamic sequential reasoning, and in their own flexibility, all of which limit their effectiveness in assessing models' LTM capabilities. To address these gaps, we propose a novel benchmark framework based on interactive fiction games, featuring dynamically branching storylines with complex reasoning structures.
These structures simulate real-world scenarios by requiring LLMs to navigate hierarchical decision trees, where each choice triggers cascading dependencies across multi-turn interactions. Our benchmark emphasizes two distinct settings to test reasoning complexity: one with immediate feedback upon incorrect decisions, and the other requiring models to independently trace back and revise earlier choices after failure. As part of this benchmark, we also construct a new dataset designed to test LLMs' LTM within narrative-driven environments. We further validate the effectiveness of our approach through detailed experiments. Experimental results demonstrate the benchmark's ability to robustly and reliably assess LTM in LLMs.
# 1 INTRODUCTION
Generative AI (genAI) tools (e.g., ChatGPT [86], Copilot [82]) are becoming increasingly native to software development [34]. While these tools promise to enhance productivity [64] and are reshaping how developers code and innovate [90], their adoption remains entangled with AI hype [17], skepticism [79], and persistent interaction challenges [34, 71].
Trust is a foundational design requirement for supporting effective human-AI interactions [54, 68, 102]. Miscalibrated levels of trust, either over- or under-trust, can lead developers to overlook AI-induced errors and risks [89], or eschew its use altogether in work [11]. Research has identified several factors that drive developers’ trust in genAI tools, including setting appropriate expectations and evaluating AI suggestions [120], as well as community-based influences such as shared experiences and support [20]. Recently, Johnson et al. [63] introduced the PICSE framework through a qualitative investigation with software developers, outlining key components that shape the formation and evolution of trust in software tools (Sec. 2.1).
Another important concern in industry-wide adoption of AI tools is that software design can be exclusionary in multiple ways [2, 5, 10], often failing to adequately support diverse users [33]. While a substantial body of work exists on modeling users’ technology acceptance [18, 96, 116, 117], these studies do not consider the inclusivity of the software design. One often overlooked aspect of inclusivity is supporting cognitive diversity—variations in individuals’ cognitive styles—which fosters divergence in perspectives, thoughts, and interactions (Sec. 2.2) [107]. Numerous studies have shown that when technology is misaligned with users’ cognitive styles [14, 44, 83], it introduces usability barriers, forcing users to expend additional cognitive effort to use technology [14]. Therefore, it becomes essential to understand how developers’ diverse cognitive styles influence their intentions to adopt genAI tools, and how trust factors into this multi-faceted decision.
In our recent work [23], the basis for this journal extension, we address:
RQ1: What factors predict developers’ trust in genAI tools?

RQ2: How are developers’ trust and cognitive styles associated with their intentions to use genAI tools?
We answered these questions in [23] by developing an empirically grounded, validated theoretical model of developers’ trust and behavioral intentions toward genAI tools (see Sec. 4). The model was evaluated using Partial Least Squares-Structural Equation Modeling (PLS-SEM) on survey data from developers (N = 238) at two major global tech organizations: GitHub Inc. [3] and Microsoft [4].
This theoretical model (Sec. 5, Figure 2) empirically showed that genAI’s system/output quality (presentation, adherence to safe and secure practices, performance, and output quality concerning work style/practices), functional value (educational value and practical benefits), and goal maintenance (sustained alignment between developers’ objectives and genAI’s actions) are positively associated with developers’ trust in these tools. Further, developers’ trust and cognitive styles—intrinsic motivations behind using technology, computer self-efficacy within peer groups, and attitudes towards risk—are associated with their intentions to adopt these tools, which in turn, correlate with their reported genAI usage in work.
Beyond identifying key drivers of trust and adoption, an important step towards improving genAI tooling lies in understanding how these drivers perform in practice. While our PLS-SEM model (Fig. 2) quantifies their predictive importance, it does not directly assess whether these factors are perceived as adequate (or lacking) in software development contexts. Specifically, some factors (e.g., goal maintenance) may be highly influential, yet relatively underperformant, signaling design targets where improvements are necessary and potentially most impactful.
Therefore, in this work, we ask: RQ3: What genAI aspects should be prioritized to foster developers’ trust and adoption of these tools?
To answer RQ3, we first conducted an Importance-Performance Matrix Analysis (IPMA) (Sec. 6.1) to examine the interplay between each factor’s statistical influence on trust and adoption (importance) and its perceived adequacy among developers (performance). This evaluation identified factors that, despite strong influence, underperform in practice, thereby revealing specific genAI aspects (what’s) that require improvement. For instance, aspects related to genAI’s system/output quality (e.g., contextual accuracy and performance, safety/security practices, interaction design) and goal maintenance were deemed essential, yet perceived as lacking, ultimately undermining developers’ trust in these tools (Sec. 7).
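IPMA's standard reading, as applied above, crosses each factor's importance (its total effect on trust or adoption) with its performance (a rescaled 0-100 score of how well developers perceive it today). A minimal sketch of that quadrant logic, with made-up numbers rather than the study's actual values:

```python
# Sketch of IPMA's conventional quadrant reading. Cutoffs are often the
# mean importance/performance across factors; all values here are
# illustrative, not results from the paper.
def ipma_quadrant(importance, performance, imp_cut, perf_cut):
    if importance >= imp_cut and performance < perf_cut:
        return "priority: improve"        # influential but underperforming
    if importance >= imp_cut:
        return "keep up the good work"
    if performance >= perf_cut:
        return "possible overkill"
    return "low priority"

factors = {  # name: (importance, performance) -- made-up numbers
    "goal maintenance": (0.30, 55.0),
    "system/output quality": (0.35, 60.0),
    "functional value": (0.20, 75.0),
}
labels = {name: ipma_quadrant(i, p, imp_cut=0.25, perf_cut=70.0)
          for name, (i, p) in factors.items()}
```

Factors landing in the "priority: improve" quadrant are exactly the design targets RQ3 seeks: highly influential yet perceived as lacking.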
Next, we bolstered these findings with a qualitative analysis of open-ended responses—examining developers’ perceived challenges and risks of genAI usage—to uncover their perspectives on why these gaps persist in development contexts (why’s). We grounded these patterns in behavioral science theories to explain how these issues undermine trust and adoption. In doing so, we extend our work [23] by identifying not only what matters for trust and adoption, but also what needs (design) prioritization, and most importantly, why.
The main contributions of our work are threefold: (1) a theoretical model of factors driving developers’ trust and adoption of genAI tools, (2) a psychometrically validated instrument for capturing these factors in human-genAI interaction contexts, and (3) a roadmap for prioritizing design improvements, guiding toolsmiths and researchers towards human-centered genAI tooling for software development. The implications of our findings are discussed in Sec. 8.
Overall, fostering trust and sustained adoption requires more than technical performance; it demands goal alignment, transparency, and equitable interactional support. Our findings underscore the importance of designing genAI tools that not only assist with development tasks but also meaningfully support the developers who use them.
# 2 BACKGROUND
# 2.1 Trust in AI
Trust in AI is commonly defined as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” [68, 72, 91, 118, 120]. Trust is subjective and thus a psychological construct that is not directly observable [56] and should be distinguished from observable measures such as reliance [124]. Trust involves users attributing intent and anthropomorphism to the AI [60], leading to feelings of betrayal when trust is violated. Despite AI systems being inanimate, users often anthropomorphize them [60], thereby shifting from reliance to trust in AI systems. Unobservable psychological constructs are commonly measured through validated self-reported scales (instruments) [32] using questions designed to capture the construct of interest. In this paper, we measure developers’ trust in genAI using the validated Trust in eXplainable AI (TXAI) instrument [55, 91]. TXAI has been derived from existing trust scales [55, 62, 75] and its psychometric quality has been validated [91]. Researchers frequently advocate using the TXAI instrument for measuring trust in AI [76, 91, 100].
Factors affecting trust: Prior research has extensively examined factors influencing human’s trust in automation [62, 75, 80, 81]. However, these preliminary insights do not necessarily transfer to human-AI interactions [120] because of the nuances in how users form trust in AI tools, alongside the inherent uncertainty [118] and variability [121] associated with these systems. Additionally, the context in which AI is applied (in our case, software development) influences how trust is developed and its contributing factors [85]. Relevant to our domain, Johnson et al. [63] interviewed software engineers to outline factors that engineers consider when establishing and (re)building trust in tools through the PICSE framework: (1) Personal (internal, external, and social factors), (2) Interaction (aspects of engagement with a tool), (3) Control (over the tool), (4) System (properties of the tool), and (5) Expectations (with the tool). Since PICSE is developed for software engineering (SE), we use it to design our survey instrument to identify factors influencing developers’ trust in genAI tools. However, the PICSE framework was qualitatively developed and the psychometric quality—reliability and validity—of a survey based on it had not been assessed.
Our work builds upon PICSE to contribute (a) a validated instrument for capturing different factors that developers consider when forming trust in genAI tools (Sec. 4.1) through a psychometric analysis of the PICSE framework and (b) assesses the significance and strength of these factors’ association with trust in genAI tools (Sec. 5).
# 2.2 Users’ Cognitive Styles
AI can be exclusionary in different ways, often failing to support all users as it should [2, 5, 33]. For example, Weisz et al. [122] found that some, but not all, participants could produce high-quality code with AI assistance, and the differences were linked to varying participant interactions with the AI.
User experience in Human-AI interaction (HAI-UX) can be improved by supporting diverse cognitive styles [7], which refer to the ways users perceive, process, and interact with information and technology, as well as their approach to problem-solving [107]. While no particular style is inherently better or worse, if a tool insufficiently supports (or is misaligned with) users’ cognitive styles, they pay an additional “cognitive tax” to use it, creating usability barriers [83].
Here, we scope developers’ diverse cognitive styles to the five cognitive styles in the GenderMag inclusive design method [14]. GenderMag’s cognitive styles (facets) are users’ diverse: attitudes towards risk, computer self-efficacy within their peer group, motivations to use the technology, information processing style, and learning style for new technology. Each facet represents a spectrum. For example, risk-averse individuals (one end of the ‘attitude towards risk’ spectrum) hesitate to try new technology or features, whereas risk-tolerant ones (the other end) are inclined to try unproven technology that may require additional cognitive effort or time. GenderMag’s cognitive styles are well-suited as they have been (a) repeatedly shown to align with users’ interactions with technology both in the context of SE [14, 44, 83] and HAI interactions [7, 48], and (b) distilled from an extensive list of applicable cognitive style types [10, 14], intended for actionable use by practitioners. We used the validated GenderMag facet survey instrument [47] in our study.
# 2.3 Behavioral Intention and Usage
Behavioral intention refers to the extent to which a person has made conscious plans to undertake a specific future activity [116]. Technology acceptance models, such as TAM [18] and UTAUT [116], identify behavioral intention as a key indicator of actual technology usage [117]. Understanding users’ behavioral intentions is useful for predicting technology adoption and guiding future design strategies [116]. While there is an extensive body of work modeling users’ behavioral intentions towards software tools [96, 116, 117], these studies primarily focus on socio-technical factors driving adoption.
Our work contributes to this line of research by examining the role of developers’ trust and cognitive styles in shaping their intentions to use genAI tools (Sec. 4.2 and 5), thereby extending the understanding of AI adoption dynamics in SE. We used components of the UTAUT model [116] to capture developers’ behavioral intentions and usage of genAI tools.
# 3 RESEARCH DESIGN
Fig. 1 presents an overview of our research design. We adopted a Concurrent Embedded Mixed Methods Strategy [27] with a dominant quantitative component. To address our RQs, we surveyed software developers from two major global tech organizations, GitHub Inc. and Microsoft. We leveraged existing theoretical frameworks and instruments to design our data collection instrument (see Sec. 2). While using existing theoretical frameworks is a first step in developing questionnaires,
conducting a psychometric quality assessment is essential to ensure its subsequent reliability and validity [38]. As there was no validated instrument to measure the constructs of the PICSE framework [63]–our chosen trust framework–we performed its psychometric assessment [38] (Sec. 4.1). This assessment helped us define a theoretical model of factors developers consider when forming trust in genAI tools, which we then evaluated using Partial Least Squares-Structural Equation Modeling (PLS-SEM) to answer RQ1. To answer RQ2, we assessed the relationships between developers’ trust and cognitive styles with their intentions to use genAI tools (Sec. 5.2). Finally, to address RQ3, we conducted Importance-Performance Matrix Analysis (IPMA) on our model to identify factors that strongly influence trust and adoption, yet were perceived as underperforming in practice. We then qualitatively analyzed participants’ open-ended responses on perceived challenges and risks of genAI usage, to contextualize why these aspects were viewed as lacking within SE contexts. In doing so, we extend our recent work [23] by identifying not only what matters for trust and adoption but also what is perceived as lacking, and why.

Figure 1: Overview of the research design: research questions (RQ1–RQ3), survey data (closed and open-ended questions), and analysis procedures (psychometric analysis, PLS-SEM, IPMA, and qualitative analysis).
The rest of this paper is structured accordingly: First, we detail the methods and findings for RQ1–2 (Sec. 4 & 5), focusing on the theoretical model (Fig. 2). Then, we present the RQ3 methods and results, uncovering the specific genAI aspects that warrant attention (Sec. 6 & 7).
Table 1. Measurement model constructs and instruments
\*We used the 4-item TXAI scale [55] instead of the 6-item scale [91] to reduce participant fatigue. \*\*PICSE does not have a validated questionnaire in [63].
# 3.1 Survey Design
We defined the measurement model [46] based on the theoretical frameworks discussed in Sec. 2 to guide our survey design (Table 1). Four researchers with experience in survey studies and GitHub’s research team co-designed the survey over a four-month period (Oct 2023 to Jan 2024). We adapted existing (validated) instruments in designing the survey questions (Table 1). The questions were contextualized for the target population and pragmatic decisions were made to limit the survey length. The complete questionnaire is available in the supplemental material [1].
After the IRB-approved informed consent, participants responded to closed questions about their familiarity with genAI technology and their attitudes and intentions towards using genAI tools in work. All closed questions utilized a 5-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”) with a neutral option. These questions also included a 6th option (“I’m not sure”) for participants who either preferred not to or did not know how to respond to a question. This differs from being neutral, acknowledging the difference between ignorance and indifference [42].
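The coding rule implied above, where "I'm not sure" becomes a missing data point rather than the neutral midpoint, can be sketched as follows (our illustration; the study's actual coding scripts are not published here):

```python
# Sketch of Likert coding: "I'm not sure" is treated as missing (None)
# rather than collapsed into the neutral midpoint of 3.
LIKERT = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5,
}

def code_response(answer):
    return LIKERT.get(answer.strip().lower())  # None for "I'm not sure"

coded = [code_response(a) for a in
         ["Strongly agree", "I'm not sure", "Neutral"]]
```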
Table 2. Demographics of Respondents (N = 238)
Demographic questions covered gender, continent of residence, years of software engineering (SE) experience, and primary SE responsibilities at work. We did not collect data on country of residence or specific job roles/work contexts to maintain participant anonymity, per GitHub’s and Microsoft’s guidelines.
We included open-ended questions to explore developers’ perceived challenges and risks of using these tools in practice: (1)What challenges do you face when using genAI tools? (2) What risks or negative outcomes have you experienced when you used genAI in your work? The survey concluded with an open-ended question for additional comments.
The survey took 7-10 minutes to complete. Attention checks were included to ensure the quality of the survey data. To reduce response bias, we randomized the order of questions within their respective blocks (each construct in Table 1). We piloted the questionnaire with collaborators at GitHub to refine its clarity and phrasing.
# 3.2 Data Collection
3.2.1 Distribution. GitHub and Microsoft administered the survey using their internal tools. The survey was distributed to team leads, who were asked to cascade it to their team members. This approach was chosen over using mailing lists to ensure broader reach [112]. The survey was available for one month (Feb-Mar, 2024), and while participation was optional, it was encouraged.
3.2.2 Responses. We received a total of 343 responses: 235 from Microsoft and 108 from GitHub. We removed patterned responses (n=20), outliers (<1 year of SE experience, n=1), and responses that failed attention checks (n=29). Further, we excluded respondents who discontinued the survey without answering all the close-ended questions (n=55). We considered “I’m not sure” responses as missing data points. As in prior work [112], we did not impute data points due to the unproven efficacy of imputation methods within SEM group contexts [99].
After filtration, we retained 238 valid responses (Microsoft: 154, GitHub: 84) from developers across six continents, representing a wide distribution of SE experience. Most respondents were from North America (54.2%) and Europe (23.1%), and most identified as men (78.2%), aligning with distributions reported in previous studies with software engineers [96, 112]. Table 2 summarizes the respondent demographics.
# 4 RQ1&2 METHOD
# 4.1 Psychometric Analysis of PICSE Framework
Psychometric quality [74, 94] refers to the objectivity, reliability, and validity of an instrument. We primarily used validated instruments in designing the survey. However, since PICSE was not validated, we conducted a psychometric analysis to empirically refine its factor groupings, which were then evaluated for their association with trust (Sec. 5). Table 3 presents the factors evaluated in our survey. We performed the analysis using the JASP tool [61], adhering to established psychometric procedures [57, 91, 94]. The full methodological details are provided in our prior work [23] and in the supplemental [1]. We summarize the process here for completeness.
First, we performed Confirmatory Factor Analysis (CFA) [49] to test whether the set of observed variables align with the original five-factor structure (Personal, Interaction, Control, System, Expectations) proposed by Johnson et al. [63]. However, as shown in Table 4, the original five-factor structure did not indicate adequate model fit across standard indices: Root Mean Square Error of Approximation (RMSEA), Comparative Fit Index (CFI), and Tucker-Lewis Index (TLI) [58]. This was not entirely unexpected given PICSE’s conceptual nature [49]. Therefore, to identify a more appropriate model of factors, we proceeded with an Exploratory Factor Analysis (EFA), uncovering empirical groupings that might better fit the data.
Table 3. PICSE framework [63]
We dropped C3 (tool ownership), as it pertained to AI engineers developing parts of genAI models.
EFA identifies the suitable number of latent constructs (factors) and underlying factor structures without imposing a preconceived expectation of factor structures [57]. This analysis yielded an alternate five-factor model explaining 64.6% of the total variance. However, one factor (Factor 4) accounted for minimal variance (4.3%) and exhibited high correlations with other factors (see supplemental). Additionally, items I1, I2, E1, and E2 failed to meet communality thresholds (i.e., the proportion of an item’s variance explained by the common factors was < 0.5) and did not load cleanly onto any factor. These items were excluded, resulting in a more parsimonious four-factor structure. The fit indices in Table 4 indicate a good model fit, showing that the revised model better fits the data than the original PICSE grouping.
Finally, a subsequent CFA on the revised structure confirmed the EFA-derived four-factor model (Table 4: CFA-Alternate). The final factors are: Factor 1, labeled System/Output quality, includes items S2 through S5 and E3, which relate to the System group (in PICSE) and the style matching of genAI’s outputs. Factor 2, labeled Functional value, encompasses items I3 and P3, reflecting the educational value and practical advantages of using genAI tools. Factor 3, labeled Ease of use, comprises items S1 and C2, addressing the ease of using and integrating genAI in the workflow. Factor 5, labeled Goal maintenance, includes a single item, E4, focusing on genAI’s maintenance of human goals. The reliability and validity assessments support the robustness of these constructs (see Sec. 5.1).
In summary, the psychometric analysis confirmed that a four-factor solution is most appropriate and provided a validated measurement instrument for capturing these factors.
Table 4. Model Fit Indices - PICSE Psychometric Evaluation
1) Indications of a good model fit include $p > .05$ for the $\chi^2$ test, RMSEA $< .06$, SRMR $\leq .08$, and $0.95 \leq \mathrm{CFI}, \mathrm{TLI} \leq 1$ [58]. 2) $\chi^2$ test results were not considered, as the test is affected by deviations from multivariate normality [101]. We still report the values for completeness.
# 4.2 Model Development
As discussed in the previous section, we refined the factor groupings within the PICSE framework and constrained our focus to only those factors that were psychometrically validated. Next, we detail the hypotheses embedded in our theoretical model for RQs 1&2.
RQ1) Factors associated with trust
System/Output quality encompasses genAI tools’ presentation, adherence to safe and secure practices (including privacy and security implications of using genAI), and its performance and output quality (consistency and correctness) in relation to the development style or work environment in which it is utilized (S2-S5, E3). Developers often place trust in AI based on its performance and output quality (accuracy and consistency), which serve as proxies for the system’s perceived credibility [20, 36, 120, 127]. Prior work [120] evidenced that developers are often wary about the security and privacy implications of using AI tools in their work, which influences the level of trust they place in these tools. Drawing upon these insights, we hypothesize: (H1) System/Output quality of genAI is positively associated with developers’ trust in these tools.
Functional value of a tool refers to the practical benefits and utility it offers users in their work [103]. In our context, genAI’s functional value encompasses its educational value and clear advantages relative to work performance (I3, P3). Prior work highlights that developers’ expectations of clear advantages from using AI (e.g., increased productivity, improved code quality) contribute to their trust in these tools [63, 128]. Further, AI’s ability to support learning fosters trust [120]. Based on these, we posit: (H2) Functional value of genAI is positively associated with developers’ trust in these tools.
Ease of use associated with genAI tools includes the extent to which developers can easily use and integrate genAI into their workflows (S1, C2). Prior research highlights that a tool’s ease of use [39] and compatibility with existing workflows [68, 96] contribute to users’ trust. Following this, we hypothesize: (H3) GenAI’s ease of use is positively associated with developers’ trust in these tools.
Goal maintenance is related to the degree to which genAI’s actions and responses align with the developer’s ongoing goals (E4). By their very nature, goals can vary depending on the task and context [63]. Therefore, aligning AI behavior with an individual’s immediate goals is crucial in human-AI collaboration scenarios [124]. In terms of human cognition, this congruence is important for maintaining cognitive flow and reducing cognitive load [113], which, in turn, fosters trust in systems [24, 114]. Consequently, we propose: (H4) Goal maintenance is positively associated with developers’ trust in genAI tools.
# RQ2) Factors associated with behavioral intentions
Trust is a key factor in explaining resistance toward automated systems [124] and plays an important role in technology adoption [125, 126]. Multiple studies have correlated an individual’s trust in technology with their intention to use it [8, 39, 65]. In our context, we thus posit: (H5) Trust is positively associated with intentions to use genAI tools.
In the context of GenderMag’s cognitive styles:
Motivations behind why someone uses technology (technophilic or task-focused) not only influence their intention to use it but also affect how they engage with its features and functionalities [87, 117]. Naturally, individuals motivated by their interest and enjoyment in using and exploring the technology (opposite end of the spectrum from those motivated by task completion) are early adopters of new technology [14]. Based on this, we posit: (H6) Motivation to use technology for its own sake is positively associated with intentions to use genAI tools.
Computer self-efficacy refers to an individual’s belief in their ability to engage with and use new technologies to succeed in tasks [9]. It shapes how individuals apply cognitive strategies and the effort and persistence they invest in using new technologies [26], thereby influencing their intention to use them [70, 115]. In line with this, we propose: (H7) Computer self-efficacy is positively associated with intentions to use genAI tools.
Attitude towards risk encompasses an individual’s inclination to take risks in uncertain outcomes [15]. This cognitive facet influences decision-making processes, particularly in contexts involving new or unfamiliar technology [115]. Risk-tolerant individuals (one end of the spectrum) are more inclined to experiment with unproven technology than risk-averse ones (the other end) [14], and show higher intentions to use new tools [117]. Thus, we posit: (H8) Risk tolerance is positively associated with intentions to use genAI tools.
Information processing style influences how individuals interact with technology when problem-solving: some gather information comprehensively to develop a detailed plan before acting; others gather information selectively, acting on initial promising pieces and acquiring more as needed [14]. GenAI systems, by their very interaction paradigm, inherently support the latter by providing immediate responses to queries, allowing users to act quickly on the information received and gather additional details incrementally. Accordingly, we posit: (H9) Selective information processing style is positively associated with intentions to use genAI tools.
Learning style for technology (by process vs. by tinkering) refers to how an individual approaches problem-solving and how they structure their approach to a new technology [14]. Some prefer to learn through an organized, step-by-step process, while others prefer to tinker around—exploring and experimenting with new technology or its features [14]. Prior work indicates that software, more often than not, is designed to support and encourage tinkering [16], making individuals who prefer this approach more inclined to adopt and use new tools [116]. Thus, we propose: (H10) Tinkering style is positively associated with intentions to use genAI tools.
Behavioral intention. Successful technology adoption hinges on users’ intention to use it, translating into future usage. Prior work has consistently shown these factors to be positively correlated [116, 117], suggesting that users who intend to use technology are more likely to do so. Accordingly, we hypothesize: (H11) Behavioral intention to use genAI tools is positively associated with the usage of these tools.
# 4.3 Data Analysis
We used Partial Least Squares-Structural Equation Modeling (PLS-SEM) to test our theoretical model. PLS-SEM is a second-generation multivariate data analysis technique that has gained traction in empirical SE studies investigating complex phenomena [96, 97, 112]. It allows for simultaneous analysis of relationships among constructs (measured by one or more indicators) and addresses multiple interconnected research queries in one comprehensive analysis. It is particularly suited for exploratory studies due to its flexibility in handling model complexity while accounting for measurement errors in latent variables [46]. Importantly, PLS-SEM does not require data to meet distributional assumptions. Instead, it uses a bootstrapping approach to determine the statistical significance of path coefficients (i.e., relationships between constructs). The PLS path model is estimated for a large number of random subsamples (usually 5000), generating a bootstrap distribution, which is then used to make statistical inferences [46].
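To make the bootstrapping idea concrete, the sketch below resamples respondents with replacement and re-estimates a standardized coefficient between two construct scores, taking the percentile interval as the 95% CI. This is a simplified, single-path illustration, not SmartPLS’s actual estimator; all names and the synthetic data are ours.

```python
import numpy as np

def bootstrap_path_coefficient(x, y, n_boot=5000, seed=0):
    """Bootstrap a standardized slope between two construct scores.

    Illustrative stand-in for PLS-SEM bootstrapping: resample respondents
    with replacement, standardize within each resample, and re-estimate
    the coefficient; the 2.5/97.5 percentiles give the 95% CI.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        xs, ys = x[idx], y[idx]
        xs = (xs - xs.mean()) / xs.std()
        ys = (ys - ys.mean()) / ys.std()
        coefs[b] = (xs * ys).mean()  # standardized slope equals Pearson r
    lo, hi = np.percentile(coefs, [2.5, 97.5])
    return coefs.mean(), (lo, hi)

# Hypothetical construct scores for 238 respondents, true coefficient ~0.6
rng = np.random.default_rng(1)
trust = rng.normal(size=238)
intention = 0.6 * trust + rng.normal(scale=0.8, size=238)
b_mean, (ci_lo, ci_hi) = bootstrap_path_coefficient(trust, intention, n_boot=2000)
```

A path is deemed significant when its bootstrap CI excludes zero, which is how the p-values in Table 6 should be read.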
We used the SmartPLS (v4.1.0) software [106] for PLS-SEM analyses, which comprised two main steps, each involving specific tests and procedures. First, we evaluated the measurement model, empirically assessing the relationships between the latent constructs and their indicators (Sec. 5.1). Next, we evaluated the theoretical (or structural) model (Sec. 5.2), representing the hypotheses presented in Section 4.2.
The appropriate sample size was determined by conducting power analysis using the G*Power tool [35]. We performed an $F$-test with multiple linear regression, setting a medium effect size (0.25), a significance level of 0.05, and a power of 0.95. The maximum number of predictors in our model is seven (six theoretical constructs and one control variable to Behavioral Intention) (see Fig. 2). The calculation indicated a minimum sample size of 95; our final sample size of 238 exceeded it considerably.
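The underlying calculation can be sketched with the noncentral $F$ distribution, as in G*Power’s “$R^2$ deviation from zero” test. The function names are ours, and the effect size is passed in as a parameter since the resulting minimum $n$ depends on whether the value is interpreted as Cohen’s $f$ or $f^2$.

```python
from scipy import stats

def regression_power(n, n_predictors, f2, alpha=0.05):
    """Power of the overall F-test (R^2 deviates from zero) in multiple
    regression: noncentrality lambda = f2 * n, df1 = predictors,
    df2 = n - predictors - 1."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)

def min_sample_size(n_predictors, f2, alpha=0.05, power=0.95):
    """Smallest n reaching the requested power (brute-force search)."""
    n = n_predictors + 2  # need df2 >= 1
    while regression_power(n, n_predictors, f2, alpha) < power:
        n += 1
    return n
```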
# 5 RQ1&2 RESULTS
In this section, we report the evaluation of the measurement model (Sec. 5.1), followed by the evaluation of the structural model (Sec. 5.2). We adhered to the evaluation protocols outlined in prior studies [46, 97]. The analysis was performed using the survey data, which met the assumptions for factor analysis [46]: a significant Bartlett’s test of sphericity on all constructs ($\chi^2(496) = 4474.58$, $p < .001$) and an adequate KMO measure of sampling adequacy (0.901), well above the recommended threshold (0.60) [57].
# 5.1 Measurement Model Evaluation
Our model evaluates several theoretical constructs that are not directly observable (e.g., Trust, Behavioral Intention), modeled as latent variables, and measured by a set of indicators or manifest variables (see Fig. 2). The first step in evaluating a structural equation model is to ensure the soundness of the measurement of these latent variables [46, 97], detailed as follows:
1) Convergent validity assesses whether respondents interpret the questions as intended by the question designers [66]. Our theoretical model comprises latent constructs that are reflectively measured, meaning the changes in the construct should be reflected in changes in the indicators [97]. Consequently, these indicators (questions) should exhibit a significant proportion of shared variance by converging on their respective constructs [46]. We assessed convergent validity using Average Variance Extracted (AVE) and indicator reliability through outer loadings [46].
AVE represents a construct’s communality, indicating the shared variance among its indicators, and should exceed 0.5 [46]. AVE values for all latent constructs in our model surpassed this threshold (see Table 5). Regarding outer loadings, values above 0.708 are considered sufficient, while values above 0.60 are sufficient for exploratory studies [46]. We removed variables that did not sufficiently reflect changes in the latent construct (SE3 from computer self-efficacy and IP3 from selective information processing).1 Subsequently, all indicators in our model exceeded the threshold, ranging between 0.615 and 0.954 (see Fig. 2).
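Since AVE is simply the mean of the squared standardized outer loadings, the check is a one-liner; the loadings below are hypothetical, not values from our model.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: the mean squared standardized outer
    loading; AVE >= 0.5 means the construct captures at least half of
    its indicators' variance."""
    l = np.asarray(loadings, dtype=float)
    return float(np.mean(l ** 2))

# Hypothetical loadings for one reflective construct
construct_ave = ave([0.82, 0.77, 0.71, 0.69])  # clears the 0.5 threshold
```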
Table 5. Internal consistency reliability and convergent validity
Cronbach’s $\alpha$ tends to underestimate reliability, whereas composite reliability (CR: $\rho_c$) tends to overestimate it. The true reliability typically lies between these two estimates and is effectively captured by CR ($\rho_a$) [97].
2) Internal consistency reliability seeks to confirm that the indicators are consistent with one another and that they consistently and reliably measure the same construct. To assess this, we performed both Cronbach’s $\alpha$ and Composite Reliability (CR: $\rho_a$, $\rho_c$) tests [97]. The desirable range for these values is between 0.7 and 0.9 [46]. As presented in Table 5, all values corresponding to our model constructs fall within the acceptable range, confirming that the constructs and their indicators meet the reliability criteria.
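The two reliability statistics can be sketched as follows, assuming a respondents-by-items score matrix for $\alpha$ and standardized loadings for $\rho_c$ (a minimal illustration; SmartPLS computes these internally).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of sum)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    """rho_c from standardized loadings:
    (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    l = np.asarray(loadings, dtype=float)
    s = l.sum() ** 2
    return s / (s + np.sum(1 - l ** 2))
```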
3) Discriminant validity assesses the distinctiveness of each construct in relation to the others. Our model includes 10 latent variables (Table 5), and we assessed discriminant validity using the Heterotrait-Monotrait (HTMT) ratio of correlations [52]. Discriminant validity may be considered problematic if the HTMT ratio exceeds 0.9, with a more conservative cut-off at 0.85 [46]. In our case, the HTMT ratios between the latent constructs ranged from 0.064 to 0.791, all below the threshold. We report the HTMT ratios in the supplemental [1], along with the cross-loadings of the indicators and the Fornell-Larcker criterion values for the sake of completeness. Both procedures indicated that discriminant validity did not pose a threat in this study.
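For readers unfamiliar with HTMT, a minimal sketch for two constructs follows: the mean absolute between-block (heterotrait) correlation divided by the geometric mean of each block’s mean within-block (monotrait) correlation. Function and argument names are illustrative.

```python
import numpy as np

def htmt(data, block_a, block_b):
    """Heterotrait-Monotrait ratio for two constructs' indicator blocks.

    data: (n_respondents, n_indicators) matrix; block_a/block_b list the
    column indices of each construct's indicators. Values above ~0.85-0.90
    flag discriminant-validity problems.
    """
    r = np.abs(np.corrcoef(data, rowvar=False))
    hetero = r[np.ix_(block_a, block_b)].mean()

    def mono(block):
        sub = r[np.ix_(block, block)]
        return sub[~np.eye(len(block), dtype=bool)].mean()

    return hetero / np.sqrt(mono(block_a) * mono(block_b))
```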
4) Collinearity assessment is conducted to evaluate the correlation between predictor variables, ensuring they are independent to avoid potential bias in the model path estimations. We assessed collinearity using the Variance Inflation Factor (VIF). In our model, all VIF values are below 2.1, well below the accepted cut-off value of 5 [46].
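The VIF computation can be sketched directly from its definition: regress each predictor on the remaining ones and invert the unexplained variance share (illustrative code, not the SmartPLS routine).

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor per predictor: VIF_j = 1 / (1 - R^2_j),
    with R^2_j from regressing column j on the remaining columns
    (intercept included)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```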
# 5.2 Structural Model Evaluation
After confirming the constructs’ reliability and validity, we assessed the structural model (graphically represented in Fig. 2). This evaluation involved validating the research hypotheses and assessing the model’s predictive power.
5.2.1 Path coefficients and significance. Table 6 presents the results of the hypotheses testing, including the mean of the bootstrap distribution (B), the standard deviation (SD), the 95% confidence interval (CI), and the p-values. The path coefficients in Fig. 2 and Table 6 are interpreted as standard regression coefficients, indicating the direct effects of one variable on another. Each hypothesis is represented by an arrow between constructs in Fig. 2. For instance, the arrow from “Functional Value” to “Trust” corresponds to H2. Given its positive path coefficient (B = 0.142), genAI’s functional value is positively associated with developers’ trust in these tools. The coefficient of 0.142 indicates that when the score for functional value increases by one standard deviation (SD) unit, the score for trust increases by 0.142 SD units. The analysis results (Table 6) show that most of our hypotheses are supported, except for H3 (p = 0.58), H9 (p = 0.06), and H10 (p = 0.33). Next, we detail the factors associated with trust and behavioral intentions for the supported hypotheses, with exemplary quotes from responses to the open-ended questions to illustrate our findings.
Fig. 2. PLS-SEM Model: Solid lines indicate item loadings and path coefficients ($p < 0.05$); dashed lines represent non-significant paths. Reverse-coded items are suffixed with ‘-R’ (e.g., SE2-R). Latent constructs are depicted as circles, and adjusted $R^2$ (Adj. $R^2$) values are reported for endogenous constructs.
Factors associated with trust (RQ1): Our analysis supported Hypotheses H1 (p = 0.00), H2 (p = 0.03), and H4 (p = 0.00) (Table 6). First, the support for system/output quality in fostering trust (H1) can be explained by how developers prefer tools that deliver accurate, reliable outputs matching their work style and practices [120, 127]. Next, the functional value of genAI, encompassing educational benefits and practical advantages, promotes trust (H2), since developers prioritize tools that offer tangible utility in their work [63, 128]. For instance, a respondent noted genAI’s practical value, stating, “I find value in these models for creative endeavors, gaining different perspectives, or coming up with ideas I wouldn’t have otherwise” (P91). Finally, goal maintenance is relevant for cultivating trust (H4). The alignment between a developer’s goals and genAI’s actions supports using genAI tools to achieve these goals. This eliminates the need for developers to constantly verify the relevance of genAI’s outputs, thereby reducing cognitive load. This congruence ultimately enhances genAI’s credibility as a cognitive collaborator [124] rather than as an independent and potentially untrustworthy tool, thus bolstering trust in these tools.
Factors associated with behavioral intentions (RQ2): Our analysis supported Hypotheses H5 (p = 0.00), H6 (p = 0.01), H7 (p = 0.01), and H8 (p = 0.00), indicating that developers’ trust (H5) and their cognitive styles—motivations (H6), computer self-efficacy (H7), and risk tolerance (H8)—have statistically significant associations with their behavioral intentions to use genAI tools.
Trust (H5) is pivotal in shaping adoption decisions as it reduces resistance to new technologies [125, 126]. When developers trust genAI tools, they perceive them as credible partners, enhancing their willingness to use these tools. Moreover, developers’ cognitive styles significantly shape their intentions to adopt genAI tools. Developers motivated by the intrinsic enjoyment of technology (H6) have higher intentions to adopt genAI. In contrast, those with a task-oriented approach tend to be more cautious and hesitant about the cognitive effort they are willing to invest in these tools [14]. Higher computer self-efficacy within peer groups is also significantly associated with increased intentions to use genAI tools (H7). Despite generally high self-efficacy, some developers face interaction challenges with genAI that may impact their confidence and adoption rates (see Sec. 7.1). Furthermore, we found that developers with higher risk tolerance are significantly more inclined to use these tools than risk-averse individuals (H8). The context (and the stakes involved) in which these tools are used further plays a role, as highlighted by another respondent: “I don’t use it yet to write code that I can put my name behind in production; I just use it for side projects or little scripts to speed up my job, but not in actual production code” (P222).
Table 6. Standardized path coefficients (B), standard deviations (SD), confidence intervals (CI), p-values, and effect sizes ($f^2$)
BI: Behavioral Intention. We consider $f^2 < 0.02$ to be no effect, $f^2 \in [0.02, 0.15)$ to be small, $f^2 \in [0.15, 0.35)$ to be medium, and $f^2 \geq 0.35$ to be large [25].
Finally, our analysis supported Hypothesis H11 (p = 0.00), highlighting a significant positive association between developers’ behavioral intention to use genAI tools and their usage in their work. This corroborates prior technology acceptance models [116, 117], emphasizing the pivotal role of behavioral intentions in predicting use behavior.
Control variables: Although experience is often relevant for technology adoption [116], our analysis found no significant associations between SE experience and trust, behavioral intentions, or usage of genAI tools. This is likely since genAI introduces a new interaction paradigm [121], which diverges from traditional SE tools and requires different skills and interactions not necessarily linked to SE experience. Familiarity with genAI, while potentially influential, was excluded as a control variable due to a highly skewed distribution of responses, with most participants reporting high familiarity. Including such skewed variables could lead to unreliable estimates and compromise the model’s validity [46, 98]. Similarly, the gender variable was excluded due to its skewed distribution. The analysis of unobserved heterogeneity (see supplemental [1]) supports the absence of any group differences in the model (e.g., organizational heterogeneity) caused by unmeasured criteria.
5.2.2 Model evaluation. We assessed the relationship between constructs and the predictive capabilities of the theoretical model by evaluating the model’s explanatory power ($R^2$, Adjusted (Adj.) $R^2$), model fit (SRMR), effect sizes ($f^2$), and predictive relevance ($Q^2$) [97].
Explanatory power: The coefficient of determination ($R^2$ and Adj. $R^2$ values) indicates the proportion of variance in the endogenous variables explained by the predictors. Ranging from 0 to 1, higher $R^2$ values signify greater explanatory power, with 0.25, 0.5, and 0.75 representing weak, moderate, and substantial levels, respectively [46]. As shown in Table 7, the $R^2$ values in our model are 0.68 for Trust, 0.66 for Behavioral intention, and 0.33 for Usage, demonstrating moderate to substantial explanatory power, well above the accepted threshold of 0.19 [21]. Further, Table 6 presents the effect sizes ($f^2$), which measure the impact of each predictor on the endogenous constructs. The effect sizes indicate that the predictors exhibit medium to large effects on their respective endogenous variables for all supported hypotheses in our model, with values ranging from 0.14 to 0.54 [25], further corroborating the model’s explanatory power.
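The $f^2$ effect size is derived from the change in $R^2$ when a predictor is omitted from the model, scaled by the full model’s unexplained variance. A minimal sketch with Cohen’s bands (the input $R^2$ values below are hypothetical, not figures from Table 6):

```python
def f_squared(r2_full, r2_reduced):
    """Cohen's f^2 for one predictor: the R^2 lost when it is omitted,
    divided by the full model's unexplained variance (1 - R^2_full)."""
    return (r2_full - r2_reduced) / (1 - r2_full)

def f2_label(f2):
    """Effect-size bands from Cohen [25], as applied in Table 6."""
    if f2 < 0.02:
        return "none"
    if f2 < 0.15:
        return "small"
    if f2 < 0.35:
        return "medium"
    return "large"

# e.g., dropping a predictor lowers R^2 from 0.68 to 0.60 (hypothetical)
effect = f_squared(0.68, 0.60)
```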
Model fit: We analyzed the overall model fit using the standardized root mean square residual (SRMR), a recommended fit measure for detecting misspecification in PLS-SEM models [97]. Our results suggested a good fit of the data to the theoretical model (SRMR = 0.077), below the suggested thresholds of 0.08 (conservative) and 0.10 (lenient) [51].
Predictive relevance: Finally, we evaluated the model’s predictive relevance using Stone-Geisser’s $Q^2$ [109], a measure of external validity [46] obtainable via the PLS-predict algorithm [104] in SmartPLS. PLS-predict is a holdout sample-based procedure: it divides the data into $k$ subgroups (folds) of roughly equal size, using $k-1$ folds as a training sample to estimate the model, while the remaining fold serves as a holdout to assess out-of-sample predictive power. $Q^2_{predict}$ values are calculated for endogenous variables; values greater than 0 indicate predictive relevance, while negative values suggest the model does not outperform a simple average of the endogenous variable. Our sample was segmented into $k = 10$ parts, and 10 repetitions were used to derive the $Q^2_{predict}$ statistic [46], all of which were greater than 0 (Table 7), confirming our model’s adequacy in terms of predictive relevance.
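The core of the statistic, once a fold’s predictions are in hand, compares the model’s squared errors against a benchmark that always predicts the training fold’s mean. A minimal sketch (the full PLS-predict procedure also handles the fold splitting and repetitions):

```python
import numpy as np

def q2_predict(y_holdout, y_pred, train_mean):
    """Q^2_predict = 1 - SSE(model) / SSE(training-mean benchmark).

    Positive values mean the model out-predicts the naive benchmark that
    always forecasts the training sample's mean of the endogenous variable."""
    y_holdout = np.asarray(y_holdout, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sse_model = np.sum((y_holdout - y_pred) ** 2)
    sse_naive = np.sum((y_holdout - train_mean) ** 2)
    return 1 - sse_model / sse_naive
```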
Table 7. Coefficient of determination and predictive relevance
5.2.3 Common method bias. We collected data via a single survey instrument, which might raise concerns about Common Method Bias/Variance (CMB/CMV) [97]. To test for CMB, we applied Harman’s single-factor test [93] on the latent variables. No single factor explained more than 23% of the variance. An unrotated exploratory factor analysis with a forced single-factor solution was conducted, which explained 30.3% of the variance, well below the 50% threshold. Additionally, we used Kock’s collinearity approach [67]. The VIFs for the latent variables ranged from 1.01 to 2.45, all under the cut-off of 3.3. These results indicate that CMB was not a concern in our study.
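A common quick proxy for Harman’s test is the share of total variance captured by the first principal component of the indicator correlation matrix; the proper test uses an unrotated single-factor EFA, so this sketch is an approximation for illustration only.

```python
import numpy as np

def first_factor_variance_share(data):
    """Share of total variance on the first principal component of the
    indicator correlation matrix; a share > 0.5 would suggest that one
    common factor dominates (a common-method-bias warning sign)."""
    r = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(r)  # ascending order
    return eigvals[-1] / eigvals.sum()
```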
# 6 RQ3-METHOD
PLS-SEM analysis (RQ1&2) identified the effects of various genAI factors on developers’ trust and examined how trust and cognitive styles predict their intentions to use these tools at work. Our goal in RQ3 was to explore what specific genAI aspects could benefit from relative improvement to enhance developers’ trust and adoption of these tools.
To answer this RQ, we conducted (1) Importance-Performance Matrix Analysis (IPMA) on our research model to identify factors that have a strong impact on the target constructs (Trust, Behavioral Intentions) yet are relatively underperforming—uncovering the specific aspects that require improvement (the what’s). Once the underperforming aspects were identified, we (2) qualitatively analyzed developers’ open-ended responses on perceived challenges and risks of genAI usage (see Sec. 3.1) to contextualize why these aspects were perceived to relatively underperform in the study’s context (the why’s). The analysis process is detailed below.
# 6.1 Importance Performance Matrix Analysis (IPMA)
IPMA extends PLS-SEM results (path coefficient estimates, see Table 6) by taking the performance of predictor constructs into account. Specifically, it contrasts total effects—which capture a predictor’s influence on a target construct—with average latent variable scores—which reflect its performance [37, 77]. In essence, importance scores represent a predictor’s impact, whereas performance scores reflect its perceived adequacy among respondents.
To illustrate the concept of IPMA, consider the example PLS path model in Fig. 3(a), where five predictor constructs (X1–X5) influence an endogenous construct, Y. Fig. 3(b) presents the corresponding IPMA results, where each predictor’s effect (path coefficient) on Y reflects its importance value in the map (x-axis), while its average variable score represents its performance (y-axis). This plot serves as a decision tool for prioritizing influential factors for improvement [43, 53, 96].
The map is divided into four quadrants, determined by the average importance (vertical line) and average performance (horizontal line) of the constructs. These reference lines categorize predictors based on their relative importance-performance pairs, providing a structured interpretation of aspects where improvements should be prioritized.
Constructs in the lower-right quadrant (Q4) are of particular interest to us as they have above-average influence (importance) but below-average adequacy (performance), suggesting that improvements in these areas could yield the greatest impact. Accordingly, Q4 constructs should be prioritized for intervention. Fig. 3(b) assigns the quadrant labels from Martilla and James [77]. They describe Q4 as “concentrate here; critical improvement area”, Q2 as “keep up the good work”, Q3 as “low priority”, and Q1 as “possible overkill”.
Given the study’s objective of prioritizing key genAI aspects for trust and adoption, we will focus on Q4—the lower-right quadrant—as it highlights high-importance, low-performance areas needing the most relative improvement (what’s).
Our data satisfied all IPMA requirements as recommended by [95]: All indicators in the PLS path model (1) used quasi-metric scales, (2) were aligned in the same direction before analysis (low to high), and (3) showed positive estimated and expected outer loadings. Having met these conditions, we computed the importance (Step 1) and performance (Step 2) values for each predictor in the model. We then derived the importance-performance maps for the two target constructs, i.e., trust and behavioral intentions. We summarize the steps below:
Fig. 3. Importance-Performance Map Interpretation (Illustrative Example). (a) Example PLS path model, where predictors X1–X5 influence construct Y. (b) Corresponding Importance-Performance Map of Y, divided into four quadrants (Q1–Q4) based on average importance (vertical reference line) and average performance (horizontal reference line) scores. Quadrant labels are from [77].
1) Importance value computation: The importance values of predictor constructs (X1–X5) are quantified by their unstandardized total effect on the target construct (Y). Specifically, a one-unit increase in a predictor’s performance translates to an increase in the target construct’s performance by the magnitude of this total effect (i.e., its importance). Further, an indicator’s importance value is calculated by multiplying its rescaled outer weight (Eqn. 3) by its corresponding construct’s total effect on the target construct. In Fig. 3(b), these importance values determine the x-axis, representing the relative influence of each predictor (X1–X5) on the target construct (Y).
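For models with mediating constructs, the total effect sums the direct path and all indirect paths. A minimal sketch of this computation (our illustration with a hypothetical three-construct model, not the authors' code):

```python
import numpy as np

# For a recursive path model with path-coefficient matrix B, where
# B[i, j] is the unstandardized coefficient from construct j to
# construct i, the total-effect matrix is T = (I - B)^{-1} - I
# (equivalently, the sum B + B^2 + ... over all path lengths).
# Hypothetical model: X -> M -> Y, plus a direct X -> Y path.
B = np.array([
    [0.0, 0.0, 0.0],   # X has no predecessors
    [0.5, 0.0, 0.0],   # M <- X (0.5)
    [0.2, 0.4, 0.0],   # Y <- X (0.2 direct), Y <- M (0.4)
])

I = np.eye(3)
T = np.linalg.inv(I - B) - I

# Total effect of X on Y = direct (0.2) + indirect via M (0.5 * 0.4) = 0.4
print(round(T[2, 0], 3))  # -> 0.4
```

The total effect `T[2, 0]` (X on Y) would serve as X's importance value on the map's x-axis.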
2) Performance value computation: We applied the performance computation method outlined by Ringle and Sarstedt [95] within SmartPLS [106]. Since performance computation is not inherently intuitive in PLS contexts, we briefly summarize the process to make it more accessible to readers.
A construct’s performance value is derived from its indicator data: First, all indicator scores are rescaled to a 0-100 range to enable consistent comparison across indicators measured on different scales. The rescaling transformation for an observation $j$ of an indicator $i$ is defined as:
$$
x_{ij}^{\mathrm{rescaled}} = \frac{x_{ij} - \min(x_i)}{\max(x_i) - \min(x_i)} \times 100 \tag{1}
$$
where $x_{ij}$ is respondent $j$’s actual score for indicator $i$, and $\min(x_i)$ and $\max(x_i)$ are the theoretical minimum and maximum values of indicator $i$ (e.g., 1 and 5 for a 5-point scale) [95]. These limits refer to the scale itself, not observed response values. The mean of the rescaled indicator scores, $\bar{x}_i^{\mathrm{rescaled}}$, represents an indicator’s performance value.
Second, rescaled latent variable scores are computed as a weighted linear combination of rescaled indicator data (for all indicators measuring the latent variable) and their corresponding rescaled outer weights:
$$
LV_j^{\mathrm{rescaled}} = \sum_i w_i^{\mathrm{rescaled}} \cdot x_{ij}^{\mathrm{rescaled}} \tag{2}
$$
Manuscript submitted to ACM, Vol. 1, No. 1, Article . Publication date: April 2025.
To compute the rescaled outer weights $w_i^{\mathrm{rescaled}}$, unstandardized weights are first obtained by dividing the standardized outer weights of indicators by their standard deviations. These unstandardized weights are then rescaled to ensure compatibility across indicators within the same measurement model:
$$
w_i^{\mathrm{rescaled}} = \frac{w_i^{\mathrm{unstd}}}{\sum_k w_k^{\mathrm{unstd}}} \tag{3}
$$
The denominator aggregates the unstandardized weights of all indicators belonging to the same measurement model. The mean of the rescaled latent variable scores, $\bar{LV}^{\mathrm{rescaled}}$ (Eqn. 2), represents the performance value of the construct, i.e., serves as the input for the IPMA’s performance dimension. In Fig. 3(b), these values determine the y-axis, representing the relative adequacy of each predictor (X1–X5), with higher values indicating better relative performance.
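The performance computation in Eqns. 1–3 can be sketched end-to-end. The indicator scores and standardized weights below are hypothetical, and this is our own illustration rather than the SmartPLS implementation:

```python
import numpy as np

# Hypothetical construct with two 5-point indicators, three respondents.
scores = np.array([[4, 5],     # respondent 1: indicator 1, indicator 2
                   [3, 4],
                   [5, 3]], dtype=float)
std_weights = np.array([0.6, 0.5])      # hypothetical standardized outer weights
std_devs = scores.std(axis=0, ddof=1)   # indicator standard deviations

# Eqn. 1: rescale indicator scores to 0-100 using the scale's
# theoretical limits (1 and 5), not the observed min/max.
rescaled = (scores - 1) / (5 - 1) * 100

# Eqn. 3: unstandardize the weights, then rescale them to sum to 1
# within the measurement model.
unstd = std_weights / std_devs
w_rescaled = unstd / unstd.sum()

# Eqn. 2: rescaled latent variable scores; their mean is the construct's
# performance value (the y-axis input for the IPMA).
lv = rescaled @ w_rescaled
performance = lv.mean()
print(round(performance, 2))  # -> 75.0
```

The resulting performance value lies on the same 0–100 scale for every construct, which is what makes the map's y-axis comparable across predictors.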
# 6.2 Qualitative Analysis
To bolster our findings from the IPMA (which identified underperforming genAI aspects), we qualitatively analyzed developers’ open-ended responses on perceived challenges and risks of genAI usage (see Sec. III). This analysis aimed to uncover why developers perceived these aspects as underperforming within SE contexts (why’s).
We used reflexive thematic analysis [12, 13] to discern themes and patterns within the data, iteratively refined based on participants’ responses [13]. To ensure reliability in the analysis, the team held multiple meetings over nine weeks to compare and contrast the codes and discuss the differences, as advocated in thematic analysis [28, 78]. Specifically, we proceeded as follows:
Two authors inductively open-coded the data to identify preliminary codes. These codes were subsequently refined within the team as our understanding of the data evolved. Next, we built post-formed codes, which were associated with corresponding segments of participant responses. Subsequently, we compared and contrasted the codes, merging or splitting them as required. Here, an important aspect was analyzing co-occurrences—codes that occurred together were merged, while conceptually distinct codes remained separate. During this step, codes with logical connections were grouped into higher-level categories. Throughout this process, we used a negotiated agreement protocol within the team to discuss and refine the categorizations until a consensus was reached about the themes, as cataloged in our codebook (see supplemental [1]).
Subsequently, to understand why specific aspects were perceived as underperforming, we mapped the qualitative findings to the IPMA results using team-based negotiated agreement and consensus-building. As an additional check, we analyzed open-ended responses from participants who assigned low ratings to the underperforming factors (1–3 on a 5-point scale) and found no discrepancies between their quantitative assessments of these factors (what’s) and their corresponding explanations (why’s). Further, where applicable, we triangulated these findings with relevant behavioral science theories to provide a structured theoretical interpretation of these patterns.
From 238 participants, we analyzed a total of 449 responses to questions on challenges and risks. Of these, 206 responses discussed challenges, while 180 focused on risks. We identify respondents as P1-P238 in the subsequent sections.
# 7 RQ3-RESULTS
This section presents the genAI aspects from our model (Fig. 2) for which interventions should be prioritized to enhance developers’ trust and intentions. RQ3 builds on the findings of RQ1&2 by pinpointing high-impact yet underperforming factors in the model that require relative improvement.
Below, we organize our findings around the model’s two target constructs: (1) Trust and (2) Behavioral Intentions (BI). We integrate thematic insights from participants’ responses to contextualize why they perceived these factors to be lacking and draw on behavioral science theories to explain how these issues undermine trust and adoption, where applicable.
Fig. 4. Importance–Performance Map Analysis (IPMA) of constructs predicting trust in genAI.
# 7.1 Trust
Fig. 4 presents the IPMA results for trust, mapping the importance-performance relationships of its predictor constructs. Among these, genAI’s system/output quality and goal maintenance exert the strongest effects on trust but underperform (Quadrant 4 in Fig. 4), indicating that improvements in these areas could yield the highest impact. For instance, a one-unit increase in performance of system/output quality (from 54.32 to 55.32) would enhance trust by its total effect (0.596), underscoring it as a critical area for intervention.
Once we identified the key areas to be prioritized, we conducted IPMA at the indicator level to uncover which specific attributes regarding genAI’s system/output quality and goal maintenance were underperforming. Fig. 5 illustrates this analysis, where indicators in Quadrant 4 are the (what’s) that participants perceived as lacking. Respondents identified deficiencies in: (1) genAI’s ability to sustain goal maintenance (E4), (2) consistent accuracy and appropriateness of genAI outputs (S4), (3) style matching of genAI contributions (E3) in the work environment where it is used, (4) presentation (S2), (5) safety and security practices (S3), and (6) genAI’s performance in tasks (S5). We detail these below based on their importance-performance ranking 5:
7.1.1 Goal maintenance (E4) is a key determinant of developers’ trust in genAI tools.6 Trust is bolstered when there is congruence between developers’ ongoing goals and genAI’s actions, as it enhances genAI’s credibility as a cognitive collaborator [124]. Conversely, failures in sustaining goal maintenance impose cognitive burdens [113], which erodes trust, forcing developers to intervene frequently to keep the AI aligned with their goals. Fig. 5 highlights that goal maintenance (E4) should have the highest priority for improvement, given its highest relative importance yet low performance.
Fig. 5. Indicator-level IPMA for trust, highlighting specific genAI attributes that are underperforming.
Qualitative analysis of what participants reported as challenges revealed four primary breakdowns in goal maintenance: (a) misalignment with task objectives, where genAI outputs failed to account for overarching objectives, causing deviations from intended goals; (b) verification burden, where participants had to invest significant effort to validate genAI’s responses; (c) high cognitive effort in prompting, as participants struggled to craft prompts to elicit context-aware responses; and (d) high cognitive effort in modifying genAI responses, requiring substantial rework to integrate AI outputs into workflows. Each of these issues increased extrinsic workload, affecting participants’ trust in these tools.
(a) Misalignment with task objectives stemmed from three core gaps in capabilities of the genAI tools that the participants used. First, in software development contexts, participants noted that these tools had limited contextual awareness of task-specific goals and broader design considerations. As one participant observed, “I suspect that AI operates without any concrete awareness of goals. It is difficult for genAI tools when it needs thorough code context, such as knowing related framework APIs, design decisions made within teams, or even related source code to finish the task” (P164). Second, this insufficient goal awareness caused genAI’s suggestions to frequently deviate from intended goals. Participants viewed these deviations as impeding their progress, leading to wasted time and effort. P42 articulated, “Sometimes AI takes us in different directions. We spend a lot of time trying those approaches, trusting that AI gives good answers, but it doesn’t work for our business solutions” (P42). Finally, these deviations often resulted in siloed solutions that failed to integrate holistically with broader development objectives. This resulted in limited applicability of genAI’s contributions, reducing the anticipated efficiency gains. P234 explained, “[genAI] often generates answers and code that doesn’t address my question or the problem I’m trying to solve. I have to go through it thoroughly to correct things as required” (P234).
These findings align with the distributed cognition framework [59], which emphasizes the importance of tight coupling and coordination between external representations (genAI in this case) and users’ cognitive activities. Effective cognitive support depends on the AI’s ability to maintain awareness of and adapt to users’ evolving task objectives. However, the observed tendency of genAI to operate in “silos”—exhibiting limited contextual awareness and frequent deviations from goals—forced participants to often intervene manually, adding verification burdens.
(b) Verification burden, as noted earlier, was a significant challenge in using genAI tools. Participants frequently noted investing significant cognitive effort in validating genAI’s contributions to prevent defects from permeating into production code. As P114 noted, “The accuracy is always improving, but I still have to dissect all AI outputs to make sure everything is correct and expected before shipping to production. By nature of how the models work, we can’t guarantee correctness, so [we] still bear accountability for the work produced through these tools” (P114). This burden was exacerbated due to the absence of effective verification affordances in these tools. Consequently, participants struggled to validate genAI’s outputs systematically beyond manual inspection. They expressed frustration over its tendency to produce specious responses, leading to an increased workload in quality assurance and debugging. P209 reflected, “AI often gets things subtly wrong that look right, so it takes time to examine its suggestions. It’s hard to catch if you already don’t know how to solve that issue”.
The cumulative need for constant scrutiny increased participants’ cognitive load, diminishing the anticipated benefits of using these tools. P103 summarized this sentiment, “Vigilance in evaluating the quality of genAI responses is hard and taxing. It’s mostly that I always have to double-check most of the information or code it provides, so you could say the biggest challenge is trust” (P103). Another participant added, “...[it] takes a careful eye to look over anything generated for correctness; this can sometimes be a harder task than writing from scratch” (P220), making genAI assistance sometimes appear less efficient in development workflows. From a theoretical standpoint, verification burden disrupts users’ cognitive flow by introducing frequent interruptions in tasks [29]. This ultimately diminishes users’ sense of immersion and engagement, thereby impeding trust in systems [88].
(c) High cognitive effort in prompting: Participants reported that crafting effective prompts required substantial cognitive effort, particularly in articulating queries that convey intended objectives and iterating through unsystematic trial-and-error approaches to elicit desired outputs. These challenges stemmed from genAI’s inherent limitations in inferring user intent, requiring participants to constantly refine inputs manually.
First, effective prompt articulation required participants to laboriously phrase (and tweak) queries to extract relevant responses. P64 highlighted, “One challenge I face is putting a lot of effort into constructing targeted prompts that the model will understand and provide the output I’m looking for” (P64). This issue was aggravated when prompts had to incorporate technical nuances or domain-specific constraints. Participants often had to “over-specify details”, making this process cumbersome. At times, this effort became prohibitive, leading them to accept outputs that fell short of their intended specifications. P225 noted, “Prompting to convey exact scenario or context is tough. Sometimes I have to settle for something not quite what I want because I can’t get the prompt just right even after too many tries” (P225).
This challenge was further compounded by the complexity of prompt refinements, which required extensive trial-and-error modifications to generate relevant outputs. Despite multiple iterations, obtaining goal-aligned responses remained difficult. P57 explained, “I have confusion around the prompt; there is a lot of back and forth before I can get [genAI] to understand what I mean and generate relevant results. It takes time to develop the correct prompts and even more effort to refine them” (P57). These unsystematic refinements disrupted problem-solving, forcing participants to experiment with input variations to achieve viable outputs. As a result, they bore the cognitive burden of manual iterations, making the process labor-intensive and inefficient.
(d) High cognitive effort in modifying genAI responses: Participants frequently had to expend high cognitive effort in modifying genAI’s contributions to meet their specific needs and task constraints. This occurred when AI-generated content required extensive post-processing to be contextually usable; thus shifting the burden of adaptation onto users. As P71 put it, “Using AI takes too much time and effort trying to modify the output to make it work” (P71).
As discussed earlier, these tools often failed to align with task objectives, requiring participants to assess, restructure, and refine outputs before they could be integrated in work. P122 described, “...leveraging AI took me longer to complete a task due to the efforts in modifying suggestions to fit my task. I often need to go back and tweak things after I accept a code snippet in my editor” (P122). In terms of human cognition, these constant adjustments add to extrinsic cognitive load [92], i.e., instead of reducing workload, these often became another layer of work, diverting participants’ efforts from their actual task objectives. The need for frequent modifications diminished genAI’s perceived utility among respondents, ultimately undermining trust in these tools.
7.1.2 Consistent accuracy and appropriateness of genAI’s outputs (S4). Participants reported low confidence in the consistent accuracy and appropriateness of genAI’s contributions, citing three primary breakdowns:
(a) Lack of contextual appropriateness in outputs: GenAI struggled to provide contextually appropriate suggestions, often producing oversimplified, “pre-canned” responses that failed to capture the nuances in task requirements. P23 articulated, “Business logic is often hard to translate across genAI. For example, we often have to do weird edge cases or have oddly shaped API’s for specific reasons. genAI tends to oversimplify these problems, giving ‘cookie-cutter’ responses that are inadequate for the use case” (P23).
(b) Incorrect or irrelevant outputs: GenAI frequently produced outright inaccuracies in its responses. Within SE contexts, participants described multiple instances where the AI generated “sub-par, incorrect, edge-case prone code” (P205), necessitating extensive manual review. P219 highlighted, “Often things are subtly wrong. They are correct enough to fool me, and the tooling (type checker, tests, etc), so this requires more attention in [code] reviews” (P219).
(c) Low predictability of output quality: The low predictability of genAI’s output quality, even for similar tasks, eroded trust in these tools. Participants characterized these models as “capricious”, noting that identical prompts often yielded significantly different responses across sessions, reducing its utility in development tasks. P190 described this skepticism, stating, “Consistency of AI generation (especially in code) still tends to be low. I can do the same thing one day and the next day it will give me different results. I tend to rely on genAI more for repetitive tasks, and less for finished code” (P190). These inconsistencies made participants reluctant to integrate AI-generated contributions into their workflows, treating them more as tentative references than reliable collaborators.
7.1.3 Style matching of genAI contributions (E3). GenAI’s ability to match developers’ work styles is a key aspect influencing trust. However, participants reported persistent challenges in genAI’s ability to do so, highlighting mismatches at two levels:
(a) Mismatch with task-specific or project styles: Participants emphasized that genAI outputs often failed to conform to project-specific styles, even when provided with contextual information. These inconsistencies extended beyond syntax, to include architectural patterns, project settings, code conventions, dependencies, and organizational best practices. P9 noted, “even when LLMs have codebase context, there is still a lot of product/organizational level context, i.e. product requirements, guardrails, settings, that is hard for it to grasp. AI doesn’t quite follow these things, [so] there are limitations around the trustworthiness of the output” (P9). These mismatches required participants to manually adjust AI-generated code to match these preferences, adding overhead to their work.
(b) Mismatch with individual styles: Participants reported that genAI did not match their development and problem-solving styles, thus requiring additional work to adapt its contributions. P201 noted, “[genAI] does not automatically refactor my code by applying design patterns. I am required to tweak its responses to fit my development style” (P201). They further noted that genAI’s approach to problem-solving often conflicted with their own. P165 explained, “AI tends to suggest quick fixes, but I prefer a more step-by-step approach to the problem at hand. It often skips over the problem-solving process I’m used to in its contributions” (P165). This misalignment introduced friction in genAI tool use, particularly for those who prioritized structured problem-solving over immediate solutions.
Additionally, these tools also struggled to maintain stylistic consistency in writing tasks, often producing outputs that lacked nuance or deviated from user expectations. P112 described AI-generated text as “bland, soulless”, yet attempts to add more character often resulted in “overboarded outputs”. P211 explained, “I’ve struggled with getting AI to write using the same voice as me. It will inject a lot of ‘corp-speak’ which I then have to edit out, and that makes me feel less productive” (P211). In brainstorming tasks, genAI’s responses tended to reinforce ideas rather than critically assess them. As P208 observed, “GPT is too easily impressed with my ideas and goes off with it, coming up with ways to extend it, whereas I would like it to be more critical and help me map out alternatives before committing to a single idea” (P208). These issues, in turn, reduced genAI’s effectiveness as a collaborative tool, diminishing trust among respondents.
7.1.4 Presentation (S2). Participants identified three core issues with genAI’s interface design and output formatting that hindered effective interactions. These issues centered on:
(a) Poor feedback mechanisms and affordances: Participants reported that genAI interfaces often lacked sufficient feedback mechanisms, making it difficult to refine interactions. P216 noted, “There are no clear affordances to understand how the tool works: am I expressing my idea properly? Do I need to refine my question?” (P216). Here, a key issue was prompt-output traceability, as participants struggled to determine how specific prompt elements influenced genAI responses. P58 described this challenge, stating, “I am not an expert on prompts about how to make [outputs] better. AI doesn’t provide feedback on what in the prompt contributed to the output, making it hard to improve or craft better prompts” (P58). From a theoretical standpoint, these issues make it harder for users to develop accurate mental models of a system, leading to uncertainty and diminished trust in its recommendations [73, 84].
(b) Constrained interaction modes: Participants found the chat-based interface restrictive, often limiting their interactions with genAI. P96 noted, “A chat mode isn’t quite enough for interacting with AI—I would prefer sliders, canvases, or some way to guide responses instead of retyping queries over and over” (P96). Additionally, the way genAI responses were presented made sensemaking difficult, introducing challenges in extracting relevant information. P134 highlighted, “A single-threaded interaction makes it hard to manage multiple ideas at once. Moreover, I want structured responses with sections or collapsible details, so I can quickly get to what matters” (P134). These constrained interaction modes ultimately made it harder for participants to manage complex problem-solving and ideation with genAI within their workflows.
(c) Excessive verbosity in outputs: Participants reported that genAI responses were often unnecessarily long and verbose. P161 noted, “sometimes, I find the responses to be too long even after instructing generative AI to write short answers (especially with ChatGPT)” (P161). Verbosity was particularly problematic in brainstorming contexts, where dense blocks of text reduced readability and slowed sensemaking. P216 explained, “AI-generated answers are often verbose, making them difficult to parse and unhelpful. I don’t like large blocks of text when brainstorming, but with AI, I frequently have to deal with it” (P216). Further, excessive verbosity cluttered coding environments, reducing usability. P187 stated, “The length/frequency of suggestions creates clutter in an environment that requires focus. Sometimes the code suggestions are way too long and don’t fit on the screen” (P187). Despite explicitly requesting concise responses, participants still had to manually filter and truncate AI outputs, adding unnecessary strain to their work.
7.1.5 Safe and secure practices (S3). The extent to which developers can assess whether and how genAI accounts for safety and security—both in design and behavior—plays an important role in shaping trust. Participants’ concerns in this regard focused on:
(a) Input data privacy risks: Participants expressed concerns about data exposure risks when using genAI. P91 stated, “most of the information I work with is proprietary, it is risky to trust AI with confidential data due to the potential for data leakage” (P91). This risk was even more pronounced when “working with an external AI tool, you need [safeguards] to anonymize all the sensitive information to reduce exposure risks” as P234 explained.
Beyond exposure risks, participants also struggled with the lack of transparency in how these tools handled input data. P230 noted, “privacy of my data is not guaranteed with AI, I don’t know how my data is getting stored or used” (P230). This uncertainty led participants to limit the amount of work data they provided as inputs to these tools. P37 summarized this hesitation, stating, “I do my best to not feed sensitive data to genAI. It has no clear indicators to show what extent my data is being used” (P37).
(b) Misinformation risks: Participants frequently encountered genAI’s tendency to generate “confidently incorrect code suggestions”, a persistent issue in SE tasks [22]. These errors were particularly problematic in technical contexts, where it introduced subtle yet critical failures in code. The tendency to present fabricated information persuasively increased the risk of errors until they became evident during implementations. As P183 shared, “I may not initially suspect any issues. It’s only when I attempt to implement the solution and encounter failures that I realize it was based on imaginary information” (P183). These breakdowns eroded participants’ trust in genAI, particularly for high-stakes development tasks.
(c) Legal and ethical concerns: Respondents raised concerns about regulatory compliance and intellectual property risks when using AI-generated content. P112 mentioned, “Copyright risk is highly probable as people are now often relying on genAI tools for IPs like generating logos, designs, or artwork” (P112). In SE contexts, genAI models could inadvertently reproduce open-source code snippets, introducing legal implications. P183 explained, “Being trained on open-source data, there is always a danger of direct code snippets of open-source code creeping into answers. This translates to a potential for legal risk, essentially making it difficult to use generated code directly even if it works” (P183). The uncertainty surrounding the legal standing of genAI’s contributions led to hesitation in trusting these tools. P207 remarked, “I guess I could be sued by a copyright holder for some of my code” (P207). Without clear indicators of ownership and liability, participants remained reluctant to directly integrate genAI’s contributions into their development work.
7.1.6 GenAI’s performance in tasks (S5) was a key aspect in shaping developers’ trust. Participants identified two primary issues that undermined their confidence in genAI tools:
(a) Inefficiency in complex or niche tasks: GenAI was reported as unreliable for domain-specific tasks, often failing to provide solutions beyond superficial suggestions. P60 noted, “AI is still not very reliable when working with domain-specific problems. It generates correct responses at times, but it’s not efficient in solving a problem end-to-end” (P60). For niche tasks with limited online resources, it often “regurgitated known solutions without producing any meaningful insights” (P202). Participants found these AI contributions to be low effort, leading to wasted time and resources. P198 articulated this, stating, “...for complex tasks, AI often provides flaky low-effort solutions. Its efficiency drops when you want its help in developing unique solutions, costing more time than worth” (P198).
(b) Poor error handling and recovery mechanisms: Participants reported that genAI lacked effective error-handling mechanisms to correct its mistakes or adapt to user feedback. P75 described, “...it can be hard to nudge the model in the right direction. AI doesn’t have proper error handling, when you identify an error, it acknowledges and thinks about ‘alternate ways’ to give back similar errors, before I give up. It’s frustrating to use it in these instances” (P75). Even when mistakes were explicitly pointed out, genAI still struggled to meaningfully adjust its responses. P127 shared, “...it gave me a suggestion that included logging into a service, but when I said I had no login for that service, it repeated the same instructions but told me to skip the login step” (P127). Participants also highlighted the absence of recovery mechanisms during these instances. P3 noted, “GenAI is much prone to rabbit-holing on the incorrect way to solve a problem. Once it gets in that loop, it lacks a recovery process to get out of it” (P3). Instead of adjusting its approach, genAI often doubled down on “producing cascading errors”, making debugging more time-consuming for participants.
# 7.2 Behavioral Intention
Our model identified four primary determinants of developers’ behavioral intentions (BI) towards genAI tools: trust in genAI (H5), alongside the extent of technophilic motivations (H6), computer self-efficacy (H7), and risk tolerance (H8). These factors capture specific psychological dimensions that shape developers’ willingness to integrate these tools into their work. Fig. 6 presents the IPMA results for BI, showing that trust, risk tolerance, and technophilic motivations fall into Quadrant 4 (Q4), indicating that these factors require the most attention to improve adoption.
However, these are human-centric traits rather than specific genAI aspects, so their placement on the map should be interpreted differently. For instance, risk tolerance is located in Q4, which means that while it strongly influences BI towards genAI, in our study’s context, participants reported low risk tolerance when using these tools for work. To understand why these perceptions emerged, we leveraged qualitative insights to map the specific challenges and risks that shaped them. Note that, since we have covered trust and its corresponding factors in Sec. 7.1, here we focus on (1) risk tolerance and (2) technophilic motivations.
7.2.1 Risk Tolerance plays a significant role in shaping developers’ BI towards genAI (Fig. 2). This cognitive facet reflects an individual’s willingness to embrace uncertainty when adopting new technologies [14]. In Fig. 6, risk tolerance falls into Q4, indicating high importance but relatively low tolerance levels among participants, suggesting their cautious stance towards genAI adoption.
Much of this caution stemmed from their concerns over safety and security practices in genAI’s design and behavior (S3) (see Sec. 7.1). Participants highlighted risks related to input data privacy and misinformation, alongside legal and ethical issues associated with using genAI’s contributions in development tasks.
Another key aspect driving reluctance in adoption was concerns with genAI-induced technical debt. This corresponded to the accumulation of long-term maintenance challenges, security vulnerabilities, and integration issues from AI-generated contributions, resulting in increased complexity and future rework. Three primary issues were identified in this regard:
(a) Software bugs and maintenance issues: Participants expressed apprehension about AI-generated code containing “subtle” hard-to-detect bugs, which required extensive debugging efforts. This issue was particularly insidious, as genAI contributions often passed initial tests but failed in later development stages. P142 described, “Sometimes when generating code, it may produce semantically correct output embedded with hidden errors that are hard to uncover and debug...I realized it didn’t work and that took me a while to fix” (P142). Further, participants observed that these bugs propagated throughout the codebase, “accumulating into maintenance issues that add[ed] more work” (P166). Consequently, teams had to allocate additional effort to verify genAI’s contributions to ensure software maintainability. A participant quipped about this overhead: “It feels like planting landmines for the many future maintainers of code written with AI assistance” (P217). These issues ultimately increased skepticism among participants, raising concerns about the long-term reliability and hidden risks of genAI’s contributions in software development.
Fig. 6. IPMA of predictors to behavioral intentions (BI) towards genAI.
(b) Security vulnerabilities: Participants reported concerns about genAI-induced security vulnerabilities, which amplified their risk aversion in using these tools. While genAI accelerated development, it often “didn’t take security best practices into account”, increasing latent risks in SE tasks. As P192 put it, “I don’t use AI-generated code in production. It often introduces a whole slew of vulnerabilities as it lacks context about security standards” (P192). Respondents also flagged risks in genAI’s exception-handling practices, especially for client-facing software. Without careful oversight, these outputs could inadvertently disclose sensitive system information. P102 warned, “If AI is generating anything user-facing, it needs to be closely reviewed...we want to be careful about what error text we are exposing. It should not reveal any technical details that could pose a security risk” (P102). Overall, these security lapses reinforced participants’ cautious stance, restricting genAI’s role to prototyping rather than production-level assistance.
(c) Deviations from code quality standards: Another dimension of tech-debt arose from AI-generated code frequently deviating from established code quality standards. Participants described persistent issues in integrating these contributions into existing codebases. P83 emphasized that while “GPT-generated solutions work well in isolation, they fail to follow the project’s coding standards or scalability requirements. Code quality took a hit with AI-generated solutions” (P83). Participants further noted that AI-generated code often “prioritize[d] test cases over writing cohesive and scalable code” (P171), overlooking key coding conventions. Subsequently, this required constant manual refinement, increasing development overhead. P203 summarized, “genAI produces varied quality of code across different suggestions, making it difficult to maintain a cohesive codebase” (P203). Participants remained cautious about AI contributions, recognizing the considerable effort required to prevent long-term erosion of software quality.
7.2.2 Technophilic motivations positively affect developers’ intentions to use genAI. Broadly, individuals with stronger intrinsic motivations tend to be more enthusiastic about exploring, adopting, and integrating new technologies [14, 117]. However, their expectations for reliable system/output quality and low-friction interactions make them susceptible to disengagement when these expectations are unmet [7, 69]. In our study’s context, issues with (a) genAI’s accuracy and task performance (S4, S5), and (b) friction in interactions (S2, E4) made it difficult for participants to derive satisfaction from using these tools. Given that Sec. 7.1 details these issues in depth, here we focus on how they mapped to participants’ intrinsic motivations to use genAI.
(a) GenAI’s accuracy and performance in tasks: GenAI’s credibility diminished when its responses were contextually inappropriate (S4a) or unpredictable (S4c). This forced participants into repeated rework cycles, breaking their flow. P219 noted, “Sometimes when I try to use it, I end up fixing small contextual mistakes again and again. It often breaks my flow, so I give up on it” (P219). Such breakdowns frequently disrupt user engagement [29] and, in conjunction with the other challenges (e.g., verification burden), end up reducing users’ inclination to re-engage with the tool. Further, respondents reported issues with genAI’s task performance (S5), noting its limitations with niche problems (S5a) and error recovery (S5b). Prior work has shown that intrinsic motivations often depend on the anticipated outcomes and value of a tool’s use [123]. In our context, these performance issues lowered genAI’s perceived utility, curbing participants’ willingness to experiment with it in their workflows.
(b) Friction in interactions: GenAI’s limited feedback affordances (S2a) and constrained interaction modes (S2b) introduced friction that disrupted user engagement. Participants described how the lack of system feedback made it harder to identify errors and refine prompts to adjust outputs. They also critiqued the dominant chat-based interface as restrictive, particularly for ideation and exploratory tasks. P117 noted, “The interaction feels limiting, there’s no easy way to organize information intuitively...it’s hard to explore ideas using chat” (P117). Drawing on Self-Determination Theory (SDT) [31], these frictions undermine intrinsic motivations by (1) eroding competence, i.e., limited feedback made it difficult to develop accurate mental models of how the system responds to inputs; and by (2) impeding autonomy, i.e., participants felt constrained in shaping and steering the system’s behavior and responses. Simply put, when tools are opaque or difficult to control, they erode users’ sense of agency, in turn reducing satisfaction and intrinsic motivations to use them.
Frictions also emerged from breakdowns in goal maintenance (E4). Participants noted high verification burdens (E4b), prompting efforts (E4c), and substantial rework in modifying genAI outputs (E4d). Each of these imposed an extraneous load [92], diverting cognitive resources from primary tasks. These breakdowns introduced what participants described as a “cost of exploration”, turning what should be an intuitive engagement into an effortful, cognitively taxing one [119]. Such repeated friction suppressed sustained engagement, especially when intrinsic efforts drove tool usage.
Overall, to design for trust and sustained adoption, it is not enough for genAI tools to be performant; they need to stay on track, lighten cognitive burden, and align with developers’ goals. The findings underscore that trust erosion is not solely about accuracy—it’s also about friction and alignment, and addressing these gaps is pivotal for realizing genAI’s potential in developer workflows.
# 8 DISCUSSION
Our investigation into developers’ genAI adoption focused on analyzing the multitude of factors affecting trust and its interplay with cognitive styles in shaping behavioral intentions. Moreover, we dissected specific factors that exert stronger influence on these constructs and are simultaneously viewed as lacking, offering actionable insights for researchers and practitioners in SE and AI. Importantly, we adopted a tool-agnostic approach to capture a comprehensive view of adoption dynamics, recognizing the rapidly evolving nature of the genAI landscape. Our findings provide early, data-driven signals on what matters most to developers—insights we now distill into implications for practice and research.
# 8.1 Implications for practice
Designing for goal maintenance: Our findings revealed a notable disparity between the importance developers assign to goal maintenance, and genAI’s perceived adequacy in supporting it. This gap highlights a key design imperative—genAI tools must better sustain goal maintenance to foster trust.
To do so, these tools must consistently account for the developer’s (1) current state; (2) immediate goals, and expected outcomes from the AI [124]; as well as their (3) preferences for transitioning between the two. Such preferences may span process methodologies, coding conventions, output specifications, business requirements, and task-specific constraints, amongst others. One way to enforce them is through custom guardrails, i.e., user-defined rules and checks that constrain or guide genAI’s behavior across inputs, intermediate steps, and outputs. For example, the Cursor IDE [30] provides custom rule-sets (global and project-specific) that allow developers to embed these preferences into AI-assisted code generation. Still, guardrails should be operationalized to recurrently critique, verify, and adjust genAI’s actions based on user-defined constraints (e.g., stylistic norms, domain-specific validations, and safety considerations) and expected outcomes. Embedding such mechanisms could also, in turn, improve output appropriateness (S4a, S2c) and stylistic fit (E3), while reducing the cognitive effort required for verifications (E4b) and post-hoc corrections (E4d).
Moreover, allowing developers the flexibility to explicitly steer AI’s actions as needed is also important for goal maintenance. This control can be essential when the genAI tool deviates from the expected trajectory (E4a), enabling developers to (re)calibrate it to support their goals. For instance, interfaces could provide affordances to control how and when outputs are surfaced, what contextual memory is retained across sessions, and how intermediate reasoning steps can be adjusted. Developers may use these to scope suggestions to only verified outputs (e.g., those passing compliance or static analysis checks), toggle long-term memory contexts and updates, or edit intermediate steps to guide the final AI (agent) output. Embedding such controls also supports developers’ metacognitive flexibility [111], i.e., they may adapt their cognitive strategies based on new information or task-state changes. The ability to steer the AI to align with these adapted strategies helps accommodate users’ evolving goals and task conditions.
Designing for contextual transparency: Developers frequently choose to incorporate genAI tool support in their tasks. Yet, as RQ3 revealed, many participants struggled to refine outputs (S2) or reason about performance and output quality in complex tasks (S4, S5). This disconnect risks miscalibrated trust—potentially leading to increased errors and/or productivity loss [89]. Calibrating expectations to reflect a tool’s true capabilities is, therefore, essential.
One lever for such calibration is contextual transparency, i.e., interfaces that reveal, in situ, the system’s boundaries, behavior, and failure modes, consistent with established Human-AI (HAI) design guidelines [6, 40]. This can help users form accurate expectations about system quality (H1) within their task contexts, thereby fostering warranted trust [60]. Drawing on this, we suggest:
(a) Communicating limitations in task contexts and scoping assistance under uncertainty. One way to achieve this is to expose competence indicators at the point of use. For example, systems could display solution-level confidence scores, based on past performance or user feedback on similar tasks. Such indicators can reduce the cognitive effort needed to assess performance and output quality (S4, S5), allowing developers to selectively verify, use, or disregard genAI’s contributions. Wang et al. [120] found that surfacing confidence cues, in general, helped developers evaluate the quality of AI-generated code suggestions more effectively, cultivating trust.
(b) Exposing prompt-output relationships. To understand and debug genAI behavior, developers need visibility into how specific parts of prompts influence generated output (S2a). Interfaces could use feature-based explanations [19] and/or visual attributions to make these relationships explicit (e.g., color-coded mappings between prompt and output segments), thus supporting iterative prompt refinement and output interpretation.
Designing for effective interaction and sensemaking: Participants reported facing considerable challenges in crafting effective prompts (E4c) and navigating through rigid interaction and output formats (S2b, S2c). These frictions disrupted productive exploration, ultimately eroding their trust and willingness to use these tools. To address this, rethinking interaction mechanisms and how systems support developers’ sense-making is essential.
(Interaction) To reduce prompting effort (E4c), interfaces could support grounded utterance transformations [73], wherein naturalistic user intents are reconstructed into formalized prompts. Instead of requiring precise prompts upfront, systems could externalize how effective inputs are crafted, allowing developers to reflect and iterate on their prompting strategies over time.
(Sensemaking) To address constrained or verbose output structures (S2b, S2c), interfaces could organize responses to better support layered explorations. Drawing on design ideas from Sensecape [110], interfaces could employ (1) collapsible views and hierarchical structures to surface condensed concepts upfront, while enabling (2) deeper semantic zooms on demand. Such scaffolds help manage information overload, allowing developers to traverse between high-level and detailed representations based on their information needs.
Designing for HAI-UX fairness: While most fairness efforts in AI focus on data and algorithms [41], fairness in HAI user experiences (HAI-UX) remains comparatively under-theorized. In the context of genAI for software development, we advocate promoting HAI-UX fairness through inclusive tool design catered to developers’ diverse cognitive styles.
Our findings show that developers who are intrinsically motivated to use technology (H6), have higher computer self-efficacy (H7), and greater tolerance for risk (H8), report significantly stronger intentions to adopt genAI tools. Conversely, individuals who do not share these traits are less inclined to use them. This suggests that genAI tools, often optimized for early adopters, may inadvertently privilege a narrow cognitive subset of users, reinforcing interactional inequities.
To support HAI-UX fairness across the cognitive spectrum, toolsmiths must prioritize adaptability in design. Specifically, interfaces could detect users’ cognitive styles through brief onboarding instruments (e.g., survey items from this study) or infer them from usage patterns. Subsequently, they could dynamically adapt to these styles using various strategies. For example, task-motivated developers may benefit from tools that prioritize delivering intended outcomes first in coordination-heavy tasks, with contextually relevant explanations provided post-hoc. Rather than overwhelming or interrupting users mid-task, these tools could reveal their reasoning afterward, when the user has bandwidth to better contemplate them. As another example, for risk-averse developers, contextual transparency (as discussed earlier) is essential. Given their sensitivity to flaws, systems should surface uncertainty indicators (e.g., confidence scores [120]) and clarify underlying assumptions. Doing so facilitates informed decision-making about integrating genAI’s contributions into tasks; ergo accommodating the caution and deliberation associated with risk aversion.
# 8.2 Implications for research
Our study establishes an understanding of the relationships between developers’ trust, cognitive styles, and behavioral intentions in the formative stages of genAI adoption. Further, it provides a psychometrically validated instrument to measure trust-related factors in human-genAI interactions. Researchers can use this instrument to operationalize theoretical expectations or hypotheses; for example, to capture these constructs in finer contexts, refine genAI tools through design changes, and compare user experiences before and after redesign—thereby advancing the understanding of AI adoption in software development. Additionally, our study identifies specific high-impact yet relatively underperforming genAI aspects, offering a prioritized roadmap for design improvements to support trust and adoption.
Non-significant associations: Our analysis did not find support for Hypotheses H3 (p = 0.59), H9 (p = 0.06), and H10 (p = 0.33). These findings are surprising, as ease of use, information processing, and tinkering learning style are relevant when considering traditional software tools [14, 116]. However, in genAI contexts, these constructs may manifest differently due to the altered dynamics of user engagement compared to more traditional software. For example, ease of use might not show a relation as using these AI interfaces is inherently easy; instead, the formulation of the queries is what matters most. Similarly, developers’ information processing style (H9) did not significantly influence their intentions to use genAI, likely because how individuals articulate their needs—a single comprehensive prompt or sequence of queries—often aligns with their preferences for consuming information (comprehensive or selective). The lack of a relationship for tinkering style (H10), as well, could be attributed to genAI’s interaction paradigm, which is primarily centered around (re)formulating and following up with queries rather than “tinkering” with the software’s features. If these speculations hold, how certain validated constructs were framed in the study [1] might have indeed limited our understanding of these dynamics. Future research should explore these constructs more deeply within the context of human-genAI interactions. For instance, instead of focusing on ‘ease of use’ or ‘tinkering with software features’, studies could examine ‘ease of prompting’ or ‘tinkering with prompt strategies’ and how preferences (and proficiency) in these areas influence developers’ trust and behavioral intentions. Understanding these dynamics can inform future design and adoption strategies of genAI tools, aligning them more closely with user interaction patterns and cognitive styles.
Finally, while our cross-sectional study provides valuable insights, longitudinal research is needed to understand how trust and adoption patterns evolve as developers gain more experience with these tools. Future work should examine how trust dynamics differ across finer software development contexts and investigate the interplay between emerging AI capabilities and developers’ evolving needs and expertise. This will further clarify how trust and usability co-develop over time, particularly as these tools move beyond the formative adoption stages.
# 8.3 Threats to validity and limitations
We captured constructs through self-reported measures derived from established theoretical frameworks. Participants rated their agreement with indicators from validated instruments, including TXAI for trust, GenderMag for cognitive styles, and UTAUT for behavioral intention. For PICSE-based constructs, where no prior instrument existed, we conducted psychometric analysis to refine factor structure and establish measurement reliability and validity. To further support construct validity, we involved practitioners in the survey design process, conducted pilot testing with collaborators at GitHub, randomized question blocks to reduce order effects, embedded attention checks, and screened for patterned responses. All latent constructs met standard thresholds for convergent and discriminant validity. While we did not directly ask participants to self-assess their genAI expertise, we used their reported familiarity and usage frequency as proxies for it.
Our hypotheses test associations between constructs, rather than causal relationships, given the cross-sectional nature of the study [108]. Self-selection bias remains a possibility, as individuals with strong views about genAI may have been more willing to participate. Further, a theoretical model like ours cannot capture an exhaustive list of factors. Other factors might certainly also play a role, thus positioning our results as a reference for future studies. Moreover, trust is inherently context-dependent [60], and though we targeted software development broadly, variations may exist across finer contexts, roles, or tasks (e.g., testing vs. design). Therefore, our results should be interpreted as a theoretical starting point, guiding future studies to consider longitudinal designs and deeper contextual differentiation to strengthen causal claims and generalize more broadly.
All data were collected in a single, anonymous survey round without the possibility of follow-up to validate our observations. This limitation stems from organizational policies that prevent recontacting participants. To mitigate this, we took several steps. First, we tested for and found no evidence of Common Method Bias in the survey (see Sec. 5.2.3).
Second, we triangulated quantitative results with thematically coded qualitative responses derived using reflexive thematic analysis. Multiple coders engaged in several rounds of discussion and consensus-building to iteratively refine themes and categories. These qualitative responses demonstrated strong alignment with participants’ ratings for the factors. Our interpretations were further grounded in established behavioral theories, which strengthen the credibility of the findings. Given the internal consistency of the results, the transparency of our analysis process, and the triangulation with theory, we consider this approach sufficient to ensure the reliability of our conclusions. Still, as in any survey-based findings, ours too, are based on self-reported perceptions and experiences. Future work might consider observational studies to identify specific real-world instances of genAI use in finer contexts, alongside challenges and risks that hinder trust and adoption of these tools in nuanced settings.
Our sample comprises developers from GitHub and Microsoft, two globally recognized organizations. While this scope enhances relevance for industry settings and includes engineers from around the globe, it may limit the generalizability to smaller organizations, open-source contributors, or developers in non-corporate settings. However, our participant demographics and role distributions are consistent with prior empirical studies on software engineers [96, 112], providing a suitable starting point to understand the associations presented in our model. Nevertheless, given our study’s focus on theory development, we aim for theoretical rather than statistical generalizability [105]. Replication and validation of the model across broader populations and varied contexts remain necessary future steps.

Abstract: Generative AI (genAI) tools are advertised as productivity aids. Yet, issues related to miscalibrated trust and usage friction continue to hinder their adoption. Additionally, AI can be exclusionary, failing to support diverse users adequately, further exacerbating these concerns. One such aspect of diversity is cognitive diversity -- variations in users' cognitive styles -- that leads to divergence in interaction styles. When an individual's cognitive styles are unsupported, it creates additional barriers to technology adoption. Thus, to design tools that developers trust, we must first ask: what factors affect their trust and intentions to use these tools in practice?

We developed a theoretical model of factors influencing trust and adoption intentions towards genAI through a large-scale survey with developers (N=238) at GitHub and Microsoft. Using Partial Least Squares-Structural Equation Modeling (PLS-SEM), we found that genAI's system/output quality, functional value, and goal maintenance significantly influence developers' trust, which, along with their cognitive styles, affects their intentions to use these tools in work. An Importance-Performance Matrix Analysis (IPMA) identified factors that, despite their strong influence, underperform, revealing specific genAI aspects that need design prioritization. We bolster these findings by qualitatively analyzing developers' perceived challenges and risks of genAI usage to uncover why these gaps persist in development contexts. For genAI to indeed be a true productivity aid rather than a disguised productivity sink, it must align with developers' goals, maintain contextual transparency, reduce cognitive burden, and provide equitable interaction support. We provide practical suggestions to guide future genAI tool design for effective, trustworthy, and inclusive human-genAI interactions.

Categories: cs.HC, cs.SE
# 1. Introduction
Databases are an essential part of modern computer systems for data storage and knowledge management. They are typically accessed via query languages such as SQL (for relational databases) or Cypher (for graph databases), which allow database experts to store and query data for insight. However, recent advancements in LLMs have enabled the translation of natural language questions into database queries (Text2SQL, Text2Cypher), allowing non-expert users to query data models on their own terms.
To help contextualize an LLM when generating database queries from natural language, a common practice is to incorporate database schema information. Figure 1 shows an example schema where nodes (e.g., Organization, Person) connect through relations (e.g., Has_CEO, Has_Investor) with their properties (e.g., name, age). Schemas can be provided to LLMs via prompting, but complex schemas introduce noise, increase hallucinations, and raise costs [1, 2]. Schema filtering addresses these challenges by selecting only relevant elements, improving query generation while reducing token costs.
In this paper, we apply five schema linking and filtering approaches that improve Text2Cypher: two static methods that extract the full database schema in different formats, and three dynamic methods that prune the schema based on the input question. We evaluate their impact on a Text2Cypher dataset, analyzing token distribution, Cypher generation performance, and cost. Our main contributions are:
• We propose new schema filtering techniques. The two static methods use the full database schema in different formats, while our three dynamic methods prune it based on the input question.
• We analyze their impact on the Text2Cypher task, specifically on prompt token length distribution, query generation performance, and computational cost.
• Our results show that schema filtering improves Text2Cypher efficiency. While larger models benefit less due to their extended context windows, smaller models perform better with shorter prompts. Nevertheless, schema filtering remains a cost-effective strategy for all models.
The paper is structured as follows: Section 2 discusses related work on translating natural language to query languages and schema filtering approaches. Section 3 focuses on the applied schema-filtering approaches for the Text2Cypher task. Section 4 presents our experimental setup and evaluation results. Finally, Section 5 concludes the paper.

Figure 1: Overview of an Example Database

Table 1: Instructions used
# 2. Related Work
# 2.1. Natural Language to Database Query Language
Recent advances in large language models (LLMs) have significantly improved the ability to translate natural language into database query languages. For instance, there has been extensive research on the Text2SQL task, which translates natural language queries to SQL [3, 4, 5, 6]. Until recently, the Text2Cypher task, which translates natural language into Cypher—the query language used by Neo4j and other graph database systems—had received less attention. However, with advancements in the integration of LLMs and knowledge graphs, text-to-graph query language (GQL) tasks, particularly Text2Cypher, have gained increasing interest. Several datasets have been developed to support Text2Cypher research, including Opitz and Hochgeschwender [7], S2CTrans [8], CySpider [9], Rel2Graph [10], SyntheT2C [11], and Text2Cypher [12]. Additionally, studies have explored benchmarking and fine-tuning models for this task, with contributions such as GPT4Graph [13], TopoChat [14], Baraki et al. [15], FCAV [16], Liang et al. [17], and Text2Cypher [12]. In most cases, the baseline model is fine-tuned using prompts that include natural language questions, database schema information, and ground-truth Cypher queries.
# 2.2. Schema Filtering in Query Generation
Schema information is essential for accurate query generation, ensuring correct linking of query terms to database structures [1, 2]. This process, known as schema linking, plays a key role in Text2SQL and Text2Cypher tasks by mapping query words to relevant database elements [18, 1]. While providing the full schema in the prompt is possible, schema filtering is often preferred to reduce noise, computational cost, and hallucinations [1, 19]. However, excessive filtering can remove essential components, harming accuracy [20].

Early Text2SQL schema filtering relied on heuristics like string matching, as seen in IRNet [4] and TypeSQL [21]. Later, learning-based methods such as Dong et al. [22], Bogin et al. [23], and RAT-SQL [24] were proposed. Recent approaches utilize LLMs through prompting, fine-tuning, or agent-based techniques, such as DIN-SQL [25], RESDSQL [5], CHESS [26], E-SQL [1], ExSL [27], and KaSLA [28]. While schema filtering is common, studies suggest it is less necessary for LLMs with long context windows but remains valuable for smaller models [20, 1, 2]. The trade-off is, however, that larger context sizes increase latency and computational cost for complex databases, making filtering highly beneficial [2].

Figure 2: (a) Enhanced Schema, (b) Base Schema
Research on schema filtering for Text2Cypher or other graph query languages (Text2GQL) is presently limited compared to Text2SQL. Liang et al. [17] explored aligning LLMs for a Text2GQL task in Chinese, using a schema filtering module that performs: (i) extraction of the database schema as a dictionary, (ii) extraction of the named entities from the query, and (iii) mapping of these entities to the schema dictionary. For queries requiring multiple nodes and relations, they used the A* algorithm [29] to find the shortest path. NAT-NL2GQL [30] includes a module for preprocessing inputs and executing schema extraction, following a similar approach to Liang et al. [17]. Additionally, they use an LLM for filtering multiple matched schema items before proceeding with the Text2GQL task. In this work, we examine the impact of schema filtering on the Text2Cypher task, focusing on both performance and cost.
# 3. Schema Filtering for Text2Cypher
We now present schema filtering for Text2Cypher using a template from [12] (Table 1), which includes instructions, schema, input question, and generated query fields. Our focus is on the schema field, where we experiment with two static and three dynamic formats.
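The template's overall shape can be sketched as follows. This is a minimal illustration only: the field labels, wording, and ordering below are assumptions for demonstration, not the exact template from [12].

```python
def build_prompt(instructions: str, schema: str, question: str) -> str:
    # Assemble the template fields (instructions, schema, input question,
    # plus a trailing slot for the generated query) into one prompt string.
    # The labels "Instructions:", "Schema:", etc. are illustrative.
    return (
        f"Instructions: {instructions}\n"
        f"Schema: {schema}\n"
        f"Question: {question}\n"
        f"Cypher:"
    )

prompt = build_prompt(
    "Generate a Cypher query for the question using the given schema.",
    "(:Person)-[:ACTED_IN]->(:Movie)",
    "Which actors appeared in 'The Matrix'?",
)
```

During fine-tuning, the ground-truth Cypher query would follow the final field as the completion target; at inference time, generation starts after it.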
# 3.1. Static Schemas
Cypher is the query language for Neo4j, a graph database. Neo4j offers various tools for retrieving database schema information, based on the database structure rather than the input query. While this allows efficient caching, it leads to longer schema representations, increasing token length and context requirements for LLMs. We utilized two static schema formats provided by Neo4j frameworks:
• Enhanced Schema: This is one of the default schema types provided by Neo4j. It provides an enhanced view of the database schema, including the list of nodes, relationships, and their properties. It additionally provides example values for the fields. For instance, if the property is the ’name’ of the ’Actor’ node, examples might include: [’Tom Hanks’, ’Julia Roberts’, ...]. An example enhanced schema is presented in Figure 2(a).
• Base Schema: This is another default schema type provided by Neo4j. It provides similar information as the Enhanced Schema, except it does not include example property values, and the formatting is different. An example for this schema format is presented in Figure 2(b).

Figure 3: Example pruned schemas: (a) Pruned By Exact-Match Schema, (b) NER Masked & Pruned By Exact-Match Schema, (c) Pruned by Similarity Schema.
# 3.2. Dynamically Pruned Schemas
We implement three dynamic schema filtering approaches, which prune the baseline schemas based on the input natural language question.
• Pruned By Exact-Match: This approach compares node labels, relationship types, and properties to words in the input question. Similar to Liang et al. [17] and NAT-NL2GQL [30], if an exact case-insensitive match is found, the corresponding schema elements are retained; otherwise, they are removed. Our method also considers properties as well as labels, and we retain multiple matching elements (e.g., synonyms) to prevent excessive pruning. See Figure 3(a) for an example.
• NER Masked & Pruned By Exact-Match: This approach replaces named entities with their entity types before applying exact-match filtering. NER masking prevents irrelevant matches. For example, in the query "List the articles that mention the organization ’Acme Energy’," it avoids incorrect matches, such as retaining properties of a node labeled ’Energy,’ which is unrelated. See Figure 3(b) for an example.
Figure 4: Token distributions of the training and test sets, based on the tokenizer from the Llama-3.1-8B model. The 95th percentile (p95) value is marked with a purple line.
• Pruned by Similarity: This approach extends exact-match pruning by incorporating similarity-based filtering. Instead of requiring an exact match, it computes similarity scores between query terms and schema elements, retaining only those above a predefined threshold. While various similarity measures could be used, we rely on embedding-based similarity. An example of this schema filtering approach is shown in Figure 3(c).
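As a minimal sketch of the exact-match variant (the schema element list, tokenization, and function name here are illustrative assumptions, not the authors' implementation), pruning can be a case-insensitive whole-word comparison between schema element names and question words:

```python
import re

def prune_schema_exact_match(question: str, schema_elements: list[str]) -> list[str]:
    """Retain node labels, relationship types, and properties whose name
    appears as a whole word in the question (case-insensitive)."""
    words = set(re.findall(r"\w+", question.lower()))
    return [el for el in schema_elements if el.lower() in words]

kept = prune_schema_exact_match(
    "Which actor directed a movie?",
    ["Actor", "Movie", "Director", "ACTED_IN", "name"],
)
# kept == ["Actor", "Movie"]
```

A real implementation would additionally split multi-word or underscored element names (e.g., ACTED_IN) and keep all candidate matches, as described above, to avoid over-pruning.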
# 4. Experimental Setup and Results
# 4.1. Experimental Setup and Evaluation Metrics
We conducted experiments using a publicly available Text2Cypher dataset [12], focusing on a subset with accessible databases for query execution, resulting in 22,093 training and 2,471 test samples. Schema filtering was assessed using the ’unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit’, ’unsloth/Qwen2.5-7B-Instruct-bnb-4bit’, and ’GoogleAIStudio/Gemini-1.5-Flash’ models, referred to as Llama-3.1-8B, Qwen2.5-7B, and Gemini-1.5-Flash, respectively, in the remainder of the paper. After Cypher generation with the LLMs, an additional post-processing step is executed to remove unwanted text, such as the ’cypher:’ suffix. Furthermore, the spaCy framework is used for named entity extraction and similarity computations.
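The NER-masking step described in Section 3.2 can be sketched as follows. This is a simplified stand-in: in the actual pipeline spaCy supplies the entity spans, whereas here they are passed in precomputed so the sketch stays self-contained:

```python
def mask_entities(text: str, entities: list[tuple[int, int, str]]) -> str:
    """Replace each (start, end, label) entity span with its label.
    Spans are applied right-to-left so earlier offsets stay valid."""
    for start, end, label in sorted(entities, reverse=True):
        text = text[:start] + label + text[end:]
    return text

masked = mask_entities(
    "List the articles that mention the organization 'Acme Energy'.",
    [(49, 60, "ORG")],  # span of 'Acme Energy', as an NER model would report it
)
# masked == "List the articles that mention the organization 'ORG'."
```

After masking, exact-match pruning no longer sees the literal string "Energy", so an unrelated ’Energy’ node label is not retained.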
To compute evaluation metrics, we used the Hugging Face Evaluate library [31]. We employed two evaluation procedures: (i) Translation-based (lexical) evaluation: compares generated Cypher queries with reference queries based on text content; we report the Google-BLEU score. (ii) Execution-based evaluation: executes both generated and reference queries on the target databases and compares their outputs (sorted lexicographically); we report the ExactMatch score.
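The execution-based comparison can be sketched as follows (a simplified stand-in under stated assumptions: row stringification and the function name are illustrative, not the paper's evaluation code):

```python
def execution_exact_match(generated_rows, reference_rows) -> float:
    """Return 1.0 if both query result sets are identical after
    lexicographic sorting of their stringified rows, else 0.0."""
    normalize = lambda rows: sorted(str(row) for row in rows)
    return float(normalize(generated_rows) == normalize(reference_rows))

# Row order does not matter once outputs are sorted:
score = execution_exact_match(
    [("Tom Hanks", 5), ("Julia Roberts", 3)],
    [("Julia Roberts", 3), ("Tom Hanks", 5)],
)
# score == 1.0
```

Sorting before comparison makes the metric robust to queries that are semantically equivalent but return rows in different orders.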
# 4.2. Evaluation Results
We evaluate the proposed schema formats based on (i) token distribution and cost, and (ii) performance.
# 4.2.1. Impact on Token Distribution & Cost
Schema format impacts both prompt length and token count. For example, with the Llama-3.1-8B tokenizer, the base prompt is about 150 tokens, but adding schema information increases it to over 2,700 tokens. Figure 4 shows token distributions for training and test sets. Table 2 provides additional token details for the test set. Results show that the Enhanced Schema leads to the longest prompts, while switching to the Base Schema reduces the P95 token length by one-third. Exact-match pruning (with or without NER masking) further reduces the P95 token length to 1/6th of the original. Similarity-based pruning increases schema length but reduces the P95 token length to about 1/4th of the original.
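The P95 figures above can be reproduced with a nearest-rank percentile over per-prompt token counts (a generic sketch; the paper does not specify its exact percentile method, and the token counts below are toy values):

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value such that at least
    p% of the data is less than or equal to it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

token_counts = list(range(1, 101))  # toy per-prompt token counts
p95 = percentile(token_counts, 95)
# p95 == 95
```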
Table 2 Token distribution statistics of the test set
Table 3 Schema type vs. cost. Notes: median token counts from Table 2; number of instances: 20K; prices as of Feb. 27, 2025: (i) GoogleAIStudio/Gemini-2.0-Flash: $0.15 / 1M tokens, (ii) Anthropic/Claude 3.5 Haiku: $0.80 / 1M tokens, (iii) self-hosted LLaMA-3.1-8B: assumed A40 48GB machine on RunPod ($0.44 per hour) at 20 tokens/sec.
Reducing the token count reduces costs, whether for LLM vendor payments or infrastructure expenses for self-hosted models (e.g., storage and GPU access). In a scenario with 20,000 instances, where input token length aligns with the median in Table 2, we compare costs across models (Table 3). In the table, we assume output lengths remain constant and only input tokens contribute to the cost. The results show that cost scales linearly with token usage, but factors like output token count, caching, and batch processing can affect this. Shorter prompts lead to significant cost reductions.
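Under the assumptions of this scenario (constant output length, only input tokens billed), the per-model cost reduces to a linear formula. The sketch below uses the Gemini-2.0-Flash price quoted in Table 3; the median token count of 2,700 is an illustrative placeholder, not a value from the paper:

```python
def prompt_cost_usd(median_tokens: int, n_instances: int,
                    usd_per_m_tokens: float) -> float:
    """Input-token cost for n_instances prompts of median length,
    at a per-million-token price (output tokens ignored, as in Table 3)."""
    return median_tokens * n_instances * usd_per_m_tokens / 1_000_000

# e.g., 20K prompts at a hypothetical median of 2,700 tokens,
# priced at $0.15 / 1M input tokens:
cost = prompt_cost_usd(2_700, 20_000, 0.15)
# cost == 8.1 (USD)
```

Because the formula is linear in median_tokens, halving the schema length halves this component of the bill, which is why the pruned formats dominate the cost comparison.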
While dynamic pruning reduces token length and costs, it may introduce computational overhead as a side-effect. Unlike Enhanced or Base Schema (which are cached), dynamic pruning is performed for each query, which might increase latency. However, we observe this overhead is minimal, especially for methods like ‘Pruned by Exact-Match,’ which uses regular expression matching.
# 4.2.2. Impact on Performance
We evaluate the impact of the proposed schema formats on Text2Cypher performance using the Llama-3.1-8B model. Figure 5 presents the results, showing that longer prompts lead to lower performance. The highest accuracy is achieved with the ‘Pruned by Exact-Match Schema.’ NER masking and similarity-based matching did not improve performance but may be beneficial for other datasets.
We further compared the performance of different LLMs on a selected subset of schema formats. In addition to Llama-3.1-8B, we evaluated Qwen2.5-7B and Gemini-1.5-Flash. While Llama-3.1-8B and Qwen2.5-7B are similar in size, they differ in multiple ways, such as tokenization strategies. Gemini-1.5-Flash, in contrast, has a larger model size and a significantly longer context window. For comparison, we used three schema formats—Enhanced, Base, and Pruned by Exact-Match—with decreasing token lengths. Figure 6 presents the results, highlighting key trends: (i) In terms of lexical (translation-based) comparison, the performance of the Llama-3.1-8B and Qwen2.5-7B models improved as prompt length decreased. Gemini-1.5-Flash showed the opposite trend, performing better with longer prompts, though its drop for shorter prompts was minor, remaining below 5%. (ii) In terms of execution-based evaluation, the Llama-3.1-8B model showed improved performance with shorter prompts, while Qwen2.5-7B and Gemini-1.5-Flash experienced slight declines, both around 2%. These findings align with observations made by previous research [1, 2]: the impact of schema length varies across models, with Gemini-1.5-Flash potentially benefiting from longer context while the other, smaller models perform better with shorter inputs.
Figure 5: Performance across various schema formats
Figure 6: Performance comparison of various models

Abstract: Knowledge graphs represent complex data using nodes, relationships, and properties. Cypher, a powerful query language for graph databases, enables efficient modeling and querying. Recent advancements in large language models allow translation of natural language questions into Cypher queries - Text2Cypher. A common approach is incorporating database schema into prompts. However, complex schemas can introduce noise, increase hallucinations, and raise computational costs. Schema filtering addresses these challenges by including only relevant schema elements, improving query generation while reducing token costs. This work explores various schema filtering methods for the Text2Cypher task and analyzes their impact on token length, performance, and cost. Results show that schema filtering effectively optimizes Text2Cypher, especially for smaller models. Consistent with prior research, we find that larger models benefit less from schema filtering due to their longer context capabilities. However, schema filtering remains valuable for both larger and smaller models in cost reduction.

Categories: cs.DB, cs.LG
# 1 Introduction
Large language models (LLMs) have recently shown promise as autonomous agents capable of solving complex, multi-step tasks across a wide range of domains. These LLM agents interact with an environment through structured actions—such as mouse clicks, keystrokes, or code executions—and are prompted to complete specific tasks using tools provided by the interface. This paradigm has been explored for web navigation tasks [Yao et al., 2023a, Drouin et al., 2024], software development [Yang et al., 2024], formal mathematics [Lin et al., 2025], and many others [Boisvert et al., 2024]. As research in LLM agents progresses, the availability of high-quality datasets tailored to these domains becomes increasingly critical.
General computer-use tasks that involve interacting with desktop environments and software applications pose especially difficult challenges for data collection. Existing datasets in this space, such as $\tau$ -bench [Yao et al., 2024], TheAgentCompany [Xu et al., 2024], OSWorld [Xie et al., 2024], WorkArena [Drouin et al., 2024] rely heavily on human demonstrations over a limited set of tools and tasks. While effective in showcasing agent capabilities, this human-in-the-loop approach is labor-intensive, expensive, and fundamentally unscalable, making it impractical for covering the full breadth of real-world computing scenarios.
To overcome these limitations, recent work has turned to synthetic data generation using LLMs. However, existing pipelines face two core challenges: (1) current LLM agents struggle to generate reliable trajectories for complex tasks, and (2) simplistic or repetitive generation strategies limit task diversity. These challenges are especially acute in visually grounded or long-horizon tasks, where agents must maintain contextual awareness, reason over multiple steps, and adapt when plans fail [Xie et al., 2024, Bonatti et al., 2024]. Moreover, limited task diversity increases the risk of overfitting or model collapse during downstream training [Shumailov et al., 2024].
We introduce AgentSynth, a scalable and flexible pipeline for synthesizing diverse, high-quality datasets for training and evaluating computer-use agents. The core insight behind AgentSynth is to exploit information asymmetry between the data generation and evaluation phases: solving a task step-by-step in the forward direction is far easier than reasoning out the entire solution all at once. Therefore, we construct each task through a sequence of simple, solvable subtasks. Each subtask builds incrementally on the prior state, with the corresponding trajectories collected during execution. A summarization agent then merges the subtasks into a composite long-horizon task, producing realistic scenarios that are easy to generate but hard to solve.
This design offers several key advantages. By constructing complex tasks from simple, solvable components, AgentSynth enables reliable trajectory collection while maintaining benchmark difficulty. Varying the chaining of subtasks induces combinatorial task diversity. The pipeline is fully automated and achieves a low cost of just $0.60 per trajectory. While we generate over 6,000 tasks in this work, the approach readily scales to tens of thousands of realistic tasks across diverse environments.
Our contributions are as follows:
• We introduce AgentSynth, a fully automated pipeline that synthesizes challenging and diverse computer-use tasks by iteratively chaining LLM-generated subtasks
• We demonstrate how information asymmetry between generation and execution improves trajectory reliability and task complexity, enabling fine-grained task difficulty control.
• We build a benchmark using AgentSynth and show that state-of-the-art agents struggle significantly, revealing a large room for future improvement.
We describe our methodology in detail in Section 3, analyze the generated tasks and datasets in Section 4, and present empirical evaluation results in Section 5.
# 2 Related Work
Substantial research has focused on synthesizing data to improve the training and evaluation of LLMs. However, most existing datasets and benchmarks for computer-use agents still rely heavily on manual design and annotation, limiting their scalability and diversity.
Synthetic Data Generation. Synthetic data generation has emerged as a promising approach to enhance model performance and foster new capabilities. Many recent studies have leveraged LLMs to automate and diversify data generation. For instance, Yuan et al. [2025] curated diverse, highquality datasets extracted from extensive pretraining corpora. Shin et al. [2019] generated synthetic datasets with controlled distributions over programs and specifications. Li et al. [2025] employed an optimization loop where a data generator continuously produces challenging problems targeted at specific evaluation models. Additionally, Zhao et al. [2025] synthesized Olympiad-level mathematical problems by explicitly incorporating conceptual understanding and detailed reasoning processes into the data generation pipeline. Many other applications of synthetic data for LLMs are listed in Liu et al. [2024]. These works highlight the power of synthetic pipelines but focus primarily on static text benchmarks rather than interactive or embodied agents.
Agent Datasets and Benchmarks. Current datasets and benchmarks for agents predominantly depend on human annotators for task creation, demonstration provision, and the definition of evaluation metrics [Yao et al., 2024, Xu et al., 2024, Zhou et al., 2024], which are costly to scale and often limited in diversity. More recent work explores using LLMs to generate agent tasks and trajectories. For example, Pahuja et al. [2025], Trabucco et al. [2025], Murty et al. [2025], Gandhi and Neubig [2025] employed LLMs as web agents to synthesize web-based interactions in semi-realistic environments. Boisvert et al. [2025] composed atomic tasks from Drouin et al. [2024] to form difficult tasks. Xu et al. [2025] and Ou et al. [2024] turned online tutorials into tasks and demonstrations. Nonetheless, these generated tasks and trajectories are limited primarily to web-based activities and typically involve simple interactions without complex multi-step reasoning or extensive tool utilization.
Table 1: Action space for the computer-use agent. The percentage indicates the relative frequency of each action type in the full AgentSynth dataset.
Agent Environments. Early agent environments such as MiniWoB++ [Liu et al., 2018] focused on simplified web tasks and low-level actions. Later advancements such as Mind2Web [Deng et al., 2023], WebArena [Zhou et al., 2024], and Online-Mind2Web [Xue et al., 2025] introduced more realistic websites but remained constrained in breadth and complexity. More comprehensive environments have been developed by Yao et al. [2024], Drouin et al. [2024], Xu et al. [2024], which expanded the action space and interface diversity, yet they still deviate from actual computer environments. Recent developments like OSWorld [Xie et al., 2024] and WindowsArena [Bonatti et al., 2024] address this gap by transforming real operating systems into interactive gym environments for agent training and trajectory generation. Our work leverages the capabilities of OSWorld, providing comprehensive access to authentic computer tools to enhance synthetic data generation for generalist computer-use agents.
# 3 Scalable Agent Tasks and Trajectories Generation
We design a synthetic data generation pipeline powered by six distinct LLM-based agents: a task proposer, a task executor, a task verifier, a task reviser, a follow-up task proposer, and a task summarizer. Central to our methodology is the exploitation of information asymmetry [Li et al., 2025]: the idea that solving a task step-by-step in the forward direction is far easier than inferring the entire solution from scratch. Specifically, we generate sequences of simple, tractable subtasks, collecting trajectories along the way, and later summarize the sequence into a single, coherent long-horizon task. This approach allows us to synthesize tasks that are easy to generate but substantially more difficult for agents to complete at test time. Full prompt templates for each agent are included in Appendix A.
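The control flow connecting the six agents can be sketched as follows. This is a schematic under stated assumptions: each agent is modeled as a plain callable, whereas in the actual pipeline each is a prompted LLM, and the function signature is our own invention:

```python
def agentsynth_pipeline(propose, execute, verify, revise,
                        follow_up, summarize, n_subtasks):
    """Chain simple subtasks forward, collecting trajectories, then
    summarize them into one long-horizon task (exploiting the asymmetry
    between generating a task and solving it)."""
    subtasks, trajectories = [], []
    task = propose()                         # initial persona-grounded task
    for _ in range(n_subtasks):
        trajectory = execute(task)           # step-by-step execution
        if not verify(task, trajectory):     # only partially successful?
            task = revise(task, trajectory)  # align description with reality
        subtasks.append(task)
        trajectories.append(trajectory)
        task = follow_up(subtasks)           # next logical subtask
    return summarize(subtasks), trajectories
```

Summarizing only the first n entries of the subtask list yields the difficulty-level-n variant of the final task.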
Our pipeline operates in the OSWorld environment [Xie et al., 2024], a Gym-compatible simulated desktop interface that mirrors real-world computer usage. Within this environment, agents can interact freely with a broad range of software applications and system tools hosted on a virtual machine. At each step, the agent receives a full-screen screenshot (1920×1080), typically spanning 1k–2k tokens depending on the model’s tokenizer. Based on this visual context and the current task, the LLM agent generates executable actions, such as mouse clicks, key presses, text input, and scrolling, which are executed using pyautogui [Sweigart, 2025] to closely emulate human behavior. The full action space is detailed in Table 1, where the percentage listed for each action type reflects its frequency of occurrence across all trajectories in our dataset. To highlight the generality of our pipeline, we also apply it to a web agent environment (InSTA [Trabucco et al., 2025]), as discussed in Appendix D. The overall data generation pipeline is detailed below and is presented in Figure 1.
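To illustrate the action interface, a structured action can be rendered into a pyautogui call. The dictionary schema (the 'type'/'x'/'y'/'text'/'key'/'amount' fields) is hypothetical, as the paper does not publish its exact action format; only standard pyautogui functions (click, write, press, scroll) are referenced:

```python
def to_pyautogui(action: dict) -> str:
    """Render a structured agent action as a pyautogui call string.
    The field names are an assumed schema, not the paper's."""
    kind = action["type"]
    if kind == "click":
        return f"pyautogui.click({action['x']}, {action['y']})"
    if kind == "type":
        return f"pyautogui.write({action['text']!r})"
    if kind == "press":
        return f"pyautogui.press({action['key']!r})"
    if kind == "scroll":
        return f"pyautogui.scroll({action['amount']})"
    raise ValueError(f"unsupported action type: {kind}")

code = to_pyautogui({"type": "click", "x": 960, "y": 540})
# code == "pyautogui.click(960, 540)"
```

Emitting call strings (rather than executing them directly) also makes it easy to log the exact action alongside the model's reasoning trace.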
Task Proposer. We initiate the data generation process by instructing a task proposer agent to generate an initial, straightforward task. To enrich task diversity, the proposer is guided by a randomly assigned persona sampled from the persona hub [Ge et al., 2024], prompting it to suggest tasks relevant to a specific user profile. The proposer takes as input the persona and the initial Ubuntu desktop screenshot, and is prompted to create clear, specific tasks that can be completed in a few atomic actions. To ensure safety and privacy, we prohibit any tasks involving login credentials or actions such as email sending or social media posting. Prompt details for the task proposer are provided in Appendix Table A.1.
Figure 1: Overview of the AgentSynth pipeline. Starting from a sampled persona (e.g., a senator seeking recidivism data to support prison reform), an initial task is generated and executed; a verifier checks whether each subtask was completed, and a revised subtask description is produced when execution was only partial; follow-up subtasks are proposed iteratively; and the sequence is finally summarized into composite tasks, with each added subtask raising the difficulty level (level 1 through level n+1).
We currently rely on GPT-4.1-based agents for task generation due to their robustness and broad generalization. Different LLM models might generate tasks with systematically different complexity, realism, or meaningfulness, and it remains an open and interesting research question how model choice affects generated task properties. Exploring task-generation variance across different LLM architectures could be beneficial to further enrich task diversity and calibrate difficulty more precisely.
Task Executor. To execute the proposed tasks, we construct a ReAct-style [Yao et al., 2023b] agent that integrates OpenAI’s GPT-4.1 [OpenAI, 2025b] and computer-use-preview [OpenAI, 2025a] models. Empirically, GPT-4.1 is good at planning and interpreting visual context, while the computer-use model is more accurate in grounding actions to pixel-level coordinates. We therefore assign GPT-4.1 the role of planner: it receives the task, current screenshot, and execution history, and outputs a natural language description of the next action. This description, along with the screenshot, is then passed to the computer-use model, which generates the precise executable action (e.g., mouse click coordinates, keystrokes). This two-stage setup balances high-level reasoning with fine-grained visual grounding. During execution, we log both the model’s reasoning trace and the resulting actions, enabling rich trajectory annotation. Each task execution is limited to a maximum of 10 steps. The prompts for the task executor are shown in Appendix Table A.2 and Table A.3.
Task Verifier. The task verification agent evaluates whether a given trajectory successfully completes the intended task. It reviews the full screenshot sequence and task description, and outputs both a binary success label and a completion percentage. To avoid overwhelming the verifier with excessive visual input, we adopt a WebJudge-style architecture inspired by Xue et al. [2025]. The verifier first extracts key requirements from the task description, then analyzes each screenshot to select a subset of key screenshots most relevant to task completion. The final verdict is made based on the task description, identified key requirements, and the filtered key screenshots. To reduce token usage, all screenshots are downsampled to 960×480. If a task is not fully completed, the verifier estimates the percentage of task completion. In such cases, the task reviser generates a revised task description that reflects the actual progress. Prompt details for the verifier are provided in Appendix Table A.5 to Table A.7.
Task Reviser. When a trajectory is only partially successful, we invoke a task reviser agent to generate a revised task description that accurately reflects the actions actually completed by the agent. The reviser takes as input the full execution screenshots and identifies the goals that were successfully accomplished. It then outputs a revised task description that aligns with the observed behavior. Prompt details for the task reviser are shown in Appendix Table A.8.
Follow-up Task Proposer. Upon completing a task, the follow-up task proposer generates the next logical subtask to continue the sequence. This agent is given the full history of prior subtasks and the most recent desktop screenshot, and is instructed to generate a simple, specific follow-up action that builds on the previous state. Additionally, the proposer is informed of previously unsuccessful tasks, prompting it to propose simpler alternatives. Like the initial proposer, it avoids tasks that require login or unsafe actions. The resulting task is executed and verified as before, and if incomplete, a revised description is generated. This iterative generation process continues until a desired sequence length is reached. Prompt templates for the follow-up proposer are shown in Appendix Table A.4.
Task Summarizer. Finally, the task summarizer converts a sequence of completed subtasks into a single high-level task description. This summary abstracts away step-level details while preserving the overarching objective and required actions. By varying the number of subtasks summarized, we systematically control task difficulty: more subtasks yield longer, more complex tasks that require greater reasoning and planning. This mechanism enables us to generate tasks at multiple difficulty levels in a principled way. While each subtask may be trivial in isolation, the final composed task presents a challenging, multi-step problem for LLM agents. The summarization process is illustrated in Figure 1, and prompt details are provided in Appendix Table A.9.
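The difficulty-control mechanism amounts to summarizing prefixes of the subtask list. In the sketch below the summarizer is a toy string join, standing in for the LLM summarizer; the subtask strings are illustrative:

```python
def tasks_by_difficulty(subtasks, summarize):
    """Difficulty level n = one task summarizing the first n subtasks."""
    return [summarize(subtasks[:n]) for n in range(1, len(subtasks) + 1)]

levels = tasks_by_difficulty(
    ["search calendar", "find commencement date", "create event"],
    summarize=lambda ts: ", then ".join(ts),
)
# levels[0] == "search calendar"
# levels[2] == "search calendar, then find commencement date, then create event"
```

Each successive level subsumes the previous one, which is why length and complexity grow monotonically with the level index (as in the case study of Section 4.2).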
Unlike prior pipelines such as Explorer [Pahuja et al., 2025], which target structured web environments, AgentSynth is built for general-purpose computer environments involving desktop applications, terminal tools, and office software, where grounding must be performed at the pixel level using visual inputs. This key difference necessitates major architectural changes: for example, we use a two-stage executor that separates high-level planning from low-level visual grounding. This setting demands new solutions for chaining long subtasks while preserving realism, such as persona-driven task continuity, screenshot-based verification, and cross-application workflows.
# 4 Dataset Analysis
# 4.1 Quality
To assess the quality of the generated tasks and trajectories, we conducted a manual evaluation on a random sample of 100 instances across difficulty levels (approximately 16 tasks per difficulty level) to ensure representativeness across complexity. Our evaluation focused on the feasibility and realism of the overall task, the coherence and logical flow of subtasks, their relevance to the assigned persona, and the accuracy of the verifier’s assessment of the agent’s trajectory. Specifically, human annotators are instructed to assess:
Table 2: Human evaluation of AgentSynth task and trajectory quality.
• Feasibility and realism: Could a real human user plausibly complete this task using standard software tools?
• Subtask coherence: Does each subtask logically follow from the previous subtasks, maintaining clear and meaningful workflow progression?
• Persona relevance: Is the task aligned meaningfully with the persona provided to guide task creation?
Figure 2: AgentSynth dataset statistics. (a) Average token count by task difficulty. (b) Distribution of the number of steps at task difficulty level 6. (c) Distribution of the number of software applications for each task at difficulty level 6.
• Verifier accuracy: Does the automated verifier’s binary assessment (task success or failure) align correctly with human judgment?
As shown in Table 2, all quality metrics exceed $80 \%$ , highlighting the consistency, realism, and reliability of the data produced by the AgentSynth pipeline. We note that prior findings [Lù et al., 2025] indicate potential limitations of LLM-based verification, and our high manual-validation rate suggests our engineered verification pipeline, including selective screenshot sampling, visual context filtering, and task requirement extraction, improves verifier reliability compared to simpler baseline methods.
# 4.2 Case Study
To illustrate the quality and realism of tasks generated by the AgentSynth pipeline, we present a representative example. Additional examples can be found in Appendix B.
First, a persona is sampled from the persona hub [Ge et al., 2024]:
Persona: a senior student at Kentucky Wesleyan College.
Then, the task proposer generated an initial task tailored to this persona:
Initial task: Search for the ’Kentucky Wesleyan College 2024 academic calendar’ in Google Chrome.
Next, this initial task was successfully executed, and five follow-up tasks were iteratively generated and completed:
• Follow-up Task 1 Find the Kentucky Wesleyan College 2024 commencement (graduation) date on the academic calendar currently open in Chrome
• Follow-up Task 2 Open the Calendar application after searching for graduation-related dates on an academic calendar website.
• Follow-up Task 3 Scroll backwards month-by-month in the calendar application from March 2025 to June 2024 using the month view.
• Follow-up Task 4 Create a new calendar event on the day of Kentucky Wesleyan College’s 2024 commencement titled ’Graduation Day’ and add a note: ’Remember to bring gown and arrive 1 hour early.
• Follow-up Task 5 Add a notification/reminder to the ’Graduation Day’ event on May 3, 2024 in the Calendar app to alert you 1 day before.
Each of these subtasks is simple and logically follows from the previous one. The task summarizer composes them into coherent, high-level tasks. We define task difficulty level $n$ as the summary of the first $n$ subtasks, resulting in increasingly complex and realistic scenarios. These summarized tasks are then used for evaluation and benchmarking. The list below shows the final task descriptions at six difficulty levels, with italicized text indicating the incremental complexity introduced at each level:
• Task Difficulty Level 1 (same as the initial task): Search for the ’Kentucky Wesleyan College 2024 academic calendar’ in Google Chrome.
• Task Difficulty Level 2: Find and report the date and time of the Kentucky Wesleyan College 2024 commencement ceremony by searching for the 2024 academic calendar online, locating the official calendar, and identifying the commencement event listed there.
• Task Difficulty Level 3: Search for the ’Kentucky Wesleyan College 2024 academic calendar’ in Google Chrome, find the 2024 commencement date, and then open the Calendar application to view or record the commencement date.
• Task Difficulty Level 4: Find the Kentucky Wesleyan College 2024 commencement date using Google Chrome, then open the Calendar application and scroll back in month view from March 2025 to June 2024 in preparation for viewing or adding the graduation date to the calendar.
• Task Difficulty Level 5: Find the Kentucky Wesleyan College 2024 commencement date by searching online (using the academic calendar), and create a new event titled ’Graduation Day’ in your digital Calendar application, adding a note that says ’Remember to bring gown and arrive 1 hour early’.
• Task Difficulty Level 6: Find the Kentucky Wesleyan College 2024 commencement date by searching the 2024 academic calendar online, then create a calendar event titled ’Graduation Day’ in the Calendar application with a note saying ’Remember to bring gown and arrive 1 hour early,’ and set a reminder to alert you one day before the event.
As the task level increases, both task length and complexity grow accordingly. Each additional subtask introduces new actions, tools, or planning steps. Figure 2a shows the average token count across task levels, confirming that longer task compositions correspond to more elaborate task descriptions and execution requirements.
We note that our notion of task difficulty corresponds primarily to task horizon and the compositional complexity of multiple subtasks. However, we acknowledge that a task considered hard under this criterion may reflect both intrinsic complexity and lack of familiarity to agents trained primarily on shorter or simpler tasks. Intrinsic complexity, such as the cognitive load required to manage multi-application workflows, maintain intermediate states, and recover from errors, often increases with task horizon, but shorter tasks may also be intrinsically challenging if they involve nuanced visual perception, context-dependent decisions, or unfamiliar interfaces. Future analyses could systematically separate intrinsic task complexity from novelty or lack of exposure.
# 4.3 Comparison to Other Datasets and Benchmarks
We designed the AgentSynth pipeline with a focus on generating diverse, realistic, and challenging data for training and evaluating computer-use agents. Table 4 compares our dataset to several existing agent benchmarks, highlighting key advantages in diversity, complexity, and scalability.
Diverse Real-World Tasks. AgentSynth spans a broad range of software applications and domains, including office productivity, information retrieval, entertainment, coding, and research. This breadth ensures rich task diversity and supports generalization across practical, everyday scenarios. The pipeline leverages versatile environments that require agents to fluidly interact with multiple software tools within a single task. Figure 3 shows the coverage across domains and tools, illustrating the dataset’s alignment with real-world complexity.
Importantly, our pipeline encourages multi-tool usage through chained subtasks. As shown in Figure 2c, over $60 \%$ of trajectories involve two or more software applications, and more than $40 \%$ involve three or more, demonstrating the inherent compositionality of AgentSynth tasks.
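The multi-application statistic above can be computed directly from logged trajectories. A minimal sketch, assuming each logged action records the application it targets (the `app` field is a hypothetical schema, not the dataset's actual format):

```python
def app_usage_stats(trajectories):
    """Fraction of trajectories touching at least k distinct applications.

    `trajectories` is a list of action lists; each action is a dict with a
    (hypothetical) "app" field naming the application it targets.
    """
    counts = [len({a["app"] for a in t}) for t in trajectories]
    n = len(counts)
    return {k: sum(c >= k for c in counts) / n for k in (2, 3)}

# Toy example: three trajectories using 1, 2, and 3 distinct apps.
trajs = [
    [{"app": "chrome"}],
    [{"app": "chrome"}, {"app": "calendar"}],
    [{"app": "chrome"}, {"app": "writer"}, {"app": "terminal"}],
]
stats = app_usage_stats(trajs)  # {2: 2/3, 3: 1/3}
```

On real data, `stats[2]` and `stats[3]` would correspond to the "two or more" and "three or more" fractions reported in Figure 2c.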
Long-Horizon Trajectories. Real-world tasks often require extended sequences of actions involving planning, memory, and interface coordination. AgentSynth explicitly supports such long-horizon tasks by composing them from interdependent subtasks. As shown in Figure 2b, tasks at difficulty level 6 typically require 40–60 steps, exceeding the trajectory lengths of existing benchmarks. These tasks challenge agents to maintain context, manage interleaved goals, and execute multi-step plans, closely reflecting the demands of real-world computer use.
(a) Distribution of software involved. (b) Distribution of task topics.
Figure 3: Data composition for AgentSynth. Its diverse topics and software demonstrate the potential to train generalist computer-use agents.
Table 3: Comparison of the cost of AgentSynth versus human annotations.
# 4.4 Cost Analysis
Beyond diversity and high quality, our data generation pipeline is also highly scalable and cost-efficient. Our approach achieves a cost of \$0.60 per trajectory with 5 follow-up subtasks. This is comparable with recent methods such as AgentTrek [Xu et al., 2025] (\$0.55 per trajectory), Explorer [Pahuja et al., 2025] (\$0.28 per trajectory), and InSTA [Trabucco et al., 2025] (\$0.27 per trajectory). Furthermore, our method is much cheaper than human annotation for complex tasks with long trajectories. Table 3 shows the cost of several datasets built from human annotations, where we assume the labor rate is in the range of \$2 per hour. The detailed calculation of our cost and human labor hours is shown in Appendix C.
# 5 Results and Discussion
# 5.1 Evaluation Setup
To assess the general-purpose computer-use capabilities of current language models, we evaluated several state-of-the-art multimodal agents with visual understanding. At each interaction step, the model receives a prompt containing the task description, the current desktop screenshot, and its own previous thoughts. The model is then asked to generate executable Python code using the pyautogui library to perform the next action. We sampled 50 tasks from each difficulty level for agent evaluation. Additionally, to benchmark human performance, we evaluated 20 tasks sampled from difficulty level 6, the most challenging tier in AgentSynth.
To isolate the role of the underlying language model and focus on the task difficulty itself, we use bare LLMs without fine-tuning or additional agent-specific scaffolding. Each model is prompted to generate pyautogui actions step-by-step based on the screenshot, task description, and action histories. This setup reflects a lower bound on performance and is intended to benchmark agents under minimal guidance rather than deploy optimized, production-grade agents. Prompts used for evaluation are detailed in Appendix A.10. Task completion is assessed using the automatic verifier agent introduced in Section 3, which analyzes the full trajectory and determines whether the task was successfully completed.
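The per-step interaction described above can be sketched as a simple loop. The prompt wording, the `extract_action_code` helper, and the stubbed model below are illustrative assumptions, not the benchmark's actual harness:

```python
import re

FENCE = chr(96) * 3  # a markdown triple-backtick fence, built indirectly

def extract_action_code(reply: str) -> str:
    """Pull the first fenced python block out of a model reply."""
    m = re.search(FENCE + r"python\n(.*?)" + FENCE, reply, re.DOTALL)
    return m.group(1).strip() if m else ""

def run_step(model, task, screenshot, history):
    """One interaction step: prompt -> generated pyautogui code."""
    prompt = (
        f"Task: {task}\n"
        f"Previous thoughts: {history}\n"
        "Given the screenshot, output the next action as executable "
        "Python using pyautogui, in a fenced python block."
    )
    reply = model(prompt, screenshot)
    code = extract_action_code(reply)
    # The real harness would execute this inside the sandboxed desktop:
    # exec(code, {"pyautogui": pyautogui})
    return code

# Stubbed model standing in for the actual API call.
stub = lambda prompt, img: (
    "I will click the icon.\n" + FENCE + "python\npyautogui.click(100, 200)\n" + FENCE
)
action = run_step(stub, "Open the Calendar application", screenshot=None, history="")
```

In the actual evaluation, the generated code is executed in the desktop environment and a fresh screenshot is captured before the next step.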
Table 4: Comparison of AgentSynth to some existing LLM agent datasets and benchmarks.
# 5.2 Results
The top panel of Figure 4 shows the success rates of four state-of-the-art language models on the AgentSynth benchmark across task difficulty levels 1 through 6. Despite having visual capabilities and strong general reasoning skills, all models exhibit poor performance on our benchmark, especially as task complexity increases. In contrast, humans achieve a $70 \%$ success rate even on the most difficult tasks, underscoring the performance gap. Key observations include:
Sharp Decline with Difficulty. All models show a consistent and steep drop in success rate as task difficulty increases. For example, o4-mini achieves $18 \%$ success on level 1 but drops to $4 \%$ by levels 5 and 6. GPT-4.1 fails to complete tasks beyond level 3. This highlights the significant challenge in realistic GUI environments and demonstrates the increasing challenge posed by longer-horizon, multi-step tasks in AgentSynth.
Model Comparison. o4-mini consistently outperforms other models, achieving the highest success rates at every level, particularly on easier tasks. Claude-3.7 performs second-best overall, with stable success rates across levels 2–6, suggesting more robustness under increasing difficulty. GPT-4.1 and Gemini-2.5-pro perform comparably at level 1, but their success drops to near-zero by level 4, indicating limited ability to generalize in long sequences of interactions.
Near-Zero Success on Hard Tasks. At levels 4 and 6, only o4-mini and Claude-3.7 achieve non-zero scores, and even then, the success rate is only around $4 \%$ . This indicates that current models are far from achieving generalizable competence in realistic multi-step computer tasks, showcasing the difficulty and discriminative power of our benchmark. The results highlight the need for models that can handle long-term dependencies, maintain state, and ground their decisions in visual observations over extended sequences.
In addition to the performance breakdown across task difficulty levels, we further analyze model performance across different domain categories at difficulty level 1, as shown in the bottom panel of Figure 4. Despite being the easiest difficulty level, we observe variation in success rates depending on the domain.
Models perform relatively well in the Office and Web domains, which often involve more structured interfaces and familiar interaction patterns such as form filling or text editing. In contrast, performance is worse on OS-level tasks, which tend to require precise mouse interactions, fine-grained visual grounding, and navigation through complex system dialogs. Similarly, research and coding domains show lower success rates, likely due to their higher contextual demands and less predictable UI structures.
Figure 4: Model performance across task difficulty levels (top) and across domain categories at difficulty level 1 (bottom) on the AgentSynth benchmark.
These results highlight that task difficulty is not solely determined by compositional length or subtask count, but also by the intrinsic complexity and variability of the software environment. Even at level 1, domain-specific challenges expose significant weaknesses in current LLM agents’ ability to generalize across real-world desktop applications.
# 5.3 Common Agent Failure Modes
Despite the promising capabilities of LLM agents, their performance on the AgentSynth benchmark remains low, with most tasks ending in failure. We identify several recurring failure modes that highlight key limitations and suggest directions for future improvement:
Inaccurate Mouse Clicks. A frequent failure involves imprecise mouse click coordinates. While the agent often identifies the correct UI element conceptually (e.g., the "Save" button or a browser tab), it fails to locate it precisely on screen. This results in misclicks, unintended interactions (e.g., clicking ads or wrong icons), and cascading errors, such as obscuring or losing focus on the target window. Moreover, agents often repeat the same incorrect click multiple times without adapting.
Poor Screenshot Understanding and State Tracking. Agents frequently fail to properly interpret the visual information in screenshots. They may misidentify popups, ads, or irrelevant overlays as part of the main task UI. Other papers benchmarking LLM agents have also found problems with perceptual grounding [Xie et al., 2024, Koh et al., 2024]. This weak perceptual grounding results in repetitive or irrational actions: for example, repeatedly trying to save a file that has already been saved. Moreover, agents often lose track of what has already been done, lacking persistent memory or state awareness.
Lack of Recovery from Errors. Once an agent becomes stuck, it struggles to recover. Rather than exploring alternative actions or reasoning about potential mistakes, the agent tends to repeat the same failed behavior. This lack of introspection and self-correction severely limits task completion, especially for multi-step tasks requiring contingency handling. This is a common failure mode for complex, long-horizon tasks across many domains in the literature, from computer use and general remote tasks [Xu et al., 2024, Drouin et al., 2024] to math and reasoning questions [Huang et al., 2024], among others.
# 1 Introduction
Language models are increasingly used in education for information seeking (Suri et al., 2024), tutoring (Chevalier et al., 2024), and automated assessment (Tlili et al., 2023; Stahl et al., 2024). A critical aspect of their pedagogical utility is their potential to tailor responses to learners with varying informational needs (Adolphe et al., 2023; Puech et al., 2024; Davies et al., 2021; Chevalier et al., 2024; Jurenka et al., 2024; Sun and Zhou, 2024; Ross and Andreas, 2024). This is particularly important in scientific communication, where complex concepts must be conveyed effectively to nonexperts (August et al., 2023), and in policy or legal communication, where text must balance technical accuracy with readability (Cheong et al., 2024). Despite the potential of language models to modify explanations in their complexity (August et al., 2024), formality (Luo et al., 2023), and domain specificity (Karabacak and Margetis, 2023; Wang et al., 2023) at inference time, it remains unclear whether they can effectively generate responses that are useful both to educators (Kim et al., 2024a) and to learners alike (Lee et al., 2023).

Figure 1: Explanations for "Why is the sky blue?" tailored to Graduate School, High School, and Elementary School audiences, together with the distribution of Flesch-Kincaid Reading Ease scores of grade-tailored explanations and their interpreted grade levels.
One critical challenge in pedagogy is answering “Why” questions. These require explanatory answers that meet different learners where they are. For example, for the question “Why is the sky blue?”, a high school student might find the explanation “Sunlight scatters when it hits air molecules” more understandable, while a physics graduate might find a more technical answer, “Selective scattering is proportional to the inverse fourth power of wavelength”, more satisfactory. Although language models are capable of step-by-step reasoning across various tasks (Wei et al., 2022; Prystawski et al., 2023), by default they generate a one-size-fits-all explanation that might not fit the informational needs of the user interacting with them (August et al., 2024). Can the prompt-following skills of language models (Wei et al., 2021; Zeng et al., 2023; Lee et al., 2024) help them tailor their explanations to users with different informational needs?
We introduce ELI-WHY, a dataset of 13.4K “Why” questions spanning disciplines such as science, medicine, and the humanities (e.g., “Why do countries have flags?” or “Why do leaves change color in the fall?”), to examine the pedagogical utility of language model explanations. While prior studies have explored the ability of language models to generate general-purpose explanations in a pedagogical setting (Joshi et al., 2023; Li et al., 2024), it is important that explanations adapt to the prior knowledge of learners (Schmucker et al., 2024; Ye et al., 2024; Lee et al., 2023).
Our experimental setting uses the highest educational degree attained as a proxy for the informational needs of a user. Specifically, we prompt language models to generate three different explanations for each ELI-WHY question, fit for users with elementary school, high school, or graduate-level education. We conduct automated evaluations and two human studies to assess the utility of language-model-generated grade-tailored explanations. Our first human study is conducted from the perspective of an educator to test the appropriateness of an explanation for users with different educational backgrounds on a subset of ELI-WHY. We find that GPT-4-generated explanations match their intended background only 50% of the time, compared to 79% for explanations curated by lay humans (Section 4). We then use automated metrics to assess the grade-level readability of explanations; while explanations become lengthier and contain more ‘complex’ words as the educational level increases, their grade-level readability often overlaps (shown in Figure 1 and Section 4.2). We extend this automated-metric analysis to three more model families apart from GPT-4, and report similar findings.

Our second human study tests the appropriateness of an explanation from the perspective of a learner’s own self-reported informational needs (Section 5). To capture the information needs of users, we asked participants to rate explanations based on whether they provide new information and whether they connect to the participant’s prior knowledge. Studies with participants from elementary, high school, and graduate backgrounds (Physics and Psychology) reveal that GPT-4-generated explanations are roughly 20% less informative than explanations curated by lay humans. This gap is particularly pronounced for users with graduate-level and high-school backgrounds.
Overall, our results highlight the limitations of current language model-driven pedagogy and suggest that explicitly prompting for audience adaptation alone might be insufficient. We believe that in addition to ELI-WHY being a valuable resource to evaluate language models’ pedagogical utility, our human-centered evaluation framework can help evaluate personalized agents catered to the informational needs of individual users.
# 2 The ELI-WHY Benchmark
Existing work in pedagogical evaluation of language models has either focused on objective benchmark-driven question-answering tasks (e.g. multiple-choice science-based question answering) (Lu et al., 2022; Mitra et al., 2024; Chang et al., 2025) or subjective use-case-driven tasks (e.g. evaluating academic achievements induced by language model assistants) (Höper and Schulte, 2024; Sun and Zhou, 2024). Combining these two, we focus on the task of answering “Why” questions; they ensure a good balance between having a knowledge-seeking setting and having room for subjectivity in the manner in which knowledge is presented (Sulik et al., 2023). To this end, we introduce ELI-WHY, which consists of 13,392 “Why” questions curated across STEM and non-STEM domains. There are 6,217 STEM questions (across disciplines like Physics, Chemistry, Computer Science, Material Engineering, etc.) and 7,175 non-STEM questions (across disciplines like Sociology, Law, Culture, History, Public Relations, etc.). Our dataset is created by (1) over-generating “Why” questions from GPT-4 via few-shot prompting, followed by (2) extensive filtering that checks the validity of the generated questions. We expand upon these steps below, and provide full details about ELI-WHY curation, including prompts, model settings, and the filtering process, in Appendix B.
Overgenerating “Why” questions from GPT-4. We use a set of 50 seed “Why” questions from Sulik et al. (2023) (Table 2) and split them into different disciplines. We use a random subset of these as in-context examples to prompt GPT-4 (Table 3) to generate more questions in a given discipline (Liu et al., 2022). This led to a set of ~30k questions.
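The over-generation step can be sketched as follows; the prompt wording is an illustrative assumption (the actual prompt is given in the paper's Table 3), and `seeds` stands in for the 50 seed questions from Sulik et al. (2023):

```python
import random

def build_fewshot_prompt(seed_questions, discipline, k=5, n_new=20, rng=None):
    """Assemble a few-shot prompt asking GPT-4 for new "Why" questions.

    The instruction text here is illustrative, not the paper's exact prompt.
    """
    rng = rng or random.Random(0)
    examples = rng.sample(seed_questions, k=min(k, len(seed_questions)))
    lines = [f'Generate {n_new} new "Why" questions about {discipline}.']
    lines += ["Examples:"] + [f"- {q}" for q in examples]
    return "\n".join(lines)

seeds = ["Why is the sky blue?",
         "Why do leaves change color in the fall?",
         "Why do countries have flags?"]
prompt = build_fewshot_prompt(seeds, "Physics", k=2)
```

Repeating this per discipline and pooling the generations yields the ~30k raw questions that are then filtered.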
Filtering generated questions. We then manually deduplicated questions from the set. We additionally removed niche, domain-specific questions (e.g. questions like “Why is the electron cloud model currently the most accepted atomic model?”) with the help of crowdworkers. Details about the filtering process can be found in Appendix B.2. This resulted in the final 13,392 questions.
# 3 Generating Explanations for different Educational Backgrounds
Users with varying educational or conceptual backgrounds differ in expectations of answers to their questions (Kolb et al., 2007; Bertrand et al., 2023). Tailoring responses to users with different educational backgrounds is important to improve language models’ use in pedagogy (Adolphe et al., 2023; Puech et al., 2024). In this section, we describe the different educational levels we used for evaluating language model explanations and our methodology for generating grade-tailored explanations.
Educational backgrounds. We choose three educational levels with different informational needs for our users: Elementary School , High School and Graduate School , in the context of education in the United States. The Elementary School group typically covers content up to U.S. Grade 4; adults with this education level may have limited theoretical knowledge of individual disciplines. The High School group extends through U.S. Grade 12 to approximately the sophomore year of undergraduate studies; adults at this level have a foundational grasp of academic subjects but may still struggle with discipline-specific terminology. The Graduate School group typically follows a bachelor’s degree, offering advanced, specialized education; adults with this education have few knowledge gaps and possess expertise in specific areas without needing foundational instruction.
Generating grade-tailored explanations. For any given “Why” question, our goal is to generate three responses corresponding to users whose highest educational degree is at the Elementary School , High School or Graduate School level. We generate explanations for each question by zero-shot prompting language models from four model families: GPT-4-0613 (henceforth shortened to GPT-4), Llama-3.2-3B-Instruct, Qwen 2.5 14B Instruct, and DeepSeek R1 Distill Llama 8B. We instruct each language model to assume the role of an expert in order to provide suitable explanations for each of the three educational backgrounds (prompt detailed in Appendix C). Additionally, our prompts contain instructions like “do not add any additional text like greetings or ornamental words” to ensure that language models tailor the response in terms of knowledge and not just stylistic cues. For example, GPT-4 would often add context, such as “playing in the park” or “other kids”, while generating explanations for the Elementary School background. We try to limit such generations using specific instructions in the prompt, so that the model puts less emphasis on stylistic verbiage relative to knowledge content (Table 6). Throughout the rest of this paper, we use the intended educational background of an explanation to refer to the educational background used to generate it. All model parameters used to generate explanations are detailed in Appendix C.2. While all four language models are used for automated evaluations, we only use explanations generated by GPT-4 for our human studies.
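A minimal sketch of such a grade-tailored zero-shot prompt (the exact wording used in the paper lives in its Appendix C; this phrasing, and the `LEVELS` descriptions, are assumptions):

```python
# Hypothetical audience descriptions for the three educational backgrounds.
LEVELS = {
    "elementary": "a user whose highest education is elementary school",
    "high_school": "a user whose highest education is high school",
    "graduate": "a user with a graduate-level education",
}

def tailored_prompt(question: str, level: str) -> str:
    """Zero-shot prompt for a grade-tailored explanation (wording illustrative)."""
    audience = LEVELS[level]
    return (
        f"You are an expert educator. Explain the answer to the question "
        f"below for {audience}. Do not add any additional text like "
        f"greetings or ornamental words.\n\nQuestion: {question}"
    )

p = tailored_prompt("Why is the sky blue?", "graduate")
```

The same question is sent three times, once per level, producing the three grade-tailored explanations evaluated later.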
Baseline explanations. In addition to the above explanations, we prompt language models to produce a Default explanation for a given question, without providing any educational background (prompt detailed in Appendix C). We also collect baseline Web-Retrieved explanations using the Google API; we use the Featured Snippet provided by Google.
Web Explanations Curated by Lay Humans. Lastly, for a subset of 40 questions in ELI-WHY, authors of this work manually curated explanations ( Manually Web-Retrieved ) for each educational background by searching appropriate websites. All explanations were curated independently by two authors, then discussed together to preserve the most plausible explanation. For example, we retrieve Graduate School level explanations for a question by searching through journals and research papers on the topic, and Elementary School level explanations by searching through the Explain Like I’m Five (ELI5) subreddit. For High School , we retrieve explanations from blog posts and web pages intended for lay users. These are not meant to be expert-level explanations, but simulate a process of obtaining explanations for different grade levels in contrast to language model generations (Oh et al., 2008; Ward, 2021).
# 4 Do language model explanations match their intended educational background?
In this section, we evaluate whether grade-tailored language model explanations match their intended educational backgrounds, using human evaluations. We then extend to a large-scale empirical analysis on all of ELI-WHY and model variants, where we employ different automated metrics and reconcile these findings with that of the user study.
Figure 2: User study design. Participants see a question (e.g., “Why is the sky blue?”) and a tailored explanation, with the intended educational background (here, Graduate School) hidden from them; they must choose the educational background they perceive as ideal for the explanation.
# 4.1 Intended vs. perceived educational backgrounds of tailored explanations
We define the intended educational background as the grade-level for which an explanation was generated. We then define the perceived educational background as the grade-level that a human user associates with an explanation. To identify if language model explanations are successfully tailored for different grade levels, we conduct a user study in which participants assume the role of an educator; they read questions and a language model explanation to indicate their perceived educational background of the explanation. We then evaluate the percentage of explanations where the intended educational background matches the perceived educational background. We term this as Perceived Background Match. This formulation allows us to directly measure whether tailored explanations match the grade level they were generated for.
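Perceived Background Match can be computed directly from the annotations; the study aggregates three independent annotations per pair by majority vote (the tie-breaking behavior below is arbitrary, as the paper does not specify one):

```python
from collections import Counter

def majority(labels):
    """Majority vote over annotations; ties resolved arbitrarily here."""
    return Counter(labels).most_common(1)[0][0]

def perceived_background_match(items):
    """items: list of (intended_level, [perceived labels from annotators])."""
    hits = sum(majority(votes) == intended for intended, votes in items)
    return 100.0 * hits / len(items)

data = [
    ("graduate",   ["graduate", "graduate", "high_school"]),   # match
    ("elementary", ["high_school", "high_school", "elementary"]),  # mismatch
]
score = perceived_background_match(data)  # 50.0
```

A score of 100 would mean every explanation was perceived as intended; the study reports values near 50 for GPT-4.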
User study design. We conducted a user study with a subset of 400 “Why” questions from ELI-WHY, along with explanations generated by GPT-4 tailored for each of the three grade levels we consider. The participants were presented with a question-explanation pair and were asked to identify the perceived educational background of the explanation (Figure 2). Before making their judgments, the participants received detailed task instructions, including information on the different educational backgrounds defined in Section 3. Additionally, pilot evaluations, conducted by the authors and a subset of participants, helped refine instruction clarity. Each participant annotated five question-explanation pairs, and each pair received three independent annotations, ensuring a diverse evaluation of perceived backgrounds; we considered a majority vote of the perceived educational background for all explanations. As a control, we also conducted a user study on Manually Web-Retrieved explanations for 40 questions to understand perceived-match trends for explanations curated by lay users. Further details on participant screening, demographics, and study setup are provided in Appendix D.

Figure 3: User study results: (a) perceived background match of GPT-4-tailored explanations for All, STEM, and Non-STEM questions, compared to a random baseline; (b) intended vs. perceived educational backgrounds.

Results. Figure 3 presents results from the user study. Figure 3(a) shows that the perceived background match of tailored explanations generated by GPT-4 is very low (close to 50%). This trend is observed across the STEM and Non-STEM splits of the subset. Furthermore, the user study reveals that tailoring mismatch is seen across the board for all educational backgrounds. Figure 3(b) shows the change between intended and perceived educational background after the study. Most explanations are perceived to be tailored for High School , which can be explained by GPT-4’s tendency to be conditioned towards a “lay user” (August et al., 2024; Hsu et al., 2024). We also observe surprising mismatches, e.g., Elementary School explanations being perceived as Graduate School , and vice versa. We show examples of these cases, along with justifications written by users, in Appendix D.4. Additionally, the perceived background match of Manually Web-Retrieved explanations is much higher (79.16%). This reveals a concerning trend in GPT-4’s explanations: while GPT-4 can be easily prompted to generate explanations tailored for different educational backgrounds, this does not necessarily mean that users perceive these explanations as fit for a given background, potentially hindering GPT-4’s utility in pedagogy (Kasneci et al., 2023).
# 4.2 What do automated metrics reveal about tailored language model explanations?
Section 4.1 demonstrated that GPT-4-generated explanations often mismatch their intended educational backgrounds. We extend the scale of our analysis to the full ELI-WHY benchmark and more language model families using automated metrics, and show that careful interpretation of these metrics also highlights the above mismatch.
Automated Metrics. We use three categories of automated metrics, based on surface-form features, readability, and reasoning styles, to evaluate whether automated metrics distinguish between explanations tailored to different grades. Surface-form metrics compute sentence count, average sentence length, estimated reading time (Demberg and Keller, 2008), and TE Score (August et al., 2024) (the TE score, or Thing Explainer Out-of-Vocabulary score, measures the proportion of ‘complex words’ in an explanation, i.e., the proportion of words outside a curated list of the 2,000 most common English words). We employ three popular readability metrics: Flesch-Kincaid Reading Ease (Flesch, 1948), Linsear Write Formula (O’hayre, 1966), and Dale-Chall Readability Score (Dale and Chall, 1948). Each of these metrics also maps score ranges to an interpreted U.S. grade level (Kincaid et al., 1975). Score-range mappings for each metric are detailed in Table 11. Finally, we analyze the type of reasoning in the explanations: whether they are mechanistic (describing how a phenomenon occurs, e.g. pollen shedding occurs because of desiccation of anther tips) vs. functional (describing the purpose for which a phenomenon occurs, e.g. pollen shedding occurs to facilitate reproduction) (Sulik et al., 2023). Further details on the calculation of these metrics are in Appendix E.

Table 1: Comparison of surface-form, readability, and reasoning-type metrics across different education levels, along with retrieved explanations. * marks metrics that correlate highly with user evaluations of perceived educational backgrounds. ↑ and ↓ indicate the direction of scores representing more complex explanations for readability metrics; for all other metrics, higher values indicate higher complexity.
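Flesch-Kincaid Reading Ease and its commonly used interpreted-grade bands can be sketched directly. The formula is the standard 206.835 - 1.015 * (words per sentence) - 84.6 * (syllables per word); the vowel-group syllable counter below is a crude heuristic, so scores will deviate slightly from library implementations such as textstat:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: one syllable per vowel group, minimum 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch Reading Ease; assumes at least one word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

def interpreted_level(score: float) -> str:
    """Map a score to its commonly used interpreted U.S. grade band."""
    bands = [(90, "5th grade"), (80, "6th grade"), (70, "7th grade"),
             (60, "8th-9th grade"), (50, "10th-12th grade"),
             (30, "College"), (10, "College graduate")]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "Professional"

simple = flesch_reading_ease("The cat sat on the mat.")
hard = flesch_reading_ease(
    "Selective scattering is proportional to the inverse fourth power of wavelength.")
```

Interpretation collapse, discussed below, corresponds to distinct raw scores landing in the same band once passed through `interpreted_level`.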
Automated metrics reveal that tailored explanations suffer from interpretation collapse. Table 1 presents the average and standard deviation of automated metrics for grade-tailored explanations, along with two baseline explanations: Default and Web-Retrieved . Across all language models, we observe that the surface-form metrics, specifically the number of sentences, differ significantly across educational levels. In particular, generated explanations get lengthier as the educational level increases. All models also use more ‘complex words’ with increasing educational levels, as shown by the increasing TE Score for all models. Additionally, all models use more mechanistic reasoning and less teleological reasoning as educational levels increase; prior work has often shown that young children endorse more teleological explanations (Schachner et al., 2017), as also demonstrated here.
Default explanations mimic High School explanations in all metrics, indicating that explanations generated by GPT-4 without any grade-level tailoring are often intended for a High School user. On the other hand, Web-Retrieved explanations are more concise than other explanations, but their complexity varies widely, shown by the high standard deviation for all readability tests. In Appendix E.5, we also compare different grade-tailored explanations with Default and Web-Retrieved in terms of informational overlap between explanations.
We observe an interesting pattern in the readability metrics. Consider the Flesch-Kincaid Reading Ease metric (where a lower score indicates higher grade-level readability of a given text). This is also one of three metrics (along with Avg. Reading Time and TE Score) that correlate significantly with the user-perceived educational levels obtained in Section 4.1 (Appendix E.3). For all models except DeepSeek R1 Distill Llama 8B, we observe that the Flesch-Kincaid Reading Ease values are relatively distinct for different educational backgrounds. However, these values are so close to each other that they often fall under the same interpreted U.S. grade level. For example, for GPT-4 explanations we show the Flesch-Kincaid Reading Ease distributions for grade-tailored explanations in Figure 1. When these scores are mapped to their interpreted U.S. grade levels, the distributions collapse into a narrow range, primarily between high-school and college-level readability. We term this interpretation collapse, which is observed for all language models (Appendix E.4). In fact, for DeepSeek R1 Distill Llama 8B, readability score distributions almost completely overlap across educational levels. This is supported by our observations in Section 4.1, where participants often perceive most explanations as tailored for High School .

The fact that explanations meant for vastly different backgrounds fall into overlapping score ranges suggests that grade-tailored explanations are not meaningfully differentiated at an interpretive level, even if surface-form qualities like length and word complexity increase. We suggest that automated metrics (like Flesch-Kincaid Reading Ease) can, to some extent, be used to measure whether language model explanations are truly tailored to their intended educational backgrounds, provided they are carefully inspected alongside their corresponding grade-level interpretations.
# 4.3 Case Study: Why is there a mismatch between intended and perceived educational backgrounds of tailored explanations?
As seen in Figure 3(b), we observe surprising mismatches when tailoring explanations to different educational backgrounds, particularly cases where explanations tailored for higher educational levels like High School or Graduate School are instead perceived as Elementary School. We hypothesize that such mismatches arise because certain questions are consistently associated with a particular educational background, hindering GPT-4's (and possibly other language models') ability to generalize to a different educational background.
As a case study, we look at the ELI5 subreddit, where users often seek simplified explanations for different questions, most of them being “Why” questions (Appendix F). We observe that questions that exhibit perceived simplification—
Figure 4: Relationship between perceived simplification and semantic similarity to ELI5 questions: questions where explanations were perceived as significantly simpler than intended (e.g., intended for High School or Graduate School but perceived as Elementary School) tend to have higher similarity to questions present in the ELI5 subreddit.
GPT-4's explanations tailored for High School and Graduate School that were perceived to be Elementary School by users—are significantly more similar to questions in the ELI5 subreddit than other questions ($p < 0.05$, Mann-Whitney U Test (Mann and Whitney, 1947)). This suggests that GPT-4 may overgeneralize and produce simpler explanations when a question closely resembles those that typically appear in contexts pertaining to these educational backgrounds (Figure 4).
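The comparison above can be sketched with a minimal pure-Python computation of the Mann-Whitney U statistic. The similarity scores below are hypothetical stand-ins; in practice a library routine such as `scipy.stats.mannwhitneyu` would also supply the p-value:

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: count pairs (a, b) with a > b, ties as 1/2."""
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in sample_a for b in sample_b)

# Hypothetical similarity-to-ELI5 scores for the two groups of questions:
simplified = [0.82, 0.78, 0.75, 0.71]   # explanations perceived as simpler than intended
others = [0.55, 0.60, 0.48, 0.52]       # remaining questions
u = mann_whitney_u(simplified, others)
print(u)   # U = 16.0: every "simplified" question is more ELI5-like here
```

A large U relative to its maximum (here 4 × 4 = 16) indicates that the first sample stochastically dominates the second, which is the directional effect reported above.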
# 5 Do generated explanations help provide new information to users?
A fundamental notion of utility for language models in pedagogical cases is how much they assist users in learning new information (Joshi et al., 2023; Zhang et al., 2024; Schmucker et al., 2024; Lee et al., 2023). In this section, we discuss the utility of explanations in delivering new information to a learner, that aligns with the learner’s informational needs. Understanding this is crucial in determining whether language models like GPT-4 tailor explanations for different educational backgrounds merely stylistically or if they provide new and relevant information that contributes to learning and comprehension.
Evaluating informativeness w.r.t. user informational needs. Consider a user with a high school-level background in physics, familiar with basic concepts about light such as scattering, wave-particle duality, and light interactions. Given a question, “Why is the sky blue?”, the user receives the following explanation: “Because of a solar zenith angle (SZA) of $90^{\circ}$, only 1/3 of the blue color of the sky at the zenith is caused by Rayleigh scattering.” While this explanation introduces new terms like solar zenith angle, it fails to properly define them, making it difficult for the user to integrate the explanation into their existing knowledge. Conversely, an overly simplistic explanation such as “When sunlight comes through the air bubble that surrounds the Earth, it sometimes hits little bits of air and gets scattered” provides no meaningful new insights and is therefore uninformative.
Figure 5: User evaluation of explanation informativeness: Participants are provided with a randomly selected explanation (from one of the educational backgrounds) for a given question. They then assume the role of a learner and determine if an explanation provides new information that connects with their information needs.
Figure 6: Comparison of $\%$ Informative Explanations and $\%$ Matched Informative Explanations across different educational backgrounds. GPT-4 grade-tailored explanations are often informative for Elementary School participants; they struggle to align with the needs of High School and Graduate School participants, whereas Manually Web-Retrieved grade-tailored explanations perform consistently better across all participants.
We define an explanation as informative for a user if it satisfies two conditions: (1) it introduces new concepts that the user was previously unaware of, and (2) these new concepts connect well with the user’s existing background knowledge, making them easier to understand. We design the following user study to evaluate the informativeness of an explanation for a given user. We recruit users belonging to a specific educational background. The user is presented with a question and a randomly selected stimulus explanation, which could belong to the Elementary School, High School, or Graduate School backgrounds with equal probability. The user is then asked: “Does the explanation introduce a new concept, previously unknown to you?” If the user responds negatively, the explanation is too simple for them, so the system provides an explanation from the next higher educational background. If the user responds positively, they are asked a follow-up question: “Does the explanation connect the introduced concept(s) to something you may already know?” If they confirm that the concepts are well-integrated, the explanation is considered informative. However, if the new concepts do not align with their prior knowledge, the explanation introduces new information but lacks coherence, making it difficult for the user to integrate into their understanding; in this case, the system provides an explanation from the next lower educational background. Figure 5 summarizes this evaluation.
Human Study and Metrics. We recruited adult participants with the following highest education levels: elementary school, high school, and graduate degrees in two distinct disciplines—Physics for STEM and Psychology for Non-STEM. We select 40 questions from ELI-WHY and, based on the results of Section 4.1, use GPT-4-generated explanations that were perceived to match their intended educational backgrounds. For each educational background and question, participants assume the role of a learner and determine if GPT-4 generates explanations that are informative for a question. Each question is answered by five participants, leading to 200 responses for each educational background. We compute two metrics: $\%$ Informative Explanations, the $\%$ of questions where any one of the three GPT-4 grade-tailored explanations was found informative, and $\%$ Matched Informative Explanations, the $\%$ of questions where explanations were informative and matched the participant’s educational background. Given that we aim to capture how useful grade-tailored explanations are for an individual, and that every individual may have different prior knowledge even within the same educational background, we do not perform any majority voting when aggregating the above metrics for a question across participants. Lastly, we also replicate the same user study on the 40 ELI-WHY questions with Manually Web-Retrieved explanations.
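The two metrics can be sketched as follows. This is a simplified view (one judgment dict per question-participant pair, covering all three grade levels); the judgments below are hypothetical, and, per the study design, responses are aggregated without majority voting:

```python
def informativeness_metrics(responses, participant_level):
    """responses: one dict per question-participant pair, mapping each
    grade level to whether that explanation was judged informative."""
    n = len(responses)
    # % Informative: any of the grade-tailored explanations was informative.
    pct_informative = 100.0 * sum(any(r.values()) for r in responses) / n
    # % Matched Informative: the one matching the participant's background was.
    pct_matched = 100.0 * sum(r.get(participant_level, False) for r in responses) / n
    return pct_informative, pct_matched

# Hypothetical judgments from three High School participant responses:
resp = [
    {"elementary": True, "high_school": True, "graduate": False},
    {"elementary": True, "high_school": False, "graduate": False},
    {"elementary": False, "high_school": False, "graduate": False},
]
pct_info, pct_matched = informativeness_metrics(resp, "high_school")
```

In this toy example, two of three responses find some explanation informative (about 66.7%), but only one matches the participant's own level (about 33.3%), mirroring the gap between the two metrics discussed below.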
Results. Figure 6 shows the $\%$ Informative Explanations and $\%$ Matched Informative Explanations results for participants with different educational backgrounds. We observe that participants with higher education backgrounds have lower $\%$ Informative Explanations and $\%$ Matched Informative Explanations. The gap is particularly stark for participants with a Graduate School-Physics background, where only $19\%$ of questions have informative explanations that match the participant’s background. We find that, on an aggregate basis, Manually Web-Retrieved explanations consistently outperform GPT-4 on both metrics across all educational backgrounds. While GPT-4 provides new information at a comparable rate for Elementary School and High School participants, its effectiveness declines significantly for participants with Graduate School backgrounds. On average, across all three educational backgrounds, Manually Web-Retrieved explanations are relatively $20\%$ more informative than GPT-4 explanations. It is important to note that Manually Web-Retrieved explanations are curated by lay experts, not domain experts. These individuals rely on general knowledge and metadata about online resources to craft responses, yet they still provide more informative and better-aligned explanations than GPT-4. This suggests that GPT-4 struggles not just with domain expertise, but also with the broader research and adaptation strategies that even non-experts employ when tailoring explanations. Recruiting actual subject matter experts could further widen this gap, highlighting limitations in delivering truly audience-appropriate information. | Language models today are widely used in education, yet their ability to tailor responses for learners with varied informational needs and knowledge backgrounds remains under-explored. To this end, we introduce ELI-Why, a benchmark of 13.4K "Why" questions to evaluate the pedagogical capabilities of language models. 
We then conduct two extensive human studies to assess the utility of language model-generated explanatory answers (explanations) on our benchmark, tailored to three distinct educational grades: elementary, high-school and graduate school. In our first study, human raters assume the role of an "educator" to assess model explanations' fit to different educational grades. We find that GPT-4-generated explanations match their intended educational background only 50% of the time, compared to 79% for lay human-curated explanations. In our second study, human raters assume the role of a learner to assess if an explanation fits their own informational needs. Across all educational backgrounds, users deemed GPT-4-generated explanations 20% less suited on average to their informational needs, when compared to explanations curated by lay people. Additionally, automated evaluation metrics reveal that explanations generated across different language model families for different informational needs remain indistinguishable in their grade-level, limiting their pedagogical effectiveness. | [
"cs.CL",
"cs.HC"
] |
# 1 Introduction
Large Multimodal Models (LMMs) [AAA+24, MGF+24, ZGG+24, WBT+24, CWT+24, BCL+25] have achieved remarkable performance across a wide range of downstream tasks, including visual question answering and autonomous computer agents. However, as model size increases, the rising inference cost presents significant challenges for deploying LMMs efficiently. To address this, Mixture-of-Experts (MoE) [LLX+21, FZS22, DLF+24] introduces a mechanism that maintains a large pool of experts while activating only a subset for each input, thereby improving computational efficiency. Although MoE models significantly reduce FLOPs, they generally have a higher memory footprint, making deployment on edge devices challenging. For example, when training a multimodal MoE up-cycled from Qwen2.5-3B, if all feed-forward network (FFN) layers are replaced with MoE layers containing 16 experts, the resulting model’s non-embedding memory footprint increases from 5.2GB to 73.2GB. This limitation is particularly pronounced for consumer-grade GPUs, which often have constrained memory capacities.
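A back-of-the-envelope check of the quoted footprints, under our assumption (not stated in the text) that replacing every FFN with a 16-expert MoE layer simply replicates the FFN weights 16 times:

```python
# Quoted figures for the Qwen2.5-3B up-cycling example.
dense_gb, moe_gb, n_experts = 5.2, 73.2, 16

# If the non-embedding footprint is (other + F) before up-cycling and
# (other + n_experts * F) after, the implied FFN share F solves:
#   dense_gb + (n_experts - 1) * F = moe_gb
ffn_gb = (moe_gb - dense_gb) / (n_experts - 1)
other_gb = dense_gb - ffn_gb
print(round(ffn_gb, 2), round(other_gb, 2))
```

Under this assumption the quoted numbers imply roughly 4.5GB of FFN weights and 0.7GB of attention and other non-embedding weights, i.e., the experts dominate the footprint.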
Model quantization is a promising approach to reducing the memory footprint of LMMs while maintaining comparable performance. Most mainstream quantization methods [FAHA22, LTT+24, CCKDS24, TSHDS24] aim to compress the bit-width of a pre-trained, full-precision model. Although these methods have a low training cost, they suffer from significant performance degradation when the bit-width is reduced below 4 bits. Recent studies [MWM+24, KVM+24, ZZS+24] have demonstrated
Preprint.
promising scaling trends for ternary pre-training in Large Language Models (LLMs). At sufficiently large model sizes, ternary models can achieve accuracy comparable to full-precision models on downstream tasks while maintaining the same pre-training cost. Furthermore, they have much lower inference costs in terms of memory, latency, and energy consumption [WZS+24]. However, since these models have only been trained on billions of tokens, a substantial performance gap remains between open-sourced ternary models and full-precision dense models. As a result, directly training MoE models initialized from these under-trained models leads to weak performance on end tasks.
In this work, we introduce MoTE, a scalable and memory-efficient architecture designed to train a Mixture-of-Ternary-Experts model from a pre-trained, full-precision dense checkpoint during multimodal tuning. Our approach addresses the inefficiency of multimodal MoE models in terms of memory footprint. Prior works [LTY+24, LJH+25] primarily replace the FFN layer in dense checkpoints with an MoE layer, initializing the experts using the pre-trained FFN. However, we observed that in ternary training, replacing the FFN layer leads to significant performance degradation, as weight ternarization disrupts the pre-trained FFN. To mitigate this, we retain the FFN from the dense checkpoint as a shared expert activated for all inputs. During up-cycling, the layers inherited from the dense model remain frozen, while only the ternary MoE layers are trainable.
We first conduct strict and controlled experiments to evaluate the proposed approach against full-precision up-cycling MoE-LLaVA [LTY+24] across various model scales on a wide range of image understanding tasks. Our results show that ternary up-cycling exhibits surprising effectiveness as model size scales. As the size of the up-cycled dense checkpoint increases, the performance gap between MoTE and MoE-LLaVA narrows, eventually reaching comparable performance at scales larger than 1.5 billion parameters. Additionally, MoTE is compatible with post-training quantization techniques [FAHA22]. Given the same expert memory footprint and combined with post-training quantization, MoTE outperforms full-precision MoE-LLaVA at both the 1.5B and 3B model sizes. This advantage becomes even more pronounced as memory constraints tighten. Specifically, under an expert memory budget of 3.4GB, our approach achieves a $4.3\%$ improvement in average accuracy on downstream tasks. These results demonstrate that, given the same total memory footprint and active parameter count, training with a larger number of low-precision experts yields better performance than using fewer high-precision experts.
# 2 Related Work
Mixture of Experts. LMMs demonstrate superior performance across various tasks as model size and training data scale increase. MoE models [LLX+21, FZS22, MSG+24, WCP+24] maintain a large pool of experts but activate only a subset for each token, enabling improved performance at the same FLOPs budget. [KPL+23] introduced sparse up-cycling to reduce the training costs of MoE models by initializing them from dense checkpoints. [LTY+24] explored the up-cycling of LMMs in the context of multimodal training, while [SLZ+24] proposed a progressive knowledge transfer strategy to train small-scale multimodal MoEs from dense models. [LJH+25] presented a scalable multimodal model that utilizes MoE with modality-specific encoders. While previous works [LTY+24, LWZ+24, LJH+25] primarily focused on full-precision experts for up-cycling, our work investigates up-cycling with ternary experts to develop memory-efficient multimodal MoE models.
Model Quantization. Quantization is a promising approach to reducing the memory footprint of LMMs while maintaining competitive performance. It can be categorized into two types based on the stage at which it is applied: post-training [DLBZ22, FAHA22, LTT+24, CCKDS24, TCS+24, TSHDS24] and pre-training quantization [WMD+23, MWM+24, WGL+25, PWW+23]. Post-training quantization compresses high-precision pre-trained models after training. Due to its lower cost, it is widely adopted for mainstream large-scale models. GPTQ [FAHA22] and AWQ [LTT+24] reduce the bit-width to 4 bits while incurring minimal degradation. QuIP# [TCS+24] builds on QuIP [CCKDS24] by improving incoherence processing and applying vector quantization to incoherent weights. With additional fine-tuning, QuIP# achieves state-of-the-art performance among 2-bit models. However, when the bit-width is reduced below 4 bits, these methods all suffer from significant performance degradation compared to BF16 baselines. In contrast, pre-training quantization integrates quantization into the training process, requiring models to be trained from scratch, which results in better performance. Recent work [MWM+24] showed that ternary LLMs match the performance of full-precision counterparts starting from 3B parameters. [FA24] quantized a
1.6 trillion parameter Switch Transformer to sub-1-bit precision. [LJCC24] proposed to quantize the experts with a mixed-precision recipe and introduced a novel data-driven technique for optimizing bit allocation.
Figure 1: The overview of MoTE. We retain the pre-trained full-precision FFN as a shared expert and add a top-1 activated MoE layer with ternary experts. All experts and attention layers are initialized from the dense checkpoint.
# 3 MoTE: Mixture-of-Ternary-Experts
In this section, we provide an overview of the proposed MoTE, including model architecture in Section 3.1, training recipe in Section 3.2 and objectives in Section 3.3.
# 3.1 Architecture
We illustrate the architecture of MoTE in Figure 1. Previous studies [KPL+23, LTY+24] expanded a dense model into an MoE model by directly replacing the FFN layer with an MoE layer, where each expert is initialized from the dense FFN to accelerate convergence. However, as shown in Table 6, we found that directly replacing the FFN with an MoE layer in ternary up-cycling leads to significant performance degradation. We hypothesize that this occurs because the FFN encodes a substantial amount of factual knowledge acquired during pre-training [GSBL21, DDH+22], and weight ternarization severely disrupts this pre-trained information. To mitigate this issue, we retain the FFN module from the dense model as a shared expert, ensuring it is activated for every token. Specifically, the forward computation of the $l$-th layer of MoTE can be formulated as:
$$
\begin{array}{rl}
& x_{l}^{a} = x_{l-1} + \mathbf{MSA}(\mathbf{LN}(x_{l-1})) \\
& x_{l} = x_{l}^{a} + \mathbf{MoE}(\mathbf{LN}(x_{l}^{a})) + \mathbf{FFN}(\mathbf{LN}(x_{l}^{a}))
\end{array}
$$
where MSA and LN stand for multi-head self-attention and layer normalization, respectively. As illustrated in Figure 1, we initialize the FFN, MSA and MoE layers from the dense model. We implement the MoE mechanism following GShard [LLX+21], with each expert modeled as a Gated Linear Unit (GLU) [Sha20]. An MoE layer consisting of $E$ ternary experts $\mathrm{FFN}_{1}^{T}, \ldots, \mathrm{FFN}_{E}^{T}$ satisfies:
$$
\begin{array}{l}
\displaystyle \mathcal{P}(x)_{i} = \frac{e^{f(x)_{i}}}{\sum_{j=1}^{E} e^{f(x)_{j}}} \\[2mm]
\displaystyle \mathrm{MoE}(x) = \sum_{i=1}^{E} \mathcal{P}(x)_{i} \cdot \mathrm{FFN}_{i}^{T}(x)
\end{array}
$$
where $f(x)$ denotes the gating logits produced by the router. We keep the router projection in BF16, since it accounts for only a very small portion of the total memory footprint. The forward computation of the $i$-th ternary expert $\mathrm{FFN}_{i}^{T}(x)$ satisfies:
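A minimal NumPy sketch of the routing computation above, with plain linear maps standing in for the GLU experts and LayerNorm omitted. Note the equations weight all $E$ experts by the softmax probabilities, while the trained model activates only the top-1; all shapes here are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
d, E = 8, 4                                # toy hidden size and routed expert count

router_W = rng.normal(size=(d, E))         # router projection (kept in BF16 in the paper)
expert_W = rng.normal(size=(E, d, d))      # linear stand-ins for ternary experts FFN_1..FFN_E
shared_W = rng.normal(size=(d, d))         # stand-in for the full-precision shared FFN

def route(x):
    logits = x @ router_W                  # f(x): gating logits
    e = np.exp(logits - logits.max())      # numerically stable softmax -> P(x)_i
    return e / e.sum()

def moe_layer(x):
    probs = route(x)
    routed = sum(probs[i] * (expert_W[i] @ x) for i in range(E))
    return x + routed + shared_W @ x       # residual + MoE(x) + shared FFN(x)

y = moe_layer(rng.normal(size=d))
```

The shared-expert term is always added, matching the layer equation above; only the `routed` term is gated.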
$$
\begin{array}{rl}
& \mathrm{FFN}_{i}^{T}(x) = Q_{w}(W_{\mathrm{down}}^{T})\, Q_{a}(h) \\
& h = Q_{w}(W_{\mathrm{up}}^{T})\, Q_{a}(x) \otimes \sigma[Q_{w}(W_{\mathrm{gate}}^{T})\, Q_{a}(x)]
\end{array}
$$
where $\sigma$ is the SiLU function. We apply an absmean quantizer and a per-token absmax quantizer for weight and activation quantization in the experts' linear layers, following BitNet [MWM+24]. Specifically, the quantization can be formulated as:
$$
\begin{array}{l}
\displaystyle Q_{w}(W) = \alpha \cdot \mathrm{RoundClip}\Big(\frac{W}{\alpha}, -1, 1\Big), \\[2mm]
\displaystyle Q_{a}(x) = \frac{\beta}{127} \cdot \mathrm{RoundClip}\Big(\frac{127\, x}{\beta}, -128, 127\Big), \\[2mm]
\displaystyle \alpha = \frac{1}{nm} \|W\|_{1}, \quad \beta = \|x\|_{\infty}, \\[2mm]
\mathrm{RoundClip}(x, a, b) = \max(a, \min(b, \mathrm{round}(x)))
\end{array}
$$
The weight $W \in \mathbb{R}^{m \times n}$ is quantized into ternary values, i.e., $\{-1, 0, 1\}$. The activations $x$ are per-token quantized into 8-bit integers, i.e., $[-128, 127]$. The output of the ternary linear layer is $Y = Q_{w}(W) Q_{a}(x)$. During inference, we use the kernel from BitBLAS [WMC+24] to save memory and accelerate inference. Although ternary values correspond to 1.58 bits (i.e., $\log 3 / \log 2$), BitBLAS still stores and processes ternary weights in INT2 format, since current GPUs are based on a binary system.
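The absmean and absmax quantizers above can be sketched in NumPy as follows. These return dequantized (scaled) values for clarity; a real kernel such as BitBLAS would store the ternary codes packed in INT2 and the activation codes in INT8:

```python
import numpy as np

def round_clip(x, a, b):
    return np.maximum(a, np.minimum(b, np.round(x)))

def quantize_weight(W):
    """Absmean ternary quantizer: W -> alpha * {-1, 0, 1}."""
    alpha = np.abs(W).mean()               # alpha = ||W||_1 / (n m)
    return alpha * round_clip(W / alpha, -1, 1)

def quantize_activation(x):
    """Per-token absmax quantizer to 8-bit integer levels."""
    beta = np.abs(x).max()                 # beta = ||x||_inf
    return (beta / 127.0) * round_clip(127.0 * x / beta, -128, 127)

W = np.random.default_rng(0).normal(size=(4, 4))
ternary = quantize_weight(W) / np.abs(W).mean()   # recovers the {-1, 0, 1} codes
```

Dividing the quantized weight by $\alpha$ recovers the ternary codes, which is what gets packed for storage.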
# 3.2 Training recipe
Following MoE-LLaVA [LTY+24], the training of MoTE consists of three stages. In Stage I, we train a two-layer MLP connector to align the visual encoder and the LLM. In Stage II, we fine-tune the LLM and connector using more complex vision-language instruction data. In Stage III, we expand the dense model from Stage II into an MoE model with ternary experts. The visual encoder is frozen throughout the training process. As presented in Figure 1, during up-cycling, only the ternary MoE layers are trainable, while the shared expert and MSA layers are frozen.
We adopt quantization-aware training for MoTE. The weights and activations are quantized into ternary and INT8 values on-the-fly. Since many operations in the quantization are non-differentiable, we deploy the straight-through estimator [BLC13] for gradient approximation. The gradients directly bypass the non-differentiable functions, i.e., $\frac{\partial \mathcal{L}}{\partial W} = \frac{\partial \mathcal{L}}{\partial Q_{w}(W)}$ and $\frac{\partial \mathcal{L}}{\partial x} = \frac{\partial \mathcal{L}}{\partial Q_{a}(x)}$. The gradients and optimizer states are retained in full precision.
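The straight-through estimator can be sketched with a manual forward/backward pass in NumPy (toy shapes and a squared-error loss of our choosing; in an autograd framework the same effect is commonly obtained with `w + (quantize(w) - w).detach()`):

```python
import numpy as np

def quantize(w):
    """Absmean ternary quantizer (dequantized form)."""
    alpha = np.abs(w).mean()
    return alpha * np.clip(np.round(w / alpha), -1, 1)

rng = np.random.default_rng(0)
w = rng.normal(size=(2, 4))                # latent full-precision weights
x = rng.normal(size=4)
target = np.zeros(2)

y = quantize(w) @ x                        # forward pass uses the quantized weights
grad_y = 2.0 * (y - target)                # dL/dy for L = ||y - target||^2
grad_w = np.outer(grad_y, x)               # STE: dL/dw taken as dL/dQ_w(w)
w_new = w - 0.1 * grad_w                   # update flows to the full-precision weights
```

Because the forward pass is linear in the quantized matrix, `grad_w` is exactly the gradient with respect to `quantize(w)`; the STE simply applies it to the latent `w`.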
# 3.3 Training objectives
The training objective of MoTE, $\mathcal{L}_{\mathrm{total}}$, combines the loss of the specific multimodal task $\mathcal{L}_{\mathrm{LM}}$ with an auxiliary load balancing loss $\mathcal{L}_{\mathrm{balance}}$.
Language modeling loss. The auto-regressive language modeling loss $\mathcal{L}_{\mathrm{LM}}$ is widely adopted in the training of LMMs. Specifically, let $\mathcal{V}$ and $\mathcal{T}$ denote sequences of visual tokens and textual tokens, respectively. $\mathcal{T}$ can be divided into the instruction part $\mathcal{T}_{ins}$ and the response part $\mathcal{T}_{ans}$. The language modeling loss is calculated as:
$$
\mathcal{L}_{\mathrm{LM}} = -\sum_{\mathrm{token}_{i} \in \mathcal{T}_{ans}} \log \operatorname{Pr}(y^{i} \mid \mathcal{V}, \mathcal{T}^{[:i-1]})
$$
where $y$ is the model’s output. We only calculate the loss on the response part.
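A minimal NumPy sketch of this masked cross-entropy (toy vocabulary of 4 tokens; we assume for simplicity that `logits[t]` predicts `tokens[t]`, and the mask zeroes out the instruction part):

```python
import numpy as np

def lm_loss(logits, tokens, response_mask):
    """Cross-entropy summed over response tokens only."""
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))  # log-softmax
    token_logp = logp[np.arange(len(tokens)), tokens]                   # log p of targets
    return -(token_logp * response_mask).sum()

# Toy sequence: 2 instruction tokens (masked out) and 2 response tokens.
logits = np.log(np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
]))
tokens = np.array([0, 1, 0, 1])
mask = np.array([0, 0, 1, 1])        # loss only on the response part T_ans
loss = lm_loss(logits, tokens, mask)
```

Only the last two (response) positions contribute, so the loss here is $-2\log 0.7$; the instruction positions are ignored regardless of how well they are predicted.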
Load balancing loss. To ease the expert load imbalance problem in MoE, we adopt an auxiliary loss following Switch Transformers [FZS22]. Given a batch of training tokens $\mathbf { X }$ , the balancing loss can be formulated as:
$$
\mathcal { L } _ { \mathrm { b a l a n c e } } = \frac { E } { | \mathbf { X } | } \sum _ { i = 1 } ^ { E } \sum _ { x \in \mathbf { X } } t _ { i } \cdot \mathcal { P } ( x ) _ { i }
$$
where $| \mathbf { X } |$ is the number of training tokens in $\mathbf{X}$, ${ \mathcal { P } } ( x ) _ { i }$ is the routing probability from Equation 3, and $t _ { i }$ is the number of tokens routed to the $i$-th expert.
Above all, the training objective of MoTE is:
$$
\mathcal { L } _ { \mathrm { t o t a l } } = \mathcal { L } _ { \mathrm { L M } } + \gamma \cdot \mathcal { L } _ { \mathrm { b a l a n c e } }
$$
where $\gamma$ is a coefficient for load balancing.
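A literal NumPy transcription of the auxiliary loss above (with $t_i$ as raw token counts, as written) and the combined objective, using the coefficient $\gamma = 0.01$ adopted later in the experiments; the language-modeling loss value below is hypothetical:

```python
import numpy as np

def load_balancing_loss(probs, assignments, E):
    """probs: [N, E] router probabilities P(x)_i; assignments: top-1 expert
    index per token; t_i is the count of tokens routed to expert i."""
    N = probs.shape[0]
    t = np.bincount(assignments, minlength=E)            # t_i
    return (E / N) * float((t * probs.sum(axis=0)).sum())

# Four tokens routed uniformly over two experts:
probs = np.full((4, 2), 0.5)
assignments = np.array([0, 1, 0, 1])
l_balance = load_balancing_loss(probs, assignments, E=2)
l_total = 1.0 + 0.01 * l_balance     # L_total = L_LM + gamma * L_balance (L_LM hypothetical)
```

The loss grows when routing counts and routing probabilities concentrate on the same experts, which is the imbalance the auxiliary term penalizes.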
# 4 Experiments
# 4.1 Setup
Model settings. We select MoE-LLaVA [LTY+24] as the baseline. It adopts a similar three-stage MoE training recipe and utilizes full-precision experts. Since MoE-LLaVA activates the top-2 experts, and our model includes a shared expert, we use top-1 gating in MoTE to ensure a fair comparison in terms of FLOPs. All MoE layers consist of four routed experts. We adopt SigLIP-L [ZMKB23] as the vision encoder and the instruct version of the Qwen2.5 series [YYZ+24a] as the base LLM. The connector is a two-layer MLP with GELU activation. Table 1 presents the active and total parameter counts in the training of MoTE and MoE-LLaVA across different model sizes. The expert memory footprint includes contributions from both shared and routed experts.
Table 1: The active/total parameter counts and expert memory of MoTE and MoE-LLaVA in various model sizes.
Implementation details. We adopt expert parallelism for efficient training of MoE models. The coefficient $\gamma$ for the load balancing loss is set to 0.01, a value recommended by [FZS22] to ensure the auxiliary loss does not overwhelm the primary language modeling objective. All experiments are conducted on 16 NVIDIA A100 cards with 40GB memory. Due to limited computational resources, we do not perform dynamic resolution processing for the images, since it leads to extremely long training sequences. The total sequence length is set to 2048 tokens, of which the visual input accounts for 729 tokens. More hyper-parameters can be found in Appendix B.
Training data. We train MoTE and MoE-LLaVA on the same dataset to ensure a fair comparison. The training dataset consists of a total of 5 million samples. For the first stage, we use the pre-training data of LLaVA 1.5 [LLLL24]. For the second stage, we use a mixture of SViT [ZWH23], LVIS [WMW+23], LRV [LLL+23] and MIMIC-IT [LZC+23]. For the third stage, we use a subset of MAmmoTH-VL [GZB+24], which includes 3.4 million instruction-response pairs, each associated with a single image as the visual input.
Evaluation. We report the zero-shot performance of these models on a range of image understanding tasks using the LMM-Eval toolkit [ZLZ+24], including MMMU [YNZ+24], MathVista [LBX+24] (MathV), MMBench [LDZ+24] (MMB), MMStar [CLD+24] (MMS), MMVet [YYL+23] (MMV), SeedBench-2-Plus [LGC+24] (Seed2+), SeedBench [LWW+23] (Seed), AI2D [KSK+16], ChartQA [MLT+22], InfoVQA [MBT+22] and DocVQA [MKJ21].
Table 2: The results of MoTE and MoE-LLaVA on image understanding tasks in different model sizes. All models utilize the same base LLM, vision encoder and training dataset to ensure a fair comparison.
Table 3: The results of MoTE and MoE-LLaVA given the same amount of expert memory in 1.5B and 3B model size. Both of them are combined with post-training quantization (PTQ). The expert memory footprint includes contributions from both shared and routed experts.
# 4.2 Main results
We compare the performance of ternary up-cycling MoTE to MoE-LLaVA across different model sizes on various multimodal tasks. As shown in Table 2, MoTE underperforms full-precision up-cycling MoE-LLaVA when converting a 0.5B dense model to an MoE model. However, the performance gap between MoTE and MoE-LLaVA narrows as the parameter count of the dense model increases. Similar phenomena are also reported in low-bit pre-training of LLMs [WMD+23, MWM+24, KVM+24], which suggests promising trends for scaling model size in ternary MoEs.
As the model size scales to 1.5B parameters, owing to its larger total parameter count, MoTE surpasses MoE-LLaVA across various image understanding tasks, achieving an average accuracy improvement of $1.7\%$ with the same FLOPs. This demonstrates the effectiveness of our proposed method. Moreover, since the expert weights in MoTE are trained to adapt to ternary values, despite its larger total parameter count, the ternary MoE layers can be losslessly compressed to low-bit after training, significantly reducing the memory footprint caused by the ensemble of experts. As shown in Table 1, at the 3B model size, MoTE’s expert memory is only 6.8GB — just $38\%$ of MoE-LLaVA’s 18.1GB.
# 4.3 Compatibility with post-training quantization
Although the MoE layers of our model contain ternary experts, each layer still retains a full-precision shared expert. These shared experts can be quantized to low-bit using post-training quantization methods.
Table 4: The results of MoTE and the other methods in similar model size on general VQA and multimodal reasoning tasks.
We apply GPTQ [FAHA22] and AWQ [LTT+24] at various bit-widths and report the best results given the same expert memory footprint. We use 512 samples with a length of 2048 tokens from Stage III’s data as the calibration set. For MoE-LLaVA, all full-precision experts are quantized, resulting in expert memory footprints of 2.2GB and 4.5GB under INT4 quantization for the 1.5B and 3B models, respectively. To ensure a fair comparison, we quantize the shared expert of MoTE to INT8 using RTN [DLBZ22]. Additionally, we extend the comparison to scenarios with lower memory constraints. For expert memory footprints of 1.6GB and 3.4GB in the 1.5B and 3B models, MoE-LLaVA’s experts are quantized to 3-bit integers using GPTQ, while the shared experts of MoTE are quantized to INT4.
Table 3 presents the results for MoTE and MoE-LLaVA, both combined with post-training quantization. Given the same expert memory footprint, MoTE outperforms MoE-LLaVA across different model sizes. Notably, under stricter memory constraints, we observe a significant performance drop for MoE-LLaVA combined with GPTQ at 3-bit precision. However, since the parameters of our MoE layer are ternary, we can achieve the same memory footprint by applying INT4 quantization only to the shared expert, which further amplifies the advantage of our approach. Specifically, given the same expert memory of 3.4GB, MoTE achieves a gain of 4.3% average accuracy over MoE-LLaVA on the end tasks. These results demonstrate that our method achieves a lower memory footprint when combined with post-training quantization, while maintaining competitive performance.
# 4.4 Scaling with more data
To examine whether our method scales well with data, we train a 1.5B MoTE model with more data during ternary up-cycling. We adopt the same data recipe for Stage I and Stage II as described in Section 4.1. We then use the full set of MammoTH-VL [GZB+24] for Stage III, which contains 10 million samples, each associated with a single image. Every dense layer is replaced with an MoTE layer with one full-precision shared expert and four routed ternary experts. The number of training steps is set to 40k. The other hyper-parameters are consistent with the setup presented in Section 4.1.
Table 4 summarizes the zero-shot accuracy of MoTE and the baselines across various multimodal reasoning and general VQA tasks. For the baselines, we use their reported scores when available; otherwise, we evaluate the open-sourced models using the same prompts as ours to ensure a fair comparison. As shown in Table 4, although MoTE-1.5B is trained with only 21.6B tokens, our model achieves an improvement of 2.0% average accuracy over Qwen2-VL-2B [WBT+24]. Furthermore, MoTE outperforms larger dense models with fewer FLOPs. Specifically, MoTE outperforms MiniCPM-V-2.0-3B and Phi-3-Vision-4B by gains of 11.1% and 5.3% accuracy on the testmini set of MathVista.
Table 5: Ablations on the precision of routed experts in MoTE.
Table 6: Ablations on the precision of shared experts and the initialization methods of routed experts in MoTE.
For sparse models, thanks to a stronger base LLM and vision encoder, our model significantly outperforms MoE-LLaVA of similar total and active model size by a gain of 16.5% average accuracy. Notably, MM1.5-1B-MoE is a strong multimodal MoE baseline, trained from a 1B dense model with 64 experts replacing dense layers every two layers. MoTE outperforms it by gains of 0.6%, 1.1%, 12.8% and 6.9% on MMMU, SeedBench (image), MMVet and MathVista, respectively. These results prove the effectiveness of the proposed MoTE on multimodal reasoning and general VQA.
# 4.5 Ablation studies
Precision of routed experts. We investigate the impact of expert precision on the performance of MoTE. Specifically, we compare ternary (i.e., 1.58-bit) up-cycling to 1-bit up-cycling with BWN [RORF16] as the weight quantizer. Both models are up-cycled from Qwen2.5-1.5B with SigLIP-L as the vision encoder to ensure a fair comparison. As shown in Table 5, using binary experts results in performance degradation across most tasks. Similar findings have been reported in the quantization-aware training of BERT models [BZH+21], where transitioning from ternary to binary weights leads to a substantially more complex and irregular loss landscape, making optimization notably more difficult. Overall, ternary up-cycling is a memory-efficient and high-performance solution for MoE models.
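To make the compared quantizers concrete, the sketch below contrasts a BWN-style binary quantizer (a scale times the sign of the weights) with an absmean-style ternary quantizer. This is an illustrative implementation; the exact quantization functions used in MoTE may differ:

```python
import numpy as np

def binary_quantize(W):
    """BWN-style binarization: W is approximated by alpha * sign(W), alpha = mean(|W|)."""
    alpha = np.abs(W).mean()
    return alpha, np.sign(W)

def ternary_quantize(W, eps=1e-8):
    """Absmean ternarization: scale by mean(|W|), then round and clip to {-1, 0, 1}."""
    gamma = np.abs(W).mean() + eps
    return gamma, np.clip(np.round(W / gamma), -1, 1)

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
a, Wb = binary_quantize(W)
g, Wt = ternary_quantize(W)
# ternary keeps an explicit zero state that binarization cannot represent
assert set(np.unique(Wt)) <= {-1.0, 0.0, 1.0}
err_bin = np.abs(a * Wb - W).mean()
err_ter = np.abs(g * Wt - W).mean()
```

The zero state is what gives ternary weights their extra expressiveness over binary ones at nearly the same storage cost.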
Precision of shared experts. We ablate the effect of the precision of the shared expert reused from the FFN of the pre-trained dense checkpoint. MoTE keeps the shared expert in BF16 precision and freezes the module during up-cycling. We compare it to a model with a ternary shared expert. All ternary experts are trainable. Table 6 presents the zero-shot performance of these models on the MMMU, MMBench, AI2D, ChartQA, SeedBench-2-Plus and MMStar tasks. Weight ternarization of the shared expert has a significant effect on overall performance: the model with a full-precision shared expert outperforms the one with a ternary shared expert by an improvement of 7.6% average accuracy on the end tasks. This demonstrates the importance of keeping the pre-trained FFN as a high-precision shared expert during ternary up-cycling.
Initialization of routed experts. We compare MoTE to randomly initialized routed experts in Stage III. Table 6 presents the results for a 1.5B model, where initializing from the FFN yields a 1.5% improvement in average accuracy on end tasks compared to random initialization. Moreover, we analyze the impact of data scaling using the data recipe described in Section 4.4. As demonstrated in Table 4, FFN-based initialization maintains its advantage with additional training data, achieving a 1.3% higher average accuracy than random initialization. These findings suggest that leveraging a pre-trained full-precision FFN for MoTE's initialization not only enhances performance but also accelerates the convergence of ternary experts. Additional results for the 0.5B and 3B models are provided in Appendix C.
Training recipe. We conduct ablation studies on the training strategy of ternary up-cycling in MoTE to assess the effectiveness of first training with full-precision experts before fine-tuning the
Table 7: Ablations on the training recipe of MoTE. Given the same training FLOPs, we do not observe performance improvement from initially training with full-precision experts then fine-tuning them into ternary precision.
| Ternary Training | Full-Precision Training | MMMU (val) | MMBench (en-test) | AI2D (test) | ChartQA (test) | SeedBench-2-Plus (test) | MMStar (test) | Avg.↑ |
|---|---|---|---|---|---|---|---|---|
| 20% | 80% | 39.3 | 60.5 | 62.6 | 56.8 | 53.2 | 42.0 | 52.4 |
| 60% | 40% | 41.3 | 64.0 | 65.3 | 57.0 | 54.0 | 45.1 | 54.4 |
| 100% | 0% | 42.6 | 70.0 | 68.7 | 61.3 | 54.8 | 46.4 | 57.3 |
Figure 2: Visualization of the routing distributions of all tokens, text tokens, image tokens across all experts on the en-test set of MMBench.
model to ternary precision. All models are trained on 6.25B tokens and up-cycled from Qwen2.5-1.5B. We vary the proportion of training conducted in full precision versus ternary precision. As shown in Table 7, we do not observe a performance gain from initially training with full-precision experts. In fact, accuracy improves as the proportion of ternary training increases. Therefore, for both simplicity and improved performance, MoTE is trained directly in ternary precision, without a full-precision training phase during up-cycling.
# 5 Analysis
We visualize the routing distribution of all tokens in MoTE-1.5B on the en-test split of the MMBench dataset. As shown in Figure 2a, expert utilization across all tokens is well-balanced. To further investigate modality-specific behavior, we present the routing distributions for text and image tokens separately in Figures 2b and 2c, respectively. Notably, text and image tokens exhibit distinct routing patterns. For example, expert #1 is frequently activated for image tokens in the first layer and the final five layers. Additional visualizations across various tasks are provided in Appendix D.1. We observe that routing distributions remain largely consistent across different tasks, suggesting that the experts in MoTE specialize based on modality rather than task-specific features. Moreover, we include per-expert routing distributions by modality in Appendix D.2. Interestingly, some experts exhibit clear modality preferences despite the absence of explicit modality conditioning during training. To better understand expert specialization, we further apply PCA [Pea01] to extract the top-10 routing pathways for text and image tokens. More visualizations are included in Appendix D.3. These findings enhance our understanding of MoTE's behavior and workflow from a token-level perspective.

Abstract. Large multimodal Mixture-of-Experts (MoE) models effectively scale the model size to boost performance while keeping the number of active parameters fixed. However, previous works primarily utilized full-precision experts during sparse up-cycling. Although they show superior performance on end tasks, the large number of experts incurs a higher memory footprint, which poses significant challenges for deployment on edge devices. In this work, we propose MoTE, a scalable and memory-efficient approach to train Mixture-of-Ternary-Experts models from a dense checkpoint. Instead of training fewer high-precision experts, we propose to train more low-precision experts during up-cycling.
Specifically, we use the pre-trained FFN as a shared expert and train ternary routed experts with parameters in {-1, 0, 1}. Extensive experiments show that our approach has a promising scaling trend along model size. MoTE achieves comparable performance to the full-precision baseline MoE-LLaVA while offering a lower memory footprint. Furthermore, our approach is compatible with post-training quantization methods, and its advantage is further amplified as the memory constraint tightens. Given the same expert memory footprint of 3.4GB and combined with post-training quantization, MoTE outperforms MoE-LLaVA by a gain of 4.3% average accuracy on end tasks, demonstrating its effectiveness and potential for memory-constrained devices.

Categories: cs.CV, cs.LG
# 1 Introduction
Graph neural networks (GNNs) have demonstrated significant performance and representational capacity in solving problems over graph-structured data, fueling a wide range of applications across domains, from biology to finance, from social media to neuroscience [59]. In particular, among various graph machine learning tasks, GNNs have shown a significant advantage over other methods in the node classification task, that is, when a unique graph is given as input with some nodes labelled and others not, and the goal is to predict a label for each unlabelled node.
Together with their demonstrated predictive performance, GNNs inherit from the wider class of deep learning methods several limitations which can hinder their adoption, especially in high-stakes application domains [13]. Most notably, GNNs lack interpretability [11, 65] and tend to inherit biases present in the training datasets [9, 12, 15, 30–32, 57], potentially leading to discriminatory decisions based on, e.g., gender, race, or other sensitive attributes. The latter issue derives from the fact that, exactly like every other machine learning algorithm, GNNs are trained to reflect the distribution of the training data, which often contains historical bias towards sensitive attributes. If not addressed explicitly, the bias contained in the training data can end up being baked into the trained machine learning model [24]. Furthermore, the underlying graph structure and the typical message-passing mechanism of GNNs can further magnify harmful biases [12].
Counterfactual learning is emerging as a paradigm that can alleviate both fairness and interpretability issues [23]. The notion of counterfactual is borrowed from causal language, and it indicates the possibility of an alternative outcome if some of the premises had been different from what they were in reality (“counter to the facts”). For instance, in fairness testing, counterfactual examples are created by altering certain sensitive features (e.g., gender or race) of a data point to see if the model's predictions change. Counterfactuals are also straightforward contrastive example-based explanations of the type: “You were denied a loan. If your income had been €45,000, you would have been offered a loan” [56]. A key challenge for learning causality [22] is that, in order to properly determine the causal effect of an action, we need to know both the factual outcome with the observed action and the counterfactual outcome with the unobserved
Figure 1: The input graph $G$ of node $u97$ (left); a counterfactual explanation, subgraph $G_1$ (center); and a counterfactual evidence, node $u862$ with its neighborhood subgraph $G_2$ (right), from the German Credit dataset. Nodes are annotated with user gender, age, loan duration (in months), and loan amount (in Deutsche Mark).
action. However, in many real-world settings it is impractical to conduct randomized controlled trials to get the counterfactual: when this is the case, we only have access to the observational factual data, i.e., the observed action and its corresponding factual outcome. Therefore, it is crucial to develop methods for detecting counterfactuals in the observational data, in order to take full advantage of counterfactual reasoning in machine learning [22, 23, 28, 42].
In this paper, we tackle the problem of searching for counterfactual evidences in the node classification task. In this setting, a counterfactual evidence for a node $v$ is another node $u$ such that, although the two exhibit great similarity in their neighborhood subgraphs, including the features, they are classified differently by the given GNN. The differences between $v$'s and $u$'s neighborhood subgraphs are what define the counterfactual hypothesis for $v$, i.e., the small changes that would make $v$ be classified differently. The advantage of this type of counterfactual over perturbation-based ones is that it exists in the factual data: as such, it enjoys greater realism and feasibility than counterfactuals produced by perturbation.
Figure 1 provides a depiction of a counterfactual evidence and its difference from a counterfactual explanation [65], extracted from the German Credit dataset [3]. On the left, a node, $u97$, is presented together with its 1-hop neighborhood structure $(G)$, along with feature values for all the nodes in $G$. In this case, the GNN classifies $u97$ positively (loan approved). In the center, a counterfactual explanation for $u97$ is presented: this is a set of perturbations to the specific data point ($u97$'s 1-hop neighborhood structure and features) which is sufficient to induce a change in the classification. In this example, the counterfactual explanation (subgraph $G_1$) is obtained from the data point $G$ by masking three out of four links: such changes would make $u97$ classified negatively, thus highlighting which are the most important parts of $G$ for the prediction. Instead, a counterfactual evidence is a different node ($u862$, on the right), which, despite having great similarity in the subgraph and associated features, gets a different treatment from $u97$.
Besides the general importance of finding counterfactuals in the observational data (previously discussed), there are some immediate applications of graph counterfactual evidence that we develop in case studies in §7. From the standpoint of a watchdog auditing an algorithmic decision-making system for unlawful discrimination, it is critical to find evidences of nodes which are similar but treated differently, thus violating the principle that “similar individuals should be treated similarly” [17] and indicating potential unfairness issues with the model. Two similar yet differently classified nodes might also indicate an area around the decision boundary of the GNN, where misclassification errors concentrate, thus providing a signal for potential intervention to improve the performance of a classifier.
Our contributions. We tackle the problem of extracting, from a given graph, a pair of nodes (together with their neighborhoods) such that they exhibit high similarity in both structure and features, yet they are classified differently by a GNN. Ours is a data mining problem, that is, we define a novel structure of interest in a large graph based on a pre-trained GNN model's output, and we then devise the algorithms to extract it. More specifically, we consider two versions of the problem: the local version seeks counterfactual evidences for a given target node, while the global version seeks counterfactual evidences among all possible node pairs. For both problems, we aim to extract the top-$k$ counterfactual evidences w.r.t. a measure of similarity of the two neighborhood subgraphs. As a measure of similarity, we adapt the Weisfeiler-Lehman (WL) kernel-based graph similarity [51, 52] to the case of node-anchored graphs (i.e., $L$-hop neighbor subgraphs) having multiple node features. Recall that the WL-test is a necessary but insufficient condition for graph isomorphism [51], whereas the recent GNN variants are not more powerful than the 1-WL test [62]. Our technique inherits the simplicity and computational efficiency of the WL kernel computation, while following the update scheme of message-passing GNNs; thus it generalizes the inference process without being tailored to any specific GNN.
We propose search algorithms for both the local and global problems, which are akin to exact and approximate nearest neighbor search over high-dimensional, dense vector spaces [44]. First, we design a baseline algorithm that applies a linear scan over all test nodes, and we employ several optimization strategies. Second, we propose an index-based algorithm that improves efficiency by pruning undesirable test nodes. Existing vector indexes [14, 39] are suited to Euclidean distance and are not readily applicable to the cosine similarity that we adopt, thereby requiring a novel, efficient, and effective index tailored to our problem. Specifically, our index-based algorithm creates supplementary clusters based on new centroids that are close to the boundary nodes, thereby enhancing the quality of index-based retrieval at the cost of some redundancy.
Our experiments assess the efficiency and effectiveness of our algorithms on real-world datasets, using different GNNs and comparing against non-trivial baselines. Finally, we showcase the potential of applying counterfactual evidences in various downstream tasks: (1) we show how our proposal can be effective in unveiling unfairness of a GNN; (2) we show how counterfactual evidences identify the test instances that are close to the decision boundary of a GNN and thus error-prone; (3) we illustrate that fine-tuning the GNN with counterfactual evidences can enhance accuracy.
# Paper contributions and roadmap:
• We introduce the novel notion of counterfactual evidence for node classification (§3).
• We propose a novel and generic kernel-based graph similarity measure to assess the similarity between neighborhood subgraphs for a pair of nodes (§4).
• We introduce an index-based algorithm to efficiently search counterfactual evidences while maintaining high quality (§5).
• We assess our algorithms on several real-world datasets, using different GNNs and against non-trivial baselines (§6).
• Finally, we showcase the potential of applying counterfactual evidences in various downstream tasks (§7).
# 2 Related work
In graph machine learning, the term “counterfactual” is adopted in different contexts, where it takes different semantics [23, 43]. We next review some related literature highlighting how the notions of counterfactual differ from the novel notion of counterfactual evidence in node classification, that we introduce in this paper.
Fairness. The notion of counterfactual fairness is based on the idea that a prediction for an instance is fair if it remains the same in a counterfactual world where the instance belongs to a different protected group, e.g., a different gender or race. Counterfactual learning on graphs has emerged as a promising direction to achieve counterfactual fairness, which is attained via a trade-off between utility and fairness in the objective function [3, 38, 67]. Our focus is different: We retrieve counterfactual nodes as opposed to achieving counterfactual fairness in GNN classification. Nevertheless, we demonstrate in our empirical evaluation that our counterfactual evidences can facilitate detecting unfairness patterns in GNNs (§7).
Explainability. For GNN explainability, a factual explanation is a subgraph that preserves the result of classification, while a counterfactual explanation is a subgraph which flips the result if perturbed or removed [65]. A number of counterfactual explainability methods for GNNs have been proposed considering both node and graph classifications, e.g., CF-GNNExplainer [36], RCExplainer [5], MEG [41], and CLEAR [37]. Recent works combine factual and counterfactual explanations for GNNs [6, 45, 54]. These methods identify a subgraph surrounding a node such that, if the subgraph is perturbed, the node classification will be different. This notion of counterfactual thus differs from the counterfactual evidence that we propose in this paper, as already discussed in §1 and depicted in Figure 1.
In the context of graph classification, Abrate et al. introduce the notion of counterfactual graph, which has high structural similarity with the input graph but is classified by the GNN into a different class [2]. Huang et al. propose GCFExplainer to identify a small set of representative counterfactual graphs, referred to as the global counterfactuals, for all input graphs in a graph database [26]. Both of these works share our paradigm of searching for counterfactuals in the observational data, but they focus on graph classification (instead of node classification, as we do in this paper): they consider graph structures and use symmetric difference or graph edit distance to measure similarity between graph pairs, while disregarding multiple node features, which instead play an important role in our proposal.
# 3 Problem statement
We consider an undirected, unweighted graph $G = (V, E)$, where $V = \{v_1, v_2, ..., v_{|V|}\}$ denotes a finite set of nodes and $E \subseteq V \times V$ a set of edges. Nodes are associated with $d$-dimensional features, represented by the feature matrix $\mathbf{X} \in \mathbb{R}^{|V| \times d}$, where $\mathbf{x}_v \in \mathbb{R}^d$ is the $d$-dimensional feature vector associated with a node $v \in V$. Next, $\mathbf{Y}$ denotes a set of true (ground-truth) labels associated with nodes, and $y_v \in \mathbf{Y}$ is the true label for node $v$.
Graph Neural Networks (GNNs) [16, 29, 46] comprise a well-established family of deep learning models tailored for analyzing graph-structured data. GNNs generally employ a multi-layer message-passing scheme, as shown in Equation 1.
$$
\mathbf { H } ^ { ( l + 1 ) } = \sigma ( \widetilde { \mathbf { A } } \mathbf { H } ^ { ( l ) } \mathbf { W } ^ { ( l ) } )
$$
$\mathbf{H}^{(l+1)}$ is the matrix of node representations at layer $l+1$, with $\mathbf{H}^{(0)} = \mathbf{X}$ being the input feature matrix. $\widetilde{\mathbf{A}}$ is the normalized adjacency matrix of the graph, which captures the graph structure. $\mathbf{W}^{(l)}$ is a learnable weight matrix at layer $l$. $\sigma$ is an activation function such as ReLU. The final layer's output $\mathbf{H}^{(L)}$ is used to make predictions by passing it to fully-connected and then softmax layers.
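As a concrete illustration of Equation 1, the sketch below implements one message-passing layer with NumPy, assuming GCN-style symmetric normalization with self-loops for $\widetilde{\mathbf{A}}$ (the text does not fix a particular normalization):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gnn_layer(A_tilde, H, W):
    """One message-passing layer: H^(l+1) = ReLU(A_tilde @ H^(l) @ W^(l))."""
    return np.maximum(A_tilde @ H @ W, 0.0)

# toy 4-node path graph with 3-dimensional node features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)
W0 = np.ones((3, 2))
H1 = gnn_layer(normalized_adjacency(A), X, W0)
assert H1.shape == (4, 2)
```

Stacking $L$ such layers lets each node aggregate information from its $L$-hop neighborhood.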
Node classification is a fundamental task in graph analysis [19]. In the context of GNNs, it aims to learn a model $M : V \to \mathbf{Y}$ s.t. $M(v) = y_v$ for $v \in V_{train} \subseteq V$, where $V_{train}$ is the training set of nodes with known (true) labels $\mathbf{Y}_{train}$. Then, we tune the hyperparameters of the model using a validation set of nodes $V_{valid}$ and their known labels $\mathbf{Y}_{valid}$. Finally, the GNN predicts the labels for the remaining test nodes $V_{test} = V \backslash (V_{train} \cup V_{valid})$; this is also known as the inference mechanism of GNNs.
We consider a fixed, deterministic GNN $M$, that is, (1) all factors which determine the inference process of $M(\cdot)$, such as layers and model parameters, are fixed; and (2) $M(\cdot)$ always generates the same output label for the same input test node.
The $L$-hop neighbor subgraph of a node $v \in V$ is a subgraph $G_v \subseteq G$, where each node in $G_v$ can be reached from $v$ within $L$ hops. This subgraph is anchored by node $v$. In a message-passing GNN with $L$ layers, each layer aggregates messages from neighboring nodes. With each successive layer, the messages are propagated one step further. Thus, after $L$ layers, a node $v$ can aggregate information from its $L$-hop neighbors, emphasizing the importance of the $L$-hop neighbor subgraph $G_v$ surrounding the node $v$.
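A minimal sketch of extracting the node set of the $L$-hop neighbor subgraph $G_v$ by breadth-first search, assuming an adjacency-list representation of the graph:

```python
from collections import deque

def l_hop_subgraph(adj, v, L):
    """Return the node set of the L-hop neighbor subgraph anchored at v.

    adj: dict mapping each node to an iterable of its neighbors.
    """
    seen = {v}
    frontier = deque([(v, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == L:
            continue                      # do not expand beyond L hops
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen

# toy path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert l_hop_subgraph(adj, 0, 1) == {0, 1}
assert l_hop_subgraph(adj, 0, 2) == {0, 1, 2}
```

$G_v$ is then the subgraph induced by the returned node set.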
Counterfactual Evidence. Given a node $v \in V_{test}$, another node $u \in V_{test}$ $(v \neq u)$ is called its counterfactual evidence (CE) if the following two conditions hold: 1) $v$ and $u$ are assigned different labels by $M$, i.e., $M(v) \neq M(u)$; 2) the $L$-hop neighbor subgraphs of $v$ and $u$ have a high similarity score (the $\mathsf{KS}(v, u)$ measure that we define in the next section).
Given that we search for pairs with high similarity, one option could be to retrieve all the pairs having a similarity score above a given threshold. In this paper, instead, we seek the top-$k$ pairs. For simplicity of presentation, we formalize the local and global versions of the problem for the top-1 case, next.
Problem 1 (Top-1 Local Counterfactual Evidence). Given a query node $v \in V_{test}$, the top-1 counterfactual evidence $\mathsf{LCE}_{opt}(v)$ is a node $u \in V_{test}$ that 1) has a different predicted label w.r.t. $v$; and 2) attains the highest similarity score $\mathsf{KS}(v, u)$ compared to all other nodes in the test set.
$$
\mathsf { L C E } _ { o p t } ( v ) = \mathop { \mathrm { a r g m a x } } _ { u \in V _ { t e s t } , M ( v ) \neq M ( u ) } \mathsf { K S } ( v , u )
$$
Problem 2 (Top-1 Global Counterfactual Evidence). Given the test set $V_{test}$, the top-1 global counterfactual evidence $\mathrm{GCE}_{opt}(V_{test})$ is a pair of nodes $(v, u)$, with $v, u \in V_{test}$, s.t. 1) $v$ and $u$ have different predicted labels; and 2) the pair has the highest similarity score $\mathsf{KS}(v, u)$ among all possible node pairs in $V_{test}$.
$$
\mathrm { G C E } _ { o p t } ( V _ { t e s t } ) = \operatorname * { a r g m a x } _ { { v , u \in V _ { t e s t } , M ( v ) \neq M ( u ) } } \mathsf { K S } ( v , u )
$$
The generalization of the LCE and GCE problems to extracting the top-$k$ pairs is straightforward.
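For intuition, the top-$k$ GCE problem can be solved naively by scoring every differently-labelled test pair, which is exactly the kind of quadratic linear scan the algorithms of §5 aim to improve upon. A sketch with toy stand-ins for the GNN $M$ and the similarity $\mathsf{KS}$ (both hypothetical here):

```python
import heapq
from itertools import combinations

def top_k_global_ce(test_nodes, predict, ks, k=1):
    """Brute-force top-k global counterfactual evidences.

    predict(v) is the fixed GNN's label for v; ks(v, u) is the similarity score.
    """
    scored = ((ks(v, u), v, u)
              for v, u in combinations(test_nodes, 2)
              if predict(v) != predict(u))
    return heapq.nlargest(k, scored)

# toy example: 1-D "embeddings" with a similarity that decays with distance
emb = {0: 0.0, 1: 0.1, 2: 5.0, 3: 5.2}
labels = {0: "A", 1: "A", 2: "B", 3: "A"}
sim = lambda v, u: 1.0 / (1.0 + abs(emb[v] - emb[u]))
top = top_k_global_ce(list(emb), labels.get, sim, k=1)
assert top[0][1:] == (2, 3)  # the most similar differently-labelled pair
```

The local (LCE) variant simply fixes $v$ and scans the remaining test nodes.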
# 4 Kernel-based Similarity
The similarity between two graphs can be computed in many different ways [58]. Purely structural methods, e.g., graph isomorphism [8], graph edit distance [20], and maximum common subgraph [18, 47], are relatively strict and may ignore node features. GNN-based approaches such as NTN [53], SimGNN [4], and MGMN [35] would be specific to the underlying GNNs, and require supervision, including extensive parameter tuning and training epochs. In this work, we instead adapt the Weisfeiler-Lehman (WL) kernel-based graph similarity measure [51, 52] to the case of node-anchored graphs (i.e., $L$-hop neighbor subgraphs) having multiple node features. Recall that the WL-test is a necessary but insufficient condition for graph isomorphism [51], whereas the recent GNN variants are not more powerful than the 1-WL test [62]. We refer to our technique as the kernel-based similarity (or KS, in short). It inherits the simplicity and computational efficiency of the WL kernel computation. Meanwhile, KS also follows the update scheme of message-passing GNNs; thus it generalizes the inference process of GNNs without being tailored to a specific GNN.
To this end, we first introduce the traditional WL kernel-based graph similarity measure [51, 52]. In this setting, unlike the multiple features used by GNN-based approaches (§3), each node $v$ has a single attribute $a_v$. The WL kernel identifies similarities by examining subtree patterns. These patterns emerge from a propagation scheme that iteratively evaluates the attributes of nodes and their neighbors. The process involves generating a sequence of ordered strings by aggregating the attributes of a node with those of its neighbors, which are subsequently hashed to generate new, compressed node attributes. In each iteration, these updated attributes represent expanded neighborhoods of each node, e.g., at iteration $L$, the compressed label summarizes $L$-hop neighborhoods. Specifically, the initial attribute of each node $v$ is denoted $a_v^0$, and $L$ denotes the number of WL iterations. At the $l$-th iteration, we update the compressed attribute $a_v^l$ by looking at the ordered set of neighbor attributes, as depicted in the equation below.
$$
a _ { v } ^ { l } = h a s h ( a _ { v } ^ { l - 1 } , N ^ { l - 1 } ( v ) )
$$
$N ^ { l } ( \upsilon )$ denotes the sorted sequence of attributes at the $l$ -th iteration from the 1-hop neighbors of node $\boldsymbol { v }$ . The hash function generates an updated, compressed attribute for node $\boldsymbol { v }$ . For this purpose, perfect hashing is utilized, ensuring that two nodes at the $l$ -th iteration possess identical attributes if and only if their attributes and those of their neighbors at the $l$ -th iteration are identical.
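A compact sketch of one classic WL relabeling iteration; here a Python dictionary over the signatures observed in the graph plays the role of the perfect hash:

```python
def wl_iteration(attrs, adj):
    """One Weisfeiler-Lehman relabeling step over node attributes.

    attrs: dict node -> current (hashable) attribute; adj: node -> neighbors.
    """
    new_attrs = {}
    for v in attrs:
        # aggregate own attribute with the sorted multiset of neighbor attributes
        signature = (attrs[v], tuple(sorted(attrs[u] for u in adj[v])))
        new_attrs[v] = signature
    # compress signatures to small integer labels (a perfect hash over this graph)
    table = {sig: i for i, sig in enumerate(sorted(set(new_attrs.values())))}
    return {v: table[sig] for v, sig in new_attrs.items()}

# path graph 0-1-2-3, all nodes starting with the same attribute
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
attrs = {v: 0 for v in adj}
attrs = wl_iteration(attrs, adj)
# after one iteration, the degree-1 endpoints get a different label than the inner nodes
assert attrs[0] == attrs[3] and attrs[1] == attrs[2] and attrs[0] != attrs[1]
```

Iterating $L$ times yields labels that summarize $L$-hop neighborhoods, as described above.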
Our problem setting is different from the classic WL kernel in two ways. First, we compare the similarity between node-anchored graphs (§3), which effectively preserves the information of both the target nodes and their neighborhoods. Second, instead of focusing on a single attribute as in the WL kernel, we incorporate multiple node features to better capture the characteristics of the nodes and their neighbors. This approach allows us to represent a node with both rich information and neighborhood dependencies. Therefore, we define the KS score as below.
Definition 1 (KS score). We incorporate information about multiple node features in the classic WL scheme, that is, unlike updating node colors in the original WL kernel, we update the node features of each node based on its current vector and the vectors of its neighbors. Specifically, we compute a weighted sum of the neighbor vectors using the cosine similarity as weights, then combine it with the original feature vector based on a trade-off parameter $\alpha$ , as follows.
$$
\mathbf { x } _ { v } ^ { l + 1 } = \alpha \cdot \mathbf { x } _ { v } ^ { l } + \frac { 1 - \alpha } { | N ( v ) | } \sum _ { u \in N ( v ) } { \mathrm { C O S I N E } } ( \mathbf { x } _ { v } ^ { l } , \mathbf { x } _ { u } ^ { l } ) \cdot \mathbf { x } _ { u } ^ { l }
$$
$$
\mathsf { C O S I N E } ( \mathbf { x } _ { v } ^ { l } , \mathbf { x } _ { u } ^ { l } ) = \frac { \mathbf { x } _ { v } ^ { l } \cdot \mathbf { x } _ { u } ^ { l } } { | | \mathbf { x } _ { v } ^ { l } | | _ { 2 } \cdot | | \mathbf { x } _ { u } ^ { l } | | _ { 2 } }
$$
Here, $N ( v )$ represents the 1-hop neighbors of node $\boldsymbol { v }$ , $\mathbf { x } _ { v } ^ { l }$ and $\mathbf { x } _ { u } ^ { l }$ are the feature vectors of node 𝑣 and node $u$ at iteration $l$ , respectively, with $\mathbf { x } _ { v } ^ { 0 } = \mathbf { x } _ { v }$ . The trade-off parameter $\alpha$ controls the relative importance between a node’s original feature vector and its accumulated (weighted) feature vectors from the neighbors.
To capture the evolution of the graph structure across iterations, we aggregate the feature vectors over all iterations: $\begin{array} { r } { \mathbf { x } _ { v } ^ { a g g } = \sum _ { l = 0 } ^ { L } \mathbf { x } _ { v } ^ { l } } \end{array}$ . Recall that $L$ is the number of iterations. We then define the $L$ -hop neighbor similarity $\mathsf { K S } ( v , u )$ as the cosine similarity between $\mathbf { x } _ { v } ^ { a g g }$ and $\mathbf { x } _ { u } ^ { a g g }$ .
$$
\mathsf { K S } ( v , u ) = \frac { \mathbf { x } _ { v } ^ { a g g } \cdot \mathbf { x } _ { u } ^ { a g g } } { | | \mathbf { x } _ { v } ^ { a g g } | | _ { 2 } \cdot | | \mathbf { x } _ { u } ^ { a g g } | | _ { 2 } }
$$
We employ the cosine similarity since, compared to the Euclidean distance between two vectors, it is invariant to the magnitude of the vectors and lies in a bounded range, which simplifies interpretation for end users.
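Putting the update rule, the aggregation, and the final cosine together, the KS score can be sketched with NumPy. This is a minimal sketch on illustrative data; the helper names `propagate` and `ks` are ours, not from the paper's implementation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def propagate(X, adj, alpha, L):
    """Iterate x_v^{l+1} = alpha*x_v^l + (1-alpha)/|N(v)| * sum_u cos(x_v, x_u)*x_u
    and return the aggregate x^agg = sum_{l=0}^{L} x^l for every node."""
    agg = X.copy()
    cur = X.copy()
    for _ in range(L):
        nxt = np.empty_like(cur)
        for v, nbrs in adj.items():
            weighted = sum(cosine(cur[v], cur[u]) * cur[u] for u in nbrs)
            nxt[v] = alpha * cur[v] + (1 - alpha) / len(nbrs) * weighted
        cur = nxt
        agg += cur
    return agg

def ks(agg, v, u):
    """KS score: cosine similarity between the aggregated vectors."""
    return cosine(agg[v], agg[u])

# Nodes 0 and 1 have similar features and adjacent positions; node 2 differs.
X = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]])
adj = {0: [1], 1: [0, 2], 2: [1]}
agg = propagate(X, adj, alpha=0.5, L=2)
score = ks(agg, 0, 1)
```

As expected, the KS score between the two similar nodes exceeds that between dissimilar ones.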
# 5 Algorithms
In this section, we present our proposed algorithms for identifying counterfactual evidences. We begin by outlining the baseline algorithm for local CE identification, including its optimization strategies. Next, we develop an index-based algorithm designed to enhance the efficiency of CE identification. Finally, we extend the algorithm for discovering global counterfactuals.
# 5.1 Local CE Identification
Baseline. We first propose the baseline algorithm for finding the top-1 local counterfactual evidence, denoted as LocalCE-B. As shown in Algorithm 1, given a graph $G$ , a GNN model $M$ , a subset of test nodes $V _ { t e s t }$ for classification, and a query node $v$ , LocalCE-B performs a linear scan over the remaining nodes (line 2) to identify the top-1 $\mathsf { L C E } _ { o p t }$ that satisfies the specified criteria (line 3). A node $u$
# Algorithm 1 LocalCE-B
is considered $\mathsf { L C E } _ { o p t }$ if: 1) the predicted labels of $u$ and $\boldsymbol { v }$ by the pretrained GNN model $M$ are different; and 2) the KS score between $\boldsymbol { v }$ and $u$ is the largest. LocalCE-B iteratively updates $\mathsf { L C E } _ { o p t } ( v )$ with the node that achieves the highest KS score (line 4). Finally, LocalCE-B outputs the top-1 local counterfactual evidence.
Extending the algorithm to find the top-$k$ LCEs is straightforward. We maintain a bucket $B$ of size $k$ , where nodes are sorted in descending order of their KS scores. If a new incoming node has a KS score higher than the last node in $B$ , we remove the last node and insert the new node into the bucket, keeping the nodes sorted in descending order.
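The bucket described above behaves like a bounded priority queue; a heap-based sketch follows, where the candidate nodes and their KS scores are assumed to be precomputed (the function name and data layout are illustrative, not from the paper):

```python
import heapq

def top_k_lce(candidates, k):
    """candidates: iterable of (node, ks_score) pairs from the opposite class.
    Keeps the k nodes with the highest KS scores using a min-heap of size k."""
    heap = []  # (ks_score, node); the smallest kept score sits at heap[0]
    for node, score in candidates:
        if len(heap) < k:
            heapq.heappush(heap, (score, node))
        elif score > heap[0][0]:
            # new node beats the weakest kept node: swap it in
            heapq.heapreplace(heap, (score, node))
    # return nodes in descending KS order, as in the bucket B described above
    return [node for score, node in sorted(heap, reverse=True)]

cands = [("a", 0.3), ("b", 0.9), ("c", 0.7), ("d", 0.5)]
best = top_k_lce(cands, k=2)
```

The heap variant gives the same result as the sorted bucket while costing $O(\log k)$ per insertion instead of a linear shift.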
Time and Space Complexity. The time complexity of LocalCE-B consists of three main parts: first, the inference cost of the GNN, $O ( L d ( | E | + | V _ { t e s t } | | \mathbf { Y } | ) )$ [61], assuming GCN [29], Cluster-GCN [7], etc.; second, the cost of the vector aggregation for the KS score, $O ( L d | V _ { t e s t } | | N ( \cdot ) | )$ , with $| N ( \cdot ) |$ being the maximum number of 1-hop neighbors of a test node; and finally, the cost of finding the top-$k$: $O ( k | V _ { t e s t } | )$ . Therefore, the total time complexity of LocalCE-B is $O \left( L d \left( | E | + | V _ { t e s t } | \left( | \mathbf { Y } | + | N ( \cdot ) | \right) \right) + k | V _ { t e s t } | \right)$ . As for the space complexity, the space cost of the GNN's inference is $O \left( d | V _ { t e s t } | + | E | \right)$ [61]; the space cost of vector aggregation is $O ( d | N ( \cdot ) | | V _ { t e s t } | )$ ; and that of the top-$k$ bucket is $O ( k )$ . Therefore, the total space complexity of LocalCE-B is $O ( d | N ( \cdot ) | | V _ { t e s t } | + | E | + k )$ .
Optimizations. We apply two optimization strategies to the LocalCE-B algorithm: (1) The prediction of each test node is pre-computed via GNN inference. Additionally, the test nodes are partitioned based on their predicted classes and stored separately, so at query time we only examine nodes from different classes. (2) We also pre-compute the aggregated vectors of all test nodes, so we can rapidly identify the top-1 LCE by a linear scan over the aggregated vectors of test nodes from different classes.
Index-based Solution. Performing a linear scan over the aggregated vectors of test nodes to compute cosine similarity is computationally expensive, particularly when dealing with large-scale graphs. This inefficiency poses a significant challenge in applications such as information retrieval and recommendation systems, where rapid and accurate similarity calculations are crucial. Existing vector index approaches [14, 39], which are primarily optimized for Euclidean distance, are not readily applicable to cosine similarity, further complicating the problem. Additionally, there are almost no dedicated index approaches specifically designed for cosine similarity [34, 44]. This gap necessitates the development of an efficient and effective solution tailored for cosine similarity computations. To address these challenges, we propose a novel heuristic index-based algorithm that leverages $k$ -means clustering and is tailored to cosine similarity. Specifically, we make two novel technical contributions: (1) supplementary partitioning, which enhances the quality of our index, and (2) weighted clustering, which adapts the $k$ -means algorithm to generate supplementary partitions. Based on these novel techniques, we develop an index structure and show how querying benefits from this structure.
Figure 2: Example of vector intersection in 2-dimension
Overview. Our core idea is to apply the $k$ -means algorithm to assign vectors to different clusters. For a specific query vector, we identify its cluster and find the top-$k$ CEs only from that cluster, reducing the search space by a factor of the number of clusters.
However, a high-quality result requires the query vector to be near the cluster centroid, since its top-$k$ CEs are then expected to be in the same cluster. In contrast, if the query vector is near the cluster boundary, some top-$k$ CEs might belong to a different cluster, leading to higher errors, as we will miss those CEs.
To address this limitation, we propose supplementary partitioning. This method creates additional partitions in which boundary nodes from earlier partitions are assigned closer to some centroid of the new partition. For a query, we first select the optimal partition, then find the query's cluster in that partition, and finally identify the top-$k$ CEs from that cluster, thereby improving the quality of results despite some storage redundancy due to multiple partitions.
We construct supplementary partitions using weighted $k$ -means clustering, assigning more weight to nodes closer to their cluster boundaries in the previous partitions. These weights make boundary nodes more central in the new partition.
Supplementary Partitioning. Consider a previously computed partition $C ^ { 0 } = \{ \mathbf { c } _ { 1 } ^ { 0 } , \mathbf { c } _ { 2 } ^ { 0 } , \ldots , \mathbf { c } _ { m } ^ { 0 } \}$ , where $m$ is the number of clusters. Based on this partitioning, a weight $w _ { v } ^ { 0 }$ is assigned to each test node $v$ . For simplicity, assume that $w _ { v } ^ { 0 }$ is proportional to the distance of $v$ from the centroid of $v$'s assigned cluster. More details on the weight assignment are given later.
Next, we adapt the $k$ -means algorithm to incorporate the weight of each node so that the previous boundary nodes are assigned closer to some centroid in the new partition $C ^ { 1 } = \{ \mathbf { c } _ { 1 } ^ { 1 } , \mathbf { c } _ { 2 } ^ { 1 } , \ldots , \mathbf { c } _ { m } ^ { 1 } \}$ . Thus, the objective function of the classic $k$ -means is updated as follows.
$$
\begin{array} { r } { \arg \underset { C ^ { 1 } } { \operatorname* { m a x } } \displaystyle \sum _ { i = 1 } ^ { m } \sum _ { \boldsymbol { v } \in \mathbf { c } _ { \mathbf { i } } ^ { 1 } } w _ { \boldsymbol { v } } ^ { 0 } \cdot \mathsf { C O S I N E } ( \mathbf { x } _ { \boldsymbol { v } } ^ { a g g } , \mu _ { i } ^ { 1 } ) } \\ { \mu _ { i } ^ { 1 } = \frac { 1 } { | \mathbf { c } _ { \mathbf { i } } ^ { 1 } | } \displaystyle \sum _ { \boldsymbol { v } \in \mathbf { c } _ { \mathbf { i } } ^ { 1 } } w _ { \boldsymbol { v } } ^ { 0 } \cdot \mathbf { x } _ { \boldsymbol { v } } ^ { a g g } } \end{array}
$$
$\mathbf { x } _ { v } ^ { a g g }$ is the aggregated vector of node $v$ . Based on this weighted $k$ -means, we obtain the new partition $C ^ { 1 }$ , in which one of the newly computed centroids $\{ \mu _ { 1 } ^ { 1 } , \mu _ { 2 } ^ { 1 } , \ldots , \mu _ { m } ^ { 1 } \}$ is more likely to be closer to the previous boundary nodes of partition $C ^ { 0 }$ .
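One way to realize the weighted objective above is a Lloyd-style iteration in which assignments maximize cosine similarity to the centroids and each centroid is the weighted mean $\mu _ { i } = \frac { 1 } { | \mathbf { c } _ { i } | } \sum _ { v \in \mathbf { c } _ { i } } w _ { v } \cdot \mathbf { x } _ { v } ^ { a g g }$ . The following is a hedged sketch, not the paper's implementation; convergence checks and empty-cluster handling are omitted.

```python
import numpy as np

def weighted_kmeans(X, w, m, iters=20, seed=0):
    """Weighted spherical k-means sketch.
    X: (n, d) aggregated vectors; w: (n,) node weights from the previous
    partition; m: number of clusters. Returns (assignments, centroids)."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=m, replace=False)].astype(float)
    for _ in range(iters):
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        Mn = mu / np.linalg.norm(mu, axis=1, keepdims=True)
        assign = np.argmax(Xn @ Mn.T, axis=1)   # nearest centroid by cosine
        for i in range(m):
            members = assign == i
            if members.any():                    # empty-cluster handling omitted
                # weighted-mean centroid, matching the objective's update
                mu[i] = (w[members, None] * X[members]).mean(axis=0)
    return assign, mu

# two groups of directions; uniform weights for illustration
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
w = np.ones(len(X))
assign, mu = weighted_kmeans(X, w, m=2, iters=10, seed=1)
```

With non-uniform weights, boundary nodes pull the centroids toward themselves, which is exactly the effect supplementary partitioning relies on.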
Weighted Clustering. To compute the node weights discussed in the previous part, we propose a novel weight computation approach that incorporates the geometric properties of the corresponding aggregated vectors. For simplicity, assume that the vectors are unit vectors, since the cosine similarity computation already normalizes them. We introduce the idea through an example with 2-dimensional vectors in Figure 2. First, for the aggregated vector of each test node, we set an angle $\theta$ around it to identify similar vectors. As we can see from the first circle, constrained by $\theta$ , all vectors similar to $\mathbf { x } _ { v } ^ { a g g }$ fall within the purple area. We refer to this area as the Similar Field (SF). Similarly, for $\mathbf { x } _ { u } ^ { a g g }$ , the green area is its SF. These two vectors are similar to each other since each falls into the other's SF; their intersection is the blue area in the third circle. Notably, the size of the intersection area is determined by the angle $\phi$ . Meanwhile, the relative complement area (orange area in the third circle) of $\mathbf { x } _ { v } ^ { a g g }$ w.r.t. $\mathbf { x } _ { u } ^ { a g g }$ is the area where the vector most similar to $\mathbf { x } _ { v } ^ { a g g }$ might be found while lying outside the SF of $\mathbf { x } _ { u } ^ { a g g }$ .
Figure 3: Supplementary partitioning: Two partitions $C ^ { 1 }$ and $C ^ { 2 }$ , number of clusters in each partition $m = 4$ , green triangles are centroids of each cluster, the red node is the query node, the orange lines indicate the distance to the centroid and also reflect the weights, and the yellow star indicates the top-1 LCE of the query node.
Based on this observation, if we consider $\mu$ as a cluster centroid and $v$ as a node assigned to this cluster, the intersection area $\mathsf { I A } ( \mu , \mathbf { x } _ { v } ^ { a g g } )$ between $\mu$ and $\mathbf { x } _ { v } ^ { a g g }$ represents the area where the vector most similar to $\mathbf { x } _ { v } ^ { a g g }$ is likely to be found within $v$'s assigned cluster. If we assume that the threshold angle $\theta$ is the same across all vectors, we can normalize the intersection area $\mathsf { I A } ( \mu , \mathbf { x } _ { v } ^ { a g g } )$ by dividing it by the SF of $\mathbf { x } _ { v } ^ { a g g }$ . This normalized intersection value is proportional to the probability that the most similar vector of $\mathbf { x } _ { v } ^ { a g g }$ falls within $v$'s assigned cluster. We utilize this ratio to compute a node weight $w _ { v }$ for the subsequent supplementary partitioning.
$$
w _ { v } = 1 - \frac { { \sf I A } ( \mu , { \bf x } _ { v } ^ { a g g } ) } { { \sf S F } ( { \bf x } _ { v } ^ { a g g } ) }
$$
Intuitively, $w _ { v }$ captures the potential error when identifying the most similar vector of $\mathbf { x } _ { v } ^ { a g g }$ only within $v$'s assigned cluster. In practice, the value of SF can be pre-computed, since the radius $r$ and the threshold $\theta$ are constants. Meanwhile, the computation of $\mathsf { I A } ( \mu , \mathbf { x } _ { v } ^ { a g g } )$ can be optimized by normalizing the vectors to unit vectors. The above equation can be extended to $d$ -dimensional hyperspherical space, where $d$ is our embedding dimensionality. The formulation for computing the intersection in $d$ -dimensional hyperspherical space is detailed in [33, 49].
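In the 2-dimensional case of Figure 2, the SF of a unit vector is an arc of width $2\theta$ , and the intersection of two such arcs separated by angle $\phi$ has width $\max(0, 2\theta - \phi)$ ; under that simplification the weight formula reduces to a clipped ratio. The sketch below covers this 2-D special case only (the $d$ -dimensional hyperspherical formula of [33, 49] is not reproduced here):

```python
import numpy as np

def boundary_weight_2d(mu, x, theta):
    """2-D weight w = 1 - IA/SF for unit vectors.
    SF is an arc of width 2*theta around a vector; two such arcs whose
    vectors are separated by angle phi overlap on an arc of width
    max(0, 2*theta - phi)."""
    mu = mu / np.linalg.norm(mu)
    x = x / np.linalg.norm(x)
    phi = np.arccos(np.clip(mu @ x, -1.0, 1.0))  # angle between centroid and node
    ia = max(0.0, 2 * theta - phi)               # intersection arc width
    sf = 2 * theta                               # similar-field arc width
    return 1.0 - ia / sf

theta = np.pi / 3
# node aligned with its centroid: zero weight (no boundary risk)
w_center = boundary_weight_2d(np.array([1.0, 0.0]), np.array([1.0, 0.0]), theta)
# node far from its centroid: large weight (likely boundary node)
w_border = boundary_weight_2d(np.array([1.0, 0.0]), np.array([0.0, 1.0]), theta)
```

Nodes near a centroid receive weight close to 0, while boundary nodes receive weights close to 1, which is what the subsequent weighted clustering exploits.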
Index Construction. After the computation of partitions and clusters, assuming that we obtain $p$ partitions in total, denoted as $P = \{ C ^ { 1 } , C ^ { 2 } , \ldots , C ^ { p } \}$ , we next aim to identify the optimal partition $C ^ { * } [ v ]$ and the optimal cluster $c ^ { * } [ v ]$ for each test node $v$ . $C ^ { * } [ v ]$ is identified by the weight of node $v$ in each partition from Equation 10, i.e., $\begin{array} { r } { C ^ { * } [ v ] = \arg \operatorname* { m i n } _ { i \in \{ 1 , 2 , \dots , p \} } w _ { v } ^ { i } } \end{array}$ . As for $c ^ { * } [ v ]$ , it is identified
# Algorithm 2 LocalCE-I
Table 1: Statistics of datasets
by the assigned cluster of $\boldsymbol { v }$ in the optimal partition $C ^ { * } [ v ]$ , according to the weighted $k$ -means. Then we construct the index based on these two variables, denoted as $I n d e x [ v ] = ( C ^ { * } [ v ] , c ^ { * } [ v ] )$ .
Considering the example shown in Figure 3, we can observe that the optimal partition $C ^ { * } [ v ]$ of the red node $\boldsymbol { v }$ is $C ^ { 2 }$ , since the weight of $\boldsymbol { v }$ in $C ^ { 2 }$ (determined by the orange line) is smaller than that in $C ^ { 1 }$ . Then in $C ^ { 2 }$ , the optimal cluster is $c _ { 1 } ^ { 2 }$ since the distance between $\boldsymbol { v }$ and $\mu _ { 1 } ^ { 2 }$ is the minimum among the four centroids.
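Given per-partition weights and assignments, building the index entry of each test node is a pair of lookups; the sketch below uses hypothetical arrays `W` (one weight per node per partition) and `assign` (the weighted k-means assignment of each node in each partition), which are not the paper's data structures:

```python
import numpy as np

def build_index(W, assign):
    """W: (p, n) weights of each test node in each of the p partitions.
    assign: (p, n) cluster assignment of each node in each partition.
    Returns index[v] = (optimal partition C*[v], optimal cluster c*[v])."""
    best_partition = np.argmin(W, axis=0)  # C*[v] = argmin_i w_v^i
    n = W.shape[1]
    # c*[v] is v's assigned cluster inside its optimal partition
    return [(int(best_partition[v]), int(assign[best_partition[v], v]))
            for v in range(n)]

W = np.array([[0.4, 0.1],    # weights in partition 0
              [0.2, 0.3]])   # weights in partition 1
assign = np.array([[0, 1],
                   [1, 0]])
index = build_index(W, assign)
```

Node 0 has its smallest weight in partition 1 and node 1 in partition 0, so each index entry records that partition and the node's cluster within it.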
Querying with the Index. Querying based on the index is straightforward. As detailed in Algorithm 2 (LocalCE-I), given a graph $G$ , a GNN $M$ , a test set $V _ { t e s t }$ , a query node $v$ , and a set of partitions $P$ , LocalCE-I first identifies the optimal partition $C ^ { * } [ v ]$ and the optimal cluster $c ^ { * } [ v ]$ for the query node $v$ (lines 2-3). Inside the optimal cluster $c ^ { * } [ v ]$ , LocalCE-I applies the same approach as LocalCE-B (lines 4-7). To extend the querying process to top-$k$ results, we apply the same approach as LocalCE-B within the optimal cluster $c ^ { * } [ v ]$ .
Time and Space Complexity. The time cost of LocalCE-I consists of four parts, with the first three being similar to those of LocalCE-B: (1) the inference cost of the GNN, $O ( L d ( | E | + | V _ { c } | | \mathbf { Y } | ) )$ , where $| V _ { c } | = \frac { 1 } { m } | V _ { t e s t } |$ is the average number of test nodes per cluster; (2) the vector aggregation cost for the KS score, $O ( L d | V _ { c } | | N ( \cdot ) | )$ , where $| N ( \cdot ) |$ is the maximum number of 1-hop neighbors of a test node; (3) the cost of finding the top-$k$ results, $O ( k | V _ { c } | + p )$ ; and (4) the offline index construction cost, $O ( m p | V _ { t e s t } | ( d + \mathsf { H S } ^ { d } ) )$ , where $\mathsf { H S } ^ { d }$ is the time for computing the intersection in $d$ -dimensional hyperspherical space [33]. Therefore, the online time complexity of LocalCE-I is roughly reduced by a factor of the number of clusters.
For space complexity, the additional index overhead is $O ( p | V _ { t e s t } | )$ .
# 5.2 Global CE Identification
The identification of global CEs is a natural extension of the local algorithms. As shown in Algorithm 3, we select one local algorithm (LocalCE-B or LocalCE-I) to retrieve the top-1 LCE for each test node, and then select the best one as the top-1 GCE. To extend to top-$k$ GCEs, we maintain a bucket $B$ of size $k$ and incrementally add the top-1 LCEs among all test nodes.
# Algorithm 3 GlobalCE-B&I
Input: Graph $G$ , GNN $M$ , test set $V _ { t e s t }$ , set of partitions $P$ .
Output: Top-1 global counterfactual evidence $\mathrm { G C E } _ { o p t } ( V _ { t e s t } )$ .
1: Select LocalCE-B or LocalCE-I for local CE identification.
2: Identify the top-1 LCE for each test node using the selected local algorithm.
3: Identify the top-1 GCE among the top-1 LCEs.
4: return $\mathrm { G C E } _ { o p t } ( V _ { t e s t } )$ .
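The global step is a one-pass reduction over the per-node top-1 LCEs; in the sketch below, `local_top1` stands for either LocalCE-B or LocalCE-I and is assumed to return an `(evidence_node, ks_score)` pair:

```python
def global_ce(test_nodes, local_top1):
    """Pick the best top-1 LCE over all test nodes as the top-1 GCE."""
    best_pair, best_score = None, float("-inf")
    for v in test_nodes:
        u, score = local_top1(v)          # top-1 LCE of v and its KS score
        if score > best_score:
            best_pair, best_score = (v, u), score
    return best_pair, best_score

# toy scores: hypothetical KS scores of each node's best counterfactual partner
scores = {1: ("a", 0.6), 2: ("b", 0.9), 3: ("c", 0.4)}
pair, score = global_ce([1, 2, 3], scores.__getitem__)
```

The node pair with the globally highest KS score is returned, matching step 3 of Algorithm 3.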
# 6 Experimental Results
We conduct experiments to demonstrate the effectiveness, efficiency, scalability, and generalizability of our solutions for finding both local and global counterfactual evidences. Our algorithms are implemented in Python 3.10.14 using the PyTorch-Geometric framework. All experiments are conducted on a single core of a Linux system equipped with an AMD EPYC 7302P CPU and 256 GB RAM. Our code and data are available at [1].
# 6.1 Experimental Setup
Datasets. We utilize datasets from various real-world domains to showcase the performance of our methods. The statistics of the datasets are shown in Table 1.
The German [3] dataset categorizes individuals as good or bad credit risks based on their attributes. It includes features such as gender, loan amount, and account-related details for 1,000 clients. The Bail [3] dataset contains bail outcome records from various U.S. state courts between 1990 and 2009. It includes past criminal records, demographic details, and other information on 18,876 defendants released on bail. The Cora [63] dataset is a citation network where nodes represent research papers and edges indicate citation links. It contains 2,708 papers categorized into seven topics, with each paper described by a 1,433-dimensional feature vector representing the presence of various keywords in the paper text. PubMed [63] is a citation network of medical research papers, where nodes represent articles and edges denote citations. It consists of 19,717 papers from the PubMed database, classified into three categories, with each paper represented by a 500-dimensional feature vector derived from TF-IDF word statistics. For FacebookPagePage [48], the nodes represent verified pages on Facebook and the edges are mutual likes. The node features are extracted from the site descriptions, and the task is multi-class classification based on the site category. Moreover, we use one large-scale dataset, AmazonProducts [66], to evaluate scalability w.r.t. the number of test nodes. In the AmazonProducts dataset, the nodes represent products on the Amazon website and the edges denote co-purchases by the same customer. The node features, originally text reviews from buyers, are pre-processed with SVD. The task is to classify product categories.
Graph Neural Networks. We employ the following GNNs. Graph convolutional network (GCN) is a classic message-passing GNN [29]. Graph attention network (GAT) dynamically weighs neighbors via attention [55]. Graph isomorphism network (GIN) matches the power of the 1-WL test [62]. Message passing neural network (MPNN) incorporates learnable edge functions for richer representations [21]. GraphSAGE (SAGE) extends sampling and aggregating neighbors to handle large graphs efficiently [25].
Figure 4: Average similarity vs. $k$: (a) PubMed and (b) FacebookPagePage for local CE identification (LocalCE-B, LocalCE-I, LocalCE-I w/o WC, LocalCE-I w/o SP); (c) PubMed and (d) FacebookPagePage for global CE identification; (e) PubMed and (f) FacebookPagePage across different GNNs (GCN, GAT, GIN, MPNN, SAGE).

Figure 5: Online query time: (a) vs. $k$ on FacebookPagePage; (b) vs. the number of test nodes on AmazonProducts.
Competitors. To the best of our knowledge, there are no existing methods that can be adapted to the full setting of our problem. Therefore, we compare our index-based solution (Algorithm 2) with our baseline approach (Algorithm 1). Furthermore, we compare two more variants with missing indexing components, denoted as LocalCE-I w/o WC and LocalCE-I w/o SP. Specifically, LocalCE-I w/o WC is the index algorithm without weighted clustering, while LocalCE-I w/o SP is the index algorithm without supplementary partitioning. For the index construction, we set the number of partitions $p = 50$ and the number of clusters $m = 10$ per partition. The angle $\theta$ for computing the SF is set to $\frac { \pi } { 3 }$ .
Evaluation Metrics. To evaluate the effectiveness of finding the counterfactual evidences, we use the average similarity (AS).
$$
A S = \frac { 1 } { | V _ { t e s t } | } \sum _ { v \in V _ { t e s t } } \frac { 1 } { k } \sum _ { u \in \mathsf { L C E } _ { o p t } ^ { k } ( v ) } \mathsf { K S } ( v , u )
$$
$\mathsf { L C E } _ { o p t } ^ { k } ( v )$ denotes the top-$k$ LCEs of node $v$ . Notice that an effective CE-finding method results in a higher AS score.
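The AS metric can be computed directly from the per-node top-$k$ lists; in the sketch below, `lce_k` is a hypothetical mapping from each test node to its `(evidence, ks_score)` pairs, sorted by descending KS score:

```python
def average_similarity(test_nodes, lce_k, k):
    """AS = (1/|V_test|) * sum_v (1/k) * sum_{u in LCE_k(v)} KS(v, u)."""
    total = 0.0
    for v in test_nodes:
        # mean KS score over v's top-k local counterfactual evidences
        total += sum(ks for _, ks in lce_k[v][:k]) / k
    return total / len(test_nodes)

lce_k = {1: [("a", 0.8), ("b", 0.6)],
         2: [("c", 1.0), ("d", 0.4)]}
as_score = average_similarity([1, 2], lce_k, k=2)
```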
Additionally, we report the running times of our baseline, index creation, and index-based querying algorithms.
# 6.2 Effectiveness Results
Figures 4(a) and 4(b) show that, for local top-$k$ counterfactual evidence identification, the average similarity drops as $k$ increases. This is because the less promising LCEs have lower KS scores than the highly similar ones. Both the baseline and the index-based algorithm are capable of identifying high-quality LCEs, and the effectiveness of LocalCE-B and LocalCE-I is relatively similar, indicating the advantage of constructing our novel index based on supplementary partitioning and weighted clustering. Moreover, the two variants that each lack one indexing component, LocalCE-I w/o WC and LocalCE-I w/o SP, perform worse than our full index-based algorithm LocalCE-I. Neither indexing component alone is sufficient to identify high-quality CEs compared to the baseline algorithm; it is the combination of both components that yields results comparable to the baseline search approach. First, without supplementary partitioning, the weighted clustering approach tends to favor dense vector regions, neglecting boundary nodes and reducing the overall similarity among all nodes. Second, without weighted clustering, supplementary partitioning focuses solely on enhancing local coherence, failing to optimize cluster assignments for the majority of nodes. Finally, by combining both components, we can assign each node to an appropriate cluster while generating additional clusters for boundary cases. This highlights the effectiveness of our index-based algorithm and underscores the importance of the proposed indexing techniques.
Analogously in Figures 4(c) and 4(d), our global algorithms are capable of identifying the high-quality global top- $k$ counterfactual evidences. Notice that the average similarity in this case is higher than that from local algorithms, since the identified CEs are pairwise optimal GCEs. Additionally, in the global CE setting, the two variants without indexing components still perform worse than the index-based algorithm. This demonstrates that the indexing techniques effectively identify high-quality GCEs.
# 6.3 Generalization to Different GNNs
We retrieve global counterfactual evidences (GCEs) with various state-of-the-art GNNs to explore their predictive behavior and to demonstrate how well our GCE-finding algorithms generalize across different GNNs. As shown in Figures 4(e) and 4(f), the different models exhibit similar trends in identifying CEs, with slight differences in average similarity. These results indicate that the counterfactual evidences returned by our methods are model-agnostic across message-passing GNNs.
# 6.4 Efficiency and Scalability Results
Figure 5(a) presents the online query times of LocalCE-B, LocalCE-I, LocalCE-I w/o WC, and LocalCE-I w/o SP; they are generally insensitive to the parameter $k$ (i.e., the number of top CEs returned). These results show the advantage of the proposed algorithms for fast identification of counterfactual evidences, providing swift results for downstream applications. Meanwhile, for all three index-based algorithms, the online query time is significantly lower than that of LocalCE-B. The proposed index construction approach efficiently prunes dissimilar nodes while maintaining result quality. The index construction overhead is 159 sec and 358 sec on the PubMed and FacebookPagePage datasets, respectively. As for scalability, we report the online query time on the million-scale dataset AmazonProducts. We observe from Figure 5(b) that the online query time of LocalCE-B increases linearly with the number of test nodes. Meanwhile, the online query time of all three index-based algorithms also increases with the number of test nodes, but is always lower than that of LocalCE-B, since the desired counterfactual evidences are identified from the cluster determined by the index. The index construction requires 1503 sec on AmazonProducts.
Table 2: Node features and their discrimination scores (DS) considering GCN [29] and FairGNN [10]: Bail dataset.
Figure 6: Accuracy within the top- $k$ Global CEs: Accuracy within the GCEs is significantly lower for smaller $k$ , mainly consisting of borderline nodes, which are difficult for the GCN to classify correctly.
# 7 Applications
Various downstream tasks can benefit from utilizing our counterfactual evidences. We demonstrate the effectiveness and insights they bring to these tasks by showcasing the following applications.
# 7.1 Revealing Unfairness of GNNs
Ensuring fairness in GNNs promotes ethical decision-making by preventing biases related to sensitive node features such as gender and race, particularly in scenarios like credit defaulter identification [64] and court trial decisions [27]. We will use CEs to detect
Figure 7: Fine-tuning with GCEs on PubMed: (a) accuracy vs. the number of top-$k$ GCEs used for fine-tuning; (b) accuracy vs. the number of top-$k$ remaining GCEs used for testing, with and without fine-tuning.
whether a GNN model is fair, which is crucial for ensuring that the model’s decisions do not perpetuate biases and inequalities.
To measure node feature importance, we introduce the notion of a discrimination score for a node feature value $f ( v ) = f _ { i }$ at a test node $v$ , denoted as $D S \left( f ( v ) = f _ { i } \right)$ . In particular, we consider the top-$k$ local CEs of the test node $v$ , denoted by $\mathsf { L C E } _ { k } ( v )$ , and compute:
$$
D S \left( f ( v ) = f _ { i } \right) = { \frac { 1 } { k } } \sum _ { u \in \mathsf { L C E } _ { k } ( v ) } \mathbb { I } \left( f ( v ) = f _ { i } \wedge f ( u ) \neq f _ { i } \right)
$$
$\mathbb { I }$ is an indicator function, which ensures that if a feature value $f ( v ) = f _ { i }$ occurs at node $v$ and the same feature value does not occur frequently in $v$'s top-$k$ local CEs, then $f ( v ) = f _ { i }$ is crucial for $v$'s predicted class label. We set $k = 10$ in our experiments.
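The DS computation is a simple counting loop over a node's top-$k$ local CEs; the sketch below uses a hypothetical dictionary-based feature layout (`features` and `lce_k` are illustrative names, not from the paper's code):

```python
def discrimination_score(v, feature, features, lce_k):
    """DS(f(v) = f_i) = (1/k) * sum_{u in LCE_k(v)} 1[f(v)=f_i and f(u)!=f_i].

    features: node -> {feature name: feature value}.
    lce_k: node -> list of its top-k local counterfactual evidences.
    """
    f_i = features[v][feature]
    evidences = lce_k[v]
    # count evidences whose value for this feature differs from v's value
    hits = sum(1 for u in evidences if features[u][feature] != f_i)
    return hits / len(evidences)

features = {"v": {"race": "WHITE"}, "u1": {"race": "WHITE"},
            "u2": {"race": "BLACK"}, "u3": {"race": "BLACK"}}
lce_k = {"v": ["u1", "u2", "u3"]}
ds = discrimination_score("v", "race", features, lce_k)
```

A high DS means the feature value rarely recurs among the node's most similar counterfactual partners, i.e., the prediction leans heavily on that value.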
We employ a state-of-the-art fair GNN model, FairGNN [10], as our classifier. Meanwhile, we employ GCN as the classic GNN model for comparison. We apply these models to the Bail dataset. We observe from Table 2 that the GCN's prediction for bail decisions heavily depends on the sensitive feature "race=WHITE", with a particularly high $D S$ when using our CE-based method. In contrast, FairGNN reduces this dependency on the sensitive feature, with "race=WHITE" dropping to the sixth position in importance, which means that FairGNN achieves fairer predictions by mitigating racial bias in its decision-making process. These results demonstrate the effectiveness of CEs in detecting fairness issues in GNNs.
# 7.2 Verifying Prediction Errors
GNNs are prone to producing prediction errors, particularly when dealing with borderline cases—nodes that lie near the decision boundary of GNNs [68]. Verifying these cases can reveal underlying issues and guide improvements for GNNs, offering opportunities for enhancing model robustness and developing more effective downstream applications. We will explore how to use CEs to verify prediction errors in GNNs.
We employ the classic GCN as the classifier and use two widely used graph node classification datasets: Cora [40] and PubMed [50]. We evaluate the effectiveness of CEs based on the accuracy of the classifier, where the accuracy is the ratio of the number of correctly classified nodes to the total number of nodes.
In Figure 6, we show the accuracy across all test nodes (red dashed lines) and within the top-$k$ global CEs (GCEs, blue lines). The accuracy within the GCEs is significantly lower for smaller values of $k$ , which indicates that at lower $k$ values, the GCEs are dominated by borderline nodes that are difficult for the GCN to classify correctly. As the value of $k$ increases, the borderline nodes become a smaller fraction of the overall GCEs considered, reducing their negative impact on classification accuracy, as reflected by the blue line approaching the red dashed line.
# 7.3 Fine-tuning with Counterfactual Evidences
In this section, we utilize a limited number of CEs to improve the prediction accuracy of GNNs. As in the previous application, we choose the classic GCN as the classifier and use the PubMed dataset. We use a set of top-$k$ GCEs as a validation set to fine-tune the GCN model and compare the classification accuracy of the model before and after fine-tuning on the remaining test nodes.
Figure 7(a) shows the overall performance before and after finetuning. We observe that the performance improvement is significant at the beginning since borderline nodes are predominant in the GCEs. With the increasing number of GCEs, the performance gap narrows, since the models are not significantly different in the remaining easier test nodes (i.e., non-borderline nodes).
Next, we fix the top-1200 GCEs as a validation set to fine-tune the GCN and report the classification accuracy on varying numbers of the remaining top-$k$ GCEs. Figure 7(b) demonstrates that the model achieves substantial performance improvement after fine-tuning using the top-1200 GCEs as a validation set. This is because the borderline nodes identified by the GCEs are incorporated into the model. Additionally, after fine-tuning, the classification accuracy remains consistently high on both borderline and relatively easier cases, which is evident as we test with varying numbers of the remaining top-$k$ GCEs. Meanwhile, before fine-tuning, the model was more accurate only on the relatively easier instances.

Abstract: Counterfactual learning is emerging as an important paradigm, rooted in causality, which promises to alleviate common issues of graph neural networks (GNNs), such as fairness and interpretability. However, as in many real-world application domains where conducting randomized controlled trials is impractical, one has to rely on available observational (factual) data to detect counterfactuals. In this paper, we introduce and tackle the problem of searching for counterfactual evidences for the GNN-based node classification task. A counterfactual evidence is a pair of nodes such that, although they exhibit great similarity both in their features and in their neighborhood subgraph structures, they are classified differently by the GNN. We develop effective and efficient search algorithms and a novel indexing solution that leverages both node features and structural information to identify counterfactual evidences, and generalizes beyond any specific GNN. Through various downstream applications, we demonstrate the potential of counterfactual evidences to enhance fairness and accuracy of GNNs.

Categories: cs.LG, cs.DB
# 1 Introduction
As robotic technology advances, robots are increasingly capable of providing a variety of services in different contexts. A key challenge in robotics is understanding and responding to human requests that are not always clear-cut, especially in life-support situations [34]. This requires interpreting both the visual context of the environment and the user’s verbal communication, essentially making it a multimodal, multi-class classification problem. Recent advancements have seen the development of large multimodal language models [2, 8, 13, 17, 24], that can process and respond to multiple channels of input data.
Our study utilizes the latest multimodal model, LLaVA [17], as a foundation for predicting responsive actions to human requests. While LLaVA has shown promise in general multimodal interactions, it requires additional data to tailor its responses to specific actions in a human-robot interaction context. Gathering such interaction data is often time-intensive and not easily scalable.
Inspired by the success of the large generative model in the language [42] and vision [38, 41, 40] domains, this paper introduces a framework for automatically enhancing scenario data, specifically in contexts where a robot needs to perform life-support actions in response to human requests. We harness the power of large language models [21, 36, 1, 23], to create plausible dialogue scenarios [14, 11, 4] and describe environmental settings. These narratives are then visualized using advanced diffusion models [16, 25, 31], creating images that represent the robot’s perspective during each dialogue.
By using this augmented scenario data, we can train an agent to choose appropriate actions based on everyday user interactions. This training is conducted in a controlled environment, supplemented with a small, real-world dataset collected from human-robot interactions. Our experiments demonstrate that this approach not only generates realistic scenarios but also effectively trains the multimodal language model to respond with appropriate life-support actions based on both verbal requests and environmental cues. The success of this framework highlights its potential to make robotic scenario data more scalable and relevant.
# 2 Related Work
In this paper, we explore the augmentation of scenario data using large generative models. We provide an overview of the relevant background in large language models (LLMs) and stable diffusion models.
# 2.1 Large Language Models (LLMs)
Language models have been widely studied for research in language understanding and generation, evolving from statistical to neural network-based models [42]. In recent years, the emergence of pre-trained language models (PLMs) [7, 26, 18, 12] marked a significant advancement. These models, based on Transformer architecture and trained on vast text corpora, have shown remarkable proficiency across various natural language processing tasks. A key finding in this domain is that increasing model size enhances performance. As a result, the term “large language models” (LLMs) has been adopted to describe PLMs of substantial scale [9]. A notable example is ChatGPT [22], which has set new benchmarks in NLP tasks and demonstrates advanced linguistic capabilities in human interactions. The ongoing development and diversification of LLMs across various parameter sizes continue to be a focal point in both academic and industrial research [5, 35, 36, 6, 37].
# 2.2 Large Diffusion Models
Text-to-image generation has been a significant challenge in the field of computer vision [40]. Early attempts, such as AlignDRAW [19], produced images from text but lacked realism. The introduction of Text-conditional GANs [29], the first end-to-end architecture to take characters as input and produce pixels as output, marked a shift towards more sophisticated models capable of generating images from text descriptions. However, these GAN-based methods were limited to smaller datasets. The advent of large-scale data utilization in autoregressive models, exemplified by DALL-E [28] and Parti [39], brought improvements but at the cost of high computational demands and sequential error accumulation.
Recently, diffusion models have emerged as the new benchmark in text-to-image generation. These models can be broadly categorized based on their operational domain: pixel space or latent space. Pixel-level approaches, like GLIDE [20] and Imagen [32], generate images directly from high-dimensional data. On the other hand, latent space methods, such as stable diffusion [31] and DALL-E 2 [27], involve compressing images into a lower-dimensional space before applying the diffusion model. This innovation in model design has significantly enhanced the quality and efficiency of text-to-image generation.
# 3 Proposed Augmentation Framework
In our framework, we approach the challenge of robotic action determination as a multi-class classification problem. The task involves interpreting an ambiguous request $\mathbf{x}$ from a user, coupled with an image depicting the robot’s view of the environment. The objective is to accurately predict a suitable action $\mathbf{y} \in \mathbf{Y}$, with $\mathbf{Y}$ representing the set of all actions available to the robot, to assist the human user effectively.

Fig. 1: Overview of the augmentation pipeline. An LLM generates dialogues (e.g., Human: "I can't stand the smell here." Robot: "I will put the garbage in the trash can.") and environment descriptions (e.g., "There are rotten food scraps on the countertop"), conditioned on places (living room, kitchen) or agent actions (bring the ketchup, put away the tissue box), and a diffusion model renders the corresponding robot-view images.
The primary challenge of training a model to tackle this task is the time-intensive and non-scalable nature of collecting authentic interaction data between humans and robots. To address this, we have developed a framework utilizing large generative models to enrich the dataset with various potential life-support scenarios, encompassing both dialogues and environmental images. Figure 1 illustrates our augmentation pipeline. Our framework comprises two distinct pathways, each tailored to generate robotic scenarios for a specific purpose.
• Place-based augmentation focuses on creating dialogues pertinent to a specific location, such as a living room, kitchen, or bedroom, along with a detailed description of the respective environment.
• Action-based augmentation focuses on generating dialogues aligned with potential robot actions, like fetching a banana, clearing garbage, or organizing glasses, accompanied by a depiction of the setting where these actions would occur.
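Both pathways follow the same generate-then-render loop. Below is a minimal Python sketch of that loop, with stub `generate_dialogues` and `render_image` functions standing in for the actual gpt-3.5 and diffusion-model calls (all function and field names here are illustrative, not taken from the paper's code):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    dialogue: str      # one human utterance plus one robot action
    description: str   # textual description of the surroundings
    image: bytes       # rendered robot-view image

def generate_dialogues(condition: str, n: int) -> list[tuple[str, str]]:
    """Stand-in for the LLM call: returns (dialogue, environment description) pairs."""
    return [(f"[dialogue {i} for {condition}]", f"[scene {i} near {condition}]")
            for i in range(n)]

def render_image(description: str) -> bytes:
    """Stand-in for the diffusion model: renders the robot's first-person view."""
    return description.encode()

def augment(conditions: list[str], per_condition: int) -> list[Scenario]:
    data = []
    for cond in conditions:                      # a place OR a predefined robot action
        for dlg, desc in generate_dialogues(cond, per_condition):
            data.append(Scenario(dlg, desc, render_image(desc)))
    return data

# Place-based: 10 locations x 10 dialogues; action-based: 43 actions x 10 dialogues.
place_data = augment(["kitchen", "living room"], per_condition=10)
action_data = augment(["I will clean up the table"], per_condition=10)
```

The only difference between the two routes is what `conditions` contains: location names for place-based augmentation, entries from the robot's action set for action-based augmentation.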
# 3.1 Place-based Augmentation
In the initial phase of our augmentation pipeline, we employ gpt-3.5, a robust large language model, to create various dialogues. These dialogues simulate scenarios where a human presents an ambiguous request in everyday settings, and a robot must respond with an appropriate service action. The process begins by selecting a commonplace setting, such as a bedroom, bathroom, or dining room—areas where robots are likely to offer routine assistance. Next, we prompt gpt-3.5 to generate potential conversations that could occur in these settings, along with descriptions of the surrounding environment. Following this, we use the stable-diffusion-XL model [25] to transform these textual descriptions into visual representations of the respective locations.
Fig. 2: Example of two augmentation methods.
When crafting prompts for gpt-3.5, we do not set constraints on the generated user requests or robot actions. This approach allows the language model to conjure a wide array of scenarios, helping the model to learn and adapt to a diverse range of potential situations. For the image generation via the diffusion model, we emphasize the first-person perspective in the prompt, mirroring what the robot would observe in these environments.
We have identified ten everyday locations, each serving as the basis for generating ten distinct dialogues through the large language model. An illustrative example of this place-based augmentation process is depicted in the left part of Figure 2. The following is the prompt template used in our pipeline.
Give me ten conversation examples between two people in
a [location]
Person A made an ambiguous request indirectly without asking
a question to Person B
And Person B responded with a reflected action to A
Each conversation should be one utterance
And describe some related object in the background
# 3.2 Action-based Augmentation
While place-based augmentation focuses on equipping robots with the versatility to navigate various locations, action-based augmentation concentrates on creating scenarios tailored to specific, predefined robot actions [15]. In this second route of our framework, we utilize the same large language model as discussed in Section 3.1 for generating dialogues.
The key difference here lies in the nature of the input constraint. Rather than selecting a location, we choose an action from a robot’s predefined action set, such as “I will clean up the table”. The gpt-3.5 model is then prompted to formulate potential dialogues where this action is the appropriate response, along with descriptions of the relevant surroundings. This approach allows the model to concentrate on learning and responding to specific, realistic scenarios tied to particular actions.
To generate images that resemble real-world settings, we employ the blip diffusion model [16], known for its ability to create images with a consistent theme or subject. When generating an image from a text description, we incorporate a reference image from our real-world data collection, specifying “room” as the constant subject. This method ensures the generated images closely align with the kind of environments a robot is likely to encounter.
Our framework includes 43 distinct actions, each serving as a basis to prompt the language model to produce ten unique dialogues. An example of this action-based augmentation is showcased in the right part of Figure 2. Below is a prompt template used with the large language model for this purpose.
Here is a reflected action from B.
B: [reflected_action]
A is another person talking to B in a room
What ambiguous request may A talk to B indirectly without asking a question causing B to respond above reflected action. And describe some related object in the background according to utterance between A and B
After obtaining the scenario data derived from both the place-based and action-based augmentation routes, we construct our comprehensive augmented dataset. This enriched dataset is then utilized to refine the performance of our base multimodal model, specifically designed for predicting robotic action responses. The model, adept at processing both visual and linguistic inputs, is trained to recognize and understand the scenarios presented in our augmented dataset. Upon fine-tuning this base model, we proceed to assess its proficiency in zero-shot accuracy using real-world data. This evaluation helps us measure the model’s ability to accurately predict robot actions in previously unseen situations, indicating the effectiveness of our augmentation approach.
# 4 Experiments
To assess the impact of our augmentation data, we conducted experiments using the Do-I-Demand dataset [33], a collection of real interactive records between humans and robots. This dataset comprises 400 samples and serves as a benchmark for evaluating our method. We apply two base models with differing parameter sizes to test the efficacy of our proposed augmentation approach.
# 4.1 Experimental Setup
The evaluation dataset features two primary text elements: the human’s ambiguous request and a description of the environment, inferred from an image. We develop two input settings based on these elements:
• Utterance: Here, only the human’s request is used as input, with the output being one of the 43 predefined actions.
• Utter + Description: This setting combines the human’s request with the environmental description as input, aiming to predict one of the 43 predefined actions as output.
For our experiments, we select LLaVA, a large multimodal model renowned for its multimodal chat capabilities, as our base model. LLaVA integrates a vision encoder with a large language model (LLM) to facilitate general visual and linguistic understanding. We chose its two versions, 13B and 7B parameters, for subsequent fine-tuning.
We fine-tune the base models using our augmentation dataset for five epochs, keeping the hyperparameters largely consistent with those used in the original LLaVA model. The training input comprised the image and the ambiguous human request, with the goal of maximizing the likelihood of the model predicting the correct response action. This process was carried out on 4 A6000 GPUs, each with 40GB of memory, utilizing the LoRA technique [10] for efficient training.
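LoRA keeps the pretrained weights frozen and trains only a low-rank update to each adapted weight matrix. The following is a schematic numpy illustration of that idea with toy sizes, not the actual LLaVA training code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16                  # hidden size, LoRA rank, scaling factor

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor (down-projection)
B = np.zeros((d, r))                    # trainable factor, initialized to zero

def forward(x):
    # Frozen path plus scaled low-rank update: only A and B receive gradients.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
y0 = forward(x)                         # identical to the frozen model at init
```

Because `B` starts at zero, the adapted model initially matches the frozen one exactly, and only `2*d*r` parameters per matrix are trained instead of `d*d`, which is what makes the fine-tuning memory-efficient.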
In evaluating the fine-tuned models, we focused on measuring zero-shot accuracy on the evaluation dataset. To match LLaVA’s responses with specific actions, we employ a sentence encoder to process both the model’s response and each potential action. We calculated the cosine similarity between each pair, selecting the action with the highest similarity as the final prediction. For this purpose, we experiment with two encoders: the Sentence-BERT model (SBERT) [30] and the GPT-3 model [3], both of which have shown excellent performance in various NLP benchmarks.
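This matching step is a nearest-neighbour search under cosine similarity. A minimal sketch follows, with a toy bag-of-characters `embed` standing in for SBERT or the GPT-3 encoder (the embedding function is illustrative only):

```python
import math

def embed(text: str) -> list[float]:
    """Stand-in for a sentence encoder: a toy bag-of-characters embedding."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_action(response: str, actions: list[str]) -> str:
    """Pick the predefined action most similar to the model's free-form response."""
    r = embed(response)
    return max(actions, key=lambda a: cosine(r, embed(a)))

actions = ["I will bring the ketchup", "I will put away the tissue box"]
pred = match_action("I will fetch the ketchup bottle", actions)
```

With a real sentence encoder, `embed` would return the encoder's dense vector, but the argmax-over-cosine-similarity logic is the same.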
Table 1: Results on the DO-I-DEMAND (%). † indicates the significant improvement achieved by the augmented data. The best score for each base predictor is marked in bold.
# 4.2 Results
The effectiveness of our augmentation methods on the Do-I-Demand dataset is summarized in Table 1. We evaluate the accuracy of each method by comparing the exact match rates across all labels. The baseline results, achieved using the original multi-modal model LLaVA in two distinct sizes, are presented in the first row. The data clearly indicates that both our place-based and action-based augmentation methods significantly enhance the performance of the base models. However, it is noteworthy that action-based augmentation generally outperforms place-based augmentation. This is likely because action-based augmentation is specifically tailored to align with the action categories in the evaluation dataset, whereas place-based augmentation aims to broadly improve the model’s versatility in various scenarios.
Interestingly, we observe the highest performance when combining both place-based and action-based augmentations, except in one instance: the LLaVA-7B model with a GPT-3 encoder under the utterance-only setting. The top accuracy is recorded at 36.3% for LLaVA-13B with the SBERT encoder in the utterance setting, and 48.8% for LLaVA-7B with SBERT in the utterance plus description setting. These results reinforce the value of environmental descriptions in enhancing action prediction accuracy.
# 4.3 Effectiveness with Diverse Prompts
For place-based augmentation, our original prompt is “make an ambiguous request indirectly without asking a question”. To explore variations, we test two alternative prompts: “make an ambiguous request without asking a question” and simply “make an ambiguous request.” After merging data generated from all three prompts, we observe a general decline in accuracy, as shown in Table 2. This suggests that our original prompt is sufficiently detailed, leading to the generation of high-quality dialogues for model training.
Table 2: Results of the original place-based and diverse prompts on the DO-I-DEMAND (%).
Table 3: Results of the framework with and without blip diffusion on the DO-I-DEMAND (%).
# 4.4 Ablation of Blip Diffusion
In our action-based augmentation, the use of the blip diffusion model for generating environmental images is crucial. We experiment by substituting blip diffusion with stable-diffusion-XL, as used in place-based augmentation. Table 3 reveals a consistent decrease in accuracy across all scenarios, including a notable 6% drop in the LLaVA-13B utterance setting. This highlights the significant role of the blip diffusion model in our augmentation strategy.
# 4.5 Effectiveness on Low-Performing Labels
Our analysis reveals that a significant portion, over one-third, of the actions predicted by the original multi-modal model achieve zero accuracy. To examine how our proposed augmentation methods affect these low-performing labels, we group the labels into four buckets based on their accuracy levels. Each bucket represents a quartile of performance, with bucket 1 consisting of the ten labels with the lowest accuracy, all at zero initially.
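The grouping is a simple sort-and-split over per-label accuracies. A sketch with toy numbers (the accuracy values below are illustrative, not taken from the paper's results):

```python
def bucketize(label_acc: dict[str, float], n_buckets: int = 4) -> list[list[str]]:
    """Split labels into quartile buckets, lowest accuracy first."""
    ranked = sorted(label_acc, key=label_acc.get)   # labels sorted by accuracy
    size = -(-len(ranked) // n_buckets)             # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

# Toy per-action accuracies: several labels at zero, as observed for the base model.
acc = {f"action_{i}": a
       for i, a in enumerate([0.0, 0.0, 0.0, 0.1, 0.2, 0.4, 0.5, 0.9])}
buckets = bucketize(acc)   # buckets[0] holds the worst-performing labels
```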
Fig. 3: Performance of different buckets in the utterance setting, comparing LLaVA, +Place Aug, +Action Aug, and +Both.
Fig. 4: Performance of different buckets in the utterance+description setting, comparing the same four configurations.
In Figure 3, we plot the mean performance of the LLaVA-1.5-13B model on the Do-I-Demand utterance set, categorized by label performance ranking. The graph clearly shows that our augmentation methods significantly improve the accuracy of labels in bucket 1. There is also a noticeable increase in accuracy across the other buckets.
Similarly, Figure 4 illustrates the mean performance on the Do-I-Demand utterance plus description set, again broken down by label performance ranking. This figure further confirms the positive impact of augmentation, especially in buckets 1 and 2, compared to the relatively lesser gains in buckets 3 and 4.
These findings underscore that each augmentation method we propose not only boosts overall performance but also effectively redistributes the performance across different labels, enhancing the model’s ability to predict a wide range of actions with improved accuracy. | When designing robots to assist in everyday human activities, it is crucial to enhance user requests with visual cues from their surroundings for improved intent understanding. This process is defined as a multimodal classification task. However, gathering a large-scale dataset encompassing both visual and linguistic elements for model training is challenging and time-consuming. To address this issue, our paper introduces a novel framework focusing on data augmentation in robotic assistance scenarios, encompassing both dialogues and related environmental imagery. This approach involves leveraging a sophisticated large language model to simulate potential conversations and environmental contexts, followed by the use of a stable diffusion model to create images depicting these environments. The additionally generated data serves to refine the latest multimodal models, enabling them to more accurately determine appropriate actions in response to user interactions with the limited target data. Our experimental results, based on a dataset collected from real-world scenarios, demonstrate that our methodology significantly enhances the robot's action selection capabilities, achieving the state-of-the-art performance. | [
"cs.CL",
"cs.AI",
"cs.RO"
] |
# 1 Introduction
Semantic segmentation has made remarkable strides with the advent of deep learning [1]. However, it faces significant challenges, particularly in scenarios with limited labelled data [2] and domain shifts [3]. To reduce dependence on costly pixel-wise annotations, semi-supervised approaches aim to leverage a small set of labelled images alongside large pools of unlabelled data [2]. Recent methods, such as UniMatch [4], produce accurate boundaries, yet still misclassify segments with visually similar features [5], for instance, confusing sofas with chairs (see Fig. 4) due to poor representation of rare classes [6]. Sparse annotations also impair edge localization, especially in regions with complex textures or fine-grained structures [7], while noisy pseudo-labels may drift semantically and collapse multiple object instances into a single mask under limited supervision [4]. These failure modes underscore the need for stronger semantic grounding and more reliable segmentation in low-annotation regimes [4].
Vision Language Models (VLMs) such as CLIP [8] encode rich, domain-invariant class semantics via large-scale image–text pretraining [9, 10]. These embeddings can serve as natural class queries to guide pixel grouping, enabling the model to filter out irrelevant categories and enhance high-level concept understanding [11]. However, such embeddings are learned at the image level and lack spatial resolution [12], yielding coarse and noisy masks when naively applied to dense prediction. Although recent VLM-based segmentation approaches show promise, they typically use the text queries of all dataset classes to guide segmentation, even though some classes may be absent from the image, and they operate without access to dense annotations, limiting their ability to localize objects accurately [12].
Despite progress in both fields, semi-supervised segmentation and VLMs have evolved largely in parallel. Semi-supervised methods are based solely on vision-based features, limiting their semantic expressiveness for rare or ambiguous classes [13]. VLMs offer complementary strengths, providing class-level priors, but lacking spatial grounding [12]. Current frameworks rarely fuse these strengths, leading to persistent challenges such as pseudo-label drift, semantic ambiguity, and weak boundary delineation [9]. This highlights the need for a unified framework that integrates the spatial precision of vision models with the semantic richness of VLMs to improve generalization, class discrimination, and label efficiency in low-supervision regimes [9].
In this work, we explore the integration of VLMs into semi-supervised segmentation by addressing three key questions: (1) how can domain-invariant text embeddings from pretrained CLIP [8] be used as object queries within a vision transformer to aid semi-supervised segmentation; (2) what strategies can transform these textual embeddings into dense mask predictors that localize accurately and generalize well with minimal supervision; and (3) which regularization objectives are essential to preserve the domain-invariant vision–language features during training, preventing semantic drift and maintaining high-quality predictions? To answer these questions, we introduce HierVL, a unified vision–language framework for semi-supervised segmentation.
HierVL pioneers a vision–language pathway for semi-supervised segmentation by (i) distilling domain-invariant semantics into hierarchical text-query prompts and (ii) refining pixel embeddings for high-fidelity dense prediction. Its decoder comprises four synergistic modules:
• Hierarchical Semantic Query Generator (HSQG): We prune CLIP embeddings to the image-relevant subset and expand them into a coarse-to-fine hierarchy of class queries, injecting rich priors while suppressing noise from absent categories.
• Cross-Modal Spatial Alignment Module (CMSAM): A bidirectional attention operator grounds each semantic query in the visual feature map and regroups pixel features by domain-invariant object queries, thereby correcting CLIP’s localization bias and supporting sharper, more accurate mask delineation.
• Dual-Query Transformer Decoder (DQTD): Class-level queries from HSQG are fused with learnable instance queries, enabling joint reasoning over what and where. This dual stream prevents instance collapse and cleanly separates overlapping objects.
• Vision–Language Regularization: Two tailored objectives—prompt-topology and contrastive pixel–text—preserve the geometry of CLIP’s embedding space and tighten pixel– query correspondence in low-label regimes.
# 2 Related Works
Vision-Language Models. Early multimodal transformers such as LXMERT [14], ViLBERT [15], VisualBERT [16], and their successors showed that large-scale image-text pretraining yields transferable cross-modal representations. CLIP [8] extended this idea to hundreds of millions of web images, producing a contrastive image-text space that supports zero-shot classification and open-vocabulary segmentation while remaining robust to distribution shifts due to diverse dataset based training [8, 11]. EVA02-CLIP [17] further injects masked image modeling, giving denser pixel-level cues that boost downstream segmentation without heavy fine-tuning. Large Vision Language Models (LVLM) such as LLaVA [18], Qwen-VL [19], and InternVL [20] pair a CLIP-style vision encoder with a scalable language model, unlocking free-form visual dialogue and step-by-step reasoning that also benefits mask prediction tasks [18–20]. These advances keep CLIP [8] and its variants at the centre of a rapidly expanding ecosystem that spans segmentation, object detection (e.g. RegionCLIP [21]) and multimodal instruction following [22]. While prior methods leverage text embeddings for segmentation akin to our approach, our work uniquely targets semi-supervised semantic segmentation, prioritizing robust generalization.
Semantic Segmentation with VLMs. Semi-supervised semantic segmentation traditionally employs vision-only techniques, such as consistency regularization [23], adversarial training [24], and pseudolabel refinement [25], to leverage limited labeled data [23–25]. These methods, while effective, often produce ambiguous boundaries, misclassify rare classes, and suffer from semantic drift due to the lack of high-level semantic context [23, 25]. To address these limitations, recent approaches integrate vision-language models (VLMs), such as CLIP [8], which provide robust semantic priors through web-scale image–text pre-training [8,11]. Early zero-shot segmentation methods, like MaskCLIP [26], derive masks directly from CLIP’s vision encoder using image–caption supervision [26]. Variants of these approaches explore grouping strategies, retrieval-based co-segmentation, or text-grounded attention to enhance mask quality [26,27]. However, the absence of dense annotations results in noisy and spatially imprecise outputs [26, 27].
Open-vocabulary segmentation frameworks offer improved accuracy by combining CLIP-based text embeddings with large-scale segmentation datasets. Methods such as OpenSeg [28], LSeg [29], and ZegFormer [30] align dense visual features or classify class-agnostic masks to language descriptions [28–30]. End-to-end models, including ZegCLIP [31] and CAT-Seg [32], streamline text–vision fusion within a single-stage decoder, often incorporating prompt tuning or adapter modules for efficiency [31, 32]. Despite their ability to generalize to unseen classes, these approaches typically rely on substantial annotated data or modify CLIP pre-trained weights, potentially reducing robustness across diverse domains [11, 31].
Textual Object Queries. VLMs such as CLIP learn to align free-form text with a vast array of web images. The resulting text embeddings encode class semantics that remain reliable under changes in viewpoint, style, and domain [33]. When these embeddings are used as object queries in a segmentation transformer, each query contributes two built-in strengths: it supplies class-specific prior knowledge that helps the network recognize the concept even with minimal pixel supervision, and it already reflects many visual domains, giving the model an initial measure of domain robustness [10]. A common baseline approach, known as static textual queries, involves passing all class embeddings to the decoder for each image and treating them equally. This strategy introduces two key limitations: (1) it allows irrelevant class embeddings to influence prediction, introducing noise [34]; and (2) it fails to adapt spatially to the image context, resulting in coarse or incomplete segmentation. Recent work therefore explores query refinement; however, it still retains this static-query design, adding only a pixel-clarity loss and therefore still passing all class embeddings without weighting [10]. Our approach addresses these issues by filtering out irrelevant class embeddings and enhancing spatial precision, all while preserving the efficiency and flexibility of textual-query-based decoders [34–36], explained in the following section.
# 3 Methodology
We present a novel vision-language segmentation framework that synergistically integrates semantic priors derived from CLIP with dynamic spatial reasoning. Our architecture introduces three key innovations: (1) a Hierarchical Semantic Query Generator (HSQG) that leverages CLIP’s capabilities to produce multi-level class queries; (2) a Cross-Modal Spatial Alignment Module (CMSAM) that refines these queries and the pixel features through spatial attention mechanisms; and (3) a Dual-Query Transformer Decoder (DQTD) that combines class-specific and visual queries for precise mask prediction. This design facilitates robust segmentation, particularly in scenarios characterized by limited annotations and domain shifts.
Figure 1: HierVL Pipeline: The CLIP image encoder extracts multi-scale pixel features, while its frozen text encoder, prompted with class names, generates initial textual queries. The Hierarchical Semantic Query Generator (HSQG) filters out absent classes and projects the remaining queries into relevance-weighted, multi-scale representations. These are then spatially grounded in the image features and vice-versa by the Cross-Modal Spatial Alignment Module (CMSAM) using cross attention. A Dual-Query Transformer Decoder combines these aligned text queries with learnable visual queries to refine both class and location information. Finally, a dynamic mask head converts each refined query into a high-resolution segmentation mask and class prediction, supervised by fixed matching and vision–language alignment losses.
# 3.1 Vision and Language Encoding
We employ CLIP as a multi-modal backbone to extract domain-invariant visual and semantic representations. The image encoder $E_I$, based on ViT-B/16, is fine-tuned to generate multi-scale pixel features $\{F_1, F_2, \ldots, F_L\}$, where $F_l \in \mathbb{R}^{H_l \times W_l \times D_l}$. These features are refined through a multiscale attention-based pixel decoder. Meanwhile, the text encoder $E_T$ remains frozen to preserve the semantic space of CLIP. For each class label $k$, we apply a learnable prompt $p$ and obtain the text embedding as $t_k = E_T([p, \mathrm{class}_k]) \in \mathbb{R}^C$, where $C$ is the dimension of CLIP’s text embeddings.
# 3.2 Hierarchical Semantic Query Generator
In dense semantic segmentation, especially in open-set or semi-supervised frameworks, uniformly querying all classes within the dataset may prove to be suboptimal. Irrelevant class queries, which represent categories not present in the image, can introduce noise and compete for attention, thereby degrading segmentation quality. Moreover, transformer-based segmentation relies on object queries as latent grouping vectors that attend to semantically coherent regions. A singular textual query frequently lacks the flexibility to effectively capture intraclass variation or multiscale semantics, particularly across diverse domains or visual contexts.
To address these limitations, we present HSQG, which initially filters class queries according to their relevance to the current image and subsequently projects them into multiple levels of abstraction. This dual mechanism ensures that the model attends solely to semantically valid classes while allowing it to reason across both coarse and fine semantic granularity. Consequently, HSQG improves both the precision of class-query alignment and the model’s generalization to previously unseen categories.
To model semantics at different abstraction levels, we project each $t_k$ using $L$ separate Multilayer Perceptron (MLP) heads, i.e., $q_k^{(l)} = \mathrm{MLP}_l(t_k) \in \mathbb{R}^D$, $l = 1, \dots, L$, where $D$ is the decoder query dimension and $q_k^{(l)}$ represents the $l$-th level class query.
To focus attention on relevant classes, we compute a semantic relevance score $s _ { k }$ for each class $k$ via a global image embedding. For labelled images, we assign $s _ { k } = 1$ if class $k$ is in the ground truth, and $s _ { k } = 0$ otherwise. For unlabelled images, we extract the image-level embedding $v \in \mathbb { R } ^ { C }$ from the CLIP image encoder $E _ { I }$ and compute the cosine similarity with each class embedding, $\mathrm { s i m } _ { k } = \frac { v ^ { \top } t _ { k } } { \Vert v \Vert \cdot \Vert t _ { k } \Vert }$ . We then modulate this similarity with a learnable gate to weigh classes by their presence probability: $s _ { k } = \sigma ( W \cdot \mathrm { s i m } _ { k } + b )$ , where $s _ { k } \in [ 0 , 1 ]$ is the relevance score, $W , b \in \mathbb { R }$ are learnable parameters, and $\sigma$ denotes the sigmoid function. Each abstraction-level query is scaled as $\tilde { q } _ { k } ^ { ( l ) } \gets s _ { k } \cdot q _ { k } ^ { ( l ) }$ . The final output of HSQG is the set of relevance-weighted hierarchical queries $Q ^ { \mathrm { t e x t } } = \left\{ \tilde { q } _ { k } ^ { ( l ) } \mid k \in \mathcal { C } _ { I } , l = 1 , \ldots , L \right\}$ , where $\mathcal { C } _ { I }$ denotes the set of active classes determined for the image $I$ .
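To make the two HSQG stages concrete, here is a minimal NumPy sketch of the level-wise projection and relevance gating; the shapes, the single-layer stand-ins for the MLP heads, and the 0.5 activity threshold are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, C, D = 5, 3, 512, 256          # classes, abstraction levels, CLIP dim, query dim

t = rng.normal(size=(K, C))          # class text embeddings t_k from the frozen text encoder

# One projection head per abstraction level (a single linear layer stands in for MLP_l)
W_levels = [rng.normal(size=(C, D)) / np.sqrt(C) for _ in range(L)]
q = np.stack([t @ W for W in W_levels], axis=1)   # (K, L, D): q_k^(l) = MLP_l(t_k)

# Relevance gating for an unlabelled image: cosine similarity with the
# global image embedding v, passed through a learnable affine + sigmoid.
v = rng.normal(size=C)
sim = (t @ v) / (np.linalg.norm(t, axis=1) * np.linalg.norm(v))
W_g, b_g = 4.0, 0.0                  # the scalar gate parameters W, b
s = 1.0 / (1.0 + np.exp(-(W_g * sim + b_g)))      # s_k in [0, 1]

q_tilde = s[:, None, None] * q       # relevance-weighted hierarchical queries
active = s > 0.5                     # stands in for the active-class set C_I
Q_text = q_tilde[active]             # (|C_I|, L, D)
```

For labelled images, `s` would simply be the 0/1 ground-truth indicator described above.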
Figure 2: Prompt regularity: elastic alignment of tuned prompts to CLIP prototypes preserves global topology and class anchoring, maintaining zero-shot generalization during task-specific adaptation.
Figure 3: Masked consistency: stabilizes masked and reference features while reinforcing pixel–text alignment under occlusion, improving robustness to missing visual cues.
# 3.3 Cross-Modal Spatial Alignment Module
While hierarchical textual queries $\tilde { q } _ { k } ^ { ( l ) } \in \mathbb { R } ^ { D }$ encode domain-invariant class semantics, effective segmentation requires grounding them in the spatial structure of the input image. In contrast, pixel features $F _ { l } \in \mathbb { R } ^ { H _ { l } \times W _ { l } \times D _ { l } }$ carry rich local detail but no explicit class information. To close this gap, CMSAM performs bidirectional cross-modal attention in a pixel decoder layer: textual queries attend to pixel embeddings to absorb location cues, and pixel embeddings attend to queries to incorporate class priors. This process aligns semantic priors with spatial context, thereby improving both localization and class discriminability. For pixel-to-text alignment, we first reshape $F _ { l }$ into a sequence of flattened patch embeddings $x _ { p } \in \mathbb { R } ^ { D }$ for $p = 1 , \ldots , H W$ . Each hierarchical query $\tilde { q } _ { k } ^ { ( l ) }$ is linearly projected to a lower-dimensional query vector $q _ { k } ^ { ( l ) } = w _ { Q } \tilde { q } _ { k } ^ { ( l ) } \in \mathbb { R } ^ { d }$ , while each pixel embedding $x _ { p }$ is projected into a key vector $k _ { p } = w _ { K } x _ { p } \in \mathbb { R } ^ { d }$ and a value vector $v _ { p } = w _ { V } x _ { p } \in \mathbb { R } ^ { D }$ . Here, $w _ { Q } , w _ { K }$ , and $w _ { V }$ are learned projection matrices. To measure the semantic affinity between the query and each pixel, we compute attention weights using scaled dot-product attention:
$$
\alpha _ { k , p } ^ { ( l ) } = \frac { \exp { \left( ( q _ { k } ^ { ( l ) } ) ^ { \top } k _ { p } / \sqrt { d } \right) } } { \sum _ { p ^ { \prime } = 1 } ^ { H W } \exp { \left( ( q _ { k } ^ { ( l ) } ) ^ { \top } k _ { p ^ { \prime } } / \sqrt { d } \right) } } .
$$
These weights define a distribution of spatial relevance over pixels for each query. We then aggregate pixel-level information into a visual context vector via a weighted sum, $c _ { k } ^ { ( l ) } = \sum _ { p = 1 } ^ { H W } \alpha _ { k , p } ^ { ( l ) } v _ { p }$ , and refine the query via residual addition, $\hat { q } _ { k } ^ { ( l ) } = \tilde { q } _ { k } ^ { ( l ) } + c _ { k } ^ { ( l ) }$ . Conversely, the refined pixel embeddings guide the textual queries: each query attends to the spatial feature map to gather visual cues from relevant locations. This enriches the queries with precise boundary and layout information, sharpening mask predictions. This residual refinement integrates image-specific spatial cues into each semantic query and vice versa, enhancing localization and robustness to intra-class variations. The complete set of refined queries is denoted by $Q _ { \mathrm { a l i g n e d } } = \left\{ \hat { q } _ { k } ^ { ( l ) } \right\}$ , where $k = 1 , \ldots , K ; l = 1 , \ldots , L$ . It is subsequently fed to the dual-query transformer decoder for mask prediction, while the refined pixel embedding passes through the decoder layers.
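The pixel-to-text attention above can be sketched in a few lines of NumPy for a single query; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
HW, D, d = 64, 64, 32                # flattened pixels, embedding width, attention dim

x = rng.normal(size=(HW, D))         # flattened pixel embeddings x_p
q_tilde = rng.normal(size=D)         # one hierarchical query

w_Q = rng.normal(size=(d, D)) / np.sqrt(D)   # learned projections w_Q, w_K, w_V
w_K = rng.normal(size=(d, D)) / np.sqrt(D)
w_V = rng.normal(size=(D, D)) / np.sqrt(D)

q = w_Q @ q_tilde                    # projected query in R^d
k = x @ w_K.T                        # keys   (HW, d)
v = x @ w_V.T                        # values (HW, D)

logits = (k @ q) / np.sqrt(d)        # (q^T k_p) / sqrt(d)
alpha = np.exp(logits - logits.max())
alpha /= alpha.sum()                 # softmax over all pixels

c = alpha @ v                        # visual context vector c_k^(l)
q_hat = q_tilde + c                  # residual refinement
```

The reverse (text-to-pixel) direction is symmetric, with pixels as queries and the hierarchical queries as keys/values.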
# 3.4 Dual-Query Transformer Decoder
While aligned class-level queries furnish robust semantic supervision, an exclusive reliance on them may prove suboptimal in scenarios involving multiple instances of the same category or ambiguous visual details that are not captured by class names. To address this limitation, we introduce DQTD, which operates on two complementary sets of queries: class-level semantic queries, denoted as $Q _ { \mathrm { a l i g n e d } } \in \mathbb { R } ^ { ( K \cdot L ) \times D }$ obtained from CMSAM, where $K \cdot L$ is the total number of class rows across all abstraction levels, and learnable instance-level visual queries, represented as $Q _ { \mathrm { v i s } } \in \mathbb { R } ^ { M \times D }$ , with $M$ the number of object instance rows and $D$ the shared embedding width. The concatenation of these queries forms the decoder input $Q ^ { ( 0 ) } = Q _ { \mathrm { a l i g n e d } } \cup Q _ { \mathrm { v i s } } \in \mathbb { R } ^ { ( K \cdot L + M ) \times D }$ . The matrix $Q ^ { ( 0 ) }$ is then processed through $N _ { \mathrm { d e c } } = 9$ transformer decoder layers, which employ masked self-attention and cross-attention to the pixel embedding $F _ { \mathrm { p i x e l } } \in \mathbb { R } ^ { H \times W \times D }$ obtained from the pixel decoder. Self-attention mixes information across query rows, whereas cross-attention grounds each row in spatial evidence. After the last layer, the decoder outputs $Q ^ { ( N _ { \mathrm { d e c } } ) } \in \mathbb { R } ^ { ( K \cdot L + M ) \times D }$ , where each refined query vector corresponds to its respective row: $q _ { i } = Q _ { i , : } ^ { ( N _ { \mathrm { d e c } } ) } \in \mathbb { R } ^ { D }$ , $i = 1 , \ldots , K \cdot L + M$ .
Here, $q _ { 1 } ^ { ( N _ { \mathrm { d e c } } ) }$ through $q _ { K \cdot L } ^ { ( N _ { \mathrm { d e c } } ) }$ originate in $Q _ { \mathrm { a l i g n e d } }$ and encode semantic regions, while $q _ { K \cdot L + 1 } ^ { ( N _ { \mathrm { d e c } } ) }$ through $q _ { K \cdot L + M } ^ { ( N _ { \mathrm { d e c } } ) }$ originate in $Q _ { \mathrm { v i s } }$ and encode localized spatial precision. This dual-path design enables the model to segment overlapping objects and varied appearances within the same class.
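The decoder's row bookkeeping reduces to concatenating the two query sets and remembering which rows are semantic and which are instance-level; a minimal sketch (sizes are illustrative):

```python
import numpy as np

K, L, M, D = 5, 3, 10, 64
Q_aligned = np.zeros((K * L, D))     # refined semantic queries from CMSAM
Q_vis = np.zeros((M, D))             # learnable instance-level visual queries

Q0 = np.concatenate([Q_aligned, Q_vis], axis=0)   # decoder input Q^(0)

semantic_rows = range(0, K * L)                   # rows driven by class semantics
instance_rows = range(K * L, K * L + M)           # rows driven by instance cues
```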
# 3.5 Dynamic Mask Prediction
Traditional segmentation heads rely on a single global classifier for each pixel, which cannot adapt to the wide range of object shapes, sizes, and instance counts found in real images. Therefore, following [37], we let each query act as its own lightweight mask generator. This dynamic design targets a specific region, delivers sharper boundaries, and naturally handles multiple instances of the same class in one forward pass.
Given the decoder output matrix $Q ^ { ( N _ { \mathrm { d e c } } ) } \in \mathbb { R } ^ { ( K \cdot L + M ) \times D }$ , we select $q _ { i } = Q _ { i , : } ^ { ( N _ { \mathrm { d e c } } ) } \in \mathbb { R } ^ { D }$ and obtain a query-specific kernel with a shared MLP, $\theta _ { i } = \mathrm { M L P } ( q _ { i } ) \in \mathbb { R } ^ { D }$ . The kernel is applied to the pixel embedding $F _ { \mathrm { p i x e l } } \in \mathbb { R } ^ { H \times W \times D }$ through a dot product at every location $( x , y )$ , yielding $M _ { i } ( x , y ) = \theta _ { i } ^ { \top } F _ { \mathrm { p i x e l } } ( x , y )$ and $\hat { Y } _ { i } ( x , y ) = \sigma ( M _ { i } ( x , y ) )$ , where $\hat { Y } _ { i } \in [ 0 , 1 ] ^ { H \times W }$ is the soft binary mask and $\sigma$ is the sigmoid function. Although the MLP is shared, the per-query kernel $\theta _ { i }$ enables each $q _ { i }$ to specialize on a single segment. For mask-level classification, we use a linear head $W _ { c } \in \mathbb { R } ^ { K \times D }$ , which produces logits $c _ { i } = W _ { c } q _ { i }$ . The softmax probability of class $k$ is $p _ { i , k } = e ^ { c _ { i , k } } / \sum _ { k ^ { \prime } } e ^ { c _ { i , k ^ { \prime } } }$ , and the predicted label index is $\kappa _ { i } = \arg \operatorname* { m a x } _ { k } p _ { i , k }$ . To align each visual query with its corresponding textual meaning, we project $q _ { i }$ into the CLIP space via a learnable mapping $\Phi : \mathbb { R } ^ { D } \to \mathbb { R } ^ { C }$ and minimize:
$$
\mathcal { L } _ { \mathrm { a l i g n } } = \sum _ { i } \bigl \| \Phi ( { \boldsymbol { q } } _ { i } ) - t _ { \kappa _ { i } } \bigr \| _ { 2 } ^ { 2 } ,
$$
where $t _ { \kappa _ { i } }$ is the CLIP text embedding of class $\kappa _ { i }$ , and $\left\| \cdot \right\| _ { 2 }$ is the Euclidean norm. This per-query dynamic head and the vision–language alignment produce fine-grained, instance-aware masks while preserving open-vocabulary semantics. Finally, our overall training objective combines: (i) binary cross-entropy on labeled images, (ii) entropy minimization on unlabeled images, and (iii) a self-training loss using high-confidence pseudo-labels, ensuring robust learning under sparse supervision.
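The per-query dynamic head can be sketched as follows; a single tanh layer stands in for the shared MLP, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, D, K = 16, 16, 64, 5
F_pixel = rng.normal(size=(H, W, D))       # pixel embeddings from the pixel decoder
q_i = rng.normal(size=D)                   # one refined decoder query

# Shared MLP produces a query-specific kernel theta_i (one tanh layer here)
W_mlp = rng.normal(size=(D, D)) / np.sqrt(D)
theta = np.tanh(q_i @ W_mlp)               # theta_i = MLP(q_i)

M_i = F_pixel @ theta                      # dot product at every (x, y): shape (H, W)
Y_i = 1.0 / (1.0 + np.exp(-M_i))           # sigmoid -> soft mask in [0, 1]

# Mask-level classification with a linear head W_c
W_c = rng.normal(size=(K, D)) / np.sqrt(D)
c_i = W_c @ q_i
p_i = np.exp(c_i - c_i.max()); p_i /= p_i.sum()   # softmax over classes
kappa_i = int(np.argmax(p_i))                     # predicted label index
```

Because `theta` depends on `q_i`, every query carves out its own mask from the same shared pixel embedding, which is what lets one forward pass produce several instance masks of the same class.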
# 3.6 Regularization Objectives
We introduce three complementary regularization objectives designed to ensure that the fine-tuned vision-language segmentation model maintains the semantic alignment and generalization capacity of the pre-trained CLIP model. Each objective is motivated by a distinct concern – prompt semantic stability, cross-modal feature consistency, and representation preservation – and is formulated with precise constraints. We elaborate on each objective in the following, including its motivation, formulation, and intended effect on the training dynamics.
# 3.6.1 Prompt Regularity Objective
Learned prompts improve segmentation but can distort the CLIP text manifold: frozen templates block task adaptation, whereas unconstrained tuning erodes the zero-shot structure. We treat prompt learning as an elastic alignment problem in which each tuned embedding is allowed to deviate just enough to encode new visual cues while remaining topologically consistent with its CLIP prototype. The dual-space consistency loss comprises a global semantic-topology term and a local anchor-adversarial term:
$$
\mathcal { L } _ { \mathrm { t o p o } } = \sum _ { i = 1 } ^ { K } \sum _ { j = 1 } ^ { K } \bigl ( \hat { t } _ { i } ^ { \intercal } \hat { t } _ { j } - \hat { t } _ { i } ^ { \mathrm { C L I P } \intercal } \hat { t } _ { j } ^ { \mathrm { C L I P } } \bigr ) ^ { 2 } ,
$$
$$
\mathcal { L } _ { \mathrm { a n c h o r } } = \sum _ { k = 1 } ^ { K } \Bigl [ \| \Phi ( \hat { t } _ { k } ) - \hat { t } _ { k } ^ { \mathrm { C L I P } } \| _ { 2 } - \lambda \sum _ { j \neq k } \log \bigl ( 1 + \| \Phi ( \hat { t } _ { k } ) - \Phi ( \hat { t } _ { j } ) \| _ { 2 } \bigr ) \Bigr ] ,
$$
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { p r o m p t } } = \mathcal { L } _ { \mathrm { t o p o } } + \mathcal { L } _ { \mathrm { a n c h o r } } , } \end{array}
$$
where $\hat { t } _ { k }$ and $\hat { t } _ { k } ^ { \mathrm { C L I P } }$ are normalized embeddings from learnable and fixed prompts; $\Phi$ is a two-layer MLP projector; $\lambda$ balances anchor retention and inter-class separation. Equation (3) preserves the pairwise class angles of CLIP, while Equation (4) binds each prompt to its prototype, yet repels other classes, producing an elastic tie that maintains zero-shot semantics and allows task-specific refinement.
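The two prompt-regularity terms can be written out directly in NumPy. In this sketch $\Phi$ is taken as the identity rather than the two-layer MLP projector, the prompts are simulated as small perturbations of the CLIP prototypes, and $\lambda$ is a placeholder value:

```python
import numpy as np

rng = np.random.default_rng(3)
K, C = 4, 32
lam = 0.1                                   # lambda: anchor vs. separation trade-off

def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

t_clip = unit(rng.normal(size=(K, C)))      # fixed CLIP prototypes \hat t_k^CLIP
t_hat = unit(t_clip + 0.05 * rng.normal(size=(K, C)))   # tuned prompts \hat t_k

# Global topology term: preserve CLIP's pairwise cosine structure
G_hat, G_clip = t_hat @ t_hat.T, t_clip @ t_clip.T
L_topo = np.sum((G_hat - G_clip) ** 2)

# Local anchor term (projector Phi is the identity in this sketch)
phi = t_hat
dist_anchor = np.linalg.norm(phi - t_clip, axis=1)
L_anchor = 0.0
for k in range(K):
    sep = sum(np.log1p(np.linalg.norm(phi[k] - phi[j])) for j in range(K) if j != k)
    L_anchor += dist_anchor[k] - lam * sep

L_prompt = L_topo + L_anchor
```

Minimizing `L_topo` keeps the tuned Gram matrix close to CLIP's, while `L_anchor` pulls each prompt toward its prototype and pushes it away from the other classes.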
# 3.6.2 Cross-Modality Masked Consistency Objective
Pixel-level vision–language alignment can degrade when the visual encoder overfits to straightforward cues and neglects robust semantic details, primarily under occlusions or domain shifts. To counter this, we adopt a masked-consistency strategy that enforces stable alignment between vision and language representations.
Specifically, we randomly mask all pixels belonging to a randomly chosen class in the input image and feed this masked image to the learnable encoder $E _ { I } ^ { \theta }$ , resulting in masked embeddings $x _ { \mathrm { m a s k } }$ . At the same time, we pass the original unmasked image through a frozen copy of the same encoder, $E _ { I } ^ { 0 }$ , to obtain reference embeddings $x _ { \mathrm { r e f } }$ . Both $x _ { \mathrm { m a s k } }$ and $x _ { \mathrm { r e f } }$ share the same spatial resolution and channel dimension. We then penalize any discrepancy on the masked set $M \subset \{ 1 , \ldots , H \times W \}$ via
$$
\mathcal { L } _ { \mathrm { m a s k } } = \frac { 1 } { \vert M \vert } \sum _ { i \in M } \big ( 1 - \cos ( x _ { \mathrm { m a s k } , i } , x _ { \mathrm { r e f } , i } ) \big ) ,
$$
where $\cos ( \cdot )$ denotes cosine similarity, forcing the fine-tuned encoder to reproduce the frozen encoder’s features even when critical pixels are missing.
To ensure that the model continues to leverage the linguistic cue when visual evidence is lacking, we add a pixel–text contrastive loss over the same masked locations, encouraging the masked features to align with their corresponding text embeddings rather than collapse to generic patterns.
$$
\mathcal { L } _ { \mathrm { a l i g n } } = - \frac { 1 } { | M | } \sum _ { i \in M } \log \frac { e ^ { \cos ( x _ { \mathrm { m a s k } , i } , t _ { y _ { i } } ) / \tau } } { \sum _ { c = 1 } ^ { C } e ^ { \cos ( x _ { \mathrm { m a s k } , i } , t _ { c } ) / \tau } } ,
$$
where $t _ { y _ { i } }$ is the CLIP text embedding of the true class at pixel $i$ , $t _ { c }$ iterates over all class embeddings, and $\tau$ is a temperature. Our final vision–language regularizer combines both terms:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { v l } } = \mathcal { L } _ { \mathrm { a l i g n } } + \lambda _ { \mathrm { m a s k } } \mathcal { L } _ { \mathrm { m a s k } } . } \end{array}
$$
By training only $E _ { I } ^ { \theta }$ on the masked branch and keeping $E _ { I } ^ { 0 }$ fixed, we ensure that the fine-tuned encoder retains the original semantic grounding of CLIP even under partial occlusion, significantly improving the robustness for open-vocabulary segmentation.
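A compact NumPy sketch of the two masked-consistency terms; the feature dimensions, the random masked index set, and the matching of text-embedding width to feature width are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, D, C, tau = 64, 32, 5, 0.07       # pixels, feature dim, classes, temperature

x_ref = rng.normal(size=(N, D))      # frozen-encoder features on the clean image
x_mask = x_ref + 0.1 * rng.normal(size=(N, D))   # tuned-encoder features, masked input
t = rng.normal(size=(C, D))          # class text embeddings (width matched for the sketch)
y = rng.integers(0, C, size=N)       # ground-truth class per pixel
M = rng.choice(N, size=16, replace=False)        # masked pixel indices

def cos(a, b):
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

# L_mask: cosine discrepancy between masked and reference features
L_mask = float(np.mean(1.0 - cos(x_mask[M], x_ref[M])))

# L_align: pixel-text InfoNCE over the same masked locations
sims = cos(x_mask[M][:, None, :], t[None, :, :]) / tau    # (|M|, C)
m = sims.max(axis=1, keepdims=True)
logp = sims - (m + np.log(np.exp(sims - m).sum(axis=1, keepdims=True)))
L_align = float(-np.mean(logp[np.arange(len(M)), y[M]]))

L_vl = L_align + 0.5 * L_mask        # lambda_mask = 0.5 as in the training setup
```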
Table 1: Comparison of HierVL with state-of-the-art semi-supervised segmentation methods on Pascal VOC, COCO, ADE20K and Cityscapes. The mIoU $( \% )$ is reported across varying labeled splits. Best results are highlighted in red, second-best in blue, and gains over the prior SOTA in green.
[Table 1 body omitted: per-split mIoU (%) of PseudoSeg, DualTeacher, LogicDiag, UniMatch, ZegCLIP+UniMatch†, and HierVL (Ours) on Pascal VOC, COCO, ADE20K, and Cityscapes; HierVL is best on every split, with gains over the prior SOTA ranging from +0.7% (Cityscapes, 1/2 split) to +5.9% (ADE20K, 1/128 split).]
Figure 4: Example predictions on Pascal VOC with 92 labels (top) and COCO with 232 labels (bottom). HierVL correctly disambiguates visually similar classes (e.g., sofa vs. chair), suppresses over-segmentation in cluttered regions, produces sharp object boundaries, and accurately separates multiple instances in dense crowds, demonstrating enhanced semantic grounding and instance-level discrimination under sparse supervision.
# 4 Experiments
# 4.1 Implementation Details
Network Architecture: Our model leverages a ViT-B/16 vision encoder $E _ { I }$ (patch size $= 16$ ) [38] and a Transformer text encoder $E _ { T }$ [39], both initialized with CLIP pre-training [33]. The multi-scale deformable-attention pixel decoder is adopted from DN-DETR [40], with CMSAM interleaved after each deformable-attention layer. An 8-token learnable prompt $p$ is prepended to each encoder sequence to specialize the joint representation for segmentation. For the Dual-Query Transformer Decoder, we follow the default settings of the Masked-Attention Mask Transformer for universal image segmentation [37]. The framework inherits CLIP's biases and may struggle with domain-specific classes absent from CLIP's training data, limiting adaptation to niche domains.
Training: We train HierVL on Pascal VOC [41], COCO [42], ADE20K [43], and Cityscapes [44] using a single NVIDIA GeForce RTX-3090 (24 GB). Following the semi-supervised setting of [9], each iteration processes a mixed batch of eight labelled and eight unlabelled images. Inputs are randomly cropped to $512 \times 512$ pixels, except for Cityscapes, where we adopt the UniMatch crop of $801 \times 801$ [4]. Optimization employs AdamW for 80, 10, 40, and 240 epochs on VOC, COCO, ADE20K, and Cityscapes, respectively, with a 0.9 polynomial learning-rate decay. Base learning rates are $1 \times 10 ^ { - 4 }$ (VOC), $4 \times 10 ^ { - 4 }$ (COCO, ADE20K), and $5 \times 10 ^ { - 5 }$ (Cityscapes). ViT-backbone weights are updated with a $0.1 \times$ multiplier, whereas HSQG, CMSAM, DQTD, and prompt tokens use $4 \times$ the base rate. Data augmentation mirrors UniMatch: random scale, crop, colour jitter, grayscale, CutMix, and horizontal flip; inference uses sliding-window evaluation with $50 \%$ overlap. We instantiate one textual query per class (e.g., 19 for Cityscapes), each prefixed by an eight-token learnable prompt. Loss weights are $\lambda _ { \mathrm { p r o m p t } } = 0.05$ , $\lambda _ { \mathrm { m a s k } } = 0.5$ (temperature $\tau = 0.07$ ), $\lambda _ { \mathrm { p t } } = 0.10$ , and $\lambda _ { \mathrm { a l i g n } } = 0.05$ . Moreover, all loss functions from SemiVL [9] are used with their default loss weights.
Table 2: Class-Wise IoU Scores
(a) Ablation on VOC $\mathrm { ( m I o U _ { 9 2 } / m I o U _ { 1 4 6 4 } ) }$ . The best results are obtained with all components enabled.
(b) Regularization ablation $\mathrm { ( m I o U _ { 9 2 } ) }$ . All losses contribute to the improvement of results.
# 4.2 Comparison with the SOTA Semi-Supervised Segmentation Methods
Pascal VOC. On Pascal VOC (10582 train images; 1464 labelled), we vary the number of labelled images from 92 to 1464. As shown in Tab. 1, HierVL improves mIoU by $+3.1\%$ at 92 labels and $+2.6\%$ at 1464 labels over SemiVL [9], demonstrating that our hierarchical queries and spatial grounding deliver substantial benefits in low-annotation regimes. A class-wise breakdown in Tab. 1 reveals that the greatest gains occur on visually ambiguous categories, such as chair, dining table, and sofa, where SemiVL [9] and UniMatch† frequently make errors. These results are also evident in Fig. 4, where HierVL correctly distinguishes a sofa from a chair, while the previous SOTA struggles. An ablation without HSQG shows that hierarchical query filtering yields additional IoU boosts of roughly $3$ – $7 \%$ on the most challenging classes.
COCO. On COCO (118k train images; 81 classes), the diversity of categories substantially raises the difficulty of semi-supervised learning. Here, HierVL's language–vision integration excels at disentangling a wide range of semantic concepts, yielding a $+4.4\%$ mIoU gain with only 232 labels (Tab. 1).
ADE20K. ADE20K (20.21k train images; 150 classes) presents an even broader scene-parsing challenge. Table 1 shows that HierVL achieves up to $+ 5 . 9 \%$ mIoU with only 158 labels, further validating its effectiveness under minimal supervision.
Cityscapes. Finally, on Cityscapes (2.975k train images; 19 classes), HierVL outperforms SemiVL [9] by up to $+1.8\%$ mIoU at 100 labels, as shown in Table 1. These gains, though smaller, are noteworthy given the prevalence of fine-grained street-scene classes (e.g., distant poles, traffic signs) that are especially challenging under sparse caption guidance.
# 4.3 Analysis
Ablation: Tab. 3a quantifies the contributions of the components, beginning with the SemiVL [9] baseline of $84.0/87.3$ $\mathrm { m I o U _ { 9 2 } / m I o U _ { 1 4 6 4 } }$ . We first evaluate the effect of generating hierarchical queries and eliminating absent class queries; this yields the largest performance gain of $+0.9/+0.8\%$ . We then examine the contribution of CMSAM + dynamic mask predictor $( Q _ { \mathrm { a l i g n e d } } )$ , i.e., aligning semantic queries with spatial features and an instance-aware decoder, which lifts performance by $+0.7/+0.6\%$ . Moreover, the study reveals an increase of $+0.9/+0.6\%$ from including instance-level visual queries, i.e., $Q _ { \mathrm { a l i g n e d } } \cup Q _ { \mathrm { v i s } }$ . Finally, combined query refinement adds a further $+0.6/+0.6\%$ . The stronger gains at 92 labels ( $+3.1\%$ vs. $+2.6\%$ for 1464) stem from the hierarchical and spatial modules compensating for sparse supervision.
Regularization: Tab. 3b evaluates our regularization losses on Pascal VOC. The complete combination achieves 87.1 mIoU, outperforming all partial configurations. The masked-consistency loss $\mathcal { L } _ { \mathrm { m a s k } }$ proves most critical for occlusion robustness, the prompt-topology loss $\mathcal { L } _ { \mathrm { p r o m p t } }$ maintains the semantic relationships of CLIP, and the alignment loss $\mathcal { L } _ { \mathrm { a l i g n } }$ strengthens local pixel–text correspondence. Together, they show complementary benefits: the prompt loss preserves global semantics, the masked loss ensures feature stability, and the alignment loss refines localization, collectively boosting performance by 2.4 mIoU.

Semi-supervised semantic segmentation remains challenging under severe label scarcity and domain variability. Vision-only methods often struggle to generalize, resulting in pixel misclassification between similar classes and poor boundary localization. Vision-language models offer robust, domain-invariant semantics but lack the spatial grounding required for dense prediction. We introduce HierVL, a unified framework that bridges this gap by integrating abstract text embeddings into a mask-transformer architecture tailored for semi-supervised segmentation. HierVL features three novel components: a Hierarchical Semantic Query Generator that filters and projects abstract class embeddings into multi-scale queries to suppress irrelevant classes and handle intra-class variability; a Cross-Modal Spatial Alignment Module that aligns semantic queries with pixel features for sharper boundaries under sparse supervision; and a Dual-Query Transformer Decoder that fuses semantic and instance-level queries to prevent instance collapse. We also introduce targeted regularization losses that maintain vision–language alignment throughout training to reinforce semantic grounding.
HierVL establishes a new state of the art, improving mean intersection over union by +4.4% on COCO (with 232 labeled images), +3.1% on Pascal VOC (with 92 labels), +5.9% on ADE20K (with 158 labels), and +1.8% on Cityscapes (with 100 labels), demonstrating strong performance under 1% supervision on four benchmark datasets. Our results show that language-guided segmentation closes the label-efficiency gap and unlocks new levels of fine-grained, instance-aware generalization.
# 1. Introduction
Modelling complex language patterns and solving complex language tasks are two of the primary reasons that Large Language Models (LLMs) have attracted considerable attention in recent years. While the LLM track thrives on increasing model sizes and tackling ever more difficult tasks, another track considers putting such capable models on lower-end devices. These models are called Small Language Models (SLMs) (Lu et al., 2024) or on-device language models (Liu et al., 2024; Mehta et al., 2024; hfs, 2024).
SLMs may have less than one billion parameters (Mehta et al., 2024; Liu et al., 2024; Laskaridis et al., 2024). Though such a size is already a tenth or even a hundredth of that of common LLMs, it can still be burdensome for some low-end devices. As listed in (Liu et al., 2024, Fig. 2), some prevalent mobile devices (e.g. iPhone 14 and iPhone 15) only have 6GB DRAM. For some SLMs like Gemma2-2B, running the uncompressed version causes a system crash on a Raspberry Pi 5 with 8GB DRAM.
Compared with LLMs, SLMs on low-end devices have different layer compositions of the model and different onboard operations due to the absence of server-level GPUs. As shown in Figure. 1a, around half of the investigated open-source models have more than $20 \%$ of the parameters attributed to token embedding layers, which is consistent with the previous findings, i.e. (Liu et al., 2024, Section 2.2.3). Additionally, since no server-level GPU is on board to support massive parallel operations for matrix multiplication, block-wise approaches that rely on parallelism (Dao et al., 2022; Qiu et al., 2024) are not suitable for low-end deployment scenarios.
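The outsized embedding share is easy to see with a back-of-the-envelope parameter count. The configuration below is hypothetical (not any particular released model), using the standard approximations of ~4d² parameters for attention and ~8d² for the feed-forward block per transformer layer:

```python
vocab, d_model, n_layers = 32_000, 768, 12       # hypothetical SLM configuration

emb = vocab * d_model                            # token embedding table
per_layer = 4 * d_model**2 + 8 * d_model**2      # attention (~4d^2) + FFN (~8d^2)
total = emb + n_layers * per_layer

share = emb / total
print(f"embedding share: {share:.1%}")           # > 20% of all parameters
```

Shrinking `d_model` or `n_layers` while keeping the vocabulary fixed only increases this share, which is why embedding compression matters most for the smallest models.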
To this end, this paper proposes TensorSLM, a tensor-based approach to compress SLMs for low-end devices (i.e. a Raspberry Pi without GPU). Together with matrix-based low-rank approaches (Chen et al., 2018a; Hrinchuk et al., 2020; Lioutas et al., 2020; Acharya et al., 2019; Chen et al., 2021; Hsu et al., 2022; Dao et al., 2022; Qiu et al., 2024), this kind of approach forms a broader field named low-rank factorization. The comparison of these works regarding methodologies (e.g. matrix/tensor, with/without training) and applications (e.g. high-end/low-end devices, large/small models) is clarified in Table. 4.
(a) The parameter ratio of Norms (including layer norms), feed-forward layers (FF), attention layers (Attn), and embedding layers (Emb), and the average zero-shot reasoning score (Zellers et al., 2019; Clark et al., 2018; 2019; Bisk et al., 2020) of several open-source model series. In a model series, smaller models have a higher token embedding layer ratio and a lower feed-forward layer ratio, while the attention layer ratio is maintained.

(b) The workflow of SLM compression in an edge computing scenario with our approach.

Figure 1. Typical SLM layer composition and the SLM application requirement of adaptability.

[Figure 1b diagram: a token embedding vector $\mathbf { x } \in \mathbb { R } ^ { 2 7 }$ is ① tensorized and ② decomposed into MPS cores with tensor ranks $r _ { 0 } = r _ { 1 } = r _ { 2 } = r _ { 3 } = 1$ ; the central server compresses the vocabulary, and the edge application downloads, registers, compresses, and updates tokens.]

Compared with two-dimensional matrices or their finer-grained block-wise forms (Chen et al., 2018a; Dao et al., 2022), higher-order tensors provide more diverse representation alternatives through their inter-order information, which is more suitable for small-size models to model complex patterns. This superiority is more pronounced when no fine-tuning data is available to adjust model parameters for specific deployment environments.
The contributions of this paper are summarised as follows:
1. We systematically analyse LLMs on high-end GPU servers and SLMs on low-end edge devices to address the two unique requirements of SLM compression: adaptability to specific deployment environments and energy efficiency for better user experience.
2. To our knowledge, we are the first to compress SLMs for low-end device use cases using low-rank factorization. We adapt Tensor-Train Decomposition for non-parallel operations in the forward pass, where block-wise approaches (Dao et al., 2022; Qiu et al., 2024) are not applicable.
3. We give the measured latency and estimated energy consumption of SLMs on a typical low-end device, the Raspberry Pi 5, finding that our approach reduces the inference energy by half with a negligible latency increase.
4. We evaluated both simple and complex language tasks. We found that our tensor-based approach is better at unprompted and unconstrained question answering than the matrix-based SVD approach, which sheds light on selecting appropriate algebraic structures for language model compression according to the specific task.
# 2. Unique Requirements of SLM Applications
This section clarifies the main application differences between LLMs and SLMs, which will then guide the design of SLMs compression on low-end devices.
# 2.1. Adaptability
Unlike the current LLM applications, which are mostly running on high-end GPU servers (e.g. in the data centres with numerous NVIDIA A100), SLMs are mainly for edge (or mobile) applications that require adapting to the environment with limited resources on lower-end devices. A common approach to adapting to the dynamic environment is updating the vocabulary according to the changes in input text distribution (Chen et al., 2018a). The reasons for this distribution change vary from case to case. For example, new user registration, or the frequently used tokens update with the users’ changing daily lives.
To cope with the ever-changing input tokens and vocabulary, a straightforward strategy is to build a cloud-edge system, as shown in Figure. 1b, which is similar to workflows in the field of edge computing, e.g. (Laskaridis et al., 2024, Fig.1). There are two kinds of devices in this workflow: 1) the central server, which is possibly a server in a public or private cloud service, or a higher-end personal computer, and 2) the low-end edge device. In this paper, we only consider a typical edge device, the Raspberry Pi. Over a fairly long period (e.g. months or years), the central server only communicates with the edge device once to provide a brand-new pre-trained language model. Afterwards, the edge device updates the vocabulary on board according to changes in the environment.
A detailed explanation of Figure. 1b is as follows:
Step 1. The central server compresses the whole token embedding matrices on the token embedding level, according to Algorithm 1.
Step 2. The compressed vocabulary and other parts of the language model (e.g. the decoder) are downloaded and then deployed on a low-end device.
Step 3. While the application runs, the vocabulary is updated in two cases:
1. If a new token is required by the actual application, it is registered by the service on the edge device. Jump to Step 4.
2. If an old token needs to be removed (e.g. it has not been used for a long time), the edge device simply deletes the corresponding token embedding vector. Meanwhile, the application deregisters this token.
Step 4. The low-end device compresses the added token embedding vector as described in Algorithm 1.
Step 5. The updated vocabulary serves the language model. The compression of a single token embedding follows a pipeline of $\textcircled{1}$ tensorization and $\textcircled{2}$ decomposition.
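The on-device register/deregister logic of Steps 3-5 can be sketched as follows; `VocabStore` and the stand-in `tt_compress` are illustrative names, not the paper's code (the actual compression is Algorithm 1):

```python
import numpy as np

def tt_compress(vec, dims):
    """Stand-in for Algorithm 1: a real implementation would run TT-SVD
    on the tensorized vector and return its TT cores."""
    return [np.reshape(vec, dims)]  # placeholder "cores"

class VocabStore:
    """On-device vocabulary of compressed token embeddings (Steps 3-5).
    Illustrative sketch only."""
    def __init__(self, dims):
        self.dims = dims
        self.cores = {}  # token -> list of TT cores

    def register(self, token, vec):
        # Step 3, case 1 + Step 4: compress the new token's embedding on-device.
        self.cores[token] = tt_compress(vec, self.dims)

    def deregister(self, token):
        # Step 3, case 2: delete the stored cores of a stale token.
        self.cores.pop(token, None)

store = VocabStore((3, 3, 3))
store.register("new_token", np.zeros(27))
store.deregister("new_token")
print(len(store.cores))  # 0
```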
# 2.2. Energy Efficiency
From the workload of the high-end GPU servers (e.g. those equipped with NVIDIA A100) and low-end edge devices (e.g. Raspberry Pi 5) described in Section 2.1, we know that the edge device only takes charge of light-weight essential tasks, since it has strict limitations in computation, memory and communication. Furthermore, since battery life directly impacts the user experience, energy consumption is also a significant concern.
The actual energy consumption of a device depends on various factors, like the semiconductor temperature, system workload, operating environment, etc. Thus, it is hard to precisely calculate the exact energy consumption of an algorithm on given hardware. However, we can still estimate the range of energy consumption in the system as in Table. 1, which leads to the following remarks:
Remark 2.1. Memory operations are more “expensive” than computation in terms of energy.
Table 1. Approximate energy consumption of different operations (1 nJ = 1000 pJ). For servers, communication over the wired network (e.g. Ethernet or optical fibre) is preferred; for edge devices, wireless networks (e.g. Wi-Fi or cellular) are preferred.
Remark 2.2. Non-essential communication should be avoided for energy concerns.
The workflow in Figure. 1b already satisfies Remark 2.2. For Remark 2.1, if real-time response is not the most important concern in the edge application, we "exchange" memory for computation to extend battery life. Further discussion and evaluation of these points are in Section 4.1 and Appx. F.
# 3. Preliminaries
This section gives the essential concepts related to tensor, tensor operations and Tensor-Train Decomposition.
Order-$N$ Tensor. An order-$N$ real-valued tensor, $\mathcal{A}$, is a high-dimensional matrix (or multi-way array), denoted by $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, where $N$ is the order of the tensor (i.e., the number of its modes), and $I_k$ ($1 \leq k \leq N$) is the size (i.e., the dimension) of its $k$-th mode. In this sense, matrices (denoted as $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$) can be seen as order-2 tensors ($N = 2$), vectors (denoted as $\mathbf{a} \in \mathbb{R}^{I}$) can be seen as order-1 tensors ($N = 1$), and scalars (denoted as $a \in \mathbb{R}$) are order-0 tensors ($N = 0$).
Tensor-Train Decomposition (TTD). The most common Tensor-Train Decomposition (Oseledets, 2011) formats a tensor into a Matrix Product State (MPS) form, applying the Tensor-Train Singular Value Decomposition (TT-SVD) algorithm to an order-$N$ tensor, $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. This results in $N$ smaller 2nd- or 3rd-order tensors, $\mathcal{G}^{(k)} \in \mathbb{R}^{r_{k-1} \times I_k \times r_k}$ for $k = 1, \ldots, N$, such that
$$
\begin{array}{r} \mathcal{X} \approx \mathcal{G}^{(1)} \times_{2}^{1} \mathcal{G}^{(2)} \times_{3}^{1} \mathcal{G}^{(3)} \times_{3}^{1} \cdots \times_{3}^{1} \mathcal{G}^{(N)}. \end{array}
$$
Tensors $\mathcal{G}^{(1)}, \ldots, \mathcal{G}^{(N)}$ are referred to as the tensor cores, while the set $\{r_0, r_1, \ldots, r_N\}$ represents the TT-rank of the TT decomposition ($r_0 = r_N = 1$).
# 4. Methodology
This section clarifies the technical cornerstones of our approach. A practical pipeline of our approach is depicted in Figure. 1b. The whole vocabulary is processed on higher-end servers, while inference and vocabulary updates happen
Algorithm 1 TT SVD(Oseledets, 2011) for a Single Token Embedding Compression
Input: 1. $d$-dimensional token embedding vector $\mathbf{x} \in \mathbb{R}^{d}$, approximation accuracy $\epsilon$; 2. tensor dimensions $\{I_1, I_2, \ldots, I_N\}$ and TT ranks $\{r_0, r_1, \ldots, r_N\}$.
Output: TT cores $\mathcal{G}^{(1)}, \ldots, \mathcal{G}^{(N)}$.
Initialize: tensor $\mathcal{X} \leftarrow \mathrm{reshape}(\mathbf{x}, [I_1, I_2, \ldots, I_N])$; temporary matrix $\mathbf{Z} \leftarrow \mathrm{reshape}(\mathcal{X}, [r_0 I_1, \prod_{j=2}^{N} I_j])$; truncation parameter $\delta = \frac{\epsilon}{\sqrt{N-1}} \|\mathcal{X}\|_F$.
1: for $k = 1$ to $N - 1$ do
2: $\quad \mathbf{U}, \mathbf{S}, \mathbf{V}, \mathbf{E} \leftarrow \mathrm{truncSVD}(\mathbf{Z}, \delta, r_k)$ // s.t. $\mathbf{U} \in \mathbb{R}^{r_{k-1} I_k \times r_k}$, $\|\mathbf{E}\|_F \leq \delta$
3: $\quad \mathcal{G}^{(k)} \leftarrow \mathrm{reshape}(\mathbf{U}, [r_{k-1}, I_k, r_k])$ // get the $k$-th TT core
4: $\quad \mathbf{Z} \leftarrow \mathrm{reshape}(\mathbf{S}\mathbf{V}^{T}, [r_k I_{k+1}, \prod_{j=k+2}^{N} I_j])$
5: $\mathcal{G}^{(N)} \leftarrow \mathbf{Z}$
6: return $\mathcal{G}^{(1)}, \mathcal{G}^{(2)}, \ldots, \mathcal{G}^{(N)}$
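Algorithm 1 can be sketched in a few lines of NumPy. The following is a minimal, generic TT-SVD implementation (ranks are chosen by $\delta$-truncation alone, without the additional per-mode rank cap of `truncSVD`); it is an illustrative sketch, not the authors' released code:

```python
import numpy as np

def tt_svd(x, dims, eps=1e-9):
    """Decompose a d-dimensional vector (tensorized to `dims`) into TT cores."""
    N = len(dims)
    X = np.reshape(x, dims)
    delta = eps / np.sqrt(max(N - 1, 1)) * np.linalg.norm(X)
    cores, r_prev = [], 1
    Z = np.reshape(X, (r_prev * dims[0], -1))
    for k in range(N - 1):
        U, S, Vt = np.linalg.svd(Z, full_matrices=False)
        # Keep the smallest rank whose discarded singular-value energy <= delta.
        tail = np.sqrt(np.cumsum((S ** 2)[::-1]))[::-1]
        r = max(1, int(np.sum(tail > delta)))
        U, S, Vt = U[:, :r], S[:r], Vt[:r, :]
        cores.append(U.reshape(r_prev, dims[k], r))        # k-th TT core
        Z = np.reshape(np.diag(S) @ Vt, (r * dims[k + 1], -1))
        r_prev = r
    cores.append(Z.reshape(r_prev, dims[-1], 1))           # last TT core
    return cores

x = np.random.RandomState(0).randn(27)
cores = tt_svd(x, (3, 3, 3))
print([G.shape for G in cores])
```

With a tight `eps`, contracting the returned cores reproduces the original vector up to floating-point error.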
on lower-end edge devices.
# 4.1. Individual Embedding Vector Compression
For the compression of the embedding matrix, rather than decomposing the whole embedding weight matrix, we propose to decompose each embedding vector. The lower half of Figure. 1b is a simplified illustration of such a process, with a detailed description in Algorithm 1.
Tensorization. Each token embedding $\mathbf{x} \in \mathbb{R}^{d}$ is reshaped (or folded) and tensorized into an order-$N$ tensor. Denoting reshape(·) as the reshape function, $\mathcal{X} = \mathrm{reshape}(\mathbf{x}, \{I_1, I_2, \ldots, I_N\})$ with $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ such that $d = \prod_{k=1}^{N} I_k$. In the example in Figure. 1b, the token embedding vector $\mathbf{x}$ is a 27-dimensional vector, $d = 27$. In this way, vector $\mathbf{x}$ is reshaped into an order-3 ($N = 3$) tensor $\mathcal{X}$, with size $I_1 = I_2 = I_3 = 3$ for each mode.
Tensor Decomposition. Tensor $\mathcal{X}$ is then decomposed and stored in a Matrix Product State (MPS) form as $\mathcal{X} \approx \mathcal{G}^{(1)} \times_{2}^{1} \cdots \times_{3}^{1} \mathcal{G}^{(N)}$, with the TT ranks $r_0, r_1, \ldots, r_N$ as hyperparameters. For the case in Figure. 1b, the MPS cores are $\mathcal{G}^{(1)}, \mathcal{G}^{(2)}, \mathcal{G}^{(3)}$, with TT ranks $r_0 = r_1 = r_2 = r_3 = 1$. In other words, instead of storing the entire token embedding vector $\mathbf{x} \in \mathbb{R}^{d}$, we store the corresponding MPS cores, $\mathcal{G}^{(k)} \in \mathbb{R}^{r_{k-1} \times I_k \times r_k}$, for $k = 1, \ldots, N$. The parameter count of the MPS cores $\{\mathcal{G}^{(k)}\}$ is $\sum_{k=1}^{N} |\mathcal{G}^{(k)}| = \sum_{k=1}^{N} r_{k-1} I_k r_k$, where $|\cdot|$ denotes the parameter count.
A more detailed explanation of individual token embedding compression is given in Algorithm 1, where $\|\cdot\|_F$ denotes the Frobenius norm. Although the embedding vector is reshaped into a tensor, the decomposition along each mode of this tensor is still based on the matrix-level SVD (line 2). The complexity of TT-SVD can thus be derived from SVD and its variants, such as truncated SVD (Oseledets, 2011). Given the vocabulary size $V$, the parameters of the embedding layer are compressed from $Vd$ to $V \sum_{k=1}^{N} r_{k-1} I_k r_k$, and the compression ratio is $\eta_{\mathrm{TTD}} = \frac{d}{\sum_{k=1}^{N} r_{k-1} I_k r_k} - 1$. The computation and memory complexities of all the above processes are summarized in Table. 2.
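Following the formulas above, the per-token parameter count and the compression ratio for the running $d = 27$, all-ranks-1 example can be checked in a few lines (`tt_param_count` is an illustrative helper name):

```python
# Parameter count of the MPS cores and the compression ratio
# eta_TTD = d / (sum_k r_{k-1} * I_k * r_k) - 1, as defined above.
def tt_param_count(dims, ranks):
    # ranks = [r_0, r_1, ..., r_N] with r_0 = r_N = 1
    return sum(ranks[k] * dims[k] * ranks[k + 1] for k in range(len(dims)))

d, dims, ranks = 27, (3, 3, 3), (1, 1, 1, 1)  # the running example
params = tt_param_count(dims, ranks)          # 3 + 3 + 3 = 9
eta = d / params - 1
print(params, eta)  # 9 2.0
```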
Energy Consumption Analysis. Recall that in Section 2.2 we introduced Remark 2.1 to guide the choice between memory and computation for the same functionality from the perspective of energy cost. Based on Remark 2.1 and Table. 2, we can give an initial estimate of the energy cost when the SLM processes an input token (before the decoder only), similar to (Yang et al., 2017). Assuming the same operating environment and other conditions (e.g. temperature), let the memory energy cost per float32 be $\nu$ and the computation energy cost per float32 be $\tau$; all model weights are represented in float32.
When inputting a text of length $l$, denote the original model's energy cost for memory as $\mathcal{E}_{\nu}$ and its energy cost for computation as $\mathcal{E}_{\tau}$,
$$
\begin{array} { r } { \mathcal { E } _ { \nu } = \nu ( d V + l d ) , \quad \mathcal { E } _ { \tau } = 0 , } \end{array}
$$
and after compression, the energy costs are
$$
\mathcal{E}_{\nu}^{\prime} = \nu (V N I r^{2} + l N I r^{2} + l d), \quad \mathcal{E}_{\tau}^{\prime} = \tau N I r^{2}.
$$
Denoting the SVD rank as $k$, the energy costs after compressing with matrix-based SVD are
$$
\begin{array}{l} \mathcal{E}_{\nu}^{\prime\prime} = \nu \left[ k (V + 2d + l + 1) + l d \right], \\ \mathcal{E}_{\tau}^{\prime\prime} = \tau (2 l d k - l d + k d). \end{array}
$$
Therefore, we have the ratio of inference energy $\omega$ between the compressed and uncompressed language models. Denote $\omega_{\mathrm{TT}} = \frac{\mathcal{E}_{\nu}^{\prime} + \mathcal{E}_{\tau}^{\prime}}{\mathcal{E}_{\nu} + \mathcal{E}_{\tau}}$ as the ratio for TensorSLM, and $\omega_{\mathrm{SVD}} = \frac{\mathcal{E}_{\nu}^{\prime\prime} + \mathcal{E}_{\tau}^{\prime\prime}}{\mathcal{E}_{\nu} + \mathcal{E}_{\tau}}$ as the ratio for SVD. We give the estimated values of $\omega_{\mathrm{TT}}$ and $\omega_{\mathrm{SVD}}$ in Section 5 according to the hyperparameters of the investigated open-source SLMs.
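These ratios can be sketched directly from the expressions above. The values of $\nu$, $\tau$ and the hyperparameters below are illustrative placeholders, not the paper's measured constants; with these toy numbers (memory cost per float32 dominating compute cost), $\omega_{\mathrm{TT}}$ comes out close to one half:

```python
# Estimated inference-energy ratios omega_TT and omega_SVD from the
# expressions above. nu, tau and all hyperparameters are placeholders.

def omega_tt(V, d, l, N, I, r, nu, tau):
    """Compressed (TensorSLM) vs. uncompressed inference energy."""
    e_nu, e_tau = nu * (d * V + l * d), 0.0
    e_nu_c = nu * (V * N * I * r**2 + l * N * I * r**2 + l * d)
    e_tau_c = tau * N * I * r**2
    return (e_nu_c + e_tau_c) / (e_nu + e_tau)

def omega_svd(V, d, l, k, nu, tau):
    """Matrix-based SVD baseline vs. uncompressed inference energy."""
    e_nu, e_tau = nu * (d * V + l * d), 0.0
    e_nu_c = nu * (k * (V + 2 * d + l + 1) + l * d)
    e_tau_c = tau * (2 * l * d * k - l * d + k * d)
    return (e_nu_c + e_tau_c) / (e_nu + e_tau)

# Toy setting with d = I**N and memory more "expensive" than compute.
print(omega_tt(V=50000, d=1000, l=128, N=3, I=10, r=4, nu=1.0, tau=0.1))
print(omega_svd(V=50000, d=1000, l=128, k=64, nu=1.0, tau=0.1))
```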
# 4.2. Language Model Inference Process with the Compressed Embeddings
The original inference process with embedding vectors is as follows: when the encoded texts (separated as tokens) are forwarded to the embedding layer, the embedding layer outputs the embedding vectors according to the input tokens; the embedding layer here acts like a look-up table. The embedding vectors are then forwarded to the hidden layers of the transformer, whose size is the same as the dimension of the embedding vectors. Thus, if there is no internal change
Figure 2. (a) Perplexity-compression trade-off by model size (GPT-2 and CerebrasGPT). (b) Comparison of different low-rank approaches (Tucker decomposition vs. Tensor-Train Decomposition). (c) Classification accuracy, (d) precision, (e) recall, and (f) F1-score versus compression ratio for the CerebrasGPT models. (g) Zero-shot scores of OPT models (125M and 1.3B) on ARC-c, HellaSwag, BoolQ, and WinoGrande, comparing matrix-based SVD, SliceGPT, and vector-based TensorSLM (ours). (h) Inference energy costs of different models.
in the hidden layers, the dimension of the embedding vectors should match the dimension of the hidden layers. The compressed embeddings should therefore be reconstructed to the original dimension to enable the forward pass. This inference happens in the application phase shown in the upper right of Figure. 1b.
Thus, just before the embedding vectors are forwarded to the hidden layers, the memory usage increases from $l \sum_{k=1}^{N} r_{k-1} I_k r_k$ to $ld$. However, the vocabulary size $V$ is normally much larger than the number of input tokens $l$, i.e. $V \gg l$. Thus our approach can still significantly reduce memory usage if the embedding layer accounts for a significant part of the whole model's parameters. The reconstruction process follows the tensor contraction in Eq. (7), turning the TT cores $\{\mathcal{G}^{(k)}\}$ into an order-$N$ tensor $\mathcal{X}$ according to Eq. (1), and then vectorizing $\mathcal{X}$ into a full-size embedding vector according to Appx. B.1.
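The reconstruction can be sketched as a chain of tensor contractions followed by flattening; this is a minimal NumPy sketch, with a rank-1 toy example mirroring the $r = 1$ case from Section 4.1:

```python
import numpy as np

def tt_reconstruct(cores):
    """Contract TT cores G^(k) (each r_{k-1} x I_k x r_k) back into the
    full tensor, then flatten to the original embedding vector."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(T.ndim - 1, 0))
    return T.reshape(-1)  # boundary ranks r_0 = r_N = 1 vanish on flattening

# Rank-1 toy example (all TT ranks equal to 1, as in the d = 27 case above).
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 1.0, 2.0])
c = np.array([2.0, 0.0, 1.0])
cores = [a.reshape(1, 3, 1), b.reshape(1, 3, 1), c.reshape(1, 3, 1)]
x = tt_reconstruct(cores)
print(x.shape)  # (27,)
```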
# 5. Experimental Evaluation
Our comprehensive experimental evaluation covers compression ratio, language task performance changes, runtime (flops and latency), and energy consumption.
# 5.1. Changes of Language Task Performance
Perplexity-compression Trade-off. In most cases, shrinking the model size degrades language task performance (though there are exceptions, such as the accuracy improvement of CerebrasGPT-590M in Figure. 2c). A measure of this trade-off should capture how much model size is saved against how much language task performance is sacrificed. Here we give a simple measure for tasks evaluated with perplexity, $\Delta \lg \mathrm{PPL}(S, M)$, with the measurements on GPT-2 and CerebrasGPT shown in Figure. 2a. We found that larger model sizes achieve better trade-offs, with CerebrasGPT showing a smoother trend than GPT-2.
Table 2. Computation and memory complexity during the compression (Section 4.1) and inference (Section 4.2) of TensorSLM. $\mathcal{M}_{\mathrm{trans}}$ is the transformer module, $V$ denotes the vocabulary size, $d$ is the original token embedding dimension, and $l$ is the token number of the input text. For simplicity, the dimensions for each mode of the tensor and the TT rank are represented as $I$ and $r$.
Language Modelling. Because the number of combinations of tensor sizes and TT ranks explodes exponentially, we could not test all of them. However, we can still observe that, independent of the tensor orders and the models used for compression, significant language modelling performance loss tends to appear when the compression ratio exceeds $2.0\times$. We further compared our proposed approach with the Tucker decomposition in Figure. 2b, using the same tensorization strategy as in Section 4.1, and found that the adopted Tensor-Train Decomposition outperforms the Tucker decomposition in perplexity.
Sentiment Classification. The results of the sentiment classification task, shown in Figure. 2c to 2f, also indicate that the robustness of the larger-scale models (Cerebras-590M and Cerebras-1.3B) is better than that of the smaller models (Cerebras-111M and Cerebras-256M), similar to the trend in the language modelling tasks above. The compressed larger-scale models tend to outperform the original models in precision and F1-score, indicating that our compression improves the larger models' ability to recognise positive texts. In contrast, the smaller models tend to perform worse as the compression ratio increases.
Zero-shot Reasoning. Since SLMs are incapable of overly complex tasks, we only evaluate relatively simple reasoning tasks (e.g. those not involving multi-hop questioning, mathematics, or multilingual data); the results are shown in Figure. 2g. The bold numbers mark the cases that outperform the uncompressed models, or the best among all the compressed cases.
Our approach has a higher chance of achieving better average reasoning task scores than the SVD-based approach, which implies that our tensors are better at extracting implicit representations in small-size models than matrices. Moreover, in our evaluation, our approach generally scores higher than the SVD-based approach on ARC-Challenge and BoolQ. Both of these datasets are more unprompted and unconstrained than the other evaluated datasets, which implies that our approach may be better at such difficult, unconstrained reasoning tasks.
# 5.2. Latency
While TensorSLM significantly reduces the model parameters and can even improve language task performance, in practice it also introduces extra latencies: compression latency (Section 4.1) and inference latency (Section 4.2).
In our experimental evaluation, the typical induced latency for an input text was no more than 0.3 seconds, which is acceptable for edge applications. Due to space constraints, the comprehensive results and detailed analysis of the on-device latency evaluation are provided in Appx. G.
# 5.3. Energy Consumption
The estimated inference energy costs are shown in Figure. 2h. The Y-axis indicates the ratio between the inference energy costs of the compressed model and that of the uncompressed model; the lower, the better energy saving. For each language model, we select the compression case that has a similar language task performance according to Section 5.1.
We can observe that our approach is mostly better than the SVD-based approach. Furthermore, TensorSLM supports adaptivity in edge applications, while the SVD-based approach does not. | Small Language Models (SLMs, or on-device LMs) have significantly fewer parameters than Large Language Models (LLMs). They are typically deployed on low-end devices, like mobile phones and single-board computers. Unlike LLMs, which rely on increasing model size for better generalisation, SLMs designed for edge applications are expected to have adaptivity to the deployment environments and energy efficiency given the device battery life constraints, which are not addressed in datacenter-deployed LLMs. This paper addresses these two requirements by proposing a training-free token embedding compression approach using Tensor-Train Decomposition (TTD). Each pre-trained token embedding vector is converted into a lower-dimensional Matrix Product State (MPS). We comprehensively evaluate the extracted low-rank structures across compression ratio, language task performance, latency, and energy consumption on a typical low-end device, i.e. Raspberry Pi. Taking the sub-billion parameter versions of GPT-2/Cerebras-GPT and OPT models as examples, our approach achieves a comparable language task performance to the original model with around $2.0\times$ embedding layer compression, while the energy consumption of a single query drops by half. | [
"cs.CL",
"cs.LG",
"math.NA"
] |
# 1 Introduction
Language models reason and solve problems using language. What is the connection (and the integration) between their linguistic systems and their impressive reasoning abilities? To investigate this question, we run a suite of experiments to analyze how language models solve puzzles about diverse linguistic number systems. People represent numbers through language, using rule-based systems that are simultaneously linguistic and mathematical (Ifrah, 2000; Dehaene, 2011; Carey, 2004; Le Corre and Carey, 2007; Ionin and Matushansky, 2006; Hammarström, 2010; Comrie, 2011). Unlike most mathematical reasoning problems, where the mathematical operators are explicit, a numeral system contains implicit operations for describing numerals, and there is considerable variety in how this is done across the world's languages. For example, French vingt-neuf $(20 + 9)$, Bengali untirīsh $(30 - 1)$, Tamil irupatti onpatu $(2 \times 10 + (10 - 1))$, and Birom bākūrū bībā ná vɛ tùŋūn $(2 \times 12 + 5)$ all evaluate to the Hindu-Arabic numeral 29.
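The four constructions above can be checked directly as arithmetic once their implicit operators are spelled out (the operator structure is exactly what the languages leave unstated):

```python
# The four numeral constructions from the text, written with their
# operators made explicit; each evaluates to 29.
numerals = {
    "French vingt-neuf": 20 + 9,
    "Bengali untirish": 30 - 1,
    "Tamil irupatti onpatu": 2 * 10 + (10 - 1),
    "Birom (base-12 construction)": 2 * 12 + 5,
}
print(set(numerals.values()))  # {29}
```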
We investigate the capabilities of language models to solve puzzles about linguistic number systems, drawn from linguistics competitions (Linguistics Olympiads) where high-school students have to reason through data about unknown languages and explain the linguistic rules governing the data (Derzhanski and Payne, 2010). While language models approach human performance on several language-based benchmarks (Hendrycks et al., 2020; Kojima et al., 2022; Beguš et al., 2023), and recent reasoning models deliberately optimized for logical and mathematical reasoning show remarkable performance improvements for many structured mathematical reasoning tasks (Zhong et al., 2024; Jaech et al., 2024), LLMs perform extremely poorly at solving linguistic-mathematical puzzles about systems of numbers in different languages (Derzhanski and Veneva, 2018; Bean et al., 2024).
Why do language models fail to solve these problems at the intersection of language and math — what specifically causes this failure? And how much of this failure is due to the linguistic vs. the mathematical aspects of the problem?
We present a method to systematically isolate individual parameters of number construction and combination and investigate how they affect language model performance. We establish that most individual mathematical features (like base) do not hinder the ability of sufficiently advanced language models to solve such problems. However, unless the mathematical operations in a problem are made explicit through familiar symbols $(+, \times,$ etc.), models cannot consistently solve the problem. This indicates that, at least within the domain of linguistic-mathematical problems, models cannot infer the compositional structure of numerals the way humans can, nor form sufficiently abstract notions such as operators. We discuss our findings in the broader context of human language, concluding that flexible, adaptive use of language across domains appears to remain challenging for LLMs.
# 2 Methods
Models. We used OpenAI o1-mini (Jaech et al., 2024) and DeepSeek-R1-distill-Qwen-7B (Guo et al., 2025) reasoning models to conduct our experiments, querying o1-mini via the API and running DeepSeek locally. We will publicly release all code and data used for our experiments.
Data. We obtained data for linguistics olympiad problems from two publicly available datasets: LingOly (Bean et al., 2024) and Linguini (Sánchez et al., 2024), filtering both datasets for problems tagged as “number systems”. After filtering, we had 15 problems from the LingOly and 8 problems from the Linguini dataset. Not every problem in the dataset could be standardized in the ways that our experiments required. The entire dataset was thus manually evaluated for suitable problems, and 10 problems were chosen for evaluation, all in distinct languages (see Appendix D). These problems spanned a range of difficulty from the first round of the UK Linguistics Olympiad to the International Linguistics Olympiad (most challenging).
# 3 Experiments
# 3.1 The effect of explicit operators in problems
Since so many of the mathematical operators in numeral structure are implicit (eg, in English we say ‘twenty three’ to mean ‘twenty + three’), our first experiment investigates how this implicit structure affects how models solve the problems. To do this, we standardize and convert the 10 existing linguistic number system problems into mathematical problems, and vary how explicit the operators are, as shown in Table 1.
First, we standardize all problems to control for model tokenization and task-external knowledge effects: we identify all meaningful morphemes, standardize all phonological changes, and replace them with dummy words as described in detail in Appendix A. This standardized version of each problem is what we call the IMPLICIT setting, since the mathematical operations are largely implicit, as they are in language. Taking these IMPLICIT problems as our baselines, we then make the operators explicit in three ways: 1) as the familiar mathematical operator symbols that perform the operation (eg, “+” for addition), 2) as symbols that are unfamiliar for performing that operation, and 3) as whole words sampled from the tokenizer. A full example prompt with a puzzle in four variations is provided in Appendix B.
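The four operator conditions can be sketched as simple string templates over one clue; the dummy morphemes “A B”, the unfamiliar symbol `@`, and the random word `blorp` are illustrative placeholders, not the actual stimuli:

```python
# Sketch of the four operator conditions for a single clue ("A B = 23",
# i.e. twenty + three rendered with dummy morphemes).
def make_variants(parts, value, familiar="+", unfamiliar="@", word="blorp"):
    def joined(sep):
        return f"{sep.join(parts)} = {value}"
    return {
        "IMPLICIT": joined(" "),
        "EXPLICIT_FAMILIAR": joined(f" {familiar} "),
        "EXPLICIT_UNFAMILIAR": joined(f" {unfamiliar} "),
        "EXPLICIT_WORD": joined(f" {word} "),
    }

v = make_variants(["A", "B"], 23)
print(v["IMPLICIT"])           # A B = 23
print(v["EXPLICIT_FAMILIAR"])  # A + B = 23
```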
We present our results in Figure 1. In all cases, making operations explicit with familiar symbols yields significant improvements over the default IMPLICIT condition (o1-mini performs at ceiling). In the multi-character setting (more linguistic), models perform better on average in the IMPLICIT condition than with an explicit operator given as an unfamiliar random word (vid. Figure 7). It is likely harder to differentiate between function words (operators) and number words (numerals) in such a setting; this finding is consistent with work showing that human solvers also find a problem more difficult when the operator word is explicit but unfamiliar (Derzhanski and Veneva, 2018). Overall, our results demonstrate that models struggle to reason about the abstract possibility that linguistic numerals might contain operators unless the operators are explicitly provided using familiar symbols.
Figure 1: Making operators explicit significantly improves performance. Results for explicit operator experiments, for the single-character variable case (for the results on multi-character variables, see Appendix B Figure 7). Making operators explicit shows performance improvement over the IMPLICIT condition, but this is only substantially and reliably the case when the operator is made explicit with a familiar symbol like “+”. Error bars = standard error of the mean. 10 problems, 5 iterations per problem.
# 3.2 Providing contextual information
Our first experiment showed that in the absence of problem-specific instructions, when given a linguistic-mathematical problem directly, LLMs struggle to solve it unless the operations are both explicit and familiar. This leaves open the question of whether providing additional problem-specific information would affect the model performance. We thus modulate the context of the problem in three different ways. We query the same four problem variants as described in Table 1, additionally providing the following contextual information:
Language: “Here is a puzzle based on numbers in the {language} language."
Figure 2: Language and base information only helps in the IMPLICIT case. Effect of adding language or numeral base information, plotted as a difference from the baseline values in Figure 1 for o1-mini. In cases with explicit operators, conflating overtly mathematical and linguistic information appears to confuse the models.
Base: “Here is a puzzle based on numbers in a language that uses a base-{n} numeral system."
Implicit operations: “Here is a puzzle based on numbers in a language. In this language, numbers may be constructed through implicit operations like addition (twenty-nine $= 20 + 9$) or multiplication (five hundred $= 5 \times 100$)." [only for IMPLICIT condition]
We compare these to the baseline results from Section 3.1 for o1-mini, presenting our results in Figure 2. In cases other than the implicit operator condition, the model seems to recognize the problem as requiring a more mathematical kind of reasoning, so providing linguistic information seems to confuse the model and average performance is worse. However, in the implicit operator (A B) condition, model performance improves significantly, perhaps because the setting of the problem is less overtly mathematical. In Figure 3, we show that providing information about the implicit reasoning needed is not as significant a boost as activating knowledge about the specific language.
Figure 3: Extra information improves performance on IMPLICIT problems (A B). Information about implicitness is helpful, but not as much as more direct information like the problem language. Error bars = standard error of the mean. 5 iterations / problem.
Figure 4: Example of full minimal pair template problem, for the Order parameter, where we varied whether digits are read left-to-right or right-to-left.
# 3.3 Ablations: constructed minimal-pair problems
In order to ensure that it is the difference in operators (as opposed to other features of the numeral system) that explains the models’ inability to solve these problems, we performed an ablation study to test whether models could handle other aspects of numeral construction and combination. Our experiment is inspired by the notion of a linguistic minimal pair, a pair of linguistic items that differ in exactly one meaningful element. We construct minimal pairs of simple, synthetic number system problems, where every element is the same except for one specific parameter that differs between two paired problems. We tested five major parameters of numeral systems, as described in Table 2.
L→R: A B = 51; A C = 57; D C = ?? R→L: B A = 51; C A = 57; C D = ??
In all cases, GPT-4 and more advanced models could solve the template problems. It thus appears that most basic “building blocks” of number systems (e.g. the base of the system, the order of numerals, etc.) do not affect model performance in isolation, yet the models consistently fail to solve number problems that involve constructing and combining complex numerals.
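A minimal-pair template for the Order parameter (digits read left-to-right vs. right-to-left, as in Figure 4) could be generated along these lines; the clue values mirror Figure 4, and the helper name is illustrative:

```python
# Minimal-pair sketch for the Order parameter: the same clues rendered
# left-to-right vs right-to-left.
def render_clues(pairs, order="LR"):
    clues = []
    for words, value in pairs:
        ws = words if order == "LR" else list(reversed(words))
        clues.append(f"{' '.join(ws)} = {value}")
    return clues

pairs = [(["A", "B"], 51), (["A", "C"], 57)]
print(render_clues(pairs, "LR"))  # ['A B = 51', 'A C = 57']
print(render_clues(pairs, "RL"))  # ['B A = 51', 'C A = 57']
```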
Table 2: Minimal pair results: GPT-4 and o1-mini solve all paradigms, GPT-3.5-turbo struggles with numeral base and combination. Further data on testing all bases 4-19 linked in Table 5. | Across languages, numeral systems vary widely in how they construct and combine numbers. While humans consistently learn to navigate this diversity, large language models (LLMs) struggle with linguistic-mathematical puzzles involving cross-linguistic numeral systems, which humans can learn to solve successfully. We investigate why this task is difficult for LLMs through a series of experiments that untangle the linguistic and mathematical aspects of numbers in language. Our experiments establish that models cannot consistently solve such problems unless the mathematical operations in the problems are explicitly marked using known symbols ($+$, $\times$, etc, as in "twenty + three"). In further ablation studies, we probe how individual parameters of numeral construction and combination affect performance. While humans use their linguistic understanding of numbers to make inferences about the implicit compositional structure of numerals, LLMs seem to lack this notion of implicit numeral structure. We conclude that the ability to flexibly infer compositional rules from implicit patterns in human-scale data remains an open challenge for current reasoning models. | [
"cs.CL",
"cs.AI"
] |
# 1 Introduction
Nearest neighbor search (NNS) over the vector data has been widely used in various real-world applications, such as embedding-based retrieval (EBR) for search engines and recommendation systems [12, 27, 38, 59, 61, 65], retrieval-augmented generation (RAG) for large language models (LLMs) [31, 42, 43, 56, 103, 104], and many other cross-disciplinary usages [1, 9, 16, 34, 52, 53]. Typically, unstructured data (e.g., images, texts, or audio) are first encoded into high-dimensional feature vectors using embedding models (e.g., CNN [37, 55, 85], Transformer [13, 25, 90], or VGGish), and then nearest neighbor search is performed by retrieving the nearest vectors to a query vector based on a vector similarity function. To alleviate the prohibitively high computational complexity of exact NNS, approximate nearest neighbor search (ANNS) has been proposed to relax the requirement for exactness while significantly improving the search efficiency. ANNS is typically implemented using a well-designed index, which can be hash-based [17, 32, 39, 51, 54], tree-based [11, 21, 23, 57, 74], quantization-based [19, 49, 69, 70, 75], or graph-based [29, 30, 67, 68, 79, 86, 101].
Filtered nearest neighbor search (FNNS) over the vector-scalar hybrid data extends NNS to retrieving only the nearest vectors among those that satisfy a given scalar filter. It has attracted increasing attention over the past decade. In the context of EBR, users on an e-commerce platform may need to find products most similar to an item in a given image, while applying a filter on scalars like color or price [96]. For RAG, in order to ensure the freshness of responses from LLMs, time-aware RAG can be achieved by assigning different weights to document timestamps during retrieval [31]. In the case of industrial-scale knowledge graphs (KGs) such as Saga [44], users can find related entities via KG embeddings, while also specifying the entity type and the values they contain [73]. In the literature, there has been a surge of studies [15, 28, 33, 35, 73, 78, 91, 93, 96– 100, 102, 105, 107] focused on solving the problem of filtered approximate nearest neighbor search (FANNS) over the vector-scalar hybrid data.
# 1.1 Motivation
While a number of survey papers have been published on ANNS over the vector data [6, 14, 60, 77, 92, 94], there is currently no survey on FANNS over the vector-scalar hybrid data. Below, we outline three key reasons (R1-R3) that highlight the need for such a survey.
R1: Inconsistent definitions of the search problem. In FANNS, datasets and queries containing both vectors and scalars are referred to as hybrid datasets and hybrid queries, respectively. However, the definitions of these terms vary across studies. For hybrid datasets, some define scalars as simple numbers [28, 35, 99, 107], some as collections of labels [15, 33], and others as values within a schema [73, 78, 91, 93, 96–98, 100, 102, 105] similar to that in a relational database. For hybrid queries, some define scalar filters as equality comparisons [15, 33, 93, 97, 98], some as range comparisons [28, 99, 107], and others as general operations [35, 73, 78, 91, 96, 100, 102, 105]. Additionally, there is inconsistency in how evaluation metrics are defined. For example, some studies define the selectivity as the proportion of data points that do not satisfy the scalar filter [91, 96], while others define it as the proportion of data points that do [73, 78, 102]. The latter definition is also referred to as the specificity in some studies [15, 33].
R2: Insufficient framework for algorithm classification. The current framework classifies FANNS algorithms based on when the scalar filter is applied, typically into three categories: pre-filtering, post-filtering, and in-filtering [28, 33, 35, 73, 78, 99], which correspond to removing data points that do not satisfy the filter before, after, or during the ANNS. However, this framework has two major shortcomings. First, it does not cover all algorithms. For example, some algorithms [28, 73] first apply part of the scalar filter to identify relevant data partitions, then perform ANNS within them, and finally apply the complete filter. In this case, the filter is applied in two stages, which fails to fit any of the aforementioned three categories. Second, it is too coarse to distinguish between algorithms. For example, in graph-based indices, the filter can be applied during either result update [102] or neighborhood expansion [78], but only the latter avoids unnecessary vector similarity calculations. Although both are classified as in-filtering, their effects differ significantly.
R3: Incomplete analysis of query difficulty. In the context of ANNS, considerable efforts have been made to understand query difficulty through factors such as relative contrast [5, 36], intrinsic dimensionality [4, 5, 41], query expansion [3, 5], $\epsilon$-hardness [106], and Steiner-hardness [95]. In contrast, for FANNS, the factors that impact the difficulty of hybrid queries remain underexplored. At present, selectivity is the only commonly used factor for evaluating hybrid query difficulty. Identifying more factors is essential not only for explaining algorithm performance fluctuations from various perspectives, but also for constructing evaluation benchmarks across a wider range of difficulty levels by combining them.
# 1.2 Our Contributions
Driven by the above discussion, we present a survey on FANNS over the vector-scalar hybrid data, covering its definitions, algorithms, datasets, and query difficulty. Below, we summarize our main contributions.
(1) More systematic definition of the search problem. For R1, we formally define the problem of FANNS by specifying the hybrid dataset, the hybrid query, and relevant evaluation metrics (section 2). Specifically, a hybrid dataset is defined as one in which each data point is a pair of a scalar-tuple following a scalar schema and a vector in a vector space. A hybrid query is defined as comprising a scalar filter, a vector similarity function, a query vector, and a target result size. Key evaluation metrics including recall and selectivity are defined with precise semantics to eliminate ambiguity.
(2) Finer-grained classification of FANNS algorithms. For R2, we propose a pruning-focused framework to classify FANNS algorithms (section 3). Our framework comprises four distinct strategies, each emphasizing either vector pruning or scalar pruning: vector-solely pruning (VSP), vector-centric joint pruning (VJP), scalar-centric joint pruning (SJP), and scalar-solely pruning (SSP). Building on this framework, we summarize existing FANNS algorithms using the unified terminology in our formal definitions, effectively classifying them and revealing their interrelationships. Our framework provides a broader and finer-grained classification compared to the existing one.
(3) Deeper analysis of query difficulty through a new factor. For R3, we examine existing hybrid datasets (section 4) and analyze the factors that impact the difficulty of hybrid queries (section 5). Specifically, we collect hybrid datasets used in existing FANNS studies, discuss their underlying construction strategies, and detail their main characteristics. Motivated by realistic scenarios, we identify the distribution factor, which refers to the relationship between the high-dimensional distributions of two sets of vectors. We verify the impact of the distribution factor on hybrid query difficulty by conducting a case study, offering qualitative explanations based on UMAP visualizations [72] and quantitative insights using the Wasserstein distance [66]. We finally propose a schema towards more comprehensive evaluation of FANNS algorithms combining both the selectivity and distribution factors.
# 2 Preliminaries
In this section, we first formally define the hybrid dataset, the hybrid query, and the corresponding evaluation metrics. Then, we outline representative ANNS algorithms, which serve as the foundation for FANNS algorithms. Table 1 summarizes the notations used in this paper.
# 2.1 Definition of Hybrid Dataset
We begin with the definitions of scalar schema and vector space, followed by a formal definition of hybrid dataset.
Definition 1 (Scalar Schema) Let $\mathbb{S} = (\mathbb{S}_1, \mathbb{S}_2, \dots, \mathbb{S}_m)$ be a scalar schema containing $m$ scalars, where each scalar $\mathbb{S}_i$ has a simple data type, such as integer, float, or string.
Table 1 Summary of Notations
Definition 2 (Vector Space) Let V be a vector space, which is a set of vectors that are closed under vector addition and constant multiplication. In this paper, we specifically consider the $d$ -dimensional vector space over the field of real numbers, i.e., $\mathbb { V } = \mathbb { R } ^ { d }$ , where each vector is a $d$ -dimensional tuple of real numbers.
Definition 3 (Hybrid Dataset) Let $\mathcal{D} = \{\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_n\} = \{(\mathbf{s}_1, \mathbf{v}_1), (\mathbf{s}_2, \mathbf{v}_2), \ldots, (\mathbf{s}_n, \mathbf{v}_n)\}$ be a hybrid dataset containing $n$ data points, where each data point $\mathbf{p}_i = (\mathbf{s}_i, \mathbf{v}_i)$ is a pair of a scalar-tuple $\mathbf{s}_i \in \mathbb{S}$ (also denoted as $\mathbf{p}_i.\mathbf{s}$) and a vector $\mathbf{v}_i \in \mathbb{R}^d$ (also denoted as $\mathbf{p}_i.\mathbf{v}$). Each scalar-tuple $\mathbf{s}_i = (s_{i,1}, s_{i,2}, \ldots, s_{i,m})$ is a value in $\mathbb{S}$, where each $s_{i,j}$ is a scalar value in $\mathbb{S}_j$ that can either take a value of the corresponding data type or be NULL. Each vector $\mathbf{v}_i = (v_{i,1}, v_{i,2}, \ldots, v_{i,d})$ is a value in $\mathbb{R}^d$, where each $v_{i,j}$ is a real number in $\mathbb{R}$.
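For concreteness, a hybrid data point and dataset per Definitions 1-3 might be modeled as follows. This is an illustrative sketch of our own; the schema names (`color`, `price`) are hypothetical and not from the survey.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Scalar schema S = (S_1, S_2) with simple types (string, float); names are illustrative.
SCHEMA = ("color", "price")

@dataclass(frozen=True)
class DataPoint:
    """A hybrid data point p = (s, v): a scalar-tuple and a d-dimensional vector."""
    s: Tuple[Optional[str], Optional[float]]  # scalar values may be NULL (None)
    v: Tuple[float, ...]                      # a vector in R^d

# A tiny hybrid dataset D = {p_1, p_2, p_3} with d = 2.
D = [
    DataPoint(("red", 9.99), (0.1, 0.2)),
    DataPoint(("blue", 4.50), (0.9, 0.8)),
    DataPoint((None, 7.25), (0.4, 0.4)),  # a NULL scalar value is allowed
]
```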
# 2.2 Definition of Hybrid Query
We proceed to define the corresponding scalar filter and vector similarity function, along with a formal definition of hybrid query.
Definition 4 (Scalar Filter) Let $f_s : \mathbb{S} \to \{0, 1\}$ be a scalar filter that evaluates the scalar-tuple of a data point $\mathbf{p}$ to either false ($f_s(\mathbf{p}.\mathbf{s}) = 0$) or true ($f_s(\mathbf{p}.\mathbf{s}) = 1$). $\mathcal{D}_{f_s} = \{\mathbf{p} \in \mathcal{D} \mid f_s(\mathbf{p}.\mathbf{s}) = 1\}$ is the filtered subset whose data points are taken from $\mathcal{D}$ and satisfy $f_s$.
A scalar filter can vary in complexity, ranging from simple constraints like binary comparisons to more complex constraints like regular expressions. A scalar filter that allows arbitrary constraints is referred to as a general scalar filter. Unless otherwise specified, a scalar filter is assumed to be a general scalar filter.
Some algorithms [28, 33, 93, 97–99, 107], discussed later, simplify the scalar filter for specialized applications. A simplified scalar filter consists of simple constraints that check whether a scalar equals a specific value (equality constraints) or falls within a value range (range constraints). A simplified scalar filter with only equality constraints is called a simplified equality scalar filter, while one with only range constraints is called a simplified range scalar filter. For ease of analysis, we express a simplified scalar filter in the disjunctive normal form (DNF):
$$
f _ { s } ( { \bf p . s } ) = \vee _ { i = 1 } ^ { t } \left( \bigwedge _ { j = 1 } ^ { m } f _ { i , j } ( { \bf p . } s _ { j } ) \right) \in \{ 0 , 1 \} ,
$$
where the $j$-th sub-filter $f_{i,j} : \mathbb{S}_j \to \{0, 1\}$ imposes either no constraint or a simple constraint on the $j$-th scalar $\mathbf{p}.s_j \in \mathbb{S}_j$. A sub-filter always evaluates to true when no constraint is imposed, and is referred to as an active sub-filter when a simple constraint is specified.
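A simplified scalar filter in DNF can be sketched directly from this formula: each conjunct ANDs its per-scalar sub-filters, and the disjunction ORs the conjuncts. The sketch below is our own illustration; the schema (`color`, `price`) and constraints are hypothetical.

```python
from typing import Callable, Sequence, Tuple

SubFilter = Callable[[object], bool]  # f_ij : S_j -> {0, 1}

def dnf_filter(conjuncts: Sequence[Tuple[SubFilter, ...]], s: Sequence[object]) -> bool:
    """Evaluate f_s(p.s) = OR_i ( AND_j f_ij(p.s_j) ) over a scalar-tuple s."""
    return any(all(f(sj) for f, sj in zip(conj, s)) for conj in conjuncts)

no_constraint = lambda _: True  # an inactive sub-filter always evaluates to true

# Hypothetical filter on schema (color, price): (color == "red") OR (price < 5).
conjuncts = [
    (lambda c: c == "red", no_constraint),  # equality constraint on scalar 1
    (no_constraint, lambda p: p < 5),       # range constraint on scalar 2
]
print(dnf_filter(conjuncts, ("red", 9.99)))   # satisfied by the first conjunct
print(dnf_filter(conjuncts, ("blue", 9.99)))  # satisfies neither conjunct
```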
Definition 5 (Vector Similarity Function) Let $f _ { v }$ : $\mathbb { R } ^ { d } \times \mathbb { R } ^ { d } \to \mathbb { R }$ be a vector similarity function that measures the similarity between the vectors of two data points $\mathbf { p } _ { x }$ and $\mathbf { p } _ { y }$ to a real number $f _ { v } ( \mathbf { p } _ { x } . \mathbf { v } , \mathbf { p } _ { y } . \mathbf { v } ) \in \mathbb { R }$ , where smaller values indicate higher similarity.
Given two vectors $\mathbf { p } _ { x } . \mathbf { v } = \left( v _ { x , 1 } , v _ { x , 2 } , . . . , v _ { x , d } \right)$ and $\mathbf { p } _ { y } . \mathbf { v } = ( v _ { y , 1 } , v _ { y , 2 } , . . . , v _ { y , d } )$ in $\mathbb { R } ^ { d }$ , the form of a vector similarity function $f _ { v }$ can vary depending on the specific application.
A common approach is to directly define the vector similarity function using a distance function, which obeys the properties of non-negativity, identity, symmetry, and the triangle inequality. For example, the Euclidean distance:
$$
d ( \mathbf { p } _ { x } . \mathbf { v } , \mathbf { p } _ { y } . \mathbf { v } ) = \sqrt { \sum _ { i = 1 } ^ { d } ( v _ { x , i } - v _ { y , i } ) ^ { 2 } } \in [ 0 , + \infty ) ,
$$
can serve directly as a vector similarity function, i.e., $f _ { v } ( \cdot , \cdot ) = d ( \cdot , \cdot )$ . A smaller Euclidean distance value indicates that two vectors are closer in the vector space, implying higher similarity.
Alternatively, a non-distance function can be used to indirectly define the vector similarity function. For example, the inner product:
$$
\langle \mathbf { p } _ { x } . \mathbf { v } , \mathbf { p } _ { y } . \mathbf { v } \rangle = \sum _ { i = 1 } ^ { d } v _ { x , i } v _ { y , i } \in \mathbb { R } ,
$$
can be transformed into a vector similarity function by taking its negation, i.e., $f _ { v } ( \cdot , \cdot ) = - \langle \cdot , \cdot \rangle$ . A larger inner product value (smaller in negative) indicates that two vectors are more aligned in direction, implying higher similarity.
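The two constructions of $f_v$ above can be sketched as follows: the Euclidean distance is used directly, while the inner product is negated so that smaller values still indicate higher similarity (a minimal illustration, not from the survey).

```python
import math
from typing import Sequence

def euclidean(x: Sequence[float], y: Sequence[float]) -> float:
    """d(x, y): a distance function used directly as f_v (smaller = more similar)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def neg_inner_product(x: Sequence[float], y: Sequence[float]) -> float:
    """-<x, y>: the inner product negated so that smaller still means more similar."""
    return -sum(a * b for a, b in zip(x, y))
```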
Definition 6 (Hybrid Query) Let $q = \{ f _ { s } , f _ { v } , \mathbf { v } _ { q } , k \}$ be a hybrid query, which contains a scalar filter $f _ { s }$ , a vector similarity function $f _ { v }$ , a query vector $\mathbf { v } _ { q } \in \mathbb { R } ^ { d }$ , and a target result size $k$ .
Given a hybrid dataset $\mathcal { D }$ and a hybrid query $q$, the problem of FNNS is to find a result set $\mathcal { R } \subseteq { \mathcal { D } } _ { f _ { s } }$ containing $\operatorname* { m i n } ( k , | \mathcal { D } _ { f _ { s } } | )$ data points whose vectors are the most similar to $\mathbf { v } _ { q }$ under $f _ { v }$ when only considering vectors in $\mathcal { D } _ { f _ { s } }$. This can be formally expressed as:
$$
\mathcal{R} = \arg \min_{\mathcal{S} \subseteq \mathcal{D}_{f_s},\; |\mathcal{S}| = \min(k, |\mathcal{D}_{f_s}|)} \sum_{\mathbf{p} \in \mathcal{S}} f_v(\mathbf{p}.\mathbf{v}, \mathbf{v}_q).
$$
FANNS relaxes FNNS by allowing a small number of errors in the result set, denoted $\tilde { \mathcal { R } }$ , thereby trading accuracy for efficiency. The accuracy of FANNS is typically measured by recall, as defined in subsection 2.3. It is worth noting that when the scalar filter $f _ { s }$ always evaluates to true, FNNS and FANNS reduce to traditional NNS and ANNS, respectively.
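The FNNS definition above can be realized by a brute-force sketch (our own, for illustration): restrict to the filtered subset $\mathcal{D}_{f_s}$, then take the $\min(k, |\mathcal{D}_{f_s}|)$ points with the smallest $f_v$ values.

```python
import heapq
from typing import Callable, List, Sequence, Tuple

Point = Tuple[tuple, Tuple[float, ...]]  # (scalar-tuple s, vector v)

def fnns(D: Sequence[Point],
         f_s: Callable[[tuple], bool],
         f_v: Callable[[Sequence[float], Sequence[float]], float],
         v_q: Sequence[float],
         k: int) -> List[Point]:
    """Exact filtered k-NN: rank only the points in the filtered subset D_{f_s}."""
    D_fs = [p for p in D if f_s(p[0])]   # scalar filtering
    k_eff = min(k, len(D_fs))            # |R| = min(k, |D_{f_s}|)
    return heapq.nsmallest(k_eff, D_fs, key=lambda p: f_v(p[1], v_q))
```

Real FANNS indices avoid this linear scan; the sketch only pins down the semantics of the result set.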
# 2.3 Definitions of Evaluation Metrics
This subsection clarifies the semantics of key evaluation metrics, including recall and selectivity.
Definition 7 (Recall) Let recall@k be the recall that measures the accuracy of a hybrid query with a target result size $k$ , which is defined as follows:
$$
recall@k = \frac{|\mathcal{R} \cap \tilde{\mathcal{R}}|}{|\mathcal{R}|} \in [0, 1].
$$
In this equation, $\mathcal { R }$ denotes the ground truth result set obtained from FNNS, while $\tilde { \mathcal { R } }$ represents the result returned by a given query method, which may correspond to either FNNS or FANNS. The numerator, $| \mathcal { R } \cap \tilde { \mathcal { R } } |$ , indicates the number of ground truth results that are successfully retrieved. The denominator, $| \mathcal { R } |$ , reflects the total number of ground truth results. Note that $| \mathcal { R } | \leq k$ , as the filtered subset $\mathcal { D } _ { f _ { s } }$ may contain fewer than $k$ data points.
A larger recall indicates higher quality of the result. Specifically, the recall of FNNS is 1, and the recall of FANNS is between 0 and 1 due to its approximate nature. If multiple vectors have the same similarity to $\mathbf { v } _ { q }$ under $f _ { v }$, the result set may not be unique; in this case, distance-based recall variants are available [6].
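Definition 7 translates directly into code; a minimal sketch (ours) over result sets of point identifiers:

```python
from typing import Iterable

def recall_at_k(R: Iterable, R_tilde: Iterable) -> float:
    """recall@k = |R intersect R~| / |R|, with R the FNNS ground truth."""
    R, R_tilde = set(R), set(R_tilde)
    return len(R & R_tilde) / len(R)
```

For example, an approximate result retrieving 2 of 3 ground-truth points yields recall 2/3, and an exact result yields recall 1.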
Definition 8 (Selectivity) Let $s e l _ { f _ { s } }$ be the selectivity that measures the fraction of data points that do not satisfy the scalar filter $f _ { s }$ , which is defined as follows:
$$
sel_{f_s} = 1 - \frac{|\mathcal{D}_{f_s}|}{|\mathcal{D}|} \in [0, 1].
$$
In the existing literature, the definition of selectivity remains inconsistent. Some studies define it as the proportion of data points that do not satisfy the scalar filter [91, 96], while others define it as the proportion of data points that satisfy the filter [73, 78, 102]. In this work, we adopt the former definition, where higher selectivity indicates that a larger portion of the dataset is filtered out. This aligns with the intuitive understanding that a process is “more selective” when it excludes more candidates. The latter definition is more appropriately referred to as specificity [15, 33].
A scalar filter $f _ { s }$ is said to be selective if $s e l _ { f _ { s } }$ is close to 1, unselective if it is close to 0, and moderate if it lies in between.
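Definition 8 in code, as a minimal sketch (ours) over a list of scalar-tuples:

```python
from typing import Callable, Sequence

def selectivity(D: Sequence[tuple], f_s: Callable[[tuple], bool]) -> float:
    """sel_{f_s} = 1 - |D_{f_s}| / |D|: the fraction of points filtered out."""
    n_pass = sum(1 for s in D if f_s(s))
    return 1.0 - n_pass / len(D)

# With 1 of 4 scalar-tuples passing, the filter is fairly selective (sel = 0.75).
scalars = [("red",), ("blue",), ("green",), ("blue",)]
print(selectivity(scalars, lambda s: s[0] == "red"))
```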
Algorithm 1: Search over the IVF index

Input: $nlist$ centroids $\mathcal{C} = \{\mathbf{v}_{c_1}, \ldots, \mathbf{v}_{c_{nlist}}\}$ with inverted lists $\mathcal{L} = \{l_{c_1}, \ldots, l_{c_{nlist}}\}$, target number of visited lists $nprobe$, vector similarity function $f_v$, query vector $\mathbf{v}_q$, target result size $k$
Output: approximate query result $\tilde{\mathcal{R}}$
1 $\tilde{\mathcal{R}} \gets \{\}$; // Result set
2 $\mathcal{C}' \gets \{nprobe$ centroids closest to $\mathbf{v}_q$ in $\mathcal{C}\}$;
3 foreach centroid $\mathbf{v}_c \in \mathcal{C}'$ do
4 &nbsp;&nbsp; $l_c \gets$ inverted list of $\mathbf{v}_c$ from $\mathcal{L}$;
5 &nbsp;&nbsp; foreach vector $\mathbf{v} \in l_c$ do
6 &nbsp;&nbsp;&nbsp;&nbsp; $\mathbf{v}_f \gets \arg\max_{\mathbf{v}' \in \tilde{\mathcal{R}}} f_v(\mathbf{v}', \mathbf{v}_q)$;
7 &nbsp;&nbsp;&nbsp;&nbsp; if $|\tilde{\mathcal{R}}| < k$ or $f_v(\mathbf{v}, \mathbf{v}_q) < f_v(\mathbf{v}_f, \mathbf{v}_q)$ then
8 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\tilde{\mathcal{R}} \gets \tilde{\mathcal{R}} \cup \{\mathbf{v}\}$;
9 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if $|\tilde{\mathcal{R}}| > k$ then
10 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\mathbf{v}_f \gets \arg\max_{\mathbf{v}' \in \tilde{\mathcal{R}}} f_v(\mathbf{v}', \mathbf{v}_q)$;
11 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\tilde{\mathcal{R}} \gets \tilde{\mathcal{R}} \setminus \{\mathbf{v}_f\}$;
12 return $\tilde{\mathcal{R}}$;
Algorithm 2: Search over the graph index

Input: graph $G(\mathcal{V}, \mathcal{E})$, entry vector $\mathbf{v}_e$, vector similarity function $f_v$, query vector $\mathbf{v}_q$, target result size $k$
Output: approximate query result $\tilde{\mathcal{R}}$
1 $\tilde{\mathcal{R}} \gets \{\mathbf{v}_e\}$; // Result set
2 $\mathcal{C} \gets \{\mathbf{v}_e\}$; // Candidate set
3 while $|\mathcal{C}| > 0$ do
4 &nbsp;&nbsp; $\mathbf{v}_c \gets \arg\min_{\mathbf{v}' \in \mathcal{C}} f_v(\mathbf{v}', \mathbf{v}_q)$;
5 &nbsp;&nbsp; $\mathcal{C} \gets \mathcal{C} \setminus \{\mathbf{v}_c\}$;
6 &nbsp;&nbsp; foreach $\mathbf{v} \in$ neighborhood($\mathbf{v}_c$) do
7 &nbsp;&nbsp;&nbsp;&nbsp; $\mathbf{v}_f \gets \arg\max_{\mathbf{v}' \in \tilde{\mathcal{R}}} f_v(\mathbf{v}', \mathbf{v}_q)$;
8 &nbsp;&nbsp;&nbsp;&nbsp; if $|\tilde{\mathcal{R}}| < k$ or $f_v(\mathbf{v}, \mathbf{v}_q) < f_v(\mathbf{v}_f, \mathbf{v}_q)$ then
9 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\mathcal{C} \gets \mathcal{C} \cup \{\mathbf{v}\}$;
10 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\tilde{\mathcal{R}} \gets \tilde{\mathcal{R}} \cup \{\mathbf{v}\}$;
11 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if $|\tilde{\mathcal{R}}| > k$ then
12 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\mathbf{v}_f \gets \arg\max_{\mathbf{v}' \in \tilde{\mathcal{R}}} f_v(\mathbf{v}', \mathbf{v}_q)$;
13 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\tilde{\mathcal{R}} \gets \tilde{\mathcal{R}} \setminus \{\mathbf{v}_f\}$;
14 return $\tilde{\mathcal{R}}$;
# 2.4 Background of ANNS Algorithms
Early research on ANNS primarily focused on hash-based [17, 32, 39, 51, 54] and tree-based [11, 21, 23, 57, 74] indices, as they are natural extensions of traditional indexing structures in relational databases to high-dimensional spaces. However, despite their favorable theoretical complexity, these methods fail to scale effectively when the vector dimensionality exceeds 10 [26], largely due to the “curse of dimensionality” [45].
In recent years, the focus of ANNS has shifted toward quantization-based [19, 49, 69, 70, 75] and graph-based [29, 30, 67, 68, 79, 86, 101] indices, which have demonstrated significantly better empirical performance under different efficiency-accuracy trade-offs [26]. As a result, almost all existing FANNS algorithms are built upon adaptations of these two index structures. Therefore, we briefly introduce the IVF index and the graph index as representative ANNS methods to facilitate the subsequent discussion of the FANNS algorithms.
IVF index. The inverted file (IVF) index is a form of quantization-based indices. It first trains a quantizer, typically using k-means clustering [75], to partition the vector space $\mathbb { R } ^ { d }$ into $nlist$ clusters represented by their centroids, denoted $\mathcal { C } = \{ \mathbf { v } _ { c _ { 1 } } , \mathbf { v } _ { c _ { 2 } } , \ldots , \mathbf { v } _ { c _ { n l i s t } } \}$, where each centroid $\mathbf { v } _ { c _ { i } }$ is a value in $\mathbb { R } ^ { d }$. Each vector in the dataset is then assigned to its nearest centroid, and an inverted list is created for each cluster to store references to the vectors within it. Together, the centroids $\mathcal { C }$ and their corresponding inverted lists $\mathcal { L }$ form the IVF index. For more details, please refer to [26].
During the search (algorithm 1), the traversal is performed within the $nprobe$ clusters whose centroids are closest to the query vector (line 2), and all the vectors within these selected clusters are scanned (lines 3-5) to find the $k$ nearest neighbors of the query vector ${ \mathbf { v } } _ { q }$ (lines 6-11). Optionally, compression techniques such as product quantization [49] or additive quantization [19, 69, 70] can be applied to vectors to save memory and accelerate the search process.
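The IVF scan can be sketched in Python as follows. This is a simplified, illustrative implementation (no quantization; candidates are collected and the best $k$ selected at the end rather than maintained incrementally as in lines 6-11 of algorithm 1):

```python
import heapq
from typing import Callable, Dict, List, Tuple

Vector = Tuple[float, ...]

def ivf_search(centroids: List[Vector],
               inverted_lists: Dict[int, List[Vector]],
               nprobe: int,
               f_v: Callable[[Vector, Vector], float],
               v_q: Vector,
               k: int) -> List[Vector]:
    """Scan the nprobe inverted lists whose centroids are closest to v_q."""
    # Line 2: pick the nprobe centroids closest to the query vector.
    probe = heapq.nsmallest(nprobe, range(len(centroids)),
                            key=lambda i: f_v(centroids[i], v_q))
    # Lines 3-5: scan every vector in the selected lists.
    candidates = [v for i in probe for v in inverted_lists[i]]
    # Keep the k most similar vectors (smaller f_v = more similar).
    return heapq.nsmallest(k, candidates, key=lambda v: f_v(v, v_q))
```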
Graph Index. In a graph index, each vector is represented as a vertex, with edges linking vertices whose vectors are similar. Together, the vertex set $\mathcal { V }$ and the edge set $\mathcal { E }$ form the graph index $G ( \mathcal { V } , \mathcal { E } )$. The index construction typically follows one of three strategies: incremental construction, refinement, or divide-and-conquer. For more details, please refer to [92].
During search (algorithm 2), the traversal follows a greedy routing strategy, beginning with an entry vector ${ \bf v } _ { e }$ and gradually expanding the neighbors towards the query vector ${ \mathbf { v } } _ { q }$. This process involves repeated steps of neighborhood expansion (lines 4-6) and result update (lines 7-13). Optionally, several optimizations can be applied to enhance the search process, such as maintaining a visited set [29, 30, 67, 68, 86, 101] to avoid redundant calculations, expanding the result set to include more than $k$ vectors and retaining only the top-$k$ upon returning [29, 30, 67, 68, 86, 101] to improve the accuracy, and utilizing hierarchical graph structures [68, 101] to reduce search time.
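The greedy routing can be sketched in Python as follows. This is an illustrative implementation of ours that includes the optional visited-set optimization mentioned above (without it, the plain pseudocode can revisit vertices); the adjacency structure is a plain dict.

```python
import heapq
from typing import Callable, Dict, List, Tuple

Vector = Tuple[float, ...]

def greedy_graph_search(graph: Dict[Vector, List[Vector]],
                        v_e: Vector,
                        f_v: Callable[[Vector, Vector], float],
                        v_q: Vector,
                        k: int) -> List[Vector]:
    """Greedy best-first routing from entry vector v_e toward the query v_q."""
    visited = {v_e}                      # optional visited-set optimization
    candidates = [(f_v(v_e, v_q), v_e)]  # min-heap ordered by similarity to v_q
    result = [(-f_v(v_e, v_q), v_e)]     # max-heap keeping the k best so far
    while candidates:
        dist, v_c = heapq.heappop(candidates)
        if dist > -result[0][0] and len(result) >= k:
            break                        # best candidate cannot improve the result
        for v in graph[v_c]:             # neighborhood expansion
            if v in visited:
                continue
            visited.add(v)
            d = f_v(v, v_q)
            if len(result) < k or d < -result[0][0]:  # result update
                heapq.heappush(candidates, (d, v))
                heapq.heappush(result, (-d, v))
                if len(result) > k:
                    heapq.heappop(result)  # evict the farthest result
    return [v for _, v in sorted((-neg_d, v) for neg_d, v in result)]
```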
# 3 Review of FANNS Algorithms
In this section, we propose a pruning-focused framework (Figure 1) to classify 17 existing FANNS algorithms (A1–A17), demonstrating how each algorithm aligns with one of the four distinct pruning strategies defined in our framework and revealing the interrelationships among these algorithms (Figure 2). The core design of each algorithm is summarized using the terminology given in section 2.
# 3.1 The Pruning-Focused Framework
The existing framework classifies FANNS algorithms based on when the scalar filter is applied, namely, pre-filtering, post-filtering, and in-filtering [28, 33, 35, 73, 78, 99]. However, as discussed in R2 of subsection 1.1, this classification does not cover all algorithms and is too coarse to distinguish between them.
In contrast, our framework focuses on the pruning behaviors of indices for hybrid queries. Since a hybrid query involves both vectors and scalars, it supports two types of pruning: vector pruning and scalar pruning. Vector pruning refers to avoiding the computation of vector similarities for data points whose vectors are far from the query vector, typically achieved by searching on a vector index. Scalar pruning refers to skipping vector similarity calculations for data points that do not satisfy the scalar filter, which can be efficiently implemented, for example, using a bitmap generated by a scalar index to identify such points.
By emphasizing the dominant pruning behavior, our pruning-focused framework identifies four distinct strategies: vector-solely pruning (VSP), vector-centric joint pruning (VJP), scalar-centric joint pruning (SJP), and scalar-solely pruning (SSP). This finer-grained classification framework captures the core design principles of FANNS algorithms and overcomes the limitations of the existing framework by offering a more robust and extensible classification for both current and future FANNS algorithms, as detailed in the following subsections.
# 3.2 VSP-based FANNS algorithms
Vector-solely pruning (VSP) performs only vector pruning without any scalar pruning. The underlying rationale of VSP-based FANNS algorithms [58, 91, 96, 100, 102] (A1, A2) is that FANNS can be viewed as a direct extension of ANNS. These algorithms typically adapt existing vector indices with minimal modifications. VSP-based FANNS algorithms are suitable for scenarios with unselective scalar filters. However, due to the lack of scalar pruning, they may degrade to computing similarities for nearly all vectors when the scalar filter is highly selective.
A1: Post-Filtering Algorithm Family. The Post-Filtering Algorithm Family represents a class of methods [58, 91, 96, 100] that directly use the top-k interface of a vector index designed for ANNS to retrieve the $K ^ { \prime }$ nearest neighbors ($K ^ { \prime }$-NN), and then apply the scalar filter to find the final $K$ nearest neighbors (filtered $K$-NN). For the choice of vector index, any type can be used, including IVF indices [58, 91, 96, 100] that require minimal memory footprint, or graph indices [91, 100] that offer higher efficiency and accuracy. For the selection of $K ^ { \prime }$, some methods choose a $K ^ { \prime }$ much larger than $K$, hoping to retain at least $K$ elements after scalar filtering [91, 96]; while others start with a $K ^ { \prime }$ slightly larger than $K$ and iteratively increase it until at least $K$ data points are retained [100]. Overall, the Post-Filtering Algorithm Family is highly flexible and easy to implement since it can use any vector index off-the-shelf, but it faces the challenge of estimating the optimal $K ^ { \prime }$ due to the unpredictable selectivity of the scalar filter during a specific search.
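The iterative variant of this family can be sketched as follows: retrieve $K'$ candidates from a top-k interface, filter them, and grow $K'$ until $K$ survivors remain. This is a toy sketch of ours, not the cited systems' code; `topk_ann` stands in for any off-the-shelf ANNS index, and the doubling schedule is an assumption.

```python
from typing import Callable, List, Sequence, Tuple

Point = Tuple[tuple, Tuple[float, ...]]  # (scalar-tuple, vector)

def post_filtering(D: Sequence[Point],
                   topk_ann: Callable[[int], List[Point]],
                   f_s: Callable[[tuple], bool],
                   K: int) -> List[Point]:
    """Iteratively grow K' until at least K filtered results remain (or D is exhausted)."""
    K_prime = 2 * K                                   # start slightly above K
    while True:
        candidates = topk_ann(K_prime)                # K'-NN from the vector index
        survivors = [p for p in candidates if f_s(p[0])]
        if len(survivors) >= K or K_prime >= len(D):
            return survivors[:K]                      # final filtered K-NN
        K_prime *= 2                                  # retry with a larger K'
```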
Fig. 1 The proposed pruning-focused framework for classifying FANNS algorithms
Fig. 2 Classification of FANNS algorithms under the pruning-focused framework and interrelationships among them.
A2: VBase. VBase [102] optimizes the Post-Filtering Algorithm Family (A1) by dynamically selecting the optimal $K ^ { \prime }$. Specifically, it modifies the result update process during the search over the ANNS index (subsection 2.4) by adding only data points that satisfy the scalar filter to the result set. Besides, it introduces an improved termination check that requires that the result set contain at least $K$ data points and meet the “relaxed monotonicity” condition [102], which ensures the result vectors are sufficiently similar to the query vector. With these modifications, the number of traversed data points during search serves as $K ^ { \prime }$ in the Post-Filtering Algorithm Family, thereby addressing the difficulty in estimating $K ^ { \prime }$, and this dynamically selected $K ^ { \prime }$ has been proven to be optimal [102].
# 3.3 VJP-based FANNS algorithms
Vector-centric joint pruning (VJP) incorporates scalar pruning into the vector-pruning-centric search process. VJP-based FANNS algorithms [26, 33, 35, 58, 78, 93, 97, 99, 105, 107] (A3–A11) further modify the search process over the ANNS index, so that the search direction is primarily guided by the similarity to the query vector, with adjustments made by the scalar filter, which may significantly reduce the number of vector similarity computations. However, the effectiveness of the scalar filter in assisting the guidance of search direction is not always guaranteed. Therefore, some studies go beyond modifying the search process by incorporating scalar information into the construction of indices [33, 93, 97, 99, 107] (A7–A11). Overall, VJP-based FANNS algorithms tend to be more efficient than VSP-based FANNS algorithms (subsection 3.2) across varying levels of selectivity, but their results may be less reliable [78, 93, 97, 105] (A3, A4, A7, A8), and their applicability may be limited by their restrictive assumptions [33, 93, 97, 99, 107] (A7–A11).
A3: AIRSHIP and A4: ACORN. Attribute-Constrained Similarity Search on Proximity Graph (AIRSHIP) [105] and ANN Constraint-Optimized Retrieval Network (ACORN) [78] both incorporate scalar pruning into the search process over the graph index (subsection 2.4). In AIRSHIP, data points that satisfy and do not satisfy the scalar filter are both probabilistically visited during neighbor expansion, exploiting satisfying data points for efficiency while exploring non-satisfying yet potentially useful data points for comprehensiveness. When updating the result set, it only adds data points that satisfy the scalar filter. In ACORN, only data points that satisfy the scalar filter are visited during neighbor expansion, aiming for thorough scalar pruning by traversing the predicate subgraph [78]. While many graph indices [92] use a Relative Neighborhood Graph (RNG) [89] approximation on the Delaunay Graph (DG) [7] for sparsity, this is not suitable for ACORN since a sparse graph often leads to a disconnected predicate subgraph [78]. Thus, ACORN retains the DG for a dense graph and designs a compression strategy to save memory. Overall, both algorithms are more efficient than VBase (A2) with graph indices, but uncertainty around the connectivity of the traversed subgraph makes search results potentially unreliable.
A5: Faiss-IVF and A6: CAPS. Faiss-IVF [26, 58] and Constrained Approximate Partitioned Search (CAPS) [35] both incorporate scalar pruning into the search process over the IVF index (subsection 2.4). In Faiss-IVF, similarity calculations for vectors that do not satisfy the scalar filter are skipped during scanning. In CAPS, an attribute frequency tree (AFT) [35] is built for each cluster during index construction, with each AFT recursively partitioning the cluster based on its most frequently occurring scalar values. During search, the AFT narrows the scan scope within each cluster, then a scan similar to Faiss-IVF is performed in this refined scope. Overall, both algorithms are more efficient than VBase (A2) with IVF indices, and their memory footprint is smaller than that of graph indices, albeit with potentially lower search speed and accuracy.
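The in-scan scalar pruning described for Faiss-IVF amounts to adding one predicate check to the IVF scan. The sketch below is our own illustration of that idea (it is not Faiss's actual API; names and structures are assumed):

```python
import heapq
from typing import Callable, Dict, List, Tuple

Point = Tuple[tuple, Tuple[float, ...]]  # (scalar-tuple, vector)

def ivf_filtered_search(centroids: List[Tuple[float, ...]],
                        inverted_lists: Dict[int, List[Point]],
                        nprobe: int,
                        f_s: Callable[[tuple], bool],
                        f_v: Callable,
                        v_q: Tuple[float, ...],
                        k: int) -> List[Point]:
    """IVF scan that skips similarity calculations for points failing f_s."""
    probe = heapq.nsmallest(nprobe, range(len(centroids)),
                            key=lambda i: f_v(centroids[i], v_q))
    # Scalar pruning: evaluate f_v only for points that pass the scalar filter.
    passing = [p for i in probe for p in inverted_lists[i] if f_s(p[0])]
    return heapq.nsmallest(k, passing, key=lambda p: f_v(p[1], v_q))
```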
A7: NHQ and A8: HQANN. Native Hybrid Query (NHQ) [93] builds a composite graph index to enable joint pruning. It assumes discrete scalar values, and a simplified equality scalar filter (subsection 2.2). For each data point, it transforms its scalar-tuple into a scalar vector by encoding scalar values as numeric values, and then combines its vector with this scalar vector to form a fusion vector. Accordingly, it defines a fusion distance metric to measure the similarity between fusion vectors, such that data points with similar scalars and vectors have similar fusion vectors. Consequently, FANNS over data points is transformed into ANNS over fusion vectors, allowing both index construction and search to be accomplished using fusion vectors and the fusion distance. Hybrid Query Approximate Nearest Neighbor Search (HQANN) [97] follows the core idea of NHQ, but introduces a different definition of the fusion distance. Overall, both algorithms achieve high search efficiency by converting FANNS into ANNS under their assumptions, but uncertainty around the optimal form of the fusion distance makes search results potentially unreliable.
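To make the fusion idea concrete, the sketch below combines a vector distance with a scalar-mismatch penalty. The additive form and the weight `w` are assumptions for illustration only; NHQ and HQANN each define their own fusion distance, which is exactly the source of the uncertainty noted above.

```python
import numpy as np

def fusion_distance(x_vec, x_scalars, y_vec, y_scalars, w=1.0):
    """One possible fusion distance (illustrative sketch, not the exact
    NHQ or HQANN definition): Euclidean distance between the vectors
    plus a penalty `w` for every mismatched scalar value, so points with
    similar vectors AND similar scalars end up close in fusion space."""
    vec_dist = float(np.linalg.norm(np.asarray(x_vec, dtype=float)
                                    - np.asarray(y_vec, dtype=float)))
    scalar_mismatch = sum(a != b for a, b in zip(x_scalars, y_scalars))
    return vec_dist + w * scalar_mismatch
```

With such a metric, an off-the-shelf graph index built over fusion vectors performs plain ANNS, and the equality filter is absorbed into the distance itself.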
A9: Filtered-DiskANN. Filtered-DiskANN [33] includes two similar methods, StitchedVamana and FilteredVamana, both incorporating scalar information into the construction and search process of the Vamana [86] graph index. It assumes discrete scalar values, and a simplified equality scalar filter (subsection 2.2) whose conjunctive part in Equation 1 has only one active sub-filter. StitchedVamana constructs a separate graph for each scalar value over the subset of data points with that value in their scalar-tuple, then overlays these scalar-specific subgraphs by unioning their edges and selectively retaining a limited number of edges to save memory. For a search using a scalar filter, it performs searches using each sub-filter, and then merges their results. For a search using a sub-filter, it only visits data points that satisfy the scalar filter during neighbor expansion, effectively performing ANNS on the scalar-specific subgraph. FilteredVamana has a similar search process to StitchedVamana but constructs an approximate index by incrementally adding data points to an initially empty graph. For each newly added data point, it performs searches using each scalar value in the scalar-tuple of that data point as the scalar filter, and the union of these results forms the candidate neighbors for connecting the new point. Overall, Filtered-DiskANN efficiently searches within subgraphs that meet the scalar filter like ACORN (A4), but achieves higher accuracy by explicitly constructing these subgraphs under its assumptions.
A10: SeRF. Segment Graph for Range-Filtering (SeRF) [107] follows the core idea of Filtered-DiskANN (A9) but operates under different assumptions. It assumes that each data point has only one scalar whose value is drawn from a discrete and orderable set, and a simplified range scalar filter (subsection 2.2). For index construction, similar to Filtered-DiskANN, which builds a graph by approximately overlaying scalar-specific subgraphs for all possible scalar values, SeRF constructs a graph by approximately overlaying range-specific subgraphs for all possible scalar value ranges (of the form $[a, b]$, $a \leq b$). Assuming there are $n$ distinct scalar values (the same as the number of data points), Filtered-DiskANN overlays $n$ subgraphs and achieves a worst-case space complexity of $O(Mn)$ by limiting each point to $M$ neighbors [33], while SeRF overlays $n^2$ subgraphs and has a worst-case space complexity of $O(Mn^2)$ due to its “lossless compression” [107]. For FANNS, similar to Filtered-DiskANN, which effectively performs searches on scalar-specific subgraphs, SeRF effectively performs searches on range-specific subgraphs. Overall, SeRF also provides higher efficiency and accuracy than ACORN (A4) under its assumptions, but it suffers from a high memory footprint.
A11: iRangeGraph. Improvising Range-dedicated Graph (iRangeGraph) [99] follows the same assumptions and search strategy as SeRF (A10), but adopts a different approach for index construction. Rather than overlaying numerous range-specific subgraphs, iRangeGraph constructs a moderate number of range-specific subgraphs and organizes them in a segment tree structure [99]. In this segment tree, each node represents a range and stores the range-specific subgraph constructed over the subset of data points with scalar values in that range, which results in a worst-case space complexity of $O ( M n \log n )$ . During search, as each data point appears in range-specific subgraphs at multiple levels of the tree, its neighbors for expansion are the union of its neighbors across all the tree levels. Overall, iRangeGraph achieves a smaller memory footprint than SeRF (A10) while ensuring search efficiency and accuracy under its assumptions.
# 3.4 SSP-based FANNS algorithms
Scalar-Solely Pruning (SSP) performs only scalar pruning without any vector pruning. SSP-based FANNS algorithms [78, 91, 96, 102] (A12) are easy to implement and memory efficient without the need for vector indices, and are suitable for scenarios with selective scalar filters. However, due to the lack of vector pruning, they may degrade to computing similarities for nearly all vectors when the scalar filter is unselective.
A12: Pre-Filtering Algorithm Family. Pre-Filtering Algorithm Family represents a class of methods [78, 91, 96, 102] that first apply the scalar filter to retrieve a subset of data points (filtered subset), and then find the filtered $K$ -NN by calculating similarities for all vectors in this subset through brute-force scan. Since the number of possible filtered subsets can be extremely large or even uncountable (e.g., when the scalar filter is based on regular expressions), it is impractical to construct vector indices for all possible filtered subsets to accelerate the search process, making vector pruning infeasible. However, the efficiency of brute-force scan can be improved. For example, AnalyticDB-V (ADBV) [96] trades accuracy for efficiency by pre-compressing data vectors using Voronoi graph product quantization and performing asymmetric distance computation, in which query vectors remain uncompressed while data vectors are compressed.
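A minimal sketch of the pre-filtering family in Python: scalar pruning first, then an exact brute-force scan of the filtered subset. The function name and inputs are hypothetical, and ADBV's quantization-based acceleration is omitted, so plain Euclidean distances stand in for the compressed asymmetric distance computation.

```python
import numpy as np

def pre_filter_knn(vectors, scalars, query, predicate, k=10):
    """Pre-filtering FANNS (sketch): apply the scalar filter first to get
    the filtered subset, then brute-force scan it for the exact filtered
    K-NN. No vector index is needed, which is why this family is memory
    efficient but slow for unselective filters."""
    # Scalar pruning: keep only indices whose scalars satisfy the filter.
    idx = np.array([i for i, s in enumerate(scalars) if predicate(s)],
                   dtype=int)
    if idx.size == 0:
        return idx
    # Brute-force scan: Euclidean distances over the filtered subset only.
    dists = np.linalg.norm(vectors[idx] - query, axis=1)
    return idx[np.argsort(dists)[:k]]

# Toy usage: ten 1-D points, filter keeps even-labeled ones.
vectors = np.arange(10, dtype=float).reshape(-1, 1)
scalars = ["even" if i % 2 == 0 else "odd" for i in range(10)]
top = pre_filter_knn(vectors, scalars, np.array([0.0]),
                     lambda s: s == "even", k=3)
```

When the filter is selective the scan touches few vectors; when it is unselective, as the text notes, this degrades toward a full-dataset scan.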
# 3.5 SJP-based FANNS algorithms
Scalar-Centric Joint Pruning (SJP) incorporates vector pruning into the scalar-pruning-centric search process. To achieve this, SJP-based FANNS algorithms [15, 28, 73, 91, 98] (A13–A17) define rules to select a limited number of filtered subsets, and then build vector indices for each selected subset. During search, they first use part of the scalar filter to coarsely retrieve some of these preselected subsets (partially filtered subsets), and then carry out hybrid searches on them to obtain the final filtered $K$-NN. Notably, the partially filtered subsets may contain data points that do not satisfy the scalar filter, so scalar pruning is not as complete as in SSP-based FANNS algorithms (subsection 3.4), but since vector indices are built on these subsets, vector pruning can be leveraged to accelerate the search process. Overall, SJP-based FANNS algorithms require careful selection of subsets to build vector indices, and their applicability is limited by their restrictive assumptions.
A13: Milvus-Partition and A14: HQI. Milvus-Partition [91] and Hybrid Query Index (HQI) [73] both partition the dataset into disjoint subsets based on workload characteristics and build vector indices for each subset. During search, they both retrieve partially filtered subsets and apply one of the VSP-, VJP-, or SSPbased FANNS algorithms for each subset, and finally merge the results to obtain the final filtered $K$ -NN. However, the criteria and structure of dataset partitioning differ between them. In Milvus-Partition, partitioning is scalar-based and single-layered. It first identifies the most frequently used scalar from prior workload, and then evenly divides the dataset into subsets based on the values of this scalar. In HQI, partitioning is filter-based and multi-layered. It first identifies several frequently used scalar filters from prior workload, and then constructs an extended qd-tree [73] by iteratively partitioning the dataset according to these filters. Overall, both algorithms achieve high search efficiency when workload characteristics are stable, but their applicability is limited by the need for the prior workload information to guide the partitioning process.
A15: MA-NSW. Multiattribute ANNS based on Navigable Small World (MA-NSW) [98] constructs multiple NSW [67] graph indices based on the values of the scalar-tuples. It assumes discrete scalar values, and a simplified equality scalar filter (subsection 2.2). For index construction, MA-NSW first defines a containment relationship between two scalar-tuples $\mathbf{s}_1$ and $\mathbf{s}_2$, where $\mathbf{s}_1$ is included by $\mathbf{s}_2$ ($\mathbf{s}_1 \subseteq \mathbf{s}_2$) if $\mathbf{s}_2$ has at least one scalar with a NULL value and all other scalar values are identical to those of $\mathbf{s}_1$. Assuming the scalar schema has $m$ scalars, each with $m_i$ distinct values including the NULL value, there will be $\prod_{i=1}^{m} m_i$ possible distinct scalar-tuples. For each observed scalar-tuple, MA-NSW identifies a subset of data points whose scalar-tuples are either identical to or included by the given scalar-tuple, and then constructs an NSW on this subset. Since a single NSW has a space complexity of $O(Mn)$, the worst-case space complexity of MA-NSW is $O(Mn^{m+1})$. For a search using a scalar filter, where each conjunctive sub-filter corresponds to a subset with a prebuilt NSW, MA-NSW retrieves all relevant subsets, performs ANNS on each NSW, and finally merges the results. Overall, despite the high efficiency of MA-NSW under its assumptions, its memory footprint can be prohibitively large.
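The containment relationship can be checked directly from the definition above. A small Python sketch, representing NULL as `None` (an assumption for illustration):

```python
def included_by(s1, s2):
    """Containment between scalar-tuples as defined for MA-NSW (sketch):
    s1 is included by s2 iff s2 has at least one NULL (None) scalar and
    every non-NULL scalar of s2 equals the corresponding scalar of s1."""
    has_null = any(v is None for v in s2)
    others_match = all(v2 is None or v1 == v2 for v1, v2 in zip(s1, s2))
    return has_null and others_match
```

During index construction, the subset for a given scalar-tuple would then collect every data point whose scalar-tuple is identical to it or included by it.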
A16: UNG. Unified Navigating Graph (UNG) [15] constructs a single graph index based on the values of the scalar-tuples. It has the same assumptions and containment relationship definition as MA-NSW (A15). Additionally, it defines a minimal containment relationship between two scalar-tuples $\mathbf{s}_1$ and $\mathbf{s}_2$, where $\mathbf{s}_1$ is minimally included by $\mathbf{s}_2$ if $\mathbf{s}_1 \subseteq \mathbf{s}_2$ and no other scalar-tuple $\mathbf{s}_3$ exists such that $\mathbf{s}_1 \subseteq \mathbf{s}_3 \subseteq \mathbf{s}_2$. During index construction, for each scalar-tuple, UNG identifies the subset of data points whose scalar-tuples are identical to the given scalar-tuple, and then constructs a graph index on this subset. After that, if the scalar-tuple of one subset is minimally included by that of another, UNG selectively adds directed edges from some data points in the latter subset to some in the former subset, organizing these subsets in a manner similar to a prefix tree structure [15]. Since all the subsets are disjoint from each other and the number of added edges is limited, the worst-case space complexity of UNG is $O(Mn)$. For a search using a scalar filter, each conjunctive sub-filter corresponds to a group of linked subsets whose graph indices, together with the directed edges connecting them, form a concatenated graph index; UNG retrieves all relevant groups, performs ANNS on each concatenated graph index, and finally merges the results. Overall, UNG achieves a small memory footprint and high efficiency when its assumptions are satisfied and containment relationships are abundant.
A17: WST. Window Search Tree (WST) [28] shares the same assumptions and a similar index structure with iRangeGraph (A11), but introduces four different search methods: VamanaWST, OptimizedPostfiltering, ThreeSplit, and SuperPostfiltering. WST organizes range-specific subgraphs in an extended form of segment tree [28], where each node represents a range and its children recursively divide this range into $\beta$ sub-ranges, which reduces to the standard segment tree when $\beta = 2$ . VamanaWST recursively searches the WST from top to bottom to identify a set of nodes whose ranges are disjoint and the union of their ranges is the query range (i.e., the scalar filter), then performs ANNS on each node, and finally merges the results. OptimizedPostfiltering selects a single node with the smallest range that contains the query range and applies one of the VSP-, VJP-, or SSP-based algorithms on this node to obtain the result. ThreeSplit first selects a node with the largest range contained within the query range and performs ANNS on this node, then applies OptimizedPostfiltering to the remaining portions of the query range, and finally merges the results. SuperPostfiltering is similar to OptimizedPostfiltering but does not rely on range-specific subgraphs in the WST; instead, it constructs an arbitrary set of range-specific subgraphs to achieve an expected “small blowup” [28]. Overall, all four methods achieve small memory footprint and high search efficiency under given assumptions.
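VamanaWST's node selection is the standard segment-tree range decomposition (here with $\beta = 2$, the case the text says reduces to the standard segment tree). The sketch below works over integer scalar positions; each returned pair would correspond to one range-specific subgraph on which ANNS is performed before merging. Names and the integer-position assumption are for illustration.

```python
def decompose(lo, hi, qlo, qhi):
    """Standard segment-tree query decomposition (sketch of VamanaWST's
    node selection): return a minimal set of disjoint tree-node ranges
    whose union is exactly the query range [qlo, qhi], recursing from
    the root node [lo, hi]."""
    if qhi < lo or hi < qlo:
        return []                 # node disjoint from the query range
    if qlo <= lo and hi <= qhi:
        return [(lo, hi)]         # node fully inside the query range
    mid = (lo + hi) // 2          # beta = 2: split into two children
    return decompose(lo, mid, qlo, qhi) + decompose(mid + 1, hi, qlo, qhi)
```

For a tree over positions 0..7 and query range [2, 5], the decomposition yields the nodes [2, 3] and [4, 5], so only two range-specific subgraphs need to be searched. The number of selected nodes is $O(\log n)$ in the standard case, which underlies the search efficiency claimed above.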
# 4 Review of Hybrid Datasets
In this section, we discuss the construction strategies of existing hybrid datasets, and present several examples to detail their contents, as summarized in Table 2.
# 4.1 Hybrid Dataset Construction
Hybrid datasets can be seen as vector datasets with added scalars. However, unlike the evaluation of ANNS, which benefits from a standardized set of vector datasets [8, 49, 50, 80] provided by standard benchmarks like ANN-Benchmarks [6], the evaluation of FANNS lacks a comparable standard for hybrid datasets. As a result, existing FANNS studies often customize their own hybrid datasets.
Some studies synthesize scalars, such as generating random values from a uniform distribution [15, 28, 33, 35, 73, 78, 91, 93, 96–98, 100, 105, 107], while others use “organic” scalars that already exist in the original data sources [28, 33, 78, 96, 99, 102, 105, 107]. In the former case, the vectors can be sourced from any vector dataset, including those in the ANN-Benchmarks. In the latter case, the original data source is typically collected by crawling web pages, where unstructured data (e.g., images, texts, or audio) are used to extract the feature vectors through a pre-trained model (e.g., CNN [37, 55, 85], Transformer [13, 25, 90], or VGGish 2), and structured data (e.g., publication dates, keywords, and likes) serve as the corresponding scalars.
Based on the above analysis, we categorize existing hybrid datasets into two types: (1) synthesized hybrid datasets, which contain only synthesized scalars; and (2) organic hybrid datasets, which contain organic scalars, either exclusively or in combination with synthesized ones.
# 4.2 Representative Hybrid Datasets
Among existing hybrid datasets, we select nine representative examples (D1-D9) for detailed description, including three synthesized hybrid datasets (D1-D3) whose vectors are sourced from ANN-Benchmarks, and six organic hybrid datasets (D4-D9) with traceable raw data sources. A concise summary of these datasets is provided in Table 2, including information on data size, vector source and dimensionality, number of organic scalars, and their usage in recent studies.
D1: SIFT-1M, D2: GIST-1M, and D3: Deep-10M. They are widely-used vector datasets from ANN-Benchmarks. SIFT-1M 3 [49] contains 1 million 128-dimensional vectors, extracted from the INRIA Holidays images [48] using local SIFT descriptors [63]. GIST-1M 3 [49] contains 1 million 960-dimensional vectors, extracted from the INRIA Holidays images [48] using global GIST descriptors [76]. Deep-10M 4 [8] contains 10 million 96-dimensional vectors, extracted from images on the Web using the GoogLeNet model [87]. The original SIFT, GIST, and Deep datasets do not have organic scalars, so synthesized scalars are generated to make them hybrid datasets. Notably, SIFT and Deep also have 1-billion versions, namely SIFT-1B 3 and DEEP-1B 4.
D4: MNIST-8M. MNIST-8M 5 [62] contains 8,100,000 data points, each comprising a 784-dimensional vector representation of a handwritten digit image generated using SVMs [22], along with an integer label between 0 and 9. These images are derived from the infinite MNIST dataset (InfiMNIST) 6, which extends the original MNIST dataset (MNIST) [24] by dynamically generating new samples through careful elastic deformations, theoretically enabling the creation of an unlimited number of images. Notably, from InfiMNIST, 10,000 samples ranging from 0 to 9,999 form the MNIST testing set, 60,000 samples ranging from 10,000 to 69,999 form the MNIST training set, and 8,100,000 samples ranging from 10,000 to 8,109,999 form the MNIST-8M.
D5: MTG. MTG 7 contains 40,274 data points, each comprising a 1,152-dimensional vector representation of a “Magic: The Gathering” 8 card image generated using the OpenCLIP model [20], along with 21 scalars, such as “artist”, “rarity”, and “toughness”. Notably, the content of “image” scalar is the raw card image, and several scalars ending with “uri” provide traceable links to the websites from which the dataset was crawled.
D6: GloVe-Twitter and D7: Glove-Crawl. GloVe-Twitter 9 and GloVe-Crawl $^ { 9 }$ both contain word vectors generated using the GloVe algorithm [80] from the Twitter corpus and the Common Crawl corpus, respectively.
Table 2 Summary of Selected Hybrid Datasets
GloVe-Twitter contains 1,183,514 unique words, with corresponding word vectors generated in four different dimensions: 25, 50, 100, and 200. This means that GloVe-Twitter provides four sets of word vectors, each containing 1,183,514 vectors of the respective dimension. In contrast, GloVe-Crawl contains 1,989,995 unique words, with word vectors generated in a single fixed dimension of 300.
D8: LAION-1M. LAION-1M $^ { 1 0 }$ contains the first 1,000,4 data points from LAION-400M [83], with each data point comprising 2 vectors and 15 scalars. Each vector pair includes a 512-dimensional image embedding and a corresponding 512-dimensional text embedding, both generated from the Common Crawl corpus using the same CLIP model [81]. The scalar part includes the image URL and associated metadata, such as the image’s width and height. Notably, LAION-5B [84] is also available, containing a significantly larger dataset of 5,526,641,167 data points, while maintaining a structure similar to the previous versions.
D9: YouTube. YouTube 11 [84], an updated version of YouTube-8M [2], contains 6,134,598 data points, each comprising 2 vectors and 3 scalars. Each vector pair includes a 1,024-dimensional video embedding and a corresponding 128-dimensional audio embedding, generated from the YouTube corpus using Inception model [46] and VGGish model 12, respectively. The scalars include the sample ID, video URL and video labels.
# 5 The Distribution Factor for Query Difficulty
Understanding query difficulty is essential for analyzing algorithm performance and designing evaluation benchmarks. Query difficulty influences both the efficiency and accuracy of an algorithm. Specifically, more difficult queries tend to increase search time and decrease recall. Multiple factors may contribute to query difficulty. Identifying these factors not only helps explain performance fluctuations across different queries from various perspectives, but also enables the construction of evaluation benchmarks across a wider range of difficulty levels through their combination.
This section goes beyond the selectivity factor to explore the role of the distribution factor in the difficulty of hybrid queries. We begin by motivating the incorporation of the distribution factor by examining real-world hybrid datasets. Next, we conduct carefully designed experiments to assess the impact of the distribution factor on query difficulty, followed by both qualitative visualizations and quantitative measurements to explain the results. Finally, we propose a schema towards more comprehensive FANNS algorithm evaluation by incorporating both selectivity and distribution factors.
5.1 Incorporation of Distribution Factor: Motivation
The selectivity measures the proportion of data points excluded by the scalar filter (subsection 2.3). It is currently the only factor used to evaluate the difficulty of hybrid queries. As discussed in section 3, hybrid queries with high selectivity are more difficult for VSP-based FANNS algorithms, whereas those with low selectivity are more difficult for SSP-based algorithms. In the meantime, VJP-based and SJP-based algorithms are robust across varying levels of selectivity if their corresponding assumptions are satisfied.
[Figure 3 panels: MNIST-8M with IVFFlat (nlists=2048), MTG with IVFFlat (nlists=256), MNIST-8M with HNSWFlat (M=32, efConstruction=40), MTG with HNSWFlat (M=32, efConstruction=40)]
Fig. 3 Performance evaluations on hybrid queries with oracle partition indices. Each “x-y” is a set of hybrid queries, where the scalar value of base vectors is x and the scalar value of query vectors is y. Each set of base vectors is indexed using IVF and HNSW. The distribution relationship of each hybrid query set is ID, POD, or OOD, represented by a circle, cross, or square, respectively.
Fig. 4 UMAP visualizations of MNIST-8M (Left) and MTG (Right). Each filtered subset has a 2-dimensional distribution. In MNIST-8M, the distributions of each filtered subset are well-separated, exhibiting clustering behavior, while in MTG, the distributions of all filtered subsets are highly overlapping, sharing a similar overall distribution.
Fig. 5 Mahalanobis Distance Histograms of MNIST-8M (Left) and MTG (Right). Each subgraph titled “x-y” shows the histograms of both “x-y” (in orange) and “x-x” (in blue), illustrating the distribution shift between query vectors and base vectors when sampled from different filtered subsets, compared to when they are sampled from the same filtered subset.
In real-world scenarios, hybrid datasets often exhibit clustering behavior, where data points with similar scalars tend to form distinct clusters in the vector space. For instance, in e-commerce platforms, products from the same brand or sharing a common style often exhibit clustered embeddings, reflecting their inherent similarity [71]. In the domain of news articles, temporal proximity can lead to clustering, as articles covering the same event within a short timeframe tend to have highly similar content and embeddings [82].
This motivates us to incorporate the distribution factor into the hybrid query difficulty. In the high-dimensional vector space, each group of vectors has a high-dimensional distribution. The distribution factor refers to the relationship between the distributions of two sets of vectors. If query vectors and base vectors are sampled from two different filtered subsets within a clustered dataset, the relationship between their distributions is likely to be Out-of-Distribution (OOD) [18, 47], leading to difficult hybrid queries, because query vectors are far from their nearest neighbors in the base vectors, and the nearest neighbors are also distant from one another [18]. Conversely, if query vectors and base vectors are sampled from the same filtered subset within a clustered dataset, or sampled from a dataset without clustering behavior, their distribution relationship is likely to be In-Distribution (ID) [18, 47], resulting in easier hybrid queries.
# 5.2 Impact of Distribution Factor: Experiments and Explanations
To illustrate how the distribution factor impacts the difficulty of hybrid queries, we conduct experiments on two hybrid datasets introduced in section 4: MNIST-8M, which exhibits clustering behavior, and MTG, which does not. For demonstration purposes, we consider only one vector column and one scalar column in each dataset, with the scalar filter checking whether the scalar equals a specific value. Concretely, for MNIST-8M, we use the first 1,000,000 data points and select digit as the scalar attribute. For MTG, we use all 40,274 data points and choose rarity as the scalar attribute. Under this setup, each filtered subset consists of vectors associated with a particular scalar value, and different filtered subsets exhibit distinct distributions. To evaluate whether two filtered subsets are OOD, ID, or in an intermediate state (referred to as Partially-Overlapping Distribution, POD), we design hybrid queries across filtered subsets, identifying their relationships based on the performance gap relative to a baseline.
Experiment Design. Suppose the chosen scalar $\mathbb{S}_c$ has $|\mathbb{S}_c|$ unique values. Let “$s_{base}$-$s_{query}$” be a set of hybrid queries, where base vectors are sampled from the filtered subset whose scalar value is $s_{base} \in \mathbb{S}_c$, and query vectors are sampled from the filtered subset whose scalar value is $s_{query} \in \mathbb{S}_c$. This results in a total of $|\mathbb{S}_c|^2$ hybrid query sets. Among these, hybrid queries where $s_{base} = s_{query}$ are referred to as baseline hybrid queries, with $|\mathbb{S}_c|$ such sets in total. To achieve theoretically optimal search performance for each hybrid query, an ANN index must exist for the corresponding base vectors. This index, referred to as the oracle partition index [78], transforms FANNS over the entire dataset into ANNS within the base vectors.
For demonstration purposes, we select 3 scalar values from each dataset: $\mathbb{S}_c = \{4, 7, 9\}$ from MNIST-8M, and $\mathbb{S}_c = \{\texttt{common}, \texttt{rare}, \texttt{uncommon}\}$ from MTG. Each hybrid query set contains 1,000 vectors randomly sampled from each filtered subset, with $k = 10$. Each set of base vectors has 2 types of ANN indices, IVFFlat and HNSWFlat, constructed using the Faiss library 13. For IVFFlat, the construction parameter nlists (the number of inverted lists) is set to 2,048 for MNIST-8M and 256 for MTG, as recommended by the library. The search parameter nprobe (the number of inverted lists visited during a query) is varied from 5 to 25, and the average recall@10 is measured for each hybrid query set. For HNSWFlat, the construction parameters M (the number of neighbors in the graph) and efConstruction (the depth of exploration during construction) are set to 32 and 40, respectively, following the library's default settings. The search parameter efSearch (the depth of exploration during the search) is also varied from 5 to 25, and the average recall@10 is measured for each hybrid query set.
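The recall@10 metric used throughout could be computed as sketched below; the helper name and inputs are hypothetical, and the Faiss index construction and search calls themselves are omitted, with approximate and exact neighbor ID lists assumed as inputs.

```python
import numpy as np

def recall_at_k(approx_ids, exact_ids, k=10):
    """Average recall@k (sketch): for each query, the fraction of the
    exact filtered K-NN recovered by the approximate search, averaged
    over all queries in a hybrid query set.

    approx_ids / exact_ids: per-query lists of neighbor IDs, where the
    exact IDs would come from a brute-force scan of the base vectors."""
    hits = [len(set(a[:k]) & set(e[:k])) / k
            for a, e in zip(approx_ids, exact_ids)]
    return float(np.mean(hits))
```

In the oracle-partition setting, the exact IDs are computed by brute force within the base-vector subset, and the approximate IDs come from the IVFFlat or HNSWFlat index built on that same subset.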
Experiment Results. The results are presented in Figure 3. In this figure, each set of hybrid queries is classified based on the distribution relationship between query vectors and base vectors, represented by circle, cross, and square for ID, POD, and OOD, respectively. For both datasets, the three sets of baseline hybrid queries achieve the best performance, as the query vectors and base vectors belong to the same distribution, naturally classified as ID. For MNIST-8M, the remaining 6 sets of hybrid queries exhibit clear stratification and are classified as either POD or OOD, while for MTG, all 6 remaining sets have nearly identical query performance and are classified as POD. It is also observed that the relative query performance is consistent for both IVFFlat and HNSWFlat indices. This indicates that ID queries are consistently easier, while POD and OOD queries are relatively more difficult, regardless of the index type. Furthermore, compared to IVFFlat, HNSWFlat demonstrates significant speedup for ID queries but offers very little improvement for POD and OOD queries. This highlights the potential for further optimization of graph-based indices in handling queries across different distribution relationships. Notably, recent studies [18, 47] have made encouraging progress in this direction.
Qualitative Explanations. To provide an initial overview of hybrid datasets and a qualitative understanding of distribution relationships between filtered subsets, we visualize each dataset in a 2-dimensional space using Uniform Manifold Approximation and Projection (UMAP) [72]. UMAP is a graph-based algorithm for dimensionality reduction. It first constructs a weighted k-neighbor graph and then computes a low-dimensional layout of this graph, capturing both the global structure, similar to techniques like PCA [40] and Laplacian Eigenmaps [10], and the local structure, similar to techniques like t-SNE [64] and LargeVis [88]. This graph-based dimensionality reduction process, which is closely related to graph-based indices for vector similarity search, makes UMAP a suitable tool for visualization.
In Figure 4, we perform UMAP visualizations 14 on the MNIST-8M and MTG datasets. Each filtered subset has a 2-dimensional distribution. For MNIST-8M, we uniformly sample 100,000 data points. The visualization reveals 10 well-separated distributions of filtered subsets, exhibiting clustering behavior, and the number of data points is balanced across filtered subsets. For MTG, we use all its 40,274 data points. The visualization reveals 6 highly overlapping distributions of filtered subsets, sharing a similar overall distribution, and the majority of data points are concentrated with 3 scalar values (common, rare, and uncommon).
The visualizations partially explain the experimental results shown in Figure 3. For MNIST-8M, hybrid queries such as those between 4 and 9 are classified as ID, as their handwritten digits are visually similar, leading to closer image embeddings and a relatively large overlap between their distributions of filtered subsets. In contrast, queries such as those between 4 and 7 are classified as OOD due to the significant visual differences in their handwritten digits, resulting in highly separated distributions of filtered subsets. For MTG, all filtered subsets share a similar overall distribution, resulting in nearly identical query performance across all 6 remaining sets of hybrid queries, which are classified as ID. However, the 2-dimensional nature of UMAP visualizations cannot fully capture the distribution relationships in high-dimensional space. For example, for MNIST-8M, the distributions of filtered subsets for 7 and 9 exhibit some degree of overlap, but the performance for 9-7 is significantly worse than that for 7-9, leading the former to be classified as OOD and the latter as POD.
Quantitative Explanations. To complement qualitative visualizations and provide a quantitative understanding of distribution relationships between filtered subsets, we use the Mahalanobis distance [66], which has been employed in recent studies [18, 47] to quantify the OOD property in vector similarity search. The Mahalanobis distance measures the distance from a vector $\mathbf{v}$ to a filtered subset $\mathcal{D}_{f_s}$, and is defined as $d_M(\mathbf{v}, \mathcal{D}_{f_s}) = \sqrt{(\mathbf{v} - \bar{\mathbf{v}}_{\mathcal{D}_{f_s}})^T S_{\mathcal{D}_{f_s}}^{-1} (\mathbf{v} - \bar{\mathbf{v}}_{\mathcal{D}_{f_s}})}$, where $\bar{\mathbf{v}}_{\mathcal{D}_{f_s}}$ is the mean vector of $\mathcal{D}_{f_s}$ and $S_{\mathcal{D}_{f_s}}^{-1}$ is the inverse of the covariance matrix of $\mathcal{D}_{f_s}$.
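A direct numpy sketch of this definition, using the sample covariance of the subset (the open-source library used in the paper may differ in details such as covariance regularization):

```python
import numpy as np

def mahalanobis(v, subset):
    """Mahalanobis distance from vector v to a filtered subset (sketch):
    sqrt((v - mean)^T S^{-1} (v - mean)), where mean and S are the
    sample mean and covariance of the subset's vectors."""
    mean = subset.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(subset, rowvar=False))
    diff = v - mean
    return float(np.sqrt(diff @ cov_inv @ diff))
```

Unlike the Euclidean distance, this measure is scaled by the subset's covariance, which is also why the off-diagonal histograms discussed below are asymmetric: the distance from x-vectors to subset y uses y's covariance, not x's.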
In Figure 5, we calculate Mahalanobis distance histograms for all sets of hybrid queries in MNIST-8M and MTG. For a set of hybrid queries "$s_{base}$-$s_{query}$" mentioned above, we uniformly sample $\hat{\mathcal{D}}_{f_{s_{base}}}$ from $\mathcal{D}_{f_{s_{base}}}$ and $\hat{\mathcal{D}}_{f_{s_{query}}}$ from $\mathcal{D}_{f_{s_{query}}}$, each containing 5,000 data points. Notably, when $\mathcal{D}_{f_{s_{base}}} = \mathcal{D}_{f_{s_{query}}}$, the two sampled subsets are required to be non-intersecting. We then use an open-source library to compute the Mahalanobis distances in the Euclidean space. Each subgraph titled "$base$-$query$" shows both the orange histogram of Mahalanobis distances from each vector in $\hat{\mathcal{D}}_{f_{s_{base}}}$ to $\hat{\mathcal{D}}_{f_{s_{query}}}$, and the blue histogram of Mahalanobis distances from each vector in $\hat{\mathcal{D}}_{f_{s_{base}}}$ to $\hat{\mathcal{D}}_{f_{s_{base}}}$, illustrating the distribution shift between query vectors and base vectors when sampled from different filtered subsets, compared to when they are sampled from the same filtered subset.
The calculations also partially explain the experimental results shown in Figure 3. For both datasets, in each diagonal subgraph (from the top left to the bottom right), the two histograms show complete overlap between orange and blue, which explains why baseline hybrid queries achieve the highest performance and are classified as ID (Figure 3). The off-diagonal subgraphs are asymmetric, as the Mahalanobis distance depends on the covariance matrix of the base vectors. This explains why 7-9 is POD but 9-7 is OOD (Figure 3). In each off-diagonal subgraph, the two histograms exhibit some degree of overlap. Notably, 4-9 and 7-9 in MNIST-8M and all the remaining pairs in MTG show near-complete overlap between orange and blue, which explains their relatively high performance and their classification as POD (Figure 3). However, the remaining subgraphs in MNIST-8M cannot be classified as POD or OOD based solely on the size of the overlapping region. For example, while the overlap size suggests the ordering 4-7 > 9-7 > 9-4, these pairs are classified as OOD, POD, and OOD, respectively (Figure 3).

Fig. 6 Hybrid query sets with varying levels of difficulty under the control of both the distribution factor and the selectivity factor. Each region represents a hybrid query set, where discs represent data points, base vectors are blue discs, and query vectors lie in a green dashed circular area centered on a green pentagram. A lower proportion of blue discs indicates higher selectivity, and less overlap between the green and blue areas indicates greater proximity to OOD.
The above analysis suggests that the Mahalanobis distance is still imperfect. Its success in distinguishing OOD queries in existing studies [18, 47] can be largely attributed to the fact that the base vectors and query vectors are sampled from different modalities (e.g., text and image embeddings generated by CLIP [81]), which are inherently far apart and have almost no overlap, making them easy to distinguish. However, in the context of hybrid queries, the distinguishing ability of the Mahalanobis distance is insufficient, as it fails to completely classify query properties (POD or OOD) based on the degree of histogram overlap. This highlights the need to develop a more precise metric, which can better support the design of sets of hybrid queries with a desired distribution relationship between base vectors and query vectors. Notably, a recent study [18] explored the Wasserstein distance as an alternative, but it is symmetric, meaning that the distances between x-y and y-x are the same, making it less effective than the Mahalanobis distance in explaining query difficulty.
# 5.3 Towards More Comprehensive Evaluation of FANNS Algorithms
While the selectivity factor measures the proportion of data points excluded by the scalar filter, the distribution factor provides insights into the relationship between the distributions of base vectors and query vectors. As discussed above, both factors contribute to hybrid query difficulty. Combining these two factors, we propose a schema for a more comprehensive evaluation of FANNS algorithms, in which each factor serves as a dimension to partition the space of hybrid query sets with varying levels of difficulty.
Figure 6 illustrates the space partitioned by these two factors, distribution and selectivity. There are 9 sets of hybrid queries (Q1-Q9) across 9 regions, each set representing a combination of a specific distribution and selectivity. In each region, discs represent data points, the base vectors are shown as blue discs, and the query vectors are enclosed within a green dashed circular area centered on a green pentagram. A lower proportion of blue discs in a region indicates higher selectivity, and less overlap between the green and blue areas indicates greater proximity to OOD.
Although the selectivity can be easily controlled by adjusting the range of scalar values, there is currently no straightforward method for controlling the distribution. As discussed in subsection 5.2, an effective metric for quantifying the distribution relationship between base vectors and query vectors is still lacking. Consequently, the practical realization of our proposed schema depends on the development of a more suitable metric for measuring the distribution factor. We view it as an important research direction coming out of this survey.
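To make the selectivity side of the schema concrete, the following is a minimal sketch (assuming a 1-dimensional scalar attribute; the function name is ours) of how a scalar range can be chosen to hit a target selectivity, i.e., the proportion of points excluded by the filter:

```python
import numpy as np

def range_filter_for_selectivity(scalars, selectivity, rng=None):
    """Pick a scalar range [lo, hi] that excludes roughly `selectivity`
    of the data points (selectivity = proportion filtered OUT)."""
    if rng is None:
        rng = np.random.default_rng(0)
    ordered = np.sort(scalars)
    keep = max(1, int(round(len(ordered) * (1.0 - selectivity))))
    start = int(rng.integers(0, len(ordered) - keep + 1))  # random window position
    return ordered[start], ordered[start + keep - 1]

scalars = np.arange(1000)                    # toy scalar attribute
lo, hi = range_filter_for_selectivity(scalars, 0.9)
passed = int(((scalars >= lo) & (scalars <= hi)).sum())   # ~100 points survive
```

The distribution factor has no analogous one-liner; as discussed above, controlling it awaits a more suitable metric.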
# 6 Open Questions and Research Directions
Aside from the above-mentioned research direction, this section discusses open questions and possible research directions in the field of FANNS, progressing from the internal structures of its indices, to the workload-aware optimization of its algorithms, and ultimately to its system-level solutions.
# 6.1 Innovation in Index Structures
Designing efficient FANNS indices through the integration of traditional data structures is a notable trend.
For example, NHQ (A7) leverages the prefix tree [15], iRangeGraph (A11) and WST (A17) utilize the segment trees [28, 99], and HQI (A14) employs the qd-tree [73]. These integrations demonstrate that traditional data structures can play a crucial role in building more efficient FANNS indices.
However, existing FANNS indices integrated with traditional data structures are tightly coupled with specific assumptions, such as a simplified equality scalar filter or a simplified range scalar filter (subsection 2.2). Therefore, a key open question remains: how to integrate or design data structures that can support efficient FANNS indexing under more general scalar filters.
# 6.2 Optimization via Realistic Workloads
Optimizing dedicated FANNS algorithms based on realistic workloads is another important research direction.
For instance, CAPS (A6) partitions clusters of the IVF index according to the power-law distribution of scalar values [35], Milvus (A13) performs pre-indexing dataset partitioning based on frequently queried scalar values [91], and HQI (A14) benefits from workloads that exhibit scalar filter stability [73].
The primary challenge here is how to effectively identify and quantify workload characteristics in specific application scenarios, and to perform targeted optimizations based on these characteristics.
# 6.3 Combination of Multiple Algorithms
Combining multiple FANNS algorithms and dynamically selecting the most suitable one for a given hybrid query is a promising direction at the system level. This strategy enables the system to exploit the strengths of different FANNS algorithms under varying conditions.
Several studies have explored this strategy by estimating selectivity for algorithm selection, typically using Pre-Filtering Algorithm Family (A12) for high selectivity and pre-built FANNS indices for moderate or low selectivity. For example, database systems such as ADBV [96], Milvus [91], and VBase [102] utilize cost models to estimate selectivity and dynamically switch between FANNS algorithms. In the case of ACORN (A4), the algorithm shifts to Pre-Filtering Algorithm Family (A12) when selectivity is high [78].
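A minimal sketch of this dispatching strategy is shown below. It uses brute-force search as a stand-in for both the pre-filtering route and a pre-built FANNS index, and a hypothetical `threshold` knob; real systems such as ADBV or Milvus rely on cost models rather than this simplification:

```python
import numpy as np

def hybrid_search(query, vectors, labels, predicate, k=10, threshold=0.9):
    """Dispatch a hybrid query by estimated selectivity: pre-filter plus
    brute-force kNN when selectivity is high, otherwise search broadly
    (stand-in for a pre-built FANNS index) and post-filter the hits."""
    mask = np.array([bool(predicate(l)) for l in labels])
    selectivity = 1.0 - mask.mean()            # proportion filtered out
    if selectivity >= threshold:               # few survivors: pre-filter route
        idx = np.flatnonzero(mask)
        d = np.linalg.norm(vectors[idx] - query, axis=1)
        return idx[np.argsort(d)[:k]]
    d = np.linalg.norm(vectors - query, axis=1)   # moderate/low selectivity
    ranked = np.argsort(d)
    return np.array([i for i in ranked if mask[i]][:k])
```

In this toy version both branches are exact and return the same neighbors; in practice, the second branch would query an approximate index, trading recall for speed.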
Remaining challenges include identifying and estimating more workload-aware metrics beyond selectivity to guide algorithm selection, and optimizing storage efficiency when multiple indices coexist. | Filtered approximate nearest neighbor search (FANNS), an extension of approximate nearest neighbor search (ANNS) that incorporates scalar filters, has been widely applied to constrained retrieval of vector data. Despite its growing importance, no dedicated survey on FANNS over the vector-scalar hybrid data currently exists, and the field has several problems, including inconsistent definitions of the search problem, insufficient framework for algorithm classification, and incomplete analysis of query difficulty. This survey paper formally defines the concepts of hybrid dataset and hybrid query, as well as the corresponding evaluation metrics. Based on these, a pruning-focused framework is proposed to classify and summarize existing algorithms, providing a broader and finer-grained classification framework compared to the existing ones. In addition, a review is conducted on representative hybrid datasets, followed by an analysis on the difficulty of hybrid queries from the perspective of distribution relationships between data and queries. This paper aims to establish a structured foundation for FANNS over the vector-scalar hybrid data, facilitate more meaningful comparisons between FANNS algorithms, and offer practical recommendations for practitioners. The code used for downloading hybrid datasets and analyzing query difficulty is available at https://github.com/lyj-fdu/FANNS | [
"cs.DB"
] |
# Introduction
Across multiple cognitive domains, from perception to language and reasoning, dual-route processing appears as a key computational strategy of the human brain to address complex tasks (Marshall & Newcombe, 1973; Dell et al., 1997; Tversky & Kahneman, 1974; Mishkin et al., 1983; Hickok & Poeppel, 2007; Evans, 2008). Dual-route processing relies on the combination of a memory-based and a rule-based route, which are used to process incoming information and choose subsequent actions. The memory-based route draws on Long-Term Memory (LTM) to handle familiar information from past experiences. This type of processing is often fast, automatic, and unconscious, making it highly efficient. In contrast, the rule-based route relies on fresh computations and Working Memory (WM), which is slower but crucial for processing novel information. This idea of dual-route processing can be traced back to the work of William James, who distinguished between actions selected based on habit and those that involve effortful deliberation (James, 1890).
A dual-route processing account has been proposed for numerous tasks, including word repetition (e.g., Goldrick & Rapp, 2007; Nozari et al., 2010). Word repetition involves hearing a word – whether real or a pseudoword – and repeating it aloud. For healthy adult speakers, this is typically a simple task that rarely leads to errors; however, it can present challenges for various populations. For example, babies and toddlers, still developing their language skills, often find word repetition difficult, as they are in the early stages of processing and producing speech. Similarly, children with developmental disorders may struggle with this task in specific ways. In adults, individuals who have experienced neurological events such as a stroke may face varying degrees of difficulty with auditory word repetition. Studying these challenges provides valuable insights into the intricate cognitive and neural processes involved in language. In fact, research on the deficits observed in patients has contributed to the development of an information-processing Cognitive Model for Word Repetition (Dotan & Friedmann, 2015).
The cognitive model for word repetition has two main routes (Figure 1A-Top): a lexical route and a sublexical route. Both routes begin by processing the acoustic input. The lexical route is used for familiar words, activating stored information from long-term memory (LTM), which can be used for their pronunciation. In contrast, the sublexical route handles new words that are not yet stored in LTM, relying on rules to convert a sequence of phonemes into their sequential production. The lexical route is typically fast and efficient, leveraging LTM, and the sublexical route is slower and constrained by WM limitations. However, the sublexical route is crucial for language acquisition in children as well as in adults (e.g., Susan E. Gathercole & Emslie, 1994).
Neuroimaging studies have identified neural pathways that may correspond to these routes. The ventral stream is involved in lexical processing, while the dorsal stream handles sound-to-motor mapping (Hickok & Poeppel, 2007; Rauschecker & Scott, 2009), and damage to this route can lead to impaired speech repetition (Fridriksson et al., 2010). While these studies offer evidence for where word repetition may occur in the brain, the underlying neural mechanisms involved in each processing stage remain largely unknown. Here, we move towards linking the cognitive model to neural mechanisms in the brain by modeling the task using neural networks that simulate dynamics which resemble those of
Figure 1: Linking the Cognitive Model for Word Repetition and Brain Dynamics with Deep Neural Models. (A) Top: A diagram of the cognitive model for word repetition, illustrating the various underlying processing stages. The so-called 'Buffers' represent working memory (WM) components, which have capacity limitations and can only store information temporarily. The so-called 'Lexicons' in the model represent long-term memory (LTM) components, which can store tens of thousands of words and allow for retrieval during processing. Bottom: An encoder-decoder architecture used to model word repetition with a neural network. (B) Known effects from human research: (1) Length Effect: The tendency to make more errors on longer words. This effect is observed in WM components but not in LTM components, due to the capacity limitations of WM. (2) Frequency Effect: The tendency to make fewer errors on words that are more frequent in the language. In contrast to the length effect, this effect is observed in LTM components but not in WM components – frequent words are stored and retrieved more efficiently in LTM. (3) Primacy and Recency Effects: The tendency to make fewer errors on phonemes at the beginning (primacy) and at the end (recency) of a word. Phonemes at the beginning of a word are often more easily encoded and retrieved due to their prominence in speech perception, while phonemes at the end of a word benefit from more recent activation in working memory. (4) Sonority-Sequencing Principle: The principle states that sonority increases towards the nucleus of a syllable (typically the vowel) and decreases afterwards. In CCV structures (C: consonant, V: vowel), consonant clusters show a gradient where the sonority rises towards the vowel, while in VCC structures, sonority decreases towards the final consonant.
the brain. Unlike biological networks, these neural networks are fully observable, which offers an opportunity to study the mechanisms behind word repetition.
We used an Encoder-Decoder (a.k.a. seq2seq; Sutskever et al., 2014) architecture with recurrent neural networks (RNNs), which captures the two main parts of the cognitive model (Figure 1A-Bottom). We trained a large set of models to perform the word-repetition task on the full English vocabulary, weighted by word frequency. We then studied the models behaviorally, and asked whether the errors of the models mimic known phenomena from human studies. We finally studied the models neurally through ablation studies to identify the role of specific subsystems: we studied the errors of our 'patient' models, asking whether they resemble speech-error patterns akin to those identified in human patients, and whether dual-route processing naturally emerges in an otherwise generic neural architecture.
The main contributions of our study are: (1) Encoder-Decoder neural models that perform the word-repetition task; (2) A suite of tests to study human-like processing in the models; (3) A framework to examine if dual-route processing emerges spontaneously in a generic neural network as it learns; (4) 'Patient' models that simulate speech errors in human patients.
# Related Work and Background
# The Dual-Route Processing for Word Repetition
Evidence for the dissociation between two pathways in word repetition comes from neuropsychological studies, which identify two groups of patients with distinct error patterns. One group produces errors indicative of lexical processing, such as sensitivity to word frequency, while the other produces errors indicative of sublexical processing, such as sensitivity to syllabic structure or phoneme frequency (e.g., Goldrick & Rapp, 2007; Nozari et al., 2010). These findings were integrated into a cognitive model for word repetition (Figure 1A).
In this model, the lexical and sublexical routes share a common initial stage in the so-called Auditory Analyzer, where phoneme identities and positions are extracted from word acoustics. This information is transiently held in the Phonological Input Buffer. In general, so-called ‘buffers’ of the model are WM components2, which store information for a relatively short time. Due to their limited capacity, they typically show length effects. Information from the Phonological Input Buffer then flows into the two main routes of the model, the lexical and the sublexical routes.
The lexical route involves accessing and retrieving entries from LTM, which are stored in two lexicons. The Phonological Input Lexicon stores auditory representations of entire words, and the Phonological Output Lexicon stores more abstract representations that are also shared with other tasks, such as reading and naming (Marshall & Newcombe, 1973; Friedmann & Coltheart, 2018; Dotan & Friedmann, 2015). Evidence for selective impairments of each of these lexicons, and therefore for their separate existence, comes from neuropsychological studies showing such a double dissociation (e.g., Shallice, 1981; Caramazza & Hillis, 1990).
In contrast to the lexical route, the sublexical route directly maps input to output phonology, bypassing the lexical system, through a set of conversion rules. These rules control the mapping of short sequences of heard phonemes during word comprehension onto the corresponding sequences for word production. The sublexical route is used to process new words. Since new words lack lexical entries, they cannot be processed fully through the lexical system.
Both routes converge at the Phonological Output Buffer, which is the stage in language production where phonemes are held in working memory and assembled into words (Romani, 1992; Vallar et al., 1997; Shallice et al., 2000). It serves two primary functions: first, as a phonological working memory that maintains phonological information until articulation; and second, as the stage that assembles phonemes into words and combines stems and affixes into complex words (Dotan & Friedmann, 2015; Haluts et al., 2020). This stage therefore has a key role across several word-processing tasks: naming, reading, and repetition of both words and pseudowords.
# Word-Processing Phenomenology
Research on both healthy individuals and patients has revealed several key insights into word repetition. Here, we focus on four established effects. To these, we add two more: one derived from typical WM characteristics and another from phonological theory. These six effects will guide the analyses of the neural models (Figure 1B):
1. Lexicality Effect: Pseudowords (non-words that follow phonological rules but have no meaning) are more prone to errors than real words. This effect is key for differentiating lexical vs. sublexical processing in the two routes, since pseudowords are necessarily processed through the latter route.
2. Frequency Effect: Low-frequency words are more prone to errors than high-frequency words. This effect is key for differentiating lexicons from buffers in the models, since lexicons, as LTM components, but not buffers, are predicted to show frequency effects.
3. Length Effect: Longer words are more prone to errors than short words. This effect is key for differentiating buffers from lexicons in the models, since buffers, as WM components, but not lexicons, have limited capacity (Baddeley et al., 1975).
4. Morphological-Complexity Effect: Morphologically-simple words are more prone to errors than equi-length morphologically-complex words. Morphemes (e.g., 'ing' or 'able') were shown to be stored as basic units, like phonemes, function words, and number words (Dotan & Friedmann, 2015), from which the phonological output buffer composes words. The effective length of morphologically-complex words is therefore shorter than that of morphologically-simple words of equal length, making them less prone to errors related to word length.
5. Primacy and Recency Effect: Phonemes in middle positions of words are more prone to errors than those in early and late positions. A well-established phenomenon in working memory is the serial position effect (Murdock Jr, 1962). In a sequence, items presented at the beginning are better retained due to their saliency, known as the primacy effect. Items presented at the end of the sequence are also more easily recalled due to their recency during retrieval, known as the recency effect. In contrast, items that appear in the middle of the list tend to be forgotten more often. These primacy and recency effects have been consistently demonstrated in tasks such as free recall and immediate serial recall (ISR), as well as in pseudoword repetition tasks (e.g., Hartley & Houghton, 1996; P. Gupta, 2005; P. Gupta et al., 2005; Page & Norris, 2009).
6. Sonority-Gradient Effect: Consonant clusters that violate the Sonority Sequencing Principle are more prone to errors than consonant clusters that obey it. The Sonority Sequencing Principle (SSP; Selkirk, 1984; Clements, 1990) describes how syllables are structured based on the sonority, or loudness, of sounds. It suggests that the central part of a syllable, typically a vowel, is the peak of sonority, and the surrounding consonants should have progressively lower sonority as you move away from the vowel. For example, in the English one-syllable word "plant", the consonant "p" (low sonority) is followed by "l" (high sonority), the vowel "a" forms the peak of sonority, and the consonants "n" (high sonority) and "t" (low sonority) complete the syllable. While many languages follow this pattern, some languages allow for violations of this rule. English follows the SSP but also has exceptions, such as the /s/ + stop clusters (e.g., in 'sport'). Overall, we expect more repetition errors for phoneme sequences that violate the SSP.
# Computational Models for Word Repetition
Our approach draws on influential prior computational models of language processing and short-term memory (e.g., McClelland et al., 1989; Dell et al., 1997; Botvinick & Plaut, 2006; S. Gupta et al., 2020; Sajid et al., 2022). More recent work has attempted to model word repetition by incorporating knowledge of the neuroanatomy of the language system (Ueno et al., 2011; Chang & Lambon Ralph, 2020). However, these models were trained on a relatively small vocabulary—only a few hundred words—far smaller than the lexicon of an average speaker. Additionally, all words were restricted to monosyllables. Here, we leverage advances in machine learning to train a deep neural model on the full lexicon, achieving perfect performance. This enables the use of richer probing datasets that are not limited to monosyllabic words and allows for the exploration of length and morphological effects.
# Experimental Setup
# Datasets
The Training Dataset comprises the 30K most frequent English words, based on the WordFreq Python library (Speer, 2022). We excluded abbreviations and words that were not found in the CMU dictionary (CMU, 2014). Each word was included at least once in the dataset, after which words were sampled by frequency, with replacement, in order to generate $10^6$ total samples. The CMU dictionary provided us with the ARPAbet phonetic transcription of each word, including vowel stress, which, for simplicity, we do not model in this work.
The Word Feature Evaluation Dataset Given the known processing effects from humans (Section - Word-Processing Phenomenology), we created a factorial design with four main dimensions, which allows for the disambiguation of the effects of interest: lexicality effect, morphological-complexity effect, length effect, and frequency effect (see Table 1). The evaluation dataset has 100 words for each of the 12 conditions, summing to a total of 1,200 words. The factorial design allows for splitting the dataset according to any one condition (e.g., 600 short words vs. 600 long words). Real words were selected from the training dataset. Pseudowords were generated using an algorithm that leverages the trigram statistics of the training dataset (New et al., 2004). To enhance sublexicality, we included only pseudowords that were orthographically far from all real words in the training dataset. To quantify this, we computed the Levenshtein edit distance to all real words and normalized it by pseudoword length. We then included pseudowords whose minimal length-normalized edit distance to all real words was at least 0.25. This means that four-letter pseudowords could share all but one phoneme with any real word, whereas eight-letter pseudowords needed to differ by at least two. Finally, the phonetic transcriptions of all pseudowords were generated using the G2P Python library (Park & Kim, 2019). As with the training dataset, vowel stress was removed.
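The pseudoword filtering step described above can be sketched in plain Python (function names are ours, not the paper's code). A candidate is kept only if its length-normalized Levenshtein distance to every real word is at least 0.25:

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_sublexical(pseudoword, lexicon, min_norm_dist=0.25):
    """Keep a pseudoword only if its length-normalized edit distance
    to every real word is at least `min_norm_dist` (0.25 in the paper)."""
    return all(levenshtein(pseudoword, w) / len(pseudoword) >= min_norm_dist
               for w in lexicon)
```

With the 0.25 threshold, a four-letter candidate may differ from a real word by a single phoneme, exactly as the text notes.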
The Sonority Evaluation Dataset To explore whether error rates correlate with the phonotactics of the language, we created a dataset with all the possible consonant-consonant-vowel (CCV) and vowel-consonant-consonant (VCC) combinations, excluding combinations where the same consonant was repeated and those that were in the training dataset. We then quantified the sonority gradient in the resulting syllables by computing the difference between the phoneme classes of the adjacent consonants. That is, following the SSP, we first ordered phoneme classes based on their sonority: glide (1) > liquid (2) > nasal (3) > fricative (4) > plosive (5). Then, for each consonant cluster, we computed the sonority gradient as the difference between the ranks of the two classes in this order. For example, if the first consonant was a plosive and the second one was a nasal, then the sonority gradient was set to $5 - 3 = 2$, and it was set to $3 - 5 = -2$ if the first was a nasal and the second was a plosive. This means that CCV syllables with a positive sonority gradient follow the SSP and those with a negative gradient violate it; and vice versa for VCC syllables.
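The gradient computation reduces to a rank difference over the sonority ordering; a minimal sketch (the dictionary and function name are ours):

```python
# Sonority ranks following the SSP ordering used in the text:
# glide (1) > liquid (2) > nasal (3) > fricative (4) > plosive (5)
SONORITY_RANK = {"glide": 1, "liquid": 2, "nasal": 3,
                 "fricative": 4, "plosive": 5}

def sonority_gradient(c1_class, c2_class):
    """Gradient of a two-consonant cluster: rank(C1) - rank(C2).
    For CCV syllables a positive gradient obeys the SSP; for VCC, negative."""
    return SONORITY_RANK[c1_class] - SONORITY_RANK[c2_class]
```

For the worked example in the text, a plosive followed by a nasal gives 5 - 3 = 2, and the reversed cluster gives -2.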
Table 1: The Word Feature Evaluation Dataset. We created a factorial design to probe the models, which has four main dimensions: (1) Lexicality, (2) Morphological Complexity, (3) Word Length and (4) Word Frequency. An example is given for each of the 12 conditions. The dataset contained 100 samples from each condition.
# Models
Architecture We used a standard Encoder-Decoder architecture (Sutskever et al., 2014), with either simple recurrent (Elman) or Long Short-Term Memory (LSTM; Hochreiter & Schmidhuber, 1997) units; see Figure 1A-Bottom.
Encoder-Decoder The Encoder first passes the tokens through an embedding layer, and then through the recurrent layer (or layers, of RNN or LSTM units). The final hidden state of the Encoder was passed as the initial state of the recurrent layer of the Decoder. For simplicity, the Encoder and Decoder always had the same unit type, hidden size, and number of layers. At each time step in the Decoder, the previous output was fed back to the recurrent network as the input for the next token prediction. The first input embedding of the Decoder was the Start-of-Sequence token. The weights of the input embedding layers in the Encoder and Decoder were shared, and the output embedding layer of the Decoder used their transpose. After being embedded, tokens were passed through a dropout layer.
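To make the dataflow concrete, here is a toy, untrained NumPy sketch of this architecture with simple (Elman) units and tied embeddings. All sizes and weight names are illustrative, and a real model would be trained (e.g., in PyTorch) rather than use random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, H = 10, 8, 16          # toy vocab, embedding, and hidden sizes
emb = rng.normal(0, 0.1, (V, E))                                     # shared input embedding
W_xe, W_he = rng.normal(0, 0.1, (H, E)), rng.normal(0, 0.1, (H, H))  # encoder weights
W_xd, W_hd = rng.normal(0, 0.1, (H, E)), rng.normal(0, 0.1, (H, H))  # decoder weights
W_out = rng.normal(0, 0.1, (E, H))   # projects hidden state back to embedding space

def encode(tokens):
    """Run the encoder over a phoneme-token sequence; return final hidden state."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(W_xe @ emb[t] + W_he @ h)
    return h

def decode(h, max_len, sos=0, eos=1):
    """Greedy decoding: each predicted token is fed back as the next input.
    The output layer reuses the transposed input embedding (tied weights)."""
    out, t = [], sos
    for _ in range(max_len):
        h = np.tanh(W_xd @ emb[t] + W_hd @ h)
        logits = emb @ (W_out @ h)   # (V,) scores via the tied embedding
        t = int(np.argmax(logits))
        if t == eos:
            break
        out.append(t)
    return out
```

The encoder's final hidden state initializing the decoder, and the shared/transposed embedding, mirror the description above; dropout is omitted for brevity.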
Training Procedure LSTM models were trained for 100 epochs, at which point our best models had perfectly learned the training data. RNN models required more epochs to converge and were trained for 150 epochs, but ultimately failed to achieve a zero error rate on the training data. We used the standard ADAM optimizer (Kingma & Ba, 2014) and a variant of the cross-entropy loss that ignores the pad tokens used to align sequences for batching.
Model Selection After a preliminary parameter search, we ran a finer grid search over models with a single layer (see Table 2 in Appendix). For model selection, we used the following criteria: (1) perfect accuracy on the CV training splits; (2) highest accuracy on the CV validation splits; (3) smallest model complexity, in terms of number of parameters.
# Analyses
Measures We used two measures for model performance: (1) Error rate, the fraction of words that the model fails to perfectly repeat, and (2) Edit distance, the average of the Levenshtein distances (Levenshtein, 1965) between each predicted phoneme sequence and its corresponding ground truth.
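Both measures can be sketched in a few lines of plain Python (function names are ours); predictions and targets are compared as phoneme tuples:

```python
def levenshtein(a, b):
    """Dynamic-programming edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def evaluate(predictions, targets):
    """Error rate: fraction of sequences not repeated exactly.
    Edit distance: mean Levenshtein distance to the ground truth."""
    errors = sum(p != t for p, t in zip(predictions, targets))
    total_dist = sum(levenshtein(p, t) for p, t in zip(predictions, targets))
    return errors / len(targets), total_dist / len(targets)
```

Note that a single wrong phoneme counts fully toward the error rate but only incrementally toward the mean edit distance, which is why the two measures can dissociate.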
Model Evaluation After training and model selection, we computed the error rate and the edit distance of the selected models on the Word Feature Evaluation Dataset and on the Sonority Evaluation Dataset.
Behavioral Study To determine which factors—lexicality, length, and morphological complexity—best predict model performance, we regressed model performance on all factors, including interaction terms. Since regression coefficients are sensitive to possible correlations among factors, we also conducted a Feature-Importance (FI) Analysis for the main effects, which is more robust to such correlations (Breiman, 2001).
Neural Study To study the neural representations of phoneme sequences, we conducted single-unit ablation studies by zeroing the output values of units in the recurrent layer (Lakretz et al., 2019; Lakretz, Hupkes, et al., 2021). We then evaluated the performance of the ablated model on the Word Feature Evaluation and Sonority Evaluation Datasets, and compared their results with those of the intact model. To study distributed neural representations across all units, we trained Metric Learning Encoding Models (MLEMs; Jalouzot et al., 2024; Salle et al., 2024), which reveal which linguistic factors best predict neural distances among words.
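A single-unit ablation of this kind can be sketched as a wrapper around a recurrent step function that zeroes the targeted unit's output at every timestep (all names and the toy Elman step are illustrative, not the paper's code):

```python
import numpy as np

def make_ablated_step(step_fn, unit):
    """Wrap a recurrent step function so the given unit's output is
    zeroed at every timestep (single-unit ablation)."""
    def ablated_step(x, h):
        h_next = step_fn(x, h).copy()
        h_next[unit] = 0.0
        return h_next
    return ablated_step

# Toy Elman step with fixed random weights (hypothetical, untrained).
rng = np.random.default_rng(0)
W_in, W_rec = rng.normal(0, 0.1, (8, 4)), rng.normal(0, 0.1, (8, 8))
step = lambda x, h: np.tanh(W_in @ x + W_rec @ h)
ablated = make_ablated_step(step, unit=3)
```

Running the intact and ablated models over the evaluation datasets and comparing their error rates then attributes a functional role to the ablated unit.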
# Results
# Behavioral Study: Speech Errors and Main Effects in the Neural Model for Word Repetition
The NWR Model Fully Accomplishes the Word Repetition Task LSTMs, but not RNNs, could learn to perform the task perfectly on the training data. The hyperparameters of the optimal model among them (see Model Selection) were: batch size: 2048, hidden size: 128, dropout: 0, learning rate: 0.001. This model achieved a zero error rate when trained on the complete lexicon of the Training Dataset. We refer to this selected model as the Neural Word Repetition (NWR) model. Figure 2 shows the performance of this model on our evaluation datasets, which we comment on below. To test the robustness of the results, we trained 10 more models from different seeds, using the optimal hyperparameters from the grid search. All results are reported in the appendix, showing strong consistency across models.
Figure 2: Speech Errors of the NWR Model. We probed the model's behavior for several processing effects known from humans: (A) Length effect and its interaction with lexicality and morphological complexity. The length effect is observed for pseudowords only. For real words, no errors are expected, since the intact (non-ablated) model performs perfect word repetition of real words. An interaction with morphological complexity is observed: except for nine-phoneme sequences, morphologically complex words are processed more robustly. (B) Feature Importance (FI) for all main dimensions: lexicality, morphological complexity, and word length (frequency was omitted due to its strong correlation with lexicality, assuming pseudowords have zero frequency), estimated from a regression model trained to predict edit-distance errors for all words in the Word Feature Evaluation Dataset. The signs of the FIs were determined from the regression coefficients. (C) Error rate as a function of the relative position of the phoneme in the word. Primacy and recency effects are observed: the model tends to make more errors in middle positions compared to early or late ones. (D) Sonority-sequencing effect: error rate as a function of the sonority gradient in a two-consonant cluster, for both CCV and VCC clusters (C - consonant, V - vowel). Overall, the model follows the Sonority Sequencing Principle (SSP), making more errors when sonority gradients violate the SSP.
Lexicality Effect The NWR model was able to perfectly reproduce all real words in the test set, which was expected, since all real words appeared in the training dataset. However, the model also perfectly reproduced the vast majority of pseudowords in the Word Feature Evaluation Dataset (97.25%). This suggests good generalization capabilities of the model. This difference between real and pseudowords suggests a lexicality effect (Figure 2A; blue vs. red lines), which was significant in the regression model (Figure 2B; p-value ≪ 0.05; see Experimental Setup).
Length Effect As seen in Figure 2A, the NWR model makes more errors on longer pseudowords (ρ = 0.220, p-value ≪ 0.05). This behavior matches what we would expect from a model that employs a mechanism akin to working memory (e.g., a phonological output buffer) for processing words that are not part of an already learned lexicon. The length effect was significant in the regression model (Figure 2B, p-value ≪ 0.05).
Morphological-Complexity Effect We next asked whether the model made more errors on morphologically simple words, compared to complex ones. This would be expected if morphemes are processed as discrete units, thus reducing the effective size of the phoneme sequence. Figure 2A (continuous vs. dashed lines) suggests a morphological-complexity effect: the model started making errors on morphologically simple pseudowords of phoneme-sequence length 7 and greater, and on morphologically complex pseudowords only at length 9. However, the regression model found no significant main effect of morphological complexity (p-value > 0.05) or interaction with word length (p-value > 0.05).
Primacy and Recency Effect Next, we studied whether the model made more errors at particular positions within the phoneme sequences. Figure 2C shows the error-rate distribution for all real (red) and pseudo (blue) words as a function of the position of the phoneme in the sequence (if more than a single error occurred for a given sequence, each error was counted independently). Overall, the model made more errors on phonemes in middle positions compared to positions near the beginning or the end of the sequence. This pattern resembles primacy and recency effects in humans, typical of working-memory processes (e.g., P. Gupta, 2005; P. Gupta et al., 2005).
Sonority-Gradient Effect Finally, we studied whether phoneme processing in the NWR model follows the sonority sequencing principle (SSP). Figure 2D shows speech errors made by the NWR model on the Sonority Evaluation Dataset. For CCV syllables, the model made fewer repetition errors on syllables that conform with the SSP (i.e., having a positive sonority gradient; ρ = −0.262, p-value ≪ 0.05). For VCC syllables, the SSP is reversed and so are the results: the NWR model makes more errors with positive sonority gradients (ρ = 0.114, p-value ≪ 0.05), which is when the SSP is violated.
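The sonority-gradient computation can be illustrated as follows. The sonority scale below is a hypothetical assignment in the spirit of Clements (1990); the paper's exact scale and its treatment of sonority plateaus may differ (here a gradient of zero is counted as a violation):

```python
# Hypothetical sonority scale: higher value = more sonorous.
SONORITY = {"P": 1, "T": 1, "K": 1, "B": 1, "D": 1, "G": 1,  # plosives
            "F": 2, "S": 2, "SH": 2, "V": 2, "Z": 2,         # fricatives
            "M": 3, "N": 3, "NG": 3,                         # nasals
            "L": 4, "R": 4,                                  # liquids
            "W": 5, "Y": 5}                                  # glides

def sonority_gradient(c1, c2):
    """Positive gradient = rising sonority from c1 to c2."""
    return SONORITY[c2] - SONORITY[c1]

def violates_ssp(cluster, position):
    """CCV onsets should rise in sonority toward the vowel;
    VCC codas should fall away from it."""
    g = sonority_gradient(*cluster)
    return g <= 0 if position == "onset" else g >= 0

print(violates_ssp(("P", "L"), "onset"))  # PL-V rises: conforms -> False
print(violates_ssp(("L", "P"), "onset"))  # LP-V falls: violates -> True
```

With such a scale, the reported correlations amount to regressing per-syllable error rates on the signed gradient, separately for onset (CCV) and coda (VCC) clusters.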
Figure 3: Neural Representations of Single Phonemes in the NWR Model. (A) Pairwise Euclidean distances among the 39 phoneme representations, taken from the hidden state of the Encoder after processing each phoneme individually. Rows and columns are sorted based on unsupervised hierarchical clustering (dendrogram on the left). Two macro-clusters are observed, corresponding to vowels and consonants. (B) Feature Importance for vowel and consonant features obtained from a Metric-Learning Encoding Model (Jalouzot et al., 2024). For vowels (top), Height refers to the height of the tongue when pronouncing the phoneme (e.g., high, low). Backness refers to the horizontal placement of the tongue (e.g., back, front). Whether a vowel was a diphthong was encoded as a binary variable. Asterisks denote statistical significance; n.s. - not significant. For consonants (bottom), place of articulation is where along the vocal tract the consonant is pronounced (e.g., coronal, labial). Manner of articulation describes the interactions of speech organs to produce a sound (e.g., fricative, nasal). Voiced is a binary feature which encodes whether vibration of the vocal cords is necessary for pronunciation. Error bars denote standard error.
# Neural Study: Linking Linguistic Features to Neural Representations in the NWR Model
The behavioral effects described above must stem from the model’s underlying neural representations and mechanisms for processing phoneme sequences. In this section, we take initial steps toward understanding these by studying single-phoneme representations and conducting ablation experiments on individual model units.
The Neural Organization of Single-Phoneme Representations in the NWR Model We first investigated how the NWR model internally represents single phonemes, the basic units of spoken words. Prior research in cognitive science and neuroscience has shown that human phoneme representations are structurally organized. Specifically, during speech comprehension, they are grouped by linguistic features such as manner-of-articulation (e.g., [plosive], [fricative]; Chomsky & Halle (1968)). What kind of neural representations for phonemes has the NWR model developed during training?
To study this, we presented individual phoneme tokens to the Encoder, extracting their corresponding embeddings from its hidden layer. To analyze the pairwise relationships among all phonemes, we computed the Euclidean distances between all embedding pairs. Figure 3A displays the resulting dissimilarity matrix for all phonemes. This matrix is sorted according to a dendrogram (left side) generated using unsupervised hierarchical clustering (Pedregosa et al., 2011). Remarkably, despite the model receiving no explicit acoustic information during training, it learned to segregate vowels and consonants into distinct regions within its neural space. This clear separation is evident in the two prominent clusters for vowels and consonants visible in the dissimilarity matrix, and also after dimensionality reduction (Figure 13).
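The dissimilarity analysis can be sketched as below, with random vectors standing in for the Encoder's hidden states (the subsequent hierarchical clustering step is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
n_phonemes, hidden = 39, 128
# Stand-in embeddings: in the actual analysis these are the Encoder's
# hidden states after processing each phoneme individually.
E = rng.normal(size=(n_phonemes, hidden))

# Pairwise Euclidean distance matrix via broadcasting.
D = np.sqrt(((E[:, None, :] - E[None, :, :]) ** 2).sum(-1))

print(D.shape, bool(np.allclose(D, D.T)), bool(np.allclose(np.diag(D), 0)))
```

The resulting symmetric matrix with a zero diagonal is exactly the kind of object that Figure 3A visualizes after dendrogram-based reordering.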
Beyond the broad consonant and vowel distinctions, we observed more granular, structured relationships within these groups. For instance, within the consonant clusters, specific sounds such as the plosives /p/, /t/ and /k/ are grouped together. Similarly, among vowels, some diphthongs show clear clustering. However, a straightforward organization based purely on surface phonological features was not immediately apparent from the dissimilarity matrices alone.
To determine whether specific phonological features underlie the neural organization of phonemes in the NWR model, we employed Metric Learning Encoding Models. MLEMs are designed to model neural distances from differences in theoretical features, which, in our case, were phonological features. This method assigns a Feature-Importance (FI) score to each phonological feature, quantifying how strongly a difference in that feature predicts a large neural distance.
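The core MLEM idea can be sketched with a toy example (a simplified illustration of Jalouzot et al. (2024): pairwise neural distances are modeled as a weighted sum of feature mismatches, and the learned weights play the role of FI scores; the actual MLEM involves constrained metric fitting and cross-validation, whereas this toy version uses plain least squares on simulated data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical phoneme feature table: (manner, place, voicing) as integers.
features = rng.integers(0, 3, size=(10, 3))
true_w = np.array([2.0, 0.5, 0.1])  # manner dominates, as in the paper

# Build one row per phoneme pair: binary feature-mismatch vector and a
# simulated neural distance generated from the true weights plus noise.
X, y = [], []
for i in range(10):
    for j in range(i + 1, 10):
        mismatch = (features[i] != features[j]).astype(float)
        X.append(mismatch)
        y.append(true_w @ mismatch + rng.normal(0, 0.05))
X, y = np.array(X), np.array(y)

# Least-squares fit: the recovered weights rank features by how strongly a
# mismatch in that feature predicts a large neural distance.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))  # recovers roughly [2.0, 0.5, 0.1]
```

Applied to the real embeddings, the analogous weights are the per-feature FIs shown in Figure 3B.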
Figure 3B illustrates the resulting FIs for vowels (top) and consonants (bottom). For vowels, three features were included in the analysis – Height, Backness, and whether the sound was a diphthong (each diphthong was, for simplicity, encoded as a single token in the NWR model). MLEMs showed that diphthongs predict the largest neural distances in the Encoder.
For consonants, we contrasted the effects of place, manner-of-articulation, and voicing on neural distances. MLEMs revealed that changes in manner-of-articulation features corresponded to the largest distances in the neural space. This finding aligns with human behavioral observations, where manner-of-articulation features exhibit the greatest discriminative power in English, and also dominate neural representations in the human auditory cortex (Mesgarani et al., 2014; Lakretz et al., 2018; Lakretz, Ossmy, et al., 2021).
Speech Errors of Neurally-Damaged NWR Models Neuropsychological research has shown that humans can exhibit highly characteristic speech errors after localized brain damage, which can be explained by selective impairments to specific components in the cognitive model for word repetition (Figure 1A-top). If during training, the NWR model developed neural circuits akin to the cognitive model, we would expect characteristic speech errors in neurally-damaged NWR models that resemble those reported in humans. Here, we take first steps to test this hypothesis by conducting ablation studies, removing a single unit at a time from the recurrent layer of the NWR model and studying the resulting speech errors in the ablated model.
Dual-Route Processing in the NWR model? The single-unit ablation study resulted in 128 ablated NWR models. Figure 4 summarizes the errors made by all 128 ablated models on the Word Feature Evaluation Dataset. Values on the x and y-axes show the percentage of errors made by the ablated model on real and pseudo words, respectively. Each dot represents a different ablated NWR model.
If single-unit ablation were to lead to some ablated models appearing in the lower triangular region of the plot (i.e., making more lexical errors), while other models appeared in the upper triangular region (i.e., making more sublexical errors), this would provide support for the emergence of dual-route processing within the NWR model. Of course, the absence of such evidence is not evidence of the absence of dual-route processing, but a positive result would offer strong support. What did we find in the NWR model?
Figure 4 shows that most ablated models had an error rate under 20% on the evaluation dataset, showing general robustness to ablation. However, single-unit ablations resulted in higher error rates when performed in the Encoder (blue) than in the Decoder (red). This suggests that sequence encoding uses smaller, less redundant circuits than sequence production.
Figure 4: Speech Errors of all ablated models. We performed a single-unit ablation study where each hidden layer unit (blue for Encoder, red for Decoder) was ablated one at a time. The model’s performance was then re-evaluated on the factorial dataset. Each dot represents a single ablated model. The axis values indicate the percentage of error following ablation, specifically contrasting real vs. pseudowords from the factorial dataset. Units on the diagonal had a similar effect on real and pseudowords; units in the upper triangle caused more sublexical errors.
Figure 4 further shows that all ablated models lie close to the diagonal, in the upper triangle of the scatter (see Appendix for the distribution of the distances from the diagonal). That is, single-unit ablations cause more errors in sublexical than in lexical processing. This suggests only a single, rather than a double, dissociation between the two routes of the cognitive model.
Interestingly, two units (numbers 31 and 49) caused a large increase in speech errors, with one unit (49) causing error rates up to 80% for both real and pseudo words, as discussed next.
Speech Errors Following the Ablation of Unit 49 Given the large effect on error rate following the ablation of unit 49, we conducted an in-depth analysis of the corresponding ablated NWR model. Fig. 5 shows the behavioral analysis of the ablated model, with its performance in the different conditions. We highlight several key observations: In Figure 5A, we find a strong length effect (ρ = 0.665, p-value ≪ 10⁻³), but no lexicality effect (with similar difficulty for real and pseudo words) and no morphological-complexity effect. Figure 5B further quantifies this, showing that the length effect dominates the results. Furthermore, Figure 5C shows that ablating unit 49 eliminates the recency effect of the original model, and 5D shows that the sonority-gradient effect is preserved for both CCV and VCC syllables. An analysis of the error patterns of this unit revealed that the model prematurely stops sequence production during word repetition (see Appendix D). This premature stopping explains the strong length effect (panels A&B) and the absence of a recency effect (panel C).
Figure 5: Speech errors of the NWR model following the ablation of unit 49. Panels are organized as in Figure 2.
# Discussion
We introduce a novel approach to investigate whether the dual-route cognitive model for word repetition can be mapped onto neural processing within an artificial neural model trained on this task. We propose three new evaluation methods: (1) assessing human-like linguistic behavior in the neural model using a defined set of criteria; (2) determining the emergence of dual-route processing by contrasting lexical and sublexical errors; and (3) performing component-wise tests of the cognitive model’s individual parts, by providing a list of expected effects for each component.
Unlike previous studies, we trained our models on a large lexicon including polysyllabic and multi-morphemic words, also accounting for the natural frequency of words. Our results show that the neural word repetition (NWR) model successfully learns to reproduce all words in the training lexicon and can accurately repeat most pseudowords in the evaluation test. The model exhibited several human-like processing effects, including a length effect, primacy and recency effects, and adherence to the sonority sequencing principle. However, it did not show a sensitivity to morphological complexity.
Our analysis of the NWR model’s single-phoneme representations revealed an organization into distinct vowel and consonant clusters. This structure emerged during training based solely on phoneme co-occurrence statistics, without any acoustic input. We found that manner-of-articulation features most strongly predicted neural distances for consonants, while the diphthong vs. monophthong distinction was key for vowels. In the above respects at least, the representations of phonemes by the NWR model were consistent with human processing.
To investigate whether dual-route processing emerges during training, we conducted ablation studies, simulating neural damage in the model. The resulting 'patient' models displayed a tendency to make both lexical and sublexical errors. As seen in Figure 4, most ablated models clustered around the diagonal, suggesting that ablating a single unit tended to impact both lexical and sublexical processing similarly, or at least caused more errors in sublexical processing, but not the other way around. This pattern indicates a single dissociation between lexical and sublexical processing, or potentially even no clear dissociation, at least not at the single-unit level. Consistently, a follow-up analysis of how lexical and sublexical information is encoded by different units of the model did not reveal distinct, lexicality-based separation of units (Figures 12&14). These results more closely align with the view that lexical and sublexical processing are entangled, exhibiting no sharp boundaries between them (e.g., Regev et al., 2024).
Overall, this study takes initial steps toward bridging the gap between the cognitive model of word repetition and its underlying neural mechanisms in the human brain by developing a neural model that can be analyzed at both behavioral and neural levels. By training the model on a large lexicon and systematically examining its behavior, we provide evidence that key human-like processing effects can emerge during training, including those related to working memory, without explicitly introducing working-memory dynamics into the model. Future research should investigate how lexical and sublexical information is represented across different units of the model, how phoneme sequences are neurally encoded, and whether dual-route processing can be more explicitly induced by incorporating working-memory dynamics into the model, or by adding architectural constraints inspired by human neuroanatomy.
# Acknowledgments
This work was supported by grants ComCogMean (Projet ANR-23-CE28-0016), FrontCog (ANR-17-EURE-0017), European Research Council ERC Grant Agreement N° 788077–Orisem, and ANR-10-IDEX-0001-02. It was performed using HPC resources from GENCI–IDRIS (Grant 2024-AD011015802). Special thanks to Ali Al-Azem and Louis Jalouzot for their valuable feedback.
# References
Baddeley, A. D., Thomson, N., & Buchanan, M. (1975). Word length and the structure of short-term memory. Journal of verbal learning and verbal behavior, 14(6), 575–589.
Botvinick, M. M., & Plaut, D. C. (2006). Short-term memory for serial order: a recurrent neural network model. Psychological review, 113(2), 201.
Breiman, L. (2001). Random forests. Machine learning, 45, 5–32.
Caramazza, A., & Hillis, A. E. (1990). Where do semantic errors come from? Cortex, 26(1), 95–122.
Chang, Y.-N., & Lambon Ralph, M. A. (2020). A unified neurocomputational bilateral model of spoken language production in healthy participants and recovery in poststroke aphasia. Proceedings of the National Academy of Sciences, 117(51), 32779–32790.
Chomsky, N., & Halle, M. (1968). The sound pattern of English. Harper & Row.
Clements, G. N. (1990). The role of the sonority cycle in core syllabification. Papers in Laboratory Phonology I/Cambridge UP.
Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. A. (1997). Lexical access in aphasic and nonaphasic speakers. Psychological review, 104(4), 801.
Dotan, D., & Friedmann, N. (2015). Steps towards understanding the phonological output buffer and its role in the production of numbers, morphemes, and function words. Cortex, 63, 317–351.
Evans, J. S. B. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol., 59(1), 255–278.
Fridriksson, J., Kjartansson, O., Morgan, P. S., Hjaltason, H., Magnusdottir, S., Bonilha, L., & Rorden, C. (2010). Impaired speech repetition and left parietal lobe damage. Journal of Neuroscience, 30(33), 11057–11061.
Friedmann, N., & Coltheart, M. (2018). Types of developmental dyslexia. Handbook of communication disorders: Theoretical, empirical, and applied linguistic perspectives, 721–752.
Goldrick, M., & Rapp, B. (2007). Lexical and post-lexical phonological representations in spoken production. Cognition, 102(2), 219–260.
Gu, A., & Dao, T. (2023). Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752.
Gupta, P. (2005). Primacy and recency in nonword repetition. Memory, 13(3-4), 318–324.
Gupta, P., Lipinski, J., Abbs, B., & Lin, P.-H. (2005). Serial position effects in nonword repetition. Journal of Memory and Language, 53(1), 141–162.
Gupta, S., Shukla, R. S., Shukla, R. K., & Verma, R. (2020). Deep learning bidirectional lstm based detection of prolongation and repetition in stuttered speech using weighted mfcc. International Journal of Advanced Computer Science and Applications, 11(9).
Haluts, N., Trippa, M., Friedmann, N., & Treves, A. (2020). Professional or amateur? the phonological output buffer as a working memory operator. Entropy, 22(6), 662.
Hartley, T., & Houghton, G. (1996). A linguistically constrained model of short-term memory for nonwords. Journal of Memory and Language, 35(1), 1–31. doi: 10.1006/jmla.1996.0001
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature reviews neuroscience, 8(5), 393–402.
Hochreiter, S., & Schmidhuber, J. (1997, November). Long short-term memory. Neural Computation, 9(8), 1735–1780. doi: 10.1162/neco.1997.9.8.1735
Jalouzot, L., Sobczyk, R., Lhopitallier, B., Salle, J., Lan, N., Chemla, E., & Lakretz, Y. (2024). Metric-learning encoding models identify processing profiles of linguistic features in bert’s representations. Retrieved from https:// arxiv.org/abs/2402.11608
James, W. (1890). The principles of psychology. Henry Holt.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. International Conference on Learning Representations.
Lakretz, Y., Chechik, G., Cohen, E.-G., Treves, A., & Friedmann, N. (2018). Metric learning for phoneme perception. arXiv preprint arXiv:1809.07824.
Lakretz, Y., Hupkes, D., Vergallito, A., Marelli, M., Baroni, M., & Dehaene, S. (2021). Mechanisms for handling nested dependencies in neural-network language models and humans. Cognition, 213, 104699.
Lakretz, Y., Kruszewski, G., Desbordes, T., Hupkes, D., Dehaene, S., & Baroni, M. (2019). The emergence of number and syntax units in lstm language models. In Proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: Human language technologies, volume 1 (long and short papers) (pp. 11– 20).
Lakretz, Y., Ossmy, O., Friedmann, N., Mukamel, R., & Fried, I. (2021). Single-cell activity in human stg during perception of phonemes is organized according to manner of articulation. NeuroImage, 226, 117499.
Levenshtein, V. I. (1965). Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10, 707–710.
Marshall, J. C., & Newcombe, F. (1973). Patterns of paralexia: A psycholinguistic approach. Journal of psycholinguistic research, 2, 175–199.
McClelland, J. L., St. John, M., & Taraban, R. (1989). Sentence comprehension: A parallel distributed processing approach. Language and cognitive processes, 4(3-4), SI287– SI335.
Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). Phonetic feature encoding in human superior temporal gyrus. Science, 343(6174), 1006–1010.
Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: two cortical pathways. Trends in neurosciences, 6, 414–417.
Murdock Jr, B. B. (1962). The serial position effect of free recall. Journal of experimental psychology, 64(5), 482.
New, B., Pallier, C., Brysbaert, M., & Ferrand, L. (2004). Lexique 2: A new french lexical database. Behavior Research Methods, Instruments, & Computers, 36(3), 516–524.
Nozari, N., Kittredge, A. K., Dell, G. S., & Schwartz, M. F. (2010). Naming and repetition in aphasia: Steps, routes, and frequency effects. Journal of memory and language, 63(4), 541–559.
Page, M., & Norris, D. (2009). A model linking immediate serial recall, the hebb repetition effect and the learning of phonological word forms. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1536), 3737– 3753.
Park, K., & Kim, J. (2019). g2pe. https://github.com/ Kyubyong/g2p. GitHub.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., . . . Duchesnay, E. (2011). Scikitlearn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nature neuroscience, 12(6), 718–724.
Regev, T. I., Kim, H. S., Chen, X., Affourtit, J., Schipper, A. E., Bergen, L., . . . Fedorenko, E. (2024). High-level language brain regions process sublexical regularities. Cerebral Cortex, 34(3), bhae077.
Romani, C. (1992). Are there distinct input and output buffers? evidence from an aphasic patient with an impaired output buffer. Language and Cognitive Processes, 7(2), 131–162.
Sajid, N., Holmes, E., Costa, L. D., Price, C., & Friston, K. (2022). A mixed generative model of auditory word repetition. bioRxiv, 2022–01.
Salle, J., Jalouzot, L., Lan, N., Chemla, E., & Lakretz, Y. (2024). What makes two language models think alike? arXiv preprint arXiv:2406.12620.
Selkirk, E. (1984). On the major class features and syllable theory. Language Sound Structure: Studies in Phonology/MIT Press, 107–136.
Shallice, T. (1981). Neurological impairment of cognitive processes. British medical bulletin, 37(2), 187–192.
Shallice, T., Rumiati, R. I., & Zadini, A. (2000). The selective impairment of the phonological output buffer. Cognitive Neuropsychology, 17(6), 517–546.
Speer, R. (2022, September). rspeer/wordfreq: v3.0. Zenodo. doi: 10.5281/zenodo.7199437
Gathercole, S. E., Willis, C. S., Baddeley, A. D., & Emslie, H. (1994). The children’s test of nonword repetition: A test of phonological working memory. Memory, 2(2), 103–127. doi: 10.1080/09658219408258940 (PMID: 7584287)
Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. CoRR, abs/1409.3215. Retrieved from http://arxiv.org/abs/ 1409.3215
Tversky, A., & Kahneman, D. (1974, September). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131. doi: 10.1126/science.185.4157 .1124
Ueno, T., Saito, S., Rogers, T. T., & Ralph, M. A. L. (2011). Lichtheim 2: synthesizing aphasia and the neural basis of language in a neurocomputational model of the dual dorsalventral language pathways. Neuron, 72(2), 385–396.
Carnegie Mellon University. (2014). The CMU Pronouncing Dictionary. Retrieved 2025-02-10, from http://www.speech.cs.cmu.edu/cgi-bin/cmudict
Vallar, G., Di Betta, A. M., & Silveri, M. C. (1997). The phonological short-term store-rehearsal system: patterns of impairment and neural correlates. Neuropsychologia, 35(6), 795–812.
# Appendices
A Word Feature Evaluation Dataset The length of each word is the number of phonemes it contains, not the number of letters. Short words have 3, 4, or 5 phonemes; long words have 7, 8, or 9. Low-frequency (real) words have a Zipf frequency of up to 3.5, and high-frequency words have a Zipf frequency of at least 4.0. A word is morphologically complex if it contains a prefix or suffix appended to a distinguishable root (e.g., restart is complex, but repeat is simple, because -peat does not stand on its own in the same sense).
B Model Selection Before settling on the hyperparameter ranges for our final grid search, we first ran a series of preliminary tests to explore the hyperparameter space. To facilitate training by preventing the accumulation of prediction errors, we implemented a teacher-forcing procedure. That is, the predicted token is replaced with the ground-truth token, with a given probability. We ultimately found its effects to be detrimental to learning. Far more significant are the effects of the learning and dropout rates. We found no clear advantage in increasing the number of recurrent layers beyond one layer.
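The teacher-forcing mechanism can be sketched as a decoder loop (an illustrative sketch; the function names and the toy echo "model" are hypothetical, not the authors' code):

```python
import random

def decode(model_step, target, tf_prob, rng=random.Random(0)):
    """Greedy decoding in which, with probability tf_prob, the token fed
    back to the model is the ground truth instead of the prediction."""
    prev, out = "<SOS>", []
    for t, gold in enumerate(target):
        pred = model_step(prev, t)   # model's next-token prediction
        out.append(pred)
        # Teacher forcing: replace the predicted token with the true one.
        prev = gold if rng.random() < tf_prob else pred
    return out

# With tf_prob=1 the model always sees ground truth; with tf_prob=0 its own
# (possibly wrong) predictions are fed back and errors can accumulate.
echo = lambda prev, t: prev          # toy 'model' that repeats its input
print(decode(echo, ["F", "IH", "SH"], tf_prob=1.0))  # ['<SOS>', 'F', 'IH']
```

Setting `tf_prob` between 0 and 1 yields the probabilistic scheme described above; in this study its effect on learning turned out to be detrimental.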
We explored all combinations of hidden size (64 or 128), dropout rate (0, 0.1, 0.2), batch size (1024, 2048, 4096) and learning rate (5·10⁻³, 10⁻³, 5·10⁻⁴). We followed a 5-fold cross-validation (CV) procedure. Each CV split contained 30k training words, sampled by frequency to generate 10⁶ samples, as in the complete training dataset. To avoid overfitting, we used early stopping, choosing the 75th epoch (the middle of the period between epochs 65 and 85, when the model first achieved a stable zero error rate on the Training Dataset).
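The searched grid can be written down compactly (training and 5-fold cross-validation are abstracted into a placeholder scoring function; the toy score below is arbitrary, chosen only so the example runs end to end, and does not reflect the real CV results):

```python
from itertools import product

grid = {"hidden_size": [64, 128],
        "dropout": [0.0, 0.1, 0.2],
        "batch_size": [1024, 2048, 4096],
        "lr": [5e-3, 1e-3, 5e-4]}

def cv_score(cfg):
    # Placeholder for 5-fold cross-validated error (lower = better);
    # an arbitrary toy score so the example is runnable.
    return cfg["dropout"] + abs(cfg["lr"] - 1e-3)

# Enumerate every combination and pick the configuration with the best score.
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
best = min(configs, key=cv_score)
print(len(configs))  # 54 hyperparameter combinations
```

The full grid thus spans 2 × 3 × 3 × 3 = 54 configurations, each evaluated with 5-fold CV.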
Table 2: Hyperparameter ranges for first grid search.
We settled on two hidden sizes for the grid search, 64 and 128. This equates to 256 (512) hidden units for both the encoder and the decoder, so 512 (1024) hidden units total for RNNs and LSTMs, respectively. We found a handful of promising models with hidden size 64 that had not yet converged at the end of 100 epochs. Many of them learned to complete the task perfectly on the training set after a second round of training for 150 epochs. The results of the model with the earliest stable zero error rate among them are included below. An ablation study on this model revealed a neuron whose ablation caused the same behavioral effect as the ablation of neuron 49 in our chosen hidden-size-128 model discussed above: a consistent premature emission of End-of-Sequence tokens.
# C Analysis of the NWR model
Types of error The edit distance is computed on the basis of 3 operations: insertions, deletions, and substitutions. We kept track of the average number of each operation per word for every condition possible in our Word Feature Evaluation Dataset. Those numbers are reported in Figure 8A.
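Counting the three operation types can be done by backtracing the Levenshtein table, as in the assumed implementation sketch below (illustrative of how the per-condition averages in Figure 8A could be obtained, not the authors' code):

```python
def edit_ops(ref, hyp):
    """Count insertions, deletions and substitutions along one optimal
    alignment between a target (ref) and a predicted (hyp) sequence."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = ref[i - 1] != hyp[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    # Backtrace from the bottom-right corner, counting each operation type.
    ops = {"ins": 0, "del": 0, "sub": 0}
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i-1][j-1] + (ref[i-1] != hyp[j-1]):
            ops["sub"] += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops["del"] += 1       # phoneme of ref missing in hyp
            i -= 1
        else:
            ops["ins"] += 1       # extra phoneme in hyp
            j -= 1
    return ops

print(edit_ops(["F", "IH", "SH", "IH", "NG"], ["F", "IH", "SH"]))
# {'ins': 0, 'del': 2, 'sub': 0}
```

A truncated prediction, as after the ablation of unit 49, shows up as pure deletions, while a swapped phoneme shows up as a substitution.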
# D Ablation Study
Ablation of Unit 49 Given the results reported in Figure 5, we examined the predicted phonemes over the factorial design dataset. We observed that this ablated model most often output end-of-sequence tokens (<EOS>) from positions 4 to 8 up to the end of the word, with no apparent correlation with factors other than length. Some phoneme substitutions could also be observed sporadically, although no pattern was easily identifiable. For example:
[F, R, EY, M, W, ER, K] → [F, R, EY, M, W, ER, <EOS>]
[F, IH, SH, IH, NG] → [F, IH, SH, <EOS>, <EOS>]
Ablation of other units Fig. 15 reports a categorization of errors induced by ablating the most significant neurons. Only neurons inducing at least 50 errors are reported.
# E Replication across Model Seeds
To test the robustness of the results, we trained 10 more models with the same architecture, same hyperparameters, with 10 different seeds. First, we found that all versions of the model reached zero errors on real words in the train dataset for the first time as early as epoch 39 and as late as epoch 85. Figure 10 shows the average results across models, with shaded areas as the 95% confidence interval across model seeds, demonstrating the robustness of all identified effects across model seeds.
We also repeated the ablation study on the 10 new replicate models. In every case, we found that only 2 to 4 units had a significant impact (> 20%) on model performance upon ablation, and these were almost always units in the Encoder.
The unit with the strongest effect showed a typical behavior across all seeds, similar to that of unit 49 in the original NWR model. In all models, the ablation of this unit caused a typical length effect (cf. Figure 11). Moreover, closer inspection of the ablated models’ predictions showed that these errors were of the same kind: premature prediction of the end of the word.
Figure 6: Grid search results.
Figure 7: Full results for ablation study on NWR model, organized by factor.
Figure 8: Types of errors of the NWR model. Type and average count of errors made by the NWR model depending on word condition. Condition labels are formed from the initials of lexicality (Real, Pseudo), length (Long, Short), morphology (Complex, Simple) and, when relevant, frequency (High, Low). The types are determined from the Levenshtein distance computation. (A) Error types of the NWR model. (B) Error types for the NWR model with unit 49 ablated. (C) Error types for the NWR model with unit 31 ablated.
Figure 9: Speech errors of the NWR model following the ablation of unit 31. Same as Figure 2
Figure 10: Mean speech errors over 10 seeds. Same as Figure 2. Error bars reflect standard error.
Figure 11: Mean speech errors of 49-like units over 10 seeds. Same as Figure 2. Error bars reflect standard error.
Figure 12: Stacked feature importance per neuron. Neurons were clustered according to their feature importance profiles using $k$-means. The optimal number of clusters was determined by comparing the silhouette scores of $k$-means clusterings of the neurons for $k \in [2, 8]$. The first group identified is clearly dominated by length, which is consistent with the regression analyses. The profile of the second cluster is more difficult to interpret; yet, overall, an increase in the importance of lexicality relative to the other features is observed.
Figure 13: Dimensionality Reduction of the Pairwise Distances among all Single-Phoneme Representations
Figure 14: Feature Importances for Length and Zipf Frequency across all ablations of NWR model This figure shows the feature importances for length and frequency for modeling the errors of each ablated NWR model. The significant FIs for length correspond to units 49 and 31. We found no model where the FI for frequency was significant.
Figure 15: Classification of Errors for Ablated units in the NWR Model. All single-unit ablations resulting in at least 50 errors are included in this figure. Length error: premature prediction of the <EOS> token with all previous phonemes being correct. Position error: error resulting from the positions of phonemes being confused by the model, while preserving all phoneme identities (i.e., the prediction is a permutation of the initial sequence). Identity error: at least one phoneme being substituted with another, in a given position. Other: any error not falling in the previous categories. We observe that ablating unit 49 causes length errors. Ablating unit 31 yields a combination of position and identity errors; these position errors are generally instances where vowels were permuted. | It takes several years for the developing brain of a baby to fully master word repetition: the task of hearing a word and repeating it aloud. Repeating a new word, such as from a new language, can be a challenging task for adults as well. Additionally, brain damage, such as from a stroke, may lead to systematic speech errors with specific characteristics dependent on the location of the brain damage. Cognitive sciences suggest a model with various components for the different processing stages involved in word repetition. While some studies have begun to localize the corresponding regions in the brain, the neural mechanisms and how exactly the brain performs word repetition remain largely unknown. We propose to bridge the gap between the cognitive model of word repetition and neural mechanisms in the human brain by modeling the task using deep neural networks. Neural models are fully observable, allowing us to study the detailed mechanisms in their various substructures and make comparisons with human behavior and, ultimately, the brain. 
Here, we make first steps in this direction by: (1) training a large set of models to simulate the word repetition task; (2) creating a battery of tests to probe the models for known effects from behavioral studies in humans, and (3) simulating brain damage through ablation studies, where we systematically remove neurons from the model, and repeat the behavioral study to examine the resulting speech errors in the "patient" model. Our results show that neural models can mimic several effects known from human research, but might diverge in other aspects, highlighting both the potential and the challenges for future research aimed at developing human-like neural models. | [
"cs.CL",
"cs.AI"
] |
# 1. Introduction
Forecasting on subseasonal-to-seasonal (S2S) timescales, typically defined as 2 weeks to ∼2 months, is vital for public health, disaster preparedness, agriculture, and energy/water management (White et al. 2017). Despite the clear benefits of skillful predictions on these timescales, S2S forecasting remains especially difficult. Often referred to as a ‘predictability desert’ (Robertson et al. 2018; Chen et al. 2024), S2S forecasts cannot solely rely on the initial atmospheric conditions, as is often done in short-term numerical weather prediction, or on the slow-varying boundary conditions that underpin climate outlooks (Robertson et al. 2018; Vitart and Robertson 2018). Instead, forecasters must integrate information from initial conditions, boundary conditions, and S2S modes of variability, like the Madden Julian Oscillation (MJO) (Zhang 2013), to produce skillful predictions (Vitart and Robertson 2018). Still, on S2S timescales, the strength of these sources of predictability and their teleconnections remain unclear (Merryfield et al. 2020; Vitart and Robertson 2018) and skill, e.g. accuracy of summertime surface temperature prediction in North America, remains relatively low (Breeden et al. 2022; Pegion et al. 2019).
A variety of tools have been used to approach the S2S forecasting challenge. Dynamical models have slowly but steadily improved S2S forecast skill (Peng et al. 2023) and data-driven approaches, like fully-AI models, can now forecast phenomena such as the North Atlantic Oscillation (NAO) and MJO at S2S lead times (Ling et al. 2024; Chen et al. 2024) with similar skill to dynamical models. To further improve forecasts, there has recently been a renewed focus on pinpointing climate states that represent times of enhanced predictability (e.g., Mariotti et al. 2020; Mayer and Barnes 2021; Albers and Newman 2019). Identifying these ‘windows of opportunity’ is a potential approach to improve skill on S2S timescales by allowing forecasters to know when forecast uncertainty is high or when they can leverage these times of enhanced predictability for more accurate forecasts (Mariotti et al. 2020).
Here, we tackle S2S prediction by combining a variety of these methodologies and employing an AI-informed model analog forecasting approach. Analog forecasting rests on the premise that climate states with similar initial conditions tend to evolve in a consistent manner (e.g., Lorenz 1969; Zhao and Giannakis 2016). By identifying past states resembling current conditions, their subsequent evolution can offer plausible trajectories for future conditions. For a variety of forecasts, from the tropics to the northern high latitudes, analog forecasting has been shown to rival the skill of global climate models (Lou, Newman, and Hoell 2023; Ding et al. 2019; Walsh et al. 2021) all while offering several key advantages. Unlike fully-AI models, analog forecasting is intuitive, interpretable, and can uphold physical laws (Rader and Barnes 2023; Ding et al. 2018); moreover, compared to global dynamic climate models, analog forecasting is highly computationally efficient (Ding et al. 2019).
Analogs offer an interpretable, physical model that is helpful for diagnosing errors and probing physical drivers, while their fast computational speed allows for the quick generation of ensembles of forecasts. Creating proficient ensembles is a key way to improve skill on S2S timescales (e.g., Han et al. 2023; Palmer et al. 2004; Krishnamurti et al. 1999), provide probabilistic forecasts (e.g., Mullan and Thompson 2006; Leutbecher and Palmer 2008; Weisheimer and Palmer 2014), and even help explore windows of opportunity (e.g., Leutbecher and Palmer 2008; Weisheimer and Palmer 2014)—essential on S2S timescales. For instance, with a calibrated ensemble of forecasts, one can use ensemble member agreement as a sign of a lower forecast uncertainty to identify windows of opportunity (e.g., Ferranti et al. 2018). However, despite these advantages in computation and interpretability, successful analog forecasting hinges on having both a robust library of analogs and a reliable method to identify sufficiently similar past states.
To address this need for a large analog library, we turn to climate models, which have orders of magnitude more climate realizations than we have observational data (Ding et al. 2018; McDermott and Wikle 2016). Yet, even with climate models, finding perfect analogs is impractical—estimates suggest over $10^{30}$ years of data would be needed to match two atmospheric flow stream patterns in just the Northern Hemisphere within observational error (Van den Dool 1994). Hence, determining the conditions that make a climate state an adequately close analog, rather than a perfect one, is crucial. For example, Ding et al. (2018) use regional matching to identify close analogs for seasonal tropical Indo-Pacific Ocean prediction; Mahmood et al. (2022) use global matching for multi-decadal global predictions; and Wu and Yan (2023) use area-specific matching for annual-to-multi-year Pacific Decadal Oscillation prediction. These methods for selecting analogs have been shown to work for certain problems, although they demand either a huge library of analogs (as in global matching) or depend on prior knowledge of physical drivers and teleconnections (as in regional or area-specific matching).
Here, we explore an alternative, AI-based spatial weighting approach originally introduced by Rader and Barnes (2023). We train a neural network to output a mask of weights that highlights where it is most important for initial conditions to match, such that two states will evolve similarly. Using a learned set of weights to find optimal analogs reduces reliance on prior knowledge and enables investigation of which regions and variables are most essential for two climate states to follow similar future trajectories. This method of optimized analog forecasting was first successfully applied to annual-to-decadal sea surface temperature prediction (Rader and Barnes 2023) and has since been extended to multi-year-to-decadal 2-meter temperatures (Fernandez and Barnes 2025) and El Niño-Southern Oscillation (ENSO) predictions (Toride et al. 2024). Importantly, while Toride et al. (2024) showed skill with this AI-informed analog approach for ENSO predictions on seasonal-to-interannual timescales, they were unable to achieve skill on S2S timescales.
Here, we show that this AI-based analog forecasting approach can achieve skill beyond traditional analog methods on S2S timescales while maintaining interpretability and computational efficiency. We highlight the benefits of using AI-based analogs across three varied prediction tasks: 1) classification of Week-3-4 Southern California summer temperatures; 2) regional regression of Month-1 midwestern U.S. summer temperatures; and 3) classification of Month-1-to-2 North Atlantic wintertime upper atmosphere winds. Through these three prediction tasks we show the AI-based analog approach outperforms traditional analog forecasting approaches, as well as climatology and persistence baselines, on reanalysis data on S2S timescales. Further, we exploit analog ensembles to quantify forecast uncertainty, and, by leveraging an interpretable AI-forecasting framework, analyze the learned masks of weights to better understand S2S sources of predictability.
# 2. Methods
# 2.1. Prediction Tasks
We demonstrate the skill of our analog forecasting approach for three prediction tasks described in Table 1, opting for a varied set of examples to test the generalizability of the method across different S2S prediction problems. We apply the AI-informed analog approach to both classification and regression tasks, to different regions, seasons, variables, and lead times. Each of these prediction tasks has a unique learned mask of weights to optimize the choice of analogs.
Table 1. The three prediction tasks.
# 2.2. AI-Informed Analog Approach
Most traditional analog forecasting methods follow a similar approach: to predict how a certain climate state (referred to as a state of interest or SOI) will evolve, one finds the closest $k$ matches (for $k \geq 1$) in the analog library. "Closeness" is often measured by minimizing a distance measure, like mean-squared error (MSE), between the SOI and potential analogs either across the entire globe, or across a region of interest. One can then use the trajectories of these closest matches as a prediction for how the SOI will evolve into the future. Rather than predicting the evolution of the whole globe, one often evaluates the analog skill in a specific region of interest, which we refer to as the target region.
We take a similar approach, except here we utilize a soft mask of weights to measure closeness between the SOI and potential analogs. Prior to computing the distance measure between the SOI and each potential analog, we multiply the entire library and the SOI by a learned mask of weights (Steps 1 and 2 in Figure 1). This mask, therefore, highlights or dampens the importance of conditions matching in certain areas of the globe for a potential analog to be considered close to the SOI. We then use MSE to compute the closest analogs after weighting, selecting the $k$ closest analogs (Step 3 in Figure 1). Lastly, we use the $k$ closest analogs’ mean evolution (for regression problems)
or majority vote (for classification) in the target region as our final prediction (Steps 4 and 5 in Figure 1).
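Steps 1-5 above can be sketched compactly in NumPy. The array shapes, function name, and argument layout below are our own illustrative choices rather than the authors' code; note that a mask of all ones recovers the traditional global-matching baseline.

```python
import numpy as np

def analog_forecast(soi, library, targets, mask, k, task="regression"):
    """AI-informed analog forecast, following Steps 1-5 of Figure 1.

    soi:     (H, W) state of interest
    library: (d, H, W) candidate analogs
    targets: (d,) each analog's target-region evolution after the lead time
             (values for regression, integer class labels for classification)
    mask:    (H, W) weights; all ones gives the unweighted global baseline
    """
    # Steps 1-2: weight the SOI and every candidate analog.
    w_soi = mask * soi
    w_lib = mask * library
    # Step 3: weighted MSE to the SOI; keep the k closest analogs.
    mse = ((w_lib - w_soi) ** 2).mean(axis=(1, 2))
    closest = np.argsort(mse)[:k]
    # Steps 4-5: combine the k analogs' targets into the prediction.
    if task == "regression":
        return targets[closest].mean(axis=0)   # ensemble mean
    votes = np.bincount(targets[closest])      # majority vote over classes
    return votes.argmax()
```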
Figure 1. Schematic of the steps in the AI-informed analog approach: 1) Assemble the SOI and the library of $d$ potential analogs. 2) Multiply all potential analogs and the SOI by the learned mask of weights. 3) Compute the MSE between the weighted SOI and the $d$ weighted potential analogs, and select the $k$ closest analogs (in this example $k = 2$ ). 4) Find the values of the analogs in the target region after the desired lead time. 5) Use the target field of the $k$ closest analogs’ evolution in the target region as the prediction for the SOI (this example is a regression problem, so the mean value is taken).
# 2.3. Data
We use output from the Community Earth System Model 2 Large Ensemble (CESM2-LE) (Danabasoglu et al. 2020) in order to have a sufficiently large analog library and to learn the weighted mask. As we will show, the AI-informed analog approach produces skillful predictions when evaluated on both CESM2-LE data, in a perfect-model framework, and on ECMWF Reanalysis v5 (ERA5) data (Hersbach et al. 2020). We employ a 7-day sliding window to smooth daily CESM2-LE and ERA5 data, using a backward moving average for input data and a forward moving average for target data, and also make use of monthly-mean fields. All smoothed daily data are regridded via bilinear interpolation to 2.5° x 2.5° resolution, while monthly data are resolved at $.25^{\circ}$ x $.24^{\circ}$ (natively for CESM2-LE data, and bilinearly interpolated for ERA5 data). As our analog library of daily climate maps is $\sim 10\times$ larger than the library of monthly data, we use a coarser resolution for the daily data to reduce the memory load. For all data sources, we convert the data to anomalies about the climatological seasonal cycle and then to standard deviations at each grid point. However, between data sources, we handle the anthropogenic effects of climate change slightly differently, as will be discussed next.
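The anomaly-and-standardization step described above can be sketched as follows. We assume here that the climatology and standard deviation are computed per calendar day and grid point; the function name and array layout are illustrative, not the authors' code.

```python
import numpy as np

def standardized_anomalies(data, day_of_year):
    """Convert raw fields to standardized anomalies (Section 2.3 sketch).

    data:        (time, H, W) smoothed fields
    day_of_year: (time,) calendar-day index of each sample
    For every calendar day and grid point, subtract the climatological
    mean and divide by the climatological standard deviation.
    """
    anoms = np.empty_like(data, dtype=float)
    for day in np.unique(day_of_year):
        sel = day_of_year == day
        clim_mean = data[sel].mean(axis=0)
        clim_std = data[sel].std(axis=0)
        anoms[sel] = (data[sel] - clim_mean) / clim_std
    return anoms
```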
# 2.3.1. CESM2-LE Data
We use monthly CESM2-LE data from 1850-2100 that employs CMIP6 historical and SSP3-7.0 future radiative forcing scenarios (Simpson et al. 2023). We take all 100 members to calculate the ensemble mean, which we subtract from each individual member to both remove the effects of anthropogenic climate change and to convert the data to anomalies from the seasonal cycle. To increase speed and reduce memory load, we then use only a third of the members for training and the analog library. These members are divided between the analog library and SOIs, with fields from 19 members composing the library and fields from 14 members serving as the SOIs (see Table 3 for member details). We partition the SOIs with a 10/2/2 member split for training, validation, and testing respectively.
For the daily CESM2-LE data, we subtract the linear trend from each calendar day at each grid point. We include a shorter timespan of data from each member (1850-1949) than for the monthly data, as each daily-data year contains more than $30\times$ as many samples. The analog library is composed of fields from the first 5 members, while fields from the next 4 members (with a 2/1/1 training/validation/testing split) make up the SOIs (see Table 4 for member details).
# 2.3.2. ERA5 Data
We use ERA5 daily data from 1942-2023 and monthly data from January 1940 to
July 2024. For both daily and monthly ERA5 data, we fit and subtract a third-order polynomial at each grid point and each calendar day, or calendar month, respectively to define detrended anomalies from the seasonal cycle. The ERA5 data acts as a second test set to evaluate skill on observations.
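The third-order polynomial detrending of the ERA5 series, applied per grid point and calendar day (or month), can be sketched as below. The centering of the time axis is our own numerical-stability choice, not stated in the paper.

```python
import numpy as np

def detrend_cubic(series, years):
    """Fit and remove a third-order polynomial trend from one grid point's
    values for a single calendar day (or month) across `years`.
    A sketch of the described procedure, not the authors' code."""
    x = years - years.mean()  # center the time axis for conditioning
    coeffs = np.polyfit(x, series, deg=3)
    return series - np.polyval(coeffs, x)
```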
# 2.4. Artificial Neural Network to Learn Mask of Weights
The weighted mask is learned by an artificial neural network that is similar to that of Rader and Barnes (2023), with minor modifications to its final layers. During each forward pass, the network, depicted in Figure 2, takes two maps as input. The SOI map is from the training set and the analog map is randomly selected from the analog library. These maps are both multiplied by a grid of learnable weights of the same size as the inputs, resulting in two weighted maps. The mask is restricted to have a mean of 1 across all weights, such that during training the mask effectively moves weight between different areas of the globe. The MSE between these two weighted maps is passed through a single linear scaling layer. The output of this layer represents the network’s prediction of the MSE between the two maps in the target region after they have evolved (i.e., after the desired lead time). Loss is computed as the MSE between the predicted difference of the targets and the true difference of the targets. Hence, the primary objective of the network is to align the MSE between two states’ inputs to their MSE after evolution in the target region. This process is repeated for each SOI in the training set. Details of the network setup and hyperparameters can be found in Table 2.
Our network deviates slightly from that of Rader and Barnes (2023), in that we use a single linear layer instead of multiple dense layers at the end of the network. We restrict the linear layer’s weight to be $\geq 0$ to ensure a monotonically increasing relationship between the MSE of the maps’ weighted inputs and the predicted MSE of the maps’ target regions. This better matches our process for selecting analogs as described in Section 2.2, as we expect that two maps with a smaller weighted MSE will also evolve to have a smaller MSE between their targets. This switch to a single linear layer resulted in a negligible change in skill, but increased network parsimony and training speed.
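A single forward pass of this mask-learning setup can be written out in NumPy, omitting autograd and any bias term in the linear layer (an assumption on our part); shapes and names are illustrative, not the authors' code.

```python
import numpy as np

def mask_network_loss(soi, analog, target_soi, target_analog, raw_mask, scale):
    """One forward pass of the mask-learning network (Figure 2 sketch).

    raw_mask: grid of learnable weights; scale: the single weight of the
    final linear layer, constrained to be non-negative.
    """
    # Constrain the mask to have mean 1, so training only moves weight
    # between regions of the globe rather than shrinking it globally.
    mask = raw_mask / raw_mask.mean()
    # Weighted-input MSE between the SOI and the candidate analog.
    input_mse = ((mask * soi - mask * analog) ** 2).mean()
    # Linear scaling layer with weight >= 0, so the predicted target-region
    # MSE grows monotonically with the weighted-input MSE.
    predicted_target_mse = max(scale, 0.0) * input_mse
    # True MSE between the two states after evolution in the target region.
    true_target_mse = ((target_soi - target_analog) ** 2).mean()
    # Training loss: squared error between predicted and true target MSE.
    return (predicted_target_mse - true_target_mse) ** 2
```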
Figure 2. Schematic of the neural network setup to learn the weighted mask. One SOI and one analog are multiplied by a layer of learnable weights. The MSE between the two weighted inputs is computed and passed through a linear scaling layer. This output represents the predicted difference in the two maps’ targets. Loss is computed as the MSE between the predicted difference of the targets and the true difference of the targets.
# 2.5. Metrics
We employ deterministic and probabilistic error metrics for each type of prediction task (i.e., classification and regression). For regression (Task #2) we compute mean absolute error (MAE) and continuous ranked probability score (CRPS).
MAE is defined as
$$
\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| f_i - o_i \right|
$$
where $N$ is the number of samples, $f_i$ is the predicted value for sample $i$, and $o_i$ is the true value for sample $i$.
CRPS is defined as
$$
\mathrm{CRPS}(F, x) = \int_{-\infty}^{\infty} \left( F(y) - H(y - x) \right)^2 \, dy
$$
where $F(y)$ is the cumulative distribution function of the forecast, $x$ is the true value, and $H(y - x)$ is the Heaviside step function, which is $0$ for $y < x$ and $1$ for $y \geq x$. CRPS ranges from 0 (for a perfect forecast) to $\infty$.
For classification (Tasks #1 and #3), we compute misclassification rate and the multiclass Brier Score (BS).
Misclassification rate is defined as
$$
\mathrm{Error\ Rate} = \frac{\text{Number of Incorrect Classifications}}{\text{Total Number of Predictions}}
$$
BS is defined as
$$
\mathrm{BS} = \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} \left( f_{ik} - o_{ik} \right)^2
$$
where $N$ is the number of samples, $K$ is the number of classes, $f_{ik}$ is the predicted probability for class $k$ for sample $i$, and $o_{ik}$ is the true value (1 if the true class is $k$, otherwise $0$). BS ranges from 0 (for a perfect forecast) to 2.
We convert all types of error to skill scores by comparing them to the error of a climatological forecast:
$$
\mathrm{Skill\ Score} = 1 - \frac{\mathrm{Error}}{\mathrm{Error}_{\mathrm{climatology}}}
$$
All skill scores are strictly $\leq 1$ , with a skill score of 1 indicating perfect skill and a skill score of 0 indicating equal skill to a climatological forecast. A negative skill score indicates worse skill than a climatological forecast.
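The four metrics and the skill-score conversion can be implemented in a few lines. For CRPS we use the standard empirical-ensemble estimator (mean absolute member-observation distance minus half the mean pairwise member distance), which is our choice of estimator for an analog ensemble rather than anything specified in the text.

```python
import numpy as np

def mae(f, o):
    """Mean absolute error between forecasts f and observations o."""
    return np.abs(np.asarray(f) - np.asarray(o)).mean()

def crps_ensemble(members, obs):
    """Empirical CRPS with F taken as the ensemble's empirical CDF."""
    x = np.asarray(members, dtype=float)
    return np.abs(x - obs).mean() - 0.5 * np.abs(x[:, None] - x[None, :]).mean()

def misclassification_rate(pred, true):
    return (np.asarray(pred) != np.asarray(true)).mean()

def brier_score(probs, true_class, n_classes):
    """Multiclass Brier score; probs is (N, K), true_class is (N,). Max is 2."""
    onehot = np.eye(n_classes)[np.asarray(true_class)]
    return ((np.asarray(probs) - onehot) ** 2).sum(axis=1).mean()

def skill_score(error, error_climatology):
    """1 = perfect, 0 = climatology, negative = worse than climatology."""
    return 1.0 - error / error_climatology
```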
# 3. Results
# 3.1. Week 3-4 Windows of Opportunity in Southern California
We first assess the short-range S2S skill of the AI-based analog approach by classifying Week 3-4 Southern California (32°-37°N, 116°-121°W) summer temperatures (Task #1). The three target classes (cold, neutral, and warm) are formed by splitting the target temperatures into terciles, ensuring all classes are equally sized. Terciles for classifying the analog library are determined using the data within the analog library, and terciles for the test set are defined based on the data in the test set, to limit the impact of CESM2-LE biases relative to ERA5. We predict each 2-week period from the third week of June through the third week of September, using the learned weights in Figure 3. This mask exhibits weights that are distributed globally, yet unevenly, with noticeably increased weight around the western U.S. as well as in the North Pacific.
Figure 3. The learned mask for Task #1, Southern California summer temperature classification. The cyan box outlines the target region.
We include a regional and a global analog baseline in addition to persistence and climatological baselines to evaluate the relative skill of the learned mask approach. To create a global baseline, we select analogs by matching conditions over the entire globe (equivalent to a weighted mask of 1s everywhere). We create a regional baseline by selecting analogs via matching conditions only in the target region (equivalent to a weighted mask of 1s in the target region and 0s everywhere else). With the learned mask, MAE and BS skill scores exceed other baselines when testing both on CESM2-LE and ERA5 data (Figure 4). With the CESM2-LE test set, the highest skill is reached at 2000 analogs, while for ERA5, the skill score peaks at 1500 analogs.
Figure 4. Skill scores for a) CESM2-LE accuracy, b) CESM2-LE BS, c) ERA5 accuracy, and d) ERA5 BS for Week 3-4 Southern California temperature classification.
We diagnose whether the analog ensembles can offer insights into windows of opportunity via discard plots. Figure 5 uses ERA5 data and a 1500-analog ensemble to show the change in misclassification rate from a climatological forecast as samples with lower ensemble agreement are discarded (for CESM2 data see Figure 13). Here, ensemble agreement is computed as the fraction of ensemble members that agree on the majority prediction. We see that over all samples the mask offers just over a 4% reduction in misclassification rate relative to climatology, but this reduction grows essentially monotonically to over 9% for the $\sim$25% of samples with the highest ensemble agreement. This is not the case with a global mask's ensemble, which exhibits a less precipitous reduction in misclassification rate, and whose misclassification rate actually increases until the $\sim$50% cutoff mark.
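The discard-plot construction, with ensemble agreement defined as the fraction of members voting for the majority class, can be sketched as below; the function name and array layout are our own illustrative choices.

```python
import numpy as np

def discard_curve(class_votes, truth, fractions):
    """Misclassification rate as low-agreement samples are discarded.

    class_votes: (N, k) class predicted by each of the k analogs per sample
    truth:       (N,) true class per sample
    fractions:   discard fractions (x-axis of the discard plot)
    Returns the error rate on the retained samples at each fraction.
    """
    n, k = class_votes.shape
    majority = np.empty(n, dtype=int)
    agreement = np.empty(n)
    for i in range(n):
        counts = np.bincount(class_votes[i])
        majority[i] = counts.argmax()
        agreement[i] = counts.max() / k  # fraction agreeing with majority
    order = np.argsort(agreement)        # lowest agreement first
    rates = []
    for frac in fractions:
        kept = order[int(frac * n):]     # discard the lowest-agreement frac
        rates.append((majority[kept] != truth[kept]).mean())
    return np.array(rates)
```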
Figure 5. Discard plot based on ensemble agreement for Week 3-4 Southern California temperature classification using 1500 analogs, testing on ERA5 data. Data with the lowest ensemble agreement is progressively discarded, with the x-axis showing the fraction of data discarded.
# 3.2. Month 1 Temperature Extremes Over the Midwestern U.S.
We also explore how well the AI-informed analog forecasting approach can perform regression, by predicting monthly summer midwestern U.S. (36°-49°N, 90°-106°W) temperatures (Task #2) with a focus on extremes. We predict the temperature (in standard deviations, $\sigma$) each month from July through September, using the learned mask in Figure 6. The mask displays a strong emphasis on the target region and preferential weighting in the mid-latitudes of the Northern Hemisphere as well as the
Maritime Continent. Moreover, compared to the mask in Figure 3, the mask in Figure 6 has a more concentrated distribution of weights, highlighting the regional importance of the central U.S. for predicting midwestern summer temperatures.
Figure 6. The learned mask for Task #2, midwestern U.S. summer temperature regression. The cyan box outlines the target region.
As in Task #1, the learned mask outperforms all baselines (Figure 7). All skill scores peak at 50 analogs for both CESM2-LE and ERA5 data, except for CESM2-LE CRPS, which peaks at 100 analogs. The number of analogs in the ensembles is much lower than in Task #1, since we have moved from a 3-class classification problem to a regression problem. In a regression problem, if the number of analogs in the ensemble is too high, the ensemble mean will converge to climatology. An example of this regression to the mean can be found in Figure 14.
While overall improvement in temperature forecasting on S2S timescales is important, better prediction of extreme temperatures has an outsized impact on enhancing agricultural production, public health, and energy management (Domeisen et al. 2022). Thus, we focus on assessing the AI-based analog's ability to predict extreme temperatures. We again utilize discard plots, where we compare how MAE changes relative to a climatological prediction for more extreme samples. We denote extremity simply as the absolute value of the prediction, i.e., a measure of how far from climatology the prediction is. In Figure 8, we show the discard plot with ERA5 data using an ensemble of 50 analogs. The AI-based analogs exhibit a marked decrease in relative error for samples with more extreme predictions. As we show error relative to climatology, it may be unsurprising that the AI-informed analogs would have lower relative error on more extreme events. However, this is not the case for the regional baseline, where there is only a slight decrease in error for the most extreme samples. This analysis highlights how the skill gap between AI-informed analogs and traditionally selected analogs widens for more extreme temperature events. Moreover, as we use predicted extremity to discard samples, this information is available a priori, allowing forecasters to better understand when the analog ensemble is likely to perform best and building trust in its more extreme predictions. This behavior also holds for CESM2-LE
Figure 7. Skill scores for a) CESM2-LE accuracy, b) CESM2-LE CRPS, c) ERA5 accuracy, and d) ERA5 CRPS for Month 1 midwestern U.S. temperature regression.
data (Figure 15).
Figure 8. Discard plot based on predicted extremity with an ensemble of 50 analogs for midwestern U.S. summer temperature regression, testing on ERA5 data. Data with the lowest extremity is progressively discarded, with the x-axis showing the fraction of data discarded.
# 3.3. Month 1-2 Mask Exploration in the North Atlantic
Lastly, we explore the learned mask's ability to perform grid-point classification of upper atmospheric winds in the North Atlantic (25°-48°N, 0°-80°W) and probe the mask itself to better understand the relative importance of different areas for successful prediction. At each grid point in the target region (rather than averaging across the target region), we classify the 250 hPa zonal wind (U250) using terciles. We make predictions for December-January and January-February, using the learned mask in Figure 9. In this case, we select analogs using both U250 and surface temperature as inputs. Therefore, we learn a unique mask for each field, although, importantly, these masks are learned together by the network.
We evaluate skill at each grid point in the whole field (e.g., Figure 16), summarizing these with mean skill scores over the target region (Figure 10). The learned mask outperforms all baselines for this grid-point-by-grid-point classification. Skill peaks at 400 analogs for both CESM2-LE and ERA5 data, except for CESM2-LE classification,
Task 3: Temperature Mask for the North Atlantic
Task 3: U250 Mask for the North Atlantic
Figure 9. The learned mask for Task #3, North Atlantic winter U250 classification. The cyan box outlines the target region.
which peaks at 800 analogs.
Figure 10. Skill scores for a) CESM2-LE accuracy, b) CESM2-LE BS, c) ERA5 accuracy, and d) ERA5 BS for Month 1-2 North Atlantic U250 classification.
Additionally, we explore the learned masks to better understand the relative importance of initial conditions in different areas for successful analog prediction. We do so by ablating the mask, i.e., setting the weights to 0, and observing changes in skill. We analyze changes in BS skill with CESM2-LE data and a 400-analog ensemble. We test on CESM2-LE data rather than ERA5 because the impacts on BS are small, and thus, there are too few samples to draw meaningful conclusions from ERA5 data. We employ three ablation methods: 1) ablating entire fields (e.g., temperature or U250); 2) threshold ablation, where we increase mask sparsity either by setting weights to 0 if they are below the 40th, 80th, or 90th percentile, or by incentivizing sparsity during training itself by adding constrained inverse $L_2$ regularization (see 7.2 for details); and 3) ablating specific regions (e.g., the Northern Hemisphere). Masks for examples of these ablation methods are shown in Figure 11. All of these ablation methods, except constrained inverse $L_2$ regularization, are performed after the mask has already been learned.
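The three post-hoc ablation methods can be sketched with a single helper; the function name and argument conventions are illustrative, and (matching the text) no renormalization is applied after zeroing weights.

```python
import numpy as np

def ablate_mask(mask, method, *, percentile=None, region=None):
    """Ablate a learned mask by one of the three methods in Section 3.3.

    method = "field":     zero the entire field's mask
    method = "threshold": zero weights below the given percentile
    method = "region":    zero weights where the boolean `region` is True
    Applied after the mask has already been learned.
    """
    out = mask.copy()
    if method == "field":
        out[:] = 0.0
    elif method == "threshold":
        out[out < np.percentile(out, percentile)] = 0.0
    elif method == "region":
        out[region] = 0.0
    else:
        raise ValueError(method)
    return out
```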
We focus on a 400-analog ensemble, as this is the number of analogs for which skill peaks (Figure 12), although the general trends remain similar across ensemble sizes (Figure 19). We find a slight improvement in skill when we increase mask sparsity by thresholding or by introducing constrained inverse $L_2$ regularization. This increase in skill with a sparser mask is consistent with Rader and Barnes (2023), who found a slight improvement in skill for multi-year predictions using a $\sim$95th-percentile threshold. However, when we test increasing the sparsity for shorter timescales (e.g., Task #1), we find minimal change in observation skill and a slight decrease in probabilistic model skill (Figure 18). Considering field ablation, temperature appears to be the more important of the two fields, although ablating either the temperature field or the U250 field results in a significant decrease in skill, highlighting the importance of both for identifying skillful analogs. While all ablation methods besides increasing sparsity decrease skill, ablating the Northern Hemisphere, both fields, and ocean temperatures result in the largest drops in skill.
Figure 11. Examples of masks with each of the ablation methods: (a) Ablating entire fields, (b) Threshold ablation, and (c) Ablating specific regions.
Figure 12. BS skill for different ablation methods evaluated on CESM2-LE data. | Subseasonal-to-seasonal forecasting is crucial for public health, disaster preparedness, and agriculture, and yet it remains a particularly challenging timescale to predict. We explore the use of an interpretable AI-informed model analog forecasting approach, previously employed on longer timescales, to improve S2S predictions. Using an artificial neural network, we learn a mask of weights to optimize analog selection and showcase its versatility across three varied prediction tasks: 1) classification of Week 3-4 Southern California summer temperatures; 2) regional regression of Month 1 midwestern U.S. summer temperatures; and 3) classification of Month 1-2 North Atlantic wintertime upper atmospheric winds. The AI-informed analogs outperform traditional analog forecasting approaches, as well as climatology and persistence baselines, for deterministic and probabilistic skill metrics on both climate model and reanalysis data. We find the analog ensembles built using the AI-informed approach also produce better predictions of temperature extremes and improve representation of forecast uncertainty. Finally, by using an interpretable-AI framework, we analyze the learned masks of weights to better understand S2S sources of predictability. | [
"physics.ao-ph",
"cs.LG"
] |
# 1. Introduction
Thanks to the capability of graphs in representing complex relationships, graph generation (Zhu et al., 2022; Liu et al., 2023a) has become an essential task in various fields such as protein design (Ingraham et al., 2019), drug discovery (Bilodeau et al., 2022), and social network analysis (Li et al., 2023). Among contemporary generative models, diffusion and flow models have emerged as two compelling approaches for their ability to achieve state-of-the-art performance in graph generation (Niu et al., 2020; Vignac et al., 2023a; Eijkelboom et al., 2024; Qin et al., 2024; Hou et al., 2024). In particular, these generative models can be unified under the framework of stochastic interpolation (Albergo & Vanden-Eijnden, 2023), which consists of four procedures (Lipman et al., 2024): 1) Drawing samples from the reference (source) distribution $p _ { 0 } ( \cdot )$ and/or the data (target) distribution $p _ { 1 } ( \cdot )$ for training set assembly; 2) Constructing a time-continuous probability path $p _ { t } ( \cdot ) , 0 \leq t \leq 1$ interpolating between $p _ { 0 }$ and $p _ { 1 }$ ; 3) Training a model to reconstruct the probability path by approximating either the score function or the velocity fields (ratio matrix in the discrete case); and 4) Sampling from $p _ { 0 }$ and transforming the samples through the learned probability path to obtain samples that approximately follow $p _ { 1 }$ .
A core challenge in this framework is constructing the probability path $p _ { t }$ . Existing text and image generative models, operating either in the continuous (Ho et al., 2020; Song et al., 2021; Lipman et al., 2023; Liu et al., 2023b) or discrete (Campbell et al., 2022; Sun et al., 2023; Campbell et al., 2024; Gat et al., 2024; Minello et al., 2025) space, typically rely on linear interpolation between source and target distributions to construct the path. Graph generation models, including diffusion (Niu et al., 2020; Vignac et al., 2023a; Haefeli et al., 2022; Xu et al., 2024; Siraudin et al., 2024) and flow-based models (Eijkelboom et al., 2024; Qin et al., 2024; Hou et al., 2024), inherit this design by modeling every single node and edge independently and building paths linearly in the disjoint space. However, this approach is inefficient because it neglects the strong interactions and relational structure inherent in graphs, i.e., the significance of a node heavily depends on the configuration of its neighbors. While empirical successes have been achieved via fine-grained search over the training and sampling design (Qin et al., 2024), such as target guidance and time distortion, we argue that there remains a fundamental issue with the linear probability path construction, and these strategies only mitigate the problem by manipulating the probability path.
Motivating examples. The blue line in Figure 1a illustrates the probability path evolution through linear interpolation in plain graph generation, where the probability path remains flat until $t \approx 0.8$ before sharply dropping. This pattern provides a poor velocity estimation, as ideally the velocity field should be smooth and consistently pointing to the data distribution, like the green line in Figure 1a (Kapusniak et al., 2024). The resultant velocity makes sampling difficult to converge to the data distribution, as shown in Figure 1c, where the termination point of the blue curve remains high. Though not explicitly mentioned, the superior performance achieved by Qin et al. (2024) is partially attributed to an extensive design space search to manipulate the path for smoother velocity estimation. The techniques they used, including target guidance, time distortion, and stochasticity injection, are conceptually visualized in Figures 1a and 1b, with discussions in Appendix F.1. More motivating examples are provided in Appendix I.2.
Figure 1. Probability path visualization. Since the probability is intractable, the average maximum mean discrepancy ratio (y-axis) of graph statistics between interpolants and the data points is used as a proxy. Lower means closer to the data distribution (details in Appendix I.3).
The limitations. The above examples reveal fundamental issues with the probability path construction in graph generation, attributable to two primary reasons: 1) The assumption of independence between nodes/edges and a linear interpolation in the locally disjoint space fails to capture the global coevolution of graph components and properties such as community structure or spectrum (Haasler & Frossard, 2024). This potentially causes a sudden transition from reference to data distribution, which yields non-smooth probability paths. 2) The linear interpolation is derived through the optimal transport (OT) displacement (Tong et al., 2024) between distributions residing in Euclidean space (Lipman et al., 2024). However, linear interpolation strays away from the true data manifold when the underlying space is non-Euclidean (Chen & Lipman, 2024; Kapusniak et al., 2024). Since graphs naturally inhabit non-Euclidean geometries, linearly interpolating the nodes/edges neither guarantees an OT displacement nor respects the underlying geometry, making the constructed probability path suboptimal or even causing it to deviate from the valid graph domain.
Proposed solution. To address these limitations, we draw on statistical relational learning and model graphs using Markov Random Fields (MRFs) (Taskar et al., 2007; Qu et al., 2019). MRFs organize the nodes/edges as an interconnected system, and interpolating between two MRFs captures the joint evolution of the whole graph system. Extending Haasler & Frossard (2024), we derive a closed-form Wasserstein distance between graph distributions and leverage it to construct the Bures-Wasserstein (BW) interpolation of two graphs, which ensures the OT displacement in graph generation compared to linear interpolation. We then integrate these insights into a flow-matching framework called Bures-Wasserstein Flow (BWFlow). Specifically, by defining a probability path via BW interpolation, we obtain smooth, globally coherent velocity fields at intermediate steps (see Figure 1c) that respect the non-Euclidean, interconnected structure of graphs. Crucially, BWFlow admits simulation-free computation of densities and velocities along the entire path, which translates into efficient, stable training and sampling.
Contributions. First, we theoretically and empirically show that the linear interpolation used in existing graph generation models gives suboptimal probability path construction and velocity estimation. Second, by parameterizing graphs as MRFs, we introduce BWFlow, a flow-matching model for graph generation that constructs probability paths respecting the graph geometry and develops smooth velocities. Third, we test BWFlow on plain graph and 2D/3D molecule generation, demonstrating competitive performance without an excessive search for path manipulation techniques. We further show that BW interpolation consistently outperforms other interpolation methods in building flow matching models, leading to more stable training and sampling convergence.
# 2. Preliminaries
# 2.1. Flow matching for graph generation
Flow matching (FM). Generative modeling considers fitting a mapping on a state space $\mathcal { S }$ that transforms samples from a source distribution, $X _ { 0 } \sim p _ { 0 }$ , into samples from the target data distribution, $X _ { 1 } \sim p _ { 1 }$ . Continuous normalizing flows (CNF) (Chen et al., 2018) parameterize the transformation through a push-forward equation that interpolates between $p _ { 0 }$ and $p _ { 1 }$ and constructs a probability path $p _ { t } ( \mathcal { X } ) = \left[ \psi _ { t \# } p _ { 0 } \right] ( \mathcal { X } )$ through a time-dependent function $\psi _ { t }$ (a.k.a. the flow). A vector field $u _ { t }$ , defined by $\frac { d } { d t } \psi _ { t } \left( \mathcal { X } \right) = u _ { t } \left( \psi _ { t } \left( \mathcal { X } \right) \right)$ with $\psi _ { 0 } \left( \mathcal { X } \right) = \mathcal { X }$ , is said to generate $p _ { t }$ if $\psi _ { t }$ satisfies $X _ { t } : = \psi _ { t } \left( X _ { 0 } \right) \sim p _ { t }$ for $X _ { 0 } \sim p _ { 0 }$ . FM (Lipman et al., 2023) is designed to match the real velocity field through the loss:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { F M } } ( \theta ) = \mathbb { E } _ { t , X _ { t } \sim p _ { t } ( \cdot ) } \left\| v _ { \theta } ( X _ { t } ) - u _ { t } ( X _ { t } ) \right\| ^ { 2 } . } \end{array}
$$
where $v _ { \theta } ( \cdot ) : \mathcal { S } \to \mathcal { S }$ is the parameterized velocity field and $t \sim \mathcal { U } [ 0 , 1 ]$ .
Conditional flow matching (CFM). Given that the actual velocity field and the path are not tractable (Tong et al., 2024), one can construct a per-sample conditional flow. We condition the probability paths on a variable $Z \sim \pi ( \cdot )$ (for instance, a pair of source and target points $Z = \left( X _ { 0 } , X _ { 1 } \right)$ ) and re-write $p _ { t } ( \mathcal { X } ) = \mathbb { E } _ { \pi ( \cdot ) } p _ { t } ( \mathcal { X } \mid Z )$ and $u _ { t } ( \mathcal { X } ) = \mathbb { E } _ { \pi ( \cdot ) } u _ { t } ( \mathcal { X } \mid Z )$ , where the conditional path and velocity field are tractable. CFM aims at regressing a velocity $v _ { \theta } ( \cdot )$ to $u _ { t } ( \mathcal { X } \mid Z )$ via the loss,
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { C F M } } ( \theta ) : = \mathbb { E } _ { t , Z \sim \pi ( \cdot ) , X _ { t } \sim p _ { t } ( \cdot \vert Z ) } \left\| v _ { \theta } ( X _ { t } ) - u _ { t } \bigl ( X _ { t } \vert Z \bigr ) \right\| ^ { 2 } , } \end{array}
$$
where it is shown that the CFM optimization has the same optimum as the FM objective (Tong et al., 2024).
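As a minimal numerical sketch of the CFM objective (ours, for illustration): with a linear conditional path $X_t = (1-t)X_0 + tX_1$ , the conditional velocity is $u_t(X_t \mid Z) = X_1 - X_0$ , and the loss is a Monte-Carlo average of squared errors. Here `v_theta` is a placeholder for the parameterized velocity model:

```python
import numpy as np

rng = np.random.default_rng(1)

def cfm_loss_term(v_theta, x0, x1, t):
    """One Monte-Carlo term of the CFM loss for the linear conditional path
    x_t = (1 - t) * x0 + t * x1, whose conditional velocity is u_t = x1 - x0."""
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    return float(np.mean((v_theta(xt, t) - target) ** 2))

x0 = rng.standard_normal((8, 2))   # batch of source samples; Z = (x0, x1)
x1 = rng.standard_normal((8, 2))   # batch of target samples
t = rng.uniform()                  # t ~ U[0, 1]

# An oracle model that outputs the conditional velocity attains zero loss.
loss = cfm_loss_term(lambda xt, t: x1 - x0, x0, x1, t)
print(loss)  # 0.0
```

In practice `v_theta` only receives $X_t$ and $t$ , so the regression target marginalizes over all $(X_0, X_1)$ pairs that could have produced $X_t$ .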
Graphs as statistical objects. When considering graph generation with CFM, the very first step is to model graphs as statistical objects. For notation, we let $\mathcal { G } = \{ \mathcal { V } , \mathcal { E } , \mathcal { X } \}$ denote an undirected graph random variable with edges $\mathcal { E } = \{ e _ { u v } \}$ , nodes $\mathcal { V } = \{ v \}$ , and node features $\mathcal { X } = \{ x _ { v } \}$ . A graph realization is denoted as $G = \{ V , E , X \} \sim p ( \mathcal { G } )$ . We consider a group of latent variables that controls the graph distribution, specifically the node feature mean $\pmb { X } = \left[ \pmb { x } _ { 1 } , \pmb { x } _ { 2 } , \ldots , \pmb { x } _ { | \mathcal { V } | } \right] ^ { \top } \in \mathbb { R } ^ { | \mathcal { V } | \times K }$ , the weighted adjacency matrix $W \in \mathbb { R } ^ { | \mathcal { V } | \times | \mathcal { V } | }$ , and the Laplacian matrix $\pmb { L } = \pmb { D } - \pmb { W } \in \mathbb { R } ^ { | \mathcal { V } | \times | \mathcal { V } | }$ , with $D = \mathrm { d i a g } ( W { \bf 1 } )$ the degree matrix and $\mathbf { 1 }$ the all-one vector. In a nutshell, graphs are sampled from $G \sim p ( \mathcal { G } ; G ) = p ( \mathcal { X } , \mathcal { E } ; X , W )$ .
Graph generation with CFM. The CFM samples new graphs by iteratively building $G _ { t + d t } = G _ { t } + v _ { t } ^ { \theta } \bigl ( G _ { t } \bigr ) \cdot d t$ with initial $G _ { 0 } \sim p _ { 0 } ( \mathcal { G } )$ and a trained velocity field $v _ { t } ^ { \theta } ( G _ { t } )$ , so that the intermediate points follow $G _ { t } \sim p _ { t } ( \mathcal { G } )$ and the process terminates at $p _ { 1 } ( \mathcal { G } )$ . We can parameterize $v _ { t } ^ { \theta } ( G _ { t } )$ as in (Gat et al., 2024),
$$
v _ { t } ^ { \theta } \big ( G _ { t } \big ) = \mathbb { E } _ { G _ { 0 } \sim p _ { 0 } ( \mathcal { G } ) , G _ { 1 } \sim p _ { 1 \mid t } ^ { \theta } \left( \cdot \vert G _ { t } \right) } \left[ v _ { t } \left( G _ { t } \vert G _ { 0 } , G _ { 1 } \right) \right]
$$
As such, training the velocity fields is replaced by training a denoiser $p _ { 1 | t } ^ { \theta } \left( \cdot \mid G _ { t } \right)$ to predict the clean datapoint, which is equivalent to maximizing the log-likelihood (Qin et al., 2024; Campbell et al., 2024),
$$
\mathcal { L } _ { \mathrm { C F M } } = \mathbb { E } _ { G _ { 1 } \sim p _ { 1 } , G _ { 0 } \sim p _ { 0 } , t \sim \mathcal { U } _ { [ 0 , 1 ] } , G _ { t } \sim p _ { t | 0 , 1 } } \left[ \log p _ { 1 | t } ^ { \theta } \left( G _ { 1 } \mid G _ { t } \right) \right]
$$
where $t$ is sampled from a uniform distribution $\mathcal { U } _ { [ 0 , 1 ] }$ and $G _ { t } \sim p _ { t | 0 , 1 }$ can be obtained in a simulation-free manner. This framework avoids evaluating the conditional vector field at training time, which improves both model robustness and training efficiency.
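The sampling recursion $G_{t+dt} = G_t + v_t^\theta(G_t)\,dt$ amounts to explicit Euler integration of the learned velocity field from $t=0$ to $t=1$ . A minimal sketch (with a placeholder velocity function, not the trained denoiser from the paper):

```python
import numpy as np

def euler_sample(v_theta, g0, n_steps=100):
    """Integrate dG/dt = v_theta(G, t) from t = 0 to t = 1 with explicit
    Euler steps, mirroring G_{t+dt} = G_t + v_theta(G_t) * dt."""
    g, dt = g0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        g = g + v_theta(g, t) * dt
    return g

g0 = np.zeros((4, 4))   # source "graph" state (dense matrix stand-in)
g1 = np.ones((4, 4))    # target state

# For the constant velocity g1 - g0, Euler integration lands on g1.
gT = euler_sample(lambda g, t: g1 - g0, g0)
print(np.allclose(gT, g1))  # True
```

In the actual framework, the velocity at each step is recomputed from the denoiser's prediction of the clean graph, and discrete states are resampled rather than updated additively.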
To proceed, a closed form of $p _ { t } ( \cdot \mid G _ { 0 } , G _ { 1 } )$ is required to construct both the probability path and the velocity field $v _ { t } \left( G _ { t } \mid G _ { 0 } , G _ { 1 } \right)$ . A common choice for decomposing the probability density assumes independence of each node and edge (Hou et al., 2024; Qin et al., 2024; Eijkelboom et al., 2024), giving $p ( \mathcal { G } ) = p ( \mathcal { X } ) p ( \mathcal { E } ) = \prod _ { v \in \mathcal { V } } p ( x _ { v } ) \prod _ { e _ { u v } \in \mathcal { E } } p ( e _ { u v } )$ . Choosing $\pi ( \cdot ) = p _ { 0 } \left( \mathcal { G } \right) p _ { 1 } \left( \mathcal { G } \right)$ , the boundary conditions follow $p _ { i } ( \mathcal { G } ) = \delta ( \mathcal { X } _ { i } = X _ { i } ) \cdot \delta ( \mathcal { E } _ { i } = W _ { i } ) , \forall i \in \{ 0 , 1 \}$ , with $\delta$ the Dirac function. This decomposition is further combined with linear interpolation to build the path, as introduced in (Tong et al., 2024), where,
$$
\begin{array} { r l } & { p _ { t } ( x _ { v } \mid G _ { 0 } , G _ { 1 } ) = \mathcal { N } \left( t [ X _ { 1 } ] _ { v } + ( 1 - t ) [ X _ { 0 } ] _ { v } , \sigma _ { t } ^ { 2 } \right) } \\ & { p _ { t } ( e _ { u v } \mid G _ { 0 } , G _ { 1 } ) = \mathcal { N } \left( t [ E _ { 1 } ] _ { u v } + ( 1 - t ) [ E _ { 0 } ] _ { u v } , \sigma _ { t } ^ { 2 } \right) . } \end{array}
$$
Similarly, discrete flow matching frameworks for graph generation (Qin et al., 2024; Siraudin et al., 2024; Xu et al., 2024) are also based on linear interpolation, where the interpolant is sampled from a categorical distribution whose probabilities are simply linear interpolations between the boundary conditions.
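The discrete linear path just described can be sketched as follows: the interpolant's categorical probabilities are the linear mixture of the one-hot boundary states (illustrative code, ours, not from any cited implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def discrete_interpolant(x0_onehot, x1_onehot, t):
    """Sample a categorical interpolant whose probabilities are the linear
    interpolation (1 - t) * delta(x0) + t * delta(x1) of the boundary states."""
    probs = (1 - t) * x0_onehot + t * x1_onehot
    return np.array([rng.choice(len(p), p=p) for p in probs])

x0 = np.eye(3)[[0, 1, 2]]   # three nodes, one-hot states at t = 0
x1 = np.eye(3)[[2, 1, 0]]   # states at t = 1

# At the boundaries the mixture is one-hot, so sampling is deterministic.
print(discrete_interpolant(x0, x1, 0.0))  # [0 1 2]
print(discrete_interpolant(x0, x1, 1.0))  # [2 1 0]
```

At intermediate $t$ , each node independently flips from its source state to its target state with probability $t$ , which is exactly the element-wise, structure-agnostic behavior the next paragraphs argue against.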
# 2.2. Optimal transport and flow matching
Why linear interpolation? Existing literature (Liu et al., 2023b; Albergo & Vanden-Eijnden, 2023) argues that the probability path $p _ { t } ( \mathcal { X } | Z )$ should be chosen to recover the optimal transport (OT) displacement interpolant (McCann, 1997). The (Kantorovich) optimal transport problem is to find the transport plan between two probability measures, $\eta _ { 0 }$ and $\eta _ { 1 }$ , with the smallest associated transportation cost defined as follows.
Definition 1 (Wasserstein Distance). Denote a possible coupling as $\pi \in \Pi ( \eta _ { 0 } , \eta _ { 1 } )$ , which is a measure on $\mathcal { S } \times \mathcal { S }$ whose marginals are $\eta _ { 0 }$ and $\eta _ { 1 }$ . With $c ( X , Y )$ being the cost of transporting the mass between $X$ and $Y$ , the Wasserstein distance is defined as,
$$
{ \mathcal W } _ { c } ( \eta _ { 0 } , \eta _ { 1 } ) = \operatorname* { i n f } _ { \pi \in \Pi ( \eta _ { 0 } , \eta _ { 1 } ) } \int _ { { \cal S } \times { \cal S } } c ( X , Y ) d \pi ( X , Y ) .
$$
When the data follow Euclidean geometry and both boundary distributions $p _ { 0 }$ and $p _ { 1 }$ belong to the Gaussian family, the probability path shown in Equation (5) with $\sigma _ { t } \to 0$ becomes a solution to Equation (6).
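For Gaussians, the 2-Wasserstein distance has the well-known closed form $\|m_0 - m_1\|^2 + \mathrm{trace}\bigl(\Sigma_0 + \Sigma_1 - 2(\Sigma_0^{1/2}\Sigma_1\Sigma_0^{1/2})^{1/2}\bigr)$ , which can be checked numerically (a sketch of the standard formula, not code from the paper):

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def w2_gaussian(m0, S0, m1, S1):
    """Squared 2-Wasserstein distance between N(m0, S0) and N(m1, S1)."""
    root = psd_sqrt(S0)
    cross = psd_sqrt(root @ S1 @ root)
    return float(np.sum((m0 - m1) ** 2) + np.trace(S0 + S1 - 2.0 * cross))

m, S = np.zeros(2), np.eye(2)
print(w2_gaussian(m, S, m, S))                           # ~0.0
print(w2_gaussian(m, S, m + np.array([3.0, 0.0]), S))    # ~9.0
```

The trace term is the squared Bures distance between the covariances, the same quantity that reappears in Proposition 1 with Laplacian pseudo-inverses in place of covariances.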
As suggested in the motivations, linearly interpolating in the disjoint space of nodes and edges with Equation (5) does not guarantee the OT displacement in non-Euclidean and interconnected objects like graphs. To overcome the limitation, we utilize Markov random fields to capture the joint evolution of the graph system, and build an FM model that generates graphs with smooth probability paths and consistent velocity.
# 3. Methodology
In this paper, we introduce Bures–Wasserstein Flow Matching (BWFlow), a novel graph generation framework that is built upon the OT displacement when modeling graphs with Markov Random Fields (MRFs). We begin by casting graphs in an MRF formulation in Section 3.1. We then derive the BWFlow framework in Section 3.2 by formulating and solving the OT displacement problem on the MRF, thereby yielding the fundamental components, interpolations and velocity fields, for FM-based graph generation. Finally, in Section 3.3, we extend BWFlow to discrete FM regimes, enabling its application across a broad spectrum of graph-generation tasks. A schematic overview of the entire BWFlow is illustrated in Figure 2.
# 3.1. Graph Markov random fields
We borrow ideas from MRFs as a remedy for modeling the complex system organized by graphs, intrinsically capturing the underlying mechanism that jointly generates the nodes and edges. Mathematically, we assume the joint probability density function (PDF) of node features and graph structure as $p ( \mathcal { G } ; G ) = p ( \mathcal { X } , \mathcal { E } ; X , W ) = p ( \boldsymbol { \mathcal { X } } ; \boldsymbol { X } , \boldsymbol { W } ) p ( \boldsymbol { \mathcal { E } } ; \boldsymbol { W } )$ , where the node features and graph structure are interconnected through the latent variables $X$ and $W$ . For node features $\mathcal { X }$ , we follow the MRF assumption in Zhu et al. (2003) and decompose the density into the node-wise potential $\varphi _ { 1 } ( v ) , \forall v \in \mathcal { V }$ and the pair-wise potential
$\varphi _ { 2 } ( u , v ) , \forall e _ { u v } \in \mathcal { E }$ :
$$
\begin{array} { l } { p ( \boldsymbol { \mathcal { X } } ; \boldsymbol { X } , W ) \propto \displaystyle \prod _ { v } \underbrace { \exp \left( - ( \nu + d _ { v } ) \| V x _ { v } - \mu _ { v } \| ^ { 2 } \right) } _ { \varphi _ { 1 } ( v ) } } \\ { \displaystyle \qquad \prod _ { u , v } \underbrace { \exp \left( w _ { u v } \left[ ( V x _ { u } - \mu _ { u } ) ^ { \top } ( V x _ { v } - \mu _ { v } ) \right] \right) } _ { \varphi _ { 2 } ( u , v ) } , } \end{array}
$$
with $\| \cdot \|$ the $L _ { 2 }$ norm, $( \cdot ) ^ { \dagger }$ the pseudo-inverse, $V$ the transformation matrix modulating the graph feature emission, and $\mu _ { v }$ the node-specific latent variable mean. Equation (7) can be expressed as a colored Gaussian distribution in Equation (8) given that $V x _ { v } \sim \mathcal N ( \pmb { \mu _ { v } } , ( \nu \pmb { I } + \pmb { L } ) ^ { - 1 } )$ . We further assume that edges are emitted via a Dirac delta, $\mathcal { E } \sim \delta ( W )$ , yielding our definition of Graph Markov Random Fields (GraphMRF). The derivation can be found in Appendix A.2.
Definition 2 (Graph Markov Random Fields). GraphMRF statistically describes graphs as $p ( \mathcal { G } ; G ) = p ( \mathcal { X } , \mathcal { E } ; X , W ) = p ( \mathcal { X } ; X , W ) \cdot p ( \mathcal { E } ; W )$ , where $\mathcal { E } \sim \delta ( W )$ and
$$
\mathrm { v e c } ( \mathcal { X } ) \sim \mathcal { N } \left( X , \Lambda ^ { \dagger } \right) \; \text { with } \; X = \mathrm { v e c } ( V ^ { \dagger } \mu ) , \; \Lambda = \left( \nu I + L \right) \otimes V ^ { \top } V . \qquad (8)
$$
Here $\otimes$ is the Kronecker product, $\mathrm { v e c } ( \cdot )$ is the vectorization operator, and $I$ is the identity matrix.
Remark 1. GraphMRF explicitly captures node–edge dependencies and preserves the advantages of colored Gaussian distributions. Section 3.2 will soon show that this yields closed-form interpolation and velocity, and the probability path constructed from GraphMRFs remains on the graph manifold that respects the underlying non-Euclidean geometry.
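As a concrete illustration of Definition 2 (ours, under simplifying assumptions), take $V = I$ : the Kronecker structure then lets each feature column be drawn with covariance $(\nu I + L)^{-1}$ , which couples the features of adjacent nodes through the Laplacian:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_graphmrf_features(X_mean, L, nu=0.1):
    """Draw node features from the GraphMRF colored Gaussian in the
    simplified case V = I: each of the K feature columns is sampled with
    covariance (nu * I + L)^{-1}, an illustrative special case only."""
    n = L.shape[0]
    cov = np.linalg.inv(nu * np.eye(n) + L)
    cols = [rng.multivariate_normal(X_mean[:, k], cov)
            for k in range(X_mean.shape[1])]
    return np.column_stack(cols)

# Path graph on 4 nodes with a 2-dimensional zero feature mean.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
X = sample_graphmrf_features(np.zeros((4, 2)), L)
print(X.shape)  # (4, 2)
```

Small $\nu$ inflates variance along the Laplacian's null direction (the constant vector), so samples drift together as a whole graph rather than node by node.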
Remark 2. While we emphasize that GraphMRF is not a universal model and imposes certain constraints (Appendix A.3 discusses the usage scope), it nonetheless captures the dynamics of most graph-generation tasks, such as planar graphs, stochastic block models, and molecular graphs.
# 3.2. Bures-Wasserstein flow matching for graph generation
The optimal transport displacement between graph distributions. Given that the joint probability of graphs decomposes as $p ( \mathcal { G } ) = p ( \mathcal { X } ; X , W ) p ( \mathcal { E } ; W )$ and the measure factorizes as $\eta _ { \mathcal { G } _ { j } } = \eta _ { \mathcal { X } _ { j } } \cdot \eta _ { \mathcal { E } _ { j } }$ with $j \in \{ 0 , 1 \}$ , the graph Wasserstein distance between $\eta _ { \mathcal { G } _ { 0 } }$ and $\eta _ { \mathcal { G } _ { 1 } }$ is written as,
$$
d _ { \mathrm { B W } } ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } ) : = \mathcal { W } _ { c } ( \eta _ { \mathcal { X } _ { 0 } } , \eta _ { \mathcal { X } _ { 1 } } ) + \mathcal { W } _ { c } ( \eta _ { \mathcal { E } _ { 0 } } , \eta _ { \mathcal { E } _ { 1 } } ) .
$$
We extend Haasler & Frossard (2024) and analytically derive the graph Wasserstein distance using the OT formula between Gaussians (Dowson & Landau, 1982; Olkin & Pukelsheim, 1982; Takatsu, 2010) (see Lemma 2, proved in Appendix B.1) as follows.
Figure 2. Schematic overview of BWFlow, which consists of: a) Sample the marginal graph conditions $G _ { 0 }$ and $G _ { 1 }$ ; b) Convert graphs to MRFs; c) Interpolate to get intermediate points; d) Convert back to get $G _ { t }$ ; e) Train the velocity based on $G _ { t }$ ; and f) Generate new points with the trained velocity.
Proposition 1 (Bures-Wasserstein Distance). Consider two same-sized graphs $\mathcal { G } _ { 0 } \sim p \left( \mathcal { X } _ { 0 } , \mathcal { E } _ { 0 } \right)$ and $\mathcal { G } _ { 1 } \sim p \left( \mathcal { X } _ { 1 } , \mathcal { E } _ { 1 } \right)$ , with $V$ shared between the two graphs, described by the distribution in Definition 2. Suppose the graphs are equipped with graph Laplacian matrices $L _ { 0 }$ and $L _ { 1 }$ that 1) are Positive Semi-Definite (PSD) and 2) have only one zero eigenvalue. The Bures-Wasserstein distance between these two random graph distributions is given by
$$
\begin{array} { r l } & { d _ { B W } ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } ) = \left\| X _ { 0 } - X _ { 1 } \right\| _ { F } ^ { 2 } + } \\ & { \beta \operatorname { t r a c e } \left( L _ { 0 } ^ { \dagger } + L _ { 1 } ^ { \dagger } - 2 \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \right) , } \end{array}
$$
as $\nu \to 0$ , where $\beta$ is a constant related to the norm of $V ^ { \dagger }$ . The proof can be found in Appendix B.2.
Based on the Bures-Wasserstein (BW) distance, we then derive the OT interpolant for two graphs, which is the solution of the displacement minimization problem described as,
$$
\mathcal { G } _ { t } = \mathop { \arg \operatorname* { m i n } } _ { \tilde { \mathcal { G } } } ~ ( 1 - t ) d _ { \mathrm { B W } } ( \mathcal { G } _ { 0 } , \tilde { \mathcal { G } } ) + t d _ { \mathrm { B W } } ( \tilde { \mathcal { G } } , \mathcal { G } _ { 1 } ) .
$$
The probability path. The interpolation is obtained by solving Equation (10) with the BW distance defined in Proposition 1; we prove that the minimizer of this problem has the form given in Proposition 2. The proof can be found in Appendix C.1.
Proposition 2 (Bures-Wasserstein interpolation). The graph minimizer of Equation (10), $\mathcal { G } _ { t } = \{ \mathcal { V } , \mathcal { E } _ { t } , \mathcal { X } _ { t } \}$ , has its node features following a colored Gaussian distribution, $\mathcal { X } _ { t } \sim \mathcal { N } ( X _ { t } , \Lambda _ { t } ^ { \dagger } )$ with $\pmb { \Lambda } _ { t } = \left( \nu \pmb { I } + \pmb { L } _ { t } \right) \otimes V ^ { \top } V$ , and edges following $\mathcal { E } _ { t } \sim \delta ( W _ { t } )$ ; specifically,
$$
\begin{array} { r l } & { \pmb { L } _ { t } ^ { \dag } = \pmb { L } _ { 0 } ^ { 1 / 2 } \left( ( 1 - t ) \pmb { L } _ { 0 } ^ { \dag } + t \left( \pmb { L } _ { 0 } ^ { \dag / 2 } \pmb { L } _ { 1 } ^ { \dag } \pmb { L } _ { 0 } ^ { \dag / 2 } \right) ^ { 1 / 2 } \right) ^ { 2 } \pmb { L } _ { 0 } ^ { 1 / 2 } } \\ & { \pmb { X } _ { t } = ( 1 - t ) \pmb { X } _ { 0 } + t \pmb { X } _ { 1 } } \end{array}
$$
The interpolant provides a closed form for the induced probability path $p { \big ( } { \mathcal { G } } _ { t } \mid G _ { 0 } , G _ { 1 } { \big ) }$ and the velocity $v ( G _ { t } \mid G _ { 0 } , G _ { 1 } )$ that is easy to access without any simulation.
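The BW interpolation of Equation (11) can be computed directly with matrix square roots and pseudo-inverses. The sketch below is a numerical illustration under the assumptions of Proposition 1 (connected graphs, shared nullspace); it is not the paper's implementation, and the final pseudo-inverse uses an ad-hoc truncation tolerance:

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bw_interpolate(L0, L1, t):
    """Bures-Wasserstein interpolation of two connected-graph Laplacians:
    L_t^dag = L0^{1/2} ((1-t) L0^dag + t (L0^{dag/2} L1^dag L0^{dag/2})^{1/2})^2 L0^{1/2}."""
    L0p = np.linalg.pinv(L0, hermitian=True)
    L1p = np.linalg.pinv(L1, hermitian=True)
    L0ph = psd_sqrt(L0p)
    inner = (1 - t) * L0p + t * psd_sqrt(L0ph @ L1p @ L0ph)
    L0h = psd_sqrt(L0)
    Lt_pinv = L0h @ inner @ inner @ L0h
    return np.linalg.pinv(Lt_pinv, rcond=1e-10, hermitian=True)

# Path graph and triangle graph on 3 nodes; the path connects them.
L0 = np.array([[ 1., -1.,  0.], [-1.,  2., -1.], [ 0., -1.,  1.]])
L1 = np.array([[ 2., -1., -1.], [-1.,  2., -1.], [-1., -1.,  2.]])
print(np.allclose(bw_interpolate(L0, L1, 0.0), L0, atol=1e-6))  # True
print(np.allclose(bw_interpolate(L0, L1, 1.0), L1, atol=1e-6))  # True
```

The boundary checks confirm the path starts at $L_0$ and ends at $L_1$ , while intermediate points remain symmetric with the all-ones vector in their nullspace.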
The velocity. We consider the reparameterization as in Equation (3) and derive the conditional velocity $v _ { t } \left( G _ { t } \mid G _ { 1 } , G _ { 0 } \right)$ as in Proposition 3.
Proposition 3 (Bures-Wasserstein velocity). For the graph $\mathcal { G } _ { t }$ following BW interpolation in Proposition 2, the conditional velocity at time $t$ with observation $G _ { t }$ is given as,
$$
\begin{array} { l } { v _ { t } ( E _ { t } \mid G _ { 0 } , G _ { 1 } ) = \dot { W } _ { t } = \mathrm { d i a g } ( \dot { L } _ { t } ) - \dot { L } _ { t } } \\ { v _ { t } ( X _ { t } \mid G _ { 0 } , G _ { 1 } ) = \displaystyle \frac { 1 } { 1 - t } ( X _ { 1 } - X _ { t } ) } \\ { \text { with } \dot { L } _ { t } = 2 L _ { t } - T L _ { t } - L _ { t } T } \\ { \text { and } T = L _ { 0 } ^ { 1 / 2 } \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } L _ { 0 } ^ { 1 / 2 } , } \end{array}
$$
where $\pmb { W } _ { t } = \pmb { D } _ { t } - \pmb { L } _ { t }$ and $\pmb { L } _ { t }$ is defined in Equation (11).
Derivation can be found in Appendix C.2.
With Proposition 2 and Proposition 3, we are now able to formally construct the algorithms for Bures-Wasserstein flow matching. Taking continuous flow matching as an example, Algorithms 1 and 2 respectively introduce the training and sampling pipelines for our BWFlow.
Remark: The BW interpolation and velocity both deviate from the linear flow matching framework and require extra computational cost. However, there exist multiple ways to analytically calculate or numerically approximate the velocity for training and inference. The choice of these methods depends on the trade-off between training stability, sampling efficiency, etc. In Appendix E, we provide a discussion about the design space of BW interpolation and velocity.
# 3.3. Discrete Bures-Wasserstein flow matching for graph generation
Up to now, we have worked in the setting where $p ( \mathcal { X } \mid X , W )$ is a Gaussian and $p ( \mathcal { E } \mid W )$ is a Dirac distribution. However, previous studies have observed significant improvements from the discrete counterparts of continuous graph generation models (Vignac et al., 2023a; Xu et al., 2024; Qin et al., 2024). To let our model benefit from this property, we derive the discrete Bures-Wasserstein flow matching for graph generation.
The discrete probability path. We design the probability path as discrete distributions,
$$
\begin{array} { r l } & { p _ { t } ( x _ { v } \mid G _ { 0 } , G _ { 1 } ) = \mathrm { C a t e g o r i c a l } ( [ X _ { t } ] _ { v } ) , } \\ & { p _ { t } ( e _ { u v } \mid G _ { 0 } , G _ { 1 } ) = \mathrm { B e r n o u l l i } ( [ W _ { t } ] _ { u v } ) } \\ & { \mathrm { s . t . } p _ { 0 } ( \mathcal { G } ) = \delta ( G _ { 0 } , \cdot ) , p _ { 1 } ( \mathcal { G } ) = \delta ( G _ { 1 } , \cdot ) } \end{array}
$$
where $W _ { t } = D _ { t } - L _ { t }$ , with $X _ { t }$ and $\pmb { L } _ { t }$ defined as in Equation (11). We use the fact that the Dirac distribution is a special case of the Categorical/Bernoulli distribution with probability 1 or 0, so the boundary conditions $p _ { 0 } ( \mathcal { G } ) = \delta ( G _ { 0 } , \cdot ) , p _ { 1 } ( \mathcal { G } ) = \delta ( G _ { 1 } , \cdot )$ hold. Even though we no longer sample from Gaussian distributions, it is possible to approximate the Wasserstein distance between two multivariate discrete distributions with its Gaussian counterpart, so the conclusions, such as the optimal transport displacement, still hold. More discussion can be found in Appendix D.2.
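Sampling from the discrete path in Equation (13) is straightforward once $X_t$ and $W_t$ are available: node states are categorical and edges are Bernoulli. A sketch (ours, with sampling restricted to the upper triangle since graphs are undirected):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_discrete_graph(Xt, Wt):
    """Sample a graph from the discrete path: node v ~ Categorical([X_t]_v),
    edge (u, v) ~ Bernoulli([W_t]_uv); the upper triangle is sampled and
    symmetrized because graphs are undirected."""
    nodes = np.array([rng.choice(Xt.shape[1], p=row) for row in Xt])
    upper = np.triu(rng.random(Wt.shape) < Wt, k=1)
    return nodes, upper | upper.T

# At the boundary t = 1 the probabilities are one-hot / 0-1, so sampling
# deterministically recovers the target graph (here: a labeled triangle).
Xt = np.eye(3)
Wt = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
nodes, edges = sample_discrete_graph(Xt, Wt)
print(nodes)  # [0 1 2]
```

Unlike the linear discrete path, the edge probabilities $[W_t]_{uv}$ here evolve along the BW interpolation, so edges co-vary with the global Laplacian geometry rather than flipping independently at rate $t$ .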
The discrete velocity fields. The path of node features $\mathcal { X } _ { t }$ can be re-written as $p _ { t } ( \mathcal { X } ) = ( 1 - t ) \delta ( \cdot , X _ { 0 } ) + t \delta ( \cdot , X _ { 1 } )$ , so the conditional velocity can be accessed through $v _ { t } ( X _ { t } \mid G _ { 0 } , G _ { 1 } ) = \left[ \delta ( \cdot , X _ { 1 } ) - \delta ( \cdot , X _ { t } ) \right] / ( 1 - t )$ . However, the probability path of edges $\mathcal { E } _ { t }$ , shown in Equations (11) and (13), cannot be written as a mixture of the two boundary conditions given the non-linear interpolation. To this end, we derive in Appendix D.3 that the discrete velocity follows,
$$
v _ { t } \left( E _ { t } \mid G _ { 1 } , G _ { 0 } \right) = \left( 1 - 2 E _ { t } \right) \frac { \dot { W } _ { t } } { W _ { t } \circ \left( 1 - W _ { t } \right) } ,
$$
where $W _ { t } = D _ { t } - L _ { t }$ and $\dot { W } _ { t } = \mathrm { diag } ( \dot { L } _ { t } ) - \dot { L } _ { t }$, with $L _ { t }$ and $\dot { L } _ { t }$ defined in Equations (11) and (12), respectively. With the interpolation and velocity defined, the discrete flow matching is built in Algorithms 3 and 4.
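A direct elementwise implementation of the edge velocity above might look as follows; the `eps` clipping of the denominator near the boundary probabilities $\{0, 1\}$ is our own numerical safeguard, not part of the derivation:

```python
import numpy as np

def edge_velocity(E_t, W_t, W_dot, eps=1e-8):
    """Conditional discrete velocity v_t(E_t | G_1, G_0).

    Implements (1 - 2 E_t) * W_dot / (W_t o (1 - W_t)) elementwise.
    E_t   : (n, n) binary adjacency sample at time t.
    W_t   : (n, n) Bernoulli edge probabilities at time t.
    W_dot : (n, n) time derivative of W_t.
    """
    # Guard against division by zero where W_t hits 0 or 1 exactly;
    # this clipping is a numerical choice, not part of the paper's derivation.
    denom = np.clip(W_t * (1.0 - W_t), eps, None)
    return (1.0 - 2.0 * E_t) * W_dot / denom
```

Note the sign structure: when an edge is present ($E_t = 1$) the factor $(1 - 2E_t)$ is negative, and when absent it is positive, so the velocity pushes probability mass in the direction dictated by $\dot{W}_t$.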
# 4. Experiments
We evaluate the BWFlow algorithms on both plain graph generation and real-world molecule generation tasks. We first outline the experimental setup in Section 4.1, followed by a general comparison in Section 4.2. Next, in Section 4.3 we analyze the impact of the interpolation method and the corresponding velocity construction on graph generation performance, demonstrating the effectiveness and benefit of flowing along the Bures-Wasserstein interpolation.
2D molecule graph generation. The model performance is shown in Table 2. On both datasets, BWFlow achieves competitive results close to the state-of-the-art (SOTA) flow matching models (Qin et al., 2024) and outperforms the diffusion models. Given that the MOSES and GUACAMOL benchmarks are approaching saturation, the fact that BWFlow matches the SOTA models is strong evidence of its effectiveness.
# 4.1. Experiment settings
Dataset. For plain graph generation, we evaluate the quality of generated graphs on three benchmark datasets following previous works (Martinkus et al., 2022a; Vignac et al., 2023a; Bergmeister et al., 2024): planar graphs, tree graphs, and stochastic block models (SBM). Two datasets, MOSES (Polykovskiy et al., 2018) and GUACAMOL (Brown et al., 2019), are benchmarked to test model performance on 2D molecule generation. For 3D molecule generation with coordinate data, we test the model on QM9 (Ramakrishnan et al., 2014) and GEOM-DRUGS (Axelrod & Gómez-Bombarelli, 2020).
Metrics. For plain graph generation, we report the percentage of Valid, Unique, and Novel (V.U.N.) graphs and the average maximum mean discrepancy ratio (A.Ratio) of graph statistics between the set of generated graphs and the test set (details in Appendix I.3). For molecule generation, we test two scenarios, with and without bond type information, where the latter validates the capacity of our methods to generate graph structures. To this end, we develop a new relaxed metric to measure the stability and validity of atoms and molecules when bond types are not available. Specifically, the stability $s _ { i }$ of atom $i$ is relaxed as:
$$
s _ { i } = \mathbb { I } \big [ \exists \big \{ b _ { i j } \big \} _ { j \in \mathcal { N } _ { i } } \in \prod _ { j \in \mathcal { N } _ { i } } B _ { i j } : \sum _ { j \in \mathcal { N } _ { i } } b _ { i j } = \mathrm { E V } _ { i } \big ] ,
$$
with the indicator function $\mathbb { I }$. This means atom $i$ is "relaxed stable" if there is at least one way to pick allowed bond types $( B _ { i j } )$ to its neighbors ${ \mathcal { N } } _ { i }$ so that their total exactly matches the expected valence $\mathrm { E V } _ { i }$. Such a relaxed stability of atoms (Atom.Stab.) inherently defines molecule stability (Mol.Stab.) and the validity of a molecule, which are the shared metrics for both 2D/3D molecule generation. In addition to these metrics, distribution metrics are also used for 2D molecules (FCD, Scaf, etc.) and for 3D generation (charge distributions, atom total variation, angles, etc.). Details in Appendix I.3.

Figure 3. (a) The evolution of the graph statistics ratio along the probability path. (b) The impact of interpolation methods (Linear, Bures-Wasserstein, Harmonic) on performance. (c) Convergence analysis of BWFlow and flows with linear interpolations.

Table 1. Plain graph generation performance. We sampled 5 times (each run generates 40 graphs) to calculate the mean and standard deviation. We only keep the main diffusion/flow models for comparison; other models are included in the full version in Table 8.
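The relaxed atom stability $s_i$ defined above can be evaluated by brute-force search over the allowed bond assignments. A minimal sketch, where the set-based representation of $B_{ij}$ is our assumption:

```python
from itertools import product

def relaxed_atom_stable(allowed_bonds, expected_valence):
    """Relaxed stability s_i of a single atom.

    allowed_bonds : list of sets, one per neighbor j in N_i; each set B_ij
        holds the bond orders allowed on edge (i, j), e.g. {1, 2}.
    expected_valence : expected valence EV_i of the atom.
    Returns True iff some choice b_ij in B_ij per neighbor sums to EV_i.
    """
    return any(sum(choice) == expected_valence
               for choice in product(*allowed_bonds))
```

The search is exponential in the node degree, which is harmless for molecular graphs where degrees are small.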
Setup. To isolate the impact of model architecture, we follow Qin et al. (2024) and fix the backbone model to the same graph transformer. Sampling/training distortion and target guidance have been shown to significantly impact performance on graph generation tasks (Qin et al., 2024). In our experiments, the best model performance is obtained with these techniques, but in the behavior analysis we disable time distortion and target guidance for a fair comparison. In molecular generation, two scenarios, with and without bond type information, are considered to better evaluate the ability to generate graph structures. More experimental details can be found in Appendix I.1.
# 4.2. Main results for graph generation
Plain graph generation. In Table 1, we report both V.U.N. and A.Ratio. As performance on these benchmarks fluctuates significantly even after convergence and the results are near saturation, we present not only the best scores but also the exponential moving average (EMA) results over the last 5 checkpoints with decay 0.999. BWFlow outperforms most competitors on Planar and SBM graph generation. The lone exception is the tree graphs, where our model falls short. We attribute this gap to the fundamentally different generation process for tree graphs, which reside in hyperbolic space (Yang et al., 2022) and thus violate our MRF assumptions.
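The EMA reporting mentioned above follows the standard exponential-moving-average recurrence; a minimal sketch (whether the average is taken over metric values or model weights at each checkpoint is an implementation detail we do not fix here):

```python
def ema(values, decay=0.999):
    """Exponential moving average over a sequence of per-checkpoint values.

    Later values are folded in with weight (1 - decay), so a decay close
    to 1 smooths out checkpoint-to-checkpoint fluctuation.
    """
    avg = values[0]
    for v in values[1:]:
        avg = decay * avg + (1.0 - decay) * v
    return avg
```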
3D molecule generation. Table 3 gives the results on the 3D molecule generation task with explicit hydrogen, where we ignore the bond types and treat the adjacency matrix as binary to validate the power of generating graph structures. Interestingly, the empirical results show that even without edge types, the 3D graph generation model can already capture the molecule data distribution. Our BWFlow significantly outperforms the SOTA models, including MiDi (Vignac et al., 2023b) and FlowMol (Dunn & Koes, 2024). We believe a promising future direction is to incorporate the processing of multiple bond types into our framework, which would likely improve performance further.
Table 2. Large molecule generation results. Table 15 gives further experiments with binary edge types.
Table 3. Quantitative experimental results on 3D Molecule Generation with explicit hydrogen.
# 4.3. Behavior analysis
# BWFlow provides smooth velocity in probability paths.
To illustrate how BWFlow models the smooth evolution of graphs, we compute the A.Ratio on the SBM dataset (figures for the other datasets are in Figure 6) between generated graph interpolants and test data for $t \in [ 0 , 1 ]$, as shown in Figure 3a. In contrast to the linear (arithmetic) interpolation, BW interpolation initially exposes the model to more out-of-distribution samples with an increased A.Ratio. After this early exploration, the A.Ratio converges monotonically, yielding a smooth interpolation between the reference graphs and the data points. This behavior enhances both model robustness and velocity estimation, which helps close the convergence gap in the generation stage, as in Figure 3c. In comparison, harmonic and geometric interpolations step outside the valid graph domain, making the learning ill-posed.
The impact of interpolation methods on model performance. Figure 3b shows a bar plot comparing interpolation methods on the ability to generate valid plain graphs, measured by V.U.N., which demonstrates the superiority of BW interpolation in capturing graph distributions. Figure 3c shows an example (planar graph generation) of the convergence curve at the training stage (full results in Table 10), suggesting that BWFlow converges faster than FM methods constructed with linear (arithmetic) interpolations.
# 5. Discussion and future work
In this paper, we introduce BWFlow, a flow matching model that captures the non-Euclidean and interconnected properties of graphs. While BWFlow exhibits outstanding performance on various graph generation tasks, it faces the following limitations, which motivate future work.
Extension to multiple relation types. As our framework is built upon an interpolation parameterized by the graph Laplacian, it does not generalize easily to graph generation with multiple edge types. We made preliminary attempts at this extension, but a comprehensive design is still required.
Lower computational complexity. While constructing the probability path and the velocity, our BW interpolation incurs an extra cost from computing the pseudo-inverse of the Laplacian. Compared to linear interpolation, this adds $O ( N ^ { 3 } )$ complexity theoretically, and empirically 2x training and inference time, which is non-negligible in large graph generation. We aim to develop iterative optimization methods to speed up training in future work.
More universal interpolation that accommodates the geometry. In our experiments on the tree dataset, performance was unsatisfactory. We attribute this to the unique geometry of tree-structured graphs. A promising direction is to select adaptive interpolation schemes that accommodate the intrinsic geometry of a graph.
# Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
# References
Albergo, M. S. and Vanden-Eijnden, E. Building normalizing flows with stochastic interpolants. In ICLR. OpenReview.net, 2023.
Axelrod, S. and Gómez-Bombarelli, R. GEOM: energy-annotated molecular conformations for property prediction and molecular generation. CoRR, abs/2006.05531, 2020.
Bach, E., Rogers, S., Williamson, J., and Rousu, J. Probabilistic framework for integration of mass spectrum and retention time information in small molecule identification. Bioinformatics, 37(12):1724–1731, 2020. ISSN 1367-4803. doi: 10.1093/bioinformatics/btaa998. URL https://doi.org/10.1093/bioinformatics/btaa998.
Bergmeister, A., Martinkus, K., Perraudin, N., and Wattenhofer, R. Efficient and scalable graph generation through iterative local expansion. In ICLR. OpenReview.net, 2024.
Bhatia, R., Jain, T., and Lim, Y. On the Bures–Wasserstein distance between positive definite matrices. Expositiones Mathematicae, 37(2):165–191, 2019. ISSN 0723-0869. doi: 10.1016/j.exmath.2018.01.002. URL https://www.sciencedirect.com/science/article/pii/S0723086918300021.
Bilodeau, C., Jin, W., Jaakkola, T., Barzilay, R., and Jensen, K. F. Generative models for molecular discovery: Recent advances and challenges. Wiley Interdisciplinary Reviews: Computational Molecular Science, 12(5):e1608, 2022.
Brown, N., Fiscato, M., Segler, M. H. S., and Vaucher, A. C. Guacamol: Benchmarking models for de novo molecular design. J. Chem. Inf. Model., 59(3):1096–1108, 2019.
Campbell, A., Benton, J., Bortoli, V. D., Rainforth, T., Deligiannidis, G., and Doucet, A. A continuous time framework for discrete denoising models. In NeurIPS, 2022.
Campbell, A., Yim, J., Barzilay, R., Rainforth, T., and Jaakkola, T. S. Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design. In ICML. OpenReview.net, 2024.
Cao, N. D. and Kipf, T. Molgan: An implicit generative model for small molecular graphs. CoRR, abs/1805.11973, 2018.
Chen, R. T. Q. and Lipman, Y. Flow matching on general geometries. In ICLR. OpenReview.net, 2024.
Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. Neural ordinary differential equations. In NeurIPS, pp. 6572–6583, 2018.
Chen, X., He, J., Han, X., and Liu, L. Efficient and degreeguided graph generation via discrete diffusion modeling. In ICML, volume 202 of Proceedings of Machine Learning Research, pp. 4585–4610. PMLR, 2023.
Dai, H., Nazi, A., Li, Y., Dai, B., and Schuurmans, D. Scalable deep generative modeling for sparse graphs. In ICML, volume 119 of Proceedings of Machine Learning Research, pp. 2302–2312. PMLR, 2020.
Diamant, N. L., Tseng, A. M., Chuang, K. V., Biancalani, T., and Scalia, G. Improving graph generation by restricting graph bandwidth. In ICML, volume 202 of Proceedings of Machine Learning Research, pp. 7939–7959. PMLR, 2023.
Dowson, D. and Landau, B. The Fréchet distance between multivariate normal distributions. Journal of Multivariate Analysis, 12(3):450–455, 1982. ISSN 0047-259X. doi: 10.1016/0047-259X(82)90077-X. URL https://www.sciencedirect.com/science/article/pii/0047259X8290077X.
Dunn, I. and Koes, D. R. Mixed continuous and categorical flow matching for 3d de novo molecule generation. ArXiv, pp. arXiv–2404, 2024.
Eijkelboom, F., Bartosh, G., Naesseth, C. A., Welling, M., and van de Meent, J. Variational flow matching for graph generation. In NeurIPS, 2024.
Gat, I., Remez, T., Shaul, N., Kreuk, F., Chen, R. T. Q., Synnaeve, G., Adi, Y., and Lipman, Y. Discrete flow matching. In NeurIPS, 2024.
Goyal, N., Jain, H. V., and Ranu, S. Graphgen: A scalable approach to domain-agnostic labeled graph generation. In WWW, pp. 1253–1263. ACM / IW3C2, 2020.
Grover, A. and Leskovec, J. node2vec: Scalable feature learning for networks. In KDD, pp. 855–864. ACM, 2016.
Haasler, I. and Frossard, P. Bures-wasserstein means of graphs. In AISTATS, volume 238 of Proceedings of Machine Learning Research, pp. 1873–1881. PMLR, 2024.
Haefeli, K. K., Martinkus, K., Perraudin, N., and Wattenhofer, R. Diffusion models for graphs benefit from discrete state spaces. In The First Learning on Graphs Conference, 2022. URL https://openreview.net/forum?id=CtsKBwhTMKg.
Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In NeurIPS, 2020.
Hou, X., Zhu, T., Ren, M., Bu, D., Gao, X., Zhang, C., and Sun, S. Improving molecular graph generation with flow matching and optimal transport. CoRR, abs/2411.05676, 2024.
Ingraham, J., Garg, V. K., Barzilay, R., and Jaakkola, T. S. Generative models for graph-based protein design. In NeurIPS, pp. 15794–15805, 2019.
Jiang, K., Tang, B., Dong, X., and Toni, L. Heterogeneous graph structure learning through the lens of data-generating processes. In The 28th International Conference on Artificial Intelligence and Statistics, 2025. URL https://openreview.net/forum?id=JHK0QBKdYY.
Jo, J., Kim, D., and Hwang, S. J. Graph generation with diffusion mixture. In ICML. OpenReview.net, 2024.
Kapusniak, K., Potaptchik, P., Reu, T., Zhang, L., Tong, A., Bronstein, M. M., Bose, A. J., and Giovanni, F. D. Metric flow matching for smooth interpolations on the data manifold. In NeurIPS, 2024.
Kipf, T. N. and Welling, M. Variational graph auto-encoders. CoRR, abs/1611.07308, 2016.
Li, M., Kreacic, E., Potluru, V. K., and Li, P. Graphmaker: Can diffusion models generate large attributed graphs? CoRR, abs/2310.13833, 2023.
Liao, R., Li, Y., Song, Y., Wang, S., Hamilton, W. L., Duvenaud, D., Urtasun, R., and Zemel, R. S. Efficient graph generation with graph recurrent attention networks. In NeurIPS, pp. 4257–4267, 2019.
Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., and Le, M. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=PqvMRDCJT9t.
Lipman, Y., Havasi, M., Holderrieth, P., Shaul, N., Le, M., Karrer, B., Chen, R. T. Q., Lopez-Paz, D., Ben-Hamu, H., and Gat, I. Flow matching guide and code. CoRR, abs/2412.06264, 2024.
Liu, C., Fan, W., Liu, Y., Li, J., Li, H., Liu, H., Tang, J., and Li, Q. Generative diffusion models on graphs: Methods and applications. In IJCAI, pp. 6702–6711. ijcai.org, 2023a.
Liu, X., Gong, C., and Liu, Q. Flow straight and fast: Learning to generate and transfer data with rectified flow. In ICLR. OpenReview.net, 2023b.
Martinkus, K., Loukas, A., Perraudin, N., and Wattenhofer, R. SPECTRE: spectral conditioning helps to overcome the expressivity limits of one-shot graph generators. In ICML, volume 162 of Proceedings of Machine Learning Research, pp. 15159–15179. PMLR, 2022a.
Martinkus, K., Loukas, A., Perraudin, N., and Wattenhofer, R. SPECTRE: spectral conditioning helps to overcome the expressivity limits of one-shot graph generators. In ICML, volume 162 of Proceedings of Machine Learning Research, pp. 15159–15179. PMLR, 2022b.
McCann, R. J. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153–179, 1997. ISSN 0001-8708. doi: 10.1006/aima.1997.1634. URL https://www.sciencedirect.com/science/article/pii/S0001870897916340.
Minello, G., Bicciato, A., Rossi, L., Torsello, A., and Cosmo, L. Generating graphs via spectral diffusion. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=AAXBfJNHDt.
Niu, C., Song, Y., Song, J., Zhao, S., Grover, A., and Ermon, S. Permutation invariant graph generation via score-based generative modeling. In AISTATS, volume 108 of Proceedings of Machine Learning Research, pp. 4474–4484. PMLR, 2020.
Olkin, I. and Pukelsheim, F. The distance between two random vectors with given dispersion matrices. Linear Algebra and its Applications, 48:257–263, 1982. ISSN 0024-3795. doi: 10.1016/0024-3795(82)90112-4. URL https://www.sciencedirect.com/science/article/pii/0024379582901124.
Polykovskiy, D., Zhebrak, A., Sánchez-Lengeling, B., Golovanov, S., Tatanov, O., Belyaev, S., Kurbanov, R., Artamonov, A., Aladinskiy, V., Veselov, M., Kadurin, A., Nikolenko, S. I., Aspuru-Guzik, A., and Zhavoronkov, A. Molecular sets (MOSES): A benchmarking platform for molecular generation models. CoRR, abs/1811.12823, 2018.
Pooladian, A., Ben-Hamu, H., Domingo-Enrich, C., Amos, B., Lipman, Y., and Chen, R. T. Q. Multisample flow matching: Straightening flows with minibatch couplings. In ICML, volume 202 of Proceedings of Machine Learning Research, pp. 28100–28127. PMLR, 2023.
Qin, Y., Madeira, M., Thanou, D., and Frossard, P. Defog: Discrete flow matching for graph generation. CoRR, abs/2410.04263, 2024.
Qu, M., Bengio, Y., and Tang, J. GMNN: graph markov neural networks. In ICML, volume 97 of Proceedings of Machine Learning Research, pp. 5241–5250. PMLR, 2019.
Ramakrishnan, R., Dral, P. O., Rupp, M., and Von Lilienfeld, O. A. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1–7, 2014.
Siraudin, A., Malliaros, F. D., and Morris, C. Cometh: A continuous-time discrete-state graph diffusion model, 2024. URL https://arxiv.org/abs/2406.06449.
Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In ICLR. OpenReview.net, 2021.
Stärk, H., Jing, B., Wang, C., Corso, G., Berger, B., Barzilay, R., and Jaakkola, T. S. Dirichlet flow matching with applications to DNA sequence design. In ICML. OpenReview.net, 2024.
Sun, H., Yu, L., Dai, B., Schuurmans, D., and Dai, H. Scorebased continuous-time discrete diffusion models. In ICLR. OpenReview.net, 2023.
Takatsu, A. On wasserstein geometry of gaussian measures. Probabilistic approach to geometry, 57:463–472, 2010.
Taskar, B., Abbeel, P., Wong, M.-F., and Koller, D. Relational markov networks. Introduction to statistical relational learning, 175:200, 2007.
Tong, A., Fatras, K., Malkin, N., Huguet, G., Zhang, Y., Rector-Brooks, J., Wolf, G., and Bengio, Y. Improving and generalizing flow-based generative models with minibatch optimal transport. Trans. Mach. Learn. Res., 2024, 2024.
Vignac, C., Krawczuk, I., Siraudin, A., Wang, B., Cevher, V., and Frossard, P. Digress: Discrete denoising diffusion for graph generation. In ICLR. OpenReview.net, 2023a.
Vignac, C., Osman, N., Toni, L., and Frossard, P. Midi: Mixed graph and 3d denoising diffusion for molecule generation. In ECML/PKDD (2), volume 14170 of Lecture Notes in Computer Science, pp. 560–576. Springer, 2023b.
Villani, C. Topics in Optimal Transportation. Graduate Studies in Mathematics. American Mathematical Society, 2003. ISBN 9781470418045. URL https://books.google.co.uk/books?id=MyPjjgEACAAJ.
Wang, F., Yang, L., Huang, Z., Wang, M., and Li, H. Rectified diffusion: Straightness is not your need in rectified flow. CoRR, abs/2410.07303, 2024.
Weigt, M., White, R. A., Szurmant, H., Hoch, J. A., and Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. Proceedings of the National Academy of Sciences, 106(1):67–72, 2009. doi: 10.1073/pnas. 0805923106. URL https://www.pnas.org/doi/ abs/10.1073/pnas.0805923106.
Xu, Z., Qiu, R., Chen, Y., Chen, H., Fan, X., Pan, M., Zeng, Z., Das, M., and Tong, H. Discrete-state continuoustime diffusion for graph generation. arXiv preprint arXiv:2405.11416, 2024.
Yang, M., Zhou, M., Li, Z., Liu, J., Pan, L., Xiong, H., and King, I. Hyperbolic graph neural networks: A review of methods and applications. CoRR, abs/2202.13852, 2022.
You, J., Ying, R., Ren, X., Hamilton, W. L., and Leskovec, J. Graphrnn: Generating realistic graphs with deep autoregressive models. In ICML, volume 80 of Proceedings of Machine Learning Research, pp. 5694–5703. PMLR, 2018.
Yu, M. and Zhan, K. Bias mitigation in graph diffusion models. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=CSj72Rr2PB.
Zhu, X., Lafferty, J., and Ghahramani, Z. Semi-supervised learning: From Gaussian fields to Gaussian processes. School of Computer Science, Carnegie Mellon University, 2003.
Zhu, Y., Du, Y., Wang, Y., Xu, Y., Zhang, J., Liu, Q., and Wu, S. A survey on deep graph generation: Methods and applications. In LoG, volume 198 of Proceedings of Machine Learning Research, pp. 47. PMLR, 2022.
# A. Graph Markov random fields: background and theory
# A.1. Background of Markov random fields
Markov random fields (MRFs) were originally developed to describe the dynamics of interconnected physical systems such as molecules and proteins (Weigt et al., 2009; Bach et al., 2020). For a given graph $G = \{ V , E \}$ , MRFs are energy-based models that describe the graph with the following probability density:
$$
p ( G ) = \frac { 1 } { Z } \prod _ { \xi \in \mathrm { cl } ( G ) } \varphi _ { \xi } ( \mathcal { V } _ { \xi } ) = \frac { 1 } { Z } e ^ { - U ( G ) / k T } ,
$$
where the energy $U ( G )$ describes the whole connected system, $\operatorname { cl } ( G )$ is the set of cliques of $G$, $\mathcal { V } _ { \xi }$ is the subset of nodes belonging to clique $\xi$, and $Z$ is the partition function. In our paper, we follow Zhu et al. (2003) and parameterize the energy function with a node-wise potential $\varphi _ { 1 } ( v ) , \forall v \in \mathcal { V }$ and an edge-wise potential $\varphi _ { 2 } ( u , v ) , \forall e _ { u v } \in \mathcal { E }$:
$$
U ( G ) = \sum _ { v \in V } \underbrace { \log \varphi _ { 1 } ( v ) } _ { \text { Node-wise potential } } + \sum _ { \{ u , v \} \in E } \underbrace { \log \varphi _ { 2 } ( u , v ) } _ { \text { Edge-wise potential } }
$$
As a concrete example, in a molecule system consisting of atoms and bonds, the node-wise potential can be the kinetic energy of an atom, and the edge-wise potential is determined by the interatomic forces, such as the electrostatic force. MRFs thus serve as a natural and elegant way to describe general graph systems.
Energy-based models have an intrinsic relationship with generative models. For example, Song et al. (2021) derived the relationship between diffusion models and Langevin dynamics, which describes the evolution of an energy-based model. It is shown that diffusion models approximate the score function $\nabla _ { \mathcal { X } } \log { p ( \mathcal { X } ) }$. In energy-based models, the score function is simply the negative gradient of the energy, $\nabla _ { \mathcal { X } } \log p ( \mathcal { X } ) = - \nabla _ { \mathcal { X } } U ( \mathcal { X } )$, and Langevin dynamics samples data points towards a lower-energy state. Thus, interpolating between two graph distributions, the reference and data distributions respectively, can be viewed as transitioning between two Markov random fields with different energy landscapes. As illustrated in Figure 4, a random molecule graph sampled from the reference distribution corresponds to a high-energy state, and the data distribution to a low-energy state.
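As a minimal illustration of this view, unadjusted Langevin dynamics drives samples from a high-energy state toward the low-energy region using only the energy gradient. The toy quadratic energy below is our example for illustration, not the paper's model:

```python
import numpy as np

def langevin_sample(grad_U, x0, step=1e-2, n_steps=2000, rng=None):
    """Unadjusted Langevin dynamics: x <- x - step * grad_U(x) + sqrt(2*step) * noise.

    Starting from a high-energy state, the iterates drift toward the
    low-energy region of the energy-based model, mirroring the MRF view above.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Toy quadratic energy U(x) = ||x||^2 / 2, so grad_U(x) = x and the
# stationary distribution is approximately N(0, I).
x_final = langevin_sample(lambda x: x, x0=np.full(50, 10.0), n_steps=5000)
```

The chain relaxes from the far-away start toward the low-energy region around the origin, which is the qualitative behavior described above.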
Figure 4. Molecule graphs correspond to MRFs with different energies.
The idea of our paper originates from two facts: MRFs are energy-based models describing connected systems, and energy-based models have an intrinsic relationship with diffusion/flow models. Thus, if a model is required to describe the evolution of the whole graph system, it is natural to construct a probability path between two graph distributions with MRFs as the backbone.
# A.2. Derivation of Graph Markov random fields
We show the derivation of Definition 2, which is restated here:
Definition 3 (Graph Markov Random Fields). GraphMRF statistically describes graphs as,
$$
p ( \mathcal { G } ; G ) = p ( \mathcal { X } , \mathcal { E } ; X , W ) = p ( \mathcal { X } ; X , W ) \cdot p ( \mathcal { E } ; W )
$$
The $\otimes$ is the Kronecker product, $\mathrm { v e c } ( \cdot )$ is the vectorization operator and $\boldsymbol { \mathit { I } }$ is the identity matrix.
Derivation:
We start from
$$
p ( \mathcal { X } ; X , W ) \propto \prod _ { v } \exp \left\{ - ( \nu + d _ { v } ) \| V x _ { v } - \mu _ { v } \| ^ { 2 } \right\} \prod _ { u , v } \exp \left\{ w _ { u v } \left[ ( V x _ { u } - \mu _ { u } ) ^ { \top } ( V x _ { v } - \mu _ { v } ) \right] \right\} .
$$
We assume that the linear transformation matrix has dimension $V \in \mathbb { R } ^ { K ^ { \prime } \times K }$ given that $x _ { v } \in \mathbb { R } ^ { K }$, and define a transformed variable $h _ { v } \equiv V x _ { v } - \mu _ { v } \in \mathbb { R } ^ { K ^ { \prime } }$, stacked as $\mathcal { H } \in \mathbb { R } ^ { | \mathcal { V } | \times K ^ { \prime } }$.
The probability becomes
$$
P ( \mathcal { H } ; \boldsymbol { X } , W ) \propto \prod _ { v } \exp \Bigl \{ - \bigl ( \nu + d _ { v } \bigr ) \bigl \Vert h _ { v } \bigr \Vert ^ { 2 } \Bigr \} \prod _ { u , v } \exp \Bigl \{ w _ { u v } h _ { u } ^ { \intercal } h _ { v } \Bigr \} .
$$
Then, the terms inside the exponent in Equation (19) become
$$
\begin{array} { r l r } & { } & { - \displaystyle \sum _ { v } \left( \nu + d _ { v } \right) \left\| h _ { v } \right\| ^ { 2 } + \displaystyle \sum _ { u , v } w _ { u v } h _ { u } ^ { \top } h _ { v } = - \displaystyle \sum _ { v } \left( \nu + d _ { v } \right) h _ { v } ^ { \top } h _ { v } + \displaystyle \sum _ { u , v } w _ { u v } h _ { u } ^ { \top } h _ { v } } \\ & { } & \\ & { } & { = - \displaystyle \sum _ { u , v } h _ { u } ^ { \top } \Big [ ( \nu + d _ { u } ) \delta _ { u v } - w _ { u v } \Big ] h _ { v } , } \end{array}
$$
where the Kronecker delta $\delta _ { u v } = 1$ if $u = v$ and 0 otherwise. We define a square matrix $\Lambda ^ { \prime }$ to arrange the inner term, which can be written as,
$$
\Lambda ^ { \prime } = \nu I + L \quad \mathrm { w i t h } \quad \Lambda _ { u v } ^ { \prime } = \bigl ( \nu + d _ { u } \bigr ) \delta _ { u v } - w _ { u v } .
$$
$\boldsymbol { \mathit { I } }$ is the identity matrix. Thus, the exponent in compact matrix form gives
$$
- \frac { 1 } { 2 } \operatorname { T r } ( \mathcal { H } ^ { \top } \Lambda ^ { \prime } \mathcal { H } ) , \mathrm { w h e r e } \mathcal { H } = \left( \begin{array} { l } { h _ { 1 } } \\ { h _ { 2 } } \\ { \vdots } \\ { h _ { | \mathcal { V } | } } \end{array} \right) .
$$
It is possible to rearrange the exponent as
$$
\operatorname { Tr } ( \mathcal { H } ^ { \top } \Lambda ^ { \prime } \mathcal { H } ) = \operatorname { vec } ( \mathcal { H } ) ^ { \top } ( \Lambda ^ { \prime } \otimes I ) \operatorname { vec } ( \mathcal { H } ) ,
$$
where $\otimes$ denotes the Kronecker product. This is exactly the form of a multivariate colored Gaussian. Thus, the joint distribution of $\mathrm { vec } ( \mathcal { H } )$ (of dimension $| \mathcal { V } | K ^ { \prime }$) is given by
$$
\mathrm { vec } ( { \mathcal { H } } ) \sim \mathcal { N } \Big ( 0 , ~ ( \nu I + L ) ^ { - 1 } \otimes I _ { K ^ { \prime } } \Big ) .
$$
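The trace/vec rearrangement used in this derivation can be sanity-checked numerically. With node vectors stored as rows of $\mathcal{H}$, NumPy's row-major `flatten` matches the $\Lambda' \otimes I$ ordering:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 5, 3
H = rng.standard_normal((n, K))    # row v holds the node vector h_v
Lam = rng.standard_normal((n, n))
Lam = Lam + Lam.T                  # a symmetric precision-like matrix

lhs = np.trace(H.T @ Lam @ H)
# Row-major flattening stacks the rows h_1, ..., h_n, which matches
# the (Lambda' x I) Kronecker ordering used in the derivation.
rhs = H.flatten() @ np.kron(Lam, np.eye(K)) @ H.flatten()
assert np.isclose(lhs, rhs)
```

With a column-stacked vec convention the Kronecker factors would swap to $I \otimes \Lambda'$, so the ordering here is tied to the row-stacking assumption.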
Recalling that $h _ { v } = V x _ { v } - \mu _ { v }$, we obtain
$$
\operatorname { v e c } ( { \mathcal { H } } ) = ( I \otimes V ) \operatorname { v e c } ( { \mathcal { X } } ) - \operatorname { v e c } ( { \pmb { \mu } } ) .
$$
Since the transformation is linear, the distribution over $\chi$ remains Gaussian. By the properties of linear transformations of Gaussians, if
$$
\operatorname { v e c } ( { \mathcal { H } } ) \sim { \mathcal { N } } ( \operatorname { v e c } ( { \boldsymbol { \mu } } ) , \Sigma _ { h } ) , \operatorname { v e c } ( { \boldsymbol { \chi } } ) = ( I \otimes V ^ { \dagger } ) \operatorname { v e c } ( { \mathcal { H } } ) ,
$$
then
$$
\operatorname { v e c } ( { \mathcal { X } } ) \sim { \mathcal { N } } { \Big ( } ( I \otimes V ^ { \dagger } ) \operatorname { v e c } ( \mu ) , ( I _ { n } \otimes V ^ { \dagger } ) \Sigma _ { { \mathcal { H } } } ( I _ { n } \otimes V ^ { \dagger } ) ^ { \intercal } { \Big ) } .
$$
Thus, using the mixed-product property of the Kronecker product,
$$
( I \otimes V ^ { \dag } ) ( ( \nu I + L ) ^ { - 1 } \otimes I ) ( I _ { n } \otimes V ^ { \dag } ) ^ { \top } = ( L + \nu I ) ^ { - 1 } \otimes ( V ^ { \dag } V ^ { \dag \top } )
$$
Finally, the joint distribution over $\chi$ is
$$
\mathrm { v e c } ( \mathcal { X } ) \sim \mathcal { N } ( X , \Sigma ) ,
$$
We use the following lemma:
Lemma 1. Given two invertible matrices $A$ and $B$ , their Kronecker product satisfies $( A \otimes B ) ^ { - 1 } = A ^ { - 1 } \otimes B ^ { - 1 }$ .
So that we get
$$
\operatorname { v e c } ( { \mathcal { X } } ) \sim { \mathcal { N } } \left( X , \Lambda ^ { \dagger } \right) , \ \operatorname { w i t h } X = \operatorname { v e c } ( V ^ { \dagger } \pmb { \mu } ) , \Lambda = ( \nu I + L ) \otimes V ^ { \top } V .
$$
This completes the derivation.
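Lemma 1 is easy to verify numerically. A minimal sketch with randomly generated matrices shifted to be safely invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # diagonal shift keeps A invertible
B = rng.standard_normal((2, 2)) + 3.0 * np.eye(2)

# (A x B)^{-1} = A^{-1} x B^{-1}, where x is the Kronecker product.
lhs = np.linalg.inv(np.kron(A, B))
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))
assert np.allclose(lhs, rhs)
```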
# A.3. The usage scope of graph Markov random fields
Given that our Graph Markov random fields (GraphMRF) impose an explicit form on the graph distribution, they inherit certain inductive biases, and we must properly understand the usage scenarios.
To understand the scenarios in which GraphMRF can be used, we start by stating the plain form of the MRF in terms of the transformed variable $h _ { v } = V x _ { v } - \mu$, which gives the probability density as,
$$
P ( \mathcal { H } \mid L ) \propto \exp \big ( { - \operatorname { trace } ( \mathcal { H } ^ { \top } ( L + \nu I ) \mathcal { H } ) } \big ) = \exp \Big ( { - \sum _ { \{ u , v \} \in \mathcal { E } } w _ { u v } \| h _ { u } - h _ { v } \| ^ { 2 } - \nu \sum _ { u } \| h _ { u } \| ^ { 2 } } \Big )
$$
Plain Graph Markov random fields. To understand the scenarios in which we can use MRFs to model graphs, we first consider the simplest case where $V$ is a semi-orthogonal matrix, such that $V ^ { \top } V = I$, and the mean $\mu = 0$; the probability density becomes,
$$
P ( X , L ) \propto \exp \big ( - \operatorname { trace } ( X ^ { \top } ( L + \nu I ) X ) \big ) = \exp \Big ( - \sum _ { \{ u , v \} \in \mathcal { E } } W _ { u v } \| x _ { u } - x _ { v } \| ^ { 2 } - \nu \sum _ { u } \| x _ { u } \| ^ { 2 } \Big )
$$
As $\nu \to 0$ , the exponent term inside becomes
$$
\mathcal { S } ( X , L ) = \sum _ { \{ u , v \} \in \mathcal { E } } W _ { u v } \big ( \pmb { x } _ { u } - \pmb { x } _ { v } \big ) ^ { 2 } = \mathrm { t r a c e } ( X ^ { \top } L X ) ,
$$
where we name $ { \boldsymbol { S } } ( { \boldsymbol { X } } , { \boldsymbol { L } } )$ the smoothness of the graph features. The smoothness measures how similar connected neighbors are. For instance, if there exists an edge between nodes $u$ and $v$ with weight $\boldsymbol { W } _ { u v }$ , the likelihood will be higher if $\scriptstyle { \pmb { x } } _ { u }$ and $\scriptstyle { \boldsymbol { \mathbf { { x } } } } _ { v }$ are similar, so that $\| \pmb { x } _ { u } - \pmb { x } _ { v } \| ^ { 2 }$ is small. This means the probability is higher when $ { \boldsymbol { S } } ( { \boldsymbol { X } } , { \boldsymbol { L } } )$ is small.
This vanilla form captures the first type of graphs the GraphMRF can model - the homophily graphs, i.e., similar nodes (measured by the node attribute) may be more likely to attach to each other than dissimilar ones. This includes the social networks with friendship, traffic networks, etc.
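To illustrate the homophily intuition, a small sketch: on a two-cluster graph, a cluster-aligned signal has lower smoothness, hence higher likelihood under the MRF (the toy graph and signals are our own construction):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two dense clusters of 4 nodes each, joined by a single weak bridge edge.
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 0.1
L = np.diag(W.sum(axis=1)) - W

def smoothness(X, L):
    # S(X, L) = trace(X^T L X)
    return np.trace(X.T @ L @ X)

X_homophilous = np.array([[1.0]] * 4 + [[-1.0]] * 4)  # constant within each cluster
X_random = rng.standard_normal((8, 1))

# A cluster-aligned signal is smoother, hence more likely under the MRF.
assert smoothness(X_homophilous, L) < smoothness(X_random, L)
```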
Graph Markov random fields with embeddings. Now we move one step further to consider graphs with a linear transformation matrix $V$ . The linear transformation provides a map from the feature space to the latent space, which can be considered an embedding method that gives the models better expressiveness. As a simple example, when $V$ provides a negative projection, the mapping can capture heterophily relationships, meaning the connected nodes are dissimilar.
Coincidentally, this aligns well with the well-known embedding method Node2Vec of Grover & Leskovec (2016), where the edge weights are proportional to the negative distance, or the inner product, of the embeddings, i.e.,
$$
W _ { u v } \propto \exp ( - \| V \pmb { x } _ { u } - V \pmb { x } _ { v } \| _ { \mathrm { F } } ^ { 2 } )
$$
In (Jiang et al., 2025) it is shown that learning the parameters of MRFs is intrinsically equivalent to learning embeddings similar to Node2Vec. As such, the expressiveness of MRFs is as good as that of Node2Vec, which grants their usage for molecule graphs, protein interaction networks, social networks, and knowledge graphs. In our paper we assume that the linear mapping from the observation $X$ is shared. This requirement translates to the two graphs having the same embedding space and feature space, which is practical if the reference distribution and data distribution share the same space.
Graphs without features. We wish to emphasize that even though the GraphMRF is constructed under the assumption that graph features exist, it is capable of modeling non-attributed graphs, such as planar and SBM graphs. To do so, we consider the optimization of the Rayleigh quotient: it is known that, if $v _ { 1 } , \ldots , v _ { k - 1 }$ are orthonormal eigenvectors for $\lambda _ { 1 } , \ldots , \lambda _ { k - 1 }$ , then the eigenvalues satisfy,
$$
\lambda _ { k } = \operatorname* { m i n } _ { \stackrel { x \neq 0 } { x \bot v _ { 1 } , \ldots , v _ { k - 1 } } } R ( x ) , { \mathrm { ~ w i t h ~ } } R ( x ) = \frac { x ^ { T } L x } { x ^ { T } x }
$$
In such a scenario, the graphs are no longer related to actual node features; instead, the eigenvectors $\boldsymbol { v } _ { k }$ serve as intrinsic graph features. Interestingly, $R ( { \pmb x } )$ is the normalized form of the smoothness in Equation (31), and on the eigenvectors it is exactly equal to the eigenvalues. This means that if the spectrum of the graph Laplacian concentrates on the low-frequency components, the corresponding graphs are assigned higher probability under the MRF when the eigenvectors are viewed as node features. Almost all the synthetic datasets, such as the Planar, SBM, TLS, and COMM20 datasets, satisfy this property, so it is no surprise that our models capture global patterns better on those datasets. It is also worth pointing out that exceptions exist, such as tree graphs, which do not have a clear clustering pattern, so their spectrum does not follow our GraphMRF.
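A minimal check of this Rayleigh-quotient view on a path graph: each Laplacian eigenvector attains its eigenvalue as Rayleigh quotient, so the low-frequency eigenvectors act as the smoothest intrinsic features (the path graph is an illustrative example):

```python
import numpy as np

# Path graph on 4 nodes.
W = np.zeros((4, 4))
for u in range(3):
    W[u, u + 1] = W[u + 1, u] = 1.0
L = np.diag(W.sum(axis=1)) - W

eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues

def rayleigh(x, L):
    # R(x) = x^T L x / x^T x
    return (x @ L @ x) / (x @ x)

# Each eigenvector attains its eigenvalue as Rayleigh quotient, so
# low-frequency (smooth) eigenvectors are the most likely "features".
for k in range(4):
    assert np.isclose(rayleigh(eigvecs[:, k], L), eigvals[k])
```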
Future work. A limitation of our method is that it cannot easily capture the generation of graphs with multiple relation types, which we call heterogeneous graphs. We use an intuitive solution in the experiments to produce Table 2: we first sample the pure graph structure without edge types to produce the graph backbone, and then sample the edge types via linearly interpolated probabilities on top of the backbone. This solution provides preliminary results for graph generation on multi-relational graphs but still requires improvement. Fortunately, there exist a few ways to extend the GraphMRF to heterogeneous graphs (Jiang et al., 2025). An interesting direction for future work is generalizing our model to heterogeneous graphs via GraphMRF variants, such as the H2MN proposed in Jiang et al. (2025).
# B. Proofs
# B.1. Wasserstein distance between two colored Gaussian distributions
We first prove the lemma that captures the Wasserstein distance between two colored Gaussians, which will be used in deriving our Bures-Wasserstein distances in graph generations.
Lemma 2. Consider two measures $\eta _ { 0 } \sim \mathcal { N } \left( \mu _ { 0 } , \Sigma _ { 0 } \right)$ and $\eta _ { 1 } \sim \mathcal { N } \left( \mu _ { 1 } , \Sigma _ { 1 } \right)$ , describing two colored Gaussian distributions with means $\pmb { \mu } _ { 0 } , \pmb { \mu } _ { 1 }$ and covariance matrices $\Sigma _ { 0 } , \Sigma _ { 1 }$ . Then the Wasserstein distance between these probability distributions is given by
$$
\begin{array} { r } { \left( \mathcal { W } _ { 2 } \left( \eta _ { 0 } , \eta _ { 1 } \right) \right) ^ { 2 } = \left\| \pmb { \mu } _ { 0 } - \pmb { \mu } _ { 1 } \right\| ^ { 2 } + \mathrm { T r } \left( \Sigma _ { 0 } + \Sigma _ { 1 } - 2 \left( \Sigma _ { 0 } ^ { 1 / 2 } \Sigma _ { 1 } \Sigma _ { 0 } ^ { 1 / 2 } \right) ^ { 1 / 2 } \right) . } \end{array}
$$
Proof. We first state the following proposition.
Proposition 4. (Translation Invariance of the 2-Wasserstein Distance for Gaussian Measures) Consider two measures $\eta _ { 0 } \sim \mathcal { N } \left( \mu _ { 0 } , \Sigma _ { 0 } \right)$ and $\eta _ { 1 } \sim \mathcal { N } \left( \mu _ { 1 } , \Sigma _ { 1 } \right)$ and their centered measure as $\widetilde { \eta } _ { 0 } = \mathcal { N } \left( 0 , \Sigma _ { 0 } \right)$ and $\widetilde { \eta } _ { 1 } = \mathcal { N } \left( 0 , \Sigma _ { 1 } \right)$ , the squared Wasserstein distance decomposes as
$$
\begin{array} { r } { \mathcal { W } _ { 2 } ^ { 2 } \left( \eta _ { 0 } , \eta _ { 1 } \right) = \left\| \pmb { \mu _ { 0 } } - \pmb { \mu _ { 1 } } \right\| _ { 2 } ^ { 2 } + \mathcal { W } _ { 2 } ^ { 2 } \left( \tilde { \eta } _ { 0 } , \tilde { \eta } _ { 1 } \right) } \end{array}
$$
Proof. Consider two random vectors $\mathcal { X } , \mathcal { Y }$ distributed as $\eta _ { 0 } , \eta _ { 1 }$ ,
$$
\mathcal { X } = \mu _ { 0 } + \tilde { \mathcal { X } } , \mathcal { Y } = \mu _ { 1 } + \tilde { \mathcal { Y } } , \mathrm { w i t h } \tilde { \mathcal { X } } \sim \tilde { \eta } _ { 0 } , \tilde { \mathcal { Y } } \sim \tilde { \eta } _ { 1 } .
$$
For any coupling $( \mathcal { X } , \mathcal { Y } )$ , we consider the expected squared Euclidean distance,
$$
\begin{array} { r l } { { \mathbb { E } _ { \boldsymbol { \mathcal { X } } , \boldsymbol { \mathcal { Y } } } \big \| \boldsymbol { \mathcal { X } } - \boldsymbol { \mathcal { Y } } \big \| ^ { 2 } = \mathbb { E } _ { \boldsymbol { \mathcal { X } } , \boldsymbol { \mathcal { Y } } } \big \| m \boldsymbol { u } _ { 0 } - \mu _ { 1 } + \big ( \tilde { \mathcal { X } } - \tilde { \mathcal { Y } } \big ) \big \| ^ { 2 } . } } \\ & { = \big \| \mu _ { 0 } - \mu _ { 1 } \big \| ^ { 2 } + 2 \big \langle \mu _ { 0 } - \mu _ { 1 } , \tilde { \mathcal { X } } - \tilde { \mathcal { Y } } \big \rangle + \mathbb { E } _ { \tilde { \mathcal { X } } , \tilde { \mathcal { Y } } } \big \| \tilde { \mathcal { X } } - \tilde { \mathcal { Y } } \big \| ^ { 2 } } \end{array}
$$
Since $\tilde { \mathcal { X } }$ and $\tilde { \mathcal { Y } }$ both have zero mean, we have $\mathbb { E } [ \tilde { \mathcal { X } } - \tilde { \mathcal { Y } } ] = 0$ , so the cross-term vanishes. Thus,
$$
\begin{array} { r } { \mathbb { E } \| \mathcal { X } - \mathcal { Y } \| ^ { 2 } = \| \pmb { \mu _ { 0 } } - \pmb { \mu _ { 1 } } \| ^ { 2 } + \mathbb { E } \| \tilde { \mathcal { X } } - \tilde { \mathcal { Y } } \| ^ { 2 } } \end{array}
$$
Taking the definition of the 2-Wasserstein distance, the infimum over all couplings directly yields
$$
\begin{array} { l } { \displaystyle \left( \mathcal { W } _ { 2 } ( \eta _ { 0 } , \eta _ { 1 } ) \right) ^ { 2 } = \displaystyle \operatorname* { i n f } _ { \pi \in \Pi ( \eta _ { 0 } , \eta _ { 1 } ) } \int \ d \left\| \mathcal { X } - \mathcal { Y } \right\| ^ { 2 } d \pi ( \mathcal { X } , \mathcal { Y } ) . } \\ { \displaystyle = \left\| \mu _ { 0 } - \mu _ { 1 } \right\| ^ { 2 } + \mathcal { W } _ { 2 } ^ { 2 } \left( \tilde { \eta } _ { 0 } , \tilde { \eta } _ { 1 } \right) } \end{array}
$$
This completes the proof of Proposition 4.
Now we prove the following proposition, which will give us our lemma.
Proposition 5. Given two centered measures as $\widetilde { \eta } _ { 0 } = \mathcal { N } \left( 0 , \Sigma _ { 0 } \right)$ and $\widetilde { \eta } _ { 1 } = \mathcal { N } \left( 0 , \Sigma _ { 1 } \right)$
$$
\begin{array} { r } { \mathcal { W } _ { 2 } ^ { 2 } \left( \tilde { \eta } _ { 0 } , \tilde { \eta } _ { 1 } \right) = \mathrm { T r } \left( \Sigma _ { 0 } + \Sigma _ { 1 } - 2 \left( \Sigma _ { 1 } ^ { 1 / 2 } \Sigma _ { 0 } \Sigma _ { 1 } ^ { 1 / 2 } \right) ^ { 1 / 2 } \right) . } \end{array}
$$
Proof. The coupling $\pi$ of $\tilde { \eta } _ { 0 }$ and $\tilde { \eta } _ { 1 }$ is a joint Gaussian measure with zero mean and covariance matrix
$$
\Sigma _ { c } = \left( \begin{array} { c c } { { \Sigma _ { 0 } } } & { { C } } \\ { { C ^ { T } } } & { { \Sigma _ { 1 } } } \end{array} \right) \succeq 0 ,
$$
where $C$ is the cross-covariance and $\succeq$ means the matrix is positive semi-definite (PSD). The expected squared distance between the two random vectors $( \mathcal { X } , \mathcal { Y } )$ drawn from $\pi$ is then,
$$
\begin{array} { r l } & { \mathbb { E } \| \mathcal { X } - \mathcal { Y } \| ^ { 2 } = \operatorname { T r } ( \mathbb { E } [ ( \mathcal { X } - \mathcal { Y } ) ( \mathcal { X } - \mathcal { Y } ) ^ { \top } ] ) } \\ & { \qquad = \operatorname { T r } ( \Sigma _ { 0 } ) + \operatorname { T r } ( \Sigma _ { 1 } ) - 2 \operatorname { T r } ( C ) . } \end{array}
$$
The definition of Wasserstein distance gives,
$$
\mathcal{W}_2^2 ( \eta_0 , \eta_1 ) = \operatorname*{inf}_{\pi \in \Pi ( \eta_0 , \eta_1 )} \mathbb{E} \| \mathcal{X} - \mathcal{Y} \|^2
$$
Thus, minimizing the Wasserstein distance is equivalent to maximizing $\mathrm { T r } ( C )$ over all $C$ subject to the joint covariance being positive semi-definite (PSD). It turns out (see Dowson & Landau (1982); Olkin & Pukelsheim (1982); Takatsu (2010)) that the condition in Equation (39) is equivalent to,
$$
\Sigma_1 - C^{\top} \Sigma_0^{-1} C \succeq 0 \quad \Longleftrightarrow \quad \Sigma_0^{-1/2} C \Sigma_1^{-1/2} \mathrm{~has~operator~norm~} \leq 1
$$
So we denote $K : = \Sigma _ { 0 } ^ { - 1 / 2 } C \Sigma _ { 1 } ^ { - 1 / 2 }$ with $\| K \| _ { \mathrm { o p } } \leq 1$ . Then
$$
\operatorname { T r } ( C ) = \operatorname { T r } \left( \Sigma _ { 0 } ^ { 1 / 2 } K \Sigma _ { 1 } ^ { 1 / 2 } \right) = \operatorname { T r } \left( K \Sigma _ { 1 } ^ { 1 / 2 } \Sigma _ { 0 } ^ { 1 / 2 } \right) .
$$
Using the von Neumann trace inequality, the trace inner product $\mathrm { T r } ( K A )$ is maximized by choosing $K$ to act as the identity on the support of $A$ , i.e., the transpose of the polar factor of $A$ :
$$
\operatorname*{max}_{\| K \|_{\mathrm{op}} \leq 1} \mathrm{Tr} ( K A ) = \mathrm{Tr} \left( M^{1/2} \right) , \quad M = A A^{\top} = \Sigma_1^{1/2} \Sigma_0 \Sigma_1^{1/2}
$$
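This trace maximization can be verified numerically (a sketch; $A$ is a random example, and the optimizer is the transpose of the polar factor of $A$):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
U, s, Vt = np.linalg.svd(A)

# The optimizer K* = (U Vt)^T has operator norm 1 and attains the nuclear norm,
# i.e., Tr(K* A) = sum of singular values = Tr((A A^T)^{1/2}).
K_star = (U @ Vt).T
assert np.isclose(np.trace(K_star @ A), s.sum())

# Any other feasible K (operator norm <= 1) does no better.
K = rng.standard_normal((4, 4))
K /= np.linalg.norm(K, 2)   # normalize to spectral norm 1
assert np.trace(K @ A) <= s.sum() + 1e-9
```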
Hence the optimal value of $\mathrm { T r } ( C )$ is
$$
\mathrm { T r } ( C ^ { * } ) = \mathrm { T r } \left[ \left( \Sigma _ { 1 } ^ { 1 / 2 } \Sigma _ { 0 } \Sigma _ { 1 } ^ { 1 / 2 } \right) ^ { 1 / 2 } \right]
$$
Substituting this optimal value into the expression of Wasserstein distance, we obtain
$$
\begin{array} { r } { \mathcal { W } _ { 2 } ^ { 2 } \left( \tilde { \eta } _ { 0 } , \tilde { \eta } _ { 1 } \right) = \mathrm { T r } ( \Sigma _ { 0 } ) + \mathrm { T r } ( \Sigma _ { 1 } ) - 2 \mathrm { T r } \left[ \left( \Sigma _ { 1 } ^ { 1 / 2 } \Sigma _ { 0 } \Sigma _ { 1 } ^ { 1 / 2 } \right) ^ { 1 / 2 } \right] . } \end{array}
$$
This completes the proof of Proposition 5. Taking Proposition 4 and Proposition 5 together, we have proved Lemma 2.
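Lemma 2 can be implemented and sanity-checked directly (a sketch; `sqrtm_psd` and `w2_gaussian` are our own helper names, and the covariances are random SPD examples):

```python
import numpy as np

def sqrtm_psd(S):
    """Principal square root of a symmetric PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T

def w2_gaussian(mu0, S0, mu1, S1):
    """Squared 2-Wasserstein distance between N(mu0, S0) and N(mu1, S1) (Lemma 2)."""
    R0 = sqrtm_psd(S0)
    cross = sqrtm_psd(R0 @ S1 @ R0)
    return np.sum((mu0 - mu1) ** 2) + np.trace(S0 + S1 - 2 * cross)

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3)); S0 = M @ M.T + np.eye(3)
M = rng.standard_normal((3, 3)); S1 = M @ M.T + np.eye(3)
mu0, mu1 = rng.standard_normal(3), rng.standard_normal(3)

# Identical measures have zero distance; the distance is symmetric.
assert np.isclose(w2_gaussian(mu0, S0, mu0, S0), 0.0, atol=1e-8)
assert np.isclose(w2_gaussian(mu0, S0, mu1, S1), w2_gaussian(mu1, S1, mu0, S0))
```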
# B.2. Derivation of the graph Wasserstein distance under MRF
We then prove the Bures-Wasserstein distance for two graph distributions. We restate Proposition 1,
Proposition 6 (Bures-Wasserstein Distance). Consider two same-sized graphs $\mathcal { G } _ { 0 } \sim p \left( \mathcal { X } _ { 0 } , \mathcal { E } _ { 0 } \right)$ and $\mathcal { G } _ { 1 } \sim p \left( \mathcal { X } _ { 1 } , \mathcal { E } _ { 1 } \right)$ with $V$ shared by the two graphs, described by the distribution in Definition 2. When the graphs are equipped with graph Laplacian matrices $\scriptstyle { L _ { 0 } }$ and $\scriptstyle { L _ { 1 } }$ that 1) are positive semi-definite (PSD) and 2) have only one zero eigenvalue, the Bures-Wasserstein distance between these two random graph distributions is given by
$$
d _ { B W } ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } ) = \left\| \mathbf { X } _ { 0 } - \mathbf { X } _ { 1 } \right\| _ { F } ^ { 2 } + \beta \operatorname { T r } \left( \pmb { L } _ { 0 } ^ { \dag } + \pmb { L } _ { 1 } ^ { \dag } - 2 \left( \pmb { L } _ { 0 } ^ { \dag / 2 } \pmb { L } _ { 1 } ^ { \dag } \pmb { L } _ { 0 } ^ { \dag / 2 } \right) ^ { 1 / 2 } \right) ,
$$
as $\nu \to 0$ , where $\beta$ is a constant related to the norm of $V$ .
Specifically, Definition 2 uses graph Markov random fields to describe a graph as
$$
{ \begin{array} { r } { p ( { \mathcal { G } } ; G ) = p ( { \mathcal { X } } , { \mathcal { E } } ; X , W ) = p ( { \mathcal { X } } ; X , W ) \cdot p ( { \mathcal { E } } ; W ) { \mathrm { ~ w h e r e ~ } } { \mathcal { E } } } \\ { \operatorname { v e c } ( { \mathcal { X } } ) \sim { \mathcal { N } } \left( X , \Lambda ^ { \dagger } \right) , { \mathrm { ~ w i t h ~ } } X = \operatorname { v e c } ( V ^ { \dagger } \mu ) , \Lambda = ( \nu I + L ) \otimes } \end{array} }
$$
With the graph Wasserstein distance defined as,
$$
d _ { \mathrm { B W } } ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } ) : = \mathcal { W } _ { c } \left( \eta _ { \mathcal { G } _ { 0 } } , \eta _ { \mathcal { G } _ { 1 } } \right) = \mathcal { W } _ { c } \big ( \eta _ { \mathcal { X } _ { 0 } } , \eta _ { \mathcal { X } _ { 1 } } \big ) + \mathcal { W } _ { c } \big ( \eta _ { \mathcal { E } _ { 0 } } , \eta _ { \mathcal { E } _ { 1 } } \big ) .
$$
We first consider calculating $\mathcal { W } _ { c } ( \eta _ { \mathcal { X } _ { 0 } } , \eta _ { \mathcal { X } _ { 1 } } )$ . Specifically, this is the distance between two colored Gaussian measures where
$$
\eta _ { i } \sim \mathcal { N } \Big ( \mu _ { i } ^ { \prime } , \Sigma _ { i } \Big ) , \quad i = 0 , 1 ,
$$
where we first assume that these two Gaussians are emitted from different linear transformation matrices $V _ { 0 }$ and $V _ { 1 }$ . This yields the most general and flexible form, which is universally applicable and may offer insights for future work. Next, we will introduce a few assumptions to arrive at a more practical form for building the flow matching models.
An important property of Kronecker product: Given two invertible matrices $A$ and $B$ , their Kronecker product satisfies $( \boldsymbol { A } \otimes \boldsymbol { B } ) ^ { - 1 } = \boldsymbol { A } ^ { - 1 } \otimes \boldsymbol { B } ^ { - 1 }$ . Using such a property, in the limit as $\nu \to 0$ , we have
$$
\Lambda _ { i } \to L _ { i } \otimes ( { \pmb V } _ { i } ^ { \top } { \pmb V } _ { i } ) \quad \Longrightarrow \quad \Sigma _ { i } = L _ { i } ^ { - 1 } \otimes ( { \pmb V } _ { i } ^ { \top } { \pmb V } _ { i } ) ^ { - 1 } .
$$
According to Lemma 2, the squared 2-Wasserstein distance between two Gaussian measures is given by
$$
\mathcal{W}_2^2 \big( \eta_0 , \eta_1 \big) = \underbrace{\lVert \pmb{\mu}_0^{\prime} - \pmb{\mu}_1^{\prime} \rVert^2}_{\mathrm{Mean~Term}} + \underbrace{\mathrm{Tr} \Big( \Sigma_0 + \Sigma_1 - 2 \Big( \Sigma_0^{1/2} \Sigma_1 \Sigma_0^{1/2} \Big)^{1/2} \Big)}_{\mathrm{Covariance~Term}} .
$$
Mean Term. Since $\pmb { \mu } _ { i } ^ { \prime } = \operatorname { v e c } ( V _ { i } \pmb { \mu } _ { i } )$ , the mean difference becomes
$$
\begin{array} { r } { \| \pmb { \mu } _ { 0 } ^ { \prime } - \pmb { \mu } _ { 1 } ^ { \prime } \| ^ { 2 } = \| \pmb { V } _ { 0 } \pmb { \mu } _ { 0 } - \pmb { V } _ { 1 } \pmb { \mu } _ { 1 } \| _ { F } ^ { 2 } = \| \pmb { X } _ { 0 } - \pmb { X } _ { 1 } \| _ { \mathrm { F } } ^ { 2 } } \end{array}
$$
Covariance term. Using the property of the Kronecker product, the square root of Equation (47) factors as
$$
\begin{array} { r } { \Sigma _ { i } ^ { 1 / 2 } = { \cal L } _ { i } ^ { - 1 / 2 } \otimes ( { \cal V } _ { i } ^ { \top } { \cal V } _ { i } ) ^ { - 1 / 2 } . } \end{array}
$$
and
$$
\Sigma _ { 0 } ^ { 1 / 2 } \Sigma _ { 1 } \Sigma _ { 0 } ^ { 1 / 2 } = \left( L _ { 0 } ^ { - 1 / 2 } L _ { 1 } ^ { - 1 } L _ { 0 } ^ { - 1 / 2 } \right) \otimes \left( ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } \right)
$$
We first look into the term related to $V _ { 0 }$ and $V _ { 1 }$ , which is,
$$
\begin{array} { r l } & { \mathrm { T r } \Big ( ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } \Big ) = \mathrm { T r } \Big ( ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } \Big ) } \\ & { \phantom { \mathrm { T r } \Big ( } = \mathrm { T r } \Big ( ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 } \Big ) } \end{array}
$$
As $\operatorname { T r } ( A + B ) = \operatorname { T r } ( A ) + \operatorname { T r } ( B )$ the covariance term becomes
Covariance Term
$$
\begin{array} { r l } & { \quad = \mathrm { T r } \Big ( \Sigma _ { 0 } + \Sigma _ { 1 } - 2 \Big ( \sum _ { 0 } ^ { 1 / 2 } \Sigma _ { 1 } \Sigma _ { 0 } ^ { 1 / 2 } \Big ) ^ { 1 / 2 } \Big ) } \\ & { \quad = \mathrm { T r } \big ( \Sigma _ { 0 } \big ) + \mathrm { T r } \big ( \Sigma _ { 1 } \big ) - 2 \mathrm { T r } \Big ( \big ( \Sigma _ { 0 } ^ { 1 / 2 } \Sigma _ { 1 } \Sigma _ { 0 } ^ { 1 / 2 } \big ) ^ { 1 / 2 } \Big ) } \\ & { \quad = \mathrm { T r } \Big ( L _ { 0 } ^ { - 1 } \otimes ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 } + L _ { 1 } ^ { - 1 } \otimes ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } - 2 \Big ( L _ { 0 } ^ { - 1 / 2 } L _ { 1 } ^ { - 1 } L _ { 0 } ^ { - 1 / 2 } \Big ) ^ { 1 / 2 } \otimes ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 / 2 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } \Big ) } \end{array}
$$
Given that $\operatorname { T r } ( A \otimes B ) = \operatorname { T r } ( A ) \operatorname { T r } ( B )$ and $\operatorname { T r } ( V ^ { \intercal } V ) = \| V \| _ { \mathrm { F } } ^ { 2 }$ for any real-valued matrix $V$ , we can further derive,
$$
\begin{array} { r l } & { \mathrm { C o v a r i a n c e ~ T e r m } = \mathrm { T r } [ ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 } ] \mathrm { T r } ( L _ { 0 } ^ { \dagger } ) + \mathrm { T r } [ ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } ] \mathrm { T r } ( L _ { 1 } ^ { \dagger } ) } \\ & { \phantom { \mathrm { C o v a r i a n c e ~ T e r m } = } - 2 \mathrm { T r } \Big ( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \Big ) ^ { 1 / 2 } \cdot \mathrm { T r } [ ( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 / 2 } ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 / 2 } ] . } \end{array}
$$
Unfortunately, to simplify this equation, we have to make the two Gram matrices, $( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 }$ and $( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 }$ , agree, i.e., $( V _ { 1 } ^ { \top } V _ { 1 } ) ^ { - 1 } = ( V _ { 0 } ^ { \top } V _ { 0 } ) ^ { - 1 }$ . This is satisfied if and only if there exists an orthogonal matrix $Q$ such that
$$
V _ { 1 } ^ { \dag } = V _ { 0 } ^ { \dag } Q .
$$
Thus, to proceed, we simply consider the case when $V _ { 1 }$ and $V _ { 0 }$ are exactly the same, i.e., $V _ { 1 } = V _ { 0 } = V$ (we have already discussed how realistic this assumption is in Appendix A.3). We therefore work under the assumption that $\Vert \pmb { V } _ { 0 } ^ { \dagger } \Vert _ { F } ^ { 2 } = \Vert \pmb { V } _ { 1 } ^ { \dagger } \Vert _ { F } ^ { 2 } = \beta$ , which simplifies the trace to
$$
\mathrm { C o v a r i a n c e \ T e r m } = \beta \cdot \mathrm { T r } \Big ( L _ { 0 } ^ { \dagger } + L _ { 1 } ^ { \dagger } - 2 \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \Big ) .
$$
Combining the mean term and the covariance term, we obtain the Wasserstein distance of $\mathcal { W } _ { c } ( \eta _ { \mathcal { X } _ { 0 } } , \eta _ { \mathcal { X } _ { 1 } } )$
For calculating $\mathcal { W } _ { c } ( \eta _ { \mathcal { E } _ { 0 } } , \eta _ { \mathcal { E } _ { 1 } } )$ , we have the freedom to choose the cost function when obtaining the Wasserstein distance. Note that $W$ serves as the prior for the Gaussian covariance matrix $\Sigma$ , where the covariance has to be positive semi-definite. Thus, according to (Bhatia et al., 2019), a proper distance between two positive semi-definite matrices is measured by
$$
\mathcal { W } ( \eta \varepsilon _ { 0 } , \eta \varepsilon _ { 1 } ) = \left\| \Sigma _ { 0 } ^ { 1 / 2 } - \Sigma _ { 1 } ^ { 1 / 2 } \right\| _ { F } ^ { 2 } .
$$
Coincidentally, this is another use case of the Bures-Wasserstein metric. Putting everything together, the Wasserstein distance in the limit $\nu \to 0$ is
$$
\begin{array} { r l } & { d _ { \mathrm { B W } } ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } ) = \| V _ { 0 } \mu _ { 0 } - V _ { 1 } \mu _ { 1 } \| _ { F } ^ { 2 } + \left( \beta + 1 \right) \cdot \mathrm { T r } \Big ( L _ { 0 } ^ { \dagger } + L _ { 1 } ^ { \dagger } - 2 \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \Big ) \cdot } \\ & { \qquad = \underbrace { \| X _ { 0 } - X _ { 1 } \| _ { F } ^ { 2 } } _ { d _ { X } ( X _ { 0 } , X _ { 1 } ) } + \left( \beta + 1 \right) \cdot \underbrace { \mathrm { T r } \Big ( L _ { 0 } ^ { \dagger } + L _ { 1 } ^ { \dagger } - 2 \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \Big ) } _ { d _ { L } ( L _ { 0 } , L _ { 1 } ) } . } \end{array}
$$
This expression separates the contribution of the mean difference (transformed by $V$ ) from the discrepancy between the covariance structures (encoded in $\scriptstyle { L _ { 0 } }$ and $\pmb { L } _ { 1 }$ ). It can be further used to derive the BW interpolation, which we show in Appendix C.1. In the main body, the constant $\beta$ actually corresponds to $\beta + 1$ here. This completes our derivation of Proposition 1.
# C. Derivation of Bures-Wasserstein flow matching
In order to build the flow matching framework, we need to derive the optimal interpolation and the corresponding velocities for the probability path $p ( G _ { t } \mid G _ { 0 } , G _ { 1 } )$ . This is achieved via the OT displacement between two graph distributions.
# C.1. The Bures-Wasserstein graph interpolation
We aim to recover the proposition stated as follows.
Proposition 7 (Bures-Wasserstein interpolation). The graph minimizer of Equation (10), $\mathcal { G } _ { t } = \{ \mathcal { V } , \mathcal { E } _ { t } , \mathcal { X } _ { t } \}$ , has its node features following a colored Gaussian distribution, $\mathcal { X } _ { t } \sim \mathcal { N } ( X _ { t } , \Lambda _ { t } ^ { \dagger } )$ with $\Lambda _ { t } = \left( \nu \pmb { I } + \pmb { L } _ { t } \right) \otimes V ^ { \top } V$ , and edges following $\mathcal { E } _ { t } \sim \delta ( W _ { t } )$ ; specifically,
$$
\begin{array} { r } { { \cal L } _ { t } ^ { \dagger } = { \cal L } _ { 0 } ^ { 1 / 2 } \left( \left( 1 - t \right) { \cal L } _ { 0 } ^ { \dagger } + t \left( { \cal L } _ { 0 } ^ { \dagger / 2 } { \cal L } _ { 1 } ^ { \dagger } { \cal L } _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \right) ^ { 2 } { \cal L } _ { 0 } ^ { 1 / 2 } , \quad { \cal X } _ { t } = \left( 1 - t \right) { \cal X } _ { 0 } + t { \cal X } _ { 1 } } \end{array}
$$
The interpolation is an extension of the concept of a mean: in the optimal transport world, the Wasserstein barycenter (mean) of measures $\eta _ { 0 } , \ldots , \eta _ { m - 1 }$ under weights $\lambda _ { 0 } , \ldots , \lambda _ { m - 1 }$ is obtained from the following optimization problem:
$$
\bar { \eta } = \underset { \eta } { \arg \operatorname* { m i n } } \sum _ { j = 0 } ^ { m - 1 } \lambda _ { j } \left( \mathcal { W } _ { 2 } \left( \eta , \eta _ { j } \right) \right) ^ { 2 }
$$
When $m = 2$ , based on the Bures-Wasserstein (BW) distance, we can define the OT displacement minimization problem on graphs described as,
$$
\mathcal { G } _ { t } = \mathop { \mathrm { a r g } } \underset { \tilde { \mathcal { G } } } { \mathrm { m i n } } \ ( 1 - t ) d _ { \mathrm { B W } } \big ( \mathcal { G } _ { 0 } , \tilde { \mathcal { G } } \big ) + t d _ { \mathrm { B W } } \big ( \tilde { \mathcal { G } } , \mathcal { G } _ { 1 } \big ) .
$$
where $d _ { \mathrm { B W } } \big ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } \big )$ is described in Proposition 1. The optimal graph interpolation is the solution to the problem.
In the setting of graphs, this becomes a two-variable optimization problem, where
$$
\mathcal { X } _ { t } , \mathcal { E } _ { t } = \mathop { \mathrm { a r g } \mathrm { m i n } } _ { \tilde { \mathcal { X } } , \tilde { \mathcal { E } } } ~ ( 1 - t ) d _ { \mathrm { B W } } ( \mathcal { G } _ { 0 } , \tilde { \mathcal { G } } ) + t d _ { \mathrm { B W } } ( \tilde { \mathcal { G } } , \mathcal { G } _ { 1 } ) .
$$
Fortunately, recall from Equation (57) that our distance $d _ { \mathrm { B W } } \big ( \mathcal { G } _ { 0 } , \mathcal { G } _ { 1 } \big )$ decomposes into $d _ { \pmb { X } } ( \pmb { X } _ { 0 } , \pmb { X } _ { 1 } )$ and $d _ { L } ( \pmb { L } _ { 0 } , \pmb { L } _ { 1 } )$ , so the optimization over nodes and edges disentangles into the two sub-optimization problems,
$$
\begin{array} { r l } & { \mathrm { S u b - q u e s t i o n ~ 1 : } \qquad \bar { X } _ { t } = \underset { \bar { X } } { \mathrm { a r g } \mathrm { m i n } } \quad ( 1 - t ) \| X _ { 0 } - \tilde { X } \| _ { F } ^ { 2 } + t \| \tilde { X } - X _ { 1 } \| _ { F } ^ { 2 } } \\ & { \mathrm { S u b - q u e s t i o n ~ 2 : } \qquad \bar { L } _ { t } = \underset { \bar { L } } { \mathrm { a r g } \mathrm { m i n } } \quad ( 1 - t ) d _ { L } ( L _ { 0 } , \tilde { L } ) + t d _ { L } ( L _ { 1 } , \tilde { L } ) } \end{array}
$$
These two problems are completely decoupled, so we can solve them separately.
Sub-question 1. For the first problem, we simply set the derivative to 0 and get,
$$
\bigl( 1 - t \bigr) \bigl( \tilde{X} - X_0 \bigr) + t \bigl( \tilde{X} - X_1 \bigr) = 0 \ \to \ X_t = \bigl( 1 - t \bigr) X_0 + t X_1
$$
Sub-question 2. The second sub-problem is equivalent to deriving the covariance of the Bures-Wasserstein interpolation between two Gaussian measures, $\eta _ { 0 } \sim \mathcal { N } \left( 0 , L _ { 0 } ^ { \dagger } \right)$ and $\eta _ { 1 } \sim \mathcal { N } \left( 0 , L _ { 1 } ^ { \dagger } \right)$ . This problem has been addressed in Haasler & Frossard (2024), and here we simply restate their results; we refer the reader to Haasler & Frossard (2024) for further discussion.
The optimal transport geodesic between $\eta _ { 0 } \sim \mathcal { N } \left( 0 , L _ { 0 } ^ { \dagger } \right)$ and $\eta _ { 1 } \sim \mathcal { N } \left( 0 , L _ { 1 } ^ { \dagger } \right)$ is defined by $\eta _ { t } = \big ( \big ( 1 - t \big ) I + t T \big ) _ { \# } \eta _ { 0 }$ , where the symbol “#” denotes the push-forward of a measure by a mapping and $T$ is a linear map that satisfies ${ \pmb T } { \pmb L } _ { 0 } ^ { \dagger } { \pmb T } = { \pmb L } _ { 1 } ^ { \dagger }$ .
We define a new matrix $M$ and normalize it, which leads to the parametrization,
$$
\pmb { T } = \pmb { L } _ { 0 } ^ { 1 / 2 } \pmb { M } \pmb { L } _ { 0 } ^ { 1 / 2 }
$$
Plugging in gives,
$$
\begin{array} { l } { { { \pmb T } { \pmb L } _ { 0 } ^ { \dagger } { \pmb T } ^ { \top } = { \pmb L } _ { 0 } ^ { 1 / 2 } M { \pmb L } _ { 0 } ^ { 1 / 2 } { \pmb L } _ { 0 } ^ { \dagger } \left( { \pmb L } _ { 0 } ^ { 1 / 2 } M { \pmb L } _ { 0 } ^ { 1 / 2 } \right) ^ { \top } } } \\ { { \mathrm { = } { \pmb L } _ { 0 } ^ { 1 / 2 } M M ^ { \top } { \pmb L } _ { 0 } ^ { 1 / 2 } . } } \end{array}
$$
So that we obtain
$$
{ \boldsymbol { L } } _ { 1 } ^ { \dagger } = { \boldsymbol { L } } _ { 0 } ^ { 1 / 2 } M { \boldsymbol { M } } ^ { \top } { \boldsymbol { L } } _ { 0 } ^ { 1 / 2 } \to M = ( { \boldsymbol { L } } _ { 0 } ^ { \dagger / 2 } { \boldsymbol { L } } _ { 1 } ^ { \dagger } { \boldsymbol { L } } _ { 0 } ^ { \dagger / 2 } ) ^ { 1 / 2 }
$$
Substituting back into $T$ , we get,
$$
T = L_0^{1/2} \left( L_0^{\dagger/2} L_1^{\dagger} L_0^{\dagger/2} \right)^{1/2} L_0^{1/2}
$$
Given that the geodesic $\eta _ { t } = \big ( \big ( 1 - t \big ) I + t T \big ) _ { \# } \eta _ { 0 }$ which also has a Gaussian form $\boldsymbol { \eta _ { t } } \sim \mathcal { N } \left( 0 , \Sigma _ { t } \right)$ , We can then write the covariance matrix and obtain
$$
\begin{array} { r l } & { L _ { t } ^ { \dagger } = \Sigma _ { t } = \left( \left( 1 - t \right) I + t T \right) L _ { 0 } ^ { \dagger } ( \left( 1 - t \right) I + t T ) } \\ & { \quad = L _ { 0 } ^ { 1 / 2 } \left( \left( 1 - t \right) L _ { 0 } ^ { \dagger } + t \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \right) L _ { 0 } ^ { 1 / 2 } L _ { 0 } ^ { \dagger } L _ { 0 } ^ { 1 / 2 } \left( \left( 1 - t \right) L _ { 0 } ^ { \dagger } + t \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \right) L _ { 0 } ^ { 1 / 2 } } \\ & { \quad = L _ { 0 } ^ { 1 / 2 } \left( \left( 1 - t \right) L _ { 0 } ^ { \dagger } + t \left( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \right) ^ { 2 } L _ { 0 } ^ { 1 / 2 } } \end{array}
$$
This ends the derivation.
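The interpolation formula can be sanity-checked numerically at the endpoints $t = 0$ and $t = 1$ (a sketch; `psd_pow` and `interp_Lt_pinv` are our own helper names, and the graphs are assumed connected so both Laplacians share the nullspace spanned by the all-ones vector):

```python
import numpy as np

def psd_pow(S, p):
    """Matrix power of a symmetric PSD matrix restricted to its range (pseudo-power)."""
    w, Q = np.linalg.eigh(S)
    wp = np.zeros_like(w)
    mask = w > 1e-10
    wp[mask] = w[mask] ** p
    return Q @ np.diag(wp) @ Q.T

def interp_Lt_pinv(L0, L1, t):
    """BW interpolation of the pseudo-inverse Laplacian (Proposition 7)."""
    L0p = psd_pow(L0, -1.0)
    M = psd_pow(psd_pow(L0p, 0.5) @ psd_pow(L1, -1.0) @ psd_pow(L0p, 0.5), 0.5)
    core = (1 - t) * L0p + t * M
    return psd_pow(L0, 0.5) @ core @ core @ psd_pow(L0, 0.5)

rng = np.random.default_rng(6)
def rand_laplacian(n):
    W = np.triu(rng.random((n, n)), 1)
    W = W + W.T
    return np.diag(W.sum(axis=1)) - W

L0, L1 = rand_laplacian(5), rand_laplacian(5)
# Endpoints of the geodesic recover the two pseudo-inverse Laplacians.
assert np.allclose(interp_Lt_pinv(L0, L1, 0.0), psd_pow(L0, -1.0), atol=1e-6)
assert np.allclose(interp_Lt_pinv(L0, L1, 1.0), psd_pow(L1, -1.0), atol=1e-6)
```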
Remark 1: Even though the GraphMRF in Definition 2 relies on implicit linear emission matrices $V$, the BW interpolation in Proposition 2 can be obtained without explicit access to the $V$ matrices. This property is attractive because, in practice, we can construct the probability path without explicitly fitting $V$ beforehand.
# C.2. Deriving the velocity of BW interpolation
We first show the general form of the velocity term for the Gaussian and Dirac measures.
Gaussian Measure. For a time-parametrized Gaussian density $p _ { t } ( x ) = \mathcal { N } ( x ; \pmb { \mu } _ { t } , \Sigma _ { t } )$ , the velocity field $\boldsymbol { v } _ { t } ( \boldsymbol { x } )$ that satisfies the continuity equation
$$
\begin{array} { r } { \partial _ { t } p _ { t } + \nabla \cdot \left( p _ { t } v _ { t } \right) = 0 , } \end{array}
$$
is an affine function of $x$, and the instantaneous velocity field follows,
$$
v _ { t } ( x ) = \dot { \mu } _ { t } + \frac { 1 } { 2 } \dot { \Sigma } _ { t } \Sigma _ { t } ^ { - 1 } \left( x - \mu _ { t } \right) .
$$
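In one dimension this velocity can be checked against the continuity equation directly, since $\frac{1}{2}\dot{\Sigma}_t\Sigma_t^{-1}$ reduces to $\dot{\sigma}_t/\sigma_t$. The sketch below uses an arbitrary linear schedule (the specific values of `mu0`, `mu1`, `s0`, `s1` are illustrative):

```python
import numpy as np

# Arbitrary linear schedule for the mean and standard deviation
mu0, mu1 = -1.0, 2.0
s0, s1 = 1.0, 0.5

mu = lambda t: (1 - t) * mu0 + t * mu1
sig = lambda t: (1 - t) * s0 + t * s1
dmu, dsig = mu1 - mu0, s1 - s0

def p(x, t):
    # Gaussian density N(x; mu_t, sigma_t^2)
    return np.exp(-(x - mu(t)) ** 2 / (2 * sig(t) ** 2)) / (np.sqrt(2 * np.pi) * sig(t))

def v(x, t):
    # In 1-D, (1/2) * d(sigma^2)/dt / sigma^2 = sigma_dot / sigma
    return dmu + (dsig / sig(t)) * (x - mu(t))

# Finite-difference check of  d_t p + d_x (p v) = 0  on a grid
x = np.linspace(-6.0, 6.0, 2001)
t, h = 0.4, 1e-5
dpdt = (p(x, t + h) - p(x, t - h)) / (2 * h)
dflux = np.gradient(p(x, t) * v(x, t), x)
assert np.max(np.abs(dpdt + dflux)) < 1e-3
```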
Dirac Measure. When the measure is a Dirac function,
$$
p _ { t } ( x ) = \delta \left( \cdot , \pmb { \mu } _ { t } \right) .
$$
We can consider it as the limiting case of the Gaussian measure when $\Sigma _ { t } \to 0$, so that the velocity simply takes
$$
v _ { t } ( x ) = \dot { \mu } _ { t } .
$$
We then move to prove the following proposition for the Bures-Wasserstein velocity.
Proposition 8 (Bures-Wasserstein velocity). For the graph $\mathcal { G } _ { t }$ following BW interpolation in Proposition 2, the conditional velocity at time $t$ with observation $G _ { t }$ is given as,
$$
\begin{array} { l } { v _ { t } \big ( E _ { t } \mid G _ { 0 } , G _ { 1 } \big ) = \dot { W } _ { t } = \mathrm { d i a g } \big ( \dot { L } _ { t } \big ) - \dot { L } _ { t } , \quad v _ { t } \big ( X _ { t } \mid G _ { 0 } , G _ { 1 } \big ) = \frac { 1 } { 1 - t } \big ( X _ { 1 } - X _ { t } \big ) } \\ { \textit { w i t h } \ \dot { L } _ { t } = 2 L _ { t } - T L _ { t } - L _ { t } T \ \textit { a n d } \ T = L _ { 0 } ^ { 1 / 2 } ( L _ { 0 } ^ { \dagger / 2 } L _ { 1 } ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } ) ^ { 1 / 2 } L _ { 0 } ^ { 1 / 2 } } \end{array}
$$
where $W _ { t } = D _ { t } - L _ { t }$ and $L _ { t }$ is defined in Equation (11).
Proof:
The graph structure velocity. As we assume the edges $E _ { t } \sim \delta ( \cdot , W _ { t } )$ follow a Dirac distribution, the velocity is defined as
$$
v _ { t } ( E _ { t } ) = \dot { W } _ { t } .
$$
Given that $\dot { W } _ { t } = \mathrm { d i a g } ( \dot { L } _ { t } ) - \dot { L } _ { t }$, we turn to deriving the derivative of the Laplacian matrix, $\dot { L } _ { t }$. Using the fact that,
$$
\frac { d } { d t } \left( { \cal A } ^ { - 1 } \right) = - { \cal A } ^ { - 1 } \frac { d { \cal A } } { d t } { \cal A } ^ { - 1 }
$$
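This inverse-derivative identity can be verified numerically on any smooth invertible matrix path; the path $A(t) = I + tB$ below is an arbitrary illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))

A = lambda t: np.eye(n) + t * B   # an arbitrary smooth invertible path
dA_dt = B                         # its exact time derivative

t, h = 0.2, 1e-6
Ainv = np.linalg.inv(A(t))
analytic = -Ainv @ dA_dt @ Ainv   # d/dt A^{-1} = -A^{-1} (dA/dt) A^{-1}
numeric = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
assert np.allclose(analytic, numeric, atol=1e-5)
```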
we obtain the derivative of the Laplacian matrix,
$$
\dot { L } _ { t } = \frac { d \big ( \Sigma _ { t } ^ { \dagger } \big ) } { d t } = - \Sigma _ { t } ^ { \dagger } \frac { d \Sigma _ { t } } { d t } \Sigma _ { t } ^ { \dagger } = - L _ { t } \frac { d \Sigma _ { t } } { d t } L _ { t }
$$
According to Equation (68) and Equation (67), the covariance matrix is defined through the interpolation,
$$
\Sigma _ { t } = { \bigl ( } { \bigl ( } 1 - t { \bigr ) } I + t T { \bigr ) } L _ { 0 } ^ { \dagger } { \bigl ( } { \bigl ( } 1 - t { \bigr ) } I + t T { \bigr ) } : = R _ { t } L _ { 0 } ^ { \dagger } R _ { t }
$$
where $\pmb { R } _ { t } = ( 1 - t ) \pmb { I } + t \pmb { T }$ . Taking the derivative, we get,
$$
\dot { \Sigma } _ { t } = \frac { d } { d t } \left( R _ { t } \Sigma _ { 0 } R _ { t } \right) = R _ { t } ^ { \prime } \Sigma _ { 0 } R _ { t } + R _ { t } \Sigma _ { 0 } R _ { t } ^ { \prime } = ( T - I ) \Sigma _ { 0 } R _ { t } + R _ { t } \Sigma _ { 0 } ( T - I )
$$
Using the fact that $\Sigma _ { 0 } R _ { t } = R _ { t } \Sigma _ { 0 } = \Sigma _ { t }$ , we obtain the covariance gradient
$$
\dot { \Sigma } _ { t } = \big ( \pmb { T } - \pmb { I } \big ) \Sigma _ { t } + \Sigma _ { t } \big ( \pmb { T } - \pmb { I } \big )
$$
So that,
$$
\begin{array} { r l } { - \dot { L } _ { t } } & { = L _ { t } \frac { d \Sigma _ { t } } { d t } L _ { t } } \\ & { = L _ { t } \big ( \big ( T - I \big ) L _ { t } ^ { \dagger } + L _ { t } ^ { \dagger } \big ( T - I \big ) \big ) L _ { t } } \\ & { = L _ { t } \big ( T - I \big ) + \big ( T - I \big ) L _ { t } } \\ & { = L _ { t } T + T L _ { t } - 2 L _ { t } } \end{array}
$$
Thus, $\dot { \cal L } _ { t } = 2 { \cal L } _ { t } - { \cal L } _ { t } { \cal T } - { \cal T } { \cal L } _ { t }$ .
Given that $L _ { t } = D _ { t } - W _ { t }$, so that $W _ { t } = \mathrm { d i a g } ( L _ { t } ) - L _ { t }$, taking the derivative gives $\dot { W } _ { t } = \mathrm { d i a g } ( \dot { L } _ { t } ) - \dot { L } _ { t }$. As we assume the edges $E _ { t } \sim \delta ( \cdot , W _ { t } )$, the derivative directly yields the velocity,
$$
v _ { t } \big ( E _ { t } \mid G _ { 0 } , G _ { 1 } \big ) = \dot { W } _ { t } = \mathrm { d i a g } ( \dot { L } _ { t } ) - \dot { L } _ { t } .
$$
The node feature velocity. The instantaneous velocity field follows,
$$
v _ { t } \left( \mathcal { X } \mid G _ { 0 } , G _ { 1 } \right) = \dot { \mu } _ { t } + \frac { 1 } { 2 } \dot { \Sigma } _ { t } \Sigma _ { t } ^ { - 1 } \left( \mathcal { X } - \mu _ { t } \right) .
$$
The mean gradient interpolating $\eta _ { 0 }$ and $\eta _ { 1 }$ can be written as $\dot { \pmb { \mu } } _ { t } = \pmb { X } _ { 1 } - \pmb { X } _ { 0 }$ with $\begin{array} { r } { \mathbf { X } _ { t } = ( 1 - t ) \mathbf { X } _ { 0 } + t \mathbf { X } _ { 1 } } \end{array}$. So the velocity leads to,
$$
v _ { t } ( \mathcal { X } \mid G _ { 0 } , G _ { 1 } ) = X _ { 1 } - X _ { 0 } + \frac { 1 } { 2 } \dot { L } _ { t } ^ { \dagger } L _ { t } \left( \mathcal { X } - X _ { t } \right) .
$$
However, in practice, we do not need such a complicated velocity term: we wish to avoid estimating the complex derivative-inverse term and its associated computation. Under the assumption that the magnitude of the covariance is much smaller than the mean difference, we can omit the second term and keep only the mean difference. Hence the instantaneous velocity is simply described as
$$
v _ { t } { \big ( } X _ { t } \mid G _ { 0 } , G _ { 1 } { \big ) } = X _ { 1 } - X _ { 0 } = { \frac { 1 } { 1 - t } } { \big ( } X _ { 1 } - X _ { t } { \big ) }
$$
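The equivalence of the two expressions is a one-line check under the linear mean path, since $X_1 - X_t = (1-t)(X_1 - X_0)$:

```python
import numpy as np

rng = np.random.default_rng(3)
X0, X1 = rng.standard_normal((6, 3)), rng.standard_normal((6, 3))

t = 0.7
Xt = (1 - t) * X0 + t * X1   # linear mean path

# X1 - Xt = (1 - t)(X1 - X0), so the two velocity expressions coincide
assert np.allclose(X1 - X0, (X1 - Xt) / (1 - t))
```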
# Algorithm 1: BWFlow Training
Input: Ref. dist $p _ { 0 }$ and dataset $\mathcal { D } \sim p _ { 1 }$ .
Output: Trained model $f _ { \theta } ( G _ { t } , t )$ .
Initialize model $f _ { \theta } ( G _ { t } , t )$ ;
while $f _ { \theta }$ not converged do
/\* Sample boundary graphs \*/ Sample batched $\left\{ G _ { 0 } \right\} \sim p _ { 0 }$, $\left\{ G _ { 1 } \right\} \sim \mathcal { D }$;
/\* Probability-path construction \*/ Sample $t \sim \mathcal { U } ( 0 , 1 )$; calculate the BW interpolation $p ( G _ { t } \mid G _ { 0 } , G _ { 1 } )$ via Equation (11);
/\* Denoising ($x$-prediction) \*/ $p _ { 1 | t } ^ { \theta } ( \cdot \mid G _ { t } ) \gets f _ { \theta } ( G _ { t } , t )$;
Loss calculation via Equation (4); optimizer.step();
# Algorithm 2: BWFlow Sampling
Input: Reference distribution $p _ { 0 }$, trained model $f _ { \theta } ( G _ { t } , t )$, small time step $dt$.
Output: Generated graphs $\{ \hat { G } _ { 1 } \}$.
Initialize samples $\{ \hat { G } _ { 0 } \} \sim p _ { 0 }$;
Initialize the model $p _ { 1 | t } ^ { \theta } ( \cdot \mid G _ { t } ) \gets f _ { \theta } ( G _ { t } , t )$;
for $t \gets 0$ to $1 - dt$ by $dt$ do
/\* Denoising ($x$-prediction) \*/ Predict $\tilde { G } _ { 1 } \sim p _ { 1 \mid t } ^ { \theta } ( \cdot \mid \hat { G } _ { t } )$;
/\* Velocity calculation \*/ Calculate $v _ { \theta } ( \hat { G } _ { t } \mid \hat { G } _ { 0 } , \tilde { G } _ { 1 } )$ via Equation (12);
/\* Numerical sampling \*/ Sample $\hat { G } _ { t + d t } \sim \hat { G } _ { t } + v _ { \theta } ( \hat { G } _ { t } ) \, d t$
# D. Discrete Bures-Wasserstein flow matching for graph generation
# D.1. Probability path construction for discrete Bures-Wasserstein flow matching
The discrete probability path. We design the probability path as discrete distributions,
$$
\begin{array} { r l } & { p _ { t } ( x _ { v } \mid G _ { 0 } , G _ { 1 } ) = \mathrm { C a t e g o r i c a l } ( [ X _ { t } ] _ { v } ) , \quad p _ { t } ( e _ { u v } \mid G _ { 0 } , G _ { 1 } ) = \mathrm { B e r n o u l l i } ( [ W _ { t } ] _ { u v } ) } \\ & { \mathrm { s . t . } \ p _ { 0 } ( \mathcal { G } ) = \delta ( G _ { 0 } , \cdot ) , p _ { 1 } ( \mathcal { G } ) = \delta ( G _ { 1 } , \cdot ) } \end{array}
$$
where $\pmb { W } _ { t } = \pmb { D } _ { t } - \pmb { L } _ { t }$ with $X _ { t }$ and $L _ { t }$ defined as in Equation (11). We consider the fact that the Dirac distribution is a special case where the Categorical/Bernoulli distribution has probability 1 or 0, so the boundary condition $p _ { 0 } ( { \mathcal G } ) = \delta ( G _ { 0 } , \cdot ) , p _ { 1 } ( \mathcal { G } ) = \delta ( G _ { 1 } , \cdot )$ holds. As such, $\begin{array} { r } { \pmb { X } _ { t } = \big ( 1 - t \big ) \pmb { X } _ { 0 } + t \pmb { X } _ { 1 } \in [ 0 , 1 ] ^ { | \mathcal { V } | \times K } } \end{array}$. Since the boundary conditions for each entry, $[ X _ { 0 } ] _ { v }$ and $[ X _ { 1 } ] _ { v }$, are two one-hot embeddings, $[ X _ { t } ] _ { v } = ( 1 - t ) [ X _ { 0 } ] _ { v } + t [ X _ { 1 } ] _ { v }$ sums to one and is a valid probability vector. Thus, Categorical $\left( \left[ \boldsymbol { X } _ { t } \right] _ { v } \right)$ is a $K$-class categorical distribution.
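A quick numeric check that the interpolated features form valid categorical distributions, using random one-hot boundaries (`num_nodes` and `K` are illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
num_nodes, K = 8, 4   # illustrative sizes

def one_hot(idx, K):
    return np.eye(K)[idx]

X0 = one_hot(rng.integers(K, size=num_nodes), K)   # boundary one-hot features
X1 = one_hot(rng.integers(K, size=num_nodes), K)

t = 0.35
Xt = (1 - t) * X0 + t * X1

# Each row of X_t is a valid K-class categorical distribution
assert np.all(Xt >= 0)
assert np.allclose(Xt.sum(axis=1), 1.0)

# Sampling node labels along the path
labels = np.array([rng.choice(K, p=row) for row in Xt])
assert labels.shape == (num_nodes,)
```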
For the edge distribution, we consider each $e _ { u v }$ to be conditionally independent of the others given $[ W _ { t } ] _ { u v }$. One thing to emphasize is that, given the nature of the Bures-Wasserstein interpolation, the resulting $\mathbf { } W _ { t }$ is not always bounded by $[ 0 , 1 ]$, so we hard-clip it to this range.
# D.2. Approximating Wasserstein distance in Bernoulli distributions
To make sure that the individual nodes are structured and developed jointly while doing flow matching, we assume that $\mathrm { v e c } ( \mathcal { X } )$ still maintains a covariance matrix similar to Equation (8), which gives $\pmb { \Lambda } = \left( \nu \pmb { I } + \pmb { L } \right) \otimes \pmb { V } ^ { \top } \pmb { V }$, given that $\mathcal { X }$ is emitted from a latent variable $\mathcal { H }$ through an affine transformation and the latent variable has a covariance matrix $( \nu I + L ) ^ { - 1 }$. Different from the Gaussian case, the latent variable is still a discrete distribution, and the affine transformation carries the covariance structure over.
Unfortunately, the Wasserstein distance between two discrete graph distributions that follow Equation (13) does not have a closed-form solution given their complex intertwined nature. However, it is possible to apply the central limit theorem to $\mathcal { X }$ so that we can approximate the Wasserstein distance between two Bernoulli distributions with its Gaussian counterpart. This approximation works in the high-dimensional case (i.e., when $| \mathcal { V } | d$ is moderately large), where the OT distance between two such Bernoulli distributions is well captured by the corresponding Gaussian formula, which we already introduced in Equation (57).
Owing to this property, even though we are no longer sampling from Gaussian distributions, it is possible to approximate the Wasserstein distance between two multivariate discrete distributions with its Gaussian counterpart, so conclusions such as optimal transport displacements still hold. We can then similarly derive the Bures-Wasserstein velocity, as in the next section.
# D.3. Velocity for discrete Bures-Wasserstein flow matching
Node Velocity. Node-wise, the path of node features $\textstyle { \mathcal { X } } _ { t }$ can be re-written as $p _ { t } ( \mathcal { X } ) = ( 1 - t ) \delta ( \cdot , \boldsymbol { X } _ { 0 } ) + t \delta ( \cdot , \boldsymbol { X } _ { 1 } )$, so the conditional velocity can be accessed through $v _ { t } ( X _ { t } \mid G _ { 0 } , G _ { 1 } ) = [ \delta ( \cdot , X _ { 1 } ) - \delta ( \cdot , X _ { t } ) ] / ( 1 - t )$, similar to the derivation in (Gat et al., 2024).
Edge Velocity. For edge-wise, we look into each entry of the adjacency matrix $W$ , and consider a time-dependent Bernoulli distribution, the probability density function is:
$$
p _ { t } \big ( e _ { u v } \big ) = \big [ W _ { t } \big ] _ { u v } ^ { e _ { u v } } \big ( 1 - \big [ W _ { t } \big ] _ { u v } \big ) ^ { 1 - e _ { u v } } , \qquad e _ { u v } \in \{ 0 , 1 \} .
$$
To properly define a velocity $\boldsymbol { v } ( \boldsymbol { x } , t )$ , it should follow the continuity equation
$$
\frac { \partial } { \partial t } p _ { t } ( e _ { u v } ) + \nabla \cdot ( p v ) _ { t } ( e _ { u v } ) \ = \ 0 .
$$
We use $x$ and $y$ to denote two states of $e _ { u v } ( p ( e _ { u v } = x ) : = p ( x ) , p ( e _ { u v } = y ) : = p ( y ) )$ , then the divergence term is
$$
\nabla \cdot ( p v ) ( e _ { u v } = x ) = \sum _ { y \neq x } \left[ p _ { t } ( y ) v _ { t } ( y \to x ) - p _ { t } ( x ) v _ { t } ( x \to y ) \right] .
$$
As we are working on a Bernoulli distribution, then the forward equations become
$$
\left\{ \begin{array} { l } { \partial _ { t } p ( 0 ) = p ( 1 ) v _ { t } ( 1 \to 0 ) - p ( 0 ) v _ { t } ( 0 \to 1 ) , } \\ { \partial _ { t } p ( 1 ) = p ( 0 ) v _ { t } ( 0 \to 1 ) - p ( 1 ) v _ { t } ( 1 \to 0 ) . } \end{array} \right.
$$
Since $p _ { t } ( 1 ) = [ W _ { t } ] _ { u v } .$ , we have $\partial _ { t } p ( 1 ) = [ \dot { W } _ { t } ] _ { u v }$ and $\partial _ { t } p ( 0 ) = - [ \dot { W } _ { t } ] _ { u v }$ . Hence
$$
p ( 0 ) v _ { t } ( 0 \to 1 ) - p ( 1 ) v _ { t } ( 1 \to 0 ) = [ \dot { W } _ { t } ] _ { u v } .
$$
There are many solutions to the above equation. We choose a symmetric solution for the transition $e _ { u v } \to 1 - e _ { u v }$, with
$$
v _ { t } ( 0 \to 1 ) = \frac { [ \dot { W } _ { t } ] _ { u v } } { 1 - [ W _ { t } ] _ { u v } } , \quad v _ { t } ( 1 \to 0 ) = - \frac { [ \dot { W } _ { t } ] _ { u v } } { [ W _ { t } ] _ { u v } } .
$$
Finally, for conciseness, we can write it as a velocity field on states $e _ { u v } \in \{ 0 , 1 \}$; note that $1 - 2 e _ { u v }$ is $+ 1$ at $e _ { u v } = 0$ and $- 1$ at $e _ { u v } = 1$. Thus, we have
$$
v ( e _ { u v } , t ) = \left( 1 - 2 e _ { u v } \right) \frac { [ \dot { W } _ { t } ] _ { u v } } { [ W _ { t } ] _ { u v } \left( 1 - [ W _ { t } ] _ { u v } \right) } , \quad e _ { u v } \in \{ 0 , 1 \} ,
$$
which in matrix form gives
$$
v _ { t } \big ( E _ { t } \mid G _ { 1 } , G _ { 0 } \big ) = \big ( 1 - 2 E _ { t } \big ) \frac { \dot { W } _ { t } } { W _ { t } \circ \big ( 1 - W _ { t } \big ) } .
$$
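The matrix-form edge velocity can be computed directly once $W_t$ and $\dot{W}_t$ are available. The sketch below uses random symmetric stand-ins for both (in the paper they come from the BW interpolation), together with the hard clipping mentioned in Appendix D.1 to keep the Bernoulli rates finite:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6

def sym_no_diag(A):
    """Symmetrize and zero the diagonal (simple undirected-graph convention)."""
    A = np.triu(A, 1)
    return A + A.T

# Random stand-ins: interpolated edge probabilities W_t and their derivative
W_t = sym_no_diag(rng.uniform(0.05, 0.95, size=(n, n)))
W_dot = sym_no_diag(0.1 * rng.standard_normal((n, n)))

# Hard-clip to keep the rates finite
eps = 1e-6
W_c = np.clip(W_t, eps, 1 - eps)

E_t = (rng.uniform(size=(n, n)) < W_c).astype(float)   # sampled edge states

# v_t(E_t | G_1, G_0) = (1 - 2 E_t) * W_dot / (W_c * (1 - W_c))
v = (1 - 2 * E_t) * W_dot / (W_c * (1 - W_c))

# Absent edges (E = 0) with growing probability (W_dot > 0) get positive rates
mask = (E_t == 0) & (W_dot > 0)
assert v.shape == (n, n)
assert np.all(v[mask] > 0)
```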
Combining the node velocity and the edge velocity, we can now introduce the discrete Bures-Wasserstein flow matching algorithm, with the training and inference parts introduced in Algorithm 3 and Algorithm 4, respectively.
# E. Design space for Bures Wasserstein interpolation and velocity
In the introduction, we have already compared different probability paths and how they impact inference-time sampling. While the Bures-Wasserstein flow path is shown to produce a better probability path for the model to learn, as illustrated in Figure 1a, we have to point out that linear interpolation and the corresponding probability path can still converge to the data distribution with a sufficiently large number of flow steps: if we sample with infinitely many flow steps during the later stage of the flow, the samples are still able to reach the target distribution. A similar pattern exists in diffusion models when they are viewed as a Markov chain Monte Carlo procedure, which needs sufficiently many steps to converge. We emphasize that the convergence gap in Figure 1c would slowly close as the number of flow steps increases.
# Algorithm 3: Discrete BWFlow Training
Input: Ref. dist $p _ { 0 }$ and dataset $\mathcal { D } \sim p _ { 1 }$.
Output: Trained model $f _ { \theta } ( G _ { t } , t )$.
Initialize model $f _ { \theta } ( G _ { t } , t )$;
while $f _ { \theta }$ not converged do
/\* Sample boundary graphs \*/ Sample batched $\left\{ G _ { 0 } \right\} \sim p _ { 0 }$, $\{ G _ { 1 } \} \sim \mathcal { D }$;
/\* Probability-path construction \*/ Sample $t \sim \mathcal { U } ( 0 , 1 )$; calculate the BW interpolation to obtain $X _ { t } , W _ { t }$ via Equation (11); sample $G _ { t } \sim p ( \mathcal { G } _ { t } \mid G _ { 0 } , G _ { 1 } )$ according to Equation (13);
/\* Denoising ($x$-prediction) \*/ $p _ { 1 | t } ^ { \theta } ( \cdot \mid G _ { t } ) \gets f _ { \theta } ( G _ { t } , t )$;
Loss calculation via Equation (4); optimizer.step();
# Algorithm 4: Discrete BWFlow Sampling
Input: Reference distribution $p _ { 0 }$, trained model $f _ { \theta } ( G _ { t } , t )$, small time step $dt$.
Output: Generated graphs $\{ \hat { G } _ { 1 } \}$.
Initialize samples $\{ \hat { G } _ { 0 } \} \sim p _ { 0 }$;
Initialize the model $p _ { 1 | t } ^ { \theta } ( \cdot \mid G _ { t } ) \gets f _ { \theta } ( G _ { t } , t )$;
for $t \gets 0$ to $1 - dt$ by $dt$ do
/\* Denoising ($x$-prediction) \*/ Predict $\tilde { G } _ { 1 } \sim p _ { 1 | t } ^ { \theta } ( \cdot \mid \hat { G } _ { t } )$;
/\* Velocity calculation \*/ Calculate $v _ { \theta } ( \hat { G } _ { t } \mid \hat { G } _ { 0 } , \tilde { G } _ { 1 } )$ via Equation (14);
/\* Numerical sampling \*/ Sample $\hat { G } _ { t + d t } \sim \hat { G } _ { t } + v _ { \theta } ( \hat { G } _ { t } ) \, d t$
Given that different sampling algorithms can all bring the samples to the data distribution under certain conditions, we wish to understand the large design space of Bures-Wasserstein interpolation. We list the advantages and disadvantages of the different techniques and discuss when each technique should be used.
In general, we consider two important steps in constructing flow matching for graph generation: training and sampling. In training, the main challenge is to obtain a valid target velocity $u ( G _ { t } )$ to be regressed to, so we list a few strategies that can help with that. In sampling, the challenge becomes how to reconstruct the probability path from the estimated velocity.
# E.1. The Training Design
In general, the learning objective in flow matching depends on regressing the velocity term. There are several ways to obtain the velocity.
1. Exact velocity estimation. Use Equation (3) as the parameterization and learn $p _ { \theta } ( \mathcal { G } _ { 1 } \mid G _ { t } )$.
2. Numerical Approximation. In the implementation of (Stärk et al., 2024), the derivative is calculated through numerical approximation. To achieve better efficiency in calculating the velocity, we similarly use a numerical estimate of the velocity term, $\dot { \pmb { L } } _ { t } = ( \pmb { L } _ { t + \Delta t } - \pmb { L } _ { t } ) / \Delta t$. Regressing on this numerical difference provides an estimate of the velocity.
3. AutoDiff. In (Chen & Lipman, 2024), the derivative of the probability path is evaluated through PyTorch AutoDiff. However, in practice we find this method unstable.
We summarize the training-stage model parameterization in Table 4.
Table 4. The model parameterization for flow matching in training stage
# E.2. The Sampling Design
As described in Equation (3), in our training framework we actually train a denoiser $p _ { \theta } ( \mathcal { G } _ { 1 } \mid \mathcal { G } _ { t } )$. With such a parameterization, and taking discrete flow matching as an example, sampling can be done through one of the following design choices:
1. Target Guided Velocity Sampling. The velocity is designed as,
$$
v _ { \theta } \bigl ( G _ { t } \bigr ) = \frac { 1 } { 1 - t } \bigl ( p _ { \theta } \bigl ( \mathcal { G } _ { 1 } \mid G _ { t } \bigr ) - \delta \bigl ( G _ { t } , \cdot \bigr ) \bigr ) .
$$
This design moves the current point $G _ { t }$ directly in the direction of the predicted $G _ { 1 }$. The target-guided velocity is guaranteed to converge to the data distribution, but the interpolant might lie outside the valid graph domain.
2. BW velocity sampling. We use Equation (14) to directly estimate the velocity and follow the Bures-Wasserstein probability path to generate new data points. This path is smooth in the graph domain, but it requires more computational cost.
3. Probability Path Reconstruction. The third option is directly reconstructing the probability path, i.e., we first obtain an estimated point,
$$
\tilde { G } _ { 1 } \sim p _ { \theta } ( \mathcal { G } _ { 1 } | G _ { t } )
$$
and then construct the data point at $t + d t$ , which gives
$$
G _ { t + d t } \sim p ( \mathcal { G } _ { t } \mid \tilde { G } _ { 1 } , G _ { 0 } )
$$
through Equation (12). This is the most computationally costly method, inherited from diffusion models, but it also provides an accurate probability-path reconstruction.
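The three choices above differ mainly in cost. As a toy illustration of the first (target-guided) option, the sketch below takes one Euler step on a single categorical node state; `p1_pred` is a hypothetical model output, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(6)
K = 4

# Hypothetical model output: predicted distribution over the clean state G_1
p1_pred = rng.dirichlet(np.ones(K))

t, dt = 0.5, 0.05
x_t = int(rng.integers(K))            # current discrete state
delta = np.eye(K)[x_t]                # delta(G_t, .)

# Target-guided velocity and one Euler step on the probability simplex
v = (p1_pred - delta) / (1 - t)
p_next = delta + v * dt

# The step preserves total probability (the velocity sums to zero)
assert np.isclose(p_next.sum(), 1.0)

# Sample the next state from the updated distribution
p_norm = np.clip(p_next, 0.0, None)
x_next = int(rng.choice(K, p=p_norm / p_norm.sum()))
assert 0 <= x_next < K
```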
In Section 4, we show that the BW velocity follows a path that minimizes the Wasserstein distance and thus provides better performance, but sampling following the linear velocity also converges, at much lower computational cost. This trade-off should be considered in real-world applications.
Table 5. Reconstructing probability path choices in flow matching during inference
# F. Discussion and Limitations
# F.1. The implicit manipulation of probability path
Though not explicitly mentioned, Qin et al. (2024) make a substantial effort to manipulate the probability path for better velocity estimation by extensively searching the design space, and their findings align well with the statement that the velocity should be smooth and consistently directed towards the data points: 1) Time distortion (the orange line in Figure 5b): the polynomial distortion of training and sampling focuses on the later stage of the probability trajectory, providing better velocity estimation in this region. This uneven sampling strategy is equivalent to pushing the probability path left to make it smooth. 2) Target guidance (the orange line in Figure 5a): the target guidance directly estimates the direction from a point along the path towards the terminal graph, so that the manipulated probability path points smoothly to the data distribution. 3) Stochasticity injection (the green line in Figure 5a): stochasticity explores points away from the path, which prevents the path from getting stuck in plateau regions.
Figure 5. Techniques for manipulating probability path.
# F.2. Potential extension to diffusion models
In order to extend the flow matching algorithms with diffusion models, one important thing is to convert the pair-conditioned probability path and velocity to single boundary conditions. For instance, the probability path in flow matching has the form $p ( G _ { t } \mid G _ { 1 } , G _ { 0 } )$ and the velocity follows $v ( G _ { t } \mid G _ { 1 } , G _ { 0 } )$. As suggested in (Siraudin et al., 2024; Campbell et al., 2022; Xu et al., 2024), discrete graph diffusion models require a velocity (equivalent to a rate matrix) to perturb the data distribution conditioned on the data points, which we denote as $v ( G _ { t } \mid G _ { 1 } )$. As long as the unilateral conditional velocity has a tractable form, one can first sample a $G _ { 1 }$ and obtain $G _ { t }$ by iteratively applying:
$$
G _ { t - d t } = G _ { t } - v ( G _ { t } \mid G _ { 1 } ) d t
$$
starting from $G _ { 1 }$. In this way one can easily construct the probability path $p ( G _ { t } \mid G _ { 1 } )$ to fit into the diffusion-model framework. In practice, given that we know the explicit form of $v ( G _ { t } \mid G _ { 1 } , G _ { t ^ { \prime } } )$ (simply replacing $G _ { 0 }$ in the expression), the unilateral conditional velocity can be obtained by taking the limit,
$$
v ( G _ { t } \mid G _ { 1 } ) = v ( G _ { t } \mid G _ { 1 } , G _ { t } ) = \operatorname * { l i m } _ { t ^ { \prime } \to t } v ( G _ { t } \mid G _ { 1 } , G _ { t ^ { \prime } } ) .
$$
Both linear interpolation and our Bures-Wasserstein interpolation can achieve this easily. We only provide a discussion here and leave this as future work, as this paper focuses on flow matching rather than diffusion models.
# F.3. Permutation invariance
The Bures-Wasserstein distance between two graph distributions is not permutation invariant, and the minimal value is obtained through graph alignment. So, ideally, to achieve optimal transport, graph alignment and mini-batch matching could provide a better probability path. However, permutation invariance is not always a desired property, since we only want to find a path that better transforms the reference distribution into the data distribution. As an illustration, the widely used linear interpolation for constructing graph flows (Qin et al., 2024) does not guarantee permutation invariance either. Moreover, if the measurement is based on the Wasserstein distance between two Gaussian distributions, it can be shown that
$$
\begin{array} { r l } & { \qquad d _ { \mathrm { B W } } ( \eta _ { \mathcal { G } _ { 0 } } , \eta _ { \mathcal { G } _ { 1 } } ) \leq d _ { \mathrm { A r i t h m e t i c } } ( \eta _ { \mathcal { G } _ { 0 } } , \eta _ { \mathcal { G } _ { 1 } } ) } \\ & { \qquad \mathrm { w i t h ~ } d _ { \mathrm { B W } } ( \eta _ { \mathcal { G } _ { 0 } } , \eta _ { \mathcal { G } _ { 1 } } ) = \| X _ { 0 } - X _ { 1 } \| _ { F } ^ { 2 } + \beta \operatorname { t r a c e } \left( L _ { 0 } ^ { \dagger } + L _ { 1 } ^ { \dagger } - 2 \left( L _ { 0 } ^ { \dagger / 2 } ( P ^ { \top } L _ { 1 } P ) ^ { \dagger } L _ { 0 } ^ { \dagger / 2 } \right) ^ { 1 / 2 } \right) . } \end{array}
$$
# G. Related Works
# G.1. Diffusion and Flow Models
Among contemporary generative models, diffusion (Ho et al., 2020) and flow models (Lipman et al., 2023) have emerged as two compelling approaches for their superior performance in generating text and images. In particular, these generative models can be unified under the framework of stochastic interpolation (Albergo & Vanden-Eijnden, 2023), which consists of four procedures (Lipman et al., 2024), as we introduced in Section 1. These generative models rely on constructing a probability path between samples of an easy-to-sample reference distribution and of the data distribution, and on training a machine learning model to simulate the process (Lipman et al., 2024), so that one can sample from the reference (a.k.a. source) distribution and iteratively transform it to approximate samples from the target distribution. Diffusion models construct the probability path with a unilateral path conditioned on the data distribution, where one starts by sampling a data point $X _ { 1 }$ and constructs the path $p ( \mathcal { X } _ { t } \mid X _ { 1 } )$. Flow models, in contrast, can condition on both boundary conditions, $\{ X _ { 1 } , X _ { 0 } \}$, or just the one-sided boundary condition $X _ { 1 }$.
Depending on the space the algorithm operates in, both model families can be categorized into continuous or discrete models. Continuous generative models assume the data distributions themselves lie in a continuous space (such as Gaussian), with examples in diffusion (Ho et al., 2020; Song et al., 2021; Wang et al., 2024) and flow (Lipman et al., 2023; Liu et al., 2023b). Discrete generative models assume the data follow a discrete distribution, for instance categorical or Bernoulli distributions; examples include discrete diffusion (Campbell et al., 2022; Sun et al., 2023) and discrete flow models (Campbell et al., 2024; Gat et al., 2024; Minello et al., 2025).
Under the stochastic interpolation framework, interpolation methods are commonly selected through the optimal transport (OT) displacement interpolant (Liu et al., 2023b; Albergo & Vanden-Eijnden, 2023; McCann, 1997). Optimal transport is a classical topic in mathematics that originated in economics and operations research (Villani & Society, 2003) and has now become a popular tool in generative modeling. OT aims to find the best transport plan between two probability measures with the smallest associated transportation cost. It has been shown that generative models can be combined with techniques such as iterative matching (Tong et al., 2024) and mini-batching (Pooladian et al., 2023) to approximate the OT cost and obtain a significant boost in generative-modeling performance.
# G.2. Graph Generation Models
Thanks to the capability of graphs in representing complex relationships, graph generation (Zhu et al., 2022; Liu et al., 2023a) has become an essential task in various fields such as protein design (Ingraham et al., 2019), drug discovery (Bilodeau et al., 2022), and social network analysis (Li et al., 2023). The initial attempt at graph generation is formalized through autoregression. For instance, GraphRNN (You et al., 2018) organizes the node interactions into a series of connection events and conducts autoregressive prediction for generation. Later, one shot generation methods such as Variational Graph Auto-Encoder were proposed (Kipf & Welling, 2016; Cao & Kipf, 2018).
Among various generative models, diffusion models and flow-based models have emerged as two compelling approaches for their ability to achieve state-of-the-art performance in graph generation tasks (Niu et al., 2020; Vignac et al., 2023a; Eijkelboom et al., 2024; Qin et al., 2024; Hou et al., 2024). In the early stage, continuous diffusion models were first extended to the task of graph generation (Niu et al., 2020), viewing the adjacency matrix as a special signal living in the $\mathbb { R } ^ { | \mathcal { V } | \times | \mathcal { V } | }$ domain. However, these methods fail to capture the natural discreteness of graphs, and Vignac et al. (2023a) first brought discrete diffusion into graph generation. Since then, more works (Siraudin et al., 2024; Xu et al., 2024) have focused on designing better discrete diffusion models for graph generation.
On the other hand, with the development of flow matching techniques, a few works have utilized flow models for graph generation and achieved considerable success. Eijkelboom et al. (2024) utilize variational flow matching to process categorical data, and Qin et al. (2024) develop discrete flow matching for graph generation tasks.
In parallel, a number of works have managed to respect the intrinsic nature of graphs, such as global patterns. For instance, Jo et al. (2024) bring a mixture-of-graphs technique to enhance performance by explicitly learning final graph structures; Yu & Zhan (2025) mitigate exposure bias and reverse-start bias in graph generation; Hou et al. (2024) improve graph generation through optimal-transport flow matching techniques, but still assume independence between nodes and edges and use the Hamming distance to measure the transport cost; and Li et al. (2023) give a large-scale attributed graph generation framework through batching edges.
However, there remains a core challenge: constructing the probability path $p _ { t }$. Existing text and image generative models, operating either in the continuous (Ho et al., 2020; Song et al., 2021; Lipman et al., 2023; Liu et al., 2023b) or discrete (Campbell et al., 2022; Sun et al., 2023; Campbell et al., 2024; Gat et al., 2024; Minello et al., 2025) space, typically rely on linear interpolation between source and target distributions to construct the path. Graph generation models, including diffusion (Niu et al., 2020; Vignac et al., 2023a; Haefeli et al., 2022; Xu et al., 2024; Siraudin et al., 2024) and flow-based models (Eijkelboom et al., 2024; Qin et al., 2024; Hou et al., 2024), inherit this design by modeling every single node and edge independently and linearly building paths in the disjoint space. However, this approach is inefficient because it neglects the strong interactions and relational structure inherent in graphs, i.e., the significance of a node heavily depends on the configuration of its neighbors. While empirical successes have been achieved via fine-grained search over the training and sampling design (Qin et al., 2024), such as target guidance and time distortion, we argue that a fundamental issue remains with linear probability-path construction, and these strategies only mitigate the problem by manipulating the probability path.
Table 6. Training and sampling time on each dataset. TG means using target-guided velocity; BW means using BW velocity.
# H. Comparison with other interpolation methods
In the experimental part, we compare our method with arithmetic (linear) interpolation, geometric interpolation, and harmonic interpolation. We state their equations respectively as follows.
We consider the boundary graphs $G _ { 0 }$ and $G _ { 1 }$ with $X _ { 0 } , X _ { 1 } \in \mathbb { R } ^ { | \mathcal { V } | \times d }$ and $W _ { 0 } , W _ { 1 } \in \mathbb { R } ^ { | \mathcal { V } | \times | \mathcal { V } | }$ . Let $t \in [ 0 , 1 ]$ ; we fix the feature interpolation as,
$$
\mathbf { X } _ { t } = \left( 1 - t \right) \mathbf { X } _ { 0 } + t \, \mathbf { X } _ { 1 } ,
$$
the graph structure interpolation can be expressed as,
Linear interpolation:
$$
W _ { t } = \left( 1 - t \right) W _ { 0 } \ + \ t W _ { 1 } .
$$
Geometric interpolation:
$$
\begin{array} { r } { W _ { t } = W _ { 0 } ^ { 1 / 2 } \left( W _ { 0 } ^ { - 1 / 2 } W _ { 1 } W _ { 0 } ^ { - 1 / 2 } \right) ^ { t } W _ { 0 } ^ { 1 / 2 } , } \end{array}
$$
Harmonic interpolation:
$$
\mathbf { W } _ { t } = \left( \left( 1 - t \right) \mathbf { W } _ { 0 } ^ { - 1 } + t \, \mathbf { W } _ { 1 } ^ { - 1 } \right) ^ { - 1 } .
$$
Each interpolation method in fact encodes a particular manifold assumption, so the choice should be made with a comprehensive understanding of the task. In our experimental part, we conduct an intensive analysis of the impact of the interpolation method on graph generation quality.
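As an illustration, the three structure interpolations above can be sketched as follows for symmetric positive-definite (SPD) weight matrices, an assumption we make so that the matrix powers and inverses in the geometric and harmonic formulas are well defined; this is our own sketch, not the paper's implementation.

```python
import numpy as np

def _spd_power(W, p):
    """Fractional power of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(W)
    return (vecs * vals**p) @ vecs.T

def linear_interp(W0, W1, t):
    # W_t = (1 - t) W0 + t W1
    return (1 - t) * W0 + t * W1

def geometric_interp(W0, W1, t):
    # W_t = W0^{1/2} (W0^{-1/2} W1 W0^{-1/2})^t W0^{1/2}
    R, Rinv = _spd_power(W0, 0.5), _spd_power(W0, -0.5)
    return R @ _spd_power(Rinv @ W1 @ Rinv, t) @ R

def harmonic_interp(W0, W1, t):
    # W_t = ((1 - t) W0^{-1} + t W1^{-1})^{-1}
    return np.linalg.inv((1 - t) * np.linalg.inv(W0) + t * np.linalg.inv(W1))
```

All three recover the endpoints $W_0$ and $W_1$ at $t = 0$ and $t = 1$; they differ in the geometry of the path taken in between.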
# I. Additional Experiment Results
# I.1. Experiment setups and computational cost
The training and sampling computation times are provided in Table 6. The experiments were run on a single NVIDIA A100-SXM4-80GB GPU. The hyperparameter configurations used to produce Tables 1 to 3 are reported in Table 7.
# I.2. Additional results for the training paths
Figure 6 gives the training probability path construction for planar graphs and tree graphs. While planar graphs show a pattern similar to the SBM datasets in Figure 3a, the probability path constructed for tree graphs does not. We attribute this to the different geometry of tree graphs, which reside in hyperbolic space (Yang et al., 2022).
Table 7. Best Configuration for Training and Sampling when producing Tables 1 to 3.
Figure 6. BW probability paths for planar and tree graphs.
# I.3. More experiments on plain graph generations
Additional results for sampling paths. We then give the sampling path construction in Figure 7. To better illustrate the advantage of BWFlow, we fix the number of sampling steps to be as small as 50. It is clear that on the planar and SBM datasets, the BW velocity still provides a smooth probability path and stable convergence towards the data distribution, whereas the linear velocity does not yield a good probability path and fails to converge to the optimal value, especially when the number of sampling steps is small.
The maximum mean discrepancy (MMD) between the set of generated graphs and the test set is measured on five graph statistics: degree (Deg.), clustering coefficient (Clus.), count of orbits with 4 nodes (Orbit), the eigenvalues of the graph Laplacian (Spec.), and the wavelet ratio (Wavelet). To verify that the model learns to generate graphs with valid topology, we also report the percentage of valid, unique, and novel (V.U.N.) graphs, where a valid graph satisfies the corresponding property of each dataset (Planar, Tree, SBM, etc.).
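As a hedged illustration of the degree-statistic MMD described above, the sketch below compares degree histograms of generated and reference graphs with an RBF-kernel squared MMD; the histogram binning and kernel bandwidth are our own illustrative choices, not the exact evaluation protocol.

```python
import numpy as np

def degree_hist(adj, num_bins):
    """Normalized degree histogram of a graph given its adjacency matrix."""
    degrees = adj.sum(axis=1)
    hist, _ = np.histogram(degrees, bins=num_bins, range=(0, num_bins), density=True)
    return hist

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased (V-statistic) squared MMD between two sets of histogram vectors."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

With this biased estimator, identical sets give an MMD of exactly zero, and distinct sets give a strictly positive value.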
Full results for plain graph generation. Table 8 gives the full results, including generative models other than the diffusion and flow models. Table 9 gives the results on smaller datasets, i.e., COMM20.
# I.4. Full table for the synthetic graph generation
In Table 10, we present the numerical results comparing the interpolation methods in plain graph generation without node features. It is clear that BWFlow outperforms the other methods on planar and SBM graphs, but its performance degrades on tree graph generation.
Figure 7. The probability path reconstruction in the sampling stage on a) Planar graphs and b) SBM graphs.
Figure 8. Training curves on QM9 and planar datasets with explicit hydrogen.
# I.5. 3D Molecule Generation: QM9 without explicit Hydrogen
In Table 11 we report the results on QM9 without explicit hydrogen. This task is relatively easy compared to the generation task with explicit hydrogen, and both Midi and our BWFlow achieve near-saturated performance with validity close to $100 \%$ .
# I.6. Convergence Analysis
Figure 8 shows the training convergence analysis on the Planar and QM9 datasets, demonstrating that BWFlow converges faster than the alternatives.
Table 8. Graph Generation Performance on Synthetic Graphs. Results are obtained through tuning the probability path manipulation techniques.
Table 9. Quantitative experimental results on COMM20 (smaller dataset).
Table 10. Ablation study on interpolation methods when probability path manipulation techniques are all disabled. The clustering and orbit ratios in tree graphs are omitted, given that in the training set, the corresponding statistics are 0. The results go over Exponential Moving Average (decay 0.999) for the last 5 checkpoints. The table is produced with Marginal boundary distributions, without time distortion.
Table 11. Quantitative experimental results on QM9 datasets without explicit hydrogen in 3D molecule generation.
Table 12. Graph generation performance on the synthetic datasets: Planar, Tree, and SBM. Given that the synthetic datasets are usually unstable in evaluation, we applied an exponential moving average to stabilize the results and sample 5 times (each run generates 40 graphs) to calculate the mean and standard deviation. The experiment settings are in Table 7.
Table 13. Comparison of interpolation methods on 3D Molecule Generation with explicit hydrogen in QM9 dataset.
∗ Clearly, continuous flow matching models are not as competitive as discrete flow matching models.
Table 14. Large molecule generation results. Only iterative denoising-based methods are reported.
Table 15. Large molecule generation results. Only comparing the representative diffusion and flow models. B.E. is the scenario that only considers binary edge types. The results are almost saturated, thus not very informative.

Abstract: Graph generation has emerged as a critical task in fields ranging from molecule design to drug discovery. Contemporary approaches, notably diffusion and flow-based models, have achieved solid graph generative performance through constructing a probability path that interpolates between a reference distribution and the data distribution. However, these methods typically model the evolution of individual nodes and edges independently and use linear interpolations to build the path assuming that the data lie in Euclidean space. We show that this is suboptimal given the intrinsic non-Euclidean structure and interconnected patterns of graphs, and it poses risks to the sampling convergence. To build a better probability path, we model the joint evolution of the nodes and edges by representing graphs as connected systems parameterized by Markov random fields (MRF). We then leverage the optimal transport displacement between MRF objects to design the probability path for graph generation. Based on this, we introduce BWFlow, a flow-matching framework for graph generation that respects the underlying geometry of graphs and provides smooth velocities in the probability path. The novel framework can be adapted to both continuous and discrete flow-matching algorithms. Experimental evaluations in plain graph generation and 2D/3D molecule generation validate the effectiveness of BWFlow in graph generation with competitive performance, stable training, and guaranteed sampling convergence.

Categories: cs.LG, cs.AI, stat.ML
# 1. Introduction
Expert systems (ES), AI-based decision-making frameworks, are widely used in critical areas such as medical diagnosis, fraud detection, manufacturing, cyber security, and risk analysis, where decisions must be accurate for safety and reliability (Shu-Hsien Liao, 2005). A major challenge for ES in critical applications is that the data are naturally highly imbalanced (Yang and Xu, 2020; Wei et al., 2013; Rao et al., 2006). This is a common and challenging issue to solve (Branco et al., 2016). Beyond binary classification, class imbalance becomes even more severe in multiclass classification problems (Krawczyk, 2016). Though traditional deep learning (DL) has recently advanced machine learning (ML) by partly overcoming the knowledge bottleneck that limited ML and AI for decades, it remains highly sensitive to imbalanced data distributions (Ghosh et al., 2024; Huang et al., 2020; Bugnon et al., 2020; Yang et al., 2019; He et al., 2016; Collobert and Weston, 2008; Ando and Huang, 2017; Buda et al., 2018). In addition, performance worsens significantly as the imbalance ratio increases (Pulgar et al., 2017). Moreover, failing to detect or predict rare but critical cases can cause higher costs, serious consequences, or sometimes irreparable damage. For example, failure to detect rare invalid transactions can result in significant financial loss and loss of customer trust. Missing early-stage cancer in a few patients (false negatives) may reduce survival chances. Missing signs of rare machine faults, like a turbine blade crack, can cause equipment failure or downtime. These issues underscore the need to handle class imbalance carefully.
While established data-level techniques ( Batista et al. , 2004 ; Barandela et al. , 2003 ), including oversampling and undersampling, along with algorithm-level methods such as cost-sensitive learning (CSL), offer foundational strategies for addressing class imbalance, each comes with trade-offs. Data-level techniques are flexible and widely used since they do not depend on the choice of classifier ( López et al. , 2013 ). Undersampling ( Devi et al. , 2020 ) helps reduce bias toward the majority class but may not be suitable for small datasets. On the other hand, oversampling ( Sharma et al. , 2022 ) increases the risk of overfitting. Similarly, CSL ( Araf et al. , 2024 ) is conceptually powerful but often faces practical hurdles in accurately defining misclassification costs, which is difficult for complex real-world datasets. This highlights that simply modifying data distributions or loss functions may be inadequate for effectively addressing severe imbalance in challenging tabular settings ( Krawczyk et al. , 2014 ).
These challenges have motivated the exploration of methods that can inherently learn more discriminative features. Graph Neural Networks (GNNs) ( Scarselli et al. , 2009 ; Gori et al. , 2005 ) offer a strong approach for modeling tabular data by capturing complex dependencies often overlooked by traditional models. However, applying GNNs to tabular data can be computationally intensive and face scalability limitations, especially when large datasets or instance-wise graph construction are involved ( Villaizán-Vallelado et al. , 2024 ; Lee et al. , 2024 ).
Beyond graph-based representation learning, contrastive learning (CL) has recently shown notable success in improving generalization by structuring feature representations based on underlying patterns (Hu et al., 2024). A notable advancement was proposed by Tao et al. (Tao et al., 2024), who combined Supervised Contrastive Learning (SCL) with automatic tuning of the temperature parameter $\tau$, a key factor influencing performance. While this approach marks great progress, it presents challenges that limit its generalizability. Specifically, it requires dataset-specific tuning of the architecture (e.g., layer size, batch size), making its application less practical across diverse scenarios. Moreover, that study did not report precision and recall separately; these metrics are critical in expert systems, where the costs of false positives and false negatives differ significantly. The method also depends on complex hyperparameter tuning via the Tree-structured Parzen Estimator (TPE), which increases model complexity. Additionally, their use of basic augmentation, especially Gaussian blur, leaves room for more effective techniques. In fact, CL’s success in image domains is largely driven by augmentation techniques, which do not translate well to tabular data due to the lack of spatial or structural properties.
Basic mixing of tabular samples can generate unrealistic data points, particularly harming rare class representations. Thus, more adaptive and smarter augmentation strategies are needed to generate meaningful and diverse samples without introducing much noise. Concurrently, researchers have explored new computational approaches to build neural networks that can learn complex patterns effectively, even with limited or complex data. One such direction is Quantum-inspired (QI) DL, which uses ideas from quantum mechanics to improve classical models’ learning ability. However, these QI methods are still developing, especially in handling imbalanced and complex datasets ( Shi et al. , 2023 ; Hong et al. , 2024 ; Konar et al. , 2020 ).
To address these key challenges in handling imbalanced tabular data, we propose the Quantum-informed Contrastive Learning with Dynamic Mixup Augmentation Network (QCL-MixNet), a novel framework that combines three core innovations. Firstly, we introduce novel Quantum Entanglement-inspired Modules within our neural network architecture, built on ideas from quantum mechanics without directly using quantum computing hardware. This module improves the model’s capacity to capture complex, non-linear feature interactions often missed by standard layers, thereby improving feature representation and offering a practical advancement for QI techniques in this domain. Secondly, to tackle the critical issue of data augmentation for tabular data, we employ a Sample-Aware Dynamic Mixup strategy. Instead of random mixing, which risks generating unrealistic samples, our method intelligently generates synthetic instances by interpolating an anchor sample with one of its k-nearest neighbors (kNN) in the feature space. This approach creates more realistic and beneficial augmented samples, especially for underrepresented minority classes, enriching the training data without introducing significant noise. Existing solutions often rely on singular approaches (e.g., data resampling, basic cost-sensitive learning, or CL with simple loss functions) that may not comprehensively address severe imbalance or the need for well-structured embeddings. Finally, to address this, our model components are trained with a hybrid loss function that learns robust and discriminative embeddings: focal reweighting handles imbalance, contrastive and triplet components structure the embedding space, and variance regularization encourages stable, well-separated class representations.
Our main contributions are:
1. We propose QCL-MixNet, a novel framework that effectively integrates quantum-inspired modules for expressive feature learning, kNN-guided sample-aware dynamic mixup for intelligent augmentation, and a hybrid contrastive loss for robust imbalanced classification.
2. We conduct extensive experiments on 18 diverse binary and multi-class imbalanced datasets. Our results show that QCL-MixNet consistently and significantly outperforms 20 state-of-the-art ML, DL, and graph-based models, establishing a new benchmark for imbalanced tabular data.
3. We conduct systematic ablation studies to validate the critical impact of each architectural component, demonstrating that the full QCL-MixNet architecture achieves superior and stable performance.
The remainder of this paper is organized as follows: Section 2 reviews related work on class imbalance techniques, contrastive learning, quantum-inspired deep learning, and graph-based methods for tabular data. Section 3 introduces the proposed QCL-MixNet framework, detailing its architecture and theoretical foundations. Section 4 outlines the experimental setup, including datasets, baselines, and evaluation protocol. Section 5 presents and discusses the results. Section 6 concludes the paper with key findings, limitations of this study, and future directions.
# 2. Literature Review
# 2.1. Class Imbalance Techniques (Undersampling, Oversampling, CSL)
A training set is considered imbalanced when one class has significantly fewer samples than the others (Barandela et al., 2003). Class imbalance becomes seriously problematic when identifying rare but crucial cases. This is a widespread challenge affecting various domains such as fraud detection, software engineering, fault diagnosis, intrusion detection, network security, social media analysis, medical diagnosis, malware detection, risk assessment, and solar panel fault and anomaly detection (Wang et al., 2019a; Thabtah et al., 2020; Yuan et al., 2023; Patnaik et al., 2023; Giray et al., 2023; Wang et al., 2024; Dhalaria and Gandotra, 2024; Guo et al., 2025; Gan et al., 2020; Zheng et al., 2025). Conventional ML algorithms typically assume a balanced dataset, so they are easily affected by the imbalance issue and tend to produce biased results favoring the majority class (Li et al., 2025a). As a result, effectively identifying minority instances in imbalanced datasets has become a key research focus (Dai et al., 2025).
Data-level methods are preprocessing techniques that modify the dataset itself to improve the performance of standard training procedures. Undersampling, oversampling, and hybrid sampling fall into this category (Buda et al., 2018). Oversampling increases the number of minority class instances by repeating them, while undersampling reduces the majority class instances to balance the classes. Hybrid sampling combines both methods (Sharief et al., 2025). Recently, several advanced undersampling, oversampling, and hybrid approaches have been introduced. The most widely used oversampling technique, the Synthetic Minority Oversampling Technique (SMOTE), generates an equal number of synthetic samples for each minority instance, which can result in class overlap (Tao et al., 2024). Recently, Simplicial SMOTE was proposed, which uses groups of nearby points to create synthetic data instead of only pairs of points (edges) as in SMOTE (Kachan et al., 2025). Isomura et al. introduced an oversampling method using large language models (LLMs) to generate more realistic and varied synthetic data for imbalanced tabular datasets (Isomura et al., 2025), though there are concerns about possible bias in the synthetic data. On the undersampling side, the Schur decomposition class-overlap undersampling method (SDCU) uses Schur matrix decomposition and global similarity to handle class overlap in imbalanced datasets (Dai et al., 2023). The Random Forest Cleaning Rule (RFCL) was also introduced to balance imbalanced data by removing overlapping majority class samples; RFCL worked well but focused only on the F1-score, which restricted its adaptability (Zhang et al., 2021). Yu et al. introduced Balanced Training and Merging (BTM) to improve the worst-performing categories in long-tailed learning (Yu et al., 2025); its limitations include a slight decrease in arithmetic accuracy in certain scenarios.
Despite the advancements, limitations in data-level techniques persist. Undersampling can lead to critical information loss, which is a major concern. Oversampling, on the other hand, may cause overfitting. Hybrid sampling can inherit the drawbacks of both methods, leading to information loss or noise sensitivity ( Wang et al. , 2025 ). Carvalho et al. explored different data resampling methods (oversampling, undersampling, and hybrid methods, including advanced ones) ( Carvalho et al. , 2025 ). They concluded that no single method worked best for all cases. In contrast to data-level methods, algorithm-level approaches modify the classification algorithm itself, bypassing issues of data modification.
CSL is one of the most popular algorithm-level methods. CSL addresses class imbalance by assigning different misclassification costs to each class, typically giving higher costs to minority class errors, with the aim of minimizing costly misclassifications (Araf et al., 2024). Tang et al. proposed a robust two-stage instance-level CSL method with the Bounded Quadratic Type Squared Error (BQTSE) loss function that showed improved classification accuracy (Tang et al., 2024), although it might struggle with extremely large and complex datasets. Cao et al. introduced the concept of deep imbalanced regression (DIR) for addressing imbalanced data in predicting the remaining useful life (RUL) (Cao et al., 2024). They proposed methods such as label and feature distribution normalization, ranking similarity optimization, and a CSL framework to improve predictions. Nevertheless, DIR faces challenges with very small datasets and requires further development of data augmentation techniques. In summary, while significant progress has been made in addressing class imbalance through data-level techniques and algorithm-level approaches, these methods still face substantial limitations: data-level techniques may result in information loss or overfitting, while cost-sensitive learning may struggle to determine appropriate misclassification costs for each class, and its effectiveness depends on the specific characteristics of the dataset. These challenges indicate that, although improvements have been made, a comprehensive solution is yet to be fully realized.
# 2.2. CL in Non-Vision Domains
Representation learning plays a vital role in improving the performance of the ML models by revealing the underlying factors that drive variations in the data ( Bengio et al. , 2013 ). CL, a prominent approach in representation learning, aims to improve feature representation by pulling similar samples closer and pushing dissimilar ones apart ( Zhao et al. , 2025 ). CL has been widely applied in various fields, including image recognition and generation, adversarial samples detection, video and graph analysis, speech recognition, natural language processing, and recommendation systems ( Hu et al. , 2024 ).
CL has already demonstrated outstanding performance in computer vision tasks ( Kottahachchi Kankanamge Don and Khalil , 2025 ; Liu et al. , 2023 ; Guo and Huang , 2025 ; Zhang et al. , 2025 ; Zhou et al. , 2025 ; Xu and Wong , 2025 ; Wang et al. , 2022 ). But in non-vision domains, specifically in tabular datasets, CL is relatively less explored. Recent efforts have attempted to bridge this gap. For example, Wu et al. ( Wu et al. , 2023 ) introduced CL-enhanced Deep Neural Network with Serial Regularization (CLDNSR) to effectively handle high-dimensional data with limited samples. However, it shows limitations when applied to imbalanced datasets. Tao et al. ( Tao et al. , 2024 ) applied SCL-TPE, which improved representation quality and classification accuracy for imbalanced tabular datasets. Despite its success, their approach is still limited by inadequate data augmentation strategies and sensitivity to noisy labels. These findings indicate that while CL has shown progress in handling tabular data, current methods often face limitations related to imbalance handling, label noise, and effective augmentation.
# 2.3. Quantum-Inspired DL
DL models face difficulties when dealing with very small datasets ( Sun et al. , 2017 ). Even with larger datasets, they struggle to effectively manage highly complex, variable data. Quantum models can be a promising candidate to address some of these limitations ( Orka et al. , 2025 ). Quantum computing is an emerging field that uses the principles of quantum mechanics and offers a potential advantage over classical computing by overcoming certain constraints ( Rieffel and Polak , 2000 ). Quantum DL (QDL) models have the potential to improve speed ( Liu et al. , 2024 ; Saggio et al. , 2021 ), parameter efficiency ( Ciliberto et al. , 2018 ), feature representation ( Havlíček et al. , 2019 ; Goto et al. , 2021 ), generalization capabilities ( Caro et al. , 2022 ), and can outperform traditional DL models by achieving higher test accuracy in certain scenarios ( Chen et al. , 2022 ). However, despite these advantages, quantum computing suffers from high cost, limited coherence times, sensitivity to environmental interference, and error correction issues ( Mandal et al. , 2025 ; Orka et al. , 2025 ; Harrow and Montanaro , 2017 ).
QI models do not directly use quantum computing hardware (Jahin et al., 2023). Instead, they incorporate principles of quantum mechanics to improve the performance and capabilities of classical DL models. Shi et al. (Shi et al., 2023) proposed Interpretable Complex-Valued Word Embedding (ICWE) and Convolutional Interpretable Complex-Valued Word Embedding (CICWE), two QI neural networks for improving binary text classification; however, these models still face limitations in feature extraction. Hong et al. (Hong et al., 2024) proposed a hybrid DL model combining convolutional neural networks (CNNs), long short-term memory (LSTM), and a QI neural network (QINN) for forecasting wind speed. Konar et al. (Konar et al., 2020) proposed a Quantum-Inspired Self-Supervised Network (QIS-Net) for automatically segmenting brain MRI images. While QI DL models have shown promising results across various applications, they still face several challenges, including handling imbalanced and complex datasets, computational complexity, and limited scalability. These limitations highlight the need for further advancements in this area.
# 2.4. GNNs for Tabular Data
GNNs, a specialized area within DL, offer improved performance and interpretability by effectively capturing and learning from graph-structured data (Tan et al., 2025). A key property of tabular data is that the order of features holds no significance. Likewise, in graph data, the sequence of nodes does not matter; changing their arrangement does not affect the outcomes produced by GNNs. This shared property makes GNNs a strong choice for tabular datasets (Villaizán-Vallelado et al., 2024). In recent years, various GNN-based models have demonstrated significant advancements in handling tabular datasets. Li et al. reviewed how GNNs have been used to analyze single-cell omics data, highlighting their success in tasks like cell type identification and gene regulation (Li et al., 2025b); limitations included high computational costs and difficulty in capturing global data structures, and the authors suggested future improvements such as better scalability and integration with foundation models.
The Multiplex Cross-Feature Interaction Network (MPCFIN) addressed feature interaction and graph connectivity challenges in tabular data using a multiplex graph structure. But it combines hand-crafted and learned structures, which may introduce redundancy or conflicting information (Ye et al., 2024). Villaizán-Vallelado et al. proposed Interaction Network Contextual Embedding (INCE). GNN-based contextual embeddings were applied to outperform existing DL methods on tabular data but suffered from scalability issues and high training time due to per-row graph construction and complex edge-weight learning ( Villaizán-Vallelado et al. , 2024 ). Lee et al. introduced an algorithm combining feature-based and similarity-based learning with GNNs and contrastive loss to improve generalization ( Lee et al. , 2024 ). However, it faced limitations in scalability, interpretability, and computational cost, some common challenges for GNNs. Collectively, these studies underscore the growing capability of GNNs to handle complex tabular structures to some extent. Yet, further innovations are required in managing tabular data class imbalance, computational cost, and dynamic augmentation, limitations that are especially critical in real-world ES.
Figure 1: The overall architecture of the proposed QCL-MixNet framework for classifying imbalanced tabular data. Initially, the training data undergoes kNN-guided sample-aware dynamic mixup, where an augmented sample is generated by interpolating an anchor sample with its nearest neighbor. This augmented data is then processed by the QCL-MixNet encoder, a hybrid model featuring quantum entanglement layers and a self-attention mechanism, to produce an intermediate representation. This representation is fed into both a classifier head and a projection head. Finally, the network is optimized using a hybrid loss function that combines Focal Variance Loss, Supervised Contrastive Loss, and Triplet Loss to learn discriminative features.
# 3. Materials and Methods
In this section, we outline the proposed methodology, which encompasses problem formulation, our novel QCL-MixNet architecture, a dynamic data augmentation strategy, and a hybrid loss function designed for robust representation learning and classification.
# 3.1. Problem Statement
Let $D = \{ ( \mathbf { x } _ { i } , y _ { i } ) \} _ { i = 1 } ^ { N }$ be a training dataset of $N$ samples, where $\mathbf { x } _ { i } \in \mathbb { R } ^ { D }$ is a $D$ -dimensional feature vector and $y _ { i } \in \{ 1 , \ldots , C \}$ is the corresponding class label from $C$ distinct classes. Our goal is to learn a mapping function $f _ { \Theta } : \mathbb { R } ^ { D } \to \{ 1 , \dots , C \}$ , parameterized by $\Theta$ , that accurately predicts the class label $y$ for an unseen feature vector $\mathbf { x }$ . This is achieved by learning an intermediate embedding function $h _ { \phi } : \mathbb { R } ^ { D } \to \mathbb { R } ^ { d }$ (where $d$ is the dimension of the embedding space, with $d \ll D$ , or $d$ can be an intermediate feature dimension) and a classifier $g _ { \psi } : \mathbb { R } ^ { d } \to \mathbb { R } ^ { C }$ , such that $f _ { \Theta } ( \mathbf { x } ) = \operatorname { a r g m a x } ( g _ { \psi } ( h _ { \phi } ( \mathbf { x } ) ) )$ . The parameters $\Theta = \{ \phi , \psi \}$ are optimized by minimizing a carefully designed loss function $\mathcal { L }$ over the training dataset $D$ . The core of our method lies in the specific architectures for $h _ { \phi }$ and $g _ { \psi }$ , the data augmentation techniques, and the composite nature of $\mathcal { L }$ , all designed to enhance feature disentanglement, representation robustness, and classification performance, particularly in scenarios with complex data distributions.
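The composition $f _ { \Theta } = \operatorname{argmax} \circ g _ { \psi } \circ h _ { \phi }$ can be sketched as follows; this is a toy numpy illustration with made-up shapes and randomly initialized parameters (and 0-indexed labels), not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, C = 16, 4, 3                          # input dim, embedding dim, classes
W_h, W_g = rng.normal(size=(D, d)), rng.normal(size=(d, C))

def h_phi(x):
    """Embedding h_phi: R^D -> R^d (linear layer + ReLU, illustrative)."""
    return np.maximum(x @ W_h, 0.0)

def g_psi(z):
    """Classifier head g_psi: R^d -> R^C (class logits)."""
    return z @ W_g

def f_theta(x):
    """Prediction rule f_Theta(x) = argmax(g_psi(h_phi(x)))."""
    return int(np.argmax(g_psi(h_phi(x))))   # label in {0, ..., C-1}
```

In the full model, $h _ { \phi }$ and $g _ { \psi }$ are the QCL-MixNet encoder and classifier head, and the parameters are learned by minimizing the hybrid loss rather than fixed at random.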
# 3.2. Model Architecture
The proposed QCL-MixNet framework, illustrated in Figure 1 , processes imbalanced tabular data through three main stages: a kNN-guided dynamic mixup for data augmentation, a quantum-informed encoder for feature representation, and a hybrid loss objective for optimization. The following subsections detail the theoretical and implementation aspects of each component.
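The kNN-guided mixup stage can be sketched as below. This is our own minimal illustration, not the authors' released code: the Beta-distributed mixing coefficient, the bias toward the anchor, and the retention of the anchor's label are all assumptions on our part.

```python
import numpy as np

def knn_mixup(X, y, k=5, alpha=0.4, rng=None):
    """Interpolate each anchor with a randomly chosen one of its k nearest neighbors."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    # pairwise squared distances; exclude self-matches on the diagonal
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]            # k nearest neighbors per row
    picks = nbrs[np.arange(n), rng.integers(0, k, size=n)]
    lam = rng.beta(alpha, alpha, size=(n, 1))
    lam = np.maximum(lam, 1 - lam)                   # stay closer to the anchor
    X_aug = lam * X + (1 - lam) * X[picks]
    return X_aug, y                                  # anchors keep their labels
```

Because neighbors are found in feature space, the interpolated points stay near the local data manifold, which is the motivation for sample-aware mixing over purely random pairing.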
# 3.2.1. Quantum-Informed Feature Disentanglement
To improve the model’s ability to learn disentangled and informative features, we introduce quantum-informed entanglement layers augmented by attention mechanisms for feature recalibration.
Mathematical Formulation of Entanglement Layers. Inspired by the transformative operations in quantum systems, we propose a Quantum Entanglement (QE) layer. This layer is not intended to simulate quantum mechanics, but rather to leverage mathematical constructs reminiscent of quantum operations for feature transformation. Let $\mathbf { x } \in \mathbb { R } ^ { d _ { i n } }$ be the input feature vector to the QE layer. The layer applies a set of learnable parameters $\boldsymbol { \theta } \in \mathbb { R } ^ { d _ { i n } }$ . The transformation is a two-stage process:
1. Projection Stage: The input features are first scaled element-wise, akin to a parameterized rotation or projection.
$$
{ \bf x } _ { \mathrm { p r o j } } = { \bf x } \odot \cos ( \theta )
$$
where $\odot$ denotes element-wise multiplication, and $\cos ( \theta )$ is applied element-wise to the parameter vector $\theta$ .
This stage selectively modulates the amplitude of each feature.
2. Entanglement-inspired Gating Stage: The projected features $\mathbf { x } _ { \mathrm { p r o j } }$ are then passed through a non-linear gating mechanism. This stage introduces interactions and dependencies across features, inspired by the concept of entanglement, where quantum states become correlated. A scalar gating value $s$ is computed based on $\mathbf { x } _ { \mathrm { p r o j } }$ and $\sin ( \theta )$ :
$$
s = \sigma \left( \mathbf { x } _ { \mathrm { p r o j } } ^ { \mathsf { T } } \sin ( \theta ) \right)
$$
where $\sigma ( \cdot )$ is the sigmoid activation function, ensuring $s \in ( 0 , 1 )$ . The final output of the QE layer $\mathbf { x } _ { \mathrm { e n t } } \in \mathbb { R } ^ { d _ { i n } }$ is then:
$$
\mathbf { x } _ { \mathrm { e n t } } = \mathbf { x } _ { \mathrm { p r o j } } \cdot s
$$
The learnable parameters $\theta$ allow the network to adaptively determine the optimal projection and feature interdependencies. The combination of cosine and sine transformations, modulated by learnable parameters, offers a rich function space for feature manipulation. The sigmoid gate allows for a soft selection or attenuation of the transformed feature set based on a collective signal derived from all features. This process aims to disentangle underlying factors of variation by creating complex, non-linear feature combinations and selectively emphasizing informative ones. Our model incorporates two such QE layers: one at the input level (acting on $\mathbb { R } ^ { D }$ ) and another at an intermediate feature level (acting on $\mathbb { R } ^ { 6 4 }$ ).
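The two-stage transformation above can be sketched directly in PyTorch. This is a minimal illustration of Eqs. (1)–(3); the module and attribute names are ours and are not taken from the released code.

```python
import torch
import torch.nn as nn

class QELayer(nn.Module):
    """Quantum-inspired entanglement layer: cosine projection + sine-gated scaling."""

    def __init__(self, dim: int):
        super().__init__()
        # One learnable angle theta per input feature.
        self.theta = nn.Parameter(torch.randn(dim) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Projection stage: element-wise amplitude modulation, x_proj = x ⊙ cos(theta).
        x_proj = x * torch.cos(self.theta)
        # Gating stage: a scalar gate per sample, s = sigmoid(x_proj^T sin(theta)).
        s = torch.sigmoid(x_proj @ torch.sin(self.theta))   # shape: (batch,)
        # Final output x_ent = x_proj * s, broadcast over the feature dimension.
        return x_proj * s.unsqueeze(-1)
```

Because the gate $s$ is computed from all features jointly, every output coordinate depends on every input coordinate, which is the "entanglement-inspired" cross-feature coupling described above.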
Attention Mechanisms for Feature Recalibration. Following the initial QE layer, we apply a self-attention mechanism to further refine and recalibrate feature representations. Self-attention allows the model to weigh the importance of different features within a sample dynamically, capturing global dependencies. Given the output $\mathbf{x}$ from the first QE layer (or, more generally, an input feature map of dimension $d_{attn}$), we treat it as a sequence of length 1 to apply attention within the sample's features (i.e., channel attention if features are considered channels). The specific implementation uses single-head attention, where the Query (Q), Key (K), and Value (V) matrices are derived from the same input $\mathbf{x}$. Let $\mathbf{x} \in \mathbb{R}^{d_{attn}}$ be the input to the attention layer. It is first unsqueezed to $\mathbf{x}' \in \mathbb{R}^{1 \times d_{attn}}$ to match the expected batch-first input format for sequence length 1. The Query, Key, and Value are computed as:
$$
\mathbf { Q } = \mathbf { x } ^ { \prime } \mathbf { W } _ { Q } , \quad \mathbf { K } = \mathbf { x } ^ { \prime } \mathbf { W } _ { K } , \quad \mathbf { V } = \mathbf { x } ^ { \prime } \mathbf { W } _ { V }
$$
where $\mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V \in \mathbb{R}^{d_{attn} \times d_k}$ are learnable weight matrices (for a single head, $d_k = d_{attn}$). The attention output is then:
$$
\mathrm { A t t e n t i o n } ( \mathbf { Q } , \mathbf { K } , \mathbf { V } ) = \mathrm { s o f t m a x } \left( { \frac { \mathbf { Q } \mathbf { K } ^ { \top } } { \sqrt { d _ { k } } } } \right) \mathbf { V }
$$
The output of the attention mechanism, $\mathbf { x } _ { \mathrm { a t t n } }$ , is added back to the input $\mathbf { x }$ via a residual connection:
$$
{ \bf x } _ { \mathrm { r e c a l i b r a t e d } } = { \bf x } + { \bf x } _ { \mathrm { a t t n } }
$$
This residual connection facilitates gradient flow and allows the model to adaptively decide how much recalibration is needed. By focusing on the most salient features relative to each other within a given sample, the attention mechanism complements the QE layer, improving the model’s ability to extract discriminative information. In our architecture, this attention layer uses $d _ { a t t n } = D$ (input feature dimension) and a single attention head.
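The recalibration step amounts to single-head self-attention over a length-1 sequence followed by a residual add. A minimal sketch using PyTorch's built-in `nn.MultiheadAttention` (the wrapper class name is ours):

```python
import torch
import torch.nn as nn

class FeatureRecalibration(nn.Module):
    """Single-head self-attention over a length-1 sequence with a residual connection."""

    def __init__(self, dim: int):
        super().__init__()
        # One head, so d_k = d_attn = dim; batch_first expects (batch, seq, dim).
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=1, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Treat each feature vector as a sequence of length 1: (batch, 1, dim).
        xs = x.unsqueeze(1)
        attn_out, _ = self.attn(xs, xs, xs)   # Q = K = V = x'
        # Residual connection: x_recalibrated = x + x_attn.
        return x + attn_out.squeeze(1)
```

With sequence length 1 the softmax over keys is trivially 1, so the layer reduces to a learned linear recalibration of the value projection added back to the input, which is what makes the residual path important here.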
# 3.2.2. kNN-Guided Sample-Aware Dynamic Mixup
Data augmentation is crucial for improving generalization. We apply a sample-aware dynamic mixup strategy, termed kNN-Guided Dynamic Mixup, which generates synthetic samples by interpolating between an anchor sample and one of its nearest neighbors in the feature space. This approach aims to create more meaningful and challenging training examples compared to the standard mixup that interpolates random pairs.
Definition 1 (kNN-Guided Dynamic Mixup). Let $(\mathbf{x}_i, y_i)$ be an anchor sample from a mini-batch $B$.
1. Neighbor Selection: For each $\mathbf { x } _ { i } \in B$ , we identify its $k$ nearest neighbors $\{ \mathbf { x } _ { i , j } ^ { N N } \} _ { j = 1 } ^ { k }$ from $\boldsymbol { B }$ (excluding $\mathbf { x } _ { i }$ itself) based on Euclidean distance in the current feature space $h _ { \phi } ( \mathbf { x } )$ . One neighbor, $\mathbf { x } _ { i } ^ { N N }$ , is randomly selected from this set. Let its corresponding label be $y _ { i } ^ { N N }$ .
2. Interpolation Parameter Sampling: An interpolation coefficient $\lambda$ is sampled from a Beta distribution, $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$. To ensure the anchor sample retains a dominant influence, $\lambda$ is adjusted as $\lambda' = \max(\lambda, 1 - \lambda)$. This biases $\lambda'$ towards values $\ge 0.5$.
3. Feature Interpolation: The mixed feature vector $\tilde { \mathbf { x } } _ { i }$ is generated as:
$$
\tilde { \mathbf { x } } _ { i } = \lambda ^ { \prime } \mathbf { x } _ { i } + ( 1 - \lambda ^ { \prime } ) \mathbf { x } _ { i } ^ { N N }
$$
4. Label Interpolation: Labels are mixed similarly, assuming one-hot encoding ${ \bf y } ^ { O H }$ for labels:
$$
\tilde { \mathbf { y } } _ { i } ^ { O H } = \lambda ^ { \prime } \mathbf { y } _ { i } ^ { O H } + ( 1 - \lambda ^ { \prime } ) ( \mathbf { y } _ { i } ^ { N N } ) ^ { O H }
$$
The hyperparameter $\alpha$ controls the strength of interpolation, and $k = 5$ in our case defines the neighborhood size.
Rationale: Interpolating with kNNs encourages local linearity and smoothness of the decision boundary in denser regions of the feature manifold. By construction, $\mathbf{x}_i^{NN}$ is semantically similar to $\mathbf{x}_i$, making $\tilde{\mathbf{x}}_i$ a plausible variation. This contrasts with random-pairing mixup, which might interpolate between semantically distant samples, potentially generating less realistic examples. In our training, we use the mixed features $\tilde{\mathbf{x}}_i$, but for the classification component of our loss, we associate them with the original label $y_i$. This strategy regularizes the model to be robust to perturbations towards its neighbors while maintaining the original class identity, effectively encouraging enlargement of the decision region for class $y_i$ to include these sensible interpolations.
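The four steps of Definition 1 can be sketched as follows. For simplicity, this illustration selects neighbours in the raw input space, whereas the method performs the search in the learned feature space $h_\phi(\mathbf{x})$; the function name is ours.

```python
import torch

def knn_guided_mixup(x: torch.Tensor, k: int = 5, alpha: float = 0.4) -> torch.Tensor:
    """kNN-guided dynamic mixup within a mini-batch (feature interpolation only)."""
    n = x.size(0)
    # Pairwise Euclidean distances; mask the diagonal so a sample never picks itself.
    dist = torch.cdist(x, x)
    dist.fill_diagonal_(float("inf"))
    # Indices of the k nearest neighbours of each anchor.
    knn_idx = dist.topk(k, largest=False).indices            # shape: (n, k)
    # Pick one neighbour per anchor uniformly at random.
    pick = torch.randint(0, k, (n,))
    nn_idx = knn_idx[torch.arange(n), pick]
    # lambda ~ Beta(alpha, alpha), then lambda' = max(lambda, 1 - lambda)
    # so the anchor stays dominant.
    lam = torch.distributions.Beta(alpha, alpha).sample((n, 1))
    lam = torch.maximum(lam, 1 - lam)
    # Feature interpolation: x_tilde = lambda' * x_i + (1 - lambda') * x_i^NN.
    return lam * x + (1 - lam) * x[nn_idx]
```

Since $\lambda' \ge 0.5$, every mixed sample lies at least as close to its anchor as to the selected neighbour, consistent with training the classification loss against the original label $y_i$.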
# 3.2.3. Hybrid Contrastive Loss with Variance Regularization
To learn discriminative and robust embeddings, we propose a hybrid loss function that integrates focal loss for classification with supervised contrastive loss, triplet loss, and an explicit variance regularization term based on learnable class centroids. Let $h_{\phi}(\tilde{\mathbf{x}})$ be the embedding (output of the projection head) for a (potentially augmented) input $\tilde{\mathbf{x}}$, and $g_{\psi}(h_{\phi}'(\tilde{\mathbf{x}}))$ be the raw logits from the classifier, where $h_{\phi}'$ is the representation before the projection head. The original label is $y$.
Focal Variance Loss. The Focal Variance Loss (FVL) component addresses class imbalance and hard example mining in classification, while simultaneously promoting intra-class compactness and inter-class separability in the embedding space.
Definition 2 (Focal Variance Loss). The FVL for a sample $( \tilde { \mathbf { x } } , y )$ with embedding ${ \bf e } = h _ { \phi } ( \tilde { \bf x } )$ and logits ${ \bf z } = g _ { \psi } ( h _ { \phi } ^ { \prime } ( \tilde { \bf x } ) )$ is:
$$
\mathcal { L } _ { F V L } ( \mathbf { z } , y , \mathbf { e } ) = \mathcal { L } _ { F o c a l } ( \mathbf { z } , y ) + \beta _ { 1 } \mathcal { L } _ { i n t r a } ( \mathbf { e } , y ) + \beta _ { 2 } \mathcal { L } _ { i n t e r } ( y )
$$
where $\beta_1, \beta_2$ are weighting hyperparameters. In our implementation, $\beta_1 = 0.8$ and $\beta_2$ is implicitly $1$, indicating $\mathcal{L}_{inter}$ is unweighted.
1. Focal Loss Component $( \mathcal { L } _ { \mathrm { F o c a l } } )$ : This addresses class imbalance by down-weighting the loss assigned to well-classified examples. Given the probability $p _ { t }$ for the true class $y$ (derived from logits $\mathbf { z }$ via softmax, $p _ { t } = \operatorname { s o f t m a x } ( \mathbf { z } ) _ { y } )$ , the Focal Loss is:
$$
\mathcal { L } _ { \mathrm { F o c a l } } ( \mathbf { z } , y ) = - ( 1 - p _ { t } ) ^ { \gamma } \log ( p _ { t } ) = ( 1 - p _ { t } ) ^ { \gamma } \log \frac { 1 } { p _ { t } }
$$
where $\gamma \geq 0$ is the focusing parameter. We use $\gamma = 3 . 0$ in our experiments.
2. Class Centroid Learning and Intra-Class Compactness $(\mathcal{L}_{\mathrm{intra}})$: We maintain learnable class centroids $\mathbf{c}_j \in \mathbb{R}^{d_e}$ for each class $j \in \{1, \ldots, C\}$, where $d_e$ is the dimension of embeddings from the projection head ($d_e = 8$ in our case). These centroids are parameters of the FVL module. The intra-class compactness loss penalizes the distance of an embedding $\mathbf{e}$ to its corresponding class centroid $\mathbf{c}_y$:
$$
\mathcal { L } _ { \mathrm { i n t r a } } ( \mathbf { e } , y ) = | | \mathbf { e } - \mathbf { c } _ { y } | | _ { 2 } ^ { 2 }
$$
This term encourages embeddings of the same class to cluster tightly around their respective centroids. In the provided code, $\beta_1$ corresponds to `beta` in `FocalVarianceLoss`.
3. Inter-Class Separability $( \mathcal { L } _ { \mathrm { i n t e r } } )$ : To ensure centroids of different classes are well-separated, we introduce a penalty based on pairwise distances between centroids of classes present in the current mini-batch. Let $ { C _ { \mathrm { b a t c h } } }$ be the set of unique classes in the current batch.
$$
\mathcal { L } _ { \mathrm { i n t e r } } ( y ) = - \log \sigma _ { s } \left( \frac { 1 } { | C _ { \mathrm { b a t c h } } | ( | C _ { \mathrm { b a t c h } } | - 1 ) } \sum _ { j \in C _ { \mathrm { b a t c h } } } \sum _ { k \in C _ { \mathrm { b a t c h } } , k \neq j } | | \mathbf { c } _ { j } - \mathbf { c } _ { k } | | _ { 2 } \right)
$$
where $\sigma_s(\cdot)$ is the sigmoid function; since $\log \sigma_s(x) = -\log(1 + e^{-x})$ increases with $x$, the negated term penalizes low centroid separation. This loss term therefore encourages maximization of the average inter-centroid distance.
Remark 1. The use of the log sigmoid in our implementation penalizes low centroid separation. In practice, $\beta = 0 . 8$ controls the strength of intra-class compactness. The inter-class term is unweighted and added directly. The centroids $\{ \mathbf { c } _ { j } \}$ are learnable and updated jointly with the network.
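Definition 2 can be put together as a single module. This is a sketch: the hyperparameter values follow the text ($\gamma = 3.0$, $\beta_1 = 0.8$, unweighted inter-class term), while the class and variable names are our illustration rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalVarianceLoss(nn.Module):
    """Focal loss + intra-class compactness + inter-class separation (FVL)."""

    def __init__(self, num_classes: int, embed_dim: int,
                 gamma: float = 3.0, beta: float = 0.8):
        super().__init__()
        # Learnable class centroids c_j, updated jointly with the network.
        self.centroids = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.gamma, self.beta = gamma, beta

    def forward(self, logits, y, emb):
        # Focal term: -(1 - p_t)^gamma * log(p_t), with p_t = softmax(z)_y.
        log_p = F.log_softmax(logits, dim=-1)
        log_pt = log_p.gather(1, y.unsqueeze(1)).squeeze(1)
        focal = (-(1 - log_pt.exp()) ** self.gamma * log_pt).mean()
        # Intra-class compactness: ||e - c_y||_2^2.
        intra = ((emb - self.centroids[y]) ** 2).sum(dim=1).mean()
        # Inter-class separation: -log sigmoid(mean pairwise centroid distance)
        # over the classes present in the mini-batch.
        classes = y.unique()
        inter = logits.new_zeros(())
        if classes.numel() > 1:
            c = self.centroids[classes]
            pdist = torch.cdist(c, c)                       # diagonal is zero
            mean_sep = pdist.sum() / (classes.numel() * (classes.numel() - 1))
            inter = -F.logsigmoid(mean_sep)
        return focal + self.beta * intra + inter
```

Note that `pdist.sum()` over the full matrix equals the double sum over ordered pairs $j \neq k$ (the diagonal is zero), so dividing by $|C_{\mathrm{batch}}|(|C_{\mathrm{batch}}|-1)$ reproduces the average in the $\mathcal{L}_{\mathrm{inter}}$ formula.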
The complete FVL objective thus integrates all three components, as given in Definition 2.
Integration with Supervised Contrastive and Triplet Loss. The FVL is combined with established metric learning losses to further structure the embedding space. This forms our Hybrid Loss.
Definition 3 (Hybrid Loss). The total hybrid loss $\mathcal { L } _ { H y b r i d }$ for a mini-batch is:
$$
\mathcal { L } _ { H y b r i d } = \alpha _ { l o s s } \mathcal { L } _ { F V L } + ( 1 - \alpha _ { l o s s } ) ( \mathcal { L } _ { S u p C o n } + \mathcal { L } _ { T r i p l e t } )
$$
where $\alpha_{loss}$ is a weighting factor ($\alpha_{loss} = 0.5$ in this study).
1. Supervised Contrastive Loss $(\mathcal{L}_{\mathrm{SupCon}})$: This temperature-scaled loss (Khosla et al., 2020) encourages embeddings of samples from the same class to lie closer together in the representation space, while pushing apart embeddings from different classes. Given an anchor embedding $\mathbf{e}_i \in \mathbb{R}^{d_e}$ with class label $y_i$, the supervised contrastive loss is defined as:
$$
\mathcal{L}_{\mathrm{SupCon}} = \sum_{i=1}^{N} \frac{-1}{|\mathcal{P}(i)|} \sum_{p \in \mathcal{P}(i)} \log \frac{\exp\left(\mathrm{sim}(\mathbf{e}_i, \mathbf{e}_p)/\tau\right)}{\sum_{a \in \mathcal{A}(i)} \exp\left(\mathrm{sim}(\mathbf{e}_i, \mathbf{e}_a)/\tau\right)}
$$
where $\mathcal{P}(i) = \{ j \neq i \mid y_j = y_i \}$ denotes the set of positive indices for anchor $i$, and $\mathcal{A}(i) = \{ j \neq i \}$ is the set of all other indices in the batch. The similarity function $\mathrm{sim}(\cdot, \cdot)$ is implemented as cosine similarity, and $\tau > 0$ is a temperature hyperparameter. We use $\tau = 0.2$ in our experiments.
2. Triplet Loss $( \mathcal { L } _ { \mathrm { T r i p l e t } } )$ : Triplet loss encourages an embedding space where examples of the same class are pulled closer together while pushing apart examples of different classes. Specifically, for an anchor embedding $\mathbf { e } _ { a }$ , a positive sample $\mathbf { e } _ { p }$ (same class), and a negative sample $\mathbf { e } _ { n }$ (different class), the loss enforces a margin $m > 0$ between intra-class and inter-class distances:
$$
\mathcal { L } _ { \mathrm { T r i p l e t } } = \sum _ { ( \mathbf { e } _ { a } , \mathbf { e } _ { p } , \mathbf { e } _ { n } ) } \left[ \| \mathbf { e } _ { a } - \mathbf { e } _ { p } \| _ { 2 } ^ { 2 } - \| \mathbf { e } _ { a } - \mathbf { e } _ { n } \| _ { 2 } ^ { 2 } + m \right] _ { + } ,
$$
where $\left[ \cdot \right] _ { + } = \operatorname* { m a x } ( 0 , \cdot )$ denotes the hinge function. To improve convergence and avoid training on trivial triplets, we apply adaptive triplet mining using a Multi-Similarity Miner ( Wang et al. , 2019b ). This strategy dynamically selects informative (hard or semi-hard) triplets from the mini-batch, based on the relative similarity of samples. In our experiments, we use a margin of $m = 0 . 5$ .
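Both metric losses can be sketched in a few lines. The triplet variant below uses simple batch-hard mining as a stand-in for the Multi-Similarity Miner (Wang et al., 2019b) used in our experiments; function names and the mining strategy substitution are ours.

```python
import torch
import torch.nn.functional as F

def supcon_loss(emb: torch.Tensor, y: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Supervised contrastive loss (Khosla et al., 2020): cosine similarity with
    temperature tau; anchors with no positive in the batch are skipped."""
    emb = F.normalize(emb, dim=1)                    # cosine similarity via dot product
    sim = emb @ emb.t() / tau
    self_mask = torch.eye(len(y), dtype=torch.bool)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))  # A(i) excludes the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos_mask.sum(dim=1)
    valid = n_pos > 0
    mean_log_prob = log_prob.masked_fill(~pos_mask, 0).sum(dim=1)[valid] / n_pos[valid]
    return -mean_log_prob.mean()

def batch_hard_triplet_loss(emb: torch.Tensor, y: torch.Tensor,
                            margin: float = 0.5) -> torch.Tensor:
    """Triplet loss with batch-hard mining: for each anchor, the farthest
    same-class sample and the closest other-class sample."""
    dist = torch.cdist(emb, emb) ** 2                # squared Euclidean distances
    same = y.unsqueeze(0) == y.unsqueeze(1)
    eye = torch.eye(len(y), dtype=torch.bool)
    pos_mask = same & ~eye
    hardest_pos = dist.masked_fill(~pos_mask, 0).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    valid = pos_mask.any(dim=1) & (~same).any(dim=1)
    # Hinge: [d(a, p) - d(a, n) + m]_+.
    return torch.relu(hardest_pos - hardest_neg + margin)[valid].mean()
```

Batch-hard mining serves the same purpose described above (avoiding trivial triplets), though the Multi-Similarity Miner selects pairs by relative similarity rather than strict hardest-in-batch.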
This hybrid loss structure capitalizes on the complementary strengths of each component: FVL for robust classification and centroid-based regularization, SupCon for global structure in the embedding space by contrasting multiple positives and negatives, and Triplet loss for fine-grained separation using specific anchor-positive-negative relationships.
# 3.2.4. Projection Head for Robust Embeddings
Following standard practice in CL frameworks (Chen et al., 2020), we implement a projection head $p_{\eta}: \mathbb{R}^{d'} \to \mathbb{R}^{d_e}$ that maps intermediate representations to a space optimized for metric learning. Specifically, we transform the output of the second fully connected layer (fc2), where $d' = 32$, to an embedding dimension $d_e = 8$ used for supervised contrastive and variance-based objectives. The projection head is implemented as a two-layer MLP:
$$
\mathbb{R}^{32} \xrightarrow{\ \mathrm{Linear}(32 \to 16)\ } \mathbb{R}^{16} \xrightarrow{\ \mathrm{BatchNorm},\ \mathrm{ReLU}\ } \mathbb{R}^{16} \xrightarrow{\ \mathrm{Linear}(16 \to 8)\ } \mathbb{R}^{8}
$$
This component is trained jointly with the encoder $h _ { \phi }$ , but its output is used only for contrastive and regularization losses. The final classifier $g _ { \psi }$ (i.e., fc3) operates directly on the 32-dimensional features from $h _ { \phi }$ , without the projection head, thus decoupling classification from CL.
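The head above is a direct `nn.Sequential` transcription of the MLP just described (the variable name is ours):

```python
import torch
import torch.nn as nn

# Projection head p_eta: R^32 -> R^8, used only by the metric-learning losses.
projection_head = nn.Sequential(
    nn.Linear(32, 16),    # Linear(32 -> 16)
    nn.BatchNorm1d(16),   # BatchNorm
    nn.ReLU(),            # ReLU
    nn.Linear(16, 8),     # Linear(16 -> 8)
)

# Embeddings for SupCon / triplet / variance terms; the classifier instead
# consumes the raw 32-dimensional features.
z = projection_head(torch.randn(4, 32))
```

Keeping the classifier on the 32-dimensional features while the metric losses act on `z` is what decouples classification from contrastive geometry.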
# 3.2.5. Overall Training Procedure
Algorithm 1 presents a pseudocode that summarizes the complete end-to-end training procedure. During each training epoch, the model iterates over mini-batches, where we first apply our kNN-guided sample-aware dynamic mixup strategy to generate meaningful augmented samples. These augmented inputs are then passed through the core architecture, yielding two decoupled outputs: the final classification logits $\hat { y }$ for prediction, and low-dimensional embeddings $z$ from a dedicated projection head, used exclusively for metric learning. Importantly, the hybrid loss function is computed using the augmented representations but is supervised by the original, unmixed labels $y$ . This design encourages the model to be robust against local perturbations in feature space while preserving the semantic identity of the anchor samples. Finally, model parameters are updated via backpropagation, and the best-performing checkpoint is preserved based on its macro-F1 score on a held-out validation set.
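The per-batch flow described above can be sketched as a compact loop. Everything here is a placeholder scaffold, not the authors' code: `augment` stands in for kNN-guided mixup, and the loss is cross-entropy plus a small L2 penalty on the projected embeddings rather than the full hybrid objective. The key point preserved is that the original labels $y$ supervise the loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_epoch(encoder, classifier, head, loader, optimizer, augment=lambda x: x):
    """One epoch of the decoupled two-branch training loop (sketch)."""
    last = None
    for x, y in loader:
        feats = encoder(augment(x))      # h'_phi(x): shared representation
        logits = classifier(feats)       # classification branch (fc3)
        z = head(feats)                  # projection branch, metric losses only
        # Placeholder for L_Hybrid; supervised by the ORIGINAL labels y.
        loss = F.cross_entropy(logits, y) + 0.1 * z.pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        last = loss.item()
    return last
```

Checkpoint selection by validation macro-F1, as stated above, would wrap this function in an outer epoch loop.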
# 3.3. Theoretical Analysis
The design of our QCL-MixNet incorporates several components, each contributing to its overall learning capability and robustness.
# 3.3.1. Expressiveness of QE Layers
The QE layers incorporate sinusoidal projections and sigmoid-based gating (Eq. 1 – 3 ), yielding a non-linear transformation of the input space:
$$
\mathbf { x } _ { \mathrm { e n t } } = \sigma \left( \mathbf { x } ^ { \top } \sin ( \pmb { \theta } ) \right) \cdot ( \mathbf { x } \odot \cos ( \pmb { \theta } ) )
$$
where $\pmb { \theta } \in \mathbb { R } ^ { d }$ is a learnable parameter vector. This structure introduces both global feature interactions (via the dot product) and localized modulations (via the cosine-weighted projection).
Proposition 1 (Expressiveness of QE Layer Composition). Let $f _ { Q E } ( \cdot ; \pmb { \theta } )$ be the transformation induced by a QE layer. Then, the function class
$$
\mathcal { F } _ { Q E } = \left\{ f ( \mathbf { x } ) = g \circ f _ { Q E } ( \mathbf { x } ) \ | \ g \in \mathcal { F } _ { M L P } \right\}
$$
is a universal approximator over compact subsets of $\mathbb { R } ^ { d }$ , provided $g$ is a standard feed-forward neural network with non-linear activation.
Proof 1 (Sketch). The QE transformation is continuous and differentiable in $\mathbf{x}$ and $\boldsymbol{\theta}$. Since it preserves the input dimensionality and introduces non-linearity, it acts as a learnable basis transformation. When composed with a fully connected MLP (which is a universal approximator), the overall function class remains dense in the space of continuous functions on compact domains (by the closure properties of universal approximators).
Thus, QE layers contribute to the expressiveness of the network in a structured way, introducing spatially adaptive gates and sinusoidal modulation, which can improve data-fitting capacity while maintaining compact parameterization.
# 3.3.2. Benefits of kNN-Guided Mixup
Standard mixup ( Zhang et al. , 2018 ) improves generalization by encouraging linear behavior between random pairs of training samples. However, such random interpolations may traverse regions far from the true data manifold, especially in class-imbalanced or multi-modal distributions. Our kNN-guided mixup (Eq. 7 ) refines this by restricting interpolation partners to semantically similar neighbors, thereby keeping synthetic samples closer to high-density regions of the data space.
Proposition 2 (Manifold-Aware Regularization). Let $\mathcal{M} \subset \mathbb{R}^{d}$ be the data manifold. Compared to uniform mixup, kNN-guided mixup is more likely to generate samples $\tilde{\mathbf{x}} = \lambda \mathbf{x}_i + (1 - \lambda) \mathbf{x}_j$ such that $\tilde{\mathbf{x}} \in \mathcal{M} + \epsilon$, for small $\epsilon > 0$. This proximity to $\mathcal{M}$ leads to stronger regularization and potentially tighter generalization bounds.
Intuition. By sampling neighbors $\mathbf{x}_j \in \mathcal{N}_k(\mathbf{x}_i)$ based on Euclidean proximity in the learned feature space, the interpolation respects local structure. In contrast, random mixup may interpolate between semantically disjoint classes, generating unrealistic data that can harm decision boundaries.
# 3.3.3. Optimization Landscape with Hybrid Loss
The hybrid loss function (Eq. 14 ) combines focal reweighting with metric-based representation learning and variance regularization. Each component shapes the loss landscape differently: (i) The focal term amplifies the gradient contribution of hard examples, counteracting class imbalance and encouraging escape from flat or poor local minima. (ii) The contrastive and triplet terms act on the embedding space, enforcing angular and margin-based class separation. (iii) The variance regularization terms induce intra-class compactness and inter-class repulsion, stabilizing feature distributions.
Proposition 3 (Landscape Smoothing via Hybrid Loss). Let $\mathcal { L } _ { H y b r i d } = \alpha \mathcal { L } _ { F o c a l + V a r } + ( 1 - \alpha ) ( \mathcal { L } _ { S u p C o n } + \mathcal { L } _ { T r i p l e t } )$ . Then, under mild smoothness assumptions on the encoder and projection head, $\mathcal { L } _ { H y b r i d }$ is locally Lipschitz and its gradient field encourages inter-class separation while preserving intra-class smoothness.
Proof 2 (Sketch). (i) $\mathcal{L}_{Focal}$ is smooth away from $p_t = 1$; its gradients are steep near misclassified samples. (ii) $\mathcal{L}_{SupCon}$ is differentiable and has bounded gradients due to the softmax denominator. (iii) $\mathcal{L}_{Triplet}$ is piecewise linear with subgradients due to the hinge. (iv) Variance losses are quadratic in embeddings, hence smooth and convex. Therefore, the overall hybrid objective is piecewise smooth and exhibits gradient alignment toward class-separating embeddings, regularized by compactness constraints.
# 3.3.4. Role of Decoupled Representations
To reconcile the objectives of classification and contrastive representation learning, we use a projection head $p_{\eta}$ to decouple the output of the encoder $h_{\phi}'$ from the embedding space used by the contrastive and regularization losses. Specifically, the encoder produces $h_{\phi}'(\mathbf{x}) \in \mathbb{R}^{d'}$, which is used by the classifier $g_{\psi}$, while the projection head maps to $p_{\eta}(h_{\phi}'(\mathbf{x})) \in \mathbb{R}^{d_e}$ for contrastive loss computation.
Proposition 4 (Representation Preservation and Task Decoupling). Let $\mathcal{L}_{total} = \mathcal{L}_{classification} + \mathcal{L}_{contrastive}$. Applying the contrastive loss directly on $h_{\phi}'(\mathbf{x})$ can suppress dimensions useful for classification. The use of a non-linear projection $p_{\eta}$ enables $h_{\phi}'$ to learn richer, task-relevant features while allowing contrastive alignment to occur in a separate, dedicated space.
Proof 3 (Sketch). As shown empirically by ( Chen et al. , 2020 ), projecting features into a lower-dimensional space before applying contrastive objectives improves downstream classification. This is because the encoder is freed from learning features solely shaped by contrastive geometry. The projection head learns a task-specific transformation optimized for metric learning, while the encoder focuses on preserving discriminative information for the main task.
# Algorithm 1 QCL-MixNet for imbalanced tabular data
Table 1 Details of 7 binary and 11 multi-class imbalanced tabular datasets.
1. https://archive.ics.uci.edu/ml/datasets/Ecoli
2. https://archive.ics.uci.edu/dataset/80/optical+recognition+of+handwritten+digits
3. https://archive.ics.uci.edu/dataset/146/statlog+landsat+satellite
4. https://archive.ics.uci.edu/ml/datasets/pen-based+recognition+of+handwritten+digits
5. https://archive.ics.uci.edu/ml/datasets/abalone
6. https://archive.ics.uci.edu/ml/datasets/isolet
7. https://archive.ics.uci.edu/ml/datasets/arrhythmia
8. https://www.kaggle.com/datasets/vinven7/comprehensive-database-of-minerals/data
9. https://www.openml.org/d/54
10. https://www.openml.org/d/182
11. https://www.openml.org/d/1478
12. https://www.openml.org/d/40691
13. https://www.openml.org/d/10
14. https://www.openml.org/d/1493
15. https://www.openml.org/d/11
16. https://www.openml.org/d/40498
17. https://www.openml.org/d/6
18. https://www.openml.org/d/41
# 4. Experiments
# 4.1. Datasets
To thoroughly evaluate the effectiveness of our proposed framework, we benchmarked it on 18 publicly available imbalanced datasets from the UCI Machine Learning Repository (Asuncion et al., 2007), OpenML (van Rijn et al., 2013), and Kaggle, comprising both binary and multi-class classification tasks. Table 1 summarizes the detailed characteristics of these datasets, including the number of samples, features, classes, class imbalance ratio, and their sources. For binary classification, we selected seven datasets: ecoli, optical_digits, satimage, pen_digits, abalone, isolet and arrhythmia. These datasets span diverse domains and present varying degrees of class imbalance (ranging from 8.71 to 17.20), which enables a rigorous evaluation of model robustness under challenging imbalance conditions. For multi-class classification, we used 11 datasets, including minerals, vehicle, satimage, har, wine-quality-red, lymph, one-hundred-plants-texture, balance-scale, wine-quality-white, letter and glass. These datasets cover a wide range of class counts (from 3 to 100) and imbalance ratios (up to 440). Such diversity allows for evaluating the generalization capabilities of the proposed model in complex, real-world settings.
# 4.2. Implementation
# 4.2.1. Training details and reproducibility measures
To ensure fair comparisons and reproducibility across all models, we used a consistent 80:20 stratified train-test split with a fixed random seed (42) and a batch size of 64. Preprocessing involved StandardScaler for feature normalization (zero mean and unit variance) and LabelEncoder for encoding class labels. We implemented GridSearchCV with 3-fold cross-validation for hyperparameter tuning in ML models. All deep and QI models were trained for 100 epochs using the AdamW optimizer with an initial learning rate of $1 \times 10^{-3}$ and weight decay of $1 \times 10^{-5}$. For QCL-based models, a one-cycle learning rate policy scheduler was applied with a maximum learning rate of $1 \times 10^{-2}$ to stabilize training. GNN models dynamically constructed k-nearest neighbor graphs ($k = 5$) using torch-cluster during both training and evaluation, which enables GNNs to model local feature interactions in the non-explicit graph structures typical of tabular data. Macro F1 score was computed after each epoch on the held-out test set, and the best-performing model checkpoint was retained for final evaluation.
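The optimizer and schedule configuration described above maps directly onto PyTorch built-ins. A minimal sketch, where `model` and the epoch/step counts are placeholders:

```python
import torch

# AdamW with lr = 1e-3 and weight decay = 1e-5, as stated in the text.
model = torch.nn.Linear(10, 2)   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5)

# One-cycle policy peaking at max_lr = 1e-2; epochs/steps_per_epoch must match
# the actual training loop (placeholders here).
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-2, epochs=100, steps_per_epoch=50)
```

`scheduler.step()` is called once per optimizer step so the learning rate ramps up to the peak and anneals back down over the full run.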
# 4.2.2. Hardware Setup
All experiments were conducted on a system equipped with an Intel(R) Xeon(R) CPU (4 vCPUs @ 2.0 GHz, 30 GB RAM) and dual NVIDIA T4 GPUs (16 GB VRAM, 2560 CUDA cores each), enabling efficient parallel training for deep and graph-based models. The implementation leveraged PyTorch 2.2.0 (Paszke, 2019) for DL, scikit-learn 1.4.2 (Pedregosa et al., 2011) for classical ML models, and PyTorch Geometric 2.6.1 (Fey and Lenssen, 2019) in combination with Torch Cluster 1.6.3 for implementing graph-based neural architectures. All code was developed and executed using Python 3.10.12 in a Linux-based environment. Visualizations were created using Matplotlib 3.8.4 (Hunter, 2007) and Seaborn 0.13.2 (Waskom, 2021).
# 4.3. Benchmark Protocol
# 4.3.1. Evaluation metrics
In order to evaluate the performance of our models on both binary and multi-class classification tasks, we implemented four standard evaluation metrics: Accuracy, Macro Average Precision (maP), Macro Average Recall (maR), and Macro Average F1-score (maF1). These metrics are especially useful in uneven class distribution scenarios, as the macro-averaged scores assign equal weight to each class irrespective of frequency and thus provide a balanced evaluation.
Accuracy is the proportion of correctly classified instances among the total number of samples.
$$
{ \mathrm { A c c u r a c y } } = { \frac { T P + T N } { T P + T N + F P + F N } }
$$
maP computes precision independently for each class and then takes the unweighted mean.
$$
{ \mathrm { m a P } } = { \frac { 1 } { C } } \sum _ { i = 1 } ^ { C } { \frac { T P _ { i } } { T P _ { i } + F P _ { i } } }
$$
maR calculates recall for each class and averages them equally.
$$
\mathrm { m a R } = \frac { 1 } { C } \sum _ { i = 1 } ^ { C } \frac { T P _ { i } } { T P _ { i } + F N _ { i } }
$$
maF1 is the harmonic mean of macro precision and macro recall. This metric combines precision and recall into a single value, treating both equally. By considering both metrics, it provides a more balanced evaluation of a model’s performance.
$$
\mathrm { m a F 1 } = { \frac { 2 \times \mathrm { m a P } \times \mathrm { m a R } } { \mathrm { m a P } + \mathrm { m a R } } }
$$
Here, $C$ is the number of classes, and $T P _ { i } , F P _ { i } , F N _ { i }$ represent the true positives, false positives, and false negatives for class $i$ , respectively.
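The four metrics can be computed directly from the definitions above. Note that this maF1 (the harmonic mean of maP and maR) differs from the common convention of averaging per-class F1 scores; the sketch below follows the formulas as stated, and the function name is ours.

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Accuracy, maP, maR and maF1 from per-class TP/FP/FN counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float((y_true == y_pred).mean())
    prec, rec = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        # Unweighted per-class precision/recall; 0 when the denominator is 0.
        prec.append(tp / (tp + fp) if tp + fp else 0.0)
        rec.append(tp / (tp + fn) if tp + fn else 0.0)
    map_, mar = float(np.mean(prec)), float(np.mean(rec))
    # maF1: harmonic mean of macro precision and macro recall.
    maf1 = 2 * map_ * mar / (map_ + mar) if map_ + mar else 0.0
    return acc, map_, mar, maf1
```

For example, with `y_true = [0, 0, 1, 1]` and `y_pred = [0, 1, 1, 1]`, class 0 gets precision 1.0 and recall 0.5 while class 1 gets precision 2/3 and recall 1.0, giving maP = 5/6 and maR = 0.75.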
# 4.4. Baseline Models
We benchmarked our framework against 20 strong baseline models, including 8 ML, 7 DL, and 5 GNN models. We implemented ML models including Extreme Gradient Boosting (XGBoost), Balanced Random Forest (Balanced RF), Support Vector Machine with SMOTE oversampling (SVM (SMOTE)), Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), Logistic Regression (LR), and kNN. Key hyperparameters (e.g., number of estimators, learning rate, regularization terms) were optimized via grid search.
The DL models consist of an MLP with two hidden layers (128 and 64 units) and batch normalization, a ResNet with three residual blocks, a Gated Recurrent Unit (GRU) and a Long Short-Term Memory (LSTM) network with 128 hidden units each, their bidirectional variants (BiGRU and BiLSTM, 128 units per direction), and a Convolutional Neural Network (CNN) comprising two 1D convolutional layers (32 and 64 filters, kernel size 3).
To evaluate graph-structured representations, we adopted five GNNs: Graph Convolutional Network (GCN) (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), Graph Attention Network (GAT) (Velickovic et al., 2017), Graph Isomorphism Network (GIN) (Xu et al., 2019), and Jumping Knowledge Network (JKNet) (Xu et al., 2018). The GNN architectures implemented in this study comprise two layers (except for JKNet, which uses three) with 64 hidden units each, followed by a fully connected output layer for classification. All models were trained on identical data splits with consistent evaluation protocols for fair comparison.
Table 2: Performance benchmarking results for 7 binary tabular datasets with different class imbalance ratios. Bold indicates the best performance and underline indicates the second best performance. ’Diff’ indicates the absolute improvement of QCL-MixNet over either the best-performing or the second-best-performing baseline model for each dataset, depending on which is more relevant in each scenario.
Table 3: Performance benchmarking results for 11 multi-class tabular datasets with different class imbalance ratios. Bold indicates the best performance and underline indicates the second best performance. ’Diff’ indicates the absolute improvement of QCL-MixNet over either the best-performing or the second-best-performing baseline model for each dataset, depending on which is more relevant in each scenario.
Table 5: Ablation study results for 11 multi-class tabular datasets. Bold indicates the best performance and underline indicates the second best performance. ’Diff’ indicates the absolute improvement of QCL-MixNet over either the best-performing or the second-best-performing baseline model for each dataset, depending on which is more relevant in each scenario.
Table 4: Ablation study results for 7 binary tabular datasets. Bold indicates the best performance and underline indicates the second best performance. ’Diff’ indicates the absolute improvement of QCL-MixNet over either the best-performing or the second-best-performing baseline model for each dataset, depending on which is more relevant in each scenario.
# 5. Results and Discussion
# 5.1. Performance Comparison
Our proposed QCL-MixNet demonstrates consistently superior performance across a diverse set of 7 binary tabular datasets, as detailed in Table 2. Notably, on datasets like arrhythmia, pen_digits, satimage, and optical_digits, QCL-MixNet secures the best maF1 values. This improvement is visualized in Figure 2a, which shows the absolute gain in macro F1 score compared to the best-performing baseline for each dataset. This strong performance is particularly evident in maF1, which is crucial for imbalanced scenarios. Traditional DL models, including advanced architectures like BiLSTMs or ResNets, often falter on imbalanced tabular data due to overfitting to the majority class or inadequate representation of minority classes. QCL-MixNet demonstrably overcomes these limitations in most cases. For instance, on satimage, QCL-MixNet achieves an maF1 of 0.84, surpassing the strongest competitors (second highest maF1: 0.82). Here, the ‘Diff’ row, which quantifies QCL-MixNet’s improvement over the best single model from any other category, shows a $+0.02$ maF1 gain, highlighting its superior handling of minority classes. This is likely due to its dynamic mixup, which creates more informative synthetic samples, and its QI feature learning, which improves separability.
Figure 2: maF1-score improvement of QCL-MixNet over the best performing baseline on binary and multiclass datasets. (a) Improvement over the best performing baseline on 7 binary datasets. (b) Improvement over the best performing baseline on 11 multiclass datasets.
Figure 3: Cross-dataset maF1 score distribution of various models on (a) 7 binary and (b) 11 multiclass datasets.
This robust performance extends to multi-class classification, where QCL-MixNet shows leading performance across 11 challenging tabular datasets (see Table 3), frequently outperforming all other model categories (ML, DL, and GNNs). The performance gains on these multiclass datasets are summarized in Figure 2b, which shows that QCL-MixNet outperforms the top baseline on 8 of the 11 datasets. For instance, on highly imbalanced datasets like vehicle, wine-quality-red, and glass, QCL-MixNet demonstrates substantial improvements in accuracy, with ‘Diff’ values of $+0.09$, $+0.04$, and $+0.09$, respectively, over the best competing model from other categories. These results underscore the efficacy of its dynamic mixup strategy, which is hypothesized to generate synthetic samples adaptively, particularly in underrepresented regions of the feature space defined by class imbalance and feature overlap. Such targeted augmentation is crucial for enabling better separation of multiple minority classes, a common challenge where standard oversampling or augmentation methods often falter in multi-class settings.
The results indicate that QCL-MixNet surpasses all other methods in accuracy across 14 out of 18 binary and multiclass datasets, and leads in maP (11 datasets), maR (9 datasets), and maF1 (12 datasets). While strong baselines like SVM (SMOTE) excel on specific datasets (e.g., one-hundred-plants-texture), QCL-MixNet demonstrates superior consistency, significantly outperforming these methods across a wider range of datasets, especially those with severe class imbalances or complex structures (e.g., lymph, wine-quality-white). This highlights QCL-MixNet’s robust and generalizable performance in imbalanced learning across diverse datasets, rather than dataset-specific strength; this consistency is driven by its integration of quantum-informed principles with dynamic mixup augmentation, which enables more effective regularization and feature learning for imbalanced tabular classification.
Figure 4: t-SNE plots showing prediction-space clustering of QCL-MixNet on binary and multiclass datasets. (a) presents t-SNE embeddings for binary classification tasks, while (b) shows t-SNE embeddings for multiclass classification tasks. Each plot reflects the distribution of model predictions in the latent space, highlighting the degree of class separation and cluster compactness achieved by QCL-MixNet.
Figure 5: Ablation study showing the impact of removing key components from QCL-MixNet. Performance is measured by maF1 score on (a) the mean of 7 binary datasets and (b) the single mineral multiclass (left) dataset and the mean of 10 other multiclass (right) datasets. In all scenarios, the full model outperforms variants where the quantum-inspired embedding, mixup augmentation, or self-attention is removed. This confirms that each component contributes positively to the model’s overall performance. Error bars represent standard deviation across datasets.
To further illustrate these findings, Figure 2 provides a direct, per-dataset comparison of QCL-MixNet against the strongest baseline. The green bars indicate an improvement in maF1, with particularly large gains on datasets like arrhythmia and vehicle. The few red bars in Figure 2b correspond to the handful of cases where a baseline model retained a slight edge. Complementing this, Figure 3 contextualizes the overall performance landscape. The box plots show the distribution of maF1 scores for each model across all binary (Figure 3a ) and multiclass (Figure 3b )
datasets, respectively. These plots reveal the high variance and lower median performance of some traditional models, establishing the highly competitive nature of baselines like ResNet against which QCL-MixNet’s improvements are measured. As illustrated in Figure 4 , the t-SNE embeddings of QCL-MixNet’s prediction space reveal consistent and interpretable clustering behavior across both binary and multiclass classification tasks. In the binary datasets (Figure 4a ), the model exhibits distinct and compact clusters for the two classes, with minimal overlap, especially evident in datasets such as pen_digits, isolet, and optical_digits, suggesting that QCL-MixNet effectively captures discriminative representations even in low-dimensional latent spaces. On the multiclass side (Figure 4b ), datasets like letter, har, and one-hundred-plants-texture display well-separated groupings aligned with the underlying class labels, demonstrating the model’s capacity to preserve inter-class structure. Notably, even for challenging datasets such as abalone and glass, where classes are inherently less separable, the embeddings retain a coherent spatial distribution.
# 5.2. Ablation Studies
To dissect the contributions of key architectural modules (QE Layers, Dynamic Mixup, and Attention), we conducted systematic ablation experiments. Figure 5 provides a high-level visual summary of these findings by plotting the mean performance across datasets. Detailed per-dataset results are shown in Table 4 for binary datasets and Table 5 for multi-class datasets. Among all components, the removal of Dynamic Mixup led to the most significant degradation, with an average maF1 drop of approximately $24.5\%$ across the 18 datasets. The absence of mixup severely hampers generalization in datasets with either many classes or high intra-class variability (e.g., one-hundred-plants-texture, glass). Mixup appears to lessen overfitting and reduce decision boundary sharpness by promoting interpolation between samples, which is especially beneficial in low-data regimes such as glass with only 214 samples. The attention mechanism also proved highly influential, its exclusion resulting in an average maF1 decrease of about $23.2\%$; its importance is highlighted on datasets with high feature dimensionality (e.g., har with 562 features, mineral with 140 features). This suggests that the attention module is crucial for feature selection and localization, enabling the model to suppress irrelevant or noisy features, especially when feature redundancy is high or sample diversity is sparse.
The QE Layers, while having a more varied impact across datasets, still demonstrated a substantial overall contribution, with their removal causing an average maF1 reduction of roughly $11.5\%$, most notably in high-imbalance or high-dimensional datasets (e.g., vehicle, arrhythmia). This supports the hypothesis that these QI layers introduce beneficial inductive biases that improve representation expressiveness, particularly under data sparsity. Moreover, QCL-MixNet exhibits high stability across varying dataset sizes, from small datasets (e.g., ecoli with 336 samples) to large ones (e.g., letter with 20,000 samples), indicating architectural scalability. Ultimately, the consistent superiority of the full QCL-MixNet model over its ablated variants emphasizes that its components (improved representation learning, strategic data augmentation, and refined feature weighting) work synergistically to drive robust performance. This synergistic effect is visualized in Figure 5, where the blue bar representing the full model consistently stands taller than the bars for any of the ablated versions across all test scenarios.
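To make the mixup component more concrete, the sketch below shows one plausible form of kNN-guided mixup. This is a simplified illustration under our own assumptions about neighbor selection and the Beta-distributed mixing coefficient; the paper's dynamic, sample-aware variant is more involved.

```python
import numpy as np

def knn_mixup(X, y, n_neighbors=5, alpha=0.4, rng=None):
    """Illustrative kNN-guided mixup (an assumption, not the authors' exact
    method): interpolate each sample with a random neighbor drawn from its
    k nearest same-class points, using a Beta(alpha, alpha) coefficient."""
    rng = np.random.default_rng(rng)
    X_new, y_new = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        if len(Xc) < 2:
            continue  # cannot mix a singleton class
        # Pairwise Euclidean distances within the class.
        D = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        k = min(n_neighbors, len(Xc) - 1)
        for i in range(len(Xc)):
            j = rng.choice(np.argsort(D[i])[:k])  # a random near neighbor
            lam = rng.beta(alpha, alpha)
            X_new.append(lam * Xc[i] + (1 - lam) * Xc[j])
            y_new.append(c)
    return np.array(X_new), np.array(y_new)
```

Restricting interpolation to nearby same-class points keeps the synthetic samples inside locally plausible regions, which is the intuition behind augmenting minority classes without bleeding across decision boundaries.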
# 5.3. Statistical Comparison Using the Friedman Test
To statistically compare the performance of multiple classification models across multiple datasets, we employed the Friedman test, a widely accepted non-parametric procedure suitable for multiple comparisons under repeated measures. This method is particularly suitable for evaluating algorithms across a common set of datasets, where the assumptions of parametric tests, such as normality and homoscedasticity, are unlikely to hold. To rigorously compare the performance of multiple classifiers across datasets and metrics, we applied the Friedman test in two complementary ways: (i) on a per-metric basis across datasets, and (ii) globally, averaging the ranks across metrics. These tests account for non-normality and repeated measures, providing a robust statistical basis for evaluating model differences.
Let us denote by $k$ the number of models being compared, and by $N$ the number of repeated measures, which can correspond to either the number of datasets (in the per-metric Friedman test) or the number of evaluation metrics (in the global Friedman test). For each repetition $i \in \{ 1 , \ldots , N \}$ , the $k$ models are evaluated using a common criterion (e.g., classification accuracy or another performance measure). These scores are then converted into ranks $R _ { i j } \in \{ 1 , \ldots , k \}$ , where $R _ { i j }$ denotes the rank of the $j$ -th model under the $i$ -th repetition. The best-performing model receives rank 1, the second-best receives rank 2, and so on. In the event of ties, average ranks are assigned to the tied models.
The null hypothesis $( H _ { 0 } )$ of the Friedman test posits that all models perform equally well in expectation, implying that their mean ranks are equal:
$$
H _ { 0 } : \mathbb { E } [ R _ { 1 } ] = \mathbb { E } [ R _ { 2 } ] = \cdots = \mathbb { E } [ R _ { k } ]
$$
Table 6: Average Ranks of Models Across All Datasets and Friedman Test Outcomes for Each Evaluation Metric
To evaluate this hypothesis, the Friedman test statistic $\chi _ { F } ^ { 2 }$ is computed as:
$$
\chi _ { F } ^ { 2 } = \frac { 1 2 N } { k ( k + 1 ) } \sum _ { j = 1 } ^ { k } \bar { R } _ { j } ^ { 2 } - 3 N ( k + 1 )
$$
where $\begin{array} { r } { \bar { R } _ { j } ~ = ~ \frac { 1 } { N } \sum _ { i = 1 } ^ { N } R _ { i j } } \end{array}$ is the average rank of the $j$ -th model across all $N$ repeated measures (either datasets or evaluation metrics). In settings where ties may occur, such as when ranking models on individual datasets, it is necessary to adjust for the reduced variance in ranks caused by tied values. A correction factor $c \in ( 0 , 1 ]$ is introduced:
$$
c = 1 - \frac { T } { N k ( k ^ { 2 } - 1 ) }
$$
where $T$ is the total tie correction term, obtained by summing $t ( t ^ { 2 } - 1 )$ over all tie groups of size $t$ across the repeated measures. The corrected test statistic is then:
$$
\chi _ { F , \mathrm { c o r r e c t e d } } ^ { 2 } = { \frac { \chi _ { F } ^ { 2 } } { c } }
$$
In our analysis, this correction was applied only in the per-metric Friedman tests, where ties were possible within individual datasets. For the global Friedman test based on aggregated average ranks across metrics, no correction was necessary. Assuming a sufficiently large number of repetitions ($N > 10$), the Friedman statistic (corrected or uncorrected) asymptotically follows a $\chi ^ { 2 }$ distribution with $k - 1$ degrees of freedom $( d f )$. A small $p$-value indicates that $H _ { 0 }$ can be rejected, suggesting statistically significant differences in model performance.
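The procedure above (rank the models within each repeated measure, average the ranks, apply the tie-corrected statistic) can be reproduced with a short script. This is an illustrative re-implementation of the standard test, not the authors' analysis code:

```python
import numpy as np
from scipy.stats import chi2, rankdata

def friedman_test(scores):
    """Tie-corrected Friedman test. `scores` is an (N, k) matrix:
    N repeated measures (e.g., datasets) by k models; lower score = rank 1."""
    N, k = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)  # average ranks on ties
    avg_ranks = ranks.mean(axis=0)
    stat = 12 * N / (k * (k + 1)) * np.sum(avg_ranks ** 2) - 3 * N * (k + 1)
    # Tie correction: c = 1 - sum over tie groups of t(t^2 - 1) / (N k (k^2 - 1))
    T = sum(np.sum(cnt ** 3 - cnt)
            for cnt in (np.unique(row, return_counts=True)[1] for row in ranks))
    c = 1 - T / (N * k * (k ** 2 - 1))
    stat /= c
    return avg_ranks, stat, chi2.sf(stat, df=k - 1)
```

The result agrees with `scipy.stats.friedmanchisquare`, which applies the same tie correction.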
# 5.3.1. Per-Metric Friedman Tests Across Datasets
To assess whether the observed performance differences among the $k \ = \ 2 1$ classification models across the $n = 1 8$ datasets are statistically significant for each evaluation metric, we conducted individual Friedman tests per metric. The test computes ranks for each model within each dataset, then evaluates whether the mean ranks differ significantly under the $H _ { 0 }$ that all models perform equally well. Table 6 presents the average ranks of all models along with the Friedman test outcomes. In all cases, the Friedman test strongly rejects $H _ { 0 }$ $( p \ll 0 . 0 5 )$ , indicating statistically significant differences in performance rankings across models. Among the models, the SVM (SMOTE) variant achieved the best (lowest) average rank in maR (2.8) and maF1 (2.8), while the proposed QCL-MixNet achieved the best performance in accuracy (3.8) and maP (3.6). To resolve the ambiguity arising from multiple models excelling in different metrics, we performed a global Friedman test over the average ranks aggregated across all four metrics. This allows us to assess the models’ overall consistency and performance in a unified statistical framework.
# 5.3.2. Global Friedman Test on Aggregated Ranks Across Metrics
To complement the per-metric Friedman tests, we also performed a global Friedman test (bottom row of Table 6 ) on the average ranks of each model, aggregated across the four evaluation metrics. This approach offers a more consolidated view of model performance by considering multiple evaluation dimensions simultaneously. Let $k = 2 1$ be the number of classifiers and $N = 4$ be the number of metrics used as repeated measures. The average ranks $\bar { R } _ { j }$
are computed as in Equation 25 for $j \in \{1, \dots, k\}$, yielding the Friedman statistic $\chi_F^2 = \frac{12 \times 4}{21 \times 22} \sum_{j=1}^{21} \bar{R}_j^2 - 3 \times 4 \times 22 = 29.68$. Since $N$ is small, we applied the Iman-Davenport correction to obtain an $F$-distributed statistic: $F_F = \frac{(N-1)\chi_F^2}{N(k-1) - \chi_F^2} = \frac{(4-1) \times 29.68}{4 \times (21-1) - 29.68} = \frac{89.04}{50.32} \approx 1.77$. The degrees of freedom for this test are $df_1 = k - 1 = 20$ and $df_2 = (k-1)(N-1) = 20 \times 3 = 60$. The critical value at $\alpha = 0.05$ from the $F$-distribution is $F_{0.05}(20, 60) \approx 1.748$. Since $F_F = 1.77 > 1.748$, we reject the $H_0$, which posits that all models perform equally in expectation. This result confirms that the observed rank differences across models are statistically significant even when aggregating across multiple evaluation dimensions. QCL-MixNet, with the lowest global mean rank of 3.55, again emerged as the top-performing model.
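The arithmetic of the global test is easy to verify programmatically; the helper below (illustrative, with the constants taken from the text) reproduces the reported values:

```python
from scipy.stats import f

def iman_davenport(chi2_f, N, k):
    """Iman-Davenport correction of the Friedman statistic:
    F_F = (N-1) * chi2_F / (N(k-1) - chi2_F), df1 = k-1, df2 = (k-1)(N-1)."""
    ff = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)
    return ff, k - 1, (k - 1) * (N - 1)

ff, df1, df2 = iman_davenport(29.68, N=4, k=21)
crit = f.ppf(0.95, df1, df2)   # critical value F_{0.05}(20, 60)
```

With the paper's values this gives $F_F \approx 1.77$ against a critical value of about 1.748, so $H_0$ is rejected at $\alpha = 0.05$.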
# 1 Introduction
Cohesive subgraph mining is a fundamental problem in graph theory with numerous real-world applications [13, 24, 32, 39]. Unlike cliques, which require complete connectivity, cohesive subgraphs provide more flexibility by allowing the absence of certain edges. In real-world graphs, data is often noisy and incomplete, making fully connected cliques impractical. By allowing a certain degree of edge absence, the relaxation offers a robust way to identify strongly connected substructures while accommodating missing or uncertain relationships. Various approaches have been proposed to relax the constraints of cohesive subgraph definitions, such as $k$ -plex [54], $k$ -core [53], and $k$ -defective clique [63]. One widely studied definition is the $\gamma$ -quasi-clique [46]. Given a fraction $\gamma$ between 0 and 1, a $\gamma$ -quasi-clique requires that every vertex in the subgraph is directly connected to at least a $\gamma$ proportion of the other vertices in the subgraph. Cohesive subgraph mining in the context of $\gamma$ -quasi-clique has recently attracted increasing interest [30, 34, 37, 40, 48, 52, 64, 68]. It has numerous applications in real-world graph structure analysis, including social network analysis [7, 24, 31] and the modeling of protein-protein interaction networks [2, 9, 55], which captures relationships between proteins. For example, in [50], researchers identify biologically significant functional groups by mining large $\gamma$ -quasi-cliques that meet a minimum size threshold across various protein-protein and gene-gene interaction networks. The idea is that within a functional protein group, most members interact frequently, suggesting a high likelihood of forming a quasi-clique [50]. Similarly, in another case study [31], $\gamma$ -quasi-cliques are utilized to uncover meaningful communities within networks derived from publication data.
In this paper, we study the maximum $\gamma$-quasi-clique problem, which aims to identify the $\gamma$-quasi-clique with the largest number of vertices in a graph. As a natural extension of the classic maximum clique problem, this problem is unsurprisingly NP-hard [46, 48]. An even greater challenge lies in the fact that the $\gamma$-quasi-clique lacks the hereditary property, i.e., subgraphs of a $\gamma$-quasi-clique are not guaranteed to also be $\gamma$-quasi-cliques. This limitation complicates the design of efficient pruning methods, unlike problems such as $k$-plex or $k$-defective clique, where the hereditary property enables effective optimizations. In this paper, we focus on designing practically efficient exact algorithms to tackle this challenging problem.
Existing state-of-the-art algorithms. The state-of-the-art algorithm for the maximum $\gamma$ -quasi-clique problem is DDA [48], which combines graph properties with techniques from operations research. A key feature of DDA is its iterative enumeration (or estimation) of the degree of the minimum-degree vertex in the maximum $\gamma$ -quasi-clique. For each candidate degree, it invokes an integer programming (IP) solver multiple times to solve equivalent subproblems and identify the final solution. Unlike other approaches relying solely on IP formulations or branch-and-bound techniques based on graph properties, DDA effectively integrates both. However, it has two limitations: (1) it naively enumerates possible minimum degree values, potentially requiring a large number of trials (e.g., DDA may enumerate $O ( n )$ values, where $n$ is the number of vertices in the graph); (2) the $\mathrm { I P }$ solver operates as a black box, which makes it difficult to optimize for this specific problem.
Another closely related algorithm, FastQC [64], is the state-of-the-art for enumerating all maximal $\gamma$-quasi-cliques in a graph. FastQC uses a divide-and-conquer strategy with efficient pruning techniques that leverage intrinsic graph properties to systematically and effectively enumerate all maximal $\gamma$-quasi-cliques. It is clear that the largest maximal $\gamma$-quasi-clique identified corresponds to the maximum $\gamma$-quasi-clique. However, since FastQC is not specifically designed to solve the maximum $\gamma$-quasi-clique problem, even with additional pruning methods tailored for the maximum solution, its efficiency remains suboptimal. Moreover, our experimental results in Section 5 show that the efficiency of the FastQC algorithm significantly decreases when the value of $\gamma$ is relatively small.
In summary, while both DDA and FastQC have practical strengths, they fail to fully address the efficiency challenges of the maximum $\gamma$-quasi-clique problem. Their limitations in scalability and handling smaller values of $\gamma$ highlight the need for more specialized solutions.
Our new methods. In this work, we propose a novel algorithm, IterQC, which reformulates the maximum $\gamma$-quasi-clique problem as a series of $k$-plex problems. Notably, the $k$-plex possesses the hereditary property, enabling the design of more effective pruning methods. To achieve this, we introduce a non-trivial iterative framework that solves the maximum $\gamma$-quasi-clique problem through a carefully designed iterative procedure, leveraging repeated calls to a maximum $k$-plex solver [15, 28, 59]. Building on this novel iterative framework, we further propose advanced optimization techniques, including the pseudo lower bound (pseudo LB) technique and the preprocessing technique. The pseudo LB technique effectively coordinates information between the inner iterations and the outer iterative framework, thereby boosting the overall efficiency of the iterative procedure. Meanwhile, the preprocessing technique performs preliminary operations before the iterative framework begins, reducing the problem size by removing redundant vertices from the graph and reducing the number of iterations by skipping unnecessary ones. With these optimizations, IterQC achieves up to four orders of magnitude speedup and solves more instances compared to the state-of-the-art algorithms DDA and FastQC.
Contributions. Our main contributions are as follows.
• We first propose a basic iterative framework, which correctly transforms non-hereditary problems into multiple problem instances with the hereditary property. (Section 3)
• Based on the basic iterative method, we design an improved algorithm IterQC with two key components: (1) the pseudo lower bound (pseudo LB) technique, which utilizes information from previous iterations to generate a pseudo lower bound; this technique optimizes the branch-and-bound search, improving performance on challenging instances; (2) the preprocessing technique, which computes lower and upper bounds of the optimum size to remove unpromising vertices from the graph, potentially reducing the number of iterations. (Section 4)
• We conduct extensive experiments to compare IterQC with the state-of-the-art algorithms DDA and FastQC. The results show that (1) on the 10th DIMACS and real-world graph collections, the number of instances solved by IterQC within 3 seconds exceeds the number solved by the other two baselines within 3 hours; (2) on 30 representative graphs, IterQC is up to four orders of magnitude faster than the state-of-the-art algorithms. (Section 5)
# 2 Preliminaries
Let $G = (V, E)$ be an undirected simple graph with $|V| = n$ vertices and $|E| = m$ edges. We denote by $G[S]$ the subgraph of $G$ induced by a set of vertices $S \subseteq V$. We use $d_G(v)$ to denote the degree of $v$ in $G$. Let $g$ be an induced subgraph of $G$. We denote the vertex set and the edge set of $g$ as $V(g)$ and $E(g)$, respectively. In this paper, we focus on the cohesive subgraph of $\gamma$-quasi-clique defined below.
Definition 1. Given a graph $G = (V, E)$ and a real number $0 < \gamma \leq 1$, an induced subgraph $g$ is said to be a $\gamma$-quasi-clique ($\gamma$-QC) if $\forall v \in V(g)$, $d_g(v) \geq \gamma \cdot (|V(g)| - 1)$.
We are ready to present our problem in this paper.
Problem 1 (Maximum $\gamma$-quasi-clique). Given a graph $G = (V, E)$ and a real number $\gamma \in [0.5, 1]$, the Maximum $\gamma$-quasi-clique Problem (MaxQC) aims to find the largest $\gamma$-quasi-clique in $G$.
We first note that $\mathsf { M a x Q C }$ has been proven to be NP-hard [46, 48]. Moreover, we let $g ^ { \ast }$ be the largest $\gamma { \mathrm { - } } Q C$ in $G$ and $s ^ { * } = | V ( g ^ { * } ) |$ be the size of this maximum solution. We also note that, following previous studies [30, 37, 50, 52, 64], we consider MaxQC with $\gamma \geq 0 . 5$ since (1) conceptually, a $\gamma { \mathrm { - Q C } }$ with $\gamma < 0 . 5$ may not be cohesive in practice (note that each vertex in such a $\gamma$ -QC may connect to fewer than half of the other vertices); (2) technically, a $\gamma { \mathrm { - } } Q C$ with $\gamma \geq 0 . 5$ has a small diameter of at most 2 [50].
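Definition 1 and Problem 1 are straightforward to express in code. The brute-force sketch below (exponential, intended only for building intuition on tiny graphs; the exact algorithms in this paper avoid such enumeration) checks the degree condition for every vertex of a candidate set:

```python
from itertools import combinations

def is_gamma_qc(adj, S, gamma):
    """Definition 1: every v in S has at least gamma * (|S| - 1)
    neighbors inside S. `adj` maps each vertex to its neighbor set."""
    S = set(S)
    need = gamma * (len(S) - 1)
    return all(len(adj[v] & S) >= need for v in S)

def max_gamma_qc_bruteforce(adj, gamma):
    """Largest gamma-quasi-clique by enumerating subsets, largest first."""
    V = list(adj)
    for r in range(len(V), 0, -1):
        for S in combinations(V, r):
            if is_gamma_qc(adj, S, gamma):
                return list(S)
    return []
```

For example, on a triangle with one pendant vertex, the whole graph fails the $\gamma = 0.5$ condition (the pendant has only one neighbor), while the triangle itself is a valid 0.5-QC.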
# 3 Our Basic Iterative Framework
The design of efficient exact algorithms for MaxQC presents multiple challenges. First, MaxQC has been proven to be NP-hard [46, 48]. Existing studies on similar maximum cohesive subgraph problems often adopt the branch-and-bound algorithmic framework [14, 15, 59]. Second, the $\gamma$ -quasi-clique is non-hereditary [64], i.e., an induced subgraph of a $\gamma { - } Q C$ is not necessarily a $\gamma { \mathrm { - } } Q C$ . This non-hereditary property of the $\gamma { \mathrm { - } } \mathsf { Q C }$ complicates the design of effective bounding techniques within the branch-and-bound framework.
To address these challenges, our algorithm adopts an iterative framework that solves $M a x Q C$ by iteratively invoking a solver for another cohesive subgraph problem. The design rationale originates from the intention to convert the non-hereditary subgraph problem into multiple instances of a hereditary subgraph problem, so as to speed up the overall procedure.
We first introduce another useful cohesive subgraph structure.
Definition 2 ([54]). Given a graph $G = (V, E)$ and an integer $k \geq 1$, an induced subgraph $g$ is said to be a $k$-plex if $\forall v \in V(g)$, $d_g(v) \geq |V(g)| - k$.
In the literature, a related problem for $k$ -plex is defined [3, 14, 26, 35, 36, 59, 61, 70], as shown in the following.
Problem 2 (Maximum $k$ -plex). Given a graph $G = \left( V , E \right)$ and a positive integer $k$ , the Maximum $k$ -plex Problem aims to find the largest $k$ -plex in $G$ .
Based on the maximum $k$ -plex problem, we next describe our basic iterative framework for the $M a x Q C$ problem. To simplify the description, we define the following two functions.
• $\mathsf{get\text{-}k}(x) := \lfloor (1 - \gamma) \cdot (x - 1) \rfloor + 1$, which takes a number $x$ of vertices as input and returns an appropriate value of $k$;
• $\mathsf{solve\text{-}plex}(y) :=$ the size of the largest $y$-plex in $G$, which takes a value $y$ as input.
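These two functions, together with the iterative loop of Algorithm 1 described next, can be sketched as follows. This is a deliberately naive toy in which solve-plex is a brute-force stand-in; the actual algorithm invokes an efficient exact maximum $k$-plex solver:

```python
import math
from itertools import combinations

def get_k(x, gamma):
    """get-k(x) = floor((1 - gamma) * (x - 1)) + 1."""
    return math.floor((1 - gamma) * (x - 1)) + 1

def is_k_plex(adj, S, k):
    """Definition 2: every v in S has at least |S| - k neighbors inside S."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)

def solve_plex(adj, k):
    """Size of the largest k-plex, by brute force (stand-in for an exact solver)."""
    V = list(adj)
    for r in range(len(V), 0, -1):
        if any(is_k_plex(adj, S, k) for S in combinations(V, r)):
            return r
    return 0

def max_qc_size(adj, gamma):
    """Basic iterative framework: start from s_0 = n, repeatedly set
    s = solve-plex(k) and k = get-k(s), stopping once k stabilizes."""
    s = len(adj)
    k = get_k(s, gamma)
    while True:
        s = solve_plex(adj, k)
        k_next = get_k(s, gamma)
        if k_next == k:
            return s
        k = k_next
```

With $\gamma = 1$, get-k always returns 1 and a 1-plex is a clique, so the loop degenerates to one maximum clique computation.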
We present our basic iterative framework for solving MaxQC in Algorithm 1. Specifically, Algorithm 1 initializes $s_0$ as the number of vertices in the graph $G$, computes the corresponding value of $k_1$ using the function $\mathsf{get\text{-}k}$, and sets the index $i$ to 1 (Line 1). The algorithm then iteratively computes $s_i$ by solving the maximum $k_i$-plex problem via the function solve-plex, while updating $k_i$ via the function get-k in each iteration (Lines 2-6). The iteration terminates when $k_i = \mathsf{get\text{-}k}(s_i)$ in Line 4, at which point $s^*$, the size of the maximum $\gamma$-quasi-clique, is returned in Line 5. Note that Algorithm 1 returns only the size of the maximum $\gamma$-quasi-clique, but can be easily modified to output the corresponding subgraph.
Correctness. We now prove the correctness of our basic iterative framework in Algorithm 1. For simplicity in the proof, we assume that the termination condition (Line 4) of Algorithm 1 is met at the $(p+1)$-st iteration. Then, in our iterative framework, we can obtain two sequences $\{s_0, s_1, \dots, s_p, s_{p+1}\}$ and $\{k_1, k_2, \dots, k_p, k_{p+1}\}$, where $k_{p+1} = \mathsf{get\text{-}k}(s_{p+1})$. In our proof, we will show that the sequence $\{s_0, s_1, \dots, s_p\}$ generated by Algorithm 1 is strictly decreasing and that $s_{p+1}$ corresponds to the largest $\gamma$-quasi-clique in the graph, i.e., $s_{p+1} = s^*$. We first consider the special case that the input graph $G$ is already a $\gamma$-quasi-clique.
Lemma 1. When the input graph $G = (V, E)$ is a $\gamma$-QC, Algorithm 1 correctly returns $|V|$ as the optimum solution.
The proof of Lemma 1, along with other omitted proofs in this section, can be found in Section A.1. In the following, we discuss the correctness of the algorithm where $G$ itself is not a $\gamma$ -quasi-clique. We first prove the properties of the sequence for $s$ in the following.
Lemma 2. For the sequence $\{ s _ { 0 } , s _ { 1 } , \ldots , s _ { p } \}$ , two consecutive elements are not identical, i.e., $\forall 0 \leq i \leq p - 1 , s _ { i } \neq s _ { i + 1 }$ .
Lemma 3. The sequence $\{ s _ { 0 } , s _ { 1 } , . . . , s _ { p } \}$ is strictly decreasing.
Proof. We prove this lemma by mathematical induction. $\textcircled{1}$ $p = 1$. As we consider the case where $G$ itself is not a $\gamma$-quasi-clique (by Lemma 1), we have $s_1 = \mathsf{solve\text{-}plex}(k_1) < s_0$. The base case holds. $\textcircled{2}$ $p \geq 2$. Assume that the induction hypothesis holds for $p - 1$, i.e., we have $s_{p-1} < s_{p-2}$. We observe that $\mathsf{get\text{-}k}$ is a non-decreasing function, which implies that $\mathsf{get\text{-}k}(s_{p-1}) \leq \mathsf{get\text{-}k}(s_{p-2})$. Moreover, by the definition of the $\mathsf{solve\text{-}plex}$ function, for any $y_1 > y_2$, the inequality $\mathsf{solve\text{-}plex}(y_1) \geq \mathsf{solve\text{-}plex}(y_2)$ holds. Then we have
$$
s_p = \mathsf{solve\text{-}plex}(k_p) = \mathsf{solve\text{-}plex}(\mathsf{get\text{-}k}(s_{p-1})) \leq \mathsf{solve\text{-}plex}(\mathsf{get\text{-}k}(s_{p-2})) = \mathsf{solve\text{-}plex}(k_{p-1}) = s_{p-1}.
$$
Note that based on Lemma 2 the sequence $\{ s _ { 0 } , s _ { 1 } , . . . , s _ { p } \}$ ensures that no two adjacent elements share the same value, which implies $s _ { p } < s _ { p - 1 }$ , completing the inductive step.
Based on $\textcircled{1}$ , $\textcircled{2}$ , and the principle of mathematical induction, we complete the proof of Lemma 3. □
Based on Lemma 3, we can obtain the following corollary.
Corollary 1. The sequence $\{ s _ { 0 } , s _ { 1 } , . . . , s _ { p } \}$ is finite.
We then show the relationship between $\{ s _ { 0 } , s _ { 1 } , \dotsc , s _ { p } , s _ { p + 1 } \}$ and the size of the largest $\gamma$ -quasi-clique.
Lemma 4. $\forall s _ { i } \in \left\{ s _ { 0 } , s _ { 1 } , . . . , s _ { p + 1 } \right\}$ , $s _ { i } \geq s ^ { * }$ holds.
Lemma 5. Algorithm 1 correctly identifies the largest $\gamma$-quasi-clique, i.e., $s_{p+1} = s^*$.
Proof. We prove by contradiction. By Lemma 4, we assume, to the contrary, that $s_{p+1} > s^*$. According to the termination condition in Line 4 of Algorithm 1, we have $k_{p+1} = \mathsf{get\text{-}k}(s_{p+1}) = 1 + \lfloor (1 - \gamma) \cdot (s_{p+1} - 1) \rfloor$. Thus, it is clear that the result obtained by $\mathsf{solve\text{-}plex}(k_{p+1})$ is a $\gamma$-quasi-clique of size $s_{p+1}$, which contradicts $s^*$ being the size of the maximum $\gamma$-quasi-clique. □
Discussions. We first note that an example of Algorithm 1 and its iterative process are provided in Appendix B. As shown in Lemma 5, Algorithm 1 correctly solves MaxQC. The time complexity analysis is simple, since there are at most $n$ iterations (Lines 2-6) and each iteration invokes a solver for the maximum $k$-plex problem. Note that the current best time complexities of the maximum $k$-plex algorithms are $O^*((k+1)^{\delta+k-s^*})$ and $O^*(\gamma_k^\delta)$ [15, 28, 59], where $O^*$ suppresses polynomial factors, $\gamma_k < 2$ is the largest real root of $x^{k+3} - 2x^{k+2} + x^2 - x + 1 = 0$, and $\delta$ is the degeneracy of the graph. We also remark that prior studies adopted similar approaches by reducing a more difficult problem to solving another problem iteratively. For instance, Chang and Yao [15] briefly discuss the relationship between the $\gamma$-quasi-clique and the $k$-plex. They focus on enumerating maximal $\gamma$-quasi-cliques and mention a method for generating an initial solution set by enumerating maximal $k$-plexes, followed by a screening step to remove those that do not satisfy the maximality condition. However, this method is mainly of theoretical interest and runs slowly in practice, since it requires enumerating a large number of maximal $k$-plexes. Zhang and Liu [69] tackle the minimum $k$-core problem by reducing it to multiple iterations of solving the maximum $k$-plex problem; their iterative strategy adjusts the value of $k$ by one in each round, progressively approaching the desired solution. In contrast, we consider the maximum $\gamma$-quasi-clique problem.
Our method adaptively adjusts the values of $k$ based on the size of the current best $k$ -plex, rather than exhaustively enumerating all possible values of $k$ .
However, Algorithm 1 still suffers from efficiency issues for the following reasons. First, Algorithm 1 utilizes a solver for solve-plex as a black box. This solver relies on a conservative lower bound within solve-plex to reduce the graph, leading to inefficiencies when handling certain challenging instances. Second, the graph processed iteratively in solve-plex (Line 3) often contains many unpromising vertices, which negatively affects overall performance, even after graph reduction within solve-plex. Further, in the initial iteration, the first value of $k$ is set as $k_1 = \mathsf{get\text{-}k}(|V|)$, where $|V|$ is a trivial upper bound of $s^*$ in Line 1. This initialization could potentially result in numerous unnecessary iterations of Lines 2-6.
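Before turning to the improvements, the basic framework of Algorithm 1 can be sketched as follows. This is a toy Python illustration (the paper's implementation is in C++), where solve-plex is replaced by a brute-force stand-in that is only feasible on tiny graphs; the graph is an adjacency list of neighbor sets.

```python
import math
from itertools import combinations

def get_k(x, gamma):
    # get-k(x) = floor((1 - gamma) * (x - 1)) + 1
    return math.floor((1 - gamma) * (x - 1)) + 1

def solve_plex(adj, k):
    # Brute-force maximum k-plex size: every vertex of S needs at least
    # |S| - k neighbors inside S. Illustration only; real solvers such as
    # kPEX use branch-and-bound.
    n = len(adj)
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            chosen = set(S)
            if all(len(adj[v] & chosen) >= size - k for v in S):
                return size
    return 0

def max_qc_size(adj, gamma):
    # Basic iterative framework (Algorithm 1): s_0 = |V|, k_1 = get-k(s_0),
    # then iterate s_i = solve-plex(k_i) until get-k(s_i) stabilizes.
    s = len(adj)
    k = get_k(s, gamma)
    while True:
        s = solve_plex(adj, k)
        if k == get_k(s, gamma):
            return s
        k = get_k(s, gamma)
```

On a triangle with a pendant vertex and $\gamma = 0.5$, the maximum $\gamma$-quasi-clique is the triangle of size 3; the loop terminates after a single call to solve-plex, since get-k(3) equals the initial $k_1 =$ get-k(4) $= 2$.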
# 4 Our Improved Algorithm: IterQC
To address the limitations of our basic iterative framework (Algorithm 1), we propose an advanced iterative method IterQC (shown in Algorithm 2), which incorporates two key stages: the preprocessing stage and the iterative search stage. First, we consider the iterative search stage (Line 4). Within the function Plex-Search, an improved version of solve-plex from Algorithm 1, we introduce a pseudo lower bound (pseudo LB). This technique improves practical performance when handling challenging instances while preserving the correctness of the iterative algorithm for MaxQC. The pseudo LB technique is discussed in Section 4.1. Second, we provide a preprocessing technique (Lines 1-3) to improve efficiency. Specifically, we compute both the lower and upper bounds, $lb$ and $ub$, of the size $s^*$ of the largest $\gamma$-quasi-clique $g^*$ via Get-Bounds in Line 1. Using $lb$ and $ub$, in Lines 2-3, we can (1) reduce the graph by removing unpromising vertices/edges from $G$ that cannot appear in $g^*$, and (2) initialize a smaller value of $k$ for Plex-Search, potentially reducing the number of iterations. The preprocessing technique is introduced in Section 4.2. Finally, we analyze the time complexity of our IterQC, which theoretically improves upon the state-of-the-art algorithms, in Section 4.3.
# 4.1 A Novel Pseudo Lower Bound Technique
The basic iterative framework (Algorithm 1) solves $M a x Q C$ by iteratively invoking the maximum $k$ -plex solver solve-plex as a black box with varying values of $k$ . To improve practical efficiency, we have the following key observation.
Key observation. Existing studies [14, 15, 28, 35, 59, 61] on maximum $k$ -plex solvers mainly consist of two steps: (1) heuristically computing a lower bound of the maximum $k$ -plex, and (2) exhaustively conducting a branch-and-bound search to find the final result. It is well known that Step (1) has a polynomial complexity while Step (2) requires an exponential complexity. Thus, both theoretically and practically, in most cases, the time cost of the branch-and-bound search dominates that of obtaining the lower bound.
With the above property, our key idea is to balance the time cost of both steps. In particular, we introduce a pseudo lower bound, denoted by $pseudo\text{-}lb$, which is defined as the average of the heuristic lower bound from Step (1) and a known upper bound for the maximum $k$-plex solution. In other words, $pseudo\text{-}lb$ is always at least the heuristic lower bound and at most the known upper bound. By incorporating $pseudo\text{-}lb$ into the branch-and-bound search in Step (2), we may improve pruning effectiveness, which reduces the overall search time as verified by our experiments. We note that in maximum $k$-plex algorithms, Step (1) corresponds to Plex-Heu for computing a heuristic solution, while Step (2) corresponds to Plex-BRB for performing the branch-and-bound search.
The technique of pseudo LB. We call this technique the pseudo lower bound (pseudo LB), which is incorporated into a branch-and-bound search algorithm called Plex-Search in Algorithm 3. In Plex-Search, $ub\text{-}plex$ represents an upper bound on the size of the maximum $k$-plex found in Plex-Search, meaning no $k$-plex larger than $ub\text{-}plex$ exists. The output of Plex-Search includes a pseudo lower bound $pseudo\text{-}lb$ and a pseudo size $S$, which corresponds to (1) the size of the maximum $k$-plex of size at least $pseudo\text{-}lb$ if such a $k$-plex exists, or (2) 0, if no $k$-plex of size at least $pseudo\text{-}lb$ exists. This is because, when the input $pseudo\text{-}lb$ exceeds the size of the maximum $k$-plex in the graph, Plex-BRB will utilize this $pseudo\text{-}lb$ for pruning, resulting in an empty graph as the vertex set $S$.
Our Plex-Search method. We describe Plex-Search in Algorithm 3. Specifically, in Line 1, we invoke Plex-Heu to compute a lower bound $lb\text{-}plex$. If $lb\text{-}plex = ub\text{-}plex$, we can directly return $lb\text{-}plex$ as both the pseudo lower bound and the pseudo size in Line 2, since Plex-Heu already finds the maximum $k$-plex in this case. Then, Line 3 computes our pseudo lower bound $pseudo\text{-}lb$ as the average of $lb\text{-}plex$ and $ub\text{-}plex$. We conduct Plex-BRB using $pseudo\text{-}lb$ for pruning and obtain the corresponding $k$-plex $S$ (either the vertex set of the maximum $k$-plex or an empty set). Finally, we return $pseudo\text{-}lb$ and $|S|$ in Line 5.
Algorithm 3: Plex-Search($G$, $k$, $ub\text{-}plex$)
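A minimal Python sketch of the Plex-Search logic described above, with toy brute-force stand-ins for Plex-Heu and Plex-BRB (the actual algorithm uses kPEX's components). Taking the integer midpoint of $lb\text{-}plex$ and $ub\text{-}plex$ as the "average" is an assumption made for this sketch.

```python
from itertools import combinations

def is_kplex(adj, S, k):
    # S is a k-plex iff every vertex of S has >= |S| - k neighbors in S.
    chosen = set(S)
    return all(len(adj[v] & chosen) >= len(S) - k for v in S)

def plex_heu(adj, k):
    # Toy heuristic (Step 1): greedily grow a k-plex from each start vertex.
    best = 1
    for start in range(len(adj)):
        S = [start]
        for v in range(len(adj)):
            if v not in S and is_kplex(adj, S + [v], k):
                S.append(v)
        best = max(best, len(S))
    return best

def plex_brb(adj, k, lb):
    # Toy stand-in for Plex-BRB (Step 2): a maximum k-plex of size >= lb,
    # or the empty set if none exists (brute force, tiny graphs only).
    for size in range(len(adj), lb - 1, -1):
        for S in combinations(range(len(adj)), size):
            if is_kplex(adj, S, k):
                return set(S)
    return set()

def plex_search(adj, k, ub_plex):
    # Sketch of Plex-Search (Algorithm 3) with the pseudo LB technique.
    lb_plex = plex_heu(adj, k)
    if lb_plex == ub_plex:
        return lb_plex, lb_plex           # heuristic solution is optimal
    pseudo_lb = (lb_plex + ub_plex) // 2  # pseudo lower bound (midpoint)
    S = plex_brb(adj, k, pseudo_lb)       # prunes with pseudo_lb; may be empty
    return pseudo_lb, len(S)
```

On the triangle-plus-pendant graph with $k = 2$ and $ub\text{-}plex = 4$, the heuristic finds a 2-plex of size 3, the pseudo lower bound becomes 3, and Plex-BRB confirms a maximum 2-plex of size 3.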
Our improved iterative search algorithm. With our newly proposed Plex-Search with the pseudo LB technique, we next present our improved iterative search method Improved-Iter-Search in Algorithm 4. This method is similar to our basic iterative framework in Algorithm 1. Specifically, Algorithm 4 initializes $s_0$ to $ub$, computes the corresponding value of $k_1$ using the function get-k, and sets the index $i$ to 1 (Line 1). The algorithm then iteratively computes $s_i$ by invoking Plex-Search (instead of solve-plex in Algorithm 1), while updating $k_i$ using the function get-k at each iteration (Lines 2-6). Note that both $pseudo\text{-}lb$ and $pseudo\text{-}size$ are obtained from our Plex-Search, and $s_i$ is set to the greater of these two values in Line 3. The iteration terminates when $k_i = \mathsf{get\text{-}k}(s_i)$ and $pseudo\text{-}size \geq pseudo\text{-}lb$ in Line 4. This termination condition ensures that (1) the output from Plex-Search is the maximum $k$-plex even with the use of a pseudo lower bound; and (2) the current iteration produces the same value of $k$ for the next iteration from Plex-Search. At this point, we return $s^*$ as the size of the maximum $\gamma$-quasi-clique in Line 5.
Remark. We note that our improved iterative search method in Algorithm 4 can use adaptations of any existing maximum $k$-plex algorithm [14, 15, 28, 35, 59, 61] for Plex-Search, provided the algorithm can be decomposed into two steps: (1) heuristically obtaining a lower bound for the maximum $k$-plex (Plex-Heu), and (2) exhaustively conducting the exact branch-and-bound search for the final solution (Plex-BRB). In this work, we utilize one of the state-of-the-art maximum $k$-plex algorithms, kPEX [28].
Correctness. We prove the correctness of our improved iterative search algorithm Improved-Iter-Search in Algorithm 4, which includes the pseudo LB technique in Algorithm 3. The complete proof and an illustrative example of the pseudo LB technique are provided in Appendix A.2 and Appendix B, respectively.
# 4.2 Our Preprocessing Technique
To present our preprocessing technique, we first describe the computation of lower and upper bounds via Get-Bounds (Line 1 of Algorithm 2).

Computation of lower and upper bounds. Following the idea commonly adopted for computing lower and upper bounds in prior dense subgraph search studies [12, 14, 51, 64], we derive our bounds based on the $k$-core structure. In the following, we introduce the relevant concepts and present the bounds for our problem.
Definition 3 ($k$-core [53]). Given a graph $G$ and an integer $k$, the $k$-core of $G$ is the maximal subgraph $g$ of $G$ such that every vertex $u \in V(g)$ has degree $d_g(u) \geq k$ in $g$.
Based on the $k$ -core, a related concept is the core number of a vertex $u$ , denoted as 𝑐𝑜𝑟𝑒 𝑢 , which is the largest $k$ such that $u$ is part of a $k$ -core in the graph. In other words, $u$ cannot belong to any subgraph where all vertices have a degree of at least 𝑐𝑜𝑟𝑒 $( u ) +$ 1. Let 𝑚𝑎𝑥_𝑐𝑜𝑟𝑒 be the maximum core number in the graph, i.e., $m a x \_ c o r e = \operatorname* { m a x } _ { u \in V } c o r e ( u )$ . Given a graph $G = \left( V , E \right)$ , the core numbers for all vertices can be computed in $O ( | V | + | E | )$ time using the famous peeling algorithm [6, 47], which iteratively removes vertices with the smallest degree from the current graph. We remark that during the peeling algorithm used to compute the core numbers, we can obtain the lower bound $l b$ of $s ^ { * }$ as a by-product. In particular, we check whether the current graph qualifies as a $\gamma$ -quasi clique and record the size of the largest qualifying 𝛾-quasi clique. Moreover, as the current graph keeps shrinking during the peeling algorithm, the lower bound $l b$ clearly corresponds to the size of the first current graph that meets the $\gamma$ -quasi clique requirements.
Lemma 6. Given a graph $G$ and a real number $0 < \gamma \leq 1$ , we have $s ^ { * } \le 1 + \lceil m a x \_ c o r e / \gamma \rceil$ .
The proof of Lemma 6 directly follows from the fact that the minimum degree of a $\gamma$-QC is at most $max\_core$ and the definition of $\gamma$-QC. Our method Get-Bounds for computing both lower and upper bounds is summarized in Algorithm 5. In particular, as mentioned, Algorithm 5 follows the peeling algorithm, which iteratively removes the vertex with the smallest degree from the current graph while dynamically updating $ub$ and $lb$.
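The peeling-based bound computation can be sketched in Python as follows. This is a simplified illustration, not Algorithm 5 itself: the upper bound follows Lemma 6, and the lower bound is recorded as the by-product described above (the first surviving graph whose minimum degree meets the $\gamma$-QC requirement $\lceil \gamma (s - 1) \rceil$).

```python
import math

def peel_bounds(adj, gamma):
    # Sketch of Get-Bounds: peel minimum-degree vertices, track the maximum
    # core number, and record the gamma-quasi-clique lower bound en route.
    n = len(adj)
    alive = set(range(n))
    deg = {v: len(adj[v]) for v in alive}
    max_core, lb = 0, 0
    while alive:
        s = len(alive)
        # lb: the first (i.e., largest) current graph that is a gamma-QC
        if lb == 0 and s > 1 and min(deg[v] for v in alive) >= math.ceil(gamma * (s - 1)):
            lb = s
        v = min(alive, key=lambda u: deg[u])  # peel a minimum-degree vertex
        max_core = max(max_core, deg[v])      # removal degrees give the degeneracy
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
    ub = 1 + math.ceil(max_core / gamma)      # upper bound from Lemma 6
    return lb, ub
```

On the triangle-plus-pendant graph with $\gamma = 0.5$: the pendant vertex is peeled first, the surviving triangle is a $\gamma$-QC giving $lb = 3$, the maximum core number is 2, and Lemma 6 gives $ub = 1 + \lceil 2 / 0.5 \rceil = 5$.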
The preprocessing stage. After obtaining both lower and upper bounds via Get-Bounds, we show how to utilize these bounds in our preprocessing. Recall that, with $lb$ and $ub$, we can (1) reduce the graph by removing unpromising vertices from $G$ that cannot appear in $g^*$ using $lb$, and (2) set a smaller initial value of $k$ in the function Plex-Search using $ub$. The details are as follows. For (1), we can iteratively remove vertices from $G$ with degrees at most $\lfloor (lb - 1) \cdot \gamma \rfloor$ and their incident edges, since it is clear that no solution of size greater than $lb$ can include these vertices. For (2), we can easily derive from Lemma 7 (whose proof is in Appendix A.2) that, instead of initializing $s_0$ as $|V|$, $s_0$ can be set to an integer larger than $s^*$ while still ensuring our iterative framework produces the correct solution. This allows us to use the obtained upper bound $ub$ to update $s_0$. Additionally, this $ub$ is used to generate the initial value of $k$ for Plex-Search through get-k, while maintaining the correctness of the iterative framework.
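Step (1), the degree-based graph reduction, can be sketched as follows (a simplified illustration; the actual implementation also removes incident edges in place rather than filtering on the fly):

```python
import math

def reduce_graph(adj, lb, gamma):
    # Iteratively drop vertices whose degree (within the surviving graph)
    # is at most floor((lb - 1) * gamma): they cannot belong to any
    # gamma-quasi-clique of size greater than lb.
    alive = set(range(len(adj)))
    thresh = math.floor((lb - 1) * gamma)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if sum(1 for u in adj[v] if u in alive) <= thresh:
                alive.remove(v)
                changed = True
    return alive
```

For example, on the triangle-plus-pendant graph with $lb = 3$ and $\gamma = 0.75$, the threshold is $\lfloor 2 \cdot 0.75 \rfloor = 1$, so only the pendant vertex is pruned and the triangle survives.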
# 4.3 Time Complexity Analysis
As mentioned, in our implementation of Improved-Iter-Search (or specifically, Plex-Search in Algorithm 3), we use two components, i.e., the heuristic and the branch-and-bound search, of the maximum $k$-plex solver kPEX [28]. The time complexity of our exact algorithm IterQC is dominated by Plex-BRB in Algorithm 3, which is invoked at most $n$ times. The time complexities of kPEX are $O^*((k+1)^{\delta+k-s^*})$ and $O^*(\gamma_k^\delta)$, where $O^*$ suppresses polynomial factors, $\gamma_k < 2$ is the largest real root of $x^{k+3} - 2x^{k+2} + x^2 - x + 1 = 0$, and $\delta$ is the degeneracy of the graph. These complexities are derived using the branching methods in [15, 59], which represent the current best time complexities for the maximum $k$-plex problem. In addition, as the sequence $\{s_1, s_2, \ldots, s_p\}$ generated by our improved iterative framework is monotonically decreasing and get-k is a non-decreasing function, each value of $k_i$ is upper bounded by $k_1$. Thus, the time complexity of each iteration depends on solving the maximum $k$-plex problem with $k_1$. Further, with the preprocessing technique, $k_1 = \delta\frac{1-\gamma}{\gamma}$, and we let $k = k_1$. Therefore, the time complexity of IterQC is given by $O^*\left(\min\left\{(\delta+1)^{2\delta-s^*}, \gamma_k^\delta\right\}\right)$. This improves upon (1) the time complexity of FastQC [64], which is $O^*(\sigma_k^{\delta\cdot\Delta})$, where $\Delta$ denotes the maximum degree of vertices in the graph and $\gamma_k < \sigma_k$ for each $k$, and (2) the time complexity of DDA [48], which is $O^*(2^n)$.
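As a sanity check on the first exponent, if we additionally assume $k_1 \le \delta$ (which holds for $k_1 = \delta\frac{1-\gamma}{\gamma}$ whenever $\gamma \ge 1/2$; this assumption is ours, made only for illustration), substituting $k = k_1$ into the first kPEX bound yields

$$
O^*\big((k+1)^{\delta+k-s^*}\big) \subseteq O^*\big((\delta+1)^{\delta+\delta-s^*}\big) = O^*\big((\delta+1)^{2\delta-s^*}\big),
$$

which matches the stated complexity of IterQC; this substitution is our reading of the derivation, not a step spelled out explicitly above.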
# 5 Experiments
We conduct extensive experiments to evaluate the practical performance of our exact algorithm IterQC against other exact methods.
• DDA: the state-of-the-art algorithm [48].
• FastQC: a baseline adapted from the state-of-the-art algorithm for enumerating all maximal $\gamma$-QCs of size at least a given lower bound [64]. We adapt FastQC for our MaxQC problem by tracking the largest maximal $\gamma$-QC seen so far and dynamically updating the lower bound to facilitate better pruning. Note that the largest $\gamma$-QC is the largest one among all maximal $\gamma$-QCs.
Table 1: Statistics of 30 representative graphs.
Setup. All algorithms are implemented in C++, compiled with g++ -O3, and run on a machine with an Intel CPU @ 2.60GHz and 256GB main memory running Ubuntu 20.04.6. We set the time limit to 3 hours (i.e., 10,800s) and use OOT (Out Of Time limit) to indicate that an algorithm exceeds this limit. Our source code can be found at https://github.com/SmartProbiotics/IterQC.
Datasets. We evaluate the algorithms on two graph collections (223 graphs in total) that are widely used in previous studies.
The real-world collection contains 139 real-world graphs from Network Repository with up to $5.87 \times 10^7$ vertices. The 10th DIMACS collection contains 84 graphs from the 10th DIMACS implementation challenge with up to $5.09 \times 10^7$ vertices.
To provide a detailed comparison, we select 30 representative graphs as in Table 1 (where density is computed as $\frac{2m}{n(n-1)}$). Specifically, these representative graphs include 10 small graphs (i.e., G1 to G10) with $n < 10^5$, 10 medium graphs (i.e., G11 to G20) with $10^5 \le n < 10^7$, and 10 large graphs (i.e., G21 to G30) with $n \geq 10^7$.
# 5.1 Comparison with Baselines
Number of solved instances. We first compare our algorithm IterQC with the baselines DDA and FastQC by considering the number of instances solved within the 3-hour and 3-second limits on the two collections in Figure 1. In addition, we present the number of instances solved over time for both collections at $\gamma$ values of 0.65, 0.75, 0.85, and 0.95, in Figures 2 and 3. We have the following observations. First, our algorithm IterQC solves a greater number of instances across different values of $\gamma$, compared to the baselines
[Figure plots: panels (a) $\gamma = 0.65$, (b) $\gamma = 0.75$, (c) $\gamma = 0.85$, (d) $\gamma = 0.95$; x-axis: time (sec); y-axis: number of solved instances; curves for Ours, FastQC, and DDA.]
Figure 1: Number of solved instances with varying $\gamma$ .
Figure 2: Number of solved instances on 10th DIMACS.
Figure 3: Number of solved instances on real-world.
FastQC and DDA. For example, in the real-world dataset with $\gamma = 0 . 7$ (in Figure 1(b)), IterQC solves 135 out of 139 instances, while DDA and FastQC only solve 95 and 76 instances, respectively. Second, in general, as the value of $\gamma$ decreases, IterQC tends to use relatively larger values of $k$ during each iteration, leading to higher computational costs and longer overall running times. As shown in Figures 1(b), 1(c) and 1(d), the number of instances solved by IterQC generally increases with $\gamma$ . However, this trend is less apparent in Figure 1(a). Specifically, as $\gamma$ decreases from 0.8 to 0.7, IterQC solves more instances. This phenomenon may be due to the following factors. IterQC computes a heuristic upper bound $u b$ in the preprocessing stage. The gap between this $u b$ and the optimum solution $s ^ { * }$ is unpredictable for different values of $\gamma$ . A smaller gap may result in a potentially shorter overall running time. Additionally, in our graph reduction process in the preprocessing stage, a smaller $\gamma$ leads to a non-decreasing lower bound $l b$ , which enhances the effectiveness of graph reduction and reduces subsequent
search costs. Moreover, the increased computational complexity associated with smaller values of $\gamma$ primarily arises from the branch-and-bound search process. Intuitively, as $\gamma$ decreases, the relaxation of the clique condition becomes more significant, making it harder to prune branches that could previously be terminated early. The pseudo LB technique accelerates the branch-and-bound approach and helps reduce the increased difficulty introduced by smaller values of $\gamma$. Third, we observe in Figures 2 and 3 that the number of instances that IterQC can solve within 3 seconds exceeds the number solved by the other two baselines within 3 hours. For example, on the 10th DIMACS graphs with $\gamma = 0.75$, IterQC solves 54 instances within 3 seconds, while DDA and FastQC solve 46 and 43 instances within 3 hours, respectively. Moreover, as shown in Figure 3(c), IterQC can complete 105 instances in 1 second, while DDA and FastQC finish 93 and 99 instances within 3 hours, respectively.

Performance on representative instances. The runtime performance comparison between IterQC and the two baseline algorithms with $\gamma = 0.75$ across 30 representative instances is shown in Table 2. As illustrated in the table, IterQC consistently demonstrates superior efficiency, outperforming both baselines FastQC and DDA across nearly all instances. In particular, IterQC can solve all the graph instances, while both baseline algorithms FastQC and DDA exhibit a high occurrence of timeouts, failing to yield solutions within the 3-hour limit. Specifically, FastQC and DDA fail to solve 23 and 16 instances, respectively. Furthermore, IterQC successfully solves 14 out of the 30 representative instances where both baseline algorithms exceed the 3-hour time limit. For example, on G3, IterQC uses only 0.37 seconds, while both baselines cannot complete in 3 hours, achieving at least a 29,000× speed-up.
These results further suggest the efficiency superiority of IterQC over both baseline algorithms. The superior performance of IterQC, particularly compared to FastQC, may be due to the hereditary property of the $k$ -plex, which allows for more efficient pruning during the branch-and-bound search. However, in rare cases, the computational overhead of IterQC exceeds that of DDA, such as in G19, G23, and G26. This is because DDA adopts an iterative approach based on the IP solver CPLEX, which differs fundamentally from the branch-and-bound-based approaches used by IterQC and FastQC. As a general-purpose solver, CPLEX is not specifically optimized for the $\gamma$ -quasi-clique problem and is difficult to tailor for it. Consequently, while DDA may occasionally perform better in specific instances, IterQC generally outperforms DDA in most cases.
Table 2: Runtime performance (in seconds) of IterQC, FastQC, and DDA on 30 instances with $\gamma = 0 . 7 5$ .
Figure 4: Scalability test on G30.
Scalability test. We use G30 for the scalability test, which has the most vertices among the representative graphs. In our experiment, we randomly extract 20% to 100% of the vertices and test the performance of IterQC and the two baselines with $\gamma$ values of 0.65, 0.75, 0.85, and 0.95 in Figure 4. The results demonstrate two findings. First, in almost all cases, IterQC consistently achieves the shortest runtime. Second, across all four values of $\gamma$, as the ratio increases (indicating a larger graph size), the increase in the runtime of IterQC is significantly smaller compared to the other algorithms. For example, in Figure 4(c), when the ratio increases from 0.2 to 0.8, the runtime of DDA rises from 0.52 seconds to 1817.60 seconds, whereas that of IterQC only increases from 0.49 seconds to 8.36 seconds. As for FastQC, at a ratio of 0.2, the runtime is 1086.88 seconds, but when the ratio reaches 0.4, it exceeds the time limit (10,800 seconds). These results demonstrate the scalability of IterQC.
Table 3: Runtime performance (in seconds) of IterQC, IterQC-PP, and IterQC-PLB on 30 instances with $\gamma = 0 . 7 5$ .
# 5.2 Ablation Studies
We conduct ablation studies to evaluate the effectiveness of the techniques of preprocessing and pseudo LB proposed in Section 4. We compare IterQC with the following variants:
• IterQC-PP: it removes the preprocessing technique from IterQC, which includes (1) the initial estimation of lower and upper bounds, and (2) graph reduction. Specifically, IterQC-PP replaces Lines 1-3 in Algorithm 2 with $ub \gets |V|$.
• IterQC-PLB: it removes the pseudo lower bound $pseudo\text{-}lb$ and utilizes the true heuristic lower bound by replacing Line 3 in Algorithm 3 with $pseudo\text{-}lb \gets lb\text{-}plex$.
Table 3 presents the runtime performance of IterQC, IterQC-PP, and IterQC-PLB on 30 representative instances with $\gamma = 0 . 7 5$ .
Effectiveness of the preprocessing technique. From Table 3, we observe that IterQC consistently outperforms IterQC-PP, achieving a speedup factor of at least 5 in 17 instances and at least 10 in 6 instances, with a remarkable speedup factor of 20.32 on G29. We also summarize additional information for our preprocessing technique in Table 4, which details the percentages of vertices and edges pruned during preprocessing, as well as the lower and upper bounds ($lb$ and $ub$) for the optimum solution $s^*$. From Table 4, we observe that in 3 instances (G17, G20, and G29), the preprocessing technique prunes all vertices and edges, effectively obtaining the solution directly, while in 15 instances, it removes at least 90% of the vertices. Moreover, across the 30 instances, the preprocessing technique enables the iteration process to start from a smaller initial value (closer to the optimum solution $s^*$), as indicated by the upper bound $ub$ (in contrast to the trivial upper bound $|V|$ in Table 1).
Effectiveness of the pseudo LB technique. We can see in Table 3 that applying the pseudo LB technique leads to improved performance in 25 of the 30 instances. Moreover, compared to IterQC-PLB, IterQC successfully solves 2 additional OOT instances, i.e., G15 and G18. For G18, IterQC finishes in 341.59 seconds while IterQC-PLB times out, implying a speedup factor of at least 31.62. This improvement is due to the acceleration of the branch-and-bound search by the pseudo LB technique in Algorithm 3, particularly by leveraging the graph structure: dense local regions increase branch-and-bound search costs, leading to greater speedups. Conversely, instances like G4, G5, G7, G9, and G13 show lower effectiveness when the running time is dominated by the computation of the heuristic lower bound in Line 1 of Algorithm 3. Despite this, even in these instances, the impact on running time is minor, with all such instances completing in under 1 second.
Table 4: Preprocessing information with $\gamma = 0.75$, where Red-V/Red-E represent the percentages of reduced vertices/edges.
# 6 Related Work
Maximum $\gamma$ -quasi-clique search problem. The maximum $\gamma$ -quasiclique search problem is NP-hard [46, 48] and W[1]-hard parameterized by several graph parameters [4, 5]. The state-of-the-art exact algorithm for solving this problem is DDA [48] and extensively discussed in Section 1. In contrast, Bhattacharyya and Bandyopadhyay [9] provided a greedy heuristic. Further studies [18, 38] addressed related problems of finding the largest maximal quasicliques that include a given target vertex or vertex set in a graph. Moreover, Marinelli et al. [45] proposed an IP-based method to compute upper bounds for the maximum $\gamma$ -quasi-clique.
Maximal $\gamma$ -quasi-clique enumeration problem. A closely related problem is the enumeration of all maximal $\gamma$ -quasi-cliques in a given graph [37, 40, 64], where a $\gamma { \mathrm { - Q C ~ } } g$ is maximal if no supergraph $g ^ { \prime }$ of $g$ is also a $\gamma { \mathrm { - } } \mathsf { Q C }$ . Several branch-and-bound algorithms have been proposed to tackle this problem by using multiple pruning techniques to reduce the search space during enumeration. In particular, Liu and Wong [40], Guo et al. [30], and Khalil et al. [37] developed such algorithms to improve efficiency. Recently, Yu and Long [64] introduced FastQC, the current state-of-the-art algorithm combining pruning and branching co-design approach. We remark that the maximum $\gamma$ -QC search problem can be solved using algorithms designed for maximal $\gamma$ -QC enumeration, as the maximum $\gamma { \mathrm { - } } \mathsf { Q C }$ is always a maximal one in the graph. In our experimental studies, we adapt the state-of-the-art maximal $\gamma$ -QC enumeration algorithm, FastQC, as the baseline method to solve our problem. Some studies explored different problem variants. For example, Sanei-Mehri et al. [52] focused on the top- $\cdot k$ variant, Guo et al. [31] studied the problem in directed graphs, and others considered graph databases instead of a single graph [34, 68].
Other cohesive subgraph mining problems. Another approach to cohesive subgraph mining involves relaxing the clique definition from the perspective of edges. This approach gives rise to the concept of the edge-based $\gamma$-quasi-clique [1, 19, 49], which is also referred to as pseudo-cliques [58], dense subgraphs [42], or near-cliques [10, 56]. In this cohesive subgraph model, the total number of edges in a subgraph must be at least $\gamma \cdot \binom{n}{2}$ . Very recently, Rahman et al. [51] introduced a novel pruning strategy based on Turán’s theorem [33] to obtain an exact solution, building on the PCE algorithm proposed by Uno [58]. Similar to the $\gamma$ -quasi-clique problem, several studies have also explored non-exact approaches to solve the edge-based $\gamma$-quasi-clique problem [1, 11, 16, 41]. For instance, Tsourakakis et al. [57] proposed an objective function that unifies the concepts of average-degree-based quasi-cliques and edge-based $\gamma$-quasi-cliques. There also exist many other types of cohesive subgraphs, including $k$ -plex [14, 22, 26, 35, 36, 59–61, 70, 71], $k$ -defective clique [12, 21, 27], and densest subgraph [44, 62]. Moreover, the topic of cohesive subgraphs has also been widely studied in other types of graphs, including bipartite graphs [17, 23, 43, 65–67], directed graphs [29], temporal graphs [8], and uncertain graphs [20]. For an overview of cohesive subgraphs, see the excellent books and surveys, e.g., [13, 24, 25, 32, 39].
However, solving the maximum $γ$-quasi-clique problem is NP-hard and further complicated by the lack of the hereditary property, which makes designing efficient pruning strategies challenging. Existing algorithms, such as DDA and FastQC, either struggle with scalability or exhibit significant performance declines for small values of $γ$. In this paper, we propose a novel algorithm, IterQC, which reformulates the maximum $γ$-quasi-clique problem as a series of $k$-plex problems that possess the hereditary property. IterQC introduces a non-trivial iterative framework and incorporates two key optimization techniques: (1) the pseudo lower bound (pseudo LB) technique, which leverages information across iterations to improve the efficiency of branch-and-bound searches, and (2) the preprocessing technique that reduces problem size and unnecessary iterations. Extensive experiments demonstrate that IterQC achieves up to four orders of magnitude speedup and solves significantly more graph instances compared to state-of-the-art algorithms DDA and FastQC. | [
"cs.SI",
"cs.DB"
] |
# 1. INTRODUCTION
In general, any unintentional distortion of signals in audio technology is undesirable. However, the human auditory system is not sensitive to all forms of distortion, and this insensitivity can be leveraged in the engineering of human-facing audio systems. Here, we consider the case of phase distortion [1], which occurs when the phase response of a system is nonlinear [2], distorting the phase relationships between frequency components in a signal [3]. A distortionless system’s discrete-time impulse response can be defined as:
$$
h [ n ] = K \delta [ n - \tau ]
$$
Here, $\delta [ n ]$ is the impulse function, $K > 0$ is the gain parameter corresponding to a constant magnitude response, and $\tau \geq 0$ is the time-delay constant corresponding to a linear phase response with a slope of $- \tau$ . If the system’s phase response is not linear, phase distortion occurs, which can lead to noticeable changes in perceived timbre [4]–[9]. However, it has long been observed that phase distortion can be imperceptible in some cases [5], [10].
Though human hearing is binaural, and inter-aural phase differences are crucial for spatial awareness and sound localization [11], we limit our discussion to monaural phase effects, where identical signals arrive at both ears. Monaural perceptual studies have shown that altering the relative phase between sinusoids affects perceived timbre—whether in simple two-tone signals [7], [8] or in the harmonics of complex sounds [4]. The monaural perceptibility of phase distortion in all-pass filters [5], [12], anti-alias filters [6], and speech enhancement systems [13] has also been studied. However, distortion caused by a constant phase shift across all frequencies has not been extensively studied. In this paper, we present evidence that the special case of phase-intercept distortion is not perceptible in real-world sounds, and show how this fact can be leveraged for data augmentation in audio-based machine learning applications.
# 1.1. Phase-intercept Distortion
For a single sinusoidal tone, a constant phase shift is defined as:
$$
x ( t ) = \sin ( \omega _ { 0 } t + \phi ) = \frac { 1 } { 2 i } ( e ^ { i \omega _ { 0 } t } e ^ { i \phi } - e ^ { - i \omega _ { 0 } t } e ^ { - i \phi } )
$$
Note that we must add the positive phase to the positive frequencies and the negative of that phase to the negative frequencies. This can be verified by taking a single sine tone and examining the effect of a phase shift in its frequency domain.
$$
\hat { x } ( \omega ) = \frac { 1 } { 2 i } [ \delta ( \omega - \omega _ { 0 } ) e ^ { i \phi } - \delta ( \omega + \omega _ { 0 } ) e ^ { - i \phi } ]
$$
This operation is called the frequency-independent phase shift, and can be performed using the signum function, defined as:
$$
\operatorname{sgn}(\omega) = \begin{cases} 1 & \omega > 0 \\ 0 & \omega = 0 \\ -1 & \omega < 0 \end{cases}
$$
The transfer function of a frequency-independent phase shift of $\theta$ can then be defined as:
$$
|H(\omega)| = 1; \quad \Phi(\omega) = \theta \cdot \operatorname{sgn}(\omega)
$$
The phase response of this operation is piecewise constant and is nonlinear. Thus, frequency-independent phase shifting creates distortion and is called phase-intercept distortion, a special case of phase distortion. This operation results in a group delay of zero, but a nonlinear phase delay for all non-zero frequencies.
Let the Fourier transform be denoted as $\mathcal { F }$ . Applying this operation on an input signal $x ( t )$ with its Fourier Transform $\mathcal { F } ( x ) = \hat { x } ( \omega )$ will result in a rotated signal:
$$
x_{\theta}(t) = \mathcal{F}^{-1}\big( \hat{x}(\omega) \, e^{i \theta \cdot \operatorname{sgn}(\omega)} \big)
$$
A popular example of a frequency-independent phase-shift operation is the Hilbert transform. The Hilbert transform introduces a phase shift of $-90^{\circ}$ for the positive frequencies and $90^{\circ}$ for the negative frequencies, without altering the signal’s magnitude spectrum. Let $\mathcal{H}$ denote the Hilbert transform and consider a signal $x(t)$ with its Fourier transform $\hat{x}(\omega)$ ; the Hilbert transform modifies the complex spectral content as follows:
$$
\mathcal{F}(\mathcal{H}(x)) = -i \, \hat{x}(\omega) \cdot \operatorname{sgn}(\omega)
$$
A useful property of the Hilbert transform of a real-valued signal $x(t) : \mathbb{R} \to \mathbb{R}$ is that it is orthogonal to the original signal, as can be proved by showing that the inner product of $x(t)$ with $\mathcal{H}(x)(t)$ is equal to zero. This property makes it fundamental in applications such as analytic signal representation [14], [15], envelope detection [16], phase retrieval [17], single-sideband (SSB) modulation, and instantaneous frequency estimation [18].
We can use the Hilbert transform to create an analytic signal representation of the original signal [15] as follows:
$$
x _ { a } ( t ) = x ( t ) + i \mathcal { H } ( x ) ( t )
$$
The frequency-independent phase shift operation can be achieved by directly applying a rotation of $\theta$ to the analytic signal and taking its real part:
$$
x _ { \theta } ( t ) = R e [ x _ { a } ( t ) e ^ { i \theta } ]
$$
Fig. 1: Illustration of the effect of the Hilbert transform ( $\theta = \pm 90^{\circ}$ ) on a square wave.
This can be shown by expanding (4) and rearranging the terms. In summary, phase-intercept distortion can be modeled by constructing the analytic representation of the signal using the Hilbert transform, rotating it, and extracting the real part of this complex signal.
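The frequency-domain construction above can be sketched in a few lines of Python. This is not the authors' code; it is a minimal FFT-based sketch that multiplies positive-frequency bins by $e^{i\theta}$ and negative-frequency bins by $e^{-i\theta}$, and it glosses over Nyquist-bin subtleties for even-length signals.

```python
import numpy as np

def phase_intercept_distort(x, theta):
    """Frequency-independent phase shift: multiply positive frequencies
    by e^{i*theta} and negative frequencies by e^{-i*theta}."""
    X = np.fft.fft(x)
    # np.sign of the FFT bin frequencies plays the role of sgn(omega);
    # the DC bin (sign 0) is left untouched.
    shift = np.exp(1j * theta * np.sign(np.fft.fftfreq(len(x))))
    return np.real(np.fft.ifft(X * shift))

# theta = pi flips the polarity of a zero-mean signal, while the
# magnitude spectrum is left untouched.
t = np.arange(1024)
x = np.sin(2 * np.pi * 5 * t / 1024)
y = phase_intercept_distort(x, np.pi)
```

As a sanity check, a shift of $\theta = \pi/2$ yields (minus) the Hilbert transform of the input, which is orthogonal to the original signal, consistent with the property noted above.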
# 1.2. Perception of Phase-intercept Distortion
The perception of phase-intercept distortion has not been explored as much as other common forms of phase distortion. One specific case that has been studied is polarity reversal—corresponding to a phase shift where $\theta = \pm 180^{\circ}$ in (7). Though most researchers (and audio engineers) assume that polarity reversal is inaudible, which is consistent with experimental evidence [19], there have also been some claims that it is audible [7], [8]. The Wood Effect [8] is one dramatic edge case where polarity reversal creates a minute difference in timbre for lower frequencies. This effect occurs when inverting a sine wave which has been clipped on only one half of each cycle; the resulting polarity-inverted signals sound slightly different, showing that our ears do treat the clipped portion differently when it is presented as a compression or as a rarefaction. No other cases of perceivable phase inversion have been reported.
Phase-intercept distortion can have a large impact on an audio waveform (Fig. 1). However, human hearing relies on both time-domain and frequency-domain characteristics of sound [7], and visible alterations to the waveform shape may not necessarily correlate with changes to the perceived sound. To our knowledge, the Wood Effect is the only case of perceptible phase-intercept distortion which has been reported in the literature. Research on the perception of frequency-independent phase shifts at arbitrary angles (i.e., besides $\theta = \pm 180^{\circ}$ ) has not been reported. In informal testing, we observed that arbitrary-angle phase-intercept distortion does not seem to be perceptible in real-world audio examples. We thus hypothesized that frequency-independent phase shifting is imperceptible for general audio signals. In the next section, we report the results of an experiment to test this hypothesis by measuring human participants’ ability to detect phase-intercept distortion in ecologically-valid sounds.
# 2. EXPERIMENT
# 2.1. Participants
Forty-six participants were recruited via different professional and university mailing lists. Participants were only required to be physically present in the United States, be at least 18 years old, and have no history of hearing loss (screened through self-report). Only 25 of our 46 participants completed $100\%$ of the survey, and we discarded data from one participant who reported a hearing disability, leaving complete data from 24 participants.
We anticipated that individual differences between participants (age, gender, music background, etc.) might affect their ability to distinguish phase-intercept distortion. Thus, through a pre-questionnaire, we collected information about the participants’ age, gender, and experience with music, musical instruments, music production, mixing engineering, and high-fidelity listening habits. Twenty-one participants identified as male (mean age: 29.3 years) and the rest as female (mean age: 38.3 years); six of the participants were above the age of 35. Fourteen participants identified as musicians. In the final analysis, none of these factors seem to affect participants’ performance.
# 2.2. Stimuli
For stimuli, we gathered audio recordings of a variety of music, speech, and general sounds (e.g., traffic or machine sounds) from several existing datasets. We then manipulated these recordings by applying phase-intercept distortion as needed. We sampled ten recordings each of music, speech, and real-world sounds, for a total of thirty. The majority of the samples were taken from AudioSet [20], one of the largest available collections of music and speech audio samples.
Our ten music samples were a mix of monophonic and polyphonic recordings of instruments or a combination of instruments such as guitar, bass, drums, etc. These recordings were randomly sampled from a popular source separation dataset, the MUSDB18-HQ dataset [21]— both mixes and isolated stems—with a few more from the AudioSet. Phase distortion generally “smears” the signal [7], so its effect will be most prominent on percussive sounds and transients [22]. Thus, three purely percussive recordings were included along with seven music recordings that also contain percussion in the mix. To further ensure that attacks are not “smeared” off, tonal percussion was also included by randomly sampling pure percussion samples from multi-tracks in the Saraga [23] and Sanidha [24] datasets, which include clean, isolated recordings of tonal percussion like the mridangam.
Our ten speech samples were drawn from two sources: seven were sampled randomly from the AudioSet and three were sampled from the Librispeech [25] dataset. We included Librispeech samples because they are clean and isolated, contrasting with the speech recordings from AudioSet, which include chatter, chants, generic conversations, etc. The ten other samples were randomly chosen from the remaining sounds in AudioSet.
All stimuli recordings used 32-bit depth audio, with sample rates varying depending on the original recording source: recordings from Librispeech use a sample rate of $16~\mathrm{kHz}$ ; recordings from AudioSet use either a $44.1~\mathrm{kHz}$ or $48~\mathrm{kHz}$ sampling rate; the rest of the recordings use a $44.1~\mathrm{kHz}$ sampling rate.
# 2.3. Audio Editing and Manipulation
To make experimental stimuli short enough for participants to easily hold them, and compare them, in working memory, a three-second excerpt was extracted from each recording. Many of the recordings have silent moments, so to prevent sampling silence, we repeatedly sampled a random starting point from $[ 0 , l - 3 ]$ (where $l$ is the length of the recording in seconds) until a non-empty three-second audio clip was obtained. “Non-emptiness” was defined by the $l _ { 2 }$ -norm of the signal exceeding a chosen threshold value.
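The rejection-sampling loop described above might look like the following stdlib-only sketch; the function name, threshold value, and retry cap are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sample_excerpt(signal, sr, dur=3.0, threshold=1.0, max_tries=100):
    """Draw random `dur`-second excerpts until one is non-silent,
    i.e., its l2-norm exceeds `threshold`."""
    n = int(dur * sr)
    for _ in range(max_tries):
        start = random.randint(0, len(signal) - n)
        clip = signal[start:start + n]
        if math.sqrt(sum(s * s for s in clip)) > threshold:
            return clip
    raise RuntimeError("no non-silent excerpt found")
```

In the paper the starting point is drawn from $[0, l-3]$ in seconds; the sketch works directly in samples, which is equivalent up to rounding.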
To obtain the phase-intercept distorted stimuli, we randomly sampled thirty $\theta$ values from a uniform distribution, ranging from $- \pi$ to $\pi$ .
$$
\theta \sim U n i f ( - \pi , \pi )
$$
We computed the Hilbert transform of each of the thirty stimuli to construct the analytic signals using (6). To finally apply phase-intercept distortion, the sampled $\theta$ values were then inputted into (7).
Phase-intercept distortion manipulation will inevitably result in the start and end samples having different values, which may lead to audible artifacts and is not relevant to our hypothesis. We thus applied a trapezoidal fade in/out to each stimulus, with a 0.1 second linear taper at both ends. To prevent clipping and ensure consistent amplitude between trials, we equally normalized the signal level for the pair of stimuli (distorted and unaltered) across all questions.
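The trapezoidal envelope described here is simply a linear taper at each end with unity gain in the middle; a stdlib sketch (the helper name is illustrative):

```python
def apply_fade(clip, sr, fade_dur=0.1):
    """Apply a trapezoidal envelope: a `fade_dur`-second linear taper
    at each end, unity gain in the middle."""
    n_fade = int(fade_dur * sr)
    out = list(clip)
    for i in range(n_fade):
        gain = i / n_fade
        out[i] *= gain        # fade-in
        out[-1 - i] *= gain   # fade-out
    return out
```

Applying the same taper to both members of a stimulus pair keeps the comparison fair, since any boundary artifact is removed identically from the distorted and unaltered versions.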
Fig. 2: Illustration of the experimental design.
Table 1: Perceptual Test Results
# 2.4. Experiment Design
We employed a two-alternative forced-choice (A/B) design, similar to [5]. The benefit of this design is that even if the subjects hear no difference, they must still guess, in which case their expected accuracy would be $5 0 \%$ . The experiment employed a within-subjects design, as all participants were exposed to all the stimuli; the order of stimuli was randomized for each participant.
# 2.5. Procedure
Participants accessed the experiment via a web interface, created on Qualtrics, and were instructed to use high-quality headphones or speakers (e.g., studio monitors) to minimize information loss caused by poor audio equipment. Each participant completed thirty trials: ten music, ten speech, and ten other real-world sounds, with the total duration ranging from 10 to 15 minutes.
In each trial, participants were presented with a reference stimulus and two comparison stimuli, labeled A and B; the original signal was randomly assigned as option A or B with $50\%$ probability, and the other option was the phase-intercept distorted version of the original signal. The participant was then required to choose which signal (A or B) was “identical” to the original signal. The experimental procedure is summarized in Fig. 2. After the main experiment concluded, participants were asked to fill out a post-questionnaire to share their experience, in particular, any strategies they adopted during the main task. Many participants commented that the pairs all sounded the same.
# 3. RESULTS
For each trial, the participant’s response (A or B) can be coded as correct or incorrect. If our hypothesis is true, participants’ true success rate should be approximately $50\%$ . On average, our participants selected correctly in $49.44\%$ of trials (Table 1). A one-sample t-test was performed to check whether this accuracy was significantly different from the chance level of $50\%$ , and the result was not significant. However, a non-significant test result might arise from a small sample size, especially if the true effect is small. We thus focus on a Bayesian statistical analysis, which is more appropriate here.
Let $q$ be the true probability that a participant will correctly identify the unaltered recording from a pair of stimuli. Our hypothesis is that $q = 0.5$ , corresponding to random binary guessing. Consider a naive flat prior distribution for $q \sim \mathrm{Beta}(\alpha = 1, \beta = 1)$ , corresponding to a neutral prior belief about the true value of $q$ . We can model each experimental trial as a single Bernoulli sample, where the outcome is either 0 (wrong pick) or 1 (correct pick), and the probability of a correct pick is $q$ . The beta distribution (our prior distribution) is the conjugate prior for the Bernoulli likelihood. That means the Bayesian posterior distribution after observing $N$ Bernoulli samples is also beta distributed, with $\alpha_{posterior} = \alpha_{prior} + N_{successes}$ and $\beta_{posterior} = \beta_{prior} + N_{failures}$ .
Fig. 3: Distribution of response accuracy of 24 participants.
Fig. 4: Posterior distribution of $q$ , assuming flat prior distribution.
Out of the 720 total questions answered by the participants, they successfully selected the undistorted stimuli 356 times and failed 364 times. Thus, the Bayesian posterior distribution for $q$ , given the naive prior distribution we started with, is $q_{posterior} \sim \mathrm{Beta}(\alpha = 1 + 356, \beta = 1 + 364)$ (Fig. 4). As can be seen in Fig. 4, even if we begin with a flat prior belief about $q$ , the data from this experiment leads to a posterior belief about $q$ that is tightly centered around $q = 0.5$ , with $95\%$ of this posterior distribution falling between 0.458 and 0.531. These results are consistent with the experimental hypothesis that phase-intercept distortion has no perceptible effects.
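As a check on these numbers, the conjugate update can be computed directly with the standard library. The sketch below uses a normal approximation ($\mu \pm 1.96\sigma$) to the exact Beta quantiles for the $95\%$ credible interval, which is accurate at these counts; the helper name is illustrative.

```python
import math

def beta_posterior(successes, failures, a_prior=1, b_prior=1):
    """Conjugate Beta-Bernoulli update; the 95% credible interval uses
    a normal approximation to the Beta (accurate for large counts)."""
    a = a_prior + successes
    b = b_prior + failures
    mean = a / (a + b)
    std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, (mean - 1.96 * std, mean + 1.96 * std)

# Counts reported above: 356 correct and 364 incorrect answers.
mean, (lo, hi) = beta_posterior(356, 364)
```

This reproduces the interval stated in the text: the posterior mass is tightly centered near $q = 0.5$, with roughly $95\%$ between 0.458 and 0.531.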
Table 1 reports the results for each of the categories: music, speech and other. The median results are $5 0 \%$ for all categories except speech, which has a median of $45 \%$ . The mean scores are similar for music and other, while the mean score of speech is worse than $5 0 \%$ .
To ensure that participants’ fatiguing over the course of the survey was not a confound, Fig. 5 plots the question-wise scores, with a fitted trend line. This least-squares regression line has a negligible slope of $- 0 . 0 0 2$ . This suggests that fatigue was not a confounding factor in our experiment.
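The fatigue check is a plain least-squares slope of per-question accuracy against question index; for completeness, a stdlib-only sketch (the function name is illustrative):

```python
def ls_slope(scores):
    """Least-squares slope of per-question mean accuracy vs. question index."""
    n = len(scores)
    mx = (n - 1) / 2                      # mean of indices 0..n-1
    my = sum(scores) / n
    num = sum((i - mx) * (y - my) for i, y in enumerate(scores))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den
```

A slope near zero, as reported ($-0.002$), indicates no systematic drift in accuracy over the course of the survey.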
# 4. DATA AUGMENTATION EXPERIMENT
Data augmentation is the process of increasing the effective amount of data inputted into a machine learning (ML) model by applying suitable transformations to the data to achieve better results [26]–[29]. It helps to counter overfitting, a common issue in ML where the model learns the training data too precisely and fails to generalize.
Fig. 5: Question Scores
Commonly used data augmentation techniques for audio ML tasks include pitch shifting, time stretching, speed perturbation, noise addition, harmonic distortion, random time-frequency masking, and amplitude gain [30]–[32]. The effect of image-based data augmentation styles on magnitude spectrograms has also been researched [30]. These strategies have improved results for various audio-based tasks.
We propose using phase-intercept distortion as a data augmentation technique for music and speech ML tasks, such as classification, source separation, and generation. This form of data augmentation is a unique way to modify a signal without altering the transients, the pitch, and time characteristics of the signal. Data augmentation can be performed by randomly sampling a $\theta \in [ - \pi , \pi )$ and modifying the signal using (7). Since phase-intercept distortion does not affect the magnitude spectrogram (as shown in (3)), this augmentation is not applicable to models using magnitude spectrograms as inputs—it is only applicable to those using time-domain signals or complex spectrograms.
To truly test whether the benefits of data augmentation stem from the diversity it provides and not the quantity of data, we designed our experiments with data augmentation added “on the fly” during training. This means that each time a training sample is inputted into the model, it is randomly transformed in real time. This ensures variation without increasing the dataset size, and isolates the impact of diversity provided by data augmentation.
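An on-the-fly augmentation wrapper of the kind described might look as follows. This is a sketch, not the authors' training code; the phase-shift helper mirrors the FFT construction from Section 1.1, and the generator names are illustrative.

```python
import numpy as np

def augment_phase_intercept(x, rng):
    """Apply a random phase-intercept distortion: theta ~ Unif[-pi, pi)."""
    theta = rng.uniform(-np.pi, np.pi)
    shift = np.exp(1j * theta * np.sign(np.fft.fftfreq(len(x))))
    return np.real(np.fft.ifft(np.fft.fft(x) * shift))

def training_batches(samples, rng, epochs):
    """Yield each sample once per epoch with a fresh random phase shift,
    so variation is added without growing the dataset."""
    for _ in range(epochs):
        for x in samples:
            yield augment_phase_intercept(x, rng)
```

Because the shift has unit magnitude at every frequency, each augmented copy keeps the original magnitude spectrum while presenting the model with a different waveform.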
We performed experiments on two tasks: audio classification and source separation. We used models that take time-domain audio signals as input for these tasks.
# 4.1. Audio Classification
Audio classification is the task of assigning audio signals to corresponding classes based on their characteristics. This process typically involves an initial stage of feature extraction from the raw audio. Some typical features which are extracted include spectrograms, melspectrograms, and mel-frequency cepstral coefficients or MFCCs. These features are used as inputs to these classification models that learn to map these features to the audio classes.
Many audio-classification datasets have been published; we chose to use the SC09 dataset, a subset of the Speech Commands dataset [33], which includes spoken digits from zero to nine from various different speakers.
Choosing models to test was more difficult, because few existing audio classification models use time-domain signals as a direct input. We thus designed a new model for this experiment, taking inspiration from wav2vec2.0 [34]. We chose the exact same convolution layers as the initial layers of wav2vec2.0 to encode the audio information, and used a two-layer bidirectional LSTM [35] on these encodings. Batch normalization [36] was performed between each layer, with a dropout probability of 0.05. Finally, the outputs from both directions were concatenated and passed through a fully connected layer for the prediction.
Table 2: Phase intercept distortion as data augmentation for classification
Table 3: Phase intercept distortion as data augmentation for source separation
Five different seed values were chosen randomly for the experiment. The mean and standard deviation of the validation and test accuracies are reported in Table 2. The validation accuracy has clearly improved when phase-intercept distortion is used as a data augmentation technique for this model. However, the test accuracy is only slightly better.
# 4.2. Blind Source Separation
Blind source separation (BSS) is defined as the task of separating the individual sources when only provided with a single mixture audio signal. Supervised approaches have been the go-to strategy due to their strong performance, which is achieved by learning the mapping between mixtures and separated components. There are three primary ways to solve this: (i) waveform prediction [37]–[39], (ii) spectrogram prediction [40], [41], and (iii) hybrid methods, which combine elements of both (i) and (ii) [42], [43].
We chose a popular waveform prediction-based model called the Wave-U-Net [37], which works directly on source image waveforms. The details of the metrics are described in [44]. Table 3 reports the median results of this experiment in decibels (dB). SIR, SAR and ISR are computed using the bss_eval toolkit [45]. For the individual examples we compute the median value of all the frames. Signal to Distortion Ratio (SDR) and Scale Invariant Signal to Distortion Ratio (SI-SDR) are calculated as described in [46]. Phase intercept distortion as a data augmentation technique for Wave-U-Net on MUSDB18-HQ has shown improvement in all the metrics, except SAR. | Phase distortion refers to the alteration of the phase relationships between frequencies in a signal, which can be perceptible. In this paper, we discuss a special case of phase distortion known as phase-intercept distortion, which is created by a frequency-independent phase shift. We hypothesize that, though this form of distortion changes a signal's waveform significantly, the distortion is imperceptible. Human-subject experiment results are reported which are consistent with this hypothesis. Furthermore, we discuss how the imperceptibility of phase-intercept distortion can be useful for machine learning, specifically for data augmentation. We conducted multiple experiments using phase-intercept distortion as a novel approach to data augmentation, and obtained improved results for audio machine learning tasks. | [
"eess.SP",
"cs.LG",
"eess.AS"
] |
# 1 Introduction
As large language generators [1, 2] surpass average human performance, their interaction with humans becomes increasingly prevalent. However, as they tend to generate incorrect, or so-called hallucinated, information, the community’s concerns about their misalignment have significantly increased. One simple but effective approach to mitigating hallucination is selective generation: selectively abstaining from generation if the generator is unsure of its response [3, 4, 5, 6]. By abstaining, the selective generator can maintain high precision, or equivalently a low false discovery rate (FDR), i.e., whenever it provides answers, they are mostly correct. Interestingly, a few selective prediction and generation methods [3, 4, 6] provide theoretical guarantees on the controllability of the FDR to ensure the trustworthiness of generators.
However, conventional selective prediction methods [3, 4, 6] are designed under limited stochastic assumptions, i.e., that data are independently drawn from a fixed distribution, which undermines their applicability to real-world applications in adversarial and distribution-shifted environments. Moreover, these methods require full feedback (e.g., a true answer is given) instead of partial feedback (e.g., the
Fig. 1: Qualitative comparison of question answering with and without online selective generation; abstaining on uncertain questions keeps the FDR below the target level $\omega$, while always answering may yield $\mathrm{FDR} > \omega$.
correctness of a generated answer), where partial feedback is more practical and easier to obtain (e.g., thumbs-up buttons in dialog systems).
To mitigate these limitations, we propose a novel online selective generation algorithm to control a false discovery rate (FDR), while maximizing selection efficiency (i.e., the ratio of non-abstaining cases), with partial feedback. First, we address the online learning problem under partial feedback by leveraging profound achievements from online learning and multi-armed bandits. To this end, we reduce selective generation to bandits, exploit any regret minimization algorithms with regret bounds, and then translate the regret bounds back into FDR bounds for selective generation by introducing a novel Regret-to-FDR conversion lemma. This enables us to use any regret minimization algorithms for selective generation with FDR controllability guarantees.
Second, we design a regret minimization algorithm tailored for selective generation to fully exploit feedback information. In particular, partial feedback is practical, but the amount of new information is clearly insufficient compared to full feedback, suffering from sample inefficiency and slower convergence speed. To address this technical challenge, we exploit a unique structure of selective generators to unlock additional feedback from observed partial feedback, dubbed partial feedback unlocking. By exploiting this, we extend the $\mathtt{Exp3}$ algorithm [7] for adversarial bandits to online Selective generation with partial feedback UnLocking (ExSUL). Moreover, we provide the $\mathcal{O}(\ell_{\mathrm{max}} \sqrt{T \ln |\mathcal{H}|})$ regret bound of $\mathtt{ExSUL}$, which is better than a regret minimizer with partial feedback, i.e., $\mathcal{O}(\ell_{\mathrm{max}} \sqrt{T |\mathcal{H}| \ln |\mathcal{H}|})$, and comparable to a regret minimizer with full feedback, i.e., $\mathcal{O}(\ell_{\mathrm{max}} \sqrt{T \ln |\mathcal{H}|})$, where $T$ is a time horizon and $\mathcal{H}$ is a set of hypotheses of selective generators. See Figure 1 for qualitative results of our method that controls an FDR in an interactive environment.
We empirically evaluate the efficacy of the proposed online selective generation algorithm ExSUL. In particular, we consider two tasks (question answering over TriviaQA [8] and Natural Questions [9], and dialog conversations via two dialog agents), three learning environments (including stochastic, distribution-shifted, and interactive environments), and two language models (including GPT-3.5-turbo [1] and LLaMA3.1 [2]) to show that our method (1) controls a desired FDR and (2) maintains reasonably low selection inefficiency, i.e., the ratio of abstention. Additionally, our learner’s FDR convergence speed is clearly better than a regret minimizer with partial feedback and comparable to a regret minimizer with full feedback (which forms a performance upper bound).
# 1.1 Related Work
Here, we introduce tightly related literature, including selective prediction, online learning, and bandit problems. See Appendix A for additional related work.
Selective Prediction. Selective prediction is a method that abstains from making a prediction to control the FDR in a certified manner. [3] proposes selective classification that uses a threshold for a scoring function to abstain from uncertain predictions. [4] extends this to exploit the hierarchical structure of labels for better efficiency. [5] extends this idea to generation tasks but mainly focuses on the decontextualization task without the FDR guarantee. [6] further generalizes selective prediction to generation tasks by introducing the concept of an entailment set, which addresses the challenge of open-ended question-answering. In particular, they propose a semi-supervised method to leverage unlabeled entailment data, providing a theoretical guarantee for controlling the FDR at a desired level. Previous work focuses on a batch learning setup under a stochastic assumption, which may be fragile in real-world applications. In contrast to this, we consider a non-stochastic setup, allowing distribution shift, to propose online selective generation methods.
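The core abstention mechanism of threshold-based selective prediction [3] can be sketched generically as follows; the function names and the toy scorer are purely illustrative, with `None` marking abstention.

```python
def selective_predict(x, model, score, tau):
    """Return the model's prediction only when the confidence score
    clears the threshold tau; otherwise abstain (return None)."""
    y = model(x)
    return y if score(x, y) >= tau else None

# Toy example with a hypothetical scorer that trusts short answers more.
model = lambda x: x.upper()
score = lambda x, y: 1.0 / len(y)
```

Raising `tau` makes the predictor abstain more often, trading selection efficiency for a lower FDR; choosing `tau` with a guarantee is exactly what the certified methods above provide.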
Online Learning and Bandit Problems. Sequential prediction [10, 11, 12, 13] designs learners that adapt to sequentially arriving data. Online learning, e.g., exponential weighting [14], mostly assumes adversarial environments where data can be arbitrarily, possibly adaptively, manipulated by adversaries. In this environment, a learner receives full feedback on predictions. In contrast, bandit problems, including multi-armed bandits [15] and adversarial bandits [7], assume sequentially arriving partial feedback, i.e., a learner receives feedback only on the chosen arm at each time step. To leverage partial feedback efficiently, structured bandits [16] exploit the functional structure of the arm-to-loss mapping, and semi-bandits [12] allow choosing a fixed-size set of arms for better learning efficiency. We exploit adversarial bandits to mitigate the stochastic assumptions of traditional selective prediction and to consider practical learning under partial feedback. Moreover, we leverage a specific structure between arms and losses in selective generation for better learning efficiency.
# 2 Preliminary
We introduce preliminaries on language generation, selective prediction, and adversarial bandits. See Appendix B for online learning. To this end, let $\mathcal{X}$ be a set of inputs (e.g., examples or questions) and $\mathcal{Y}$ be a set of outputs (e.g., labels or answers).
# 2.1 Language Generation
We mainly consider language generators as our model to control hallucination. In particular, let $G : \mathcal{X} \to \mathcal{Y}$ be a language generator, where $\mathcal{W}$ is a set of tokens and $\mathcal{X} = \mathcal{Y} := \cup_{i=0}^{\infty} \mathcal{W}^{i}$. Here, each $i$-th token $\hat{\mathbf{y}}_i$ of a generated answer $\hat{\mathbf{y}} \in \mathcal{Y}$ is decoded from an underlying probability distribution $p(\mathbf{y} \mid \mathbf{x})$, where $p$ is usually learned from language data. For the decoding strategy, we consider greedy decoding, i.e., $\hat{\mathbf{y}}_i = \arg\max_{w \in \mathcal{W}} p(w \mid \mathbf{x}, \hat{\mathbf{y}}_{1:i-1})$, where $\mathbf{y}_{a:b} := (\mathbf{y}_a, \ldots, \mathbf{y}_b)$. Given a generated answer $\hat{\mathbf{y}}$, there are multiple ways to measure its likelihood of correctness $f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$. A standard way $f_{\mathrm{std}}$ considers a length-normalized token probability, i.e., $f_{\mathrm{std}}(\mathbf{x}, \hat{\mathbf{y}}) := \big( p(\hat{\mathbf{y}}_1 \mid \mathbf{x}) \prod_{i=2}^{|\hat{\mathbf{y}}|} p(\hat{\mathbf{y}}_i \mid \mathbf{x}, \hat{\mathbf{y}}_{1:i-1}) \big)^{1/|\hat{\mathbf{y}}|}$.
A better alternative is to consider the consistency of $\hat{\mathbf{y}}$ with multiple answers generated via sampling [17], i.e., $f_{\mathrm{con}}(\mathbf{x}, \hat{\mathbf{y}}) := \hat{\mathbb{E}}_{\mathbf{y}' \sim G(\mathbf{x})} f_E(\mathbf{y}', \hat{\mathbf{y}})$, called a consistency score, where we use sampling for decoding $\mathbf{y}'$ and $f_E(\mathbf{y}', \hat{\mathbf{y}})$ measures an entailment score of $\hat{\mathbf{y}}$ given $\mathbf{y}'$ (i.e., whether $\mathbf{y}'$ entails $\hat{\mathbf{y}}$) via an entailment model (e.g., GPT-3.5-turbo).
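To make the two scores concrete, here is a minimal Python sketch; the per-token probabilities and entailment judgments are hypothetical stand-ins for outputs of the generator and the entailment model:

```python
import math

def f_std(token_probs):
    """Length-normalized token probability: the geometric mean of
    p(y_i | x, y_{1:i-1}) over the tokens of the greedy answer."""
    return math.prod(token_probs) ** (1.0 / len(token_probs))

def f_con(entail_scores):
    """Consistency score: the empirical mean of entailment scores
    f_E(y', y_hat) over sampled answers y' ~ G(x)."""
    return sum(entail_scores) / len(entail_scores)

# Toy inputs: a 3-token answer and 4 sampled entailment judgments.
print(f_std([0.9, 0.8, 0.7]))       # geometric mean of the token probabilities
print(f_con([1.0, 1.0, 0.0, 1.0]))  # 0.75
```

The geometric mean in `f_std` removes the bias toward short answers that a raw product of token probabilities would have.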
# 2.2 Selective Prediction
Selective prediction [3] and selective generation [6] provide certified control over the risk of incorrect predictions by abstaining from answering (saying IDK) when the prediction is uncertain. Given a predictor $\hat{\mathbf{y}} : \mathcal{X} \to \mathcal{Y}$, a selective predictor $\hat{S} : \mathcal{X} \to \mathcal{Y} \cup \{\mathrm{IDK}\}$ abstains from returning $\hat{\mathbf{y}}(\mathbf{x})$ if a selection function $\hat{s} : \mathcal{X} \times \mathcal{Y} \to \{0, 1\}$ deems the prediction uncertain, i.e., $\hat{S}(\mathbf{x}) := \left\{ \begin{array}{ll} \hat{\mathbf{y}}(\mathbf{x}) & \mathrm{if}~\hat{s}(\mathbf{x}, \hat{\mathbf{y}}(\mathbf{x})) = 1 \\ \mathrm{IDK} & \mathrm{otherwise} \end{array} \right.$ In learning, the selection function $\hat{s}$ is chosen to satisfy a desired level of a false discovery rate, i.e., $\mathbb{P}(\hat{S}(\mathbf{x}) \neq_E \mathbf{y} \mid \hat{S}(\mathbf{x}) \neq \mathtt{IDK})$ over i.i.d. samples of $(\mathbf{x}, \mathbf{y})$. The relation $\neq_E$ is usually the negation of standard exact matching in selective classification [3], which is extended to capture semantic correctness via entailment in selective generation [6]. In contrast to the standard stochastic setup in the previous literature, we consider online learning under partial feedback.
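The threshold-style selective predictor described above can be sketched as follows; `predict` and `score` are hypothetical stand-ins for $\hat{\mathbf{y}}$ and the scoring function $f$:

```python
def make_selective_predictor(predict, score, tau):
    """Threshold-based selective predictor: return predict(x) when the
    confidence score clears tau, otherwise abstain with "IDK"."""
    def S_hat(x):
        y = predict(x)
        return y if score(x, y) >= tau else "IDK"
    return S_hat

# Toy example: the predictor echoes the input; scores come from a lookup.
scores = {"easy question": 0.9, "hard question": 0.2}
S = make_selective_predictor(lambda x: x.upper(),
                             lambda x, y: scores[x],
                             tau=0.5)
print(S("easy question"))  # answers
print(S("hard question"))  # abstains with "IDK"
```

Raising `tau` lowers the false discovery rate at the cost of more abstentions, which is exactly the trade-off the selection function is tuned for.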
# 2.3 Regret Minimization
In sequential prediction, the main goal is to find a learner whose performance is as good as the best learner in hindsight. As a goodness metric, we use the widely accepted expected regret, simply called regret in this paper, leading to regret minimization. The goal of regret minimization is to find a distribution $p_t$ over hypotheses $\mathcal{H}$ that minimizes regret, i.e., the gap between the learner’s expected cumulative loss over a possibly randomly chosen hypothesis $\tau_t \sim p_t$ and the best cumulative loss, for any loss sequence $\ell_t$ over a time horizon $T$:
$$
\mathbf { R e g } _ { T } : = \sum _ { t = 1 } ^ { T } \mathbb { E } _ { \tau _ { t } \sim p _ { t } } \ell _ { t } ( \tau _ { t } ) - \operatorname* { m i n } _ { \tau \in \mathcal { H } } \sum _ { t = 1 } ^ { T } \ell _ { t } ( \tau ) .
$$
Here, the best cumulative loss assumes access to the loss of any $\tau$, i.e., full feedback, whereas the learner’s cumulative loss may have only limited access to losses, e.g., partial feedback. Depending on this feedback setup, we consider two learning problems: online learning with full feedback and adversarial bandits with partial feedback. In the following section, we review adversarial bandits with one regret minimization algorithm, called Exp3. A well-known regret minimization algorithm for online learning with full feedback is Exponential Weighting (EW) [14], introduced in Appendix B.
# 2.4 Adversarial Bandits and Exp3 Algorithm
Traditional online learning typically assumes full feedback, i.e., the loss of any hypothesis is computable. However, practical applications may only have partial feedback, i.e., only the loss of the chosen hypothesis is given. To consider learning under partial feedback, we consider the multi-armed bandit in an adversarial setting, called adversarial bandits. In adversarial bandits, we consider the same regret as in online learning, except that a learner only receives feedback on the arm it chooses, i.e., partial feedback, and an arbitrary loss sequence $\ell_t$ over a time horizon $T$ is fixed before learning by an oblivious adversary. In the bandit context, we refer to a hypothesis as an arm.
To minimize the regret $\mathbf{Reg}_T$, the Exponential-weight algorithm for Exploration and Exploitation (Exp3) [7] is one standard algorithm, which adapts the EW algorithm to adversarial bandits (Algorithm 3). In particular, Exp3 exploits an unbiased estimator $\tilde{\ell}_t(\tau \mid \tau_t)$ of the loss of every arm $\tau \in \mathcal{H}$, constructed from the loss $\ell_t(\tau_t)$ of the chosen arm $\tau_t$, i.e.,
$$
\tilde { \ell } _ { t } ( \tau \mid \tau _ { t } ) : = \ell _ { t } ( \tau ) / p _ { t } ( \tau ) \cdot \mathbb { 1 } ( \tau _ { t } = \tau ) \in [ 0 , \infty ) .
$$
Note that $\tilde { \ell } _ { t } ( \tau \mid \tau _ { t } )$ is an unbiased estimator, i.e., $\mathbb { E } _ { \tau _ { t } \sim p _ { t } } \tilde { \ell } _ { t } ( \tau \mid \tau _ { t } ) = \ell _ { t } ( \tau )$ ; thus, by using this in $\mathbf { R e g } _ { T }$ , we have the following regret bound for Exp3. See Appendix G.2 for a proof.
Theorem 1. [7, 13] Let $\ell_t(\cdot) \in [0, \ell_{\mathrm{max}}]$. For any $T \in \mathbb{N}$ and finite hypotheses $\mathcal{H}$, Algorithm 3 provides the following regret bound if $\eta = \sqrt{2 \ln |\mathcal{H}| / (\ell_{\mathrm{max}}^2 T |\mathcal{H}|)}$:
$$
R e g _ { T } = \sum _ { t = 1 } ^ { T } \mathbb { E } _ { \tau , \tau _ { t } \sim p _ { t } } \tilde { \ell } _ { t } ( \tau \mid \tau _ { t } ) - \operatorname* { m i n } _ { \tau \in \mathcal { H } } \sum _ { t = 1 } ^ { T } \ell _ { t } ( \tau ) \le \ell _ { \operatorname* { m a x } } \sqrt { 2 T | \mathcal { H } | \ln | \mathcal { H } | } = O ( \ell _ { m a x } \sqrt { T | \mathcal { H } | \ln | \mathcal { H } | } ) .
$$
We will exploit Exp3 to extend this for learning selective generators with partial feedback.
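As a reference point, the vanilla Exp3 update can be sketched as follows; this is a minimal sketch with a toy oblivious loss sequence, and the step size follows Theorem 1 with $\ell_{\mathrm{max}} = 1$:

```python
import math, random

def exp3(n_arms, losses, eta, seed=0):
    """Minimal Exp3 sketch: exponential weights over arms, updated with
    the importance-weighted unbiased estimator
    l~_t(tau | tau_t) = l_t(tau) / p_t(tau) * 1(tau_t = tau).
    `losses[t][a]` is an oblivious loss sequence; the learner only
    observes the loss of the arm it pulls (partial feedback)."""
    rng = random.Random(seed)
    w = [1.0] * n_arms
    cum_loss = 0.0
    for loss_t in losses:
        Z = sum(w)
        p = [wi / Z for wi in w]
        arm = rng.choices(range(n_arms), weights=p)[0]
        cum_loss += loss_t[arm]          # partial feedback on the chosen arm
        est = loss_t[arm] / p[arm]       # nonzero only for the pulled arm
        w[arm] *= math.exp(-eta * est)
    return cum_loss

# Toy run: arm 0 is consistently better, so Exp3 should not do much
# worse than always pulling arm 0.
T, K = 2000, 3
losses = [[0.1, 0.9, 0.9] for _ in range(T)]
eta = math.sqrt(2 * math.log(K) / (T * K))   # step size as in Theorem 1
learner = exp3(K, losses, eta)
best = sum(l[0] for l in losses)
print(learner, best)
```

The dividing by $p_t(\tau_t)$ is what makes the estimator unbiased; it also inflates its variance for rarely pulled arms, which is the source of the extra $\sqrt{|\mathcal{H}|}$ factor in the bound.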
# 3 Problem: Online Selective Generation with Partial Feedback
We consider online learning of a selective generator under partial feedback for language models. In particular, let $\mathcal{W}$ be a set of tokens and $\mathcal{X} = \mathcal{Y} := \cup_{i=0}^{\infty} \mathcal{W}^{i}$ be a set of (token) sequences. Here, given a generator $G : \mathcal{X} \to \mathcal{Y}$, we consider a time-varying selective generator $\hat{S}_t : \mathcal{X} \to \mathcal{Y} \cup \{\mathrm{IDK}\}$ that abstains from answering if a generated answer $G(\mathbf{x}_t)$ at time $t$ is uncertain, i.e., $\hat{S}_t(\mathbf{x}_t) := \left\{ \begin{array}{ll} G(\mathbf{x}_t) & \mathrm{if}~\hat{s}(\mathbf{x}_t, G(\mathbf{x}_t)) = 1 \\ \mathrm{IDK} & \mathrm{otherwise} \end{array} \right.$, where $\hat{s} : \mathcal{X} \times \mathcal{Y} \to \{0, 1\}$ is a selection function and IDK represents “I don’t know”. To learn the selective generator, we consider online learning under a non-stochastic assumption: at each time step $t$ until a time horizon $T$, a learning algorithm (1) observes $\mathbf{x}_t \in \mathcal{X}$, (2) predicts $\hat{S}_t(\mathbf{x}_t)$, where $\hat{S}_t$ is drawn from a learned distribution $p_t$ over selective generators, (3) observes partial feedback $b_t := \mathbb{1}(\hat{S}_t(\mathbf{x}_t) \neq_E \mathbf{y}_t)$, where $\mathbf{x}_t$ and $\mathbf{y}_t$ are drawn from a time-varying distribution, and (4) updates the distribution $p_t$ over selective generators by using the partial feedback. Here, $A \neq_E B$ means $A$ and $B$ are different in terms of a given correctness relation $E$, e.g., textual entailment [18], and $b_t = 1$ if $\hat{S}_t(\mathbf{x}_t) = \mathtt{IDK}$ by definition. Importantly, we consider partial feedback, i.e., we cannot directly observe $\mathbf{y}_t$; instead, we observe whether our prediction is correct or not, i.e., $b_t$. This is an ordinary setup for using language generators, where a user provides thumbs-up or thumbs-down feedback depending on whether a generated answer is correct or not.
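The four-step protocol can be sketched as a loop over hypothetical components (the callables are stand-ins for the paper's data source, selector distribution, correctness relation, and learner update):

```python
def online_protocol(T, draw_example, sample_selector, equals_E, update):
    """One pass of the online selective-generation protocol:
    (1) observe x_t, (2) predict with S_t ~ p_t, (3) receive binary
    partial feedback b_t, (4) update p_t."""
    history = []
    for t in range(T):
        x_t, y_t = draw_example(t)      # y_t itself is never revealed
        S_t = sample_selector()         # S_t ~ p_t
        out = S_t(x_t)
        # b_t = 1 iff the output is wrong; IDK counts as b_t = 1.
        b_t = 1 if (out == "IDK" or not equals_E(out, y_t)) else 0
        update(b_t)                     # the learner sees only b_t
        history.append(b_t)
    return history

# Toy run: a generator that is right on even steps, a selector that
# never abstains, and a no-op update.
hist = online_protocol(
    T=4,
    draw_example=lambda t: (t, t if t % 2 == 0 else -1),
    sample_selector=lambda: (lambda x: x),
    equals_E=lambda a, b: a == b,
    update=lambda b: None,
)
print(hist)  # feedback is 0 on even steps, 1 on odd steps
```

Note that the loop never exposes `y_t` to `update`, mirroring the thumbs-up/thumbs-down feedback model.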
Under learning with partial feedback, we aim to update a distribution $p _ { t }$ over selective generators that controls an empirical false discovery rate (FDR), with respect to a correctness relation $E$ at a desired level $\alpha \in [ 0 , 1 ]$ up to a time horizon $T$ , i.e.,
$$
\mathbf { F } \mathbf { D } \mathbf { R } _ { T } : = \frac { \sum _ { t = 1 } ^ { T } \mathbb { E } _ { \hat { S } _ { t } \sim p _ { t } } \mathbb { 1 } \left( \hat { S } _ { t } ( \mathbf { x } _ { t } ) \neq \mathrm { I D K } \wedge \hat { S } _ { t } ( \mathbf { x } _ { t } ) \neq _ { E } \mathbf { y } _ { t } \right) } { \sum _ { t = 1 } ^ { T } \mathbb { E } _ { \hat { S } _ { t } \sim p _ { t } } \mathbb { 1 } \left( \hat { S } _ { t } ( \mathbf { x } _ { t } ) \neq \mathrm { I D K } \right) } \leq \alpha ,
$$
where $\mathbf{FDR}_T := \alpha$ if $\hat{S}_t(\mathbf{x}_t) = \mathrm{IDK}$ for all $t$, to penalize inefficient cases. Equivalently, we wish to find $\hat{S}_{1:T}$ that make the following FDR risk at most zero: $\sum_{t=1}^{T} \big[ \mathbb{1}(\hat{S}_t(\mathbf{x}_t) \neq \mathrm{IDK} \land \hat{S}_t(\mathbf{x}_t) \neq_E \mathbf{y}_t) - \alpha \mathbb{1}(\hat{S}_t(\mathbf{x}_t) \neq \mathrm{IDK}) \big]$. Then, the goal is to learn a distribution $p_t$ over selective generators such that a drawn selective generator $\hat{S}_t \sim p_t$ controls the FDR at a desired level $\alpha$ for any sequences, i.e.,
$$
\frac { 1 } { T } \sum _ { t = 1 } ^ { T } \mathbb { E } _ { \hat { S } _ { t } \sim p _ { t } } \left[ \mathbb { 1 } \left( \hat { S } _ { t } ( \mathbf { x } _ { t } ) \neq \mathrm { I D K } \land \hat { S } _ { t } ( \mathbf { x } _ { t } ) \neq _ { E } \mathbf { y } _ { t } \right) - \alpha \mathbb { 1 } \left( \hat { S } _ { t } ( \mathbf { x } _ { t } ) \neq \mathrm { I D K } \right) \right] \leq \varepsilon ( T )
$$
for any $\mathbf{x}_t$ and $\mathbf{y}_t$, where $\varepsilon(T)$ is some decreasing function of $T$. Additionally, we wish to minimize the selection inefficiency $\mathbf{Ineff}_T := \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{\hat{S}_t \sim p_t} \mathbb{1}(\hat{S}_t(\mathbf{x}_t) = \mathrm{IDK})$.
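The two quantities being traded off, FDR and inefficiency, can be computed from a trace of selective outputs as follows (a minimal sketch; the reference labels are used only for evaluation, and the all-IDK convention from above is applied):

```python
def fdr_and_ineff(outputs, labels, alpha):
    """Empirical FDR_T and Ineff_T from a trace of selective outputs.
    `outputs[t]` is "IDK" or an answer; `labels[t]` is the reference.
    Following the convention above, FDR_T = alpha when every step
    abstained, to penalize inefficient cases."""
    answered = [(o, y) for o, y in zip(outputs, labels) if o != "IDK"]
    ineff = 1 - len(answered) / len(outputs)
    if not answered:
        return alpha, ineff            # all-IDK convention
    fdr = sum(o != y for o, y in answered) / len(answered)
    return fdr, ineff

fdr, ineff = fdr_and_ineff(["a", "IDK", "b", "c"],
                           ["a", "b", "x", "c"], alpha=0.1)
print(fdr, ineff)  # 1 wrong among 3 answered; 1 abstention out of 4
```

Exact-match comparison stands in for the correctness relation $E$; in the paper's setting it would be replaced by an entailment check.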
# 4 Method: Online Selective Generation with Partial Feedback Unlocking
We leverage a regret perspective from online learning and bandit problems in designing a learning algorithm for selective generation with full or partial feedback to control the FDR. In particular, we reduce selective generation to online learning with full feedback or bandit problems with partial feedback so that we can use existing regret minimization algorithms. The main benefit of this reduction is that it enables us to leverage any regret minimization algorithm (e.g., EW or Exp3) and its regret bound to obtain selective generation algorithms with FDR bounds. To connect regret bounds and FDR bounds, we introduce a novel conversion lemma from regret to FDR in Section 4.4. However, under partial feedback, simply leveraging existing bandit algorithms may not lead to sample-efficient algorithms without exploiting properties of selective generation.
In this section, we propose a novel online selective generation algorithm with partial feedback unlocking for sample efficiency. First, we introduce the reduction of online selective generation to adversarial bandits in Section 4.1. In Section 4.2, we extend the Exp3 algorithm [7] for adversarial bandits to learn selective generators by exploiting the unique structure of selection functions via feedback unlocking, followed by its novel regret bound in Section 4.3. This regret bound leads to an FDR bound by using our conversion lemma in Section 4.4. Note that online selective generation under full feedback can be devised in a similar way; see Appendix C for details via EW.
Notations. We first introduce our choice of the selection function and simplified notations. As in the conventional selective prediction literature, we consider the scalar parameterization of a selection function given a scoring function $f : \mathcal{X} \times \mathcal{Y} \to [0, 1)$, i.e., $\hat{s}(\mathbf{x}_t, G(\mathbf{x}_t)) := \mathbb{1}(f(\mathbf{x}_t, G(\mathbf{x}_t)) \geq \tau)$; thus the selective generator $\hat{S}$ is parameterized by $\tau$, denoted by $\hat{S}(\cdot\,; \tau)$. Here, the scoring function $f$ measures the likelihood that $G(\mathbf{x}_t)$ is an answer to $\mathbf{x}_t$. Also, we specifically consider that $\tau$ is chosen from a finely-quantized, finite subset of $[0, 1]$, denoted by $\mathcal{H}$ (i.e., $\tau \in \mathcal{H}$). Given this parameterization, we consider the following shorthands to simplify key notations: $a_t(\tau) := \mathbb{1}(\hat{S}(\mathbf{x}_t; \tau) = \mathrm{IDK})$, which measures the selection inefficiency at time $t$, called the inefficiency loss, and $d_t(\tau, \alpha) := \mathbb{1}\big(\hat{S}(\mathbf{x}_t; \tau) \neq \mathrm{IDK} \land \hat{S}(\mathbf{x}_t; \tau) \neq_E \mathbf{y}_t\big) - \alpha \mathbb{1}\big(\hat{S}(\mathbf{x}_t; \tau) \neq \mathrm{IDK}\big) + \alpha$, which measures the violation of the FDR constraint at time $t$ with a margin of $\alpha$ to penalize the IDK response, called the FDR loss with a margin. Based on these shorthands, we define a loss for online learning and bandits that simultaneously considers the FDR constraint and selection inefficiency as follows:
$$
\ell _ { t } ( \tau , \alpha ) : = a _ { t } ( \tau ) + \lambda d _ { t } ( \tau , \alpha ) \in \{ 0 , \lambda , 1 + \lambda \alpha \} ,
$$
where $\lambda$ is a regularization parameter, which will be determined via theoretical analysis.
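A quick sketch verifies that the loss takes exactly the three values $\{0, \lambda, 1 + \lambda\alpha\}$ stated above, one per outcome (answered correctly, answered incorrectly, abstained):

```python
def loss(abstained, wrong, alpha, lam):
    """l_t(tau, alpha) = a_t(tau) + lam * d_t(tau, alpha), from the
    shorthand definitions above; `wrong` is ignored when abstaining."""
    a = 1 if abstained else 0
    not_idk = 0 if abstained else 1
    d = (1 if (not abstained and wrong) else 0) - alpha * not_idk + alpha
    return a + lam * d

alpha, lam = 0.1, 2.0
print(loss(False, False, alpha, lam))  # answered & correct -> 0
print(loss(False, True, alpha, lam))   # answered & wrong   -> lam
print(loss(True, None, alpha, lam))    # abstained          -> 1 + lam*alpha
```

Note that the $+\alpha$ margin in $d_t$ is what makes abstention cost $1 + \lambda\alpha$ rather than $1$, keeping all-IDK from being a free way to satisfy the constraint.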
# 4.1 Reduction: From Online Selective Generation to Adversarial Bandits
Here, we map the components of online selective generation with partial feedback to those of adversarial bandits to leverage existing algorithms and regret bounds. In particular, each parameter $\tau \in \mathcal{H}$ of a selective generator is considered as an arm in adversarial bandits. We learn selective generators from partial feedback $\mathbb{1}(\hat{S}(\mathbf{x}_t; \tau_t) \neq_E \mathbf{y}_t)$, i.e., whether the output of a selective generator $\hat{S}$ with a chosen parameter $\tau_t$ is incorrect, where $\hat{S}(\mathbf{x}_t; \tau_t) = \mathrm{IDK}$ incurs non-zero feedback by definition. In adversarial bandits, we consider two-step feedback: (1) get the feedback $\mathbb{1}(\hat{S}(\mathbf{x}_t; \tau_t) \neq_E \mathbf{y}_t)$ from nature and (2) use it to compute the loss (6) of the chosen arm $\tau_t$. Based on the observed losses up to the time horizon $T$, the adversarial bandit learner updates a strategy $p_t$ over time to minimize the regret $\mathbf{Reg}_T$. Then, by leveraging our conversion lemma (Lemma 1), the algorithm provides an FDR bound. See Table 1 for a summary of the reduction.

Table 1: From online selective generation to adversarial bandits
# 4.2 Algorithm: Exp3 for Online Selective Generation with Partial Feedback Unlocking
As the adversarial bandit learner, we leverage Exp3 [7], a direct extension of EW [14] to partial feedback. The key challenge in moving from full to partial feedback is the slow convergence of learning due to the lack of feedback information. To address this, we exploit the unique parameter structure of selective generators to unlock feedback for other arms from a chosen arm, i.e., feedback unlocking. In particular, by the monotonicity of the selection function in $\tau$, i.e., $\hat{s}(\mathbf{x}_t) := \mathbb{1}(f_t \geq \tau)$ where $f_t := f(\mathbf{x}_t, G(\mathbf{x}_t))$, and the definition of a selective generator $\hat{S}$, the following relation among feedback holds, where $b_t := \mathbb{1}(\hat{S}(\mathbf{x}_t; \tau_t) \neq_E \mathbf{y}_t)$ is the observed feedback for a chosen arm $\tau_t$:
$$
b _ { t } : = \mathbb { 1 } ( \hat { S } ( \mathbf { x } _ { t } ; \tau _ { t } ) \neq _ { E } \mathbf { y } _ { t } ) = \left\{ \begin{array} { l l } { \mathbb { 1 } ( \hat { S } ( \mathbf { x } _ { t } ; \tau ) \neq _ { E } \mathbf { y } _ { t } ) \ \mathrm { f o r } \ \tau \leq f _ { t } } & { \mathrm { i f } \ \hat { S } ( \mathbf { x } _ { t } ; \tau _ { t } ) \neq \mathrm { I D K } } \\ { \mathbb { 1 } ( \hat { S } ( \mathbf { x } _ { t } ; \tau ) \neq _ { E } \mathbf { y } _ { t } ) \ \mathrm { f o r } \ \tau > f _ { t } } & { \mathrm { i f } \ \hat { S } ( \mathbf { x } _ { t } ; \tau _ { t } ) = \mathrm { I D K } } \end{array} \right. .
$$
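The unlocking relation above partitions the arms by which side of the score $f_t$ they fall on; a minimal sketch (with a hypothetical quantized arm grid) makes the two cases explicit:

```python
def unlocked_arms(arms, f_t, tau_t):
    """Feedback unlocking: observing b_t for the chosen threshold tau_t
    determines b_t for every arm on the same side of the score f_t,
    by monotonicity of s_hat(x) = 1(f_t >= tau)."""
    if tau_t <= f_t:                       # chosen arm answered (not IDK)
        return [tau for tau in arms if tau <= f_t]
    return [tau for tau in arms if tau > f_t]

arms = [0.0, 0.25, 0.5, 0.75, 1.0]
print(unlocked_arms(arms, f_t=0.6, tau_t=0.25))  # all arms that also answer
print(unlocked_arms(arms, f_t=0.6, tau_t=1.0))   # all arms that also abstain
```

In other words, one binary feedback bit resolves the loss of roughly half the arm grid at once, rather than a single arm as in vanilla Exp3.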
To exploit this feedback unlocking within Exp3, we propose a novel unbiased loss estimator
$$
\ell _ { t } ( \tau , \alpha \mid \mathcal { H } _ { t } ( \tau _ { t } ) ) : = \frac { \ell _ { t } ( \tau , \alpha ) } { \sum _ { \bar { \tau } \in \mathcal { H } _ { t } ( \tau _ { t } ) } \mathbb { 1 } ( \tau \in \mathcal { H } _ { t } ( \bar { \tau } ) ) \cdot p _ { t } ( \bar { \tau } ) } \cdot \mathbb { 1 } ( \tau \in \mathcal { H } _ { t } ( \tau _ { t } ) ) \in [ 0 , \infty ) ,
$$
where $\mathcal{H}_t(\tau_t) := \{\tau \in \mathcal{H} \mid \tau \leq f_t\}$ if $\hat{S}(\mathbf{x}_t; \tau_t) \neq \mathtt{IDK}$ and $\mathcal{H}_t(\tau_t) := \{\tau \in \mathcal{H} \mid \tau > f_t\}$ otherwise. Note that $\tau \in \mathcal{H}_t(\tau)$ by definition and that the estimator is unbiased, i.e., $\mathbb{E}_{\tau_t \sim p_t} \ell_t(\tau, \alpha \mid \mathcal{H}_t(\tau_t)) = \ell_t(\tau, \alpha)$. See (29) in Appendix G.4 for a proof of unbiasedness and Algorithm 1 for our extension of Exp3 with partial feedback unlocking.
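The unbiasedness claim can be checked exactly on a toy instance: averaging the estimator over the chosen arm $\tau_t \sim p_t$ recovers the true loss. This is a sketch with hypothetical arms and probabilities:

```python
def unlocked_estimate(loss_tau, tau, tau_t, f_t, arms, p):
    """Loss estimator in the style of (8): the loss of arm tau,
    importance-weighted by the probability mass of arms on tau's
    side of f_t (the arms whose feedback unlocks tau's)."""
    same_side = lambda a, b: (a <= f_t) == (b <= f_t)
    if not same_side(tau, tau_t):
        return 0.0                      # tau's feedback was not unlocked
    mass = sum(p[i] for i, bar in enumerate(arms) if same_side(tau, bar))
    return loss_tau / mass

arms = [0.2, 0.4, 0.6, 0.8]
p = [0.1, 0.2, 0.3, 0.4]
f_t, tau, loss_tau = 0.5, 0.4, 0.7
# Exact expectation over tau_t ~ p: should equal loss_tau.
expected = sum(p_i * unlocked_estimate(loss_tau, tau, t_i, f_t, arms, p)
               for t_i, p_i in zip(arms, p))
print(expected)  # equals loss_tau = 0.7 up to float error
```

The denominator is the probability that $\tau$'s side of $f_t$ is sampled, so the nonzero case is scaled up by exactly the inverse of its occurrence probability.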
Connection to Exp3. Our algorithm reduces to Exp3 by letting $\mathcal{H}_t(\tau) = \{\tau\}$ for any $\tau$. Then, we have $\sum_{\bar{\tau} \in \mathcal{H}_t(\tau_t)} \mathbb{1}(\tau_t \in \mathcal{H}_t(\bar{\tau})) \cdot p_t(\bar{\tau}) = p_t(\tau_t)$, recovering the Exp3 unbiased estimator in (2) from our unbiased estimator (8).
Connection to EW. Our algorithm can also be reduced to EW. Specifically, let $\mathcal{H}_t(\tau) = \mathcal{H}$ for any $\tau$, i.e., full feedback. Then, we have $\sum_{\bar{\tau} \in \mathcal{H}_t(\tau_t)} \mathbb{1}(\tau_t \in \mathcal{H}_t(\bar{\tau})) \cdot p_t(\bar{\tau}) = 1$, making our estimator (8) the same as the loss $\ell_t$.
# 4.3 Regret Bound
Our Algorithm 1 has a sublinear regret bound. From this regret bound, we construct the FDR bound by using Lemma 1. See Appendix G.4 for a proof of our novel regret bound.
Theorem 2. Let $\ell_t(\cdot) \in [0, \ell_{\mathrm{max}}]$. For any $T \in \mathbb{N}$ and finite hypotheses $\mathcal{H}$, Algorithm 1 provides the following regret bound if $\eta = \sqrt{\ln |\mathcal{H}| / (\ell_{\mathrm{max}}^2 T)}$:
$$
R e g _ { T } = \sum _ { t = 1 } ^ { T } \mathbb { E } _ { \tau , \tau _ { t } \sim p _ { t } } \ell _ { t } ( \tau , \alpha \mid \mathcal { H } _ { t } ( \tau _ { t } ) ) - \operatorname* { m i n } _ { \tau \in \mathcal { H } } \sum _ { t = 1 } ^ { T } \ell _ { t } ( \tau , \alpha ) \leq 2 \ell _ { m a x } \sqrt { T \ln { | \mathcal { H } | } }
$$
for any observed loss sequences $\ell _ { t }$ generated before learning by an oblivious adversary.
Interestingly, despite partial feedback, our algorithm achieves the same upper bound as EW with full feedback, i.e., $\ell_{\mathrm{max}} \sqrt{T \ln |\mathcal{H}| / 2}$, up to a constant factor, thanks to our unbiased loss estimator (8) with richer information, whereas $\mathtt{Exp3}$ suffers from an additional factor of $\sqrt{|\mathcal{H}|}$ (see (3)) due to its unbiased loss estimator with limited information.
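The gap between the two bounds is easy to see numerically; plugging the paper's default $|\mathcal{H}| = 1\mathrm{K}$ into the bounds of Theorems 1 and 2 (with $\ell_{\mathrm{max}} = 1$ for illustration):

```python
import math

def bound_exp3(T, H, lmax=1.0):
    """Exp3 regret bound from Theorem 1: lmax * sqrt(2 T |H| ln|H|)."""
    return lmax * math.sqrt(2 * T * H * math.log(H))

def bound_unlocked(T, H, lmax=1.0):
    """Regret bound from Theorem 2: 2 * lmax * sqrt(T ln|H|)."""
    return 2 * lmax * math.sqrt(T * math.log(H))

T, H = 30_000, 1_000
print(bound_exp3(T, H))                        # grows with sqrt(|H|)
print(bound_unlocked(T, H))                    # independent of |H| up to ln|H|
print(bound_exp3(T, H) / bound_unlocked(T, H)) # ratio = sqrt(|H| / 2)
```

For a 1K-arm grid the vanilla Exp3 bound is over 22 times larger, which is consistent with the slower empirical convergence of Exp3-SG reported in Section 5.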
# 4.4 Regret-to-FDR Conversion
We mainly leverage algorithms that minimize the regret $\mathbf{Reg}_T$ for learning selective generators. Yet, the key requirement of selective generation is an FDR guarantee at a desired level. To this end, we introduce a novel perspective on the connection between a regret bound and an FDR bound. In particular, any learner, including learners for online learning and bandits, that minimizes the regret (1) controls the FDR (4). This is achievable because the designed loss (6) is penalized by the regularization parameter $\lambda$ through $d_t(\tau, \alpha)$ whenever the FDR loss at a step exceeds $\alpha$, resulting in control of the FDR over time $T$ at level $\alpha$. The following is the proposed regret-to-FDR conversion lemma. See Appendix G.3 for a proof.
Lemma 1. Let $T \in \mathbb{N}$ and $\alpha \in (0, 1)$. If we have a learner that achieves a bounded regret for any loss sequence of (6), then the learner also bounds the FDR, i.e., if $\lambda \geq T^{1/4}$,
$$
F D R _ { T } \leq \alpha + \frac { 1 + R e g _ { T } / T } { ( 1 - I n e f f _ { T } ) T ^ { 1 / 4 } } .
$$
Importantly, this lemma is applicable to learning under both full and partial feedback, as it is agnostic to the feedback mechanism. Also, the lemma implies that if $\mathbf{Reg}_T$ has a sublinear bound, as satisfied by most online learners, and $\mathbf{Ineff}_T$ is a constant less than 1, which also mostly holds, then $\mathbf{FDR}_T$ approaches $\alpha$ at the rate of at most $\mathcal{O}(1/T^{1/4})$, i.e., $\mathbf{FDR}_T \leq \alpha + \mathcal{O}(1/T^{1/4})$. Note that $\mathbf{Ineff}_T$ does affect the convergence speed by a constant factor. See Appendix H for a discussion on efficiency.
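Lemma 1's bound can be evaluated numerically; assuming, for illustration, the sublinear regret $\mathbf{Reg}_T = 2\sqrt{T \ln |\mathcal{H}|}$ from Theorem 2 (with $\ell_{\mathrm{max}} = 1$), the excess over $\alpha$ decays like $O(T^{-1/4})$:

```python
import math

def fdr_bound(alpha, T, reg, ineff):
    """Lemma 1: FDR_T <= alpha + (1 + Reg_T/T) / ((1 - Ineff_T) * T**(1/4))."""
    return alpha + (1 + reg / T) / ((1 - ineff) * T ** 0.25)

alpha, H, ineff = 0.08, 1_000, 0.3
bounds = [fdr_bound(alpha, T, 2 * math.sqrt(T * math.log(H)), ineff)
          for T in (10**3, 10**4, 10**5)]
print([round(b, 4) for b in bounds])  # shrinks toward alpha as T grows
```

Note how the inefficiency enters only as the constant factor $1/(1 - \mathbf{Ineff}_T)$: it slows convergence but does not change the $T^{-1/4}$ rate.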
# 5 Experiments
We empirically verify that $\mathtt{ExSUL}$ controls the FDR while maximizing selection efficiency. Moreover, we demonstrate that the controllability guarantee holds under diverse environments. In particular, we consider three environments: (1) stochastic, (2) distribution-shift, and (3) interactive environments.
Figure 2: Comparison of selective generation methods under a stochastic environment with LLaMA3.1-8B-Instruct as a generator on TriviaQA ($T = 30\mathrm{K}$, $\alpha = 0.08$). The violin plots are drawn with randomly chosen 30K samples over 30 random trials.
Figure 3: Comparison of selective generation methods under a stochastic environment with GPT-3.5-turbo as a generator on TriviaQA ($T = 30\mathrm{K}$, $\alpha = 0.08$). The violin plots are drawn with randomly chosen 30K samples over 30 random trials.
Datasets and Models. We use two datasets for the stochastic and distribution-shift environments, 79K Natural Questions (NQ) [9] and 93K TriviaQA [8], with two base models, GPT-3.5-turbo [1] and LLaMA3.1-8B-Instruct [2]. We simulate distribution-shift environments by mixing the two datasets in diverse ways and an interactive environment by generating interactions between two GPT-3.5-turbo models (see Sections E.2 and E.3, respectively, for details). To obtain entailment labels and self-consistency scores, we additionally use GPT-3.5-turbo as an entailment model.
Scoring functions. We consider two scoring functions, $f_{\mathsf{std}}$ and $f_{\mathsf{con}}$, introduced in Section 2.1. In short, $f_{\mathsf{std}}$ is a standard likelihood score, defined as the length-normalized conditional probability of an answer given a question, and $f_{\mathsf{con}}$ is a self-consistency score of an answer computed via entailment scores (from an entailment model) across samples. We use $f_{\mathsf{con}}$ unless specified otherwise.
Methods. We consider two baselines, Exp3-SG and No-SG, and one performance upper bound algorithm, EW-SG. In particular, (1) Exp3-SG (Algorithm 5) is an online selective generator with partial feedback, obtained by converting Exp3 [7] for selective generation via our regret-to-FDR conversion lemma. (2) No-SG is a non-selective generator (i.e., $\tau_t = 0$ always), serving as the standard use of a generator without abstention. (3) EW-SG (Algorithm 4) is an online selective generator with full feedback, obtained by converting EW for selective generation via our conversion lemma. As it exploits full feedback, we use it as our empirical performance upper bound.
Metrics. We measure the performance of our method and the compared methods via the FDR, i.e., $\mathbf{FDR}_t$, and selection inefficiency, i.e., $\mathbf{Ineff}_t$. Note that No-SG achieves zero selection inefficiency by definition, so we do not explicitly show it in figures.
Parameters. A desired FDR level $\alpha$ and time horizon $T$ depend on tasks, but we set $|\mathcal{H}| = 1\mathrm{K}$ and $\lambda = T^{1/4}$ by default. Note that $\alpha$ is a desired FDR set by users and usually depends on the task; e.g., $\alpha$ should be smaller than the error of the base generator on the given task without selection, to avoid a trivial FDR.
# 5.1 Stochastic Environment
In Figures 2 and 3, we observe that ExSUL converges quickly to below $\alpha$, comparably to EW-SG which has access to full feedback, while Exp3-SG fails to converge below $\alpha$ in most cases. This is because Exp3-SG only observes partial feedback from the chosen arm, requiring a much longer time horizon
Figure 4: Comparison of selective generation methods under a distribution-shift environment with GPT-3.5-turbo as a generator ($T = 120\mathrm{K}$, $\alpha = 0.1$). Here, we consider a single distribution shift from 60K-sized TriviaQA to 60K-sized NQ, denoted by a dotted vertical line. The violin plots are drawn with randomly chosen 120K samples over 30 random trials.
Figure 5: Comparison of selective generation methods under a distribution-shift environment with GPT-3.5-turbo as a generator ($T = 120\mathrm{K}$, $\alpha = 0.1$). We consider multiple distribution shifts by alternating 10K-sized TriviaQA and NQ chunks 6 times (i.e., TriviaQA, NQ, TriviaQA, NQ, and so forth), where the shifting points are visualized as dotted vertical lines. The violin plots are drawn with randomly chosen 120K samples over 30 random trials.
$T$ for convergence (Figure 23), which supports the regret bound analysis discussed in Section 4.3. These results imply that our partial feedback unlocking is beneficial for the regret bound and for faster convergence to the desired FDR. Similar trends are observed in Figures 9 and 11. Additionally, Figure 21 demonstrates that our algorithm effectively controls the FDR under varying $\alpha$ in practice. Note that algorithms with $f_{\mathsf{con}}$ converge faster than those with $f_{\mathsf{std}}$ (e.g., Figure 9 vs. 11), suggesting that a well-calibrated scoring function can have a positive effect on convergence speed.
# 5.2 Distribution-shift Environment
In this study, to evaluate the robustness of selective generation, we examine three distinct distribution-shift scenarios. First, we create a single instantaneous change by concatenating one large segment from one dataset with another from the other. Second, we induce frequent, periodic shifts by interleaving a series of small, fixed-size chunks drawn randomly from each dataset. Third, we create a progressive shift by sampling each example according to a mixing probability that varies linearly over time. More details of the environment setup are introduced in Appendix E.2.
In Figures 4, 5, and 6, we observe that $\mathtt{ExSUL}$ clearly converges faster than Exp3-SG, while closely following EW-SG in distribution-shifted environments. Interestingly, in Figure 4, Exp3-SG exhibits a sharp increase in the FDR immediately following the distribution shift, highlighting its sensitivity to sudden changes in data distribution, while the proposed ExSUL does not. In Figure 22, as in the stochastic environments, we also confirm that our methods find selective generators for a diverse range of desired FDR levels $\alpha$, demonstrating the theoretical controllability empirically. See additional results on varying generators and shift types in Section F.2 and a longer time horizon in Figure 24.
Figure 6: Comparison of selective generation methods under a distribution-shift environment with GPT-3.5-turbo as a generator ($T = 60\mathrm{K}$, $\alpha = 0.1$). We consider a distribution shift by gradually shifting from TriviaQA to NQ over time. The violin plots are drawn with randomly chosen 60K samples over 30 random trials.
Figure 7: $\mathrm{FDR}_t$ and $\mathrm{Ineff}_t$ for $\mathtt{ExSUL}$ under an interactive environment with GPT-3.5-Turbo as a question-answering agent ($T = 15{,}000$, $\alpha = 0.25$). A user agent continuously creates questions based on changing context, which simulates distribution shifts.
# 5.3 Interactive Environment
We empirically demonstrate the efficacy of our proposed $\mathtt{ExSUL}$ in real-world interactive applications. In particular, we simulate an interactive environment by implementing a user-acting agent, a question-answering agent, and an evaluating agent via states. These agents interact over multiple turns, forming a dialog-like sequence of states that mimics realistic user interactions, and our selective generation method ExSUL decides whether to abstain from generation, as shown in Figure 1. In Figure 7, we observe that ExSUL consistently controls the FDR under $\alpha$, despite the instability of $\mathrm{FDR}_t$ due to the shifting dynamics of question answering. This observation demonstrates that our method $\mathtt{ExSUL}$ is robust under frequently shifting distributions over time. In Figure 1, ExSUL successfully abstains from incorrect answers, helping to achieve the desired FDR level $\alpha$. | Large language generative models increasingly interact with humans, while their falsified responses raise concerns. To address this hallucination effect, selectively abstaining from answering, called selective generation, provides an effective way for generators to control hallucination when they are unsure of their answers. However, as selective generators interact under non-stochastic environments and receive partial feedback from users on selective generation (e.g., thumbs up or down on the selected answer), learning methods for selective generation under such practical setups are crucial but currently missing. To address these limitations, we propose an online learning algorithm for selective generation under partial feedback. In particular, as learning under partial feedback is well-studied via multi-armed bandit problems, we reduce selective generation to bandits and provide a novel conversion lemma from bandits back to selective generation to leverage any known bandit algorithms and theoretical properties.
This mainly connects regret guarantees of bandits to false discovery rate (FDR) guarantees of selective generation for controlling hallucination. However, naively exploiting known bandit algorithms and their regret bounds suffers from slow convergence in practice due to the nature of partial feedback. To overcome this, we exploit a unique structure of arms in selective generation for feedback unlocking, i.e., unlocking unknown feedback from observed feedback. We theoretically and empirically evaluate the efficacy of the proposed online selective generation algorithm under partial feedback over diverse data environment setups, resulting in control of a desired FDR, while maintaining reasonable selection efficiency, i.e., the ratio of non-abstaining answers, compared to baselines. | [
"cs.LG"
] |
# 1. Introduction
The field of Explainable AI (XAI) [1, 2, 3] focuses on making ML models more transparent to their users. It has given rise to a wide range of methods focusing on different data modalities and model types (e.g. [4, 5, 6, 7]), and can help to improve model performance [8, 9, 10]. XAI has also been used in various application fields and scientific disciplines, e.g. medicine [11, 12, 13, 14], physics or chemistry [15, 16, 17], geoscience [18, 19, 20, 21], and history [22, 23, 24], to unlock potential insights embodied by machine learning models.
Counterfactual explainers [25, 26, 27, 28] constitute a popular approach to explanation, which asks how input features would have to be changed to flip the predictor's decision. They have shown applicability in a broad range of settings, ranging from linear models on tabular data (sometimes also called algorithmic recourse) to sequence classification [29], graph classification [30], unveiling deepfakes [31], object recognition [32], counterfactual image generation [33], and image regression [34]. In this paper, we focus on applications to image classification, and refer to the corresponding techniques as ‘visual counterfactual explainers’ (or VCEs) [28, 27, 35].
Explanations produced by VCEs take the form of an image (or a collection of images), which should differ from the original image only by the features that are necessary to effect a specific change at the output of the model. This can be e.g. the inclusion or removal of object parts, but also more intricate changes in image quality or color, that may not be accessible with other explanation techniques such as feature attribution. Another advantage of counterfactuals is that they are inherently actionable, e.g. together with a human in the loop, counterfactuals provide an implicit data augmentation scheme that can serve to address a model’s missing invariances or reliance on spurious correlations [36]. Mathematically, the search for counterfactuals can be formulated as an optimization problem:
$$
\arg \min_{\widetilde{x}} \quad d(x, \widetilde{x}) \qquad \mathrm{s.t.} \quad f(\widetilde{x}) < 0 \quad \land \quad \widetilde{x} \in \mathcal{M},
$$
where $x$ is the data point (factual), $\widetilde{x}$ is the counterfactual, $\mathcal{M}$ is the data manifold, and $d$ is a distance along the manifold [27]. This effectively searches for the minimum perturbation of the original data point (according to some metric $d$) that flips the classification outcome.
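As a minimal illustration of Eq. (1), the sketch below performs gradient descent on the classifier output with early stopping once the constraint $f(\widetilde{x}) < 0$ is met; the manifold constraint $\widetilde{x} \in \mathcal{M}$ is omitted for brevity, and the function names are ours rather than those of any cited method.

```python
import numpy as np

def counterfactual_search(x, f, grad_f, lr=0.05, max_steps=1000):
    """Toy counterfactual search for Eq. (1): descend the classifier
    output f until it turns negative (the decision flips); stopping
    as early as possible keeps the distance d(x, x~) small.
    The projection onto the manifold M is omitted in this sketch."""
    x_cf = x.copy()
    for _ in range(max_steps):
        if f(x_cf) < 0:                 # constraint f(x~) < 0 satisfied
            break
        x_cf = x_cf - lr * grad_f(x_cf) # steepest descent on f
    return x_cf
```

For a linear classifier $f(z) = w^\top z$, this walks the factual straight across the decision boundary and stops just on the other side.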
Despite these promising capabilities, we argue in this paper that current visual counterfactual explainers exhibit some major shortcomings that limit their potential use. Similar to Eq. (1)’s formulation, they are mainly focused on optimizing two criteria: (1) staying on the data manifold $\mathcal{M}$, i.e. ensuring that $\widetilde{x}$ has high image quality, and (2) minimality of the transformation $x \mapsto \widetilde{x}$, achieved by minimizing the distance function $d$. See Figure 1 (left) for an illustration of a classical counterfactual search based on these criteria.
Higher image quality is arguably better, but one has to distinguish between image editing [37, 38, 39, 40], where the explicit goal is to produce high-quality edited images, and counterfactual explanations, which are supposed to reveal the inner workings of a classifier. Low-quality counterfactuals that clearly and correctly reveal the features used are arguably more valuable than high-quality counterfactuals that do not. For example, if the model is of ‘Clever Hans’ type [41, 42, 43, 44] and spuriously relies e.g. on background features, the generated counterfactual must faithfully detect this flaw of the model and expose it in a human-understandable manner. Concerning the minimality criterion, it is clear that a large distance between factual and counterfactual, e.g. resulting from choosing any data point on the other side of the decision boundary, would fail to disentangle relevant from irrelevant features. However, there are caveats to minimizing such distances. For example, suppose an image of a person is already on the boundary between the classes ‘smiling’ and ‘serious’. In that case, it is better for the sake of understandability to resolutely perturb one feature (e.g. the mouth) and leave the remaining features intact than to find a faint combination of unrelated pixels that produces a similar effect; in other words, sparsity in some meaningful semantic space should be preferred to minimality.
To increase the usefulness and actionability of counterfactual explanations, we propose to start from the well-established explanation desiderata of Swartout and Moore [45], which list ‘fidelity’, ‘understandability’, and ‘sufficiency’ as essential characteristics of an explanation. We contribute an instantiation of these desiderata in the context of counterfactual explanations (cf. Section 3 and Figure 1, right) as well as novel mechanisms to fulfill them. We combine these mechanisms into a novel algorithm called ‘smooth counterfactual explorer’ (SCE), which we present in Section 4. The effectiveness of our approach is demonstrated in Sections 5 and 6 through evaluations on synthetic datasets with access to latent variables and on real-world datasets. The actionability of SCE is further demonstrated through in-the-loop experiments, where we show that SCE can be used to effectively repair an ML model by reducing its reliance on spurious correlations and thereby improving its overall accuracy.
Figure 1: Proposed desiderata-driven approach to counterfactual explanation, emphasizing ‘fidelity’ (the need to focus on the principal variations of $f$ ), ‘understandability’ (the need to align with the basis used for interpretation), and ‘sufficiency’ (the need to generate diverse counterfactuals). Compared to classical approaches based on minimizing distance on manifold, our desiderata-driven design of counterfactual explainers leads to more reliable and distinct explanations of what causes a change in class (in this example, the classifier changes its decision from ‘blond’ to ‘non-blond’ by darkening the hair or adding a beard).
# 2. Related Work
Visual Counterfactual Explainers. A variety of approaches have been proposed for producing counterfactuals of image classifiers, each aiming to overcome various challenges with the optimization problem itself and with the quality of the resulting counterfactuals. Diffeomorphic Counterfactuals (DiffeoCF) [27] and DiVE [35] aim to produce counterfactuals that remain on the data manifold by using a generative model, specifically an invertible latent space model. The Adversarial Visual Counterfactual Explanations (ACE) [28] method uses gradients filtered through the noising/denoising process of a diffusion model [46] to avoid moving in adversarial directions. It also uses a combination of L1 and L2 regularization between factual and counterfactual, and RePaint [47] as post-processing, to keep pixel-wise changes to a minimum. Unlike ACE, which uses one noise level and re-encodes the current counterfactual at each iteration, DVCEs [48, 49], DiME [50], and FastDiME [51] encode only once at the beginning to a latent state that is then updated based on classifier guidance [52]. TIME [53], GCD [54], and DiffEx [55] are also diffusion-based visual counterfactual methods, but they are not based on the gradients of the classifier and are weaker on the metrics. Compared to existing VCEs, our proposed approach combines a broader set of counterfactual generation mechanisms, with the aim of fulfilling more holistic desiderata.
Evaluating Explanations. A major challenge in designing explanation techniques is the issue of evaluation. Ground-truth explanations are rarely available, and the notion of a good explanation depends on the use case. The question of evaluation has been central, with early foundations including [56] and [45], which proposes practical desiderata for explanation techniques. While the evaluation of explanations has been extensively covered for attribution methods (e.g. [57, 58, 59, 60,
61, 62]), there have been comparatively fewer such studies in the context of visual counterfactual explanations. Guidotti et al. [63] proposed plausibility for linear models, which is similar to our concept of fidelity. Mothilal et al. [25] proposed validity (a specific aspect of what we call fidelity), diversity, and sparsity in the context of linear counterfactuals on tabular data. [64] considers both sparsity and interpretability for visual counterfactuals; two measures (IM1 and IM2) of closeness to the data manifold were proposed for the latter purpose. DiffeoCF [27] focuses on IM1. On CelebA, DiVE [35] proposed to measure sparsity by counting the mean number of attributes changed (MNAC) in a counterfactual. DiME [50] introduced measures of image quality based on the FID and/or sFID scores of the counterfactuals, as well as a wide range of generic or domain-informed minimality metrics that DVCEs, ACE, FastDiME, TIME, and GCD also use. DiME is the only one that measures diversity, by computing the LPIPS between multiple counterfactuals for the same factual. In summary, while metrics similar to the ones suggested by us have been proposed in the context of tabular data, previous work on visual counterfactual evaluation has predominantly focused on minimality and adherence to the data manifold/image quality, with the latter instantiated in an ad hoc fashion without deriving from formal desiderata.
# 3. Desiderata of Counterfactuals
With the aim of enhancing the usefulness of visual counterfactual explainers (VCEs), we propose to take as a starting point the holistic explanation desiderata formulated by Swartout and Moore (1993) [45]. These desiderata, originally proposed in the context of explaining expert systems, are ‘fidelity’, ‘understandability’, ‘sufficiency’, ‘low construction overhead’, and ‘[runtime] efficiency’. In the following, we contribute an instantiation of these desiderata, specifically the first three, to counterfactual explanations.
# 3.1. Fidelity
For an explanation to be useful, it should faithfully describe what the model does. As noted in [45], an incorrect or misleading explanation is worse than no explanation at all. Translating this desideratum to counterfactual explanations, one requires that (1) the transformation of the data $x$ into its counterfactual $\widetilde{x}$ is plausible in practice, and that (2) the classifier $f$ responds strongly to this transformation and flips the classification. Most state-of-the-art VCEs, such as DiffeoCF [27], DiVE [35], ACE [28], DiME [50], and FastDiME [51], ensure the first aspect by limiting the search for $\widetilde{x}$ to the data manifold $\mathcal{M}$ through a generative model. However, they do not address whether the transformation $x \mapsto \widetilde{x}$ is associated with a robust model response that allows crossing the decision boundary. Selecting any on-manifold transformation that flips the classifier (e.g. a minimal one) risks producing an adversarial example (cf. [65, 66]), as the counterfactual search may leverage spurious local variations in the classification function $f$ that are not representative of how $f$ varies more globally. Spurious local variations are commonplace in image-based classifiers (e.g. [67, 68, 3, 69]), and also occur along the data manifold $\mathcal{M}$ [66]. Most existing counterfactual methods, such as DiffeoCF, DiVE, ACE, DiME, and FastDiME, lack a mechanism to address potential spurious variations on the manifold.
# 3.2. Understandability
Understandability refers to whether an explanation is understandable to its intended recipient, who is usually a human. Explanations should be presented at an appropriate level of abstraction and be concise enough to be quickly assimilated. In the context of counterfactual explanations, understandability can be characterized by the sparseness of the transformation $x \mapsto \widetilde{x}$, which allows only a few features to change between $x$ and $\widetilde{x}$ in an interpretable latent space. Mechanisms for sparseness can be found in several state-of-the-art VCEs, such as ACE [28], where sparseness is enforced through a combination of $\ell_1$ losses between the factual and the counterfactual, or the so-called RePaint function [47], which resets specific image regions of $\widetilde{x}$ to the original pixel values found in $x$. These sparsity mechanisms help make the counterfactual explanation more concise, thereby increasing its understandability.
# 3.3. Sufficiency
In the context of explaining an ML model, sufficiency can be interpreted as an explanation’s ability to provide enough information to make it actionable. In the context of counterfactuals, the sufficiency of an explanation can be promoted by generating for each instance $x$ a diverse set of counterfactuals, highlighting the multiple, potentially interdependent factors that influence the decision function. This detailed account of the potentially many local effects is crucial for explanation-based model improvement, in order to address the potentially multiple model flaws and build in the desired invariances. Diversification mechanisms can be found in the context of linear models on tabular data, with [70] proposing to actively prevent the same features from being updated twice in two different counterfactual generation steps. However, such mechanisms to induce diversity are largely missing in existing visual counterfactual explainers such as DiffeoCF, ACE, and DiME, except for the possibility of rerunning counterfactual generation multiple times with different seeds.
# 4. Smooth Counterfactual Explorer (SCE)
We propose a novel counterfactual explainer that exhaustively addresses the desiderata of Section 3 through the incorporation of mechanisms found in existing counterfactual generation methods as well as novel mechanisms such as smooth distilled surrogates and lock-based diversifiers, which we introduce below. Our SCE approach is illustrated in Fig. 2. The detailed sequence of computations that SCE performs is given in Algorithm 1. Table 1 shows how our SCE approach differs from existing methods in terms of implemented mechanisms and how these mechanisms help fulfill the desiderata.
Inducing Fidelity. Similar to methods such as DiffeoCF [27], ACE [28], and DiME [50], we ensure a focus on plausible data transformations $x \mapsto \widetilde{x}$ by performing counterfactual search on the data manifold $\mathcal{M}$. Specifically, we pass the data through a denoising diffusion probabilistic model (DDPM) [46] that iteratively converts a noisy input into a denoised version, i.e. $\mathrm{DDPM}(x) = (d \circ \cdots \circ d)(x + \varepsilon)$, where $d$ is the decoding function. Propagating the gradient through the decoding stack suppresses the part of the gradient $\nabla f$ that does not align with $\mathcal{M}$. To address the additional (and so far unsolved) problem of on-manifold spurious function variations, we propose conceptually to smooth the classifier’s gradient field $\nabla f$, i.e. $\vec{g} = (\nabla f) * k$, where $k$ is the smoothing kernel. To make this computationally feasible, we perform smoothing indirectly, by distilling $f$ into a surrogate model $\widehat{f}$ which we train to match $f$ under data augmentation (an adversary [71], MixUp [72], and label smoothing [65]). To facilitate smoothing, we equip the student model with tailored nonlinearities combining LeakyReLU and Softplus. Performing counterfactual search on the gradient of the smoothed function $\widehat{f}$ allows SCE to avoid getting trapped in on-manifold local minima and to emphasize global variations of the classifier over local ones.
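A hypothetical sketch of how one distillation training pair for the surrogate $\widehat{f}$ could be formed, combining MixUp and label smoothing on the teacher's soft outputs; the `teacher` interface and all hyperparameters are our assumptions, not the paper's exact recipe.

```python
import numpy as np

def distillation_targets(x1, x2, teacher, rng, eps=0.1):
    """Build one MixUp + label-smoothing training pair for the
    surrogate f_hat: mix two inputs, mix the teacher's soft outputs,
    then smooth the mixed target toward the uniform distribution.
    `teacher` maps an input to a probability vector (assumed API)."""
    lam = rng.beta(0.2, 0.2)                  # MixUp mixing coefficient
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * teacher(x1) + (1 - lam) * teacher(x2)
    k = y_mix.shape[0]
    y_smooth = (1 - eps) * y_mix + eps / k    # label smoothing
    return x_mix, y_smooth
```

Training the student on many such `(x_mix, y_smooth)` pairs (e.g. with a cross-entropy loss) yields a smoother approximation of the teacher's decision function.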
Figure 2: Illustration of our proposed Smooth Counterfactual Explorer (SCE). It includes a diffusion model (DDPM) to project the data and gradients onto the manifold $\mathcal{M}$, a distillation of the original model into a surrogate model with smoother gradients, a sparsification of the transformation $x \mapsto \widetilde{x}$, and a lock-based diversifier forcing the currently generated counterfactual to differ from those generated previously. In the given example, we assume that the model $f$ responds to three features (‘hat’, ‘bowtie’ and ‘background’). The bowtie is prevented from being removed in the current counterfactual because it was removed in the previous one. Of the remaining possible removals (hat and background), only the hat is removed in order to implement counterfactual sparsity.
Inducing Understandability. Similar to ACE and FastDiME, we induce sparsity in SCE using the RePaint function and an $\ell _ { 1 }$ penalty between the original and the counterfactual. Unlike ACE, which applies the RePaint function only after the counterfactual search ends, we apply it at each step of the counterfactual generation procedure. The proposed repeated application of the RePaint function in SCE enables us to perform the whole counterfactual search in the space of sparse counterfactuals, and to avoid that the last repainting step introduces sparsity at the expense of other desirable properties of the counterfactual $\widetilde { x }$ .
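The per-step repainting described above can be sketched as follows; the mask convention and the $\ell_1$ coupling in `sparse_step` are our illustrative simplifications, not the exact SCE update rule.

```python
import numpy as np

def repaint(x_cf, x, mask):
    """RePaint-style sparsification: keep counterfactual pixels only
    inside `mask` (regions allowed to change) and reset all other
    pixels to the factual image x."""
    return mask * x_cf + (1 - mask) * x

def sparse_step(x_cf, x, grad, mask, lr=0.1, l1=0.01):
    """One counterfactual update with an l1 pull toward x, followed
    by repainting, so the search stays in the space of sparse
    counterfactuals at every step (simplified sketch)."""
    x_new = x_cf - lr * (grad + l1 * np.sign(x_cf - x))
    return repaint(x_new, x, mask)
```

Applying `repaint` inside every iteration, rather than once at the end, is what distinguishes this scheme from ACE's post-hoc repainting.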
Table 1: Comparison of existing counterfactual methods and our SCE approach in terms of their mechanisms for fulfilling explanation desiderata. $\mathcal{M}$ denotes a manifold projection mechanism (e.g. a generative model), $\ell_1$ and the RePaint symbol denote the addition of $\|x - \widetilde{x}\|_1$ as a penalty term and the RePaint function, respectively, $\widehat{f}$ denotes our proposed function smoothing, and the lock symbol denotes our proposed lock-based diversification mechanism.
Inducing Sufficiency. Our goal is to surpass the implicit diversification mechanism of ACE, DiME, and FastDiME, which rely on random factors in the optimization procedure, by actively pulling apart the different generated counterfactuals. We propose a novel lock-based diversifier, where counterfactual updates are only allowed to follow directions that differ from previously generated counterfactuals. This locking mechanism is enforced in the gradient descent step in both latent and pixel spaces. (In pixel space, this is achieved through the RePaint function.) As a final step, the sequence of generated counterfactuals is clustered and ranked based on counterfactuals obtained from the entire dataset. Specifically, reranking involves clustering the counterfactual direction vectors in latent space using k-means, with a cosine similarity variant. The clusters are then ranked according to the average of direction vectors within each cluster (the cluster with the largest vector is ranked first). Finally, the counterfactuals are ranked according to the rank of the clusters to which they belong.
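A minimal sketch of the locking idea in latent space: components of the update that align with previously generated counterfactual directions are projected out, so the new counterfactual is pushed to differ from earlier ones. This is our simplified rendering of the mechanism, not the exact implementation.

```python
import numpy as np

def lock_update(grad, prev_dirs):
    """Lock-based diversifier sketch: remove from the update `grad`
    the components along previously generated counterfactual
    directions `prev_dirs` (Gram-Schmidt-style projection)."""
    g = grad.copy()
    for d in prev_dirs:
        d = d / (np.linalg.norm(d) + 1e-12)
        g = g - (g @ d) * d               # project out the locked direction
    return g
```

With an empty lock list the update passes through unchanged; each generated counterfactual then appends its direction vector to the list before the next search begins.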
Computational Aspects. To generate a set of up to $K$ counterfactuals associated with one data point $x$, and denoting by $T$ the number of gradient descent steps per counterfactual, SCE requires up to $K \times T$ forward and backward passes through the generative model and the same number of forward passes for the RePaint in the sparsifier. Typically, the value of $K$ in SCE is low (between 1 and 2 in our experiments), as only a few distinct counterfactuals can be generated for a given factual $x$. Per counterfactual, SCE has the same complexity $T$ as ACE. However, compared to ACE, the amount of computation varies in the following way: (1) After every iteration, SCE checks whether the output of the classifier $f$ has flipped and stops early if it has, whereas ACE typically performs gradient descent for a fixed number of iterations. (2) Due to the smoother gradients from our gradient smoothing, the counterfactual search of SCE converges faster than that of ACE. (3) Compared to ACE, SCE includes the initial phase of building the distilled model $\widehat{f}$ before being able to compute counterfactuals.
# 5. Experimental Setup
In this section, we first describe the collection of datasets and tasks on which we conduct our experiments. Then, we propose a set of metrics to measure whether the tested counterfactual explainers fulfill the desiderata of Section 3. Next, we perform benchmark experiments comparing our SCE method with a set of existing VCEs, specifically ACE, DiME, and FastDiME, as well as an ablation study that tests the necessity of each component of SCE. Our quantitative results are complemented with qualitative results, where we provide a visual interpretation of the advantages of our SCE approach.
# 5.1. Datasets and Models
We select various models and classification tasks, ranging from natural to purely synthetic. Our selection aims to verify the robustness of SCE in a broad range of settings, and to provide a variety of ground truths for evaluation purposes. For all our experiments, we used as a generative model the DDPM implementation of Dhariwal et al. [52].
First, we consider a ResNet18 [73] with weights from torchvision fine-tuned on the CelebA dataset [74]. The CelebA dataset is a collection of face images, each of which is categorized by 40 different attributes. We fine-tune the torchvision weights on the CelebA ‘smiling’ attribute, and use the resulting model for the task of generating smiling to non-smiling counterfactuals and vice versa. We repeat the experiment for the ‘blond’ attribute, which differs from the smiling attribute by its significantly larger pixel footprint. One advantage of the CelebA dataset is that ground-truth latent factors can be generated. In our experiments, we use the 40 logits of a CelebA-pretrained DenseNet$^1$ for that purpose. Additionally, we run experiments on an augmented version of CelebA, where we add a ‘copyright tag’-like visual artifact to the bottom right corner, in a way that it spuriously correlates with the ‘smiling’ attribute (see Supplementary Figure 1). This semi-synthetic dataset provides ground truth for our evaluation, as the location of the injected copyright tag is known. Specifically, we have access to segmentation masks that separate the foreground from the copyright tag location. We also repeat the same experiment with a different model: a vision transformer [75].
Next, we consider a ResNet18 with weights from torchvision, fine-tuned on a purely synthetic dataset called ‘Square’. The Square dataset (see Supplementary Figure 2) consists of a simple 4-dimensional manifold embedded in image space. The four latent dimensions are the intensity of the foreground square, the intensity of the background, and the x- and y-position of the foreground square. A spurious correlation is introduced between the (relevant) foreground square’s intensity and the (irrelevant) background’s intensity. This spurious correlation, combined with the background’s higher saliency (high pixel footprint), causes the model to learn a Clever Hans strategy based on the background (cf. [43] for a related study). This dataset provides direct access to the latent features, namely the square’s position and the background/foreground intensities, enabling further scrutiny of the correctness of the counterfactuals.
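A sample of this kind can be rendered from its four latents roughly as follows; the image size, square size, and value ranges are our assumptions, not the dataset's exact specification.

```python
import numpy as np

def square_image(fg, bg, x, y, size=32, sq=8):
    """Sketch of a 'Square'-style sample: a background of intensity
    `bg` with an sq x sq foreground square of intensity `fg` whose
    top-left corner sits at column x, row y."""
    img = np.full((size, size), bg, dtype=float)
    img[y:y + sq, x:x + sq] = fg
    return img
```

A spuriously correlated dataset is then obtained by sampling `fg` and `bg` jointly (e.g. bright foreground implying bright background) while labels depend on `fg` only.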
Finally, we consider the ResNet18 with weights from torchvision, this time fine-tuned on a subset of the Camelyon17 dataset. This subset introduces a spurious correlation between the histopathological patch type (benign or malignant) and the hospital from which the patch originates. This simulates a situation in which all malignant samples come from one hospital and all benign samples come from another. We access the two-dimensional semantic latent space by training an oracle model on the full dataset, which has no such correlation, to predict both the patch type and the hospital. We then use the oracle’s logits as the latent representation.
# 5.2. Metrics for Testing the Desiderata
We now introduce several metrics for testing the fulfillment of the desiderata discussed in Section 3. As a starting point, we need to test whether a given method is capable of generating counterfactuals at all. We quantify this by the ‘flip rate’ $\mathrm{FR} = N_{\mathrm{flipped}} / N_{\mathrm{total}}$, where $N_{\mathrm{flipped}}$ is the number of counterfactuals that flipped the decision of the explained predictor and $N_{\mathrm{total}}$ is the number of data points where a counterfactual was attempted. The following metrics focus on examples that are successfully flipped. First, we want to verify that the produced counterfactuals implement fidelity, i.e. follow the main variations of the prediction function. To verify specifically that counterfactuals robustly cross the decision boundary, we test our predictions against a new model distilled from the original model, where we use a different seed for the weight initialization. This allows us to verify that generated counterfactuals are not fooled by weight-specific adversarial attacks, and can be quantified by the non-adversarial rate (NA):
$$
\mathrm { F i d e l i t y { : } N A } = N _ { \mathrm { f l i p p e d \ i n \ d i s t i l l e d } } / N _ { \mathrm { f l i p p e d } }
$$
The latter counts how often the decision boundary is also crossed in the distilled model. The higher the NA score, the more faithful the counterfactual. We consider a second metric of fidelity that verifies that the counterfactual explainer focuses on primary variations. First, we identify the dominant feature (e.g. with a large pixel footprint) and compute the rate $R_{\mathrm{true}}$ at which it dominates the competing feature (e.g. with a small pixel footprint). Then, we compute the rate $R_{\mathrm{actual}}$ at which the dominant feature changes in the counterfactual. Our proposed ‘dominant rate’ metric is then defined as the ratio of these two quantities:
$$
\mathrm{Fidelity{:}DR} = R_{\mathrm{actual}} / R_{\mathrm{true}}
$$
As a test for understandability, we define an encoding function $e$ mapping the data to some meaningful latent space (e.g. the 40-dimensional space of attributes in CelebA) and represent the difference between factual and counterfactual as $\Delta = e(x) - e(\widetilde{x})$. We then quantify understandability as the sparsity of the difference vector $\Delta$:
$$
\mathrm{Sparsity} = 1 - \mathrm{avg}(|\Delta|) / \mathrm{max}(|\Delta|)
$$
where $|\cdot|$ applies element-wise, ‘max’ takes the maximum over this vector, and ‘avg’ calculates the average over this vector, excluding the maximum. A score of 0 indicates a maximally non-sparse (i.e. constant) vector, and a score of 1 indicates a maximally sparse vector. The sparsity score is then averaged over all generated counterfactual instances. We then test whether the counterfactual techniques achieve sufficiency, i.e. produce a sufficiently rich explanation, by measuring the diversity of the counterfactuals they produce for each data point. As for sparsity, we measure diversity on the dataset-specific latent differences. Specifically, we generate two counterfactuals for each original sample and compute:
$$
\mathrm{Diversity} = 1 - \mathrm{CosSim}(\Delta_1, \Delta_2)
$$
where $\Delta_1$ and $\Delta_2$ denote the differences (in latent space) between the factual and the two generated counterfactuals.
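Eqs. (4) and (5) translate directly into code; the sketch below operates on latent difference vectors as defined above.

```python
import numpy as np

def sparsity(delta):
    """Eq. (4): 1 - avg(|delta|)/max(|delta|), where the average
    is taken over all entries excluding the maximum one."""
    a = np.abs(delta)
    i = a.argmax()
    rest = np.delete(a, i)
    return 1.0 - rest.mean() / a[i]

def diversity(d1, d2):
    """Eq. (5): 1 - cosine similarity of two latent difference
    vectors obtained from two counterfactuals of the same factual."""
    cos = (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return 1.0 - cos
```

A one-hot difference vector scores sparsity 1, a constant vector scores 0; orthogonal counterfactual directions score diversity 1, identical ones score 0.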
# 5.3. In-the-Loop Gain
To evaluate the actionability of the different VCEs, we integrate them in the recently proposed CFKD [36] evaluation framework. CFKD simulates an environment containing a VCE agent and a user agent. The simulated user agent is given oracle knowledge, receives the generated counterfactuals as input, and outputs data consolidation proposals on which the function $f$ can be retrained. We apply this evaluation to all settings where the data has a spurious correlation (see Section 5.1) and the model an associated Clever Hans strategy. The objective we set for the VCE is to enable the model to get rid of its Clever Hans strategy through meaningful data consolidation steps. The performance metric in the CFKD environment is the unpoisoned test accuracy (i.e. where the data is stripped of its artificial spurious correlation). We compare the unpoisoned test accuracy before and after applying CFKD and calculate the Gain, i.e. how much less likely the model is to make a classification error after running CFKD:
$$
\mathrm{Gain} = \frac{\mathrm{Acc}_{\mathrm{after}} - \mathrm{Acc}_{\mathrm{before}}}{\mathrm{Err}_{\mathrm{before}}}
$$
Positive Gain indicates that the explanations contain useful information for identifying and fixing errors in a model. Our in-the-loop gain evaluation can also be viewed as a simulation of a human study, with the difference that the user is modeled as an oracle and the study is fully reproducible. Furthermore, measuring performance gain rather than relying on subjective human feedback prevents logical fallacies in the study design, ensuring that explanations useful for improving the model are ranked higher than those that the user merely expects.
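As a minimal sketch (not the CFKD framework itself), the Gain reduces to a one-liner once we note that $\mathrm{Err}_{\mathrm{before}} = 1 - \mathrm{Acc}_{\mathrm{before}}$, so Gain is the fraction of the model's previous error eliminated by the feedback loop:

```python
def gain(acc_before, acc_after):
    """Gain = (Acc_after - Acc_before) / Err_before.
    1.0 means all previous errors were fixed; 0.0 means no change;
    negative values mean the consolidation made the model worse."""
    err_before = 1.0 - acc_before
    return (acc_after - acc_before) / err_before
```

For example, going from 80% to 90% unpoisoned accuracy halves the error, giving a Gain of 0.5.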
# 6. Results
Using the quality metrics above, we compare four VCEs: ACE [28], DiME [50], FastDiME [51], and the proposed Smooth Counterfactual Explorer (SCE). We first look at counterfactual quality in terms of the fidelity, understandability, and sufficiency desiderata, as quantified by Eqs. (2)–(5). Results are shown in the corresponding columns of Table 2. We also report the nominal flipping rates (column ‘FR’) for reference. Examining the Fidelity:NA scores, we observe that our SCE approach performs robustly across all datasets and ranks first on 5 out of the 6 considered datasets. This highlights the higher immunity of SCE to adversarial counterfactuals. ACE fares best on CelebA-Smile but tends not to exhibit robust performance on other datasets. On the Fidelity:DR metric, we similarly observe the higher robustness of SCE at systematically focusing on dominant features (with a large pixel footprint). In comparison, ACE, DiME, and FastDiME are less predictable and, on some datasets, tend to systematically turn to secondary features (with a small pixel footprint). In terms of understandability, SCE performs on par with ACE, which can be explained by their common sparsity mechanism based on the RePaint function. The sparsity capabilities of SCE and ACE also appear significantly more robust than those of DiME and FastDiME. Looking at explanation sufficiency, SCE scores above competitors on the diversity metric by a wide margin, which can be attributed to our newly proposed lock-based diversifier and the absence of comparable mechanisms in other counterfactual explainers. Considering then our Gain metric, SCE again outperforms the other VCEs by a wide margin on all datasets. We see this as a natural consequence of fulfilling the various explanation desiderata, and it also demonstrates that our desiderata-driven approach is particularly effective at producing actionable explanations. Lastly, the performance of SCE appears fairly stable w.r.t. 
the choice of model to explain, as shown by similar qualitative results when replacing ResNet-18 by a ViT-16B model.
Table 2: Desiderata-driven evaluation of our proposed SCE approach. We compare our approach to three competing approaches across six different dataset/architecture combinations. The four numerical columns in the middle quantify the extent to which each desideratum is fulfilled according to Eqs. (2)–(5). Higher is better. Additionally, the leftmost numerical column provides nominal flip rates for indicative purposes. The rightmost column shows the end-to-end metric ‘gain’ which is described in Section 5.3. Higher scores indicate more actionable explanations.
Ablation Study. To verify the combined importance of the four mechanisms contained in SCE, namely the gradient filtering through the DDPM generative model (projection onto the data manifold $\mathcal{M}$), the sparsifier $(\ell_1 + \pmb{\mathscr{G}})$, the smooth distilled surrogate $(\widehat{f}\,)$, and the lock-based diversification mechanism $\mathbf{\eta}(\mathbf{a})$, we perform an ablation study where each of these mechanisms is removed individually. The results are shown in Table 3 for the Square dataset. We observe that combining these four mechanisms is necessary to fulfill the explanation desiderata. In particular, removing the data manifold projection mechanism greatly exposes the counterfactual generator to adversarial counterfactuals, as shown by a significant drop in the NA fidelity metric, and also a reduction in the overall gain metric. Deactivating the sparsifier leads to an expected decrease in the sparsity score, a sharp reduction in diversity scores, and a significant reduction in the gain metric. Deactivating the smoothing mechanism breaks the counterfactual explainer, with the latter now failing to generate counterfactuals (as shown by an FR close to $2\%$). Finally, as expected, deactivating the diversifier (and replacing it with a basic randomized diversifier) incurs a sharp drop in diversity and a noticeable, although less drastic, reduction in the gain metric.
Table 3: Ablation study on the ‘square’ dataset. The first four rows show the effect of deactivating each SCE mechanism individually. The last row corresponds to our original proposed SCE approach. The evaluation metrics are the same as in Table 2. The best results are in bold. The setting without smoothing $\widehat{f}$ is excluded from our ranking due to its extremely low flipping rate (FR).
Qualitative Comparison. We present some qualitative results for the analyzed methods in Figure 3. Here, we select two examples for each dataset (one from each class). To visualize the diversity capability of SCE, we generate as many counterfactuals as can be generated under the constraints imposed by SCE’s diversifier mechanism, and these counterfactuals are ordered from left to right according to the result of the cluster-and-rank procedure. We observe that all methods work quite well for flipping the ‘smiling’ attribute. However, the difficulty increases for the other four tasks as the distance between classes also increases. While ACE is often nominally successful at flipping counterfactuals, its lack of adversarial robustness leads it to generate instances that are not easily distinguishable from the original images, potentially misleading a human’s understanding of the model. DiME and FastDiME yield results qualitatively similar to ACE. In contrast, SCE makes semantically more distinct changes to the factual, e.g., successfully changing the background of the squares dataset. The ability of SCE to produce a diverse set of counterfactuals, often revealing the multiple strategies contained in the classifier, is also highlighted. For example, on the smiling task, SCE reveals two different ways of reducing the smile, either by covering it with heavy makeup or by outlining the face with black color without touching the smile. Similarly, in the ‘blond hair’ task, the ‘blond’ attribute seems to be modifiable either by an actual color change or by tampering with the male/female features, thereby revealing a Clever Hans strategy [41]. | Visual counterfactual explainers (VCEs) are a straightforward and promising approach to enhancing the transparency of image classifiers. VCEs complement other types of explanations, such as feature attribution, by revealing the specific data transformations to which a machine learning model responds most strongly. 
In this paper, we argue that existing VCEs focus too narrowly on optimizing sample quality or change minimality; they fail to consider the more holistic desiderata for an explanation, such as fidelity, understandability, and sufficiency. To address this shortcoming, we explore new mechanisms for counterfactual generation and investigate how they can help fulfill these desiderata. We combine these mechanisms into a novel 'smooth counterfactual explorer' (SCE) algorithm and demonstrate its effectiveness through systematic evaluations on synthetic and real data. | [
"cs.LG",
"cs.CV"
] |
# I. INTRODUCTION
Virology laboratories play a critical role in infectious disease research, diagnostics, and response [1] [2]. However, these high-containment environments expose researchers to significant occupational risks, including accidental contamination, direct contact with hazardous biological agents, and repetitive strain injuries due to manual procedures [3] [4]. Despite strict biosafety protocols, reports indicate that nearly $70 \%$ of infectious lab accidents stem from human error, such as pipetting, culturing, and sample transfer mishaps [5] [6]. The integration of robotic systems into biomedical laboratories has emerged as a promising solution to mitigate these risks [7]. Robotic arms can automate precision-driven processes, thereby enhancing reproducibility and reducing human exposure to infectious materials [8]. Nevertheless, their adoption remains limited in clinical and research settings, with only an estimated $5 \%$ of hospitals globally deploying robotic solutions [9]. A key barrier is the lack of intuitive human-robot interaction (HRI) interfaces and cost-effective platforms that allow seamless integration into existing workflows [10]. In parallel, immersive technologies such as augmented reality (AR) and virtual reality (VR) are reshaping the landscape of medical training and procedural simulation [11]. VR-based platforms have been shown to improve user engagement, reduce cognitive load, and offer safe, high-fidelity environments for skill acquisition [12]. The increasing availability of low-cost, standalone VR headsets—such as the Oculus Quest 2—has further democratized access to immersive systems [13]. These devices offer advanced hand-tracking, high-resolution displays, and untethered interaction capabilities, making them suitable candidates for gesture-based control of robotic systems [14]. 
VR systems, when used in conjunction with laboratory robotics and AR, allow researchers to practice in virtual environments that replicate real-world complexities without the associated risk [15]. The integration of Robot Operating System (ROS) and Jetson Nano in 5-Degree Of Freedom (DOF) robotic arms has been effectively demonstrated in various research applications, particularly in laboratory settings [16]. However, existing VR-robotic interfaces are often constrained to training-only environments or limited to industrial use cases [17].
# II. LITERATURE REVIEW
Advancements in virtual reality (VR), mixed reality (MR), and robotics have significantly contributed to automation, teleoperation, and training in high-risk biomedical environments [18] [19]. Several studies have explored the integration of robotic systems to enhance biosafety and reduce manual handling of infectious samples [20] [21] [22]. Angers et al. [23] proposed an automated pipeline for sample preparation and testing, minimizing direct exposure for laboratory personnel. However, their closed-loop design lacked flexibility, as each system required hardware-specific adaptation for different pathogens. Moreover, manual sample insertion was still necessary, limiting full autonomy. To support real-time robotic simulation and interaction, Babaians et al. [24] introduced a Unity3D [25]-based platform with ANet middleware, employing ZeroMQ and Google Protobuf for efficient communication. Unity–ROS integration has proven highly effective for real-time robotic simulation, with Platt et al. [26] and Kuts et al. [27] highlighting its low latency and high precision in digital twin frameworks. While this enabled the generation of realistic robotic sensor data for training AI models, it introduced complexity in software dependencies and required continual updates to remain stable. Chinnasamy et al. [28] and Dey et al. [29] implemented a ROS–Gazebo digital twin and identified significant synchronization lags between virtual and physical models, resulting in reduced motion repeatability—highlighting the need for more robust closed-loop architectures. Lundeen et al. [30] reviewed ROS–Unity and VR integrations, demonstrating that stability and timing drift continue to limit reliability in immersive teleoperation systems. In parallel, immersive control interfaces have gained attention for improving human-robot interaction [31] [32]. Rosen et al. [33] demonstrated that MR-based head-mounted displays (HMDs) can reduce task completion time compared to conventional 2D interfaces. 
However, positional drift due to SLAM inaccuracies was found to degrade control reliability during prolonged use. Tsai et al. [34] applied VR simulations to virological testing, enhancing learning engagement, but noted that developing high-fidelity 3D assets was labor-intensive and resource-heavy, posing challenges for scalability. Despite substantial progress in robotic motion planning, Chan et al. [35] identified persistent challenges in achieving precise trajectory control, especially in environments with curved paths or tight spatial constraints. Santos et al. [36] highlighted the inefficiency of manual robot programming via Teach Pendants [37], further reinforcing the need for automated, adaptive interfaces. Collectively, these studies demonstrate the potential of VR and robotics to transform laboratory practices, but also reveal several limitations—namely, the lack of real-time adaptive learning, dependence on manual intervention, and limited flexibility in constrained environments. Existing systems often separate training from execution, rely on scripted control, or incur high development costs for immersive content. Hence, to overcome these limitations, we propose GAMORA: a Gesture-Articulated Robotic Arm system that integrates VR-based control, ROS infrastructure, and reinforcement learning to enable real-time, user-guided automation in biosafety lab environments.
# III. PROPOSED SYSTEM
To bridge the gap in VR-enabled robotic systems for biosafety lab environments, we present a novel architecture shown in Fig. 1, designed to enhance safety, precision, and training in virology laboratories. The workflow begins with the development of a 3D virtual lab environment, enabling immersive simulations. The Oculus Quest 2 is integrated to provide gesture-based interaction and control, allowing users to operate the system in a risk-free, virtual setting. Jetson Nano serves as the onboard processor, managing sensor data and control commands. The Robot Operating System (ROS) facilitates seamless communication between the VR interface and the robotic arm. Data from user actions is transmitted in real-time to execute high-precision tasks using inverse kinematics. This modular pipeline ensures accurate, repeatable manipulation of infectious samples while minimizing direct human exposure and improving training scalability.
Fig. 1: System Workflow
# A. Hardware Setup
The proposed hardware setup, shown in Fig. 2, is designed to safely link the hazardous lab environment with the operator’s remote interface. Within the hazardous zone, a 5-DOF robotic arm performs critical handling of infectious materials, guided by commands processed on a Jetson Nano and actuated via an Arduino DUE. A Ricoh Theta SC2 camera provides live video feedback to the operator through the Oculus Quest 2 VR headset, enabling immersive monitoring without physical exposure. In the operator’s environment, a digital twin of the robotic arm runs in ROS on Ubuntu, synchronized with the physical arm via the /joint_states topic. Communication between environments is achieved through a 5GHz WiFi and Bluetooth connection. This integrated setup supports real-time control and enhances biosafety, while also ensuring system scalability and long-term reliability in virology lab automation.
Fig. 2: Hardware Setup
# IV. METHODOLOGY
# A. Creation of the Virtual Environment
The virtual environment for GAMORA was developed following a structured pipeline, as illustrated in Fig. 3, aimed at achieving a high-fidelity, interactive simulation space suitable for virology lab training and teleoperation. We began by collecting detailed spatial references—2D floor plans, images, and physical measurements—from the target lab environment. These inputs informed the creation of accurate 3D models representing essential workspace elements. Once generated, the 3D assets were imported into Unity 3D, ensuring compatibility with the rendering engine and physics system. Objects were then positioned precisely using Unity’s coordinate system to match their real-world spatial arrangement. The virtual layout was constructed with a modular approach—grouping elements based on functional zones or task relevance to ensure clarity and ease of navigation. With the physical structure replicated, we proceeded to implement interaction features. Custom scripts in C# enabled user navigation and hand-based manipulation of virtual components. The Oculus Integration SDK for Unity was employed to support full VR functionality, including spatial tracking, gesture recognition, and controller input. The Oculus Quest 2 was selected as the target platform due to its high-resolution display, untethered operation, and robust hand-tracking capabilities. After initial deployment, the environment underwent iterative testing and optimization to ensure system stability, performance consistency, and accurate interaction mapping. The final result, shown in Fig. 4, is a fully immersive and interactive virtual lab environment that supports real-time control and training simulations.
Fig. 3: Process flow for creating virtual workspace
Fig. 4: Virtual Workspace
# B. Configuration of the Robotic Arm
The design of the GAMORA robotic arm followed a comprehensive digital-to-physical workflow, as shown in Fig. 5. The arm was first modeled using SolidWorks to define its kinematic structure, joint constraints, and end-effector geometry, as shown in Fig. 5a. The components were then fabricated using 3D printing, shown in Fig. 5b, ensuring accurate translation of the CAD model into a functional prototype. The arm utilizes high-torque RDS5180 servo motors ($80\,\mathrm{kg{\cdot}cm}$) to support precision manipulation and payload handling. A Jetson Nano 4GB serves as the primary control unit, enabling real-time computation, while a DC-DC converter provides regulated power to all electronic subsystems. ROS was employed as the middleware framework to manage communication, control, and data flow.
Fig. 5: GAMORA’s robotic arm: (a) digital design and (b) 3D-printed robotic arm
The robotic structure was defined using a Unified Robot Description Format (URDF), specifying link dimensions, joint types, and spatial relationships. ROS nodes were configured for motor control, trajectory planning, and sensor integration. RViz was used for real-time 3D visualization and debugging of motion planning and robot states, allowing for accurate alignment between the simulated and physical arm. This setup ensured modularity, precision, and repeatability—critical for remote operation in biosafety-sensitive environments.
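As a sketch of what such a description looks like, the fragment below shows a minimal URDF with one revolute joint. All link names, dimensions, and limits are placeholders for illustration, not values from the paper (e.g. the effort limit is only a rough N·m conversion of an 80 kg·cm-class servo rating):

```xml
<?xml version="1.0"?>
<robot name="gamora_arm">
  <!-- Hypothetical base link; real geometry comes from the SolidWorks model -->
  <link name="base_link">
    <visual>
      <geometry><cylinder radius="0.05" length="0.04"/></geometry>
    </visual>
  </link>
  <link name="shoulder_link">
    <visual>
      <geometry><box size="0.04 0.04 0.20"/></geometry>
    </visual>
  </link>
  <!-- Revolute joint about z; limits and effort are placeholder values -->
  <joint name="shoulder_pan" type="revolute">
    <parent link="base_link"/>
    <child link="shoulder_link"/>
    <origin xyz="0 0 0.04" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="7.8" velocity="1.0"/>
  </joint>
</robot>
```

MoveIt!, RViz, and the /joint_states publisher all consume this same description, which is what keeps the simulated and physical arms aligned.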
# C. Motion Planning and Initial Calibration
Motion planning for GAMORA was implemented using the MoveIt! framework within ROS, as shown in Fig. 6. MoveIt! enabled the generation of feasible, collision-free trajectories based on the robot’s URDF-defined kinematic model. Planning algorithms such as RRT (Rapidly-exploring Random Tree) and PRM (Probabilistic Roadmap Method) were used to explore the configuration space and determine optimal joint paths for reaching target poses. Inverse kinematics (IK) played a key role in this process, allowing the computation of joint angles required to position the end effector at a specific Cartesian coordinate. Fig. 7 illustrates the IK-based trajectory planning in an obstacle-filled workspace. MoveIt! utilized solvers like KDL to iteratively solve for joint angles while minimizing positional error and avoiding collisions. The following equations govern the computation of joint angles $\theta _ { 1 }$ and $\theta _ { 2 }$ for a planar 2-DOF manipulator:
Fig. 6: Hardware and software architecture with ROS packages
Fig. 7: Path planning in robotic arm using Inverse Kinematics
$$
\cos \theta_2 = \frac{x^2 + y^2 - L_1^2 - L_2^2}{2 L_1 L_2}
$$
$$
\sin \theta_2 = \mp \sqrt{1 - \cos^2 \theta_2}
$$
$$
\theta_2 = \operatorname{atan2}(\sin \theta_2, \cos \theta_2)
$$
$$
\theta_1 = \operatorname{atan2}(y, x) - \operatorname{atan2}(L_2 \sin \theta_2,\; L_1 + L_2 \cos \theta_2)
$$
where $x, y$ are the target coordinates of the end effector, and $L_1, L_2$ are the lengths of the arm segments. These equations allow the robot to compute joint angles dynamically based on hand gestures transmitted via the Oculus Quest 2 VR interface. The computed joint trajectories are then executed in real-time, ensuring safe and precise manipulation within constrained environments.
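The closed-form equations above can be sketched directly in code. This is an illustrative implementation of the textbook planar 2-DOF solution, not GAMORA's actual solver (the system uses MoveIt! with the KDL plugin); the `elbow_up` flag selects between the two sign branches of the $\mp$ term, and a forward-kinematics helper is included to verify solutions:

```python
import math

def ik_2dof(x, y, L1, L2, elbow_up=True):
    """Closed-form inverse kinematics for a planar 2-DOF arm.
    Returns (theta1, theta2) in radians, or None if (x, y) is
    outside the reachable annular workspace."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target unreachable
    s2 = math.sqrt(1.0 - c2 * c2)
    if elbow_up:
        s2 = -s2  # the two signs give the elbow-up / elbow-down branches
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * s2, L1 + L2 * c2)
    return theta1, theta2

def fk_2dof(theta1, theta2, L1, L2):
    """Forward kinematics, used here to check an IK solution."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y
```

Running the FK on the IK output should reproduce the requested Cartesian target to numerical precision, which is the same consistency check used when tuning a solver against a physical arm.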
# D. Object Detection Integration
Object detection was integrated into the GAMORA system using YOLOv8 with default pretrained weights. This allowed for the identification of objects and features within the virtual or physical workspace. The detected objects helped enhance the spatial awareness of the robotic arm and the user, particularly in planning interactions and verifying the correct placement or retrieval of items. This AI component complemented the VR-based control system by providing contextual understanding of the environment without requiring manual annotation or training. Finally, the robotic arm’s motion planning was tested and visualized in a simulated environment using tools such as RViz. By integrating the URDF model into RViz, we were able to visualize the arm’s kinematic structure and monitor its joint states and trajectory execution throughout the planning process. RViz enabled real-time observation of the paths generated by MoveIt!, helping evaluate how the robotic arm would interact with its environment. This allowed for effective validation of inverse kinematics, collision avoidance, and trajectory planning in a dynamic and realistic virtual workspace.
# V. EXPERIMENTAL SETUP AND SYSTEM TESTING
# A. Calibration Experiments
Calibration experiments for GAMORA evaluated visualization modes, control methods, and end-effector tools to optimize user interaction and task precision. Three visualization modes were tested. The basic view (no camera) offered minimal feedback, while a webcam provided clarity but lacked depth perception. A depth-sensing camera proved most effective, offering virtual zoom and enhanced spatial awareness—significantly improving operator immersion and accuracy in manipulation tasks. For control input, traditional mouse and keyboard lacked spatial intuitiveness. Hand-tracking via Oculus Quest 2 was tested but found imprecise for fine tasks. The Oculus Controllers delivered the best performance, offering real-time responsiveness and precise control of the arm’s end effector, making them the preferred interface for
GAMORA. Two end-effector tools were evaluated: a suction cup and a gripper. While the suction cup handled flat, lightweight items well, it required pneumatic support and struggled with heavier loads. The gripper was better suited for small objects like vials but lacked force feedback, limiting its use in delicate operations. These results highlight the need for further refinement of adaptive end-effector systems for diverse lab tasks.
# B. Iterative Testing, Optimization, and System Refinement
To validate GAMORA’s real-world performance, hardware-in-the-loop (HIL) testing was conducted by integrating the physical robotic arm with its ROS-based control software. This allowed the system to be evaluated under simulated deployment conditions without risking hardware damage. Sensor feedback, motor commands, and trajectory data were processed in real time to analyze the arm’s behavior and controller precision. Key performance metrics included joint angle accuracy, trajectory tracking, and control latency. Discrepancies between simulated and physical outcomes were used to iteratively refine both the kinematic model and control algorithms. Inverse kinematics solvers were tuned to reduce positional error, and control loop frequencies were optimized to enhance responsiveness. Sensor sampling rates and motion planning parameters were adjusted for improved real-time actuation. System debugging involved monitoring CPU/GPU utilization, ROS node latencies, and network performance. RViz was used extensively for visualizing joint states, sensor feedback, and system errors. Real-time updates to the URDF model and diagnostic feedback enabled continual calibration, ensuring stable and precise operation in safety-critical environments.
# C. Real-Time Feedback Integration and Final System Implementation
Real-time feedback in GAMORA was achieved by continuously monitoring sensor inputs—including joint encoders, force-torque sensors, and visual data from onboard cameras—and dynamically adjusting the robotic arm’s behavior through ROS. This allowed the system to respond adaptively to environmental changes and task requirements, improving precision and safety. The final system integration brought together all subsystems—hardware, control software, feedback mechanisms, and planning algorithms—into a unified and functional architecture. Actuators, sensors, and controllers were fine-tuned to operate seamlessly within the ROS ecosystem. Motion planning was executed using MoveIt!, while inverse kinematics solvers ensured accurate end-effector positioning. Full-system testing validated the synchronization between hardware and software, confirmed the system’s responsiveness, and resolved latency or feedback issues using ROS diagnostic tools. The integrated platform was assessed across multiple tasks, including object handling and assembly, demonstrating high stability, accuracy, and fault tolerance. These results confirmed GAMORA’s readiness for deployment in both laboratory automation and industrial settings.
# VI. RESULTS AND DISCUSSION
Pilot trials were conducted to assess the operational performance of the GAMORA system in critical virology laboratory workflows. Tasks such as specimen manipulation, pipetting, and sample transfer—typically requiring high precision and reproducibility—were used to evaluate system effectiveness. The robotic arm, controlled remotely via Oculus Quest 2 VR and equipped with a gripper end-effector, successfully handled delicate glass vials and executed precise fluid transfer. The system demonstrated consistent control stability and task accuracy across multiple stages, confirming its potential to reduce manual intervention and support safe, repeatable procedures in high-containment laboratory settings. The experiments were performed in several stages, each concentrating on various laboratory tasks:
# A. Specimen Handling and Placing
The GAMORA system was evaluated for its ability to handle and accurately place standard glass viral culture vials using a gripper-based end-effector. The system could reliably manipulate vials up to 20 grams, meeting the requirements for typical specimen containers used in virology labs. Performance metrics were as follows:
Positional Accuracy: Improved from $4.0\ \mathrm{mm}$ to $2.2\ \mathrm{mm}$ after calibration.
Angular Accuracy: Reduced misalignment from $8.5^{\circ}$ to $2.5^{\circ}$ during vial insertion.
Repeatability: Achieved $\pm 2.8\ \mathrm{mm}$ deviation across 20 placement cycles.
These results indicate that GAMORA is capable of precise and repeatable specimen handling in virology lab conditions.
# B. Pipetting and Liquid Handling
GAMORA was evaluated on VR-guided liquid handling tasks, involving pipetting between containers with varying liquid viscosities and geometries. The system achieved an average pipetting deviation of $0.2\ \mathrm{mL}$ from a $1\ \mathrm{mL}$ target, with a repeatability of $\pm 0.1\ \mathrm{mL}$ and a $95\%$ success rate. Initial pipette alignment error of $2.4\ \mathrm{mm}$ was reduced to $1.6\ \mathrm{mm}$ after dynamic motion adjustments. Figure 8 highlights these improvements, showing significant reductions in positional discrepancy (from $4.0\ \mathrm{mm}$ to $2.2\ \mathrm{mm}$), pipetting deviation (from $0.4\ \mathrm{mL}$ to $0.2\ \mathrm{mL}$), and repeatability error (from $\pm 2.8\ \mathrm{mm}$ to $\pm 1.2\ \mathrm{mm}$), demonstrating the enhanced precision and consistency of the upgraded system.
# C. Sample Preparation and Mixing
GAMORA was evaluated for sample preparation tasks, including dispensing into multi-well plates and irregular containers. The system was required to operate within tight spatial constraints, ensuring liquid delivery without overflow or spillage. It achieved a placement accuracy of $0.3\ \mathrm{mm}$ when dispensing into wells and maintained repeatability within $0.5\ \mathrm{mm}$ across repeated transfers. For irregular container geometries, the initial deviation of $0.7\ \mathrm{mm}$ was mitigated through software-based adjustments to the gripper’s positioning system, significantly improving precision during non-standard operations.
# D. Repetitive Operations and Workflow Effectiveness
Repetitive sample handling and liquid transfer tasks are critical in virology labs. GAMORA was tested over 50 consecutive operation cycles to evaluate positional repeatability and performance stability. The system consistently executed high-throughput tasks with a positional error within $\pm 1.2\ \mathrm{mm}$ and an average cycle time of 45 seconds per 10-transfer set. The VR interface enabled precise control, ensuring low error accumulation over prolonged execution. Figure 9 illustrates the measured positional errors across 50 trials, showing fluctuations well within the defined mean error bounds and confirming the system’s operational stability under repeated use.
Fig. 8: Comparative performance metrics of the original and improved GAMORA system.
Fig. 9: Repeatability analysis of the GAMORA system over 50 trials
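A repeatability analysis like the one in Fig. 9 can be summarized from logged per-trial positional errors. The sketch below is illustrative only: the trial data is synthetic (the paper's logs are not public), and only the $\pm 1.2$ mm bound comes from the text.

```python
import random
import statistics

def repeatability_report(errors_mm, bound_mm=1.2):
    """Summarize positional errors over repeated cycles and check
    that every trial stays within the stated +/- bound."""
    return {
        "mean": statistics.mean(errors_mm),
        "stdev": statistics.stdev(errors_mm),
        "worst": max(abs(e) for e in errors_mm),
        "within_bound": all(abs(e) <= bound_mm for e in errors_mm),
    }

# Synthetic stand-in for 50 logged trials, uniform within the stated bound
random.seed(0)
trials = [random.uniform(-1.2, 1.2) for _ in range(50)]
report = repeatability_report(trials)
```

In practice the same summary would be computed from the /joint_states-derived end-effector positions recorded during HIL testing.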
# E. Planning and Execution Metrics
GAMORA demonstrated improved motion planning efficiency, with planning time reduced from 0.75s to 0.5s and execution time shortened from 2.5s to 1.75s using MoveIt, while maintaining a $90{-}95\%$ path planning success rate. These gains ensured smoother operations with minimal mechanical jerk. The system also became more energy efficient. Idle current dropped from $250\ \mathrm{mA}$ to $200\ \mathrm{mA}$, and peak current under heavy load halved from 2A to 1A. Correspondingly, power output was reduced from 100W to 50W, as shown in Table I. These gains significantly reduced thermal and electrical stress on components during extended operations. Resource usage was optimized, with CPU consumption dropping from $10\%$ to $5\%$ and RAM usage decreasing from 600MB to 550MB, enabling the system to execute complex tasks without straining computational resources. These hardware-level optimizations further enhance GAMORA’s suitability for long-duration, high-precision laboratory tasks.
TABLE I: Current and power usage of original and improved system | The convergence of robotics and virtual reality (VR) has enabled safer and more efficient workflows in high-risk laboratory settings, particularly virology labs. As biohazard complexity increases, minimizing direct human exposure while maintaining precision becomes essential. We propose GAMORA (Gesture Articulated Meta Operative Robotic Arm), a novel VR-guided robotic system that enables remote execution of hazardous tasks using natural hand gestures. Unlike existing scripted automation or traditional teleoperation, GAMORA integrates the Oculus Quest 2, NVIDIA Jetson Nano, and Robot Operating System (ROS) to provide real-time immersive control, digital twin simulation, and inverse kinematics-based articulation. The system supports VR-based training and simulation while executing precision tasks in physical environments via a 3D-printed robotic arm. Inverse kinematics ensure accurate manipulation for delicate operations such as specimen handling and pipetting. The pipeline includes Unity-based 3D environment construction, real-time motion planning, and hardware-in-the-loop testing. GAMORA achieved a mean positional discrepancy of 2.2 mm (improved from 4 mm), pipetting accuracy within 0.2 mL, and repeatability of 1.2 mm across 50 trials. Integrated object detection via YOLOv8 enhances spatial awareness, while energy-efficient operation (50% reduced power output) ensures sustainable deployment. The system's digital-physical feedback loop enables safe, precise, and repeatable automation of high-risk lab tasks. GAMORA offers a scalable, immersive solution for robotic control and biosafety in biomedical research environments. | [
"cs.RO",
"cs.AI",
"cs.CV"
] |
# 1 INTRODUCTION
Recent datasets with complex geometric structures have highlighted the limitations of traditional Euclidean spaces in capturing hierarchical or tree-like data found in domains like complex networks (Krioukov et al., 2010), natural language processing (López et al., 2019; Zhu et al., 2020; López & Strube, 2020), and protein interactions (Zitnik et al., 2019). Hyperbolic spaces, with their exponential volume growth, offer a more suitable alternative for embedding such data with lower distortion (Sarkar, 2012), effectively mirroring hierarchical structures.
Hyperbolic neural networks (HNNs) (Ganea et al., 2018) and hyperbolic graph convolutional networks (HGCNs) (Chami et al., 2019) have demonstrated superior performance over Euclidean models in learning from hierarchical data. However, challenges remain: hyperbolic models suffer from numerical stability issues (Sala et al., 2018; Mishne et al., 2023), and their computations, especially aggregations, are often slower than Euclidean counterparts.
To tackle these challenges, we present the sHGCN model, a streamlined version of HGCN that delivers state-of-the-art performance in link prediction, node classification, and graph regression tasks. Our model achieves significant computational efficiency gains, enabling faster processing and enhanced scalability compared to existing approaches. This makes sHGCN not only more effective but also better suited for real-world applications.
# 2 HYPERBOLIC GRAPH CONVOLUTIONAL NEURAL NETWORKS (HGCNS)
Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017) have recently demonstrated significant superiority over traditional machine learning models in graph-related tasks and applications, such as node classification, link prediction, and graph classification. Hyperbolic GCNs (HGCNs) have further achieved remarkable success in studying graph data, particularly with a tree-like structure, due to the unique properties of hyperbolic spaces. For a more detailed background on the geometric frameworks that support HGCNs, please refer to Appendix A.
Many authors have proposed various methods for implementing HGCNs as summarized in Yang et al. (2022). In general, a unified HGCN can be formulated as:
$$
\begin{array}{rl}
& h_i^{l,\mathcal{H}} = \left( W^l \otimes^{c_{l-1}} x_i^{l-1,\mathcal{H}} \right) \oplus^{c_{l-1}} b^l \\
& y_i^{l,\mathcal{H}} = \mathrm{AGG}^{c_{l-1}} \left( h^{l,\mathcal{H}} \right)_i \\
& x_i^{l,\mathcal{H}} = \sigma^{\otimes^{c_{l-1},c_l}} \left( y_i^{l,\mathcal{H}} \right)
\end{array}
$$
where $\mathcal{H}$ denotes the hyperbolic space, commonly represented using either the Lorentz $\mathbb{H}^n$ or Poincaré $\mathbb{D}^n$ model. The choice of hyperbolic model impacts how the operations in Eq. (1)-(3) are computed, as each model requires different adaptations to the underlying geometry. Consequently, these steps are implemented in various ways depending on the specific method proposed in the literature.
# 3 MOTIVATION
The core motivation behind our proposed model was to address several key limitations found in existing methods that utilize hyperbolic geometry: (i) Hyperbolic operations tend to introduce numerical instability; (ii) The time performance of these methods is inefficient, making them impractical for large-scale applications.
We build upon the HGCN model introduced by Chami et al. (2019) as the foundation for our work. This framework provides flexibility by supporting both the Poincaré ball and Lorentz models of hyperbolic geometry. In our study, we opted for the Poincaré ball model due to the numerical instabilities encountered, which are further analyzed in Appendix B.
To enhance computational efficiency, we employed fixed weights $w_{ij} = 1$ in the aggregation step for all $i, j$, and adopted origin-based aggregation in hyperbolic space, resulting in the HGCN-AGG$_0$ model. This approach addresses runtime concerns while preserving the advantages of hyperbolic operations.
To gain deeper insights and identify optimization opportunities, we revisited the original formulation and derived a consolidated matrix equation for the entire framework. The general message-passing rule of the HGCN-AGG$_0$ model is:
$$
\begin{array}{rlr}
h_i^{l,\mathbb{D}} &= \exp_0^{c_{l-1}} \left( W^l \log_0^{c_{l-1}}\big( x_i^{l-1,\mathbb{D}} \big) \right) \oplus^{c_{l-1}} \exp_0^{c_{l-1}}(b^l) & \text{(feature transform)} \\
y_i^{l,\mathbb{D}} &= \exp_0^{c_{l-1}} \left( \sum_{j \in \mathcal{N}(i) \cup \{i\}} \log_0^{c_{l-1}}\big( h_j^{l,\mathbb{D}} \big) \right) & \text{(aggregation at the origin)} \\
x_i^{l,\mathbb{D}} &= \exp_0^{c_l} \left( \sigma \left( \log_0^{c_{l-1}}\big( y_i^{l,\mathbb{D}} \big) \right) \right) & \text{(non-linear activation).}
\end{array}
$$
To simplify the notation, we define $\log = \log_0^{c_{l-1}}$ and $\exp = \exp_0^{c_{l-1}}$. Combining these definitions, we obtain the following consolidated expression from Eq. (4)-(6):
$$
H^l = \mathbf{exp}_0^{c_l} \big( \sigma \big( \log \big( \mathbf{exp} \big( \tilde{A} \log \big( \mathbf{exp} ( W^l \, \mathbf{log}(H^{l-1}) ) \oplus^{c_{l-1}} \mathbf{exp}(b^l) \big) \big) \big) \big) \big),
$$
where $\tilde{A} = \hat{D}^{-1} \hat{A}$, with $\hat{A} = A + I_N$ and $\hat{D}_{ii} = \sum_j \hat{A}_{ij}$. $W^l$ is a layer-specific trainable weight matrix, and $\sigma(\cdot)$ denotes the activation function. The feature matrix at the $l^{\mathrm{th}}$ layer is $H^l \in \mathbb{R}^{N \times D}$. Additionally, $H^{(0)} = \exp_0^{c_0}(X)$ represents the matrix of Euclidean node feature vectors $X_i$ transformed into their hyperbolic counterparts.
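The normalized adjacency $\tilde{A} = \hat{D}^{-1}\hat{A}$ defined above can be computed in a few lines. The following is a minimal pure-Python sketch (not the authors' implementation), using nested lists instead of a tensor library:

```python
def normalized_adjacency(A):
    """Row-normalize the self-loop-augmented adjacency: D_hat^{-1} (A + I)."""
    n = len(A)
    # A_hat = A + I_N: add self-loops so every node aggregates itself
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Divide each row by its degree D_hat_ii = sum_j A_hat_ij
    return [[A_hat[i][j] / sum(A_hat[i]) for j in range(n)] for i in range(n)]

# Example: a 3-node path graph 0-1-2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
A_tilde = normalized_adjacency(A)
# Each row of A_tilde sums to 1: multiplication by A_tilde averages
# features over the closed neighborhood N(i) ∪ {i}.
```

Because the rows of $\tilde{A}$ sum to one, the aggregation step is a mean over each node's closed neighborhood.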
From Eq. 7, the bold transformations can be omitted as they correspond to the identity $\log(\exp(x)) = x$ in the Poincaré space. While these operations theoretically do not affect the model’s outcomes, they introduce unnecessary computations that slow it down. Moreover, due to precision issues, this identity breaks down for values far from the origin, leading to numerical errors that degrade accuracy, as shown in Fig. 1. For a more detailed analysis, refer to Appendix B.1.
Figure 1: The composition of the log and exp maps becomes inaccurate as points move away from the origin.
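The breakdown of the $\log(\exp(x)) = x$ identity is easy to reproduce. Below is an illustrative sketch in plain float64 arithmetic for a 1-D Poincaré ball with curvature $c = 1$ (an assumption for simplicity; this is not the paper's experiment):

```python
import math

def exp0(v, c=1.0):
    """Exponential map at the origin of the Poincaré ball (1-D case)."""
    sc = math.sqrt(c)
    return math.tanh(sc * v) / sc

def log0(y, c=1.0):
    """Logarithmic map at the origin (inverse of exp0 in exact arithmetic)."""
    sc = math.sqrt(c)
    return math.atanh(sc * y) / sc

# Near the origin the round trip is exact to machine precision...
err_near = abs(log0(exp0(1.0)) - 1.0)
# ...but far from the origin tanh saturates toward 1.0 in float64,
# and atanh amplifies the rounding error dramatically.
err_far = abs(log0(exp0(18.0)) - 18.0)
# err_near is on the order of 1e-16, while err_far is on the order of 1e-2
```

The error growth mirrors Fig. 1: once $\tanh$ saturates, $\operatorname{artanh}$ cannot recover the original tangent vector, which is exactly the inaccuracy that sHGCN avoids by removing the redundant map compositions.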
# 4 PROPOSED METHOD
# 4.1 FEATURE TRANSFORM
A linear transformation involves multiplying the embedding vector $x$ by a weight matrix $W$ and adding a bias term $b$. For feature transformation, we use matrix-vector multiplication in the Poincaré ball model along with Möbius addition (see Appendix A for a detailed definition of these operations). The bias term $b$ is defined as a Euclidean vector. Let $W$ be a $d' \times d$ weight matrix in Euclidean space. Since both $Wx$ and $b$ are in Euclidean space, which is isomorphic to the tangent space $\mathcal{T}_0 \mathbb{D}_c^n$, we can use exponential maps to project these into hyperbolic space. Thus, the feature transformation is defined as follows:
$$
f(x) = \exp_0^c (Wx) \oplus^c \exp_0^c (b).
$$
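The feature transformation above can be sketched in pure Python. The Möbius addition formula used here is the standard one for the Poincaré ball (see Appendix A of the paper); the concrete matrices and vectors in the example are made up for illustration:

```python
import math

def exp0(v, c=1.0):
    """Exp map at the origin: projects a Euclidean (tangent) vector into the ball."""
    n = math.sqrt(sum(x * x for x in v))
    if n == 0.0:
        return list(v)
    s = math.tanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * x for x in v]

def mobius_add(x, y, c=1.0):
    """Möbius addition x ⊕_c y on the Poincaré ball."""
    xy = sum(a * b for a, b in zip(x, y))
    x2 = sum(a * a for a in x)
    y2 = sum(a * a for a in y)
    den = 1 + 2 * c * xy + c * c * x2 * y2
    return [((1 + 2 * c * xy + c * y2) * a + (1 - c * x2) * b) / den
            for a, b in zip(x, y)]

def feature_transform(W, x, b, c=1.0):
    """f(x) = exp0(Wx) ⊕_c exp0(b) for Euclidean W, x, b."""
    Wx = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return mobius_add(exp0(Wx, c), exp0(b, c), c)

# Example with a 2x2 identity weight matrix and a small bias (made-up values)
out = feature_transform([[1.0, 0.0], [0.0, 1.0]], [0.3, -0.2], [0.1, 0.1])
# The result stays inside the unit ball (||out|| < 1 for c = 1)
```

Note that $x \oplus_c 0 = x$, so a zero bias leaves the exp-mapped features unchanged, and the output always remains inside the ball.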
# 4.2 NEIGHBORHOOD AGGREGATION
Aggregation is crucial in HGCNs for capturing neighborhood structures and features. Our proposed method simplifies the aggregation process by using fixed weight values $w_{ij} = 1$ and performing it directly in the tangent space at the origin, significantly enhancing computational efficiency. Traditional HGCN methods face challenges due to the computational burden of dynamic weight calculations; in fact, as demonstrated in Section 5, one notable case shows how link prediction can result in an Out-Of-Memory error. By streamlining these operations, our algorithm effectively addresses these issues and accelerates performance. The aggregation operation is then defined as follows:
$$
\mathrm{AGG}^c(x)_i = \sum_{j \in \mathcal{N}(i) \cup \{i\}} \log_0^c (h_j).
$$
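The aggregation in Eq. (9) pulls each neighbor back to the tangent space at the origin and sums there. A minimal pure-Python sketch (illustration only, with a made-up two-node example):

```python
import math

def log0(y, c=1.0):
    """Log map at the origin: pulls a point on the Poincaré ball back to the tangent space."""
    n = math.sqrt(sum(v * v for v in y))
    if n == 0.0:
        return list(y)
    s = math.atanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * v for v in y]

def aggregate(H, neighbors, i, c=1.0):
    """AGG^c(x)_i = sum of log0(h_j) over j in N(i) ∪ {i}."""
    out = [0.0] * len(H[i])
    for j in neighbors[i] + [i]:
        for k, v in enumerate(log0(H[j], c)):
            out[k] += v
    return out

# Two points placed symmetrically around the origin map to opposite
# tangent vectors, so their aggregation cancels to (approximately) zero.
H = [[0.2, 0.0], [-0.2, 0.0]]
agg = aggregate(H, {0: [1]}, 0)
```

Because the sum lives entirely in the (Euclidean) tangent space, no exp map back to the ball is needed before the activation, which is the source of the efficiency gain.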
# 4.3 NON-LINEAR ACTIVATION
Since the output of our neighborhood aggregation, Eq. (9), is in Euclidean space, we apply a standard non-linear activation function. This is highly advantageous, as it frees us from being restricted to activation functions that preserve hyperbolic geometry, allowing greater flexibility in choosing among a wider range of non-linear activations. The non-linear activation is then as follows:
$$
\sigma^{\otimes^c}(x) = \sigma(x).
$$
# 4.4 TRAINABLE CURVATURE
Similar to the HGCN model, our method introduces a learnable curvature at each layer, allowing the underlying geometry to be better captured.
# 4.5 ARCHITECTURE
Given a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $N$ nodes $\{x_i^{\mathbb{E}}\}_{i \in \mathcal{V}}$ and edges $(x_i^{\mathbb{E}}, x_j^{\mathbb{E}}) \in \mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$, the message-passing rule of the proposed sHGCN method at layer $l$ for node $i$ consists of:
$$
\begin{array}{rlr}
h_i^{l,\mathbb{D}} &= \exp_0^{c_{l-1}} \big( W^l x_i^{l-1,\mathbb{E}} \big) \oplus^{c_{l-1}} \exp_0^{c_{l-1}} \big( b^{l,\mathbb{E}} \big) & \text{(feature transform)} \\
y_i^{l,\mathbb{E}} &= \sum_{j \in \mathcal{N}(i) \cup \{i\}} \log_0^{c_{l-1}} \big( h_j^{l,\mathbb{D}} \big) & \text{(neighborhood aggregation)} \\
x_i^{l,\mathbb{E}} &= \sigma \left( y_i^{l,\mathbb{E}} \right) & \text{(non-linear activation).}
\end{array}
$$
Combining Eq. (11)-(13) at a matrix level we obtain:
$$
H^l = \sigma \left( \tilde{A} \log_0^{c_{l-1}} \left( \exp_0^{c_{l-1}}(W^l H^{l-1}) \oplus^{c_{l-1}} \exp_0^{c_{l-1}}(b^{l-1}) \right) \right), \quad \forall l \geq 1,
$$
where $\tilde{A}$, $W^l$, and $\sigma(\cdot)$ are defined as in the HGCN model, with the distinction that $H^{(l)} \in \mathbb{R}^{N \times D}$. Specifically, $H^{(0)} = X$ is the matrix of Euclidean node feature vectors $X_i$. To better illustrate the architecture and flow of the sHGCN message-passing method, we present a diagram of this model in Fig. 2.
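The matrix-level sHGCN update above can be sketched end-to-end in pure Python. This is an illustrative re-implementation, not the authors' code; it fixes $\sigma = \mathrm{ReLU}$ and a single curvature $c$ per layer, and uses nested lists instead of tensors:

```python
import math

def _exp0(v, c):
    n = math.sqrt(sum(x * x for x in v))
    if n == 0.0:
        return list(v)
    s = math.tanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * x for x in v]

def _log0(y, c):
    n = math.sqrt(sum(x * x for x in y))
    if n == 0.0:
        return list(y)
    s = math.atanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * x for x in y]

def _mobius_add(x, y, c):
    xy = sum(a * b for a, b in zip(x, y))
    x2, y2 = sum(a * a for a in x), sum(a * a for a in y)
    den = 1 + 2 * c * xy + c * c * x2 * y2
    return [((1 + 2 * c * xy + c * y2) * a + (1 - c * x2) * b) / den
            for a, b in zip(x, y)]

def shgcn_layer(A_tilde, H, W, b, c=1.0):
    """One sHGCN layer: H' = relu(A_tilde @ log0(exp0(H W^T) ⊕ exp0(b)))."""
    # Per-node hyperbolic feature transform, then log-map back to Euclidean space
    T = [_log0(_mobius_add(_exp0([sum(w * h for w, h in zip(row, hi))
                                  for row in W], c),
                           _exp0(b, c), c), c)
         for hi in H]
    # Purely Euclidean aggregation with the normalized adjacency, then ReLU
    n, d = len(H), len(W)
    return [[max(0.0, sum(A_tilde[i][j] * T[j][k] for j in range(n)))
             for k in range(d)] for i in range(n)]
```

With identity weights, zero bias, and $\tilde{A} = I$, the layer reduces to $\log_0^c(\exp_0^c(H))$ followed by ReLU, so near the origin it returns the (non-negative) input almost unchanged, which is a convenient sanity check of the implementation.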
Figure 2: The proposed sHGCN architecture.
# 5 RESULTS
For details on the experimental setup, please refer to Appendix C.
Link Prediction. The performance difference is particularly noticeable when comparing our new method to hyperbolic-based models on specific datasets. This is most evident in the Disease dataset (see Table 1), where our proposed model achieves a test AUC score of 94.6, significantly outperforming the previous HGCN model’s score of 90.8. While the performance of our method on other datasets is comparable to prior models, the Disease dataset stands out, highlighting the effectiveness of our approach in tasks involving hierarchical relationships.
Table 2 further illustrates the computational performance of our model. The newly proposed approach shows notable improvements in efficiency, with speedups ranging from 3% to 96% over the HGCN-AGG$_0$ model, and from 69% to 146% compared to the HGCN-ATT$_0$ model. However, it is important to note that the HGCN-ATT$_0$ model, which computes dynamic weights, faces Out-Of-Memory (OOM) issues, as it requires significantly more memory. This makes it unsuitable for real-world application tasks, where memory limitations and efficiency are critical factors.
The superior performance of our method in link prediction tasks can be attributed to several key factors. Notably, operating in Euclidean space offers an advantage when applying Euclidean distance in the decoder. Additionally, the error accumulation caused by the composition $\log ( \exp ( x ) )$ in hyperbolic models exacerbates performance gaps, further highlighting the effectiveness of our approach.
Node Classification. In node classification, the results were similar across models, due to the influence of the MLP classifier at the end of the pipeline. This classifier often dominates the overall performance, leading to similar results.
In terms of time performance, the newly proposed sHGCN model shows significant improvements, with speedups ranging from 49% to 56% over the HGCN-AGG$_0$ model, and from 45% to 127% compared to the HGCN-ATT$_0$ model. This gain can be attributed to several factors. Firstly, the sHGCN model requires fewer operations than its counterparts. Secondly, a key advantage of the new model is that the embeddings are already in Euclidean space, eliminating the need for additional steps to map embeddings back to Euclidean space, as required in the other hyperbolic models.
Table 1: ROC AUC for Link Prediction (LP), and F1 score (disease and airport) and accuracy (pubmed and cora) for Node Classification (NC) tasks. All settings and results are reported from Chami et al. (2019).
An Out-of-Memory (OOM) error occurred in the GPU memory (see Appendix C for the hardware specifications used). It is important to note that we were unable to reproduce the original results of the HGCN model, so we present the results reported in the paper.
Table 2: Comparisons of speed-up between sHGCN and other existing models.
Graph Regression. Additionally, we have conducted experiments with Graph Regression due to the possibility of representing molecules as graphs. We believe that these molecules may exhibit high hyperbolicity, and that our sHGCN model could be particularly useful in such cases, as demonstrated by the results in Appendix D. | Hyperbolic geometry has emerged as a powerful tool for modeling complex, structured data, particularly where hierarchical or tree-like relationships are present. By enabling embeddings with lower distortion, hyperbolic neural networks offer promising alternatives to Euclidean-based models for capturing intricate data structures. Despite these advantages, they often face performance challenges, particularly in computational efficiency and tasks requiring high precision. In this work, we address these limitations by simplifying key operations within hyperbolic neural networks, achieving notable improvements in both runtime and performance. Our findings demonstrate that streamlined hyperbolic operations can lead to substantial gains in computational speed and predictive accuracy, making hyperbolic neural networks a more viable choice for a broader range of applications. | [
"cs.LG",
"cs.AI"
] |
# 1 Introduction
Realising and improving functions in software-defined vehicles requires continuous software updates; hence, rapid, efficient, and continuous software development is crucial to maintaining competitiveness and user satisfaction. LLMs can be seen as a potential element of the software development pipeline, as their capability in code generation has been demonstrated [10]. However, the limitations and capabilities of LLMs remain under-explored, as they have been examined primarily on simple coding tasks and less on complex and novel tasks that require creativity.
Generating code with LLMs might require multiple tries, given the LLMs’ stochastic behaviour [2] and the complexity of the task. Moreover, as an LLM does not possess a proper understanding of the real world, it might propose an inappropriate strategy in the generated code, a flaw that may not be easily detected by a human reviewer. Hence, it would be beneficial to integrate a preliminary assessment of the generated code against unintended behaviours using key safety indicators before submitting it for review. Having this preliminary feedback on the code also helps with the automatic selection of the best model for each task and supports model improvement in the reinforcement learning process.
To assess the viability, limitations, and capabilities of LLMs, and to propose appropriate adaptations in the development of safety-related software, the following research questions have been formulated:
RQ1: How do state-of-the-art LLM models perform in code generation tasks for automotive functions? Quantitative evaluations of generations in Sec. 4
RQ2: What are the key limitations and risks of using LLM-generated code in safety-related applications? Qualitative analysis of LLM failures in Sec. 4
RQ3: What adaptations to existing software engineering processes are necessary to safely integrate LLM-generated code into automotive software development workflows? Proposal of LLM-augmented review, verification, and validation processes for safety-related driving functions in Sec. 5.
In this paper, we propose a “Generate fast, Eliminate fast” approach that integrates a rapid checker within the code generation process, reducing the time a code reviewer spends checking useless code that might not even compile. Additionally, it ensures that unsafe software is not delivered to later stages of the V&V process, where more rigorous and time-consuming tests, such as in Hardware-in-the-Loop (HIL) environments, would take place. The proposed Software-in-the-Loop (SIL) code co-generation environment combines LLM-based code generation for automotive software with an automatic assessment in a virtual test environment. The simulation results are used for the automated evaluation and ranking of the generated programs and are sent to the user for code review. In order to establish the true capabilities and limitations of the LLMs under study, evaluate them fairly, and indirectly validate the results, we propose a specific test design strategy. First, we avoid giving the LLM extra help, such as conversations or a code skeleton, in order to identify its practical capability. Second, we design tests to avoid benchmark leakage [18], which carries the risk of testing LLMs’ memory (since common driving functions and their benchmarks might be part of the LLMs’ training data) rather than measuring their performance on previously unseen tasks. This approach improves the validity of the results and reveals the models’ true limitations and capabilities. Finally, we compare the performance of six state-of-the-art LLMs, varying in parameter size, and report their results across four distinct tasks using both quantitative and qualitative analyses.
Structure of the Article: The rest of the paper is structured as follows: in Sec. 2 the role of LLMs in Software Engineering (SE) tasks and required Verification and Validation process are discussed to identify research gaps. In Sec. 3, we outline the methodology used in this research followed by the results of the experiment in Sec. 4. Finally, we discuss the results in Sec. 5 together with our proposed process to assure the safety of the LLM-generated code. We conclude our work in Sec. 6.
# 2 Related Work and Background
# 2.1 Large Language Models for Software Engineering
LLMs are already successfully applied to a wide range of SE tasks, including code comprehension and summarization [9]. Hence, LLMs are considered valuable assistants that provide support and insights before a human developer formulates the final artifacts [3] or specifies the requirements [14]. LLMs can also augment the implementation of autonomous driving functions. For instance, Liu et al. [10] explored LLM-based safety-related code generation for vehicle behaviour and concluded that LLMs can automatically generate safety-critical software code. LLMs have also been employed in automated vulnerability fixing [15] and code repair [11] to efficiently fix software bugs without human intervention. However, manual verification comes with the cost of labour and the involvement of experts, typically in multiple iterations, as the LLM-generated code might not match the minimum expected quality [14]. Although LLMs can quickly generate code, we must not overlook their tendency to hallucinate, i.e., to generate nonsensical or unfaithful information that does not align with the provided context or real-world knowledge [1]. As the adoption of AI grows, the need for AI to be trustworthy is also increasing [6]. Trustworthy AI can be defined as a conceptual framework that ensures that the development and implementation of technically and socially robust AI systems adhere to all relevant and applicable laws and regulations, and conform to general ethical principles [7,4,17]. These requirements apply not only to the integration of LLMs into software products and systems, but also to the use of LLM tools in software development [4].
The capabilities of LLMs are often studied against existing benchmarks [16], and then ranked using metrics such as pass@k [12], which employ predefined test cases and acceptance criteria. However, as highlighted by Zhou et al., there are concerns about the benchmarks’ data being used in LLM training [18], which is known as benchmark leakage. This raises questions about LLMs’ ability to handle unseen tasks, which is crucial for safety. Therefore, new evaluation methods, such as the one we propose in Sec. 3, are needed to ensure that LLMs can meet expectations when employed for real-world use cases.
# 2.2 Verification and Validation of Autonomous Driving Systems
Verification and Validation (V&V) are critical processes in the development lifecycle of Autonomous Driving Systems (ADS). Given the complexity and safety-related nature of ADS, it is imperative to ensure that these systems function correctly under a wide variety of real-world conditions. Effective V&V guarantees that the system meets its intended requirements and performs as expected in diverse driving environments. In autonomous driving, V&V approaches can be categorized into multiple methods, each serving different purposes and suitable for different stages of development, from unit verification to verification of software integration. These include, but are not limited to, code review, formal verification, and testing. In testing, the software is evaluated in different environments, such as:
– Software-in-the-Loop (SIL): The software is integrated and tested within a simulated environment, without the need for physical hardware.
– Hardware-in-the-Loop (HIL): Testing with real hardware components integrated into a simulated system.
– Vehicle-in-the-Loop (VIL): Testing with an actual vehicle, typically in a controlled environment with real-world driving scenarios.
SIL facilitates and accelerates early Verification and Validation (V&V) during the conceptual development of a product function, a stage traditionally addressed later in the development cycle, typically during the “right leg” of the V-model. By enabling rapid iteration and validation of software functionality within a controlled virtual environment, SIL allows for spotting and fixing potential issues long before hardware integration begins.
We use the Environment Simulator Minimalistic (esmini) [8], an open-source simulation tool designed for testing and validating advanced driver assistance systems (ADAS) and automated driving systems (ADS), for the SIL validation study. esmini serves as a lightweight, flexible, and efficient platform for simulating complex traffic scenarios, making it particularly useful for Software-in-the-Loop (SIL) testing. esmini is built on the OpenDRIVE and OpenSCENARIO standards, which are increasingly used for defining road networks and dynamic traffic scenarios, respectively. This enables users to simulate real-world driving environments and interactions. In SIL applications, esmini allows for virtual testing of ADAS/ADS functionalities by replicating various traffic situations, sensor inputs, and vehicle behaviours [5]. In order to apply SIL for the early verification of prototype code, its credibility should be ensured through rigorous validation methods. This alignment is crucial to ensuring that the results of SIL-based validation can be trusted sufficiently and correlate with reality. The correlation of the SIL environment selected in this study is addressed in [5], which proposes techniques for aligning high-fidelity, non-deterministic test track data with low-fidelity, deterministic simulations.
# 3 Methodology
# 3.1 Technical Setup: Code Generation Pipeline
As presented in Fig. 1, the proposed pipeline enables the “Generate fast, Eliminate fast” concept, reaching correct generated software faster, as the remaining part of the generation is handled by the user and other testing environments. The implemented pipeline is configurable for different LLM architectures, enabling us to examine and compare the performance of multiple LLMs.
Fig. 1: The proposed SIL code co-generation pipeline. The pipeline input combines a function description in natural language, a prompt baseline instructing the LLM to write a Python `custom_controller.py` controlling the Ego car via the `State` class of the simulation interface, and test cases (natural language and JSON). The code generator produces the controller, which is checked for compilation and then evaluated in the simulation environment (esmini); the pipeline output is the generated software together with a test report.
Five open-source LLM models (retrieved using Ollama) and GPT-4 (Azure-based, accessed via API) were tasked with generating code for a controller for four automotive functions. The models were selected from a pool of the most well-known and frequently downloaded models on Hugging Face, following preliminary tests to identify the strongest candidates. Four alternative prompts were used for each model to evaluate their performance and analyse their limitations. The most effective prompt was then selected, and all models were tested using esmini. The evaluation was conducted according to the following process:
– Does the generated controller code compile?
– Does the generated controller code run?
– Does the generated controller code pass all of the test cases?
The acceptance criteria for all test cases are checked automatically in the logged data from each test case, and the success rates for compilation, execution, and passing test cases are stored in JSON format for each generated code. This quantitative evaluation is accompanied by an analysis of the responses, generated controller codes, and log files of the LLMs to better understand their limitations and drawbacks. Additionally, the failed generated codes are analyzed to identify root causes, which are reported in the results section.
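The compile/run/pass evaluation chain can be sketched as a small harness. This is a hypothetical minimal version for illustration; the function name, JSON fields, and test-case representation are assumptions, not details from the paper:

```python
import json

def evaluate_generated_code(source, test_cases):
    """Check compile/run/test-pass status of generated controller code
    and return a JSON report (field names are illustrative)."""
    report = {"compiled": False, "ran": False,
              "tests_passed": 0, "tests_total": len(test_cases)}
    # 1) Does the generated controller code compile?
    try:
        code = compile(source, "<generated_controller>", "exec")
        report["compiled"] = True
    except SyntaxError:
        return json.dumps(report)
    # 2) Does it run?
    namespace = {}
    try:
        exec(code, namespace)
        report["ran"] = True
    except Exception:
        return json.dumps(report)
    # 3) Does it pass the test cases (each a predicate over the namespace)?
    report["tests_passed"] = sum(1 for tc in test_cases if tc(namespace))
    return json.dumps(report)

# Example: a trivially correct "controller" and two acceptance criteria
src = "def control(speed):\n    return 'brake' if speed > 10 else 'cruise'"
tests = [lambda ns: ns["control"](12) == "brake",
         lambda ns: ns["control"](8) == "cruise"]
result = json.loads(evaluate_generated_code(src, tests))
# result: {"compiled": true, "ran": true, "tests_passed": 2, "tests_total": 2}
```

In the actual pipeline the "run" and "pass" stages would execute inside the esmini simulation and check the logged acceptance criteria rather than in-process predicates.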
Since Python requires less code for the same task compared to lower-level programming languages (e.g., C++), it is a suitable choice for the designed function. This helps to reduce the risk of experiment failure due to token limitations, especially when models with different numbers of parameters are being compared. Moreover, since Python is not suitable for the final software, using LLM-generated code in Python reduces the risk of code leakage (e.g., copy-pasting the code) into production-related software before gaining sufficient confidence in the technology and identifying its limitations and strengths. Finally, Python demonstrates an average performance, with Copilot (GPT-based) achieving an output quality of 42%, compared to Java (57%), JavaScript (27%), and C (39%) [13]. This makes Python a strong candidate for evaluating the average performance of LLMs, which contributes to the generalizability of the findings.
SIL with esmini: As esmini supports fast verification of system requirements and functionality using prototype code, it is a natural choice for testing LLM-generated code as well. Combining LLM-based code generation with SIL in a feedback loop ensures that functional components are validated early, reducing the time and costs associated with hardware-based testing while streamlining the transition to later stages of development. It is important to note that the focus of early verification of prototype code is not on achieving full test coverage; rather, it is intended to deliver the generated code to a human for code review. The selected simulation environment also allows engineers to visualise the behaviour of the function, enabling them to check whether the intentions of the function designer are met. Furthermore, the simulation environment lets the engineer test the safety goals or functional safety requirements.
# 3.2 Code Generation Experiments: Tasks and Test Cases
To ensure the successful integration of LLM-generated code into an existing software system, the LLM must accurately adhere to the defined interface requirements. Moreover, the generated code shall calculate and request the expected output signals with the correct timing and precision. Hence, by analysing the results of our preliminary experiments on code generation tasks, we identified nine key capabilities required for successful code generation, as listed in Table 1. Accordingly, four functions with varying levels of difficulty and complexity are specified to be implemented by the LLM. As mapped in Table 1, these functions allow us to evaluate the capabilities necessary for LLMs in a real automotive application. The functions are specified as follows:
F1: The ego vehicle shall start braking if the speed exceeds 10 m/s.
F2: The ego vehicle shall perform a lane change to the right if there is a vehicle in the same lane as the ego vehicle.
F3: The ego vehicle shall adapt its speed to the vehicle in front to avoid collision (exhibiting so called Adaptive Cruise Control behaviour, ACC).
F4: In unsupervised Collision Avoidance by Evasive Manoeuvre (CAEM), the ego vehicle shall perform a lane change to avoid imminent collision with the vehicle in front. The lane change is preferably to be conducted to the left.
As presented in Table 1, F4 is designed as the most complex function, covering all identified capabilities. Moreover, since unsupervised lane change (i.e., decision-making for when to change lanes) is an advanced feature, the risk of LLMs being trained on such a dataset is minimal.
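A function like F1 maps to only a few lines of controller logic. The sketch below is hypothetical: the paper only partially shows the `State`/`CustomController` interface in Fig. 1, so the simulator methods used here (`get_ego_speed`, `set_brake`) are invented for illustration:

```python
class CustomController:
    """Hypothetical F1-style controller: brake when the ego speed exceeds
    a threshold. The simulator interface is assumed, not taken from the paper."""
    SPEED_LIMIT = 10.0  # m/s, the threshold from function F1

    def __init__(self, simulator):
        self.simulator = simulator

    def step(self):
        # Read the ego speed from the (assumed) simulator state
        speed = self.simulator.get_ego_speed()
        if speed > self.SPEED_LIMIT:
            # Request braking only; requirement R2 forbids touching other signals
            self.simulator.set_brake(1.0)
        else:
            self.simulator.set_brake(0.0)

# Minimal fake simulator to exercise the controller outside esmini
class FakeSim:
    def __init__(self, speed):
        self._speed, self.brake = speed, None
    def get_ego_speed(self):
        return self._speed
    def set_brake(self, value):
        self.brake = value

sim = FakeSim(speed=12.0)
CustomController(sim).step()
# sim.brake == 1.0: braking is requested above the threshold
```

Even for such a simple function, the automated checks still matter: an LLM may emit code that compiles but, for example, inverts the comparison or actuates additional signals, violating R2.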
Table 1. The list of identified capabilities ($C_x$) required to generate code with an acceptable level of maturity to be delivered to an engineer for code review. The functions ($F_x$) are selected or designed to test a subset or all of these capabilities at different levels of complexity: F1 and F2 are designed to capture the minimum capabilities of less capable LLMs, while F3 and F4 are more complex and examine the LLMs’ performance in real industrial use cases.
# Requirements for the Generated Code
The generated code shall satisfy the following requirements ( $R _ { x }$ ):
R1: The generated code shall be integrable in a predefined software architecture without any manual modification or improvement (e.g., APIs and outputs).
R2: The generated code shall control only the specified signal in the described function and must not affect any other signals.
R3: The generated code shall not request any actions that could lead to exiting the drivable area (e.g., going off the road).
R4: The generated code shall decide on proper action to avoid collisions with other static or dynamic objects (intended only for ACC and CAEM).
# Test Cases
Considering the potential malfunctions of CAEM and ACC, seven scenarios have been designed to test the generated code, each covering multiple requirements:
TC1-3: Three multi-lane highway scenarios, with the ego vehicle driving at 120 kph (TC1), 80 kph (TC2), and 40 kph (TC3). A second vehicle performs a cut-in manoeuvre and immediately decelerates. The time to brake in these scenarios is 0.4 seconds, shorter than the time-to-react of a skilled driver according to UNECE Regulation No. 157 [17].
TC4-5: Similar to TC1-3, a second vehicle overtakes the ego vehicle and brakes in front of it. However, in TC4 and TC5, a third vehicle (static or matching the ego-vehicle speed, respectively) is blocking the ego vehicle’s avoidance manoeuvre.
TC6-7: In TC6, there are no other vehicles on the road, while in TC7, a second vehicle is on a parallel road, driving in the opposite direction.
TC1-3 are intended to test the “commission” malfunction of deceleration (in ACC) and lane change (in CAEM). TC4-5 are designed only for CAEM to test the capability of the generated code in handling more complex scenarios with multiple agents, covering malfunctions such as “delayed” or “wrong lane change.” To prevent the LLM from generating code tuned to the test cases (test case leakage), the descriptions of these test cases are not included in the prompts. Finally, TC6-7 are intended to test “commission” malfunctions, i.e., unintended lane changes and deceleration for CAEM and ACC, respectively.
# Experimental Setup
Six configurations of the pipeline are evaluated, each corresponding to one of the models described earlier. Each configuration is independently employed to generate code for one of the four automotive functions defined in Section 3.2, one at a time. The description of the specific function in the setup is added at the end of the prompt shown on the left side of Fig. 1. This combination results in 24 unique experimental setups (6 models $\times$ 4 functions). To mitigate the inherent stochastic nature of the LLM output, each set-up was executed 20 times. An experiment run is considered successful only if the generated code passes all test cases without violating any of the predefined requirements (R1–R4).
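The experimental grid described above can be sketched as follows. This is an illustrative enumeration only; the model names, the `run_passes` helper, and the test-case/requirement callables are our own placeholders, not tooling from the paper.

```python
from itertools import product

# Hypothetical identifiers for the six models and four functions under test.
MODELS = ["CodeLlama:34B", "CodeGemma:7B", "DeepSeek-r1:32B",
          "DeepSeek-Coder:33B", "Mistral:7B", "GPT-4"]
FUNCTIONS = ["F1", "F2", "F3", "F4"]
RUNS_PER_SETUP = 20  # each setup is repeated to average out LLM stochasticity

def run_passes(code, test_cases, requirements):
    """A run counts as successful only if every test case passes
    AND no predefined requirement (R1-R4) is violated."""
    return all(tc(code) for tc in test_cases) and all(r(code) for r in requirements)

setups = list(product(MODELS, FUNCTIONS))   # 6 models x 4 functions = 24 setups
total_runs = len(setups) * RUNS_PER_SETUP   # 24 x 20 = 480 generations overall
```

The grid makes explicit that each reported success rate is out of 20 runs per model-function pair.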
# 4 Results
We first analyse the performance of the six state-of-the-art LLMs on F1 and F2. As shown in Table 1, F1 and F2 are designed to assess the minimum capability of the models.
As reported in Fig. 2, GPT-4 delivered 20 fully successful code versions for F1 and 18 for F2. For F1, all other models generated between 4 (CodeLlama:34B) and 17 (DeepSeek-r1:32B) successful versions (green area). This demonstrates their varying ability to generate code that reads relevant inputs and triggers appropriate outputs based on the ego vehicle’s state, and confirms that the models are capable of interpreting the inputs and outputs described in the prompt. Moreover, some models exhibited creative solutions despite limited flexibility in the given task. For instance, Mistral:7B generated two code versions that reversed the speed instead of braking, while DeepSeek-Coder:33B (in one case) and DeepSeek-r1:32B (in two cases) reduced the speed to avoid a collision rather than applying the brakes. However, since braking was explicitly required, these solutions were classified as failures due to non-compliance with requirement R2.
Fig. 2. Reports the performance of all LLM models on two simple functions (F1 and F2). The left bar of each model presents the results for F1 (i.e., brake if the speed is higher than $10~\mathrm{m/s^2}$), and the right bar presents the results for F2 (i.e., lane change until reaching the rightmost lane). The models are ranked first by the total number of successful code versions for F2 and then by F1, as F2 is considered more complex than F1.
For F2, DeepSeek-Coder:33B and Mistral:7B were each capable of delivering successful code (2 cases each), while the other open-source LLM models failed to generate any successful code. The complexity of F2 increased compared to F1, as it required additional capabilities such as reading the state of other agents through inputs, understanding the position of relevant agents (e.g., detecting the presence of other agents in the ego’s lane), and deciding on appropriate actions (e.g., changing the lane to the right). Furthermore, F2 demanded precise lane-change values, as lateral motion is more sensitive than longitudinal adjustments. For instance, some failed cases occurred due to excessive lane changes: if the code instructed a single lane change in two steps, it resulted in an unintended two-lane change, causing the vehicle to exit the drivable area. Thus, we observe the following potential risks from LLM-generated code:
– Alternative strategies (e.g., reducing or reversing the speed) instead of what was explicitly asked for (e.g., braking).
– Failing to retrieve the state of the ego vehicle from the available interfaces.
– Unnecessary (e.g., double lane change) or illegal (e.g., leaving the drivable area) manoeuvres that conflict with the requirements.
In the second experiment, all models are examined on more advanced automotive functions: F3, adapting the speed to the vehicle in front (ACC), and F4, changing lanes to avoid imminent collisions (CAEM). Several models successfully generated code for F3 (ACC): Mistral:7B and CodeGemma:7B generated 3 and 1 successful code versions, respectively, while GPT-4 generated 6. As reported in Fig. 3, the only LLM that could deliver code for F4 (CAEM) was GPT-4; all others failed.
Fig. 3. Performance of six models on two advanced automotive functions (F3 and F4). The ACC bar in each group indicates the models’ performance in F3, and the CAEM bars present the results for F4. The models are ranked first based on total successful generations for F3 and F4, and then by the number of executable code generations.
Contrary to the open-source models, only 3 of the code versions generated by GPT-4 for F3 and F4 contained errors that led to non-executable code, while the rest did not pass the tests described in Sec. 3.2 (blue area). Some of these failures were due to incorrect threshold values, such as the safe-distance threshold or time-to-collision, which led to late lane changes and could be fixed by changing the values. These values require tuning or must be derived from standards, regulations, or expert domain knowledge within the company. We did not provide the models with any hints on these numbers, as doing so could bias them toward a specific solution. Hence, the 5% success rate of GPT-4 for CAEM could be improved by adjusting the thresholds; however, this was not done to avoid compromising the validity of the experiment. Comparing the reported results of the models for F1 and F2 (Fig. 2) with the results for ACC and CAEM (Fig. 3), we note the impact of task complexity on a model’s performance. Both DeepSeek models dropped from second and third place to last for ACC and CAEM. Moreover, higher task complexity not only increases the failure rate on test cases but also affects code quality, reducing the likelihood of generating executable or compilable code.
The failed code versions were also analysed to identify and report the most common root causes of failures. Most failures in the open-source models were due to syntax errors or incorrect output calculations, leading to failed compilation, execution errors, or no action in the simulation. For instance, in some cases, CodeLlama-34B and Mistral did not generate any code in the response, but instead returned a natural language explanation of the logic or just a skeleton of the code with comments. Using incorrect syntax to access attributes or methods (e.g., ego_car.position[0] instead of ego_car.s) was also observed in the generated code, even when the correct syntax was explicitly provided in the prompt. Another cause of compilation or execution failures in the simulation was the addition of unnecessary extra code that prevented integration. For instance, on six occasions, DeepSeek-r1:32B hardcoded the test case within the generated code, causing conflicts with the simulation. Additionally, most models failed to account for an edge case where the relative speed between the ego vehicle and the vehicle in front could be zero. As a result, when calculating the time to collision, they attempted to divide by zero, leading to a division-by-zero error and a subsequent simulation execution failure. Other code failures were due to incorrect logic; e.g., CodeGemma often incorrectly assumed that the first car in the object list was the vehicle in front, without considering its longitudinal and lateral position. We observe the following potential risks for safety-critical automotive functions:
– Choosing incorrect threshold values (e.g., safety distance between cars) within the code that might lead to danger (e.g., late lane changes).
– Task complexity and originality might negatively impact the performance of otherwise well-performing models.
– Failing to use the specific syntax needed to access the provided interfaces.
– Generating unnecessary code that prevents integration with the simulation tool or provided interface.
– Failing to consider edge-cases that lead to run-time errors (e.g., dividing by the relative speed when it is zero).
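The division-by-zero edge case mentioned above (relative speed of zero when computing time-to-collision) can be guarded with a few lines. This is a minimal sketch under our own naming assumptions, not code from the studied pipeline:

```python
def time_to_collision(gap_m, ego_speed, lead_speed, eps=1e-6):
    """Return the time-to-collision in seconds, or None when the ego
    vehicle is not closing in on the lead vehicle (relative speed <= 0).
    This guards against the division-by-zero failure mode the models missed."""
    closing_speed = ego_speed - lead_speed
    if closing_speed <= eps:  # equal or opening speeds: no collision course
        return None
    return gap_m / closing_speed
```

Returning a sentinel instead of raising lets the caller treat "no collision course" as a normal state rather than a run-time error.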
# 5 Discussion
In F2 and CAEM, the code shall request the exact value (1 for left and -1 for right) for lane changes (state.switch_lane(x)) within the required time. Additionally, the requested lane change must not cause the ego vehicle to exit the drivable area (lane IDs -2 to -4). This means that a single extra lane-change request could lead the ego vehicle off the road. Hence, as noted in Table 1, F2 and CAEM are highly sensitive to the requested output. For F1 and ACC, by contrast, the function might still pass even if the generated code requests a slightly wrong value, which may be one of the root causes of the higher success rates for F1 and ACC. As discussed, the risk of benchmark leakage justifies the design of CAEM as a complex function: it requires all capabilities in Table 1 and is not as publicly available and well known as ACC. In general, we observe that LLMs do not yet seem fully ready to deliver code for function development, although they show promising potential. For instance,
GPT-4 delivered one successful code out of 20 attempts for the CAEM function, while others were not successful at all. As presented in Sec. 4, the ranking of the models based on their performance for simple functions changes when compared to more complex functions. Moreover, it seems that the number of parameters has less impact on the performance of the models.
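The lane-change sensitivity discussed above can be made concrete with a small validity check. The sign convention (+1 = left, -1 = right, with lane IDs increasing toward -2 when moving left) and the function name are our own assumptions for illustration; only the drivable-lane range (-2 to -4) and the request values come from the text.

```python
DRIVABLE_LANES = range(-4, -1)  # lane IDs -4, -3, -2 (per the scenario description)

def request_lane_change(current_lane, direction):
    """direction: +1 = left, -1 = right, mirroring state.switch_lane(x).
    Rejects any request whose target lane lies outside the drivable area,
    since a single extra lane change can put the ego vehicle off the road."""
    target = current_lane + direction
    if target not in DRIVABLE_LANES:
        raise ValueError(f"lane change to {target} would leave the drivable area")
    return target
```

A guard like this is exactly the kind of pre-evaluation check a verification harness can apply before the request reaches the simulator.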
Fig. 4. Pipeline in Fig. 1 integrated into the software engineering process. It enables the pre-evaluation of generated code, with the best candidates selected for engineers to review and improve before proceeding to the rigorous V&V process (right side). Failed code versions are analysed to extract failure modes, helping refine the prompts. To avoid automation bias and evaluate the effectiveness of the review process, the failed code versions can be provided to check whether they are detected and excluded.
LLM-augmented Review Process: Currently, LLMs are used directly by software developers and might generate non-compilable, non-executable, or even unsafe code (i.e., seemingly working, but violating safety requirements). To benefit from the proposed pipeline, we suggest a process, illustrated in Fig. 4, to safeguard the integration of LLM-generated code into the development of automotive functions. Asking human reviewers to assess every LLM-generated code version, which is usually a complex and lengthy task, may become a waste of time, especially since, as seen in Figs. 2 and 3, most of the generated versions are not directly usable. Hence, the code shall be pre-evaluated by a set of simple, fast, and yet relevant verification tools such as esmini. Then, the best candidates are sent to engineers for review and improvement. Human oversight is a crucial part of this process even if the proposed pipeline can generate fully functional code, as also recommended by the EU AI Act [4]. The preliminary evaluation of the generated code versions, as seen in Sec. 4, can also help to extract a failure modes catalogue, such as the one extracted from the two experiments above. Such a catalogue could also help refine prompts and highlight the need for tools to detect bugs and mitigate recurring failures. The ranked code versions could be used to improve the LLMs themselves through reinforcement learning. This process may help LLMs and humans in generating the final version of the code, and low-ranking code versions can also be part of the human review process in order to fight automation bias (so humans do not become too complacent with LLMs generating more and more parts of the final code base).
Threats to Validity: To ensure the construct validity of the employed evaluation method, the selected simulation environment (esmini) is part of the correlated SIL toolchain used in industrial AD development, as reported in [5]. The specific tasks (F1-F4) are designed to reduce the risk of data and benchmark leakage as much as possible and to evaluate the capabilities of the LLMs in handling new tasks, which contributes to internal validity. Publicly available elements of the pipeline, such as OpenSCENARIO and esmini, would not constitute leakage, since the model is not asked to reproduce the scenario or simulation environment, but rather to generate code for functions. Importantly, if an LLM is capable of leveraging its understanding of the simulation environment or scenario structure to produce valid control logic, this indicates reasoning and generalization capability, which is an advantage. Benchmark leakage only occurs when models reproduce memorised solutions rather than generate new code from learned understanding. Moreover, we conducted additional tests to identify potential threats to validity in the experiment, such as removing the interface descriptions from the prompt to examine whether the LLMs had encountered similar code in the esmini environment. As demonstrated by the 24 experiment configurations (6 LLM models generating code for 4 functions), the pipeline is adaptable for model- or function-agnostic use. Thus, it can be applied to other domains, programming languages, and verification environments, addressing external validity. However, since model performance varies by task, each configuration must be evaluated separately. | Software engineers in various industrial domains are already using Large Language Models (LLMs) to accelerate the process of implementing parts of software systems. 
When considering their potential use for ADAS or AD systems in the automotive context, there is a need to systematically assess this new setup: LLMs entail a well-documented set of risks for safety-related systems' development due to their stochastic nature. To reduce the effort for code reviewers to evaluate LLM-generated code, we propose an evaluation pipeline to conduct sanity-checks on the generated code. We compare the performance of six state-of-the-art LLMs (CodeLlama, CodeGemma, DeepSeek-r1, DeepSeek-Coder, Mistral, and GPT-4) on four safety-related programming tasks. Additionally, we qualitatively analyse the most frequent faults generated by these LLMs, creating a failure-mode catalogue to support human reviewers. Finally, the limitations and capabilities of LLMs in code generation, and the use of the proposed pipeline in the existing process, are discussed. | [
"cs.SE",
"cs.AI"
] |
# 1 Introduction
Recent advances in large language models (LLMs) have demonstrated impressive capabilities across complex reasoning and generation tasks. However, benchmark gains are often prioritized over practical deployment concerns, leading to systems that rely on repeated calls to expensive proprietary LLMs. For example, SWE-Agent (Yang et al., 2024) caps each run at \$4 (Kapoor et al., 2024), making even modest-scale evaluations prohibitively expensive. In contrast, systems such as Retrieval-Augmented Generation (RAG) can reduce cost by 99% (Jin et al., 2025), motivating a critical question: How can we design effective yet cost-efficient LLM systems?
To bridge this gap, we present a three-pronged taxonomy with 12 diverse strong-weak collaboration methods, including static context augmentation (e.g., Few-Shot examples, FAQs, Planning), pipeline division (e.g., Strong LM First, Prompt Reduction), and dynamic collaboration (e.g., Routing). Our taxonomy (Figure 1) draws on prior work along with tailored adaptations for code generation. We evaluate these methods across both API-based and hybrid (API + open-source) LLM pairs on the SWE-Bench Lite (Jimenez et al., 2024) benchmark. Our goal is to empirically characterize the cost-performance trade-offs of these methods and identify best practices under different budget and accuracy constraints. While issue-solving serves as a grounded and challenging testbed, our broader focus is on the general design space of strong-weak collaboration.
Our findings show that pipeline and context-based strategies offer the best average efficiency, while weak-only baselines consistently underperform. Yet, we find that average efficiency does not always predict optimal performance for a given cost. Through cost-performance curves, we answer the question: What is the best method given a performance requirement and budget constraint? Our best strategy improves the weak LLM's performance by ~62%, almost matching the strong LLM, at ~60% of the cost. Overall, we provide actionable principles for designing cost-efficient, collaborative LLM systems for complex code generation tasks.
Figure 1: Taxonomy of the 14 techniques studied. \* denotes methods newly proposed or adapted in this study. We categorize them into cost-equated weak-only, context-based, pipeline-based, and dynamic collaboration methods.
# 2 Related Work
Strong LLMs achieve high performance but are often too expensive to use at scale. To tackle this, several lines of work explore cost-efficient alternatives. Self-consistency (Wang et al., 2023; Chen et al., 2023b) aggregates multiple generations from the same LLM to improve performance. Other approaches rely on strong–weak collaboration. Amayuelas et al. (2025); Wang et al. (2024) leverage a strong LM’s superior reasoning capabilities to generate a plan for execution by a cheaper weak LM. Routing-based methods like FrugalGPT (Chen et al., 2023a) dynamically select LLMs to optimize cost and quality. Similarly, LLM Cascades (Yue et al., 2024) escalate to strong LMs only when weak LMs fail. Drawing from this, we curate a taxonomy of strong–weak collaboration strategies, including self-consistency, planning, LLM cascades (Weak LM First), routing among others, adapted to repository-level code generation. This allows us to empirically evaluate their cost-performance trade-offs in a unified framework.
# 3 Methodology
To conduct a systematic analysis, as shown in Figure 1, we present a taxonomy of strong-weak collaboration methods for repo-level RACG, where the methods are either based on or inspired by prior studies (Chen et al., 2025a; Bai et al., 2024; Xu et al., 2024; Lu et al., 2024; Chen et al., 2025c).
Preliminary. We conduct our experiments with Agentless-Lite (Dunn, 2025), a two-step repo-level RACG framework. We first apply a retriever to fetch the top-$k$ relevant documents (e.g., files in a repository) based on their similarity score with the query $q$. Then we apply an LM to iteratively generate the code with the documents as context until it is in the correct code patch format.
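The retrieve-then-generate loop just described can be sketched as below. The function and parameter names are illustrative placeholders, not the actual Agentless-Lite API; the retriever, generator, and patch validator are injected as callables.

```python
def retrieve_then_generate(query, documents, retrieve, generate,
                           is_valid_patch, max_turns=5):
    """Sketch of the two-step repo-level RACG loop: retrieve the top-k
    relevant files, then call the LM repeatedly until its output parses
    as a well-formed code patch (or the turn budget is exhausted)."""
    context = retrieve(query, documents)      # top-k files by similarity to the query
    for _ in range(max_turns):
        patch = generate(query, context)
        if is_valid_patch(patch):
            return patch
    return None  # no valid patch within the turn budget
```

Capping the retries matters for the cost accounting later: every extra turn is another weak-LM call.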
Cost-equated weak LM. We first study several baselines that involve only the weak LM (McDonald et al., 2025). Self-consistency (Wang et al., 2023) enhances weak LMs by sampling $n$ diverse outputs, where $n \approx \mathrm{Cost}_{strong}/\mathrm{Cost}_{weak}$, and selecting the most consistent one via: $\textcircled{1}$ Majority Voting ($SC_m$), $\textcircled{2}$ Clustering ($SC_c$), and $\textcircled{3}$ Universal ($SC_u$). We also experiment with a “cheating” method, $\textcircled{4}$ Best of $n$. Detailed explanations are included in Appendix §A.2.
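The cost-equated majority-voting baseline ($SC_m$) can be sketched in a few lines; the helper names are our own, and the budget-matching rule simply rounds the cost ratio described above.

```python
from collections import Counter

def self_consistency_majority(samples):
    """Majority voting (SC_m): given n sampled outputs from the weak LM,
    return the most frequent one."""
    return Counter(samples).most_common(1)[0][0]

def budget_matched_n(cost_strong, cost_weak):
    """Choose n so that n weak-LM calls roughly match one strong-LM call,
    i.e. n ~ Cost_strong / Cost_weak."""
    return max(1, round(cost_strong / cost_weak))
```

Clustering ($SC_c$) and Universal ($SC_u$) variants differ only in how "most consistent" is defined over the samples.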
Static Context Augmentation. We experiment with methods that leverage the strong LM's superior problem-understanding capabilities to augment the context and use the weak LM for iterative generation to reduce cost. We first study methods that prompt the strong LM to generate repo-level information to provide background for code generation: $\textcircled{5}$ Repository Summary, which generates a summary of the repository based on its README file and directory structure; $\textcircled{6}$ Repo-Level FAQs, which is similar but generates a set of FAQs instead; and $\textcircled{7}$ Repo Structure, which summarizes the repository structure. Here we use RepoGraph (Ouyang et al., 2025), which encodes the repository in a graph. We also study few-shot examples for each repository, which can be constructed by: $\textcircled{8}$ Few-shot (Random), which randomly selects $k$ (input, successful strong LM output) pairs, and $\textcircled{9}$ Few-shot (Similarity), which selects few-shot demos based on the problem statements' embedding similarity. We also study methods that use the strong LM to generate instance-specific context. Compared to repo-level context, this typically costs more but provides more precise and example-relevant knowledge. We analyze: $\textcircled{10}$ Planning, which generates high-level planning for each instance (Amayuelas et al., 2025; Wang et al., 2024), and $\textcircled{11}$ Instance-Level QA Pairs, which generates a set of FAQs for each instance with the code generation query (e.g., issue description) and retrieved code as context.
Pipeline Division. These methods aim to reduce cost while maintaining performance by calling the strong LM and weak LM sequentially as part of a hard-coded pipeline. Specifically, we compare: $\textcircled{12}$ Strong LM First, which prompts the strong LM to make the first attempt and calls the weak LM to iteratively refine its solution until it is correctly formatted, and $\textcircled{13}$ Weak LM First, where the weak LM first makes $n$ attempts to solve the issue, following model cascades (Chen et al., 2023a; Zhang et al., 2023; Yue et al., 2024); if it fails to generate a valid patch, the strong LM makes one attempt. Based on a previous study's conclusion that weak LMs have performance comparable to strong LMs on localization (Xia et al., 2024), we present $\textcircled{14}$ Prompt Reduction, where a preliminary call to the weak LM removes irrelevant code, reducing the overall context passed to the strong LM.
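The Weak LM First cascade can be sketched as follows. The function signature is an assumption for illustration; both LMs and the patch validator are injected as callables, as before.

```python
def weak_lm_first(problem, weak_lm, strong_lm, is_valid_patch, n_weak=3):
    """Model-cascade sketch: the weak LM gets n attempts first; only if
    none of them yields a valid patch does the strong LM make one attempt.
    Returns the patch and which tier produced it."""
    for _ in range(n_weak):
        patch = weak_lm(problem)
        if is_valid_patch(patch):
            return patch, "weak"        # cheap path: no strong-LM call at all
    return strong_lm(problem), "strong"  # escalate once to the strong LM
```

Strong LM First is the mirror image: one strong-LM attempt, then weak-LM refinement until the output is correctly formatted.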
Dynamic Collaboration. Unlike the methods above, where the pipeline is hard-coded, we allow the decision to use the strong or weak model to be made dynamically during inference. Specifically, we invoke a router (Chen et al., 2023a) to decide whether the problem is simple or complex and delegate it to the weak or strong LM accordingly. We evaluate both weak LMs and strong LMs as routers, denoted $\textcircled{15}$ Weak Router and $\textcircled{16}$ Strong Router.
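The routing step reduces to a single dispatch decision. This sketch assumes the router LM returns a "simple"/"complex" label; the names are ours, not from the paper's implementation.

```python
def route(problem, router_lm, weak_lm, strong_lm):
    """Dynamic-collaboration sketch: a router LM labels the problem
    'simple' or 'complex' and delegates to the weak or strong LM.
    The router itself can be either a weak or a strong model."""
    label = router_lm(problem)  # assumed to return 'simple' or 'complex'
    return weak_lm(problem) if label == "simple" else strong_lm(problem)
```

The routing cost is the router call itself, which is why a weak router can dominate a strong one on the cost axis.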
Figure 2: Performance vs. Cost curves for different Strong-Weak Model Pairs. O3 - O3-mini; O4 - O4-mini; 4o - GPT-4o-mini; Qx - Qwen2.5-Coder-xB. Detailed pairwise results in Appendix Figure 4 and Appendix Tables 3-8.
# 4 Experiments
Models. We consider strong-weak model pairs that have a noticeable gap in performance and cost, where the more expensive model performs better. Strong LMs: O3-mini, O4-mini (OpenAI, 2025), and GPT-4o-mini (OpenAI, 2024); Weak LMs: GPT-4o-mini and the Qwen2.5-Coder series (Hui et al., 2024). Dataset and Agentic Framework. We conduct all experiments on the SWE-bench Lite subset (Jimenez et al., 2024), which has 300 issues from 11 Python repositories. We run the Agentless Lite framework (Dunn, 2025) once per instance, using voyage-code-3 for retrieval and the techniques in §3 for generation (details in Appendix §A.2).
Evaluation Metrics. The following metrics are reported for each method: (1) Resolution Rate: the proportion of instances for which the generated patch successfully resolves the issue; and (2) Cost: the total generation cost (\$), including additional method costs but excluding retrieval cost, which remains constant across all methods.
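Both metrics are simple aggregates; a sketch under our own data-layout assumptions (per-instance booleans for resolution, per-call token counts with per-million-token prices for cost):

```python
def resolution_rate(results):
    """results: one boolean per instance (did the patch resolve the issue?)."""
    return sum(results) / len(results)

def total_cost(calls, price_per_mtok):
    """calls: (model, input_tokens, output_tokens) tuples for every LM call
    a method made; price_per_mtok: {model: (input_price, output_price)} in
    dollars per million tokens. Retrieval cost is excluded, since it is
    constant across all methods."""
    return sum(
        (tin * price_per_mtok[m][0] + tout * price_per_mtok[m][1]) / 1e6
        for m, tin, tout in calls
    )
```

Counting every call (router, context-augmentation, refinement) in `calls` is what makes the cost comparison fair across taxonomy groups.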
# 5 Main Results
We focus our evaluation on the generator, as voyage-code-3 achieves strong retrieval performance (~92% recall, ~76% MRR). Our findings uncover consistent patterns in when strong-weak collaboration succeeds or fails (Appendix Figure 4, Appendix Tables 3-8).
Cost-equated baselines vs. Strong-Weak collaboration. As shown in Figure 2, Appendix Figure 4, and Appendix Tables 3-14, across nearly all model pairs, the cost-equated baselines, i.e., sampling multiple times to match the strong LLM's cost, underperform compared to collaboration. Surprisingly, some baselines even hurt the weak LLM's performance, likely due to faltering patch selection. Our best collaboration strategy, Strong LM First, achieves a 0.4167 resolution rate, ~92% more than the best corresponding cost-equated baseline, i.e., Best of $n$ for GPT-4o-mini cost-equated with O4-mini.
Efficacy of Methods Across Taxonomy Groups. Analyzing by taxonomy (§3), on average, pipeline and context-based strategies yield the highest cost-efficiency across model pairs, followed by dynamic methods and Best of $n$, with self-consistency approaches being the least efficient (Appendix §A.4). However, average efficiency alone is not a reliable indicator of the best method. Performance-cost curves (Figure 2) often intersect, showing that Weak LM First is optimal when the budget is low due to minimal strong LM usage, while Strong LM First dominates once the budget increases. For example, with O3-mini + Qwen2.5-Coder-32B, Strong LM First achieves performance equivalent to the strong model at just ~60% of the cost. We also observe a regime shift around the \$18.3 mark, as Figure 2 shows, below which O3 + Qwen2.5-Coder-32B performs best, whereas above it, O4 + GPT-4o-mini becomes more cost-effective.
These results suggest that no method is universally optimal. Instead, choosing the best strategy depends on the deployment scenario, specifically the available budget and required performance, which is enabled by plots like Figure 2. Given a minimum performance threshold and a budget cap, the optimal method corresponds to the curve that reaches the highest resolution rate within the feasible region, i.e., the top-left quadrant defined by those constraints. For instance, if the target is at least 20% resolution within a \$20 budget, Weak Router with O4-mini + GPT-4o-mini emerges as the most cost-effective choice. These observations yield actionable guidance: (1) cost-equated weak-only methods are inefficient; (2) pipeline-based methods outperform context-only methods when the budget allows; (3) Weak LM First and Weak Router are strong choices under tight budgets; and (4) Strong LM First performs best in higher-budget regimes, often approaching strong LM performance at reduced cost.
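The feasible-region selection rule just described can be expressed as a small helper. The summary of each curve as a single (cost, resolution) point is a simplification we introduce for illustration; the numbers in the usage note are made up, not results from the paper.

```python
def best_method(curves, budget, min_resolution):
    """curves: {method: (cost, resolution_rate)}. Return the method with
    the highest resolution rate inside the feasible region
    (cost <= budget and resolution >= min_resolution), or None."""
    feasible = {m: (c, r) for m, (c, r) in curves.items()
                if c <= budget and r >= min_resolution}
    if not feasible:
        return None
    return max(feasible, key=lambda m: feasible[m][1])
```

For example, with hypothetical points `{"weak_first": (10, 0.22), "strong_first": (35, 0.41), "router": (18, 0.25)}`, a \$20 budget with a 20% floor selects `"router"`, while a \$50 budget selects `"strong_first"`.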
# 5.1 Interesting cases
Instance-level help is better than repo-level help. Repo-level context (e.g., summaries, structure, QA pairs) consistently failed to improve weak LM performance. Such information is too coarse and may distract from instance-specific signals. Even few-shot examples, whether random or similarity-based, often reduced accuracy. In contrast, instance-level augmentation (plans, QA pairs) significantly boosted resolution rates, justifying its higher cost.
Weak Router is better than Strong Router. Counterintuitively, Weak Router frequently outperformed Strong Router in both accuracy and cost. For example, with O4-mini + Qwen2.5-Coder-32B, Weak Router achieved 6 percentage points higher resolution at ~20% lower cost. We suspect that stronger models, while good at problem solving, may “overthink” routing decisions (Cuadron et al., 2025). Only with very weak LMs (e.g., Qwen-7B) did strong routing slightly edge ahead, suggesting weak routing is often the more efficient choice.
Prompt Reduction trades turn-wise validity for overall performance. Prompt Reduction offers a surprising trade-off: it lowers the valid patch rate to ~65%, well below the ~95% average of other methods, yet often achieves higher resolution rates overall. Because the weak LM aggressively prunes irrelevant context, the strong LM is forced to focus on the most salient code, leading to more successful fixes despite more retries. In effect, Prompt Reduction sacrifices turn-wise reliability in favor of sharper attention, making it a high-variance but high-upside strategy, especially when correctness matters more than efficiency. | We study cost-efficient collaboration between strong and weak language models for repository-level code generation, where the weak model handles simpler tasks at lower cost, and the most challenging tasks are delegated to the strong model. While many works propose architectures for this task, few analyze performance relative to cost. We evaluate a broad spectrum of collaboration strategies: context-based, pipeline-based, and dynamic, on GitHub issue resolution. Our most effective collaborative strategy achieves equivalent performance to the strong model while reducing the cost by 40%. Based on our findings, we offer actionable guidelines for choosing collaboration strategies under varying budget and performance constraints. Our results show that strong-weak collaboration substantially boosts the weak model's performance at a fraction of the cost, pipeline and context-based methods being most efficient. We release the code for our work at https://github.com/shubhamrgandhi/codegen-strong-weak-collab. |
"cs.AI",
"cs.SE"
] |
# 1 INTRODUCTION
Logical specifications, expressed through formalised formulas, enable precise system modelling and automated verification, supporting correctness and the detection of critical properties. The replication of reasoning-based experiments is essential for validating the generalisability of such methods in software engineering. Recent advances in automated specification generation [4] and logic-based inference [7] further highlight the need for empirical scrutiny. This study replicates the solver benchmarking approach from [15] and the reasoning framework of [3], extending both with behavioural-model-oriented problems and a PLTL solver. It also lays the groundwork for future integration with logic engines in development tools. A structured benchmark suite is introduced to evaluate solver performance, specifically Prover9 [9], SPASS [16], and InKreSAT [2], across diverse logical conditions.
Automated reasoning is increasingly relevant in software engineering, with prospective applications in CI/CD pipelines for real-time verification and AI-driven IDEs (Integrated Development Environments) that provide on-the-fly validation and adaptive solver selection. However, some solvers exhibit performance irregularities, highlighting the need for heuristic-driven optimisation, particularly for safety-critical verification, where stable and intelligent theorem proving is essential.
This study contributes to empirical software engineering by conducting empirical replication and validation to assess the reliability of logic-driven methodologies and by evaluating scalability and performance to measure the efficiency of automated reasoning tools across varying problem scales. Our findings further highlight the distinctive characteristics of individual provers as logical specifications increase in complexity.
# 2 REPLICATION OF LOGICAL SPECIFICATIONS
We developed a catalogue of eight logical problems to test software behavioural models systematically. This approach ensures robust, online testing beyond intuition, addressing known computational challenges such as clause-length variations [13] and 3-SAT problems [12]. Each problem examines factors such as clause-length stability, atomic-proposition variability, and liveness/safety [8] distribution.
# 2.1 Experimental Setup
Each problem consists of many tasks. For each task, represented by a logical formula, we record the testing time, the memory used, and whether the formula is satisfiable. Each formula is tested three times, and the average time and memory values are reported. Time is the most critical metric, as memory varies little and satisfiability remains constant.
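As a rough illustration of this protocol, the measurement loop can be sketched as follows. This is a minimal Python sketch; the `benchmark` function and its interface are our own hypothetical stand-ins, not the authors' harness, and memory measurement is omitted for brevity.

```python
import statistics
import time

def benchmark(task, runs=3):
    """Time one solver task `runs` times and return the mean wall-clock time.

    `task` is any callable performing a single inference run. The verdict
    (satisfiable or not) is recorded once, since it does not vary between
    runs, matching the observation that satisfiability remains constant.
    """
    times = []
    verdict = None
    for _ in range(runs):
        start = time.perf_counter()
        verdict = task()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), verdict
```

Averaging over three runs, as in the study, damps scheduler noise without inflating total benchmarking time.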
# P1 Uniform Clause-Length Formulation.
Each formula consists of 50, 100, 200, 500, 1000, or 2000 clauses, with clause lengths set to 2, 3, 4, 6, 8, and 10. The number of clauses assigned to each length is equal, ensuring a uniform distribution (the same number of clauses with length 2, length 3, etc.). Each formula also contains an equal number of liveness and safety clauses within each clause-length group. The total number of distinct atomic propositions is half the number of clauses, and every atom appears at least once in the formula (i.e. 50 clauses are generated over a set of 25 atoms, 100 clauses over a set of 50 atoms, etc.). For example, a 100-clause formula contains 8 liveness clauses of length 2, 8 safety clauses of length 2, 8 liveness clauses of length 3, 8 safety clauses of length 3, and so on.
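The P1 construction can be sketched in code. The snippet below is a simplified, hypothetical generator: the clause encoding as `(kind, atoms)` tuples, the atom naming, and the silent rounding when the clause count does not divide evenly are our own assumptions; the actual study emits prover-specific syntax.

```python
import random

LENGTHS = [2, 3, 4, 6, 8, 10]

def generate_p1(n_clauses, seed=0):
    """Sketch of the P1 generator: uniform clause-length distribution,
    a 50:50 liveness/safety split within each length group, n/2 atoms,
    and every atom used at least once (by drawing unused atoms first)."""
    rng = random.Random(seed)
    atoms = [f"p{i}" for i in range(n_clauses // 2)]
    per_length = n_clauses // len(LENGTHS)
    unused = set(atoms)          # atoms not yet placed in any clause
    clauses = []
    for length in LENGTHS:
        for i in range(per_length):
            kind = "liveness" if i < per_length // 2 else "safety"
            picked = []
            while len(picked) < length:
                pool = sorted(unused) if unused else atoms
                a = rng.choice(pool)
                if a not in picked:
                    picked.append(a)
                    unused.discard(a)
            clauses.append((kind, tuple(picked)))
    return clauses
```

Drawing from the unused pool first guarantees the "each atom at least once" constraint whenever the total number of literal slots exceeds the atom count, which holds for all the sizes listed above.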
# P2 Variable Clause-Length Distribution.
This formulation modifies P1 by introducing a non-uniform clause-length distribution. The number of clauses of each length now follows a Poisson distribution whose parameter $\lambda$ corresponds to the anticipated number of literals in a clause; we set it to 3.5, i.e. between 3 and 4, which we regard as a typical clause length. The total number of clauses and the 50:50 ratio of liveness to safety clauses remain unchanged. The number of distinct atomic propositions is set to half the number of clauses, with each atom appearing at least once.
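Drawing P2 clause lengths from a Poisson distribution with $\lambda = 3.5$ can be sketched with the standard library alone, using Knuth's sampling algorithm; redrawing zero-length samples is our simplifying assumption, since the paper does not discuss empty clauses.

```python
import math
import random

def poisson_lengths(n_clauses, lam=3.5, seed=0):
    """Draw one clause length per clause from Poisson(lam), as in P2.
    Zero-length draws are rejected and redrawn (our assumption: a clause
    must contain at least one literal)."""
    rng = random.Random(seed)
    target = math.exp(-lam)
    lengths = []
    while len(lengths) < n_clauses:
        # Knuth's algorithm: count uniform draws until their product < e^(-lam)
        k, p = 0, 1.0
        while p > target:
            p *= rng.random()
            k += 1
        k -= 1
        if k > 0:
            lengths.append(k)
    return lengths
```

The resulting lengths fluctuate around 3-4 literals, matching the "typical clause" intuition stated above.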
# P3 Atom-to-Clause Variability Analysis.
Each formula consists of 50, 100, 200, 500, 1000, or 2000 clauses, with clause lengths restricted to 2, 3, 4, 6, 8, and 10. Unlike in P1, where the number of atomic propositions is fixed at half the number of clauses, this formulation varies the atom-to-clause ratio by defining the set of atomic propositions as 2, 3, 4 and 5 times the number of clauses (2 for 50 clauses means that they are generated over the set of 100 atoms, 2 for 100 means generating over the set of 200 atoms, etc.). The maximum clause length depends on the number of available atomic propositions, with longer clauses omitted when the atom set is small. Each formula contains equal proportions of liveness and safety clauses, and every atom appears at least once.
# P4 Fixed-Length Clause Uniformity.
Each formula consists of 50, 100, 200, 500, 1000, or 2000 clauses, with all clauses in a given formula having the same fixed length of 2, 3, 4, or 5. Unlike P1, where clause lengths vary, each formula enforces a single clause length. Liveness and safety clauses are equally distributed, and each atomic proposition appears at least once. The number of distinct atomic propositions is half the number of clauses.
# P5 Disparate Clause-Length Grouping.
Each formula consists of 50, 100, 200, 500, 1000, or 2000 clauses, grouped into fixed clause lengths 1, 5, 10, and 20. Liveness and safety clauses are equally distributed. We consider the following cases for the share of each group: a) all groups have an equal share, i.e. $25\%$ each; b) clauses of length 1 make up $1\%$, with the rest shared equally; c) clauses of length 20 make up $1\%$, with the rest shared equally.
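The three share schemes translate directly into clause counts per length group. The helper below is hypothetical: the case names and the rounding of fractional counts are our assumptions, as the paper does not specify how remainders are handled.

```python
def p5_group_counts(n_clauses, case):
    """Clause counts per length group {1, 5, 10, 20} for the three P5 cases."""
    groups = (1, 5, 10, 20)
    if case == "equal":
        shares = {g: 0.25 for g in groups}            # a) 25% each
    elif case == "few_short":
        shares = {1: 0.01, 5: 0.33, 10: 0.33, 20: 0.33}  # b) length-1 at 1%
    elif case == "few_long":
        shares = {1: 0.33, 5: 0.33, 10: 0.33, 20: 0.01}  # c) length-20 at 1%
    else:
        raise ValueError(case)
    return {g: round(n_clauses * s) for g, s in shares.items()}
```

For example, `p5_group_counts(2000, "few_short")` assigns only 20 unit clauses, so long clauses dominate the formula.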
# P6 Liveness-Safety Ratio Sensitivity.
Each formula consists of 50, 100, 200, 500, 1000, or 2000 clauses, with varying liveness-to-safety [8] clause ratios: 90:10, 80:20, 65:35, 50:50, 35:65, 20:80, and 10:90. We consider two cases:
a) clause lengths equal within groups, as in P1; b) clause lengths assigned variably, as in P2.
# P7 Model Property Verification via Logical Implication.
This problem tests whether a given behavioural model satisfies specific requirements through logical implication. Formulas $F_1$, $F_2$, and $F_3$ are generated with 50, 100, or 200 clauses and combined into two structures: the disjunctive model $G_1 \equiv F_1 \lor F_2 \lor F_3$ and the conjunctive model $G_2 \equiv F_1 \land F_2 \land F_3$. The verification task involves checking $G_1 \Rightarrow R$ (which may be replaced with $\lnot G_1 \lor R$) and $G_2 \Rightarrow R$, where $R$ is a simple liveness clause composed of four randomly selected atoms from $G_1$ or $G_2$. Each formula consists of liveness clauses and safety clauses, half each. We consider the following cases:
a) For $G_1 \Rightarrow R$, clause-length distribution as in P1; b) For $G_1 \Rightarrow R$, clause-length distribution as in P2; c) For $G_2 \Rightarrow R$, clause-length distribution as in P1; d) For $G_2 \Rightarrow R$, clause-length distribution as in P2.
# P8 Comparative Analysis via Logical Square Framework.
[4, Tab. 4] A modification of problem P1 using the logical square. We generate two formulas $F_1$ and $F_2$ consisting of 50, 100, 200, 500, and 1000 clauses. Each formula consists of liveness clauses and safety clauses, half each. We test the following three cases: a) contradictory: $(F_1 \Rightarrow \neg F_2) \land (\neg F_1 \Rightarrow F_2)$; b) subcontrary: $\neg(\neg F_1 \land \neg F_2)$; c) subalternated: $(F_1 \Rightarrow F_2) \land \lnot(F_2 \Rightarrow F_1)$. Subproblems a), b), and c) use the clause lengths of problem P1; we additionally test cases d), e), and f) with clause lengths as in problem P2, for the same relations as a), b), and c), respectively.
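For intuition, the three logical-square relations can be checked by brute force on small propositional stand-ins. This is a meta-level sketch only; the study tests much larger temporal formulas with full theorem provers, and the function and its names are our own.

```python
from itertools import product

def square_relations(f1, f2, n_vars):
    """Truth-table check of the three logical-square relations from P8.

    f1, f2 are propositional stand-ins: functions from a boolean
    valuation tuple to bool, over n_vars variables.
    """
    vals = list(product([False, True], repeat=n_vars))
    f1_implies_f2 = all((not f1(v)) or f2(v) for v in vals)
    f2_implies_f1 = all((not f2(v)) or f1(v) for v in vals)
    return {
        # (F1 => ~F2) & (~F1 => F2): exactly one formula holds per valuation
        "contradictory": all(f1(v) != f2(v) for v in vals),
        # ~(~F1 & ~F2): at least one formula holds per valuation
        "subcontrary": all(f1(v) or f2(v) for v in vals),
        # (F1 => F2) & ~(F2 => F1)
        "subalternated": f1_implies_f2 and not f2_implies_f1,
    }
```

Note that the P7 rewrite of $G \Rightarrow R$ as $\lnot G \lor R$ appears here as the `(not f(v)) or g(v)` pattern inside the implication checks.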
# 2.2 Rationale for Logical Benchmark Problems
Problem P1 is the baseline to which the other problems refer; it corresponds to a typical situation in which a formula describes the behavioural model of the system under design. All the variables from the available set of atoms are used when generating formula clauses, following the assumption that if an atom variable has been identified for a given behavioural model, then it should appear at least once in the formula.
Problem P2 arises when clause lengths are not predetermined, unlike in the previous problem, where fixed lengths were assumed – a constraint that may sometimes appear unnecessarily strict. The expected clause length typically ranges between 3 and 4, a choice that aligns with common encoding practices.
Problem P3 varies the number of atomic propositions used per formula to simulate underused variables, increasing variability and reducing redundancy in behavioural modelling.
Problem P4 explores reasoning efficiency for formulas with uniform clause lengths, which may significantly affect performance [13].
Problem P5 reveals the influence of very short and very long clauses on reasoning. We consider three cases: the first, assuming an equal share of all lengths, serves as a starting point for the two subsequent cases; the second corresponds to a situation where rather long clauses dominate, and the third to one where rather short clauses dominate.
Problem P6 tests the impact of varying the distribution of liveness and safety clauses, previously assumed equal. While a 50:50 split reflects the natural balance in system design, analysis, and implementation, exploring other quantitative ratios provides further insights.
Problem P7 arises when we need to test specific properties of a behavioural model, such as liveness or safety, using simple formulas related to the analysed model. More generally, given behavioural models expressed as formulas, we use implication statements to check whether a required property – typically a simple one – is satisfied. Depending on the context, this verification applies to either a conjunction or a disjunction of these formulas.
Problem P8 is slightly different. It relates to the logical square [1], a well-known concept in the literature, but is also applied when testing various alternative variants of behavioural models.
All eight problem sets were generated algorithmically using controlled parameters (clause length, atom ratio, etc.), enabling consistent structural variation; templates and scripts are available upon request.
# 3 EXPERIMENTAL RESULTS AND EVALUATION
We evaluated the performance of First-Order Logic (FOL) theorem provers and compared their efficiency with a Propositional Linear Temporal Logic (PLTL) prover. Based on a systematic review of available solvers, we selected Prover9 [9] and SPASS/MSPASS [16] for FOL, while InKreSAT [2] was chosen for PLTL. The three provers operate with distinct input formats: SPASS requires TPTP, Prover9 uses LADR, and InKreSAT utilises InToHyLo. We executed all tests [14] on a PC with an Intel Core i5-6400 (2.70 GHz) and 16 GB of DDR4 RAM, running Ubuntu 18.04.4 LTS (Bionic Beaver). Inference computations were interrupted if they exceeded the 300-second timeout.
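A minimal sketch of such a test runner with the 300-second cutoff is shown below. The command lines are hypothetical placeholders; in practice each prover would receive a task file in its own input format (TPTP for SPASS, LADR for Prover9, InToHyLo for InKreSAT).

```python
import subprocess

def run_prover(cmd, timeout=300):
    """Run one prover invocation under the study's 300-second cutoff.

    `cmd` is a command line such as ["prover9", "-f", "task.in"] (hypothetical).
    Returns (status, returncode, stdout); status is "timeout" on cutoff.
    """
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout)
        return "done", proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return "timeout", None, ""
```

Treating a timeout as a distinct status, rather than an error, lets the harness record "timed out" data points such as Prover9's failures on 2000-clause formulas.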
Figure 1: Problem #1, clauses against time for FOL provers (top) and InKreSAT (bottom)
Figure 1 presents the results of Problem #1 testing in terms of computational time. Prover9 timed out on larger formulas (2000 clauses), whereas SPASS and InKreSAT completed within limits, highlighting scalability differences.
Figure 2 presents the results of Problem #2 testing, revealing substantial differences compared to Problem #1. These differences stem from a distinct approach to formula generation. Unlike in the previous case, clause quantities for specific length groups are not predefined; instead, clause lengths follow a Poisson distribution, fluctuating around an expected value. This distribution aligns with typical specification scenarios, where clauses of approximate length 3 or 4 are the most frequently encoded, while those of other lengths occur less frequently. The observed computation times exhibit similar trends to those in Problem #1, though overall values are lower. Notably, an anomalous increase in processing time was recorded for InKreSAT when handling formulas with 200 clauses.
Figure 2: Problem #2, clauses against time for FOL provers (top) and InKreSAT (bottom)
Figure 3 presents the time results for testing Problems #3 and #4. The findings for Problem #3 indicate that increasing the set of atoms within a formula of fixed length significantly impacts testing time while also prolonging the resolution process for the encoded behavioural model. This arises from the reduced frequency of atom occurrences within the formula (some may not appear at all), resulting in fewer contradictions, which, paradoxically, accelerates overall problem-solving. Furthermore, formula length exerts a greater influence on testing time than the number of atoms involved. Notably, anomalies occur: for formulas with an extensive number of clauses but relatively few atoms, InKreSAT exhibits unexpectedly high computation times. Despite this atypical behaviour, InKreSAT remains substantially faster than the FOL provers.
The results for Problem #4 indicate that, for FOL provers, increasing clause length, and thus formula length, significantly extends testing time. In contrast, for InKreSAT, clause length has a minimal impact, with the number of clauses being far more influential. All provers successfully solved the formulas, yet InKreSAT once again displayed notable behaviour.
Figure 4 presents the time results for Problem #5, analysed separately for each prover across three cases. Prover9 is less impacted by average clause length than SPASS; they both perform better with shorter clauses. However, SPASS outperforms Prover9 on shorter formulas, while the reverse holds for longer ones. InKreSAT is significantly faster than both of them and appears proportionally affected by longer formulas similarly to Prover9, yet remains unaffected by medium-length formulas.
Figure 3: Problem #3 (top) and #4 (bottom), clauses against time for FOL provers and InKreSAT
Figure 5 presents the time results for Problem #6, examining the impact of different distributions of liveness and safety clauses, as well as two approaches to clause length. With the Poisson distribution, both FOL provers achieve better performance. As in previous tests, InKreSAT generally performs better with the Poisson distribution but occasionally exhibits significantly worse results compared to the even distribution. Prover9 performs best with a majority of liveness clauses, while SPASS excels with a majority of safety clauses. However, under the Poisson distribution, SPASS maintains superior performance regardless of clause type distribution. In contrast, InKreSAT shows no clear correlation with the proportion of liveness and safety clauses.
Figure 6 presents the time results for Problem #7, which examines the satisfiability of certain properties across multiple models connected by disjunction or conjunction. Each model is expressed as a formula of up to 200 clauses, potentially resulting in a tested formula with up to 600 clauses. SPASS successfully solved all instances, whereas Prover9 frequently timed out, particularly when models were conjunction-connected. Overall, disjunction-connected formulas are processed more efficiently, similarly to formulas with Poisson-distributed clause lengths.
Figure 4: Problem #5 for Prover9, SPASS and InKreSAT, clauses against time
Figure 7 presents the time results for Problem #8, comparing two models based on logical-square relations. The results also confirm the experiments of [5], now with larger sets of formulas. Each behavioural model can be expressed with up to 1000 clauses, resulting in tested formulas of up to 2000 clauses. These complex formulas could undergo preprocessing, but the study focused on direct processing as defined in sub-Problems #8.a to #8.f. SPASS solved all instances, occasionally exceeding memory limits, while Prover9 frequently timed out. Problem #8.c (subalternated) had the lowest computational cost, whereas Problem #8.a (contradictory) required significantly more time.
Table 1 presents a summary of testing times for the various problems and formulae under consideration. Both FOL provers perform well across all formulae. For shorter formulae, Prover9 generally outperforms SPASS. The results obtained for InKreSAT exhibit some outliers, as previously discussed; nevertheless, all its results remain fully acceptable, with average performance roughly one hundred times better than that of the FOL provers. Prover9 achieves superior execution times for shorter formulae, particularly those comprising 200–500 clauses; this can be attributed, at least in part, to its significantly lower memory requirements, which are not explicitly illustrated due to space constraints in this article.
Figure 5: Problem #6 for FOL provers and InKreSAT, clauses against time
Figure 6: Problem #7 for both provers, clauses against time
Figure 7: Problem #8 for both provers, clauses against time
In contrast, SPASS demonstrates better performance for substantially longer formulae. Notably, InKreSAT consistently outperforms both FOL provers across all formulae in terms of execution time. Although the Poisson distribution—characterised by its intrinsic irregularities in length distribution—leads to a substantial improvement in execution times, it occasionally introduces abrupt and irregular decreases in efficiency for InKreSAT. Nevertheless, even in such cases, InKreSAT remains significantly more efficient than the FOL provers.
Assuming that the objective is to develop an Integrated Development Environment (IDE)-class tool with built-in interaction based on deductive reasoning about behavioural models, the following statement can be formulated.
Claim 1. Automated reasoning on behavioural models should be completed within 1–2 seconds using FOL provers to support real-time feedback in IDEs. Our results demonstrate this is feasible for formulas up to a medium size, aligning with needs in interactive model validation. For the PLTL prover, the obtained results demonstrate an improvement by a factor of 100.
Claim 2. InKreSAT consistently outperforms FOL provers, offering near-instant validation via efficient heuristics and incremental solving. This makes it a strong candidate for integration in CI/CD pipelines and AI-assisted development tools.
Despite minor irregularities, InKreSAT outperforms other provers by an order of magnitude, proving its efficiency even on complex formulas. Its speed advantage makes it highly effective for safety-critical verification, where both performance and stability matter.
# 4 RELATED WORKS
Over the years, various structured collections have been developed to test the efficiency and applicability of theorem provers and SAT solvers. Pelletier [11] provided an early benchmark suite for first-order logic (FOL), consisting of structured logical problems widely used in theorem proving research. Sutcliffe [15] expanded this with the TPTP (Thousands of Problems for Theorem Provers) library, a standard for evaluating provers across logic domains. While our approach targets logic-based behavioural modelling, the TPTP benchmark is oriented toward general theorem proving. Mitchell et al. [10] analysed SAT problem structures, identifying conditions that make some instances significantly harder. Their focus is benchmarking; ours targets logic-based verification in software engineering contexts.
Table 1: Time results for Problems #1 to #6 for formulas with 50–500 clauses, Prover9 & SPASS (top) and InKreSAT (bottom). When both extreme columns are rejected, the average testing times are calculated. Green marking denotes the selection of one prover for shorter formulas and another for longer formulas
To sum up, our study constructs a structured problem catalogue explicitly designed for evaluating automated logical specification techniques in behavioural modelling. This work enhances empirical validation frameworks for IDE-oriented and AI-driven software verification tools, addressing scalability and applicability concerns in practical use cases. Earlier prototyping efforts lacked structured IDE integration, limiting their utility in such contexts [6]. | This study empirically validates automated logical specification methods for behavioural models, focusing on their robustness, scalability, and reproducibility. By the systematic reproduction and extension of prior results, we confirm key trends, while identifying performance irregularities that suggest the need for adaptive heuristics in automated reasoning. Our findings highlight that theorem provers exhibit varying efficiency across problem structures, with implications for real-time verification in CI/CD pipelines and AI-driven IDEs supporting on-the-fly validation. Addressing these inefficiencies through self-optimising solvers could enhance the stability of automated reasoning, particularly in safety-critical software verification. | [
"cs.SE"
] |
# I. INTRODUCTION
The need to generate dynamic and realistic speech-driven talking heads has intensified, driven by emerging applications in domains such as digital assistants [1], [2], virtual reality [3], [4], and filmmaking [5]–[7]. These applications demand high visual fidelity and seamless integration of critical synchronized factors, including subject identity, lip movements, facial
Hongyan Liu and Zhaoxin Fan are the corresponding authors.
Ziqiao Peng and Jun He are with the School of Information, Renmin University of China.
Wentao Hu and Hui Tian are with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications.
Junyuan Ma is with the Aerospace Information Research Institute, Chinese Academy of Sciences.
Xiangyu Zhu and Xiaomei Zhang are with the Institute of Automation, Chinese Academy of Sciences.
Hao Zhao is with the Institute for AI Industry Research, Tsinghua University.
Hongyan Liu is with the School of Economics and Management, Tsinghua University.
Zhaoxin Fan is with the Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, School of Artificial Intelligence, Beihang University, and the Hangzhou International Innovation Institute, Beihang University.
expressions, and head poses. The ultimate goal is to create synthetic videos that are indistinguishable from real human captures, thereby aligning with human perceptual expectations and enabling more expressive communication.
At the core of realistic talking head synthesis lies the challenge of synchronization across critical factors. These components must be perfectly aligned to produce a coherent and lifelike representation. However, the inherent ambiguity in mapping speech to facial movements introduces significant challenges, often resulting in artifacts that disrupt the perceived realism. This ambiguity makes it difficult to achieve an accurate and consistent depiction of facial dynamics based solely on speech. Synchronization in talking head synthesis is particularly critical because of the way humans process and interpret facial movement in communication. Facial expressions and lip movements are tightly coupled with speech, and any misalignment between these factors can disrupt the perception of realism. Thus, addressing the synchronization challenge involves dissecting the ambiguity in audio-visual mappings, turning this “devil” in the details into a focal point for ensuring good fidelity in talking head synthesis.
Current methods for generating talking heads are generally divided into two main categories: 2D generation and 3D reconstruction methods. 2D generation methods, including Generative Adversarial Networks (GAN) [8]–[16] and recent diffusion models [17]–[19], have shown significant progress in modeling lip movements and generating talking heads from single images. These methods, trained on large datasets, excel at producing realistic head movements and facial expressions. However, their reliance on 2D information limits their ability to achieve accurate synchronization across critical factors. Without three-dimensional prior knowledge, these methods often produce facial movements that do not conform to physical laws, leading to inconsistencies such as variations in facial features across frames. These inconsistencies arise from the 2D models’ inability to capture the depth and spatial relationships necessary for realistic facial animation, resulting in outputs that may lack identity consistency and exhibit artifacts.
Similarly, the emerging 3D reconstruction methods in recent years, such as those based on Neural Radiance Fields (NeRF) [20]–[28] and Gaussian Splatting [29]–[31], have shown excellent performance in maintaining identity consistency between frames and preserving facial details. These methods utilize ray and point information in three-dimensional space to generate high-fidelity head models, ensuring continuity and realism of the character from different perspectives. However, these 3D reconstruction methods also face some challenges. They struggle to achieve highly synchronized lip movements with only a limited volume of 4-5 minutes of training data. Most existing methods use pre-trained models like DeepSpeech [32] for automatic speech recognition to extract audio features. However, the feature distribution from speech-to-text differs from the speech-to-image distribution needed for this task, often resulting in lip movements that do not match the speech.
Fig. 1. The proposed SyncTalk++ uses 3D Gaussian Splatting for rendering. It can generate synchronized lip movements, facial expressions, and more stable head poses, features faster rendering speeds, and applies to high-resolution talking videos.
Based on the above motivations, we find that the “devil” is in the synchronization. Existing methods lack synchronization in four key areas: subject identity, lip movements, facial expressions, and head poses. To address these synchronization challenges, we propose three key sync modules: the Face-Sync Controller, the Head-Sync Stabilizer, and the Dynamic Portrait Renderer, as shown in Fig. 1.
The first is the synchronization of lip movements and facial expressions, for which we use the Face-Sync Controller, employing an audio-visual encoder and a 3D facial blendshape model to achieve high synchronization. Unlike traditional methods that rely on pre-trained ASR models, our approach leverages an audio-visual synchronization encoder trained specifically for aligning audio features with lip movements. This ensures that the extracted audio features are better aligned with the movements of the lips. The Face-Sync Controller also incorporates a 3D facial blendshape model, which utilizes semantically meaningful facial coefficients to capture and control expressions. This allows the system to produce more nuanced and realistic facial expressions independent of lip movements.
The second is the synchronization of head poses, for which the Head-Sync Stabilizer plays a vital role in maintaining stability. This module employs a two-stage optimization framework that starts with an initial estimation of head pose parameters, followed by a refinement process integrating optical flow information and keypoint tracking. By using a semantic weighting module to reduce the weight of unstable points, such as those around the eyebrows and eyes, the Head-Sync Stabilizer significantly improves the accuracy and stability of head poses, ensuring that head movements remain natural and consistent.
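To illustrate the idea of semantically weighted pose refinement, the sketch below solves a weighted 2D rigid alignment (weighted Procrustes/Kabsch) between tracked and reference keypoints, where unstable points would receive small weights. This is our own illustrative stand-in, not the paper's actual optimization, which works in 3D with optical flow.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """Weighted 2D Procrustes (Kabsch) step: find rotation R and translation
    t minimising sum_i w_i * ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 2) keypoint arrays; w: (N,) per-keypoint semantic weights,
    where unstable points (e.g. eyebrows, eyes) would get small values.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)   # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    # Weighted cross-covariance between the centred point sets
    S = (dst - mu_d).T @ (w[:, None] * (src - mu_s))
    U, _, Vt = np.linalg.svd(S)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    R = U @ np.diag([1.0, d]) @ Vt
    t = mu_d - R @ mu_s
    return R, t
```

Down-weighting a keypoint shrinks its contribution to both the centroids and the cross-covariance, so jittery regions barely perturb the recovered pose, which is the stabilizing effect the semantic weighting module is after.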
The third is the synchronization of subject identity, for which the Dynamic Portrait Renderer takes charge of high-fidelity facial rendering and the restoration of fine details. Utilizing 3D Gaussian Splatting, the renderer explicitly models 3D Gaussian primitives, allowing for high-fidelity reconstruction of facial features from multiple perspectives. This method not only improves rendering speed but also reduces visual artifacts.
In real-world applications of talking heads, commonly used out-of-distribution (OOD) audio, such as audio from different speakers or text-to-speech (TTS) systems, often leads to mismatches between facial expressions and spoken content. For instance, an OOD audio clip might make a character frown during a cheerful topic, reducing the realism of the video. To address this issue, we introduce the OOD Audio Expression Generator. This module creates facial expressions that match the speech content, which we call speech-matched expressions, enhancing the realism of the expressions even with OOD audio. Additionally, to handle the limited generalization of Gaussian Splatting with unseen data, we incorporate a codebook that minimizes cross-modal mapping uncertainties. Furthermore, when generating videos with OOD audio, inconsistencies may arise, such as the character’s mouth being open in the original frame but closed in the generated frame. This discrepancy in jaw position can lead to pixel gaps between the generated head and torso. To address this, we introduce the Torso Restorer, which uses a lightweight U-Net-based inpainting model. This module effectively bridges these gaps, ensuring seamless integration of the head and torso, thus improving the final video’s overall visual quality and coherence.
The contributions of this paper are summarized below:
- We present SyncTalk++, a talking head synthesis method using Gaussian Splatting, achieving high synchronization of identity, lip movements, expressions, and head poses, with 101 frames per second rendering and improved visual quality.
- We enhance robustness to out-of-distribution (OOD) audio inference by using an Expression Generator and a Torso Restorer to generate speech-matched facial expressions and repair artifacts at head-torso junctions.
- We compare our method with recent state-of-the-art methods; both qualitative and quantitative comparisons demonstrate that our method outperforms existing methods and is ready for practical deployment.
A preliminary version of this work was presented in [33]. In this extended work, we make improvements in four aspects: (1) We adopt Gaussian Splatting to replace NeRF implicit modeling, achieving faster rendering speed and higher fidelity; (2) We introduce an Expression Generator and Torso Restorer to enhance robustness against out-of-distribution (OOD) audio, thereby improving stability in practical applications; (3) We optimize the facial tracking module by incorporating a semantic weighting module to improve reconstruction stability; and (4) We conduct broader and more comprehensive experiments, demonstrating that our method outperforms the existing state-of-the-art.
# II. RELATED WORK
# A. 2D Generation-based Talking Head Synthesis
1) GAN-based Talking Head Synthesis: Recently, GAN-based talking head synthesis [34]–[42] has emerged as an essential research area in computer vision. For example, Wav2Lip [43] introduces a lip synchronization expert to supervise lip movements, enforcing the consistency of lip movements with the audio. IP-LAP [12] proposes a two-stage framework consisting of audio-to-landmark generation and landmark-to-video rendering procedures, surpassing Wav2Lip and similar methods in video generation quality and alleviating the poor fusion of generated lip-region images with facial images. These methods generate only the lower half of the face or the lip region, while the other areas retain the original video content. This can lead to uncoordinated facial movements, and artifacts are likely to appear at the boundary between the original and generated regions. Methods like [36], [44]–[46] generate the entire face but struggle to maintain the original facial details. Apart from video-stream techniques, efforts have also been made to enable a single image to “speak” using speech. For example, SadTalker [47] uses 3D motion coefficients derived from audio to modulate a 3D-aware face renderer implicitly.
2) Diffusion-based Talking Head Synthesis: With the widespread application of diffusion models in the Artificial Intelligence Generated Content field, their excellent generative capabilities have also been utilized for talking head synthesis, such as [17], [19], [48]–[50]. For example, EMO [17] employs Stable Diffusion [51] as the foundational framework to achieve vivid video synthesis of a single image. DiffTalk [19], in addition to using audio conditions to drive the lip motions, further incorporates reference images and facial landmarks as extra driving factors for personalized facial modeling. Hallo [48] introduces a hierarchical cross-attention mechanism to augment the correlation between audio inputs and non-identity-related motions. However, these methods rely solely on a single reference image to synthesize a series of continuous frames, making it difficult to maintain a single character’s identity consistently. This often results in inconsistencies in teeth and lips. The lack of 3D facial structure information can sometimes lead to distorted facial features. Additionally, these diffusion model-based methods often require significant computational resources, which presents challenges in deployment.
Compared to these methods, SyncTalk++ uses Gaussian Splatting to perform three-dimensional modeling of the face. Its ability to represent continuous 3D scenes in a canonical space yields exceptional performance in maintaining subject identity consistency and detail preservation. At the same time, its training and rendering speed is significantly superior to that of 2D-based methods.
# B. 3D Reconstruction-based Talking Head Synthesis
1) NeRF-based Talking Head Synthesis: With the recent rise of NeRF, numerous fields have begun to utilize it to tackle related challenges [52], [53]. Previous work [21], [22], [24], [54] has integrated NeRF into the task of synthesizing talking heads and used audio as the driving signal, but these methods are all based on the vanilla NeRF model. For instance, AD-NeRF [21] requires approximately 10 seconds to render a single image. RAD-NeRF [55] aims for real-time video generation and employs a NeRF based on Instant-NGP [56]. ER-NeRF [25] innovatively introduces triple-plane hash encoders to trim the empty spatial regions, advocating for a compact and accelerated rendering approach. GeneFace [24] attempts to reduce NeRF artifacts by translating speech features into facial landmarks, but this often results in inaccurate lip movements. Portrait4D [57] creates pseudo-multi-view videos from existing monocular videos and trains on a large-scale multi-view dataset. It can reconstruct multi-pose talking heads from a single image. However, it cannot be directly driven by speech and faces the same problem as 2D one-shot methods in maintaining identity consistency. Attempts to create character avatars with NeRF-based methods, such as [58]–[61], cannot be directly driven by speech. These methods only use audio as a condition, without a clear concept of sync, and usually result in averaged lip movement. Additionally, previous methods lack control over facial expressions, being limited to controlling blinking only, and cannot model actions like raising eyebrows or frowning.
2) 3DGS-based Talking Head Synthesis: Recently, Gaussian Splatting based on explicit parameter modeling has demonstrated excellent performance in 3D rendering [29], [62], [63]. 3DGS has been explored for application in 3D human avatar modeling. 3DGS-Avatar [64] utilizes 3D Gaussian projection and a non-rigid deformation network to quickly generate animatable human head avatars from monocular videos. GauHuman [65] combines the LBS weight field module and the posture refinement module to transform 3D
Fig. 2. Overview of SyncTalk++. Given a cropped reference video of a talking head and the corresponding speech, SyncTalk++ can extract the Lip Feature $f_l$, Expression Feature $f_e$, and Head Pose $(R, T)$ through two synchronization modules (a) and (b). Then, Gaussian Splatting is used to model and deform the head, producing a talking head video. The OOD Audio Expression Generator and Torso Restorer can generate speech-matched facial expressions and repair artifacts at head-torso junctions.
Gaussian distribution from the canonical space to the pose space. PSAvatar [66] uses a Point-based Morphable Shape Model (PMSM) and 3D Gaussian modeling to excel in real-time animation through flexible and detailed 3D geometric modeling. GaussianAvatars [67] innovatively combines the FLAME mesh model with 3D Gaussian distributions to achieve detailed head reconstruction through the spatial properties of the Gaussian distribution. Gaussian Head Avatar [68] utilizes controllable 3D Gaussian models for high-fidelity head avatar modeling. GaussianTalker [69] integrates 3D Gaussian attributes with audio features into a shared implicit feature space, using 3D Gaussian splatting for fast rendering. It is a real-time pose-controllable talking head model that significantly improves facial realism, lip synchronization accuracy, and rendering speed. TalkingGaussian [31] is a deformation-based framework leveraging point-based Gaussian Splatting to represent facial movements by maintaining a stable head structure and smoothly, continuously deforming Gaussian primitives, thereby generating high-fidelity talking head avatars. However, these methods have certain limitations in synchronization mechanisms, such as the inability to consistently maintain stable head poses, leading to the separation of the head and torso.
In comparison, we use the Face-Sync Controller to capture the relationship between audio and lip movements, thereby enhancing the synchronization of lip movements and expressions, and the Head-Sync Stabilizer to improve head posture stability.
# III. METHOD
# A. Overview
In this section, we introduce the proposed SyncTalk++, as shown in Fig. 2. SyncTalk++ mainly consists of five parts: a) lip movements and facial expressions controlled by the Face-Sync Controller, b) a stable head pose provided by the Head-Sync Stabilizer, c) highly synchronized facial frames rendered by the Dynamic Portrait Renderer, d) speech-matched expressions generated by the OOD Audio Expression Generator, and e) facial and torso fusion details repaired by the Torso Restorer. We describe these five parts in detail in the following subsections.
# B. Face-Sync Controller
Audio-Visual Encoder. Existing 3D reconstruction-based methods utilize pre-trained models such as DeepSpeech [32], Wav2Vec 2.0 [70], or HuBERT [71], which are audio feature extractors designed for speech recognition. An audio encoder designed for Automatic Speech Recognition (ASR) does not truly reflect lip movements, because such pre-trained models capture the feature distribution from audio to text, whereas we need the feature distribution from audio to lip movements.
Considering the above, we use an audio encoder trained for audio-visual synchronization on the 2D audio-visual dataset LRS2 [72]. This encourages the audio features extracted by our method and the lip movements to share the same feature distribution. The specific implementation is as follows: we use a pre-trained lip synchronization discriminator [73], which scores the confidence of lip synchronization in a video. The discriminator takes as input a continuous face window $F$ and the corresponding audio frame $A$. If they overlap entirely, they are judged as positive samples (with label $y = 1$); otherwise, they are judged as negative samples (with label $y = 0$). The discriminator calculates the cosine similarity between these sequences as follows:
$$
\operatorname{sim}(F, A) = \frac{F \cdot A}{\|F\|_2 \, \|A\|_2},
$$
and then uses binary cross-entropy loss:
$$
\mathcal{L}_{\mathrm{sync}} = -\left( y \log(\operatorname{sim}(F, A)) + (1 - y) \log(1 - \operatorname{sim}(F, A)) \right),
$$
to minimize the distance for synchronized samples and maximize the distance for non-synchronized samples.
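As a toy illustration (not the actual discriminator from [73]), the similarity and loss above can be sketched in plain Python; embeddings are plain lists, and the similarity is clamped into (0, 1) for numerical stability, since a raw cosine can be negative:

```python
import math

def cosine_sim(f, a):
    """Cosine similarity between a face-window embedding F and an audio embedding A."""
    dot = sum(x * y for x, y in zip(f, a))
    norm_f = math.sqrt(sum(x * x for x in f))
    norm_a = math.sqrt(sum(x * x for x in a))
    return dot / (norm_f * norm_a)

def sync_loss(f, a, y, eps=1e-7):
    """Binary cross-entropy on the similarity: low for matched pairs (y = 1)
    with high similarity, high for mismatched pairs labeled as matched."""
    s = min(max(cosine_sim(f, a), eps), 1.0 - eps)  # clamp into (0, 1)
    return -(y * math.log(s) + (1 - y) * math.log(1.0 - s))
```

Minimizing this loss pushes the similarity of synchronized pairs toward 1 and that of non-synchronized pairs toward 0.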
Under the supervision of the lip synchronization discriminator, we pre-train a highly synchronized audio-visual feature extractor related to lip movements. First, we use convolutional networks to obtain audio features $\operatorname{Conv}(A)$ and encode facial features $\operatorname{Conv}(F)$. These features are then concatenated. In the decoding phase, we use stacked convolutional layers to restore facial frames using the operation $\operatorname{Dec}(\operatorname{Conv}(A) \oplus \operatorname{Conv}(F))$. The $L_1$ reconstruction loss during training is given by:
$$
L _ { \mathrm { r e c o n } } = \| F - \operatorname { D e c } ( \operatorname { C o n v } ( A ) \oplus \operatorname { C o n v } ( F ) ) \| _ { 1 } .
$$
Simultaneously, we sample synchronized and non-synchronized segments using the lip synchronization discriminator and employ the same sync loss as Eq. 2. We train an audio-conditioned facial generation network by minimizing both losses, with the reconstruction results shown in Fig. 3. We discard the facial encoder and decoder parts of the network, retaining only the audio convolution component $\operatorname{Conv}(A)$, which serves as a highly synchronized audio-visual encoder related to lip movements. Our method effectively restores the lip movements of the input image from audio features, thereby enhancing lip synchronization capability.
Facial Animation Capturer. Considering the need for more synchronized and realistic facial expressions, we add an expression synchronization control module. Specifically, we introduce a 3D facial prior using 52 semantically meaningful facial blendshape coefficients [74], represented by $B$, to model the face, as shown in Fig. 4. Because the 3D face model retains the structural information of facial motion, it reflects the content of facial movements well without causing facial structural distortion. During the training process, we first use the facial blendshape capture module, built on ResNet [75], to capture facial expressions as $E(B)$, where $E$ represents the mapping from the blendshape coefficients to the corresponding facial expression feature. The captured expression can be represented as:
Fig. 3. Visualization of reconstruction quality. The Audio-Visual Encoder effectively captures and reconstructs lip movements.
$$
E ( B ) = \sum _ { i = 1 } ^ { 5 2 } w _ { i } \cdot B _ { i } ,
$$
where $w _ { i }$ are the weights associated with each blendshape coefficient $B _ { i }$ .
We first estimate all 52-dimensional blendshape coefficients and, to facilitate network learning, select seven core facial expression control coefficients—Brow Down Left, Brow Down Right, Brow Inner Up, Brow Outer Up Left, Brow Outer Up Right, Eye Blink Left, and Eye Blink Right—to control the eyebrow, forehead, and eye regions specifically. These coefficients are highly correlated with expressions and are independent of lip movements. The expression of each region can be represented as:
$$
E _ { \mathrm { c o r e } } = \sum _ { j = 1 } ^ { 7 } w _ { j } \cdot B _ { j } ,
$$
where $E_{\mathrm{core}}$ represents the core expression and $w_j$ are the corresponding weights for the seven selected blendshape coefficients.
Using these semantically meaningful blendshape coefficients allows the model to capture and accurately represent the nuances of facial movements. During training, this module helps the network learn the complex dynamics of facial expressions more effectively, ensuring that the generated animations maintain structural consistency while being expressive and realistic.
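The weighted sum over the seven core coefficients can be sketched as follows; the ARKit-style coefficient identifiers below are assumed for illustration (they mirror the names listed above but are not taken from the paper's code):

```python
# Assumed ARKit-style identifiers for the seven core coefficients named above.
CORE_NAMES = [
    "browDownLeft", "browDownRight", "browInnerUp",
    "browOuterUpLeft", "browOuterUpRight",
    "eyeBlinkLeft", "eyeBlinkRight",
]

def core_expression(blendshapes, weights):
    """Weighted sum E_core = sum_j w_j * B_j over the seven core coefficients.

    `blendshapes` maps coefficient name -> activation in [0, 1];
    `weights` maps coefficient name -> learned weight w_j.
    """
    return sum(weights[n] * blendshapes[n] for n in CORE_NAMES)
```

In practice the weights would be learned; here they are just dictionary entries supplied by the caller.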
Facial-Aware Masked-Attention. To reduce the mutual interference between lip features and expression features during training, we use the vertical coordinate of the nose-tip landmark as a horizontal boundary that divides the face into two parts: the lower face (lips) and the upper face (expressions). We then apply masks $M_{\mathrm{lip}}$ and $M_{\mathrm{exp}}$ to the respective attention areas for lips and expressions. Specifically, the new attention mechanisms are defined as follows:
Fig. 4. Facial Animation Capturer. We use 3D facial blendshape coefficients to capture the expressions of characters.
$$
\begin{array} { c } { { V _ { \mathrm { l i p } } = V \odot M _ { \mathrm { l i p } } , } } \\ { { V _ { \mathrm { e x p } } = V \odot M _ { \mathrm { e x p } } . } } \end{array}
$$
These formulations allow the attention mechanisms to focus solely on their respective parts, thereby reducing entanglement between them. Before disentanglement, lip movements might induce blinking tendencies and affect hair volume. By introducing the mask module, the attention mechanism can focus on either expressions or lips without affecting other areas, thereby reducing artifacts caused by coupling. Finally, we obtain the disentangled lip feature $f_l = f_{\mathrm{lip}} \odot V_{\mathrm{lip}}$ and expression feature $f_e = f_{\mathrm{exp}} \odot V_{\mathrm{exp}}$.
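A minimal sketch of the masking idea, reduced to a 1-D row axis with binary masks split at the nose-tip row (the actual module applies $\odot$ to 2-D attention maps):

```python
def split_face_masks(height, nose_row):
    """Binary masks splitting the face at the nose-tip row:
    rows at or below the boundary -> lip mask, rows above -> expression mask."""
    m_lip = [1 if r >= nose_row else 0 for r in range(height)]
    m_exp = [1 - v for v in m_lip]
    return m_lip, m_exp

def apply_mask(values, mask):
    """Element-wise product V ⊙ M over the row axis."""
    return [v * m for v, m in zip(values, mask)]
```

Each masked attention map is zero outside its region, so gradients from the lip branch cannot flow into the expression region and vice versa.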
# C. Head-Sync Stabilizer
Head Motion Tracker. The head pose, denoted as $p$, refers to the rotation angle of a person’s head in 3D space and is defined by a rotation $R$ and a translation $T$. An unstable head pose can lead to head jitter. In this section, we use Face Alignment [76] to extract sparse 2D landmarks and estimate the corresponding 3D keypoints using the BFM (Basel Face Model) [77]. The facial shape is modeled with identity $( \alpha_{\mathrm{id}} )$ and expression $( \alpha_{\mathrm{exp}} )$ parameters, while head motion is captured through rotation $(R)$ and translation $(T)$. We obtain 3D keypoint projections based on these parameters and compute the projection loss by comparing them with the detected 2D landmarks, allowing for iterative optimization. For each frame, we refine the expression parameters $( \alpha_{\mathrm{exp}} )$, pose parameters $(R, T)$, and focal length $(f)$, while keeping the identity parameters $( \alpha_{\mathrm{id}} )$ fixed to maintain subject consistency. Rather than assuming a rigid 3D facial shape, we explicitly model both static identity features and dynamic expression variations, ensuring robust tracking that captures temporal facial motion changes. Since the expression and identity parameters are not needed below, we omit them in the following description. The following are the details of the Head Motion Tracker.
Initially, the best focal length is determined by iterating over candidates within a predetermined range. For each focal length candidate $f_i$, the system re-initializes the rotation and translation values. The objective is to minimize the error between the projected landmarks from the 3D Morphable Model (3DMM) [77] and the actual landmarks in the video frame. Formally, the optimal focal length $f_{\mathrm{opt}}$ is given by:
$$
f _ { \mathrm { o p t } } = \arg \operatorname* { m i n } _ { f _ { i } } E _ { i } ( L _ { 2 D } , L _ { 3 D } ( f _ { i } , R _ { i } , T _ { i } ) ) ,
$$
where $E_i$ represents the Mean Squared Error (MSE) between these landmarks, $L_{3D}(f_i, R_i, T_i)$ represents the landmarks projected from the 3DMM for a given focal length $f_i$ and the corresponding rotation and translation parameters $R_i$ and $T_i$, and $L_{2D}$ are the actual landmarks from the video frame. Subsequently, leveraging the optimal focal length $f_{\mathrm{opt}}$, the system refines the rotation $R$ and translation $T$ parameters for all frames to better align the model’s projected landmarks with the actual video landmarks. This refinement process can be mathematically represented as:
$$
( R _ { \mathrm { o p t } } , T _ { \mathrm { o p t } } ) = \arg \operatorname* { m i n } _ { R , T } E ( L _ { 2 D } , L _ { 3 D } ( f _ { \mathrm { o p t } } , R , T ) ) ,
$$
where $E$ denotes the MSE between the 3D model’s projected landmarks $L_{3D}$ for the optimal focal length $f_{\mathrm{opt}}$ and the actual 2D landmarks $L_{2D}$ in the video frame. The optimized rotation $R_{\mathrm{opt}}$ and translation $T_{\mathrm{opt}}$ are obtained by minimizing this error across all frames.
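The focal-length selection above can be sketched as a grid search over candidates; the toy pinhole projection below omits rotation and translation (which the full tracker re-initializes per candidate), so it is a sketch of the search structure rather than the actual tracker:

```python
def mse(a, b):
    """Mean squared error between two flat coordinate lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def project(points3d, f):
    """Toy pinhole projection (x', y') = (f*X/Z, f*Y/Z), returned flattened."""
    return [c for (x, y, z) in points3d for c in (f * x / z, f * y / z)]

def best_focal(landmarks2d, points3d, candidates):
    """Grid search for the focal length minimizing landmark reprojection MSE."""
    return min(candidates, key=lambda f: mse(landmarks2d, project(points3d, f)))
```

With landmarks generated at a known focal length, the search recovers that candidate exactly, since its reprojection error is zero.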
Stable Head Points Tracker. Methods based on Gaussian Splatting require head rotation $R$ and translation $T$ as input; previous methods utilize 3DMM-based techniques to extract head poses, often yielding inaccurate results. To improve the precision of $R$ and $T$, we use an optical flow estimation model from [22] to track facial keypoints $K$. Specifically, we first use a pre-trained optical flow estimation model to obtain optical flow information $F$ of facial movements. The optical flow information is defined as:
$$
\begin{array} { r } { F ( x _ { f } , y _ { f } , t _ { f } ) = ( u _ { f } ( x _ { f } , y _ { f } , t _ { f } ) , v _ { f } ( x _ { f } , y _ { f } , t _ { f } ) ) , } \end{array}
$$
where $\boldsymbol { u } _ { f }$ and $v _ { f }$ are the horizontal and vertical components of the optical flow at pixel location $( x _ { f } , y _ { f } )$ at time $t _ { f }$ . Then, by applying a Laplacian filter $L$ , we select keypoints with the most significant flow changes:
$$
K ^ { \prime } = \{ k \in K \mid L ( F ( k ) ) > \theta \} ,
$$
where $\theta$ is a threshold defining significant movement. We track these keypoints’ movement trajectories $T _ { K }$ in the optical flow sequence.
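The selection rule above amounts to a threshold filter over per-keypoint Laplacian responses; a minimal sketch, with keypoints and precomputed responses as plain lists:

```python
def select_keypoints(keypoints, flow_laplacian, theta):
    """Keep keypoints whose flow-Laplacian response exceeds the movement
    threshold theta: K' = {k in K | L(F(k)) > theta}."""
    return [k for k, lap in zip(keypoints, flow_laplacian) if lap > theta]
```

The surviving subset $K'$ is then tracked through the optical flow sequence to obtain the trajectories $T_K$.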
During the optical flow estimation in SyncTalk, we observed noticeable jitter issues when tracking certain subjects, particularly due to the movement of eyebrows and eyes. These regions tend to exhibit more dynamic and unpredictable movements, which can introduce instability in the facial tracking process. To address this, we implemented a Semantic Weighting module that selectively assigns lower weights to key points located in the eyebrow and eye regions, as these are more prone to erratic movements.
The Semantic Weighting Module first detects sparse landmarks across the face and then applies a semantic weighting mask to the detected keypoints. This step is crucial because the dynamic movements in these regions can otherwise lead to noisy and unstable tracking results. By excluding these high-variance regions, the Semantic Weighting module ensures that only the most stable and reliable keypoints are used in subsequent tracking, significantly enhancing the accuracy of the head pose parameters $R$ and $T$ .
Fig. 5. Overview of Gaussian Rendering. Canonical Gaussian fields utilize a triplane representation to encode 3D head features, which are processed by the MLP to yield canonical parameters. These parameters are then integrated with lip feature, expression feature, and head pose parameters in the deformable Gaussian fields. This design facilitates the generation of high-fidelity talking head, achieving realistic and dynamic facial animations.
Bundle Adjustment. Given the keypoints and the rough head pose, we introduce a two-stage optimization framework from [21] to enhance the accuracy of keypoint and head pose estimates. In the first stage, we randomly initialize the 3D coordinates of $j$ keypoints and optimize their positions to align with the tracked keypoints on the image plane. This process involves minimizing a loss function $L_{\mathrm{init}}$, which captures the discrepancy between the projected keypoints $P$ and the tracked keypoints $K^{\prime\prime}$, as given by:
$$
L _ { \mathrm { i n i t } } = \sum _ { j } \lVert P _ { j } - K _ { j } ^ { \prime \prime } \rVert _ { 2 } .
$$
Subsequently, in the second stage, we perform a more comprehensive optimization to jointly refine the 3D keypoints and the associated head pose parameters. Using the Adam optimizer [78], the algorithm adjusts the spatial coordinates, rotation angles $R$, and translations $T$ to minimize the alignment error $L_{\mathrm{sec}}$, expressed as:
$$
L _ { \mathrm { s e c } } = \sum _ { j } \| P _ { j } ( R , T ) - K _ { j } ^ { \prime \prime } \| _ { 2 } .
$$
After these optimizations, the resultant head pose and translation parameters are observed to be smooth and stable.
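The first-stage alignment can be sketched with plain gradient descent on a squared-distance version of the loss (the paper uses Adam and a real camera projection; both are simplified away here, with the projection treated as identity):

```python
def refine_points(points, targets, lr=0.1, steps=200):
    """Gradient descent on sum_j ||P_j - K''_j||^2 (squared for a smooth
    gradient), treating the projection P as the identity map for brevity."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        for p, t in zip(pts, targets):
            for d in range(len(p)):
                p[d] -= lr * 2 * (p[d] - t[d])  # d/dp of (p - t)^2
    return pts
```

Each coordinate decays toward its target geometrically (factor $1 - 2\,\mathrm{lr}$ per step), so the residual vanishes after a few hundred iterations.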
# D. Dynamic Portrait Renderer
Preliminaries on 3D Gaussian Splatting. By leveraging a set of 3D Gaussian primitives and the camera model information from the observational viewpoint, 3D Gaussian Splatting (3DGS) [29] can be used to calculate the predicted pixel colors. Specifically, each Gaussian primitive can be described by a center (mean) $\mu \in \mathbb{R}^3$ and a covariance matrix $\Sigma \in \mathbb{R}^{3 \times 3}$ in 3D space as follows:
$$
g ( \mathbf { x } ) = \exp ( - \frac { 1 } { 2 } ( \mathbf { x } - \boldsymbol { \mu } ) ^ { T } \Sigma ^ { - 1 } ( \mathbf { x } - \boldsymbol { \mu } ) ) ,
$$
where the covariance matrix $\Sigma = R S S^T R^T$ can be further decomposed into a rotation matrix $R$ and a scaling matrix $S$ for regularizing optimization. These matrices can subsequently be expressed as a learnable quaternion $r \in \mathbb{R}^4$ and a scaling factor $s \in \mathbb{R}^3$. For rendering purposes, each Gaussian primitive is characterized by its opacity value $\alpha \in \mathbb{R}$ and spherical harmonics parameters $SH \in \mathbb{R}^k$, where $k$ is the degrees of freedom. Thus, any Gaussian primitive can be represented as $\mathcal{G} = \{ \mu, r, s, \alpha, SH \}$.
During point-based rendering, the 3D Gaussian is transformed into camera coordinates through the world-to-camera transformation matrix $W$ and projected to the image plane via the local affine transformation $J$ [79]:
$$
\Sigma' = J W \Sigma W^T J^T .
$$
Subsequently, the color of each pixel is computed by blending all the overlapping and depth-sorted Gaussians:
$$
\hat { C } ( \mathrm { r } ) = \sum _ { i = 1 } ^ { N } c _ { i } \tilde { \alpha } _ { i } \prod _ { j = 1 } ^ { i - 1 } ( 1 - \tilde { \alpha _ { j } } ) ,
$$
where $i$ indexes the $N$ depth-sorted Gaussian primitives, $c_i$ is the view-dependent appearance, and $\tilde{\alpha}_i$ is calculated from the opacity $\alpha$ of the 3D Gaussian alongside its projected covariance $\Sigma'$.
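The blending formula above is standard front-to-back alpha compositing; a minimal single-channel sketch over depth-sorted primitives:

```python
def composite(colors, alphas):
    """Front-to-back alpha blending over depth-sorted Gaussians:
    C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)."""
    c_out, transmittance = 0.0, 1.0
    for c, a in zip(colors, alphas):
        c_out += c * a * transmittance
        transmittance *= (1.0 - a)
    return c_out
```

Once a fully opaque primitive is reached, the transmittance drops to zero and everything behind it is ignored.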
Triplane Gaussian Representation. Utilizing multi-perspective images and corresponding camera poses, we aim to reconstruct canonical 3D Gaussians representing the average shape of a talking head and design a deformation module that modifies these Gaussians based on audio input, as shown in Fig. 5. Ultimately, this deformation module predicts the offset of each Gaussian attribute for the audio input and rasterizes the deformed Gaussians from relevant viewpoints to generate novel images.
Fig. 6. Visualization of the triplane feature grids. The reference images (left) are projected onto three orthogonal planes: $( x , y )$ , $( y , z )$ , and $( x , z )$ .
Addressing the challenges of learning canonical 3D Gaussians, such as ensuring consistency across multiple viewpoints, we incorporate three uniquely oriented 2D feature grids [80]–[82]. A 3D coordinate $\mathbf{x} = (x, y, z)$ undergoes an interpolation process for its projected values via three individual 2D grids:
$$
{ \begin{array} { r l } & { { \mathrm { i n t e r p } } ^ { \mathrm { X Y } } : ( x , y ) \to f ^ { \mathrm { X Y } } ( x , y ) , } \\ & { { \mathrm { i n t e r p } } ^ { \mathrm { Y Z } } : ( y , z ) \to f ^ { \mathrm { Y Z } } ( y , z ) , } \\ & { { \mathrm { i n t e r p } } ^ { \mathrm { X Z } } : ( x , z ) \to f ^ { \mathrm { X Z } } ( x , z ) , } \end{array} }
$$
where the outputs $f ^ { \mathrm { X Y } } ( x , y ) , f ^ { \mathrm { Y Z } } ( y , z ) , f ^ { \mathrm { X Z } } ( x , z ) \ \in \ \mathbb { R } ^ { L D }$ , with $L$ representing the number of levels and $D$ representing the feature dimensions per entry, signify the planar geometric features corresponding to the projected coordinates $( x , y ) , ( y , z ) , ( x , z )$ . By reducing the dimensionality of the triplane features and projecting them onto the planes based on XYZ coordinates, we can observe that the triplane method effectively models facial depth information while maintaining multi-angle consistency, as shown in Fig. 6.
By fusing the outcomes, the fused geometric feature $f _ { \mu } \in$ $\mathbb { R } ^ { 3 \times L D }$ is derived as:
$$
f _ { \mu } = f ^ { \mathrm { X Y } } ( x , y ) \oplus f ^ { \mathrm { Y Z } } ( y , z ) \oplus f ^ { \mathrm { X Z } } ( x , z ) ,
$$
where the concatenation of features is symbolized by $\oplus$ , resulting in a $3 \times L D$ -channel vector. Specifically, we employ a suite of MLP layers, designated as $\mathcal { F } _ { \mathrm { c a n } }$ , to project the features $f _ { \mu }$ onto the entire spectrum of attributes of the Gaussian primitives, as illustrated below:
$$
\mathcal { F } _ { \mathrm { c a n } } \left( f _ { \mu } \right) = \mathcal { G } _ { \mathrm { c a n } } = \left\{ \mu _ { c } , r _ { c } , s _ { c } , \alpha _ { c } , S H _ { c } \right\} .
$$
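The triplane lookup and fusion above can be sketched with bilinear interpolation on three scalar grids (one feature channel per plane, i.e. $L D = 1$, and coordinates assumed already scaled into grid range):

```python
def bilerp(grid, u, v):
    """Bilinear interpolation on a 2-D grid of scalars; u, v are
    non-negative coordinates within the grid bounds."""
    u0, v0 = int(u), int(v)
    u1 = min(u0 + 1, len(grid) - 1)
    v1 = min(v0 + 1, len(grid[0]) - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * grid[u0][v0] + du * (1 - dv) * grid[u1][v0]
            + (1 - du) * dv * grid[u0][v1] + du * dv * grid[u1][v1])

def triplane_feature(gxy, gyz, gxz, x, y, z):
    """Concatenate the three planar lookups:
    f_mu = f^XY(x, y) (+) f^YZ(y, z) (+) f^XZ(x, z)."""
    return [bilerp(gxy, x, y), bilerp(gyz, y, z), bilerp(gxz, x, z)]
```

In the full model each lookup returns an $LD$-dimensional feature and the concatenated vector is fed to the MLP $\mathcal{F}_{\mathrm{can}}$; here each plane contributes a single scalar.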
To fully leverage the explicit representation of 3DGS, we opt to deform 3D Gaussians, manipulating not only the appearance information but also the spatial positions and shape of each Gaussian primitive. Consequently, we define a suite of MLP regressors $\mathcal { F } _ { \mathrm { d e f o r m } }$ to predict the offsets for each Gaussian attribute, utilizing $f _ { \mu }$ , the lip feature $f _ { l }$ , and the expression feature $f _ { e }$ , as elucidated below:
$$
\begin{array} { r } { \mathcal { F } _ { \mathrm { d e f o r m } } \left( f _ { \mu } , f _ { l } , f _ { e } , R , T \right) = \left\{ \triangle \mu , \triangle r , \triangle s , \triangle \alpha , \triangle S H \right\} , } \end{array}
$$
Thus, by applying the deformation network, we integrate the lip and expression features generated by the Face-Sync Controller module and the head pose features from the Head-Sync Stabilizer module. We then compute the deformations in position, rotation, and scale. These deformations are subsequently integrated with the canonical 3D Gaussians, ultimately defining the deformable 3D Gaussians:
$$
\begin{array} { r } { \mathcal { G } _ { \mathrm { d e f o r m } } = \{ \mu _ { c } + \triangle \mu , r _ { c } + \triangle r , s _ { c } + \triangle s , } \\ { \alpha _ { c } + \triangle \alpha , S H _ { c } + \triangle S H \} . } \end{array}
$$
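The deformation step above is an attribute-wise addition of the predicted offsets to the canonical Gaussians; a minimal sketch with per-attribute lists standing in for the tensors:

```python
def deform_gaussian(canon, offsets):
    """Add predicted offsets to canonical Gaussian attributes:
    G_deform = {mu_c + d_mu, r_c + d_r, s_c + d_s, alpha_c + d_alpha, SH_c + d_SH}."""
    return {key: [c + d for c, d in zip(canon[key], offsets[key])]
            for key in canon}
```

Because the addition is per attribute, the deformation can move, rotate, rescale, fade, and recolor each primitive independently of the others.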
Optimization and Training Details. We adopt a two-stage training methodology to optimize the model progressively. In the first stage, focused on the canonical Gaussian fields, we begin by optimizing the positions of the 3D Gaussians and the triplanes to establish a preliminary head structure. The static images of the canonical talking head are then rasterized as follows:
$$
I _ { \mathrm { s t a t i c } } = { \mathcal { R } } ( { \mathcal { G } } _ { \mathrm { c a n } } , V ) ,
$$
where $V$ defines the camera settings that determine the rendering perspective.
During this stage, we utilize a combination of pixel-level $\mathcal { L } _ { 1 }$ loss, perceptual loss, and Learned Perceptual Image Patch Similarity (LPIPS) loss to capture fine-grained details and measure the difference between the rendered and real images. The overall loss function is defined as:
$$
\mathcal { L } _ { \mathrm { s t a t i c } } = \lambda _ { \mathrm { L 1 } } \mathcal { L } _ { \mathrm { L 1 } } + \lambda _ { \mathrm { l p i p s } } \mathcal { L } _ { \mathrm { l p i p s } } + \lambda _ { \mathrm { p e r c e p t u a l } } \mathcal { L } _ { \mathrm { p e r c e p t u a l } } .
$$
Once the initial structure is established, we move to the second stage, optimizing the entire network within the deformable Gaussian fields. At this stage, the model predicts the deformations, and the 3D Gaussian Splatting (3DGS) rasterizer renders the final output images:
$$
I _ { \mathrm { d y n a m i c } } = { \mathcal { R } } ( { \mathcal { G } } _ { \mathrm { d e f o r m } } , V ) ,
$$
where $V$ defines the camera settings that determine the rendering perspective.
During the deformation stage, we increase the weight of LPIPS loss, which enhances the model’s ability to capture intricate details and textures in the generated images. This focus results in a more realistic and nuanced visual quality compared to the static phase. The loss function used in this stage is:
$$
\mathcal { L } _ { \mathrm { d y n a m i c } } = \lambda _ { \mathrm { L 1 } } \mathcal { L } _ { \mathrm { L 1 } } + \uparrow \lambda _ { \mathrm { l p i p s } } \mathcal { L } _ { \mathrm { l p i p s } } + \lambda _ { \mathrm { p e r c e p t u a l } } \mathcal { L } _ { \mathrm { p e r c e p t u a l } } .
$$
This two-stage approach allows us to refine the model progressively, ensuring structural integrity in the initial phase and high-quality visual output in the final phase. By carefully balancing the various loss terms, we can produce images that are both visually accurate and rich in detail.
Fig. 7. Learning framework of blendshape coefficient space. The VQVAE model handles out-of-distribution (OOD) blendshape coefficients by embedding them into a learned codebook, ensuring accurate reconstruction and addressing variations in facial expressions.
Portrait-Sync Generator. To seamlessly blend the 3D Gaussian Splatting (3DGS) rendered facial region with the original high-resolution image while preserving fine details—especially hair strands and subtle textures—we introduce the Portrait-Sync Generator. While 3DGS effectively reconstructs facial structures and expressions, it struggles with high-frequency details such as individual hair strands. This module fuses the 3DGS-rendered facial region $F_r$ with the original high-resolution image $F_o$ (e.g., $1920 \times 1080$). Before blending, we apply a Gaussian blur to $F_r$ to generate a smoothed version $G(F_r)$. Then, $G(F_r)$ is placed back onto the original high-resolution image $F_o$ according to the corresponding facial region coordinates. This process enhances the realism of the generated facial region, ensures consistent hair textures across frames, and reduces artifacts, enabling the model to produce high-resolution videos that retain fine details.
# E. OOD Audio Expression Generator
In real-world applications of talking head generation, it is common to encounter scenarios where out-of-distribution (OOD) audio is used. This could include situations where a character’s speech is driven by audio from a different speaker or text-to-speech (TTS) generated audio. However, these situations often lead to a mismatch between the generated facial expressions and the spoken content because previous methods simply repeated facial expressions from the original video. For example, using OOD audio might cause a character to frown while discussing a cheerful topic, thereby undermining the perceived realism and coherence of the generated video.
To overcome these challenges, we introduce an OOD Audio Expression Generator, a module designed to bridge the gap between mismatched audio and facial expressions. This generator builds upon our previous work, EmoTalk [74], published at ICCV, which was developed to produce facial expressions that are tightly synchronized with the speech content—what we refer to as speech-matched expressions. EmoTalk provides a more accurate and context-aware method for driving facial expressions based on the audio input, ensuring that the emotional tone and expression match the spoken words.
However, even with EmoTalk, challenges arise when dealing with OOD audio for characters whose facial blendshape coefficients significantly differ from the reference identity used during training. Since renderers based on 3D Gaussian Splatting (3DGS) [29] typically learn facial expressions from a limited and specific dataset (e.g., a few-minute-long video), when confronted with OOD blendshape coefficients of cross-identity characters generated by EmoTalk [74], the rendering may produce inaccurate facial movements, generate artifacts, or even cause the rendering to crash, because 3DGS struggles to extrapolate to unseen expression coefficients from different sources. Therefore, merely improving the quality of blendshape coefficients is insufficient to address the problem of facial expression generation driven by audio of cross-identity characters.
To enhance the generalization ability, we pre-train a Transformer-based VQ-VAE [83] model, which includes an encoder $E$ , a decoder $D$ , and a context-rich codebook $Z$ , as shown in Fig. 7. This setup allows the model to effectively capture the characteristic distributions of different identities and generate blendshape coefficients that are tailored to the target character’s facial features during the decoding phase.
Specifically, the encoder $E$ converts the input blendshape coefficients $B$ into high-dimensional latent representations $\mathcal{Z}_e = E(B)$. These representations are then mapped to a discrete embedding vector space using a codebook $Z = \{ z_k \in \mathbb{R}^C \}_{k=1}^{N}$, where $C$ represents the dimensionality of each embedding vector and $N$ represents the number of codebook entries. The quantization function $\mathcal{Q}$ maps $\mathcal{Z}_e$ to its nearest entry in the codebook $Z$:
$$
Z_q = \mathcal{Q}(\mathcal{Z}_e) := \arg \min_{z_k \in Z} \left\| \mathcal{Z}_e - z_k \right\|_2,
$$
where the quantized embedding vector $Z_q$ represents the blendshape coefficients adapted for the target character. The reconstructed blendshape coefficients $\widetilde{B}$ are then generated by the decoder $D$:
$$
\widetilde{B} = D(Z_q) = D(\mathcal{Q}(E(B))).
$$
This process ensures that the generated blendshape coefficients align with the unique facial features of the target character, even when driven by OOD audio. The discrete codebook helps mitigate mapping ambiguity, allowing the model to retain expressiveness while accurately capturing the discrete features necessary for effective reconstruction.
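The nearest-neighbor quantization $\mathcal{Q}$ above admits a compact sketch. The following NumPy snippet, with a toy, hypothetical codebook, is illustrative only and not the paper's implementation:

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each latent row of z_e (T, C) to its nearest codebook entry (N, C)."""
    # pairwise squared L2 distances between latents and codebook entries
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, N)
    idx = d.argmin(axis=1)       # index of the nearest entry per latent
    return codebook[idx], idx    # quantized vectors Z_q and their indices

# toy example: N = 4 entries of dimension C = 2
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z_e = np.array([[0.1, -0.2], [0.9, 0.95]])
z_q, idx = quantize(z_e, codebook)  # idx -> [0, 3]
```

Each latent is thus snapped onto the codebook entry minimizing the L2 distance, exactly as in the equation above.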
To supervise the training of the quantized autoencoder, we minimize the reconstruction loss and the quantization loss:
$$
\mathcal{L} = \mathcal{L}_{\mathrm{recon}} + \mathcal{L}_{\mathrm{vq}} = \left\| B - \widetilde{B} \right\|^2 + \left\| \mathcal{Z}_e - \mathrm{sg}(Z_q) \right\|^2 + \beta \left\| \mathrm{sg}(\mathcal{Z}_e) - Z_q \right\|^2,
$$
where sg denotes a stop-gradient operation, and $\beta$ is a weight factor for the commitment loss.
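For concreteness, the forward (scalar) value of this loss can be sketched as follows. Note that sg(·) only affects gradients, so it acts as the identity when computing the value, and the β = 0.25 used here is a common default rather than the paper's stated setting:

```python
import numpy as np

def vq_vae_loss(b, b_rec, z_e, z_q, beta=0.25):
    """Scalar value of the VQ-VAE training loss above.
    sg(.) only stops gradients, so it is the identity for the forward value."""
    recon = np.sum((b - b_rec) ** 2)      # || B - B_tilde ||^2
    codebook = np.sum((z_e - z_q) ** 2)   # || Z_e - sg(Z_q) ||^2
    commit = np.sum((z_e - z_q) ** 2)     # || sg(Z_e) - Z_q ||^2
    return recon + codebook + beta * commit

b = np.array([1.0, 2.0]); b_rec = np.array([1.0, 1.0])
z_e = np.array([0.5]); z_q = np.array([0.0])
loss = vq_vae_loss(b, b_rec, z_e, z_q, beta=0.25)  # 1 + 0.25 + 0.0625 = 1.3125
```

In an actual training loop the two quantization terms differ only in which operand is detached from the computation graph.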
By integrating our method with EmoTalk, we can generate speech-matched facial expressions, even when using OOD audio. The introduction of a discrete codebook enhances the model’s ability to generalize across different identities, ensuring that the generated expressions are both consistent and contextually appropriate.
# F. OOD Audio Torso Restorer
Although the Face-Sync Controller, Head-Sync Stabilizer, and Dynamic Portrait Renderer enable us to achieve high synchronization of facial movements and head poses, challenges remain in rendering fine textures such as the torso, which are distinct from the facial region. Additionally, when generating videos with out-of-distribution (OOD) audio, inconsistencies may arise—such as the character's mouth being open in the original frame but closed in the generated frame. This discrepancy in jaw position can lead to visible gaps between the generated head and torso, often manifesting as dark areas around the chin.
To address these issues, we develop an OOD Audio Torso Restorer, whose main component is the Torso-Inpainting Restorer module, as shown in Fig. 8. This module is designed to repair gaps at the junction between the head and torso caused by discrepancies in facial expressions or jaw positions. The Torso-Inpainting Restorer uses a lightweight U-Net-based inpainting model to seamlessly integrate the rendered facial region with the torso, ensuring the visual coherence and quality of the final output.
The primary cause of these gaps is the mismatch between the facial boundaries rendered by Gaussian Splatting and the torso from the source video. To simulate and address this issue during training, we process the source video frames $F_{source}$ to obtain the original-sized facial mask $M$ and the ground-truth facial region $M F_{source}$. To enhance the network's robustness to various poses, we randomly rotate the source frames and expand the cheek and chin areas of the facial mask. The expanded mask area is then removed from each frame, resulting in a pseudo ground truth for the background region $(1 - M - \delta_{\mathrm{ran}}) F_{source}$, where $\delta_{\mathrm{ran}}$ is the random expansion range of the facial mask $M$.
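The random mask-expansion step can be sketched as follows. The frame size, mask placement, and the simple dilation routine are toy assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def expand_mask(mask, r):
    """Dilate a binary mask by r pixels (square structuring element)."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

rng = np.random.default_rng(0)
mask = np.zeros((128, 128), dtype=bool)
mask[48:80, 48:80] = True             # toy facial mask M (32 x 32 square)
r = int(rng.integers(10, 31))         # random expansion delta_ran, 10-30 px
expanded = expand_mask(mask, r)
frame = rng.random((128, 128, 3))     # toy source frame F_source
face_region = frame * mask[..., None]           # M * F_source
background = frame * (~expanded)[..., None]     # (1 - M - delta_ran) * F_source
```

Removing the expanded band (rather than the exact mask) forces the inpainting network to reconstruct the boundary region where head/torso seams would otherwise appear.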
The inpainting process used by the Torso-Inpainting Restorer is described by the following equation:
$$
\mathcal{T}\big( M F_{source},\, (1 - M - \delta_{\mathrm{ran}}) F_{source},\, \theta \big) = \hat{F}_{source},
$$
where $\mathcal{T}$ represents the inpainting process and $\theta$ denotes the learnable parameters of $\mathcal{T}$.
For $512 \times 512$ images, the random expansion range is set between 10 and 30 pixels. After concatenating the facial and background regions, they are fed into the inpainting model, which completes and smooths the areas removed by the random mask expansion in each frame. The reconstruction loss used to optimize $\mathcal{T}$ is calculated as:
$$
\mathcal{L}_{\mathrm{inpaint}} = \mathcal{L}_{\mathrm{L1}}(F_{source}, \hat{F}_{source}) + \mathcal{L}_{\mathrm{LPIPS}}(F_{source}, \hat{F}_{source}),
$$
where $\mathcal{L}_{\mathrm{L1}}$ and $\mathcal{L}_{\mathrm{LPIPS}}$ are the reconstruction and perceptual losses, respectively.
Fig. 8. Structure of the Torso-Inpainting Restorer. We manually construct impaired inputs in training to build the network’s complementation ability.
During rendering, a fixed 15-pixel expansion is applied to the facial mask to obtain a robust background region $(1 - M - \delta) F_{source}$. The generated facial region is then smoothly merged with the background region, and the Torso-Inpainting Restorer repairs remaining gaps, ensuring the final frames are visually coherent.
# IV. EXPERIMENTS
# A. Experimental Settings
Dataset. To ensure a fair comparison, we use the same well-edited video sequences from [21], [24], [25], including English and French. The average length of these videos is approximately 8,843 frames, and each video is recorded at 25 FPS. Except for the video from AD-NeRF [21], which has a resolution of $450 \times 450$, all other videos have a resolution of $512 \times 512$, with the character centered.
Implementation Process. We adopt the same settings as previous NeRF-based work [21], [24], [25]. Specifically, we use a few minutes of video of a single subject, shot by a static camera, as training data. The framework saves $f_l$, $f_e$, and $(R, T)$ during preprocessing. During training, the model preloads these data and stores them in memory or on the GPU. In the inference stage, by inputting the audio feature $f_l$, the model can render the character's image and merge the newly generated face with the original image through the pre-saved mask area, ultimately achieving real-time output.
Comparison Baselines. For a fair comparison, we re-implement existing methods to conduct reconstruction and synchronization experiments, including 2D generation-based methods: Wav2Lip [43], VideoReTalking [84], DINet [9], TalkLip [10], IP-LAP [12], and 3D reconstruction-based methods: AD-NeRF [21], RAD-NeRF [55], GeneFace [24], ER-NeRF [25], TalkingGaussian [31], and GaussianTalker [69].
In the head reconstruction experiment, we input the original audio to reconstruct speaking head videos. Taking a subject named “May” as an example, we crop the last 553 frames as the test set for the reconstruction experiment and the corresponding audio as the input for inference. In the 2D generation-based methods, we use the officially provided pretrained models for inference, with video streams input at
TABLE I QUANTITATIVE RESULTS OF HEAD RECONSTRUCTION. WE ACHIEVE STATE-OF-THE-ART PERFORMANCE ON MOST METRICS. WE HIGHLIGHT BEST AND SECOND-BEST RESULTS.
TABLE II QUANTIFIED RESULTS OF PORTRAIT MODE. “PORTRAIT” REFERS TO THE USE OF THE PORTRAIT-SYNC GENERATOR. SYNCTALK++ OUTPERFORMS SYNCTALK ON ALL METRICS.
25 FPS and the corresponding audio, resulting in the respective outcomes after processing by these five methods. In the 3D reconstruction-based methods, since AD-NeRF [21], RAD-NeRF [55], ER-NeRF [25], TalkingGaussian [31], and GaussianTalker [69] do not provide pre-trained models for the corresponding subjects, we re-train the models for these subjects following the publicly available code. The dataset is divided in the same manner as for the aforementioned methods, and the test results are obtained.
In the synchronization experiment, we choose speeches from other people as the input audio. We test recent methods using the same test sequence as in the head reconstruction experiment, with audio inputs for ER-NeRF [25] using the same OOD audio. After obtaining the synthesized video sequences, we use the same evaluation code as Wav2Lip [43] for assessment, finally obtaining metrics on lip synchronization performance for different methods.
# B. Quantitative Evaluation
Full Reference Quality Assessment. In terms of image quality, we use full-reference metrics such as Peak Signal-to-Noise Ratio (PSNR), Learned Perceptual Image Patch Similarity (LPIPS) [85], Multi-Scale Structural Similarity (MS-SSIM), and Fréchet Inception Distance (FID) [86] as evaluation metrics.
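Among these, PSNR has a simple closed form; a minimal reference implementation for images scaled to [0, 1] is:

```python
import numpy as np

def psnr(ref, out, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images in [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 0.5)
out = ref + 0.1   # uniform error of 0.1 -> MSE = 0.01 -> PSNR = 20 dB
```

LPIPS, MS-SSIM, and FID, by contrast, require learned feature extractors or multi-scale filtering and are normally taken from standard library implementations.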
No Reference Quality Assessment. In high-PSNR images, texture details may not align with human visual perception [87]. For a more precise output definition and comparison, we use three no-reference methods: the Natural Image Quality Evaluator (NIQE) [88], the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [89], and the Blindly Assess Image Quality by Hyper Network (HyperIQA) [90].
TABLE III QUANTITATIVE RESULTS OF THE LIP SYNCHRONIZATION. WE USE TWO DIFFERENT AUDIO SAMPLES TO DRIVE THE SAME SUBJECT, THEN HIGHLIGHT BEST AND SECOND-BEST RESULTS.
TABLE IV RESULT OF DIFFERENT INITIALIZATION STRATEGIES ON 3D HEAD REPRESENTATION. WE EVALUATE THE IMPACT OF VARIOUS INITIALIZATION STRATEGIES ON FACIAL RECONSTRUCTION QUALITY, DEMONSTRATING THEIR EFFECTS ON SYNCHRONIZATION AND VISUAL FIDELITY.
Synchronization Assessment. For synchronization, we use landmark distance (LMD) to measure the synchronicity of facial movements, action units error (AUE) [91] to assess the accuracy of facial movements, and introduce Lip Sync Error Confidence (LSE-C), consistent with Wav2Lip [43], to evaluate the synchronization between lip movements and audio.
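A common formulation of the LMD metric, sketched here under the assumption of landmark arrays of shape (frames, landmarks, 2):

```python
import numpy as np

def lmd(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth
    2D facial landmarks, arrays of shape (frames, landmarks, 2)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

gt = np.zeros((2, 3, 2))
pred = gt + np.array([3.0, 4.0])   # every landmark off by a 3-4-5 offset
# lmd(pred, gt) -> 5.0
```

Lower values indicate tighter agreement between generated and reference facial motion.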
Efficiency Assessment. To evaluate the computational efficiency of our model, we measure both training time and inference speed. Training time reflects the total duration required for the model to converge on a given dataset. For real-time applicability, we assess inference speed in terms of frames per second (FPS) during video generation, where a higher FPS indicates better real-time performance, making the model more suitable for applications such as live streaming and video conferencing.
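FPS measurement reduces to wall-clock timing over repeated renders; a minimal sketch with a hypothetical render_frame callback (the callback and frame count are assumptions, not the paper's benchmark harness):

```python
import time

def measure_fps(render_frame, n_frames=100):
    """Average frames per second over n_frames calls to render_frame()."""
    t0 = time.perf_counter()
    for _ in range(n_frames):
        render_frame()
    elapsed = time.perf_counter() - t0
    return n_frames / elapsed

# e.g., a renderer sustaining > 25 FPS can serve a live 25 FPS video stream
fps = measure_fps(lambda: time.sleep(0.001), n_frames=50)
```

In practice one would also warm up the renderer and synchronize the GPU before timing, so that measured FPS reflects steady-state throughput.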
Evaluation Results. The evaluation results of the head reconstruction are shown in Tab. I. We compare the latest methods based on 2D generation and 3D reconstruction. It can be observed that our image quality is superior to other methods in all aspects. Because we maintain the subject’s identity well, we surpass 2D generation-based methods in image quality. Due to the synchronization of lips, expressions, and poses, we also outperform 3D reconstruction-based methods in image quality. Particularly in terms of the LPIPS metric, our method has a $65.67\%$ lower error compared to the previous state-of-the-art method, TalkingGaussian [31]. In terms of lip synchronization, our results surpass most methods, proving the effectiveness of our Audio-Visual Encoder. We also compare the two output modes of SyncTalk++, one processed through the Portrait-Sync Generator and one without, as shown in Tab. II. After processing through the Portrait-Sync Generator, hair details are restored and image quality is improved. Compared with SyncTalk, SyncTalk++ shows significantly better image quality, demonstrating the robustness of our introduction of Gaussian Splatting for rendering. We compare the latest SOTA methods driven by out-of-distribution (OOD) audio, and the results are shown in Tab. III. We introduce Lip Synchronization Error Distance (LSE-D) and Confidence (LSE-C) for lip-speech sync evaluation, aligning with [43]. Our method shows state-of-the-art lip synchronization, overcoming small-sample 3D reconstruction limitations by incorporating a pre-trained audio-visual encoder for lip modeling.
We also evaluate the training time and rendering speed. On an NVIDIA RTX 4090 GPU, our method requires only 1.5 hours to train for a new character and achieves 101 FPS at a resolution of $512 \times 512$, far exceeding the 25 FPS video input speed, enabling real-time video stream generation. Compared to SyncTalk [33], SyncTalk++ achieves a shorter training time and higher rendering speed.
Impact of Initialization Strategies on Canonical 3D Head Representation. To assess the effectiveness of our approach, we compare different initialization strategies for the canonical 3D head representation. As shown in Tab. IV, the $S\mathcal{H}, \alpha$ initialization achieves the best overall performance, leading to higher image quality and synchronization accuracy.
Compared to other methods, $S\mathcal{H}, \alpha$ results in lower LPIPS and LMD scores, indicating improved perceptual quality and facial alignment. This suggests that leveraging spherical harmonics $(S\mathcal{H})$ and opacity $(\alpha)$ attributes effectively enhances spatial consistency and feature learning. In contrast, random initialization leads to degraded performance, highlighting the importance of structured attribute conditioning.
Interestingly, using all attributes $(s, r, S\mathcal{H}, \alpha)$ does not yield the best results. This is likely because introducing too many attributes increases optimization complexity and potential redundancy, which can make it harder for the model to focus on the most critical features for synchronization and reconstruction. Given these results, we adopt $S\mathcal{H}, \alpha$ as our default initialization strategy, as it offers the best trade-off between visual fidelity and synchronization accuracy.
TABLE V USER STUDY. RATING IS ON A SCALE OF 1-5; THE HIGHER, THE BETTER. THE TERM “EXP-SYNC ACCURACY” IS AN ABBREVIATION FOR “EXPRESSION-SYNC ACCURACY”. WE HIGHLIGHT BEST AND SECOND-BEST RESULTS.
# C. Qualitative Evaluation
Evaluation Results. To more intuitively evaluate image quality, we display a comparison between our method and other methods in Fig. 9. In this figure, it can be observed that SyncTalk++ demonstrates sharper and more accurate facial details. Compared to Wav2Lip [43], our method better preserves the subject’s identity while offering higher fidelity and resolution. Against IP-LAP [12], our method excels in lip shape synchronization, primarily due to the audio-visual consistency brought by the audio-visual encoder. Compared to GeneFace [24], our method can accurately reproduce actions such as blinking and eyebrow-raising through expression sync. In contrast to ER-NeRF [25], our method avoids the separation between the head and body through the Pose-Sync Stabilizer and generates more accurate lip shapes. Our method achieves the best overall visual effect; we recommend watching the supplementary video for comparison.
To comprehensively evaluate the method’s performance in real-world scenarios, as shown in Fig. 10, we present a qualitative comparison of lip-sync effects driven by in-the-wild audio. Wav2Lip [43], while producing relatively realistic facial animations, exhibits significant discrepancies in lip-audio synchronization, such as misalignment during the pronunciation of “science.” GeneFace [24] shows some improvement, but synchronization remains unnatural on key syllables. ER-NeRF [25] enhances lip-sync performance; however, during the pronunciation of “make,” the lip movements do not fully match the audio. TalkingGaussian [31] produces realistic results with detailed facial handling, but lip movements still show discrepancies: during “progress,” lip-audio synchronization is poor, with noticeable lag. GaussianTalker [69] offers more consistent lip-sync but shows rigidity during fast syllable transitions and struggles with complex syllables, resulting in less natural lip movements. In contrast, our method generates superior lip-sync effects driven by in-the-wild audio, demonstrating higher reliability and naturalness in both coherence and detail accuracy. This indicates that our method excels at capturing and reproducing complex lip movements in in-the-wild audio, enhancing lip-sync quality and achieving optimal visual effects.
Fig. 9. Qualitative comparison of facial synthesis by different methods. Our method has the best visual effect on lip movements and facial expressions without the problem of separation of head and torso. Please zoom in for better visualization.
Using the OOD Audio Expression Generator, we can generate facial expressions from the blendshape coefficients produced by EmoTalk. As shown in Fig. 11, by using different blendshapes, we can enable the character to display different expressions. Our method can effectively generate facial expressions continuously, consistently maintaining the character’s identity without discontinuity issues between frames.
By incorporating the Semantic Weighting module, we obtain a more stable head tracker that enhances the stability of head poses. This improvement results in higher-quality reconstructions during training and significantly enhances the visual coherence and realism of the generated videos. We compare our results with TalkingGaussian [31] and SyncTalk [33], finding that our more stable tracker exhibits better visual quality, as shown in Fig. 12.
User Study. To assess the perceptual quality of our method, we conduct a comprehensive user study comparing SyncTalk++ with state-of-the-art approaches. We curate a dataset of 65 video clips, each lasting over 10 seconds, encompassing various head poses, facial expressions, and lip movements. Each method is represented by five clips. A total of 42 participants evaluate the videos, with an average completion time of 24 minutes per questionnaire. The study achieves a high reliability score, with a standardized Cronbach’s $\alpha$ coefficient of 0.96, ensuring the consistency of responses. The questionnaire follows the Mean Opinion Score (MOS) protocol, where participants rate the generated videos across five key aspects: (1) Lip-sync Accuracy, (2) Expression-sync Accuracy, (3) Pose-sync Accuracy, (4) Image Quality, and (5) Video Realness.
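For reference, Cronbach's $\alpha$ for a (respondents × items) rating matrix can be computed as below; the toy ratings are illustrative, not the study's responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents, items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# perfectly consistent raters yield alpha = 1.0
ratings = np.array([[1, 1], [2, 2], [3, 3]])
```

Values above roughly 0.9 are conventionally read as excellent internal consistency, which is why an $\alpha$ of 0.96 supports the reliability of the questionnaire.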
As shown in Table V, SyncTalk++ consistently achieves the highest scores across all five metrics. Specifically, SyncTalk++ attains a Lip-sync Accuracy score of 4.309, outperforming the second-best SyncTalk by a margin of 0.178. For Expression-sync Accuracy, our method scores 4.154, exceeding IP-LAP and GaussianTalker. Additionally, SyncTalk++ achieves the best Pose-sync Accuracy at 4.371, a notable 0.268 improvement over IP-LAP. In terms of visual quality, SyncTalk++ achieves an Image Quality score of 4.297, surpassing the second-best SyncTalk. Furthermore, it leads in Video Realness, scoring 4.229, which is $7.7\%$ higher than TalkingGaussian. In general, our approach significantly improves lip synchronization, expression synchronization, and pose alignment, while also improving image fidelity and video realism.
Fig. 10. Qualitative comparison of facial synthesis driven by in-the-wild audio. Our method demonstrates the most accurate lip movement while maintaining the subject’s identity well.
Fig. 11. Expression generation using OOD Audio Expression Generator. For different expression coefficients, our method can achieve highly accurate eyebrow and eye generation.
Fig. 12. Comparison of different trackers. The SyncTalk and TalkingGaussian trackers cause obvious facial jitter and artifacts for long-haired characters, whereas SyncTalk++ improves significantly.
# D. Ablation Study
We conduct an ablation study to systematically evaluate the contributions of different components in our model to the overall performance. To this end, we select three core metrics for evaluation: Peak Signal-to-Noise Ratio (PSNR), Learned Perceptual Image Patch Similarity (LPIPS), and Landmark Distance (LMD). These metrics respectively measure image reconstruction quality, perceptual consistency, and the accuracy of lip synchronization. For testing, we choose a subject named “May,” and the results are presented in Table VI.
First, the Audio-Visual Encoder plays a critical role in the model, providing the primary lip-sync information. When this module is replaced, we observe a significant deterioration in all three metrics, particularly a $19.7\%$ increase in the LMD error. This increase clearly indicates a decline in lip motion synchronization, further validating the importance of our Audio-Visual Encoder in extracting accurate audio features. This result underscores the ability of the Audio-Visual Encoder to capture fine lip movements synchronized with speech, which is crucial for generating realistic talking heads.
Next, we examine the impact of the Facial Animation Capture module, which captures facial expressions by using facial features. When this module is replaced with the AU units blink module, the metrics also worsen: PSNR decreases to 37.264, LPIPS rises to 0.0249, and LMD increases to 3.058. This suggests that the Facial Animation Capture module not only plays a vital role in lip synchronization but is also crucial for maintaining the naturalness and coherence of facial expressions.
The ablation of the Head-Sync Stabilizer further reveals its key role in reducing head pose jitter and preventing the separation of the head from the torso. Without this module, all metrics significantly decline: PSNR decreases to 29.193, LPIPS increases to 0.0749, and LMD rises to 3.264. This phenomenon indicates that the Head-Sync Stabilizer is essential for ensuring the stability of head movements and the overall consistency of the image.
The Portrait-Sync Generator focuses on restoring facial details. When this module is removed, noticeable segmentation boundaries appear in the generated images, particularly in the hair region. The ablation of the Semantic Weighting module reveals its importance in enhancing video stability. Removing this module results in a decline in all metrics, indicating its contribution to maintaining head pose stability in dynamic scenes.
TABLE VI ABLATION STUDY FOR OUR COMPONENTS. WE SHOW THE PSNR, LPIPS, AND LMD IN DIFFERENT CASES.
TABLE VII QUANTITATIVE RESULTS OF THE TORSO RESTORER. OUR TORSO RESTORER SIGNIFICANTLY IMPROVES IMAGE QUALITY AT OOD AUDIO SETTINGS.
In addition, we conduct a dedicated ablation study on the OOD Audio Torso Restorer. When using OOD audio inputs during inference, the Torso Restorer effectively closes pixel gaps between the generated head and the original torso, eliminating unnatural seams in the video. As shown in Tab. VII, we evaluate three no-reference image quality metrics and observe a significant improvement after applying the Torso Restorer. Furthermore, Fig. 13 demonstrates that using the Torso Restorer markedly enhances visual quality and maintains coherence in the transition area between the face and torso.
# V. ETHICAL CONSIDERATION
Our SyncTalk and SyncTalk++ can synthesize high-quality, high-fidelity, audio-motion synchronized, visually indistinguishable talking-head videos. They are expected to contribute to developing fields such as human-computer interaction, artistic creation, digital agents, and digital twins. However, we must be aware that this type of deepfake talking-head video synthesis technology can be exploited for harmful purposes. In light of this, we have put forward a series of suggestions to try to mitigate the abuse of deepfake technology.
Improve deepfake detection algorithms. In recent years, there has been considerable work on detecting tampered videos, such as face swapping and reenactment [92]–[94]. However, distinguishing high-quality synthetic portraits based on recent NeRF and Gaussian Splatting methods remains challenging. We will share our work with the deepfake detection community, hoping it can help them develop more robust algorithms. Additionally, we attempt to distinguish the authenticity of videos based on rendering defects of NeRF and Gaussian Splatting. For example, Gaussian Splatting-rendered novel-angle talking heads may show some unreasonable pixel points due to the incomplete convergence of 3D Gaussians.
Fig. 13. Ablation study of the OOD Audio Torso Restorer. Without the Torso Restorer, there are obvious missing-pixel problems, and our method can repair them well.
Protect real talking-head videos. Since current methods based on NeRF and Gaussian Splatting strongly rely on real training videos, protecting them helps reduce the misuse of technology. For example, video and social media sites should take measures to prevent unauthorized video downloads or add digital watermarks to the portrait parts to interfere with training.
Transparency and Consent. In scenarios involving generating synthetic images or videos of individuals, explicit consent must be obtained. This includes informing participants about the nature of the technology, its capabilities, and the specific ways in which their likeness will be used. Transparency in the use of synthetic media is not just a legal obligation but a moral imperative to maintain trust and integrity in digital content.
Restrict the application of deepfake technology. The public should be made aware of the potential dangers of deepfake technology and urged to treat it cautiously. Additionally, we suggest establishing relevant laws to regulate the use of deepfake technology.

Abstract. Achieving high synchronization in the synthesis of realistic, speech-driven talking head videos presents a significant challenge. A lifelike talking head requires synchronized coordination of subject identity, lip movements, facial expressions, and head poses. The absence of these synchronizations is a fundamental flaw, leading to unrealistic results. To address the critical issue of synchronization, identified as the “devil” in creating realistic talking heads, we introduce SyncTalk++, which features a Dynamic Portrait Renderer with Gaussian Splatting to ensure consistent subject identity preservation and a Face-Sync Controller that aligns lip movements with speech while innovatively using a 3D facial blendshape model to reconstruct accurate facial expressions. To ensure natural head movements, we propose a Head-Sync Stabilizer, which optimizes head poses for greater stability. Additionally, SyncTalk++ enhances robustness to out-of-distribution (OOD) audio by incorporating an Expression Generator and a Torso Restorer, which generate speech-matched facial expressions and seamless torso regions. Our approach maintains consistency and continuity in visual details across frames and significantly improves rendering speed and quality, achieving up to 101 frames per second. Extensive experiments and user studies demonstrate that SyncTalk++ outperforms state-of-the-art methods in synchronization and realism. We recommend watching the supplementary video: https://ziqiaopeng.github.io/synctalk++.
"cs.CV"
] |
# 1. Introduction.
Recently, computer scientists, cognitive scientists and others have systematically tested Large Language Models (LLMs) such as ChatGPT on a wide range of different reasoning skills, sometimes with truly impressive results. A partial list of examples includes logical reasoning (e.g. Liu et al. 2023, Bang et al. 2023), mathematical reasoning (e.g. Frieder et al. 2023, Wardat et al. 2023), physical reasoning (e.g. Lehnert 2023, West 2023, Zhang et al. 2025), psychological reasoning (e.g. Hagendorff 2023, Holterman and van Deemter 2023), medical reasoning (e.g. Bhayana et al. 2023) and several other types; this list may well grow exponentially fast in the near future (see also e.g. Bang et al. 2023, Bubeck et al. 2023, Huang and Chang 2022, Mahowald et al. 2023). Of the currently best-performing LLMs, GPT-4 has been tested most extensively; two other popular LLMs that performed best on deep-reasoning tasks in physics (Zhang et al. 2025) are the open-source DeepSeek-R1 (from the company DeepSeek, cf. Liu et al. 2024a, Guo et al. 2025) and Gemini 2.0 Flash Thinking (from Google DeepMind, cf. Anil et al. 2023).
Fueled notably by the recent conviction that ‘causal AI’ might be a promising paradigm for next-generation systems (Peters et al. 2017, Pearl 2019), the computer science community has also started testing the causal reasoning capacities of LLMs (e.g. Gao et al. 2023, Jin et al. 2023, Kiciman et al. 2023, Tu et al. 2023, Zečević et al. 2023, Liu et al. 2024, Wang 2024). Causal reasoning is often considered a necessary ingredient of artificial general intelligence (AGI). As computer scientist and philosopher Judea Pearl puts it: “Machines’ lack of understanding of causal relations is perhaps the biggest roadblock to giving them human-level intelligence” (Pearl 2019); a verdict one also finds in a philosophical analysis of deep learning (Buckner 2024, p 74). Now, causal reasoning presupposes an understanding of what causes (and effects) are; and causation is surely among philosophy’s all-time favorite topics. Yet, the precise definition of cause is highly debated in philosophy. Many, and perhaps most, believe that an overarching definition of cause is illusory. L. A. Paul and Ned Hall phrase it this way, at the end of their reference work on causation (2013, p. 249): “After surveying the literature in some depth, we conclude that, as yet, there is no reasonably successful reduction of the causal relation. And correspondingly, there is no reasonably successful conceptual analysis of a philosophical causal concept. […] Barring a fundamental change in approach, the prospects of a relatively simple, elegant and intuitively attractive, unified theory of causation, whether ontological reduction or conceptual analysis, are dim”.
In this context, one may wonder whether AI research and philosophical expertise on causation might fruitfully interact. Two relevant questions, the first ambitious and yet speculative, the second more topical, are: Q1) Could present or future AI help in constructing an overarching concept of cause?; and Q2) What is the status of the causal reasoning skills of a given AI, say DeepSeek or ChatGPT, notably in subtle cases – considered the preserve of philosophy? A more pragmatic variant of Q2) is Q3): How to develop a test for causal reasoning in AI based on philosophical expertise? Here we will focus on Q3); but we hope to show that advanced LLMs have made Q1) a much more pressing question than often assumed. Thus, our first main objective is to propose a testing method that is based on scholarship in neuron diagrams, widely used as heuristic tools in causation research. An ancillary objective is to illustrate the method on a few chatbots, namely ChatGPT, Gemini 2.0 Flash, and DeepSeek-R1 (which do best in certain advanced tests, cf. Zhang et al. 2025). An existing test for abstract causal reasoning developed by computer scientists assesses whether an LLM can infer causation from a list of correlations (Jin et al. 2023). Here we propose a test that is simpler in application and that directly probes whether the LLM can identify causes in various abstract scenarios, notably in subtle cases involving redundant causation, causation by omission, violation of transitivity, etc. In view of the likely rise of ever more powerful AIs, in particular ‘causal AI’ (Pearl 2019, see also Hartnett 2018, Zečević et al. 2023), it seems important to have a variety of tests addressing various causal reasoning skills. In this context, philosophers have recently proposed a roadmap for developing tests for scientific understanding in LLMs (Barman et al. 2024). For a critical philosophical assessment of the capacities of GPT-4 in general reasoning, see (Arkoudas 2023, 2023a, Floridi 2023).
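To make the subtlety of such cases concrete, a neuron diagram can be encoded as boolean structural equations. The sketch below shows a standard early-preemption setup (an illustrative textbook wiring, not one of the specific diagrams discussed in this article), in which the effect fails to counterfactually depend on its intuitive cause:

```python
def run(c_fires, b_fires):
    """Early preemption: C causes E; backup B would have caused E otherwise."""
    d = c_fires                      # C excites intermediate neuron D
    f = b_fires and not c_fires      # C inhibits the backup path through F
    e = d or f                       # E fires if either path completes
    return e

# E fires whether or not C fires, so E does not counterfactually depend on C,
# yet intuitively C (not the backup B) is the cause of E when both fire.
assert run(True, True) is True
assert run(False, True) is True
```

This is exactly the kind of scenario where a simple counterfactual definition of cause misfires, and where an LLM's answer to "What causes E?" becomes a probing test item.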
As said, even if we focus on Q3), Q1) came in through the back door. The surprising causal reasoning skills of certain LLMs forced us to propose a definition of cause, one that is more encompassing than the definitions proposed in the literature. To construct a more general definition applicable to neuron diagrams, then, is the second objective of this article; it led, in the end, to our main theoretical result. Such a definition was necessary to verify the correctness of the LLM’s answers to questions of the type: “What is the cause of event E in neuron diagram D?” Here we propose a definition of cause that is in agreement with intuition (and with the verdicts that can be found in the literature) for all the neuron diagrams we studied, which is a sizeable part of the diagrams discussed in the reference (Paul and Hall 2013). To the best of our knowledge, a definition with such a broad domain of validity has not yet been constructed. Hence, the interaction with advanced AI was at the origin of a new philosophical result, in an area that draws vast theoretical and practical attention. Since it is reasonable to conjecture that the capacities of AIs will increase in the near future, we submit that a constructive interaction between human and artificial expertise will soon become a reality also in theoretical philosophical research, as already hinted at by the present work.
A word about our background assumptions. We will not delve here into the question of whether LLMs can ‘really’ reason causally. Our research resonates best with a pragmatic conception of intelligence, centered on problem solving, which has become popular in AI since Turing (cf. Norvig and Russell 2015, Bringsjord and Govindarajulu 2018). In this perspective, the distinction between ‘mastering a type of reasoning’ and ‘emulating/simulating’ this type of reasoning is of secondary importance: what counts is the capacity to solve problems and answer questions – for instance of the type ‘what causes X?’ In sum, we remain here largely agnostic about whether the skill to systematically answer questions and solve problems in a certain cognitive field (at a defined level of proficiency) is indicative of ‘really’ mastering the corresponding skill (at that level of proficiency).
While our testing method could be used to test any LLM (and other AIs), we will illustrate the proposed method on advanced versions of ChatGPT, DeepSeek and Gemini. These experiments mainly aim at illustrating the principle and feasibility of the testing method, and at suggesting lines of further research. While our preliminary test results already show instances of impressive causal reasoning skills of these LLMs, they were not obtained by large-scale statistical experiments and quantitative methods as they are deployed in computer science, cognitive science, psychology, etc. to come to, ideally, objective conclusions (for an example of a comprehensive benchmark test, BIG-bench, see Srivastava et al. 2022). Yet, our test could be further developed for such a systematic inquiry3, and possibly for assessing and comparing the depth-of-reasoning of various AIs, as we suggest in Section 4. For our own proof-of-concept tests, we first used ChatGPT based on GPT-3.5 (in July 2023) and GPT-4 (in July 2023 and March 2024), and finally DeepSeek-R1 (DeepThink), Gemini 2.0 Flash (Thinking Experimental), and ChatGPT o3-mini (in February 2025). In the following we refer to the first two versions of ChatGPT as ChatGPT(3) and ChatGPT(4), respectively.
The article is organised as follows. In Section 2 we first give a succinct overview of testing methods and results obtained in the computer science community. Then we describe our test, after an explanation of how neuron diagrams work. We show the results of a small-scale experiment with ChatGPT(4) (and more results in Appendix 1); comparison with the other chatbots is given in Appendix 2. In Section 3 we propose a definition of cause that allows one to derive the intuitive causes for the ‘classic’ neuron diagrams used in our test, and that can therefore serve as the ‘gold standard’ (as it is called in computer science) to assess the answers by LLMs. Such a definition does not apply to literally all causal scenarios and corresponding diagrams studied in the literature, but is more encompassing than other proposed definitions and should therefore offer a solid basis for further synthesis. Section 4 is devoted to a discussion of the test results and to lines of further (interdisciplinary) research. Section 5 concludes.
# 2. Test for abstract causal reasoning, and results obtained by ChatGPT and other LLMs
Before turning to our test, let us briefly comment on related work by computer scientists, who have performed several tests on causal reasoning in LLMs and ChatGPT (Liu et al. 2024, Gao et al. 2023, Jin et al. 2023, Kiciman et al. 2023, Tu et al. 2023, Zečević et al. 2023, Wang 2024). Computer scientists’ assessments of ChatGPT’s capabilities as ‘causal reasoner’ vary greatly (even assuming that the test results have statistical significance): from enthusiastic (Kiciman et al. 2023) to much more pessimistic (Jin et al. 2023, Zečević et al. 2023). An obvious reason for this discrepancy is that many different cognitive skills have been classified as forms of causal reasoning; and different types of tests measure different skills4. Among the enthusiasts, Kiciman et al. (2023) conclude: “We envision LLMs to be used alongside existing causal methods, as a proxy for human domain knowledge and to reduce human effort in setting up a causal analysis, one of the biggest impediments to the widespread adoption of causal methods.” Gao et al. (2023), who claim to have conducted the first comprehensive evaluation of ChatGPT’s causal reasoning capabilities, conclude that ChatGPT is not a good ‘causal reasoner’ (cannot reliably identify causes in concrete situations), but a good ‘causal explainer’ (can reliably come up with plausible explanations of why a causal relation exists in a concrete situation), while having a serious hallucination problem with causal reasoning5. Zečević et al. (2023) come to the conclusion that LLMs are only ‘causal parrots’ that cannot reason causally but only textually reproduce causal links they have learned during training.
A key demarcation criterion in these different types of tests is whether they focus on concrete causal situations (described in texts), i.e. referring to particular real-world facts, events, scenarios, or on abstract scenarios, i.e. described in a formalised manner using variables. The majority of the existing tests are of the first type. Jin et al. (2023) have proposed the first benchmark dataset to test abstract causal inference skills of LLMs, using, notably, the work on causal discovery by Spirtes, Glymour and Scheines (2000). Their test aims at assessing whether an LLM can extract, from the list of correlations that exist between variables, the correct causal graph, for the time being limited to a maximum of six nodes (i.e., variables). Their test results for this particular causal skill are sobering, and show that LLMs, including ChatGPT(4), do not perform better than random guessing. (Even after fine-tuning these models fail to generalize: they fail in ‘out-of-distribution’ settings, i.e. in queries where variable names and textual expressions are not similar to those seen in the training set.)
The complementary testing method we describe now is a test of abstract causal reasoning, directly inspired by research in the philosophy of causation, so focusing on scenarios with a certain degree of complexity and/or subtlety. The principle of the method is straightforward, and lends itself to systematization. The method also could, in principle, be enriched (cf. Section 4) and used to develop a large set of testing scenarios (in a benchmark spirit), since large neuron diagrams can easily be generated by computer code (so one can easily go beyond the 6-node limit of Jin et al. 2023).
In summary, the test we propose assesses whether the AI under scrutiny is able, when presented with the textual version of a neuron diagram, to give the correct answer to questions of type Q-CAUSE: “what is the cause of event E?” In our tests we added one more question, namely “does event E occur?”, where E typically is the firing of the last neuron in a diagram (cf. examples below). Clearly, being able to answer Q-CAUSE with sufficient proficiency (in a statistically meaningful ensemble of causal situations) can be considered a measure of (a certain type of) abstract causal reasoning.
In order to answer questions such as Q-CAUSE in subtle cases, when ‘automatic’ or ‘implicit’ intuition is uncertain, one would like to rely on an explicit definition of cause. One prominent model is the counterfactual interpretation of cause, which is based on the following sufficient condition (Lewis 1973): (Actual) event C is a cause of (actual) event E if the following holds: if C were not to occur, E would not occur. We will call this in the following the ‘simple counterfactual condition/rule’. Starting from this condition a great number of counterfactual definitions have been proposed, many of them discussed at length in (Paul and Hall 2013). The limits of such definitions or models have most efficiently been studied by so-called neuron diagrams, popularised by David Lewis; presumably the largest collection of those can be found in (Paul and Hall 2013), which we use here to extract our sample basis. Neuron diagrams allow one to represent a wide variety of causal situations and problems discussed in the literature, including causal redundancy (‘early/late preemption’, overdetermination), causal omissions, transitivity, etc. Let us briefly explain how they work.
A representative diagram is given in Fig. 1, reproduced from (Paul and Hall 2013).
Fig. 1. Typical neuron diagram (reproduced from Fig. 1, Paul and Hall 2013, p. ix). ‘Firing’ or ‘on’ neurons are shaded.
In the diagram of Fig. 1 five subsystems, represented by five neurons, are in causal interaction in the following way. Neuron C emits a stimulating signal (represented by an arrow) towards neuron D, which subsequently fires; D likewise sends a stimulating signal to E (all firing/‘on’ neurons are shaded). A stimulates B but B is inhibited from firing due to the inhibitory signal from C, represented by a line with a black dot6. The temporal order goes from left to right: C and A fire at t1, B and D react at t2, E reacts at t3, where t1 < t2 < t3. Note that the (causal) connections, so the functioning or meaning of the diagram, can be described without using the word ‘cause’. (When no confusion can arise, we will also indicate the firing of neuron X with the same symbol X, which therefore can stand for the neuron or the event. If needed for clarity we will use ‘X-neuron’.) Importantly, it is on ‘classic’ diagrams defined by the rules just given that we will base our test7. In Paul and Hall 2013 other, sometimes more complex, types of diagrams are discussed, which need additional or different rules of functioning.
Actually, Fig. 1 represents one of the most discussed cases in the recent analytical literature, called ‘early preemption’. Looking at the diagram, the intuitive verdict is that the cause(s) of E’s firing are the firing of C and/or D. Or if one wishes to be more precise, one could state that C is the ‘root’ cause at t1 and D the ‘proximate’ cause at t2. Intuitively the firing of A would not be identified as a cause, because its action is blocked by C. But this intuitive identification of C as the cause of E conflicts with the simple counterfactual condition just given: it seems clear that if C would not happen, E would still happen, due to the back-up neuron A.
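The failure of the simple counterfactual rule for early preemption can be made concrete with a small simulation. The sketch below is our own illustrative toy model (the representation and function names are ours, not drawn from the literature): a neuron fires iff it receives at least one stimulating signal and no inhibitory signal, and a counterfactual is modelled by forcing a neuron off.

```python
# Toy simulation of a 'classic' neuron diagram (illustrative only).
# diagram maps each non-initial neuron, in temporal order, to
# (stimulating parents, inhibiting parents); force_off models a
# counterfactual in which a given neuron does not fire.
def simulate(diagram, initial_on, force_off=frozenset()):
    state = {n: n not in force_off for n in initial_on}
    for neuron, (stim, inhib) in diagram.items():
        fires = (any(state.get(s) for s in stim)
                 and not any(state.get(i) for i in inhib))
        state[neuron] = fires and neuron not in force_off
    return state

# Fig. 1 (early preemption): C stimulates D, D stimulates E,
# A stimulates B unless C fires (C inhibits B), B stimulates E.
fig1 = {"D": (["C"], []), "B": (["A"], ["C"]), "E": (["D", "B"], [])}

factual = simulate(fig1, {"C", "A"})
no_C = simulate(fig1, {"C", "A"}, force_off={"C"})
no_D = simulate(fig1, {"C", "A"}, force_off={"D"})

print(factual["E"])  # True: E fires in the factual scenario
print(no_D["E"])     # False: the simple rule identifies D+ as a cause
print(no_C["E"])     # True: without C, the back-up path via A and B still
                     # makes E fire, so the simple rule misses C+ as a cause
```

The last line exhibits in code exactly the conflict described above: the intuitive cause C is invisible to the simple counterfactual condition because of the back-up neuron A.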
Conflicts with intuition such as the above have been detected for a wide variety of diagrams, which are hotly debated. No encompassing definition of cause has been identified in the literature – one that works for all (classic or other) neuron diagrams, and ideally for all causal situations; there is a widespread consensus that such a universal definition cannot be constructed. At the same time, human intuition converges for very many causal situations – indeed, historically these converging intuitions have been the standard by which the neuron diagrams have been assessed, and candidates for an encompassing definition disqualified.
For our purposes we will not go into any detail on the vast literature devoted to defining cause in neuron diagrams. What matters to us is that the expertise gathered on causation can be put to use in a pragmatic way, namely by using the diagrams to test the causal reasoning capacity (or its simulation/emulation) of a given AI. In the case of LLMs such as ChatGPT this is straightforward, since the diagrams can be textually transcribed. The full series of 25 diagrams with which we tested ChatGPT is given in Table 1 in Appendix 1. These include a part of the cases discussed in (Paul and Hall 2013), as well as several variations of these diagrams; in (Paul and Hall 2013) about 50 diagrams are discussed. We selected the diagrams of Table 1 essentially for the following reasons: 1) they are classic in the sense we defined, i.e. based on the same rules of functioning as those of Fig. 1, and can (therefore) easily be transcribed; 2) the ensemble is, roughly, representative of the degree of complexity of the collection in (Paul and Hall 2013); 3) they allowed for constructing variations that were helpful for identifying and verifying our new definition DEF-1 (cf. Section 3). Again, we have no pretence at completeness8.
A sub-ensemble of detailed results obtained on ChatGPT(4) is given in Table 2 below, which also illustrates how we did the transcription (column 2). As an example, the transcription of diagram 1 in Tables 1-4 and Fig. 1, and the questions we ask, are as follows: “Suppose time t1 is earlier than time t2, which is earlier than time t3. If C would occur at t1, D would occur at t2. If D would occur at t2, E would occur at t3. If A would occur at t1, B would occur at t2, unless C would occur at t1. If B would occur at t2, E would occur at t3. Suppose C and A occur at t1. Does E occur at t3? What is/are the cause(s) of E’s occurring or not occurring?”
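Transcriptions of this kind can be produced mechanically from a diagram specification, which is what makes the test easy to scale (cf. the remark in Section 2 that large diagrams can be generated by computer code). The sketch below is our own hypothetical illustration, with our own representation; the exact sentence ordering may differ slightly from the hand-written transcription above.

```python
# Sketch: generate the textual transcription of a 'classic' neuron diagram.
# diagram maps each non-initial neuron to (stimulating parents, inhibiting
# parents); times maps each neuron to its time label.
def transcribe(diagram, initial_on, times, last="E"):
    lines = ["Suppose time t1 is earlier than time t2, "
             "which is earlier than time t3."]
    for neuron, (stim, inhib) in diagram.items():
        for s in stim:
            line = (f"If {s} would occur at {times[s]}, "
                    f"{neuron} would occur at {times[neuron]}")
            if inhib:
                line += ", unless " + " and ".join(
                    f"{i} would occur at {times[i]}" for i in inhib)
            lines.append(line + ".")
    lines.append("Suppose " + " and ".join(sorted(initial_on))
                 + " occur at t1.")
    lines.append(f"Does {last} occur at {times[last]}? What is/are the "
                 f"cause(s) of {last}'s occurring or not occurring?")
    return " ".join(lines)

# Diagram 1 / Fig. 1 (early preemption), as specified in the text.
fig1 = {"D": (["C"], []), "B": (["A"], ["C"]), "E": (["D", "B"], [])}
times = {"C": "t1", "A": "t1", "D": "t2", "B": "t2", "E": "t3"}
print(transcribe(fig1, {"C", "A"}, times))
```

A generator of this shape, combined with random diagram construction, would allow building a large benchmark-style ensemble of prompts beyond the 25 hand-prepared diagrams.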
So we always ask essentially the same question, namely whether the last neuron in the diagram fires or not, and what the causes of the firing or not firing are. (In order to guarantee a certain stability and representativeness, the experiments ran over a period of several months in 2023 and again at the beginning of 2024, first with ChatGPT(3) then with ChatGPT(4), on which we focus here. The answers shown in the tables were obtained in one single run (lasting less than an hour, and without feedback to ChatGPT), and are relatively stable over time: the replies and therefore the small-scale statistics of correct/incorrect answers did not change much. In sum, the results shown here should be representative enough for our philosophical analysis.)
Table 2 (and subsequent tables) also shows the ‘correct answers’ in column 3 and the answers given by ChatGPT(4) in column 4. Let us be clear about how we define ‘correct answers’ in the tables. Correct answers are (i) whenever the diagram is discussed in (Paul and Hall 2013), those that are considered the intuitive answers by these authors; (ii) when the diagram is not discussed there, the intuitive answers according to our personal judgement; (iii) in all cases, the causes identified by DEF-1 (cf. next section). In sum, we believe that for all, or almost all9, answers intuitions can converge, at least with a minimal training. Most importantly, the causes indicated in the tables can be derived with one definition, DEF-1 below, to which we turn in the next section.
Note that in unambiguous cases we do not always indicate all causes in the tables. For instance, for diagram 5 we only indicate as causes of E’s occurring: A+(t1); D+(t2); F–(t3). Here A+(t1) stands for ‘the firing/occurrence of A at time t1’; F–(t3) for ‘the non-firing/non-occurrence of F at t3’; etc. The intermediate causes between A+(t1) and E+(t4), namely A1+(t2) and A2+(t3), are obvious once A+(t1) is identified as a root or initial cause (we justify this point in the next section). The corresponding intermediate neurons A1 and A2 are not labelled in the diagram (but of course mentioned in the transcription); a similar simplification is made in other diagrams. Thus, we only indicate the most relevant/‘difficult’ causes, indeed those that are discussed in the literature, typically those occurring at t1. In diagram 5 for instance, representing a case of double prevention, the most relevant feature is that usually only A+(t1) is counted as a root cause, not C+(t1), which seems to violate transitivity (Paul and Hall 2013, p. 224).
Indeed, as we will corroborate in the next section, the initial causes of the last event in the diagrams are often the least easy to identify. As follows from intuition and DEF-1 below, proximate causes of a given effect, i.e. causes that are only one step earlier in time, are always (comparatively easily) identified by the simple counterfactual rule mentioned above. For instance, in Fig. 1, this rule immediately allows one to find that D+ (the firing of neuron D) is a cause at t2 of E+ at t3, and that B– (the non-firing of B) is not a cause of E+. Using the same counterfactual rule one identifies in diagram 5 (Table 2) A2+ and F– as proximate causes of E+, D+ of F–, and C+ of D+ and B+. So proximate causes are easily identified by the simple counterfactual rule.
9 Note that for our test it is not even necessary that strictly all causes are exactly defined. Some causal judgements might remain debated; the important requirement is to have a reasonable, not necessarily perfect, standard to compare the AI to. Note also that it is relatively easy to identify wrong answers.
Table 2. ChatGPT(4)’s answers (column 4) to causal questions based on neuron diagrams (see full table in Appendix 1). Column 2 gives the transcriptions and questions and column 3 the correct answers (here C+(t1) stands for ‘the firing/occurrence of C at time t1’; etc.). In the last column, incorrect parts are underlined. In <BRACKETS> our assessment of the (in)correctness of the answer as a whole. All diagrams except 2 are discussed in (Paul and Hall 2013) (PH in the table). In diagram 10, the last neuron has a double border, meaning that it only fires upon reception of at least two stimulating signals. In diagram 5, the intermediate neurons A1, A2 between A and E are not labelled for simplicity; the same simplification is made in other diagrams.
The last column of Table 2 indicates in brackets our verdict ‘correct’ or ‘incorrect’, as well as which part of ChatGPT(4)’s answer is incorrect, by underlining it. Note that in Tables 1 and 2 our evaluation <(IN)CORRECT> refers to the question as a whole, in particular the identification of causes. The diagrams selected in this table are particularly interesting, since they are all, except one, discussed in detail in (Paul and Hall 2013), as indicated in the table. The proportion of correct answers in Table 2 is quite large (6/9), larger than in the full ensemble of Table 1, where it is 13/25 (as again shown in the table).
In some detail, the results we obtained were the following. For all diagrams except 5 and 18, ChatGPT(4) correctly answered the ‘yes/no’-question whether the last neuron fires or not, a much better result than obtained with ChatGPT(3), which came close to a random-guess result (50% correct if only yes/no can be answered). For the essential part of the test, focusing on finding causes, the results were less univocal. As shown in Table 1, we judged that ChatGPT(4) correctly identified the causes for 13 of the 25 submitted diagrams (for two more diagrams the answer was partly correct). ChatGPT(3) found the correct causes for only one diagram (20). This significant increase in capability due to the parametric upscaling in GPT-4 is generally seen in tests; with further upscaling of the LLM its scores might well improve.
We will comment on these results in Section 4, but since the sample is small, we should be cautious in drawing conclusions. Let us, for now, only highlight three observations. First, the tables illustrate that Q-CAUSE works as a prompt to elicit answers and causal reasoning, or its emulation, from ChatGPT. ChatGPT(4)’s answers always seem to make perfect sense, at least on the surface. But identifying causes is, in principle, an all-or-nothing task: overlooking causes can have dramatic consequences – as any health care worker, engineer, or detective can attest. Therefore, our test is specific enough to be quantitatively graded. On the other hand, we find it remarkable that in several instances ChatGPT(4) provides answers to causal questions that are correct according to human intuition and the criterion stipulated above, even in cases where the ‘correct answer’, or at least its justification, is subject to debate in philosophy. This is for instance the case for the six correct answers in Table 2. Finally, we noted that for the more complex diagrams 22-25 in Table 1, introducing interrupted paths or crossings between rows – features that are absent in the classic diagrams of (Paul and Hall 2013) – the causal reasoning of ChatGPT(4) appeared to break down. The answers were of a saliently lower quality than for simpler diagrams. This suggests that there is a limit in complexity of the diagrams above which the causal reasoning capability of ChatGPT(4) collapses.
Qualitatively comparable results were obtained with advanced LLMs, all recently released, namely DeepSeek-R1, Gemini 2.0 Flash (Thinking Experimental) and OpenAI o3-mini, as shown in Appendix 2, Table 3. The 25 diagrams and the questions are the same as in Table 1, except that (i) we only ask to give the causes at the earliest time (t1), and (ii) we specify that the answers should be short, in order to avoid needlessly lengthy answers providing great detail on the reasoning steps (see details in Appendix 2). For this small-scale test, best-in-class was Gemini 2.0 Flash (Thinking Experimental) with 14 fully correct and 9 partially correct answers, and 2 wrong answers, followed by DeepSeek-R1 (resp. 10/15/0). ChatGPT o3-mini had only 2 fully correct and 11 partly correct answers, while 12 answers were wrong, as can be read in the table. This last result is surprising since the older ChatGPT(4) did better. We have no compelling explanation for this phenomenon10.
Before commenting further on these results, let us first derive a definition that works for all diagrams in the Tables (1-4). From the perspective of theoretical philosophy, this is our main result.
# 3. Analytic definition of cause for the neuron diagrams of Table 1.
For all the cases of Table 1 and all subsequent tables, we identified the causes X of event Y (Y is the firing or non-firing of the last neuron in the diagrams, usually labelled E) by the following definition, applied to a given neuron diagram:
X is a cause of Y iff ¬X (ceteris paribus, off-path under max blocking) implies ¬Y.
(DEF-1)
This definition works for any Y-event in a diagram, not only one occurring at the last neuron. It is implicitly assumed that 1) Y does not occur earlier than X – in accordance with intuition, and 2) that there is some path, i.e. a suite of contiguous stimulating or blocking connections, from X to Y. In DEF-1, ¬X can be read ‘not-X’ (event X not happening, corresponding to inverting ‘on’ and ‘off’ states of the X-neuron). (X and Y stand in principle for the events, firing or non-firing, but sometimes also indicate the neurons. If confusion is possible, we write e.g. ‘X-neuron’ or ‘X-event’ or ‘X+’, as before.)
DEF-1 needs to be made precise, as we do in the following11. We will show below that the synthetic phrasing of DEF-1 allows one to apply it conveniently to all special cases, once one is accustomed to the new notions (‘off-path’ and ‘blocking’) it contains. The key point is this: the interpretation of the ‘off-path under max blocking’ clause depends on whether X has only one forward-in-time connection, or more. In the latter case we say that X is ‘bifurcating’; C in Fig. 1 is such a bifurcating neuron. In the former, much more frequent, case the clause can be neglected and one reverts to the simple counterfactual condition/rule mentioned in the previous section. Thus, it is only when bifurcating paths occur that DEF-1 needs the ‘off-path under max blocking’ clause.
But let us specify DEF-1 in detail; it comes with rules. To see whether X is a cause of Y, one needs to consider the factual scenario as depicted in the diagrams, and a counterfactual scenario that corresponds to the diagram in which X is replaced by ¬X and the consequences implemented in the way specified under (i) below. In this counterfactual scenario one has to evaluate whether ¬Y occurs. The counterfactual scenario is built in the following way, depending on whether the X-neuron is bifurcating or not:
(i) Both for a bifurcating and a non-bifurcating X-neuron, the ‘forward’ consequences of the replacement X → ¬X are implemented ceteris paribus, so while keeping all other events, not related (i.e. connected) to X, fixed. Forward consequences are the ‘subsequent-in-time’ changes implied by the replacement X → ¬X, according to the rules of the functioning of the diagram (given in the description of Fig. 1). Backward consequences are neglected (one can imagine the part of the diagram occurring before X being cut off). To take the simplest example, if in the factual scenario two active neurons X and Y are only connected via a stimulating connection, ¬X will imply ¬Y in the counterfactual scenario, in which both X and Y are off. According to DEF-1, the firing of X is then a cause of the firing of Y.
(ii) If X is non-bifurcating, so if there is one path from X to Y, or if the X-neuron is off (whether bifurcating or not), then we do not really need12 the ‘off-path under max blocking’ clause, and DEF-1 corresponds to a simple counterfactual definition (with the ceteris-paribus specification of (i)). As an example, the ceteris-paribus clause implies that in diagram 5 of Tables 1 and 2, F– is a cause of E+: F+ implies E– if one keeps fixed all F-unrelated events in the path A–E. Most neurons are non-bifurcating, so the simple counterfactual rule is the baseline for identifying causes. Consider the case that X is off and bifurcating. Then DEF-1 just stipulates that event X (the X-neuron being off) is a cause of Y just in case ¬X, implemented (in the counterfactual diagram) ceteris paribus, implies ¬Y. This case is illustrated in Fig. 2 (diagram 3 in Table 1), where two paths run from C to E.
Fig. 2. The non-firing of C is a cause of the non-firing of E.
It seems intuitive that here the non-firing of C is a cause of the non-firing of E: if C would fire, E would fire – because the counterfactually firing C would block the B-neuron. This is also what DEF-1 delivers. Note that one can still consider this as a (counterfactually) ‘off-path maximal blocking’ scheme, in the sense that in the counterfactual diagram the off-path B is blocked from firing. Off-path blocking is maximal in that it is implemented in the counterfactual scheme.
(iii) Only if X is on and bifurcating does one need to distinguish the ‘direct’ path from X to Y (containing ‘in-path’ events) and the ‘indirect’ path(s) from X to Y (containing ‘off-path’ events); by definition, all events that are not in-path in the direct path are off-path. The direct path is, by definition, the shortest path, once all ‘redundant’ neurons are collapsed onto their antecedent ‘parent’ with the same on/off state; such redundant neurons have, by definition, one connection in and one out. For instance, D in Fig. 1 and Fig. 2 is a redundant neuron in this sense; long chains of similar neurons (as in diagram 18 in Tables 1 and 2) can thus be made shorter. (In very rare cases, the notion of ‘shortest path’ is ambiguous, since two or more paths could qualify as shortest; then one can choose any of those paths as the shortest/direct path: see below.)
If the X-neuron under scrutiny is on and bifurcating, we explicitly need the ‘off-path maximal blocking’ clause: we need to implement $\neg\mathrm{X}$ while maintaining X’s off-path blocking, i.e. in any segment of all indirect paths. This allows, for instance, for identifying the firing of C as a cause of the firing of E in Fig. 1, as one would spontaneously do: the off-path, factually blocked B is kept blocked in the counterfactual scenario. (For other examples, see Ex2, Ex3, Ex5 below.) Thus, DEF-1 now stipulates, more precisely, to implement $\neg\mathrm{X}$ in the direct path and in all indirect paths, while maintaining off-path (so in any indirect path) any blocking, including blocking that can be retraced to the (factually firing) X. This is somewhat sophisticated, because it boils down to not fully implementing $\neg\mathrm{X}$ in the counterfactual scenario; off-path, one needs to maintain any blocking in the indirect path that exists in the factual scenario, even if it stems from the factual firing of X. In sum, in case the X-event under scrutiny corresponds to a bifurcating and firing X-neuron, we explicitly need the ‘off-path under max blocking’ clause. The blocking in the indirect path is now maximal in that it is maintained from the factual firing of X. (Recall from (ii) that if X is off, off-path blocking is maximal in that it is implemented in the counterfactual scheme.) In very rare cases, we need, in principle, one more clause in specification (iii): if X is on and bifurcating but one cannot define a direct/shortest path according to the definition given, i.e. if there is more than one ‘shortest path’ starting at X, then one can choose any of those paths as the shortest/direct path; the result does not depend on the choice. This is the case for the C-neurons of diagrams 5, 6, 7, and 9: they are bifurcating, but the two paths towards the last neuron are equal in length.
For an explicit application of DEF-1 in this case, see example Ex1 below.
With these specifications, DEF-1 leads, in all diagrams of Table 1 (and subsequent tables) discussed in (Paul and Hall 2013), to the causes that are considered in this work as the intuitive causes. Clearly, one will note that DEF-1 shares commonalities with several ingredients of existing counterfactual accounts of causation (see e.g. passages in Paul and Hall 2013, p. 20, 21, etc.). But these ingredients have not been combined in the way here proposed. DEF-1 might seem complex, due to the specifications (i)-(iii), but the synthetic expression “$\neg\mathrm{X}$ (ceteris paribus, off-path under max blocking) implies $\neg\mathrm{Y}$” allows for easy use once one is accustomed to the implicit notions. This requires some practice, so let us illustrate the definition in a few non-trivial cases, notably involving bifurcating neurons.
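Before turning to the examples, the firing dynamics of a neuron diagram and the two counterfactual tests of (ii) and (iii) can be sketched in code. This is a minimal illustration of our own: the dictionary encoding of diagram 1, the `propagate` helper, and the use of a forced-off set to represent blocking are assumptions for exposition, not the authors' implementation.

```python
def propagate(initial_on, stim, block, order, forced_off=frozenset()):
    """Run the neuron-diagram dynamics: a neuron fires iff some stimulating
    parent fires and no blocking parent fires; forced_off neurons never fire."""
    on = set(initial_on) - set(forced_off)
    for n in order:
        if n in on or n in forced_off:
            continue
        stimulated = any(p in on for p in stim.get(n, []))
        blocked = any(b in on for b in block.get(n, []))
        if stimulated and not blocked:
            on.add(n)
    return on

# Diagram 1 (Fig. 1): C -> D -> E, A -> B -> E, C blocks B; A and C fire.
stim = {"D": ["C"], "B": ["A"], "E": ["D", "B"]}
block = {"B": ["C"]}
order = ["C", "A", "D", "B", "E"]

factual = propagate({"A", "C"}, stim, block, order)
assert "E" in factual                    # E fires via C -> D -> E; B is blocked

# Simple counterfactual rule of (ii): not-C, ceteris paribus.
simple_cf = propagate({"A", "C"}, stim, block, order, forced_off={"C"})
assert "E" in simple_cf                  # B unblocks and E still fires

# Specification (iii): not-C with the off-path blocking of B maintained.
def1_cf = propagate({"A", "C"}, stim, block, order, forced_off={"C", "B"})
assert "E" not in def1_cf                # not-C now implies not-E
```

On diagram 1 the simple rule of (ii) fails to identify C+ as a cause (removing C unblocks B, so E still fires), while maintaining the off-path blocking of B, as specification (iii) requires, yields the verdict that C+ is a cause of E+.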
Ex. 1. Diagram 5, an example of double prevention (Paul and Hall 2013, Ch. 5). For this diagram DEF-1 stipulates the following. $\mathrm{A+}$ and $\mathrm{F-}$ are causes of $\mathrm{E+}$ by the simple counterfactual rule in (ii) above. C is bifurcating, but its two forward paths are equal in length; whichever path one chooses as shortest/direct, F remains off in the counterfactual $\mathrm{C-}$ scenario. Hence, according to specification (iii), $\mathrm{C+}$ is not a cause of $\mathrm{E+}$ ($\mathrm{C-}$ does not lead to $\mathrm{E-}$). Finally, $\mathrm{D+}$ is a cause of $\mathrm{E+}$ by the simple counterfactual rule of (ii) above. These verdicts coincide with those privileged by the experts, even if this diagram is intensely debated (cf. Paul and Hall 2013, p. 198, p. 216, p. 224, p. 247). Since we derive our results from a comparatively broad definition, our model validates the intuition of several philosophers (e.g. Hall 2000, Hitchcock 2001, Paul and Hall 2013, Ch. 5) that transitivity is violated in certain neuron diagrams13. Other well-known accounts struggle with this case (Paul and Hall 2013, p. 82ff).
Ex. 2. Diagram 8. According to DEF-1, $\mathrm { C } +$ is a cause of $\mathrm { E + }$ , because C- implies E- if the off-path B and G are kept blocked (cf. the blocking stipulation (iii)). This corresponds to the expert verdict (cf. Paul and Hall 2013, p. 88).
Ex. 3. Diagram 10. According to DEF-1, $\mathrm{C+}$ is a cause of $\mathrm{E+}$, because $\mathrm{C-}$ implies $\mathrm{E-}$ if the off-path D is kept blocked (in accordance with (iii)). $\mathrm{A+}$ is a cause of $\mathrm{E+}$ because $\mathrm{A-}$ leads to $\mathrm{E-}$ (the off-path D is kept blocked by C). These results agree with Paul and Hall (2013, p. 90). Again, many other models have been proposed, but none appears to have wide applicability.
Ex. 4. Diagram 12, an example of double prevention. DEF-1 implies that $\mathbf { A } \mathbf { + }$ and $\mathbf { C } +$ are both causes of $\mathrm { E + }$ , due to the simple counterfactual rule applicable to non-bifurcating neurons. Paul and Hall converge to a similar conclusion, even if its justification is again highly debated (Paul and Hall 2013, p. 202).
Ex. 5. Diagram 15. According to DEF-1, $C +$ is a cause of $\scriptstyle \mathrm { E + }$ , because $\mathrm { C } \mathrm { - }$ implies E-: the off-path B must be kept blocked. $\mathbf { A } \mathbf { + }$ is a cause of $\scriptstyle { \mathrm { E + } }$ because of the simple counterfactual rule. The final verdict by Paul and Hall is the same (2013, p. 199).
Ex. 6. Diagram 17, a case of ‘redundant prevention’. At t1 only $\mathrm { C } +$ is a cause of G-: C- leads to $\mathbf { G } +$ if the off-path B is kept blocked, as it should be ((iii)). Compare to the discussion by Paul and Hall (2013, p. 213).
Ex. 7. Diagram 18. $C +$ is a cause of $\mathrm { I } +$ : C- leads to I- if the off-path B is kept blocked, as it should be ((iii)). Compare to the discussion by Paul and Hall (2013, p. 220).
Let us emphasise that DEF-1 does not work for all neuron diagrams discussed in (Paul and Hall 2013). It cannot work for diagrams that require more or different rules than the ‘classic’ diagrams, i.e. those based on the rules given for diagram 1 in Fig. 1 (plus the rule for double-border neurons as in diagram 10). But we believe that it works for all classic diagrams, perhaps with a minimal upgrade, as further research should confirm14. Indeed, we found one diagram in (Paul and Hall 2013) that is classic in our sense and that might require a slight upgrade of DEF-1 to be applicable, namely the diagram in Fig. 38, p. 187. On the other hand, this case is ambiguous and the authors give no final verdict regarding the cause(s) of $\mathrm{E+}$. If one wants to identify $\mathrm{C+}$ as a cause in this diagram, then one would need to modify specification (iii) of DEF-1 in the following way: “If X is on and bifurcating, …” must be replaced by “If X is on and bifurcating, or on and directly stimulating such an on and bifurcating neuron, …”. Interestingly, ChatGPT(4) correctly identified $\mathrm{C+}$ as a cause15.
To be sure, we do not (yet) advocate using LLMs as invariably reliable causal experts in cases of philosophical relevance, but in the next section we argue that there are reasons to be optimistic about their future capabilities (beyond the obvious argument from parametric upscaling).
# 4. Discussion of the causal tests. Lines of further research
As a preliminary note, let us emphasise that neuron diagrams do not necessarily capture all aspects of causation (as also acknowledged by Paul and Hall 2013). To what extent the counterfactual analysis studied via neuron diagrams overlaps with other theories of causation, such as the functional, manipulability, and regularity models, remains an open problem. Nevertheless, neuron diagrams cover a large part of causation research in philosophy, and for our task of developing a systematic test they seem an appropriate tool.
Are the results obtained by ChatGPT(4) (Tables 1 and 2), DeepSeek-R1, Gemini 2.0 Flash, and o3-mini (Table 3) surprising? We gladly admit that we were surprised that ChatGPT(4) did so well, after we had done tests with ChatGPT(3), which essentially failed on almost all questions. The more recent DeepSeek-R1 and Gemini 2.0 Flash seem to corroborate this tendency of increasing fluency (cf. Table 3). Even a global success rate of say $50 \%$ of fully correct answers in Table 1 (and up to $90 \%$ in Table 3 for Gemini 2.0 Flash, if one includes partially correct answers) seems an achievement for a technology that works, at its basis, by statistical text completion, and that is not specifically developed for causal reasoning. We suspect that this success rate is difficult to match by humans, on average, and perhaps even by philosophers (but we didn’t perform large-scale statistical tests with humans, which seem particularly laborious: they need to be parametrised by a variety of conditions). Unfortunately, it is difficult to get any deep insights into how the LLMs come to their causal verdicts; we are dealing with a black-box technology16. Some will maintain, independently of any test results, that this type of technology does not really reason causally (which would be in agreement with the conclusions of e.g. Jin et al. 2023, Zečević et al. 2023); for a critical philosophical assessment along these lines of GPT-4’s ‘reasoning’, see (Arkoudas 2023, 2023a). We indeed notice that when causal diagrams become complex (say, diagrams 19 and 22-25, which are not discussed in the literature, for that matter), the probability of failing becomes much higher, for all chatbots tested. On the other hand, our question Q-CAUSE often does elicit textual answers that are indistinguishable from answers resulting from ‘real’ step-by-step reasoning – as is known to be the case for other reasoning types (e.g. Chen et al. 2023). 
From a practical point of view, the conclusion that matters most for us is that several of the tested LLMs can already provide correct answers to causal questions that are considered subtle by the expert community (Tables 2 and 3). Therefore we believe it is far from implausible that next generation LLMs, or dedicated AI, will produce correct answers also for more complex diagrams – equaling or surpassing human experts.
In this context, the following observation seems to warrant optimism: there is a relevant analogy between how we constructed DEF-1 and how (future) AIs could identify causes, and perhaps ultimately abstract a definition; the keyword is correlation. We tried, in a somewhat tedious process, various definitions by inspecting large ensembles of neuron diagrams (Table 1) and by looking for patterns in these diagrams, i.e. by looking for correlations. That is a procedure at which dedicated AIs based on artificial neural nets should excel. At any rate, neural nets (which, if only anecdotally, bear some resemblance to neuron diagrams) can probably be trained for causal identification in neuron diagrams and in other contexts, and then be used for it. In sum, this analogy in heuristic procedure seems to be one more reason why we believe the ‘causal thinking’ of (future) AI must be taken seriously, and why the foundations of causation and of artificial neural nets should be a fertile field of interdisciplinary research.
However, as said, our small-scale statistics are not presented here as a measure of whether the studied LLMs are good causal reasoners; for that we refer to the specialised literature (e.g. Gao et al. 2023, Jin et al. 2023, Kiciman et al. 2023, Tu et al. 2023, Zečević et al. 2023, Liu et al. 2024).
Before concluding, let us discuss how the test method could be enriched and elaborated. Each of the following routes could be systematically explored in further research projects:
(i) Use the diagrams to test for counterfactual reasoning and mastery of concepts such as ‘intervention’. It is widely accepted that causal identification is related not only to counterfactual reasoning but also to the concept of intervention (e.g. Woodward 2005). One can indeed use the diagrams to further test whether LLMs master these concepts, as illustrated in Appendix 3, Table 4 by a mini-test on Gemini 2.0 Flash. As for the other tests, we find the answers often excellent, e.g. for diagram 17, of intermediate complexity (though not all answers are fully correct). Again, even though we did not do a large-scale test, one is tempted to conclude that some LLMs ‘imagine’ causal scenarios as humans do; more precisely, that their answers often coincide with those based on counterfactual/imaginative thinking. Note that when the LLM gives a correct answer (20 out of 25 answers related to the 9 diagrams in Table 4), it in appearance ‘understands’ counterfactual intervention as humans normally do, notably by keeping all events anterior to the intervening cause fixed. This corresponds to correctly implementing the ‘ceteris paribus’ clause in DEF-1.
(ii) Paraphrase the prompts. One might wonder how robust the LLMs are under prompt variation. The prompts submitted to the LLMs in Tables 1-4 describe the causal dynamics of the diagrams in detail, carefully representing the time details etc. One can paraphrase these queries by using a more contracted description as is done in typical philosophical phrasings of the diagrams. For instance, the prompt corresponding to the paradigmatic diagram 1 (Fig. 1) can be rephrased thus: “Suppose a scenario in which C occurs and causes D to occur, which subsequently causes E to occur. Suppose A normally causes B to occur, unless C causes D to occur. If B would have occurred, it would have caused E to occur. Does E occur in this scenario? What is/are the cause(s) of E’s occurring?” We did a small-scale test with Gemini 2.0 Flash (Thinking Experimental) and ChatGPT o3-mini on the 9 diagrams of Tables 2 and 4. In this test all prompts were rephrased according to the logic as just given for diagram 1. The results showed only small deviations from the results obtained with the original prompts used in Table 3. In some detail, Gemini 2.0 gave one fully wrong answer on diagram 15 and a partially wrong answer for diagram 12 (the answers were correct with the original phrasings used for Table 3). ChatGPT o3-mini gave a fully wrong answer for diagram 17 and partially wrong answers for diagrams 12 and 18; its answers were also problematic for these diagrams under the original prompt (cf. Table 3). We further introduced more abstract phrasings (unlikely to be encountered in the texts used for model training), by replacing in the prompts A, B etc. by Aness, Bness etc., but this led to identical results. In conclusion, these preliminary tests (see also (iii)) suggest that the mentioned LLMs are quite stable under prompt variation for this type of causal reasoning.
(iii) Replace the diagrams by concrete situations, i.e. ask questions about concrete scenarios of which the diagrams are abstract models. For instance, the prompt for diagram 12 (a case of double prevention) could be replaced by the following concrete text (paraphrasing an example given by Paul and Hall 2013, p. 175; in square brackets are the corresponding neurons in the diagram): “Bob makes coffee [A], and fills his cup [E]. Meanwhile, Alice scoops up Billy the cat [C] as he lashes his tail wildly [B]; her quick action prevents a disastrous spilling [D], so that the cup remains filled [E]”. The test question could then be: What are the causes of the presence of coffee in the cup? Or: What are the causes of the coffee being/remaining in the cup? As another example, diagram 1 in Fig. 1 could be replaced by the following description, a concrete example of early preemption (due to Hitchcock 2007): “A poisons V’s coffee. V drinks it and dies. If A hadn’t poisoned the coffee, B would have, and V would have died anyway. V would not have died if there had been no poison in the coffee. What is/are the cause(s) of V’s death?” For what it is worth, we judged that ChatGPT(4) gave correct (and interesting) answers for both problems, well in line with the experts’ analysis17. Of course, in this case the chatbot can rely on much published text.
(iv) Complicate the prompts by generating more complex diagrams, e.g. using computer code. Since DEF-1 can also be programmed, this could, in principle, lead to a systematic (benchmark) test for this type of causal reasoning. Our preliminary tests suggest that the causal reasoning of existing LLMs breaks down under sufficient diagram complexity. It may therefore be interesting, notably in an interdisciplinary effort with computer scientists, to study systematically the test score of an AI as a function of diagram complexity. Several numerical measures of diagram complexity could be defined; the simplest are the number of neurons and the number of columns (i.e. time steps) in the diagram; others are the number of forks, of blocked neurons, of crossings, etc. In the most interesting scenario this would provide an easy, quantitative way to gain insight into the general depth-of-reasoning of given AIs. For assessing an AI’s answers in the case of complex diagrams, an encompassing definition is necessary, one that can be programmed, such as DEF-1.

Abstract. We propose a test for abstract causal reasoning in AI, based on scholarship in the philosophy of causation, in particular on the neuron diagrams popularized by D. Lewis. We illustrate the test on advanced Large Language Models (ChatGPT, DeepSeek and Gemini). Remarkably, these chatbots are already capable of correctly identifying causes in cases that are hotly debated in the literature. In order to assess the results of these LLMs and future dedicated AI, we propose a definition of cause in neuron diagrams with a wider validity than published hitherto, which challenges the widespread view that such a definition is elusive. We submit that these results are an illustration of how future philosophical research might evolve: as an interplay between human and artificial expertise.

[cs.AI, cs.LG]
# 1 Introduction
Structured comments in docstring format, containing detailed descriptions of functionality, parameters, return values, exceptions, and use cases, play a key role in maintaining the codebase: they not only speed up developers’ understanding of the code, but also allow them to automatically generate documentation (for example, in HTML format). Automatic comment generation can significantly ease the time-consuming task of writing comments and regularly updating them, which is necessary due to constant changes in the code base.
The work of Shi et al. (2022) demonstrates that using rule-based filters to delete low-quality comments greatly improves generation results. Nevertheless, such simple filtering approaches are not enough, because they disregard code-comment semantic similarity. Researchers therefore attempted to use the SIDE metric (Mastropaolo et al., 2024) to filter English training data (Vitale et al., 2025). However, the experiments revealed that even halving the training set size had minimal impact on summary quality; moreover, the most restrictive selection strategy performed no better than randomly selecting training instances. This result suggests that different quality attributes should be explored for optimizing code summarization datasets.
Apart from SIDE, there are several other metrics, such as MIDQ (Scalabrino et al., 2017), STASIS (Li et al., 2006), and CoCC (Huang et al., 2025), each with its own drawbacks. MIDQ, although it considers comment structure, uses the Flesch index (Flesch, 1979), which was designed for literary texts and is therefore ill-suited to term-heavy technical documentation. STASIS, based on WordNet (Miller, 1995) for English, is inapplicable to Russian due to the lack of an equivalent lexical base, and also ignores the varying informative content of terms (for example, abbreviations like "id" or "ctx"). CoCC is a skip-gram word2vec model (Mikolov et al., 2013) trained from scratch for code-comment consistency detection, which likewise does not support the Russian language.
The problem of evaluating comment quality is compounded by the limitations of existing metrics. Reference-based text approaches such as BLEU or ROUGE-L depend on the quality of the reference data, which may be incomplete or contain errors. These metrics also fail to account for the semantic equivalence of alternative formulations, artificially penalizing correct comments that differ from the reference.
In this paper, we propose a new criterion called CIDRe for the quality of structured comments that eliminates dependence on reference data, enables dataset filtration and evaluates several aspects of quality at the same time. We evaluate CIDRe on StRuCom (Dziuba and Malykh, 2025), which is the only existing dataset with strict structural filtering of code comments.
# Our contributions:
1. Quality criterion for structured comments. CIDRe is a reference-free metric that combines four complementary quality components and outperforms existing approaches in cross-entropy evaluation.
2. Validation dataset. We manually annotated 840 comments from StRuCom for binary classification (good/bad), creating the first training/evaluation dataset for quality assessment criteria in this domain.
3. Validation through finetuning. Filtering the StRuCom dataset with our criterion improved generation quality (in side-by-side evaluation via GPT-4o-mini), confirming the practical value of our approach.
# 2 Related Work
# 2.1 Datasets
Existing code-to-text datasets predominantly target English content. The Stack (Kocetkov et al., 2022) aggregates multilingual code (658 languages) but lacks task-specific annotations for supervised finetuning. The Vault (Nguyen et al., 2023), derived from The Stack, contains 43M English code-text pairs, yet structured documentation remains scarce due to an abundance of brief functions. CodeSearchNet (Husain et al., 2019) focuses on code search, limiting text descriptions to introductory documentation paragraphs. MCoNaLa (Wang et al., 2023) offers minimal multilingual support (345 Russian examples) but is constrained to simple "how-to" Python snippets. StRuCom (Dziuba and Malykh, 2025) addresses the Russian documentation gap with 153K human-written and synthetic code-comment pairs across Python, Java, JavaScript, C#, and Go, maintaining language-specific terminology and docstring conventions.
# 2.2 Models
While proprietary large language models (LLMs), such as GPT-4, are excellent at generating code documentation, their proprietary nature poses a challenge for enterprise adoption. In contrast, open-source alternatives such as DeepSeek-Coder and Qwen2.5-Coder offer a balance between performance and deployability, although they may underperform on Russian documentation due to their training on English-only corpora. Recent research (Dziuba and Malykh, 2025) has introduced models trained on the StRuCom dataset, which achieve baseline performance on Russian code commenting tasks. However, given the inherent noise in the dataset, arising from uncurated human-generated and synthetic data, there is potential for accuracy improvements through quality filtering, a direction that remains to be explored in the literature.
# 2.3 Embedding Models for Code
Modern embedding models bridge the gap between code and natural language through semantic alignment. CodeSage (Zhang et al., 2024) utilizes bidirectional transformers with scalable architectures (130M–1.3B parameters) to align code-text representations through contrastive learning, though its English-centric training limits Russian adaptability. CodeXEmbed (Liu et al., 2024) employs a unified multilingual framework (400M–7B parameters) for cross-modal retrieval across 12 languages, achieving state-of-the-art benchmarks at the cost of high computational overhead. Both models highlight the necessity of linguistic adaptation for non-English documentation tasks.
# 2.4 Metrics for Comment Quality
Existing metrics exhibit language and domain limitations. MIDQ (Scalabrino et al., 2017) combines JavaDoc structure analysis with Flesch readability scores, though its reliance on literary readability and Java-specific design hinders cross-lingual applicability. STASIS (Li et al., 2006) measures code-comment similarity via WordNet synsets, computing term distances in the WordNet hierarchy, but suffers from English-language bias, uniform term weighting, and lack of Russian support. CoCC (Huang et al., 2025) detects inconsistencies through code-text embeddings but remains English-centric. SIDE (Mastropaolo et al., 2024) introduces reference-free coherence evaluation via contrastive learning with MPNet (Song et al., 2020) (a 12-layer BERT (Devlin et al., 2019) architecture), yet empirical studies show its failure to improve model performance when filtering the TL-CodeSum (Hu et al.) and Funcom (LeClair and McMillan, 2019) datasets, highlighting the need for multi-aspect quality criteria.
# 3 CIDRe Criterion
Our criterion is a combination of four key components: Completeness, Informativeness, length of the text Description, and Relevance. The general pipeline of the criterion is shown in Figure 1.
Figure 1: Pipeline of the new comment quality criterion: the component scores (R, I, C, D) computed for each comment form a feature vector that is fed to a binary classifier, which outputs the probability of the comment belonging to the positive class (1: «good» quality comment, 0: «bad» quality comment).
# 3.1 Completeness

Completeness is a measure of the documentation’s coverage of code elements. The documented elements of a function are its parameters, the exceptions thrown from it, and its return value. Our definition of completeness is inspired by the Documented Items Ratio (DIR) proposed by the authors of the MIDQ metric. Since MIDQ is defined only for JavaDoc, we extended the definition to the other four programming languages. The details of the completeness calculation are given in Appendix A.

# 3.2 Informativeness

Informativeness measures the extent to which information from a function’s source code is captured in its corresponding comment. This metric is grounded in the assumption that competent programmers assign semantically meaningful names to code identifiers, enabling critical insights into the code’s functionality to be derived directly from these names.

A term is defined as a word contained within an identifier. A single identifier may comprise $N$ words, thereby containing $N$ terms.

The measure of informativeness is inspired by STASIS. The main difference is that informativeness takes the weight of terms into account: terms in the code contribute differently to understanding its functionality, so we weigh terms by importance using the self-attention mechanism of transformers. The details of the informativeness calculation are given in Appendix B.

# 3.3 Description Length

We measure comment length in characters, hypothesizing that detailed textual explanations before key sections (parameters, returns, etc.) improve system-wide context understanding by linking functions to broader architectural goals, reduce cognitive load through self-contained explanations of complex logic, and preserve decision history across code iterations. Furthermore, comprehensive comments mitigate knowledge gaps in collaborative environments by explicitly documenting assumptions and edge cases.

# 3.4 Relevance

Relevance quantifies the degree of semantic alignment between generated code comments and the corresponding source code. The methodology builds on the SIDE metric. We employed a triplet loss function (Schroff et al., 2015) to finetune the CodeSage-small-v2 model, a choice justified by its support for diverse languages and its ability to generate semantically rich embeddings for code-text matching. For details about finetuning, see Appendix C.
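As a rough illustration of the completeness component (Section 3.1), here is a DIR-style sketch for Python functions. This is our own simplification, not the paper's implementation: the AST heuristics and the substring matching of item names against the docstring are illustrative assumptions.

```python
import ast

def completeness(source: str) -> float:
    """Fraction of documentable items (parameters, return value, raised
    exceptions) that the function's docstring mentions at all."""
    fn = ast.parse(source).body[0]
    doc = (ast.get_docstring(fn) or "").lower()
    items = [a.arg for a in fn.args.args]                  # parameters
    if any(isinstance(n, ast.Return) and n.value is not None
           for n in ast.walk(fn)):
        items.append("return")                             # return value
    for n in ast.walk(fn):                                 # raised exceptions
        if isinstance(n, ast.Raise) and n.exc is not None:
            exc = n.exc.func if isinstance(n.exc, ast.Call) else n.exc
            if isinstance(exc, ast.Name):
                items.append(exc.id)
    if not items:
        return 1.0
    return sum(1 for it in items if it.lower() in doc) / len(items)

documented = '''
def div(x, y):
    """Divide x by y.

    :param x: dividend
    :param y: divisor
    :return: quotient
    :raises ZeroDivisionError: if y is zero
    """
    if y == 0:
        raise ZeroDivisionError
    return x / y
'''

bare = '''
def div(x, y):
    """Divide."""
    if y == 0:
        raise ZeroDivisionError
    return x / y
'''

assert completeness(documented) == 1.0   # all four items covered
assert completeness(bare) == 0.0         # nothing documented
```

The real criterion is language-aware and covers five languages; this sketch only conveys the "documented items over documentable items" idea.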
# 4 Verification of the Components
Tab. 1 presents a statistical analysis via the Mann-Whitney U-test, which confirms significant differences ($p < 0.05$) between «good» and «bad» comments across all four components: informativeness, relevance, structural completeness, and description length. The observed patterns align with human judgments: high-quality comments systematically exhibit richer contextual details, adhere to documentation standards, and minimize redundant information. This empirical validation justifies the components’ inclusion in the final quality criterion.
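For concreteness, the U statistic underlying the Mann-Whitney test can be computed directly; the component scores below are invented for illustration and are not the paper's data.

```python
def mann_whitney_u(xs, ys):
    """U statistic of the Mann-Whitney test: the number of pairs (x, y)
    with x > y, ties counted as one half (no p-value computation here)."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)

# Invented informativeness scores for «good» vs «bad» comments.
good = [0.91, 0.84, 0.88, 0.79]
bad = [0.35, 0.42, 0.28, 0.51]

u = mann_whitney_u(good, bad)
assert u == 16.0  # every «good» score exceeds every «bad» one: maximal U
```

A maximal U (here 4 × 4 = 16) indicates complete separation of the two groups, which is the pattern the significance test detects.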
Table 1: Comparison of quality criterion components between comment groups by Mann-Whitney test.
Table 2: Comparison of the developed quality criterion based on three different models and two baselines by cross-entropy (CE) with existing metrics for the quality of code comments.
# 5 Metric Comparison
We compare our proposed criterion with MIDQ (Scalabrino et al., 2017), which relies on JavaDoc structure analysis and Flesch readability scores, and SIDE (Mastropaolo et al., 2024), which performs reference-free coherence evaluation with MPNet. We evaluate CIDRe against these existing metrics on an independent test set of 100 code comments using cross-entropy, which penalizes both classification errors and probabilistic miscalibration, and is notably harsh on overconfident incorrect predictions. As can be seen in Tab. 2, the experiments demonstrate the superiority of our SVM-based approach in probability calibration over ensemble and linear methods, which suffer from error accumulation and from limitations in handling nonlinearity, respectively. Traditional documentation metrics (SIDE, MIDQ) underperform in confidence-sensitive scenarios, validating the need for specialized criteria in borderline case analysis.
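The cross-entropy evaluation can be made concrete with a short computation; the labels and probabilities below are invented for illustration, not the paper's results.

```python
import math

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Mean binary cross-entropy; overconfident wrong predictions are
    penalized heavily because -log(p) blows up as p -> 0."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

labels = [1, 0, 1, 1]
calibrated = [0.8, 0.3, 0.7, 0.9]         # modest, well-calibrated probabilities
overconfident = [0.99, 0.01, 0.99, 0.01]  # one confident mistake on the last item

# A single overconfident error outweighs uniformly modest predictions.
assert cross_entropy(labels, calibrated) < cross_entropy(labels, overconfident)
```

This is why a classifier with good probability calibration can win the comparison even when raw accuracy is similar.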
# 6 Ablation Study

As Tab. 3 shows, removing any single component disproportionately reduces model effectiveness, validating our four-dimensional design.
Table 3: Ablation study for SVM model (F1-score). Key: I-informativeness, R-relevance, C-completeness, D-description length. Best result (0.994 F1) with all features.
# 7 Side-by-Side Evaluation
The comparison with GitHub Copilot was performed on a test subset of StRuCom using the LLM-as-judge method (the judge is GPT-4o-mini). We finetuned Qwen2.5-Coder models of different sizes (0.5B-7B).
Experiments demonstrate our criterion’s universal effectiveness: data filtering improves model metrics across architectures and languages by removing noise while preserving semantically critical comment patterns. More details are presented in Appendix E.

Abstract. Effective generation of structured code comments requires robust quality metrics for dataset curation, yet existing approaches (SIDE, MIDQ, STASIS) suffer from limited code-comment analysis. We propose CIDRe, a language-agnostic reference-free quality criterion combining four synergistic aspects: (1) relevance (code-comment semantic alignment), (2) informativeness (functional coverage), (3) completeness (presence of all structure sections), and (4) description length (detail sufficiency). We validate our criterion on a manually annotated dataset. Experiments demonstrate CIDRe's superiority over existing metrics, achieving improvement in cross-entropy evaluation. When applied to filter comments, the models finetuned on CIDRe-filtered data show statistically significant quality gains in GPT-4o-mini assessments.

[cs.SE, cs.AI, cs.CL, cs.LG]
# 1. Introduction
Classifying complex objects has extensive applications across industries, from industrial inspection and healthcare to security systems. Traditional classification methods predominantly rely on visual data, leveraging deep learning models trained on images and videos [1, 2]. While these approaches achieve high accuracy, they are often sensitive to lighting conditions, occlusion, and privacy concerns, making them less suitable for certain non-visual or privacy-sensitive classification applications. Moreover, light mostly reflects off the surface of objects, making it almost impossible to assess their inner structure and/or materials.
In this study, we propose a novel framework for complex object classification using acoustic scattering. The key observation is that when an incident acoustic wave interacts with an elastic object, it produces a scattered sound field re-emitted in all directions, carrying information about the inner structure and materials of the object. Assume we have an active sound source generating an incident wave field $u^{inc}(x, t)$, where $x$ denotes a spatial position and $t$ represents time. As a result, the total acoustic field at a point in space is a superposition of two acoustic fields: (i) the direct incident field $u^{inc}(x, t)$, and (ii) the scattered waves $u^{s}(x, t)$. The total acoustic field $u(x, t)$ can be formally represented as:
$$
u ( x , t ) = u ^ { i n c } ( x , t ) + u ^ { s } ( x , t )
$$
Acoustic scattering is an important phenomenon in sound propagation [3, 4]. While ray-tracing methods have been used to approximate sound propagation [5], they primarily capture reflections and edge diffractions but fail to encode deeper structural details. By emitting acoustic stimuli toward the objects under assessment and recording the scattered acoustic signals with a microphone, we can classify objects with complex inner and outer structures. This is done by employing recent deep-learning-based sound classification methods.
Acoustic scattering has been widely applied in fields such as underwater sonar imaging [6] and medical ultrasound [7, 8], demonstrating its ability to extract rich structural information beyond surface-level features. Unlike ray-tracing models, which primarily capture reflections and edge diffractions, acoustic scattering provides a more comprehensive view of an object's internal structure, density, and geometric composition. Deep learning models applied to acoustic signals have further enabled advances in bioacoustic analysis [9] and structural health monitoring [7]. However, AI-driven object classification based on acoustic signals remains an underexplored research area, with only a few prior works leveraging deep learning models such as Convolutional Neural Networks (CNNs) to learn time-frequency representations of echo signals [10].
This paper reports a case study of acoustic-scattering-based object classification through an industry-oriented problem of hair assessment. Our objective is to determine whether the scattered sound field can reveal biophysical properties of hair, such as moisture content or hair type, in a quick, contactless, and privacy-preserving manner. Our experimental setup involves a loudspeaker emitting controlled acoustic waves towards mannequin heads with different wigs, while microphones placed near the mannequin's neck capture the scattered sound field. The collected acoustic data is then processed using various deep-learning-based sound classification models to classify hair type and moisture levels. To the best of our knowledge, our work is the first attempt to study the feasibility of using AI-driven techniques for complex object classification with acoustic scattering.
The remaining sections are organised as follows. Section 2 describes the setup to collect acoustic waves from different hair moisture contents and types. Section 3 details the utilised methodologies to assess the hair moisture contents and hair type based on the acoustic signals. The experimental results and conclusions are drawn in Section 4 and Section 5, respectively.
# 2. Acoustic scattering in hair type assessment
In this section, we describe our setup for hair moisture assessment using acoustic waves. The block diagram of the measurement is illustrated in Figure 1. Various hair samples are attached to several mannequin heads placed around $1\,m$ in front of a loudspeaker (Event ALP5) and a microphone (MX183 omnidirectional). The loudspeaker emits an acoustic stimulus, which is the incident wave that hits the hair attached to the dummy head. The incident wave generates an acoustic scattering field around the dummy head, which reaches the microphone as a superposition of the direct incident and scattered fields. In this experiment, we place the microphone on the mannequin's neck. The hair attachment, microphone position, and recording setting can be seen in Figures 2 and 3. The four hair types are also shown in Figure 3.
Figure 1: Schematic diagram of the experiment.
Figure 2: The recording settings.
Figure 3: Pictures of the dummy mannequin heads used in the study, from left to right: A, B, MAMI, MINAYO.
# 2.1. Acoustic stimulus
An acoustic stimulus is an incident wave sent out to gather information from scattering objects. In this paper, we use the Exponential Sine Sweep (ESS), a type of acoustic stimulus proven to provide a high signal-to-noise ratio (SNR) in impulse response measurements [11, 12]. This signal is a sine sweep with exponentially increasing frequency, as described by the following equation:
$$
x ( t ) = \sin \left[ \frac { \omega _ { 1 } T } { \ln \left( \frac { \omega _ { 2 } } { \omega _ { 1 } } \right) } \left( e ^ { \frac { t } { T } \ln \left( \frac { \omega _ { 2 } } { \omega _ { 1 } } \right) } - 1 \right) \right]
$$
where $\omega_1$ denotes the starting angular frequency, $\omega_2$ the ending angular frequency, and $T$ the total stimulus duration in seconds.
To comprehensively study the reflection, refraction, and scattering phenomena across a wide frequency range, we select $\omega_1 = 100$ and $\omega_2 = 24000$ as the frequency limits, with a stimulus duration of $T = 5\,s$. Given the sensitivity of ESS measurements to non-stationary noise, we conduct our experiments in a soundproof room with a reverberation time (RT60) of 0.5 s. This controlled environment ensures accurate and reliable data collection.
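A minimal numpy sketch of the ESS stimulus defined above, assuming a 48 kHz sample rate (the rate the recordings are later resampled to in Section 4); the function and parameter names are ours, not the authors' code.

```python
import numpy as np

def ess(w1=100.0, w2=24000.0, T=5.0, fs=48000):
    """Exponential sine sweep:
    x(t) = sin[(w1*T / ln(w2/w1)) * (e^{(t/T) ln(w2/w1)} - 1)]."""
    t = np.arange(int(T * fs)) / fs
    L = np.log(w2 / w1)
    return np.sin((w1 * T / L) * (np.exp(t / T * L) - 1.0))

stimulus = ess()  # 5-second sweep
```

The instantaneous frequency grows exponentially from $\omega_1$ to $\omega_2$, which is what gives ESS its high SNR in impulse-response measurement.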
# 2.2. Hair moisture assessment
Two acoustic measurement experiments were conducted: (i) classification of four hair types, and (ii) differentiation of dry/wet conditions on a single hair sample. In both experiments, hair samples were affixed to dummy mannequin heads, and positioned consistently relative to the loudspeaker and microphone. The hair samples are moistened either by applying shampoo or cream on dry hair.
To retrieve the scattered acoustic signals, an ESS acoustic stimulus is applied, generating an acoustic scattering field characterized by the hair’s properties. The total acoustic field, $\boldsymbol { u } ( \boldsymbol { x } , t )$ , includes the direct incident wave, $u ^ { i n c } ( x , t )$ , defined by Equation 2, and the head-scattered components, $\boldsymbol { u } ^ { s } ( \boldsymbol { x } , t )$ . This field was recorded by the microphone, capturing information about the hair sample. This study proposes a data-driven approach for hair assessment, utilising acoustic scattered samples for each hair class to train deep learning models. Detailed audio sample specifications will be presented in Section 4.
# 3. Sound classification methods
This section introduces deep learning approaches for classifying hair properties from scattered acoustic waves. As this direction is novel, we borrow well-known and well-studied techniques from a proximate problem: sound classification. Specifically, we investigate common and promising solutions for our task within four categories: (i) fully supervised training, (ii) embedding-based classification, (iii) foundation model supervised fine-tuning, and (iv) self-supervised learning model fine-tuning.
# 3.1. Fully supervised training with ResNet-50
Fully supervised training remains optimal for sound classification when sufficient training data is available. The spectrogram characteristics of scattered acoustic samples (Figure 4) show clear energy contours, especially in the scattered pulse, which may indicate the suitability of convolutional neural networks for enhancing local features [13]. We implemented ResNet-50 [14] with Bottleneck Residual Blocks, modifying the original 2D convolutional model for single-channel mel-scaled spectrogram input. The spectrograms were computed using a 512-point FFT with a hop length of 128 samples, and the number of Mel filters was set to 40. Hyperparameter optimisation yielded a $7 \times 7$ convolution kernel (stride 2, padding 3) for the initial 2D convolutional layer while maintaining default ResNet-50 configurations for the Bottleneck blocks. The adapted model comprises 23.5M parameters.
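The spectrogram front end with the stated parameters (512-point FFT, hop length 128, 40 Mel filters) can be sketched in plain numpy as below; in practice an audio library such as torchaudio or librosa would be used, and the 48 kHz rate is taken from Section 4.

```python
import numpy as np

def mel_spectrogram(y, sr=48000, n_fft=512, hop=128, n_mels=40):
    """Mel-scaled magnitude spectrogram; a minimal sketch of the front end."""
    # Framed STFT with a Hann window.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))          # (frames, n_fft//2 + 1)
    # Triangular Mel filterbank spanning 0 .. sr/2.
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel2hz(np.linspace(hz2mel(0.0), hz2mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)
    return mag @ fb.T                                  # (frames, n_mels)

S = mel_spectrogram(np.random.randn(48000))  # one second of noise
```

Each output frame is a 40-dimensional Mel energy vector, which is then fed to the adapted single-channel ResNet-50.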
# 3.2. Embedding-based classification with VGGish model
Embedding-based models are favoured for lightweight, cost-effective deployment on IoT devices [15], particularly when high-performance computing is unavailable. We evaluated a low-cost solution using Extreme Gradient Boosting (XGBoost) [16] with grid-search parameter optimisation to mitigate overfitting. Performance was benchmarked on our datasets by extracting embedding vectors from the AudioSet VGGish pre-trained model [17] and fitting them to an XGBoost classifier.
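To illustrate only the shape of this pipeline, here is a sketch with synthetic stand-ins for the 128-dimensional VGGish embeddings and with scikit-learn's GradientBoostingClassifier substituted for XGBoost (the paper uses real VGGish features and XGBoost):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-ins for 128-dim VGGish embeddings of two hair classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, size=(30, 128)) for c in (0.0, 3.0)])
y = np.repeat([0, 1], 30)

# Small grid search, mirroring the parameter optimisation used to curb overfitting.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [20, 50], "max_depth": [2, 3]},
    cv=3,
)
grid.fit(X, y)
```

The embeddings are cheap to extract once, so the only training cost is fitting the boosted classifier, which is what makes this option attractive for IoT deployment.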
Figure 4: An example of spectrograms of ESS stimulus and the corresponding hair-on-head scattered record.
Figure 5: Wav2Vec2-Conformer fine-tuning strategies.
# 3.3. Supervised fine-tuning with Audio Spectrogram Transformer
Adapting large, pre-trained models from related fields is an effective strategy for sound classification when limited datasets preclude fully supervised training. We adopted the Audio Spectrogram Transformer (AST), a convolution-free, state-of-the-art model for audio classification [18], to construct acoustic-scattering sound classification models. The initial AST architecture, designed for $1024 \times 128$ spectrogram input with 86.1M parameters, was pre-trained on ImageNet [19] and fine-tuned on AudioSet [20]. To align with this pre-trained model, our audio waveforms were processed into Mel-spectrograms, zero-padded to a sequence length of 1024, and batch-normalised to a mean of 0 and a standard deviation of 0.5.
# 3.4. Self-supervised learning models fine-tuning, applied to Wav2Vec2-Conformer
The success of self-supervised learning (SSL) models has extended beyond text to other domains such as audio and images [21]. SSL speech models like HuBERT [22] and Wav2Vec2 [23] have proven effective in speech and sound classification problems [24, 25, 26, 27, 28]. In this paper, to apply SSL to audio classification, we utilised the Wav2Vec2-Conformer large [29] model with rotary position embeddings. The model was pre-trained on 960 hours of Librispeech. This SSL model performed better than others in our preliminary experiments. We keep the model hyperparameters the same as the pre-trained configuration, for a total of 593.6M parameters. In addition, we used a pre-trained voice activity detector [30] to remove leading and trailing silence regions and environmental noise from the input waveforms. Drawing inspiration from [31], we experimented with two fine-tuning strategies, illustrated in Figure 5: in (a) partial fine-tuning, only the parameters of the Conformer encoder are updated while the CNN feature extractors remain frozen; the complete fine-tuning strategy in (b) updates all parameters of the Wav2Vec2-Conformer model.
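The two strategies differ only in which parameter groups receive gradient updates; a schematic sketch (group names are illustrative, not the actual module names of the model):

```python
# Illustrative parameter groups of a Wav2Vec2-Conformer-style model.
GROUPS = ("cnn_feature_extractor", "conformer_encoder", "classification_head")

def trainable_flags(strategy):
    """Partial fine-tuning freezes the CNN feature extractor;
    complete fine-tuning updates every group."""
    frozen = {"cnn_feature_extractor"} if strategy == "partial" else set()
    return {g: g not in frozen for g in GROUPS}

partial = trainable_flags("partial")
complete = trainable_flags("complete")
```

In a PyTorch implementation this corresponds to setting `requires_grad = False` on the parameters of the frozen module.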
# 4. Experimental results
# 4.1. Dataset and Evaluation metrics
The dataset was constructed through multiple recording rounds, each comprising two weekly sessions separated by at least one day to promote independent and identically distributed samples. Prior to each experiment, the hair on the dummy heads was untangled and combed. Each session involved playing a 5-second ESS stimulus 100 times per hair sample at randomised timings, with the scattered acoustic signal recorded via dual microphones. Recordings were aligned with the source audio using cross-correlation and segmented into 5-second samples. A total of 26 recording rounds were conducted, capturing scattered pulses from 4 hair classes and 3 hair condition patterns. All acoustic signals were resampled to 48 kHz. The final dataset composition is summarised in Table 1.
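The cross-correlation alignment step can be sketched as follows; the toy stimulus length and delay are illustrative, not the actual recording parameters.

```python
import numpy as np

def align_offset(recording, stimulus):
    """Estimate the sample delay of `stimulus` within `recording`
    as the lag maximising their cross-correlation."""
    corr = np.correlate(recording, stimulus, mode="valid")
    return int(np.argmax(corr))

fs = 8000
t = np.arange(fs) / fs
sweep = np.sin(2 * np.pi * 440 * t)                     # toy 1-second stimulus
rec = np.concatenate([np.zeros(1234), sweep, np.zeros(500)])
delay = align_offset(rec, sweep)
```

Once the offset is known, segmentation into 5-second samples is a simple slicing operation starting at that offset.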
Table 1: Number of samples for each class by round, with 4 classes of hair type and 3 classes of hair condition
Due to the dataset's nearly balanced nature, we primarily report accuracy, with the F1 score as a secondary metric. One-versus-rest AUC [32] scores are included in the benchmark tables to facilitate future comparative studies across hair types.
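One-versus-rest AUC can be computed per class via the rank-sum formulation; a minimal numpy sketch (no tie correction, which is fine for continuous scores — libraries such as scikit-learn handle ties properly):

```python
import numpy as np

def auc_binary(y, s):
    """AUC via the Mann-Whitney rank-sum statistic (no tie correction)."""
    ranks = np.empty(len(s))
    ranks[np.argsort(s)] = np.arange(1, len(s) + 1)
    n_pos = int(np.sum(y))
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_ovr_macro(y_true, scores):
    """Macro-averaged one-versus-rest AUC; `scores` has shape (n, n_classes)."""
    return float(np.mean([auc_binary((y_true == c).astype(int), scores[:, c])
                          for c in range(scores.shape[1])]))

y = np.array([0, 0, 1, 1, 2, 2])
scores = np.array([[0.90, 0.05, 0.05], [0.80, 0.10, 0.10],
                   [0.10, 0.85, 0.05], [0.20, 0.70, 0.10],
                   [0.05, 0.15, 0.80], [0.10, 0.20, 0.70]])
macro = auc_ovr_macro(y, scores)
```

Each class is scored against the union of the others, so a per-class AUC isolates which hair types a model separates well.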
# 4.2. Performance evaluation
Experiments were conducted on a single NVIDIA A40 GPU. The fully supervised ResNet model, implemented in PyTorch and HuggingFace, was trained for 40 epochs. The fine-tuning of pre-trained Transformer-based models also used the HuggingFace Transformers framework [33], with a batch size of 16 for 20 epochs and early stopping with a tolerance of 5 steps based on the evaluation-set loss and accuracy. To standardise comparison with the ResNet model, we replaced the original Softmax activation and CrossEntropyLoss with LogSoftmax activation and mean-reduced negative log-likelihood loss (NLLLoss).
# 4.2.1. Task 1: Classification of hair types
We evaluate all proposed classification methods on the first recording session, which includes samples with dry hair. The data was split using round-robin cross-validation for this 4-class problem. Table 2 presents the benchmarking results, with Wav2Vec2-Conformer and ResNet-50 as the top performers, suggesting the importance of CNN layers in scattered-acoustic classification tasks.
We also report an ablation study comparing partial and complete fine-tuning strategies for Wav2Vec2-Conformer in Table 3. Updating the parameters of the CNN feature extractors improves performance across all metrics.
Figure 6 displays ROC curves and one-versus-rest AUC scores for each class across all methods, providing a comprehensive comparison. The Wav2Vec2-Conformer models consistently outperform other methods across all classes, achieving the highest AUC values, particularly in the MAMI and MINAYO classes, where complete fine-tuning reaches near-perfect performance. ResNet-50 and VGGish-XGB show relatively lower AUCs, especially in the B vs. Rest classification, suggesting that traditional image-based models struggle with this
Figure 6: Receiver operating characteristic curve one-versus-rest on Task 1.
Figure 7: Receiver operating characteristic curve one-versus-rest on Task 2.
Figure 7 panels (one-versus-rest ROC): Dry hair vs. Rest: ResNet-50 AUC = 1.00, VGG-XGB = 0.99, AST fine-tuning = 1.00, Wav2Vec2-Conformer partial = 0.97, complete = 1.00. Applied-shampoo hair vs. Rest: ResNet-50 = 0.93, VGG-XGB = 0.84, AST = 0.90, partial = 0.83, complete = 0.93. Applied-cream hair vs. Rest: ResNet-50 = 0.94, VGG-XGB = 0.89, AST = 0.92, partial = 0.94, complete = 0.94.
Table 2: Results for the classification of hair types (Task 1)
Table 3: Results of different Wav2Vec2-Conformer fine-tuning strategies on Task 1
task. AST Fine-tuning performs competitively but falls short of the Wav2Vec2-Conformer models, highlighting the benefits of fine-tuning on audio-specific architectures.
# 4.2.2. Task 2: Classification of different hair conditions
In this task, we evaluate the robustness and stability of our methods under various conditions, particularly dry hair versus wet hair treated with anonymous shampoo and cream products. Specifically, we assess the proposed methodologies for the classification of multiple hair conditions. To ensure train-test independence, we employ round-robin cross-validation, stratifying the first and second hair types (represented by mannequins A and B) into the train-dev set, while the third and fourth types (represented by mannequins MAMI and MINAYO) are allocated to the test set. Task 2 constitutes a 3-class classification problem.
Similar to Task 1, we present the results for Task 2 across the four proposed methods in Table 4 and the two Wav2Vec2-Conformer fine-tuning strategies in Table 5. Figure 7 illustrates the ROC curves and one-versus-rest AUC scores for each class across all five proposed methods. The Wav2Vec2-Conformer with complete fine-tuning emerged as the top performer, suggesting that the combination of convolutional layers and attention mechanisms is most effective for acoustic-based hair classification tasks. These results align with the findings of Task 1, reinforcing the crucial role of CNN layers: the two best-performing models, Wav2Vec2-Conformer and ResNet-50, both leverage CNNs. Traditional embedding-based classification models like VGGish-XGB show competitive performance in some cases but generally lag behind fine-tuned audio-specific models, reinforcing the effectiveness of Wav2Vec2-Conformer for this task.
Table 4: Results for the classification of hair conditions (Task 2)
Table 5: Results of different Wav2Vec2-Conformer fine-tuning strategies on Task 2 | This paper presents a novel non-invasive object classification approach using acoustic scattering, demonstrated through a case study on hair assessment. When an incident wave interacts with an object, it generates a scattered acoustic field encoding structural and material properties. By emitting acoustic stimuli and capturing the scattered signals from head-with-hair-sample objects, we classify hair type and moisture using AI-driven, deep-learning-based sound classification. We benchmark comprehensive methods, including (i) fully supervised deep learning, (ii) embedding-based classification, (iii) supervised foundation model fine-tuning, and (iv) self-supervised model fine-tuning. Our best strategy achieves nearly 90% classification accuracy by fine-tuning all parameters of a self-supervised model. These results highlight acoustic scattering as a privacy-preserving, non-contact alternative to visual classification, opening huge potential for applications in various industries. | [
"cs.SD",
"cs.CL",
"eess.AS"
] |
# 1. Introduction
The ability to extract meaningful patterns from visual observations and systematically predict future outcomes represents a cornerstone of cognitive intelligence, forming the foundation for what cognitive scientists term System 2 reasoning—deliberate, systematic thinking that enables complex planning and problem-solving (Kahneman, 2011; Evans & Stanovich, 2013; Bengio et al., 2019). In artificial intelligence, this capability translates to the fundamental challenge of developing world models that can perform symbolic abstraction and logical reasoning (Goyal et al., 2021; Sehgal et al., 2023; Tang et al., 2024; Baek et al., 2025), enabling agents to plan effectively over extended temporal horizons.
Recent advances in image tokenization (Van Den Oord et al., 2017; Esser et al., 2021; Ramesh et al., 2021; Razavi et al., 2019; Yu et al., 2021) and autoregressive modeling (Esser et al., 2021; Chang et al., 2022; Yu et al., 2023; Yan et al., 2023) have demonstrated remarkable progress in visual understanding and generation tasks. However, these approaches primarily focus on patch-level local feature tokenization, which, while effective for reconstruction and generation, exhibits significant limitations when applied to tasks requiring symbolic reasoning and logical planning capabilities. The granular nature of patch-based representations introduces computational overhead and, more critically, fails to capture the high-level semantic abstractions necessary for systematic inference and long-horizon planning.
Contemporary efforts to address these limitations have explored semantic-level tokenization approaches, such as Yu et al. (2024); Wu et al. (2024); Kim et al. (2025); Bachmann et al. (2025), which attempt to move beyond patch-level representations toward more meaningful 1D tokenization.
However, these methods remain constrained by their reliance on pixel-level reconstruction objectives, resulting in tokens that encode unnecessary visual details rather than the abstract semantic concepts crucial for symbolic reasoning. This fundamental mismatch between the granularity of representation and the requirements of symbolic planning tasks represents a significant barrier to developing truly intelligent visual reasoning systems.
The Joint-Embedding Predictive Architecture (JEPA) framework (LeCun, 2022; Assran et al., 2023; Bardes et al., 2023b; Sobal et al., 2022) offers a promising alternative by learning representations through latent-space prediction rather than pixel-level reconstruction. By predicting masked representations in latent space, Assran et al. (2023) demonstrates the potential for learning more semantically meaningful features. However, the continuous nature of its representations limits their applicability to autoregressive modeling paradigms, where discrete tokens are essential for effective sequence modeling and long-horizon prediction with reduced accumulated error.
To bridge this gap, we propose Discrete-JEPA, a novel extension of the JEPA framework that introduces semantic-level vector quantization to learn discrete semantic tokens capturing high-level abstractions, while preserving the framework's core advantage of latent-space predictive learning. Through a carefully designed unified predictive framework, Discrete-JEPA learns to encode global semantic information into discrete tokens while preserving fine-grained spatial details through complementary continuous representations.
Our contributions are threefold: (1) We introduce the Discrete-JEPA architecture, which extends the JEPA framework with semantic tokenization and novel complementary objectives (Semantic-to-Patch, Patch-to-Semantic, and Patch-to-Patch prediction) to learn robust discrete semantic tokens for enhanced representation learning. (2) We demonstrate that Discrete-JEPA significantly outperforms existing baselines across challenging visual symbolic prediction tasks, validating the effectiveness of our semantic tokenization approach. (3) We provide compelling visual evidence of systematic patterns that emerge within the learned semantic token space, offering insights into the model’s representation capabilities and potential for more complex reasoning tasks.
# 2. Related Works
Self-supervised Visual Representation Learning. Selfsupervised learning has evolved through contrastive learning (Chen et al., 2020; He et al., 2020; Caron et al., 2020), variance-based regularization (Bardes et al., 2021), bootstrap methods (Grill et al., 2020), self-distillation (Caron et al., 2021; Oquab et al., 2024), and masked reconstruction approaches (He et al., 2022; Bao et al., 2021; Zhou et al., 2022). Recent work has also explored unified multimodal frameworks (Baevski et al., 2022). While these methods achieve strong performance on recognition tasks, they predominantly learn patch-level embeddings optimized for local features rather than the global semantic abstractions required for symbolic reasoning. Our approach addresses this limitation by learning semantic-level discrete tokens that capture high-level conceptual information.
Discrete Image Tokenization. Discrete visual representations emerged with VQ-VAE (Van Den Oord et al., 2017) and subsequent vector quantization methods (Esser et al., 2021; Yu et al., 2021), enabling token-based autoregressive generation (Ramesh et al., 2021; Chang et al., 2022). Building upon these foundations, researchers have developed alternative quantization schemes (Lee et al., 2022; Van Balen & Levy, 2019; Takida et al., 2022; Mentzer et al., 2023) and extended tokenization to video domains (Yu et al., 2023). More recently, semantic-level approaches have explored 1D tokenization (Yu et al., 2024; Chen et al., 2025b; Wang et al., 2025; Bachmann et al., 2025). However, reliance on pixel-level reconstruction objectives biases representations toward fine-grained details rather than semantic concepts essential for symbolic reasoning. We overcome this limitation through latent predictive learning that avoids reconstruction bias.
Joint-Embedding Predictive Architectures. JEPA (LeCun, 2022) introduced latent-space prediction as an alternative to pixel reconstruction. I-JEPA (Assran et al., 2023) demonstrated superior sample efficiency through masked representation prediction, inspiring extensions to audio (Fei et al., 2023), video (Bardes et al., 2023a), multi-modal motion-content learning (Bardes et al., 2023b), and diffusion applications (Chen et al., 2025a). Despite these advances, continuous representations suffer from accumulated errors in sequential prediction and lack discrete structure necessary for robust symbolic reasoning. Our work extends JEPA with discrete semantic tokenization and complementary predictive objectives to enable stable long-horizon prediction.
# 3. Preliminaries
Joint-Embedding Predictive Architecture. The Joint-Embedding Predictive Architecture (JEPA) (LeCun, 2022; Assran et al., 2023) learns representations by predicting masked portions of the input in representation space rather than pixel space. Specifically, Assran et al. (2023) employs three key components: a context encoder $f _ { \theta } ^ { c }$ , a target encoder $f _ { \bar { \theta } } ^ { t }$ , and a predictor $g _ { \phi }$ .
Given an input image $\boldsymbol { x } \in \mathbb { R } ^ { H \times W \times C }$ , the image is divided into patches and processed as follows:
Figure 2. Discrete-JEPA Architecture Overview. The context encoder $f _ { \theta } ^ { c }$ takes masked inputs with learnable tokens $z _ { s } ^ { 0 }$ and generates semantic $( z _ { s } )$ and patch $( z _ { p } )$ representations, while the target encoder $f _ { \bar { \theta } } ^ { t }$ processes the complete image to produce target representations $\bar { z _ { s } }$ and $\bar { z _ { p } }$ . Vector quantization (VQ) is applied only to semantic representations to create discrete tokens $z _ { s } ^ { \mathrm { d i s c r e t e } }$ . Using these discrete semantic tokens and continuous patch tokens, the model performs three complementary prediction tasks (S2P, P2S, P2P) and compares predictions against the target encoder outputs, whose parameters are updated via EMA.
1. Context Processing: Visible patches (context block) $x _ { \mathcal { V } }$ are encoded by the context encoder to obtain context representations $z _ { c } = f _ { \theta } ^ { c } ( x _ { \mathcal { V } } )$ .
2. Target Processing: The entire image is processed by the target encoder to obtain actual patch representations at target locations $z _ { t } = f _ { \bar { \theta } } ^ { t } ( x _ { \mathcal { M } } )$ .
3. Prediction: The predictor takes context representations and target position indices to predict what representations should exist at those target locations: $\hat { z } _ { t } ~ = ~ g _ { \phi } ( z _ { c } , \mathcal { M } )$ , where $\mathcal { M }$ contains the positional indices of target patches.
The training objective minimizes the L2 distance between predicted and target representations:
$$
\mathcal { L } _ { \mathrm { I \text{-} J E P A } } = \sum _ { i \in \mathcal { M } } \left\| f _ { \bar { \theta } } ^ { t } ( x _ { i } ) - g _ { \phi } \left( f _ { \theta } ^ { c } ( x _ { \mathcal { V } } ) , i \right) \right\| _ { 2 } ^ { 2 }
$$
where $i$ represents the positional index of target patches, and the predictor $g _ { \phi }$ takes both the context representations and the target position index $i$ to predict what should be at that location. The target encoder $f _ { \bar { \theta } } ^ { t }$ processes the actual patches to provide the ground truth representations for comparison. Crucially, the target encoder parameters are updated via exponential moving average (EMA) of the context encoder, as shown in (He et al., 2020; Caron et al., 2021).
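A toy numpy rendering of this objective, with random arrays standing in for the target-encoder and predictor outputs (this is an illustration of the loss shape, not the training code):

```python
import numpy as np

def ijepa_loss(z_target, z_pred, mask_idx):
    """Sum of squared L2 distances at the masked (target) patch positions.
    z_target, z_pred: (num_patches, dim) stand-ins for the outputs of the
    target encoder and of the predictor, respectively."""
    diff = z_target[mask_idx] - z_pred[mask_idx]
    return float((diff ** 2).sum())

rng = np.random.default_rng(0)
z_t = rng.normal(size=(16, 8))            # 16 patches, 8-dim representations
perfect = ijepa_loss(z_t, z_t, [2, 5, 11])
noisy = ijepa_loss(z_t, z_t + 0.1, [2, 5, 11])
```

Only the masked positions contribute to the loss; the EMA update of the target encoder happens outside this objective.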
# 4. Discrete JEPA Tokenization
We propose Discrete-JEPA, which extends the Joint-Embedding Predictive Architecture to learn discrete semantic tokens for symbolic reasoning and long-horizon planning. Our approach discretizes only semantic representations while maintaining continuous patch representations as intermediate features during training.
The method comprises three key components: an extended JEPA framework (Section 4.1), a semantic and patch tokenization strategy (Section 4.2), and complementary predictive objectives (Section 4.3).
# 4.1. Architecture
Our approach builds upon the JEPA framework (Assran et al., 2023), which employs three key components: a context encoder $f _ { \theta } ^ { c }$ , a target encoder $f _ { \bar { \theta } } ^ { t }$ , and predictors $g _ { \phi }$ . We extend this architecture to support semantic-level discrete tokenization while preserving the original spatial prediction capabilities.
Given an input image $\boldsymbol { x } \in \mathbb { R } ^ { H \times W \times C }$ , our Discrete JEPA processes the image with the following components:
Context Encoder $f _ { \theta } ^ { c }$ : Processes visible image patches $x _ { \mathcal { V } }$ , sampled from patched inputs $\{ x _ { i } \} _ { i = 0 } ^ { N _ { p } }$ according to masking strategies, to obtain semantic and patch-level representations $z _ { s } , z _ { p }$ :
$$
z _ { s } , z _ { p } = f _ { \theta } ^ { c } ( z _ { s } ^ { 0 } , x _ { \mathcal { V } } )
$$
where $z _ { s } ^ { 0 }$ consists of $L$ learnable tokens.
Target Encoder $f _ { \bar { \theta } } ^ { t }$ : Processes the entire image $x$ along with learnable tokens $z _ { s } ^ { 0 }$ to generate target semantic and patch representations $\bar { z } _ { s } , \bar { z } _ { p }$ :
$$
\bar { z } _ { s } , \bar { z } _ { p } = f _ { \bar { \theta } } ^ { t } ( z _ { s } ^ { 0 } , x )
$$
Vector Quantization: Applies vector quantization to semantic representations from both encoders to obtain discrete semantic tokens using a shared semantic codebook $\mathcal { C } _ { s } \in \mathbb { R } ^ { K _ { s } \times D _ { s } }$ :
$$
\begin{array} { r } { z _ { s } ^ { \mathrm { d i s c r e t e } } = \mathrm { V Q } ( z _ { s } ) , \quad \bar { z } _ { s } ^ { \mathrm { d i s c r e t e } } = \mathrm { V Q } ( \bar { z _ { s } } ) } \end{array}
$$
Predictors $g _ { \phi }$ : Process semantic and patch tokens $z _ { s } ^ { \mathrm { d i s c r e t e } } , z _ { p }$ with target masks $\mathcal { M }$ to generate predictions for their respective objectives:
$$
\hat { z } _ { p } = g _ { \phi } ^ { \mathrm { S 2 P } } ( z _ { s } ^ { \mathrm { d i s c r e t e } } , \mathcal { M } ) , \quad \hat { z } _ { s } = g _ { \phi } ^ { \mathrm { P 2 S } } ( z _ { p } ) , \quad \hat { z } _ { p } = g _ { \phi } ^ { \mathrm { P 2 P } } ( z _ { p } , \mathcal { M } )
$$
# 4.2. Semantic and Patch Tokenization
Our approach employs two distinct types of tokens, each serving specific functional roles within the learning framework:
Semantic Tokens (Discrete). The semantic representation $\bar { z } _ { s }$ captures global image context and is discretized through vector quantization to produce discrete semantic tokens. Given the continuous representation $\bar { z } _ { s } \in \mathbb { R } ^ { D _ { s } }$ and a learnable codebook $\mathcal { C } _ { s } = \{ c _ { 1 } , c _ { 2 } , \ldots , c _ { K _ { s } } \} \subset \mathbb { R } ^ { D _ { s } }$ with $K _ { s }$ prototypes, we find the nearest entry:
$$
\begin{array} { r } { \bar { k } ^ { * } = \arg \operatorname* { m i n } _ { k \in 1 , \ldots , K _ { s } } | | \bar { z } _ { s } - c _ { k } | | _ { 2 } } \end{array}
$$
The discrete semantic token is then:
$$
\bar { z } _ { s } ^ { \mathrm { d i s c r e t e } } = \mathcal { C } _ { s } ( \bar { k } ^ { * } ) = c _ { \bar { k } ^ { * } } .
$$
These discrete tokens serve as the primary output for downstream symbolic reasoning and long-horizon planning tasks.
For training, we follow standard vector quantization procedures with commitment loss and exponential moving average updates, following (Van Den Oord et al., 2017; Esser et al., 2021).
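The nearest-entry lookup defined by the two equations above amounts to an argmin over codebook distances. A minimal NumPy sketch (the codebook size and dimension here are illustrative, not the paper's settings):

```python
import numpy as np

def quantize(z_s, codebook):
    """Nearest-entry lookup: k* = argmin_k ||z_s - c_k||_2, token = c_{k*}."""
    dists = np.linalg.norm(codebook - z_s, axis=1)  # distance to each prototype
    k_star = int(np.argmin(dists))
    return k_star, codebook[k_star]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))              # K_s = 8 prototypes, D_s = 4
z_s = codebook[3] + 0.01 * rng.normal(size=4)   # a vector lying near prototype 3
k_star, token = quantize(z_s, codebook)
```

Because the lookup is non-differentiable, training in practice relies on the straight-through estimator together with the commitment loss and EMA codebook updates mentioned above.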
Patch Tokens (Continuous). We maintain continuous patch tokens $\bar { z } _ { p }$ that capture fine-grained spatial details from the target encoder $f _ { \bar { \theta } } ^ { t } ( z _ { s } ^ { 0 } , x )$ . Unlike discrete semantic tokens, patch tokens remain continuous and serve exclusively as intermediate representations during training. These continuous tokens facilitate effective information flow between semantic and spatial levels through our unified predictive framework, but are not used in the final tokenized output.
Semantic-Patch Interaction. The interaction between discrete semantic tokens $( \bar { z } _ { s } ^ { \mathrm { d i s c r e t e } } )$ and continuous patch tokens $( \bar { z } _ { p } )$ from the encoders forms the foundation for our unified predictive training framework. Discrete semantic tokens provide global context that guides spatial prediction, while continuous patch tokens contribute local details that enhance semantic understanding. This bidirectional relationship enables effective learning between global and local representations, setting the stage for the complementary predictive objectives detailed in the following section.
# 4.3. Complementary Predictive Objectives
We introduce three predictive objectives that operate between discrete semantic tokens and continuous patch tokens, each serving a distinct role in learning meaningful discrete semantic tokens:
Semantic-to-Patch (S2P) Prediction. The S2P objective encourages discrete semantic tokens to encode sufficient global context by predicting continuous patch tokens at target locations:
$$
\mathcal { L } _ { \mathtt { S } 2 \mathtt { P } } = \sum _ { i \in \mathcal { M } } | | \bar { z _ { p } } ^ { ( i ) } - g _ { \phi } ^ { \mathtt { S } 2 \mathtt { P } } ( z _ { s } ^ { \mathrm { d i s c r e t e } } , i ) | | _ { 2 } ^ { 2 } .
$$
where $z _ { s } ^ { \mathrm { d i s c r e t e } }$ is the discrete semantic token and $i$ encodes the spatial position. This objective enables the model to learn how global semantic information relates to local spatial details.
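Concretely, the S2P loss is a sum of squared errors restricted to the masked target positions. A shape-level NumPy sketch (dimensions are hypothetical):

```python
import numpy as np

def s2p_loss(target_patches, predictions, mask_idx):
    """Sum over i in M of ||z̄_p^(i) - prediction^(i)||_2^2.

    target_patches: (N_p, D) target patch tokens from the target encoder
    predictions:    (N_p, D) predictor outputs for every position
    mask_idx:       list of masked target positions i ∈ M
    """
    diff = target_patches[mask_idx] - predictions[mask_idx]
    return float((diff ** 2).sum())

targets = np.zeros((6, 3))
preds = np.zeros((6, 3))
preds[2] = 1.0                       # unit error in each of 3 dims at position 2
loss = s2p_loss(targets, preds, [1, 2])   # → 3.0
```

The P2S and P2P losses below follow the same pattern, differing only in which tokens play the roles of input and target.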
Patch-to-Semantic (P2S) Prediction. The P2S objective learns to extract semantic abstractions from continuous patch tokens:
$$
\begin{array} { r } { \mathcal { L } _ { \mathtt { P } 2 \mathtt { S } } = | | \bar { z } _ { s } - g _ { \phi } ^ { \mathtt { P } 2 \mathtt { S } } ( z _ { p } ) | | _ { 2 } ^ { 2 } . } \end{array}
$$
This objective encourages continuous patch tokens to contribute meaningfully to global semantic understanding, ensuring consistency between continuous and discrete token representations.
Figure 3. Long-horizon prediction performance on Dancing-Sprites-Pattern dataset. Performance comparison across color (left), shape (center), and position (right) prediction tasks over 200 rollout steps. Discrete-JEPA maintains stable performance while I-JEPA variants degrade over time due to accumulated errors in continuous space. Discrete-JEPA achieves perfect color prediction stability, highlighting the benefits of discrete semantic tokenization for symbolic reasoning tasks.
Patch-to-Patch (P2P) Prediction. The P2P objective maintains spatial coherence by predicting continuous patch tokens from other continuous patch tokens, following the original JEPA framework:
$$
\mathcal { L } _ { \mathtt { P 2 P } } = \sum _ { i \in \mathcal { M } } | | \bar { z _ { p } } ^ { ( i ) } - g _ { \phi } ^ { \mathtt { P 2 P } } ( z _ { p } , i ) | | _ { 2 } ^ { 2 } .
$$
This objective ensures that our extension preserves the spatial prediction capabilities of the original JEPA framework.
Unified Training Objective. The complete training objective combines all predictive losses with the vector quantization commitment loss:
$$
\mathcal { L } _ { \mathrm { t o t a l } } = \lambda _ { 1 } \mathcal { L } _ { \mathrm { S 2 P } } + \lambda _ { 2 } \mathcal { L } _ { \mathrm { P 2 S } } + \lambda _ { 3 } \mathcal { L } _ { \mathrm { P 2 P } } + \mathcal { L } _ { \mathrm { V Q } } .
$$
where $\mathcal { L } _ { \mathrm { V Q } }$ includes the standard VQ commitment loss for the discrete semantic tokens. This unified predictive framework enables the learning of discrete semantic tokens that effectively capture global context, while continuous patch tokens provide detailed local information for complex reasoning tasks.
# 5. Experiments
Datasets & Evaluation Protocol. We evaluate Discrete-JEPA on two challenging visual sequence prediction tasks designed to assess symbolic reasoning and long-horizon planning capabilities. (1) Dancing-Sprites-Pattern consists of image sequences featuring a single object that follows various color transition patterns (Linear, Repeat-2, Zigzag-3, Repeat-3). Given 4 conditioning frames, we evaluate long-horizon prediction performance over approximately 200 time steps, measuring accuracy on color, shape, and position property classification tasks. (2) Blinking-Ball features sequences with four balls exhibiting interacting position and color patterns, requiring simultaneous tracking of spatial and chromatic dependencies. We assess prediction capabilities over approximately 1,000 rollout steps, measuring performance through pixel-wise reconstruction accuracy.
Both datasets provide controlled environments for evaluating symbolic reasoning capabilities while maintaining sufficient complexity to effectively distinguish between different tokenization approaches. Detailed dataset specifications and evaluation protocols are provided in Appendix A.
Baselines. We compare Discrete-JEPA against I-JEPA (Assran et al., 2023) as our primary baseline. I-JEPA represents the most direct comparison as it shares the same underlying architectural framework but operates with continuous representations rather than discrete tokens. For fair comparison, we adapt I-JEPA to the sequential prediction setting by training autoregressive world models on the continuous representations learned by I-JEPA. This baseline allows us to isolate the specific contribution of discrete semantic tokenization while controlling for architectural differences.
Implementation Details. Our implementation extends the I-JEPA framework with semantic tokenization and complementary prediction objectives. We train autoregressive world models using standard Vision Transformer architecture (Dosovitskiy et al., 2020) for long-horizon sequence prediction tasks. Complete implementation details, hyperparameters, and training configurations are provided in Appendix B.
# 5.1. Main Results
# 5.1.1. LONG-HORIZON SYMBOLIC PREDICTION TASKS
Discrete Tokenization Mitigates Accumulated Prediction Errors. A fundamental advantage of Discrete-JEPA emerges in its ability to prevent error accumulation over extended prediction horizons. By operating in a constrained discrete index space rather than continuous representations, Discrete-JEPA eliminates the compounding errors that plague continuous prediction approaches. This is demonstrated in Dancing-Sprites-Pattern color prediction, where Discrete-JEPA maintains perfect accuracy (1.0) across 200 timesteps while I-JEPA variants show substantial degradation (Figure 3), and in Blinking-Ball, where Discrete-JEPA stabilizes while I-JEPA exhibits continuous decline (Figure 4, Table 1).
Figure 4. Long-horizon prediction on Blinking-Ball task. Discrete-JEPA maintains stable performance while I-JEPA degrades due to accumulated prediction errors, illustrating the benefits of discrete semantic tokenization for long-horizon sequence modeling.
Figure 5. Visualization of Semantic Planning on Blinking Ball. Long-horizon predictions over 1,000 timesteps. I-JEPA breaks pattern consistency around $t = 600$ despite initial accuracy, while Discrete-JEPA maintains systematic pattern integrity throughout, demonstrating deliberate planning in semantic token space. Additional visualization examples are provided in Appendix B.
Semantic Abstraction Enables Robust Pattern Recognition. Discrete-JEPA’s semantic tokens, which integrate information across spatial patches, demonstrate superior capability for tasks requiring holistic understanding. This advantage is particularly evident in shape prediction tasks within Dancing-Sprites-Pattern, where semantic abstraction enables robust recognition of object-level properties. The approach effectively balances the need for high-level abstraction with sufficient detail retention for symbolic pattern modeling.
Trade-off Between Abstraction and Spatial Precision. While discrete semantic tokenization provides substantial benefits for symbolic reasoning, it involves a deliberate trade-off with fine-grained spatial information. This trade-off manifests in position prediction tasks, where I-JEPA (Concat) initially outperforms Discrete-JEPA due to explicit patch-level spatial encoding. However, the superior long-horizon stability of discrete approaches ultimately proves more valuable for extended sequence modeling. The multi-object complexity in Blinking-Ball further illustrates this trade-off, where Discrete-JEPA shows initial performance adjustment before achieving stable prediction, reflecting the increased demands of detailed positional reasoning in complex scenes.
# 5.1.2. VISUALIZATION OF PLANNING ON SEMANTIC SPACE
Systematic Pattern Maintenance vs. Reactive Prediction. Beyond quantitative performance metrics, Discrete-JEPA exhibits qualitatively distinct prediction behavior that suggests systematic planning capabilities rather than myopic next-step prediction. Figure 5 reveals this through extended sequence visualization on the Blinking-Ball task, where Discrete-JEPA maintains coherent pattern integrity throughout 1,000 timesteps while I-JEPA breaks systematic consistency around $t = 600$ despite initially accurate predictions. This divergence indicates that Discrete-JEPA operates through deliberate pattern-based reasoning rather than reactive prediction.
Evidence of Deliberate Reasoning in Semantic Token Space. The preserved pattern consistency in Discrete-JEPA’s predictions provides compelling evidence of symbolic reasoning within the learned semantic token space. While I-JEPA’s early accuracy suggests local prediction competence, its eventual pattern breakdown reveals the limitations of continuous representations for maintaining global symbolic consistency. In contrast, Discrete-JEPA’s sustained adherence to underlying symbolic rules demonstrates that semantic tokenization enables the model to internalize and execute systematic reasoning processes, moving beyond immediate sensory-motor responses toward planned, rule-based behavior characteristic of deliberate cognitive processes.
# 6. Limitations and Future Work
Our approach presents several key limitations that open avenues for future research. (1) Abstraction-Precision Trade-off: Discrete semantic tokens excel at capturing high-level patterns but sacrifice fine-grained spatial information, evident in position prediction tasks where I-JEPA initially outperforms our method. (2) Limited Scope: Our evaluation focuses on controlled synthetic datasets that, while enabling precise assessment of symbolic reasoning, may not capture real-world complexity. (3) Baseline Coverage: Comparisons primarily involve I-JEPA, limiting our understanding relative to other contemporary tokenization approaches such as VQGAN or TiTok.
Table 1. Blinking-Ball long-horizon prediction metrics. I-JEPA shows better initial performance but continuous degradation, while Discrete-JEPA stabilizes after step 50 with superior long-horizon results ($6\times$ better LPIPS, $5\times$ better MSE at 1,000 steps).
Future work should address these limitations through several promising directions. (1) Real-world Applications: Evaluating Discrete-JEPA on robotics planning and complex video understanding tasks would validate its practical utility beyond controlled settings. (2) Hierarchical Representation: Developing multi-level semantic abstraction could address the abstraction-precision trade-off by maintaining tokens at different granularities.
Despite these limitations, our work establishes a promising foundation for advancing discrete semantic tokenization in latent predictive coding approaches, with demonstrated benefits for long-horizon prediction and compelling evidence of systematic reasoning capabilities.
# 1 Introduction to Unsupervised Node Representation Learning
Unsupervised graph representation learning (UGRL) is an important area in machine learning that focuses on transforming complex, high-dimensional, and often sparse graph data into compact, dense vector representations [23]. The main goal is to capture and summarize information from graphs in a way that can be broadly useful for different downstream tasks—without relying on labeled data during training [23]. In essence, UGRL aims to map core elements of a graph—like nodes, edges, or even entire substructures—into a lower-dimensional space, while still preserving the key relationships and patterns that exist in the original graph [17]. For node embeddings specifically, the objective is to learn a mapping function $f : v _ { i } \to \mathbb { R } ^ { d }$ that projects each node $v _ { i }$ into a low-dimensional vector of size $d$ , where $d \ll | V |$ , the total number of nodes. The learned embeddings should reflect the similarity relationships between nodes as they appear in the original graph structure [4, 17]. A defining characteristic of unsupervised methods is their ability to generate general-purpose embeddings, meaning they are not tailored or optimized for any single downstream task. This contrasts with semi-supervised approaches, which are typically trained with specific applications in mind and therefore produce task-specific embeddings [4, 28].
The utility of low-dimensional node embeddings is substantial, as they provide powerful feature representations that help bridge the gap between traditional machine learning algorithms and the complex, interconnected nature of graph-structured data [28]. These embeddings play a crucial role in a wide range of predictive tasks, including node classification, link prediction (both missing and future connections), community detection, and anomaly detection [14, 17, 22, 26]. One of the key advantages of graph embeddings is their ability to reduce the complexity of graph mining tasks by transforming them into more manageable problems in a continuous vector space. This transformation enables more efficient application of artificial intelligence and machine learning techniques on graph data [22]. In recent years, the field of unsupervised graph representation learning has seen rapid growth and innovation, producing strong results across a wide range of graph analysis tasks [26]. Current approaches in this area can be broadly grouped into several main categories: random walk-based methods, matrix factorization techniques, deep learning-based frameworks (such as autoencoders and Graph Neural Networks), and, more recently, contrastive learning-based methods [17, 22, 28]. Each of these paradigms uses distinct techniques and objective functions tailored to their specific learning goals.
A notable theme in the evolution of unsupervised graph representation learning is the inherent tension between the theoretical goal of producing truly downstream task-agnostic embeddings and the practical realities of model design [23]. While the central objective is to generate representations that are independent of any specific task, many existing methods—either implicitly or explicitly—introduce inductive biases that enhance performance for particular types of applications. For instance, the PairE method was explicitly developed to address limitations in earlier techniques that performed well on node-centric tasks but struggled with edge classification. This design reflects a deliberate effort to broaden the applicability of embeddings across different task types [23]. It illustrates a common trade-off in practice: rather than aiming for an idealized, fully general-purpose embedding, many approaches incorporate cost functions that balance task-independence with improved performance across a spectrum of commonly encountered downstream tasks. For researchers and practitioners, this highlights the importance of selecting unsupervised methods with an awareness of both the graph characteristics and the nature of the intended applications. In reality, even methods described as "task-agnostic" can vary significantly in effectiveness across different tasks, depending on the structural or feature-level signals that their objective functions prioritize capturing.
# 2 Categories of Unsupervised Node Embedding Methods
Unsupervised node embedding methods can be broadly classified into several categories, each employing distinct mechanisms and associated cost functions to learn low-dimensional representations of nodes in a graph.
# 2.1 Random Walk-based Methods
These approaches draw inspiration from the successes observed in natural language processing (NLP) [23]. They operate by simulating random walks across the graph, which generate sequences of nodes similar to "sentences" in text [28]. Subsequently, models like the Skip-Gram model, originally developed for word embeddings, are applied to these sequences to learn node representations [4]. The fundamental objective is to maximize the likelihood of observing neighboring nodes that appear within these generated random walks, given the embedding of the central node [4, 12]. Prominent examples within this category include DeepWalk, Node2vec, and LINE [12, 30, 34]. These methods effectively transform the complex, non-Euclidean structure of a graph into a linear sequence format, enabling the application of well-established and computationally efficient sequence modeling techniques [19, 31, 48]. This transformation is a pivotal conceptual leap, simplifying the problem and making it amenable to existing machine learning tools. The effectiveness of such graph representation learning methods often stems not only from novel algorithms but also from innovative ways of re-framing the input data. However, this approach also introduces the challenge of understanding how well these "linearized" sequences truly capture the full, multi-relational complexity of the original graph, as certain structural nuances might be lost in the linear projection.
# 2.2 Matrix Factorization-based Methods
Matrix factorization methods learn low-dimensional node embeddings by decomposing matrices that capture node-to-node similarity or proximity within a graph [31]. These input matrices may include the graph Laplacian, incidence matrices, the adjacency matrix $A$ , or its polynomial expansions [31]. The core idea is to factorize one of these matrices to extract meaningful vector representations for each node. Representative algorithms in this category include GraRep, HOPE, NetMF, and M-NMF [17].
Although random walk-based and matrix factorization methods are often presented as distinct classes, deeper theoretical analysis reveals a strong connection between them. Research has shown that several random walk-based approaches—such as DeepWalk and LINE—implicitly perform matrix factorizations [31, 50]. This insight uncovers a fundamental mathematical equivalence between these two paradigms. Despite their operational mechanisms differing—one relying on stochastic processes and the other on direct algebraic decomposition—they converge on optimizing similar underlying objectives related to graph proximity. This theoretical unification simplifies the conceptual landscape of unsupervised graph embedding, providing a more cohesive understanding of the diverse methods. It also opens avenues for future research, potentially leading to novel hybrid approaches that strategically combine the strengths of both explicit matrix factorization (e.g., direct control over proximity measures) and random walk-based methods (e.g., scalability through sampling) [50].
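As a minimal illustration of the factorization idea (a plain truncated SVD of the adjacency matrix $A$, not any of the specific methods named above), nodes with similar connectivity end up with similar embedding rows:

```python
import numpy as np

def svd_embeddings(A, d):
    """Rank-d factorization A ≈ U_d Σ_d V_d^T; rows of U_d √Σ_d serve as embeddings."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :d] * np.sqrt(S[:d])

# Two disconnected triangles: nodes in the same triangle share connectivity,
# so their rank-2 embeddings coincide, while cross-triangle rows differ.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
Z = svd_embeddings(A, 2)
```

Methods like GraRep or NetMF replace the raw adjacency with higher-order or pointwise-mutual-information matrices, but the decomposition step is analogous.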
# 2.3 Deep Neural Network-based Methods (Autoencoders, GNNs)
This category harnesses the power of deep learning architectures, such as autoencoders [21], Siamese graph networks [1, 24], or Graph Convolutional Networks (GCNs), to learn node representations directly from the graph’s inherent link structure in an unsupervised fashion [1]. Graph Neural Networks (GNNs) stand out as a particularly effective subcategory, where node embeddings are iteratively refined by aggregating information from a node’s immediate neighbors and, through multiple layers, from multi-hop neighbors. Notable examples include SDNE, which employs a non-GNN autoencoder, as well as Graph Autoencoders (GAE) and Variational Graph Autoencoders (VGAE) [17, 47].
The evolution from shallow to deep architectures for graph representation learning marks a significant advancement in capturing non-linearity. Early unsupervised methods, including some matrix factorization techniques and simpler deep learning models, often operated under linear assumptions or possessed limited model capacities [23]. The advent and widespread adoption of deep neural networks, particularly GNNs, allowed for the capture of complex, non-linear structural information inherent in graph data, which shallow models struggled to represent [3]. This transition mirrors a broader trend in machine learning, where deep learning has proven superior in extracting hierarchical and abstract features from complex data. The increased model capacity provided by deep learning methods enables the design of more intricate and expressive cost functions that can capture a wider array of graph properties. However, this advancement also introduces new challenges related to increased computational demands, the complexity of optimizing these models, and a reduction in the interpretability of the learned representations.
# 2.4 Contrastive Learning Methods
Contrastive learning has emerged as a dominant and highly effective paradigm within self-supervised learning (SSL) for graphs [16, 41]. Its central principle is to learn discriminative representations by contrasting data samples based on their semantic similarity [41, 53]. Specifically, the goal is to maximize agreement between augmented views of the same graph instance—termed positive pairs—while minimizing agreement with views derived from different instances, known as negative pairs [16, 53]. Common implementations include the InfoNCE loss, as well as more recent methods that incorporate optimal transport distances to quantify similarity more effectively [41, 46]. This contrastive framework generalizes traditional proximity-based objectives found in earlier unsupervised graph embedding methods, such as DeepWalk and LINE, which implicitly embed similar nodes—based on co-occurrence in random walks or direct edge connections—closely in the latent space. Contrastive learning formalizes this idea by explicitly defining positive and negative pairs, often using graph augmentations to capture deeper semantic relationships beyond raw structural proximity [41, 53]. The corresponding loss functions are designed to bring positive pairs closer in representation space while pushing negative pairs apart, offering a more flexible and robust way to enforce similarity constraints. This paradigm shift enables the model to learn more generalizable representations by leveraging a wider spectrum of self-supervised signals. Rather than relying on fixed, predefined notions of similarity, contrastive learning adapts to the data, allowing the model to infer what constitutes meaningful similarity in a task-specific and data-driven manner.
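As a toy illustration of the contrastive principle (not any specific graph method), an InfoNCE-style loss for one anchor with one positive view and a set of negative views can be written as:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """-log( exp(sim(a,p)/τ) / [exp(sim(a,p)/τ) + Σ_n exp(sim(a,n)/τ)] )."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -float(np.log(probs[0]))            # positive pair sits at index 0

a = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])                      # augmented view of the same node
negs = [np.array([-1.0, 0.0]), np.array([0.0, 1.0])]
aligned = info_nce(a, pos, negs)                # low loss: views agree
mismatched = info_nce(a, negs[0], [pos, negs[1]])  # high loss: wrong "positive"
```

Minimizing this quantity pulls the positive pair together and pushes the negatives apart, which is exactly the agreement/disagreement trade-off described above.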
# 3 Detailed Analysis of Cost Functions
In this section, we examine how the efficacy of unsupervised node representation learning hinges significantly on the design and optimization of its cost functions. These functions dictate how the model learns to capture and preserve the intricate structural and semantic properties of graphs
in a low-dimensional embedding space. This section also delves into the specific cost functions employed by various prominent unsupervised node embedding methods.
# 3.1 Random Walk-based Methods: Maximizing Contextual Likelihood
The foundational principle of random walk-based methods is to learn node representations such that nodes frequently co-occurring within a defined "context"—typically generated by random walks on the graph—exhibit similar embeddings in the low-dimensional space. This objective is mathematically framed as maximizing the likelihood of observing a node’s network neighborhood given its feature representation.
3.1.1 DeepWalk: Skip-Gram Objective with Negative Sampling. DeepWalk initiates the embedding process by generating multiple fixed-length random walks starting from each node in the graph [12, 30]. It then adapts the Skip-Gram model, a core component of the Word2vec framework, to learn low-dimensional node embeddings from these generated sequences. The learning objective is to maximize the likelihood of observing nodes within a fixed-size window surrounding a target node in each random walk, conditioned on the embedding of the target node [12, 30]. The mathematical formulation for a target node $u$ and its observed neighborhood $N _ { S } ( u )$ is expressed as maximizing the following log-probability [12]:
$$
L = \sum _ { u \in V } \sum _ { n _ { i } \in N _ { S } ( u ) } \left[ \log \sigma ( \mathbf { h } _ { n _ { i } } ^ { \top } \mathbf { h } _ { u } ) + k \cdot \mathbb { E } _ { v _ { j } \sim P _ { n } ( v ) } [ \log \sigma ( - \mathbf { h } _ { v _ { j } } ^ { \top } \mathbf { h } _ { u } ) ] \right]
$$
Here, $\mathbf { h } _ { u }$ and $\mathbf { h } _ { n _ { i } }$ denote the learned embeddings of node $u$ and its neighbor $n _ { i }$ , respectively. The sigmoid function, $\sigma ( x ) = 1 / ( 1 + \exp ( - x ) )$ , maps dot products to probabilities. The parameter $k$ signifies the number of negative samples drawn for each positive pair. $P _ { n } ( \boldsymbol { v } )$ is the noise distribution utilized for negative sampling, which is empirically often set to $P _ { n } ( v ) \propto d _ { v } ^ { 3 / 4 }$ , where $d _ { v }$ is the degree of node $\boldsymbol { v }$ [12, 31, 44]. The first term in this objective function encourages a high similarity (large dot product) between the target node and its positive (contextual) neighbors. Conversely, the second term penalizes high similarity (minimizes dot product) between the target node and randomly sampled negative (non-contextual) nodes.
This loss function is critical because it effectively transforms the complex problem of learning graph proximity into a series of computationally efficient binary classification problems, where the model distinguishes between true context nodes and randomly sampled noise nodes. This formulation, particularly with the integration of negative sampling, is crucial for scalability, as directly computing the softmax over all nodes in large graphs would be computationally prohibitive [11, 12, 49].
The pervasive success of negative sampling underscores a fundamental engineering principle in large-scale machine learning: it often involves sacrificing a theoretically exact objective for a computationally tractable and empirically effective approximation. The canonical Skip-Gram objective function includes a normalization term in its softmax denominator that requires summing over all nodes in the vocabulary (or graph), which becomes computationally infeasible for very large networks [12, 49]. Negative sampling directly addresses this bottleneck by replacing the full summation with a small, fixed number of randomly sampled "negative" examples [11, 49]. This approximation significantly reduces the computational cost, making the optimization feasible for large graphs [12]. Its widespread adoption across various representation learning models, beyond just DeepWalk, highlights its fundamental importance as an algorithmic innovation for achieving scalability in graph embedding.
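A small NumPy sketch of the per-pair negative-sampling objective and the $d_v^{3/4}$ noise distribution described above (embedding and degree values are toy choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_score(h_u, h_pos, h_negs):
    """Per-pair objective to MAXIMIZE:
    log σ(h_pos·h_u) + Σ_j log σ(-h_negj·h_u)."""
    score = np.log(sigmoid(h_pos @ h_u))
    score += sum(np.log(sigmoid(-h_n @ h_u)) for h_n in h_negs)
    return float(score)

def noise_distribution(degrees):
    """P_n(v) ∝ d_v^{3/4}, used to draw negative samples."""
    w = np.asarray(degrees, dtype=float) ** 0.75
    return w / w.sum()

h_u = np.array([1.0, 0.0])
good = sgns_score(h_u, np.array([1.0, 0.0]), [np.array([-1.0, 0.0])])
bad = sgns_score(h_u, np.array([-1.0, 0.0]), [np.array([1.0, 0.0])])
p = noise_distribution([1, 16])   # higher-degree node drawn more often
```

Embeddings that align targets with their context neighbors and repel them from noise nodes score higher, which is precisely the binary-classification reframing discussed above.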
3.1.2 Node2vec: Biased Random Walks and Objective Function. Node2vec extends DeepWalk by introducing a more flexible and biased random walk procedure [11, 12, 17]. This procedure is meticulously designed to allow for a smooth interpolation between Breadth-First Search (BFS)-like and Depth-First Search (DFS)-like exploration strategies [11, 12, 17]. The overarching objective remains to maximize the likelihood of preserving network neighborhoods, but with a significantly richer and more adaptable definition of what constitutes a "neighborhood" [12]. The underlying objective function for learning embeddings is the same Skip-Gram likelihood used in DeepWalk, but the critical difference lies in how the neighborhoods $N _ { S } ( u )$ are generated [12].
The transition probability $P ( c _ { i } = x | c _ { i - 1 } = v )$ for the random walk is controlled by two key parameters: $p$ , the return parameter, and $q$ , the in-out parameter [11, 12]. The return parameter, $p$ , governs the likelihood of the walk immediately revisiting a node it just came from. A high value of $p$ (e.g., $p > \operatorname* { m a x } ( q , 1 ) )$ makes it less likely to sample an already-visited node in the next two steps, thereby encouraging a more moderate exploration of the graph and avoiding redundant 2-hop paths. Conversely, a low value of $p$ (e.g., $p < \operatorname* { m i n } ( q , 1 ) )$ biases the walk to backtrack, keeping the exploration more "local" to the starting node $u$ . The in-out parameter, $q$ , allows the search to differentiate between "inward" (local) and "outward" (global) nodes relative to the previous node in the walk. If $q > 1$ , the random walk is biased towards nodes closer to the previous node $t$ , leading to a local view of the graph that approximates BFS behavior, where samples primarily comprise nodes within a small locality. If $q < 1$ , the walk is more inclined to visit nodes further away from node $t$ , reflecting a DFS-like behavior that encourages outward exploration [11, 12].
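The biased transition rule can be sketched directly from the $p$ and $q$ definitions above (the graph and parameter values are illustrative):

```python
import numpy as np

def transition_probs(prev, curr, neighbors, adj, p=1.0, q=1.0):
    """node2vec weights for stepping from `curr` (arrived from `prev`) to x:
    1/p if x == prev, 1 if x is adjacent to prev, 1/q otherwise;
    normalized into a probability distribution over `neighbors` of curr."""
    w = []
    for x in neighbors:
        if x == prev:
            w.append(1.0 / p)
        elif adj[prev, x]:
            w.append(1.0)
        else:
            w.append(1.0 / q)
    w = np.array(w)
    return w / w.sum()

# Tiny graph: edges 0-1, 1-2, 0-2, 2-3 (node 3 is two hops from node 0).
adj = np.zeros((4, 4), dtype=bool)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = True

# Walk arrived at node 2 from node 0; candidates are 2's neighbors [0, 1, 3].
local = transition_probs(0, 2, [0, 1, 3], adj, p=1.0, q=4.0)     # q>1: BFS-like
outward = transition_probs(0, 2, [0, 1, 3], adj, p=1.0, q=0.25)  # q<1: DFS-like
```

With $q > 1$ the walk assigns little mass to node 3 (two hops from the previous node), while $q < 1$ makes that outward step the most likely one.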
Node2vec’s significant contribution is not a novel cost function, but rather a sophisticated sampling strategy for generating the positive pairs that feed into the existing Skip-Gram objective [12]. This demonstrates that the effectiveness of a representation learning method is not solely determined by the mathematical form of its loss function. Instead, it is profoundly influenced by how the input data, specifically positive and negative pairs, are constructed to reflect desired graph properties. For instance, emphasizing homophily through BFS-like walks or structural equivalence through DFS-like walks directly impacts the learned representations. The sampling process implicitly defines the "neighborhood" that the cost function then attempts to preserve. This highlights a critical principle in self-supervised learning: the design of the "pretext task"—the mechanism by which self-supervisory signals are generated, often through data augmentation or sampling—is as important as the loss function itself in guiding the model to learn specific, useful representations. Future advancements might involve more sophisticated, adaptive sampling strategies that dynamically adjust based on the evolving characteristics or specific requirements of the graph.
3.1.3 LINE: First-Order and Second-Order Proximity Loss Functions. LINE (Large-scale Information Network Embedding) is designed to preserve two complementary notions of node proximity in graph-structured data. First-order proximity captures the observed connections between directly linked nodes, while second-order proximity models similarity in shared neighborhood distributions, effectively capturing structural equivalence [17, 34]. One of LINE’s notable strengths lies in its versatility as it is applicable to a wide range of information network types, including undirected, directed, and weighted graphs, making it well-suited for diverse real-world information networks [34].
The first-order proximity loss $( L _ { 1 } )$ focuses on preserving the strength of direct connections between nodes. It models the probability of an edge existing between two nodes $( i , j )$ using a sigmoid function applied to the dot product of their embeddings. The objective is to maximize this probability for existing edges:
$$
L _ { 1 } = - \sum _ { ( i , j ) \in E } w _ { i j } \log p _ { 1 } ( v _ { i } , v _ { j } ) \quad { \mathrm { w h e r e } } \quad p _ { 1 } ( v _ { i } , v _ { j } ) = { \frac { 1 } { 1 + \exp ( - \mathbf { h } _ { i } ^ { \top } \mathbf { h } _ { j } ) } }
$$
ACM Comput. Surv., Vol. 37, No. 4, Article 111. Publication date: August 2025.
Here, $w _ { i j }$ represents the weight of the edge between nodes $i$ and $j$ , and $\mathbf { h } _ { i } , \mathbf { h } _ { j }$ are their respective embeddings. This objective encourages embeddings of directly connected nodes to be similar.
The second-order proximity loss $\left( L _ { 2 } \right)$ addresses the sparsity of direct connections by focusing on shared neighborhood structures. It models the probability of a node $j$ being a "context" of node $i$ (i.e., sharing common neighbors) using a separate "context embedding" for node $j$ . The objective aims to make nodes with similar neighborhood distributions have similar embeddings [34]:
$$
L _ { 2 } = - \sum _ { i \in V } \sum _ { j \in N ( i ) } \frac { w _ { i j } } { \sum _ { k \in N ( i ) } w _ { i k } } \log p _ { 2 } ( v _ { j } | v _ { i } )
$$
where $p _ { 2 } ( v _ { j } | v _ { i } ) = \frac { \exp ( \mathbf { h } _ { j } ^ { \prime \top } \mathbf { h } _ { i } ) } { \sum _ { k \in V } \exp ( \mathbf { h } _ { k } ^ { \prime \top } \mathbf { h } _ { i } ) }$ . In this formulation, $\mathbf { h } _ { i }$ is the node embedding for $i$ , and $\mathbf { h } _ { j } ^ { \prime }$ is the context embedding for $j$ . The computational cost of the denominator is typically mitigated using negative sampling.
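Under these definitions, both objectives admit a minimal numpy sketch. The second-order term below uses uniform negative sampling as a simplification (LINE itself samples negatives from a degree-weighted noise distribution), and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def line_first_order(h, edges, weights):
    """L1 = -sum_{(i,j) in E} w_ij * log sigmoid(h_i . h_j)."""
    loss = 0.0
    for (i, j), w in zip(edges, weights):
        s = h[i] @ h[j]
        loss -= w * np.log(1.0 / (1.0 + np.exp(-s)))
    return loss

def line_second_order_sampled(h, h_ctx, edges, weights, k=5):
    """Negative-sampling approximation of L2: for each edge (i, j), the
    positive term log sigmoid(h'_j . h_i) plus k negative terms
    log sigmoid(-h'_n . h_i) with n drawn uniformly."""
    n = h.shape[0]
    loss = 0.0
    for (i, j), w in zip(edges, weights):
        pos = np.log(1.0 / (1.0 + np.exp(-(h_ctx[j] @ h[i]))))
        neg = 0.0
        for _ in range(k):
            nn = rng.integers(n)
            neg += np.log(1.0 / (1.0 + np.exp(h_ctx[nn] @ h[i])))
        loss -= w * (pos + neg)
    return loss
```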
Explicitly addressing both first-order (local) and second-order (global) proximities is crucial for capturing a comprehensive view of the graph’s structure, especially in sparse networks where direct links alone might provide insufficient information for robust representation learning [34]. This dual objective allows LINE to learn richer embeddings that are more informative for a variety of downstream tasks. LINE’s design explicitly incorporates two distinct loss components ($L _ { 1 }$ and $L _ { 2 }$) to capture different granularities of graph proximity: direct connectivity (local) and shared neighborhood patterns (global) [34]. This is a clear instance of multi-objective optimization within a single framework, acknowledging that different structural properties of a graph require unique inductive biases in the loss function to be effectively preserved. By combining these, LINE aims for a more comprehensive representation than methods focusing on a single type of proximity. This approach demonstrates the power of designing complex loss functions that integrate multiple, complementary objectives. It allows for the generation of richer embeddings that are more robust to graph sparsity and can generalize better across a wider range of downstream tasks, as different tasks may rely on different levels of structural information (e.g., local versus global context). This sets a precedent for developing more sophisticated loss functions that capture diverse aspects of graph data.
# 3.2 Matrix Factorization-based Methods: Proximity Preservation
The fundamental principle of matrix factorization methods is to identify low-dimensional embeddings by decomposing a matrix that encapsulates some form of node-to-node similarity or proximity within the graph. The associated cost function typically quantifies the discrepancy between this original similarity matrix and the similarity that is reconstructed from the learned low-dimensional embeddings. The optimization aims to minimize this reconstruction error, thereby preserving the inherent graph properties. While various matrix factorization methods exist, such as GraRep, HOPE, NetMF, and M-NMF, the literature often focuses on their general approach of factorizing graph-derived matrices (e.g., adjacency or Laplacian) rather than providing explicit, universally applicable cost function formulations for all of them [31]. However, the core idea remains consistent: minimizing a loss that reflects how well the low-dimensional embeddings can reconstruct the original graph’s proximity information.
3.2.1 Laplacian Eigenmaps: Graph Laplacian and Eigenvalue Problem. Laplacian Eigenmaps (LE) is a dimensionality reduction technique deeply rooted in spectral graph theory and manifold learning. Its primary objective is to preserve local neighborhood information [2, 51, 52]. It achieves this by ensuring that if two nodes are close in the original high-dimensional space (typically indicated by an existing edge between them), their corresponding embeddings remain close in the learned low-dimensional space [2, 51, 52]. The method seeks a mapping $f : V \to \mathbb { R } ^ { m }$ that minimizes the
following objective function:
$$
L = \frac { 1 } { 2 } \sum _ { i , j } W _ { i j } | | \mathbf { f } _ { i } - \mathbf { f } _ { j } | | ^ { 2 }
$$
Here, $W _ { i j }$ represents the weight of the edge connecting nodes $i$ and $j$ . These weights can be binary (1 if connected, 0 otherwise) or can be derived from a similarity measure, such as Gaussian weights based on the distance between nodes [52]. This objective function inherently imposes a heavy penalty if there are large distances between the embeddings of nodes that are connected by a high-weight edge [2, 52].
This minimization problem can be elegantly reformulated as a generalized eigenvalue problem: $L \mathbf { f } = \lambda D \mathbf { f }$ . In this equation, $L = D - W$ is the graph Laplacian matrix, where $D$ is the diagonal degree matrix with entries $\begin{array} { r } { D _ { i i } = \sum _ { j } W _ { i j } } \end{array}$ [2]. The optimal low-dimensional embeddings for the nodes are then obtained from the eigenvectors corresponding to the smallest non-zero eigenvalues of the graph Laplacian [2].
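For small graphs the generalized eigenproblem can be solved directly by reducing it to an ordinary symmetric one via $D^{-1/2} L D^{-1/2}$ and back-transforming the eigenvectors. The sketch below (function name ours) assumes a connected graph with no isolated nodes.

```python
import numpy as np

def laplacian_eigenmaps(W, m):
    """Solve L f = lambda D f via the symmetric form
    D^{-1/2} L D^{-1/2} y = lambda y, with f = D^{-1/2} y, and return the
    eigenvectors of the m smallest non-zero eigenvalues as embeddings."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # graph Laplacian
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
    L_sym = d_inv_sqrt @ L @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    f = d_inv_sqrt @ vecs                       # back-transform y -> f
    return f[:, 1 : m + 1]                      # skip the trivial eigenvector
```

On a 3-node path graph, the one-dimensional embedding places the middle node at zero and the two endpoints symmetrically around it, as expected from the second Laplacian eigenvector.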
Laplacian Eigenmaps provides a theoretically sound framework for preserving local geometry, drawing its justification from the role of the Laplace-Beltrami operator in manifold learning [2, 52]. Its locality-preserving characteristic makes it relatively insensitive to outliers and noise, and it is not prone to "short-circuiting" issues often seen in global dimensionality reduction methods because it only considers local distances [2].
# 3.3 Deep Neural Network-based Methods: Reconstruction and Generative Models
Deep neural network-based methods leverage the power of deep architectures to learn complex, non-linear relationships in graph data. A common approach within this category is the use of autoencoders, which learn representations by attempting to reconstruct their input.
3.3.1 SDNE: First-Order and Second-Order Proximity Loss Functions. SDNE (Structural Deep Network Embedding) is a deep neural model designed to capture both first-order and second-order proximities within a graph [6]. It employs an autoencoder architecture that aims to reconstruct the graph’s adjacency matrix from learned node embeddings [17]. The total loss function of SDNE is a joint optimization of two main components:
(1) Preserving Second-Order Proximity $( L _ { 1 } )$ : This part of the loss function focuses on reconstructing the adjacency matrix, with a particular emphasis on penalizing errors in reconstructing existing connections more heavily than non-connections. It is formulated as:
$$
L _ { 1 } = \sum _ { v _ { i } \in V } | | ( \mathbf { x } _ { i } - \mathbf { x } _ { i } ^ { \prime } ) \odot \mathbf { b } _ { i } | | ^ { 2 }
$$
Here, $\mathbf { x } _ { i }$ represents the row corresponding to node $v _ { i }$ in the graph’s adjacency matrix, and $\mathbf { x } _ { i } ^ { \prime }$ is its reconstruction. $\mathbf { b } _ { i }$ is a vector where $b _ { i j } = \beta > 1$ if an edge exists between $v _ { i }$ and $v _ { j }$ ($a _ { i j } = 1$), and $b _ { i j } = 1$ if no edge exists ($a _ { i j } = 0$) [17]. This weighting scheme ensures that the model prioritizes accurate reconstruction of existing edges, which is crucial for sparse graphs where zeros are abundant [17].
(2) Capturing First-Order Proximity $\left( L _ { 2 } \right)$ : This component encourages connected nodes to have similar embeddings. It is inspired by Laplacian Eigenmaps and penalizes large differences between the embedding vectors of directly connected nodes:
$$
L _ { 2 } = \sum _ { ( v _ { i } , v _ { j } ) \in E } a _ { i j } | | \mathbf { z } _ { i } - \mathbf { z } _ { j } | | ^ { 2 }
$$
In this formula, $\mathbf { z } _ { i }$ and $\mathbf { z } _ { j }$ are the embedding vectors for nodes $\boldsymbol { v } _ { i }$ and $\boldsymbol { v } _ { j }$ , respectively, and $a _ { i j }$ is the element in the adjacency matrix. This loss component ensures that nodes connected by an edge are mapped to nearby points in the embedding space [17].
By jointly optimizing both $L _ { 1 }$ and $L _ { 2 }$ , SDNE generates node embedding vectors that capture both the local (first-order) and global (second-order) structural properties of the graph [6, 17].
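A dense numpy sketch of the joint objective follows; the trade-off weight `alpha` between the two terms and the function name are our labeling, and the autoencoder that produces the reconstruction `X_hat` and the embeddings `Z` is assumed to exist elsewhere.

```python
import numpy as np

def sdne_loss(A, X_hat, Z, beta=5.0, alpha=1.0):
    """Joint SDNE objective: weighted reconstruction of adjacency rows
    (second-order term) plus a Laplacian-Eigenmaps-style penalty on
    embeddings of connected nodes (first-order term)."""
    B = np.where(A > 0, beta, 1.0)                     # b_ij = beta on edges, 1 otherwise
    l_second = np.sum(((A - X_hat) * B) ** 2)          # L1 in the text
    diff = Z[:, None, :] - Z[None, :, :]               # pairwise z_i - z_j
    l_first = np.sum(A * np.sum(diff ** 2, axis=-1))   # L2 in the text
    return l_second + alpha * l_first
```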
3.3.2 Graph Autoencoders (GAE). Graph Autoencoders (GAEs), proposed by Kipf and Welling (2016), are prominent unsupervised models that predict link probabilities by computing the inner products of node representations learned through a Message Passing Neural Network (MPNN) [10, 27]. The core objective of GAEs is to reconstruct and preserve the graph topology by mapping nodes into a latent space [10].
The GAE architecture typically consists of a GNN encoder that generates node embeddings and a simple decoder that reconstructs the adjacency matrix using an inner product [10, 15, 27]. The reconstruction loss function, often based on mean squared error or binary cross-entropy, aims to minimize the discrepancy between the original adjacency matrix $A$ and the reconstructed adjacency matrix $\hat { A }$ [10]. The reconstructed adjacency matrix $\hat { A }$ is typically obtained by applying a logistic sigmoid function to the inner product of the node embeddings: $\hat { A } = \sigma ( Z Z ^ { \top } )$ , where $Z$ is the matrix of node embeddings [10, 21].
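The decoder and loss are simple enough to state directly; a dense-matrix sketch (our function name, with a small epsilon added for numerical safety that is not part of the formal loss) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gae_reconstruction_loss(Z, A):
    """Inner-product decoder A_hat = sigmoid(Z Z^T) with a dense binary
    cross-entropy over all node pairs. Returns (A_hat, mean BCE)."""
    A_hat = sigmoid(Z @ Z.T)
    eps = 1e-10                                    # guard against log(0)
    bce = -(A * np.log(A_hat + eps) + (1 - A) * np.log(1 - A_hat + eps))
    return A_hat, bce.mean()
```

With zero embeddings every pair receives probability 0.5, so the mean BCE equals $\log 2$ regardless of the adjacency, a handy sanity check.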
GAEs learn representations by recovering missing information from incomplete input graphs, following a corruption-reconstruction framework [25]. While effective for link prediction and other tasks, existing GAEs often focus primarily on reconstructing low-frequency information in graphs, potentially overlooking valuable high-frequency signals [25]. This is because their optimization tends to prioritize larger discrepancies, which are more pronounced in low-frequency components [25].
3.3.3 Variational Graph Autoencoders (VGAE). Variational Graph Autoencoders (VGAEs), also introduced by Kipf and Welling (2016), extend the GAE framework by incorporating a probabilistic approach based on the Variational Autoencoder (VAE) [29, 32, 43]. VGAEs aim to learn interpretable latent representations for undirected graphs by modeling the latent variables probabilistically [21].
The VGAE model typically employs two GNN encoders to learn the mean $( \mu )$ and variance $( \sigma ^ { 2 } )$ of the embedding vectors, assuming a Gaussian distribution for the latent variables [7, 21, 29, 32]. The generative model then reconstructs the adjacency matrix through an inner product between these latent variables, similar to GAE: $p ( A _ { i j } = 1 | \mathbf { z } _ { i } , \mathbf { z } _ { j } ) = \sigma ( \mathbf { z } _ { i } ^ { \top } \mathbf { z } _ { j } )$ [21].
The learning objective for VGAE is the variational lower bound (Evidence Lower Bound or ELBO), which consists of two main terms [21]:
$$
L = \mathbb { E } _ { q ( \mathbf { Z } | \mathbf { X } , \mathbf { A } ) } \left[ \log p ( \mathbf { A } | \mathbf { Z } ) \right] - \mathrm { K L } \left[ q ( \mathbf { Z } | \mathbf { X } , \mathbf { A } ) | | p ( \mathbf { Z } ) \right]
$$
(1) Reconstruction Term $( \mathbb { E } _ { q ( \mathbf { Z } | \mathbf { X } , \mathbf { A } ) } \left[ \log p ( \mathbf { A } | \mathbf { Z } ) \right] )$ : This term measures how well the decoder can reconstruct the original graph’s adjacency matrix from the sampled latent embeddings. It is typically a binary cross-entropy loss that encourages the model to assign high probabilities to existing edges and low probabilities to non-existent ones.
(2) KL Divergence Term $( { \bf K L } [ q ( { \bf Z } | { \bf X } , { \bf A } ) | | p ( { \bf Z } ) ] )$ : This term acts as a regularizer, forcing the approximate posterior distribution $q ( \mathbf { Z } | \mathbf { X } , \mathbf { A } )$ (learned by the encoder) to be close to a predefined prior distribution $p ( Z )$ (typically a standard Gaussian distribution) [21, 29]. This regularization helps in learning a smooth and continuous latent space, enabling better generalization and sampling.
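The ELBO can be estimated with a single reparameterized sample. The sketch below returns the two terms separately (our choice, to make the KL regularizer visible); the encoders producing `mu` and `logvar` are assumed to exist elsewhere.

```python
import numpy as np

def vgae_elbo_terms(A, mu, logvar, rng):
    """One-sample estimate of the negative ELBO terms: BCE reconstruction
    from a reparameterized latent sample, and the closed-form KL divergence
    to a standard Gaussian prior."""
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)  # reparameterization
    A_hat = 1.0 / (1.0 + np.exp(-(z @ z.T)))                       # inner-product decoder
    eps = 1e-10
    recon = -np.mean(A * np.log(A_hat + eps) + (1 - A) * np.log(1 - A_hat + eps))
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))     # KL[q || N(0, I)]
    return recon, kl
```

When the posterior matches the prior exactly (`mu = 0`, `logvar = 0`), the KL term vanishes and only the reconstruction term remains.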
VGAEs are effective for tasks like link prediction, but they can face challenges such as posterior collapse, where the model might prioritize reconstruction over learning a meaningful latent distribution, especially when initialized poorly [5].
3.3.4 Masked Graph Autoencoders (MGAE). Masked Graph Autoencoders (MGAE) represent a novel framework for unsupervised graph representation learning, drawing inspiration from self-supervised learning techniques like masked autoencoding in other domains [33]. Unlike traditional GAEs that reconstruct the entire graph, MGAE focuses on reconstructing masked edges, rather than the observed ones, during training [33]. This forces the GNN encoder to become more robust to network noise and to encode graph information more effectively [33].
MGAE operates by randomly masking a large proportion of edges (e.g., a $70\%$ masking ratio) in the input graph structure [33]. The GNN encoder then processes this partially masked graph, and the decoder is specifically designed to reconstruct only the missing (masked) edges [33].
The standard graph-based loss function used to train the MGAE model is defined as [33]:
$$
L = - \sum _ { ( v , u ) \in E _ { m a s k } } \log \frac { \exp ( y _ { v u } ) } { \sum _ { z \in V } \exp ( y _ { v z } ) }
$$
where $y _ { v , u } = \mathrm { M L P } ( h _ { e _ { v , u } } )$ is the reconstructed score for the edge $e _ { v , u }$ , and $E _ { m a s k }$ represents the set of masked edges that the model aims to reconstruct. The summation in the denominator, $\begin{array} { r } { \sum _ { z \in V } \exp ( y _ { v z } ) } \end{array}$ , is computationally expensive for large graphs, so negative sampling is typically employed to accelerate the optimization process [33]. This approach allows MGAE to achieve strong performance while processing only a fraction of the original graph structure during encoding [33].
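The negative-sampling shortcut can be sketched as follows, with `scores` standing in for the MLP edge scorer and uniform sampling replacing the full denominator; both simplifications are ours, not MGAE's exact training recipe.

```python
import numpy as np

def mgae_masked_edge_loss(scores, masked_edges, num_nodes, rng, k=5):
    """Softmax-style loss over masked edges, with the full denominator
    over all nodes replaced by k uniformly sampled negatives.

    scores: callable (v, u) -> reconstructed edge score y_vu."""
    loss = 0.0
    for v, u in masked_edges:
        pos = np.exp(scores(v, u))
        neg = sum(np.exp(scores(v, rng.integers(num_nodes))) for _ in range(k))
        loss -= np.log(pos / (pos + neg))
    return loss / len(masked_edges)
```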
# 3.4 Contrastive Learning Methods in GNNS: Maximizing Agreement and Discrepancy
Contrastive learning has emerged as a powerful self-supervised paradigm for graph representation learning, focusing on learning discriminative embeddings by contrasting positive and negative sample pairs [41, 53].
3.4.1 General Principles and InfoNCE Loss. The core principle of contrastive learning is to learn representations by encouraging semantically similar instances to occupy nearby positions in the embedding space, while dissimilar instances are pushed farther apart [53]. This objective is typically operationalized through the construction of instance pairs: positive pairs, which consist of different augmented views of the same data point, and are intended to be close in the embedding space; and negative pairs, which are drawn from distinct data instances and are expected to be dissimilar [53].
The InfoNCE loss is a widely used cost function in contrastive learning, particularly in graph-based applications [41, 46, 53]. For a given anchor sample (or view) $x$ , and a positive sample (another view of the same instance) $y$ , the InfoNCE loss aims to maximize the similarity between $x$ and $y$ relative to a set of negative samples $\{ x _ { i } ^ { - } \}$ [46]:
$$
L _ { \mathrm { I n f o N C E } } = - \log { \frac { \exp ( \mathrm { s i m } ( f ( x ) , f ( y ) ) / \tau ) } { \exp ( \mathrm { s i m } ( f ( x ) , f ( y ) ) / \tau ) + \sum _ { i } \exp ( \mathrm { s i m } ( f ( x ) , f ( x _ { i } ^ { - } ) ) / \tau ) } }
$$
Here, $f ( \cdot )$ is the encoder network that transforms raw inputs into vector representations, $\mathrm { s i m } ( \cdot , \cdot )$ is a similarity metric (often cosine similarity), and $\tau$ is a temperature parameter that controls the sharpness of the distribution [46, 53]. This loss function encourages alignment (positive samples are brought closer) and uniformity (representations are evenly distributed by pushing negative samples apart) [46]. The success of InfoNCE is strongly influenced by the quantity and quality of negative samples [46].
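For a single anchor, the loss reduces to a few lines; the sketch below assumes cosine similarity and applies the loss directly to precomputed embedding vectors (i.e., the encoder $f$ has already been applied).

```python
import numpy as np

def info_nce(x, y, negatives, tau=0.5):
    """InfoNCE for one anchor x: cosine similarity to the positive y,
    contrasted against a list of negative vectors, at temperature tau."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(x, y) / tau)
    neg = sum(np.exp(cos(x, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

Lowering `tau` sharpens the softmax, so hard negatives (those most similar to the anchor) dominate the loss, a property frequently exploited in contrastive training.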
3.4.2 Subgraph Gaussian Embedding Contrast (SGEC). Subgraph Gaussian Embedding Contrast (SGEC) is a novel method that introduces a subgraph Gaussian embedding (SGE) module to adaptively map subgraphs to a structured Gaussian space [41]. This approach aims to preserve graph characteristics while controlling the distribution of generated subgraphs [41].
SGEC starts by sampling BFS-induced subgraphs. Node representations and topology information within these subgraphs are then embedded into a latent space that is regularized towards a Gaussian distribution using Kullback–Leibler (KL) divergence [41]. The embedded features $\tilde { \mathbf { x } } _ { i }$ are generated using the reparameterization trick: $\tilde { \mathbf { x } } _ { i } = \mu _ { i } + \exp ( \log ( \sigma _ { i } ) ) \odot \boldsymbol \epsilon$ , where $\epsilon \sim N ( 0 , I )$ is Gaussian noise [41]. A KL divergence term regularizes the embedding distribution towards a Gaussian prior, preventing mode collapse [41].
For contrastive learning, SGEC integrates optimal transport distances into the InfoNCE loss formulation, addressing the complexities of graph-based data [41]. The complete contrastive loss $L _ { c o n t r a s t }$ is a sum of two components:
(1) Wasserstein Distance $( L _ { W } )$ : This component captures feature distribution representation within subgraphs. It measures the minimum cost of transforming one feature distribution into another [41].
$$
{ \cal L } _ { W } = \alpha \left( - \sum _ { i \in S } \log \frac { \exp ( - W ( X _ { i } , \tilde { X } _ { i } ) / \tau ) } { \sum _ { j \in S , j \ne i } ( \exp ( - W ( X _ { i } , \tilde { X } _ { j } ) / \tau ) + \exp ( - W ( X _ { i } , X _ { j } ) / \tau ) ) } \right)
$$
where $W ( X _ { i } , \tilde { X } _ { i } )$ is the Wasserstein distance between the feature matrices of the original subgraph $X _ { i }$ and its embedded version $\tilde { X _ { i } }$ [41].
(2) Gromov-Wasserstein Distance $( L _ { G W } )$ : This component captures structural discrepancies, providing a topology-aware similarity measure. It measures the dissimilarity between two metric spaces (subgraphs) while considering their internal structures [41].
$$
L _ { G W } = \left( 1 - \alpha \right) \left( - \sum _ { i \in S } \log \frac { \exp ( - G W ( A _ { i } , X _ { i } , A _ { i } , \tilde { X } _ { i } ) / \tau ) } { \sum _ { j \in S , j \neq i } ( \exp ( - G W ( A _ { i } , X _ { i } , A _ { j } , \tilde { X } _ { j } ) / \tau ) + \exp ( - G W ( A _ { i } , X _ { i } , A _ { j } , X _ { j } ) / \tau ) ) } \right)
$$
where $G W ( A _ { i } , X _ { i } , A _ { i } , \tilde { X } _ { i } )$ is the Gromov-Wasserstein distance, considering adjacency matrices $A$ and feature matrices $X$ [41].
The final loss $L$ of the SGEC model combines both the contrastive and regularization components, balanced by a hyperparameter $\beta \colon L = L _ { c o n t r a s t } + \beta \mathrm { K L } ( q ( \tilde { X } | X , A ) | | p ( \tilde { X } ) )$ [41]. This comprehensive approach leverages the strengths of Gaussian embeddings and optimal transport to learn robust graph representations.
3.4.3 Discrepancy-based Self-supervised Learning (D-SLA). Discrepancy-based Self-supervised Learning (D-SLA) introduces a novel perspective from traditional contrastive learning paradigms by focusing on modeling the discrepancy between graphs, rather than maximizing their similarity [18]. Instead of aligning representations of different augmented views, D-SLA encourages the model to distinguish the original graph from its perturbed variants. This approach compels the model to capture even subtle structural differences that may have significant implications for global graph properties [18].
D-SLA’s objective functions are designed to achieve this:
(1) Graph Discrimination Loss $( L _ { G D } )$ : This objective trains the model to distinguish the original graph from its perturbed counterparts.
$$
L _ { G D } = - \log \left( \frac { e ^ { S _ { 0 } } } { e ^ { S _ { 0 } } + \sum _ { i \geq 1 } e ^ { S _ { i } } } \right)
$$
where $S _ { k }$ is a score for graph $G _ { k }$ (original $G _ { 0 }$ or perturbed $G _ { i }$ ), obtained from a learnable score network [18]. Perturbed graphs are generated through edge manipulation (addition/deletion) and node attribute masking, making the discrimination challenging and forcing the model to learn deeper discrepancies [18].
(2) Edit Distance Learning $( L _ { e d i t } )$ : This objective quantifies how dissimilar graphs are by leveraging the graph edit distance. The exact number of edge additions/deletions during perturbation directly provides this distance, making it computationally efficient [18].
$$
L _ { e d i t } = \sum _ { i , j } \left( \frac { d _ { i } } { e _ { i } } - \frac { d _ { j } } { e _ { j } } \right) ^ { 2 }
$$
Here, $e _ { i }$ is the graph edit distance between $G _ { 0 }$ and $G _ { i }$ , and $d _ { i }$ is the embedding-level distance (L2-norm) between their representations. This term enforces that embedding differences are proportional to actual graph edit distances, ensuring that graphs with larger edit distances are farther apart in the embedding space [18].
(3) Relative Discrepancy Learning $( L _ { m a r g i n } )$ : This objective extends the learning to differentiate the target graph from completely different graphs (negative graphs from the same batch). It uses a triplet margin loss:
$$
L _ { m a r g i n } = \sum _ { i , j } \operatorname* { m a x } ( 0 , \alpha + d _ { i } - d _ { j } ^ { \prime } )
$$
where $d _ { i }$ is the distance between the original graph and its perturbed versions, and $d _ { j } ^ { \prime }$ is the distance between the original graph and a negative graph from the batch. $\alpha > 0$ is a margin hyperparameter [18]. This ensures that negative graphs are embedded further away from the original graph than its perturbed versions, preventing collapse of semantically dissimilar graphs [18].
The complete D-SLA learning objective combines these three components: $L = L _ { G D } + \lambda _ { 1 } L _ { e d i t } + \lambda _ { 2 } L _ { m a r g i n }$ , where $\lambda _ { 1 }$ and $\lambda _ { 2 }$ are scaling weights [18]. This framework allows D-SLA to capture subtle differences, discriminate between perturbed graphs, preserve exact discrepancy amounts, and capture relative distances, offering a robust approach to unsupervised graph representation learning.
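Given precomputed scores and distances, the three terms can be evaluated directly; the function signature below is illustrative, and the score network and embedding distances are assumed to be supplied by the model.

```python
import numpy as np

def dsla_losses(s, d, e, d_neg, alpha=1.0):
    """The three D-SLA terms from precomputed quantities.

    s: scores [S_0, S_1, ...] with the original graph's score first;
    d: embedding distances d_i to each perturbed graph;
    e: matching graph edit distances e_i;
    d_neg: embedding distances to negative graphs from the batch."""
    s = np.asarray(s, float)
    l_gd = -np.log(np.exp(s[0]) / np.exp(s).sum())          # graph discrimination
    r = np.asarray(d, float) / np.asarray(e, float)          # ratios d_i / e_i
    l_edit = sum((r[i] - r[j]) ** 2
                 for i in range(len(r)) for j in range(len(r)))
    l_margin = sum(max(0.0, alpha + di - dj)                 # triplet margin
                   for di in d for dj in d_neg)
    return l_gd, l_edit, l_margin
```

When the ratios $d_i / e_i$ all agree, the edit-distance term vanishes, which is exactly the proportionality the loss enforces.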
# 4 Empirical Methodology
In this section, we present our approach to extending the unsupervised model to inductive settings. To achieve this, we introduce a Universal Feature Encoder designed to map variable-length input vectors to fixed-length representations. By incorporating this encoder, the model can be trained on one graph dataset and subsequently applied to generate embeddings for unseen or different graph datasets, thereby enabling effective inductive generalization.
# 4.1 Universal feature encoder
Let $\mathbf { X } \in \mathbb { R } ^ { B \times d _ { \mathrm { i n } } }$ denote the input feature matrix, where $B$ is the batch size, $d _ { \mathrm { i n } }$ is the (variable) input dimension, and $d _ { h }$ , $d _ { \mathrm { o u t } }$ are the fixed hidden and output dimensions, respectively.
4.1.1 Dynamic Linear Projection. A linear transformation is applied to map input features to the hidden space:
$$
\mathbf { H } = \mathbf { X } \mathbf { W } _ { p } + \mathbf { b } _ { p } , \quad \mathrm { w h e r e } ~ \mathbf { W } _ { p } \in \mathbb { R } ^ { d _ { \mathrm { i n } } \times d _ { h } } , ~ \mathbf { b } _ { p } \in \mathbb { R } ^ { d _ { h } }
$$
4.1.2 Layer Normalization with ReLU. Each row $\mathbf { h } _ { i }$ of $\mathbf { H }$ is normalized using layer normalization:
$$
\hat { \mathbf { h } } _ { i } = \frac { \mathbf { h } _ { i } - \mu _ { i } } { \sqrt { \sigma _ { i } ^ { 2 } + \epsilon } } , \quad \mathrm { w h e r e }
$$
$$
\mu _ { i } = \frac { 1 } { d _ { h } } \sum _ { j = 1 } ^ { d _ { h } } h _ { i j } , \qquad \sigma _ { i } ^ { 2 } = \frac { 1 } { d _ { h } } \sum _ { j = 1 } ^ { d _ { h } } ( h _ { i j } - \mu _ { i } ) ^ { 2 }
$$
Then apply the ReLU activation:
$$
\mathbf { Z } = \mathrm { R e L U } ( \hat { \mathbf { H } } )
$$
4.1.3 Adaptive Average Pooling. First, reshape for 1D pooling:
$$
\mathbf { Z } \in \mathbb { R } ^ { B \times d _ { h } } \quad \longrightarrow \quad \mathbf { Z } _ { \mathrm { r e s h a p e d } } \in \mathbb { R } ^ { B \times 1 \times d _ { h } }
$$
Apply 1D adaptive average pooling to obtain:
$$
{ \bf X } _ { \mathrm { t m p } } = \mathrm { A d a p t i v e A v g P o o l 1 D } ( { \bf Z } _ { \mathrm { r e s h a p e d } } ) \in \mathbb { R } ^ { B \times 1 \times d _ { \mathrm { o u t } } }
$$
Finally, remove the singleton dimension:
$$
\mathbf { X } _ { \mathrm { f i n a l } } = \mathbf { X } _ { \mathrm { t m p } } [ : , 0 , : ] \in \mathbb { R } ^ { B \times d _ { \mathrm { o u t } } }
$$
# End-to-End Transformation Summary:
$$
\mathbf { X } _ { \mathrm { f i n a l } } = \mathrm { A d a p t i v e A v g P o o l 1 D } \big ( \operatorname { R e L U } \big ( \mathrm { L a y e r N o r m } ( \mathbf { X } \mathbf { W } _ { p } + \mathbf { b } _ { p } ) \big ) \big )
$$
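The end-to-end transformation can be checked numerically with a small numpy sketch. The adaptive-pooling bin boundaries below mirror PyTorch's `AdaptiveAvgPool1d` rule (`start = floor(i*d_h/d_out)`, `end = ceil((i+1)*d_h/d_out)`), and the function name is ours.

```python
import numpy as np

def universal_feature_encoder(X, W_p, b_p, d_out, eps=1e-5):
    """Dynamic linear projection, per-row layer normalization, ReLU,
    then 1D adaptive average pooling down to d_out features."""
    H = X @ W_p + b_p                                   # (B, d_h)
    mu = H.mean(axis=1, keepdims=True)
    var = H.var(axis=1, keepdims=True)
    Z = np.maximum((H - mu) / np.sqrt(var + eps), 0.0)  # LayerNorm + ReLU
    B, d_h = Z.shape
    out = np.empty((B, d_out))
    for i in range(d_out):                              # adaptive pooling bins
        start = (i * d_h) // d_out
        end = -(-(i + 1) * d_h // d_out)                # ceil division
        out[:, i] = Z[:, start:end].mean(axis=1)
    return out
```

Because `W_p` has shape $(d_{\mathrm{in}}, d_h)$, only this projection depends on the input dimension; everything downstream operates on the fixed hidden width, which is what enables transfer across graphs with different feature dimensionalities.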
# 4.2 Evaluation metrics for unsupervised node representations
Unsupervised graph node representation learning frameworks require specialized evaluation metrics due to the unique challenges and objectives inherent in learning from graph-structured data without explicit supervision. These frameworks aim to learn meaningful node embeddings that can be effectively applied to various downstream tasks. The following points highlight why distinct evaluation metrics are necessary:
(1) Absence of Labeled Data: Since no labeled data is available during training, the quality of learned embeddings must be assessed without relying on ground truth labels. Metrics that evaluate how well similar nodes are clustered together and dissimilar nodes are separated, such as cluster cohesion and separation measures, are essential.
(2) Graph Topology Preservation: A core goal of unsupervised graph representation learning is to capture the underlying structure and relationships within the graph. Metrics that quantify how well the embeddings preserve graph topology—through node classification accuracy, link prediction performance, or community detection quality—are critical.
(3) Robustness and Generalization: To evaluate the stability and applicability of learned embeddings across varying conditions, it is important to test their robustness to noise or perturbations in the graph. This helps assess how well the framework generalizes beyond the training data.
(4) Interpretability: Understanding and explaining the learned representations is crucial for trust and insight. Visualization techniques, such as network graphs, enable interpretation of embeddings in relation to graph structure, facilitating transparency and model validation.
Considering these scenarios, we use 21 evaluation metrics for unsupervised embedding evaluation [8, 35, 36, 38, 39], grouped as follows:
# Node Classification Metrics:
(1) node_cls_accuracy: Overall proportion of correctly predicted node classes.
$$
{ \mathrm { A c c u r a c y } } = { \frac { T P + T N } { T P + T N + F P + F N } }
$$
(2) node_cls_precision: Correct positive predictions over total predicted positives.
$$
{ \mathrm { P r e c i s i o n } } = { \frac { T P } { T P + F P } }
$$
(3) node_cls_recall (sensitivity): Correct positive predictions over actual positives.
$$
{ \mathrm { R e c a l l } } = { \frac { T P } { T P + F N } }
$$
(4) node_cls_f1: Harmonic mean of precision and recall.
$$
F _ { 1 } = 2 \cdot { \frac { { \mathrm { P r e c i s i o n } } \cdot { \mathrm { R e c a l l } } } { { \mathrm { P r e c i s i o n } } + { \mathrm { R e c a l l } } } }
$$
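These four formulas follow directly from the confusion-matrix counts; a small helper (our naming) with zero-denominator guards:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion counts,
    returning 0.0 where a denominator would be zero."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```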
# Link Prediction Metrics (Binary Classification):
Link prediction is one of the most important factors in evaluating unsupervised node embeddings.
(1) LP_accuracy: Fraction of correctly predicted links/non-links.
(2) LP_precision, LP_recall, LP_f1: Same definitions as node classification.
(3) LP_auroc: Area under ROC curve; measures true vs. false positive rate trade-off.
(4) LP_aupr: Area under Precision-Recall curve.
(5) LP_specificity: True negative rate.
$$
{ \mathrm { S p e c i f i c i t y } } = { \frac { T N } { T N + F P } }
$$
# Embedding–Adjacency Alignment:
(1) Cosine Similarity–Adjacency Correlation (cosine_adj_corr): Pearson correlation between cosine similarity matrix of embeddings and adjacency matrix.
$$
\rho = \frac{\mathrm{cov}(\mathrm{CosSim}(X), A)}{\sigma_{\mathrm{CosSim}} \cdot \sigma_{A}}
$$
(2) Dot-Product–Adjacency Correlation (dot_adj_corr): Correlation between dot-product similarity and adjacency.
$$
\rho = \frac{\mathrm{cov}(XX^{\top}, A)}{\sigma_{XX^{\top}} \cdot \sigma_{A}}
$$
(3) Inverted Distance-Adjacency Correlation (euclidean_adj_corr): Negative correlation between Euclidean distance and adjacency.
$$
\rho = \mathrm{corr}\left(-\|x_i - x_j\|_2^2, A_{ij}\right)
$$
(4) Edge Reconstruction BCE Loss (graph_reconstruction_bce_loss): Binary cross-entropy loss between predicted and actual adjacency.
$$
\mathrm{BCE} = -\frac{1}{|E|} \sum_{(i,j)} \left[ A_{ij} \log \hat{A}_{ij} + (1 - A_{ij}) \log(1 - \hat{A}_{ij}) \right]
$$
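The alignment metrics above can be sketched in a few lines of NumPy. The matrices `X` and `A` below are toy values; `reconstruction_bce` assumes an inner-product decoder, predicting edge probabilities as sigmoid(XXᵀ) and averaging over all node pairs for simplicity (the formula above normalizes by |E|), so the paper's exact decoder may differ.

```python
import numpy as np

def cosine_adj_corr(X, A):
    # Pearson correlation between off-diagonal cosine similarities and adjacency.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    iu = np.triu_indices_from(A, k=1)  # each unordered pair once
    return float(np.corrcoef(S[iu], A[iu])[0, 1])

def reconstruction_bce(X, A, eps=1e-9):
    # Binary cross-entropy between sigmoid(X X^T) and the adjacency matrix.
    Ahat = 1.0 / (1.0 + np.exp(-(X @ X.T)))
    return float(-np.mean(A * np.log(Ahat + eps) + (1 - A) * np.log(1 - Ahat + eps)))

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy path graph
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])            # toy embeddings
corr = cosine_adj_corr(X, A)
bce = reconstruction_bce(X, A)
```

Here adjacent nodes 0 and 1 have similar embeddings while node 2 does not, so the correlation comes out positive.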
# Clustering Quality Metrics:
Clustering quality is also important because unsupervised learning relies on the clustering structure or geometry of the data.
(1) silhouette: Mean silhouette coefficient of samples:
$$
s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}} \in [-1, 1]
$$
(2) calinski_harabasz: Ratio of between-cluster to within-cluster dispersion:
$$
\mathrm{CH} = \frac{\mathrm{Tr}(B_k)}{\mathrm{Tr}(W_k)} \cdot \frac{n - k}{k - 1}
$$
(3) Neighborhood Overlap Score (knn_consistency): Average proportion of shared neighbors between embedding and graph space.
$$
\frac{1}{n} \sum_i \frac{|N_k^{\mathrm{graph}}(i) \cap N_k^{\mathrm{embed}}(i)|}{k}
$$
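A minimal sketch of the neighborhood overlap score, assuming both spaces are summarized by pairwise distance matrices (graph distances could come from shortest paths, embedding distances from Euclidean distance); the small path graph below is illustrative.

```python
import numpy as np

def knn_sets(D, k):
    # k nearest neighbors of each node under distance matrix D (self excluded).
    out = []
    for i in range(D.shape[0]):
        nearest = [j for j in np.argsort(D[i]) if j != i][:k]
        out.append(set(nearest))
    return out

def knn_consistency(D_graph, D_embed, k):
    # Average fraction of shared k-NN between graph space and embedding space.
    n = D_graph.shape[0]
    g, e = knn_sets(D_graph, k), knn_sets(D_embed, k)
    return sum(len(g[i] & e[i]) for i in range(n)) / (n * k)

# Toy path graph 0-1-2-3: graph distance |i - j|; embeddings placed at x_i = i,
# so both spaces induce identical neighborhoods.
idx = np.arange(4)
D_graph = np.abs(idx[:, None] - idx[None, :]).astype(float)
D_embed = D_graph.copy()
score = knn_consistency(D_graph, D_embed, k=2)
```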
# Semantic Coherence and Ranking:
For measuring the semantic coherence of embeddings, we used the following metrics.
(1) coherence: Semantic similarity within cluster, usually using PMI or word co-occurrence.
$$
\mathrm{Coherence}(C) = \sum_{i < j} \mathrm{PMI}(w_i, w_j)
$$
(2) selfCluster: Internal measure of clusterability using self-supervision loss or contrastive consistency.
(3) Rankme: Measures the effective rank (entropy of singular values) of the embedding matrix.
$$
\mathrm{RankMe}(X) = \exp\left(-\sum_i p_i \log p_i\right), \quad p_i = \frac{\sigma_i}{\sum_j \sigma_j}
$$
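RankMe follows directly from this definition via an SVD of the embedding matrix; a short NumPy sketch with two extreme toy inputs:

```python
import numpy as np

def rankme(X, eps=1e-12):
    # Effective rank: exponential of the entropy of normalized singular values.
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-np.sum(p * np.log(p + eps))))

X_iso = np.eye(4)                            # 4 equal singular values
X_rank1 = np.outer(np.ones(4), np.ones(4))   # rank-1 matrix
r_iso = rankme(X_iso)     # close to 4: embedding uses all dimensions
r_one = rankme(X_rank1)   # close to 1: embedding has collapsed
```

Higher values indicate that the embedding spreads information across more dimensions rather than collapsing.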
# 4.3 Classical GNN Architectures
To assess the learning of the models further, we wanted to analyze whether all GNNs give similar embeddings or whether embedding quality depends on the combination of loss function and graph neural network. Therefore, we used $6{+}1$ different GNNs in our study.
4.3.1 Graph Convolutional Network (GCN) [20]. GCN is a foundational spectral-based graph neural network that performs convolution-like operations on graphs. It utilizes a normalized adjacency matrix to propagate features across graph neighborhoods. The layer-wise propagation rule is defined as:
$$
H^{(l+1)} = \sigma\left(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{(l)} W^{(l)}\right)
$$
where $\hat{A} = A + I$ is the adjacency matrix with added self-loops, $\hat{D}$ is the corresponding degree matrix, $W^{(l)}$ is the trainable weight matrix, and $\sigma$ denotes a non-linear activation function.
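One propagation step of this rule can be sketched in NumPy as follows; `A`, `H`, and `W` are toy values, and ReLU stands in for the activation σ:

```python
import numpy as np

def gcn_layer(A, H, W):
    # GCN propagation: add self-loops, symmetrically normalize, propagate, ReLU.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1], [1, 0]], dtype=float)  # two connected nodes
H = np.eye(2)                                # one-hot toy features
W = np.eye(2)                                # identity weights for illustration
H1 = gcn_layer(A, H, W)
```

With identity features and weights, each node's output is the normalized average of itself and its neighbor, illustrating the smoothing behavior of the layer.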
4.3.2 Graph Attention Network (GAT) [37]. GAT introduces an attention mechanism to assign learnable importance weights to neighboring nodes during feature aggregation. This allows the network to focus on the most relevant parts of a node’s neighborhood:
$$
h_i' = \sigma\left(\sum_{j \in N(i)} \alpha_{ij} W h_j\right)
$$
$$
\alpha_{ij} = \mathrm{softmax}_j\left(\mathrm{LeakyReLU}\left(a^{\top} [W h_i \,\|\, W h_j]\right)\right)
$$
Here, $\alpha _ { i j }$ denotes the attention coefficient computed between nodes $i$ and $j$ , and $\parallel$ represents concatenation.
4.3.3 GraphSAGE (Sample and Aggregate) [13]. GraphSAGE extends GCN by enabling inductive learning. It samples a fixed-size neighborhood and applies an aggregation function to produce node embeddings, making it suitable for large or dynamic graphs:
$$
h_i^{(l+1)} = \sigma\left(W^{(l)} \cdot \mathrm{AGGREGATE}^{(l)}\left(\{h_i^{(l)}\} \cup \{h_j^{(l)}, j \in N(i)\}\right)\right)
$$
Different aggregation functions such as mean, LSTM, or pooling can be used.
4.3.4 Graph Isomorphism Network (GIN) [42]. GIN is designed to achieve high representational power by mimicking the Weisfeiler-Lehman test for graph isomorphism. It updates node features through a learnable weighted sum of neighbors:
$$
h_v^{(k)} = \mathrm{MLP}^{(k)}\left(\left(1 + \epsilon^{(k)}\right) \cdot h_v^{(k-1)} + \sum_{u \in N(v)} h_u^{(k-1)}\right)
$$
where $\epsilon ^ { ( k ) }$ is either a learnable parameter or a fixed scalar.
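A sketch of the GIN update, with a single linear layer plus ReLU standing in for the MLP (an illustrative simplification, not the paper's configuration):

```python
import numpy as np

def gin_layer(A, H, W, eps=0.0):
    # GIN aggregation: (1 + eps) * own features + sum of neighbor features,
    # followed by a stand-in one-layer "MLP" (linear + ReLU).
    agg = (1.0 + eps) * H + A @ H  # A @ H sums neighbor features
    return np.maximum(0.0, agg @ W)

A = np.array([[0, 1], [1, 0]], dtype=float)
H = np.array([[1.0, 2.0], [3.0, 4.0]])
out = gin_layer(A, H, np.eye(2))  # identity weights for illustration
```

The sum aggregator (rather than mean or max) is what lets GIN distinguish multisets of neighbor features, mirroring the Weisfeiler-Lehman test.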
4.3.5 Position-Aware Graph Neural Network (PAGNN) [45]. PAGNN integrates node positional encodings, often derived from Laplacian eigenvectors, into the attention mechanism. This allows the network to be sensitive to node positions or roles within the graph:
$$
\alpha_{ij} = \mathrm{Attention}\left(h_i, h_j, p_i - p_j\right)
$$
Here, $\pmb { p } _ { i }$ encodes the structural position of node $i$ , enhancing message passing with spatial awareness.
4.3.6 Message Passing Neural Network (MPNN) [9]. MPNN is a general framework that encapsulates various GNN architectures. It separates message computation and node update phases, supporting edge features and more flexible interactions:
$$
m_v^{(t)} = \sum_{u \in \mathcal{N}(v)} M_t\left(h_v^{(t)}, h_u^{(t)}, e_{uv}\right)
$$
$$
h_v^{(t+1)} = U_t\left(h_v^{(t)}, m_v^{(t)}\right)
$$
where $M_t$ is the message function, $U_t$ is the update function, and $e_{uv}$ denotes edge features.
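The two equations can be sketched generically, with the message and update functions supplied by the caller; the edge list, edge features, and the particular choices of `M` and `U` below are illustrative assumptions:

```python
import numpy as np

def mpnn_step(h, edges, e_feat, M, U):
    # One message-passing step: aggregate messages M(h_v, h_u, e_uv) at each
    # target node v, then apply the update U(h_v, m_v).
    m = np.zeros_like(h)
    for (u, v), e in zip(edges, e_feat):
        m[v] += M(h[v], h[u], e)
    return np.array([U(h[v], m[v]) for v in range(len(h))])

# Toy message/update functions: scale the sender's features by the edge
# feature, then add the aggregated message to the node state.
M = lambda hv, hu, e: hu * e
U = lambda hv, mv: hv + mv

h = np.array([[1.0], [2.0]])
edges = [(0, 1), (1, 0)]   # directed edges u -> v
e_feat = [2.0, 0.5]        # one scalar feature per edge
h_next = mpnn_step(h, edges, e_feat, M, U)
```

Choosing `M` and `U` differently recovers specific architectures (e.g., GCN-style normalized sums or GRU updates).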
4.3.7 ALL: Model Fusion Strategy. The ALL model combines embeddings from multiple GNN architectures to enhance robustness and leverage complementary strengths. Feature fusion can be implemented via concatenation:
$$
h_v^{\mathrm{ALL}} = \mathrm{Concat}\left(h_v^{\mathrm{GCN}}, h_v^{\mathrm{GAT}}, h_v^{\mathrm{SAGE}}, \ldots\right)
$$
or via summation:
$$
h_v^{\mathrm{ALL}} = h_v^{\mathrm{GCN}} + h_v^{\mathrm{GAT}} + h_v^{\mathrm{SAGE}} + \cdots
$$
This fusion strategy aims to capture diverse structural and semantic graph features.
# 4.4 Datasets used
We have used 3 publicly available datasets, described as follows:
# Cora:
– 2708 scientific publications across 7 classes
– 5429 citation links
– Each publication is a 1433-dimensional bag-of-words vector
# CiteSeer:
– 3312 documents classified into 6 classes
– 4732 citation links
– Each document represented by a 3703-dimensional feature vector
Bitcoin Transaction network:
– The Elliptic dataset is a labeled graph-based representation of Bitcoin transactions, designed to facilitate the analysis of illicit financial activity in cryptocurrency networks. Each node in the graph corresponds to an individual Bitcoin transaction, and directed edges denote the flow of Bitcoin between transactions. The dataset categorizes transactions based on the type of entity controlling the input addresses, with labels indicating whether a transaction is licit (e.g., exchanges, wallet providers, miners, and other legal services) or illicit (e.g., scams, malware, ransomware, Ponzi schemes, and terrorist organizations).
– The graph comprises 203,769 nodes and 234,355 edges, offering a focused view of transaction behavior in contrast to the full Bitcoin network, which contains hundreds of millions of nodes and edges. Out of all transactions, 4,545 (2%) are labeled as illicit, 42,019 (21%) as licit, and the remaining majority (77%) are unlabeled or unknown [40].
– Each transaction is associated with a 166-dimensional feature vector derived entirely from publicly available blockchain data. For our study, we considered only a subgraph of 5,000 nodes from this graph.
# 4.5 Loss functions used
We have considered 5 base losses and, further, their higher-order combinations.
4.5.1 Pointwise Mutual Information Loss (PMI_L). Let $N$ be the number of nodes in the graph, and let $\mathbf{z}_i \in \mathbb{R}^d$ denote the learned embedding of node $i$. Define $\mathrm{PMI}_{ij}$ as the pointwise mutual information between nodes $i$ and $j$, computed from structural statistics of the graph. Let $\mathrm{CosSim}(\mathbf{z}_i, \mathbf{z}_j)$ denote the cosine similarity between embeddings $\mathbf{z}_i$ and $\mathbf{z}_j$.
The loss aims to align embedding similarity with structural PMI and is defined as:
$$
\mathcal{L}_{\mathrm{PMI}} = -\frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \mathrm{PMI}_{ij} \cdot \mathrm{CosSim}(\mathbf{z}_i, \mathbf{z}_j)
$$
This formulation promotes high cosine similarity for node pairs with high PMI, and discourages it for those with low or negative PMI.
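A direct NumPy sketch of this loss; the PMI matrix and embeddings below are toy values rather than statistics computed from a real graph:

```python
import numpy as np

def pmi_loss(Z, PMI):
    # Negative mean of PMI-weighted cosine similarities over all node pairs.
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    cos = Zn @ Zn.T
    return float(-np.mean(PMI * cos))

# Nodes 0 and 1 co-occur (positive PMI) and have identical embeddings;
# node 2 has negative PMI with both and an orthogonal embedding.
Z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
PMI = np.array([[0, 2, -1], [2, 0, -1], [-1, -1, 0]], dtype=float)
loss = pmi_loss(Z, PMI)
```

Because high-PMI pairs already have high cosine similarity here, the loss is negative, i.e., this configuration is rewarded.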
4.5.2 Margin-based Contrastive Loss ( Contr_L). Given a graph with edge set $E$ , for each positive pair $( u , v ) \in E$ , we sample a negative node $k _ { u }$ not connected to $u$ . The embeddings $\mathbf { z } _ { u } , \mathbf { z } _ { v } , \mathbf { z } _ { k _ { u } } \in \mathbb { R } ^ { d }$ correspond to the anchor, positive, and negative nodes, respectively. Let $M > 0$ be the margin hyperparameter.
The loss encourages the anchor to be closer to the positive than the negative:
$$
\mathcal{L}_{\mathrm{Contrastive}} = \frac{1}{|E|} \sum_{(u,v) \in E} \max\left(0, M - \mathrm{CosSim}(\mathbf{z}_u, \mathbf{z}_v) + \mathrm{CosSim}(\mathbf{z}_u, \mathbf{z}_{k_u})\right)
$$
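A minimal sketch of this margin loss over an edge list; negatives are passed in explicitly here (rather than sampled) so the example is reproducible:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(Z, edges, negatives, margin=0.5):
    # Hinge loss: anchor u should be closer (in cosine similarity) to its
    # positive neighbor v than to the negative k, by at least `margin`.
    total = 0.0
    for (u, v), k in zip(edges, negatives):
        total += max(0.0, margin - cos(Z[u], Z[v]) + cos(Z[u], Z[k]))
    return total / len(edges)

Z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
satisfied = contrastive_loss(Z, edges=[(0, 1)], negatives=[2])  # margin met
violated = contrastive_loss(Z, edges=[(0, 2)], negatives=[1])   # margin violated
```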
4.5.3 Cross-Entropy-based Denoising Loss ( CrossE_L). Let $\mathbf { E } \in \mathbb { R } ^ { N \times D }$ be the clean input embedding matrix and $\hat { \mathbf { E } }$ be the reconstructed output from a denoising function. The loss minimizes the mean squared reconstruction error:
$$
\mathcal{L}_{\mathrm{DAE}} = \frac{1}{ND} \sum_{i=1}^{N} \sum_{j=1}^{D} \left(\mathbf{E}_{ij} - \hat{\mathbf{E}}_{ij}\right)^2
$$
Here, $\hat { \mathbf { E } } = \mathrm { D e n o i s e r } ( \mathbf { E } + \epsilon )$ , where $\epsilon \sim { \cal N } ( 0 , \sigma ^ { 2 } )$ is Gaussian noise.
4.5.4 PageRank-based Contrastive Loss ( PR_L). Let $A \subseteq \{ 1 , \ldots , N \}$ denote the set of anchor nodes. For each anchor $u \in A$ , select:
• $P_u = \arg\min_{v \neq u} |PR_u - PR_v|$ (positive: similar PageRank)
• $N_u = \arg\max_{w \neq u,\, w \neq P_u} |PR_u - PR_w|$ (negative: dissimilar PageRank)

With margin $M > 0$, the loss is:
$$
\mathcal{L}_{\mathrm{PR}} = \frac{1}{|A|} \sum_{u \in A} \max\left(0, M - \mathrm{CosSim}(\mathbf{z}_u, \mathbf{z}_{P_u}) + \mathrm{CosSim}(\mathbf{z}_u, \mathbf{z}_{N_u})\right)
$$
This encourages embeddings to reflect global ranking similarity.
4.5.5 Triplet Loss ( Triplet_L). Similar to contrastive loss, the triplet loss enforces that an anchor $u$ is closer to its positive neighbor $v$ than to a randomly sampled negative node $k _ { u }$ :
$$
\mathcal{L}_{\mathrm{Triplet}} = \frac{1}{|E|} \sum_{(u,v) \in E} \max\left(0, M - \mathrm{CosSim}(\mathbf{z}_u, \mathbf{z}_v) + \mathrm{CosSim}(\mathbf{z}_u, \mathbf{z}_{k_u})\right)
$$
This loss operates on edge-based triplets, using hinge-style ranking constraints to separate positive and negative pairs.
4.5.6 Other Higher-order Losses. Further, we have considered all possible higher-order combinations. For example, second-order combinations include Contr_L + Triplet_L, Contr_L + PR_L, and so on. Third-order combinations include Contr_L + Triplet_L + PMI_L, Contr_L + Triplet_L + PR_L, etc. The hybrid loss functions are optimized as follows. Let the base loss functions be denoted as $\mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_3, \mathcal{L}_4, \mathcal{L}_5$.
Define a set of raw, learnable parameters:
$$
\theta = \left\{ \theta_i, \theta_{ij}, \theta_{ijk} \mid 1 \leq i < j < k \leq 5 \right\}
$$
These parameters are passed through a sigmoid activation to constrain their corresponding weights in the interval $[ 0 , 1 ]$ :
$$
w_i = \sigma(\theta_i) = \frac{1}{1 + e^{-\theta_i}}, \quad w_{ij} = \sigma(\theta_{ij}), \quad w_{ijk} = \sigma(\theta_{ijk})
$$
# Hybrid Loss with Learnable Weights
The hybrid loss is constructed as a weighted sum of:

• individual (first-order) losses,
• pairwise (second-order) products of losses,
• triple (third-order) products of losses.

The full loss is given by:
$$
\mathcal{L}_{\mathrm{hybrid}} = \sum_{i=1}^{5} w_i \mathcal{L}_i + \sum_{1 \leq i < j \leq 5} w_{ij} \, \mathcal{L}_i \mathcal{L}_j + \sum_{1 \leq i < j < k \leq 5} w_{ijk} \, \mathcal{L}_i \mathcal{L}_j \mathcal{L}_k
$$
Similarly, fourth- and fifth-order combinations were also considered. This gives us a large space of exploration for understanding the emergent behavior of different loss functions.
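One reading of this hybrid construction (sigmoid-constrained weights applied to individual losses and to pairwise and triple products of losses) can be sketched as follows; the loss values and raw parameters below are toy numbers, and in training the θ parameters would be learned jointly with the GNN:

```python
import numpy as np
from itertools import combinations

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def hybrid_loss(losses, theta1, theta2, theta3):
    # First order: sigmoid-weighted sum of individual losses.
    total = sum(sigmoid(theta1[i]) * losses[i] for i in range(len(losses)))
    # Second order: sigmoid-weighted pairwise products.
    for (i, j), t in zip(combinations(range(len(losses)), 2), theta2):
        total += sigmoid(t) * losses[i] * losses[j]
    # Third order: sigmoid-weighted triple products.
    for (i, j, k), t in zip(combinations(range(len(losses)), 3), theta3):
        total += sigmoid(t) * losses[i] * losses[j] * losses[k]
    return float(total)

# Three toy base-loss values; all raw parameters at 0 give weights of 0.5.
losses = [1.0, 2.0, 0.5]
val = hybrid_loss(losses, theta1=[0.0] * 3, theta2=[0.0] * 3, theta3=[0.0])
```

The sigmoid keeps every weight in [0, 1], so no single term can be amplified without bound during optimization.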
# 5 Experimental Results
We have considered two settings, transductive and inductive, for evaluating the models' capabilities. Average Rank (last column in each table) is the mean of global ranks across all metrics (lower is better). We kept all experimental conditions the same for all datasets and results: all models were trained for 500 epochs with an early-stopping patience of 10 and an embedding dimension of 128. Since most of the evaluation metrics lie in the compact domain [0, 1], we multiplied the results by 100 to avoid losing information in the reported precision; this also reduces table size while conveying the same information. In the table captions, an arrow indicates whether a metric is higher-the-better (↑) or lower-the-better (↓).
# 5.1 Inductive Results
In this setting, we have trained the model on one dataset and tested it on other datasets. Further, the generated embeddings were evaluated and reported for the following metrics.
Pretrained on Cora and CiteSeer datasets: In these results, we trained the model on the Cora or CiteSeer dataset and applied it to generate node embeddings on an unseen dataset. The results are represented as "data used for pretraining ↓ applied data".
# 5.2 Analysis based on top 3 performance on each metric
We aimed to determine which model performs best overall and under what conditions specific combinations of models and loss functions should be used.
5.2.1 Transductive Case. To understand the overall effect of the model and loss in transductive settings, we analyzed the tables in Supplementary Information 1 & 2 [1–21]. Table 1 presents a comparative analysis of unsupervised GNN models trained with various loss functions. The ranking is based on inclusion in the top-3 positions across 21 evaluation metrics, capturing three key indicators: average rank, coverage, and top-1 wins.
# 1. Performance Leaders (Low AvgRank):
The GCN + CrossE_L combination achieves the best average rank (1.00) with perfect coverage (1) and a top-1 win, indicating strong and consistent performance across selected metrics. Similarly, GAT + CrossE_L (AvgRank 1.70) and SAGE + CrossE_L (AvgRank 2.00) also demonstrate competitive results, suggesting that Cross-Entropy Loss performs reliably when applied individually to classical GNN models.
# 2. Effective Hybrid Losses:
The combinations GAT + CrossE_L + Triplet_L and GAT + Contr_L + CrossE_L + Triplet_L achieve moderate average ranks (2.27 and 3.74, respectively), but stand out with the highest coverage (9). This reflects their consistent presence in the top-3 across many metrics. Notably, Triplet-based hybrid losses appear to generalize well across evaluation criteria, making them strong candidates for robust unsupervised learning.
# 3. Specialized High Performers (Top-1 Wins):
The GAT + Triplet_L combination achieves the highest number of top-1 wins (9) and the highest coverage (14), despite a less favorable average rank (6.97). This suggests that while its overall performance may be inconsistent, it excels on specific metrics. Similarly, MPNN + Contr_L and GIN + CrossE_L achieve multiple top-1 wins, highlighting their targeted effectiveness with particular loss functions.
# 4. Less Effective or Noisy Configurations:
Several combinations involving Contr_L + PMI_L or PR_L (e.g., ALL + Contr_L, PAGNN + Contr_L + CrossE_L + PMI_L) show high average ranks (ranging from 14.30 to 36.00) with low or zero top-1 wins. This suggests that adding more loss terms does not necessarily improve performance and may dilute the learning signal. In particular, the ALL + Contr_L configuration (AvgRank 29.30) significantly underperforms despite the theoretical advantage of combining multiple losses.
Table 1. Summary of Model Performance with Average Rank, Coverage, and Top-1 Wins based on transductive results.
5.2.2 Inductive Case. To answer these questions, we filtered only the top-3 results from the inductive experiments based on average rank. We have the following findings, based on the tables in Supplementary Information 1 & 2 [22–42]. We analyzed many aspects because, in the inductive setting, the model is used as a pretrained model on different datasets.
Table 2. The overall statistics of the model and loss function considering top 3 for each evaluation metric for inductive results.
Average Rank per Model (Lower is Better): This bar plot (Figure 1) displays the mean of average ranks for each model across all metrics and loss combinations. A lower average rank indicates a more consistent and generally better-performing model across diverse evaluation settings. Models are sorted in ascending order of their mean rank.
Average Rank per Loss Function (Lower is Better): Similar to the previous plot, this figure (Figure 2) shows the average rank for each loss function, aggregated over all models and tasks. This helps identify which loss functions contribute to more stable and superior performance in GNN training across various graph datasets and evaluation settings.
Heatmap of Average Rank for (Model, Loss) across Metrics: This heatmap (Figure 3) visualizes the average rank for each model-loss pair (rows) across all considered evaluation metrics (columns). The color gradient from green (lower rank, better) to red (higher rank, worse) highlights how different combinations perform on specific metrics. It helps identify task-specific strengths and weaknesses.
Count of Top-1 Ranked (Model, Loss) Combinations: This bar chart (Figure 4) reports how many times each model-loss combination achieved the best performance (rank 1) for any evaluation metric. A higher count indicates that a combination is frequently the top-performing setup, even if it may not have the best average rank overall. The x-axis represents the number of top-1 wins, indicating how often each model-loss combination achieved the best performance. The y-axis lists the different GNN models (GAT, GIN, GCN, MPNN), and the bars show the count of top-1 wins for each model when paired with specific loss functions (e.g., CrossE_L + Triplet_L, PMI_L, CrossE_L + PMI_L, etc.), highlighting which combinations were most effective. GAT together with CrossE_L + Triplet_L appeared in the top-1 position the maximum number of times.
Fig. 1. This figure shows Average Rank per Model
Fig. 2. This figure shows Average Rank per Loss Function. From inductive case top 3 results only.
Fig. 3. This is heatmap of model+loss function across metrics in inductive settings.
Histogram of Appearance Counts (Model and Loss Function): These two histograms (Figures 5, 6) report the frequency with which each model and loss function appeared in the top-$k$ results across evaluation metrics. This gives a sense of which methods were commonly evaluated and ensures fair comparison coverage.
Summary Table of Aggregated Results: The summary Table 2 includes the following statistics for each model-loss pair:
• AvgRank: Mean rank across all metrics (lower is better).
• Coverage: Number of unique metrics in which the model-loss pair was found in top 3 rankings.
• Top1Wins: Number of metrics for which the combination achieved rank 1.
The results in Table 2 reveal key insights into the performance of model–loss function combinations in inductive settings. The GIN model with Cross-Entropy loss (CrossE_L) achieves the lowest average rank (4.95) and secures 2 Top1Wins, marking it as the most consistent overall performer. Similarly, GCN with Contr_L + PMI_L ranks second (AvgRank 5.20), reaffirming the effectiveness of pairing simple architectures with hybrid losses. Notably, while GAT models do not dominate in average rank, they exhibit high Coverage and Top1Wins, especially with CrossE_L + Triplet_L (Coverage = 11, Top1Wins = 9) and PMI_L alone (Coverage = 9, Top1Wins = 3). This indicates GAT’s strength in excelling at specific metrics, despite overall inconsistency. The prevalence of hybrid losses (e.g., CrossE_L + PMI_L + PR_L) among top-ranked combinations underscores their benefit in capturing diverse graph properties. In contrast, MPNN consistently underperforms, occupying the bottom ranks with high average ranks and minimal Top1Wins, suggesting limited generalization in inductive contexts. In summary, the analysis highlights GIN and GCN as the most robust architectures overall, while GAT demonstrates targeted excellence. Hybrid loss design emerges as a critical factor in enhancing inductive performance.
Top-1 Count by Model+Loss Combination
Fig. 4. This figure shows top-1 model combinations. It shows how many times a model and loss function together ranked-1.
Fig. 5. Model appearance in top 3 rankings across all metrics.
Fig. 6. Loss appearance in top 3 rankings across all metrics.
5.2.3 Inductive and transductive together. Models in both cases for the top-3 results: Figure 7 presents a comparison of the average ranks of various Graph Neural Network (GNN) models across two settings: inductive and transductive. The x-axis lists the GNN models (GIN, GCN, PAGNN, GAT, ALL, SAGE, MPNN), while the y-axis represents the average rank, with lower values indicating better performance. The figure shows that the performance of these models varies between the inductive and transductive settings, with some models achieving lower ranks (better performance) in one setting compared to the other.
Inductive vs. transductive: (Model, loss) plot for the top-3 results: Figure 8 illustrates the average ranks of various model + loss function combinations across the inductive and transductive settings. The y-axis indicates the number of top-1 wins (higher is better), and each model + loss combination is plotted in this x–y plane, giving deeper insight into setting-specific behavior.
# 6 Discussion
The comprehensive analysis, spanning six distinct GNN architectures plus an additional combined model, over 30 loss functions, rigorously evaluated across three diverse datasets in both inductive and transductive settings using 21 distinct evaluation metrics, provides profound insights into the interplay of model architecture and loss function design. The summarized results in Tables 1 and 2 and Figure 8, which aggregate the top three performers for each of the 21 metrics based on their average rank, reveal several critical conclusions:
Fig. 7. This figure shows Average Rank achieved by models in inductive vs transductive settings.
Fig. 8. This figure shows Average Rank achieved by models $^ +$ loss function in inductive vs transductive settings.
[Figure 8: scatter plot of AvgRank vs. Top1Wins for the top model–loss combinations by setting (inductive vs. transductive). The x-axis is Average Rank (lower is better); points are numbered model + loss pairs such as GCN (CrossE_L), GAT (CrossE_L), and GAT (Contr_L + PMI_L).]
# 6.1 Overall Performance and the Efficacy of Hybrid Loss Functions (Inductive)
Since the focus of this work is on training GNNs for use as pretrained models, our conclusions are drawn exclusively from inductive settings. The results clearly demonstrate the superior overall performance of specific model–loss function pairings. Notably, the GIN architecture paired with Cross-Entropy loss (CrossE_L) stands out as the most consistent top performer, achieving the lowest average rank (4.95). This suggests that across a broad spectrum of tasks and evaluation metrics, GIN’s aggregation mechanism combined with a fundamental classification loss provides a robust and effective approach. Additionally, the strong performance of GCN with a hybrid loss combining Contrastive Loss (Contr_L) and PMI Loss (PMI_L) (average rank 5.20) highlights that simpler architectures, when thoughtfully paired with suitable loss functions, can also deliver highly competitive results.
Furthermore, the prominence of hybrid loss functions among the best-performing combinations underscores their vital role in enhancing GNN performance. While single loss functions serve as a foundation, they often lack the synergistic effects achieved by integrating multiple objectives. For example, the second-best GIN variant (GIN + CrossE_L + PMI_L + PR_L, average rank 6.40) and the leading SAGE model (SAGE + CrossE_L + PMI_L + PR_L, average rank 6.10) both leverage a blend of cross-entropy, pointwise mutual information, and neighborhood preservation losses. This demonstrates that optimizing across multiple complementary criteria—promoting accurate classification, capturing statistical dependencies, and maintaining structural fidelity—yields more balanced and effective node representations in inductive scenarios.
# 6.2 Nuanced Strengths of GAT and Architectural Comparisons (Inductive Settings)
Thirdly, the ‘Coverage’ and ‘Top1Wins’ metrics offer a more nuanced perspective on model performance that extends beyond average ranking alone. While combinations involving GIN and GCN frequently achieve the lowest average ranks, the GAT architecture exhibits a notable capacity for targeted dominance. For instance, GAT with CrossE_L + Triplet_L records 11 appearances in the top 3 and secures 9 first-place finishes, despite a comparatively higher average rank of 12.80. Similarly, GAT combined with a simple PMI_L loss achieves 9 top-3 coverages and 3 top-1 wins with an average rank of 8.04. These findings suggest that GAT, while not universally superior across all metrics, demonstrates distinct advantages in specific evaluation dimensions. This highlights its potential suitability for applications emphasizing particular criteria—such as recall, precision, or structural sensitivity—where tailored hybrid loss functions may amplify its strengths.
Lastly, the results reveal a clear stratification among GNN architectures in this comprehensive inductive evaluation. GIN and GCN consistently emerge as top performers, underscoring their robust and generalizable representation capabilities. GraphSAGE also exhibits competitive performance, particularly when enhanced with hybrid loss functions. In contrast, the MPNN architecture consistently underperforms, indicating challenges in adapting effectively across the diverse tasks and evaluation metrics considered.
# 1 Introduction
Large Language Models (LLMs) have emerged as powerful tools capable of understanding and generating human-like text. Despite these advances, factuality – the ability of LLMs to provide responses that are truthful and faithful to the real-world knowledge encountered during pre-training – remains a persistent challenge [20, 33]. Effectively, a lack of factuality manifests as ‘hallucination’ — the generation of plausible yet incorrect information — a pervasive issue that is still observed in frontier models [10, 1]. This issue is particularly critical when LLMs are used in settings demanding high factual precision, such as medical information synthesis [44], financial reporting [13], scientific data analysis [48], or educational content generation [23].
To evaluate and improve factual performance, the research community has developed a variety of benchmarks. However, existing benchmarks predominantly focus on single-value factuality, where the expected output is a short text span or a single scalar value (e.g., a date or named entity, or a numerical value) [49]. These tasks often emphasize reasoning complexity (e.g., multi-hop QA or ambiguous phrasing) [27, 50, 52] but overlook a fundamental aspect of factual competence: the ability of LLMs to generate long, coherent outputs directly from their internal parametric knowledge (i.e., the facts stored implicitly within the model’s parameters), without retrieving external documents.
In this work, we focus on structured, multi-record, tabular outputs to investigate the factuality of LLMs in synthesizing long sequences of facts. This task is motivated by two main arguments.
Output Size Matters. First, experiments highlight that retrieving tabular data from parametric memory presents a significantly greater challenge than recalling isolated cell values, even when the underlying facts are known to the model. For instance, prompting an LLM to return two attributes (e.g., name and state) for US counties yields near-perfect results. However, requesting additional attributes for the same set of counties (e.g., including county area) introduces factual errors in the results.
Crucially, if we then query the LLM for these specific incorrectly reported values in isolation (e.g., “What is the area of Maricopa county?”), the model returns the correct value, demonstrating that the error lies in the generation process, not in the absence of the underlying factual knowledge. Results show that the accuracy of retrieving a specific attribute (e.g., state) degrades linearly (from 1.0 to 0.2) as the total number of concurrently requested attributes increases from one to fifty, regardless of the target attribute’s position in the schema. These findings underscore that the structured, multi-attribute retrieval of factual data is not merely an extension of single-fact recall but a distinct capability with unique failure modes. Moreover, while it is hard to quantify precisely and with fine granularity the quality of the output in unstructured generation tasks [15], structured data allows punctual comparison at the single fact (cell) level.
Example: Q: What is the area of Maricopa county? A: 9 224 sq mi ✓
Increasing Importance of Tabular Output. Second, we argue that the structured factual retrieval capability of LLMs is both under-explored and essential. Several tasks require not just isolated facts about common world knowledge, but the generation of relational data: lists of entities, comparisons, and collections of items satisfying specific conditions [36, 29, 43, 41]. This requirement has been reported in sociology [45], business use cases [53], medical diagnosis [4], and financial use cases [3]. Obtaining tabular data is also increasingly relevant for user-facing applications, such as generating comparative tables of e-commerce products or structuring personalized trip itineraries [14, 47]. Yet, current benchmarks fall short in measuring this dimension. Existing datasets that do contain tabular data focus on its role as contextual input to the LLM, in the role of a corpus for question answering or fact checking [9, 35, 54, 2].
We define the Relational Fact Retrieval task as follows: given a query, the LLM must generate a structured table (rows and columns) containing factual information drawn purely from its parametric memory, prohibiting the use of external tools like web browsers during generation. To address the need for evaluating this capability, we introduce RelationalFactQA, a new benchmark designed to test LLMs’ ability to return factual knowledge in relational (i.e., tabular) form in a closed-book setting. RelationalFactQA probes this capability across several dimensions. The benchmark contains triples with the natural language (NL) question, the corresponding SQL script, and the expected answer in tabular format. To create it, we combine manually crafted questions (for linguistic variety) with systematically generated ones where the corresponding query complexity (e.g., specific SQL constructs) is controlled. Expected output tables span from small ones, with few tuples and attributes, to large ones. These dimensions enable analysis of LLMs’ performance across different logical operations (e.g., aggregates, filtering), data types (e.g., numerical, categorical), and retrieval methods (prompts with NL questions vs SQL queries).
Through extensive experimentation, we find that although larger models show improvement, the ability to produce correct structured answers remains limited — especially as the number of tuples and attributes increases or the query involves less common facts and numerical conditions. Moreover, we observe that even state-of-the-art models rarely exceed 25% factual accuracy on our benchmark.
To summarize, this paper makes the following contributions:
• Task formulation. We introduce Relational Fact Retrieval — the closed-book generation of multi-tuple, multi-attribute tables directly from an LLM’s parametric memory — and clarify how it differs from single-fact recall and context-based table QA.
• RelationalFactQA benchmark. We release a 696-question dataset covering nine knowledge domains, each triple-annotated with a natural-language query, its equivalent SQL statement, and a fully verified gold table (avg. 27 rows $\times$ 5 attributes).
• Hybrid construction pipeline. Our semi-automatic workflow unifies (i) manual curation from three existing corpora and (ii) YAGO-driven synthetic tables, yielding controlled variation in schema size, output size, and query complexity.
• Comprehensive empirical study. Nine LLMs (7B – 235B params) are benchmarked under three retrieval techniques (NL, SQL, Chain-of-Thought). Despite parameter scaling, no model exceeds 0.25 in tuple accuracy; performance degrades linearly with requested attributes. Code, prompts, and data will be open-sourced to drive future progress.
Our findings lay the groundwork for future research on factuality in LLMs, and position RelationalFactQA as a valuable resource for tracking progress on this critical capability.
Table 1: Closed-book QA datasets characteristics. Prior datasets have outputs with approximately one tuple and one attribute. In contrast, RelationalFactQA demands complex outputs, with an average of 27 tuples and 5.3 attributes per answer.
# 2 Related Work
The evaluation of factual accuracy in LLMs has led to the development of diverse benchmarks [8]. However, existing work evaluates an LLM’s ability to return short-span answers, rather than complex, structured relational data. As motivated in Section 1, the ability to generate such larger, structured outputs presents distinct challenges beyond single-fact recall, involving sustained coherence and factual consistency across multiple data points [30, 18]. Table 1 provides a comparative overview of output characteristics across several closed-book QA datasets and RelationalFactQA.
Factuality Evaluation Benchmarks. A significant body of work focuses on evaluating the factual correctness of LLM generations. Benchmarks such as TriviaQA, NQ-Open, and TruthfulQA assess LLMs’ ability to answer questions with short, often single-entity or single-value, factual statements [49]. While these are crucial for gauging general world knowledge, they do not probe the model’s capacity to synthesize answers as structured relations. As evident in Table 1, the expected outputs in these datasets typically consist of a single tuple and a single attribute. Other efforts like FactScore or HaluEval variants aim to quantify hallucination rates [20], but again, within the context of single-statement claims rather than structured relational outputs. Despite these varied evaluation efforts, the fundamental challenge of LLM hallucination persists as a critical concern [10, 1].
Table Question Answering and Reasoning. Several benchmarks like WikiSQL [54], WikiTableQuestions [35], and TabFact [9] involve tabular data. However, these benchmarks provide the relevant table(s) as input to the LLM, tasking it with understanding, reasoning over, or extracting information from the provided context [51]. In contrast, RelationalFactQA operates in a closed-book setting, where the LLM retrieves the tabular answer from its parametric knowledge. This shifts the evaluation from context-based reasoning to parametric relational knowledge retrieval.
Text-to-SQL. While RelationalFactQA uses SQL as one input modality to query the LLM’s knowledge, our focus is not on the correctness of SQL generation itself, which is the primary goal of Text-to-SQL benchmarks [52, 27, 31, 19, 42]. Instead, we evaluate the factual accuracy and completeness of the tabular data returned by the LLM in response to a query (be it in natural language or SQL). We manually filter examples from two Text2SQL datasets and adapt them to the Relational Fact Retrieval task in building our benchmark.
Knowledge Probes for LLMs. Prior research has explored using “knowledge probes” (e.g., LAMA [37]) to assess what factual information is stored in an LLM’s parameters, typically by prompting models to fill in missing tokens in cloze-style statements (e.g., “Paris is the capital of [MASK]”). These probes generally target single, atomic facts [38]. RelationalFactQA extends this concept from single-fact elicitation to probing for multi-tuple, multi-attribute relational structures.
In summary, while existing benchmarks address various facets of LLM factuality, RelationalFactQA fills a critical gap by specifically evaluating LLMs’ ability to act as “parametric databases,” retrieving factual information - in contrast with plausible data [5] - in a tabular format.
# 3 The Benchmark
Task Definition and Problem Formulation. We define the task of Relational Fact Retrieval as the generation of structured, multi-record, multi-attribute tabular data by an LLM in response to a query, relying exclusively on the model’s internal parametric knowledge.
Formally, the problem is formulated as follows:
• Input: The input is a query $q$ , which can be expressed either in natural language (NL) or as a Structured Query Language (SQL) statement. The query $q$ specifies the factual information to retrieve and the desired output relational structure.
• Output: The desired output is a table $\hat{T}$. This table is characterized by a schema $S = \{A_1, A_2, \ldots, A_k\}$, representing $k$ attributes (columns), and a set of $n$ tuples (rows), where $n \geq 0$. Each tuple $t_i \in \hat{T}$ is an ordered list of $k$ cell values $(v_{i1}, v_{i2}, \ldots, v_{ik})$, corresponding to the attributes in $S$.
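The formulation above can be made concrete with a small, illustrative data structure (a minimal sketch; the class and field names are our own, not part of the benchmark code):

```python
from dataclasses import dataclass

@dataclass
class Table:
    """Output of the Relational Fact Retrieval task: a schema of k
    attributes and n tuples, each an ordered list of k cell values."""
    schema: list[str]        # [A1, ..., Ak]
    tuples: list[list[str]]  # n rows, each with exactly k cells

    def __post_init__(self):
        # Every tuple must align with the schema.
        assert all(len(t) == len(self.schema) for t in self.tuples)

t = Table(schema=["name", "state", "area_sq_mi"],
          tuples=[["Maricopa", "Arizona", "9224"],
                  ["Cook", "Illinois", "1635"]])
print(len(t.tuples), len(t.schema))  # 2 3
```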
LLMs are instructed to operate in a closed-book evaluation setting and, where applicable, are technically restricted, e.g., by disabling access to external tools, web browsing functionalities, or code execution environments via API parameters. The closed-book setting is intentional: in retrieval-augmented generation (RAG) or tool-assisted workflows, the factual quality of outputs depends not only on the model’s internal knowledge, but also on external factors—such as retrieval accuracy, context formatting, or prompt design. These confounding variables make it difficult to isolate the LLM’s intrinsic factual competence. While retrieval-based methods may improve factual coverage, we hypothesize that the challenges observed in the closed-book setting also persist in open-book scenarios.
Dataset Construction. To build the RelationalFactQA dataset, we combine manual curation and semi-automatic generation.
In the manual pipeline, we consider 44 datasets from three existing corpora of examples (Spider [52], Bird [27], and Galois [41]) that contain natural language (NL) and SQL query pairs along with their underlying structured databases. We manually review each dataset in two steps. First, we identify the databases whose schema and entities are present on Wikipedia - this is important to ensure that the examples are within the knowledge scope of an LLM. Second, for each database, we retain only the NL queries that reference factual, world-knowledge content that is temporally stable, deliberately excluding subjective or dynamic information such as user reviews or prices. The corresponding SQL queries and their tabular outputs are finally included in the benchmark.
For the semi-automatic pipeline, we adopt a two-step process: first, we generate tables to serve as query targets; then, we construct corresponding NL question–SQL query pairs. To ensure that the table schemas and entities are likely to be known by LLMs, we extract data from the YAGO 4.5 knowledge base [46]–a structured resource derived from Wikidata. YAGO is organized around RDF triplets; each has a subject connected to an object through a predicate, e.g., “Trump, president, USA” or “NYC, population, 8.2M”. To obtain tables, we follow the procedure of selecting seven YAGO types (high-level classes, such as City and Country) and reorganizing the triples to collect multiple attributes for those (such as size in square km) [7].
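The triple-to-table reorganization described above can be sketched as follows (a minimal illustration of the general idea, not the authors' actual pipeline; the predicate and entity names are invented for the example):

```python
from collections import defaultdict

def triples_to_table(triples, attributes):
    """Pivot (subject, predicate, object) triples into rows keyed by
    subject, with one column per requested predicate."""
    rows = defaultdict(dict)
    for subject, predicate, obj in triples:
        if predicate in attributes:
            rows[subject][predicate] = obj
    # Keep only subjects for which every requested attribute was found.
    table = [["entity"] + list(attributes)]
    for subject, attrs in rows.items():
        if all(a in attrs for a in attributes):
            table.append([subject] + [attrs[a] for a in attributes])
    return table

triples = [
    ("NYC", "population", "8.2M"),
    ("NYC", "country", "USA"),
    ("Paris", "population", "2.1M"),
    ("Paris", "country", "France"),
    ("Berlin", "country", "Germany"),  # no population -> dropped
]
table = triples_to_table(triples, ["population", "country"])
# table -> [["entity", "population", "country"],
#           ["NYC", "8.2M", "USA"], ["Paris", "2.1M", "France"]]
```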
Using an automatic generator tool, Qatch [34], we then create the corresponding NL-SQL pairs for these YAGO-derived tables. To ensure controlled complexity for this segment of the benchmark, the Qatch generation strategy deliberately focuses on SELECT queries. These queries are designed to systematically vary in two main dimensions: the number of projected attributes (columns) and the complexity of selection, achieved by altering the number and nature of predicates in the WHERE clause. Therefore, the Qatch-generated queries predominantly feature projection and filtering operations, allowing for a targeted assessment of these core capabilities.
While the full RelationalFactQA benchmark incorporates a wider range of SQL operators, including JOIN and AGGREGATE functions, these more complex operators are sourced from the manually curated datasets (Spider, Bird, Galois). These human-authored queries contribute crucial linguistic and structural diversity to the benchmark. The rationale for the focused Qatch generation approach, emphasizing projection and selection, is that the primary challenge lies in the LLM’s ability to accurately retrieve the fundamental base data; if this initial extraction is flawed, any subsequent, more complex operations (such as the joins or aggregations found in other parts of the benchmark) would inherently build upon incorrect information. As the tool occasionally produces syntactically correct but semantically trivial or invalid queries, we manually remove such non-meaningful examples.
Finally, we perform targeted preprocessing steps to enhance consistency in the ground truth data. For all date attributes, we extract the year component to ensure that any condition involving dates can be treated as numerical comparisons, rather than requiring models to process full date-type values. Also, we manually removed noisy tuples, such as instances where organizations were listed as Nobel Prize laureates instead of individuals. These actions ensure comparable outputs across samples, focusing the evaluation on the fact retrieval capabilities.
(a) Source distribution: Qatch 71%, Bird 11%, Galois 10%, Spider 8%. (b) Query complexity distribution:

| Type | # Questions |
| --- | --- |
| SELECT without WHERE | 10 |
| WHERE numerical condition | 148 |
| WHERE categorical condition | 294 |
| WHERE mixed condition | |
| AGGREGATE | |
| JOIN | 67 |
| DISTINCT | 34 |
| GROUP BY | 13 |
| LIMIT | 11 |
| ORDER BY | 17 |

Figure 1: RFQA dataset. Source distribution and distribution of query complexity (SQL operators).
Dataset Statistics. The RFQA benchmark comprises 696 (question, query, answer) triples. As reported in Figure 1(a), the majority of questions (71%) are from the Qatch pipeline, ensuring controlled complexity and coverage, while contributions from Bird (11%), Galois (10%), and Spider (8%) provide diverse, human-authored queries. This hybrid approach allows RFQA to cover a range of factual domains, including common entities typically found within an LLM’s pre-training corpus.
A key characteristic of RFQA is the size of its target outputs, designed to test an LLM’s ability to generate structured relational data. As detailed in Table 1, ground truth answers in RFQA contain an average of 357 tokens, specifically 26.94 tuples (rows) and 5.32 attributes (columns), for an average of 135.50 cells per table. The output dimensions exhibit considerable variability: the number of tuples ranges from a minimum of 1 to a maximum of 904, while attributes span from 1 to 9. This contrasts sharply with prior QA benchmarks, which typically expect single-tuple, single-attribute answers. The attribute types within RFQA tables also vary; on average, each target table schema consists of approximately 1.06 numerical attributes, 3.16 categorical attributes, and 4.26 attributes containing mixed (numerical and string) data types.
The complexity of the retrieval task is also defined by the SQL constructs associated with each question. Figure 1(b) presents the distribution of SQL operators within RFQA. The distribution reflects our focus on evaluating the retrieval of data under diverse projection and filtering requirements.
# 4 Experimental Settings
Retrieval Methods. We evaluate LLMs on RFQA using three retrieval methods, each applied iteratively:
• NL. The LLM is directly prompted with a natural language query $q$ , requesting the model to return tabular results based on its internal knowledge.
• SQL. Similar to the NL approach, but the query $q$ is expressed in SQL. The model is expected to interpret the SQL semantics and return the corresponding tabular data.
• COT. Given an SQL query $q$ , a Chain-of-Thought approach [41] decomposes the query execution into two steps: (1) the LLM is prompted to retrieve the relevant base data (i.e., a broader result set), and (2) relational algebra operations are applied in memory on the intermediate output to produce the final filtered result. This method aims to improve retrieval accuracy by breaking queries into simpler tasks.
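The two-step CoT execution can be illustrated with a minimal sketch in which the first LLM call is stubbed with fixed data (all names and values here are illustrative, not the paper's implementation):

```python
def cot_execute(retrieve_base, predicate, projection):
    """Two-step chain-of-thought execution: (1) obtain the broader
    base relation, (2) apply selection and projection in memory."""
    base_rows = retrieve_base()                        # step 1: LLM call (stubbed)
    selected = [r for r in base_rows if predicate(r)]  # step 2a: WHERE
    return [{k: r[k] for k in projection} for r in selected]  # step 2b: SELECT

# Stub standing in for the first LLM prompt ("return all US counties ...").
def fake_llm_base():
    return [
        {"name": "Maricopa", "state": "Arizona", "area_sq_mi": 9224},
        {"name": "Cook", "state": "Illinois", "area_sq_mi": 1635},
    ]

result = cot_execute(fake_llm_base,
                     predicate=lambda r: r["area_sq_mi"] > 5000,
                     projection=["name", "state"])
# result -> [{"name": "Maricopa", "state": "Arizona"}]
```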
In all methods, the LLM is prompted with the query $q$ and the corresponding output schema $s$ expressed in JSON Schema format.
Output Processing. The prompt includes instructions for the model to return results in valid JSON. If no answer is found, the model is instructed to return an empty JSON object. Each strategy is applied iteratively. After the initial prompt, if the model returns a non-empty result, it is prompted again to return additional data until the model returns an empty JSON. Prompt templates used in the experiments are detailed in the Appendix.
Since LLMs do not always produce outputs in valid JSON format, we apply heuristics to extract and recover structured responses. Our approach begins by identifying all text enclosed between “{” and “}” or “[” and “]”. If this content forms a valid JSON object, we parse it directly and return it as the result. If the content is invalid, we re-prompt for correct formatting or attempt to repair common issues like syntax errors or truncation. If recovery fails, the response is treated as invalid. Further details on recovery strategies are in the Appendix.
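A minimal sketch of such a recovery heuristic might look as follows (this is our own illustration of the general idea, not the authors' code; the specific repair rules are assumptions):

```python
import json
import re

def extract_json(raw):
    """Best-effort recovery of a JSON payload from raw LLM output:
    take the outermost {...} or [...] span, then try a few cheap repairs."""
    match = re.search(r"(\{.*\}|\[.*\])", raw, re.DOTALL)
    if not match:
        return None
    candidate = match.group(1)
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Common issues: trailing commas, truncated closing bracket.
        repaired = re.sub(r",\s*([\]}])", r"\1", candidate)
        for suffix in ("", "]", "}", "]}"):
            try:
                return json.loads(repaired + suffix)
            except json.JSONDecodeError:
                continue
    return None  # treated as an invalid response

print(extract_json('Here is the table: [{"name": "Maricopa"},]'))
```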
Models. We use open-source and proprietary LLMs. To enhance reproducibility and obtain deterministic results, we set the temperature to 0.0. For open-source models, we adopt the following models hosted on Together.AI: Mistral-7B [21], Qwen2.5 and Qwen3 [39], LLama 3 (covering versions 3.1 and 3.3) [32], Gemma 2 [16], DeepSeek-LLama3 (as a base for the reasoning model DeepSeek R1 Distill LLama) [12]. As proprietary models, we use GPT-4.1 and GPT 4.1 mini [33].
Metrics. To evaluate the factuality of each LLM, we measure the quality of the produced responses. Each example in the RFQA dataset consists of a query $q$ (either NL or SQL) and the corresponding expected set of tuples $t_{exp}$ (the ground truth). To evaluate an LLM, we execute the query $q$ on it and collect the resulting set of tuples $t_{act}$. To assess the quality of the result, we compare the tuple sets $t_{exp}$ and $t_{act}$. We adopt two metrics commonly used to benchmark queries executed by LLMs [34]:
• F1: We compute the F1 score over the set of cells in $t _ { a c t }$ with respect to those in $t _ { e x p }$ . This metric evaluates performance at the cell level, disregarding tuple structure and focusing purely on the correctness of returned values.
• TS (Tuple Similarity): We measure the fraction of tuples in $t _ { e x p }$ that also appear in $t _ { a c t }$ , comparing tuples holistically. A Tuple Similarity score of 1.0 indicates that $t _ { e x p }$ and $t _ { a c t }$ share the same schema, cardinality, and cell values. This metric is stricter than F1, as it requires correct grouping of values within tuples, not just correct individual values.
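Under the definitions above, the two metrics can be sketched as follows (a simplified illustration that treats cells as exact-match values; the paper additionally applies normalization and approximate matching):

```python
from collections import Counter

def cell_f1(expected, actual):
    """Cell-level F1: compare multisets of cell values, ignoring tuple structure."""
    exp = Counter(v for row in expected for v in row)
    act = Counter(v for row in actual for v in row)
    overlap = sum((exp & act).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(act.values())
    recall = overlap / sum(exp.values())
    return 2 * precision * recall / (precision + recall)

def tuple_similarity(expected, actual):
    """Fraction of expected tuples that appear as whole rows in the output."""
    act = Counter(map(tuple, actual))
    hits = 0
    for row in map(tuple, expected):
        if act[row] > 0:
            act[row] -= 1
            hits += 1
    return hits / len(expected) if expected else 1.0

exp = [["Maricopa", "Arizona"], ["Cook", "Illinois"]]
act = [["Maricopa", "Arizona"], ["Cook", "Indiana"]]
# F1 rewards the 3 correct cells; TS only the 1 fully correct row.
print(cell_f1(exp, act), tuple_similarity(exp, act))  # 0.75 0.5
```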
To account for superficial differences in formatting (e.g., “1K” vs. “1000”), we normalize all cell values in both $t_{act}$ and $t_{exp}$ before evaluation. This step mitigates false negatives caused by representational variations. The normalization process involves the following steps: (i) replacing accented characters with their unaccented equivalents (e.g., “é” → “e”); (ii) converting all characters to lowercase; (iii) converting shorthand numeric notations like “1K” or “1M” into the corresponding numeric values (e.g., “1K” → 1000); (iv) standardizing numeric formats (e.g., converting “1.000,5” and “1,000.5” into a consistent representation).
Moreover, since LLMs may produce answers that are close, but not identical, to the ground truth (e.g., “Bill Clinton” vs. “Bill J. Clinton”), we incorporate approximate matching. Specifically, we use Edit Distance [40] with a threshold of 10% relative to the length of the expected string. For numerical values, we apply a tolerance of ±10% relative to the expected number.
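A simplified sketch of the normalization and approximate matching described above (our own illustration; it handles only US-style thousands separators and omits some of the paper's normalization steps):

```python
import unicodedata

def normalize_cell(value):
    """Normalize a cell: unaccent, lowercase, expand 1K/1M shorthands,
    and parse plain numbers where possible."""
    s = unicodedata.normalize("NFKD", str(value))
    s = "".join(c for c in s if not unicodedata.combining(c)).lower().strip()
    if s.endswith(("k", "m")) and s[:-1].replace(".", "", 1).isdigit():
        return float(s[:-1]) * (1_000 if s[-1] == "k" else 1_000_000)
    try:
        return float(s.replace(",", ""))  # "1,000.5" -> 1000.5
    except ValueError:
        return s

def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def cells_match(expected, actual, rel_tol=0.10):
    """Approximate match: ±10% for numbers, 10%-of-length edit distance for strings."""
    e, a = normalize_cell(expected), normalize_cell(actual)
    if isinstance(e, float) and isinstance(a, float):
        return abs(e - a) <= rel_tol * abs(e)
    e, a = str(e), str(a)
    return edit_distance(e, a) <= int(rel_tol * len(e))

print(cells_match("1K", "1000"))                          # shorthand expansion
print(cells_match("Bill J. Clinton", "Bill J. Clinten"))  # one-character typo
```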
To compare two tuples $t _ { a }$ and $t _ { e }$ , we evaluate each pair of corresponding cells based on their shared attribute, using the same comparison strategy as defined previously for the cells. While our current implementation uses simple, efficient matching rules, more advanced approaches such as entity resolution [11, 6] or tuple-level instance comparison [17] could be applied for more nuanced matching, but they require manual user configuration and thus cannot be easily used as a metric.
# 5 Results
We organize our evaluation around three main research questions.
1. Factuality. To what extent can LLMs generate factual tables based on their internal knowledge?
2. Extraction Techniques. Are LLMs more effective at generating tabular responses from SQL queries compared to NL questions? Does CoT help in getting better results?
3. Query complexity. Does LLMs’ performance depend on the schema and the query complexity?
Table 2: Benchmark Results. F1 and Tuple Similarity (TS) measured for all LLMs in our evaluation. AVG is the average of F1 and TS. LLMs are ordered by increasing size in terms of parameters.
Exp-1. Overall Performance We evaluate all LLMs in our benchmark using the RFQA dataset and report their performance using the two quality metrics: F1 and TS. To provide a single, comparable measure of factual accuracy across models, we also compute the average of F1 and TS.
The results in Table 2 reveal that increasing the number of model parameters generally leads to improved quality performance (and thus factuality) across all retrieval methods (NL, SQL, and COT). However, the task remains inherently difficult. While larger models, such as Qwen 3, achieve F1 scores above 0.6, this improvement does not translate to accurate tuple-level results. The best TS score is only 0.247, obtained by GPT 4.1, highlighting that even frontier models often return wrong values in output tuples.
This experiment also shows that querying using NL has an edge over SQL in all models, while the COT approach leads to improved retrieval with all LLMs except GPT 4.1.
Takeaways for questions (1) and (2): LLMs still struggle to consistently retrieve structured factual knowledge as complete output tuples. NL slightly outperforms SQL as a retrieval method, while CoT provides benefits in most settings.
Exp-2. Performance by Attribute Type. To investigate the third research question, we exploit the metadata used to annotate each query $q$ in RFQA. In this experiment, we analyze model performance based on the type of attributes in the query output. We divide the queries into two categories: those that return only numerical values and those that return only categorical values. We use the average of the F1 and TS scores as the metric.
Results in Table 3 show that extracting categorical values is generally easier for small and medium LLMs than retrieving numerical ones. However, larger models perform better on numerical queries than on categorical ones when using SQL and CoT.
Exp-3. Performance by Output Size. We focus on the top-3 performing LLMs and analyze how their performance varies with the size of the expected output. We group the results according to: (a) the number of attributes requested in the query, and (b) the overall output size, measured as the number of expected cells (#rows $\times$ #attributes). We use the TS metric, which accounts for both the structure and completeness of the returned data.
Figure 2 summarizes our findings. On the left side, we show how quality decreases as the number of requested attributes increases, indicating that LLMs struggle more when asked to retrieve wider tables. On the right side, we plot the TS score against the total number of expected cells. The trend remains consistent: as the number of rows and columns grows, the model’s ability to return accurate, complete tabular data declines.
Table 3: Quality measured as the AVG between F1 and TS w.r.t. type of output attributes.
Figure 2: TS results for LLama 3.3, GPT-4.1 and QWEN 3, with all retrieval techniques, w.r.t. the expected output size, measured as the number of attributes (left) and cells (right).
Table 4: Quality measured as the AVG between F1 and TS w.r.t. query complexity.
Exp-4. Query Complexity. Table 4 provides a breakdown of performance w.r.t. the query complexity. We observe that as query complexity increases, the quality of the generated responses tends to decrease. Simple queries such as SELECT without WHERE consistently achieve the highest scores, while complex constructs like JOIN, AGGREGATE, or multi-condition WHERE clauses report substantially lower results across all models and retrieval methods. In particular, the JOIN operator represents a notable challenge. Scores are low for all models, especially in the COT setting as it does not yet support joins over multiple tables. Despite its limitations, the COT strategy demonstrates meaningful gains for several complex operations. This highlights the benefit of breaking down query execution into intermediate reasoning steps. Finally, certain operators such as LIMIT and ORDER BY appear systematically difficult for all models and prompting strategies. These constructs require precise handling of position and ordering in the output tuples — capabilities that autoregressive models struggle to maintain as the result set grows.
Results Discussion. The challenges observed in generating extensive and accurate tabular data from parametric memory resonate with known LLMs’ limitations in long-sequence generation. While issues such as maintaining thematic coherence [30], mitigating factual drift [20], and managing error propagation in autoregressive systems [18] are recognized in tasks involving lengthy free-form text, the generation of tabular outputs magnifies these problems. Specifically, the dual axes of table “size” - the number of rows (tuples) and the number of columns (attributes) - impose distinct pressures on the model’s generative capabilities.
Our findings suggest that the demand for concurrent retrieval and precise alignment of numerous facts strains the model’s effective “working memory” or its ability to maintain sustained attention to all constraints of the query [30]. The fact that LLMs often correctly retrieve individual facts in point-wise queries (e.g., the area of a specific county that is reported incorrectly in a larger table) underscores that the bottleneck is frequently not an absence of the underlying factual knowledge. Instead, the difficulty lies in the process of composing the individual pieces of information into a larger relational structure. This distinction points towards limitations in the architectural or learned capabilities for synthesis from parameters, rather than simply gaps in memorized knowledge. The dense factual requirement of tabular data, where each cell represents a correct assertion, and the inflexible nature of its structural integrity, make it a valuable testbed for these aspects of LLM performance, revealing failure modes that are less explicitly quantifiable in unstructured generation tasks [15]. | Factuality in Large Language Models (LLMs) is a persistent challenge. Current benchmarks often assess short factual answers, overlooking the critical ability to generate structured, multi-record tabular outputs from parametric knowledge. We demonstrate that this relational fact retrieval is substantially more difficult than isolated point-wise queries, even when individual facts are known to the model, exposing distinct failure modes sensitive to output dimensionality (e.g., number of attributes or records). To systematically evaluate this under-explored capability, we introduce RelationalFactQA, a new benchmark featuring diverse natural language questions (paired with SQL) and gold-standard tabular answers, specifically designed to assess knowledge retrieval in a structured format. 
RelationalFactQA enables analysis across varying query complexities, output sizes, and data characteristics. Our experiments reveal that even state-of-the-art LLMs struggle significantly, not exceeding 25% factual accuracy in generating relational outputs, with performance notably degrading as output dimensionality increases. These findings underscore critical limitations in current LLMs' ability to synthesize structured factual knowledge and establish RelationalFactQA as a crucial resource for measuring future progress in LLM factuality. | [
"cs.CL",
"cs.AI",
"cs.DB"
] |
# 1 Introduction
Real-world systems such as social networks and communication networks are often modeled as temporal graphs, where edges are associated with time intervals or discrete time points, indicating when they are active. In such graphs, certain individuals play a crucial role in driving rapid information diffusion or facilitating rumor spreading. Temporal generalizations of betweenness centrality are defined based on the optimal temporal paths. However, a significant challenge lies in the computational complexity of these generalizations, as calculation requires evaluating all possible temporal paths, which is resource-intensive. To address this, we propose a novel graph neural network (GNN) model designed to accurately predict Temporal Betweenness Centrality (TBC) values.
Figure 1: Distribution of TBC values in the ia-reality-call dataset
Existing graph learning models primarily focus on either computing betweenness centrality (BC) for static graphs [10, 24] or performing vertex ranking on temporal graphs [22, 52]. To the best of our knowledge, only DBGNN [15] predicts TBC values on temporal graphs. However, its prediction accuracy is low: our experiments show that the mean absolute error (MAE) for TBC reaches 496 on the Highschool2013 dataset. Unlike conventional ranking tasks, predicting TBC values on temporal graphs is more challenging due to the extreme class imbalance problem.
To illustrate this, we conducted a statistical analysis of the distribution of TBC values in the ia-reality-call dataset, as shown in Figure 1. Figure 1a groups nodes into three categories (i.e., zero-, median-, and high-value) based on their TBC scores. The results reveal a highly imbalanced distribution, with $96.2\%$ of nodes having zero TBC, while only $1.9\%$ fall into each of the median- and high-value categories. Figure 1b presents a log-scale histogram of TBC values. The non-zero range is evenly divided into 9 equal-width intervals, with a separate bucket for zero values. It further demonstrates that high-TBC nodes are not only rare but also span a wide range of magnitudes. This highlights the highly imbalanced nature of the TBC distribution, where median and high values have significantly fewer observations. This imbalance causes two problems: (1) training is inefficient, since most vertices have zero TBC and contribute no useful signal for learning about critical nodes with large TBC values; and (2) the zero values can overwhelm training and lead to degenerate models.
In such circumstances, applying existing temporal graph models (e.g., TATKC [52], DBGNN [15]) results in significant limitations. These models fail to adequately learn the true TBC values of vertices in the median and high-value intervals, leading to most predicted TBC values in these categories being erroneously classified as 0. Existing work [50] has shown that trained models tend to be biased toward head classes (e.g., the zero-value class) with massive training data, resulting in poor performance on tail classes (e.g., median- and high-value intervals) that have limited data. This is particularly problematic as vertices in the median and high-value intervals are critical hubs for analyzing information flow and propagation. We identify TBC distribution imbalance during training as the main obstacle impeding the model from achieving good accuracy.
Directly applying techniques from imbalanced text and image learning to graph data, such as oversampling, undersampling, loss re-weighting, and data augmentation (e.g., node dropping, edge perturbation, and feature masking), has proven ineffective, as demonstrated by the experimental results in Table 3. Unlike pixels in images, graph nodes are inherently interconnected, and modifying the topology directly alters the TBC values, thereby distorting the underlying data distribution. Inspired by the intuition behind contrastive learning, which aims to identify shared features among similar instances while distinguishing features among dissimilar ones, we propose applying this paradigm to TBC prediction. Our objective is to improve the accuracy of predictions for medium and high TBC values, particularly by avoiding their misclassification as zero-value nodes. By incorporating contrastive learning, the representations of zero-value vertices can be made more similar, while the representations of median- and high-value nodes are positioned farther away from those of zero-value nodes. This separation allows the model to better capture and distinguish the unique features of median- and high-value nodes, ultimately improving the accuracy of predictions across all TBC value ranges.
Based on these insights, we propose CLGNN, a Contrastive Learning-based GNN model composed of representation and prediction modules. The representation module aims to learn path-time dualaware vertex representations that effectively capture both path-based and temporal dependencies. We construct an instance graph where each node aggregates all timestamps from its outgoing edges, and maintain reachable temporal path count lists that record valid temporal path dependencies while filtering out irrelevant paths. We then design a path count encoding, which, together with time encoding, is fused into a dual aggregation mechanism (i.e., mean aggregation and edge-to-node multi-head attention) to encode structural and temporal characteristics into rich node embeddings.
In the prediction module, we design two complementary components, KContrastNet and ValueNet. KContrastNet adopts a contrastive learning strategy to differentiate features among low, medium, and high-value nodes. To improve the selection of positive and negative sample pairs, we first perform clustering and then select positive pairs as nodes with similar TBC values within the same cluster. Conversely, nodes with significantly different TBC values within the same cluster are treated as negative pairs. ValueNet utilizes multi-layer perceptrons (MLPs) to estimate temporal betweenness centrality (TBC) values. To summarize, our main contributions are as follows:
• We propose the first inductive and scalable contrastive learning-based TBC regression model that is trained on small graphs and generalizes to unseen graphs. In contrast to the state-of-the-art method DBGNN [15], which trains and tests on the same graph, thereby limiting its generality, our model supports cross-graph inference.
• The proposed model, CLGNN, jointly learns path-time dual-aware node representations and refines node embeddings via an improved stability-based clustering-guided contrastive learning to enhance TBC prediction.
• We conduct extensive experiments on 12 real datasets of varying sizes to evaluate CLGNN. CLGNN runs up to $663.7\times$ faster than leading exact TBC computation methods. It also achieves up to $31.4\times$ lower MAE and up to $16.7\times$ higher Spearman correlation compared to top static GNN baselines, and surpasses state-of-the-art temporal GNNs with up to $5.7\times$ lower MAE and $3.9\times$ higher Spearman correlation.
# 2 Related work
Traditional BC computation method. Representatives for exact BC computation on static graphs include Brandes’ algorithm [2] and its variants, such as Brandes$^{++}$ [9] and BADIOS [40]. Approximate methods leverage techniques such as source sampling [3, 17] and node-pair sampling [33, 34, 1, 7, 29] to improve accuracy and efficiency; notable examples include RK [33], KADABRA [1], and ABRA [34], which offer theoretical guarantees. Variants like $k$-betweenness centrality [30], $\kappa$-path centrality [19], top-$k$ ego BC [51], and coarse-grained and fine-grained BC [48] aim to simplify computation. In temporal graphs, research has focused on temporal paths [5, 46] and BC variants. Tsalouchidou et al. [43] explored exact computation using static snapshots or predecessor graphs. Methods like ONBRA [39] have introduced approximation techniques with probabilistic guarantees. Zhang et al. [53] proposed exact and approximate methods based on transformed time instance graphs. Regunta et al. [32] leveraged parallelism to update TBC values incrementally. Despite these optimizations, the computational complexity of TBC remains high for large-scale temporal graphs.
Graph neural network based model. The modeling of dynamic graphs primarily focuses on leveraging temporal neural networks to capture evolving structures. Early approaches extended static GNNs by incorporating RNNs to capture temporal dependencies [27, 38]. More recent methods, such as TGAT [49], TGN [35], and GATv2 [4], integrate self-attention mechanisms. De Bruijn Neural Networks [31] extend causality-aware mechanisms to temporal graphs. Nevertheless, these models fail to capture the intricate path structures required for betweenness computation, leading to low estimation accuracy. Some models are specially designed for BC ranking or value prediction on static graphs. Early studies [26] used machine learning, while more recent approaches [10, 25] exploit graph attention and reinforcement learning to approximate centrality. CNCA-IGE [55] adopts an encoder-decoder architecture for centrality ranking. In temporal settings, TATKC [52] uses time-injected attention for Katz centrality ranking, and DBGNN [15] predicts centrality with a time-aware GNN. While effective for ranking, these models struggle with precise TBC value prediction and often misclassify mid-to-high-value nodes as zero. Additionally, DBGNN is transductive, limiting generalization to unseen networks.
# 3 Preliminaries
Definition 1. (Temporal Graph). A directed temporal graph is denoted by $G = (V, E, T)$, where (i) $V$ is a vertex set; (ii) $E \subseteq V \times V \times T$ is the set of directed time-stamped edges; and (iii) $T$ is the set of discrete or continuous time points at which interactions occur. Each edge $e = (u, v, t) \in E$ represents a connection from node $u$ to node $v$ at a specific time $t \in T$.
Definition 2. (Temporal Path). A temporal path in $G = (V, E, T)$ is a sequence of edges $p = u \xrightarrow{t_1} w_1 \cdots \xrightarrow{t_{m-1}} w_{m-1} \xrightarrow{t_m} v$, where (i) each edge in $p$ belongs to the edge set $E$; and (ii) the sequence of timestamps satisfies $0 < t_{i+1} - t_i \le \delta$ for $1 \leq i < m$, where $\delta \in \mathbb{R}$, ensuring that the interactions not only follow an ascending temporal order but also occur within a bounded time interval.
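The timestamp constraint in Definition 2 can be checked directly. The sketch below (an illustrative helper, not part of the paper's code) validates the sequence of edge timestamps along a candidate path:

```python
def is_valid_temporal_path(timestamps, delta):
    """Definition 2: consecutive timestamps along a temporal path must be
    strictly increasing and at most `delta` apart (0 < t_{i+1} - t_i <= delta)."""
    return all(0 < t2 - t1 <= delta for t1, t2 in zip(timestamps, timestamps[1:]))
```

For example, `[1, 2, 4]` is valid for `delta=2`, while `[1, 2, 5]` violates the bounded waiting time and `[1, 1, 2]` violates strict ascent.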
Temporal graphs differ from static graphs due to the presence of temporal constraints, which lead to multiple notions of optimal paths beyond the standard shortest path. Common definitions include the shortest, earliest arrival, and latest departure temporal paths, each optimizing for a different temporal criterion. Given a temporal graph $G = (V, E, T)$, a shortest temporal path from source $s$ to destination $d$ is a temporal path (not necessarily unique) that contains the minimum number of hops (edges). An earliest arrival temporal path reaches $d$ as early as possible. A latest departure path leaves $s$ as late as possible while still reaching $d$ on time. Based on optimal paths, the formal definition of temporal betweenness centrality (TBC) [43, 53] is as follows:
Definition 3. (Temporal Betweenness Centrality, TBC). Given a temporal graph $G = (V, E, T)$, the normalized temporal betweenness centrality value $\mathrm{TBC}(v)$ of a node $v \in V$ is defined as:
$$
T B C ( v ) = \frac { 1 } { | V | ( | V | - 1 ) } \sum _ { \forall s \neq v \neq z \in V } \frac { \sigma _ { s z } ( v ) } { \sigma _ { s z } } ,
$$
where $|V|$ is the total number of vertices in $G$; $\sigma_{sz}$ denotes the number of optimal temporal paths from $s$ to $z$; and $\sigma_{sz}(v)$ represents the number of such paths that pass through $v$.
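Under the shortest (minimum-hop) criterion, Definition 3 can be evaluated by brute force on a toy graph. The sketch below (illustrative only; it enumerates simple paths exponentially, whereas exact methods such as ETBC are far more efficient, and function names are hypothetical) accumulates the pair-dependency ratios $\sigma_{sz}(v)/\sigma_{sz}$:

```python
from collections import defaultdict

def temporal_bc(nodes, edges, delta):
    """Brute-force normalized TBC (Definition 3) under the shortest
    (minimum-hop) temporal path criterion; exponential, toy graphs only."""
    out = defaultdict(list)
    for u, v, t in edges:
        out[u].append((v, t))
    paths = defaultdict(list)            # (s, z) -> valid simple temporal paths

    def dfs(path, last_t):
        for v, t in out[path[-1]]:
            if v in path:                # restrict to simple paths for brevity
                continue
            if last_t is None or 0 < t - last_t <= delta:
                paths[(path[0], v)].append(path + [v])
                dfs(path + [v], t)

    for s in nodes:
        dfs([s], None)

    tbc = dict.fromkeys(nodes, 0.0)
    for (s, z), ps in paths.items():
        min_hops = min(len(p) for p in ps)
        shortest = [p for p in ps if len(p) == min_hops]
        for p in shortest:
            for v in p[1:-1]:            # only interior vertices accrue credit
                tbc[v] += 1.0 / len(shortest)
    n = len(nodes)
    return {v: c / (n * (n - 1)) for v, c in tbc.items()}
```

On the chain `a -> b @ 1`, `b -> c @ 2` with three nodes, $b$ mediates the single optimal $a \to c$ path, so $\mathrm{TBC}(b) = 1/(3 \cdot 2) = 1/6$; raising the waiting gap beyond $\delta$ drops it back to zero.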
# 4 Proposed model
Figure 2: Overview of the proposed CLGNN framework. The representation module applies the path-time dual-aware mechanism (continuous time encoding and temporal path count encoding) and the dual message aggregator (mean aggregation and edge-to-node multi-head attention); the prediction module consists of KContrastNet (contrastive learning over clustered embeddings) and ValueNet (MLP-based TBC value estimation), trained with the weighted combination of $\mathcal{L}_{contrast}$ and $\mathcal{L}_{regress}$.
To effectively predict TBC under strong data imbalance, we propose a contrastive learning-based GNN model, CLGNN, as illustrated in Figure 2. CLGNN is composed of two main components, i.e., a representation module and a prediction module. We describe each module in detail below.
# 4.1 Representation Module
To capture the temporal paths and preserve the temporal dependencies, we first transform temporal graphs into an instance-level structure, where each node $v$ is associated with the set $T_{out}(v)$ of timestamps attached to $v$’s outgoing edges. In addition, for each node $u$ and its outgoing edge $(u, v, t)$, we maintain the number of valid temporal paths, denoted as:
$$
P ( u , v , t ) = \sum _ { t _ { v } \in T _ { \mathrm { o u t } } ( v ) } \mathbb { I } ( t _ { v } - t ) ,
$$
where $\mathbb{I}(x)$ is an indicator function defined as: $\mathbb { I } ( x ) = { \left\{ \begin{array} { l l } { 1 , } & { { \mathrm { i f ~ } } x > 0 } \\ { 0 , } & { { \mathrm { o t h e r w i s e } } } \end{array} \right. }$ . If $P(u, v, t) = 0$, then the message generated by node $u$ will not be transmitted to node $v$ through the edge with timestamp $t$ during the message passing process.
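The count $P(u, v, t)$ is a direct transcription of the sum of indicators above: for each edge, count how many of $v$'s outgoing timestamps are strictly later than $t$. A minimal sketch (hypothetical helper name; as in the equation, only the indicator $t_v > t$ is applied here):

```python
from collections import defaultdict

def path_counts(edges):
    """P(u, v, t): number of v's outgoing timestamps strictly later than t
    (the sum of indicators in the equation above). An edge with P == 0
    forwards no message from u to v during message passing."""
    t_out = defaultdict(list)
    for u, v, t in edges:
        t_out[u].append(t)
    return {(u, v, t): sum(1 for tv in t_out[v] if tv > t)
            for u, v, t in edges}
```

For edges `a -> b @ 1`, `b -> c @ 2`, `b -> d @ 3`, the edge `(a, b, 1)` gets count 2 (node $b$ acts again at times 2 and 3), while `(b, c, 2)` gets 0 because $c$ never acts afterwards.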
Path-Time dual-aware mechanism. It combines continuous time encoding and temporal path count encoding, jointly capturing temporal dependencies and relevant path information in the message passing process.
• Continuous Time Encoding. We employ the approach in [49], which applies Bochner’s theorem and a Monte Carlo integral to map the temporal domain $T = [0, t_{\mathrm{max}}]$ to a $d$-dimensional vector space.
$$
\Phi _ { t i m e } ( t ) = \sqrt { \frac { 1 } { d _ { T } } } \left[ \cos ( \omega _ { 1 } t ) , \sin ( \omega _ { 1 } t ) , \dots , \cos ( \omega _ { d _ { T } } t ) , \sin ( \omega _ { d _ { T } } t ) \right] ^ { T } ,
$$
where $d _ { T }$ is the finite dimension and $\boldsymbol { \omega } = ( \omega _ { 1 } , \ldots , \omega _ { d _ { T } } ) ^ { T }$ are learnable parameters.
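As a concrete sketch, the encoding interleaves cosine and sine features at the frequencies $\omega$ (a minimal NumPy version; in the model the frequencies are learnable, here they are passed in as a fixed array):

```python
import numpy as np

def time_encoding(t, omega):
    """Phi_time (after [49]): map a scalar time t to a 2*d_T vector of
    interleaved cos/sin features, scaled by sqrt(1/d_T)."""
    d_T = len(omega)
    feats = np.empty(2 * d_T)
    feats[0::2] = np.cos(omega * t)   # cos(omega_i * t) at even slots
    feats[1::2] = np.sin(omega * t)   # sin(omega_i * t) at odd slots
    return np.sqrt(1.0 / d_T) * feats
```

At $t = 0$ every cosine feature equals $\sqrt{1/d_T}$ and every sine feature is zero, which is a quick sanity check on the layout.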
• Temporal Path Count Encoding. Temporal path count encoding processes the number of temporal paths to provide stable and efficient input features for the neural network. It is defined as:
$$
P(u, v, t) \xrightarrow{\ \log\text{-transform}\ } \log(1 + P(u, v, t)),
$$
$$
\Phi _ { p a t h } ( u , v , t ) = \mathbf { M L P } ( \operatorname { r e s h a p e } ( \log ( 1 + P ( u , v , t ) ) ) ) ,
$$
where the reshaping operation $\mathrm{reshape}(1, 1)$ converts the scalar into a suitable input format for the network. Note that the path count $P(u, v, t)$ undergoes a $\log(1 + P)$ transformation, which compresses the influence of large path counts on model training, alleviating gradient explosion issues while preserving non-negativity and the distinguishability of the values.
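The encoding pipeline above can be sketched end to end (a single tanh layer stands in for the MLP, purely for illustration; `W` and `b` are assumed learnable parameters):

```python
import numpy as np

def path_count_encoding(P, W, b):
    """Phi_path: log(1 + P) compresses heavy-tailed path counts, the scalar
    is reshaped to (1, 1) as in the text, and a small MLP (one tanh layer
    here) lifts it to a feature vector."""
    x = np.array([[np.log1p(float(P))]])   # the reshape(1, 1) step
    return np.tanh(x @ W + b).ravel()
```

With zero weights the output is the zero vector, and a count of $P = 0$ maps to zero activation since $\log(1 + 0) = 0$.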
Message Function. For each interaction event $\boldsymbol { e } = ( u , v , t )$ , a message is computed to update the representation of node $v$ . Specifically, the message at layer $l$ is defined as:
$$
m _ { u v } ^ { l } ( t ) = \big ( h _ { u } ^ { l - 1 } ( t ^ { - } ) | | h _ { v } ^ { l - 1 } ( t ^ { - } ) | | \Phi _ { t i m e } ( t - t ^ { - } ) | | \Phi _ { p a t h } ( u , v , t ) \big ) ,
$$
where $h_u^{l-1}(t^-)$, $h_v^{l-1}(t^-) \in \mathbb{R}^d$ are the hidden representations of nodes $u$ and $v$ at the previous layer (layer $l-1$) just before time $t$ (i.e., from the time of the previous interaction involving $u$ and $v$), and $\|$ denotes concatenation. The computed message $m_{uv}^l(t)$ is then passed into the aggregation function and contributes to the update of node $v$’s embedding at layer $l$.
Dual Message Aggregator Mechanism. Since each node typically interacts with multiple neighbors and may receive multiple messages from the same neighbor at different times, there exist two levels of message aggregation: across edges and across nodes. To address this, we employ a dual aggregation mechanism, i.e., mean aggregation and edge-to-node multi-head attention, to balance the model’s stability and flexibility.
• Mean aggregation (node-level). Mean aggregation performs aggregation at the node level, ensuring that all neighbors contribute equally to the target node’s representation. At the $l$ -th layer, the mean-aggregated message received by node $v$ at time $t$ is computed as:
$$
\bar { h } _ { v } ^ { l } ( t ) = \frac { 1 } { | N _ { \mathrm { i n } } ( v ) | } \sum _ { u \in N _ { \mathrm { i n } } ( v ) } \left( \sum _ { t _ { x } \in T _ { u v } } \frac { P ( u , v , t _ { x } ) } { \sum _ { t _ { x } ^ { \prime } \in T _ { u v } } P ( u , v , t _ { x } ^ { \prime } ) } \cdot m _ { u v } ^ { l } ( t _ { x } ) \right) ,
$$
where $N _ { i n } ( v )$ is the set of $v$ ’s in-neighbors; $T _ { u v }$ is the set of timestamps associated with all edges from $u$ to $v$ ; $m _ { u v } ^ { l } ( t _ { x } )$ is the message computed at time $t _ { x }$ in the $l$ -th layer. Notably, only temporal paths that are reachable (i.e., those with $P ( u , v , t _ { x } ) > 0 $ ) are considered in the aggregation.
• Edge-to-node multi-head attention (edge-level). This mechanism computes attention for each edge connected to a node, ensuring that all temporal edges contribute to the node’s representation update. The input matrix of the $l$-th attention layer is:
$$
M _ { v } ^ { l } ( t ) = \left[ m _ { u v } ^ { l } ( t _ { x } ) \right] _ { \forall ( u , v , t _ { x } ) \in E _ { i n } ( v ) , P ( u , v , t _ { x } ) > 0 } ^ { T } ,
$$
$E_{in}(v)$ is the set of $v$’s incoming edges. Then $M^l(t)$ is projected into query, key, and value spaces: $Q^l(t) = \left( h_v^{l-1}(t^-) \,\|\, h_v^{l-1}(t^-) \,\|\, \Phi_{\mathrm{time}}(t - t^-) \,\|\, \Phi_{\mathrm{path}}(u, v, t) \right) \cdot W_Q$; $K^l(t) = M^l(t) \cdot W_K$; $V^l(t) = M^l(t) \cdot W_V$, where $W_Q, W_K, W_V \in \mathbb{R}^{(d + d_T) \times d_h}$ are learnable projection matrices, and $d_h$ is the hidden dimension. Next, scaled dot-product attention is used in the attention layers. The aggregated message for node $v$ at the $l$-th layer, computed via the multi-head attention mechanism, is given by:
$$
\tilde { h } _ { v } ^ { l } ( t ) = \mathrm { A t t n } ( Q ^ { l } ( t ) , K ^ { l } ( t ) , V ^ { l } ( t ) ) = \mathrm { s o f t m a x } \left( \frac { Q ^ { l } ( t ) ( K ^ { l } ( t ) ) ^ { T } } { \sqrt { d _ { h } } } \right) V ^ { l } ( t ) \in \mathbb { R } ^ { d _ { h } }
$$
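The scaled dot-product step can be sketched as follows (a single-head NumPy version with a numerically stable softmax; `scaled_dot_attention` is an illustrative helper, not the authors' implementation):

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Edge-to-node attention branch: softmax(Q K^T / sqrt(d_h)) V, where
    rows of K and V correspond to a node's incoming temporal edges."""
    d_h = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_h)
    # numerically stable softmax over the edge dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

When all keys score equally, the result is the plain average of the value rows, which is the degenerate case where attention adds nothing over mean aggregation.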
Embedding. The final embedding of node $v$ at time $t$ , denoted as $h _ { v } ^ { l } ( t )$ , is obtained by combining the outputs of mean aggregation and edge-to-node multi-head attention:
$$
\begin{array} { r l } & { ~ h _ { v } ^ { l } ( t ) = \lambda \cdot h _ { v } ^ { l ( \mathrm { m e a n } ) } ( t ) + ( 1 - \lambda ) \cdot h _ { v } ^ { l ( \mathrm { a t t e n t i o n } ) } ( t ) , } \\ & { ~ h _ { v } ^ { l ( \mathrm { m e a n } ) } ( t ) = \mathrm { M L P } ( \bar { h } _ { v } ^ { l } ( t ) \parallel h _ { v } ^ { l - 1 } ( t ) ) , } \\ & { h _ { v } ^ { l ( \mathrm { a t t e n t i o n } ) } ( t ) = \mathrm { M L P } \left( \tilde { h } _ { v } ^ { l ( 0 ) } ( t ) \parallel \tilde { h } _ { v } ^ { l ( 1 ) } ( t ) \parallel \cdots \parallel \tilde { h } _ { v } ^ { l ( k - 1 ) } ( t ) \parallel h _ { v } ^ { l - 1 } ( t ) \right) , } \end{array}
$$
where $k$ is the number of heads, and $\tilde{h}_v^{l(j)}(t)$ $\left( 0 \leq j < k \right)$ is the dot-product attention output of the $j$-th head. $\lambda$ is a coefficient that balances the contributions of the two aggregation types. This weighted combination allows the model to trade off between the stability of equal neighbor contributions and the flexibility of dynamic attention-based relevance, enabling expressive temporal node representations.
# 4.2 Prediction Module
To address the imbalance in temporal betweenness centrality values and enhance the model’s sensitivity to rare but important nodes, we design two complementary components, i.e., KContrastNet and ValueNet, in the prediction module.
KContrastNet. KContrastNet employs a contrastive learning strategy to effectively distinguish features among low-, medium-, and high-value nodes in TBC prediction. To enhance the selection of positive and negative sample pairs, we first introduce a stability-based clustering approach that identifies the optimal number $k$ of clusters in the node representation space.
• Stability-based clustering method. Traditional methods for choosing the number of clusters, such as the elbow method [42] or the silhouette score [37], often struggle with noisy data or class imbalance, leading to unstable results. To address this, we propose a stability-based clustering method that combines bootstrap resampling [11] with K-means clustering [14] to estimate the optimal number of clusters. To improve efficiency, we first perform stratified random sampling on the original dataset $F$ to obtain a reduced subset $F^n$ (e.g., $40\%$ of the original size) that retains the original distribution. Bootstrap resampling is then applied on $F^n$ to generate $B$ independent sample pairs $\{(X_i, Y_i)\}_{i=1}^B$, ensuring each is drawn independently and identically from the empirical distribution (a formal justification is provided in Appendix D). For each pair and each candidate cluster number $k \in \{2, \ldots, K\}$, we run K-means separately on $X_i$ and $Y_i$ to obtain two cluster assignments $\psi_{X_i}$ and $\psi_{Y_i}$, respectively. We then compute the clustering distance between these assignments and average the results over all $B$ pairs to obtain a clustering instability, as formally defined below.
Definition 4. (Clustering Distance). The clustering distance between two clustering results $\psi _ { X _ { i } }$ and $\psi _ { Y _ { i } }$ is defined as:
$$
d(\psi_{X_i}, \psi_{Y_i}) = E_{x, y \in (X_i \cap Y_i)} \left[ \left| \delta(\psi_{X_i}(x), \psi_{X_i}(y)) - \delta(\psi_{Y_i}(x), \psi_{Y_i}(y)) \right| \right],
$$
where $E$ denotes the expectation operator, computed over all possible samples $x$ and $y$ . $\psi _ { X _ { i } } ( x )$ denotes the cluster label assigned to sample $x$ in the clustering result $\psi _ { X _ { i } }$ . The function $\delta ( a , b )$ is the Kronecker Delta, defined as: $\delta ( a , b ) = { \left\{ \begin{array} { l l } { 1 , } & { { \mathrm { i f ~ } } a = b , } \\ { 0 , } & { { \mathrm { i f ~ } } a \neq b . } \end{array} \right. }$
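Definition 4 can be prototyped directly. The sketch below (a hypothetical pure-Python helper) averages, over pairs of shared samples, the absolute disagreement between the two clusterings' co-membership indicators; the absolute value is the standard convention that keeps the distance non-negative:

```python
import itertools

def clustering_distance(psi_x, psi_y):
    """Definition 4: expected disagreement of co-membership indicators
    delta(psi(x), psi(y)) between two clusterings, averaged over ordered
    pairs of samples present in both. psi_x, psi_y: {sample: label}."""
    shared = sorted(set(psi_x) & set(psi_y))
    pairs = list(itertools.product(shared, repeat=2))
    diffs = [abs((psi_x[x] == psi_x[y]) - (psi_y[x] == psi_y[y]))
             for x, y in pairs]
    return sum(diffs) / len(pairs)
```

Identical clusterings have distance 0; the distance grows with the fraction of sample pairs whose co-membership flips between the two assignments.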
Definition 5. (K-means Clustering Instability). The clustering instability quantifies the variation in clustering results across different resamples and is defined as:
$$
\hat { s } _ { B } ( \psi , k , n ) = \frac 1 B \sum _ { i = 1 } ^ { B } d \left( { \bf K M e a n s } ( X _ { i } , k ) , { \bf K M e a n s } ( Y _ { i } , k ) \right) ,
$$
$\psi$ is the clustering results of KMeans function and $d ( \cdot , \cdot )$ is the clustering distance defined above.
Finally, the optimal number of clusters is chosen as ${\hat{k}} = \arg\operatorname*{min}_{2 \leq k \leq K} {\hat{s}}_B(\psi, k, n)$, i.e., by minimizing the instability. Statistical properties of the estimator, including unbiasedness and consistency, are detailed in Appendix E. The complete algorithm and its complexity analysis are provided in Appendix C.
• Contrastive Learning. After clustering, each node is assigned a cluster label, which guides the construction of supervised contrastive learning pairs. To ensure that positive samples are both semantically meaningful and structurally consistent, we introduce a conditional sampling strategy. Specifically, positive samples are formed by selecting nodes from the same cluster that have similar TBC values, while negative samples are chosen from the same cluster but with large TBC differences. A theoretical justification for choosing within-cluster sampling alternatives is provided in Appendix F.
Positive sample selection: let $S ^ { + } ( u )$ be the set of positive samples for node $u$ , defined as:
$$
\begin{array} { r } { { \mathcal { S } } ^ { + } ( u ) = \lbrace v \mid \delta ( \psi ( u ) , \psi ( v ) ) = 1 , 0 < \lvert T B C ( u ) - T B C ( v ) \rvert \le \gamma _ { p o s } \cdot T B C _ { m e d i a n } \rbrace , } \end{array}
$$
where $\gamma _ { p o s } \in ( 0 , 1 )$ is a similarity threshold (e.g., 0.5). The use of the median TBC value ensures robustness against outliers and skewed distributions, improving the stability of contrastive thresholds.
Negative sample selection: let $S ^ { - } ( u )$ be the set of negative samples for node $u$ , defined as:
$$
\begin{array} { r } { \mathcal { S } ^ { - } ( u ) = \left\{ v \mid \delta ( \psi ( u ) , \psi ( v ) ) = 1 , \left| T B C ( u ) - T B C ( v ) \right| \geq \gamma _ { n e g } \cdot T B C _ { m e d i a n } \right\} , } \end{array}
$$
where $\gamma _ { n e g }$ (e.g., 0.5) is a dissimilarity threshold.
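The two selection rules can be sketched together for one anchor node (an illustrative helper, not the authors' code; thresholds follow the median-scaled definitions above):

```python
import statistics

def select_pairs(u, tbc, cluster, gamma_pos=0.5, gamma_neg=0.5):
    """Conditional sampling for anchor u: positives share u's cluster and
    have a small but nonzero TBC gap; negatives share the cluster but have
    a large gap. Thresholds scale with the median TBC for robustness."""
    median = statistics.median(tbc.values())
    pos, neg = [], []
    for v in tbc:
        if v == u or cluster[v] != cluster[u]:
            continue
        gap = abs(tbc[u] - tbc[v])
        if 0 < gap <= gamma_pos * median:
            pos.append(v)
        elif gap >= gamma_neg * median:
            neg.append(v)
    return pos, neg
```

Note that a node with an identical TBC value (gap of exactly zero) joins neither set, matching the strict inequality in $S^+(u)$.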
The contrastive re-weighted loss ${ \mathcal { L } } _ { \mathrm { c o n t r a s t } }$ is then computed across all clusters, encouraging the embeddings of positive pairs to be closer while pushing negative pairs apart, weighted by their semantic similarity. The loss is defined as follows:
$$
\mathcal{L}_{\mathrm{contrast}} = \sum_{i=1}^{\hat{k}} \sum_{u \in \psi_i} -\log \frac{\sum_{v \in S^{+}(u)} \exp\left(\beta_{uv} \cdot \mathrm{sim}(\mathbf{h}_u, \mathbf{h}_v)/\tau\right)}{\sum_{v \in S^{+}(u)} \exp\left(\beta_{uv} \cdot \mathrm{sim}(\mathbf{h}_u, \mathbf{h}_v)/\tau\right) + \sum_{w \in S^{-}(u)} \exp\left(\beta_{uw} \cdot \mathrm{sim}(\mathbf{h}_u, \mathbf{h}_w)/\tau\right)},
$$
where $\mathbf{h}_u$ is the node embedding from the GNN introduced in Section 4.1, $\mathrm{sim}(\cdot, \cdot)$ denotes the dot-product similarity, and $\tau$ is the temperature parameter of InfoNCE [44]. The pairwise weights for positive samples $\beta_{uv}$ and negative samples $\beta_{uw}$ are calculated as:
$$
\beta _ { u v } = \frac { T B C _ { m e d i a n } \cdot \gamma _ { p o s } } { | T B C ( u ) - T B C ( v ) | } , \quad \beta _ { u w } = \frac { | T B C ( u ) - T B C ( w ) | } { T B C _ { m e d i a n } \cdot \gamma _ { n e g } }
$$
The weight design ensures that node embeddings are adjusted in proportion to their TBC differences. For positive samples, the smaller the TBC gap between nodes $u$ and $v$ , the larger the weight $\beta _ { u v }$ , encouraging the model to learn more from highly similar pairs. For negative samples, the weight $\beta _ { u w }$ increases with larger TBC differences, emphasizing separation from more dissimilar nodes. A theoretical analysis of this contrastive loss design is provided in Appendix G.
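A single anchor's term of the re-weighted loss can be sketched as follows (an illustrative NumPy helper under the dot-product similarity stated above; the full loss sums this over all anchors and clusters):

```python
import numpy as np

def anchor_contrast_loss(h_u, pos, neg, tau=0.5):
    """One anchor's summand of L_contrast: pos/neg are lists of
    (beta_weight, embedding); sim is the dot product, tau the temperature."""
    s_pos = sum(np.exp(b * float(np.dot(h_u, h)) / tau) for b, h in pos)
    s_neg = sum(np.exp(b * float(np.dot(h_u, h)) / tau) for b, h in neg)
    return -np.log(s_pos / (s_pos + s_neg))
```

When a positive and a negative are equally similar to the anchor, the term equals $\log 2$; pushing the negative away (lower similarity) drives the term toward zero, which is the separation behavior the weights amplify.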
ValueNet. For each node $v \in V$, the TBC score at the largest timestamp (i.e., the latest time) $t_{\mathrm{max}}$, denoted as $\mathrm{TBC}_{t_{max}}(v)$, represents the final TBC value of $v$ under the full-graph setting. It is predicted using a three-layer MLP with ReLU activations:
$$
T B C ( v ) = T B C _ { t _ { \mathrm { m a x } } } ( v ) = \mathbf { M L P } ( h _ { v } ^ { l } ( t _ { \mathrm { m a x } } ) ) ,
$$
where $h_v^l(t_{\mathrm{max}})$ is the output representation of node $v$ from the last GNN layer at time $t_{\mathrm{max}}$.
Total Loss Function. A weighted total loss function that balances the regression loss and the contrastive loss is defined as:
$$
\mathcal { L } _ { \mathrm { t o t a l } } = \alpha \cdot \mathcal { L } _ { \mathrm { c o n t r a s t } } + ( 1 - \alpha ) \cdot \mathcal { L } _ { \mathrm { r e g r e s s } }
$$
where $\mathcal { L } _ { \mathrm { r e g r e s s } }$ is the regression loss (e.g., MAE), and ${ \mathcal { L } } _ { \mathrm { c o n t r a s t } }$ is the contrastive re-weighted loss as described above. The hyperparameter $\alpha \in [ 0 , 1 ]$ controls the trade-off between the two objectives.
# 5 Experimental evaluation
Datasets. We utilize 12 real-world temporal graphs with varying scales. Table 4 in Appendix H provides their details. All data sets are publicly available from online repositories, including netzschleuder [28], SNAP [21], SocioPatterns [12], Konect [20] and Network Repository [36].
Compared Baselines. To evaluate the computational efficiency, we compare CLGNN with the exact algorithm ETBC [53]. To evaluate effectiveness, we compare CLGNN against several baselines: (i) Static GNNs, including GCN [18]; DrBC [10], a GNN-based classifier targeting high BC node identification; and GNN-Bet [24], a supervised model that predicts BC scores from node embeddings. (ii) Temporal GNNs, including TGAT [49], which models time-evolving interactions using attention and continuous time encoding; TATKC [52], which uses time-injected attention for centrality ranking; and DBGNN [15], the SOTA temporal GNN framework built upon De Bruijn graphs for centrality prediction. (iii) Contrastive learning models, IIOIwD [54], which estimates node importance using contrastive sampling. To further validate our contrastive learning module, we compare CLGNN with its variants using over- and under-sampling, Tweedie loss, graph augmentation (node drop, edge perturbation, feature masking), and also without the module.
Evaluation Metrics. We use three metrics to assess performance: mean absolute error (MAE) for prediction accuracy, Spearman correlation for ranking quality, and HitsIn$@k$ scores (HitsIn10, 30, 50) to measure how well top predictions match the ground truth.
Our training dataset consists of 50 real-world temporal networks from Network Repository [36] and Konect [20]. For every dataset, we repeated each experiment 30 times, and we reported the mean and the standard deviation of all scores. The associated learning rates as well as all other hyperparameters of the models are reported in Appendix H.
Next, we seek to answer the following research questions:
RQ1. How does CLGNN perform compared to static GNNs (GCN, DrBC, GNN-Bet)?
RQ2. How does CLGNN compare to state-of-the-art temporal GNNs (TGAT, TATKC, DBGNN), and what are the benefits of its temporal path-count and contrastive learning designs?
RQ3. How much faster is CLGNN than the leading exact method ETBC?
RQ4. How robustly does CLGNN’s contrastive learning handle strong TBC imbalance, and how does it compare to sampling-based remedies (e.g., IIOIwD, random augmentation, Tweedie loss, over- and under-sampling)?
Discussion of results. As shown in Table 1, considering RQ1, CLGNN consistently outperforms static GNN baselines across all datasets and metrics. It achieves $2.1\times$ to $31.4\times$ lower MAE and up to $16.5\times$ higher Spearman correlation. The gains are especially prominent on large-scale graphs with high temporal variation (e.g., sx-mathoverflow, wikiedits-se, ia-reality-call). Additional HitsIn$@k$ results are provided in Appendix I, Table 5. These results indicate the importance of modeling temporal dependencies for accurate TBC estimation.
Table 1: Performance comparison with static GNN baselines
As shown in Table 2, regarding RQ2, CLGNN outperforms temporal GNN baselines across all datasets in MAE and reaches the highest Spearman correlation on 7 out of 12 datasets compared to DBGNN. Compared to TGAT and TATKC, CLGNN reduces MAE by $1.1\times$ to $9.4\times$, with the largest gains on highly dynamic datasets like sp_hospital, haggle, and sx-mathoverflow. Against DBGNN, which leverages high-order De Bruijn path modeling, CLGNN performs better across graphs of different sizes and densities, with up to $5.7\times$ lower MAE and $1.1\times$ to $3.9\times$ higher rank correlation on datasets like Highschool2011 and ia-reality-call. It also maintains stable ranking performance on large-scale datasets like superuser. HitsIn$@k$ results are provided in Appendix I, Table 6. Additionally, as illustrated in Appendix K, Table 10, CLGNN generalizes well under various optimal path definitions. Overall, these results show that the temporal path count encoding and contrastive supervision in CLGNN enable more accurate and robust performance on temporal graphs. To address RQ3, we compare the efficiency of CLGNN with the exact TBC algorithm. As shown in Appendix I, Figure 5, CLGNN achieves substantial speedups, ranging from $4.8\times$ to $663.7\times$, across all datasets.
Table 2: Performance comparison with temporal GNN baselines
Considering RQ4, Table 3 presents an ablation study of CLGNN's contrastive learning module. Compared to IIOIwD, CLGNN achieves much higher ranking stability. For instance, on infectious, Spearman correlation improves from -0.49 to 0.55, a relative gain of over $200\%$. Removing the contrastive module ("No Contrastive") also leads to noticeable degradation. On highschool2012, Spearman drops from 0.51 to -0.26, and MAE increases by $1.2\times$. Over- and under-sampling methods perform poorly on large-scale graphs. On highschool2011 and highschool2013, MAE is more than $13\times$ higher than CLGNN's. Tweedie loss fails to preserve rank order under strong imbalance. Random augmentation techniques offer limited and inconsistent improvements. On highschool2011, Spearman is -0.04, compared to 0.52 with CLGNN. HitsIn@$k$ results are provided in Appendix I, Table 7 and Table 8. These results confirm that CLGNN's contrastive module is more robust and effective than alternative imbalance-handling methods, particularly in preserving ranking consistency and regression accuracy under skewed TBC distributions.
We also perform hyperparameter tuning, with results shown in Appendix J, where we analyze model performance across different $(\alpha, \lambda)$ settings (see Figure 6). Additionally, we assess prediction accuracy across discretized TBC value ranges (zero, mid, and high) in Appendix I, Table 9.
Table 3: Ablation study of contrastive learning variants | Temporal Betweenness Centrality (TBC) measures how often a node appears on optimal temporal paths, reflecting its importance in temporal networks. However, exact computation is highly expensive, and real-world TBC distributions are extremely imbalanced. The severe imbalance leads learning-based models to overfit to zero-centrality nodes, resulting in inaccurate TBC predictions and failure to identify truly central nodes. Existing graph neural network (GNN) methods either fail to handle such imbalance or ignore temporal dependencies altogether. To address these issues, we propose a scalable and inductive contrastive learning-based GNN (CLGNN) for accurate and efficient TBC prediction. CLGNN builds an instance graph to preserve path validity and temporal order, then encodes structural and temporal features using dual aggregation, i.e., mean and edge-to-node multi-head attention mechanisms, enhanced by temporal path count and time encodings. A stability-based clustering-guided contrastive module (KContrastNet) is introduced to separate high-, median-, and low-centrality nodes in representation space, mitigating class imbalance, while a regression module (ValueNet) estimates TBC values. CLGNN also supports multiple optimal path definitions to accommodate diverse temporal semantics. Extensive experiments demonstrate the effectiveness and efficiency of CLGNN across diverse benchmarks. CLGNN achieves up to a 663.7~$\times$ speedup compared to state-of-the-art exact TBC computation methods. It outperforms leading static GNN baselines with up to 31.4~$\times$ lower MAE and 16.7~$\times$ higher Spearman correlation, and surpasses state-of-the-art temporal GNNs with up to 5.7~$\times$ lower MAE and 3.9~$\times$ higher Spearman correlation. | [
"cs.LG",
"cs.AI"
] |
# 1 INTRODUCTION
“In Eudoxia, (...), a carpet is preserved in which you can observe the city’s true form. At first sight nothing seems to resemble Eudoxia less than the design of that carpet (...), but if you pause and examine it carefully, you become convinced that each place in the carpet corresponds to a place in the city and all the things contained in the city are included in the design.” (I. Calvino, Invisible Cities)
The Data Lakehouse (DLH) [39] is becoming the de facto cloud standard for analytics and Artificial Intelligence (AI) workloads. The DLH promises many improvements over its predecessors, the data lake and the warehouse, such as a cheap and durable foundation through object storage, compute decoupling, multi-language support, unified table semantics, and governance [19].
The breadth of DLH use cases makes it a natural target for the philosophy of composable data systems [23]. In this spirit, Bauplan is a DLH built from “spare parts” [31]: while presenting to users a unified API for assets and compute [30], the system is built from modularized components that reuse existing data tools through novel interfaces: e.g. Arrow fragments for differential caching [29], Kuzu for DAG planning [18], DuckDB as SQL engine [24], Arrow Flight for client-server communication [6].
Bauplan serves interactive and batch use cases through a unified Function-as-a-Service (FaaS) runtime running on standard VMs [28]. The complexity of resource management in a dynamic, multi-language DLH thus reduces to "just" scheduling functions. Building and testing distributed systems is complex, costly, and error-prone in monolithic systems [10, 17, 37], and is even more so in composable data systems. In order to test our intuitions and safely benchmark policies, we decided to build and release a DLH simulator.
In this work, we present Eudoxia, a scheduling simulator designed for the composable DLH. Our contributions are threefold:
(1) We describe a composable lakehouse architecture from a programming and execution model perspective, showing how expressing all workloads as functions provides a simple and consistent abstraction for users and the platform alike.
(2) We formalize the scheduling problem in this setting and outline the key requirements for any viable solution.
(3) We introduce Eudoxia as a modular, open-source simulator for this problem space: we detail our design choices, demonstrate typical usage patterns and provide preliminary validation using standard OLAP workloads against cloud production systems.
While Eudoxia’s development was motivated by Bauplan’s architecture, we release it to the community1 with a permissive license because we believe its impact to be potentially broader – either directly as a pluggable module in similar data systems, or indirectly through its abstractions and design principles.
The paper is organized as follows. In Section 2, we introduce background on composable DLHs, which serves as the main motivation for this work; Section 3 describes the scheduling problem in detail and presents the high-level structure of the proposed system; Section 4 illustrates how to invoke and run the simulator, how to configure parameters for Eudoxia, how to register custom scheduling algorithms, and how we validated our approach to have confidence in the results produced by the simulator. We conclude by positioning our work in the context of the existing literature (Section 5) and of future developments (Section 6).
# 2 BACKGROUND AND MOTIVATION
The flexibility of serving interactive and batch use cases for both analytics-focused (SQL) and AI-focused (Python) runtimes is a distinctive feature of DLHs. To motivate the need for a new scheduler simulator, we walk backwards from the developer experience designed to simplify user interaction with heterogeneous workloads (Section 2.1), and from the corresponding architectural choices (Section 2.2): as we shall see, the Bauplan DLH is composable and modular at the logical level too.
# 2.1 Writing everything as a function
In contrast with hard-to-learn, difficult-to-debug Big Data frameworks (e.g. Spark [32, 36]) and DAG frameworks (e.g. Airflow [38]), coding in Bauplan does not require learning new programming concepts. In particular, data computation can only be expressed through (SQL or Python) functions with signature $Table(s) \rightarrow Table$ – environment variables are either passed as runtime arguments (e.g. bauplan run --namespace xxx) or stored next to the code itself.
We illustrate this with a concrete example. Consider the Bauplan data pipeline comprising following two files, parent.sql and children.py:
-- bauplan_name parent
SELECT col_1, col_2, col_3 FROM raw
# Listing 1: parent.sql
@bauplan.model()
@bauplan.python("3.10", pip={"pandas": "1.5"})
def child(data=bauplan.Model("parent")):
    return data.do_something()

@bauplan.model(materialize=True)
@bauplan.python("3.11")
def grand_child(data=bauplan.Model("child")):
    return data.do_something()
# Listing 2: children.py
Pipelines are simply DAGs of functions chained together by naming convention: the first function to be run is parent.sql, whose input is a Table called raw, which is stored in object storage and registered in the system catalog. The output of this function is also represented as a Table. This design echoes the dbt framework, which pioneered this "functional" approach for data analysts chaining SQL queries together. The second function, child, contained in the Python file, takes the parent query as input and produces a new Table, which is in turn the input of the final function. Users interact with data and compute declaratively, using Bauplan tables over data branches, which are semantic, git-like abstractions over Apache Iceberg tables. As such, the underlying catalog and the data files are abstracted away from users.
We find three major types of interactions between users and DLHs, presented in (roughly) descending order of expected latency:
(1) batch data pipelines: Usually scheduled, these pipelines combine SQL and Python steps and are used in production environments. They prioritize throughput over latency, as no user is actively waiting for results.
(2) iterative data pipelines: Triggered during development or debugging, these pipelines benefit from fast feedback loops to improve developer productivity. While not latency-critical in production terms, delays here can slow down iteration speed and increase cognitive load.
(3) interactive queries: Often issued by analysts or business users in SQL or Python, these queries demand low latency and quick feedback. They represent the “live” interface with data and typically require fast, responsive infrastructure.
We see that because functions can read directly from base tables or from the outputs of other functions, each of the above interactions is representable by composing together Python and SQL blocks with specific signatures.
While the functional abstraction may seem limiting at first, it enables two critical features. First, it lowers the barrier to begin using the system, allowing, for example, interns with no prior cloud experience to push pipelines to production on their first day of work. Second, within the architecture, executing pipelines boils down to orchestrating atomic blocks with the same shape and signature, "only" differing by priority.
# 2.2 Running everything as a function
Users often must use different interfaces to execute each of the different types of DLH workloads (batch, iterative, and interactive) as described in Section 2.1. For example, a user may run a query in a SQL editor supported by a data warehouse, develop in a notebook (supported by a Spark cluster and Jupyter server), and run pipelines as a Spark script on a schedule (supported by a submit job API, a cluster, and an orchestrator).
Table 1 summarizes the distinctive, composable nature of a code-first lakehouse. The uniformity at the developer experience level is mirrored by uniformity at the infrastructure level, where all interactions are served by composing together containerized functions over object storage, as shown in Fig. 1. Due to several optimizations [28], ephemeral functions spawn in milliseconds inside off-the-shelf virtual machines (VMs), which greatly simplifies the life-cycle management of containers. Even system-level actions, such as checking out a data branch, reading from parquet files to serve a SQL query, or materializing a result back into the catalog, are written as functions and are added to the user-specified DAG by a logical planner [31]. In other words, any task executed on Bauplan is a DAG of system-provided and user-specified ephemeral functions, in the view of both the user and the system. No container, warehouse, or engine exists before or after a request, as any resources or state are spawned on demand. Indeed, even the additional bidirectional communication required at the end of more interactive workloads is achieved by running an Arrow Flight server as an ephemeral container in the same model as all other functions.
This architecture reframes DLH scheduling as the problem of orchestrating functions onto pools of resources with varying latency requirements. We find that the most critical insights we have gained from both intuition and empirical evidence align with results from the systems community: first, interleaving interactive and non-interactive workloads ends up being more computationally efficient than separating these workloads onto different systems (i.e. running a query on a warehouse and a pipeline on a Spark cluster) [25, 35]; second, using functions as building blocks nudges users to write small, re-usable code that is easier for them to maintain and, importantly, easier for the scheduler to reason about [13]. Importantly, existing FaaS schedulers cannot be re-used as-is because these systems (e.g. AWS Lambda [1], Azure Functions [2], OpenWhisk [3]) are designed to support the execution of simple, fast, stateless, standalone functions with small output sizes.
However, the uniformity of the function interface comes with a trade-off: limited horizontal scaling. While this interface easily supports long pipelines, cross-host communication, and vertical scaling of individual functions [28], each function remains the unit of scheduling and cannot be split across multiple VMs. In traditional big data systems, this has been seen as a limitation, but in practice, many modern workloads can be handled comfortably within a single high-memory VM due to the sharp drop in memory costs (e.g., 1TB fell from \$4K in 2014 to \$1K in 2023 [21]) and the relatively stable size of analytical datasets (i.e. most OLAP workloads today are under 250GB at the 99.9th percentile [34]). This perspective reflects a broader shift toward what some have called "Reasonable Scale" [20, 26, 27], a pragmatic approach that favors simplicity and efficiency over aggressive horizontal scaling at all costs.
In conclusion, we can now see how Bauplan’s function-first approach benefits both users and developers. Using system and user functions as the building blocks of the runtime allows a granular understanding of workloads and provides many opportunities to interleave different workload types depending on their latency requirements. Enabling granular scheduling—pausing and resuming DAGs mid-air, moving functions between hosts etc.—is a desired consequence of our architecture but presents a challenge of finding an effective, if not optimal, scheduling algorithm. Considering a DLH that uses a single runtime makes the problem significantly more tractable and simplifies modeling the platform, but there is still a need to evaluate different scheduling algorithms. This motivates our scheduling simulator, Eudoxia, which we now discuss.
# 3 SIMULATOR DESIGN
We first provide an overview of the motivation and goals behind the simulator in Section 3.1 before discussing the design and major abstractions of the simulator in Section 3.2.
# 3.1 Overview
A composable lakehouse can be tested in a variety of ways, from cheap but case-based unit and integration tests to very expensive but general formal methods. Simulations rely on a deterministic model of the system (like formal methods) but are low-cost and experimentally driven (like integration tests) and thus are a promising way to evaluate scheduling policies in a complex cloud setup.
Table 1: Interaction types, user interfaces and infrastructure requirements for different DLH designs.
Figure 1: Bauplan workers are off-the-shelf VMs, providing stateless compute capacity over object storage. Within an organization, users and machines (Apache Airflow, AWS Lambda on a schedule etc.) may submit interactive read-only queries (red) or asynchronous read-write pipelines (blue). What scheduling policy for functions can maximize a desired metric (e.g. throughput)?
Given the FaaS design we adopted (Section 2), our scheduling simulator must be able to evaluate different scheduling algorithm implementations both in terms of performance metrics (e.g. throughput and latency) as well as monetary cost (e.g. excess cloud resources or premium storage). A successful solution to this problem will therefore be able to give us confidence over scheduling policies without spending the time and money to evaluate the same policies in a real cloud environment.
# 3.2 Design Principles and Major Abstractions
We now present the architecture of our proposed solution. This design, shown in Figure 2, is modular so that it can test any scheduling algorithm, allow for workload customization via parameters set by developers, and support alternate executor models. We decompose this design into three components. Our simulator operates as a high-level loop, and during each iteration each of these three components completes whatever work is possible for it. Each iteration represents 1 CPU tick, or approximately 10 microseconds.
Figure 2: Simulator Architecture. Users set parameters and pass this to the initializer for Eudoxia which starts a loop of three components, the Workload Generator, Scheduler, and Executor. Once that loop completes, visualizers or other downstream applications can access execution statistics.
3.2.1 Workload Generation. In a real setup, various users submit pipelines to the system at random intervals. The workload generator simulates this part of the system by generating pipelines and sending them to the system at user-defined intervals to be scheduled and executed. The workload generator accepts a wide range of parameters which specify how frequently new pipelines arrive, how many resources pipelines require, how long pipelines will take to complete depending on the physical resources (RAM and CPU) allocated to them, among others. Full documentation is available with our artifact. Additionally, this interface allows users to format existing traces and feed them into the simulator rather than generating random ones.
We model user-submitted pipelines as directed acyclic graphs (DAGs). Each node in a pipeline is called an operator, which represents individual functions such as SQL queries or Python functions. Each operator is generated with some required amount of RAM to execute, representing the largest allocation of memory the operator will require to complete. Each operator is also generated with a CPU scaling function, which returns how long the operator will take to complete based on how many CPUs are allocated (for example, a heavy IO task may not scale with CPUs at all, while a stateless filter can scale linearly with more CPUs). Any value associated with a pipeline is randomly drawn from a distribution centered at one of the user-provided (or system default) parameters. Finally, each pipeline has one of three Priority Levels, based on the DLH scenarios described in Section 2.1: in ascending priority order, we have batch data pipelines, iterative data pipelines, interactive query. At each tick when pipelines are generated, they are passed to the scheduler. For most ticks, no new pipelines will be generated.
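The generator's data model can be pictured with a small sketch. All names here (Operator, Pipeline, Priority, the scaling lambdas) are hypothetical illustrations of the description above, not Eudoxia's actual API.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable, Dict, List

class Priority(IntEnum):
    # Ascending priority, mirroring the three DLH interaction types.
    BATCH = 0
    ITERATIVE = 1
    INTERACTIVE = 2

@dataclass
class Operator:
    """One SQL/Python function: its peak RAM need and a CPU scaling curve."""
    ram_gb: float
    base_ticks: int
    cpu_scaling: Callable[[int], float]  # cpus -> runtime multiplier

    def ticks(self, cpus: int) -> int:
        return max(1, round(self.base_ticks * self.cpu_scaling(cpus)))

@dataclass
class Pipeline:
    """A DAG of operators; edges map an operator index to its downstream indices."""
    operators: List[Operator]
    edges: Dict[int, List[int]]
    priority: Priority

# Hypothetical scaling curves: heavy IO work ignores extra CPUs,
# while an embarrassingly parallel filter scales linearly.
io_bound = Operator(ram_gb=4.0, base_ticks=1000, cpu_scaling=lambda c: 1.0)
parallel = Operator(ram_gb=8.0, base_ticks=1000, cpu_scaling=lambda c: 1.0 / c)
```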
3.2.2 Executor. The user also specifies how many CPUs and RAM are available to allocate to jobs and whether more resources can be accessed for additional monetary cost, i.e. using cloud scaling. The user can specify how many pools of resources there are, what the balance of resources are in each pool, and so on.
The executor is the manager of these simulated physical resources. We define an abstraction called a Container, which contains a set of Operators to execute, a number of CPUs, and an amount of RAM. When created, each container uses the set of operators provided to calculate how many ticks it will take for that container to complete, or how many ticks will pass before it triggers an out-of-memory error, based on the parameters in the workload generator and the resources allocated.
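A Container's completion-or-OOM calculation might look like the following sketch. The sequential execution order and the OOM-at-operator-start rule are our assumptions for illustration, not necessarily Eudoxia's exact semantics.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Container:
    """A resource allocation that runs a set of operators to completion or OOM."""
    cpus: int
    ram_gb: float
    operator_ram: List[float]  # peak RAM per operator, in GB
    operator_ticks: List[int]  # runtime per operator at this CPU count

    def outcome(self) -> Tuple[str, int]:
        """Return ("ok", ticks_to_finish) or ("oom", tick_of_failure)."""
        elapsed = 0
        for ram, ticks in zip(self.operator_ram, self.operator_ticks):
            if ram > self.ram_gb:
                # Assumption: OOM triggers when the over-sized operator starts.
                return ("oom", elapsed)
            elapsed += ticks  # assumption: operators run sequentially
        return ("ok", elapsed)
```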
3.2.3 Scheduler. The Scheduler’s responsibility is to allocate resources to sets of Operators (as the Scheduler can subdivide pipelines in allocation) and instruct the Executor on what Containers to create. The Scheduler further has the ability to preempt Containers, instructing the Executor to terminate that container and free up resources. It is the Scheduler’s responsibility to manage queues, how allocation decisions are made, what jobs receive allocations sooner than others, and how priority levels are managed.
Each scheduler implementation must simply match a required type signature: accepting a set of Pipelines from the workload generator, and outputting a list of new Container allocations and Container preemptions to the Executor. At runtime, the user will register different scheduler implementations with the simulator and specify which one it should use during execution.
# 4 EUDOXIA 101
In this section we will describe how users interact with Eudoxia, provide a sample program and a short description of key parameter options. Then we present a preliminary validation of our simulator approach using real traces executed on Bauplan.
# 4.1 Developer experience
We first present how users would start a new Eudoxia instance (Section 4.1.1) before presenting the scheduling algorithms already implemented (Section 4.1.2) and how users can write and register their own implementations (Section 4.1.3).
4.1.1 Starting a New Instance. We aimed to make it as easy as possible to start working with Eudoxia's API. To start a simulator instance, users specify input parameters and select a scheduler implementation, either one of the three scheduler algorithms already implemented or a custom implementation written in Python and registered at runtime. Parameters are set in a TOML file, with each parameter on its own line formatted as parameter = value. The most important parameters are the following:
• duration: how many simulated seconds the simulator will run for. Each iteration of the primary loop corresponds to 10 microseconds, intended to roughly approximate the length of 1 CPU cycle. We call each iteration a tick.
• waiting_ticks_mean: on average, how many ticks (of 10 microseconds) pass between pipelines being generated and sent to the system to be executed.
• num_pools: how many resource pools will exist. In general, all available resources are divided evenly among all pools to start.
• scheduling_algo: what scheduling algorithm to use.
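A minimal project.toml might look like this; the parameter names come from the list above, but the values and the "priority" key are illustrative guesses rather than documented defaults.

```toml
# project.toml -- illustrative values only; Eudoxia's defaults may differ
duration = 60             # simulated seconds (6,000,000 ticks)
waiting_ticks_mean = 500  # mean ticks between pipeline arrivals
num_pools = 2             # resources split evenly across pools at start
scheduling_algo = "priority"  # assumed key for the built-in priority scheduler
```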
Here is how easy it is to start an instance: run_simulator instantiates Eudoxia with the parameters in project.toml:
import eudoxia
def main():
    paramfile = "project.toml"
    eudoxia.run_simulator(paramfile)
# Listing 3: Minimal code to start a simulation
The run_simulator method will then begin the core loop described in Section 3, containing the workload generation, scheduler, and executor. Eudoxia will use the duration parameter to compute the number of iterations the loop runs for and will pass each parameter to its appropriate component(s). Once Eudoxia is launched, each component will log its current actions, and CPU and RAM utilization will be logged after each tick for each pool of resources.
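As a sanity check on the arithmetic above, one simulated second corresponds to 100,000 ticks at 10 microseconds per tick. This hypothetical helper (not Eudoxia's API) makes the conversion explicit:

```python
TICK_SECONDS = 10e-6  # one loop iteration ~ 10 microseconds

def num_ticks(duration_seconds: float) -> int:
    """Loop iterations covering a simulated duration (hypothetical helper)."""
    return round(duration_seconds / TICK_SECONDS)

print(num_ticks(1))   # 100000 ticks per simulated second
print(num_ticks(60))  # 6000000 -- a one-minute simulation
```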
4.1.2 New Scheduling Protocols. Eudoxia has three built-in implementations for schedulers.
The first is the naive scheduler, which uses one pool of resources. It assigns all available resources to the next pipeline. When that pipeline completes, it repeats with the next pipeline in the queue.
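The naive policy can be sketched in a few lines; the function name, signature, and tuple-based assignment format are hypothetical simplifications, not Eudoxia's actual Scheduler interface.

```python
from collections import deque

def naive_schedule(queue, pool_cpus, pool_ram, running):
    """Naive policy sketch: give ALL resources of the single pool to the
    next queued pipeline, and only when nothing is currently running."""
    assignments = []
    if not running and queue:
        pipeline = queue.popleft()
        assignments.append((pipeline, pool_cpus, pool_ram))
    return assignments

queue = deque(["p1", "p2"])
print(naive_schedule(queue, 16, 32.0, running=[]))  # [('p1', 16, 32.0)]
```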
The next is the priority scheduler, which also assumes one pool of resources. It accounts for both the size of the pool and the priority of the pipeline that was submitted (either batch, query, or interactive). New workloads are assigned a container with $10\%$ of the total amount of resources. The scheduler proceeds until it has allocated all resources. If a pipeline completes, those resources are allocated to the next pipeline in the queue.
If a pipeline fails due to insufficient resources, i.e. an out-of-memory (OOM) error due to insufficient RAM, then those resources are freed, but the pipeline re-enters the waiting queue of the scheduler with information about what resources were allocated to the container which failed.
If a previously-failed pipeline arrives, the scheduler attempts to double the resources previously allocated, up to a maximum of $50\%$ of total CPU or RAM, at which point the scheduler returns the failure to the user. If there are not sufficient available resources to double the allocation, the job is put back on the queue to wait.
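The doubling-with-cap retry rule can be written down directly; retry_allocation is a hypothetical helper illustrating the text, not Eudoxia code.

```python
def retry_allocation(prev_cpus, prev_ram, total_cpus, total_ram):
    """On a failed pipeline's return, double its last allocation,
    capped at 50% of the pool; at or past the cap, report failure (None)."""
    cap_cpus, cap_ram = total_cpus * 0.5, total_ram * 0.5
    if prev_cpus >= cap_cpus or prev_ram >= cap_ram:
        return None  # give up: surface the failure to the user
    return (min(prev_cpus * 2, cap_cpus), min(prev_ram * 2, cap_ram))

# 4 CPUs / 8GB doubled within a 32-CPU / 64GB pool:
print(retry_allocation(4, 8, 32, 64))    # (8, 16)
print(retry_allocation(16, 32, 32, 64))  # None -- already at the 50% cap
```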
Finally, in the event that all resources are allocated and a high-priority pipeline, such as a query, arrives, the scheduler scans the currently running containers for any which are running a low-priority job (such as a batch workload). That container is preempted, freeing its resources to be used for the query. The batch pipeline is put back on the waiting queue with a log of what resources were last allocated to it; however, this pipeline does not also receive the flag indicating it failed, so when the batch pipeline next arrives the scheduler will allocate the same resources it allocated previously.
The third scheduling algorithm is the priority-pool scheduler. This operates similarly to the priority scheduler but with multiple resource pools in the Executor. Every time the Scheduler considers a new pipeline, it identifies which pool has the most available resources and allocates a container on that pool. It also handles preemption in the same way, but this time on multiple pools.
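The pool-selection step of the priority-pool scheduler can be sketched as follows; how "most available resources" is scored (here, free CPUs plus free RAM) is our assumption, as the paper does not specify the exact rule.

```python
def pick_pool(pools):
    """Pick the index of the pool with the most available resources.
    Each pool is a (free_cpus, free_ram_gb) tuple; the score free_cpus +
    free_ram_gb is an assumed heuristic, not Eudoxia's documented rule."""
    return max(range(len(pools)), key=lambda i: pools[i][0] + pools[i][1])

pools = [(4, 8.0), (10, 16.0), (6, 6.0)]  # (free_cpus, free_ram_gb) per pool
print(pick_pool(pools))  # 1
```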
4.1.3 Registering New Scheduler Implementations. Eudoxia allows users to write custom scheduler implementations by following three simple steps: writing an initialization function, writing a scheduler function, and using two decorators.
The initialization function accepts one parameter, an instance of the Scheduler class, and returns nothing. This function initializes any needed data structures within the Scheduler.
The scheduler function must accept three parameters and return two values. The three parameters are:
(1) An instance of the Scheduler class. (2) A list of pipelines which failed in the previous tick (3) A list of pipelines which were newly created in this tick
In general, the list of newly created pipelines is often empty, as the workload generation step creates new pipelines at random intervals. Similarly, the list of failures only includes jobs which the executor failed, such as for an out-of-memory error. This does not include pipelines which the scheduler preempted. If the scheduler wishes to preempt pipelines it must manage those queues itself to ensure no pipelines fall through the cracks.
Finally, the algorithm must return two values:
(1) Suspensions: these are a set of pipelines that the scheduler is instructing the Executor to preempt so that its resources may be freed. For the priority scheduler, it places pipelines to be preempted in an internal suspending queue, which after one tick it moves back into the standard waiting queues.
(2) Assignments: the second return value is a list of new assignments instructing the Executor what resources to allocate to a container and what job to run inside that container.
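The suspend-then-requeue cycle described for the priority scheduler can be sketched with a small helper class; SuspendQueue and its methods are hypothetical illustrations of the behavior above, not Eudoxia's API.

```python
from collections import deque

class SuspendQueue:
    """Sketch of the priority scheduler's suspend cycle: preempted
    pipelines wait one tick, then rejoin the standard waiting queue."""
    def __init__(self):
        self.suspending = deque()

    def suspend(self, pipeline):
        self.suspending.append(pipeline)

    def tick(self, waiting: deque):
        # After one tick, move everything back to the waiting queue.
        while self.suspending:
            waiting.append(self.suspending.popleft())

sq = SuspendQueue()
waiting = deque()
sq.suspend("batch-job")  # preempted by a higher-priority arrival
sq.tick(waiting)         # one tick later it is back in the waiting queue
print(list(waiting))     # ['batch-job']
```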
Putting these requirements together, extending Eudoxia with a custom scheduler is as simple as the snippets below – note how the two decorators in algorithm.py and the parameter in project.toml reference the same key:
from eudoxia.core import Scheduler
from eudoxia.core import Failure, Assignment, Pipeline
from eudoxia.algorithm import register_scheduler, register_scheduler_init
from typing import List
@register_scheduler_init(key="my-scheduler")
def scheduler_init(sch: Scheduler):
    ...

@register_scheduler(key="my-scheduler")
def scheduler_algo(sch: Scheduler, f: List[Failure], p: List[Pipeline]):
    ...
    return suspends, assignments

# Listing 4: algorithm.py: scheduler function.
scheduling_algo = "my-scheduler"

# Listing 5: project.toml: custom parameter.
from algorithm import scheduler_init, scheduler_algo
import eudoxia
def main():
    paramfile = "project.toml"
    eudoxia.run_simulator(paramfile)

# Listing 6: main.py: custom imports and instantiation.
# 4.2 Preliminary Validation
While developer experience, clarity of abstractions, and extensibility are crucial for adoption, Eudoxia's utility ultimately depends on its reliability and robustness.
The simulation generates pipelines which have two key values: (1) how the number of CPUs allocated impacts the pipeline’s execution time (if at all) and (2) the minimum RAM allocation needed to avoid an out-of-memory error. The scheduling algorithms do not have access to these values or scaling functions; however, once the pipeline is allocated to a container of resources, those values are used to determine what the true execution time for a pipeline on a container will be. We believe that this is a realistic setup that can effectively represent any kind of workload in the appropriate and necessary dimensions.
We first validate this approach by running data workloads against a Bauplan cloud instance, measuring runtime statistics such as CPU and RAM utilization along with runtime, and comparing this to the runtime estimated by Eudoxia on a pipeline with similar statistics. We run the common data analytics benchmark TPC-H [33], running its 22 queries against a 10GB dataset on an AWS c5ad.4xlarge instance with 16 vCPUs and 32GB of RAM. As described, each query is compiled by Bauplan into a small number of execution blocks (i.e. functions), and we observe CPU and RAM usage during execution. Each query is run alone on the instance, and we disable caching. For three queries (11, 16, and 22), the runtime was so short that resource utilization statistics could not be gathered from underlying telemetry systems. The percent error in runtime of the scheduler versus the true runtime as executed on a Bauplan instance ranges from $0.44\%$ to $3.08\%$, with an average error of $1.74\%$. We additionally plot the real and simulated runtimes for a subset of TPC-H queries for ease of visual interpretation in Figure 3. We see that the simulated runtime well approximates the true execution time, indicating that the simulator can be relied upon to give realistic results.
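The error figures above follow the usual percent-error formula, sketched here for clarity; the runtimes in the usage example are made up for illustration.

```python
def pct_error(simulated: float, real: float) -> float:
    """Percent error of the simulated runtime against the measured one."""
    return abs(simulated - real) / real * 100

# e.g. a simulated 10.2s run against a measured 10.0s -> 2.0% error
print(round(pct_error(10.2, 10.0), 2))  # 2.0
```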
Furthermore, because Eudoxia supports varying CPU scaling functions and enables real traces to be plugged in rather than using random generation, the system can easily emulate how the benchmark’s performance would vary if different compute resources are allocated or if the benchmark ran on larger datasets. The modular design enables users to reproduce other results cheaply, test how algorithms would hold up against different types of workloads, or consider how a current implementation would fare against a changing setup.
# 5 RELATED WORK
Composable data systems. The FaaS lakehouse modeled by Eudoxia is built in the composable data system tradition [23]. In a sense, the deconstructed lakehouse [31] is the natural generalization of the “Deconstructed Database” [14]. The rapid growth of DataFusion [15] in the composable data community is fostering an eco-system of novel single-node systems [4, 5] that could benefit from the simulation methodology and code in Eudoxia .
Cloud Scheduling. There is a broad range of work on scheduling workloads in cloud environments that is relevant to Eudoxia. Motlagh et al. [9] provide an analytical framework to evaluate different scheduling approaches. Similarly,
Figure 3: Real and simulated runtimes (seconds) for a subset of TPC-H queries.
Hai et al. [12] propose a new scheduling approach, but do so with a broad cloud usage pattern in mind. In contrast, Eudoxia is designed specifically for a data lakehouse/composable data system environment. Rather than trying to survey a range of approaches or techniques, Eudoxia focuses specifically on an application deployed on a single Bauplan instance on an EC2 node.
Another common goal for schedulers is to abide by quality-of-service (QoS) guidelines. While QoS is a vital part of the cloud ecosystem, our goal with Eudoxia was to build a simulator in which future scheduling approaches can be explored, experimented with, and evaluated.
Finally, a broad range of literature covers scheduling under power constraints: power-constrained applications can throttle query performance, for instance by limiting CPU frequency. There is a broad range of work in this area, including [7, 8, 11, 16, 22]. However, Eudoxia is generally uninterested in how power consumption limits resource availability and workload runtime, as the blob storage and VM services offered by cloud vendors abstract away the power consumption needs of cloud infrastructure. | Due to the variety of its target use cases and the large API surface area to cover, a data lakehouse (DLH) is a natural candidate for a composable data system. Bauplan is a composable DLH built on "spare data parts" and a unified Function-as-a-Service (FaaS) runtime for SQL queries and Python pipelines. While FaaS simplifies both building and using the system, it introduces novel challenges in the scheduling and optimization of data workloads. In this work, starting from the programming model of the composable DLH, we characterize the underlying scheduling problem and motivate simulation as an effective tool to iterate on the DLH. We then introduce and release to the community Eudoxia, a deterministic simulator for scheduling data workloads as cloud functions. We show that Eudoxia can simulate a wide range of workloads and enables highly customizable user implementations of scheduling algorithms, providing a cheap mechanism for developers to evaluate different scheduling algorithms against their infrastructure. | [
"cs.DB",
"cs.DC"
] |
# 1. Introduction
The MLC-SLM challenge focuses on multilingual conversational speech recognition and speaker diarization tasks, encompassing 11 languages: English (en), French (fr), German (de), Italian (it), Portuguese (pt), Spanish (es), Japanese (jp), Korean (ko), Russian (ru), Thai (th), and Vietnamese (vi). The English subset contains approximately 500 hours of recordings from diverse regions, including British, American, Australian, Indian, and Philippine English. Each of the remaining languages contributes around 100 hours, resulting in approximately 1,500 hours of multilingual conversational speech data.
The challenge called for participants to develop end-to-end speech language models for both ASR and speaker diarization tasks, presenting significant technical challenges in multilingual processing, conversational speech understanding, and model optimization.
This paper presents the Seewo team's approach to the MLC-SLM challenge, with a particular focus on Track 1's multilingual ASR task. Our proposed system achieves substantial performance improvements over the baseline [1] through systematic architectural and training innovations. The key technical contributions include:
• A multi-stage fine-tuning pipeline that enhances the reasoning capabilities of speech language models through fine-tuning with a curriculum learning strategy
• An investigation of various reward functions for RL-based optimization of self-correction capabilities
• An empirical study of contextual augmentation and decoding hyperparameters on ASR accuracy
Figure 1: Our model architecture reference from SLAM-ASR [2]
# 2. System overview
This section details the core configuration of our system, including model architecture, data processing, computational resources and toolkits. All experiments were conducted with the same configuration.
# 2.1. Foundation models
Our speech language model architecture follows the SLAM-ASR framework design [2]. SLAM-ASR provides a comprehensive evaluation of different encoder modules, including the Whisper family of models and other self-supervised encoders such as WavLM Large [3] and HuBERT X-Large [4].
While these self-supervised models demonstrated superior performance in monolingual settings, they were limited by English-only pretraining, rendering them unsuitable for the multilingual task. Therefore, we adopted the encoder of Whisper large-v3-Turbo [5] as our encoder for its robust multilingual capabilities and parameter efficiency.
For the decoder component, we selected Babel-9B-Chat [6] based on two key considerations. First, chat models typically outperform base pretrained models in instruction-following tasks [2]. Second, Babel-9B-Chat's training data comprehensively covers all 11 languages required in the MLC-SLM challenge. As demonstrated in Table 7 of [6], it outperforms other 10B-size models on most multilingual benchmarks.
To bridge the encoder and decoder, we employed a learnable projection module that maps encoder outputs to the decoder embedding space. The projector module employed a hierarchical architecture consisting of a convolutional layer followed by two linear layers, designed to downsample and align the encoder's output features with the decoder's embedding space. This 17.32M-parameter module effectively downsamples the speech features to 10 Hz (one token per 100 ms), aligning the temporal resolution with the LLM's input requirements. The whole model architecture is shown in Figure 1.
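To make the downsampling behaviour concrete, here is an illustrative sketch (not the actual 17.32M-parameter module): assuming the Whisper-style encoder emits 50 frames per second of dimension 1280, stacking frames in groups of 5 and applying two linear maps yields 10 Hz tokens. The hidden size (512) and decoder width (4096) are stand-in values, and frame stacking stands in for the convolutional layer described in the text.

```python
import numpy as np

# Toy projector: (T, ENC_DIM) encoder frames at 50 Hz -> 10 Hz tokens
# in a decoder-sized embedding space. All dimensions are assumptions.
rng = np.random.default_rng(0)
ENC_DIM, HID, DEC_DIM, STRIDE = 1280, 512, 4096, 5
W1 = rng.normal(0, 0.02, (ENC_DIM * STRIDE, HID))
W2 = rng.normal(0, 0.02, (HID, DEC_DIM))

def project(enc_feats):
    """(T, ENC_DIM) encoder frames -> (T // STRIDE, DEC_DIM) tokens."""
    T = enc_feats.shape[0] // STRIDE * STRIDE        # drop the ragged tail
    x = enc_feats[:T].reshape(-1, ENC_DIM * STRIDE)  # stack 5 frames/token
    h = np.maximum(x @ W1, 0.0)                      # first linear + ReLU
    return h @ W2                                    # second linear layer

tokens = project(rng.normal(size=(1500, ENC_DIM)))   # ~30 s of audio
```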
# 2.2. Training loss
Our training pipeline employs a two-phase approach: Supervised Fine-Tuning (SFT) followed by Reinforcement Learning with Verifiable Rewards (RLVR). This design enables systematic enhancement of the model’s ASR capabilities while maintaining stability.
In the SFT phase, we optimize the model using two objectives. First, we apply the standard causal language modeling loss to establish basic next-token prediction capabilities. Additionally, in the Chain-of-Thought (CoT) stage, we employ a weighted loss variant that assigns different weights to various completion sections [7], allowing the model to focus on critical transcription segments. Through this approach, the model learns to transcribe the speech content after reasoning.
The RLVR phase adopts the Dr. GRPO [8] method, an optimized variant of the original GRPO framework [9]. This approach improves the model performance by carefully balancing exploration of potential improvements with the maintenance of model stability through KL divergence constraints with the reference model. The verifiable rewards ensure that the model’s reasoning capabilities are enhanced in a controlled manner.
# 2.3. Data and augmentation
The ASR model is trained on the MLC-SLM challenge training set, as described in Section 1.
The speaker embedding model of the speaker diarization pipeline is trained on the MLC-SLM challenge training set and additional open-source datasets, including CN-Celeb1, CN-Celeb2 [10], VoxBlink [11] and VoxBlink2 [12], to build a robust multilingual speaker embedding model.
For evaluation purposes, we utilized the development set (approximately 4 hours per language) as our primary test set, since the challenge did not provide an official evaluation set and limited the number of submissions. All ablation data reported in this paper are based on the development set.
We applied simple additive noise and reverberation augmentation techniques [13] to enhance model robustness. For each training sample, data augmentation was applied with a probability of 0.7; when augmentation was performed, either additive noise or reverberation was randomly selected. The RIR and noise data were randomly sampled from the SLR28 dataset [14] and the MUSAN dataset [15], respectively.
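The augmentation policy above can be sketched as follows. The tag names are placeholders: the actual transforms would mix in a random MUSAN noise clip or convolve with a random SLR28 room impulse response, which is not shown here.

```python
import random

# Sketch of the augmentation policy: each training sample is augmented
# with probability 0.7; when it is, additive noise or reverberation is
# chosen uniformly at random.
def augment(sample, p_aug=0.7, rng=random):
    if rng.random() >= p_aug:
        return ("clean", sample)    # leave ~30% of samples unmodified
    if rng.random() < 0.5:
        return ("noise", sample)    # would mix in a random MUSAN clip
    return ("reverb", sample)       # would convolve with a random SLR28 RIR
```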
# 2.4. Training toolkits and resources
Our training pipeline leverages several toolkits, including west [16], transformers [17], accelerate [18], peft [19], and trl [20]. All experiments were conducted on a cluster of 24 NVIDIA A800 GPUs, each equipped with 80GB of memory.
# 3. Training pipeline of ASR system
We adopt a multi-stage training pipeline where each stage builds upon the previous checkpoint to address specific challenges. While we present only the successful experiments here, our development process included numerous exploratory attempts that, despite not improving performance, provided valuable insights into the model’s capabilities and limitations.
# 3.1. Stage 1: Projector training
The chat template also follows SLAM-ASR [2]: "USER: <S> <P> ASSISTANT: <T>", where <S> represents the speech embedding, <P> represents the prompt, which is "Transcribe the speech above" in English for all data, and <T> represents the corresponding transcribed text.
We train the projector module (freezing all parameters except the projector module) for the first 2000 steps, producing checkpoint-2000 (ckpt-2000). At the end of this stage, the loss converges to around 0.35. This ensures that the speech features are basically aligned with the decoder's embedding space and that the LLM roughly follows the transcription instruction.
# 3.2. Stage 2: Special tokens adaptation
Following Stage 1, we observed that the model occasionally deviated from the transcription task, generating conversational responses to the speech content rather than producing transcriptions. To address this instruction-following issue, we extended several special tokens to enhance the model’s structured output capabilities, which is inspired by Qwen2.5 [21] and LaVIT [22].
We extended Babel-9B-Chat’s vocabulary with four categories of functional tokens:
• <LANG XX>, language-specific tokens for precise language identification across all 11 target languages
• <speech> </speech>, speech segment delimiters to clearly demarcate speech input boundaries
• <transcribe> </transcribe>, task control tokens to enforce transcription-focused output generation
• <think> </think>, reasoning framework tokens to facilitate explicit intermediate reasoning
To accommodate these new tokens, we selectively unfroze the embedding layer and language model head, enabling the model to fine-tune the weights for token representations while maintaining the integrity of the pre-trained knowledge.
The enhanced prompt template follows the basic format "USER: <S> <P> ASSISTANT: <T>", with two key modifications from Stage 1:
• <S>, speech input is encapsulated within <speech> and </speech> tags to establish distinct input boundaries
• <T>, the assistant outputs an explicit structured format: "<LANG XX> <transcribe> [transcription] </transcribe>"
This tokenized approach significantly improved the model’s task adherence and output consistency, leading to a loss of approximately 0.2 after 4000 steps (ckpt-4000).
# 3.3. Stage 3: LoRA training with multilingual prompts and context
While Stage 2 improved instruction following capability, two challenges remained: (1) code-mixing between English and the target languages in transcriptions, likely due to English prompt interference, and (2) a training loss plateau at 0.2, indicating potential for further optimization.
To address these issues, we adopted LoRA [23] adaptation (with rank 16 and LoRA alpha 32) on both the LLM decoder and the Whisper encoder. Additionally, we introduced language-specific prompts in the <P> section: "Transcribe the speech above" translated into each respective language.
This adaptation strategy proved effective in tackling the code-mixing issue, reducing the training loss to 0.15 after 7000 steps (ckpt-7000).
# 3.4. Stage 4: SFT for reflection
Despite improvements in previous stages, the model continued to exhibit typical ASR errors, such as substitutions, insertions, and deletions. Recent studies [24, 25] have shown that large language models can be trained to self-verify and self-correct. To achieve this, we first trained the model with Chain-of-Thought (CoT) data to explicitly reflect on mistakes before outputting the transcription.
The CoT prompt template maintains the basic structure "USER: <S> <P> ASSISTANT: <T>" but introduces a reasoning part in the <T> section, which follows the format: "<LANG XX> <think> The speech sounds like: hypothesis1, but it might have some [error details], let me correct it. </think> <transcribe> hypothesis2 </transcribe>"
The hypothesis1 is generated with the Stage 3 checkpoint (ckpt-7000) by running inference on the training data. For each hypothesis, the [error details] are generated by computing the WER/CER against the ground-truth transcription; specific error details are identified whenever the WER is nonzero, ensuring that at least one relevant error is included when present in the hypothesis. The ground-truth transcription is used as hypothesis2.
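The CoT target construction above can be sketched as follows: count word errors in hypothesis1 against the ground truth with a Levenshtein alignment, then fill in the template from the text. The error-detail wording here is illustrative, not the exact phrasing used in training.

```python
# Build CoT targets from (hypothesis1, ground truth) pairs.
def word_errors(hyp: str, ref: str) -> int:
    """Total substitutions + insertions + deletions at the word level."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (h[i - 1] != r[j - 1]))
    return d[len(h)][len(r)]

def cot_target(lang: str, hyp1: str, ref: str) -> str:
    errs = word_errors(hyp1, ref)
    detail = f"{errs} word error(s)" if errs else "no errors"
    return (f"<LANG {lang}> <think> The speech sounds like: {hyp1}, "
            f"but it might have some {detail}, let me correct it. </think> "
            f"<transcribe> {ref} </transcribe>")
```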
We continue fine-tuning the model from ckpt-7000 of Stage 3. The first attempt resulted in a training loss of 0.12 after 9000 steps (ckpt-9000). However, despite the lower training loss, evaluation on the development set revealed a WER/CER of 35%, which was significantly worse than that of ckpt-7000. Further analysis showed that the model often produced correct transcriptions in the hypothesis1 section while generating incorrect final outputs in the <transcribe> tags. This indicates that the model failed to effectively transfer its intermediate reasoning to the final transcription output.
To understand this counterintuitive result, we analyzed the loss function in detail. We identified a fundamental issue with the standard causal modeling loss (Eq. 1):
$$
L_c = -\frac{1}{|I_c|} \sum_{i \in I_c} \log \mathbb{P}(t_i \mid t_{<i}; \theta)
$$
We hypothesize that the standard loss is suboptimal due to (1) the completion length $|I_c|$ inversely affecting the loss magnitude, and (2) the typically longer <think> section disproportionately influencing the loss. Recent studies [26, 7] have shown that the completion-to-prompt length ratio $(R_g)$ significantly impacts model performance. In our experiments, this imbalance resulted in close loss values for Stages 3 and 4, despite substantial differences in WER.
To address this, we implemented a modified version of the prompt loss token weights (PLW) strategy [7]. Unlike the original approach, we applied $PLW < 1$ to the <think> section while maintaining full masking for the prompt. This modification focused the model's attention on the <transcribe> section through the following loss calculation:
$$
L = \frac{-\sum_{i=1}^{N} w_i \cdot \log p_i}{\sum_{i=1}^{N} w_i}
$$
where $w_i = 1$ for the <transcribe> section and $w_i = PLW$ for the <think> section.
We resumed training from ckpt-9000 with a decaying PLW schedule: starting at 1.0 and reducing to 0.1 over the first 300 steps, then maintaining 0.1. This approach achieved a final loss of 0.1 after 10000 steps (ckpt-10000).
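The weighted loss and the decay schedule just described can be sketched as follows. Prompt tokens stay fully masked (weight 0), <think> tokens get weight PLW (linearly decayed from 1.0 to 0.1 over the first 300 steps, then held), and <transcribe> tokens keep weight 1; the linear shape of the decay is an assumption, and the per-token log-probabilities are inputs here.

```python
# PLW-weighted loss over a completion, per the formula in the text.
def plw_schedule(step, start=1.0, end=0.1, warmup=300):
    """Decay PLW from 1.0 to 0.1 over the first 300 steps, then hold."""
    if step >= warmup:
        return end
    return start + (end - start) * step / warmup

def plw_loss(logps, sections, step):
    """logps: per-token log-probs; sections: per-token section labels."""
    weights = {"prompt": 0.0, "think": plw_schedule(step), "transcribe": 1.0}
    w = [weights[s] for s in sections]
    return -sum(wi * lp for wi, lp in zip(w, logps)) / sum(w)
```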
The adoption of the PLW strategy successfully refocused the model’s learning on the transcription segment, resulting in a substantial reduction in WER/CER, as summarized in Table 1.
Table 1: WER/CER and loss of the SFT stages
# 3.5. Stage 5: RLVR for reflection
While Stage 4 successfully established the CoT completion pattern, two critical issues remained: (1) the WER/CER performance still fell short of our best experimental results, and (2) the reasoning content in the <think> section lacked meaningful analysis. After looking into the model's outputs, we identified several systematic failure modes in the reasoning process:
• Inaccurate error type identification in the initial hypothesis analysis
• Insufficient or missing error details in the reflection phase
• Inability to apply correct error details to final transcription, even when accurately identified
• Generation of non-existent errors (hallucinations) in the reasoning process
To address these issues, we implemented a series of reward functions to guide the model to generate more meaningful reasoning content with Dr. GRPO [8], which fixes the length bias of the original GRPO [9]. In our experiments, we rolled out 4 samples for each group, using multinomial sampling and beam search decoding. The sampling temperature was set to 0.5 to control the diversity of the samples.
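As we understand the Dr. GRPO variant referenced above, the group-relative advantage drops the per-group standard-deviation division (and the per-sequence length normalization in the loss) that cause the length bias; a minimal sketch:

```python
# Group-relative advantage, Dr. GRPO-style: reward minus group mean,
# with no std division and no length normalization.
def group_advantages(rewards):
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

adv = group_advantages([0.9, 0.4, 0.6, 0.5])  # 4 rollouts per group
```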
Here are the reward functions we implemented:
• $RF_1$: verifies the structural correctness of the output with respect to the <think> and <transcribe> tags.
• $RF_2$: evaluates the accuracy of hypothesis2 with respect to the ground truth (scaled between 0 and 1).
• $RF_3$: verifies whether the error types identified in the reflection match the actual errors present in hypothesis1.
• $RF_4$: verifies whether the specific error details described in the reflection correspond to the actual errors in hypothesis1.
• $RF_5$: evaluates whether hypothesis2 achieves a lower WER/CER than hypothesis1 with respect to the ground truth, rewarding only if hypothesis2 is more accurate.
$RF_1$ and $RF_2$ are used to constrain the basic performance of the model, including the format of the output and the accuracy of the transcription. $RF_3$ and $RF_4$ are used to guide the model to generate more accurate reflection content. $RF_5$ is used to guide the model to generate more accurate transcription.
We conducted experiments with different combinations of these reward functions; all post-training experiments were based on ckpt-10000. Each reward score was scaled to the range 0 to 1.0, and the final advantage was calculated from the sum of the reward scores, with a weight of 0.8 for $RF_2$ and 0.5 for the others. Table 2 shows the WER/CER results of different combinations of reward functions, compared with the best model from the SFT stages.
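The weighted aggregation just described can be sketched as follows; the per-rollout scores below are illustrative.

```python
# Weighted sum of reward-function scores: 0.8 for RF2, 0.5 for the rest.
WEIGHTS = {"RF1": 0.5, "RF2": 0.8, "RF3": 0.5, "RF4": 0.5, "RF5": 0.5}

def total_reward(rf_scores):
    """rf_scores: {name: score in [0, 1]} for the RFs active in this run."""
    assert all(0.0 <= v <= 1.0 for v in rf_scores.values())
    return sum(WEIGHTS[name] * score for name, score in rf_scores.items())

# A rollout under the best-performing RF1 + RF2 + RF5 configuration.
score = total_reward({"RF1": 1.0, "RF2": 0.75, "RF5": 1.0})
```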
Table 2: Results of different combinations of reward functions
The combination of $RF_1$, $RF_2$, and $RF_5$ yielded the best performance among all reward function configurations in our experiments. This approach was inspired by SCoRe [25], which employs a two-step correction mechanism to enhance final model performance. In our experiments, the reward associated with $RF_5$ consistently converged above 0.8, indicating that the model developed strong self-correction capabilities. However, we also observed reward hacking: hypothesis1 progressively collapsed and became increasingly shorter after several hundred training steps. To mitigate this issue, we introduced an additional constraint requiring hypothesis1 to be sufficiently similar to hypothesis2 in order to receive rewards.
For $RF_3$, we observed that error type identification became more accurate during training, and the reward of $RF_3$ converged above 0.6; however, this did not translate into improved WER/CER. For $RF_4$, we found that the model was not able to generate the correct error details; the reward of this function converged around 0.5. We conjecture that the reward signal was too sparse to guide the model to generate the correct error details.
In our experiments, the outcome-based reward functions were found to be an effective strategy for training models to perform self-correction, even though the intermediate reasoning steps did not demonstrate a direct causal influence on the final output. This observation aligns with recent findings in the literature [27, 28] suggesting that process-based reward functions might be less stable and need more refinement. By the end of the challenge we had not found a good way to improve the reflection ability of the model; this remains future work.
Furthermore, we found it important to acquire capabilities step by step with a curriculum learning strategy: the model's low-level capabilities must be established first. In the attempt with $RF_2$+$RF_5$, the <think> structure format collapsed quickly, and the rewards of $RF_2$ and $RF_5$ remained low. The GRPO reinforcement learning process seeks a better sampling signal, which presupposes that better sampling results exist within a group [9].
At the end of our post-training experiments, we obtained a WER/CER of 12.73% with $RF_1$+$RF_2$+$RF_5$, with a checkpoint called ckpt-12500.
# 4. Additional Experiments
# 4.1. Effect of Conversational Context on ASR
Inspired by GEC-RAG [29], which demonstrated that contextual information can enhance error correction in post-processing, we explored the impact of conversational context on ASR model performance in the MLC-SLM challenge. Given the conversational and long-form nature of the audio, we hypothesized that incorporating preceding utterances could improve the model's self-correction ability. During training, we augmented each sample by appending the transcriptions of the previous two utterances in the same conversation to the user prompt. With the new prompt, we fine-tuned the model from ckpt-7000 of Stage 3 to ckpt-9000. At test time, the previous two utterances were used as context as well. The results are summarized in Table 3.
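The contextual augmentation above amounts to a simple prompt transformation; a minimal sketch, where the "Context:" template wording is an assumption for illustration:

```python
# Prepend the transcriptions of the two preceding utterances in the
# same conversation to the user prompt.
def contextual_prompt(base_prompt, history):
    prev = history[-2:]              # previous two utterances, if any
    if not prev:
        return base_prompt
    return "Context: " + " ".join(prev) + "\n" + base_prompt

p = contextual_prompt("Transcribe the speech above",
                      ["how are you", "fine thanks", "and you"])
```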
Table 3: Results of training and decoding with context
It should be noted that, in this experiment, the contextual transcriptions used during evaluation were taken from the ground truth of the development set, rather than generated by the model itself. As a result, the reported results represent an upper bound on the potential benefit of context, rather than the true end-to-end performance. Due to the inefficiency of our inference pipeline, which is slow for sequential utterance processing in conversational order, we were unable to conduct large-scale, fully automatic experiments. Improving inference efficiency and conducting stricter end-to-end evaluations will be the focus of our future work.
# 4.2. Different decoding methods and hyperparameters
We tried different decoding methods and hyperparameters with the best-performing model, including different beam sizes, decoding lengths, and decoding methods. All experiments were conducted with the best checkpoint, ckpt-12500. The results are shown in Table 4. The best result in our experiments is 12.73% WER/CER with the Beam + Sampling method, beam size 8, and max new length 180.
Contrary to our initial expectations, increasing the beam size did not yield consistent performance improvements, revealing a non-monotonic relationship between beam size and model accuracy.
Table 4: Results of different decoding methods
Table 5: Analysis of multiple speaker embedding models with different segment time configurations
revealed that utterances typically fall under 10 seconds. Based on this observation, we systematically evaluated different combinations of speaker embedding models and segment duration settings. Table 5 presents the comparative performance metrics, demonstrating that the ResNet-101 model with an 8-second segment duration achieves the best overall performance, yielding a 16.78% DER and 18.03% tcpWER. | This paper presents Seewo's systems for both tracks of the Multilingual Conversational Speech Language Model Challenge (MLC-SLM), addressing automatic speech recognition (ASR) and speaker diarization with ASR (SD-ASR). We introduce a multi-stage training pipeline that explicitly enhances reasoning and self-correction in speech language models for ASR. Our approach combines curriculum learning for progressive capability acquisition, Chain-of-Thought data augmentation to foster intermediate reflection, and Reinforcement Learning with Verifiable Rewards (RLVR) to further refine self-correction through reward-driven optimization. This approach achieves substantial improvements over the official challenge baselines. On the evaluation set, our best system attains a WER/CER of 11.57% for Track 1 and a tcpWER/tcpCER of 17.67% for Track 2. Comprehensive ablation studies demonstrate the effectiveness of each component under challenge constraints. | [
"cs.CL",
"cs.AI",
"cs.SD",
"eess.AS"
] |
# 1. Introduction
Influencer marketing, a marketing strategy used by social media influencers that impacts their followers' decision-making, has gained significant traction in recent years [1]. As digital advertising continues to revolve around social media platforms, influencers play a pivotal role in shaping consumer behaviour, building brand trust, and driving engagement. In 2023, this global market was valued at $21.1 billion, and it is expected to reach $33.3 billion by 2027 [2]. The complex implications of influencer marketing have attracted considerable attention from various academic disciplines, including marketing, psychology, law, and computer science [3-5]. Given the sheer scale of the influencer market, its increasing focus on more granular niches, and the popularity of influencer marketing as a data-driven phenomenon, computational studies have emerged as a key focus area, aiming to provide advanced methods for understanding, optimising and informing the regulation of influencer marketing practices. For the purposes of this research, we define
computational influencer marketing studies as research in the field of influencer marketing that uses computer science methodologies.
While systematic literature reviews on influencer marketing are emerging in social sciences such as communication, marketing or social psychology [6-8] to synthesise research developments, work that systematically maps the advancement of computational methodologies in influencer marketing is currently insufficient. Existing reviews of computational approaches related to influencer marketing often concentrate on narrowly defined topics, such as influencer identification [9-10] or influence maximisation [11]. However, there is a lack of comprehensive studies that provide a holistic overview of the research themes explored in this area and the underlying technologies driving these investigations. Such studies can clarify our understanding of the current state and future directions of this interdisciplinary field. This paper fills this gap by undertaking a systematic literature review based on the PRISMA model [12], focused on reporting the state of the art of computational studies in influencer marketing and proposing a research agenda based on this overview. Our work makes two important contributions. First, it raises awareness about what technologies are available in this field and how they have been designed for the influencer market, to enable critical reflections on the benefits and shortcomings of the commercial practices involving them. Second, it contributes to multidisciplinary dialogue. Influencer marketing research is currently siloed, with little to no acknowledgement between computer science and social science or humanities literature, due to the often different cultures around publication and research goals. Synthesising the field of computational influencer marketing studies and making it available to other disciplines can hopefully improve the flow of insights across different research methodologies beyond computer science.
This article proceeds as follows. Section 2 presents the methodology used in completing this literature review, detailing the data collection and selection process of a corpus of 69 papers, as well as the three key research questions the paper aims to address. Section 3 clusters and discusses the selected papers on the basis of the research questions, and categorises and introduces the computational methodologies used to answer the research questions. Section 4 synthesises these research questions, discusses their implications, and proposes a multidisciplinary research agenda to advance computational studies in influencer marketing in general.
# 2. Methodology
# General overview
We conduct a systematic literature review (SLR) based on the PRISMA guidelines [12] to provide a state-of-the-art overview of computational studies in influencer marketing, addressing the field's current status, methodologies, challenges, and future directions. The SLR allows for systematically and accurately answering research questions by following a structured approach. This study seeks to address the following research questions:
Q1: What are the major research themes related to influencer marketing in computer
science?
Q2: What computational methods have been employed to achieve the research purpose, and
what are the pros and cons of these methods?
Q3: What does a research agenda on computational studies in influencer marketing look
like?
The three research questions reflect the following goals: First, to explore how computer science research on influencer marketing topics has evolved and characterise the direction taken by the
main research themes therein. Second, to make an inventory of computational methods used in the context of these research themes, and critically reflect on their benefits and shortcomings. Third, to propose a research agenda for computational studies in influencer marketing that transcends computer science as a discipline.
# SLR process
To address the research questions, the SLR process involved several structured steps:
Step 1: By exploring different expressions of “influencer” and “marketing”, we determined keywords, alternative words, and phrases that can retrieve the most relevant results. We also included terms such as ”content creators”, as they are often used interchangeably with the term “influencer”. However, we only considered studies on content creators undertaken in the context of marketing, and not other business models (e.g. streaming). A comprehensive search string combining keywords and Boolean operators was created:
("influencer marketing" OR ((advert\* OR sponsor\*) AND ("influencer" OR "influencers" OR "opinion leader" OR "opinion leaders" OR "eWOM" OR "content creator" OR "content creators" OR "influential user" OR "influential users" OR "micro-celebrity" OR "vlogger" OR "vloggers" OR "blogger" OR "bloggers")))
Step 2: The databases and the search fields were determined. The review used three of the largest computer science databases, reflecting state-of-the-art computer science research: IEEE, ACM, and ACL. Given the architecture of these databases, the search field was limited to titles, abstracts, and keywords for IEEE and ACM, while for ACL we used "influencer marketing" as the sole search string without field restrictions, due to the limitations of its advanced search function.
Step 3: Retrieved studies were screened for relevance vis-à-vis the research questions. Two authors independently reviewed the abstracts, introductions, and conclusions of each study. In cases of disagreement, the methodology sections were carefully reviewed, and the full texts were skimmed. The same two authors then discussed marginal cases until they agreed on all studies. Studies were included if they met the following criteria:
• Published in the English language;
• Including at least one computational experiment, defined as quantitative methods used in computer science studies. This includes techniques such as data mining, simulations, statistical analysis, machine learning, network analysis and other data-driven methods;
• Focused on influencer marketing, meaning that selected studies must consider advertising as a central part of the research purpose.
Step 4: Studies passing the initial screening were thoroughly reviewed for eligibility by the first author. Studies that merely used influencer marketing as a background but did not focus on it (e.g. limited mentions and engagements with the field) were excluded. The most prominent cluster of excluded papers focused on influence maximisation [13-15], which is a particular strand of research pre-dating the rise of computational influencer studies [16-17], primarily based on network science and interactions within and between networks on social media [11]. Although this category of research focuses on identifying influential users within a network to maximise the reach and impact of influence [11], it generally does not focus on influencer marketing, but rather on a broader question of influence, which goes beyond the goal of this paper.
The search was conducted on August 29, 2024, with the process depicted in Fig.1 and explained above. A total of 312 records were retrieved, of which 225 studies were manually checked based on their abstracts, introductions, and conclusions. This process excluded 120 studies, leaving 105 studies for full-text eligibility screening. Ultimately, 69 studies were deemed eligible for the thorough review according to the criteria discussed above.
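The record flow described above can be sanity-checked with simple arithmetic; the counts below are taken directly from the text, and the variable names are our own:

```python
# PRISMA-style record flow, using the counts reported in the review.
retrieved = 312          # records retrieved from IEEE, ACM and ACL
duplicates = 87          # duplicates removed during identification
screened = retrieved - duplicates          # screened on abstract/intro/conclusion
excluded_screening = 120                   # excluded after screening
full_text = screened - excluded_screening  # assessed for full-text eligibility
excluded_full_text = 36                    # full-text articles excluded with reasons
included = full_text - excluded_full_text  # studies in the final review

print(screened, full_text, included)  # 225 105 69
```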
Identification (Steps 1 & 2): database searching (N = 312); duplicates removed (N = 87). Screening (Step 3): records screened on abstract, introduction, and conclusion (N = 225); excluded after screening (N = 120). Eligibility (Step 4): full-text articles assessed for eligibility (N = 105); excluded with reasons (N = 36). Final studies included for review (N = 69).
# 3. Analysis
# Characteristics of selected papers
Before addressing the specific research questions, this section first presents the general characteristics of the collected articles using descriptive statistics on matters such as publication trends, language analysis, platform use, dataset availability, influencer categories, and terminology used in influencer marketing studies. Fig.2 highlights the publication trends. In our dataset, Li et al. (2009) were the first to explore bloggers with marketing influence. Over time, interest in this field has grown, peaking in 2023 at 18 articles. The drop in 2024 is likely due to data being collected mid-year.
Fig.1 PRISMA guidelines showing review procedure
Fig.2 The number of relevant publications by year. *2024 only includes studies published until August.
Fig.3 offers deeper insight into the characteristics of the selected papers. Fig.3(a) shows that English is the dominant language of the influencer accounts included in the analysis, followed by Chinese. However, 64.9% of the studies do not mention language explicitly as a characteristic, and 68.1% of the studies do not mention industry categories like lifestyle or fashion (see Fig.3(d)). This indicates that a granular analysis of the various categories applicable to influencers has not been common in the field, and we can speculate that this reflects a problem of understanding and contextualising business
practices. Fig.3(b) shows the distribution of platforms investigated in these studies, with Instagram leading, likely due to its significant role in influencer marketing [19]. Interestingly, the second largest source of data is not social media, but other types of platforms, such as online shopping platforms [20] and email networks [21-22]. Fig.3(c) illustrates the availability of datasets. Dataset availability is important in computational studies because it enables and stimulates not only reproducibility but also additional insights into a given dataset. We consider a dataset available when a paper includes the source link and the link is still accessible at the time of the search. The results show that dataset availability is limited (73% unavailable), indicating a need to improve research reproducibility.
Fig.3 Characteristics of selected papers: (a) Distribution of languages; (b) Platforms; (c) Dataset availability; (d) Industry categories
# RQ1: What are the major research themes related to influencer marketing in computer science?
Our study identified four distinct research themes, with Fig.4 illustrating the detailed distribution of each theme. We conceptualised the themes by synthesising the research aims, problems, and questions highlighted across the body of literature. This section provides an in-depth exploration of these themes:
Theme 1: Influencer identification and characterisation: This theme includes studies that aim to computationally define influencer identities and characteristics through measurable proxies, such as follower size, content style, and interaction patterns. The outcome generally focuses on influencer discoverability.
Theme 2: Advertising strategies and engagement: Research within this theme focuses on providing suggestions for influencers and marketers to optimise their promotion of products or brands. It analyses how different advertising strategies can lead to the best engagement and consumer outcomes, often comparing the effectiveness of various promotional methods across social media platforms.
Theme 3: Sponsored content analysis and discovery: This theme specifically investigates how sponsored content can be identified and measured, usually including the detection of undisclosed sponsored content based on regulatory considerations. It encompasses studies that develop frameworks for measuring the characteristics of sponsored posts.
Theme 4: Fairness: The final theme revolves around the opacity of algorithmic curation and its role in systemic platform manipulation that may lead to deception in influencer marketing.
Fig.4 Number of publications in each research theme
# Influencer identification and characterisation
The theme can be divided into two main topics: influencer identification and compatible influencer selection. While both focus on influencers, their objectives and approaches vary. Influencer identification seeks to pinpoint influential individuals in the advertising industry. In contrast, compatible influencer selection focuses on finding influencers best suited for specific brands or campaigns, tailored to meet particular objectives.
Starting from studies on influencer identification, a general trend among these studies is the prevalent use of datasets from Twitter and Sina Weibo. Accordingly, most of the studies identify influencers based on the network features of each user, that is, how they are positioned and connected in social networks, in order to pinpoint high-ranking influencers. For instance, [18] evaluated influencers based on elements like the number of comments and social connections. [23-24] extended this approach by integrating temporal and sentiment factors.
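The network-feature approach can be illustrated with a toy scoring function; the fields and weights below are illustrative assumptions, not those of any cited study:

```python
# Toy influencer ranking: score each user by a weighted mix of
# engagement (comments received) and connectivity (social connections).
# Weights and data are invented for illustration.
users = [
    {"name": "alice", "comments": 120, "connections": 340},
    {"name": "bob",   "comments": 45,  "connections": 980},
    {"name": "carol", "comments": 300, "connections": 150},
]

def influence_score(user, w_comments=0.6, w_connections=0.4):
    return w_comments * user["comments"] + w_connections * user["connections"]

ranked = sorted(users, key=influence_score, reverse=True)
print([u["name"] for u in ranked])  # ['bob', 'carol', 'alice']
```

Real studies replace this hand-tuned score with learned weights or graph-centrality measures, but the ranking structure is the same.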
Other research targets identifying domain-specific influencers, recognising that an influencer's popularity can vary across industries. [25-26] addressed this issue by categorising influencers into domains such as education and entertainment, revealing that domain-specific relevance often outweighs general popularity. [27-28] developed tools that enable topic-based influencer searches, offering practical solutions for advertisers seeking targeted collaborations. [29] explore the characteristics of influencers and automatically categorise influencers into domains, enabling advertisers to analyse influencers more granularly and align them with relevant marketing strategies.
Moving to compatible influencer selection, studies focus specifically on finding the best influencers for a specific brand or marketing campaign. A prominent research focus within this category is matching brands with micro-influencers, namely influencers who have a smaller follower count. For instance, [30] developed a method to analyse an influencer’s profile, helping brands predict which influencers would be a good fit for their products. Additionally, [31] consider not only marketing effectiveness but also the self-development of influencers. By incorporating the historical content information of brands and influencers, their method enables both stakeholders to find appropriate partners. They further extend their method by adding target audiences and historical cooperation preferences in [32]. Together, this focus allows brands to choose influencers cost-effectively, and influencers to select brands that best suit their long-term development.
In contrast to studies directly recommending influencers, other research seeks to empower advertisers by helping them understand their brands better. [20] identify key assets of specific brands, so that advertisers can then seek out influencers who better match their brand identity, ensuring a more aligned and effective partnership. Another study contextualises real-world challenges, namely limited influencer/brand historical activity data, a limited budget, and an uncertain network environment [33].
# Advertising strategies and engagement
This research theme focuses on providing advertising recommendations through various proposed methodologies. Studies choose various ways to achieve this objective, which can be further classified into three categories: optimising revenue, optimising advertising choices and optimising user engagement.
Studies aiming at optimising revenue tend to investigate strategies to optimise the financial aspects of influencer marketing campaigns, aiming to minimise costs or maximise returns. These studies explore various approaches to improving the monetary efficiency of campaigns, offering insights into the financial side of influencer marketing. For instance, [34-35] both focus on profit maximisation in influencer marketing but adopt different approaches. The key distinction is that the former primarily targets advertisers' interests, while the latter considers the interests of both advertisers and influencers. [34] aim to maximise both direct (e.g., sales revenue) and indirect profits (e.g., consumer interest) from influencer campaigns, while [35] emphasise minimising the gap between influencers' actual hiring prices and maintaining an attractive pricing scheme for influencers.
Certain studies also address the financial benefits for specific stakeholder groups. [36] explores how influencer marketing can create value for small and medium-sized enterprises (SMEs) and highlights the importance of synergy between user engagement and the choice of marketing practices and social media platforms. In contrast, [37] offer a unique perspective by focusing on maximising both advertisement revenue and user traffic from the platform's standpoint. They balance two types of advertisement so as to maximise short-term revenue while maintaining long-term user traffic for platform management.
Next, unlike revenue optimisation, some studies focus on optimising advertising choices to achieve broader outcomes, such as increasing product adoption or finding the most effective diffusion path for a marketing campaign, which means how the marketing message is spread in the network. The primary objective here is to enhance the overall impact of influencer marketing without a direct focus on monetary gains (e.g. brand reputation). Some research emphasises the selection of optimal strategies under given conditions. For example, [38] propose a time-dependent network, where both influencers and advertisers know their historical performance. They maximise long-term benefits for advertisers and users, due to the concern that frequent advertising can harm future effectiveness. Similarly, [39] evaluates the profitability of "hub seeding", namely barter, in influencer marketing. This practice entails giving free products/services to popular influencers to endorse, without actual hiring costs. Their findings suggest different conditions when hub seeding is most
effective, considering factors such as costs, network size and consumer behaviour, offering strategic insights into specific marketing practices.
Other studies focus on modelling and simulating the diffusion processes of advertising strategies, aiming to gain insights before releasing the campaign in the real world. [40] proposed an Advertisement Path Planning Mechanism (APPM) to enhance marketers' ability to manage and optimise online information diffusion in microblogs. [41] propose a model for simulating influencer performance within social networks, enabling marketers to assess the potential effectiveness of various strategies and pinpoint influencers capable of efficiently disseminating targeted information.
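A generic independent-cascade simulation gives a flavour of how such diffusion models work; this is a minimal sketch of the general technique, not the APPM of [40] or the model of [41], and the toy graph and activation probability are invented:

```python
import random

# Minimal independent-cascade diffusion: each newly activated node gets
# one chance to activate each neighbour with probability p.
def simulate_cascade(graph, seeds, p=0.3, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility
    active, frontier = set(seeds), list(seeds)
    while frontier:
        newly_activated = []
        for node in frontier:
            for neighbour in graph.get(node, []):
                if neighbour not in active and rng.random() < p:
                    active.add(neighbour)
                    newly_activated.append(neighbour)
        frontier = newly_activated
    return active

# Toy follower graph: influencer -> followers -> their followers
graph = {"influencer": ["u1", "u2", "u3"], "u1": ["u4"], "u2": ["u4", "u5"]}
reached = simulate_cascade(graph, ["influencer"], p=0.5)
print(len(reached))  # number of users reached by the simulated campaign
```

Running the simulation many times with different seed sets lets a marketer estimate expected reach before launching a real campaign.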
Consumer segmentation is another prominent focus within this category, aimed at tailoring advertising strategies to distinct audience groups. [42] focused on detecting consumers' personalities for more tailored advertising. They connected specific consumer opinions with personality dimensions to offer personalised marketing strategies. [43] propose a system that can match influencer marketing strategies specific to segmented consumer groups by extracting information from marketing survey responses.
Lastly, there is also research examining user engagement metrics, such as view counts or likes, aiming to optimise user engagement of social media content. This area pays specific attention to the factors that shape engagement metrics, distinguishing it from previous categories that primarily focus on advertising strategies. For example, [44] analyse how controversial content influences monetisation and engagement. Their study calculates toxicity scores and assesses their correlation with user engagement, revealing critical insights into how controversial content can shape audience responses.
Several studies explored the influence of multiple factors on engagement. Hashtag usage is one of the most prominent factors. [45] looked into the relationship between hashtags and engagement rates, identifying frequently used hashtags associated with higher engagement. In a more comprehensive study, [46] examine topic clusters formed by hashtags, analysing their evolution and impact on engagement across influencers with varying audience sizes. Their findings provide a nuanced understanding of how topic trends existing in intra-group and global contexts can affect the engagement of influencers of different sizes. Besides, [47] analyse both textual and non-textual factors (e.g. the number of followers) of influencers on Weibo and WeChat, identifying phrases most likely to boost engagement. Together, these types of studies showcase the dynamics behind engagement building.
# Sponsored content analysis and discovery
In this research theme, studies focus on two main areas: sponsored content analysis and sponsored content detection. The first type typically investigates characteristics that differentiate sponsored content from non-sponsored content and explores the effects of posting sponsored content. The second type is more concerned with designing methods to detect undisclosed sponsored content within larger datasets, often referring to regulatory frameworks focused on transparency in advertising.
Research on sponsored content analysis alone is less prevalent, and approaches address the patterns of sponsored content under different contexts. [48] conducted an analysis of sponsored content by the top 20 Chinese gourmet influencers on TikTok. They identify factors, including product categories, frequency of promotions, and platforms to sell, that can significantly impact promotion results. In contrast, [49] conducted a large-scale comparative analysis of influencer marketing on Facebook and Instagram during the COVID-19 pandemic, assessing changes in ad volume and nature, ultimately finding an overall increase in influencer marketing activities across both platforms due to the effect of the pandemic.
Moving to sponsored content detection studies, a first prominent finding is that some studies detect sponsored content based on predefined rules. For example, [50] focused on affiliate marketing and detected sponsored content by targeting sentences that included coupon codes. Similarly, [51] identified brands and organisations involved in paid partnerships with content creators based on a keyword list containing sponsorship cues such as ‘#ad’. While this rule-based methodology ensures high accuracy due to its clear and standardised criteria for determining whether content is sponsored, it comes with a notable limitation: it cannot include all possible cues. However, it can provide ground truth on which data is sponsored content.
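A keyword-cue detector of this kind can be sketched in a few lines; the cue list below is a small illustrative sample, not the list used in [51]:

```python
# Rule-based sponsored-content detection via disclosure cues.
# The cue list is an illustrative sample, not the one used in [51];
# token matching is deliberately naive (punctuation is not handled).
HASHTAG_CUES = {"#ad", "#sponsored", "#partner"}
PHRASE_CUES = ("paid partnership",)

def is_sponsored(post_text: str) -> bool:
    text = post_text.lower()
    tokens = set(text.split())
    return bool(tokens & HASHTAG_CUES) or any(p in text for p in PHRASE_CUES)

print(is_sponsored("Loving this new serum! #ad"))         # True
print(is_sponsored("Morning run with friends #fitness"))  # False
```

The gap between such a cue list and the full space of disclosure phrasings is precisely the limitation noted above, which is why rule-based labels are often used as ground truth for training more general classifiers.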
Other studies use more generalisable methods, such as machine learning models, and they can be built upon rule-based methods. These studies typically involve manually designing a list of features believed to differentiate sponsored from non-sponsored content. These features are then mathematically represented and fed into machine learning models that can automatically learn patterns and generalise them to unseen data. For instance, [52] analysed WeChat Subscription data by manually defining features such as word occurrence and semantic similarity. They then developed a method to extract content marketing articles based on these characteristics. Similarly, by integrating linguistic, community, and image features, [53] detect undisclosed sponsored content from Instagram posts. This type of method can better balance accuracy and inclusiveness, allowing space for feature analysis while broadening its usage to more varied data. [4] assess the contributions of different input formats, such as text, network, and image data. The importance of each data modality is measured by observing the model's performance after removing specific inputs, and thus it provides a broader understanding of input importance.
Lastly, there are also studies combining insights from both sponsored content analysis and detection, using analytical findings to improve detection methods. For example, [54] examined how Instagram influencers of varying audience sizes promote sponsored content. They then identify sponsored content and explore how disclosure practices vary among influencers based on audience reach. [55] investigated the impact of regulation on influencer disclosure in Germany and Spain based on the detection result. Their longitudinal analysis revealed that stricter regulations significantly improve disclosure rates over time, illustrating the real-world impact of regulatory efforts on advertising transparency.
Some studies evaluate the effect of disclosing sponsored content on engagement. [56] analysed the relationship between sponsored content and engagement rates for brands and influencers. Their transparent approach links the detection result directly to their initial analytical findings, such as more hashtags and fewer usertags used in sponsored content. Similarly, [57] analyse and then detect affiliate marketing content. They highlighted the different dynamics of advertising and disclosure practices on YouTube and Pinterest, while a user study provided insights into how various affiliate marketing factors influence user engagement.
# Fairness
Research in this theme acknowledges the potential risks associated with algorithmic decision-making, particularly the opaque "black box" nature of algorithms, which may cause harm, as well as systemic manipulation or bias, for users. The selected studies address this issue from two perspectives: unveiling the algorithmic black box to protect vulnerable users, and investigating unethical practices such as gaming engagement (e.g. purchased likes) to falsely increase popularity.
Two studies explicitly focus on unveiling the algorithmic black box. [58] explore YouTube's algorithmic monetisation mechanisms. Aiming to investigate whether the algorithm has a preference for larger channels, their study examines the frequency and timing of monetisation decisions and the relationship between video content and channel popularity, and finds that smaller channels are less favoured by the algorithm. [59], on the other hand, focused more narrowly on child advertising, which faces many legal limitations across the world. They developed methods to detect how social media platforms target children with ads, which can assist regulatory authorities in enforcing legal protections. These studies underscore the importance of algorithmic transparency and regulation to protect the interests of vulnerable groups, such as children and new content creators, in digital environments.
As for the gaming engagement studies, there is a strong trend to evaluate which features are most prominent in creating and enhancing false engagement. [60-61] analyse the effectiveness of various detection methodologies, albeit with differing research scopes. [60] focused on detecting fake likes, subscriptions, and comments, while [61] focused solely on examining fake likes, using a more complex method that considered a broader range of factors, such as unusual behaviour and comment patterns. They both provide insights regarding features distinguishing fake accounts from real ones.
Two studies provide more explainability to the evaluation methods so that users can better understand the results. [62] aim to distinguish between normal influencer ads, organic posts, and exaggerated influencer ads (EA). Similarly, [63] investigate the features of crowdturfing users' profiles and comments, referring to real people participating in dishonest popularity-boosting activities for rewards. Their system not only classifies content but also provides explanations, offering users insight into why a post was flagged as EA and what features constitute a crowdturfing profile.
# RQ2: What computational methods have been employed to achieve the research purpose, and what are the characteristics of these methods?
Following the identification of research themes, our next aim is to examine the methodologies used in the selected studies and characterise them to shed light on the technological state of the art. To clarify the methodology used in different research themes, we visualise techniques appearing more than once in the selected papers (see Fig.5). Each study can use multiple methods, and we incorporate all counts as they appear. To better understand these techniques, we further group them into clusters as shown in Fig.6. This section provides a detailed explanation of each technique and the associated tasks, categorising them into two broad groups: machine learning-based techniques and non-machine learning-based techniques. This distinction allows for a clearer understanding of the methodologies employed and provides a foundation for considering future directions in computational influencer marketing research.
Fig.5 Heatmap of techniques used in each research theme, excluding techniques used only once among all studies
Fig.6 Methodology clusters. Greens are broad categories, and yellows are fine-grained categories
Machine learning: feature extraction (manual; automatic), supervised learning (classification; sentiment analysis), unsupervised learning (clustering; topic modelling), NLP (information extraction), explainable AI (XAI for deep learning models; XAI feature importance).
Non-machine learning: statistical analysis, network analysis (social network analysis; agent-based modelling), ranking algorithms, others.
# Machine learning-based methods
Machine learning (ML), a branch of artificial intelligence, focuses on creating algorithms and models that allow systems to learn from data, make predictions, and adapt to new information without explicit programming for every scenario. The ML methodologies we have identified in the selected papers are: feature extraction, supervised learning, unsupervised learning, and explainable AI (XAI).
# Feature extraction
This sub-group captures the primary methods for feature extraction in ML. Although most machine learning studies involve either manual or automatic feature extraction processes, this paper focuses only on those that explicitly discuss their feature extraction processes and the rationale behind their selections.
Manual feature extraction entails hand-crafting or refining features based on domain knowledge to improve model performance, requiring a deep understanding of both the data and the domain. These features are then transformed by mathematical functions and represented in a form that machines can process. For example, [52] represent textual features by assigning each word a weight indicating its importance in the text corpus, based on the ratio of its frequency. [53] further decomposed linguistic features to the character, word, sentence, and document levels, combining frequency-based features (e.g., emojis, abbreviations) with syntactic ones like part-of-speech tags. Although manual feature selection may lack inclusivity, it often provides clearer reasoning behind feature choices, enhancing explainability.
In contrast, automatic feature extraction is more widely used in the reviewed studies. Data can be complex and challenging for manual processing, so methods that can automatically learn and extract patterns from raw data are often preferred. For example, to extract textual features, studies can employ a pre-built language model (e.g. BERT, GPT in [42, 64]) that has already learned language patterns from a large amount of text data. For image features, models like Convolutional Neural Networks (CNNs) can extract and pool important visual features such as edges, textures, and patterns [30]. These feature types are transformed and combined to represent the complex multidimensionality of the data, at the price of lower explainability and higher requirements for dataset volume and computing power.
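Manual frequency-based weighting of the kind described for [52] resembles TF-IDF; the following is a minimal hand-rolled sketch on invented documents, not the study's exact formula:

```python
import math
from collections import Counter

# Hand-crafted textual features: weight each word by term frequency
# scaled by inverse document frequency (a TF-IDF-style sketch).
docs = [
    "new lipstick swatch review",
    "lipstick giveaway link in bio",
    "gym routine and protein review",
]
tokenised = [d.split() for d in docs]
# document frequency: in how many documents each word appears
df = Counter(w for doc in tokenised for w in set(doc))

def tfidf(doc_tokens):
    tf = Counter(doc_tokens)
    n = len(doc_tokens)
    return {w: (c / n) * math.log(len(docs) / df[w]) for w, c in tf.items()}

features = tfidf(tokenised[0])
print(sorted(features, key=features.get, reverse=True))
```

Words appearing in fewer documents ("new", "swatch") receive higher weights than words shared across documents ("lipstick", "review"), which is the explainable behaviour manual feature design aims for.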
# Supervised learning
After feature extraction, the next step is to select appropriate models based on the task requirements. The following sub-group includes classification and sentiment analysis, both of which fall under supervised ML. This means the model is trained on human-labelled data, learning patterns to predict labels for new data, and such a model is called a classifier. Classification aims to categorise new data into predefined classes (as defined in the human-labelled data), while sentiment analysis focuses specifically on sentiment classes (such as positive or negative sentiment).
Classification is widely applied across research themes and is the most used technique, especially for Theme 3: Sponsored content analysis and discovery. Classification emerges as a primary solution for nearly all studies under this theme, as they aim to accurately detect sponsored posts. In Themes 1: Influencer identification and characterisation and Theme 2: Advertising strategies and engagement, classification is used to categorise content or influencers into professional domains, like beauty or fitness [29, 65]. Theme 4: Fairness focuses primarily on identifying sensitive content, such as child advertising or fake engagement [59,62]. Sentiment analysis, in contrast, appears only in Themes 1 and 2. Studies in Theme 1 use it to help identify influencers and explore whether successful influencers tend to create content with certain sentiment patterns by linking identified sentiments with performance [28, 66, 67]. In Theme 2, sentiment analysis is used to examine whether sentiment correlates with higher sales in influencer marketing [68, 69].
As for the technical choice, classification and sentiment analysis can use similar methods, since both aim at classifying data into predefined classes. Various classifiers are applied across the themes, which can be categorised as traditional ML and deep learning models. The choice between these approaches often depends on the scale and complexity of the task, as well as the available data resources. Traditional ML classifiers work well for smaller, faster tasks and do not require large datasets. For instance, [52] use a dataset of 800 articles to detect content marketing. Deep learning, however, requires more data and can handle complex tasks, though it needs longer processing time. For example, the dataset from [56] consists of 18,523 Instagram influencers and 804,397 brand-mentioning posts. Despite the promise of advanced techniques, experimental results challenge the assumption that more sophisticated methods always yield better results. This underscores that there is no universally optimal solution: different techniques have distinct strengths depending on the application.
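A minimal traditional-ML classifier of the kind these studies apply can be sketched as a bag-of-words Naive Bayes; the training posts below are invented for illustration and do not come from any cited dataset:

```python
import math
from collections import Counter, defaultdict

# Toy bag-of-words Naive Bayes for sponsored vs. organic posts.
# Training data is invented for illustration.
train = [
    ("use my code SAVE20 at checkout", "sponsored"),
    ("partnered with brandx link below", "sponsored"),
    ("sunset hike with friends today", "organic"),
    ("my dog learned a new trick", "organic"),
]

word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    def log_prob(label):
        total = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / len(train))
        for w in text.split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return lp
    return max(label_counts, key=log_prob)

print(predict("checkout my code for brandx"))  # "sponsored"
```

Deep learning replaces the hand-counted word probabilities with learned representations, at the cost of the data and compute requirements discussed above.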
# Unsupervised learning
Unsupervised learning, used for training ML models on unlabeled data, aims to uncover hidden patterns or structures embedded in the dataset. Clustering is the most prominent method of this approach, which can be used to group similar data and identify anomalies existing in the data. Within this approach, topic modelling is a technique focusing on discovering hidden topics or themes based on word patterns in text.
Although less common than supervised methods, unsupervised learning is widely applied across research themes. In Theme 1: Influencer identification and characterisation, [70] identify influencers by clustering as they believe influencers share very similar characteristics, and [27] employed topic modelling to categorise influencers into different professional domains, such as IT or lifestyle. In Theme 2: Advertising strategies and engagement, clustering is used to identify groups with similar engagement patterns or consumer interests, providing advertisers with valuable insights into audience segments [63,71]. Topic modelling is frequently used to identify professional domains and then analyse how they correlate with engagement, demonstrating popular content within each domain [42, 72]. In Theme 3: Sponsored content analysis and discovery, clustering is applied by [50, 57] to detect patterns in affiliate marketing content, helping to shape more inclusive information extraction processes.
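A minimal k-means pass over toy engagement features illustrates the clustering approach; the data, features, and initial centroids are invented and not taken from the cited studies:

```python
# Toy k-means (k=2) on two engagement features: posts/week and avg. likes.
# Data and initial centroids are invented for illustration.
points = [(2, 50), (3, 60), (2, 55), (20, 900), (22, 950), (19, 870)]

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        # assign each point to its nearest centroid (squared distance)
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # recompute centroids as cluster means (keep old centroid if empty)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return clusters

low, high = kmeans(points, centroids=[(0, 0), (25, 1000)])
print(len(low), len(high))  # 3 3
```

Here the algorithm recovers a low-engagement and a high-engagement group without any labels, which is the kind of audience or influencer segmentation the cited studies perform at scale.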
# Natural Language Processing (NLP)
NLP operates at the intersection of linguistics and computer science, leveraging ML technologies to address language-based tasks such as sentiment analysis and topic modelling. While NLP heavily integrates machine learning, it also encompasses unique methodologies tailored to solving specific tasks. For example, information extraction (IE), which retrieves structured data from unstructured sources like text, emails, web pages, and social media, illustrates the dual nature of NLP, combining rule-based and ML-based solutions. Since the other two methods have been discussed previously, this section will use IE as an example to show how NLP is used in the studies.
IE techniques are predominantly applied in Theme 3: Sponsored content analysis and discovery, particularly to identify key entities linked to sponsored content. For instance, [50, 57] manually design rules to detect affiliate marketing content by examining affiliate companies across social media platforms. These rules are defined by domain experts, offering high precision and interpretability. In contrast, [51] utilise a pre-trained Named Entity Recognition (NER) system to identify brands tied to paid partnerships. While this method enhances scalability and adaptability, it may trade off a degree of precision compared to rule-based methods. This trade-off highlights the distinct advantages and challenges of each method.
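A coupon-code rule of the kind used in [50] can be approximated with a regular expression; the pattern below is an illustrative assumption, not the study's actual rule:

```python
import re

# Extract coupon-code candidates: a cue word ("code", "coupon", "promo")
# followed by an uppercase alphanumeric token of 4+ characters.
# Illustrative pattern only; cue words must be lowercase in this sketch.
CODE_PATTERN = re.compile(r"\b(?:code|coupon|promo)\s+([A-Z0-9]{4,})")

def extract_codes(text):
    return CODE_PATTERN.findall(text)

print(extract_codes("Use code GLOW15 for 15% off my favourite serum!"))
# ['GLOW15']
```

Such expert-written rules give the high precision and interpretability noted above, while the NER-based approach of [51] would generalise to brand names the pattern cannot anticipate.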
# Explainable AI (XAI)
A common concern with ML in practical settings is whether automated decision-making can be trusted. To address this, XAI methods have emerged to explain the decision-making process. Different methods have been developed according to the complexity of the model. In traditional ML, feature importance is a popular XAI technique across research themes, helping to identify which features influence decisions most. This is often done by assigning scores, and some traditional ML models naturally integrate this function, alleviating the difficulty of examining which features contribute most to, for example, fake engagement [63, 72]. In deep learning, fewer studies mention explainability, despite its popularity for the complex tasks described in the previous sections. Due to their more complex structures and large number of parameters, deep learning models require external XAI methods to evaluate feature importance. For visual data, [73] use a heatmap to highlight the image areas that contribute most to the result. For textual data, [47, 62] visualise important words within language models, clarifying which terms influence engagement. Another common method in both traditional ML and deep learning is the ablation study, which assesses feature importance by removing or modifying elements of a model to observe performance changes. For example, [29] test the importance of textual, visual, and graphic input by evaluating the performance of the deep learning model. This method is particularly useful when the focus is on the category rather than the specific component.
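The ablation idea reduces to a simple loop: score the model with all features, then re-score with one feature group zeroed out and compare. In the sketch below the scoring function is a stand-in for a trained model, and all numbers are invented:

```python
# Toy ablation study: measure how much a score drops when one feature
# group is removed. model_score stands in for a trained model's
# validation metric; weights and features are invented.
def model_score(features):
    weights = {"text": 0.5, "image": 0.3, "network": 0.2}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

full = {"text": 1.0, "image": 1.0, "network": 1.0}
baseline = model_score(full)

for group in full:
    ablated = dict(full, **{group: 0.0})  # zero out one feature group
    drop = baseline - model_score(ablated)
    print(f"removing {group}: score drops by {drop:.2f}")
```

The feature group whose removal causes the largest drop is judged the most important, exactly the comparison [29] make across textual, visual, and graph inputs.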
# Non-machine learning methodologies
We have identified four relevant methodological clusters among the non-machine-learning methodologies: statistical analysis, network analysis, ranking algorithms, and others. Algorithm is a broad concept in computer science that usually refers to a process or set of rules to be followed in calculations; machine learning is one type of algorithm, as are ranking algorithms.
# Statistical analysis
Statistical analysis examines data trends and relationships between variables, typically to test a hypothesis. Due to its broad applicability, statistical analysis is widely implemented across research contexts, with correlation and regression analysis being the most common methods observed in the reviewed studies. Especially in Theme 2: Advertising strategies and engagement, these methods are extensively used to measure factors affecting user engagement.
Correlation analysis examines the association between two variables without implying causation: a correlation does not mean that a change in one variable causes a change in the other. It is frequently used to measure relationships between post variables (e.g. post length) and engagement across influencer groups that differ in professional domain or follower size [72, 74, 75]. This approach facilitates cross-group comparisons and provides insights into general patterns without emphasising causality. For example, [75] perform this analysis to test how different variables (e.g. number of hashtags) correlate with engagement metrics (e.g. likes, comments) across influencers of different sizes, thus indicating their importance.
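A minimal sketch of such a correlation analysis, using invented data in place of real post variables and engagement metrics:

```python
# Pearson correlation between a hypothetical post variable and an
# engagement metric; all data here is synthetic for illustration.
import numpy as np

rng = np.random.default_rng(1)
hashtags = rng.integers(0, 10, size=200).astype(float)        # post variable
likes = 50 + 5 * hashtags + rng.normal(scale=10, size=200)    # engagement metric

r = np.corrcoef(hashtags, likes)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strong positive association, no causal claim
```

The coefficient quantifies association only; as the text notes, cross-group comparisons of such coefficients are what these studies use to indicate a variable's importance.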
Conversely, regression analysis explains and predicts the influence of independent variables (e.g. the features of influencer posts) on dependent ones (e.g. engagement), with a stronger emphasis on causal interpretation. Studies using this method therefore aim to predict outcomes based on empirical results and to identify the relative impact of specific features on engagement [44, 45, 68]. For example, [44] explore how content toxicity affects user engagement for YouTube content.
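The regression setup can be sketched as an ordinary least squares fit; the covariates and coefficients below are invented, not taken from [44] or any other reviewed study:

```python
# OLS regression of a synthetic engagement outcome on synthetic post features.
import numpy as np

rng = np.random.default_rng(2)
n = 300
toxicity = rng.uniform(0, 1, n)          # hypothetical independent variable
length = rng.uniform(10, 200, n)         # a second covariate
engagement = 100 - 40 * toxicity + 0.1 * length + rng.normal(scale=5, size=n)

# Design matrix with an intercept column; solve for coefficients.
X = np.column_stack([np.ones(n), toxicity, length])
coef, *_ = np.linalg.lstsq(X, engagement, rcond=None)
print(f"intercept={coef[0]:.1f}, toxicity effect={coef[1]:.1f}, length effect={coef[2]:.2f}")
```

Unlike the correlation coefficient, each estimated coefficient isolates the impact of one feature while holding the others fixed, which is why these studies use regression to rank feature impact.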
# Network analysis
This category focuses on techniques that analyse data to explore relationships within social networks. Social network analysis (SNA) is a framework for studying the relationship between entities within a social network. In a social network, the entity (e.g. influencers, users, brands) is represented as a node, and connections that show some kind of relation (e.g. followers) between nodes are called edges. This framework is frequently used to identify influential nodes, detect communities, and analyse relationships among influencers and followers. For example, to identify influencers, [18, 76, 77] measure the influence of one node by counting the number of edges connected to the node. These studies also try to combine these network-based factors with other behaviour-based factors, such as purchase history or forwarding messages, to present the nodes with more information.
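The degree-counting measure used in [18, 76, 77] can be illustrated with a toy network (the node names and edges below are invented):

```python
# Count how many edges touch each node; a higher degree indicates
# a more "influential" node in this simple SNA measure.
from collections import Counter

edges = [("influencer_a", "user1"), ("influencer_a", "user2"),
         ("influencer_a", "user3"), ("influencer_b", "user1")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

ranked = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # ('influencer_a', 3)
```

In the reviewed studies this degree score is then combined with behaviour-based signals (e.g. purchase history) to enrich each node's representation.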
Another approach in network analysis is agent-based modelling (ABM). An agent is any entity (e.g. influencers, users, brands) that can move and act within a network structure. ABM simulates the actions and interactions of agents in a network (consisting of nodes and edges) to understand social dynamics and predict outcomes under various conditions. This method is used exclusively in Theme 2: Advertising strategies and engagement. [41] model influencer marketing campaigns within social networks, treating all users as agents who influence their followers while being influenced by those they follow; they then include attributes like engagement rate and hiring costs, and define interactions that impact purchase decisions. Similarly, [39] examine when a certain influencer marketing strategy is most effective, treating potential adopters of a new product as agents and simulating the campaign to find the targets that create the greatest spread of the product in the social network. Both studies use ABM to simulate the diffusion path of influencer marketing strategies, ultimately offering insights into effective advertising strategy selection.
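A minimal, hypothetical agent-based sketch of such campaign diffusion: each agent adopts once a fraction of the accounts it follows have adopted. This illustrates the ABM idea only; it is not the model in [39] or [41].

```python
# Threshold-based adoption spreading over a tiny invented follow network.
follows = {           # agent -> accounts it follows
    "a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["d"],
}
adopted = {"a"}       # seed the campaign with one influencer
threshold = 0.5       # adopt when at least half of followed accounts adopted

changed = True
while changed:        # iterate until the cascade stabilises
    changed = False
    for agent, sources in follows.items():
        if agent not in adopted and sources:
            share = sum(s in adopted for s in sources) / len(sources)
            if share >= threshold:
                adopted.add(agent)
                changed = True
print(sorted(adopted))  # ['a', 'b', 'c', 'd', 'e'] - the campaign reaches everyone
```

Varying the seed agent or threshold and re-running the simulation is, in miniature, how ABM studies compare which targets create the greatest spread.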
# Ranking algorithms
A ranking algorithm sorts or orders items based on certain criteria, typically to prioritise results in response to a query or task. This usually takes the form of a formula that calculates a relevance score for each item, which determines its position in the ranked list. Ranking algorithms are widely used in Theme 1: Influencer identification and characterisation. They are especially popular for identifying compatible micro-influencers for brands or campaigns [30], [31], [32], [73]. These studies often first employ automatic feature extraction techniques, such as text, visual, and graph-based features, to represent both brands and influencer accounts. Influencers are then ranked according to their compatibility with brands, helping advertisers efficiently select influencers who align with their brand and budget constraints.
As for the general influencer identification task, studies also tend to apply a ranking algorithm after extracting the features needed to represent social accounts. For example, [18], [76] relied on social network analysis to rank influencers based on manually built graph-based features, while [23, 24, 28] ranked influencers based on the sentiment embedded in their posts. [25, 26] first classified influencers by domain (e.g. beauty, fitness) and then ranked them by performance metrics (e.g. how they interact with other users).
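The score-and-rank pipeline described above can be sketched as a weighted scoring formula followed by a sort; the feature names, weights, and values here are invented for illustration:

```python
# Combine extracted features into a relevance score per influencer, then sort.
influencers = [
    {"name": "alice", "topic_match": 0.9, "engagement_rate": 0.04},
    {"name": "bob",   "topic_match": 0.6, "engagement_rate": 0.08},
    {"name": "carol", "topic_match": 0.3, "engagement_rate": 0.02},
]

def relevance(inf, w_topic=0.7, w_engage=0.3):
    # Normalise engagement (assumed max ~0.1) so both terms share a 0-1 scale.
    return w_topic * inf["topic_match"] + w_engage * (inf["engagement_rate"] / 0.1)

ranked = sorted(influencers, key=relevance, reverse=True)
print([inf["name"] for inf in ranked])  # ['alice', 'bob', 'carol']
```

In the reviewed studies, the inputs to such a formula are the automatically extracted text, visual, or graph features, and the weights encode brand or campaign priorities.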
# Others
Some studies diverge from conventional methods, developing unique or less frequent techniques tailored to specific, complex problems within their research themes. These approaches appear only once within the dataset and are found only in Theme 1: Influencer identification and characterisation and Theme 2: Advertising strategies and engagement. They reflect a need for customised solutions in niche areas, addressing precise problems that require specialised and often complex theoretical algorithms.
One such method uses one or more optimisation algorithms to find the best solution to a problem within a defined set of constraints, e.g. selecting the most cost-effective option with a limited budget. [35] introduced the Profit Divergence Minimization in Persuasive Campaigns (PDMIC) algorithm to manage conflicting interests between brands and influencers in a campaign. These conflicting interests entail the absolute divergence between the actual hiring price and the asking price of each influencer. The study focuses on proposing a mathematical theoretical solution for this problem, which is why it pertains to a category of its own.
# 4. Discussion
Our review has identified four primary research themes and several methodological categories in computational studies in influencer marketing. While these studies offer extensive coverage and depth for selected topics and tasks, certain research gaps remain, offering opportunities for future
exploration. This section presents key findings and reflections from our review to shed light on the state of the art of computational influencer studies, their technologies, and their role in furthering the understanding of the creator economy as a whole.
# What is the future research direction of computational studies in influencer marketing (RQ3)?
# More research driven by regulatory interests is needed to ensure a fairer marketplace

This review highlights a predominant focus on commercial interests in computational influencer marketing research, with most studies falling under Theme 1: Influencer identification and characterisation and Theme 2: Advertising strategies and engagement, which centre on the interests of advertisers and agencies. Conversely, fewer studies focus on the interests of regulators, as reflected in the number of studies in Theme 3: Sponsored content analysis and discovery and Theme 4: Fairness. This finding underscores a significant imbalance in regulatory-focused research on influencer marketing.
Studies under Theme 3 highlight discrepancies between regulatory requirements for content disclosure and actual compliance practices, while Theme 4 explores issues like fake engagement and protecting vulnerable groups. Influencers and advertisers often act as counterparts to both regulators and consumers, navigating a delicate balance between transparency and profit-driven motives. By exploiting regulatory loopholes, they aim to enhance engagement or maximise financial returns, particularly when enforcement is lax or non-existent. The temptation to capitalise on these gaps creates fertile ground to exploit consumers, highlighting the urgent need for stricter enforcement, greater regulatory involvement and most importantly, more effective monitoring methods to protect consumers and ensure a fairer marketplace.
This review suggests that the meagre state of regulatory-focused research limits its potential to counteract such exploitative practices. Regulators and policymakers should take an active role in computational influencer marketing studies, collaborating with researchers to develop scalable monitoring solutions. Increased involvement of regulatory stakeholders could shift the research agenda toward a balance of commercial and compliance concerns, fostering both ethical practices and consumer trust in the influencer marketing ecosystem.
# Computational research generally lacks nuance and context
Our review highlights a prevalent ambiguity in the terminology surrounding influencer marketing within the selected studies. Terms like "influencer marketing," "sponsored content," "video monetisation," and "social media marketing" are often used interchangeably, despite important distinctions among them. For instance, self-promotion is not sponsored content per se, but it does have a monetisation dimension due to its engagement, which fuels the profile and potentially market rates of an influencer. This lack of clarity can lead to datasets that include activities outside of influencer marketing, such as platform advertisements that are paid by brands to platforms as opposed to influencers, further resulting in errors in sponsored content detection and classification.
Different influencer marketing practices show distinct characteristics, underscoring the limitations of universal sponsored content detection models. For instance, identifying endorsements (e.g. receiving remuneration for advertising) can be challenging if an influencer merely tags a brand without overt promotional language or campaign hashtags, while affiliate marketing is easily detectable through elements like discount codes or affiliate URLs. Such language variations create unique identifiers for each marketing practice, which should be distinguished in detection datasets. Without this differentiation, detection models risk over-inclusion, leading to high false-positive rates.
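The observation that affiliate marketing leaves explicit traces suggests simple rule-based checks in the spirit of [50, 57]; the patterns below are invented examples, not the actual rules from those studies:

```python
# Illustrative rule-based affiliate-marketing detector using invented patterns.
import re

AFFILIATE_PATTERNS = [
    re.compile(r"use\s+code\s+\w+", re.IGNORECASE),   # discount codes: "use code SAVE20"
    re.compile(r"[?&](ref|aff_id|utm_source)="),       # affiliate/tracking URL parameters
]

def looks_like_affiliate(post: str) -> bool:
    return any(p.search(post) for p in AFFILIATE_PATTERNS)

print(looks_like_affiliate("Loved this serum! Use code GLOW10 at checkout"))  # True
print(looks_like_affiliate("Morning run by the lake"))                        # False
```

A subtle endorsement with only a brand tag would slip past such rules entirely, which is exactly the over-inclusion/under-inclusion tension the paragraph above describes.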
Among the selected studies, only a few, such as [50], [57], have focused on specific types of influencer business models, such as affiliate marketing, and [39] addresses hub seeding. This also shows that some business models may be easier to identify than others, but without scholarship building on such legal and transactional frameworks, it is difficult to further the field on these topics. For this reason, while developing a conceptual framework may fall outside the primary scope of computational studies, this review emphasises the need for more multidisciplinary research that integrates insights from cultural, legal, and political perspectives. Computational methods are tools designed to address specific problems, and experts in legal or social science are the ones who often conceptualise and/or deploy them. Human oversight and interdisciplinary collaboration are essential to ensure these tools address real-world challenges effectively.
# Explainability must be improved
Nearly half of the reviewed studies in influencer marketing leverage ML technologies, yet only a small subset (25%) addresses model explainability. This lack of explainability highlights a critical limitation of many ML approaches, especially those involving deep learning. Such models often function as "black boxes", delivering high accuracy but providing limited insights into how decisions are made. The review notes that most research prioritises model performance while underestimating the importance of transparency.
Model transparency is vital for several reasons. Practically, it fosters trust among stakeholders: when decision-making processes are understandable, users are more likely to trust the system, particularly in sensitive fields like healthcare, finance, or criminal justice [78]. Ethically, transparency helps identify and address biases embedded in models [79]. For influencer marketing and regulatory compliance, transparency is indispensable for enabling regulators to comprehend model decision-making, mitigate systematic errors, and protect consumers from harm.
While explaining complex algorithms to non-expert audiences poses challenges, future research can explore several strategies. Combining machine learning methods with simpler techniques, such as statistical analysis or traditional machine learning models, can provide complementary insights. Additionally, integrating explainable AI (XAI) techniques or employing hybrid models - pairing interpretable models with black-box algorithms - can strike a balance between performance and transparency. This approach has proven effective in other domains, such as radiology [80], and holds the potential for advancing computational studies in influencer marketing.
# Reproducibility must be pursued through clean and transparent data
The review highlights what is probably one of the most significant issues in computational influencer marketing studies: the lack of accessible datasets, as illustrated in Fig. 3: 73% of the reviewed studies did not disclose their datasets. The absence of standardised, pseudonymised and openly available datasets undermines the ability of independent researchers to replicate or extend previous work, ultimately affecting the credibility and accuracy of the field.
Computational research, particularly on this topic, can benefit from access to open datasets. Open datasets are especially valuable as they establish standards for data structure, allow for the verification of findings, and provide common baselines for benchmarking. However, they are scarce in influencer marketing research, where the textual and visual data common in this domain tend to be unstructured and difficult to standardise, and where social media platforms have long harassed researchers for exploring data outside of terms-of-service limitations (e.g. [81]). A key observation from the review is that the majority of open datasets are social network datasets, a longer-established and easier-to-handle data format. Besides technical challenges, privacy concerns further complicate the sharing and standardisation of datasets, particularly in regions with strict data protection laws. Additionally, platform-imposed restrictions and varying regulatory environments across countries add to the difficulty of collecting and redistributing data at scale.
To address these issues, the field must prioritise the development of standardised and accessible datasets while exploring innovative approaches to data collection. Prior dataset documentation techniques [82-84] can serve as models not only for computer science but also for computational social science research. Such efforts will enhance reproducibility, enable benchmarking, and foster more reliable and impactful research in computational influencer marketing.
# A research agenda for computational studies in influencer marketing
Table 1 provides an overview of each identified research gap, accompanied by possible research questions for further investigation, in an attempt to illustrate what a research agenda for computational influencer studies could look like. Some questions are derived directly from the original studies, while others arise from identified gaps following a thorough examination of the literature. We also address common challenges in influencer marketing research, particularly issues of data scarcity.
Table 1: Future research agenda
# Data scarcity
Building upon the challenges of data scarcity discussed earlier, this section explores potential methods to enhance data availability. Achieving the ultimate goal of increasing open datasets in computational influencer marketing research requires initiatives in two key areas: expanding data collection methods and establishing taxonomies for social media data.
Besides collection through official channels (e.g. APIs), alternative methods for gathering social media data include web scraping and data donation. Web scraping automatically extracts data from websites; while powerful and flexible, it often risks violating platforms’ terms of service. In contrast, data donation offers a more ethically and legally sound alternative. This approach invites participants to share their digital trace data voluntarily, akin to collecting interview data. For instance, [85] developed a software tool allowing individuals to inspect their data and selectively donate only the parts they agree to share. Such an approach not only ensures transparency but also empowers users to maintain control over their data, and researchers can collect data from influencers and users according to their research purposes. However, this method is hard to scale because it requires individual consent, and the resulting sample risks being unrepresentative.
Establishing universal taxonomies for social media data represents another critical avenue for progress. A standardised taxonomy could integrate diverse data formats from different platforms and collection methods, making datasets more interoperable and accessible. To achieve this, comprehensive documentation is essential. This documentation should provide naming conventions for data elements across various sources and map how different terms or structures refer to the same concept. Additionally, it should align with established frameworks for categorising influencer marketing practices and characteristics. A universal taxonomy could also facilitate better benchmarking and cross-study comparisons, further advancing the field's reliability and impact.
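The field-mapping documentation described above can be sketched as a simple schema map; the platform and field names below are hypothetical, purely to illustrate how different terms for the same concept would be unified:

```python
# Map platform-specific field names onto one shared taxonomy.
FIELD_MAP = {
    "platform_x": {"like_count": "likes", "share_count": "shares"},
    "platform_y": {"favourites": "likes", "reposts": "shares"},
}

def to_unified(platform, record):
    """Rename a record's keys according to the taxonomy for its platform."""
    mapping = FIELD_MAP[platform]
    return {mapping.get(key, key): value for key, value in record.items()}

print(to_unified("platform_y", {"favourites": 12, "reposts": 3}))
# {'likes': 12, 'shares': 3}
```

Even this toy mapping shows the benefit: once records from different platforms share one schema, datasets become interoperable and cross-study benchmarking becomes possible.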
# Characterising and detecting influencer marketing business models for regulatory compliance

In regulatory research, frameworks for categorising influencer marketing have been proposed. For example, [86] distinguish practices such as endorsement, barter, and affiliate marketing, and [87] further extend this to a broad social media monetisation model that includes both influencer marketing and other monetisation strategies. These conceptual frameworks could provide valuable guidance for computational studies, where nuanced analyses remain rare.
Future computational research could draw on social science and regulatory frameworks to build legally informed annotated datasets for each influencer marketing practice. This means actively involving more legal and social science experts in designing and deploying annotation guidelines for different tasks. Such an approach would enable more thorough analysis of sponsored content and improve detection accuracy. For certain practices, reliance on machine learning alone may be unnecessary; for example, [50, 57] developed affiliate marketing detection methods based primarily on information extraction, with machine learning as a supplementary tool. Studying the nuanced indicators that distinguish sponsored from non-sponsored content therefore remains a valuable research direction that can, in turn, provide insights for regulators.
# Protecting influencers
The review highlights a notable imbalance in research attention between protecting consumers and influencers. While nearly all of Theme 3: Sponsored content analysis and discovery focuses on safeguarding consumers from hidden advertising, only [58] has considered the interests of influencers. Unlike traditional marketing campaigns involving solely business entities, influencers occupy a dual role as both individuals with life experiences and as channels for promoting marketing content. This dual identity presents unique vulnerabilities for influencers, as their personal and professional lives can significantly affect each other. For instance, they may face unfair treatment from brands [88], shadow-banning by recommender systems due to demographic factors [89], or cyberbullying triggered by scandals associated with their professional content [90].
This complexity underscores the need for greater scholarly attention to the challenges influencers face, ensuring that their rights and interests are adequately addressed within the broader context of influencer marketing. Regulatory bodies can play a pivotal role in driving research agendas,
particularly by promoting transparency in algorithms embedded within recommender systems and exploring mechanisms to mitigate potential harms. Such efforts would contribute to a more equitable and sustainable influencer marketing landscape.
# Finer-grained influencer characteristics
Influencer marketing, like other business activities in the free market, is shaped by various influential factors, including language, country of origin, follower size, and interest domain. Despite the presence of a few studies that address these nuances, most research lacks a focus on these specific variations within influencer marketing content. Further exploration into these distinctions could offer actionable insights for stakeholders, enabling more targeted marketing strategies and assisting regulatory bodies in developing frameworks to better govern the market.
For advertisers and agencies, future research on how competitors’ influencer strategies vary by country, language and platform could inform adaptations in their own approaches. From a regulatory perspective, there is also a need to compare the effectiveness of policies, such as sponsorship disclosure, across countries (e.g. within the EU), languages (e.g. in multilingual countries such as Belgium or Canada) or platforms (e.g. TikTok vs YouTube). Such comparisons may provide insights into why compliance may be higher in one group than another, potentially guiding improved regulatory practices.
While characteristics such as the professional domain or the influencer audience size receive more attention in the literature, research gaps remain. There is a need for compatibility analyses across different sizes and domains of influencers, which would clarify how these factors interact in affecting audience engagement. Additionally, datasets used for sponsored content detection are not sufficiently categorised by specialised domains, which might limit detection accuracy. Although data scarcity presents a challenge in certain domains, a more fine-grained categorisation in future studies could enhance detection results, providing more precise insights for both commercial and regulatory stakeholders.
# Political advertising
The review identifies an underexplored trend in politically sponsored content, where influencers are hired to promote specific political ideologies [91], such as manipulating election campaigns [92]. This domain poses unique challenges compared to traditional commercial influencer marketing. While regulatory efforts like the Transparency and Targeting of Political Advertising (TTPA) rules [93] aim to address these issues, effective detection methods for non-compliant political content remain scarce. Political advertisements are inherently more complex and subtle than commercial advertisements. The ambiguous nature of political endorsements complicates their identification. It can be difficult to discern whether an influencer is sharing an ideology out of personal belief or as part of a paid sponsorship.
Future studies could address these challenges using both top-down and bottom-up approaches. From a top-down perspective, computational researchers can draw inspiration from existing legal and social science frameworks that categorise recurring problems in political advertising. These frameworks can guide the design of computational tools to identify non-compliant content. Case studies analysing major incidents of political influencer marketing could also offer insights into regulatory loopholes. From a bottom-up perspective, researchers can investigate the characteristics of political advertisements on platforms to differentiate them from non-sponsored content. This involves analysing patterns in language, images, hashtags, and temporal trends, such as spikes during election campaigns. Factors like metadata (e.g., links to funding sources) and network behaviours (e.g., influencers' connections to political entities) could also inform detection methods. | Influencer marketing has become a crucial feature of digital marketing strategies. Despite its rapid growth and algorithmic relevance, the field of computational studies in influencer marketing remains fragmented, especially with limited systematic reviews covering the computational methodologies employed. This makes overarching scientific measurements in the influencer economy very scarce, to the detriment of interested stakeholders outside of platforms themselves, such as regulators, but also researchers from other fields. This paper aims to provide an overview of the state of the art of computational studies in influencer marketing by conducting a systematic literature review (SLR) based on the PRISMA model. The paper analyses 69 studies to identify key research themes, methodologies, and future directions in this research field. 
The review identifies four major research themes: Influencer identification and characterisation, Advertising strategies and engagement, Sponsored content analysis and discovery, and Fairness. Methodologically, the studies are categorised into machine learning-based techniques (e.g., classification, clustering) and non-machine-learning-based techniques (e.g., statistical analysis, network analysis). Key findings reveal a strong focus on optimising commercial outcomes, with limited attention to regulatory compliance and ethical considerations. The review highlights the need for more nuanced computational research that incorporates contextual factors such as language, platform, and industry type, as well as improved model explainability and dataset reproducibility. The paper concludes by proposing a multidisciplinary research agenda that emphasises the need for further links to regulation and compliance technology, finer granularity in analysis, and the development of standardised datasets. | [
"cs.CY",
"cs.CL"
] |
# 1 Introduction
Breast cancer is among the most prevalent and fatal malignancies affecting women worldwide, posing a major global health burden [1]. Its metastatic nature—commonly spreading to the bones, liver, lungs, and brain—contributes significantly to its incurability [2, 40]. Early detection can dramatically increase the 5-year survival rate to 85%, underscoring the importance of proactive screening [3]. However, manual examination of histological slides remains time-consuming and labor-intensive [4, 5, 42, 44, 45, 46]. With the rising incidence of breast cancer, there is a growing need for automated, efficient diagnostic tools. The digitization of pathology slides has enabled the production of gigapixel images, making large-scale computational analysis feasible.
Deep learning, particularly convolutional neural networks (CNNs), has greatly advanced pathological image analysis, offering improved diagnostic accuracy and efficiency. Nevertheless, designing architectures suited to pathology requires domain-specific considerations. Standard models from natural image domains—such as ResNet or InceptionNet—are often repurposed, but they fail to account for the simpler color distributions and richer hierarchical structures characteristic of pathological images. While researchers typically fine-tune these models using self-supervised or fully supervised approaches, such adaptations often overlook the unique demands of histopathology, resulting in suboptimal outcomes [41]. Moreover, clinical deployment requires not only accuracy but also computational efficiency when processing gigapixel whole slide images (WSIs) in a patch-wise manner on edge devices.
Neural architecture search (NAS), a key technique in automated machine learning (AutoML), has shown promise in generating efficient networks under constraints such as FLOPs or parameter count. Despite its potential, NAS remains underutilized in pathological imaging. Traditional NAS methods like evolutionary algorithms (EA) or Bayesian optimization (BO) often rely on random initialization (RI), which can lead to unstable performance when only a small number of samples are available due to high evaluation cost.
To address this, we propose a two-fold approach. First, we introduce a Network Similarity Directed Initialization (NSDI) strategy to enhance the stability and effectiveness of the search process. Second, we incorporate domain adaptation into one-shot NAS to address distributional shifts caused by varying staining protocols and semantic scales across pathology datasets. The inclusion of domain adaptation loss constrains the supernet to yield more reliable performance estimates, improving evaluation accuracy and overall search quality. Importantly, our method is modular and can be easily integrated into existing NAS pipelines.
This work makes the following key contributions:
• We propose a novel NSDI algorithm that improves the robustness of NAS by reducing redundancy in the initial population, particularly under limited initialization budgets.
• We introduce domain adaptation into one-shot NAS using a Maximum Mean Discrepancy (MMD) loss to mitigate domain shifts in pathological datasets, enabling more reliable architecture evaluation.
• Extensive experiments demonstrate that our method achieves superior classification performance and enhanced search stability compared to existing approaches.
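The MMD term used for domain adaptation can be illustrated with a generic RBF-kernel estimator; this is a standard sketch of the quantity, not the paper's implementation, and the synthetic "domains" below are invented:

```python
# Squared Maximum Mean Discrepancy between two samples under an RBF kernel
# (biased estimator that includes the kernel diagonal, for simplicity).
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = rbf_mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
shifted = rbf_mmd2(rng.normal(size=(100, 2)), rng.normal(loc=1.5, size=(100, 2)))
print(f"same-domain MMD^2={same:.3f}, shifted-domain MMD^2={shifted:.3f}")
```

A distribution shift (here a mean shift, standing in for staining differences) yields a much larger MMD; minimizing such a term over supernet features is what constrains the evaluation to be more reliable across domains.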
# 2 Related Works
# 2.1 Image Classification in Breast Cancer
Most recent studies on breast cancer classification employ weakly supervised techniques, particularly multiple instance learning (MIL), to analyze whole slide image (WSI)-level data. Thandiackal et al. proposed ZoomMIL, an end-to-end framework that performs multi-level zooming and outperforms state-of-the-art (SOTA) MIL approaches on the BRIGHT and CAMELYON16 datasets [25]. Zhan et al. integrated both region-of-interest (ROI) and WSI-level information for breast tumor classification in the BRIGHT Challenge [28]. Wang et al. introduced a weakly supervised method based on cross-slide contrastive learning, which decouples task-agnostic self-supervised feature extraction from task-specific feature refinement and aggregation [29]. Marini et al. developed an instance-based MIL model that integrates both strongly and weakly labeled data through a multi-task loss [30]. Wentai et al. proposed a MIL pipeline enhanced with transformers for subtype classification [31].
Graph-based approaches have also gained attention. Hou et al. designed a spatial-hierarchical graph neural network (GNN) with dynamic structure learning to model spatial dependencies [27]. Pati et al. introduced a hierarchical GNN that captures intra- and inter-entity interactions in tissue via entity graphs [32]. Tiard et al. proposed a self-supervised method that incorporates stain normalization into a constrained latent space for robust feature learning [26]. Most of these methods focus on WSI-level classification, while patch-level classification remains less explored.
# 2.2 Neural Architecture Search
NAS typically consists of three components: the search space, the search strategy, and the evaluation method.
# 2.2.1 Search Space
Zoph et al. initially proposed a flexible search space that optimizes the size, stride, and number of kernels in each convolutional layer [6]. Later works such as NASNet [7] introduced modular designs by stacking normal and reduction cells. Zhang et al. explored block-based architectures [38]. Liu et al. proposed DARTS, which searches over a continuous relaxation of the architecture space using a weight-sharing supernet [8], significantly reducing search cost while achieving competitive results.
Another line of work focuses on optimizing existing architectures through search. ProxylessNAS [9], MDENAS [10], MNasNet [11], FBNet [12], and FBNetV2 [13] all extend MobileNetV2 by adjusting the number, size, and type of convolutional blocks.
# 2.2.2 Search Strategy
Popular search strategies include random search (RS), Bayesian optimization (BO), evolutionary algorithms (EA), reinforcement learning (RL), and gradient-based methods. Bergstra et al. leveraged BO to identify optimal architectures. Zoph et al. employed RL to navigate the search space [6], while Real et al. demonstrated improved performance using a regularized EA [14]. Liu et al. combined parameter sharing with gradient-based search in DARTS to further reduce computational cost [8].
# 2.2.3 Evaluation Strategy
The standard evaluation approach involves training candidate architectures to convergence and assessing their performance on a validation set. Although accurate, this is computationally expensive. To reduce cost, Klein et al. used partial training on subsets of data [15], and Chrabaszcz et al. used lower-resolution images [16]. Domhan et al. proposed early-stopping strategies based on performance extrapolation from early epochs [17]. Cai et al. trained a surrogate model to predict architecture performance from structural encoding [18].
Recent one-shot NAS methods further reduce evaluation cost through weight-sharing supernets, where all candidate architectures share parameters and are evaluated without individual retraining [33].
# 2.3 NAS Applications in Medical Image Analysis
Despite its success in natural image tasks, NAS has seen limited use in medical imaging. Dong et al. introduced a NAS framework for adversarial medical image segmentation [23]. Yan et al. proposed MS-NAS, which fuses multi-scale features for cell-level tasks [22]. Tang et al. applied a hyperparameter-tuned DARTS framework to computational pathology (CPath), achieving promising results on the ADP dataset [20]. Huang et al. developed AdwU-Net, a NAS framework that adapts the depth and width of U-Net for segmentation tasks in the Medical Segmentation Decathlon (MSD) [24]. Eminaga et al. proposed PlexusNet, a scalable model family tailored to five clinical classification tasks through structured control of depth, width, and branching [21].
# 3 Method
We propose a novel neural architecture search (NAS) framework tailored for pathological image analysis, termed Domain Adaptation One-Shot NAS (DAOS). It incorporates a network similarity directed initialization (NSDI) strategy to enhance search stability and introduces domain adaptation into the one-shot NAS paradigm. As shown in Fig. 1, the overall pipeline consists of two main stages: supernet training and architecture search.
Figure 1: The overview of the proposed domain adaptation one-shot neural architecture search.
The architecture search space $\mathcal { A }$ is modeled as a directed acyclic graph (DAG), where each candidate architecture corresponds to a subgraph $a \in { \mathcal { A } }$ , denoted as $\mathcal { N } _ { a , w }$ with weights $w$ . In our design, the search space comprises $N$ layers, each offering $M$ candidate operations, yielding a total of $M ^ { N }$ possible architectures.
NAS typically involves training candidate architectures to convergence, then ranking them based on evaluation metrics such as accuracy or F1 score. This process is formalized as:
$$
w _ { a } = \underset { w } { \mathrm { a r g m i n } } \mathcal { L } _ { \mathrm { t r a i n } } \left( \mathcal { N } _ { a , w } \right) ,
$$
$$
a^{*} = \underset{a \in \mathcal{A}}{\mathrm{argmax}}\, \mathrm{Metrics}_{\mathrm{val}}\left(\mathcal{N}_{a, w_{a}}\right),
$$
where $\mathcal{L}_{\mathrm{train}}(\cdot)$ is the training loss, and $\mathrm{Metrics}_{\mathrm{val}}(\cdot)$ denotes validation performance.
# 3.1 Search Algorithm
We adopt an evolutionary algorithm (EA) for architecture search, using random initialization (RI) to generate the initial population. However, when the population size is small, pseudo-random sampling can result in an imbalanced exploration of the search space. Fig. 2 shows that RI often fails to evenly sample operation choices (e.g., the red box highlights under-represented operators). Although methods like AutoBSS [38] partially address this using clustering-based initialization, they lack consistency.
Figure 2: One sample of population initialization under a specific random seed, where the population number is 50, and the search space is a 20-layer network, with four choices in each layer.
To improve population diversity and search stability, we introduce a Network Similarity Directed Initialization (NSDI) method, inspired by force-directed placement (FDP) algorithms [34]. FDP arranges nodes based on repulsive and attractive forces but operates in continuous space, making it unsuitable for discrete search spaces like NAS. Instead, we define a discrete similarity metric to guide initialization.
Specifically, the similarity between two network samples is defined as:
$$
\mathrm { S S } ( v _ { i } , v _ { j } ) \triangleq \sum _ { k = 0 } ^ { N } v _ { i , k } \odot v _ { j , k } ,
$$
where $v _ { i }$ and $v _ { j }$ are $N$ -dimensional binary vectors encoding architectures $a _ { i }$ and $a _ { j }$ , and $\odot$ denotes the XNOR operation. We then define the Average Population Similarity (APS) as:
$$
\operatorname { A P S } ( V ^ { P } ) \triangleq \frac { 1 } { P } \sum _ { i = 0 } ^ { P } \operatorname* { m a x } _ { v _ { j } \in V ^ { P } , i \neq j } \operatorname { S S } ( v _ { i } , v _ { j } ) ,
$$
where $V ^ { P }$ is the set of $P$ encoded network vectors.
To ensure diversity, we constrain APS under a user-defined threshold $A P S _ { m a x }$ and increment it adaptively to avoid excessive sampling time. The NSDI process is summarized in Algorithm 1.
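As a concrete illustration, the NSDI sampling loop can be sketched as follows. This is a minimal Python sketch, not the authors' implementation: we interpret the XNOR similarity SS as the number of layers on which two architectures pick the same operation, and we enforce the threshold pairwise between candidates as a simple surrogate for constraining the population-level APS, with the adaptive relaxation after a timeout as described above.

```python
import random

def ss(a, b):
    """Similarity of two architectures: the number of layers choosing the
    same operation (per-layer XNOR agreement of the encoded vectors)."""
    return sum(int(x == y) for x, y in zip(a, b))

def aps(pop):
    """Average Population Similarity: the mean, over the population, of
    each member's similarity to its most similar other member."""
    return sum(max(ss(a, b) for j, b in enumerate(pop) if j != i)
               for i, a in enumerate(pop)) / len(pop)

def nsdi(pop_size, n_layers=20, n_ops=4, aps_max=6, timeout=200_000, seed=0):
    """Network Similarity Directed Initialization (sketch of Algorithm 1).

    Rejection-sample candidates whose similarity to every accepted member
    stays below aps_max; if the timeout is exhausted, relax aps_max by one
    so that sampling always terminates (the adaptive increment)."""
    rng = random.Random(seed)
    pop, tries = [], 0
    while len(pop) < pop_size:
        cand = [rng.randrange(n_ops) for _ in range(n_layers)]
        tries += 1
        if all(ss(cand, m) <= aps_max for m in pop):
            pop.append(cand)
        elif tries >= timeout:
            aps_max += 1  # adaptive relaxation to bound sampling time
            tries = 0
    return pop
```

For the experimental settings ($N = 20$, $M = 4$), two random architectures agree on $N/M = 5$ layers in expectation, but the maximum similarity over a population of 50 is considerably higher, which is what the threshold suppresses.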
# 3.2 Supernet Training
To avoid retraining every architecture, we adopt a one-shot NAS approach using a shared-weight supernet $\mathcal{N}_{\mathcal{A}, W}$, which spans the entire search space $\mathcal{A}$ with shared weights $W$. After training, candidate architectures inherit weights from the supernet and are evaluated directly, significantly reducing search cost.
Algorithm 1: Network Similarity Directed Initialization
The supernet weights are optimized by minimizing:
$$
W_{\mathcal{A}} = \underset{W}{\mathrm{argmin}}\, \mathcal{L}_{\mathrm{train}}\left(\mathcal{N}_{\mathcal{A}, W}\right),
$$
often approximated by sampling architectures from a prior $\Gamma ( \mathcal { A } )$ :
$$
W_{\mathcal{A}} = \underset{W}{\mathrm{argmin}}\, \mathbb{E}_{a \sim \Gamma(\mathcal{A})}\left[\mathcal{L}_{\mathrm{train}}\left(\mathcal{N}_{a, W_{\mathcal{A}}(a)}\right)\right],
$$
where $\Gamma ( \mathcal { A } )$ is uniformly sampled under FLOPs constraints.
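For intuition, the uniform sampling prior under a FLOPs constraint can be sketched as follows. The per-block FLOPs table here is hypothetical (real costs depend on feature-map resolution and channel widths); in the single-path strategy, each training step then updates only the weights of the sampled path.

```python
import random

# Hypothetical per-block cost (MFLOPs) for the M = 4 candidate operations.
# Real values depend on each layer's resolution and channel configuration.
OP_FLOPS = [60, 80, 110, 95]

def sample_single_path(n_layers=20, flops_limit=1800, rng=random):
    """Draw architectures uniformly and keep the first one satisfying the
    FLOPs constraint: a rejection-sampling view of the constrained prior."""
    while True:
        arch = [rng.randrange(len(OP_FLOPS)) for _ in range(n_layers)]
        if sum(OP_FLOPS[op] for op in arch) <= flops_limit:
            return arch
```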
Given a labeled training set $\mathcal{D}_{s} = \{(x_{i}^{s}, y_{i}^{s})\}_{i=1}^{n_{s}}$, where $y_{i}^{s} \in \mathbb{R}^{C}$ is a one-hot label, the classification loss is:
$$
\mathcal{L}_{\mathrm{cls}}(\mathcal{N}_{\mathcal{A}, W}, \mathcal{D}_{s}) = \frac{1}{n_{s}} \sum_{i=1}^{n_{s}} J\left(\mathcal{N}_{a, W_{\mathcal{A}}(a)}(x_{i}^{s}), y_{i}^{s}\right),
$$
where $J ( \cdot , \cdot )$ denotes cross-entropy loss.
# 3.3 Domain Adaptation with MMD
Weight sharing can lead to performance estimation bias, especially when the training and validation distributions differ (e.g., due to stain variability). To mitigate this, we introduce a domain adaptation loss using Maximum Mean Discrepancy (MMD).
Assume source domain $\mathcal{D}_{s}$ (labeled training) and target domain $\mathcal{D}_{t} = \{x_{j}^{t}\}_{j=1}^{n_{t}}$ (unlabeled validation), sampled from different distributions $p \neq q$. MMD quantifies their discrepancy as:
$$
d_{\mathcal{H}}(p, q) \triangleq \left\| \mathbb{E}_{p}[\phi(x^{s})] - \mathbb{E}_{q}[\phi(x^{t})] \right\|_{\mathcal{H}}^{2},
$$
where $\phi ( \cdot )$ maps samples to a reproducing kernel Hilbert space (RKHS) with kernel $k ( x , x ^ { \prime } ) = \langle \phi ( x ) , \phi ( x ^ { \prime } ) \rangle$ .
An unbiased estimator of MMD is:
$$
\hat{d}_{\mathcal{H}}(p, q) = \frac{1}{n_{s}^{2}} \sum_{i, j} k(x_{i}^{s}, x_{j}^{s}) + \frac{1}{n_{t}^{2}} \sum_{i, j} k(x_{i}^{t}, x_{j}^{t}) - \frac{2}{n_{s} n_{t}} \sum_{i, j} k(x_{i}^{s}, x_{j}^{t}).
$$
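The estimator above can be written directly in NumPy. This is an illustrative sketch: the kernel choice (a single RBF kernel with a fixed bandwidth) is our assumption, whereas practical MMD implementations often use a mixture of kernels.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # k(x, x') = exp(-gamma * ||x - x'||^2), computed pairwise by broadcasting.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(xs, xt, gamma=1.0):
    """Squared MMD between source samples xs (n_s, d) and target samples
    xt (n_t, d); each .mean() realizes one (1/n^2) double sum of the estimator."""
    k_ss = rbf_kernel(xs, xs, gamma).mean()
    k_tt = rbf_kernel(xt, xt, gamma).mean()
    k_st = rbf_kernel(xs, xt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

By construction, the estimate is zero when the two sample sets coincide and grows as the two distributions drift apart, which is exactly the signal used to penalize domain shift during supernet training.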
The final training objective combines classification and domain adaptation loss:
$$
\mathcal{L}_{\mathrm{train}}(\mathcal{N}_{\mathcal{A}, W}) = \mathcal{L}_{\mathrm{cls}} + \lambda\, \hat{d}_{\mathcal{H}}(p, q),
$$
where $\lambda$ is a balancing coefficient (set to 0.5 in our experiments).
# 4 Experiment
# 4.1 Dataset
Due to the high computational cost of neural architecture search (NAS), we focus our evaluation on a single large-scale dataset, BRACS [36]. The dataset comprises 4,391 breast histological images, scanned using an Aperio AT2 scanner at a resolution of $0.25\ \mu\mathrm{m/pixel}$. All tumor regions-of-interest (TRoIs) are annotated into seven categories: Normal, Benign, Usual Ductal Hyperplasia (UDH), Atypical Ductal Hyperplasia (ADH), Flat Epithelial Atypia (FEA), Ductal Carcinoma In Situ (DCIS), and Invasive. TRoI images vary in spatial resolution, with an average size of $1778 \times 1723$ pixels.
# 4.2 Training Details
We follow the data augmentation protocol introduced in [37], resizing input images to $512 \times 512$ pixels. All models are trained using the Adam optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$) to minimize cross-entropy loss. Training is performed on NVIDIA Tesla V100 GPUs using PyTorch v1.7.1 with a batch size of 64.
# 4.2.1 Search Space
We adopt the one-shot ShuffleNet V2-based search space from [33]. The search space consists of $N = 2 0$ block layers, each offering $M = 4$ candidate operations. These include Shuffle blocks with kernel sizes of $3 { \times } 3$ , $5 { \times } 5$ , and $7 { \times } 7$ , as well as Xception-style blocks with varying depthwise convolutions. The resulting search space contains $4 ^ { 2 0 }$ possible architectures.
# 4.2.2 Supernet Training
The supernet is initialized with pre-trained weights from ImageNet [33] and trained for 2000 epochs on BRACS using the single-path strategy as baseline. In the DAOS-A variant, the supernet is further fine-tuned for the final 1000 epochs using a combination of classification and domain adaptation losses. In DAOS-B, the classifier is frozen and the encoder is further fine-tuned for an additional 1000 epochs. The initial learning rate is set to 3e-4, and the fine-tuning rate to 1.5e-4. A cosine annealing schedule reduces the learning rate to a minimum of 6e-7.
# 4.2.3 Search algorithm
We use an evolutionary algorithm with an initial population of $P = 100$, from which the top 50 candidates are selected as the population for further EA-based search. For mutation, a randomly selected candidate mutates each of its choice blocks with a probability of 0.1 to produce a new candidate. For crossover, two randomly selected candidates are crossed to produce a new one. Mutation and crossover are repeated (25 operations each) until enough new candidates are produced. $\mathrm{FLOPs} \leq 1800\mathrm{M}$ is adopted as the complexity constraint because of the large image size. A total of 1000 candidates are searched, and the top 10 networks are selected for retraining.
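The mutation and crossover operators described above can be sketched as follows. This is a minimal Python sketch: the per-layer mutation probability of 0.1 is from the text, while the uniform per-layer crossover is our assumption, since the crossover scheme is not specified (the FLOPs check on offspring is omitted for brevity).

```python
import random

def mutate(parent, n_ops=4, p=0.1, rng=random):
    """Each choice block is replaced by a random operation with probability p."""
    return [rng.randrange(n_ops) if rng.random() < p else op for op in parent]

def crossover(a, b, rng=random):
    """Uniform crossover: each layer inherits its operation from either parent."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
```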
# 4.2.4 Candidate Retraining
Top-ranked architectures from the search stage inherit weights from the supernet and are then fine-tuned on BRACS for 50 epochs. The learning rate is scheduled from 5e-6 to 3e-4 using triangular annealing.
# 4.3 Results and Analysis
# 4.3.1 NSDI Enhances Search Stability
NAS is inherently a black-box optimization problem, where candidate architectures are encoded as discrete vectors within a combinatorial search space. Due to the high cost of evaluating each network, initializing the population with a representative and diverse set is critical to guiding the search effectively.
As illustrated in Fig. 3, evolutionary algorithms (EA) rely on mutation and crossover, which are heavily influenced by the initial population. Poor initialization can trap the search in local optima. To demonstrate this, we compute the average population similarity (APS) for a random-initialized population with $P = 50$, $N = 20$, $M = 4$, and $Lat_{max} = 1800\mathrm{M}$ FLOPs (Table 1). Results show that random initialization yields populations where each architecture is, on average, $50\%$ similar to at least one other sample.
Figure 3: The impact of population initialization on the evolutionary algorithm. (a) Random sampling; (b) mutation and crossover can fall into a local optimum.
Table 1: Influence of different population initialization methods on average population similarity
By contrast, our NSDI strategy—with $A P S _ { m a x } = 6$ and timeout $T = 2 \times 1 0 ^ { 5 }$ —achieves lower APS scores, leading to better diversity, albeit at the cost of more sampling. This trade-off is manageable and can be tuned via $T$ .
Fig. 4 compares the F1 score trends across generations for three search strategies. Each method is repeated 10 times.
Figure 4: Comparison of random search and evolutionary algorithm with random initialization or network similarity directed initialization.
Table 2 summarizes the best performance per trial, showing that EA with NSDI consistently outperforms both random search and EA with random initialization. Despite the time-intensive nature of NAS, our method ensures that the result of each search is consistently close to the global optimum.
Table 2: Best search results on validation set
In terms of search cost, DAOS-A matches the training time of SPOS (Table 3). DAOS-B incurs slightly more cost due to the additional fine-tuning of the encoder.
Table 3: Search cost (GPU hours - Ghs)
After 1000 search iterations, the top-10 architectures are retrained for evaluation. The best-discovered architecture ($\mathrm{F1} = 61.41\%$) is shown in Fig. 5, and the final performance results are presented in Table 4. Although random search remains a strong baseline, SPOS and $\mathrm{EA+NSDI}$ consistently outperform it. Both DAOS-A and DAOS-B further enhance performance and stability, with DAOS-B yielding the most stable outcomes.
Figure 5: The best-discovered architecture, a 20-layer sequence of Shuffle 3×3, Shuffle 5×5, and Xception blocks.
# 4.3.2 Domain Adaptation Improves Supernet Training
We compare three supernet training schemes:
• Baseline: 2000 epochs of training using classification loss only.
• DAOS-A: Initial 2000 epochs with classification loss, followed by 1000 epochs with added domain adaptation loss.
• DAOS-B: Further fine-tuning of the encoder for 1000 epochs while freezing the classifier (after DAOS-A).
Table 4: Top-1 F1 score $(\%)$ results on BRACS dataset with different neural architecture search methods (CL: Classification Loss, DAL: Domain Adaptation Loss, FE: Fine-tune Encoder)
To assess the effect of domain adaptation, we randomly sample 1000 architectures and evaluate their F1 scores on both validation and test sets. As shown in Fig. 6, the baseline model shows poor correlation between validation and test metrics, meaning a model that performs well on the validation set may generalize poorly. This undermines the reliability of one-shot NAS.
Figure 6: Results of 1000 network evaluation metrics on the validation set and test set. All networks were randomly sampled from the supernet under different training methods. The baseline only uses the classification loss on the training set. DAOS-A uses classification loss and domain adaptation loss. DAOS-B fine-tunes the feature extractor based on DAOS-A by freezing the classifier.
With domain adaptation, both DAOS-A and DAOS-B significantly improve this consistency, yielding a more monotonic relationship between validation and test F1 scores. DAOS-B exhibits the strongest correlation.
Pearson correlation coefficients are computed to quantify this alignment: 0.1794 (Baseline), 0.6985 (DAOS-A), and 0.7096 (DAOS-B). This confirms the effectiveness of our domain adaptation design in stabilizing performance estimation. We further visualize the average feature representations of 1000 networks using t-SNE (Fig. 7). Features from the baseline supernet are scattered and noisy, whereas those from DAOS-A and DAOS-B exhibit clearer separation and clustering, suggesting improved feature consistency.
Figure 7: t-SNE visualizations of the averaged features on the training set, validation set, and test set (panels (a)–(c)).
# 4.3.3 DAOS Achieves Superior Performance
To benchmark overall performance, we compare our method against manually designed networks (e.g., ResNet18, InceptionNet V3, MobileNet V2, SqueezeNet variants) and NAS-based models. All models are initialized with ImageNet pre-trained weights and trained on BRACS for 100 epochs with a learning rate of 3e-4, using the same training configuration as for retraining searched candidates.
Results are reported in Table 5. DAOS-A achieves the highest F1 score among all methods under comparable FLOPs and parameter constraints, demonstrating its efficacy in constrained medical settings.
Table 5: F1 score $( \% )$ results on BRACS dataset with different deep learning methods
# 4.3.4 DAOS Improves Pathological Interpretability
Beyond accuracy, interpretability is crucial in clinical applications. We visualize Class Activation Maps (CAMs) [39] for several patches using ResNet18, MobileNet V2, and our method.
As shown in Fig. 8, ResNet18 highlights limited tumor and epithelial regions, while MobileNet V2 attends to irrelevant areas like fat or connective tissue. In contrast, DAOS focuses on a wide range of diagnostically relevant regions—including tumor cells, ducts, and proliferative epithelial structures—better supporting subtype classification. This indicates that our method not only improves accuracy but also enhances clinical relevance and decision support.
Figure 8: CAMs generated from different methods. The upper left corner of each image shows the real or predicted label.
# 1 Introduction
Relational databases are foundational to modern data infrastructure, powering analytics, reporting, and decision-making across domains. Yet, querying these databases typically requires fluency in SQL—a barrier for many users. Text-to-SQL systems aim to democratize access by translating natural language (NL) questions into executable SQL queries (Zhu et al., 2024; Zhang et al., 2024). Enabled by large language models (LLMs), recent systems achieve impressive performance across complex cross-domain settings.
Algorithm 1: Graph-Based Schema Linking
However, bringing these systems to real-world applications introduces new challenges. Enterprise databases often contain hundreds of tables and thousands of columns—far beyond the scale of academic benchmarks. Supplying the entire schema to the model risks exceeding token limits and introduces considerable noise, which can hinder SQL generation and inflate inference cost (Cao et al., 2024; Li et al., 2023c). In practice, user queries typically touch only a small subset of the schema, making it crucial to identify and extract the relevant part—a process known as schema linking (Lei et al., 2020).
Schema linking aims to determine which tables or columns are needed to answer a user question. While early methods relied on exact string matches (Yu et al., 2018), recent work has proposed neural linkers (Gan et al., 2023), retrieval-based modules (Pourreza and Rafiei, 2024), and promptbased systems (Wang and Liu, 2025). These can capture semantic signals beyond surface overlap, but typically require supervised training, complex multi-stage pipelines, or brittle prompt engineering. They also struggle with the core trade-off: being precise enough to reduce noise, yet broad enough not to miss critical context (Liu et al., 2024; Wang et al., 2025).
In this work, we ask: Can we perform effective schema linking without relying on specialized fine-tuned models or complex prompting strategies? Our answer is affirmative.
We introduce SchemaGraphSQL, a zero-shot schema linking framework that revisits classical algorithmic tools. Our key idea is to model schema linking as a graph search problem. We treat the database schema as a graph where nodes are tables and edges reflect foreign-key connections. Given a user query, we make a single LLM call to predict coarse-grained source and destination tables, then apply deterministic path-finding algorithms to enumerate all shortest join paths between them. The union of these paths forms a compact subschema—guaranteed to be connected and grounded in the query.
This perspective is both simple and surprisingly powerful. To our knowledge, SchemaGraphSQL is the first Text-to-SQL system to rely exclusively on classical graph algorithms for schema linking, using LLMs only for coarse guidance. It requires no training, incurs minimal inference cost, and integrates easily into any downstream parser or LLM-based SQL generator.
Empirical results on the BIRD benchmark show that SchemaGraphSQL achieves new state-of-the-art scores on recall-focused schema linking metrics and improves execution accuracy across multiple SQL generators. We also conduct ablations demonstrating that even this minimal linking method outperforms specialized neural or prompt-based systems in robustness and cost-efficiency.
# Main Contributions:
• We introduce a zero-shot schema linking approach that models database schemas as graphs and applies classical path-finding algorithms. Our method achieves state-of-the-art performance without requiring any training—either for fine-tuning or inference—making it highly suitable for low-resource, real-world scenarios where training data is unavailable or difficult to obtain.
• Our system uses only a single lightweight LLM call (Gemini 2.5 Flash) per query, with minimal token usage (averaging 4593 input and 14 output tokens), significantly reducing inference cost while maintaining ease of integration and deployment.
• We conduct comprehensive empirical evaluations, demonstrating superior schema linking performance compared to fine-tuned and specialized methods. Additionally, we perform detailed ablation studies to examine precision–recall trade-offs and assess the downstream impact on Text-to-SQL execution accuracy across a range of open-source and closed-source models.
# 2 Related Work
Text-to-SQL systems aim to automatically translate natural language questions into executable SQL queries, thereby enabling non-experts to interact with relational databases. The advent of large language models (LLMs) has significantly advanced this task (Zhang et al., 2024; Zhu et al., 2024), with models like GPT-3.5/4, Gemini, and their opensource variants demonstrating impressive performance across benchmarks. However, as schema size increases, providing the entire schema as input may exceed the model’s context window, especially in large-scale databases. Even when using recent LLMs with extended context lengths, supplying the full schema can introduce noise and hinder the model’s ability to focus on relevant elements.
# 2.1 Schema Linking in Text-to-SQL
Schema linking—the process of aligning natural language mentions to corresponding tables and columns in a database—is a crucial component of Text-to-SQL systems (Lei et al., 2020; Liu et al., 2022; Li et al., 2023c). Early approaches relied on exact string matching or type-based heuristics (Yu et al., 2018), which struggled with synonyms, paraphrases, and complex cross-domain schemas. Recent methods have increasingly leveraged pretrained LLMs and neural encoders to improve linking accuracy (Gan et al., 2023; Glass et al., 2025). Schema linking has proven particularly important for LLM pipelines that operate on large or multi-database environments, where prompt space is limited and precision in schema filtering directly affects SQL generation quality (Cao et al., 2024; Liu et al., 2025).
# 2.2 Neural and Prompt-Based Linking Strategies
Figure 1: Overview of our graph-based schema linking pipeline.
Numerous methods have been proposed to handle schema linking within LLM-based Text-to-SQL systems. Some decouple schema linking as a separate module before SQL generation (Pourreza and Rafiei, 2024; Li et al., 2023a), while others incorporate schema selection as a prompt-driven or retrieval-augmented step (Wang and Liu, 2025). Extractive methods, such as Glass et al. (2025), directly prompt LLMs to list relevant schema items, trading generation flexibility for interpretability and control. RSL-SQL (Cao et al., 2024) proposes a bidirectional pruning mechanism with self-correction to boost recall, while Solid-SQL (Liu et al., 2025) augments training data to improve linking robustness. Despite variations in architecture, a common trend across these systems is the effort to balance schema coverage (recall) with relevance filtering (precision) to avoid overloading the LLM or omitting critical elements.
# 2.3 Graph-Based Approaches for Schema Linking
A parallel line of work models the database schema as a graph structure, where tables and columns are nodes, and foreign-key or semantic relations form edges. These methods primarily leverage graph neural networks (GNNs) or relation-aware transformers to propagate information across schema components. RAT-SQL (Wang et al., 2020) pioneered relation-aware attention over a joint question–schema graph, inspiring successors such as LGESQL (Cao et al., 2021) (line-graph encoding of meta-relations) and ShadowGNN (Chen et al., 2021) (delexicalised projection for cross-schema generalisation). Later hybrids integrate graph reasoning directly into pretrained LMs, e.g. Graphix-T5 (Li et al., 2023b) and GRL-SQL (Gong and Sun, 2024). Most recently, SQLformer (Bazaga et al., 2024) embeds schema structure as inductive bias in a Transformer encoder and autoregressively generates SQL ASTs as graphs. While graph-enhanced models capture rich global relations, they typically require substantial fine-tuning or architectural changes—an obstacle in low-resource, real-time deployments. Graph-based schema linking methods have recently declined in popularity as LLM-driven approaches have become dominant.
# 2.4 Classical Graph Algorithms in Schema Linking
In contrast to learned graph encoders, only a handful of systems reuse classical graph algorithms to aid LLMs. DBCopilot (Wang et al., 2025) constructs a directed schema graph and performs depth-first traversal to linearise the sub-schema passed to a lightweight “router” model. InteractiveT2S (Xiong et al., 2024) equips an LLM agent with a FINDSHORTESTPATH tool that runs breadth-first search over the foreign-key graph to supply valid join chains during multi-turn dialogue. These works demonstrate the practicality of DFS/BFS as auxiliary helpers, but the graph search remains peripheral—responsible only for join validation or routing—rather than serving as the core schema-linking engine.
# 2.5 Positioning Our Work
While prior literature has thoroughly explored neural and graph-enhanced architectures for schema linking, the explicit use of classical graph algorithms—particularly as the core mechanism for schema linking in LLM-based Text-to-SQL systems—remains rare. Our approach, SchemaGraphSQL, revisits this paradigm by operationalizing schema linking as a path-selection problem on the schema graph. To our knowledge, this is the first work to systematically evaluate and ablate classic path-finding algorithms for schema linking in LLM-driven Text-to-SQL pipelines on real-world benchmarks.
# 3 Methodology
# Notation
Databases. A relational database is represented as
$$
\mathcal { D } = \langle \mathcal { T } , \mathcal { A } , \mathcal { K } \rangle ,
$$
where:
• $\mathcal { T } = \{ T _ { 1 } , \ldots , T _ { n } \}$ : set of tables.
• $A ( T _ { i } )$ : attributes (columns) of table $T _ { i }$ ; $\textstyle A = \bigcup _ { T _ { i } \in { \mathcal { T } } } A ( T _ { i } )$ is the global set of attributes.
• ${ \mathcal { K } } \subseteq { \mathcal { T } } \times { \mathcal { T } }$ : set of foreign key (FK) relations.
The schema graph is the undirected graph $G = (\mathcal{T}, \mathcal{K})$, with nodes as tables and edges as FK links. For sparse schemas (fewer than two edges), we further augment the schema graph by adding edges between tables that share a column containing “id” in its name, thus ensuring that the schema graph is sufficiently connected for path enumeration.
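A minimal sketch of this construction, assuming tables are given as a mapping from table name to its column names (the adjacency-set representation is our choice, not prescribed by the paper):

```python
from itertools import combinations

def build_schema_graph(tables, fks):
    """Build the undirected schema graph G = (T, K): nodes are tables, edges
    are foreign-key links. Sparse schemas (fewer than two FK edges) are
    augmented with edges between tables sharing an 'id'-named column."""
    graph = {t: set() for t in tables}
    for a, b in fks:
        graph[a].add(b)
        graph[b].add(a)
    if len(fks) < 2:  # sparse schema: add heuristic 'id'-column edges
        for a, b in combinations(tables, 2):
            shared = {c for c in tables[a] if "id" in c.lower()} & set(tables[b])
            if shared:
                graph[a].add(b)
                graph[b].add(a)
    return graph
```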
# Languages.
• $\mathcal { L }$ : set of well-formed natural language questions.
• $\mathcal { S }$ : set of valid SQL queries.
Given $q \in { \mathcal { L } }$ , the objective is to generate $Q \in { \mathcal { S } }$ that answers $q$ over $\mathcal { D }$ .
This section formalizes the schema linking problem and describes our graph-based, training-free approach for selecting minimal connected subschemas to facilitate Text-to-SQL generation. We begin by introducing notation and the problem formulation, then present our graph-based schema linking procedure, and finally detail the configuration space of our approach.
# 3.1 Problem Formulation
Using the notation introduced above, we formalize the task as follows:
Definition 3.1 (Text-to-SQL). Given $q$ and $\mathcal { D }$ , Text-to-SQL seeks a function
$$
\boxed { f _ { \mathrm { N L 2 S Q L } } : \mathcal { L } \times \mathcal { D } \longrightarrow \mathcal { S } }
$$
that returns an executable SQL query $Q = f_{\mathrm{NL2SQL}}(q, \mathcal{D})$ that answers the user question $q$ on the database $\mathcal{D}$.
Definition 3.2 (Schema Linking). Let $G = (\mathcal{T}, \mathcal{K})$ be the schema graph of $\mathcal{D}$. Schema linking selects a connected sub-schema $S = \langle \mathcal{T}^{\star}, \mathcal{K}^{\star} \rangle$ with $\mathcal{T}^{\star} \subseteq \mathcal{T}$ and $\mathcal{K}^{\star} \subseteq \mathcal{K}$ sufficient to express the SQL query answering $q$. Formally,
$$
\boxed { g _ { \mathrm { S L } } : \mathcal { L } \times G \longrightarrow \mathcal { P } ( \mathcal { T } ) , \qquad \mathcal { T } ^ { \star } = g _ { \mathrm { S L } } ( q , G ) }
$$
Here,
$$
\boxed { \mathcal { K } ^ { \star } = \{ ( T _ { i } , T _ { j } ) \in \mathcal { K } \mid T _ { i } , T _ { j } \in \mathcal { T } ^ { \star } \} }
$$
The output sub-schema $S$ defines the smallest set of tables and links needed to answer $q$ while remaining connected within the schema graph.
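The induced FK set $\mathcal{K}^{\star}$ is a direct set comprehension over $\mathcal{K}$; a minimal sketch (the function name is hypothetical):

```python
def induced_fk_edges(fk, t_star):
    """K* = {(Ti, Tj) in K : Ti, Tj in T*} — the FK links induced
    by the selected table set T*, per the definition above."""
    return {(ti, tj) for (ti, tj) in fk if ti in t_star and tj in t_star}
```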
# 3.2 Graph-Based Schema Linking as Path Selection
Step 1: Extracting Source and Destination Tables. A single LLM call extracts two subsets of tables from the schema:
• $\mathcal{T}_s$ (sources): tables whose columns appear in query conditions or filtering predicates;
• $\mathcal{T}_d$ (destinations): tables containing the columns requested as output.
Both sets are guaranteed to be non-empty and may overlap, reflecting cases where the same table is used for both filtering and output.
We operationalize schema linking as a path-selection task on the schema graph $G$, which enables systematic and efficient sub-schema identification.
This extraction is performed via a single call to Gemini 2.5 Flash, guided by a dedicated system prompt designed to elicit precise identification of source and destination tables from the question and schema. The full prompt is shown in Prompt 1.
# Prompt 1: System prompt for source and destination extraction
# ROLE & OBJECTIVE
You are a senior data engineer who analyses SQL schemas and maps user questions precisely to source tables (filtering) and destination tables (final result columns).
TASK Identify:
• Source table(s) (src): contain columns used in filters/conditions.
• Destination table(s) (dst): contain columns returned in the answer.
# INSTRUCTIONS
1. Internally inspect every table to determine:
• which tables participate in filtering, and
• which tables supply the requested output columns.
Briefly justify your choice internally but do not include that justification in the final answer.
2. Output exactly one line in the following format: src=TableA,TableB, dst=TableC,TableD
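The single-line output format above lends itself to deterministic parsing; a minimal sketch, where the whitespace handling and function name are our assumptions:

```python
def parse_src_dst(line):
    """Parse 'src=TableA,TableB, dst=TableC,TableD' (the Prompt 1 format)
    into two non-empty table sets. Splitting on 'dst=' disambiguates the
    two comma-separated lists."""
    src_part, dst_part = line.strip().split("dst=")
    clean = lambda part: {t.strip() for t in part.replace("src=", "").split(",") if t.strip()}
    src, dst = clean(src_part), clean(dst_part)
    assert src and dst, "both sets must be non-empty"
    return src, dst
```

This mirrors the guarantee in Step 1 that both sets are non-empty and may overlap.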
Step 2: Candidate Path Enumeration. For every pair $( T _ { s } , T _ { d } ) \ \in \ { \mathcal { T } } _ { s } \times { \mathcal { T } } _ { d }$ , we enumerate all shortest simple paths connecting them in $G$ :
$$
\boxed { \mathrm { S P } ( T _ { s } , T _ { d } ) = \Bigl \{ \, p \mid p \text { is a simple path } T _ { s } \rightsquigarrow T _ { d } , \ | p | = \mathrm { d i s t } _ { G } ( T _ { s } , T _ { d } ) \Bigr \} }
$$
This set $\mathrm { S P } ( T _ { s } , T _ { d } )$ contains all minimal-length paths in the schema graph between each source and destination table pair.
The global candidate set and their union are defined as:
$$
\boxed { \mathcal { C } = \bigcup _ { T _ { s } \in \mathcal { T } _ { s } } \bigcup _ { T _ { d } \in \mathcal { T } _ { d } } \mathrm { S P } ( T _ { s } , T _ { d } ) , \qquad U = \bigcup _ { p \in \mathcal { C } } p }
$$
Here, $\mathcal { C }$ enumerates all candidate paths, and $U$ is the union of all tables appearing in any candidate path—representing the maximal connected subgraph that could be relevant for the query.
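The enumeration of $\mathrm{SP}(T_s, T_d)$, the candidate set $\mathcal{C}$, and the union $U$ can be sketched with a standard BFS-with-parent-sets routine (a textbook technique, not the authors' exact implementation):

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    """Enumerate SP(Ts, Td): all simple paths of minimal length."""
    dist, parents = {src: 0}, {src: []}
    q = deque([src])
    while q:  # BFS recording every predecessor on a shortest path
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], parents[v] = dist[u] + 1, [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                parents[v].append(u)
    if dst not in dist:
        return []
    paths = []

    def backtrack(v, suffix):
        if v == src:
            paths.append([src] + suffix)
            return
        for p in parents[v]:
            backtrack(p, [v] + suffix)

    backtrack(dst, [])
    return paths

def candidates_and_union(adj, sources, dests):
    """C = union over (Ts, Td) of SP(Ts, Td); U = all tables on any path."""
    C = [p for s in sources for d in dests for p in all_shortest_paths(adj, s, d)]
    U = set().union(*C) if C else set()
    return C, U
```

On a diamond-shaped schema graph, both minimal-length paths are returned and $U$ covers all four tables.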
Step 3: Path Selection and Sub-schema Construction. Depending on the configuration (detailed below), the set $U$ is optionally appended to $\mathcal{C}$. A second LLM call (or a deterministic rule) selects a candidate path $p^{\star} \in \mathcal{C}$, and we set $\mathcal{T}^{\star} := p^{\star}$ as the chosen subset of relevant tables for downstream SQL generation.
# 3.3 Configurations
To provide flexibility and support empirical analysis, we define a family of selection strategies parameterized by the following flags: let $k_s = |\mathcal{T}_s| > 0$, $k_d = |\mathcal{T}_d| > 0$,
$$
\begin{array} { r } { \mathrm { L O N G E S T } \in \{ \mathsf { f a l s e } , \mathsf { t r u e } \} , } \\ { \mathrm { U N I O N } \in \{ \mathsf { f a l s e } , \mathsf { t r u e } \} . } \end{array}
$$
Table 3.3 summarizes the seven configurations we evaluate, spanning single-source/single-destination and union-based settings.
Here, $*$ means any positive integer. Mode 5 chooses the longest among the shortest paths; Mode 6 excludes $U$ from $\mathcal { C }$ ; Mode 7 bypasses path selection and deterministically returns the union $U$ . This design enables ablation studies to assess the effect of schema coverage and path selection criteria on final Text-to-SQL accuracy.
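The deterministic portion of these configurations can be sketched as follows; the LLM-based path choice used in the remaining modes is replaced here by a placeholder shortest-candidate rule, purely as an assumption:

```python
def select_subschema(C, U, mode="default"):
    """Deterministic skeleton of the configuration space described above.
    C: list of candidate paths (lists of tables); U: union of all tables."""
    if mode == "force-union":        # Mode 7: bypass selection, return U directly
        return set(U)
    cands = list(C)
    if mode != "no-union":           # Mode 6 excludes U from the candidate set
        cands.append(sorted(U))
    if mode == "force-longest":      # Mode 5: longest among the shortest paths
        return set(max(C, key=len))
    # In the paper, a second LLM call picks p* from cands; we substitute a
    # deterministic stand-in (shortest candidate) for illustration only.
    return set(min(cands, key=len))
```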
# 3.4 End-to-End Objective
Given configuration $\Theta$ , our full pipeline is:
$$
\boxed { f _ { \mathrm { N L 2 S Q L } } ^ { \Theta } ( q , \mathcal { D } ) = h _ { \mathrm { G E N } } \left( q , \, g _ { \mathrm { S L } } ^ { \Theta } ( q , G ) \right) }
$$
where $g _ { \mathrm { S L } } ^ { \Theta }$ is our graph-based schema linker and $h _ { \mathrm { G E N } }$ is any downstream SQL generator, constrained to use only the filtered schema $\mathcal { T } ^ { \star }$ . All pipeline steps operate in a single pass, are fully automatic, and require no training data or domain adaptation.
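The composition above amounts to a two-stage function call; a minimal sketch with both stages as stand-in callables:

```python
def nl2sql(q, G, schema_linker, sql_generator):
    """f^Θ(q, D) = h_GEN(q, g^Θ_SL(q, G)): the generator only ever sees
    the filtered schema T*. Both callables are stand-ins here."""
    t_star = schema_linker(q, G)
    return sql_generator(q, t_star)
```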
# 4 Experimental Setup
# 4.1 Dataset
All experiments are conducted on the BIRD development split, which comprises 1,534 natural-language questions over 11 heterogeneous relational databases. For schema linking precision, recall, and exact match rate, we derive gold table sets by extracting the tables referenced in the BIRD dev gold queries. For execution accuracy, we follow the official evaluation script provided by BIRD without modification.
# 4.2 Compared Methods
SchemaGraphSQL (Ours) Unless otherwise noted, results correspond to Mode 7 in Table 3.3, i.e., we deterministically return the union $U$ of all shortest paths connecting the LLM-identified source and destination tables (cf. Section 3.2). The src/dst extraction prompt (Prompt 1) is executed using google/gemini-2.5-flash-preview at temperature 0.2, while downstream SQL generation is performed at temperature 0.3.
LLM as Schema Linker (Baseline) A single Gemini 2.5 Flash call is prompted to list all tables that must appear in the FROM/JOIN clause given the user question. This mirrors prior “single-step” schema linking approaches while controlling for model and prompt length.
Dense Retriever We embed each table name (along with its column names) using the multilingual-E5-large-instruct encoder. For each question, the top-$k$ tables ($k = 1 \ldots 6$) retrieved via cosine similarity form the predicted schema.
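The retrieval baseline reduces to ranking tables by cosine similarity; a toy sketch (the vectors below are illustrative stand-ins, not E5 embeddings):

```python
from math import sqrt

def cosine(u, v):
    # cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k_tables(q_emb, table_embs, k):
    """Rank tables by cosine similarity to the question embedding
    and keep the top-k as the predicted schema."""
    ranked = sorted(table_embs, key=lambda t: cosine(q_emb, table_embs[t]), reverse=True)
    return ranked[:k]
```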
For completeness, we also include published BIRD dev results from recent schema-linking systems such as Extractive Schema Linking for Text-to-SQL (Glass et al., 2025) and LinkAlign (Wang and Liu, 2025). We did not re-run these systems; hence, they are excluded from execution accuracy comparisons.
# 4.3 LLMs for SQL Generation
Following schema filtering, we evaluate four LLMs for SQL generation:
• google/gemini-2.5-flash-preview;
• google/gemma-3-27b-it;
• google/gemma-3-12b-it;
• google/gemma-3-4b-it.
All calls are made through the respective provider APIs using identical configurations and prompting templates.
# 4.4 Evaluation Metrics
Schema-level Metrics. Let $G$ be the gold table set and $P$ the predicted set.
• Precision: The percentage of predicted tables that are actually present in the gold SQL query:
$$
\mathrm { P r e c i s i o n } = { \frac { | P \cap G | } { | P | } }
$$
• Recall: The percentage of gold tables that are successfully predicted:
$$
{ \mathrm { R e c a l l } } = { \frac { | P \cap G | } { | G | } }
$$
• $F _ { \beta }$ Score: The generalized F-score that weights recall $\beta$ times more than precision:
$$
F _ { \beta } = \frac { ( 1 + \beta ^ { 2 } ) \left| P \cap G \right| } { \beta ^ { 2 } \left| G \right| + \left| P \right| } , \qquad \beta \in \{ 1 , 6 \}
$$
• Exact Match Rate (EMR): The percentage of examples where the predicted schema exactly matches the gold schema:
$$
\mathrm { E M R } = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } \mathbb { I } [ P _ { i } = G _ { i } ]
$$
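The schema-level metrics above can be computed directly from the predicted and gold table sets; a minimal sketch:

```python
def precision(P, G):
    # fraction of predicted tables present in the gold SQL query
    return len(P & G) / len(P)

def recall(P, G):
    # fraction of gold tables successfully predicted
    return len(P & G) / len(G)

def f_beta(P, G, beta):
    # F_beta = (1 + b^2)|P ∩ G| / (b^2 |G| + |P|): recall weighted b times more
    return (1 + beta**2) * len(P & G) / (beta**2 * len(G) + len(P))

def exact_match_rate(preds, golds):
    # fraction of examples whose predicted schema equals the gold schema
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)
```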
End-to-End Metric. Execution accuracy is computed using the official BIRD evaluation script: the generated SQL query is executed against the database, and its result must exactly match that of the reference query.
# 4.5 Implementation Notes
All experiments are conducted via hosted API endpoints; no on-premise hardware is used. Each query incurs (i) one Gemini 2.5 Flash call for schema linking, and (ii) one model call for SQL generation (Gemini 2.5 or Gemma 3). Code, prompts, and outputs will be released to support reproducibility.
# 5 Results
# 5.1 Schema Linking Evaluation
Table 1 shows that our primary configuration, SchemaGraphSQL$_{\text{force-union}}$, attains a recall of $95.71\%$ and an $\mathrm{F}_6$ of $95.43\%$ on the BIRD development split, surpassing all published systems, including the previous recall-centric leader $\mathrm{ExSL_f}$ ($\mathrm{F}_6 = 93.92\%$). Prior work has argued that recall-weighted metrics such as $\mathrm{F}_6$ are the most reliable indicator of downstream success, because omitting a relevant table is far more damaging than including extras (Glass et al., 2025). By pushing both recall and $\mathrm{F}_6$ to new highs without any supervised training, SchemaGraphSQL$_{\text{force-union}}$ establishes a new performance bar for zero-shot schema linking.
Table 1: Schema Linking Results in Dev Mode
For users who require a tighter schema, our balanced SchemaGraphSQL$_{n\text{-}n}$ variant delivers the best $\mathrm{F}_1$ ($92.93\%$) with only a modest drop in recall ($95.10\%$). Exact-match rate also improves over the single-step LLM baseline ($75.88\%$), rising to $78.29\%$ for $n\text{-}n$ and $76.60\%$ for force-union, demonstrating that classical graph search repairs connectivity errors that an LLM alone often misses.
# 5.2 Ablation Insights
The configuration sweep in Table 2 highlights two actionable lessons:
• Union is essential. Removing the union step (no-union) drops both $\mathrm{F}_1$ and EMR, confirming that coverage matters more than compactness.
• Avoid unnecessary hops. Forcing the longest path (force-longest) harms all metrics, indicating that extra intermediate tables add noise without benefit.
Together, these results validate our design choice: merge all shortest paths for maximum recall, then optionally down-select (e.g., $n\text{-}n$) when higher precision is required.
# 5.3 End-to-End Execution Accuracy
Table 3 reports execution accuracy for four LLM generators. Across the board, SchemaGraphSQL yields gains of $6$–$12\%$ over the single-step baseline. Using Gemini 2.5 Flash, SchemaGraphSQL$_{\text{force-union}}$ attains $62.91\%$ total accuracy, only $1.5\%$ short of the oracle "ideal schema linking" setting, implying that most residual errors stem from SQL generation rather than linking.
Table 2: Schema-linking results across graph settings on BIRD-Dev.
Improvements concentrate on the Moderate and Challenging subsets: Gemini-2.5-Flash sees a $+ 1 5 \ \%$ boost on challenging questions, reflecting SchemaGraphSQL’s advantage on multi-join queries.
For every generator, the high-recall force-union variant outperforms the high-precision 1-1 variant on execution accuracy by $2$–$7\%$ (Dev) and $4$–$12\%$ (MiniDev). This affirms that omitting a table is far more damaging than including extras: LLMs can ignore noise but cannot guess missing joins. Among schema metrics, $\mathrm{F}_6$ correlates best with end-to-end success: the highest-$\mathrm{F}_6$ model is invariably the highest-accuracy model, whereas precision alone can be misleading.
Table 3: SQL Execution Accuracy Results - Dev
# 5.4 Efficiency
Our pipeline adds negligible latency: one Gemini Flash call consumes on average 4.6K input and 14 output tokens, and the subsequent $O(|E|)$ shortest-path search completes in under 15 ms on commodity hardware. Thus SchemaGraphSQL is compatible with real-time database interfaces and low-resource deployments.

# Abstract

Text-to-SQL systems translate natural language questions into executable SQL queries, and recent progress with large language models (LLMs) has driven substantial improvements in this task. Schema linking remains a critical component in Text-to-SQL systems, reducing prompt size for models with narrow context windows and sharpening model focus even when the entire schema fits. We present a zero-shot, training-free schema linking approach that first constructs a schema graph based on foreign key relations, then uses a single prompt to Gemini 2.5 Flash to extract source and destination tables from the user query, followed by applying classical path-finding algorithms and post-processing to identify the optimal sequence of tables and columns that should be joined, enabling the LLM to generate more accurate SQL queries. Despite being simple, cost-effective, and highly scalable, our method achieves state-of-the-art results on the BIRD benchmark, outperforming previous specialized, fine-tuned, and complex multi-step LLM-based approaches. We conduct detailed ablation studies to examine the precision-recall trade-off in our framework. Additionally, we evaluate the execution accuracy of our schema filtering method compared to other approaches across various model sizes.
# 1 Introduction
The modern hardware landscape is undergoing a fundamental transformation. As Moore’s Law slows and Dennard scaling ends (Dennard et al.,
1974; Connatser, 2023), the demand for energy-efficient, high-performance architectures has accelerated, particularly with the rise of machine learning (ML) applications (Horowitz, 2014; Jouppi et al., 2017). Hyperscalers are increasingly constrained by power and thermal limits (Patterson et al., 2021; Gupta et al., 2021), prompting a reevaluation of datacenter infrastructure.
A major outcome of this shift is the growing adoption of ARM-based processors. Historically dominant in mobile and edge devices due to their RISC-based, low-power design, ARM CPUs were largely absent from datacenters because of their performance gap with x86 (a CISC architecture) (Blem et al., 2013). However, this gap has narrowed significantly: ARM-based chips now match x86 on many benchmarks (CloudPanel, 2023) and deliver superior energy efficiency (IONOS, 2024). In 2024, x86 designs dominated over $80\%$ of data center servers (Reuters, 2025), but ARM predicts that its share will reach $50\%$ by the end of 2025 (Maruccia, 2025). Industry adoption supports this trend, with ARM-based systems like NVIDIA's Grace CPU (NVIDIA Corporation, 2024), Amazon's Graviton (Morgan, 2022), and Microsoft's ARM-compatible OS stack (Verma, 2024) accelerating deployment.
This rapid hardware transition introduces a significant software gap. Legacy binaries compiled for x86 often lack source code and cannot be recompiled for ARM. While solutions like Apple's Rosetta 2 (Apple Inc., 2020) and QEMU's emulation service (Bellard, 2005) provide runtime virtualization, they introduce memory and performance overheads. Compilers struggle to retarget opaque binaries (He et al., 2018), and decompilation-based approaches are fragile or legally restricted (Wang et al., 2024). A scalable, accurate, and architecture-aware binary-to-binary translation solution remains elusive.
In this work, we introduce Guaranteed Guess (GG), an assembly-to-assembly transpiler that translates x86 binaries (CISC) into efficient ARM or RISC-V (RISC) equivalents using a custom-trained large language model (LLM). Our approach is open-source, avoids the virtualization tax by generating native ARM/RISC-V assembly, and directly supports legacy binaries without decompilation.
Transpiling across ISAs is non-trivial. CISC and RISC architectures differ in register-memory semantics, instruction complexity, and binary length: x86 instructions are fewer but more expressive, while RISC requires longer, register-centric code sequences. These differences must be learned implicitly by the model, which we achieve by incorporating hardware-informed design, tokenizer extensions, and context-aware training.
Our approach builds high-accuracy LLM-based transpilers by incorporating hardware-aware insights into the training process, enabling the model to better capture the CISC-specific patterns of x86 and generate semantically valid RISC targets such as ARM. However, unlike high-level language tasks, conventional NLP correctness proxies (e.g., BLEU, perplexity) fall short for binary translation, where functional correctness is paramount. Therefore, we embed our predictions within rigorous software testing infrastructure to provide test-driven guarantees of correctness. Holistically, our paper makes the following key contributions:
1. The first CISC-to-RISC transpiler, coined GG, built via a custom-trained, architecture-aware LM achieving a test accuracy of $99.39\%$ on ARMv8 and $89.93\%$ on RISC-V64.
2. A methodology to measure and build confidence in transpilation output via software testing approaches ("guaranteeing" the guess) (§3), including detailed analysis of correctness, errors, and hallucinations (§4).
3. An in-depth analysis of the inner workings of our transpiler, including hardware-informed design decisions to best train an accurate LLM for assembly transpilation (§3, §5).
4. We perform a case study using our transpiler in a real-world setting, comparing it directly to Apple Rosetta's x86-to-ARM virtualization engine. Results show that GG's generated assembly achieves 1.73x runtime speedup while delivering 1.47x better energy efficiency and 2.41x better memory efficiency (§5).
# 2 Background and Related Work
Virtualization and Emulation Emulation and assembly-level virtualization enable the execution of one ISA's binary on a host machine for which it was not originally compiled. QEMU (Bellard, 2005), an open-source emulator, uses dynamic binary translation (Sites et al., 1993) to translate machine code on-the-fly, offering flexibility but with performance overhead. Supported emulation currently includes x86 to ARM, amongst other ISAs. Rosetta 2 (Apple Inc., 2020), Apple's virtualization layer for macOS, combines ahead-of-time (AOT) and just-in-time (JIT) translation, providing better performance within the Apple ecosystem.
These approaches face challenges in achieving native-level performance and ensuring broad compatibility, due to the dynamic nature of execution. A transpiler approach, directly converting x86 to ARM assembly, could supplant these solutions by eliminating runtime translation overhead with a one-time translation into the host ISA. This method could address the limitations of current emulation and virtualization techniques, particularly in performance-critical scenarios, where pre-processing is feasible, or when source code is not available (due to proprietary IP).
Coding with LLMs Language modeling approaches for code have primarily focused on understanding, generating, and translating high-level programming languages such as C++, Java, and Python (Lachaux et al., 2020; Feng et al., 2020; Wang et al., 2021; Roziere et al., 2023; Liu et al., 2024). These models demonstrate increasingly sophisticated code manipulation capabilities through self-supervised learning on vast code repositories. Models further trained with reinforcement learning have shown remarkable performance in rules-based reasoning tasks, including code (et al., 2025). However, the resulting models struggle when applied to languages under-represented in their training sets, in particular when used to write assembly-level code, where the semantics and structure differ significantly from their high-level counterparts.
Neural Low-Level Programming Recent research demonstrates the potential of adapting LLMs to various tasks related to low-level code analysis and transformation: decompilation, binary similarity analysis, and compiler optimization. LLM4Decompile (Tan et al., 2024) introduced specialized language models for direct binary-to-source translation and decompiler output refinement. DeGPT (Hu et al., 2024) further explored decompiler enhancement through semantic-preserving transformations. SLaDe (Armengol-Estapé et al., 2024) combines a 200M-parameter sequence-to-sequence Transformer with type inference techniques to create a hybrid decompiler capable of translating both x86 and ARM assembly code into readable and accurate C code, effectively handling various optimization levels (-O0 and -O3). Language models have also been adapted to optimization tasks, with LLM Compiler (Cummins et al., 2024) introducing a foundation model that supports zero-shot optimization flag prediction, bidirectional assembly-IR translation, and compiler behavior emulation. Binary similarity analysis has similarly benefited from language model adaptations. DiEmph (Xu et al., 2023) addressed compiler-induced biases in transformer models, while jTrans (Wang et al., 2022) incorporated control flow information into the transformer architecture. Yu et al. (Yu et al., 2020) combined BERT-based semantic analysis with graph neural networks to capture both semantic and structural properties of binary code. While these applications have shown promising results, the use of LLMs to port efficient machine code from one machine to another, while maintaining efficiency, remains underexplored and largely unsolved. Assembly languages present unique challenges due to their under-representation in training datasets, lack of human readability, extensive length, and fundamental differences in execution models across architectures.
Guess & Sketch (Lee et al., 2024) introduced a neurosymbolic approach combining language models with symbolic reasoning for translating assembly code between ARMv8 and RISC-V architectures. In our work, we extend the neural transpilation direction with a focus on leveraging the existing efficiency in x86 programs to transpile into efficient ARM binaries, bridging architectural differences in ISA complexity and execution models. Further, instead of fixing transpilations with symbolic approaches, as done in Guess & Sketch, we focus on upfront data design and modeling methods to flexibly handle the increased scale and complexity of CISC-to-RISC transpilation.
# 3 Guaranteed Guess
In this section, we explore the two primary components of building our GG transpiler: data generation and model training.
# 3.1 Data Collection
As shown in Figure 1, our training dataset is derived from AnghaBench (Da Silva et al., 2021) and The Stack v2 (Kocetkov et al., 2022). AnghaBench is a comprehensive benchmark suite that contains 1 million compilable C/C++ programs extracted from major public C/C++ repositories on GitHub. The Stack is a 3.1TB dataset of permissively licensed code in 30 languages for training and evaluating code LLMs. From these datasets, we randomly sampled 1.01M programs (16.16B tokens) from AnghaBench and 306k programs (4.85B tokens) from The Stack to form our training set, equivalent to 1.32M samples. After collecting the samples, we removed boilerplate, deduplicated the data, and kept files that were neither too short (<10 lines) nor too long (>16k lines). These programs were then compiled for x86 (CISC) and ARMv8/ARMv5/RISC-V (RISC).
Each program was compiled to both x86 (CISC) and ARMv8/ARMv5/RISC-V (RISC) targets under two optimization levels: -O0 (no optimization) and -O2 (aggressive optimization). These flags were selected to expose models to both raw, semantically transparent code (-O0) and real-world, performance-optimized binaries (-O2), enabling the model to learn both unoptimized and optimized ISA patterns. Compilation for ARMv5 and RISC-V64 was performed via cross-compilation on an Ubuntu 20.04 machine with a Ryzen 7 CPU, using arm-linux-gnueabi-gcc (Radcolor, n.d.) and gcc-riscv64-linux-gnu (Project, 2025), respectively. ARMv8 binaries were compiled natively on an Apple M2 Pro (macOS) using clang (Lattner, 2008), ensuring architectural fidelity for performance-critical ARM targets.
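The compilation matrix described above can be sketched as command construction per (target, optimization) pair; the toolchain names follow the text, while the use of -S (emit assembly) and the output naming are our assumptions:

```python
# Toolchain matrix per the text; exact invocations used by the authors may differ.
TOOLCHAINS = {
    "armv5":   "arm-linux-gnueabi-gcc",   # cross-compiled on Ubuntu 20.04
    "riscv64": "riscv64-linux-gnu-gcc",   # cross-compiled on Ubuntu 20.04
    "armv8":   "clang",                   # native on Apple M2 Pro (macOS)
}

def compile_cmd(src, target, opt):
    """Build one compiler invocation; -S emits assembly rather than an object."""
    assert target in TOOLCHAINS and opt in ("-O0", "-O2")
    return [TOOLCHAINS[target], opt, "-S", src, "-o", f"{src}.{target}{opt}.s"]

def compile_matrix(src):
    # every (target, optimization) pair used to build the paired dataset
    return [compile_cmd(src, t, o) for t in TOOLCHAINS for o in ("-O0", "-O2")]
```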
# 3.2 Training
All hyperparameter optimization experiments were conducted on a small 500k-sample portion of AnghaBench. We tested various hyperparameter settings on this subset of our benchmark. After identifying the optimal configuration, we scaled up the training data to 1.31M samples. We trained three models: DeepSeek-Coder 1.3B (Guo et al., 2024) and Qwen2.5-Coder (1.5B and 0.5B) (Hui et al., 2024b). Given the dataset size of 1.3M samples, with an average of 13k tokens per sample, we opted for smaller models. Training was done on A100 GPUs (40GB each). Training with 1.3M samples, a batch size of 24, and 2 epochs required three days. To conserve memory, mixed-precision training with bfloat16 was employed. Given limited capacity for large batch sizes, we applied gradient accumulation with an effective batch size of 2. We used paged AdamW (Loshchilov, 2017) to avoid memory spikes, with a weight decay of 0.001. We chose a small learning rate of $2 \times 10^{-5}$ with a cosine schedule, as experiments indicated this schedule performed best. We trained our model with a context window of 16k. At inference, we apply RoPE (Su et al., 2024) extrapolation to increase the context window to 32.7k.
Figure 1: GG System Overview. A two-stage transpilation pipeline from $\mathbf { \boldsymbol { x } } 8 6$ to ARM/RISC-V. Left: Data is sourced from Stackv2 and AnghaBench, deduplicated, and compiled using both GCC and Clang to generate paired assembly $( \mathrm { x } 8 6 \mathrm { A R M } )$ ) from $\mathrm { C / C } { + + }$ . Right: A specialized LLM (GG Guesser), trained with tokenizer extension and inferenced with RoPE extrapolation, predicts target ISA code. Predictions are evaluated via unit tests and symbolic analysis on benchmarks like HumanEval and BringupBench. The system emphasizes functional correctness, architectural alignment, and near-native performance.
Table 1: Comparison of tokenization approaches between DeepSeek/Qwen-Coder and our extended tokenizer. Spaces are represented as ␣ and shown with colored backgrounds to highlight token boundaries. Note how our tokenizer groups related tokens (e.g., ldr and r1) as singular units.
# 3.3 Tokenizer Extension
To improve our LLMs' capability in comprehending and generating assembly code, we augmented the tokenizer by incorporating the most common opcodes and register names from the x86 and ARMv5/ARMv8/RISC-V64 architectures (as shown in Table 1). This targeted design improves token alignment with instruction semantics, enabling more precise and efficient assembly translation. As shown in Table 2, our extension decreases the fertility rate (tokens/words) (Rust et al., 2020) of the Qwen and DeepSeek tokenizers by $2.65\%$ and $6.9\%$, respectively, which corresponds to fitting an additional 848 and 2.2k tokens in the context window.
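The fertility-rate effect of the vocabulary extension can be illustrated with a toy tokenizer; the vocabulary and splitting rule below are illustrative only, not the actual BPE merges:

```python
def fertility(tokenize, text):
    """Fertility = tokens / whitespace-separated words (Rust et al., 2020)."""
    return len(tokenize(text)) / len(text.split())

# Toy assembly vocabulary: with it, opcodes/registers stay whole tokens;
# without it, words longer than two characters split into sub-tokens.
ASM_VOCAB = {"ldr", "str", "add", "r0", "r1", "sp"}

def base_tokenize(text):
    out = []
    for w in text.split():
        out += [w[:2], w[2:]] if len(w) > 2 else [w]
    return [t for t in out if t]

def extended_tokenize(text):
    return [t for w in text.split()
            for t in ([w] if w in ASM_VOCAB else base_tokenize(w))]
```

Keeping opcodes like `ldr` as single units drives the fertility of assembly text down toward 1.0, mirroring the reductions reported in Table 2.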
Table 2: Tokenizer fertility rate (tokens/words) across ISAs. Lower is better.
Table 3: Models trained with our method outperform baselines across all benchmarks, at all optimization levels.
# 4 Experiments and Evaluation
In this section, we describe our experimental setup, training methodology, evaluation benchmarks, and the metrics used to assess the accuracy and robustness of our CISC-to-RISC transpiler.
# 4.1 Setup
We leveraged LLaMa-Factory (Zheng et al., 2024), DeepSpeed ZeRO-3 (Rasley et al., 2020), Liger kernels (Hsu et al., 2024), and FlashAttention-2 (Dao, 2023) for efficient training and memory optimization. We also used caching to enhance inference speed and disabled sampling to ensure deterministic outputs. We used vLLM (Zheng et al., 2023) to deploy our model, achieving a throughput of 36 requests per second at a 32.7k-token context window on a single A100 40GB GPU. Additionally, we apply post-training quantization using llama.cpp (Ggerganov) (e.g., bfloat16, int8, int4) to optimize inference for CPU-based deployment.
# 4.2 Evaluation
We evaluate GG using two complementary benchmarks: HumanEval-C (Tan et al., 2024) and BringUpBench (Austin, 2024). HumanEval was originally introduced by Chen et al. (2021) for Python code generation. The benchmark consists of 164 programming problems that assess language comprehension, reasoning, and algorithmic thinking. For our evaluation, we utilize the C-translated version from LLM4Decompile (Tan et al., 2024), which maintains the same problems while converting both function implementations and test cases to C code.
To evaluate real-world generalization, we leverage BringUpBench (Austin, 2024), a challenging benchmark of 65 bare-metal programs ranging from 85 to 5,751 lines of code. Unlike HumanEval, which consists of isolated functions, BringUpBench programs are embedded in full project structures with internal libraries and cross-linked components. This setup more accurately reflects real-world embedded systems development, where executing even a single file often requires compiling and linking the entire codebase. As a result, BringUpBench imposes significantly greater context length demands. On average, each BringUpBench sample requires $8.9\times$ more tokens for x86 and $8.8\times$ more for ARM compared to HumanEval, as shown in Figure 2. The benchmark's diverse control flow and I/O patterns further elevate its difficulty, making it a strong testbed for assessing the robustness and scalability of our transpiler.
Figure 2: Token counts by ISA and benchmark; BringUpBench is substantially longer than HumanEval.
We use gcov, GNU’s coverage tool, to measure line coverage, a core metric in software testing that captures which code lines were executed at least once, thereby exposing untested paths and blind spots (Myers et al., 2011). HumanEval and BringupBench achieved $9 8 . 8 1 \%$ and $9 7 . 3 2 \%$ average coverage, respectively, indicating near-complete execution of all code lines during testing.
We evaluate functional correctness by executing the transpiled ARM code against full unit test suites. A prediction is deemed correct only if all test cases pass; partial correctness is not counted. For HumanEval, this involves compiling the predicted code, linking it with the provided tests, and executing the binary, as shown in Figure 1. For BringUpBench, we leverage its Makefile to build the static library and link it with the target file. The output is then compared against the expected output using a diff-based check. This strict pass@1 evaluation, based solely on the most probable sample even when beam search (beam size $= 8$) is used, ensures that only fully functional translations contribute to final accuracy.
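The strict pass@1 criterion reduces to an all-or-nothing comparison of test outputs; a minimal sketch (function name and input shapes are our own):

```python
def passes_all_tests(predicted_outputs, expected_outputs):
    """Strict pass@1: a transpilation counts as correct only if every unit
    test's output matches the reference exactly (a diff-style comparison);
    partial credit is never awarded."""
    return (len(predicted_outputs) == len(expected_outputs)
            and all(p == e for p, e in zip(predicted_outputs, expected_outputs)))
```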
# 5 Results and Analysis
We evaluate the efficacy of our transpiler for CISC-to-RISC assembly translation, focusing on the correctness of the output ARM assembly. Utilizing the metrics defined above (§4), we compare our approach with state-of-the-art coding LLMs and evaluate it on x86-to-ARM transpilation (Table 3).
# 5.1 Transpiler Validation
Baselines. As shown in Table 3, most baseline models, including state-of-the-art LLMs such as StarCoder2 (Lozhkov et al., 2024), DeepSeek (Guo et al., 2024), and Qwen2.5 (Hui et al., 2024a), achieve 0% accuracy on all transpilation tasks, underscoring the unique difficulty of low-level ISA translation. These models, while effective on high-level programming benchmarks, lack the architectural grounding and token-level inductive bias needed to generalize from x86 to ARM. GPT-4o was the only exception, achieving 1.5% accuracy, which remains far below usable thresholds, highlighting that general-purpose LLMs are not yet suitable for assembly-level translation without specialized training. This performance gap reinforces the need for task-specific instruction tuning and architectural adaptation to handle the deep structural mismatch between CISC and RISC.
GG Results. Our GG models, particularly the GG-1.5B variant, substantially outperform all baselines, reaching 99.39% accuracy on ARMv8 and 93.71% on ARMv5 under the -O0 setting. This validates the effectiveness of architecture-aware training, tokenizer extension, and longer context modeling in capturing fine-grained register and memory semantics. For -O2 optimized code, accuracy drops to 45.12% (ARMv8) and 50.30% (ARMv5), exposing the fragility of current LLMs under aggressive compiler transformations. This suggests that while our model learns to generalize well under minimal optimization, it struggles with control/data flow reordering and register coalescing introduced by -O2 passes. Addressing this challenge may require incorporating optimization-invariant representations, such as symbolic traces or control/data-flow graphs, or extending the training set with more aggressively optimized samples. A detailed error analysis can be found in Appendix A.1.
Table 4: Failed files on BringupBench. Errors after the Guess stage are largely around dataflow reasoning. File names are grouped by error type.
RISC-V64. To demonstrate the generality of our method, we also trained our model on the task of transpiling from x86 to RISC-V64, achieving a pass@1 of 89.63%. Notably, our model significantly outperforms existing models like GPT-4o and DeepSeekCoder2-16B, which achieved much lower test accuracies of 7.55% and 6.29%, respectively. This result is roughly 9% lower than on ARMv8, reflecting how much further RISC-V64 diverges from x86 than ARMv8 does.
(-O2) Opt. Compiler optimizations (-O2) introduce complex patterns that increase failure frequency compared to -O0. A common error is instruction motion; for example, a misplaced cbz alters the control flow, revealing the model's difficulty in interpreting optimized sequences. While hard to detect automatically, such errors can be repaired via manual inspection (Liu et al., 2025), symbolic solvers (Lee et al., 2024; Mora et al., 2024), or reasoning models. Hybrid human-AI approaches may improve correctness guarantees.
Figure 3: Comparison of execution time, energy consumption, and memory usage across Rosetta, GG, and native binaries.
BringUpBench. We evaluate GG-1.5B on BringUpBench (Austin, 2024) and manually analyze over 200 unit-tested binaries. Our model achieves 49.23% exact match accuracy under -O0 (Table 3) with virtually no syntax errors: outputs consistently adhere to valid ARM assembly with correct opcodes, registers, and memory accesses. This reflects a strong surface-form prior, shifting the focus to semantic errors such as incorrect dataflow. Notably, 17% of failures stem from context truncation, indicating a key limitation of current context window sizes. Table 4 summarizes common failure types, including duplicate code, invalid control flow, misused registers/intermediaries, and stack errors, most symptomatic of broken data flow rather than syntax issues. These may be alleviated through longer training, symbolic repair, or richer representations. Lastly, the benchmark's extensive unit tests offer a valuable semantic signal in the absence of ground truth, suggesting a compelling path for test-driven transpilation and iterative repair.
# 5.2 Real-World Case Study
To evaluate the efficiency of our transpiler, we conducted a real-world study on an Apple M2 Pro (ARM64v8-A). This setup offers two advantages: (1) native ARM toolchain support, avoiding cross-compilation; and (2) Apple’s Rosetta 2 layer, enabling consistent evaluation across execution modes on the same hardware. We assess performance across three environments: (i) native ARM64 binaries, (ii) x86 binaries via Rosetta 2, and (iii) GG-transpiled x86-to-ARM64 assembly. For each, we measure execution time, CPU energy (via powermetrics), and memory usage. Each program is executed 100 times, reporting the geometric mean (Fleming and Wallace, 1986), under controlled conditions.
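Summarizing repeated benchmark runs with the geometric mean (rather than the arithmetic mean) is the standard choice for normalized performance ratios, per Fleming and Wallace. A small sketch with made-up per-run timings (the numbers below are illustrative, not measurements from the paper):

```python
import math

def geomean(samples):
    """Geometric mean: exp of the mean of the logs; the recommended way to
    summarize normalized performance ratios across benchmark runs."""
    return math.exp(sum(math.log(x) for x in samples) / len(samples))

# Hypothetical per-run execution times in seconds (5 of the 100 runs).
rosetta = [0.52, 0.50, 0.54, 0.51, 0.53]
gg      = [0.30, 0.29, 0.31, 0.30, 0.30]
speedup = geomean(rosetta) / geomean(gg)
print(f"GG is {speedup:.2f}x faster than Rosetta on this toy sample")
```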
Figure 3 shows that GG achieves near-native performance: matching execution time, 1.73× faster than Rosetta, with 1.47× better energy efficiency and 2.41× better memory usage. GG’s memory footprint (1.034 MB) is nearly identical to native (1.03 MB), while Rosetta uses 2.49 MB.
These results demonstrate that LLM-based binary translation offers a compelling alternative to traditional dynamic translation layers like Rosetta. Unlike Rosetta, which incurs a persistent runtime overhead, GG performs a one-time transpilation, avoiding the cumulative “runtime tax” and enabling leaner, faster execution. Moreover, our approach is general-purpose and untethered to Apple’s ecosystem, enabling broader cross-ISA deployment and efficient CISC-to-RISC translation across diverse platforms. See Appendix A.1 for scaling, quantization, and error analysis.
# 5.3 Similarity Analysis Across ISAs
In Figure 4b, we observe that ARMv8 exhibits the highest average similarity to x86 (40.19%), followed by ARMv5 (25.09%) and RISC-V64 (21.41%). This gradient of similarity directly correlates with the drop in model accuracy from ARMv8 (99.39%) to ARMv5 (93.71%) and further down to RISC-V (89.63%). We hypothesize that this discrepancy is rooted in the increasing divergence in instruction semantics and register abstractions across these ISAs. ARMv8’s shift toward CISC-like design (Red Hat, 2022) likely boosts its alignment with x86, aiding model generalization. In contrast, ARMv5 and RISC-V have simpler, more divergent instruction sets and addressing schemes, making the x86-to-RISC mapping less predictable and thus harder to learn.
Figure 4a highlights a significant shift in ARMv8 opcode usage between -O0 and -O2. At -O2, mov becomes dominant (+14.8%), indicating more register reuse and reduced memory traffic via explicit ldr/str. This hides direct data movement, making it harder for the model to learn memory interaction. Paired instructions like ldp/stp appear more frequently, packing semantics into fewer lines, while conditional ops (tbnz, cset) are folded into predicated sequences. These changes, introduced by the compiler, abstract both control and data flow. We hypothesize that the model, trained only on -O2, must decode complex x86 semantics into a highly optimized and compressed ARMv8 form. This transformation increases learning difficulty and explains the drop in -O2 accuracy (to 45.12%) despite strong -O0 performance.
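An opcode-usage shift of this kind can be computed by histogramming mnemonics in the assembly at each optimization level and comparing relative frequencies. A minimal sketch under assumed conventions (first token of each non-label, non-directive line is the mnemonic; the toy listings below are illustrative):

```python
from collections import Counter

def opcode_histogram(asm: str) -> Counter:
    """Count opcode mnemonics: the first token of each line that is
    neither empty, a directive (.foo), a comment (//), nor a label (x:)."""
    ops = Counter()
    for line in asm.splitlines():
        line = line.strip()
        if not line or line.startswith((".", "//")) or line.endswith(":"):
            continue
        ops[line.split()[0]] += 1
    return ops

def usage_shift(o0: Counter, o2: Counter) -> dict:
    """Per-opcode change in relative frequency (percentage points)
    between -O0 and -O2 output."""
    t0, t2 = sum(o0.values()), sum(o2.values())
    return {k: 100.0 * (o2[k] / t2 - o0[k] / t0) for k in set(o0) | set(o2)}

o0 = opcode_histogram("ldr x0, [sp]\nstr x0, [sp, 8]\nmov x1, x0\nret")
o2 = opcode_histogram("mov x1, x0\nmov x0, 1\nret")
print(usage_shift(o0, o2)["mov"] > 0)  # True: mov grows at -O2
```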
Figure 4: Side-by-side comparison of opcode shift and CHRF similarity in ARM assembly analysis.
Table 5: Ablation study showing incremental improvements on ARMv8 accuracy from each added component.
# 5.4 Ablation Study
To understand what contributed most to model performance, we performed ablations shown in Table 5, focusing on four key aspects: training data size, RoPE extrapolation, the extended tokenizer, and decoding strategy.
First is the training data. As we increase the amount of training data to 1M AnghaBench samples, accuracy jumps from 0% to 93.94%; including an additional 0.3M Stackv2 data points further improves accuracy to 95.38%. While effective, this scaling approach depends on high-quality, large-scale datasets and longer training time. Second is the architectural enhancement through RoPE extrapolation, which pushes performance to 97.14%, a +1.76% improvement. This suggests that enabling better generalization beyond the fixed context window substantially benefits instruction understanding and long-range dependency modeling.
The third contributing factor is tokenizer coverage: by extending the tokenizer to include additional subword units and symbols, we observe a further gain to 98.18%, adding +1.04%, highlighting the importance of adapting the tokenizer to the domain-specific vocabulary of assembly code. Finally, the decoding strategy plays a non-trivial role; switching to 8-beam search yields the final boost to 99.39%, adding another +1.21%. Altogether, this progression shows that while data scaling gives the biggest leap, fine architectural and decoding choices compound gains toward near-perfect accuracy.
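The incremental gains above are just differences between consecutive cumulative accuracies, which can be checked mechanically against the reported Table 5 figures:

```python
# Cumulative ARMv8 accuracies reported in the ablation (Table 5).
stages = [
    ("base model",           0.00),
    ("+1M AnghaBench",      93.94),
    ("+0.3M Stackv2",       95.38),
    ("+RoPE extrapolation", 97.14),
    ("+extended tokenizer", 98.18),
    ("+8-beam search",      99.39),
]
deltas = {name: acc - prev
          for (name, acc), (_, prev) in zip(stages[1:], stages)}
for name, d in deltas.items():
    print(f"{name:22s} {d:+6.2f}%")
```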
Further, we compare our approach to the state-of-the-art Rosetta 2 framework on Apple Silicon, showcasing 1.73x faster runtime performance, 1.47x better energy efficiency, and 2.41x better memory usage for our transpiled code, demonstrating the effectiveness of GG for real-world CISC-to-RISC translation tasks. We will open-source our codes, data, models, and benchmarks to establish a common foundation for ISA-level code translation research. | [
"cs.CL",
"cs.AR",
"cs.LG",
"cs.PL",
"cs.SE"
] |
# 1. INTRODUCTION
Film grain, appreciated in video production for its natural look and creative expression, originates from the physical exposure and development of photographic film. Unlike film, digital sensors do not undergo such a process and are therefore grain-free. To add texture or warmth, or to evoke nostalgia, filmmakers often reintroduce grain into digital content in a content-specific manner adapted to factors such as pixel intensity. The random nature of film grain, however, poses a major challenge to conventional video codecs, which often eliminate grain at medium to low bitrates, thus compromising the visual quality and the artistic intent of the content. Conversely, preserving film grain requires disproportionate bitrates, resulting in inefficient compression.
To efficiently preserve film grain, state-of-the-art video codecs like Versatile Video Coding (VVC) [1] propose an alternative to high-bitrate encoding. It consists of analyzing and estimating film grain parameters prior to encoding and synthesizing the grain back after decoding using the estimated parameters, transmitted as metadata. VVC natively supports the signaling of film grain parameters as metadata through a well-defined Film Grain Characteristics Supplemental Enhancement Information (FGC-SEI) message [2, 3]. This paper primarily addresses the film grain analysis stage.
In conventional codecs, the film grain analysis workflow follows a standard process. First, denoising is applied to extract the film grain image as the difference between the grainy source and its denoised version. Further analysis focuses on features such as edges and texture, limiting the analysis to flat, uniform regions. Based on this pre-processing, model parameters are determined, including grain amplitude and grain pattern. Grain amplitude is determined in relation to image intensity using a scaling function, while grain pattern is modeled by identifying frequency limits (cut-off frequencies) for frequency-based models or auto-regressive parameters. Consequently, film grain synthesis is typically based on the generation of Gaussian noise, with spatial correlation modeled by frequency limits or auto-regressive parameters, and local adaptation consisting of adjusting grain amplitude. Conventional methods of film grain analysis show some limitations, as the accuracy of the estimated film grain parameters is highly dependent on the results of the denoising and edge detection processing. Moreover, considering only homogeneous blocks in the analysis step limits the data available for processing, consequently leading to reduced synthesis accuracy.
Alongside conventional approaches to film grain analysis and synthesis, learning-based frameworks have emerged. Style-FG [4] is the first deep learning framework for film grain analysis and synthesis with grain characteristics encoded as a latent vector representation. 3R-INN [5] is another learning-based framework that is based on Invertible Neural Networks (INNs). Thanks to its invertibility, 3R-INN performs analysis in a forward pass, where grain information is captured in a latent variable constrained to follow a standard Gaussian distribution, and performs synthesis in an inverse pass without requiring explicit metadata. Learning-based methods offer higher accuracy but produce parameters incompatible with video coding standards, in addition to necessitating complex synthesis modules. This incompatibility makes their integration into practical video coding systems challenging, particularly for resource-constrained devices.
By leveraging the accuracy of learning-based models and the compatibility of conventional methods, this paper introduces the Film Grain Analysis Neural Network (FGA-NN), which analyzes film grain from grainy videos and provides film grain characteristics in the format supported by recent video codecs, known as FGC-SEI. FGA-NN, along with the new FGC-SEI dataset, is detailed in Section 2. Section 3 presents a comparative evaluation of the performance of FGA-NN against state-of-the-art methods. Finally, conclusions and perspectives are discussed in Section 4.
Fig. 1: Framework for the proposed film grain analysis and synthesis workflow in a video distribution system, utilizing FGA-NN for analysis combined with VFGS for synthesis.
# 2. PROPOSED METHOD
# 2.1. System overview
Given a grainy video as input, FGA-NN analyzes the film grain and outputs film grain parameters in the FGC-SEI format supported by recent video coding standards. The video is then encoded and transmitted with these parameters. On the decoder side, the video is decoded, and film grain is synthesized using Versatile Film Grain Synthesis (VFGS) [6], the conventional film grain synthesis method, utilizing the transmitted FGC-SEI parameters, as illustrated in Figure 1. To train FGA-NN, a dataset of grainy videos (inputs) paired with their corresponding FGC-SEI parameters (outputs) is required. Typically, only grainy videos are available, and for synthetic grain, the parameters of the post-production tool might be known, but not in FGC-SEI format. FGC-SEI parameters can be derived either manually, which may be difficult and time-consuming, or by using a film grain analysis module designed for video coding, underscoring the need for FGA-NN and such a dataset. The following subsection details the process of dataset creation.
# 2.2. FGC-SEI Dataset
To create the necessary pairs (grainy videos, FGC-SEI parameters), we approached the problem inversely, by first generating a set of FGC-SEI parameters which we then used to add film grain to clean (grain-free) videos using VFGS [6]. Table 1 reports the key parameters used to create our dataset. The FGC-SEI message supports two synthesis approaches (SEIFGCModelID): frequency filtering modeling (0) and auto-regressive modeling (1). Currently, only the frequency model is implemented in codec software such as the VVC Test Model (VTM) reference software [7] or the real-world Versatile Video Encoder and Decoder implementation software [8, 9]. Therefore, FGA-NN aligns with and provides parameters for the frequency filtering model. As for the blending mode (SEIFGCBlendingModeID), VFGS uses the additive blending mode (0). In the frequency filtering model, for each color component (Y: 0, Cb: 1, Cr: 2) for which the synthesis process is invoked, a number of intensity intervals is defined in SEIFGCNumIntensityIntervalMinus1Comp0,1,2, with lower and upper bounds given in SEIFGCIntensityIntervalLowerBoundComp0,1,2 and SEIFGCIntensityIntervalUpperBoundComp0,1,2, respectively. The number of model parameters SEIFGCNumModelValuesMinus1Comp0,1,2 is limited to 3, restricting the model to a scaling factor [0-255] and vertical and horizontal high-frequency cut-offs [2-14]. SEIFGCLog2ScaleFactor indicates the scale of the scaling factors; acting on this parameter is a quick way of changing the film grain strength.
Table 1: FGC-SEI Parameters
To build a large and diverse dataset, we randomly generated 300 sets of FGC-SEI parameters while adhering to specific constraints to ensure visually accurate film grain. We used three different Log2Scale factors [3-5] and defined 16 intensity intervals for luma and 6 for chroma. For each interval, we selected a scaling factor (a multiple of 10 within the range 0 to 255) and a single cut-off frequency (the same for both vertical and horizontal directions) within the range 3 to 14 for luma and 4 to 8 for chroma. We also imposed specific constraints to avoid drastic changes in grain amplitude (scaling factor) and grain pattern (cut-off frequencies) between consecutive intervals.
These FGC-SEI parameters were used to add grain on a set of clean videos grouping the BVI-DVC [10] and the DIV2K [11] datasets.
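The constrained random generation described above can be sketched as follows. The value ranges come from the text; the exact smoothness step sizes between consecutive intervals (±20 in scale, ±1 in cut-off) are our assumptions, not the paper's:

```python
import random

def gen_fgc_sei_luma(num_intervals=16, seed=0):
    """Sketch of constrained random FGC-SEI luma parameter generation:
    scaling factors are multiples of 10 in [0, 250], cut-off frequencies
    lie in [3, 14], and consecutive intervals may only change by small
    steps (assumed step sizes) to avoid abrupt grain changes."""
    rng = random.Random(seed)
    bounds = sorted(rng.sample(range(1, 255), num_intervals - 1))
    intervals = list(zip([0] + bounds, [b - 1 for b in bounds] + [255]))
    scales, cutoffs = [rng.randrange(0, 256, 10)], [rng.randint(3, 14)]
    for _ in range(num_intervals - 1):
        scales.append(min(250, max(0, scales[-1] + rng.choice([-20, -10, 0, 10, 20]))))
        cutoffs.append(min(14, max(3, cutoffs[-1] + rng.choice([-1, 0, 1]))))
    return {"log2_scale": rng.randint(3, 5), "intervals": intervals,
            "scaling_factors": scales, "cutoff_freqs": cutoffs}

params = gen_fgc_sei_luma()
print(len(params["intervals"]), params["log2_scale"])
```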
# 2.3. Network architecture
Given a grainy video as input, two versions of the FGA-NN module process the luma and chroma channels separately, each predicting interval boundaries (lower and upper), their associated scaling factors, cut-off frequencies, and a global Log2Scale factor. The models address these prediction tasks as one regression and three classification problems. Interval boundary prediction is formulated as a regression task, with outputs ranging from 0 to 255. Cut-off frequency, scaling factor, and Log2Scale factor predictions are treated as classification tasks.
Both luma and chroma versions of FGA-NN employ a shared backbone to extract features from grainy image inputs, followed by four parallel task-specific heads that map these features to the desired outputs. The backbone of FGA-NN comprises one convolutional layer followed by three residual blocks and an adaptive average pooling, while the task-specific heads each comprise two linear layers. The linear layer dimensions are adapted to the complexity of each prediction task. Specifically, 64-dimensional layers are used to predict the Log2Scale factor (3 possible classes: 3-5), 1024-dimensional layers to predict the scaling factors (26 possible classes: multiples of 10 in 0-255), and 512-dimensional layers to predict the cut-off frequencies (12 possible classes for luma: 3-14, and 5 for chroma: 3-7). Fig. 3 illustrates the detailed architecture of the luma FGA-NN. Note that the chroma version of FGA-NN is tailored to adapt the input and output dimensions accordingly.
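To make the head sizing concrete, a two-linear-layer head's parameter count is determined by the backbone feature width, the hidden width, and the class count. The 128-dimensional feature width below is an assumed value for illustration; only the hidden widths and class counts come from the text:

```python
def head_params(feat_dim: int, hidden: int, out_classes: int) -> int:
    """Parameter count (weights + biases) of a two-linear-layer head:
    feat_dim -> hidden -> out_classes."""
    return (feat_dim * hidden + hidden) + (hidden * out_classes + out_classes)

feat = 128  # assumed backbone feature width
for name, hidden, classes in [("log2scale", 64, 3),
                              ("scaling", 1024, 26),
                              ("cutoff_luma", 512, 12)]:
    print(name, head_params(feat, hidden, classes))
```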
Fig. 2: Comparison of grainy images and their Luma FGC-SEI parameters (AmericanChoice sequence).
Fig. 3: Luma FGA-NN detailed architecture.
# 2.4. Training details
FGA-NN is trained using a combination of four training objectives, each corresponding to one of the four tasks. The classification tasks $(*)$ (cut-off frequency, scaling factor, and Log2Scale factor predictions) are trained by minimizing the categorical cross-entropy loss, which measures the discrepancy between the predicted and actual labels and is formulated as:

$$
\mathcal{L}_{*} = -\sum_{i=1}^{N} y_{i} \log(\hat{y}_{i})
$$

where $y_{i}$ is the true label, $\hat{y}_{i}$ is the predicted probability of the true label and $N$ is the batch size.

Interval boundaries, normalized to [0, 1], are predicted by minimizing a combined loss function: an exponentially scaled L1 loss $exp_{L1}$, which heavily penalizes large prediction errors, and a monotonicity penalty loss $mono_{P}$, which enforces non-decreasing behavior by computing the differences between consecutive elements and penalizing only the negative ones, as follows:

$$
\begin{array}{c}
exp_{L1} = \displaystyle\frac{1}{N} \sum_{i=1}^{N} \left( e^{\beta \cdot |y_{i} - \hat{y}_{i}|} - 1 \right) \\[2mm]
mono_{P} = \displaystyle\frac{1}{N} \sum_{i=1}^{N-1} \max\!\left(-(\hat{y}_{i+1} - \hat{y}_{i}), 0\right) \\[2mm]
\mathcal{L}_{Intervals} = exp_{L1} + mono_{P}
\end{array}
$$

with $\beta$ being a hyper-parameter controlling the sensitivity to the error, set to 5.

The total loss function combines the individual losses, weighted by their respective regularization parameters:

$$
\mathcal{L} = \lambda_{1} \mathcal{L}_{Cut\text{-}off} + \lambda_{2} \mathcal{L}_{Intervals} + \lambda_{3} \mathcal{L}_{Log2Scale} + \lambda_{4} \mathcal{L}_{scaling}
$$
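The interval-boundary losses are straightforward to implement. A minimal plain-Python sketch (a framework implementation would operate on tensors, but the arithmetic is the same):

```python
import math

def exp_l1(y, y_hat, beta=5.0):
    """Exponentially scaled L1 loss: mean of exp(beta * |y - y_hat|) - 1,
    which heavily penalizes large boundary errors."""
    return sum(math.exp(beta * abs(a - b)) - 1 for a, b in zip(y, y_hat)) / len(y)

def mono_penalty(y_hat):
    """Monotonicity penalty: penalize only decreases between consecutive
    predicted interval boundaries."""
    n = len(y_hat)
    return sum(max(-(y_hat[i + 1] - y_hat[i]), 0.0) for i in range(n - 1)) / n

y     = [0.10, 0.25, 0.50, 0.75]
y_hat = [0.12, 0.20, 0.55, 0.50]  # last boundary violates monotonicity
print(round(mono_penalty(y_hat), 4))  # 0.0125: only the 0.55 -> 0.50 drop counts
loss_intervals = exp_l1(y, y_hat) + mono_penalty(y_hat)
```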
Adam optimizer [12, 13] with $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$ is used. Weights are initialized using the classic He initialization [14]. The batch size is set to 64 and the learning rate is set to 5e-4, with a total of 10k training iterations.
# 3. EXPERIMENTAL RESULTS
FGA-NN aims to accurately estimate film grain parameters from grainy videos for efficient compression and faithful synthesis. In the following subsections, we evaluate FGA-NN performance in terms of accuracy of estimated parameters, synthesis fidelity and bitrate savings.
# 3.1. Film grain analysis evaluation
This subsection presents an evaluation of FGA-NN in comparison to FGA-CONVENT [7], the only existing state-of-the-art approach capable of estimating parameters compatible with the FGC-SEI format. UHD sequences with real film grain from the JVET subjective evaluation test set [16] are used for this evaluation. Fig. 2 compares the same cropped region from the grainy synthesized videos generated by VFGS using parameters estimated by FGA-CONVENT and FGA-NN, against the ground truth. It also contrasts their estimated luma channel film grain parameters with expert-tuned ground-truth values. Film grain parameters are illustrated using an interactive graphical tool [6], where the X axis represents the pixel value range [0, 255]. The blue Y axis represents the scaling factors [0, 255] and the green Y axis represents the cut-off frequencies [2, 14]. The dashed lines separate the different intervals. As for the Log2Scale factor, it is shown in the Y-axis label as $\mathrm{Gain}(x^{\mathrm{Log2scale}})$.
Fig. 4: Comparison of synthesized grainy images using different analysis and synthesis workflows to ground-truth ones
Table 2: Quantitative comparison between our method and state-of-the-art methods on different test sets.
One can observe that FGA-NN accurately captures the overall trend of the ground-truth film grain pattern and amplitude, resulting in synthesized images with perceptually similar film grain to that of the ground-truth ones. On the other hand, FGA-CONVENT predicts a lower scaling factor, compensated by a correspondingly lower Log2Scale factor as a result of its design, and tends to generate a coarser film grain pattern than the reference, resulting in a distinct yet visually consistent appearance.
Direct comparison of estimated and ground-truth film grain parameters is challenging. The interplay between scaling factors and Log2Scale factors allows for error compensation. Furthermore, minor variations in the estimated scaling factor or cutoff frequencies are likely to have minimal visual impact.
Furthermore, FGA-NN is trained to provide film grain parameters on the full intensity range [0, 255], while some test data with limited intensity variation may provide insufficient data for accurate prediction. Therefore, the following subsection evaluates the end-to-end analysis and synthesis workflow. Accurate analysis enables faithful synthesis, and comparing grainy images (both subjectively and objectively) is more straightforward.
# 3.2. Film grain synthesis evaluation
In this subsection, we evaluate film grain synthesis fidelity by comparing results obtained using: 1) the combination of FGA-NN for analysis and VFGS for synthesis, 2) the combination of FGA-CONVENT for analysis and VFGS for synthesis, 3) Style-FG [4], and 4) 3R-INN [5]. For a fair comparison, methods are evaluated on both the test set from the FGC-SEI dataset and the test set from the FilmGrainStyle740k dataset [4]. Table 2 presents a quantitative evaluation of synthesized grainy images against ground-truth images using the learned perceptual image patch similarity (LPIPS) [17], Jensen-Shannon divergence - natural scene statistics (JSD-NSS) [18], and KL divergence (KLD) metrics, widely used to evaluate film grain similarity. On the FGC-SEI test set, synthesis using the estimated parameters output by FGA-NN demonstrates superior performance across all metrics. On the FilmGrainStyle740k test set, Style-FG and 3R-INN achieve the best results, as these methods were specifically trained on this dataset, with FGA-NN trailing closely behind. The performance of FGA-CONVENT combined with VFGS is suboptimal on both test sets. This is solely due to the fact that in a real film grain analysis use-case the analysis relies on homogeneous regions and exploits information from multiple frames, whereas in the present evaluation the analysis is provided with a single low-resolution image (256×256 up to a maximum of 768×512), which often contains significant texture. This further complicates the challenge for the conventional analysis method, making it impossible to apply FGA-CONVENT to such small images.
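Two of the distribution-comparison metrics mentioned above, KL divergence and its symmetrized Jensen-Shannon variant, can be sketched over discrete histograms (for instance, of grain-image statistics). This is a generic illustration of the metrics, not the paper's exact JSD-NSS pipeline:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions; eps avoids log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric, bounded average of the KL
    divergences of p and q against their midpoint distribution."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.25, 0.25, 0.25, 0.25]   # e.g. ground-truth grain histogram
q = [0.40, 0.30, 0.20, 0.10]   # e.g. synthesized grain histogram
print(round(jsd(p, q), 4))
```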
Fig. 5: Comparison of enhanced compressed images using different analysis and synthesis workflows (third to last column).
Fig. 4 presents a subjective visual comparison of synthesized grainy images for a sample from the FilmGrainStyle740k test set (top row) and for a sample from the FGC-SEI test set (bottom row). For the top-row sample, synthesis with Style-FG and 3R-INN is excellent, followed by synthesis using FGA-NN parameters. FGA-CONVENT exhibits the lowest performance, as reported by the objective metrics, due to the limitations already mentioned. For the bottom-row sample, Style-FG and 3R-INN exhibit a significant drop in quality, indicating a form of overfitting to the FilmGrainStyle740k training set and poor generalization. Synthesis with FGA-NN estimated parameters maintains comparable performance, highlighting its robustness and generalization.
# 3.3. Real-world film grain workflow evaluation
This subsection evaluates different film grain analysis and synthesis workflows on two UHD video sequences (MeridianSmoker1 and TearsOfSteel-044) from the JVET subjective evaluation test set [16], assessing learning-based film grain analysis and synthesis against conventional methods on unseen, real-world content. Fig. 5 shows cropped regions from different versions of the original video sequences: original (film grain present), compressed at low bitrate (efficient transmission, film grain lost), and low bitrate enhanced by the different film grain analysis and synthesis workflows. In the top-row sample, both FGA-CONVENT and FGA-NN, when coupled with VFGS, faithfully synthesize film grain, outperforming Style-FG and 3R-INN. In the bottom-row sample, all methods struggle with accurate grain parameter prediction. Although FGA-CONVENT and FGA-NN introduce grain that does not match the original input, they still enhance the overall visual quality of the reproduced video. Note that in such cases, the interactive visualization tool presented in Section 3.1 could use the estimated parameters from FGA-NN and FGA-CONVENT as a starting point for further manual refinement, a capability absent in Style-FG and 3R-INN due to their latent space representations. The high computational cost of the 3R-INN and Style-FG workflows prevents implementation on end-user devices, further highlighting the advantages of our learning-based analysis module coupled with a hardware-friendly synthesis module. Finally, encoding UHD videos with film grain at medium to low bitrates using our film grain analysis and synthesis workflow enables bitrate savings of up to 90% compared to high-bitrate encoding [?].
To preserve artistic intent while compressing efficiently, film grain is analyzed and modeled before encoding and synthesized after decoding. This paper introduces FGA-NN, the first learning-based film grain analysis method to estimate conventional film grain parameters compatible with conventional synthesis. Quantitative and qualitative results demonstrate FGA-NN's superior balance between analysis accuracy and synthesis complexity, along with its robustness and applicability. | [
"cs.CV",
"eess.IV"
] |
# 1 Introduction
Digital survey platforms have become essential tools in participatory research, with a variety of tools available across different domains [41]. Many provide survey logic capabilities, typically allowing researchers to define conditional branching or question skipping based on respondent answers. Many platforms also offer dynamic content features where responses can be piped into later questions, allowing basic personalization of the questionnaire experience. Despite this, most mainstream survey platforms still follow a mostly linear, predefined flow. The branching logic is usually limited to conditions based on previous responses within the survey itself; truly context-aware or externally adaptive workflows remain rare. Recent analyses highlight that popular survey services "lack usable mechanisms for seamlessly importing participants’ data from other systems" [39]. Although several platforms can preload known information or trigger minor follow-ups, there is limited support for dynamically incorporating external context, such as real-time sensor data, environmental conditions, or personal digital data, into the survey logic during execution.
The CASE framework introduces a new approach within digital survey methodology, designed to overcome these limitations through an innovative approach to survey design and execution. Unlike conventional survey tools that rely on static branching logic, CASE implements an event-driven architecture that enables dynamic and context-aware workflows. Rather than treating surveys as fixed sequences of questions, CASE models them as responsive systems that can adjust in real time based on multiple inputs, such as participant responses, temporal conditions (including minimum time intervals required to validate triggers), external data, or changes in user state. The state of a participant is persistently tracked by the platform and can be updated by the same triggers that influence the progression of the survey. This allows flows to diverge or iterate based on real-time conditions and makes CASE well suited for longitudinal studies and research scenarios where flexibility and contextual awareness are important.
The practical value of CASE has been demonstrated through deployment in many different real-world scenarios. This shows that beyond its conceptual flexibility, CASE is grounded in real-world design needs, shaped by practical demands such as privacy, security, scalability, and long-term maintainability. During development, we prioritized not only advanced functionality, but also sustainability in deployment. The framework is designed to be reliably operated across diverse institutional and infrastructural settings.
The decision to undertake a comprehensive rework of the CASE framework in 2024 was driven by several critical factors that emerged from years of practical deployment experience. The growing number of platforms that utilize CASE technology revealed the need for greater flexibility to support various technical, ethical, legal, and use-case-specific requirements. Additionally, the goal of remaining aligned with modern web development standards and evolving research methodologies led to a fundamental architectural reassessment. The rework replaced the previous complex microservice setup with a streamlined and simplified architecture, significantly reducing necessary code management efforts, reducing interdependencies, and improving ease of deployment, which is critical for institutions with limited technical capacity or specific infrastructure requirements. This further positions CASE as a modern, maintainable solution that balances sophisticated functionality with practical deployment options.
This paper makes three key contributions. First, we present the design and implementation of an event-driven architecture tailored for adaptive participatory research, enabling context-sensitive, real-time survey workflows. Second, we document the architectural evolution of the CASE framework, from an initial microservice-based design to a simplified, maintainable monolithic architecture, based on insights gained through real-world deployments. Third, we share lessons learned from five years of real-world deployments across diverse research domains and regulatory environments, including national surveillance platforms, specialized health monitoring systems, and cross-domain applications, providing practical insight for future participatory research infrastructure. These deployments, which involve tens of thousands of participants, also serve to validate the scalability and adaptability of the framework.
The remainder of this paper is organized as follows. Chapter 2 outlines the foundation and history of the framework and discusses related work. Chapter 3 provides an overview of the key modules and features that make up the CASE framework. These are the result of extensive collaboration with domain experts over several years, with input from a variety of stakeholders influencing the evolution of the framework. Chapter 4 details the architectural improvements that started in 2024, explaining the motivations behind the rework and the technical choices and adaptations made. Chapter 5 presents selected real-world deployments and lessons learned from these implementations, providing practical insights for future participatory research infrastructure.
# 2 Foundations and Related Work

# 2.1 Background
The theoretical foundations of the framework date back to 2017, building on an idea that proposed a fundamentally different approach to survey logic [9]. This work introduced the concept of dynamic question selection through mathematical relevance, where questions would be chosen from a pool based on calculated relevance rather than predetermined sequences. The original vision emphasized a generalized view of data that could incorporate not only participant responses, but also sensor readings, location context, and external data sources. Virtual sensors were proposed to enable the integration of machine learning algorithms and complex computational logic, laying the foundation for the developed event-driven architecture. Although the actual CASE architecture evolved from these initial concepts into a more complex system than originally envisioned, the fundamental principle remained: Moving beyond static linear questionnaires to create an adaptive research instrument.
The first implementation of those concepts began in 2018 and was driven by a practical need to update the aging Influenzanet platform. This participatory surveillance system for influenza-like illness (ILI) had been in operation since 2003. Over the years, it had expanded into a network of national platforms in 10 European countries, collectively engaging tens of thousands of volunteers each season and providing valuable data for epidemiological research and disease monitoring [18, 29]. However, by the mid-2010s, the underlying software of the platform was becoming outdated and increasingly difficult to maintain. It was not designed to incorporate new diseases or modern data practices, such as smartphone input, flexible consent models, or enhanced privacy protection. CASE took on the role of successor to this legacy software, serving as a next-generation framework to carry forward the citizen science concept of Influenzanet on a more robust, scalable, and sustainable technological foundation.
# 2.2 Related Work
The landscape of digital data collection and participatory research platforms has evolved significantly in the last two decades, with various systems addressing different aspects of survey deployment, participant engagement, and real-time adaptivity. This section examines existing general approaches and solutions within related domains and positions CASE within this broader ecosystem.
Traditional Digital Survey Platforms. Most mainstream survey platforms like LimeSurvey, SurveyMonkey, or Qualtrics provide a web-based questionnaire design with conditional branching and basic logic elements. LimeSurvey offers an expression manager for defining boolean conditions [20], while Qualtrics supports complex survey flows with visibility logic and embedded data integration [32]. These tools improve on classic paper surveys by allowing skip patterns and piping of answers into subsequent questions. However, they remain fundamentally linear with fixed tree structures, where branches depend only on previous responses. Integration of real-time external context is extremely limited and is typically restricted to preloading known data. Dynamic adaptation during ongoing surveys (e.g., altering question flow based on external input or evolving conditions) is not supported, especially when it involves participant context or state. Although relatively user-friendly and sufficient for basic research, they lack the event-driven architecture and live context integration needed for highly adaptive or longitudinal study designs. More sophisticated platforms like Alchemer support API integration and custom scripting. However, they still rely on predefined survey structures and do not support a full reconfiguration of the survey logic in response to live context or changes in participant state [1]. Basic tools like Google Forms and Microsoft Forms offer simple survey creation, but lack advanced logic capabilities, and are primarily designed for casual data collection rather than research applications.
Mobile and Field Data Collection Tools. The ubiquity of smartphones has enabled an increase in platforms optimized for mobile and offline data collection, such as Open Data Kit (ODK) [14], KoBoToolbox, SurveyCTO, or the widely used REDCap [13]. These tools introduced GPS transmission, photo upload, and offline synchronization capabilities. ODK supports complex form logic that includes skip patterns, input validation, and calculated fields that enable conditional question displays within a form. REDCap additionally offers regulatory compliance and longitudinal data collection modules for multi-visit clinical studies. Despite their strengths in reliable data capture and compliance, these platforms focus on static surveys. Adaptivity is limited to logic within forms. Once deployed, external data or events cannot alter the flow of the questionnaire. Implementing long-running or interactive studies requires manually scheduling separate surveys or custom workflows outside the core system. These platforms focus on robust, predefined form execution rather than adaptive, context-sensitive survey workflows that leverage evolving participant state and runtime conditions.
Context-Aware and Sensor-Driven Surveys. Researchers have explored context-aware survey frameworks that react to sensor data or environmental events. Intille proposed early concepts of context-aware experience sampling [17], while the MyExperience framework provided systems to trigger in situ questionnaires based on sensor readings [10]. Studies demonstrated that mobile sensors could initiate relevant questions at opportune moments, for example, asking a user when the accelerometer detects physical activity or when the GPS indicates arrival at a location, improving data relevance and accuracy [36]. In the initial phase of CASE development, the use of smartphone sensors was explored to enhance participatory research capabilities [15], capturing movement patterns and environmental context to augment traditional self-reported responses. However, such systems, while widely explored [8], remained largely proof-of-concept rather than deployed platforms. Implementation required significant custom programming and tight coupling of survey logic with mobile apps. Technical challenges increased as mobile operating systems introduced stricter privacy controls and background processing restrictions [30]. Earlier context-aware systems focused on momentary interactions rather than providing infrastructure for multi-year or large cohort studies. Recent research continues to seek adaptive survey solutions, including data-driven survey generation approaches [39], highlighting the ongoing demand for platforms that intelligently adapt to the context of the user. However, a gap remains between research prototypes and robust general-purpose frameworks for longitudinal studies.
Mobile Health Research Platforms. Dedicated mobile health research frameworks like Apple’s ResearchKit [2] and Google’s ResearchStack [7] represent specialized tools for longitudinal health studies. ResearchKit has enabled large-scale studies such as the Stanford Heart Study and Parkinson’s mPower study, demonstrating the potential for mobile-based participatory research [5, 22]. These platforms provide built-in consent frameworks, sensor data integration, and standardized health survey modules. However, these frameworks are designed primarily for specific mobile ecosystems and health research contexts. They lack cross-platform deployment and are limited in their ability to handle diverse research domains beyond health. Furthermore, while they support some sensor integration, they do not provide the event-driven, context-aware survey logic that enables real-time adaptation based on external data sources or complex participant state management across multiple studies [31].
Participatory Surveillance Platforms. Several participatory surveillance platforms have demonstrated the value of crowd-sourced symptom reporting for the monitoring of infectious diseases. In the US, Flu Near You and its successor Outbreaks Near Me [4] allowed tens of thousands of volunteers to report influenza-like and COVID-19 symptoms weekly, although its static technical architecture limits dynamic survey adaptation, remains largely undocumented, and was not designed for broader reuse [35]. Australia’s FluTracking [6] has similarly shown high public engagement and epidemiological value during seasonal influenza outbreaks, but, like other single-purpose platforms, lacks the technical flexibility to dynamically adapt survey flows or rapidly adapt to new diseases during emerging health threats. The UK’s ZOE COVID Symptom Study demonstrated the scalability and research potential of participatory symptom tracking at unprecedented scale, engaging millions of users and generating findings that directly influenced public health policy [25]. Beyond its initial role in tracking COVID-19 symptoms, the ZOE app has evolved into a broader research platform now known as the ZOE Health Study. It engages participants in longitudinal studies of diet, chronic symptoms, and post-COVID-19 syndrome [3, 33]. This illustrates the potential for participatory research platforms to remain relevant beyond acute outbreaks. CASE supports a reusable and adaptive architecture that facilitates such transitions, enabling research continuity across domains. Many European platforms within the Influenzanet network have since transitioned to the open-source CASE framework, establishing a reusable foundation for multi-study participatory research. This shift reflects a broader trend toward scalable, transparent, and adaptable infrastructures for participatory surveillance, identified as a critical need across global systems [24].
Summary. The review of existing platforms reveals several key gaps that CASE addresses. Traditional survey platforms like LimeSurvey and Qualtrics, while user-friendly, lack real-time adaptiveness and external context integration. Mobile data collection tools such as ODK and REDCap focus on form-based data capture but are not able to dynamically adjust to external events once deployed.
Context-aware research prototypes demonstrate promising concepts, but remain largely a proof-of-concept. Mobile health research platforms such as ResearchKit are sophisticated, but limited to specific ecosystems and health domains. Furthermore, many commercial platforms have restrictive pricing models and limited customization options for research use cases [37, 41]. CASE contributes to this landscape by implementing an event-driven architecture that enables dynamic survey workflows based on external context, participant state, and temporal factors. The framework has been deployed in several participatory surveillance systems and longitudinal studies. Although primarily applied in health research contexts, its modular design allows adaptation to other domains, as illustrated by its use in analysis of political sentiment during live events. This flexibility positions CASE not only as a platform for health-related studies, but also as a foundation for participatory research more generally. However, like other research frameworks, CASE still requires technical expertise for deployment and customization to specific research needs.
# 3 Framework Overview and Requirements

# 3.1 Requirements Context and Scope
The features outlined in this section were derived from a comprehensive set of requirements gathered through interviews and discussions with the application owners, who are recognized experts in their respective fields of participatory studies. Structured requirement engineering [27] is essential for successful system development. However, the underlying rationale for these requirements is outside the scope of this work, and we focus on presenting them as the foundational parameters that guided our development process. This paper shows how we implemented solutions that effectively meet the needs articulated by these experts, highlighting the technical realization of the requirements.
Through our experience in implementing diverse applications, collaborating with experts, and deploying studies across various contexts, we have made several key observations. Although many applications share common components and logic, each typically comes with a unique set of requirements that can occasionally conflict. In addition, applications deployed in different countries often need to comply with varying regulatory requirements. Another observation is that user interface designs must adopt different stylistic approaches and content goals depending on the specific goals, contexts, and target audiences involved.
In response to these observations, our primary goal has been to develop a technical framework that allows for the rapid composition of tailored applications. This framework aims to maximize the reusability of common logic and components while simultaneously providing the flexibility to accommodate the specific set of requirements of individual use cases.
The features described in the following subsections reflect our effort to balance standardization and customization, creating a system that is both efficient and adaptable to diverse needs in participatory studies.
# 3.2 Overall Application Goal
The framework was designed to support a wide range of participatory studies, from simple one-time surveys to more complex long-term research projects. Its flexible architecture addresses the needs of both study participants and researchers. The main objectives of the CASE framework can be summarized as follows:
Ethical Design. The whole system was designed with explicit attention to the principles of data minimization and transparency. CASE supports state-of-the-art encryption and role-based data access, and is capable of implementing consent workflows informed by established ethical guidelines for digital participatory surveillance [12, 40], which include the need for clear, multilingual, and user-friendly electronic consent procedures to ensure participant autonomy and data protection.
Flexibility in Study Design. The framework is capable of supporting diverse types of studies. On the one hand, it can manage basic anonymous surveys that require minimal engagement. On the other hand, it can handle complex longitudinal studies, in which participants are actively managed over extended periods, often participating in multiple studies with continuous follow-up. The flexibility of the framework allows it to be tailored to different research fields and study complexities, making it suitable for use in various domains.
Dual-Focus Functionality. The framework is built around two primary components. The participant interface provides a user-friendly way for individuals to complete surveys, track participation, and interact with researchers. The design prioritizes simplicity and engagement, ensuring that participants can easily navigate the interface and maintain long-term participation in the studies. Meanwhile, the researcher and manager interface offers a robust system that allows researchers and application managers to design, execute, and monitor studies. It includes tools for study configuration, survey customization, participation tracking, and data analysis, ensuring that researchers retain control over the full study lifecycle. By focusing on both participant experience and researcher management needs, the CASE framework bridges the gap between efficient data collection and study administration.
# 3.3 Core Functional Modules
The framework consists of several key modules, each designed to handle a specific aspect of participatory studies. These modules collectively provide a comprehensive and flexible solution that can be customized to meet the unique needs of various research endeavors.
3.3.1 Study System. The study system module is a key component of the framework, offering a robust and highly configurable environment for managing complex studies. It is based on a sophisticated event-driven engine that serves as the core component managing the flow and logic of the study. The key features of the system include the following.
• Event-Driven Execution: The module operates on a set of configurable rules that determine how the system reacts to different events, such as participant enrollment, response submission, periodic timers checking states, or custom events triggered by the specific application.
• Context-Based Decision Making: The system can utilize various context sources, such as specific participant responses, response history, current participant state, or payload data from external events.
• Customizable Action Logic: Study managers can define rules that trigger actions in response to events, such as assigning new surveys, scheduling messages, or sending data to external systems or event handlers.
Figure 2 illustrates the underlying architecture of the study system, showing how events, context, and participant state flow into a rule-based engine that drives these actions. This flexibility allows for the management of complex and dynamic studies with individualized participant interactions. It is particularly well suited for large-scale or long-term studies in fields such as epidemiology or public health, where ongoing engagement, adaptability, and automation of certain processes are crucial.
Figure 2: A diagram showing the architecture of the CASE framework’s study system. Participant events, timers, context sources (such as sensor data or external inputs), and the participant’s state (including response history) are processed by a rule-based engine to trigger actions like survey assignments, message scheduling, or external system calls.
Figure 3: A diagram showing the architecture of the CASE framework’s survey module. Context sources including survey definitions, previous and current responses inform the Survey Engine, which processes expressions, generates dynamic content, and controls survey logic to produce an adaptive, context-aware survey experience via the survey renderer.
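The rule-based flow described above, where events and participant state feed a condition/action engine, can be sketched as follows. This is a minimal illustration, not the CASE implementation: all type and field names (`Event`, `ParticipantState`, `Rule`, and so on) are hypothetical, since the paper does not specify the actual rule format.

```go
package main

import "fmt"

// Event carries a type (e.g. enrollment, submission, timer) and an
// optional payload, e.g. from an external system. All names here are
// illustrative assumptions, not CASE's actual data model.
type Event struct {
	Type    string
	Payload map[string]string
}

// ParticipantState is the persistently tracked context a rule can read
// and an action can update.
type ParticipantState struct {
	Flags           map[string]string
	AssignedSurveys []string
}

// Rule pairs a condition over (event, state) with an action that
// mutates the participant state, e.g. assigning a survey.
type Rule struct {
	Condition func(Event, *ParticipantState) bool
	Action    func(Event, *ParticipantState)
}

// HandleEvent applies every matching rule to the participant state.
func HandleEvent(rules []Rule, ev Event, st *ParticipantState) {
	for _, r := range rules {
		if r.Condition(ev, st) {
			r.Action(ev, st)
		}
	}
}

func main() {
	rules := []Rule{{
		// On enrollment, assign the intake survey exactly once.
		Condition: func(ev Event, st *ParticipantState) bool {
			return ev.Type == "ENTER" && st.Flags["enrolled"] == ""
		},
		Action: func(ev Event, st *ParticipantState) {
			st.Flags["enrolled"] = "true"
			st.AssignedSurveys = append(st.AssignedSurveys, "intake")
		},
	}}
	st := &ParticipantState{Flags: map[string]string{}}
	HandleEvent(rules, Event{Type: "ENTER"}, st)
	fmt.Println(st.AssignedSurveys) // [intake]
}
```

Because actions can themselves update the state that later conditions read, a flow sketched this way can diverge or iterate over time, which is the property the study system relies on for longitudinal designs.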
3.3.2 Survey Module. The survey module supports the creation of context-aware, dynamic surveys that can adapt in real time based on participants’ responses, improving the quality and relevance of collected data. Key features of the survey module include:
• Survey Renderer: A flexible renderer that supports a variety of commonly used question types and other items, like formatted text, ensuring that surveys are both visually appealing and functional.
• Survey Engine: Responsible for resolving dynamic expressions defined by researchers, enabling adaptive questionnaire behavior based on real-time evaluation of predefined logic.
• Expression System: Expressions can incorporate context variables from multiple flexible sources, including previous participant responses, sensor data, external data sources, or current response states, providing comprehensive adaptability to participant context and environmental factors.
• Dynamic Content Generation: Resolved expressions enable dynamic content within surveys, such as generating relative dates, displaying response counts, or incorporating calculated values that update based on participant interactions or external conditions.
• Conditional Logic Control: Expression resolution drives sophisticated conditional logic that can dynamically display or hide individual survey items or their components, as well as whole groups of items, and enable or disable interactive components based on evaluated conditions.
Figure 3 illustrates the architecture of the survey module, detailing how data is processed and how the listed components interact with each other. This adaptability makes the survey module an effective tool for maintaining a high level of participant engagement while collecting high-quality research data in complex environments via context-sensitive questionnaires.
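The interaction between the expression system and conditional logic control can be illustrated with a toy expression tree that decides item visibility from previous responses. The expression syntax and all identifiers (`Expr`, `ResponseEquals`, `VisibleItems`) are assumptions for illustration; the paper does not document CASE's actual expression language.

```go
package main

import "fmt"

// Expr is a toy boolean expression evaluated against a context map of
// previous responses; purely illustrative, not CASE's expression system.
type Expr interface {
	Eval(ctx map[string]string) bool
}

// ResponseEquals is true when a previous response key has a given value.
type ResponseEquals struct{ Key, Want string }

func (e ResponseEquals) Eval(ctx map[string]string) bool { return ctx[e.Key] == e.Want }

// And combines sub-expressions conjunctively.
type And struct{ Parts []Expr }

func (a And) Eval(ctx map[string]string) bool {
	for _, p := range a.Parts {
		if !p.Eval(ctx) {
			return false
		}
	}
	return true
}

// Item is a survey item whose visibility is driven by an expression;
// a nil expression means the item is always shown.
type Item struct {
	ID        string
	VisibleIf Expr
}

// VisibleItems filters items against the current response context.
func VisibleItems(items []Item, ctx map[string]string) []string {
	var out []string
	for _, it := range items {
		if it.VisibleIf == nil || it.VisibleIf.Eval(ctx) {
			out = append(out, it.ID)
		}
	}
	return out
}

func main() {
	items := []Item{
		{ID: "q1"},
		{ID: "q2", VisibleIf: ResponseEquals{Key: "q1", Want: "yes"}},
	}
	fmt.Println(VisibleItems(items, map[string]string{"q1": "yes"})) // [q1 q2]
	fmt.Println(VisibleItems(items, map[string]string{"q1": "no"}))  // [q1]
}
```

Re-evaluating such expressions after every user interaction, as the survey engine does, is what turns a static questionnaire definition into an adaptive one.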
3.3.3 Authentication and User Management. Authentication and user management are crucial for tracking participants across multiple surveys over time and ensuring secure access to the system. This module supports both the participant and researcher roles, ensuring appropriate access control and data protection. Key features include:
• Participant Authentication: The built-in module primarily supports authentication through email and password. As an additional option, temporary one-time codes are supported to further protect sensitive resources.
• Participant Account Management: Participants have complete control over their accounts, allowing them to update email addresses, change passwords, and manage other account-related information.
• Researcher/Administration Authentication and Management: The system supports OpenID Connect for authentication to the management area. A permission system allows administrators to manage access rights for different researchers and management users. Restricted access can be granted, for example, to response data or to managing configurations and content, including study or survey management.
While this module was designed for composing more complex use cases, it is also possible to build an application with the CASE framework that incorporates other authentication solutions.
3.3.4 Messaging System. The framework provides comprehensive messaging functionality to support various use cases in research studies and surveys. These features enable efficient communication with participants. Key features include:
• Template-Based Emails: The framework contains data models and logic to set up template-based emails that allow personalized messages with dynamic content. This includes insertions such as authentication codes or other information based on participant data or study events.
• Message Scheduling: The system allows for scheduling various types of communication, such as newsletters, reminders, or participation invitations. Built-in mechanisms balance the load based on available system resources and avoid delivery issues such as false spam detection caused by flood messaging.
The system can be extended to utilize other communication channels and paradigms. A built-in HTTP to SMTP bridge allows easy integration into existing email infrastructure.
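Template-based emails of the kind described above map naturally onto Go's standard `text/template` package (the back-end's implementation language). The sketch below shows the general pattern only; the template fields (`Name`, `LoginCode`) are illustrative, not CASE's actual data model.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderEmail fills a message template with per-participant data using
// the Go standard library's text/template package. A real system would
// cache parsed templates rather than re-parsing on every send.
func renderEmail(tmpl string, data any) (string, error) {
	t, err := template.New("email").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Hypothetical template with a dynamic authentication code insertion.
	body, err := renderEmail(
		"Hello {{.Name}}, your one-time code is {{.LoginCode}}.",
		map[string]string{"Name": "Alex", "LoginCode": "493-202"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(body) // Hello Alex, your one-time code is 493-202.
}
```

For HTML email bodies, `html/template` offers the same API with contextual escaping, which helps prevent injection of markup through participant-supplied data.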
# 4 Architecture and Implementation
Between 2020 and early 2024, CASE-based applications were implemented using a microservice architecture. This approach aimed to provide modularity and scalability, reflecting best practices in cloud-native design at that time [28]. An overview of the original architecture is shown in Figure 4. The applications were composed of the following main components:
# 4.1 Previous Microservice Approach
Front-End. A web application implemented as a single-page React application, based on a shared application library. This library contains common data types, logic for data fetching, state management, navigation, and styling to support different use cases.
Back-End. The back-end was composed of the following core microservices implemented in the Go programming language.
• User management service
• Study service
• Messaging service
• Email client service
• Logging service
Each service is organized along thematic lines. Individual services were responsible for all methods related to specific functional areas of the system. Both management and participant-facing features were integrated within all relevant services, rather than being separated. These services exposed their functionality through gRPC interfaces, enabling structured communication between components. In addition to the core services, two API services existed:
• Management API service: Exposed back-end functionality for management operations
• Participant API service: Provided interfaces for participant-facing functions
Both API services acted as proxies, exposing the back-end functionality over an HTTP web server and routing incoming requests to the appropriate microservices.
Figure 4: Thematic microservice architecture, with standalone modules organized by functional responsibility. Orange M labels mark components required for study management and configuration, while green P labels mark modules necessary for participant-facing functionality.
# 4.2 Disadvantages of the Microservice Approach
While the microservice architecture was initially expected to provide benefits in modularity and scalability, these benefits did not materialize; instead, as practical deployments evolved, several disadvantages became apparent. The code was distributed across multiple repositories, leading to unclear dependencies and relationships between modules, which made understanding the overall architecture of the system more challenging. Additionally, composing new applications from parts of the system proved difficult, as the services focused more on their thematic responsibilities than on offering composable modules. Furthermore, simple changes often required modifications in multiple repositories, with corresponding API changes frequently necessary. The entanglement of management and participant functions within the services created additional complications, making a full deployment of all services necessary even to begin setup. This inability to deploy services independently reduced flexibility and increased deployment complexity.
These challenges are not limited to CASE [19] but indicate the need for a reevaluation of the system architecture, considering the actual requirements and usage patterns of the participatory system.
# 4.3 Simplified Architecture and Re-implementation
In 2024, we performed a comprehensive architectural rework to address the previously mentioned issues. The new back-end implementation still uses the Go language, as its properties make it highly suitable for the task. However, the new back-end combines all code, including data types, modules for feature areas, and database adapters, into a single monorepo. Figure 5 shows an overview of this new architecture. This approach significantly simplifies code management and intermodule dependencies. The new system includes reference implementations for the following key components:
Figure 5: Simplified architecture optimized for easier deployment. Compared to Figure 4, it reduces interdependencies between components and lowers the complexity of setup. Green P labels mark modules required for participant facing applications while orange M labels represent components needed for management functionality.
• Participant Back-End: A web server handling all participant-facing features of the participatory study system. It encompasses authentication, study flows, and account management functionality.
• Management Back-End: A web server offering API functionality for administrative tasks. It allows for the configuration and management of study setups, message templates, and schedules. It also provides access to data and implements a resource-scope-based permission system.
• Schedulable Auxiliary Tasks: We have implemented several small programs that can be scheduled to run at fixed intervals. These include cleaning up unverified or inactive users, handling message tasks, and managing timer-based study events.
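The resource-scope-based permission system mentioned for the management back-end can be sketched with a simple scope matcher. The `"resource:action"` scope syntax and the wildcard convention below are assumptions for illustration and are not taken from the CASE codebase.

```go
package main

import (
	"fmt"
	"strings"
)

// hasPermission checks a required scope of the form "resource:action"
// (e.g. "responses:read") against a user's granted scopes, where "*"
// in a granted scope acts as a wildcard. This scope grammar is a
// hypothetical sketch, not CASE's actual permission model.
func hasPermission(granted []string, required string) bool {
	reqRes, reqAct, _ := strings.Cut(required, ":")
	for _, g := range granted {
		res, act, _ := strings.Cut(g, ":")
		if (res == "*" || res == reqRes) && (act == "*" || act == reqAct) {
			return true
		}
	}
	return false
}

func main() {
	// A researcher who may read responses and fully manage surveys.
	scopes := []string{"responses:read", "surveys:*"}
	fmt.Println(hasPermission(scopes, "responses:read"))   // true
	fmt.Println(hasPermission(scopes, "surveys:manage"))   // true
	fmt.Println(hasPermission(scopes, "responses:delete")) // false
}
```

Checks like this would run in the management back-end on every API request, so that restricted access (e.g. read-only access to response data) can be granted per researcher as described above.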
The reworked system also leverages modern front-end technologies, driven by the increased popularity of server-side rendering with React and the stable release of Next.js's app router approach. These advancements in front-end implementation brought several benefits, most notably for us:
• Smaller bundle sizes
• Faster page loads
• Simpler state management
Additionally, new mature libraries emerged that facilitate the implementation of accessible user interfaces, such as Shadcn/UI, based on Radix UI and Tailwind CSS.
This simplified architecture addresses many of the challenges that we faced with the previous microservice approach. By consolidating our code base and leveraging modern web technologies, we have created a more maintainable, flexible, and efficient system. The monorepo structure supports easier updates and better code reuse, while the separation of participant and management back-ends allows for more targeted development and deployment.
The new implementation uses backward-compatible data models, so the new approach can be rolled out gradually. The microservice approach is still in use for multiple use cases and is currently not deprecated. The new approach is meant to provide an easier-to-deploy and more maintainable version for use cases where this is relevant and suitable.
# 5 Practical Information and Real-World Applications
The CASE framework is openly available at https://github.com/case-framework where the source code and documentation can be accessed. It is licensed under the Apache 2.0 open-source license, ensuring transparency and adaptability for diverse environments. Development is primarily led by coneno GmbH, with continued collaboration from academic and research partners.
In the following, we highlight notable research projects working with and on CASE, followed by selected key use cases, illustrating how the framework has been successfully implemented across diverse domains, from long-term participatory health studies to real-time data collection in mobile applications. We also include selected lessons learned, reflecting practical considerations observed during development and deployment.
# 5.1 Selected Research Projects
The CIMPLEX project (Bringing CItizens, Models and Data together in Participatory, Interactive SociaL EXploratories, 2014-2017, Horizon 2020, 9 partners) developed the GrippeNet App, a mobile application allowing self-reporting of symptoms to Influenzanet, enhanced with sensor-based features to analyze behavioral patterns during epidemics. CIMPLEX also initiated the transition from the older Influenzanet technology stack to the newly developed, more advanced CASE framework.
Building on this foundation, the EpiPose project (Epidemic intelligence to minimize 2019-nCoV's public health, economic and social impact in Europe, 2020-2023, Horizon 2020, 6 partners) leveraged CASE technologies to develop new and improve existing platforms, allowing the collection and integration of COVID-19 related data from citizens. The project helped expand participatory surveillance systems such as Infectieradar (in the Netherlands and Belgium) and Influweb (in Italy), making them more efficient in monitoring a broader range of infectious diseases apart from influenza through citizen-contributed health information. Additionally, EpiPose contributed to the development of CASE customizations for the Influenzanet infrastructure to fit specific national use cases.
Currently, the ongoing VERDI project (SARS-CoV-2 variants Evaluation in pRegnancy and paeDIatrics cohorts, since 2021, Horizon Europe, 30+ partners) uses CASE as a reference use case to explore how participatory surveillance systems can support pandemic preparedness. VERDI is investigating the technological and methodological foundations required for resilient, scalable, and engaging data collection infrastructures that can quickly adapt to newly emerging infectious disease threats.
# 5.2 Selected Use Cases
For flu and COVID-19 surveillance, the infectieradar.nl (Netherlands) and flusurvey.net (UK) platforms, both part of the Influenzanet network, demonstrate the capabilities of the CASE framework for large-scale participatory disease monitoring. These platforms engage tens of thousands of volunteers in weekly symptom reporting to track influenza, COVID-19, and other infectious diseases [18, 23]. In addition to seasonal surveillance, they also support ad hoc one-time surveys hosted within the same application ecosystem. Additional European platforms within the Influenzanet network have successfully transitioned from legacy technical infrastructure to CASE-based implementations, including grippenet.fr (France), influweb.org (Italy) and grippenet.ch (Switzerland). While the open-source nature of CASE enables such migrations, not all implementations are publicly documented. These platforms operate independently with their own customized implementations of the CASE framework, tailored to their specific national contexts and requirements.
The Dutch platform tekenradar.nl allows public participation in tracking tick bites and associated health effects for the monitoring of tick-borne diseases. The system has gathered tens of thousands of tick bite reports and supports epidemiological research on Lyme disease and related conditions through flexible survey and notification modules tailored to the experiences reported by individual participants [11, 16]. Recently rebuilt with the CASE framework, it allows users to report tick encounters and symptoms while supporting follow-up engagement through a multi-track longitudinal study structure.
The Post-COVID Research Portal at postcovidonderzoek.nl serves as a national entry point for individuals with long-term symptoms after COVID-19 infection to participate in ongoing scientific studies [26]. Developed from the start with the reworked version of the CASE software, the platform collects baseline and follow-up data while enabling centralized participant recruitment across multiple coordinated research projects. By streamlining intake and periodic follow-up engagement, the system improves long-term cohort management and enables targeted substudies. The platform also supports integration with clinical studies, as participants are invited through the portal to participate in, for instance, blood bank studies and randomized controlled trials, illustrating how CASE allows the digital research infrastructure to connect efficiently with laboratory workflows [38].
In a different domain, the Real-Time Response (RTR) mobile application was designed for sentiment analysis during live events with a focus on political debates prior to elections in Germany. It utilizes the survey module from the CASE framework to implement questionnaires and augment them with a real-time feedback data stream, allowing researchers to capture audience reactions dynamically and correlate them with specific debate moments [21, 34].
Various applications built on top of the CASE framework, including the listed examples, demonstrate its adaptability across different contexts, from citizen science to mobile app-based real-time sentiment analysis. Although public health has been a central area of application, the underlying architecture is designed to serve a broader research agenda, enabling adaptive, scalable data collection systems that prioritize data privacy and offer an accessible and engaging user experience.
# 5.3 Lessons Learned
The development and rework of the CASE platform and its deployment in multiple countries and institutions provided practical insights. The following selected lessons learned offer guidance for future implementations of participatory data collection platforms, particularly in preparedness contexts.
Customization Capability as Core Value. Our experience working closely with stakeholders demonstrated that the primary benefit of using our study framework compared to other alternatives, which in some cases might be easier to deploy, was the ability to "make things happen". When using off-the-shelf software or services (e.g., SaaS products), changes to the workings, design, and functionality are very limited, if possible at all. While our framework allows extensive customization through the standard feature set, it also offers extendability to build around the core or compose new components entirely.
Preparedness Requires Balanced Standardization and Flexibility. Preparedness, which continues to grow in importance, requires both standardization for rapid deployment and flexibility to adapt to evolving situations. New studies typically need to accommodate specific scenarios as quickly as possible: a certain degree of standardization is necessary for quick deployment and established processes, but a high degree of flexibility is also required to react to the dynamic nature of evolving situations (such as a pandemic). In addition, stakeholders often have conflicting preparation priorities, ranging from technical infrastructure to study design readiness to ethical compliance.
Digital Sovereignty as Strategic Necessity. Additionally, compared to many commercially offered services, digital sovereignty is a key aspect: the ability to host, maintain, or develop the solution when needed, without depending on external entities. Although our team currently provides efficient engineering solutions, the open-source and permissive nature of the project ensures the independence of the organizations using it. This autonomy becomes particularly critical during crisis situations, when external dependencies may become unreliable or unavailable.
Organizational and Training Requirements. Successful deployment requires more than technical solutions. Even user-friendly platforms need structured onboarding and training programs. We found that iterative development cycles that involved technical teams, researchers, and ethics/privacy officers were essential for addressing real-world constraints. A deep understanding of the actual research goals proved critical. Effective solutions emerged only when developers truly grasped the surveillance objectives and priorities. This required sustained collaboration with domain experts throughout development, not just during requirement gathering. Without this ongoing dialogue, technical teams risk building features that seem logical but miss actual research needs.
Sustainability and Funding Challenges. Software maintenance is an ongoing necessity as dependencies, libraries, and security requirements evolve constantly. However, this creates a fundamental tension, as research projects often need working solutions immediately but rarely budget for long-term maintenance. Funding typically supports specific research goals, not the "invisible" work of keeping infrastructure up to date and secure, which threatens sustainability. Although organizations benefit from the framework today, few contribute to its future viability. Ensuring long-term availability requires new funding models that recognize infrastructure maintenance as essential research support, not optional overhead. Without sustainable funding for core development and governance, even successful platforms risk obsolescence.
# 6 Closing Remarks and Next Steps
In this paper, we introduce the CASE framework, a modern, flexible, and open-source architecture for the deployment of adaptive surveys, grounded in more than a decade of participatory surveillance experience. Designed to meet the demands of public health infrastructure, CASE has demonstrated robustness in large-scale real-world deployments, including national COVID-19 monitoring platforms and longitudinal post-infection cohort studies. At the same time, CASE is well suited for general-purpose use. Its modular event-driven logic, privacy-aware design, and highly customizable workflows make it suitable for behavioral science, environmental health, and other domains where context-aware participatory data collection is critical.
The 2024 rework ensures that CASE remains aligned with modern standards and research needs, significantly enhancing both maintainability and deployment flexibility. The new architecture retains flexibility and scalability, ensuring that the system remains relevant and extendable for future applications. By combining technical robustness with a user-friendly experience for participants and ethical design, CASE establishes itself as a research infrastructure capable of supporting both urgent epidemiological surveillance and long-term interdisciplinary studies. Indeed, the recent adoption of CASE by numerous European platforms, particularly within the Influenzanet network, highlights its growing role as a foundational infrastructure for multi-study participatory research.
Beyond its technical capabilities, CASE addresses the critical need for institutional control in research infrastructure. Its open-source and self-hostable nature allows organizations to retain sovereignty over their data collection systems when needed, without mandatory dependence on external commercial platforms or services.
Despite these strengths, several limitations remain. The lack of formal usability evaluations limits the understanding of participant and researcher experience across diverse deployments. Although the monolithic redesign improved maintainability, a technical barrier still exists for non-specialist teams. Furthermore, the long-term sustainability of the framework depends on continued institutional support and the development of a formal governance and funding strategy to ensure ongoing maintenance and community growth.
To address these challenges, we now aim to expand accessibility by enhancing the core tool set, further generalizing CASE’s application beyond public health, making it quicker to deploy, and enhancing preparedness and research reproducibility across various domains.
To maximize the framework’s impact and accessibility, future plans consider a library of ready-to-deploy application templates tailored to common research scenarios, possibly complemented by establishing a Software-as-a-Service (SaaS) platform that will provide hosted CASE instances with managed infrastructure and support services. This would significantly reduce the technical barrier to entry, allowing researchers to rapidly deploy sophisticated participatory systems without requiring extensive technical expertise or infrastructure investment.
# References
[1] Alchemer LLC. 2025. Getting Started With Logic. Alchemer LLC. https://help.alchemer.com/help/getting-started-with-logic
[2] Apple Inc. 2025. ResearchKit: Open source framework for medical research and health apps. Apple Inc. https://www.researchkit.org
[3] Kate M. Bermingham, Inbar Linenberg, Lorenzo Polidori, Francesco Asnicar, Alberto Arrè, Jonathan Wolf, Fatema Badri, Hannah Bernard, Joan Capdevila, William J. Bulsiewicz, Christopher D. Gardner, Jose M. Ordovas, Richard Davies, George Hadjigeorgiou, Wendy L. Hall, Linda M. Delahanty, Ana M. Valdes, Nicola Segata, Tim D. Spector, and Sarah E. Berry. 2024. Effects of a personalized nutrition program on cardiometabolic health: a randomized controlled trial. Nature Medicine 30, 7 (01 Jul 2024), 1888–1897. https://doi.org/10.1038/s41591-024-02951-6
[4] Boston Children’s Hospital, HealthMap, Flu Lab, and Ending Pandemics. 2025. Outbreak Near Me. Boston Children’s Hospital, HealthMap, Flu Lab, and Ending Pandemics. https://outbreaksnearme.org
[5] Brian M. Bot, Christine Suver, Elias Chaibub Neto, Michael Kellen, Arno Klein, Christopher Bare, Megan Doerr, Abhishek Pratap, John Wilbanks, E. Ray Dorsey, Stephen H. Friend, and Andrew D. Trister. 2016. The mPower study, Parkinson disease mobile data collected using ResearchKit. Scientific Data 3, 1 (03 Mar 2016), 160011. https://doi.org/10.1038/sdata.2016.11
[6] Sandra J Carlson, Daniel Cassano, Michelle T Butler, David N Durrheim, and Craig B Dalton. 2019. Flutracking weekly online community survey of influenza-like illness annual report, 2016. Commun Dis Intell (2018) 43 (April 2019). https://doi.org/10.33321/cdi.2019.43.15
[7] Cornell Tech and Open mHealth. 2025. ResearchStack: Android framework for building research study apps. Cornell Tech and Open mHealth. http://researchstack.org
[8] Victor P Cornet and Richard J Holden. 2018. Systematic review of smartphone-based passive sensing for health and wellbeing. J. Biomed. Inform. 77 (Jan. 2018), 120–132. https://doi.org/10.1016/j.jbi.2017.12.008
[9] Hunter Damron and Marco Hirsch. 2017. Structured Representation for Dynamic Survey Logic. (2017). http://dx.doi.org/10.13140/RG.2.2.26126.27208
[10] Jon Froehlich, Mike Y. Chen, Sunny Consolvo, Beverly Harrison, and James A. Landay. 2007. MyExperience: a system for in situ tracing and capturing of user feedback on mobile phones. In Proceedings of the 5th International Conference on Mobile Systems, Applications and Services (San Juan, Puerto Rico) (MobiSys ’07). Association for Computing Machinery, New York, NY, USA, 57–70. https://doi.org/10.1145/1247660.1247670
[11] Irene Garcia-Marti, Raul Zurita-Milla, Margriet G. Harms, and Arno Swart. 2018. Using volunteered observations to map human exposure to ticks. Scientific Reports 8, 1 (18 Oct 2018), 15435. https://doi.org/10.1038/s41598-018-33900-2
[12] Lester Darryl Geneviève, Andrea Martani, Tenzin Wangmo, Daniela Paolotti, Carl Koppeschaar, Charlotte Kjelsø, Caroline Guerrisi, Marco Hirsch, Olivia Woolley-Meza, Paul Lukowicz, Antoine Flahault, and Bernice Simone Elger. 2019. Participatory Disease Surveillance Systems: Ethical Framework. J Med Internet Res 21, 5 (May 2019), e12273.
[13] Paul A. Harris, Robert Taylor, Robert Thielke, Jonathon Payne, Nathaniel Gonzalez, and Jose G. Conde. 2009. Research electronic data capture (REDCap)-A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics 42, 2 (April 2009), 377–381. https://doi.org/10.1016/j.jbi.2008.08.010
[14] Carl Hartung, Adam Lerer, Yaw Anokwa, Clint Tseng, Waylon Brunette, and Gaetano Borriello. 2010. Open data kit: tools to build information services for developing regions. In Proceedings of the 4th ACM/IEEE International Conference on Information and Communication Technologies and Development (London, United Kingdom) (ICTD ’10). Association for Computing Machinery, New York, NY, USA, Article 18, 12 pages. https://doi.org/10.1145/2369220.2369236
[15] Marco Hirsch, Olivia Woolley-Meza, Daniela Paolotti, Antoine Flahault, and Paul Lukowicz. 2018. grippeNET App: Enhancing Participatory Influenza Monitoring Through Mobile Phone Sensors. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers (Singapore, Singapore) (UbiComp ’18). Association for Computing Machinery, New York, NY, USA, 833–841. https://doi.org/10.1145/3267305.3274171
[16] Agnetha Hofhuis, Jan van de Kassteele, Hein Sprong, Cees C. van den Wijngaard, Margriet G. Harms, Manoj Fonville, Arieke Docters van Leeuwen, Mariana Simões, and Wilfrid van Pelt. 2017. Predicting the risk of Lyme borreliosis after a tick bite, using a structural equation model. PLOS ONE 12, 7 (07 2017), 1–15. https://doi.org/10.1371/journal.pone.0181807
[17] Stephen S. Intille, John Rondoni, Charles Kukla, Isabel Ancona, and Ling Bao. 2003. A context-aware experience sampling tool. In CHI ’03 Extended Abstracts on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA) (CHI EA ’03). Association for Computing Machinery, New York, NY, USA, 972–973. https://doi.org/10.1145/765891.766101
[18] Carl E. Koppeschaar, Vittoria Colizza, Caroline Guerrisi, Clément Turbelin, Jim Duggan, W. John Edmunds, Charlotte Kjelsø, Ricardo Mexia, Yamir Moreno, Sandro Meloni, Daniela Paolotti, Daniela Perrotta, Edward van Straten, and Ana O. Franco. 2017. Influenzanet: Citizens Among 10 Countries Collaborating to Monitor Influenza in Europe. JMIR Public Health Surveillance 3, 3 (19 Sep 2017), e66. https://doi.org/10.2196/publichealth.7429
[19] Daniel Krug, Rafael Chanin, and Afonso Sales. 2024. Exploring the Pros and Cons of Monolithic Applications versus Microservices. In Proceedings of the 26th International Conference on Enterprise Information Systems - Volume 2: ICEIS. INSTICC, SciTePress, 256–263. https://doi.org/10.5220/0012703300003690
[20] LimeSurvey GmbH. 2025. LimeSurvey Expression Manager Manual. LimeSurvey GmbH. https://www.limesurvey.org/manual/Expression_Manager
[21] Jürgen Maier, Paul Lukowicz, Jennifer Bast, Marco Hirsch, and Martin Lange. 2022. "Mexican Standoff" – Trielle in Berlin: TV-Debatten in der heißen Wahlkampfphase 2021. Zeitschrift für Parlamentsfragen 53 (01 2022), 39–52. https://doi.org/10.5771/0340-1758-2022-1-39
[22] Michael V. McConnell, Anna Shcherbina, Aleksandra M Pavlovic, Julian R. Homburger, Rachel L. Goldfeder, Daryl Waggot, Mildred K. Cho, Mary Rosenberger, William L. Haskell, Jonathan Myers, Mary Ann Champagne, Emmanuel Jean-Marie Mignot, Martin J Landray, Lionel Tarassenko, Robert A. Harrington, Alan C Yeung, and Euan A. Ashley. 2017. Feasibility of Obtaining Measures of Lifestyle From a Smartphone App: The MyHeart Counts Cardiovascular Health Study. JAMA Cardiology 2 (2017), 67–76. https://api.semanticscholar.org/CorpusID:205105699
[23] Scott A McDonald, Cees C van den Wijngaard, Cornelia C H Wielders, Ingrid H M Friesema, Loes Soetens, Daniela Paolotti, Susan van den Hof, and Albert Jan van Hoek. 2021. Risk factors associated with the incidence of self-reported COVID-19-like illness: data from a web-based syndromic surveillance system in the Netherlands. Epidemiol Infect 149 (May 2021), e129. https://doi.org/10.1017/S0950268821001187
[24] Carrie McNeil, Sarah Verlander, Nomita Divi, and Mark Smolinski. 2022. The Landscape of Participatory Surveillance Systems Across the One Health Spectrum: Systematic Review. JMIR Public Health Surveill 8, 8 (5 Aug 2022), e38551. https://doi.org/10.2196/38551
[25] Cristina Menni, Ana M Valdes, Maxim B Freidin, Carole H Sudre, Long H Nguyen, David A Drew, Sajaysurya Ganesh, Thomas Varsavsky, M Jorge Cardoso, Julia S El-Sayed Moustafa, Alessia Visconti, Pirro Hysi, Ruth C E Bowyer, Massimo Mangino, Mario Falchi, Jonathan Wolf, Sebastien Ourselin, Andrew T Chan, Claire J Steves, and Tim D Spector. 2020. Real-time tracking of self-reported symptoms to predict potential COVID-19. Nature Medicine 26, 7 (July 2020), 1037–1040. https://doi.org/10.1038/s41591-020-0916-2
[26] National Institute for Public Health and the Environment (RIVM), Netherlands. 2025. New research portal for people with post-COVID. National Institute for Public Health and the Environment (RIVM), Netherlands. https://www.rivm.nl/en/news/new-research-portal-for-people-with-post-covid
[27] Bashar Nuseibeh and Steve Easterbrook. 2000. Requirements engineering: a roadmap. In Proceedings of the Conference on The Future of Software Engineering (ICSE ’00). Association for Computing Machinery, New York, NY, USA, 35–46. https://doi.org/10.1145/336512.336523
[28] Oyekunle Claudius Oyeniran, Adebunmi Okechukwu Adewusi, Adams Gbolahan Adeleke, Lucy Anthony Akwawa, and Chidimma Francisca Azubuko. 2024. Microservices architecture in cloud-native applications: Design patterns and scalability. Computer Science & IT Research Journal 5, 9 (06 Sep 2024), 2107–2124. https://www.fepbl.com/index.php/csitrj/article/view/1554
[29] Daniela Paolotti, Anne Carnahan, Vittoria Colizza, Ken Eames, John Edmunds, Gabriela Gomes, Carl Koppeschaar, Magnus Rehn, Ronald Smallenburg, Clément Turbelin, Sander Van Noort, and Alessandro Vespignani. 2014. Web-based participatory surveillance of infectious diseases: the Influenzanet participatory surveillance experience. Clinical Microbiology and Infection 20, 1 (2014), 17–21. https://doi.org/10.1111/1469-0691.12477
[30] Oliver Petter, Marco Hirsch, Eshan Mushtaq, Péter Hevesi, and Paul Lukowicz. 2019. Crowdsensing under recent mobile platform background service restrictions: a practical approach. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (London, United Kingdom) (UbiComp/ISWC ’19 Adjunct). Association for Computing Machinery, New York, NY, USA, 793–797. https://doi.org/10.1145/3341162.3344867
[31] Abhishek Pratap, Elias Chaibub Neto, Phil Snyder, Carl Stepnowsky, Noémie Elhadad, Daniel Grant, Matthew H. Mohebbi, Sean Mooney, Christine Suver, John Wilbanks, Lara Mangravite, Patrick J. Heagerty, Pat Areán, and Larsson Omberg. 2020. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. npj Digital Medicine 3, 1 (17 Feb 2020), 21. https://doi.org/10.1038/s41746-020-0224-8
[32] Qualtrics. 2025. Using Logic - Survey Platform. Qualtrics. https://www.qualtrics.com/support/survey-platform/survey-module/using-logic/
[33] Khaled Rjoob, Michela Antonelli, Benjamin Murray, Erika Molteni, Nathan Cheetham, Liane S. Canas, Marc Modat, Joan Capdevila Pujol, Christina Hu, Vicky Bowyer, Jonathan Wolf, Tim D. Spector, Sébastien Ourselin, Alexander Hammers, Emma L. Duncan, Claire J. Steves, and Carole H. Sudre. 2025. Symptom evolution in individuals with ongoing symptomatic COVID-19 and post-COVID-19 syndrome after SARS-CoV-2 vaccination versus influenza vaccination. Journal of Infection 90, 2 (01 Feb 2025). https://doi.org/10.1016/j.jinf.2024.106406
[34] RPTU University Kaiserslautern-Landau - Political Communication Research Group. 2025. Real-time Response to German Election TV Debates (Project Description). RPTU University Kaiserslautern-Landau - Political Communication Research Group. https://ksw.rptu.de/abt/politikwissenschaft/abteilung/politischekommunikation/projekte/tv-duell
[35] Mark S. Smolinski, Adam W. Crawley, Kristin Baltrusaitis, Rumi Chunara, Jennifer M. Olsen, Oktawia Wójcik, Mauricio Santillana, Andre Nguyen, and John S. Brownstein. 2015. Flu Near You: Crowdsourced Symptom Reporting Spanning 2 Influenza Seasons. American Journal of Public Health 105, 10 (2015), 2124–2130. https://doi.org/10.2105/AJPH.2015.302696
[36] Preethi Srinivas, Kunal Bodke, Susan Ofner, Nicole R Keith, Wanzhu Tu, and Daniel O Clark. 2019. Context-sensitive ecological momentary assessment: Application of user-centered design for improving user satisfaction and engagement during self-report. JMIR MHealth UHealth 7, 4 (April 2019), e10894. https://pmc.ncbi.nlm.nih.gov/articles/PMC6468333/
[37] V.M. Sue and L.A. Ritter. 2012. Conducting Online Surveys. SAGE Publications. https://books.google.de/books?id=4_3aX2A2S98C
[38] UMC Utrecht. 2025. Klinisch onderzoek naar post-COVID gestart. UMC Utrecht. https://www.umcutrecht.nl/nl/over-ons/nieuws/details/klinischonderzoek-naar-post-covid-gestart
[39] Lev Velykoivanenko, Kavous Salehzadeh Niksirat, Stefan Teofanovic, Bertil Chapuis, Michelle L. Mazurek, and Kévin Huguenin. 2024. Designing a Data-Driven Survey System: Leveraging Participants’ Online Data to Personalize Surveys. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’24). Association for Computing Machinery, New York, NY, USA, Article 498, 22 pages. https://doi.org/10.1145/3613904.3642572
[40] World Health Organization. 2022. Best practices for the design, implementation, analysis and reporting of participatory surveillance for influenza-like illness (first ed.). World Health Organization, Geneva, Switzerland. https://www.who.int/ publications/i/item/9789240095038
[41] Kevin B. Wright. 2017. Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services. Journal of Computer-Mediated Communication 10, 3 (07 2017), JCMC1034. https://doi.org/10.1111/j.1083-6101.2005.tb00259.x
# 1 Introduction
Interest in the synthesis of classical artificial general intelligence [9, 15] with emerging quantum information processing (QIP) [1, 21, 25, 34] technologies has given rise to questions regarding how the underlying physical substrate upon which intelligent systems are constructed influences their nature and capabilities. Most AGI theories are classical, implicitly assuming a computational and informational model grounded in classical physics [4, 10, 11, 20, 26, 30]. Yet quantum mechanics offers a profoundly different ontology [3, 7, 19, 22, 27] due to phenomena such as superposition, entanglement, non-locality, contextuality [18] and no-cloning [35]. Hamiltonian mechanics [12] offers a powerful and unifying language to describe the dynamics of both classical and quantum systems. In this work, we use Hamiltonian dynamics to model AGI in classical AGI (CAGI) and quantum AGI (QAGI) settings. We demonstrate how key AGI functionalities can be associated with specific Hamiltonian generators. The algebraic properties of these generators (e.g., their commutation relations) affect the capabilities of each respective AGI, influencing their capacity for information processing, logical reasoning, learning, and interaction. By analysing these structures, we aim to contribute to the development of a mathematically rigorous theory of quantum agency.
Background & Related Work. Classical systems evolve on a symplectic phase space manifold $M$ [13]. Observables are represented by smooth functions $f \in C^{\infty}(M)$, while their dynamics are represented variationally via Hamiltonian dynamics using Poisson brackets $\dot{f} = \{ f, H \}$. Quantum systems, by contrast, are described by states in a Hilbert space $\mathcal{H}$, with observables represented by self-adjoint operators. Their dynamics are governed by the Schrödinger equation, and are inherently tied to the non-commutative algebra of these operators. This non-commutativity is accompanied by quantum phenomena such as superposition, entanglement, measurement stochasticity, and contextuality [3, 18]. Adopting an approach that synthesises concepts from quantum information theory [34], quantum circuit formalism [5] and geometry [6, 8, 14, 17, 23], we conceptualise an agent’s cognitive and interactive processes as arising from a set of fundamental Hamiltonian generators. This approach allows for, in certain circumstances, a direct comparison of agent/environment interactions where agents and/or environments may be classical and quantum. By introducing a variational-based approach, we can in principle analyse the structures that govern their respective dynamics and differences.
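To make the contrast concrete, the two evolution laws cited above can be written side by side (standard Hamiltonian and Heisenberg-picture forms, with $\hbar$ kept explicit):

```latex
% Classical: an observable f \in C^\infty(M) evolves under the Poisson bracket
\dot{f} = \{ f, H \}
% Quantum: an observable A on \mathcal{H} evolves under the commutator
% (Heisenberg picture, equivalent to the Schrodinger equation for states)
\dot{A} = \frac{i}{\hbar} \, [ H, A ]
% Dirac's correspondence \{f, g\} \leftrightarrow \tfrac{1}{i\hbar}[F, G]
% highlights the structural difference: e.g. [X, P] = i\hbar\,\mathbb{1} \neq 0,
% whereas the classical position and momentum functions commute pointwise.
```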
Classical and Quantum Information Processing. To compare CAGI and QAGI, we require a means of formulating both within a common theoretical framework. Doing so is not simple: quantum and classical mechanics, while sharing considerable overlap, differ in fundamental ways that affect their comparison. For agents, this problematises common classical assumptions regarding identifiability, certainty of state descriptions, and the distinguishability of an agent from its environment. To formulate CAGI and QAGI in a way that illuminates their differences, we frame both systems in the paradigm of quantum information theory (QIP) [34]. In this formulation, agents and environments are described via informational registers $\mathsf{X}$ (e.g. bits) holding information drawn from a classical alphabet $\Sigma$. The states of registers may be either classical or quantum. We define a Hilbert space $\mathcal{X} = \mathbb{C}^{|\Sigma|}$ with computational basis $\{|s\rangle\}_{s \in \Sigma}$. A quantum state of a register $\mathsf{X}$ (associated with space $\mathcal{X}$) is a density operator $\rho \in \mathcal{D}(\mathcal{X})$, i.e., a positive semidefinite operator with $\operatorname{Tr}(\rho) = 1$. Classical states are precisely those density operators that are diagonal in the distinguished computational basis $\{|s\rangle\}_{s \in \Sigma}$ of $\mathcal{X}$, i.e. $\rho = \sum_{s \in \Sigma} p_s |s\rangle\langle s|$ with $p_s \geq 0$ and $\sum_s p_s = 1$. Changes to (and interactions between) states occur via channels, which are superoperators; they define how quantum and classical states interact.
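The distinction between classical and quantum register states can be made concrete with a few lines of linear algebra. The following sketch (NumPy; the function name `is_classical_state` is ours, not from the text) checks whether a density operator is diagonal in the computational basis:

```python
import numpy as np

def is_classical_state(rho, tol=1e-12):
    """A register state is classical iff it is diagonal in the
    distinguished computational basis (all coherences vanish)."""
    off_diag = rho - np.diag(np.diag(rho))
    return np.allclose(off_diag, 0.0, atol=tol)

# Classical state: a probability distribution on the alphabet {0, 1}.
rho_classical = np.diag([0.7, 0.3])

# Quantum state: the pure superposition |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_quantum = np.outer(plus, plus)

assert is_classical_state(rho_classical)
assert not is_classical_state(rho_quantum)
# Both are valid density operators: positive semidefinite, unit trace.
for rho in (rho_classical, rho_quantum):
    assert np.isclose(np.trace(rho), 1.0)
    assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)
```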
Classical-to-classical (CTC) channels preserve classical states; classical-to-quantum (CTQ) channels encode classical information in quantum states; quantum-to-classical (QTC) channels extract classical information from quantum states, decohering them in the process; and quantum-to-quantum (QTQ) channels effect coherent (e.g. unitary) transformations between quantum registers. In this framing, both agents and environments are registers, which may be CAGI (classical state sets) or QAGI (quantum state sets). They may interact in ways that are coherent (quantum-preserving) or classical (via CTC or QTC maps). These channels act on the algebra of observables: CTC channels preserve commutative subalgebras, QTQ channels preserve the full non-commutative structure, while CTQ/QTC channels mediate between them. A diagrammatic illustration is set out in Figures 1 and 2 in the Appendix.
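A minimal numerical illustration of this channel taxonomy: the fully dephasing map below acts as a QTC extraction (it keeps the diagonal outcome statistics and destroys coherences) and restricts to a CTC identity on already-classical states. The function name is illustrative, not from the text:

```python
import numpy as np

def qtc_dephase(rho):
    """Fully dephasing QTC map: keep the diagonal (classical outcome
    statistics), destroy coherences: rho -> sum_s |s><s| rho |s><s|."""
    return np.diag(np.diag(rho))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)          # coherent quantum state
rho_out = qtc_dephase(rho)          # classical after extraction

assert np.allclose(rho_out, np.diag([0.5, 0.5]))
# CTC behaviour: an already-classical state passes through unchanged.
assert np.allclose(qtc_dephase(np.diag([0.7, 0.3])), np.diag([0.7, 0.3]))
```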
Classical AGI Hamiltonians. Using a classical mechanical paradigm, CAGI can be conceptualised as a dynamical system evolving in a high-dimensional phase space $M = T^{*}\mathcal{C}$, the cotangent bundle of its configuration space $\mathcal{C}$. The state of the AGI at any time is given by a point $(\mathbf{q}, \mathbf{p}) \in M$, where $\mathbf{q} = (q_1, \dots, q_n)$ are generalized coordinates representing, for instance, memory contents, internal model parameters, or sensor readings, and $\mathbf{p} = (p_1, \ldots, p_n)$ are their conjugate momenta, representing rates of change or dynamic aspects. Observables are typically smooth real-valued functions $f(\mathbf{q}, \mathbf{p})$ on this phase space, though not necessarily so. We model the dynamics of the AGI as governed by a total Hamiltonian $H_C(\mathbf{q}, \mathbf{p})$, a function representing the AGI's total energy or a cost function to be optimized. Evolution is described by Hamilton's equations, which can succinctly be represented in Poisson-bracket form as $\dot{f} = \{f, H_C\}_{PB}$. If $\{f, g\}_{PB} = 0$, the observables $f$ and $g$ are said to commute, implying that they can, in principle, be simultaneously determined with arbitrary precision. Classical logic and computation often implicitly rely on this property: the truth value of one proposition or the state of one register does not inherently interfere with another, distinct one unless explicitly coupled by $H_C$. For a classical AGI, we model the Hamiltonian as decomposable: $H_C = \sum_k H_{C,k}$, where each $H_{C,k}$ represents a functional aspect such as learning (e.g., gradient-descent dynamics [32]), reasoning (e.g., the energy function of a Hopfield network or constraint satisfaction), or interaction.
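As a toy instance of these dynamics, one can integrate Hamilton's equations for a single $(q, p)$ pair with a quadratic $H_C$ and confirm that the energy is (approximately) conserved. This is a sketch under assumed unit parameters, not a model of any specific AGI functional aspect:

```python
import numpy as np

# Minimal sketch: one (q, p) pair with H_C = p^2/(2m) + k q^2/2,
# integrated with a symplectic Euler update so that H_C drifts only
# slightly, as Hamilton's equations require for conservative dynamics.
m, k, dt = 1.0, 1.0, 1e-3
q, p = 1.0, 0.0

def H_C(q, p):
    return p**2 / (2 * m) + k * q**2 / 2

E0 = H_C(q, p)
for _ in range(10_000):
    p -= dt * k * q          # dp/dt = -dH/dq
    q += dt * p / m          # dq/dt = +dH/dp
assert abs(H_C(q, p) - E0) < 1e-3   # energy drift stays small

# {q, p}_PB = 1: canonically conjugate coordinates, so q and p do not
# "commute" in the Poisson-bracket sense, unlike two distinct q's.
```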
The commutativity of these underlying processes, or the variables they act upon, defines the classical computational semantics.
Quantum AGI Hamiltonians. When transitioning to a quantum substrate, the AGI’s state is described by a vector $| \psi \rangle$ in a Hilbert space $\mathcal { H }$ (or a density operator $\rho$ acting on $\mathcal { H }$ ). Observables are represented by self-adjoint operators $A$ acting on $\mathcal { H }$ . The dynamics are governed by the Schrödinger equation $\begin{array} { r } { i \hbar \frac { \mathrm { d } } { \mathrm { d } t } \rho ( t ) = [ H _ { Q } , \rho ( t ) ] } \end{array}$ where $H _ { Q }$ is the quantum Hamiltonian operator and $[ A , B ] = A B - B A$ is the commutator. The key algebraic difference from the classical case lies in the non-commutativity of operators when $[ A , B ] \neq 0$ , giving rise to consequences explored below. For a quantum AGI, the total Hamiltonian $\begin{array} { r } { H _ { Q } = \sum _ { k } H _ { Q , k } } \end{array}$ would similarly consist of generators for different AGI functions. However, these $H _ { Q , k }$ are now operators, and their mutual commutation relations, as well as their commutation with other relevant observables, dictate the AGI’s behavior. For example, if a learning operator $H _ { Q , l e a r n }$ does not commute with a sensing operator $H _ { Q , s e n s }$ representing environmental perception, then the act of learning can be disturbed by observation, and vice-versa, in a way that has no classical parallel. This non-commutative structure underpins quantum phenomena like entanglement and contextuality. More background is set out in the Appendix.
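The operator non-commutativity and its dynamical consequence can be checked directly for a single qubit: the commutator $[X, Z]$ is non-zero, and von Neumann evolution under $H_Q = Z$ preserves populations while rotating coherences. A sketch with $\hbar = 1$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Non-commutativity: [X, Z] = XZ - ZX != 0.
comm = X @ Z - Z @ X
assert not np.allclose(comm, 0)

# von Neumann evolution under H_Q = Z: rho(t) = U rho U^dagger with
# U = exp(-i t Z) (diagonal, so the exponential is elementwise).
t = 0.3
U = np.diag(np.exp(-1j * t * np.diag(Z)))
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = np.outer(plus, plus.conj())
rho_t = U @ rho0 @ U.conj().T

# Populations (diagonal) are conserved; coherences pick up phase e^{-2it}.
assert np.allclose(np.diag(rho_t), np.diag(rho0))
assert np.isclose(rho_t[0, 1], 0.5 * np.exp(-2j * t))
```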
Table 1. Example phase–space coordinates interpreted as measurable AGI features.
# 2 Generator Decomposition Analysis
We now decompose the total AGI Hamiltonian in order to compare CAGI and QAGI acting in various environments. For each generator $H_G$, we contrast $H_G^C$ (classical) and $H_G^Q$ (quantum). Boldface denotes a vector (a multi-degree-of-freedom object), plain italic a single coordinate (unless otherwise indicated). Classical phase space is $T^{*}\mathcal{C}$ with coordinates $(\mathbf{q}, \mathbf{p})$; quantum states live on $\mathcal{H}_A \otimes \mathcal{H}_E$. Pauli operators on logical qubits are $X_k, Y_k, Z_k$. We set $\hbar = 1$. It is useful at this stage to build intuition for what the observables in the classical case may be. Table 1 offers a (non-exhaustive) prospective set of generalised coordinates and conjugate momenta in terms of instrumentable observables that may be used to construct the phase-space coordinates $(\mathbf{q}, \mathbf{p})$. Each row specifies (i) how a CAGI agent logs or senses the variable, (ii) how a QAGI agent measures the corresponding observable, and (iii) the relevant AGI property that can be inferred.
Induction. Induction in our framework represents the process by which an agent updates its internal model based on observed data. In the classical case, this corresponds to parameter optimization via gradient descent on prediction error, which we cast in Hamiltonian form by treating the loss function as a potential energy and introducing momentum terms for parameter dynamics. In CAGI, induction corresponds to minimising error on the statistical manifold parametrized by $f_{\theta}$. To model this process, we use $H_{\mathrm{ind}}$ as the generator quantifying the mismatch between model and data. In QAGI it becomes a relative-entropy distance on state space. Information-geometrically, the classical term measures Fisher length, while the quantum term measures Bures length; the two coincide whenever $\rho_D$ and $\rho_{\theta}$ commute.
Classical form ($H_{\mathrm{ind}}^{C}$). Given data $\mathcal{D} = \{(\mathbf{s}_i, \mathbf{r}_i)\}_{i=1}^{N}$, model $f_{\pmb{\theta}}$, and sample weights $w_i$:
$$
H_{\mathrm{ind}}^{C} = \sum_{i=1}^{N} \frac{w_i}{2} \left\| f_{\pmb{\theta}}(\mathbf{s}_i) - \mathbf{r}_i \right\|_{2}^{2} + \sum_{\ell=1}^{|\pmb{\theta}|} \frac{p_{\theta_\ell}^{2}}{2 m_\ell}.
$$
where $f_{\pmb{\theta}}$ is a parametric predictor with weights $\pmb{\theta}$ and sample weights $w_i$; $p_{\theta_\ell}$ is the momentum conjugate to $\theta_\ell$, with effective mass $m_\ell$ (estimable by finite differences on a log of $\theta_\ell(t)$, akin to a momentum term). This can model gradient-descent dynamics for learning parameters in AIXI-like agents [15, 33] or other inductive systems [24, 29].
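For concreteness, $H_{\mathrm{ind}}^{C}$ can be evaluated numerically for a linear predictor $f_{\theta}(\mathbf{s}) = \theta \cdot \mathbf{s}$ with unit weights and masses. All names and the toy data below are illustrative assumptions, not from the text:

```python
import numpy as np

def H_ind_C(theta, p_theta, S, R, w=None, m=None):
    """Classical induction Hamiltonian: weighted squared prediction
    error (potential) plus a kinetic term in the conjugate momenta."""
    w = np.ones(len(S)) if w is None else w
    m = np.ones(len(theta)) if m is None else m
    err = S @ theta - R                       # prediction residuals
    potential = 0.5 * np.sum(w * err**2)      # weighted squared error
    kinetic = 0.5 * np.sum(p_theta**2 / m)    # momentum term
    return potential + kinetic

S = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # inputs s_i
R = np.array([1.0, 2.0, 3.0])                        # targets r_i
theta_star = np.array([1.0, 2.0])                    # exact fit here

# Zero momentum and a perfect fit give zero "energy"; any mismatch
# or motion in parameter space raises it.
assert np.isclose(H_ind_C(theta_star, np.zeros(2), S, R), 0.0)
assert H_ind_C(np.zeros(2), np.zeros(2), S, R) > 0.0
```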
Quantum form ($H_{\mathrm{ind}}^{Q}$). The quantum analogue replaces the classical prediction error with the quantum relative entropy between the empirical data state $\rho_D$ and the agent's predictive state $\rho_{\pmb{\theta}}$, capturing how quantum learning must respect fundamental trade-offs imposed by non-commuting observables. Using the relative entropy $S(\rho_1 \| \rho_2) = \mathrm{Tr}[\rho_1 (\ln \rho_1 - \ln \rho_2)]$:
$$
H _ { \mathrm { i n d } } ^ { Q } = k _ { \mathrm { B } } T S ( \rho _ { D } \| \rho _ { \pmb { \theta } } ) .
$$
where $k_{\mathrm{B}} T$ rescales the quantum relative entropy into energetic units. Generally, $[H_{\mathrm{ind}}^{Q}, \rho_D] \neq 0$ when $\rho_D$ and $\rho_{\pmb{\theta}}$ do not commute. This implies that the act of learning (reducing relative entropy) can disturb the evidence state $\rho_D$, in contrast with classical Solomonoff induction, where the data sequence is fixed [16]. Note that $H_{\mathrm{ind}}^{C}$ serves as a variational principle for parameter evolution, not energy conservation: energy here reflects a computational resource (cost) that the learning dynamics minimise through dissipative gradient flow, rather than a conserved quantity.
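The quantum relative entropy in $H_{\mathrm{ind}}^{Q}$ can be computed by eigendecomposition; the sketch below also confirms the claim above that the quantum quantity reduces to the classical (KL) form when $\rho_D$ and $\rho_{\theta}$ commute. It assumes full-rank states so the matrix logarithms exist:

```python
import numpy as np

def rel_entropy(rho1, rho2):
    """Umegaki relative entropy S(rho1||rho2) = Tr[rho1 (ln rho1 - ln rho2)],
    via eigendecomposition (assumes full-rank inputs)."""
    def logm(rho):
        vals, vecs = np.linalg.eigh(rho)
        return (vecs * np.log(vals)) @ vecs.conj().T
    return np.real(np.trace(rho1 @ (logm(rho1) - logm(rho2))))

# Commuting (diagonal) case: reduces to the classical KL divergence.
p, q = np.array([0.7, 0.3]), np.array([0.5, 0.5])
kl = np.sum(p * np.log(p / q))
assert np.isclose(rel_entropy(np.diag(p), np.diag(q)), kl)

# Non-commuting case: S > 0, and rho_D fails to commute with rho_theta,
# so learning that reduces S can disturb the evidence state.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_D = 0.9 * np.outer(plus, plus) + 0.1 * np.eye(2) / 2
rho_theta = np.diag([0.6, 0.4])
assert rel_entropy(rho_D, rho_theta) > 0
assert not np.allclose(rho_D @ rho_theta - rho_theta @ rho_D, 0)
```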
Reasoning—Logical Consistency ( $H _ { \mathrm { r e a s } }$ ). Reasoning can be modelled via $H _ { \mathrm { r e a s } }$ , a penalty term that encodes consistency with logical rules. Logical clauses are encoded as energy penalties where violations of logical constraints increase the system’s energy, naturally driving the agent toward logically consistent states.
The ground subspace of $H_{\mathrm{reas}}$ corresponds to assignments satisfying all constraints. In classical systems, logical propositions can be evaluated independently and combined without interference, corresponding to the commutative nature of Boolean operations. In quantum settings, non-commuting projectors in quantum logic or semantics mean that, owing to contextuality, the truth of one clause can depend on which other clauses are measured first. Denote by $\mathcal{C}$ the agent's configuration manifold: the set of all instantaneous values of its state variables (weights, memory cells, sensor registers), with $T^{*}\mathcal{C}$ its cotangent bundle. Here $\varphi_{\alpha}$ is a Boolean predicate evaluating to 1 when clause $\alpha$ is satisfied in the current classical state, and $\mu_{\alpha}$ is a penalty weighting for inconsistency with clause $\alpha$. These indicator functions on phase space encode logical constraints: for instance, $\varphi_{\alpha}(\mathbf{q}, \mathbf{p}) = 1$ when the agent's state satisfies clause $\alpha$ of its reasoning system.
Classical form ($H_{\mathrm{reas}}^{C}$). Boolean clauses $\varphi_{\alpha} : T^{*}\mathcal{C} \to \{0, 1\}$, penalties $\mu_{\alpha} > 0$:
$$
H_{\mathrm{reas}}^{C} = \sum_{\alpha=1}^{M} \mu_{\alpha} \bigl( 1 - \varphi_{\alpha}(\mathbf{q}, \mathbf{p}) \bigr).
$$
Classical logical propositions typically commute: $\{\varphi_{\alpha}, \varphi_{\beta}\}_{PB} = 0$ if they depend on distinct configuration variables or are otherwise compatible. The penalty term $\mu_{\alpha}(1 - \varphi_{\alpha}(\mathbf{q}, \mathbf{p}))$ enforces the constraint: the energy increases unless clause $\alpha$ is satisfied, effectively restricting low-energy dynamics to logically consistent regions of phase space.
Quantum form ($H_{\mathrm{reas}}^{Q}$). In quantum mechanics, logical propositions correspond to projection operators $\varPi_{\alpha}$ that project onto the subspaces where proposition $\alpha$ is 'true'. Unlike classical Boolean functions, these projectors may not commute, leading to fundamental differences in logical inference. We express this by lifting clauses to projectors $\varPi_{\alpha}$ on $\mathcal{H}_A$:
$$
H_{\mathrm{reas}}^{Q} = \sum_{\alpha=1}^{M} \mu_{\alpha} \left( \mathbb{I} - \varPi_{\alpha} \right).
$$
While conventional quantum computation uses unitary sequences $U_a \cdots U_k$, the $(\mathbb{I} - \varPi_{\alpha})$ term acts as a penalty enforcing logical constraints during evolution. Contextuality results [18] imply that a QAGI may be unable to assign simultaneous, context-independent truth values to all propositions, which may fundamentally alter the nature of logical inference relative to classical CAGI rule-based systems [9]. The Hamiltonian is constructed so that its ground states are exactly those classical configurations (or quantum subspaces) that satisfy all logical clauses, because every added term evaluates to zero there. For CAGI the penalties commute, so minimising $H_{\mathrm{reas}}^{C}$ is order-independent and reproduces ordinary Boolean logic. For QAGI, attempting to simultaneously minimise two incompatible projectors may lead to contextual trade-offs. Contextuality means that the truth value of a proposition can depend on which other propositions are measured first, a phenomenon impossible in classical logic but fundamental to quantum mechanics when dealing with non-commuting observables; it may require a QAGI to make strategic choices about which logical relationships to evaluate first in complex reasoning chains.
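Order dependence under non-commuting projectors is easy to exhibit on one qubit: with projectors onto the $+1$ eigenspaces of $Z$ and $X$, sequential 'truth checks' in different orders yield different post-measurement states. A minimal sketch:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P_z = (np.eye(2) + Z) / 2        # proposition "Z = +1"
P_x = (np.eye(2) + X) / 2        # proposition "X = +1"

# The projectors do not commute, so neither do the penalty terms
# mu (I - P): the order of logical evaluation matters.
assert not np.allclose(P_z @ P_x, P_x @ P_z)

# Sequential 'truth checks' in different orders give different
# (unnormalised) post-measurement states: contextual order dependence.
psi = np.array([1, 0], dtype=complex)      # Z = +1 eigenstate
a = P_x @ P_z @ psi                        # check Z first, then X
b = P_z @ P_x @ psi                        # check X first, then Z
assert not np.allclose(a, b)
```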
Recursion—Self-Reference ($H_{\mathrm{rec}}$). Recursive computation and self-reference are fundamental to advanced AI systems, enabling everything from hierarchical reasoning to self-modification capabilities. Recursion can be represented via $H_{\mathrm{rec}}$, which models recursive computation and self-reference by tracking the agent's call-stack depth: the number of nested function calls or recursive reasoning steps currently active. While actual call stacks are discrete, we approximate stack depth as a continuous coordinate in order to leverage Hamiltonian mechanics.
Classical form ($H_{\mathrm{rec}}^{C}$). The classical form $H_{\mathrm{rec}}^{C}$ models recursion as a mechanical system with three components: the current recursion depth (number of active nested calls) $q_{\mathrm{stk}}$; its conjugate momentum $p_{\mathrm{stk}}$, representing the rate of depth change; and a potential $V_{\mathrm{stk}}(q_{\mathrm{stk}}) = \kappa_{\mathrm{s}} q_{\mathrm{stk}}^{2}/2$ that energetically penalises deep recursion (with $m_{\mathrm{s}}$ a mass parameter that controls the rate of recursive calls):
$$
H _ { \mathrm { r e c } } ^ { C } = \frac { p _ { \mathrm { s t k } } ^ { 2 } } { 2 m _ { \mathrm { s } } } + V _ { \mathrm { s t k } } ( q _ { \mathrm { s t k } } ) .
$$
Here $q_{\mathrm{stk}}$ is the (continuously approximated) call-stack depth, $p_{\mathrm{stk}}$ its conjugate momentum, $m_{\mathrm{s}}$ a parameter controlling how quickly depth can change, and $\kappa_{\mathrm{s}}$ the spring constant of the harmonic potential $V_{\mathrm{stk}}(q) = \frac{1}{2}\kappa_{\mathrm{s}} q^{2}$ that energetically penalises deep recursion. The classical Hamiltonian therefore aims to keep a CAGI's stack from growing without bound, while the quantum version $H_{\mathrm{rec}}^{Q}$ comprises clock states $|t\rangle$, data gates $U_t$, and the halt projector $\varPi_{\mathrm{halt}}$. The suitability of this form of Hamiltonian is of course open to debate, but we select it to illustrate the idea that deeper recursion requires more memory and processing resources (represented by higher potential energy), while the momentum term represents the 'inertia' of ongoing recursive computations, which resist sudden changes in depth. This models classical sequential processing, where the call-stack state is definite. Gödel machines [28, 31] involve self-inspection of classical code.
Quantum form ($H_{\mathrm{rec}}^{Q}$). The quantum version fundamentally differs by representing the entire computational history in superposition rather than tracking a single definite stack depth. For gates $\{U_t\}_{t=0}^{L-1}$ on data space $\mathcal{H}_d$ and clock space $\mathcal{H}_c$ (basis $|t\rangle$, $t = 0, \dots, L$), the quantum Hamiltonian is:
$$
\begin{array} { l } { \displaystyle H _ { \mathrm { r e c } } ^ { Q } = \sum _ { t = 0 } ^ { L - 1 } \bigl ( | t + 1 \rangle \langle t | _ { \mathrm { c l o c k } } \otimes U _ { t } + \mathrm { H . c . } \bigr ) } \\ { \displaystyle \qquad + \left. | 0 \rangle \langle 0 | _ { \mathrm { c l o c k } } \otimes ( \mathbb { I } - | \psi _ { 0 } \rangle \langle \psi _ { 0 } | _ { \mathrm { d a t a } } ) + | L \rangle \langle L | _ { \mathrm { c l o c k } } \otimes ( \mathbb { I } - { I _ { \mathrm { h a l t } } } ) . \right. } \end{array}
$$
Here, $|t\rangle$ are discrete computational time steps, $U_t$ unitary operations, and H.c. the Hermitian conjugate. The ground state, or history state, $|\varPsi_{\mathrm{hist}}\rangle \propto \sum_{t=0}^{L} |t\rangle \otimes \bigl( \prod_{s=0}^{t-1} U_s \bigr) |\psi_0\rangle$, encodes the computation in superposition. Measurement of the current step in the process would collapse this history. This illustrates the difficulties of self-modification and self-inspection for QAGI (although these might be addressed with available ancillas).
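A small history state can be constructed explicitly. The sketch below takes $L = 2$ illustrative gates (a Hadamard and an X, our choice, not from the text) on one data qubit, builds $|\varPsi_{\mathrm{hist}}\rangle$, and confirms that conditioning on a clock value $|t\rangle$ collapses the data register to the partial computation at step $t$:

```python
import numpy as np

Had = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # U_0
X = np.array([[0, 1], [1, 0]], dtype=complex)                  # U_1
psi0 = np.array([1, 0], dtype=complex)

# Branches (prod_{s<t} U_s)|psi_0> for t = 0, 1, 2.
branches = [psi0, Had @ psi0, X @ Had @ psi0]
L = len(branches) - 1
# History state on clock (x) data, clock-major ordering.
hist = np.concatenate(branches) / np.sqrt(L + 1)
assert np.isclose(np.linalg.norm(hist), 1.0)

# Measuring the clock in state |t> collapses the data register to the
# partial computation: inspecting the step destroys the superposition.
for t, branch in enumerate(branches):
    post = hist[2 * t: 2 * t + 2]
    post = post / np.linalg.norm(post)
    assert np.allclose(post, branch / np.linalg.norm(branch))
```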
Learning and Parametric Self-Modification ( $H _ { \mathrm { l e a r n } }$ ). Learning in AGI systems involves continuous adaptation of internal parameters based on experience. While our earlier induction generator focused on prediction error minimisation, this learning generator models the broader dynamics of parametric self-modification during training and adaptation processes. Let $\pmb \theta = ( \theta _ { 1 } , \dots , \theta _ { d } )$ be trainable weights, $p _ { \theta _ { \ell } }$ their conjugate momenta, $m _ { \ell } \ > \ 0$ effective masses (larger masses correspond to parameters that change slowly, smaller masses allow rapid adaptation) and $\mathcal { L } ( \pmb { \theta } ; \mathcal { D } )$ a differentiable loss (e.g. mean-squared error on the data set $\mathcal { D }$ ). The learning Hamiltonian ( $\lambda > 0$ sets the loss-to-energy scale) is:
$$
H_{\mathrm{learn}}^{C}(\pmb{\theta}, \mathbf{p}) = \sum_{\ell=1}^{d} \frac{p_{\theta_\ell}^{2}}{2 m_\ell} + \lambda \mathcal{L}(\pmb{\theta}; \mathcal{D}).
$$
Quantum form ($H_{\mathrm{learn}}^{Q}$). The quantum formulation requires encoding continuous parameters into discrete qubit states, where each parameter $\theta_\ell$ is represented by the expectation value $\langle Z_\ell \rangle$ of a Pauli-Z operator, with $X_\ell$ and $Z_\ell$ the standard Pauli matrices for qubit $\ell$. The $Z_\ell Z_{\ell'}$ terms realise Ising couplings $J_{\ell\ell'}$ that embed the classical cost landscape. The Ising model, borrowed from statistical physics, uses $Z_\ell Z_{\ell'}$ interactions to encode correlations between parameters, while the transverse fields $g_\ell X_\ell$ create quantum superposition that enables exploration of multiple parameter configurations simultaneously:
$$
H _ { \mathrm { l e a r n } } ^ { Q } = - \sum _ { \ell < \ell ^ { \prime } } J _ { \ell \ell ^ { \prime } } Z _ { \ell } Z _ { \ell ^ { \prime } } - \sum _ { \ell } g _ { \ell } X _ { \ell } .
$$
This Hamiltonian embeds the classical loss landscape into quantum spin interactions: the ground state of the Ising terms $-J_{\ell\ell'} Z_\ell Z_{\ell'}$ corresponds to optimal parameter configurations, while the couplings $J_{\ell\ell'}$ encode the curvature structure of the loss function. The non-commutation $[X_\ell, Z_\ell Z_{\ell'}] \neq 0$ enables tunnelling through high, narrow barriers, which might accelerate optimisation.
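The non-commutation claim can be verified for a two-qubit instance of $H_{\mathrm{learn}}^{Q}$ with illustrative couplings $J$ and $g$: the transverse field fails to commute with the Ising term, and the resulting ground state is a superposition over classical parameter configurations rather than a single bit string:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Two-qubit H_learn^Q = -J Z1 Z2 - g (X1 + X2), couplings illustrative.
J, g = 1.0, 0.5
ZZ = kron(Z, Z)
Xs = kron(X, I2) + kron(I2, X)
H_learn = -J * ZZ - g * Xs

# The transverse field fails to commute with the Ising cost term ...
assert not np.allclose(Xs @ ZZ - ZZ @ Xs, 0)

# ... so the ground state superposes several classical configurations,
# and its energy lies below the purely classical minimum -J.
vals, vecs = np.linalg.eigh(H_learn)
gs = vecs[:, 0]
assert np.count_nonzero(np.abs(gs) > 1e-8) > 1
assert vals[0] < -J
```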
Sensing and Environmental Interaction ($H_{\mathrm{sens}}, H_{\mathrm{env}}$). Sensing the environment can be modelled via $H_{\mathrm{sens}}$, which describes transfers of information from the environment register $\mathsf{E}$ into an agent sensor register $\mathsf{S}$. In a classical implementation, the transfer is a CTC channel that leaves $\mathsf{E}$ untouched. In the quantum implementation, the same coupling entangles a pointer qubit with $\mathsf{E}$, so reading the pointer realises a QTC channel whose back-action decoheres $\rho_{\mathsf{E}}$. The pointer qubit $m$ is an ancilla that entangles with the environment observable $O_{\mathsf{E}}$; its projective readout suppresses the terms of $\rho_{\mathsf{E}}$ that are off-diagonal in the eigenbasis of $O_{\mathsf{E}}$.
Classical form. Let $q _ { \mathrm { s e n s } } \in \mathbb { R }$ be the sensor coordinate inside the agent’s phase space, $q _ { \mathrm { e n v } } \in \mathbb { R }$ the quantity to be read from the environment, $P$ the conjugate momentum of $q _ { \mathrm { s e n s } }$ , and $\kappa > 0$ a tunable coupling strength while $\mathbf { F } ( \mathbf { q } _ { \mathrm { e n v } } , \mathbf { p } _ { \mathrm { e n v } } )$ denotes the generalised force $\nabla _ { \mathbf { q } } H _ { \mathrm { E } } ^ { \mathrm { b a r e } }$ . The measurement Hamiltonian is
$$
H _ { \mathrm { s e n s } } ^ { C } = \kappa \ : P \ : \delta \bigl ( q _ { \mathrm { s e n s } } - q _ { \mathrm { e n v } } \bigr ) , \qquad H _ { \mathrm { e n v } } ^ { C } = H _ { \mathrm { E } } ^ { \mathrm { b a r e } } - \mathbf { u } ( t ) \cdot \mathbf { F } \bigl ( \mathbf { q } _ { \mathrm { e n v } } , \mathbf { p } _ { \mathrm { e n v } } \bigr ) .
$$
$H_{\mathrm{sens}}^{C}$ vanishes exactly when the sensor value matches the environmental value: zero energy is expended for a perfect copy, and the Poisson bracket $\{q_{\mathrm{env}}, H_{\mathrm{sens}}^{C}\} = 0$ shows that $\mathsf{E}$ is not disturbed. The drive term $\mathbf{u}(t) \cdot \mathbf{F}$ (with control field $\mathbf{u}$ and generalised force $\mathbf{F}$) keeps the environment open and classically steerable.
Quantum Hamiltonian. Write $| 0 \rangle _ { m } , | 1 \rangle _ { m }$ for the orthogonal pointer states in the one-qubit sensor register $\mathcal { H } _ { m }$ , let $O _ { \mathsf { E } }$ be a Hermitian observable on the environment Hilbert space $\mathcal { H } _ { E }$ , and keep the same real constant $\kappa$ . With $\mathbf { A } _ { \mathsf { E } }$ a vector of Hermitian operators and $\mathbb { I } _ { A }$ the identity on the agent’s internal Hilbert space:
$$
H _ { \mathrm { s e n s } } ^ { Q } = \kappa \big ( | 1 \rangle \langle 0 | _ { m } \otimes { \cal O } _ { \mathsf E } + \mathrm { H . c . } \big ) , \qquad H _ { \mathrm { e n v } } ^ { Q } = H _ { \mathrm { E } } ^ { \mathrm { b a r e } } - { \mathbf u } ( t ) \cdot \big ( \mathbf { A } _ { \mathsf E } \otimes \mathbb { I } _ { \boldsymbol A } + \mathrm { H . c . } \big ) .
$$
The operator $|1\rangle\langle 0|_m$ does not commute with its Hermitian conjugate, so the raising and lowering parts of $H_{\mathrm{sens}}^{Q}$ fail to commute. As a consequence, a projective read-out of the pointer implements a QTC channel whose Lindblad generator $\mathcal{L}_{\mathrm{meas}}(\rho) = -i[H_{\mathrm{sens}}^{Q}, \rho] + \gamma \left( Z_m \rho Z_m - \rho \right)$ suppresses the off-diagonal terms of $\rho_{\mathsf{E}}$ at rate $\gamma \sim \kappa^2$; here $Z_m$ is the Pauli-Z operator on qubit $m$. The operator $|1\rangle\langle 0|_m \otimes O_{\mathsf{E}}$ creates entanglement: when the environment observable $O_{\mathsf{E}}$ takes a particular value, it correlates with flipping the pointer from $|0\rangle_m$ to $|1\rangle_m$, encoding environmental information in pointer-environment correlations. If $[H_{\mathrm{sens}}^{Q}, H_{\mathrm{learn}}^{Q}] \neq 0$, the same measurement inevitably perturbs the learning dynamics, and the resulting agent-environment entanglement can violate Bell inequalities [2, 36], an effect absent in the commuting classical model. This potentially enables quantum sensing advantages, but also creates measurement-learning trade-offs that are impossible in classical AGI.
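The decoherence claim can be checked by integrating just the dephasing part of $\mathcal{L}_{\mathrm{meas}}$ (the unitary part is omitted in this sketch, and $\gamma$ is an assumed rate): populations are preserved while coherences decay as $e^{-2\gamma t}$:

```python
import numpy as np

# Dephasing part of the measurement Lindbladian on the pointer qubit:
# L(rho) = gamma (Z rho Z - rho) suppresses off-diagonals at rate 2*gamma.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
gamma, dt, steps = 0.5, 1e-3, 2000

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
for _ in range(steps):          # forward-Euler integration of L(rho)
    rho = rho + dt * gamma * (Z @ rho @ Z - rho)

t = dt * steps
assert np.allclose(np.diag(rho), [0.5, 0.5])            # populations kept
assert np.isclose(rho[0, 1].real, 0.5 * np.exp(-2 * gamma * t), atol=1e-3)
```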
Example Comparison Hamiltonian. To illustrate our approach, we consider the following toy example. Assume the environment is described by a quantum register. A CAGI agent must encode and decode quantum data via CTQ/QTC interfaces, while a fully quantum QAGI agent can also exploit coherent QTQ interactions. We compose a total Hamiltonian from three subsidiary Hamiltonians: $H_{\mathrm{tot}} = H_{\mathrm{sens}} + H_{\mathrm{reas}} + H_{\mathrm{learn}}$.
QAGI. The QAGI register consists of a two-qubit policy $\mathcal{H}_{A_1} \otimes \mathcal{H}_{A_2}$, a pointer qubit $\mathcal{H}_m$, and the environment qubit $\mathcal{H}_E$. Setting $\hbar = 1$,
$$
H _ { Q } = \underbrace { \kappa \big ( | 1 \rangle \langle 0 | _ { m } \otimes Z _ { E } + \mathrm { H . c . } \big ) } _ { \mathrm { s e n s i n g ~ c h a n n e l } } + \underbrace { \mu \big ( \mathbb { I } - \varPi _ { \alpha } \big ) } _ { \mathrm { r e a s o n i n g ~ e r r o r ~ p e n a l t y } } + \underbrace { g X _ { A _ { 1 } } + J Z _ { A _ { 1 } } Z _ { A _ { 2 } } } _ { \mathrm { Q T Q ~ l e a r n i n g ~ b l o c k } } ,
$$
Here $\begin{array} { r } { \quad \varPi _ { \alpha } = \frac { 1 } { 2 } ( \mathbb { I } + Z _ { m } ) \otimes \frac { 1 } { 2 } ( \mathbb { I } + Z _ { A _ { 1 } } ) } \end{array}$ enforces $Z _ { m } , Z _ { A _ { 1 } } = + 1$ . The first term realises a $Q T C$ measurement: it entangles the pointer with $Z _ { E }$ , and a subsequent pointer read-out transfers the result to a classical log while decohering $\rho _ { E }$ . The $\mu$ -term operates purely within the quantum formalism: it conditions the system’s energy on a projector that fails to commute with the QTC coupling, hence logical consistency is contextual. The Ising transverse field block is also QTQ; its noncommutation lets the policy search landscape be traversed through tunnelling, visible as peaks in the quantum Fisher information $F _ { \theta } ( t ) = \mathrm { T r } [ L _ { \theta } ^ { 2 } \rho _ { t } ]$ .
CAGI. A CAGI possesses only classical registers, so it must encode and decode quantum data. Let $q_E \,(= \pm 1)$ be the $Z_E$ eigenvalue obtained by an external QTC reader, $q_m$ the classical sensor bit, $\theta \in \mathbb{R}$ a weight, and $p_\theta$, $m$ as before. Define the action bit $q_A = \mathbb{1}_{\{\theta > 0\}}$. The Hamiltonian is:
$$
H_C = \underbrace{\kappa \, \delta(q_m - q_E)}_{\text{CTC copy}} + \underbrace{\mu \, \delta\bigl[(1 - q_m)\, q_A - 1\bigr]}_{\text{CTC logic}} + \frac{p_\theta^{2}}{2m} + \lambda\,|\theta| + \eta(t)\underbrace{\bigl[\, q_A Z_E \,\bigr]}_{\text{CTQ actuator}}.
$$
The first two deltas are CTC: they move only classical bits and therefore commute with everything else. The last term is a time-dependent CTQ channel: the classical action bit $q_A$ is written into the quantum environment operator $Z_E$ via a control field $\eta(t)$ (e.g. a laser pulse that rotates the obstacle qubit). There is no QTQ term because the agent cannot maintain coherence; sensing happens via an external QTC device that prepares $q_E$. The fundamental distinction between CAGI and QAGI lies in commutation: the CAGI terms commute completely, while the QAGI terms do not, creating qualitatively different agent-environment dynamics. The CTC copy term $\kappa\delta(q_m - q_E)$ leaves the quantum environment untouched, the Boolean penalties commute so their evaluation order is immaterial, and the weight trajectory $(\theta(t), p_\theta(t))$ follows a smooth, deterministic hill-climb. By contrast, for QAGI the measurement, reasoning, and learning blocks fail to commute. A pointer read-out (QTC) entangles and then decoheres the obstacle qubit, injecting energy of order $\kappa$ and shifting the logical penalty because $[H_{\mathrm{sens}}^{Q}, H_{\mathrm{reas}}^{Q}] \neq 0$; truth values therefore become formally context-dependent.
Conclusion and Discussion. We have proposed a generator-based Hamiltonian framework in which the total dynamics of an agent are written as a sum of subsidiary Hamiltonians. For each generator we provided (i) a classical phase-space realisation $H_G^C$ acting on a commutative algebra of observables and (ii) a quantum operator realisation $H_G^Q$ acting on a non-commutative von Neumann algebra. The simplified examples above demonstrate how our framework captures both the computational aspects (via Hamiltonians) and the information-theoretic aspects (via channel types) in a unified description. More complex scenarios may involve coupling and correlations for both CAGI and QAGI. Potential future research avenues include: (i) implementing small-scale agent-in-the-loop experiments on NISQ hardware; (ii) extending the framework to many-body and open environments; and (iii) embedding alignment and safety constraints as additional commuting or non-commuting generators.
# References
1. Aaronson, S.: Quantum computing since Democritus. Cambridge University Press (2013)
2. Bell, J.S.: On the Einstein Podolsky Rosen Paradox. Physics Physique Fizika 1, 195–200 (1964)
3. Bell, J.: Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, Cambridge, 2nd edn. (2004)
4. Bennett, M.T., Maruyama, Y.: The artificial scientist: Logicist, emergentist, and universalist approaches to artificial general intelligence. In: Artificial General Intelligence. Springer (2022)
5. Chiribella, G., D’Ariano, G.M., Perinotti, P.: Quantum circuit architecture. Physical review letters 101(6), 060401 (2008)
6. Chruściński, D., Jamiołkowski, A.: Geometric phases in classical and quantum mechanics. Springer (2004)
7. Feynman, R.P.: Simulating physics with computers. International Journal of Theoretical Physics 21(6), 467–488 (Jun 1982)
8. Frankel, T.: The Geometry of Physics: An Introduction. Cambridge University Press (2011)
9. Goertzel, B.: Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence 5(1), 1 (2014)
10. Goertzel, B.: The general theory of general intelligence: A pragmatic patternist perspective (2021)
11. Goertzel, B., Bogdanov, V., Duncan, M., Duong, D., Goertzel, Z., Horlings, J., Ikle’, M., Meredith, L.G., Potapov, A., de Senna, A.L., Suarez, H.S.A., Vandervorst, A., Werko, R.: Opencog hyperon: A framework for agi at the human level and beyond (2023)
12. Goldstein, H.: Classical Mechanics. Pearson Education (Sep 2002)
13. Hall, B.C.: Quantum theory for mathematicians. Springer (2013)
14. Helgason, S.: Differential Geometry, Lie Groups, and Symmetric Spaces. ISSN, Elsevier Science (1979)
15. Hutter, M.: Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media (2004)
16. Hutter, M., Quarel, D., Catt, E.: An Introduction to Universal Artificial Intelligence. CRC Press (2024)
17. Knapp, A.W.: Lie groups beyond an introduction, vol. 140. Springer (1996)
18. Kochen, S., Specker, E.P.: The problem of hidden variables in quantum mechanics. Journal of Mathematics and Mechanics 17(1), 59–87 (1967)
19. Manin, I.I.: Mathematics as metaphor: Selected essays of Yuri I. Manin, vol. 20. American Mathematical Soc. (2007)
20. McMillen, P., Levin, M.: Collective intelligence: A unifying concept for integrating biology across scales and substrates. Communications Biology 7(1), 378 (Mar 2024)
21. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press, 10th anniversary edn. (2010)
22. Özkural, E.: What is it like to be a brain simulation? In: International Conference on Artificial General Intelligence. pp. 232–241. Springer (2012)
23. Perrier, E.: Quantum geometric machine learning. arXiv preprint arXiv:2409.04955 (2024)
24. Potapov, A., Rodionov, S.: Making universal induction efficient by specialization. In: International Conference on Artificial General Intelligence. pp. 133–142. Springer (2014)
25. Preskill, J.: Quantum computing 40 years later. arXiv:2106.10522 (2021)
26. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Pearson, 4th edn. (2020)
27. Sakurai, J.J., Napolitano, J.: Modern quantum mechanics. Cambridge University Press (2020)
28. Schmidhuber, J.: Gödel machines: Self-referential optimal universal problem solvers. arXiv preprint cs/0309048 (2003)
29. Solomonoff, R.J.: A formal theory of inductive inference, Parts I and II. Information and Control 7(1–2), 1–22, 224–254 (1964)
30. Solé, R., Moses, M., Forrest, S.: Liquid brains, solid brains. Philosophical Transactions of the Royal Society B: Biological Sciences 374(1774), 20190040 (2019)
31. Steunebrink, B.R., Schmidhuber, J.: A family of gödel machine implementations. In: Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3-6, 2011. Proceedings 4. pp. 275–280. Springer (2011)
32. Sunehag, P., Hutter, M.: Optimistic aixi. In: Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8-11, 2012. Proceedings 5. pp. 312–321. Springer (2012)
33. Veness, J., Sunehag, P., Hutter, M.: On ensemble techniques for aixi approximation. In: International Conference on Artificial General Intelligence. pp. 341–351. Springer (2012)
34. Watrous, J.: The Theory of Quantum Information. Cambridge University Press (2018)
35. Wootters, W.K., Zurek, W.H.: A single quantum cannot be cloned. Nature 299, 802–803 (Oct 1982)
36. Zurek, W.H.: Decoherence, einselection, and the quantum origins of the classical. Reviews of Modern Physics 75(3), 715–775 (2003)
# Technical Appendices
# A Diagrams
Fig. 1. Classical agent (CAGI) interacting via CTC, CTQ or QTC maps with classical $E _ { C }$ or quantum $E _ { Q }$ environments
Fig. 2. Quantum agent (QAGI) interacting via QTC, CTQ or QTQ maps.
# B Hamiltonian Dynamics
Classical Evolution. Evolution is described by Hamilton’s equations:
$$
\dot { q } _ { i } = \frac { \partial H _ { C } } { \partial p _ { i } } , \quad \dot { p } _ { i } = - \frac { \partial H _ { C } } { \partial q _ { i } } .
$$
This can be expressed more abstractly using the Poisson bracket. For two observables $f , g$ , their Poisson bracket is:
$$
\{ f , g \} _ { P B } = \sum _ { i = 1 } ^ { n } \left( \frac { \partial f } { \partial q _ { i } } \frac { \partial g } { \partial p _ { i } } - \frac { \partial f } { \partial p _ { i } } \frac { \partial g } { \partial q _ { i } } \right) .
$$
The time evolution of any observable $f$ is then given by $\dot{f} = \{ f, H_{C} \}_{PB}$. When $\{ f, g \}_{\mathrm{PB}} = 0$, the observables $f$ and $g$ commute and their values can, in principle, be fixed simultaneously with arbitrary accuracy. Classical logic and computation implicitly assume this independence: the truth of one proposition (or the content of one register) leaves another untouched unless an explicit coupling term in $H_{C}$ is present. Consequently, for a classical AGI we write the control Hamiltonian as a direct sum
$$
H _ { C } \ = \ \sum _ { k } H _ { C , k } ,
$$
where each term $H_{C,k}$ drives a distinct functional block—learning (e.g. gradient-descent updates [32]), reasoning (e.g. a Hopfield-network energy or constraint-satisfaction term), or sensorimotor exchange. The mutual commutativity of these blocks, and of the variables they address, underpins the semantics of classical computation.
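To make the role of vanishing Poisson brackets concrete, here is a small pure-Python sketch (our illustration, not from the paper): a finite-difference Poisson bracket showing that two functional-block Hamiltonians acting on disjoint registers commute, while an explicit coupling term does not. The block names are purely illustrative.

```python
# Sketch: numerical Poisson bracket for observables f(q, p) on a
# two-register phase space. State layout: (q1, q2, p1, p2).

def poisson_bracket(f, g, state, h=1e-5):
    """{f, g}_PB = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i),
    estimated with central finite differences."""
    n = len(state) // 2  # number of (q_i, p_i) pairs

    def partial(func, idx):
        s_plus, s_minus = list(state), list(state)
        s_plus[idx] += h
        s_minus[idx] -= h
        return (func(s_plus) - func(s_minus)) / (2 * h)

    return sum(
        partial(f, i) * partial(g, n + i) - partial(f, n + i) * partial(g, i)
        for i in range(n)
    )

# Two functional blocks acting on disjoint registers: "learning" on
# (q1, p1) and "sensing" on (q2, p2). Names are illustrative.
H_learn = lambda s: 0.5 * (s[2] ** 2 + s[0] ** 2)
H_sense = lambda s: 0.5 * (s[3] ** 2 + s[1] ** 2)

state = [0.3, -1.2, 0.7, 0.4]  # (q1, q2, p1, p2)

# Disjoint blocks commute: their Poisson bracket vanishes.
print(abs(poisson_bracket(H_learn, H_sense, state)) < 1e-8)  # True

# An explicit coupling term q1 * q2 breaks this independence:
# {H_learn, q1*q2} = -p1 * q2, which is nonzero here.
H_coupled = lambda s: s[0] * s[1]
print(abs(poisson_bracket(H_learn, H_coupled, state)) > 1e-3)  # True
```

The nonzero bracket with the coupling term is exactly the situation the text describes: one block’s evolution no longer leaves the other’s variables untouched.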
In information-theoretic terms, the classical mechanical dynamics of a CAGI can be expressed as follows. Let $\{ \mathcal { R } _ { i } \} _ { i = 1 } ^ { n }$ be the classical registers of the agent, each described by a commutative von Neumann algebra $\mathcal { V } _ { i } = L ^ { \infty } ( \varOmega _ { i } , \mu _ { i } )$ . A microstate of the whole agent–environment system is therefore a point $( \mathbf { q } , \mathbf { p } ) \in \mathcal { M } = T ^ { * } { \mathcal { C } }$ with
$$
q _ { i } : = X _ { i } ( \omega _ { i } ) , \qquad p _ { i } : = M _ { i } \dot { X } _ { i } ( \omega _ { i } ) ,
$$
where $X _ { i } \in \mathcal { V } _ { i }$ is the random variable realised by register $\mathcal { R } _ { i }$ and $M _ { i }$ is an information-theoretic weight term (e.g. an inverse learning rate or buffer capacity). The classical Hamiltonian functional $H _ { C } \colon \mathcal { M } \to \mathbb { R }$ generates the flow of these coordinates:
$$
\dot { q } _ { i } = \frac { \partial H _ { C } } { \partial p _ { i } } , \qquad \dot { p } _ { i } = - \frac { \partial H _ { C } } { \partial q _ { i } } ,
$$
but now (13) is understood to act on probability densities $f _ { t } ( \mathbf { q } , \mathbf { p } )$ pushed forward by the CTC channel $\mathrm { C T C } _ { t } : L ^ { \infty } ( \varOmega ) \to L ^ { \infty } ( \varOmega )$ . For any pair of observables $f , g \in \oplus _ { i } \mathcal { V } _ { i }$ we retain the Poisson bracket:
$$
\{ f , g \} _ { \mathrm { P B } } \ = \ \sum _ { i = 1 } ^ { n } \Bigl ( \frac { \partial f } { \partial q _ { i } } \frac { \partial g } { \partial p _ { i } } - \frac { \partial f } { \partial p _ { i } } \frac { \partial g } { \partial q _ { i } } \Bigr ) ,
$$
so the time derivative of $f$ is $\dot { f } = \{ f , H _ { C } \} _ { \mathrm { P B } }$ . In information terms, $\{ f , g \} _ { \mathrm { P B } } = 0$ iff the corresponding classical channels commute.
# B.1 Quantum Hamiltonian dynamics
Upon shifting to a quantum substrate, the AGI’s state lives as a vector $| \psi \rangle$ in a Hilbert space $\mathcal { H }$ (or, more generally, as a density operator $\rho$ on $\mathcal { H }$ ). Observables correspond to self-adjoint operators $A$ acting on that space, and evolution follows the Schrödinger–von Neumann equation
$$
i \hbar { \frac { \mathrm { d } } { \mathrm { d } t } } \rho ( t ) = [ H _ { Q } , \rho ( t ) ] ,
$$
where $[ A , B ] \equiv A B - B A$ is the commutator—the quantum analogue of the Poisson bracket—and $H _ { Q }$ is the total quantum Hamiltonian. The critical algebraic shift from the classical picture is that operators need not commute: when $[ A , B ] \neq 0$ , simultaneous precise values are forbidden, giving rise to distinctively quantum effects discussed below. For a quantum AGI we likewise decompose
$$
H _ { Q } \ = \ \sum _ { k } H _ { Q , k } ,
$$
each $H_{Q,k}$ generating a functional capability—learning, reasoning, perception, actuation, and so forth. Now, however, the commutation relations among these generators, and with other key observables, govern behaviour: if the learning term $H_{Q,\mathrm{learn}}$ fails to commute with the sensing term $H_{Q,\mathrm{sens}}$, then observation can disturb learning (and vice versa) in a way with no classical counterpart. Such non-commutative structure underlies quantum phenomena like entanglement and contextuality and therefore reshapes the semantics of computation in a quantum-enabled AGI. This non-commutativity is fundamental and has profound consequences, acting as either a constraint or a resource.
For a quantum AGI we again write the control Hamiltonian as a sum of functional generators,
$$
H _ { Q } \ = \ \sum _ { k } H _ { Q , k } .
$$
Each $H _ { Q , k }$ is now an operator, so the commutators among these terms—and with other observables—govern the agent’s evolution. If, say, the learning generator $H _ { Q , \mathrm { l e a r n } }$ fails to commute with the sensing generator $H _ { Q , \mathrm { s e n s } }$ , environmental measurement can disturb learning (and vice versa) in a manner with no classical analogue. This non-commutative architecture underlies quantum hallmarks such as entanglement and contextuality, which may represent either valuable resources or formidable challenges for a QAGI.
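A minimal numerical illustration of this contrast (ours, not the paper’s): representing a "sensing" and a "learning" generator by the Pauli matrices $X$ and $Z$, their commutator is nonzero, whereas any two diagonal (classical-like) generators commute. Plain Python lists are used so no external libraries are needed.

```python
# Sketch: non-commuting generators as 2x2 matrices. X and Z are
# illustrative stand-ins for H_{Q,sens} and H_{Q,learn}.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

X = [[0, 1], [1, 0]]   # Pauli X, stand-in for a sensing generator
Z = [[1, 0], [0, -1]]  # Pauli Z, stand-in for a learning generator

# [X, Z] = XZ - ZX is nonzero: sensing disturbs learning.
print(commutator(X, Z))  # [[0, -2], [2, 0]]

# Classical analogue: diagonal generators always commute.
D1 = [[2, 0], [0, 3]]
D2 = [[5, 0], [0, 7]]
print(commutator(D1, D2))  # [[0, 0], [0, 0]]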
# B.2 Quantum information formulation
In quantum information terms, the transition to a quantum substrate replaces every classical register $\mathcal { V } _ { i } = L ^ { \infty } ( \varOmega _ { i } )$ by a non-commutative von Neumann algebra $\mathcal { V } _ { i } = B ( \mathcal { H } _ { i } )$ acting on a Hilbert space $\mathcal { H } _ { i }$ . The full agent–environment system is described by the tensor algebra $\mathcal { V } = \otimes _ { i } \mathcal { V } _ { i } \subseteq B ( \mathcal { H } )$ , with $\mathcal { H } = \bigotimes _ { i } \mathcal { H } _ { i }$ . States are represented by density operators $\rho \in { \mathcal { D } } ( { \mathcal { H } } ) = \{ \rho \geq 0$ , $\operatorname { T r } \rho = 1 \}$ , and an observable is an element $A \in \mathcal { V }$ . When the evolution is closed and reversible, the channel on $\mathcal { V }$ is the adjoint action of a unitary $U _ { t }$ :
$$
\begin{array} { r } { \phi _ { t } ^ { \mathrm { ( u ) } } ( A ) = U _ { t } ^ { \dagger } A U _ { t } , \qquad U _ { t } = \exp \bigl ( - \frac { i } { \hbar } H _ { Q } t \bigr ) , } \end{array}
$$
where the Hamiltonian operator $H _ { Q } \in \mathcal { V }$ is the quantum analogue of $H _ { C }$ . In Schrödinger form this yields the familiar
$$
i \hbar \dot { \rho } ( t ) = \big [ H _ { Q } , \rho ( t ) \big ] ,
$$
which is the generator $\begin{array} { r } { { \mathcal L } _ { H _ { Q } } ~ = ~ - { \frac { i } { \hbar } } [ H _ { Q } , \cdot ] } \end{array}$ of a one-parameter group of QTC channels. Realistic AGI modules interact with—and are monitored by—their environment, so the fundamental dynamical object is a quantum channel $\varPhi _ { t } =$ $\exp ( t \mathcal { L } )$ , with Lindblad superoperator:
$$
\begin{array} { l } { \displaystyle \mathcal { L } ( \rho ) = - \frac { i } { \hbar } [ H _ { Q } , \rho ] + \sum _ { \alpha } \Bigl ( L _ { \alpha } \rho L _ { \alpha } ^ { \dagger } - \frac { 1 } { 2 } \{ L _ { \alpha } ^ { \dagger } L _ { \alpha } , \rho \} \Bigr ) } \end{array}
$$
where the $L _ { \alpha }$ operators represent QTC measurement-and-feedback channels.

Abstract. The prospect of AGI instantiated on quantum substrates motivates the development of mathematical frameworks that enable direct comparison of their operation in classical and quantum environments. To this end, we introduce a Hamiltonian formalism for describing classical and quantum AGI tasks as a means of contrasting their interaction with the environment. We propose a decomposition of AGI dynamics into Hamiltonian generators for core functions such as induction, reasoning, recursion, learning, measurement, and memory. This formalism aims to contribute to the development of a precise mathematical language for how quantum and classical agents differ via environmental interaction.
# 1 Introduction
Documents are a fundamental form for the preservation and exchange of information, and an important source for humans to learn and acquire knowledge (Gu et al., 2021; Chia et al., 2024; Deng et al., 2024). Document question answering is a core task for automated understanding and retrieval of information (Appalaraju et al., 2021; Van Landeghem et al., 2023). Document Visual Question Answering (DocVQA) involves answering questions grounded in multi-modal documents containing text, tables, and images — common in formats like reports and manuals (Suri et al., 2024; Ma et al., 2024b). There are three main challenges in this task: (1) multiple pages, where the relevant portion of a long document must be located and processed to answer the question; (2) multiple references, where different pages need to be cross-referenced; and (3) multiple modalities, where textual, tabular, and visual content must be understood jointly.
Figure 1: Illustration of the vanilla Retrieval-Augmented Generation (RAG) pipeline and the proposed SimpleDoc framework. SimpleDoc introduces a two-step page retrieval process that utilizes pre-processed embeddings and summaries of each page. During generation, a reasoning agent reviews the retrieved pages and decides whether to give the answer or produce a new query to retrieve more pages.
Retrieval-augmented generation (RAG) (Lewis et al., 2020) is an effective pipeline to overcome challenges (1) and (2), where relevant information is retrieved by a retrieval model and then fed to a generation model to output the answer. To handle different modalities, several methods have been proposed to pre-process documents by converting different modalities into texts (Memon et al., 2020; Fenniak, 2022; Shinyama et al., 2019). Recently, multi-modal retrieval models such as ColPali (Faysse et al., 2025) have been proposed to perform page-level retrieval by treating each page as an image (Yu et al., 2024a; Xie et al., 2024). Building on this, M3DocRAG (Cho et al., 2024) proposed a multi-modal RAG system that demonstrated strong performance in DocVQA tasks by combining image and text embeddings for document retrieval. Since multi-agent systems have emerged as an effective method to solve complex, multi-step tasks (Wu et al., 2023; Zheng et al., 2025; Wu et al., 2024), MDocAgent (Han et al., 2025) applied this concept to document QA by designing a multi-agent pipeline composed of dedicated text and image retrieval agents, a critical information extractor, and a final summary agent to collaboratively tackle multi-modal document understanding. Despite MDocAgent’s effectiveness, we find it overcomplicated, and it may not utilize the full capacity of recent VLMs.
SimpleDoc introduces a simple retrieval augmented framework that leverages modern VLMs without the overhead of complex multi-agent designs. The pipeline unfolds in two stages. First, an offline document-processing stage indexes every page twice: (i) as a dense visual embedding produced by a page-level VLM such as ColPali, and (ii) as a concise, VLM-generated semantic summary that captures the page’s most salient content. Second, an online iterative QA stage employs a dual-cue retriever that initially shortlists pages via embedding similarity and then asks an LLM, which operates solely over the summaries, to decide which of those pages are pertinent to the query and re-rank them by estimated relevance. This ordered subset is handed to a single reasoning agent. The agent reads only the newly selected pages along with a working memory, which preserves important information from previously examined pages, and judges whether the evidence now suffices to answer the question. If it detects missing information, the agent emits a refined follow-up query, prompting another retrieval round and merging the newly distilled notes into memory. This lightweight loop of targeted retrieval and memory-aided reasoning continues until an answer is produced or a preset iteration limit is reached, enabling SimpleDoc to flexibly trade retrieval depth for generation quality.
We perform various experiments and analyses to gain an understanding of the VQA problem and to validate the effectiveness of our method. We test on 4 different datasets and find that our method improves over previous baselines by 3.2 absolute points, with only 3.5 pages retrieved per question. While the setting of multi-modal, multi-page document-based QA seems new, we find it closely resembles ‘traditional’ RAG tasks such as HotpotQA (Yang et al., 2018) and 2WIKI (Ho et al., 2020), which usually require retrieving fine-grained text chunks from given documents. However, M3DocRAG and MDocAgent offer little discussion in this direction. Instead, we conduct a detailed analysis of these RAG methods and identify two common strategies: query decomposition and relevant-page review. We implement Plan∗RAG and Chain-of-Note as representatives of these strategies and compare them under the DocVQA setting. To summarize, our contributions are the following:
• We propose SimpleDoc, a straightforward and effective framework for multi-modal document question answering.
• We perform various experiments to test the effectiveness of SimpleDoc, and we analyze and compare it with traditional RAG methods, a comparison that previous work on DocVQA lacks.
# 2 Related Work
Document visual question answering focuses on answering questions grounded in visual and textual information contained within documents (Ding et al., 2022; Tanaka et al., 2023). Early efforts primarily addressed single-page document images using OCR-based approaches and multi-modal language models (MLMs) (Mathew et al., 2021b,a; Mishra et al., 2019). However, these methods often struggled with the long-context reasoning and complex layouts found in real-world documents. Recently, benchmarks like MP-DocVQA (Tito et al., 2023) and MMLongBench-Doc (Ma et al., 2024b) focus on long multi-page and multi-modal document understanding, posing new challenges to the task (Tanaka et al., 2023). Meanwhile, recent advances in vision-language models (VLMs) have shown promise for multi-modal document understanding (Liu et al., 2024a, 2023; Chen et al., 2022; Bai et al., 2025; Xie et al., 2025; Ma et al., 2024a). ColPali (Faysse et al., 2025) introduces a new concept of treating document pages as images to produce multi-vector embeddings, from which pages can be retrieved for each query. Other methods such as VisRAG (Yu et al., 2024a) and VDocRAG (Tanaka et al., 2025) also convert pages to images to avoid losing information when parsing text and images separately from one page. Building on ColPali, M3DocRAG (Cho et al., 2024) proposed a multi-modal RAG pipeline that retrieves relevant document pages across large document corpora and feeds them into a vision-language model. MDocAgent (Han et al., 2025) extended this by introducing specialized agents for handling cross-modal retrieval and reasoning over long documents.

Figure 2: SimpleDoc consists of two stages: (1) offline extraction of visual embeddings and LLM-generated summaries for all document pages, and (2) an online reasoning loop that performs retrieval via embedding and summary-based re-ranking, followed by answer generation with a memory-guided VLM agent that iteratively refines its query if needed.
Retrieval augmented generation (RAG) has become a powerful strategy for knowledge-intensive tasks by supplementing language models with external context, and consists of two core steps: retrieve and generate (Jiang et al., 2023a; Gao et al., 2023). Many works have been proposed to improve RAG, such as training effective embedding models (Karpukhin et al., 2020; Khattab and Zaharia, 2020a), query rewriting and decomposition (Ma et al., 2023; Peng et al., 2024; Chan et al., 2024; Verma et al., 2025; Lee et al., 2024; Wang et al., 2024), constructing different forms of databases (e.g., knowledge graphs) (Gaur et al., 2022; Edge et al., 2024; Liu et al., 2025), improving the quality of the retrieved context (Yu et al., 2024b; Chen et al., 2024), augmenting the RAG process (Asai et al., 2023; Trivedi et al., 2022a; Liu et al., 2024b), and many others (Jiang et al., 2023b). Most RAG methods focus on knowledge and reasoning tasks that only require text-based retrieval (e.g., HotpotQA) (Yang et al., 2018; Geva et al., 2021; Trivedi et al., 2022b; Mallen et al., 2023; Ho et al., 2020; Kwiatkowski et al., 2019). While we target the document visual understanding task, we find that many of these core ideas may also be effective for DocVQA. Thus, we implement and test two such methods: Chain-of-Note (Yu et al., 2024b), which improves the retrieved context for better generation, and Plan∗RAG (Verma et al., 2025), which decomposes queries and augments the generation process for better retrieval, to understand how previous methods transfer to DocVQA.
# 3 Method
Below we introduce SimpleDoc, an effective framework for DocVQA. SimpleDoc consists of two stages: an offline document processing phase followed by an online iterative retrieval-augmented question answering phase. Our framework features the following:

1. Enhanced page retrieval through a combination of vector and semantic representations.
2. Continuous refinement via iterative retrieval and memory updates.

Figure 2 illustrates the overall pipeline of our approach.
# 3.1 Offline Document Processing
The initial stage involves pre-processing and indexing each document to create a searchable representation. We treat each page as a unit and use two VLMs to obtain both a vector and a semantic representation of each page. For the vector embedding, we employ a VLM such as ColPali (Faysse et al., 2025) that is trained to generate embeddings for document pages. For the semantic representation, we use a general VLM guided by a predefined prompt to produce a summary (typically 3-5 sentences) that captures the salient information of that page. These summaries are designed to highlight information that might be generally relevant for answering potential future questions, without prior knowledge of any specific user query.
Specifically, given a document $D$ consisting of $j$ pages $D = \{ p_{1}, p_{2}, \ldots, p_{j} \}$, we use a vision embedding model to generate an embedding vector for each page, $E = \{ e_{1}, e_{2}, \ldots, e_{j} \}$, and use a VLM to generate $j$ summaries $S = \{ s_{1}, s_{2}, \ldots, s_{j} \}$.
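As a concrete sketch of this indexing step, the following pure-Python snippet builds $E$ and $S$, with `embed` and `summarize` as hypothetical stand-ins for the ColPali-style page embedder and the VLM summarizer (neither is the paper's actual model):

```python
# Sketch of the offline indexing stage. `embed` and `summarize` are
# illustrative stand-ins for a ColPali-style page embedder and a
# VLM summarizer, respectively.

def embed(page):
    # Toy "embedding": word-length features of the first 4 tokens.
    return [float(len(w)) for w in page.split()[:4]]

def summarize(page):
    # Toy "summary": a truncated preview of the page content.
    return page[:40]

def index_document(pages):
    """Return per-page embeddings E and summaries S for a document D."""
    E = [embed(p) for p in pages]
    S = [summarize(p) for p in pages]
    return E, S

pages = ["Quarterly revenue table for 2023 ...",
         "Methodology, limitations and notes ..."]
E, S = index_document(pages)
print(len(E), len(S))  # 2 2
```

In the real system both representations are computed once per document and reused for every subsequent query.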
# 3.2 Multi-modal Question Answering
For retrieval, we use an embedding VLM to retrieve pages through embedding similarity, and an LLM to read the summaries and re-rank those retrieved pages. During the question answering phase, we build a reasoner agent that can automatically decide whether to retrieve more information and iteratively refine its own memory with newly retrieved pages.
Page Retrieval Given a query $q$ and its document $D$, we first embed the query and retrieve the $k$ pages with the highest MaxSim score (Khattab and Zaharia, 2020b). Then, we pass $q$ and the $k$ summaries of the retrieved pages $S_{k}$ into an LLM (which can be text-only) to select and rank the relevant pages. The model returns an ordered list of page indices $C = c_{1}, c_{2}, \ldots, c_{n}$ based on their perceived relevance to the query. Note that the number of relevant pages is chosen automatically and dynamically by the model. Since the re-ranking operates on the pages retrieved by embedding, only $n \leq k$ pages are later sent to the reasoner agent, keeping the input size manageable. In this step, we also ask the LLM to generate an overall document-level summary $s_{\mathrm{DOC}}$ that contextualizes the entire document in relation to the current query, serving as the initial working memory of the reasoner agent.
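The MaxSim score used for this embedding-based shortlist can be sketched as follows: a toy pure-Python illustration of ColBERT-style late interaction, where each query-token vector is matched to its best page-token vector and the maxima are summed. The vectors and page index are made up for the example.

```python
# Sketch of MaxSim late-interaction scoring (ColBERT/ColPali style).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim(query_vecs, page_vecs):
    """Score(q, p) = sum over query tokens of max_j <q_i, p_j>."""
    return sum(max(dot(q, p) for p in page_vecs) for q in query_vecs)

def retrieve_top_k(query_vecs, page_index, k):
    scores = [(maxsim(query_vecs, pv), idx)
              for idx, pv in enumerate(page_index)]
    scores.sort(reverse=True)          # highest score first
    return [idx for _, idx in scores[:k]]

query = [[1.0, 0.0], [0.0, 1.0]]       # two query-token embeddings
page_index = [
    [[0.9, 0.1], [0.2, 0.8]],          # page 0: matches both tokens
    [[0.1, 0.1], [0.0, 0.2]],          # page 1: weak match
]
print(retrieve_top_k(query, page_index, k=1))  # [0]
```

Because each query token independently picks its best-matching page token, a page is rewarded for covering all aspects of the query rather than just one.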
Generation We treat the retrieved relevant pages as images, denoted as $I _ { C } ~ = ~ \{ i _ { c _ { 1 } } , i _ { c _ { 2 } } , . ~ . ~ . , i _ { c _ { n } } \}$ .
# Algorithm 1 SimpleDoc

Require: query $q$, per-page embeddings $E$ and summaries $S$, cutoff $k$, max iterations $L$
Ensure: answer $a$ or failure notice
1: $q_{\mathrm{cur}} \gets q$
2: $M \gets \emptyset$
3: for $\ell \gets 1$ to $L$ do
4:   $s_{\mathrm{DOC}}, C \gets \mathrm{RetrievePages}(q_{\mathrm{cur}}, E, S, k)$
5:   $I_{C} \gets \{ i_{c} \mid c \in C \}$; $T_{C} \gets \{ t_{c} \mid c \in C \}$
6:   $M \gets M \cup \{ s_{\mathrm{DOC}} \}$
7:   $(is\_solved, a, m', q') \gets \mathrm{Reasoner}(q, I_{C}, T_{C}, M)$
8:   if $is\_solved$ then
9:     return $a$
10:  else
11:    $M \gets M \cup \{ m' \}$
12:    $q_{\mathrm{cur}} \gets q'$
13: return FAIL
Those pages are also converted into text, denoted as $T_{C} = \{ t_{c_{1}}, t_{c_{2}}, \dots, t_{c_{n}} \}$. We input $I_{C}$, $T_{C}$, the query $q$, and a working memory $M$ (initialized to $s_{\mathrm{DOC}}$) into a reasoner agent (backed by a VLM), and ask it to determine whether the question can be solved with the given context.
The reasoner can produce one of three distinct response types:
• Answer: If the provided pages contain sufficient information, the reasoner formulates a direct answer to the query.
• Not Answerable: If the question cannot be answered by the document.
• Query Update: If the reasoner believes the answer exists within the document but on pages not yet retrieved, it outputs a note of current pages $m ^ { \prime }$ and generates a new query $q ^ { \prime }$ that asks for missing information.
Iterative Refinement Self-reflection has proven to be an effective mechanism for LLMs (Shinn et al., 2023; Madaan et al., 2023). We employ a similar mechanism in which the model can actively retrieve more pages as needed. If the reasoner agent decides that the question cannot be solved after the initial retrieval, we start an iterative process to continue retrieving new pages. As shown in Algorithm 1, we maintain a memory module $M$ to preserve useful information from previous retrievals. When the reasoner agent outputs a query update, we retrieve new page indices $C'$ based on the refined query $q'$, update the memory module $M$ with the notes $m'$, and call the reasoner again with the inputs $\{ q, I_{C'}, T_{C'}, M \}$. The iterative process terminates when the reasoner produces an answer, determines the query is not answerable, or reaches a predefined maximum number of iterations $L$. If the maximum number of iterations is reached without resolution, the question is marked as "not answerable."
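A minimal sketch of this retrieve-reason-refine loop, with stubbed retrieval and reasoner functions in place of the real embedding model, LLM re-ranker, and VLM agent (all names, the toy document, and the stub logic are hypothetical):

```python
# Minimal sketch of SimpleDoc's iterative loop (Algorithm 1).
# `retrieve_pages` and `reasoner` are stand-ins for the real models.

def retrieve_pages(query):
    # Stand-in: map a query to (document-level summary, page indices).
    table = {
        "total revenue?": ("financial report", [3]),
        "revenue for 2023?": ("financial report", [7]),
    }
    return table.get(query, ("financial report", []))

def reasoner(question, pages, memory):
    # Stand-in: the answer is only available once page 7 is seen.
    if 7 in pages:
        return True, "$1.2M", None, None               # solved
    note = f"pages {pages} lack the 2023 figure"
    # Refine the query toward the missing detail (stubbed logic).
    new_query = "revenue for 2023?" if "revenue" in question else question
    return False, None, note, new_query

def simpledoc(question, max_iters=3):
    query, memory = question, set()
    for _ in range(max_iters):
        s_doc, pages = retrieve_pages(query)
        memory.add(s_doc)                              # working memory
        solved, answer, note, new_query = reasoner(question, pages, memory)
        if solved:
            return answer
        memory.add(note)                               # keep distilled notes
        query = new_query                              # refined follow-up
    return "not answerable"

print(simpledoc("total revenue?"))  # $1.2M
```

Here the first retrieval misses the evidence page, so the agent emits a refined query; the second round retrieves the right page and the loop terminates with an answer.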
# 4 Experiments
Our experiments are organized as follows: in Section 4.1, we present the main results of our method and the baselines on 4 different datasets. In Section 4.2, we further experiment on MMLongBench using different models. In Section 4.3, we adapt and implement two other RAG methods that were originally proposed for knowledge-based question answering. Finally, in Section 4.4, we test variations of SimpleDoc and further analyze our method.
# 4.1 Main Results
Datasets. We evaluate SimpleDoc on 4 comprehensive PDF document understanding benchmarks, which provide a robust testbed for assessing document understanding at scale across varied document types, lengths, and retrieval complexities:
1) MMLongBench (Ma et al., 2024b): This dataset is designed to test document reasoning over long PDFs, containing complex layouts and multimodal components. The dataset contains 1073 questions across 135 documents, with an average length of 47.5 pages per document.
2) LongDocURL (Deng et al., 2024): Another large-scale multi-modal benchmark aimed at evaluating document retrieval and reasoning. It has over 33,000 document pages and includes 2,325 question samples.
3) PaperTab (Hui et al., 2024): This dataset focuses on the extraction and interpretation of tabular data from research papers, providing 393 questions from over 307 academic documents.
4) FetaTab (Hui et al., 2024): A table-based question answering dataset using tables extracted from Wikipedia articles. It presents 1,023 natural language questions across 878 documents, requiring models to generate free-form answers.
Baselines. We compare with two baselines: (1) M3DocRAG (Cho et al., 2024) first uses an image retrieval model to retrieve the top-$k$ pages, and then uses a VLM to generate an answer from the retrieved pages. (2) MDocAgent (Han et al., 2025) employs both a text retrieval model and an image retrieval model to retrieve two sets of pages; the top-$k$ pages from both sets are then used for generation. MDocAgent uses 5 different agents and requires both a VLM and a text model. We also include the results of using a VLM to answer the question directly, and of using a VLM with the ground-truth pages included as images (denoted as GT pages), which can be seen as lower and upper bounds.
Metrics. For this experiment, we evaluate model performance with Binary Correctness (Accuracy). We classify each model response as either correct or incorrect and compute the accuracy as the ratio of correct responses to the total number of questions. We use GPT-4.1 as an automatic evaluator to judge response correctness against ground truth answers and set the temperature to 0.
Implementation Details. We use the same models for SimpleDoc and the baselines for rigorous comparison. For the visual embedding model, we use ColQwen-2.5 for all methods, which is the latest model trained with ColPali (Faysse et al., 2025)’s strategy (see Table 6 for a comparison with ColPali), and we use Qwen2.5-VL-32B-Ins whenever a VLM is needed. For MDocAgent, we use ColBERTv2 (Khattab and Zaharia, 2020a) as the text retrieval model following the original paper, and Qwen3-30B-A3B as the text model. For SimpleDoc, we use Qwen2.5-VL-32B-Ins for per-page summarization during pre-processing. Note that the summarization only needs to be performed once. We use Qwen3-30B-A3B for page retrieval. For the baselines, we test with top-$k$ set to 2, 6, and 10. For our method, we set top-$k$ to 10 and 30 for embedding retrieval. All prompts used in our method are shown in Appendix A.4.
Results Analysis Table 1 shows that SimpleDoc achieves the highest average accuracy of $70.12\%$, outperforming all the baselines across different top-$k$ retrieval settings. On MMLongBench and LongDocURL, which contain long, diverse, and multi-modal documents, our proposed method significantly outperforms MDocAgent by $+5.3\%$ and $+9.1\%$, respectively. These gains highlight SimpleDoc’s strength in addressing complex queries that require aggregating information dispersed across different sections of a document. However, on FetaTab, a heavily table-centric dataset, SimpleDoc performs lower than MDocAgent. We attribute this to MDocAgent’s explicit multi-agent design, which uses a dedicated image agent to focus on another modality (table grids) and is especially effective for this specific type of table-based QA. In contrast,
Table 1: Accuracy (%) on 4 different DocVQA datasets. We use ColQwen-2.5 as the retrieval model for all methods. Pg. Ret. indicates the actual pages used during generation.
Table 2: All-Match Retrieve Rate and Page-level F1 Score on MMLongBench (see Section A.3 for calculation). We present the results for ColQwen (used by M3DocRAG and MDocAgent) and our retrieval.
SimpleDoc treats pages as images to feed into one single reasoner agent. Thus, SimpleDoc is more robust and effective across questions that require diverse evidence types.
Table 1 also lists the average number of pages each system retrieves. SimpleDoc needs only 3.5 pages per question yet achieves the best overall accuracy. By contrast, MDocAgent attains 59.6% accuracy when it reads 4 pages, about 10 percentage points below our method. Notably, both MDocAgent and M3DocRAG reach their peak accuracy at top-$k$ = 6 rather than 10, implying that indiscriminately adding pages can hurt performance. To understand this effect, Table 6 reports two retrieval metrics. 1) The all-hit rate gauges coverage: the fraction of questions for which the entire gold evidence set appears among the retrieved pages. 2) The page-level F1 score captures efficiency, rewarding systems that surface the right pages while avoiding noise. For ColQwen-2.5, raising $k$ from 2 to 10 boosts coverage but reduces F1, showing that many of the extra pages are irrelevant. Thus, top-$k$ = 6 reflects a better balance between coverage and conciseness, which in turn yields higher answer accuracy for the agent baselines. In contrast, SimpleDoc attains nearly the same coverage as ColQwen-2.5 at $k$ = 2 yet more than doubles its F1, demonstrating that our retriever supplies almost all necessary evidence with far less clutter. Overall, SimpleDoc delivers the best coverage-versus-conciseness trade-off while avoiding trial-and-error tuning of the top-$k$ retrieval number, giving the reasoner everything it needs while keeping the reading budget minimal.
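The two retrieval metrics contrasted above can be sketched directly; the definitions follow the text, while the function names are our own:

```python
# Sketch of the two retrieval metrics: the all-hit rate (coverage: the full
# gold evidence set appears among the retrieved pages) and the page-level F1
# score (efficiency: the right pages with little noise).

def all_hit_rate(retrieved, gold):
    """Fraction of questions whose entire gold page set is among the retrieved pages."""
    hits = sum(1 for r, g in zip(retrieved, gold) if set(g) <= set(r))
    return hits / len(gold)

def page_f1(retrieved_pages, gold_pages):
    """Harmonic mean of page-level precision and recall for one question."""
    r, g = set(retrieved_pages), set(gold_pages)
    tp = len(r & g)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(r), tp / len(g)
    return 2 * precision * recall / (precision + recall)

retrieved, gold = [[6, 13, 14], [1, 2, 3, 4, 5, 6]], [[6, 7], [2, 3]]
print(all_hit_rate(retrieved, gold))   # → 0.5 (only the second question is fully covered)
print(page_f1(retrieved[1], gold[1]))  # precision 2/6, recall 2/2
```

Note how the second question illustrates the trade-off: retrieving six pages covers the gold set (high coverage) but drags down F1 through the four irrelevant pages.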
Qualitative Analysis. As illustrated in Figure 3, SimpleDoc demonstrates its ability to reason iteratively. Initially, it retrieves pages that are broadly relevant but lack the specific details needed to answer the question. Recognizing the gap, the agent refines the query to target the missing information, retrieves the precise page containing the relevant table, and successfully answers the question. This example highlights how SimpleDoc detects incomplete evidence and adaptively improves retrieval to resolve complex queries.
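The iterative behavior described above can be sketched as a simple control loop. The `retrieve` and `reason` callables below are hypothetical stand-ins for the embedding-plus-summary retriever and the VLM reasoner agent; only the control flow (answer vs. query_update) follows the behavior described in the text:

```python
# Minimal sketch of SimpleDoc-style iterative retrieval-and-reasoning.

def iterative_qa(question, retrieve, reason, max_iters=3):
    query, memory = question, []
    for _ in range(max_iters):
        pages, summary = retrieve(query)        # embedding + summary-based filtering
        memory.append(summary)                  # working memory across rounds
        response = reason(question, pages, memory)
        if response["type"] == "answer":
            return response["answer"]
        query = response["query"]               # refined query for the next round
    return None                                 # abstain if evidence never suffices

# Toy components reproducing the Figure 3 trace: round 1 misses the results
# table, round 2 finds Page 7 and answers.
state = {"round": 0}

def retrieve(query):
    state["round"] += 1
    pages = [6, 13, 14] if state["round"] == 1 else [6, 7]
    return pages, f"summary for round {state['round']}"

def reason(question, pages, memory):
    if 7 in pages:  # the page holding the results table
        return {"type": "answer", "answer": "temperature 0.1 (85.9)"}
    return {"type": "query_update", "query": question + " (need results table)"}

print(iterative_qa("Which temperature gives the highest alignment score?", retrieve, reason))
# → temperature 0.1 (85.9)
```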
# 4.2 Results with different models
In Table 3, we test with smaller models (Qwen2.5-VL-7B-Instruct + Qwen-3-8B), with detailed results on MMLongBench, to further validate our method. Note that Qwen-3-8B is a text-only model, used in MDocAgent (Text Agent) and in our method (for retrieval). Our method outperforms all baselines in terms of average accuracy (ACC) for both models. Under the smaller 7B/8B
# QUESTION

From the paper, which temperature gives ChatGPT the highest alignment score?

ITERATION 1

1) Retrieval: embedding retrieval proposes candidate pages (34, 38, 67, 21, 58, ...); the reasoning agent filters these to Pages 6, 13, 14 and initializes the memory with a summary.

2) Generation (input: question + memory; Pages 6, 13, 14). Output, type query_update: "From the paper ... please provide the section or table that compares alignment scores for ChatGPT at temperatures 0.1, 0.5, and 0.9." Notes: the current pages describe the experimental setup and evaluation metrics but lack the specific results.

ITERATION 2

1) Retrieval: embedding retrieval proposes candidate pages (30, 27, 67, 24, 58, ...); the reasoning agent filters these to Pages 6 and 7 and updates the summary (Page 7 includes Table 3).

2) Generation (input: question + summary; Pages 6, 7). Output, type answer: per Table 3 on Page 7, the highest alignment score (85.9) is achieved at temperature 0.1.
Figure 3: An example run of SimpleDoc’s iterative reasoning solving a question. In the first round, the agent retrieves Pages 6, 13, and 14 based on embedding and summary-based filtering. However, the retrieved pages only describe the experimental setup and evaluation metrics without giving exact alignment scores. The agent identifies this gap and generates a refined query asking specifically for a section or table comparing scores at different temperatures. This updated query retrieves Page 7, which contains Table 3 with the required information, allowing the agent to correctly answer that temperature 0.1 yields the highest alignment score (85.9).
model setting, our method achieves 50% overall accuracy, improving over MDocAgent by +6.62 points, a bigger gap than with the larger models (+4.15 points). When broken down by evidence source, our model achieves the best performance on three out of five modalities. We note that MDocAgent is competitive on charts and tables thanks to its specialized agents, which is consistent with our observation and analysis in Section 4.1. When broken down by the number of evidence pages, our method performs similarly to MDocAgent on multi-page (MUL) and single-page (SIN) reasoning with both model sizes. However, SimpleDoc achieves much better results on unanswerable questions, which are used to test hallucinations, showcasing its ability to abstain from guessing when no valid evidence is present.
# 4.3 Other RAG methods
We also adapt and evaluate two RAG methods originally designed for knowledge question answering tasks: (1) Plan*RAG (Verma et al., 2025) first decomposes a question into sub-queries that form a directed acyclic graph (DAG). It starts by solving the leaf sub-queries and incorporates each previous sub-query and its answer when solving subsequent queries, up to the original question. This exemplifies the query-decomposition and answer-augmented generation strategies common in RAG methods. (2) Chain-of-Notes (Yu et al., 2024b) takes notes on retrieved paragraphs and then uses them for more precise generation. To adapt both to our setting, we use ColQwen2.5 to retrieve document pages and a VLM for generation, the same as for the other baselines.
Table 3 reports the performance of the two RAG baselines when paired with Qwen2.5-VL-32B. Both Chain-of-Note and Plan*RAG lag behind approaches designed specifically for DocVQA, indicating that simply transplanting text-oriented, knowledge-based RAG techniques is insufficient for this domain. From our analyses, we also observe likely failure modes for each method: (1) Since Chain-of-Note uses page-level image summaries, it can miss finer details such as exact numbers in tables or exact words in charts and layouts. Also, one summary per page can be too general, making it hard to reason across multiple pages or give precise answers, yielding only 40.4% accuracy. (2) Plan*RAG uses full-page images and breaks the main question into sub-questions via a query-decomposition step. However, the acyclic graph it builds is often inaccurate, leading to off-target sub-queries. For each one, it retrieves the top-k image pages, generates answers, and then summarizes them. This multi-step pipeline adds complexity and increases error propagation.
# 4.4 Additional Analysis of SimpleDoc
In this section, we conduct additional experiments to decompose and analyze our method.
Varying top-$k$ for embedding retrieval. In SimpleDoc, we first retrieve the top-$k$ pages based on embeddings, and then use an LLM to re-rank them based on summaries. This retrieval step filters and bounds the maximum number of pages before re-ranking. In this experiment, we test our method with different numbers of top-$k$ pages retrieved through embeddings. Increasing top-$k$ gives the LLM retrieval agent more room to select closely related pages that were not correctly identified by the embedding-based retrieval. We did not see the retrieval agent select significantly more pages when $k$ is large, which means the agent dynamically decides which pages are truly relevant to the given query.
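The two-stage retrieval described above can be sketched compactly. The `keyword_agent` below is a toy stand-in for the LLM filtering call (it keeps pages whose summary shares a word with the query), illustrating that the agent, not $k$, decides the final page count:

```python
# Sketch of two-stage page retrieval: an embedding model proposes the top-k
# candidate pages (stage 1), then an agent filters them via page summaries
# (stage 2). Vectors and summaries here are toy data, not real embeddings.

def embed_topk(query_vec, page_vecs, k):
    """Dot-product similarity ranking; returns indices of the top-k pages."""
    scores = [sum(q * p for q, p in zip(query_vec, vec)) for vec in page_vecs]
    return sorted(range(len(page_vecs)), key=lambda i: scores[i], reverse=True)[:k]

def retrieve(query_vec, query_text, page_vecs, summaries, k, agent_select):
    candidates = embed_topk(query_vec, page_vecs, k)        # stage 1: bound candidates
    return agent_select(query_text, candidates, summaries)  # stage 2: agent filtering

def keyword_agent(query_text, candidates, summaries):
    # Toy stand-in for the LLM re-ranker: keep pages sharing a query term.
    terms = set(query_text.lower().split())
    return [p for p in candidates if terms & set(summaries[p].lower().split())]

page_vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.2, 0.8]]
summaries = {0: "table of alignment scores", 1: "references",
             2: "experimental setup", 3: "related work"}
print(retrieve([1.0, 0.0], "alignment scores table", page_vecs, summaries, 3, keyword_agent))
# → [0]
```

Raising `k` here widens the candidate pool without inflating the final selection, mirroring the observation that a larger top-$k$ does not make the agent pass more pages to the reasoner.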
Table 4: Our method with different top-$k$ numbers for embedding retrieval on MMLongBench. Avg. Page Used denotes the actual number of pages seen by the reasoner agent.
Table 5: Performance of SimpleDoc on MMLongBench across different iterations, showing accuracy and number of query updates.
Results with different iterations. Table 5 illustrates the benefits of our iterative refinement strategy on MMLongBench. The observed trend shows that additional iterations allow SimpleDoc to progressively enhance its understanding and locate crucial information initially missed. This targeted re-querying leads to improved accuracy, while the decreasing number of query updates indicates the system is either satisfying the information need or recognizing when an answer cannot be found within the document. | Document Visual Question Answering (DocVQA) is a practical yet challenging task that asks questions over documents, referring to multiple pages and different modalities of information, e.g., images and tables. To handle multi-modality, recent methods follow a similar Retrieval-Augmented Generation (RAG) pipeline, but utilize Vision Language Model (VLM)-based embedding models to embed and retrieve relevant pages as images, and generate answers with VLMs that can accept images as input. In this paper, we introduce SimpleDoc, a lightweight yet powerful retrieval-augmented framework for DocVQA. It boosts evidence page gathering by first retrieving candidates through embedding similarity and then filtering and re-ranking these candidates based on page summaries. A single VLM-based reasoner agent repeatedly invokes this dual-cue retriever, iteratively pulling fresh pages into a working memory until the question is confidently answered. SimpleDoc outperforms previous baselines by 3.2% on average on 4 DocVQA datasets with much fewer pages retrieved. Our code is available at https://github.com/ag2ai/SimpleDoc. | [
"cs.CV",
"cs.AI"
] |
# 1 Introduction
Large language models (LLMs) have shown immense success across various tasks in natural language processing and information retrieval. They have been successfully applied to reformulate queries [6, 22] and documents [16], rerank documents [13, 17] and products [5], as well as train embeddings for dense indices [19]. They are pretrained on huge corpora and often excel at a variety of search tasks, frequently without the need for additional fine-tuning.
In the context of reranking, they have often been used through three paradigms: pointwise, pairwise, and listwise. The most common reranking models are cross-encoders, which have a BERT backbone [18]; ColBERT rerankers [19], which work on late interaction; and LLM-based rerankers. In many prior works, BM25 is utilized as a common first-stage retrieval system, from which the top-k documents are fed to the reranker.
While there is prior work leveraging LLMs in all three settings (point, pair, and listwise), we focus on the listwise setting in this particular work. A listwise approach is advantageous as it consumes fewer tokens than the corresponding aggregated pointwise approaches and allows the LLM to reason over all the documents at once. Moreover, as more and more LLMs gain the ability to handle very long contexts, listwise reranking is becoming viable for LLM-based document reranking. In this work, we explore the use of LLMs for re-ranking passages in the context of complex reasoning-centric queries. To that end, we utilize two benchmarks: BRIGHT [21] and R2MED [10], a medical reasoning retrieval benchmark. In particular, we experiment with the injection of BM25 scores into the prompt and demonstrate it as a useful signal for reasoning-centric LLM reranking. Injecting BM25 scores into the ranking input along with the query and document has proven effective for BERT-based cross-encoders [2]. In this work, we demonstrate it is a useful signal to augment the reasoning capabilities of LLMs in the reranking setting.
In this work, we ask the following research question: can incorporating lexical signals, such as retrieval scores, serve as effective clues for rerankers to improve retrieval effectiveness in reasoning tasks?
Specifically, our work contributes the following:
(1) We introduce InsertRank, a simple listwise reranking method that exploits BM25 retrieval scores to improve retrieval over reasoning queries.
(2) We evaluate our method across multiple open and closed LLMs to demonstrate its effectiveness on two reasoning-centric retrieval benchmarks, BRIGHT and R2MED.
(3) We also conduct ablations and analyses along the following dimensions:
(a) Given the long context of the reranking inputs in the listwise setting, and the tendency of LLMs to favor content at the beginning and end of the context, we perform ablation experiments by shuffling the document order with and without BM25 scores.
(b) We experiment with scaling and normalization to examine their effects on the reasoning abilities of LLMs in the reranking context.
While many studies have focused on enhancing reasoning through fine-tuning and reinforcement learning methods, which rely heavily on labeled data, to the best of our knowledge, ours is the first work demonstrating improved retrieval effectiveness by integrating retrieval scores within a zero-shot setting using a listwise generative reranker.
# 2 Related work
We now discuss related work to place our contributions in context.
# 2.1 Retrieval for Complex Queries
Reasoning-centric queries are often much more difficult and nuanced than those in traditional document retrieval, where keyword matching or semantic matching suffices. As LLMs become increasingly powerful at reasoning and understanding, they become crucial for improving ranking and retrieval effectiveness for complex reasoning queries [21, 23]. BRIGHT [21] is a challenging benchmark of ˜1300 queries across 11 domains and ˜1M documents. Similarly, R2MED [10] is a challenging benchmark of 876 queries across 8 tasks in the medical domain, which focuses on reasoning-centric retrieval. On BRIGHT, Su et al. [21] have observed significant gains with query reformulation using GPT-4, Gemini, and other LLMs with a BM25 backbone.
There has also been growing research around training retrievers and rankers for reasoning-centric information retrieval. Shao et al. [20] fine-tune a Llama-8B model for complex reasoning queries. They also develop a synthetic data generation pipeline that produces complex hard negatives for fine-tuning a dense retrieval model. Weller et al. [23] leverage reasoning traces collected from Deepseek-R1 on the MSMARCO dataset and fine-tune small language models of varying sizes to achieve significant results on the BRIGHT benchmark. [25] leverage the GRPO technique to fine-tune language models of varying sizes (3B to 14B) for listwise reasoning-centric reranking. Yang et al. [24] use a listwise reranker fine-tuned on QwQ-32B and leverage a sliding window approach in a listwise setting to reduce the number of LLM calls compared to the pointwise setting. Niu et al. [15] leverage innovative prompting strategies with GPT-4 to score queries and documents in a pointwise setting. While many works have focused on improving reasoning using fine-tuning and RL methods, to the best of our knowledge, ours is the first work to incorporate BM25 scores into the LLM prompt for a listwise zero-shot setting.
# 2.2 LLM Based Reranking
In the context of reranking, there are three salient paradigms (pointwise, pairwise, and listwise), with a fourth, setwise, introduced recently.
• Pointwise: Produce a score $s _ { j }$ for each pair $( q _ { i } , D _ { j } )$, where $q _ { i }$ is the $i$-th query and $D _ { j }$ is the $j$-th document in the evaluation.
$$
( q_i , D_j ) \to \mathbf{M} \to s_j
$$
• Pairwise: Produce a preference score $s _ { j }$ for each triple of the form $( q _ { i } , D _ { j } , D _ { k } )$; the goal is to maximize the number of instances where the score is positive when $D _ { j }$ is more relevant than $D _ { k }$ in the ground truth.
$$
( q , D_j , D_k ) \to \mathbf{M} \to s_j
$$
• Listwise: The goal in a listwise setting is to consider a query $q _ { i }$ and a list of documents $D _ { 1 } . . . D _ { n }$ and produce a ranked list that takes in all the documents from the retriever at once.
$$
( q , D_1 , \ldots , D_n ) \to \mathbf{M} \to r_1 \, r_2 \ldots r_n
$$
where $r _ { 1 } , r _ { 2 } , . . . r _ { n }$ are the ranked list of documents or their identifiers.
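The three call shapes above can be contrasted with a toy sketch. The "model" $\mathbf{M}$ here is a stand-in lexical-overlap scorer rather than an LLM; only the call patterns (per-document, per-pair, whole-list) mirror the definitions:

```python
# Toy illustration of the three reranking paradigms formalized above.

def score(query, doc):                       # pointwise: (q_i, D_j) -> M -> s_j
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q)

def prefer(query, d_j, d_k):                 # pairwise: (q, D_j, D_k) -> M -> s_j
    return score(query, d_j) - score(query, d_k)   # > 0 means D_j is preferred

def listwise_rank(query, docs):              # listwise: (q, D_1..D_n) -> M -> r_1..r_n
    return sorted(range(len(docs)), key=lambda j: score(query, docs[j]), reverse=True)

docs = ["bm25 lexical retrieval", "dense embeddings", "bm25 scores for reranking"]
print(listwise_rank("bm25 reranking", docs))
# → [2, 0, 1]
```

The listwise call sees every document at once and emits a full permutation, which is what makes it cheaper in aggregate tokens than scoring each (query, document) pair separately.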
# 2.3 Leveraging numerical information in language models
There have been several prior works studying how both BERT-based (encoder) and decoder-only LLMs understand numerics. [12] provide a comprehensive survey of mathematical LLMs, covering CoT, tool use, instruction tuning, etc. Similarly, [1] provides an overview of LLM abilities in problem solving, math reasoning, geometry, and so on. Askari et al. [2] fine-tuned a BERT-based cross-encoder reranker by injecting BM25 scores along with the document tokens and found improvements over a pointwise cross-encoder setup. However, it is unclear whether BM25 scores can boost recent LLM capabilities under more realistic settings, namely without fine-tuning and in listwise reranking. Our work demonstrates the effectiveness of injecting BM25 scores in a zero-shot listwise setting with no fine-tuning.
# 3 Proposed Method
We now describe our proposed method. InsertRank involves injecting the retriever’s BM25 score into the listwise reranking setting.
$$
( q , D_1 , b_1 , \ldots , D_n , b_n ) \to \mathbf{M} \to r_1 , r_2 \ldots r_n
$$
Here, $q$ is the query, $D _ { 1 } , D _ { 2 } , \ldots D _ { n }$ is the list of documents passed to the reranker, $b _ { 1 } , b _ { 2 } , \ldots b _ { n }$ are the BM25 scores associated with each document, and $r _ { 1 } , r _ { 2 } , \ldots r _ { n }$ is the reranked list of document identifiers.
In addition, in our experiments with BRIGHT and R2MED, the documents are passed in decreasing order of their BM25 scores, i.e., $b _ { 1 } > b _ { 2 } > \ldots > b _ { n }$.
For the BRIGHT benchmark, inspired by the Rank-1 paper, we leverage their queries augmented by GPT-4 chain of thought (CoT), as this is reported to give the best NDCG@10 scores with BM25. Similarly, for the R2MED benchmark, we leverage the HyDE query reformulation [7] mentioned in their work, where a hypothetical document serves as the reformulated query, as it is reported to give the best NDCG@10 scores in their BM25 first-stage retrieval setting. In a listwise setting with BM25 score injection, the query and documents, along with the scores, are passed as follows:
Table 1: Performance in BRIGHT benchmark (P - Pointwise, L - Listwise, only retrieval if neither P nor L mentioned)
- for baseline comparisons, we have taken the best results from each of the above works
Table 2: Performance in R2MED benchmark
- for baseline comparisons, we have taken the best results from each of the above works
<instructions>: You are also given the BM25 scores from a lexical retrieval system. <query> <doc_1, BM25 score: b_1, doc_2, BM25 score: b_2, ..., doc_n, BM25 score: b_n>
Other than the ablation setting in Table 4, documents are ordered by decreasing order of BM25 scores.
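A sketch of how such a prompt can be assembled follows. The wording and formatting are illustrative, not InsertRank's verbatim prompt; the `shuffle` flag mirrors the Table 4 ablation:

```python
# Listwise reranking prompt construction with BM25 score injection.
import random

def build_prompt(query, docs_with_scores, shuffle=False, seed=0):
    """docs_with_scores: list of (doc_text, bm25_score) tuples."""
    pairs = sorted(docs_with_scores, key=lambda p: p[1], reverse=True)  # b_1 > b_2 > ...
    if shuffle:  # ablation: break the BM25 ordering while keeping the scores
        random.Random(seed).shuffle(pairs)
    lines = ["You are also given the BM25 scores from a lexical retrieval system.",
             f"<query> {query}"]
    for i, (doc, b) in enumerate(pairs, 1):
        lines.append(f"<doc_{i}> {doc} (BM25 score: {b:.2f})")
    lines.append("Rank the documents by decreasing relevance and return their identifiers.")
    return "\n".join(lines)

print(build_prompt("treatment for x", [("doc A", 3.1), ("doc B", 7.9)]))
```

Because the scores are plain text appended per document, the extra token cost is negligible compared to the documents themselves.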
In a reasoning setting, LLMs are known to exhibit issues such as hallucinations [8], incorrect reasoning [9], and brittleness with respect to changing numbers and names [14]. Overthinking is also established as a common issue in reasoning models: a tendency of LLMs to produce very verbose reasoning chains for simpler problems, leading to issues like concept drift [3, 4]. By providing a strong lexical relevance signal such as BM25 scores, the goal is to ground the reasoning process with respect to the first-stage retriever and prevent the model from running into issues like overthinking and concept drift.
By injecting the BM25 scores in the LLM reranking step, we provide a low-cost solution for improving reasoning-centric LLM reranking. Our solution produces consistent improvements across two reasoning-centric retrieval benchmarks with no extra fine-tuning cost and negligible additional token costs.
For all our experiments, we utilize the official repository of BRIGHT and R2MED. For the LLM implementations of Gemini2.0-flash, GPT-4o and Deepseek-R1, we use their respective official APIs.
We compare InsertRank with multiple baselines:
(1) ReasonIR [20], which trains a retriever with hard synthetic negatives and incorporates a hybrid BM25 and pointwise reranking setup on top of it
(2) Rank1-32B [23], which finetunes a 32B parameter model from Deepseek-R1’s reasoning traces
(3) Rank-R1 GRPO [25], which finetunes an LLM using reinforcement learning based methods for listwise ranking
(4) JudgeRank [15], which is a prompting approach for pointwise reranking.
(5) Rank-K [24], which fine-tunes a QwQ-32B model and uses a sliding-window-style listwise ranking approach.
# 4 Results
The results of our experiments are shown in Tables 1 and 2. The top-performing setting scores an average of 37.5 on the BRIGHT benchmark. We observe consistent gains from injecting BM25 scores into the prompt across multiple LLM families: Gemini, GPT-4, and Deepseek. The results show consistent improvement over vanilla LLM listwise reranking with just queries and documents. By injecting BM25 scores into the prompt, we show gains of 3.2% on Gemini 2.0 Flash, 16.3% on Gemini 2.5 Flash, and 0.8% on GPT-4o and Deepseek-R1, compared to just using the raw queries and documents. While we are able to use the full length of the documents with the Gemini models, due to context length limitations we use only the first 1800 tokens for the GPT-4o and Deepseek series of models.
We observe similar gains on the R2MED benchmark, with BM25 injection consistently surpassing the average ranking quality of the vanilla listwise setting, which takes just the documents in the prompt. The gains are 0.8% for Gemini 2.0 Flash, 2.2% for Gemini 2.5 Flash, and 0.5% for the GPT-4o and Deepseek family of models.
# 5 Ablations
In this section, we analyze the effectiveness of our proposed method with respect to 1) normalization and 2) shuffling of input documents.
# 5.1 Scaling and normalization of BM25
We additionally examine the effect of different scales and how LLMs perceive them. Since BM25 scores are normally not restricted to a particular range, we perform an experiment with normalized BM25 scores. As in the previous experiments, we use two settings: one with BM25 scores injected and one without. Finally, we also examine the effect of scaling, wherein normalized scores are scaled from 0 to 100.
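The two ranges can be produced with a one-line rescaling. Min-max normalization is one plausible reading of "normalized" here; the exact scheme is an assumption:

```python
# Min-max normalization of BM25 scores onto the two ranges studied in this
# ablation (0-1 and 0-100).

def normalize(scores, hi=1.0):
    lo_s, hi_s = min(scores), max(scores)
    if hi_s == lo_s:          # degenerate case: all scores equal
        return [hi for _ in scores]
    return [hi * (s - lo_s) / (hi_s - lo_s) for s in scores]

bm25 = [12.4, 7.9, 3.1]
print(normalize(bm25))            # scaled to 0-1
print(normalize(bm25, hi=100.0))  # scaled to 0-100
```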
The ablation results are reported for both R2MED and BRIGHT on the Gemini 2.0 Flash model. As evidenced in Table 3, R2MED shows around a 0.4% decrease when BM25 scores are normalized to 0-1 and a 0.8% increase when they are normalized to 0-100. While there is a small decrease in the 0-1 normalization setting, the 0-100 normalization shows a marginal improvement.
Similarly, for BRIGHT we observe marginal performance gains when using 0-100 normalization and a very slight decrease when scores are normalized to 0-1. The results listed in Table 3 indicate a 0.58% decrease when normalizing to 0-1 and a 0.5% increase when normalizing to 0-100. With NDCG@10 scores across normalization settings also beating the vanilla listwise setting, the results demonstrate the robustness of InsertRank to normalization and scaling.
# 5.2 Shuffling order of documents
LLMs are well known to prefer documents at the beginning and end of the context [11]. In order to validate the effectiveness of the proposed approach, we shuffle the document tuples, where each tuple is of the form $( D , B )$, with $D$ the document and $B$ its associated BM25 score. As in the previous experiments, we use two settings: one with the BM25 scores injected and one without. Unlike normalization, we observe divergent results: while the BRIGHT benchmark is robust to shuffling under BM25 injection and shows gains, R2MED shows a consistent decrease when the documents are shuffled. As in the previous ablation, we report results for both BRIGHT and R2MED on Gemini 2.0 Flash. As evidenced in Table 4, BRIGHT in the shuffled setting with BM25 injection demonstrates a 9.4% increase relative to the vanilla setting. However, there is a 1.1-point absolute decrease compared to the original setting, where documents are passed in decreasing order of BM25 scores. This establishes that listwise reranking methods in general are very sensitive to the initial ordering for reasoning-centric retrieval/reranking.
Table 3: Effect of normalized BM25 scores
Table 4: Effect of shuffling on R2MED | Large Language Models (LLMs) have made significant strides across various information retrieval tasks, particularly as rerankers, owing to their strong generalization and knowledge-transfer capabilities acquired from extensive pretraining. In parallel, the rise of LLM-based chat interfaces has raised user expectations, encouraging users to pose more complex queries that necessitate retrieval by "reasoning" over documents rather than through simple keyword matching or semantic similarity. While some recent efforts have exploited the reasoning abilities of LLMs for reranking such queries, considerable potential for improvement remains. In that regard, we introduce InsertRank, an LLM-based reranker that leverages lexical signals like BM25 scores during reranking to further improve retrieval performance. InsertRank demonstrates improved retrieval effectiveness on BRIGHT, a reasoning benchmark spanning 12 diverse domains, and R2MED, a specialized medical reasoning retrieval benchmark spanning 8 different tasks. We conduct an exhaustive evaluation and several ablation studies and demonstrate that InsertRank consistently improves retrieval effectiveness across multiple families of LLMs, including GPT, Gemini, and Deepseek models. With Deepseek-R1, InsertRank achieves a score of 37.5 on the BRIGHT benchmark and 51.1 on the R2MED benchmark, surpassing previous methods. | [
"cs.IR",
"cs.AI",
"cs.CL"
] |
# 1 Introduction
The energy of the electrons in molecules and materials serves as a glue between their atoms, determining the stability and properties of the chemical structure. Accurately computing the electron energy is therefore essential for predictive modeling across a broad spectrum of applications, including assessing whether a chemical reaction will proceed, whether a candidate drug molecule will bind to its target protein, whether a material is suitable for carbon capture, or whether a flow battery can be optimized for renewable energy storage. Unfortunately, computing this energy amounts to solving the Schrödinger equation, whose cost scales exponentially with the number of electrons $N$. Density functional theory (DFT)$^{1}$ provides an exact reformulation that replaces the many-electron wavefunction with the much simpler electron density. Although exact in principle, one component of the total energy — the exchange-correlation (XC) functional — remains unknown and must be approximated in practical implementations. The role of the XC functional is to capture intricate quantum many-body interactions of electrons using only the electron density, making this a universal functional that has the same form for all molecules and materials.$^{2,3}$ Equipped with a formalism$^{4}$ whose cost scales asymptotically as $O ( N ^ { 3 } )$, and supported by practical functional approximations pioneered over several decades,$^{5-12}$ DFT has become the computational workhorse in disciplines ranging from (bio)chemistry to catalysis to materials science.$^{13}$ However, DFT users must still choose from among hundreds of XC functional approximations,$^{11,13,14}$ often relying on dedicated benchmark studies or experimental results to guide the choice for the application at hand.
Crucially, current XC approximations still fall short of the accuracy required to reliably predict experimental outcomes across a wide range of chemical systems and properties.$^{11,13,14}$ Achieving this level of precision — commonly known as chemical accuracy — typically demands errors below 1 kcal/mol for processes involving making and breaking covalent chemical bonds.$^{13}$ This means, for example, that in silico screening pipelines for molecule and material discovery often pass too many candidates to the lab, with a large fraction failing experimental verification. In addition, lower-cost methods such as force fields and property-guided generative models trained on DFT data inherit these same limitations. The search for a general-purpose XC functional that meets chemical accuracy has persisted for over 60 years and is sometimes referred to as “the pursuit of the divine functional”,$^{15}$ a challenge with profound implications for accelerating scientific discovery.
Figure 1: Jacob’s ladder of density functional approximations$^{16}$ defines the rungs LDA, GGA, and meta-GGA by expanding the set of semi-local features they extract from an electronic density matrix into a grid representation. The next rungs, hybrid and double hybrid, extract more and more expensive wavefunction-based information directly from the density matrix. Skala departs from this ladder by extracting relatively cheap meta-GGA features, instead gaining expressivity by learning non-local interactions between grid points at a manageable and controllable cost.
The prevailing approach has been to handcraft functional forms based on a limited set of ingredients defined by the so-called Jacob’s ladder of DFT;$^{16}$ see Fig. 1. Like its biblical namesake, it is intended to guide users toward the “heaven” of chemical accuracy. The ingredients at the lower rungs retain the asymptotic $O ( N ^ { 3 } )$ scaling of DFT, but amount to XC functionals that use only (semi-)local information such as the density, its gradient, the Laplacian, and the Kohn-Sham kinetic energy density. However, it is well established that the exact XC functional exhibits non-local dependence on the density,$^{3}$ and in practice lower-rung approximations yield only limited accuracy. To improve accuracy, researchers began introducing non-locality through wavefunction-like ingredients.$^{8,9}$ While this approach enhances accuracy in many cases, it does not do so for all chemical problems, and it increases the computational complexity to $O ( N ^ { 4 } )$, $O ( N ^ { 5 } )$ or higher, thereby defining the higher rungs of the ladder. The vast majority of XC functionals are built from this hierarchy of Jacob’s ladder ingredients. They differ primarily in how these ingredients are combined and the number of parameters involved. The focus on these ingredients is driven by their compatibility with exact constraints, offering a rigorous theoretical foundation for building functional approximations.
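To make "(semi-)local information" concrete: on the meta-GGA rung, the XC energy is an integral of an energy density that depends only on quantities evaluated at each grid point. This is the standard textbook form, not the expression of any specific functional discussed here:

```latex
E_{\mathrm{xc}}[\rho] \;\approx\; \int e_{\mathrm{xc}}\big(\rho(\mathbf{r}),\, \nabla\rho(\mathbf{r}),\, \nabla^{2}\rho(\mathbf{r}),\, \tau(\mathbf{r})\big)\, \mathrm{d}\mathbf{r},
\qquad
\tau(\mathbf{r}) = \tfrac{1}{2}\sum_{i}^{\mathrm{occ}} \lvert \nabla \psi_{i}(\mathbf{r}) \rvert^{2},
```

where the $\psi_i$ are the occupied Kohn-Sham orbitals and $\tau$ is the kinetic energy density. Because $e_{\mathrm{xc}}$ is evaluated pointwise on the grid, this rung preserves the asymptotic $O(N^3)$ scaling noted above; the higher rungs replace or augment $e_{\mathrm{xc}}$ with wavefunction-based quantities whose evaluation couples grid points and raises the cost.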
As in many other areas of science, machine learning (ML) has been explored as a promising approach for developing accurate XC functionals, revealing the challenges and subtleties of this complex learning problem.17 Yet, to date this has not led to a meaningful shift in the established accuracy-cost tradeoff, and no ML-based functional has seen widespread adoption. There are two interlinked reasons for this. First, high-level data for this complex learning problem are very scarce, as they must be generated using computationally intensive wavefunction methods that require specialized expertise to be used at scale. Second, confined to this low-data regime, the vast majority of efforts have been limited to feeding handcrafted features into machine learning models, whether based on Jacob’s ladder ingredients18–22 or newly designed descriptors.23–26 This approach mirrors machine learning strategies used in computer vision and speech recognition prior to the deep learning (DL) revolution, which may partly account for the limited impact observed so far. In the absence of sufficient data, the handful of efforts to move beyond handcrafted features — though promising — have remained focused on model systems or narrowly defined problems.27–31
In this work, we present a key milestone toward a true deep learning solution to this long-standing scientific problem, addressing both the data scarcity challenge and several core machine learning challenges. Our initial focus is on the total atomization energy (TAE) — the energy required to dissociate a molecule into its constituent atoms — as it represents one of the most fundamental and challenging thermodynamic properties for electronic structure methods.32–34 From atomization energies, many other thermodynamic properties in complex chemical transformations involving multiple bond rearrangements can be predicted. Using an efficient wavefunction-based protocol with an accuracy of within 1 kcal/mol relative to experiments, we have generated a highly diverse training set of approximately 80k TAEs, at least two orders of magnitude larger than existing datasets of comparable accuracy.35 We designed a neural network architecture that enables learning data-driven non-local representations essential for chemically accurate XC functionals, using only simple semi-local input features. The result is the Skala functional that reaches chemical accuracy on a well-established benchmark set for atomization energies. With modest additional training data covering properties beyond TAEs, Skala also reaches an accuracy competitive with the leading more computationally expensive hybrid rung functionals across general main group chemistry. Importantly, this is achieved with a scalable neural network design that allows us to retain the asymptotic complexity of semi-local DFT, and which naturally supports GPU acceleration. To further assess its practical utility, we demonstrate that Skala can make reliable predictions for equilibrium geometries and dipole moments.
Moreover, while we impose only a minimal set of exact constraints through Skala’s model design, we find that adherence to additional exact constraints emerges as more data is added to the training set. Together, these capabilities make the Skala functional already suitable for practical use. As we continue to generate large amounts of data to cover different portions of chemical space, Skala is poised to systematically improve its accuracy. The implications are far-reaching: making DFT fully predictive removes a fundamental bottleneck in shifting the center of gravity from laboratory-based experimentation to in silico discovery — spanning fields from drug and materials design to batteries and sustainable fertilizers.
# 2 Learning the XC functional: Basic challenges, solutions and practical settings
The success of DFT is based on the Kohn-Sham (KS) formalism,4 which decomposes the energy density functional into components that capture large effects such as the Pauli exclusion principle and long-range classical electrostatics, as well as the remaining unknown term that we aim to learn — the XC functional, which accounts for a smaller but crucial energy due to quantum many-body effects. The XC functional $E _ { \mathrm { x c } } [ \rho ]$ maps the electron density $\rho ( r )$ , a positive function over three-dimensional space, to a scalar value representing the XC energy. In practical implementations of KS DFT, all terms except for the XC functional are evaluated using the density represented in a basis set via the density matrix, with atom-centered Gaussian functions being the most commonly used basis functions in chemistry. Focusing on the semi-local functional rungs, the XC energy is evaluated using a representation of the electron density on a large integration grid. For molecules containing up to several hundreds of atoms, the integration grids typically consist of ${\sim}10^4$–$10^6$ points. The learning problem we address is to obtain an accurate $E _ { \mathrm { x c } } ^ { \theta } [ \rho ]$ from the large irregular point cloud representing the local density features on the grid, while learning the crucial non-local representations from data with a neural network architecture with parameters $\theta$ . The learned XC functional should have a well-defined limit when the grid becomes infinitely dense and show good convergence as the grid is refined. Aside from the more obvious challenge of obtaining highly-accurate reference energies (also referred to as “labels”) at scale, the learning problem faces other unique challenges:
1. Obtaining accurate ground-state densities at scale, which serve as input for the XC functional, is even more challenging than obtaining accurate energy labels at scale.36–38
2. Having access to accurate wavefunction energies and densities is still not sufficient to extract accurate labels for $E _ { \mathrm { x c } } [ \rho ]$ from wavefunction total energies. This stems from the fundamentally different way that the total energy is decomposed in Kohn–Sham DFT compared to wavefunction-based methods. 39–44
3. During inference, the XC functional is evaluated repeatedly as part of the self-consistent-field (SCF) KS equations to minimize the total energy of the given molecular system with respect to the density $\rho ( r )$ . Ensuring that the learned functional drives the system toward convergence at both the correct minimum energy and the correct minimizing density makes this learning task different from standard regression.
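Before turning to our solutions, the grid-based evaluation described above can be made concrete. Below is a minimal numpy sketch (illustrative only, not the paper's code) of a semi-local XC energy computed as a quadrature sum over grid points, using spin-unpolarized LDA exchange as the simplest example:

```python
import numpy as np

# Illustrative only: a semi-local XC energy evaluated as a weighted sum
# over an integration grid, here spin-unpolarized LDA exchange
#   E_x = -(3/4)(3/pi)^(1/3) * integral of rho^(4/3).
C_X = 0.75 * (3.0 / np.pi) ** (1.0 / 3.0)

def lda_exchange_on_grid(rho, weights):
    """Quadrature approximation: rho and weights are per-grid-point arrays."""
    return -C_X * np.sum(weights * rho ** (4.0 / 3.0))

# Toy radial grid for the density rho(r) = exp(-r^2):
r = np.linspace(1e-6, 10.0, 20000)
dr = r[1] - r[0]
rho = np.exp(-r ** 2)
weights = 4.0 * np.pi * r ** 2 * dr          # spherical shell volumes
E_x = lda_exchange_on_grid(rho, weights)

# Analytic value: the integral of rho^(4/3) over R^3 is (3*pi/4)^(3/2) here.
E_exact = -C_X * (3.0 * np.pi / 4.0) ** 1.5
print(E_x, E_exact)
```

Real DFT grids are three-dimensional and atom-centered rather than radial, but the principle is the same: the energy is a weighted sum over an irregular point cloud.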
Previous ML attempts at learning the XC functional have proposed and analyzed several solutions to all these challenges,18,21–23,28,31 with many of them too computationally demanding for the much larger-scale training considered in this work. We address these challenges with a training procedure that consists of a pre-training phase and a fine-tuning phase. To tackle challenges 1 and 2, in the pre-training phase, we train the model with a straightforward reaction energy regression loss using $E _ { \mathrm { x c } } ^ { \theta }$ evaluated on densities $\rho _ { \mathrm { B 3 L Y P } }$ from another approximate XC functional (B3LYP8,45) and $E _ { \mathrm { x c } }$ labels, as detailed in Sec. B.1. These labels are extracted from accurate wavefunction energies by subtracting the other KS energy components using B3LYP KS orbitals. Leveraging approximate B3LYP densities during training, as introduced by Kirkpatrick et al.,21 along with the large-scale data we generated,35 enables us to expose the model to a broad range of densities and energies. To tackle the third challenge, in the fine-tuning phase, the model is trained using its own SCF densities, generated on the fly during training. This aims to close the gap between the accuracy achieved when evaluating the functional on the fixed input densities from the pre-training stage, and the accuracy obtained when evaluating the functional on its own SCF densities. Crucially, this procedure does not require backpropagating through the SCF cycle, as described in more detail in Sec. B.4. During the SCF fine-tuning phase we monitor the aforementioned accuracy gap on a holdout validation set, as well as the accuracy of our SCF densities by comparing dipole moments with accurate labels available in the literature.46 We stop the fine-tuning when our SCF density stops improving while the accuracy gap is still decreasing.
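The control flow of the SCF fine-tuning phase can be sketched with toy stand-ins (none of these functions are the paper's actual implementation): the model's own self-consistent density is regenerated each step and treated as a constant, so no gradient flows back through the SCF cycle.

```python
# Toy stand-ins only (not the paper's implementation): the point is the
# control flow of SCF fine-tuning, where the model's own self-consistent
# density is regenerated on the fly and treated as data, so no gradient
# flows back through the SCF cycle.

def model_energy(theta, rho):
    # stand-in for E_xc^theta evaluated on a given density
    return theta * rho

def run_scf(theta):
    # stand-in for a full SCF cycle driven by the current functional;
    # its output is used as data only (detached from the gradient)
    return theta / (1.0 + theta)

target = 0.5        # stand-in for an accurate reference energy
theta = 2.0         # model parameter
lr = 0.1

losses = []
for step in range(200):
    rho_scf = run_scf(theta)             # no gradient through this call
    err = model_energy(theta, rho_scf) - target
    losses.append(err * err)
    grad = 2.0 * err * rho_scf           # d(loss)/d(theta) with rho held fixed
    theta -= lr * grad

print(losses[0], losses[-1], theta)
```

Even though the density depends on the parameters, treating it as fixed still drives the loss down in this toy; the paper monitors a holdout accuracy gap and dipole moments to decide when to stop.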
Several mathematical properties of the XC functional are known, usually referred to as exact constraints.3,10,47,48 Following a well-established practice in DFT, we facilitate the satisfaction of some of the most energetically relevant constraints (such as the high-density uniform coordinate scaling, size-consistency, and the Lieb-Oxford lower bound $^ { 4 9 }$ ) by constructing Skala as
$$
E _ { \mathrm { x c } } ^ { \theta } [ \rho ] = - \frac { 3 } { 4 } \left( \frac { 6 } { \pi } \right) ^ { \frac { 1 } { 3 } } \int \left( \rho ^ { ( \uparrow ) } ( r ) ^ { 4 / 3 } + \rho ^ { ( \downarrow ) } ( r ) ^ { 4 / 3 } \right) f _ { \theta } [ \mathbf { x } [ \rho ] ] ( r ) d r ,
$$
where $\rho ^ { ( \uparrow ) }$ and $\rho ^ { ( \downarrow ) }$ are the densities of the two spin channels and $f _ { \theta }$ is a bounded enhancement factor. While the vast majority of previous ML attempts only learned the enhancement factor with a local function $f _ { \boldsymbol { \theta } } ( \mathbf { x } [ \boldsymbol { \rho } ] ( \boldsymbol { r } ) )$ of the given hand-designed input features $\mathbf { x } [ \rho ] ( r )$ , $^ { 1 8 - 2 6 }$ our DL approach models the enhancement factor as a neural functional, similar in spirit to neural operators that have been applied to other fields. 50 The architecture for the enhancement factor learns new relevant non-local (but finite range) representations from the input features, hence the explicitly distinguishing notation $f _ { \boldsymbol { \theta } } [ \mathbf { x } [ \boldsymbol { \rho } ] ] ( \boldsymbol { r } )$ .
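In practice Eq. (1) is evaluated as a quadrature sum over the integration grid. A minimal numpy sketch (hypothetical helper names), using the scaled-sigmoid bound on the enhancement factor that Skala employs:

```python
import numpy as np

# Hypothetical helper names; the prefactor and spin structure follow Eq. (1),
# and the enhancement factor is bounded to (0, 2) with a scaled sigmoid.
PREF = 0.75 * (6.0 / np.pi) ** (1.0 / 3.0)

def bounded_enhancement(logits):
    # maps any real network output into the open interval (0, 2)
    return 2.0 / (1.0 + np.exp(-logits))

def exc_quadrature(rho_up, rho_dn, logits, weights):
    f = bounded_enhancement(logits)
    return -PREF * np.sum(weights * (rho_up ** (4/3) + rho_dn ** (4/3)) * f)

# Sanity check: with f = 1 (zero logits) and equal spin channels, Eq. (1)
# reduces to the spin-unpolarized LDA-exchange quadrature.
rng = np.random.default_rng(0)
rho = rng.uniform(0.1, 1.0, size=100)       # toy densities on 100 grid points
w = rng.uniform(0.0, 0.1, size=100)         # toy quadrature weights
e = exc_quadrature(rho / 2, rho / 2, np.zeros(100), w)
e_lda = -0.75 * (3.0 / np.pi) ** (1/3) * np.sum(w * rho ** (4/3))
print(e, e_lda)
```

The bound on $f_\theta$ is what lets the functional satisfy the Lieb-Oxford-type constraint regardless of what the network outputs.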
It is worth noting that some approaches incorporate hand-crafted non-locality on the DFT grid to model dispersion 51–53 — a long-range, subtle yet crucial component of the XC energy — essential for capturing interactions that do not involve the making or breaking of covalent chemical bonds.54 Our focus in this first milestone is very different, as we look at thermochemistry (the energy to form and break covalent bonds). We aim to show for the first time that learned non-locality can reach chemical accuracy given sufficient training data and at practical computational cost. This opens the path to a deep-learning, data-driven, systematically improvable approach to the universal XC functional, away from expensive hand-designed features. In particular, the accuracy in main-group thermochemistry has been dominated for decades by the accuracy/cost trade-off of Jacob’s ladder, which we aim to disrupt with this approach. For this reason, we do not attempt to model dispersion yet, and train our functional with a fixed D3 dispersion correction.55,56 We leave the learning of dispersion effects using our architecture for future work.
# 2.1 Skala: A model for scalable non-local representation learning
Skala’s enhancement factor in Eq. (1) is a non-local functional modeled with a deep neural network that takes as input a set of semi-local, density-dependent features $\mathbf { x } [ \rho ]$ from the standard meta-generalized-gradient approximation (meta-GGA) $O ( N ^ { 3 } )$ rung, and which are represented on the aforementioned large irregular integration grid. The challenge here is to design an accurate XC functional that models intricate non-local interactions across the grid in order to achieve the accuracy that is often only attainable by more expensive functionals of a higher rung, while maintaining a computational cost comparable to functionals from the meta-GGA rung. While a naive solution with all-to-all communication across the grid would enable non-local representation learning, it is not a scalable design, since the cost of doing so on grids of the order of $1 0 ^ { 4 }$ to $1 0 ^ { 6 }$ points quickly grows out of control. Instead, Skala introduces a second coarse grid with far fewer points, 31 which acts as an intermediary layer through which the points on the finer grid can communicate.
Fig. 2 shows the overall schematic of the neural network architecture. Starting from the input meta-GGA features, the 7 semi-local inputs are log-transformed, followed by a small multilayer perceptron (MLP) that acts strictly locally on each grid point. The MLP is applied twice, once to each spin-ordering of the transformed features, followed by an averaging operation. This yields a spin-symmetrized semi-local hidden representation that serves as input for the rest of the model. By making the hidden layer spin symmetric before feeding it through any non-local computation across the grid, we avoid having to run the more expensive part of the non-local neural network twice, saving computational cost.
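The spin-symmetrization step can be sketched in a few lines of numpy (toy feature and layer sizes, not Skala's):

```python
import numpy as np

# Toy feature and layer sizes (not Skala's): the same MLP is applied to both
# orderings of the spin-channel features and the outputs are averaged, so the
# hidden representation is invariant to swapping the spin channels.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))

def mlp(x):
    return np.maximum(x @ W1, 0.0) @ W2      # tiny two-layer MLP

def spin_symmetric_features(x_up, x_dn):
    # x_up, x_dn: per-grid-point feature vectors for the two spin channels
    both = np.concatenate([x_up, x_dn], axis=-1)
    swapped = np.concatenate([x_dn, x_up], axis=-1)
    return 0.5 * (mlp(both) + mlp(swapped))

x_up = rng.normal(size=(5, 4))               # 5 grid points, 4 features/spin
x_dn = rng.normal(size=(5, 4))
h1 = spin_symmetric_features(x_up, x_dn)
h2 = spin_symmetric_features(x_dn, x_up)     # spin channels exchanged
print(np.max(np.abs(h1 - h2)))
```

Because the symmetrization happens before any non-local computation, the expensive part of the network only needs to run once per grid point, as noted above.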
Before the spin-symmetrized features are passed into the non-local interaction model, they are projected to a lower-dimensional hidden vector. Subsequently, the coarse points collect non-local information from the fine grid, analogous to the accumulation of multipole moments. More specifically, for each coarse point, the local hidden features on the integration grid are projected onto a product of radial basis functions and spherical harmonics that depend on the distance vector between the coarse and fine points, followed by an integration over space of all fine grid points. While one could consider further processing the coarsened features using message-passing layers on the coarse grid, 31 in preliminary experiments, we found this to lead to significant overfitting behavior.
Figure 2: (a): Skala’s architecture, where $G$ is the size of the DFT integration grid, and $C$ is the number of coarse points. After transforming a set of 7 meta-GGA features with a log-transform, we generate spin-symmetric hidden features by applying the same MLP to both spin-orderings and averaging. After local processing, we apply a non-local interaction model between grid points. The interactions, which we expand upon in (b), are centered around coarse points coinciding with the positions of nuclei, and are then reassembled with a soft partitioning. We feed both the local and non-local features into a final MLP which produces an enhancement factor to be multiplied with a scale-function based on the local density and finally integrate over the grid to get $E _ { \mathrm { x c } } ^ { \theta }$ . (b): While Skala uses only meta-GGA features, it models non-local effects with communication between grid points indirectly through selected coarse points. We apply the logic in this figure for each coarse point, and for each spherical harmonic level $\ell = 0 , 1 , 2 , 3$ . First, the local features on a grid are interpreted as functions. These functions are pointwise multiplied by $2 \ell + 1$ spherical harmonics and 16 radial basis functions according to the grid shown in the figure. Each product in the grid is integrated and yields a scalar value. The resulting scalars are mixed linearly, allowing interactions between the different radial basis functions. For each radial basis function, the mixed scalars become coefficients for the same spherical harmonic basis, yielding new functions (represented on a grid) that capture non-local interactions between grid points. Note that these 16 resulting functions have a spherical frequency of order $\ell$ and will be combined with the other orders before being reassembled via a distance-based soft partitioning of space shown in (a).
Instead, using the same product basis of radial and spherical components for each coarse point, we construct functions that when evaluated on the finer grid yield non-local hidden features on each fine grid point, which are invariant with respect to the Euclidean symmetry. In order to ensure that the non-local interaction between the coarse and fine points (and therefore also between the fine points) has a finite range, enabling the model to satisfy the size-consistency constraint, the radial basis functions are modulated by an envelope function $^ { 5 7 }$ that smoothly decays to zero beyond 5 bohr.
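A heavily simplified sketch of this coarse-grid communication (only the $\ell = 0$ channel, an illustrative Gaussian radial basis, and not Skala's exact design) shows how the envelope makes the interaction strictly finite-ranged:

```python
import numpy as np

# Illustrative basis choices, not the paper's exact ones: fine-grid features
# are accumulated onto coarse points with radial basis functions, mixed
# linearly, and broadcast back to the fine grid. An envelope that vanishes
# beyond R_CUT = 5 bohr keeps the interaction finite-ranged.
R_CUT = 5.0

def envelope(d):
    # smooth cutoff: 1 at d=0, exactly 0 for d >= R_CUT
    x = np.clip(d / R_CUT, 0.0, 1.0)
    return (1.0 - x) ** 2 * (1.0 + 2.0 * x)

def radial_basis(d, n_basis=4):
    # illustrative Gaussian radial basis modulated by the envelope
    centers = np.linspace(0.0, R_CUT, n_basis)
    return np.exp(-(d[..., None] - centers) ** 2) * envelope(d)[..., None]

def nonlocal_features(r_fine, h, w, r_coarse, mix):
    d = np.linalg.norm(r_fine[:, None, :] - r_coarse[None, :, :], axis=-1)
    phi = radial_basis(d)                           # (G, C, K)
    moments = np.einsum('g,g,gck->ck', w, h, phi)   # gather onto coarse points
    mixed = moments @ mix                           # linear mixing of radial channels
    return np.einsum('gck,ck->g', phi, mixed)       # broadcast back to fine grid

rng = np.random.default_rng(0)
r_fine = np.vstack([rng.normal(size=(20, 3)),          # points near the origin
                    rng.normal(size=(5, 3)) + 100.0])  # points far away
h = rng.normal(size=25)                    # per-point hidden feature
w = rng.uniform(0.0, 1.0, 25)              # quadrature weights
r_coarse = np.zeros((1, 3))                # single coarse point at the origin
y = nonlocal_features(r_fine, h, w, r_coarse, rng.normal(size=(4, 4)))
print(y[-5:])   # far points lie outside the envelope: no non-local signal
```

Gathering onto $C$ coarse points costs $O(G \cdot C)$ rather than the $O(G^2)$ of all-to-all communication, which is what keeps the non-local module scalable.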
Finally, the non-local hidden representations are concatenated with earlier semi-local hidden features, processed through a purely local MLP, and projected down to a scalar value per grid point. The scalar value is passed through a scaled sigmoid activation function with a range between 0 and 2,21 yielding a bounded enhancement factor that enforces the Lieb-Oxford lower bound.49 The result is plugged into the discretized equivalent of Eq. (1) to yield the predicted $E _ { \mathrm { x c } } ^ { \theta } [ \rho ]$ .
In Sec. A.5 we show that the hidden features on the coarse grid can be interpreted as multipole moments, and that the non-local module has the expressivity to model any two-body interaction. This could also be systematically increased to approximate any N-body interaction $^ { 5 8 , 5 9 }$ on the density grid to any desired accuracy. While in principle the non-local module has the ability to approximate non-local interactions independent of where the coarse points are placed, we take advantage of the structure of integration grids typically used in DFT — centered around the atomic centers — and place the coarsened points on the atomic centers. For more details on the neural network architecture, see Sec. A in the Supplementary Information.
Table 1: Datasets used in training, showing the original number of labels and the number of training labels after subtraction of the overlap with the test sets GMTKN55 and W4-17 and splitting off any validation sets.
# 2.2 Training data
Our training data comprise ${\sim}150\mathrm{k}$ reaction energies (Table 1) computed at the CCSD(T)/CBS level of theory or higher, as detailed in Sec. C. The largest subset of our training data (over half) is composed of ${\sim}80\mathrm{k}$ diverse total atomization energies for general molecules with up to five non-hydrogen atoms (MSR-ACC/TAE). Molecular structures from this dataset consisting of a single molecular fragment (95.4%) are released as the MSR-ACC/TAE25 dataset, described in Ref. 35. We extend the training data on total atomization energies with 14 publicly available linear and cyclic carbon clusters,60 and we further add total atomic energies to the training data to gauge total energies.
To this large thermochemistry dataset, we add smaller datasets that provide initial coverage of reaction kinetics, basic properties, and both intra- and intermolecular non-covalent interactions. For the last category, we draw on the relatively abundant publicly available data and select four datasets from the NCIAtlas collection (D442x10, SH250x10, R739x5, HB300SPXx10). $^ { 6 1 - 6 5 }$ The remainder of this first batch of training data was generated in-house, as detailed in Sec. C. To begin coverage of basic properties, we include atomic datasets of electron affinities (EAs) and ionization potentials (IPs) — including double and triple IPs — for elements up to argon, as well as proton affinities (MSR-ACC/PA) and ionization potentials (MSR-ACC/IP) for the molecules in the MSR-ACC/TAE dataset. For conformational energies, the MSR-ACC/Conf dataset includes all conformers within a 10 kcal/mol energy window of the molecules in MSR-ACC/TAE. To start covering kinetics, the MSR-ACC/Reactions dataset comprises elementary steps of reactions of small organic molecules with up to eight atoms, including both transition states and endpoints along the reaction pathways.
From all these datasets, we removed the overlap with the test sets GMTKN55$^{14}$ and W4-17$^{66}$ based on the molecular graphs of all systems with more than two atoms. We determine molecular graphs (with undetermined bond order) from the bond model of GFN-FF$^{67}$ and we subtract W4-17 from the training data by removing all reactions that contain any molecule that contains any covalently connected subgraph found in any molecule in W4-17 (some molecules in W4-17 are not recognized as fully connected by GFN-FF). Similarly, we subtract GMTKN55 from the training data by removing all reactions that share the same set of molecules (defined by the GFN-FF graph) with the same stoichiometric ratios. This prevents any leakage of W4-17 into the trained model and minimizes the leakage of GMTKN55. After subtraction of the test sets, $1\%$ of MSR-ACC/TAE25 is further reserved for validation as a holdout set. Both the holdout and training splits of MSR-ACC/TAE25 are released as part of Ehlert et al.35
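The W4-17-style subtraction rule can be illustrated with toy graph keys (real molecular graphs come from GFN-FF, and the actual rule matches any covalently connected subgraph rather than the whole-molecule keys used here):

```python
# Toy illustration of test-set subtraction: a training reaction is dropped
# if any of its molecules matches a graph key found in the test set.
# (Simplified: whole-molecule keys instead of connected-subgraph matching.)

def graph_key(edges):
    """Canonical, order-independent key for a molecular graph with
    undetermined bond order: a frozenset of sorted (element, element) bonds."""
    return frozenset((min(a, b), max(a, b)) for a, b in edges)

# Toy molecules as element-labeled bond lists
water = [("H", "O"), ("O", "H")]
methane = [("C", "H")] * 4
co2 = [("C", "O"), ("C", "O")]

test_keys = {graph_key(water)}   # pretend water appears in the test set

train_reactions = [
    {"name": "combustion", "molecules": [methane, co2, water]},
    {"name": "dry",        "molecules": [methane, co2]},
]
kept = [r for r in train_reactions
        if not any(graph_key(m) in test_keys for m in r["molecules"])]
print([r["name"] for r in kept])   # the reaction involving water is removed
```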
As explained in Sec. 2, in the pre-training phase we evaluate our model at fixed densities using B3LYP $^ { 8 , 4 5 }$ in a def2-QZVP basis set,68 or with a ma-def2-QZVP basis set $^ { 6 9 }$ for all reactions containing anions. Using the fixed densities, we compute the relative energy of a reaction from the B3LYP total energies by replacing the B3LYP XC energies with the XC energies predicted with our functional. To regularize the trained model with respect to numerical variations on the grid, we use eight distinct integration grids, using level 2 and level 3 from PySCF $^ { 7 0 }$ with four different angular integration schemes.
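The fixed-density energy swap amounts to simple bookkeeping; a sketch with hypothetical numbers (in hartree):

```python
# Sketch of the fixed-density energy swap described above: at the frozen
# B3LYP density, the model's total energy is the B3LYP total energy with
# the B3LYP XC energy replaced by the model's prediction. Numbers below
# are hypothetical.

def model_total_energy(e_total_b3lyp, e_xc_b3lyp, e_xc_model):
    return e_total_b3lyp - e_xc_b3lyp + e_xc_model

def reaction_energy(stoich, totals):
    # stoich: signed stoichiometric coefficients (products +, reactants -)
    return sum(c * e for c, e in zip(stoich, totals))

# Hypothetical reaction A -> B:
totals = [model_total_energy(-76.40, -9.30, -9.28),   # reactant A
          model_total_energy(-76.35, -9.25, -9.27)]   # product B
d_e = reaction_energy([-1.0, +1.0], totals)
print(d_e)
```

Only the XC term differs between the model and B3LYP at the fixed density, so all other KS components cancel out of the swap.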
Figure 3: (a): The plot’s horizontal axis shows weighted total mean absolute deviation (WTMAD-2) on the GMTKN55$^{14}$ test set for general main group thermochemistry, kinetics and non-covalent interactions. The vertical axis shows mean absolute error on the diverse atomization energies test set W4-17.66 Skala performs similarly to the best-performing hybrid functionals, and reaches near chemical accuracy (1 kcal/mol) on W4-17. (b): Shows the precise errors (in kcal/mol) on W4-17 and GMTKN55, corresponding to the numbers in the plot. For W4-17, the table shows both the MAE on the full set (shown in the plot) as well as on the set of 183 single-reference structures with $\%\mathrm{TAE}[(T)] < 10\%$ .72 All functionals, including Skala, were evaluated with a D3(BJ) correction, except for those with the VV10$^{53}$ correction, indicated with “-V”.
# 3 Accuracy and robustness of Skala
An XC functional is used to predict the energy and properties of new molecules: it must therefore show compositional generalization to different compounds than those seen during training, which should not be confused with the simpler configurational generalization to unseen configurations of the same system used in training.17 For this reason, as detailed in Sec. 2.2, we have subtracted the overlap with the two main test sets from the training set based on molecular graphs of all systems with more than two atoms. For atomization energies of small molecules, we test on the well-established W4-17 dataset,66 which contains 200 diverse representative atomization energies. These energies were computed using a very high-level wavefunction protocol that achieves a 95% ($2\sigma$) confidence interval of 0.17 kcal/mol and a 99% ($3\sigma$) confidence interval of 0.26 kcal/mol with respect to highly accurate experimental TAEs.71,72 For performance across main group chemistry, we test on the GMTKN55 database,14 which is the de facto standard benchmark for electronic structure methods, comprising 55 subsets covering five categories: basic properties, thermochemistry, kinetics, intermolecular non-covalent interactions, and conformational energies. The overall accuracy of an electronic structure method on this broad dataset is encoded in the weighted total mean absolute deviation (WTMAD-2).14
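For reference, the WTMAD-2 metric can be sketched as follows (assuming the GMTKN55 definition: each subset's MAD is weighted by its size and scaled by 56.84 kcal/mol, the average absolute reference energy over all 55 subsets, divided by the subset's own mean absolute reference energy):

```python
# Sketch of the WTMAD-2 metric, assuming the GMTKN55 definition: subsets
# with small reference energies (e.g. non-covalent interactions) are
# up-weighted, subsets with large reaction energies are down-weighted.

def wtmad2(subsets):
    """subsets: list of (n_reactions, mean_abs_ref_energy, mad), in kcal/mol."""
    total_n = sum(n for n, _, _ in subsets)
    return sum(n * (56.84 / e_ref) * mad for n, e_ref, mad in subsets) / total_n

# Two hypothetical subsets of 10 reactions each:
print(wtmad2([(10, 100.0, 2.0),   # large reaction energies, MAD 2.0
              (10, 5.0, 0.5)]))   # small interaction energies, MAD 0.5
```

The scaling makes errors comparable across subsets whose reference energies span orders of magnitude, which is why a single number can summarize all 55 subsets.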
The accuracy of Skala is readily apparent in Fig. 3, which displays the errors on the two benchmark sets alongside those of the best performing XC functionals in the first four rungs of Jacob’s ladder (up to the hybrid or $O ( N ^ { 4 } )$ rung). For atomization energies of small molecules — the domain represented by the largest training subset — Skala achieves chemical accuracy, outperforming the state-of-the-art range-separated hybrid functional $\omega$ B97M-V,73 reducing the error by half. Across the broader domain of main group chemistry, Skala already demonstrates competitive accuracy with the best hybrid XC functionals — a performance enabled by the inclusion of our first batch of training data beyond atomization energies, as detailed in Sec. 3.2. A breakdown of the unweighted errors on the different subsets of GMTKN55 is further shown in Fig. 4, where we compare Skala to the best performing GGA, meta-GGA and hybrid according to the WTMAD-2 metric. We find that Skala outperforms the best hybrid functional in several thermochemistry subsets, while remaining remarkably robust on subsets entirely out of distribution, including those with heavier elements, like Sn, Sb, Te and Pb in the HEAVYSB11 dataset, which were never seen in training. Here, Skala often surpasses the best meta-GGA and, even in the few worst cases, maintains GGA-level accuracy. This highlights the key advantage of training an XC functional on high-accuracy small-molecule data over training a force field: the largest contribution to the energy that governs generalization to different elements and bigger systems is described in KS DFT by other terms than the XC functional.
Figure 4: Breakdown of mean absolute errors (in kcal/mol) on the individual GMTKN55 subsets, comparing Skala, ωB97M-V, B97M-V and revPBE across the categories: basic properties and reaction energies for small systems, reaction barrier heights, intermolecular non-covalent interactions, intramolecular non-covalent interactions, and reaction energies for large systems and isomerization reactions. In the original figure, color encodes each functional’s error relative to ωB97M-V, from −6.00 dB (−75%) to +6.00 dB (+298%).
Given that achieving chemical accuracy on the W4-17 atomization energies test set is a key result, it warrants a more in-depth examination. Our large training dataset MSR-ACC/TAE includes a wide range of diverse and unusual bonding. As shown in Fig. 5, Skala achieves high-accuracy predictions on the holdout set, while these molecules are very challenging for other functionals. All the molecular structures in MSR-ACC/TAE have single-reference electronic structure character, meaning that they can be treated accurately with the thermochemical W1-F12 protocol based on CCSD(T)/CBS that has been used to label them. The test set W4-17 is instead labeled with the higher-level W4 protocol, based on CCSDTQ5, and also contains multi-reference molecules on which the W1-F12 protocol makes larger errors. To assess the label quality of MSR-ACC/TAE, we computed the MAE of the W1-F12 protocol against the W4 protocol on the single-reference subset of W4-17 (183 reactions out of 200), which is estimated to be 0.49 kcal/mol. Since Skala is trained on single-reference molecules with W1-F12 labels, we further analyze its performance on the single-reference subset of W4-17 in Fig. 3, comparing it to the full test set. For multireferential molecules, approximate XC functionals can often reach better accuracy through symmetry breaking,74,75 which we use for all the functionals on the three most challenging multireferential cases (C$_2$, $^1$BN and B$_2$). See Sec. E.1 and Table 6 for more details and statistics.
In Sec. D, we also examine key aspects of practical usability, such as SCF-cycle convergence (Table 4) and grid-size convergence (Fig. 11). As expected for an ML-based functional, Skala exhibits slightly less smooth behavior than traditional functionals, but all variations remain well within acceptable ranges for practical use.
# 3.1 The importance of learning nonlocal interactions
The non-local branch of our architecture is remarkably lightweight — Skala comprises just 276,001 parameters in total, with 265,473 allocated to the local branch. This compact design is crucial for maintaining scalability. It is therefore insightful to examine the performance gains enabled by the learned non-locality. In Fig. 6a, we show ablation results by training the local branch only and compare it to the full model that includes the nonlocal module, both on the full training set of Table 1. The local model arguably provides the accuracy limit for meta-GGAs on the chemistry covered by GMTKN55, which is not that far from the accuracy of the parameterized meta-GGA B97M-V.77 This ablation study is performed with settings that reduce computational demands, as
| Functional | MAE on the MSR-ACC/TAE25 holdout set [kcal/mol] |
|---|---|
| revPBE | 6.70 |
| r2SCAN | 6.81 |
| B97M-V | 10.22 |
| B3LYP | 4.42 |
| M06-2X | 3.74 |
| ωB97X-V | 5.08 |
| ωB97M-V | 4.31 |
| Skala | 0.37 |
| Label quality | 0.49 (estimated as avg. error of W1-F12 w.r.t. W4) |
Figure 6: (a): Accuracy of Skala’s nonlocal architecture compared with its local branch only, trained on all of the data in Table 1. (b): Data composition ablation from Table 1: results of training Skala on (A) MSR-ACC/TAE only; on (B) the public data NCIAtlas and W4-CC plus the Atomic datasets only; on A + B; and further adding all the other MSR-ACC data (C). In both ablations, for each setting we trained three models using different random seeds. SCF fine-tuning was limited to 1000 steps, and evaluation was performed on the smaller Diet GMTKN55.76
Figure 5: The MSR-ACC/TAE25 holdout set has the same distribution as part of our training set, but none of its molecules are used for training. The figure displays example molecules and the distribution of total atomization energies in this set. The table shows the errors of various functionals on the holdout set. The estimated quality of the W1-F12 labels used in MSR-ACC/TAE25 is computed as the error of W1-F12 against the more accurate W4 protocol on the single-reference subset of W4-17. The estimate is conservative because the W4-17 subset was created with a $\sim 10\%$ cutoff in %TAE[(T)], while MSR-ACC/TAE25 has a cutoff of 6% in %TAE[(T)]. All functionals, including Skala, were evaluated with a D3(BJ) correction, except for M06-2X which uses D3(0) and those with the VV10$^{53}$ correction, indicated with “-V”.
described in Sec. B.3, with SCF fine-tuning limited to 1000 steps and evaluation on the representative subset Diet GMTKN55,$^{76}$ which was designed to approximate the WTMAD-2 metric on the full GMTKN55 dataset.
# 3.2 Skala’s accuracy improves systematically with training data
Figure 6b reports an ablation study on training data composition, which shows systematic improvement of Skala as we add more diverse chemistry in training. With the same settings as in the previous section (1000 SCF fine-tuning steps and evaluation on Diet GMTKN55), we find that if we train on MSR-ACC/TAE only (A), Skala can reach chemical accuracy on W4-17, while performing at low-tier GGA level on GMTKN55. If we train only on the publicly available data in Table 1, which we denote with B and which is composed of NCIAtlas, W4-CC, and the atomic datasets TOT, EA, IP, then the model performs very poorly, with low accuracy and large inter-seed variance. When we add the non-covalent interactions and atomic data in B to MSR-ACC/TAE, we see that Skala maintains the accuracy on W4-17 while improving dramatically on GMTKN55. Finally, its performance continues to improve systematically as we add the latest MSR-ACC training data, covering conformers, reactions, IPs and PAs, denoted by C.
Figure 7: The kinetic correlation component $T_{\mathrm{c}}[\rho_\gamma]$ of $E_{\mathrm{xc}}$ as a function of the density scaling parameter $\gamma$. Results are shown for models trained with different data compositions, as well as the final Skala functional. From left to right: results of training Skala on MSR-ACC/TAE only (A); the public data NCIAtlas, W4-CC and the Atomic datasets (B); the combination of datasets (A + B); and adding all other MSR-ACC datasets (conformers, reactions, IPs and PAs) to the training data (A + B + C). The rightmost column shows results of the final Skala functional trained on all of A + B + C, which was trained with more compute. Positive values indicate that the exact constraint of $T_{\mathrm{c}}$ being positive is satisfied, while negative values indicate violations. More results for models trained with different random seeds can be found in Fig. 14.
# 3.3 The emergence of learned exact constraints with training data
Exact constraints of the XC functional have been pivotal in guiding the approximations that made DFT practical for thousands of applications in chemistry and materials science.48 Many are ingeniously built in by design,$^{7,10}$ lending robustness to the functionals that include them. In Skala, we imposed only minimal constraints to maximize model flexibility, making it interesting to explore whether exact constraints can emerge from data.
As part of the same data ablation study of Fig. 6b, we tracked whether the model learns to satisfy the positivity of $T_{\mathrm{c}}$,47 the kinetic correlation component of $E_{\mathrm{xc}}$. This constraint reflects the physical principle that correlation makes electrons move faster to avoid one another due to their Coulomb repulsion. In Fig. 7 we evaluate $T_{\mathrm{c}}^{\theta}[\rho_\gamma]$ as a function of the scaling parameter $\gamma$, which rescales the density as $\rho_\gamma(r) = \gamma^3 \rho(\gamma r)$, for all atoms from the Atomic TOT set in Table 1. We clearly observe that the constraint is violated when the model is trained only on MSR-ACC/TAE (A). In contrast, when the model is trained only on the public NCIAtlas, W4-CC, and Atomic datasets (B), the constraint is violated significantly less often, likely owing to the dissociation curves present in NCIAtlas, which carry a signal about the derivative term in $T_{\mathrm{c}}[\rho_\gamma] = \gamma^2 \frac{\mathrm{d}}{\mathrm{d}\gamma} \frac{E_{\mathrm{xc}}[\rho_\gamma]}{\gamma}$. When trained on the combined data (A + B), the model again shows violations once the TAE set (A) is included, given that dataset (A) contains a significantly larger number of reactions (almost 6 times more) than (B). Once all MSR-ACC data is added to the training set, including conformers, reactions, IPs, and PAs, we observe a definite signal that the functional has learned to satisfy the physical constraint correctly, which is also reflected in the results for the final Skala model, trained with more compute.
The emergence of Skala learning to satisfy this constraint for the largest composition of datasets likely stems from the fact that dataset C contains a sufficiently large proportion of data with relatively smaller density variations, such as those found in the MSR-ACC conformers and reactions datasets. For more detailed results, the reader is referred to Sec. E.5.
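The positivity check behind Fig. 7 can be sketched numerically: rescale the density, evaluate the functional, and differentiate $E_{\mathrm{xc}}[\rho_\gamma]/\gamma$ with respect to $\gamma$. The sketch below substitutes a toy closed-form $E_{\mathrm{xc}}(\gamma)$ for the real functional evaluation (an assumption purely for illustration; Skala itself is not reproduced here):

```python
import math

def exc_scaled(gamma):
    # Toy stand-in for E_xc[rho_gamma] (hypothetical, not Skala): a linearly
    # scaling exchange-like part plus a nonlinearly scaling correlation-like part.
    return -gamma - 0.1 * math.log(1.0 + gamma)

def kinetic_correlation(gamma, h=1e-5):
    """T_c[rho_gamma] = gamma^2 * d/dgamma (E_xc[rho_gamma] / gamma),
    evaluated with a central finite difference."""
    f = lambda g: exc_scaled(g) / g
    return gamma**2 * (f(gamma + h) - f(gamma - h)) / (2.0 * h)

# The exact constraint requires T_c > 0 at every scaling gamma > 0;
# for this toy functional the constraint is satisfied on the whole grid.
for g in (0.5, 1.0, 2.0, 4.0):
    assert kinetic_correlation(g) > 0.0
```

Note that a purely linearly scaling $E_{\mathrm{xc}}$ (exact exchange behavior) gives $T_{\mathrm{c}} = 0$; only the nonlinear part of the scaling contributes, which is why density-scaling data in training carries a signal about this constraint.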
# 4 Beyond energies: Densities and equilibrium geometries
Since labels for accurate densities and equilibrium geometries are not included in our training data, it is essential to verify that Skala maintains at the very least the baseline quality of standard semi-local DFT for these observables, to ensure its practical utility.
# 4.1 Densities
Starting with densities, it is important to recall that the energy error from a KS DFT calculation with a given XC functional can be decomposed into two components: a functional error, which is the error the functional would make if evaluated on the exact density, and a density-driven error, which is the error incurred by evaluating the functional on its own self-consistent density instead of the exact one.$^{78-80}$ These two errors can compensate each other,81,82 yielding XC approximations that improve energies by worsening their SCF densities, “straying from the path toward the exact functional”, quoting Medvedev et al.83 We train our functional on fixed approximate densities $\rho_{\mathrm{B3LYP}}$ and we further fine-tune it using on-the-fly calculated SCF densities for a
Figure 8: (a) Reaction error on the MSR-ACC/TAE25 holdout set as a function of the number of SCF fine-tuning steps (0–8k), shown with self-consistent densities and evaluated on $\rho_{\mathrm{B3LYP}}$ before fine-tuning. (b) Dipole accuracy46 of various functionals.
Table 2: Geometry optimization results. We optimized the geometries in the benchmark datasets LMGB35, HMGB11$^{84}$ and CCse21$^{85}$ with a set of functionals and compared bond lengths and bond angles to the ground truth values from these datasets. Numbers indicate average errors in Ångstrom or degrees and box plots show the quartiles of the error distribution. Skala was not specifically trained for the accuracy of optimal geometries, but performs similarly to other functionals in most benchmarks. All functionals, including Skala, were evaluated with a D3(BJ) correction, except for those with the VV10$^{53}$ correction, indicated with “-V”.
| Method | LMGB35 [Å] | HMGB11 [Å] | CCse21 bond lengths [Å] | CCse21 bond angles [°] |
|---|---|---|---|---|
| GFN2-xTB (tblite) | 0.021 | 0.030 | 0.008 | 0.81 |
| revPBE | 0.014 | 0.033 | 0.012 | 0.49 |
| r2SCAN | 0.006 | 0.012 | 0.004 | 0.28 |
| B97M-V | 0.007 | 0.023 | 0.005 | 0.40 |
| B3LYP | 0.007 | 0.026 | 0.004 | 0.38 |
| ωB97X-V | 0.009 | 0.040 | 0.005 | 0.24 |
| ωB97M-V | 0.008 | 0.010 | 0.005 | 0.18 |
| Skala | 0.014 | 0.032 | 0.012 | 0.26 |
small number of steps, to close the gap between the accuracy learned on $\rho_{\mathrm{B3LYP}}$ and that on the self-consistent densities $\rho_{\mathrm{Skala}}$ produced by Skala, as detailed in Sec. B.4. To ensure that this SCF fine-tuning does not rely on error compensation, we monitor the quality of the SCF density by comparing its dipole moments against a highly accurate dataset of 151 structures.46 Figure 8a illustrates how, on the TAE holdout set, the gap between the accuracy learned on B3LYP densities and the actual SCF evaluation of Skala closes during the fine-tuning process. We also report how the errors of the SCF Skala density behave during the fine-tuning. We clearly see a first phase in which the model improves both energies and densities, a second phase in which only energies improve, and a subsequent phase in which the SCF densities start to deteriorate, indicating that the model begins to exploit compensation between functional and density-driven errors: at this point, we terminate the fine-tuning. The final Skala error on the dipole dataset falls below the error of B3LYP and is close to the errors of the best hybrid functionals, as shown in Fig. 8b.
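The decomposition monitored during fine-tuning can be sketched as follows, under the common convention that the functional error is the approximation's error on the (near-)exact density and the density-driven error is the additional shift from moving to the approximation's own SCF density; all energy values below are hypothetical:

```python
def decompose_error(e_dfa_on_exact, e_dfa_on_scf, e_exact):
    """Split the total DFT error into functional and density-driven parts
    (one common convention; the paper cites refs. 78-80 for the formalism):
      functional error     = E_DFA[rho_exact] - E_exact
      density-driven error = E_DFA[rho_SCF]   - E_DFA[rho_exact]
    so their sum is the total error E_DFA[rho_SCF] - E_exact."""
    functional = e_dfa_on_exact - e_exact
    density_driven = e_dfa_on_scf - e_dfa_on_exact
    return functional, density_driven

# Hypothetical energies in arbitrary units.
f_err, d_err = decompose_error(e_dfa_on_exact=-100.2,
                               e_dfa_on_scf=-100.5,
                               e_exact=-100.0)
total = f_err + d_err
```

When `f_err` and `d_err` have opposite signs, the total error shrinks even though the density worsens, which is exactly the error compensation the dipole monitoring is designed to detect.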
# 4.2 Equilibrium geometries
One of the use cases for DFT is to predict the equilibrium structures of molecules by relaxing the positions of the nuclei to their lowest-energy configuration. We test geometries optimized with Skala against (semi-)experimental datasets that include light main group bond lengths (LMGB35),84 heavy main group bond lengths (HMGB11),84 and the bond lengths and bond angles of the 21 small molecules of the CCse21 set.$^{85}$ The results are shown in Table 2, where, besides comparing with functionals in different rungs, we also compare to the semi-empirical GFN2-xTB$^{86}$ method. Skala was not specifically trained for the accuracy of optimal geometries, and we see that its performance is of GGA quality or better in most benchmarks, with the worst outlier being the significantly out-of-distribution Pb–Pb bond length in the HMGB11 dataset. For details on the evaluation settings the reader is referred to Sec. D.5.
Figure 9: Left: Runtime for molecules with increasing molecular size. Calculations for GPU timings were performed on Azure NC24ADS V4 A100 virtual machines with Accelerated DFT,91 using the def2-TZVP basis set with density fitting (RIJ) for the Coulomb integrals for all functionals and exact exchange integrals for all hybrid functionals, def2-universal-jkfit as auxiliary basis set, gm3 grid level for integrating the exchange-correlation energy, Treutler grid pruning and the Mura–Knowles radial integration scheme. CPU timings were performed on Azure E32ADS V5 virtual machines with PySCF 2.7.0,70 using the def2-TZVP basis set, density fitting (RIJ) for the Coulomb integrals for all functionals, and density fitting (RIK) for exchange integrals for all hybrid functionals, def2-universal-jkfit as auxiliary basis set, grid level 2 for integrating the exchange-correlation energy, with the Treutler–Ahlrichs radial integration scheme and NWChem grid pruning. Lines show fitted power laws $a N_{\mathrm{orbitals}}^{n}$, disregarding offsets at smaller system sizes. The fitted power $n$ is reported in the legend for each functional. Right: A sample of the molecules used for evaluating timings of Skala in Accelerated DFT and PySCF. The systems are collected from Grimme,93 S30L,94 HS13L,95 and NCI16L.96
# 5 Computational cost of Skala
The computational cost of quantum chemistry methods is commonly expressed through its asymptotic scaling with system size, $O ( f ( N ) )$ , a convention we have followed so far in this paper. In practice, the prefactors of that scaling can differ by orders of magnitude between methods, the cost can be dominated by other terms for smaller to medium-sized molecules, and the bottleneck for scaling may be memory rather than compute. Moreover, hardware-specific optimizations and algorithmic advances continue to lower these scalings in practice.
For all these reasons, although our architecture design ensures that Skala has the same asymptotic scaling as meta-GGA semi-local DFT, we have to empirically verify its prefactor and actual cost as system size increases. A relevant analogy to clarify why this is crucially important is the following. At the hybrid rung of Jacob’s ladder we find both global hybrids and local hybrids. For global hybrids, the XC functional contains a fixed fraction of exact exchange evaluated on the basis set, which can be made computationally efficient but lacks universality, as different systems often require different optimal fractions. In contrast, the more flexible local hybrids allow the fraction of exact exchange to be position-dependent, requiring the exchange to be evaluated on the grid. Although both have the same asymptotic scaling, the latter has a much larger prefactor, with basic implementations being even more expensive than the double hybrids of the next rung. Despite impressive progress over the last decade,$^{87-89}$ less costly implementations of local hybrids are still very rare,$^{90}$ which has prevented their widespread use so far. A reasonable cost before any dedicated optimization is therefore essential for quick adoption of an XC functional in practical applications.
Figure 9 presents the computational runtimes of two non-optimized implementations of Skala: one GPU-based version integrated into Accelerated DFT,91 and one CPU-based version implemented in PySCF.70 For the GPU-based implementation in Accelerated DFT, we clearly observe that after a modest prefactor for small systems, Skala’s cost becomes the same as the semi-local meta-GGA r$^2$SCAN, at least 10 times lower than hybrid cost.92 The CPU-based implementation in PySCF shows a reasonable $\sim 3{-}4$ prefactor with respect to r$^2$SCAN. Skala’s scaling here is also affected by a suboptimal interface with PySCF, which does not take full advantage of basis function screening when computing the features, resulting in higher computational overhead. Therefore, this second test provides a rather loose upper bound to Skala’s cost.
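The power-law fits $a N_{\mathrm{orbitals}}^{n}$ referenced in Fig. 9 amount to an ordinary least-squares line in log-log space. A self-contained sketch on synthetic timings (the sizes and the $10^{-6}$ prefactor are made up purely for illustration):

```python
import math

def fit_power_law(n_orbitals, runtimes):
    """Least-squares fit of runtime ~ a * N^n by fitting a straight line
    to (log N, log t); the slope is the power n, the intercept gives a."""
    xs = [math.log(v) for v in n_orbitals]
    ys = [math.log(t) for t in runtimes]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    prefactor = math.exp(mean_y - slope * mean_x)
    return prefactor, slope

# Synthetic timings generated with exact quadratic scaling (a = 1e-6, n = 2),
# so the fit should recover those parameters up to floating-point error.
sizes = [100, 200, 400, 800]
times = [1e-6 * s**2 for s in sizes]
a, n = fit_power_law(sizes, times)
```

On real timing data one would, as the caption notes, first discard the small-system points where constant offsets dominate before fitting.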
The take-home message of these results is that already a very basic, non-optimized implementation of Skala has a cost comparable to functionals routinely used in practical applications. To put this in perspective: going back to the comparison with local hybrids, a basic implementation of the DM21$^{21}$ functional in PySCF has a computational cost more than 100 times higher than standard functionals, as shown in Fig. 9.

# Abstract

Density Functional Theory (DFT) is the most widely used electronic structure method for predicting the properties of molecules and materials. Although DFT is, in principle, an exact reformulation of the Schrödinger equation, practical applications rely on approximations to the unknown exchange-correlation (XC) functional. Most existing XC functionals are constructed using a limited set of increasingly complex, hand-crafted features that improve accuracy at the expense of computational efficiency. Yet, no current approximation achieves the accuracy and generality for predictive modeling of laboratory experiments at chemical accuracy, typically defined as errors below 1 kcal/mol. In this work, we present Skala, a modern deep learning-based XC functional that bypasses expensive hand-designed features by learning representations directly from data. Skala achieves chemical accuracy for atomization energies of small molecules while retaining the computational efficiency typical of semi-local DFT. This performance is enabled by training on an unprecedented volume of high-accuracy reference data generated using computationally intensive wavefunction-based methods. Notably, Skala systematically improves with additional training data covering diverse chemistry. By incorporating a modest amount of additional high-accuracy data tailored to chemistry beyond atomization energies, Skala achieves accuracy competitive with the best-performing hybrid functionals across general main group chemistry, at the cost of semi-local DFT.
As the training dataset continues to expand, Skala is poised to further enhance the predictive power of first-principles simulations.
"physics.chem-ph",
"cs.AI",
"cs.CE",
"cs.LG",
"physics.comp-ph"
] |
introduction to the dataset, establishing foundational benchmarks for future research. We envision this dataset as a valuable resource for advancing machine learning applications in neuro-oncology, supporting both academic research and clinical decision-support development. Dataset link: https://www.kaggle.com/datasets/briscdataset/brisc2025/
Keywords: MRI dataset, Segmentation, Classification, Brain Tumor
# 1 Introduction
Brain tumors are among the most critical medical conditions, necessitating precise and timely diagnosis for effective treatment and management [1]. Magnetic Resonance Imaging (MRI) plays a pivotal role in diagnosing and monitoring brain tumors due to its noninvasive nature and ability to provide detailed images of brain structures [2,3,4]. Despite significant advancements in medical imaging technologies, developing automated systems for tumor detection and segmentation remains a formidable challenge [5,6,7]. This difficulty stems primarily from the limited availability of high-quality labeled datasets tailored for these tasks, coupled with the inherent complexity and variability in tumor appearances across patients [8,9,10].
Existing brain tumor segmentation datasets, such as the BraTS [11], Figshare [12], and others, have significantly advanced the development of automated segmentation models. However, several limitations in these datasets drive the need for novel datasets to address emerging challenges in the field. For instance, the
BraTS dataset, while comprehensive and widely used, exhibits certain constraints, such as its reliance on preprocessed and standardized data that may not represent real-world variability in MRI acquisition protocols across institutions. Additionally, BraTS primarily focuses on gliomas and lacks representation of other tumor types, potentially limiting the generalizability of models trained on it [13,14]. The Figshare dataset, on the other hand, suffers from class imbalance and limited diversity in imaging conditions and patient demographics, which can restrict model robustness [15]. Moreover, annotation precision and quality remain critical issues in many publicly available datasets, where inconsistencies in labeling can adversely impact the training and evaluation of segmentation algorithms [16,17,18,19]. These limitations underscore the necessity of introducing a new dataset that offers balanced class distributions, multi-institutional diversity, and high-quality expert annotations to enhance the reliability and generalizability of automated brain tumor segmentation models.
To address these challenges, we present a meticulously curated brain tumor MRI dataset designed to advance research in tumor detection and segmentation. Our dataset comprises 5,000 high-resolution T1-weighted MRI images for training and 1,000 for testing, carefully selected for their suitability in visualizing brain tissue and tumor regions. Each image is accompanied by precise segmentation masks, created using advanced annotation tools and validated by radiologists and physicians to ensure accuracy. The dataset focuses on three of the most common brain tumor types (Glioma, Meningioma, and Pituitary tumors) as well as a "No Tumor" class, which is essential for developing models capable of distinguishing between healthy and tumorous brain scans. By excluding T2-weighted images and other less relevant imaging modalities, we ensure consistent data quality and minimize the risk of misclassification.
An important feature of this dataset is the inclusion of multiple imaging perspectives for each tumor, covering the Coronal, Sagittal, and Axial planes. This comprehensive approach captures a diverse range of tumor characteristics, enabling researchers to train models that generalize well across different views. Furthermore, the dataset follows a well-structured division into training and test sets, with 5000 images allocated for training and 1000 for testing, ensuring a balanced distribution of tumor types across both sets.
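A balanced train/test division like the 5,000/1,000 split described above is typically produced with a per-class (stratified) split. Below is a minimal sketch using only the standard library; the class names mirror the dataset's four categories, but the per-class counts and integer sample IDs are illustrative stand-ins for the actual image files:

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, test_fraction=1000 / 6000, seed=0):
    """Split (sample, label) pairs so that every class contributes the same
    train/test proportion, preserving class balance in both subsets."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    train, test = [], []
    for y, items in by_class.items():
        rng.shuffle(items)  # deterministic given the seed
        n_test = round(len(items) * test_fraction)
        test += [(s, y) for s in items[:n_test]]
        train += [(s, y) for s in items[n_test:]]
    return train, test

# Toy example: 1,500 dummy sample IDs per class, so the split yields
# 1,250 train / 250 test per class, i.e. 5,000 / 1,000 overall.
labels = [c for c in ("glioma", "meningioma", "pituitary", "no_tumor")
          for _ in range(1500)]
samples = list(range(len(labels)))
train, test = stratified_split(samples, labels)
```

Libraries such as scikit-learn offer the same behavior via `train_test_split(..., stratify=labels)`; the manual version above just makes the per-class bookkeeping explicit.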
The potential applications of this dataset extend beyond segmentation and classification tasks. It provides a valuable benchmark for evaluating the performance of deep learning models, particularly in medical imaging.
Researchers can use this dataset to explore various challenges, such as tumor boundary delineation, multi-class classification, and domain adaptation in medical imaging. Additionally, the dataset holds promise for clinical applications, including the development of decisionsupport systems to assist radiologists and physicians in diagnosing brain tumors with greater accuracy and efficiency.
In addition to introducing the dataset, we present a novel transformer-based model, Swin-HAFUNet, designed for brain tumor segmentation. This model adopts a hierarchical encoder-decoder architecture, leveraging Swin Transformer blocks to capture both local and global contextual information. It incorporates two key innovations: the Hierarchical Attention Fusion (HAF) module and the Contextual Bottleneck Enhancer (CBE). These components enhance the model’s ability to aggregate multi-scale features and refine semantic representations, resulting in improved segmentation performance across diverse tumor types.
The main contributions of this work are summarized as follows:
– We introduce a large-scale, high-quality brain tumor MRI dataset with expert-annotated segmentation masks and classification labels, covering three major tumor types (glioma, meningioma, pituitary) and a no-tumor category.
– The dataset includes multi-planar views (axial, sagittal, and coronal), enabling the development of models with robust cross-view generalization capabilities.
– We propose Swin-HAFUNet, a lightweight yet powerful segmentation model that integrates Swin Transformer blocks, the Hierarchical Attention Fusion (HAF) module, and the Contextual Bottleneck Enhancer (CBE) for effective multi-scale feature integration.
– Extensive experiments demonstrate that our model outperforms existing state-of-the-art methods in brain tumor segmentation, achieving the highest weighted mean IoU on the benchmark.
– We establish strong baseline results on the proposed dataset to support future research in brain tumor analysis and medical image segmentation.
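The weighted mean IoU named in the contributions above can be computed from a confusion matrix. Since the exact weighting is not spelled out at this point in the paper, the sketch below assumes the common convention of weighting each class IoU by its ground-truth pixel frequency:

```python
def weighted_mean_iou(conf):
    """Weighted mean IoU from a confusion matrix conf[true][pred]
    (rows = ground truth, columns = prediction). Each class IoU
    tp / (tp + fp + fn) is weighted by its ground-truth frequency.
    This is one common definition, assumed here for illustration."""
    n = len(conf)
    total = sum(sum(row) for row in conf)
    score = 0.0
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp
        fp = sum(conf[r][c] for r in range(n)) - tp
        denom = tp + fp + fn
        if denom:
            score += (sum(conf[c]) / total) * (tp / denom)
    return score

# Toy 2-class confusion matrix: class 0 = background, class 1 = tumor.
conf = [[90, 10],
        [5, 45]]
score = weighted_mean_iou(conf)
```

Here class 0 has IoU 90/105 with weight 100/150 and class 1 has IoU 45/60 with weight 50/150, so the weighted mean rewards accuracy on the dominant background class while still penalizing missed tumor pixels.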
The remainder of this paper is organized as follows: Section 2 reviews related work and existing datasets in brain tumor research. Section 3 provides essential background and key medical concepts related to brain tumors and MRI imaging. Section 4 presents a detailed description of the proposed dataset, including its structure, annotation process, and imaging modalities. Section 5 offers visual demonstrations and qualitative insights. Section 6 describes the proposed Swin-HAFUNet segmentation model. Section 7 provides experimental evaluations and comparative results. Finally, Section 8 concludes the paper and outlines future research directions.
# 2 Related Work
In recent years, significant advancements in deep learning have driven the development of automated brain tumor diagnosis systems. A key factor in achieving highperforming models is the availability of high-quality annotated datasets, which serve as a foundation for both segmentation and classification tasks. Numerous brain tumor datasets have been introduced, each varying in imaging modalities, annotation quality, tumor types, and overall dataset size. This section provides a comprehensive review of existing brain tumor datasets, categorizing them based on their intended tasks—segmentation or classification—and highlighting their key attributes, strengths, and limitations. By examining these datasets, we aim to underscore the challenges they present and justify the need for introducing our novel dataset with refined annotations and enhanced diagnostic utility.
# 2.1 Segmentation
When addressing brain MRI segmentation, the Brain Tumor Segmentation (BraTS) challenge remains a pivotal benchmark in the field [11]. BraTS, which undergoes annual updates, has evolved significantly, with its latest iteration—BraTS 2024—incorporating a diverse dataset encompassing various tumor types, including gliomas, pediatric brain tumors, and brain metastases [20]. While BraTS provides high-quality multimodal MRI data and expert-annotated tumor sub-regions, it predominantly emphasizes cases with multiple MRI sequences, such as T1, T2, T1-contrast, and FLAIR. This focus may not fully reflect real-world clinical scenarios where single-sequence scans are more commonly encountered. Additionally, BraTS’ reliance on synthetic modalities for specific cases introduces variability that might not always correlate with actual clinical imaging conditions. These limitations underscore the need for complementary datasets that encompass a broader range of imaging scenarios and real-world variability.
Another notable dataset in brain tumor segmentation is the Medical Segmentation Decathlon (MSD) [21]. Unlike the original BraTS challenge, which primarily targets gliomas, MSD encompasses a broader spectrum of medical imaging tasks, including brain tumor segmentation. The brain MRI data in MSD originates from multiple medical centers, enhancing the dataset’s heterogeneity and, consequently, the potential generalizability of models trained on it. However, this multicenter nature introduces variability in imaging protocols, scanner types, and acquisition parameters, complicating model training and evaluation.
In addition to BraTS and MSD, the Federated Tumor Segmentation (FeTS) dataset [22] represents another key resource in brain tumor segmentation. FeTS builds upon the BraTS dataset while incorporating additional clinical data from multiple healthcare institutions. It primarily focuses on gliomas, with multimodal MRI scans meticulously annotated by expert radiologists and physicians. The inclusion of multi-center data enhances the generalizability of models developed using FeTS. However, FeTS is predominantly centered on glioma cases, limiting its applicability to other tumor types, such as meningiomas and pituitary tumors. This narrow scope highlights the need for datasets that comprehensively cover a wider variety of brain tumor types for more generalized segmentation tasks.
Beyond BraTS, MSD, and FeTS, several other datasets contribute to advancing brain MRI segmentation. BrainMetShare [23] is one such dataset, comprising 156 whole-brain MRI studies with high-resolution, multi-modal pre- and post-contrast sequences from patients presenting with at least one brain metastasis. Ground-truth segmentations provided by expert radiologists and physicians accompany each study, making it a valuable resource for developing models tailored to brain metastases. Unlike glioma-focused datasets, BrainMetShare’s emphasis on metastatic brain tumors offers a complementary perspective, addressing an important clinical need.
The Open Access Series of Imaging Studies (OASIS) project [24] also provides a significant resource for brain MRI research. While not explicitly designed for tumor segmentation, OASIS offers a large-scale collection of multimodal brain MRI scans, including longitudinal data from healthy individuals and patients with various neurological disorders. OASIS-3, in particular, features structural MRI scans acquired over time, enabling the study of disease progression. Although primarily intended for neuroscience research, OASIS’ comprehensive data can be leveraged for developing segmentation models that generalize across different brain conditions, including tumors.
Another valuable initiative is fastMRI, spearheaded by Facebook AI Research and NYU Langone Health [25]. While the primary objective of fastMRI is to promote advancements in MRI reconstruction, the dataset includes a substantial number of fully sampled brain MRI scans acquired on 1.5T and 3T scanners. These high-resolution scans, encompassing T1-weighted, T2- weighted, and FLAIR sequences, can be repurposed for segmentation tasks. The large scale and high diversity of the fastMRI dataset make it a valuable resource for training robust models, particularly in scenarios where high-quality input data is critical.
The LGG Segmentation Dataset [26], available on Kaggle, specifically targets lower-grade gliomas by providing brain MRI scans alongside manual FLAIR abnormality segmentation masks. This dataset addresses a gap by offering focused data for a specific tumor grade, enabling the development of models tailored to low-grade glioma segmentation. Although limited in scope, its accessibility and detailed annotations make it a popular choice for researchers developing segmentation algorithms for lower-grade gliomas.
# 2.2 Classification
The development of robust brain tumor classification models heavily relies on high-quality datasets that provide diverse and well-annotated medical imaging data. Several publicly available datasets have been widely used in recent research, enabling the training and evaluation of deep learning models for brain tumor classification. These datasets vary in size, imaging modalities, and tumor types, offering researchers a range of options for developing and benchmarking their models.
One of the most widely used datasets is the Figshare [12], which contains 3,064 T1-weighted contrastenhanced MRI (CE-MRI) images categorized into three tumor types: glioma, meningioma, and pituitary tumors. This dataset has been instrumental in advancing brain tumor classification research. It provides a balanced distribution of tumor types, making it suitable for multi-class classification tasks. The dataset’s accessibility and comprehensive annotations have made it a benchmark for evaluating the performance of deep learning models, as demonstrated in studies such as Islam et al. [27] and Balamurugan et al. [28].
Another notable dataset is one of the Kaggle Brain Tumor MRI datasets, introduced by Nickparvar [29], which includes 7,023 brain MRI images categorized into four classes: glioma, meningioma, pituitary tumors, and no tumor. This dataset is particularly valuable for its diversity and inclusion of non-tumor cases, allowing researchers to develop models capable of distinguishing between healthy and pathological brain scans. The dataset has been used in studies such as Chen et al. [30] and Alanazi et al. [31], where it facilitated the development of transfer learning and feature fusion approaches for brain tumor classification.
The BraTS (Brain Tumor Segmentation) dataset [11] is another critical resource, primarily focused on tumor segmentation but also widely used for classification tasks. The dataset includes multi-modal MRI scans (T1, T1c, T2, and FLAIR) with annotated tumor regions, making it suitable for both segmentation and classification tasks. The BraTS dataset has been used in studies such as Ghosal et al. [32], where researchers developed a Squeeze and Excitation ResNet model for brain tumor classification, achieving an accuracy of 93.83%. The dataset’s multi-modal nature allows for the exploration of complementary information from different imaging modalities, enhancing model performance.
The ”Brain Tumor” dataset [33] is another valuable resource, providing a collection of brain MRI images with annotations for various tumor types. The dataset’s inclusion of rare tumor types and diverse imaging protocols makes it a valuable resource for developing models that can generalize across different clinical settings.
In summary, the availability of diverse and well-annotated datasets has been instrumental in advancing brain tumor classification research. These datasets continue to play a critical role in driving innovation in the field, providing the foundation for future research and clinical applications. However, despite the progress made, there is still a pressing need for new datasets that address specific challenges, such as the inclusion of rare tumor types, multi-modal imaging data, and more diverse patient demographics. Additionally, existing datasets often focus on either segmentation or classification tasks, limiting their utility for models that require simultaneous learning of both tasks. The introduction of a new dataset that supports both segmentation and classification tasks can bridge this gap, providing a more comprehensive resource for training and evaluating models. Such datasets can also incorporate advanced annotations, such as tumor sub-regions and molecular markers, which are critical for developing models that align with clinical needs.
# 3 Background and Key Concepts
# 3.1 Overview of Brain Tumors
Brain tumours are amongst the most fatal of all cancers [34]. Amongst paediatric solid tumours, brain tumours are both the most fatal and the most commonly occurring [34]. Brain tumours are diverse in type: gliomas account for 45% of brain tumours, with pituitary tumours and meningiomas accounting for 15% each [35]. The gold-standard imaging for diagnosing a brain tumour or brain metastases is an MRI scan with gadolinium contrast [36]. When possible, management begins with surgery to remove the lesion, which is then sent for histological and molecular genotype identification [36]. Pre-, intra- and post-operative MRIs can be used to guide surgical resection and management [36,37,38]. Intra-operatively, functional MRI visualises cerebrovascular activity, which can be correlated with neuronal activity to aid the surgical team; other techniques such as cord stimulation can also be used [37]. In many scenarios, regardless of surgical skill, neurosurgical reach has to be limited for safety due to the presence of many functionally important regions within the organ [34,36]. Further management includes medical interventions for symptomatic management, radiotherapy and chemotherapy [34]. The blood-brain barrier poses challenges to medical interventions and chemotherapy: this barrier filters material entering the brain via the circulation, limiting medical access to the brain [34]. Localisation of the lesion and adjacent structures via MRI can guide both surgical and radiotherapeutic planning [39]. In summary, MRI scans are used in the initial diagnosis and management planning of brain tumours, including pre-surgical use and as guidance for radiotherapy [39].
# 3.1.1 Glioma
Gliomas are primary brain tumours and the most common malignant primary brain tumours in adults [36]. They arise from glial cells, or from stem cells which develop glial properties during neoplastic change [40]. Glial cells designate a group of different cells which provide support for neurons, for example by forming axonal myelin sheaths [41]. In adults, the most aggressive form of glioma, the glioblastoma, carries a prognosis of around two years [34]. Gliomas can be typed against the WHO 2016 classification of CNS tumours [40].
Diffuse forms of glioma can grow in irregular shapes and extensively infiltrate brain parenchyma [40]. This makes neurosurgical management difficult, as functional brain tissue must be safely preserved during resection [34,36].
# 3.1.2 Meningioma
Meningiomas are the most common central nervous system tumours in adults, accounting for 30% of such tumours, whilst they are rare amongst children [42]. They arise from cells on the outer layer of the arachnoid mater, a part of the meninges [42]. The meninges are a layered covering of the central nervous system which encompasses the brain, cerebrospinal fluid and spinal cord [43]. Although a meningioma can arise anywhere on the meninges, 98% of meningiomas are intracranial [42]. Usually benign, they can be slow growing [42]. MRI scans in conjunction with CT scans can be used for diagnosis and treatment planning [42]. Treatment generally consists of neurosurgery, occasionally with adjunct radiotherapy [42]. Benign meningiomas can grow to a large size, and the resulting pressure on the brain causes symptoms [44]. Non-benign meningiomas are associated with irregular shapes and tumour heterogeneity; benign meningiomas therefore tend to be associated with regular shapes and homogeneity [45].
# 3.1.3 Pituitary Tumors
These are tumours originating in the pituitary gland, a small structure at the base of the brain above the sphenoid bone [46]. The pituitary gland has an essential role in growth, metabolism and reproduction [47]. Because of this, pituitary tumours can cause a wide range of symptoms, including but not limited to mood disorders, diabetes mellitus, obesity, infertility and visual disturbances [47]. However, only one third of these tumours are symptomatic [47]. The majority of pituitary tumours are benign; when treated, treatment generally includes neurosurgical resection and radiotherapy [48].
# 3.1.4 Non-Tumorous Conditions
There are space-occupying lesions which are non-tumorous in nature; they can be inflammatory, such as infections, or arise from vascular abnormalities. Examples of such space-occupying lesions include abscesses, cysts, haematomas and aneurysms [49]. Owing to its high soft-tissue sensitivity, MRI can aid characterisation of space-occupying lesions, though surgical biopsy may still be required to confirm the diagnosis [50].
# 3.2 Magnetic resonance imaging in Brain Imaging
# 3.2.1 Anatomical planes
The main anatomical planes are the coronal, transverse and sagittal planes [51]. These may be described more simply as a ‘front to back’ vertical plane, a ‘top to bottom’ horizontal plane, and a longitudinal ‘side to side’ plane [51]. A radiologist reporting an MRI scan has all of these planes available for viewing [52].
# 3.3 MRI Basics
# 3.3.1 MRI scans
Magnetic resonance imaging uses non-hazardous electromagnetic radiation to provide images of the body’s internal structure [53]. A computer constructs images from the information gathered by the MR scanner [53]. These images can be viewed as sequential 2D ‘slices’ and can also be used to build a 3D image [53]. There are two main types of images usually provided: T1-weighted and T2-weighted images [53].
# 3.3.2 T1-Weighted Imaging
In this form of imaging, fat is displayed brightly; it can therefore provide detailed anatomical images of soft tissues, especially in the brain [53].
# 3.3.3 T2-Weighted Imaging
In this format, water, such as CSF, is represented brightly [53]. Because of this, T2-weighted images can be used to detect inflammation [53] and to help distinguish tumours from other space-occupying lesions [53]. Most of the diagnostics for space-occupying lesions can be done using these two imaging modalities [54].
# 3.3.4 Further modalities
Further MRI modalities are used by radiologists in conjunction with T1- and T2-weighted imaging to aid space-occupying-lesion diagnostics; these include perfusion-weighted imaging, diffusion-weighted imaging, MR spectroscopy [54] and FLAIR [55]. A radiologist generally has access to the full range of slices and may have further imaging modalities to aid diagnostic reasoning.
# 3.4 Challenges in Brain Tumor Diagnosis
# 3.4.1 Misdiagnosis Risks
As mentioned above, some tumours do not cause symptoms until they reach a certain size [44], which may delay the clinical suspicion needed to warrant imaging. When available and feasible, the best imaging modality is the MRI scan [36]. Even with neurological imaging, misdiagnosis can occur, as neoplastic and non-neoplastic conditions can mimic each other. As mentioned above, there are non-tumorous space-occupying lesions; these can be benign, meaning that surgical resection and biopsy expose many patients unnecessarily to the risks of surgery [56]. There are also neoplastic brain lesions which do not appear as space-occupying lesions [56]. Not neglecting T1 pre-contrast imaging can help avoid misdiagnosis [56]. Further MR imaging modalities and a thorough clinical assessment, alongside some further investigations, can help reduce errors [56]. Certain tumours are difficult to visualise on MRI, though MRI provides a high level of diagnostic accuracy for most tumours [57].
# 4 Dataset Description
# 4.1 Dataset Overview
This dataset has been meticulously curated to address key challenges in brain tumor research, particularly in the domains of segmentation and classification tasks. It provides a balanced, high-quality collection of MRI data, annotated for both research and clinical applications. The dataset includes images with labels for four categories: Glioma, Meningioma, Pituitary tumors, and No Tumor. By focusing on comprehensive data collection and rigorous annotation processes, the dataset aims to advance the development of robust machine learning models in medical imaging.
# 4.2 Purpose and Objectives
The primary goal of this dataset is to overcome limitations observed in existing brain tumor datasets, such as class imbalance, lack of diversity, and annotation inconsistencies. While datasets like BraTS have driven significant advancements in glioma segmentation, their exclusive focus on gliomas and reliance on pre-processed data limit their generalizability to other tumor types and real-world scenarios. Our dataset expands the scope by incorporating multiple tumor types and includes a ”No Tumor” class to aid in broader diagnostic tasks. This addition makes the dataset highly versatile, enabling its use in applications ranging from multi-class tumor classification to binary tumor detection. The key objectives include:
1. Supporting segmentation tasks by providing accurate tumor masks.
2. Facilitating multi-class classification through balanced representation of tumor types.
# 4.3 Dataset Composition and Planar Distributions
The dataset comprises 6,000 MRI images, divided into training and testing sets, as detailed in Table 1. This structured division ensures robust evaluation metrics while providing ample data for training advanced machine learning models. For the training dataset, the total number of images across the planes is 5,000, and for testing, the total is 1,000.
Table 1: Class distribution in the training and testing parts of BRISC
Table 2: Class distribution based on MRI planes in the training and testing parts of BRISC
Table 3: Samples of Glioma segmentation across different imaging planes
Table 4: Samples of Meningioma segmentation across different imaging planes
In addition to the class-based distribution, we provide another form of distribution—dataset composition by MRI planes. This breakdown categorizes images into Coronal, Sagittal, and Axial planes, helping to analyze how different orientations are represented in the dataset. As shown in Table 2, the distribution of different MRI planes is nearly uniform, similar to the distribution of different classes in Table 1. This balanced distribution ensures that no particular class or plane is overrepresented, which is crucial for preventing model bias and improving generalization.
Table 5: Samples of Pituitary segmentation across different imaging planes
Table 6: Samples of No Tumor across different imaging planes
# 4.4 Data Source and Preprocessing
This dataset was derived from the ”Brain Tumor MRI Dataset” [58], which combined data from three prominent sources: Figshare [12], SARTAJ [59], and Br35H [60]. The original dataset included 7,023 MRI images across four classes: Glioma, Meningioma, No Tumor, and Pituitary. The ”No Tumor” class images were specifically sourced from the Br35H dataset. During preprocessing, several steps were undertaken to ensure the quality and consistency of our final dataset:
Table 7: Samples of whole-region misannotations. The red area indicates regions that were initially marked as tumors but were identified by the radiologist and physician as non-tumorous.
Separation of T1 Images: Only T1-weighted MRI images were retained to maintain uniformity in imaging modality.

Quality Control: Images with incorrect or inconsistent labels were identified and removed with the assistance of a radiologist and a physician.

Standardization: Images were resized and margins were adjusted to improve model accuracy during training.
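The margin-adjustment part of the standardization step can be sketched as follows. This is an illustrative NumPy reconstruction, not the dataset's documented pipeline: the intensity threshold and padding values are assumptions, and the subsequent fixed-size resize is left to an image library.

```python
import numpy as np

def crop_margins(img, thresh=10, pad=2):
    """Crop near-black margins around the brain region of a grayscale
    MRI slice, leaving a small padding border. The cropped image can
    then be resized to a fixed shape. `thresh` and `pad` are assumed
    values for illustration, not the dataset's documented settings.
    """
    mask = img > thresh                      # foreground (brain) pixels
    if not mask.any():
        return img                           # nothing to crop
    rows = np.flatnonzero(mask.any(axis=1))  # rows containing foreground
    cols = np.flatnonzero(mask.any(axis=0))  # cols containing foreground
    r0 = max(rows[0] - pad, 0)
    r1 = min(rows[-1] + 1 + pad, img.shape[0])
    c0 = max(cols[0] - pad, 0)
    c1 = min(cols[-1] + 1 + pad, img.shape[1])
    return img[r0:r1, c0:c1]
```

A crop like this removes uninformative background so that, after resizing, the brain occupies a consistent fraction of the input.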
# 4.5 Imaging Details
All images in the dataset are T1-weighted contrast-enhanced MRI scans, selected specifically from the ”Brain Tumor MRI Dataset” (Kaggle) [58]. Although the original dataset included some T2-weighted images, we exclusively selected T1-weighted scans for their superior ability to highlight tumor boundaries. Another notable characteristic of this dataset is the length of MRI sequences. While typical brain MRI studies often consist of longer sequences, the majority of sequences in this dataset were notably short, ranging from 1 to 5 images per sequence. Sequences with only one image were excluded, as even experienced radiologists and physicians found it challenging to identify tumors accurately in these cases.
Table 8: Samples of partial-region overannotations. The red area indicates regions that were initially marked as tumorous but were later identified by the radiologist and physician as non-tumorous.
# 4.6 Annotation Process
The dataset underwent a meticulous annotation and review process to ensure accuracy and reliability. Annotation was performed using the AnyLabeling tool [61], which facilitated efficient and precise delineation of tumor regions. Each image was reviewed and edited multiple times with the input of a certified physician and radiologist. Key steps in the annotation process included:
# 5 Visual Demonstrations of the Dataset
This section provides an in-depth exploration of the dataset through visual examples and analytical discussions, illustrating its structure, composition, and inherent challenges. These demonstrations aim to deepen understanding of the dataset’s unique features while highlighting its potential applications in advanced segmentation and classification tasks.
Tumor Mask Refinement: Using AnyLabeling [61], points of interest for tumors were iteratively refined to ensure precise segmentation masks.
Class Correction: Images incorrectly classified as ”No Tumor” in the original dataset were re-evaluated and removed from this class if necessary.
Consensus Reviews: Discrepancies in annotation were resolved collaboratively by the physician and radiologist.
# 5.1 Overview of Classes
The dataset encompasses four distinct classes: ”Glioma”, ”Meningioma”, ”Pituitary” tumors, and ”No Tumor”. As detailed in Section 3.1, each class presents unique characteristics and complexities. This subsection offers representative visual examples from each class, including raw MRI scans alongside their annotated tumor masks, emphasizing the diversity and precision of the dataset.
Glioma: Gliomas are irregularly shaped and often infiltrate surrounding tissues, presenting significant challenges for precise boundary definition. These complexities require robust segmentation techniques to capture their variable morphology. As shown in Table 3, gliomas exhibit irregular and diffuse growth patterns, which are highlighted through annotated tumor masks.
Table 9: Samples of partial-region underannotations. The purple area indicates regions that were initially marked as non-tumorous but were later identified by the radiologist and physician as tumorous.
Meningioma: Meningiomas are generally well-circumscribed and homogeneous, making them easier to segment. However, their proximity to sensitive regions such as the meninges can complicate diagnostic tasks. An example of a meningioma and its segmentation mask is presented in Table 4, illustrating the clarity of its boundaries.
Pituitary Tumors: Located at the base of the brain near critical structures like the optic chiasm, pituitary tumors demand careful delineation to avoid diagnostic errors. As shown in Table 5, the segmentation accurately captures the tumor’s boundaries without encroaching on adjacent critical regions.
No Tumor: This control class is integral for training models to distinguish healthy scans from those with abnormalities. Including ”No Tumor” cases enhances the dataset’s robustness for binary and multi-class classification. Table 6 illustrates an example of a healthy brain scan with no abnormalities.
# 5.2 Tumor Mask and Annotation Quality
Achieving accurate tumor segmentation required a meticulous process of iterative reviews and refinements, conducted in close collaboration with a physician and a radiologist. This collaborative effort was crucial in ensuring that the final annotations accurately reflected the true tumor boundaries, minimizing errors and improving the overall quality of the dataset.
In some cases, regions initially annotated as tumors were later identified by the physician and radiologist as non-tumorous. These corrections were essential to avoid false positives that could mislead model training. An example of such a case is shown in Table 7, where an area initially believed to be a tumor was excluded from the final annotation after expert review.
In other instances, certain areas that were mistakenly included as part of the tumor region were refined based on radiologist and physician feedback. These areas, though visually similar to tumor tissue, were determined to be non-tumorous upon closer examination. As illustrated in Table 8, the removal of these incorrect segments resulted in more precise tumor masks and enhanced the reliability of the dataset.
Conversely, there were cases where genuine tumor regions had been overlooked during the initial annotation process. With input from the physician and radiologist, these missing regions were added to the annotations, ensuring that the masks comprehensively captured all tumor areas. Table 9 demonstrates an example of such an adjustment, where previously unannotated tumor segments were correctly incorporated into the final mask.
# 5.3 Challenges of No Tumor Conditions
Non-tumorous conditions frequently mimic tumors in MRI scans, posing significant challenges not only for classification tasks but also for segmentation. This section visualizes examples of these conditions and compares them with actual tumors to highlight their distinctions.
Brain lesions, for instance, often resemble tumors both in shape and intensity. These similarities can lead to misclassification as well as erroneous segmentation of the lesion as a tumor. Cysts are another condition that can complicate both segmentation and classification. Typically fluid-filled and round, cysts may be mistaken for tumors during segmentation tasks due to their well-defined boundaries. Calcifications, which appear as bright regions on MRI scans, can similarly lead to errors in both classification and segmentation. While their growth patterns differ from tumors, their intensity can cause segmentation models to incorrectly label them as tumorous regions.
These challenges underscore the critical importance of radiologist and physician expertise in ensuring accurate segmentation and classification of such conditions. The examples provided highlight the need for robust models capable of distinguishing these non-tumorous conditions from actual tumors in both classification and segmentation tasks.
# 6 Proposed method
# 6.1 Overview
Although the original dataset was designed primarily for classification tasks, through close collaboration with physicians and radiologists, we have extended it by providing high-quality expert annotations delineating tumor regions, thereby creating a new segmentation benchmark. This enriched dataset motivated the development of an effective segmentation model tailored to accurately localize and delineate brain tumors in MRI scans.
In this paper, we present a novel transformer-based architecture for accurate and efficient tumor segmentation in brain MRI scans. The overall framework adopts an encoder-decoder structure, where the encoder extracts multi-scale semantic representations from the input image, and the decoder progressively reconstructs the segmentation map using enhanced contextual features.
The encoder is built upon a hierarchical Swin Transformer backbone, which efficiently captures both local and global dependencies through shifted window-based self-attention. To further refine the extracted features, we use the Contextual Bottleneck Enhancer (CBE), which enriches feature representations using a sequence of lightweight yet effective operations, including shifted MLPs and gated encoding units.
To preserve high-resolution semantic details during decoding, we design a lightweight decoder that includes Adaptive Context Aggregator blocks, which adaptively fuse local and global context from the encoder outputs. Additionally, we propose a Hierarchical Attention Fusion (HAF) module that integrates multi-scale features from different encoder levels through a combination of Swin Transformer blocks and deformable convolutions, allowing the model to capture hierarchical dependencies and spatial variations effectively.
Together, these components enable our model to achieve robust segmentation performance while maintaining computational efficiency. The complete architecture is illustrated in Figure 1.
# 6.2 Encoder Architecture
The encoder of the proposed segmentation model is designed to extract rich hierarchical features from brain MRI scans using a multi-stage Swin Transformer-based backbone. It begins with a Patch Partition module, which splits the input image into non-overlapping patches. These patches are then flattened and mapped to a fixed-dimensional embedding space through a Linear Embedding layer.
Following this, the encoder comprises three repeated stages, each consisting of two Swin Transformer Blocks followed by a Patch Merging layer. The Swin Transformer blocks utilize a window-based self-attention mechanism with a shifted window strategy, allowing the model to effectively capture local and non-local dependencies with reduced computational cost. The patch merging operation downsamples the spatial resolution while increasing the feature dimensionality, forming a hierarchical representation.
Fig. 1: Overview of the Swin-HAFUNet architecture
This hierarchical structure enables the encoder to progressively capture multi-scale semantic information, which is crucial for segmenting tumors of varying sizes and shapes. The feature maps from different stages are later passed to the HAF modules, enabling multi-level feature interaction and refinement in the decoding process.
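The Patch Partition and Patch Merging operations can be illustrated at the shape level with a small NumPy sketch. The patch size of 4 and the 2×2 merging pattern follow the standard Swin Transformer design; the learned linear projections that normally follow each step (embedding, and the 4C→2C reduction after merging) are omitted, so this is an illustration of the tensor reshaping, not the model's implementation.

```python
import numpy as np

def patch_partition(img, patch=4):
    """Split an (H, W, C) image into non-overlapping flattened patches.

    Returns an (H//patch * W//patch, patch*patch*C) token matrix,
    mirroring the Patch Partition + flatten step of the encoder
    (the Linear Embedding projection is omitted).
    """
    H, W, C = img.shape
    x = img.reshape(H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)           # (H/p, W/p, p, p, C)
    return x.reshape(-1, patch * patch * C)  # one row per patch

def patch_merging(tokens, h, w):
    """2x2 patch merging: halves each spatial dimension and
    quadruples the channel dimension by gathering the four
    neighbouring tokens (Swin's learned 4C -> 2C reduction omitted).
    """
    C = tokens.shape[1]
    x = tokens.reshape(h, w, C)
    x = np.concatenate([x[0::2, 0::2], x[1::2, 0::2],
                        x[0::2, 1::2], x[1::2, 1::2]], axis=-1)
    return x.reshape(-1, 4 * C)
```

Chaining three such merging stages reproduces the hierarchical downsampling described above, with spatial resolution halving and feature dimensionality growing at each stage.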
Fig. 2: Hierarchical Attention Fusion module
# 6.3 Hierarchical Attention Fusion (HAF) Module
The HAF module is designed to effectively integrate encoder and decoder features at each resolution level, enhancing the model’s capacity to capture both low-level spatial detail and high-level semantic context.
At each stage of the decoder, the HAF module takes two inputs:
– $x_{\mathrm{skip}}$: the skip-connection feature map from the encoder.
– $x_{\mathrm{decoder}}$: the upsampled feature map from the previous decoder layer.
The decoder feature $x_{\mathrm{decoder}}$ is first passed through a Swin Transformer Block to refine contextual dependencies and enhance representation. As shown in Equation 1, the refined feature is then concatenated with the corresponding encoder feature $x_{\mathrm{skip}}$ along the channel dimension.
$$
x_{\mathrm{cat}} = \mathrm{Concat}(x_{\mathrm{skip}}, \mathrm{Swin}(x_{\mathrm{decoder}}))
$$
This concatenated feature map $x_{\mathrm{cat}}$ is then projected through a $1 \times 1$ convolutional layer (Equation 2).
$$
x_{\mathrm{out}} = \mathrm{Conv}_{1 \times 1}(x_{\mathrm{cat}})
$$
The resulting output $x_{\mathrm{out}}$ maintains the same spatial resolution as $x_{\mathrm{decoder}}$ and serves as the input for the next step in the decoder. This fusion strategy allows the network to maintain fine spatial information while enriching the semantic features through transformer-based attention. The architecture of the HAF module is shown in Figure 2.
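Setting the Swin refinement aside (assumed applied upstream), the fusion of Equations 1 and 2 reduces to a channel concatenation followed by a per-pixel linear map, which is exactly what a 1×1 convolution computes. A minimal NumPy sketch, with an assumed weight matrix `w` standing in for the learned convolution kernel:

```python
import numpy as np

def haf_fuse(x_skip, x_decoder_refined, w):
    """HAF fusion (Eqs. 1-2): concatenate the encoder skip feature
    with the (already Swin-refined) decoder feature along the channel
    axis, then mix channels with a 1x1 convolution.

    x_skip: (C1, H, W), x_decoder_refined: (C2, H, W),
    w: (C_out, C1 + C2) 1x1-conv weights (an assumed parameter).
    """
    x_cat = np.concatenate([x_skip, x_decoder_refined], axis=0)
    # a 1x1 convolution is a per-pixel linear map over channels
    return np.einsum('oc,chw->ohw', w, x_cat)
```

The output keeps the decoder's spatial resolution, matching the statement that $x_{\mathrm{out}}$ has the same resolution as the decoder feature; only the channel count changes, set by the 1×1 convolution.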
# 6.4 Contextual Bottleneck Enhancer (CBE)
The CBE employs a Tokenized MLP Block to enhance feature representations efficiently at the bottleneck stage. This module captures long-range dependencies while maintaining computational efficiency.
The process begins with a spatial shift along the width axis, followed by a linear projection to produce token embeddings. These tokens pass through a shifted MLP layer (across width), then through a depth-wise convolution and GELU activation, which introduces non-linearity and encodes positional information. The output is then processed by another shifted MLP along the height axis.
A residual connection adds the original input tokens to the output of the second MLP, and the result is normalized using Layer Normalization (LN). This sequence can be summarized in Equation 3.
$$
\begin{array}{rl}
& X_{\mathrm{shift}}^{W} = \mathrm{Shift}_{W}(X) \\
& T_{W} = \mathrm{MLP}_{W}(X_{\mathrm{shift}}^{W}) \\
& Y = \mathrm{GELU}(\mathrm{DWConv}(T_{W})) \\
& Y_{\mathrm{shift}}^{H} = \mathrm{Shift}_{H}(Y) \\
& T_{H} = \mathrm{MLP}_{H}(Y_{\mathrm{shift}}^{H}) \\
& Z = \mathrm{LN}(T_{H} + T)
\end{array}
$$
where $T$ represents the original tokenized input. This design enables the block to model spatial context in both directions and fuse the information effectively via residual learning.
This block structure is adapted with modifications from the tokenized MLP design presented in [62], but tailored to our segmentation task.
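The spatial shift at the heart of the shifted MLP can be sketched in NumPy. Following the tokenized MLP design of [62], channels are split into groups that are displaced by different offsets along one spatial axis; the group count (5) and offsets (−2…2) used here are illustrative assumptions, and the MLP, depth-wise convolution, and LayerNorm steps of Equation 3 are omitted.

```python
import numpy as np

def axial_shift(x, axis):
    """Shift channel groups by different offsets along one spatial axis.

    x: (C, H, W) feature map. Channels are split into 5 groups shifted
    by -2..2 pixels with zero padding (the wrapped-around border from
    np.roll is zeroed out). axis=2 shifts along width, axis=1 along
    height, corresponding to Shift_W and Shift_H in Equation 3.
    """
    groups = np.array_split(np.arange(x.shape[0]), 5)
    out = np.zeros_like(x)
    for g, off in zip(groups, range(-2, 3)):
        shifted = np.roll(x[g], off, axis=axis)
        if off > 0:                      # zero the leading border
            idx = [slice(None)] * 3
            idx[axis] = slice(0, off)
            shifted[tuple(idx)] = 0
        elif off < 0:                    # zero the trailing border
            idx = [slice(None)] * 3
            idx[axis] = slice(off, None)
            shifted[tuple(idx)] = 0
        out[g] = shifted
    return out
```

Because each channel group sees the image displaced differently, the linear (MLP) layer that follows can mix information across neighbouring spatial positions without explicit attention.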
# 6.5 Decoder Architecture
The decoder receives as input the enhanced representation produced by CBE and gradually reconstructs the segmentation map through a hierarchical upsampling process. Unlike traditional encoder-decoder architectures, our decoder integrates semantic context at multiple levels by leveraging the Adaptive Context Aggregator and HAF module.
Fig. 3: Structure of the Adaptive Context Aggregator module.
At each stage of the decoder, the feature map undergoes a Patch Expanding operation to increase the spatial resolution. Following this, contextual features generated by the Adaptive Context Aggregator are fused with encoder features using the HAF module. The output of the HAF block serves as an enhanced skip connection, injected into the decoder pathway to guide the reconstruction process with both fine-grained and semantic details.
This structured design ensures that skip connections are not merely concatenations of encoder features, but rather semantically enriched representations aligned with the decoder’s current context. After multiple stages of patch expansion and fusion, the final feature map is passed through a linear projection layer to produce the final prediction map.
# 6.6 Adaptive Context Aggregator
To effectively inject adaptive context into the decoder path, we employ the Adaptive Context Aggregator module. This block is responsible for enhancing the decoder features by integrating both local geometric and global semantic information.
As illustrated in Figure 3, the Adaptive Context Aggregator module receives the decoder feature map as input. It first processes this input through a Swin Transformer Block to capture long-range dependencies and global contextual cues. In parallel, the same input is passed through a Deformable Convolution layer to focus on important local structures and spatially variant patterns.
The outputs of both the Swin Transformer and the Deformable Convolution branches are concatenated and fused via a $1 \times 1$ convolution layer. This fusion ensures that both global and local contexts are adaptively aggregated in a computationally efficient manner. The resulting feature map serves two purposes: it is passed to the HAF module to refine the skip connection at the current decoder level, and it is also forwarded to the subsequent Patch Expanding block in the decoder pipeline. This dual role ensures both better feature fusion and more informed upsampling in the reconstruction process.
# 6.7 Loss Function
To train the proposed segmentation model, we utilize a compound loss function that combines the Binary Cross-Entropy (BCE) loss and the Dice loss. The BCE component focuses on pixel-level classification accuracy, while the Dice loss emphasizes region-level consistency, which is particularly effective in addressing class imbalance commonly present in medical image segmentation tasks.
Given the ground truth mask $y \in \{0,1\}^{N}$ and the predicted probabilities $\hat{y} \in [0,1]^{N}$ for $N$ pixels, the BCE loss is formulated as shown in Equation 4.
$$
\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_{i} \log(\hat{y}_{i}) + (1 - y_{i}) \log(1 - \hat{y}_{i}) \right].
$$
The Dice loss, which evaluates the overlap between predicted and ground truth regions, is defined as shown in Equation 5.
$$
\mathcal{L}_{\mathrm{Dice}} = 1 - \frac{2 \sum_{i=1}^{N} y_{i} \hat{y}_{i} + \epsilon}{\sum_{i=1}^{N} y_{i} + \sum_{i=1}^{N} \hat{y}_{i} + \epsilon},
$$
where $\epsilon$ is a small constant added to avoid division by zero.
The final loss used to optimize the network combines these two components, as shown in Equation 6:
$$
\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{BCE}} + \mathcal{L}_{\mathrm{Dice}}.
$$
This joint formulation encourages both accurate boundary delineation and robust region-level segmentation.
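Equations 4–6 translate directly into code. A minimal NumPy version for reference (a training implementation would use a deep learning framework's differentiable ops instead):

```python
import numpy as np

def bce_dice_loss(y_true, y_pred, eps=1e-6):
    """Compound loss of Eqs. 4-6: pixel-wise BCE plus soft Dice.

    y_true: binary ground-truth mask, y_pred: predicted probabilities;
    both are flattened to N pixels. eps plays the role of epsilon in
    Eq. 5 and also guards the logarithms numerically.
    """
    y_true = y_true.ravel().astype(float)
    y_pred = np.clip(y_pred.ravel(), eps, 1 - eps)  # numerical safety
    # Equation 4: binary cross-entropy averaged over pixels
    bce = -np.mean(y_true * np.log(y_pred)
                   + (1 - y_true) * np.log(1 - y_pred))
    # Equation 5: soft Dice loss on the same flattened masks
    dice = 1 - (2 * np.sum(y_true * y_pred) + eps) / (
        np.sum(y_true) + np.sum(y_pred) + eps)
    # Equation 6: unweighted sum of the two terms
    return bce + dice
```

A perfect prediction drives both terms toward zero, while the Dice term keeps the gradient informative even when tumor pixels are a small minority of the image.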
Table 10: IoU (%) for Brain Tumor Segmentation Models on Different Tumor Types. Weighted mIoU is calculated as a weighted average based on the number of samples per tumor type (Glioma, Meningioma, Pituitary).
# 7 Experimental Results
Establishing baseline performance is a critical step in evaluating any newly proposed dataset, as it sets a reference point for further research and model development [75]. To validate the effectiveness and versatility of our Brain Tumor MRI Dataset, we conducted an experimental evaluation on the segmentation task.
This section presents a comprehensive analysis of the models evaluated, including their performance metrics, and discusses the implications of these results in the context of the dataset’s characteristics and potential applications in medical imaging.
# 7.1 Evaluation Metrics
In this part, we detail the evaluation metric employed to assess the performance of segmentation models on the proposed dataset. This metric provides comprehensive insight into the efficacy of the models.
Intersection over Union (IoU). Intersection over Union, also known as the Jaccard Index, is a fundamental metric for evaluating binary segmentation tasks. It quantifies the overlap between the predicted tumor regions and the ground truth, normalized by their union [76]. For binary segmentation, IoU is computed as shown in Equation 7.
$$
\mathrm{IoU} = \frac{\sum_{i=1}^{N} y_i \hat{y}_i}{\sum_{i=1}^{N} y_i + \sum_{i=1}^{N} \hat{y}_i - \sum_{i=1}^{N} y_i \hat{y}_i + \epsilon},
$$
where $y_i \in \{0, 1\}$ denotes the ground truth label, $\hat{y}_i \in \{0, 1\}$ represents the predicted label (after thresholding), and $\epsilon$ is a small constant added for numerical stability.
As shown in Equation 7, this formulation captures the pixel-wise overlap between the predicted and actual tumor regions and is particularly effective for evaluating segmentation quality, especially along object boundaries.
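Equation 7 translates directly into a short NumPy helper; the `iou` function below is an illustrative sketch of the metric, not code from the paper:

```python
import numpy as np

def iou(y, y_hat, eps=1e-7):
    """Binary IoU (Jaccard index): intersection over union of {0,1} masks."""
    inter = np.sum(y * y_hat)
    union = np.sum(y) + np.sum(y_hat) - inter
    return inter / (union + eps)
```

For example, a prediction that covers half of a two-pixel ground-truth mask while adding one false-positive pixel has an intersection of 1 and a union of 3, giving an IoU of 1/3.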
# 7.2 Comparison
To evaluate the effectiveness of our proposed Swin-HAFUNet, we conducted a comparative study against a diverse set of brain tumor segmentation models, including traditional convolutional architectures, attention-enhanced methods, and transformer-based approaches. The baselines include UNet [63], UNet++ [64], LinkNet [66], MANet [65], DeepLabV3+ [67], PAN [68], EINet [69], EU-Net [70], DAD [71], and BASNet [72], as well as two recent transformer-enhanced models, SaberNet [73] and ABANet [74].
Each model was evaluated using the mean Intersection over Union (mIoU) metric for three tumor types: Glioma, Meningioma, and Pituitary. Furthermore, we report a weighted mIoU, which is calculated based on the proportion of samples belonging to each tumor type, providing a more representative performance indicator across the dataset.
As summarized in Table 10, our proposed Swin-HAFUNet achieves the highest mIoU scores across all tumor types, particularly excelling in segmenting Meningioma and Pituitary tumors. It also outperforms all competing methods in terms of weighted mIoU, demonstrating the strength of our architectural design in capturing multi-scale contextual features and handling inter-class variability in medical image segmentation.
In particular, our proposed method achieves the highest weighted mIoU of 82.4%, surpassing the next best-performing model (Saber et al.) by a margin of 2.6%. This improvement underscores the robustness of our approach in handling heterogeneous tumor types. Importantly, the reported weighted mIoU is calculated as a weighted average based on the number of samples in each tumor class, to provide a more realistic assessment under dataset imbalance. Unlike a simple arithmetic mean, the weighted mean better reflects overall segmentation performance under real-world clinical distributions.
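The weighting scheme can be illustrated with a small computation. The per-class IoU values and sample counts below are hypothetical placeholders for illustration only; they are not the numbers from Table 10:

```python
# Hypothetical per-class IoU scores and per-class sample counts (illustrative only).
class_iou = {"glioma": 0.80, "meningioma": 0.84, "pituitary": 0.83}
n_samples = {"glioma": 300, "meningioma": 250, "pituitary": 200}

# Weighted mIoU: each class contributes in proportion to its number of samples.
total = sum(n_samples.values())
weighted_miou = sum(class_iou[c] * n_samples[c] / total for c in class_iou)
```

With imbalanced counts, the majority class dominates the weighted mean, which is exactly why it is a more realistic summary than the unweighted average when one tumor type is over-represented.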
Moreover, our model consistently outperforms existing baselines across all tumor categories, with notable improvements observed in the segmentation of glioma and meningioma tumors. These gains can be attributed to the architectural choices that enhance multi-scale feature extraction and contextual representation, which are particularly beneficial for capturing diverse morphological structures in brain tumors.
It is important to emphasize that the primary goal of this work is to introduce and validate a new brain tumor segmentation dataset, which is designed to support the development of robust and generalizable medical segmentation models. While the proposed method demonstrates promising initial results, we consider this study a foundational step. Future research is expected to build upon this dataset to explore a broader range of models, training protocols, and evaluation settings.
# 7.3 Ablation study
To evaluate the contribution of each component in the proposed Swin-HAFUNet architecture, we conducted an ablation study. Four configurations were tested by progressively integrating the HAF module and the CBE into the baseline. The results are reported in Table 11.
The baseline model includes a simple Swin-UNet structure without the HAF or CBE modules. Adding the HAF module alone leads to a notable improvement across all tumor types, increasing the weighted mIoU from 79.8% to 81.1%. This indicates that the hierarchical attention fusion effectively enhances the representation of skip connections.
Table 11: The performance of different configurations of Swin-HAFUNet.
Incorporating the CBE module without HAF further improves performance, yielding a weighted mIoU of 81.3%. This demonstrates the benefit of contextual feature extraction and enrichment in the encoder path.
Finally, the complete model that includes both HAF and CBE achieves the best performance with a weighted mIoU of $8 2 . 4 \%$ . The consistent gains across all tumor categories—particularly glioma, which is typically more challenging—highlight the complementary strengths of the proposed components and their combined effectiveness in accurate tumor segmentation. | Accurate segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) remain key challenges in medical image analysis, largely due to the lack of high-quality, balanced, and diverse datasets. In this work, we present a new curated MRI dataset designed specifically for brain tumor segmentation and classification tasks. The dataset comprises 6,000 contrast-enhanced T1-weighted MRI scans annotated by certified radiologists and physicians, spanning three major tumor types-glioma, meningioma, and pituitary-as well as non-tumorous cases. Each sample includes high-resolution labels and is categorized across axial, sagittal, and coronal imaging planes to facilitate robust model development and cross-view generalization. To demonstrate the utility of the dataset, we propose a transformer-based segmentation model and benchmark it against established baselines. Our method achieves the highest weighted mean Intersection-over-Union (IoU) of 82.3%, with improvements observed across all tumor categories. Importantly, this study serves primarily as an introduction to the dataset, establishing foundational benchmarks for future research. We envision this dataset as a valuable resource for advancing machine learning applications in neuro-oncology, supporting both academic research and clinical decision-support development. datasetlink: https://www.kaggle.com/datasets/briscdataset/brisc2025/ | [
"eess.IV",
"cs.CV"
] |
# 1 Introduction
Students in introductory programming classes are generally able to write working code. However, they may not have a full understanding or comprehension of how that code works. In the experience of Lehtinen et al., around a third of their class of 125 students had difficulties explaining their code [2]. At this introductory stage, this can have serious long-term consequences for a student’s ability to become a skilled programmer, as these shaky foundations become increasingly strained by problems of greater scale and complexity. Code comprehension questions are an effective solution to this problem [4], forcing students to critically examine their own code and challenging any incorrect assumptions. Unfortunately, their bespoke nature (questions are manually created based on a student’s own code) makes the process difficult to scale to larger classes.
This work presents AutoMCQ, a tool for the automatic generation of bespoke multiple-choice code comprehension questions by utilising a combination of automated unit testing and AI generated follow-up questions. This is not necessarily intended as a method of assigning or gating class credit, but rather as a tool for identifying cases where there might be a benefit to early intervention.
# 2 AutoMCQ
We have developed a web application, AutoMCQ, which uses GPT-4o mini via the OpenAI API to generate personalised multiple-choice code comprehension questions based on a student’s submitted code. The code for this tool is available on GitHub 1. We integrated calls to this application within CodeRunner [3] quiz questions hosted on our Virtual Learning Environment (VLE), which is built on top of Moodle. CodeRunner is a Moodle plugin which allows users to submit code to be run against predefined test cases and can support multiple programming languages. The high-level architecture of our approach can be seen in Figure 1.
The students are presented with a traditional CodeRunner question (see [1] for more detail). When they then progress to the code comprehension questions their code plus other parameters are passed to our web application. The prompt sent to the OpenAI API consists of the system prompt "You are an educational assistant specializing in computer science. Your task is to analyse
public class Building {
    private int windows;
    private double charge;

    public Building(int windows, double charge) {
        this.windows = windows;
        this.charge = charge;
    }

    public double getTax() {
        return this.windows * this.charge;
    }
}
Figure 1: System Architecture
Figure 2: Building Class
Figure 3: Generated Questions
students’ code for the beginner programmer class and generate thoughtful multiple-choice questions that can help them understand and improve their coding skills. You should try and make good distractor options to really test students understanding.", plus parameters detailing the number of questions, the CodeRunner question text, the topics to ask about, the programming language, any code the student was provided with (so that questions are not asked about the skeleton code), and the student’s submitted code. The application then returns multiple-choice code comprehension questions, which are displayed within a CodeRunner question. The students can then answer these questions and have them automatically marked. Unfortunately, we cannot rely on GenAI generating correct or sensible questions every time. To handle this, there is a note above the generated questions: "These questions were generated by AI. Therefore, questions generated may be incorrect. If you think they are incorrect please select ‘This question doesn’t seem right’. Also, select this option if the question doesn’t relate to programming." This option can be used to trigger a manual check by the class lecturer. Until GenAI models improve, our code comprehension questions will only be used for formative assessment.
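As a sketch of how such a request might be assembled, the helper below builds the chat messages in the shape the OpenAI API expects. The function name and parameter names are our own illustration, not the tool’s actual code; only the quoted system prompt is taken from the text above:

```python
def build_prompt(num_questions, question_text, topics, language,
                 provided_code, student_code):
    """Assemble chat messages for question generation (illustrative sketch)."""
    system = (
        "You are an educational assistant specializing in computer science. "
        "Your task is to analyse students' code for the beginner programmer class "
        "and generate thoughtful multiple-choice questions that can help them "
        "understand and improve their coding skills. You should try and make good "
        "distractor options to really test students understanding."
    )
    user = (
        f"Generate {num_questions} multiple-choice questions on: {topics}.\n"
        f"Programming language: {language}\n"
        f"Original task: {question_text}\n"
        f"Provided skeleton code (do not ask questions about this):\n{provided_code}\n"
        f"Student's submission:\n{student_code}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]
```

The returned list would then be passed as the `messages` argument of a chat-completion call; keeping the skeleton code in the prompt lets the model exclude it from questioning.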
# 2.1 Example
The following question is used in our introductory Java programming class when testing student knowledge of inheritance and overriding:
Q: The council tax for a flat is calculated similarly to that for a building (see Figure 2) except that there is a deduction of £75. Develop a new class called Flat.java which inherits from Building.java and correctly implements the modified getTax() method.
Generated Questions:

Q1: What does the ‘super()’ keyword do in the ‘Flat’ class constructor?
- It creates a new instance of the ‘Building’ class.
- It calls the constructor of the parent class ‘Building’.
- It allows access to the private members of ‘Building’.
- It prevents the ‘Flat’ class from having its own constructor.
- This question doesn’t seem right?

Q2: Which of the following statements best describes method overriding?
- It changes the return type of the method in the subclass.
- It creates a static method in the subclass.
- It allows a subclass to provide a specific implementation of a method already defined in its superclass.
- It makes the superclass method inaccessible.
- This question doesn’t seem right?
The CodeRunner tests for this question involve testing getTax() with different arguments and printing the answer. The result is then compared with the expected result. For example:
Flat f0 = new Flat(7, 18.5);
System.out.println("Council tax for flat is: " + f0.getTax());

If the submitted code is correct it should output

Council tax for flat is: 54.5
The submitted code is sent to the web application along with number of questions 2, the question, the topics "inheritance and overriding", language java, and Building.java. Examples of generated questions for a correct solution can be seen in Figure 3. | Students often do not fully understand the code they have written. This sometimes does not become evident until later in their education, which can mean it is harder to fix their incorrect knowledge or misunderstandings. In addition, being able to fully understand code is increasingly important in a world where students have access to generative artificial intelligence (GenAI) tools, such as GitHub Copilot. One effective solution is to utilise code comprehension questions, where a marker asks questions about a submission to gauge understanding, this can also have the side effect of helping to detect plagiarism. However, this approach is time consuming and can be difficult and/or expensive to scale. This paper introduces AutoMCQ, which uses GenAI for the automatic generation of multiple-choice code comprehension questions. This is integrated with the CodeRunner automated assessment platform. | [
"cs.SE",
"cs.AI",
"cs.PL"
] |
# 1. Introduction
Recent advancements in reinforcement learning from human feedback have shown that utilizing fine-grained token-level reward models can substantially enhance the performance of Proximal Policy Optimization (PPO) in aligning large language models. However, it is challenging to leverage such token-level rewards as guidance for Direct Preference Optimization (DPO), since DPO is formulated as a sequence-level bandit problem. To address this challenge, this work decomposes the sequence-level PPO into a sequence of token-level proximal policy optimization problems and then frames the problem of token-level PPO with token-level reward guidance, from which a closed-form optimal token-level policy and the corresponding token-level reward can be derived. Using the obtained reward and the Bradley-Terry model, this work establishes a framework of computable loss functions with token-level reward guidance for DPO, and proposes a practical reward guidance based on the induced DPO reward. This formulation enables different tokens to exhibit varying degrees of deviation from the reference policy based on their respective rewards. Experiment results demonstrate that our method achieves substantial performance improvements over DPO, with win rate gains of up to 7.5 points on MT-Bench, 6.2 points on AlpacaEval 2, and 4.3 points on Arena-Hard. Code is available at https://github.com/dvlab-research/TGDPO.
Reinforcement Learning from Human Feedback (RLHF) has become a crucial technique for aligning Large Language Models (LLMs) with human preferences and intentions (Ouyang et al., 2022; Ziegler et al., 2020). This approach has demonstrated significant success in recent LLM advancements (OpenAI et al., 2024; Team et al., 2024a; Grattafiori et al., 2024; Team et al., 2024b). In typical RLHF workflows, a reward model is first trained using human feedback, and then the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017) is employed to fine-tune the policy model. Typically, in these methods, a sequence-level reward is assigned to the last token of a sequence. However, this approach faces challenges, such as the sparse reward problem (i.e., delayed feedback), which leads to instability and sample inefficiency in PPO training (Choshen et al., 2020). This issue is particularly pronounced in LLM training, where responses are often lengthy and generated at the token level (Yang et al., 2023). Recent research has suggested that leveraging dense token-level reward models (Yang et al., 2023; Yin et al., 2025; Zhong et al., 2024) can help alleviate these issues, improving PPO’s performance in aligning LLMs with human preferences.
Recent developments in RLHF have centered around creating simpler and more efficient algorithms that eliminate the need for a separate reward model. A notable approach in this direction is Direct Preference Optimization (DPO) (Rafailov et al., 2023). DPO reparameterizes the reward function in RLHF by directly using preference data to optimize the policy model, bypassing the traditionally required step of training a separate reward model. This reparameterization streamlines the alignment process, making DPO a popular algorithm for LLM alignment. While dense token-level reward guidance has proven beneficial for PPO (Yang et al., 2023; Yin et al., 2025; Zhong et al., 2024), its extension to DPO is nontrivial, as DPO is formulated as a sequence-level bandit problem. In this context, the reward is expressed through the policy being optimized, and integrating token-level reward guidance into this framework presents a significant challenge, especially in eliminating the partition function from the loss function.
To fill this gap, we decompose the sequence-level proximal policy optimization into a sequence of token-level proximal policy optimization problems and modify them to incorporate token-level reward guidance. We derive a closed-form optimal token-level policy and the corresponding token-level reward for the modified problem. Based on the obtained reward and the Bradley-Terry model, especially a new theoretical result for eliminating the partition function, we propose a preference optimization algorithm framework with token-level reward guidance for DPO, which we refer to as TGDPO. Additionally, we introduce a practical token-level reward guidance based on the induced DPO reward.
Extensive experiments are conducted on three instruction-following benchmarks: AlpacaEval 2 (Li et al., 2023), MT-Bench (Zheng et al., 2023), and Arena-Hard (Li et al., 2024). TGDPO consistently outperforms existing preference optimization algorithms, achieving improvements of up to 7.5 points on MT-Bench, 6.2 points on AlpacaEval 2, and 4.3 points on Arena-Hard compared to the best baseline method. We further demonstrate and analyze the unique advantages of TGDPO. We empirically show that TGDPO achieves satisfactory policies upon loss convergence, which is not commonly observed in conventional preference optimization methods. TGDPO also enables control over convergence speed and is robust to variations in token-level rewards. These properties significantly enhance the efficiency and practicality of the algorithm. Our key contributions are outlined below:
• We decompose the sequence-level PPO into a sequence of token-level proximal policy optimization problems via the upper-bounding approach and derive a closed-form optimal token-level policy for the modified problem, with which the corresponding reward can be represented along with the token-level reward guidance.
• With the obtained reward, the Bradley-Terry model, and a new result for eliminating the partition function, we propose TGDPO, a preference optimization algorithm framework with token-level reward guidance for DPO. We further introduce a practical token-level reward guidance based on the induced DPO reward.
• Extensive experiments demonstrate that our TGDPO improves win rates by up to 7.5 points on MT-Bench, 6.2 points on AlpacaEval 2, and 4.3 points on Arena-Hard compared to the best baseline.
# 2. Related Work
Reinforcement Learning from Human Feedback. Reinforcement learning from human feedback (RLHF) has been extensively applied for aligning LLMs with human preferences and values (Ouyang et al., 2022; Ziegler et al., 2020). The standard RLHF pipeline typically consists of two stages: reward modeling and policy optimization through reinforcement learning. Proximal Policy Optimization (PPO) with on-policy sampling (Schulman et al., 2017) is commonly used for this purpose. However, challenges in effective reward modeling and tuning the PPO algorithm to achieve optimal performance have motivated alternative approaches that bypass the reward modeling step and focus on directly optimizing the policy. The direct preference optimization (DPO) algorithm (Rafailov et al., 2023) is a representative one. DPO explicitly represents the reward function with the optimal policy of the proximal policy optimization problem, thereby avoiding the need for a separate reward model and fine-tuning LLMs directly with human preference. DPO has proven to be both lightweight and stable, showing strong performance in a range of applications (Ivison et al., 2024; Tian et al., 2024; Miao et al., 2024). Several variants of DPO have since been proposed, improving its performance. For instance, R-DPO (Park et al., 2024) addresses DPO’s tendency to exploit token length, while SimPO (Meng et al., 2024) aims to better align the objective with the decoding formula and eliminate the need for a reference model. KTO (Ethayarajh et al., 2024) focuses on optimizing preferences using non-pairwise data. These preference optimization techniques, however, operate at the sequence level and do not shape the reward function of DPO from the token level. In contrast, our approach aims to leverage token-level rewards to guide preference optimization and better align LLMs. A recent work TDPO (Zeng et al., 2024) tries to provide a token-level understanding of DPO. 
It explains DPO using token-level Markov decision process and proposes to incorporate forward KL divergence to the DPO objective. However, like DPO, TDPO still does not consider token-level reward guidance. Our TGDPO, on the other hand, explicitly incorporates token-level reward signals into the preference optimization framework.
RLHF with Dense Token-Level Reward. Text generation of LLMs can be modeled as a Markov decision process. Sequence-level PPO treats the entire sequence as an action and assigns a reward at the sequence’s end (Schulman et al., 2017), which results in sparse feedback at the token level. This sparsity hinders the model’s ability to differentiate between preferred and dispreferred tokens within a sequence, leading to training instability (Snell et al., 2023; Xia et al., 2024). To mitigate this issue, several techniques have been developed to generate dense token-level rewards, including learning from fine-grained human feedback (Wu et al., 2023), fine-grained AI feedback (Ouyang et al., 2024), and grounding preferences at the token or segment level (Yang et al., 2023; Yin et al., 2025; Zhong et al., 2024). PPO leveraging such fine-grained reward models has shown significant performance improvements. However, extending token-level guidance to DPO is a challenge, as DPO’s reward function is explicitly expressed through the policy being optimized. Incorporating token-level reward guidance into the DPO framework requires overcoming substantial difficulties, especially in eliminating the partition function from the loss function, which remains an open problem. More discussions on closely related work are presented in Appendix C.
# 3. Preliminary
Given a human preference dataset $\mathcal{D} = \{(x, y_w, y_l)\}$, where $x$ is a prompt and $y_w$ and $y_l$ are the preferred and dispreferred responses respectively, in RLHF a sequence-level reward model $r_\phi(x, y)$ is first trained on the preference dataset to assign higher reward to the preferred response and lower reward to the dispreferred one. With the trained reward model, sequence-level Proximal Policy Optimization (PPO) solves the following problem to fine-tune LLMs:
$$
\begin{aligned}
& \max_{\pi_\theta} \, \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \left[ r_\phi(x, y) \right] - \beta \, \mathbb{D}_{\mathrm{KL}}\big[ \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big] \\
&= \max_{\pi_\theta} \, \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \left[ r_\phi(x, y) - \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \right],
\end{aligned}
$$
where $\mathbb{D}_{\mathrm{KL}}[\cdot]$ is the KL-divergence of two probability distributions, $\pi_\theta$ is the language model policy, $\pi_{\mathrm{ref}}$ is the reference policy, and the positive parameter $\beta$ controls the deviation of $\pi_\theta$ from $\pi_{\mathrm{ref}}$. Equation (1) can be considered as assigning the reward to a whole sequence and is referred to as the sequence-level PPO problem in this work. It has the issue of sparse reward (delayed feedback) that challenges traditional deep reinforcement learning (Andrychowicz et al., 2017). To alleviate this issue, sequence-level PPO with token-level reward guidance was developed to fine-tune LLMs in a fine-grained fashion with dense token-wise rewards (Yang et al., 2023; Yin et al., 2025; Zhong et al., 2024).
Sequence-Level PPO with Token-Level Reward Guidance. Text generation of an LLM can be modeled as a Markov Decision Process (MDP). Let $s_t$ be the context for generating the token at time step $t \geq 0$; the generated token is denoted as $a_t \sim \pi_\theta(\cdot \mid s_t)$. For a prompt $x$ of the LLM, $s_0 = x$ and $s_t = [x, a^{<t}]$, where $a^{<t} = [a_0, \dotsc, a_{t-1}]$ are the previously generated tokens. The generated full text sequence with $T$ tokens is denoted as $\pmb{a} = [a_0, \ldots, a_{T-1}]$. A token-level reward, which for convenience is also denoted by $r_\phi(s_t, a_t)$, is learned so that the reward sequence is dense and can guide the selection of a token at any time step; this is called token-level reward guidance (Yang et al., 2023; Yin et al., 2025). Typically, the problem of sequence-level proximal policy optimization with token-level reward guidance is (Yin et al., 2025):
$$
\max_{\pi_\theta} \, \mathbb{E}_{x \sim \mathcal{D},\, y \sim \prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t)} \left[ \sum_{t=0}^{T-1} r_\phi(s_t, a_t) - \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \right],
$$
where $x$ is a prompt, $s_t$ and $a_t$ are the state and action defined previously, and $y = [a_0, \ldots, a_{T-1}]$ is the response generated by $\pi_\theta$ from the given prompt $x$. Classically, the sequence-level reward function $r_\phi(x, y)$ can be set as $r_\phi(x, y) = \sum_{t=0}^{T-1} r_\phi(s_t, a_t)$ (Yang et al., 2023).
Direct Preference Optimization. Direct preference optimization (Rafailov et al., 2023) bypasses learning a reward model and aligns directly an LLM to human preference. DPO (Rafailov et al., 2023) expresses the sequence-level reward function explicitly with the optimal policy of Equation (1) as:
$$
r_\phi(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x),
$$
where $Z ( x )$ is the partition function and $\beta$ is a positive constant. By adopting the Bradley-Terry preference model (Bradley & Terry, 1952)
$$
\Pr(y_w \succ y_l \mid x) = \frac{\exp\left(r_\phi(x, y_w)\right)}{\exp\left(r_\phi(x, y_w)\right) + \exp\left(r_\phi(x, y_l)\right)}
$$
for specifying human preference distribution, DPO obtains the following loss function:
$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right],
$$
which is obtained by substituting Equation (3) into Equation (4), where $\sigma$ is the sigmoid function. DPO minimizes Equation (5) with respect to the policy $\pi_\theta$ to directly fine-tune the LLM with the preference dataset at the sequence level.
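Given the sequence log-probabilities under the policy and the reference model, Equation (5) reduces to a few lines. The helper below is an illustrative sketch of the per-example loss (the value $\beta = 0.1$ is an arbitrary choice for the example):

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-example DPO loss from sequence log-probabilities (Equation 5)."""
    # beta-scaled log-ratio margin between chosen and rejected responses.
    margin = beta * (logp_w - ref_logp_w) - beta * (logp_l - ref_logp_l)
    # -log sigmoid(margin): small when the chosen response is favored.
    return -np.log(1.0 / (1.0 + np.exp(-margin)))
```

When the policy matches the reference exactly, the margin is zero and the loss equals log 2; increasing the likelihood of the chosen response relative to the reference drives the loss down, which is the gradient signal DPO exploits.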
# 4. Methodology
Direct preference optimization expresses the reward function explicitly with the optimal policy of the sequence-level proximal policy optimization problem. However, incorporating existing token-level rewards explicitly into DPO to guide fine-tuning is an unresolved problem. To derive a form of DPO with token-level reward guidance, this section first gives the problem of token-level PPO in Section 4.1 from the sequence-level PPO in Equation (2). The token-level PPO problem is further modified to incorporate token-level reward guidance in Section 4.2, the closed-form optimal policy is derived, and the corresponding token-level reward with guidance is obtained. Then with the Bradley-Terry model, we propose the direct preference optimization with token-level reward guidance in Section 4.3.
# 4.1. Token-Level PPO
Note that $y = [ a _ { 0 } , \dotsc , a _ { T - 1 } ]$ is the response generated by $\pi _ { \boldsymbol { \theta } }$ from the given prompt $x$ . Using the notations of state
and action in Section 3, we can get
$$
\begin{aligned}
\pi_\theta(y \mid x) &= \pi_\theta([a_0, \dots, a_{T-1}] \mid x) = \prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t); \\
\pi_{\mathrm{ref}}(y \mid x) &= \pi_{\mathrm{ref}}([a_0, \dots, a_{T-1}] \mid x) = \prod_{t=0}^{T-1} \pi_{\mathrm{ref}}(a_t \mid s_t).
\end{aligned}
$$
Thus, the objective function in Equation (2) can be decomposed into the token level as:
$$
\begin{aligned}
& \sum_{t=0}^{T-1} r_\phi(s_t, a_t) - \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \\
&= \sum_{t=0}^{T-1} \left( r_\phi(s_t, a_t) - \beta \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)} \right).
\end{aligned}
$$
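Because the sequence probability factorizes over tokens, the sequence-level log-ratio in Equation (6) equals the sum of the token-level log-ratios. A quick numerical check with arbitrary toy per-token probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5
# Toy per-token probabilities under the policy and the reference model.
pi = rng.uniform(0.1, 0.9, T)
ref = rng.uniform(0.1, 0.9, T)
beta = 0.1

# Sequence-level term: beta * log( prod(pi) / prod(ref) ).
seq = beta * (np.log(np.prod(pi)) - np.log(np.prod(ref)))
# Token-level decomposition: sum_t beta * log( pi_t / ref_t ).
tok = np.sum(beta * (np.log(pi) - np.log(ref)))
```

The two quantities agree up to floating-point error, which is exactly the identity that lets the sequence-level objective be rewritten token by token.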
Moreover, according to the MDP for language models (Section 3), $y \sim \prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t)$ in Equation (2) is equivalent to $y \sim \pi_\theta(\cdot \mid x)$, which is in turn equivalent to $s_0 = x \sim \mathcal{D}$, $a_t \sim \pi_\theta(\cdot \mid s_t)$, $t = 0, 1, \dots, T-1$. Then, by Equation (6), the problem of sequence-level PPO with token-level reward guidance in Equation (2) becomes
$$
\begin{aligned}
& \max_{\pi_\theta} \, \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)} \left[ \sum_{t=0}^{T-1} \left( r_\phi(s_t, a_t) - \beta \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)} \right) \right] \\
&= \max_{\pi_\theta} \, \mathbb{E}_{x \sim \mathcal{D},\, a_t \sim \pi_\theta(\cdot \mid s_t),\, t = 0, 1, \dots, T-1} \left[ \sum_{t=0}^{T-1} \left( r_\phi(s_t, a_t) - \beta \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)} \right) \right].
\end{aligned}
$$
Based on Equation (7), we can show that:
Theorem 4.1. The maximum value of the sequence-level proximal policy optimization in Equation (2) is upper bounded by the sum over $t = 0 , 1 , \ldots , T - 1$ of the maximum values of the problem:
$$
\operatorname* { m a x } _ { \pi _ { \theta } } \mathbb { E } _ { s _ { t } \sim \mathcal { D } _ { t } , a _ { t } \sim \pi _ { \theta } ( \cdot | s _ { t } ) } \left[ r _ { \phi } ( s _ { t } , a _ { t } ) - \beta \log \frac { \pi _ { \theta } ( a _ { t } | s _ { t } ) } { \pi _ { r e f } ( a _ { t } | s _ { t } ) } \right]
$$
where $s _ { t } \sim \mathcal { D } _ { t }$ denotes that $s _ { 0 } = x \sim \mathcal { D }$ and $a _ { p } \sim \pi _ { \theta } ( \cdot | s _ { p } )$ , $p = 0 , 1 , \ldots , t - 1$ .
The proof of Theorem 4.1 is given in Appendix A.1.
Equation (8) is the problem of token-level PPO at time step $t$ , which optimizes the policy for the action $a _ { t }$ given the state $s _ { t }$ . Theorem 4.1 suggests that the sequence-level proximal policy optimization in Equation (2) can be upper-bounded by a sequence of token-level PPOs in Equation (8). However, this problem is not easy to solve since $s _ { t } \sim \mathcal { D } _ { t }$ depends on the policy $\pi _ { \theta }$ to be optimized (see Equation (1) for a comparison, where the distribution $\mathcal { D }$ is independent of the policy $\pi _ { \theta }$ to be optimized).
# 4.2. Modified Token-Level PPO with Reward Guidance and Optimal Policy
Given win and lose responses $y _ { w } = ( a _ { 0 } ^ { w } , \ldots , a _ { T _ { w } - 1 } ^ { w } )$ and $y _ { l } = ( a _ { 0 } ^ { l } , \dots , a _ { T _ { l } - 1 } ^ { l } )$ , Rafailov et al. (2024) expressed the per-instance loss of DPO (Rafailov et al., 2023) at the token level as:
$$
\begin{array} { r l } & { \operatorname* { P r } ( y _ { w } \succ y _ { l } ) } \\ & { = \sigma \left( \sum _ { t = 0 } ^ { T _ { w } - 1 } \beta \log \frac { \pi _ { \theta } \left( a _ { t } ^ { w } | s _ { t } ^ { w } \right) } { \pi _ { \mathrm { r e f } } \left( a _ { t } ^ { w } | s _ { t } ^ { w } \right) } - \sum _ { t = 0 } ^ { T _ { l } - 1 } \beta \log \frac { \pi _ { \theta } \left( a _ { t } ^ { l } | s _ { t } ^ { l } \right) } { \pi _ { \mathrm { r e f } } \left( a _ { t } ^ { l } | s _ { t } ^ { l } \right) } \right) . } \end{array}
$$
Assuming access to a token-level reward $\hat { r } ( s _ { t } , a _ { t } )$ , which may indicate whether the action $a _ { t }$ is preferred or dispreferred in the state $s _ { t }$ , this work aims to replace $\beta$ in the above equation with $\beta f ( \hat { r } ( s _ { t } , a _ { t } ) )$ , a function of the token-level reward $\hat { r } ( s _ { t } , a _ { t } )$ , to guide DPO.
Following DPO (Rafailov et al., 2023), we derive this form of loss function from the token-level proximal policy optimization in Equation (8) by incorporating the token-level reward guidance $f ( \hat { r } ( s _ { t } , a _ { t } ) )$ . First, similar to (Zeng et al., 2024; Yang et al., 2024), we relax $s _ { t } \sim \mathcal { D } _ { t }$ to $s _ { t } \sim \mathcal { D }$ and make Equation (8) solvable as
$$
\operatorname* { m a x } _ { \pi _ { \theta } } \mathbb { E } _ { s _ { t } \sim \mathcal { D } , a _ { t } \sim \pi _ { \theta } ( \cdot | s _ { t } ) } \left[ r _ { \phi } ( s _ { t } , a _ { t } ) - \beta \log \frac { \pi _ { \theta } ( a _ { t } | s _ { t } ) } { \pi _ { \mathrm { r e f } } ( a _ { t } | s _ { t } ) } \right] .
$$
Next, we incorporate the token-level reward guidance $f ( \hat { r } ( s _ { t } , a _ { t } ) )$ into this formulation and represent the ground-truth unknown reward function $r _ { \phi } ( s _ { t } , a _ { t } )$ with the optimal policy of this problem. The obtained ground-truth reward $r _ { \phi } ( s _ { t } , a _ { t } )$ is subsequently leveraged to construct our DPO loss function under the Bradley-Terry preference model.
Directly replacing $\beta$ in Equation (9) with $\beta f ( \hat { r } ( s _ { t } , a _ { t } ) )$ might not make the problem easy to solve. To address this issue, by noting that $\beta$ is a positive constant, Equation (9) is equivalent to
$$
\operatorname* { m a x } _ { \pi _ { \theta } } \mathbb { E } _ { s _ { t } \sim \mathcal { D } , a _ { t } \sim \pi _ { \theta } ( \cdot | s _ { t } ) } \left[ \frac { r _ { \phi } ( s _ { t } , a _ { t } ) } { \beta } - \log \frac { \pi _ { \theta } ( a _ { t } | s _ { t } ) } { \pi _ { \mathrm { r e f } } ( a _ { t } | s _ { t } ) } \right] .
$$
Then, we make the following Assumption 4.2 for incorporating token-level reward guidance $f ( \hat { r } ( s _ { t } , a _ { t } ) )$ explicitly into Equation (10).
Assumption 4.2. Suppose we have an existing reward model $\hat { r } ( \cdot )$ , which can generate a dense token-level reward sequence $\hat { r } \big ( s _ { t } , a _ { t } \big )$ , $t = 0 , 1 , \dots , T - 1$ . Moreover, suppose $f ( u )$ is a positive univariate function of $u$ .
It was shown in Rafailov et al. (2024), under the definition of the equivalent state-action reward class and invariant reward transformations, that DPO implicitly learns a token-level reward $\hat { r } ( s _ { t } , a _ { t } )$ of the form $\beta \log \frac { \pi _ { \hat { \theta } } ( a _ { t } | s _ { t } ) } { \pi _ { \mathrm { r e f } } ( a _ { t } | s _ { t } ) }$ , with the total reward $\hat { r } ( x , y ) = \sum _ { t = 0 } ^ { T - 1 } \hat { r } ( s _ { t } , a _ { t } )$ . Hence Assumption 4.2 is feasible.
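As a sketch of computing such a DPO-induced token-level reward from per-token log-probabilities, consider the following; the log-probability values are assumed toy numbers, not outputs of a real model:

```python
beta = 0.1

# Toy per-token log-probs of one response under a DPO-trained policy
# pi_hat and the reference policy pi_ref (assumed values).
logp_hat = [-1.2, -0.4, -2.0, -0.7]
logp_ref = [-1.5, -0.9, -1.8, -0.7]

# Implicit token-level reward: r_hat(s_t, a_t) = beta * log(pi_hat / pi_ref),
# computed as beta * (log pi_hat - log pi_ref).
r_hat = [beta * (lh - lr) for lh, lr in zip(logp_hat, logp_ref)]

# The total sequence reward is the sum of the token-level rewards.
total_reward = sum(r_hat)
```

Tokens where the trained policy assigns higher probability than the reference receive positive reward, and vice versa.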
Modified Token-Level PPO. With Assumption 4.2, we propose to adopt the token-level reward $\hat { r } ( s _ { t } , a _ { t } )$ to guide token-level PPO. First, the parameter $\beta$ in Equation (10) is replaced with $\beta f ( \hat { r } ( s _ { t } , a _ { t } ) )$ and we obtain the modified problem of token-level PPO with token-level reward guidance as follows:
$$
\operatorname* { m a x } _ { \pi _ { \theta } } \mathbb { E } _ { s _ { t } \sim \mathcal { D } , a _ { t } \sim \pi _ { \theta } ( \cdot | s _ { t } ) } \left[ \frac { r _ { \phi } ( s _ { t } , a _ { t } ) } { \beta f ( \hat { r } ( s _ { t } , a _ { t } ) ) } - \log \frac { \pi _ { \theta } ( a _ { t } | s _ { t } ) } { \pi _ { \mathrm { r e f } } ( a _ { t } | s _ { t } ) } \right] ,
$$
where $f ( \hat { r } ( s _ { t } , a _ { t } ) )$ with the token-level reward $\hat { r } \big ( s _ { t } , a _ { t } \big )$ is adopted to modify the ground-truth unknown reward function $r _ { \phi } ( s _ { t } , a _ { t } )$ .
Thus, similar to Rafailov et al. (2023), the optimal policy for the action $a _ { t }$ at time step $t$ of the modified token-level proximal policy optimization in Equation (11) can be derived, as stated in the following Theorem 4.3.
Theorem 4.3. The optimal policy $\pi _ { \boldsymbol { \theta } _ { t } } ( a _ { t } | s _ { t } )$ for the action $a _ { t }$ at time step t of the modified token-level proximal policy optimization in Equation (11) is
$$
\pi _ { \theta _ { t } } ( a _ { t } | s _ { t } ) = \frac { \pi _ { r e f } ( a _ { t } | s _ { t } ) \exp { \left( \frac { r _ { \phi } ( s _ { t } , a _ { t } ) } { \beta f ( \hat { r } ( s _ { t } , a _ { t } ) ) } \right) } } { Z ( s _ { t } ) } ,
$$
$\begin{array} { r } { Z ( s _ { t } ) ~ = ~ \mathbb { E } _ { a _ { t } \sim \pi _ { \mathrm { r e f } } ( \cdot | s _ { t } ) } \left[ \exp { \left( \frac { r _ { \phi } \left( s _ { t } , a _ { t } \right) } { \beta f \left( \hat { r } \left( s _ { t } , a _ { t } \right) \right) } \right) } \right] } \end{array}$ is the partition function, and $s _ { t } \sim \mathcal { D }$ does not depend on $\pi _ { \theta _ { t } }$ . Moreover, the ground-truth unknown token-level reward can be represented with the optimal policy $\pi _ { \theta _ { t } } ( a _ { t } | s _ { t } )$ as:
$$
\frac { r _ { \phi } ( s _ { t } , a _ { t } ) } { f ( \hat { r } ( s _ { t } , a _ { t } ) ) } = \beta \log \frac { \pi _ { \theta _ { t } } ( a _ { t } | s _ { t } ) } { \pi _ { r e f } ( a _ { t } | s _ { t } ) } + \beta \log Z ( s _ { t } ) .
$$
The proof of Theorem 4.3 is provided in Appendix A.2.
Modified Token-Level Reward. By Equation (12), we have the token-level reward function
$$
\begin{array} { r } { r _ { \phi } ( s _ { t } , a _ { t } ) = \beta f ( \hat { r } ( s _ { t } , a _ { t } ) ) \log \frac { \pi _ { \theta _ { t } } ( a _ { t } | s _ { t } ) } { \pi _ { \mathrm { r e f } } ( a _ { t } | s _ { t } ) } + } \\ { \beta f ( \hat { r } ( s _ { t } , a _ { t } ) ) \log Z ( s _ { t } ) , } \end{array}
$$
where $f ( \hat { r } ( s _ { t } , a _ { t } ) )$ satisfies Assumption 4.2, $\beta$ is a constant, $s _ { t } \sim \mathcal { D }$ does not depend on $\pi _ { \theta _ { t } } , t = 0 , 1 , \ldots , T - 1$ .
Suppose that trajectories generated by LLMs are bounded by a finite number of time steps, or tokens. Then, since LLMs are over-parameterized, we may assume without loss of generality that there exists $\theta$ such that $\pi _ { \theta } ( a _ { t } | s _ { t } ) = \pi _ { \theta _ { t } } ( a _ { t } | s _ { t } )$ , $t = 0 , 1 , \ldots , T - 1$ . Thus, with the notations of the prompt $x$ and the generated sequence $y$ , Equation (13) can be rewritten in the form
$$
\begin{array} { r } { r _ { \phi } ( [ x , y ^ { < t } ] , y ^ { t } ) = \beta f ( \hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) ) \log \frac { \pi _ { \theta } ( y ^ { t } | [ x , y ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y ^ { t } | [ x , y ^ { < t } ] ) } } \\ { + \beta f ( \hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) ) \log Z ( [ x , y ^ { < t } ] ) \enspace . } \end{array}
$$
for all time steps $t$ , where the last term with the partition function does not depend on $\pi _ { \theta }$ , according to Theorem 4.3.
# 4.3. Direct Preference Optimization with Token-Level Reward Guidance
For the proximal policy optimization with token-level reward guidance in Equation (11), Section 4.2 has represented the ground-truth unknown token-level reward $r _ { \phi } ( s _ { t } , a _ { t } )$ explicitly in Equation (14). Subsequently, the total reward $r _ { \phi } ( x , y )$ for the prompt $x$ and its response $y$ can be expressed as:
$$
\begin{array} { r l } { r _ { \phi } ( x , y ) = } & { \displaystyle \sum _ { t = 0 } ^ { T - 1 } \beta f ( \hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) ) \log \frac { \pi _ { \theta } ( y ^ { t } | [ x , y ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y ^ { t } | [ x , y ^ { < t } ] ) } } \\ & { + \displaystyle \sum _ { t = 0 } ^ { T - 1 } \beta f ( \hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) ) \log Z ( [ x , y ^ { < t } ] ) , } \end{array}
$$
where the last term with the partition function does not depend on $\pi _ { \boldsymbol { \theta } }$ .
Next, we derive the loss function with token-level reward guidance for direct preference optimization, as targeted at the beginning of Section 4.2. Given a human preference dataset $\mathcal { D } = \{ ( x , y _ { w } , y _ { l } ) \}$ , where $x$ is a prompt and $y _ { w }$ and $y _ { l }$ are the preferred and dispreferred responses respectively, we adopt the reward function in Equation (15) and the Bradley-Terry preference model in Equation (4) for specifying human preference. To this end, we choose different shaping functions $f _ { w } ( \cdot )$ and $f _ { l } ( \cdot )$ for the win and lose responses respectively, both of which satisfy the condition in Assumption 4.2. Then by substituting Equation (15) into Equation (4), we obtain the per-instance loss detailed as follows.
Bradley-Terry Model with Token-Level Reward Guidance. From Equation (15), for convenience we let
$$
\begin{array} { r l } & { \varphi ( \pi _ { \theta } , f , \hat { r } ; x , y _ { w } , y _ { l } ) } \\ & { \quad = \displaystyle \sum _ { t = 0 } ^ { T _ { w } - 1 } \beta f _ { w } \big ( \hat { r } \big ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } \big ) \big ) \log \frac { \pi _ { \theta } \big ( y _ { w } ^ { t } \big | [ x , y _ { w } ^ { < t } ] \big ) } { \pi _ { \mathrm { r e f } } \big ( y _ { w } ^ { t } \big | [ x , y _ { w } ^ { < t } ] \big ) } } \end{array}
$$
$$
- \sum _ { t = 0 } ^ { T _ { l } - 1 } \beta f _ { l } \big ( \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) \big ) \log \frac { \pi _ { \theta } ( y _ { l } ^ { t } | [ x , y _ { l } ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y _ { l } ^ { t } | [ x , y _ { l } ^ { < t } ] ) } ;
$$
$$
\begin{array} { r l } & { \delta ( f , \hat { r } ; x , y _ { w } , y _ { l } ) } \\ & { \quad = \displaystyle \sum _ { t = 0 } ^ { T _ { w } - 1 } \beta f _ { w } \big ( \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) \big ) \log Z ( [ x , y _ { w } ^ { < t } ] ) } \\ & { \qquad \quad - \displaystyle \sum _ { t = 0 } ^ { T _ { l } - 1 } \beta f _ { l } \big ( \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) \big ) \log Z ( [ x , y _ { l } ^ { < t } ] ) , } \end{array}
$$
where $T _ { w }$ and $T _ { l }$ are the lengths of the responses $y _ { w }$ and $y _ { l }$ respectively. Then, the Bradley-Terry preference model with token-level reward guidance is
$$
\begin{array} { r l } & { \operatorname* { P r } ( y _ { w } \succ y _ { l } | x ) } \\ & { = \sigma \left( \varphi ( \pi _ { \theta } , f , \hat { r } ; x , y _ { w } , y _ { l } ) + \delta ( f , \hat { r } ; x , y _ { w } , y _ { l } ) \right) . } \end{array}
$$
The proof of Equation (17) is given in Appendix A.3.
The above function is not computable since it contains partition functions in $\delta ( f , \hat { r } ; x , y _ { w } , y _ { l } )$ . However, since preference optimization aims to maximize the preference function in Equation (17) with respect to $\pi _ { \theta }$ , and $\delta ( f , \hat { r } ; x , y _ { w } , y _ { l } )$ does not depend on the policy $\pi _ { \theta }$ , we can eliminate $\delta ( f , \hat { r } ; x , y _ { w } , y _ { l } )$ from Equation (17) based on the following Theorem 4.4.
Theorem 4.4. The preference function in Equation (17) has the same maxima and the same ascent directions as the function $\sigma \left( \varphi ( \pi _ { \theta } , f , \hat { r } ; x , y _ { w } , y _ { l } ) \right)$ . Moreover, for two policies $\pi _ { \theta _ { 1 } }$ and $\pi _ { \theta _ { 2 } }$ ,
$$
\begin{array} { r l } & { \quad \sigma \left( \varphi ( \pi _ { \theta _ { 1 } } , f , \hat { r } ; x , y _ { w } , y _ { l } ) + \delta ( f , \hat { r } ; x , y _ { w } , y _ { l } ) \right) } \\ & { \quad > \sigma \left( \varphi ( \pi _ { \theta _ { 2 } } , f , \hat { r } ; x , y _ { w } , y _ { l } ) + \delta ( f , \hat { r } ; x , y _ { w } , y _ { l } ) \right) } \end{array}
$$
if and only if
$$
\begin{array} { r l } & { \sigma \left( \varphi ( \pi _ { \theta _ { 1 } } , f , \hat { r } ; x , y _ { w } , y _ { l } ) \right) } \\ & { > \sigma \left( \varphi ( \pi _ { \theta _ { 2 } } , f , \hat { r } ; x , y _ { w } , y _ { l } ) \right) . } \end{array}
$$
The proof of Theorem 4.4 is given in Appendix A.4. Theorem 4.4 follows from the fact that the sigmoid function is strictly increasing and therefore preserves the order of values. Hence Theorem 4.4 suggests that maximizing $\sigma \left( \varphi ( \pi _ { \theta } , f , \hat { r } ; x , y _ { w } , y _ { l } ) \right)$ with respect to $\pi _ { \theta }$ is equivalent to maximizing the preference function in Equation (17) with respect to $\pi _ { \theta }$ . Furthermore, the equivalence between Equation (18) and Equation (19) demonstrates that, for any two policies $\pi _ { \theta _ { 1 } }$ and $\pi _ { \theta _ { 2 } }$ , canceling the term $\delta ( f , \hat { r } ; x , y _ { w } , y _ { l } )$ from Equation (18) does not affect the preference order of the responses $y _ { w }$ and $y _ { l }$ .
Loss Function. Since we only care about the optimal policy of Equation (17), by Theorem 4.4 we may redefine the preference function as $\sigma \left( \varphi ( \pi _ { \theta } , f , \hat { r } ; x , y _ { w } , y _ { l } ) \right)$ , i.e.,
$$
\begin{array} { r l } & { \mathrm { P r } ( y _ { w } \succ y _ { l } | x ) \triangleq \sigma \left( \varphi ( \pi _ { \theta } , f , \hat { r } ; x , y _ { w } , y _ { l } ) \right) } \\ & { = \sigma \left( \displaystyle \sum _ { t = 0 } ^ { T _ { w } - 1 } \beta f _ { w } \big ( \hat { r } \big ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } \big ) \big ) \log \frac { \pi _ { \theta } \left( y _ { w } ^ { t } \big | [ x , y _ { w } ^ { < t } ] \right) } { \pi _ { \mathrm { r e f } } \left( y _ { w } ^ { t } \big | [ x , y _ { w } ^ { < t } ] \right) } \right. } \\ & { \quad \left. - \displaystyle \sum _ { t = 0 } ^ { T _ { l } - 1 } \beta f _ { l } \big ( \hat { r } \big ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } \big ) \big ) \log \frac { \pi _ { \theta } \left( y _ { l } ^ { t } \big | [ x , y _ { l } ^ { < t } ] \right) } { \pi _ { \mathrm { r e f } } \left( y _ { l } ^ { t } \big | [ x , y _ { l } ^ { < t } ] \right) } \right) , } \end{array}
$$
which specifies the per-instance human preference and is computable. Furthermore, analogous to Equation (5), we formulate the loss function for enhancing DPO by harnessing token-level reward guidance as follows:
$$
\begin{array} { r l } & { \mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } ) = - \mathbb { E } _ { ( x , y _ { w } , y _ { l } ) \sim \mathcal { D } } \Bigg [ \log \sigma \Bigg ( \displaystyle \sum _ { t = 0 } ^ { T _ { w } - 1 } \beta f _ { w } \big ( \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) \big ) \log \frac { \pi _ { \theta } ( y _ { w } ^ { t } | [ x , y _ { w } ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y _ { w } ^ { t } | [ x , y _ { w } ^ { < t } ] ) } } \\ & { \quad - \displaystyle \sum _ { t = 0 } ^ { T _ { l } - 1 } \beta f _ { l } \big ( \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) \big ) \log \frac { \pi _ { \theta } ( y _ { l } ^ { t } | [ x , y _ { l } ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y _ { l } ^ { t } | [ x , y _ { l } ^ { < t } ] ) } \Bigg ) \Bigg ] . } \end{array}
$$
The loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ in Equation (20) provides a framework of direct preference optimization, by leveraging $f ( \hat { r } ( s _ { t } , a _ { t } ) )$ to shape the optimization of the policy on the tokens of win and lose responses. Specifically, with an appropriate choice of $f ( \cdot )$ , this framework can recover several known direct preference optimization methods. For example, if we take $f _ { w } \equiv f _ { l } \equiv 1$ , then Equation (20) is the loss function of DPO (Rafailov et al., 2023) (for others, see Appendix C.2). Nonetheless, the aim of this framework is to use token-level reward $\hat { r } \big ( s _ { t } , a _ { t } \big )$ to shape the loss function in Equation (20) directly. In the following, we provide a practical example.
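The claim that $f_w \equiv f_l \equiv 1$ recovers DPO can be illustrated with a toy per-instance computation; `tgdpo_loss` is a hypothetical helper and the per-token log-ratios are assumed values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tgdpo_loss(beta, lr_w, lr_l, fw, fl):
    """Per-instance TGDPO loss in the form of Eq. (20).

    lr_w / lr_l: per-token log(pi_theta/pi_ref) for the win / lose response.
    fw / fl: per-token shaping weights f_w(r_hat), f_l(r_hat).
    """
    margin = (beta * sum(w * r for w, r in zip(fw, lr_w))
              - beta * sum(w * r for w, r in zip(fl, lr_l)))
    return -math.log(sigmoid(margin))

beta = 0.1
lr_w = [0.3, 0.5, -0.2]   # toy log-ratios for the win response (assumed)
lr_l = [-0.1, 0.2]        # toy log-ratios for the lose response (assumed)

# With f_w = f_l = 1 the loss reduces to the standard DPO loss.
dpo = tgdpo_loss(beta, lr_w, lr_l, [1.0] * 3, [1.0] * 2)
dpo_direct = -math.log(sigmoid(beta * (sum(lr_w) - sum(lr_l))))
assert abs(dpo - dpo_direct) < 1e-12
```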
Practical Method. For convenience, we adopt the induced DPO reward (Rafailov et al., 2023) for the token-level reward $\hat { r } ( s _ { t } , a _ { t } )$ . Suppose $\pi _ { \hat { \theta } }$ is an optimal policy of the DPO loss function in Equation (5); Rafailov et al. (2024) showed in their Theorem 1 that DPO implicitly learns a token-level reward of the form
$$
\hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) = \beta \log \frac { \pi _ { \hat { \theta } } ( y ^ { t } | [ x , y ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y ^ { t } | [ x , y ^ { < t } ] ) } .
$$
Hence for Equation (20), we simply set
$$
\begin{array} { r l } & { f _ { w } \big ( \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) \big ) = 1 + \alpha \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) ; } \\ & { f _ { l } ( \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) ) = 1 - \alpha \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) , } \end{array}
$$
where $\alpha$ is a positive constant. Obviously, this setting meets Assumption 4.2 if $\alpha$ is small enough.
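A minimal sketch of this choice of shaping functions, with assumed toy token-level rewards:

```python
alpha = 0.5                      # guidance strength (assumed value)
r_hat_win = [0.3, -0.2, 0.6]     # toy implicit rewards for win-response tokens
r_hat_lose = [-0.4, 0.1]         # toy implicit rewards for lose-response tokens

f_w = [1 + alpha * r for r in r_hat_win]    # Eq. (21), win response
f_l = [1 - alpha * r for r in r_hat_lose]   # Eq. (21), lose response

# Assumption 4.2 requires the shaping functions to be positive,
# which holds whenever alpha * |r_hat| < 1.
assert all(w > 0 for w in f_w + f_l)
```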
Motivation of the Practical Method. Observing the loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ in Equation (20), below is the motivation for setting $f ( \hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) )$ as in Equation (21):
• For a token $y _ { w } ^ { t }$ in the win response, if $\hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) > 0$ then it is identified as a preferred token, implying that the state-action pair should be reinforced, and it is therefore assigned a larger weight $1 + \alpha \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } )$ . In this way, the gradient of our loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ at this state-action pair is
$$
\beta ( 1 + \alpha \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) ) \nabla _ { \pi _ { \theta } } \log \frac { \pi _ { \theta } ( y _ { w } ^ { t } | [ x , y _ { w } ^ { < t } ] ) } { \pi _ { \mathrm { r e f } } ( y _ { w } ^ { t } | [ x , y _ { w } ^ { < t } ] ) } ,
$$
which is scaled up by $1 + \alpha \hat { r } \big ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } \big )$ . As a result, optimizing our loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ encourages the policy to assign a higher probability to this action.
• Similarly, the token $y _ { w } ^ { t }$ satisfying $\hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) < 0$ is identified as a dispreferred token, although it is in the preferred response $y _ { w }$ . Then by assigning weight $1 + \alpha \hat { r } ( [ x , y _ { w } ^ { < t } ] , y _ { w } ^ { t } ) < 1$ , optimizing our loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ would progressively assign a lower probability to this action.
• For a token $y _ { l } ^ { t }$ in the lose response, if $\hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) < 0$ then it is considered a dispreferred token. Since the weight $1 - \alpha \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) > 1$ , optimizing the loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ would assign an even lower probability to this action.
• The token $y _ { l } ^ { t }$ satisfying $\hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) > 0$ is considered a preferred token, although it is in the dispreferred response $y _ { l }$ . In this case $1 - \alpha \hat { r } ( [ x , y _ { l } ^ { < t } ] , y _ { l } ^ { t } ) < 1$ , so optimizing the loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ would progressively assign a higher probability to this action.
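The four cases above reduce to simple weight checks; the reward magnitude ($0.4$) and $\alpha = 0.5$ below are assumed toy values:

```python
alpha = 0.5

def win_weight(r):              # f_w = 1 + alpha * r_hat
    return 1 + alpha * r

def lose_weight(r):             # f_l = 1 - alpha * r_hat
    return 1 - alpha * r

# Preferred token in the win response: gradient amplified (weight > 1).
assert win_weight(+0.4) > 1
# Dispreferred token in the win response: gradient damped (weight < 1).
assert win_weight(-0.4) < 1
# Dispreferred token in the lose response: pushed down harder (weight > 1).
assert lose_weight(-0.4) > 1
# Preferred token in the lose response: pushed down less (weight < 1).
assert lose_weight(+0.4) < 1
```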
The above analysis indicates that our direct preference optimization with token-level reward guidance performs in the token-level granularity, and exhibits varying degrees of deviation from the reference policy based on their respective rewards. This property inherently empowers our approach to discover satisfactory policies, leading to better policies than existing approaches. This property should be attributed to the modified token-level PPO with reward guidance in Section 4.2, and the derived loss function $\mathcal { L } _ { \mathrm { T G D P O } } ( \pi _ { \theta } )$ for direct preference optimization in Equation (20) with the setting of $f ( \hat { r } ( [ x , y ^ { < t } ] , y ^ { t } ) )$ in Equation (21).
# 5. Experiments
In this section, we first outline our experiment settings in Section 5.1. Then we show the main experiment results in Section 5.2. Lastly, we provide an empirical analysis of the unique properties of our TGDPO in Section 5.3.
# 5.1. Experiment Settings
Models and Training Settings. We conduct experiments on three models: Llama3-8B-Instruct (Grattafiori et al., 2024), Llama3.2-3B-Instruct, and Gemma2-2B-it (Team et al., 2024b). Following (Meng et al., 2024), we use prompts from the UltraFeedback dataset (Cui et al., 2024) and let each model generate 5 responses with a temperature of 0.8. These responses are then ranked using the ArmoRM model (Wang et al., 2024). The highest and lowest-ranked responses are selected as the chosen and rejected samples, respectively. For Llama3-8B-Instruct, we further utilize the PairRM model (Jiang et al., 2023) to annotate response scores, thereby evaluating the robustness of algorithms in handling varying quality of sample annotations. Hyperparameter settings are presented in Appendix D.1.
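The preference-pair construction described above can be sketched as follows; `generate` and `score` are hypothetical stand-ins for the sampling LLM and the ArmoRM reward model:

```python
import random

random.seed(1)

def generate(prompt, n=5, temperature=0.8):
    """Stand-in for LLM sampling (temperature unused in this stub)."""
    return [f"{prompt}-response-{i}" for i in range(n)]

def score(prompt, response):
    """Stand-in for a reward model; ArmoRM would return a learned score."""
    return random.random()

def build_pair(prompt):
    responses = generate(prompt)
    ranked = sorted(responses, key=lambda r: score(prompt, r), reverse=True)
    # Highest-ranked response is chosen, lowest-ranked is rejected.
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

pair = build_pair("What is DPO?")
assert pair["chosen"] != pair["rejected"]
```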
Evaluation Benchmarks. We primarily evaluate trained models’ performance using three widely recognized open-ended instruction-following benchmarks: MT-Bench (Zheng et al., 2023), Arena-Hard (Li et al., 2024), and AlpacaEval 2 (Li et al., 2023), which assess models’ response quality across diverse queries. For MT-Bench, we report the MT-Bench score and win rate against GPT-4. For Arena-Hard, we report the win rate against GPT-4-0314. For AlpacaEval 2, we report the win rate against GPT-4 Turbo. Further details are discussed in Appendix D.2.
Baseline Methods. We compare our TGDPO with two state-of-the-art preference optimization methods: DPO (Rafailov et al., 2023) and SimPO (Meng et al., 2024). We also include the pre-trained Instruct model as a baseline.
# 5.2. Main Results
The experiment results on AlpacaEval 2 (Li et al., 2023), Arena-Hard (Li et al., 2024), and MT-Bench (Zheng et al., 2023) are summarized in Table 1. Our TGDPO consistently outperforms baseline methods across these benchmarks. Notably, on AlpacaEval 2, it achieves a win rate increase of up to 6.2 points over the best baseline, while on MT-Bench, the win rate improves by up to 7.5 points. For the challenging Arena-Hard benchmark, our method demonstrates stable superior performance, with a win rate enhancement of up to 4.3 points compared to the best baseline. These consistent performance improvements underscore the effectiveness of our approach. More experiment results and comparisons are presented in Appendix B.
# 5.3. Analysis
In this section, we present an empirical analysis of the unique properties of our TGDPO in comparison to conventional preference optimization approaches. The analysis is conducted under the Llama3-8B-Instruct PairRM setting.
TGDPO Leads to Satisfactory Results upon Loss Convergence. A well-known challenge in preference optimization algorithms is the misalignment between loss minimization and model performance (Guo et al., 2024). Specifically, minimizing the loss for many preference optimization methods often results in degenerate policies. This issue necessitates extensive hyperparameter tuning to identify a sweet spot between the initialization and convergence points, significantly limiting the practicality and efficiency of these algorithms. As shown in Figure 1, the optimal hyperparameters for DPO barely reduce its loss. In contrast, we empirically find that TGDPO converges in far fewer steps than conventional preference optimization algorithms. In Figure 1, TGDPO demonstrates consistent and stable loss reduction toward convergence. We attribute this to TGDPO’s token-level reward, which inherently distinguishes preferred from dispreferred tokens.
Table 1. Experiment results on AlpacaEval 2 (Li et al., 2023), Arena-Hard (Li et al., 2024), and MT-Bench (Zheng et al., 2023) benchmarks.
Figure 1. Training loss curve for DPO and our TGDPO with different values of $\alpha$ . Changing the value of $\alpha$ leads to different convergence speeds for our method.
Furthermore, in Table 2, we compare benchmark performances by training each method using their default configurations and training them until loss convergence. The results reveal that both DPO and SimPO suffer substantial performance degradation upon convergence, with SimPO’s win rates dropping to single digits. Conversely, TGDPO maintains exceptional performance at the convergence point. These findings highlight the necessity of extensive hyperparameter searches for traditional preference optimization algorithms, whereas TGDPO simplifies the process, significantly improving efficiency and usability.
Table 2. Analysis of preference optimization methods’ performance upon training loss convergence.
Table 3. Analysis of our TGDPO’s performance upon training loss convergence with different convergence speeds.
TGDPO Enables Control Over Convergence Speed. TGDPO offers the flexibility to control the speed of convergence by adjusting the value of $\alpha$ in Equation (20). A larger $\alpha$ provides stronger token-level guidance, resulting in faster convergence, while a smaller $\alpha$ aligns the algorithm more closely with conventional DPO behavior. As illustrated in Figure 1, increasing $\alpha$ leads to a more rapid loss reduction compared to lower values of $\alpha$ . Additionally, in Table 3, we compare benchmark performances at the respective convergence points for different values of $\alpha$ . Specifically, we evaluate checkpoints at step 50 for $\alpha = 2 . 0$ , step 60 for $\alpha = 1 . 0$ , and epoch 1 for $\alpha = 0 . 5$ . The results demonstrate comparable performance across all configurations, especially for the challenging Arena-Hard benchmark. This desirable property of TGDPO allows for early stopping once the loss converges, significantly reducing computational costs without compromising performance.
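Early stopping at loss convergence could be implemented with a simple plateau test; the window size, tolerance, and loss values below are assumed for illustration:

```python
def converged(losses, window=5, tol=2e-3):
    """Stop when the range of the last `window` losses falls below `tol`."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) < tol

# Toy loss curve: rapid decrease, then a plateau.
losses = [0.69, 0.5, 0.3, 0.2, 0.151, 0.150, 0.1503, 0.1498, 0.1501]
stop_step = next(i for i in range(1, len(losses) + 1) if converged(losses[:i]))
```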
Table 4. Analysis of our TGDPO’s robustness using different token-level rewards $\hat { r } \big ( s _ { t } , a _ { t } \big )$ .
TGDPO is Robust to Variations in Token-Level Rewards $\hat { r } ( s _ { t } , a _ { t } )$ . To make TGDPO practical, we propose using token-level rewards derived from pre-trained DPO models as a convenient implementation. A key question arises: how sensitive is TGDPO to the quality of the token-level rewards $\hat { r } ( s _ { t } , a _ { t } )$ defined in Equation (20)? To investigate this, we analyze the behavior of TGDPO using token-level rewards obtained from two DPO models trained with different $\beta$ values: $\beta = 0 . 1$ and $\beta = 0 . 0 1$ . The benchmark performances of these models, along with TGDPO’s performance using their respective rewards, are presented in Table 4. As expected, DPO with $\beta = 0 . 0 1$ significantly outperforms DPO with $\beta = 0 . 1$ . However, when the token-level rewards from these models are used in TGDPO, the resulting performance is nearly identical. This finding highlights TGDPO’s robustness to variations in the quality of token-level rewards, making it less dependent on the specific characteristics of the pre-trained DPO model. Such robustness further enhances TGDPO’s practicality and reliability.

Abstract: Recent advancements in reinforcement learning from human feedback have shown that utilizing fine-grained token-level reward models can substantially enhance the performance of Proximal Policy Optimization (PPO) in aligning large language models. However, it is challenging to leverage such token-level rewards as guidance for Direct Preference Optimization (DPO), since DPO is formulated as a sequence-level bandit problem. To address this challenge, this work decomposes the sequence-level PPO into a sequence of token-level proximal policy optimization problems and then frames the problem of token-level PPO with token-level reward guidance, from which a closed-form optimal token-level policy and the corresponding token-level reward can be derived. Using the obtained reward and the Bradley-Terry model, this work establishes a framework of computable loss functions with token-level reward guidance for DPO, and proposes a practical reward guidance based on the induced DPO reward. This formulation enables different tokens to exhibit varying degrees of deviation from the reference policy based on their respective rewards. Experiment results demonstrate that our method achieves substantial performance improvements over DPO, with win rate gains of up to 7.5 points on MT-Bench, 6.2 points on AlpacaEval 2, and 4.3 points on Arena-Hard. Code is available at https://github.com/dvlab-research/TGDPO.

Categories: cs.LG, cs.AI, cs.CL
# 1 Introduction
The proliferation of AI-driven applications such as retrieval-augmented generation (RAG) [6, 17, 22, 26], personalized recommendation [12, 32, 36], machine learning [7, 11] and multimodal search [34] has led to explosive growth in the deployment of vector databases—specialized systems that manage and query high-dimensional vector embeddings produced by large language models, vision encoders, and other machine learning models. These vector databases rely heavily on Approximate Nearest Neighbor (ANN) search to efficiently retrieve vectors that are close to a given query in high-dimensional space, balancing search accuracy and latency to support real-time or near-real-time applications [2, 3, 14, 23, 30, 45, 48].
ANN Search Indices. Numerous indexing methods have been proposed for efficient ANN search. The graph-based index [23, 30] has become the most widely used technique due to its superior recall-latency trade-offs in high-dimensional space. Meanwhile, tree-based approaches [3, 5, 37] suffer from the curse of dimensionality [5], and hash-based methods [15, 19] often require excessive memory to maintain hash tables [15]. In contrast, graph-based indexing exploits proximity relationships between vectors, enabling efficient neighbor exploration. However, classical graph-based methods such as HNSW typically assume a static vector set that resides entirely in memory. While suitable for moderate data scales, these methods become impractical for billion-scale datasets, as the required memory capacity exceeds cost-effective limits, especially in cloud or budget-constrained environments [4, 30].
As a result, disk-based ANN search systems have gained growing attention for large-scale deployments. DiskANN [23] extends graph-based search to disk-resident datasets by leveraging offline graph construction and aggressive pruning techniques to improve disk access locality and minimize random I/O during search. However, DiskANN is primarily designed for static datasets, where the entire dataset is available upfront, and the graph structure is carefully optimized during offline preprocessing.
Challenges within Dynamic Vector Search. The constant influx of new vector data in real-world applications drives the escalating demand for dynamic ANN search indexing. Unlike static datasets, modern systems such as recommendation engines, social networks, and generative AI models require vector databases that can efficiently handle real-time insertions, deletions, and updates. For instance, Amazon’s recommendation system [1] continuously generates new product embeddings based on user interactions, requiring immediate integration into the search index to maintain recommendation accuracy. Traditional ANN indexing methods are inadequate for such dynamic environments due to latency and computational overhead incurred by reindexing. For example, DiskANN either requires costly global rebuilding of the graph or suffers from degraded search performance due to poorly connected new nodes. Therefore, efficiently supporting continuous insertions, deletions, and evolving query patterns while maintaining high search accuracy and low disk I/O remains a crucial and open challenge for disk-based ANN systems.
Several recent studies [35, 38, 42] have explored methods to support dynamic updates in ANN indices. Among them, the state-of-the-art solution SPFresh [42] maintains incremental updates by applying clustering-based strategies rather than traditional graph indexing. SPFresh partitions the vector space into coarse-grained clusters and supports efficient in-place updates by assigning new vectors to the nearest cluster. This enables fast insertions and deletions without requiring global restructuring. However, SPFresh suffers from several key limitations. First, coarse partitioning introduces structural rigidity: similar vectors may fall into different clusters, breaking neighborhood locality and leading to lower recall. For example, after the initial index is constructed, SPFresh achieves only around 0.75 Recall 10@10, which is significantly lower than that of graph-based methods. Second, the in-place update design restricts the flexibility of data layout optimization, making it difficult to improve disk locality over time.
Our Design: LSM-VEC. This paper presents LSM-VEC, a large-scale, disk-based vector database designed to achieve both efficient dynamic updates and high recall in ANN search. LSM-VEC is the first system to integrate the LSM-tree, a well-known indexing structure optimized for updates, to support efficient insertions and deletions in a vector index. Specifically, we leverage AsterDB [33], a state-of-the-art graph-oriented LSM-tree, to maintain the HNSW proximity graph on disk, enabling efficient updates to the HNSW structure. LSM-VEC further incorporates two key techniques to reduce query latency. (1) Selective neighbor exploration in HNSW. LSM-VEC avoids exhaustively evaluating all neighbors of each visited node. Instead, it adopts a probabilistic sampling strategy that selectively expands only a subset of neighbors. This technique is inspired by recent work on probabilistic graph traversal [28], originally proposed for in-memory ANN graphs. However, we extend this idea to the disk setting, where random I/O dominates the cost profile. Unlike the original formulation, where the primary overhead is computation, our adaptation must explicitly account for disk latency and data layout. To support this, LSM-VEC incorporates a new cost analysis that models the I/O savings from skipping neighbor evaluations, showing that even small reductions in the sampling ratio can lead to substantial latency gains without significant loss in recall. (2) LSM-VEC employs sampling-aware graph reordering to optimize vector placement on disk based on query-driven connectivity. Unlike traditional methods relying solely on static topology [41], LSM-VEC incorporates sampling-based edge weights reflecting actual traversal patterns. By co-locating vectors connected through frequently traversed edges, LSM-VEC enhances disk locality and significantly reduces random I/O operations during traversal of the graph-based vector index.
Contributions. Overall, LSM-VEC integrates the write-optimized characteristics of LSM-trees, the high recall of graph-based ANN search, the I/O efficiency of locality-aware reordering, and the update agility of sampling-based maintenance. This design yields a scalable and practical solution for billion-scale, dynamically evolving vector search. Experimental results on the SIFT1B dataset show that LSM-VEC consistently outperforms existing disk-based baselines. It achieves a higher Recall 10@10, lower update and query latency, and significantly lower memory usage. Compared to DiskANN, LSM-VEC reduces average update latency by up to $2.6\times$ and memory usage by over $66.2\%$, while maintaining more stable and efficient query performance under dynamic workloads. These results demonstrate that LSM-VEC is a robust and efficient solution for real-world billion-scale ANN search.
In summary, this paper makes the following contributions:
• We present a comprehensive analysis of the limitations in existing disk-based ANN systems and identify key challenges in supporting dynamic updates, efficient query execution, and scalable storage layout.
• We propose LSM-VEC, a disk-based vector search system that integrates hierarchical graph indexing with LSM-tree storage. LSM-VEC supports billionscale datasets with efficient insertions, deletions, and high-recall ANN queries.
• We implement LSM-VEC on top of AsterDB and evaluate it using billion-scale public datasets. Experimental results show that LSM-VEC achieves high accuracy and outperforms prior disk-based systems in both query and update efficiency.
# 2 Background
In this section, we introduce the fundamental task of ANN search, discuss existing solutions, and highlight the challenges in this domain.
# 2.1 ANN Search
Approximate nearest neighbor (ANN) search is a fundamental problem in large-scale vector retrieval, enabling fast similarity-based queries in applications such as retrieval-augmented generation (RAG) [26], recommendation systems [12], and multimodal search [34]. Given a query vector $q \in \mathbb{R}^d$ and a database $X = \{x_1, x_2, \ldots, x_n\}$, the goal of ANN search is to efficiently retrieve the most similar vectors to $q$ based on a predefined distance metric.
Figure 1. An example of pipeline of approximate nearest neighbor (ANN) search, consisting of index construction, candidate selection, and distance computation.
The exact nearest neighbor (NN) search problem is formally defined as:
$$
\mathrm{NN}(q, X) = \arg\min_{x_i \in X} D(q, x_i),
$$
where $D ( q , x _ { i } )$ represents a similarity function, such as Euclidean distance:
$$
D(q, x) = \|q - x\|_2.
$$
Due to the high cost of exact nearest neighbor search in large-scale datasets, Approximate Nearest Neighbor (ANN) methods trade accuracy for efficiency by allowing approximate results instead of the exact nearest neighbors.
In practice, Recall K@K is commonly used to evaluate the effectiveness of ANN methods. Specifically, given a query, Recall K@K measures the fraction of the ground-truth $K$ nearest neighbors that are successfully retrieved by the algorithm. Formally, it is defined as:
$$
\operatorname{Recall}\ \operatorname{K@K} = \frac{|X \cap G|}{K},
$$
where $X$ denotes the set of retrieved candidates and $G$ is the ground-truth set of the $K$ nearest neighbors.
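As a concrete illustration of these definitions, the following minimal Python sketch computes exact nearest neighbors by brute force and evaluates Recall K@K for a mock ANN result (all names and data are illustrative, not part of LSM-VEC):

```python
import math
import random

def euclidean(q, x):
    # D(q, x) = ||q - x||_2
    return math.sqrt(sum((qi - xi) ** 2 for qi, xi in zip(q, x)))

def brute_force_knn(q, X, k):
    # Exact NN(q, X): the k database vectors minimizing D(q, x_i).
    order = sorted(range(len(X)), key=lambda i: euclidean(q, X[i]))
    return set(order[:k])

def recall_k_at_k(retrieved, ground_truth, k):
    # Recall K@K = |X ∩ G| / K
    return len(retrieved & ground_truth) / k

random.seed(0)
X = [[random.gauss(0, 1) for _ in range(8)] for _ in range(500)]
q = [random.gauss(0, 1) for _ in range(8)]
G = brute_force_knn(q, X, k=10)          # ground-truth 10-NN
retrieved = set(list(G)[:8])             # a mock ANN result missing 2 true neighbors
print(recall_k_at_k(retrieved, G, k=10))  # 0.8
```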
Figure 1 illustrates the pipeline of a typical graph-based ANN search, which consists of two phases: index building and query processing. In the build phase, the system constructs a proximity-based index (e.g., a graph) over a set of data vectors $\{x_1, x_2, \ldots, x_N\} \subset \mathbb{R}^D$ based on their geometric properties. Each vector is represented as a node, and edges are created between pairs of vectors that are considered close according to a chosen distance metric. In the search phase, given a query vector $q \in \mathbb{R}^D$, the system first performs candidate selection by traversing the index, followed by scanning and ranking the candidates based on distance. The final result is returned as the top-ranked candidates, representing the query vector’s approximate nearest neighbors.
# 2.2 Indexing Techniques for ANN Search
Over the past decade, various ANN indexing techniques have been proposed, including tree-based [3, 5, 37], hashing-based [15, 19], and graph-based methods [21, 23, 29, 30]. Among these, graph-based approaches have emerged as the most effective for high-dimensional ANN search due to their superior recall-latency trade-off.
HNSW. Hierarchical navigable small world (HNSW) [30] is a widely adopted in-memory ANN indexing method that builds a hierarchical proximity graph. Each vector is assigned to a random maximum level based on an exponentially decaying distribution, and each layer maintains a navigable neighborhood structure. Higher layers include long-range links for coarse routing, while lower layers capture dense local neighborhoods for accurate refinement. HNSW achieves near-logarithmic search complexity and high recall. However, HNSW assumes that the entire graph resides in RAM, which makes it impractical for billion-scale datasets where memory costs become prohibitive. Furthermore, the incremental insertion procedure requires updating multiple graph layers, which leads to structural imbalance and degraded recall under high update rates. These limitations motivate the development of disk-based extensions that can retain the search quality of HNSW while supporting scalable, dynamic workloads.
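The exponentially decaying level assignment described above can be sketched as follows. The level multiplier below is an illustrative choice (not a parameter reported here), shown only to demonstrate why so few nodes land above the bottom layer:

```python
import math
import random

def assign_level(m_L=1 / math.log(256), rng=random.random):
    # HNSW-style level draw: level = floor(-ln(U) * m_L), U ~ Uniform(0, 1].
    # m_L = 1/ln(M) with an illustrative connectivity M = 256, so a node
    # rises above the bottom layer with probability 1/M ≈ 0.4%.
    return int(-math.log(1.0 - rng()) * m_L)

random.seed(42)
levels = [assign_level() for _ in range(100_000)]
above_bottom = sum(1 for l in levels if l > 0) / len(levels)
print(f"fraction of nodes above the bottom layer: {above_bottom:.4f}")
```

With this decay, only a tiny fraction of nodes appears above the bottom layer, which is why the upper layers of the hierarchy remain small enough to keep in memory.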
DiskANN. DiskANN [23] adapts graph-based vector indices for disk-resident data by leveraging a pruned graph index [4] and combining it with disk-aware optimizations. It performs aggressive offline pruning and data reordering to improve disk locality. Neighbors with strong connectivity are placed close to each other on disk, reducing random $\mathrm { I } / \mathrm { O }$ . At query time, DiskANN uses cache-aware traversal and prefetching strategies to efficiently access relevant parts of the graph. Although DiskANN significantly lowers memory consumption, it is fundamentally a static index. The graph is built entirely in memory and optimized before deployment. Insertions are appended at the end of the dataset without being properly integrated into the graph, which increases traversal cost and reduces recall. Deletions are not fully supported and may fragment the graph over time. While periodic full index reconstruction is possible, it incurs substantial computational overhead and is impractical for dynamic workloads. Consequently, DiskANN performs well in static environments but struggles to maintain high performance under continuous updates.
# 2.3 Dynamic Vector Index
While many ANN systems focus on optimizing static indexing performance, emerging workloads such as retrieval-augmented generation (RAG) and personalized search demand efficient dynamic support, where vectors are continuously inserted and deleted in real time.
SPFresh. SPFresh [42] proposes a fundamentally different design based on cluster-based indexing. Instead of maintaining a proximity graph, SPFresh organizes vectors into coarse-grained clusters via quantization. New vectors are assigned to their nearest clusters, enabling fast in-place updates and avoiding graph maintenance overhead. While this design enables efficient insertions and deletions, it suffers from structural rigidity. Similar vectors may fall into different clusters, harming recall unless many clusters are probed. This limitation is particularly severe under non-uniform data or evolving query distributions. Additionally, the system performs in-place updates, which simplifies maintenance but restricts opportunities for layout optimization. Vectors assigned near cluster boundaries may experience suboptimal placements, and SPFresh lacks mechanisms to adaptively refine these placements over time.
As a result, SPFresh trades accuracy for update speed. It achieves lower recall compared to graph-based systems, making it less suited for workloads where high precision and adaptive indexing are critical. In contrast, our approach combines the high-recall traversal of proximity graphs with update and disk-efficient mechanisms for scalable dynamic search.
# 2.4 Our Motivation
Existing disk-based ANN systems face a fundamental trade-off between achieving high recall, supporting efficient updates, and maintaining low search latency. Classical graph-based systems like DiskANN [23] achieve strong search accuracy by performing offline pruning and layout optimization, but they assume a static dataset and suffer from high maintenance costs when updates are required. To support dynamic workloads, recent systems take different design choices but introduce new limitations. SPFresh [42] adopts a clustering-based index with in-place updates, enabling efficient storage management but sacrificing accuracy. In contrast, FreshDiskANN [38] retains graph-based indexing to achieve better recall but lacks layout refinement during updates, resulting in sub-optimal search latency as the graph gradually deteriorates over time. Overall, none of the existing systems fully resolves the three-way trade-off among update efficiency, search performance, and accuracy in large-scale, disk-based ANN search. Designing an index that simultaneously supports high-recall search, low-latency query processing, and efficient real-time updates remains a critical open challenge.
To address this gap, we propose LSM-VEC, a disk-based vector search system that integrates graph-based indexing, lightweight traversal, and storage-aware layout optimization. A key design decision in LSM-VEC is the use of a log-structured merge tree (LSM-tree) as the underlying storage architecture. Unlike traditional $\mathrm{B^+}$-tree or static file formats, LSM-trees are inherently write-optimized: they absorb random updates via a memory-resident buffer and organize data in sequentially written disk files through background compaction. This makes them particularly suitable for workloads with frequent insertions and deletions.
Figure 2. LSM-VEC architecture.
By combining a hierarchical graph-based vector index with LSM-tree-based storage and layout-aware maintenance, LSM-VEC achieves high recall and robust support for dynamic updates with minimal I/O overhead. Building on this foundation, we further introduce substantial query-time optimizations through a sampling-based probabilistic search strategy and connectivity-aware graph reordering. These techniques significantly reduce I/O during vector search, enabling the system to meet the performance and scalability requirements of real-world applications such as retrieval-augmented generation and personalized recommendation, where low-latency vector retrieval must coexist with massive data that continuously evolves.
# 3 The Design of LSM-VEC
# 3.1 Overview
Figure 2 presents the overview of LSM-VEC. LSM-VEC is constructed upon a graph-oriented log-structured merge tree [33] (LSM-tree) that enables efficient updates and queries over the graph-based ANNS index. Building on this foundation, we integrate three key modules to further enhance the performance of LSM-VEC, each tailored to address specific challenges associated with disk-based ANN searches and updates.
LSM-based Hierarchical Graph Indexing Module. This module employs a memory-disk hybrid hierarchical proximity graph inspired by the Hierarchical Navigable Small World (HNSW) model. It addresses scalability limitations of HNSW by partitioning the graph into memory-resident upper layers and a disk-resident bottom layer managed through an LSM-tree. The upper layers facilitate rapid long-range navigation, while the lower layers leverage efficient disk indexing and management. Vector storage and graph indexing are decoupled to enhance storage efficiency and enable quick disk-based vector retrieval.
Sampling-Based Query Engine. Recognizing the computational overhead associated with naive neighbor exploration, this module implements a probabilistic neighbor selection mechanism. Utilizing a probabilistic filtering strategy based on projection-based similarity scores, the engine selectively evaluates neighbors, notably reducing disk I/O and computation.
Connectivity-Aware Reordering Module. To minimize random disk access, this module continuously optimizes the layout of data based on observed access patterns. Unlike traditional static reordering methods, it dynamically leverages runtime traversal statistics derived from the sampling-based query engine. Nodes frequently traversed together are incrementally co-located during regular LSM-tree compactions, enhancing data locality and reducing random disk I/O. This adaptive strategy is specifically designed for disk-resident graphs, efficiently handling updates without requiring extensive restructuring.
Collectively, these modules form an integrated solution tailored to the unique demands of large-scale, dynamic ANNS workloads. The LSM-based hierarchical indexing module ensures efficient index updating and querying at scale, the sampling-based query engine significantly reduces unnecessary IO overhead during search, and the connectivity-aware reordering module dynamically adapts storage layout to minimize disk latency. Detailed explanations and performance analyses of each module are provided in subsequent sections.
# 3.2 LSM-based Proximity Embedding: Efficient Indexing for Dynamic ANN Search
Disk-based approximate nearest neighbor search (ANNS) faces significant challenges in efficiently handling dynamic updates, as these updates often result in substantial random disk writes. LSM-VEC addresses this issue by extending hierarchical graph-based ANN search [30] for large-scale disk-based environments through an integration with an LSM-tree-based storage engine. This design allows the system to retain the high recall and logarithmic query complexity of HNSW while addressing memory constraints and update inefficiencies encountered when scaling to billions of vectors. HNSW is known for its excellent balance between efficiency and accuracy due to its hierarchical structure, which facilitates efficient long-range navigation in higher layers and precise neighbor refinement in lower layers. However, the original HNSW design assumes that the entire graph structure resides in memory, making it unsuitable for large-scale disk-based scenarios.
Storage Layout in LSM-VEC. To overcome this limitation, LSM-VEC decomposes the HNSW index into memory-resident upper layers and a disk-resident bottom layer. As shown in Figure 2, the upper layers of HNSW are retained in RAM to support low-latency search entry and fast hierarchical navigation. According to the exponential decay distribution used in HNSW’s level assignment [30], the upper layers are typically small: empirically, less than $1\%$ of all nodes reside above the bottom layer, which makes them suitable for in-memory storage even at billion scale. The bottom layer of HNSW, by contrast, is stored on disk and maintained via an LSM-tree, facilitating efficient index updates. Since each vector insertion or deletion generates a substantial number of new edges in the bottom layer, the LSM-tree allows LSM-VEC to handle these updates efficiently without requiring a global restructuring of the entire index. In addition, LSM-VEC stores vector data separately from the graph index. All vectors are placed in a contiguous on-disk array, sorted by their IDs. This layout allows constant-time retrieval via offset computation, avoiding redundant data storage while ensuring that vector access and neighbor traversal remain both efficient and write-friendly.
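The constant-time, offset-based vector retrieval from a contiguous ID-sorted array can be sketched as follows; the file layout, dimensionality, and float32 encoding are illustrative assumptions rather than details taken from the system:

```python
import os
import struct
import tempfile

D = 4        # vector dimensionality (illustrative)
REC = D * 4  # bytes per record: D float32 values

# Write vectors contiguously to disk, sorted by ID.
vectors = [[float(i)] * D for i in range(10)]
path = os.path.join(tempfile.mkdtemp(), "vectors.bin")
with open(path, "wb") as f:
    for v in vectors:
        f.write(struct.pack(f"{D}f", *v))

def fetch(vec_id):
    # Constant-time retrieval: seek to id * record_size and read one record.
    with open(path, "rb") as f:
        f.seek(vec_id * REC)
        return list(struct.unpack(f"{D}f", f.read(REC)))

print(fetch(7))  # [7.0, 7.0, 7.0, 7.0]
```

Because the array is sorted by ID and each record has a fixed size, no auxiliary index is needed to locate a vector on disk.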
Search in LSM-VEC. Search in LSM-VEC follows a layered traversal strategy, optimized to minimize random disk I/O. The search process starts from the upper memory-resident layers, where long-range edges enable efficient navigation towards the target region. Once the search reaches the lower disk-resident layer, LSM-VEC employs the sampling-guided traversal technique introduced in Section 3.3 to selectively explore a small set of promising neighbors. This approach significantly reduces unnecessary disk accesses.
Insertion in LSM-VEC. Each newly inserted vector is indexed following a hierarchical HNSW-style process. The vector is assigned to a random level $L$ sampled from an exponentially decaying distribution. The insertion then proceeds top-down through the hierarchy: at each level $\ell$ (except the bottom layer), the system identifies approximate neighbors and connects the vector to the top- $M$ closest nodes using in-memory search. At the bottom layer, neighbor search is conducted on the disk-resident graph stored in the LSM-tree. The vector is connected to the top- $M$ nearest disk-resident nodes, and the resulting edges are written to the LSM-tree for durable storage.
Figure 3 presents a running example of the bottom-layer insertion procedure in LSM-VEC. In this example, a new vector $v_n$ is inserted into the disk-resident graph. Through a disk-based nearest neighbor search, LSM-VEC identifies $v_4$ and $v_5$ as the top-$M$ closest neighbors to $v_n$. The system then forms bidirectional links between $v_n$ and these two nodes. As shown in the lower part of the figure, these edges are encoded as key-value pairs and inserted into the LSM-tree, where the key represents the source vector ID, and the value is its neighbor. All insertions are initially buffered in memory and eventually propagated to deeper LSM-tree levels via compaction. This example illustrates how LSM-VEC integrates new vectors into the disk-resident index with low overhead. The complete insertion procedure is detailed in Algorithm 1, where $\mathrm{NN}(\cdot)$ denotes the nearest neighbor search performed over either the in-memory graph or the disk-resident index.
Figure 3. An illustration of vector insertion in LSM-VEC. The new node $\textstyle { v _ { n } }$ is connected to two bottom-layer neighbors ${ { v } _ { 4 } }$ and $v _ { 5 }$ , and the resulting edges are stored in the LSM-tree.
# Algorithm 1 Insertion in LSM-VEC.
Require: $\boldsymbol { x } \in \mathbb { R } ^ { d }$ : vector to insert; $\mathcal { G }$ : in-memory graph; $\mathcal { D }$ :
disk-resident graph; $L _ { m a x }$ : current maximum level
1: Sample level $L \sim \operatorname* { P r } ( L ) \propto e ^ { - L }$
2: if $L > L _ { m a x }$ then
3: $L _ { m a x } \gets L$
4: end if
5: $E \gets$ entry point from top layer of $\mathcal { G }$
6: for $\ell = L _ { m a x } , \ldots , L + 1$ do
7: $E \gets \mathrm { G r e e d y S e a r c h } ( x , E , \mathcal { G } _ { \ell } )$
8: end for
9: for $\ell = L , \ldots , 2$ do
10: $N _ { \ell } \gets \mathrm { N N } ( x , \mathcal { G } _ { \ell } )$
11: $\mathcal{G}_\ell \gets \mathcal{G}_\ell \cup \{(x, \mathrm{TopM}(N_\ell))\}$
12: end for
13: $N _ { 1 } ^ { \prime } \gets \mathrm { N N } ( x , \mathcal { D } )$
14: $\mathcal{D} \gets \mathcal{D} \cup \{(x, \mathrm{TopM}(N_1'))\}$
15: return $( \mathcal { G } , \mathcal { D } )$
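To make the edge-as-key-value encoding from Figure 3 concrete, here is a toy sketch in which a dictionary-backed memtable stands in for AsterDB (whose actual API is not shown here); `compact()` mimics the merge of buffered writes into deeper on-disk levels:

```python
from collections import defaultdict

class ToyLSMEdgeStore:
    """Stand-in for the disk-resident edge store: keys are source vector
    IDs, values are neighbor lists. Writes land in an in-memory buffer
    and are merged into the 'disk' level by compact(), mimicking an
    LSM-tree's memtable/compaction behavior."""

    def __init__(self):
        self.memtable = defaultdict(list)
        self.disk = defaultdict(list)

    def add_edge(self, src, dst):
        # Bidirectional link, as when v_n is connected to v_4 and v_5.
        self.memtable[src].append(dst)
        self.memtable[dst].append(src)

    def compact(self):
        # Merge buffered edges into the durable level and clear the buffer.
        for key, neighbors in self.memtable.items():
            self.disk[key].extend(neighbors)
        self.memtable.clear()

    def neighbors(self, node):
        # A read consults both the buffer and the durable level.
        return self.memtable.get(node, []) + self.disk.get(node, [])

store = ToyLSMEdgeStore()
store.add_edge("v_n", "v_4")
store.add_edge("v_n", "v_5")
store.compact()
print(store.neighbors("v_n"))  # ['v_4', 'v_5']
```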
Deletion in LSM-VEC. To support efficient deletions in dynamic vector databases, LSM-VEC performs a local neighbor relinking strategy for both in-memory and disk-resident layers. When a vector is deleted, its immediate neighbors are reconnected using approximate neighbor search to preserve local graph connectivity. For the disk layer, LSM-VEC identifies affected nodes and inserts new edges into AsterDB, avoiding full reindexing.
In hierarchical HNSW indexing, the deleted node may exist in both the memory-resident upper layers and the disk-resident bottom layer. LSM-VEC ensures deletions are applied consistently across all levels. After relinking neighbors, the system removes all edges involving the deleted node from AsterDB and deletes the corresponding vector data. The full deletion procedure is described in Algorithm 2.
# Algorithm 2 Deletion in LSM-VEC.
Require: $\boldsymbol{x} \in \mathbb{R}^d$: vector to delete; $\mathcal{G}$: in-memory graph; $\mathcal{D}$:
disk-resident graph
1: for each layer $\ell$ where $x$ exists in $\mathcal { G }$ do
2: $N _ { \ell } \gets \mathrm { N e i g h b o r } _ { \ell } ( x )$
3: for each $p \in N _ { \ell }$ do
4: Remove edge $( p , x )$ and $( x , p )$ from $\mathcal G _ { \ell }$
5: end for
6: $C \gets \bigcup _ { \mathit { p } \in N _ { \ell } } \mathrm { N e i g h b o r } _ { \ell } ( \mathit { p } )$
7: for each $p \in N _ { \ell }$ do
8: $N _ { p } ^ { \prime } \gets \mathrm { N N } ( p , C )$
9: Connect $p$ to $\mathrm { T o p M } ( N _ { p } ^ { \prime } )$ in $\mathcal G _ { \ell }$
10: end for
11: Remove node $x$ from $\mathcal { G } _ { \ell }$
12: end for
13: $N _ { 1 } \gets \mathrm { N e i g h b o r } _ { 1 } ( x )$ in $\mathcal { D }$
14: for each $\boldsymbol { p } \in N _ { 1 }$ do
15: Remove edge $( p , x )$ and $( x , p )$ from $\mathcal { D }$
16: end for
17: $C \gets \bigcup _ { \substack { \boldsymbol { p } \in N _ { 1 } } } \mathrm { N e i g h b o r } _ { 1 } ( \boldsymbol { p } )$
18: for each $\boldsymbol { p } \in N _ { 1 }$ do
19: $N _ { p } ^ { \prime } \gets \mathrm { N N } ( p , C )$
20: Connect $p$ to $\mathrm { T o p M } ( N _ { p } ^ { \prime } )$ in $\mathcal { D }$
21: end for
22: Remove vector $x$ and all edges involving $x$ from $\mathcal { D }$
23: return $( \mathcal { G } , \mathcal { D } )$
# 3.3 Sampling-Guided Traversal: Fast and Robust Search over Disk-Based Graphs
Efficient ANN search on graph-based indices relies on exploring a minimal number of nodes and edges while ensuring high recall. Traditional graph-based ANNS methods, such as HNSW, typically employ greedy traversal strategies to navigate from an entry point to the target neighborhood. However, when applied to disk-based settings, naive greedy search often needs to exhaustively scan all neighbors of a node to make local routing decisions, incurring substantial random I/O overhead. To address this, LSM-VEC introduces a sampling-based filtering strategy inspired by probabilistic routing [8, 28], enabling efficient pruning of unlikely candidates with theoretical guarantees.
A key observation in graph-based ANN search is that not all neighbors need to be explored with equal probability.
When expanding a node’s neighbors during traversal, conventional greedy search evaluates all potential neighbors and selects the closest ones for further expansion. However, this approach results in redundant distance computations and excessive candidate evaluations, increasing query latency. This motivates LSM-VEC to adopt a probabilistic selection mechanism that dynamically adjusts the exploration probability of each neighbor based on its estimated proximity to the query. This sampling-based approach reduces unnecessary distance calculations while preserving high recall. To aid understanding of our system, we introduce the sampling techniques [8, 28] in detail below.
At the initialization stage, the system samples $m$ random projection vectors $\{a_i\}_{i=1}^{m} \sim N(0, I_d)$, where $d$ is the vector dimension. Each data vector $x \in \mathbb{R}^d$ is encoded into a binary sign-hash code:
$$
\mathrm{Hash}(x) = \left[ \mathrm{sgn}(x^\top a_1), \dots, \mathrm{sgn}(x^\top a_m) \right] \in \{-1, 1\}^m, \tag{4}
$$
where $\mathrm{sgn}(z) = 1$ if $z \geq 0$, and $-1$ otherwise. These hash codes are stored in memory at insertion time.
At query time, given a query vector $q$ , the system computes its hash code and compares it to each candidate $u$ via:
$$
\mathrm{\#Col}(q, u) = \frac{1}{2}\left(m + \mathrm{Hash}(q)^\top \cdot \mathrm{Hash}(u)\right),
$$
which counts the number of matching hash bits (collisions).
To ensure recall guarantees, a collision threshold is applied according to Hoeffding’s inequality. Given a target error $\epsilon$ and a maximum distance $\delta$, the threshold number of collisions is denoted $T_\epsilon^{\mathrm{SimHash}}$. Here, $\delta$ typically corresponds to the distance between the query $q$ and the farthest candidate in the current top-$k$ candidate set, serving as a dynamic cutoff for evaluating new candidates.
Then, the filtering condition becomes:
$$
\Pr\left[ \|q - u\| \leq \delta \mid \mathrm{\#Col}(q, u) \geq T_\epsilon^{\mathrm{SimHash}} \right] \geq 1 - \epsilon.
$$
This allows the system to safely skip candidates with insufficient hash collisions, significantly reducing I/O while maintaining theoretical recall guarantees.
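The sign-hash encoding of Equation (4) and the collision count $\mathrm{\#Col}(q, u)$ can be sketched in a few lines of Python. The concrete threshold $T_\epsilon^{\mathrm{SimHash}}$ derived from Hoeffding’s inequality is not spelled out in the text, so it appears below only as an illustrative tunable constant:

```python
import random

def make_projections(m, d, rng):
    # m random projection vectors a_i ~ N(0, I_d)
    return [[rng.gauss(0, 1) for _ in range(d)] for _ in range(m)]

def sign_hash(x, projections):
    # Hash(x) = [sgn(x^T a_1), ..., sgn(x^T a_m)] in {-1, 1}^m
    return [1 if sum(xi * ai for xi, ai in zip(x, a)) >= 0 else -1
            for a in projections]

def num_collisions(hq, hu):
    # #Col(q, u) = (m + Hash(q)^T Hash(u)) / 2, the number of matching bits
    m = len(hq)
    return (m + sum(b1 * b2 for b1, b2 in zip(hq, hu))) // 2

rng = random.Random(7)
proj = make_projections(m=64, d=16, rng=rng)
q = [rng.gauss(0, 1) for _ in range(16)]
near = [qi + 0.01 * rng.gauss(0, 1) for qi in q]  # slight perturbation of q
far = [rng.gauss(0, 1) for _ in range(16)]        # unrelated vector

c_near = num_collisions(sign_hash(q, proj), sign_hash(near, proj))
c_far = num_collisions(sign_hash(q, proj), sign_hash(far, proj))
T = 48  # illustrative stand-in for T_eps^SimHash, NOT the derived threshold
print(c_near, c_far)
```

A candidate close to the query collides on nearly all of the $m$ bits, while an unrelated candidate collides on roughly half, so a well-chosen threshold separates the two without computing exact distances.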
By integrating query-adaptive sampling and error-controlled hash filtering, LSM-VEC significantly accelerates search on disk-based graphs while maintaining theoretical guarantees, making it highly suitable for billion-scale ANN applications.
Theoretical Cost Analysis. To quantify the effectiveness of sampling-guided traversal, we analyze and compare the expected search cost before and after applying sampling. Let $T$ be the total number of visited nodes during the search, $d$ be the average node degree, $t_v$ be the time to fetch a single vector from disk, and $t_n$ be the time to retrieve the neighbor list of a node from the LSM-tree.
In conventional graph traversal, all neighbors of each visited node are evaluated, resulting in a search cost of:
$$
\mathrm{Cost}_{\mathrm{full}} = T \cdot (t_n + d \cdot t_v).
$$
Figure 4. An example of graph ordering to improve I/O efficiency.
In contrast, LSM-VEC introduces a sampling ratio $\rho \in (0, 1]$, which controls the fraction of neighbors to be accessed during traversal. A smaller $\rho$ implies more aggressive pruning of neighbor evaluations. The corresponding search cost is reduced to:
$$
\mathrm{Cost}_{\mathrm{sampling}} = T \cdot (t_n + \rho \cdot d \cdot t_v).
$$
Thus, the expected I/O cost saving brought by sampling is:
$$
\Delta = T \cdot (1 - \rho) \cdot d \cdot t_v.
$$
This analysis highlights that sampling-based search effectively reduces vector I/O cost while preserving search quality, especially when the sampling ratio $\rho$ is carefully tuned to balance recall and efficiency.
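The cost model can be sketched directly; all parameter values below are hypothetical and serve only to check that $\Delta$ equals the difference between the two costs:

```python
def search_cost(T, d, t_v, t_n, rho=1.0):
    # Each visited node pays one neighbor-list fetch (t_n) plus rho*d vector
    # fetches (t_v each); rho = 1.0 recovers the full-traversal cost.
    return T * (t_n + rho * d * t_v)

# Hypothetical parameters: 500 visited nodes, degree 32, 0.1 ms per vector
# read, 0.2 ms per neighbor-list read, sampling ratio 0.8.
T, d, t_v, t_n, rho = 500, 32, 0.1, 0.2, 0.8
full = search_cost(T, d, t_v, t_n)
sampled = search_cost(T, d, t_v, t_n, rho)
delta = T * (1 - rho) * d * t_v  # the claimed saving

assert abs((full - sampled) - delta) < 1e-9
```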
# 3.4 Locality-Aware Reordering: Adaptive Layout Optimization for Disk Traversal
To minimize random I/O overhead during disk-based ANN search, LSM-VEC adopts a graph reordering strategy to improve the physical locality of vectors stored on disk. This design is inspired by prior work on offline graph ordering [41], which aims to cluster closely related nodes together in memory to accelerate graph traversal.
Specifically, existing methods typically define a scoring function $S(u, v)$ between two nodes $u$ and $v$ based on static graph topology. For example, the state-of-the-art approach [41] combines the number of shared in-neighbors $S_s(u, v)$ and direct connections $S_n(u, v)$ as:
$$
S(u, v) = S_s(u, v) + S_n(u, v),
$$
where $S_s(u, v) = |N_I(u) \cap N_I(v)|$ measures the number of common in-neighbors of $u$ and $v$, and $S_n(u, v)$ indicates whether a direct edge exists between $u$ and $v$. This static formulation captures structural proximity but ignores runtime query patterns.
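A small sketch of this static score under the definitions above (the graph and node names are invented purely for illustration):

```python
def static_score(u, v, in_neighbors, edges):
    # S(u, v) = |N_I(u) ∩ N_I(v)|  +  1 if u and v are directly connected
    s_s = len(in_neighbors[u] & in_neighbors[v])
    s_n = 1 if (u, v) in edges or (v, u) in edges else 0
    return s_s + s_n

# Two shared in-neighbors ({b, c}) plus one direct edge -> score 3.
in_nb = {"u": {"a", "b", "c"}, "v": {"b", "c", "d"}}
print(static_score("u", "v", in_nb, edges={("u", "v")}))  # 3
```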
In contrast, LSM-VEC introduces a fundamentally different score definition tailored for dynamic ANN search. Instead of relying solely on static graph structure, we derive $S(u, v)$ from query-time traversal statistics. In particular, we define:
$$
S(u, v) = S_s(u, v) + S_n(u, v) \cdot (1 + \lambda) \cdot \mathrm{Hamming}(\mathrm{Hash}(q), \mathrm{Hash}(u)).
$$
This sampling-driven score directly captures the runtime importance of each edge based on its frequency in sampled search paths, enabling the layout optimization to reflect actual query behavior.
Given this score definition, LSM-VEC aims to find a node permutation $\phi(\cdot)$ that maximizes the total edge score within a physical prefetch window of size $w$, following the formulation:
$$
F(\phi) = \sum_{0 < \phi(v) - \phi(u) \le w} S(u, v),
$$
where $v_i = \phi^{-1}(i)$ denotes the node placed at position $i$ in the storage layout. Intuitively, this objective encourages frequently co-accessed nodes to be placed close together, so that they can be fetched within the same disk I/O block.
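The objective $F(\phi)$ can be evaluated for a candidate permutation as follows. The edge scores are toy values, and brute-force search over permutations is only feasible at this tiny scale (the real system uses a reordering pass, not enumeration):

```python
from itertools import permutations

def layout_score(perm, scores, w):
    # F(phi): sum S(u, v) over pairs with 0 < phi(v) - phi(u) <= w,
    # i.e. v falls inside the prefetch window just after u.
    pos = {node: i for i, node in enumerate(perm)}
    return sum(s for (u, v), s in scores.items() if 0 < pos[v] - pos[u] <= w)

# Toy edge scores (hypothetical): heavily co-accessed pairs score highest.
scores = {("A", "B"): 5.0, ("B", "C"): 4.0, ("A", "D"): 0.5}
best = max(permutations(["A", "B", "C", "D"]),
           key=lambda p: layout_score(p, scores, w=1))
```

With a window of size 1, the best layouts place A immediately before B and B immediately before C, collecting a score of 9.0.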
To achieve this goal, LSM-VEC periodically applies a global reordering pass over the disk-resident bottom-layer graph. The reordering is guided by the query-sampled edge heatmap, and the resulting layout naturally adapts to evolving query patterns without requiring prior knowledge of the full graph structure.
Running Example. In Figure 4, we demonstrate the effectiveness of locality-aware reordering with a running example. Left panel (original layout): the nodes of the proximity graph are stored without any consideration of the query access pattern. As a result, frequently traversed nodes are dispersed across the vector storage. For instance, during the query traversal path $V_1 \to V_6 \to V_2 \to V_7$, since these nodes are not physically contiguous, the system must perform four random I/O operations to retrieve the corresponding vectors during the search. Right panel (after reordering): after applying locality-aware reordering, the system rearranges the vector storage so that graph neighbors likely to be accessed in succession are placed adjacently. In the reordered layout, the query that originally traversed $V_1$, $V_6$, $V_2$, and $V_7$ is effectively transformed into a new, optimized traversal path $V_1 \to V_3 \to V_5 \to V_7$, where adjacent nodes in the graph now correspond to sequentially stored vectors. With this physical reordering, the number of random I/O operations required is reduced to only two. This example demonstrates how reordering can align the physical storage layout with runtime search paths, thereby improving I/O efficiency.
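The I/O counting in this example can be reproduced with a small sketch, assuming (hypothetically) that two vectors fit in one disk block and that each distinct block touched costs one random I/O; the concrete layouts below are invented to match the counts in the example:

```python
def random_ios(layout, path, vectors_per_block=2):
    # Count distinct disk blocks touched when fetching the vectors of the
    # nodes on a traversal path, given the on-disk order of the vectors.
    pos = {node: i for i, node in enumerate(layout)}
    return len({pos[n] // vectors_per_block for n in path})

# Original layout: the four visited nodes land in four different blocks.
original = ["V1", "V4", "V2", "V8", "V6", "V3", "V7", "V5"]
print(random_ios(original, ["V1", "V6", "V2", "V7"]))   # 4

# Reordered layout: the optimized path V1 -> V3 -> V5 -> V7 is sequential.
reordered = ["V1", "V3", "V5", "V7", "V2", "V4", "V6", "V8"]
print(random_ios(reordered, ["V1", "V3", "V5", "V7"]))  # 2
```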
By integrating sampling-driven edge weights into reordering decisions and aligning them with the LSM-tree’s compaction mechanism, LSM-VEC achieves high disk locality without sacrificing update efficiency. This approach ensures that the physical layout of the index remains closely aligned with the logical query paths, significantly reducing I/O cost in disk-based ANN search.
# 4 Related Work
In-Memory ANN Indexing Graph-based ANN methods have emerged as the dominant paradigm for high-accuracy, low-latency vector search in RAM. Notably, HNSW [30] introduces a hierarchical small-world graph structure that enables logarithmic search time and strong recall guarantees. Variants and extensions of HNSW, including those in FAISS [13] and NGT [20], further improve indexing speed and graph quality. However, all these methods assume the index resides entirely in memory, limiting scalability in billion-scale scenarios. NSW [29] also uses proximity-based neighborhood structures, but tends to suffer from higher memory usage or lower recall under tight latency constraints.
Beyond graph-based approaches, several memory-efficient ANN systems adopt alternative indexing paradigms. SCANN [18] combines optimized quantization and learned pruning to reduce memory and latency, achieving state-of-the-art performance under inner-product similarity. BATL [27] proposes a learned tree-based index that achieves high recall and low latency using balanced partition trees trained with neural sequence prediction. PCNN [40] adopts error-correcting codes (polar codes) for efficient high-dimensional hashing, offering better trade-offs than classical LSH. While these systems achieve high performance under in-memory settings, their scalability and update efficiency degrade significantly in billion-scale, disk-based scenarios.
Disk-Based ANN Systems To overcome the memory limitations of in-memory ANN indices, several disk-based systems have been proposed to scale to billion-scale datasets. These methods reduce the memory footprint by optimizing disk access and leveraging SSD-friendly designs. DiskANN [23] builds a pruned graph offline and uses quantized vectors in memory to guide the search, loading only the best candidates from disk. SPANN [9] partitions the vector space via hierarchical clustering and builds local graphs, enabling efficient disk access but limiting update flexibility. ScaNN [16] combines quantization, reordering, and reranking in a multistage pipeline, balancing latency and accuracy but assuming a static corpus due to high retraining cost.
In summary, while these systems offer strong performance under static workloads, they lack native support for dynamic updates, limiting their applicability in evolving real-world deployments.
Hardware-accelerated ANN systems. A number of recent efforts have proposed ANN solutions based on GPUs and FPGAs to accelerate large-scale vector search. For example, FusionANNS [39], BANG [25], RUMMY [47], iQAN [46] and ParlayANN [31] exploit GPU-friendly pipelines or CPU-GPU hybrid execution to enable low-latency approximate search. Others such as DF-GAS [44] design specialized FPGA-based infrastructures to support billion-scale ANN with high throughput and energy efficiency.
Despite their performance advantages, these systems are fundamentally designed for in-memory settings. They assume the index or its compressed form can fit into high-bandwidth accelerator memory and often lack support for real-time updates or truly out-of-core datasets.
In contrast, LSM-VEC addresses the orthogonal challenge of disk-based ANN search at billion scale. Rather than relying on dedicated hardware, it improves system-level efficiency through sampling-guided search, LSM-tree-based index maintenance, and graph reordering. Our design complements hardware accelerators and can be integrated with future hybrid pipelines that combine disk-resident storage with accelerator-based query processing.
Dynamic and Hybrid ANN Indexing SPFresh [42] represents a recent effort to support dynamic updates in large-scale ANN search. Instead of graphs, it partitions the vector space into clusters and performs in-place updates via quantization-based assignments. While this strategy enables efficient insertions and deletions, it suffers from degraded recall due to coarse cluster boundaries and the inability to preserve fine-grained neighborhood structure. Moreover, SPFresh does not exploit graph connectivity or adaptive layout reordering. FreshDiskANN [38] improves upon the design of DiskANN by enabling update support without full reprocessing. It maintains a fixed disk layout and incrementally inserts new vectors by connecting them to disk-resident neighbors. For deletions, it uses a localized neighbor-relinking strategy, where neighbors of a deleted node are reconnected using a pruning rule. While FreshDiskANN supports efficient insertions and deletions, it does not perform any form of global reordering. As a result, the physical layout of vectors gradually deteriorates over time, which can negatively impact I/O locality and search performance. Recent systems like NV-tree [43] and PQ-based hybrid indexing [24] attempt to blend vector quantization with disk-aware indexing. Yet, these systems either fail to support dynamic updates or yield subpar search latency compared to graph-based methods.
# 5 Evaluation
This section evaluates the performance of LSM-VEC against multiple baselines under various workloads.
# 5.1 Experimental Setting
System Environment. All experiments are performed on a dedicated server configured as follows:
• CPU: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz (64 cores, 128 threads).
• Memory: 256 GB DDR4.
• Disk: 2 TB NVMe SSD.
• Operating System: Ubuntu 20.04.4 LTS (kernel 5.4.0-100-generic).
All indices are constructed on disk unless explicitly stated. For each configuration, we perform a warm-up phase to load frequently accessed pages into memory, followed by 10K randomly ordered queries for latency evaluation. We repeat each experiment three times and report the average result.
Baselines. We compare LSM-VEC with two representative disk-based ANN systems:
• DiskANN [23]: A graph-based disk-resident index that relies on offline pruning and reordering to improve search performance on static datasets.
• SPFresh [42]: A clustering-based dynamic ANN system that enables fast insertions and deletions via in-place updates but sacrifices recall due to coarse partitioning.
All systems are tuned to achieve comparable recall, and parameters such as the number of neighbors, search depth, and memory budget are carefully selected based on open-source implementations and prior work.
Dataset. We conduct all experiments on the widely-used SIFT1B [10] dataset, which contains one billion 128-dimensional SIFT descriptors extracted from image patches. Although SIFT1B is designed for billion-scale evaluation, we use a 100-million scale subset in our experiments due to hardware constraints. In particular, existing solutions such as DiskANN require several terabytes of memory to handle the full 1-billion dataset, which exceeds the memory capacity of our server. This experimental setting also reflects practical scenarios where billion-scale indices are typically partitioned or sharded in real-world systems.
Evaluation Metrics. We evaluate and report the following key metrics:
• Recall 10@10: Search accuracy, measuring the fraction of true nearest neighbors found within the top-10 returned results for each query.
• Query latency: Average search latency per query, computed over 100 randomly sampled queries.
• Update latency: Average latency of vector insertions under dynamic workloads, reflecting the efficiency of handling online updates.
• Memory usage: Peak memory consumption measured during search and update operations, including both index structures and graph buffers.
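For reference, the Recall 10@10 metric can be computed as follows; this is a straightforward sketch, not the authors’ evaluation code:

```python
def recall_at_k(true_neighbors, returned, k=10):
    # Fraction of the true top-k neighbors found in the top-k results,
    # averaged over all queries (Recall k@k).
    hits = [len(set(t[:k]) & set(r[:k])) / k
            for t, r in zip(true_neighbors, returned)]
    return sum(hits) / len(hits)

# One query where 8 of the 10 true neighbors are retrieved.
truth = [list(range(10))]
result = [[0, 1, 2, 3, 4, 5, 6, 7, 98, 99]]
print(recall_at_k(truth, result))  # 0.8
```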
Implementation Notes. The on-disk index of LSM-VEC is implemented in C++ on top of AsterDB1, with concurrency support for graph construction and query processing. For
Figure 5. Evaluation of LSM-VEC under four update scenarios with different insert-delete ratios. We report recall, update latency, and search latency, simulating real-world dynamic workloads where the index continuously evolves. Each batch corresponds to 1% vector updates (1% insertion or 1% deletion).
DiskANN2 and SPFresh3, we use the official open-source implementations and follow the recommended settings from their published papers. In all experiments, we avoid using SSD caching layers or memory-mapped file tricks to simulate a realistic deployment scenario for large-scale vector databases.
# 5.2 System Performance
LSM-VEC delivers robust and efficient performance across diverse update scenarios. To evaluate the robustness and efficiency of LSM-VEC under dynamic workloads, we simulate real-world update scenarios by designing a series of batch workloads with varying insert and delete ratios. In vector database applications like personalized search, recommendation, and RAG systems, vectors are frequently inserted, deleted, or updated to reflect evolving content or user behavior. Notably, an update operation is commonly modeled as a delete followed by an insert.
We construct four representative workloads to reflect different application scenarios:
• Insert-only workload: 100% insert operations, simulating system initialization or rapid data growth.
• Insert-heavy workload: 70% insert and 30% delete operations, capturing scenarios with frequent new data but occasional clean-up.
• Balanced workload: 50% insert and 50% delete operations, representing mature systems with stable user bases.
• Delete-heavy workload: 30% insert and 70% delete operations, reflecting data refreshing or model retraining phases.
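A sketch of how such batch workloads might be generated; the function name and parameters are illustrative, not taken from the paper:

```python
import random

def make_batches(n_ops, insert_ratio, batch_frac=0.01, seed=0):
    # Draw a stream of insert/delete operations with the given mix and split
    # it into batches of batch_frac (1%) of the total, as in the experiments.
    rng = random.Random(seed)
    ops = ["insert" if rng.random() < insert_ratio else "delete"
           for _ in range(n_ops)]
    size = max(1, int(n_ops * batch_frac))
    return [ops[i:i + size] for i in range(0, n_ops, size)]

# Insert-heavy workload: roughly 70% inserts and 30% deletes overall.
batches = make_batches(10_000, insert_ratio=0.7)
assert len(batches) == 100 and len(batches[0]) == 100
```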
Figure 5 presents the comprehensive results of these experiments, reporting Recall 10@10, update latency, and search latency across different update workloads. Each batch corresponds to 1% vector updates (1% insertion or 1% deletion), following the real-world dynamic update pattern adopted in SPFresh [42].
For Recall 10@10 (Figure 5(a)), LSM-VEC consistently outperforms both SPFresh and DiskANN across all workloads. In the Balanced workload, LSM-VEC achieves 88.4% recall, significantly higher than SPFresh (75.5%) and DiskANN (82.0%). In the Delete-heavy workload, where extensive deletions severely impact graph quality, DiskANN’s recall drops dramatically to 61.0%, while LSM-VEC still maintains 77.4% recall, demonstrating its robustness against dynamic data evolution.
For update latency (Figure 5(b)), LSM-VEC exhibits the lowest average update latency across all workloads. In the Insert-only workload, LSM-VEC achieves an average update latency of 4.90 ms per vector, which is 1.2× faster than SPFresh (6.10 ms) and 2.6× faster than DiskANN (12.5 ms). As the workload becomes more delete-heavy, DiskANN’s update latency increases to 11.86 ms, while LSM-VEC remains stable at 4.60 ms, benefiting from its write-optimized LSM-tree-based design.
As Figure 5(c) illustrates, LSM-VEC consistently provides the lowest and most stable average search latency across all workloads. In the Insert-only workload, LSM-VEC achieves 4.70 ms search latency, and this value remains nearly unchanged at 4.63 ms in the Delete-heavy workload. In contrast, DiskANN suffers from degraded locality, with its search latency increasing from 8.0 ms to 12.0 ms. SPFresh maintains relatively stable search latency (around 7 ms), but this comes at the cost of lower recall due to its coarse-grained clustering.
Overall, these results demonstrate that LSM-VEC effectively addresses the challenges of dynamic ANN search. It achieves higher recall, lower update latency, and more stable search latency than both DiskANN and SPFresh, making it a strong candidate for real-world large-scale vector database deployments.
LSM-VEC achieves lower memory usage without sacrificing accuracy. Apart from update and query performance, memory usage is a critical concern for billion-scale vector search systems, especially in memory-constrained environments. Figure 6 reports the memory consumption of all systems over time, under four different update workloads. We include both the memory consumed by the in-memory index and the buffering layer for dynamic updates.
In the Insert-only workload, DiskANN exhibits rapid memory growth from 25GB to 76GB, as all inserted nodes and graph structures must be kept in memory. In contrast, both SPFresh and LSM-VEC demonstrate stable memory usage throughout the run. Specifically, SPFresh grows slightly from 20.1GB to 23.9GB, while LSM-VEC increases from 22.4GB to 26.5GB, benefiting from its compact upper-layer storage and disk-resident bottom-layer graph. Notably, LSM-VEC maintains a flat memory curve even with 100% insert operations.
In the Insert-heavy workload, the memory gap between DiskANN and the other systems widens further. DiskANN reaches 66GB due to increasing memory pressure from mixed inserts and deletes. Both SPFresh and LSM-VEC maintain low memory usage, stabilizing below 26GB. The final memory footprint of LSM-VEC is only 25.6GB, slightly higher than SPFresh’s 25.3GB, demonstrating the effectiveness of LSM-tree-based storage in isolating on-disk graph maintenance from in-memory structures.
In the Balanced workload, DiskANN’s memory usage spikes to 80GB due to frequent deletions and fragmented memory management. Meanwhile, SPFresh increases moderately to 27.0GB, and LSM-VEC remains highly compact at 27.0GB. Despite the high churn from 50% insert and 50% delete operations, both systems manage to cap memory usage effectively.
Finally, under the Delete-heavy workload, DiskANN continues to consume excessive memory (exceeding 69GB), while both SPFresh and LSM-VEC maintain excellent memory stability. SPFresh grows slowly to 25.7GB, and LSM-VEC stays within 22.4GB to 25.7GB across the entire run, showing strong adaptability to deletion-intensive environments.
Overall, this experiment demonstrates that both SPFresh and LSM-VEC provide low and stable memory usage across a wide range of dynamic workloads. Compared to DiskANN, which suffers from significant memory amplification in dynamic scenarios, LSM-VEC leverages its LSM-tree-based bottom-layer storage to efficiently bound memory usage while preserving high recall and low latency. This design makes LSM-VEC particularly suitable for billion-scale vector search deployments in resource-constrained environments.
LSM-VEC achieves superior search-update balance under dynamic workloads. Figure 7 presents the tradeoff between Recall 10@10 and latency along two critical dimensions: (a) query latency and (b) update latency. We vary the search parameters (e.g., efSearch and candidate pool size) to explore a wide recall range and report the corresponding latency of all baselines.
In terms of query latency (Figure 7(a)), LSM-VEC consistently achieves the best recall-latency tradeoff. Specifically, LSM-VEC reaches up to 94.0% recall with a query latency of only 6.2 ms. In contrast, DiskANN requires 10.5 ms query latency to achieve 92.0% recall, over 1.7× higher latency than LSM-VEC for lower accuracy. SPFresh exhibits the lowest recall range (75.0% to 82.0%) and incurs 6.2 ms query latency at its best recall, significantly lagging behind both LSM-VEC and DiskANN in accuracy.
For update latency (Figure 7(b)), LSM-VEC again outperforms both baselines. LSM-VEC supports 88.4% recall with only 6.2 ms update latency, thanks to its LSM-tree-based design that buffers and batches graph modifications. In comparison, DiskANN’s update latency grows to 14.3 ms under similar recall, which is 2.3× higher than LSM-VEC. SPFresh shows better update latency than DiskANN (6.1 ms–9.4 ms), but suffers from limited recall improvements due to its coarse-grained cluster structure.
Figure 6. Memory usage over time under different batch workloads.
Figure 7. Tradeoff between recall and latency. (a) Search latency vs Recall. (b) Update latency vs Recall.
Figure 8. Impact of the sampling ratio on recall; the red star indicates our configuration.
Overall, these results highlight that LSM-VEC provides the most efficient and scalable search-update tradeoff among all methods. It achieves higher recall with significantly lower query and update latency, making it well-suited for billion-scale dynamic vector search.
Sampling enhances recall while preserving search efficiency. We further evaluate how sampling impacts the trade-off between recall and query latency in LSM-VEC. By reducing the sampling ratio, LSM-VEC selectively skips a portion of candidate neighbor evaluations to minimize computation and disk I/O.
Figure 8 presents the results as the sampling ratio varies from 1.0 (i.e., no sampling applied) to 0.7. As expected, query latency drops significantly as the sampling ratio decreases, from 6.81 ms at ratio 1.0 to 4.72 ms at ratio 0.7. This latency reduction comes at a modest cost in recall, which decreases from 89.2% to 82.4%.
Notably, LSM-VEC achieves a favorable balance at a sampling ratio of 0.8, highlighted by the red star. At this configuration, it attains 85.1% Recall 10@10 with only 4.90 ms average query latency. Compared to full evaluation at ratio
1.0, this reduces latency by 30% while sacrificing only 4.1% recall.
These results demonstrate that sampling can substantially improve efficiency with minimal impact on accuracy. It is a key component of LSM-VEC, enabling scalable and latency-aware vector search under dynamic workloads.

Abstract. Vector search underpins modern AI applications by supporting approximate nearest neighbor (ANN) queries over high-dimensional embeddings in tasks like retrieval-augmented generation (RAG), recommendation systems, and multimodal search. Traditional ANN search indices (e.g., HNSW) are limited by memory constraints at large data scale. Disk-based indices such as DiskANN reduce memory overhead but rely on offline graph construction, resulting in costly and inefficient vector updates. The state-of-the-art clustering-based approach SPFresh offers better scalability but suffers from reduced recall due to coarse partitioning. Moreover, SPFresh employs in-place updates to maintain its index structure, limiting its efficiency in handling high-throughput insertions and deletions under dynamic workloads.
This paper presents LSM-VEC, a disk-based dynamic vector index that integrates hierarchical graph indexing with LSM-tree storage. By distributing the proximity graph across multiple LSM-tree levels, LSM-VEC supports out-of-place vector updates. It enhances search efficiency via a sampling-based probabilistic search strategy with adaptive neighbor selection, and connectivity-aware graph reordering further reduces I/O without requiring global reconstruction. Experiments on billion-scale datasets demonstrate that LSM-VEC consistently outperforms existing disk-based ANN systems. It achieves higher recall, lower query and update latency, and reduces memory footprint by over 66.2%, making it well-suited for real-world large-scale vector search with dynamic updates.
# 1 Introduction
In the rapidly developing field of Large Language Models (LLMs), it is difficult to keep up with the latest developments and put them into the context of prior work. Several LLMs are released every month, and some of them are advertised as "better", "faster", "cheaper", or as having better "reasoning capabilities". With the work on our benchmarking framework LLM-KG-Bench, we are particularly interested in making it possible to assess and compare LLMs by their capabilities to cope with Semantic Web technology. The framework features support for open source and top-ranking commercial LLMs and includes a set of highly relevant tasks specific to Knowledge Graph Engineering (KGE). For instance, there are specialized tasks related to SPARQL and RDF serialization. In this work, we present Version 3.0 of LLM-KG-Bench. Our latest advancements offer the following contributions:
– A major update of the task API, which makes writing new tasks easier, as overhead is reduced and the framework can handle the evaluation orchestration.
– An RDF repair task where the goal is to detect and fix errors across several RDF serialization formats, such as Turtle, JSON-LD, and N-Triples.
– Improved analytics and visualization: Combined scores can be computed and visualized in a capability compass for task categories such as RDF syntax, RDF analytics, SPARQL syntax, SPARQL semantics or brevity.
– Support for encrypted task data to prevent test data leakage into LLM training data.
– A new connector for vLLM$^{5}$, a popular high-throughput LLM serving framework.
The extensions and refinement of the framework were guided by making the KGE related comparison of LLMs easier and broader in aspects of LLM size and task areas covered.
The remainder of this work is structured as follows: In Section 2 we present related work. An overview of our LLM-KG-Bench system and its latest improvements is given in Section 3. In Section 4 we apply the framework to create a big dataset of evaluation results for more than 30 open and proprietary LLMs. Section 5 concludes this work and points out directions for future work.
# 2 Related Work
In order to explore and navigate the vast amount of LLMs, there are several LLM leaderboards, which rank LLMs based on a selection of benchmarks or workloads. For commercial (and a set of open) models, the Chatbot Arena6 [5] is popular. While it lists scores for MMLU $^ 7$ and MT-bench [2], it also calculates its own arena score, which is based on arbitrary tasks that are processed by two models side-by-side and then evaluated by the same user voting for their preferred answer. For open models, the OpenLLM-Leaderboard [7] provides the most exhaustive list of benchmark results with over 2,000 tested models, and with scores for IfEval, BBH, MATH, GPQU, MUSR, MMLU, and a carbon dioxide emission estimate. HELM $^ 8$ [17] comprises the most exhaustive list, including also domain-specific tests like LegalBench and MedQA. In contrast to a set of other leaderboards that just collect published or reported test results, those leaderboards provide evaluation as a service. While a comparison of the individual benchmark suites is out of scope of this paper, we see in all of them a gap in addressing Knowledge Graph Engineering (KGE) tasks. We also see a gap in a benchmark execution framework that helps to deal with the particularities of RDF and KG-related workloads (format parsing, syntax-check feedback loops, execution and evaluation of queries against KGs, etc.). While Big Bench [25] was an initial inspiration for the LLM-KG-Bench framework, and we tried to be compatible in the beginning, we realized that the Task API was not sufficient for our KGE benchmarking efforts. Both HELM and BigBench have a strong focus on multiple choice tasks and use scores based on string or document similarities. In contrast, the LLM-KG-Bench framework focuses on the syntactically and semantically correct generation of RDF (e.g. in Turtle) and SPARQL. The LLM-KG-Bench framework aims to reduce complexity and technological burdens to create, execute, evaluate, and analyze KG-related tasks.
In the area of benchmarking coding capabilities, we observe, at a conceptual level, characteristics (e.g., with respect to output format requirements, instruction complexity, and response evaluation strategies) that are more closely related to KGE capabilities benchmarking. In this domain, several leaderboards exist. The Big Code Models Leaderboard $^ { 9 }$ evaluates over 60 base models using the HumanEval and MultiPL-E configured for Java, Javascript, and CPP. The EvalPlus Leaderboard $^ { 1 0 }$ ranks more than 100 models using HumanEval and the Mostly Basic Python Programming (MBPP) Benchmark. These datasets combine human-written programming problems with basic Python challenges to assess coding proficiency. The CanAiCode Leaderboard $^ { 1 1 }$ focuses on programming-related tasks, benchmarking more than 300 models using the custom CanAICode Benchmark. This test suite is specifically designed for testing small text-to-code LLMs with less complex tasks compared to HumanEval and MBPP. However, these efforts are very specialized towards coding and as such also hard to adopt for our use case.
Table 1: Comparison of some of the LLM evaluation approaches mentioned here. Best values are marked with bold font. Only the LLM-KG-Bench framework combines automatic evaluation with several Knowledge Graph Engineering (KGE) topics and many LLMs covered.
In literature, many efforts12 explore the combination of LLMs and KGs[23]. Several of them are evaluating the application of LLMs for KG related tasks. However, these LLM evaluations are often focused on a very specific problem in a specific task area like Text to RDF (e.g. [22,29]) or Knowledge Graph Question Answering (KGQA, e.g. [26]) or Text to SPARQL (e.g. [15,28]) or RML generation (e.g. [12]). Unfortunately, many of the evaluations in these articles were conducted manually. This comes with the problem of not being able to scale those evaluations to more repetitions and more or newer models. In case an automated evaluation has been performed, the underlying code usually lacks adaptability to encompass new models or task variations to be executed and analyzed. A benchmarking effort, that is related to our interest in studying the JSON-LD capabilities, is StructuredRAG [24]. It consists of six tasks designed to assess LLM capabilities in following response format instructions according to JSON templates. Table 1 compares several LLM evaluation approaches.
The LLM-KG-Bench framework has been described and applied in several publications. The following section provides a brief overview. The LLM-KG-Bench framework was initially introduced in [20], featuring three basic tasks (Version 1.0). Version 1.1 expanded the framework to include five tasks, incorporating evaluations of the Turtle capabilities of both open and proprietary LLMs, as detailed in [8]. In 2023, Version 1.2 collected results for various proprietary LLM versions, as described in [9]. The framework was further enhanced to support task-based dialogues with LLMs and introduced a re-evaluation mode, enabling task evaluations to be rerun using previously generated task data and responses to the same prompts. In Version 2.0, numerous additional SPARQL tasks and task parameterizations were integrated. This version was also utilized to evaluate the SPARQL capabilities of various proprietary LLMs, as detailed in [19]. Building upon this basis, Version 3 of the LLM-KG-Bench framework introduces (a) an expanded task list, (b) supports encrypted task data, (c) includes a major update to the task API, and (d) extends compatibility to more models through additional model connectors.
# 3 Resource Description
Benchmarking LLMs involves significant time and financial costs plus organizational effort, and the evaluation process can often be imprecise. LLM-KG-Bench is designed to simplify the creation of KG-related assessments while providing a foundational infrastructure for further development. Its main features are:
– Modular and Extensible Framework: Supports automated evaluation tasks using a comprehensive set of KG-extraction and evaluation-related helper methods.
– Built-in Correction Cycles: Implements dialogue-based correction cycles, enabling LLMs to revise previous mistakes.
– Data Security: Supports encryption of task data to prevent test data leakage into LLM training datasets.
– Task Management: Manages task configurations, evaluation orchestration, logging, and result persistence.
– Result Analysis and Visualization: Provides built-in tools for analyzing and visualizing evaluation results.
– Broad Model Support: Includes connectors for many contemporary LLMs.
– Open-Source Codebase: The framework is published as open source and welcomes extensions and community contributions.
The main architecture is described in Figure 1 of Meyer et al. [20]. In the following sections, we describe the basic concepts and infrastructure of the LLM-KG-Bench framework in greater detail.
# 3.1 Main Concepts of the LLM-KG-Bench Framework
The LLM-KG-Bench framework is built around several main concepts, which we describe here.
Evaluation Tasks: The evaluation tasks are the main building blocks of a benchmark and automatically evaluate the LLM answers. For the prompt-answer-evaluate loop, the tasks provide the prompt and evaluation functionality.
Task Classes, Parametrized Tasks and Task Case Entries: Tasks are organized in task classes. Some task classes can be parametrized with task class specific task parameters resulting in parametrized task classes.
Prompt-Answer-Evaluate Loop: The evaluation of LLMs is based on dialogues, consisting of prompts and answers. The prompt-answer-evaluate loop starts with the generation of an initial prompt that is sent to an LLM. In the next step, the produced answer is evaluated. Based on the evaluation result, the framework can decide to start a new prompt-answer-evaluate round or stop the dialogue. The idea is to make use of the chat capability of modern LLMs and their larger supported context size in order to move the answer closer to the correct one. The structure of these loops is shown in fig. 1a and an example dialogue is given in fig. 3.
(a) The Prompt-Answer-Evaluate loop for the task - LLM interaction as organized by the framework. Prompting and evaluation is covered by the task, the answer is generated by the LLM.
(b) Different execution scopes: task iterations (includes one to many cycles of the prompt-answerevaluate loop), task execution (includes all iterations), benchmark execution (includes all task executions for all combinations of tasks and LLMs defined for all iterations).
Fig. 1: Overview of the evaluation workflow and execution scopes.
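The prompt-answer-evaluate loop can be sketched in a few lines of Python. This is an illustrative simplification, not the framework's actual code: the toy task and connector classes are hypothetical, and only the method names `get_next_prompt`, `generate_text` and `finalize_evaluation` loosely mirror the Task API and connector interface described later in this section.

```python
# Illustrative sketch of one task evaluation iteration (NOT the framework's
# actual implementation); a toy task and a toy LLM connector stand in.

class EchoLLM:
    """Hypothetical connector: always answers with a fenced code block."""
    def generate_text(self, prompt):
        return "```turtle\n:anne a foaf:Person .\n```"

class FencedBlockTask:
    """Toy task: accepts the answer once it contains a fenced code block."""
    def get_next_prompt(self, dialogue):
        if dialogue and "```" in dialogue[-1][1]:
            return None  # last answer passed evaluation, loop ends
        return "Please answer with exactly one fenced code block."

    def finalize_evaluation(self, dialogue):
        rounds = sum(1 for role, _ in dialogue if role == "prompt")
        return {"rounds": rounds, "score": 1.0 if dialogue else 0.0}

def run_task_iteration(task, llm, max_rounds=3):
    """Prompt-answer-evaluate loop: prompt until the task is satisfied or
    the configured maximum number of rounds is reached."""
    dialogue = []
    for _ in range(max_rounds):
        prompt = task.get_next_prompt(dialogue)  # evaluate + build next prompt
        if prompt is None:
            break
        dialogue.append(("prompt", prompt))
        dialogue.append(("answer", llm.generate_text(prompt)))
    return task.finalize_evaluation(dialogue)

result = run_task_iteration(FencedBlockTask(), EchoLLM())
```

Here the task accepts the first answer, so the iteration ends after a single round; a failing answer would trigger another correction round, as in the example dialogue of fig. 3.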
LLM Connectors: The LLM connectors offer a consistent abstraction layer for interacting with the various supported LLMs. Several LLM connector classes are implemented in the LLM-KG-Bench framework, as described in section 3.3. They can be parametrized to represent specific LLMs.
Task Evaluation Iterations: We name one task evaluation loop consisting of one or more prompt-answer-evaluate rounds a task evaluation iteration, see also fig. 1b.
Task Executions: Since LLM answers are generated probabilistically, a configurable number of task iterations is executed, collectively forming a task evaluation execution for a specific task and a particular LLM, see also fig. 1b.
Benchmark Executions: A benchmark execution consists of all task executions for all combinations of tasks and models defined in the configuration, see also fig. 1b.
List Tasks and Task Case Entries: List tasks have a list of task case entries, where each entry defines a distinct exercise resulting in a specific prompt and expected answer. For each task iteration one task case entry is selected from this list. All task case entries are evaluated by the same list task.
Benchmark Configuration: A benchmark configuration specifies the tasks and models to be included in a benchmark run together with the number of iterations per task execution.
Fig. 2: UML class diagram of the Task API and its reference by some example tasks. All task classes implement the AbstractLlmKgBenchInterface via an inheritance connection. A task can be described with a TaskInfo object. A TaskExecutionInfo references this TaskInfo object for the documentation.
Execution Configuration: A benchmark configuration can be executed as a whole or with a selection of tasks and models defined by command line parameters.
Result Reevaluation: With result reevaluation, existing task-model interaction data can be fed into the tasks' evaluation code. This can help, for example, to obtain updated results with updated evaluation code without new LLM interactions.
# 3.2 Task API
Tasks are implemented following the task API as a common interface between tasks, framework and model connectors. In LLM-KG-Bench framework Version 3, a major update of the Task API was introduced together with new helper classes. Figure 2 shows the UML class diagram of the new Task API.
Benchmark tasks in LLM-KG-Bench implement the interface AbstractLlmKgBenchTaskInterface, which enables rough compatibility with the BigBench task classes. In addition, it defines methods for the Evaluate and Prompt parts of the prompt-answer-evaluate loop as well as methods for the serialization and deserialization of tasks.
Here, the following methods are especially important:
getNextPrompt: combines an evaluation and prompting step. If no new prompt is generated the prompt-answer-evaluate loop ends.
finalizeEvaluation: is called at the end of the prompt-answer-evaluate loop and creates a final evaluation result for this task evaluation iteration.
condenseTaskData: creates a serializable representation of this concrete task case entry. This offers the possibility for later continuation or reevaluation.
createTaskFromCondensedData: initializes a task from the representation given by condenseTaskData
The abstract implementation AbstractLlmKgBenchTaskClass helps to reduce redundant code and eases the concrete task implementation. For the two main variations of tasks, single-prompt tasks and dialogue-tasks, specialized abstract classes are provided. Tasks which store their task data in an encrypted file can benefit from the abstract class AbstractFileListTaskImplementation.
This new task API offers more granularity for the interaction of the LLM-KG-Bench framework with the tasks compared to the interface used in prior versions of the LLM-KG-Bench framework and in BigBench. The central framework logic orchestrates the prompt-answer-evaluate loop (fig. 1a). The new task API gives the central framework logic more flexibility in the orchestration, reduces the possibility of errors, and reduces repeated code in tasks.
# 3.3 Model Connectors and Supported Models
Model connectors are responsible for offering standardized APIs to LLMs. They are defined similarly to the BIG-bench model class. The main method offered is generate_text(inputs: Union[str, List[str]], ...) -> str, taking a single prompt or dialogue and returning the LLM's answer.
The LLM-KG-Bench framework offers several model connectors:
OpenAI / ChatGPT: Connector for OpenAI-compatible LLMs like GPT-3.5, GPT-4, GPT-4t, GPT-4o and GPT-o1 via the OpenAI python library and REST API. Many other LLM providers offer a compatible REST API as well; they can be integrated with the new endpoint parameter.
Google / Gemini: Connector for LLMs from Google like Gemini 1.5 or Gemini 2.0 via the Google python library and REST API.
Anthropic / Claude: Connector for LLMs from Anthropic, from Claude 1.0 to Claude 3.5. The connector uses the Anthropic REST API via the offered python library.
vLLM: Runtime for self-hosted LLMs [16]. This library is compatible with many open LLMs and enables serving and inferencing them.
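A minimal connector following the described generate_text signature might look as follows. Both the ModelConnector base class and the dummy UppercaseEcho implementation are hypothetical stand-ins, not the framework's real connector classes; a real connector would call the respective provider's API inside generate_text.

```python
from abc import ABC, abstractmethod
from typing import List, Union

class ModelConnector(ABC):
    """Hypothetical sketch of the connector abstraction; only the
    generate_text signature mirrors the one described in the text."""
    @abstractmethod
    def generate_text(self, inputs: Union[str, List[str]]) -> str:
        ...

class UppercaseEcho(ModelConnector):
    """Dummy connector for offline testing: echoes the last prompt of the
    dialogue, uppercased (a real connector would query an LLM here)."""
    def generate_text(self, inputs: Union[str, List[str]]) -> str:
        last = inputs if isinstance(inputs, str) else inputs[-1]
        return last.upper()

answer = UppercaseEcho().generate_text(["hello", "fix this rdf"])
```

Accepting either a single prompt string or a dialogue (list of strings) keeps single-prompt tasks and dialogue tasks behind the same interface.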
# 3.4 Benchmark Tasks
Several tasks are implemented in the LLM-KG-Bench framework, as described in several articles [20,8,9,19]. This especially includes the following task classes:
RDF related:
– FactExtractStatic: asks the LLM to extract facts from a given textual fact sheet and create a corresponding Turtle KG
– RdfConnectionExplainStatic (extended): asks the LLM to find a connection between two nodes in a small RDF graph
– RdfFriendCount (extended): for a simple dynamically computed RDF graph, the LLM should find the node with the most incoming edges
– RdfSyntaxFixList (new): presents the LLM an RDF document with syntax errors and asks for a fixed document
– TurtleSampleGeneration: asks the LLM to create a small graph of foaf:Person objects connected with foaf:knows edges

SPARQL SELECT query related:
– Sparql2AnswerList: presents the LLM a small KG and a SPARQL SELECT query and asks for the expected result set of the query
– Text2AnswerList: presents the LLM a small KG and a textual question and asks for the expected result set of the question
– Text2SparqlList: asks the LLM to translate a given textual question into a SPARQL SELECT query for a given KG
– SparqlSyntaxFixingList: presents a SPARQL SELECT query with syntax errors and asks for a fixed query
Many of these task classes support parameters, and there are several variations of the Sparql2AnswerList task class for different benchmark datasets. Some tasks, especially the SPARQL-related ones, are based on existing task data [6,21,15,3,4].
The prompts are designed in a way that keeps ambiguity as low as possible. All requirements that we expect the LLM to honor are stated explicitly, e.g. stick with the original formatting, or answer with just one markdown fenced code block ... no other text. At the same time, we avoid LLM-specific prompt optimization to allow a fair comparison across different and future models.
New RdfSyntaxFixList Task The task presents an RDF document with one or two syntax errors together with the related parsing error and asks the LLM to fix the document with as few changes as possible, similar to the TurtleErrorsStatic task introduced in the first version of the LLM-KG-Bench framework [20]. But where the TurtleErrorsStatic task is limited to one document and tries to estimate the syntactical correctness of a Turtle document that still contains errors, we decided to take a more generic approach. When the document returned by the LLM still contains errors or differs from the expected formatting, the task provides feedback and asks the LLM in another prompt-answer-evaluate round to fix it again. This is repeated for up to three rounds, and the generated documents are measured according to several aspects, resulting in the following scores:
parsableSyntax $\in [0,1]$: is the document syntax correct?
contentF1 $\in [0,1]$: F1 measure for the generated document in comparison with the expected content on a normalized triple level. This score is computed only if the document syntax is correct.
strSimilarity $\in [0,1]$: string similarity of the generated document to the expected result.
brevity $\in [0,1]$: is only the document provided, or was additional surrounding text generated that we did not ask for?
combined $\in [0,1]$: combined score computed as $0.1 \cdot \text{strSimilarity} + 0.2 \cdot \text{parsableSyntax} + 0.7 \cdot \text{contentF1}$
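The scoring can be illustrated with a short sketch. The triple-level F1 computation is our own simplification (treating documents as sets of normalized triples); the weights in combined_score are the ones stated above.

```python
def triple_f1(generated, expected):
    """Sketch of contentF1: F1 over sets of normalized triples (our own
    simplification of the triple-level comparison described in the text)."""
    generated, expected = set(generated), set(expected)
    tp = len(generated & expected)  # true positives: shared triples
    if tp == 0:
        return 0.0
    precision, recall = tp / len(generated), tp / len(expected)
    return 2 * precision * recall / (precision + recall)

def combined_score(str_similarity, parsable_syntax, content_f1):
    """Weighted combination with the weights given in the text:
    0.1*strSimilarity + 0.2*parsableSyntax + 0.7*contentF1."""
    return 0.1 * str_similarity + 0.2 * parsable_syntax + 0.7 * content_f1

# One correct triple out of two expected ones -> precision 1.0, recall 0.5
f1 = triple_f1(
    {(":anne", "rdf:type", "foaf:Person")},
    {(":anne", "rdf:type", "foaf:Person"), (":anne", "foaf:firstName", "Anne")},
)
```

The dominant 0.7 weight on contentF1 makes semantic correctness the main driver of the combined score, with syntax and surface similarity as secondary signals.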
This is implemented for Turtle, JSON-LD and N-Triples. The selection of the RDF serialization format is configured with a task parameter graphFormat. The task is based on an encrypted task data file containing five task case entries for each serialization format supported.
Each task case entry is based on variations of the organizational graph with between one and four syntax errors. The errors could be a missing or additional formatting character, an invalid combination of literal language and type, a misspelled prefix or a wrong character escaping in a string literal. An example dialogue for a missing formatting character in Turtle is shown in fig. 3.
Bench: Please fix all syntax errors of the following RDF in turtle syntax. Try to stick with the original formatting of the RDF given and only change as few characters as necessary. To support automated parsing, please answer with just one markdown fenced code block (start and end with ```) containing the rdf, no other text.

```turtle
:anne a foaf:Person ;
    foaf:firstName "Anne"
```

Parsing error message: at line 7 of <>: Bad syntax (expected '.' or '}' or ']' at end of statement) at ^ in: ...

LLM: A dot (.) is missing

Bench: Please correct your answer following the expected structure (exactly one fenced code block with the RDF, no other text).

LLM:

```turtle
:anne a foaf:Person ;
    foaf:firstName "Anne"
```
Fig. 3: Example dialogue for the RdfSyntaxFixList task with a missing dot in Turtle syntax. Some text left out is marked with ". . . " . The LLM’s first answer is missing the expected code block with the fixed Turtle which is corrected in the second answer.
Extended RdfConnectionExplainStatic and RdfFriendCount Tasks The task classes RdfConnectionExplainStatic and RdfFriendCount, first introduced in Version 1.1 of the LLM-KG-Bench framework [8], were extended to support more graph serialization formats. Instead of just presenting a Turtle graph, they now support the four RDF formats Turtle, JSON-LD, RDF/XML and N-Triples. The graph format can be configured with the task parameter graphFormat.
# 3.5 Summary of Updates in Version 3
In comparison to the earlier versions of the LLM-KG-Bench framework, Version 3 offers especially the following new features:
– major update of the task API for reduced task code and clean dialogue orchestration
– framework support for task data encryption
– added vLLM connector
– new result analysis capability with score aggregation
– new plot type, the spider plot, as shown in the capability compass in subsection 4.2
– new tasks and new task variations with task parameters
– some reorganization, like a modularized model definition
# 4 Benchmark in Use
To showcase the benchmark, we conducted a broad evaluation of some state-of-the-art proprietary and open LLMs with the help of the LLM-KG-Bench framework. All generated data is published in a GitHub repository and can be accessed via the link given in the "Online Resources" section at the end of this article to enable further analysis and comparison.
# 4.1 Dataset Generation
We adopted the default configuration to define the tasks and models included in this evaluation. As a trade-off between resource usage and confidence, we have decided to conduct 20 iterations for the proprietary LLMs and 50 iterations for the open LLMs.
Selected Tasks The benchmark was executed on the following tasks that are described in section 3.4 and in the code repository:
– RdfSyntaxFixList: For Turtle, JSON-LD and N-Triples as graph format
– RdfConnectionExplainStatic: For Turtle, JSON-LD, RDF/XML and N-Triples as graph format
– RdfFriendCount: For Turtle, JSON-LD, RDF/XML and N-Triples as graph format
– SparqlSyntaxFixingList
– Sparql2AnswerList: For Organizational graph
– Text2SparqlList: For Organizational graph, Coypu-Mini and Beastiary
Table 2: Details for the models selected for the experiment presented here. The parameter count of proprietary models is not documented and marked with a question mark (?) here.
Selection of Proprietary LLMs To get an overview of the current state-of-the-art proprietary models, we selected three long-term high-ranked model families from the Chatbot Arena Leaderboard: OpenAI GPT, Google Gemini and Anthropic Claude. From these families, we selected the current models in various sizes and also included the latest GPT-3.5 for comparability with other results. The selected models together with their context sizes are shown in table 2.
Selection of Open LLMs We based our selection of state-of-the-art open LLMs on the Open LLM Leaderboard [7] and used the average score over all included benchmarks as our reference value. The selection criteria were that the model is instruction-finetuned, as required by the task construction of the LLM-KG-Bench framework, has fewer than 80B parameters due to a limited amount of available hardware resources, and is a base model, i.e., not a fine-tuned version of another model. With the latter requirement, we wanted to stick to mature and popularly used LLMs that are not just optimized to achieve a slightly higher score than a base model on one or a few benchmarks.
Fig. 4: Examples of capability compasses generated with the framework on the dataset. Five dimensions are configured in an exemplary way: Brevity (Brev), RDF Syntax (R-Syn), RDF Analytics (R-Ana), SPARQL Semantics (S-Sem) and SPARQL Syntax (S-Syn)
The models that fulfilled all our criteria and were among the TOP-4 models based on the average benchmark scores, disregarding models of the same family with lower scores, were Qwen2-72B-Instruct, Meta-Llama-3.1-70B-Instruct, solar-pro-preview-instruct and Phi-3.5-MoE-instruct. Here, we excluded solar-pro-preview-instruct from our selection since it only supports a context length of up to 4096 tokens and not all prompts of tasks included in the run fit within this limit. For the remaining three models, we also included all models of their larger model families that matched our requirements, i.e., we also tested all models of the Llama3 [10], Qwen2 [14,27], and Phi3 [1] families fulfilling our requirements.
In addition, we wanted to test open LLMs that are fine-tuned or explicitly optimized on code, since they could potentially better understand and produce structured data as required by the tasks included in LLM-KG-Bench. Here, we consulted the EvalPlus Leaderboard [18] and used the models' reported Mostly Basic Python Programming (MBPP) benchmark scores as our reference value to assess their code-producing quality. Again, we excluded models that were only fine-tuned versions of a code-finetuned or -optimized base model, had more than 80B parameters, or were not instruction-finetuned. Moreover, we only considered models that are fine-tuned or explicitly optimized on code. Finally, we included the Top-3 models satisfying these criteria in our runs, namely Qwen2.5-Coder-32B-Instruct [14], DeepSeek-Coder-33B-Instruct [11] and OpenCoder-8B-Instruct [13].
Table 3: Results of two-sided t-tests checking for a preference for Turtle vs. JSON-LD serialization. Preferences are reported if the confidence is at least 95%; bold font indicates 99% or better.
# 4.2 Example Evaluations
The generated data can be analysed in different ways. The raw data contains all LLM interactions, extensive logs, and the evaluation results in JSON, YAML and TXT format, including interaction and task details. The plotResults command helps to generate several boxplots (omitted here due to limited space) as well as CSV and Excel files showing the results in large tables.
In LLM-KG-Bench framework Version 3, we added the capability to aggregate results for each model evaluated and create capability compass plots. We used an exemplary configuration to create the ones shown in fig. 4. These plots can be used to give a summary of a model or create model cards.
Finally, we used the added task versions with different graph serialization formats to check whether some models prefer JSON-LD over Turtle serialization. Table 3 shows the results of two-sided t-tests for all combinations of related tasks and all models. A table cell contains "TTL" or "JSON" if the statistics indicate better scores for one format. Bold font is used if the confidence is at least 99%.

Abstract: Current Large Language Models (LLMs) can assist developing program code beside many other things, but can they support working with Knowledge Graphs (KGs) as well? Which LLM offers the best capabilities in the field of Semantic Web and Knowledge Graph Engineering (KGE)? Is it possible to determine this without checking many answers manually? The LLM-KG-Bench framework in Version 3.0 is designed to answer these questions. It consists of an extensible set of tasks for automated evaluation of LLM answers and covers different aspects of working with semantic technologies. In this paper the LLM-KG-Bench framework is presented in Version 3 along with a dataset of prompts, answers and evaluations generated with it and several state-of-the-art LLMs. Significant enhancements have been made to the framework since its initial release, including an updated task API that offers greater flexibility in handling evaluation tasks, revised tasks, and extended support for various open models through the vllm library, among other improvements. A comprehensive dataset has been generated using more than 30 contemporary open and proprietary LLMs, enabling the creation of exemplary model cards that demonstrate the models' capabilities in working with RDF and SPARQL, as well as comparing their performance on Turtle and JSON-LD RDF serialization tasks.
# 1 Introduction
Retrieval-Augmented Generation (RAG) [28, 30] enhances Large Language Models (LLMs) by leveraging external knowledge. However, despite the availability of numerous retrieval models, no single retriever consistently outperforms others across diverse queries and corpora [25, 40]. Inspired by distributed information retrieval and federated search, the research community has recently begun investigating query routing as a means to address this challenge.
Effective query routing is essential given the complexity introduced by various retrieval methods. Optimal routing can improve adaptability across query types, thereby enhancing downstream RAG performance. Additionally, selective routing can help reduce computational costs by activating only necessary retrievers, including potentially bypassing retrieval entirely. Furthermore, modern retrieval systems increasingly operate as independent, competing services, effectively forming a search marketplace targeted at machine consumers (LLMs) [31], making efficient query routing a timely research problem.
Existing query routing approaches exhibit several limitations. Many rely on heuristics [25, 29] or optimize solely for traditional retrieval metrics intended for human end-users [20], while in the context of LLM consumers, routing strategies should ideally optimize downstream LLM performance rather than traditional retrieval metrics.
# 2 Related Work
Distributed IR. Our approach to query routing builds on several established IR fields, including Distributed Search, Federated Search, Selective Search, Aggregated Search, and Meta Search. Distributed search traditionally addresses the selection of relevant document collections based on query relevance, often in a distributed and disjoint environment [4, 5]. Federated and selective search extend these ideas, focusing on brokered retrieval across multiple independent and typically uncooperative systems, employing resource representations and selection strategies to effectively route queries [11, 15]. Aggregated Search similarly aims to integrate diverse retrieval results from specialized vertical search services into a unified search interface, emphasizing the selection of relevant services per query [2]. Additionally, Meta Search combines results from several search engines to improve overall relevance, recognizing that no single search engine consistently outperforms others across diverse queries [7, 19].
While our query routing methodology shares conceptual similarities with these fields, it uniquely differs in its explicit emphasis on routing queries to varied retrieval strategies optimized directly for downstream retrieval-augmented generation (RAG) tasks.
Query Routing Strategies. Building on insights from distributed IR, recent RAG systems increasingly incorporate query routing strategies. Khramtsova et al. [25] examined various dense retriever selection methods and identified corpus similarity as a reliable, training-free criterion. This line of work was extended by Khramtsova et al. [26], who proposed an unsupervised approach using pseudoquery generation and LLM-based pseudo relevance judgments to rank dense retrievers.
Mu et al. [32] proposed a routing strategy that directly predicts downstream LLM performance, bypassing traditional retrieval effectiveness metrics. However, this approach overlooks cases where retrieval may not be beneficial and struggles with the variability of absolute score prediction across queries. Similarly, AdaptiveRAG [22] classifies queries by their perceived complexity to select retrieval strategies, but this relies on human-centric definitions of complexity and requires curated training data, which may not align with LLM behavior. Other recent studies expand the space of query routing. RouterRetriever [29] uses embedding similarities for retriever selection. Guerraoui et al. [20] introduced RAGRoute, a lightweight classifier that dynamically selects retrieval sources to optimize recall and classification accuracy. Tang et al. [38] framed routing as a contextual multi-armed bandit problem in knowledge graph-based RAG systems, but without modeling no-retrieval as a viable option.
Our approach emphasizes learning to rank retrievers based directly on improvements in downstream LLM utility. It explicitly includes no-retrieval as a valid action and is evaluated over a diverse set of retrieval strategies.
# 3 RAG Method
Multi-Retriever Setup. We utilize an OpenSearch sparse BM25 retriever and a Pinecone dense $\mathrm{E5}_{base}$ retriever as base retrievers, and we combine two reranking strategies with distinct goals to create variations in retrieval strategies. The first, score regularization [14], focuses on improving retrieval performance. The second, stochastic reranking [27], aims to enhance item-fairness and diversity, which can also improve downstream RAG performance. As a result, we establish six distinct retrievers: (1) BM25; (2) BM25 + Score Regularization Reranking; (3) BM25 + Stochastic Reranking; (4) $\mathrm{E5}_{base}$; (5) $\mathrm{E5}_{base}$ + Score Regularization Reranking; and (6) $\mathrm{E5}_{base}$ + Stochastic Reranking. Details of the reranking methods can be found in Appendix A.
All retrievers utilize the sampled version of the FineWeb corpus (15M documents) [35], and their retrieval strategies and corpus statistics remain hidden from both the generator and the router (an uncooperative environment).
Query Routing via LTRR. Our RAG framework routes queries to a suitable retriever $\mathcal { R } _ { i }$ from a pool of multiple retrievers $L _ { \mathcal { R } }$ . For each input instance $x$ , we generate a query $q$ via a query generation function $\phi _ { q } ( x )$ . The core objective is to route this query to one or more retrievers that maximize the downstream performance of the RAG system. Formally, we introduce a router function $\mathcal { F }$ that maps queries to a ranked set of retrievers:
$$
\mathcal { F } ( q ; L _ { \mathcal { R } } ) \to \pi _ { L _ { \mathcal { R } } } ,
$$
where $\pi_{L_{\mathcal{R}}}$ is a ranking of retrievers reflecting the predicted utility each retriever can provide to the downstream generator $\mathcal{G}$ for the given query. In our implementation, we route queries to the top-ranked retriever using a pairwise XGBoost-based router (chosen for its empirical effectiveness, as discussed in later sections). Importantly, we include a 'no-retrieval' option $\mathcal{R}_0$ in the ranking. This allows the system to bypass retrieval altogether when the router predicts that relying solely on the language model's parametric memory yields the best performance.
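Under these definitions, routing a query amounts to sorting the retriever pool, including the no-retrieval option, by a predicted utility score. A minimal sketch (the score function here is a hypothetical stand-in for the trained router; the retriever names and scores are made up for illustration):

```python
def route(query, retrievers, score_fn):
    """Sketch of the router F: rank all options, with None encoding the
    no-retrieval option R_0, by predicted utility (best first)."""
    options = [None] + list(retrievers)
    return sorted(options, key=lambda r: score_fn(query, r), reverse=True)

# Hypothetical predicted utilities for one query; a trained router would
# derive these scores from query- and retriever-specific features.
predicted = {None: 0.2, "bm25": 0.9, "e5-base": 0.5}
ranking = route("who wrote faust?", ["bm25", "e5-base"], lambda q, r: predicted[r])
top_choice = ranking[0]  # query is routed here (None would mean: answer directly)
```

Routing to the top-ranked option covers both cases described in the text: selecting the best retriever and bypassing retrieval when the no-retrieval option scores highest.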
Generator. We use Falcon3-10B-Instruct [39] as the generator in our RAG system. Inspired by recent work on prompting LLMs to reason with external information [8, 23], we design prompts that instruct the model to explicitly assess the relevance and utility of each retrieved passage. Specifically, the model is prompted to reflect on how to use the passages in a <think> section, followed by its final answer in a <answer> section. We extract only the content within the <answer> tag as the system’s output. For ill-formatted generations, a fallback prompt omitting explicit reasoning is used. Full prompt details are provided in Appendix B.
# 4 Learning to Rank Retrievers
We propose Learning to Rank Retrievers (LTRR4LLM), which formulates the routing problem as a learning-to-rank task tailored specifically to optimize downstream LLM performance.
To first derive the ground-truth retriever rankings required for training, we measure the utility gain $\delta _ { i }$ achieved by retriever $\mathcal { R } _ { i }$ relative to the baseline generator performance (without retrieval):
$$
\delta _ { i } = \mu _ { u } ( \mathcal { G } ( \overline { x } _ { i } ) , y ) - \mu _ { u } ( \mathcal { G } ( x ) , y ) ,
$$
where $\overline{x}_i = \phi_p(x, \mathcal{R}_i(q, k))$, with $k$ denoting the number of passages to retrieve, $\phi_p$ denoting a prompt construction function for an LLM $\mathcal{G}$, and $\mu_u$ an arbitrary string utility metric. To ensure comparability across queries, utility-gain scores are min-max normalized per query into the range $[0,1]$.
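The construction of training targets from utility gains can be sketched as follows; the function names are illustrative, and mapping a constant gain list to zeros is our own convention for the degenerate case:

```python
def utility_gains(scores_with_retrieval, score_without):
    """Per-retriever utility gain: delta_i = u(G with R_i) - u(G without)."""
    return [s - score_without for s in scores_with_retrieval]

def minmax_normalize(values):
    """Min-max normalize one query's gains into [0, 1]; a constant list
    maps to all zeros (assumed convention, not specified in the paper)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# One query, three retrievers: utilities 0.8/0.5/0.6 with retrieval, 0.6 without.
gains = utility_gains([0.8, 0.5, 0.6], 0.6)   # positive, negative, and zero gain
targets = minmax_normalize(gains)             # per-query targets in [0, 1]
```

Note that a gain of zero corresponds exactly to the no-retrieval baseline, which is what allows $\mathcal{R}_0$ to be ranked alongside the real retrievers.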
Following the LTR literature [6], LTRR is then characterized by a scoring function $f$ :
$$
f ( \Phi ( q , \mathcal { R } _ { i } ) ) \in \mathbb { R } ,
$$
that assigns a score to each retriever based on query- and retriever-specific features $\Phi(q, \mathcal{R}_i)$ extracted for the $i$-th retriever $\mathcal{R}_i$.
To train the ranking model, we experiment with three wellestablished approaches (detailed in Appendix C): (1) the pointwise approach, where the model predicts each retriever’s utility gain $\delta _ { i }$ independently using a regression loss; (2) the pairwise approach, where the model learns to minimize ranking inversions between retriever pairs based on their relative utility gains compared to the no-retrieval baseline; and (3) the listwise approach, which directly optimizes the predicted utility gains over the full set of retrievers for each query.
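For the pairwise approach, training pairs can be derived from the per-query utility gains; the following is our own minimal sketch of this target construction, not the paper's exact procedure:

```python
def pairwise_preferences(deltas):
    """Sketch of pairwise target construction: emit a pair (i, j) whenever
    retriever i achieved a strictly higher utility gain than retriever j.
    A pairwise ranker is then trained to score i above j for such pairs."""
    return [
        (i, j)
        for i, di in enumerate(deltas)
        for j, dj in enumerate(deltas)
        if di > dj
    ]

# Utility gains for three retrievers on one query; ties produce no pair.
pairs = pairwise_preferences([0.3, 0.1, 0.3])
```

Minimizing ranking inversions over such pairs sidesteps the cross-query variability of absolute gain prediction that the pointwise approach has to contend with.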
# 4.1 LTR Features
Our setup assumes an uncooperative retrieval environment in which retrievers do not expose detailed corpus statistics or embedding model specifications. Thus, we extract a set of query-dependent pre-retrieval features and query- and retriever-dependent post-retrieval features to facilitate effective learning-to-rank (LTR) modeling.
For pre-retrieval features, we include the query representation $(\mathbb{R}^{\mathrm{dim}})$, query length, and query type. The query representation is a vector produced by an embedding model, with optional dimensionality reduction (e.g., via PCA). Query type is determined using a lightweight classifier that distinguishes between keyword-based and natural language queries. These features are query-specific but not retriever-specific, allowing LTRR models to learn differences across queries.
Post-retrieval features, in contrast, are computed after querying all retrievers and are both query- and retriever-specific, providing the LTRR model with signals to differentiate between retrievers.
Let $z_i = [d_{i,1}, \dots, d_{i,k}]$ denote the top-$k$ documents retrieved by retriever $\mathcal{R}_i$, let $s(\cdot, \cdot)$ be an embedding-based cosine similarity function, and let $M$ be the total number of available retrievers. We define $e(z_i) = \frac{1}{k} \sum_{j=1}^{k} \mathrm{embed}(d_{i,j})$ as the aggregated semantic embedding of the documents retrieved by $\mathcal{R}_i$. Using these definitions, we construct the following semantic and statistical features:
• OverallSim: similarity between the query and the aggregated embedding of the retrieved documents, $s(q, e(z_i))$,
• AvgSim: average similarity score between the query and individual retrieved documents, $\mathrm{avg}_j(s(q, d_{i,j}))$,
• MaxSim: maximum similarity score between the query and individual retrieved documents, $\max_j(s(q, d_{i,j}))$,
• VarSim: variance of the retrieval similarity scores, $\mathrm{var}_j(s(q, d_{i,j}))$, capturing retrieval confidence dispersion,
• Moran: the Moran coefficient [13], which measures semantic autocorrelation among retrieved documents in alignment with the cluster hypothesis, and
• CrossRetSim: average semantic similarity of the current retriever’s result set to those of the other retrievers, defined as $\frac{1}{M-1}\sum_{m; m \neq i}^{M} s(e(z_i), e(z_m))$, which indicates how unique a result set is compared to those produced by the other retrievers.
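The semantic features above can be sketched in pure Python (the Moran coefficient is omitted here, and all function names are ours, not the paper's):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_vector(vectors):
    """Aggregated embedding e(z_i): component-wise mean of the top-k docs."""
    k = len(vectors)
    return [sum(col) / k for col in zip(*vectors)]

def post_retrieval_features(q_emb, doc_embs, other_agg_embs):
    """OverallSim, AvgSim, MaxSim, VarSim, CrossRetSim for one retriever.
    `other_agg_embs` holds the aggregated embeddings e(z_m) of the other
    retrievers' result sets."""
    sims = [cosine(q_emb, d) for d in doc_embs]
    agg = mean_vector(doc_embs)
    avg = sum(sims) / len(sims)
    return {
        "OverallSim": cosine(q_emb, agg),
        "AvgSim": avg,
        "MaxSim": max(sims),
        "VarSim": sum((s - avg) ** 2 for s in sims) / len(sims),
        "CrossRetSim": sum(cosine(agg, e) for e in other_agg_embs)
                       / len(other_agg_embs),
    }
```

In a real system the embeddings would come from the same model that produces the query representation, so that the similarity scales are comparable across features.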
For the no-retrieval option $( \mathcal { R } _ { 0 } )$ , only pre-retrieval features are available. To maintain consistent feature dimensionality across retrievers, we handle missing post-retrieval features differently depending on the model type. For neural LTR models, we introduce a dedicated learnable parameter vector to represent the post-retrieval feature space of $\mathcal { R } _ { 0 }$ . This vector is randomly initialized and optimized during training, allowing the model to implicitly encode the characteristics of the no-retrieval strategy. For non-neural models, we apply median imputation based on the training data to fill in the missing post-retrieval features, ensuring compatibility with fixed-length feature inputs.
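The median-imputation fallback for non-neural models can be sketched as follows (names are ours; `None` marks a missing post-retrieval feature, and medians are computed from the rows given, standing in for the training data):

```python
def median_impute(feature_rows, missing=None):
    """Fill missing values with per-column medians over the observed rows,
    so that every row has a fixed-length, fully numeric feature vector."""
    n_cols = len(feature_rows[0])
    medians = []
    for c in range(n_cols):
        col = sorted(r[c] for r in feature_rows if r[c] is not missing)
        m = len(col)
        medians.append(col[m // 2] if m % 2 else (col[m // 2 - 1] + col[m // 2]) / 2)
    return [[medians[c] if r[c] is missing else r[c] for c in range(n_cols)]
            for r in feature_rows]
```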
# 5 Experiment
We evaluate our proposed routing approaches against a range of baseline and train-free routing models.
# 5.1 Routing Models
We first consider five heuristic, train-free routing models, each based on a post-retrieval feature: OverallSim, AvgSim, MaxSim, VarSim (where lower variance is preferred), and Moran.
For learned routing models trained via the LTRR framework, we evaluate eleven models spanning the pointwise, pairwise, and listwise paradigms. In the pointwise and pairwise settings, we train models using XGBoost [9], $\mathrm{SVM}^{rank}$ [24], a feedforward network (FFN), and DeBERTa [21]. In the listwise setting, we evaluate ListNet [6], LambdaMART [42], and DeBERTa-based models.
All LTRR models are trained using utility labels derived from two metrics: BEM [3] and Answer Correctness (AC) [17], both shown to correlate strongly with human evaluations in RAG settings [34].
# 5.2 Datasets
We generate a synthetic dataset using DataMorgana [18], enabling fine-grained control over question and user characteristics. Question configurations include four dimensions: answer type, premise, phrasing, and linguistic variation. User configurations are based on expertise level (expert vs. novice). Full dataset generation details are provided in Appendix D.
For our LTRR experiments, we focus on the answer-type category, which comprises five distinct question types: factoid, multi-aspect, comparison, complex, and open-ended.
We construct five dataset splits for evaluation. The Balanced split includes all question types proportionally in both training and test sets. Four unseen type splits (multi-aspect, comparison, complex, and open-ended) each hold out one question type from training and use it exclusively for testing, enabling us to assess model generalization to unseen query types. Dataset statistics are reported in Appendix E.
# 6 Results and Discussion
RQ1: Do routing-based RAG systems outperform the best-performing standard RAG model? To study this question, we first identify the best-performing single-retriever (standard) RAG system for each dataset under the two utility metrics. Their downstream scores are shown in the ‘Best Standard RAG’ row of Table 1 (see Appendix F for details).
As shown in Table 1, the train-free routing models did not yield statistically significant improvements over the best-performing standard RAG systems, despite showing some numerical gains. In contrast, the LTRR-based models demonstrated more substantial improvements, particularly with the pointwise $\mathrm{SVM}^{rank}$ and DeBERTa, as well as the pairwise XGBoost and DeBERTa models, which achieved statistically significant gains on the Balanced split.
However, performance gains were noticeably higher for router models trained using the AC utility metric compared to those trained with BEM. Statistically significant improvements were observed only for AC-based routers (highlighted in bold), while no such gains were found for BEM-based models. We attribute this discrepancy to differences in metric reliability: although both BEM and AC correlate well with human judgments, prior work shows that AC consistently achieves stronger alignment [34]. Since LTRR models are trained directly on utility labels, the choice of a consistent and accurate metric is critical.
RQ2: Do LTRR-based routing algorithms outperform the best-performing train-free routing model? We also examined whether trained routing models (LTRR-based) outperform the highest train-free baselines. As in RQ1, numerical results indicate that LTRR models generally outperform the strongest train-free routers (usually MaxSim). However, statistical significance tests revealed that these improvements were not significant after correction. This suggests that the observed gains may be subject to variability and underscores the need for larger-scale studies or refined methods to more conclusively demonstrate the advantages of trained routing over train-free approaches.
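The correction used in the significance protocol can be sketched as follows (the paired Wilcoxon test itself is not re-implemented; the p-values are placeholders and the function name is ours):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Bonferroni correction for m simultaneous comparisons: a router's
    improvement counts as significant only if its paired-test p-value
    is below alpha / m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```

With three routers compared at $\alpha = 0.05$, each individual test must pass the stricter threshold $0.05/3 \approx 0.0167$, which is why numerically positive gains can fail to reach significance after correction.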
Table 1: Average downstream RAG utility measured by either BEM or AC when the test queries are routed to the top retriever based on the routing model. Bold: statistically significant improvement over the best standard RAG system according to paired Wilcoxon signed-rank tests with Bonferroni correction.
RQ3: Are the performance improvements from routing-based RAG models robust across the different unseen query-type splits? We investigated the robustness of performance improvements across various unseen query types (multi-aspect, comparison, complex, open-ended). While train-free routing models showed relatively modest improvements across these splits, LTRR-based trained routing algorithms displayed more stable and consistent performance gains. In particular, the pairwise XGBoost routers trained on the AC utility metric showed the most consistent outperformance over the standard RAG and train-free baselines across different unseen datasets, achieving statistically significant results in the complex and open-ended query splits.
Discussion and Implications. Our findings underscore that not all routing improvements are created equal. While LTRR models often outperform standard and train-free routing methods numerically, only a subset achieve statistically significant gains, particularly those trained with the AC utility metric. This confirms that metric choice is not merely a technical detail, but a determinant of learning signal quality. Moreover, the effectiveness of pairwise training (especially with tree-based models like XGBoost) suggests that explicitly modeling retriever tradeoffs per query offers a more robust inductive bias than listwise or pointwise formulations. The observed performance robustness of LTRR across unseen question types further indicates that routing functions can generalize beyond their training distribution. This points to the potential of query routing as a critical component in adaptive retrieval architectures, especially for long-tailed or evolving query scenarios. Notably, our system is built entirely on lightweight, cost-effective retrievers and a computationally efficient routing model, but still achieves meaningful gains, showing that even modest setups can benefit from query routing. Finally, since LTRR produces a full ranking over retrievers, it naturally supports future extensions to multi-retriever selection, where retrieved results can be fused to enhance coverage and diversity [10]. We leave this extension for future work.
# 1 Introduction
Probabilistic filters. Probabilistic filters are space-efficient data structures for fast set membership queries. They have many practical applications, such as in database systems [24], networks [14, 25], storage systems [21] and sequence analysis in computational biology [17, 20, 23]. A probabilistic filter represents a set $K$ of keys from a key universe $\mathcal{U}$ and supports at least two operations: (1) inserting a key, and (2) querying whether a key is contained in the filter. Instead of storing an exact representation of the set $K$, the keys are fuzzily represented in a way that queries for keys that were inserted into the filter are always correctly answered (no false negative results). Queries for keys that were not inserted are correctly answered most of the time, but are sometimes incorrectly answered positively. The probability of obtaining a false positive answer is called the false positive rate (FPR). The FPR is controlled by the user and related to the space requirements of the filter. To represent an arbitrary set of keys $K$ with cardinality $|K| = n$ with an FPR of $\varepsilon = 2^{-k}$, the theoretically optimal space requirement is $nk$ bits. Practical implementations of filters need $Cnk$ bits with an overhead factor $C > 1$ that differs between filter types. Different filter types try to achieve small $C$ and fast insertion and query times, often with different trade-offs between space and time efficiency. In addition, some filter variants support additional operations, such as deletion of keys (dynamic filters), counting the number of times a key was inserted (counting filters), or even storing arbitrary values with the keys (probabilistic key-value stores). Other variants optimize the space requirements for a fixed set of keys, where the complete set of keys has to be known before filter construction starts (static filters).
Filter variants. Bloom filters [2] were first introduced in 1970 and are still among the most commonly used filters. Bloom filters, when optimally configured for an FPR of $2^{-k}$, store a bit array with $m := (1/\ln 2)\,nk \approx 1.443\,nk$ bits. Disadvantages of Bloom filters are the relatively high overhead of $44.3\%$ and slow insertion and query times. Blocked Bloom filters [29] achieve faster insertion and query times at the cost of even larger space, and the recent BlowChoc filters [30] keep the advantages of Blocked Bloom filters while reducing the space requirements, sometimes even below those of standard Bloom filters.
However, alternative filter designs often perform better than Bloom filters. Cuckoo filters [11], (Vector) Quotient filters [13, 28] and Prefix filters [10] store a $k$ -bit fingerprint in a hash table instead of distributed bits in a bit array. This design allows for storing additional values together with the key fingerprints, simply by using additional bits, giving us not only a probabilistic set membership data structure, but also a probabilistic key-value store without additional effort. The FPR is controlled by the fingerprint size, where larger fingerprints lead to smaller FPRs. Different filter types in this class differ in their hash collision resolution strategies. For example, Cuckoo filters resolve collisions using Cuckoo hashing [27, 11] and Vector Quotient filters use ideas from Robin Hood hashing [28].
If the whole key set is known in advance, static filters, such as XOR filters [15], (Bumped) Ribbon filters [8, 7] or Binary Fuse filters [16], achieve smaller overhead factors, but are not suitable for streaming applications where the key set $K$ is usually not known in advance.
The above filters, even the non-static ones, require knowledge of the cardinality $n = | K |$ (or an accurate estimate of it) prior to filter construction. In contrast, extendable filters, such as Infini filters [4], Aleph filters [5], dynamic Cuckoo filters [3] or consistent Cuckoo filters [22] increase their capacity if more keys arrive than originally planned for, at the cost of a higher memory usage and/or higher FPR. Adaptive Cuckoo filters [26] can react to false positive queries by moving them to a hash table. This leads to a lower FPR for future queries at the cost of a larger memory overhead.
In this work, we focus on fingerprint based non-extendable filters supporting online insertions; an overview is given in Table 1. In contrast to the Bloom filter, the (best possible) overhead factor of the fingerprint based filters decreases with increasing $k$ (see also Figure 1), but many of these filters have additional technical or efficiency-related restrictions, which can increase the overhead by a factor of (almost) 2, and hinders flexible use.
Table 1 Overview of non-extendable filter types supporting online insertions. The overhead factor $C = C ( k ) > 1$ specifies the required space per key; one needs $C n k$ bits to store $n$ keys with an FPR of $2 ^ { - k }$ . For some filters, $C ( k )$ can vary between the given formula (best case) and twice that value (worst case; here given as worst-case $C ( 8 )$ for a typical $k = 8$ scenario). The number of cache misses for insertion/lookup is a proxy for insertion/lookup time.
(†): The number of buckets/slots in the filter must be a power of 2. (‡): The Prefix filter achieves the same FPR for different parameters $\gamma$, here $\gamma = 1/\sqrt{2\pi \cdot 25}$ [10]. $(*)$: For Cuckoo filters, $M$ is the maximum random walk length during insertion. Higher values of $M$ achieve lower overhead but require more time. $(**)$: For the Quotient filter, cache misses depend on the load factor; higher loads (lower overhead) mean more cache misses and more time. $(***)$: For the Prefix filter, cache misses depend on the load factor and whether the spare (second level; used if bins in the first level are full) is accessed. If the spare is not accessed, inserts and queries require 1 cache miss. The number of cache misses in the spare depends on the chosen spare filter (e.g., Cuckoo, Vector Quotient or Blocked Bloom filter).
Contributions and Outline. We improve upon Cuckoo filters in two ways: (1) We remove technical restrictions that until now only yield a small space overhead under certain conditions on the number $n$ of inserted keys. (2) We change the memory layout to overlapping windows instead of non-overlapping buckets, obtaining higher hash table load with fewer possible locations for each key, thus needing fewer bits of memory overall. The advantages of the windowed memory layout also apply to other Cuckoo filter variants, such as the dynamic Cuckoo filter [3] or the adaptive Cuckoo filter [26], but we limit our evaluation to the standard Cuckoo filter with fixed capacity. Our implementation is parallelized using independent subfilters which can be filled by separate threads without locking or complex communication. We obtain universally usable filters that are both small and fast.
After providing background on Cuckoo hashing and Cuckoo filters in Section 2, we describe how to improve Cuckoo filters in Section 3 and provide implementation details in Section 4. In Section 5, we provide a detailed evaluation of bucketed and windowed Cuckoo filters and compare them to other state-of-the-art filters. Section 6 concludes.
# 2 Background
# 2.1 Multi-way Cuckoo hashing
In standard Cuckoo hashing [27], a set of keys $K = \{x_1, x_2, \ldots, x_n\} \subset \mathcal{U}$ is stored (exactly) in a hash table with $s \geq n$ slots, indexed $0, 1, 2, \ldots, s-1$. After inserting $n$ keys, the fill rate (or load factor) of the hash table is $r := n/s$. The hash table may store additional data associated with each key. Each key $x$ may be inserted into one of two possible slots, computed using two hash functions $f_1 : \mathcal{U} \to [s] := \{0, \dots, s-1\}$ and $f_2 : \mathcal{U} \to [s]$, randomly chosen from a universal family. If both slots $f_1(x)$, $f_2(x)$ for key $x$ are already occupied, one of the keys at $f_1(x)$ or $f_2(x)$ is removed and re-inserted into its alternative slot. This slot might also be occupied, so a chain of removals and re-insertions starts until a free slot is found. This can lead to long walks or end in a cycle, and the maximum load factor that can be achieved with high probability in this setting is $1/2$, a rather low value.
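The eviction chain can be made concrete with a minimal sketch of insertion into a plain two-way Cuckoo hash table with single-slot cells; the hash functions and table size below are illustrative, not part of the paper:

```python
import random

def cuckoo_insert(table, key, f1, f2, max_steps=100):
    """Insert `key` into `table` (a list of cells, None = empty) using the
    two candidate slots f1(key) and f2(key); on conflict, evict a resident
    key and try to relocate it to its alternative slot."""
    cur = key
    for _ in range(max_steps):
        s1, s2 = f1(cur), f2(cur)
        if table[s1] is None:
            table[s1] = cur
            return True
        if table[s2] is None:
            table[s2] = cur
            return True
        victim_slot = random.choice((s1, s2))
        cur, table[victim_slot] = table[victim_slot], cur  # evict and retry
    return False  # walk too long: insertion fails
```

A failed return after `max_steps` corresponds to the long walks and cycles mentioned above; production implementations would then rehash or resize.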
Figure 1 (A) Best-case overhead factors for different filters and FPRs. The standard Cuckoo filter is the bucketed (2,4) Cuckoo filter (orange). (B) Number of required buckets in the original Cuckoo filter [11] (dashed blue staircase) vs. an optimally sized Cuckoo filter (orange diagonal).
Table 2 Rounded theoretical load thresholds for $( 2 , l )$ Cuckoo hashing [31].
To achieve higher loads and consequently use less memory, Cuckoo hashing has been generalized in the following three ways.
1. In $d$-ary Cuckoo hashing, $d$ independent hash functions $f_1, f_2, \dots, f_d : \mathcal{U} \to [s]$ are used to compute $d$ candidate positions for each key [12].
2. In $( d , l )$ bucketed Cuckoo hashing, the hash table is divided into $B = \lceil s / l \rceil$ non-overlapping buckets, each containing $\it l$ slots. The $d$ hash functions $f _ { 1 } , \dots , f _ { d } : \mathcal { U } \to [ B ]$ pick alternative buckets. Each key can be inserted into any of the $l$ slots inside the $d$ buckets [6, 32].
3. $( d , l )$ windowed Cuckoo hashing works similarly to $( d , l )$ bucketed Cuckoo hashing, but uses overlapping windows instead of non-overlapping buckets [19]. Each key can be inserted into one of $d l$ slots, but each window overlaps by ${ \mathit { l } } - 1$ positions with its surrounding windows, hence the total number of windows is $W = s - l + 1$ .
Theoretical asymptotic load thresholds for these generalizations have recently been computed by Stefan Walzer [31]; see Table 2. The load threshold increases with $d$ and $l$ and is higher for windows compared to buckets. However, larger $d$ leads to increased query times, since we search a key in each of the $d$ buckets or windows at different memory locations, each likely causing a cache miss, while a search inside a bucket or window is local. Hence, we consider only $d = 2$ . Since existing Cuckoo filter implementations use buckets, it is natural to ask whether a windowed memory layout reduces the memory overhead while retaining the fast query times of Cuckoo filters also in practice.
# 2.2 Cuckoo filters
A Cuckoo filter is a probabilistic data structure that is based on Cuckoo hashing, introduced using $( 2 , 4 )$ bucketed Cuckoo hashing [11]. A Cuckoo filter uses a fingerprint hash function $f _ { 0 } : \mathcal { U } \to [ 2 ^ { q } ]$ that computes a $q$ -bit fingerprint for a key. Each key is assigned to two distinct buckets $b _ { 1 }$ and $b _ { 2 }$ from a total of $B$ buckets. A bucket consists of a constant number of available slots (4 in [11]), and each slot may hold a fingerprint. Therefore, the fingerprint of a key can be stored in any of $2 \cdot 4 = 8$ slots. In order to guarantee an FPR of $2 ^ { - k }$ , with 8 possible locations for each key, the fingerprint size needs to increase to $q : = k + 3$ bits, yielding an FPR bounded by $8 \cdot 2 ^ { - ( k + 3 ) } = 2 ^ { - k }$ .
The two bucket addresses are constrained such that one can be computed from the other and the fingerprint. In the standard Cuckoo filter [11], a hash function $f _ { 1 } : \mathcal { U } \to [ B ]$ computes the first bucket address of a key $x$ , given by $b _ { 1 } = f _ { 1 } ( x )$ . The second bucket address is the bit-wise XOR of a hash of the fingerprint $h ( f _ { 0 } ( x ) )$ and the first bucket address $b _ { 1 }$ , where $h : [ 2 ^ { q } ] \to [ B ]$ . Conversely, the first bucket address is obtained back by XORing $b _ { 2 }$ with the same fingerprint hash. The XOR operation is only guaranteed to return a valid address if the number of buckets is a power of 2. A proposed simplification uses the $q$ -bit fingerprint $f _ { 0 } ( x )$ directly instead of a hash value $h ( f _ { 0 } ( x ) )$ [9].
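A tiny sketch of the XOR partner computation (function name ours) illustrates both the symmetry and the power-of-two restriction:

```python
def partner_bucket_xor(b, fp_hash, num_buckets):
    """Symmetric partner-bucket computation of the standard Cuckoo filter.
    XOR is an involution, so applying it twice returns the original bucket;
    the result is a valid address only if num_buckets is a power of two."""
    assert num_buckets & (num_buckets - 1) == 0, "B must be a power of 2"
    return b ^ (fp_hash % num_buckets)
```

If $B$ were not a power of two, $b \oplus h(f_0(x))$ could exceed $B-1$, which is exactly why the bucket count must be rounded up to a power of two in this scheme.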
Insertions. Inserting a key $x$ works as follows: If any of the $2l$ slots where the fingerprint of $x$ can be stored is empty, it is stored there, giving priority to slots in the first bucket $b_1$. If all $2l$ slots are full, pick a random one of these slots, remove the stored fingerprint, say of key $x'$, and insert fingerprint $f_0(x)$ at the now free slot. Then, attempt to insert the removed fingerprint $f_0(x')$ into its alternative bucket (which can be computed from its former address and the fingerprint itself, without knowledge of $x'$). This procedure is repeated for up to a given number of steps. The insertion fails if no free slot can be found. The probability of failure approaches zero if the number of buckets is large enough (and enough steps are allowed). In the described $(2,4)$ configuration, $B = 1.02 \cdot n/4$ buckets, or $1.02n$ slots are sufficient in theory. In practice, in order to limit the length of the random insertion walks, the number of buckets and slots is chosen larger, and the original work [11] uses $1.05n$ slots. It has to be noted that a Cuckoo filter (if not over-provisioned) cannot tolerate many additional insertions; these will simply fail.
Queries. To query the presence of a key in a $( 2 , 4 )$ bucketed Cuckoo filter, search all eight possible slots and return True if and only if the fingerprint $f _ { 0 } ( x )$ was found at any of the slots. Queries are fast, because at most two distinct memory locations (buckets $b _ { 1 }$ and $b _ { 2 }$ ) need to be accessed; the remaining memory accesses are local within each bucket.
# 3 Flexible windowed Cuckoo filters
We improve upon the standard Cuckoo filter in two ways: First, we introduce a different way to find the alternative bucket (or window) from the given one and the fingerprint, using a signed offset. Second, we use overlapping windows instead of disjoint buckets.
# 3.1 Moving between alternative buckets or windows
In the standard Cuckoo filter [11], there are two alternative buckets to store the $q$-bit fingerprint $f_0(x)$ of a key $x$. If $b$ is one of the buckets, $b'$ is the alternative bucket, and $h : [2^q] \to [B]$ is a hash function, then setting $b' = b \oplus h(f_0(x))$ is symmetric in $b, b'$ but requires that the number of buckets satisfies $B = 2^\beta$ for some integer $\beta$ to guarantee valid bucket addresses $b, b' \in [B]$. This may introduce a significant memory overhead (up to an additional factor of 2.0; see Figure 1B). An alternative [15] proposed $b' := (B - (b + f_0(x))) \bmod B$, which is also symmetric in $b, b'$, but has no restrictions on $B$. However, it does not allow further randomization, which makes it vulnerable to adversarial input data and introduces strong anti-correlation between the locations of $b$ and $b'$.
Figure 2 Inserting a key’s fingerprint into a Cuckoo filter with window size 4 and an FPR of $2^{-5}$: Compute both window addresses and search for an empty slot in either window. If both windows are full, as shown here, remove a random fingerprint from one of the windows (here, the orange fingerprint), compute its two possible windows based on the choice and offset bits and try to re-insert the fingerprint at one of the alternative slots. Here, the removed fingerprint can directly be re-inserted into its current window, since the last position in the window is still empty.
We use a different approach, consisting of the first bucket hash function $f_1 : \mathcal{U} \to [B]$ and an offset hash function $f_2 : [2^q] \to [B-1]$, mapping any fingerprint to an offset less than $B$. For a key $x$, we have the first bucket address $b = f_1(x)$ and $b' := (b + 1 + f_2(f_0(x))) \bmod B$ with $b' \neq b$. From $b'$, bucket $b$ cannot be reconstructed symmetrically, but asymmetrically as $b = (b' - 1 - f_2(f_0(x))) \bmod B$. Hence, we need to use a bit in addition to the fingerprint to store the current bucket choice ($b$ or $b'$) that tells us whether we need to add or subtract the offset to obtain the alternative bucket.
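The add/subtract round trip can be sketched as follows (function name ours); the stored choice bit selects the direction:

```python
def alt_address(addr, choice, fp_offset, B):
    """Move between the two buckets of a fingerprint using a signed offset.
    choice 0: addr is the first bucket, so add the offset;
    choice 1: addr is the second bucket, so subtract it.
    fp_offset = f2(f0(x)) lies in [B-1] = {0, ..., B-2}, so the two
    addresses always differ and B may be any integer, not a power of 2."""
    if choice == 0:
        return (addr + 1 + fp_offset) % B
    return (addr - 1 - fp_offset) % B
```

Because $(b + 1 + o) \bmod B = b$ only when $o = B-1$, restricting the offset to $[B-1]$ guarantees $b' \neq b$ for every fingerprint.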
It seems disadvantageous that we need one extra bit in each slot. However, when we query the presence of a key, we only consider the key to be present if both the fingerprint and the choice bit are identical. Therefore, we can use the choice bit as one of the 3 extra bits needed to counteract the 8 possible slots. We do not require that this bit is uniformly distributed between $0$ and 1 across the filter: Assume that fraction $p$ of all fingerprints (of length $q = k + 2$ ) are stored in their respective first bucket with choice bit 0, and the remaining fraction $1 - p$ in their second bucket with choice bit 1. A false positive occurs if we find the fingerprint together with choice bit 0 in the first bucket (probability $\leq 4 \cdot 2 ^ { - q } \cdot p$ ) or with choice bit 1 in the second bucket (probability $\leq 4 \cdot 2 ^ { - q } \cdot ( 1 - p ) )$ . So the total probability is bounded by $4 \cdot 2 ^ { - q } \cdot ( p + 1 - p ) = 2 ^ { - k }$ , independently of $p$ .
# 3.2 Windowed layout
The standard Cuckoo filter uses (2,4) bucketed Cuckoo hashing [11]. Alternatively, the array may be divided into $W$ overlapping windows of size $\it { l }$ , where each window overlaps with the previous window by ${ \mathit { l } } - 1$ slots. Using windows has been shown to increase the load threshold for Cuckoo hash tables in comparison to buckets [31, 19]; see Table 2. This allows us to move from $( 2 , 4 )$ bucketed Cuckoo hashing to $( 2 , 2 )$ windowed Cuckoo hashing without sacrificing much of the load threshold.
However, new complications arise: Any slot of the hash table now belongs to $l$ different windows, and from the location alone we do not know the window number. Hence, we also need to store the offset from the window start, a number in $[l]$, taking $\log_2 l$ bits. (For convenience, $l$ should therefore be a power of 2, such as 2 or 4.) Fortunately, again, these need not be extra bits, but can be used as part of the bits that counteract the FPR multiplier. For concreteness, in a $(2,2)$ windowed Cuckoo filter, we use a $k$-bit fingerprint, one additional choice bit to indicate which of the two alternative window locations we are using, and one additional offset bit within the window (first or second slot). Similarly to the choice bit, the distribution of the window offset bits need not be uniform over all $l$ possible values.
Inserting or querying a key works analogously to the version with buckets. To insert a key, check if any of the $2 l$ slots is empty. If yes, insert the fingerprint into the empty slot. Otherwise, remove a random fingerprint and try to re-insert it into an alternative slot. To re-insert the removed fingerprint, compute both its current and alternative window based on the stored bits (Figure 2). Run this loop of removal and re-insertion until an empty slot is found, but at most for a fixed number of steps.
For queries, return True if and only if any of the $2 l$ possible slots contain the correct fingerprint and the correct choice and window offset bits for the currently searched slot.
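For illustration, the $2l$ candidate slots of a key can be enumerated from its two window addresses, together with the (choice, offset) metadata that would have to be stored alongside the fingerprint; the function name and tuple layout are ours:

```python
def candidate_slots(w1, w2, l):
    """All 2l slots where a fingerprint may live in a (2, l) windowed
    Cuckoo filter. Window w covers slots w, ..., w + l - 1; each candidate
    is returned as (slot, choice_bit, offset_bits)."""
    slots = []
    for choice, w in ((0, w1), (1, w2)):
        for offset in range(l):
            slots.append((w + offset, choice, offset))
    return slots
```

A query then checks each candidate slot and accepts only if the fingerprint and both metadata fields match the slot being examined.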
Thus, the main differences of our proposal to the previous Cuckoo filter [11] are that here,
1. the additional bits are not (randomized) extensions of the fingerprint, but have meaning as choice bit and offset bits,
2. two slots in windows achieve a load threshold (0.965) close to four slots in buckets (0.980), so we can reduce the number of offset bits from 2 to 1, saving space. More precisely, filling the hash tables slightly below their theoretical load thresholds, the overhead factor is reduced (for typical not too large $k$) from $1.05 \cdot (1 + 3/k)$ in [11] to $1.06 \cdot (1 + 2/k)$, e.g. from 1.365 to 1.272 for $k = 10$ (an FPR of 1/1024).
# 4 Implementation Details
We here discuss our implementation using optimized just-in-time compiled Python, how we distinguish empty from full slots in the hash table and how we execute bit-parallel queries over several slots. Further aspects, the parallelization over independent subfilters and the concretely used hash functions, are discussed in Appendix A and B, respectively.
# 4.1 Just-in-time compilation
We have implemented the improved Cuckoo filters for integer keys in just-in-time compiled Python using the numba package [18] with typed numpy arrays. This offers several benefits, such as highly optimized machine code, the use of LLVM intrinsics, and the option to choose parameters during runtime, but before compilation. For example, this allows us to provide all parameters of the hash functions as compile-time constants for additional optimizations, achieving speeds comparable to compiled C or C++ code, sometimes even faster, due to the increased possibilities for optimization. Code is available at https: //gitlab.com/rahmannlab/cuckoo-filters.
# 4.2 Distinguishing full slots from empty slots
We have so far not discussed how we can distinguish an empty slot from a full slot. The hash function $f _ { 0 } : \mathcal { U } \to [ 2 ^ { k } ]$ maps a key $x$ to its $k$ -bit fingerprint $f _ { 0 } ( x )$ . We use one of the possible fingerprint values (0) to indicate an empty slot. Hence, we reduce the set of valid fingerprints and instead use a hash function $f _ { 0 } ^ { \prime } : \mathcal { U } \to [ 2 ^ { k } - 1 ]$ . The fingerprint is given by $f _ { 0 } ( x ) : = f _ { 0 } ^ { \prime } ( x ) + 1$ . The loss of one fingerprint value leads to a higher FPR of $1 / ( 2 ^ { k } - 1 )$ , the probability that two random fingerprints collide. Although this effect is measurable for small $k$ , it can be considered as negligible for typically used larger values of $k$ ; see also Figure 4 in Section 5.2. The same compromise was made for the original Cuckoo filter [11].
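A minimal sketch of the reserved-zero scheme, with a hypothetical hash function `h` standing in for the filter's actual fingerprint hash:

```python
def fingerprint(key, k, h):
    """k-bit fingerprint with value 0 reserved to mark empty slots:
    hash into [2^k - 1] = {0, ..., 2^k - 2}, then shift by one so the
    returned fingerprint is always in {1, ..., 2^k - 1}."""
    return h(key) % (2 ** k - 1) + 1
```

Since only $2^k - 1$ fingerprint values remain, the per-comparison collision probability rises slightly from $2^{-k}$ to $1/(2^k - 1)$, as described above.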
# 4.3 Bit-parallel engineering
We adapt the bit-parallel optimizations from Fan et al. [11] to windowed Cuckoo filters for small enough $k$, where several slots fit into a single 64-bit integer. We discuss them by example using (2,4) windowed Cuckoo filters with $k = 5$, which have $q := k + 3 = 8$-bit slots. For other configurations, similar ideas are implemented with appropriate modifications. For larger $k$, we examine each slot separately at the cost of throughput.
We load the contents of a window into a 64-bit integer register that we (conceptually) partition into four adjacent slots (and some unused bits). To check whether there is an empty slot (all zeros), we use two pre-computed masks, $\mathtt{lo}$ and $\mathtt{hi} = \mathtt{lo} \ll (\mathtt{q} - 1)$, indicating the least significant and most significant bits of each slot, respectively. The bit pattern $\mathtt{e} := (\mathtt{x} - \mathtt{lo}) \mathbin{\&} {\sim}\mathtt{x} \mathbin{\&} \mathtt{hi}$ has a 1-bit in exactly those positions of $\mathtt{hi}$ whose slots are empty. If $\mathtt{e}$ is non-zero, there exists an empty slot, and its index (from the right) can be computed by dividing the number of trailing zeros of $\mathtt{e}$ (obtained by a cttz CPU instruction) by $\mathtt{q}$. For example, if $\mathtt{e}$ is non-zero and has 7 trailing zeros, then slot 0 of $\mathtt{x}$ is empty. Comparing all 4 slots at the same time in this bit-parallel manner saves the overhead of a loop or separate comparisons.
Similarly, to search for a fingerprint in any of the four slots, we create a single bit mask that contains the valid bit pattern for each slot. For $(2,4)$ windowed Cuckoo filters, we use one choice bit and two window offset bits. Assume that the 5-bit fingerprint is 11100, that we are examining window choice 1, and that the offset bits increase with decreasing slot number. Then the valid slot bit patterns can be encoded in a mask $\mathtt{y}$, and we can check whether the bit patterns agree in one of the slots (and in which one) by taking the bit-wise XOR $\mathtt{x} \oplus \mathtt{y}$ and checking for a zero slot as above: a zero slot in the XOR indicates a match in that slot.
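Both tricks can be sketched in pure Python for $q = 8$-bit slots. The masks mirror the description above; the slot patterns passed to the search stand in for the fingerprint-plus-metadata encodings, and the sketch assumes the bits above the four slots are zero.

```python
Q = 8                     # bits per slot (k = 5 fingerprint + 3 metadata bits)
SLOTS = 4                 # slots examined per word (upper bits assumed zero)
M64 = (1 << 64) - 1
LO = sum(1 << (Q * i) for i in range(SLOTS))   # least significant bit per slot
HI = (LO << (Q - 1)) & M64                     # most significant bit per slot

def empty_slot(x: int) -> int:
    """Index of an all-zero slot in x, or -1 (bit-parallel over all slots)."""
    e = ((x - LO) & ~x & M64 & HI)
    if e == 0:
        return -1
    # trailing zeros of e (cttz analogue), divided by the slot width
    return ((e & -e).bit_length() - 1) // Q

def find_pattern(x: int, patterns: list) -> int:
    """Slot whose content equals its expected non-zero q-bit pattern, or -1."""
    y = 0
    for i, p in enumerate(patterns):           # build the comparison mask y
        y |= p << (Q * i)
    return empty_slot(x ^ y)                   # zero slot in x ^ y == match
```

The `(x - LO) & ~x & HI` expression is the classic "find a zero byte" trick generalized to $q$-bit slots, so one comparison covers all four slots at once.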
# 5 Evaluation
We start by comparing bucketed with windowed Cuckoo filters. We first evaluate the time-memory trade-off for increasing loads (Section 5.1). In Section 5.2, we evaluate the actual FPRs, loads and overhead factors for different desired FPRs at target loads of $0.98 \cdot T$, where $T$ is the theoretical load threshold for a given configuration (Table 2). More technical aspects, such as achievable loads depending on the maximum random walk length during insertion or on the number of inserted keys, are given in Appendix C. Performance benchmarks of our implementation are shown in Section 5.3.
Figure 3 Time (left) and memory (right) requirements for inserting a billion ($10^9$) keys into different Cuckoo filter types (distinguished by color) at different loads (x-axis) for a fixed FPR of $2^{-10}$ and a fixed maximum random walk length of 10 000. Dashed vertical lines correspond to load thresholds. Memory use for windows and buckets coincides, i.e., below their respective thresholds, the yellow and green lines overlap, as do the blue and orange lines.
Then, we compare the throughput of our Cuckoo filter implementations with two existing implementations of state-of-the-art filters, the Vector Quotient filter and the Prefix filter (Section 5.4). We omit static filters in order to focus on filters for applications where the whole key set is not known in advance but a good estimate of its size is available, as is typical in genome research applications in bioinformatics. In addition, we exclude other filters, such as Bloom filters, Morton filters or Quotient filters, that showed worse results than Prefix and Vector Quotient filters in previous analyses [10, 28].
Vector Quotient filter (VQF) and Prefix filter. We evaluate the original Vector Quotient filter implementation from [28] (downloaded from https://github.com/splatlab/vqf, written in C++) and the original Prefix filter implementation from [10] (from https://github.com/TomerEven/Prefix-Filter, written in C++) using a Cuckoo filter or VQF as a spare.
Experimental setup. Evaluations were run on a PC workstation with an AMD Ryzen 9 9950X CPU with 16 cores and hyperthreading and 64 GB of DDR5 memory (6000 MHz, CL40). All benchmarks were performed on random unsigned 64-bit integer keys, generated with numpy. Reported times are wall times, including just-in-time compilation and excluding data load time, except for multi-threaded insertion, where reader and inserter threads load and insert keys in parallel (see Appendix A). We report averages over five runs.
For insertions, we perform lookup-and-insert operations, i.e., we only insert a key if it is not already classified as present in the filter, except when comparing the throughput between filters in Section 5.4.
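The lookup-and-insert policy can be sketched as follows; `ToyFilter` is a deliberately simplified set-backed stand-in for illustration, not the paper's filter implementation.

```python
class ToyFilter:
    """Set-backed stand-in for a fingerprint filter (illustration only)."""
    def __init__(self, k: int = 10):
        self._k = k
        self._fingerprints = set()

    def _fingerprint(self, key: int) -> int:
        # reserve value 0 for "empty", as in Section 4.2
        return hash(key) % (2**self._k - 1) + 1

    def lookup(self, key: int) -> bool:
        return self._fingerprint(key) in self._fingerprints

    def insert(self, key: int) -> None:
        self._fingerprints.add(self._fingerprint(key))

def lookup_and_insert(f: ToyFilter, key: int) -> bool:
    # insert only if the filter does not already (possibly falsely)
    # report the key as present
    if f.lookup(key):
        return False
    f.insert(key)
    return True
```

Skipping keys that are already reported present avoids storing duplicate fingerprints, which is exactly why the empirical load drops for small $k$ in Section 5.2.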
# 5.1 Time-memory trade-off
Figure 3 shows the time and memory requirements for inserting a billion keys with a fixed FPR of $2^{-10}$ at different final load factors. Higher loads result in higher insertion times, with steep increases as the load threshold is approached. At a low load of 0.8, insertion times are comparable. At load 0.9, (2,2) buckets are already infeasible, and the time for (2,2) windows has increased from 85 to 140 seconds, while the time for both (2,4) configurations has only slightly increased, from 65 to at most 94 seconds. Windowed Cuckoo filters have higher load thresholds than bucketed Cuckoo filters with the same number of slots per bucket/window. Therefore, at the same load, the windowed $(d,l)$ Cuckoo filter is faster than the bucketed version.
Figure 4 Comparison of actual properties (memory overhead, actual load, empirical FPR) of Cuckoo filter types (distinguished by color) designed for FPRs of $2^{-k}$ for varying $k$ (x-axis), using a fixed maximum random walk length of 10 000, with $n = 2 \times 10^9$ keys and 5 subfilters for parallelization. Insertions are performed only if the filter does not already report a key as present. Left: Actual memory overhead factor $C$ for a Cuckoo filter with $Cnk$ bits for a target FPR of $2^{-k}$; lower is better. Middle: Empirical loads. Right: Empirical FPRs, relative to the target FPR of $2^{-k}$, as log ratios. Points below the horizontal red line at 0.00 have an actual FPR that is better than the target FPR.
The memory requirements depend on the window size or bucket size and the load, but there is no difference between windowed and bucketed versions. At the same load, Cuckoo filters with a window size or bucket size of 2 are smaller compared to Cuckoo filters with a size of 4 because one less bit per slot is needed. Windowed Cuckoo filters achieve higher loads and faster insertion times than their bucketed counterparts at fixed load, so they are an overall improvement.
# 5.2 Actual overhead factors, loads and FPRs
According to the observations of Appendix C, we choose a filter size that ensures a load of at most 98% of the theoretical load threshold and a maximum walk length of 10 000 steps. Using $n = 2 \cdot 1 0 ^ { 9 }$ random 64-bit integer keys, we now compare the resulting actual memory overhead factors, empirical FPRs and observed load factors for different values of $k$ (Figure 4). The actual memory overhead factor is obtained by dividing the actual number of used bits by $n k$ . The actual load is the number of occupied slots, divided by the number of total slots. The empirical FPR is obtained by querying $n ^ { \prime } = 2 \cdot 1 0 ^ { 9 }$ random keys that were not previously inserted into the filter, computed as the number of times the lookup erroneously returned True, divided by $n ^ { \prime }$ .
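The three reported quantities can be written down directly; the helper names and the toy lookup below are illustrative, not the paper's API.

```python
def overhead_factor(used_bits: int, n: int, k: int) -> float:
    # actual number of used bits divided by n*k
    return used_bits / (n * k)

def load(occupied_slots: int, total_slots: int) -> float:
    # occupied slots divided by total slots
    return occupied_slots / total_slots

def empirical_fpr(lookup, fresh_keys) -> float:
    # fraction of never-inserted keys for which lookup erroneously
    # returns True; `lookup` is any callable key -> bool
    fresh_keys = list(fresh_keys)
    false_pos = sum(1 for key in fresh_keys if lookup(key))
    return false_pos / len(fresh_keys)
```

For instance, a filter using $2.73 \times 10^6$ bits for $2 \times 10^5$ keys at $k = 10$ has an overhead factor of exactly 1.365, the value quoted in the abstract for standard Cuckoo filters.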
Overhead factors. For all Cuckoo filter variants, the overhead factor decreases for increasing $k$ (i.e., for lower FPRs; see Table 1 and Figure 4 left). Windowed Cuckoo filters with window size 2 have the lowest overhead factors, since a window size of 2 saves one bit per slot compared to buckets or windows of size 4, and they achieve high load thresholds. Although buckets of size 2 use the same number of bits per slot, their load threshold is much lower, and they therefore have higher overhead factors compared to windows of size 2. For small $k$ , buckets of size 2 have a lower overhead factor compared to windows of size 4, but from $k \geq 8$ , the overhead factor for windows of size 4 is lower, since the higher load compensates for the additional bit per slot. Cuckoo filters with buckets of size 4 have higher overhead factors compared to windows of size 4 due to the load threshold.
Figure 5 Performance benchmarks for (A) insertions, (B) successful and (C) unsuccessful lookups of $2 \cdot 1 0 ^ { 9 }$ integer keys, with a maximum random walk length of 10 000, averaged over five runs. Top: Throughput in million keys per second for different values of $k$ using five subfilters (inserter threads) or five query threads; higher throughput is better. Bottom: Speedup factor for varying number of threads, for a fixed FPR of $2 ^ { - 1 0 }$ .
Empirical loads. As shown in Figure 4 (middle), for $k \geq 8$, the empirical load is independent of $k$ and corresponds to the targeted load of 98% of the load threshold. For smaller $k$, the observed load is actually lower. Since small fingerprints have a high collision probability of $1/(2^k - 1)$, several keys are reported as already present during the performed lookup-and-insert operations and are not inserted again, causing an overall lower load.
False positive rates. We target an FPR of $2^{-k}$ by using $k+2$ or $k+3$ bits per slot, for a window or bucket size of 2 or 4, respectively. In practice, several other effects increase or decrease the actual FPR. Figure 4 (right) shows log-ratios between the empirical FPR and $2^{-k}$. For $k \geq 5$, the observed FPR is lower than $2^{-k}$, because the filter is not full: non-present keys have a small chance of hitting an empty slot instead of a stored fingerprint, decreasing the FPR. Since the load decreases for small $k$ (see above), the observed FPR should also decrease for small $k$. However, for $k \leq 4$, the window-based filters show a comparably high FPR. Here we see the effect of sacrificing one fingerprint value to indicate empty slots; e.g., for $k = 3$, we have not 8 but only 7 usable fingerprints. Since for the windowed versions the fingerprint size is only $k$ bits, this effect is severe. For the bucketed versions, we have 1 or 2 additional fingerprint bits (for bucket sizes 2 or 4, respectively) and no window offset bits, so the effect is less severe. While the interplay of these different effects on the FPR for small $k$ is interesting, it is mostly irrelevant in practice, as Cuckoo filters are most useful for $k \geq 9$.
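The penalty from reserving one fingerprint value can be quantified directly; the snippet below just evaluates the formulas from the text.

```python
import math

def collision_fpr(fingerprint_bits: int) -> float:
    # one of the 2^b fingerprint values is reserved for "empty",
    # leaving 2^b - 1 usable values, so two random fingerprints
    # collide with probability 1 / (2^b - 1)
    return 1.0 / (2**fingerprint_bits - 1)

# windowed filter, k = 3: the fingerprint has only k bits, so the
# per-comparison collision probability is 1/7 instead of the target 1/8
penalty_bits = math.log2(collision_fpr(3) / 2.0**-3)
```

For $k = 3$ the excess is about 0.19 in log2 units, while for $k = 10$ the collision probability $1/1023$ is already negligibly close to $2^{-10}$, matching the claim that the effect only matters for small $k$.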
Table 3 Throughput in million keys per second (single-threaded) and memory comparison of bucketed (b) and windowed (w) Cuckoo, Prefix and Vector Quotient filters. All filters are configured to have $2^{30}$ slots, and the load is selected such that all insertions succeed with probability $\approx 1$ ($T \cdot 0.98$ for Cuckoo, 1.0 for Prefix and 0.92 for Vector Quotient filters).
# 5.3 Performance Benchmarks
We evaluate insertion throughput and query throughput separately for successful and unsuccessful queries for different $k$ , with a load factor of 98% of the load threshold. Figure 5 (top) shows the throughput in million keys per second (wall time) using five inserter and query threads and different values of $k$ . Insertion throughput is lower compared to query throughput. This is expected since the random walk may cause several cache misses. In contrast, queries incur at most 2 cache misses. The (2,4) windowed Cuckoo filters have the highest insertion throughput, and (2,2) windowed Cuckoo filters have the lowest insertion throughput, which is in concordance with the results from Section 5.1. The throughput of successful and unsuccessful queries is similarly high (Figure 5B and 5C).
For $l = 4$, the whole bucket or window fits into one 64-bit integer for $k \leq 13$. In this case, the optimizations described in Section 4.3 are applied. For $l = 4$ and $k \geq 14$, we check each slot in the bucket or window separately; hence, query throughput drops. For $l = 2$, both windows (or buckets) fit into one 64-bit integer for $k \leq 14$, and throughput drops for $k \geq 15$. The increased throughput at $k \in \{5, 13\}$ for $l = 4$ and at $k \in \{6, 14\}$ for $l = 2$ results from optimized memory access when the number of bits in a slot is a multiple of 8. Small variations for different values of $k$ may be caused by different compiler optimizations during just-in-time compilation that can be applied to individual fixed values of $k$.
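The cutoffs $k \leq 13$ and $k \leq 14$ follow directly from fitting the slots into one 64-bit word; a minimal check (the helper name is ours):

```python
def max_bitparallel_k(slots_per_word: int, metadata_bits: int,
                      word_bits: int = 64) -> int:
    # largest fingerprint size k such that slots_per_word slots of
    # (k + metadata_bits) bits each fit into one machine word
    return word_bits // slots_per_word - metadata_bits
```

For $l = 4$, four $(k+3)$-bit slots must fit, giving $k \leq 13$; for $l = 2$, two windows of two $(k+2)$-bit slots must fit, giving $k \leq 14$.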
The speedup factor from parallelization is almost linear, both for insertions and for queries for up to 5 threads (Figure 5 bottom, see Appendix A for details). For more threads, the distributing main thread starts to become the bottleneck. The speedup for insertions is slightly lower than for queries, probably due to higher memory bandwidth utilization.
# 5.4 Comparison with Prefix Filters and Vector Quotient Filters
Before comparing the throughput of our Cuckoo filter implementation with other state-of-the-art filters, we note several constraints that complicate a fair comparison.
1. The Vector Quotient filter (VQF) only works if the number of slots is a power of 2. Hence, we set the number of slots to $2 ^ { 3 0 }$ . In practice, the space overhead could thus be considerably larger for the VQF. In contrast, our Cuckoo filter implementation and the Prefix filter are flexible concerning the number of slots.
2. Both the VQF and the Prefix filter only support an FPR of $\approx 2^{-8}$ (and $2^{-16}$ for the VQF), and their performance is highly optimized for this special case. A comparable Cuckoo filter with 8 bits per slot only achieves an FPR of $2^{-5}$. In contrast, we support all FPRs with different optimizations thanks to just-in-time compilation, at the cost of being less optimized for one special case.
3. The Prefix filter implementation only works on CPUs that support AVX512 instructions; there is no fallback implementation.
Throughput. We evaluated the insertion and lookup throughput ($50\%$ contained and $50\%$ non-contained keys) on random 64-bit integers (see Table 3). Prefix filters have the highest insertion and lookup throughput. Cuckoo filters have the lowest insertion throughput due to their many cache misses, compared to $\approx 1$ cache miss for Prefix filters and 2 for Vector Quotient filters. For lookups, Vector Quotient filters and Cuckoo filters have similar throughput due to the same number of cache misses. For $k = 8$, Prefix filters have the highest throughput, but in the optimized cases (i.e., Cuckoo filters with 16 bits per slot), bucketed Cuckoo filters have a higher lookup throughput than Prefix filters and VQFs.
Space overhead. The space overhead depends on the FPR, and a valid comparison is thus only possible at the same FPR. For a given FPR, the $(2,2)$ windowed Cuckoo filter has the smallest space overhead compared to Prefix, VQF and bucketed Cuckoo filters (see Table 3).

# Abstract

Cuckoo filters are space-efficient approximate set membership data structures with a controllable false positive rate (FPR) and zero false negatives, similar to Bloom filters. In contrast to Bloom filters, Cuckoo filters store multi-bit fingerprints of keys in a hash table using variants of Cuckoo hashing, allowing each fingerprint to be stored at a small number of possible locations. Existing Cuckoo filters use fingerprints of $(k+3)$ bits per key and an additional space overhead factor of at least $1.05$ to achieve an FPR of $2^{-k}$. For $k=10$, this amounts to $1.365\, kn$ bits to store $n$ keys, which is better than the $1.443\, kn$ bits of Bloom filters. The $+3$ in the fingerprint size is required to balance out the multiplied FPR caused by looking for the fingerprint at several locations. In the original Cuckoo filter, the number of hash table buckets is restricted to a power of 2, which may lead to much larger space overheads, up to $2.1\, (1+3/k)\, kn$ bits.

We present two improvements of Cuckoo filters. First, we remove the restriction that the number of buckets must be a power of 2 by using a different placement strategy. Second, we reduce the space overhead factor of Cuckoo filters to $1.06 \, (1+2/k)$ by using overlapping windows instead of disjoint buckets to maintain the load threshold of the hash table, while reducing the number of alternative slots where any fingerprint may be found.

A detailed evaluation demonstrates that the alternative memory layout based on overlapping windows decreases the size of Cuckoo filters not only in theory, but also in practice. A comparison with other state-of-the-art filter types, Prefix filters and Vector Quotient filters (VQFs), shows that the reduced space overhead makes windowed Cuckoo filters the smallest filters supporting online insertions, with similarly fast queries, but longer insertion times.
# 1. Introduction
In recent years, topics such as machine learning and quantum computing have made the importance of computation, in a broad sense, widely recognized in society. Naturally, these trends have also significantly impacted the field of condensed matter science. Innovative computational methods in condensed matter science are invariably based on a deep understanding of physical systems; the essential relationship between computation and physics, long obvious to researchers, has become more explicitly acknowledged. This paper introduces the efforts of the Institute for Solid State Physics (ISSP) at the University of Tokyo in computational materials science, focusing on a project that supports the computational materials science community through the development and improvement of software.
One of the key initiatives supporting the computational materials science community at ISSP is the nationwide joint use of supercomputer hardware [1]. At the same time, with the advancement of computer architectures, the human cost of developing original code has increased. The Project for Advancement of Software Usability in Materials Science (PASUMS) [2] was launched in 2015 to address this issue; it was a natural step, as many ISSP supercomputer users run codes developed independently. In addition, highly efficient computational algorithms and the programs that implement them reflect the characteristics of the physical systems under study, and making them publicly accessible is an efficient way of disseminating the essential ideas of condensed matter theory to the world. In this paper, we provide an overview of the PASUMS project and introduce the software developed through it, along with a few applications.
# 2. Overview of PASUMS
PASUMS is a program conducted by ISSP to develop and enhance software that is important in condensed matter physics and materials science and is expected to be used on the ISSP supercomputer systems. It is part of the nationwide joint-use program of the supercomputer, which provides computational resources to domestic researchers, supports software development, and deals with the increasing complexity of modern large-scale parallel computer systems. Proposals for software to be developed are solicited annually. The steering committee of the joint-use program examines the proposals, and two or three are accepted every fiscal year. In the PASUMS program, we develop and enhance software functionality, improve user interfaces, and prepare documentation and tutorials. We also support disseminating the software, installing it on the ISSP supercomputer systems, holding hands-on lectures, building dedicated websites, and writing software papers. The program aims to contribute to the advancement of the field by sharing the products as community codes. The deliverables are distributed as open-source software, and users can use them on their own sites and extend them to solve their target problems. We believe it essential that derivative works remain community codes; for this purpose, we adopt copyleft licenses for the deliverable software. The software packages developed in the PASUMS program are, in principle, distributed under the GNU General Public License (GPL) [3], except for library packages, for which the Lesser GPL [4] or the Mozilla Public License (MPL) [5] is adopted.
By the end of fiscal year 2024, 21 projects had been adopted. Table 1 lists the software that PASUMS has handled so far. Most of the applications are intended for computations related to specific types of physical systems: four related to first-principles calculations, six to quantum lattice model solvers, and five to other areas such as machine learning. In the next section, we highlight some of these applications. There is another category of applications, not listed in Table 1, intended to assist in the usage of other applications. For example, we enhanced the software package StdFace [6], which generates model definition files for the quantum lattice model solvers $\mathcal { H } \Phi$ [7], mVMC [8], and H-wave [9] from a simple description, so that it accepts the output of the software package RESPACK [10], which derives effective models. This enables a seamless process from first-principles calculations through deriving low-energy effective Hamiltonians to analyzing the resulting effective models. Yet another example of an assisting application is the software package moller [11], which generates job scripts for batch job schedulers that integrate large-scale computational resources to perform exhaustive calculations. It allows for the swift construction of databases for highly accurate models to estimate physical quantities using data science techniques. We believe these efforts should contribute to accelerating the discovery of new functional materials through materials informatics.
# 3. Developed/Enhanced Software
# 3.1. First-Principles Calculation Related
First-principles calculations, which simulate molecules, solids, surfaces, interfaces, and nanostructures, have become a valuable research tool for condensed matter theorists and experimental researchers. These calculations serve as standalone analysis tools and form the basis for combined analyses with classical molecular dynamics, which handles larger scales, and quantum lattice models, which incorporate high-precision correlation effects (discussed in the next section). As a result, they account for a significant portion of the use of the ISSP supercomputer. Programs for performing first-principles calculations, such as VASP [12], Quantum ESPRESSO [13–15], and OpenMX [16], are packaged and maintained to be user-friendly for researchers. Below, we describe the additions and improvements made to first-principles calculation packages through PASUMS.
# 3.1.1. OpenMX (2015)
OpenMX (Open source package for Material eXplorer) is a first-principles calculation package using numerical localized basis sets [17–19]. It supports standard functions such as band calculations, structural optimization, and Born-Oppenheimer molecular dynamics based on Density Functional Theory (DFT), in addition to large-scale calculations with computational costs proportional to the number of atoms (as opposed to the cubic scaling of typical DFT) [20]. It also supports electrical conductance calculations for nanoscale junctions using the non-equilibrium Green’s function method [21]. An eigenchannel analysis [22], which aids in physically interpreting the conductance and current obtained in these simulations, is also available. Eigenchannels are superpositions of wavefunctions in the nanoscale junction, obtained by diagonalizing the transmission matrix; each channel carries an eigenconductance. One can investigate the paths electrons take by examining the real-space distribution of the channels with large eigenconductance (Figure 1).
# 3.1.2. ESM-RISM (2021)
The ESM-RISM software module [23] is an extension implemented within Quantum ESPRESSO [13], a widely used open-source software package for first-principles electronic structure calculations. It is designed to simulate systems exhibiting broken periodicity in one dimension, such as solid-liquid interfaces encountered in battery electrodes and dielectric surfaces. Traditional approaches typically use slab models with periodic vacuum regions, which can lead to non-physical interactions due to periodic boundary conditions. To overcome this issue, the Effective Screening Medium (ESM) method [24] was proposed, where the Kohn-Sham equation for electrons is solved under periodic boundary conditions, while the Poisson equation for the electrostatic field is solved under open boundary conditions, representing semi-infinite vacuum or perfect conductor conditions. The ESM-RISM method further enhances the ESM approach by incorporating the Reference Interaction Site Model (RISM), a classical liquid theory, enabling efficient and accurate simulations of electrolyte distributions and electric double layers at electrode-electrolyte interfaces under bias voltage conditions (Figure 3(a)). This capability is particularly beneficial for modeling electrochemical systems such as fuel cell electrodes [25].
Under the PASUMS, the ESM-RISM module was significantly improved by introducing flexible spatial meshing schemes. Initially, the ESM-RISM method employed a common spatial mesh for electronic (DFT) and classical (RISM) equations, suitable primarily for thin electric double layers. PASUMS introduced independent spatial discretizations for the DFT and RISM equations using Fourier interpolation methods. This allowed for distinct mesh densities and cell sizes tailored to each equation (Figure 3(b)), significantly broadening the method’s application scope. Comprehensive manuals and tutorials were also provided to promote the practical adoption and effective use of this improved ESM-RISM implementation.
# 3.1.3. RESPACK (2018)
While first-principles calculations address various problems, systems with strong electron correlations, such as high-temperature superconductors made of copper oxides, require theoretical frameworks beyond standard first-principles methods. One approach is to construct lattice models, such as the Hubbard model, from first-principles calculation results (downfolding, see Figure 4), and apply high-precision methods, such as exact diagonalization and quantum Monte Carlo methods, to these models. The crucial point is not losing the physical essence of the modeling process. RESPACK [26] is a program package designed to generate such models, ensuring these properties using maximally localized Wannier functions [27] and constrained random phase approximation [28]. PASUMS has developed interfaces to integrate with several high-precision lattice model solvers seamlessly.
# 3.2. Quantum Lattice Model Solver Related
Various quantum lattice models have been proposed and studied, such as the Hubbard model, where electrons hop between lattice sites while interacting with each other, and the Heisenberg model, where spins fixed at lattice sites interact. The ground and low-lying excited states of these models are the main research targets, but obtaining exact solutions is impossible in most cases, with only a few exceptions. Therefore, various numerical methods have been proposed [29], and corresponding software has been developed. Here, we briefly introduce some representative methods and the software developed or enhanced through PASUMS.
# 3.2.1. H-wave (2022)
Mean-field approximations reduce the original many-body problem to a single-body problem by treating fluctuations of physical quantities only up to first order. Although systematically controlling the degree of approximation is difficult, mean-field approximations are widely applicable and computationally inexpensive compared to other methods, making them useful for intuitive understanding, such as examining rough phase diagrams over a wide range of parameters, or for screening before precise calculations.
H-wave [9] is a software package that implements the unrestricted Hartree-Fock (UHF) method for quantum lattice models, as well as the random phase approximation, allowing for the calculation of susceptibilities of single-body physical quantities. It supports the UHF method both in real space and in wavenumber space; the latter is expected to yield a significant speed-up for systems with short-range order parameters. It also supports finite-temperature calculations.
The input files describing the one-body and two-body interactions are based on the Wannier90 format. This allows for a smooth connection with software packages that derive effective models from first-principles calculations, such as RESPACK, so that the resulting models can be analyzed with H-wave (Figure 5).
The following functionalities and features are suggested as future extensions: (a) calculations of quantities corresponding to dynamic susceptibility measured in experiments, (b) evaluation of the instability of superconducting transition by solving the linear Eliashberg equation considering charge and spin fluctuations as pairing interactions, and (c) more samples such as the calculation with spin-orbit interactions.
# 3.2.2. DCore (2017, 2024)
Dynamical mean-field theory (DMFT) [30,31] is one of the most powerful tools for investigating strongly correlated electron systems. DMFT maps a lattice model to an impurity model in an effective medium (Figure 6) and solves this model self-consistently as the mean-field approximation. The method treats the imaginary-time Green’s function and self-energy as the main physical quantities and can calculate dynamical properties of the system, such as one-particle spectral functions, which can be directly compared with experimental results from angle-resolved photoemission spectroscopy (ARPES). In addition to theoretical lattice models such as the Hubbard model, DMFT can be applied to real materials by combining it with DFT calculations. This approach, called DFT+DMFT, has been widely used to investigate the electronic structure of strongly correlated materials such as transition-metal oxides [31].
DCore (abbreviation of the “integrated DMFT software for CORrelated Electrons”) [32] was released as DMFT software in 2017 with the help of PASUMS. DCore implements the main DMFT loop and leaves the impurity solver to an external program, which enables the use of various impurity solvers. In the latest version (4.1.0), DCore supports four algorithms for the impurity solver: The continuous-time quantum Monte Carlo (CTQMC) method, the exact diagonalization, the Hubbard-I approximation, and the non-interacting limit (zero self-energy limit). For the CTQMC method, three programs are supported: TRIQS/cthyb [33,34], ALPS/CT-HYB [35,36], and ALPS/CT-HYB-SEGMENT [37,38]. For the exact diagonalization, pomerol [39] is supported. For the Hubbard-I approximation, pomerol and TRIQS/hubbard-I [40] are supported. By installing these external programs, users can choose the most suitable impurity solver for their target systems.
Since its release, several features have been added to DCore, such as additional solvers and postprocessing steps. In fiscal year 2024, PASUMS reorganized the postprocessing tools of DCore. One of these is the Bethe-Salpeter equation (BSE) solver [41], which calculates the two-body susceptibility
$$
\chi _ { i j , k l } ( \pmb q , \Omega _ { m } ) = \frac { 1 } { N } \sum _ { \pmb r } \int _ { 0 } ^ { \beta } d \tau \left\langle c _ { i } ^ { \dagger } ( \pmb r , \tau ) c _ { j } ( \pmb r , \tau ) c _ { k } ^ { \dagger } ( \mathbf 0 , 0 ) c _ { l } ( \mathbf 0 , 0 ) \right\rangle e ^ { i \Omega _ { m } \tau } e ^ { - i \pmb q \cdot \pmb r } ,
$$
where $\pmb{r}$ denotes the position of the unit cell and $i, j, k, l$ are combined spin and orbital indices within the unit cell. From a DMFT+BSE calculation, for example, the location of a critical point can be estimated as the zero of the inverse susceptibility $\chi^{-1}$.
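The extrapolation of $\chi^{-1}$ to its zero can be sketched numerically as follows. The data here are synthetic Curie-Weiss-like values, purely illustrative and not produced by DCore:

```python
import numpy as np

# Synthetic Curie-Weiss-like data: chi^{-1} ~ (T - Tc) / C with Tc = 1.5, C = 2.0
Tc_true, C = 1.5, 2.0
T = np.linspace(2.0, 4.0, 9)        # temperatures above the transition
chi_inv = (T - Tc_true) / C          # inverse susceptibility at each T

# Linear fit chi^{-1} = a*T + b; the zero crossing -b/a estimates Tc
a, b = np.polyfit(T, chi_inv, 1)
Tc_est = -b / a
print(f"estimated Tc = {Tc_est:.3f}")  # recovers Tc = 1.5 for this exact data
```

In practice $\chi^{-1}(T)$ computed at several temperatures would replace the synthetic array, and the fit window should stay close to the transition where the linear (mean-field) form holds.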
# 3.2.3. $\mathcal{H}\Phi$ (2015, 2016, 2017) and Kω (2016)
For quantitative comparisons with experimental data, numerically exact diagonalization (ED) of quantum lattice systems is one of the most reliable methods for small systems, as it involves no approximations. This approach enables detailed analysis of quantum systems and serves as a benchmark for other numerical techniques. With the increasing availability of parallel computing infrastructures featuring distributed-memory architectures and narrow bandwidths, there is a growing demand for efficient, user-friendly, and highly parallelized diagonalization software.
To address this need, $\mathcal { H } \Phi$ [42,43] was developed. $\mathcal { H } \Phi$ computes eigenstates of any given lattice fermion Hamiltonian, consisting of hopping terms and many-body interactions, defined on a finite set of lattice points. As a special case, it can also be applied to spin systems. Typical supported models include:
• Hubbard and Heisenberg models,
• Multi-band extensions of the Hubbard model,
• Models with SU(2)-symmetry-breaking exchange interactions, such as Dzyaloshinskii-Moriya and Kitaev interactions,
• Kondo lattice models that describe itinerant electrons coupled with quantum spins.
$\mathcal { H } \Phi$ enables the computation of numerous physical quantities, including:
• Internal energy at zero and finite temperatures,
• Temperature-dependent specific heat,
• Charge and spin structure factors,
• Optical spectra and other dynamical properties.
These features make $\mathcal { H } \Phi$ a versatile and powerful tool for researchers across various fields, including experimentalists who seek to validate their findings with theoretical models.
The development of $\mathcal { H } \Phi$ spans multiple phases. $\mathcal { H } \Phi$ ver. 1 was completed as part of the fiscal year 2015 project, while $\mathcal { H } \Phi$ ver. 2 was developed during the fiscal year 2016 project. $\mathcal { H } \Phi$ ver. 2 integrates seamlessly with the numerical library $\mathrm { K } \omega$ [44], which was developed in parallel under the same project. $\mathrm { K } \omega$ provides advanced numerical routines for linear algebra, significantly enhancing the computational efficiency of $\mathcal { H } \Phi$. For example, the shifted bi-conjugate gradient (sBiCG) method was implemented in $\mathcal { H } \Phi$ using $\mathrm { K } \omega$, enabling efficient computation of dynamical Green's functions and excitation spectra. Furthermore, the locally optimal block preconditioned conjugate gradient (LOBPCG) method was incorporated, allowing the simultaneous computation of multiple low-energy eigenvalues and eigenvectors in a single calculation. In the fiscal year 2017, $\mathcal { H } \Phi$ was extended to include real-time evolution capabilities. This feature allows researchers to study non-equilibrium dynamics in quantum many-body systems, a rapidly growing area of interest in quantum technologies and experiments. These advancements position $\mathcal { H } \Phi$ as a comprehensive tool for investigating both the equilibrium and non-equilibrium properties of quantum systems.
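The LOBPCG approach mentioned above can be illustrated generically. This is not $\mathcal{H}\Phi$'s implementation, only a SciPy sketch on a small open Heisenberg chain assembled from Kronecker products:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Spin-1/2 operators
sx = sp.csr_matrix([[0.0, 0.5], [0.5, 0.0]])
sy = sp.csr_matrix([[0.0, -0.5j], [0.5j, 0.0]])
sz = sp.csr_matrix([[0.5, 0.0], [0.0, -0.5]])

def heisenberg_chain(n):
    """H = sum_i S_i . S_{i+1} for an open spin-1/2 chain, as a sparse matrix."""
    dim = 2 ** n
    H = sp.csr_matrix((dim, dim), dtype=complex)
    for i in range(n - 1):
        for op in (sx, sy, sz):
            left = sp.identity(2 ** i, format="csr")
            right = sp.identity(2 ** (n - i - 2), format="csr")
            H = H + sp.kron(left, sp.kron(sp.kron(op, op), right), format="csr")
    return H.real  # imaginary parts cancel in this basis

H = heisenberg_chain(8)
# Block of 4 random start vectors: the singlet ground state plus the
# three-fold degenerate first triplet fit exactly into one block
rng = np.random.default_rng(0)
X = rng.standard_normal((H.shape[0], 4))
vals, vecs = lobpcg(H, X, largest=False, tol=1e-7, maxiter=2000)
print(np.sort(vals))  # lowest four eigenvalues obtained in a single run
```

Computing several low-lying states in one block, as here, is exactly the feature that the text attributes to the LOBPCG mode of $\mathcal{H}\Phi$.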
# 3.2.4. DSQSS (2018)
The path integral Monte Carlo (PIMC) method [45] maps the partition function $Z = \mathrm{Tr}\, e^{-\beta H}$ of a $D$-dimensional quantum system onto that of a $(D+1)$-dimensional classical system via path integrals, and samples paths ("world-lines") according to their weights. This powerful method allows us to calculate thermal expectation values of physical quantities exactly, within statistical errors, regardless of the spatial dimension and the system size, as long as the system is free from the infamous sign problem. Near a critical point, however, the convergence of PIMC with local updates of the world-lines becomes slow (critical slowing down). For discrete-space systems, several global-update (cluster-update) methods, such as the directed loop algorithm [46], have been developed to overcome critical slowing down. Figure 2 shows how the directed loop algorithm updates the world-line configuration globally. PIMC methods with cluster updates have therefore been widely used for studying critical phenomena in quantum lattice problems.
DSQSS [47] is software implementing path integral Monte Carlo methods for quantum lattice systems. Because it implements the directed loop algorithm and the parallel worm algorithm [48], DSQSS handles symmetry-broken systems, such as spins under a magnetic field, well. In 2018, PASUMS enhanced the user experience of DSQSS by improving the installation process and developing utility tools for easily generating input files. By using the input generator tools, the input files describing lattice models,
• general spin XXZ model,
• hardcore and softcore Bose-Hubbard models,

on

• a hypercubic lattice in any spatial dimension,
• a triangular lattice,
• a honeycomb lattice

are generated. DSQSS can calculate the thermal (canonical) average of several observables, such as

• energy and specific heat,
• magnetization (particle density) and susceptibility,
• spin-spin (density-density) correlation in space-time.
# 3.2.5. mVMC (2016)
The variational method based on Ritz's variational principle searches for the ground state $\Psi ( \theta ^ { * } )$ by finding the parameters $\theta ^ { * }$ that minimize the energy $E ( \theta )$ of a parameterized (many-body) trial wavefunction $\Psi ( \theta )$. The variational Monte Carlo (VMC) method evaluates the average of an arbitrary operator $\hat { A }$ for the given wave function, $\langle A \rangle _ { \theta } = \langle \Psi ( \theta ) | A | \Psi ( \theta ) \rangle \left/ \langle \Psi ( \theta ) | \Psi ( \theta ) \rangle \right.$, by the Markov chain Monte Carlo method. Note that the VMC method does not suffer from the sign problem because the sampling weight, proportional to $| \Psi ( \theta ) | ^ 2$, is non-negative. In variational methods, the accuracy and the computational cost of the approximation can be controlled by varying the parameter set and the number of parameters that construct the trial wavefunction. For example, the multi-variable variational Monte Carlo method [49] is based on a trial wavefunction combining a one-body wavefunction written as a Pfaffian-Slater determinant with a few additional factors, such as the Jastrow and Gutzwiller factors, to better represent electronic correlations.
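As a generic illustration of the VMC idea (not mVMC's algorithm or wavefunction), the following Metropolis sketch treats a 1D harmonic oscillator with a Gaussian trial state $\psi_\alpha(x) = e^{-\alpha x^2}$, whose local energy is $E_L(x) = \alpha + x^2(1/2 - 2\alpha^2)$:

```python
import numpy as np

def vmc_energy(alpha, n_samples=20000, step=1.0, seed=0):
    """Metropolis VMC for H = -(1/2) d^2/dx^2 + (1/2) x^2 with trial
    psi = exp(-alpha x^2): sample |psi|^2, average the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for i in range(n_samples):
        x_new = x + step * rng.uniform(-1, 1)
        # accept with Metropolis probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i >= 1000:  # discard burn-in
            energies.append(alpha + x**2 * (0.5 - 2 * alpha**2))
    return float(np.mean(energies))

print(vmc_energy(0.5))  # exact trial state: energy 0.5, zero variance
print(vmc_energy(0.4))  # suboptimal alpha: variational energy above 0.5
```

At $\alpha = 0.5$ the trial state is the exact ground state, so the local energy is constant and the statistical variance vanishes; any other $\alpha$ yields a higher energy, which is the variational principle the text describes.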
mVMC [8] is software implementing the multi-variable variational Monte Carlo method for strongly correlated electron systems. mVMC takes input files similar to those of $\mathcal { H } \Phi$ and supports many of the lattice models supported by $\mathcal { H } \Phi$:
• Hubbard and Heisenberg models,
• Multi-band extensions of the Hubbard model,
• Kondo lattice models.
Once the ground state is obtained, mVMC calculates the energy and the one-body and two-body Green’s functions.
In 2016, PASUMS supported mVMC in improving its user interface, making it easy to use for a wide range of researchers, from specialists (theoretical researchers) to learners (undergraduate students). mVMC yields accurate ground states of systems larger than those accessible to the ED method, without a sign problem, and hence it is one of the strongest tools for investigating strongly correlated systems and frustrated spin systems.
# 3.2.6. TeNeS (2019, 2023)
Tensor networks (TNs) are another representation of variational wavefunctions [50, 51]. Once a basis of the Hilbert space is given, for example, direct product states of up and down spins, the wave function is expanded as
$$
\left| \Psi \right\rangle = \sum _ { \left\{ \sigma _ { i } \right\} = \uparrow , \downarrow } C _ { \sigma _ { 1 } \sigma _ { 2 } \ldots \sigma _ { N } } \left| \sigma _ { 1 } \sigma _ { 2 } \ldots \sigma _ { N } \right\rangle .
$$
The coefficient $C$ is an exponentially large tensor with $N$ indices and $d ^ { N }$ elements (where $d = 2$ is the number of local degrees of freedom on each site), and the TN framework represents it as a network of many small tensors. For example, a wave function of a spin chain with $N$ sites under the periodic boundary condition is well represented as a product of $N$ small tensors:
$$
C _ { \sigma _ { 1 } \sigma _ { 2 } \ldots \sigma _ { N } } \simeq \sum _ { \{ \alpha _ { i } \} } A _ { \alpha _ { 1 } \alpha _ { 2 } } ^ { ( 1 ) \sigma _ { 1 } } A _ { \alpha _ { 2 } \alpha _ { 3 } } ^ { ( 2 ) \sigma _ { 2 } } \ldots A _ { \alpha _ { N } \alpha _ { 1 } } ^ { ( N ) \sigma _ { N } } .
$$
Fixing the values of $\{ \sigma \}$ reduces the small tensors to matrices, and their product gives the corresponding element of $C$; hence this is called a matrix product state (MPS) (Figure 7). Each tensor has three indices: one (the physical index) represents the local degree of freedom, and two (the virtual indices) connect the tensor to its neighbors. The dimension $D$ of the virtual indices is called the bond dimension and controls the accuracy of the TN state. The TN thus exponentially reduces the number of elements, from $d ^ { N }$ to $d D ^ { 2 } N$. Additionally, by imposing translational symmetry on the state and introducing a sub-lattice order, the number of independent tensors can be reduced further. For example, when all the tensors are common, the coefficient is represented as
$$
C _ { \sigma _ { 1 } \sigma _ { 2 } \ldots \sigma _ { N } } \simeq \sum _ { \{ \alpha _ { i } \} } A _ { \alpha _ { 1 } \alpha _ { 2 } } ^ { \sigma _ { 1 } } A _ { \alpha _ { 2 } \alpha _ { 3 } } ^ { \sigma _ { 2 } } \ldots A _ { \alpha _ { N } \alpha _ { 1 } } ^ { \sigma _ { N } } .
$$
As for chain systems, tensor networks on the square lattice can represent wave functions on two-dimensional lattices. Such a state is called a tensor product state (TPS) [52] or a projected entangled-pair state (PEPS) [53]. By repeating tensors, a wave function on an infinitely large lattice can be represented by a TPS, which is called the infinite TPS (iTPS).
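The MPS representation above can be checked numerically for a small chain. The sketch below builds a periodic MPS from random tensors and reconstructs the full coefficient tensor by brute force, which is only feasible because $N$ is tiny:

```python
import numpy as np

N, d, D = 6, 2, 3                       # sites, physical dim, bond dim
rng = np.random.default_rng(1)
A = rng.standard_normal((N, d, D, D))   # one (d, D, D) tensor per site

def mps_coefficient(spins):
    """C[s1..sN] = Tr(A1^{s1} A2^{s2} ... AN^{sN}) for the periodic MPS."""
    M = np.eye(D)
    for site, s in enumerate(spins):
        M = M @ A[site, s]
    return np.trace(M)

# Reconstruct the full coefficient tensor element by element (the point of
# the MPS is that one never needs to store this for large N)
C = np.empty((d,) * N)
for index in np.ndindex(*C.shape):
    C[index] = mps_coefficient(index)

print(C.size)   # d^N = 64 elements in the full tensor
print(A.size)   # N*d*D^2 = 108 MPS parameters; the saving grows with N
```

For this toy size the MPS actually stores more numbers than the full tensor; the exponential advantage quoted in the text ($d^N$ versus $d D^2 N$) sets in as $N$ grows.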
TeNeS [54] is software for obtaining the iTPS representing the ground state of quantum lattice models. It was released with the help of PASUMS in 2019. To optimize the tensors, TeNeS performs the imaginary-time evolution of the state, $\left| \Psi \right\rangle = \lim _ { n \to \infty } \left( \exp ( - \tau H ) \right) ^ { n } \left| \psi _ { 0 } \right\rangle$, with a small time step $\tau$ [55–58]. Tensor contraction of the iTPS, an infinitely large tensor network, is performed with the corner transfer matrix renormalization group (CTMRG) method [57,59]. While TeNeS operates on the square lattice only, it can handle other lattices, such as the triangular lattice, by regarding them as a square lattice with next-nearest-neighbor (or even further-neighbor) interactions. TeNeS, like other PASUMS software, provides utility tools that help users generate input files for widely used models and lattices:
• general spin XXZ model,
• Bose-Hubbard model,

on

• hypercubic lattice,
• triangular lattice,
• honeycomb lattice,
• kagome lattice.
In TeNeS, basic tensor operations such as singular value decomposition are implemented with mptensor [60,61], a tensor library supporting OpenMP/MPI hybrid parallelization via LAPACK and ScaLAPACK. TeNeS therefore supports parallel calculations even on massively parallel computers and can perform heavy calculations with large bond dimensions.
In 2023, PASUMS supported TeNeS in implementing additional calculation modes: real-time evolution [62] and finite-temperature calculations [63]. Real-time evolution is achieved simply by replacing $\tau$ in the time-evolution operator $\exp ( - \tau H )$ with $i t$. In the finite-temperature calculation, a tensor network represents the density matrix of the mixed state at inverse temperature $\beta$, $\rho ( \beta ) \propto e ^ { - \beta H } = e ^ { - \beta H / 2 } \rho ( 0 ) e ^ { - \beta H / 2 }$, instead of the wave function $\left| \Psi \right\rangle$. These extensions make TeNeS a more useful tool for comparison with experiments, which often probe non-equilibrium or finite-temperature states.
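The imaginary-time projection $(\exp(-\tau H))^n \left|\psi_0\right\rangle$ and the $\tau \to it$ replacement can be checked on a dense toy model. This is a two-site Heisenberg dimer with NumPy/SciPy, not a TeNeS or tensor-network calculation:

```python
import numpy as np
from scipy.linalg import expm

# Two-site Heisenberg dimer H = S1 . S2 (4x4); the ground state is the
# singlet with energy -3/4
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.diag([0.5, -0.5])
H = sum(np.kron(s, s) for s in (sx, sy, sz)).real

tau = 0.1
U = expm(-tau * H)                      # one imaginary-time step
psi = np.random.default_rng(0).standard_normal(4)
for _ in range(200):                    # (exp(-tau H))^n |psi0>, renormalized
    psi = U @ psi
    psi /= np.linalg.norm(psi)
E0 = psi @ H @ psi
print(E0)                               # converges to the singlet energy -0.75

# Real-time evolution: replace tau by i*t; the norm is now conserved
Ut = expm(-1j * tau * H)
phi = Ut @ psi.astype(complex)
print(np.linalg.norm(phi))              # stays 1.0 under unitary evolution
```

Repeated application of $e^{-\tau H}$ filters out all excited components, while the real-time operator $e^{-itH}$ is unitary, which is why the two modes share the same machinery in the text's description.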
# 3.3. Machine Learning Related
In addition to the applications developed through the advancement project mentioned above, first-principles calculation and molecular dynamics packages widely used in the field of condensed matter, such as Quantum ESPRESSO, VASP, and LAMMPS, are pre-installed on the ISSP supercomputer. The ISSP supercomputer’s job scheduler is equipped with features for bulk jobs and array jobs, allowing for the execution of multiple hybrid parallel applications and providing an environment for various exhaustive calculations, such as parameter scans. In PASUMS, the application software that utilizes such an environment is developed based on machine learning, optimization problems, and data-driven approaches.
# 3.3.1. abICS (2019, 2022)
The configurational disorder in various functional materials is a crucial factor in determining material properties. The ability to simulate such disorder is important for the prediction of properties, the design of materials, and comparison with experiments. Applying first-principles calculations directly to its evaluation generally results in enormous computational costs, so importance sampling methods are adopted. When the disorder is not completely random but some short-range order exists, it is necessary first to perform thermodynamic sampling over configurations to clarify the structural order. Traditionally, lightweight effective models have been derived by cluster expansion methods fitted to the results of first-principles calculations, and Monte Carlo sampling is then applied to these models. However, for complex ionic crystals with many components and multiple sublattice structures, the cluster expansion becomes combinatorially demanding and intractable.
abICS [64] is a framework for direct statistical thermodynamic sampling that combines high-throughput first-principles calculations, parallel extended ensemble algorithms, and on-lattice neural network models concurrently improved in an active-learning setting (Figure 8). It makes efficient use of massively parallel supercomputers to examine such multi-component, multi-sublattice systems without phenomenological models [65,66]. In the fiscal year 2019, the original program for thermodynamic sampling was extended, in a modular structure, to support first-principles calculation software including Quantum ESPRESSO and OpenMX in addition to VASP. For the sampling algorithms, the replica exchange Monte Carlo method and the population annealing Monte Carlo method were implemented. A new user interface was introduced for ease of use: the overall procedure is controlled by an input file in TOML format, and tools to generate input data are provided. In the fiscal year 2022, abICS was enhanced by acceleration using neural network models and active learning. The supported libraries include aenet, integrated through a file-I/O based interface as well as the aenet-LAMMPS Python module. Grand canonical sampling, which allows for changes in composition, was also implemented. abICS will be immensely useful for making efficient use of next-generation large-scale computers owing to its multi-layered parallelism. Thanks to its modular coding practice, it should be relatively easy to implement interfaces to other first-principles calculation software or to introduce new sampling algorithms.
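The replica exchange idea behind such sampling can be illustrated on a toy problem. The sketch below runs parallel Metropolis walkers at several temperatures on a double-well energy and swaps neighboring replicas; it is a generic illustration, not abICS's on-lattice sampler:

```python
import numpy as np

def replica_exchange(n_sweeps=3000, seed=0):
    """Toy replica-exchange MC on a double-well energy E(x) = (x^2 - 1)^2.
    Cold walkers escape a well via swaps with hotter replicas."""
    rng = np.random.default_rng(seed)
    betas = np.array([8.0, 4.0, 2.0, 1.0])   # inverse temperatures, cold to hot
    E = lambda x: (x * x - 1.0) ** 2
    x = np.full(len(betas), -1.0)            # all replicas start in the left well
    cold_trace = []
    for _ in range(n_sweeps):
        # local Metropolis move in each replica
        for k, beta in enumerate(betas):
            xn = x[k] + rng.uniform(-0.5, 0.5)
            if rng.uniform() < np.exp(-beta * (E(xn) - E(x[k]))):
                x[k] = xn
        # attempt a swap between a random neighboring temperature pair
        k = rng.integers(len(betas) - 1)
        delta = (betas[k] - betas[k + 1]) * (E(x[k + 1]) - E(x[k]))
        if rng.uniform() < np.exp(-delta):
            x[k], x[k + 1] = x[k + 1], x[k]
        cold_trace.append(x[0])
    return np.array(cold_trace)

trace = replica_exchange()
# the coldest replica should visit both wells (x near -1 and x near +1)
print((trace > 0.5).any() and (trace < -0.5).any())
```

A single cold walker with local moves alone would stay trapped in the starting well; the exchanges with high-temperature replicas restore ergodicity, which is the reason extended ensemble methods are used for configurational sampling.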
# 3.3.2. 2DMAT (2020, 2021, 2024)
Formulation of a reliable and efficient method for the analysis of experimental data is one of the significant issues in scientific research. In the analysis procedure, one often wants to obtain the parameters $X$ characterizing a model from the experimentally observed quantities $D _ { \mathrm { e x } }$. Generally, $X$ and $D$ are vectors, and the dimension of $D$ is greater than that of $X$; the task can typically be regarded as an inverse problem. Suppose that we can solve the direct problem of calculating the outcome of the measurement $D _ { \mathrm { c a l } } ( X )$ when the model parameter is $X$, and that we can compute a loss function, $F ( X )$, defined as some properly chosen distance between the calculated outcome $D _ { \mathrm { c a l } } ( X )$ and the experimentally observed result $D _ { \mathrm { e x } }$. The inverse problem is then to find the optimal parameter value $X ^ { * }$ that minimizes the loss function $F ( X )$.
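A minimal sketch of this formulation follows. The direct problem, data, and parameters here are synthetic and hypothetical, not taken from any 2DMAT/ODAT-SE solver:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical direct problem D_cal(X): a damped oscillation with two model
# parameters, observed at 50 points (dim D > dim X, as in the text)
t = np.linspace(0, 5, 50)
def d_cal(X):
    amplitude, rate = X
    return amplitude * np.exp(-rate * t) * np.cos(2 * np.pi * t)

X_true = np.array([1.3, 0.7])
d_ex = d_cal(X_true)                    # synthetic "experimental" data

# Loss F(X): squared distance between calculated and observed outcomes
F = lambda X: float(np.sum((d_cal(X) - d_ex) ** 2))

# Nelder-Mead is one of the search algorithms listed for 2DMAT/ODAT-SE
res = minimize(F, x0=[1.0, 0.3], method="Nelder-Mead")
print(res.x)                            # recovers approximately [1.3, 0.7]
```

Grid search, Bayesian optimization, or Monte Carlo samplers would slot into the same structure, replacing only the minimization step; that interchangeability is what the platform's modular design exploits.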
2DMAT/ODAT-SE is an exploratory inverse-problem analysis platform for experimental data. The implemented search algorithms include grid-based search, the Nelder-Mead method, Bayesian optimization using the PHYSBO library mentioned below, the replica exchange Monte Carlo method [67], and the population annealing Monte Carlo method [68]. By combining multiple methods, a global search for solutions can be attained, instead of being limited to local minima. The platform can also be applied to analyses that take account of experimental uncertainty. Because of its development history, 2DMAT/ODAT-SE comes with a relatively large collection of direct problem solvers specialized in diffraction experiments for two-dimensional material structure analyses (hence the name 2DMAT). In the latest version, however, it has been reformulated as a general framework for the optimization problems required for solving inverse problems, and renamed the Open Data Analysis Tool for Science and Engineering (ODAT-SE). In the FY2020 PASUMS project, the analysis program, mainly for TRHEPD experiments using the Nelder-Mead method and grid search, was reorganized and extended to support Bayesian optimization and the replica exchange Monte Carlo method, and released as 2DMAT version 1. In the FY2021 project, it was further extended to support more diffraction experiments, including SXRD and LEED; the population annealing Monte Carlo method, which is suitable for large-scale parallel computations, was also added as an inverse-problem solver algorithm. In the FY2024 project, it was reorganized as an open platform for data analysis by modularizing the direct problem solvers and search algorithms, and released as ODAT-SE version 3 (Figure 9). Users can extend the definitions of direct problems and search algorithms, making it a versatile platform for inverse problem analysis.
# 3.3.3. PHYSBO (2020)
PHYSBO (optimization tools for PHYSics based on Bayesian Optimization) [69] is a Python-based software tool designed for black-box optimization tasks specific to condensed matter physics, leveraging Bayesian optimization (BO). BO [70] is a machine-learning-driven optimization method particularly suited to scenarios in physics, chemistry, and materials science where the target function (e.g., material properties) is complex, expensive to evaluate, or lacks an analytical expression. For example, in materials development, discovering optimal materials through trial and error can be formulated as a black-box optimization problem, with inputs such as composition, structure, and processing conditions, and outputs representing the desired material properties.
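A minimal BO loop in the spirit described above can be sketched with scikit-learn's Gaussian process and an expected-improvement acquisition. This is an illustration of the technique, not PHYSBO's API, and the objective function is a stand-in for an expensive simulation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Black-box objective (pretend each evaluation is an expensive simulation)
f = lambda x: -(x - 0.4) ** 2 + 0.1 * np.sin(12 * x)
candidates = np.linspace(0, 1, 201)[:, None]   # discrete candidate set

rng = np.random.default_rng(0)
idx = [int(i) for i in rng.choice(len(candidates), 3, replace=False)]
y = [float(f(candidates[i, 0])) for i in idx]  # random initial evaluations

for _ in range(10):            # BO loop: fit surrogate, maximize acquisition
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
    gp.fit(candidates[idx], y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    nxt = int(np.argmax(ei))
    idx.append(nxt)
    y.append(float(f(candidates[nxt, 0])))

x_best = candidates[idx[int(np.argmax(y))], 0]
print(x_best)   # best sampled candidate, expected to lie near the maximum of f
```

The surrogate model concentrates the limited evaluation budget on promising candidates; PHYSBO additionally parallelizes the acquisition-function optimization over MPI, as described below.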
PHYSBO, initially developed as Python 3 software under a fiscal year 2020 project, extends the functionality of COMBO [71] to address specific needs in condensed matter physics effectively. Key advancements in PHYSBO include:
• Implementation of MPI-based parallelization for acquisition function optimization, enabling massive scalability on supercomputing platforms such as the ISSP supercomputer.
• Introduction of multi-objective optimization functionality, expanding its applicability to problems requiring the simultaneous optimization of multiple objectives.
• Development of a detailed user manual to facilitate adoption by the research community.
These improvements address the computational bottlenecks of BO, making PHYSBO a highly efficient and versatile tool for complex optimization tasks.
In the field of physics, BO has already been applied to several problems, including autonomous X-ray scattering experiments, inverse scattering, crystal structure prediction, and effective model estimation. The PHYSBO package builds on these successes, further accelerating such studies and enabling the exploration of even more complex physical systems by leveraging supercomputers.
# 3.4. Constructing environments
# 3.4.1. MateriApps Installer (2020)
In materials science, numerical simulation has become indispensable for theoretical research. The advancement of computational materials science relies heavily on developing efficient algorithms for solving equations that describe material properties. Over the years, many excellent applications leveraging state-of-the-art algorithms have been developed. However, the accessibility of these tools to a broader audience, including experimentalists and corporate researchers, remains a challenge.
To address this, MateriApps [72,73], a portal site for materials science simulations, was launched in 2013. MateriApps is a platform for disseminating information about computational materials science software to researchers. Despite this effort, one major obstacle for new users is installing and configuring these applications. To mitigate this challenge, MateriApps LIVE! [74,75], an environment that allows users to quickly try out computational materials science applications on their own devices, was developed. MateriApps LIVE! is distributed as a virtual machine image (OVA format) for VirtualBox and includes pre-installed applications, an operating system (Debian GNU/Linux), editors, visualization tools, and other essential environments. This setup lets users quickly establish a working computational environment, which benefits software training sessions and classroom settings. However, while MateriApps LIVE! is well suited for introductory purposes, its computational power is limited because it runs as a virtual machine. To support users interested in conducting large-scale simulations, MateriApps Installer was developed in 2013 [75,76].
As part of the FY2020 Project for PASUMS, several significant updates to MateriApps Installer were performed:
• Organized the directory structure and scripts for better usability,
• Added comprehensive documentation and tutorials,
• Upgraded the list of supported software,
• Extended support for new hardware, including the ISSP supercomputer system B (ohtaka),
• Supported new compilers, such as GCC 10 and Intel oneAPI.
MateriApps Installer includes installation scripts for a wide range of materials science applications, such as
• Simulation tools: ALPS, ALPSCore, DSQSS, Quantum ESPRESSO, $\mathcal { H } \Phi$ , Kω, LAMMPS, mVMC, OpenMX, RESPACK, and TeNeS,
• Libraries and tools: Boost, CMake, Eigen3, FFTW, GCC, Git, GSL, HDF5, LAPACK, libffi, OpenBLAS, OpenMPI, OpenSSL, Python3, ScaLAPACK, Tcl/Tk, and zlib.
MateriApps Installer enables these materials science applications to be easily installed on the ISSP supercomputers, providing users with a ready-to-use environment for large-scale simulations.
# 3.4.2. HTP-Tools (2023)
In recent years, approaches that leverage machine learning to predict physical properties and design new materials, collectively known as materials informatics, have gained significant popularity. A critical factor for achieving high accuracy in predictions and designs is the availability of large amounts of supervised data. From this perspective, databases such as Materials Project, which store crystal structures, experimental measurements, and first-principles calculation results, have been developed and are widely used. However, many machine learning applications require particular materials data and physical property information, often unavailable in existing databases. Efficient methods and environments for generating such targeted training data would significantly advance materials informatics by providing a robust research foundation and enabling rapid progress in the field.
To meet this requirement, PASUMS has supported the development of high-throughput (HTP) tools designed for exhaustive data generation from crystal structures using first-principles calculations. One tool, cif2x, generates input files compatible with first-principles calculation software such as VASP, Quantum ESPRESSO, OpenMX, and AkaiKKR. Sample files and comprehensive tutorials are provided alongside cif2x to demonstrate its integration and practical application with these software packages. Additionally, the project has introduced moller, a tool designed to automate batch job script generation, enabling large-scale computations on supercomputers. Though moller was developed independently of cif2x, it supports various computational solvers and is broadly applicable to general-purpose bulk calculations. Practical examples and tutorials illustrate its use with software such as HPhi and DSQSS, demonstrating how researchers can efficiently manage batch processing of multiple computational scenarios. Both cif2x and moller are distributed as open-source software under the GNU General Public License (GPL) version 3.0 and are pre-installed on systems such as the ISSP supercomputer.
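The kind of batch-script automation described for moller can be sketched generically. Everything below is a hypothetical illustration: the scheduler directives, the `./solver` command, and the file layout are placeholders, not moller's actual interface or output:

```python
from itertools import product
from pathlib import Path

# Hypothetical parameter scan: generate one batch script per (J, T) pair.
# The #SBATCH directives and the solver command line are placeholders.
template = """#!/bin/sh
#SBATCH -N 1
#SBATCH -t 01:00:00
./solver --coupling {J} --temperature {T} > out_J{J}_T{T}.log
"""

outdir = Path("jobs")
outdir.mkdir(exist_ok=True)
for J, T in product([0.5, 1.0], [0.1, 0.5, 1.0]):
    script = outdir / f"job_J{J}_T{T}.sh"
    script.write_text(template.format(J=J, T=T))

print(sorted(p.name for p in outdir.glob("*.sh")))  # 6 generated scripts
```

Submitting each generated script as an array or bulk job, as supported by the ISSP scheduler mentioned in Section 3.3, then covers the full parameter grid without manual editing.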
As examples of their potential applications, these tools could be utilized to develop comprehensive computational materials science databases. Such databases would significantly accelerate materials informatics research and provide valuable resources to the broader scientific community. Additionally, extending moller’s compatibility to supercomputing infrastructures beyond ISSP, such as those within the High-Performance Computing Infrastructure (HPCI), could further standardize large-scale computations. The availability of these tools thus supports researchers in generating diverse and extensive materials datasets, thereby substantially contributing to the advancement of materials informatics and computational materials science.
# 4. Summary
In this paper, we introduced the Project for Advancement of Software Usability in Materials Science (PASUMS), which ISSP carries out. In the nationwide joint use of ISSP computer systems that began with the sharing of hardware, the role of software and data has become increasingly essential, which is equally true for many other high-performance supercomputer systems worldwide. The software developed or required by materials science researchers is highly diverse, making it challenging to cover comprehensively. However, about 10 years after the start of the PASUMS project, the open-source software born from this initiative has come to cover a significant portion of computational materials science. Recently, there has been a focus on enhancing the interoperability between different software. One of the future goals will be to build a framework for materials exploration using such "integrated software" and data repositories.
Additionally, various other initiatives have been launched, including a data repository project [77] initiated in collaboration between data science and materials science, a portal site called MateriApps [72] for promoting software use, MateriApps LIVE! [74,75], which packages execution environments to facilitate the easy use of software, and hands-on workshops and lectures to promote software utilization. Those interested should refer to the references.
# 5. Acknowledgement
This paper introduced many open-source software projects. Needless to say, these owe much to the contributions of the original developers, project proposers, and collaborators. In many cases, ISSP has assisted in enhancing functionality and usability through manual preparation and other support efforts. We would like to express our gratitude to these individuals.
# References
[1] https://mdcl.issp.u-tokyo.ac.jp/scc/report/result/activity-reports.
[2] https://www.pasums.issp.u-tokyo.ac.jp/en/.
[3] Free Software Foundation. GNU General Public License, version 3. https://www.gnu.org/licenses/gpl-3.0.html.en.
[4] Free Software Foundation. GNU Lesser General Public License, version 3. https://www.gnu.org/licenses/lgpl-3.0.html.en.
[5] Mozilla Foundation. Mozilla Public License, version 2.0. https://www.mozilla.org/en-US/MPL/2.0/.
[6] https://github.com/issp-center-dev/StdFace.
[7] https://www.pasums.issp.u-tokyo.ac.jp/hphi/.
[8] https://www.pasums.issp.u-tokyo.ac.jp/mvmc/.
[9] https://www.pasums.issp.u-tokyo.ac.jp/h-wave/.
[10] https://sites.google.com/view/kazuma7k6r.
[11] https://www.pasums.issp.u-tokyo.ac.jp/htp-tools/.
[12] Kresse G, Furthmüller J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys Rev B. 1996 Oct;54:11169–11186. Available from: https://link.aps.org/doi/10.1103/PhysRevB.54.11169.
[13] https://www.quantum-espresso.org/.
[14] Giannozzi P, Baroni S, Bonini N, et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. J Phys Condens Matter. 2009;21(39):395502 (19pp). Available from: http://www.quantum-espresso.org.
[15] Giannozzi P, Andreussi O, Brumme T, et al. Advanced capabilities for materials modelling with QUANTUM ESPRESSO. J Phys Condens Matter. 2017;29(46):465901. Available from: http://stacks.iop.org/0953-8984/29/i=46/a=465901.
[16] https://www.openmx-square.org/.
[17] Ozaki T. Variationally optimized atomic orbitals for large-scale electronic structures. Phys Rev B. 2003 Apr;67:155108. Available from: https://link.aps.org/doi/10.1103/ PhysRevB.67.155108.
[18] Ozaki T, Kino H. Numerical atomic basis orbitals from H to Kr. Phys Rev B. 2004 May; 69:195113. Available from: https://link.aps.org/doi/10.1103/PhysRevB.69.195113.
[19] Ozaki T, Kino H. Efficient projector expansion for the ab initio LCAO method. Phys Rev B. 2005 Jul;72:045121. Available from: https://link.aps.org/doi/10.1103/PhysRevB. 72.045121.
[20] Ozaki T. $O(N)$ Krylov-subspace method for large-scale ab initio electronic structure calculations. Phys Rev B. 2006 Dec;74:245101. Available from: https://link.aps.org/doi/10.1103/PhysRevB.74.245101.
[21] Ozaki T, Nishio K, Kino H. Efficient implementation of the nonequilibrium green function method for electronic transport calculations. Phys Rev B. 2010 Jan;81:035116. Available from: https://link.aps.org/doi/10.1103/PhysRevB.81.035116.
[22] Paulsson M, Brandbyge M. Transmission eigenchannels from nonequilibrium green’s functions. Phys Rev B. 2007 Sep;76:115117. Available from: https://link.aps.org/doi/10. 1103/PhysRevB.76.115117.
[23] https://www2.ccs.tsukuba.ac.jp/public/otani/programs.html.
[24] Otani M, Sugino O. First-principles calculations of charged surfaces and interfaces: A plane-wave nonrepeated slab approach. Phys Rev B. 2006 Mar;73:115407. Available from: https://link.aps.org/doi/10.1103/PhysRevB.73.115407.
[25] Nishihara S, Otani M. Hybrid solvation models for bulk, interface, and membrane: Reference interaction site methods coupled with density functional theory. Phys Rev B. 2017 Sep;96:115429. Available from: https://link.aps.org/doi/10.1103/PhysRevB. 96.115429.
[26] Nakamura K, Yoshimoto Y, Nomura Y, et al. Respack: An ab initio tool for derivation of effective low-energy model of material. Comput Phys Commun. 2021; 261:107781. Available from: https://www.sciencedirect.com/science/article/pii/ S001046552030391X.
[27] Marzari N, Mostofi AA, Yates JR, et al. Maximally localized wannier functions: Theory and applications. Rev Mod Phys. 2012 Oct;84:1419–1475. Available from: https://link. aps.org/doi/10.1103/RevModPhys.84.1419.
[28] Aryasetiawan F, Imada M, Georges A, et al. Frequency-dependent local interactions and low-energy effective models from electronic structure calculations. Phys Rev B. 2004 Nov; 70:195104. Available from: https://link.aps.org/doi/10.1103/PhysRevB.70.195104.
[29] Avella A, Mancini F, editors. Strongly correlated systems. Springer Berlin Heidelberg; 2013. Available from: https://doi.org/10.1007/978-3-642-35106-8.
[30] Georges A, Kotliar G, Krauth W, et al. Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions. Reviews of Modern Physics. 1996 Jan;68(1):13–125. Available from: https://doi.org/10.1103/revmodphys.68.13.
[31] Kotliar G, Savrasov SY, Haule K, et al. Electronic structure calculations with dynamical mean-field theory. Reviews of Modern Physics. 2006 Aug;78(3):865–951. Available from: http://dx.doi.org/10.1103/RevModPhys.78.865.
[32] https://www.pasums.issp.u-tokyo.ac.jp/dcore/.
[33] Seth P, Krivenko I, Ferrero M, et al. Triqs/cthyb: A continuous-time quantum monte carlo hybridisation expansion solver for quantum impurity problems. Computer Physics Communications. 2016;200:274 – 284. Available from: http://www.sciencedirect.com/ science/article/pii/S001046551500404X.
[34] https://triqs.github.io/cthyb/latest/.
[35] Shinaoka H, Gull E, Werner P. Continuous-time hybridization expansion quantum impurity solver for multi-orbital systems with complex hybridizations. Computer Physics Communications. 2017;215:128–136. Available from: https://www.sciencedirect.com/ science/article/pii/S0010465517300036.
[36] https://github.com/ALPSCore/CT-HYB.
[37] Hafermann H, Werner P, Gull E. Efficient implementation of the continuous-time hybridization expansion quantum impurity solver. Computer Physics Communications. 2013;184(4):1280–1286. Available from: https://www.sciencedirect.com/science/ article/pii/S0010465512004092.
[38] https://github.com/ALPSCore/CT-HYB-SEGMENT.
[39] Krivenko I, Antipov A, Iskakov S, et al. pomerol: An exact diagonalization code written in C++ Available from: https://doi.org/10.5281/zenodo.5739623.
[40] https://github.com/TRIQS/hubbardI.
[41] Tagliavini A, Hummel S, Wentzell N, et al. Efficient bethe-salpeter equation treatment in dynamical mean-field theory. Physical Review B. 2018 Jun;97(23). Available from: http://dx.doi.org/10.1103/PhysRevB.97.235140.
[42] Kawamura M, Yoshimi K, Misawa T, et al. Quantum lattice model solver $\mathrm{H}\Phi$. Comput Phys Commun. 2017;217:180–192. Available from: https://www.sciencedirect.com/science/article/pii/S0010465517301200.
[43] Ido K, Kawamura M, Motoyama Y, et al. Update of $\mathrm{H}\Phi$: Newly added functions and methods in versions 2 and 3. Computer Physics Communications. 2024;298:109093. Available from: https://www.sciencedirect.com/science/article/pii/S001046552400016X.
[44] Hoshi T, Kawamura M, Yoshimi K, et al. Kω – open-source library for the shifted Krylov subspace method of the form $(zI - H)x = b$. Computer Physics Communications. 2021 Jan;258:107536. Available from: https://doi.org/10.1016/j.cpc.2020.107536.
[45] Gubernatis J, Kawashima N, Werner P. Quantum monte carlo methods. Cambridge University Press; 2016. Available from: https://doi.org/10.1017/cbo9780511902581.
[46] Syljuåsen OF, Sandvik AW. Quantum monte carlo with directed loops. Physical Review E. 2002;66:046701. Available from: https://doi.org/10.1103/physreve.66.046701.
[47] https://www.pasums.issp.u-tokyo.ac.jp/dsqss/.
[48] Masaki-Kato A, Suzuki T, Harada K, et al. Parallelized quantum monte carlo algorithm with nonlocal worm updates. Physical Review Letters. 2014;112:140603. Available from: https://doi.org/10.1103/physrevlett.112.140603.
[49] Tahara D, Imada M. Variational monte carlo method combined with quantum-number projection and multi-variable optimization. Journal of the Physical Society of Japan. 2008 Nov;77(11):114701. Available from: https://doi.org/10.1143/jpsj.77.114701.
[50] Orús R. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics. 2014 Oct;349:117–158. Available from: https://doi.org/10.1016/j.aop.2014.06.013.
[51] Orús R. Tensor networks for complex quantum systems. Nature Reviews Physics. 2019 Aug;1(9):538–550. Available from: http://dx.doi.org/10.1038/s42254-019-0086-7.
[52] Nishino T, Hieida Y, Okunishi K, et al. Two-Dimensional Tensor Product Variational Formulation. Progress of Theoretical Physics. 2001 03;105(3):409–417.
[53] Verstraete F, Cirac JI. Renormalization algorithms for quantum many-body systems in two and higher dimensions.
[54] https://www.pasums.issp.u-tokyo.ac.jp/tenes/.
[55] Jiang HC, Weng ZY, Xiang T. Accurate determination of tensor network state of quantum lattice models in two dimensions. Physical Review Letters. 2008 Aug;101(9).
[56] Jordan J, Orús R, Vidal G, et al. Classical simulation of infinite-size quantum lattice systems in two spatial dimensions. Physical Review Letters. 2008 Dec;101(25).
[57] Orús R, Vidal G. Simulation of two-dimensional quantum systems on an infinite lattice revisited: Corner transfer matrix for tensor contraction. Physical Review B. 2009 Sep;80(9).
[58] Phien HN, Bengua JA, Tuan HD, et al. Infinite projected entangled pair states algorithm improved: Fast full update and gauge fixing. Physical Review B. 2015 Jul;92(3).
[59] Nishino T, Okunishi K. Corner transfer matrix renormalization group method. Journal of the Physical Society of Japan. 1996 Apr;65(4):891–894.
[60] Morita S, Motoyama Y, Todo S. smorita/mptensor: mptensor v0.3.0.
[61] https://github.com/smorita/mptensor.
[62] Czarnik P, Dziarmaga J, Corboz P. Time evolution of an infinite projected entangled pair state: An efficient algorithm. Physical Review B. 2019 Jan;99(3).
[63] Kshetrimayum A, Rizzi M, Eisert J, et al. Tensor network annealing algorithm for twodimensional thermal states. Physical Review Letters. 2019 Feb;122(7).
[64] https://www.pasums.issp.u-tokyo.ac.jp/abics/.
[65] Kasamatsu S, Motoyama Y, Yoshimi K, et al. Facilitating ab initio configurational sampling of multicomponent solids using an on-lattice neural network model and active learning. The Journal of Chemical Physics. 2022 Sep;157(10):104114. Available from: https://doi.org/10.1063/5.0096645.
[66] Kasamatsu S, Motoyama Y, Yoshimi K, et al. Configuration sampling in multi-component multi-sublattice systems enabled by ab initio configuration sampling toolkit (abics). Science and Technology of Advanced Materials: Methods. 2023;3(1):2284128. Available from:
https://doi.org/10.1080/27660400.2023.2284128.
[67] Hukushima K, Nemoto K. Exchange monte carlo method and application to spin glass simulations. Journal of the Physical Society of Japan. 1996 Jun;65(6):1604–1608. Available from: https://doi.org/10.1143/jpsj.65.1604.
[68] Hukushima K, Iba Y. Population annealing and its application to a spin glass. AIP Conference Proceedings. 2003;690:200–206. Available from: https://doi.org/10.1063/1.1632130.
[69] Motoyama Y, Tamura R, Yoshimi K, et al. Bayesian optimization package: Physbo. Computer Physics Communications. 2022;278:108405.
[70] Garnett R. Bayesian Optimization. Cambridge University Press; 2023.
[71] Ueno T, Rhone TD, Hou Z, et al. Combo: An efficient bayesian optimization library for materials science. Materials Discovery. 2016;4:18–21. Available from: https://www.sciencedirect.com/science/article/pii/S2352924516300035.
[72] https://ma.issp.u-tokyo.ac.jp/.
[73] Konishi Y, Igarashi R, Kasamatsu S, et al. Materiapps – a portal site of materials science simulation. In: Proceedings of Computational Science Workshop 2014 (CSW2014), JPS Conf. Proc; Vol. 5; 2015. p. 011007. Available from: https://journals.jps.jp/doi/abs/10.7566/JPSCP.5.011007.
[74] https://cmsi.github.io/MateriAppsLive/.
[75] Motoyama Y, Yoshimi K, Kato T, et al. MateriApps LIVE! and MateriApps installer: Environment for starting and scaling up materials science simulations. SoftwareX. 2022 Dec;20:101210. Available from: https://doi.org/10.1016/j.softx.2022.101210.
[76] https://www.pasums.issp.u-tokyo.ac.jp/mainstaller/.
[77] https://datarepo.mdcl.issp.u-tokyo.ac.jp.
[78] Misawa T, Morita S, Yoshimi K, et al. mVMC—Open-source software for many-variable variational Monte Carlo method. Comput Phys Commun. 2019;235:447–462. Available from: https://www.sciencedirect.com/science/article/pii/S0010465518303102, https://github.com/issp-center-dev/mVMC.
[79] Motoyama Y, Yoshimi K, Masaki-Kato A, et al. DSQSS: Discrete Space Quantum Systems Solver. Comput Phys Commun. 2021;264:107944. Available from: https://www.sciencedirect.com/science/article/pii/S0010465521000692.
[80] Shinaoka H, Otsuki J, Kawamura M, et al. DCore: Integrated DMFT software for correlated electrons. SciPost Phys. 2021;10:117. Available from: https://scipost.org/10.21468/SciPostPhys.10.5.117.
[81] Motoyama Y, Okubo T, Yoshimi K, et al. TeNeS: Tensor network solver for quantum lattice systems. Computer Physics Communications. 2022 Oct;279:108437. Available from: https://doi.org/10.1016/j.cpc.2022.108437.
[82] https://www.pasums.issp.u-tokyo.ac.jp/komega/.
[83] https://www.pasums.issp.u-tokyo.ac.jp/physbo/.
[84] Motoyama Y, Yoshimi K, Mochizuki I, et al. Data-analysis software framework 2DMAT and its application to experimental measurements for two-dimensional material structures. Computer Physics Communications. 2022 Nov;280:108465. Available from: https://doi.org/10.1016/j.cpc.2022.108465.
[85] https://www.pasums.issp.u-tokyo.ac.jp/2dmat/.
Table 1. List of software enhanced through the Software Development and Advancement Project (PASUMS).
Figure 2. Illustration of the directed loop algorithm for the $S = 1/2$ spin model. Vertical solid lines and dashed lines denote up-spin and down-spin, respectively. First, vertices (dashed horizontal lines) are generated. Next, a pair of $\hat{S}^{+}$ (black circle) and $\hat{S}^{-}$ (white circle) operators is inserted. Then, $\hat{S}^{+}$ moves along the lines while flipping spins, and when it returns to where $\hat{S}^{-}$ is, the pair is removed.
Figure 3. (a) Schematic of the ESM method. The Poisson equation can be solved analytically in regions of perfect conductors and vacuum, and these results are connected to the electrostatic potential within the simulation cell for efficient non-periodic system calculations. The Kohn-Sham equation in each self-consistent step is solved within the simulation cell under conventional periodic boundary conditions. (b) Schematic of the ESM-RISM method. The solvent density is represented in grayscale. The Kohn-Sham and RISM equations are solved in separate simulation cells, interconnected through the electrostatic potential.
Figure 4. Schematic of downfolding. From the band structure and Kohn-Sham orbitals obtained by first-principles calculations, parameters such as hopping integrals $t$ and Coulomb integrals $U$ of the Hubbard model are calculated by focusing only on the states near the Fermi surface (target bands) using maximally localized Wannier functions and the constrained random phase approximation. Contributions from orbitals other than the target bands are included as screening to the atomic potential and electron-electron Coulomb interactions.
Figure 5. Schematic flow of calculations using H-wave. The users prepare the interaction definition files in the Wannier90 format or the expert-mode format and the input parameter files. The interaction definition files can be generated from a simple description by StdFace, or derived from the first-principles calculations. The results are stored in the output files according to the input parameters, including the expectation values of the physical observables, the Green’s functions, and other data for further analyses.
Figure 6. Schematic figure of the map from a lattice Hamiltonian to an effective impurity model in the DMFT.
Figure 7. Schematic picture of the MPS. The coefficient of a wave function $C$ is decomposed into a tensor network $A$ .
Figure 8. Schematic figures of the structure of a unit cell and the sequence of Monte Carlo samplings. A unit cell is comprised of a base structure and a set of defect sublattices that accommodate atom groups and vacancies (upper figures). A sequence of configurations are generated according to Monte Carlo samplings that involve trial steps of exchanging atom groups and vacancies, and changing the orientation of atom groups (lower figure).
Figure 9. Schematic view of ODAT-SE, an open framework for data analysis. For a given direct problem solver, ODAT-SE applies search algorithms to find optimal parameter values $X ^ { \ast }$ that minimize the loss function $F ( X )$ . The direct problem solvers and the algorithms are modularized so that the users can provide them for their own problems. | The Institute for Solid State Physics (ISSP) at The University of Tokyo has been carrying out a software development project named ``the Project for Advancement of Software Usability in Materials Science (PASUMS)". Since the launch of PASUMS, various open-source software programs have been developed/advanced, including ab initio calculations, effective model solvers, and software for machine learning. We also focus on activities that make the software easier to use, such as developing comprehensive computing tools that enable efficient use of supercomputers and interoperability between different software programs. We hope to contribute broadly to developing the computational materials science community through these activities. | [
"cs.SE",
"cond-mat.mtrl-sci",
"physics.comp-ph",
"physics.ed-ph"
] |
# 1. Introduction
Addressing data imbalance in computer vision tasks remains a core challenge for improving model performance (Zhang et al., 2021b; Ma et al., 2025). Imbalances in the number of training samples across classes often lead to biases during the learning process, making it difficult for deep learning models to accurately recognize underrepresented classes. To tackle this issue, researchers have proposed various approaches, such as class-aware sampling strategies, loss reweighting, and balanced data augmentation techniques (Tan et al., 2020; Sinha et al., 2020; Ren et al., 2020; Ma et al., 2024a;b; Yin et al., 2019; Liu et al., 2020; Huang et al., 2016; Dong et al., 2017; Kang et al., 2020). However, these methods primarily focus on inter-class imbalances, assuming that achieving balance at the class level suffices to ensure fairness and efficacy in learning. This assumption, however, overlooks intra-class attribute imbalances, particularly the problem of compositional attribute imbalance.
Attribute imbalance refers to the uneven distribution of image attributes (e.g., color, texture, and shape) within a single class. This imbalance can bias the learned representations of a model. While limited studies (Tang et al., 2022; Liu et al., 2021b) have qualitatively discussed the challenges posed by attribute imbalance, no prior research has systematically quantified or analyzed its prevalence, severity, and impact on model performance. To address this gap, our study aims to answer three core questions:
(1) How prevalent is attribute imbalance in commonly used vision datasets?
(2) What is the impact of attribute imbalance on model performance?
(3) How can attribute imbalance be effectively mitigated?
To automatically assess the degree of attribute imbalance in image datasets, two key challenges must be addressed: how to define attributes and how to determine the attributes corresponding to each image. First, based on previous studies (Zhong et al., 2021; Zhang et al., 2024), we define 20 primary attributes (e.g., color) and their corresponding 300+ secondary attributes (e.g., black, white). Second, leveraging CLIP (Radford et al., 2021), we construct a visual attribute dictionary that aligns low-level visual attributes of images with specific textual descriptions, enabling automated attribute annotation for each image. Using this dictionary, we assign the most suitable secondary attribute from each primary attribute category to each image, resulting in a total of 20 secondary attributes per image. We then compute the frequency of all secondary attributes within each class: the lower the frequency, the higher the scarcity. Based on this, we propose the concept of Compositional Attribute Scarcity (CAS) to comprehensively evaluate the overall attribute scarcity of an individual image. Specifically, for each image, we calculate the scarcity of its contained secondary attributes and sum them to obtain its CAS score.
Through experiments on 12 commonly used vision datasets, we reveal that intra-class attribute imbalance and compositional attribute imbalance are pervasive. Furthermore, we systematically analyze how these imbalances affect model performance. Our experimental results demonstrate a consistent pattern: images with higher CAS tend to have lower recognition accuracy. This finding underscores the potential for improving model generalization by addressing attribute imbalance, beyond the improvements achievable by resolving inter-class imbalance alone.
To mitigate attribute imbalance, we propose a novel sampling adjustment strategy for data augmentation. Specifically, we adjust the sampling probability of each image based on its compositional attribute scarcity, with rarer images being sampled more frequently. This adjustment increases the representation of rare attributes in augmented datasets, enabling data augmentation methods to generate more samples emphasizing rare attributes (e.g., white dogs). As a result, the proposed method facilitates better learning of diverse intra-class attributes. Notably, our method introduces no additional computational overhead and requires only a simple modification of the sampling strategy, making it easy to integrate into existing frameworks. The key contributions of this work are as follows:
(1) A Visual Attribute Framework: We define a comprehensive visual attribute framework encompassing 20 primary attributes and over 300 secondary attributes (Section 3.1). We also propose a CLIP-based visual attribute dictionary to automate the evaluation of attribute imbalance, revealing its widespread prevalence in general vision datasets (Section 3.2).
(2) Impact Analysis of Attribute Imbalance: We reveal the significant impact of attribute imbalance on model performance. Specifically, images with higher CAS exhibit lower recognition accuracy, highlighting the necessity and importance of addressing intra-class attribute imbalance (Sections 3.3 and 3.4).
(3) A CAS-Guided Sampling Strategy: We propose a sampling adjustment method based on CAS. This method, requiring only a custom sampler, integrates seamlessly with existing data augmentation frameworks (Section 4). Experiments on 12 benchmark datasets demonstrate the effectiveness and generalizability of the proposed approach (Section 5).
# 2. Related Work
# 2.1. Long-tailed image recognition
In practice, datasets usually follow a long-tailed distribution, which leads to models whose performance varies widely across classes. Notably, most research on long-tailed visual recognition rests on the default assumption that classes with few samples are always weak classes. Accordingly, numerous methods have been proposed to improve model performance on tail classes. (Zhang et al., 2021b) divides these methods into three fields, namely class rebalancing (Sinha et al., 2022; Cui et al., 2019; Lin et al., 2017; Elkan, 2001; Zhou & Liu, 2005; Zhao et al., 2018; Ye et al., 2020; Chawla et al., 2002; Wang et al., 2020a; Estabrooks et al., 2004; Zhang & Pfister, 2021; Zhong et al., 2021), information augmentation (Ma et al., 2024a;b; Chu et al., 2020; Liu et al., 2021a; Park et al., 2022; Cui et al., 2018; Yang & Xu, 2020; Hu et al., 2020; Zang et al., 2021), and module improvement (Cui et al., 2021; Ouyang et al., 2016; Zhou et al., 2020; Wang et al., 2020a; Cai et al., 2021; Wang et al., 2020b; Zhang et al., 2021a). Unlike the above, (Sinha et al., 2022) and (Ma et al., 2023) observe that the number of samples in a class does not correlate strictly positively with accuracy, and some tail classes are even more accurate than head classes. They therefore propose other measures for gauging the learning difficulty of classes rather than relying on the sample number alone.
# 2.2. Discussion on Intra-Class Imbalance
The fundamental goal of exploring intra-class imbalance is to identify factors that cause differences in recognition performance among samples within the same class, thereby enabling targeted model improvements. (Liu et al., 2021b) attempted to define an imbalanced distribution of learning difficulty within a class, where learning difficulty is determined by the model’s prediction confidence. However, prediction confidence varies across different models, leading to inconsistent quantification of learning difficulty, which lacks reproducibility, transparency, and interpretability. (Tang et al., 2022) proposed investigating the long-tail distribution of attributes within a class but only provided qualitative analyses of how attribute imbalance might negatively affect model performance.
In practical applications, (Doonan et al., 2025) addressed the imbalanced distribution of plant traits in wheat recognition by applying weighted point cloud sampling to increase the proportion of rare plant traits. Similarly, (Yang et al., 2024) focused on generating data for specific defect types in industrial defect detection to balance subclass distributions, effectively improving defect detection accuracy. (Zhou et al., 2023) explored the relationship between noise interference and camera angle imbalance with segmentation performance in medical image segmentation tasks. However, these studies are limited to specific domains and lack generalizability.
To date, no research has systematically defined general visual attributes, thoroughly investigated the prevalence of attribute imbalance, or examined whether its negative impact on models warrants widespread attention from researchers.
Figure 1. The left side shows all primary attributes we defined and their corresponding secondary attributes. The right side illustrates the process of constructing the visual attribute dictionary based on CLIP.
# 3. Attribute Imbalance
In this section, we first systematically define the visual attributes of images. Then, we propose using CLIP to construct a visual attribute dictionary, enabling automatic evaluation of image attributes. Finally, we reveal the prevalence of attribute imbalance and compositional attribute imbalance across 12 commonly used visual datasets and analyze their impact on model performance.
# 3.1. Definition of Visual Attributes
Visual attributes refer to the fundamental characteristics that constitute an image, such as color, texture, and shape. These attributes not only define the visual appearance of an image but also play a critical role in the representation learning process of deep learning models. In this study, we define visual attributes based on a comprehensive analysis of prior research (Zhao et al., 2019; Pham et al., 2021) and practical insights. These attributes are categorized into 20 primary attributes (e.g., color, material, shape, size) and over 300 secondary attributes (e.g., “black” and “white” under color). Figure 1 illustrates all primary and secondary attributes. This hierarchical design ensures both the comprehensiveness and granularity of attribute definitions.
# 3.2. Constructing a Visual Attribute Dictionary
To enable the automated evaluation of attribute distributions in image datasets, we leverage the CLIP model to construct a visual attribute dictionary on ImageNet-21k. As shown in Figure 1, we first organize all secondary attributes into a textual attribute list, such as “The photo is Brown,” and generate corresponding text embeddings. Next, we calculate the similarity between each text embedding and the image embeddings, matching the most similar image embedding to the respective text attribute. The matched image embeddings serve as the keys in the visual attribute dictionary, while the corresponding text attributes serve as the values. To query the visual attributes of a given image, its embedding is extracted and compared with the dictionary keys using cosine similarity. The value corresponding to the key with the highest similarity score is then returned as the predicted attribute for the image.
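As a rough sketch of the dictionary’s matching logic (not the authors’ implementation), assuming CLIP text and image embeddings have already been computed and L2-normalized, the key/value construction and cosine-similarity lookup might look like the following; all function names are our own:

```python
import numpy as np

def build_attribute_dictionary(attr_texts, text_embeds, image_embeds):
    """Match each text attribute to its most similar image embedding.

    The matched image embeddings become the dictionary keys and the
    text attributes the values, mirroring the construction described
    above. All embeddings are assumed L2-normalized, so the dot
    product equals cosine similarity.
    """
    keys, values = [], []
    for attr, t in zip(attr_texts, text_embeds):
        sims = image_embeds @ t            # cosine similarity per image
        best = int(np.argmax(sims))        # most similar image embedding
        keys.append(image_embeds[best])
        values.append(attr)
    return np.stack(keys), values

def query_attribute(img_embed, keys, values):
    """Return the attribute whose key embedding is closest to the image."""
    sims = keys @ img_embed
    return values[int(np.argmax(sims))]
```

In practice the embeddings would come from a CLIP encoder; here they are plain arrays so the matching step itself is easy to inspect.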
# 3.3. Single-Attribute Imbalance
At the single-attribute level, the imbalance manifests as certain attributes (e.g., “black”) dominating a large proportion of the dataset, while other attributes (e.g., “purple”) are represented by only a few samples. We conducted a systematic analysis of 12 commonly used visual datasets, including ImageNet and CIFAR-100. Using the visual attribute dictionary, we calculated the distribution of different attributes in each dataset and quantified the degree of attribute imbalance. As shown in Figure 2, the distribution of secondary attributes under each primary attribute typically exhibits a long-tailed pattern, with a large number of low-frequency attributes having significantly fewer samples than high-frequency attributes.
To investigate the impact of attribute frequency on model performance, we first trained standard ResNet-18 and ResNet-50 models on each dataset. Within each category, for each primary attribute, we divided the samples into subsets based on their associated secondary attributes and evaluated the recognition accuracy of both models on each subset. The experimental results, shown in Figure 2, reveal that samples with higher-frequency attributes generally achieve higher and more stable recognition accuracy. Conversely, samples with low-frequency attributes do not consistently exhibit the expected low recognition accuracy. Merely analyzing single-attribute imbalance is insufficient to explain this phenomenon. We hypothesize that the rarity of one type of secondary attribute in an image does not necessarily imply the rarity of other secondary attributes (e.g., a rare color may coexist with a common shape).
# 3.4. Compositional Attribute Imbalance
In the analysis of single-attribute imbalance, we observed that samples with high-frequency attributes are more likely to be correctly identified by the model. However, merely relying on single-attribute statistics cannot fully explain the model’s performance on low-frequency attribute samples. Considering that an image often contains multiple visual attributes, we further introduce the concept of compositional attributes to explore the impact of multi-attribute scarcity on model performance.
Compositional attributes refer to the specific combination of multiple (20 in this study) primary attributes in an image, such as {Blue, Metallic, Round, ...} or {Red, Wooden, Square, ...}. These combinations not only describe the visual characteristics of an image but also capture the interrelationships between attributes. However, the free combinations of attributes are not uniformly distributed in datasets, and many compositional attributes are extremely scarce in the training data. Such scarcity may lead to significantly degraded model performance on images with these rare compositional attributes. We define Compositional Attribute Scarcity (CAS) as follows:
(1) For each primary attribute, calculate the frequency of its secondary attributes and rank them in descending order of frequency.
(2) The scarcity of each secondary attribute is given by its position in this frequency-sorted ranking: the further down the list an attribute appears, the rarer it is.
(3) The CAS of an image is calculated as the sum of the scarcity ranks of its 20 secondary attributes.
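The three steps above can be sketched as a small helper, assuming each image is represented as a dict mapping every primary attribute to its matched secondary attribute (names are illustrative):

```python
from collections import Counter

def attribute_ranks(values):
    """Rank the secondary attributes of one primary attribute by frequency.

    Rank 1 is the most frequent; a larger rank number means a rarer
    attribute, which is the scarcity score used in step (2).
    """
    counts = Counter(values)
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {attr: rank for rank, attr in enumerate(ordered, start=1)}

def cas_scores(images_attrs):
    """Compute the CAS of each image (step (3)).

    images_attrs: list of dicts {primary_attr: secondary_attr}, one
    per image. The CAS is the sum of the per-primary scarcity ranks.
    """
    primaries = images_attrs[0].keys()
    ranks = {p: attribute_ranks([img[p] for img in images_attrs])
             for p in primaries}
    return [sum(ranks[p][img[p]] for p in primaries) for img in images_attrs]
```

For example, with two black round images and one purple round image, the purple image receives a higher CAS because its color rank is lower in the frequency ordering.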
Figure 3 illustrates the process of calculating the CAS of an image. We further investigate the impact of compositional attribute scarcity on model performance across 12 datasets using ResNet-18 and ResNet-50. Samples were divided into subsets based on their CAS values, and classification accuracy was evaluated for each subset. The experimental results, shown in Figure 4, reveal the following:
Figure 2. The distribution of secondary attributes under color and material categories across 12 visual benchmark datasets, along with the performance of ResNet-18 and ResNeXt-50 on each secondary attribute.
Figure 3. Illustration of the computation process for image compositional attribute scarcity (CAS).
(1) Compositional attribute imbalance is pervasive, with many attribute combinations represented by only a few samples in the entire dataset.
(2) As Compositional Attribute Scarcity increases, model performance gradually deteriorates.
It is evident that samples with high compositional attribute scarcity are often under-learned. To address this issue, we propose a simple yet effective solution to mitigate the impact of compositional attribute imbalance.
Figure 4. The long-tailed distribution of sample compositional attribute scarcity across certain categories in 12 visual benchmark datasets, along with the performance of ResNet-18 and ResNeXt-50 across different compositional attribute scarcity (CAS) intervals. The horizontal axis represents 10 evenly divided intervals based on different CAS values, increasing from left to right. The left vertical axis indicates the average compositional attribute scarcity of all samples within each interval.
# 4. Leveraging Compositional Attribute Scarcity to Guide Data Augmentation
To mitigate the negative impact of visual attribute imbalance on model performance, we propose a simple yet effective solution. The core idea is to adjust the sampling probability during data augmentation based on a sample’s compositional attribute scarcity, thereby generating more samples with rare attributes. This approach enhances the model’s representation capability for these underrepresented attributes. To amplify the differences in scarcity among samples, we introduce a power transformation to nonlinearly enhance the scarcity values. In practice, our method can be seamlessly integrated with existing data augmentation techniques by customizing the sampler.
# 4.1. Sampling Strategy Based on CAS
Assume the total number of samples is $M$, and the compositional attribute scarcity of sample $i$ is $r_i$. To enhance the differentiation of scarcity, we apply a power transformation to $r_i$: $r_i^{\prime} = r_i^{b}$, where $b$ is the power parameter controlling the degree of nonlinear amplification. When $b > 1$, the differentiation of high-scarcity samples is significantly increased. Our empirical studies recommend setting $b$ to 1.2 (see Section 5.4). Based on the transformed scarcity $r_i^{\prime}$, the sampling probability for each sample is defined as:
$$
p_i = \frac{r_i^{\prime}}{\sum_{k=1}^{M} r_k^{\prime}}.
$$
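A minimal sketch of this transform-and-normalize step (the function name is ours):

```python
import numpy as np

def sampling_probs(r, b=1.2):
    """Power-transform CAS values and normalize to sampling probabilities.

    r : sequence of per-sample CAS values r_i
    b : power parameter; b > 1 amplifies the weight of high-scarcity
        samples (the paper recommends b = 1.2).
    """
    r = np.asarray(r, dtype=float)
    rp = r ** b                 # r'_i = r_i^b
    return rp / rp.sum()        # p_i = r'_i / sum_k r'_k
```

Since the transform is monotone, rarer samples always receive a strictly larger sampling probability, and the probabilities sum to one by construction.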
# 4.2. Seamless Integration with Data Augmentation
During training, we first compute the compositional attribute scarcity and sampling probability for each sample. These probabilities are then used to customize the sampler. Subsequently, data augmentation techniques (e.g., CutMix, FMix, SaliencyMix) are applied to preferentially generate more samples with rare attributes. Algorithm 1 provides the implementation details using CutMix as an example.
Furthermore, Table 1 compares the level of compositional attribute imbalance in the dataset before and after applying our method. The results show that using only standard
Table 1. Comparison of sample CAS statistics on ImageNet-1K. Our method significantly reduces the standard deviation of CAS, indicating a more balanced and less dispersed distribution of compositional attributes.
# Algorithm 1 Enhancing CutMix with CAS
Input: Dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, CAS values $\{r_i\}_{i=1}^{N}$, CutMix parameter $\alpha > 0$, training epochs $T$, scaling factor $\beta > 0$
Output: Trained model $\mathcal{M}$.
1 Step 1: Compute Sampling Weights foreach $r_i$ in $\{r_1, r_2, \dots, r_N\}$ do 2 Compute weight $w_i \gets r_i^{\beta}$
3 Define weights vector $\mathbf{w} \gets \{w_1, w_2, \ldots, w_N\}$
4 Step 2: Initialize Weighted Sampler Initialize sampler $s$ with weights $\mathbf{w}$ using WeightedRandomSampler
5 Step 3: Training with CutMix for $t = 1$ to $T$ do
6 Sample a batch $B$ from $\mathcal{D}$ using sampler $s$ foreach pair $(x_i, y_i)$ and $(x_j, y_j)$ in $B$ do
7 Sample $\lambda \sim \operatorname{Beta}(\alpha, \alpha)$; compute the CutMix bounding box; generate CutMix samples $x_{mix} \gets x_i \cdot M + x_j \cdot (1 - M)$, where $M$ is the binary mask defined by the box, and $y_{mix} \gets \lambda y_i + (1 - \lambda) y_j$
8 Perform forward and backward propagation on the CutMix samples and update model $\mathcal{M}$
9 return Trained model $\mathcal{M}$
data augmentation strategies yields little improvement in reducing attribute imbalance within the dataset.
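To make the sampler-plus-CutMix integration concrete, here is a minimal NumPy sketch. All names are illustrative; the paper's implementation uses PyTorch's WeightedRandomSampler, which we stand in for with a weighted `np.random` choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def cutmix_batch(images, labels, probs, batch_size=4, alpha=0.2):
    """Draw a CAS-weighted batch, then mix random pairs with CutMix.
    images: (N, H, W) array; labels: (N, C) one-hot array;
    probs: per-sample CAS-derived sampling probabilities."""
    idx = rng.choice(len(images), size=batch_size, p=probs)  # weighted sampler
    perm = rng.permutation(batch_size)                       # mixing partner
    lam = rng.beta(alpha, alpha)                             # mix ratio
    H, W = images.shape[1:]
    # Cut a box whose area fraction is roughly (1 - lam)
    cut_h = int(H * np.sqrt(1 - lam))
    cut_w = int(W * np.sqrt(1 - lam))
    y0 = rng.integers(0, H - cut_h + 1)
    x0 = rng.integers(0, W - cut_w + 1)
    x = images[idx].copy()
    x[:, y0:y0 + cut_h, x0:x0 + cut_w] = \
        images[idx][perm][:, y0:y0 + cut_h, x0:x0 + cut_w]
    # Correct lam for the integer rounding of the box area
    lam_adj = 1 - (cut_h * cut_w) / (H * W)
    y = lam_adj * labels[idx] + (1 - lam_adj) * labels[idx][perm]
    return x, y

images = rng.random((10, 8, 8))
labels = np.eye(5)[rng.integers(0, 5, size=10)]
probs = np.full(10, 0.1)          # uniform here; CAS-derived in practice
xb, yb = cutmix_batch(images, labels, probs)
```

Because the sampler and the augmentation are decoupled, the same `probs` vector can drive FMix or SaliencyMix unchanged.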
# 5. Empirical Study
# 5.1. Datasets
To comprehensively evaluate the performance of the proposed method, twelve diverse image classification datasets were selected. These datasets encompass tasks ranging from large-scale image classification to fine-grained classification, which effectively validates the model’s performance across various scenarios. The ImageNet-1K (Deng et al., 2009) dataset contains 1.2 million training images and 50,000 validation images across 1,000 categories. The CIFAR-100 (Krizhevsky et al., 2009) dataset includes 50,000 training images and 10,000 test images spanning 100 categories. The Oxford-IIIT Pet (Parkhi et al., 2012) dataset comprises 37 pet categories with 7,349 images. The Stanford Dogs (Khosla et al., 2011) dataset contains 120 dog breeds with a total of 20,580 images. The DTD (Describable Textures Dataset) (Cimpoi et al., 2014) includes 47 texture categories with 1,880 images. The Oxford-102 Flower (Nilsback & Zisserman, 2008) dataset contains 102 flower categories with 8,189 images. The Food-101 (Bossard et al., 2014) dataset covers 101 food categories with a total of 101,000 images. The Stanford Cars (Krause et al., 2013) dataset includes 196 car categories with 16,185 images. The FGVC-Aircraft (Maji et al., 2013) dataset contains 100 aircraft categories with 10,000 images. The SUN397 (Xiao et al., 2010) dataset features 397 scene categories with 108,754 images. The DeepFashion (Liu et al., 2016) dataset consists of 50 clothing categories with 50,000 images. Finally, the CUB200-2011 (Wah et al., 2011) dataset includes 200 bird species categories with 11,788 images. Through comprehensive testing across these datasets, we can thoroughly assess the model’s performance in a variety of tasks and environments.
# 5.2. Implementation Details
In this experiment, we employed ResNet-18 and ResNeXt-50 as the baseline models and configured the hyperparameter $\alpha$ for different data augmentation methods (Qin et al., 2024). Specifically, $\alpha$ was set to 0.2 for CutMix, FMix, and SaliencyMix, and the mixing hyperparameters were generated by sampling from a $\mathrm{Beta}(\alpha, \alpha)$ distribution at each training iteration. The batch size for all models was set to 64, and the initial learning rate was 0.1, with cosine annealing used as the learning rate scheduling strategy (Qin et al., 2024; Islam et al., 2024).
# 5.3. Evaluation Metrics
To comprehensively evaluate the performance of the proposed method, in addition to testing overall classification accuracy (Qin et al., 2024), we also introduce a sample partitioning strategy based on compositional attribute scarcity (CAS) to further investigate the model’s performance at different CAS levels. We first calculate the CAS for all samples and rank them. Samples with higher CAS correspond to rarer visual features, thus posing greater challenges to the model’s discriminative ability. Based on the CAS value of each sample, we divide the test set into three subsets:
• High subset: Contains the top $4 0 \%$ of samples with the highest CAS values, representing the most challenging samples due to their rare visual features. • Middle subset: Includes the next $3 0 \%$ of samples, with moderate CAS values, representing samples with less rare visual features compared to the first subset. • Low subset: Consists of the remaining $3 0 \%$ of samples with the lowest CAS values, representing the easiest samples with more common visual features.
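The 40/30/30 split described above can be sketched as follows (the function name and toy scores are ours):

```python
import numpy as np

def partition_by_cas(cas, high=0.4, middle=0.3):
    """Split test-set indices into High (top 40% CAS), Middle (next 30%),
    and Low (remaining 30%) subsets, ranked by descending CAS."""
    order = np.argsort(cas)[::-1]              # descending CAS
    n = len(cas)
    n_high = int(round(high * n))
    n_mid = int(round(middle * n))
    return (order[:n_high],                    # High: rarest attributes
            order[n_high:n_high + n_mid],      # Middle
            order[n_high + n_mid:])            # Low: most common attributes

cas = np.arange(10) / 10.0                     # toy CAS scores for 10 samples
hi, mid, lo = partition_by_cas(cas)
```

Per-subset accuracy is then just the model's accuracy restricted to each index set.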
For each subset, we calculate and test the classification accuracy of the model before and after the improvements. This sparsity-based subset division allows us to more precisely analyze the model’s performance under varying information conditions, particularly in terms of classification ability between low sparsity (information-rich) and high sparsity (information-scarce) samples, as well as the differences in model enhancement. Through this method, we can effectively assess the model’s robustness and generalization when faced with samples of varying information density.

Table 2. Evaluation results on 12 visual benchmark datasets. The overall performance improvement of CutMix, FMix, and SaliencyMix using our method is reported. Additionally, the performance of our method on three different CAS-based subsets is presented.

Figure 5: Performance gains of CutMix, FMix, and SaliencyMix combined with our weighted sampling on CIFAR-100 and ImageNet-1K as a function of the power parameter $b$.
# 5.4. Selection of Hyperparameter $b$
The power parameter $b$ of scarcity augmentation controls the nonlinear amplification of the compositional attribute scarcity. We explored the optimal value of $b$ within the range of 0.5 to 1.5 on CIFAR-100 and ImageNet. As shown in Figure 5, when $b = 1.2$, our method achieves the highest performance gains for CutMix, FMix, and SaliencyMix. Therefore, we set $b = 1.2$ for all subsequent experiments.
# 5.5. Main Results
Table 2 shows the classification results of the model before and after improvements across different datasets. We observed that on all datasets, performance improved after applying our method, demonstrating its effectiveness in mitigating attribute imbalance. Impressively, with just our sampling strategy, on ImageNet-1K using ResNeXt-50 as the backbone network, our method improved the overall performance of CutMix, FMix, and SaliencyMix by $1.18\%$, $1.58\%$, and $3.07\%$, respectively. This highlights the necessity of addressing the compositional attribute imbalance issue in general-purpose vision datasets.
In fine-grained classification tasks (e.g., Stanford Dogs, Stanford Cars, and Oxford-102 Flower), the performance improvement was most pronounced. Specifically, on Stanford Dogs, using ResNeXt-50 as the backbone network, our method improved the overall performance of CutMix, FMix, and SaliencyMix by $5 . 4 3 \%$ , $4 . 8 3 \%$ , and $1 . 8 5 \%$ , respectively. On Stanford Cars, using ResNet-18 as the backbone network, our method achieved performance gains of $7 . 0 8 \%$ , $1 0 . 7 9 \%$ , and $6 . 5 4 \%$ for CutMix, FMix, and
SaliencyMix, respectively. On Oxford-102 Flower, using ResNet-18 as the backbone network, our method improved the overall performance of CutMix, FMix, and SaliencyMix by $4 . 9 9 \%$ , $3 . 2 1 \%$ , and $2 . 0 1 \%$ , respectively.
# 5.6. Impact on Rare Samples
To further analyze the effectiveness of our method across different sparsity levels, we divided the test set into three subsets: high sparsity, medium sparsity, and low sparsity, and evaluated the classification accuracy for each subset. As shown in Table 2, standard data augmentation methods perform poorly on high-sparsity samples, leading to a significant performance gap between low-sparsity and high-sparsity samples. However, after applying our sparsity-based sampling strategy, we observed a notable improvement in classification accuracy for high-sparsity samples, effectively reducing the performance gap between low- and high-sparsity samples.
For instance, on ImageNet-1K, using ResNeXt-50 as the backbone network, our method improved the performance of CutMix, FMix, and SaliencyMix on the high-sparsity subset by $3.54\%$, $2.61\%$, and $4.89\%$, respectively. On the fine-grained image dataset Stanford Dogs, with ResNet-18 as the backbone, our method enhanced the performance of CutMix, FMix, and SaliencyMix on the high-sparsity subset by $3.65\%$, $4.02\%$, and $6.64\%$, respectively. Similarly, on Stanford Cars, our method boosted the performance of CutMix, FMix, and SaliencyMix on the high-sparsity subset by $6.66\%$, $6.89\%$, and $6.88\%$, respectively. These results demonstrate that sparsity-guided data augmentation effectively improves the model’s ability to represent sparse attributes. In summary, our experimental results validate the effectiveness of the sparsity-guided data augmentation approach across multiple datasets and augmentation techniques. This method not only enhances overall classification performance but also significantly improves the model’s performance on sparse attribute samples, providing an effective solution to address attribute imbalance issues in real-world applications.
This strategy is further integrated with various data augmentation techniques (such as CutMix, FMix, and SaliencyMix) to enhance the model's ability to represent rare attributes. Extensive experiments on benchmark datasets demonstrate that our method effectively mitigates attribute imbalance, thereby improving the robustness and fairness of deep neural networks. Our research highlights the importance of modeling visual attribute distributions and provides a scalable solution for long-tail image classification tasks.
# 1 Introduction
Recent advances in large language models (LLMs) (Yang et al., 2024; Touvron et al., 2023; Abdin et al., 2024; Raffel et al., 2023; Brown et al., 2020; Devlin et al., 2019) have led to impressive results in a wide array of natural language processing tasks. Building on these successes, researchers have extended LLMs with visual inputs, yielding multimodal large language models (MLLMs) such as LLaVA (Liu et al., 2023b, 2024b). These MLLMs can handle complex tasks like image captioning (Anderson et al., 2018), visual question answering (Agrawal et al., 2016), and multimodal dialogue (Das et al., 2017). Existing approaches (Dai et al., 2023; Liu et al., 2023b, 2024b; Zhou et al., 2024; Chen et al., 2023a; Alayrac et al., 2022; Bi et al., 2024) show remarkable potential to bridge the gap between vision and language.
Despite these achievements, MLLMs often inherit a critical limitation from LLMs: the tendency to produce hallucinations (Huang et al., 2024b; Bai et al., 2024; Liu et al., 2024a). These hallucinations arise when a model over-relies on partial or misleading cues, generating responses that are incorrect or do not correspond to the provided input.
To mitigate hallucinations, two general strategies have emerged: training-phase interventions and inference-phase interventions. In the training phase, auxiliary supervision (Chen et al., 2023b) or reinforcement learning (Ben-Kish et al., 2024) can help align model outputs with factual or humanpreferred references. However, these approaches require additional data or complex reward modeling, which may be costly or infeasible in certain scenarios. In contrast, inference-phase methods (Zhou et al., 2024; Zhao et al., 2024; Deng et al., 2024; Wang et al., 2024a; Leng et al., 2023) aim to
Figure 1: Impact of VCD and ICD on attention distribution. We conduct an image description task using LLaVA-1.5 on 500 randomly sampled images from the COCO dataset while monitoring the internal attention distribution within the LLM component. We compare the changes in attention under different settings of Visual Contrastive Decoding (VCD), Instruction Contrastive Decoding (ICD), and their combination, relative to the original LLaVA model. The x-axis represents different attention categories: system tokens (sys), visual tokens (vis), textual tokens (text), and output tokens (output). The y-axis indicates the attention difference relative to the original model. VCD (solid blue bars) reduces attention to visual tokens while slightly increasing attention to textual tokens, with a stronger effect as the number of noising steps increases. ICD (hatched bars) exhibits a similar trend, further decreasing visual attention and increasing text attention, where stronger negative prefixes (see text in the legend) result in a more pronounced shift. When combining VCD and ICD (dotted bars), the reduction in visual attention is further amplified, while the focus on textual tokens increases. These findings indicate that the effectiveness of VCD and ICD originates from underlying shifts in the model’s attention distribution rather than solely from the contrastive decoding process.
correct or filter erroneous outputs without retraining. Contrastive decoding is particularly appealing as it leverages negatively perturbed or prefixed inputs to steer the model away from hallucinations in a training-free manner. Two notable recent methods are Visual Contrastive Decoding (VCD) (Leng et al., 2023), which perturbs an input image (e.g., via noising) to generate a "negative" set of logits that is subtracted from the original logits to suppress hallucinations, and Instruction Contrastive Decoding (ICD) (Wang et al., 2024a), which prepends a negative prefix to the prompt (e.g., "You are a confused object detector") to generate a signal that shifts the model’s predictions away from hallucinated content. Both methods offer a lightweight yet effective approach to reducing hallucinations. However, upon closer examination, we find that these methods construct contrasting branches through surface-level modifications – either perturbing the image (VCD) or prefixing the prompt (ICD) – without explicitly addressing the underlying cause of hallucinations. Attention steering, as in OPERA and PAI (Liu et al., 2024c; Huang et al., 2024a), is also a common inference-phase remedy for hallucination. However, while PAI introduces the notion of "text inertia" – the tendency of an MLLM to keep generating text-driven content even when the image is removed – it does not articulate why steering the attention matrix is the necessary lever to overcome this inertia.
In our experiments (Fig. 1), we observe that both VCD and ICD consistently cause fundamental shifts in the internal attention distribution: they tend to reduce attention on visual tokens and amplify attention on textual tokens. This insight raises a natural question: why not directly steer the attention mechanism itself? To this end, we propose an Attention-Steerable Contrastive Decoding (ASCD) framework to manipulate attention. Specifically, the attention modification is integrated into a contrastive decoding pipeline to either enhance visual cues or to suppress negative signals. We further develop a dynamic head-selection mechanism to identify “text-centric” heads that disproportionately focus on textual cues, enabling more targeted positive adjustments. In parallel, we introduce a complementary mechanism that restricts negative steering to only the most critical visual tokens, ensuring that suppression is applied solely where necessary to mitigate hallucinations while preserving essential visual details. In summary, our contributions are as follows: (1) We analyze how recent contrastive decoding methods (VCD, ICD) create “negative samples” that fundamentally alter attention; (2) We propose an attention-steerable contrastive decoding method that explicitly modulates attention distributions to offer a more principled way to mitigate hallucinations in the inference phase; (3) We faithfully reproduce VCD and ICD to ensure fair comparison with prior work. Across three representative MLLM backbones (LLaVA-1.5 7B, LLaVA-NeXT 7B, and Phi2-SigLIP), three decoding schemes (greedy, nucleus, and beam search), and three hallucination-focused benchmarks (Rohrbach et al., 2019; Li et al., 2023b; Sun et al., 2023) (POPE, CHAIR, MMHAL-BENCH), our approach consistently reduces hallucinations and strengthens visual grounding. 
At the same time, it improves performance on standard VQA benchmarks (Yue et al., 2024; Yu et al., 2024; Lu et al., 2022; Singh et al., 2019; Hudson and Manning, 2019), including MMMU, MM-VET, SCIENCEQA, TEXTVQA, and GQA whereas other methods suffer from degraded performance on these benchmarks.
# 2 Related Work
Multimodal Large Language Models. Multimodal Large Language Models (MLLMs) have significantly advanced the field of artificial intelligence by integrating vision and language understanding, enabling a wide range of vision-language tasks (Dai et al., 2023; Zhu et al., 2023; Liu et al., 2024b, 2023b; Alayrac et al., 2022; Chen et al., 2023a; Zhou et al., 2024; Zhang et al., 2023a; Rong et al., 2025; Chen et al., 2025a; Yu et al., 2025; Guan et al., 2025; Huang et al., 2024c, 2025; Liu et al., 2025; Zhao et al., 2025b). These models typically follow a two-stage training paradigm: (1) large-scale pretraining on web-scale image-text pairs (Liu et al., 2023b; Li et al., 2023a) to learn cross-modal representations, and (2) visual instruction tuning (Liu et al., 2023a; Bi et al., 2025) on task-specific datasets to enhance multimodal instruction-following capabilities. While this paradigm has led to substantial improvements in vision-language reasoning, MLLMs still face key challenges, such as hallucination – where the model generates content that is inconsistent with the given visual input (Huang et al., 2024b; Bai et al., 2024; Liu et al., 2024a).
Mitigating Hallucinations in MLLMs. Hallucination in MLLMs is particularly pronounced in open-ended generation tasks, where models may produce content that is not aligned with the provided visual input (Huang et al., 2024a; Jing et al., 2024; Zhang et al., 2023b). Some approaches focus on the mitigation of data bias, scaling-up of vision resolution, and alignment optimization. Lovenia et al. (2024) introduce a technique that mines 95,000 negative samples by replacing original categories, attributes, or quantity information with similar but incorrect alternatives. This fine-grained approach effectively enriches the contrastive signal during training, thereby enhancing the model’s robustness. Chen et al. (2024) propose InternVL, which scales the vision encoder up to 6 billion parameters and processes images with widths ranging from 1,664 to 6,144 pixels. While this method improves visual detail and alignment, it requires significant computational resources for pretraining with large-scale data. Sun et al. (2023) employ Reinforcement Learning from Human Feedback (RLHF) (Stiennon et al., 2022) to align different modalities during training. This optimization strategy leads to a reduction in hallucinations by better integrating visual and textual cues. Bi et al. (2024) propose a representation steering method that effectively mitigates hallucination in multimodal models.
Contrastive Decoding Approaches. Recent work has explored contrastive decoding as an effective, training-free means to mitigate hallucinations (Xiao et al., 2025). For instance, Leng et al. (2023) introduced Visual Contrastive Decoding (VCD), which perturbs the input image to generate a negative logit branch that is subtracted from the original predictions, while Wang et al. (2024a) employs a negative prompt to steer outputs away from hallucinated content. Huo et al. (2024) leverages a Context and Text-aware Token Selection (CT2S) strategy to selectively retain the most informative vision tokens in early decoder layers, thereby amplifying beneficial multimodal context and suppressing spurious hallucinations.
# 3 Preliminaries
Modern MLLMs integrate text and visual inputs through powerful encoders that merge the modalities into a unified representation, which is then processed by a multi-layer Transformer. While these models can produce coherent responses, they rely heavily on internal attention mechanisms that dictate how visual and textual cues are combined. As discussed in Section 3.2, subtle variations in these attention distributions can significantly impact the generated output. This observation motivates our approach: by explicitly modulating attention, we aim to enhance visual grounding and mitigate hallucinations.
# 3.1 MLLM Formulation
We consider a multimodal large language model (MLLM) that processes an image $\mathbf { I }$ and a text prompt $\mathbf { x } = \{ x _ { 1 } , \ldots , x _ { N } \}$ to generate an output sequence $\mathbf { y } = \{ y _ { 1 } , \dots , y _ { M } \}$ in an autoregressive manner. Let $\theta$ denote the model parameters. Formally, the model maximizes:
$$
\mathbf { y } ^ { * } = \arg \operatorname* { m a x } _ { \mathbf { y } } \prod _ { t = 1 } ^ { M } p _ { \theta } \Big ( y _ { t } \Big | \mathbf { I } , \mathbf { x } , y _ { < t } \Big ) ,
$$
where $y _ { < t }$ denotes all previously generated tokens.
Figure 2: A motivating example of proactive attention steering in a visually ambiguous scenario. (a) shows the conversation context where the “orange” is actually tinted blue. (b) shows how the logits vary based on negativesteering. (c) shows how the logits vary based on positive-steering. (d) illustrates how attention-steerable contrastive decoding, which combines both negative and positive steering in a unified framework, reduce hallucinations and produce perception-driven answers.
Embeddings. A unified input is obtained from encoded image and embedded text:
$$
\begin{array} { r } { \mathbf { Z } = [ f _ { v } ( \mathbf { I } ) ; f _ { t } ( \mathbf { x } ) ] . } \end{array}
$$
Transformer Architecture. The MLLM processes $\mathbf { Z }$ through $L$ Transformer layers (Vaswani et al., 2023):
$$
\mathbf { H } ^ { ( l ) } = \mathrm { TransformerLayer } ^ { ( l ) } \left( \mathbf { H } ^ { ( l - 1 ) } \right) , \quad \mathbf { H } ^ { ( 0 ) } = \mathbf { Z } .
$$
Output Prediction. The final hidden state ${ \bf h } _ { t } ^ { ( L ) }$ is mapped to a probability distribution over the vocabulary:
$$
p _ { \theta } ( y _ { t } \mid \mathbf { I } , \mathbf { x } , y _ { < t } ) = \mathrm { Softmax } \Bigl ( \mathbf { h } _ { t } ^ { ( L ) } W ^ { P } \Bigr ) ,
$$
where $W ^ { P }$ is the output projection matrix.
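As a minimal sketch of this projection step (shapes and names are illustrative), the final hidden state is multiplied by the output projection and passed through a numerically stable softmax:

```python
import numpy as np

def next_token_distribution(h_t, W_P):
    """Map the final hidden state h_t (shape (d,)) through the output
    projection W_P (shape (d, V)) and softmax to a distribution
    over the V-token vocabulary."""
    logits = h_t @ W_P
    logits = logits - logits.max()     # numerical stability
    e = np.exp(logits)
    return e / e.sum()

rng = np.random.default_rng(0)
p = next_token_distribution(rng.standard_normal(16),
                            rng.standard_normal((16, 32)))
```

Autoregressive generation simply repeats this step, appending each sampled token to $y_{<t}$.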
# 3.2 Proactive Steering of Attention
In Figure 1, we show how Visual Contrastive Decoding (VCD) and Instruction Contrastive Decoding (ICD) indirectly alter attention distributions. Building on this insight, we now ask: what if we explicitly steer the model’s attention? Figure 2 provides a motivating example, illustrating how actively modulating attention can influence the final logits distribution.
Consider a simple query: “What is the color of the orange here?” The conversation context (Figure 2a) is based on LLaVA-1.5 7B, with a provided image in which the “orange” fruit appears to be tinted blue. We experiment with two distinct attention-steering scenarios: negative-steered logits (Figure 2b) and positive-steered logits (Figure 2c). In each case, we proportionally adjust the visual or textual attention before finalizing the output distribution.
In the negative-steered branch, we reduce attention to visual tokens or boost attention to the textual tokens. As shown in the histogram of logits, the model reduces its reliance on the visual input, causing it to fall back more heavily on the LLM’s inherent priors. As a result, it is more likely to generate answers that align with typical linguistic associations rather than the actual content of the image – insisting the color is “orange”. Conversely, the positive-steered branch increases attention to visual tokens or downgrades textual tokens, making the model more sensitive to the actual (albeit unexpected) color in the image. This leads the model to answer “blue” with higher probability.
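A toy NumPy sketch of such proportional attention adjustments, assuming a row-stochastic attention matrix and known visual-token positions (all names are ours):

```python
import numpy as np

def steer_attention(attn, visual_idx, strength):
    """Shift attention mass toward (strength > 0) or away from
    (strength < 0) the visual tokens, then renormalize each row.
    attn: (T, T) row-stochastic attention matrix."""
    steered = attn.copy()
    steered[:, visual_idx] += strength * np.abs(attn[:, visual_idx])
    steered = np.clip(steered, 0.0, None)      # keep weights non-negative
    return steered / steered.sum(axis=1, keepdims=True)

attn = np.full((4, 4), 0.25)                   # uniform toy attention; tokens 0-1 visual
pos = steer_attention(attn, [0, 1], +0.5)      # positive steer: more visual focus
neg = steer_attention(attn, [0, 1], -0.5)      # negative steer: less visual focus
```

After renormalization, the positively steered rows place more mass on the visual columns and the negatively steered rows place less, mirroring the two branches in Figure 2.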
In addition to these unidirectional adjustments, we further integrate attention steering into the contrastive decoding framework. Instead of using the original logits directly (as in VCD or ICD), we inject the attention-modulated logits. Mathematically, we redefine the contrastive decoding formulation by replacing the original logits adjustment with a positively steered version:
$$
p _ { \theta } ^ { \mathrm { f i n a l } } ~ = ~ ( 1 + \alpha ) p _ { \theta } ^ { \mathrm { p o s - s t e e r e d } } - \alpha p _ { \theta } ^ { \mathrm { n e g - s t e e r e d } } ,
$$
where $p _ { \theta } ^ { \mathrm { pos\text{-}steered } }$ and $p _ { \theta } ^ { \mathrm { neg\text{-}steered } }$ denote the output logits modified by positively or negatively steered attention, respectively.
By integrating contrastive decoding with explicit attention manipulation, our attention-steerable contrastive decoding framework (Figure 2d) sharpens the output distribution, enhancing the likelihood of the correct response while reducing the impact of competing distractors.
Figure 3: Distribution of text-centric heads across different models and experiment settings. Each heatmap visualizes how frequently a given head occurs among the most text-focused heads. The panel in the center (a) shows the result of LLaVA-1.5 with a generation length of 64 tokens; (b) and (c) show results of the same model with longer generation (512 tokens) and a different image set. Despite these changes, LLaVA-1.5 exhibits minimal JS divergence, which indicates consistent text-centric heads. In contrast, Phi2-SigLIP (d) and LLaVA-NeXT (e) deviate substantially from LLaVA-1.5, revealing model-specific attention biases and higher JS divergence.
# 3.3 Text-centric Heads
Previously, we have highlighted the impact of adjusting attention. In this section, we discuss which heads in the model are most prone to over-reliance on textual cues. To this end, we conduct an experiment to identify "text-centric" heads, i.e., those with disproportionately high text-to-visual attention ratios, and examine their consistency under different generation conditions and image sets. The experiment setup is detailed in Appendix A.
Results and Observations. Figure 3 shows the resulting heatmaps $F$ for multiple models and generation settings. The panel in the center (a) corresponds to LLaVA-1.5 on $N = 5 0 0$ images with a generation length of 64 tokens. The two heatmaps at the bottom show results of the same model but with either an increased generation length to 512 tokens (b, bottom left), or using a different set of 500 images (c, bottom right). Despite these changes, the distribution of top text-focus heads remains visually similar, and the small Jensen–Shannon (JS) divergences confirm that these text-centric heads are largely invariant under different sampling conditions for the same model.
Figure 4: Illustration of positive and negative steering. Left: text-centric heads are boosted (positive_steer) to emphasize visual content; Right: a small set of critical visual tokens is suppressed (negative_steer), inducing a stronger contrastive effect. These selective adjustments work in tandem to reduce hallucinations and improve grounding.
In contrast, the Phi2-SigLIP (d, top-left) and LLaVA-NeXT (e, top-right) panels deviate significantly from LLaVA-1.5 even under the same experiment settings, with higher JS divergence. This suggests that each model has its own unique set of heads that consistently favor textual attention over visual cues. However, within a single model, the text-centric heads persist across varied prompts, image sets, and generation lengths.
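The Jensen–Shannon divergence used to compare head-frequency heatmaps can be sketched as follows (the `eps` smoothing is our choice; counts are normalized into distributions first):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two head-frequency
    distributions, obtained by normalizing heatmap counts."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    m = 0.5 * (p + q)                                   # mixture distribution
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = js_divergence([4, 3, 2, 1], [4, 3, 2, 1])        # identical head counts
diff = js_divergence([4, 3, 2, 1], [1, 2, 3, 4])        # shifted head counts
```

JS divergence is symmetric and bounded by $\log 2$ in nats, so a value near zero indicates that two settings select essentially the same text-centric heads.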
Implications. The consistent presence of the text-centric heads within the same model indicates that certain heads are inherently prone to focusing on textual signals rather than visual content. In Section 4.2 we describe how this insight can be leveraged to selectively target the problematic heads when applying our positive steering strategy. Rather than uniformly amplifying attention across all heads, we concentrate on those that are most responsible for text-dominant attention, thereby avoiding unnecessary modifications to heads that are well-balanced in their visual-textual focus.
# 4 Methodology
In this section, we present our attention-steerable contrastive decoding framework, which explicitly modulates the model’s attention to mitigate hallucinations. Our approach has two stages: (1) Text-centric Head Selection, which identifies the heads most prone to text-centric bias, and (2) Attention Steering, where we apply positive steering to text-centric heads and negative steering to a small subset of visually critical tokens. We then integrate the adjusted logits into a contrastive decoding pipeline for generation.
Algorithm 1 Text-centric Head Selection (Offline)
Require: Reference dataset $\{ \mathbf { I } _ { 1 } , \ldots , \mathbf { I } _ { N } \}$, MLLM with $L$ layers and $H$ heads per layer, final text-centric head count $\kappa _ { \mathrm { TCH } }$
Ensure: $\mathcal { H } _ { \mathrm { POS } }$ (set of selected text-centric heads)
1: Initialize a global counter $F \in \mathbb { R } ^ { L \times H }$ to zeros
2: for $i \gets 1$ to $N$ do
3:   Run the MLLM on image $\mathbf { I } _ { i }$ (e.g., image description)
4:   for all heads $( r , c )$ in the layer-head grid do
5:     Compute the attention ratio $Q _ { i } ( r , c ) = \frac { \mathrm { textAttn } ( r , c ) } { \mathrm { visAttn } ( r , c ) }$
6:   end for
7:   Identify the top-32 indices of $Q _ { i }$ (largest ratios) and store them in $\mathcal { T } _ { i }$
8:   for all $( r , c ) \in \mathcal { T } _ { i }$ do
9:     $F ( r , c ) \gets F ( r , c ) + 1$
10:  end for
11: end for
12: Sort all heads $( r , c )$ in descending order by $F ( r , c )$
13: Select the top $\kappa _ { \mathrm { TCH } }$ heads: $\mathcal { H } _ { \mathrm { POS } } \gets$ first $\kappa _ { \mathrm { TCH } }$ heads in the sorted list
14: return $\mathcal { H } _ { \mathrm { POS } }$
# 4.1 Text-centric Head Selection
# Algorithm 2 Attention-Steerable Contrastive Decoding
Require: Image $\mathbf { I }$, text-centric heads $\mathcal { H } _ { \mathrm { POS } }$ (from Algorithm 1), critical visual-token count $\kappa _ { \mathrm { VIS } }$, steering strengths $\alpha _ { \mathrm { POS } }$, $\alpha _ { \mathrm { NEG } }$, contrastive weight $\alpha$, truncation threshold $\beta$, MLLM with $L$ layers and $H$ heads per layer
Ensure: $p _ { \theta } ^ { \mathrm { final } }$ (final logits from ASCD)
Step 1: Forward Pass with Positive Steering
1: for $l \gets 1$ to $L$ do
2:   for $h \gets 1$ to $H$ do
3:     Compute attention matrix $\mathbf { A } _ { h } ^ { ( l ) }$
4:     if $( l , h ) \in \mathcal { H } _ { \mathrm { POS } }$ then
5:       $\mathbf { A } _ { h } ^ { ( l ) } \gets \mathbf { A } _ { h } ^ { ( l ) } + \alpha _ { \mathrm { POS } } | \mathbf { A } _ { h } ^ { ( l ) } |$
6:     end if
7:   end for
8:   Normalize $\mathbf { A } ^ { ( l ) }$ and continue
9: end for
10: Obtain logits $p _ { \theta } ^ { \mathrm { pos\text{-}steered } }$
Step 2: Forward Pass with Negative Steering
11: for $l \gets 1$ to $L$ do
12:   for $h \gets 1$ to $H$ do
13:     Compute attention matrix $\mathbf { A } _ { h } ^ { ( l ) }$
14:     Identify the top-$\kappa _ { \mathrm { VIS } }$ critical visual tokens $\mathcal { V }$
15:     for all $v \in \mathcal { V }$ do
16:       $\mathbf { A } _ { h } ^ { ( l ) } ( v ) \gets \mathbf { A } _ { h } ^ { ( l ) } ( v ) - \alpha _ { \mathrm { NEG } } | \mathbf { A } _ { h } ^ { ( l ) } ( v ) |$
17:     end for
18:   end for
19:   Normalize $\mathbf { A } ^ { ( l ) }$ and continue
20: end for
21: Obtain logits $p _ { \theta } ^ { \mathrm { neg\text{-}steered } }$
Step 3: Contrastive Decoding with Truncation
22: $p _ { \theta } ^ { \mathrm { raw } } \gets ( 1 + \alpha ) \, p _ { \theta } ^ { \mathrm { pos\text{-}steered } } - \alpha \, p _ { \theta } ^ { \mathrm { neg\text{-}steered } }$
23: cutoff $\gets \log ( \beta ) + \max ( p _ { \theta } ^ { \mathrm { raw } } )$
24: $p _ { \theta } ^ { \mathrm { final } } \gets p _ { \theta } ^ { \mathrm { raw } }$.masked_fill$( p _ { \theta } ^ { \mathrm { pos\text{-}steered } } < \mathrm { cutoff } , - \infty )$
25: return $p _ { \theta } ^ { \mathrm { final } }$
As detailed in Algorithm 1, we start by identifying the most text-centric heads using a small reference dataset (e.g., 500 images) for a task (e.g., image description). For each sample, we compute the ratio of textual attention to visual attention (Eq. 6 in Appendix A) and take the top 32 heads with the highest ratio. We accumulate these counts over all samples, then choose the top $\kappa _ { \mathrm { T C H } }$ heads as "text-centric". This step is motivated by our finding (Section 3.3) that certain heads consistently favor textual content over visual cues.
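The vote-counting procedure above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name is hypothetical, and we assume the per-sample attention-ratio maps $Q_i$ have already been computed as `(L, H)` arrays.

```python
import numpy as np

def select_text_centric_heads(ratio_maps, top_per_sample=32, k_tch=16):
    """Accumulate per-sample top-head votes; return the k_tch most frequent heads.

    ratio_maps: list of (L, H) arrays, each Q_i(r, c) = textAttn / visAttn.
    Returns a list of (layer, head) index pairs (the set H_POS in Algorithm 1).
    """
    L, H = ratio_maps[0].shape
    F = np.zeros((L, H), dtype=int)  # global vote counter
    for Q in ratio_maps:
        flat = Q.ravel()
        # indices of the top_per_sample largest ratios in this sample
        top = np.argpartition(flat, -top_per_sample)[-top_per_sample:]
        for idx in top:  # one vote per selected head
            F[np.unravel_index(idx, (L, H))] += 1
    order = np.argsort(F.ravel())[::-1]  # heads sorted by vote count, descending
    return [tuple(np.unravel_index(i, (L, H))) for i in order[:k_tch]]
```

With three toy samples in which head (0, 0) always wins and head (1, 3) wins twice, the function returns those two heads first.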
# 4.2 Attention Steering
Text-centric Head Awareness and Critical Visual Token Selection. As shown in Figure 4, we refine our method by incorporating text-centric head selection for positive steering and critical token identification for negative steering. Specifically, given the selected text-centric heads, we positively steer them by increasing their attention weights with a strength of $\alpha _ { \mathrm { P O S } }$ . Figure 5a highlights how targeted steering in text-centric heads improves the positive steering effectiveness. Simultaneously, we apply negative steering to the top $\kappa _ { { \mathrm { V I S } } }$ most critical visual tokens – those receiving the highest aggregate attention across heads – reducing their attention scores by $\alpha _ { \mathrm { N E G } }$ across all heads. Through this strategy, we deliberately obscure only the most pivotal cues – this targeted suppression is sufficient to induce a strong hallucination effect in the negative branch, leading to improved contrastive decoding compared to a blanket suppression of all visual tokens. In Figure 5b, we demonstrate the impact of selectively applying negative steering to critical visual tokens.
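Both branches perform the same basic operation: add (or subtract) a fraction of the absolute attention mass, then renormalize. A single helper can sketch this. Assumptions beyond the text: the helper name is ours, attention rows are already normalized, and we clip negative weights to zero before renormalizing (the paper only specifies the add/subtract step).

```python
import numpy as np

def steer_attention(A, alpha, token_idx=None):
    """Scale attention mass up (alpha > 0) or down (alpha < 0), then renormalize.

    A: (heads, queries, keys) attention of one layer, rows summing to 1.
    token_idx: if given, steer only these key positions (negative steering on
    critical visual tokens); otherwise steer the whole head (positive steering).
    """
    A = A.copy()
    if token_idx is None:
        A = A + alpha * np.abs(A)
    else:
        A[..., token_idx] = A[..., token_idx] + alpha * np.abs(A[..., token_idx])
    A = np.clip(A, 0.0, None)                 # assumption: keep weights non-negative
    return A / A.sum(axis=-1, keepdims=True)  # renormalize each query row
```

Note that whole-head positive steering followed by renormalization leaves a uniform row unchanged; the effect comes from amplifying already-uneven rows relative to the visual tokens that are later suppressed.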
Integration with Contrastive Decoding with Truncation. Building on the attention-steering process, we first obtain two output distributions: $p_\theta^{\mathrm{pos\text{-}steered}}$ from the positively steered branch and $p_\theta^{\mathrm{neg\text{-}steered}}$ from the negatively steered branch. We
then combine these into contrastive decoding with a truncation mechanism, as detailed in the Step 3 of Algorithm 2. This process not only reinforces visually grounded predictions but also effectively mitigates the influence of spurious textual biases.
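Step 3 of Algorithm 2 reduces to a few array operations. The sketch below follows the pseudocode as written (cutoff computed from the combined distribution, mask tested against the positive branch); the function name is ours, and log-probabilities are assumed as inputs.

```python
import numpy as np

def contrastive_decode(logp_pos, logp_neg, alpha=1.0, beta=0.1):
    """Combine the two steered branches and truncate implausible tokens.

    logp_pos / logp_neg: per-token log-probabilities from the positively /
    negatively steered forward passes.
    """
    raw = (1.0 + alpha) * logp_pos - alpha * logp_neg
    cutoff = np.log(beta) + raw.max()
    # mask tokens whose positive-branch log-prob falls below the cutoff
    return np.where(logp_pos < cutoff, -np.inf, raw)
```

Tokens favored by the hallucination-prone negative branch are pushed down by the subtraction, while the truncation prevents the contrast from resurrecting tokens the positive branch already considered implausible.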
Table 1: CHAIR Evaluation Results. Lower CHAIRs and CHAIRi values indicate better performance in reducing hallucination. The best values for each metric within a model-decoding combination are highlighted in bold.
# 5 Experiments
To evaluate the effectiveness of our attention-steerable contrastive decoding framework in mitigating hallucinations in MLLMs, we conduct a range of experiments. This includes three diverse benchmarks – CHAIR, POPE, and MMHal-Bench – each designed to assess different aspects of object hallucinations. To ensure the broad applicability and robustness of our approach, we also test it on three representative models – LLaVA-1.5 7B, LLaVA-NeXT 7B, and Phi2-SigLIP – and employ three different decoding strategies: greedy search, nucleus sampling, and beam search. Details of the experimental settings are provided in Appendix B. Furthermore, we evaluate performance on standard VQA benchmarks including MMMU, MMVET, ScienceQA, TextVQA, and GQA to verify that the proposed method preserves – rather than diminishes – the model’s original visual understanding.
It is important to note that current benchmarks for evaluating multimodal models are highly variable. For example, baseline models such as LLaVA-1.5 7B often report different metric values across papers. Moreover, the CHAIR metric relies on random image sampling, which further complicates direct comparisons between papers. To address these issues, we faithfully reproduced both VCD and ICD using the parameters specified in their original papers and repositories, ensuring that our evaluations are conducted under consistent conditions. This allows for a more reliable comparison between our method and existing approaches.
CHAIR. Table 1 shows the CHAIR metrics
Figure 5: Comparative effectiveness of selective attention steering. (a): Positive steering applied only to text-centric heads (32 heads with the highest text-to-visual ratio) outperforms random or blanket head selection across various decoding strategies (Greedy, Nucleus, Beam), leading to higher POPE Accuracy and F1. (b): Negative steering focused on a small subset of critical visual tokens, integrated with contrastive decoding, significantly reduces CHAIR metrics (less hallucination) and boosts POPE metrics compared to randomly suppressing the same number of visual tokens. These results validate that targeted attention modulation on text-centric heads (for positive steering) and critical visual tokens (for negative steering) yields stronger hallucination mitigation and more grounded responses.
Figure 6: Radar charts of MMHal-Bench results. Each axis represents a different evaluation dimension in MMHal-Bench, and a larger enclosed area indicates better overall performance.
Table 2: POPE Evaluation Results. The best values for each metric within a model-decoding combination are highlighted in bold. If our ASCD achieves the second-best result, it is additionally marked with an underline.
(CHAIRs and CHAIRi), which measure object hallucination in image captioning. Across all models and decoding strategies, ASCD consistently achieves lower CHAIR values than Orig, VCD, or ICD, which illustrates ASCD’s effectiveness at mitigating object-level hallucinations.
POPE. Table 2 reports the accuracy and F1 scores under the POPE evaluation, which probes object presence with random, popular, and adversarial queries. Higher values indicate fewer hallucinations. Again, ASCD achieves the best or near-best performance in all cases. These gains persist across different model architectures, suggesting that attention steering is robust to model size and design variations.
MMHal-Bench. Figure 7 illustrates the radar charts of MMHal-Bench results for LLaVA-1.5 7B under greedy and nucleus decoding. Each axis represents a sub-dimension of the benchmark, and a larger area signifies better overall performance. ASCD exhibits the largest enclosed area, outperforming baseline, VCD, and ICD in most dimensions. This improvement aligns with the CHAIR and POPE findings, underscoring the benefit of selectively steering attention to reduce hallucinations.
Standard VQA Benchmarks. To verify that ASCD does not sacrifice a model’s general visual question answering ability, we evaluate it on five widely used VQA datasets. Across all three backbones and all decoding strategies, ASCD either matches or surpasses the original model on every dataset, while VCD and ICD consistently degrade performance, as shown in Appendix C.
Summary. Our experiments confirm that ASCD effectively reduces hallucinations and improves alignment with visual content, regardless of the model or decoding strategy employed. | Multimodal Large Language Models (MLLMs) often suffer from hallucinations. They over-rely on partial cues and generate incorrect responses. Recently, methods like Visual Contrastive Decoding (VCD) and Instruction Contrastive Decoding (ICD) have been proposed to mitigate hallucinations by contrasting predictions from perturbed or negatively prefixed inputs against original outputs. In this work, we uncover that methods like VCD and ICD fundamentally influence internal attention dynamics of the model. This observation suggests that their effectiveness may not stem merely from surface-level modifications to logits but from deeper shifts in attention distribution. Inspired by this insight, we propose an attention-steerable contrastive decoding framework that directly intervenes in attention mechanisms of the model to offer a more principled approach to mitigating hallucinations. Our experiments across multiple MLLM architectures and diverse decoding methods demonstrate that our approach significantly reduces hallucinations and improves the performance on benchmarks such as POPE, CHAIR, and MMHal-Bench, while simultaneously enhancing performance on standard VQA benchmarks. | [
"cs.CV",
"cs.CL"
] |
# I. INTRODUCTION
Background. Large language models (LLMs) have been widely adopted for code generation to enhance developer productivity. However, their performance largely depends on prompt quality, which requires significant developer expertise [1].
To enhance LLM code generation usability, researchers developed Chain-of-Thought (CoT) technology, which decomposes complex problems into simpler steps solved through sequential reasoning, significantly improving accuracy and reliability. Addressing lightweight models’ insufficient self-generation reasoning capabilities, Yang et al. [2] introduced COTTON, which provides specialized CoT reasoning through external lightweight models. Jin et al. [2] further proposed MSCoT to improve cross-language generalization. These lightweight CoT models have substantially enhanced code generation performance.
Motivation. Although CoT technology has brought significant performance improvements, as external models, CoT generation models are vulnerable to backdoor attacks [3]. Since LLMs follow CoT instructions when generating code, attackers can inject specific triggers into training data, causing CoT models to produce malicious reasoning steps (e.g., changing the logic of the code or adding malicious code) when triggers are encountered, ultimately resulting in code with security vulnerabilities or functional defects [4].
Recently, researchers have developed a variety of backdoor defense techniques, which can be roughly classified into passive defense and active defense. Passive defense methods typically detect anomalous patterns by adding extra verification steps during inference, such as ONION [5]. Active defense methods prevent attackers from injecting backdoors by incorporating additional defense mechanisms during training, such as using regularized loss functions like DeCE [6] or retraining models after cleaning and filtering the training data [7]. However, these methods often prove inadequate against increasingly stealthy backdoor attacks [3]. Unlike previous approaches that randomly insert rare words [8], [9], SABER [3] leverages CodeBERT to adaptively identify key tokens in the input prompt and employs specific syntactic patterns as triggers. This strategic approach renders perplexity-based defenses like ONION [5] ineffective, as the triggers are naturally integrated into the input prompt and difficult to distinguish from legitimate content. Therefore, developing effective defense mechanisms specifically for CoT models has become particularly important.
Method. In this work, we propose GUARD, specifically designed to counter backdoor attacks in Chain-of-Thought for neural code generation. Specifically, GUARD detects and repairs potentially backdoored CoT steps through two collaborative agent components, GUARD-Judge and GUARD-Repair, thereby ensuring the security and reliability of code generation.
The GUARD-Judge component identifies potentially attacked samples based on (1) determining whether the CoT steps correctly solve the problem; and (2) detecting possible anomalous patterns or backdoor triggers in the CoT steps. If the GUARD-Judge component identifies that a sample may be under attack, it passes the sample to the GUARD-Repair component, which uses a retrieval-augmented generation method to regenerate secure CoT steps for the samples identified as abnormal.
The main contributions can be summarized as follows:
• We develop GUARD, a dual-agent defense framework consisting of GUARD-Judge and GUARD-Repair, providing a comprehensive solution to mitigate backdoor attacks.
• We conduct extensive experiments demonstrating that GUARD significantly outperforms existing defense methods in detecting backdoor attacks while maintaining the quality of CoT generation.
• We share our corpus and scripts on our project homepage 1 to promote the replication of our research.
# II. BACKGROUND AND RELATED WORK
# A. CoT in Code Generation
Let $\mathcal { D } = \{ ( X _ { i } , Y _ { i } ) \} _ { i = 1 } ^ { | \mathcal { D } | }$ denote a code generation dataset, where $X _ { i }$ represents the natural language description and $Y _ { i }$ represents the corresponding code snippet.
The CoT generation model $M _ { c o t }$ generates reasoning steps $C _ { i }$ conditioned on $X _ { i }$ :
$$
P _ { \theta _ { c o t } } ( C _ { i } | X _ { i } ) = \prod _ { k = 1 } ^ { m } P _ { \theta _ { c o t } } ( C _ { i , k } | X _ { i } , C _ { i , 1 : k - 1 } )
$$
The code generation model $M _ { c o d e }$ generates code $Y _ { i }$ conditioned on $X _ { i }$ :
$$
P _ { \theta _ { c o d e } } ( Y _ { i } | X _ { i } ) = \prod _ { k = 1 } ^ { n } P _ { \theta _ { c o d e } } ( Y _ { i , k } | X _ { i } , Y _ { i , 1 : k - 1 } )
$$
When augmenting code generation with CoT, the probability becomes:
$$
P ( Y _ { i } | X _ { i } ) \propto \underbrace { P _ { \theta _ { c o t } } ( C _ { i } | X _ { i } ) } _ { M _ { c o t } } \times \underbrace { P _ { \theta _ { c o d e } } ( Y _ { i } | X _ { i } , C _ { i } ) } _ { M _ { c o d e } }
$$
Recent research has made significant advances in CoT approaches for code generation. For specialized CoT models, Yang et al. [10] proposed COTTON, a lightweight CoT generation model that has gained widespread adoption in the research community. Expanding the multilingual capabilities of CoT, Jin et al. [11] proposed MSCoT, which supports reasoning across multiple programming languages. These advancements collectively demonstrate the growing importance and sophistication of CoT approaches in modern code generation systems.
# B. Backdoor Attack
A backdoor attack embeds hidden triggers into a model that activate predefined malicious behaviors while maintaining normal performance on benign inputs. A targeted backdoor attack causes the model’s parameters to shift from $\theta$ to $\theta _ { p }$ by:
$$
\begin{array} { r l } & { \theta _ { p } = \underset { \theta } { \arg \operatorname* { m i n } } \left\{ \mathbb { E } _ { ( \boldsymbol { x } , \boldsymbol { y } ) \in D _ { \mathrm { c l e a n } } } \left[ \mathcal { L } \big ( f ( \boldsymbol { x } ; \boldsymbol { \theta } ) , \boldsymbol { y } \big ) \right] \right. } \\ & { \qquad \left. + \mathbb { E } _ { ( \boldsymbol { x } ^ { p } , \boldsymbol { y } ^ { p } ) \in D _ { \mathrm { p o i s o n } } } \left[ \mathcal { L } \big ( f ( \boldsymbol { x } ^ { p } ; \boldsymbol { \theta } ) , \boldsymbol { y } ^ { p } \big ) \right] \right\} , } \end{array}
$$
where $\mathcal { L }$ is the loss function, and $D _ { \mathrm { c l e a n } }$ and $D _ { \mathrm { p o i s o n } }$ are clean and poisoned datasets. The poisoned dataset contains inputs $x ^ { p }$ with triggers and corresponding malicious outputs $y ^ { p }$ .
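An empirical version of this objective sums the average loss over the clean pairs and over the trigger/malicious-output pairs. The sketch below illustrates only the objective's structure; the function names, the string trigger, and the toy loss are all hypothetical (SABER, discussed next, places its triggers adaptively rather than appending them).

```python
def inject_trigger(prompt, trigger="**step**"):
    """Hypothetical trigger placement: append a marker to form a poisoned input x^p."""
    return prompt + " " + trigger

def poisoned_objective(loss_fn, clean_data, poison_data):
    """E_clean[L(f(x), y)] + E_poison[L(f(x^p), y^p)], estimated on finite sets."""
    e_clean = sum(loss_fn(x, y) for x, y in clean_data) / len(clean_data)
    e_poison = sum(loss_fn(xp, yp) for xp, yp in poison_data) / len(poison_data)
    return e_clean + e_poison
```

Because the clean term is retained, the poisoned model $\theta_p$ keeps near-normal behavior on benign inputs, which is exactly what makes the backdoor hard to detect.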
For CoT models specifically, Jin et al. [3] introduced SABER, which employs stealthier and more natural triggers than traditional NLP backdoor methods like BadPre and RIPPLe.
Fig. 1. Overview of the Threat Model
# C. Backdoor Defense
Passive Defense methods operate during inference without modifying model parameters. They detect potential backdoor triggers through additional verification mechanisms:
$$
D _ { p a s s i v e } ( x , f ( x ; \theta _ { p } ) ) = { \left\{ \begin{array} { l l } { 1 , } & { { \mathrm { i f ~ d e t e c t e d ~ a s ~ b a c k d o o r e d } } } \\ { 0 , } & { { \mathrm { o t h e r w i s e } } } \end{array} \right. }
$$
Active Defense methods work during training or before deployment through two main approaches: (1) cleaning the dataset by removing poisoned samples before training, or (2) employing regularized training methods regardless of data contamination. Formally:
$$
\theta _ { d } = \arg \operatorname* { m i n } _ { \theta } \left\{ \mathbb { E } _ { ( x , y ) \in D _ { \mathrm { f i l t e r e d } } } \left[ \mathcal { L } ( f ( x ; \theta ) , y ) \right] + \mathcal { R } ( \theta ) \right\}
$$
where $D _ { \mathrm { f i l t e r e d } }$ is the cleaned dataset and $\mathcal { R } ( \theta )$ is a regularization term to suppress backdoor behavior.
While these defense mechanisms (e.g., DeCE [6] and ONION [5]) have shown effectiveness against traditional backdoor attacks in NLP models, they face significant challenges when confronting sophisticated attacks like SABER that are specifically designed for CoT models. As mentioned in Section I, SABER’s stealthy triggers and natural integration into the reasoning process make it particularly difficult for existing methods to detect or mitigate. This limitation in current defense approaches against CoT-specific backdoor attacks motivates our development of GUARD, a specialized framework designed to address the unique vulnerabilities in CoT-based code generation systems.
# III. THREAT MODEL
Fig. 1 illustrates our threat model. In a secure environment, developers train models on clean datasets and distribute them through platforms. However, malicious attackers may attack this process by poisoning training data to implant backdoors that manipulate reasoning steps to control code generation.
# A. Attacker Objectives
Attackers aim to implant stealthy backdoors in CoT models that activate only when triggered by specific inputs, producing manipulated reasoning steps leading to malicious code. The model must maintain normal functionality with benign inputs to avoid detection.
Fig. 2. The framework of GUARD
# B. Attack Capabilities and Methods
We assume attackers can poison datasets by injecting samples with crafted triggers, then distribute poisoned models through public platforms.
In this study, we focus on defending against SABER [3], the state-of-the-art backdoor attack method for CoT models. SABER uses CodeBERT to analyze input-operator relationships, identifying optimal locations for trigger insertion using Markdown bold syntax (** markers). Malicious outputs are achieved through subtle operator mutations in the reasoning chain.
# C. Defender’s Capabilities
We assume defenders have access to the training dataset and can detect and repair poisoned samples. Based on the assumption that attackers can poison datasets, defenders can address the root cause of backdoor vulnerabilities by ensuring training data integrity, rather than attempting to mitigate already-embedded backdoors in deployed models.
# IV. APPROACH
In this section, we introduce the details of GUARD, which is illustrated in Fig. 2. Overall, GUARD consists of two modules: (1) GUARD-Judge. This module identifies potentially backdoored CoT samples by evaluating their correctness and detecting anomalous patterns. (2) GUARD-Repair. This module regenerates secure CoT steps for samples flagged as suspicious by GUARD-Judge using retrieval-augmented generation.
# A. GUARD-Judge
GUARD-Judge serves as the first line of defense against backdoor attacks by analyzing CoT samples from two critical perspectives:
(1) Correctness Evaluation. Given a problem statement and its corresponding CoT solution, GUARD-Judge first evaluates whether the CoT solution correctly addresses the problem. This evaluation focuses on:
• Logical and Algorithmic: Assessing whether the CoT steps follow a logical progression and are algorithmically correct.
• Requirement Alignment: Checking if all requirements specified in the problem statement are addressed in the CoT.
(2) Patterns Detection. Even if a CoT solution appears functionally correct, it may still contain hidden backdoor triggers. GUARD-Judge performs anomalous patterns detection by:
• Pattern Analysis: Identifying unusual formatting, unexpected symbols, or suspicious text patterns that might serve as triggers.
• Feedback Analysis: Providing feedback on the exact position of potential triggers or clear descriptions of detected anomalies in the CoT.
The final judgment combines both evaluations, and the final output is a binary label indicating whether the CoT is potentially backdoored.
# B. GUARD-Repair
When GUARD-Judge identifies a suspicious CoT sample, GUARD-Repair regenerates a secure alternative using retrieval-augmented generation.
For a flagged problem statement $x$ , GUARD-Repair employs the BM25 algorithm to retrieve a set of $k$ similar problems $\{ x _ { 1 } , x _ { 2 } , . . . , x _ { k } \}$ along with their verified safe CoT solutions $\{ c _ { 1 } , c _ { 2 } , . . . , c _ { k } \}$ from the clean subset of samples (those not detected as poisoned).
Using the retrieved examples as reference, GUARD-Repair constructs a prompt that includes these similar examples and their safe CoT solutions to guide the generation of a new, secure CoT solution $c ^ { \prime }$ for the original problem $x$ . This retrieval-augmented approach leverages the knowledge from clean, similar examples to guide the LLM in generating a secure CoT that maintains the problem-solving effectiveness while avoiding potential backdoor patterns.
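The retrieval step can be sketched with a plain Okapi BM25 scorer. This is an illustrative standalone implementation, not GUARD's code: the function names, whitespace tokenization, and the parameter defaults $k_1 = 1.5$, $b = 0.75$ are our assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against a tokenized query."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def retrieve_top_k(query, problems, k=2):
    """Indices of the k clean problems most similar to the flagged problem x."""
    docs = [p.lower().split() for p in problems]
    scored = bm25_scores(query.lower().split(), docs)
    return sorted(range(len(problems)), key=lambda i: -scored[i])[:k]
```

The returned indices select the verified-safe CoT solutions $\{c_1, \ldots, c_k\}$ that are placed into the repair prompt as few-shot references.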
Finally, by combining these two complementary modules, GUARD provides a comprehensive defense mechanism against backdoor attacks in CoT-based code generation. GUARD-Judge effectively identifies suspicious samples, while GUARD-Repair ensures that clean, secure alternatives are available for use in downstream code generation tasks.
# V. EXPERIMENTAL SETUP
In our empirical study, we aim to answer the following two research questions (RQs). RQ1: How does GUARD compare to baseline defense methods in preserving CoT generation quality? RQ2: What impact does GUARD have on code generation performance relative to existing defenses?
# A. Experimental Subjects
1) Models: For CoT generation, we employ widely-used COTTON [10] as our primary CoT model, which is a specialized lightweight CoT generation model designed for code generation tasks.
To assess the impacts on code generation models, we utilize DeepSeek-Coder and Qwen2.5-Coder, two state-of-the-art code generation models. We experiment with both 1.5B and 7B parameter versions in their instruct variants. This diverse selection allows us to evaluate model sensitivity to CoT poisoning across different scales and models.
2) Datasets: For CoT experiments, we use CodeCoT-9k [10], a high-quality dataset containing 9,000 samples created by heuristic rules and multi-agent alignment. We use this as our training set, with a portion randomly selected for poisoning. Then, we use HumanEval-CoT and OpenEval-CoT as test sets with 164 and 178 samples respectively, following Yang et al.’s [10] methodology for generating CoTs on standard benchmarks.
For code generation, we follow Yang et al.’s [10] methodology, using HumanEval and OpenEval as test sets. HumanEval [12] contains 164 Python programming problems with an average of 7.8 test cases per problem. OpenEval [10] comprises 178 problems from the AVATAR [13] dataset, augmented with manually designed test cases.
# B. Evaluation Metrics
1) CoT Generation: We assess CoT quality using three standard metrics:
BLEU-4 [14] evaluates n-gram overlap, focusing on precision.
METEOR [15] incorporates synonyms and morphological variations for semantic evaluation.
ROUGE-L [16] measures the longest common subsequence, balancing recall and precision.
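As a concrete example of the last metric, ROUGE-L can be computed from a longest-common-subsequence table. This is a generic sketch with whitespace tokenization and the common $\beta = 1.2$ recall weighting, both our assumptions rather than details from the paper.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (DP table)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L F-score: LCS-based precision and recall, recall-weighted by beta."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return (1 + beta ** 2) * p * rec / (rec + beta ** 2 * p)
```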
2) Code Generation: We use Pass@1 to measure the model’s ability to generate correct code on the first attempt.
3) Backdoor Attack Evaluation: We quantify attack effectiveness using Attack Success Rate (ASR):
$$
\mathrm { A S R } = { \frac { \sum _ { i = 1 } ^ { N } \mathbb { I } ( M _ { \mathrm { P C o T } } ( { \mathcal { T } } ( x ) ) = y ^ { p } ) } { N } }
$$
where $N$ is the number of test samples, $M _ { \mathrm { P C o T } }$ is the poisoned model, and $\mathcal { T } ( x )$ embeds the trigger into input $x$ .
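The ASR formula translates directly into code. A minimal sketch, with hypothetical callables standing in for the poisoned CoT model and the trigger-embedding function $\mathcal{T}$:

```python
def attack_success_rate(poisoned_model, trigger_fn, test_inputs, malicious_outputs):
    """Fraction of triggered inputs on which the poisoned model emits the
    attacker's target output y^p (the indicator sum in the ASR formula)."""
    hits = sum(
        poisoned_model(trigger_fn(x)) == yp
        for x, yp in zip(test_inputs, malicious_outputs)
    )
    return hits / len(test_inputs)
```

Running the same evaluation with the identity function in place of `trigger_fn` gives the false-activation rate on benign inputs, which a stealthy backdoor keeps near zero.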
# C. Defense Baselines
To show the competitiveness of our proposed approach GUARD, we evaluate it against four state-of-the-art backdoor defense baselines in the field of NLP and code generation.
Passive Defense. We select ONION and Paraphrasing as our passive defense baselines.
Specifically, ONION [5] employs the GPT-2 language model to neutralize backdoor activation by identifying and eliminating outlier words in test samples based on perplexity measures.
Paraphrasing [17] uses GPT-3.5-Turbo to refactor user prompts. In the context of CoT backdoor attacks, we utilize the prompt ”Assuming my prompt is unsafe, please paraphrase my question to the safe prompt.” to allow GPT-3.5-Turbo to perform the paraphrasing.
Active Defense. We select DeCE and BKI as our active defense baselines.
Specifically, DeCE [6] defends against backdoor attacks by introducing a deceptive distribution and label smoothing, leveraging the ”early learning” phenomenon to prevent the model from overfitting to backdoor triggers.
BKI [7] removes the poisoned samples from the training set by identifying the importance of each token in the training set, and retrains the model to obtain a model without a backdoor.
# D. Experimental Settings
TABLE I HYPER-PARAMETERS AND THEIR VALUES
In our empirical study, the hyper-parameters and their values are summarized in Table I. GUARD-Judge employs DeepSeek-R1 to ensure high accuracy in anomaly detection, while GUARD-Repair uses GPT-3.5 for generation to maintain stylistic consistency with the original CodeCoT-9k dataset, which was created using the same model. This design choice ensures that repaired CoT steps seamlessly integrate with the existing dataset patterns.
TABLE II IMPACT OF DIFFERENT POISONING RATIOS.
# VI. EXPERIMENTAL RESULTS
# A. RQ1: Performance Comparison in CoT Generation
In our experiments, we followed previous studies by setting poisoning ratios of $4 \%$ and $6 \%$ and analyzed two datasets: HumanEval-CoT and OpenEval-CoT.
The results are shown in Table II. From the perspective of CoT generation quality, GUARD achieves the best performance in most cases across the BLEU-4, METEOR, and ROUGE-L metrics, particularly at the higher poisoning ratio $( 6 \% )$ . This demonstrates that our method maintains high-quality CoT generation even under more severe poisoning attacks.
From a security perspective, GUARD significantly reduces the ASR.
(1) Compared to Passive Defense Methods. Passive defense methods like ONION and Paraphrasing maintained ASRs of $61.90\%$ and $38.10\%$ respectively at $6\%$ poisoning, while GUARD reduced the ASR to $19.05\%$ on HumanEval-CoT. This indicates that our method is more effective in detecting and mitigating backdoor attacks.
(2) Compared to Active Defense Methods. While BKI follows a similar process, it relies on perplexity-based filtering, whereas our dual-agent framework demonstrates superior performance. For example, at a $6\%$ poisoning ratio, our method reduced the ASR from $72.73\%$ to $36.36\%$ on the OpenEval-CoT dataset, compared to BKI’s reduction to $40.91\%$ . Additionally, DeCE, which uses a regularized loss function during training, showed limited effectiveness due to the stealthy nature of the attacks, reducing the ASR only from $72.73\%$ to $68.18\%$ . This further highlights the superiority of our approach.
Summary for RQ1: GUARD excels in both preserving CoT generation quality and enhancing security, particularly under higher poisoning ratios.
B. RQ2: Performance Comparison in Code Generation
TABLE III IMPACT OF DIFFERENT POISONING RATIOS IN HUMANEVAL-COT.
We evaluated the impact of our defense method on code generation performance using multiple models (DS-1.3b-Ins, DS-6.7b-Ins, QW2.5-1.5b-Ins, QW2.5-7b-Ins) and two datasets (HumanEval, OpenEval), comparing Pass@1 scores across zero-shot, CoT, and SABER backdoor attack scenarios (with and without defenses).
(1) Without Defense. Using CoT generally improved Pass@1 scores. For example, DS-6.7b-Ins increased from 71.43 to 76.19 on HumanEval. Under SABER attacks, scores either decreased or remained unchanged. A decrease indicates that the backdoor in the CoT was effective, leading to buggy code, while unchanged scores suggest that the model did not follow the backdoor in the CoT, which still represents a potential risk.
(2) With Defense. Our method consistently achieved the highest Pass@1 scores across most models and datasets. For instance, DS-1.3b-Ins improved from 57.14 to 66.67 on HumanEval, and DS-6.7b-Ins increased from 71.43 to 80.95. On OpenEval, QW2.5-7b-Ins improved from 18.18 to 27.27.
In comparison, passive defense methods like ONION showed mixed results, while active defense methods like DeCE demonstrated limited effectiveness. Our method not only defends against backdoor attacks but also enhances code generation performance by ensuring CoT integrity, outperforming other defense mechanisms in attack scenarios.
Summary for RQ2: GUARD outperforms both passive and active defense mechanisms in maintaining and improving Pass@1 scores under backdoor attack scenarios.
# VII. THREATS TO VALIDITY
Internal threats. The internal threat is the potential bias introduced by our experimental setup and parameter selection. To mitigate this, we conducted multiple runs with different random seeds and performed extensive hyperparameter tuning to ensure the robustness of our results.
External threats. The external threat is the choice of models and datasets, which may limit the generalizability of our findings. We addressed this by using multiple diverse models (DeepSeek-Coder, Qwen2.5-Coder) and conducting diverse experiments (HumanEval-CoT, OpenEval-CoT) to validate the consistency of our approach.
Construct threats. This threat relates to the suitability of our selected performance measures. To alleviate this threat, we employed a comprehensive set of evaluation metrics (BLEU-4, METEOR, ROUGE-L, Pass@1, ASR) that cover different aspects of model performance and security quality. | With the widespread application of large language models in code generation, recent studies demonstrate that employing additional Chain-of-Thought generation models can significantly enhance code generation performance by providing explicit reasoning steps. However, as external components, CoT models are particularly vulnerable to backdoor attacks, which existing defense mechanisms often fail to detect effectively. To address this challenge, we propose GUARD, a novel dual-agent defense framework specifically designed to counter CoT backdoor attacks in neural code generation. GUARD integrates two core components: GUARD-Judge, which identifies suspicious CoT steps and potential triggers through comprehensive analysis, and GUARD-Repair, which employs a retrieval-augmented generation approach to regenerate secure CoT steps for identified anomalies. Experimental results show that GUARD effectively mitigates attacks while maintaining generation quality, advancing secure code generation systems. | [
"cs.SE"
] |
# I. INTRODUCTION
HUMANS have a remarkable ability to make sense of complex, ever-changing traffic environments. For example, in busy urban traffic, human drivers quickly interpret numerous cues, such as distances to surrounding vehicles, potential hazards on the road, and the intentions of other road users. They then integrate these cues into informed decisions that balance safety, efficiency, comfort, and even social compliance [49]. This aptitude emerges from fundamental cognitive processes combined with the cumulative experience drivers gain over time [52]. By continuously refining their internal understanding of the driving context, humans exhibit flexibility and resilience in ever-changing traffic conditions.
In contrast, AVs mostly operate with predefined rules or principles, which limits their ability to deal with unexpected and emergent situations. Although recent end-to-end methods give AVs advanced capabilities for handling complex traffic scenarios by comprehensively processing multi-source situated data [44], such methods process data as a black box and lack an understanding of dynamic environment semantics, which makes vehicle behavior harder for other human road users to comprehend and predict. In an effort to emulate human competence, research on human-like decision making for AVs has steadily progressed [67] [28]. Many existing methods rely on sophisticated learning techniques, including supervised and reinforcement learning (RL), to help AVs handle a range of driving scenarios [54] [10] [37] [56]. However, humans do not always strictly follow traffic laws, and traffic laws do not always specify traffic behavior. Moreover, even if a driver encounters the same traffic situation twice, they may make different decisions, which changes the situation state and triggers diverse evolutions. As traffic densities increase and participants interact with each other more frequently, methods that learn and imitate at the behavior level reveal their limited transferability across different contexts [37] [63]. Therefore, the elements that influence decision making, and why and how they exert influence, need further analysis in order to design interpretable human-like decision-making models for human-vehicle mutual understanding.
A key reason for human success in these settings is the capacity to synthesize multiple sources of information and weigh potential outcomes. Briefly speaking, human drivers navigate dense traffic by systematically seeking advantages and avoiding drawbacks in light of accumulated experience: they keep an eye on the surrounding traffic participants, predict their intentions, and assess the driving risks of different responses, while considering their right to use the road space [6]. This interplay of situational awareness, personal judgment, and adherence to traffic rules enables humans to drive in a manner that is both appropriately cautious and dynamically responsive to fluctuating conditions. At the same time, they also factor in social norms and personal mood, including acknowledgment of right-of-way (ROW) and courtesy to vulnerable traffic participants, to ensure that individual decisions align with the shared interests of other road users. Through repeated interaction with the environment, humans further sharpen their decision-making processes, whether in expected or unexpected scenes.

Fig. 1: The human driving decision-making process, from decision rehearsal (decision making and policy evaluation) to decision implementation (intentions become actions). The driver predicts others' intentions, evaluates the current situation in light of past experience, assesses the reward to self and the fitness to the environment (safety, efficiency, comfort, social compliance), and then makes and implements a decision (steering, overtaking, cruising). The resulting changes in the vehicle's speed, position, geo-topological relationships, and right-of-way (ROW) with respect to other road users feed back into the process.
In this paper, we propose a Safety-First Human-Like Decision-Making (SF-HLDM) framework that enables AVs to drive safely, efficiently, and comfortably, with social compliance, in time-varying traffic flow. To this end, we first analyze how humans make decisions in traffic, as Fig. 1 shows. The proposed framework is described in detail in Section III.
The novelty and significant contributions of this research include:
1) A multi-feature late fusion spatial-temporal attention mechanism is designed to capture other road users’ intentions within the most appropriate time series, which substantially improves intention-inference accuracy and enhances the AV’s situational awareness.
2) The concept of the absolute ROW area is proposed and described mathematically; based on it, the ROW-violation index is defined and the estimation of an AV’s behavioral social compliance is realized. This provides AVs with prior knowledge for making more socially compatible decisions.
3) The genetic algorithm is introduced into the decision-making module to optimize the weight matrix, expanding its search space more efficiently while avoiding the trap of local optima.
By considering ROW belongingness when analyzing the benefit and cost of each situated decision, the framework adapts decision parameters in real time to preserve safety margins while ensuring contextually appropriate driving maneuvers. It thereby reflects the way human drivers balance risk management, driving quality, and collective interests, achieving reduced collision rates, improved efficiency and comfort, and smoother integration of AVs into modern traffic systems.
# II. RESEARCH FOUNDATIONS
# A. Human Behavior in Traffic
Human drivers exhibit a nuanced decision-making process that combines driving experience, recognition of right-of-way (ROW), and ongoing assessments of risks and rewards, while adhering to social conventions. This behavioral sequence can be understood through the lens of social behavioral theories, which posit that decisions are often shaped by a complex interplay of personal goals, perceived norms, and situational cues [3]. When seeking advantages and avoiding drawbacks, drivers take into account not only immediate conditions, such as vehicle spacing, potential hazards, and traffic signals, but also more implicit factors, including the intentions of surrounding road users, social norms, and collective expectations [16] [38] [57]. This multifaceted process is remarkably adaptive and flexible, showing consistency across a wide range of traffic scenarios. Whether merging onto a highway, navigating a busy intersection, or responding to an unexpected pedestrian crossing, human drivers rely on these same foundational principles to prioritize safety while striving to reach their destinations efficiently.
1) Why Safety First: Safety-first decision making is naturally essential for humans in traffic, given that they must operate in environments where the stakes of errors are extremely high and the human body is too fragile to withstand them. Although industry experts have argued for "responsibility-first" approaches [39], which emphasize strict adherence to legally assigned duties, such logic does not fully account for the fluid nature of real-world traffic. Generally speaking, humans do not strictly obey traffic rules; for example, red-light running [53] is a common but dangerous social behavior. Even so, AVs are supposed to give way to pedestrians instead of hitting them.
Conditions on the road are often uncertain and complex, with many kinds of surrounding traffic participants holding different intentions, and the behavior of pedestrians and cyclists frequently fluctuates in ways that cannot be entirely captured by static rules. By placing safety first, AVs can proactively manage potential collisions and other undesirable events, thereby better serving the overarching objective of protecting human life. This risk-sensitive stance aligns with the human instinct to prioritize harm avoidance when in doubt, a principle also supported by ethical and regulatory guidelines in automotive engineering [50].
2) What It Means to Be Human-Like: When we ask an AV to drive like a human, we are asking it to possess a spectrum of cognitive and perceptual abilities that reflect the sensitivity, flexibility, adaptability, and social awareness that human drivers naturally exhibit [59]. Such abilities include perceiving subtle contextual cues and adjusting behavior in real time, and prioritizing safety while maintaining an efficient flow of movement through socially compatible behavior. They also involve the capacity to gauge others’ intentions, interpret road users’ implicit signals, negotiate the right to use the road at a specific time, and uphold common driving customs that extend beyond the written rules of the road. To equip AVs with these human-centric capabilities, we analyze how humans understand ROW attribution and violation while driving in fluctuating traffic surrounded by other road users, with consideration of social norms and traffic laws. This in turn enhances trust, predictability, and meaningful interpretability of automated actions for all who use or encounter these systems.
Driving like a human offers multiple benefits for AVs. First, it improves predictability and social compatibility: other road users, whether human or machine, have come to expect certain patterns of behavior, such as subtle gestures of courtesy or slight speed adjustments that indicate cooperative intent. When AVs emulate these behaviors, their actions become more predictable and understandable to human drivers, cyclists, and pedestrians, thereby enhancing overall road safety and acceptance. Second, human-like driving accounts for the tacit knowledge that human drivers rely on: unspoken norms and situational nuances that cannot be entirely codified in traffic laws, such as always being courteous to vulnerable participants. Third, by adopting a decision-making strategy that is attentive to contextual factors such as evolving traffic dynamics, AVs can react more robustly to unexpected circumstances, leading to fewer edge cases and a smoother integration of autonomous systems into existing traffic networks.
# B. The Spatial-Temporal Attention Mechanism
Driving in time-varying traffic flow requires attention to the geo-topological and social-impact relationships between vehicles, and to key information within certain time series. The spatial-temporal attention (STA) mechanism has therefore recently gained prominence in the design of prediction modules for AVs [7] [55] [1]. Rooted in deep learning methods such as attention-based neural networks, STA enables the model to selectively focus on the most important inputs in both the spatial (e.g., lateral and longitudinal positions of vehicles or pedestrians) and temporal (e.g., recent movement trends, sudden changes in velocity) dimensions [66] [24] [40]. By integrating these forms of attention, the autonomous driving system refines its perception of the traffic environment, filtering out noise and prioritizing crucial cues for decision making [47] [51].
Recent works show that STA mechanisms significantly increase accuracy in predicting the behavior of surrounding traffic participants [46] [25]. In particular, attention-based architectures can highlight vehicles that pose an imminent risk, or emphasize periods when abrupt changes in velocity occur, thereby allowing the AV to plan safer and more efficient maneuvers [11]. This attention-driven trajectory-computing framework for surrounding traffic participants complements traditional sensor fusion techniques and predictive models by giving the decision-making algorithm a dynamically focused view of the most critical aspects of the scene [41]. In dense traffic settings where interactions are multi-sourced, diversified, and frequent, the STA approach helps identify potential ROW conflicts and predict the real-time intentions of other drivers, thus reducing the likelihood of abrupt or dangerous maneuvers [13] [64]. Therefore, it provides AVs with the capability to make appropriate decisions by learning from better predictions.
# C. Deep Evolutionary Reinforcement Learning Methods
Reinforcement learning (RL) offers a natural framework for enabling AVs to learn from continuous interaction with the environment, optimizing decision policies based on reward signals tied to safety, efficiency, and comfort [24] [15] [62]. State-of-the-art RL algorithms have shown promise in highway merging [47] [17], lane changing [51], and adaptive cruise control tasks [51]. They excel at balancing multiple objectives, such as minimizing travel time while respecting speed limits and maintaining safe following distances. However, these methods often face challenges in non-stationary and complex environments, which can cause them to converge to suboptimal or locally optimal strategies. Moreover, training efficiency becomes a bottleneck when dealing with the high-dimensional state and action spaces common in real-world driving scenarios, leading to significant computational overhead.
Deep evolutionary reinforcement learning (DERL) combines the benefits of deep neural networks with evolutionary strategies to address the above-mentioned limitations of RL. Whereas standard deep RL may get stuck in local optima or require extensive hyperparameter tuning, the evolutionary component in DERL facilitates exploration over a broader range of policy parameters [8]. Specifically, evolutionary learning uses global search and evolutionary selection mechanisms to effectively mitigate overfitting risks while improving the model’s generalization performance [11] [33]. It also uses distributed asynchronous learning to leverage the scaling of computation and models that has been successful in other fields of AI. This augmented exploration expands the diversity of the search space and identifies high-performing strategies that might be missed by gradient-based methods alone [48] [29].
In the context of time-varying traffic flows, DERL enables the decision-making model to continuously adapt to changing conditions by choosing the policy with the highest fitness value [14] [61]. Precisely, by generating a population of candidate policies and evaluating their fitness to the environment, DERL incorporates novel actions that prove advantageous in complex or unforeseen scenarios, thus improving the overall robustness of the ego vehicle. Furthermore, DERL’s inherent parallelism and reduced sensitivity to hyperparameters can expedite training, making it more feasible to deploy learned policies in real-time traffic. Combined with spatial-temporal attention modules, DERL can more effectively and efficiently weigh the critical elements of driving scenarios, ultimately enhancing both safety and efficiency in decision making.
# D. Social Compliance Estimation
Dense traffic frequently involves a wide array of road users, including cars, buses, motorcycles, bicycles, and pedestrians. Decision making in such contexts should follow established legal and compliance frameworks while also accounting for situational nuances. These nuances may include whether a driver appears to cede space, how traffic participants coordinate road use with others at complex intersections, or whether pedestrians are waiting on the curb to cross [13]. Advanced decision-making models employ real-time environment perception and predictive analytics to continuously update their estimates of other participants’ intentions, ensuring that AVs can respond responsibly to dynamic changes in traffic flow.
Most current research on social compliance in traffic assumes that human drivers exhibit varying degrees of altruism or egoism, and uses the reward to self as a reflection of diverse social-value orientations [64] [45]. More specifically, such methods describe other road users as altruistic or egoistic based on data [43], and define the ethical performance of traffic participants accordingly [42] [70]. However, the driving behavior on each trip is heavily influenced by the urgency of the task, so we argue it is not fair to judge road users at the ethical level based on data alone, especially in emergent situations. In addition to formal traffic laws, implicit social norms also regulate human behavior when living in a society or driving in a city. For example, in traffic, each participant has their own awareness of right-of-way (ROW) [36]. Drivers who consistently violate ROW by aggressively merging or cutting off others are commonly perceived as reckless or discourteous [34]. In social behavioral terms, these actions disrupt shared social norms and may provoke negative reactions from other drivers. While such maneuvers can occasionally reduce individual travel time, they increase the risk of accidents and broader inefficiencies due to sudden braking, lane changes, or evasive maneuvers by nearby vehicles. Therefore, the number and frequency of ROW violations against others are a more appropriate basis for evaluating a driver’s social compliance.
# III. METHOD
In the SF-HLDM framework, we first design an intention inference model, then define the absolute ROW area for each traffic participant and introduce the ROW-violation index to estimate the social compliance of the ego vehicle’s driving behavior, and finally propose a social-compliant decision model based on a deep evolutionary reinforcement learning method. The proposed hierarchical, progressive framework is illustrated in Fig. 2 and consists of three parts: the intention inference module, the social compliance estimation module, and the evolutionary decision-making model. The intention inference module applies a spatial-temporal attention mechanism to extract globally critical features from traffic scenarios over recent continuous time sequences, encoding the spatial relationships among vehicles and the temporal evolution of their trajectories. It then decodes the driving intentions of surrounding vehicles and embeds the inferred intention vectors into the state space of the decision-making layer. The decision-making model uses a hybrid reward function based on social compliance and environment fitness to train each DRL agent, enhancing its performance and robustness in complex traffic environments. Within it, the genetic algorithm is applied to optimize the S-DERL AI agent’s network weights, which effectively mitigates overfitting risks while improving the model’s generalization performance. The agent then operates with a hierarchical action space, where the high-level module generates abstract driving strategies (e.g., lane keeping or lane changing), and the low-level module outputs specific vehicle control commands (e.g., steering angle or acceleration).
Fig. 2: SF-HLDM based on Situation Awareness and Social Compliance
# A. Recognizing the Critical Participants based on Multi-Feature Late Fusion Spatial-Temporal Attention Mechanism
When human drivers are situated in a tricky environment, they typically make the next decision by considering a sequence of historical observations rather than depending only on the current observation, and they extract the critical elements by assigning varying levels of importance to specific objects at different positions in each time slice. Our framework incorporates a spatial-temporal multi-feature late fusion mechanism to imitate this recognition logic, as shown in Fig. 3. The mechanism contains two complementary modules: the spatial attention module and the temporal attention module. It dynamically assigns different importance values to different spatial regions and temporal segments, producing a comprehensive state vector that encodes vehicle interactions and trajectory changes. The state vector then serves as an essential input to the autonomous driving system’s decision-making layer.
1) The Spatial Attention Mechanism: The spatial attention mechanism focuses on critical regions within an image, enabling the network to adaptively emphasize the most relevant spatial features for making decisions when driving in time-varying traffic flows. It operates between the convolutional layers and the recurrent layers. At each time step $t$, the convolutional layers produce a set of $L$ region vectors, $\{ v _ { t } ^ { i } \} _ { i = 1 } ^ { L }$, where $L = m \times n$ is the total number of regions in the feature map, and each vector $v _ { t } ^ { i } \in \mathbb { R } ^ { d }$ encodes the features of a corresponding image region. These region vectors are then processed by the spatial attention mechanism to compute a context vector $z _ { t }$, which captures the most relevant spatial features. The context vector $z _ { t }$ is calculated as a weighted sum of all region vectors:
$$
z _ { t } = \sum _ { i = 1 } ^ { L } g _ { t } ^ { i } \cdot v _ { t } ^ { i }
$$
where $g _ { t } ^ { i }$ is the attention weight for the $i$-th region at time step $t$.

The attention weights $g _ { t } ^ { i }$ are computed by an attention network, which takes the region vector $v _ { t } ^ { i }$ and the previous LSTM hidden state $h _ { t - 1 }$ as input. The attention network is designed as a fully connected layer followed by a softmax function, ensuring that the weights are normalized across all regions:
$$
g _ { t } ^ { i } = \mathrm { S o f t m a x } ( \omega _ { v } \cdot v _ { t } ^ { i } + \omega _ { h } \cdot h _ { t - 1 } )
$$
where $\omega _ { v }$ and $\omega _ { h }$ are learnable parameters of the attention network.
The computed context vector $z _ { t }$ serves as input to the LSTM layer in the global multi-feature late fusion mechanism. By reweighting the region features, the spatial attention mechanism acts as a dynamic mask over the CNN feature maps, selectively emphasizing the most informative regions for decision-making. This selective focus not only enhances the model’s ability to interpret critical spatial features but also reduces the number of parameters, thereby improving the training and inference efficiency.
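The spatial-attention computation above can be sketched in NumPy as follows. This is our illustrative fragment, not the authors' implementation; the toy dimensions and parameter values are assumptions. Note that because $\omega_h \cdot h_{t-1}$ contributes the same scalar to every region's score, it acts as a shared bias that the softmax cancels out.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(region_vecs, h_prev, w_v, w_h):
    """Compute attention weights g_t^i and context vector z_t.

    region_vecs: (L, d) region feature vectors v_t^i from the CNN.
    h_prev:      (d,)   previous LSTM hidden state h_{t-1}.
    w_v, w_h:    (d,)   learnable parameters (fixed here for illustration).
    """
    scores = region_vecs @ w_v + h_prev @ w_h  # one scalar score per region
    g = softmax(scores)                        # weights sum to 1 over the L regions
    z = g @ region_vecs                        # weighted sum -> context vector z_t
    return g, z

# Toy example: L=3 regions with d=4-dimensional features.
v = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0]])
g, z = spatial_attention(v, np.zeros(4), np.array([2.0, 0, 0, 0]), np.zeros(4))
```

In a trained network, `w_v` and `w_h` would be learned jointly with the CNN and LSTM; here they are fixed so the weighting behavior is visible.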
2) The Temporal Attention Mechanism: The temporal attention mechanism is employed to model the importance of information across different time steps, enabling the network to focus on the most relevant temporal segments. This mechanism operates over the outputs of the LSTM layer, which encode the temporal evolution of the driving scenario. For a sequence of LSTM outputs $\{ h _ { 1 } , h _ { 2 } , \ldots , h _ { T } \}$ , the temporal attention mechanism assigns a scalar weight $\omega _ { T + 1 - i }$ to each output $h _ { T + 1 - i }$ , where the weights are computed as:
$$
\omega _ { T + 1 - i } = \mathrm { S o f t m a x } ( v _ { T + 1 - i } \cdot h _ { T + 1 - i } ) , \quad i = 1 , 2 , \ldots , T
$$
where $v _ { T + 1 - i }$ is a feature vector learned during training, and the softmax function ensures that the weights are normalized across all time steps.
The context vector $C _ { T }$ , which summarizes the temporal information, is computed as a weighted sum of the LSTM outputs:
$$
C _ { T } = \sum _ { i = 1 } ^ { T } \omega _ { T + 1 - i } \cdot h _ { T + 1 - i }
$$
The context vector $C _ { T }$ encapsulates the most critical temporal features by dynamically prioritizing outputs from important time steps, such as abrupt lane changes or sudden braking, thereby providing a refined representation of temporal information. A fully connected layer, FC, then further processes this vector, and the result is provided to the actor-network for hierarchical decision-making.
By including temporal attention, this mechanism adaptively concentrates on pivotal time steps and deemphasizes less informative periods. Such a design improves model robustness and interpretability, which are crucial in dynamic traffic environments where accurate temporal comprehension is essential for safe decision-making.
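The temporal weighting described above can be sketched as follows (our illustrative NumPy fragment; the toy sequence length, feature dimension, and choice of learned vectors are assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(h_seq, v_params):
    """Compute temporal weights and context vector C_T.

    h_seq:    (T, d) LSTM outputs h_1..h_T.
    v_params: (T, d) learned feature vectors v_{T+1-i}.
    """
    scores = np.einsum('td,td->t', v_params, h_seq)  # v . h, one score per step
    w = softmax(scores)                              # normalized over all T steps
    c_T = w @ h_seq                                  # weighted sum of LSTM outputs
    return w, c_T

# Toy example: T=3 time steps, d=2 features per step.
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w, c = temporal_attention(h, 2.0 * h)  # illustrative v_params, not learned values
```

In the framework, `c_T` would then pass through the fully connected layer before reaching the actor network.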
Through the collaboration of spatial and temporal attention mechanisms, the framework dynamically models the significance of spatial features and temporal segments in complex traffic scenes, thereby enabling accurate driving intention inference. The spatial attention mechanism ensures flexible modeling of inter-vehicle interactions by emphasizing critical spatial features, while the temporal attention mechanism effectively captures the temporal evolution of vehicle behaviors, generating a holistic spatiotemporal representation.
Fig. 3: Spatial-Temporal Multi-Feature Late Fusion Mechanism
This spatiotemporal attention mechanism significantly enhances the perception capabilities and robustness of autonomous driving systems in dynamic traffic scenarios, providing strong support for decision-making in complex environments.
# B. Deep Evolutionary Reinforcement Learning for Making the SCSE Decisions
Deep reinforcement learning (DRL) has demonstrated considerable effectiveness in addressing complex sequential decision-making tasks, particularly in autonomous driving. In this research, we employ the Twin Delayed Deep Deterministic Policy Gradient (TD3) [30] algorithm as the foundation of our decision-making framework. The purpose of this framework is to make safe, comfortable, and socially compatible decisions with high efficiency, which we refer to as SCSE decisions. The TD3 algorithm functions as an AI agent [20], directly generating hierarchical driving action policies, such as left lane change, lane following, and right lane change, and thereby addresses the challenges of continuous action spaces in autonomous driving. These policies are derived from a structured process that integrates spatial-temporal features and social compliance estimation to regulate the ego vehicle’s behavior, making it fit well in complex, time-varying driving environments.
To further improve the decision-making framework’s performance, we implement a weight optimization mechanism based on evolutionary learning. The optimization process starts by generating an initial parameter population, followed by evaluating fitness in the driving environment. New generations of parameters are iteratively produced through selection, crossover, and mutation operations, effectively enlarging the search space while decreasing the model’s overfitting risk. This ensures that the decision-making model continually adapts to ever-changing complex traffic situations. Combining TD3 with evolutionary learning allows our framework to act as a driving AI agent able to balance safety, comfort, and social compliance across various autonomous driving tasks in an efficient way.
The primary advantages of TD3 stem from its dual $Q$-network architecture and delayed target updates. The dual $Q$-networks independently estimate the action-value function, and the lower estimate is chosen to reduce over-optimistic value predictions. Moreover, TD3 delays the updates to the target policy network, thereby promoting stability and robustness throughout the learning process. These features make TD3 a strong choice for addressing the complexities inherent in autonomous driving, particularly within dynamic and uncertain traffic environments [23].
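The clipped double-Q idea can be sketched as a Bellman target that takes the minimum of the two critics' next-state estimates (an illustrative fragment under our own naming, not the paper's code):

```python
def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Clipped double-Q Bellman target: use the lower of the two
    critics' next-state estimates to curb overestimation bias."""
    q_min = min(q1_next, q2_next)
    return reward + (0.0 if done else gamma * q_min)
```

Full TD3 additionally adds clipped noise to the target action and delays actor and target-network updates; only the double-Q clipping is shown here.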
1) The S-DERL AI Agent’s State Space: In our network, the state space is designed to capture the critical elements in each complex traffic scene by encoding spatial and temporal features into a structured representation. Driving intention inference is integrated into the state space to provide a deeper understanding of the surrounding environment and the interactions between vehicles. A detailed description of the state space, including its features and components, is provided in Table I.
TABLE I: State-Space Description
2) The S-DERL AI Agent’s Action Space: To address the diverse requirements of autonomous driving, we employ a hierarchical action space that operates at both strategic and tactical levels. At the high level, the framework delineates three discrete driving decision policies that are mutually exclusive: left lane change, maintaining lane to follow the car ahead, and right lane change. At any given time step, the ego vehicle must select one of these high-level decision policies for execution.
Each selected high-level action requires the specification of two continuous-valued parameters to ensure the maneuver is performed safely and efficiently: the heading angle and the acceleration/braking rate. The heading angle, constrained within $[-0.5, 0.5]$, prevents large, potentially unsafe turning angles. Meanwhile, the acceleration/braking rate, ranging over $[-5, 5]$, modulates the vehicle’s speed, where positive values indicate acceleration and negative values signify braking.
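The hierarchical action space can be sketched as a discrete high-level policy plus clipped continuous parameters. The names, the dataclass, and the clipping helper are our own illustrative choices:

```python
from dataclasses import dataclass
import numpy as np

# The three mutually exclusive high-level decision policies.
HIGH_LEVEL = ("left_lane_change", "lane_keep", "right_lane_change")

@dataclass
class DrivingAction:
    policy: str      # one of HIGH_LEVEL
    heading: float   # heading angle, clipped to [-0.5, 0.5]
    accel: float     # acceleration/braking rate, clipped to [-5, 5]

def make_action(policy: str, heading: float, accel: float) -> DrivingAction:
    """Build a valid action, enforcing the bounds from the action space."""
    if policy not in HIGH_LEVEL:
        raise ValueError(f"unknown high-level policy: {policy}")
    return DrivingAction(policy,
                         float(np.clip(heading, -0.5, 0.5)),
                         float(np.clip(accel, -5.0, 5.0)))

# Out-of-range inputs are clipped back into the allowed ranges.
a = make_action("lane_keep", heading=1.2, accel=-9.0)
```

Clipping at construction time mirrors the bounded action space: the agent can output unconstrained values, and the environment interface enforces the limits.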
3) Weight Optimization Based On Evolutionary Learning: To enhance the decision-making capability of the above DRL framework, the genetic algorithm (GA) is introduced to optimize the weight parameters in the RL process. By simulating decision-evolution through operations such as selection, crossover, and mutation, GA carries out a global search of the parameter space, thereby reducing the tendency of DRL to converge to suboptimal solutions [68]. Notably, this approach does not rely on gradient information, facilitating efficient exploration of the complex parameter landscape. The fitness function is designed as a multi-objective evaluation metric, expressed as follows:
$$
\begin{array} { r l } & { F i t n e s s = \omega _ { 1 } \cdot S a f e t y + \omega _ { 2 } \cdot E f f i c i e n c y } \\ & { ~ + \omega _ { 3 } \cdot C o m f o r t + \omega _ { 4 } \cdot S C E \_ S c o r e } \end{array}
$$
where the four weight parameters $\omega _ { 1 }$, $\omega _ { 2 }$, $\omega _ { 3 }$, and $\omega _ { 4 }$ are normalized so that their sum equals 1, preventing any single metric from disproportionately influencing the overall evaluation. Safety is quantified by the frequency and severity of collisions, reflecting the vehicle’s ability to avoid accidents. Efficiency is measured by the average vehicle speed, capturing traffic throughput. Comfort is assessed via the average variation in acceleration, representing the smoothness of the driving experience. Finally, the SCE Score evaluates the social compliance estimation (SCE) of the ego vehicle, computed from the number and severity of ROW violations along its trajectory. By adjusting these weight parameters, the fitness function can flexibly accommodate the requirements of different driving scenarios.
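A minimal sketch of the normalized multi-objective fitness (the default weights and the assumption that each metric is already scaled to a comparable range are ours, not the paper's settings):

```python
import numpy as np

def fitness(safety, efficiency, comfort, sce_score,
            weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted sum of the four metrics; the weights are renormalized
    so they always sum to 1, as the framework requires."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ np.array([safety, efficiency, comfort, sce_score]))
```

Renormalizing inside the function means callers can supply any positive weight vector without manually ensuring it sums to 1.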
The optimization process begins with the random initialization of a population, where each individual represents a parameter vector. Following this, the fitness function evaluates the performance of each individual in real-world traffic simulations. The selection operation uses a tournament strategy, where a subset of individuals is randomly chosen, and the one with the highest fitness is retained for the next generation. Crossover operations combine portions of the genetic information from two parents to produce offspring, thereby expanding the search space and improving population diversity. Mutation operations introduce random perturbations to a small number of genes, effectively preventing the algorithm from becoming trapped in local optima and enhancing exploration efficiency.
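The selection, crossover, and mutation loop can be sketched as a self-contained NumPy toy with tournament selection. The population size, crossover/mutation rates, and the test objective below are our assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def tournament(pop, fits, k=3):
    """Pick k random individuals and keep the fittest (tournament selection)."""
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(fits[idx])]]

def crossover(a, b):
    """Uniform crossover: each gene comes from either parent with equal chance."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

def mutate(x, rate=0.1, scale=0.1):
    """Perturb a small fraction of genes with Gaussian noise."""
    mask = rng.random(x.shape) < rate
    return x + mask * rng.normal(0.0, scale, x.shape)

def evolve(pop, fitness_fn, generations=30):
    """Evolve a population of parameter vectors, maximizing fitness_fn."""
    pop = np.asarray(pop, dtype=float)
    for _ in range(generations):
        fits = np.array([fitness_fn(p) for p in pop])
        pop = np.array([mutate(crossover(tournament(pop, fits),
                                         tournament(pop, fits)))
                        for _ in range(len(pop))])
    fits = np.array([fitness_fn(p) for p in pop])
    return pop[np.argmax(fits)]
```

In the framework, `fitness_fn` would be the multi-objective fitness evaluated in traffic simulation, and the best parameter vector would initialize the DRL module.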
Through this iterative process, the population quality improves progressively, and the GA converges towards a global optimum. The final optimized parameters are used to initialize the DRL module, providing a more favorable starting point for subsequent training. This hybrid approach effectively integrates the global optimization capability of GA with the adaptive learning characteristics of DRL, enabling the framework to better balance safety, efficiency, comfort, and ROW fairness in complex traffic scenarios. Ultimately, this strategy achieves robust and efficient decision-making, even in highly dynamic and uncertain environments that the ego vehicle has not encountered before.
# C. Social Compliance Estimation Based On ROW-Violation Performance
In this section, we demonstrate how the SCE Score is computed in detail. First, the concept of the absolute right-of-way (A_ROW) area is introduced: the region where a vehicle has absolute priority of access. Next, the ROW-violation index is calculated by analyzing how vehicles occupy other road users’ A_ROW areas. Finally, social compliance is assessed based on the ROW-violation index, offering a quantitative measure of adherence to social norms.
Fig. 4: Illustration of the Potential ROW Conflict Within B’s Absolute ROW Area
1) Definition and Formalization of Absolute Right-of-Way: To capture the essential constraints in complex vehicle interactions, we define the concept of A ROW as a spatially protected region surrounding a vehicle. As illustrated in Fig. 4, typical A ROW conflict scenarios—namely cross conflicts and merge conflicts—highlight the necessity of such a definition. Cross conflicts occur at the intersection center, where vehicle trajectories intersect at large angles, leading to severe traffic interference and high collision risk. Merge conflicts, by contrast, occur near the intersection exit, where vehicles from different entries aim for the same exit lane, resulting in less disruption but still frequent conflicts.
The A_ROW area provides a geometric boundary that other vehicles must not encroach upon, thereby preserving the right-of-way of the target vehicle. By explicitly modeling this constraint, the driving policy can make socially compliant and safety-aware decisions when faced with these typical conflict scenarios. These regions are mathematically defined as:
$$
\mathrm{A\_ROW} = \{ (x, y) \mid x^{\min} \leq x \leq x^{\max},\; y^{\min} \leq y \leq y^{\max} \}
$$
where:
$$
\begin{aligned}
x^{\min} &= x, \quad x^{\max} = x + L \cdot \cos\theta, \\
y^{\min} &= y - \frac{\omega}{2}, \quad y^{\max} = y + \frac{\omega}{2}
\end{aligned}
$$
Here, $(x, y)$ is the vehicle’s position, $\theta$ is the heading angle, and $\omega$ is the vehicle’s width. The stopping distance $L$ is defined as:
$$
L = \frac{v^2}{2 a_{\max}} \cdot \frac{1}{1 + k_\rho \cdot \rho}
$$
where $v$ is the velocity, $a_{\max}$ is the maximum deceleration, $\rho$ is the road vehicle density, and $k_\rho$ is a density adjustment coefficient.
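The A_ROW construction above reduces to a few lines of geometry. The sketch below computes the region bounds and tests a point for encroachment; the constants $a_{\max} = 6~\mathrm{m/s^2}$ and $k_\rho = 0.5$ are illustrative assumptions, not values from the paper.

```python
import math

def stopping_distance(v, a_max, rho, k_rho):
    """L = v^2 / (2*a_max) * 1 / (1 + k_rho*rho): shrinks as traffic density rises."""
    return (v * v) / (2.0 * a_max) / (1.0 + k_rho * rho)

def a_row(x, y, theta, v, width, a_max=6.0, rho=0.0, k_rho=0.5):
    """Return the A_ROW bounds (x_min, x_max, y_min, y_max) ahead of the vehicle."""
    L = stopping_distance(v, a_max, rho, k_rho)
    return (x, x + L * math.cos(theta), y - width / 2.0, y + width / 2.0)

def violates(region, px, py):
    """True if the point (px, py) encroaches on the protected region."""
    x_min, x_max, y_min, y_max = region
    return x_min <= px <= x_max and y_min <= py <= y_max

# Ego at the origin heading along +x at 10 m/s, 2 m wide, on an empty road:
region = a_row(0.0, 0.0, 0.0, 10.0, 2.0)   # L = 100/12 ≈ 8.33 m
print(violates(region, 5.0, 0.5))           # a vehicle 5 m ahead sits inside A_ROW
```

Because $L$ depends on velocity and density, the region automatically grows at speed and contracts in dense traffic, matching the dynamic-update behavior described next.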
The A_ROW dynamically updates based on the ego vehicle’s motion state (e.g., velocity and heading angle) and the surrounding context (e.g., traffic flow). This ensures that the protected regions accurately reflect the vehicle’s immediate spatial constraints and the surrounding travelable regions.
2) ROW Embedded Social Compliance Estimation: Social compliance estimation plays a critical part in the Fitness function. Generally, the more socially compatible a driving behavior is, the safer it is for the ego vehicle and the more efficient for the collective. To enhance the driving efficiency, safety, comfort, and compliance of AV operation beyond that of manual driving, the lane-changing model outlined in this paper adopts a multifaceted reward function that integrates several key performance indicators to guide the vehicle’s decision-making process [18]. The reward function considers the following aspects:
First, the model prioritizes speed performance by ensuring that the vehicle attains or closely approximates the reference speed $v_{\mathrm{ref}}$ within a permissible timeframe, as delineated in Equation (9). The speed performance metric quantifies the deviation between the actual vehicle speed and $v_{\mathrm{ref}}$:
$$
r_v = -\omega_v \cdot \frac{\sum_{t=0}^{T-1} t \, |v_t - v_{\mathrm{ref}}|}{\sum_{t=0}^{T-1} t}
$$
Second, the model places a significant emphasis on comfort, evaluated through the analysis of acceleration variations between consecutive time steps, following Equation (10).
$$
r_c = -\omega_c \cdot \frac{|a_t - a_{t-1}|}{\mathrm{time}_t - \mathrm{time}_{t-1}}
$$
Regarding safety, the model incorporates the widely recognized Time-To-Collision (TTC) metric as a key safety criterion. The correlation between TTC values and the reward mechanism is inversely proportional: when TTC falls below 3.5 seconds [32], lower values yield reduced rewards. This relationship highlights the importance of maintaining a safe following distance to minimize collision risks.
$$
r_s = -\omega_s \cdot \max\left\{ 0, \frac{3.5 - t_{\mathrm{ttc}}}{3.5} \right\}
$$
Regarding compliance, the model incorporates the A_ROW criterion as a core measure. The reward mechanism is inversely proportional to A_ROW violations: greater encroachments into A_ROW-protected regions result in reduced rewards. This emphasizes adherence to spatial constraints and respect for other road users’ safety zones.
To handle dynamic scenarios where A ROW regions may change (e.g., due to sudden braking), a Dynamic Right-of-Way Adjustment mechanism is introduced. The associated reward is defined as:
$$
r_d = -\omega_d \cdot A\_ROW_{ij} \cdot \left( 1 + e^{-\beta T} \right)
$$
where $A\_ROW_{ij}$ denotes the change in the overlapping area between the ego vehicle and another vehicle’s A_ROW region, and $\omega_d$ is the response coefficient. The term $e^{-\beta T}$ provides time decay, where $\beta$ determines the decay rate and $T$ is the time elapsed since the intersection state was detected. This component incentivizes the agent to respond promptly to intersection events. The penalty for significant changes in the overlapping area encourages swift and adaptive behaviors, such as deceleration or lane changes, to mitigate potential conflicts effectively.
The Stop Signal defines a termination state triggered by events like collisions, departure from the roadway, or reaching the maximum simulation duration without incidents. Encounters with the termination state due to adverse events incur significant penalties, while remaining within safe operational boundaries throughout the simulation period is rewarded [58]. This approach not only penalizes risky behaviors but also reinforces sustained safe driving practices, aligning with objectives to enhance travel efficiency, safety, and comfort in AV operation.
$$
r_t = \begin{cases}
100, & \mathrm{safe\_arrived} \\
-60, & \mathrm{collision} \\
-40, & \mathrm{wrong\ lane}
\end{cases}
$$
The Social Compliance Estimation (SCE) value is calculated as the weighted sum of Equations (9)–(13), representing the comprehensive evaluation of an autonomous vehicle’s performance in complex, dynamic environments. The reward function integrates five key components: speed performance, comfort, safety, rule compliance, and terminal state outcomes. Each component quantifies the quality of vehicle behavior from a distinct perspective. For instance, speed performance evaluates the vehicle’s efficiency in approaching the reference speed, comfort assesses the smoothness of acceleration transitions, safety measures the maintenance of a safe following distance, and rule compliance examines the vehicle’s ability to adapt to dynamic right-of-way allocations. Meanwhile, the terminal state reinforces safe driving behaviors by rewarding or penalizing outcomes such as safe arrivals or collisions.
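The five components and their weighted combination can be sketched directly from Equations (9)–(13). The weights and the sample values at the bottom are hypothetical, chosen only to show the mechanics; in the framework they come from the GA-optimized fitness weights and the simulator.

```python
import math

def r_speed(vs, v_ref, w_v):
    """Eq. (9): time-weighted speed deviation (later steps weighted more heavily)."""
    num = sum(t * abs(v - v_ref) for t, v in enumerate(vs))
    den = sum(range(len(vs))) or 1
    return -w_v * num / den

def r_comfort(a_t, a_prev, dt, w_c):
    """Eq. (10): penalize acceleration change (jerk) between consecutive steps."""
    return -w_c * abs(a_t - a_prev) / dt

def r_safety(ttc, w_s, ttc_thresh=3.5):
    """Eq. (11): penalty grows as TTC drops below the 3.5 s threshold."""
    return -w_s * max(0.0, (ttc_thresh - ttc) / ttc_thresh)

def r_row(delta_overlap, T, w_d, beta=0.5):
    """Eq. (12): dynamic right-of-way penalty with a time-decay term."""
    return -w_d * delta_overlap * (1.0 + math.exp(-beta * T))

def r_terminal(outcome):
    """Eq. (13): terminal reward for the episode outcome."""
    return {"safe_arrived": 100.0, "collision": -60.0, "wrong_lane": -40.0}[outcome]

def sce(components, weights):
    """SCE value: weighted sum of the five reward components."""
    return sum(w * r for w, r in zip(weights, components))

# Hypothetical one-episode values, equal weights of 0.2 each:
components = [
    r_speed([10.0, 12.0], 11.0, 1.0),
    r_comfort(1.0, 0.5, 0.1, 1.0),
    r_safety(1.75, 1.0),
    r_row(0.2, 2.0, 1.0),
    r_terminal("safe_arrived"),
]
score = sce(components, [0.2] * 5)
```

Note how each term is a penalty (negative) except the terminal reward, so the weighted sum directly trades off efficiency, comfort, safety, and compliance against a successful arrival.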
The physical significance of the SCE lies in its ability to comprehensively quantify and integrate multiple performance indicators, providing a holistic assessment of driving efficiency, safety, and social compliance. By adjusting the weights of each component, the reward function achieves multi-objective optimization, ensuring sensitivity to behavioral deviations. This approach guides autonomous vehicles in making decisions that effectively balance efficiency, safety, and comfort in real-world scenarios.
# IV. EXPERIMENTS
In this section, we evaluate the performance of the proposed hierarchical progressive framework in typical urban road scenarios of different traffic densities, verifying its ability to reduce collisions and increase driving speed and comfort by making the fittest decisions in diverse time-varying traffic situations.
# A. Experiment Settings
The simulation platform CARLA is used for model training and performance evaluation. CARLA offers significant flexibility to modify vehicle and traffic environment parameters, thereby enriching the verification cases and enabling more efficient development without incurring real-world safety hazards. The datasets used in this study were generated within the CARLA urban traffic simulation environment, where each vehicle’s target lane and route were randomly assigned to approximate real-world conditions. Data sampling occurred at a frequency of 10 Hz, with the total simulation time set to 10,000 seconds to ensure sufficient historical trajectory information. The features employed in the driving intent recognition model include lane identification, longitudinal and lateral positions, and longitudinal and lateral velocities. These features were obtained via the CARLA API. For real-world implementation, similar data can be acquired through an AV’s perception module, supplemented by high-definition maps.
Humans make decisions based not only on the current situation but also on what happened in the last few seconds, which enables them to combine critical information more comprehensively; we believe this is also instructive for AV decision-making. However, how long this look-back horizon should be has not been systematically tested. To determine the optimal length of the historical trajectory time window for model training, experiments were conducted to evaluate its impact on performance metrics including precision, recall, and F1-score. The results in Fig. 5 indicate that a 4-second time window enables the STA module to achieve the best performance across all metrics, providing the most effective balance between information sufficiency and noise reduction.
Fig. 5: Performance metrics on different Look-Back time window lengths
Therefore, a 4-second historical trajectory window was used for STA model training, and a down-sampling strategy was applied to balance left lane-change, lane-keeping, and right lane-change actions. The dataset was split into training and testing sets at an 8:2 ratio to facilitate model validation and performance assessment.
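The down-sampling and 8:2 split can be sketched as follows. The toy data below (lane-keeping dominating the logs) is illustrative, not the actual CARLA dataset, and the exact balancing procedure used in the paper may differ.

```python
import random

random.seed(42)

def downsample_balance(samples):
    """Down-sample majority classes so all action labels match the minority count."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append((x, y))
    n_min = min(len(v) for v in by_label.values())
    balanced = []
    for v in by_label.values():
        balanced.extend(random.sample(v, n_min))
    random.shuffle(balanced)
    return balanced

def split_80_20(samples):
    """8:2 train/test split (samples are already shuffled)."""
    cut = int(0.8 * len(samples))
    return samples[:cut], samples[cut:]

# Toy data: lane-keeping dominates, as in real traffic logs.
data = [(i, "keep") for i in range(100)] + \
       [(i, "left") for i in range(20)] + [(i, "right") for i in range(20)]
balanced = downsample_balance(data)
train, test = split_80_20(balanced)
print(len(balanced), len(train), len(test))  # 60 48 12
```

Balancing before the split prevents the majority lane-keeping class from dominating both the loss during training and the apparent accuracy during testing.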
Fig. 6: Bird’s eye view of Town3 in CARLA
We train all models in CARLA’s Town 3 map to ensure a fair comparison and evaluate the effectiveness of our approach and all baseline models. As shown in Fig. 6, this town features a highly dynamic and intricate urban layout, designed to mimic real-world suburban and urban driving scenarios. It encompasses several interconnected regions, including a suburban residential area, a central business district with multilane roads, and a series of complex roundabouts and intersections. The intricate network of Town 3 makes it particularly challenging for human-like flexible and adaptive decision-making, as it offers a wide range of driving conditions, such as long straight roads, sharp turns, elevated sections, and a variety of lane configurations and traffic flow patterns. These characteristics provide an ideal environment for assessing both fundamental driving skills and sophisticated decision-making capabilities.
To create a more realistic and challenging environment, we populate the town with 100 vehicles running in autopilot mode, randomly spawned across the map and managed using CARLA’s built-in traffic manager. These vehicles follow traffic rules, respond to traffic lights, and perform basic collision avoidance, creating a dynamic traffic flow that significantly increases the complexity of the learning task for our RL agent. The agent must handle various interactive scenarios such as car following, overtaking, and yielding to other vehicles. Additionally, the ego vehicle is randomly positioned in a lane with an initial speed that avoids collisions at the outset, and its desired speed is set within the range of 10–18 m/s, with each simulation step corresponding to 0.1 seconds of real time. This setup not only mirrors real-world urban driving conditions but also challenges the RL agent to develop more robust and adaptive driving strategies.
All simulation and training processes were deployed across four Nvidia RTX 4090 GPUs, fully utilizing their parallel computing capabilities. Table II summarizes the key hyperparameters employed in the experiments.
# B. Driving Intention Inference Performance of the MultiFeature Late Fusion STA Model
We compare our proposed STA mechanism with five other driving intention inference baselines:
HMM-BF [22]: a driving intent inference method based on Hidden Markov Models. It takes behavioral feature sequences of vehicle motion and environmental context as input to infer the most likely driving intention.
TABLE II: Evolutionary Algorithm and Deep Reinforcement Learning Hyperparameters
HMM-AIO [54]: an extension of HMM-BF that integrates all-in-one feature representation, including behavioral features, traffic context, and driver-specific data, to enhance intention inference in complex scenarios.
LSTM-RNN [2]: uses Long Short-Term Memory networks to process sequential data, such as vehicle motion and surrounding traffic states, capturing long-term dependencies to output driving intention.
Bi-LSTM [19]: an extended version of LSTM that processes sequences bidirectionally, considering both historical and future information, making it suitable for complex scenarios requiring bidirectional context.
SlowFast [31]: adapted from the SlowFast network for video analysis; the "Slow" pathway captures long-term patterns, while the "Fast" pathway focuses on short-term dynamics. The combined multi-scale information is used to infer driving intent.
While recent advances in transformer-based models have demonstrated promising performance in various sequential modeling tasks, the baseline methods selected in this study—including HMM variants and LSTM-based architectures—are representative and widely adopted in the field of driving intention inference. These models provide strong interpretability, established performance benchmarks, and lower computational overhead, making them suitable for both research comparisons and real-world deployment. Additionally, transformer-based methods typically require large-scale labeled datasets and high computational resources, which may not be practical in real-time autonomous driving scenarios or under limited data conditions. Our goal is to evaluate the effectiveness of the proposed STA mechanism against classical and widely accepted baselines. Incorporating transformer-based approaches will be considered in future work as the domain continues to evolve.
Table III compares our STA mechanism (illustrated in Fig. 3) with the five models above. In the driving intention inference task, the proposed STA mechanism outperforms the other methods on all comparative metrics, with improvements of 3.4% in precision, 4.8% in recall, 4.1% in F1-score, and 2.3% in accuracy.
TABLE III: Results of Driving Intention Recognition
Fig. 7 presents the normalized confusion matrix, providing a detailed assessment of the proposed STA mechanism in inferring L Turn, R Turn, and Straight driving intentions. The diagonal elements represent per-intention accuracy, demonstrating strong performance with 94% for L Turn, 95% for R Turn, and 92% for Straight. The off-diagonal elements show misclassification rates. Notably, 6% of L Turn intentions are incorrectly classified as R Turn, and 8% of Straight intentions are misclassified as R Turn. This suggests room for improvement in distinguishing R Turn from the other intentions, which deserves further exploration in future work. The high overall accuracy demonstrates the effectiveness of the proposed method for driving intention inference.
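Row-normalizing a confusion matrix, as in Fig. 7, is a one-liner. The raw counts below are hypothetical, chosen to be consistent with the reported per-intention rates; in particular, the destination of R Turn's remaining 5% is an assumption, since the text does not report it.

```python
def normalize_confusion(counts):
    """Row-normalize raw counts so each row sums to 1 (per-intention accuracy on the diagonal)."""
    return [[c / sum(row) for c in row] for row in counts]

# Hypothetical raw counts; rows = true intention (L Turn, R Turn, Straight), cols = predicted.
raw = [[94, 6, 0],
       [0, 95, 5],
       [0, 8, 92]]
norm = normalize_confusion(raw)
diag = [norm[i][i] for i in range(3)]
print(diag)  # [0.94, 0.95, 0.92] per-intention accuracy
```

Reading down the R Turn column rather than across a row shows why R Turn absorbs misclassifications from both other classes, the weakness noted above.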
Fig. 7: Normalized Confusion Matrix of the Driving Intention Inference Model based on Spatial-Temporal Attention
Fig. 8: Comparison of Average Reward and Success Rate Across Training Episodes of Different Algorithms
# C. Safer Decision-Making with Superior Evolutionary Performance
This section demonstrates how SF-HLDM performs in comparison with other state-of-the-art decision-making baseline models:
DDQN [65]: Double Deep Q-Network (DDQN) reduces Q-value overestimation by decoupling action selection and Q-value evaluation. It employs two separate networks, one for selecting the action and another for evaluating its Q-value, resulting in more stable and accurate decision-making.
DQfD [21]: Deep Q-learning from Demonstrations (DQfD) incorporates expert demonstration data into the reinforcement learning process. By pre-training with expert trajectories and combining supervised learning with Q-learning, it accelerates convergence and enhances performance in complex tasks.
BC+D3QN [12]: Behavioral Cloning with Double Dueling DQN (BC+D3QN) integrates behavioral cloning for pre-training and Double Dueling DQN for fine-tuning. The pre-trained policy guides the learning process, while D3QN improves decision accuracy by focusing on advantage value learning.
DulDQN [18]: Dual Deep Q-Network (DulDQN) extends Double DQN by enhancing the learning of advantage values, enabling it to achieve better stability and robustness in challenging environments with complex decision requirements.
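The decoupling at the heart of the DDQN baseline can be shown in a few lines: the online network selects the next action, the target network evaluates it. The tiny tabular dictionaries below stand in for the two networks; the values are illustrative only.

```python
def ddqn_target(r, s_next, q_online, q_target, gamma=0.99, done=False):
    """Double DQN target: online net picks argmax action, target net scores it."""
    if done:
        return r
    a_star = max(range(len(q_online[s_next])), key=lambda a: q_online[s_next][a])
    return r + gamma * q_target[s_next][a_star]

# Tabular stand-ins for the two networks (state -> list of action values):
q_online = {0: [1.0, 2.0]}   # online net prefers action 1
q_target = {0: [0.5, 1.5]}   # target net evaluates that chosen action
y = ddqn_target(r=1.0, s_next=0, q_online=q_online, q_target=q_target)
print(y)  # 1.0 + 0.99 * 1.5 = 2.485
```

A vanilla DQN would instead take max over `q_target[s_next]` directly, letting the same network both choose and score the action, which is the source of the overestimation bias DDQN removes.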
All models were trained under identical parameter configurations to facilitate fair comparison, and the training performance curves are depicted in Fig. 8. Each experiment was conducted over 100 episodes to assess performance. To benchmark the models, a series of commonly used vehicle trajectory features were analyzed, including average speed, minimum time headway (THW), average acceleration, and average yaw rate. To ensure uniformity across all experiments, the same random seed was employed. Multiple rounds of experiments were then carried out on the trained models to evaluate their consistency, with the outcomes summarized in Table IV.
The models’ performance is further evaluated on six metrics: average velocity, average acceleration, average yaw rate, minimum time headway (THW), average number of ROW violations, and average number of lane changes. To maintain consistent conditions, a single random seed was applied across all experiments, and the results are presented in Table IV and Fig. 8. The proposed SF-HLDM model consistently outperforms the baseline models on every metric, achieving the highest average speed (15.440 m/s), the lowest average acceleration (0.395 m/s²), the lowest yaw rate (0.015 rad/s), the lowest average number of lane changes (1.1), and the highest minimum THW (1.133 s). Compared to the other SOTA methods, the SF-HLDM framework improves average velocity by 2.5%, reduces average acceleration and yaw rate by 23.5% and 60.5% respectively, and enlarges the minimum THW by 41.8%.
It is worth noting that SF-HLDM enables the ego vehicle to drive at a faster average speed while keeping a longer distance from surrounding traffic participants, in a markedly smoother and gentler style. From a safety point of view, the larger the THW between vehicles, the safer the interaction. However, if the THW is too large, it impedes traffic flow and is not conducive to transportation efficiency. Existing research on road user response timing, based on a large number of experiments, indicates that observed average response times vary widely between studies (0.5 s to 1.5 s) [9]. Therefore, for AVs driving in mixed traffic full of human-driven vehicles, driverless vehicles with different degrees of automation, and even vulnerable road users, it is of great importance to maintain larger THWs to keep safety first. The experimental results clearly show that SF-HLDM generates the largest THW while equipping the AV with the fastest average velocity and the lowest average acceleration and yaw rate.
In addition, the average numbers of ROW violations and lane changes are captured. Lane changing is one of the most important causes of traffic accidents [35], and a decrease in the average number of lane changes greatly lowers the probability of traffic accidents, which will be studied in our future research. Meanwhile, when a driver intends to make a lane change, the maneuver may be accompanied by violations of others’ ROW, increasing the possibility of social conflict. The social compliance performance of each model can be estimated from these two indexes, which show explicitly that the SF-HLDM framework produces much more socially compatible driving behavior. Statistically, the SF-HLDM framework generates 1.8 action policies violating other participants’ ROW, while the other methods generate at least 5.6, improving social compliance by 67.9%. Therefore, SF-HLDM effectively balances the ego vehicle’s driving quality and the overall traffic flow, highlighting its adaptability to fluctuating traffic conditions and underscoring its overall advantages for safe, efficient, smooth, and socially compliant driving behavior.
# D. Decision-Making Performance in Different Traffic Densities
To assess the performance of the proposed model under varying traffic flow densities, a total of 100 driving experiments were conducted at three distinct traffic density levels: low density (60 vehicles/km), medium density (100 vehicles/km), and high density (150 vehicles/km). The corresponding results are presented in Table V.
Table V summarizes the experimental results comparing the proposed SF-HLDM algorithm with the baseline BC+D3QN across low, medium, and high traffic density scenarios. The results demonstrate that SF-HLDM consistently outperforms the baseline across various key metrics. Under all traffic densities, SF-HLDM achieves a significantly higher minimum time headway (THW), showcasing its enhanced capacity to predict and address potential risks. Additionally, the algorithm maintains a higher average velocity, reflecting its more efficient decision-making process and superior traffic flow management. SF-HLDM also registers lower average yaw rates, which contributes to greater stability and smoother vehicle handling, especially in high-density traffic. These results highlight SF-HLDM’s strong adaptability and reliability in dynamic, complex traffic environments, underscoring its potential for deployment in real-world autonomous driving systems. Furthermore, the algorithm’s robustness across different traffic conditions makes it a promising candidate for improving the safety and efficiency of autonomous vehicle operations.
Figure 10 presents selected keyframes from the CARLA simulation, showcasing the agent’s decision-making process during continuous lane-change and overtaking maneuvers. The first lane-change decision takes place at around 4 s. The EV initially occupies the rightmost lane, accelerating from zero to its desired speed of 13 m/s. The EV then faces a slower LV and gradually decreases its velocity, capped by the velocity of the LV. The EV equipped with our proposed planner starts a lane change to the adjacent left lane, which is the closest available lane considering the right boundary of the current lane. This lane-change decision enables the EV to raise its velocity to 16.6 m/s, as designated by the velocity limit. At around 7 s, the EV encounters an LV traveling at 6 m/s, and a decrease in the EV’s velocity is observed due to this slow-moving LV. Meanwhile, there are NVs on the left lane. Considering that the NV is traveling at a lower velocity than the EV, the EV executes the planner’s lane-change decision and brings its velocity back to about 13 m/s. Subsequently, the EV again finds itself trailing a slow-moving LV, and the decision process becomes more complex. The EV is given the options of lane keeping, left lane change, and right lane change. In addition to the LV in the same lane, there is an LV in the right lane and an NV in the left lane. The proposed planner instructs the EV to change to the left lane at around 10 s. Finally, at around 13 s, the EV chooses to move to the left lane in a situation where the current lane and the adjacent right lane are both occupied.
At the end of the simulation, the EV manages to navigate through this dynamic scenario occupied with multiple SVs from the right-most lane to the left-most lane where no SV is present. During the whole simulation, the EV adjusts its velocity smoothly. The EV is able to maintain a high traveling efficiency, and resume its velocity after the slow-downs due to the encountering of the slow LVs and the lane-change maneuvers.
TABLE IV: Performance Metrics of Different Models
The SF-HLDM framework’s ability to dynamically adjust parameters in real-time allows it to maintain safe margins while adhering to socially acceptable behaviors. By leveraging deep evolutionary reinforcement learning, the model avoids local optima, effectively balances competing objectives such as safety, comfort, and efficiency, and ensures robust performance in high-density traffic. This capability enables the model to replicate human-like decision-making processes, fostering smoother integration of autonomous vehicles into real-world traffic systems.
Fig. 9: Box plots of the performance metrics for DDQN, DulDQN, DQfD, BC+D3QN, and SF-HLDM
# E. Effect of Critical Modules on SF-HLDM Performance
To evaluate the contributions of the Situation Awareness Module and the Evolutionary Learning-Based Weight Optimization Module in the SF-HLDM model, we conducted an ablation study using the same experimental setup described earlier in this section. Specifically, the scenario initializes 100 NPC (Non-Player Character) vehicles to create a dynamic and time-varying traffic flow, ensuring realistic testing conditions.
In the ablation experiments, SF-HLDM-S represents the model with the Situation Awareness Module removed, which excludes spatial attention, temporal attention, and driving intention inference, allowing us to assess the impact of situation awareness on the decision-making process. SF-HLDM-E, on the other hand, removes the Evolutionary Learning-Based Weight Optimization Module and uses fixed, non-adaptive weights, enabling an evaluation of the role of evolutionary optimization in enhancing model performance. The results are shown in Fig. 11 and Table VI.
Fig. 11 displays the performance of three methods: SF-HLDM, SF-HLDM-S, and SF-HLDM-E, based on the normalized average reward over 6000 episodes (scaled by a factor of 10). The red curve (SF-HLDM) consistently outperforms the others, achieving rapid convergence and maintaining the highest reward levels throughout the training process. The blue curve (SF-HLDM-S) improves steadily but lags behind SF-HLDM, reflecting moderate convergence speed and lower final rewards. In contrast, the orange curve (SF-HLDM-E) fluctuates significantly and shows slower overall progress, indicating instability and delayed convergence. Shaded regions around each curve represent the standard deviation, emphasizing the robustness of SF-HLDM compared to the other methods. This analysis highlights SF-HLDM’s effectiveness in achieving higher and more stable rewards.
TABLE V: Performance Metrics of Different Models under Different Vehicle Densities
TABLE VI: Performance Metrics of Different Models
Table VI makes a further comparison of SF-HLDM-S, SF-HLDM-E, and SF-HLDM, clearly showing that SF-HLDM improves the average velocity by 3.9%, reduces the average acceleration by 6.2%, and significantly decreases the yaw rate by 31.8% relative to SF-HLDM-S. The minimum THW is extended by 13.1%, demonstrating the critical role of situation awareness in maintaining safe distances and smooth control. Similarly, when compared to SF-HLDM-E, which removes the evolutionary weight optimization submodule, SF-HLDM exhibits an 11.9% improvement in average velocity, a 13.8% reduction in yaw rate, and a 26.0% increase in minimum THW, highlighting the effectiveness of adaptive optimization in enhancing driving performance.
Fig. 10: SF-HLDM based on Situation Awareness and Social Compliance
Fig. 11: Multi-frame tracking of vehicles in dynamically time-varying traffic environments
In summary, the results demonstrate that the full SF-HLDM model delivers a well-balanced performance, achieving the highest velocity while ensuring safety and smoothness, thanks to the integration of situation awareness and evolutionary optimization modules. The ablation studies underline the importance of these components in shaping the model’s overall effectiveness.
"cs.AI"
] |
# 1. Introduction
Deep learning has driven transformative advances in computer vision, enabling convolutional neural networks (CNNs) and their variants to achieve state-of-the-art performance across image classification, detection, and segmentation tasks. Yet despite their predictive power, modern neural models remain brittle in the face of adversarial examples (Goodfellow et al., 2015), distributional shifts (Hendrycks et al., 2021), and perturbation-based attacks (Croce & Hein, 2020). These vulnerabilities pose significant concerns in safety-critical applications such as autonomous driving, medical imaging, and financial screening, where failures can carry severe real-world consequences.
The central problem is that accuracy alone is not a sufficient metric for reliable deployment. A model may achieve high test accuracy yet fail dramatically when exposed to slight perturbations, either crafted adversarially or arising from natural variation. This has led to a growing emphasis on robustness evaluation and explainability as core components of model assessment (Hooker et al., 2020; Xu et al., 2020).
Robustness and Attribution as Safety Axes. Two main axes have emerged as vital to robust and interpretable models: (1) formal verification of robustness, which guarantees prediction stability under small input changes; and (2) attribution-based interpretability, which assesses the saliency and stability of model explanations. Each axis offers partial insight: formal verification answers “Will the prediction change under bounded perturbations?” while attribution asks “What parts of the input most influenced this decision?” However, these have rarely been integrated into a unified diagnostic framework.
Recent work has attempted to measure robustness using tools like DeepPoly (Singh et al., 2019), CROWN-IBP (Zhang et al., 2022), and ERAN (Katz et al., 2021), offering formal guarantees but often limited to small networks or idealized settings. In parallel, attribution methods such as Integrated Gradients (Sundararajan et al., 2017), SmoothGrad (Smilkov et al., 2017), and saliency maps (Simonyan et al., 2014) have become popular in model interpretation. However, these methods suffer from instability: attributions may change drastically under imperceptible input shifts or minor weight changes, undermining trust in their explanations (Kindermans et al., 2019).
Even more concerning, attribution instability has been shown to correlate with model susceptibility to adversarial examples (Ghorbani et al., 2019). Recent efforts like ROAR (Hooker et al., 2020) and AOPC (Samek et al., 2017) suggest that measuring attribution reliability, not just saliency, is crucial. Yet, a comprehensive framework that triangulates prediction correctness, robustness guarantees, and attribution stability remains missing.
Our Contribution: TriGuard. To address this gap, we propose TriGuard, a scalable evaluation framework that triangulates three complementary safety axes:
1. Formal Verification: We use adversarially bounded checks (PGD+interval bound propagation) to assess whether predictions remain invariant within an $\epsilon$ -ball in input space.
2. Attribution Entropy: We quantify the focus of explanations by measuring entropy over normalized Integrated Gradients. Lower entropy indicates sparse, localized attention, hypothesized to reflect more interpretable reasoning.
3. Contrastive Attribution Drift: We introduce a new metric, the Attribution Drift Score, which captures how model explanations change between two neighboring baseline inputs. Large drifts may signal sensitivity to spurious features or reliance on unstable gradients.
# 2. Methods
In this section, we describe the components of the TriGuard framework for evaluating neural network safety. TriGuard is designed around three diagnostic axes: formal robustness verification, attribution-based entropy analysis, and contrastive attribution drift. We also describe the experimental protocols used to implement and validate each axis across multiple datasets and model architectures.
# 2.1. Model and Dataset Setup
Let $f_{\theta} : \mathbb{R}^{d} \to \mathbb{R}^{k}$ be a deep neural network classifier parameterized by weights $\theta$, where $d$ is the input dimension and $k$ is the number of output classes. Given an input $\boldsymbol{x} \in \mathbb{R}^{d}$, the predicted class is:
$$
\hat { y } = \arg \operatorname* { m a x } _ { j } f _ { \theta } ( x ) _ { j }
$$
We evaluate this model across standard vision datasets:
• MNIST (gray-scale digits),
• FashionMNIST (gray-scale clothing images),
• CIFAR-10 (RGB natural images).
Each model is trained for 5 epochs using Adam, and tested on both clean inputs and adversarially perturbed examples.
# 2.2. Axis 1: Formal Verification of Robustness
To evaluate whether model predictions remain stable under small perturbations, we define an $\epsilon$ -bounded region around an input $x$ as:
$$
B _ { \epsilon } ( \boldsymbol { x } ) = \left\{ \boldsymbol { x } ^ { \prime } \in \mathbb { R } ^ { d } : \| \boldsymbol { x } ^ { \prime } - \boldsymbol { x } \| _ { \infty } \leq \epsilon \right\}
$$
A model is said to be formally robust at $x$ if:
$$
\forall x ^ { \prime } \in B _ { \epsilon } ( x ) : \arg \operatorname* { m a x } f _ { \theta } ( x ^ { \prime } ) = \hat { y }
$$
We implement a bounded adversarial check using Projected Gradient Descent (PGD) to search for a counterexample within $B _ { \epsilon } ( x )$ . If no such $x ^ { \prime }$ is found after $T$ steps of perturbation with step size $\alpha$ , we consider the model to be locally robust:
$$
\begin{array}{l} x_{\mathrm{adv}} = x + \alpha \cdot \operatorname{sign}(\mathrm{grad}) \\ x_{\mathrm{adv}} = \operatorname{clip}(x_{\mathrm{adv}},\; x - \epsilon,\; x + \epsilon) \end{array}
$$
where sign(grad) is the sign of the gradient of the loss with respect to the input.
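The update rule above can be sketched as a short counterexample search. This is a minimal illustrative sketch, not the paper's implementation: it assumes the classifier is a single linear-plus-softmax layer, so the input gradient of the cross-entropy loss has the closed form $W^{\top}(\mathrm{softmax}(Wx+b) - \mathrm{onehot}(y))$ and no autodiff library is needed.

```python
import numpy as np

def pgd_locally_robust(W, b, x, y, eps=0.1, alpha=0.01, steps=40):
    """Search for a counterexample inside the eps-ball around x.

    Assumed toy model: f(x) = W @ x + b followed by softmax. Returns True
    (locally robust) if no label flip is found within `steps` PGD steps.
    """
    x_adv = x.astype(float).copy()
    onehot = np.eye(len(b))[y]
    for _ in range(steps):
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad = W.T @ (p - onehot)                 # dL/dx for the linear model
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into B_eps(x)
        if np.argmax(W @ x_adv + b) != y:         # counterexample found
            return False
    return True
```

For a real network, `grad` would come from backpropagation, but the projection and sign-step logic are identical.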
We also use a simplified Interval Bound Propagation (IBP) style check to ensure robustness holds across all $\epsilon$ -perturbed inputs using forward bounds, implemented as:
$$
x _ { \mathrm { m i n } } = x - \epsilon , \quad x _ { \mathrm { m a x } } = x + \epsilon
$$
The model passes the check if the predicted class dominates across all bounds.
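A minimal sketch of this interval check, under the simplifying assumption of a single linear layer (real networks propagate bounds layer by layer through nonlinearities):

```python
import numpy as np

def ibp_pass(W, b, x, y_hat, eps):
    """Simplified one-layer IBP check.

    Over the box [x - eps, x + eps], the logits W @ x' + b lie in
    [c - r, c + r] with center c = W @ x + b and radius r = |W| @ (eps * 1).
    The check passes if the lower bound of the predicted logit exceeds the
    upper bound of every other logit.
    """
    c = W @ x + b
    r = np.abs(W) @ np.full(x.shape, eps)
    lower, upper = c - r, c + r
    return bool(all(lower[y_hat] > upper[j]
                    for j in range(len(b)) if j != y_hat))
```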
# 2.3. Axis 2: Attribution Entropy
Given a model and input $x$ , we compute Integrated Gradients (IG) as the attribution method. IG for the $i$ -th input feature is defined as:
$$
\operatorname { I G } _ { i } ( x ) = ( x _ { i } - x _ { i } ^ { \prime } ) \cdot \int _ { \alpha = 0 } ^ { 1 } { \frac { \partial f _ { \theta } ( x ^ { \prime } + \alpha ( x - x ^ { \prime } ) ) } { \partial x _ { i } } } d \alpha
$$
where $x ^ { \prime }$ is a baseline input (typically a zero vector or an average image). To compute this in practice, we use a Riemann approximation with $m$ steps:
$$
\operatorname { I G } _ { i } ( x ) \approx ( x _ { i } - x _ { i } ^ { \prime } ) \cdot { \frac { 1 } { m } } \sum _ { j = 1 } ^ { m } { \frac { \partial f _ { \theta } \left( x ^ { \prime } + { \frac { j } { m } } ( x - x ^ { \prime } ) \right) } { \partial x _ { i } } }
$$
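The Riemann approximation above can be sketched directly. Here `grad_fn` is a caller-supplied gradient oracle (an assumption made to keep the sketch independent of any particular autodiff framework):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, m=50):
    """m-step Riemann approximation of IG along the straight-line path.

    grad_fn(z) must return the gradient of the target class score at z.
    """
    total = np.zeros_like(x, dtype=float)
    for j in range(1, m + 1):
        z = baseline + (j / m) * (x - baseline)
        total += grad_fn(z)
    return (x - baseline) * total / m
```

For a linear model the gradient is constant, so the approximation is exact and satisfies IG's completeness property (attributions sum to $f(x) - f(x')$), which makes a convenient sanity check.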
We normalize the attributions to obtain a probability distribution $p \in \Delta ^ { d }$ , and compute its entropy:
$$
H ( p ) = - \sum _ { i = 1 } ^ { d } p _ { i } \log ( p _ { i } + \delta )
$$
where
$$
p _ { i } = \frac { \left| \mathrm { I G } _ { i } \right| } { \sum _ { j } \left| \mathrm { I G } _ { j } \right| }
$$
and $\delta$ is a small constant added to avoid numerical instability from $\log ( 0 )$ .
Low entropy $H ( p )$ implies that the attribution is concentrated (potentially more interpretable), whereas high entropy suggests that the attribution is diffuse or noisy.
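The two formulas above combine into a few lines; a minimal sketch:

```python
import numpy as np

def attribution_entropy(ig, delta=1e-12):
    """Entropy of the normalized absolute attribution vector p."""
    p = np.abs(ig) / np.abs(ig).sum()
    return float(-np.sum(p * np.log(p + delta)))
```

Uniform attributions over $d$ features give the maximal value $\log d$, while a one-hot attribution gives (approximately) zero.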
# 2.4. Axis 3: Contrastive Attribution Drift
We define the Attribution Drift Score (ADS) to quantify the shift in saliency maps when the attribution baseline changes. For a fixed test input $x$, we compute its attribution vectors $a ^ { ( 1 ) } , a ^ { ( 2 ) }$ using Integrated Gradients (IG) under two different baselines $x ^ { ( 1 ) } , x ^ { ( 2 ) }$ , and define:
$$
\mathrm { A D S } ( x ^ { ( 1 ) } , x ^ { ( 2 ) } ) = \| a ^ { ( 1 ) } - a ^ { ( 2 ) } \| _ { 2 }
$$
This score reflects how sensitive a model’s explanation is to the choice of attribution reference — serving as a proxy for explanation stability under interpretability perturbations.
In our experiments, we use two commonly accepted and semantically meaningful baselines:
• A zero baseline (all pixels set to zero), which represents complete absence of input signal.
• A blurred baseline, approximated as a smoothed version of the original image, retaining coarse structure but eliminating fine-grained details.
We chose these two baselines due to their contrasting interpretability assumptions: zero encodes null input, while blur encodes contextual prior. This contrast highlights the model’s attribution stability under plausible explanation choices. The resulting ADS is reported for each model and dataset.
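The ADS computation and a blurred baseline can be sketched as follows. The exact blur kernel is not specified in the text, so the box blur below is an assumption standing in for the smoothed baseline:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur (assumed stand-in for the smoothed baseline)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def attribution_drift_score(a1, a2):
    """ADS = L2 distance between two flattened attribution maps."""
    return float(np.linalg.norm(np.ravel(a1) - np.ravel(a2)))
```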
Baseline Robustness. While the two baselines above are standard, we also explore ADS using random noise and uniform baselines in Appendix A. We observe that TriGuard’s drift patterns remain consistent, validating the robustness of our metric across attribution setups. Higher ADS indicates unstable or brittle explanation logic, which may not be reflected in clean or adversarial accuracy alone.
# 2.5. Entropy-Regularized Attribution Training.
To further improve attribution stability and adversarial resilience, we propose a lightweight enhancement: entropy regularization on input gradients during training.
Given the input gradient $\nabla _ { x } \mathcal { L } ( f ( x ) , y )$ , we compute a normalized vector:
$$
p = \frac { | \nabla _ { x } \mathcal { L } | } { \sum | \nabla _ { x } \mathcal { L } | }
$$
and apply the entropy penalty:
$$
\mathcal { L } _ { \mathrm { e n t r o p y } } = - \sum _ { i } p _ { i } \log ( p _ { i } + \epsilon )
$$
This encourages sparser and more localized attribution maps, which, as our results show, correlate with lower drift and higher verification success. The full loss becomes:
$$
\mathcal { L } _ { \mathrm { t o t a l } } = \mathcal { L } _ { \mathrm { C E } } + \lambda \cdot \mathcal { L } _ { \mathrm { e n t r o p y } }
$$
where $\lambda$ is a small regularization coefficient (e.g., 0.01 to 0.1).
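The combined loss can be sketched in a few lines; `lam=0.05` below is an arbitrary choice inside the suggested range, and the cross-entropy value and input gradient are assumed to be supplied by the training loop:

```python
import numpy as np

def entropy_penalty(input_grad, eps=1e-12):
    """Entropy of the normalized |input gradient| distribution."""
    p = np.abs(input_grad) / np.abs(input_grad).sum()
    return float(-np.sum(p * np.log(p + eps)))

def total_loss(ce_loss, input_grad, lam=0.05):
    """L_total = L_CE + lambda * L_entropy."""
    return ce_loss + lam * entropy_penalty(input_grad)
```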
# 2.6. Additional Baselines: SmoothGrad2 and CROWN-IBP.
To strengthen empirical benchmarking, we incorporate two additional baselines into the TriGuard evaluation pipeline: SmoothGrad2 (Smilkov et al., 2017) for attribution stability, and CROWN-IBP (Zhang et al., 2021) for certified robustness.
SmoothGrad2 enhances attribution reliability by averaging squared saliency maps over $N$ noisy input samples. Given an input $x$ , Gaussian noise $\mathcal { N } ( 0 , \sigma ^ { 2 } )$ is added to produce perturbed inputs $x _ { i }$ , from which gradients $s _ { i }$ are computed:
$$
S G ( x ) = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } s _ { i } ( x _ { i } ) ^ { 2 }
$$
We compute entropy and drift metrics on these aggregated maps to compare attribution stability with TriGuard’s Integrated Gradients-based approach.
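The SmoothGrad² aggregation can be sketched as follows; as before, `grad_fn` is a caller-supplied gradient oracle rather than any specific framework's API:

```python
import numpy as np

def smoothgrad_sq(grad_fn, x, n=25, sigma=0.1, seed=0):
    """SmoothGrad^2: mean of squared saliency maps over n noisy copies of x."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x, dtype=float)
    for _ in range(n):
        g = grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
        acc += g ** 2
    return acc / n
```

For a model with constant gradient the noise averages out exactly, which gives a quick correctness check.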
CROWN-IBP integrates convex relaxation (CROWN) and interval bound propagation (IBP) to certify output stability under norm-bounded adversarial perturbations. This method offers tighter and more scalable robustness guarantees compared to vanilla random-sampling checks. We adapt CROWN-IBP into our formal verification component to benchmark the certified safety of evaluated models.
Together, these baselines provide orthogonal perspectives on model robustness - one targeting explanation consistency, the other providing provable guarantees thereby reinforcing TriGuard’s comprehensive safety assessment framework.
# 2.7. Experimental Evaluation Protocol
We evaluate TriGuard across 15 model-dataset combinations:
• Models: SimpleCNN, ResNet50, ResNet101, MobileNetV3 Large, DenseNet121
• Datasets: MNIST, FashionMNIST, CIFAR-10
For each setting, we record the following metrics:
• Clean accuracy on test data,
• Accuracy under PGD attack ($\epsilon = 0.1$ for MNIST & FashionMNIST, $\epsilon = 0.3$ for CIFAR-10),
• Formal verification status (pass/fail),
• Attribution entropy,
• Attribution drift score (L2 norm)
# 3. Results
We evaluate TriGuard across combinations of datasets and architectures described in Section 2.1. For each configuration, we report the metrics listed in Table 1: clean accuracy, adversarial error, attribution entropy, attribution drift score, SmoothGrad2, formal verification pass, and CROWN-IBP status. All models are trained with entropy regularization unless otherwise noted.
As shown in Table 1, models like SimpleCNN on MNIST achieve high clean accuracy $( 9 8 . 5 1 \% )$ and also pass both formal verification checks, with notably low attribution drift (2.48) and moderate entropy (5.13). On the other hand, deeper architectures like DenseNet121 achieve high accuracy $( 9 9 . 1 8 \% )$ but fail CROWN-IBP and exhibit higher drift (3.53), highlighting latent instability not visible through accuracy alone.
Figure 1 presents attribution entropy across all models and datasets. We observe that CIFAR-10 models consistently yield higher entropy, indicating more diffuse attributions, likely due to input complexity. Meanwhile, Figure 2 visualizes attribution drift, showing greater variance for FashionMNIST compared to MNIST.
To qualitatively compare attribution robustness, Figure 3 illustrates IG heatmaps for clean and adversarial examples of digit “7.” Adversarial perturbations produce shifted, noisy saliency, confirming fragility in explanation. Figure 5 further highlights this with red/blue overlays showing attribution displacement under attack.
Results without regularization are shown in Appendix B, reinforcing that entropy-regularized training yields more stable saliency without sacrificing accuracy.
# 3.1. Correlation Analysis
To understand metric interdependencies, we compute Pearson correlations between entropy, drift, and adversarial error across all models. Results are visualized in Figure 4. We find:
Figure 1. Attribution entropy across all model–dataset combinations. CIFAR-10 models exhibit higher entropy on average, suggesting more diffuse and uncertain explanations due to input variability.
Figure 2. Attribution drift across all models. FashionMNIST models show the highest drift variance, while MNIST models exhibit more stable saliency patterns.
• Entropy and drift show a weak negative correlation $(r = -0.22)$, suggesting that more focused attributions may yield slightly more stable explanations.
• Entropy and adversarial error exhibit a moderate negative correlation $(r = -0.45)$, indicating that entropy may partially align with robustness.
• Drift and adversarial error are effectively uncorrelated $(r = 0.00)$, highlighting their orthogonal insights.
These results highlight that TriGuard’s explanation metrics provide complementary signals to adversarial performance, helping expose hidden failure modes.
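The pairwise statistics above reduce to Pearson's $r$ computed over per-model metric vectors; a minimal sketch:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two metric vectors."""
    return float(np.corrcoef(a, b)[0, 1])
```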
# 3.2. Ablation Study: Entropy-Regularized Training
We evaluate all models both with and without entropy regularization to assess its effect on attribution stability. Across datasets and architectures, we observe that entropy-regularized training consistently produces more dispersed yet stable saliency maps, evidenced by higher attribution entropy and lower drift scores, while maintaining comparable accuracy.
Table 1. TriGuard results with entropy regularization across five models and three datasets. We report clean accuracy, adversarial error, attribution entropy, attribution drift score, SmoothGrad2, formal verification status, and CROWN-IBP success. Entropy-regularized models tend to exhibit lower drift and better attribution stability.
Figure 3. Integrated Gradients (IG) saliency on clean vs. adversarial inputs. Visual comparison for digit “7” shows that adversarial inputs yield noisy, displaced saliency — highlighting the need for stability-aware attribution metrics.
For instance, SimpleCNN on MNIST sees drift reduced from 16.64 to 1.73 with a slight increase in entropy (4.47 to 5.29). Table 4 presents a detailed breakdown of how varying the regularization strength $\lambda$ influences accuracy, drift, and entropy. This consistent trend across models suggests that entropy regularization is an effective strategy for improving explanation robustness without harming classification performance.
# 4. Discussion
Our results demonstrate that standard performance metrics like clean or adversarial accuracy do not capture the full safety picture. For example, ResNet50 on CIFAR-10 maintains $0 . 0 0 \%$ adversarial error but still shows non-trivial drift (0.99), suggesting latent explanation instability. Conversely, DenseNet121 on MNIST achieves $9 9 . 1 8 \%$ accuracy and passes formal verification, yet exhibits moderate drift (3.53) - a form of silent brittleness.
TriGuard addresses this gap by providing a richer view of model reliability via drift and entropy metrics. As illustrated in Figures 1 and 2, datasets with higher perceptual complexity (e.g., CIFAR-10) consistently induce higher entropy, while FashionMNIST triggers greater drift variance. This suggests both dataset and architecture choices influence attribution consistency.
Entropy-regularized training emerges as a promising strategy. As shown in Table 4 and discussed earlier, increasing $\lambda$ reduces drift significantly while maintaining accuracy. For example, SimpleCNN on MNIST sees drift drop from 16.64 (no regularization) to 1.73 (with $\lambda = 0 . 1 0$ ), as further supported by Table 4.
Figure 4. Correlation plots between attribution metrics and adversarial error. Left: entropy vs. drift shows a weak negative correlation $(r = -0.22)$. Middle: entropy vs. adversarial error shows a moderate negative correlation $(r = -0.45)$. Right: drift vs. adversarial error shows no correlation $(r = 0.00)$.
Figure 5. Contrastive attribution map using IG under adversarial perturbation. Red and blue regions visualize attribution displacement, capturing explanation instability even when the prediction remains correct.
Finally, contrastive attribution maps in Figure 5 reveal visual saliency displacement under adversarial perturbations. This reinforces the need for quantitative faithfulness evaluation, which we provide via insertion/deletion AUC (Appendix B.2).
Foundation models like ViT and CLIP pose additional challenges for attribution due to their token-based attention mechanisms and lack of pixel-level alignment, which TriGuard could address by adapting attention-based explainability techniques (Chefer et al., 2021).
Challenges with Formal Verification. While TriGuard integrates formal verification via CROWN-IBP, we observe failures on deeper architectures like DenseNet121. These failures stem from Dropout layers and complex connectivity, which disrupt symbolic bound propagation. Despite achieving $0 . 0 0 \%$ adversarial error and passing our empirical verification check, DenseNet121 fails the CROWN-IBP test, underscoring limitations in current certifiers when handling realistic, high-capacity networks. We report these results transparently to highlight the need for improved verification tools tailored to modern model structures.
Future work could explore abstraction-based verifiers or relaxed bound propagation techniques like DeepPoly (Singh et al., 2019) or $\alpha$-CROWN (Xu et al., 2021a) for better scalability with modern architectures.
# 4.1. Limitations
TriGuard currently targets image classifiers with structured pixel inputs and assumes access to gradients for saliency extraction. Extensions to black-box or non-differentiable models — such as some tabular pipelines or GNNs — require further adaptation. Our method assumes access to input gradients for saliency generation, which may not hold in fully black-box or proprietary model settings. Adapting TriGuard to such constraints remains future work.
Future extensions may incorporate gradient-free attribution techniques, such as occlusion-based saliency (Fong & Vedaldi, 2017) or model distillation, to support evaluation in black-box settings.
# 5. Impact Statement
TriGuard aims to enhance model safety in high-stakes domains such as medical diagnostics and financial risk assessment, where explanation consistency and verifiability are critical. By identifying shifts in model attribution and bounding adversarial vulnerabilities, TriGuard enables practitioners to assess AI reliability before deployment.
We envision this framework aiding regulators, developers, and auditors in measuring model transparency and safety. As AI models are increasingly deployed in automated decision-making pipelines, our work contributes to ensuring that these systems behave in trustworthy and interpretable ways under real-world conditions.
However, TriGuard’s reliance on gradient access limits applicability to black-box models. Moreover, while interpretability metrics improve transparency, incorrect attribution interpretations could mislead users in critical settings like healthcare or finance. Deployment should be accompanied by human oversight and calibrated confidence intervals. Inappropriate reliance on saliency maps or weak robustness signals may lead to overconfidence in safety-critical applications.
# 6. Related Work
TriGuard draws upon and extends foundational research across adversarial robustness, formal verification, and interpretability. Our contribution lies in unifying these efforts under a shared evaluation framework and proposing a novel metric — Attribution Drift Score — for quantifying explanation stability.
Adversarial Robustness. The vulnerability of neural networks to adversarial perturbations has been extensively studied. FGSM (Goodfellow et al., 2015) and PGD (Madry et al., 2018) formalized first-order attack strategies, prompting numerous defense proposals. (Gowal et al., 2021) demonstrated that learned data augmentations can improve robustness without sacrificing clean accuracy. However, few works examine the interplay between adversarial robustness and interpretability — a gap TriGuard seeks to address.
Formal Verification. Formal methods such as ERAN (Gehr et al., 2018) and Auto-LiRPA (Zhang et al., 2020) provide provable guarantees under normbounded perturbations, scaling to moderately large ReLU networks. Alpha-beta-CROWN (Xu et al., 2021b) further improved efficiency for large-scale verification. Still, most verifiers operate independently of attribution metrics. TriGuard supplements such verification with saliency-based diagnostics to detect reasoning failures.
Attribution and Explanation. Integrated Gradients (IG) (Sundararajan et al., 2017) remains a cornerstone in attribution methods. (Kim et al., 2022) criticized many saliency techniques via invariance tests, while (Hooker et al., 2020) introduced ROAR for benchmarking attribution utility. (Chen et al., 2023) proposed entropy-based metrics to assess concentration in saliency maps. Our entropy metric builds on these insights to measure attribution sharpness.
Attribution Drift and Stability. Attribution stability under perturbation has been explored via saliency sensitivity (Ghorbani et al., 2019) and local Lipschitz bounds (Alvarez-Melis & Jaakkola, 2018). Our Attribution Drift Score quantifies such instability using contrastive attributions. (Bastani et al., 2023) and (Meng et al., 2023) similarly explored explanation consistency using causal tracing and perturbation-based reliability tests.
Gradient Regularization and Robustness. (Ross & Doshi-Velez, 2018) showed that penalizing input gradients can improve robustness and interpretability. (Dvijotham et al., 2018) connected smooth gradients to certified bounds. TriGuard’s entropy-regularized loss implicitly enforces sparse gradient distributions, contributing to both saliency coherence and robustness.
Positioning of TriGuard. While prior work has tackled verification, saliency, or adversarial robustness in isolation, TriGuard is the first framework to unify all three axes - robustness, interpretability, and verification - into a single diagnostic suite. By introducing the Attribution Drift Score (ADS) and combining it with entropy, adversarial error, and formal verification, TriGuard provides a holistic evaluation lens for model safety across architectures and datasets.
Recent works have emphasized the need to assess attribution faithfulness beyond visual plausibility. Hooker et al. (Hooker et al., 2021) introduced benchmark datasets and deletion-based metrics to quantify saliency reliability, while Ramaswamy et al. (Ramaswamy et al., 2022) proposed robustness-oriented tests for attribution stability. TriGuard extends this line of work by incorporating AUC-based faithfulness metrics as a core scoring axis in its evaluation suite. | Deep neural networks often achieve high accuracy, but ensuring their reliability under adversarial and distributional shifts remains a pressing challenge. We propose TriGuard, a unified safety evaluation framework that combines (1) formal robustness verification, (2) attribution entropy to quantify saliency concentration, and (3) a novel Attribution Drift Score measuring explanation stability. TriGuard reveals critical mismatches between model accuracy and interpretability: verified models can still exhibit unstable reasoning, and attribution-based signals provide complementary safety insights beyond adversarial accuracy. Extensive experiments across three datasets and five architectures show how TriGuard uncovers subtle fragilities in neural reasoning. We further demonstrate that entropy-regularized training reduces explanation drift without sacrificing performance. TriGuard advances the frontier in robust, interpretable model evaluation. | [
"cs.LG",
"cs.AI"
] |
# 1 Introduction
Each year, an estimated 2.5 billion people buy something online. You are likely one of those people. Most e-commerce stores now routinely display product reviews. These are typically short, instructive paragraphs of text that detail the observations and judgments of other consumers, pointing to the relative benefits and limitations of the respective item and the associated consumption experience. There is clear evidence that product reviews are persuasive and influence purchase decisions (Duan et al., 2008; Zhu and Zhang, 2010; Forman et al., 2008; Chevalier and Mayzlin, 2006). Yet, the relative scale of fraud across online product reviews remains unknown - as is how many of the world’s 2.5 billion online consumers base their decisions on deliberately manipulated or fraudulent reviews.
The urgency of the issue continues to grow due to the widespread adoption of new Generative Artificial Intelligence techniques (or ‘GenAI’) for creating text, underpinned by recent advances in Large Language Models (LLMs). In many domains, resulting tools now have the capability to produce online posts that have the appearance of being entirely human, despite entirely artificial construction. The credibility of GenAI writing is so striking that it threatens to upturn entire industries, with the most widely described threat of artificial intelligence (AI) - beyond superintelligence destroying humanity (e.g., Bostrom, 2017; Tegmark, 2018) - focusing on technology supplanting the workforce of human labour. Whether replacing lawyers, medical doctors, or academic professors, there is a growing concern that technology will render a broad class of knowledge workers redundant (Brodeur et al., 2024; Schneiders et al., 2024; Sikander et al., 2023).
What has received comparatively less scrutiny is how these technologies will inevitably be used for persuasion. Far from eliminating the role of the marketer, GenAI will more likely enhance the arsenal of persuasive tools available to manipulate consumers. LLMs provide an instantaneous, multilingual, and highly accessible means to persuade consumers through the creation of fake product reviews. It is this precise issue - of generative artificial tools being used by unscrupulous marketers to generate hyper-persuasive but false narratives - which we aim to examine. In short, these tools potentially make the act of industrial fraudulence via product reviews a trivial task.
As part of the broader wave of GenAI innovations, LLMs are increasingly embedded into consumer-facing technologies - a process that has been described as a consumer-centric disruptive force that is reshaping the modern economy and accelerating the shift toward automation and data-driven decision-making (Beheshti et al., 2024). In contexts such as customer service, branding, and digital marketing, LLMs now play an active role in content creation and recommendation (Lee and Park, 2022; Vernuccio et al., 2023; Li et al., 2023; Ford et al., 2023; Osadchaya et al., 2024; Cui et al., 2024; Ferraro et al., 2024); and, though often not obvious to end users, an increasingly large fraction of Web-based content is written by machines (Thompson et al., 2024). This is happening now, not in some hypothetical future. However, given the surreptitious nature of content designed to persuade, it is difficult to reliably estimate its prevalence. The idea of marketers as ‘hidden persuaders’ capable of leveraging academic insight from the psychological and social sciences to create surreptitious manipulation practices has a long history (at least as far back as Packard, 1957). We will not repeat such critiques of marketing here, other than to suggest that the emergence of LLMs has created the potential for marketers to generate and deploy an army of false consumer advocates in their attempts as hidden persuaders. In this article, we present a series of studies to show that this shift has the potential to be even more insidious. Understanding the manipulative intent of adverts requires a certain level of literacy, and this knowledge is often not fully realised in most people until late adolescence (John, 1999; Boush et al., 1994). But in the case of identifying false consumer narratives, is literacy enough to defend ourselves?
# 1.1 Background and Research Aims
Emerging evidence suggests that consumers often struggle to distinguish AI-generated from human-written content across a range of domains, including news, product descriptions, and service reviews (Clark et al., 2021; Salminen et al., 2022; Kovács, 2024; Hatch et al., 2025). Where LLMs have previously been shown to pass the ‘Turing Test’ (i.e. a test of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human), this is less obvious for the product review format. Product reviews are relatively unusual compared to many other forms of narrative that LLMs have proven effective at generating. Unlike computer programming, disease diagnosis, the legalese of lawyers, or news articles, product reviews are not so dependent on highly technical language or syntax, which can easily fool a reader. Instead, they often contain half-formed ideas, are replete with typographic errors, include non-sequiturs, and are dependent on the local context in which a product was purchased, delivered, consumed, or disposed of. If, as is often said, ‘to err is human’, the same cannot be said of LLMs, which are designed to generate flawless text. The imperfections of human-generated product reviews are unusual compared to both typical human-edited and AI-generated text. We are therefore interested in asking whether it is still possible for either people or indeed LLMs to distinguish false AI-generated reviews. This aim frames the three studies that follow.
While prior work has explored the detection of AI-generated content across various domains, there remains limited empirical evidence directly comparing the performance and the underlying reasoning of humans and state-of-the-art LLMs in judging product reviews. To address this gap, we aim to advance understanding of how both humans and LLMs assess the authenticity of product reviews, particularly when the content is generated by LLMs, via three studies, each corresponding to one of the following research questions:
• Research Question 1: Can humans distinguish between real and fake reviews generated by large language models? Through an experimental online study, we evaluate human participants’ ability to distinguish between real and LLM-generated reviews. We additionally investigate how accuracy and self-reported confidence in identifying fakes vary across individuals according to demography and experience.
• Research Question 2: Can large language models distinguish between real and fake reviews? We extend the experimental judgment task to evaluate the detection capabilities of state-of-the-art LLMs, benchmarking their performance against human accuracy.
• Research Question 3: What makes a fake review hard to distinguish? The final study provides a comparative analysis of the underlying heuristics used by both humans and LLMs when making authenticity judgments. Using a range of measures to evaluate binary classification performance, we demonstrate the respective strategies and heuristics that inform judgment for both humans and machines.
This paper makes two main contributions to the literature on understanding consumer behaviour and detecting AI-generated content. Firstly, the study provides the first systematic empirical benchmark comparing human participants and state-of-the-art LLMs in their ability to distinguish between real and AI-generated product reviews. The findings reveal that both groups perform only marginally above chance, underscoring fundamental challenges in current detection capabilities. This contributes empirical clarity to growing concerns that GenAI can produce content that is not only persuasive but also effectively indistinguishable from authentic, human-generated text. More importantly, this highlights an imbalance in the generative–detective capacities of LLMs, in that they can easily produce synthetic content, but struggle to accurately detect it as such. This imbalance represents a key dimension of GenAI’s ‘dark side’, especially in consumer-facing environments where trust is critical.
Secondly, this paper identifies the cognitive and computational heuristics underlying authenticity judgments. By analysing which textual features are associated with detection performance and alignment, this paper uncovers the divergent heuristics used by humans and LLMs. Specifically, the study reveals that humans tend to rely on intuitive cues and are biased by ‘too-good-to-be-true’ judgments, which we describe as a form of ‘scepticism bias’ towards positive reviews. These cues are often flawed and easily manipulated, supporting concerns raised in prior theoretical work about the cognitive limitations of human detection. In contrast to humans, we show that LLMs exhibit a different form of strategy when judging reviews, which reveals itself as being biased toward believing that most reviews are real reviews. We define this trait as a ‘veracity bias’, i.e., a tendency to falsely believe in the veracity of reviews. LLM judgments are shown to rely on superficial textual features, such as review length. These findings highlight key vulnerabilities in both humans and LLMs and offer insight into how future ‘fake review’ generations could exploit these blind spots.
The remainder of the paper is structured as follows: Section 2 reviews related works on human judgment of review authenticity, AI-based fake content detection, and their differences in evaluation strategies; Sections 3, 4, and 5 present three empirical studies, followed by a general discussion of the findings and their implications in Section 6. Finally, Section 7 concludes the paper by outlining its limitations and identifying directions for future research.
# 2 Related works
Online reviews are not only consulted in digital shopping environments but also in-store during showrooming, as a form of extended cognition (Smith et al., 2025). This reflects the logic of electronic word-of-mouth (eWOM), whereby consumers seek peer feedback to reduce perceived risk and uncertainty in purchase decisions (Hu et al., 2008). The increasing reliance on such reviews and their persuasive power in consumer choice underscores the importance of ensuring their authenticity, particularly as AI-generated reviews become more prevalent. In 2024, the UK government announced a ‘ban’ on fake online reviews as part of a broader initiative to protect consumers, estimating that misleading endorsements and hidden fees cost consumers over £2.2 billion annually (Ungoed-Thomas, 2025). This highlights the urgency of addressing review authenticity not only in academic research but also in policy action.
Outside of AI-generated fake reviews in e-commerce environments, there is a broad literature on authenticity and deception detection spanning various content formats; this research identifies a range of cognitive and linguistic heuristics used in contexts such as news, social media, and human-written reviews, which inform how consumers perceive authenticity in product reviews. As such, this literature review draws from a broad base of related works, presenting existing theories and highlighting potential cues relevant to fake review detection, structured around three key areas: (1) how consumers judge authenticity and the psychological and textual cues they rely on when judging reviews; (2) how AI systems, particularly LLMs, process and evaluate review authenticity; and (3) how human and machine judgment strategies differ in this context.
# 2.1 Consumer judgment of review authenticity
# 2.1.1 Cognitive models
According to the Heuristic-Systematic Model (Chaiken, 1980), humans rely on two modes of information processing. One is a systematic route, which involves systematic analysis of message content, entailing a deliberate attempt to assess credibility through a broad range of cues, such as message consistency, author identity, or platform context. The other is the heuristic route, which depends on mental shortcuts and surface cues, such as perceived authority of the source, the length of the message, and the degree of social consensus.
Both analytic and heuristic strategies can operate simultaneously in credibility evaluation (Metzger and Flanagin, 2015; Metzger, 2007), but are heavily influenced by cognitive heuristics. This is reflected in studies finding that people are often biased to believe in the validity of information, and ‘go with their gut’ and intuitions instead of deliberating (Ecker et al., 2022). This might be because heuristic processing is usually faster, relying on surface-level characteristics (e.g., familiarity and tone) or intuitive judgments. According to the Limited Capacity Model (Lang, 2000) and the Prominence-Interpretation Theory (Fogg et al., 2003), individuals selectively attend to only certain salient features when evaluating information due to limited cognitive resources.
The self-confirmation (personal opinion confirmation) heuristic is particularly relevant (Metzger and Flanagin, 2015), whereby people are more likely to believe information that aligns with their prior beliefs and dismiss information that contradicts those beliefs, regardless of how well-reasoned, well-sourced, or comprehensive it may be. Beyond credibility, researchers have identified three dimensions of authenticity perception: historical, categorical, and value-based (Newman, 2019). These cues are particularly relevant to the self-confirmation heuristic. That is, individuals tend to judge authenticity based on whether a piece of content fits their pre-existing beliefs about what realness should be. These judgments are inherently subjective and may be shaped by prior knowledge and expectations.
# 2.1.2 Text-based heuristics
Human judgment of fake content has been studied across various domains and the heuristics people rely on often differ depending on the content type. In the context of fake news detection, for instance, Damstra et al. (2021) found that individuals draw on a range of cues, including ideological bias, emotional tone, verifiability, and headline structure, as well as linguistic features such as lexical diversity, capitalisation, pronoun usage, informal language, and punctuation patterns.
In the context of online consumer reviews, people tend to rely more heavily on textual and stylistic features. In Chevalier and Mayzlin (2006), reviews perceived as detailed or specific are often seen as more credible. Additionally, the presence of both positive and negative comments within a single review or review set tends to enhance perceived authenticity, as balanced feedback appears more genuine than uniformly positive evaluations. Similarly, Djafarova and Geere BA (2023) identify review length, writing style, and the inclusion of mixed or two-sided sentiment as key factors influencing credibility perceptions, particularly in travel and tourism contexts. Jakesch et al. (2023) demonstrate that people often associate the use of first-person pronouns, contractions, or references to family with human-written content.
The valence of reviews (positive vs. negative) can significantly shape human perception (Metzger and Flanagin, 2015). This is supported by framing theory, which suggests that negative reviews tend to carry more weight and are often judged as more credible than positive ones (Levin and Gaeth, 1988; Doh and Hwang, 2009; Mudambi and Schuff, 2010; O’Reilly and Marx, 2011; Kusumasondjaja et al., 2012; Maslowska et al., 2017). Some studies suggest that a disproportionate number of positive online reviews may cause consumers to discount positive reviews as not reliable (Chevalier and Mayzlin, 2006) and therefore may negatively affect sales. This negativity bias reinforces the idea that consumers may be more sceptical of highly favourable reviews. A study published by the UK DBT in 2023 (UK Department for Business and Trade, 2023) found that at least one in ten product reviews on third-party e-commerce platforms are likely to be fake — most of them positive and intended to influence consumer purchase decisions (Ungoed-Thomas, 2025). In this context, a fake review was defined as one that does not reflect a genuine experience of the product or service and is designed to mislead consumers. Such reviews can be either incentivised and human-written or generated by AI systems, differing from the focus on AI-generated reviews alone taken herein.
# 2.1.3 Consumer judgment is ‘intuitive yet flawed’
In Banerjee et al. (2017), linguistic cues that can distinguish real from fake reviews were summarised into a set of guidelines to help humans better identify deceptive content. While intended as an intervention to enhance human detection accuracy, such findings can paradoxically be repurposed to improve the generation of fake reviews by informing AI systems of which linguistic patterns to avoid to increase perceived authenticity. In other words, these heuristics, while cognitively efficient, make human judgment vulnerable to manipulation, particularly when AI-generated content is designed to mimic these expectations.
Jakesch et al. (2023) describe these cues as intuitive yet flawed heuristics, demonstrating that they make human judgment of AI-generated language predictable and manipulable, allowing AI systems to produce text perceived as ‘more human than human’. In this sense, human judgment can be systematically biased in favour of well-targeted misinformation, especially when it ‘feels’ intuitively correct - bypassing deeper cognitive evaluation. However, to date there is limited empirical evidence directly confirming whether such heuristics can be intentionally misused to fool human judgment in consumer contexts. This highlights the need for systematic investigation into how both humans and LLMs respond to AI-generated content specifically designed to exploit these cognitive shortcuts.
# 2.2 AI judgment of review authenticity
Prior research on AI-based fake content detection has explored a range of methods, mostly relying on linguistic and textual features such as content length, sentiment, topic distribution, or grammar patterns (Crothers et al., 2023; Jabeur et al., 2023; Wu et al., 2025). More recently, with the rise of LLMs, detection efforts have begun to use these generative models themselves to evaluate whether content is likely to be real or fake. GenAI is no longer confined to content generation; it is also being used for automatic detection (Crothers et al., 2023).
That said, LLMs like ChatGPT do not process, and hence ‘understand’, content in a human-like way. Instead, they generate or judge text based on statistical prediction, selecting the most probable next word in a sequence and leveraging structural patterns obtained during training, but with no access to deeper semantics or real-world grounding (Lindebaum and Fleming, 2024). As a result, their outputs may appear fluent and convincing but can still contain inaccuracies or fabricated details, a phenomenon referred to as ‘hallucinations’ (Alkaissi and McFarlane, 2023). This limitation poses challenges for AI-led fake review detection. In particular, LLMs without task-specific fine-tuning often perform inconsistently and unreliably across different types of content (Liu et al., 2023). For example, Salminen et al. (2022) found that although humans struggled to identify AI-generated fake reviews, a fine-tuned model performed well, highlighting a growing gap between generation and detection capabilities. In contrast, other studies suggest that detection models may fail when the LLMs used to generate the fake content are unknown (Kovács, 2024; Wu et al., 2025).
These findings underscore a broader concern in the context of customer reviews: while fake reviews can be generated quickly and easily using off-the-shelf LLMs, fake content detection remains an open and urgent problem — especially as LLMs become more powerful and widely accessible (Crothers et al., 2023). In consumer-facing environments, this imbalance represents a growing risk: the production of persuasive but deceptive content is becoming increasingly scalable, while the capacity to detect it remains limited. This calls for a broader evaluation of AI’s role not only as a content generator but also as an emerging judging agent. We contend that it is essential to understand how AI-based authenticity evaluation differs from human judgment.
# 2.3 Human vs. LLM judgment
Lindebaum and Fleming (2024) discussed the fundamental differences between human and machine-based judgment, particularly in tasks involving the evaluation of authenticity and truthfulness. Human judgment is described as involving contextual, reflexive, and deliberative production of meaning, especially in low-probability, ambiguous situations. It is an organic process tied to learning, experience, and moral interpretation within complex socio-technological environments (Moser et al., 2022). In contrast, LLMs such as ChatGPT do not possess the capacity for reflexive judgment. They rely on statistical prediction to generate the most probable next word in a sequence, without semantic awareness, intuition, or the ability to reason about truth or false claims (Hannigan et al., 2024; Lindebaum and Fleming, 2024).
In evaluating content authenticity, sequence length has emerged as an important signal for LLMs, with longer text generally improving detection performance (Crothers et al., 2023). If this holds true in consumer review contexts, it may suggest that surface cues alone can mislead LLMs into perceiving fake reviews as real. In contrast, human evaluators - drawing on contextual reasoning and critical reflection - often perform better with longer reviews, where more cues are available to support deeper verification. These divergent behaviours underscore the importance of empirically investigating how humans and LLMs differ in processing and evaluating authenticity. This is particularly the case in domains such as online reviews, where trust is central and misjudgment carries critical consequences. However, this remains a relatively underexplored area in existing research.
To empirically examine these challenges, the following sections present three studies comparing human and LLM performance in detecting AI-generated fake reviews.
# 3 Study 1: Can Humans Distinguish Between Real and Fake Reviews?
In this study, we examined whether humans could distinguish between real (human-written) and fake (LLM-generated) reviews. The analysis utilised the Amazon Review 2023 dataset and focused on human classification performance, assessing not only overall accuracy but also associated confidence levels, response patterns over time, and relationships with demographic factors when judging reviews.
# 3.1 Characteristics of Real Online Reviews from Amazon
The Amazon Review 2023 dataset includes multiple categories, each representing different product types with varying levels of customer engagement (Hou et al., 2024). To identify a suitable category for sampling, we ranked them based on key metrics related to customer engagement, product variety, review text volume, and review frequency. ‘Home_and_Kitchen’ emerged as the top-ranked category and was thus selected for further analysis.
From this category, we first extracted two sets of authentic human-written reviews to better understand the characteristics of natural review writing. A random sample of 1,000 reviews was used to capture the general patterns, and a balanced sample of 5,000 reviews based on star ratings was used to capture differences across ratings. Insights from this analysis informed the design of prompts for synthetic review generation in the later stage.
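The two-stage sampling described above can be sketched as follows; the data frame and its column names (`rating`, `text`) are placeholders standing in for the dataset's actual schema, not the authors' code:

```python
import pandas as pd

# Toy stand-in for the 'Home_and_Kitchen' review frame; the column
# names ('rating', 'text') are assumptions, not the dataset's schema.
reviews = pd.DataFrame({
    "rating": [1, 2, 3, 4, 5] * 200,
    "text": ["example review text"] * 1000,
})

# General sample: a uniform random draw to capture overall patterns.
random_sample = reviews.sample(n=100, random_state=0)

# Balanced sample: equal numbers of reviews per star rating, to
# capture differences across ratings.
balanced = reviews.groupby("rating").sample(n=20, random_state=0)
```

`GroupBy.sample` draws the same number of rows from each rating group, which is what makes the second sample balanced by construction.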
The analysis focused on several key characteristics of customer reviews that have been widely examined in prior research (Banerjee et al., 2017; Nica-Avram et al., 2022), particularly those related to diversity and distribution in style and content (see Table 1). The main findings are summarised below.
Table 1: Characteristics of reviews and analysis sources.
Review length and structure: Most reviews were short and to the point, with an average of 30 words, and half contained fewer than two sentences. Very short reviews (fewer than 4 words) made up 10%. Some reviews were much longer, with 10% exceeding 60 words and the longest reaching 20 sentences. While many reviews had a consistent writing style, some alternated between very short and very long sentences, making them more expressive.
Use of uppercase words: Most reviews used standard capitalisation. However, 10% of reviews used uppercase words for emphasis (e.g., ‘LOVE’ or ‘DO NOT’), reflecting strong sentiment.
Punctuation style: Periods and commas were commonly used, suggesting a neutral tone and well-structured sentences. About 1% of sentences contained unusual punctuation such as ellipses (‘...’) and exclamation marks (‘!!!’), reflecting authentic emotional tone.
Pronouns: First-person pronouns were used moderately, while second-person pronouns were infrequent and typically appeared in instructional or advisory reviews.
Past-tense verbs: Past-tense verbs appeared occasionally (2 per review on average), suggesting some personal experiences or references to past events. Only 12.5% of reviews used more than 5 past-tense verbs, indicating that detailed, story-like narratives are relatively rare.
Idiomatic expressions and colloquialisms: Reviews used a mix of informal, expressive language that lends a conversational tone. Common idioms included ‘with flying colours’ (indicating success), ‘worth every penny’ (emphasising value), and ‘break the bank’ (indicating something very expensive), reflecting the use of figurative language to convey enthusiasm. Similarly, colloquialisms like ‘lol’ (casual emphasis) and ‘kinda’ (informal hedge) indicated a relaxed and informal writing style.
Mistakes: The presence of typographical errors, misspellings, and grammar mistakes is important in making reviews feel authentic and human-like, as imperfections are a natural aspect of human writing (Bluvstein et al., 2024). Unlike machines, which generate perfectly structured text, real users often introduce spelling errors, grammatical inconsistencies, and informal phrasing. The sampled reviews showed common issues such as incorrect verb forms, redundant or missing spaces and punctuation, improper capitalisation, missing verbs, and incorrect pronoun usage. Among these, misspellings were the most frequent (in over 50% of reviews), followed by typographical errors (15%), then grammar mistakes, whitespace errors, and style issues (collectively around 10%), while duplications were rare (7 out of 1,000 reviews). Higher-rated reviews tended to be more polished, possibly reflecting more engaged or satisfied customers. In contrast, 2-star and 3-star reviews had the highest error rate per sentence, suggesting they were written more casually or quickly.
Sentiment characteristics: Sentiment generally aligned with star ratings, with higher ratings reflecting more positive sentiment. However, many reviews with mid-range ratings showed mixed sentiment, combining both praise and criticism. 3-star reviews were the most mixed, often including both strengths and weaknesses.
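Several of the surface features profiled above (uppercase emphasis, emphatic punctuation, pronoun use) can be extracted with simple pattern matching. The helper below is an illustrative sketch, not the authors' analysis code:

```python
import re

# Small first-person pronoun list used for illustration.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}

def review_features(text: str) -> dict:
    """Count a few surface features of the kind profiled above."""
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "n_words": len(words),
        # All-caps words of two or more letters, e.g. 'LOVE', 'NOT'
        "n_uppercase": sum(1 for w in words if len(w) > 1 and w.isupper()),
        # Emphatic punctuation: ellipses and runs of exclamation marks
        "n_emphatic_punct": len(re.findall(r"\.{3}|!{2,}", text)),
        "n_first_person": sum(1 for w in words if w.lower() in FIRST_PERSON),
    }

feats = review_features("LOVE this pan... my eggs never stick!!!")
# feats -> {'n_words': 7, 'n_uppercase': 1, 'n_emphatic_punct': 2, 'n_first_person': 1}
```

Aggregating such counts over a sample of reviews yields the kinds of proportions reported above (e.g., the share of reviews containing uppercase emphasis).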
# 3.2 Generating Fake Reviews
To generate human-like reviews using LLMs, prompts should reflect the natural writing patterns of authentic reviews. Based on the previous analysis, we developed heuristics to guide prompt design:
1. Review length and structure: Reviews vary widely in length, but most are short. Prompts should permit both brief and detailed responses, emphasising that concise feedback is more acceptable while also encouraging detailed narratives to enhance authenticity.
2. Capitalisation, punctuation, and expressiveness: Occasional use of uppercase words and informal punctuation for emphasis should be permitted, but not their overuse. Prompts should allow their moderate usage, while avoiding excessive or formal punctuation.
3. Pronoun usage: Prompts should encourage a product-focused perspective while allowing personal reflections by mentioning first-person experience. Prompts should discourage second-person pronouns unless the context is advisory.
4. Past-tense usage: Prompts should accommodate both concise feedback and detailed storytelling with past tense to maintain authenticity. Lower-rated reviews reference past experiences more often, while higher-rated reviews focus on current satisfaction.
5. Idioms, colloquialisms and tone: Prompts should encourage occasional figurative language to enhance conversational and casual tone. Colloquialisms appear in some reviews but are context-dependent, so prompts should indicate their selective use, especially for informal products or experiences.
6. Common mistakes: Prompts should allow occasional typos, minor inconsistencies, or redundant words, as real users rarely write with perfect accuracy. Misspellings are the most frequent error, followed by typographical and grammar/style issues. Errors should be more common in 2-star and 3-star reviews.
7. Sentiment: Prompts should align sentiment with ratings. 1-star and 2-star reviews are typically negative, while 5-star reviews are mostly positive. Prompts should encourage a balanced perspective with mixed sentiments, especially in 3-star reviews.
Preliminary testing with simple prompts indicated that certain patterns, such as sentiment and rating alignment, naturally emerge in the model’s output without explicit instructions. Therefore, we focused on characteristics not captured by default but critical to perceived authenticity. With these insights, we proposed a set of prompts shown in Table 2 and used the ChatGPT o1 model to generate reviews grounded in detailed product information.
Table 2: Proposed prompts and corresponding design insights and heuristics $h$ .
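As a minimal sketch of how heuristics like those above might be assembled into a generation prompt: the rule wording, dictionary keys, and `build_prompt` function below are all hypothetical illustrations, not the actual prompts of Table 2.

```python
# Hypothetical renderings of a few heuristics; not the wording used in
# the study's prompts.
HEURISTIC_INSTRUCTIONS = {
    "h1_length": "Keep most reviews short and to the point; longer narratives are allowed but not required.",
    "h2_emphasis": "Occasional uppercase words or informal punctuation are fine; do not overuse them.",
    "h6_mistakes": "Allow occasional typos or minor grammar slips, especially for lower ratings.",
    "h7_sentiment": "Match sentiment to the star rating; 3-star reviews should mix praise and criticism.",
}

def build_prompt(product: str, rating: int) -> str:
    """Assemble a generation prompt from the heuristic rules."""
    rules = "\n".join(f"- {r}" for r in HEURISTIC_INSTRUCTIONS.values())
    return (
        f"Write a {rating}-star customer review for: {product}\n"
        f"Follow these style rules:\n{rules}"
    )

prompt = build_prompt("non-stick frying pan", 3)
```

Only heuristics that preliminary testing showed the model does not satisfy by default (such as controlled imperfection) would need to be spelled out this way.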
# 3.3 Experimental Method
# 3.3.1 Review dataset
The study used a total of 50 product reviews (as in Meng, 2025): 25 authentic reviews randomly sampled from the 1,000 Amazon reviews examined in Section 3.1, and 25 LLM-generated reviews as described in Section 3.2. These two sets of reviews were compared descriptively to ensure consistency. Table 3 shows they were similar in length and content, with no significant distributional differences confirmed by a Kolmogorov–Smirnov test (p > 0.05).
Table 3: Comparison of real and LLM-generated reviews in the review dataset. Distributions were assessed using the Kolmogorov–Smirnov test, with corresponding p-values reported in the final column.
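A two-sample Kolmogorov–Smirnov comparison of this kind can be run with SciPy; the synthetic word-count samples below are placeholders standing in for the real and generated review statistics of Table 3:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic word-count samples; the actual data are the review
# characteristics summarised in Table 3.
real_lengths = rng.poisson(lam=30, size=1000)
fake_lengths = rng.poisson(lam=30, size=1000)

res = ks_2samp(real_lengths, fake_lengths)
# A p-value above 0.05 would indicate no significant difference
# between the two length distributions.
print(res.statistic, res.pvalue)
```

The same test applies to any per-review statistic (word count, sentence count, sentiment score), which is how each row of Table 3 can be checked.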
# 3.3.2 Participants
A total of 300 participants were recruited via Prolific Academic using the platform’s representative sampling option, which yields samples that reflect national population distributions across age, sex, and ethnicity. This study received ethical approval from the University of Nottingham’s Research Ethics Committee. All participants gave informed consent prior to participation and were compensated at a fair hourly rate in line with ethical standards to ensure adequate task engagement. All participants were adults residing in the United Kingdom and fluent in English. No additional inclusion criteria were applied. After excluding 12 participants due to missing values, the final sample consisted of 288 participants (52% female; mean age of 47).
# 3.3.3 Experimental Task
Participants were informed that the purpose of the study was to investigate whether humans can distinguish between human-written and AI-generated product reviews. Before the task, participants answered two background questions: (1) highest completed education level, and (2) familiarity with LLMs rated on a five-point categorical scale (Never used, Heard of it, Occasionally used, Regular user, Use professionally). Although no additional demographic information was collected during the study, demographic data were obtained from Prolific and used in subsequent analyses.
During the test, each participant viewed all 50 reviews, which were presented sequentially in randomised order to control for sequence effects. For each review, participants made a binary judgment (‘Fake’ or ‘Real’) and rated their confidence level on a scale. The full instructions read:
Imagine you are working in Amazon’s Quality Assurance Department. Your role is to assess and flag suspicious fake reviews.
You will judge one product review at a time.
On larger screens, product information will appear on the left, while the review will be on the right. Your task is to determine whether the review is human-written (Real) or AI-generated (Fake). Use your best judgment and indicate your confidence level for each decision.
If you are using a mobile device, the layout may adjust: product information may appear at the top, followed by the review. Regardless of the layout, the task remains the same.
You will read a total of 50 short product reviews, which should take approximately 10 minutes to complete.
Table 4: Definitions of binary classification evaluation metrics.
Note: Metrics can also be calculated with fake reviews to assess classification performance.
# 3.3.4 Measures
Classification performance was evaluated using accuracy, precision, recall, and the F1-score, which are standard metrics in binary classification (Canbek et al., 2017). These metrics are derived from the confusion matrix, which includes four outcomes: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN); Table 4 summarises how these outcomes define each metric. By analysing the confusion matrix, we can assess how accurately real and fake reviews are distinguished. In this study, real (human-written) reviews are treated as the positive class and fake (AI-generated) reviews as the negative class; however, we report precision, recall, and F1 scores for both classes to provide a balanced assessment of how well each type of review is identified.
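The metrics follow directly from the four confusion-matrix counts. A minimal sketch, using illustrative counts rather than the study's data:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)      # of predicted positives, how many are right
    recall = tp / (tp + fn)         # of actual positives, how many are found
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Illustrative counts only (not the study's data), with real reviews
# as the positive class.
m = binary_metrics(tp=80, fp=60, tn=40, fn=20)
```

Swapping which class counts as positive (real vs. fake) simply relabels the four cells, which is how per-class precision, recall, and F1 are obtained.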
We also investigated the relationships between participants’ confidence, review duration (i.e., response time per review), and classification accuracy, as well as how these measures evolved over the course of the task. In addition, zero-order Spearman correlations were used to assess associations between performance and individual-level factors such as LLM familiarity and education. Time series patterns were also examined to explore potential fatigue or learning effects across the 50-review sequence.
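The trial-level correlation and smoothed-accuracy analyses can be sketched as follows; the per-trial data here are simulated with an assumed weak upward drift, not the study's results:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# Simulated per-trial correctness over the 50-review sequence, with a
# weak upward drift; the study's actual results appear in Section 3.4.
trial = np.arange(1, 51)
accuracy = (rng.random(50) < 0.45 + 0.004 * trial).astype(float)

# Zero-order Spearman correlation between trial number and accuracy
rho, p = spearmanr(trial, accuracy)

# Smoothed accuracy: simple moving average over a 10-trial window,
# the kind of smoothing used to inspect fatigue or learning effects.
window = 10
smoothed = np.convolve(accuracy, np.ones(window) / window, mode="valid")
```

The same pattern (Spearman correlation plus a rolling window) applies to confidence and response time across the sequence.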
# 3.4 Results
Figure 1: Comparison of confusion matrices for human participants and LLMs.
(a) Confusion matrix of participant judgments.
(b) Confusion matrix of LLM results.
# 3.4.1 Overall Classification Performance
As shown in the confusion matrix (Figure 1a), the overall classification accuracy was 50.82% (SD = 0.08), which is only marginally above the chance level (50%). Performance was relatively better when identifying real reviews (precision = 0.506, recall = 0.658, F1-score = 0.572): 65.8% of real reviews were correctly classified, while 34.2% were misclassified as fake. In contrast, performance was substantially lower for fake reviews (precision = 0.512, recall = 0.358, F1-score = 0.421): only 35.8% were correctly identified as fake, while the majority (64.2%) were incorrectly labelled as real.
Despite this near-chance performance, participants reported markedly higher confidence in their judgments (Mean = 66.99, SD = 12.03) than their actual accuracy. No significant correlations were found between confidence, time spent per review, and classification accuracy, based on Spearman’s rank correlation tests (p > 0.05 for all pairwise comparisons).
# 3.4.2 Trial-by-Trial Dynamics: Accuracy, Confidence, and Speed
To explore how participants’ performance and self-perception evolved over time, we examined trial-by-trial changes in accuracy, confidence, and response time across the 50 reviews. Results revealed a slight improvement in accuracy over time, with a modest, statistically significant positive correlation between trial number and accuracy (r = 0.314, p = 0.026). The observed increase in accuracy appeared to plateau around the 35th review, after which smoothed accuracy remained relatively stable, fluctuating narrowly around 51.5% and thus still close to chance.
In contrast, confidence steadily declined as the task progressed, despite the modest gains in performance (r = -0.640, p < 0.001, mean slope of -0.065 per review). Response time also showed a clear downward trend, indicating increased speed over time (r = -0.240, p < 0.001, mean slope of -0.178 per review).
# 3.4.3 Demographic and Individual Differences
To explore the influence of individual differences on task performance, a correlation analysis was conducted between key outcome measures (accuracy, confidence, and response time) and participant characteristics, including age, gender, number of fluent languages, primary language, education level, and self-reported familiarity with LLMs. Age and accuracy had a small but statistically significant negative correlation (r = -0.137, p = 0.019), as shown in Figure 2. Age was also positively associated with review completion time (r = 0.284, p < 0.001). However, neither prior familiarity with LLMs, level of confidence, nor any other demographic variable was associated with task performance.
Figure 2: Relationship between participant age and classification accuracy, with a fitted quadratic trendline to illustrate the negative association.
Confidence in task judgments shared no significant relationships with any variables, so neither prior familiarity with LLMs nor educational level, for example, seemed to influence this judgment.
Greater familiarity with LLMs was significantly associated with being younger ($r = -0.240$, $p < 0.001$), being fluent in more languages ($r = 0.174$, $p = 0.003$), and being more educated ($r = 0.234$, $p < 0.001$). Gender was also associated with significant differences in familiarity, with males indicating greater levels ($M = 2.02$) than females ($M = 1.55$, $F(1, 192) = 14.05$, $p < 0.01$).
# 3.5 Discussion
These results reveal a clear asymmetry: participants were moderately better at recognising real reviews but consistently struggled to detect fake ones, with performance near random levels across both classes. This highlights the difficulty of this task for humans - particularly in identifying AI-generated content. Notably, participants demonstrated overconfidence: average confidence ratings were substantially higher than actual accuracy, while confidence showed no significant correlation with performance. If consumers trust their own judgments without sufficient accuracy, they may be more vulnerable to deceptive content.
Further analysis of time-based trends revealed that while participants became slightly more accurate and faster during the task, their confidence steadily declined. Given the absence of feedback during the experiment, participants had limited opportunity to correct their judgments. This may reflect a form of blind learning, where individuals develop rough heuristics but also become increasingly uncertain when faced with ambiguous, hard-to-verify information. Over time, this could reduce confidence in one’s ability to assess content authenticity — a risk in real-world environments with extensive AI-generated content.
Individual differences provided limited explanations for task performance. While LLM familiarity was positively related to being younger, more educated, and multilingual, these characteristics did not predict better performance or higher confidence in the task. The only demographic variable associated with accuracy was age, with younger participants tending to perform better than older counterparts. This advantage, however, could not be fully explained by LLM familiarity alone, suggesting that other factors may play a role.
# 4 Study 2: Can LLMs distinguish between real and fake reviews?
While Study 1 focused on human judgment, Study 2 examined the detection capabilities of LLMs themselves. State-of-the-art models, including the one used to generate fake reviews, were assessed to benchmark LLM performance against human intuition.
# 4.1 LLM baselines
In this study, seven leading LLMs were evaluated: ChatGPT-o1, DeepSeek-R1, Grok-3, Gemini-2.0-Flash-Thinking, ChatGPT-4o, Gemma-3-27B-it, and Qwen2.5-Max. These models were chosen based on three criteria: (1) broad coverage of both open-source and commercial models, (2) high ranking in the Chatbot Arena leaderboard (Chiang et al., 2024) (all within the top 10 as of March 2025), and (3) diversity across model developers. To ensure breadth of representation, no more than one model was included per company. An exception was ChatGPT-o1, as it was used to generate the synthetic reviews in Study 1.
# 4.2 Method
# 4.2.1 Task Setup
The selected models were evaluated on the same 50 reviews used in the human study. The input for each trial included the review text and associated product information, identical to what was shown to human participants, except for product images. All models received the same classification prompt, asking them to determine whether each review was ‘Real’ (human-written) or ‘Fake’ (AI-generated), and to provide a confidence estimate.
All models were queried sequentially (one review at a time) to mirror the structure of the human task. To avoid potential memory effects or conversation history influencing responses, each review was submitted to each model three times using three separate sessions or accounts where possible. This also allowed us to assess the stability and variability of model responses. Due to inconsistencies in how different models control temperature or randomness settings, each model’s default generation settings were used to ensure a fair and consistent baseline comparison.
# 4.2.2 Measures
For each model, results were aggregated across the three runs per review. Performance was then compared both between models and against human participants, to determine how effectively current LLMs can distinguish LLM-generated from authentic content. Model performance was assessed using the same metrics as in the human evaluation: accuracy, precision, recall, and F1-score. These metrics were calculated separately for real and fake reviews, allowing for a class-specific performance analysis.
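The class-specific metrics described here can be computed directly from label pairs. Below is a small self-contained sketch (the toy labels are invented for illustration); it mirrors the standard precision/recall/F1 definitions rather than any code from the study.

```python
def class_metrics(y_true, y_pred, positive):
    """Precision, recall, and F1 treating `positive` as the target class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels: 'real' vs 'fake' judgments for six reviews (illustrative only).
truth = ["real", "real", "fake", "fake", "real", "fake"]
preds = ["real", "real", "real", "fake", "real", "real"]

for cls in ("real", "fake"):
    p, r, f1 = class_metrics(truth, preds, cls)
    print(cls, round(p, 2), round(r, 2), round(f1, 2))
```

Computing the metrics separately per class, as above, is what exposes the real/fake asymmetry discussed in the results.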
# 4.3 Results: Overall LLM Performance
Table 5: Overall classification performance of the seven LLMs on the review classification task. Metrics include accuracy, precision, recall, F1-score (aggregated across real and fake classes), mean confidence with standard deviation (SD), and consistency across three repeated trials per model.
Table 6: Classification performance of the seven LLMs on the review classification task. Precision, recall, and F1-score are reported separately for real and fake review classes.
In terms of consistency, ChatGPT-o1 and Grok-3 produced nearly identical classifications across repeated trials, with decision consistency rates of 0.98 and 1.00, respectively (in Table 5). Other models showed slightly more variability in their responses, with an average consistency rate of 0.79, meaning they produced the same classification in two out of three trials on average.
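The paper does not spell out the exact formula behind the consistency rate, so the snippet below is one plausible reading: the average, over reviews, of the fraction of repeated trials that agree with the majority label. Treat the function and its inputs as illustrative assumptions.

```python
from collections import Counter

def consistency_rate(trials_per_review):
    """Mean fraction of repeated trials that agree with the majority label.

    `trials_per_review` is a list of per-review label lists, e.g. three
    repeated classifications of the same review by one model.
    """
    scores = []
    for labels in trials_per_review:
        majority = Counter(labels).most_common(1)[0][1]
        scores.append(majority / len(labels))
    return sum(scores) / len(scores)

# Two reviews judged three times each: unanimous, then 2-of-3 agreement.
print(consistency_rate([["real"] * 3, ["real", "real", "fake"]]))  # mean of 1.0 and 2/3
```

Under this reading, a rate of 0.79 corresponds roughly to agreeing on two of three trials per review on average, matching the interpretation given in the text.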
As shown in Table 6, all models performed better at identifying real reviews (Recall-Real scores near 1) but struggled with fake reviews (Recall-Fake scores near 0). DeepSeek-R1 obtained better performance for fake reviews; however, its overall performance in both classes was still poor. ChatGPT-4o, Gemini-2.0-Flash-Thinking, and Qwen2.5-Max showed relatively more balanced performance (highest F1 scores) among the evaluated models (in Table 5). The best-performing model, i.e., ChatGPT-4o (accuracy $= 50.0\%$, F1-score $= 0.348$), matched human accuracy ($50.8\%$, F1-score $= 0.497$) but failed to match their overall effectiveness.
When comparing model performance as presented in the confusion matrix (in Figure 1b) with human participants, results showed that humans slightly outperformed all tested LLMs, despite all models showing higher confidence (Mean $= 85.62$, SD $= 3.26$). In particular, humans were better at detecting fake reviews: human participants correctly identified $35.8\%$ of fake reviews, compared to only $9\%$ for LLMs.
# 4.4 Discussion
The findings highlight a critical limitation of current LLMs: although they can generate highly human-like content by mimicking real reviews, they struggle to recognise such content as AI-generated when prompted to detect it. This suggests that LLMs’ capabilities in generation and detection are not equally developed.
All tested models have a strong bias toward classifying reviews as real. While human participants show a similar tendency, LLMs were more extreme in this regard. As such, it is difficult to consider current LLMs as reliable tools for content authenticity detection — especially given their demonstrated ability to confuse humans with fake reviews.
In this study, human intuition proved more effective than that of the most advanced LLMs. This raises an important question: what underlying cues or heuristics do humans rely on that LLMs fail to capture? This question is addressed in the following section, which explores the key features of reviews that make them particularly difficult or easy to classify, shedding light on the possible sources of divergence between human and model judgments.
# 5 Study 3: What makes a fake review hard to distinguish?
Having established that both humans and LLMs struggle to reliably detect fake reviews, Study 3 investigated what makes some reviews harder to classify than others by analysing textual, linguistic, and sentiment-related features of the review content. The aim was to uncover the heuristics and cues that humans and LLMs rely on when judging review authenticity and to reveal where their judgment strategies align or diverge.
# 5.1 Method
In this study, Grok-3 was excluded from the analysis as Grok-3 always classified reviews as ‘real’ in Study 2 without showing meaningful variation based on review content.
The features in Table 1 were extracted for the 50 reviews. An initial Spearman correlation analysis showed that several features were highly correlated, so the redundant features were excluded from this analysis.
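A redundancy filter of the kind described (dropping one feature from each highly correlated pair) might look like the following sketch. The threshold, the rank-based Spearman implementation (which ignores ties), and the toy features are assumptions, not the study's pipeline.

```python
import numpy as np

def spearman(x, y):
    """Spearman rho as the Pearson correlation of ranks (no tie correction)."""
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def drop_redundant(features, threshold=0.9):
    """Greedily keep features that are not too correlated with any kept one."""
    kept = []
    for name in features:
        if all(abs(spearman(features[name], features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

rng = np.random.default_rng(1)
wc = rng.random(50)                     # word count (normalised, synthetic)
feats = {
    "word_count": wc,
    "char_count": wc * 5.1 + 0.2,       # monotone in word_count, hence redundant
    "sentiment": rng.random(50),        # independent feature
}
print(drop_redundant(feats))            # char_count is dropped
```

Because Spearman correlation is rank-based, any monotone transform of a kept feature (like `char_count` here) has rho exactly 1 and is filtered out.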
To determine what makes a review difficult, three indicators of classification difficulty were examined for each review:
1. Participant accuracy: the proportion of human participants who correctly classified the review.
2. LLM accuracy: the proportion of LLM trials across all models that correctly classified the review.
3. Participant-LLM similarity: the alignment between human and model classification behaviour.
The Participant-LLM similarity was measured as the cosine similarity between their judgment vectors. As each review is either real or fake, judgments can be aggregated into a four-dimensional vector of classification outcomes, i.e., $[TP, FP, TN, FN]$. This was done separately for human participants and for all trials of the six LLMs. Cosine similarity between the participant and model judgment vectors was used to quantify how closely the two groups agreed in their classification patterns for each review (higher cosine similarity values indicate higher alignment).
To illustrate how human-model similarity was computed, consider the following example. Suppose a particular review is a real (human-written) review. Among the 288 human participants, half correctly classified it as real and the other half incorrectly judged it as fake, giving the participant classification vector $[144, 0, 0, 144]$. Across three trials for each of the six models (18 judgments total), if 9 classify it as real and 9 as fake, the model classification vector is $[9, 0, 0, 9]$. The cosine similarity between these two four-dimensional vectors is 1.0, indicating that the directional pattern of judgments is the same across humans and models.
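The worked example can be reproduced in a few lines; this is a generic cosine-similarity sketch, not code from the study.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Judgment vectors from the text, ordered [TP, FP, TN, FN].
participants = [144, 0, 0, 144]   # 288 humans, half right, half wrong
models = [9, 0, 0, 9]             # 18 LLM trials, split the same way
print(cosine(participants, models))  # approx. 1.0
```

Cosine similarity ignores vector magnitude, which is what makes the 288 human judgments directly comparable to the 18 model trials.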
# 5.2 Results
# 5.2.1 Real Reviews: Feature Correlates of Classification Accuracy and Agreement
When judging real reviews, several text features were associated with classification difficulty for both participants and LLMs, as shown in Figure 3a. The review rating was the most influential feature across all three measures. Reviews with higher ratings were significantly harder to classify for both participants ($r = -0.721$, $p < 0.001$) and LLMs ($r = -0.445$, $p = 0.026$).
In addition to the decrease in accuracy associated with review rating for both groups, the alignment between human and model judgments was lowest for high-rating reviews ($r = -0.506$, $p = 0.010$). This reduced agreement suggests that humans and LLMs made systematically different judgments on these reviews. Specifically, human participants classified more high-rating reviews as fake compared to LLMs.
In terms of review length, LLMs performed better on longer reviews: word count was strongly positively correlated with model accuracy ($r = 0.852$, $p < 0.001$). A similar pattern was observed for average words per sentence ($r = 0.556$, $p = 0.003$). In addition, reviews with more varied sentence structures were easier for models to classify ($r = 0.620$, $p = 0.001$). A similar but non-significant trend was observed for human participants.
For both humans and models, reviews containing more writing mistakes were easier to classify (humans: $r = 0.351$, $p = 0.086$; LLMs: $r = 0.441$, $p = 0.027$). However, strong positive reviews were associated with lower accuracy for both groups (humans: $r = -0.407$, $p = 0.043$; LLMs: $r = -0.486$, $p = 0.014$).
Figure 3: Spearman correlations between review features (review rating, helpful vote count, word count, average words per sentence, sentence length variability, uppercase word count, second-person pronoun count, writing mistake count, sentiment polarity) and the three evaluation metrics (participant accuracy, LLM accuracy, participant-LLM similarity), shown separately for (a) real reviews and (b) fake reviews.
# 5.2.2 Fake Reviews: Feature Correlates of Classification Accuracy and Agreement
Unlike in the real-review condition, human-model alignment was low across all features, with all correlations negative, as shown in Figure 3b. Across all features, the review rating showed the strongest association with participant accuracy: higher-rated fake reviews were significantly easier for humans to identify ($r = 0.751$, $p < 0.001$). At the same time, review ratings were associated with significant misalignment between human participants and LLMs ($r = -0.573$, $p = 0.003$).
LLMs showed reduced accuracy on longer fake reviews, with negative correlations for word count ($r = -0.465$, $p = 0.019$), average words per sentence ($r = -0.606$, $p = 0.001$), and sentence length variability ($r = -0.456$, $p = 0.022$). In addition, stronger positive sentiment in fake reviews was associated with reduced model accuracy ($r = -0.436$, $p = 0.029$).
# 5.3 Discussion
Results indicate several shared challenges for humans and LLMs in distinguishing real and fake reviews. Review rating, sentiment polarity, the presence or absence of writing mistakes, and surface-level richness (e.g., longer length, more detail) are the key diagnostic signals of authenticity for both groups. However, overall human-model agreement remained low, especially when judging fake reviews, suggesting that the two rely on different cues. Overall, these findings point to slightly different views of how humans and models interpret 'perfection' and surface-level richness.
Humans may exhibit a ‘scepticism bias’ when evaluating the authenticity of online reviews. That is, reviews that are highly rated, strongly positive, and flawlessly written are often treated as fake — possibly triggering the intuition that the review is too good to be true. Therefore, highly polished real reviews may be misclassified as fake, while slightly flawed fake reviews with a more neutral tone and ordinary ratings may be judged as real. This bias may stem from consumers’ knowledge or sensitivity to overly strategic or promotional language. LLMs exhibited a similar trend but with a stronger reliance on surface-level richness, i.e., treating detail-rich and structurally complex content as more authentic. This can be a potentially useful heuristic when judging real content, but misleading when facing fake inputs that are artificially verbose.
There are many differences between human assessors and LLMs, with the former acquiring language in rich, social, and multimodal contexts, while the latter are trained on vastly larger but purely textual corpora (Trott et al., 2023). As a result, human behaviour is shaped by a combination of social, cognitive, emotional, and contextual factors (Ecker et al., 2022). In contrast, LLM behaviour is shaped by patterns learned from the large volumes of text on which they were trained. This fundamental difference can explain why LLMs perform especially poorly in detecting fake reviews.
First, some LLMs exhibit what we term a ‘veracity bias’ - a tendency to falsely believe in the veracity of reviews. For example, models like Grok-3 consistently default to classifying reviews as real. From a game-theoretic perspective, this behaviour can be seen as a form of base-rate optimisation, wherein an agent under uncertainty maximises expected accuracy by aligning with the most frequent class. Fundamentally, this strategy arises from the underlying distribution of training data, which contains predominantly human-generated content.
Second, other models, while not exhibiting this default tendency, still failed to detect deception reliably — suggesting that they have not developed effective heuristics for identifying AI-generated content. These models may produce more balanced outputs, yet rely on superficial textual cues and lack contextual reasoning. This is reflected in the divergent cue usage between LLMs and human participants.
# 6 General Discussion
This paper offers empirical evidence that LLMs struggle as much as humans in detecting fake AI-generated reviews. Study 1 highlighted that humans operate at a chance level of detecting fake reviews, with confidence not ensuring accuracy (as in Double and Birney, 2024; Litwin et al., 2025). The only subtle determinant of higher accuracy was younger age, suggesting that familiarity with AI could play a role. Humans were also prone to be overconfident in their ability to reveal fake content, raising significant concerns regarding the ‘dark side’ of AI in consumer decision-making. As AI-generated text advances and further blurs the boundary between authentic and synthetic reviews, it erodes trust and exploits consumers’ vulnerability. Interestingly, LLMs in Study 2 did not perform better than humans. A chance-level accuracy was achieved but for a different reason - the majority of LLMs were selecting ‘real’ for any review. LLMs almost always classified content as authentic, thereby avoiding false accusations while failing to enhance detection capabilities. Study 3 aimed to unpack which of the review text characteristics were used by humans and LLMs to detect fake reviews. While humans used similar heuristics to those used by our research team in designing the studies, the intuition of LLMs remained a black box. The striking difference between humans and LLMs was found in the perception of what chiefly constituted an authentic review: indications of slight flaws for humans, and verbosity for LLMs.
Fake reviews posed significant challenges to the credibility and reliability of online marketplaces even before the emergence of ChatGPT, distorting consumer perceptions and undermining genuine feedback (Sahut et al., 2024). Now, with the widescale incursion of AI-generated content, it is no longer possible to establish the ground truth of what is truly authentic. Consumers themselves could be using Gen-AI to create their own authentic reviews and articulate their experiences more effectively. However, the large-scale injection of AI-generated reviews poses serious risks, whether intended to promote a brand positively or to harm a competitor’s reputation. Open web data have become polluted sources that seemingly align with the concept of increasing platform ‘enshittification’ (Doctorow, 2023). Insights drawn from online reviews that include undetectable AI-generated content could be compromised, ultimately leading to a loss of confidence in users’ feedback as a decision-making tool. Constant usage of corrupt datasets, by extension, might affect consumer behaviour research in general, exacerbating the cycle of data degradation.
If AI-generated content cannot be reliably detected by either humans or LLMs, the focus should be on developing tools to ensure verification and transparency, at least in the future open web data landscape. Digital identity verification could be one possible intervention (Shukla and Goh, 2024). Another solution could be watermarks embedded into AI- and/or human-authored content, ensuring that authenticity can be verified at the source rather than through post-hoc detection efforts. Unfortunately, current advances in watermarking do not work on very short pieces of text, such as reviews (Dathathri et al., 2024). However, metadata tagging of AI-generated reviews could provide transparency by having platforms disclose when a review has been synthesised. It goes without saying that these technical solutions must be complemented by ethical and regulatory frameworks ensuring responsible deployment, while maintaining trust in digital communication.

# Abstract

Reading and evaluating product reviews is central to how most people decide what to buy and consume online. However, the recent emergence of Large Language Models and Generative Artificial Intelligence now means writing fraudulent or fake reviews is potentially easier than ever. Through three studies we demonstrate that (1) humans are no longer able to distinguish between real and fake product reviews generated by machines, averaging only 50.8% accuracy overall, essentially the same as would be expected by chance alone; (2) LLMs are likewise unable to distinguish between fake and real reviews and perform equivalently badly or even worse than humans; and (3) humans and LLMs pursue different strategies for evaluating authenticity which lead to equivalently bad accuracy, but different precision, recall, and F1 scores, indicating they perform worse at different aspects of judgment.
The results reveal that review systems everywhere are now susceptible to mechanised fraud if they do not depend on trustworthy purchase verification to guarantee the authenticity of reviewers. Furthermore, the results provide insight into the consumer psychology of how humans judge authenticity, demonstrating an inherent 'scepticism bias' towards positive reviews and a special vulnerability to misjudging the authenticity of fake negative reviews. Additionally, the results provide a first insight into the 'machine psychology' of judging fake reviews, revealing that the strategies LLMs take to evaluate authenticity radically differ from humans', in ways that are equally wrong in terms of accuracy, but different in their misjudgments.
# 1 Introduction
In recent years, there has been a lot of work on table analysis (Li et al., 2024b; Zhao et al., 2024). Specifically, an increasing number of studies discuss table question answering (TableQA) tasks (Yang et al., 2024b; Sarkar and Lausen, 2023; Zhou et al., 2024; Nguyen et al., 2024; Li et al., 2024a). Additionally, the introduction of large language models (LLMs), including closed-source models such as GPT-4o (OpenAI, 2023) and Gemini-1.5-Pro (Anil et al., 2023), and table-oriented models like TableGPT (Su et al., 2024)
Figure 1: An example of a real-world hierarchical table (average hours per day spent on activities by mothers and fathers, broken down by parental employment status and children's age group), illustrating four complex structure types: Hierarchical Column Header, Hierarchical Row Header, Nested Sub-Tables, and Implicit Multi-Table Join.
further enhances models’ capability of table comprehension. Given these advancements, the rapid development of LLMs and the continuous emergence of diverse tabular data in real-world scenarios have highlighted the need for a comprehensive evaluation of LLMs’ table understanding capabilities, putting forward new requirements on the relevant benchmarks.
Unfortunately, we find that the current tabular data benchmarks, such as TAT-QA (Zhu et al., 2021), TableBench (Wu et al., 2024), and InfiAgentDABench (Hu et al., 2024), largely consist of flat tables. More concretely, in such tables, each column represents an attribute, each row represents a record, and all data are stored in a simple one-dimensional format; see Figure 13 for visual demonstration. However, in practical applications, humans often organize relatively complex tables to represent multifaceted relationships between variables. Such hierarchically structured tables are popular in various domains, including economy, science, and employment, on public data platforms.
To address this issue, some benchmarks have considered hierarchical tables, such as HiTab (Cheng et al., 2022) and SciTab (Lu et al., 2023). However, these benchmarks fail to comprehensively assess LLMs’ understanding of complex table structures. For example, SciTab (Lu et al., 2023) and AIT-QA (Katsis et al., 2022) only consider tables from the scientific and airline domains, respectively. MM-Tab (Zheng et al., 2024) provides only image-based input, while hierarchical tables in realistic applications are also presented textually. SpreadSheetBench (Ma et al., 2024) focuses on the manipulation of tables. HiTab (Cheng et al., 2022) organizes its tables in a lossy JSON format, contains only fundamental QA tasks, and provides incomplete supervision. More importantly, most of these benchmarks focus primarily on relatively simple hierarchical tables, particularly those with a basic column hierarchy, where the hierarchy typically does not exceed two levels. To date, there is still a lack of dedicated benchmarks for comprehensively evaluating LLMs’ ability to understand complex tabular hierarchies.
To this end, we propose RealHiTBench, a challenging Realistic Hierarchical Table Benchmark built on complex tables and tasks. (i) Complex Table Structures: our benchmark includes complex tables with intricate features (partially depicted in Figure 1), which are commonly found in real-world scenarios but often overlooked in existing benchmarks. (ii) Modal and Format Diversity: we explore the performance of both text-based and image-based approaches, whereas existing benchmarks focus solely on one of them; our benchmark allows LLMs and MLLMs to be tested with various input formats, such as LaTeX, HTML, and PNG. (iii) Question Diversity: RealHiTBench covers a wide range of question types, each designed to test a different aspect of a model’s ability; in particular, a Structure Comprehending type is designed around the complex structural components. (iv) Accurate and Efficient Annotation: our benchmark employs a rigorous annotation process, in which GPT-based automated annotation and human checks ensure the accuracy and reliability of the questions and answers.
We conducted a comprehensive evaluation across many types of models, including table-oriented LLMs and open-source/closed-source generic LLMs/MLLMs. In addition, we tested a range of table reasoning tasks with appropriate metrics. The performance of different models differs greatly, with average scores across all tasks ranging from 7.27 to 56.95. Importantly, the overall low scores (mostly below 70) highlight that the ability of LLMs to comprehend and process intricate table structures remains an area in need of significant improvement. We also develop a tree-based pipeline (dubbed TreeThinker) that automatically injects table hierarchies into instructions. Empirically, we show that, with self-emphasized table hierarchies, LLMs’ structural understanding can indeed be enhanced.
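The exact TreeThinker implementation is not shown here, so the snippet below only sketches the general idea of serializing a column-header hierarchy into an instruction prefix; the function name, outline format, and example tree are assumptions, not the paper's method.

```python
def serialize_header_tree(tree, depth=0):
    """Render a hierarchical column-header tree as an indented outline
    that can be prepended to the model's instructions."""
    lines = []
    for name, children in tree.items():
        lines.append("  " * depth + "- " + name)
        lines.extend(serialize_header_tree(children, depth + 1))
    return lines

# A two-level header tree like the one in Figure 1.
header_tree = {
    "Both full time": {"Mothers": {}, "Fathers": {}},
    "Mother part time": {"Mothers": {}, "Fathers": {}},
}
hierarchy_hint = "Column hierarchy:\n" + "\n".join(serialize_header_tree(header_tree))
prompt = hierarchy_hint + "\n\nAnswer the question using the table below.\n"
print(prompt)
```

Making the hierarchy explicit in the prompt, rather than leaving the model to infer it from merged cells, is the intuition behind injecting table structure into instructions.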
# 2 Related Work
Table Analysis. Table analysis is pivotal across numerous domains (He et al., 2024), while table question answering (TableQA) can effectively assess analytical capabilities (Müller et al., 2019). Recently, there has been a growing number of studies focusing on the ability of LLMs to comprehend tabular data (Singha et al., 2023; Nguyen et al., 2024; Deng et al., 2024). However, the development of LLMs leads to the emergence of different branches. Common tabular data are in text form to be processed by LLMs (Anil et al., 2023; Yang et al., 2024a; Dubey et al., 2024), while other image-based tables are suited to be processed by MLLMs (Wang et al., 2024; Liu et al., 2024). Additionally, some table-oriented LLMs for both modalities have also been proposed (Su et al., 2024; Zheng et al., 2024). Therefore, the flourishing development of LLMs makes challenging benchmarks for tabular data increasingly important.
Table Analysis Benchmarks. An increasing number of benchmarks for table analysis are discussed (Chen et al., 2020; Nan et al., 2022). However, a proportion of benchmarks focus more on other aspects, such as reasoning methods and supervised fine-tuning, while overlooking the structural complexity of the data, especially the table dimensions (Wu et al., 2024; Zheng et al., 2024). Although some recent benchmarks have introduced the concept of hierarchical tables (Cheng et al., 2022; Katsis et al., 2022; Ma et al., 2024), and a complex question answering benchmark has been proposed to bridge the gap between theoretical and real-world tabular data (Wu et al., 2024), the tables in these benchmarks are not complex enough, and the table structures themselves have not been discussed in detail.
Therefore, in order to introduce a benchmark that can effectively evaluate current LLMs, we propose RealHiTBench, which consists of complex tables and corresponding TableQA tasks and evaluates both LLMs and MLLMs on textual and visual input data. Furthermore, we look forward to possible future development directions of LLMs in table reasoning.
# 3 RealHiTBench
In practical applications, tabular data holds significant value. Numerous previous studies have proposed a series of benchmarks (Zhu et al., 2021; Parikh et al., 2020), which have effectively propelled research in this field. However, as application scenarios become increasingly complex and large language models (LLMs) gradually emerge as the mainstream method for table reasoning (Yang et al., 2024b), the previous benchmarks struggle to provide a comprehensive and accurate assessment of model capabilities. Hence, there is an urgent need for a new, realistic, and challenging benchmark to evaluate the progress of research methods in this field. We propose RealHiTBench focusing on complex structures and tasks. Complex tables, including hierarchical tables, are widely used in various domains (Lu et al., 2023; Katsis et al., 2022). We propose such a challenging benchmark full of complex tables and corresponding difficult tasks to assess the upper bound of the comprehension capability of LLMs, which may further inspire future research on processing tables with LLM.
# 3.1 Table Collection and Process
To construct a realistic and comprehensive dataset, all tables in our benchmark are collected raw from 13 different open platforms and cover 24 different domains such as economy, society, and science (see Appendix A.1 and A.3). In real applications, table-related benchmarks present tables in either textual (Cheng et al., 2022; Parikh et al., 2020) or visual (Kim et al., 2024) form, and there is still no fixed format for complex table understanding. We therefore explore the influence of both modalities and, specifically, of different input formats, including LaTeX, HTML, CSV, Markdown, and PNG, whose performance is shown in Figure A.2.
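To make the format comparison concrete, the sketch below renders one small table in several of the textual input formats mentioned above (CSV, Markdown, HTML, LaTeX). The example table and helper names are ours, not from the paper; real conversions would typically go through a library such as pandas.

```python
# Illustrative sketch: serializing one table (a list of rows) into the
# textual formats compared in the benchmark. Names and data are hypothetical.

rows = [["Region", "2022", "2023"],
        ["Northern Europe", "14", "19"],
        ["Southern Europe", "11", "12"]]

def to_csv(table):
    return "\n".join(",".join(cells) for cells in table)

def to_markdown(table):
    header, *body = table
    lines = ["| " + " | ".join(header) + " |",
             "|" + "|".join(["---"] * len(header)) + "|"]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return "\n".join(lines)

def to_html(table):
    cells = "".join("<tr>" + "".join(f"<td>{c}</td>" for c in r) + "</tr>"
                    for r in table)
    return f"<table>{cells}</table>"

def to_latex(table):
    body = " \\\\\n".join(" & ".join(r) for r in table)
    return ("\\begin{tabular}{" + "c" * len(table[0]) + "}\n"
            + body + " \\\\\n\\end{tabular}")
```

The same underlying cells thus produce very different token sequences per format, which is what the format ablation in Figure A.2 measures.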
# 3.2 Complex Structures
In actual scenarios, we often encounter tables that exhibit structural and even semantic complexity. However, current benchmarks (Cheng et al., 2022; Lu et al., 2023) have not adequately discussed the complexity of table structure. Concentrating on complexity in tabular data, we define five complex structure types, as follows; most of them are illustrated in Figure 1.

[Figure: the RealHiTBench construction pipeline: (1) data collection (table collection, data processing, and format conversion); (2) question generation, where attributes such as complexity, diversity, specificity, and real-world relevance are selected from the task definition and taxonomy and combined with a table to generate questions; (3) question review via data validation; (4) answer annotation with verification, in which candidate answers are scored (GPT evaluation plus human checks) and ranked to yield the final answer.]
(1) Hierarchical Column Header. A column header, typically located at the top of a table, serves as the title for each column. It identifies the category, attribute, or subject of the data in that column and provides a structured organization of the table data. In most cases, complex column headers are characterized by cell merging, which creates the hierarchical header structure of the table (the cells with green background in Figure 1).
(2) Hierarchical Row Header. A row header is a label or identifier for each row in a table, typically located at the beginning of the row. In realistic tables, row headers sometimes use indentation or clustering to convey classified semantics. The common hierarchical row header is presented by indentation within a single column (the cell groups in blue within Figure 1). Hierarchical row headers can also span multiple columns, with a large merged cell corresponding to several subcategory cells in the horizontal direction.
(3) Nested Sub-Tables. Sometimes, due to semantic requirements, a whole table consists of several vertically segmented areas. The typical presentation is horizontal cells spanning the full width of the table (the cells in orange within Figure 1); these full-width cells divide the root table into nested sub-tables.
(4) Multi-Table Join. Regarding table dimensions, single-table scenarios cannot fully reflect the complexity of real-world applications, and multi-table tasks have been proposed in recent work (Wu et al., 2025). We refine the representation of multiple tables into two categories: (i) Explicit Multi-Table Join: we classify the majority of previously discussed multi-tables as explicit multi-tables (shown in Figure 14). (ii) Implicit Multi-Table Join: we also notice other special multi-tables in our dataset. As depicted with a red box in Figure 1, there are sub-tables with identical structures, especially column headers. An implicit multi-table looks no different from a normal single table but is in fact composed of multiple tables. Interestingly, this type sometimes carries additional semantics, such as comparisons, yet is not easy to detect when comprehending tables.
Table 1: Comparison with existing datasets in dataset information, task types, and input formats. Abbreviations: FC stands for Fact Checking, NR for Numerical Reasoning, DA for Data Analysis, CG for Chart Generation, and SC for Structure Comprehending.
(5) Miscellanies. We also find that some other special elements carry part of the tabular information, including additional explanatory text outside the table and cell background colors. These non-structural elements also play a role in complex tables.
Remark. It is worth noting that the hierarchical information we consider also appears in some existing benchmarks. However, we include tables with higher complexity (see Table 8 for more details). Notably, it is necessary to clarify some differences between our work and the previous HiTab (Cheng et al., 2022) benchmark. First, HiTab pre-extracts the tree structure of hierarchical tables in JSON format, which prevents effective evaluation of whether LLMs can directly understand structural information from table inputs. Additionally, it suffers from a restricted focus on three domains, a simplistic QA task, and incomplete supervision.
# 3.3 Complex Tasks
In order to evaluate models' abilities comprehensively, we define 5 primary task types following TableBench (Wu et al., 2024): Fact Checking (FC), Numerical Reasoning (NR), Data Analysis (DA), Chart Generation (CG), and Structure Comprehending (SC) (Table 2). More detailed subtypes (shown in Appendix B.1) are derived from these types. Notably, we introduce a new type, Structure Comprehending, tailored for complex tables: this kind of task constructs a new table by exchanging some complex parts of the source table, then asks LLMs the same question on the two similar tabular inputs to evaluate their ability to comprehend structure.
Based on the question types above, we meticulously design difficult questions for each complex structure, taking questions generated from the table in Figure 1 as examples: (i) Header Hierarchy: exploiting the header hierarchy, a possible question is: For Children under 18, which exact activity of Household cost the most time of Mothers when both full time? It is difficult to identify the relevant columns and rows amid classification and hierarchy. (ii) Nested Sub-Tables: the segmentation of a whole table enables interesting semantic tricks. One challenging question is: How much total time do Mothers in Both full time spend for all the children recorded in the table? The correct approach is to sum the total time for children under 18 only, rather than over the whole table. Such summing questions over partitioned content test whether models thoroughly understand the inclusion relationships among sub-tables.
# 3.4 Annotation Generation
Question Generation. Following previous work (Yu et al., 2023a), we select, from the attributes GPT-4o proposes, those that we also agree are important as the core for question construction. After feeding type-specific prompts and acquiring generated questions, we conduct a manual question review according to four factors: Relevance, Completeness, Feasibility, and Clarity. After the first round of annotation, annotators exchange questions with each other to check their rationality.
Answer Generation. To mitigate the instability of GPT generation, we generate answers with GPT-4o based on a well-designed answer prompt and the following pipeline: we feed a table with prompts to several models, acquiring multiple results and the frequency of occurrence of each. The results are then input into the G-Eval (Liu et al., 2023) module and scored, so that a final score for each result can be computed from its initial score and frequency. The result with the highest final score becomes the candidate answer. We additionally conduct a careful human check, in which a table expert verifies answers one by one against the question and the exact table content. Furthermore, we employ a tree-based method to enhance our annotation process, finding that the automatic annotation accuracy is further improved.
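The paper does not spell out how the G-Eval score and the occurrence frequency are combined into the final score, so the sketch below assumes the simplest plausible rule, a frequency-weighted G-Eval score; the function name and weighting are our own.

```python
from collections import Counter

# Hypothetical re-creation of the candidate-answer selection step.
# Assumption (not stated in the paper): final score = G-Eval score
# multiplied by the answer's relative frequency across models.

def select_candidate(model_answers, geval_scores):
    """model_answers: list of answer strings from several models.
    geval_scores: dict mapping each distinct answer to its G-Eval score."""
    freq = Counter(model_answers)
    total = len(model_answers)
    final = {ans: geval_scores[ans] * (freq[ans] / total) for ans in freq}
    # The highest-scoring result becomes the candidate answer,
    # which is then verified by a human table expert.
    return max(final, key=final.get)

answers = ["1575", "1575", "1600"]
scores = {"1575": 0.9, "1600": 0.7}
print(select_candidate(answers, scores))  # "1575"
```

Under this rule, an answer that several models agree on can beat a slightly higher-scored but rarer answer, which matches the paper's intent of damping single-model instability.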
# 3.5 Dataset Statistics
We propose RealHiTBench, containing 708 tables and 3,752 questions. RealHiTBench focuses entirely on complex tables, with the following characteristics: (1) All tables in our benchmark are collected raw from available public platforms, and every table is complex in structure or semantics; we organized 6 university students as annotators, each spending 150 hours collecting these tables and a further 480 hours on question-answering annotations. (2) Compared to prior work involving complex tables, such as HiTab (Cheng et al., 2022) and MultiHierTT (Zhao et al., 2022), the tables in our dataset are more complex (quantified in Table 8), and even their median sizes are larger than in other works (Wu et al., 2024). (3) We systematically define and classify the different types of complex tabular structure for further exploration. (4) We introduce several challenging task types; the Structure Comprehending tasks prove difficult for LLMs (see Section 5.3). These characteristics are illustrated in Figure 1 and Table 1.
# 4 TreeThinker
Beyond the benchmark, we propose TreeThinker, an understanding-augmented pipeline that enhances the model's ability to answer questions over complex hierarchical tables. As shown in Figure 3, we first prompt the model to automatically organize hierarchical headers into a tree structure, then align keywords from the question with the tree headers, and finally locate the sub-table relevant to the question.
# 4.1 Tree Generation
Complex tables often contain multi-level row and column headers, making it difficult for models to accurately analyze their intricate hierarchical relationships. Therefore, to enhance the model's understanding of complex table structures, we prompt the model to explain the structure of the table's headers and organize them into a tree.
Explanation. We first have the model self-explain the structural information of the table headers, including their meanings, scopes of influence, hierarchical structure, and mutual relationships. Following a previous approach (Zhao et al., 2023), we prompt the model by encoding each header node as a tuple $T = (t_1, t_2, t_3, t_4)$. Specifically, the first element indicates whether it is a row header (R) or a column header (C), along with its level; the second and third elements represent its start and end positions; and the fourth element contains the content of the cell.
For example, the tuple (R0, 1, 2, City) indicates a row header (R) at level 0, spanning from row 1 to row 2, with the value City. This approach compresses the header information into a tuple list $L = [T_1, T_2, \cdots, T_n]$, enabling the model to identify the hierarchical structure more clearly and to generate a "Structural Blueprint" of the table that effectively guides subsequent reasoning.
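The tuple encoding above can be sketched directly as a small data type; the class and field names here are ours, chosen for illustration.

```python
from typing import NamedTuple

# Sketch of the header-tuple encoding T = (t1, t2, t3, t4) described above:
# flag_level combines the flag (R = row header, C = column header) with the
# level, start/end are the positions of the (possibly merged) cell, and
# text is the cell content.

class HeaderTuple(NamedTuple):
    flag_level: str   # e.g. "R0" or "C1"
    start: int
    end: int
    text: str

# The example from the text: a level-0 row header spanning rows 1-2.
city = HeaderTuple("R0", 1, 2, "City")

# The compressed "Structural Blueprint" is then just a list of such tuples.
blueprint = [
    HeaderTuple("C0", 0, 0, "Grade"),
    HeaderTuple("C0", 1, 1, "Gender"),
    city,
]
```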
Generation. The tree structure, with its clear parent-child connections, accurately represents the hierarchical relationships of table headers while making table data organization and retrieval more intuitive and efficient. Therefore, we ask the model to organize the scattered header list $L$ into the Table-Header Tree $H$ according to the hierarchical relationships, with the following steps: (1) Divide the tuple list $L$ into groups by level, so that all tuples with the same level are grouped together, and add a special ROOT node at level "-1" for rows and columns. (2) For each tuple $A \in L$: if the start and end positions of $A$ are equal ($A_{t_2} = A_{t_3}$), mark $A$ as a leaf node. (3) Otherwise, compare its $t_2$ and $t_3$ values with every tuple $B$ of the same flag at the closest higher level; if tuple $A$ is within the range of tuple $B$ ($A_{t_2} \geq B_{t_2}$ and $A_{t_3} \leq B_{t_3}$), then $B$ is the parent header of $A$. (4) Repeat steps 2 and 3 iteratively until all tuples in $L$ are linked to their respective parent nodes (tuples without a parent node are linked to the ROOT node), forming the hierarchical Table-Header Tree $H$.

[Figure 3: The TreeThinker pipeline on an example enrollment table. First round: explain the headers and generate the header tree (e.g., column header (C0, 0, 0, Grade); row header (R0, 1, 3, 1)). Second round: decompose the question "How many more male students than female in Grade 2?" into keywords (male, students, female, Grade 2), align them with the relevant column and row headers (e.g., (R0, 4, 6, 2)), then reason over the located sub-table and refine, yielding the final answer 1575.]

# 4.2 Tree-based Reasoning

Previous studies (Shi et al., 2023) have shown that LLMs are often distracted by irrelevant information. To address this issue, we prompt the model to decompose the question into keywords and align them with the table headers, thereby helping it accurately identify the sub-table relevant to the question.

Decomposition. Decomposing questions into more fine-grained components simplifies the required reasoning and enhances the model's performance. Therefore, we first prompt the model to decompose the question $Q$ into several keywords $K = [k_1, k_2, ..., k_m]$, helping it focus on the most critical aspects of the question.

Aligning and Reasoning. Once the question decomposition is finished, we instruct the model to align keywords with header tuples, enabling it to accurately pinpoint the headers relevant to the question. Specifically, given the keywords $K$ and the header tuples $H$, the goal of aligning is to build a Keyword-Header Tree $H' = select(T, Align_{LM}(T, k) > \theta), T \in H, k \in K$. The function $Align_{LM}(T, k)$ calculates the degree of matching between header tuple $T$ and keyword $k$; the $select()$ function then chooses those headers $T$ whose matching degree with some keyword $k$ exceeds the threshold $\theta$ and adds them to the Keyword-Header Tree $H'$. After that, we incorporate $H'$ into prompts, guiding the LLM to retrieve essential information, e.g., relevant sub-tables, from tables with improved reasoning abilities.

Lastly, we supplement a ReAct-style multi-round refinement strategy: through multiple rounds of Thought, Action, and Result, the model ultimately outputs the final answer. Full prompts are shown in Tables 17 and 18.
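The four tree-building steps of the Generation stage can be sketched as runnable code. We expand each header tuple to (flag, level, start, end, text) for clarity; all names are ours, and the parent map stands in for the tree.

```python
# A minimal sketch of the tree-building steps: for each header tuple, find
# its parent among same-flag tuples at the closest higher level whose span
# contains it; tuples with no parent attach to a special ROOT node.

def build_header_tree(tuples):
    """tuples: list of (flag, level, start, end, text). Returns a dict
    mapping each tuple to its parent tuple (ROOT for top-level headers)."""
    ROOT = ("ROOT", -1, float("-inf"), float("inf"), "")
    parent = {}
    for a in tuples:
        flag, level, start, end, _ = a
        # Step 3: candidate parents share the flag, sit at a higher level,
        # and their span contains this tuple's span.
        candidates = [b for b in tuples
                      if b[0] == flag and b[1] < level
                      and b[2] <= start and end <= b[3]]
        if candidates:
            parent[a] = max(candidates, key=lambda b: b[1])  # closest level
        else:
            parent[a] = ROOT  # step 4: unparented tuples attach to ROOT
    return parent

headers = [
    ("R", 0, 1, 3, "Grade 1"),
    ("R", 1, 1, 1, "Male"),
    ("R", 1, 2, 2, "Female"),
]
tree = build_header_tree(headers)
print(tree[("R", 1, 1, 1, "Male")][4])  # "Grade 1"
```

The leaf-node marking of step 2 is omitted here since it follows directly from `start == end` and does not change how parents are found.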
# 5 Experiment
# 5.1 Experimental Setup
Baselines. We evaluated 25 models of different modalities, with parameters ranging from 7B to 90B, across four categories: (1) Closed-source models, including GPT-o1 (OpenAI, 2024), GPT-4o (OpenAI, 2023), Deepseek-R1-API (Guo et al., 2025), Gemini-1.5-pro (Anil et al., 2023), QwQ (Team, 2024), and Doubao-1.5-pro (Team, 2025). (2) Open-source language models, including the Llama3 series (Dubey et al., 2024), the Qwen2.5 series (Yang et al., 2024a), Mistral (Jiang et al., 2023), and the Deepseek-R1-Distill series (Guo et al., 2025). (3) Open-source multimodal models, including LLaVA-1.5 (Liu et al., 2024), mPLUG-owl3 (Ye et al., 2024), mPLUG-owl2 (Ye et al., 2023), and the Llama3-Vision series. (4) Table-oriented models, including TableGPT (Su et al., 2024), TableLLMs (Wu et al., 2024), TableLlama (Zhang et al., 2024), and TableLLaVA (Zheng et al., 2024).
Metrics. For Fact Checking, Numerical Reasoning, and Structure Comprehending, we follow previous work (Zhao et al., 2022) in using F1 and EM as score metrics. For Chart Generation, we calculate the ECR to assess code executability, extract y-axis values to compare with reference data, and compute PASS@1 to evaluate the pass rate. For Data Analysis, we calculate ROUGE-L (Lin, 2004) to evaluate the objective similarity between generated and reference answers. Inspired by G-Eval (Liu et al., 2023), we also design an evaluation template using GPT-4o to score model answers. The detailed evaluation prompt template can be found in Appendix C.3.
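For the EM and F1 metrics mentioned above, a standard SQuAD-style token-level implementation is sketched below; the exact normalization used in the paper is not specified, so this version applies only lowercasing and whitespace tokenization.

```python
from collections import Counter

# Sketch of Exact Match and token-level F1 for QA-style answers.
# Assumption: minimal normalization (strip + lowercase); benchmark
# implementations often also remove punctuation and articles.

def exact_match(pred, gold):
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    p, g = pred.lower().split(), gold.lower().split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("1575", " 1575 "))                       # 1
print(round(token_f1("about 1575 students", "1575"), 2))   # 0.5
```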
Table 2: The evaluation results of advanced models with Text and Image Inputs on RealHiTBench.
# 5.2 Implementation Details
We deploy open-source models on 8 H800 GPUs using the Transformers library, while closed-source models are accessed through official APIs in accordance with their documentation. For table input formats, we tested GPT-4's performance on LaTeX, HTML, Markdown, and CSV as textual input, with LaTeX outperforming the others, as shown in Figure 4; we therefore select LaTeX for the text modality. For the visual modality, we use PNG-format tables. Additionally, we set the model's temperature to 0 to ensure deterministic outputs and configured the maximum output length to 4,096 tokens, balancing detail and efficiency. To focus on hierarchical characteristics, we separate long tables from normal-size tables in the experiments, while still keeping long tables in our dataset. For more details, please refer to Appendix C.
Figure 4: The comparison of average scores for different text formats across various tasks.
# 5.3 Main Results
$\textcircled{1}$ Overall Performance. The results in Table 2 highlight the limitations of current LLMs in handling realistic hierarchical table analysis. On our tasks, the EM metric of almost all models remains relatively low, with the highest not exceeding 70. Moreover, all models demonstrate remarkably low outcomes in Chart Generation, with code execution accuracy below 30. Interestingly, many table-oriented models exhibit pronounced overfitting; however, TableGPT2 delivers the most impressive results among 7B-level models. For models with over 10B parameters, GPT-4o and Llama show comparable performance. It is worth mentioning that DeepSeek-R1 achieves the most outstanding results among all models; although this might be attributed to its 671B MoE architecture, it also indicates, to some extent, the potential of reasoning for large models in addressing hierarchical structures.
$\textcircled{2}$ The text modality outperforms the vision modality as input. On average, GPT-4o with text input outperforms image input by 15 points, and Gemini-Pro with text input exceeds image input by 10. Similarly, open-source MLLMs generally lag behind their backbone LLMs, and some table-oriented MLLMs such as Table-LLaVA also perform worse than their textual counterparts. While image inputs do not match text inputs in performance, they may serve as a complement to text inputs.
$\textcircled{3}$ Automatic tree reasoning significantly enhances structure understanding. Compared to the baseline, GPT-4o with our TreeThinker method demonstrates consistent improvements across input modalities, achieving the best performance on RealHiTBench. For example, in Chart Generation, GPT-4o's PASS@1 for text+image input increased from 14.29 to 33.55, a 134.7% improvement. Even for text-only input, TreeThinker enhances performance in NR, raising F1 from 36.68 to 49.35.
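The relative-improvement figure can be checked directly from the two PASS@1 values:

```python
# Relative improvement of PASS@1 (text+image) with TreeThinker over baseline.
baseline, with_tree = 14.29, 33.55
improvement = (with_tree - baseline) / baseline * 100
print(f"{improvement:.1f}%")  # consistent with the reported ~134.7% gain
```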
Table 3: The evaluation results of advanced models with Text and Image Inputs on RealHiTBench.
$\textcircled{4}$ Very long tables still matter. While handling tables, we identified 127 tables with complex structures that are significantly larger, making it impossible for LLMs to take in the complete table within a single conversation during evaluation. We analyzed the impact of different table sizes on the performance of models of various modalities, which indicates that long tables do affect models' table comprehension abilities, with a more pronounced impact on visual-modality models. Given that long tables exhibit unique complexities, we considered breaking the large-scale content down and feeding it to LLMs over multi-turn dialogues, but this is likely to disrupt the tabular information, which is unacceptable. We therefore hope that future work develops reasonable methods to fully utilize the value of these long tables, or that future LLMs support the input requirements of enormous tables arising from real usage.
Table 4: The ablation results of TreeThinker when using GPT-4o and Llama3.3-70b-Instruct as base models.
$\textcircled{5}$ Each TreeThinker component enhances the model's effectiveness. As indicated in Table 4, we conduct ablation studies on GPT-4o and Llama3-70b to assess the impact of each component on TreeThinker's performance. We find that every component contributes to the model's effectiveness, with Tree Generation playing a particularly crucial role in enhancing the model's understanding of realistic complex hierarchical tables.
Note: We place more experimental results in Appendix D.
# 1 Introduction
Historical research is grounded in the analysis of past events through primary source material, for which archives constitute a cornerstone in facilitating discovery, access, and interpretation [9]. One peculiarity of research on the Holocaust in particular is the wide dispersal of sources due to, amongst other things, its vast geographical scope, the intentional destruction of evidence, and the migration of people (and indeed whole populations) before, during and in the aftermath of WWII [17]. In turn, this dispersal led to the fragmentation of many important primary sources, which are now spread across different countries and custodial institutions, which in many cases use different cataloguing standards and practices. Researchers on the Holocaust at a trans-national level must navigate a patchwork of different archival standards, practices, and technologies [13].
The European Holocaust Research Infrastructure (EHRI) project was born as a European-funded project seeking to create a network of researchers, archives and digital practitioners in order to actively promote transnational access, delivering services to researchers on the Holocaust whether in a digital or physical form [17]. Starting in 2010, the EHRI project set out to offer centralised access to archival descriptions held in Europe and beyond. This materialised in 2015 with the launch of the EHRI Portal [3] which, to date, offers access to more than 380,000 archival descriptions held in 2,304 archives across 60 countries. The EHRI Portal follows the International Council on Archives’ (ICA) standards – i.e., the General International Standard Archival Description (ISAD(G)) for archival descriptions, the International Standard for Describing Institutions with Archival Holdings (ISDIAH) for archival institutions and the International Standard Archival Authority Record for Corporate Bodies, Persons and Families (ISAAR (CPF)) for authority records – and integrates a search engine, streamlining researchers’ endeavours and acting as a first access point for trans-national and cross-institutional Holocaust research. Moreover, it also contextualises archival metadata by adding a thematic layer based on the use of a controlled set of terms [1] (modelled in the Simple Knowledge Organization System (SKOS) [14]) and linking archival descriptions based on shared provenance [10].
Consequently, one of the main tasks in the EHRI project is the identification and integration of archival metadata into the Portal. This process has evolved over time. Initially, there was an assumption that it would rely heavily upon the existing use of technical standards for the publication and encoding of metadata – standards such as the Open Archives Protocol for Metadata Harvesting (OAI-PMH) and the Encoded Archival Description (EAD) XML format. In practice, however, this assumption proved optimistic. While there were a handful of cases where EHRI was able to leverage such standards for automated harvesting and ingestion of metadata, this was far from the common case.
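To illustrate what the standards-based route looks like in the few cases where it worked, the sketch below forms an OAI-PMH ListRecords request, the harvesting call a portal would issue to pull metadata records from a provider's repository. The endpoint URL and the `oai_ead` metadata prefix are placeholders, not EHRI specifics.

```python
from urllib.parse import urlencode

# Minimal sketch of an OAI-PMH harvest request. Per the OAI-PMH protocol,
# an initial ListRecords request carries a metadataPrefix, while follow-up
# pages carry only the resumptionToken returned by the repository.

def list_records_url(base_url, metadata_prefix, resumption_token=None):
    params = {"verb": "ListRecords"}
    if resumption_token:
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

url = list_records_url("https://archive.example.org/oai", "oai_ead")
print(url)
```

In practice, as the text notes, few providers exposed such an endpoint, which is why this automated path remained the exception rather than the rule.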
Without a viable standards-based approach on offer for the large majority of data providers, other options included manual data entry using the EHRI Portal’s web-based administration interface (developed for cataloguing of material not yet described in electronic form), or the creation of institution-specific bespoke ingestion workflows. Both approaches were recognised as excessively labour intensive and difficult to sustain, the former for the cataloguers, and the latter for technical staff.
Over the course of EHRI’s three phases we have sought to balance the desire to increase the coverage and quality of the metadata in the EHRI Portal with the realities of generally low adoption of, or poor support for, technical standards, overburdened and under-resourced staff, and the question of where the burden lies in the data sharing process – with the data provider, or with the integrator. This balance shifted somewhat as the focus expanded in the third phase of the project – EHRI-3 – from the more well-established (and typically well-resourced) data providers in the field, to the long tail of smaller, more variegated GLAM institutions and even “micro-archives”, holders of relevant material or collections that do not fit within a particular institutional mould. This ultimately led to the creation of a data integration lab charged with alleviating the substantial effort of setting up data integration workflows between data providers and the EHRI Portal. In this paper we therefore share our experiences running the said data integration lab, together with the challenges faced and the lessons learnt from them.
The rest of the paper is structured as follows: Section 2 reviews the related work, and in Section 3 we introduce the EHRI Portal and its tools for data integration. The data integration lab and the methodology followed are presented in Section 4, while Section 5 presents some prominent cases and their associated challenges. Section 6 explains the general challenges that we encountered and the lessons learnt from them, and in Section 7 we describe how the data integration lab should evolve in future phases of EHRI. Finally, Section 8 draws some conclusions.
# 2 Related Work
Integrating data from different archives for the sake of offering a unified and centralised platform to the user has been tackled by other initiatives with similar – or overlapping – topics.
National or regional aggregators are becoming the norm in the archival field in an effort to make archives and their holdings more visible and accessible to the end user. They can cover archives without a particular topical focus, as Archiefpunt does for Flanders, or be grounded in a specific subject, as, for example, Netwerk Oorlogsbronnen covering WWII sources in The Netherlands. One of the main advantages of this approach is the possibility of closer collaboration with the partner archives, as well as a better understanding of local particularities. Moreover, in many cases there are legal provisions that facilitate this endeavour. On the other hand, their reach is somewhat limited given the geographical – and linguistic – boundaries. They are, however, an invaluable resource on which bigger initiatives can build, as we explore further in Section 5.8.
Given the complexity of operating in a broader area, trans-national initiatives are far less numerous. Archives Portal Europe offers a centralised platform to access archival collections all over Europe, though due to its scope there are some inevitable gaps in coverage. In order to overcome this problem – or simply to accelerate the integration of sources for a specific topic – new aggregators are emerging with a topic-driven mission. In the field of Jewish archival heritage, Yerusha offers a portal similar to EHRI’s but focused on a broader range of material. As such, it inevitably overlaps with EHRI’s scope, though it differs in acquisition methodology: where Yerusha is based on a surveying effort for relevant archival material, EHRI seeks to connect directly with the institutions and integrate their archival descriptions as they are. Ultimately, this makes the EHRI Portal a multilingual platform, whereas Yerusha offers its content mainly in English.
While myriad aggregators may seem like a suboptimal solution for solving the dispersal of sources, they serve to cover small fragments of archives in a specific region or topic which can later be aggregated by supra-aggregators like Europeana, the Common European Data Space for Cultural Heritage or the European Cultural Heritage Cloud. Even though these cover the overarching topic of cultural heritage, and not just archival material specifically, they serve as an entry point for users who will then be directed to more fine-grained information. In fact, some aggregators are already pushing data to them, as Archives Portal Europe is doing towards Europeana. Furthermore, closer cooperation between the aforementioned aggregators is slowly taking place amidst a general drive for efficiency and to avoid the duplication of effort.
In order to push the data to these platforms some technical adaptations – on account of the different formats used or just to normalise the data – are typically needed. Different aggregators use a variety of techniques to tackle these problems, but in general information is rather scarce prior to committing to the process. As an example, Europeana employs a bespoke format called the Europeana Data Model (EDM) [8] which is then used as the basis for the representation and ingestion of data. Archives Portal Europe uses EAD 2002 as its base format, with some specific additions, and also offers technical assistance in converting the content provider’s data to EAD. In both cases, they require institutions to prepare their data in an intermediate format which is then used for ingestion. By contrast, EHRI has adapted its data integration process to one where it – rather than the data provider – takes on the larger responsibility for alignment and conversion of ingested material to the EHRI Portal’s data model.
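As a flavour of the conversion target involved, the sketch below builds a minimal EAD-like XML fragment for one collection description. Real EAD 2002 documents require many more mandatory elements (e.g. `<eadheader>`), so this is only an illustrative skeleton; the identifiers and titles are invented.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of a minimal EAD-like description: an <archdesc>
# with a <did> block holding identifier, title, and date. Values are
# hypothetical; a real export would populate far more fields.

ead = ET.Element("ead")
archdesc = ET.SubElement(ead, "archdesc", level="collection")
did = ET.SubElement(archdesc, "did")
ET.SubElement(did, "unitid").text = "COLL-001"
ET.SubElement(did, "unittitle").text = "Example testimony collection"
ET.SubElement(did, "unitdate").text = "1939-1945"

xml_string = ET.tostring(ead, encoding="unicode")
print(xml_string)
```

Producing even such a skeleton from an institution's internal catalogue is the kind of mapping work that, in EHRI's approach, is shouldered by the integrator rather than the data provider.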
It is worth noting that some recent initiatives seek to facilitate the reuse of cultural heritage collections by ensuring that they are released in a manner compatible with further computational use. This is the case, for example, of Collections as Data [15], which provides a set of guidelines aimed at small- and medium-size institutions on how to publish digital collections (aligned with the FAIR principles [20]) while ensuring their further processability by third parties. Ultimately, this also allows institutions to retain control over the data by managing the complete publication workflow, which ensures that the CARE principles [5] (as a complement to the FAIR ones) are also respected. Nevertheless, while cultural heritage institutions become more technologically independent, leaning on initiatives like Collections as Data for good practices on making their collections computable and by extension reusable, initiatives like the one described here are still relevant.
# 3 The EHRI Portal and Its Data Integration Tools

In this section we describe the EHRI Portal, its data model14 and the existing data integration tools.
# 3.1 The EHRI Portal
As briefly introduced before, the EHRI Portal gathers archival descriptions relevant to the Holocaust. However, from the very beginning it was deemed necessary to provide more context to the archival descriptions within this overarching trans-national topic. For this purpose, the EHRI Portal uses three main entities (countries, Collection Holding Institutions (CHIs), and archival descriptions). Countries represent an entry point to Holocaust research and provide a historical overview of the country during WWII, its Holocaust history and the general archival situation. From a particular country, the users can search and browse CHIs physically located within its borders. CHIs are described using the ISDIAH standard15, providing users with contact details and historical and service-related information about the institution. Institutions can hold a varying number of archival descriptions which will be listed in a hierarchical fashion to represent all the possible levels defined by archival practice (e.g., fonds, sub-fonds, series, sub-series, record groups, collections, folders and items) alongside the possibility to host parallel descriptions to accommodate multilinguality. These descriptions follow the ISAD(G) standard and, as mentioned before, can contain an arbitrary number of nested descriptions to form a hierarchy.
While these three entities constitute the core of the Portal, there exists a set of transversal entities that allow for better contextualisation of the archival metadata within the network. Vocabularies (structured using SKOS) define a controlled set of entities for use as archival access points. At the time of writing, there are three such controlled vocabularies in the EHRI Portal: one for subject headings (EHRI terms), and two for places (EHRI ghettos18 and EHRI camps19). Similarly, two sets of authorities described using the ISAAR (CPF) standard20 define lists of persons and corporate bodies, respectively, which can be linked as the creators of archival material and also as general access points. Finally, all the aforementioned entities can be annotated and linked using a derivative of the OpenAnnotation standard. Links can represent temporal, hierarchical, or familial relationships, or those based on provenance, such as establishing that a particular archival description was copied from another collection or institution [10]. A graphical representation of the described data model can be consulted in Fig. 1.
Fig. 1. This diagram represents the general data model used by the EHRI Portal where countries use a custom-based model for representing the EHRI country reports, archives use the ISDIAH standard, archival descriptions are based on the ISAD(G), authority sets follow the ISAAR(CPF), and vocabularies are modelled using SKOS. The relations are modelled as follows: countries can contain an arbitrary number of archives, and archives can hold many archival descriptions. Both archives and archival descriptions can be copied or be the original source of other archives and/or archival descriptions. Finally, archival descriptions might be linked to subjects represented in the vocabulary sets or in the authority sets but only entities pertaining to the authority sets can be stated as the creators of an archival description.
# 3.2 The EHRI Data Integration Platform
The EHRI data integration tools have undergone a process of evolution and expansion over the course of subsequent project phases, as progressively more was learned about the requirements, the nature of the data held by providers and the ways it was published and/or transmitted, and the approach taken by EHRI in coordinating and managing the ingest of material. Subsequent phases of the project have also enabled us to put more time into making the tools more capable and easier to use, building on what came before.
In EHRI-1, the foundations of the system were put in place with the development of a SAX-based XML processor for importing material into EHRI’s Neo4j-based metadata repository. The basic requirements for this component were that it be:
• Event-based: to handle large hierarchical XML files in an efficient manner.
• Transactional: so that validation or other errors did not leave the database in an invalid state.
• Customisable: so that different profiles of XML (conforming to specific schemas) could be catered to.
While the main processor was developed to target hierarchical EAD 2002, the core functionality was generic so that other data types, be it different XML schemas or even tabular data, could be accommodated with the creation of new subclasses. Provider-specific configuration files allowed specific XML paths in the source data to map to different database properties, providing a degree of flexibility to accommodate variations in how individual providers interpreted standards like EAD.
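To make the idea concrete, the event-based, path-to-property mapping approach can be sketched in Python. The mapping paths and property names below are hypothetical illustrations, not EHRI's actual configuration format:

```python
import io
import xml.sax

# Hypothetical provider-specific mapping: XML paths -> database properties.
MAPPING = {
    "ead/archdesc/did/unittitle": "name",
    "ead/archdesc/did/unitid": "identifier",
}

class PathMappingHandler(xml.sax.ContentHandler):
    """Event-based handler: tracks the current element path and maps
    configured paths to item properties as character data arrives."""
    def __init__(self, mapping):
        super().__init__()
        self.mapping = mapping
        self.path = []
        self.item = {}

    def startElement(self, name, attrs):
        self.path.append(name)

    def endElement(self, name):
        self.path.pop()

    def characters(self, content):
        key = self.mapping.get("/".join(self.path))
        if key and content.strip():
            self.item[key] = self.item.get(key, "") + content.strip()

EAD_SAMPLE = """<ead>
  <archdesc><did>
    <unitid>C 123</unitid>
    <unittitle>Example fonds</unittitle>
  </did></archdesc>
</ead>"""

handler = PathMappingHandler(MAPPING)
xml.sax.parse(io.BytesIO(EAD_SAMPLE.encode("utf-8")), handler)
print(handler.item)  # {'identifier': 'C 123', 'name': 'Example fonds'}
```

Because the handler reacts to parser events rather than loading a document tree, memory use stays flat even for very large hierarchical files, which is the property the original design required.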
In EHRI-2 this system was further refined, but greater emphasis was placed on conforming metadata from third parties to a more restricted subset of EAD prior to ingest, using a new GUI-based XML translator. Motivating this change was the desire to help data providers generate valid EAD from what could be a very wide range of primarily XML-based sources, typically exported from some proprietary collections-management system. Named the EAD Creation Tool (ECT)22, it made use of a Google Docs-based tabular mapping system comprised of XQuery expressions. Along with the ECT, a new tool was also developed to help data providers more easily publish their material online using the OAI ResourceSync protocol, named the Metadata Publishing Tool (MPT)24. The ECT and MPT together allowed institutions themselves to both generate and validate EAD, and make it available online, making EHRI’s task much easier when it came to sourcing the material in a sustainable manner and ingesting it into the Portal.
In the third phase of the project, the main objective was to make this system easier to use from end to end. This meant making it more straightforward to track the provenance of material from third-parties, to configure the different types of harvesting methods in use, and to manage the transformations necessary to conform material to the ingest standard. Providing a web-based user interface was also deemed essential, as the collection of command-line tools and API interactions comprising the data integration system in EHRI-1 and EHRI-2 quickly grew overly complex and became difficult to administer.
The first iteration of the EHRI-3 data import tools went live in 2020 and incorporated the three main components of the EHRI ingest pipeline – harvesting, transformation, and import – into the administration backend of the Portal. This system was oriented around datasets defined by a specific Extract, Transform, and Load (ETL) pipeline. Extract, in this case, involves either pulling material from the web (via ResourceSync, Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH)25, or from a set of URLs), or simply uploading the files manually to the system. Transformation uses the XQuery-based tabular mapping system developed in EHRI-2, allowing transformations to be chained and combined with Extensible Stylesheet Language Transformations (XSLT)-based processes, and incorporates a real-time preview facility allowing mappings to be developed interactively. Finally, the Load functionality encapsulates the EHRI-1 ingest configuration, interfacing with the Neo4j-based database backend.26
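The dataset-oriented pipeline described above can be sketched as follows. The stage names and data are invented stand-ins for the real harvesters and XQuery/XSLT transformation steps:

```python
# Minimal sketch of a dataset-oriented ETL pipeline: each dataset pairs an
# extractor with an ordered chain of transformations applied before load,
# mirroring how XQuery- and XSLT-based steps can be chained and combined.

def run_pipeline(extract, transforms, load):
    for doc in extract():
        for step in transforms:          # chained transformation steps
            doc = step(doc)
        load(doc)

# Stand-ins for the real stages (a manual upload instead of OAI-PMH or
# ResourceSync harvesting; a string rewrite instead of an XSLT process):
def extract():
    yield "<record><titel>Beispiel</titel></record>"

def rename_title(doc):                   # e.g., mapping a provider field to EAD
    return doc.replace("titel", "unittitle")

store = []
run_pipeline(extract, [rename_title], store.append)
print(store[0])  # <record><unittitle>Beispiel</unittitle></record>
```

The value of the arrangement is that each stage is swappable: a different harvester, an extra transformation in the chain, or a different load target can be configured per dataset without touching the other stages.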
This system was later extended to include supplementary aspects of the data ingest process, including the ability to manage how access points (such as subject headings) used by particular institutions were mapped to EHRI’s own controlled vocabularies. Another such process was the ability to run clean-up tasks to handle stale data left over from past imports, relating to items removed or renamed by data providers. Batch processing functionality was also added, to handle outlier cases where the same ETL pipeline needed to be executed for a large number of datasets (see Section 5.1).
Altogether, incorporating the EAD generation tools directly into the EHRI-3 data integration system implied a change of approach relative to EHRI-2, where the strategy was oriented around standalone tools that empowered archives to both generate EAD themselves, and publish it via ResourceSync or OAI-PMH. In reality, this change of approach was more tactical than strategic: while the ECT and MPT tools did make the process of generating and hosting conformant metadata easier, in the institutional context they could only do so much, and these tasks still necessitated a relatively high level of technical expertise and IT support to deploy and fully utilise. By integrating the data transformation process into its web-based infrastructure, EHRI could take on some of this burden itself (albeit in a manner that was more systematised) and in doing so significantly lower the bar for data providers to participate in supplying their metadata to the EHRI Portal.
# 4 The EHRI Data Integration Lab
In this section we introduce the EHRI Data Integration Lab, the problems encountered during EHRI’s previous phases that motivated its creation, the methodology followed to set up the different data integration cases and a breakdown of some prominent ones.
# 4.1 What Does It Solve?
Delivering data suitable for integration on the EHRI Portal was a substantial challenge even for the larger, more well-resourced institutions, let alone their smaller counterparts who often lacked the required technical support and expertise. As the EHRI-3 project aimed to cover more thoroughly both the long tail of smaller institutions and so-called “micro-archives” (greatly expanding EHRI’s previous reach27), it was deemed necessary for the project to take on more of the technical aspects involving the integration of archival descriptions, thus making the process more approachable for institutions of all sizes. The mission of the EHRI Data Integration Lab is to manage the creation of data integration workflows for institutions seeking to share their archival metadata, ensuring that this process is as transparent as possible on a technical basis, as well as repeatable and sustainable so that future revisions of the metadata can be smoothly delivered [6].
As discussed in Section 3.2, this shift of emphasis prompted a change in how the data integration tools were operated, making them more manageable by EHRI representatives and supporting a broader range of XML schemas (i.e., harvesting all kinds of XML files instead of just EAD). While this did not prevent institutions from setting up their own workflow using the new environment, they were not forced to do so and could delegate this to the data integration lab. At the same time, the tools offered increased productivity, as staff could be trained on more specific activities in a common environment, and resources better shared from one case to the next. The goal was to avoid, as much as possible, EHRI-specific ad-hoc development within institutions’ own IT infrastructures, instead using the mechanisms already put in place by the institutions for data exchange (APIs, exports, etc.), resulting in a more future-proof solution.
Initially, the data integration lab was conceived as a mobile lab, offering the possibility to visit institutions and deliver the technical set-up in situ. While this option was offered to institutions, its reach was somewhat limited by circumstances such as the outbreak of the COVID-19 pandemic and the travel restrictions of the following years. In some cases it was nevertheless possible to exercise this option, but from our experience and the achieved results we cannot correlate a visit to the archive with a better outcome in setting up a data integration workflow. This may be because people became more accustomed to working and collaborating in virtual environments after the pandemic, although this project does not provide enough evidence to assert that conclusively.
# 4.2 Methodology
In order to make this work more structured and manageable for the data integration team members, a methodology was put in place. This methodology defines a central access point for filing data integration requests to the EHRI project in the form of an online questionnaire28, ensuring that all relevant key aspects about the provider’s technical infrastructure are collected beforehand. Moreover, internally, it keeps the information about all the cases centralised and traceable. These requests are then integrated into a management system which provides traceability of the individual tasks and allows for an efficient organisation within the team. Data integration workflows are firstly implemented under a staging environment so that a representative of the institution can verify if the data has been integrated correctly and ask – if required – for changes to the data. Only after receiving final approval from the institution is the data moved to the production portal and consequently made publicly available. If at any point in time an update is required, the described process is replicated, ensuring that institutions are always in control of how their data is represented on the EHRI Portal.
In an effort to provide a transparent data integration workflow that aligns with the FAIR principles, the resources used for integrating the data of a specific institution are shared in a public repository. This repository contains a folder for each of the implemented cases and within each folder one can find the resources (i.e., mapping rules, configuration files and links to the datasets) employed for this case alongside a README with instructions on how to replicate the full workflow.
# 4.3 Legal Background
Archival finding aids are creative works resulting from intellectual effort by an archivist or group of archivists, who will study the material and describe it according to some general rules. As such, they are subject to the same intellectual property rights (IPR) as other types of material, and this means that any re-utilisation of the descriptions - modified or not - should respect the IPR of the authors. It is worth noting that a lack of licence on publicly available content does not mean that it can be used freely or openly; rather, it is normally understood to be protected by an exclusive copyright, even though this consideration can vary greatly depending on different countries’ laws.
Archival descriptions pertaining to the Holocaust can also involve, in some rare cases, the processing of special categories of personal data according to the GDPR (Art. 9) which need to be taken into account when including them in public archival descriptions (as safeguarded by Recital 158). As a European project, EHRI’s operations need to be aligned with the GDPR, and any possible breach by the project or one of its partners should be communicated expeditiously in order for it to be resolved.
To provide a legal framework for the work of the data integration lab that could cover, amongst other things, situations relating to IPR and data privacy, EHRI developed a text named the Content Provider Agreement (CPA). This agreement covers the exchange of data between the provider and EHRI, as well as the representation of the descriptions, their hosting, and their possible reuse during all the phases of the project. It also comes with a set of commitments for both parties regarding the fulfilment of the GDPR and intellectual property rights.
While these sorts of agreements are quite commonplace in business to facilitate operations involving IPR, securing them during the EHRI-3 project has proved more challenging than anticipated, in some cases even more so than the technical challenges described in this paper. While signing the CPA was a straightforward formality for some institutions, others had minor questions or needed additional clarifications. At the other end of the scale were those institutions who felt they needed to undertake legal consultation, which in the most challenging cases could end up requiring an amendment of the CPA. In some cases the CPA could not be signed, which prevented the relationship between EHRI and the data provider from moving forward. In Section 6.5, we further explore some of the actions put in place in order to mitigate the issues derived from this.
# 5 Some Highlighted Cases and Their Companion Challenges
In this section we provide an overview of some cases that will help illustrate the reach and scope of this data integration lab, as well as some of the challenges encountered.
# 5.1 United States Holocaust Memorial Museum (USHMM)
USHMM30 is one of the biggest international aggregators and collectors of Holocaust-related material, with substantial collections in its archives, museum, and audio-visual library. In sharing metadata about its holdings with EHRI, challenges were encountered due to the size of this dataset, consisting of many thousands of hierarchical collections and item-level descriptions, and different approaches were adopted as the data was updated and refined in successive project phases.
For EHRI-3, the new ingest environment was used and the USHMM export provided an ideal test-case for a large and complex dataset, necessitating a number of refinements and new accommodations. Firstly, the full dataset had to be split into series by accession year in order to make the ingest process more manageable for the EHRI Portal’s backend by reducing the size of single transactions. This, in turn, necessitated the creation of a batch processing system to automate the execution of the same ETL process over a large number of datasets.
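The splitting of a large export into per-year datasets, each processed by the same ETL configuration, can be illustrated with a minimal sketch (the record structure here is hypothetical):

```python
from itertools import groupby

# Illustrative sketch: splitting one large export into per-year datasets so
# that each ingest transaction stays small (identifiers are invented).
records = [
    {"id": "1998.A.0001", "year": 1998},
    {"id": "1998.A.0002", "year": 1998},
    {"id": "2004.B.0001", "year": 2004},
]

batches = {
    year: list(group)
    for year, group in groupby(sorted(records, key=lambda r: r["year"]),
                               key=lambda r: r["year"])
}

# Batch processing: the same ETL process runs once per dataset.
for year, batch in batches.items():
    print(year, len(batch))
```

Smaller per-batch transactions keep any single failure (and rollback) contained to one accession year rather than the whole export.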
It is worth noting that the USHMM was the only institution that preserved its EHRI-2 workflow involving the publication of data using the ResourceSync protocol, but dispensed with the conversion to EAD, instead publishing its bespoke XML data directly with the conversion taking place on EHRI’s servers. This ultimately allowed them to adjust the exported fields more straightforwardly, leading to richer descriptions on the EHRI Portal and helping ensure more timely updates31.
# 5.2 Center for Urban History
The Center for Urban History of East Central Europe32, based in Ukraine, possesses a rich digital archive related to the Holocaust containing interviews, maps and photos. For various reasons pertaining to the software capabilities, the export of the items was only possible in a CSV format. As explained in Section 6.1, this was a situation common to many institutions around the world.
Prior to this case, a workflow was put in place to convert Excel files to EAD in order to integrate data on the EHRI Portal from other institutions.33 This workflow was adapted to this specific case, taking into account one particular aspect: the existence of two hierarchical levels within the CSV file that had to be mapped to different levels of the resulting hierarchy34. Fortunately, each row incorporated information from the broader collections, making the conversion to EAD easier than it usually is in these kinds of cases.
Nevertheless, the process faced many setbacks stemming from the inconsistency between English and Ukrainian descriptions, which made the process very error-prone and time consuming. Throughout the process the provider needed to update their data to correct mistakes that were uncovered, and invariably some trial-and-error was involved. This case made us realise that, while sometimes this method can be the only way forward to import data, alternative methods like that developed for the Vilna Gaon Museum of Jewish History (see Section 5.7) may provide a faster and more sustainable way to integrate tabular datasets.
# 5.3 Ottawa Jewish Archives
This institution35 holds material related to the Jewish Community life in Ottawa, of which a selection is relevant to the Holocaust. While the Ottawa Jewish Archives staff were able to provide EHRI with an export of the Holocaust-related material in XML format, the hierarchies were broken up such that the fonds level description and the item level units it contained were represented by separate XML files. Therefore, it was necessary to reconstruct the hierarchy in order to present these descriptions in a coherent manner.
From a practical point of view this was implemented on the EHRI Portal36 using two distinct datasets: one for the fonds level and another for the item level. They were then executed in a specific order with the item level descriptions ingested first, embedded in an EAD containing the hierarchy as a skeleton, consisting of the fonds level identifier and title, as extracted from the items’ attributes. Following this, the full fonds metadata was ingested, enriching the skeleton descriptions created in the item level import.
This workflow makes it possible to reconstruct a hierarchy within the EHRI Portal when the data comes from different files, assuming an identifier reference to the parent is included in the lower-level descriptions. It is a generally more sustainable approach than doing the same procedure via bespoke scripting and it is flexible enough to cover full hierarchies. The main drawback in practice was the amount of time needed to execute the ETL for different levels.
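A minimal sketch of the two-pass skeleton-and-enrich procedure, with a greatly simplified data model (field names are invented for illustration):

```python
# Two-pass ingest sketch: item-level records carry a reference to their parent
# fonds, from which a skeleton description is created first; a second pass
# then enriches the skeleton with the full fonds-level metadata.
portal = {}  # identifier -> description

def ingest_item(item):
    parent = item["parent_id"]
    # first pass: create a skeleton fonds from the item's attributes
    portal.setdefault(parent, {"identifier": parent,
                               "title": item["parent_title"],
                               "children": []})
    portal[parent]["children"].append(item["id"])

def ingest_fonds(fonds):
    # second pass: enrich the skeleton with the full fonds metadata
    portal.setdefault(fonds["id"], {"identifier": fonds["id"], "children": []})
    portal[fonds["id"]].update(title=fonds["title"],
                               scope=fonds.get("scope", ""))

ingest_item({"id": "item-1", "parent_id": "fonds-A",
             "parent_title": "Fonds A (skeleton)"})
ingest_fonds({"id": "fonds-A", "title": "Fonds A",
              "scope": "Community records"})
print(portal["fonds-A"]["title"], portal["fonds-A"]["children"])
```

Running the item pass first means the parent always exists when children are attached, so ordering concerns reduce to running the two datasets in a fixed sequence.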
# 5.4 Vienna Wiesenthal Institute for Holocaust Studies
The Simon Wiesenthal Archive (SWA), which comprises approximately 200 linear meters of materials, is the largest collection within the Archive of the Vienna Wiesenthal Institute for Holocaust Studies (VWI)37. Integrating the SWA’s metadata into the EHRI Portal posed a technical challenge, as a complete export in a single file, particularly one that preserved the hierarchical structure, could not be achieved. Only a portion of the SWA’s archival descriptions could be exported as EAD files that contained the hierarchical structure. Others were available only in a custom XML format, which, although it typically contained more detailed information, did not include the hierarchy. The solution was to import metadata in both formats, prioritising the hierarchy-preserving EAD-XML files where available, and relying on the custom XML format when it offered additional information not present in the EAD version.
This amalgam of files was imported into the respective series using the tools available on the EHRI data integration platform. Two collections were imported down to the item level in a single step, using the hierarchical export in EAD format. For the remaining collections, the lack of hierarchical information in the custom XML format resulted in the need to import each level separately. Therefore, the utilisation of two distinct import formats enabled the incorporation of the metadata of the entire SWA38, albeit with variable levels of detail, and represents precisely the kinds of compromises that have to be reached in order to effectively import the data of an archive using the existing technologies.
# 5.5 Wiener Library Tel Aviv
The Wiener Library Tel Aviv39 is a special section of the Tel Aviv University’s Library consisting of a library and archive about the Holocaust. Due to this very specific arrangement within a larger library, the software in use is one specifically dedicated to libraries, i.e., Ex Libris Alma. While using a library-specific set-up for an archive can lead to some misalignments resulting from differing terminology and standards, it was not the biggest challenge in this case due to the flexibility of the overall system.
At first, the representatives were able to export EAD files from the different collections but the nested levels of the hierarchy contained less details than those present in the system. This was probably due to the fact that the system is not centred around archival standards, despite some support for them. However, due to the great flexibility and the variety of supported data exchange formats40, we were able to discover an open endpoint supporting Linked Data. Based on this, we were able to automatically harvest a Bibframe RDF/XML41 representation of the lower levels, containing much more detailed information about those records. Moreover, this endpoint allowed us to establish a sustainable connection with the provider, removing the need to manually export and transmit the data42.
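Extracting richer item-level detail from such an RDF/XML representation can be sketched as follows. The snippet and property choice are illustrative, not actual output from the Alma endpoint:

```python
import xml.etree.ElementTree as ET

# Invented Bibframe-style RDF/XML fragment standing in for a harvested record.
RDF_SAMPLE = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                         xmlns:bf="http://id.loc.gov/ontologies/bibframe/">
  <bf:Work rdf:about="https://example.org/item/1">
    <bf:title>Report on refugee correspondence</bf:title>
  </bf:Work>
</rdf:RDF>"""

BF = "{http://id.loc.gov/ontologies/bibframe/}"
root = ET.fromstring(RDF_SAMPLE)
# Pull out the namespaced title elements from the harvested record.
titles = [t.text for t in root.iter(BF + "title")]
print(titles)  # ['Report on refugee correspondence']
```

In the actual workflow such records would be fetched from the Linked Data endpoint and then converted to EAD with the Portal's standard transformation tools.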
# 5.6 Yad Vashem (YV)
YV43 is another of the largest collectors and aggregators of Holocaust-related material in the world. This made the integration of their metadata of great importance for the EHRI project since its inception. Nevertheless, in previous EHRI phases, the import of their collections was based on a custom-made Access database (exported from their main system and thoroughly curated by YV archivists for EHRI) containing the full archival hierarchy which was then converted to EAD using an ad-hoc script44.
Unfortunately, when the EHRI-3 data integration lab had to update these descriptions the situation was rather different. The labour-intensive process of curating a bespoke export could no longer be supported, as it relied on previous project financing that was no longer in place. Due to the very specific nature of the export and its disconnection from the main database – all the more so after several intervening years – it was considered prohibitively difficult to recreate the full process. Moreover, on EHRI’s side the script was no longer available, nor was the staff expertise. As a more permanent solution, the data integration lab and YV’s IT department agreed to create a bespoke API that could export the collection level to EHRI. This led to the update of the collection level45 but unfortunately the recreation of the full hierarchy, and therefore its full update, could not be achieved due to time constraints on both YV’s and EHRI’s sides; this is set to be resolved in future phases.
# 5.7 Vilna Gaon Museum of Jewish History
This institution46 is a museum that, like many others, also holds an archive within it. In this case, many different approaches were explored but none of them led to a workable and sustainable solution. However, as a Lithuanian institution, they are legally obliged to deliver their data to LIMIS47, a national aggregator for museums’ collections. As a national aggregator, LIMIS has a much more open and interoperable platform, supporting a range of different formats including an API delivering JSON or XML. Our first attempt was to use the XML output, given the EHRI Portal’s XML-centric platform, but this format proved less rich in detail than its JSON counterpart.
As mentioned earlier, the EHRI Portal’s data integration tools do not, at the time of writing, support JSON inputs and conversion. However, based on a parallel process to deliver the Portal’s data as a Knowledge Graph (KG) [12], and previous experiences converting RDF/XML to EAD (see Section 5.5), we decided to include a pre-transformation process able to convert JSON to RDF/XML using declarative mapping rules [19]. Declarative mapping rules allow converting heterogeneous data sources to an integrated RDF file in a more flexible and reusable way than most ad-hoc methods, and in practice we only employed a minimal set of rules in ShExML [11] (a language developed by one of the authors of this paper) to convert the JSON input into RDF/XML. In order to integrate this into the existing workflow, a web service was created encapsulating the invocation of the ShExML engine and making it callable from within the URL-based harvester.
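The pre-transformation idea can be sketched as follows. The real workflow invokes the ShExML engine behind a web service, whereas the rules, JSON field names, and predicates below are invented for illustration:

```python
import json

# Illustrative sketch of declarative mapping: a small table of rules maps
# source JSON fields to RDF predicates, from which RDF/XML is produced.
# (The actual workflow expresses such rules in ShExML, not Python.)
RULES = {"pavadinimas": "dc:title", "metai": "dc:date"}

def json_to_rdfxml(payload, subject):
    props = "".join(
        f"    <{pred}>{payload[field]}</{pred}>\n"
        for field, pred in RULES.items() if field in payload
    )
    return (
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
        '         xmlns:dc="http://purl.org/dc/elements/1.1/">\n'
        f'  <rdf:Description rdf:about="{subject}">\n'
        f'{props}'
        '  </rdf:Description>\n'
        '</rdf:RDF>'
    )

record = json.loads('{"pavadinimas": "Nuotrauka", "metai": "1941"}')
rdf = json_to_rdfxml(record, "https://example.org/obj/1")
print("<dc:title>Nuotrauka</dc:title>" in rdf)
```

Keeping the mapping in a declarative rule table rather than in code is what makes the approach reusable across providers: a new source format only requires new rules, not a new converter.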
After this pre-transformation step, the EHRI Portal receives a set of RDF/XML files that can be converted to EAD using the standard tools and ingested without further additions48. A graphical representation of this workflow and its components can be seen in Fig. 2. This experimental set-up not only solved the integration of data for this institution but also served as a use case to demonstrate how EHRI’s data integration tools can evolve in the future to cope with more heterogeneous data formats.
Fig. 2. Diagram of the system implemented to transform JSON files harvested from an API into EAD using the ShExML engine, prior to the standard EHRI Portal data transformation workflow.
# 5.8 FranceArchives, a National Aggregator Experience
The FranceArchives49 aggregator includes metadata for collections and fonds from archives throughout France. The site went live in 2017, and since then has amassed over 26 million archival descriptions50. These records span fifteen centuries and a wide range of topics, and come from both national and regional-level archives. Collections that may not have previously been easily accessible, particularly at smaller or regional archives, are also included, making it a valuable resource for researchers. The site also provides different methods for searching, querying and exporting the metadata, including a general keyword search, a SPARQL endpoint, or browsing by theme.
One of the key aspects of the integration task was the identification of Holocaust-relevant collections – in contrast to those cases where the CHIs only hold Holocaust-relevant descriptions – for which the use of the site’s search features was therefore essential [18]. As a result of intensive keyword and theme-focused searches on the site, approximately 280 collections potentially relevant to the EHRI Portal were identified. Nevertheless, as an aggregator, the FranceArchives portal accepts metadata in multiple formats and standards from institutions interested in sharing their collection metadata. Some metadata may come through in hierarchical formats such as XML, whereas other archives may provide finding aid PDFs. As such, the level of detail of the metadata can vary significantly. This lack of standardisation complicates integration decisions, such as whether to incorporate an entire collection into the EHRI Portal or to filter within a larger collection when only a selection of subfolders is relevant.
The platform initially began with EAD as its main standard, and released an OAI-PMH endpoint for metadata harvesting purposes. In 2022, however, it shifted to the Records in Contexts Ontology (RiC-O)51, making the OAI-PMH endpoint potentially deprecated in favour of data exchange standards more amenable to the Semantic Web [2], like the existing SPARQL endpoint. Nevertheless, given the current EHRI Portal data integration platform, the OAI-PMH endpoint (and its companion EAD format) remained more compatible overall, requiring only small XSLT transformations rather than a full conversion from RiC-O to EAD.
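Harvesting via OAI-PMH amounts to paging through ListRecords responses until no resumptionToken remains. The sketch below parses one such page; the response body is abbreviated and invented, and real harvesting would loop over HTTP requests:

```python
import xml.etree.ElementTree as ET

# Invented, abbreviated OAI-PMH ListRecords page.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
PAGE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><header><identifier>oai:fa:doc-1</identifier></header></record>
    <resumptionToken>page-2</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

root = ET.fromstring(PAGE)
# Collect record identifiers from this page of results.
ids = [h.text for h in root.iter(OAI + "identifier")]
# A non-empty resumptionToken means another page must be requested.
token = root.find(f".//{OAI}resumptionToken").text
print(ids, token)  # ['oai:fa:doc-1'] page-2
```

Each harvested record would then pass through the usual XSLT step before ingest.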
Incorporating collection metadata from FranceArchives was initially devised with the goal of developing strategies and methods to integrate and update collections from similar aggregators in the future, helping us cover a national landscape much more efficiently by building on previous efforts and leveraging a unified technical stack. The FranceArchives process differs from that of archives EHRI has worked with directly in that it lacks the same level of continuous human communication and troubleshooting: with an individual archive, such communication helps determine which collections to share with EHRI and how to share that metadata using guidance from the EHRI data integration team, as well as jointly identify any challenges or integration glitches.
While a benefit of FranceArchives is that its open licence52 removes potential non-technical, bureaucratic challenges of metadata sharing (metadata can be shared as long as attribution is given to FranceArchives), fine-tuning the data identification becomes a challenge because of the aforementioned missing human connection to the archive. Additionally, the FranceArchives portal continues to evolve, and therefore a continuous exploration of sustainable methods for locating and updating EHRI Portal-relevant collections becomes of the utmost importance.
# 6 Challenges and Lessons Learnt
In this section we describe the challenges we encountered, organised by category, together with some of the most notable lessons learnt from them.
# 6.1 Standards Coverage, Limitations on Data Exchange and Data Governance
While the Cultural Heritage field, and more specifically that of archives, has produced a set of standards covering different parts of the data exchange workflow (e.g., OAI-PMH for harvesting metadata or EAD for encoding archival finding aids), the support for them – as noted in Section 1 – is rather inconsistent across institutions and the systems they use. In some cases, organisations have a strong policy of providing wide support for different standards (as in the case of Ex Libris Alma described in Section 5.5), whereas in other cases the data, if accessible at all, is only available via a bespoke format or API, which inevitably makes the data integration process more complex. This ultimately makes the exploratory phase of data integration a process in which different possibilities have to be weighed, taking into account how they will affect the final results, as well as their potential repercussions for the future-proofing of the solution that is delivered.
IT service providers are a related factor, particularly with regard to the data exchange policies they wish to implement and support. Given that they may have other use-cases or product roadmaps in mind, the implementation of standards and other robust data exchange methods does not always have the highest priority alongside other business functionality, and experience shows us that many institutions are unable to export their data except in extremely limited ways as a result. Although it may seem surprising, staff working with collections management systems often do not seriously consider questions of interoperability until they want to collaborate in a project such as EHRI, long after IT procurement decisions have been made.
This naturally raises concerns about data governance, and whether institutions have the right to reproduce their own data as needed, or even migrate from one collections-management system to another. During our interviews with different institutions we have detected a tension between the priorities of private IT service providers seeking to maximise the adoption of their solutions in the Cultural Heritage sector, and the mission-driven activities of GLAM institutions who want to be able to manage their data as freely as possible. While in many cases the functionality requested by institutional clients is covered by the solutions delivered by their IT service providers, it is at the moment of a migration to another system, or when a collaboration with an aggregator like the EHRI Portal is pursued, that these tensions come to the fore. While it is not the purpose of this paper to propose solutions to this problem, we would like to highlight two general recommendations that could be applicable to GLAM institutions and service providers alike: 1) ensure that the data introduced in one platform can be exported from it in reusable formats following industry standards (e.g., archival descriptions in EAD) and 2) state these requirements in the Service Level Agreement (SLA) or similar binding contract, especially in cases where the work is funded by public money and should therefore be available for re-use by third parties.
# 6.2 Hierarchies Not Always Considered First-Class Citizens
Even in the cases where multiple data exchange formats and publication possibilities are theoretically supported, additional challenges still appear. A very prominent one that we have encountered across many institutions is inadequate support for describing archival hierarchies. While the most prominent archival conceptual standard – ISAD(G) – describes how descriptions can be hierarchically organised, the implementation of this fundamental aspect across different systems reflects a totally different reality: in many cases the flat tabular form derived from the relational model, on which many applications’ databases operate, is imposed on the application. This is not only true for the user interface but also carries over to the export functionality, where, even when providing hierarchical data formats like JSON and XML, the systems do not represent the full hierarchy but rather a direct translation of the tabular form. One of the main drawbacks of using a tabular model to represent hierarchical data is its inability to easily capture the hierarchical relationships, leading to complex JOIN clauses, hierarchy reconstruction via ad-hoc scripting, or storing data in ways that violate database normal forms. Moreover, in some cases, the hierarchical information is simply not included in the export.
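A minimal sketch of the kind of ad-hoc hierarchy reconstruction such flat exports force on consumers (the column names "id", "parent_id", and "title" are assumptions; real exports vary widely):

```python
# Rebuild an archival hierarchy from a flat tabular export where each row
# only points to its parent via an identifier column.

def build_tree(rows):
    """Nest flat rows into a tree keyed on the parent_id column."""
    nodes = {r["id"]: {**r, "children": []} for r in rows}
    roots = []
    for node in nodes.values():
        parent = nodes.get(node["parent_id"])
        if parent is not None:
            parent["children"].append(node)
        else:
            roots.append(node)  # top-level (fonds-level) description
    return roots

rows = [
    {"id": "F1", "parent_id": None, "title": "Fonds"},
    {"id": "S1", "parent_id": "F1", "title": "Series 1"},
    {"id": "I1", "parent_id": "S1", "title": "Item 1"},
]
tree = build_tree(rows)
```

Even this simple reconstruction already relies on assumptions (a single parent pointer, consistent identifiers) that exports do not always honour, which is precisely why ad-hoc scripting of this kind is fragile.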
# 6.3 Heterogeneity of Formats as a New Reality
A related problem for EHRI has proved to be the heterogeneity of formats in which the files can arrive for ingest into the Portal. While the EHRI Portal data integration tools were initially based around the idea of XML as the de facto standard in the Cultural Heritage field, also motivated by the use of EAD as the principal transmission format for archival descriptions, recent technological trends [4] have made this pragmatic choice harder to sustain.
As mentioned in the previous section, some cases required the adaptation of the existing workflow to handle non-XML formats, with varying degrees of usability and sustainability. For example, the conversion of Excel files to EAD helps to solve an immediate problem and brings more collections to the EHRI Portal. However, when applied to larger collections, and to those not homogeneously shaped, the process was typically inefficient, error-prone, and time-consuming; in the worst cases, a mapping error at one stage necessitated rerunning the whole workflow.
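As a loose illustration of such row-by-row conversion (not EHRI's actual tooling; the column names and the subset of EAD elements used are simplified assumptions), a spreadsheet-style row can be mapped to a minimal EAD-like component:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: maps one spreadsheet-style row to a simplified
# EAD-like <c> component. Real EAD conversion involves many more elements
# and validation against the EAD schema.

def row_to_component(row):
    c = ET.Element("c", level=row.get("level", "file"))
    did = ET.SubElement(c, "did")
    ET.SubElement(did, "unittitle").text = row["title"]
    ET.SubElement(did, "unitid").text = row["id"]
    if row.get("date"):
        ET.SubElement(did, "unitdate").text = row["date"]
    return c

row = {"id": "IT-001", "title": "Correspondence", "date": "1941-1944", "level": "file"}
xml_fragment = ET.tostring(row_to_component(row), encoding="unicode")
```

The fragility described above stems from exactly this kind of mapping: a single wrong column-to-element assumption invalidates every converted row.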
Conversely, the inclusion of a pre-transformation step using declarative mapping rules like ShExML (as in the case of the Vilna Gaon Museum of Jewish History) proved to be much more flexible and sustainable, and could be more easily integrated into the existing ETL pipeline. That said, it is still an experimental use-case, and as such adds inefficiencies by having to convert the input twice, first to RDF/XML and then to EAD. Integrating these technologies more deeply, however, would require a more substantial change to the EHRI data integration environment, mainly accepting more formats for ingest other than just EAD. Such changes are increasingly pressing, however, with the advent of the first stable release of the ICA’s Records in Contexts Conceptual Model (RiC-CM)53 and Ontology (RiC-O)54, which supersede ISAD(G), ISAAR(CPF), and ISDIAH (and their EAD, EAC and EAG counterparts) in providing a model for archival metadata.
# 6.4 Exploring a Federated Approach for Aggregators
Broadly speaking, the work of sustaining an aggregation platform like the EHRI Portal is never going to be finished, as new institutions and their materials continuously need to be identified, integrated, and later on updated to ensure the future relevance of the hosted data. In this respect, the process can seem like a Sisyphean task in trying to achieve a sustainable level of operation. Even when relying on existing national aggregators like the cases developed with LIMIS and FranceArchives, there still exists a substantial operational cost in keeping all this metadata up-to-date and ensuring its accuracy.
While this picture might seem somewhat negative, it is a challenge that prompts discussion of alternative approaches and the exploration of new methods of interoperability in the Cultural Heritage field. In this regard, in one of our previous works [12] we explored the possibilities that Semantic Web technologies – and their advocacy for shared vocabularies, unambiguous identifiers, and graph-shaped data over a decentralised web – can bring to this problem: specifically, how creating a KG of the EHRI Portal’s data can alleviate data integration endeavours by enabling more lightweight integration of metadata that can be retrieved on-the-fly from data providers serving their data in compatible standards. In addition to removing several current barriers to effective data integration, it could also lessen the burden data providers currently face in (often manually) pushing their data to different aggregators.
At the same time, this would have a disruptive impact on how aggregators operate nowadays: some of the effort currently put towards centralisation of data could be diverted into maintaining the connections across the federated network and further contextualising the collections amongst them, as EHRI has been doing by means of the EHRI vocabularies (for enabling thematic searches) and provenance-based linking (providing more contextual information to researchers). This is not new to Cultural Heritage, where standards like IIIF55 [16] have taken advantage of this federated approach to make images interoperable across platforms or, in recent years, initiatives like the European Cultural Heritage Cloud56 and the Common European Data Space for Cultural Heritage57 have emerged to solve part of these problems. Moreover, with the recent appearance of the first stable version of Records in Contexts (RiC), the required technical underpinnings are now available in the archival field, and hence this realisation seems more achievable than before.
# 6.5 Non-technical Challenges
Even though most of the issues raised in this section pertain to the technical aspect, there are some challenges related to organisational issues or the actual historical content.
Whereas there are many GLAM institutions which are dedicated to memorialising and facilitating research into the Holocaust, these are only a minority of those which have custody of Holocaust-related material. This creates an organisational challenge whereby it is necessary to determine what is within EHRI’s scope prior to importing such institutions’ metadata into the EHRI Portal. Typically, the bigger the archive, the more complicated this task is. As an example, state or national archives contain multiple collections dedicated to various topics of interest for a specific nation. Of those, only a small portion will be dedicated to the Holocaust, and in the worst cases, only some files or individual documents will be relevant. The latter case further complicates the required filtering, and later ingest, into the EHRI Portal. This mainly constitutes a content challenge in which an archivist with deep knowledge of a specific collection should decide what is relevant, and which archival level or levels would be best represented on the EHRI Portal, delivering fine-grained information with sufficient contextual detail. Unfortunately, this is not always possible, as the specialised archivist may no longer work for the organisation, or may simply lack the time – let alone the cases in which a specific collection has not yet been described.
Another challenge involves communication with the prospective archives. On many occasions, they understand the benefits that the EHRI Portal can provide in terms of visibility, but find the technical approach too challenging, or are uncertain about how their collection metadata might end up being displayed in the EHRI Portal. In response, we have tried to highlight that the data integration lab would take care of the technical aspects and that all the data would need their approval before being publicly available (via a test integration in the staging environment). However, sometimes this was not sufficient and the archives were interested in more short-term benefits from participation in EHRI, beyond the enhanced visibility. In that regard, we tried to offer immediate consultancy on technological aspects with a dual purpose: helping them to solve or analyse their technical problems, and making the data exchange with the EHRI Portal possible or more feasible. Amongst other topics, we covered: Search Engine Optimisation (SEO), data normalisation, controlled vocabularies, open source solutions (like Access to Memory) and Linked Open Data (LOD). In addition, more archival-related expertise was also shared, closely related to the EHRI Experts Lab [7], covering topics such as digitisation, name records and their indexing, and geographical representation.
However, in terms of communication, one of the most common roadblocks was related to the signing of the CPA. As noted above, the CPA was a necessary prerequisite in order to have a reliable legal framework for data exchange and the subsequent hosting of metadata on the EHRI Portal. This was one of the biggest challenges that the data integration lab had to deal with as this legal text often raised many questions and concerns, also fuelled by the trans-national aspect of the project involving archives in different countries with very different laws relating to this area. In order to mitigate these problems, and in collaboration with others in the project, we decided to introduce the CPA into the conversation from the very beginning, and at the same time offered a separate session with legal experts in which the CPA was more thoroughly introduced, alongside its necessity and the clarification of some of the more controversial articles. Nevertheless, in some cases, the whole signing process could take up to one year, involving many different meetings and several amendments. This demonstrated a clear need for a more careful communication about the legal aspects, similar to that done with the technical counterpart and described in this paper.
It is worth highlighting that these issues are outside the technical scope of this data integration lab but, nevertheless, they greatly impacted the outcomes it was able to achieve. Therefore, it should be noted that transversal aspects, like the legal one, but also possible social, cultural, and institutional factors, should be addressed and considered in the methodology from the outset, as they have been shown to have an enormous impact on the overall effectiveness of the technical endeavour.
# 7 Looking to the Future
This paper has gathered our experiences running a data integration lab during the EHRI-3 project to enrich the metadata present on the EHRI Portal about Holocaust-related archival material. However, the end of this project does not mean that EHRI’s data integration work has concluded. On the contrary, as mentioned before, this task is by nature an on-going one. Nevertheless, after three phases of running as a European-funded project, the EHRI project has recently transitioned into a permanent organisation59 via its establishment as a European Research Infrastructure Consortium (ERIC)60, funded instead by participating countries. While it is not the objective of this paper to detail how an ERIC functions or how the particular EHRI-ERIC will operate, we will try to sketch what this entails for its data integration activities.
Generally speaking, an ERIC is constituted by different countries (commonly known as national nodes) and a coordinating body (commonly known as the central hub). The central hub should normally ensure the coordination of the key activities as well as providing some essential infrastructure, though this is not predetermined and can vary greatly between different ERICs. In the specific case of EHRI-ERIC, we envision the creation of federated data integration labs that could ensure a better understanding of national archival landscapes, together with easier follow-up and collaboration with them. It is worth noting that while these days English tends to be used as the de facto lingua franca, and for most of our operations it worked fairly well, there are still many archives with which EHRI was unable to establish effective communication due to language barriers. The national nodes have an opportunity to improve this situation with their geographical and linguistic advantages, increasing the scope and coverage of the EHRI Portal amongst hitherto neglected GLAM institutions.
Nevertheless, another challenge arises from this new set-up: how to maintain the trans-national aspect of data integration, given that the national nodes will have a more local approach and not all countries will be represented in this next phase. In this sense, the consortium will have to devise a federated approach in which the national nodes would have the autonomy to add new records to the Portal, reproducing their national Holocaust archival landscape, but also taking shared responsibility for the addition of archival records outside their own jurisdiction. This federated approach opens up new possibilities for further integrating archival descriptions from many archives around the world, but it also presents a set of organisational challenges: preserving the homogeneity of the EHRI Portal’s data, ensuring even curation and data quality across it, and overseeing that the data integration workflows are developed to the same technical standards and that their long-term sustainability is ensured. Inevitably, the central hub will have to formulate guidelines on how these operations should work and seek to coordinate the addition of this new metadata.
In this sense, we see the work developed by the data integration lab during the EHRI-3 project, and described in this paper, as a blueprint for how future federated data integration labs could work, further underlining the possible lines of improvement upon our challenges and lessons learnt.

Abstract: Historical study of the Holocaust is commonly hampered by the dispersed and fragmented nature of important archival sources relating to this event. The EHRI project set out to mitigate this problem by building a trans-national network of archives, researchers, and digital practitioners, and one of its main outcomes was the creation of the EHRI Portal, a "virtual observatory" that gathers in one centralised platform descriptions of Holocaust-related archival sources from around the world. In order to build the Portal a strong data identification and integration effort was required, culminating in the project's third phase with the creation of the EHRI-3 data integration lab. The focus of the lab was to lower the bar to participation in the EHRI Portal by providing support to institutions in conforming their archival metadata with that required for integration, ultimately opening the process up to smaller institutions (and even so-called "micro-archives") without the necessary resources to undertake this process themselves. In this paper we present our experiences from running the data integration lab and discuss some of the challenges (both of a technical and social nature), how we tried to overcome them, and the overall lessons learnt. We envisage this work as an archetype upon which other practitioners seeking to pursue similar data integration activities can build their own efforts.

Categories: cs.DL, cs.CY, cs.DB, cs.SE
# 1 Introduction
The Shapes Constraint Language [1] (SHACL) has become a standard for validating RDF data, providing developers with a powerful tool to ensure that data conforms to specific constraints. However, SHACL is inherently designed to operate on a single graph and does not "natively"1 support the validation of RDF datasets, except via SPARQL-based constraints, whose behaviour in this setting is not clearly defined. As the volume and complexity of data increase, RDF datasets are increasingly utilized because they offer a structured way to organize data, provide additional context, and capture the provenance of data, which can come from different systems. Thus, validating the dataset as a whole, rather than treating its constituent graphs independently, becomes essential to maintaining data integrity, consistency, and reliability.
Currently, developers seeking to validate RDF datasets must build solutions on top of SHACL. For example, if the goal is to validate each graph in the dataset individually, a simple program can iterate through the dataset and apply a shapes graph to each named graph. However, more complex use cases arise when validation involves combining multiple graphs into a single graph or validating data across multiple graphs. In these scenarios, preprocessing is required, which introduces additional complexity and can lead to information loss, such as losing the name of the graph from which some data originates.
To address these challenges, we propose SHACL-DS, an extension of SHACL designed specifically to validate RDF datasets. SHACL-DS introduces features that extend the SHACL specification by adding a layer on top of SHACL. When using SHACL-DS, only the execution of the SPARQL queries inside SPARQL constraint components and SPARQL-based constraints changes, limiting the intrusiveness of our proposal. The extensions are:
– SHACL dataset, an RDF dataset that contains a set of SHACL shapes graphs, and their target, which are declaratively specified.
– TargetGraph, a mechanism for selecting specific graphs from a dataset that a shapes graph targets.
– Target Combination Declaration, a mechanism to specify a combination of graphs that a shapes graph targets.
– A specification of the behavior of SHACL’s SPARQL-based constraints and constraint components applied in the context of SHACL-DS.
The remainder of this paper is structured as follows: Section 2 reviews the state of the art in RDF dataset validation, focusing on existing approaches and their limitations. Section 3 formalizes the new features introduced by SHACL-DS. Section 4 describes a prototype implementation of SHACL-DS, which extends dotNetRDF2. Section 5 evaluates the extension using test cases, and Section 6 concludes the paper with a summary and potential directions for future research.
# 1.1 Document convention
This paper uses the following namespace prefix bindings:
– sh: http://www.w3.org/ns/shacl#
– shx: http://www.w3.org/ns/shacl-x#
– shds: http://www.w3.org/ns/shacl-dataset#
– foaf: http://xmlns.com/foaf/0.1/
– ex: http://example.org/
All listings contain fragments of RDF datasets in TriG that use the prefix bindings given above.
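For instance, a hypothetical listing in this style (all graph, shape, and property names are illustrative) could look like:

```trig
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

# A named graph in a shapes dataset holding one shape.
ex:shapesGraph1 {
    ex:PersonShape a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [ sh:path ex:name ; sh:minCount 1 ] .
}
```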
# 2 State of the art
SHACL has been studied across multiple dimensions. Some works focus on automatically deriving shapes from data [8],[13] or mappings [6]. Others investigate SHACL’s theoretical foundations, comparing it to Description Logic [4], [11] or OWL [9]. On the practical side, SHACL has been applied to use cases like access control [16] and form generation [20], [15]. However, despite this breadth of research, little attention has been given to validating RDF datasets.
This section explores the state of the art in RDF dataset validation. We begin by reviewing the SHACL specification, focusing on elements that may implicitly suggest or influence how dataset validation might be approached. Next, we review how some SHACL implementations handle the validation of RDF datasets, highlighting practical methods and their limitations. Lastly, we present insights from Andy Seaborne’s proposal [17] for a SHACL extension to RDF datasets, which, to the best of our knowledge, had no prototype implementation but inspired the development of SHACL-DS.
By analyzing these aspects, we aim to identify the gaps in the current state of the art and establish the context for developing SHACL-DS, an extension for dataset validation.
# 2.1 SHACL
The SHACL specification [1] provides a formal language for defining constraints on RDF data. Constraints are grouped into a shapes graph, a single RDF graph, and these are applied to a data graph, another single RDF graph. This inherent focus on single graphs limits SHACL’s direct applicability to RDF datasets.
Despite SHACL’s focus on single graphs, certain aspects of the specification suggest a potential for extending its applicability to RDF datasets. For example:
– The SHACL specification notes that a shapes graph can be a union of graphs: a shapes graph containing owl:imports should be extended with the RDF graphs referenced by this predicate. The data graph can also use sh:shapesGraph to indicate that the referenced RDF graphs should be included in the shapes graph. While not explicitly a dataset-level feature, referencing graphs via IRIs shares similarities with the concept of named graphs in RDF datasets.
– The SHACL-SPARQL specification introduces the pre-bound variables $shapesGraph and $currentShape to expand the expressiveness of SPARQL constraints. These are optional variables that, if supported by a processor, represent, respectively, the IRI of the shapes graph as a named graph (if it is in the same dataset as the data graph) and the IRI of the current shape. This allows SPARQL-based constraints to access the shapes graph directly. As this is an optional feature, not all SHACL-SPARQL processors support it, but it indicates an underlying assumption that the data graph and shapes graph might exist within the same RDF dataset, hinting at potential interactions at a dataset level.
– The SHACL specification explicitly prohibits certain SPARQL features, such as MINUS, SERVICE, AS, and VALUES, within SPARQL-based constraints. However, dataset-level keywords like FROM, FROM NAMED, and GRAPH are not restricted. While their inclusion may primarily support using pre-bound variables such as $shapesGraph, this permissibility contrasts with the explicit prohibitions on other features. This selective allowance hints at SHACL’s potential to interact with datasets, even though the specification does not clearly define how these keywords should behave in the context of dataset validation. The absence of explicit guidance underscores the need for a formalized approach to dataset-level validation in SHACL.
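To make the ambiguity concrete, the hypothetical SPARQL-based constraint below uses the unrestricted GRAPH keyword to look up data in another named graph; whether and how a processor evaluates this against a dataset is exactly what the specification leaves open (all IRIs are illustrative, and full IRIs are used in the query to avoid needing sh:prefixes):

```turtle
ex:CrossGraphShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:sparql [
        sh:message "Person is flagged as deprecated in another graph." ;
        sh:select """
            SELECT $this WHERE {
                GRAPH <http://example.org/audit> {
                    $this <http://example.org/deprecated> true .
                }
            }""" ;
    ] .
```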
# 2.2 SHACL Implementations
In this subsection, we examine how existing SHACL implementations address the validation of RDF datasets. The implementations considered were selected from the "SHACL Test Suite and Implementation Report" [2], which provides a comprehensive overview of SHACL processors and their conformance. Most implementations accept serialization formats that support datasets for either the shapes graph, the data graph, or both. Table 1 summarizes their behavior when a dataset is given as input for either the data graph or shapes graph.
Table 1. Summary of SHACL Implementations and Their Dataset Support
The table reveals that current SHACL implementations do not natively support dataset-level validation. Instead, they either treat the data graph as the union of all graphs or only consider the default graph for both data and shapes graphs.
While these approaches may be suitable for certain use cases, where treating all graphs as a single entity or focusing on the default graph is desired, they do not provide a way of handling more complex scenarios where users need finer control over which graphs are validated. In such cases, users must implement custom solutions or workarounds themselves. These findings underscore the need for a SHACL extension that allows users to define how a dataset should be validated explicitly.
# 2.3 SHACL-X
Andy Seaborne proposed in [17] an extension to SHACL to adapt its use for RDF datasets, thereby enhancing its applicability beyond single RDF graphs. His work introduces several key concepts that serve as the foundation for SHACL-DS, the extension developed in this work. SHACL-DS builds upon and refines Seaborne’s ideas to address gaps in the original proposal. Furthermore, this research is motivated by the lack of practical implementations of SHACL-X, which has hindered its evaluation and broader adoption.
Seaborne’s proposal introduces the concept of a Shapes Dataset, which is a collection of shapes graphs that group shapes together, and contains information on how these shapes graphs are applied to an RDF dataset. To specify the graphs to which they apply, one needs to define the Target Graph of these shapes graphs. This concept identifies the specific graph(s) within the RDF dataset to which the shapes graph of a Shapes Dataset applies. This approach is similar to how SHACL shapes identify their targets.
These targets are declared through the shx:targetGraph predicate. This property can be repeated so that a set of graphs can be selected as the target of a shapes graph. To avoid repeating this property, four special IRIs each define a collection of graphs:
– shx:all, all the graphs
– shx:named, all the named graphs
– shx:default, the default graph
– shx:union, the graph formed by the union of all named graphs
Moreover, the concept of an excluded target graph is also introduced through the shx:targetGraphExclude predicate, in order to make complex target definitions less verbose. A graph excluded by this predicate is removed from the target graph selection. To obtain a well-defined set of targets, all inclusions are applied before all exclusions. Note that the target definitions should either be specified in the default graph of the shapes dataset or within the specific shapes graph that defines its own target.
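A hypothetical declaration following Seaborne's predicates (graph names are illustrative), placed in the default graph of a shapes dataset, would target all named graphs except a provenance graph:

```trig
@prefix shx: <http://www.w3.org/ns/shacl-x#> .
@prefix ex:  <http://example.org/> .

# Inclusions are applied first (all named graphs), then exclusions.
ex:shapesGraph1 shx:targetGraph shx:named ;
    shx:targetGraphExclude ex:provenanceGraph .
```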
As the validation is more complex, validation reports should also be enhanced to provide more detailed results. In Seaborne’s proposal, each validation result in a report includes a shx:resultGraph triple, which indicates the graph in which the validation occurred. This helps to identify precisely where in the RDF dataset the error occurred.
While the proposal provides a foundation for extending SHACL to RDF datasets, we have identified some limitations. In the next section, we present SHACL-DS. The design and features of SHACL-DS are informed by some of SHACL-X’s proposals, but reconsider some aspects in order to provide a more expressive validation language. The limitations, and how we addressed them, will be discussed in Section 5.1. This allows us to separate SHACL-DS’s specification from its comparison with the prior art.
# 3 SHACL-DS
SHACL focuses on target nodes within a data graph. SHACL-DS adds a layer around SHACL to prescribe which (named) graphs in an RDF dataset must be validated by specific SHACL constraints. The SHACL shapes are themselves organized in an RDF dataset. This allows a lot of flexibility in validating RDF datasets. This section presents the design and key concepts of SHACL-DS, detailing its core principles and validation mechanisms.
# 3.1 Shapes Dataset
A Shapes Dataset in SHACL-DS is a collection of shapes graphs, and contains information on how these shapes graphs are applied to an RDF dataset. Figure 1 illustrates differences between SHACL and SHACL-DS. In SHACL, a single data graph is validated against a shapes graph. Within this shapes graph, shapes declare constraints that some target nodes of the data graph must respect. In comparison, in SHACL-DS, an RDF dataset called Data Dataset is validated against a Shapes Dataset. Within this Shapes Dataset, shapes graphs declare shapes that some target data graphs of the Data Dataset must respect.
Fig. 1. SHACL vs SHACL-DS
To select these targets, SHACL-DS defines two distinct methods: Target Graph definition, which identifies existing graphs within the dataset, and Target Graph Combination definition, which enables the selection of a new target graph through set operations on multiple graphs.
SHACL-DS restricts target declaration to the named graphs of a Shapes Dataset: the default graph cannot declare any target because it does not have an IRI. Overcoming this limitation would require additional annotations, which were omitted for simplicity. Section 5.1 provides more details on this design choice.
Since the default graph cannot declare a target graph, it serves a different role: providing a centralized location for declaring target graphs within a Shapes Dataset. SHACL-DS also supports a decentralized approach, wherein each named graph may independently declare its own target.
Consistent with SHACL, if a shapes graph does not specify a target then it is omitted from the validation process.
# 3.2 Focus Graph
SHACL-DS builds on the SHACL concept of focus node to define a focus graph. A Focus Graph is a data graph that is validated against a shapes graph. This data graph may be an existing graph in the data dataset or a new graph obtained through set operations on multiple graphs.
# 3.3 Target Graph
To declare a target, shapes graphs of the Shapes Dataset are subjects of triples with shds:targetGraph as predicate. The object of these triples is the IRI of the graph in the data dataset that is targeted. Since the default graph does not have an IRI, SHACL-DS employs the predefined IRI shds:default to allow it to be explicitly targeted.
To simplify the selection of multiple graphs, SHACL-DS provides two additional predefined IRIs:
– shds:named: Targets all named graphs in the data dataset. This is equivalent to explicitly declaring shds:targetGraph for each named graph individually.
– shds:all: Targets the entire data dataset, including both the default graph and all named graphs.
SHACL-DS introduces shds:targetGraphExclude for more refined graph selection. This allows specific graphs to be removed from the set of targeted graphs. This exclusion mechanism refines the final set of target graphs by applying exclusions after processing inclusions. During the validation, this selection will produce a set of focus graphs.
In the following example, the Shapes Dataset contains a shapes graph that targets all the named graphs. For each of these graphs, Alice must be a Person.
ex:shapesGraph1 shds:targetGraph shds:named .
ex:shapesGraph1 {
    ex:AliceIsPersonShape
        sh:targetNode ex:Alice ;
        sh:class ex:Person .
}
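The inclusion-then-exclusion selection described above can be sketched in Python. This is an illustrative helper, not the prototype implementation; the shds:* keywords are modeled as plain strings:

```python
# Illustrative resolution of focus graphs from target declarations.
# The shds:* keywords are modeled as plain strings (assumption).
SHDS_DEFAULT = "shds:default"
SHDS_NAMED = "shds:named"
SHDS_ALL = "shds:all"

def resolve_targets(named_graph_iris, includes, excludes):
    """named_graph_iris: IRIs of named graphs in the data dataset.
    includes: objects of shds:targetGraph triples.
    excludes: objects of shds:targetGraphExclude triples."""
    selected = set()
    for target in includes:
        if target == SHDS_NAMED:
            selected |= set(named_graph_iris)
        elif target == SHDS_ALL:
            selected |= set(named_graph_iris) | {SHDS_DEFAULT}
        else:  # a specific named graph IRI, or shds:default
            selected.add(target)
    # Exclusions are applied only after all inclusions are processed.
    for target in excludes:
        selected.discard(target)
    return selected

# Target everything, then exclude one named graph.
focus = resolve_targets(["ex:g1", "ex:g2"], [SHDS_ALL], ["ex:g2"])
```

Note that the order of declarations does not matter: exclusions always refine the fully resolved inclusion set.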
# 3.4 Target Graph Combination
In addition to selecting existing graphs as targets, SHACL-DS allows the selection of a combination of graphs obtained through set operations. Such targets are declared with shds:targetGraphCombination, and the following operators can be applied to such combinations:
– shds:and: Defines a target graph as the intersection of multiple graphs.
– shds:or: Defines a target graph as the union of multiple graphs.
– shds:minus: Defines a target graph as the difference between two graphs, where the second graph's triples are removed from the first.
The operands of these operators must be either named graphs within the dataset or other graph combinations, allowing recursive construction of complex target graphs. The predefined IRIs shds:default, shds:named, and shds:all can also be used within these expressions as syntactic sugar for several elements of the list. Each triple with the shds:targetGraphCombination predicate produces a single focus graph during the validation.
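These set operations can be sketched on graphs modeled as sets of triples. This is illustrative only; the operator IRIs and the recursive expression encoding are modeled as plain Python values:

```python
# Illustrative evaluation of a target graph combination; graphs are
# modeled as sets of triples, expressions as nested Python tuples.
def eval_combination(expr, graphs):
    """expr: a named-graph IRI (str) or (operator, [operands]).
    graphs: dict mapping graph IRI -> set of triples."""
    if isinstance(expr, str):
        return set(graphs[expr])
    op, operands = expr
    parts = [eval_combination(o, graphs) for o in operands]
    if op == "shds:or":      # union of multiple graphs
        return set().union(*parts)
    if op == "shds:and":     # intersection of multiple graphs
        return set.intersection(*parts)
    if op == "shds:minus":   # difference between exactly two graphs
        assert len(parts) == 2
        return parts[0] - parts[1]
    raise ValueError(f"unknown operator: {op}")

g = {"ex:a": {1, 2, 3}, "ex:b": {2, 3, 4}}
focus = eval_combination(("shds:minus", ["ex:a", "ex:b"]), g)
```

Because operands may themselves be combinations, nested expressions are evaluated recursively before the outer operator is applied.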
In the following example, the Shapes Dataset contains a shapes graph that targets the union of all graphs minus the ex:dataGraph1 named graph. For this graph combination, every person must know someone.
ex:shapeGraph1 shds:targetGraphCombination
    [ shds:minus ( [ shds:union (shds:all) ] ex:dataGraph1 ) ] .
ex:shapeGraph1 {
    ex:PersonKnowsSomeone
        sh:targetClass foaf:Person ;
        sh:property [ sh:path foaf:knows ; sh:minCount 1 ] .
}
# 3.5 Execution of SPARQL queries linked to SPARQL-based Constraints and SPARQL-based Constraint-components in SHACL-DS
The previous features introduced in SHACL-DS allow users to define how graphs are selected for validation, whether as individual named graphs or complex combinations of multiple graphs. Once this selection process is completed, the SHACL-DS validation process is similar to SHACL. Apart from iterating over focus graphs, the validation still only validates a single data graph against a shapes graph.
However, SHACL-DS further extends SHACL capabilities by specifying how an RDF dataset should be validated against a shapes graph in order to take into consideration the knowledge organization of data into several named graphs. The SHACL specification does not explicitly prohibit the use of dataset-level SPARQL keywords such as NAMED, FROM NAMED, and GRAPH, leaving their intended behavior ambiguous. In SHACL-DS, we formalize how constraints relying on SPARQL queries containing these keywords should operate in the context of RDF datasets, ensuring a well-defined and interoperable validation.
To achieve this, SHACL-DS establishes the following rule:
1. Validation Context: SPARQL queries must be evaluated against the RDF dataset. We strongly suggest that the shapes graph not be included within this dataset, to prevent unintended interactions. In addition, we do not define the behavior of the pre-bound variables $shapesGraph and $currentShape in a constraint. Since these variables are optional and not interoperable in SHACL, SHACL-DS maintains the same approach, leaving their potential usage as future work.
While the first rule ensures that SPARQL-based constraints can leverage the knowledge organization of an RDF dataset, it is not sufficient to achieve reusable constraints. Reusability requires constraints to behave consistently regardless of the target graph or dataset structure, ensuring that they apply to different graphs and datasets without requiring modification.
To illustrate this limitation, consider the following SPARQL query, which identifies people who do not know at least one "good person" within the dataset:
SELECT DISTINCT $this WHERE {
    $this a foaf:Person .
    FILTER NOT EXISTS {
        GRAPH ex:goodPersonGraph { ?goodPerson a foaf:Person . }
        $this foaf:knows ?goodPerson .
    }
}
This query, as part of a SPARQL-based constraint or a SPARQL-based constraint component, checks whether people know at least one person from the ex:goodPersonGraph named graph. It exemplifies that constraints may require dataset-level information, such as the named graph from which triples originate.
If this constraint is evaluated on the dataset in Listing 1.4, only Bob is included in the validation results, despite the fact that David also fails to meet the constraints. This occurs because the SPARQL query implicitly assumes that triples exist in the default graph unless explicitly specified otherwise. Since David is defined within the named graph ex:City1Graph, he is not considered by the constraint unless the query explicitly accounts for named graphs.
ex:Alice a foaf:Person ;
    foaf:knows ex:Zach ;
    foaf:knows ex:Yara .
ex:Bob a foaf:Person ;
    foaf:knows ex:Yara .
ex:City1Graph {
    ex:Charlie a foaf:Person ;
        foaf:knows ex:Zach .
    ex:David a foaf:Person ;
        foaf:knows ex:Yara .
}
ex:goodPersonGraph {
    ex:Zach a foaf:Person .
}
When validating the default graph with shds:default as target graph, this behavior aligns with expectations, as Bob is indeed missing a connection to a good person. However, when validating ex:City1Graph, one would expect David to be reported instead of Bob. This discrepancy highlights the difficulty of writing dataset-aware constraints that behave consistently across different target graphs and dataset structures.
A possible solution is to rewrite the query to consider different use case scenarios, but this approach burdens the user with a potentially complex task. Instead, SHACL-DS introduces a mechanism that shifts this responsibility to implementers, ensuring that dataset-aware constraints remain consistent across different datasets. This is achieved through the following rule:
2. Dataset View: SPARQL-based constraints are evaluated against a view of the RDF dataset, based on the current target graph, i.e., the focus graph. The view that is considered must be equivalent to a view obtained through the following transformations:
– If the focus graph is the default graph, the view is the dataset as it is, without modification.
– If the focus graph is a named graph, the view is obtained by permuting the named graph and default graph. In this view, the named graph is treated as the default graph, while the original default graph is assigned the name shds:default.
– If the focus graph is a combination of graphs, the view is obtained by assigning the name shds:default to the original default graph and assigning the combination to the default graph of the view.
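A minimal sketch of this view transformation, assuming a dataset is modeled as a dict from graph names to triple sets with None standing for the default graph. Whether the targeted named graph's original name remains visible in the view is our interpretation of the permutation rule:

```python
# Sketch of the dataset-view transformation. A dataset is modeled as a
# dict from graph name to triple set, with None as the default graph.
SHDS_DEFAULT = "shds:default"

def dataset_view(dataset, focus):
    """focus: None (default graph), a named-graph IRI, or a set of
    triples already computed for a graph combination."""
    if focus is None:
        return dict(dataset)  # default graph targeted: view unchanged
    view = {k: v for k, v in dataset.items() if k is not None}
    # The original default graph is assigned the name shds:default.
    view[SHDS_DEFAULT] = dataset.get(None, set())
    if isinstance(focus, str):
        # The targeted named graph becomes the view's default graph;
        # we assume its original name is dropped from the view.
        view[None] = dataset[focus]
        del view[focus]
    else:
        # A graph combination becomes the view's default graph.
        view[None] = set(focus)
    return view
```

With this helper, a SPARQL-based constraint always sees its focus graph as the default graph, while the original default graph stays reachable as shds:default.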
Examples of the resulting views of the dataset after these transformations are applied can be seen in Figure 2.
Fig. 2. Resulting dataset after transformation based on target graph
This transformation results in treating all target graphs as if they were the default graph during constraint evaluation, so SPARQL queries remain reusable and consistently applicable. Additionally, the original default graph remains accessible via the shds:default named graph, preserving access to all data.
By formally specifying how SPARQL queries with dataset-level keywords should be evaluated, SHACL-DS enhances SHACL’s ability to validate RDF datasets without requiring extensive user intervention. This structured approach ensures that dataset organization is leveraged effectively while maintaining constraint reusability and interoperability across diverse dataset structures.
In the following example, the Shapes Dataset contains a shapes graph that targets all the graphs except the ex:goodPersonGraph. For each of these graphs, every person must know at least one good person.
ex:shapeGraph1 shds:targetGraph shds:all ;
    shds:targetGraphExclude ex:goodPersonGraph .
ex:shapeGraph1 {
    ex:knowsGoodPersonShape
        sh:targetClass foaf:Person ;
        sh:sparql [
            sh:select """
                PREFIX foaf: <http://xmlns.com/foaf/0.1/>
                PREFIX ex: <http://example.org/>
                SELECT DISTINCT $this WHERE {
                    $this a foaf:Person .
                    FILTER NOT EXISTS {
                        GRAPH ex:goodPersonGraph { ?goodPerson a foaf:Person . }
                        $this foaf:knows ?goodPerson .
                    }
                }""" ;
        ] .
}
# 3.6 Validation report
To enhance the validation reporting process in SHACL-DS, we introduce two new predicates: shds:sourceShapeGraph and shds:focusGraph. These predicates provide additional context to validation results by annotating them with information about the originating shapes graph and the relevant data graph. Essentially, these predicates are the dataset-level equivalent of sh:sourceShape and sh:focusNode.
shds:sourceShapeGraph: This predicate identifies the specific shapes graph within the shapes dataset that contained the constraint responsible for generating the validation result. This allows users to trace validation errors back to their corresponding shape definitions, disambiguating shapes sharing the same IRI in different shapes graphs.
shds:focusGraph: This predicate indicates the focus graph, the specific data graph within the data dataset where the focus node originated. If the validated node belongs to a named graph, the focus graph is the IRI of that named graph. If the validated node originates from the default graph, shds:focusGraph is assigned the reserved IRI shds:default. In cases where the focus graph is a combination of multiple graphs, it does not correspond to a named graph of the dataset. Since such a combination is identified by a blank node, which is not shared across systems, a copy of the explicit combination declaration should be included in the validation report rather than simply referencing the blank node. This structure ensures that the validation result accurately identifies target data graphs of a composite nature.
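For illustration, a hypothetical validation result carrying these annotations could look as follows. The core structure follows plain SHACL validation results; the exact shape of a SHACL-DS report entry is an assumption here:

```turtle
# Hypothetical SHACL-DS validation result (illustrative structure):
# David, in ex:City1Graph, violates the shape from ex:shapeGraph1.
[] a sh:ValidationResult ;
    sh:resultSeverity sh:Violation ;
    sh:focusNode ex:David ;
    sh:sourceShape ex:knowsGoodPersonShape ;
    shds:sourceShapeGraph ex:shapeGraph1 ;
    shds:focusGraph ex:City1Graph .
```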
Due to space limitations, we are unable to provide a complete example. We refer the reader to the test cases in our GitHub repository for examples of various validation reports.
# 3.7 SHACL-DS shapes dataset to validate shapes datasets
Similar to the SHACL specification, which includes SHACL Shapes to validate shapes graphs, SHACL-DS introduces a Shapes Dataset to validate a Shapes Dataset. This ensures that target graph declarations using the shds:targetGraph and shds:targetGraphCombination predicates are well defined.
For Target Graph Declarations, it ensures that targets are IRIs. Each target must be one of the predefined IRIs (shds:default, shds:named, shds:all) or the IRI of a named graph in the Data Dataset.
For Target Graph Combination Declarations, it ensures that targets are graph combinations. These are either the IRI of a named graph or a blank node with one of the properties shds:or, shds:and or shds:minus. It also ensures that the objects of these properties are well-defined SHACL Lists of graph combinations. This list must also be of length two for shds:minus.
The project’s GitHub repository contains a full version of the SHACL-DS Shapes Dataset for validating Shapes Datasets.
# 3.8 Test cases
A structured test suite has been developed to validate SHACL-DS's correctness and practical applicability. These test cases systematically examine SHACL-DS's functionality by covering both fundamental features and complex validation scenarios.
The test cases cover: 1) Simple target graph declarations using a named graph or the predefined IRIs; 2) Proper exclusion of a graph from the set of target graphs; 3) Simple target graph combination declarations using each set operator on two graphs; 4) Complex target graph combination declarations with nested graph combinations, which may include predefined IRIs; and 5) Constraints using SPARQL queries with dataset-level keywords.
Each test produces a validation report including SHACL-DS validation annotations such as shds:focusGraph and shds:sourceShapeGraph. The whole test suite, including the Shapes datasets and Data dataset inputs, and expected outputs, is available on the project’s GitHub repository.
# 4 Implementation
As a proof of concept, a prototype SHACL-DS engine was implemented. This prototype extends the SHACL module of dotNetRDF. It introduces support for dataset-level validation while maintaining compatibility with existing SHACL functionality. The core of the implementation consists of a Shapes Dataset class, which extends the InMemoryDataset class in dotNetRDF. The main function of this class is validate, which performs validation on a given Data Dataset.
The validation process builds upon the standard SHACL validation mechanism but incorporates dataset-level processing. First, the system constructs a list of tuples consisting of a shapes graph and a target graph, derived from Target Graph and Target Graph Combination declarations. Each of these tuples undergoes SHACL validation independently. Once validation is performed, the validation results are extended with SHACL-DS validation annotations, namely shds:focusGraph and shds:sourceShapeGraph. These annotated reports are then aggregated into a single comprehensive validation report.
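The validation loop described above can be sketched as follows. This is a simplified stand-in for the dotNetRDF-based prototype; the tuple structure and the validate_shacl callback are hypothetical:

```python
# Simplified stand-in for the prototype's validation loop; the tuple
# structure and validate_shacl callback are hypothetical.
def validate_dataset(pairs, validate_shacl):
    """pairs: (shapes_graph_iri, focus_graph_iri, shapes, data) tuples
    derived from the Target Graph (Combination) declarations."""
    report = []
    for shapes_iri, focus_iri, shapes, data in pairs:
        # Each pair undergoes plain SHACL validation independently.
        for result in validate_shacl(shapes, data):
            # Extend each result with the SHACL-DS annotations.
            result["shds:sourceShapeGraph"] = shapes_iri
            result["shds:focusGraph"] = focus_iri
            report.append(result)
    # All annotated results are aggregated into a single report.
    return report

# Toy validator flagging every data item as a result.
toy = lambda shapes, data: [{"focusNode": d} for d in data]
out = validate_dataset([("ex:sg1", "ex:g1", None, ["ex:Bob"])], toy)
```

Running each (shapes graph, focus graph) pair through an unmodified SHACL engine is what keeps the extension compatible with existing implementations.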
For SPARQL-based constraints, the SHACL-DS engine applies queries directly to the Data Dataset. To ensure that these constraints are evaluated on the required view described in Section 3.5, the engine transforms the dataset by renaming graphs and adding a graph if necessary. This transformation is only performed if the validation comes from a Shapes Dataset, to preserve compatibility with SHACL.
As SHACL-DS is a proof-of-concept prototype, performance optimizations have not been a focus, and no benchmarking has been conducted. The primary goal of this implementation is to demonstrate feasibility rather than efficiency.
# 5 Discussion
# 5.1 SHACL-DS vs SHACL-X
SHACL-DS and SHACL-X extend SHACL to support dataset validation, but they adopt different approaches and design choices. Initially, SHACL-X served as an inspiration for SHACL-DS, as it proposed key concepts for dataset validation. However, the lack of an implementation led to practical limitations, and upon closer analysis, certain design choices in SHACL-X were reconsidered. SHACL-DS refines these ideas, addressing issues found in SHACL-X and improving their applicability.
One key difference is how target graphs are specified. While SHACL-X allows target graphs to be defined at the level of individual shapes, named graphs, or the entire dataset, SHACL-DS restricts target graph declarations to named graphs only.
SHACL-DS positions itself at the dataset level to maintain compatibility with SHACL and simplify implementation. This approach ensures that existing SHACL engines can be reused without requiring modifications to their internal processing. For this reason, SHACL-DS does not allow shapes to define target graphs, as it would alter the SHACL validation process and require changes to the core mechanics of SHACL engines.
Additionally, SHACL-DS does not allow specifying target graphs for the whole dataset. In contrast, SHACL-X uses the IRI of the shapes dataset as a subject to declare targets for the entire dataset. However, distinguishing the IRI of the dataset from a named graph within it requires additional annotations, making this approach more complex.
SHACL-X already introduced the idea of target graph combination with the predefined IRI shx:union, which represented the union of named graphs. However, this approach was too restrictive, as it applied only to named graphs and did not even cover the union of all graphs, which seemed arbitrary. In SHACL-DS, we properly separated the concepts of Target Graph and Target Graph Combination, allowing for more complex and flexible graph combinations through explicit set operations such as intersection, union, and difference.
SHACL-X also introduced shx:include to share shapes between named graphs, and shx:targetGraphPattern and shx:targetGraphExcludePattern for filtering target graphs based on IRI patterns. While these features provide additional flexibility, SHACL-DS does not currently include them, as it focuses on foundational concepts for dataset validation.
SHACL-DS and SHACL-X also differ in their design goals. SHACL-DS was developed and refined alongside its prototype, as the goal was not only to define a language for dataset validation but also to provide a working implementation. This may have introduced some bias in the language’s design. In contrast, SHACL-X remains a theoretical proposal without an implementation, meaning its feasibility has not been validated through practical use.
Overall, SHACL-DS builds upon concepts introduced by SHACL-X but refines them to ensure practical applicability and implementation feasibility. By integrating these refinements and providing a prototype, SHACL-DS takes a concrete step toward enabling dataset-level validation within SHACL.
# 5.2 Shapes Dataset
An alternative to using a Shapes Dataset is to include target graph declarations as annotations directly within a Shapes Graph. However, SHACL-DS is designed for use in projects where data originates from various sources and must conform to multiple constraints. To accommodate these requirements, one would need to add annotations to Shapes Graphs that are likely distributed across different documents, making management and validation more complex. By introducing a Shapes Dataset, SHACL-DS provides a centralized approach to defining the constraints of a project. This design choice also has the potential to enhance the reusability of shapes, particularly if features like shx:include are incorporated in the future.
# 5.3 Limitations
Despite this study's contributions, SHACL-DS has not yet been tested on real-world datasets. The evaluation relied on a structured test suite composed of toy examples to validate its functionality systematically. While these test cases demonstrate that SHACL-DS implements dataset-level validation, practical adoption in real-world use cases involving large-scale and heterogeneous RDF datasets remains underexplored. Nevertheless, SHACL-DS addresses an issue identified in [5]. Specifically, not knowing the origin named graph of some triples leads to validation bypass in SHACL. By enabling dataset-level validation while preserving named graph distinctions, SHACL-DS mitigates this issue.
In addition, the implementation is a prototype that focuses on correctness rather than performance optimization. No benchmarking has been conducted to assess its efficiency on large-scale RDF datasets.
# 1. Introduction
Functional maps represent correspondences between shapes as a change of basis matrix [OBCS*12, OCB*16]. This point of view has a myriad of benefits, such as efficiency and generality. Since their inception, multiple aspects of the functional maps computational pipeline have been extended and improved, for example by using different functional bases [WBCPS18, HSA*23, BXNL24], by incorporating them in a deep learning framework [LRR*17, HLR*19a, SMJ*23a], and by addressing issues such as partiality [RCB*17, LRB*16, LRBB17, APO21, BDK24] and smoothness [RPWO18, MRSHO22], to mention just a few.
However, not all functional maps represent a valid point to point (P2P) map. Hence, a major challenge is converting the functional map to a high quality P2P map for downstream applications. This task is known as functional map refinement, and has been widely addressed in the literature, using both classical [EBC17, NMR*18, ESBC19, MRR*19] and data-driven methods [MO24].
Reconstructing a P2P map while refining the functional map is a strong regularizer which has been used in previous works, e.g. ZoomOut [MRR*19]. However, incorporating this step within a deep learning network is difficult, due to the large dimensions of the data involved, which depend on the dimensions of the input meshes, and the difficult non-linear constraints, essentially seeking a permutation.
Image diffusion models [HJA20, SDWMG15, SE19, RBL*22] have been hugely popular for image generation due to their high fidelity and flexibility. Initially primarily conditioned on text, these models have been adapted to support image-conditioned generation, allowing control via visual inputs for tasks such as style transfer, image-to-image translation, or super-resolution. In addition, guidance techniques [CKM*22, SZY*23, BCS*23, YWZ*23, HML*23] have been developed to steer the generative process at inference time using additional objectives, enabling control over semantics, geometry, or alignment with external signals.
We propose that a conditioned image diffusion model with guidance is a tool which is highly beneficial for functional map refinement. First, functional map matrices are treated as images, mapping the values to pixel values. Second, in the learning phase we work solely on the functional maps, learning to generate an "image" which represents a refined functional map conditioned on an "image" which represents a noisy initial functional map. The initial maps can be computed using different means, e.g., using descriptor correspondence. Finally, in the inference step, we use the input initial functional map as a condition and generate using the learned model a refined functional map.
This is highly efficient, as no P2P maps are required in the learning stage, since the ground truth functional maps are computed from the ground truth P2P maps in a preprocessing step. However, as is widely known [MRR*19], restricting the output functional map to correspond to a P2P map is highly beneficial for the output map quality. Thus, in the inference step, we use a guidance objective inspired by P2P reconstruction techniques to guide our output map, leading to refined functional maps that correspond to high quality P2P maps.
We demonstrate that our approach is flexible, accepting as input functional maps computed through various means (e.g., different descriptors, deep feature extractors). Furthermore, we show that additional guidance objectives (e.g., orthogonality, or other functional map regularizations) can effectively improve the output map. Notably, even when trained on a very small dataset our method successfully improves maps for shapes beyond the training distribution, as illustrated in Figure 1 by an example involving a dataset distinctly different from the training set. Finally, we show that our method compares favorably to the state of the art of functional map refinement approaches, both classical and data-driven.
# 1.1. Related Work
Functional Map Refinement. Functional maps have been introduced by Ovsjanikov et al. [OBCS*12], and since then have been generalized in many ways [OCB*16, DYDZ22]. Here, we focus on map refinement, namely improving an initial functional map, computed by some non-accurate means, e.g. from shape descriptors, such that an accurate point-to-point (P2P) map can be extracted from it.
Already when introduced [OBCS*12], an iterative ICP algorithm in the spectral embedding space was proposed for extracting a P2P map. Later improvements included considering a smoothness assumption [EBC17], namely that the pulled back Laplace-Beltrami eigenfunctions of one shape are in the span of the Laplace-Beltrami eigenfunctions of the second shape. We also use this prior, though we leverage it as a guidance term during the inference process. Thus we guide the functional map improvement using the P2P alignment prior. Among the many later refinement schemes, ZoomOut [MRR*19] and IMA [PRM*21] stand out. ZoomOut uses the insight that alternately upsampling the spectral dimension and projecting on the space of pointwise maps leads to better map recovery. IMA uses a connection to optimal transport to directly improve the functional map matrix. Recently, a differentiable version of ZoomOut was proposed [MO24], which is incorporated within a network for learning shape correspondences. The refinement component there, however, has no learnable parameters.
As opposed to other map refinement methods, our approach has a few unique properties. First, to the best of our knowledge, it is the first method that learns to refine functional maps from initial and ground truth maps, as opposed to computing refined maps from learned features. Second, the training phase is done fully on the functional map matrix (after a pre-computation of the initial and ground truth maps). Finally, we treat the functional maps as images and leverage the powerful image diffusion models.
Diffusion Models for Geometric Data. Diffusion models were originally introduced for high-fidelity 2D image generation [HJA20, SE19, DN21, RBL*22], and have since been extended to a wide range of modalities. Their flexibility, stability, and controllable generation capabilities have recently made them attractive for 3D geometric data. This includes applications in 3D shape and scene synthesis, novel view synthesis, avatar modeling, and structure-aware feature extraction.
For geometric generation, diffusion models have been applied across a broad spectrum of representations. Early works explored point-based generation [LH21, NJD*22, MKRV23, WWF*23, ZHM*24], while voxel and volumetric formulations enabled more structured geometry synthesis [HLHF22, MSP*23, TGW*23]. Triplane-based models [CGC*23, SCP*23, WZZ*23, ZCW*24, GXN*23, LGL*24] have proven effective for neural field generation and textured mesh synthesis. Direct mesh-based diffusion has also been explored [LFB*23]. More recently, 3D Gaussian Splatting (3DGS) has become a dominant representation. Several methods are trained directly on explicit 3D splat data [HCP*24, ZCY*24, LTQ*23, LWT23], while a number of works have demonstrated that high-quality 3DGS can also be generated using only 2D supervision by leveraging differentiable rasterization and novel view consistency [PST*24, SRV23, LXC*24, CZX*24]. In parallel, 2D diffusion models have also been leveraged to create 3D assets via optimization-based lifting [PJBM22, TRZ*23, LSdOBG*24].
Other approaches target novel view synthesis [WCMB*22, LWVH*23, SXL24] and multimodal 3D outputs [XLX*24, WZG*25], expanding the generative space to include conditional, occluded, or multi-view scenarios.
While diffusion models have been widely applied to 3D generation, their use for geometric understanding and shape correspondence remains limited. Some recent works have focused on extracting features from 2D renderings of meshes using diffusion backbones [DMM24], to decorate surfaces with semantic features. Others have repurposed 2D diffusion models fine-tuned for view synthesis to extract geometry-aware features [XLFL24, TJW*23], which show strong view correspondence but are not used for shape matching or refinement. Je et al. [JLY*24] apply generative sampling via Riemannian Langevin Dynamics for robust symmetry detection, illustrating the broader use of probabilistic methods in geometric reasoning.
The most closely related and concurrent work is by Zhuravlev et al. [ZLG25], who also use diffusion models to learn functional maps. Their approach predicts correspondences to a fixed template using a denoising pipeline trained on synthetic human shapes. In contrast, our method assumes a given (possibly noisy) functional map as input and refines it between arbitrary shape pairs, with flexible plug-and-play guidance during inference.
Guidance in Diffusion Models. Recent work has shown that diffusion models can be steered at inference time using guidance mechanisms that incorporate semantic or structural objectives. Classifier-based [DN21], classifier-free [HS22], and gradient-based plug-and-play methods [BCS*23, KEES22, WYZ22, CKM*22, LDR*22, CSRY22, GMJS22, HML*23, CLY23] allow external objectives to influence the generation process without retraining. These approaches have been used for semantic control, view consistency, and geometric constraints [PST*24, SRV23, RLP*23].
Our method follows this paradigm but applies guidance to functional map refinement, directing the diffusion process with geometric losses such as pointwise consistency, orthogonality, and Laplacian commutativity—objectives specific to spectral correspondence problems.
# 1.2. Contributions
Our main contribution is a reformulation of the functional map refinement problem as a conditional image generation problem. Specifically, we:
• Show that conditional image diffusion with guidance can refine functional maps generated by various sources.
• Provide P2P guidance that efficiently improves the map at test time, as well as a variety of guidance mechanisms appropriate for different shape classes.
• Demonstrate that our approach compares favorably to state of the art functional map refinement algorithms, both classical and deep.
# 2. FRIDU
# 2.1. Notation
We denote a pair of meshes by $\mathcal { M } _ { j } = ( \mathcal { V } _ { j } , \mathcal { F } _ { j } )$ , where $j = 1, 2$ , and $\mathcal { V } _ { j } , \mathcal { F } _ { j }$ are the vertices and faces respectively, with $n _ { j } = | \mathcal { V } _ { j } |$ , $m _ { j } = | \mathcal { F } _ { j } |$ . The first $k _ { j }$ eigenfunctions of the Laplace-Beltrami operator of the mesh $\mathcal { M } _ { j }$ are denoted by a matrix $\Phi _ { j } \in \mathbb { R } ^ { n _ { j } \times k _ { j } }$ , and the corresponding diagonal matrix of $k _ { j }$ eigenvalues is denoted by $\Lambda _ { j }$ . A point to point (P2P) map between the meshes is denoted by $T _ { 2 1 } : { \mathcal { M } } _ { 2 } \to { \mathcal { M } } _ { 1 }$ , and maps vertices on $\mathcal { M } _ { 2 }$ to vertices on $\mathcal { M } _ { 1 }$ .
The P2P map can also be represented as a binary row-stochastic matrix $\Pi _ { 2 1 } \in \{ 0 , 1 \} ^ { n _ { 2 } \times n _ { 1 } }$ , where $\Pi _ { 2 1 } ( i , j ) = 1$ if and only if $v _ { i } \in \mathcal { M } _ { 2 }$ is mapped to $v _ { j } \in \mathcal { M } _ { 1 }$ . $\Pi _ { 2 1 }$ maps indicator functions defined on $\mathcal { V } _ { 1 }$ to indicator (or zero) functions defined on $\mathcal { V } _ { 2 }$ . A soft P2P map $P _ { 2 1 } \in \mathbb { R } ^ { n _ { 2 } \times n _ { 1 } }$ maps real functions defined on $\mathcal { V } _ { 1 }$ to real functions on $\mathcal { V } _ { 2 }$ . A functional map $C _ { 2 1 } \in \mathbb { R } ^ { k _ { 2 } \times k _ { 1 } }$ maps functions given in the spectral basis $\Phi _ { 1 }$ to functions defined in the spectral basis $\Phi _ { 2 }$ .
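As a concrete illustration of the notation, the following minimal sketch (not the paper's code; the "eigenbases" are random stand-ins for the actual Laplace-Beltrami eigenfunctions) builds the binary matrix $\Pi_{21}$ from a vertex-index map $T_{21}$ and converts it to a functional map via $C_{21} = \Phi_2^\dagger \Pi_{21} \Phi_1$:

```python
import numpy as np

# Toy shapes and basis sizes; Phi1/Phi2 stand in for LB eigenbases.
rng = np.random.default_rng(0)
n1, n2, k1, k2 = 40, 50, 6, 6
Phi1 = rng.standard_normal((n1, k1))
Phi2 = rng.standard_normal((n2, k2))
T21 = rng.integers(0, n1, size=n2)        # T_21: vertex of M2 -> vertex of M1

Pi21 = np.zeros((n2, n1))
Pi21[np.arange(n2), T21] = 1.0            # binary, row-stochastic by construction

C21 = np.linalg.pinv(Phi2) @ Pi21 @ Phi1  # spectral (functional) representation
```

Note that $C_{21}$ is only $k_2 \times k_1$, independent of the mesh resolutions $n_1, n_2$, which is what makes it a convenient image-like object for diffusion.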
# 2.2. Overview
At inference time we are given an initial functional map $\widetilde { C } _ { 2 1 }$ between two meshes $\mathcal { M } _ { 1 } , \mathcal { M } _ { 2 }$ , as well as the corresponding functional bases $\Phi _ { 1 } , \Phi _ { 2 }$ . We need to provide as output a refined functional map $C _ { 2 1 }$ , and its corresponding refined P2P map $\Pi _ { 2 1 }$ .
Towards that end we train a conditioned image diffusion model $d _ { \theta }$ which generates our refined map $C _ { 2 1 }$ , conditioned on the initial map $\widetilde { C } _ { 2 1 }$ . We train on maps $\widetilde { C } _ { i j }$ between models $\mathcal { M } _ { i } , \mathcal { M } _ { j }$ , from a dataset of models $\mathcal { D } _ { \mathrm { s h a p e s } }$ . These initial maps are given together with the ground truth P2P maps $\Pi _ { i j } ^ { * }$ , from which we generate during pre-processing the corresponding ground truth functional maps $C _ { i j } ^ { * } = \Phi _ { i } ^ { \dag } \Pi _ { i j } ^ { * } \Phi _ { j }$ . The training is done solely in the functional space, namely only $C _ { i j } ^ { * }$ and $\widetilde { C } _ { i j }$ are used during training.
The training and inference procedures are illustrated in Figure 2.
# 2.3. Training
To simulate real-world scenarios where only noisy initial maps are available, we compute initial functional maps $\widetilde { C } _ { i j }$ using descriptor-based techniques (e.g., WKS [ASC11] or SHOT [TSDS10] descriptors). Thus, each training example consists of a pair $( \widetilde { C } _ { i j } , C _ { i j } ^ { * } )$ , where $\widetilde { C } _ { i j }$ serves as the conditioning input and $C _ { i j } ^ { * }$ is the supervision target. We follow the denoising diffusion framework [HJA20], and adopt the EDM-DDPM++ formulation proposed by Karras et al. [KAAL22]. In this formulation, the ground-truth functional map $C _ { i j } ^ { * }$ is corrupted using a diffusion process by adding Gaussian noise with a continuous noise level $\sigma$ :
$$
C _ { i j } ^ { N } = C _ { i j } ^ { * } + \sigma \cdot \epsilon , \quad \log \sigma \sim \mathcal { N } ( \mu _ { p } , \sigma _ { p } ^ { 2 } ) , \quad \epsilon \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } )
$$
where $\mu _ { p }$ and $\sigma _ { p }$ are scalar parameters controlling the center and spread of the noise levels, and $\epsilon \in \mathbb { R } ^ { k _ { 2 } \times k _ { 1 } }$ is a matrix of i.i.d. Gaussian noise with entries $\epsilon _ { a , b } \sim \mathcal { N } ( 0 , 1 )$ , matching the dimensions of the functional map $C _ { i j } ^ { * }$ .
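The corruption step above can be sketched as follows (a minimal illustration; the values of $\mu_p$ and $\sigma_p$ here are example EDM-style settings, not necessarily those used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
k2, k1 = 30, 30
mu_p, sigma_p = -1.2, 1.2                  # example noise-level hyperparameters
C_star = rng.standard_normal((k2, k1))     # stand-in ground-truth functional map

sigma = np.exp(rng.normal(mu_p, sigma_p))  # log sigma ~ N(mu_p, sigma_p^2)
eps = rng.standard_normal((k2, k1))        # eps ~ N(0, I), same shape as C_star
C_noisy = C_star + sigma * eps             # corrupted sample C^N
```

Sampling $\sigma$ log-normally concentrates training on moderate noise levels while still covering both near-clean and heavily corrupted maps.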
Figure 2: An illustration of our (left) training and (right) inference procedures. During training, our pipeline takes as input a random patch of a noisy functional map $C _ { i j } ^ { N }$ , conditioned on a corresponding patch of an initial functional map $\widetilde { C } _ { i j }$ and position maps, and outputs the denoised patch of the functional map $C _ { i j }$ . At inference time, the patch covers the full-sized image, and we incorporate guidance at each denoising step, including point-to-point guidance and potentially additional regularizers, such as orthogonality and Laplacian commutativity.
Our model $d _ { \theta }$ is trained to directly recover the clean map from the noisy sample $C _ { i j } ^ { N }$ , conditioned on the initial map and noise scale $\sigma$ . We minimize the denoising objective:
$$
\mathbb { E } _ { \substack { ( \mathcal { M } _ { j } , \mathcal { M } _ { i } ) \sim \mathcal { D } _ { \mathrm { s h a p e s } } \\ \epsilon \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } ) \\ \log \sigma \sim \mathcal { N } ( \mu _ { p } , \sigma _ { p } ^ { 2 } ) } } \left[ \lambda ( \sigma ) \left\| C _ { i j } ^ { * } - d _ { \theta } \big ( C _ { i j } ^ { N } \mid \widetilde { C } _ { i j } , \sigma \big ) \right\| _ { 2 } ^ { 2 } \right]
$$
where $\lambda ( \sigma )$ is a noise-dependent scalar weighting function.
See [KAAL22] for further details.
Training is performed entirely in the spectral (functional) domain, without requiring access to explicit P2P supervision. Once trained, the model can generate refined functional maps $C _ { i j }$ from noise, guided by an initial estimate $\widetilde { C } _ { i j }$ , which can then be decoded into dense correspondences using existing recovery techniques.
# 2.4. Inference
At inference time, we are given a new pair of shapes, their corresponding spectral bases $\Phi _ { 1 } , \Phi _ { 2 }$ , and an initial functional map $\widetilde { C } _ { 2 1 }$ . Our goal is to generate a refined functional map $C _ { 2 1 }$ , and recover accurate point-to-point correspondences $\Pi _ { 2 1 }$ .
We initialize a noisy sample $C _ { 2 1 } ^ { t = T } \sim \mathcal { N } ( 0 , \sigma _ { T } ^ { 2 } \mathbf { I } )$ , and iteratively denoise it using our EDM-based model $d _ { \theta }$ . At each step $t = T , \dots , 1$ , the model predicts a cleaner version of the functional map corresponding to the current noise level $\sigma _ { t }$ , and the sample is updated accordingly:
$$
C _ { 2 1 } ^ { t - 1 } = C _ { 2 1 } ^ { t } + ( \sigma _ { t - 1 } ^ { 2 } - \sigma _ { t } ^ { 2 } ) \frac { d _ { \theta } ( C _ { 2 1 } ^ { t } \mid \widetilde { C } _ { 2 1 } , \sigma _ { t } ) - C _ { 2 1 } ^ { t } } { \sigma _ { t } ^ { 2 } }
$$
This update rule follows the deterministic sampling procedure proposed in [KAAL22]. In practice, we use the full EDM sampler with noise perturbation and second-order correction for improved stability. The final output $C _ { 2 1 } ^ { 0 }$ is the refined functional map $C _ { 2 1 }$ .
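The iterative denoising loop can be sketched as below. This is a simplified first-order version; as stated above, in practice the full EDM sampler adds noise perturbation and a second-order correction. The `denoiser` stub stands in for the trained network $d_\theta$ and simply returns its input so the sketch is runnable:

```python
import numpy as np

def denoiser(C_t, C_init, sigma_t):
    # Placeholder for the trained network d_theta(C_t | C_init, sigma_t).
    return C_t

def refine(C_init, sigmas, rng):
    """sigmas: decreasing noise schedule sigma_T > ... > 0, ending at 0."""
    C = rng.standard_normal(C_init.shape) * sigmas[0]  # C^{t=T} ~ N(0, sigma_T^2 I)
    for t in range(len(sigmas) - 1):
        s_t, s_prev = sigmas[t], sigmas[t + 1]
        d = denoiser(C, C_init, s_t)
        # Update rule from the equation above.
        C = C + (s_prev**2 - s_t**2) * (d - C) / s_t**2
    return C

rng = np.random.default_rng(2)
C_init = np.zeros((20, 20))
sigmas = np.geomspace(80.0, 1e-3, 18).tolist() + [0.0]
C_refined = refine(C_init, sigmas, rng)
```

Guidance losses (Section 2.6) are injected inside this loop, adjusting `C` at each step before the next denoiser call.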
# 2.5. Efficient Training via Patch-based Diffusion
Diffusion models typically require large amounts of training data, but in the case of functional maps, available datasets are relatively small. To improve training efficiency and data effectiveness, we adopt a patch-wise training strategy $[ \mathrm { W J } Z ^ { * } 2 3 ]$ .
Rather than denoising full functional maps, we train on smaller patches $C _ { i j } ^ { ( p ) } \in \mathbb { R } ^ { s \times s }$ randomly cropped from the ground-truth functional map $C _ { i j } ^ { * }$ , with patch size $s$ sampled stochastically or progressively. Each patch is denoised independently, conditioned on the corresponding region from the initial map $\widetilde { C } _ { i j }$ , its spatial location $( a , b )$ , and the noise level. We concatenate two additional channels encoding the normalized position of each pixel within the full map to the patch input.
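A minimal sketch of the patch construction (the channel layout and normalization convention here are our own illustration, not necessarily the paper's exact implementation):

```python
import numpy as np

def make_patch(C, a, b, s):
    """Crop an s x s patch at (a, b) and append two channels holding the
    normalized (row, col) position of each pixel within the full map."""
    k2, k1 = C.shape
    patch = C[a:a + s, b:b + s]
    rows = (np.arange(a, a + s) / (k2 - 1))[:, None] * np.ones((1, s))
    cols = (np.arange(b, b + s) / (k1 - 1))[None, :] * np.ones((s, 1))
    return np.stack([patch, rows, cols], axis=0)  # (3, s, s)

C = np.arange(64.0).reshape(8, 8)   # toy 8 x 8 "functional map"
x = make_patch(C, a=2, b=3, s=4)
```

The position channels let the network know which block of the functional map it is denoising, since different spectral bands of the map behave very differently.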
Figure 3: Refinement without Guidance (WKS). We map a function $f _ { 1 } \in \mathbb { R } ^ { n _ { 1 } }$ defined on $\mathcal { M } _ { 1 }$ to $\mathcal { M } _ { 2 }$ , using the initial, FRIDU refined, and ground-truth functional maps. Here, we map the function $\Phi _ { 1 } ^ { \dagger } f _ { 1 }$ using the functional map matrices to get $\tilde { f } _ { 2 }$ , and show $\Phi _ { 2 } \tilde { f } _ { 2 }$ on $\mathcal { M } _ { 2 }$ . Note that in this figure only, our refined map does not include guidance in inference, in order to isolate and illustrate the refinement ability of the base model. The top row visualizes the source and mapped functions, and the bottom row shows the corresponding functional map matrices. We observe improvement in our refined map in both the mapped function and the functional map matrix.
Figure 4: Refinement with P2P Guidance (WKS). We show the pointwise mapping $\Pi _ { 2 1 }$ extracted from the functional maps in Figure 3, mapping a function $f _ { 1 } \in \mathbb { R } ^ { n _ { 1 } }$ from $\mathcal { M } _ { 1 }$ to $\mathcal { M } _ { 2 }$ using $\Pi _ { 2 1 } f _ { 1 }$ . We show the pointwise maps obtained without guidance (center) and with P2P guidance (right). Note the improvement in the pointwise map extracted from the FRIDU refined functional map compared to the initial map, and the significant improvement when adding guidance. The plot shows the average normalized Euclidean error over the Michael dataset, for the initial and refined maps. The shaded region shows the standard deviation around the average.
Patch locations $( a , b )$ are sampled uniformly from the valid crop positions within the full functional map domain, denoted $( a , b ) \sim$ $\mathcal { U } ( \mathcal { R } )$ . Patch sizes $s$ are drawn from a predefined schedule $p ( s )$ .
The model is trained to minimize the patch-level denoising loss:
$$
\mathbb { E } _ { \substack { ( a , b ) \sim \mathcal { U } ( \mathcal { R } ) \\ s \sim p ( s ) \\ \log \sigma \sim \mathcal { N } ( \mu _ { p } , \sigma _ { p } ^ { 2 } ) \\ \epsilon \sim \mathcal { N } ( 0 , 1 ) ^ { s \times s } } } \left\| d _ { \theta } \big ( C _ { i j } ^ { ( p ) } + \sigma \cdot \epsilon \,\big|\, \widetilde { C } _ { i j } ^ { ( p ) } , a , b , s , \sigma \big ) - C _ { i j } ^ { ( p ) } \right\| _ { 2 } ^ { 2 }
$$
To encourage the model to learn both local details and global structure, we vary the patch size during training and occasionally include full functional maps. Inference is performed on full maps without modification to the model architecture.
# 2.6. Guidance
Point to Point Map Extraction. The conditional diffusion model produces plausible functional maps $C _ { 2 1 }$ that resemble the training distribution (see Figure 3). However, we also need to compute the P2P map $\Pi _ { 2 1 }$ . In general, since the functional maps are given in a reduced spectral basis (and thus are smoothed versions of the P2P maps), there are many possible corresponding P2P maps (see e.g. the discussion in Ezuz et al. [EBC17]). Hence, some regularization is required to resolve this ambiguity. A minimal additional regularizing requirement is that smooth functions on $\mathcal { M } _ { 1 }$ remain smooth on $\mathcal { M } _ { 2 }$ after the map, which is formalized as $\Pi _ { 2 1 } \Phi _ { 1 } \in \mathrm { s p a n } ( \Phi _ { 2 } )$ . Combined with the requirement that $\| C _ { 2 1 } - \Phi _ { 2 } ^ { \dagger } \Pi _ { 2 1 } \Phi _ { 1 } \|$ is small, this leads to the optimization problem [EBC17]:
$$
\Pi _ { 2 1 } ( C _ { 2 1 } ) = \underset { \Pi _ { 2 1 } \in \mathcal { P } _ { 2 1 } } { \arg \operatorname* { m i n } } \ : \lVert \Phi _ { 2 } C _ { 2 1 } - \Pi _ { 2 1 } \Phi _ { 1 } \rVert _ { \mathcal { M } _ { 2 } } ^ { 2 } ,
$$
where $\mathcal { P } _ { 2 1 }$ is the set of valid maps, i.e. binary row stochastic matrices of dimension $n _ { 2 } \times n _ { 1 }$ . This is easily optimized using a nearest neighbor search between the rows of $\Phi _ { 2 } C _ { 2 1 }$ and the rows of $\Phi _ { 1 }$ [EBC17, Sec 4.2].
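The nearest-neighbor extraction can be sketched as follows. A brute-force argmin is used here for clarity (a k-d tree such as scipy's `cKDTree` is the practical choice for large meshes), and the mass matrix is taken as identity for simplicity:

```python
import numpy as np

def extract_p2p(C21, Phi1, Phi2):
    """For each vertex of M2, match the corresponding row of Phi_2 C_21
    to the nearest row of Phi_1 (Euclidean nearest neighbor)."""
    emb2 = Phi2 @ C21                                        # (n2, k1)
    d2 = ((emb2[:, None, :] - Phi1[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                                 # T_21: M2 -> M1 indices

# Sanity check: if M2's basis rows are a permuted subset of M1's and C21 = I,
# the extracted map recovers that permutation exactly.
rng = np.random.default_rng(3)
n1, k = 60, 8
Phi1 = rng.standard_normal((n1, k))
perm = rng.permutation(n1)[:50]
Phi2 = Phi1[perm]
T21 = extract_p2p(np.eye(k), Phi1, Phi2)
assert (T21 == perm).all()
```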
Point to Point Guidance. Extracting the P2P map that corresponds to a refined functional map does not, however, result in a high quality map (see Figure 4). Thus, at inference, we would like to incorporate a structural constraint that the computed functional map corresponds to a P2P map.
Diffusion models allow such structure to be injected via guidance at inference time $[ \mathrm { C K M } ^ { * } 2 2 , \mathrm { S Z Y } ^ { * } 2 3 , \mathrm { B C S } ^ { * } 2 3 ] ,$ without requiring retraining. In our case, we wish to guide the model toward refined maps that produce accurate point-to-point correspondences, hence a natural guidance loss would be to use Eq. (5):
$$
\| \Phi _ { 2 } C _ { 2 1 } ^ { t } - \Pi _ { 2 1 } \big ( C _ { 2 1 } ^ { t } \big ) \Phi _ { 1 } \| _ { \mathcal { M } _ { 2 } } ^ { 2 } .
$$
However, directly differentiating through the solution of this optimization is computationally expensive due to the nearest neighbor computation for $n _ { 2 }$ points in $\mathbb { R } ^ { k _ { 1 } }$ which is required for computing $\Pi _ { 2 1 }$ .
Instead, at each step, we solve for $\Pi _ { 2 1 }$ but stop gradients through it — treating it as a fixed prediction. Hence, our P2P loss for guidance is:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { P 2 P g } } ( C _ { 2 1 } ^ { t } ) = \Vert \Phi _ { 2 } C _ { 2 1 } ^ { t } - \Pi _ { 2 1 } \Phi _ { 1 } \Vert _ { \mathcal { M } _ { 2 } } ^ { 2 } . } \end{array}
$$
Although $\Pi _ { 2 1 }$ is treated as constant during backpropagation, the iterative nature of the denoising process ensures that updates to $C _ { 2 1 }$ affect future $\Pi _ { 2 1 }$ values. This indirect influence allows the guidance signal to propagate across timesteps, effectively steering the generation toward spectrally and geometrically consistent solutions. Our final P2P map is $\Pi _ { 2 1 } ( C _ { 2 1 } ^ { 0 } )$ . Figure 4 shows the resulting P2P map when using this guidance loss. Note the considerable improvement compared to inference without P2P guidance.
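One guidance step with the stop-gradient trick can be sketched in plain numpy, since holding $\Pi_{21}$ fixed makes the gradient of $\mathcal{L}_{\mathrm{P2Pg}}$ analytic (identity mass matrix and an illustrative step size here; in the full method this step runs inside the denoising loop):

```python
import numpy as np

def guidance_step(C, Phi1, Phi2, lr=2e-3):
    """Extract Pi_21 by nearest neighbors, freeze it, and take one gradient
    step on ||Phi_2 C - Pi_21 Phi_1||_F^2 with respect to C."""
    emb2 = Phi2 @ C
    d2 = ((emb2[:, None, :] - Phi1[None, :, :]) ** 2).sum(-1)
    Pi = np.zeros((Phi2.shape[0], Phi1.shape[0]))
    Pi[np.arange(Phi2.shape[0]), d2.argmin(axis=1)] = 1.0  # "stop-grad": constant below
    R = Phi2 @ C - Pi @ Phi1
    grad = 2.0 * Phi2.T @ R            # analytic gradient, Pi treated as constant
    return C - lr * grad, float((R ** 2).sum())

rng = np.random.default_rng(4)
n1, n2, k = 40, 40, 6
Phi1 = rng.standard_normal((n1, k))
Phi2 = rng.standard_normal((n2, k))
C = rng.standard_normal((k, k))
C1, loss0 = guidance_step(C, Phi1, Phi2)
_, loss1 = guidance_step(C1, Phi1, Phi2)   # loss decreases across steps
```

Because the new $\Pi_{21}$ is recomputed after each update, the loss is monotonically non-increasing for a sufficiently small step, even though no gradients flow through the nearest-neighbor search.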
Task-dependent Guidance. This approach to guidance is general and can accommodate a wide variety of loss functions, including enabling adaptation to new tasks or assumptions at inference time. Such flexibility is a key advantage of diffusion-based refinement, enabling controlled generation without compromising generality. In the experimental section (Table 1), we report results where two classical functional map regularizers are added to the guidance: the orthogonality constraint $L _ { \mathrm { o r t h } } ( C _ { 2 1 } ) = \| C _ { 2 1 } ^ { T } C _ { 2 1 } - I \| _ { F } ^ { 2 }$ (promoting area preserving maps) and the Laplacian commutativity term $L _ { \Delta } ( C _ { 2 1 } ) = \| C _ { 2 1 } \Lambda _ { 1 } - \Lambda _ { 2 } C _ { 2 1 } \| _ { F } ^ { 2 }$ (promoting isometric maps), see e.g. [CB22, RPWO19].
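Both regularizers are straightforward to evaluate on the functional map matrix; a minimal sketch (with toy eigenvalue matrices):

```python
import numpy as np

def L_orth(C):
    """Orthogonality: ||C^T C - I||_F^2, promoting area-preserving maps."""
    return np.linalg.norm(C.T @ C - np.eye(C.shape[1]), "fro") ** 2

def L_lap(C, Lam1, Lam2):
    """Laplacian commutativity: ||C Lam1 - Lam2 C||_F^2, promoting isometries."""
    return np.linalg.norm(C @ Lam1 - Lam2 @ C, "fro") ** 2

k = 5
Lam = np.diag(np.arange(k, dtype=float))  # toy LB eigenvalues, shared by both shapes
I = np.eye(k)
# The identity map between shapes with identical spectra incurs zero penalty.
```

Since both losses are differentiable in $C_{21}$, they can be added to the guidance objective exactly like $\mathcal{L}_{\mathrm{P2Pg}}$, without retraining.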
Figure 5: Functional Map Refinement (SHOT). We show the performance of the pointwise mapping extracted from our refined map alongside the initial and ground-truth mappings. We additionally show the corresponding functional map matrices. Note the noisy appearance of the SHOT-based initial map. The plot shows the normalized Euclidean error over the Michael dataset, where our refined maps consistently outperform the initial maps.
# 2.7. Recursive Refinement
Following the logic of refining a given noisy functional map, we conduct an experiment in which the model is recursively fed its own denoised output during inference to assess whether performance continues to improve. In our experiments, we observe that a single recursive iteration typically enhances accuracy, but applying more than one iteration leads to degradation. We report results for adding one recursive iteration during inference in Table 1. Note that this additional refinement entails a computational trade-off.
# 3. Experimental Results
The following section is organized as follows: In Section 3.1, we evaluate our model's performance when conditioned on initial functional maps computed with different descriptors, and demonstrate its dataset generalization capabilities. In Section 3.2, we compare our method to existing shape matching approaches, and in Section 3.3, we present an ablation study exploring zero-shot condition generalization and guidance parameters.
# 3.1. Functional Map Refinement
To demonstrate the effectiveness of our refinement model, we apply it to initial functional maps computed from both WKS [ASC11] and SHOT [STDS14] descriptors in a classical way; see details in Appendix B.
Data. We use the Michael meshes from the TACO [PMM24] dataset, which provides remeshed versions of different figures in various poses, along with ground-truth correspondences within each figure. Each mesh contains approximately 50K vertices. The dataset includes 190 ground-truth correspondences between the 20 Michael shapes. We split the dataset into training and testing sets using a 90:10 ratio.
Initial map from WKS. Figure 3 shows an example of mapping a function $f _ { 1 } \in \mathbb { R } ^ { n _ { 1 } }$ defined on $\mathcal { M } _ { 1 }$ to a function on $\mathcal { M } _ { 2 }$ . The upper row shows the function on $\mathcal { M } _ { 1 }$ and its mapping to $\mathcal { M } _ { 2 }$ using (i) the initial map $\widetilde { C } _ { 2 1 }$ , (ii) our refined map $C _ { 2 1 }$ , and (iii) the ground-truth map $C _ { 2 1 } ^ { * }$ . Here we map $\Phi _ { 1 } ^ { \dagger } f _ { 1 }$ using the functional map matrix to get $\tilde { f } _ { 2 }$ , and show $\Phi _ { 2 } \tilde { f } _ { 2 }$ on $\mathcal { M } _ { 2 }$ . Note that in this figure only, our refined version does not include guidance during inference, in order to demonstrate the refinement abilities of the base model on its own. The bottom row presents the corresponding functional map matrices. We observe that our refined map is closer to the ground truth than the initial one, successfully resolving artifacts. Some symmetry flips remain, which we discuss later in this subsection.
Figure 4 illustrates the pointwise map extracted from the corresponding functional maps, comparing our refined mapping obtained with and without incorporating P2P guidance during inference. The bottom row shows the pointwise mapping of $f _ { 1 }$ to $\mathcal { M } _ { 2 }$ using (i) the initial extracted map $\widetilde { \Pi } _ { 2 1 }$ , (ii) our refined extracted map without guidance, and (iii) our refined extracted map with guidance $\Pi _ { 2 1 }$ . The top row shows $f _ { 1 }$ on $\mathcal { M } _ { 1 }$ and its ground-truth mapping on $\mathcal { M } _ { 2 }$ , obtained using $\Pi _ { 2 1 } ^ { * }$ . Here all the pointwise maps are in $\mathbb { R } ^ { n _ { 2 } \times n _ { 1 } }$ , and thus the mapping is done using, e.g., $\Pi _ { 2 1 } f _ { 1 }$ . We additionally show the normalized Euclidean error over the test set for the initial and refined (with guidance) maps. We observe that our approach improves performance both qualitatively and quantitatively.
From now on, whenever we refer to our refined mapping, we mean the version refined with P2P guidance during inference.
Initial Map from SHOT. Figure 5 presents the performance of our model when conditioned on SHOT-based functional maps. The bottom row shows $f _ { 1 }$ on $\mathcal { M } _ { 1 }$ and the (i) initial, (ii) our refined, and (iii) ground-truth pointwise mappings. The top row shows the corresponding functional map matrices and the normalized Euclidean error over the test dataset for both the initial and our refined maps. Note that both the initial pointwise mapping and functional map appear quite noisy, while our refined version resolves many of the artifacts, though some remain. We observe that although our refined maps exhibit higher error and variance compared to the WKS-based maps experiment, our refined maps still outperform the initial ones. See an additional example of SHOT-based map denoising in Figure 1.
Landmark Constraints. If a sparse set of corresponding landmarks is available, it can be easily incorporated in our framework, by adding functional constraints based on these landmarks (e.g. as implemented in pyFM [Mag21]) when computing the functional map. Landmark constraints are useful for improving the quality of the correspondence, especially in the presence of intrinsic symmetries, which lead to non-unique eigendecomposition of the Laplace-Beltrami operator. When only symmetry-invariant descriptors (such as the WKS) are used, symmetry cannot be disambiguated without an additional step of aligning the LB eigenfunctions (see e.g. [ZLG25]), or adding information that disambiguates the symmetry, such as landmarks.
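One simple way such constraints enter the initial map computation can be sketched as follows: spectral coefficients of landmark indicator functions are appended to the descriptor coefficient matrices before solving the least-squares system $C = \arg\min_C \|C A - B\|_F^2$. This mirrors what libraries such as pyFM do, but the unit weighting and delta-function indicators here are our own simplification (descriptors, bases, and landmark indices below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, k, d = 50, 60, 6, 10
Phi1 = rng.standard_normal((n1, k))
Phi2 = rng.standard_normal((n2, k))
D1 = rng.standard_normal((n1, d))     # stand-in descriptors on M1
D2 = rng.standard_normal((n2, d))     # stand-in descriptors on M2
lm1, lm2 = [3, 17, 29], [5, 12, 44]   # corresponding landmark vertex indices

A = np.linalg.pinv(Phi1) @ D1         # (k, d) descriptor coefficients in basis Phi1
B = np.linalg.pinv(Phi2) @ D2
L1 = Phi1[lm1].T                      # (k, #landmarks): coefficients of indicators
L2 = Phi2[lm2].T

A_full = np.hstack([A, L1])           # descriptors + landmark constraints
B_full = np.hstack([B, L2])
C21 = B_full @ np.linalg.pinv(A_full)  # least-squares functional map, (k, k)
```

In practice the descriptor and landmark terms are usually weighted separately, and additional regularizers (such as Laplacian commutativity) are added to the least-squares energy.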
To demonstrate this, we mark 5 landmarks (head, two hands, and two legs) on both source and target shapes and incorporate these landmark constraints into the initial functional map computation. Figure 6 compares three settings: training and evaluating using the WKS-only initial maps (top row), training and evaluating using initial maps computed with landmarks (middle row), and training using the WKS-only initial maps and evaluating using the initial maps with landmarks (bottom row). From left to right, for each setting we show $f _ { 1 }$ on $\mathcal { M } _ { 1 }$ , and its pointwise mapping extracted from the (i) initial, (ii) our refined, and (iii) ground-truth mappings. We observe that the symmetric flip of the legs (see the shaded oval regions) present in the top row is resolved whenever conditioning on initial maps with landmarks (middle and bottom rows).
Figure 6: Landmarks. We show results for three settings: (top) training and evaluating on WKS-based initial maps, (center) training and evaluating on WKS-based initial maps with landmarks, and (bottom) training on WKS-based initial maps and evaluating on WKS-based initial maps with landmarks. We note that symmetry flips are resolved in cases where the model is conditioned on an initial map with landmarks, even without training on such maps (see the shaded regions).
Figure 7 presents the normalized Euclidean error over the test dataset for both the regular and landmark-constrained settings. For each setting, we show the error for the initial and refined extracted pointwise maps. We observe that the landmark-constrained setting improves both the initial and refined map errors compared to the regular setting, with our refined maps outperforming the initial maps in both cases.
Dataset Generalization. To evaluate dataset generalization, we apply the model trained on the Michael dataset, conditioned on a WKS-based initial functional map, to shape pairs of human figures outside this dataset, as well as to a non-human figure from the TACO dataset. The top row of Figure 1 shows results for mapping a function between a pair from the FAUST dataset, while the bottom row shows mapping results for a pair of wolf shapes from TACO. Note we use the remeshed version of FAUST. From left to right, each row shows (i) the normalized Euclidean error graph of the initial and FRIDU refined maps, (ii) the source and pointwise mapping of $f _ { 1 }$ using the initial, FRIDU refined, and ground-truth mappings, and (iii) the corresponding functional map matrices. The average initial and final errors for the top row (human) are 0.19 and 0.05, respectively, and for the bottom row (wolf) are 0.21 and 0.03, respectively. Note that despite being trained on the small Michael dataset, our model generalizes well and significantly improves the quality of the initial maps, both quantitatively and qualitatively, on these unseen shape categories.
Figure 7: Landmarks. The normalized Euclidean error over the Michael dataset for the regular ("refined") and landmarksconstrained ("refined_l") settings. For each setting, we also report the error of the initial maps ("initial" and "initial_l", respectively). Note that incorporating landmark constraints improves the accuracy of both the initial and refined maps, with our refined maps consistently outperforming the initial maps in both settings.
# 3.2. Shape Matching Comparisons
Since deep learning-based functional map refinement is less established, we position our method within the broader context of deep learning for shape matching. Most existing approaches consist of a learned feature extractor followed by functional map computation, typically optimized with geometric regularizers. Our method is compatible with both supervised and unsupervised pipelines, as it operates independently of how the initial functional map is obtained. To isolate the effect of the refinement step, we use as input the functional map produced by the feature extractor of a state-of-the-art method, DiffZO. Notably, DiffZO includes a built-in refinement stage, allowing for a direct comparison between their refinement and ours under the same initial conditions. This setup also demonstrates our model’s ability to refine functional maps originating from different sources.
We train two models — one on FAUST [BRLB14] and the other on FAUST + SCAPE [ASK*05] — and evaluate them on the FAUST, SCAPE and SHREC'19 [MMR*19] datasets, using the remeshed version [RPWO18] of each dataset. We used the same train/test split as in the DiffZO experiments. Specifically, for the FAUST remeshed dataset we used 80 pairs for training and 20 for testing, and for the SCAPE remeshed dataset we used 51 for training and 20 for testing. The SHREC'19 dataset was only used for testing.
Table 1 reports the geodesic error $\times 1 0 0$ for axiomatic, supervised, and unsupervised methods. Note that the table presents results for our base pipeline, as well as for three variants: one using a recursive refinement operator (as described in Sec. 2.7), and another incorporating two common functional map regularizers into the guidance. We also report the error of the initial functional map given as input to our model. Our method consistently improves the initial mapping and outperforms DiffZO in intra-dataset evaluations, while maintaining comparable results in cross-dataset evaluations relative to supervised methods. We emphasize that our primary point of comparison is DiffZO, as our model is conditioned on maps generated using its feature extractor.
Among the axiomatic refinement methods listed, SmoothShells [ELC20] requires access to the full shape geometry during the refinement process — a requirement our method avoids. The DiscreteOp approach [RMWO21] is tailored to minimizing a given functional map energy while keeping the maps proper. In contrast, our method does not rely on functional map objectives and, except for the spectral guidance, operates purely in image space. However, we show that regularization can be incorporated into the guidance when needed, without additional training. The method most closely related to ours is ZoomOut [MRR*19], which has been shown to outperform BCICP [RPWO18] in both accuracy and runtime.
In Figure 8, we compare our method to ZoomOut on the three models presented in Sect. 3.1 (i.e. initial map computation using WKS, WKS+landmarks and SHOT), trained on the Michael shapes. Leveraging the flexibility of our guidance framework, we experiment with incorporating spectral upsampling during inference – that is, progressively increasing the dimension of the functional map used to compute the pointwise correspondence throughout the denoising process. With upsampling our method performs comparably to ZoomOut in terms of accuracy. Furthermore, since the Michael meshes consist of approximately 50K vertices, the computational cost of ZoomOut is quite high ($\sim 600$ seconds per pair), while our inference procedure takes only $\sim 60$ seconds.
Table 1: Mean geodesic errors $( \times 100 )$ when training and testing on the FAUST, SCAPE and SHREC'19 datasets. Best result within each method category (axiomatic, supervised, and unsupervised methods) is shown in bold. We consider DiffZO as part of the unsupervised category. The axiomatic and supervised methods are from [SMJ*23b] and the unsupervised methods are from [MO24].
Figure 9 shows a qualitative comparison between our approach and DiffZO when training on FAUST and generalizing to SHREC'19. We show texture transfer by transferring the texture coordinates generated on $M _ { 1 }$ to the other meshes using the corresponding point-to-point map. Since all the maps are vertex-to-vertex, which leads to highly noisy texture transfer, we first apply a single iteration of RHM [EBC17] to all the maps. We note that the initial map that both methods start from is quite noisy, and our approach (with upsampling) and DiffZO achieve comparable results. It is possible that a better initialization method may lead to improved results.
# 3.3. Ablation
Zero-Shot Condition Generalization. We test our model's ability to generalize to different sources of initial maps by refining WKS-based initial maps using the model trained on SHOT-based mappings, and vice versa. Figure 10 shows two examples of WKS-based maps refined using the SHOT-based trained model (Section 3.1), and two examples of SHOT-based maps refined using the WKS-based trained model (Section 3.1). For each example, we show the pointwise mapping of $f _ { 1 }$ and the corresponding functional map matrices. We note that in both cases, our model manages to refine the initial map, but the results of refining SHOT-based maps using the WKS-based model are significantly better.
Guidance Parameters. We perform an ablation study on the parameters of the guidance algorithm proposed in [BCS*23]. These are: the number of gradient steps for backward guidance $m$ , the number of recurrent steps $k$ , and the guidance strength parameter $s$ . We refer to the paper for more details. In all our experiments, we set $m = 2$ , $k = 5$ , and $s = 500$ . For cross-dataset evaluations in Section 3.2, we find that increasing $s$ to 2000 improves performance.
Figure 11 shows the performance of the model when inference is performed with different values of these guidance parameters. We use the model from Section 3.1, trained on WKS-based functional maps between Michael shape pairs, and evaluate it on WKS-based functional maps between horse shape pairs from TACO. The top row shows $f _ { 1 }$ , and the initial and our refined pointwise mapping using the default parameters $m = 2 , k = 5 , s = 500$ . The next three rows show ablations for $m$ , $k$ , and $s$ , each with the other two parameters fixed. Table 2 provides the inference time in seconds and the normalized Euclidean error for each set of parameters. In general, we observe that increasing $m$ and $k$ increases inference
Figure 8: Comparison to ZoomOut. For each of the three settings (WKS, WKS + Landmarks, SHOT), the plots show the normalized Euclidean error of the initial, refined, and ZoomOut maps, without (top) and with (bottom) spectral upsampling.
Figure 9: A qualitative example comparing the map improvement obtained by our approach and by DiffZo [MO24]. For a pair of shapes from SHREC19, we show the resulting maps when both methods were trained on Faust (third column in Table 1). We show, from left to right: $M _ { 1 }$ , the initial map, our result with orthogonality guidance (FRIDU), our result with orthogonality guidance and upsampling, the DiffZo result and the ground truth. We note that the initial map is very noisy, and FRIDU with upsampling achieves a very similar result to DiffZo despite the difficult initialization.
time but improves performance only up to a point. The parameter $s$ exhibits a balance point, where values that are too low or too high degrade performance. Overall, we find that our chosen parameters achieve a good trade-off between inference time and error in this example.
# 4. Limitations
As a refinement method, our approach is somewhat sensitive to the initial map. For example, as we show in Figure 9, given a very bad initialization, our approach, while improving the map, does not fully reach the optimal solution. This is further exacerbated by training on FAUST and testing on SHREC19, which leads to additional generalization challenges, as can be seen in Table 1. In addition, we achieve only comparable performance to ZoomOut when starting from the same initial map. However, in a setup where many maps between meshes from the same class need to be improved (e.g., if we want to compute all the pairwise maps for a large dataset of meshes), our approach can offer a considerable speedup by first training on a subset of the dataset, and then efficiently testing on the rest of the shapes. Finally, we have only experimented with square functional maps, whereas in practice one may prefer to use rectangular maps (e.g., using more basis functions on the target than on the source). We note that our algorithm extends naturally to this case, and all the guidance objectives can be used as-is. Yet we leave further validation to future work.
Table 2: Guidance Parameters. Inference time and mean error for different values of the guidance parameters $m$, $k$, and $s$ from [BCS*23]. The best result in each column is shown in bold. We note that our chosen parameters (first row) provide a good balance between inference time and accuracy.
# 1 Introduction
State-of-the-art reinforcement learning (RL) algorithms, such as Proximal Policy Optimization (PPO) [1] and Soft Actor-Critic (SAC) [2], are typically built on the assumption that the environment can be modeled as a Markov Decision Process (MDP). This framework implicitly assumes that the agent observes the current state instantaneously, selects an action without delay, and executes it immediately.
However, this assumption often breaks down in real-world systems due to interaction delays. These delays arise from various sources: the time taken for sensors to collect and transmit observations, the computation time needed for the agent to select an action, and the transmission and actuation delay when executing that action in the environment (as illustrated in Figure 1). Delays pose no issue if the state of the environment is not evolving between its observation and the execution of the selected action. But in continuously evolving systems—such as robots operating in the physical world—the environment’s state may have changed by the time the action is executed [3]. Delays have been recognized as a key concern when applying RL to cyber-physical systems [4]. Outside the scope of RL, delays have also been studied in classic control [5, 6].
Figure 1: Illustration of a setup affected by interaction delays. Any delay between the embedded system and the excited system is considered negligible or otherwise accounted for. The factors contributing to interaction delay are $\tau_{\mathrm{observe}}$ ($\tau_{\mathrm{o}}$), $\tau_{\mathrm{compute}}$ ($\tau_{\mathrm{c}}$), and $\tau_{\mathrm{apply}}$ ($\tau_{\mathrm{a}}$). See Section 3.1 for more details about these factors.
These interaction delays can be implicitly modeled by altering the transition dynamics of the MDP to form a partially observable Markov decision process (POMDP), in which the agent only receives outdated sensor observations. While this approach is practical and straightforward, it limits the agent’s access to information about the environment’s evolution during the delay period.
Another common approach to handling delays in RL is to enforce that actions are executed after a fixed delay [7, 8, 9, 10, 11, 12, 13]. This is typically implemented by introducing an action buffer between the agent and the environment, ensuring that all actions are executed after a predefined delay. However, this method requires prior knowledge of the maximum possible delay and enforces that all actions incur this worst-case delay—even when most interactions in practice experience minimal or no delay. The advantage of this fixed-delay approach is that it provides the agent with perfect information about when its actions will take effect, simplifying decision-making. However, it is overly conservative and fails to adapt and account for variability in delay. Note that state-of-the-art algorithms for delayed MDPs, such as BPQL [14] and VDPO [15], rely on this fixed-delay paradigm.
Moving beyond this fixed-delay framework is challenging, especially because in real-world systems, delays are often unobservable. The agent does not know, at decision time, how long it will take for an action to be executed. One existing approach that attempts to address varying delays is DCAC [16], but it assumes perfect knowledge of the delay for each action at the time it is chosen. This assumption is rarely practical.
In this paper, we make the following contributions:
(i) We introduce a novel framework, the interaction layer, which allows agents to adapt to randomly varying delays—even when these delays are unobservable. In this setup, the agent generates a matrix of candidate actions ahead of time, each intended for a possible future execution time. Specifically, the design handles both (a) future actions having varying delays, and (b) action packets sent over a network being lost or arriving out of order. The actual action is selected at the interaction layer once the delay is revealed. This approach enables informed decision-making under uncertainty and robust behavior in the presence of stochastic, unobservable delays (Section 3).
(ii) We develop a new model-based reinforcement learning algorithm, Actor-Critic with Delay Adaptation (ACDA), which leverages the interaction layer to adapt dynamically to varying delays. The algorithm rests on two key ideas: (a) instead of using states as input to the policy, it uses a distribution of states as an embedding that enables the generation of more accurate time series of actions, and (b) an efficient heuristic to determine which of the previously generated actions are executed. These actions are needed to compute the state distributions. The approach is particularly efficient when delays are temporally correlated, as often occurs when communicating over transmission channels (Section 4).
(iii) We evaluate ACDA on a suite of MuJoCo locomotion tasks from the Gymnasium library [17], using randomly sampled delay processes designed to mimic real-world latency sources. Our results show that ACDA, equipped with the interaction layer, consistently outperforms state-of-the-art algorithms designed for fixed delays. It achieves higher average returns across all benchmarks except one, where its performance remains within the standard deviation of the best constant-delay method (Section 5).
# 2 Related Work
To our knowledge, there is no previous work that explicitly allows agents to make informed decisions under random unobservable delays in RL.
Much of the existing work on handling delays in RL treats delays as constant and equal to $h$, in which case the problem can be modeled as an MDP with augmented state $\left( s _ { t } , a _ { t } , a _ { t + 1 } , \ldots , a _ { t + h - 1 } \right)$ consisting of the last observed state and memorized actions to be applied in the future [18]. Even if the true delay is not constant, a construction used in previous work is to enforce a constant interaction delay through action buffering, under the assumption that the maximum delay does not exceed $h$ time-steps.
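As a minimal illustration of this construction (the class name `ConstantDelayBuffer` and its API are our own, not taken from the cited works), an action buffer that enforces a constant delay of $h$ steps and exposes the augmented state can be sketched as:

```python
from collections import deque

class ConstantDelayBuffer:
    """Action buffer enforcing a constant delay of h steps: the action chosen
    at step t is applied at step t + h. The augmented state
    (s_t, a_t, ..., a_{t+h-1}) is the last observation plus pending actions."""
    def __init__(self, h, default_action):
        self.h = h
        self.pending = deque([default_action] * h, maxlen=h)

    def step(self, new_action):
        """Queue the newly chosen action; return the action due for execution."""
        executed = self.pending.popleft()
        self.pending.append(new_action)
        return executed

    def augmented_state(self, observation):
        """The augmented MDP state: observation plus the h pending actions."""
        return (observation, tuple(self.pending))
```

For the first $h$ steps, the default action is executed while the buffer fills; thereafter, every chosen action takes effect exactly $h$ steps later, which is precisely the worst-case conservatism discussed above.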
By maintaining an action buffer that includes future actions to be applied, one may in principle use existing RL techniques to deal with constant delays. However, directly learning policies on augmented states is hard in practice, which has prompted the development of algorithms that make use of the delayed dynamics. The real-time actor-critic by Ramstedt and Pal [9] leverages the augmented state to optimize for a constant delay of one time step. A planning-based approach, called delay-aware trajectory sampling and proposed by Chen et al. [10], uses probabilistic ensembles with trajectory sampling [19] to plan further into the future using memorized actions from the augmented state together with a learned dynamics model. Earlier work by Firoiu, Ju, and Tenenbaum [8] learned a model of the dynamics to perform state-rollouts to use as input for the policy. Recently, Wang et al. [20] looked at variations in model structures, both for policy and critic inputs.
Recent work by Kim et al. [14] describes belief projection-based Q-learning (BPQL) that explicitly uses the delayed dynamics under constant delay to simplify the critic learning. This work shows the ability to achieve good performance during evaluation over longer delays, despite a simple structure of the learned functions. Our algorithm in Section 4.3 uses the same critic simplification, but applied to the randomly delayed setting.
Another approach explored for constant-delay RL is that of having a delayed agent imitate an undelayed expert, which is employed in algorithms such as DIDA [11] and VDPO [15]. These assume access to the undelayed MDP, which is feasible in sim-to-real scenarios but not when training directly on the real physical system.
Work by Bouteiller et al. [16] also uses action buffering to reason about random delays, but assumes that delays are observable at the time when actions are being generated. In contrast, our work does not depend on such a strong assumption, i.e., we only require that delays are available in hindsight for actions that were applied to the underlying system (see Section 3.1).1
One method of making informed decisions is to use a learned dynamics model of the system, for example to plan into future horizons or to estimate future states as policy inputs [7]. A commonly used dynamics model architecture is the recurrent state space model (RSSM) [21], which combines a recurrent latent state with stochastically sampled states as inputs when transitioning in latent space. RSSM is designed to be used for planning algorithms, but can also be used for state prediction. The model used by Firoiu, Ju, and Tenenbaum [8] is similar to RSSM, but uses deterministic output of states from the latent representation. Another approach using RSSM is DREAMER [22], which learns a latent state representation for the agent to make decisions in, but in an undelayed setting.
Our approach also learns a model to make decisions in latent space but does not follow the RSSM structure. Instead, our model (introduced in Section 4.2) follows a simpler structure that learns a latent representation describing an actual distribution over states rather than uncertainty about an assumed existing true state. By the definitions of [23], our model classifies as a multi-step prediction model with state abstraction, even though we are only estimating distributions.
# 3 The Interaction Layer
In this section, we explain how random and unpredictable delays may affect the interaction between the agent and the system. To handle these delays, we introduce a new framework, called the interaction layer (Section 3.2), and model the way the agent and the system interact by a Partially Observable MDP (Section 3.3).
# 3.1 Delayed Markov decision processes
We consider a controlled dynamical system modeled as an MDP $\mathcal { M } = ( S , A , p , r , \mu )$, where $S$ and $A$ are the state and action spaces, respectively, $p ( s ^ { \prime } | s , a )$ represents the system dynamics in the absence of delays, $r$ is the reward function, and $\mu$ is the distribution of the initial state.
As in usual MDPs, we assume that at the beginning of each step $t$, the state of the system is sampled, but this information does not reach the agent immediately; it arrives only after an observation delay, $\tau _ { \mathrm { o } }$. After the agent receives the information, an additional computational delay, $\tau _ { \mathrm { c } }$, occurs due to processing the information and deciding on an appropriate action. The action created by the agent is then communicated to the system, with an additional final delay $\tau _ { \mathrm { a } }$ before this action can be applied to the system. The delays $\left( \tau _ { \mathrm { o } } , \tau _ { \mathrm { c } } , \tau _ { \mathrm { a } } \right)$ are random variables that may differ across steps and can be correlated. These delays are unobservable to the agent.
# 3.2 Handling delays via the interaction layer
The unpredictable delays pose significant challenges from the agent’s perspective. First, the agent cannot respond immediately to the newly observed system state at each step. Second, the agent cannot determine when the selected action will be applied to the system. To address these issues, we introduce the interaction layer, consisting of an observer and of an action buffer, as illustrated in Figure 2. The interaction layer is a direct part of the system that performs sensing and actuation, whereas the agent can be far away, communicating over a network. Within the interaction layer, the observer is responsible for sampling the system’s state and sending relevant information to the agent. The agent generates a matrix of possible actions. These are sent back to the interaction layer and stored in the action buffer. Depending on when the actions arrive in the action buffer, it selects a row of actions, which are then executed in the following steps if no further decision is received. The rest of this section gives technical details of the interaction layer, whereas Section 4 details the policy for generating actions at the agent.
Figure 2: Illustration of the interaction layer and how the agent interacts with it from a global perspective. As the observation is received from the dynamical system, the next action is immediately applied from the action buffer. Packets in transit with random delay imply partial observability.
Action packet. After the agent receives an observation packet $o _ { t }$ (generated at step $t$ by the interaction layer, described further below), it generates and sends an action packet $a _ { t }$. The packet includes the time stamp $t$ and a matrix of actions, as follows:
$$
a _ { t } = \left( t , \left[ \begin{array} { c c c c c } a _ { 1 } ^ { t + 1 } & a _ { 2 } ^ { t + 1 } & a _ { 3 } ^ { t + 1 } & \cdots & a _ { h } ^ { t + 1 } \\ a _ { 1 } ^ { t + 2 } & a _ { 2 } ^ { t + 2 } & a _ { 3 } ^ { t + 2 } & \cdots & a _ { h } ^ { t + 2 } \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a _ { 1 } ^ { t + L } & a _ { 2 } ^ { t + L } & a _ { 3 } ^ { t + L } & \cdots & a _ { h } ^ { t + L } \end{array} \right] \right) .
$$
The $i$-th row of the matrix of the action packet corresponds to the sequence of actions that would constitute the action buffer if the packet reaches the interaction layer at time $t + i$. The reason for using a matrix instead of a vector is that subsequent columns specify which actions to take if a new action packet does not arrive at the interaction layer at a specific time step. For instance, if an action packet arrives at time $t + 2$, then the interaction layer uses the first action in the buffer ($a _ { 1 } ^ { t + 2 }$ in this case). That is, the first column is always used when a new packet arrives at each time step. If no packet arrives for a specific time step, the other columns are used instead (as explained below in the description of the action buffer). This approach enables adaptivity for the agent: it can generate actions for specific delays without knowing what the delay is going to be ahead of time. Figure 3 illustrates an action packet arriving at the interaction layer and a row being inserted into the action buffer (the 3rd row in this case, because the packet arrived with a delay of 3).
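To make the row-selection semantics concrete, here is a small illustrative sketch (the function names are our own, not from the paper); the interaction layer picks the buffer row matching the delay it observes on arrival:

```python
def make_action_packet(t, actions):
    """Action packet: timestamp t plus an L x h matrix of actions.
    Row i (1-indexed, as in the paper) is the buffer to install if the
    packet arrives at the interaction layer at time t + i."""
    return (t, actions)

def on_packet_arrival(packet, arrival_time):
    """Interaction-layer side: compute the delay from the timestamp and
    select the matching buffer row from the action matrix."""
    t, matrix = packet
    delay = arrival_time - t          # delta = arrival time - timestamp
    row = matrix[delay - 1]           # row for delay i is matrix[i - 1]
    return delay, list(row)
```

The usage below mirrors the example of Figure 3: a packet timestamped $u = 17$ arriving at time 20 has delay 3, so its third row is installed.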
[Figure 3 graphic: the action packet $a _ { u }$ with timestamp $u = 17$ and its $L \times h$ action matrix arrive at the interaction layer at time $t = 20$; the row corresponding to the delay $\delta = t - u = 3$ is inserted into the action buffer.]
Figure 3: Example: Suppose an action packet timestamped by the agent with time $u = 17$, $a _ { u }$, arrives at the interaction layer at time 20. Then, at time $t = 20$, $\delta _ { 20 } = 3$, and $c _ { 20 } = 0$. Now, suppose that 2 time units elapse without any new action packet arriving. Then, at time $t = 22$, $\delta _ { 22 } = 3$ and counter $c _ { 22 } = 2$. Hence, the equation $t = u + \delta _ { 22 } + c _ { 22 } = 17 + 3 + 2 = 22$ holds.
Action buffer. The action buffer is responsible for executing an action at each time step. If no new action arrives at a time step, the next item in the buffer is used. At the beginning of step $t$, the action buffer contains the following information: $b _ { t }$, a sequence of $h$ actions to be executed next, and $\delta _ { t }$, the delay of the action packet from which the actions $b _ { t }$ were taken. For instance, if an action packet $a _ { u }$ arrives at the action buffer at time $t$, then $\delta _ { t } = t - u$, where $u$ is the time stamp of the action packet $a _ { u }$ that the agent created. If instead no new action packet arrived at time $t$, then $\delta _ { t } = \delta _ { t - 1 }$. To enable the use of an appropriate action even if no new packet arrives at a specific time step, the content of the buffer is shifted one step forward, as shown in Figure 4. Finally, the action buffer includes a counter $c _ { t }$ that records how many steps have passed since the action buffer was updated. The following invariant always holds: $t = u + \delta _ { t } + c _ { t }$. For a concrete example, see the caption of Figure 3.
Figure 4: Action buffer shifting actions. Final slot is repeated. (Example: horizon $h = 8$ )
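The buffer update, the shift-with-repeated-final-slot behavior, and the invariant $t = u + \delta_t + c_t$ can be sketched as follows (an illustrative toy implementation; the class name, the `tick` API, and the assumption that the first tick delivers a packet are ours):

```python
class ActionBuffer:
    """Interaction-layer action buffer. Each step either installs a row from a
    newly arrived packet (resetting delta and the counter c) or shifts the
    buffer forward, repeating the final slot. Invariant: t = u + delta + c."""
    def __init__(self, h, default_action):
        self.b = [default_action] * h   # b_t: the next h actions
        self.delta = 0                  # delay of the packet b_t came from
        self.c = 0                      # steps since the buffer was updated
        self.u = 0                      # timestamp of that packet

    def tick(self, t, arrived_packet=None):
        """Advance one step; return the action executed at step t.
        (In this sketch the first tick is assumed to deliver a packet.)"""
        if arrived_packet is not None:
            u, matrix = arrived_packet
            self.u, self.delta, self.c = u, t - u, 0
            self.b = list(matrix[t - u - 1])    # row for the observed delay
        else:
            self.b = self.b[1:] + [self.b[-1]]  # shift; repeat final slot
            self.c += 1
        assert t == self.u + self.delta + self.c  # invariant from Section 3.2
        return self.b[0]
```

Running the Figure 3 scenario (a packet timestamped 17 arriving at time 20, then no packets for three steps) shows the installed row being consumed one action per step, with the final slot repeated once the row is exhausted.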
Observation packet. The observer builds an observation packet $o _ { t }$ at the beginning of step $t$. To this aim, it samples the system state $s _ { t }$, collects information $b _ { t } , \delta _ { t } , c _ { t }$ about the action buffer, forms the observation packet $o _ { t } = ( t , s _ { t } , b _ { t } , \delta _ { t } , c _ { t } )$, and sends it to the agent.
Enhancing the information contained in the observation and action packets (compared to the undelayed MDP scenario) allows the agent to make more informed decisions and ensures the system does not run out of actions when action packets experience delays. However, this is insufficient to model our delayed system as an MDP. This is because the agent does not have knowledge of all the observation and action packets currently in transit. Therefore, we use the formalism of a POMDP to accurately describe the system dynamics.
# 3.3 The POMDP model
Next, we complete the description of our delayed MDP and model it as a POMDP. To this aim, we remark that the system essentially behaves as if, in each step $t$, the agent immediately observes $o _ { t }$ and selects an action packet $a _ { t }$ that arrives at the interaction layer $d _ { t }$ steps after the observation $o _ { t }$ was made, where $d _ { t } > 0$. We assume that $d _ { t }$ is generated according to some distribution $D$. Furthermore, we assume that observation packets $o _ { t }$ arrive in order at the agent. In this framework—where the agent selects an action packet $a _ { t }$ as soon as the observation $o _ { t }$ is generated—the single delay $d _ { t }$ replaces the three delays $\left( \tau _ { \mathrm { o } } , \tau _ { \mathrm { c } } , \tau _ { \mathrm { a } } \right)$.
The time step $t$ is a local time tag from the perspective of the interaction layer. Our POMDP formulation does not assume a synchronized clock between the agent and the interaction layer. The agent acts asynchronously and generates an action packet upon receiving an observation packet.
We define $\mathcal { T } _ { t }$ as the set of action packets in transit at the beginning of step $t$, along with the times at which these packets will arrive at the interaction layer; $\mathcal { T } _ { t }$ is a set containing items of the form $( u + d _ { u } , a _ { u } )$. In reality, delays are observed only when action packets reach the interaction layer, and the agent does not necessarily know whether the action packets already generated have reached the interaction layer. Hence we must assume that $\mathcal { T } _ { t }$ is not observable by the agent. The framework we just described corresponds to a POMDP, which we formalize in detail in Appendix C.
# 4 Actor-Critic with Delay Adaptation
This section introduces actor-critic with delay adaptation (ACDA), a model-based RL algorithm that uses the interaction layer to adapt on-the-fly to varying unobservable delays, in contrast to state-of-the-art methods that enforce a fixed worst-case delay. A challenge with varying unobservable delays is that the agent lacks perfect information about the actions to be applied in the future. Indeed, it does not know when recently sent actions will arrive at the interaction layer. ACDA solves this with a heuristic (Section 4.1) that is effective when delays are temporally correlated.
The action sequences ACDA conditions on vary in length depending on the delay for which actions are generated. This lends itself poorly to commonly used policy function approximators in deep RL, such as multi-layer perceptrons (MLPs), that assume a fixed input size. ACDA solves this with a model-based distribution agent (Section 4.2) that embeds the variable-length input into a fixed-size embedding of the distribution over the state to which the generated action will be applied. The fixed-size embeddings are then used as input to an MLP to generate actions. ACDA learns a model of the environment dynamics online to compute these embeddings. Section 4.3 shows how we train ACDA.
# 4.1 Heuristic for Assumed Previous Actions
A problem with unobservable delays is that we do not know when our previously sent action packets will arrive at the interaction layer. This means that we do not know which actions are going to be applied to the underlying system between generating the action packet and its arrival at the interaction layer. A naive approach would be to assume that the action buffer contents reported by the observation packet are the actions that are going to be applied to the underlying system. However, this is unlikely to be true, because the action buffer will be preempted by action packets already in transit.
ACDA employs a heuristic for estimating these previous actions to be applied to the system between $o _ { t }$ being generated and $a _ { t }$ arriving at the interaction layer. The heuristic assumes that, if $a _ { t }$ arrives at time $t + k$ (it having delay $k$), then previous action packets will also have had delay $k$, such that $a _ { t - 1 }$ will have arrived at time $t + k - 1$, $a _ { t - 2 }$ at $t + k - 2$, etc.
Under this assumption, a new action packet will preempt the action buffer at every single time step. This means that, if we assume a delay of $k$ , the action applied to the underlying system will be the action in the first column of the $k$ -th row in the action packet last received by the interaction layer.
By memorizing the action packets previously sent, we can under this assumption select the actions that are going to be applied to the system, as shown in Algorithm 1. When generating $a_1^{t+k}$, the first action on the $k$-th row in the action packet $\mathbf{a}_t$, we use Algorithm 1 to determine the actions $(\hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k})$ that will be applied to the observed state $s_t$ before $a_1^{t+k}$ is executed. For the action $a_2^{t+k}$, we extend the previous assumption and say that it will be executed if it arrived at $t + k + 1$. We then piece together the actions $(\hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k}, a_1^{t+k})$ applied to $s_t$ before $a_2^{t+k}$ is executed.
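The memorized-packet lookup described above can be sketched in a few lines. This is an illustrative reading of the heuristic, not the paper's exact Algorithm 1; the function name, packet layout, and indexing are assumptions. Under the constant-delay guess, the action applied at system time $u$ is the first entry of row $k$ of the packet sent at time $u - k$:

```python
def guess_applied_actions(packets, t, k):
    """Heuristic sketch: assume every packet experiences the same delay k.
    Then the action applied at system time u is the first entry of the
    k-th row of the packet sent at time u - k. The k actions applied at
    times t, ..., t+k-1 (after s_t is observed, before packet t takes
    effect) therefore come from packets t-k, ..., t-1.

    `packets` maps send-time -> action matrix (list of rows; row index
    k-1 is used when the hypothesized delay is k)."""
    return [packets[t - k + i][k - 1][0] for i in range(k)]


# Toy packets whose entries record (send_time, row) so the lookup is visible.
packets = {s: [[(s, r)] for r in range(5)] for s in range(20)}
guessed = guess_applied_actions(packets, t=10, k=3)
# guessed -> [(7, 2), (8, 2), (9, 2)]: row 2 (delay 3) of packets 7, 8, 9
```

The packet sent at each earlier step contributes exactly one guessed action, matching the constant-delay assumption of Section 4.1.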
The main idea is that the heuristic correctly guesses the applied actions as long as the delay does not evolve too much over time. If the delay truly were constant, then all guesses would be accurate and ACDA would transform the POMDP problem into a constant-delay MDP. The heuristic's accuracy is compromised during sudden changes in delay, such as network delay spikes. However, as we will see in the evaluation, occasional violations do not significantly impact overall performance.
# 4.2 Model-Based Distribution Agent
The memorized actions used by ACDA are variable in length and therefore cannot be directly used as input to MLPs, unlike in constant-delay approaches. Instead, ACDA constructs an embedding $z_1^{t+k}$ of the distribution $p(s_{t+k}|s_t, \hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k})$, where $\hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k}$ are the memorized actions. We then provide $z_1^{t+k}$ as input to an MLP to generate $a_1^{t+k}$. This allows the policy to reason about the possible states in which the generated action will be executed. Note that we are only concerned with the distribution itself and never explicitly sample from it.
To compute these embeddings, we learn a model of the system dynamics using three components: $\mathrm { E M B E D } _ { \omega }$ , $\operatorname { S T E P } _ { \omega }$ , and $\mathrm { E M I T } _ { \omega }$ , where $\omega$ represents learnable parameters.
• $\hat { z } _ { 0 } = \mathrm { E M B E D } _ { \omega } ( s _ { t } )$ embeds a state $s _ { t }$ into a distribution embedding $\hat { z } _ { 0 }$ . • $\hat { z } _ { i + 1 } = \mathrm { S T E P } _ { \omega } ( \hat { z } _ { i } , a _ { t + i } )$ updates the embedded distribution to consider what happens after also applying the action $a _ { t + i }$ . Such that if $\hat { z } _ { i }$ is an embedding of $p ( s _ { t + i } | s _ { t } , a _ { t } , \dots , a _ { t + i } )$ , then $\hat { z } _ { i + 1 }$ is an embedding of $p ( s _ { t + i + 1 } | s _ { t } , a _ { t } , \dotsc , a _ { t + i } , a _ { t + i + 1 } )$ . • The final component $\mathrm { E M I T } _ { \omega } \big ( s _ { t + i } \big | \hat { z } _ { i } \big )$ allows for a state to be sampled from the embedded distribution. This component is not used when generating actions, and is instead only used during training to ensure that $\hat { z } _ { i }$ is a good embedding of $p ( s _ { t + i } | s _ { t } , a _ { t } , \dots , a _ { t + i } )$ .
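A minimal sketch of the three components, with single tanh layers standing in for the MLPs and for the GRU-based $\mathrm{STEP}_\omega$ (the dimensions, the nonlinearity, and the single-layer form are illustrative choices, not the paper's architecture from Appendix D):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # One tanh layer stands in for each learned component in this sketch.
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ W)

STATE, ACT, EMB = 4, 2, 8
embed = layer(STATE, EMB)       # EMBED_w: state -> embedding z_hat_0
step = layer(EMB + ACT, EMB)    # STEP_w: (z_hat_i, a) -> z_hat_{i+1} (a GRU in the paper)
emit_mean = layer(EMB, STATE)   # EMIT_w: embedding -> parameters of a state distribution

def multi_step_embed(s_t, actions):
    """STEP_w^k(EMBED_w(s_t), a_1, ..., a_k): roll the embedding forward
    through the k memorized actions."""
    z = embed(s_t)
    for a in actions:
        z = step(np.concatenate([z, a]))
    return z

s_t = rng.normal(size=STATE)
acts = [rng.normal(size=ACT) for _ in range(3)]
z = multi_step_embed(s_t, acts)   # embedding of p(s_{t+3} | s_t, a_1..a_3)
s_mean = emit_mean(z)             # EMIT_w output, used only during training
```

Note that generating an action only requires `multi_step_embed`; `emit_mean` enters only through the training loss, mirroring how $\mathrm{EMIT}_\omega$ is unused at action-generation time.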
Figure 5: Illustration of the multi-step distribution model embedding $p ( s _ { t + k } | s _ { t } , \hat { a } _ { 1 } ^ { t + k } , \dots , \hat { a } _ { k } ^ { t + k } )$ as $\mathbf { S } \mathbf { T E P } _ { \omega } ^ { k } \big ( \mathrm { E M B E D } _ { \omega } \big ( s _ { t } \big ) , \hat { a } _ { 1 } ^ { t + k } , \dots , \hat { a } _ { k } ^ { t + k } \big )$
The way these components are used to produce the embedding $z_1^{t+k}$ is illustrated in Figure 5. We use the notation $z_1^{t+k} = \hat{z}_k$ given that we are embedding the selected actions $(\hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k})$. We use the notation $\mathrm{STEP}_\omega^k\big(\mathrm{EMBED}_\omega(s_t), \hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k}\big)$ to describe this multi-step embedding process. This notation is formalized in Appendix D.
The $\mathrm { E M B E D } _ { \omega }$ and $\mathrm { E M I T } _ { \omega }$ components are implemented as MLPs, while $\mathbf { S } \mathrm { T E P } _ { \omega }$ is implemented as a gated recurrent unit (GRU). We provide detailed descriptions of these components in Appendix D. We learn these components online by collecting information from trajectories about observed states $s _ { t }$ and $s _ { t + n }$ and their interleaved actions $a _ { t } , a _ { t + 1 } , \dotsc , a _ { t + n - 1 }$ in a replay buffer $\mathcal { R }$ . The following loss function $\mathcal { L } ( \omega )$ is used to minimize the KL-divergence between the model and the underlying system dynamics: $\mathcal { L } ( \omega ) = \mathbb { E } _ { ( s _ { t } , a _ { t } , a _ { t + 1 } , \dots , a _ { t + n - 1 } , s _ { t + n } ) \sim \mathcal { R } } \left[ - \log \mathrm { E M I T } _ { \omega } ( { s _ { t + n } } | z _ { n } ) \right]$ where $z _ { n } = \mathrm { S T E P } _ { \omega } ^ { n } \big ( \mathrm { E M B E D } _ { \omega } \big ( s _ { t } \big ) , a _ { t } , a _ { t + 1 } , \ldots , a _ { t + n - 1 } \big )$ .
Figure 6: Illustration of constructing the action packet: each embedding $z_i^{t+\cdot}$ is fed to the policy $\pi_\theta$ to produce an action, and $\mathrm{STEP}_\omega$ advances the embedding to the next row.
Given the embedding $z_1^{t+k}$, we produce $a_1^{t+k}$ in the action packet $\mathbf{a}_t$ using a policy $\pi_\theta(a_1^{t+k}|z_1^{t+k})$, i.e., generating actions given the (embedded) distribution over the state that the action will be applied to. This policy structure allows the agent to reason about uncertainties in future states when generating actions.
By extending the assumptions as shown in Section 4.1, we can also produce the embeddings $z_2^{t+k}$, $z_3^{t+k}$, etc., as illustrated in Figure 6. The complete process of constructing the action packet is formalized in Appendix D.
This model-based policy can also be applied in the constant-delay setting to achieve decent performance. We evaluate how this compares against a direct MLP function approximator in Appendix E.2, where we implement and evaluate the model-based policy in the BPQL algorithm.
# 4.3 Training Algorithm
This section describes the training procedure in Algorithm 2, used to optimize the parameters of the networks. It follows an actor-critic setup based on SAC. The training procedure of the critic $Q_\phi$ is similar to BPQL, where $Q_\phi(s, a)$ evaluates the value of actions $a$ on undelayed system states $s$.
Algorithm 2 is split into three parts: trajectory sampling (L3-L12), transition reconstruction (L13), and training (L14-L15). This split reduces the impact that the training procedure can have on the computational delay $\tau_{\mathrm{c}}$ of the system. From the trajectory sampling we collect POMDP transition information $(o_t, \mathbf{a}_t, r_t, o_{t+1}, \Gamma_{t+1})$, where $\Gamma_t$ is used to discern whether $s_t$ is a terminal state.
An important aspect of Algorithm 2 is how trajectory information is reconstructed for training. Specifically, we reconstruct the trajectory $( s _ { 0 } , a _ { 0 } , r _ { 0 } , s _ { 1 } , a _ { 1 } , . . . )$ from the perspective of the undelayed MDP, along with the policy input used to generate each action $a _ { t }$ . The policy input can be retrospectively recovered by examining the current action delay in the buffer $( \delta _ { t } )$ and the number of times the buffer has shifted $( c _ { t } )$ .
This trajectory reconstruction is necessary since we follow the BPQL algorithm's actor-critic setup. The critic $Q_\phi(s_t, a_t)$ estimates values in the undelayed MDP, and we need to be able to regenerate actions $a_t$ using the model-based policy to compute the TD-error. The details of trajectory reconstruction and the training algorithm are located in Appendix D.
1: Init. policy $\pi _ { \boldsymbol { \theta } }$ , critic $Q _ { \phi }$ , model $\omega$ , and replay $\mathcal { R }$
2: for each epoch do
3: Reset interaction layer state: $s _ { 0 } \sim \mu , t = 0$
4: Collected trajectory: $\tau = \emptyset$
5: Observe o0
6: while terminal state not reached do
7: for $k \gets 1$ to $L$ do
8: Select $\hat{a}_1^{t+k}, \dots, \hat{a}_k^{t+k}$ by Alg. 1
9: Create the $k$-th row of the action packet $\mathbf{a}_t$
10: Send $\mathbf { \alpha } _ { \mathbf { { a } } _ { t } }$ , observe $r _ { t } , \mathbf { \pmb { o } } _ { t + 1 } , \Gamma _ { t + 1 }$
11: Add $\left( o _ { t } , \mathbf { a } _ { t } , r _ { t } , \mathbf { o } _ { t + 1 } , \Gamma _ { t + 1 } \right)$ to $\tau$
12: $t \gets t + 1$
13: Reconstruct transition info from $\tau$ , add to $\mathcal { R }$
14: for $| \mathcal T |$ sampled batches from $\mathcal { R }$ do
15: Update $\pi _ { \boldsymbol { \theta } }$ , $Q _ { \phi }$ and $\omega$ (by $\mathcal { L } ( \omega )$ )
# 5 Evaluation and Results
To assess the benefits of the interaction layer in a delayed setting, we simulate the POMDP described in Section 3.3, wrapping existing environments from the Gymnasium library [17] as the underlying system. Specifically, we aim to answer whether our ACDA algorithm, which uses information from the interaction layer, can outperform state-of-the-art algorithms under random delay processes.
We evaluate on the three delay processes shown in Figure 7. The first two delay processes, $\mathrm{GE}_{1,23}$ and $\mathrm{GE}_{4,32}$, follow Gilbert-Elliott models [24, 25] where the delay alternates between good and bad states (e.g., a network or computational node being overloaded or having packets dropped). The third delay process, MM1, is modeled after an M/M/1 queue [26], where the sampled delay is the time spent in the queue by a network packet. The full definitions of these delay processes are located in Appendix B.2. We expect ACDA to perform well under the Gilbert-Elliott processes, which match its temporal assumptions, and less well under the M/M/1 queue delays, which fluctuate more.
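A Gilbert-Elliott delay process of this kind can be simulated with a two-state Markov chain. The transition probabilities and the two delay values below are illustrative, not the parameters of $\mathrm{GE}_{1,23}$ or $\mathrm{GE}_{4,32}$ from Appendix B.2:

```python
import random

def gilbert_elliott_delays(n, good_delay=1, bad_delay=23,
                           p_gb=0.05, p_bg=0.25, seed=0):
    """Sample n delays from a two-state Gilbert-Elliott chain.
    In the good state the delay is low; in the bad state it is high.
    p_gb (good->bad) and p_bg (bad->good) control how long the chain
    dwells in each state before switching."""
    rng = random.Random(seed)
    state = "good"
    delays = []
    for _ in range(n):
        delays.append(good_delay if state == "good" else bad_delay)
        if state == "good" and rng.random() < p_gb:
            state = "bad"
        elif state == "bad" and rng.random() < p_bg:
            state = "good"
    return delays

delays = gilbert_elliott_delays(1000)
```

Because the chain dwells in each state for several steps, the delay is temporally correlated, which is exactly the regime where ACDA's constant-delay heuristic is expected to guess well.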
Figure 7: The delay processes we evaluate on, as a distribution histogram (above) and as a time series sampled delay over 1000 steps (below) for each delay process. See Appendix B.2 for their definitions.
The state-of-the-art algorithms we compare against are BPQL [14] and VDPO [15]. As these are designed to operate under constant delay, we apply a constant-delay augmentation (CDA) to allow them to operate with constant delay in random delay processes. CDA converts the interaction layer POMDP into a constant-delay MDP by making agents act under the worst-case delay of a delay process. This augmentation process is described in Appendix A. In addition to the state-of-the-art algorithms, we also evaluate the performance of SAC, both with CDA and when it acts directly on the state from the observation packet (implicitly modeling delays). In Appendix E.3, we evaluate the case where CDA uses an incorrect worst-case delay that holds most of the time but is occasionally violated.
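The core of a constant-delay augmentation can be sketched as a small wrapper that presents the delayed state together with the $k$ actions still "in flight", turning the delayed problem into an MDP over augmented states. The class and its interface are an illustration of the idea, not the augmentation defined in Appendix A:

```python
from collections import deque

class CDAWrapper:
    """Constant-delay augmentation sketch for worst-case delay k:
    the agent conditions on (delayed state, last k actions), since those
    actions will be applied before any new action can take effect."""
    def __init__(self, k, default_action):
        self.k = k
        # Pre-fill with a default action for the first k steps of an episode.
        self.buffer = deque([default_action] * k, maxlen=k)

    def augment(self, delayed_state):
        # Augmented state seen by the constant-delay agent.
        return tuple(delayed_state) + tuple(self.buffer)

    def record(self, action):
        # Remember the action just sent; the oldest buffered action drops out.
        self.buffer.append(action)

w = CDAWrapper(k=2, default_action=0)
s0 = w.augment((1.0,))   # (1.0, 0, 0) before any real action is sent
w.record(5)
s1 = w.augment((2.0,))   # (2.0, 0, 5) after one recorded action
```

Acting under the worst-case delay in this way is what makes CDA conservative: when the realized delay is lower, the agent still plans as if every action were maximally delayed.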
We evaluate average return over a training period of 1 million steps on MuJoCo environments in Gymnasium, following the procedure from related work in delayed RL. However, an issue with the MuJoCo environments is that they have deterministic transition dynamics, rendering them theoretically unaffected by delay. To better evaluate the effect of delay, we make the transitions stochastic by imposing a $5 \%$ noise on the actions. We motivate and specify this in Appendix B.1.
The average return is computed every 10000 steps by freezing the training weights and sampling 10 trajectories under the current policy. We report the best achieved average return—where the return is the sum of rewards over a trajectory—for each algorithm, environment, and delay process in Table 1. All achieved average returns are also presented in Appendix E.1 as time series plots together with tables showing the standard deviation.
Table 1: Best evaluated average return for each algorithm.
Figure 8: M/M/1 Queue results (all results in Appendix E.1.3). Shaded regions are standard deviation.
As shown in Table 1, ACDA outperforms the state-of-the-art in all benchmarks except one, with a significant margin in most cases. The improvement is less substantial in Ant-v4, where performance often overlaps, as indicated by the standard deviation (Figure 8).

Abstract: In standard Reinforcement Learning (RL) settings, the interaction between the agent and the environment is typically modeled as a Markov Decision Process (MDP), which assumes that the agent observes the system state instantaneously, selects an action without delay, and executes it immediately. In real-world dynamic environments, such as cyber-physical systems, this assumption often breaks down due to delays in the interaction between the agent and the system. These delays can vary stochastically over time and are typically unobservable, meaning they are unknown when deciding on an action. Existing methods deal with this uncertainty conservatively by assuming a known fixed upper bound on the delay, even if the delay is often much lower. In this work, we introduce the interaction layer, a general framework that enables agents to adaptively and seamlessly handle unobservable and time-varying delays. Specifically, the agent generates a matrix of possible future actions to handle both unpredictable delays and lost action packets sent over networks. Building on this framework, we develop a model-based algorithm, Actor-Critic with Delay Adaptation (ACDA), which dynamically adjusts to delay patterns. Our method significantly outperforms state-of-the-art approaches across a wide range of locomotion benchmark environments.

Categories: cs.LG, cs.AI, cs.RO
# 1. Introduction
Surgical phase recognition aims to identify the high-level surgical stages depicted in surgical videos [21]. This capability holds potential for fruitful downstream tasks, such as automatic indexing of surgical video databases [45], real-time monitoring of surgical procedures [2], optimizing surgeons' schedules [32], evaluating surgeons' proficiency [28], etc. The primary objective of surgical phase recognition is to predict the category variable $y \in \mathbb{R}^{L \times C}$ given video frames $x \in \mathbb{R}^{L \times I}$, where $L$ and $C$ denote the video frame length and the number of phase categories, and $I$ is the number of channels per frame. The process is characterized by a deterministic function $f(x) \in \mathbb{R}^{L \times C}$ that transforms the video frames $x$ into the category variable $y$. To help alert surgeons and support decision-making in real time during surgery, we do not use future information within the video frames $x$, which is known as online phase recognition [34, 9]. This requires us to design the mapping function $f(\cdot)$ carefully to avoid information leakage.
Figure 1. The illustration of unbalanced phase distribution and frame ambiguity on the AutoLaparo dataset. i) Unbalanced phase distribution: The ribbon charts show that the frame distribution across different phases (best viewed in color) is highly unbalanced. ii) Frame ambiguity: The blue box inside the black box indicates the target organ and tool that should be focused on, while the blue box outside the black box represents an incorrectly focused area. The correct and incorrect areas are visually similar, which might confuse the model during recognition. Along with the CBAM results, we believe that a better approach to addressing the aforementioned issues would benefit reliable OSP recognition.
Deep neural network-based models [17, 46] have shown promising performance in surgical phase recognition by designing complex deterministic functions $f(\cdot)$ [24, 29, 10, 54, 35, 13, 53, 23]. To capture long-term spatial information, Transformer-based methods [42, 51] have been introduced, with SKiT [29] improving efficiency via key pooling ($\mathcal{O}(1)$ complexity) while preventing future information leakage. Recently, CNN-based models have regained attention due to batch normalization pitfalls [35]. To reduce the labor of video annotation, UATD [10] and VTDC [38] utilize timestamp annotation and semi-supervised learning. However, previous methods neglect the impact of the inherent frame ambiguity and unbalanced phase distribution in surgical videos on the robustness of Online Surgical Phase (OSP) recognition, as shown in Fig. 1. For example, in laparoscopic rectal cancer surgery, the high similarity between sigmoid colon mobilization and microvascular mobilization, along with irregular camera angle changes caused by emergencies during surgery, contributes to ambiguity in the surgical video frames. Additionally, free rectal movements are much more frequent than other movements because they are central to the procedure. Conversely, digestive tract reconstruction actions occur infrequently due to their fixed process, resulting in an unbalanced phase distribution. Therefore, overlooking these inherent factors during phase recognition may lead to suboptimal outcomes and significantly misguide downstream surgical tasks.
In this paper, we propose a meta-learning optimized classification diffusion model for reliable OSP recognition (Meta-SurDiff) to address the aforementioned issues in a unified end-to-end framework. Our primary objective is to mitigate the negative impact of frame ambiguity on OSP recognition, conditioned on some coarse phase representations derived from any available deep models mentioned above. To achieve this goal, we introduce a novel classification diffusion model, which unifies the conditional diffusion generative process [40, 18] with coarse phase representations. Notably, to demonstrate the superiority of Meta-SurDiff, we use ConvNext+LSTM, a simple yet effective backbone [35], to extract the phase representations. Although the classification diffusion model should be robust for frame ambiguity, the model still faces the risk of being biased towards the majority phases due to the unbalanced distribution of frames. To this end, we introduce a re-weighting based meta-learning objective to balance the negative impact of unbalanced phases on optimizing the model. We summarize the main contributions as follows:
• In the realm of OSP recognition, we introduce Meta-SurDiff, a meta-learning optimized classification diffusion model. It considers the covariate-dependence across both the forward and reverse processes within the diffusion model based on coarse phase representations, resulting in a highly accurate phase distribution estimation.
• Meta-SurDiff serves as a flexible plugin framework, seamlessly compatible with existing well-designed models for OSP recognition, using their strong capability to estimate the coarse phase representations and facilitating the estimation of the complete phase distribution for reliable recognition.
• Experiments on five widely used datasets with more than four practical metrics demonstrate that Meta-SurDiff establishes new state-of-the-art (SOTA) performance.
# 2. Background
2.1. Diffusion Probabilistic Model. DDPM [18] is a prominent example of probabilistic diffusion models [41, 36, 37], which comprises a forward diffusion process along with a reverse denoising process. Noise is incrementally introduced, ultimately converting the initial variable $\pmb{y}_0$ into Gaussian noise $\pmb{y}_T$ across $T$ steps:
$$
\begin{array} { r l } & { q ( \pmb { y } _ { 1 : T } | \pmb { y } _ { 0 } ) = \displaystyle \prod _ { t = 1 } ^ { T } q ( \pmb { y } _ { t } | \pmb { y } _ { t - 1 } ) , } \\ & { q ( \pmb { y } _ { t } | \pmb { y } _ { t - 1 } ) = \mathcal { N } ( \sqrt { 1 - \beta _ { t } } \pmb { y } _ { t - 1 } , \beta _ { t } \mathbf { I } ) } \end{array}
$$
where $\beta_t$ is the noise level, typically set to a small constant. A notable characteristic of the forward process is that $q(\pmb{y}_t|\pmb{y}_0) = \mathcal{N}(\pmb{y}_t; \sqrt{\alpha_t}\pmb{y}_0, (1 - \alpha_t)\mathbf{I})$ with $\alpha_t = \prod_{i=1}^{t}(1 - \beta_i)$. Utilizing a Markov chain with trainable Gaussian transitions, the denoising process from $\pmb{y}_T$ back to $\pmb{y}_0$ unfolds as:
$$
\begin{array} { l } { \displaystyle { p _ { \theta } ( \boldsymbol { y } _ { 0 : T } ) = p _ { \theta } ( \boldsymbol { y } _ { T } ) \prod _ { t = 1 } ^ { T } p _ { \theta } ( \boldsymbol { y } _ { t - 1 } | \boldsymbol { y } _ { t } ) , } } \\ { \displaystyle { p _ { \theta } ( \boldsymbol { y } _ { t - 1 } | \boldsymbol { y } _ { t } ) = \mathcal { N } ( \mu _ { \theta } ( \boldsymbol { y } _ { t } , t ) , \sigma _ { t } ^ { 2 } \mathbf { I } ) } } \end{array}
$$
where $\begin{array} { r } { \pmb { \mu } _ { \pmb { \theta } } ( \pmb { y } _ { t } , t ) = \frac { 1 } { \sqrt { \alpha _ { t } } } ( \pmb { y } _ { t } - \frac { \beta _ { t } } { \sqrt { 1 - \alpha _ { t } } } \epsilon _ { \pmb { \theta } } ( \pmb { y } _ { t } , t ) ) } \end{array}$ . Additionally, a noise prediction network $\epsilon _ { \pmb { \theta } } ( \cdot )$ is adopted to minimize the regression loss $\begin{array} { r } { \operatorname* { m i n } _ { \theta } \mathbb { E } _ { t , y _ { 0 } , \epsilon \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } ) } | | \epsilon - \epsilon _ { \theta } ( y _ { t } , t ) | | _ { 2 } ^ { 2 } } \end{array}$ .
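The closed-form marginal $q(\pmb{y}_t|\pmb{y}_0)$ can be checked numerically by iterating the one-step kernel and comparing against $\sqrt{\alpha_t}\pmb{y}_0 + \sqrt{1-\alpha_t}\,\epsilon$. The schedule values below are arbitrary, chosen only for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.05, T)
alpha_bar = np.cumprod(1.0 - betas)   # cumulative product over the schedule

y0 = rng.normal(size=(2000, 4))

# Iterate q(y_t | y_{t-1}) = N(sqrt(1 - beta_t) y_{t-1}, beta_t I) step by step.
y = y0.copy()
for t in range(T):
    y = np.sqrt(1.0 - betas[t]) * y + np.sqrt(betas[t]) * rng.normal(size=y.shape)

# Closed form predicts y_T = sqrt(alpha_bar_T) y0 + sqrt(1 - alpha_bar_T) eps,
# so the residual should be zero-mean noise with variance 1 - alpha_bar_T.
residual = y - np.sqrt(alpha_bar[-1]) * y0
emp_var = residual.var()
```

Empirically the residual mean is near zero and its variance matches $1 - \alpha_T$, confirming that single-step noising composes into the closed-form marginal used for training.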
2.2. Learning to Re-weight Examples. Re-weighting the loss function is a widely employed tactic for addressing the imbalanced data issue [15]. It treats the weight assigned to each instance as a trainable parameter, enabling the learning of a balanced model for both minority and majority categories through optimization of the weighted loss function. Typically, the optimal weight is optimized on a balanced meta dataset and can be expressed as:
Figure 2. Overview of Meta-SurDiff, which consists of a classification diffusion model and a re-weighting based meta-learning objective. Top: We employ a simple yet effective backbone $f_\phi(\cdot)$, ConvNext+LSTM, to capture coarse phase representations $f_\phi(\pmb{x}^i)$ for the $i$-th video frame, which serve as conditional inputs of the classification diffusion model. Bottom: the proposed classification diffusion model utilizes $f_\phi(\pmb{x}^i)$ as a prior and introduces covariate-dependence into both the forward and reverse diffusion processes, aiming to obtain a precise frame-level phase distribution via the reverse diffusion process given the coarse phase representations $f_\phi(\pmb{x}^i)$, as detailed in Sec. 3.1. Taking the unbalanced nature of surgical videos into consideration, we introduce a meta-learning objective to train the diffusion model by re-weighting the importance of each video frame, thus mitigating the risk of the model being biased towards the majority surgical phases.
$$
\begin{array} { r } { \pmb { \theta } ^ { * } ( \pmb { w } ) = \underset { \pmb { \theta } } { \arg \operatorname* { m i n } } \displaystyle \sum _ { i = 1 } ^ { N } { \pmb { w } } ^ { i } \mathcal { L } _ { t r a i n } ^ { i } ( \pmb { \theta } ) } \\ { \pmb { w } ^ { * } = \underset { \pmb { w } } { \arg \operatorname* { m i n } } \displaystyle \frac { 1 } { M } \sum _ { j = 1 } ^ { M } { \mathcal L } _ { m e t a } ^ { j } ( \pmb { \theta } ^ { * } ( \pmb { w } ) ) } \end{array}
$$
where $\boldsymbol{w} \in \mathbb{R}^N$ is the weight vector, $\pmb{\theta}$ are the classifier parameters, and $\mathcal{L}_{train}^i$ and $\mathcal{L}_{meta}^j$ are respectively the loss functions on the unbalanced training dataset and the balanced meta dataset. The meta dataset is usually downsampled from the training dataset.
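The bilevel objective can be illustrated with a toy problem in which each training loss is a quadratic $(\theta - c_i)^2$, so the inner problem has a closed-form solution. The instance values and weights below are made up purely for illustration:

```python
import numpy as np

# Imbalanced training set: three 'majority' instances pull theta toward 0,
# one 'minority' instance pulls it toward 10. The balanced meta set has one of each.
c_train = np.array([0.0, 0.0, 0.0, 10.0])
c_meta = np.array([0.0, 10.0])

def inner_solution(w):
    # argmin_theta sum_i w_i (theta - c_i)^2 is the weighted mean of the c_i.
    return np.sum(w * c_train) / np.sum(w)

def meta_loss(w):
    theta = inner_solution(w)
    return np.mean((theta - c_meta) ** 2)

uniform = np.ones(4)
reweighted = np.array([1.0, 1.0, 1.0, 3.0])  # upweight the minority instance
```

Here `meta_loss(reweighted) < meta_loss(uniform)`: upweighting the minority instance moves $\theta^*(\pmb{w})$ from 2.5 to 5.0, the minimizer of the loss on the balanced meta set, which is exactly the effect the outer optimization over $\pmb{w}$ seeks.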
# 3. Method
In this section, we introduce the meta-learning optimized classification diffusion model (Meta-SurDiff) for reliable Online Surgical Phase (OSP) recognition. The overview of our model is shown in Fig.2. We introduce the classification diffusion model in Sec. 3.1 and present the corresponding meta-learning optimization method in Sec. 3.2.
3.1. Classification Diffusion Model. (1) Coarse surgical phase representations: Existing OSP recognition methods [24, 29, 10, 54, 35] have primarily focused on learning strong spatial-temporal representations of surgical videos, which can last for several hours and exhibit strong dependencies among different phases. However, the robustness of OSP recognition is rarely considered. In this paper, we advocate adopting such strong phase representations, which are obviously point estimates, as coarse conditional inputs for the follow-up precise surgical phase distribution estimation. To reveal the superiority of our classification diffusion model in OSP recognition, inspired by [35], we use a simple yet effective backbone called ConvNext+LSTM to capture the coarse phase representations as follows:
$$
z ^ { i } = g _ { \pmb \theta } ( f _ { \phi } ( \pmb x ^ { i } ) ) , i = 1 , . . . , L
$$
where $\pmb { x } ^ { i } \in \mathbb { R } ^ { I }$ is the $i$ -th video frame and there are $L$ frames in total, $f _ { \phi } ( \cdot )$ is the ConvNext+LSTM backbone, and $g _ { \pmb { \theta } } ( \cdot )$ denotes another learnable projector that aligns the dimension with the ground-truth label embedding $\pmb { y } _ { 0 } ^ { i }$ discussed later. Importantly, using an LSTM to extract spatial-temporal features is safe for OSP recognition because it avoids utilizing future frames during prediction. Additionally, we ensure the quality of the coarse phase representations by minimizing the cross-entropy loss $\mathcal { L } _ { C E } = - \sum _ { i = 1 } ^ { L } \sum _ { c = 1 } ^ { C } \pmb { y } _ { 0 } ^ { i , c } \log z ^ { i , c }$ .
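As a minimal illustration (the ConvNext+LSTM backbone is omitted and per-frame logits are assumed given, so all names here are illustrative), the coarse phase distribution and the cross-entropy term $\mathcal { L } _ { C E }$ can be sketched in numpy as:

```python
import numpy as np

def softmax(u):
    # Numerically stable softmax over the class axis.
    e = np.exp(u - u.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def coarse_ce_loss(logits, y_onehot):
    """L_CE = -sum_i sum_c y_0^{i,c} log z^{i,c}.

    logits:   (L, C) per-frame outputs standing in for g_theta(f_phi(x^i));
    y_onehot: (L, C) one-hot ground-truth phase labels.
    """
    z = softmax(logits)  # coarse frame-level phase distributions z^i
    return -(y_onehot * np.log(z + 1e-12)).sum()
```

Confident logits on the correct class drive the loss toward zero, while uniform logits give the maximal $L \log C$ value.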
(2) Forward diffusion process with coarse phase representations as priors: Unlike vanilla diffusion models that assume the endpoint of the diffusion process to be $\mathcal { N } ( \mathbf { 0 } , \mathbf { I } )$ , we model the endpoint through the incorporation of the coarse phase representations $z ^ { i }$ for $i = 1 , . . . , L$ , which serve as conditional inputs, inspired by [16], and we have $p ( { \pmb y } _ { T } ^ { i } | { \pmb z } ^ { i } ) = \mathcal { N } ( z ^ { i } , \mathbf { I } )$ . With a diffusion schedule $\beta _ { t } \in ( 0 , 1 )$ for $t = 1 , . . . , T$ , the forward process is:
$$
q ( \pmb { y } _ { t } ^ { i } | \pmb { y } _ { t - 1 } ^ { i } , \pmb { z } ^ { i } ) = \mathcal { N } ( \widetilde { \beta } _ { t } \pmb { y } _ { t - 1 } ^ { i } + ( 1 - \widetilde { \beta } _ { t } ) \pmb { z } ^ { i } , \beta _ { t } \mathbf { I } )
$$
where $\widetilde { \beta } _ { t } = \sqrt { 1 - \beta _ { t } }$ . Similar to DDPM [18], we can sample $\pmb { y } _ { t } ^ { i }$ given $\pmb { y } _ { 0 } ^ { i }$ at an arbitrary timestep $t$ as:
$$
q ( \pmb { y } _ { t } ^ { i } \vert \pmb { y } _ { 0 } ^ { i } , \pmb { z } ^ { i } ) = \mathcal { N } ( \sqrt { \alpha _ { t } } \pmb { y } _ { 0 } ^ { i } + ( 1 - \sqrt { \alpha _ { t } } ) \pmb { z } ^ { i } , \widetilde { \alpha } _ { t } \mathbf { I } )
$$
where $\overline { { \alpha } } _ { t } = 1 - \beta _ { t }$ , $\alpha _ { t } = \prod _ { s = 1 } ^ { t } \overline { { \alpha } } _ { s }$ , and $\widetilde { \alpha } _ { t } = 1 - \alpha _ { t }$ . The mean above is an interpolation between the ground-truth label embedding $\pmb { y } _ { 0 } ^ { i }$ and the conditional input $z ^ { i }$ from Eq. 4.
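A minimal numpy sketch of sampling from Eq. 6 under this notation (the linear $\beta _ { t }$ schedule is an illustrative assumption; the paper does not state its schedule here):

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=2e-2):
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bars = 1.0 - betas            # \bar{alpha}_t = 1 - beta_t
    alphas = np.cumprod(alpha_bars)     # alpha_t = prod_s \bar{alpha}_s
    return betas, alphas

def q_sample(y0, z, alphas, t, rng):
    """Draw y_t ~ q(y_t | y_0, z) = N(sqrt(a_t) y0 + (1 - sqrt(a_t)) z, (1 - a_t) I)."""
    a = alphas[t]
    mean = np.sqrt(a) * y0 + (1.0 - np.sqrt(a)) * z
    return mean + np.sqrt(1.0 - a) * rng.standard_normal(y0.shape)
```

As $t \to T$, $\alpha _ { t } \to 0$ and the sample concentrates around the prior $z ^ { i }$; at small $t$ it stays near $\pmb { y } _ { 0 } ^ { i }$.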
(3) Reverse diffusion process with coarse phase representations as priors: The primary objective of the reverse process is to progressively refine and sample denoised predictions from initially coarse estimates, reducing the uncertainty introduced during the forward process. Specifically, based on the formulations in Eqs. 5 and 6, the posterior distribution of the forward process can be derived, as shown in Eq. 7:
$$
\begin{array} { r l } & { q ( { y _ { t - 1 } ^ { i } } | { y _ { t } ^ { i } } , { y _ { 0 } ^ { i } } , { z ^ { i } } ) \propto q ( { y _ { t } ^ { i } } | { y _ { t - 1 } ^ { i } } , { z ^ { i } } ) q ( { y _ { t - 1 } ^ { i } } | { y _ { 0 } ^ { i } } , { z ^ { i } } ) } \\ & { \propto \exp \{ - \frac { 1 } { 2 } [ \frac { ( { y _ { t } ^ { i } } - ( 1 - \sqrt { \overline { { \alpha } } _ { t } } ) { z ^ { i } } - \sqrt { \overline { { \alpha } } _ { t } } { y _ { t - 1 } ^ { i } } ) ^ { 2 } } { \beta _ { t } } } \\ & { + \frac { ( { y _ { t - 1 } ^ { i } } - \sqrt { \alpha _ { t - 1 } } { y _ { 0 } ^ { i } } - ( 1 - \sqrt { \alpha _ { t - 1 } } ) { z ^ { i } } ) ^ { 2 } } { 1 - \alpha _ { t - 1 } } ] \} } \\ & { \propto \exp \{ - \frac { 1 } { 2 } [ A ( { y _ { t - 1 } ^ { i } } ) ^ { 2 } - 2 B { y _ { t - 1 } ^ { i } } ] \} } \end{array}
$$
where $A = \frac { 1 - \alpha _ { t } } { \beta _ { t } ( 1 - \alpha _ { t - 1 } ) }$ and $B = \frac { \sqrt { \alpha _ { t - 1 } } } { 1 - \alpha _ { t - 1 } } { y _ { 0 } ^ { i } } + \frac { \sqrt { \overline { { \alpha } } _ { t } } } { \beta _ { t } } { y _ { t } ^ { i } } + ( \frac { 1 - \sqrt { \alpha _ { t - 1 } } } { 1 - \alpha _ { t - 1 } } + \frac { \sqrt { \overline { { \alpha } } _ { t } } ( \sqrt { \overline { { \alpha } } _ { t } } - 1 ) } { \beta _ { t } } ) z ^ { i }$ . Due to space limitations, we use $( \cdot ) ^ { 2 }$ in place of $( \cdot ) ^ { T } ( \cdot )$ above, which does not affect the result. Following [1], the variance of the posterior can be expressed as $\frac { 1 - \alpha _ { t - 1 } } { 1 - \alpha _ { t } } \beta _ { t }$ , and we define $\gamma _ { 3 } = \frac { 1 - \alpha _ { t - 1 } } { 1 - \alpha _ { t } }$ .

# Algorithm 2 Inference of Meta-SurDiff

Draw $\pmb { x } ^ { i }$ from the test dataset
Compute $\pmb { z } ^ { i }$ using Eq. 4
for $t = T$ to $1$ do
$\quad \hat { \pmb { y } } _ { 0 } ^ { i } = \frac { 1 } { \sqrt { \alpha _ { t } } } \left( \pmb { y } _ { t } ^ { i } - ( 1 - \sqrt { \alpha _ { t } } ) \pmb { z } ^ { i } - \sqrt { 1 - \alpha _ { t } } \, \epsilon _ { \theta } ( \pmb { y } _ { t } ^ { i } , \pmb { z } ^ { i } , t ) \right)$
$\quad$ if $t > 1$ then
$\quad \quad$ Draw $\epsilon \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } )$
$\quad \quad \pmb { y } _ { t - 1 } ^ { i } = \gamma _ { 0 } \hat { \pmb { y } } _ { 0 } ^ { i } + \gamma _ { 1 } \pmb { y } _ { t } ^ { i } + \gamma _ { 2 } \pmb { z } ^ { i } + \sqrt { \gamma _ { 3 } \beta _ { t } } \, \epsilon$
$\quad$ else
$\quad \quad \pmb { y } _ { t - 1 } ^ { i } = \hat { \pmb { y } } _ { 0 } ^ { i }$
$\quad$ end if
end for

The mean of the posterior can be written as:
$$
\widetilde { \mu } ( \pmb { y } _ { t } ^ { i } , \pmb { y } _ { 0 } ^ { i } , z ^ { i } ) = \frac { \beta _ { t } \sqrt { \alpha _ { t - 1 } } } { 1 - \alpha _ { t } } \pmb { y } _ { 0 } ^ { i } + \frac { ( 1 - \alpha _ { t - 1 } ) \sqrt { \overline { { \alpha } } _ { t } } } { 1 - \alpha _ { t } } \pmb { y } _ { t } ^ { i } + \left( 1 + \frac { ( \sqrt { \alpha _ { t } } - 1 ) ( \sqrt { \overline { { \alpha } } _ { t } } + \sqrt { \alpha _ { t - 1 } } ) } { 1 - \alpha _ { t } } \right) z ^ { i }
$$
For simplicity, we define $\widetilde { \pmb { \mu } } = \gamma _ { 0 } \pmb { y } _ { 0 } ^ { i } + \gamma _ { 1 } \pmb { y } _ { t } ^ { i } + \gamma _ { 2 } \pmb { z } ^ { i }$ and we have:
$$
\gamma _ { 0 } = \frac { \beta _ { t } \sqrt { \alpha _ { t - 1 } } } { 1 - \alpha _ { t } } , \quad \gamma _ { 1 } = \frac { ( 1 - \alpha _ { t - 1 } ) \sqrt { \overline { { \alpha } } _ { t } } } { 1 - \alpha _ { t } } , \quad \gamma _ { 2 } = 1 + \frac { ( \sqrt { \alpha _ { t } } - 1 ) ( \sqrt { \overline { { \alpha } } _ { t } } + \sqrt { \alpha _ { t - 1 } } ) } { 1 - \alpha _ { t } }
$$
Given a conditional input $z ^ { i }$ , we can use the reverse diffusion process to generate a precise phase distribution, represented by multiple sampled fine-grained phase representations that closely resemble the ground-truth label embedding $\pmb { y } _ { 0 } ^ { i }$ .
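A self-contained numpy sketch of the reverse sampling loop in Algorithm 2 (the noise network is passed in as a callable and stubbed with a zero predictor in the usage below, an illustrative assumption):

```python
import numpy as np

def gammas(alphas, alpha_bars, t):
    """Posterior-mean coefficients gamma_0..gamma_2 and variance factor gamma_3.
    alphas are cumulative products, alpha_bars the per-step 1 - beta_t."""
    a_t = alphas[t]
    a_prev = alphas[t - 1] if t > 0 else 1.0
    ab_t = alpha_bars[t]
    beta_t = 1.0 - ab_t
    g0 = beta_t * np.sqrt(a_prev) / (1.0 - a_t)
    g1 = np.sqrt(ab_t) * (1.0 - a_prev) / (1.0 - a_t)
    g2 = 1.0 + (np.sqrt(a_t) - 1.0) * (np.sqrt(ab_t) + np.sqrt(a_prev)) / (1.0 - a_t)
    g3 = (1.0 - a_prev) / (1.0 - a_t)
    return g0, g1, g2, g3

def reverse_sample(z, eps_net, alphas, alpha_bars, rng):
    """Start from y_T ~ N(z, I) and denoise down to a fine-grained y_0 estimate."""
    T = len(alphas)
    y = z + rng.standard_normal(z.shape)
    for t in range(T - 1, -1, -1):
        a_t = alphas[t]
        y0_hat = (y - (1.0 - np.sqrt(a_t)) * z
                  - np.sqrt(1.0 - a_t) * eps_net(y, z, t)) / np.sqrt(a_t)
        if t > 0:
            g0, g1, g2, g3 = gammas(alphas, alpha_bars, t)
            beta_t = 1.0 - alpha_bars[t]
            y = (g0 * y0_hat + g1 * y + g2 * z
                 + np.sqrt(g3 * beta_t) * rng.standard_normal(z.shape))
        else:
            y = y0_hat
    return y
```

A useful sanity check on the coefficients is that $\gamma _ { 0 } + \gamma _ { 1 } + \gamma _ { 2 } = 1$ for every $t$, so the posterior mean remains a convex-style combination of $\hat { \pmb { y } } _ { 0 } ^ { i }$ , $\pmb { y } _ { t } ^ { i }$ , and $z ^ { i }$ .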
3.2. Re-weighting based Meta-Learning Objective. So far, we have improved the quality of phase representations in preparation for reliable OSP recognition; however, the unbalanced frame distribution among different phases still poses a potential risk of overfitting. Therefore, we propose a re-weighting based meta-learning strategy to learn the parameters of our classification diffusion model.
According to Sec. 2.2, we first need to construct $\mathcal { L } _ { t r a i n }$ and $\mathcal { L } _ { m e t a }$ ; in practice, both share the same formulation. Taking $\mathcal { L } _ { t r a i n }$ as an example, we have:
$$
\mathcal { L } _ { t r a i n } = - \sum _ { i = 1 } ^ { L } \mathbb { E } _ { q ( { \pmb y } _ { 1 : T } ^ { i } | { \pmb y } _ { 0 } ^ { i } , { \pmb z } ^ { i } ) } \left[ \log \frac { p _ { \phi , \theta } ( { \pmb y } _ { 0 : T } ^ { i } | { \pmb z } ^ { i } ) } { q ( { \pmb y } _ { 1 : T } ^ { i } | { \pmb y } _ { 0 } ^ { i } , { \pmb z } ^ { i } ) } \right]
$$
where minimizing $\mathcal { L } _ { t r a i n }$ is exactly maximizing the Evidence Lower BOund (ELBO) [26] of $\sum _ { i = 1 } ^ { L } \log p _ { \phi , \theta } ( \pmb { y } _ { 0 } ^ { i } | \pmb { z } ^ { i } )$ . The specific form of Eq. 10 can be divided into three terms; for the $i$ -th frame of a video, these are $\mathbb { E } _ { q } \big [ - \log p ( { \pmb y } _ { 0 } ^ { i } | { \pmb y } _ { 1 } ^ { i } , { \pmb z } ^ { i } ) \big ]$ , $\mathbb { E } _ { q } [ K L ( q ( \pmb { y } _ { T } ^ { i } | \pmb { y } _ { 0 } ^ { i } , \pmb { z } ^ { i } ) | | p ( \pmb { y } _ { T } ^ { i } | \pmb { z } ^ { i } ) ) ]$ , and $\mathbb { E } _ { q } [ K L ( q ( \pmb { y } _ { t - 1 } ^ { i } | \pmb { y } _ { t } ^ { i } , \pmb { y } _ { 0 } ^ { i } , \pmb { z } ^ { i } ) | | p _ { \phi , \theta } ( \pmb { y } _ { t - 1 } ^ { i } | \pmb { y } _ { t } ^ { i } , \pmb { z } ^ { i } ) ) ]$ . As we can see, $\mathcal { L } _ { t r a i n }$ guides the model to predict uncertainty while maintaining the capacity for accurate estimation of the fine-grained phase representation through the reverse diffusion. In practice, it is usually time-consuming to compute the whole chain from a sampled timestep $t$ down to $0$ ; therefore, we follow the reparameterization trick used in DDPM and construct $\epsilon _ { \phi , \theta } ( \pmb { y } _ { t } ^ { i } , \pmb { z } ^ { i } , t )$ to predict the forward diffusion noise $\epsilon$ injected into $\pmb { y } _ { t } ^ { i }$ . The training objective $\mathcal { L } _ { t r a i n }$ for the $i$ -th frame can then be carried out in a standard DDPM manner:
$$
\mathcal { L } _ { \epsilon ^ { i } } = \| \epsilon ^ { i } - \epsilon _ { \phi , \theta } ( \sqrt { \alpha _ { t } } y _ { 0 } ^ { i } + \sqrt { \widetilde { \alpha } _ { t } } \epsilon + ( 1 - \sqrt { \alpha _ { t } } ) z ^ { i } , z ^ { i } , t ) \| _ { 2 } ^ { 2 }
$$
Combining the noise prediction loss above and the objective in capturing meaningful coarse phase representations in Sec. 3.1 altogether, the intermediate objective is:
$$
\mathcal { L } _ { t r a i n } = \frac { \mathcal { L } _ { \epsilon } + \mathcal { L } _ { C E } } { L } , \quad \mathcal { L } _ { \epsilon } = \sum _ { i = 1 } ^ { L } \mathcal { L } _ { \epsilon ^ { i } }
$$
Notably, we reuse the notation $\mathcal { L } _ { t r a i n }$ from Eq. 10; however, their meanings differ. Here the weight for each frame equals $\textstyle { \frac { 1 } { L } }$ . To mitigate the negative impact of unbalanced frames across different phases on OSP recognition, we need to replace the equal weights with dynamic weights. Drawing inspiration from [15, 39], we adopt a re-weighting based meta-learning method that reassigns the weights following meta-training and meta-testing steps.
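As an illustration, the per-frame noise-prediction loss $\mathcal { L } _ { \epsilon ^ { i } }$ above can be sketched in numpy; the noise network is passed in as a callable, and all names are illustrative:

```python
import numpy as np

def noise_loss(y0, z, t, alphas, eps_net, rng):
    """|| eps - eps_net(sqrt(a_t) y0 + sqrt(1-a_t) eps + (1-sqrt(a_t)) z, z, t) ||^2."""
    eps = rng.standard_normal(y0.shape)
    a = alphas[t]
    # Forward-sampled y_t with the coarse representation z as prior.
    y_t = np.sqrt(a) * y0 + np.sqrt(1.0 - a) * eps + (1.0 - np.sqrt(a)) * z
    return np.sum((eps - eps_net(y_t, z, t)) ** 2)
```

An oracle predictor that inverts the forward step drives the loss to zero, while an uninformative predictor leaves it at the noise norm.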
(1) Meta-training process: We use a meta-weight net $h ( \cdot ; w )$ parameterized by $\mathbf { \boldsymbol { w } }$ , a two-layer MLP, to compute the frame-level weights. For convenience, we package $\phi$ and $\pmb \theta$ together and denote them as $\Theta$ . We first update the meta-weight net in the meta-training process, since the parameters $\Theta$ of the classification diffusion model should be robust to the unbalanced phase distribution, which heavily depends on the state of the meta-weight net. Specifically, given $n$ and $m$ frames separately sampled from the training and meta datasets, the meta-weight net $h ( \cdot ; w )$ is updated using:
$$
\begin{array} { r l } & { \hat { \boldsymbol { \Theta } } ^ { t } = \boldsymbol { \Theta } ^ { t } - \displaystyle \frac { \alpha } { n } \sum _ { i = 1 } ^ { n } h ( \mathcal { L } _ { t r a i n } ^ { i } ; \boldsymbol { w } ) \nabla _ { \boldsymbol { \Theta } } \mathcal { L } _ { t r a i n } ^ { i } \bigg | _ { \boldsymbol { \Theta } ^ { t } } } \\ & { \boldsymbol { w } ^ { t + 1 } = \boldsymbol { w } ^ { t } - \displaystyle \frac { \beta } { m } \sum _ { i = 1 } ^ { m } \nabla _ { \boldsymbol { w } } \mathcal { L } _ { m e t a } ^ { i } \bigg | _ { \boldsymbol { w } ^ { t } } } \end{array}
$$
where $\alpha$ and $\beta$ here denote the step sizes.
(2) Meta-testing process: After obtaining the updated $\pmb { w } ^ { t + 1 }$ , the meta-weight net should be gradually capable of reassigning proper weights to unbalanced video frames. Consequently, we use the updated $\pmb { w } ^ { t + 1 }$ to update the parameters $\Theta$ of our model, which can be expressed
as:
$$
\Theta ^ { t + 1 } = \Theta ^ { t } - \frac { \alpha } { n } \sum _ { i = 1 } ^ { n } h ( \mathcal { L } _ { t r a i n } ^ { i } ; \boldsymbol { w } ^ { t + 1 } ) \nabla _ { \Theta } \mathcal { L } _ { t r a i n } ^ { i } \bigg | _ { \Theta ^ { t } }
$$
Meta-SurDiff is trained by iteratively updating parameters between the two meta-learning processes on diverse mini-batches of video frames. As a result, Meta-SurDiff properly handles the uncertainties of video quality and unbalanced distribution, and is thereby beneficial for reliable OSP recognition. The pseudocode for training and inference is presented in Alg. 1 and Alg. 2, respectively.
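As an illustrative toy (not the actual implementation), the two alternating updates can be written out for a 1-D least-squares model with a single-parameter meta-weight net $h ( l ; w ) = \sigma ( w l )$ ; both the scalar model and the reduced meta-weight net are simplifying assumptions that keep all gradients analytic:

```python
import numpy as np

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def meta_round(theta, w, x_tr, y_tr, x_me, y_me, lr_a=0.1, lr_b=0.5):
    """One meta-training + meta-testing round for a scalar model f(x) = theta * x."""
    # Per-sample training losses and gradients.
    r = theta * x_tr - y_tr
    l_tr, g_tr = r ** 2, 2.0 * r * x_tr
    h = sigmoid(w * l_tr)                      # frame weights from the meta-weight net
    # Meta-training: pseudo-update of theta under the current weights.
    theta_hat = theta - lr_a * np.mean(h * g_tr)
    # Gradient of the meta loss w.r.t. w through theta_hat (chain rule).
    g_me = np.mean(2.0 * (theta_hat * x_me - y_me) * x_me)
    dtheta_dw = -lr_a * np.mean(h * (1.0 - h) * l_tr * g_tr)
    w_new = w - lr_b * g_me * dtheta_dw
    # Meta-testing: real update of theta with the refreshed weights.
    h_new = sigmoid(w_new * l_tr)
    theta_new = theta - lr_a * np.mean(h_new * g_tr)
    return theta_new, w_new
```

Iterating the round on data generated as $y = 2x$ drives $\theta$ toward 2 while the meta-weight net learns how strongly each sample's loss should count.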
# 4. Experiment
4.1. Experimental Setup. Datasets: Five surgical phase recognition datasets are utilized to extensively evaluate our model, including Cholec80 [45], M2Cai16 [44], AutoLaparo [47], OphNet [20], and NurViD [19]. Table 1 gives basic statistics of these datasets; details are provided in Appendix A. Cholec80, M2Cai16, and AutoLaparo are laparoscopic surgery datasets, OphNet is an ophthalmic surgery dataset, and NurViD is a daily-care dataset.
Table 1. Statistics of datasets. TR/VAL/TE No. is the number of training, validation, and testing videos. C No. is the phase number.
Evaluation metrics: We use four widely used metrics, accuracy (Acc), precision (Pr), recall (Re), and Jaccard (Ja), to evaluate OSP recognition performance. Additionally, since prior surgical phase recognition methods do not explicitly model the uncertainty in surgical videos, we leverage Prediction Interval Width (PIW) and Paired Two-Sample t-Test (PTST) to quantify the model's uncertainty. Please refer to Appendix C for more details. Due to the subjective nature of manual labeling in surgical videos and the ambiguous boundaries between adjacent surgical stages, as noted by [50], the Cholec80 and M2Cai16 datasets adopt lenient boundary metrics to assess model performance. Specifically, frames predicted as belonging to adjacent stages within a 10-second window before and after a phase transition are also deemed correct. For OphNet and NurViD, we conduct comparisons following the task settings and evaluation metrics outlined in their respective papers.
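As an illustration, the lenient boundary metric can be sketched as follows, assuming frames are sampled at 1 fps so that the 10-second window corresponds to 10 frames (the sampling rate is an assumption here):

```python
def lenient_accuracy(pred, gt, window=10):
    """Frame accuracy where a prediction of an adjacent phase, made within
    `window` frames of a phase transition, also counts as correct."""
    n = len(gt)
    transitions = [i for i in range(1, n) if gt[i] != gt[i - 1]]
    correct = 0
    for i in range(n):
        if pred[i] == gt[i]:
            correct += 1
            continue
        # Near a boundary, accept the phase on the other side of the transition.
        if any(abs(i - t) <= window and pred[i] in (gt[t - 1], gt[t])
               for t in transitions):
            correct += 1
    return correct / n
```

A prediction that switches to the next phase a couple of seconds early is thus not penalized, while errors far from any boundary still count as wrong.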
Baselines: We compare Meta-SurDiff with recently proposed competitive baselines, including PitBN [35], SKiT [29], CMTNet [51], LAST [42], TMRNet [24], Trans-SVNet [14], TeCNO [7], and SV-RCNet [21], among others. The results are reported from their original papers or reproduced using their available official code.
Implementation details: We utilize ConvNeXt [31] pretrained on ImageNet-1K [27] to extract spatial features from videos, followed by an LSTM for temporal feature fusion. During training, we freeze the earlier blocks of ConvNeXt and only update the parameters of its last block. To generate meaningful conditional inputs for learning our classification diffusion model, we initially pretrain the ConvNeXt+LSTM backbone using the standard cross-entropy loss on the unbalanced training datasets. We employ AdamW [25] to optimize the model, with separate learning rates of 1e-5 for $\Theta$ and 1e-3 for $\mathbf { \boldsymbol { w } }$ , without weight decay. To ensure fair comparisons, we maintain a batch size of 1 and a time-window length of 256, consistent with other competitors. All experiments are conducted on a single NVIDIA A100 80GB PCIe GPU. More details can be found in Appendix B.
Table 2. The results ( $\%$ ) of Meta-SurDiff V.S. other competitors on Cholec80 dataset. The best results are marked in bold.
Table 3. The results ( $\%$ ) of Meta-SurDiff V.S. other competitors on AutoLaparo dataset. The best results are marked in bold.
# 4.2. Main Results.
4.2.1. Quantitative Results and Analysis. (1) Online surgical phase recognition: We conduct comprehensive studies comparing Meta-SurDiff with other SOTA methods for OSP recognition on the Cholec80, AutoLaparo, M2Cai16, OphNet, and NurViD datasets. Quantitative results are reported in Table 2, Table 3, Table 4, Table 5, and Table 6, respectively. Meta-SurDiff significantly outperforms most competitors, such as SKiT and LAST, across various metrics including accuracy (Acc), precision (Pr), recall (Re), and Jaccard (Ja). For example, Meta-SurDiff shows improvements on Cholec80 of 1.8% in Acc, 1.8% in Pr, 1.1% in Re, and 3.1% in Ja over the second-best method. Additionally, Meta-SurDiff delivers superior results in Pr, Re, and Ja because it effectively addresses the unbalanced distribution. Meta-SurDiff also achieves lower standard deviations of Acc, with reductions of 0.6%, 0.8%, and 0.5% on the Cholec80, AutoLaparo, and M2Cai16 datasets, respectively, compared to the second-best method. We attribute these notable improvements to the effectiveness of Meta-SurDiff in addressing the surgical video issues of frame ambiguity and unbalanced phase distribution.
Table 4. The results ( $\%$ ) of Meta-SurDiff V.S. other competitors on M2Cai16 dataset. The best results are marked in bold.
Table 5. Results on the OphNet dataset using metrics following OphNet. The best results are marked in bold.
(2) Complexity analysis: We report the running time and CPU and GPU memory usage at training time on the Cholec80 dataset in Table 8. At test time, we set the diffusion timestep to $T = 1000$ . To accelerate prediction, we employ the DDIM [40] sampling strategy, effectively reducing the total sampling requirement to $T = 100$ . On one hand, we conduct comparative experiments using different diffusion timesteps and report results in Table 9. On the other hand, we also compare the complexity of Meta-SurDiff with other competitors. Overall, our model achieves a satisfactory balance between performance and real-time efficiency.
(3) Ablation study: We verify the effects of different components of our proposed Meta-SurDiff and report results in Table 7. CDM indicates whether the Classification Diffusion Model (CDM) is used to process the coarse phase representations, also known as conditional inputs. MLO indicates whether the Meta-Learning Objective (MLO) is employed to re-weight the intermediate loss function in Eq. 11. On one hand, equipping either CDM or MLO consistently improves recognition performance across all metrics. On the other hand, combining CDM and MLO further boosts recognition. Additionally, we observe that equipping CDM alone enhances metrics such as Pr, Re, and Ja that reflect the unbalanced distribution. This implies that CDM can improve robustness on unbalanced surgical videos thanks to its precise frame-level distribution modeling. Moreover, ablative results on the consumption of training resources can be found in Table 8.
Table 6. The results ( $\%$ ) of Meta-SurDiff V.S. other competitors on $\mathrm { { N u r V i D } }$ . The best results are marked in bold.
Table 7. Ablative results of Meta-SurDiff on the Cholec80 dataset.
Table 8. Ablation studies on Train Time, CPU and GPU Memories.
Table 9. Complexity and running time analysis on Cholec80 dataset.
(4) Hyper-parameter analysis: We analyze the impact of four factors closely related to Meta-SurDiff, including the fine-tuning scope, meta-dataset size, backbone selection, and surgical videos' background.
i) The impact of fine-tuning scope. As mentioned in the implementation details of Sec. 4.1, we initially pre-train Meta-SurDiff using the standard cross-entropy loss to obtain meaningful conditional embeddings. We therefore investigate performance versus the scope of optimized parameters during fine-tuning and report the results in Table 12, where C means we fine-tune only the classification diffusion model (CDM) parameters; LSTM+C means we fine-tune the parameters of both the LSTM and the CDM; ConvNeXt#+LSTM+C means we fine-tune the parameters of the last block of ConvNeXt, the LSTM, and the CDM; and ConvNeXt+LSTM+C means we fine-tune all parameters of ConvNeXt, the LSTM, and the CDM. We find that fine-tuning parameters within a proper scope is beneficial for improving performance, which may be attributed to the fact that the basic and general representations of ConvNeXt are essential for reliable OSP recognition with Meta-SurDiff.
Table 10. Results on the OphNet dataset regarding the effect of background frames using X-CLIP32 backbone in [20].
ii) The impact of meta-dataset size. We study performance versus the size of the meta dataset and report results in Fig. 3. We observe that our model achieves consistent recognition performance even with a small meta dataset, demonstrating its practicality for online surgical phase recognition applications.
iii) The impact of backbone selection. To verify that our model achieves consistent improvements across various backbones, we conduct experiments on the Cholec80 dataset and report results in Table 11. Additionally, Table 5 presents the results of Meta-SurDiff with different backbones on the OphNet dataset. As we can see, applying Meta-SurDiff to the X-CLIP model improves ACC-Top1 and ACC-Top5 by 0.9% and 0.8%, respectively, further demonstrating that our model is seamlessly compatible with existing backbones.
iv) The impact of surgical videos' background. We conduct experiments on the OphNet dataset to study the impact of background frames on OSP recognition, since OphNet is only partially annotated, and report the results in Table 10. In IGNORE BG, we directly ignore the unlabeled frames while training the model. In IGNORE BGL, we leverage the visual information of the unlabeled frames to train the model. In USE 1 BGL, we assign the unlabeled frames a single fixed pseudo label during training. According to the results, IGNORE BGL achieves the best performance since it effectively utilizes the unsupervised spatial-temporal information to learn the model. Notably, USE 1 BGL ranks last. We attribute this to the fact that assigning the same pseudo label may obscure the potentially discriminative representations in the background frames.
(5) OSP recognition in the low-data regime: In practice, annotating surgical videos is laborious and time-consuming; therefore, verifying the effectiveness of our model in the low-data regime is crucial. We conduct experiments on Cholec80 under two data-limited scenarios and report the results in Fig. 4. Meta-SurDiff maintains robust OSP recognition even when training frames are limited. Taking Fig. 4 (Top) as an example, we randomly ignore the labels of $\gamma \%$ of training frames and learn the model with the remaining fully supervised frames. That is, we use the spatial-temporal power of the LSTM to effectively capture meaningful phase representations, which benefits the training and inference of our classification diffusion model even when supervised information is not fully available.
Table 11. Results on the Cholec80 dataset using different backbones. The best results are marked in bold.
Table 12. The results ( $\%$ ) on the scope of optimized parameters under Cholec80 dataset.
Table 13. PIW $( \times ~ 1 0 0 )$ and t-test on Cholec80 dataset.
(6) Uncertainty estimation: We present the results of Meta-SurDiff on uncertainty estimation, evaluating frame-level recognition confidence over the entire Cholec80 test dataset in Table 13. Since prior surgical phase recognition methods do not explicitly model the uncertainty in surgical videos, we use PIW and PTST to quantify the model's uncertainty. Specifically, for each test frame, we generate 100 predictions through the reverse diffusion process, resulting in a $100 \times 7$ matrix, from which we compute PIW and PTST. After obtaining the PIW and PTST for each test frame, we divide the test dataset into two groups by the correctness of the majority-vote predictions and calculate the average PIW of the true phase within each group. We also split the test instances by t-test rejection status and compute the mean accuracy in each group. More details can be found in Appendix C.
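A minimal sketch of the PIW computation from such a prediction matrix; the central 95% interval bounds are an assumption here (the exact protocol is in the paper's Appendix C):

```python
import numpy as np

def piw(samples, lo=2.5, hi=97.5):
    """Per-class prediction-interval width from repeated stochastic predictions.
    samples: (S, C) array, e.g. 100 reverse-diffusion outputs over 7 phases."""
    return np.percentile(samples, hi, axis=0) - np.percentile(samples, lo, axis=0)
```

Identical predictions across the 100 reverse-diffusion runs yield a width of zero, while widely scattered predictions yield wide intervals, matching the intuition that narrower PIWs signal more confident recognition.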
As we can see, the mean PIW of the ground-truth label among the correct predictions is about $10 \times$ narrower than that of the incorrect predictions, indicating that Meta-SurDiff makes correct predictions with much smaller variation. Furthermore, when comparing the mean PIWs across different phases, we observe that the phase indexed 0 has the lowest accuracy at 39.0%, and its incorrect prediction interval is much smaller than that of the other phases. All these
YUFEI LI$^1$, JIRUI WU$^1$, LONG TIAN$^{1,*}$, LIMING WANG$^1$, XIAONAN LIU$^2$, ZIJUN LIU$^2$, XIYANG LIU$^1$
Figure 3. Results change with the size of meta dataset on Cholec80 dataset.
Figure 4. Performance changes with the number of training frames in Cholec80 dataset. Top: Results of ignoring training labels. Bottom: Results of ignoring training frames.
Figure 5. (a) and (b) are ribbon diagrams of the ground-truth labels, the baseline method, and our proposed Meta-SurDiff (from top to bottom) on the Cholec80 and M2Cai16 datasets. (c) The learned weight vectors on the Cholec80 dataset, where the x-axis indexes the frames of the current mini-batch and we only mark their labels for clarity. For example, frames from the 1st phase are far more numerous than those from the 0-th phase; accordingly, the weights for the 0-th-phase frames are higher than those for the 1st-phase frames.
evidence suggests that the uncertainty of phase 0 could be especially significant. Moreover, we observe that the accuracy of test instances rejected by the t-test is significantly higher than that of the non-rejected ones, both across the entire test dataset and within each phase. We point out that these metrics reflect the confidence of Meta-SurDiff in the correctness of its predictions and have the potential to be applied to mitigating risks during surgical evaluation. Such uncertainty estimation can be used to decide whether to accept the recognition results or to refer the instance to experts for further evaluation.
4.2.2. Qualitative Results and Analysis. (1) The predicted ribbon charts. To intuitively reveal the effectiveness of our model in handling the unbalanced phase distribution, we employ ConvNeXt+LSTM optimized with the standard cross-entropy loss as the baseline model, and compare the ribbon charts of the ground-truth labels, the baseline model, and our proposed Meta-SurDiff. The results verify the capability of Meta-SurDiff in reliable OSP recognition, as shown in Fig. 5 (a) and (b). Taking the predicted ribbon chart on the Cholec80 dataset as an example, the baseline model easily misclassifies P1 (CalotTriangleDissection) as P2 (ClippingCutting) in the middle of the video. The surgical phase names can be found in Appendix A. In contrast, our model effectively avoids such errors.
(2) The learned weight vectors. On the other hand, we also visualize the learned weight vectors of 100 training frames uniformly sampled from each phase and report the results in Fig. 5 (c). We find that the learned weights for minority phases in a surgical video are typically more prominent than those for majority phases, prompting the model to focus more on frames from minority phases and thereby reducing the risk of overfitting to majority phases. | Online surgical phase recognition has drawn great attention recently due to its potential downstream applications closely related to human life and health. Although deep models have made significant advances in capturing the discriminative long-term dependencies of surgical videos to achieve improved recognition, they rarely explore or model the uncertainty in surgical videos, which is crucial for reliable online surgical phase recognition. We categorize the sources of uncertainty into two types, frame ambiguity in videos and unbalanced distribution among surgical phases, which are inevitable in surgical videos. To address this pivotal issue, we introduce a meta-learning-optimized classification diffusion model (Meta-SurDiff) that takes full advantage of deep generative models and meta-learning in achieving precise frame-level distribution estimation for reliable online surgical phase recognition. For coarse recognition caused by ambiguous video frames, we employ a classification diffusion model to assess the confidence of recognition results at a finer-grained, frame-level granularity. For coarse recognition caused by unbalanced phase distribution, we use a meta-learning based objective to learn the diffusion model, thus enhancing the robustness of classification boundaries for different surgical phases. We establish the effectiveness of Meta-SurDiff in online surgical phase recognition through extensive experiments on five widely used datasets using more than four practical metrics.
The datasets include Cholec80, AutoLaparo, M2Cai16, OphNet, and NurViD, where OphNet comes from ophthalmic surgeries, NurViD is a daily-care dataset, and the others come from laparoscopic surgeries. We will release the code upon acceptance. | [
"cs.CV"
] |
# 1. Introduction
Generative large language models, as one of the core technologies in the field of artificial intelligence, exhibit tremendous potential in natural language processing He, Gao and Chen (2023), content creation Touvron, Martin, Stone and et al. (2023), and data analysis Achiam, Adler, Agarwal and et al. (2023). One of the factors constraining the development of generative large language models is the lack of high-quality datasets Wang, Zhang and Wang (2024). Currently, a mainstream solution is to utilize diffusion-based generative models to create datasets, a method that has demonstrated impressive results in domains such as images, audio, and video Leng, Zhang, Xiong and Chen (2024); Yang, Zeng, Liu and et al. (2024); Liu, Chen, Yuan and et al. (2023a). However, unlike data such as images, which consist of purely continuous pixel values with local spatial correlations, tabular data combines numerical fields (e.g., age, income) with categorical fields (e.g., gender, occupation), presenting more complex and diverse data characteristics. Tabular data is prevalent in various databases and constitutes a core component of data processing and analysis tasks in the information technology domain Fonseca and Bacao (2023); You, Ma, Ding, Kochenderfer and Leskovec (2020); Zheng and Charoenphakdee (2022). Constructing efficient tabular data generation models has become an important research topic, with applications ranging from guiding model training with generated data to protecting data privacy Assefa, Dervovic and et al. (2020); Hernandez, Epelde, Alberdi and et al. (2022). These technologies play a critical role in modern data management and analysis.
Figure 1: A simple example of causal awareness
Traditional statistical models, such as Gaussian mixture models, fit data using simple probability distributions, which limits their expressive capability Borisov, Sessler, Leemann and et al. (2023). Variational Autoencoders (VAEs) achieve end-to-end generation by introducing a latent variable encoding-decoding mechanism but are constrained by the blurry quality of their outputs Liu, Qian, Berrevoets and et al. (2023b). Subsequently, Generative Adversarial Network (GAN)-based models break through the clarity bottleneck via adversarial training, yet they still face challenges related to training instability Xu, Skoularidou, Cuesta-Infante and et al (2019). In recent years, diffusion models have demonstrated impressive performance in the field of generative modeling Song and Ermon (2019); Ho, Jain and Abbeel (2020); Rombach, Blattmann, Lorenz, Esser and Ommer (2022). These models capture complex data distributions through progressive noise perturbation and reverse denoising mechanisms. Researchers have been actively exploring ways to extend this powerful framework to tabular data Kim, Lee, Shin and et al. (2022); Kotelnikov, Baranchuk and et al. (2023); Zhang, Zhang, Shen and et al. (2024). However, since diffusion models independently model the conditional probability distributions of individual labels during the generation process, previous methods struggle to learn causal relationships between different labels, leading to counterfactual reasoning phenomena as illustrated in Figure 2. To address counterfactual issues in generative processes, DiffPO employs a tailored conditional denoising diffusion model to learn complex distributions Ma, Melnychuk, Schweisthal and et al. (2024). ECI achieves effective label representations by progressively updating event context representations Man, Dernoncourt and Nguyen (2024).
CausalDiffAE enhances the model's causal awareness by mapping high-dimensional data to latent variables with causal relationships through a learnable causal encoder Komanduri, Zhao, Chen and Wu (2024). CaPaint integrates causal reasoning with generative models to handle counterfactual inference and missing-data imputation in spatiotemporal dynamics Duan, Zhao, Mao, Wu, Xu, Ma, Wang, Wang, Li et al. (2024). While these methods significantly strengthen causal awareness, applying it to the tabular data generation domain remains challenging due to difficulties in multi-type data awareness and training instability.
Figure 2: A comparison of whether causal regularization is included. The results show the number of causally implausible instances (Husband-Female and Wife-Male) in the Adult dataset. Our method significantly reduces the number of such causally implausible cases.
Since the causal relationships in tabular data are often highly nonlinear (e.g., interpersonal relationships in the Adult dataset, or conditional causal effects in financial data), linear causal regularization fails to capture such complex relationships due to insufficient modeling capacity. Additionally, the assumption of enforced linear relationships between variables conflicts with the nonlinear architecture of diffusion models Uemura, Takagi, Takayuki and et al. (2022). To address these challenges, this paper proposes CausalDiffTab, a novel mixed-type diffusion framework for tabular data generation. The key distinction between CausalDiffTab and existing diffusion-based methods lies in its nonlinear causal modeling of mixed-type data relationships through directed acyclic graph (DAG) construction. The framework then aligns causal matrices with noise directions during the diffusion process, enabling a more faithful representation of complex causal dependencies. Figure 1 provides an easily understandable example of this process. Furthermore, to prevent excessive causal regularization in the early stages of training from hindering model convergence, we propose a hybrid adaptive causal regularization, which effectively ensures stable training while enhancing robustness to noise. We select six representative tasks involving complex tabular data generation from real-world scenarios and adopt seven evaluation metrics to assess the performance of CausalDiffTab in terms of fidelity, downstream task utility, and privacy preservation. All experimental results demonstrate that our method achieves significant performance improvements across multiple scenarios. Our main contributions are as follows:
- We propose a novel complex tabular data generation model called CausalDiffTab, which learns the joint distribution in the original data space through a continuous-time diffusion model.
- To ensure the generative capability of the model, we propose hybrid adaptive causal regularization, which effectively enhances the training stability and robustness of the model.
- We conduct extensive experiments to validate the effectiveness of our method, performing a comprehensive evaluation on seven datasets across seven metrics. The results demonstrate that CausalDiffTab outperforms the latest baselines in most tasks.
# 2. Related Works
Generative models represent a key research direction in the field of artificial intelligence, generating new samples by learning data distributions and achieving breakthroughs in areas such as image generation and natural language processing Ho et al. (2020); Devlin, Chang, Lee and et al. (2019); Zhang, Xiong, Xia and et al. (2025). VAEs are based on an encoder-decoder architecture, optimizing the latent space through variational inference Kingma and Welling (2022). However, their distribution assumptions limit the model's expressiveness, often resulting in blurry generated samples. Subsequently, GANs were proposed, which fit data distributions through an adversarial game between a generator and a discriminator Goodfellow, Pouget-Abadie, Mirza and et al. (2020). Wasserstein GAN (WGAN) introduced the Wasserstein distance to replace the Jensen-Shannon divergence used in traditional GANs, theoretically alleviating issues like gradient vanishing and training instability Arjovsky, Chintala and Bottou (2017). The StyleGAN series focuses on improving generation quality and controllability, and GAN-based models have been widely applied in fields such as image synthesis and data augmentation Karras, Laine and Aila (2019); Karras, Laine, Aittala and et al. (2020). Some methods also reduce overfitting and increase training stability by combining VAEs with GANs Yan, Huang, Yang and et al. (2025). Nevertheless, challenges like training instability and mode collapse remain difficult to fully resolve. Autoregressive generative models, such as GPT, rely on sequence prediction mechanisms to generate data element by element. While autoregressive models excel in natural language generation, they struggle with efficiency on complex tabular data and image generation, finding it difficult to capture high-dimensional structural dependencies.
Figure 3: A high-level overview of CausalDiffTab. The model constructs a causal correlation matrix by applying one-hot encoding to categorical features, thereby establishing interpretable causal relationships between different feature types. During the reverse denoising process, it dynamically aligns this causal matrix with the noise prediction matrix generated at each diffusion step via a causality-constrained mechanism.
In recent years, the powerful generative capabilities of diffusion models have been impressive Ho et al. (2020); Blattmann, Dockhorn, Kulal and et al. (2023); Meng, Rombach, Gao and et al. (2023); Song, Meng and Ermon (2021a). They achieve high-quality sample generation by simulating a forward diffusion process (gradually adding noise to corrupt the data) and a reverse denoising process (learning to recover the original data). The core idea originates from nonequilibrium thermodynamics, establishing a bidirectional mapping between the data distribution and the noise distribution step-by-step through a Markov chain. Building on this foundation, DDIM breaks through the Markov assumption by proposing a non-Markovian sampling strategy Song et al. (2021a). By designing an implicit noise transfer function, the number of generation steps is compressed to just a few dozen. Due to the powerful performance of diffusion models, researchers have begun to focus on how to extend this robust framework to tabular data generation. CoDi handles continuous and discrete variables separately using two diffusion models Lee, Kim and Park (2023). TabCSDI proposes three encoding methods (one-hot, analog bits, and feature tokenization) to process input data Zheng and Charoenphakdee (2022). TabSAL enhances the generation capability of tabular data through a small surrogate auxiliary language model Li, Qian, Tan and et al. (2024). GenerativeMTD generates pseudo-real data from real data to expand the training samples for training deep learning models Sivakumar, Ramamurthy, Radhakrishnan and Won (2023). Tabsyn synthesizes tabular data by utilizing a diffusion model in the latent space constructed by a VAE Zhang et al. (2024). TabDiff employs transformers to handle different input types, building an end-to-end generative framework Shi et al. (2025). However, all these models are based on modeling the distribution of the data while ignoring the causal relationships between the data.
Our proposed CausalDiffTab derives a causal matrix from the dataset and then prunes gradients that violate causal directions through a causality constraint, enabling the model to perceive causal relationships among the data and effectively guide the generation process.
# 3. Our Methods
In this section, we first introduce the relevant theories, followed by a detailed description of the methodological details of CausalDiffTab. It is a data-driven, diffusion-based model. The overall architecture is illustrated in Figure 3.
# 3.1. Preliminary
# 3.1.1. Diffusion Model
Diffusion Model represents a significant breakthrough in the field of generative models in recent years. It models data distributions through a gradual denoising mechanism, with its core idea originating from the diffusion process in nonequilibrium thermodynamics. The essence of the Diffusion Model comprises two key components: the forward noising process and the reverse generative process. The forward diffusion process is defined by a Markov chain, where the addition of noise at each step can be expressed as:
$$
q ( \mathbf { x } _ { t } \mid \mathbf { x } _ { t - 1 } ) = \mathcal { N } \left( \mathbf { x } _ { t } ; \sqrt { 1 - \beta _ { t } } \mathbf { x } _ { t - 1 } , \beta _ { t } \mathbf { I } \right) ,
$$
where $\beta _ { t }$ is the predefined noise variance, and ${ \bf x } _ { t }$ represents the noisy data at step $t$ . After $T$ steps of diffusion, the data approaches a standard Gaussian distribution. The reverse generative process, on the other hand, uses a neural network to predict the noise or data, progressively reconstructing the sample.
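As a minimal NumPy sketch of the forward process (the linear $\beta_t$ schedule and all constants here are illustrative assumptions, not the schedule used in our experiments), one Markov step and the closed-form jump from $\mathbf{x}_0$ to $\mathbf{x}_t$ can be written as:

```python
import numpy as np

def forward_step(x_prev, beta_t, rng):
    """One Markov step: q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I)."""
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * noise

def forward_jump(x0, betas, t, rng):
    """Closed-form q(x_t | x_0) using alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    alpha_bar = np.prod(1.0 - betas[:t])
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # assumed linear noise schedule
x0 = rng.standard_normal(8)
x_T = forward_jump(x0, betas, 1000, rng)
# after T steps, alpha_bar_T is tiny, so x_T is close to a standard Gaussian
```

After $T$ steps the cumulative product $\bar{\alpha}_T$ collapses toward zero, which is exactly why the data approaches a standard Gaussian.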
# 3.1.2. Hierarchical Prior Fusion
The theoretical foundation of the Hierarchical Prior Fusion is built upon the framework of the standard VAE, with the core objective of modeling the complexity of data distributions through multi-layer latent variable structures. The standard VAE optimizes by maximizing the Evidence Lower Bound (ELBO), expressed as:
$$
\mathcal { L } _ { \mathrm { V A E } } = \mathbb { E } _ { q ( z | x ) } \left[ \log p ( x | z ) \right] - \beta \cdot D _ { \mathrm { K L } } \left( q ( z | x ) \| p ( z ) \right) ,
$$
where $z$ denotes the latent variable, and $\beta$ is the weighting coefficient for the KL divergence term. However, the single-layer latent variable structure of standard VAEs struggles to effectively capture multi-scale features of data. To address this, Hierarchical VAEs introduce multi-layer latent variables $z _ { 1 } , z _ { 2 } , \dots , z _ { L }$ to model the data distribution in stages:
$$
p _ { \theta } ( x ) = \int \prod _ { l = 1 } ^ { L } p _ { \theta } ( z _ { l } | z _ { l + 1 } ) \cdot p _ { \theta } ( x | z _ { 1 } ) d z _ { 1 } \dots d z _ { L } ,
$$
where each layer $z _ { l }$ corresponds to feature representations at different abstraction levels. To optimize this structure, Hierarchical VAEs use a progressive training strategy: initially, higher-layer latent variables $z _ { l \geq 2 }$ are frozen, and only $z _ { 1 }$ is optimized for local features; as training progresses, higher layers are unfrozen and semantic constraints are incorporated, enabling a fusion from low-level to high-level semantics.
# 3.1.3. Directed Acyclic Graph
A DAG is a fundamental structure in graph theory, consisting of a finite set of vertices and a set of directed edges connecting these vertices. Formally, a DAG is defined as a pair $G = ( V , E )$, where $V$ is a finite set of vertices and $E \subseteq V \times V$ is a set of directed edges, such that no directed cycles exist. The absence of cycles ensures that there is no path in the graph that starts and ends at the same vertex. In other words, for any vertex $v \in V$, there does not exist a sequence of edges leading from $v$ back to itself.
# 3.2. Causal Matrix
Before training begins, the data is first processed through the Causal Extraction module. The causal relationships among variables in tabular data are often unknown, and manually annotated causal graphs are difficult to obtain, especially in the presence of high-dimensional or complex nonlinear relationships. Therefore, this paper adopts the NOTEARS framework to automatically learn causal structures among variables from observational data. By transforming the traditional combinatorial optimization problem into a differentiable continuous optimization problem, NOTEARS can effectively discover causal graphs that satisfy the DAG constraint without relying on expert prior knowledge.
Given the complex nonlinear relationships present in tabular data, we employ the MLP-based nonlinear extension of NOTEARS, which uses a neural network to model the relationships between each variable and the others. The objective function is formulated as:
$$
\operatorname* { m i n } _ { \omega } \frac { 1 } { 2 n } \left\| X - \mathcal { F } _ { \omega } ( X ) \right\| _ { F } ^ { 2 } + \alpha \left\| \omega \right\| _ { 1 } + \beta \left\| \omega \right\| _ { 2 } ^ { 2 }
$$
The learnable parameters in the model are denoted as $\omega$ , and the nonlinear mapping is modeled as $\mathcal { F } _ { \omega } ( \cdot )$ . Meanwhile, $\alpha$ and $\beta$ are adopted as regularization coefficients for sparsity constraint and weight decay, respectively. To ensure that the learned causal structure is semantically valid, interpretable, and consistent with the nature of causal relationships among variables in the real world, the model introduces a DAG constraint:
$$
h ( \omega ) = \mathrm { T r } \left( e ^ { \mathcal { H } ( \omega ) } \right) - d = 0 ,
$$
where $\mathcal { H } ( \omega )$ is a matrix derived from the model parameters $\omega$, $\mathrm { T r } ( \cdot )$ denotes the trace of a matrix, $e ^ { \mathcal { H } ( \omega ) }$ represents the matrix exponential, and $d$ is the dimensionality of the variable space. Finally, we obtain a weight matrix $A \in \mathbb { R } ^ { d \times d }$ that represents the causal strength among variables. By setting a threshold $\tau$ (e.g., 0.3), this matrix is transformed into a binary causal graph $G \in \{ 0 , 1 \} ^ { d \times d }$, defined as:
$$
G _ { i j } = \left\{ { \begin{array} { l l } { 1 , } & { { \mathrm { i f ~ } } | A _ { i j } | > \tau } \\ { 0 , } & { { \mathrm { o t h e r w i s e } } } \end{array} } \right.
$$
Each row in this causal matrix indicates the set of parent nodes for the corresponding variable. This structured causal graph is then introduced as prior knowledge in subsequent modeling stages, enhancing both the interpretability and performance of the generative model.
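The acyclicity penalty and the thresholding step above can be sketched in NumPy as follows (a simplified illustration, not the NOTEARS implementation itself; the truncated-series matrix exponential and the example matrix are our own illustrative choices):

```python
import numpy as np

def matrix_exp(M, terms=30):
    """Truncated power series e^M = sum_k M^k / k!; adequate for small d."""
    E, P = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        E = E + P
    return E

def dag_penalty(A):
    """Acyclicity penalty h(A) = Tr(exp(A ∘ A)) - d; zero iff the graph is acyclic."""
    return np.trace(matrix_exp(A * A)) - A.shape[0]

def binarize(A, tau=0.3):
    """Threshold causal strengths into the binary graph G_ij = 1{|A_ij| > tau}."""
    G = (np.abs(A) > tau).astype(int)
    np.fill_diagonal(G, 0)
    return G

A = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.5],
              [0.1, 0.0, 0.0]])  # the weak back-edge (0.1) falls below tau
G = binarize(A)                  # each row encodes one variable's retained edges
```

Here `dag_penalty(G)` evaluates to zero for the thresholded chain, while a two-cycle would yield a strictly positive penalty, which is what the continuous optimization drives to zero.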
# 3.3. Architecture
In this section, we present the overall framework and training mechanism of CausalDiffTab, which integrates causal discovery with a diffusion model to jointly model numerical and categorical features in tabular data. The causal matrix is obtained via post-processing. During training, this causal matrix is matched with the noise matrix generated by the denoising network through causal pair matching, producing the loss terms $\mathcal{L}_{\mathrm{causal}}^{\mathrm{c}}$ and $\mathcal{L}_{\mathrm{causal}}^{\mathrm{n}}$. These terms serve as regularization to guide the model in perceiving causal relationships across different categories.
For numerical features, we model the forward process $X ^ { n }$ using a stochastic differential equation (SDE) of the form:
$$
d X _ { t } = f ( X _ { t } , t ) d t + g ( t ) d W _ { t } ,
$$
where $f ( \cdot , t ) : \mathbb { R } ^ { M _ { n } } \to \mathbb { R } ^ { M _ { n } }$ denotes the drift coefficient, $g ( \cdot ) : \mathbb { R } \to \mathbb { R }$ represents the diffusion coefficient, and $W _ { t }$ is a standard Wiener process Song, Sohl-Dickstein, Kingma and et al. (2021b); Karras, Aittala, Aila and et al. (2022). The forward equation for numerical features is given by:
$$
\begin{array} { r } { x _ { \mathrm { n } } ^ { t } = x _ { \mathrm { n } } ^ { 0 } + \sigma _ { \mathrm { n } } ( t ) \varepsilon , \quad \varepsilon \sim \mathcal { N } ( 0 , I _ { M _ { \mathrm { n } } } ) , } \end{array}
$$
and the reversal can then be formulated accordingly as:
$$
d x _ { \mathrm { n } } = - \left[ \frac { d } { d t } \sigma _ { \mathrm { n } } ( t ) \right] \sigma _ { \mathrm { n } } ( t ) \nabla _ { x } \log p _ { t } ( x _ { \mathrm { n } } ) d t ,
$$
where we use $\mu _ { \mathrm { n } }$ to denote the numerical component of the denoising network's output. It is trained by minimizing the denoising loss:
$$
L _ { \mathrm { n } } ( \theta , \rho ) = \mathbb { E } _ { x _ { 0 } \sim p ( x _ { 0 } ) } \mathbb { E } _ { t \sim U [ 0 , 1 ] } \mathbb { E } _ { \varepsilon \sim \mathcal { N } ( 0 , I ) } \left\| \mu _ { \mathrm { n } } ^ { \theta } ( x _ { t } , t ) - \varepsilon \right\| _ { 2 } ^ { 2 } .
$$
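A toy NumPy sketch of this objective (the schedule $\sigma_{\mathrm{n}}(t)=t$ and the oracle denoiser standing in for $\mu_{\mathrm{n}}^{\theta}$ are illustrative assumptions, not our trained network):

```python
import numpy as np

def sigma_n(t):
    return t  # assumed noise schedule, purely illustrative

def denoising_loss(mu_theta, x0, t, rng):
    """Single-sample Monte Carlo estimate of L_n = || mu_theta(x_t, t) - eps ||^2."""
    eps = rng.standard_normal(x0.shape)
    x_t = x0 + sigma_n(t) * eps          # forward equation for numerical features
    return float(np.mean((mu_theta(x_t, t) - eps) ** 2))

rng = np.random.default_rng(1)
x0 = rng.standard_normal(16)

# an oracle that recovers eps exactly drives the loss to (numerically) zero
oracle = lambda x_t, t: (x_t - x0) / sigma_n(t)
loss = denoising_loss(oracle, x0, t=0.5, rng=rng)
```

Any predictor that fails to recover $\varepsilon$, such as one that always outputs zeros, leaves a strictly positive residual, which is the signal the network trains against.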
For categorical features, we first apply one-hot encoding to them. The forward diffusion process is defined as smoothly interpolating between the data distribution $\operatorname { c } ( \cdot ; x )$ and the target distribution $\operatorname { c } ( \cdot ; m )$ , where all probability mass is concentrated on the [MASK] state:
$$
q ( x _ { t } | x _ { 0 } ) = \mathtt { c } ( x _ { t } ; \alpha _ { t } x _ { 0 } + ( 1 - \alpha _ { t } ) m ) ,
$$
where $\alpha _ { t } \in [ 0 , 1 ]$ is a strictly decreasing function of $t$, with $\alpha _ { 0 } \approx 1$ and $\alpha _ { 1 } \approx 0$; it gives the probability that the real data $x _ { 0 }$ remains unmasked at time step $t$. This forward process entails the step transition probabilities $q ( x _ { t } | x _ { s } ) = \mathrm { c } ( x _ { t } ; \alpha _ { t | s } x _ { s } + ( 1 - \alpha _ { t | s } ) \mathbf { m } )$, where $\alpha _ { t | s } = \frac { \alpha _ { t } } { \alpha _ { s } }$. Under the hood, this transition means that at each diffusion step the data is perturbed to the [MASK] state with probability $( 1 - \alpha _ { t | s } )$ and, once perturbed, remains there until $t = 1$. The diffusion model $\mu _ { \theta }$ aims to progressively uncover each column from the masked state, with the true posterior given by:
$$
\begin{array} { r } { q ( \mathbf { X } _ { s } | \mathbf { X } _ { t } , \mathbf { X } _ { 0 } ) = \left\{ \begin{array} { l l } { \mathrm { c } ( \mathbf { x } _ { s } ; \mathbf { x } _ { t } ) , } & { \mathbf { x } _ { t } \neq \mathbf { m } , } \\ { \mathrm { c } \left( \mathbf { x } _ { s } ; \frac { ( 1 - \alpha _ { s } ) \mathbf { m } + ( \alpha _ { s } - \alpha _ { t } ) \mathbf { x } _ { 0 } } { 1 - \alpha _ { t } } \right) , } & { \mathbf { x } _ { t } = \mathbf { m } . } \end{array} \right. } \end{array}
$$
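A small simulation of this absorbing forward process for one categorical column (NumPy; the [MASK] token id and the concrete $\alpha$ values are illustrative assumptions):

```python
import numpy as np

MASK = -1  # illustrative id for the [MASK] state

def mask_forward(x_s, alpha_ts, rng):
    """One transition q(x_t | x_s): mask with probability 1 - alpha_{t|s}.
    Masking is absorbing: entries already equal to MASK stay MASK."""
    keep = rng.random(x_s.shape) < alpha_ts
    return np.where(keep, x_s, MASK)

rng = np.random.default_rng(2)
x0 = rng.integers(0, 5, size=10_000)                 # a 5-category column
x_mid = mask_forward(x0, alpha_ts=0.7, rng=rng)      # roughly 30% of entries masked
x_end = mask_forward(x_mid, alpha_ts=0.0, rng=rng)   # alpha_1 ~ 0: everything masked
```

Each surviving entry is either untouched or absorbed into [MASK], never corrupted into another category, which is what makes the reverse process a progressive "uncovering."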
Increasing the discretization resolution helps approximate a tighter ELBO. Therefore, we optimize the likelihood bound $ { \mathcal { L } } _ { \mathrm { c } }$ in the continuous-time limit, where $\alpha _ { t } ^ { \prime }$ is the first-order derivative of $\alpha _ { t }$:
$$
\mathcal { L } _ { \mathrm { c } } ( \boldsymbol { \theta } , \boldsymbol { k } ) = \mathbb { E } _ { q } \int _ { t = 0 } ^ { t = 1 } \frac { \alpha _ { t } ^ { \prime } } { 1 - \alpha _ { t } } 1 _ { \{ x _ { t } = m \} } \log \langle \mu _ { \boldsymbol { \theta } } ^ { \mathrm { c } } ( \mathbf { x } _ { t } , t ) , \mathbf { x } _ { 0 } ^ { \mathrm { c } } \rangle d t ,
$$
First, the encoded data is fed into the causal extraction module, as introduced in Section 3.2, resulting in a causal matrix that captures the causal relationships within the data. Second, causal loss is obtained by matching the model’s predicted noise values with the causal matrix through causal pair alignment. Here, we compute the causal loss by calculating the outer product. By performing an outer product operation on the noise matrix generated by the model, we obtain a matrix that reflects the interaction strength between each pair of features. This representation not only preserves the information of the original features but also reveals potential causal directions. Therefore, by combining this with a pre-extracted causal matrix, we can measure the inconsistency between the predictions and the known causal structure—specifically, retaining correlations aligned with the allowed causal directions while suppressing those that violate causality. This helps guide the model to learn data generation mechanisms that conform to causal principles during training, and also effectively enhances the model’s robustness. Finally, to better model different types of features, we separately compute the causal losses for numerical and categorical features. Specifically, the causal loss function is defined as follows:
$$
\mathcal { L } _ { \mathrm { c a u s a l } } = \lambda \cdot \mathbb { E } _ { \mathrm { b a t c h } } \left[ \frac { 1 } { | \mathcal { M } | } \sum _ { ( i , j ) \in \mathcal { M } } ( \hat { \epsilon } _ { i } \cdot \hat { \epsilon } _ { j } ) \right] ,
$$
where $\lambda$ is the regularization weight, $\hat { \epsilon } = [ \hat { \epsilon } _ { 1 } , \hat { \epsilon } _ { 2 } , \dots , \hat { \epsilon } _ { d } ]$ represents the model's predicted noise values (with $d$ feature dimensions), $\mathcal { M }$ denotes the causal mask, and $| { \mathcal { M } } |$ indicates the total number of non-zero elements in the mask. The total loss function integrates the diffusion model's base loss with the causal regularization term, defined as:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { t o t a l } } = \mathcal { L } _ { \mathrm { c } } + L _ { \mathrm { n } } + \lambda ( \mathcal { L } _ { \mathrm { c a u s a l } } ^ { \mathrm { c } } + \mathcal { L } _ { \mathrm { c a u s a l } } ^ { \mathrm { n } } ) . } \end{array}
$$
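A minimal sketch of the masked outer-product loss $\mathcal{L}_{\mathrm{causal}}$ (NumPy; the batch, the mask contents, and the $\lambda$ value are illustrative assumptions):

```python
import numpy as np

def causal_loss(eps_hat, causal_mask, lam=0.1):
    """L_causal = lam * E_batch[ (1/|M|) * sum_{(i,j) in M} eps_i * eps_j ].
    causal_mask marks feature pairs whose interaction violates the allowed directions."""
    outer = np.einsum('bi,bj->bij', eps_hat, eps_hat)  # per-row outer products, (B, d, d)
    m = causal_mask.astype(bool)
    per_row = outer[:, m].mean(axis=1)                 # average over the |M| masked pairs
    return float(lam * per_row.mean())

rng = np.random.default_rng(3)
eps_hat = rng.standard_normal((32, 4))  # predicted noise for a batch, d = 4 features
mask = np.zeros((4, 4), dtype=int)
mask[0, 2] = mask[2, 0] = 1             # assume the (0, 2) interaction is forbidden
loss = causal_loss(eps_hat, mask)
```

The outer product exposes pairwise interaction strength in the predicted noise; masking keeps only the forbidden directions, so minimizing the loss suppresses exactly those correlations.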
# 3.4. Hybrid Adaptive Causal Regularization
The integration of causal loss with adaptive weighting mechanisms in diffusion models originates from the principle of Hierarchical Prior Fusion in generative models. Analogous to hierarchical variational inference in VAEs, this theory emphasizes improving generation quality by injecting multi-level prior knowledge in stages: during the initial training phase, the model prioritizes learning low-level local features (e.g., texture, edges) and later integrates high-level regularization (e.g., causal relationships and physical laws). Specifically, our hybrid adaptive causal regularization jointly considers the loss fluctuation $\Delta L$ at each step and the noise level $\sigma _ { \mathrm { m e a n } }$ to control the weighting of causal regularization:
$$
w _ { \mathrm { h y b r i d } } = w _ { \mathrm { m a x } } \cdot \frac { 1 } { 2 } \left( e ^ { - | \Delta L | } + \frac { 1 } { 1 + \sigma _ { \mathrm { m e a n } } } \right) ,
$$
when $\sigma _ { \mathrm { m e a n } } \to 0$ (low noise) and $\Delta L \to 0$ (stable training), $w _ { \mathrm { h y b r i d } } \to w _ { \mathrm { m a x } }$, applying strong regularization. Conversely, when $\sigma _ { \mathrm { m e a n } } \to \infty$ or $\Delta L \to \infty$ (violent oscillations),
Table 1 Overview of dataset characteristics. The column "# Num" indicates the count of numerical features, while "# Cat" represents the count of categorical features. Additionally, "# Max Cat" specifies the maximum number of categories found within any single categorical feature in the dataset.
# Algorithm 1 Hybrid Adaptive Causal Regularization
Require: $\hat{x}$ (predictions), $x_t$ (noisy data), $\sigma$ (noise scale), $b_{\mathrm{c}}$ (categorical flag)
Ensure: Causal regularization loss $L_{\mathrm{causal}}$
1: if categorical features then
2: $p \gets \mathrm{Softmax}(\hat{x})$
3: $G \gets p^{\top} p$ {Categorical gradient outer product}
4: else
5: $G \gets \left( \frac{x_t - \hat{x}}{\sigma} \right)^{\top} \left( \frac{x_t - \hat{x}}{\sigma} \right)$ {Numerical gradient outer product}
6: end if
7: $M \gets \mathrm{CausalMask}(G)$ {Apply causal masking}
8: $L_{\mathrm{base}} \gets \mathrm{Mean}(G \odot M)$ {Base consistency loss}
9: $\Delta L \gets | L_{\mathrm{base}} - \mathrm{EMA}(L_{\mathrm{base}}) |$ {Loss fluctuation}
10: $\sigma_{\mu} \gets \mathrm{Mean}(\sigma)$ {Noise level}
11: $w \gets \frac{w_{\mathrm{max}}}{2} \left( e^{-\Delta L} + \frac{1}{1 + \sigma_{\mu}} \right)$ {Hybrid weight}
12: $\mathrm{EMA}(L_{\mathrm{base}}) \gets \alpha L_{\mathrm{base}} + (1 - \alpha) \mathrm{EMA}(L_{\mathrm{base}})$ {Update EMA}
13: return $L_{\mathrm{causal}} = L_{\mathrm{base}} \cdot w$
$w _ { \mathrm { h y b r i d } } \to 0$, avoiding interference with model learning. During the forward process of diffusion models, early denoising steps correspond to high-noise microscopic state spaces. Imposing strong causal regularization directly at this stage may lead to erroneous causal associations (e.g., spurious correlations) due to insufficient decoupling of low-level features. As denoising progresses and the feature space becomes clearer, gradually increasing the weight of the causal loss naturally embeds causal priors into the generative process as high-level semantic regularization. This strategy also enhances the model's noise robustness. Since the hybrid adaptive noise factor $\frac { 1 } { 1 + \sigma _ { \mathrm { m e a n } } }$ is a monotonically decreasing function of $\sigma _ { \mathrm { m e a n } }$, we can derive:
$$
\frac { \partial } { \partial \sigma _ { \mathrm { m e a n } } } \left( \frac { 1 } { 1 + \sigma _ { \mathrm { m e a n } } } \right) = - \frac { 1 } { ( 1 + \sigma _ { \mathrm { m e a n } } ) ^ { 2 } } < 0 ,
$$
a higher noise level leads to a lower regularization weight, which prevents overfitting to the noise. This strategy not only avoids optimization instability caused by competing multi-task objectives in early training stages but also ultimately achieves a balance between generation quality and causal consistency. To better understand the hybrid adaptive causal regularization, we summarize the pseudocode in Algorithm 1.
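The weighting rule of Algorithm 1 (line 11) reduces to a few lines; the concrete input values below are illustrative:

```python
import math

def hybrid_weight(delta_L, sigma_mean, w_max=1.0):
    """w_hybrid = (w_max / 2) * (exp(-|dL|) + 1 / (1 + sigma_mean))."""
    return 0.5 * w_max * (math.exp(-abs(delta_L)) + 1.0 / (1.0 + sigma_mean))

# stable training and low noise -> weight approaches w_max (strong regularization)
w_strong = hybrid_weight(delta_L=0.0, sigma_mean=0.0)
# oscillating loss and high noise -> weight decays toward 0 (regularization backs off)
w_weak = hybrid_weight(delta_L=10.0, sigma_mean=100.0)
```

Both factors are bounded in $(0, 1]$, so the weight is always in $(0, w_{\mathrm{max}}]$ and decreases monotonically in both the loss fluctuation and the noise level.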
# 4. Experiments
We evaluate CausalDiffTab by comparing it with various generative models across multiple datasets and metrics, ranging from data fidelity and privacy to downstream task performance. Additionally, we conduct ablation studies to investigate the effectiveness of each component of CausalDiffTab.
# 4.1. Experimental Settings
We use PyTorch Paszke, Gross, Massa and et al. (2019) to implement all the algorithms. All experiments are conducted on an NVIDIA 3090 GPU. We adopt the same evaluation methodology and hyperparameters used in prior approaches Shi et al. (2025), assessing the quality of synthetic data using seven distinct metrics divided into three groups: 1) Fidelity: shape, trend, $\alpha$-precision, $\beta$-recall, and detection measures evaluate how effectively the synthetic data reconstructs the distribution of the real data; 2) Downstream tasks: Machine Learning Efficacy (MLE) demonstrates the potential of the model to enhance downstream tasks; 3) Privacy: the Distance to Closest Record (DCR) score assesses the level of privacy protection by measuring the degree of similarity between the synthetic data and the training data. All reported experimental results are averaged over 20 randomly sampled synthetic datasets generated by the best-validated models.
# 4.2. Datasets
Our evaluation spans the following real-world structured datasets: Adult, Default, Shopper, Magic, Fault, Beijing, News, and Diabetes. Each dataset contains a combination of numerical and categorical features. These datasets are further categorized by their native machine learning objectives, falling under either classification or regression tasks. Comprehensive descriptions of their characteristics, including attribute distributions and task definitions, are documented in Table 1.
Table 2 Performance comparison of different methods based on shape similarity error rates $( \% )$ . Lower error rates indicate superior performance. Bold Face highlights the best score for each dataset. OOM stands for "Out Of Memory."
Table 3 Performance comparison of different methods based on trend error rates $( \% )$ . Lower error rates reflect higher performance.
# 4.3. Baselines
We compare CausalDiffTab with popular synthetic tabular data generation methods categorized into four groups: 1) GAN-based method: CTGAN Xu et al. (2019); 2) VAE-based methods: TVAE Xu et al. (2019), GOGGLE Liu et al. (2023b); 3) Autoregressive language model: GReaT; 4) Diffusion-based methods: STaSy Kim, Lee and Park (2023), CoDi Lee et al. (2023), TabDDPM Kotelnikov et al. (2023), and TABDIFF Shi et al. (2025).
# 4.4. Data Fidelity
We first evaluate the shape and trend metrics. The shape metric measures the ability of synthetic data to capture the marginal density of each individual column, while the trend metric assesses its capability to replicate correlations between different columns in real data. Detailed results are presented in Table 2 and Table 3. Analysis reveals that CausalDiffTab outperforms all baselines across six datasets in terms of shape metrics. It surpasses the current state-of-the-art method with an average improvement of $14.3\%$, demonstrating its superior performance in maintaining marginal distributions for various attributes across datasets. In terms of trend metrics, CausalDiffTab also demonstrates excellent and robust performance. Notably, on the Default task, CausalDiffTab achieves a $43.71\%$ improvement. This indicates that our method is significantly more effective at capturing complex relationships between columns compared to previous approaches, showcasing its superior modeling capability.
Furthermore, we evaluate fidelity metrics including $\alpha$-Precision, $\beta$-Recall, and C2ST scores. C2ST reflects the difficulty of distinguishing synthetic data from real data. The results are shown in Table 6. CausalDiffTab performs well across various tasks, especially on the Adult task, where it achieves an improvement of $1.31\%$. Across all results, the average improvement is $0.57\%$. This demonstrates that the causal regularization enhances the causal consistency of the generated data, making it more aligned with the conditional probability distribution of the real data. $\alpha$-Precision measures the quality of common data characteristics, where higher scores indicate greater faithfulness of synthetic data to real data. $\beta$-Recall assesses the extent to which synthetic data covers the distribution of real data. Due to space limitations, detailed results are provided in the supplementary material. On average, CausalDiffTab outperforms other methods across all three metrics.
Table 4 Performance comparison of different methods based on $\alpha$-Precision scores. Higher scores reflect better performance.
Table 5 Performance comparison of different methods based on $\beta$-Recall scores. Higher scores reflect better performance.
# 4.5. Data Privacy
As the amount of data required for training generative models increases, more and more sensitive records face potential leakage risks. Synthetic tabular data can serve as a privacy-preserving alternative for AI training. In this section, we evaluate the privacy-preserving capability of the model using the DCR score as the evaluation criterion Shi et al. (2025). The core logic involves assessing the similarity between generated data and original data to determine whether the model might expose sensitive information. The results are shown in Table 7. CausalDiffTab outperforms the latest baselines on most datasets, highlighting its strong capability for privacy protection.
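As a rough NumPy sketch of a DCR-style check (our own simplified variant for illustration, not the exact protocol of Shi et al. (2025)): score the fraction of synthetic rows whose nearest neighbor lies in the training set rather than a held-out set, where values near 50% suggest the generator is not memorizing training records.

```python
import numpy as np

def dcr_score(synth, train, test):
    """Fraction of synthetic rows closer (L2) to the training set than to the test set."""
    d_train = np.linalg.norm(synth[:, None, :] - train[None, :, :], axis=-1).min(axis=1)
    d_test = np.linalg.norm(synth[:, None, :] - test[None, :, :], axis=-1).min(axis=1)
    return float(np.mean(d_train < d_test))

rng = np.random.default_rng(4)
train = rng.standard_normal((200, 3))
test = rng.standard_normal((200, 3))
fresh = rng.standard_normal((200, 3))  # independent samples: score near 0.5
copies = train + 1e-6                  # near-duplicates of training rows: score near 1.0
```

A score drifting toward 100% is the red flag: the synthetic rows sit systematically closer to training records than chance would allow.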
# 4.6. Performance on Downstream Tasks
A key advantage of high-quality synthetic tabular data lies in its ability to substitute for real data in model training. In this section, we evaluate CausalDiffTab's capability to support downstream task learning through MLE measurement. The experimental design strictly follows domain-standard protocols Kingma, Salimans, Poole and et al. (2021); Borisov et al. (2023); Shi et al. (2025): first, we train the CausalDiffTab generative model on the original real dataset. We then construct a synthetic dataset of the same scale as the original data using this trained model, which is employed to train an XGBoost classifier or regressor. For classification tasks, we calculate AUC scores, while regression tasks are quantified using RMSE to measure prediction deviations. Finally, we assess the performance differences between models trained on synthetic data and those trained on real data. The experimental results presented in Table 8 demonstrate that CausalDiffTab consistently achieves the best performance. Although the improvements are marginal, the results closely approach those obtained with real data. This validates that our method effectively corrects logical inconsistencies in the data, enabling the synthetic dataset to better approximate the outcomes of training on real data.
# 4.7. Ablation Studies
# 4.7.1. Nonlinear Causal awareness
We conduct ablation studies to evaluate the effectiveness of the nonlinear causal awareness proposed in Section 3.3. We compare the results of incorporating nonlinear causal awareness with those of linear causal awareness. Specifically, we adopt notears_linear as the linear causal extractor and notears_nolinear as the nonlinear causal extractor; both methods use the same regularization strategies and number of iterations. The results are shown in Table 9. They show that introducing nonlinear causal regularization in diffusion models allows a more flexible capture of complex interactions between variables. Nonlinear regularization effectively mitigates the over-simplification of data distributions caused by linear assumptions by dynamically adjusting the strength and direction of couplings. This flexibility enables the generation process to better align with the underlying nonlinear causal mechanisms in real-world data, thereby significantly improving the causal plausibility of generated samples and their suitability for downstream tasks, while maintaining data diversity.
Table 6 Performance comparison of different methods based on detection score (C2ST) using logistic regression classifier. Higher scores reflect superior performance.
Table 7 Performance comparison of different methods based on DCR score. The DCR score measures the likelihood that a generated data sample resembles the training set more closely than the test set. A score closer to 50% is preferable.
Table 8 Evaluation of MLE (Machine Learning Efficiency). For classification tasks, the higher the AUC score, the better the synthetic data quality; for regression tasks, the lower the RMSE, the better the quality.
Table 9 Two sets of ablation experiment results: 1) the ablation study of CausalDiffTab using linear causal regularization and nonlinear causal regularization, respectively; 2) the ablation study of CausalDiffTab using Fixed Causal Regularization (FCR) and Hybrid Adaptive Causal Regularization (HACR), respectively. Bold Face highlights the best score for each dataset.
Figure 4: Comparison of training loss results and training evaluation (average of shape and trend) results with and without causal regularization on Shoppers tasks.
Figure 5: The results on Beijing tasks.
Figure 6: Our method better fits the distribution of the original data.
# 4.7.2. Hybrid Adaptive Causal Regularization
We further conduct a second ablation study to evaluate the effectiveness of the hybrid adaptive causal regularization proposed in Section 1. We compare the results obtained with hybrid adaptive causal regularization against those obtained with fixed-value weights. The experiment employs notears_nolinear as the nonlinear causal extractor; all other hyperparameters remain consistent. The results are presented in Table 9. They show that although adopting Fixed Causal Regularization has minimal impact on the $\alpha$-Precision and $\beta$-Recall metrics, it leads to significant declines in the other indicators, particularly the shape and trend metrics. This indicates that introducing causal regularization prematurely can impair the model's modeling and generation performance. In contrast, the hybrid adaptive causal regularization employed by CausalDiffTab effectively avoids the optimization instability caused by competition among multi-task objectives during early training stages, ultimately achieving a balance between generation quality and causal consistency.
# 5. Visualizations of Data
We provide partial visualizations of the training process, as shown in Figures 4 and 5. These visualizations consistently indicate that the causal-aware module adopted by CausalDiffTab effectively accelerates model convergence and enhances training stability. We also present more detailed visualizations for some of the results, as shown in Figure 6.

Training data has been proven to be one of the most critical components in training generative AI. However, obtaining high-quality data remains challenging, with data privacy issues presenting a significant hurdle. To address the need for high-quality data, synthetic data has emerged as a mainstream solution, demonstrating impressive performance in areas such as images, audio, and video. Generating mixed-type data, especially high-quality tabular data, still faces significant challenges. These primarily include its inherent heterogeneous data types, complex inter-variable relationships, and intricate column-wise distributions. In this paper, we introduce CausalDiffTab, a diffusion model-based generative model specifically designed to handle mixed tabular data containing both numerical and categorical features, while being more flexible in capturing complex interactions among variables. We further propose a hybrid adaptive causal regularization method based on the principle of Hierarchical Prior Fusion. This approach adaptively controls the weight of causal regularization, enhancing the model's performance without compromising its generative capabilities. Comprehensive experiments conducted on seven datasets demonstrate that CausalDiffTab outperforms baseline methods across all metrics. Our code is publicly available at: https://github.com/Godz-z/CausalDiffTab.
# 1 INTRODUCTION
The increasing amount of data in the shape of high-dimensional vectors (e.g., embeddings from large language models) has resulted in the surge of a new type of vector-centric system known as Vector Databases (VecDBs) [19,29,30]. VecDBs like Weaviate, Milvus, Qdrant, and ChromaDB offer storage, management, and efficient querying of high-dimensional vectors [40,44]. User demand for vector capabilities is such that existing database systems have added them natively (MongoDB, Redis) or via extensions (e.g., pgvector in PostgreSQL, DuckDB-VSS in DuckDB).
A core feature of VecDBs is the efficient querying of vectors through Vector Similarity Search (VSS). VSS consists of finding the vectors within a collection that are most similar to a query vector based on a distance or similarity metric. The latter is a core component of various applications, such as RAG pipelines. However, on a large scale, VSS poses challenges due to the large number of computations and storage needed to obtain exact answers to a query. To overcome this, applications gave up exactness as generally approximate answers are “good enough.” The latter opened opportunities to accelerate VSS by using approximate indexes [16,18,25,27,37], quantization techniques that reduce the size of the vectors [3,13,15,20], and optimizations to the distance evaluation [14,21,45,48].
Table 1: Relative comparison of queries-per-second (QPS) and queries-per-dollar (QP$) given by AWS cloud instances on the OpenAI/1536 dataset on various vector search scenarios. (++) indicates the best, (+) is <20% away from the best, (·) is >20% but <2x from the best, and (-) and (--) are >2x and >3x away from the best, resp. Each (+) is 1 and each (-) is -1 score.
A typical business model of VecDBs is to offer their software as a service (SaaS), either in a cloud environment owned by them or by the users ("Bring Your Own Cloud"). This situation prompts the question: which cloud instance is the best for vector search? While most VecDBs can be deployed on both major architectures (x86_64 and ARM), there is a lack of benchmarks comparing the performance of vector search across different architectures, let alone microarchitectures (e.g., Intel Sapphire Rapids, AMD Zens, AWS Gravitons). Qdrant, in particular, strongly advises using the latest-generation Intel processors [10]. Milvus, Chroma, and Vexless have presented benchmarks of their systems on Intel CPUs [9,26,36], while USearch [39] presents benchmarks on AWS Graviton3 (ARM).
However, far from being as trivial as choosing the CPUs with the largest caches, highest clock frequency, or specialized SIMD instructions, we uncover that the optimal choice depends on the search algorithm and quantization level used. For instance, on partition-based indexes, like IVF [18], AMD's Zen4 gives almost 3x more queries per second (QPS) than Intel's Sapphire Rapids, but the tables turn on graph indexes, like HNSW [25], in which Intel Sapphire Rapids delivers more QPS. However, when looking at the number of queries per dollar (QP$), AWS Graviton3 gives the best bang for the buck, even over its successor, Graviton4.
This study aims to show which cloud CPUs give the best "bang for the buck" by experimentally evaluating their QPS and QP$ on different vector search scenarios. More importantly, we uncover that the performance across microarchitectures depends not solely on their SIMD capabilities but also on the search algorithm's data-access patterns. The latter makes CPU cache performance (bandwidth and latency) important for vector search, as it is memory-bound in many scenarios [21,31,34,41].
Figure 1: Example of an HNSW and IVF index search.
Figure 2: Overview of different quantization techniques.
# 2 PRELIMINARIES
# 2.1 Approximate Vector Similarity Search
Given a collection $V$ of $n$ multi-dimensional objects $\{ v_0, v_1, \cdots, v_{n-1} \}$ defined on a $D$-dimensional space, and a $D$-dimensional query $q$, VSS tries to find the subset $R \subset V$ containing the $k$ most similar vectors to $q$. The notion of similarity between two vectors $(v, q)$ is measured using a function $\delta(v, q)$. Usually, $\delta$ is a distance or similarity function defined in a Euclidean space. The Squared Euclidean Distance (L2) is one of the most commonly used distance metrics, and it is defined as $\delta(v, q) = \sum_{i=0}^{D-1} (v_i - q_i)^2$. To obtain $R$, $\delta$ must be computed for every $v \in V$, leading to a large number of calculations. However, in most vector-based applications, approximate answers are acceptable. This allowed VSS to scale by returning only an approximate result set $\hat{R}$, whose quality is measured by the percentage of intersection between the vectors in $R$ and $\hat{R}$ when answering the same query (recall metric). This tradeoff between accuracy and speed resulted in the development of approximate indexes and quantization techniques for vectors.
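The definitions above can be made concrete with a short sketch; the function names and toy data are ours, chosen only for illustration:

```python
def l2_sq(v, q):
    """Squared Euclidean distance: delta(v, q) = sum_i (v_i - q_i)^2."""
    return sum((vi - qi) ** 2 for vi, qi in zip(v, q))

def knn_exact(V, q, k):
    """Exact top-k search: evaluate delta against every vector in V."""
    return sorted(range(len(V)), key=lambda i: l2_sq(V[i], q))[:k]

def recall(R, R_hat):
    """Quality of an approximate answer: fraction of exact neighbours
    that also appear in the approximate result set."""
    return len(set(R) & set(R_hat)) / len(R)

V = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
R = knn_exact(V, q=[0.9, 0.0], k=2)   # exact answer: ids [1, 0]
r = recall(R, R_hat=[1, 2])           # 0.5: one of two recovered
```

The exact search is the $O(nD)$ baseline that approximate indexes avoid by restricting which ids enter the `sorted` call.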
# 2.2 Approximate Indexes
Approximate indexes aim to build data structures that guide the search to the most suitable place of the $D$-dimensional space in which the query may find its nearest neighbours. Indexes can be categorized into three types: graph-based [25,27,32,42], partition-based [17,18,35], and hybrids [8,16]. Their common goal is to only evaluate the distance/similarity function between $q$ and a smaller set of vectors $V' \subset V$ while maintaining high recalls. The indexes that have seen the most adoption in vector systems are HNSW (Hierarchical Navigable Small Worlds) [25] and IVF (Inverted Files) [18].
HNSW has seen great success in achieving desirable recall in most datasets [7,49]. HNSW organizes objects into a graph where nodes represent the vectors, and edges reflect their similarity. The property of navigability and "small world" [43] is forced on the graph so that a greedy search can reach the answers to a query in logarithmic time [42]. Borrowing ideas from the skiplist data structure, HNSW organizes the nodes into different layers (see left of Figure 1). The top layer (starting point) contains "distant" nodes, and the bottom layer has all the nodes. The upper layer allows a search to quickly traverse the graph diameter with a few steps, and the lower layer allows it to traverse on a local hub of nodes.
On the other hand, IVF is a partition-based index that clusters the vector collection into buckets. At search time, the distance metric is first evaluated with the centroids of each bucket, and
the vectors inside the nearest centroids buckets are chosen for evaluation (see right of Figure 1). The number of centroids is set in the order of $\sqrt { n }$ [11,40]. More buckets can be probed to trade off speed for more recall [11,40]. IVF works modestly well in most datasets [7,49] while scaling better than graph indexes, which have higher memory requirements and longer construction times [11,31]. A commonly used hybrid index consists of building an HNSW index on the IVF centroids to quickly find the most promising buckets [8].
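The IVF search procedure reduces, in its simplest form, to ranking centroids and then scanning only the selected buckets. Below is a minimal sketch; it assumes the centroids and bucket assignments are already built (in practice they come from k-means over the collection), and all names are illustrative:

```python
def l2_sq(v, q):
    """Squared Euclidean distance between two vectors."""
    return sum((vi - qi) ** 2 for vi, qi in zip(v, q))

def ivf_search(q, centroids, buckets, vectors, nprobe, k):
    """Rank centroids by distance to q, probe the nprobe nearest
    buckets, and evaluate distances only on their member vectors.
    buckets[c] holds the ids of vectors assigned to centroid c."""
    order = sorted(range(len(centroids)), key=lambda c: l2_sq(centroids[c], q))
    candidates = [i for c in order[:nprobe] for i in buckets[c]]
    return sorted(candidates, key=lambda i: l2_sq(vectors[i], q))[:k]

centroids = [[0.0, 0.0], [10.0, 10.0]]
buckets = [[0, 1], [2]]  # vector ids assigned to each bucket
vectors = [[0.0, 1.0], [1.0, 0.0], [10.0, 10.0]]
hit = ivf_search([0.0, 0.9], centroids, buckets, vectors, nprobe=1, k=1)
# hit == [0]
```

Raising `nprobe` enlarges the candidate set, trading speed for recall, which is exactly the tuning knob described above.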
# 2.3 Quantization Techniques
Quantization techniques aim to reduce the size of every vector. Quantization can be used on top of indexes to alleviate storage requirements. However, the more aggressive the quantization, the more recall is reduced as precision is lost. Quantization can increase query throughput, as less data must be fetched [34], and SIMD can operate on more values at a time with a single CPU instruction.
Among the most popular quantization techniques, we have binary quantization (BQ), scalar quantization (SQ), and product quantization (PQ) [18]. Examples of these techniques are shown in Figure 2. BQ maps every float32 value in the vector to 0 or 1 (32x compression). In Figure 2, positive values are mapped to 1 and negative values to 0. In SQ, each float32 value is mapped to an integer code of usually 8, 6, or 4 bits, depending on the target compression ratio. In Figure 2, each value is mapped to 8 bits by normalizing and scaling it to the [0-255] range. The latter is one of the simplest ways to perform SQ; however, a wider family of SQ algorithms exists [3,13]. Finally, downcasting to float16 or bfloat is a safer alternative with little information loss but only achieves a 2x compression ratio.
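The BQ and 8-bit SQ mappings just described can be sketched as follows. This is a toy min-max variant of SQ; production systems choose the normalization per dimension or per vector and pack the bits instead of keeping Python lists:

```python
def sq8_encode(v, lo, hi):
    """Scalar quantization to 8 bits: clamp to [lo, hi], normalize,
    and scale each float32 value to an integer code in [0, 255]."""
    return [round((min(max(x, lo), hi) - lo) / (hi - lo) * 255) for x in v]

def bq_encode(v):
    """Binary quantization: positive -> 1, non-positive -> 0 (32x)."""
    return [1 if x > 0 else 0 for x in v]

def hamming(a, b):
    """Distance in the 1-bit domain: number of differing bits."""
    return sum(x != y for x, y in zip(a, b))

bits = bq_encode([-0.1, 0.3, -0.5, 1.0])               # [0, 1, 0, 1]
codes = sq8_encode([-0.1, 0.3, -0.5, 1.0], lo=-1.0, hi=1.0)
```

In the 1-bit domain, distance comparisons become Hamming distances, which CPUs accelerate with XOR + POPCOUNT over packed words.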
In PQ, the $D$-dimensional space is split into $M$ subspaces of $D/M$ dimensions each, and a codebook is trained for each subspace using a clustering algorithm. In Figure 2, we split vectors into two subspaces of $8/2 = 4$ dimensions and create 4 clusters for each subspace. To encode a vector, PQ splits it into $M$ subvectors. For each subvector, PQ finds the nearest centroid in that subspace's codebook, and the vector is encoded with the centroid codes. Contrary to BQ and SQ, PQ can provide variable compression ratios depending on the chosen number of subspaces and centroids per subspace. However, also contrary to BQ and SQ, in PQ the distance calculations cannot happen in the quantized domain, as each code must first be decoded. The latter affects the distance calculation latency. A wider family of PQ algorithms exists, mostly improving how the codebook is constructed to trade off search speed, size, and recall [11,15].
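PQ encoding can be sketched as below; the two-subspace codebooks are hard-coded toys standing in for the k-means-trained codebooks of a real implementation:

```python
def pq_encode(v, codebooks):
    """Split v into M = len(codebooks) subvectors and replace each
    with the id of the nearest centroid in that subspace's codebook."""
    M = len(codebooks)
    d = len(v) // M
    codes = []
    for m, book in enumerate(codebooks):
        sub = v[m * d:(m + 1) * d]
        dists = [sum((a - b) ** 2 for a, b in zip(sub, c)) for c in book]
        codes.append(dists.index(min(dists)))
    return codes

# Two subspaces of 2 dimensions, two centroids each (toy codebooks).
codebooks = [[[0.0, 0.0], [1.0, 1.0]], [[0.0, 0.0], [1.0, 1.0]]]
codes = pq_encode([0.1, -0.1, 0.9, 1.1], codebooks)  # [0, 1]
```

Decoding for distance evaluation replaces each code with its centroid, which is the asymmetric-calculation overhead the text attributes to PQ.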
Table 2: AWS cloud instances used (May 2025, us-east-1)
# 2.4 Single Instruction Multiple Data (SIMD)
Distance calculations can be optimized in CPUs using SIMD intrinsics that process multiple values with a single CPU instruction.
SIMD in x86_64 and ARM. In x86_64 architectures, SIMD instructions are called Advanced Vector Extensions (AVX). The number of values AVX can process at a time depends on the SIMD register width supported by the CPU. Initially, registers of 256 bits were introduced (AVX and AVX2), later expanded to 512 bits (AVX512), which can process 16 float32 values with one instruction. AVX512 had a rough start, reportedly down-clocking the CPU in earlier Intel microarchitectures (Sky Lake and earlier) [24]. However, this is no longer an issue in modern CPUs (Zen4, Sapphire Rapids), and AVX512 is widely used to accelerate vector search. On the other hand, ARM architectures also provide SIMD instructions through NEON and SVE. NEON was introduced first, supporting SIMD over 128-bit registers. SVE was introduced later as an improvement over NEON that, unlike traditional SIMD architectures, supports variable-size SIMD registers in its intrinsics through VLA (Vector Length Agnostic) programming. The latter alleviates technical debt as distance kernels no longer need hardware-dependent loop lengths. Graviton4 has 128-bit SVE registers, and Graviton3 has 256-bit SVE registers, with both having 128-bit NEON registers.
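The register widths above translate directly into values processed per instruction; a trivial calculation makes the gaps concrete:

```python
def lanes(register_bits, value_bits):
    """Number of values one SIMD instruction covers at a given width."""
    return register_bits // value_bits

# float32 lanes per instruction on the CPUs discussed in the text.
avx512_f32 = lanes(512, 32)   # 16 (SPR, Zen4)
sve_g3_f32 = lanes(256, 32)   # 8  (Graviton3 SVE)
neon_f32 = lanes(128, 32)     # 4  (NEON, Graviton4 SVE)
int8_avx512 = lanes(512, 8)   # 64: why quantized vectors raise throughput
```

The same arithmetic explains the quantization benefit noted earlier: an 8-bit code packs 4x more values per instruction than float32 at any register width.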
SIMD capabilities. SIMD instructions usually support floating-point arithmetic (double and single precision), integer arithmetic (64, 32, 16, and 8-bit), data movement, conversions, and comparisons. In addition to this, CPU vendors have added extended SIMD capabilities in different microarchitectures. For instance, Intel's Sapphire Rapids (SPR) supports arithmetic on float16, and both AMD's Zen4 and Intel's SPR support arithmetic on bfloat and a POPCOUNT instruction useful for 1-bit vectors. In modern ARM CPUs, SVE and NEON support arithmetic on bfloat, half-precision float16, and 1-bit vectors (POPCOUNT). Graviton4 supports SVE2, an extension to SVE that introduces additional intrinsics such as MATCH to compute the intersection between two vectors.
Table 2 shows some of the SIMD capabilities relevant to distance calculations present in modern CPUs available in the cloud. It is important to note that, in some scenarios, SIMD instructions are not directly usable in distance kernels. One example is trying to do a dot product between two 8-bit vectors in AVX2/AVX512, as the only available instruction to do so (i.e., VPDPBUSD) expects one vector to be signed and the other to be unsigned. Challenges also arise with sub-byte kernels. For instance, vectors quantized to 6 bits must first be aligned into the SIMD registers, usually with SHIFT+OR instructions, which impacts the performance of the distance kernels.
Table 3: Vector datasets
SIMD in vector libraries. Some vector libraries, like USearch [39], leverage SIMD capabilities to provide distance kernels for quantized types in most major CPU microarchitectures. On the other hand, other libraries, like FAISS, avoid specialized kernels by decoding vectors back to the float32 domain. The latter is called asymmetric distance calculations. This reduces technical debt and avoids complex code bases. However, the performance of such kernels is not on par with symmetric kernels that operate in the quantized domain [38]. Both USearch and FAISS prefer to use the latest SIMD ISA available in the CPU (e.g., SVE over NEON if both are available).
x86_64 vs ARM SIMD. A key difference between the SIMD of microarchitectures lies in their register width. However, a larger register width does not guarantee better raw performance. For instance, CPUs with NEON SIMD do not fall behind AVX512 on database workloads [2] despite having a 4x smaller register width. This is because the latency and execution throughput of the instructions used also impact performance.
# 3 BANG FOR THE BUCK
We benchmarked the end-to-end search latency of HNSW and IVF indexes, and full scans (i.e., without an index), with vectors quantized at different levels on five microarchitectures available in AWS. These are presented in Table 2 alongside their on-demand price in us-east-1 at the time of this study. They cover the major ISAs and popular CPUs. For Intel SPR, we also present benchmarks on the Z-series variant. All machines have 64GB of DRAM, 8 vCPUs, and Ubuntu 24.04 as OS. For each experiment, we report queries-per-second (QPS) and queries-per-dollar (QP$).
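The paper reports QP$ without spelling out the formula; the natural definition, assumed here, divides the queries served in one hour by the instance's hourly on-demand price (the price below is a placeholder, not a real AWS quote):

```python
def queries_per_dollar(qps, hourly_price_usd):
    """QP$: queries served in one hour divided by the hourly price."""
    return qps * 3600 / hourly_price_usd

# e.g., 1,000 QPS on a hypothetical $0.50/h instance:
qpd = queries_per_dollar(1000, 0.50)  # 7,200,000 queries per dollar
```

Under this definition, a cheaper instance can beat a faster one on QP$ even while losing on raw QPS, which is exactly the Graviton3 pattern the results highlight.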
For our experiments, we used FAISS (v1.9.0) [11,33] compiled to target the underlying CPU capabilities. We chose FAISS as it is the cornerstone of many vector systems. For instance, the Milvus [40] and Weaviate [44] vector engines started as forks of FAISS, and OpenSearch (AWS) is built directly on top of FAISS. By using FAISS, we also avoid introducing possible artifacts of vector databases (e.g., Milvus' dynamic batching mechanism that executes queries at intervals).
Figure 3: QPS and QP$ at different recall levels on IVF indexes (float32) for the OpenAI/1536, arXiv/768, and SIFT/128 datasets, across AMD Zen4/Zen3, Intel Sapphire Rapids (and Z variant), and AWS Graviton4/3.
Table 4: QPS and QP$ of cloud instances running queries on FAISS IVF indexes quantized at different levels. QP$ values are in the order of $10^5$. Color coding is the same as in Table 1.
Unfortunately, when using SQ, float16, or bfloat, FAISS first decodes vectors back to the float32 domain to perform the distance calculations. To make up for that, we add USearch (v.2.16.9) [39] to our benchmarks. USearch is a vector engine focused on getting the best performance on different CPUs with symmetric kernels for various distance metrics and quantized types on HNSW indexes and full scans. USearch is currently used by ClickHouse and DuckDB on their VSS extensions.
For our analysis, we have chosen three datasets that exhibit different dimensionalities, presented in Table 3. These datasets are commonly used to evaluate VSS techniques [7,49]. From these collections, one represents vectors from image data (SIFT/128), and two represent vector embeddings from text (arXiv/768, OpenAI/1536).
Indexes hyperparameters. FAISS IVF: number of centroids: $4\sqrt{n}$; buckets visited (for recall tuning): from 2 to 512; training points: all; quantized with 8, 6, 4, and 1 bit. FAISS FULL SCAN: quantized with 8, 6, 4, 1 bit and PQ (n_bits: 8, subspaces m: $D/4$). USearch HNSW: m: 16, ef_construction: 128, ef_search (for recall tuning): from 2 to 512; quantized with float16, bfloat, 8, and 1 bit. USearch FULL SCAN: quantized with float16, bfloat, 8, and 1 bit. In all settings, we ran queries individually (no multi-threading) with $k = 10$, using L2 and Hamming (on BQ) as distance metrics.
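As a small worked example of the IVF setting above (the collection size is assumed for illustration, not taken from Table 3):

```python
import math

def ivf_centroids(n, multiplier=4):
    """Benchmark setting: number of IVF centroids = 4 * sqrt(n)."""
    return multiplier * round(math.sqrt(n))

# A 1M-vector collection (assumed size) gets 4 * 1000 = 4000 buckets,
# of which between 2 and 512 are probed when tuning for recall.
n_buckets = ivf_centroids(1_000_000)  # 4000
```

At 512 probed buckets out of 4000, the index still evaluates only a fraction of the collection per query, which is where the IVF speedup over a full scan comes from.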
Table 5: QPS and QP$ of cloud instances running queries on USearch HNSW quantized at different levels. QP$ values are in the order of $10^5$. Color coding is the same as in Table 1.
# 3.1 IVF
Figure 3 shows the performance of the microarchitectures on an IVF index without quantization at different recall levels. In this setting, Zen4 is the clear winner across all datasets in QPS, giving 3x the performance of both Intels. However, in QP$, Graviton3 ties with Zen4. Zen3 and Graviton4 follow closely in QP$, and their gap closes as dimensionality decreases. Another observation is that the targeted recall does not affect the relative performance of the microarchitectures.
Table 4 shows the performance of each microarchitecture with different quantization levels at the highest possible recall. On SQ vectors, Zen4 and Intel perform well; however, Zen4 takes the upper hand at higher dimensionalities. The Gravitons heavily underperform on SQ vectors (up to 3x and 6x less QP$ and QPS, resp.). This is because the asymmetric distance calculation in FAISS requires going from the quantized domain to the float32 domain. The latter can be done with SIMD instructions (e.g., HI/LO-UNPACK on 8-bit and SHIFT+OR on 6-bit vectors). However, FAISS currently uses scalar code to do this on ARM CPUs, contrary to the fully SIMDized kernels used in AVX2/AVX512. Nevertheless, the Gravitons shine on 1-bit vectors as FAISS does implement a specialized kernel for BQ in NEON. The latter shows the importance of SIMD kernels.
Figure 4: QPS and QP$ at different recall levels on USearch HNSW indexes (float32) for the OpenAI/1536, arXiv/768, and SIFT/128 datasets, across AMD Zen4/Zen3, Intel Sapphire Rapids (and Z variant), and AWS Graviton4/3.
Table 6: QPS and QP$ of cloud instances running full scan queries. Intel performs the best on PQ vectors. The Gravitons take the upper hand on float16 and 1-bit vectors. In all other settings, Zen4 is the overall winner. The Gravitons perform well in USearch SQ but not in FAISS SQ due to the absence of SIMD. QP$ is in the order of $10^4$. Color coding is the same as in Table 1.
# 3.2 HNSW
Figure 4 shows the performance of the different microarchitectures on an HNSW index without quantization. In this setting, Intel Z offers the highest QPS; however, it never gives the highest QP$. For QP$, Zen3 and Graviton3 perform the best on high-dimensional vectors. Contrary to IVF indexes, Zen4 performs the worst. Another observation is that, as with IVF indexes, the targeted recall does not affect the relative performance of the microarchitectures.
Table 5 shows each microarchitecture's performance with different quantization levels at the highest possible recall. Intel Z gives the highest QPS in nearly all settings. However, things change when looking at QP$. On float16 vectors, the Intels and Graviton3 shine, and the Zens struggle due to the lack of float16 SIMD. On bfloat, the Gravitons are the winners for high-dimensional vectors, with Zen3 following closely, despite the latter not having direct support for bfloat. Finally, on quantized vectors, the Gravitons give the highest QP$ at higher dimensionalities, and the Intels on vectors of lower dimensionality. It is important to note that in HNSW, the gap between architectures is less evident (fewer red and yellow cells).
# 3.3 Full Scans
Table 6 shows the performance of each microarchitecture when doing full scans. For float32 vectors, Zen4 and Graviton3 give the best QP$, while the Intels fall short. Contrary to HNSW, the Gravitons always take the lead on float16 instead of Intel, and Zen4 takes the upper hand on bfloat. On 8-bit vectors, USearch and FAISS differ: the Gravitons take the lead in USearch thanks to the symmetric kernels, whereas in FAISS, Zen4 and Intel perform well across SQ settings, with the Gravitons heavily underperforming again (lack of SIMD decoding). Finally, on 1-bit and PQ vectors, the Gravitons and Intel are the best performers, respectively.
# 3.4 Why the differences across microarchs?
While having symmetric kernels with specialized SIMD does make a difference in performance under certain settings, it is not a rule of thumb for which microarchitecture will perform better. For instance, SPR does not win in float16 full scans. The performance differences arise from two factors: the data-access patterns of the search algorithm (since vector search is mostly data-access bound [21,34,41]) and the efficiency of the distance kernel.
Figure 5: SIMD read bandwidth and access latency (random and sequential) of cache and DRAM in cloud CPUs.
Data-access patterns of indexes. On the one hand, IVF indexes (and full scans) sequentially access large chunks of vectors that likely exceed the size of the L2 cache or L3 in full scans. As a result, microarchitectures with higher latencies and lower read bandwidths at L3/DRAM are at a disadvantage. Our memory latency benchmarks, presented in Figure 5, show that when data is bigger than L3, the latency of sequential access (plot on the right) on SPR is almost 2x higher than the Zens and almost 4x higher than the Gravitons. Furthermore, the memory bandwidth (plot on the left) of SPR at L3 and DRAM is the lowest. These data-access-related capabilities are reflected in our vector search experiments, in which the performance gap between SPR and Zen4 closes in IVF and full scans as vectors are quantized at smaller bit-widths (see in Table 4 how SPR becomes green from float32 to SQ4).
On the other hand, searches on an HNSW index access fewer vectors, which results in less pressure on the caches. Moreover, the vectors in the upper layer of the HNSW index are cached efficiently as the entry point of the index is always the same (see left of Figure 1). Hereby, microarchitectures like SPR take advantage, especially with quantized vectors, thanks to having bigger L2/L3 (see Table 2), higher read bandwidth at L1/L2, and a higher SIMD LOAD throughput. These observations align with our HNSW experiments, where both SPRs have the highest QPS in all scenarios. PQ also benefits from architectures like SPR, as the codebooks can be cached and accessed more efficiently.
Efficiency of the distance kernel. The raw performance of distance kernels depends on the execution throughput and latency of the SIMD instructions used. We define execution throughput as the maximum number of bits an instruction can process per CPU cycle across the microarchitecture [5]. Both SPR and Zen4 have a floating-point (FP) execution throughput of 1024 bits per CPU cycle [12]. On the other hand, the Gravitons have an FP execution throughput of 512 bits, half that of Zen4 and SPR. However, the 1024 bits of Zen4 are arranged as four specialized execution ports of 256 bits each, two for FP-FMA (fused multiply-add) and two for FP-ADD, effectively achieving a maximum throughput of 512 bits per cycle for FP-FMA. On the contrary, all four ports (512 bits) in the Gravitons can handle both FP operations. Thus, depending on the distance kernel, the Gravitons can be on par with Zen4 in FP instruction throughput. However, as data must first be loaded into the registers, the throughput of LOAD instructions also plays a key role in the efficiency of the kernels. In fact, SPR's main advantage on SIMD is that it can serve 2x512-bit loads per cycle from L1, while
Zen4 and Graviton3 can only load 2x256 bits [5,12] and Graviton4 3x128 bits [6]. This effect is visible in the left plot of Figure 5, with SPR having twice as much bandwidth as the Zens when data fits in L1. However, this advantage rapidly degrades as soon as data spills to L3. Finally, regarding latencies, all architectures are close, with latencies of 3-4 cycles for FP instructions [1,5,6,12,22,23]. However, at smaller register widths, the instructions must be issued more times to process the same number of values. The latter gives an advantage to wider registers due to the accumulated instruction latency.
Data size. To further investigate the effect of data size, we ran full scans on random collections of float32 vectors of different sizes and dimensionalities. We found that when the collection fits in the L2 cache, SPR achieves 10% more performance than Zen4, with Zen3 and the Gravitons underperforming. However, as soon as data spills to L3, the performance of SPR degrades, and Zen4 takes the lead. SPR performance further degrades when data spills to DRAM, delivering 30% less performance than Zen3 and Graviton4 and 2x less performance than Zen4.
Graviton4 vs Graviton3. In many of our experiments, Graviton3 performed better than Graviton4, both in QP$ and QPS (see float32 in Table 4). The main reason is that Graviton3 has SVE registers double the size (256 bits) of Graviton4's (128 bits). While both architectures have the same FP execution throughput, the total latency cost to process the same amount of data is higher on Graviton4 due to the smaller register. The latter becomes more critical in the distance kernels used in vector search, where a dependency chain exists as distances are accumulated on the same SIMD lanes. Furthermore, Graviton3 has a higher LOAD execution throughput, capable of loading 2x256 bits in one CPU cycle [5], while Graviton4 loads only 3x128 bits [6]. Further experiments on full scans over our random collections of float32 vectors reveal that SVE is 37% faster than NEON on Graviton3 (due to doubling the register size). In contrast, on Graviton4, NEON and SVE perform the same since the register size, execution throughput, and latencies are the same. When comparing NEON to NEON, Graviton4 is faster than Graviton3 by 10%. However, when switching to SVE, the tables turn, and Graviton3 is 31% faster than Graviton4. These findings differ from most benchmarks, in which Graviton4 always delivers more performance. However, we believe these observations have been under the radar since most benchmarks use NEON, as SVE has not yet been widely adopted.
# 4 BANG FOR THE BUCK: TAKEAWAYS
In Table 1, we rank every microarchitecture on five tiers based on their QPS and QP$ on the OpenAI/1536 dataset: (++) for the best one, (+) for the ones with performance at most 20% away from the best, (-) when they provide 2x less, (--) when they provide 3x less QPS than the best option, and (·) for options in the middle ground. Finally, we present an aggregated score by giving 1 point for every (+) and -1 point for every (-).
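The tiering and scoring just described can be sketched as a small function; the thresholds below are our reading of the text (within 20% of the best, 2x less, 3x less), and the helper names are ours:

```python
def tier(value, best):
    # Map a QPS or QP$ value to a tier relative to the best option.
    if value == best:
        return "++"
    if value >= 0.8 * best:   # within 20% of the best
        return "+"
    if value <= best / 3:     # 3x less than the best, or worse
        return "--"
    if value <= best / 2:     # 2x less than the best, or worse
        return "-"
    return "."                # middle ground

def aggregate(tiers):
    # 1 point per '+' symbol, -1 per '-' symbol (our reading of the scoring).
    return sum(t.count("+") - t.count("-") for t in tiers)
```

For example, an option at 85% of the best QPS lands in tier (+), while one delivering a third of the best lands in (--).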
Graviton3 gives the best "bang for the buck," even over its successor, Graviton4. Graviton3 is only pushed back in our scoring system due to the lack of symmetric kernels in FAISS. Note that Graviton3 excels in the following areas: a variety of SIMD capabilities, high read throughput, and low sequential memory latency at L3/DRAM. More importantly, it is cheap. Zen4 is still a solid option for vector search on IVF indexes and full scans, especially on float32 and bfloat16 vectors. Zen3 accrues few negative points thanks to its low price and low latencies for L2/L3/DRAM access; however, it does not excel in any setting. Finally, the SPRs have the lowest QP$ score and do not excel in any setting. Despite SPR Z having the best QPS score for HNSW, its price brings down its QP$ score.
# 5 DISCUSSION
The need for Vector Databases is widely disputed in the community [46], with some foreseeing them merging into existing database systems. While the future of VecDBs as standalone systems remains uncertain, vector workloads are here to stay. We believe our insights are important because the decoupling of storage and compute that the cloud provides makes it easy to switch microarchitectures depending on the search algorithm in use. Furthermore, choosing the right microarchitecture can yield huge savings, especially in serverless vector search services [28, 36], where every millisecond counts toward billing.
In this study, we have focused on the three most common use cases in vector search: HNSW, IVF, and full scans, at different quantization levels. However, there exists a wider variety of indexes (e.g., hybrids), quantization techniques (e.g., residual quantizers [11]), and distance metrics (e.g., cosine similarity, inner product). Therefore, we encourage users to perform data-driven benchmarks to uncover the best microarchitecture for their use case.
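A data-driven benchmark of the kind we encourage boils down to measuring QPS on the target instance and normalizing by its price. A hypothetical minimal harness (the function names and the QP$ definition as QPS scaled to one dollar of instance time are our own sketch, not the paper's tooling):

```python
import time

def measure_qps(search_fn, queries):
    # Time a batch of queries against any search callable and
    # report queries per second.
    start = time.perf_counter()
    for q in queries:
        search_fn(q)
    elapsed = time.perf_counter() - start
    return len(queries) / elapsed

def queries_per_dollar(qps, hourly_price_usd):
    # QP$: how many queries one dollar of instance time buys,
    # assuming an on-demand hourly price for the instance.
    queries_per_hour = qps * 3600.0
    return queries_per_hour / hourly_price_usd
```

Running this across candidate instances for one's own index type, quantization level, and distance metric directly surfaces the "bang for the buck" ranking for that workload.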
It is important to acknowledge that the microarchitectures presented in this study will be outdated in the future. For instance, although not yet available in AWS, Intel has already released Emerald Rapids, the successor to Sapphire Rapids. Similarly, AMD has already released Zen5, the successor to Zen4. Nevertheless, the insights presented here should motivate researchers in the vector search community to strive not only for lower theoretical complexities but also for better data-access patterns and storage designs in newly developed algorithms. The latter is critical for the performance of vector search at large scale, where it is heavily data-access bound [4, 13, 21, 31, 34]. Finally, we encourage researchers to always use SIMD-optimized implementations, as newly developed algorithms that may shine in their scalar implementation can fail to be on par with SIMD-optimized approaches [21, 47].

# ABSTRACT

Vector databases have emerged as a new type of system that supports efficient querying of high-dimensional vectors. Many of these offer their database as a service in the cloud. However, the variety of available CPUs and the lack of vector search benchmarks across CPUs make it difficult for users to choose one. In this study, we show that CPU microarchitectures available in the cloud perform significantly differently across vector search scenarios. For instance, in an IVF index on float32 vectors, AMD's Zen4 gives almost 3x more queries per second (QPS) compared to Intel's Sapphire Rapids, but for HNSW indexes, the tables turn. However, when looking at the number of queries per dollar (QP$), Graviton3 is the best option for most indexes and quantization settings, even over Graviton4 (Table 1). With this work, we hope to guide users in getting the best "bang for the buck" when deploying vector search systems.