# 1 Introduction
Worldwide, many bridges are exposed to high traffic loads, extreme weather events, sea salt, and de-icing chemicals, leading to defects. At the same time, most industrialized countries face a growing stock of old infrastructure [2, 3]. To determine rehabilitation measures and immediate actions, such as traffic restrictions or bridge closures, defects are identified and assessed during inspections. However, the current inspection process is often inefficient and prone to error [4, 5, 6], highlighting the significant potential of automated inspection methods to improve accuracy and reliability. Within an automated inspection workflow, semantic segmentation plays a central role, as it classifies, measures, and localizes damage at pixel level [1, 7, 8].
The largest and most diverse dataset for the segmentation of bridge defects and components is dacl10k [1], owing to its variety of buildings and classes. However, a major challenge of dacl10k is its strong class imbalance, which affects not only the number of images but also the pixel-level and instance-level distributions. For example, the dataset contains approximately ten times more images labeled with Spalling than with Rockpocket, and about ten times more pixels annotated as Protective Equipment than as Crack. This imbalance is evident in the dacl10k challenge [9], where participants consistently reported poor performance for two concrete defects, Crack and Cavity. Addressing this imbalance is critical to improving model robustness, particularly for real-world bridge inspections, where models must be resilient to variations in image quality, camera pose, concrete texture, and degree of weathering. Real-world applications require models to perform reliably under varying conditions, yet despite the wide range of image acquisition scenarios encountered in civil engineering, no study has rigorously investigated the robustness of multi-class or multi-label computer vision models for damage detection in this domain.
Figure 1: Left column/daclonsynth: Cropped defect shape from dacl10k pasted on a synthetic concrete background, with the ground truth below showing polygonal annotations of real-world (Spalling with corroded Exposed Rebars) and synthetic defects, namely Cavity (pale purple) and Weathering (green); Middle-left column/synthcrack: synthetic concrete surface with synthetic defects and ground truth showing Crack (rose), Cavity, and Weathering; Middle-right column/synthcavity: synthcavity sample and ground truth showing three combined masks, Cavity from both the rendered concrete texture and the cavity generative model, plus synthetic Weathering; Right column/finecrack: Test sample from dacl10k and fine-resolution crack masks.
In our work, we address these challenges by introducing three synthetic dataset extensions, collectively referred to as “synth-dacl”, each comprising 5,000 samples. The first extension addresses the issue of class imbalance: it superimposes real-world damage polygons on synthetic concrete backgrounds to maintain realistic defect shapes while balancing the representation of underrepresented defect classes. The second and third extensions focus on improving model performance for defect types that are challenging in practice. We simulate concrete surfaces with one primary synthetic defect, either a crack or a cavity, per set to directly target the detection of these classes. We systematically evaluate how the synth-dacl extensions affect average performance, individual class performance, and overall model robustness. Robustness is assessed by applying 15 image perturbations to real-world test data from the dacl10k dataset, simulating conditions such as changes in illumination, noise, and contrast. This step ensures that our models are not only accurate, but also resilient to the unpredictable conditions found in real-world bridge inspections.
# 2 Related Work
# 2.1 Bridge Inspection Datasets
The S2DS dataset [7], which comprises 743 samples, is the first real-world semantic segmentation dataset for bridge inspection with pixel-wise labels for six classes relevant to concrete bridge inspections. In the field of binary crack segmentation, both OmniCrack30k [10] and CrackSeg9k [11] are dataset collections of cracked and uncracked surfaces of various materials.
On the other hand, synthetic data has become widely used to enhance performance on real-world tasks, particularly when there is a shortage of well-labeled images [12]. For example, Dwibedi et al. [13] generate synthetic images by cutting and pasting object instances into diverse environments. Other studies explore the role of synthetic data in improving robustness in medical imaging. For instance, Al Khalil et al. [14] examine the usability of synthesized short-axis Cardiac Magnetic Resonance (CMR) images, generated using Generative Adversarial Networks, to improve the robustness of heart cavity segmentation models across various conditions. A comprehensive review of such applications can be found in [15].
In civil engineering, synthetic data is starting to become a resource for mitigating the limited availability of pixel-accurate annotated datasets. Much of the research in this field focuses on crack segmentation [16, 17, 18]. For instance, [16] developed a simulation model in Blender for generating additional synthetic crack data for concrete surfaces; the synthetic cracks in this dataset are modeled using irregular fractals. Another synthetic crack dataset is the Supervisely Synthetic Crack Segmentation dataset [18], which consists of 1,558 synthetic images for road surface crack detection. This dataset employs various generative algorithms, such as random walk, rapidly exploring random trees, and L-systems, to produce a diverse array of crack patterns. [17] introduced a synthetic dataset specifically for dam crack detection, which integrates crack patterns extracted from real-world, open-source datasets with a 3D mesh model of an actual dam, creating realistic training data suited to the unique structural context of dams.
While these datasets have advanced research in the field of automated bridge inspections, each faces limitations when applied to practical scenarios. They typically focus on a single defect class (Crack), offer limited diversity in image quality, concrete texture, and environmental conditions, and lack multi-label annotations, which are crucial given the overlapping character of concrete defects. Moreover, existing studies on the robustness of semantic damage segmentation models [19, 20] are, again, restricted to binary crack detection and assess only a narrow set of perturbations, which falls short of representing the wide range of real-world challenges encountered during bridge inspections.
# 3 Datasets
In the following, detailed information on the investigated real-world dataset dacl10k and its finecrack masks (Section 3.1), as well as on the synthetic extensions (Section 3.2), is provided.
# 3.1 dacl10k
dacl10k [1] is the first large-scale dataset for automated bridge inspections, containing 9,920 polygon-annotated images across 19 classes, grouped into defects and structural components. It features multi-label semantic segmentation with coarse pixel-level annotations, which stem from real bridge inspections. The dataset includes common defect combinations, as shown in the top-left tile of Figure 1. For the present work, a new version, v3, of this dataset [21] (referred to as dacl10k) was developed. During the transition, class ambiguities were resolved: Spalling and Rockpocket were often confused in v2 due to their visual similarity, even though they arise from different causes, namely corroding reinforcement versus poor concrete deaeration. The same applied to Joint Tape vs. Restformwork, and Weathering vs. Wetspot.
# 3.1.1 Finecrack Masks
Most open-source datasets for crack or defect segmentation [11, 22, 23, 24, 25, 26] focus solely on cracks on plain concrete surfaces. These datasets lack important real-world variations, such as wet cracks, efflorescence, graffiti, and weathered backgrounds. Consequently, models trained on these datasets [27, 28] tend to generate numerous false positives, as evidenced by the low precision scores in Table 2.
From a civil engineer’s perspective, cracks are not necessarily severe. In concrete structures, they occur where the tensile strength of the concrete is exceeded. Depending on the exposure of the building part, cracks with a width up to $0.4\,\mathrm{mm}$ (0.016 in) on non-prestressed building parts may be irrelevant regarding structural integrity and durability. However, on pre-stressed bridges, a crack width of $0.1\,\mathrm{mm}$ (0.004 in) can indicate tendon failure and be critical [29, 30]. According to many inspection guidelines, the required measurement accuracy for crack width is $0.1\,\mathrm{mm}$ [31, 32, 33]. This emphasizes that, for practical use, the crack defect class requires pixel-accurate segmentation.
To address this issue, we created fine-resolution crack masks for the 496 dacl10k test images that contain Crack and ACrack. First, each annotated polygon region is cropped and contrast-enhanced; then it is segmented. For Crack instances, we apply Multi-Otsu thresholding to grayscale images. For ACracks, we use a pre-trained crack segmentation model [27] to generate approximations of fine crack masks. The results are fused into a binary mask and manually refined by a civil engineer to ensure pixel-level accuracy. Thus, the defect classes Crack and ACrack from dacl10k are fused within one binary Crack mask.
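The Multi-Otsu step described above can be sketched as follows. This is an illustrative pure-NumPy reconstruction, not the authors' code; in practice a library routine such as `skimage.filters.threshold_multiotsu` would likely be used, and the darkest of the three resulting classes is assumed here to correspond to the crack.

```python
import numpy as np

def multi_otsu_two_thresholds(gray):
    """Exhaustively search two thresholds that maximize between-class
    variance over three classes (a pure-NumPy stand-in for
    skimage.filters.threshold_multiotsu). Expects a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, 255):
        for t2 in range(t1 + 1, 256):
            var, ok = 0.0, True
            for lo, hi in ((0, t1), (t1, t2), (t2, 256)):
                w = prob[lo:hi].sum()
                if w == 0:
                    ok = False
                    break
                mu = (prob[lo:hi] * levels[lo:hi]).sum() / w
                # maximizing sum(w_k * mu_k^2) is equivalent to maximizing
                # between-class variance, since the global mean is constant
                var += w * mu * mu
            if ok and var > best:
                best, best_t = var, (t1, t2)
    return best_t

def crack_mask(gray):
    """Take the darkest of the three Otsu classes as the crack mask."""
    t1, _ = multi_otsu_two_thresholds(gray)
    return gray < t1
```

For ACrack instances, this thresholding would be replaced by the pre-trained segmentation model mentioned above.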
Figure 2: Comparison of classwise pixel count (left) and shape count (right) of the dacl10k train set (blue) and the combination of dacl10k and daclonsynth (orange). The red dashed line marks the average pixel and shape count, respectively, over the displayed classes from dacl10k. All stats are based on the resized data $(512 \times 512)$.
# 3.2 Synthetic Dataset Extensions
Due to the significant class imbalance and low performance on specific classes in the dacl10k dataset, as well as the high costs associated with data collection and labeling, we explore synthetic data generation methods in the following sections [34]. Details on the class distribution in the original dacl10k-v3 dataset are provided in our supplementary material.
# 3.2.1 Synthetic Concrete Surfaces
To increase robustness in defect segmentation and overcome class imbalance, we introduce three new dataset extensions based on synthetic concrete surfaces: daclonsynth, synthcrack, and synthcavity. To generate these surfaces, an extended version of the physics-based rendering (PBR) introduced in Jaziri et al. [16] was used. The rendering pipeline consists of two main stages: (1) scene generation, and (2) defect injection. In the first stage, various texture maps are applied to produce diverse concrete surfaces using Blender’s Cycles PBR engine. Optional overlays such as moss or dirt simulate Weathering effects (see Figure 1). In the second stage, up to two defects are added per scene (see Sections 3.2.3 and 3.2.4), and semantic ground truths (depth, surface normals, and class masks), including a dedicated map for Weathering, are generated automatically.
# 3.2.2 daclonsynth
Building on these synthetic scenes, we generate the daclonsynth extension with the specific goal of mitigating class imbalance in the dacl10k training set. As shown in Figure 2, several classes, particularly Rockpocket, Exposed Rebars, Hollowareas, and Wetspot, are heavily underrepresented in both pixel count and shape count. For instance, Rockpocket is annotated only 354 times and accounts for merely 4.5 million pixels, while the average across classes is around 45 million pixels.
To generate new training samples, we first filter the dacl10k training images to include only those containing at least one instance of a targeted underrepresented class. Then, each annotated shape is cropped from the image along with its corresponding polygonal annotation. This cropped region is then randomly rotated and pasted onto a synthetic concrete surface that is also randomly selected. For each class, half of the synthetic samples include Weathering overlays, while the other half remain clean to promote better generalization to both conditions.
In cases where the target defect typically co-occurs with a larger structural issue, as is common for Exposed Rebars appearing within Spalling or Rockpocket, the crop is extended to include the full area of the co-located host defect. Furthermore, to prevent models from exploiting the artificial distinction between real annotations and synthetic backgrounds, the shapes of Spalling, Rockpocket, Wetspot, Hollowareas, and Efflorescence are dilated using a $30 \times 30$ kernel prior to compositing.
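The crop-and-paste compositing with mask dilation might look as follows. `paste_defect` is a hypothetical helper (random rotation and class bookkeeping are omitted for brevity), assuming single-channel images and a SciPy dependency.

```python
import numpy as np
from scipy import ndimage

def paste_defect(background, crop, crop_mask, top_left, dilate=True):
    """Composite a cropped real-world defect onto a synthetic concrete
    background. The polygonal mask is dilated with a 30x30 kernel so the
    model cannot key on the sharp real/synthetic boundary (sketch of the
    daclonsynth compositing step described above)."""
    mask = crop_mask.astype(bool)
    if dilate:
        # Dilation pulls in a band of real pixels around the polygon,
        # softening the transition between real crop and synthetic surface.
        mask = ndimage.binary_dilation(mask, structure=np.ones((30, 30), bool))
    out = background.copy()
    y, x = top_left
    h, w = mask.shape
    region = out[y:y + h, x:x + w]  # view into `out`
    region[mask] = crop[mask]
    return out, mask
```

In the actual pipeline, the pasted region and its paste location would be randomized, and the dilated mask would be merged into the sample's multi-label ground truth.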
Figure 3: Pixel (left) and shape (right) counts for Crack, Cavity, Weathering, and background in dacl10k and synthetic datasets.
To determine the number of synthetic samples needed per class, we calculate the number of instances required to bring the pixel and shape counts closer to the averages of all underrepresented classes. We then use the mean of these two estimates to define a target sample count per class. The final distribution of the 5,000 synthetic samples in daclonsynth is determined proportionally based on these class-wise demands while accounting for the average number of pixels and shapes per sample for each defect. Although complete balance is not possible due to defect co-occurrence and overlaps, the resulting dataset significantly shifts the class distribution towards uniformity. Non-underrepresented classes may also be reproduced due to overlaps, which further contributes to a more diverse and realistic training dataset.
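A minimal sketch of this allocation logic, assuming per-class pixel and shape counts plus average per-sample pixel and shape yields as inputs; the function name and the exact demand formula are illustrative reconstructions of the description above, not the authors' implementation.

```python
import numpy as np

def allocate_synthetic_samples(pixels, shapes, px_per_sample, sh_per_sample,
                               total=5000):
    """Per-class demand = mean of the sample counts needed to reach the
    class-average pixel count and the class-average shape count; the
    `total` synthetic samples are then split proportionally to demand."""
    pixels = np.asarray(pixels, float)
    shapes = np.asarray(shapes, float)
    need_px = np.maximum(pixels.mean() - pixels, 0) / px_per_sample
    need_sh = np.maximum(shapes.mean() - shapes, 0) / sh_per_sample
    demand = (need_px + need_sh) / 2
    alloc = np.floor(total * demand / demand.sum()).astype(int)
    # assign the rounding remainder to the most underrepresented class
    alloc[np.argmax(demand)] += total - alloc.sum()
    return alloc
```

With illustrative counts, the most underrepresented class (e.g. a Rockpocket-like class) receives the largest share of the 5,000 samples, while classes at or above the average receive none.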
# 3.2.3 synthcrack
Crack patterns are generated using a fractal model based on [16] and rendered onto synthetic concrete surfaces with corresponding semantic masks. The resulting extension, synthcrack, matches dacl10k in terms of crack pixel and shape counts. It also contains some incidental Cavity shapes, originating from fine-grained surface geometry embedded in the rendering pipeline, but these are not generated explicitly.
# 3.2.4 synthcavity
To generate cavities, we introduce a dedicated simulation approach based on Perlin noise. Multiple noise layers with varying octaves (2–32), persistence (0.6–0.9), and lacunarity (1.5 or 2) are combined to create irregular cavity maps. After thresholding and filtering out small regions, the resulting masks are used to build geometry-aware PBR textures. These are rendered and overlaid onto synthetic concrete surfaces to form the synthcavity dataset. Compared to dacl10k, synthcavity contains more cavity shapes, but with smaller area per instance, leading to a lower overall pixel count.
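A simplified sketch of this cavity-map generation. It substitutes smoothly upsampled value noise for true Perlin noise and uses SciPy for upsampling and connected-component filtering, so it illustrates the layering, thresholding, and small-region filtering idea rather than reproducing the actual pipeline; all parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def value_noise(shape, freq, rng):
    """Smoothly upsampled random grid -- a simple value-noise stand-in
    for the Perlin noise used in the paper."""
    grid = rng.random((max(2, int(freq)) + 1,) * 2)
    zoom = (shape[0] / grid.shape[0], shape[1] / grid.shape[1])
    return ndimage.zoom(grid, zoom, order=3)[:shape[0], :shape[1]]

def cavity_mask(shape=(256, 256), octaves=(2, 4, 8), persistence=0.7,
                lacunarity=2.0, threshold=0.75, min_area=50, seed=0):
    """Sum noise layers of increasing frequency and decreasing amplitude,
    threshold the result, and drop connected regions below min_area."""
    rng = np.random.default_rng(seed)
    noise, amp, freq = np.zeros(shape), 1.0, float(octaves[0])
    for _ in octaves:
        noise += amp * value_noise(shape, freq, rng)
        amp *= persistence      # lower amplitude per layer
        freq *= lacunarity      # higher frequency per layer
    noise = (noise - noise.min()) / (np.ptp(noise) + 1e-9)
    mask = noise > threshold
    labels, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))
    return keep
```

The resulting binary masks would then drive the geometry-aware PBR textures described above.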
All synthetic datasets also contribute additional background and Weathering annotations. Overlaps in daclonsynth further introduce crack and cavity shapes, which are reflected in the distributions shown in Figure 3.
# 4 Experiments
In this section, we evaluate the effectiveness of our synthetic dataset extensions by testing eight differently trained models on the dacl10k test set. To assess the contribution of synthetic data to model robustness, the same models are also evaluated on a perturbed version of the test set. Additionally, we conduct two ablation studies: one focusing on the segmentation performance for finely annotated cracks, and another analyzing cross-domain generalization from synthetic to real-world data.
All experiments employ a Feature Pyramid Network (FPN) [35] with a MaxViT-Base Vision Transformer backbone [36]. The primary evaluation metric is Intersection over Union (IoU), following previous work [37, 38, 39], where IoU is set to 1 when the union is zero. This is complemented by F1 score, Precision, and Recall for the analysis of the fine-grained classes and the perturbed testing. In general, the metrics are computed per class at the image level and averaged across the dataset. Mean values are reported by averaging the class-level scores. Further details, such as split sizes and class imbalance, can be found in the supplementary material.
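The IoU convention described above, with IoU defined as 1 when the union is empty, can be stated compactly. This is a minimal per-image, per-class sketch assuming binary masks; dataset-level averaging would follow as described.

```python
import numpy as np

def class_iou(pred, target):
    """Per-image binary IoU for one class. By the convention used in the
    paper, IoU = 1 when both prediction and ground truth are empty
    (union == 0), so absent classes do not penalize the score."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, target).sum() / union
```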
Table 1: mIoU and classwise IoU on the dacl10k test split. A check mark in the Training Data column indicates the synthetic data used during training. Bold numbers indicate the highest value in each column, while underlined numbers represent the lowest.
Table 2: IoU, F1 Score, Recall and Precision on the finecrack and cavity masks. Bold numbers indicate the highest value in each column, while underlined numbers represent the lowest. We compare with two open-source baseline methods for crack segmentation (two bottom rows). The train data are: (1) dacl10k, (2) daclonsynth, (3) synthcrack, (4) synthcavity.
# 4.1 Results on dacl10k
As shown in Table 1, the inclusion of synthetic data improves overall performance, with the six highest mIoU results being achieved when synthetic data is used during training. At class level, extending dacl10k with the semi-synthetic daclonsynth improves the IoU averaged over the daclonsynth classes by $1.3\%$, with Rockpocket showing a notable $+4\%$ increase (see Table 1). This boost likely stems from the fivefold increase in representation of this class provided by daclonsynth. Notably, the top three configurations for Crack IoU in Table 1 do not include synthcrack, which might seem contradictory at first. This outcome is explained by the fact that synthcrack employs much finer and more detailed crack annotations than those in dacl10k. As a result, the model learns to predict cracks at a finer resolution than the dacl10k ground truth can capture. This hypothesis is supported by the result of the model trained only on synthcrack, presented in Table 4, which confirms the best performance on the finecrack masks. The results on the finecrack masks and on Cavity are analyzed in the following Section 4.2.
# 4.2 Ablation: Results on Fine-Grained Classes
For both fine-grained classes, Crack in the form of finecrack masks and Cavity, we provide additional metrics in Table 2. All models incorporating synthcrack report better results on finecrack masks than the baseline trained on dacl10k only. The best finecrack IoU is achieved by the model utilizing dacl10k, daclonsynth, and synthcrack, 1.3 percentage points higher than the dacl10k baseline. Regarding Cavity, the highest IoU $(24.63\%)$ and F1 score $(39.53\%)$ are reported for the model trained on the combination of dacl10k and synthcavity.
# 4.3 Results on perturbed dacl10k
In real-world bridge inspection applications, factors such as different camera models, varying image acquisition settings, and environmental conditions introduce noise that negatively affects model performance. To investigate how this affects our differently trained models, we follow the methodology of Wang et al. [40] and apply 15 different perturbations to the test images. These include different noise functions, blur, brightness changes, and weather effects, which are demonstrated in the supplementary material. The results, averaged over all perturbations, are presented in Table 3.
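Two of the perturbation types (additive Gaussian noise and a brightness shift), together with the relative-change measure reported in Table 3, might be implemented as follows; the parameter values are illustrative and not those of Wang et al. [40].

```python
import numpy as np

def gaussian_noise(img, sigma=0.08, seed=0):
    """Additive Gaussian noise on an image scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def brightness(img, delta=0.2):
    """Uniform brightness shift, clipped to the valid range."""
    return np.clip(img + delta, 0.0, 1.0)

def relative_change(raw_score, perturbed_score):
    """Relative performance difference (in percent) between testing on
    raw and perturbed images, as in the Change column of Table 3."""
    return (perturbed_score - raw_score) / raw_score * 100.0
```

Each of the 15 perturbations would be applied to every test image, and metrics averaged over all perturbed copies.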
Apart from the model trained on dacl10k, synthcrack, and synthcavity (see Section 5), all models trained on both original and synthetically generated data consistently prove the most robust to these perturbations. The highest IoU, F1 score, and Precision are reported for the model trained on dacl10k combined with all synth-dacl extensions. Furthermore, this model shows $1.89\%$ less relative performance loss in mean IoU and $2.54\%$ in mean F1 score (see Table 3).
Table 3: Comparison of IoU, F1, Recall, and Precision on the test split between raw and perturbed (Pert.) images. The Change column represents the relative performance difference between testing on raw images and perturbed images, highlighting the degradation in performance caused by the perturbations. Bold numbers indicate the highest value in each column, while underlined numbers represent the lowest. The train data are: (1) dacl10k, (2) daclonsynth, (3) synthcrack, (4) synthcavity.
Table 4: Synthetic-only training on three datasets; evaluated on real-world dacl10k test split and finecrack masks.
# 4.4 Ablation: Domain-Partitioned Evaluation
To evaluate cross-domain generalization from synthetic to real-world data, we perform a domain-partitioned ablation in which models are trained exclusively on synthetic datasets and evaluated on the dacl10k test split, thereby isolating the transferability of synthetic feature representations (see Table 4). Our findings indicate that, with the exception of synthcavity, the synth-dacl extensions are sufficiently representative to generalize to real-world defect types. This is highlighted by the highest IoU on finecrack masks $(13.11\%)$, reported for the model trained on dacl10k and synthcrack. Utilizing synthcavity alone leads, according to Table 4, to $0\%$ IoU on Cavity from the dacl10k test set, which is further discussed in Section 5.
# 4.5 Qualitative Evaluation
In some cases, the labels in the dacl10k dataset are overly coarse and imprecise, which limits the effectiveness of quantitative evaluation metrics such as IoU in reflecting the model’s performance. Therefore, we complement our evaluation with qualitative results in Figure 4, providing a clearer assessment of the model’s capability in handling the task. The predictions in the second row originate from the baseline trained on dacl10k (blue). The predictions in the “dacl10k + synth” row originate from the model trained on dacl10k plus the synthetic split that specifically includes the respective class. In this row, the three leftmost columns show predictions by the model trained on dacl10k + daclonsynth (green), followed by the models trained on dacl10k + synthcrack (pink) and dacl10k + synthcavity (turquoise), respectively. The bottom row displays predictions from the network that used all data (grey), i.e., the most robust model according to Table 3.
The qualitative examples largely corroborate the metrics reported in the aforementioned tables. The predictions on Wetspot by the most robust model show no false positives, in contrast to the dacl10k baseline. On Rockpocket, the predictions become more accurate from top to bottom, indicating that the additional synthetic data raises Precision. The Crack prediction shows that the segmentation becomes narrower, thus closer to the crack edges, but still leaves room for improvement. Regarding the Cavity sample, we observe that only the relevant cavities are marked, indicating higher accuracy.
Figure 4: Qualitative results on dacl10k test samples for different damage classes (columns) across different training data setups (rows), where the row dacl10k + synth shows the predictions of the model achieving the highest IoU on the given class.
# 5 Discussion
According to Table 1, when synthetic data specific to a single defect, such as Crack (synthcrack) or Cavity (synthcavity), is introduced in isolation, the model performs better than without additional extensions. However, combining both synthetic Crack and Cavity data results in the lowest scores with respect to accuracy and robustness when trained without the balancing from daclonsynth (see Tables 1 and 3). This suggests that while synthetic data can effectively address individual class imbalances, combining certain synthetic datasets can create conflicts that negatively impact model performance by reintroducing imbalance. This is illustrated in Figures 2 and 3. For example, Efflorescence in dacl10k amounts to 40 million pixels and 3,350 polygons, while synthcrack introduces 147 million pixels and 1.7 million polygons showing Weathering.
The best performance on Weathering in Table 1 is reported for the model trained exclusively on dacl10k, suggesting that synthetic Weathering features may not fully capture the characteristics of their real-world counterpart, which strongly depend on the underlying concrete. Style transfer techniques may help bridge this domain gap by enhancing the realism of synthetic textures [41].
Although the model’s predictions on finecrack masks appear promising (Figure 4), the achieved IoU of $13 \%$ indicates substantial room for improvement in terms of practical applicability. A closer inspection of the predictions by the model trained only on synthcrack (see Supplementary Material) reveals that crack segments are often disrupted when objects such as shadows, Wetspots, or Efflorescence are located adjacent to the crack edges. To improve realism and robustness, future work should focus on augmenting the synthetic pipeline with additional examples of such crack-bordering artifacts.
The isolated use of synthcavity proves insufficient for real-world Cavity detection, as shown in Table 2. Nonetheless, its combination with real-world data improves performance (Table 1), supporting its role as a supplementary rather than standalone training asset. Qualitative analysis of predictions made by the model trained only on synthcavity (see Supplementary Material) reveals common false positives arising, as in the case of synthcrack, from visually alien elements such as bolts, soil patches, or drainage components. Moreover, while the model successfully identifies small cavities, it consistently misses large ones, especially those with sharp, irregular geometries. This limitation stems from the underrepresentation of such Cavity features in the synthcavity dataset. To generate more realistic synthetic Cavity images, future work should focus on incorporating methods such as texture-sensitive preprocessing [42], style transfer methods [43, 44], or domain randomization strategies [45].

# Abstract

Adequate bridge inspection is increasingly challenging in many countries due to a growing stock of ageing structures, compounded by a lack of staff and financial resources. Automating the key task of visual bridge inspection, the classification of defects and building components at pixel level, improves efficiency, increases accuracy, and enhances safety in the inspection process and the resulting building assessment. Models undertaking this task must cope with an assortment of real-world conditions: they must be robust to variations in image quality as well as background texture, as defects often appear on surfaces of diverse texture and degree of weathering. dacl10k is the largest and most diverse dataset for real-world concrete bridge inspections. However, the dataset exhibits class imbalance, which leads to notably poor model performance, particularly when segmenting fine-grained classes such as cracks and cavities. This work introduces “synth-dacl”, a compilation of three novel dataset extensions based on synthetic concrete textures. These extensions are designed to balance the class distribution in dacl10k and enhance model performance, especially for crack and cavity segmentation. When incorporating the synth-dacl extensions, we observe substantial improvements in model robustness across 15 perturbed test sets. Notably, on the perturbed test set, a model trained on dacl10k combined with all synthetic extensions achieves a 2% increase in mean IoU, F1 score, Recall, and Precision compared to the same model trained solely on dacl10k.
# 1. Introduction
In the healthcare domain, the decisions made by AI systems can impact the well-being or life of people [1]. In this context, it has been proposed that providing explanations about AI models or single predictions could potentially increase clinicians’ appropriate trust [2–5] and ultimately boost adoption in healthcare settings [6–8]. This has led to increased research in the field of eXplainable AI (XAI), especially in the healthcare sector [9, 10].
Although this area shows promising results, the value of XAI methods still needs to be proven in practice [11]. Systematic and consistent evaluation is crucial: not only for generating understandable explanations but also for ensuring that these explanations are trustworthy and usable for their intended users [12]. Methodologies from the AI/ML community, such as evaluation against a ground-truth dataset, cannot be used in these scenarios: the success of an explanation depends on the user [13], their context, the AI model, and the explanation itself [14].
In this discussion, user-centric evaluation in XAI has been recognised as crucial for designing systems that build trust and ensuring explanations are meaningful and actionable [15–19]. Involving users in the evaluation has been recognised as valuable because the explanations’ value depends on how real users perceive, understand, and interact with them [15, 17]. Nevertheless, some XAI evaluations tend to overlook this aspect and focus rather on abstract measurements without users [20, 21].
Researchers have tried to tackle user-centric evaluation in two ways. First, they have tried to disentangle explanations’ characteristics into simple, measurable properties such as completeness [15, 22–24], novelty [25–28], and interactivity [22, 23, 29]. Second, they have tried to give structure to these properties by providing frameworks that cluster them into meaningful groups and offer general guidelines on how to measure them [15, 17, 22, 30, 31].
These efforts have tried to stress the importance of involving the users in the evaluation of XAI systems, but despite attempts to establish a unified framework for XAI evaluation [15, 22, 32], no consensus has been reached [32]. We believe this occurs for three reasons. First, there are multiple definitions of the aspects to be evaluated and their corresponding measurements [32]. Current efforts have focused on summarising or redefining the aspects based on current research, but do not use previously defined aspects in their frameworks. Second, there are no clear indications of what aspects are more important to be evaluated. Only [26] attempted to create such guidelines based on usage contexts, but still most papers focus on measuring Understanding, Trust and Performance. Finally, current frameworks do not provide clear guidelines on what aspects to measure based on the system’s context. They cluster aspects into meaningful groups but do not state when to measure them.
This research aims to close this gap by analysing user studies to understand how the contextual aspects influence the evaluation and by providing clear guidelines on what aspects to measure based on the context of the application. This paper builds on previous work by Donoso-Guzmán et al. [15], which identified multiple granular properties that could be measured as a foundation for consistent evaluation across multiple layers. In this study, we examine empirical evidence from user studies regarding these properties. Specifically, we explore which properties are most commonly investigated and considered core to XAI evaluation in healthcare, as well as additional properties that frequently emerge in the literature. Based on our findings, we provide guidelines on how to conduct the evaluation of XAI systems, specifying what aspects to measure based on the system and the users.
The study goals can be summarised as:
(RG1) To provide a framework of well-defined and atomic properties that are part of the XAI user experience in the healthcare domain
(RG2) To provide clear guidelines on how to define the evaluation of XAI systems based on the system characteristics
To achieve these goals, we conducted a systematic review of the literature. We coded all the selected papers to identify explanation properties, users’ characteristics, system conditions and connections between properties, and with the collected information, we provide an evaluation framework with guidelines on how to use it to design an evaluation of an XAI system.
The contributions of this paper can be summarised as follows: we present a comprehensive survey of XAI evaluation studies that involve users and are conducted within healthcare settings. Our overview not only identifies gaps in the current XAI healthcare literature but also serves as a source of inspiration for designing more effective explanations and evaluating them rigorously. Furthermore, we classify explanations based on their visual representation and level of interactivity, offering additional insights for future research and practical application. Finally, we propose a set of guidelines to follow during the evaluation of XAI systems. These guidelines aim to support researchers and practitioners in a consistent and informed selection of evaluation strategy, while encouraging a more comprehensive approach to assessing explanations.
# 2. Related Work
# 2.1. Broad XAI surveys
Much of the literature on Explainable Artificial Intelligence focuses on methods to explain complex black-box models. The first surveys of the area tried to standardise the conceptual frameworks to describe these systems and unify the diverse approaches and definitions across the field. Guidotti et al. [33] present a classification scheme for explanation methods pertinent to various black-box systems, targeting specific challenges related to interpretability. Similarly, Barredo Arrieta et al. [34] analyse XAI methods and delve into the context in which explanations are utilised and the objectives behind incorporating them into systems. Sahakyan et al. [35] investigate the interpretability of models that use tabular data, highlighting the necessity for clarity in understanding decisions made by opaque machine learning models.
More recently, the surveys have focused on examining the current challenges of the area. Saeed and Omlin [31] present a meta-survey that underscores the significance of transparency in creating trust and acceptance. The authors analyse the challenges faced in XAI development and implementation. They identify the varying user needs and the balance between accuracy and explanation complexity as the main barriers. Ali et al. [36] also identify several obstacles that must be addressed to advance towards trustworthy AI. They conduct a comprehensive literature review to identify the key concepts, frameworks, and challenges surrounding XAI and trust. They state that user diversity would be better addressed by creating user-centric explanations.
# 2.2. Challenges and Applications in Healthcare
The integration of XAI within healthcare has been subject to extensive exploration, especially concerning the interpretability of medical decision-making processes. Ooge et al. [37] offer a visual analytics review, focusing specifically on visual explanations in XAI, emphasising the importance of visual analytics in healthcare to facilitate understanding of AI outputs. The authors suggest that these visual tools can bridge the gap between opaque algorithmic processes and the interpretative needs of healthcare providers, thereby enabling more informed decision-making. Antoniadi et al. [38] explore challenges for clinical decision support systems (CDSS), highlighting that XAI augments clinical decision-making by improving transparency and trust in AI systems, which are critical for adoption in healthcare environments. They identify key challenges in integrating XAI into clinical workflows and state the need for XAI methodologies that enhance transparency while accommodating clinicians’ needs. Similarly, Chaddad et al. [10] categorise different XAI techniques specific to healthcare applications, identifying challenges and proposing future directions for improving interpretability, particularly in medical imaging. They state that assessing how these methods align with clinicians’ mental models will improve acceptance and operational efficiency in clinical environments. More recently, Mienye et al. [39] conducted a comprehensive survey of XAI applications in healthcare, identifying challenges such as the necessity for explainability, algorithm transparency, and ethical considerations in AI deployment. They state that the success of AI in healthcare depends on users’ ability to understand and trust these technologies.
All these works emphasise the importance of the system’s integration into the clinical workflow and the importance of appropriate trust to increase the adoption of these systems in clinical settings.
# 2.3. XAI Evaluation Surveys
Several studies have tried to establish frameworks to design and evaluate XAI systems. Table 1 presents a summary of the main characteristics of the proposals in the area. Mohseni et al. [40] survey and organise the diverse research on XAI across multiple disciplines, including machine learning, visualisation, and human-computer interaction. The authors propose a step-by-step framework to guide multidisciplinary teams in designing and evaluating XAI systems, providing guidelines and evaluation methods. Vilone and Longo [23] present a comprehensive survey on XAI evaluation, focusing on empirical evaluation methods. They propose a taxonomy that categorises various evaluation approaches based on both theoretical and practical perspectives on explainability. Löfström et al. [27] conduct a semi-systematic meta-survey to identify and organise evaluation criteria for explanation methods in XAI. They present a taxonomy grouping properties into three aspects: model, explanation, and user. They identify four commonly accepted properties: performance, appropriate trust, explanation satisfaction, and fidelity, recommending these for more generalisable research in explanation quality. Nauta et al. [22] offer an extensive review of computational evaluation methods that do not involve user studies, providing a detailed taxonomy of computational techniques for assessing the quality of explanations.
Recent evaluation surveys have focused on human-centric evaluation. Lopes et al. [32] introduce a new taxonomy to organise XAI evaluation methods, aiming for clarity and intuitiveness by considering the multidisciplinary nature of XAI research. The taxonomy is divided into two families: Human-centred and Computer-centred methods. It is designed to serve as a map for XAI evaluation methods during the development process, helping researchers from different disciplines systematically select and apply appropriate evaluation strategies. Kim et al. [17] present a systematic review of 73 studies evaluating XAI systems with users, focusing on what makes explanations meaningful from a human-centred perspective. The authors identify 30 components of meaningful explanations and organise them into a taxonomy with three main dimensions: the contextualised quality of the explanation, its contribution to human-AI interaction, and its contribution to human-AI performance. Rong et al. [16] also present a review of user studies in human-centred XAI. The authors categorise user studies based on measured characteristics such as trust, understanding, usability, and human-AI collaboration performance. The paper offers practical guidelines for designing and conducting user studies and identifies open research directions, especially the need to integrate psychological science into human-centred XAI research.
# 2.4. Identified Gap
Current evaluation frameworks emphasise the multidisciplinary nature of XAI. There is widespread recognition of the need for frameworks and methodologies that incorporate human factors into the evaluation process. These studies assert that to effectively evaluate XAI applications, the evaluation should be human-centred [16, 17, 32, 36, 40] and should consider the context of the system’s deployment [10, 38]. It is also understood that the lack of standardisation in the field hinders the consistent evaluation of systems [17, 23, 32, 36, 40].
While previous surveys have effectively highlighted the importance of multidisciplinary collaboration and user-centric design in explanations, they have not contributed to standardising the field (see table 1). These works have proposed various aspects, properties, components, and definitions for measuring or assessing evaluations, but they have not built upon previous definitions or connected them. Furthermore, although all these frameworks recognise the importance of context in the evaluation process, they tend to apply these evaluations broadly across different fields without explicitly considering the specific conditions of each area.
Table 1: Summary table with main characteristics of Evaluation-related surveys. Kim et al. [17] present the most similar evaluation survey, but the inclusion criteria are different; they include Wizard of Oz and exclude systems that make predictions for images.
In this work, we focus exclusively on the healthcare domain and employ the definitions presented by Donoso-Guzmán et al. [15] to analyse user studies. We aim to provide clear guidelines on what aspects to measure based on the characteristics of both the system and the users. Unlike Kim et al. [17], we do not restrict our analysis by data type; we include studies involving images and other modalities, whereas they limit their scope to certain types of data. Additionally, while their review includes Wizard-of-Oz-style studies where humans simulate AI behaviour, we only consider studies in which an actual AI system, regardless of its specific implementation, is responsible for making predictions.
Figure 1: PRISMA schema of the survey process.
# 3. Methodology
# 3.1. Paper retrieval
We collected papers from 5 databases: Scopus, Web of Science, ACM, IEEE, and PubMed, on 15th May 2024. The first two are general databases that include journals that publish in computer science and medicine. IEEE has several journals in computer science, and ACM is specialised in computer science. PubMed was chosen to also capture papers that are published in more specialised medical journals. The query for our systematic review was constructed using the PICO framework to ensure a comprehensive search across multiple academic databases. The search terms were categorised into two main components:
• Population: Keywords related to explanation, interpretability, and AI systems, including terms such as “Explainable AI (XAI)”, “Interpretable Machine Learning (IML)”, “Human-centred AI”, “Decision Support Systems”.
• Intervention: Terms focusing on human evaluation and empirical assessment, including “user”, “empirical”, “experiment”, “evaluation”, and “user-study”.
At this stage, no restrictions were applied on the publication date, topic, journal or conference; the only enforced restriction was that the paper had to be peer-reviewed. We did not apply restrictions on the healthcare domain because, in the case of interdisciplinary research, abstracts need to capture the attention of multiple profiles and make compromises based on their most important target or publication venue. We left the healthcare domain filtering for a full-text screening phase. The query was applied to each database with some changes to comply with their different formats and restrictions 1. We collected 13,860 papers, of which 5,633 were duplicates.
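The PICO-style query construction above can be sketched programmatically. The helper and the abbreviated term lists below are our own illustration of combining the Population and Intervention groups, not the exact query strings submitted to the databases:

```python
def build_query(population_terms, intervention_terms):
    """Combine PICO term groups into a single boolean search string."""
    # Terms within a group are alternatives, so they are joined with OR
    population = " OR ".join(f'"{t}"' for t in population_terms)
    intervention = " OR ".join(f'"{t}"' for t in intervention_terms)
    # The two groups must both match, so they are joined with AND
    return f"({population}) AND ({intervention})"

query = build_query(
    ["Explainable AI", "XAI", "Interpretable Machine Learning"],
    ["user study", "empirical", "evaluation"],
)
```

In practice, each database's syntax (field tags, wildcard rules, quoting) requires the per-database adjustments mentioned above.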
# 3.2. Screening process
# 3.2.1. Title and Abstract screening
After removing duplicate records, 8,226 papers proceeded to Title and Abstract Screening. The focus of this step was to keep papers that were user studies of XAI applications using real AI models. The inclusion criteria were:
1. The AI was a model that created predictions, not a Wizard of Oz. In this review, we consider that AI models and XAI methods are designed materials with specific affordances and limitations [42]. To ensure that the analysed explanations took these restrictions into consideration, we excluded studies that did not have an actual AI model but a simulated one, i.e. a Wizard of Oz.
2. The XAI method is reproducible, which means the explanations were not generated by people but with a replicable computational process. Just like for the previous criterion, the XAI method also has specific limitations and capabilities that the explanation design needs to take into account.
3. The assessment was done by people, not with simulations or automatic metrics. The goal is to provide guidelines for studies with users, so only those studies were considered.
4. The paper does not explicitly mention a user study, but the described results suggest that the study was conducted with users:
• The authors reported results on user satisfaction, performance, and preference.
• The authors claimed to increase user interpretability.
The exclusion criteria were:
1. People generated the AI predictions.
2. People generated the explanations.
3. The paper did not describe the assessment of an XAI system.
4. The XAI was used to make the model interpretable, but the interpretability was not evaluated.
To ensure the criteria were followed, four screeners reviewed papers and discussed disagreements until achieving a Fleiss Kappa score above 0.8. After this, one screener continued with the rest of the papers. In case of doubt, the paper was included and checked again in the full-text screening phase.
At the end of this stage, 6,880 papers were excluded for not meeting the inclusion criteria, leaving 1,346 papers eligible for full-text screening.
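The agreement threshold used in the screening step can be made concrete. Below is a minimal from-scratch sketch of Fleiss' kappa; the category counts are invented for illustration and are not data from this review:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table of per-item category counts.

    ratings: one row per screened paper; each row holds how many
    screeners assigned the paper to each category, and every row
    must sum to the same number of raters.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # overall proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / (n_items * n_raters)
         for j in range(n_cats)]
    # observed agreement: pairwise rater agreement per item, averaged
    P_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # agreement expected by chance
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: four screeners, four papers,
# two categories (include / exclude)
table = [[4, 0], [0, 4], [4, 0], [2, 2]]
kappa = fleiss_kappa(table)
```

With the illustrative table above, the single disagreement on the last paper pulls kappa below the 0.8 threshold, which in the protocol described here would trigger further discussion before solo screening continues.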
# 3.2.2. Full-text Screening
The full-text screening was conducted in two phases: first, we screened the papers to look for healthcare applications, and second, we screened the healthcare papers to check if they presented user studies of XAI applications.
Finding healthcare applications. In this stage, we filtered papers to look for studies that evaluated healthcare applications. We skimmed the full text to include papers that had:
1. Presented an evaluation with users.
2. AI models that predicted:
(a) Health outcomes or risks
(b) Disease outcomes or risks
(c) Drug-related information
(d) Information related to the general population health
3. Users that were:
(a) Healthcare professionals
(b) Patients or people with a certain condition
(c) People who did not have any health condition, and the application’s goal was to evaluate the risk of a specific disease or condition
(d) People who did not have any health condition, and the application’s goal was to teach people about specific risk factors.
The exclusion criteria in this part were:
1. The AI model predicted
(a) Food recommendations to people who did not have a specific condition.
(b) General behaviour changes to improve overall health.
2. Publication was not peer-reviewed
3. Publication was a Thesis, Workshop proposal, or Editorial
Of the 1,346 papers subjected to full-text retrieval:
• 967 papers were excluded as they focused on non-healthcare applications
• 26 papers focused on XAI perspectives rather than empirical studies
• 47 papers were excluded due to being the wrong publication type
• 32 papers were excluded due to lack of access
Selecting papers for coding. The previous process left 274 papers related to healthcare applications. These papers were checked against the following exclusion criteria:
1. No user study (n = 84). The paper did not present an evaluation with users.
2. Missing study details (n = 19). The paper presented an evaluation with users, but the description of the study missed details that are part of the review.
3. Lack of an AI component (n = 12). The study did not include an AI component.
4. Lack of explanation aspects (n = 39). The paper did not evaluate the explanation that was created.
5. Wizard of Oz (n = 19). The paper presents an evaluation where the AI model is a Wizard of Oz, not a trained AI.
6. Unsuitable participants (n = 18). The participants had no knowledge relevant to the task, e.g. for a breast cancer prediction task, the evaluation was conducted by users without a medical background.
7. Not peer-reviewed (n = 1).
After applying these criteria, 82 papers were included in the final review.
# 3.3. Analysis Criteria
Papers were coded based on the coding scheme developed by the research team, which is presented in the following sections. This deductive coding was conducted by three researchers in ATLAS.ti software and discussed in regular sessions to reach consensus and ensure consistency. During the coding process, new codes and criteria emerged to better describe the evaluation and the system. These are mentioned in this section and explained in the results.
# 3.3.1. User description
To distinguish between users’ backgrounds, we adopted the classification proposed by Suresh et al. [43], which defines three types of user knowledge: formal, instrumental, and personal, as well as three contextual categories: machine learning, data domain, and milieu.
Formal knowledge refers to training acquired through formal education, such as a university degree. For example, in the data domain context, this could include medical students. Instrumental knowledge represents applied expertise gained through practical experience, such as doctors specialising in a particular field (e.g., diabetologists). Personal knowledge, on the other hand, refers to lived experience. In the context of machine learning, for instance, this could apply to a doctor who has neither formal nor instrumental training in ML but has developed an understanding of it through personal interest and self-learning.
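A sketch of how this coding scheme might be represented in code. The class, method names, and example participants are our own illustration of the Suresh et al. [43] categories, not artifacts from the reviewed studies:

```python
from dataclasses import dataclass, field

# The two axes of the Suresh et al. classification
KNOWLEDGE_TYPES = {"formal", "instrumental", "personal"}
CONTEXTS = {"machine learning", "data domain", "milieu"}

@dataclass
class ParticipantGroup:
    """One group of study participants, coded as (context, type) pairs."""
    label: str
    knowledge: set = field(default_factory=set)

    def add(self, context, knowledge_type):
        # Only combinations from the two predefined axes are valid codes
        assert context in CONTEXTS and knowledge_type in KNOWLEDGE_TYPES
        self.knowledge.add((context, knowledge_type))

# Hypothetical group: diabetologists with formal and instrumental
# data-domain knowledge, plus self-taught (personal) ML knowledge
doctors = ParticipantGroup("diabetologists")
doctors.add("data domain", "formal")
doctors.add("data domain", "instrumental")
doctors.add("machine learning", "personal")
```

Representing each group as a set of pairs makes it easy to aggregate knowledge combinations across studies, as done later for fig. 5.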
# 3.3.2. AI model and XAI method
The AI model and XAI method were identified in each study. No categories were defined before the coding started; instead, we conducted an inductive analysis of this data once all papers were coded to identify the different model and method categories.
# 3.3.3. Data
Data refers to the nature of the dataset for which the system is created. We used the same categories as [22], without the item-matrix type: tabular, text, images, time series, audio, video, and graphs.
# 3.3.4. Usage Context
Usage context is “a situation for which a user seeks explanations.” [26]. We used the same usage contexts as defined by [26]. They state that the application scenario dictates the XAI usage context, and define six possible contexts which we use to classify papers:
• Model Improvement: Ensuring the AI model performs as intended by analysing and improving it during development and after deployment.
• Capability Assessment: Evaluating the AI system’s capabilities and limitations to determine its reliability and appropriate usage.
• Decision Support: Understanding AI predictions to make informed decisions and take appropriate actions based on the causes of predictions.
• Adapting Control: Gaining insights into how the AI system processes data to better control and adjust system behaviours.
• Domain Learning: Identifying patterns and knowledge from AI-extracted historical data for improved prediction tasks.
• Model Auditing: Verifying AI compliance with fairness, security, and privacy standards.

Figure 2: Summary of criteria used to code the selected papers.
# 3.3.5. Explanation
Unlike other surveys, where the explanation and the method used to generate it are analysed together [22, 34], we split the analysis to decouple how the explanation is generated from what information the user will see. To achieve this, we analysed the explanations according to explanation elements, as defined in previous work [15]. These elements are: Generation, the process of identifying and selecting causes; Abstraction, the content of the explanation provided by the XAI method; Format, design of the explanation; and Communication, the interaction of users with the explanation (see fig. 3).
For the generation level, we decided to label only the scope of the explanation as global or local. Other characteristics, like agnostic vs specific model type, were left for the analysis of the method. For the abstraction level, we used the types defined by [44]. These authors define different ways in which “people understand events or observations through explanations”. They define that for inquiry and reason, users can reason in a deductive, inductive, or analogical way. For causal explanation and causal attribution, they state that users seek counterfactual, contrastive and attribution types of explanations. Based on this, we defined four types of abstractions using the common names used in the XAI literature: example-based (inductive, analogical), feature importance (attribution), counterfactual and rule-based (deductive). Additionally, we added the data-centric type, which focuses on providing explanations based on the data features. This type of explanation has proven to be relevant in healthcare settings [45, 46]. For the format level, we focused on what the user would interact with and what type of cognitive effort they would have to make. We chose three basic types based on [47]: textual, visual and hybrid. Lastly, for the communication level, we classified interfaces based on whether the user could interact with the explanation. We included all types of interaction that allowed the user to change what they saw, such as filtering, tooltips, or selecting samples. A summary of these criteria and their values can be found in table 2.
Figure 3: Explanation elements as defined by [15]. These elements help to identify characteristics of explanations that are defined at the different steps of the process and within the explanation product.
Table 2: Explanation Categories and Types
# 3.3.6. Study type
All papers were classified based on the type of user evaluation they used. We distinguished between qualitative, quantitative and mixed-methods studies.
# 3.3.7. Measurements
While coding the studies, we also distinguished between particular measurements that were used. These categories were created during coding based on the evidence and are referred to in the results section. These measurements were associated with the elements of explanation described in fig. 3.
# 3.3.8. Properties and their relations
A property, in the XAI context, can be defined as a measurable characteristic of an explanation. The properties belonging to the same framework should have non-overlapping definitions, but can be related to each other by correlation or causation. We use the framework of properties defined in [15] to deductively code the properties measured in the studies. This framework describes 6 Conceptual Components based on the work by Knijnenburg and Willemsen [48], although properties are described for only 4 of them. These components group properties that measure related concepts; the framework comprises 30 properties in total and presents theoretical relations among them derived from the surveyed papers. This framework was chosen because it is strongly grounded in theory and previous work in the AI evaluation field: the property definitions unify multiple earlier definitions, and the properties are organised using an evaluation framework widely used in the recommender systems field.
In this survey, a property was coded as part of a study when the definition of the measurement matched the definition of the chosen framework. In qualitative studies, we looked at the definitions or meanings the authors were communicating when discussing the concept. For instance, in [49] the authors say:
Participants also used the highlighted sentences to verify whether the most and least influential sentences matched their expectations.
In this case, the whole sentence was marked as Information Correctness because the users performed this action to check whether the explanation matched the information they expected to see.
In the case of quantitative studies, when the authors used User Behaviour or Metrics to measure the property, we made sure to understand the goal of the measurement and to apply the same criteria consistently across all papers using the same or a similar measurement. When the study used Closed Questions and the questions were available, we read each question and coded the property it was measuring. In this way, we ensured that the properties were coded without overlaps between them. In all these cases, the name the authors used to describe the measurement was not decisive; only the property that was actually measured was relevant.
Relations between properties were coded in both qualitative and quantitative studies. In quantitative studies, a relation was marked when the study measured a correlation between two properties and the correlation was found to be significant. In qualitative studies, the authors had to mention that a specific property caused or affected another property. For instance, in the quote
In this case, the participants expressed that, as annotated regions from the system matched the important regions determined by the users, they were confidently able to continue to the next stages.[50]
we see that Information Expectedness influences the Confidence the users have on their decision.
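This relation-coding step can be pictured as building a small evidence-weighted directed graph over properties. The triples and study identifiers below are placeholders for illustration, not citations from this survey:

```python
from collections import defaultdict

# Hypothetical coded evidence: (cause property, affected property, study id)
coded_relations = [
    ("Information Expectedness", "Confidence", "S1"),
    ("Information Expectedness", "Confidence", "S2"),
    ("Transparency", "Trust", "S3"),
]

def build_relation_graph(relations):
    """Map each directed property pair to the studies supporting it."""
    graph = defaultdict(list)
    for cause, effect, study in relations:
        graph[(cause, effect)].append(study)
    return dict(graph)

graph = build_relation_graph(coded_relations)
```

Aggregating edges this way makes it easy to see which property relations are supported by several studies and which rest on a single observation.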
# 4. Results
# 4.1. General statistics
Figure 4: Publication year of selected papers. Most of them (n = 80) were published in 2019 or later. Only two papers from before 2019 complied with the inclusion criteria [51, 52] and are not shown in the chart.
We coded 82 papers, where 80 were published in 2019 or later (figure 4). Publication journals can be classified into three groups. The first group consists of Computer Science journals (44 papers), which focus on the technical aspects of implementation as well as the human factors involved in interactions with computer-based systems. The second group includes interdisciplinary journals (31 papers) that explicitly state their focus on applying Computer Science to the healthcare domain. Finally, healthcare journals (7 papers) primarily concentrate on healthcare topics.
Among the 82 publications, 56 were journal articles, while 26 were part of conference proceedings. The journal with the most selected papers was Artificial Intelligence in Medicine (Elsevier), with seven, followed by Medical Image Analysis and BMC Medical Informatics and Decision Making, each with three. The conferences with the highest number of papers were the CHI Conference on Human Factors in Computing Systems, IUI Intelligent User Interfaces, and the CD-MAKE International Cross-Domain Conference for Machine Learning and Knowledge Extraction, each featuring three papers.
# 4.2. Participants
In this section, we describe the characteristics of the participants involved in the user studies. Except for 7 studies [47, 49, 53–57], the selected studies primarily involved users with formal medical knowledge (n = 75), including medical students and residents. Fewer studies (n = 56) additionally included users with extensive practical knowledge, such as clinicians with a certain level of experience or specific expertise (e.g., oncologists, paediatricians, neurosurgeons). In cases where practical experience was not explicitly mentioned or the term “medical expert” was used without further specification, we classified participants solely within the formal data domain category. Personal data domain knowledge was rare, appearing in only 8 studies, as it primarily refers to patients who have personal experience of or connections with the AI’s area of focus. Only three of these evaluated how people who experienced a specific healthcare condition interacted with explanations [46, 47, 54]. Some participants also had notable knowledge of machine learning or AI: six studies [55, 57–61] included participants with formal machine learning knowledge, while nine [59–67] involved users with instrumental machine learning expertise.
Some studies state that they had participants with different knowledge levels. For instance, the studies [68, 69] had two user groups: residents and specialists. Both groups had Formal Data Domain knowledge, but only the specialists had Instrumental Knowledge. We identified 30 studies with more than one group in their evaluation. Most of them only described the knowledge these participants had in the domain, but did not assess their ML/AI knowledge. fig. 5 shows the different types of knowledge combinations that were present. Notably, the ML/AI knowledge was not specified for most groups. Out of the 116 groups of participants that were studied, only 9 had knowledge in both the Data Domain and ML/AI (figure 5).

Figure 5: Groups of participants in the selected papers, according to their knowledge of the Domain and of ML/AI. Each paper can have more than one group. Most studies do not evaluate the ML/AI knowledge their participants have.
Regarding the number of participants, the studies had, on average, 34.5 participants. Qualitative studies had between 2 and 21 participants, mixed studies between 1 and 262, and quantitative studies between 3 and 223. In this last group, 21 studies (51%) had 16 or fewer participants; of these, 10 provided confidence scores for their results, and 6 acknowledged the sample size as a limitation of the research.
# 4.3. Data
The majority of papers use one type of data in their systems. We found that only two papers [54, 70] incorporate more than one type of data: [54] uses Graph, Tabular and Text, while [70] uses Tabular and Video. Eight papers were marked as using both Tabular and Time Series datasets because they use raw Electronic Health Records: these papers show the user some kind of patient evolution over time and do not compress the records to a single time point when explaining. Overall, we find that the most used type of dataset is Tabular (n = 35), followed by Images (n = 27) and Time Series (n = 12). The other types are used in fewer than 10 papers each.
Table 3: Studies per data type. Some studies used more than one type of data. For instance, [54] has graph, tabular and text data types.
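Because a paper can be coded with several data types, the per-type counts above are tallies over a multi-label assignment rather than a partition of the papers. A minimal sketch of this counting (the paper-to-type mapping below is illustrative, not the review's full coding):

```python
from collections import Counter

# Illustrative multi-label coding: paper id -> data types used
coding = {
    "paper_a": ["tabular"],
    "paper_b": ["tabular", "time series"],  # e.g. raw EHR data
    "paper_c": ["images"],
    "paper_d": ["graph", "tabular", "text"],
}

# Each paper contributes one count to every type it uses, so the
# column totals can exceed the number of papers
counts = Counter(t for types in coding.values() for t in types)
```

This is why, in a table like table 3, summing the per-type counts does not recover the 82 included papers.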
# 4.4. Usage Context
Each study was coded with one or more usage contexts. The most common context is Decision Support, defined as using XAI to support informed decision-making and enhance understanding of prediction causes [26]; it appears in 50 out of 82 papers. Decision support applications were most common in oncologic radiology [51, 52, 67, 69, 86, 87, 92, 95, 116, 125], emergency medicine [66, 98, 101, 103, 104, 113, 115], and oncology [63, 72, 74, 79, 91, 110].
The second most common usage context was Capability Assessment, present in 24 studies. Examples include using explanations to assess the practical validity of AI systems [57, 74] or engaging experts to assess whether the system is capable enough through evidence provided by explanations [62, 89].
The third most common was using explanations for Domain Learning, appearing 9 times. Examples include an interactive and explainable dashboard for drug repurposing [73, 76], a chatbot helping patient relatives understand causes of cancer [53], or a model explaining and detailing risk factors of coronary heart diseases [55].
The usage context of Model Improvement employs explanations to verify model behaviour and inspect how the model needs to improve. Only three studies (those by Lee et al. [124, 126, 128]) explore this capability, where physical therapists could provide feature-based feedback in a system for rehabilitation assessment.
The final context emerging from our studies is Model Auditing, in a study where health experts were involved to assess an explainable model and remove any problematic risk functions if needed [60].
Notably, the usage context of Adapting Control, which aims to understand how to achieve desired AI system behaviour, did not emerge in any of the included studies.
# 4.5. AI models
Most of the studies (n = 72) evaluated a system using one AI model. Ten studies deviated from this: four papers presented one system that used multiple models to generate information for its users [63, 119, 121, 127]; three papers [59, 91, 125] present two evaluations, one for each model; and three papers [79, 116, 128] make an explicit comparison of the models’ performance and explainability.
Most of the models were used for classification tasks ( $n = 92$ ), followed by regression with 11, and finally, only one model was a ranking mechanism. The most used models were CNN-based: different DenseNet [129] ( $n = 7$ ) and ResNet [130] ( $n = 6$ ) architectures, as well as other architectures and custom implementations ( $n = 14$ ). They are followed by Random Forest [131] ( $n = 8$ ) and boosting techniques [132] ( $n = 6$ ). These models were grouped into the following categories (see table 4):
• Neural Networks: Deep learning models, including CNNs, RNNs, and transformers.
• Tree-Based Methods: Decision trees, random forests, and boosting models.
• Linear Models: Regression and classification models such as logistic regression and support vector machines (SVMs).
• Probabilistic Models: Bayesian networks and Naive Bayes classifiers.
• K-Nearest Neighbors (KNN): Distance-based models like KNN and weighted KNN.
• Ensemble Methods: Techniques combining multiple models.
• Ensembles with Knowledge: Hybrid approaches that integrate knowledge-based reasoning with trained models.
• Knowledge Graphs: Models that used knowledge graphs for classification.
• Reinforcement Learning (RL): Models using reinforcement learning for decision-making and classification.
• Rule-Based Systems: Models using predefined rule-based logic.
• Private Models: Proprietary models that are not explicitly described.
• Other Methods: Miscellaneous techniques that do not fit in the above categories.
Table 4: Studies per AI model type. The model category can be used multiple times per paper because we included comparative studies.
# 4.6. XAI methods
In this section, we present the methods used in the studies to generate XAI explanations. We first analysed whether each system used one or multiple XAI methods to create its explanations (see table 5). Most of the studies ( $n = 55$ ) used only one XAI method in their systems. Twenty-three papers used multiple methods to generate their explanations within one system. Four papers tested and explicitly compared several XAI methods.
Table 5: Studies organised by how they used the XAI methods. Most of the studies only use one method.
As explained in the methods section, the XAI method was marked in each paper without making any categorisation to simplify the coding process. Once the coding phase was over, we analysed the different descriptions and looked for an up-to-date taxonomy that could help us categorise the methods. We decided to use the taxonomy defined by Ali et al. [36] (see table 6). In this taxonomy, the authors classify the methods into three big categories: data, model, and post-hoc explainability, and there are several subcategories within each. According to [36], these categories can be defined as:
• Data explainability methods focus on helping users understand the underlying data
• Model explainability aims to use the AI model itself to create explanations
• Post-hoc explainability groups techniques that are applied after training and prediction to create explanations for users
Most methods used were classified as post-hoc ( $n = 51$ ), while model and data explainability methods appeared in 23 and 20 papers, respectively. The most used method is SHAP values [133] ( $n = 17$ ), followed by Grad-CAM [134] ( $n = 7$ ).
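For context, SHAP attributes a prediction to the input features via Shapley values: each feature receives the weighted average of its marginal contribution over all coalitions of the remaining features. A minimal brute-force sketch of this idea (illustrative only; the shap library uses far more efficient estimators):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the prediction f(x) relative to a baseline.
    Features outside a coalition are set to their baseline value."""
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Toy linear "risk model" (hypothetical weights, not from any reviewed study)
w = [0.5, -0.2, 0.8]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_values(f, x=[2.0, 1.0, 3.0], baseline=[0.0, 0.0, 0.0])
# phi == [1.0, -0.2, 2.4]; the values sum to f(x) - f(baseline)
```

For a linear model, as in this toy example, the Shapley value of feature $i$ reduces to $w_i (x_i - b_i)$, and the attributions always sum to the difference between the prediction and the baseline prediction.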
Table 6: XAI methods for each paper classified by the Taxonomy by Ali et al. [36]
# 4.7. Explanations
Feature importance is the most commonly used type of explanation abstraction. A total of 58 studies presented explanations of this type, with 36 relying exclusively on it. The second most common type is data-centric explanations, which highlight aspects of the input dataset to provide context. This approach appeared in 20 studies, with nine using it as the sole explanation type. Its most frequent pairing was with feature importance. The systems by [101, 119] are prominent examples of the integration of both approaches: they combined multiple visualisations using different explanation styles to provide a holistic view of the model predictions they were explaining.
Example-based explanations were used in 14 studies, with three employing them as the only explanation type. Similar to data-centric explanations, example-based explanations were often paired with feature importance, appearing together in seven studies. The study by Röhrl et al. [62] is a good example of the integration of example-based explanation and feature importance. Here, the user can identify parts of the cell that were relevant to the classification, and they can also see examples of similar classes to compare the cell image.
Rule-based explanations appeared in 10 studies, while counterfactual explanations were present in 8. These less common explanation types were usually combined with others; both were paired with additional explanation types in six studies each.
Regarding the scope of explanations, 69 studies used local explanations, while only four relied solely on global explanations. Nine studies incorporated both global and local explanations.
Most studies ( $n = 68$ ) used a single explanation format. The visual non-interactive format was the most prevalent ( $n = 43$ ), followed by visual interactive ( $n = 15$ ) and purely textual ( $n = 10$ ). Fourteen studies employed hybrid formats, combining text with visuals. Among these, the most common hybrid approach combined text with visual non-interactive elements ( $n = 12$ ), while only two studies combined text with visual interactive explanations.
# 4.8. Study Type
As shown in table 7, the majority of the papers ( $n = 41$ ) used purely quantitative evaluation methods, such as questionnaires with scales. Ten studies used purely qualitative methods, such as interviews, and 31 had a mixed-methods design, for example, a combination of an interview with a Likert-scale questionnaire.
Table 7: Papers per study type. The majority of papers conducted a quantitative study.
Figure 6: Number of properties studied by papers and type of study. Triangles represent the average number of properties per type. Quantitative studies measured the least number of properties.
The type of study strongly determined the number of properties that were evaluated. Figure 6 shows that quantitative studies usually measured around four properties, qualitative studies close to five, and mixed studies almost seven on average.
# 4.9. Type of methods used for evaluation
To closely examine the methods used, we first analysed how the property was measured. Here, we used inductive coding and identified five distinct ways, corresponding to the explanation elements Abstraction and Communication (see fig. 3):
• Closed Questions: questions with a limited set of answers, for instance, Likert-type or yes/no questions.
• Open Questions: questions with no predetermined answer; the participant is free to use the words and expressions she wants.
• User Behaviour: questions or metrics that measure the user's actions or knowledge, for instance, answers to questionnaires that measure objective understanding and performance.
• Interview Analysis: the property appears as part of a qualitative data analysis, which could have been deductive or inductive. It is possible that no open questions are marked in a paper because the authors did not disclose the interview protocol.
• Metrics: standardised measurements using a mathematical formula, which focus on assessing the XAI system's competencies without gathering direct user feedback.
Figure 7 presents a chart with the use of these methods according to the study type (quantitative, qualitative, mixed) and the property they measure. Mixed studies use closed questions as much as quantitative studies, but for properties like Relevance to the Task and Information Expectedness, they used interview analysis much more. Quantitative studies do not use measurements at the abstract level and rely on closed questions and user behaviour to conduct their analysis.
To describe the methodology of qualitative studies and the qualitative sections of mixed studies, we analysed the study descriptions to understand the authors' qualitative method. We identified seven distinct qualitative methods, as seen in fig. 8: most studies employed interviews, either before or after using the system. These were followed by open-ended questions in questionnaires and the think-aloud method during system interaction, including constructive interaction. A few studies utilised ethnography, focus groups, and observation as continuous evaluation methods.
# 4.10. Medical Domain
We now give an overview of the medical domains in which the reviewed XAI systems were applied. The most common is internal medicine (25/82 papers), with specialisations including oncology (e.g., cervical cancer diagnosis using tabular data [110], or predicting the recurrence of breast cancer [107] or lung cancer [74]), cardiology (e.g., risk assessment or diagnosis of coronary heart disease [55, 56, 105, 109], or classifying pulmonary heart diseases [99] or cardiomegaly cases [84]), endocrinology (mainly focussing on diabetes, either diabetes monitoring [46] or risk prediction [97, 117, 120], with only one non-diabetes study, on thyroid tumour classification [69]), pulmonology (systems for classifying asthma and bronchitis [65], paediatric pneumonia [94], or the previously mentioned system for classifying pulmonary heart diseases [99]), neurology (e.g., systems predicting stroke likelihood due to obstruction or rupture in the brain [111], sleep-staging predictions using EEGs [50], or predicting the rehabilitation of comatose patients [61]), and haematology (assessing risk for coagulopathy [72, 108]).
Figure 7: Measurement method for each property depending on the study type. Quantitative studies rely heavily on Closed questions to conduct their studies, and Mixed-studies use Interview Analysis to capture feedback on specific properties.
Table 8: Studies organised by medical domain and explanation type.
Figure 8: Qualitative methods for mixed and qualitative studies. By far, the most used method is the interview.
The second most common was the use of XAI for support diagnostics (16/82 papers), with most applications centring around oncologic radiology. Examples include systems, and by extension explanations, for glioma or other tumour classifications [52, 58, 69, 83, 95, 125], most of which use feature importance explanations. Support diagnostics also includes radiology systems that do not involve oncology, used, e.g., for assessing spleen injuries [80] or bone fractures [90, 92] through X-ray images.
The third most frequent domain was emergency medicine (11/82 papers), with applications such as patient triaging [66, 101] and predicting ICU stay duration [121].
Applications within general medicine (4/82 papers) include broader topics such as predicting diagnoses based on electronic health records [102] and clinical history [123], classifying medical articles [49], or supporting consumer health search [57].
Finally, we have specialisations that fall outside the aforementioned categories, grouped under other medical domains (19/82 papers). These include applications in physiotherapy (8 papers), where explanations supported systems for detecting wandering patterns [122] and evaluating rehabilitation progress [124, 126]. Studies within the field of psychology have a slightly broader focus, ranging from assessing anxiety levels from speech [71], stress levels from wearable sensor data [114], and a person's mental state [100], to systems for automating "Grief Inquiries Following Tragedies" [106] or mental health recommendations for people with chronic pain [47]. Pharmacological studies centred around either drug repurposing for treatments [54, 73] or assessing drug-disease treatment pairs [76]. Studies within orthopaedics focused on lesion or fracture classification through X-rays [77, 82, 92], whereas studies within pathology focused on cytological image analysis [62]. For dermatology, two studies focused on assessing skin lesions [78, 89], whereas one study pertained to melanoma classification [91] (also falling under the oncology domain). Two studies centred around ophthalmology, both using feature importance explanations for assessing glaucoma cases [64, 88]. Finally, only one study focused on dentistry, identifying furcation-involvement lesions on a series of dental radiographs and explaining them by highlighting image regions [85].
# 4.11. Relations between properties
We found 85 relations between properties. As seen in figure 9, most of these relations occur between Subjective System Aspects and User Experience. They were found in 34 papers, split equally between quantitative and mixed studies ( $n = 15$ each), with four qualitative studies also describing relations. The average number of users does not differ between relations supported by one, two, or three papers.
Figure 9: Relations between the properties. Each square represents a relation between two properties. Darker squares mean that more papers found that relation. Most of the relations are found between the Subjective System Aspects and User experience aspects.
# 4.12. Properties
In the following sections, we present observations for each property identified in the studies.
Table 9: Table of all explanation properties and their definitions based on the reviewed literature.
# 4.12.1. Personal Characteristics
Domain Experience refers to the level of knowledge and practice users have in the data domain, and it is the personal characteristic that has been studied the most in the area. Twenty-five papers had users with different knowledge and experience levels, but only five of them analysed how these differences impacted other properties. Another four papers analysed how the experience level impacted other properties. Among these nine studies, the most analysed connection was to Performance (four papers). Interestingly, not all papers found the same result: [77] found that residents performed better than specialists when they did not have XAI support; [69] found that senior physicians performed better than junior physicians; [65] found no significant differences between groups; and [68] found that people with more experience obtained higher gains when using the XAI system than when using AI alone, and that the people with the least experience had the largest decline when using the XAI system. These mixed results could be due to the fact that all the scenarios were in a Decision Support context with a diagnostic decision-support task, and less experienced participants might have used the opportunity of participating to also test their skills in the problem [77]. The other properties affected by Domain Experience are Confidence [85, 92]; Information Expectedness, Relevance to the Task, and Trust [104]; Size and Structure [98]; Explanation Power [71]; and Usefulness [92].
Attitude Towards AI refers to users' inclination to use AI in their work. This differs from their skill level with this type of technology, as it aims to measure how much users like AI tools. This property was explicitly measured in four studies [65, 71, 103, 120] using questionnaires with closed answers. One study assessed the relation between this characteristic and Reliance but did not find evidence supporting it [120]. Another study had the same result for Explanation Power [71]. Additionally, one study evaluated how Performance was affected by this property, and no evidence was found [65].
Personal Characteristics is a broad term used to describe any kind of user trait that could impact the result. In the survey, four papers presented an assessment of these traits. Papers [99, 123] used Need for cognition [137], [71] used the Big Five Inventory [138], and [47] used Need for cognition and Ease of Satisfaction [139]. Two studies evaluated the relation of Personal Characteristics with other properties of the XAI user experience: [71] evaluated its influence on Explanation Power without finding significant results, and [47] found that higher ease-of-satisfaction led to higher Satisfaction with the explanations.
# 4.12.2. Situational Characteristics
Case Difficulty is the only aspect of situational context mentioned in the papers. It refers to the complexity of the case that users face. Papers measured it in two ways: with closed questions, by asking participants about the difficulty of the case ( $n = 3$ ), and at the abstract level, by measuring it while selecting cases ( $n = 8$ ). For this, the papers used difficulty scores provided by other healthcare providers in a previous stage [92], difficulty based on domain knowledge [50, 63, 104], severity of the case [86, 88], and scores provided by the AI model or another model [85, 117]. This aspect of the situational context influences the user's Confidence in their decision [49, 50, 63, 77] and Reliance on the system [60, 85, 88]. Case Difficulty also affects Usefulness, but the results were mixed: [49] found a negative correlation and [77, 117] found a positive one. Performance and Curiosity were also influenced by Case Difficulty: Performance decreased when Case Difficulty was high [77], and Curiosity increased when it was high [104].
# 4.12.3. Objective System Aspects
Objective System Aspects (OSAs) are "the aspects of the system that are currently being evaluated" [48]. The previous analysis conducted in [15] yielded six properties: AI Model Performance, AI Model Certainty, Certainty, Continuity, Separability, and Consistency.
AI Model Performance aims to quantify the model's competence. In all these studies, an AI model was trained with a specific dataset, and therefore all these papers measure this property. The majority of papers ( $n = 76$ ) only test one unified system and therefore cannot compare how the performance of the AI model affects other properties. However, there are six papers where AI Model Performance is used to compare different systems: [116, 128] compare multiple AI models for one system, and [59, 79, 91, 125] compare different AI models for different datasets. In all these papers, AI Model Performance is measured at the abstract level using traditional machine learning metrics like accuracy, precision, or recall. Additionally, two papers measure or mention AI Model Performance without making an explicit comparison: He et al. [121] present a system that uses multiple models at the same time, where the user's goal is to understand the performance of these various models using a visualisation; and in Barda et al. [98], AI Model Performance is mentioned in the qualitative analysis as an aspect that was missing from the system. Regarding its relations with other aspects, He et al. [121] evaluated how it related to Perceived Model Competence and found a small correlation between the two. The two studies that compare multiple AI models for the same system also studied the relations of this property: Herm et al. [116] found that higher AI Model Performance is associated with higher Satisfaction, and [128] found that when using higher-performance models, users had better Performance.
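As a reference for the abstract-level measurements mentioned above, the usual classification metrics can be computed directly from confusion-matrix counts; a minimal sketch for the binary case (toy labels, not data from any reviewed study):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # correctness of positive calls
    recall = tp / (tp + fn) if tp + fn else 0.0     # coverage of true positives
    return accuracy, precision, recall

# Toy ground truth and predictions
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
acc, prec, rec = binary_metrics(y_true, y_pred)
# acc == 4/6, prec == 2/3, rec == 2/3
```

In practice, libraries such as scikit-learn provide these metrics directly; the point here is only that "abstract level" means computing such formulas on held-out data rather than gathering user feedback.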
AI Model Certainty was part of the evaluation in 4 papers, two mixed studies and two quantitative. In these mixed studies, it was mentioned in the qualitative analysis as a missing aspect of the explanation. To capture this feedback, Panigutti et al. [99] used an open question in a questionnaire, and Anjara et al. [74] used a Think Aloud protocol.
Certainty, Consistency and Separability were not part of the evaluation of the selected papers. The first one was mentioned in one qualitative study as an aspect that influences the Explanation Power [49], but it was not analysed as an individual aspect of the experience. The second and third were not even mentioned in the studies. We believe this happens because these specific properties are related to the XAI method’s mathematical function and should be evaluated before conducting any user studies.
# 4.12.4. Explanation Aspects
This component groups the properties that measure the quality of the generated explanation. The papers surveyed presented evaluations that covered all these properties.
Size and Structure were the ones that appeared the most. Size has mostly been mentioned in qualitative data analysis ( $n = 7$ ). The property was evaluated using closed questions in four studies, and in two others, it was measured at the abstract level. This property is related to Cognitive Load, Relevance to the Task, and Alignment with Situational Context. These relations are well illustrated in this quote:
"The full explanation with all the details of significant evidence is accessed only if desired (Size), and it is more suitable to retrospectively analysing the prediction or the decision in the user's own time, or in retrospective clinical meetings (Relevance to the Task; Alignment with Situational Context)." [108]
Structure was also primarily mentioned in qualitative research: it is part of the qualitative analysis of 6 papers, and it is measured by closed questions in only two. Its relations to other properties are deeply explored in Pisirir et al. [108]. This study presents a comparison of two different explanations, one narrative and one with bullet points, and compares their effects on the users. They found that the Structure affected Cognitive Load, Confidence, and Trust.
Representativeness was defined as "An explanation is representative if it holds for many distinct but similar instances", and Continuity, as part of Objective System Aspects, as "The function should provide similar explanations for similar instances". By analysing the use of these concepts in the user studies and by re-analysing the literature on these topics [11, 23, 25, 28, 29, 140, 141], we decided to merge the two properties into a single concept. The reason is that both focus on the fact that one explanation, or style of explanation, can be used to explain the predictions of similar instances; Continuity focused on measuring the model's ability to achieve this, while Representativeness focused on how the user would perceive the effect. By merging them into one concept that can be measured at different levels, we can reflect how these properties are used in user studies. The new concept is still called Representativeness, and its definition is "measures the similarity of explanations of similar but distinct instances". We found that Representativeness was evaluated using closed questions in two studies [64, 100], mentioned in one qualitative analysis [122], and also measured at the abstract level using a metric in [100].
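As an illustration of what such an abstract-level metric might look like (a hypothetical sketch, not the specific metric used in [100]), Representativeness could be scored as the similarity between the feature-importance vectors produced for two similar but distinct instances:

```python
import math

def cosine(u, v):
    """Cosine similarity between two importance vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def representativeness(expl_a, expl_b):
    """Similarity of the explanations of two similar but distinct instances;
    values near 1.0 suggest the explanation style holds across instances."""
    return cosine(expl_a, expl_b)

# Hypothetical feature-importance explanations of two near-identical cases
score = representativeness([0.7, 0.2, 0.1], [0.6, 0.3, 0.1])
```

Averaging such scores over many nearest-neighbour pairs would give one possible model-level Representativeness estimate, while user-level measurement would still rely on closed questions.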
Completeness and Correctness were mentioned in three studies with Decision Support as the usage context, each using a different measurement type. In [125], a mixed study, both properties were measured at the abstract level using metrics proposed in the paper. In [110], a qualitative study, Completeness is mentioned in the qualitative analysis of the interview. Finally, [111] uses a closed question to evaluate Completeness.
Necessity and Sufficiency refer to the appropriateness of the information that is present in the explanation. The first one was evaluated with closed questions [100] and metrics [72, 84]. Jaber et al. [114] asked about this property in an open question as part of a questionnaire, and it appeared as a topic in their qualitative analysis. Morais et al. [110] also asked about it as part of their interview script, but it was not part of the aspects users mentioned. The second property was investigated by explicitly asking questions about it in one study [110], and quantitatively, using questionnaires, in five studies [57, 97, 100, 111, 122].
The last property of this group, Contrastivity, was measured quantitatively in one study [82] that also found it is correlated with Usefulness. In [124], it was mentioned by the participants as an aspect that was desired, especially when the patient was on the “edge of two classes”. Lastly, the qualitative analysis in [84] revealed that poor Contrastivity leads to poor Understanding.
# 4.12.5. Subjective System Aspects
Subjective System Aspects (SSAs) are "users' perceptions of the Objective System Aspects" [48]. This component experienced the most changes of all the components: we found three new properties that reflect nuances in what users perceive in the XAI scenario.
Alignment with Situational Context was evaluated in three mixed studies [50, 80, 104] and in four qualitative studies [58, 66, 110, 118]. Users mentioned that time pressure and Case Difficulty were factors they considered when requesting explanations and when pondering whether an explanation was useful. Only one study, [50], specifically asked questions related to understanding the integration of the explanations into the users' workflow.
Cognitive Load is an aspect commonly evaluated in HCI research, and XAI research is no exception. Twenty papers evaluated this property, of which only five ([70, 81, 86, 124, 126]) used the standardised NASA-TLX questionnaire [142]. Nine other papers evaluated it with tailored questionnaires. Only two papers [117, 124] measured it both with a quantitative questionnaire and in a qualitative evaluation. The relations of Cognitive Load with other properties have not been as prominent as expected: Size [58] and Structure [108] were found to influence Cognitive Load, and only one paper [98] explored how Cognitive Load can affect the perceived Usefulness of explanations.
In the previous version of the framework, we identified Information Expectedness as an important property. During this review, we found another property that is close but distinct: Information Correctness. Information Expectedness refers to whether the information provided in the explanation was anticipated by the users based on the input information and their knowledge. Information Correctness, on the other hand, refers to whether the information provided is accurate in terms of the domain knowledge. For instance, when explaining a diagnosis based on an X-ray image, if the anatomical information is tagged correctly, then the information shown is correct. At the same time, the explanation uses the anatomical information to say "The diagnosis is X based on A and B", and that explanation is expected by the user because of her knowledge of the domain. If the anatomical information were tagged incorrectly, the Information Correctness would be zero. If the explanation does not use the information the user would expect (it uses partial or other information), then the Information Expectedness would be low.
Information Correctness was part of the evaluation of 11 studies. Four studies evaluated it using more than one type of measurement: [100, 112] used closed questions and abstract-level metrics; [73] used closed questions, and it was also mentioned in their qualitative data analysis; finally, [84] used abstract-level metrics and closed questions, and it also appeared in their qualitative analysis. The other eight papers measured this property with only one type of measurement: in four, it was part of the qualitative data analysis; three used closed questions; and one used only abstract-level metrics. Regarding its relations to other properties, it was found to correlate with Trust [68], and with Information Expectedness and Satisfaction [84]. Even though this aspect is an important part of the user experience, it has not yet been included in the evaluation of many papers. We believe that, given the explosion of generative-AI-based explanations, this aspect might become more important. For instance, when explaining images using AI-generated examples, their correctness should be considered as part of the evaluation.
Information Expectedness was part of the evaluation of 25 studies. We found it relates to 11 properties: Domain Experience, Information Correctness, Confidence, Explanation Power, Intention to Use, Perceived Model Competence, Relevance to the Task, Understanding, Trust, and Usefulness. These relations were supported by 8 different papers that showcase the importance of this aspect in the evaluation: the difference between what users anticipate and what users see in the explanation can determine a big part of their experience.
Another new property we found in this survey is Prediction Expectedness. While Information Expectedness relates to how much of the explanation's information can be anticipated, this property relates to the anticipation of the AI prediction. Prediction Expectedness affects user Curiosity, reflecting more engagement with the system. For instance, in [50], a user reported that when her prediction did not match the system's prediction, she "tried to re-investigate the recordings based on the AI explanations to find out whether my reasoning on predictions was strong enough to modify the AI prediction" (Curiosity). This connection was found in four papers [50, 66, 67, 101], which reflects the importance of this aspect in the experience. One paper [50] also found that this property affects Perceived Model Competence and Trust.
Explanation Power is a property that was identified in the previous version as "Measures whether the selected causes make the user understand the reasons the model considered when making a decision". During the revision, we noticed that this definition focused more on Understanding of the explanation than on the capability of the reasons to make the user accept the prediction. Many papers discuss the convincingness of the reasons and how much users change their decisions based on the information they received in the explanation. Based on this, we changed the focus of the property by redefining it as "Measures whether the selected causes are capable of making the user accept the prediction". This property is mainly measured with implicit measurements: six studies measure it by evaluating how much a user's prediction changes after seeing the explanation [70, 71, 77, 99, 120, 123], while four others [49, 55, 56, 75] measure how many times the user agrees with the system or how much their perception of the model's outcomes changes after seeing the explanation. With respect to its relations, it was found to relate to five properties: Certainty, Confidence, Usefulness, Information Expectedness, and Structure. In total, four papers support these relations.
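As an illustration of such an implicit measurement (a hypothetical sketch, not the exact metric of any cited study), one could compute the fraction of initially disagreeing cases in which the user's decision moved to the AI's prediction after seeing the explanation:

```python
def switch_fraction(before, after, ai_pred):
    """Share of initially disagreeing cases where the user's decision
    moved to the AI's prediction after seeing the explanation.
    All names and data here are illustrative."""
    switched = sum(1 for b, a, m in zip(before, after, ai_pred)
                   if b != m and a == m)
    disagreed = sum(1 for b, m in zip(before, ai_pred) if b != m)
    return switched / disagreed if disagreed else 0.0

# Toy data: user decisions before/after seeing explanations, and AI predictions
rate = switch_fraction(before=[0, 1, 0, 1],
                       after=[1, 1, 0, 0],
                       ai_pred=[1, 1, 1, 0])
# rate == 2/3: of three initial disagreements, two moved to the AI's prediction
```

A high value would indicate convincing explanations, although on its own it cannot distinguish warranted persuasion from over-reliance, which is why such measures are usually paired with Performance.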
Perceived Model Competence refers to how the user perceives the model’s performance. This property can affect user Trust and Confidence as stated in this quote from [49]:
"When participants found errors in the highlighted sentences, they felt 'unsure and insecure' (Confidence). For example, P27 said: 'I was confused and stopped using the system after I found it made obvious mistakes' (Perceived Model Competence)."
In total, 10 papers evaluated this property, of which seven used closed questions.
Related to this property is the Perceived User Performance. Users tend to have an idea of how much their performance improves when using the system, and this idea can affect how likely they are to accept the system. This property was measured by questionnaires in four studies [49, 70, 80, 122], and it was mentioned in the interviews in one qualitative study [81].
Relevance to the Task measures how well-adjusted the explanation is to the task the user has to perform. This property has the same spirit as Alignment with Situational Context, but it is measured with respect to the task instead of the context. The property is widely evaluated: it was measured with quantitative questionnaires in 14 studies [62, 70, 72, 73, 78, 83, 84, 96, 100, 101, 112, 117, 124, 126] and appears in the qualitative results of 11 papers [46, 60, 74, 76, 80, 86, 98, 102, 106, 121, 125]. Only one paper [112] combined an implicit measure, an abstract-level metric, and closed questions to measure the medical relevance of specific parts of the explanation.
Doshi-Velez and Kim defined Cognitive Chunks for XAI as “basic units of explanation” [143]. They present it as part of the method-related dimensions of interpretability that “may correspond to different explanation needs” [143]. In [15], it is presented as one property called Form of Cognitive Chunks. Unexpectedly, this aspect is mentioned in only two mixed studies [67, 80], as part of their qualitative sections. During the analysis, we saw this aspect incorporated into the design choices of a few papers [98, 101, 119], but its impact on the user experience was not measured or evaluated.
# 4.12.6. User experience
We found 9 properties in this component, two more than in the previous framework definition.
Curiosity aims to measure how much the user can potentially engage with the system. This property appears in two qualitative studies and one mixed study. In the mixed study, the property is mentioned in the qualitative interview and also measured with implicit feedback by collecting user interactions with the system [104]. In the other two studies [66, 119], it is part of the qualitative data analysis. Although this property is not part of the evaluation of many papers, it does appear to be related to other properties. Cases with high difficulty tend to increase user Curiosity [104], and Information and Prediction Expectedness affect user Curiosity [50, 66, 67, 101] and, by doing so, can increase Understanding [50, 66]. Users “expressed a Curiosity in why the agent would make such a decision to be able to better understand the system.” [66].
Usefulness aims to measure the utility of the explanation for the user. This property is widely used: 25 papers explicitly measure it in the studies via closed questions ( $n = 19$ ), open questions ( $n = 6$ ) and implicit measures ( $n = 1$ ), and in another two, it is spontaneously mentioned by users. Regarding its relations with other properties, we found that Case Difficulty tends to increase the Usefulness of explanations: the harder the case, the more useful the explanation [49, 77, 117]. We also found that it relates to Cognitive Load, Contrastivity, Explanation Power, and Information Expectedness.
Understanding is a common property that is widely used in the area and appears in survey papers, for instance [17, 40, 141], as part of their proposed evaluation. The conducted analysis found, as expected, 33 papers (40% of the papers) that evaluated this property, covering all Usage Contexts present in the survey. Twelve papers included more than one dimension of Understanding. For instance, [101] uses these two sentences to evaluate Understanding: “I understand when and why CORONET may provide the wrong recommendation in some cases”, which refers to model understanding, and “The scatterplot with all patients is easy to interpret”, which refers to understanding the shown explanation. This pattern was repeated in several papers [46, 47, 62, 83, 84, 97, 100, 101, 110, 111, 117, 122], which supports the idea that Understanding is composed of at least two factors: explanation understanding, which refers to apprehending what the explanation is conveying, and model understanding, which refers to apprehending how the model works. The first type, called Understanding Explanation, evaluates whether the user can grasp the ideas that the explanation is trying to convey. Usually, this type of understanding needs to happen in order to acquire model understanding. Understanding Model Behaviour is a more complex achievement: not only does the user have to understand the information that she is seeing, but she also has to infer how the model created that information. The achievement of Understanding Explanation cannot be used as a proxy for Understanding Model Behaviour, as shown in [110]:
As mentioned at the beginning of this section, the most salient aspect of the analysis is related to the explainability of the XAI methods, which is primarily evidenced in the XAI is not explanatory code. [coded as Understanding Model not achieved]

Despite the issue regarding explainability, most domain experts acknowledged that the visual elements are easy to interpret and were able to perform the identification of major/minor influencing features. [coded as Understanding Explanation achieved]
Considering that people tend to overestimate how well they understand [141, 144], it is surprising that this property is usually measured only with self-reported feedback from users ( $n = 17$ ) and not via objective measurements of understanding. Only two papers tested whether users understood by using questionnaires to elicit the user's mental model [53, 55], and [110] evaluated this understanding by asking qualitative questions about model and explanation understanding.
Trust is one of the most important properties of explanations [31, 36, 38, 145, 146]. It is measured in 31 studies: 14 mixed, 10 quantitative and 5 qualitative. Twenty papers measure Trust using closed questions, and three papers only with open questions. In one of these three papers, Trust does not appear as part of the interview analysis because users discuss Trust in the context of AI in general rather than their Trust towards the system [110]. In six papers, it appears as part of the qualitative analysis, even without being asked about explicitly. One study [71], in addition to closed questions, implicitly measured the evolution of Trust over samples according to self-reported Trust and the Correctness of the AI model's predictions. Just like Understanding, these studies cover all Usage Contexts. Five papers found relations between Trust and other properties [50, 67, 68, 104, 108]. These are: AI Model Certainty, AI Model Performance, Information Correctness, Information Expectedness, Prediction Expectedness, Structure, Intention to Use and Understanding Model Behaviour.
Satisfaction, defined as the level of fulfilment the user gets while interacting with the system, is measured in 11 papers: 5 quantitative and 6 mixed studies. In all these studies, it is measured by closed questions, and in one [47], it additionally appears as part of the interview analysis.
Intention to Use measures the willingness of users to use the technology. It appears in 8 mixed, 6 quantitative and 1 qualitative study. This property is measured only by closed questions in 12 papers and by both open and closed questions in one paper. In two papers, it appears as part of the interview analysis. According to qualitative evidence, this property depends on users' Perceived Model Competence [49, 67] and Perceived User Performance [81]. Using a Mutual Information Analysis, [117] found that Intention to Use depended on Understanding Model Behaviour, and [67] found a correlation between Trust and Intention to Use.
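A mutual-information analysis of the kind attributed to [117] can be reproduced in a few lines. The sketch below is a generic plug-in estimator (our implementation, not the cited paper's) of the mutual information, in bits, between two paired discrete responses such as Likert answers for Intention to Use and Understanding:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete responses.
    Uses p(x, y) * log2(p(x, y) / (p(x) * p(y))) summed over observed pairs."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

A dependence found this way is symmetric and says nothing about direction; with small samples the plug-in estimate is biased upward, so a permutation test is advisable before claiming a relation.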
Confidence is a property that is not part of the original version of the evaluation framework. It is defined as a measure of the user's subjective belief in the correctness of a decision. This property appears in a total of 22 studies: 13 quantitative and 9 mixed. However, it is evaluated qualitatively in only two of these mixed studies: in [102] it appears as part of the qualitative data analysis, and [103] asks about it in their open questions. Its importance in the area is also reflected in the number of relations it has with other properties: 11 papers have evidence of relations with other properties. The properties most mentioned are Case Difficulty ( $n = 4$ ), AI Model Certainty ( $n = 2$ ) and Domain Experience ( $n = 2$ ). The other properties that relate to Confidence are: Explanation Power, Information Expectedness, Perceived Model Competence, Performance, Reliance, Structure and Understanding Model Behaviour.
Controllability is a property widely used in recommender systems. Several papers have shown that the higher the Domain Experience, the higher the desired control. In this survey, we found that it is part of the evaluation of only 4 papers: 3 mixed studies and one qualitative. Half of these papers used closed questions to measure it, two used open questions, and in one study, it appeared in the qualitative analysis. It is interesting to note that only 2 [80, 119] out of the 18 papers that had some interactive elements in their systems measured this property.
# 4.12.7. Interaction
Efficiency was measured in 7 purely quantitative and 5 mixed studies. Although this property can easily be measured implicitly, by timing how long users take to complete the task, it is used less than other properties that are more complex to acquire. In the mixed studies, it was measured implicitly in only two [49, 76]. In five of the seven quantitative studies, it was measured implicitly [56, 70, 75, 85, 91], and one study measured it both implicitly and via self-report [64], but did not contrast the two measurements. This property is measured mostly in Diagnostic Decision Support tasks (9/12). The datasets used in these studies follow a different pattern from the average: five studies explain predictions on images, three on tabular data, and two on graphs. Text, time series and video have one each.
Performance measures how well the user can perform the task using the system, including its predictions and explanations. This property is measured in 17 quantitative and 6 mixed studies. It is only measured implicitly, by checking the user's predictions against the ground truth. The datasets used in these studies do not follow the typical trend: 12 studies used images, followed by 5 tabular, with the other types having fewer than 3 occurrences each. Most of the studies are set in a Decision Support usage context ( $n = 17$ ).
Reliance measures how willing the user is to cede decision control to the machine. It is an important aspect of the adoption of healthcare systems [147]. However, only two papers measured it or discussed it with users. Chen et al. [88] measured it with Likert-type questions by asking users directly whether they would be willing to give decision power to the system, and in [104], users said that they should not rely on the system $100\%$ and were “concerned with liability and responsibility if [they] followed the model and the patient had a bad outcome”. This puts emphasis on the legal aspect of using AI systems in healthcare settings. We found some papers that claimed to measure Reliance [71, 77], but they measure how much the user's decision changes after seeing the prediction and/or explanation. In this survey, we considered that a measure of Explanation Power because it does not reflect whether the user will give decision power to the machine but whether she changes her decision based on the information the machine displays.
The last property that we coded is Variance User Decision. This is a new property that measures how much different users agree on a specific decision. This does not relate to agreeing with the machine but among users. This measure is relevant in the healthcare scenario because one of the goals of these systems is to achieve standardised care, which only happens if all healthcare providers share the same criteria and knowledge. Five quantitative and 4 mixed studies measured this property, all of them using an implicit metric, and it was mostly used in Decision Support contexts ( $n = 6$ ).
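One way to operationalise such an implicit agreement metric is Fleiss' kappa, which corrects raw inter-user agreement for chance. This is a minimal sketch of our own choosing; the cited studies may use other agreement measures:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement among users.
    ratings[i][j] = number of users who assigned case i to decision j;
    every case must be judged by the same number of users."""
    N = len(ratings)                      # number of cases
    n = sum(ratings[0])                   # users per case
    k = len(ratings[0])                   # number of decision categories
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in ratings) / N   # mean observed per-case agreement
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    p_e = sum(p * p for p in p_j)         # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)
```

Kappa is 1 under perfect agreement and drops below 0 when users agree less than chance would predict, which makes it easy to compare standardisation of decisions across conditions.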
# 5. Updated User Centric Evaluation framework
During the coding process, we found seven new properties related to SSA, UX and Interaction. These are: Information Correctness, Perceived User Performance, Prediction Expectedness, Confidence, Intention to Use, and Variance User Decision. Additionally, we found two Personal Characteristics and one Situational Characteristic property that were repeatedly used by several studies. These are: Domain Experience, Attitude Towards AI and Case Difficulty.
We combined the property Continuity with Representativeness and, based on the evidence we found, split Understanding into Understanding Explanation and Understanding Model Behaviour.
Additionally, we redefined Explanation Power to make it clear that it refers to how much the user changes their mind based on the explanation they receive.
Based on these changes and the evidence we gathered, we also redefine how the framework components are related. In the original work, it was not clear how the Personal Characteristics and the Situational Characteristics were related to the Explanation Aspects. We found one paper [98] that found that Domain Experience affects Size and Structure, so we decided to add the possible influence of personal characteristics to the explanation aspects.
Figure 10: User Centric Evaluation framework updated. New properties are marked with * and modified properties are marked with **.
# 6. How to use the framework
To understand the relations between the different elements encoded in this survey and the way they define the evaluation procedure, we propose a layered approach to connecting them, based on observations made during the coding of the user studies. This approach can serve as a guide for designing XAI systems at an early stage, together with the evaluation that is most appropriate for them. As shown in fig. 11, each layer contains elements that are interdependent, and each layer is informed by all the layers that encompass it. Sections 6.1, 6.2 and 6.3 elaborate on the layers' aspects and how the design choices are limited in each layer, and section 6.4 explains how to use them to define which properties to evaluate. The last section provides general recommendations for reporting the results.
Figure 11: Layered model to design evaluations. During a Designing explanation stage, the layers Domain Context and AI Context will strongly influence the Explanation Design. All layers influence the Evaluation Design.
# 6.1. Domain Context
The foundational layer aims to build a contextual ground consisting of what is known about the deployment space: the medical domain in which the task is situated, the users performing the task (i.e., medical expert, patient, etc.), the medical task performed by the users and assisted by AI, and the data used for the task.
Users panel: reflect on relevant users given the task and the potential participants pool; consider user-facing properties. If the user type is lay user / patient, consider UX satisfaction; if the user type is domain expert, consider UX trust. Task panel: reflect on the task that is being supported by the (X)AI system; consider the task-related properties SC case difficulty, EXP necessity, EXP sufficiency, SSA relevance to task, INT variance user decision, SSA information expectedness, SSA forms of cognitive chunks and SSA cognitive load. Medical domain panel: reflect on the healthcare context; consider the domain-related properties INT efficiency, INT variance user decision, SC case difficulty and SSA alignment to situational context.

Figure 12: Recommendation and suggestion to select properties according to the Domain Context Layer
# 6.2. AI Context
The second stage involves selecting an appropriate AI model to support the medical task. This decision is influenced by the nature of the task and the available data. Additionally, the intended usage context (e.g., decision support or model auditing, following the taxonomy of [26]) further refines model selection. For example, if the model’s performance in the given task is not yet established, usage scenarios like auditing or capability assessment should be prioritised over assessing decision support.
# 6.3. Explanation Design
The previous layers strongly inform this stage. Here, the XAI method options are limited by the Task, the Data and the AI model. The usage context might strongly influence the design choices along the four axes we have identified: Scope, Type, Format and Interaction. These choices might shape the selection of the XAI method as well. The technological limitations of XAI methods and the nature of the data might also influence the design choices along the four axes. It is possible that no XAI method works for some combination of them in the given context, in which case these axes should be revisited and adapted to the technological limitations.
Usage context panel: consider properties relevant depending on the usage context. If the context is to validate and audit or improve the model, model-centred properties are considered relevant. For model auditing and capability assessment, consider the model-centred properties OSA separability, OSA consistency, OSA model certainty and OSA model performance, the explanation characteristics EXP correctness, EXP size, EXP completeness, EXP structure, EXP representativeness and EXP contrastivity, and SSA perceived model competence and SSA prediction expectedness. For decision support, consider INT variance user decision, INT reliance, INT efficiency, INT performance, UX confidence and SSA alignment with situational context. AI model panel: if the AI method is GenAI, consider SSA information correctness.
Figure 13: Recommendation and suggestion to select properties according to the AI Context Layer
Figure 14: Recommendation and suggestion to select properties according to the Explanation Design Layer
# 6.4. Evaluation design
The final stage consists of the evaluation itself: the study type, the choice of properties and their measurements. To decide on the evaluation design in XAI studies, all layers should be taken into account to reflect on the evaluation appropriately and effectively. The following five-step guide outlines a structured process for this decision-making.
# Step 1: Select relevant properties
Choosing which properties to evaluate can be a complex task. Our analysis of user studies revealed recurring patterns that emerge when properties are applied in specific contexts. Each of the criteria (such as the medical domain, the users' characteristics, the data used, etc.) can potentially influence which properties are relevant. These connections are demonstrated in figures 12, 13 and 14, which show how to make an informed selection of properties.
One of the most crucial criteria we observed is the usage context, which also aligns with Liao et al.'s study [26]. The usage context, among other criteria, can serve as a good starting point when it is uncertain which properties should be selected.
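Such a starting point can be captured as a simple lookup from usage context to candidate properties. The sketch below transcribes part of the recommendations from the figures (the lists are partial and the code structure is ours):

```python
# Partial transcription of the usage-context recommendations,
# using the property codes from the survey's figures.
RECOMMENDED = {
    "model auditing": [
        "OSA separability", "OSA consistency",
        "OSA model certainty", "OSA model performance",
    ],
    "capability assessment": [
        "OSA separability", "OSA consistency",
        "OSA model certainty", "OSA model performance",
    ],
    "decision support": [
        "INT variance user decision", "INT reliance", "INT efficiency",
        "INT performance", "UX confidence",
    ],
}

def starting_properties(usage_context):
    """Suggest an initial property set for a given usage context."""
    return RECOMMENDED.get(usage_context.strip().lower(), [])
```

The lookup only seeds the selection; the other criteria (users, data, AI model) then add or remove properties as the figures suggest.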
Furthermore, the overview of relations between properties (see section 4.11 and fig. 9) can help identify how the properties being considered relate to those already selected. These relations can help to identify possible mediating factors that may influence results, as well as properties that can influence results but were not identified before.
# Step 2: Identify measurements available for all elements
Once all the potential properties have been identified, the process of filtering and selecting appropriate measurements and study designs starts. Not all properties can be measured at all levels, and some automatic metrics may not be compatible with the available data or specific XAI techniques. For these reasons, it is essential to check whether suitable measurements already exist.
If no existing measurement is found for a given property, new survey items or metrics may need to be developed and validated. Alternatively, the evaluation design may need to shift toward qualitative or mixed-methods approaches to gather meaningful insights.
# Step 3: Identify properties that can be measured without explicit user feedback
At certain stages of XAI evaluation, abstract-level measurements may be sufficient, as these typically do not depend on user feedback. In contrast, measurements at the communication level require implicit user feedback.
Identifying which properties can be evaluated without involving users directly helps simplify the evaluation process and allocate user engagement more efficiently.
# Step 4: Prioritise properties and select
The next step involves prioritising the selected properties. Properties that can only be evaluated through direct user input should be ranked from high to low in order of importance based on the research goals. The decision on which properties to measure with users should consider both the time users have available and the effort required to provide feedback. For example, if users can only participate for five minutes, a 20-question survey would be too long. Therefore, it is crucial to test the feedback methods in advance to ensure they fit within the available time.
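The time-budget check in this step is simple arithmetic and worth automating when planning instruments. A minimal sketch, assuming a per-question answer time obtained by piloting the feedback methods (the function name and the ~30-second figure are our illustrative assumptions):

```python
def questionnaire_fits(n_questions, sec_per_question, available_minutes):
    """True if the instrument fits the participants' time budget.
    sec_per_question should come from piloting the feedback methods."""
    return n_questions * sec_per_question <= available_minutes * 60
```

With the example from the text, a 20-question survey at roughly 30 seconds per question needs 10 minutes and does not fit a 5-minute slot.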
# Step 5: Revise selection
After selecting the properties based on the previous steps, a final review is necessary to ensure alignment with both the research objectives and the overall study design. This step is especially important for quantitative studies. If only a limited number of properties can be assessed using closed-ended questions or predefined metrics, the study design may need to shift to a mixed-methods or qualitative approach to maintain relevance.
This process can be iterative and repeated to align the research goals with the conditions of the evaluation. In addition, if the researchers are designing the system and the evaluation procedure at the same time, this process can help to design the system according to what aspects can be evaluated.
# 6.5. Recommendations for reporting results
The properties and their definitions are not random: the names have been chosen to minimise possible overlap between concepts, and the words chosen for the definitions try to avoid ambiguity. When reporting results, these names should always be kept and reused, even if they get repetitive. This helps in understanding the paper’s results, and it makes it easier to compare with other studies.
When users can be grouped by their experience level, the results should always be reported at the group level. As explained in Section 4.2, there are differences between experience levels, so making sure these differences are presented makes it easier to compare studies.
# 7. Discussion of Research Goals
In this section, we discuss aspects related to the two research goals.
7.1. RG1 To provide a framework of well-defined and atomic properties that are part of the XAI user experience in the healthcare domain
# 7.1.1. Disentanglement of properties helps to better understand the evaluation
We find that Satisfaction is mentioned in surveys as an aspect equally important as Trust and Understanding [27, 32]. However, our research shows that this is not the case in this domain. The most studied property is Understanding ( $n = 33$ ), followed by Trust ( $n = 30$ ), while Satisfaction is measured by only 11 studies. This can be explained by the different meanings assigned to Satisfaction. Rong et al. [16] consider Satisfaction part of a bigger aspect called Usability that comprises other properties: Usefulness, Cognitive Load, ease of use and detecting undesired behaviours. Löfström et al. [27] define Satisfaction as “The degree of how much the users feel they understand the system, the explanations, and the user interface”, which in our framework corresponds to Understanding Explanation and Understanding Model Behaviour. Mohseni et al. [40] also present Satisfaction and Usefulness as part of the same construct, and Lopes et al. [32] present them as part of the same category. These different definitions make it harder to compare studies. Our framework proposes to disentangle these concepts into more granular properties. By doing this, we can better understand the connections between the concepts and not simply assume that they are related to each other. Our definition is closer to the definition of satisfaction in the Merriam-Webster dictionary ([148], sense 2).
# 7.1.2. The healthcare domain has specific characteristics that make the evaluation different from a general domain
Most evaluation frameworks have presented evaluation guidelines without any specific domain. In this study, we focused on the healthcare domain and discovered small differences that have not been considered in these general domain frameworks.
First, as mentioned previously, Satisfaction is not as important as other frameworks have indicated. In this context, Understanding and Trust are more relevant than Satisfaction: they are measured in three times as many papers, using all types of measurements, and their relations with other properties are also studied.
Second, Reliance, defined as the user’s willingness to provide control to the machine, is part of the evaluation of only 2 studies; however, Trust is measured in 30 studies. This difference can be explained by two factors: First, most of these studies present prototypes that are only evaluated at a single point but Reliance is a consequence of repeatedly using a system and developing appropriate Trust; Second, as stated in [104], it is not yet clear who holds responsibility when an AI recommendation is followed, and in the healthcare settings, this accountability of actions is an essential part of interacting with patients.
Third, personal characteristics such as Domain Experience and Attitude Towards AI, and the Case Difficulty, are aspects that influence the property outcomes and are never mentioned in these general frameworks. These aspects of the context in which the user's activity takes place are essential first to decide what should be measured, and then to understand the results.
Finally, Confidence is a new property that is measured in several studies, particularly in decision-making scenarios. This aspect, influenced by Case Difficulty and Certainty, appears in 22 papers, more than Intention to Use or Cognitive Load. It is not mentioned in general evaluation frameworks and, as these numbers stress, it is a relevant aspect to consider in evaluations in this domain.
All these aspects considered, we can see how this domain has specific properties and relations that are not mentioned in other frameworks, and affect how the evaluation should be conducted.
# 7.1.3. Current Standard measurements are not convenient for evaluation
Several papers that we studied used standard measurements in the evaluation. Among them we find System Usability Scale [149], Hoffman’s Satisfaction Scale [141], TAM [150], and UTAUT [151]. This review did not specifically analyse which standard measurements were used in the studies. This decision was made based on the fact that we analysed the studies by properties, and these instruments measure more than one property at the same time and provide a unified score. We noticed that some studies did not provide the unified score and simply reported the values question by question [86, 104], which reflects that these instruments may not be convenient in this context.
Most studies created their own questionnaires to evaluate the properties, and almost none of them reused questions from a previous study. This is an issue already reported in previous studies [18], and it could be improved by conducting studies that specifically aim to create and validate constructs.
# 7.2. RG2 To provide clear guidelines on how to design the evaluation of XAI systems based on the system characteristics
# 7.2.1. Quantitative studies should comply with a minimum number of users
The studies had, on average, 34.5 users per study. Purely quantitative studies had between 3 and 223 participants. In this group, 21 studies had 16 or fewer participants, out of which 10 were able to provide confidence scores for their results, and only 6 recognised the sample size as a limitation of the research. These sample sizes are too small to draw meaningful conclusions.
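The effect of such small samples is easy to quantify. For a proportion-type outcome (e.g., the fraction of correct user decisions), the half-width of a normal-approximation 95% confidence interval is $z\sqrt{p(1-p)/n}$. A sketch of the arithmetic (our illustration, not a computation from any reviewed study):

```python
from math import sqrt

def ci_halfwidth(p, n, z=1.96):
    """Half-width of the normal-approximation 95% CI for a proportion p
    observed over n participants."""
    return z * sqrt(p * (1 - p) / n)
```

With p = 0.5 and n = 16, the half-width is about 0.245, i.e., an interval spanning almost 50 percentage points, which illustrates why such studies struggle to support quantitative claims.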
Based on the authors’ experience, we understand this is a common problem in the healthcare domain: finding users that can participate is difficult and relying on services like Prolific or Mechanical Turk is expensive or not necessarily trustworthy [152].
For this reason, we encourage researchers in the area to carefully consider their research context when setting up these studies. Quantitative studies should be used to test hypotheses [18], and small samples do not allow this goal to be achieved. If it is known or anticipated that finding users will be hard, it is recommended to conduct a thorough qualitative study with a few users. The results of such studies are more informative and legitimate than those of quantitative studies with a handful of users.
# 7.2.2. Layered model and iterative design of evaluation
In this study, we propose a process to select the properties that should be measured in user studies, taking into account the context of the system, the users and the AI components. The layered model for evaluation design (section 6) helps to understand the dependencies that exist between the system's components in order to compile a set of appropriate aspects for measuring the system's effects on users.
Previous work has focused on identifying the aspects that need to be taken into account when designing and/or evaluating an XAI-based system, but none of it specifically stated when these aspects should be measured. Lopes et al. [32] and Kim et al. [17] organised previously defined aspects into new taxonomies, but do not provide guidelines on which aspects to measure depending on the system characteristics. More recently, Rong et al. [16] provide a guide on how to conduct quantitative user studies, but do not give specific details on how to decide what to measure. Furthermore, their guideline could be applied to any kind of user study, not just XAI user studies.
Our proposal closes this gap by explicitly stating what aspect to measure based on the system’s context, the users and the AI and XAI components. This guideline will help other researchers design evaluations that are well justified and aligned with the research goals.
# 8. Limitations
The first limitation of this review is that it does not propose new measurement instruments or provide recommendations for them. There are very few studies in the Human-Computer-Interaction domain that evaluate whether a question really measures the construct, in this case, whether a question or metric measures a property. The works of Knijnenburg and Willemsen [48], Pu et al. [153] and Jin et al. [154] are well-known frameworks that work towards that direction. New studies need to be conducted to establish the most appropriate measurements for the properties. However, we do provide the list of questions and instruments that were present in the papers, with the specific property they measure. This list can be found in Appendix B.
The decision not to include Wizard of Oz studies was made to ensure we would analyse explanations created taking into account the AI model and XAI method affordances (see section 3.2). This led to leaving out several works that are usually in the early stages of development. The extent to which this decision impacted the usage contexts we were able to find in the studies is unclear.
Additionally, user studies are expensive in time and monetary resources, so they are usually conducted when systems are in late prototyping stages. We believe this decision also affected the number of usage contexts we were able to find.

# Abstract

Despite promising developments in Explainable Artificial Intelligence, the practical value of XAI methods remains under-explored and insufficiently validated in real-world settings. Robust and context-aware evaluation is essential, not only to produce understandable explanations but also to ensure their trustworthiness and usability for intended users, yet it tends to be overlooked because there are no clear guidelines on how to design an evaluation with users.

This study addresses this gap with two main goals: (1) to develop a framework of well-defined, atomic properties that characterise the user experience of XAI in healthcare; and (2) to provide clear, context-sensitive guidelines for defining evaluation strategies based on system characteristics.

We conducted a systematic review of 82 user studies, sourced from five databases, all situated within healthcare settings and focused on evaluating AI-generated explanations. The analysis was guided by a predefined coding scheme informed by an existing evaluation framework, complemented by inductive codes developed iteratively.

The review yields three key contributions: (1) a synthesis of current evaluation practices, highlighting a growing focus on human-centred approaches in healthcare XAI; (2) insights into the interrelations among explanation properties; and (3) an updated framework and a set of actionable guidelines to support interdisciplinary teams in designing and implementing effective evaluation strategies for XAI systems tailored to specific application contexts.
"cs.HC",
"cs.AI",
"cs.LG"
] |
# 1 Introduction
EHRs store richly structured, longitudinal data spanning diagnoses, laboratory results, procedures, medications, and outcomes—resources that are critical for predictive modeling and clinical decision support [8, 11]. However, regulations such as the U.S. HIPAA Privacy Rule and the EU GDPR impose strict safeguards for protected health information, including consent, minimization, and access controls, with substantial legal and institutional constraints on data use [3, 12]. These policies often prohibit direct access to patient-level records, creating significant barriers for model development, particularly in cross-institutional settings where data-sharing agreements are difficult to establish or enforce.
Despite these constraints, public datasets such as MIMIC-III have enabled research in EHR-driven prediction under carefully controlled conditions, supporting tasks such as mortality forecasting, hospital readmission risk, and treatment efficacy modeling [7, 10]. Traditional supervised models—especially tree-based methods like XGBoost—continue to dominate tabular prediction tasks due to their robustness to heterogeneous features, irregular target functions, and missing data [5, 13]. Transformer-based in-context learners, such as TabPFN, offer classification via training-set conditioning, though they still require access to raw examples at inference time [6].
LLMs have recently demonstrated strong performance in structured reasoning tasks, including text-to-SQL translation [4], with execution accuracy exceeding 86% on cross-domain benchmarks like Spider. These capabilities suggest a new opportunity: using LLMs not just for text generation, but for schema-aware query planning that operates under privacy constraints. SQL serves as a controlled, interpretable interface that enables LLMs to retrieve relevant aggregate statistics—without exposing individual-level data—thereby preserving compliance with HIPAA and GDPR [3, 12].
In this work, we introduce Query, Don’t Train, a two-stage framework for clinical tabular prediction without direct access to raw EHR data. Our approach is grounded in three pillars:
• Privacy preservation, by ensuring only policy-compliant SQL queries are issued and no patient-level data is revealed.
• Structured reasoning, which derives interpretability from two key sources: (1) LLM-mediated chain-of-thought predictions over query results, and (2) the symbolic, auditable queries themselves.
• Robustness to missing data, as the model dynamically selects and conditions on available features at inference without imputation.
We validate our approach on 30-day readmission prediction in a MIMIC-style cohort for Type 2 diabetes patients, showing that it obtains an F1-score of 0.70 while offering interpretability and compliance out of the box.
Figure 1: Comparison of TabPFN and our “Query, Don’t Train” (QDT) approach. TabPFN uses the training set directly during inference. In contrast, QDT follows: (1) receive test record and task prompt, (2) generate SQL queries, (3) enforce compliance with privacy policies, (4) execute approved queries to retrieve summary statistics, (5) predict using chain-of-thought reasoning. QDT enables privacy-preserving, interpretable inference without raw data access.
# 2 Methodology
# 2.1 Problem Formulation
We consider a tabular classification task under strict access constraints. Let $\mathcal{D}_{\mathrm{train}} = \{(x_i, y_i)\}_{i=1}^{N}$ denote a training set of patient records $x_i \in \mathbb{R}^d$ and associated outcomes $y_i \in \mathcal{Y}$. Direct access to $\mathcal{D}_{\mathrm{train}}$ is prohibited due to regulatory or institutional privacy restrictions. Given a test-time instance $x^{\mathrm{test}}$, the goal is to predict its label $y^{\mathrm{test}}$ by interacting with $\mathcal{D}_{\mathrm{train}}$ exclusively via a privacy-compliant SQL interface that enforces data governance policies.
# 2.2 Framework Overview
Our method adopts a two-stage architecture in which an LLM serves as both a query-generation agent and a predictor through structured reasoning. The process, illustrated in Figure 1, proceeds as follows:
1. Input: The LLM receives (i) a natural language prompt describing the prediction task (e.g., “Predict 30-day readmission for Type 2 diabetes”), and (ii) the test-time patient record $x^{\mathrm{test}}$.
2. Query Generation: Based on the prompt and $x^{\mathrm{test}}$, the agent generates SQL queries targeting the database containing $\mathcal{D}_{\mathrm{train}}$. These queries are designed to retrieve summary-level statistics (e.g., “average length of stay for similar patients”).
3. Privacy Filtering: Only queries that comply with predefined privacy constraints (e.g., returning aggregates over groups of at least 2 individuals) are executed.
4. Query Loop: The agent may iteratively generate follow-up queries to refine its understanding of relevant cohort-level statistics.
5. Prediction: The outputs of the executed queries are returned to the LLM, which uses chain-of-thought reasoning to produce a prediction for $y^{\mathrm{test}}$.
This inference-time-only framework enables structured prediction without accessing raw patient data. The agent implicitly performs dynamic feature selection by deciding which summary statistics to request during the Query Loop.
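The Privacy Filtering stage described above can be sketched as a simple query validator. A minimal sketch follows; the function name, regex heuristic, and aggregate whitelist are illustrative assumptions, not the authors’ implementation:

```python
import re

# Hypothetical whitelist of aggregate functions allowed by the privacy policy.
AGGREGATES = ("AVG", "COUNT", "SUM", "MIN", "MAX")

def is_summary_level(sql: str) -> bool:
    """Heuristic check that a query only requests summary-level statistics.

    Rejects SELECT * and any selected expression that is not wrapped
    in an allowed aggregate function.
    """
    sql_upper = sql.upper()
    match = re.search(r"SELECT\s+(.*?)\s+FROM", sql_upper, flags=re.DOTALL)
    if match is None:
        return False
    select_list = match.group(1)
    if "*" in select_list and not any(f"{agg}(" in select_list for agg in AGGREGATES):
        return False
    # Every comma-separated expression must start with an allowed aggregate.
    exprs = [e.strip() for e in select_list.split(",")]
    return all(e.startswith(tuple(f"{agg}(" for agg in AGGREGATES)) for e in exprs)

# Example: an aggregate query passes, a row-level query is rejected.
ok = is_summary_level("SELECT AVG(length_of_stay), COUNT(*) FROM admissions WHERE age > 60")
bad = is_summary_level("SELECT name, diagnosis FROM admissions WHERE patient_id = 42")
```

A production filter would parse the SQL into an AST and also enforce the minimum-group-size constraint on results, but the string heuristic conveys the gatekeeping idea.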
# 3 Experiments
# 3.1 Experimental Setup
We use OpenAI’s o4-mini model as the LLM agent in our setup and for the LLM-only baseline. We leverage the LangChain library $^ { 1 }$ to implement the agent.
For the privacy policies, we restricted queries via the system prompt to summary-level statistics, which are defined as data averaged over two or more patients. To ensure the queries do not access patient-level information, a separate agent verifies that only queries requesting summary-level statistics proceed to execution. In practice, this validation would be enforced by a firewall to prevent unauthorized data access [9].
# 3.2 Datasets
We focus on predicting 30-day hospital readmissions for patients with Type 2 Diabetes in US hospitals [2]. The dataset consists of patient records $x_i$, which include demographics, laboratory results, procedures, and prior admissions, with binary outcome labels $y_i \in \{0, 1\}$.
# 3.3 Baselines
We compare our method against three baselines: TabPFN [6] is a pre-trained transformer-based predictor trained to perform tabular classification by conditioning on the training set at inference time. It is particularly relevant as it accesses $\mathcal{D}_{\mathrm{train}}$ during inference, similar in spirit to our method, albeit without privacy constraints. XGBoost [1] is a widely used gradient boosting framework for tabular data. We train XGBoost on $\mathcal{D}_{\mathrm{train}}$ and evaluate it on $x^{\mathrm{test}}$, representing the standard supervised learning baseline with full access to training data. Additionally, we compare our method with an LLM-only baseline that receives only $x^{\mathrm{test}}$ and a prompt containing the problem formulation.
Table 1: Performance comparison of different models on 30-day readmission prediction for Type 2 Diabetes patients, evaluated on a subset of 2,000 patients. Evaluation metrics include Precision, Recall, and F1-score. Query, Don’t Train (QDT) refers to using SQL queries to perform predictions without direct access to patient-level data.
# 3.4 Classification Results
We compare our approach against TabPFN [6] and XGBoost [1]. Despite never accessing the raw data, our method achieves competitive performance in predicting 30-day readmissions, as indicated by the metrics presented in Table 1. Specifically, our Query, Don’t Train methodology demonstrates strong precision and recall, underscoring the effectiveness of structured reasoning over aggregate statistics. These results highlight the potential of our approach to provide accurate predictions while utilizing minimal training resources.
# 3.5 Ablation Study on Missing Features
To investigate the impact of feature availability on model performance, we conducted an ablation study by systematically removing features from $x^{\mathrm{test}}$. The findings illustrate that our method maintains robust performance even with reduced feature sets. When 30% of the features were omitted, the F1-score showed only a modest decrease, dropping to 0.67. This demonstrates that, despite missing features, the agent effectively utilized the remaining features in $x^{\mathrm{test}}$ to identify relevant similar examples, which it then used to reason toward accurate predictions. However, with a substantial reduction of 70% of the features, performance was impacted more significantly, resulting in an F1-score of 0.64. These results demonstrate resilience to the challenges posed by incomplete data in real-world EHR scenarios [13]. | Electronic health records (EHRs) contain richly structured, longitudinal data
essential for predictive modeling, yet stringent privacy regulations (e.g.,
HIPAA, GDPR) often restrict access to individual-level records. We introduce
Query, Don't Train (QDT): a structured-data foundation-model interface enabling
tabular inference via LLM-generated SQL over EHRs. Instead of training on or
accessing individual-level examples, QDT uses a large language model (LLM) as a
schema-aware query planner to generate privacy-compliant SQL queries from a
natural language task description and a test-time input. The model then
extracts summary-level population statistics through these SQL queries and the
LLM performs chain-of-thought reasoning over the results to make predictions.
This inference-time-only approach (1) eliminates the need for supervised model
training or direct data access, (2) ensures interpretability through symbolic,
auditable queries, (3) naturally handles missing features without imputation or
preprocessing, and (4) effectively manages high-dimensional numerical data to
enhance analytical capabilities. We validate QDT on the task of 30-day hospital
readmission prediction for Type 2 diabetes patients using a MIMIC-style EHR
cohort, achieving F1 = 0.70, which outperforms TabPFN (F1 = 0.68). To our
knowledge, this is the first demonstration of LLM-driven, privacy-preserving
structured prediction using only schema metadata and aggregate statistics,
offering a scalable, interpretable, and regulation-compliant alternative to
conventional foundation-model pipelines. | [
"cs.DB"
] |
# 1 Introduction
Formal methods offer robust mathematical guarantees for system reliability [Huth and Ryan, 2004], but their widespread adoption is impeded by high expertise and labor demands, traditionally limiting their application to safety-critical domains where failures have catastrophic consequences [Clarke et al., 2018, Woodcock et al., 2009]. Concurrently, Large Language Models (LLMs) have emerged with a remarkable ability to generate formal artifacts such as code, proofs, and specifications [Brown et al., 2020, Chen et al., 2021, Jiang et al., 2023a], potentially democratizing formal methods [Hou et al., 2023] and finding new roles in formally correct reasoning and LLM verification [Ganguly et al., 2024, Pan et al., 2023]. However, these two approaches embody fundamentally different epistemological paradigms. Formal methods are rooted in deterministic logical calculi, where conclusions derive necessarily from premises via unambiguous inference rules. LLMs, in contrast, operate probabilistically, representing knowledge as distributions over tokens where multiple, even contradictory, outputs can possess non-zero probability [Wei et al., 2022a]. This inherent tension presents a core challenge: how can we harness the generative power of LLMs for formal reasoning while upholding the rigorous guarantees that define formal verification’s value?
The central thesis of this paper is that the inherent probabilistic uncertainty in LLM outputs for formal reasoning tasks, particularly when generating formal artifacts like SMT-LIB programs, is not a mere nuisance but a valuable source of information for guiding verification. Existing methods often ignore this by selecting only the highest-probability output [Chen et al., 2022], a simplification that we argue undermines the rigorous standards required for formal reasoning. In contrast, we demonstrate how to systematically capture and analyze this output uncertainty by modeling LLM-generated SMT-LIB program distributions with Probabilistic Context-Free Grammars (PCFGs). Instead of focusing on a single output, we analyze ensembles of LLM-generated SMT-LIB programs, treating these as samples from the model’s internal probability distribution [Kadavath et al., 2022], which we then approximate by applying PCFGs to the ensembles. This approximation not only identifies the most likely solutions but also reveals strategic diversity, common structural motifs, and areas of high model uncertainty. Deriving a comprehensive suite of metrics from this structured, quantifiable understanding of uncertainty can then directly guide the verification workflow—for instance, by assessing artifact reliability, focusing human review on more ambiguous or structurally complex candidates, and improving error detection strategies.
The core contributions of this paper are:
• We systematically evaluate frontier LLMs on four formal reasoning datasets, finding that SMT-based autoformalization significantly boosts accuracy on tasks like ProofWriter $(+34.8\%)$ but harms others like FOLIO $(-44.5\%)$, thus quantifying the failure modes of LLM-driven formal verification. We then demonstrate that known uncertainty quantification techniques do not capture enough information to identify errors in FV artifacts.
• We introduce a probabilistic framework using probabilistic context-free grammars to model LLM-generated SMT-LIB programs, enabling mathematically sound uncertainty quantification and bridging neural models with formal verification.
• We develop and evaluate 25 uncertainty metrics, revealing a refined taxonomy (epistemic-knowledge, epistemic-procedural, recursive-complexity, capacity-limited) that offers a more nuanced understanding of uncertainty in neurosymbolic systems than the traditional epistemic/aleatoric dichotomy.
• We demonstrate that formal reasoning uncertainty is task-dependent and introduce a lightweight, model-agnostic fusion of these varied uncertainty signals. This approach outperforms individual metrics, improves calibration, enables selective verification to cut error rates by $14\text{--}100\%$ with minimal abstention, and suggests modality-aware architectures for enhanced reliability.
# 2 Methodology
Generating formal artifacts using ad-hoc Domain-Specific Languages (DSLs) introduces significant engineering friction. This friction arises from the need to redesign generators, models, and parsers for syntax changes, and it also complicates debugging erroneous outputs (e.g. syntactically incorrect FV artifacts). To mitigate this overhead, we adopt the stable, widely supported SMT-LIB standard as a common intermediate representation targeting SMT solvers. In this section, we consequently present a theoretical framework linking language models and verification to analyze LLM-generated SMT-LIB program distributions, enabling principled reasoning about their uncertainty.
Problem Setup We formalize the probability space over SMT-LIB programs. Let $\Sigma$ be a finite alphabet. The set of all finite strings $\Sigma^*$ forms a measurable space $(\Sigma^*, \mathcal{F})$, where $\mathcal{F}$ is the $\sigma$-algebra generated by cylinder sets (strings sharing a common prefix $w$). The SMT-LIB language $L_{SMT} \subseteq \Sigma^*$, approximated by its standard context-free grammar $G_{SMT}$, is measurable in $\mathcal{F}$. For a task $T$ and an LLM with parameters $\theta$, the LLM induces a probability measure $\mu_{T,\theta}$ on $(\Sigma^*, \mathcal{F})$. The measure over valid SMT-LIB programs is then the conditional measure $\mu_{T,\theta,SMT}(A) = \frac{\mu_{T,\theta}(A \cap L_{SMT})}{\mu_{T,\theta}(L_{SMT})}$ for $A \in \mathcal{F}$. This definition requires $\mu_{T,\theta}(L_{SMT}) > 0$, a reasonable assumption that is empirically validated, as LLMs are generally trained to generate syntactically valid code and formal specifications.
Modeling Background: To model distributions over structured programs like $\mu_{T,\theta,SMT}$, we employ Probabilistic Context-Free Grammars (PCFGs). PCFGs extend standard Context-Free Grammars (CFGs) by associating probabilities with their production rules. Formally, a PCFG is a 5-tuple $G = (V, \Sigma, R, S, p)$, where $V$ is a finite set of non-terminals; $\Sigma$ is a finite set of terminals disjoint from $V$; $R \subseteq V \times (V \cup \Sigma)^*$ is a finite set of production rules; and $S \in V$ is the start symbol, such that $(V, \Sigma, R, S)$ collectively form a CFG. The fifth component, $p : R \to [0, 1]$, is a probability function assigning a probability $p(r)$ to each rule $r \in R$. For each non-terminal $A \in V$, these probabilities must satisfy $\sum_{r \in R_A} p(r) = 1$, where $R_A$ denotes the set of rules with $A$ as their left-hand side. The probability of a derivation $\pi$ that applies rules $r_1, \ldots, r_k$ in sequence is $p(\pi) = \prod_{i=1}^{k} p(r_i)$. Consequently, for any terminal string $w \in L(G)$, where $L(G)$ is the language generated by the underlying CFG, its probability under $G$ is $\mu_G(w) = \sum_{\pi \in \Pi(w)} p(\pi) = \sum_{\pi \in \Pi(w)} \prod_{r \in \pi} p(r)$, where $\Pi(w)$ represents the set of all leftmost derivations of $w$ from $S$. It is important to note that a PCFG $G$ defines a consistent probability measure (i.e., $\sum_{w \in L(G)} \mu_G(w) = 1$) if and only if the spectral radius of its moment matrix $M_G$ is less than or equal to 1. This condition ensures that probabilities are well-defined and sum to one across the entire language generated by $G$.
Figure: Ensembles of LLM-generated SMT-LIB programs are used to calculate uncertainty measures from the fitted PCFG, in order to predict whether the LLM is uncertain and likely to make an error.
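The derivation-probability definition can be illustrated with a toy PCFG; the grammar, rules, and probabilities below are invented for illustration and are not the SMT-LIB grammar:

```python
from functools import reduce

# Toy PCFG: rule -> probability.
# S -> A B (1.0); A -> 'a' (0.7) | 'a a' (0.3); B -> 'b' (1.0)
rule_prob = {
    ("S", ("A", "B")): 1.0,
    ("A", ("a",)): 0.7,
    ("A", ("a", "a")): 0.3,
    ("B", ("b",)): 1.0,
}

def derivation_probability(rules):
    """p(pi) = product of p(r_i) over the rules applied in the derivation."""
    return reduce(lambda acc, r: acc * rule_prob[r], rules, 1.0)

# Leftmost derivation of the string "a b": S -> A B -> a B -> a b
pi = [("S", ("A", "B")), ("A", ("a",)), ("B", ("b",))]
p = derivation_probability(pi)  # 1.0 * 0.7 * 1.0 = 0.7
```

Summing such products over all leftmost derivations of a string yields its probability $\mu_G(w)$.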
Approximation: To connect the theoretical LLM distribution $\mu_{T,\theta,SMT}$ with a tractable probabilistic model, we estimate parameters for a PCFG. We use the SMT-LIB v2 grammar $G_{SMT} = (V_{SMT}, \Sigma_{SMT}, R_{SMT}, S_{SMT})$ as its structural basis. We then generate $N$ SMT-LIB program samples $\mathcal{P}_N = \{P_1, \ldots, P_N\}$ from the target LLM (parameterized by $\theta$) and parse them using $G_{SMT}$. This yields a set of parse trees $\Pi(\mathcal{P}_N)$ from the successfully parsed programs. From each parse tree $\pi \in \Pi(\mathcal{P}_N)$, we identify and record every applied production rule $r = (A \to \beta) \in R_{SMT}$. This record typically includes the rule itself, its source program identifier, structural context (such as depth), and optionally, the corresponding source text mapping for qualitative analysis. This data collection also allows for extracting richer contextual features than those used by standard Maximum Likelihood Estimation for estimating the rule probability function $p : R_{SMT} \to [0, 1]$ from these rule application frequencies.
Maximum Likelihood Estimation (MLE) is used to estimate rule probabilities $p(r)$ using counts from $\Pi(\mathcal{P}_N)$. Given total application counts $C(r)$ for a rule $r = (A \to \beta)$ and $C(A) = \sum_{r' \in R_A} C(r')$ for its left-hand side (LHS) non-terminal $A$ (where $R_A = \{r' \in R_{SMT} \mid \mathrm{left}(r') = A\}$), the MLE is its relative frequency $\hat{p}_{MLE}(r) = C(r)/C(A)$, defined if $C(A) > 0$. If $C(A) = 0$, rules in $R_A$ are assigned a uniform probability $1/|R_A|$. For independent and identically distributed (i.i.d.) samples $\mathcal{P}_N$ from $\mu_{T,\theta,SMT}$, these estimated probabilities $\hat{p}_N(r)$ converge almost surely (a.s.) to $p^*(r)$ as $N \to \infty$. The limits $p^*(r)$ are the parameters of the $G_{SMT}$-based PCFG that is closest in Kullback-Leibler (KL) divergence to the true distribution $\mu_{T,\theta,SMT}$ (i.e., $p^* = \operatorname{argmin}_p D_{KL}(\mu_{T,\theta,SMT} \parallel \mu_G(p))$). For finite $N$, additive (Lidstone) smoothing with $\beta_s > 0$ (e.g., $\beta_s = 1$ for Laplace smoothing) addresses problematic zero counts where $C(r) = 0$ but $C(A) > 0$, yielding $\hat{p}_N^{(\beta_s)}(r) = (C(r) + \beta_s)/(C(A) + \beta_s |R_A|)$.
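The smoothed estimator can be sketched directly from rule counts; the rule names and data structures below are illustrative assumptions:

```python
def lidstone_rule_probs(counts, rules_for, beta=1.0):
    """Estimate p_hat(r) = (C(r) + beta) / (C(A) + beta * |R_A|).

    counts:    dict mapping rule -> application count C(r)
    rules_for: dict mapping non-terminal A -> list of rules R_A
    """
    probs = {}
    for A, R_A in rules_for.items():
        total = sum(counts.get(r, 0) for r in R_A)  # C(A)
        if total == 0 and beta == 0:
            for r in R_A:
                probs[r] = 1.0 / len(R_A)  # uniform fallback when C(A) = 0
        else:
            for r in R_A:
                probs[r] = (counts.get(r, 0) + beta) / (total + beta * len(R_A))
    return probs

# Toy example: non-terminal "term" has three rules, one never observed.
rules_for = {"term": ["term->const", "term->var", "term->ite"]}
counts = {"term->const": 6, "term->var": 3}
p = lidstone_rule_probs(counts, rules_for, beta=1.0)
# Laplace smoothing gives (6+1)/12, (3+1)/12 and (0+1)/12 for the three rules.
```

Note that when $C(A) = 0$ and $\beta_s > 0$, the formula itself already reduces to the uniform $1/|R_A|$.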
Beyond this (smoothed) MLE, other PCFG estimation models exist. The Bayesian PCFG Model offers one such alternative. It interprets additive smoothing, using a parameter $\alpha$ such that $\hat{p}_N^{(\alpha)}(r) = (C(r) + \alpha)/(C(A) + \alpha |R_A|)$, as computing the posterior mean. This is under a symmetric Dirichlet prior, $\operatorname{Dir}(\alpha, \ldots, \alpha)$, over rule choices for each non-terminal, where the concentration parameter $\alpha > 0$ reflects the prior’s strength. Another approach is the Neural PCFG Model. This model utilizes a neural network $f_\phi(r)$ to score rules $r = (A \to \beta)$. As alluded to in the Approximation section’s discussion of data collection, this model can leverage richer contextual features. Probabilities are then defined as $p_{\mathrm{Neural}}(r) = \exp(f_\phi(r)) / \sum_{r' \in R_A} \exp(f_\phi(r'))$. Network parameters $\phi$ are trained by maximizing the log-likelihood of the corpus $\Pi(\mathcal{P}_N)$, enabling the capture of complex dependencies.
Theorem 1 (Coverage Guarantee). Let $\mu$ be a distribution on a discrete sample space $\Sigma^*$, with Shannon entropy $H(\mu) = -\sum_{x \in \Sigma^*} \mu(x) \log_2 \mu(x)$. Suppose we draw $N$ i.i.d. samples from $\mu$. Then, for any measurable subset $A \subseteq \Sigma^*$ with $\mu(A) = \epsilon$, the probability that none of the $N$ samples land in $A$ is at most $\exp\!\left(-\frac{N\epsilon}{2^{H(\mu)}}\right)$, provided $N$ is sufficiently large. Equivalently, the probability of failing to sample at least one point in every region of mass $\epsilon$ is at most $\exp(-N\epsilon/2^{H(\mu)})$. Moreover, the largest $\epsilon$ for which this “miss probability” is itself at most $\epsilon$ satisfies $\epsilon = \frac{2^{H(\mu)}}{N} W\!\left(\frac{N}{2^{H(\mu)}}\right)$, where $W(\cdot)$ is the Lambert $W$-function. As $N$ grows large, $\epsilon \approx \frac{2^{H(\mu)}}{N} \ln\!\left(\frac{N}{2^{H(\mu)}}\right)$, which vanishes at a rate on the order of $\ln(N)/N$. Proof is provided in the appendix.
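A quick numeric sanity check of the miss-probability bound $\exp(-N\epsilon/2^{H(\mu)})$; the parameter values below are chosen arbitrarily for illustration:

```python
import math

def miss_probability_bound(N, eps, H):
    """Upper bound on the probability that N i.i.d. samples all miss a
    region of mass eps, for a distribution with Shannon entropy H (in bits)."""
    return math.exp(-N * eps / 2 ** H)

# With H = 4 bits (effective support size ~16), N = 1000 samples and a
# region of mass eps = 0.05, the bound is exp(-1000 * 0.05 / 16) ~ 0.044.
b = miss_probability_bound(1000, 0.05, 4.0)
```

Doubling $N$ squares the bound, matching the exponential decay in the theorem.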
# 2.1 Probabilistic Context-Free Grammar (PCFG) Derived Metrics
We derive several PCFG metrics to quantify different facets of uncertainty, using established notation (e.g., $R _ { A }$ for the set of rules expanding non-terminal $A$ ).
Static Metrics for Grammar Structure and Complexity Basic structural properties of the grammar $G_{SMT}$ provide a foundational understanding of its scale and potential complexity. These include the number of non-terminals $(|V_{SMT}|)$ and rules $(|R_{SMT}|)$, the average number of rules per non-terminal $(|R_{SMT}|/|V_{SMT}|)$, and the average right-hand side (RHS) length $\frac{1}{|R_{SMT}|} \sum_{A \to \beta \in R_{SMT}} |\beta|$, where $|\beta|$ denotes the length of $\beta$. Further metrics cover the maximum branching factor $(\max_{A \in V_{SMT}} |R_A|)$, and the detection of various forms of recursion (e.g., left-recursion $A \to A\gamma$ or right-recursion $A \to \gamma A$). These metrics collectively characterize the grammar’s static architecture.
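These static metrics can be computed directly from a rule list; the toy grammar and its `(lhs, rhs)` representation below are illustrative assumptions:

```python
# Rules as (lhs, rhs-tuple); a toy grammar, not the SMT-LIB one.
rules = [
    ("S", ("A", "B")),
    ("A", ("A", "a")),   # left-recursive: A -> A a
    ("A", ("a",)),
    ("B", ("b", "B")),   # right-recursive: B -> b B
    ("B", ("b",)),
]

non_terminals = {lhs for lhs, _ in rules}
n_rules_per_nt = {A: sum(1 for lhs, _ in rules if lhs == A) for A in non_terminals}

avg_rules_per_nt = len(rules) / len(non_terminals)          # |R| / |V|
avg_rhs_len = sum(len(rhs) for _, rhs in rules) / len(rules)  # mean |beta|
max_branching = max(n_rules_per_nt.values())                 # max_A |R_A|
has_left_recursion = any(rhs and rhs[0] == lhs for lhs, rhs in rules)
```

The same one-pass scans apply unchanged to the full SMT-LIB v2 rule set.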
Spectral Properties The spectral radius of the grammar’s mean matrix (often referred to as the Jacobian matrix in this context), $B \in \mathbb{R}^{|V_{SMT}| \times |V_{SMT}|}$, offers insights into its recursive structure and complexity. An element $B_{ji}$ is the expected number of times non-terminal $A_j$ appears on the right-hand side (RHS) of a production chosen for $A_i$: $B_{ji} = \sum_{A_i \to \beta \in R_{A_i}} p(A_i \to \beta) \times \mathrm{count}(A_j, \beta)$, where $\mathrm{count}(A_j, \beta)$ is the number of occurrences of $A_j$ in $\beta$. The spectral radius $\rho(B) = \max\{|\lambda| \mid \det(B - \lambda I) = 0\}$ is $B$’s maximum absolute eigenvalue. Typically, $\rho(B) < 1$ indicates a ‘proper’ grammar with finite expected derivation lengths, while $\rho(B) \geq 1$ suggests potentially unbounded derivations or higher complexity, contributing to structural uncertainty. This spectral radius is also a key component of the NSUI metric (introduced later in this section).
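The spectral radius can be estimated without external libraries via power iteration; the two-non-terminal mean matrix below is an invented example:

```python
# Mean matrix B for a toy grammar with non-terminals [A, B]:
# B[j][i] = expected occurrences of non-terminal j on the RHS of rules for i.
B = [[0.5, 0.2],
     [0.3, 0.4]]

def spectral_radius(M, iters=200):
    """Approximate rho(M) by power iteration (M assumed non-negative)."""
    n = len(M)
    v = [1.0] * n
    rho = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        rho = max(abs(x) for x in w) / max(abs(x) for x in v)
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return rho

rho = spectral_radius(B)  # dominant eigenvalue; < 1 here, a 'proper' grammar
```

For this matrix the eigenvalues are 0.7 and 0.2, so the iteration converges to $\rho(B) = 0.7 < 1$.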
Information-Theoretic Measures Information theory provides principled ways to measure the uncertainty associated with probabilistic choices within the grammar. The Shannon Entropy per Non-terminal, for each $A \in V_{SMT}$, quantifies the average uncertainty (in bits) in selecting a production rule from $R_A$: $H(A) = -\sum_{A \to \beta \in R_A} p(A \to \beta) \log_2 p(A \to \beta)$. A higher $H(A)$ indicates greater uncertainty or variability in the expansions of $A$. The Rényi Entropy per Non-terminal, $H_\alpha(A)$, generalizes Shannon entropy and is parameterized by an order $\alpha \geq 0$. For $\alpha \neq 1$: $H_\alpha(A) = \frac{1}{1-\alpha} \log_2 \sum_{A \to \beta \in R_A} p(A \to \beta)^\alpha$. Key special cases include Shannon entropy ($H_1(A) = H(A)$ as $\alpha \to 1$), max-entropy ($H_0(A) = \log_2 |R_A|$ for $\alpha = 0$, reflecting the number of choices), collision entropy ($H_2(A) = -\log_2 \sum_{A \to \beta \in R_A} p(A \to \beta)^2$ for $\alpha = 2$, sensitive to rule choice repetition), and min-entropy ($H_\infty(A) = -\log_2 \max_{A \to \beta \in R_A} p(A \to \beta)$ for $\alpha \to \infty$, determined by the most probable rule). Calculating Rényi entropy for different $\alpha$ values (e.g., 0.5, 2) provides a richer characterization of the uncertainty profile than Shannon entropy alone.
The Overall Grammar Entropy is typically defined as the weighted average of the Shannon entropies of its non-terminals, $H(A)$, where weights $\pi(A)$ correspond to the stationary distribution or expected frequency of non-terminal $A$ in derivations starting from $S_{SMT}$: $H(G) = \sum_{A \in V_{SMT}} \pi(A) H(A)$. The frequencies $\pi(A)$ can be estimated iteratively. $H(G)$ represents the average uncertainty per derivation step across the entire grammar. Perplexity, $PP(G)$, measures how well the PCFG predicts derivations and is the exponentiated grammar entropy: $PP(G) = 2^{H(G)}$. It can be interpreted as the effective average number of choices the grammar presents at each derivation step, weighted by likelihood; lower perplexity indicates a more predictable grammar. The KL divergence, $D_{KL}(p_A \parallel U_A) = \log_2 |R_A| - H(A)$, quantifies the inefficiency (in bits) of assuming a uniform rule distribution ($U(A \to \beta) = 1/|R_A|$) for a non-terminal $A$ compared to using the true PCFG probabilities ($p(A \to \beta)$).
The overall grammar KL divergence from uniform, $D_{KL}(G \parallel U) = \sum_{A \in V_{SMT}} \pi(A) D_{KL}(p_A \parallel U_A)$, quantifies the PCFG’s deviation from maximum uncertainty.
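These per-non-terminal quantities can be computed in a few lines; the rule distribution below is an invented example for one non-terminal:

```python
import math

def shannon_entropy(probs):
    """H(A) = -sum p log2 p over the rule probabilities of a non-terminal."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def kl_from_uniform(probs):
    """D_KL(p_A || U_A) = log2 |R_A| - H(A)."""
    return math.log2(len(probs)) - shannon_entropy(probs)

# Toy rule distribution for one non-terminal with 4 rules.
p_A = [0.5, 0.25, 0.125, 0.125]
H = shannon_entropy(p_A)   # 1.75 bits
PP = 2 ** H                # effective number of choices per derivation step
D = kl_from_uniform(p_A)   # log2(4) - 1.75 = 0.25 bits
```

Weighting $H(A)$ by the non-terminal frequencies $\pi(A)$ then yields the overall grammar entropy $H(G)$ and perplexity $2^{H(G)}$.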
Finally, we propose a novel composite metric, $NSUI(G)$, for probabilistic uncertainty and structural complexity. This metric, which ranges from 0 to 1, combines normalized grammar entropy with a factor reflecting the grammar’s recursive structure (via its spectral radius $\rho(B)$). It is calculated as $NSUI(G) = E_{ratio} \times S_{factor}$. The entropy ratio $E_{ratio} = H(G)/H_{\max}(G) \in [0, 1]$ uses the maximum grammar entropy $H_{\max}(G) = \sum_{A \in V_{SMT}} \pi(A) \log_2 |R_A|$ (assuming uniform rule choices). The spectral factor is $S_{factor} = \rho(B)/(1 + \rho(B)) \in [0, 1)$. Higher NSUI values are intended to indicate greater probabilistic uncertainty compounded by structural/recursive complexity.
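The composite score combines the two normalized factors; the numeric values below are arbitrary, and in practice $H(G)$, $H_{\max}(G)$ and $\rho(B)$ would come from the fitted PCFG:

```python
def nsui(H_G, H_max, rho):
    """NSUI(G) = (H(G)/H_max(G)) * (rho/(1+rho)), lying in [0, 1)."""
    e_ratio = H_G / H_max if H_max > 0 else 0.0   # normalized entropy
    s_factor = rho / (1.0 + rho)                  # recursion factor
    return e_ratio * s_factor

score = nsui(H_G=1.4, H_max=2.0, rho=0.7)  # 0.7 * (0.7 / 1.7)
```

Since $S_{factor} < 1$ for any finite $\rho(B)$, NSUI never saturates at 1, which keeps the metric sensitive at the top of its range.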
Rule Probability Distribution Metrics Analyzing the set of all rule probabilities $\mathcal{P} = \{p(A \to \beta) \mid A \to \beta \in R_{SMT}\}$ involves computing descriptive statistics such as mean, median, minimum, maximum, standard deviation ($\sigma(\mathcal{P})$), skewness ($\gamma_1(\mathcal{P})$), and kurtosis ($\gamma_2(\mathcal{P})$). These statistics characterize the shape and spread of the learned probabilities. Fitting parametric distributions (e.g., Pareto, power-law) can further reveal structural patterns like Zipfian distributions, which are common in linguistic phenomena.
Text SC reflects solution consistency (e.g., via majority vote) across multiple LLM textual outputs, while SMT SC measures it (e.g., via SMT solver agreement) across diverse LLM-generated SMT-LIB programs for the same prompt, adapting principles from [Wang et al., 2022]. We also implement four distinct ensemble predictors for enhanced uncertainty quantification. These are: (1) Ensemble Simple, an unweighted average of a key subset of metrics; (2) Ensemble Average, a comprehensive unweighted average of all metric scores; (3) Ensemble Weighted, where individual metric contributions are weighted based on validation performance or theoretical importance; and (4) Ensemble ML, a meta-machine learning model (e.g., logistic regression) trained on the vector of metric scores to predict errors. This approach aims to improve overall predictive accuracy, calibration, and robustness by combining varied uncertainty signals.
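The first three ensembles reduce to (weighted) averages of per-sample metric scores. A rough sketch with invented metric names and weights; the ML ensemble is stood in for by a logistic model with fixed, untrained coefficients rather than one fit on validation data:

```python
import math

# Per-sample metric scores (names and values invented for illustration).
scores = {"grammar_entropy": 0.7, "perplexity": 0.6, "text_sc": 0.9, "smt_sc": 0.8}

# (1) Ensemble Simple: unweighted average of a key subset of metrics.
key_subset = ["grammar_entropy", "text_sc"]
ens_simple = sum(scores[k] for k in key_subset) / len(key_subset)

# (2) Ensemble Average: unweighted average of all metric scores.
ens_average = sum(scores.values()) / len(scores)

# (3) Ensemble Weighted: per-metric weights (made up here; tuned on validation
# data in practice).
weights = {"grammar_entropy": 0.4, "perplexity": 0.1, "text_sc": 0.3, "smt_sc": 0.2}
ens_weighted = sum(weights[k] * scores[k] for k in scores)

# (4) Ensemble ML: a meta-model over the score vector; sketched as logistic
# regression with fixed coefficients.
w, b = [1.0, -0.5, 2.0, 1.5], -2.0
x = [scores[k] for k in ["grammar_entropy", "perplexity", "text_sc", "smt_sc"]]
ens_ml = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```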
Metrics for Evaluating Uncertainty-Based Error Detection To evaluate uncertainty quantification (UQ) methods for identifying prediction errors, we examine several facets: Error Discrimination utilizes the Area Under the Receiver Operating Characteristic Curve (AUROC) to assess if uncertainty scores distinguish correct from incorrect predictions; a higher AUROC signifies better uncertainty-error alignment. Selective Prediction Utility employs the Area Under the Risk-Coverage Curve (AURC) to measure practical risk mitigation via abstention (including analysis of optimal abstention percentages, associated error rates, and relative error reduction); lower AURC indicates effective error identification, improving performance on retained samples. Finally, Calibration Assessment evaluates the probabilistic reliability of confidence scores using metrics like Expected Calibration Error (ECE), Reliability Diagrams, and the Brier Score; lower ECE and Brier scores denote better calibration, where predicted confidence accurately reflects empirical correctness rates.
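Minimal reference implementations of these three evaluation facets, on invented error labels and uncertainty scores (here the uncertainty score doubles as the predicted error probability for the Brier and ECE computations):

```python
# Invented data: labels mark prediction errors (1 = error); scores are
# uncertainty values, reused as predicted error probabilities below.
labels = [0, 0, 1, 0, 1, 1, 0, 1]
scores = [0.1, 0.3, 0.8, 0.2, 0.7, 0.9, 0.4, 0.6]

def auroc(labels, scores):
    """Probability a random error outranks a random correct case (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def brier(labels, probs):
    """Mean squared difference between predicted and actual error indicator."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def ece(labels, probs, bins=4):
    """Expected calibration error with equal-width probability bins."""
    total = 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        idx = [i for i, p in enumerate(probs) if lo < p <= hi]
        if idx:
            acc = sum(labels[i] for i in idx) / len(idx)
            conf = sum(probs[i] for i in idx) / len(idx)
            total += len(idx) / len(labels) * abs(acc - conf)
    return total
```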
# 3 Results
We have evaluated five frontier LLMs, namely o3-mini, DeepSeek-R1 (with CoT enabled [Wei et al., 2022b]), DeepSeek-v3-0324, Gemini Flash 2.0 & Lite (non-reasoning), on four datasets widely adopted for reasoning tasks: StrategyQA [Geva et al., 2021], ProntoQA [Saparov and He, 2023], ProofWriter [Tafjord et al., 2021], and FOLIO [Han et al., 2024]. From 5 LLM samples per question, answers were derived via: 1) Text: direct LLM output (intrinsic reasoning over text); 2) SMT: LLM-generated SMT-LIB solved by Z3 (autoformalization). Notably, SMT-LIB generation required significantly less effort (more syntactically valid programs in fewer attempts) and used dramatically fewer tokens per prompt compared to [Ganguly et al., 2024], while also offering multi-solver interoperability.
On ProofWriter, a task closely aligned with symbolic logic, SMT-based methods yielded substantial improvements for three models, particularly benefiting those that struggle with direct formal reasoning. Conversely, on ProntoQA and FOLIO, direct textual reasoning consistently outperformed SMT across most models, suggesting that for these QA tasks, the overhead introduced during autoformalization outweighs potential benefits. StrategyQA showed mixed results, with o3-mini slightly benefiting from SMT while other models performed better with direct reasoning.
The SMT approach systematically alters error profiles compared to direct reasoning, often trading precision for recall. For struggling models, autoformalization frequently increases recall dramatically while reducing precision, reflecting a tendency to provide formal answers more often but introducing additional false positives—consistent with LLMs’ documented proclivity toward proving satisfiability in [Ganguly et al., 2024]. On ProofWriter, where SMT generally helped, performance gains stemmed from simultaneous improvements in both precision and recall, indicating the approach successfully addressed fundamental reasoning errors. Conversely, on datasets where direct reasoning excelled, SMT’s underperformance typically manifested as reduced recall, suggesting failures in the formalization process resulted in missed correct answers.
Table 1: Benchmarking accuracy of frontier LLMs using direct text output (Text) versus SMT-LIB generation solved by Z3 (SMT). No approach universally outperforms the other across all models and datasets. Finer grained results are available in the supplementary material.
Our results reveal predominantly epistemic uncertainty in both reasoning approaches. Direct reasoning fails through knowledge gaps and procedural errors, while SMT introduces formalization errors when translating to formal specifications. This explains task-dependent performance: SMT benefits tasks with explicit premises (ProofWriter) by isolating deductive reasoning, while knowledge-intensive tasks (StrategyQA) expose formalization bottlenecks. These findings highlight the critical need for Uncertainty Quantification on LLM-generated formal artifacts to prevent upstream formalization errors from propagating through otherwise sound solvers.
Table 2: Token-level uncertainty quantification metrics and their results at detecting autoformalization errors w.r.t. ground truth for DeepSeek-v3-0324 across reasoning datasets. While conventional metrics (AUROC, ECE, Brier) show moderate performance, they inadequately capture the distinct epistemic uncertainties in formalization versus reasoning processes. Notably, no UQ method consistently excels across tasks. The uncertainty-aware abstention metrics reflect how the model can selectively answer questions by applying an optimal uncertainty threshold (Opt.Thresh) that minimizes error rate $(\mathrm{Err@T})$ and maximizes error reduction (RelErrRed) compared to answering all questions. This suggests the need for specialized UQ approaches that explicitly model the distribution of formal artifacts. DeepSeek-v3 is the only model we examined that provides token logprobs.
# 3.1 Performance of Uncertainty Metrics
We evaluated our uncertainty metrics in two distinct prediction tasks: (1) whether LLM-generated SMT programs, when executed by Z3, would yield the correct ground truth answer, and (2) whether the SMT output would be consistent with the model’s own natural language reasoning. It is important to note here that PCFG computation for uncertainty quantification imposes minimal computational overhead. Since we utilize the well-defined SMT-LIB grammar rather than learning it, parsers operate efficiently on the structured outputs. While we explored various estimation techniques (e.g., Bayesian
PCFG, Neural PCFG), Maximum Likelihood Estimation (MLE) proved to be both computationally efficient and sufficiently accurate for uncertainty modeling, aligning with established practices.
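MLE for a PCFG over a fixed grammar reduces to counting: each production observed in the parsed derivations is counted and normalized per left-hand-side non-terminal. A toy sketch (the non-terminals and rules are illustrative, loosely SMT-LIB-flavored, not output of a real parser):

```python
from collections import Counter, defaultdict

# Parsed derivations as (lhs, rhs) production applications (toy grammar).
derivations = [
    [("Cmd", "assert Term"), ("Term", "and Term Term"), ("Term", "atom"), ("Term", "atom")],
    [("Cmd", "assert Term"), ("Term", "not Term"), ("Term", "atom")],
    [("Cmd", "check-sat")],
]

counts = Counter(rule for d in derivations for rule in d)
lhs_totals = defaultdict(int)
for (lhs, _), c in counts.items():
    lhs_totals[lhs] += c
# MLE: relative frequency of each rule among all rules sharing its LHS.
prob = {(lhs, rhs): c / lhs_totals[lhs] for (lhs, rhs), c in counts.items()}
```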
Experiments for model and dataset combinations were chosen where a performance gap was observed, with $N{=}100$ samples, thereby making UQ analysis meaningful, but where the SMT performance was not close to random guessing. Our argument here is that because we are relying on information within the FV artifacts, and those artifacts are not well-calibrated for the task (i.e., operating at the level of random guessing), we cannot extract information about failure from them.
# Task-Dependent Signal Dominance in SMT vs Ground Truth Prediction:
Knowledge-Intensive Reasoning: For StrategyQA, cross-modal agreement metrics consistently dominated. O3-mini showed strong performance with grammar entropy (AUROC=0.7448, AURC=0.1113) and text consistency (AUROC=0.7369, AURC=0.1081). For DeepSeek-R1, text consistency substantially outperformed all pure PCFG metrics (AUROC=0.7835, AURC=0.0983). This indicates epistemic uncertainty in world knowledge as the primary correctness bottleneck, as these cross-modal metrics effectively gauge whether the SMT formalization aligns with the LLM’s initial (potentially flawed) semantic interpretation.
Premise-Explicit Reasoning: For ProofWriter, PCFG-derived metrics demonstrated exceptional discriminative power for ground truth prediction. O3-mini achieved near-perfect performance with grammar entropy (AUROC=0.9301, AURC=0.0008) and perplexity (AUROC=0.9194, AURC=0.0008). This confirms procedural epistemic uncertainty dominates in formal reasoning tasks, where an LLM’s primary challenge shifts from knowledge recall to the correct application of formal rules. Thus, PCFG metrics assessing structural variance in the SMT-LIB output can identify such deductive missteps with high precision—o3-mini’s AURC of 0.0008 using grammar entropy, for instance, enables filtering nearly all errors by abstaining on a minute fraction of outputs.
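The abstention mechanism behind these figures can be sketched as a threshold sweep: answer only when uncertainty falls at or below a cutoff, and pick the cutoff that minimizes the error rate on the retained set (all data below is invented for illustration):

```python
# Invented per-question error indicators (1 = wrong) and uncertainty scores.
errors = [0, 0, 1, 0, 1, 1, 0, 1]
unc = [0.1, 0.3, 0.8, 0.2, 0.7, 0.9, 0.4, 0.6]

base_err = sum(errors) / len(errors)
best = None
for t in sorted(set(unc)):
    kept = [e for e, u in zip(errors, unc) if u <= t]  # answer only when unc <= t
    err_at_t = sum(kept) / len(kept)
    if best is None or err_at_t < best[1]:
        best = (t, err_at_t, len(kept) / len(errors))
opt_thresh, err_at_t, coverage = best
rel_err_red = (base_err - err_at_t) / base_err  # relative error reduction
```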
Table 3: Uncertainty quantification metrics for predicting ground truth correctness via PCFGs of LLM-generated SMT programs. Results show AUROC, ECE, Brier, and AURC across models and reasoning tasks, with ensemble methods consistently outperforming individual metrics. Color intensity indicates performance strength (darker green = better).
# Predicting SMT-Text Consistency:
Arithmetic Reasoning: On ProntoQA with Gemini Flash 2.0, SMT consistency achieved remarkable performance in predicting text-SMT alignment (AUROC=0.9291, AURC=0.0084), while spectral radius (AUROC=0.6425, AURC=0.0379) emerged as the only effective structural metric. This isolates recursive complexity as a distinct source of uncertainty in arithmetic formalization, as excessively convoluted SMT structures, indicated by a high spectral radius, risk diverging from the model’s more direct textual reasoning on numerical problems.
Model-Specific Patterns: For Gemini Flash 2.0 Lite on StrategyQA, kurtosis of the rule distribution was the strongest predictor of SMT-Text consistency (AUROC=0.8695). Analysis revealed a distinctive "switching" behavior between minimal and verbose SMT patterns, producing a bimodal distribution with heavy tails—a novel diagnostic for capacity limitations in formalization, whereby such stylistic oscillations between overly terse and verbose SMT, captured by kurtosis, make the resulting formal artifact more prone to misalign with the intended textual meaning.
Table 4: Uncertainty metrics based on PCFGs for predicting consistency between an LLM’s SMT formalization and its natural language reasoning. Task-specific uncertainty patterns emerge: rule distribution kurtosis dominates for StrategyQA (AUROC=0.8695), while different metrics excel for each model-task combination, highlighting the multifaceted nature of formalization uncertainty.
Ablation Study Results PCFG spectral radius from LLM-generated SMT-LIB programs consistently decreases with sampling temperature, as broader exploration diversifies rule selections, reducing fixation on recursive productions that heavily influence moment matrix eigenvalues. Probability mass spreads more uniformly across production alternatives, diminishing single recursive pattern dominance and thus lowering the mean matrix’s maximum absolute eigenvalue. Notably, grammatical properties lack sharp phase transitions across temperature ranges; derived PCFGs show smooth, monotonic changes in spectral and information-theoretic characteristics, implying a continuous, rather than abrupt, generative response to temperature. Non-terminal expansion distributions shift from concentrated to broader with increasing temperature, though this plateaus, indicating finite exploration capacity, possibly constrained by inherent model biases or the grammar’s finite structure. The striking consistency of these spectral-temperature curves across diverse LLMs points to a fundamental, universal mechanism by which these models navigate the coherence-diversity trade-off when generating structured formal languages. Finally, our fine-grained localized entropy within PCFG production rules surpasses global or non-grammatical standard techniques in error prediction, confirming that granular structural uncertainty in specific grammatical constructs directly flags component-level semantic error likelihood, offering more precise diagnostics.
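The spectral radius $\rho(B)$ discussed above is the dominant eigenvalue of the grammar's expectation matrix $B$, where entry $B[A][C]$ gives the expected number of occurrences of non-terminal $C$ when expanding $A$. A toy two-non-terminal sketch using power iteration (matrix values invented):

```python
# Toy expectation matrix B (values invented): B[i][j] = expected occurrences of
# non-terminal j on the right-hand side when expanding non-terminal i.
B = [[0.5, 0.4],
     [0.2, 0.3]]

def spectral_radius(M, iters=200):
    """Power iteration: for a non-negative matrix the max-norm growth factor
    converges to the dominant (Perron) eigenvalue."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

rho = spectral_radius(B)  # rho < 1: derivations terminate in expectation
```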
Discussion Our analysis reveals a fundamental insight: the syntactic atypicality (e.g., in PCFG rule entropy or usage kurtosis) of LLM-generated formal artifacts serves as a powerful signal for semantic errors, reminiscent of OOD detection [Ganguly et al., 2025]. When LLMs correctly understand logical relationships, they consistently produce high-probability rule sequences, whereas semantic misunderstandings manifest as statistical anomalies—creating distinctive "syntactic fingerprints" of reasoning failure that enable our exceptional error detection (AUROC=0.9301 on ProofWriter). This typicality-based approach transcends architectures, its PCFG metric rankings consistently capturing intrinsic difficulties like formalizing ambiguous language across diverse models. However, the relationship between typicality and correctness isn’t straightforward; metrics with superior discriminative ability often exhibit poor calibration (ECE=0.4419), indicating anomaly magnitude doesn’t linearly predict error probability—necessitating calibration-aware fusion, perhaps by integrating consistency signals. Even more revealing is asymmetric self-consistency (e.g., Gemini/ProntoQA:
SMT AUROC=0.9291 vs. text AUROC=0.5108), suggesting LLMs may use distinct, imperfectly aligned formal versus textual reasoning pathways, not just translate a unified process. Such insights shift neurosymbolic design from translation-focus to pathway-alignment and grounding, e.g., via joint training, as SMT syntactic typicality alone is insufficient if its pathway misaligns with textual reasoning.
# 4 Related Works
Formal Reasoning with LLMs LLMs show proficiency in formal reasoning [Welleck et al., 2022a, Chen et al., 2022], but face challenges including hallucination, uncertainty expression [Lin et al., 2022a], self-verification [Hou et al., 2023], and reasoning opacity [Wei et al., 2022b]. Hybrid approaches combine LLMs with formal tools but often overlook model uncertainty. For autoformalization, early sequence-to-sequence models [Wang et al., 2018, 2020] evolved into LLM-based approaches [Wu et al., 2022, Agrawal et al., 2022, Gadgil et al., 2022, Murphy et al., 2024], with structured methods [Jiang et al., 2023b, Zhao et al., 2024] combining LLMs with ATPs, and various applications [Liu et al., 2023, Pan et al., 2023, Olausson et al., 2023, Ye et al., 2023, Zhou et al., 2024, Huang et al., 2024a, Xin et al., 2024a, Jiang et al., 2024, Quan et al., 2024, Xin et al., 2024b]. Proofstep generation advanced from classification [Whalen, 2016, Huang et al., 2019, Bansal et al., 2019] to language modeling [Polu and Sutskever, 2020, First et al., 2023, Wang et al., 2024, Welleck et al., 2022b, Jiang et al., 2022], with recent work exploring zero-shot capabilities [Zhang et al., 2023, Yousefzadeh and Cao, 2023, Scheidt, 2023, Frieder et al., 2023a,b,c, Zhang et al., 2024a] and formal proof generation [Zheng et al., 2024, Xin et al., 2024a, Huang et al., 2024a, Thakur et al., 2024]. Proof search strategies include supervised learning [Loos et al., 2017, Chvalovský et al., 2019], reinforcement learning [Kusumoto et al., 2018, Crouse et al., 2021, Piepenbrock et al., 2021], MCTS [Wu et al., 2021, Lample et al., 2022, Wang et al., 2023a], and language-agent methods [Thakur et al., 2024, An et al., 2024].
Uncertainty in LLM Reasoning Research explores various uncertainty estimation approaches in language models: information-theoretic methods using entropy [Kadavath et al., 2022, Kuhn et al., 2023, Duan et al., 2024], perplexity [Mora-Cross and Calderon-Ramirez, 2024, Margatina et al., 2023], and mutual information [Malinin, 2019, Wimmer et al., 2023, Depeweg, 2019, Ash, 1965]; ensemble strategies like MC Dropout [Srivastava et al., 2014, Gal and Ghahramani, 2016a, Lakshminarayanan et al., 2017], Deep Ensembles [Fadeeva et al., 2023, Lakshminarayanan et al., 2017], and BatchEnsemble [Gal and Ghahramani, 2016b, Lakshminarayanan et al., 2017, Wen et al., 2020] for hallucination detection [Arteaga et al., 2024]; consistency techniques evaluating output agreement [Wang et al., 2023b, Cole et al., 2023, Huang et al., 2024b, Zhang et al., 2024b, Lakshminarayanan et al., 2017, Gawlikowski et al., 2023, Manakul et al., 2023, Chen and Mueller, 2024]; similarity-based methods [Lin et al., 2024]; Bayesian approaches including BNNs [Shridhar et al., 2019, Blundell et al., 2015], variational inference [Graves, 2011, Jordan et al., 1999, Kullback and Leibler, 1951], Gaussian processes [Iwata and Ghahramani, 2017, Liu et al., 2020], and MCMC [Xiao and Wang, 2018]; and language-based methods extracting uncertainty from verbalizations [Cosmides and Tooby, 1996, Lin et al., 2022b, Tian et al., 2023, Xiong et al., 2024, Kojima et al., 2022, Groot and Valdenegro-Toro, 2024]. Our work models implicit uncertainty in distributions over multiple formal outputs rather than relying on individual response signals.
Verification and Reasoning Uncertainty DTV [Zhou et al., 2024], SAT-LM [Ye et al., 2023], and related approaches [Quan et al., 2024] connect LLMs with formal verification, while latent space methods [Lee et al., 2020, Wu and Wu, 2021] complement uncertainty estimation research [Kadavath et al., 2022, Lin et al., 2022b]. PCFGs, which add probabilities to CFGs, have applications in NLP [Manning and Schutze, 1999], bioinformatics [Durbin et al., 1998], and program analysis [Alur et al., 2014], along with enabling probabilistic analysis for LLM-generated DSL programs [Barke et al., 2024]. Our work extends PCFG inference [De la Higuera, 2010] to verification artifacts. | Large language models (LLMs) show remarkable promise for democratizing
automated reasoning by generating formal specifications. However, a fundamental
tension exists: LLMs are probabilistic, while formal verification demands
deterministic guarantees. This paper addresses this epistemological gap by
comprehensively investigating failure modes and uncertainty quantification (UQ)
in LLM-generated formal artifacts. Our systematic evaluation of five frontier
LLMs reveals Satisfiability Modulo Theories (SMT) based autoformalization's
domain-specific impact on accuracy (from +34.8% on logical tasks to -44.5% on
factual ones), with known UQ techniques like the entropy of token probabilities
failing to identify these errors. We introduce a probabilistic context-free
grammar (PCFG) framework to model LLM outputs, yielding a refined uncertainty
taxonomy. We find uncertainty signals are task-dependent (e.g., grammar entropy
for logic, AUROC>0.93). Finally, a lightweight fusion of these signals enables
selective verification, drastically reducing errors (14-100%) with minimal
abstention, transforming LLM-driven formalization into a reliable engineering
discipline. | [
"cs.CL",
"cs.AI",
"cs.LO",
"cs.SE"
] |
# I. INTRODUCTION
Point clouds have become a fundamental 3D geometric data representation in computer graphics and computer vision, with applications in various domains, including archaeology [1], augmented reality [2], autonomous driving [3], and robotic navigation [4], [5]. Building on this, 3D visual grounding (3DVG), which aims to localize an object in 3D scenes based on a textual description, has become a crucial challenge at the intersection of language and spatial reasoning [6], [7].
Recent advancements in 3DVG can be categorized into two-stage and one-stage architectures. Early works like ScanRefer [8], 3DVGTrans [9], and SAT [10] adopt the two-stage architecture, first using pre-trained object detectors to generate candidate bounding boxes and then selecting the target object from these candidates. Given that the performance of two-stage methods is heavily dependent on the quality of the detectors, one-stage methods, such as 3D-SPS [11], which directly localize objects through language-guided keypoint detection, have gained increasing attention. Despite these advancements, a critical issue persists in 3DVG: inaccurate localization alongside correct classification.
Fig. 1. Comparison between prior works (a) and ours (b). Our method achieves more accurate grounding by bridging the gap between visual and textual features in the unified representation (UR) space. As illustrated on the right, mapping text and point clouds into the UR space enhances cross-modal correlation, with yellow indicating stronger alignment.
There are two main reasons for the above challenge. First, existing methods, such as EDA [12] and VPP-Net [13], rely on separately pre-trained visual and textual encoders that independently capture positional and semantic features from point clouds and text, respectively. This leads to a significant gap between the two modalities (as shown in Fig. 1). Second, text descriptions and point clouds both contain positional and semantic information about the 3D scene. However, existing methods rely solely on visual queries to select object candidate points and fail to fully leverage the positional and semantic information embedded in the text modality. This limitation negatively impacts object localization performance.
We propose UniSpace-3D, a unified representation space for 3DVG. UniSpace-3D bridges the gap between visual and textual feature spaces by aligning positional and semantic information from both modalities. Our method is built on three key components: the unified representation encoder (URE), the multi-modal contrastive learning (MMCL) module, and the language-guided query selection (LGQS) module. Specifically, the URE maps the positional and semantic information from point clouds and text into a unified representation (UR) space. The MMCL further reduces the disparity between visual and textual features in the UR space by enhancing consistency. It achieves this by bringing visual embeddings closer to their corresponding textual embeddings while pushing them away from unrelated textual embeddings. Finally, LGQS utilizes the positional and semantic information from both modalities to accurately identify object candidate points that align with the text description. This step reduces localization errors and improves grounding accuracy. Extensive experiments show that UniSpace-3D outperforms baseline models by at least $2.24\%$ on the ScanRefer and Nr3D/Sr3D datasets.
Our contributions can be summarized as:
We propose the URE module, which maps visual and textual features into a unified representation space, effectively bridging the gap between modalities;
We introduce the MMCL module, which further reduces the gap between visual and textual representations, enabling effective positional and semantic alignment between both modalities;
• We propose the LGQS module, which improves object localization by focusing on object candidate points that match the positional and semantic information in the text, ensuring the precise identification and localization of the object described in the text.
The remainder of this paper is organized as follows: Section II introduces the related work. Section III gives the details of our method. Section IV shows the experiments of our method, followed by conclusion in Section V.
# II. RELATED WORK
# A. 3D Vision-Language Tasks
Vision and language are the two most fundamental modalities to understand and interact with the 3D real world, giving rise to a variety of 3D vision-language tasks. 3D dense captioning [14]–[16] involves identifying all objects in complex 3D scenes and generating descriptive captions. 3D visual grounding [8], [17], [18] takes 3D point clouds and language descriptions to localize the target objects via bounding boxes. 3D question answering [19], [20] addresses answering questions based on visual information from 3D scenes. All these tasks primarily focus on aligning visual and linguistic features, particularly spatial and semantic information. In this work, we focus on the fundamental task of 3D visual grounding (3DVG), enabling machines to comprehend both 3D point clouds and natural language simultaneously.
# B. 3D Visual Grounding
3D visual grounding aims to localize the corresponding 3D proposal described by the input sentence. In contrast to 2D images, point clouds exhibit characteristics of sparsity and noise, lacking dense texture and structured representation. These attributes seriously limit the migration of advanced 2D localization methods, which rely on pixel-level visual encoding. The main datasets for 3DVG include ReferIt3d [17] and ScanRefer [8]. These datasets are derived from ScanNet [21]. According to the overall model architecture, previous works can be divided into two distinct groups: two-stage methods and one-stage methods.
Two-Stage Methods Most existing 3DVG methods adopt a two-stage framework [8]–[10], [22]. ScanRefer [8] first utilizes a 3D object detector to generate object proposals and subsequently identifies the target proposal that corresponds to the given query. SAT [10] leverages 2D semantics to assist 3D representation learning. SeCG [23] proposes a graph-based model to enhance cross-modal alignment. Some recent works [9], [12], [24] utilize transformers [25] as a key module to accomplish modality alignment. However, the performance of these models depends heavily on the quality of the proposals produced in the first stage. To solve this problem, single-stage methods were introduced.
Single-Stage Methods Without relying on the quality of pre-trained object generators (i.e., 3D detectors or segmentors), recent 3D visual grounding methods follow a one-stage framework that trains grounding models end-to-end, from feature extraction to final cross-modal grounding. Compared to previous detection-based frameworks, this model is more efficient as it eliminates the need for complex reasoning across multiple object proposals. 3D-SPS [11] proposed a one-stage method that directly infers the locations of objects from the point cloud. BUTD-DETR [18] encodes the box proposal tokens and decodes objects from contextualized features. Following 3D-SPS [11], to better align visual language features, EDA [13] proposes a text decoupling module to parse language descriptions into multiple semantic components.
These methods show impressive results. However, aligning features from different modalities remains challenging due to the inevitable feature gap between textual and visual spatialsemantic information. To address this, we propose a unified representation space for 3DVG to effectively integrate separate feature spaces and identify object candidate points aligned with the input text, enabling accurate grounding.
# III. PROPOSED METHOD
Overview. Existing 3DVG methods rely on independently pre-trained feature encoders to capture positional and semantic information, resulting in a considerable gap between the two modalities. As shown in Fig. 1, this gap is the key factor causing correct classification but inaccurate localization in 3DVG, a challenge that many existing methods fail to address. To overcome this challenge, we propose UniSpace-3D.
As shown in Fig. 2, UniSpace-3D incorporates three innovative designs. First, the unified representation encoder (URE, see Sec. III-A) effectively captures task- and position-aware visual and textual embeddings within a unified representation (UR) space. Second, the multi-modal contrastive learning module (MMCL, see Sec. III-B) reduces the remaining feature gap by pulling visual embeddings closer to their corresponding textual embeddings while pushing them away from unrelated textual embeddings. Finally, the language-guided query selection module (LGQS, see Sec. III-C) selects object candidate points that better align with the text description, enhancing grounding accuracy. We explain the design of our loss function in Sec. III-D. Through these innovations, UniSpace-3D achieves more accurate grounding.
Fig. 2. Overview of UniSpace-3D: (a) the unified representation encoder, (b) the multi-modal contrastive learning module, and (c) the language-guided query selection module.
# A. Unified Representation Encoder
The quality of extracted visual and textual features significantly impacts 3DVG performance. However, the disparate spaces of visual and textual features make alignment and understanding challenging. To tackle this issue, URE narrows the gap between the disparate feature spaces, thereby enhancing the model’s understanding of both the positional and semantic information in each modality.
Before URE, the input data are first tokenized into text and visual tokens. These tokens are fed into the URE to obtain textual embeddings and the task-position dual-aware visual embeddings, both aligned in the same CLIP [26] space and interpreted as the UR space for 3DVG.
1) Tokenization: The input text and 3D point clouds are encoded by the text encoder and the visual encoder to produce text tokens $t^{\prime} = (t_{cls}, t_1, ..., t_L)$ and visual tokens $v = (v_1, ..., v_N)$. Here, $t_i$ and $v_i$ are the features of each token, $t_{cls} \in R^D$ is a special token for text classification, and $L$ represents the length of the text description corresponding to the specified target object. In our experiment, the text encoder and the visual encoder are composed of the pretrained RoBERTa [27] and PointNet++ [28]. In addition, the GroupFree [29] detector is optionally used to detect a 3D box according to [13], which is subsequently encoded as a box token $b \in R^{d \times D}$. Here, $d$ is the number of detection boxes and $D$ is the feature dimension.
2) Textual Embedding: The text token is fed directly into the CLIP text encoder to obtain the textual embeddings $T ^ { + } \in$ $R ^ { ( L + 1 ) \times D }$ and $T ^ { - } = ( T _ { 1 } ^ { c l s } , . . . , T _ { n } ^ { c l s } )$ in UR space. Here, $n$ denotes the number of negative sentences, and $T _ { i } ^ { c l s } ~ \in ~ R ^ { D }$ is the token for text classification for each negative sentence, which is detailed in Sec. III-B. The textual embeddings consist of positive embeddings $T ^ { + }$ and negative embeddings $T ^ { - }$ .
3) Visual Embedding: Inspired by EPCL [30], we use a frozen CLIP model to extract shape-based features from point clouds. CLIP image transformer, trained on image-text pairs, maps tokens $X \in \Omega _ { I }$ to $Y \in \Omega _ { O }$ . Similarly, UniSpace-3D leverages PointNet [28] to map local point cloud patches, viewed as 2D manifolds, into the vision token space $\Omega _ { I } ^ { P }$ , enabling effective learning.
To align visual tokens into the UR space, we first pass visual tokens $v$ through several MLPs for dimensional transformation, resulting in $v ^ { \prime } \in \mathbb { R } ^ { N \times D }$ , and then embed $\boldsymbol { v } ^ { \prime }$ into the CLIP. However, since CLIP [26] is trained on a large dataset of text-image pairs, it lacks specific task information. To address this, we design a task tokenizer to embed point clouds into the UR space for 3DVG tasks. The task tokenizer, implemented as a fully connected layer with learnable parameters, captures global task-related biases. Following [31], we initialize the task token as an enumerator. After transforming the input point cloud into visual tokens $\boldsymbol { v } ^ { \prime }$ , these visual tokens, along with task and position tokens, are fed into the CLIP image transformer to extract task-position dual-aware visual embeddings $V \in \mathbb { R } ^ { N \times D }$ . The transformer is initialized with pre-trained CLIP weights and remains frozen during training.
Fig. 1 shows that the URE can weakly align the text tokens and visual tokens. Before applying URE, the text and visual embedding for the same scene exhibit lower cross-correlation. In contrast, after URE, the text and visual embedding achieve a higher cross-correlation, indicating improved alignment within the same scene.
Fig. 3. Negative contrastive learning in the Multi-Modal Contrastive Learning module. MMCL encourages higher compatibility scores between the true grounding scene and the corresponding sentence while discouraging mismatched pairs.
# B. Multi-Modal Contrastive Learning
After mapping the visual and text tokens into the UR space, we aim to minimize the remaining feature gap between the two modalities. To achieve this, we propose the MMCL module that pulls visual embeddings closer to their corresponding textual embeddings while pushing them apart from unrelated textual embeddings. Specifically, we design the multi-modal contrastive learning loss in Eq. 1 to achieve this alignment.
1) Total Contrastive Loss: The total contrastive loss is defined as
$$
\mathcal { L } _ { c o s } = \mathcal { L } _ { p o s } + \alpha \mathcal { L } _ { p } + \beta \mathcal { L } _ { t } ,
$$
where $\alpha$ and $\beta$ are the weights of the different loss terms. The components $\mathcal{L}_{pos}$, $\mathcal{L}_{p}$ and $\mathcal{L}_{t}$ are introduced as follows.
2) Positive Contrastive Loss: To learn better multi-modal embeddings, we introduce a positive contrastive loss, defined in Eq. 2, that aligns visual and textual embeddings as
$$
\mathcal { L } _ { p o s } = \frac { \mathcal { L } _ { c } ^ { T V } + \mathcal { L } _ { c } ^ { V T } } { 2 }
$$
where
$$
\mathcal{L}_{c}^{VT} = -\log \frac{\exp(\cos(\bar{V}_{i}, T_{i})/\tau)}{\sum_{j=1}^{n} \exp(\cos(\bar{V}_{i}, T_{j})/\tau)}
$$
and
$$
\mathcal{L}_{c}^{TV} = -\log \frac{\exp(\cos(T_{i}, \bar{V}_{i})/\tau)}{\sum_{j=1}^{n} \exp(\cos(T_{i}, \bar{V}_{j})/\tau)}
$$
Here, $T$ is the textual embedding, $\bar{V}$ is the mean of the visual embeddings of all target objects paired with a description, and $\tau$ is a temperature parameter.
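The symmetric positive contrastive loss above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the function names and the temperature default are our own assumptions:

```python
import numpy as np

def info_nce(sim, tau=0.07):
    """Row-wise InfoNCE: -log softmax of each row's diagonal entry."""
    logits = sim / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def positive_contrastive_loss(V_bar, T, tau=0.07):
    """L_pos = (L_c^{TV} + L_c^{VT}) / 2 over cosine similarities.

    V_bar: (n, d) mean visual embeddings, T: (n, d) textual embeddings,
    where row i of each matrix forms a positive pair.
    """
    Vn = V_bar / np.linalg.norm(V_bar, axis=1, keepdims=True)
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    sim = Vn @ Tn.T                      # sim[i, j] = cos(V_i, T_j)
    return 0.5 * (info_nce(sim, tau) + info_nce(sim.T, tau))
```

When the paired embeddings are perfectly aligned the loss approaches zero; for mismatched random embeddings it stays strictly positive.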
3) Negative Contrastive Loss: To further reduce the gap between visual and textual embeddings, we leverage contrastive learning [32] to push visual embeddings apart from unrelated textual embeddings. As illustrated in Fig. 3, the negative contrastive loss consists of two components, $\mathcal{L}_{p}$ and $\mathcal{L}_{t}$, detailed as follows.
Specifically, the compatibility score $\phi _ { \theta } \left( b , w \right)$ measures the alignment between visual embeddings $b$ from the scenes and the contextualized word representation $w$ . It is defined as:
$$
\phi_{\theta}(b_{i}, w_{j}) = b_{i}^{\top} w_{j},
$$
where $b _ { i }$ and $w _ { j }$ represent individual visual and textual embeddings, normalized during training.
$L _ { p }$ ensures a higher compatibility score between the grounding sentence and the true scene than between the sentence and any negative scenes (other point clouds in the mini-batch). The loss is formulated as:
$$
\mathcal { L } _ { p } ( \boldsymbol { \theta } ) = \mathbb { E } _ { \boldsymbol { \mathcal { B } } } \left[ - \log \left( \frac { e ^ { \phi _ { \boldsymbol { \theta } } ( \mathbf { b } , \boldsymbol { w } ) } } { e ^ { \phi _ { \boldsymbol { \theta } } ( \mathbf { b } , \boldsymbol { w } ) } + \sum _ { l = 1 } ^ { n } e ^ { \phi _ { \boldsymbol { \theta } } \left( { b } _ { l } ^ { - } , \mathbf { w } \right) } } \right) \right] ,
$$
where $b$ represents the visual embedding of the positive scene and $\{b_{l}^{-}\}_{l=1}^{n}$ are the visual embeddings of the negative scenes.
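A minimal sketch of $\mathcal{L}_{p}$ for a single sample, assuming the compatibility score is the inner product of L2-normalized embeddings (function and variable names are ours, not the paper's):

```python
import numpy as np

def compatibility(b, w):
    """phi_theta(b, w): inner product of L2-normalized embeddings."""
    return float((b / np.linalg.norm(b)) @ (w / np.linalg.norm(w)))

def negative_scene_loss(b_pos, w, b_negs):
    """L_p: the true scene should score higher against the sentence
    than the negative scenes (other point clouds in the mini-batch)."""
    scores = np.array([compatibility(b_pos, w)] +
                      [compatibility(b_neg, w) for b_neg in b_negs])
    scores = scores - scores.max()          # numerical stability
    return float(-np.log(np.exp(scores[0]) / np.exp(scores).sum()))
```

The loss is small when the positive scene is well aligned with the sentence and grows as negatives become more compatible than the positive.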
Similarly to $\mathcal{L}_{p}$, $\mathcal{L}_{t}$ encourages a higher compatibility score between the scene and the true grounding sentence than between the scene and negative grounding sentences. Negative grounding sentences are generated using a large language model; in our experiments, we adopt GPT-3 [33].
4) Constructing Negative Grounding Sentences: For a grounding sentence involving a target object $s$ and its context $c$, the goal is to replace $s$ with an alternative object that fits the context $c$ but inaccurately describes the actual scene. This yields plausible yet incorrect grounding sentences. For example, in the sentence “A microwave is placed on the light wood-colored table,” where $s$ is “microwave,” we use a large language model to propose replacement objects.
The process consists of two primary steps. First, the language model generates the ten most plausible candidates for $s$ given the masked sentence template for $c$. Then, we manually remove candidates that either do not fit the scene or do not create a false grounding in the context. In this way we generate negative grounding sentences such as “An oven is placed on the light wood-colored table” and remove negative grounding sentences like “A fridge is placed on the light wood-colored table.” By constructing these negative grounding sentences, we can apply a contrastive loss that pushes the visual embedding away from the negative textual features.
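The sentence-construction step can be sketched as below. Note that the paper obtains candidates from GPT-3 and filters them manually; here the candidate list is assumed given and the filter is automated against a known set of scene objects, which is our simplification:

```python
def build_negative_sentences(sentence, target, candidates, scene_objects):
    """Build context-preserving negative grounding sentences by swapping
    the target object for candidates that fit the context but are absent
    from the scene, so the new sentence is plausible yet false."""
    negatives = []
    for cand in candidates:
        # Discard candidates present in the scene: replacing the target
        # with them could still yield a true grounding.
        if cand == target or cand in scene_objects:
            continue
        negatives.append(sentence.replace(target, cand))
    return negatives
```

In practice the grammatical article may also need adjusting ("a"/"an"); a simple string replacement ignores that detail.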
Training with negative grounding sentences. Using the generated context-preserving negative grounding sentences, we employ the negative contrastive loss $\mathcal{L}_{t}$, defined as
$$
\mathcal { L } _ { t } ( \theta ) = \mathbb { E } _ { \mathcal { B } } \left[ - \log \left( \frac { e ^ { \phi _ { \theta } ( \mathbf { b } , w ) } } { e ^ { \phi _ { \theta } ( \mathbf { b } , w ) } + \sum _ { l = 1 } ^ { n } e ^ { \phi _ { \theta } \left( \mathbf { b } , w _ { l } ^ { - } \right) } } \right) \right] ,
$$
where $w$ represents the contextualized embedding of the true grounding sentence $c$ and $\{w_{l}^{-}\}_{l=1}^{n}$ represent the embeddings of the corresponding negative grounding sentences $\{c_{l}^{-}\}_{l=1}^{n}$.
# C. Language-Guided Query Selection
In DETR-like models, object candidate points play a crucial role in identifying the potential regions of the targets. However, previous works [12], [13] rely solely on the probability scores of the seed point features and often neglect the rich semantic information embedded in language queries. To address this limitation, we design a language-guided query selection module that leverages language queries to generate object candidate points within the UR space, inspired by Grounding DINO [34], a 2D vision-language model. This module selects object candidate points that carry the same positional and semantic information as the input text.
Let $X_{v} \in \mathbb{R}^{N_{v} \times d}$ denote the visual queries and $X_{t} \in \mathbb{R}^{N_{t} \times d}$ denote the language queries. Here, $N_{v}$ is the number of visual queries, $N_{t}$ is the number of language queries, and $d$ is the feature dimension. We aim to extract $N_{q}$ queries from the visual queries to be used as inputs for the decoder; $N_{q}$ is set to 256. The top $N_{q}$ query indices for the seed points, denoted as $O$, are selected by
$$
{ \bf O } = \mathrm { T o p } _ { N _ { q } } ( \mathrm { M a x } ^ { ( - 1 ) } ( { { \bf X } _ { v } { \bf X } _ { t } ^ { \top } } ) ) ,
$$
where $\mathrm{Top}_{N_{q}}$ denotes the operation that picks the top $N_{q}$ indices, and $\mathrm{Max}^{(-1)}(\mathbf{X}_{v} \mathbf{X}_{t}^{\top})$ computes the maximum similarity between each visual query and all textual queries by taking the maximum along the last dimension of $\mathbf{X}_{v} \mathbf{X}_{t}^{\top} \in \mathbb{R}^{N_{v} \times N_{t}}$; the symbol $\top$ denotes matrix transposition. The language-guided query selection module outputs $N_{q}$ indices, from which we extract features to initialize the object candidate points.
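The selection rule amounts to a max-then-top-$N_{q}$ operation over the similarity matrix; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def language_guided_query_selection(X_v, X_t, n_q):
    """Select the indices O of the n_q visual queries whose best-matching
    language query has the highest similarity."""
    sim = X_v @ X_t.T                  # X_v X_t^T, shape (N_v, N_t)
    max_sim = sim.max(axis=-1)         # Max^(-1): max over language queries
    return np.argsort(-max_sim)[:n_q]  # Top_{N_q}: indices in descending order
```

Features gathered at the returned indices then initialize the object candidate points fed to the decoder.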
As in most DETR-like models [13], the selected object candidate points $O$ are fed into the cross-modal decoder, where the queries are detected and updated accordingly. The decoded query $Q$ is then passed through MLPs to predict the final target bounding box.
# D. Training Objectives
Following the previous work [13], the loss of UniSpace-3D consists of the position loss $\mathcal{L}_{pos}$, the semantic loss for dense alignment $\mathcal{L}_{sem}$, and the contrastive loss of Eq. 1 (comprising the positive term $\mathcal{L}_{pos}$ and the negative terms $\mathcal{L}_{p}$ and $\mathcal{L}_{t}$), as:
$$
\mathcal { L } = \mathcal { L } _ { p o s } + \mathcal { L } _ { s e m } + \gamma \left( \mathcal { L } _ { p o s } + \alpha \mathcal { L } _ { p } + \beta \mathcal { L } _ { t } \right) .
$$
The weights of each component in Eq. 9 are discussed in Sec. IV-D1.
# IV. EXPERIMENT
# A. Datasets
We evaluate UniSpace-3D on the ScanRefer and ReferIt3D datasets. The ScanRefer dataset contains 51,583 descriptions of 11,046 objects across 800 ScanNet scenes. ScanRefer divides objects into “Unique” and “Multiple” subsets based on whether the object class is unique in the scene. The corresponding evaluation metric is Acc@IoU, which measures the fraction of descriptions whose predicted box overlaps the ground truth with an IoU greater than 0.25 or 0.5. The ReferIt3D dataset includes two subsets: Sr3D, which contains 83,572 template-generated expressions, and Nr3D, with 41,503 human-annotated descriptions spanning 707 scenes. Each scene in Sr3D/Nr3D can also be divided into “Easy” and “Hard” subsets depending on whether there are more than two instances. Following ReferIt3D [17], the primary evaluation metric for ReferIt3D is the accuracy of grounding predictions for textual descriptions.
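For reference, Acc@IoU can be computed as below for axis-aligned boxes. This is a simplified sketch of the metric, not the benchmarks' official evaluation code:

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(box_a[:3], box_b[:3])
    hi = np.minimum(box_a[3:], box_b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))   # intersection volume
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)

def acc_at_iou(pred_boxes, gt_boxes, threshold):
    """Fraction of descriptions whose predicted box reaches the IoU threshold."""
    hits = [iou_3d(p, g) >= threshold for p, g in zip(pred_boxes, gt_boxes)]
    return sum(hits) / len(hits)
```

Acc@0.25 and Acc@0.5 are simply this accuracy evaluated at thresholds 0.25 and 0.5.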
# B. Implementation Details
For ScanRefer, the learning rate of the PointNet++ backbone is $1 \times 10^{-3}$, and the learning rate of the other modules is $1 \times 10^{-4}$. Training takes about 30 minutes per epoch, and the best model appears around epoch 70. The learning rates for Sr3D are $3 \times 10^{-4}$ and $3 \times 10^{-5}$, with 50 minutes per epoch and around 60 epochs of training. The learning rates for Nr3D are also $3 \times 10^{-4}$ and $3 \times 10^{-5}$, with 30 minutes per epoch and around 200 epochs. Since Sr3D consists of concise, machine-generated sentences, it converges more easily. In contrast, both ScanRefer and Nr3D contain human-annotated, free-form, complex descriptions, which require more training time. The code is implemented in PyTorch, and all experiments are conducted on two NVIDIA GeForce RTX 4090 GPUs.
TABLE I 3D VISUAL GROUNDING RESULTS ON THE SCANREFER DATASET. ACCURACY IS EVALUATED USING IOU 0.25 AND IOU 0.5. METHODS MARKED WITH $\dagger$ INDICATE RESULTS REPRODUCED USING OPEN-SOURCE CODE, WHILE THE OTHERS REPRESENT THE BEST ACCURACIES REPORTED IN THEIR RESPECTIVE PAPERS. OUR SINGLE-STAGE IMPLEMENTATION ACHIEVES HIGHER ACCURACY WITHOUT RELYING ON AN ADDITIONAL 3D OBJECT DETECTION STEP (DOTTED ARROWS IN FIG. 2)
# C. Quantitative Comparisons
Tab. I presents the results of our experiments on the ScanRefer dataset, compared to previous works. UniSpace-3D outperforms all prior methods on both Acc@0.25IoU and Acc@0.5IoU, achieving $56.04\%$ and $43.95\%$, respectively, a significant improvement. It surpasses our baseline EDA by $3.2\%$ Acc@0.5IoU and is $1.7\%$ higher than VPP-Net [12].
We also report experimental results on the Nr3D and Sr3D datasets. As shown in Tab. II, our method achieves the highest accuracy of $57.8\%$ on Nr3D and $69.8\%$ on Sr3D, surpassing prior state-of-the-art methods. On Sr3D, since the language descriptions are concise and the object is easy to identify, our method achieves an accuracy of close to $70\%$. On Nr3D, the descriptions are notably intricate and detailed, posing additional challenges for the 3DVG task; nevertheless, our method still outperforms EDA [13] by $5.1\%$, thanks to the unified representation space for 3DVG. Additionally, single-stage methods are excluded from the discussion, as ground-truth boxes for candidate objects are provided in this setting.
TABLE II QUANTITATIVE COMPARISONS ON THE NR3D AND SR3D DATASETS.
# D. Ablation Study
1) Ablation study on values of loss: The representative results of a grid search over the weights in Eq. 9 are summarized in Tab. III. Each line corresponds to a different weighting scheme for the components of the loss function. Notably, all configurations evaluated outperform the baseline method EDA [13], thereby validating the effectiveness and robustness of our proposed unified representation space.
As illustrated in lines (a) and (b), assigning equal weights to all components does not yield optimal performance. This supports the notion that the components contribute unequally to the overall objective and should be weighted accordingly. Line (a) further shows that assigning $\alpha$ too high a weight also degrades performance, as it may compromise the contribution of the other components.
Through extensive tuning, we identify the weight configuration where $\alpha = 0 . 5$ , while the other components are set to 0.3 and 0.1, respectively. This configuration, denoted as option (c) in the table, achieves the best overall results and is therefore selected for use in our final implementation.
TABLE III GRID SEARCH OF THE WEIGHTS $\alpha$, $\beta$ AND $\gamma$, EVALUATED ON THE SCANREFER DATASET. WE SELECT (C) FOR IMPLEMENTATION.
2) Ablation study on introduced modules: We use EDA as our baseline and conduct ablation studies to evaluate the effectiveness of each component in UniSpace-3D. Unless otherwise specified, all experiments are conducted on the ScanRefer validation set. The results of our experiments are presented in Tab. IV.
Fig. 4. Visualization of grounding results from different models on the ScanRefer dataset. Green boxes represent ground-truth references. Red boxes show EDA results containing grounding errors (e.g., objects of the same category as the target). Blue boxes represent proposals generated by our model.
TABLE IV ABLATION STUDY ON DIFFERENT COMPONENTS OF OUR MODEL. ‘URE’ DENOTES THE UNIFIED REPRESENTATION ENCODER. ‘LGQS’ REFERS TO THE LANGUAGE-GUIDED QUERY SELECTION MODULE. ‘MMCL’ REPRESENTS THE MULTI-MODAL CONTRASTIVE LEARNING MODULE.
For comparison, we train EDA [13] based on the official publicly available code, and the results are in line (a). The results demonstrate that URE improves performance by $0 . 8 1 \%$ and $0 . 7 8 \%$ in the “Unique” and “Multiple” splits. The improvements show that our unified representation encoder can effectively encode the relative positional relationships and the relative semantic information.
We integrate the URE module into our baseline and individually modify or incrementally add each component to construct the experimental frameworks for testing. Experiments (c) and (f) add the multi-modal contrastive learning module (MMCL) to further reduce the modality gap. Using MMCL boosts performance to $70.07\%$, $39.89\%$, and $43.95\%$. These results demonstrate the efficacy of multi-modal contrastive learning in improving 3DVG performance.
Experiment (d) validates the effectiveness of the language-guided query selection module (LGQS). By generating object candidate points guided by language queries, LGQS highlights the key role of language queries in query generation. We precisely align the positions and semantics of target objects in both modalities, thereby facilitating more accurate and reliable generation of object candidate points.
# E. Visualization
Fig. 4 visualizes the results of four ScanRefer scenes, comparing the predictions of EDA and UniSpace-3D to the ground truth. The comparison clearly shows that UniSpace-3D effectively addresses four types of inaccurate positioning: geometric attributes, spatial distance or object size, ordinal numbers, and complex utterances. In each example, the green, red, and blue boxes represent the ground truth, EDA top-1 predictions, and our predictions, respectively. The results demonstrate the effectiveness of our method in understanding contextual information in the text to accurately identify the target objects. This improvement is enabled by the alignment of our textual and visual embeddings in the unified representation space.
The successful examples show that, with the unified representation space for 3D visual grounding, expressions can better match the 3D scenes, resulting in more accurate groundings. This improvement is particularly evident in complex scenes with ambiguous or closely positioned objects, where our model demonstrates superior robustness and precision. More detailed qualitative results on Nr3D/Sr3D are shown in Fig. 5; compared to EDA, our method exhibits superior perception on the Nr3D/Sr3D datasets. We also present two failure cases in Fig. 6: one occurs when the text description is ambiguous, and the other when the point cloud is incomplete.
Fig. 5. Qualitative comparison of the grounding results in the Nr3d/Sr3D dataset. For all boxes, green represents the ground-truth references; red represents EDA [13] results containing grounding errors; blue represents proposals generated by ours. Words in different colors show the results of text decoupling.
Fig. 6. Qualitative results of some common failure cases. Green boxes represent ground-truth references. Blue boxes represent proposals generated by ours.

# Abstract

3D visual grounding (3DVG) is a critical task in scene understanding that
aims to identify objects in 3D scenes based on text descriptions. However,
existing methods rely on separately pre-trained vision and text encoders,
resulting in a significant gap between the two modalities in terms of spatial
geometry and semantic categories. This discrepancy often causes errors in
object positioning and classification. The paper proposes UniSpace-3D, which
innovatively introduces a unified representation space for 3DVG, effectively
bridging the gap between visual and textual features. Specifically, UniSpace-3D
incorporates three innovative designs: i) a unified representation encoder that
leverages the pre-trained CLIP model to map visual and textual features into a
unified representation space, effectively bridging the gap between the two
modalities; ii) a multi-modal contrastive learning module that further reduces
the modality gap; iii) a language-guided query selection module that utilizes
the positional and semantic information to identify object candidate points
aligned with textual descriptions. Extensive experiments demonstrate that
UniSpace-3D outperforms baseline models by at least 2.24% on the ScanRefer and
Nr3D/Sr3D datasets. The code will be made available upon acceptance of the
paper.
# 1 Introduction
Foundation models have shown remarkable success in diverse domains such as natural language [Raffel et al., 2023, Paaß and Giesselbach, 2023, Touvron et al., 2023], computer vision [Dosovitskiy et al., 2021, Radford et al., 2021, Wang et al., 2022, Bao et al., 2022], and audio processing [Chen et al., 2022a,b, Radford et al., 2022]. This success has sparked interest in building foundation models for graph machine learning (“graph foundation models” or GFMs) [Mao et al., 2024, Zhao et al., 2025, Bechler-Speicher et al., 2025], raising a fundamental question: what are the key requirements for a graph machine learning architecture to generalize across different tasks, graph structures, and feature and label sets?
Significant progress has recently been made on link-level tasks in knowledge graphs [Geng et al., 2022, Lee et al., 2023, Galkin et al., 2024, Zhang et al., 2025, Huang et al., 2025], which typically do not involve node features. This success is partially due to the nature of knowledge graphs, where learning focuses on structured relational patterns rather than raw feature inputs. Unlike in knowledge graphs, natural language processing or computer vision, node-level tasks in general graphs face a fundamental obstacle: the absence of a shared feature "vocabulary": Features may encode textual embeddings in one dataset, molecular properties in another, or social attributes in yet another. This semantic diversity makes it inherently difficult to define a unified feature space, posing a major challenge to building graph foundation models that generalize across domains.
Classical graph neural networks (GNNs) [Chen et al., 2020, Kipf and Welling, 2017, Veličković et al., 2018, Xu et al., 2019] rely on fixed feature ordering and predefined feature sets, making them ill-equipped to generalize across arbitrary graphs with varying features.
Figure 1: The input to a triple-symmetry network is a feature matrix $X$ and (possibly masked) label matrix $\boldsymbol { Y }$ . The encoder must be equivariant to element-wise permutations $\sigma _ { N } \in S _ { N }$ (affecting the rows of both $X$ and $\boldsymbol { Y }$ ), equivariant to class label permutations $\sigma _ { C } \in S _ { C }$ (affecting the columns of $\boldsymbol { Y }$ ), and invariant to feature permutations $\sigma _ { F } \in S _ { F }$ (affecting the columns of $X$ ).
To date, the only architecture shown to generalize zero-shot across graphs with rich node features is that of Zhao et al. [2025], which represents a promising step forward. However, their model offers a specific empirical solution and does not address the deeper question: how can one systematically design generalizable node-level graph foundation models?
Present work. In this paper, we directly address this challenge by introducing a theoretically-grounded recipe for designing GFMs for node-level classification and regression tasks. On the theoretical front, our starting point is identifying three symmetries that a GFM must respect:
1. Node permutation-equivariance ($\sigma_N$ in Figure 1). The standard requirement in graph learning is to ensure that predictions are invariant under isomorphisms. This is satisfied by design in message-passing GNNs through permutation-invariant local aggregation, which guarantees that a permutation of the input nodes results in a consistent permutation of the node-level outputs.
2. Label permutation-equivariance ($\sigma_C$ in Figure 1). In node-level tasks (classification or regression), outputs should also respect permutations of class labels or multi-regression targets. Permuting the ground truth should yield a consistent permutation in the predictions, ensuring semantic consistency irrespective of the chosen ordering.
3. Feature permutation-invariance ($\sigma_F$ in Figure 1). Graph node features often represent different quantities from different domains and can vary substantially across graphs, which makes it difficult to identify a shared feature vocabulary. Hence, a GFM should not rely on feature ordering or dimensionality. This invariance ensures robust predictions irrespective of how the features are arranged across different datasets and tasks.
Having established this “triple-symmetry” criterion, we then characterize the space of linear transformations that are equivariant to permutations of nodes and labels, and invariant to permutations of features. We prove that the resulting triple-symmetry network (TSNet) (see Figure 1) is a universal approximator on multisets that respect the aforementioned symmetries. This is our key theoretical contribution, as it allows us to apply such a TSNet on the multiset of features induced by each local neighborhood of the input graph during aggregation. This approach results in a class of GFMs for node property prediction: our proposed method can be used in combination with any GNN to design a different GFM and applies to both node classification and regression. The resulting architectures can be trained on a given set of graphs with their node features and labels, and then transfer to new graphs with different types of features and labels, unseen during training.
Contributions. We propose a unified framework for GFMs and make the following contributions:
• Universality results. We characterize the space of linear transformations equivariant to permutations of nodes, features, and labels, and prove that our architecture is a universal function approximator over sets (Section 4).
• A general recipe for GFMs. We introduce triple-symmetry graph neural networks (TS-GNNs), a modular and practical method to transform any GNN architecture into a GFM. Our approach preserves the expressivity of the original GNN while extending it to operate over arbitrary graphs and feature sets (Section 5).
• Empirical validation. We evaluate our framework on 29 real-world node classification datasets, demonstrating consistent performance improvements when applied to standard GNNs such as MeanGNN and GAT [Veličković et al., 2018] (Section 6.1) and establishing it as the first graph foundation model to show improved zero-shot accuracy with increasing training data (Section 6.2).
The proofs of the technical statements and further experimental details can be found in the Appendix of this paper.
# 2 Background: Graphs, Groups, and Equivariance
Graphs. Throughout the paper, we focus on semi-supervised node-level tasks and consider an undirected, unweighted graph $G = (\mathcal{V}, \mathcal{E}, \boldsymbol{X}, \boldsymbol{Y})$, where $\boldsymbol{X} \in \mathbb{R}^{N \times F}$ is a matrix of node features and $\boldsymbol{Y} \in \{0, 1\}^{N \times C}$ is a matrix encoding labels over $C$ classes (for classification) or $C$ targets (for regression). If the labels represent classes, the task is multi-label node classification, and if the labels denote target values, the task is multi-label node regression. We represent the topology of the graph $(\mathcal{V}, \mathcal{E})$ using an adjacency matrix $\boldsymbol{A} \in \{0, 1\}^{N \times N}$.
Groups. Let $S_N$ be the group of permutations over a set of $N$ elements, denoted by $[N] = \{1, 2, \ldots, N\}$. The vector of $N$ ones and the all-ones $N \times M$ matrix are denoted by $\mathbf{1}_N$ and $\mathbf{1}_{N,M}$, respectively. Vectors, matrices and tensors are denoted in bold, and for any tensor $\boldsymbol{Q}$, we denote its $i$-th row by $\boldsymbol{Q}_i$ and its $j$-th column by $\boldsymbol{Q}_{:,j}$. Throughout the paper, $N$ denotes the number of elements in a set or the number of nodes in a graph; $F$ denotes the number of input features per element; $C$ denotes the number of output classes or regression targets; and $K$ denotes a symmetry-less channel dimension.
We define the action of $S _ { N } \times S _ { F } \times S _ { C }$ on $\mathbb { R } ^ { K \times N \times ( F + C ) }$ by
$$
\begin{array} { r } { \big ( \big ( \sigma _ { N } , \sigma _ { F } , \sigma _ { C } \big ) \cdot X \big ) _ { i , j } = \left\{ \begin{array} { l l } { X _ { : , \sigma _ { N } ^ { - 1 } ( i ) , \sigma _ { F } ^ { - 1 } ( j ) } } & { \mathrm { i f ~ } j \in [ F ] , } \\ { X _ { : , \sigma _ { N } ^ { - 1 } ( i ) , F + \sigma _ { C } ^ { - 1 } ( j - F ) } } & { \mathrm { o t h e r w i s e } } \end{array} \right. } \end{array}
$$
for all $(\sigma_N, \sigma_F, \sigma_C) \in S_N \times S_F \times S_C$ and $\boldsymbol{X} \in \mathbb{R}^{K \times N \times (F + C)}$. That is, $\sigma_F$ permutes the first $F$ columns (features), $\sigma_C$ permutes the last $C$ columns (labels), and $\sigma_N$ permutes the rows.
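To make the action concrete, the following NumPy sketch applies $(\sigma_N, \sigma_F, \sigma_C)$ to a tensor of shape $K \times N \times (F + C)$. This illustration is ours; permutations are passed as index arrays with `perm[i] = sigma(i)`:

```python
import numpy as np

def act(perm_n, perm_f, perm_c, X, F):
    """Apply (sigma_N, sigma_F, sigma_C) to X of shape (K, N, F + C):
    sigma_N permutes the rows, sigma_F the first F columns (features),
    and sigma_C the last C columns (labels), using inverse permutations
    as in the action definition."""
    # argsort of an index array yields the inverse permutation
    inv_n, inv_f, inv_c = (np.argsort(p) for p in (perm_n, perm_f, perm_c))
    out = X[:, inv_n, :]                 # node permutation on rows
    feat = out[:, :, :F][:, :, inv_f]    # feature columns
    lab = out[:, :, F:][:, :, inv_c]     # label columns
    return np.concatenate([feat, lab], axis=2)
```

Because this implements a left group action, applying the action of the inverse permutations recovers the input tensor.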
Equivariance and Invariance. We are interested in functions that are equivariant or invariant under joint node-, feature-, and label-permutations. Namely, a function $f : \mathbb{R}^{K_1 \times N \times (F + C)} \to \mathbb{R}^{K_2 \times N \times (F + C)}$ is $(S_N \times S_F \times S_C)$-equivariant if
$$
f \left( \left( \sigma _ { N } , \sigma _ { F } , \sigma _ { C } \right) \cdot { \mathbf X } \right) = \left( \sigma _ { N } , \sigma _ { F } , \sigma _ { C } \right) \cdot f ( { \mathbf X } ) ,
$$
for all $( \sigma _ { N } , \sigma _ { F } , \sigma _ { C } ) \in S _ { N } \times S _ { F } \times S _ { C }$ and for all $\boldsymbol { X } \in \mathbb { R } ^ { K _ { 1 } \times N \times ( F + C ) }$ .
Similarly, a function $f : \mathbb { R } ^ { K _ { 1 } \times N \times ( F + C ) } \to \mathbb { R } ^ { K _ { 2 } \times N \times C }$ is $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant if
$$
f \left( \left( \sigma _ { N } , \sigma _ { F } , \sigma _ { C } \right) \cdot { \mathbf { X } } \right) = ( \sigma _ { N } , \sigma _ { C } ) \cdot f ( { \mathbf { X } } ) ,
$$
for all $( \sigma _ { N } , \sigma _ { F } , \sigma _ { C } ) \in S _ { N } \times S _ { F } \times S _ { C }$ and for all $\boldsymbol { X } \in \mathbb { R } ^ { K _ { 1 } \times N \times ( F + C ) }$ .
Graph neural networks (GNNs). Our framework provides a recipe for converting GNNs into GFMs. In particular, we focus on a subclass of GNNs that update the representations $\boldsymbol{X}_v^{(\ell)} \in \mathbb{R}^{K^{(\ell)}}$ of each node $v$ for $0 \leq \ell \leq L - 1$ iterations based on its own state and the state of its neighbors $\mathcal{N}_v$ as:
$$
\pmb { X } _ { \pmb { v } } ^ { ( \ell + 1 ) } = \sigma \left( \psi \left( \pmb { W } _ { 1 } ^ { ( \ell ) } \pmb { X } _ { \pmb { v } } ^ { ( \ell ) } , \{ \pmb { W } _ { 2 } ^ { ( \ell ) } \pmb { X } _ { \pmb { u } } ^ { ( \ell ) } \ | \ \boldsymbol { u } \in \mathcal { N } _ { v } \} \right) \right) ,
$$
where $\{\cdot\}$ denotes a multiset, $\sigma$ is a nonlinearity, $\psi$ is an aggregation function, $\boldsymbol{W}_1^{(\ell)}, \boldsymbol{W}_2^{(\ell)} \in \mathbb{R}^{K^{(\ell+1)} \times K^{(\ell)}}$ are weight matrices, and $K^{(\ell)}$ is the dimensionality of node embeddings at layer $\ell$.
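A minimal instance of this update rule, choosing mean aggregation for $\psi$ and ReLU for $\sigma$ (illustrative choices on our part, not the paper's prescription):

```python
import numpy as np

def gnn_layer(A, X, W1, W2):
    """One message-passing update:
    X'_v = ReLU(W1 X_v + mean_{u in N(v)} W2 X_u).

    A: (N, N) adjacency matrix, X: (N, K) node embeddings,
    W1, W2: (K_out, K) weight matrices."""
    deg = np.clip(A.sum(axis=1, keepdims=True), 1, None)  # avoid division by 0
    neigh = (A @ (X @ W2.T)) / deg      # psi: mean over neighbor messages
    return np.maximum(X @ W1.T + neigh, 0.0)  # sigma: ReLU
```

Since the neighbor aggregation depends only on the multiset of neighbor states, this layer is node permutation-equivariant by construction.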
# 3 Related Work
Graph foundation models for node classification. Traditional graph neural networks (GNNs) [Chen et al., 2020, Kipf and Welling, 2017, Xu et al., 2019, Veličković et al., 2018] assume fixed feature ordering and predefined feature sets. This mode of operation severely restricts their ability to generalize across graphs with unknown or dynamically varying feature distributions. Unlike domains such as natural language processing, where pre-defined embeddings allow for a shared vocabulary, this approach is unsuitable when dealing with entirely unknown or dynamic graph attributes. GraphAny [Zhao et al., 2025] addresses this limitation by using a Transformer-based mixer that dynamically combines predictions from multiple closed-form least-squares classifiers. Furthermore, Zhao et al. [2025] are the first to introduce the necessity of node-equivariance, label-equivariance and feature-invariance in fully inductive node classification. While empirically effective, GraphAny simply performs a learned average across the least-squares classifiers, which severely limits the model design space. In contrast, we offer a class of graph foundation models derived from first principles, with theoretical guarantees. Existing message-passing neural networks can be seamlessly integrated into our recipe for graph foundation models.
Knowledge graph foundation models. Recent graph foundation models dedicated to prediction on knowledge graphs (KGs) can generalize to any KG, including novel entities and novel relation types (unseen during training), by learning transferable relational invariants. Notable examples include RMPI [Geng et al., 2022], InGRAM [Lee et al., 2023], ULTRA [Galkin et al., 2024], TRIX [Zhang et al., 2025], and MOTIF [Huang et al., 2025]. The expressive power of these knowledge graph foundation models has recently been investigated by Huang et al. [2025]. Although these methods demonstrate promising generalization capabilities, they only apply to (knowledge) graphs without node features and lack theoretical guarantees of universality. This limits their applicability to graphs with rich node features, which is critical for node-level prediction tasks.
Invariant and equivariant networks. The expressivity of equivariant and invariant neural networks is well studied within the graph machine learning community. In their seminal work on DeepSets, Zaheer et al. [2017] established universality of $S_N$-invariant networks over sets of size $N$. Subsequently, a universality result for permutation subgroups $G \leq S_N$ was presented by Maron et al. [2019]. This result was extended by Segol and Lipman [2019], who showed that DeepSets can approximate any $S_N$-equivariant function. Further advancements by Maron et al. [2020] considered both $S_N \times H$-invariant and -equivariant functions, where $H \leq S_D$ and $D$ denotes the number of input features. The proof techniques introduced by Segol and Lipman [2019] were partially adopted by Maron et al. [2020] and also utilized to prove universality in other contexts. For example, universality for point-cloud-equivariant functions with symmetry group $S_N \times SO(3)$ and for higher-dimensional equivariant representations was established by Dym and Maron [2020] and Finkelshtein et al. [2022], respectively. Our work utilizes the proof techniques of Segol and Lipman [2019] and Maron et al. [2020] to extend the known universality guarantees to our triple-symmetry graph foundation model framework.
Scaling laws in graph learning. Scaling laws have played a central role in understanding and developing foundation models in domains such as natural language and computer vision, where performance is known to improve predictably with increased model and dataset size. Kaplan et al. [2020] introduced the original scaling laws for language models, demonstrating that performance improves with larger models and more data. Hoffmann et al. [2022] later refined this trend, showing that the models of Kaplan et al. [2020] were under-trained and that better performance could be achieved by training smaller models on more data. Similar scaling behaviors have been observed in vision, notably in Vision Transformers [Dosovitskiy et al., 2021] and large-scale contrastive models such as CLIP [Radford et al., 2021].
In graph learning, scaling laws remain relatively underexplored. A first investigation is Liu et al. [2024], which studies classical GNNs on graph-level classification tasks under varying model and dataset scales. However, this study is limited to traditional, task-specific architectures and does not address GFMs, which aim to generalize across diverse graphs, label spaces, or feature domains. It remains unclear whether scaling trends observed in narrowly-scoped GNNs also hold in this more challenging and general setting. Our framework is the first to demonstrate scaling-law behavior in GFMs, where performance improves consistently as the number of diverse training graphs increases (see Section 6.2).
# 4 Universal Function Approximation on Sets of Symmetric Elements
Our goal is to develop a node-level graph foundation model that generalizes to arbitrary graphs, feature sets, and ground truth permutations. In this section, we follow an approach analogous to the derivation of message-passing neural networks from learning on sets: we begin by characterizing the linear maps that respect our triple symmetry over sets. Then, in Section 5, we introduce a graph variant of this architecture by localizing the global aggregations into neighborhood-based updates, resulting in a practical and expressive model for graph-structured data.
The triple symmetry a GFM must satisfy consists of: (i) equivariance to permutations of node indices, (ii) equivariance to permutations of the ground-truth labels, and (iii) invariance to the ordering and number of input features. These requirements correspond to equivariance under the group $S _ { N } \times S _ { C }$ and invariance under $S _ { F }$ , where $S _ { N }$ acts on node indices, $S _ { C }$ on labels, and $S _ { F }$ on features. Since invariance is a special case of equivariance, we first derive the linear layers that are equivariant to all the aforementioned permutation groups (node, label, and feature permutations). More concretely, we characterize the linear layers that respect these symmetries using Schur’s Lemma (Section 4.1). We then combine them to obtain $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant networks, and prove their universality on sets: these networks can approximate any continuous function equivariant (resp., invariant) under the specified group action (Section 4.2). This establishes the theoretical foundation for our GFM framework while also enabling principled use of set-based architectures (e.g., Transformers) in this setting.
# 4.1 Characterizing Equivariant Linear Layers
We now characterize the linear layers that are equivariant to all node, label, and feature permutations (“Equivariance Everywhere All At Once”). Our result builds on the rich literature on learning on sets [Zaheer et al., 2017, Segol and Lipman, 2019, Maron et al., 2020] and extends these techniques to our symmetry groups. To better situate our result, we first discuss existing characterizations for DeepSets [Zaheer et al., 2017] and for Deep Sets for Symmetric Elements (DSS) [Maron et al., 2020].
DeepSets. A seminal result for learning on sets is presented by Zaheer et al. [2017], who characterize all $S _ { N }$ -equivariant linear maps of the form $T : \mathbb { R } ^ { K _ { 1 } \times N } \to \mathbb { R } ^ { K _ { 2 } \times N }$ as:
$$
T ( \pmb { X } ) = \pmb { \Lambda } _ { 1 } \pmb { X } \mathbf { 1 } _ { N , N } + \pmb { \Lambda } _ { 2 } \pmb { X } , \text { where } \pmb { \Lambda } _ { 1 } , \pmb { \Lambda } _ { 2 } \in \mathbb { R } ^ { K _ { 2 } \times K _ { 1 } } .
$$
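As a concrete illustration, the two-parameter DeepSets layer above can be implemented and numerically checked in a few lines of NumPy (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def deepsets_layer(X, L1, L2):
    """S_N-equivariant linear map T(X) = L1 X 1_{N,N} + L2 X.

    X: (K1, N) input; L1, L2: (K2, K1) weights; output: (K2, N).
    """
    N = X.shape[1]
    return L1 @ X @ np.ones((N, N)) + L2 @ X

# Permuting the N set elements of the input permutes the output identically.
rng = np.random.default_rng(0)
K1, K2, N = 3, 5, 7
X = rng.standard_normal((K1, N))
L1, L2 = rng.standard_normal((2, K2, K1))
perm = rng.permutation(N)
assert np.allclose(deepsets_layer(X, L1, L2)[:, perm],
                   deepsets_layer(X[:, perm], L1, L2))
```

The first term broadcasts the row sums of $X$ across all $N$ positions (a permutation-invariant summary), while the second applies a position-wise channel mix, which is exactly why the map commutes with column permutations.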
Deep sets for symmetric elements (DSS). The characterization of $S _ { N }$ -equivariant functions introduced by Zaheer et al. [2017] is extended by Maron et al. [2020] to include an additional symmetry across the feature dimension. Specifically, they characterize all $S _ { N } \times H$ -equivariant linear maps, where $H \leq S _ { F }$ is a subgroup of the feature permutation group $S _ { F }$ . Our interest lies in the simpler and more common case $H = S _ { F }$ , corresponding to the full feature permutation group. In this setting, Maron et al. [2020] characterize all linear maps of the form $T : \mathbb { R } ^ { K _ { 1 } \times N \times F } \to \mathbb { R } ^ { K _ { 2 } \times N \times F }$ that are $( S _ { N } \times S _ { F } )$ -equivariant; that is, for each $k _ { 2 } \in [ K _ { 2 } ]$ , the corresponding output $T ( \pmb { X } ) _ { k _ { 2 } } \in \mathbb { R } ^ { N \times F }$ is given by
$$
T ( \pmb { X } ) _ { k _ { 2 } } = \mathbf { 1 } _ { N , N } \pmb { X } _ { k _ { 2 } } ^ { ( 1 ) } \mathbf { 1 } _ { F , F } + \mathbf { 1 } _ { N , N } \pmb { X } _ { k _ { 2 } } ^ { ( 2 ) } + \pmb { X } _ { k _ { 2 } } ^ { ( 3 ) } \mathbf { 1 } _ { F , F } + \pmb { X } _ { k _ { 2 } } ^ { ( 4 ) } ,
$$
where for all $i \in [ 4 ]$ , $\pmb { X } _ { k _ { 2 } } ^ { ( i ) } = \sum _ { k _ { 1 } = 1 } ^ { K _ { 1 } } \Lambda _ { k _ { 2 } , k _ { 1 } } ^ { ( i ) } \pmb { X } _ { k _ { 1 } }$ and $\pmb { \Lambda } ^ { ( i ) } \in \mathbb { R } ^ { K _ { 2 } \times K _ { 1 } }$ . Note that symmetry across both axes of the feature matrix requires lifting the 2D function inputs used in DeepSets to 3D tensors in DSS and in our framework.
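The four-term DSS layer can likewise be sketched and its $(S_N \times S_F)$-equivariance verified numerically (a NumPy illustration; names are of our choosing):

```python
import numpy as np

def dss_layer(X, Lam):
    """(S_N x S_F)-equivariant DSS layer on X of shape (K1, N, F).

    Lam: (4, K2, K1) stacks the four channel-mixing matrices Lambda^{(i)}.
    """
    _, N, F = X.shape
    oN, oF = np.ones((N, N)), np.ones((F, F))
    Xi = np.einsum('ijk,knf->ijnf', Lam, X)  # Xi[i] = X^{(i)}, shape (K2, N, F)
    return oN @ Xi[0] @ oF + oN @ Xi[1] + Xi[2] @ oF + Xi[3]

# Check equivariance under simultaneous node and feature permutations.
rng = np.random.default_rng(1)
K1, K2, N, F = 2, 3, 5, 4
X = rng.standard_normal((K1, N, F))
Lam = rng.standard_normal((4, K2, K1))
pn, pf = rng.permutation(N), rng.permutation(F)
assert np.allclose(dss_layer(X, Lam)[:, pn][:, :, pf],
                   dss_layer(X[:, pn][:, :, pf], Lam))
```

The four terms correspond to the total sum, column sums, row sums, and the identity, i.e., all ways of broadcasting permutation-invariant summaries over the two symmetric axes.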
Our framework for triple-symmetry. In our setting, the input includes a label matrix encoding the ground truths in addition to the feature matrix. Formally, given $( \pmb { X } , \pmb { Y } ) \in \mathbb { R } ^ { N \times F } \times \mathbb { R } ^ { N \times C }$ , where $\pmb { X } \in \mathbb { R } ^ { N \times F }$ is a feature matrix and $\pmb { Y } \in \mathbb { R } ^ { N \times C }$ is a label matrix, the ultimate goal of our work is to characterize the class of all linear transformations $T = ( T _ { 1 } , T _ { 2 } ) : \mathbb { R } ^ { N \times ( F + C ) } \to \mathbb { R } ^ { N \times ( F + C ) }$ that are $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant.
The following proposition provides a complete characterization of the linear maps that are equivariant to $S _ { N } \times S _ { F } \times S _ { C }$ .

Proposition 4.1. A linear function of the form $T = ( T _ { 1 } , T _ { 2 } ) : \mathbb { R } ^ { K _ { 1 } \times N \times F } \times \mathbb { R } ^ { K _ { 1 } \times N \times C } \to \mathbb { R } ^ { K _ { 2 } \times N \times F } \times \mathbb { R } ^ { K _ { 2 } \times N \times C }$ is $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant if and only if there exist $\pmb { \Lambda } ^ { ( 1 ) } , \ldots , \pmb { \Lambda } ^ { ( 1 2 ) } \in \mathbb { R } ^ { K _ { 2 } \times K _ { 1 } }$ such that for every $\pmb { X } \in \mathbb { R } ^ { K _ { 1 } \times N \times F }$ , $\pmb { Y } \in \mathbb { R } ^ { K _ { 1 } \times N \times C }$ , and $k _ { 2 } \in [ K _ { 2 } ]$ , the corresponding outputs $T _ { 1 } ( \pmb { X } , \pmb { Y } ) \in \mathbb { R } ^ { K _ { 2 } \times N \times F }$ and $T _ { 2 } ( \pmb { X } , \pmb { Y } ) \in \mathbb { R } ^ { K _ { 2 } \times N \times C }$ are given by
$$
\begin{array} { r l } & { T _ { 1 } ( \boldsymbol { X } , \boldsymbol { Y } ) _ { k _ { 2 } } = \left( \mathbf { 1 } _ { N , N } \boldsymbol { X } _ { k _ { 2 } } ^ { ( 1 ) } + \boldsymbol { X } _ { k _ { 2 } } ^ { ( 2 ) } \right) \mathbf { 1 } _ { F , F } + \mathbf { 1 } _ { N , N } \boldsymbol { X } _ { k _ { 2 } } ^ { ( 3 ) } + \boldsymbol { X } _ { k _ { 2 } } ^ { ( 4 ) } + \left( \mathbf { 1 } _ { N , N } \boldsymbol { Y } _ { k _ { 2 } } ^ { ( 5 ) } + \boldsymbol { Y } _ { k _ { 2 } } ^ { ( 6 ) } \right) \mathbf { 1 } _ { C , F } , } \\ & { T _ { 2 } ( \boldsymbol { X } , \boldsymbol { Y } ) _ { k _ { 2 } } = \left( \mathbf { 1 } _ { N , N } \boldsymbol { Y } _ { k _ { 2 } } ^ { ( 1 ) } + \boldsymbol { Y } _ { k _ { 2 } } ^ { ( 2 ) } \right) \mathbf { 1 } _ { C , C } + \mathbf { 1 } _ { N , N } \boldsymbol { Y } _ { k _ { 2 } } ^ { ( 3 ) } + \boldsymbol { Y } _ { k _ { 2 } } ^ { ( 4 ) } + \left( \mathbf { 1 } _ { N , N } \boldsymbol { X } _ { k _ { 2 } } ^ { ( 5 ) } + \boldsymbol { X } _ { k _ { 2 } } ^ { ( 6 ) } \right) \mathbf { 1 } _ { F , C } , } \end{array}
$$
where for all $i \in [ 6 ]$ , $\pmb { X } _ { k _ { 2 } } ^ { ( i ) } = \sum _ { k _ { 1 } = 1 } ^ { K _ { 1 } } \Lambda _ { k _ { 2 } , k _ { 1 } } ^ { ( i ) } \pmb { X } _ { k _ { 1 } }$ and $\pmb { Y } _ { k _ { 2 } } ^ { ( i ) } = \sum _ { k _ { 1 } = 1 } ^ { K _ { 1 } } \Lambda _ { k _ { 2 } , k _ { 1 } } ^ { ( i + 6 ) } \pmb { Y } _ { k _ { 1 } }$ .
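The characterization in Proposition 4.1 can be checked numerically; the sketch below implements the twelve-parameter map $(T_1, T_2)$ in NumPy and verifies equivariance under random node, feature, and label permutations (an illustrative sketch; all names are ours):

```python
import numpy as np

def triple_layer(X, Y, Lam):
    """(S_N x S_F x S_C)-equivariant map (T1, T2) of Proposition 4.1.

    X: (K1, N, F); Y: (K1, N, C); Lam: (12, K2, K1), with Lam[:6]
    mixing X-channels and Lam[6:] mixing Y-channels.
    """
    _, N, F = X.shape
    C = Y.shape[2]
    oN = np.ones((N, N))
    Xi = np.einsum('ijk,knf->ijnf', Lam[:6], X)  # X^{(1)}..X^{(6)}
    Yi = np.einsum('ijk,knc->ijnc', Lam[6:], Y)  # Y^{(1)}..Y^{(6)}
    T1 = ((oN @ Xi[0] + Xi[1]) @ np.ones((F, F)) + oN @ Xi[2] + Xi[3]
          + (oN @ Yi[4] + Yi[5]) @ np.ones((C, F)))
    T2 = ((oN @ Yi[0] + Yi[1]) @ np.ones((C, C)) + oN @ Yi[2] + Yi[3]
          + (oN @ Xi[4] + Xi[5]) @ np.ones((F, C)))
    return T1, T2

rng = np.random.default_rng(2)
K1, K2, N, F, C = 2, 3, 4, 5, 3
X, Y = rng.standard_normal((K1, N, F)), rng.standard_normal((K1, N, C))
Lam = rng.standard_normal((12, K2, K1))
pn, pf, pc = rng.permutation(N), rng.permutation(F), rng.permutation(C)
T1, T2 = triple_layer(X, Y, Lam)
T1p, T2p = triple_layer(X[:, pn][:, :, pf], Y[:, pn][:, :, pc], Lam)
assert np.allclose(T1[:, pn][:, :, pf], T1p)  # outputs permute with inputs
assert np.allclose(T2[:, pn][:, :, pc], T2p)
```

Note that the cross terms with $\mathbf{1}_{C,F}$ and $\mathbf{1}_{F,C}$ sum over the opposing modality's axis, which is what makes $T_1$ invariant to label permutations and $T_2$ invariant to feature permutations.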
# 4.2 A Universality Result for Sets of Triple-Symmetric Elements
In this section, we introduce triple-symmetry networks (TSNets) based on the $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant linear layers derived in Section 4.1. We then show that TSNets are universal on multisets, which makes these networks the core building blocks of our GFM.
Triple-symmetry networks for multi-sets (TSNets). Formally, we define a triple-symmetry network $F : \mathbb { R } ^ { 1 \times N \times ( F + C ) } \to \mathbb { R } ^ { 1 \times N \times C }$ as:
$$
F = \pi \circ T ^ { ( L ) } \circ \sigma \circ T ^ { ( L - 1 ) } \circ \sigma \circ \cdots \circ T ^ { ( 1 ) } ,
$$
where each $T ^ { ( \ell ) } : \mathbb { R } ^ { K ^ { ( \ell ) } \times N \times ( F + C ) } \to \mathbb { R } ^ { K ^ { ( \ell + 1 ) } \times N \times ( F + C ) }$ is an $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant linear layer, $\pi$ is a label projection map satisfying $\pi ( \pmb { X } , \pmb { Y } ) = \pmb { Y }$ for all $( \pmb { X } , \pmb { Y } ) \in \mathbb { R } ^ { K ^ { ( L ) } \times N \times F } \times \mathbb { R } ^ { K ^ { ( L ) } \times N \times C }$ , and $\sigma$ is a non-linearity.
Observe that $\pi$ is $S _ { F }$ -invariant and each intermediate linear layer $T ^ { ( \ell ) }$ is $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant by construction. Since equivariance and invariance are preserved under composition, it follows directly that the resulting network $F$ is both $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant, as required. We set $K ^ { ( 0 ) } = K ^ { ( L ) } = 1$ so that the network accepts the initial feature and label matrices and outputs predictions with the same shape as the ground-truth labels.
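A minimal single-channel ($K^{(\ell)} = 1$) TSNet can be assembled by composing such layers with a ReLU non-linearity and the label projection $\pi$; the sketch below verifies the $(S_N \times S_C)$-equivariance and $S_F$-invariance of the full network (an illustration under our simplifying assumptions, not the full multi-channel architecture):

```python
import numpy as np

def layer(X, Y, lam):
    """Single-channel (S_N x S_F x S_C)-equivariant layer; lam holds 12 scalars."""
    N, F = X.shape
    C = Y.shape[1]
    oN = np.ones((N, N))
    X1 = ((lam[0] * oN @ X + lam[1] * X) @ np.ones((F, F))
          + lam[2] * oN @ X + lam[3] * X
          + (lam[4] * oN @ Y + lam[5] * Y) @ np.ones((C, F)))
    Y1 = ((lam[6] * oN @ Y + lam[7] * Y) @ np.ones((C, C))
          + lam[8] * oN @ Y + lam[9] * Y
          + (lam[10] * oN @ X + lam[11] * X) @ np.ones((F, C)))
    return X1, Y1

def tsnet(X, Y, lams):
    for lam in lams[:-1]:
        X, Y = layer(X, Y, lam)
        X, Y = np.maximum(X, 0), np.maximum(Y, 0)  # sigma = ReLU
    _, Y = layer(X, Y, lams[-1])
    return Y                                       # pi: keep only the label part

rng = np.random.default_rng(3)
N, F, C = 6, 4, 3
X, Y = rng.standard_normal((N, F)), rng.standard_normal((N, C))
lams = rng.standard_normal((2, 12))
pn, pf, pc = rng.permutation(N), rng.permutation(F), rng.permutation(C)
out = tsnet(X, Y, lams)
assert np.allclose(out, tsnet(X[:, pf], Y, lams))                     # S_F-invariant
assert np.allclose(out[pn][:, pc], tsnet(X[pn], Y[pn][:, pc], lams))  # (S_N x S_C)-equivariant
```

The intermediate layers are only $S_F$-equivariant; it is the final projection onto the label branch, whose dependence on $X$ passes through sums over the feature axis, that turns equivariance into the required $S_F$-invariance.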
We are now ready to state our main result: TSNets are not only symmetry-preserving but also universally expressive on multi-sets, i.e., capable of approximating any continuous function that respects the required group symmetries. Notably, this guarantee holds over a compact domain excluding the low-dimensional exception set $\mathcal { E }$ , defined in Definition B.5.
Theorem 4.2. Let $\mathcal { K } \subset \mathbb { R } ^ { N \times ( F + C ) }$ be a compact domain such that $\mathcal { K } = \cup _ { g \in S _ { N } \times S _ { F } \times S _ { C } } \, g \mathcal { K }$ and $\mathcal { K } \cap \mathcal { E } = \emptyset$ , where $\mathcal { E } \subset \mathbb { R } ^ { N \times ( F + C ) }$ is the exclusion set corresponding to $\mathbb { R } ^ { N \times ( F + C ) }$ (Definition B.5). Then, TSNets are universal approximators in $L _ { \infty }$ of continuous functions $\mathcal { K } \to \mathbb { R } ^ { N \times C }$ that are $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant.
TSNets thus approximate all triple-symmetry-preserving functions on multi-sets, justifying their integration into GFMs.
Relations to the literature. Our result is closely related to Theorem 3 of Maron et al. [2020], which establishes a universality result for $( S _ { N } \times H )$ -equivariant networks, where $H \leq S _ { F }$ . Specifically, the theorem states that for any subgroup $H \leq S _ { F }$ and any compact domain $\mathcal { K } \subset \mathbb { R } ^ { N \times F }$ that is closed under the action of $S _ { N } \times H$ and avoids the exception set $\mathcal { E } ^ { \prime }$ (defined in Definition C.1), if $H$ -equivariant networks are universal for $H$ -equivariant functions, then $( S _ { N } \times H )$ -equivariant networks are universal approximators (in the $L _ { \infty }$ norm) for continuous $( S _ { N } \times H )$ -equivariant functions defined on $\mathcal { K }$ . However, their proof contains a subtle but important flaw. They introduce an $H$ -equivariant polynomial $p _ { 1 } : \mathbb { R } ^ { N \times F } \to \mathbb { R } ^ { F }$ and write “If we fix $X _ { 2 } , \ldots , X _ { N }$ , then $p _ { 1 } ( X _ { 1 } , \ldots , X _ { N } )$ is an $H$ -equivariant polynomial in $X _ { 1 }$ ”, which does not hold in general. By definition, $H$ -equivariance of $p _ { 1 }$ requires the group action to be applied consistently across all arguments: applying the action to only one row while fixing the others violates equivariance. We refer the reader to Appendix C for a detailed discussion of this issue, why it invalidates the general universality claim for arbitrary subgroups $H \leq S _ { F }$ , and how, by leveraging techniques from Segol and Lipman [2019], the claim can still be upheld in the special case $H = S _ { F }$ , a key insight that allows us to show universality for TSNets in Theorem 4.2.
Proof strategy. We prove Theorem 4.2 while avoiding the aforementioned caveat by closely following the proof techniques introduced by Segol and Lipman [2019] and Maron et al. [2020]. Specifically, our proof can be decomposed into four high-level steps. (i) Characterization of $( S _ { F } \times S _ { C } )$ -invariant polynomials. It is well established that any $S _ { N }$ -invariant polynomial can be represented as a polynomial in the multi-symmetric power-sum polynomials [Briand, 2004, Rydh, 2007, Segol and Lipman, 2019]. We provide a non-trivial extension to a richer symmetry setting by defining the doubly-symmetric polynomials (DMPs) (see Definition B.2) and showing that any $( S _ { F } \times S _ { C } )$ -invariant polynomial can be represented as a polynomial in the DMPs (see Lemma B.3). (ii) Expressing $( S _ { N - 1 } \times S _ { F - 1 } \times S _ { C } )$ -invariant polynomials using DMPs. We adapt the techniques from Theorem 3 of Maron et al. [2020] and Lemma 2 of Segol and Lipman [2019] to use the DMPs introduced in (i) and the $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant linear maps we derived in Equation (4) to express $( S _ { N - 1 } \times S _ { F - 1 } \times S _ { C } )$ - or $( S _ { N - 1 } \times S _ { F } \times S _ { C - 1 } )$ -invariant polynomials. (iii) Composition of $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant polynomials. We extend Lemma 1 of Segol and Lipman [2019] to show that the outputs of an $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant polynomial are $( S _ { N - 1 } \times S _ { F - 1 } \times S _ { C } )$ - or $( S _ { N - 1 } \times S _ { F } \times S _ { C - 1 } )$ -invariant. By leveraging the stronger symmetry structure provided by the full permutation groups rather than their subgroups, we avoid the pitfall in Theorem 3 of Maron et al. [2020], enabling the composition of equivariant functions via invariant ones, which was also the goal of the flawed transition in Maron et al. [2020] discussed above. (iv) Approximating $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant functions using TSNets. We replicate each intermediate step from (ii) and (iii), constructing approximators for the corresponding polynomial spaces using the $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant linear maps derived in Equation (4) and multi-layer perceptrons instantiated from the Universal Approximation Theorem (Theorem B.9). Finally, by incorporating the label projection map $\pi$ as the final layer of our architecture, we enforce $S _ { F }$ -invariance and obtain an approximator for the desired class of $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant functions, completing our construction for the triple-symmetry setting. A more detailed illustration, incorporating additional steps, is provided in Figure 4, with the complete proof detailed in Appendix B.
Note that when labels are removed (i.e., $C = 0$ ), steps (ii) and (iii) recover a corrected but less general version of Theorem 3 in Maron et al. [2020] for $H = S _ { F }$ , using techniques from Segol and Lipman [2019], Maron et al. [2020].
# 5 A Recipe for Graph Foundation Models for Node Property Prediction
In this section, we build on the theoretical results from Section 4.1 to derive a practical architecture for node property prediction (see Figure 2). Specifically, we construct a learnable layer that is equivariant to the symmetry group $S _ { N } \times S _ { F } \times S _ { C }$ , thereby inheriting the universality-over-sets property established in Section 4.2. In the graph setting, this guarantees that the architecture retains the expressive power of the original GNN, while elevating it to a symmetry-aware foundation model that, unlike traditional GNNs, generalizes across diverse feature sets and labels.
Intuitively, we derive an architecture based on Equation (4), replacing the coefficients $\pmb { \Lambda } _ { 1 } , \ldots , \pmb { \Lambda } _ { 1 2 }$ with learnable weight matrices, similar to TSNets. More concretely, we consider inputs of the form:
$$
( \pmb { X } , \pmb { Y } , \pmb { A } ) \in \mathbb { R } ^ { 1 \times N \times ( F + C ) } \times [ 0 , 1 ] ^ { N \times N } ,
$$
where $\pmb { X } \in \mathbb { R } ^ { N \times F }$ is a feature matrix, $\pmb { Y } \in \mathbb { R } ^ { N \times C }$ is a label matrix, and $\pmb { A } \in [ 0 , 1 ] ^ { N \times N }$ is the graph adjacency matrix.
Triple-symmetry graph neural networks (TS-GNNs). Formally, we define a triple-symmetry graph neural network $F : \mathbb { R } ^ { 1 \times N \times ( F + C ) } \times [ 0 , 1 ] ^ { N \times N } \to \mathbb { R } ^ { 1 \times N \times C }$ as:
$$
{ \boldsymbol { F } } = \pi \circ { \boldsymbol { T } } ^ { ( L ) } \circ \sigma \circ { \boldsymbol { T } } ^ { ( L - 1 ) } \circ \sigma \circ \cdots \circ { \boldsymbol { T } } ^ { ( 1 ) } ,
$$
where $\sigma$ is a non-linearity (e.g., ReLU) and each $T ^ { ( \ell ) }$ is an $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant linear layer that acts jointly on the node features and node labels, while also using the adjacency matrix:
$$
T ^ { ( \ell ) } : \mathbb { R } ^ { K ^ { ( \ell ) } \times N \times ( F + C ) } \times [ 0 , 1 ] ^ { N \times N } \to \mathbb { R } ^ { K ^ { ( \ell + 1 ) } \times N \times ( F + C ) } \times [ 0 , 1 ] ^ { N \times N } .
$$
Figure 2: An illustration of the feature and label embeddings across the layers of our triple-symmetric graph neural network architecture. The architecture is composed of feature-, label- and node-equivariant aggregation layers and a final feature-invariant projection layer.
Each $T ^ { ( \ell ) }$ updates the feature $X _ { v } ^ { ( \ell ) } \in \mathbb { R } ^ { K ^ { ( \ell ) } \times F }$ and label $\boldsymbol { Y } _ { v } ^ { ( \ell ) } \in \mathbb { R } ^ { K ^ { ( \ell ) } \times C }$ representations of node $v \in \mathcal V$ at layer $\ell$ as:
$$
\begin{array} { r l } { X _ { v } ^ { ( \ell + 1 ) } = } & { \psi \big ( W _ { 1 } ^ { ( \ell ) } X _ { v } ^ { ( \ell ) } , \{ W _ { 2 } ^ { ( \ell ) } X _ { u } ^ { ( \ell ) } \mid u \in \mathcal { N } _ { v } \} \big ) + \psi \big ( W _ { 3 } ^ { ( \ell ) } X _ { v } ^ { ( \ell ) } \mathbf { 1 } _ { F , F } , \{ W _ { 4 } ^ { ( \ell ) } X _ { u } ^ { ( \ell ) } \mathbf { 1 } _ { F , F } \mid u \in \mathcal { N } _ { v } \} \big ) } \\ & { + \, \psi \big ( W _ { 5 } ^ { ( \ell ) } Y _ { v } ^ { ( \ell ) } \mathbf { 1 } _ { C , F } , \{ W _ { 6 } ^ { ( \ell ) } Y _ { u } ^ { ( \ell ) } \mathbf { 1 } _ { C , F } \mid u \in \mathcal { N } _ { v } \} \big ) + \Theta _ { F } , } \\ { Y _ { v } ^ { ( \ell + 1 ) } = } & { \psi \big ( W _ { 7 } ^ { ( \ell ) } Y _ { v } ^ { ( \ell ) } , \{ W _ { 8 } ^ { ( \ell ) } Y _ { u } ^ { ( \ell ) } \mid u \in \mathcal { N } _ { v } \} \big ) + \psi \big ( W _ { 9 } ^ { ( \ell ) } Y _ { v } ^ { ( \ell ) } \mathbf { 1 } _ { C , C } , \{ W _ { 1 0 } ^ { ( \ell ) } Y _ { u } ^ { ( \ell ) } \mathbf { 1 } _ { C , C } \mid u \in \mathcal { N } _ { v } \} \big ) } \\ & { + \, \psi \big ( W _ { 1 1 } ^ { ( \ell ) } X _ { v } ^ { ( \ell ) } \mathbf { 1 } _ { F , C } , \{ W _ { 1 2 } ^ { ( \ell ) } X _ { u } ^ { ( \ell ) } \mathbf { 1 } _ { F , C } \mid u \in \mathcal { N } _ { v } \} \big ) + \Theta _ { C } , } \end{array}
$$
where $\psi$ is an aggregation function, $W _ { i } ^ { ( \ell ) } \in \mathbb { R } ^ { K ^ { ( \ell + 1 ) } \times K ^ { ( \ell ) } }$ , $i \in [ 1 2 ]$ , are learnable weight matrices (replacing the coefficients $\Lambda _ { i }$ ), and the terms $\Theta _ { F }$ and $\Theta _ { C }$ are $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant feature-label mixing terms (detailed below). Each such layer thus directly encodes the respective $( S _ { N } \times S _ { F } \times S _ { C } )$ -equivariant linear transformations from our theoretical formulation in Proposition 4.1. Finally, a label projection map $\pi$ satisfying:
$$
\pi ( \pmb { X } , \pmb { Y } , \pmb { A } ) = \pmb { Y } \text { for all } ( \pmb { X } , \pmb { Y } , \pmb { A } ) \in \mathbb { R } ^ { K ^ { ( L ) } \times N \times ( F + C ) } \times [ 0 , 1 ] ^ { N \times N } ,
$$
is applied to obtain node-wise predictions. As with TSNets, we set $K ^ { ( 0 ) } = K ^ { ( L ) } = 1$ . It is easy to see that the architecture is $( S _ { N } \times S _ { C } )$ -equivariant and $S _ { F }$ -invariant, as desired.
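For illustration, the sketch below instantiates one TS-GNN layer with $K^{(\ell)} = K^{(\ell+1)} = 1$, mean aggregation as $\psi$, and the mixing terms $\Theta_F, \Theta_C$ omitted, then checks the triple symmetry on a random graph (a simplified NumPy sketch; all names are ours):

```python
import numpy as np

def ts_gnn_layer(X, Y, A, w):
    """One simplified TS-GNN update: single channels, psi = self + neighbor mean,
    Theta_F and Theta_C omitted.  w: 12 scalar weights standing in for W_i."""
    F, C = X.shape[1], Y.shape[1]
    M = A / A.sum(1, keepdims=True)  # row-normalized adjacency: neighbor mean
    oFF, oCC = np.ones((F, F)), np.ones((C, C))
    oCF, oFC = np.ones((C, F)), np.ones((F, C))
    Xn = (w[0] * X + w[1] * M @ X
          + (w[2] * X + w[3] * M @ X) @ oFF
          + (w[4] * Y + w[5] * M @ Y) @ oCF)
    Yn = (w[6] * Y + w[7] * M @ Y
          + (w[8] * Y + w[9] * M @ Y) @ oCC
          + (w[10] * X + w[11] * M @ X) @ oFC)
    return Xn, Yn

rng = np.random.default_rng(4)
N, F, C = 6, 4, 3
A = (rng.random((N, N)) < 0.5).astype(float)
np.fill_diagonal(A, 1.0)  # self-loops keep row sums positive
X, Y = rng.standard_normal((N, F)), rng.standard_normal((N, C))
w = rng.standard_normal(12)
pn, pf, pc = rng.permutation(N), rng.permutation(F), rng.permutation(C)
Xn, Yn = ts_gnn_layer(X, Y, A, w)
Xp, Yp = ts_gnn_layer(X[pn][:, pf], Y[pn][:, pc], A[pn][:, pn], w)
assert np.allclose(Xn[pn][:, pf], Xp) and np.allclose(Yn[pn][:, pc], Yp)
```

Localizing the global aggregation $\mathbf{1}_{N,N}$ into the neighbor mean $M$ preserves node-equivariance because relabeling the nodes conjugates $M$ by the same permutation.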
How to mix features and labels? A key consideration in designing graph foundation models is determining the degree of interaction between feature and label embeddings. In the extreme, if we do not impose any such interactions then the model’s predictions would be independent of the input features, leading to a degradation in model performance. Effective information flow between feature and label representations is thus essential for building strong graph foundation models. This raises a central design question: To what extent should feature embeddings and label embeddings interact?
In TS-GNNs, we initially perform mixing by applying permutation-invariant pooling operations (e.g., feature-wise or label-wise sum pooling) and appending the pooled representations to the opposing modality (feature-to-label and label-to-feature). While such operations are theoretically sufficient for our universal approximation result (Section 4.2), they often suffer from practical limitations, because all feature or label information is compressed into a single global vector representation, creating a representational bottleneck. To overcome this issue and enable richer feature-label mixing, we introduce an additional mixing operation based on least-squares solutions, which showed strong empirical results [Zhao et al., 2025]. Specifically, we solve the following optimization problems:
$$
T _ { F } = \underset { T _ { F } \in \mathbb { R } ^ { F \times C } } { \arg \operatorname* { m i n } } \ \Vert Y - A X T _ { F } \Vert _ { 2 } \quad \text { and } \quad T _ { C } = \underset { T _ { C } \in \mathbb { R } ^ { C \times F } } { \arg \operatorname* { m i n } } \ \Vert X - A Y T _ { C } \Vert _ { 2 } ,
$$
where $A$ is a random-walk normalized adjacency matrix. These projections transform features into the label space and labels into the feature space, respectively. We incorporate these least-squares transformations into each layer as follows:
$$
\Theta _ { F } = \psi \left( W _ { 1 3 } ^ { ( \ell ) } Y _ { v } ^ { ( \ell ) } T _ { C } , \ \{ W _ { 1 4 } ^ { ( \ell ) } Y _ { u } ^ { ( \ell ) } T _ { C } \mid u \in \mathcal { N } _ { v } \} \right) , \quad \Theta _ { C } = \psi \left( W _ { 1 5 } ^ { ( \ell ) } X _ { v } ^ { ( \ell ) } T _ { F } , \ \{ W _ { 1 6 } ^ { ( \ell ) } X _ { u } ^ { ( \ell ) } T _ { F } \mid u \in \mathcal { N } _ { v } \} \right) ,
$$
where ${ W } _ { 1 3 } ^ { ( \ell ) } , { W } _ { 1 4 } ^ { ( \ell ) } , { W } _ { 1 5 } ^ { ( \ell ) } , { W } _ { 1 6 } ^ { ( \ell ) } \in \mathbb { R } ^ { K ^ { ( \ell + 1 ) } \times K ^ { ( \ell ) } }$ are learnable weight matrices.
These terms share the same objective as our original mixing terms, but crucially avoid collapsing over the feature or label dimensions. Instead, they introduce structured linear mappings across modalities via ${ \pmb T } _ { F }$ and $\scriptstyle { \pmb { T } } _ { C }$ , enabling localized mixing. Importantly, these least-squares transformations are linear and equivariant to node, feature, and label permutations, allowing us to retain the triple-symmetry property at every layer. This design also lays a foundation for future extensions involving more expressive mixing mechanisms.
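These least-squares mixers reduce to standard linear solves; the NumPy sketch below computes $T_F$ and $T_C$ with `np.linalg.lstsq` on a random-walk normalized adjacency and checks that permuting the labels permutes $T_F$ accordingly (an illustration, not the training-time implementation):

```python
import numpy as np

rng = np.random.default_rng(5)
N, F, C = 8, 5, 3
A = (rng.random((N, N)) < 0.5).astype(float)
np.fill_diagonal(A, 1.0)
A = A / A.sum(1, keepdims=True)              # random-walk normalization
X = rng.standard_normal((N, F))
Y = np.eye(C)[rng.integers(0, C, N)]         # one-hot labels

# Closed-form mixers: features -> label space and labels -> feature space.
T_F, *_ = np.linalg.lstsq(A @ X, Y, rcond=None)  # argmin ||Y - A X T_F||
T_C, *_ = np.linalg.lstsq(A @ Y, X, rcond=None)  # argmin ||X - A Y T_C||
assert T_F.shape == (F, C) and T_C.shape == (C, F)

# Permuting the label columns permutes the columns of T_F accordingly,
# i.e., the mixer respects label-equivariance.
pc = rng.permutation(C)
T_Fp, *_ = np.linalg.lstsq(A @ X, Y[:, pc], rcond=None)
assert np.allclose(T_Fp, T_F[:, pc])
```

Because least squares is solved independently per output column, any permutation of label (or feature) columns simply permutes the corresponding columns of the solution, which is the equivariance property claimed above.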
Table 1: Test accuracy of the baselines MeanGNN and GAT, and of the models GraphAny and our GFM variants, all trained on cora. The top three models are colored First, Second, Third.
How to incorporate bias terms? Another important design choice involves the integration of bias terms into a model that must generalize to varying feature scales. Even after row-wise or global $L _ { 1 } / L _ { 2 }$ normalization, node features can differ both within a graph and across graphs. A bias vector learned on the set of training graphs may be much larger or much smaller than the features observed at test time. This is clearly problematic for generalization, as the bias term may, for instance, dominate the actual input features in magnitude, leading to them being largely ignored. One option is to avoid bias terms in the layers entirely, but this is too restrictive. In our architecture, we instead solve the least-squares problem with an added bias term, and the resulting biases are injected only through the feature-to-label and label-to-feature mixers ( $\Theta _ { F }$ and $\Theta _ { C }$ ). Such biases are learned relative to the current feature scale, and thus remain scale-aware and adapt naturally to new graphs, offering better generalization.
# 6 Empirical Evaluation
In this section, we empirically validate the generalization capabilities of TS-GNNs, addressing two core research questions:
Q1 Can the proposed graph foundation models effectively generalize to unseen graphs with varying structures, feature configurations, and label semantics without retraining?
Q2 Does the zero-shot generalization ability of the proposed graph foundation models improve as the number of training graphs increases?
To address these questions, we perform experiments across 29 real-world node classification datasets, covering diverse domains, graph sizes, and feature representations. Note that we focus on the node classification setting due to the lack of node-regression datasets; we refer the reader to the corresponding discussion in Appendix D. We optimize all models using the Adam optimizer, with detailed hyperparameter settings provided in Appendix D.3. Experiments are executed on a single NVIDIA L40 GPU, and our implementation is publicly accessible at: https://github.com/benfinkelshtein/EquivarianceEverywhere.
Datasets. We use an ensemble of 29 node classification datasets and their respective official splits. Specifically, we use roman-empire, amazon-ratings, minesweeper, questions and tolokers from Platonov et al. [2024], cora, citeseer and pubmed from Yang et al. [2016], chameleon and squirrel from Rozemberczki et al. [2021], cornell, wisconsin, texas and actor from Pei et al. [2020], full-dblp and full-cora from Bojchevski and Günnemann [2018], wiki-attr and blogcatalog from Yang et al. [2023], wiki-cs from Mernyei and Cangea [2022], co-cs, co-physics, computers and photo from Shchur et al. [2019], brazil, usa and europe from Ribeiro et al. [2017], last-fm-asia and deezer from Rozemberczki and Sarkar [2020], and arxiv from Hu et al. [2021]. As full-cora, co-cs, and co-physics have an exceptionally large feature dimension, we first reduce their feature dimensionality with PCA to 2048 components before training. Lastly, we apply $L _ { 2 }$ normalization to each node’s feature vector.
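The preprocessing step can be sketched as follows (an SVD-based PCA illustration with a small toy dimension in place of the 2048 components used above):

```python
import numpy as np

def preprocess(X, k):
    """PCA to k components, then row-wise L2 normalization."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                # project onto the top-k principal axes
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 50))   # toy stand-in for a large feature matrix
Z = preprocess(X, 10)                # the paper uses k = 2048
assert Z.shape == (100, 10)
assert np.allclose(np.linalg.norm(Z, axis=1), 1.0)
```

Row-wise $L_2$ normalization places every node's feature vector on the unit sphere, which limits the cross-graph scale variation discussed in the bias-term design above.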
# 6.1 Do TS-GNNs Generalize to Unseen Graphs?
Setup. To evaluate generalization (Q1), we design an experiment that assesses how well each model performs after being trained on a single graph (cora), measuring how much transferable knowledge can be extracted from that graph. Our setup closely follows the one introduced in GraphAny [Zhao et al., 2025]. We experiment with two widely-used baseline GNN architectures: MeanGNN and GAT [Veličković et al., 2018]. Since these models are inherently non-transferable across datasets, we train and evaluate each baseline in an end-to-end fashion on every dataset. In addition, we provide zero-shot experiments with TS-Mean and TS-GAT, which are TS-GNNs using MeanGNN and GAT, respectively, as their aggregation function. We also experiment with GraphAny [Zhao et al., 2025], the only other node-level graph foundation model, which provides a strong zero-shot prediction baseline. In the zero-shot setup, we train each of TS-Mean, TS-GAT, and GraphAny on the cora dataset and evaluate their performance on the remaining 28 datasets. All reported results reflect the mean accuracy and standard deviation over five random seeds.
Results. Table 1 shows that both TS-Mean and TS-GAT consistently outperform their respective baselines, even though the baselines are trained end-to-end on each target dataset. This highlights the generalization capabilities of our proposed framework and its compatibility with varying GNN architectures. Furthermore, both TS-Mean and TS-GAT outperform GraphAny on average across the 28 datasets under its own evaluation protocol, further underscoring the strong generalization capabilities of our framework. These results directly address our initial research question (Q1): our theoretically grounded framework successfully generalizes across graphs, features, and label distributions without retraining.
# 6.2 Does More Training Data Improve Generalization?
Setup. To assess the impact of the amount of training data on zero-shot generalization (Q2), we train TS-Mean and GraphAny on increasingly larger subsets of a held-out training pool. We reserve 9 representative graphs for training and construct training subsets of sizes 1, 3, 5, 7, and 9, where each larger subset includes all datasets from the smaller training sets. The remaining 20 datasets are used for zero-shot evaluation. For each subset size, we report the mean zero-shot accuracy, averaged over five random seeds. Detailed per-dataset results are provided in Appendix D.1.
Results. Figure 3 shows that the zero-shot accuracy of TS-Mean improves steadily as more training graphs are introduced. This behavior aligns with the expected characteristics of a graph foundation model – the ability to benefit from increased data diversity, directly addressing Q2. In contrast, GraphAny’s performance remains unchanged as more training graphs are added. This counterintuitive result suggests that GraphAny is not well-suited for the foundation-model setting and may instead be better tailored to scenarios with limited training data. Interestingly, both models perform comparably when trained on three graphs or fewer, indicating that TS-Mean is competitive in the low-data regime. However, as the training set grows, TS-Mean increasingly outperforms GraphAny, highlighting performance that scales with training data.
Figure 3: Average zero-shot accuracy of TS-Mean and GraphAny across 20 datasets as a function of the number of training graphs.
While we do not claim to establish a formal “scaling law,” this offers, to our knowledge, the first empirical indication that a graph foundation model improves with more training data – a key property of foundation models in language and vision. These findings position TS-GNN as a strong and, at present, the only viable candidate for node-level graph foundation modeling. | Graph machine learning architectures are typically tailored to specific tasks
on specific datasets, which hinders their broader applicability. This has led
to a new quest in graph machine learning: how to build graph foundation models
capable of generalizing across arbitrary graphs and features? In this work, we
present a recipe for designing graph foundation models for node-level tasks
from first principles. The key ingredient underpinning our study is a
systematic investigation of the symmetries that a graph foundation model must
respect. In a nutshell, we argue that label permutation-equivariance alongside
feature permutation-invariance are necessary in addition to the common node
permutation-equivariance on each local neighborhood of the graph. To this end,
we first characterize the space of linear transformations that are equivariant
to permutations of nodes and labels, and invariant to permutations of features.
We then prove that the resulting network is a universal approximator on
multisets that respect the aforementioned symmetries. Our recipe uses such
layers on the multiset of features induced by the local neighborhood of the
graph to obtain a class of graph foundation models for node property
prediction. We validate our approach through extensive experiments on 29
real-world node classification datasets, demonstrating both strong zero-shot
empirical performance and consistent improvement as the number of training
graphs increases. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
# A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning
Asbjørn Munk1,2,+,\*, Stefano Cerri3,1,2,14,+,\*, Jakob Ambsdorf1,2, Julia Machnio1,2, Sebastian Nørgaard Llambias1,2, Vardan Nersesjan3,12, Christian Hedeager Krag10,11, Peirong Liu4,5,6,9, Pablo Rocamora García1,2, Mostafa Mehdipour Ghazi1,2, Mikael Boesen10,13, Michael Eriksen Benros3,14, Juan Eugenio Iglesias4,5,6,7,8, and Mads Nielsen1,2
1Department of Computer Science, University of Copenhagen, Denmark
2Pioneer Centre For AI, Denmark
3Copenhagen Research Centre for Biological and Precision Psychiatry, Mental Health Centre Copenhagen,
Copenhagen University Hospital, Denmark
4Athinoula A. Martinos Center for Biomedical Imaging, USA
5Massachusetts General Hospital, USA
6Harvard Medical School, USA
7Massachusetts Institute of Technology, USA
8Hawkes Institute, University College London, UK
9Johns Hopkins University, USA
10Radiological AI Testcenter, Denmark
11Faculty of Health and Medical Sciences, University of Copenhagen, Denmark
12Copenhagen University Hospital, Rigshospitalet, Denmark
13Copenhagen University Hospital, Bispebjerg & Frederiksberg Hospital, Denmark
14Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark
\* {asmu,stce}@di.ku.dk
+Equal contribution. Author order may be adjusted for individual use.
# ABSTRACT
We present FOMO60K, a large-scale, heterogeneous dataset of 60,529 brain Magnetic Resonance Imaging (MRI) scans from 13,900 sessions and 11,187 subjects, aggregated from 16 publicly available sources. The dataset includes both clinical- and research-grade images, multiple MRI sequences, and a wide range of anatomical and pathological variability, including scans with large brain anomalies. Minimal preprocessing was applied to preserve the original image characteristics while reducing barriers to entry for new users. Accompanying code for self-supervised pretraining and finetuning is provided. FOMO60K is intended to support the development and benchmarking of self-supervised learning methods in medical imaging at scale.
# Background & Summary
Self-supervised learning (SSL) has led to major breakthroughs in computer vision and natural language processing, largely driven by the availability of large-scale public datasets such as ImageNet$^1$, Places365$^2$, and OpenWebText$^3$. These resources have enabled the development, benchmarking, and rapid iteration of powerful SSL methods under standardized settings. In neuroimaging, however, the lack of comparably large and diverse public datasets has slowed the adoption and evaluation of SSL approaches. Existing resources such as ADNI$^4$, UK Biobank$^5$, PPMI$^6$, and ABCD$^7$, while valuable, are often curated for specific diseases or patient populations, and typically follow homogeneous imaging protocols with limited pathological variability. Access is often restricted by formal applications, data use agreements, and institutional approvals, and data are commonly distributed in formats that require domain-specific preprocessing. These challenges raise the barrier to entry and hinder the scalability of SSL pretraining. The recent release of OpenMind$^8$ is a welcome step forward, reflecting the field’s growing momentum toward more diverse and accessible neuroimaging datasets.
To address these limitations, we introduce FOMO60K, a large-scale, heterogeneous dataset of 60,529 brain MRI scans from 13,900 sessions and 11,187 subjects, aggregated from 16 publicly available sources. FOMO60K spans both clinical- and research-grade imaging, includes multiple MRI sequences, and captures anatomical and pathological diversity, including scans with large brain anomalies, bringing it closer to real-world population-level data.
Figure 1. Representative examples from the FOMO60K dataset, illustrating the heterogeneity in image quality, MRI sequences, and the presence of brain anomalies.
Minimal preprocessing was applied to retain the raw characteristics of the original images while improving usability. We also release code for self-supervised pretraining and fine-tuning to facilitate benchmarking, method development, and broader adoption of SSL in medical imaging. FOMO60K was developed in parallel with the FOMO25 challenge at MICCAI 2025$^9$, which aims to catalyze progress in self-supervised learning for medical imaging. The dataset will continue to grow in future releases as additional cohorts and modalities become available.
# Methods
FOMO60K is derived from 16 publicly available datasets. Table 1 summarizes the source datasets, listing the number of subjects, sessions, scans, MRI sequence types, and applied preprocessing steps. MRI sequence types include T1-weighted (T1), T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), and Diffusion-Weighted Imaging (DWI), among others. Most datasets include at least T1-weighted scans, with many also providing T2, FLAIR, and DWI scans; less common modalities include contrast-enhanced T1 (T1ce), Proton Density (PD), Susceptibility-Weighted Imaging (SWI), and Gradient Echo (GRE). A visual overview of the dataset’s heterogeneity—in terms of image quality, modality, and pathology—is provided in Fig. 1.
# MRI preprocessing
MRI preprocessing comprised three main steps: reorienting images to RAS (Right–Anterior–Superior) orientation, affine co-registration, and skull-stripping. All scans were first reoriented to RAS and affinely co-registered using the mri_coreg command from FreeSurfer 7.4.1$^{30}$, with default parameters. Within each MRI session, scans were aligned to the image with the highest spatial resolution to preserve anatomical detail.
For diffusion-weighted imaging (DWI) scans stored in 4D format, additional steps were applied. If a b=0 (non-diffusion-weighted) volume was available, it was extracted and saved separately. For the b=1000 shell, three diffusion-weighted volumes were selected based on their gradient directions being most closely aligned with the canonical x, y, and z axes. Alignment was determined by computing the cosine similarity between each gradient vector (bvec) and the orthogonal unit vectors. The corresponding volumes were averaged to produce a single representative 3D image.
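The gradient-selection rule described above can be sketched in a few lines. This is a hypothetical helper, not the released preprocessing code; in particular, the b-value tolerance for identifying the shell is our assumption:

```python
import numpy as np

def pick_axis_aligned_volumes(bvecs, bvals, shell=1000, tol=50):
    """Return indices of the three shell volumes whose gradient directions are
    most aligned, by absolute cosine similarity, with the canonical x, y, z axes."""
    bvecs = np.asarray(bvecs, dtype=float)            # (N, 3) gradient directions
    bvals = np.asarray(bvals, dtype=float)            # (N,) b-values
    shell_idx = np.flatnonzero(np.abs(bvals - shell) <= tol)
    dirs = bvecs[shell_idx]
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit gradient vectors
    # For each canonical axis, pick the most closely aligned gradient direction.
    return [int(shell_idx[np.argmax(np.abs(dirs @ axis))]) for axis in np.eye(3)]
```

The three selected volumes would then be averaged along the last axis of the 4D array, e.g. `data[..., picks].mean(axis=-1)`, to produce the single representative 3D image.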
Skull-stripping was performed using SynthSeg31 (FreeSurfer 7.4.1), which outputs segmentation masks of brain structures. These masks were used to define the brain extraction region. Skull-stripping was only applied when the images were not already defaced or skull-stripped by the dataset provider, or when visual inspection identified residual cranial features that could compromise anonymization.
Table 1. Overview of the current datasets in FOMO60K. For each dataset, we summarize the number of subjects, MRI sessions, scans, available sequence types, and whether the images were skull-stripped or defaced. See Methods for full preprocessing details. Sequences are abbreviated as follows: T1 = T1-weighted, T2 = T2-weighted, T2* = T2*-weighted, T1ce = T1-weighted contrast-enhanced, FLAIR = Fluid-Attenuated Inversion Recovery, DWI = Diffusion-Weighted Imaging, PD = Proton Density, SWI = Susceptibility-Weighted Imaging, GRE = Gradient Echo, miniP = minimum intensity projection. "Unknown" indicates that scan sequence metadata was not available or could not be reliably identified.
# Data Record
The FOMO60K dataset is publicly available at https://huggingface.co/datasets/FOMO25/FOMO-MRI. All MRI scans are stored in NIfTI-compressed format and organized using a standardized directory structure. Each subject is assigned a unique identifier (sub_X), and each session is labeled as ses_Y. Scans within a session are named according to their sequence type (e.g., t1, flair). If multiple scans of the same sequence are present, they are enumerated (e.g., t1_1, t1_2). In cases where sequence information is unavailable, scans are generically named scan_X. To protect privacy and prevent easy identification of dataset sources, subjects have been shuffled.
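The directory layout above can be turned into a small path-building helper; this is our own illustrative sketch, and the `.nii.gz` extension is an assumption inferred from "NIfTI-compressed format", not stated in the Data Record:

```python
from pathlib import Path

def scan_path(root, subject_id, session_id, sequence, index=None):
    """Build the expected path of a FOMO60K scan, e.g. sub_7/ses_1/t1.nii.gz.
    The .nii.gz extension is an assumption (dataset is NIfTI-compressed)."""
    name = sequence if index is None else f"{sequence}_{index}"
    return Path(root) / f"sub_{subject_id}" / f"ses_{session_id}" / f"{name}.nii.gz"
```

For example, `scan_path("FOMO60K", 7, 1, "t1", index=2)` yields the path for the second T1 scan of subject 7's first session under this naming scheme.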
# Usage Notes
The dataset can be used for pre-training with no further pre-processing. To ensure proper attribution and recognition of the source datasets, we kindly ask all users to cite the following papers: OASIS1$^{17}$, OASIS2$^{18}$, BraTS24$^{12,13,15,16,32}$, MSD Braintumor$^{19}$, IXI$^{20}$, MGH Wild$^{10}$, NKI$^{28}$.
# Code Availability
All preprocessing scripts are publicly available at https://github.com/fomo25/fomo60k-preprocessing to ensure reproducibility and facilitate application to new datasets.
# References
1. Deng, J. et al. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255 (IEEE, 2009).
2. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A. & Torralba, A. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis Mach. Intell. (2017).
3. Gokaslan, A. & Cohen, V. Openwebtext corpus. http://Skylion007.github.io/OpenWebTextCorpus (2019).
4. Mueller, S. G. et al. The Alzheimer’s disease neuroimaging initiative. Neuroimaging Clin. 15, 869–877 (2005).
5. Bycroft, C. et al. The UK Biobank resource with deep phenotyping and genomic data. Nature 562, 203–209 (2018).
6. Marek, K. et al. The Parkinson’s progression markers initiative (PPMI)–establishing a PD biomarker cohort. Annals clinical translational neurology 5, 1460–1477 (2018).
7. Casey, B. J. et al. The adolescent brain cognitive development (ABCD) study: imaging acquisition across 21 sites. Dev. cognitive neuroscience 32, 43–54 (2018).
8. Wald, T. et al. An OpenMind for 3D medical vision self-supervised learning. arXiv preprint arXiv:2412.17041 (2024).
9. FOMO25. https://fomo25.github.io/.
10. Iglesias, J. E. et al. SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Sci. advances 9, eadd3607 (2023).
11. Rorden, C., Absher, J. & Newman-Norlund, R. Stroke Outcome Optimization Project (SOOP), DOI: doi:10.18112/openneuro.ds004889.v1.1.2 (2024).
12. LaBella, D. et al. The asnr-miccai brain tumor segmentation (brats) challenge 2023: Intracranial meningioma. arXiv preprint arXiv:2305.07642 (2023).
13. Adewole, M. et al. The brain tumor segmentation (brats) challenge 2023: Glioma segmentation in sub-saharan africa patient population (brats-africa). ArXiv arXiv–2305 (2023).
14. Baid, U. et al. The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification (2021). 2107.02314.
15. Moawad, A. W. et al. The Brain Tumor Segmentation-Metastases (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI. ArXiv arXiv–2306 (2024).
16. Kazerooni, A. F. et al. The brain tumor segmentation (BraTS) challenge 2023: focus on pediatrics (CBTN-CONNECTDIPGR-ASNR-MICCAI BraTS-PEDs). ArXiv arXiv–2305 (2024).
17. Marcus, D. S. et al. Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. cognitive neuroscience 19, 1498–1507 (2007).
18. Marcus, D. S., Fotenos, A. F., Csernansky, J. G., Morris, J. C. & Buckner, R. L. Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J. cognitive neuroscience 22, 2677–2684 (2010).
19. Simpson, A. L. et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019).
20. IXI. http://brain-development.org/ixi-dataset/.
21. Nugent, A. C. et al. The NIMH Healthy Research Volunteer Dataset, DOI: doi:10.18112/openneuro.ds005752.v2.1.0 (2025).
22. Park, D. et al. The Dallas Lifespan Brain Study, DOI: doi:10.18112/openneuro.ds004856.v1.1.1 (2024).
23. Taylor, P. N. et al. The Imaging Database for Epilepsy And Surgery (IDEAS), DOI: doi:10.18112/openneuro.ds005602.v1.0.0 (2024).
24. Gibson, M. et al. Aphasia Recovery Cohort (ARC) Dataset, DOI: doi:10.18112/openneuro.ds004884.v1.0.1 (2023).
25. Seminowicz, D. et al. MBSR, DOI: doi:10.18112/openneuro.ds005016.v1.1.1 (2024).
26. Bilder, R. et al. UCLA Consortium for Neuropsychiatric Phenomics LA5c Study (2018).
27. Strike, L. T. et al. Queensland Twin Adolescent Brain (QTAB), DOI: doi:10.18112/openneuro.ds004146.v1.0.4 (2022).
28. Tobe, R. H. et al. A longitudinal resource for studying connectome development and its psychiatric associations during childhood. Sci. data 9, 300 (2022).
29. Snoek, L. et al. AOMIC-ID1000, DOI: 10.18112/openneuro.ds003097.v1.2.1 (2021).
30. Fischl, B. FreeSurfer. Neuroimage 62, 774–781 (2012).
31. Billot, B. et al. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Med. image analysis 86, 102789 (2023).
32. de Verdier, M. C. et al. The 2024 Brain Tumor Segmentation (BraTS) challenge: glioma segmentation on post-treatment MRI. arXiv preprint arXiv:2405.18368 (2024).
# Acknowledgements
This work has been supported by the Danish Data Science Academy, which is funded by the Novo Nordisk Foundation (grant number NNF21SA0069429) and Villum Fonden (grant number 40516), the Pioneer Centre for AI, Danish National Research Foundation (grant number P1), the Lundbeck Foundation (grant number R449-2023-1512), and the National Institutes of Health (grant numbers 1R01AG070988, 1RF1AG080371, 1RF1MH123195, 1UM1MH130981, 1R21NS138995, and 1R01EB031114).
Magnetic Resonance Imaging (MRI) scans from 13,900 sessions and 11,187
subjects, aggregated from 16 publicly available sources. The dataset includes
both clinical- and research-grade images, multiple MRI sequences, and a wide
range of anatomical and pathological variability, including scans with large
brain anomalies. Minimal preprocessing was applied to preserve the original
image characteristics while reducing barriers to entry for new users.
Accompanying code for self-supervised pretraining and finetuning is provided.
FOMO60K is intended to support the development and benchmarking of
self-supervised learning methods in medical imaging at scale. | [
"eess.IV",
"cs.CV"
] |
introduction of new emission caps. The caps for diesel passenger cars concerning pollutants such as carbon monoxide (CO), hydrocarbons and nitrogen oxides (HC+NO$_X$), and particulate matter (PM) are detailed in Table 15. For each emission group, the caps for these three pollutants are summed to obtain a weighting factor for the proxies. The passenger car data provides information for emission group 5 but does not differentiate between Euro 5a and 5b. In this case, the more lenient tier, Euro 5a, is considered to assign more emissions to cars in tier 5. The data also includes an emission group labeled "Other." Due to the lack of additional information from the data source regarding this category, it is treated as the Euro 1 group. Since this data was unavailable for Spain, "average daily traffic - light duty vehicles" is used as a proxy. Similarly, Table 16 outlines the FEC end-use sectors for which final proxies were available for Spain.
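The weighting scheme can be illustrated with a toy example. The cap values below are illustrative figures in g/km (based on commonly cited Euro diesel limits) and should be checked against Table 15 before any real use:

```python
# Illustrative emission caps for diesel passenger cars in g/km, for two groups.
# These are placeholder values; the authoritative figures are in Table 15.
CAPS = {
    "Euro 1":  {"CO": 2.72, "HC+NOx": 0.97, "PM": 0.140},
    "Euro 5a": {"CO": 0.50, "HC+NOx": 0.23, "PM": 0.005},
}

def weighting_factor(group: str) -> float:
    """Sum the three pollutant caps of an emission group to obtain its proxy weight."""
    return sum(CAPS[group].values())
```

With these figures, the Euro 1 group receives a much larger weight than Euro 5a, so older vehicles are assigned a correspondingly larger share of emissions.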
3a. GHG Emissions (NUTS0 data) to LAU. Tables 17, 18, and 19 present the proxy assignments in the case of emissions end-use sectors. The proxies are similar to those used for FEC. The differences arise from a different breakdown of the source sub-sectors.
In Germany, except for energy-intensive industries, all proxies correspond directly to relevant emission sources. As a result, most emissions end-use sector proxies are classified as having HIGH confidence (see Table 18). In contrast, Spain lacks detailed employment data and spatial data on residential and non-residential areas, limiting the availability of HIGH confidence proxies for several emissions end-use sectors (see Table 19).
# Data Records
The final energy consumption and emissions data at the LAU level for each sub-sector in Germany and Spain are accessible on Zenodo as .csv files. This repository also includes a readme file detailing the repository structure, column definitions, measurement units, and other relevant information.
The spatial disaggregation workflow is being expanded to encompass all 27 EU member states, utilizing Snakemake as a workflow manager. Data is regularly updated through the LOCALISED Data Sharing Platform API. The spatial proxies introduced in this work are also accessible through this platform, both at their original resolution and in their stepwise disaggregated form down to the LAU level. These disaggregated proxies may be of particular interest to the research data community.
# Technical Validation
This study introduces a spatial disaggregation workflow that requires technical validation at two critical stages: (1) the imputation of missing values in proxy data, and (2) the final disaggregation of FEC and emissions data. The validation of the missing value imputation was shown already in the Methods section.
Validating the disaggregated data is more challenging due to the absence of data on energy consumption and emissions at municipal level in official databases. This lack of data is the primary reason for performing spatial disaggregation of national data. Here, technical validation is carried out through the following approaches:
1. City-level inventories: Bottom-up inventories reported by selected Spanish cities are used for validation.
2. Cross-validation with disaggregated product: The results are evaluated against another spatially disaggregated dataset, namely EDGAR. While such datasets are useful for comparative analysis, none provide comprehensive coverage of all the sub-sectors addressed in this study at the municipal level. Furthermore, these alternative datasets are themselves the outcome of spatial disaggregation processes rather than being grounded in official statistics, and therefore warrant independent critical assessment.
3. Visual assessment: For sectors lacking direct reference datasets, visual inspections are performed to confirm that the spatial distribution of emissions aligns with the patterns of the proxy data used.
The results are discussed in the following subsections.
City-level inventories. We compare the disaggregated results with the FEC and emissions reported by seven Spanish cities (Barcelona, Madrid, Valencia, Valladolid, Vitoria-Gasteiz, Zaragoza, and Seville) as part of the Climate-Neutral and Smart Cities initiative$^{25}$. The climate action plans developed by these cities align in terms of baseline year (2019) and sectoral coverage$^{15}$. Accordingly, we used 2019 national values for disaggregation to ensure comparability with the reported bottom-up inventories for matching end-use sectors. Two sectors, buildings and road transport, could be aligned across datasets, and comparisons were therefore limited to these sectors.
Table 20 presents a comparison of the reported and disaggregated values for the building sector, which includes both household and commerce sectors. In most cases, the absolute deviation in FEC values remains below 20%, with notable exceptions in Zaragoza and Seville.
In the case of Zaragoza, further investigation revealed that the reported FEC corresponds to the provincial (NUTS3) level rather than the municipal level. This is confirmed by the municipality’s SECAP$^{26}$, where the building sector FEC is reported as 3,664,235 MWh. Our disaggregated estimate for Zaragoza municipality is 3,670,931 MWh, resulting in a deviation of only 0.18%. When disaggregated values are summed to represent the entire province, the resulting FEC is 6,187,351 MWh, which deviates by just 7.26% from the value reported in Table 20.
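The Zaragoza deviation figure quoted above can be reproduced with a one-line helper (an illustrative check, using the MWh values from the text):

```python
def pct_deviation(estimate: float, reference: float) -> float:
    """Absolute percentage deviation of a disaggregated estimate from a reported value."""
    return abs(estimate - reference) / reference * 100.0

# Zaragoza municipality, building-sector FEC in MWh (values from the text):
zaragoza = pct_deviation(3_670_931, 3_664_235)  # ≈ 0.18 %
```

The same function applies to every reported-versus-disaggregated comparison in Tables 20 and 21.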
In Seville, the discrepancy arises because only residential electricity consumption is reported under the building sector, excluding other significant components such as commercial and non-electrical energy consumption. This leads to a larger deviation from the disaggregated estimate. These findings highlight the value of top-down disaggregation approaches as a complementary tool to bottom-up inventories, particularly in identifying inconsistencies or omissions in local reporting.
Table 20 also presents a comparison of emission values. Although the same proxies were used for disaggregating both FEC and emissions, the disaggregated emission figures show greater deviation from those reported in bottom-up inventories compared to FEC values. This discrepancy can be attributed to differences in the energy mix between national and regional levels. For instance, according to Eurostat’s national energy balance data, the commerce sector uses 19.16% natural gas. In contrast, the share of natural gas usage in the commerce sector in the provinces of Araba, Bizkaia, and Gipuzkoa is 26.32%, 24.38%, and 19.71%, respectively$^{27}$. These variations in energy mix can significantly impact emission estimates. Therefore, when developing local inventories using top-down datasets, it is crucial to recalculate emissions if the regional energy mix diverges notably from the national average.
The local inventories also include data for the road transport sector; however, the reporting practices for this sector are often unclear and inconsistent. For instance, in Vitoria-Gasteiz, the reported FEC appears to cover only road transport, whereas in Barcelona, railway transport is also included. In contrast, Valencia’s plan explicitly states that only road transport within the city limits is considered. These inconsistencies in sectoral definitions and reporting scope likely contribute to the significant deviations observed in Table 21.
Cross-validation with EDGAR. EDGAR provides disaggregated emissions data by sector at the NUTS2 level for the year 2022. As a first step, we conduct a sectoral comparison to identify categories that align between datasets. Table 22 lists the sectors identified as comparable, along with the corresponding national totals reported by both EDGAR and Eurostat.
With the exception of the transport sector, all categories exhibit absolute deviations exceeding 20%. These discrepancies may arise from differences in the inclusion or exclusion of certain sub-sectors. For instance, chemical industry emissions are not reported for Germany in Eurostat, and thus are not considered here. Moreover, emissions from power plants appear to be included under the industrial sector in EDGAR, whereas they are excluded in our categorisation.
Given the relatively minor deviation observed in the transport sector at the national scale, a more granular comparison at the NUTS2 level was conducted. To facilitate this, emissions data from all LAU regions were aggregated to their corresponding NUTS2 regions. Figure 12 presents this comparison, alongside the associated percentage deviations.
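The LAU-to-NUTS2 aggregation used for this comparison can be sketched as follows (the function name and the region/LAU codes in the example are hypothetical):

```python
from collections import defaultdict

def aggregate_to_nuts2(lau_emissions, lau_to_nuts2):
    """Sum LAU-level emission values up to their parent NUTS2 regions."""
    totals = defaultdict(float)
    for lau, value in lau_emissions.items():
        totals[lau_to_nuts2[lau]] += value
    return dict(totals)
```

Each LAU contributes its full disaggregated value to exactly one NUTS2 region, so national totals are preserved by construction and any deviation at NUTS2 level reflects the spatial allocation rather than the aggregation step.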
Despite the low national-level discrepancies, significant regional variations are evident across both countries in the two datasets. These regional discrepancies can be attributed to several factors:
1. Propagation of national-level differences: Although national totals exhibit only minor deviations, these discrepancies are distributed across regions, potentially amplifying inconsistencies at the sub-national level.
2. Differences in regional coverage: In the case of Spain, the Canary Islands were excluded from our analysis due to the unavailability of suitable proxy datasets. Conversely, EDGAR includes this NUTS2 region but omits the autonomous cities of Ceuta and Melilla, which are accounted for in our dataset.
3. Choice and availability of spatial proxies: Our approach leverages openly available regional-level datasets as proxies for spatial disaggregation. While EDGAR also employs open data sources, the specific proxies used are not always transparent, making it difficult to isolate the precise causes of spatial discrepancies between the datasets.
Visual assessment for non-matching sub-sectors. Figure 13 illustrates the spatial distribution of the proxy data, namely "employment in food and beverage manufacturing" and "employment in manufacturing" for Germany and Spain, respectively. These proxies are used to disaggregate emissions in the food, beverages, and tobacco industries. The figure also displays the disaggregated emission values. It can be observed that the spatial distribution of the disaggregated values mirrors that of the proxy data, confirming that the disaggregation has been performed accurately. Figures pertaining to other sub-sectors are available on GitHub along with the code. The link can be found under the Code Availability section.
# Usage Notes
The quality of spatial disaggregation is inherently constrained by the availability and reliability of suitable spatial proxies, particularly with respect to the accuracy of reported values and the extent of missing data. To more precisely evaluate the accuracy of the disaggregation, further comparisons with bottom-up inventories are necessary. However, such inventories are currently limited in number and often exhibit internal inconsistencies. As more consistent and comprehensive bottom-up inventory data becomes available, additional technical validation will be conducted. Corresponding updates to the spatial proxies will also be implemented. All modifications will be documented in the codebase and reflected on the Data Sharing Platform. Further details regarding these resources are provided in the Code Availability section.
For this analysis, the 2019 LAU definitions were applied, acknowledging that LAU boundaries may change annually. Consequently, if updated LAU definitions are used in future analyses, data disaggregation may need to be reconfigured for alignment with these new regions. Additionally, this adaptation would necessitate reprocessing of any spatial proxy data collected at the LAU level. At higher spatial levels, such as NUTS3, boundary definitions typically update on a four-year cycle; for this work, we utilized the 2016 NUTS definitions.
The reference year for regional definitions is essential, as is the year of data records, both of which can influence disaggregation outcomes. Here, emission and FEC data were collected for 2022. Proxy data, however, come from multiple years, with a priority on using the most recent data available from each source. For instance, population data from 2019 aligns with the 2019 LAU definitions, while the Corine Land Cover dataset, last updated in 2018, provides land cover information. Future updates to this work will include re-running the workflow with the latest available data to ensure accuracy and relevance.
Furthermore, Eurostat offers emissions and Final Energy Consumption (FEC) data for all sub-sectors in both Germany and Spain, with the exception of FEC data for the chemical industries in Germany. This specific data has not been available for any of the years examined. Consequently, we have not disaggregated this value in our work. We will continue to monitor Eurostat for any future updates.
# Code Availability
The spatial disaggregation workflow developed for this work is implemented in Python and is available on GitHub under the repository EnergyEmissionsRegio. The core functions can be accessed in the "energyemissionsregio" directory, while the sections on missing value imputation and disaggregation are in the "experiments" directory. This workflow is being expanded to include all 27 EU member states and is regularly updated in the ETHOS.zoomin repository on GitHub.
The disaggregation process leverages spatial proxies, which are collected, processed, and stored in a database, where the disaggregated data is also saved. This data can be accessed through the LOCALISED Data Sharing Platform API. Additionally, a Python API client for accessing this data, named LOCALISED-Datasharing-API-Client, is available on GitHub.
# References
1. Deb, S., Tammi, K., Kalita, K. & Mahanta, P. Review of recent trends in charging infrastructure planning for electric vehicles. Wiley Interdiscip. Rev. Energy Environ. 7, e306 (2018).
2. Valencia. Valencia climate city contract. NetZeroCities https://netzerocities.app/resource-4065 (2025).
3. Kona, A., Bertoldi, P., Monforti-Ferrario, F., Rivas, S. & Dallemand, J. F. Covenant of mayors signatories leading the way towards 1.5 degree global warming pathway. Sustain. Cities Soc. 41, 568–575 (2018).
4. Crippa, M. et al. Gridded emissions of air pollutants for the period 1970–2012 within EDGAR v4.3.2. Earth Syst. Sci. Data 10, 1987–2013 (2018).
5. Moran, D. et al. Estimating CO2 emissions for 108,000 European cities. Earth Syst. Sci. Data Discuss. 2021, 1–23 (2021).
6. Valencia, V. H., Levin, G. & Ketzel, M. Downscaling global anthropogenic emissions for high-resolution urban air quality studies. Atmospheric Pollut. Res. 13, 101516 (2022).
7. Risch, S. et al. Scaling energy system optimizations: Techno-economic assessment of energy autonomy in 11 000 german municipalities. Energy Convers. Manag. 309, 118422 (2024).
8. European Commission. Joint Research Centre. & IEA. GHG emissions of all world countries. (Publications Office, LU, 2024).
9. Directorate-General for Communication. Spain - Final updated NECP 2021-2030 (submitted 2024) - European Commission, https://commission.europa.eu/publications/spain-final-updated-necp-2021-2030-submitted-2024_en (2025).
10. European Commission. Eurostat. Publ. Off. Eur. Union https://ec.europa.eu/eurostat/en/web/main/data/database (2024).
11. Patil, S., Pflugradt, N., Weinand, J. M., Stolten, D. & Kropp, J. A systematic review of spatial disaggregation methods for climate action planning. Energy AI 17, 100386, https://doi.org/10.1016/j.egyai.2024.100386 (2024).
12. Copernicus Land Monitoring Service. Corine land cover 2018 (vector/raster 100 m), Europe, 6-yearly. Publ. Off. Eur. Union https://doi.org/10.2909/960998c1-1870-4e82-8051-6485205ebbac (2024).
13. OpenStreetMap contributors. OpenStreetMap [data set]. OpenStreetMap Foundation openstreetmap.org (2024).
14. Chen, T. & Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016).
15. NetZeroCities, https://netzerocities.app/ (2025).
16. Crippa, M. et al. Insights into the spatial distribution of global, national, and subnational greenhouse gas emissions in the emissions database for global atmospheric research (EDGAR v8.0). Earth Syst. Sci. Data 16, 2811–2830 (2024).
17. Eurostat. Complete energy balances. Publ. Off. Eur. Union https://doi.org/10.2908/NRG_BAL_C (2024).
18. IEA. Energy technology transitions for industry: strategies for the next industrial revolution (OECD Publishing, 2009).
19. Eurostat. Greenhouse gas emissions by source sector. Publ. Off. Eur. Union https://doi.org/10.2908/ENV_AIR_GGE (2024).
20. Fleiter, T. Documentation on excess heat potentials of industrial sites including open data file with selected potentials (version 2). Zenodo https://doi.org/10.5281/zenodo.4785411 (2020).
21. Global Energy Monitor contributors. Global Steel Plant Tracker - Global Energy Monitor, https://globalenergymonitor.org/projects/global-steel-plant-tracker/ (2024).
22. GENESIS-Online. Die Datenbank des Statistischen Bundesamtes [The database of the Federal Statistical Office]. GENESIS-Online https://www-genesis.destatis.de/genesis/online (2024).
23. Eustat. Basque statistical institute, https://en.eustat.eus/indice.html (2025).
24. Wikipedia contributors. European emission standards — Wikipedia, The Free Encyclopedia (2024). [Online; accessed 13-August-2024].
25. European Commission. Climate-neutral and smart cities - European Commission, https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-europe/eu-missions-horizon-europe/climate-neutral-and-smart-cities_en (2025).
26. Plan de Acción por el Clima y la Energía Sostenible del Municipio de Zaragoza 2030 [Climate and Sustainable Energy Action Plan of the Municipality of Zaragoza 2030], https://www.zaragoza.es/contenidos/medioambiente/2021-PACES-Zaragoza-2030.pdf (2025).
27. Energy Agency of the Basque Government, https://www.eve.eus/Conoce-la-Energia/La-energia-en-Euskadi/Datos-energeticos-Euskadi?lang=es-es (2025).
28. Eurostat-GISCO. Local administrative units (LAU). Publ. Off. Eur. Union https://ec.europa.eu/eurostat/web/gisco/geodata/statistical-units/local-administrative-units (2024).
29. Pezzutto, S., Zambotti, S., Croce, S., Zambelli, P. et al. Hotmaps project D2.3 WP2 report – open data set for the EU28, https://www.hotmaps-project.eu (2018).
30. EEA. European air quality data (interpolated data). EEA Datahub 938bea70-07fc-47e9-8559-8a09f7f92494 (2023).
31. Spanish Statistical Office, https://www.ine.es/en/ (2024).
32. EuroGeographics. EuroRegionalMap. Natl. Mapp. Cadastral Agencies (NMCAs) https://www.mapsforeurope.org/datasets/euro-regional-map (2024).
33. Eurostat. Gross domestic product (GDP) at current market prices by NUTS 3 region. Publ. Off. Eur. Union https://doi.org/10.2908/NAMA_10R_3GDP (2024).
34. Eurostat. National road freight transport by region of loading (NUTS 3) and type of goods (t) - annual data (from 2008 onwards). Publ. Off. Eur. Union https://doi.org/10.2908/ROAD_GO_NA_RL3G (2024).
35. Eurostat. Employment (thousand persons) by NUTS 3 region. Publ. Off. Eur. Union https://doi.org/10.2908/NAMA_10R_3EMPERS (2024).
36. ESPON. ESPON 2020 data. ESPON Database Portal https://database.espon.eu/doc/doc.html (2020).
37. EURO-CORDEX: new high-resolution climate change projections for European impact research. CORDEX initiative (2020).
38. Gilbert, M. et al. Global distribution data for cattle, buffaloes, horses, sheep, goats, pigs, chickens and ducks in 2010. Sci. data 5, 1–11 (2018).
39. Bundesagentur für Arbeit. Datenbanken Beschäftigungsstatistik [Employment statistics databases]. Bundesagentur für Arbeit https://www.arbeitsagentur.de/ (2024).
40. Data on average traffic intensity on the road network, https://www.dataestur.es/en/transport/road-traffic/ (2025).
41. Eurostat. Stock of vehicles by category and NUTS 2 region. Publ. Off. Eur. Union https://doi.org/10.2908/TRAN_R_VEHST (2024).
42. Eurostat. Air transport of freight by NUTS 2 region. Publ. Off. Eur. Union https://doi.org/10.2908/TRAN_R_AVGO_NM (2024).
43. Eurostat. Air transport of passengers by NUTS 2 region. Publ. Off. Eur. Union https://doi.org/10.2908/TGS00077 (2024).
# Acknowledgements
This work was developed as part of the project LOCALISED — Localised decarbonization pathways for citizens, local administrations and businesses to inform for mitigation and adaptation action. This project received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101036458. We extend our sincere gratitude to the entire LOCALISED project team for their invaluable contributions. We also wish to acknowledge the financial support that has made this project possible. This work was also supported by the Helmholtz Association as part of the program “Energy System Design”. Special thanks are due to our colleagues at the Forschungszentrum Jülich for their diligent proofreading and insightful feedback, which have significantly enhanced the quality of our work.
Disclaimer: This work reflects the authors’ views. The European Commission is not responsible for any use that may be made of the information it contains.
# Author contributions statement
S.P. conducted the experiments, analysed the results, and wrote the manuscript; N.P. conceived the experiment and supervised the work; J.M.W. provided the resources to conduct the work and supervised the work; J.K. supervised the work and was involved in funding acquisition; and D.S. provided the resources to conduct the work. All authors reviewed the manuscript.
# Competing interests
The authors declare no competing interests.
# Figures & Tables
Figure 1. Overview of the workflow. Data collection: national-level greenhouse gas emissions and final energy consumption data, and proxy data at NUTS3, NUTS2, and LAU level. Proxy data missing value imputation: training of XGBoost models to predict missing values, validation of the results, and evaluation of the cross-country applicability of the models. Step-wise spatial disaggregation: NUTS3 data to LAU using LAU data as proxy; NUTS2 data to LAU using disaggregated NUTS3 and LAU data as proxy; emissions and energy consumption data to LAU using disaggregated NUTS3 and NUTS2 data and LAU data as proxy. Data validation: comparison of the disaggregated data with bottom-up inventories and other spatially disaggregated datasets.
[Correlation heatmaps: absolute correlations between utilized agricultural area and different predictors, and between the number of passenger cars per emission group (Euro 1 to Euro 6d and other) and different predictors.]
Table 1. LAU-level proxy data collected from Eurostat, Hotmaps, and The National Statistics Institute of Spain. The variables highlighted in blue are available only for Spain.
Figure 2. The spatial hierarchy in Germany and Spain, showing the availability of various proxy datasets from public data sources at different spatial levels. The data sources highlighted in orange and blue provide data only for Germany and Spain, respectively. Proxy data undergoes a stepwise spatial disaggregation to achieve final proxies at the LAU level. Emissions and FEC data, available at the NUTS0 level from Eurostat, is then disaggregated to LAU based on these final proxies.
Figure 3. Breakdown of end-use FEC sectors as reported in Eurostat, with Germany at the top and Spain at the bottom.
Figure 4. Breakdown of end-use emission sectors as reported in Eurostat, with Germany at the top and Spain at the bottom. Note: Emissions from the chemical industry are not reported for Germany on Eurostat and are therefore absent from the figure. Consequently, emissions from energy-intensive industries appear lower than those from non-energy-intensive industries.
Figure 5. The distribution and number of iron and steel industries as reported by three open databases: Global Steel Plant Tracker, Hotmaps, and sEEnergies. The figure highlights the differences in coverage among these sources, with Hotmaps providing the most comprehensive dataset.
Figure 7. The absolute correlations between number of commercial and service companies and average daily traffic by light duty vehicles, and different predictors at NUTS3 level. The figure is divided into two sections: the top half displays the least correlated variables, while the bottom half highlights the most correlated ones. For imputing missing values, predictors with correlations of at least 0.1 are used in one set of experiments, while those with correlations of at least 0.5 are considered in another.
Figure 8. The absolute correlations between employment data and different predictors at NUTS3 level. The figure is divided into two sections: the top half displays the least correlated variables, while the bottom half highlights the most correlated ones. For imputing missing values, predictors with correlations of at least 0.1 are used in one set of experiments, while those with correlations of at least 0.5 are considered in another.
Figure 10. The absolute correlations between the building living area and different predictors at NUTS3 level. The figure is divided into two sections: the top half displays the least correlated variables, while the bottom half highlights the most correlated ones. For imputing missing values, predictors with correlations of at least 0.1 are used in one set of experiments, while those with correlations of at least 0.5 are considered in another.
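The correlation-threshold rule used for selecting imputation predictors in the captions above can be sketched in a few lines. A pure-Python illustration with hypothetical synthetic data (variable names and values are invented, not drawn from the actual proxy datasets):

```python
import math

def pearson_corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_predictors(target, candidates, threshold):
    """Keep predictors whose absolute correlation with the target meets the threshold."""
    return {name: abs(pearson_corr(values, target))
            for name, values in candidates.items()
            if abs(pearson_corr(values, target)) >= threshold}

# Hypothetical NUTS3-level data.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
candidates = {
    "population": [10.0, 21.0, 29.0, 41.0, 50.0],  # strongly correlated
    "area":       [5.0, 5.1, 4.9, 5.0, 5.2],       # weakly correlated
}

loose = select_predictors(target, candidates, 0.1)   # threshold >= 0.1
strict = select_predictors(target, candidates, 0.5)  # threshold >= 0.5
```

With these toy numbers, both candidates pass the 0.1 threshold but only the strongly correlated one passes 0.5, mirroring the two experimental settings.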
Figure 11. [Top] Results of training an XGBoost model to predict utilized agricultural area in Spain at the LAU level, and applying this model to estimate values for German LAU regions. The predicted data is compared with the available utilized agricultural area data at the NUTS1 level in Germany. The results indicate that the model’s predictions closely align with the actual data, with minimum and maximum deviations of 9.34 and 5106.56 square kilometers, respectively. [Bottom] Results of training an XGBoost model to predict passenger car stock in Germany at the NUTS3 level, and applying this model to estimate values for Spanish NUTS3 regions. The predicted data is compared with the available data for the 3 NUTS3 regions in the Basque Country, Spain. The results indicate that the model’s predictions deviate significantly from the actual data, with minimum and maximum deviations of 108957.0 and 276023.0 cars, respectively.
Figure 12. Comparison of transport emissions data available at NUTS2 level from the EDGAR database and the disaggregated values, for Germany and Spain.
Figure 13. [top-left] "Employment in food and beverage manufacturing" for Germany [top-right] "Employment in manufacturing" in Spain. These proxies are used to disaggregate the emissions in food, beverages, and tobacco industries. [bottom] Disaggregated emission values.
Table 2. LAU-level proxy data collected from Corine Land Cover, Eurogeographics, and OpenStreetMap
Table 3. Number of different industries as reported by Hotmaps and sEEnergies open databases.
Table 4. NUTS3-level proxy data collected from different data sources. The variables highlighted in orange and blue are available only for Germany and Spain, respectively.
Table 5. NUTS2-level proxy data collected from Eurostat.
Table 6. Number of missing values per variable with missing values. The variables highlighted in orange and blue are available only for Germany and Spain, respectively. NOTE: The number of data records at LAU-level in Germany and Spain are 11087 and 8043, respectively. The number of data records at NUTS3-level in Germany and Spain are 401 and 52, respectively.
Table 7. The RMSE and R-squared scores on training and validation data. The colored cells indicate the best performing model between the two: predictors with correlation threshold $\geq 0.1$ and predictors with correlation threshold $\geq 0.5$. The color of the cell indicates the confidence rating of the imputed values. See Table 8 for confidence rating details.
Table 8. The thresholds for the R-squared score and the associated confidence level for imputed values.
Table 9. The potential proxies for disaggregating each NUTS3 dataset commonly collected for both Germany and Spain are presented. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 10. The potential proxies for disaggregating each NUTS3 dataset collected only for Germany are presented. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 11. The potential proxies for disaggregating each NUTS3 dataset collected only for Spain are presented. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 12. The potential proxies for disaggregating each NUTS2 dataset commonly collected for both Germany and Spain are presented. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 13. FEC end-use sectors with final proxies commonly available for both Germany and Spain. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 14. FEC end-use sectors with final proxies available for Germany. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 15. Emission caps for different air pollutants, per emission group.
Table 16. FEC end-use sectors with final proxies available for Spain. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 18. GHG emissions end-use sectors with final proxies available for Germany. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 19. GHG emissions end-use sectors with final proxies available for Spain. The final chosen proxy is highlighted, with color coding indicating the confidence level. Refer to Table 8 for the confidence level color scheme.
Table 20. Comparison of FEC and emissions values reported by seven Spanish cities with the disaggregated values for the building sector. NOTE: Building sector includes households and commerce sectors.
Table 21. Comparison of FEC and emissions values reported by seven Spanish cities with the disaggregated values for the road transport sector.
Table 22. Comparison of sectoral values reported by the EDGAR database and Eurostat values at NUTS0 level, to determine matching sectors. NOTE: The unit of measure of the values is kt CO2 equivalent.

High-resolution energy consumption and emissions datasets are essential for localized policy-making, resource optimization, and climate action planning. They enable municipalities to monitor mitigation strategies and foster engagement among governments, businesses, and communities. However, smaller municipalities often face data limitations that hinder tailored climate strategies. This study generates detailed final energy consumption and emissions data at the local administrative level for Germany and Spain. Using national datasets, we apply spatial disaggregation techniques with open data sources. A key innovation is the application of XGBoost for imputing missing data, combined with a stepwise spatial disaggregation process incorporating district- and province-level statistics. Prioritizing reproducibility, our open-data approach provides a scalable framework for municipalities to develop actionable climate plans. To ensure transparency, we assess the reliability of imputed values and assign confidence ratings to the disaggregated data.
# 1. Introduction
Distribution matching (DM) is a versatile domain-invariant representation learning technique that has been applied to tasks such as fair classification, domain adaptation, and domain translation. Non-parametric DM methods struggle with scalability, and adversarial DM approaches suffer from instability and mode collapse. While likelihood-based methods are a promising alternative, they often impose unnecessary biases through fixed priors or require explicit density models (e.g., flows) that can be challenging to train. We address this limitation by introducing a novel approach to training likelihood-based DM using expressive score-based prior distributions. Our key insight is that gradient-based DM training only requires the prior’s score function—not its density—allowing us to train the prior via denoising score matching. This approach eliminates biases from fixed priors (e.g., in VAEs), enabling more effective use of geometry-preserving regularization, while avoiding the challenge of learning an explicit prior density model (e.g., a flow-based prior). Our method also demonstrates better stability and computational efficiency compared to other diffusion-based priors (e.g., LSGM). Furthermore, experiments demonstrate superior performance across multiple tasks, establishing our score-based method as a stable and effective approach to distribution matching. Source code available at https://github.com/inouye-lab/SAUB.
As machine learning (ML) continues to advance, trustworthy ML systems not only require impressive performance but also properties such as fairness, robustness, causality, and explainability. While scaling data and models can improve performance (Kaplan et al., 2020), simple scaling may not address these issues. For example, historical bias or imbalanced data can cause even well-trained models to produce unfair outcomes, requiring additional constraints to mitigate such biases. Distribution matching (DM), also known as distribution alignment or domain-invariant representation learning, has emerged as a promising approach to address these challenges. By minimizing the divergence between latent representations, distribution matching can introduce additional objectives to ML systems, enabling them to learn representations that are fair, robust, and causal. This approach has been successfully applied to a wide range of problems, including domain adaptation (Ganin et al., 2016; Zhao et al., 2018), domain generalization (Muandet et al., 2013), causal discovery (Spirtes & Zhang, 2016), and fairness-aware learning (Zemel et al., 2013).
DM methods can be broadly categorized into parametric and non-parametric approaches. Non-parametric methods, such as kernel Maximum Mean Discrepancy (MMD) (Louizos et al., 2015; Zellinger et al., 2017) and Sinkhorn divergence (Feydy et al., 2019), operate directly on sample distributions without assuming a specific parametric form. Parametric DM methods, on the other hand, rely on modeling distributions with explicit parameters and can be further divided into adversarial and non-adversarial likelihood-based approaches. Adversarial methods, exemplified by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), frame distribution matching as a minimax game between a generator and a discriminator. While highly expressive and capable of capturing complex data distributions, these methods suffer from well-documented issues such as training instability, mode collapse, and sensitivity to hyperparameters (Lucic et al., 2018; Kurach et al., 2019; Farnia & Ozdaglar, 2020; Nie & Patel, 2020; Wu et al., 2020; Han et al., 2023). In contrast, likelihood-based approaches leverage probabilistic models such as variational autoencoders (VAEs) (Kingma et al., 2019) or normalizing flows (Papamakarios et al., 2021) to match distributions by maximizing the likelihood of observed data under the model, with relatively better training stability. However, normalizing flows are restricted by the requirement that the latent dimension must have the same size as the input dimension. This constraint limits their flexibility in modeling complex latent representations and can hinder their ability to capture lower-dimensional latent structures effectively (Cho et al., 2022b). On the other hand, VAEs are valued for being able to capture meaningful and structured representations (Chen et al., 2019; Burgess et al., 2018) in a lower dimension. Gong et al.
(2024) proposed to use VAEs for the DM task but imposed a simple learnable prior distribution (e.g., Gaussian, mixture of Gaussians), which aligned poorly with the true data distribution and consequently led to suboptimal performance. The need for a more expressive learnable prior distribution is also important when enforcing geometry-preserving constraints, as these constraints ensure that the latent space retains the intrinsic geometry of the data (Uscidda et al., 2024; Nakagawa et al., 2023; Hahm et al., 2024; Lee et al., 2022; Horan et al., 2021; Gropp et al., 2020; Chen et al., 2020), which can facilitate disentangled representations in the latent space and consequently improve downstream task performance.
In order to obtain an expressive prior, our key insight is that, for gradient-based training, likelihood-based DM methods do not require computation of the prior density directly. Instead, they only require the gradient of the log probability of the prior distribution—commonly referred to as the score function. Building on this observation, we propose a novel approach that models the prior density through its score function, precisely the computation needed for training. The score function can be efficiently estimated using denoising score matching techniques, enabling us to bypass the challenges associated with learning explicit prior densities. Another crucial insight stems from recognizing that DM methods do not inherently require generation capabilities; instead, the prior distribution is only used to form a proper bound for divergence measures during training. This allows us to model the prior using score-based models, where sampling the prior is computationally expensive but score training and inference remain efficient and stable. We demonstrate through extensive experiments that our simple yet effective algorithm significantly improves training stability and achieves superior DM results across various benchmarks. Finally, our framework can also integrate semantic information from pretrained models, such as CLIP (Radford et al., 2021), to capture task-relevant features that reflect higher-level semantics. By aligning the latent space with these semantic relationships, our method can ensure that the representations are not only geometrically sound but also contextually meaningful for downstream tasks, such as classification and domain adaptation.
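The first insight above can be checked numerically in a toy setting: for a reparameterized Gaussian encoder $z = \mu + \sigma\varepsilon$ and a standard normal prior $Q$, the encoder gradient $\nabla_\mu \mathbb{E}_q[-\log Q(z)]$ depends on $Q$ only through its score $s(z) = \nabla_z \log Q(z) = -z$. A pure-Python Monte Carlo sketch (an illustration of the identity, not the paper's implementation):

```python
import random

random.seed(0)

# Toy setup (hypothetical values): encoder z = mu + sigma * eps, eps ~ N(0, 1);
# prior Q = N(0, 1), whose score is s(z) = d/dz log Q(z) = -z.
mu, sigma = 0.7, 1.0
N = 200_000
zs = [mu + sigma * random.gauss(0.0, 1.0) for _ in range(N)]

# Analytic gradient: E[-log Q(z)] = 0.5 * (mu^2 + sigma^2) + const, so d/dmu = mu.
analytic_grad = mu

# Score-based estimate: d/dmu E[-log Q(z)] = E[-s(z) * dz/dmu] = E[z], since
# -s(z) = z and dz/dmu = 1.  Only the score of Q is needed, never its density.
score_based_grad = sum(zs) / N

print(abs(score_based_grad - analytic_grad))  # small Monte Carlo error
```

The same reparameterization argument carries over when the known Gaussian score is replaced by a learned score network.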
We summarize our contributions in the field of DM as follows:

• We introduce the Score Function Substitution (SFS) trick that computes exact variational encoder gradients using only the prior’s score function, thereby circumventing the need for explicit density evaluation.
• Leveraging SFS, we develop a novel, stable, and efficient alternating optimization algorithm for likelihood-based DM with expressive score-based priors.
• Our method achieves strong performance across diverse downstream tasks, including fair classification, domain adaptation, and domain translation.
• We further demonstrate that our approach enables the effective application of geometry-preserving regularization (Nakagawa et al., 2023), yielding additional performance improvements when a semantically rich latent space is available for the task.
# 2. Preliminaries
Variational Alignment Upper Bound (VAUB) The paper by Gong et al. (2024) presents a novel approach to distribution matching for learning invariant representations. The authors propose a non-adversarial method based on Variational Autoencoders (VAEs), called the VAE Alignment Upper Bound (VAUB). Specifically, they introduce alignment upper bounds for distribution matching that generalize the Jensen-Shannon Divergence (JSD) with VAE-like objectives. They formalize the distribution matching problem with the following VAUB objective:
$$
\mathrm{VAUB}(q(z|x,d)) = \min_{p(z)} \mathbb{E}_{q(x,z,d)}\left[-\log\frac{p(x|z,d)\,p(z)}{q(z|x,d)}\right] + C,
$$
where $q(z|x,d)$ is the probabilistic encoder, $p(x|z,d)$ is the decoder, $p(z)$ is the shared prior, and $C$ is a constant independent of model parameters. The method ensures that the distribution matching loss is an upper bound of the Jensen-Shannon divergence (JSD), up to a constant. This non-adversarial approach overcomes the instability of adversarial training, offering a robust, stable alternative for distribution matching in fairness, domain adaptation, and robustness applications. Empirical results show that VAUB and its variants outperform traditional adversarial methods, particularly in cases where model invertibility and dimensionality reduction are required.
Score-based Models Score-based models (Song et al., 2021c) are a class of diffusion models that learn to generate data by denoising noisy samples through iterative refinement. Rather than directly modeling the data distribution $p(x)$, as done in many traditional generative models, score-based models focus on learning the gradient of the log-probability density of the target distribution, known as the score function. To learn the score function, Vincent (2011) and Song & Ermon (2019) propose training on the Denoising Score Matching (DSM) objective. Essentially, data points $x$ are perturbed with various levels of Gaussian noise, resulting in noisy observations $\tilde{x}$. The score model is then trained to match the score of the perturbed distribution. The DSM objective is defined as follows:
$$
\mathrm{DSM} = \frac{1}{2L}\,\mathbb{E}\left[\left\|s_{\phi}(\tilde{x},\sigma_{i}) - \nabla_{\tilde{x}}\log q_{\sigma_{i}}(\tilde{x}|x)\right\|_{2}^{2}\right],
$$
where $q_{\sigma_i}(\tilde{x}|x)$ represents the perturbed data distribution of $p_{\mathrm{data}}(x)$, $L$ is the number of noise scales $\{\sigma_i\}_{i=1}^{L}$, and the expectation is over the distribution $p_{\mathrm{data}}(x)\,q(\sigma_i)\,q_{\sigma_i}(\tilde{x}|x)$. When the optimal score network $s_{\phi}^{*}$ is found, $s_{\phi}^{*}(x) = \nabla_x \log q_{\sigma}(x)$ almost surely (Vincent, 2011; Song & Ermon, 2019) and approximates $\nabla_x \log p_{\mathrm{data}}(x)$ when the noise is small ($\sigma \approx 0$). Since score-based models learn the gradient of the distribution rather than the distribution itself, generating samples involves multiple iterative refinement steps. These steps typically leverage techniques such as Langevin dynamics, which iteratively updates the sample using the learned score function (Song & Ermon, 2019).
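As a sanity check on the DSM objective, consider 1-D data $x \sim \mathcal{N}(0,1)$, a single noise level $\sigma$, and a linear score model $s(\tilde{x}) = a\tilde{x}$. Least-squares minimization of the DSM objective against the target $\nabla_{\tilde{x}}\log q_\sigma(\tilde{x}|x) = (x-\tilde{x})/\sigma^2$ should recover the score of the perturbed marginal $\mathcal{N}(0, 1+\sigma^2)$, i.e. $a \approx -1/(1+\sigma^2)$. A pure-Python sketch (illustrative only, not the authors' code):

```python
import random

random.seed(0)

sigma = 0.5
N = 100_000

# Data x ~ N(0, 1); perturbed observations x_tilde = x + sigma * eps.
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
xts = [x + sigma * random.gauss(0.0, 1.0) for x in xs]

# DSM regression target: grad_{x_tilde} log q_sigma(x_tilde | x) = (x - x_tilde) / sigma^2.
targets = [(x - xt) / sigma ** 2 for x, xt in zip(xs, xts)]

# Linear score model s(x_tilde) = a * x_tilde; the DSM minimizer is the
# ordinary least-squares fit of the targets on x_tilde.
a = sum(xt * t for xt, t in zip(xts, targets)) / sum(xt * xt for xt in xts)

# The perturbed marginal is N(0, 1 + sigma^2), whose score at x_tilde is
# -x_tilde / (1 + sigma^2), so a should approach -1 / (1 + sigma^2) = -0.8.
expected = -1.0 / (1.0 + sigma ** 2)
```

In practice $s_\phi$ is a neural network trained by stochastic gradient descent over many noise scales, but the fixed point is the same: the score of the perturbed marginal.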
Gromov-Wasserstein Distance The Optimal Transport (OT) problem seeks the most efficient way to transform one probability distribution into another, minimizing transport cost. Given two probability distributions $\mu$ and $\nu$ over metric spaces $(X, d_X)$ and $(Z, d_Z)$, the OT problem is:
$$
\operatorname*{inf}_{\pi \in \Pi(\mu,\nu)} \mathbb{E}_{(x,z)\sim\pi}[d(x,z)]
$$
where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$, and $d(x,z)$ is a cost function, often the Euclidean distance. The Gromov-Wasserstein (GW) distance extends OT to compare distributions on different metric spaces by preserving their relative structures, not absolute distances. For distributions $\mu$ and $\nu$ over spaces $(X, d_X)$ and $(Z, d_Z)$, the GW distance is:
$$
\begin{aligned}
\mathrm{GW}(\mu,\nu) &= \operatorname*{inf}_{\pi \in \Pi(\mu,\nu)} \mathbb{E}_{(x,z)\sim\pi,\,(x',z')\sim\pi}\big[\|d_X(x,x') - d_Z(z,z')\|^{2}\big] \\
&= \operatorname*{inf}_{\pi \in \Pi(\mu,\nu)} \mathrm{GWCost}(\pi(x,z)) \qquad (4)
\end{aligned}
$$
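The GWCost of a given coupling can be estimated by Monte Carlo from paired samples; a minimal 1-D sketch (our own illustration, with `gw_cost` a hypothetical helper) shows that translations, being isometries, incur essentially zero cost, while scalings distort pairwise distances:

```python
import numpy as np

rng = np.random.default_rng(0)

def gw_cost(x, z):
    """Monte Carlo estimate of GWCost(pi) from paired samples (x_i, z_i) ~ pi:
    independent pairs (x', z') are drawn by permuting the sample indices."""
    idx = rng.permutation(len(x))
    d_x = np.abs(x - x[idx])  # pairwise distances in X (1-D Euclidean)
    d_z = np.abs(z - z[idx])  # pairwise distances in Z
    return np.mean((d_x - d_z) ** 2)

x = rng.normal(size=10_000)
print(gw_cost(x, x + 3.0))  # ~0: translation preserves all pairwise distances
print(gw_cost(x, 2.0 * x))  # > 0: scaling changes pairwise distances
```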
# 3. Methodology
# 3.1. Training Objective for Distribution Matching with a Score-based Prior
We aim to optimize VAUB (Gong et al., 2024) as our distribution matching objective:
$$
\mathcal { L } _ { \mathrm { D M } } = \mathcal { L } _ { \mathrm { V A U B } } = \sum _ { d } \frac { 1 } { \beta } \mathbb { E } _ { q _ { \theta } } \left[ - \log \frac { p _ { \varphi } ( x | z , d ) } { q _ { \theta } ( z | x , d ) ^ { \beta } } Q _ { \psi } ( z ) ^ { \beta } \right] ,
$$
where $d$ represents the domain $\forall d \in [ 1 , \cdots , D ]$ (e.g., different class datasets or modalities), and $\beta \in [ 0 , 1 ]$ acts as a regularizer controlling the mutual information between the latent variable $z$ and the data $x$ . $q _ { \theta } ( z | x , d )$ and $p _ { \varphi } ( x | z , d )$ are the $d$ -th domain probabilistic encoder and decoder, respectively, and $Q _ { \psi } ( z )$ is a prior distribution that is invariant to domains (Gong et al., 2024). For notational simplicity, we ignore the regularization loss and we assume $\beta = 1$ . We can split the VAUB objective into three components: reconstruction loss, entropy loss, and cross entropy loss.
$$
\mathcal{L}_{\mathrm{VAUB}} \triangleq \sum_{d} \Big\{ \underbrace{\mathbb{E}_{q_{\theta}}\big[-\log p_{\varphi}(x|z,d)\big]}_{\text{reconstruction term}} - \underbrace{\mathbb{E}_{q_{\theta}}\big[-\log q_{\theta}(z|x,d)\big]}_{\text{entropy term}} + \underbrace{\mathbb{E}_{q_{\theta}}\big[-\log Q_{\psi}(z)\big]}_{\text{cross-entropy term}} \Big\}.
$$
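For intuition, the three terms can be estimated by Monte Carlo in a toy, fully Gaussian setup (1-D, linear means; all parameter values below are arbitrary stand-ins, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, std):
    return -0.5 * np.log(2 * np.pi * std**2) - (v - mean) ** 2 / (2 * std**2)

x = rng.normal(1.0, 0.5, size=10_000)          # a batch from one domain d
mu_e, s_e = 0.8 * x, 0.3                       # encoder q(z|x,d) = N(0.8 x, 0.3^2)
z = mu_e + s_e * rng.standard_normal(x.shape)  # reparameterized posterior samples
mu_d, s_d = 1.2 * z, 0.4                       # decoder p(x|z,d) = N(1.2 z, 0.4^2)

recon    = np.mean(-log_normal(x, mu_d, s_d))  # E_q[-log p(x|z,d)]  (reconstruction)
entropy  = np.mean(-log_normal(z, mu_e, s_e))  # E_q[-log q(z|x,d)]  (entropy)
cross_en = np.mean(-log_normal(z, 0.0, 1.0))   # E_q[-log Q(z)], prior Q = N(0, 1)
vaub = recon - entropy + cross_en
# cross_en - entropy estimates E_x[KL(q(.|x) || Q)] >= 0, as expected.
```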
The prior distribution in the cross-entropy term aligns with the encoder’s posterior but is often restricted to simple forms like Gaussians or Gaussian mixtures (Gong et al., 2024), which can distort the encoder’s transformation function (Uscidda et al., 2024). To address this, we propose an expressive, learnable prior that adaptively mitigates such distortions, better capturing the underlying data structure.
Modeling an arbitrary probability density function (PDF) is computationally expensive due to the intractability of the normalization constant. Therefore, instead of directly modeling the density $Q(z)$, we propose to indirectly parameterize the prior via its score function $\nabla_{z}\log Q(z)$. While this avoids direct density estimation, the score function alone makes log-likelihood computations difficult. Weighted score matching losses only approximate maximum-likelihood estimation (MLE), and directly optimizing MLE using the flow interpretation becomes computationally prohibitive as it requires solving an ODE at each step (Song et al., 2021a). Unlike VAEs, where efficient sampling from the prior is critical, we demonstrate that the distribution matching objective with a score-based prior can be optimized without costly sampling or log-likelihood computation. By reformulating the cross-entropy term as a gradient with respect to the encoder parameters $\theta$, we derive an equivalent expression that retains the same gradient value. This allows us to decouple score function training from the encoder and compute gradients with a single evaluation of the score function. We call this the Score Function Substitution (SFS) trick.
Proposition 3.1 (Score Function Substitution (SFS) Trick). If $q _ { \theta } ( z | x )$ is the posterior distribution parameterized by $\theta$ and $Q _ { \psi } ( z )$ is the prior distribution parameterized by $\psi$ then the gradient of the cross entropy term can be written
as:
$$
\begin{aligned}
&\nabla_{\theta}\,\mathbb{E}_{z_{\theta}\sim q_{\theta}(z|x)}\big[-\log Q_{\psi}(z_{\theta})\big] \\
&\quad = \nabla_{\theta}\,\mathbb{E}_{z_{\theta}\sim q_{\theta}(z|x)}\Big[-\Big(\underbrace{\nabla_{\bar{z}}\log Q_{\psi}(\bar{z})\big|_{\bar{z}=z_{\theta}}}_{\text{constant w.r.t. }\theta}\Big)^{\top} z_{\theta}\Big], \qquad (6)
\end{aligned}
$$
where the notation of $z _ { \theta }$ emphasizes its dependence on $\theta$ and $\scriptstyle { \big | } { \bar { z } } = z _ { \theta }$ denotes that while $\bar { z }$ is equal to $z _ { \theta }$ , it is treated as a constant with respect to $\theta$ .
The full proof can be seen in Appendix A. In practice, Eqn. 6 detaches posterior samples from the computational graph, enabling efficient gradient computation without additional backpropagation dependencies. Details are provided in the next section. Following Proposition 3.1, we propose the score-based prior alignment upper bound (SAUB) objective defined as follows:
$$
\begin{aligned}
\mathcal{L}_{\mathrm{SAUB}} \triangleq \sum_{d} \Big\{ \mathbb{E}_{z\sim q_{\theta}(z|x,d)}\Big[ &-\log p_{\varphi}(x|z,d) + \log q_{\theta}(z|x,d) \\
&- \big(\nabla_{\bar{z}}\log Q_{\psi}(\bar{z})\big|_{\bar{z}=z}\big)^{\top} z \Big] \Big\}. \qquad (7)
\end{aligned}
$$
Since our new loss does not affect terms related to $\varphi$, and by Proposition 3.1, we have $\nabla_{\theta,\varphi}\mathcal{L}_{\mathrm{VAUB}} = \nabla_{\theta,\varphi}\mathcal{L}_{\mathrm{SAUB}}$, though we note that $\nabla_{\psi}\mathcal{L}_{\mathrm{VAUB}}$ and $\nabla_{\psi}\mathcal{L}_{\mathrm{SAUB}}$ are not equal in general. In the next section, we show how to train all parameters by approximating a bi-level optimization problem.
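The gradient identity in Proposition 3.1 can be checked numerically. In a toy example with reparameterized posterior $q_{\theta}(z|x) = \mathcal{N}(\theta, 1)$ and prior $Q = \mathcal{N}(0,1)$ (so the prior score is $-z$), the pathwise gradient of the cross-entropy term and the SFS form agree exactly under shared noise samples (a sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_000)  # shared noise for both gradient estimators

theta = 1.5
z = theta + eps  # reparameterized samples from q_theta(z|x) = N(theta, 1)

# Prior Q = N(0, 1): -log Q(z) = z^2 / 2 + const, so score(z) = -z.
# Pathwise gradient of E[-log Q(z_theta)] w.r.t. theta:
# E[z * dz/dtheta] with dz/dtheta = 1.
grad_direct = np.mean(z)

# SFS form: gradient of E[-(score(z_bar))^T z_theta] with z_bar detached,
# i.e. E[-score(z_bar) * dz/dtheta].
score_detached = -z                  # evaluated once, then held constant
grad_sfs = np.mean(-score_detached)

# Both estimators coincide and approximate the true gradient, theta.
```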
# 3.2. Deriving an Alternating Algorithm with Learnable Score-Based Priors
Leveraging SFS, we now develop a novel alternating optimization algorithm for score-based prior distributions. Specifically, we parametrize the prior through its score function, denoted $S_{\psi}(\cdot)$, instead of its density $Q_{\psi}(\cdot)$. Given a fixed prior score function, the SAUB objective allows us to optimize the encoder and decoder parameters $\theta$ and $\varphi$ (but cannot be used to update the prior because $\nabla_{\psi}\mathcal{L}_{\mathrm{VAUB}} \neq \nabla_{\psi}\mathcal{L}_{\mathrm{SAUB}}$). To update the prior, we can use a denoising score-matching objective to match the prior with the encoder's marginal posterior to improve the DM variational bound (Cho et al., 2022a; Gong et al., 2024)—indeed, when the prior matches the marginal posterior, the bound becomes tight. Thus, our global problem can be formulated as a bi-level optimization problem where the upper level is the SAUB objective and the lower level is the denoising score matching objective:
$$
\begin{aligned}
\operatorname*{min}_{\theta,\varphi}\; \sum_{d} \Big\{ \mathbb{E}_{q_{\theta}}\Big[ &-\log p_{\varphi}(x|z,d) + \log q_{\theta}(z|x,d) \\
&- \big(S_{\psi^{*}}(z^{*}, \sigma_{0}\approx 0)\big|_{z^{*}=z+\sigma_{0}\epsilon}\big)^{\top} z \Big] \Big\}, \qquad (8) \\
\text{s.t. }\; \psi^{*} \in \operatorname*{argmin}_{\psi}\; \mathbb{E}_{q_{\theta}}\Big[ &\big\|S_{\psi}(\tilde{z},\sigma_{i}) - \nabla_{\tilde{z}}\log q_{\sigma_{i}}(\tilde{z}|z)\big\|_{2}^{2} \Big], \qquad (9)
\end{aligned}
$$
where the expectation in (8) is over the joint distribution of observed and latent variables, i.e., $q_{\theta}(z,x,d) \triangleq p_{\mathrm{data}}(x,d)\,q_{\theta}(z|x,d)$, and the expectation in (9) is over the marginal (noisy) posterior distribution $q_{\theta}(z,\tilde{z},\sigma_{i}) \triangleq \mathbb{E}_{p_{\mathrm{data}}(x,d)}[q_{\theta}(z|x,d)\,q(\sigma_{i})\,q_{\sigma_{i}}(\tilde{z}|z)]$. If the lower-level optimization in Eqn. 9 is solved perfectly, then the upper bound on the likelihood represented by SAUB in Eqn. 8 will be tight. While there are many possible approaches to bi-level optimization, we choose a simple alternating approach (Xiao et al., 2023; Chen et al., 2021) between the top-level and the bottom-level problems, holding the parameters of the other optimization problem fixed. Because this simple alternating approach worked well in our experiments, we leave the exploration of more complex bi-level optimization approaches to future work.
During VAE training, the score model is conditioned on the smallest noise level, $\sigma_{0} = \sigma_{\mathrm{min}}$, to approximate the clean score function of the marginal posterior. As previously mentioned, the output of the score model is detached to prevent gradient flow, ensuring memory-efficient optimization by focusing solely on the encoder and decoder parameters without tracking the score model's computational graph. After optimizing the encoder and decoder, these networks are fixed while the score model is updated using Eqn. 9. Theoretically, if the score model is trained sufficiently to fully capture the latent distribution, it could be optimized using only small noise levels. However, extensive score model updates after each VAE step are computationally expensive. To mitigate this, we reduce the number of score model updates and train with a larger maximum noise level, enhancing stability when the latent representation becomes out-of-distribution (OOD). The complete training process is outlined in Appendix B. We also list the stabilization and optimization techniques in Appendix C.
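The alternation can be illustrated with a deliberately degenerate 1-D sketch (no reconstruction term, a closed-form mean refit standing in for the denoising score-matching step, and a Gaussian prior score $S_{\psi}(z) = -(z - m)$; every name here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=4_000)  # toy 1-D data from a single domain

mu = 0.0  # encoder parameter: z = x - mu (a deterministic stand-in encoder)
m = 0.0   # prior parameter: S_psi(z) = -(z - m), the score of N(m, 1)

for _ in range(100):
    # (1) Encoder step: gradient of the SFS surrogate -(S_psi(z_bar))^T z,
    #     with the score output detached; here dz/dmu = -1.
    z = x - mu
    s = -(z - m)                    # detached score evaluation
    grad_mu = np.mean(-s * (-1.0))  # d/dmu of -(s * z) = -s * dz/dmu
    mu -= 0.05 * grad_mu
    # (2) Score step: refit the prior to the current latent marginal
    #     (closed-form mean update standing in for denoising score matching).
    m = np.mean(x - mu)

# At the fixed point, the prior matches the latent marginal: the bound is tight.
```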
# 3.3. Comparison with Latent Score-Based Generative Models
Latent Score-Based Generative Models (LSGM) (Vahdat et al., 2021) provide a powerful framework that integrates latent variable models with score-based generative modeling, leveraging diffusion processes to enhance data generation quality. A key innovation in LSGM is the introduction of a learnable neural network prior, which replaces the traditional cross-entropy term in the Evidence Lower Bound (ELBO) with score-based terms approximated via a diffusion model. This idea of incorporating a score-based prior is similar to our method, which also leverages score functions for the prior.
A crucial challenge of LSGM is the instability associated with computing the Jacobian term when backpropagating through the U-Net of the diffusion model. Computing this Jacobian term is analogous to approximating the Hessian of the data distribution, which has been empirically shown to be unstable at low noise levels (Poole et al., 2022). Conversely, our Score Function Substitution (SFS) trick eliminates the need to backpropagate through the diffusion model, enabling stable optimization without explicitly computing the Jacobian. In addition, the LSGM loss requires approximating an expectation over noise levels with a finite number of Monte Carlo (MC) samples (often a single one). We hypothesize that this MC approximation also contributes to the instability of LSGM. For further details of gradient comparison, please refer to Appendix F.
Figure 1. The reconstruction loss and negative log-likelihood are presented on a logarithmic scale for improved visualization. The experiment uses consistent hyperparameters ($\beta = 0.1$), an identical VAE architecture, and the same pretrained score model.
Comparative Stability: SFS vs. LSGM We assess stability by measuring the posterior's negative log-likelihood (NLL) under a fixed Gaussian-mixture prior. The prior and target distributions are illustrated in Fig. 6. Unlike standard training, which updates encoder, decoder, and prior parameters, our approach freezes the prior and uses a score model pre-trained on the defined prior, updating only the encoder and decoder. The same pre-trained score model is used for both SAUB and LSGM to ensure a fair comparison. Performance is evaluated with score models trained on four minimum noise levels, $\sigma_{\mathrm{min}} \in \{0.001, 0.01, 0.1, 0.2\}$, with $\sigma_{\mathrm{max}} = 1$ fixed. While lower noise levels should improve likelihood estimation, as the score model more precisely approximates the true score function, LSGM requires backpropagation through the score model's U-Net, which causes instability at low noise levels due to inaccurate gradients. As shown in Fig. 1, when $\sigma_{\mathrm{min}} = 0.001$, LSGM exhibits catastrophic instability, with diverging NLL and spikes in reconstruction loss. At $\sigma_{\mathrm{min}} = 0.1$ and $\sigma_{\mathrm{min}} = 0.2$, LSGM performs better in terms of both reconstruction loss and NLL than at $\sigma_{\mathrm{min}} = 0.01$, indicating that unstable gradients at lower noise levels negatively impact prior matching. This is concerning since low noise levels, like $\sigma_{\mathrm{min}} = 0.01$, are commonly used in practice. In contrast, the SFS trick shows greater stability across noise levels. At $\sigma_{\mathrm{min}} = 0.01$, the NLL is better than at $\sigma_{\mathrm{min}} = 0.1$, which outperforms $\sigma_{\mathrm{min}} = 0.2$, suggesting that SFS yields more reliable gradients when the score model is trained on lower noise levels.
While both LSGM and SAUB degrade at $\sigma_{\mathrm{min}} = 0.001$, SFS stabilizes and achieves a better NLL than LSGM at $\sigma_{\mathrm{min}} = 0.01$, demonstrating its robustness in handling small noise configurations.
# 3.4. Semantic Preservation (SP) in Latent Representations via GW Inspired Constraint
Given an expressive score-based prior, we can now investigate how to incorporate the geometry-preserving regularization introduced by Nakagawa et al. (2023) without inducing the unnecessary biases of fixed priors on the latent distribution. Specifically, Nakagawa et al. (2023) introduces the GW metric $\mathcal { L } _ { \mathrm { G W } }$ in an autoencoding framework, and we adopt this regularization in a similar manner:
$$
\begin{aligned}
\mathcal{L}_{\mathrm{total}} &= \mathcal{L}_{\mathrm{DM}} + \lambda_{\mathrm{GW}}\,\mathcal{L}_{\mathrm{GW}}(q_{\theta}(z|x)) \\
\mathcal{L}_{\mathrm{GW}}(q_{\theta}(z|x)) &\triangleq \mathrm{GWCost}\big(\pi = p_{\mathrm{data}}(x)\,q_{\theta}(z|x)\big) = \mathbb{E}\big[\|d_{X}(x,x') - d_{Z}(z,z')\|_{2}^{2}\big]
\end{aligned}
$$
where $p_{\mathrm{data}}$ is the data distribution, $d_{X}$ and $d_{Z}$ are the predefined metrics on the observed and latent spaces, respectively, and $\lambda_{\mathrm{GW}}$ controls the importance of the structural preservation loss $\mathcal{L}_{\mathrm{GW}}(q_{\theta}(z|x))$. $\mathcal{L}_{\mathrm{DM}}(q_{\theta}(z|x))$ is the distribution matching objective with encoder $q_{\theta}(z|x)$.
Selection of Metric Space and Distance Functions The GW framework's key strength lies in its ability to compare distributions across diverse metric spaces, where the choice of metric significantly impacts comparison quality. In low-dimensional datasets like Shape3D (Kim & Mnih, 2018) and dSprites (Matthey et al., 2017), Euclidean pixel-level distances align well with semantic differences, leading prior works (Nakagawa et al., 2023; Uscidda et al., 2024) to use L2 or cosine distances for isometric mappings. However, this breaks down in high-dimensional data, like real-world images, which lie on lower-dimensional manifolds. The curse of dimensionality causes traditional metrics, such as pixel-wise distances, to lose effectiveness as dimensionality increases. Recent advancements in vision-language models like CLIP (Radford et al., 2021) have shown their ability to learn robust and expressive image representations by training on diverse data distributions (Fang et al., 2022). Studies (Yun et al., 2023) demonstrate that CLIP captures meaningful semantic relationships, even learning primitive concepts. Therefore, we propose using the semantic embedding space of pre-trained CLIP models as a more effective metric for computing distances between datasets, which we define as the Semantic Preservation (SP) loss. For a detailed evaluation of the improvements from using CLIP embeddings, please refer to Appendix G, which includes demonstrations and additional results. In the following sections, we denote the Gromov-Wasserstein constraint as GW-EP or GW-SP to distinguish the metric space used: Euclidean metric space Preservation (EP) or Semantic Preservation (SP), respectively.
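A sketch of the SP variant under stated assumptions: a fixed random projection stands in for a frozen CLIP image encoder (real usage would call CLIP), cosine distances between embeddings define $d_X$, and Euclidean distances between latents define $d_Z$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen CLIP image encoder: a fixed random
# projection followed by L2 normalization.
W = rng.standard_normal((8, 64))
def embed(x):  # x: (n, 64) flattened "images"
    e = x @ W.T
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def gw_sp_loss(x, z):
    """GW-SP loss: match pairwise cosine distances of semantic embeddings
    (d_X) to pairwise Euclidean distances of latents (d_Z)."""
    e = embed(x)
    d_x = 1.0 - e @ e.T                                     # cosine distance
    d_z = np.linalg.norm(z[:, None] - z[None, :], axis=-1)  # Euclidean distance
    return np.mean((d_x - d_z) ** 2)

x = rng.standard_normal((64, 64))
z = rng.standard_normal((64, 2))
loss = gw_sp_loss(x, z)
# Only pairwise distances enter, so the loss is invariant to translating z.
```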
# 4. Related Works
Learnable Priors Variational autoencoders (VAEs) typically use simple Gaussian priors due to the computational challenges of optimizing more expressive priors and the lack of closed-form solutions for their objectives. Early efforts to address this, such as Adversarial Autoencoders (AAEs) (Makhzani et al., 2016), employed adversarial networks to learn flexible priors, resulting in smoother and more complete latent manifolds.
Subsequent research (Hoffman & Johnson, 2016; Johnson et al., 2017) highlighted that simple priors can lead to overregularized and less informative latent spaces, while (Tomczak & Welling, 2018) empirically showed that more expressive priors improve generative quality, with significant gains in log-likelihood. More recently, Latent Score-based Generative Models (LSGM) (Vahdat et al., 2021) introduced score-based priors, leveraging a denoising score-matching objective to learn arbitrary posterior distributions. This approach enables high-quality image generation while capturing the majority of the data distribution.
Gromov-Wasserstein Based Learning The Gromov-Wasserstein (GW) distance has found numerous applications in learning problems involving the geometric and structural configuration of objects or distributions. Moreover, the GW metric has been adopted for mapping functions in deep neural networks. One of its key benefits is the capacity to compare distributions with heterogeneous data and/or dimensional discrepancies.
Prior works such as Truong et al. (2022) and Carrasco et al. (2024) use the GW distance as part of the objective, but focus on computing and minimizing it in the embedding space between domains, $\mathcal{L}_{OT/GW} = OT/GW(z_{src}, z_{tgt})$. In contrast, Uscidda et al. (2024) and Nakagawa et al. (2023) define the GW objective between the data space and the embedding space.
# 5. Experiments
In this section, we evaluate the effectiveness of our proposed VAUB with a score-based prior on several tasks. We conduct experiments on synthetic data, domain adaptation, multidomain matching, fair classification, and domain translation. For each experiment, we compare our methods to VAUB and other baselines and evaluate performance using various metrics.
# 5.1. Improving Latent Space Separation by Using Score-based Prior
The primary objective of this experiment is to demonstrate the performance of different prior distribution models within the VAUB framework. Additionally, we examine the effect of varying the number of samples used during training, specifically considering scenarios with limited dataset availability. To achieve this, we create a synthetic nested D-shaped dataset consisting of two domains and two labels, as illustrated in Fig. 2. The aim is to learn a shared latent representation across the two domains and evaluate the degree of separation between class labels within this shared latent space. Since downstream tasks rely on these shared latent representations, better separation of class labels in the latent space naturally leads to improved classification performance. This setup draws an analogy to domain adaptation tasks, where the quality of separation in the latent representation relative to the label space plays a critical role in determining downstream classification outcomes.
Figure 2. The dataset consists of two domains: Domain 1 (left nested 'D' shape) and Domain 2 (right flipped 'D' shape). In each domain, the outer 'D' corresponds to Label 1, and the inner 'D' to Label 2. The shared latent spaces are visualized for models trained with varying data sizes ($n = 20$, 100, 500 samples) using a Gaussian prior (Kingma et al., 2019), Mixture of Gaussians (Gong et al., 2024), VampPrior (Tomczak & Welling, 2018), LSGM (Song et al., 2021c), and our score-based model (columns). Legends follow the format D{domain_index}_L{label_index}.
In this experiment, we control the total number of data samples generated for the dataset and compare the model's performance using five types of priors: a Gaussian prior, a Mixture of Gaussians (MoG) prior, VampPrior, a score-based prior trained with LSGM, and ours (the SFS method). Considering the strong relation between point-wise distances and the label information of the dataset, we use GW-EP to compute the constraint loss in both the data domain and the latent domain. This helps to better visually reflect the underlying structure and separations in the latent space. As shown in Fig. 3, this performance improvement is evident in the latent space: the nested D structure is well-preserved under transformation with score-based prior methods (LSGM and ours), resulting in well-separated latent representations across different classes. This holds consistently for varying numbers of data points, from as few as 20 samples to higher counts. On the other hand, the Gaussian, MoG, and VampPrior priors only achieve $90\%$ separation in the latent space when the number of data samples is sufficiently large ($n = 100$ for the MoG and VampPrior priors and $n = 20$ for the Gaussian prior), allowing the inner and outer classes to have a classifier boundary supported by enough data points, as shown in Fig. 3. This finding is especially relevant for real-world datasets, where the original data dimensionality can easily reach tens of thousands; in this experiment we worked with only a two-dimensional dataset, yet the Gaussian, MoG, and VampPrior priors required hundreds of samples to achieve effective latent separation, whereas the score-based priors (LSGM and SFS) succeeded with as few as 20 samples.
# 5.2. Improving the Tradeoff between Accuracy and Parity in Fair Classification
For this experiment, we apply our model to the well-known Adult dataset, derived from the 1994 census, which contains 30K training samples and 15K test samples. The target task is to predict whether an individual’s income exceeds $\$ 50\mathrm { K }$ , with gender (a binary attribute in this case) considered as the protected attribute.
Figure 3. This figure shows label separation in the latent space under varying sample sizes and prior configurations, quantified by AUROC scores from the predictions of a support vector classifier. Higher scores indicate better separation. Details of the metric are described in the appendix.
We adopt the same preprocessing steps as Zhao et al. (2020), and the encoder and classifier architectures are consistent with those in Gupta et al. (2021). We additionally adopt GW-EP as our constraint loss, considering the lack of semantic models for tabular datasets such as the Adult dataset. Please refer to Appendix I for a more detailed architecture setup. For comparison, we benchmark our model against three non-adversarial models, FCRL (Gupta et al., 2021), CVIB (Moyer et al., 2018), and VAUB (Gong et al., 2024); one adversarial model, LAFTR-DP (Madras et al., 2018); and one extra baseline, an 'Unfair Classifier' obtained by training the classifier directly on the original dataset.
As illustrated in Fig. 4, our method not only retains the advantages of the SAUB method, achieving a near-zero demographic parity (DP) gap while maintaining accuracy, but also improves accuracy across the board at the same DP gap compared to other methods. We attribute this improvement largely to the introduction of the score-based prior, which potentially allows for better semantic preservation in the latent space, enhancing both accuracy and fairness.
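For reference, the demographic parity gap reported here is the absolute difference in positive-prediction rates across the protected groups; a minimal sketch of this standard definition (`dp_gap` is our own helper name):

```python
import numpy as np

def dp_gap(y_pred, s):
    """Demographic parity gap: |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

print(dp_gap([1, 1, 0, 0], [0, 0, 1, 1]))  # fully disparate predictions -> 1.0
print(dp_gap([1, 0, 1, 0], [0, 0, 1, 1]))  # equal positive rates -> 0.0
```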
We further provide comprehensive ablation studies and computational efficiency analyses (detailed in Appendix N) that validate the effectiveness of our GW regularization component and demonstrate significant computational advantages over LSGM, particularly for high-dimensional applications.
# 5.3. Domain Adaptation
We evaluate our method on the MNIST-USPS domain adaptation task, transferring knowledge from the labeled MNIST (70,000 images) to the unlabeled USPS (9,298 images)
Figure 4. Demographic Parity gap $( \Delta _ { D P } )$ vs. Accuracy trade-off for UCI Adult dataset. Lower $\Delta _ { D P }$ is better, and higher Accuracy is better.
Table 1. Domain-adaptation accuracy $( \% )$ .
without using target labels. We compare our SAUB method (with and without structure-preserving constraints) against baseline DA methods: ADDA (Zhao et al., 2018), DANN (Ganin et al., 2016), and VAUB (Gong et al., 2024). All methods use the same encoder and classifier architecture for fairness, with structure-preserving constraints applied using $L 2$ distance in Euclidean space (GW-EP) and CLIP embedding (GW-SP).
As shown in Table 1, our method outperforms the baselines in both directions. Unlike ADDA and DANN, which require joint classifier and encoder training, our approach allows for classifier training after the encoder is learned, simplifying domain adaptation. Additionally, the inclusion of a decoder enables our model to naturally adapt to domain translation tasks, as demonstrated in Fig. 15. We additionally conduct novel experiments to assess the generalizability and robustness of our model with limited source-labeled data, detailed in Appendix D. Additionally, image translation results between MNIST and USPS are presented in Appendix K.
# 5.4. Domain Translation
Figure 5. All models use the same architecture. Refer to Appendix I for details on the neural network and CLIP model. Applying GW loss in the CLIP semantic space shows superior semantic preservation in both (a) and (b). The samples are selectively chosen to represent diverse variations; random samples are in Appendix M.
We conduct domain translation experiments on the CelebA dataset, translating images of females with blonde hair to black hair and vice versa. We compare three settings: GW loss in semantic space, GW loss in Euclidean space, and no GW loss. This comparison shows that GW loss in the semantic space better preserves semantic features, while Euclidean space GW loss is less effective in high-dimensional settings. We note that achieving state-of-the-art image translation performance is not our primary objective, since we employ relatively simple networks for both the VAE and diffusion model (see Appendix I). Instead, this experiment demonstrates our model’s versatility across tasks and serves as a proof of concept. We believe that with state-of-the-art architecture and engineering design our approach will be competitive for domain translation and other imaging tasks, which we leave to future work.
For quantitative evaluation of semantic preservation, we use the Structural Similarity Index Measure (SSIM), Learned Perceptual Image Patch Similarity (LPIPS), and image retrieval accuracy as our metrics. For image retrieval, the models, trained for 1,500 epochs, translate images from a domain of 100 females with black hair to a domain of 100 females with blonde hair and vice versa. For each translated image, we compute the cosine similarity with all translated images in the target domain using CLIP embeddings. To ensure fairness, we use a different pretrained CLIP model for evaluation than for training GW-SP; for more information, see Appendix I. This process is repeated five times with randomly selected datasets to account for variability in the data. The experiment measures how well the translated images preserve their semantic content. We compute the top-k accuracy, where the task is to retrieve the correct translated image from the set of all translated images. For SSIM and LPIPS, we randomly translate 1,000 images and compare their similarity metrics to quantify structural and perceptual consistency. This bidirectional evaluation (black-to-blonde and blonde-to-black) ensures robustness and highlights the model's ability to maintain semantic consistency during translation. GW-SP consistently improves accuracy for all metrics. Notably, GW-EP performs worse than no GW loss for image retrieval. The domain translation images in Appendix M confirm that models with semantic-space GW loss better preserve semantic features like hairstyle, smile, and facial structure, demonstrating its advantage. For additional experiments, we provide image translations between male and female subjects on the FairFace dataset in Appendix L for interested readers.
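The top-k retrieval protocol above can be sketched as follows, assuming unit-normalized embeddings and that the ground-truth match for each translated image shares its index in the gallery (all names are illustrative, and a random near-duplicate gallery stands in for CLIP embeddings):

```python
import numpy as np

def topk_retrieval_accuracy(queries, gallery, k=1):
    """Fraction of queries whose ground-truth match (same index in the
    gallery) appears among the k most cosine-similar gallery items."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                                  # cosine similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]         # indices of top-k matches
    hits = (topk == np.arange(len(q))[:, None]).any(axis=1)
    return hits.mean()

rng = np.random.default_rng(0)
gallery = rng.standard_normal((100, 32))  # stand-in target-domain embeddings
queries = gallery + 0.01 * rng.standard_normal(gallery.shape)  # near-duplicates
print(topk_retrieval_accuracy(queries, gallery, k=1))
```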
Table 2. Retrieval and perceptual similarity metrics. Higher SSIM (↑) and lower LPIPS (↓) indicate better structural and perceptual similarity.

Abstract: Distribution matching (DM) is a versatile domain-invariant representation learning technique that has been applied to tasks such as fair classification, domain adaptation, and domain translation. Non-parametric DM methods struggle with scalability and adversarial DM approaches suffer from instability and mode collapse. While likelihood-based methods are a promising alternative, they often impose unnecessary biases through fixed priors or require explicit density models (e.g., flows) that can be challenging to train. We address this limitation by introducing a novel approach to training likelihood-based DM using expressive score-based prior distributions. Our key insight is that gradient-based DM training only requires the prior's score function -- not its density -- allowing us to train the prior via denoising score matching. This approach eliminates biases from fixed priors (e.g., in VAEs), enabling more effective use of geometry-preserving regularization, while avoiding the challenge of learning an explicit prior density model (e.g., a flow-based prior). Our method also demonstrates better stability and computational efficiency compared to other diffusion-based priors (e.g., LSGM). Furthermore, experiments demonstrate superior performance across multiple tasks, establishing our score-based method as a stable and effective approach to distribution matching. Source code available at https://github.com/inouye-lab/SAUB.
# 1 Introduction
Methane, the principal constituent of natural gas, is a potent greenhouse gas with a global warming potential over 80 times greater than that of carbon dioxide over a 20-year period [29]. Despite its relatively short atmospheric lifetime compared to carbon dioxide, methane is highly effective at absorbing infrared radiation, making it a critical driver of near-term climate warming. Consequently, reducing emissions of methane is considered one of the most effective near-term actions to limit climate change.
Methane emissions occur in various natural environments such as wetlands, but the main contributors are anthropogenic sources in sectors such as agricultural waste, biomass burning, and fossil fuel production and use [27]. In oil and gas production, methane emissions typically come from flaring, venting, leaks or incomplete combustion. Accurate identification and quantification of these sources can support the development of effective policies, regulations, and targeted remediation. The Oil and Gas Methane Partnership 2.0 (OGMP 2.0) initiative exemplifies industry efforts to standardize monitoring and reporting of these emissions [23]. Because multiple emission sources can be present at a facility, some of them varying in time, sensing technologies are needed that deliver full spatio-temporal resolution.
Recovering unknown source parameters from sensor data requires inversion techniques. In this work, we focus on ground-level sensor networks delivering high-frequency, spatially sparse measurements. A Bayesian inversion framework – traditionally relying on Markov chain Monte Carlo (MCMC) [4, 25, 7] – quantifies uncertainty robustly but suffers from poor scalability and slow convergence in multimodal, time-varying settings. Real-time inference necessitates an alternative that balances accuracy and efficiency.
Our Contribution: We embed a computational fluid dynamics (CFD) surrogate within a Bayesian state-space model [17, 16] and perform sequential inference using a Sequential Importance Resampling (SIR) particle filter [9]. Our surrogate employs a multilayer perceptron (MLP) trained to emulate high-fidelity CFD solvers, providing near-instantaneous predictions of sensor concentrations for any candidate source configuration. This approach retains the physical realism of numerical solvers while reducing per-evaluation cost to milliseconds.
In this paper, we apply our inversion methodology to the Chilbolton controlled-release dataset [12, 11, 32, 22], demonstrating that our surrogate-based SIR filter achieves comparable accuracy to full CFD and Gaussian plume models [30] at a fraction of the computational cost. We further validate robustness under simulated obstructed flow fields and temporally varying emission rates, highlighting the scalability of our framework. Section 2 presents the SIR-based inversion algorithm. Section 3 details the MLP surrogate construction. Section 4 evaluates performance on Chilbolton data, and Section 5 extends the approach to complex, obstructed scenarios.
# 2 Spatio-temporal gas source inversion using particle filters
We cast the gas source inversion problem in a Bayesian state-space framework [15, 17, 6, 34], which is commonly used in a range of areas, including econometrics [1, 10], target tracking [2, 21, 31] and epidemiology [24, 14]. Let the latent state at time $t$ be ${ \pmb \theta } _ { t }$ , which encodes the unknown source parameters – notably the source coordinates $( \tilde { x } , \tilde { y } , \tilde { z } )$ and possibly a time-varying emission rate $s _ { t }$ . We specify a prior distribution $p ( \pmb \theta _ { 0 } )$ over the initial state (for example, a uniform prior over the site for the source location and a broad prior for the emission rate) to capture our initial uncertainty. The state-space model is then defined by two components: a measurement model, which relates the state to the observed gas concentrations, and a state evolution model describing how the latent state changes over time.
Measurement model (Observation Equation). At any time $t$ , we receive sensor measurements $d _ { t }$ (e.g. gas concentration readings at fixed sensor locations). We model these observations as noisy functions of the current source parameters. In particular, we assume the observation equation:
$$
\hat{\pmb{d}}_t = C(\dot{x}, \dot{y}, \dot{z} \mid \tilde{x}, \tilde{y}, \tilde{z}) \times \pmb{s}_{\kappa:t} + \pmb{\beta}_{\kappa:t} + \epsilon_t,
$$
where $( \dot { x } , \dot { y } , \dot { z } )$ are the known coordinates of the sensor location(s), and $C ( \dot { x } , \dot { y } , \dot { z } \mid \tilde { x } , \tilde { y } , \tilde { z } )$ is the gas concentration function (derived from a CFD model or its surrogate) that predicts the concentration at the sensors given a source at $( \tilde { x } , \tilde { y } , \tilde { z } )$ . The term $\pmb { s } _ { \kappa : t }$ denotes the history of the source’s emission rate from time $t - \kappa$ up to $t$ , representing the fact that the gas concentration at time $t$ can depend on emissions in the recent past (within a window of length $\kappa$ ). Likewise, $\beta _ { \kappa : t }$ is the history of the ambient background gas concentration at the sensor (e.g. baseline methane levels) over that period. The final term $\epsilon _ { t }$ represents measurement noise (sensor error), which we typically model as independent zero-mean Gaussian noise. If multiple sensors are deployed, $d _ { t }$ is a vector of all sensor readings at time $t$ , and we assume the components of $\epsilon _ { t }$ are independent (i.e., each sensor has independent noise). For simplicity, we also assume the source’s influence on the flow field is negligible (i.e. the wind field is not altered by the emission), so that the dispersion of gas is linearly related to the emission rate. This means $C ( \cdot )$ can be computed for a unit emission and then scaled by $\pmb { s } _ { \kappa : t }$ , as reflected in (1).
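As a minimal numerical illustration of the observation equation, the sketch below produces a synthetic sensor reading; the isotropic-decay form of `C` is a toy placeholder (in the paper, $C$ comes from a CFD model or its surrogate), and all names are illustrative:

```python
import numpy as np

def predicted_measurement(sensor_xy, source_xy, s_rate, beta, rng, sigma=0.05):
    """Observation equation: d_t = C(sensor | source) * s + beta + eps.
    C here is a toy isotropic-decay placeholder, NOT the CFD-based
    concentration function used in the paper."""
    dist = np.linalg.norm(np.asarray(sensor_xy) - np.asarray(source_xy), axis=-1)
    C = 1.0 / (1.0 + dist ** 2)                         # unit-emission concentration (placeholder)
    eps = rng.normal(0.0, sigma, size=np.shape(dist))   # independent zero-mean Gaussian sensor noise
    return C * s_rate + beta + eps
```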
State evolution model (Dynamic Equation). We allow the source parameters to evolve in time according to a latent dynamics model. In general, the source location might be static (or moving slowly), and the emission rate $s _ { t }$ could potentially vary over time. We capture any uncertainty or evolution in these parameters with a stochastic dynamic equation. A simple choice (which we adopt here) is a random-walk model:
$$
\pmb \theta _ { t } = \pmb \theta _ { t - 1 } + \pmb \zeta _ { t } ,
$$
where $\boldsymbol { \zeta } _ { t } \sim \mathcal { N } ( 0 , \mathbf { W } )$ is a multivariate Gaussian process noise term with covariance W. This model assumes that from one time step to the next, the source location and emission rate do not change dramatically, only undergoing small random perturbations. In practice, if the true source is stationary (constant location and emission), the random walk (with a small covariance W) serves to maintain diversity in our particle filter (preventing all particles from collapsing to a single point). If the source emission rate genuinely varies over time, W can be tuned to account for those changes. We assume the state process ${ \pmb \theta } _ { t }$ is Markovian (i.e. given $\pmb { \theta } _ { t - 1 }$ , the next state ${ \pmb \theta } _ { t }$ is independent of earlier times) and that process noise $\boldsymbol { \zeta } _ { t }$ is independent across time steps. We also assume observations are conditionally independent given the corresponding state (with the caveat that $\mathbf { \nabla } d _ { t }$ may depend on a short history $\pmb { s } _ { \kappa : t }$ of emissions, which can be incorporated by extending the state to include recent emissions up to $\kappa$ ). Under these assumptions, the model is a state-space model amenable to Bayesian filtering techniques to recover the latent process $\pmb { \theta } _ { 1 : t }$ .
Bayesian filtering. Our goal is to infer the posterior distribution of the source parameters given the sequence of measurements up to time $t$ , denoted $p ( \pmb { \theta } _ { t } \mid \mathbf { d } _ { 1 : t } )$ . Using Bayes’ rule, and the statespace model assumptions, the posterior can be updated sequentially: starting from a prior $p ( \pmb \theta _ { 0 } )$ , we incorporate new observations as they arrive. In principle, the update from time $t - 1$ to $t$ is given by:
• Prediction: $p ( \pmb { \theta } _ { t } \mid d _ { 1 : t - 1 } ) = \int p ( \pmb { \theta } _ { t } \mid \pmb { \theta } _ { t - 1 } ) p ( \pmb { \theta } _ { t - 1 } \mid d _ { 1 : t - 1 } ) \mathrm { d } \pmb { \theta } _ { t - 1 }$ ,
• Bayes Update: $p ( \pmb \theta _ { t } \mid d _ { 1 : t } ) \propto p ( \pmb { d } _ { t } \mid \pmb \theta _ { t } ) p ( \pmb \theta _ { t } \mid d _ { 1 : t - 1 } )$ .
Here, $p ( d _ { t } \mid \pmb { \theta } _ { t } )$ is the likelihood of the new observation given state $\pmb { \theta } _ { t }$ , which from (1) (assuming Gaussian sensor noise) can be written as, for example, $p ( d _ { t } \mid \pmb \theta _ { t } ) = \mathcal { N } \big ( d _ { t } \mid C ( \dot { x } , \dot { y } , \dot { z } \mid \tilde { x } , \tilde { y } , \tilde { z } ) \times s _ { \kappa : t } + \beta _ { \kappa : t } , \sigma ^ { 2 } \big )$ for some noise variance $\sigma ^ { 2 }$ . In general, these integrals and proportionality are intractable to solve in closed form due to the nonlinearity of $C$ and the high dimensionality of the state. We therefore resort to a Monte Carlo approximation – specifically, the Sequential Importance Resampling (SIR) particle filter [9] – to perform the Bayesian update numerically.
In a SIR particle filter, we maintain a set of $N$ random samples (particles) $\{ \pmb { \theta } _ { t } ^ { ( i ) } \} _ { i = 1 } ^ { N }$ that provides a discrete approximation of the posterior $p ( \pmb { \theta } _ { t } \mid \mathbf { \alpha } d _ { 1 : t } )$ . Each particle $\pmb { \theta } _ { t } ^ { ( i ) } = \{ \tilde { x } ^ { ( i ) } , \tilde { y } ^ { ( i ) } , \tilde { z } ^ { ( i ) } , \pmb { s } ^ { ( i ) } \}$ is a possible source location and emission rate. We also associate a weight $w _ { t } ^ { ( i ) }$ with each particle, indicating its relative plausibility given the data. The particle filter sequentially propagates and updates these weighted samples as new data arrive:
1. Initialization: At $t = 0$ , draw $N$ particles $\{ \pmb \theta _ { 0 } ^ { ( i ) } \}$ from the prior $p ( \pmb \theta _ { 0 } )$ , and set all weights $w _ { 0 } ^ { ( i ) } = 1 / N$ .
2. Prediction (Propagate): For each particle at time $t - 1$ , sample a new particle according to the state equation (2). In practice, we add independent Gaussian noise to each particle: $\pmb \theta _ { t } ^ { ( i ) } \sim p ( \pmb \theta _ { t } \mid \pmb \theta _ { t - 1 } ^ { ( i ) } )$ . This yields a predicted particle set $\{ \pmb { \theta } _ { t } ^ { ( i ) } \} _ { i = 1 } ^ { N }$ representing an approximate prior for time $t$ .
3. Update (Weight): Upon receiving the observation $d _ { t }$ , compute a likelihood weight for each particle based on how well that particle’s state explains the measurement. For particle $i$ :
$$
w _ { t } ^ { ( i ) } \propto w _ { t - 1 } ^ { ( i ) } p \big ( d _ { t } \mid \theta _ { t } ^ { ( i ) } \big ) ,
$$
using the observation model (1).
In practice, the particles are often resampled, with probability $w _ { t } ^ { ( i ) }$ , to avoid weight degeneracy – where one particle carries most of the weight. After resampling, all weights are reset to $w _ { t } ^ { ( i ) } = 1 / N$ , for all $i$ .
We repeat the Prediction–Update–Resampling cycle for each time step as new sensor data become available. Over time, the particle ensemble $\{ \pmb { \theta } _ { t } ^ { ( i ) } , w _ { t } ^ { ( i ) } \} _ { i = 1 } ^ { N }$ evolves to track the posterior distribution $p ( \pmb { \theta } _ { t } \mid \pmb { d } _ { 1 : t } )$ . In effect, the particle filter provides a numerical approximation of the Bayesian solution for the gas source inversion problem. This approach is well-suited to our setting as it can handle nonlinear and non-Gaussian relationships (unlike, e.g., a Kalman filter [16]) and it naturally accommodates the sequential arrival of data in a dynamic environment. By using a sufficiently large number of particles, the SIR filter can approximate the true posterior to any desired accuracy, enabling robust spatio-temporal estimation of the source location and emission rate even under complex, unsteady flow conditions.
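One Prediction–Update–Resampling cycle can be condensed into a few lines of NumPy; `forward_model` stands in for a particle's expected sensor reading (the MLP surrogate in our setting), and a scalar Gaussian observation model is assumed for brevity:

```python
import numpy as np

def sir_step(particles, weights, observation, forward_model, W_cov, sigma, rng):
    """One Prediction-Update-Resampling cycle of the SIR particle filter.
    `forward_model(theta)` predicts the sensor reading for a particle."""
    n, d = particles.shape
    # Prediction: random-walk propagation, theta_t = theta_{t-1} + zeta, zeta ~ N(0, W)
    particles = particles + rng.multivariate_normal(np.zeros(d), W_cov, size=n)
    # Update: Gaussian likelihood weights from the observation model
    preds = np.array([forward_model(p) for p in particles])
    log_w = -0.5 * ((observation - preds) / sigma) ** 2
    weights = weights * np.exp(log_w - log_w.max())   # stabilise before normalising
    weights = weights / weights.sum()
    # Resampling: draw particles with probability w_i, then reset weights to 1/N
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```

Repeated over time steps, the particle cloud concentrates around states consistent with the observations.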
# 3 Multilayer perceptron surrogate modeling of atmospheric gas measurements in unsteady-state flow fields
The particle filter outlined in Section 2 requires repeated evaluations of the likelihood $p ( d _ { 1 : t } \mid \pmb \theta )$ for each particle $\{ \pmb \theta _ { t } ^ { ( i ) } \} _ { i = 1 } ^ { N }$ , which, in the case of the gas concentration model, requires computing the concentration function $C ( \dot { x } , \dot { y } , \dot { z } \mid \tilde { x } , \tilde { y } , \tilde { z } )$ at each potential source location. High-fidelity CFD models exist to compute $C$ , but they are often too slow to run for each particle $i$ and time step $t$ in real-time. We can overcome this bottleneck by utilizing a deep learning surrogate model for the CFD simulation. Specifically, we train a multilayer perceptron (MLP) [26] to approximate the mapping from the source parameters to the sensor measurements, effectively serving as a fast emulator of the physical gas dispersion model.
# 3.1 High-fidelity gas dispersion simulation training data
To generate training data for the surrogate, we first require a ground-truth model of how gas disperses in an unsteady flow field. We use a computational fluid dynamics (CFD) solver [13] that captures the physics of air flow and gas transport in our monitored site $\Omega$ . In particular, we solve the time-dependent Navier-Stokes equations [19, 20] (which govern fluid flow) to obtain the wind velocity field, and then solve the advection-diffusion equation [28, 5] (which governs the transport and diffusion of the gas) to obtain gas concentrations. These equations are discretized and integrated over time to simulate the evolution of wind and gas in the domain. Denoting by $f _ { v }$ the Navier-Stokes solver and $f _ { c }$ the advection-diffusion solver, we can formalize the process as follows: given a history of wind boundary conditions (e.g. measured wind speed and direction over time) $u _ { \kappa : t }$ and corresponding pressure field $p _ { \kappa : t }$ , and given any fixed obstacles or terrain features $\omega$ over the spatial domain $\Omega$ , the CFD model produces a flow field $f _ { v } ( u _ { \kappa : t } , p _ { \kappa : t } , \omega , \Omega )$ describing the wind velocities in $\Omega$ over time. Using the flow field, the gas transport solver $f _ { c }$ computes the resulting gas concentration field for a source at a specific location $( \tilde { x } , \tilde { y } , \tilde { z } )$ . We then evaluate this concentration field at the sensor coordinates $( \dot { x } , \dot { y } , \dot { z } )$ . Let $C _ { \mathrm { n s } }$ denote the concentration output of the full Navier-Stokes-based numerical solver. We can express the solver’s prediction as:
$$
C _ { \mathrm { n s } } ( \dot { x } , \dot { y } , \dot { z } \mid \tilde { x } , \tilde { y } , \tilde { z } ) = \left. f _ { c } ( ( \tilde { x } , \tilde { y } , \tilde { z } ) , f _ { v } ( { u } _ { \kappa : t } , { p } _ { \kappa : t } , \omega , \Omega ) ) \right| _ { \dot { x } , \dot { y } , \dot { z } } ,
$$
which represents the gas concentration at the sensor location due to a source at $( \tilde { x } , \tilde { y } , \tilde { z } )$ under the given unsteady wind conditions. In other words, we use the CFD solver to simulate the propagation of gas from a candidate source through the evolving wind field, and we record what concentration would be measured at the sensor. Equation (3) is essentially the model behind the concentration function $C$ in (1). This high-fidelity simulation accounts for complex effects, such as turbulent eddies and time-varying wind direction, providing accurate ground truth concentrations for given source parameters.
However, running such a CFD simulation for every candidate source is computationally expensive. For example, solving the Navier-Stokes and advection-diffusion equations even once (for a given source configuration) might take seconds to hours, which is prohibitively expensive when deployed within a particle filter that could require thousands of evaluations. Therefore, we will use (3) offline to generate training datasets, and then train a fast MLP surrogate to mimic its output.
To construct the training data, we sample a large number of hypothetical source scenarios and simulate each with the CFD model. In our study, we drew source locations uniformly from the area of interest $\Omega$ (each location $( \tilde { x } , \tilde { y } , \tilde { z } )$ corresponds to a different training sample). For each source location, we assume a fixed emission rate (e.g. a unit emission for simplicity) and run the CFD solver to obtain $C _ { \mathrm { n s } } ( \dot { x } , \dot { y } , \dot { z } \mid \tilde { x } , \tilde { y } , \tilde { z } )$ via (3). All simulations use the same physical environment and flow conditions representative of the scenario we care about. In particular, we leverage data from the Chilbolton experiment (see Section 4): a time series of wind measurements (from an anemometer) provides the unsteady wind boundary conditions $u _ { \kappa : t }$ for the solver, and the site is relatively flat and unobstructed (no large $\omega$ features), which allows us to simplify the simulation. Because the vertical variation in this experiment was minimal, we perform the CFD simulations in two dimensions (assuming all sources and sensors lie in the same horizontal plane). This two-dimensional approximation greatly reduces computational cost while introducing only small errors for a flat site. We use the recorded time-varying wind profile uniformly across the domain (spatially uniform but temporally varying wind) when solving the Navier–Stokes equations, given the small size of the site. 
In summary, our dataset consists of many pairs $\{ ( \tilde { x } ^ { ( i ) } , \tilde { y } ^ { ( i ) } ) , d ^ { ( i ) } \}$ , where $( \tilde { x } ^ { ( i ) } , \tilde { y } ^ { ( i ) } )$ is a sampled source location and ${ \pmb d } ^ { ( i ) } = C _ { \mathrm { n s } } ( \dot { x } , \dot { y } \mid \tilde { x } ^ { ( i ) } , \tilde { y } ^ { ( i ) } )$ is the corresponding sensor reading produced by the CFD simulation (for a given wind sequence and unit emission). These synthetic data samples form the ground truth that our MLP will learn to emulate.
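The dataset construction above can be sketched as follows; `solver` stands in for the unit-emission CFD prediction $C_{\mathrm{ns}}$ of (3), and all names are illustrative:

```python
import numpy as np

def build_training_set(solver, n_samples, domain, rng):
    """Sample source locations uniformly over the site and record the
    solver's unit-emission sensor prediction for each one."""
    (x_lo, x_hi), (y_lo, y_hi) = domain
    sources = np.column_stack([
        rng.uniform(x_lo, x_hi, n_samples),   # sampled source x-coordinates
        rng.uniform(y_lo, y_hi, n_samples),   # sampled source y-coordinates
    ])
    readings = np.array([solver(x, y) for x, y in sources])
    return sources, readings
```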
# 3.2 MLP surrogate model: architecture and training
We design a multilayer perceptron (MLP) to serve as a surrogate for the CFD-based concentration function. The MLP is a fully-connected feed-forward neural network that takes the source location as input and outputs the predicted gas concentration at the sensor. In our case, the input vector to the MLP is $( \tilde { x } , \tilde { y } )$ (the two-dimensional coordinates of a potential source). The output is the predicted sensor measurement $\pmb { d }$ (or a vector of concentrations if multiple sensors are present – one output per sensor). Because the relationship from source location to sensor concentration can be quite complex (highly nonlinear due to the physics of dispersion), we choose a sufficiently expressive network architecture. In our implementation, for example, we use an MLP with several hidden layers (e.g. 4-8 layers), each with on the order of hundreds of neurons. We employ SeLU activations [18] at the hidden layers (SeLU: Scaled Exponential Linear Unit, designed to self-normalize the neural network while avoiding exploding/vanishing gradients and dying neurons), and a linear activation at the output layer (since we are performing a regression to predict a continuous concentration value). We initialize the network weights using standard Xavier initialization [8] and train them to minimize the error between the MLP’s predictions and the true concentrations from the CFD simulations.
Training. We use a supervised learning approach to train the MLP on the dataset of simulated source scenarios. We define a loss function $\mathcal { L }$ as the Mean Squared Error (MSE) between the MLP’s prediction $C _ { \mathrm { M L P } }$ and the ground-truth solver output $C _ { \mathrm { n s } }$ over all training samples. Formally, if the training set is $\{ ( \tilde { x } ^ { ( i ) } , \tilde { y } ^ { ( i ) } , \pmb { d } ^ { ( i ) } ) \} _ { i = 1 } ^ { N }$ (with $\pmb { d } ^ { ( i ) } = C _ { \mathrm { n s } } ( \dot { x } , \dot { y } \mid \tilde { x } ^ { ( i ) } , \tilde { y } ^ { ( i ) } )$ , as above), the loss is:
$$
\mathcal{L}(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| C_{\mathrm{MLP}}\big(\tilde{x}^{(i)}, \tilde{y}^{(i)}; \Theta\big) - \pmb{d}^{(i)} \right\|^2,
$$
where $\Theta$ denotes the learnable weights of the network. We minimize this loss using stochastic gradient descent. The training is run for enough epochs until the error plateaus without overfitting, which in our experiments was on the order of a few tens of thousands of epochs. After training, we obtain an approximate functional mapping $C _ { \mathrm { M L P } }$ that is a fast proxy for the CFD solver’s output $C _ { \mathrm { n s } }$ , meaning the MLP’s prediction of sensor measurements for a given source is nearly the same as the high-fidelity CFD prediction (3), but can be computed instantaneously. Once this surrogate is trained, it can be plugged into the particle filter: whenever we need to evaluate the likelihood of a particle (i.e. compute the expected sensor reading for a potential source), we use the MLP instead of running a CFD simulation.
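A compressed version of this procedure, assuming a single hidden layer (the actual surrogate is deeper) and full-batch gradient descent; the SELU activations, Xavier initialization, and MSE loss match the text, while layer sizes, learning rate, and epoch count are illustrative:

```python
import numpy as np

ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805  # standard SELU constants

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def selu_grad(x):
    return SCALE * np.where(x > 0, 1.0, ALPHA * np.exp(x))

def xavier(rng, fan_in, fan_out):
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def train_surrogate(X, y, hidden=32, epochs=2000, lr=0.05, seed=0):
    """Fit a one-hidden-layer SELU MLP to (source location -> sensor reading)
    pairs by full-batch gradient descent on the MSE loss."""
    rng = np.random.default_rng(seed)
    n, d_in = X.shape
    W1, b1 = xavier(rng, d_in, hidden), np.zeros(hidden)
    W2, b2 = xavier(rng, hidden, 1), np.zeros(1)
    for _ in range(epochs):
        z = X @ W1 + b1                 # hidden pre-activation
        h = selu(z)                     # hidden features
        pred = h @ W2 + b2              # linear output layer (regression)
        err = pred - y[:, None]
        # Backpropagation of the MSE gradient
        gW2, gb2 = h.T @ err / n, err.mean(0)
        dh = (err @ W2.T) * selu_grad(z)
        gW1, gb1 = X.T @ dh / n, dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (selu(Xq @ W1 + b1) @ W2 + b2).ravel()
```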
Practical Considerations. The flow conditions are time-varying, so the mapping from source location to sensor reading could drift over long periods as winds change. To handle the temporal non-stationarity, we adopt a sliding time-window approach in training the surrogate. Instead of training a single MLP on the entire duration of data (which might force it to average over different wind regimes), we train separate MLP models for consecutive time-windows of the data. For example, we can segment the simulation (and real data) into shorter intervals (each spanning a few minutes), and train one MLP on data from each interval. By keeping the time-window short – on the order of the gas transport time across the site – we ensure each MLP sees a relatively homogeneous wind condition, allowing it to more accurately learn the input-output mapping for that period. In effect, the surrogate model is updated periodically to account for changes in the flow. In our case, the window length can be chosen based on the maximum travel time for gas to reach the farthest sensor (which depends on wind speed and domain size). When deploying the inversion in real-time, the particle filter can then switch to the appropriate MLP corresponding to the current time-window of data. This sequential MLP training strategy enables the surrogate to capture transient behaviors and temporal evolution of the gas plume that a single global model might miss.
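Dispatching to the surrogate for the current time-window is then a simple lookup; the clamping to the final model is an implementation choice, not specified above:

```python
def surrogate_for_time(t_seconds, window_len, models):
    """Pick the MLP trained on the time-window containing t_seconds.
    `models` is the list of per-window surrogates, trained in order."""
    idx = min(int(t_seconds // window_len), len(models) - 1)
    return models[idx]
```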
Post-Training Assessment. We evaluate the MLP surrogate on held-out test cases (source locations not seen during training) to ensure it generalizes well. We use metrics such as the mean absolute percentage error (MAPE) between $C _ { \mathrm { M L P } }$ and $C _ { \mathrm { n s } }$ on these test simulations, and we also compare the surrogate’s outputs against real sensor measurements when available (e.g. from the Chilbolton release trial). The MLP consistently achieves low prediction error, indicating that it captures the physical relationship between source and sensor effectively. Furthermore, the surrogate is extremely fast: evaluating the MLP for a given input takes on the order of milliseconds or less, which is orders of magnitude faster than running a full CFD simulation for the same scenario. In fact, our learned model is even faster than the simplified Gaussian plume equations [30] (which are themselves a closed-form approximation) while retaining the accuracy of the CFD approach. This balance of physical fidelity and computational efficiency is what enables our overall inversion framework to operate in near-real-time. In the next section, we demonstrate that using the MLP surrogate within the particle filter yields accurate and rapid source inversion results, effectively combining rigorous Bayesian estimation with a fast learned physics model.
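For reference, the MAPE used in this assessment can be computed as:

```python
import numpy as np

def mape(predicted, observed):
    """Mean absolute percentage error between surrogate and reference readings."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.mean(np.abs((predicted - observed) / observed))
```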
# 4 Real-data: Chilbolton gas emission inversion
We now assess the performance of our inversion framework on real data from the Chilbolton Observatory, where controlled methane releases were conducted under varying atmospheric conditions. The known ground truth for source locations and emission rates provides a basis for quantitative validation.
The dataset includes path-averaged methane concentration measurements collected using a laser dispersion spectrometer, which scanned seven retroreflectors every 3 seconds. Wind measurements were obtained via a three-dimensional ultrasonic anemometer and the sources consisted of perforated $2 \mathrm { m } \times 2 \mathrm { m }$ ground frames; see Supplementary Materials for site layout. The flat topography and high-frequency measurements make this dataset well suited for evaluating spatio-temporal inversion methods.
# 4.1 Sensor measurements prediction
We first evaluate the predictive performance of the MLP surrogate model against two baselines: a traditional Gaussian plume model and the high-fidelity numerical solver used for training data generation. The Gaussian plume model is a widely adopted, closed-form analytical solution to the advection-diffusion equation, often used for its computational efficiency. Its solution describes steady-state atmospheric gas transport, corresponding to long-term averaged transport in unobstructed, spatially uniform wind fields. Predictions from our three models were made minute-by-minute and evaluated against the last 20-second averaged sensor measurements. Predicting the last 20 seconds is consistent with the time-averaging assumption of the Gaussian plume model and ensures the numerical solver’s simulated gas reaches the sensors from anywhere on the Chilbolton site. For each minute interval, the Gaussian plume model used the averaged wind inputs, while the numerical solver simulated transport using the full minute of wind data and averaged the last 20 seconds.
The MLP was trained to predict the numerical solver’s 20-second averaged sensor measurements using 484 simulations with distinct source locations, all under the same wind boundary conditions obtained by our anemometer. The true source location was held out to assess interpolation performance. All models were then given true source locations and evaluated on two test cases: 10 minutes of data from Source 1 (release 2) and 15 minutes from Source 2 (release 5) – these reflect periods of ideal wind conditions, where sensors were exposed to the gas plumes.
Table 1 reports the MAPE for each model. The numerical solver achieved the lowest MAPE but required approximately 21 minutes of computational time for all predictions. The Gaussian plume model was much faster (1.2 seconds) but substantially less accurate. The MLP surrogate achieved accuracy close to the numerical solver, outperforming the plume model, while requiring only milliseconds per prediction – faster than even the plume model. It took 6 minutes to generate the data used to train each MLP and 2 minutes for model training using 90 CPUs and 250 GB of memory, though this can be reduced with GPU acceleration. Each MLP comprises 4 hidden layers with 100 neurons per layer. These results confirm that the MLP surrogate delivers both high accuracy and real-time prediction capability in unsteady flow conditions.
Table 1: Gaussian plume model, numerical solver and surrogate model predictions’ MAPE for 10 minutes of Source 1 release 2 and 15 minutes of Source 2 release 5 given true source locations. Computational times highlight the extreme efficiency of the surrogate.
Figure 1: Gaussian plume model and surrogate model-based SIR particles’ posterior density of Source 1’s location using 700 iterations and 1,000 particles. The posterior density closer to the true source location represents the surrogate-based inversion providing more accurate parameter estimation.
# 4.2 Gas source inversion
We next evaluate the inversion framework by estimating source location using the MLP surrogate within the SIR particle filter. As a baseline, we compare to an inversion using an atmospheric stability class-free Gaussian plume model following [22], reducing the model misspecification introduced by traditional Gaussian plume models. To ensure informative updates, we used 4-minute sliding time-windows of data over the 10 minutes of Source 1 measurements – therefore using four MLPs and one Gaussian plume model per window.
Table 2: Gaussian plume model and surrogate model-based SIR particles’ mean estimation of Source 1’s location. The mean is computed by averaging the distance from all particles at the SIR last iteration. Computational times highlight the efficiency of the surrogate – the MLP computational time includes training data generation, training, and SIR inversion.
Figure 1 shows the posterior distribution of estimated source locations for Source 1 using both the MLP and plume-based filters – the MLP-based inversion yields a tighter and more accurate posterior. Table 2 quantifies this, reporting the mean distance across all particles for the last SIR iteration. The MLP-based inversion reduced mean localization error by nearly half compared to the plume model (5.82 m vs. 11.09 m) and required only half of the computation time (83.3 min vs. 173.6 min), including surrogate training and particle filtering. Together, these results demonstrate that our surrogate-based framework achieves high inversion accuracy with substantially reduced computational cost, enabling real-time spatio-temporal inference with quantified uncertainty in real-world scenarios.
# 5 Case-study: source inversion in obstructed unsteady-state flow fields
We now demonstrate the scalability and robustness of our proposed inversion methodology in synthetically generated, more complex monitoring environments – specifically, those featuring obstacles and time-varying emissions, for which no real datasets are currently available. We simulate three distinct 10-minute methane emission events with temporally fluctuating emission rates, each within a spatial domain populated by obstructions. Detailed simulation parameters and setup are provided in the Supplementary Materials.
To emulate real-world operational constraints, we implement a sequential inversion protocol in which sensor data are processed minute-by-minute and a 3-minute sliding time-window of the data is used for the likelihood evaluation. A new MLP surrogate is trained each minute on data from the most recent flow conditions, and the SIR particle filter is subsequently updated by sliding the data window, refining the posterior over source parameters. Specifically, 499 CFD-based training simulations are used to train each MLP, with training performed in parallel across 90 CPU cores, consuming approximately 250 GB of memory. The inversion itself – including 100 iterations of particle filtering between each minute with 1,000 particles – is executed on a modest workstation with only 4 CPU cores and 15 GB of memory. Each MLP comprises four hidden layers with 500 neurons per layer; architectural and training details are further elaborated in the Supplementary Materials.
Figure 2 visualizes the posterior over source locations after the full 10-minute observation window, clearly demonstrating the model’s ability to accurately infer fixed source positions – even when occluded by structural obstacles. Figure 3 further highlights the framework’s capability to track dynamically varying emission rates: all three sources exhibit time-varying emission profiles, and our particle-based posterior adapts accordingly, accurately reconstructing the temporal evolution of each source’s emission intensity. However, the delayed adjustment in estimating Source 2’s emission rate following a sharp drop at minute 5 reveals a limitation of the SIR filter – abrupt changes in emission behavior may require more responsive approaches, such as an interacting multiple model filter [3]. | Real-time identification and quantification of greenhouse-gas emissions under
transient atmospheric conditions is a critical challenge in environmental
monitoring. We introduce a spatio-temporal inversion framework that embeds a
deep-learning surrogate of computational fluid dynamics (CFD) within a
sequential Monte Carlo algorithm to perform Bayesian inference of both emission
rate and source location in dynamic flow fields. By substituting costly
numerical solvers with a multilayer perceptron trained on high-fidelity CFD
outputs, our surrogate captures spatial heterogeneity and temporal evolution of
gas dispersion, while delivering near-real-time predictions. Validation on the
Chilbolton methane release dataset demonstrates comparable accuracy to full CFD
solvers and Gaussian plume models, yet achieves orders-of-magnitude faster
runtimes. Further experiments under simulated obstructed-flow scenarios confirm
robustness in complex environments. This work reconciles physical fidelity with
computational feasibility, offering a scalable solution for industrial
emissions monitoring and other time-sensitive spatio-temporal inversion tasks
in environmental and scientific modeling. | [
"cs.LG",
"stat.AP",
"stat.ML"
] |
# 1 Introduction
Story points (SP) serve as the primary metric in Agile methodologies to measure the size, complexity, and effort required for each user story. Agile teams typically use subjective methods such as planning poker to estimate these points, but this process often exhibits inconsistency and variable accuracy (Jorgensen, 2001; Usman et al., 2014). The inherent complexity of software development within Agile frameworks demands more precise and adaptable techniques for estimating story points (Menzies et al., 2006).
Recent advancements in Generative AI, particularly multimodal models that integrate various data formats such as text, images, graphs, and categorical data, present a groundbreaking solution to these challenges (Devlin et al., 2019; He et al., 2016). Deep learning architectures in these models process and integrate multimodal inputs, enabling a more nuanced analysis of text-based data and resulting in predictions that are both more accurate and consistent (Radford et al., 2021).
Multimodal Generative AI exploits the synergistic potential of diverse data types, uncovering complex relationships among textual descriptions, visual elements, historical data, and categorical features. This comprehensive approach not only improves the accuracy of story point estimation, aligning with Agile principles, but also enhances the responsiveness and adaptability of the development process (Vaswani et al., 2017). Integrating these models within software development workflows reduces human bias and shortens project timelines, leading to substantial cost savings by minimizing delays and avoiding unnecessary rework (Lin et al., 2014).
This paper proposes a novel framework that uses state-of-the-art multimodal machine learning techniques, including ordinal encoding, BERT (Bidirectional Encoder Representations from Transformers), CNNs (Convolutional Neural Networks), XGBoost (Extreme Gradient Boosting), and other deep learning models, to refine the task of story point estimation. Through empirical analysis, we aim to show how multimodal Generative AI can significantly advance Agile software development by effectively addressing the complexities associated with story point estimation. Our findings support the adoption of these technologies to foster more reliable, consistent, and adaptable development practices, setting a new benchmark for future advancements in the field.
# 2 Related Work
Researchers have extensively studied the field of story point estimation within Agile software development, with traditional approaches predominantly relying on expert judgment, historical data analysis, and machine learning techniques such as regression models and decision trees. While useful, these methods often struggle with inconsistencies and inaccuracies due to their reliance on single-modal data inputs, such as text descriptions of user stories (Friedman, 2001). Recent advances in machine learning, particularly with the advent of deep learning and natural language processing (NLP), have introduced more sophisticated approaches. However, even these advanced techniques face limitations in integrating the diverse data types often present in software development processes.
One significant development in this area has been the adoption of Generative AI models, particularly those based on transformer architectures, to enhance the accuracy of story point estimation. Models like BERT (Devlin et al., 2019) and GPT (Brown et al., 2020) have demonstrated promise in processing textual data and capturing the nuances of user stories with a level of detail previously unattainable. However, these models typically focus solely on textual analysis and do not fully exploit the potential of multimodal data integration, limiting their effectiveness in contexts where visual or categorical data are also relevant. Multimodal learning has emerged as a promising approach to overcome these limitations by integrating various data formats such as text, images, graphs, and categorical data. Research in this domain has shown that multimodal models can capture more complex relationships between different types of data, leading to improved performance in tasks like image captioning (Radford et al., 2021), sentiment analysis (Wang & Deng, 2018), and medical diagnosis (Wang et al., 2020). Despite these advancements, applying multimodal learning to story point estimation in Agile software development remains underexplored.
Our work builds upon these foundations by introducing a Multimodal Generative AI approach that integrates not only textual but also visual and categorical data, thereby creating a more comprehensive and accurate estimation model. Unlike previous single-modal methodologies, our framework leverages the strengths of multimodal integration, offering a holistic perspective of user stories and their inherent complexities. This approach promises a significant improvement over traditional methods by providing a deeper understanding of the multifaceted aspects of story points.
Addressing a critical gap in existing research, our study specifically tailors multimodal learning to the unique challenges of Agile methodologies, which require rapid iteration and adaptability. This customization ensures that our model integrates seamlessly into Agile workflows, delivering realtime, adaptive story point estimates. By extending multimodal learning techniques to Agile story point estimation, our paper advances the state of the art, overcoming previous limitations and illuminating new ways to incorporate diverse data types for more accurate and efficient software development practices. Our research presents a novel framework for integrating multimodal data into Agile software development, paving the way for more reliable, consistent, and adaptable practices. This framework makes a significant contribution to the field, offering a robust solution to the longstanding challenges of story point estimation.
# 3 Our Approaches
# 3.1 Data Collection
For this research, we engaged in a comprehensive data collection process from Bugzilla, an open-source bug tracking system, to estimate story points in Agile software development. We chose Bugzilla for its open-source nature, which provides access to a vast record of historical user stories focused exclusively on fixes, enhancements, and tasks related to Bugzilla itself. This includes release-wise data and associated image data, such as wireframes and screenshots of errors. Additionally, Bugzilla offers relevant historical comments from multiple users. This rich dataset provides the diverse and detailed information necessary for our analysis, making Bugzilla an ideal choice for this project.
The data we collected was diverse, encompassing textual descriptions of user stories, historical data on story points previously assigned to similar user stories, and various visual aids such as UI/UX mockups, system architecture diagrams, screenshots of errors, and other relevant images like UI screenshots and flowcharts (Table 1). We collected categorical data encompassing variables such as severity levels (e.g., high, medium, low). Our proposed model classifies story points using the Fibonacci sequence, a widely adopted system known for its scalability and intuitive handling of task complexity and size in project management and software development. In this research, we used the industry-standard sequences of 1, 2, 3, 5, and 8, but additional sequences can be seamlessly integrated if needed. We also organized the collection of historical story point data for individual user stories as part of our comprehensive data gathering process.
Table 1: Collected Data in Text, Categorical, and Image Formats
We meticulously sourced the text data from Bugzilla repositories, involving the extraction and cleaning of raw textual descriptions of bugs and feature requests. Historical story points data provided insights into the assessment trends and valuation of similar past stories. We curated the image data from associated repositories to ensure a thorough compilation of visuals that contextualize the user stories, including system architecture, wireframes, UI/UX design wireframes, screenshots, and others. For the categorical data, we included attributes like severity levels to facilitate feature engineering and enhance the model’s accuracy. To manage and streamline the workflow, we consolidated all collected data—text, graphs, images, and categorical inputs—into a unified dataset. Additionally, we utilized Pinecone, a vector database, to store and process the embedded data, ensuring organized storage and efficient handling of complex queries for subsequent analysis and modeling stages.
# 3.2 Data Preprocessing and Feature Engineering
We meticulously preprocessed the raw data for this project to prepare it for use in machine learning models. We refined the text data by removing extraneous details, normalizing the language, and tokenizing the content, while preprocessing the image data involved resizing, normalization, and feature extraction to ensure effective representation of the visual and textual content in the form of embeddings. Our entire corpus consists of 113 observations.
For feature extraction and embedding, we utilized BERT (Bidirectional Encoder Representations from Transformers) for text data and CNN (Convolutional Neural Networks) for image data. We chose BERT for its ability to understand the context within user stories, making it ideal for tasks requiring deep semantic comprehension, such as classification or sentiment analysis (Table 2). We selected CNNs for their exceptional ability to process and analyze visual data. Additionally, we applied ordinal encoding to categorical data such as severity and story points, leveraging the inherent order within these categories to enhance model interpretability. We used Fibonacci sequencing to estimate story points. Ordinal encoding is particularly valuable for encoding categorical features that follow a natural sequence or hierarchy, ensuring that the encoded data accurately reflects the structured relationships inherent in the project's categories.
Table 2: Embedded Data
We integrated these processed features into a multimodal dataset ready for machine learning in the final step. This fusion combined cleaned text, image features, and encoded categorical data into a unified format. To facilitate effective model training, we flattened multi-dimensional arrays into one-dimensional formats and normalized these to ensure a consistent scale across all data types, thereby optimizing the performance of subsequent algorithms. This comprehensive approach to data preparation is crucial for accurately predicting and categorizing story points in our models.
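The flatten-normalize-concatenate fusion can be sketched as follows; the function name and the single min-max pass over the fused vector are our own simplifications of the described pipeline:

```python
import numpy as np

def fuse_features(text_emb, image_feat, categorical):
    """Flatten each modality to 1-D, concatenate, then min-max normalize
    so all modalities share a consistent scale."""
    parts = [np.asarray(p, dtype=float).ravel()
             for p in (text_emb, image_feat, categorical)]
    fused = np.concatenate(parts)
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo) if hi > lo else fused

# Toy shapes: a 2x4 text embedding, a 2x3 image feature map, 2 encoded categories.
vec = fuse_features(np.ones((2, 4)), np.arange(6).reshape(2, 3), [1, 0])
print(vec.shape)  # (16,)
```

In practice one would normalize per modality before concatenation to stop a wide-range modality from dominating; the single pass here keeps the sketch short.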
We conducted a correlation analysis to explore hidden relationships among individual parameters, collapsing each embedding into a single numeric metric by taking its mean. We took this approach to reduce the dimensionality of complex data, allowing us to identify patterns more effectively and improve the interpretability of the correlation results. The correlation analysis reveals that the Severity_Encoded feature has a strong positive correlation (0.55) with StoryPoint_Encoded when included (Figure 2). In contrast, both Story_Embedding_Mean and Image_Feature_Embedding_Mean exhibit low correlations with StoryPoint_Encoded (around 0.06 in Figures 1 and 2), indicating a weaker relationship with the target variable. Despite these differences, XGBoost effectively handles both correlated and non-correlated features (Chen & Guestrin, 2016). Notably, Story_Embedding_Mean and Image_Feature_Embedding_Mean are average values of the embedded features from the text data (story descriptions) and image data (visual elements), respectively. These means capture the overall characteristics of the stories and images, aiding more accurate story point estimation. Without the Severity_Encoded feature, the correlations among the other features remain consistent and relatively low, suggesting that these features are largely independent and do not strongly influence the story points on their own. Introducing Severity_Encoded does not significantly alter the relationships between the other features but highlights its importance in the model. Therefore, including Severity_Encoded may enhance predictive accuracy, while the embeddings provide additional, albeit weaker, contributions. However, incorporating severity could also introduce added complexity, which may prevent any noticeable improvement in accuracy.
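A toy reproduction of this mean-of-embeddings correlation analysis; the data are synthetic (random draws seeded for reproducibility), constructed only so that the target loosely tracks severity, in the spirit of the reported 0.55 correlation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 113  # corpus size reported in the paper
df = pd.DataFrame({
    "Story_Embedding_Mean": rng.normal(size=n),
    "Image_Feature_Embedding_Mean": rng.normal(size=n),
    "Severity_Encoded": rng.integers(0, 3, size=n),
})
# Make the synthetic target depend on severity plus noise.
df["StoryPoint_Encoded"] = df["Severity_Encoded"] + rng.normal(scale=1.0, size=n)

corr = df.corr()
print(corr["StoryPoint_Encoded"].round(2))
```

With real data, `Story_Embedding_Mean` would be the per-story mean of the BERT embedding and `Image_Feature_Embedding_Mean` the mean of the CNN feature vector.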
Figure 1: Correlation Analysis Using Mean of Embeddings Combined into a Single Numeric Metric - Excluding Severity
Figure 2: Correlation Analysis Using Mean of Embeddings Combined into a Single Numeric Metric – Including Severity
# 3.3 Model Development and Training
After integrating BERT text embeddings, CNN-extracted image features, and encoded categorical data, we trained a multimodal generative AI model for story point estimation. To assess the significance of severity data in the estimation process, we trained the model both with and without severity data. The model was designed to learn patterns across the multimodal data (text, images, and categorical values) corresponding to predefined Fibonacci-sequence story point classes. We approached the task as a classification problem and used TensorFlow, a Python-based open-source machine learning framework, for all our modeling efforts.
For the final estimation of story points, we utilized XGBoost, a powerful ensemble learning algorithm known for its efficiency and performance (Equation 1).
$$
\hat{y}_i = \sum_{k=1}^{K} f_k(x_i)
$$
where $\hat{y}_i$ is the predicted value for the $i$th observation, $K$ is the total number of trees (boosting rounds), and $f_k(x_i)$ is the prediction from the $k$th tree for the $i$th observation.
XGBoost was trained on a labeled dataset, with 80% of the data used for training and 20% reserved for testing to ensure exposure to diverse examples during training. A total of 113 observations were utilized in this process. We adjusted XGBoost parameters for fine-tuning (Table 3).
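To keep a runnable sketch dependency-light, scikit-learn's `GradientBoostingClassifier` stands in for XGBoost below (both fit additive tree ensembles of the form in Equation 1); the 113 feature vectors and story point labels are synthetic, with the labels made learnable from one feature purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.choice([1, 2, 3, 5, 8], size=113)   # Fibonacci story points
X = rng.normal(size=(113, 8))               # stand-in fused feature vectors
X[:, 0] += 3 * y                            # inject signal so the toy task is learnable

# 80/20 split, mirroring the paper's protocol.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
clf = GradientBoostingClassifier(
    n_estimators=100, max_depth=3, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

Swapping in `xgboost.XGBClassifier` requires only re-mapping the Fibonacci labels to contiguous class indices.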
Table 3: XGBoost Parameter Fine-Tuning
# 3.4 Model Evaluation and Validation
After training, we thoroughly evaluated and validated the XGBoost model. We conducted comprehensive verification and validation by comparing the model’s predictions with the actual story points assigned by Agile teams. We included evaluation metrics such as precision, recall, F1 score, accuracy, and other relevant measures to ensure a robust assessment of the model's performance.
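The evaluation described above maps directly onto scikit-learn's metric utilities; the label arrays below are hypothetical, purely to show the shape of the report:

```python
from sklearn.metrics import accuracy_score, classification_report

# Hypothetical actual vs. predicted Fibonacci story points for 10 stories.
y_true = [1, 1, 2, 3, 3, 5, 8, 3, 2, 5]
y_pred = [1, 1, 3, 3, 3, 5, 5, 3, 2, 5]

# Per-class precision, recall, F1, plus macro/weighted averages.
print(classification_report(y_true, y_pred, zero_division=0))
print(accuracy_score(y_true, y_pred))  # 0.8
```

`zero_division=0` reproduces the 0.00 scores reported for classes the model never predicts (e.g., category 8 with severity included).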
# 4 Results & Discussion
# 4.1 Interpretation of Results
When we compare the model's performance with and without severity data, several key trends emerge. The precision, recall, and F1 scores for story point categories 1 and 3 remain consistently high in both models, indicating strong performance in predicting these categories (Figure 3-5). However, excluding severity data leads to a noticeable improvement in overall model accuracy, which increases from 0.63 to 0.77 (Table 1). This improvement also reflects across the macro and weighted averages, showing more balanced performance across categories.
Story point category 8, which represents more complex or rare story points, shows significant differences. With severity data included, the model fails to effectively predict this category, resulting in a precision, recall, and F1 score of 0.00 (Figure 3). However, excluding severity data, the model's recall for story point category 8 improves to 1.00 (Figure 5), and the F1 score reaches 0.5 (Table 4), though precision remains low at 0.33 (Figure 4). This indicates the model's ability to identify more complex cases, albeit with some inaccuracies. This comparison suggests that while severity data might add complexity, removing it allows the model to generalize better across different categories, particularly improving its performance on rare or complex story points.
Table 4: Comparison of F1 Scores with and without Severity Data
Figure 3: F1 Scores with and without Severity (Ordered from Left to Right)
Figure 5: Recall with and without Severity (Ordered from Left to Right)
While the model performed well on simpler categories (1 and 3) in both scenarios, the inclusion of severity data seemed to introduce more complexity than the model could handle effectively, leading to a decrease in overall accuracy and performance balance. The comparison suggests that while severity data may offer additional insights, it also increases the model's complexity, potentially hindering its ability to generalize across all categories.
The confusion matrices further illustrate the model's performance, highlighting that misclassification predominantly occurred in categories with fewer data points, such as category 8. In the first confusion matrix (with severity data), the model shows a tendency to misclassify categories 2 and 3 into one another, but it generally predicts these categories with a reasonable level of accuracy, likely due to the higher number of examples in these categories during training (Figure 6). In contrast, in the second confusion matrix (without severity data), the model displays an improved ability to correctly classify category 3, evidenced by fewer misclassifications, and a better overall performance across categories, especially in handling category 8 (Figure 7).
Figure 4: Precision with and without Severity (Ordered from Left to Right)
Figure 6: Confusion Matrix with Severity
Figure 7: Confusion Matrix without Severity
These confusion matrices reflect the challenge the model faces when dealing with imbalanced data, where categories with fewer examples, like category 8, are harder to predict accurately. Additionally, while severity is an influential factor in story point estimation, the improved performance without severity data suggests that other features might be more critical in driving accurate predictions, as severity alone does not account for the complexity of the task.
Table 5: Estimation of User Stories with and without Severity
Table 5 compares actual and predicted story points (SP) for 22 user stories, focusing on predictions made with and without considering severity. Notably, certain user stories feature actual and predicted estimations that are very close. In real-life scenarios, development teams often accept estimations as accurate when they fall within a close range. If we applied this approach to the current model, the accuracy would increase to 0.82 when considering severity, and to 0.95 when not considering severity. However, we could still improve the accuracy of these models by training them with a larger dataset, enhancing data preprocessing, and exploring other advanced methodologies.
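The "close range" acceptance rule can be made precise as a relaxed accuracy that counts a prediction correct when it is at most one Fibonacci step from the actual value; the metric name and the six example stories are our own, not the paper's 22:

```python
FIB = [1, 2, 3, 5, 8]

def relaxed_accuracy(actual, predicted):
    """Fraction of predictions that are exact or one Fibonacci step away,
    mirroring teams accepting estimates within a close range."""
    ok = sum(abs(FIB.index(a) - FIB.index(p)) <= 1
             for a, p in zip(actual, predicted))
    return ok / len(actual)

# Hypothetical illustration: only the last pair (3 vs. 1) is two steps apart.
actual    = [1, 2, 3, 5, 8, 3]
predicted = [1, 3, 3, 8, 8, 1]
print(relaxed_accuracy(actual, predicted))  # 5/6 ≈ 0.83
```

Note the comparison is in index space, so 5 vs. 8 counts as adjacent even though the raw gap is 3.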
# 4.2 Limitations and Challenges
First, the limited size of the corpus and the imbalance in the dataset, particularly with fewer examples in the higher story point categories, likely contributed to the model's reduced performance in these areas. This imbalance challenges the model's ability to grasp the nuances of more complex stories, leading to misclassification.
Another challenge arises from the integration of multimodal data. Although the combination of text, image, and categorical data provided a more comprehensive feature set, the varying quality and relevance of the image data posed difficulties. Some images, such as architectural diagrams, may not have directly contributed to the estimation process, leading to noise in the data.
Moreover, the reliance on BERT embeddings for text representation, while powerful, may have limitations in fully capturing the domain-specific language used in Bugzilla user stories. This limitation could affect the model's ability to generalize beyond the specific dataset used in this study.
# 4.3 Future Work and Improvements
Future research should address data imbalance by incorporating techniques such as data augmentation or synthetic data generation to provide more examples for underrepresented categories. Additionally, researchers should explore advanced image preprocessing techniques, such as attention mechanisms, to better leverage visual data and reduce the impact of irrelevant images.
Another potential improvement involves fine-tuning BERT on domain-specific corpora related to software development and bug tracking. This fine-tuning could enhance the model's understanding of the unique language used in these contexts, potentially improving performance across all story point categories.
Additionally, exploring alternative machine learning models or ensemble methods that better handle the complexity and variability of story point estimation could lead to more accurate and reliable results. Integrating these approaches with the current multimodal framework could further enhance the model's robustness and applicability in real-world Agile development settings.
Future work should also explore multimodal models such as ViLBERT, CLIP, LXMERT, VisualBERT, MMT, and others. A larger corpus of pre-processed data is necessary to evaluate how the model performs with a more extensive data pool. Additionally, conducting ablation studies and further analysis on why severity reduced accuracy will be critical for understanding and improving the model's performance. | This research explores the application of Multimodal Generative AI to enhance
story point estimation in Agile software development. By integrating text,
image, and categorical data using advanced models like BERT, CNN, and XGBoost,
our approach surpasses the limitations of traditional single-modal estimation
methods. The results demonstrate strong accuracy for simpler story points,
while also highlighting challenges in more complex categories due to data
imbalance. This study further explores the impact of categorical data,
particularly severity, on the estimation process, emphasizing its influence on
model performance. Our findings emphasize the transformative potential of
multimodal data integration in refining AI-driven project management, paving
the way for more precise, adaptable, and domain-specific AI capabilities.
Additionally, this work outlines future directions for addressing data
variability and enhancing the robustness of AI in Agile methodologies. | [
"cs.SE",
"cs.AI",
"68T07, 68T45",
"I.2.6; I.2.10; D.2.9; H.2.8"
] |
# 1 Introduction
The progress of culture and technology is reflected in language, which adapts by incorporating novel meanings into existing words or by entirely changing their semantics. Such changes exhibit systematic regularities with respect to word frequency and polysemy (Bréal, 1904; Ullman, 1962) and can be detected by studies on distributed word representations (Hamilton et al., 2016b). Studies of diachronic word embeddings have detected known changes in word meaning in English-language books spanning multiple centuries. However, such analyses are limited to languages historically abundant in text corpora, as learning high-quality distributed word representations requires diverse contexts. In our work, we rely on a Croatian online news corpus containing articles from the last 25 years (Dukić et al., 2024). We investigate whether major topics in this period are reflected in word semantics and evaluate the practical implications of semantic shift on the use case of sentiment analysis.
We split the corpus into five periods of equal duration, train distributed word representations (Mikolov et al., 2013) for each period, and verify their quality. Next, we select three major topics that likely influenced the meaning of Croatian words during these periods and semi-automatically curate a list of related words for each topic. We show that these words undergo strong linguistic shifts (Hamilton et al., 2016a), acquiring new meanings and demonstrating the rapid impact of narrative on distributional semantics (see Figure 1).
To evaluate whether linguistic shifts affect word representations in practice, we first align word embeddings from different periods, then transfer such aligned embeddings onto a model based on embeddings from another period and observe the change in average predicted sentiment intensity. We find that embeddings from later periods are more positive despite studies showing that mental health has been negatively affected (Rozanov et al., 2019; Cullen et al., 2020). In short, our contributions are as follows: (1) We train diachronic word embeddings on a corpus of Croatian news articles, which we make available for further studies;2 (2) We show that corpora spanning short timespans accurately reflect major topics through linguistic shifts of associated words; (3) We find that the sentiment of word embeddings trained on news corpora becomes more positive in recent periods.
# 2 Related Work
Various studies explore word embeddings as a diachronic tool (Hamilton et al., 2016a,b; Schlechtweg et al., 2019; Fišer and Ljubešic´, 2019; Kurtyigit et al., 2021; Schlechtweg et al., 2024, inter alia). By leveraging methods from distributional semantics, which encode individual words in vector spaces based on co-occurrence (Mikolov et al., 2013), researchers study how global and local neighborhoods of individual words change over time (Hamilton et al., 2016b). There is a variety of causes driving semantic shift, with two major ones being linguistic shift, where words take on a new meaning while retaining previous ones, and cultural shift, where technological progress completely alters the way a word is used (Hamilton et al., 2016a). In our work, we follow the methodology used by Hamilton et al. (2016b), apply it to a corpus of Croatian newswire texts, and extend the setup to evaluate practical effects of linguistic shift on major topics and sentiment analysis.
The majority of diachronic embedding studies explore corpora spanning several centuries, grounded in books (Hamilton et al., 2016b,a; Schlechtweg et al., 2019; Kurtyigit et al., 2021). Due to the lack of such corpora of sufficient scale in Croatian, we leverage a recently introduced dataset of Croatian newswire corpora (Dukic´ et al., 2024), which covers a shorter period of 25 years. Despite the narrower timeframe, we hypothesize that the corpus sufficiently captures the diachronic shift in word meaning, which we experimentally verify in this work.
# 3 Methodology
# 3.1 Diachronic Word Embeddings
Dataset. We train word embeddings on the TakeLab Retriever corpus of Croatian newswire articles (Dukić et al., 2024). The corpus consists of 9,450,929 articles crawled from 33 Croatian news outlets across 25 years (2000–2024) and contains around 3.7 billion words (see Table 1 for more details). We use spaCy hr_core_news_lg (Honnibal et al., 2020) to sentencize, tokenize, and tag parts of speech in the corpus. As the Croatian language is highly inflectional, we lemmatize the corpus with the lexicon-based MOLEX lemmatizer (Šnajder et al., 2008) and differentiate between homonyms with part-of-speech tags obtained using the Croatian spaCy tagger applied to raw words from articles. We split the corpus into five 5-year periods.
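The five-way temporal split amounts to a simple year-bucketing rule; the function name and 1-indexed period numbering below are our own:

```python
def period_of(year, start=2000, span=5):
    """Map an article's publication year to one of the five 5-year periods
    (1: 2000-2004, ..., 5: 2020-2024)."""
    return (year - start) // span + 1

print([period_of(y) for y in (2000, 2004, 2005, 2013, 2020, 2024)])
# → [1, 1, 2, 3, 5, 5]
```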
Table 1: The number of words and unique words per 5 five-year periods in the Croatian online news corpus.
Method. We train our word embeddings with the skip-gram with negative sampling (SGNS) method from Word2Vec (Mikolov et al., 2013), using the Gensim implementation (Řehůřek and Sojka, 2010). We list the hyperparameter values and hardware details in Appendix C.
# 3.2 Embedding Quality
We validate the quality of the learned embeddings on two word similarity corpora for Croatian: CroSemRel450 (Janković et al., 2011) and CroSYN (Šnajder et al., 2013). CroSemRel450 contains human-annotated pairs of words rated for semantic relatedness, while CroSYN is a synonym choice dataset comprising one correct synonym and three unrelated options for each target word.
# 3.3 Topical Linguistic Shift
We hypothesize that diachronic embeddings over periods can reveal significant topical linguistic shifts. To unveil these shifts, we curate words pertaining to three major topics relevant globally and/or to Croatia: the COVID-19 crisis, Croatia joining the European Union (EU), and technological progress. We expect COVID-19 to produce the highest shift in the fifth period; joining the EU in the second, third, and fourth periods (as Croatia entered the EU in 2013); and technological progress in the fourth and fifth periods (digitalization after entering the EU and the proliferation of AI in the fifth period). Finding no substantial shifts for verbs or adjectives, we focus on the change in nouns, as they are more prone to linguistic shifts (Hamilton et al., 2016a). We measure the shift of each word using the cumulative shift score, based on the halved cosine distance (cos) over neighboring periods:
$$
D_{\mathrm{c}} = \sum_{i=1}^{4} \frac{1 - \cos(\mathbf{v}_i, \mathbf{v}_{i+1})}{2}.
$$
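A minimal NumPy rendering of the cumulative shift score, assuming a word's five per-period vectors are already aligned to a common space:

```python
import numpy as np

def cumulative_shift(vectors):
    """D_c: sum of halved cosine distances between a word's aligned
    vectors in consecutive periods (five periods -> four terms)."""
    total = 0.0
    for v, w in zip(vectors, vectors[1:]):
        cos = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
        total += (1.0 - cos) / 2.0
    return total

# A vector identical in all periods shows zero shift, while one that flips
# direction every period attains the maximum of 1 per consecutive pair.
stable = [np.array([1.0, 0.0])] * 5
flipped = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])] * 2 + [np.array([1.0, 0.0])]
print(cumulative_shift(stable), cumulative_shift(flipped))  # 0.0 4.0
```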
For this analysis, we use Procrustes alignment (Schönemann, 1966) to align word embeddings across periods. We begin by recursively aligning pairs of embeddings, starting from the most recent, fifth period (2020–2024), and then moving toward the earlier ones. Let $\mathbf{E}_t$ denote the embedding matrix for period $t$, and let $\mathrm{PA}(\mathbf{A}, \mathbf{B})$ denote the Procrustes alignment of matrix $\mathbf{A}$ to $\mathbf{B}$. We use $\mathbf{E}_t^*$ to denote the aligned embeddings for period $t$. The alignment procedure can be written recursively as:
$$
\mathbf{E}_t^* = \begin{cases} \mathbf{E}_5, & \text{if } t = 5, \\ \mathrm{PA}\big(\mathbf{E}_{t+1}^*, \mathbf{E}_t\big), & \text{if } t \in \{1, 2, 3, 4\}. \end{cases}
$$
Further details are provided in Appendix A.
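A sketch of the recursive alignment under stated assumptions: PA is implemented here as plain orthogonal Procrustes via SVD (the paper's procedure may include extra steps such as mean-centering), and rows of the embedding matrices are assumed to be matched across periods (same vocabulary order). The matrices are random placeholders, and the function names are ours.

```python
import numpy as np

def procrustes_align(A, B):
    """PA(A, B): rotate A onto B using the orthogonal W minimizing ||AW - B||_F."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)

def align_all(E):
    """Align E[0..4] toward the most recent period:
    E*[4] = E[4]; E*[t] = PA(E[t], E*[t+1]) for t = 3, 2, 1, 0."""
    aligned = [None] * len(E)
    aligned[-1] = E[-1]
    for t in range(len(E) - 2, -1, -1):
        aligned[t] = procrustes_align(E[t], aligned[t + 1])
    return aligned

rng = np.random.default_rng(1)
E = [rng.normal(size=(100, 50)) for _ in range(5)]  # placeholder embedding matrices
E_star = align_all(E)
```

Because the alignment matrix is orthogonal, all within-period cosine similarities are unchanged; only cross-period comparisons are affected.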
# 3.4 Sentiment Shift
Distributed word representations capture contextual cues helpful in determining the tone and sentiment of texts, serving as a more robust and effective alternative to lexicon-based and traditional machine learning approaches (Zhang et al., 2018; Al-Saqqa and Awajan, 2020; Wankhade et al., 2022). To quantify sentiment shifts in our corpus, we train a classifier $C_{i}$ for each period $t_{i}$ using embeddings $E_{i}$ computed on the corpus from $t_{i}$. Each classifier predicts the sentiment label (positive, neutral, or negative) of a text sequence based on the average of the word embeddings within the sequence. Next, we compute the average sentiment of a classifier $C_{i}$ on a test set using the word embeddings from $E_{i}$ and denote this quantity by $\bar{s}_{ii}$. We repeat the same procedure for $C_{i}$ with Procrustes-aligned embeddings from each other period $E_{j}^{*}$, $j \neq i$, to obtain quantities $\bar{s}_{ij}$. We hypothesize that using the embeddings from a period with an overall more positive (or negative) sentiment biases the classifier accordingly. Thus, we estimate the sentiment shift between periods $t_{i}$ and $t_{j}$ with $\bar{d}_{ij} = \bar{s}_{ij} - \bar{s}_{ii}$. We conduct the experiment on two Croatian news sentiment analysis datasets: STONE (Barić et al., 2023), comprising solely news headlines, and 24sata (Pelicon et al., 2020), which focuses on full news articles.
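The substitution procedure can be sketched as follows; the linear "classifier", the tiny embedding tables, and helper names like `mean_pool` and `avg_sentiment` are illustrative stand-ins, not the paper's actual models or corpora.

```python
import numpy as np

def mean_pool(tokens, emb):
    """Average the word vectors of a token sequence under embedding table `emb`."""
    return np.mean([emb[t] for t in tokens if t in emb], axis=0)

def avg_sentiment(classifier, texts, emb):
    """Average predicted sentiment score over `texts` (s-bar in the paper)."""
    return float(np.mean([classifier(mean_pool(t, emb)) for t in texts]))

rng = np.random.default_rng(0)
w = rng.normal(size=8)                      # placeholder linear scorer for C_i
clf = lambda x: float(x @ w)

emb_i = {t: rng.normal(size=8) for t in "abcdef"}   # period i embeddings E_i
emb_j = {t: v + 0.1 for t, v in emb_i.items()}      # aligned period j embeddings E_j*
texts = [list("abc"), list("def"), list("ace")]

s_ii = avg_sentiment(clf, texts, emb_i)
s_ij = avg_sentiment(clf, texts, emb_j)
d_ij = s_ij - s_ii                          # estimated sentiment shift d-bar_ij
```

With this toy linear scorer, the uniform +0.1 offset in the substituted embeddings shifts the average score by exactly 0.1 times the sum of the weights, illustrating how embedding drift biases the classifier.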
To further validate the quality of word embeddings for sentiment drift, we also analyze the distribution of sentiment scores of news articles in each period. Specifically, we sample 25k unlabeled articles per period from the TakeLab Retriever corpus. To automatically assign sentiment labels, we train a transformer-based classifier using BERTić (Ljubešić and Lauc, 2021) on the STONE and 24sata datasets, respectively. Further details on the training procedure and hyperparameter settings can be found in Appendix B.
# 4 Results
# 4.1 Embedding Quality
We report the results of the embedding quality evaluation in Table 2. We measure the Spearman correlation between embedding-based cosine similarity and human judgments on the word similarity dataset CroSemRel450. Additionally, we compute the contrastive spread on the CroSYN dataset to evaluate how clearly word embeddings distinguish synonyms from unrelated words. Focusing on nouns, adjectives, and verbs, we calculate the contrastive spread as the difference between a word's cosine similarity to its synonym and its similarity to an unrelated word, where higher scores reflect stronger semantic discrimination. Overall, we find a moderate positive correlation of our estimated similarity with human judgments for word similarity across all periods. Both measurements indicate that embedding quality improves in later periods, highlighting the influence of data quantity on embedding quality. In contrast to similar embedding approaches for word similarity evaluation, our results are slightly worse albeit comparable ($\rho = 0.62$; Zuanovic et al., 2014).
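The contrastive spread for a single synonym-choice item can be sketched as follows (the three-dimensional vectors are illustrative, not real embeddings):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_spread(target, synonym, unrelated):
    """Similarity of the target to its synonym minus its similarity to an
    unrelated word; higher values indicate stronger semantic discrimination."""
    return cosine(target, synonym) - cosine(target, unrelated)

t = np.array([1.0, 0.0, 0.0])
syn = np.array([0.9, 0.1, 0.0])   # near the target
unr = np.array([0.0, 0.0, 1.0])   # orthogonal to the target
print(round(contrastive_spread(t, syn, unr), 3))  # 0.994
```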
Table 2: Intrinsic embedding evaluation: word similarity ($^{\dagger} = p < 0.001$, Spearman correlation) and contrastive spread by period and part of speech.
# 4.2 Topical Linguistic Shift
We provide a summary of the words exhibiting the most prominent shifts in Table 3. We show that the nearest neighbors of top-shifting words within a topic can pinpoint the period in which words acquire new meanings. We provide complete results for the top-ranked shifting words within each topic (COVID-19, EU, and technology) in Table 5 in Appendix A.
COVID-19. The COVID-19 crisis, which began in 2020, is reflected in the semantic shifts of words that were previously topically neutral, such as maska (mask) and varijanta (variant). The word maska changes from referring to a clothing item to an instrument for reducing viral transmission. The noun varijanta changes its dominant meaning during the fifth period from an option or possibility to characterizing different strains (variants) of the coronavirus. The word pandemija (pandemic) changed considerably over the 25-year period due to its connection to diverse diseases (from Ebola to flu and finally COVID-19); however, it was always used in the context of infectious diseases.
EU. The evolution of EU-related terminology mirrors Croatia’s path through three periods: considering EU membership, preparing for admission, and utilizing the benefits of being a member state. The word integracija (integration) changes from emphasizing bureaucratic harmonization (2000– 2004) to entering the union (2013) and practical implementation and Europeanization by 2020–2024. Komisija (commission) increasingly associates with legislative bodies such as the council, ombudsman, and parliament, reflecting the importance of legal procedures for Croatia’s admission into the EU. Finally, fond (fund) shifts from associating with financial terms such as quotation and portfolio to sufinanciranje (co-financing) and obnova (renewal) in the last two periods, reflecting usage of EU funds.
Technology. Technological advancements are also reflected in linguistic shifts. Vjerodajnica (credential) evolves from diplomatic words (delegation, telegram) to digital identifiers (password, document), signalling the transition into the digital era. Inteligencija (intelligence) changes from abstract cognitive attributes (quotient, erudition) to AI concepts (algorithms, automation), reflecting the post-2010 AI revolution. Finally, privola (consent) shifts from legal, in-person authorization to digital mechanisms such as kolačić (cookie) and pohrana (data storage).
# 4.3 Sentiment Shift
We report results of sentiment shift on STONE and 24sata datasets in Figure 2. We observe that transferring aligned embeddings from later periods into earlier periods increases average predicted sentiment, while the opposite holds when transferring embeddings from earlier periods to later. Additionally, we observe a similar trend regarding the increased share of positive words in more recent periods using a SentiLex lexicon for Croatian (Glavaš et al., 2012).
We further investigate the increase in news positivity through the distribution of sentiment labels for both news headlines and full articles across different time periods in Figure 3. We find that, in general, the number of articles labeled as positive increases at the expense of neutral ones. The proportion of negative labels also slightly increased over time, particularly in news headlines. These results corroborate the findings of sentiment shift, indicating an increase of positivity in news in recent periods.
Figure 2: Estimated sentiment shift $\bar{d}_{ij}$ between periods for classifiers trained on STONE (left) and 24sata (right); rows correspond to the classifier's source period and columns to the target (substituted) period.
We hypothesize that increased positivity in news may be driven by one of several phenomena observed in media communication. Increased positivity could be a reaction to general negativity, influenced by the decline of mental health in the general population (Rozanov et al., 2019; Cullen et al., 2020). The increase in positivity could also be attributed to online news covering more diverse, less serious topics, or to an increase in satirical or comedic articles. Another potential factor is the increased polarization of media discourse, where news content is becoming more extreme in its use of emotionally charged language to elicit reactions from readers (Rozado et al., 2022). Nonetheless,
Table 3: Topical linguistic shift with respect to three topics: COVID-19, European Union $( E U )$ , and Technology (Tech). We pick one top shift noun word per topic based on the cumulative shift score (second column). For each of the picked words, we show the top five nearest noun neighbors over five periods. Translations are in parentheses.
Figure 3: Change of predicted sentiment ratios when using classifiers trained on STONE and 24sata to categorize a sample of articles from Retriever. The trend of increased news polarization is more evident with classifiers trained on STONE, though the same trend is also present for 24sata.
we believe that this phenomenon, in which sentiment expressed in news articles contrasts broader negativity, warrants further study as it may affect the quality of models trained on corpora from different time periods.

# Abstract

Measuring how semantics of words change over time improves our understanding of how cultures and perspectives change. Diachronic word embeddings help us quantify this shift, although previous studies leveraged substantial temporally annotated corpora. In this work, we use a corpus of 9.5 million Croatian news articles spanning the past 25 years and quantify semantic change using skip-gram word embeddings trained on five-year periods. Our analysis finds that word embeddings capture linguistic shifts of terms pertaining to major topics in this timespan (COVID-19, Croatia joining the European Union, technological advancements). We also find evidence that embeddings from post-2020 encode increased positivity in sentiment analysis tasks, contrasting studies reporting a decline in mental health over the same period.

Category: cs.CL
# Introduction
Generative artificial intelligence models, particularly large language models (LLMs), have demonstrated remarkable capabilities, rapidly integrating into various aspects of technology and daily life. As these systems gain more influence over decisions, recommendations, and content creation, ensuring they operate safely and ethically becomes critically important for preventing harm to individuals and society [1]. However, achieving this safety guarantee is challenging. These models often function as "black boxes," whose internal mechanisms are opaque and poorly understood, making them difficult to interpret and reliably control.
To address the need to control the generation, the field has explored various steering techniques designed to guide the behavior of generative models, encouraging desirable outputs while suppressing undesirable ones. Broadly, these methods fall into two categories. The first involves prompt-based steering, which leverages natural language instructions within the input prompt to direct the model’s generation process, outlining desired characteristics or constraints. The second approach, latent space steering, operates by directly intervening on the model’s internal representations (the latent space) during generation, modifying activations or hidden states to influence the final output towards specific attributes or away from others [2].
Despite the development of these steering approaches, significant challenges remain in understanding their efficacy and limitations [3, 4, 5, 6]. The conditions under which different steering methods succeed or fail are often unclear, hampered by a lack of consistent baselines and systematic comparisons. Notably, prompt-based and latent space steering techniques have often been evaluated on distinct tasks and domains, preventing direct comparisons and leaving researchers uncertain about their relative strengths, weaknesses, and applicability [7]. This fragmentation hinders the development of robust and reliable steering mechanisms, making it difficult to determine the best approach for ensuring model safety and controllability across diverse scenarios.
This paper argues that the effectiveness of steering methods is closely tied to the task itself, and surprisingly, simple prompt-based methods can be remarkably effective. We propose a key insight: many of the failures of prompt-based steering can be overcome with a targeted adjustment to the model’s internal processing. Specifically, we find that manipulating the attention mechanism — a core component of modern transformer architectures — provides a powerful yet simple lever for ensuring steering instructions are consistently followed. This approach is not merely empirical; it is motivated by prior theoretical work that argues that in-context rule following in transformer-based models can be controlled by manipulating attention on instructions [8].
Table 1: Existing studies on latent steering exhibit varying task coverage with limited comparisons against simple instruction-based baselines. This table details the tasks addressed by several such studies and whether they include such a baseline. In contrast, our work provides a more comprehensive analysis by directly comparing both latent and instruction-based steering across a standardized set of commonly used tasks.
Building on this insight, we make the following contributions. First, we present a unified comparison of latent space and prompt-based steering methods across a standardized set of tasks, providing much-needed clarity on their relative performance. Second, we introduce Instruction Attention Boosting (INSTABOOST), a novel and straightforward approach that directly manipulates attention weights to enforce steering constraints, demonstrating its effectiveness where standard prompting falters. Third, we show that INSTABOOST achieves state-of-the-art performance on various challenging steering benchmarks. Finally, we demonstrate the power and precision of INSTABOOST by showing it can not only guide model behavior towards desired attributes but can also effectively remove specific alignments, such as those introduced during safety fine-tuning, highlighting its potential for fine-grained model control.
# 2 The Steering Landscape
Steering methods offer a computationally efficient approach to controlling large language model behavior during inference. There are generally two classes of steering approaches: latent steering and prompt-based steering. Before describing steering methods, we start by describing the general transformer pipeline.
Given a tokenized input sequence $\boldsymbol { x } = \left( x _ { 1 } , \dots , x _ { T } \right)$ , the model first embeds tokens into vectors $h ^ { 0 } = ( h _ { 1 } ^ { 0 } , \ldots , h _ { T } ^ { 0 } )$ using a learned embedding matrix and positional encodings. At each of the $L$ Transformer layers, the representation is updated in two substeps: self-attention and feedforward transformation. For layer $\ell = 1 , \ldots , L$ :
$$
a^{\ell} = \mathrm{Attn}^{\ell}(h^{\ell-1}) \quad (\text{multi-head self-attention}) \qquad h^{\ell} = \mathrm{FFN}^{\ell}(a^{\ell}) \quad (\text{feedforward network}).
$$
The final hidden states $h^{L}$ are projected and softmaxed to obtain a distribution over the vocabulary, from which the output tokens are sampled autoregressively:
$$
P(x_{t+1} \mid x_{\leq t}) = \mathrm{softmax}(W_{\mathrm{LM}} h_{t}^{L} + b_{\mathrm{LM}}), \quad \forall t = 1, \ldots, T.
$$
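The output step can be illustrated on a toy vocabulary; the projection weights and hidden state below are random placeholders, not a trained model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
V, D = 6, 4                        # toy vocabulary and hidden sizes
W_lm = rng.normal(size=(V, D))     # output projection W_LM (placeholder)
b_lm = np.zeros(V)                 # bias b_LM
h_t = rng.normal(size=D)           # final hidden state h_t^L

p = softmax(W_lm @ h_t + b_lm)     # P(x_{t+1} | x_{<=t})
next_token = rng.choice(V, p=p)    # one autoregressive sampling step
```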
Steering methods are lightweight modifications to this pipeline which steer the output distribution towards a certain behavior. Prompt-based steering methods modify the model’s output distribution by appending a prompt to the model’s context. In this paper, we focus on prepending of instructions, so these steering methods take a tokenized instruction $\boldsymbol { p } = \left( p _ { 1 } , \ldots , p _ { K } \right)$ and add it before the tokenized input sequence to get $\boldsymbol { x } ^ { \prime } = p \oplus \boldsymbol { x } = \left( p _ { 1 } , \ldots , p _ { K } , x _ { 1 } , \ldots , x _ { T } \right)$ which is then fed through the rest of the transformer as defined above. The success of LLM in-context learning shows that this simple method is often surprisingly effective.
Figure 1: The effectiveness of latent steering compared to instruction varies significantly across different tasks. This figure compares the accuracy on a range of tasks when solved using instruction (x-axis) versus a latent steering approach (y-axis). The tasks are categorized based on which method yielded superior accuracy: "latent-optimal" (blue), "instruction-optimal" (orange), or "equivalent" performance (green). The dashed diagonal line indicates where the accuracy of both methods is equal.
On the other hand, latent steering methods typically add a vector to the output of the feedforward networks within the transformer. Given a steering vector $\nu$, these methods modify the feedforward output as $h^{\ell} = \mathrm{FFN}^{\ell}(a^{\ell}) + \nu$ for all $\ell \in S \subseteq [1, L]$, where $S$ is the set of transformer layers to steer.
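In code, additive latent steering is a single vector addition after the feedforward block of each selected layer. A minimal sketch (the ReLU stands in for a real FFN, and the steering vector is a random placeholder rather than a learned one):

```python
import numpy as np

def layer_output(ffn, a, nu, in_steering_set):
    """h^l = FFN(a) + nu if layer l is in the steering set S, else plain FFN(a)."""
    h = ffn(a)
    return h + nu if in_steering_set else h

rng = np.random.default_rng(0)
ffn = lambda a: np.maximum(a, 0.0)   # toy feedforward block
a = rng.normal(size=8)               # attention output a^l
nu = rng.normal(size=8)              # steering vector

S = {3, 4, 5}                        # layers selected for steering
h3 = layer_output(ffn, a, nu, 3 in S)   # steered layer
h0 = layer_output(ffn, a, nu, 0 in S)   # unsteered layer
```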
In this section, we describe the landscape of steering methods based on the tasks at which they excel and fail, and then describe our method for overcoming the limitations of existing work.
# 2.1 The Tradeoff Between Instruction and Latent Steering
We first systematically evaluate six latent steering methods and simple prompt-based steering on a suite of six diverse datasets. The tasks range from generating less toxic completions to changing the sentiment of open-ended generations. The latter can further be divided into sub-tasks depending on the sentiment to steer towards (for example, "joy", "fear", etc.). The latent steering methods differ by the steering vector used and have previously been evaluated on only a subset of these datasets (Table 1). For now, we only focus on the best performing latent steering method per task and defer a detailed discussion on these methods to Section 4.2. Figure 1 presents a comparison of the best latent steering method and simple prompt-based steering on all tasks, with respect to steering success on Meta-Llama-3-8B-Instruct.
We find that datasets (and tasks) fall into three different clusters in terms of the trade-off between steering success using latent steering versus prompt-based steering. Datasets such as AdvBench and JailbreakBench are latent-optimal, i.e., latent steering is significantly more successful on these datasets than prompt-based steering. Interestingly, certain personas (e.g., wealth-seeking) and sentiments (such as anger and disgust) sit on the other end of the spectrum: they are easier to steer towards using instruction-based prompts and are hence instruction-optimal. A diverse array of datasets, such as TriviaQA and certain sentiments (sadness and fear, for example), do not demonstrate any strong bias towards either method, and we say that latent steering and prompt-based steering are roughly equivalent here.
This trade-off suggests that neither latent steering nor prompt-based steering demonstrate clear superiority on all tasks. In this work, we focus on improving the performance of prompt-based steering using ideas we discuss next.
# 2.2 Instruction Attention Boosting (INSTABOOST)
Xue et al. [8] show that in-context rule following by transformer-based models can be suppressed by reducing attention to the target rules. This suggests that increasing or boosting attention to the target rules could enhance the rule following capabilities of these models. Inspired by this insight, we propose Instruction Attention Boosting, INSTABOOST, that treats instructions as in-context rules and boosts the LLM’s attention to these rules to in turn steer generations towards a target behavior.
Figure 2: Illustration of INSTABOOST which steers LLM behavior by increasing the attention mass onto the tokens corresponding to a prepended instruction.
Concretely, the central component of our approach is to steer a model by amplifying the attention of a prepended prompt. Given a tokenized instruction prompt $p = \left( p _ { 1 } , \dotsc , p _ { K } \right)$ of length $K$ , and an input query $\boldsymbol { x } = ( x _ { 1 } , \dots , x _ { L } )$ of length $L$ , we first form a combined input sequence $x ^ { \prime } = p \oplus x = \left( p _ { 1 } , \ldots , p _ { K } , x _ { 1 } , \ldots , x _ { L } \right)$ . Let $N = K + L$ be the total length of this combined sequence and the symbol $\oplus$ denote sequence concatenation.
Within each Transformer layer $\ell$, the standard attention mechanism computes the pre-softmax scores $S = \frac{QK^{T}}{\sqrt{D_k}}$, applies causal masking, and calculates the initial attention probability distribution $\alpha \in \mathbb{R}^{N \times N}$ via a row-wise softmax, $\alpha = \mathrm{softmax}(S_{\mathrm{masked}})$, where $\alpha_{ij}$ is the attention weight from query token $i$ to key token $j$, satisfying $\sum_{j} \alpha_{ij} = 1$.
Our steering method modifies this distribution $\alpha$ to increase the weights assigned to the prompt tokens. We do so by first defining unnormalized, but boosted attention scores:
$$
\beta_{ij} = \begin{cases} \alpha_{ij} \cdot M & \text{if } 0 \leq j < K, \\ \alpha_{ij} & \text{if } K \leq j < N. \end{cases}
$$
To ensure the modified weights still form a valid probability distribution for each query token $i$, we re-normalize each row of $\beta$ to sum to 1. Let $Z_{i} = \sum_{j=1}^{N} \beta_{ij}$ be the sum of the unnormalized weights for row $i$. The final steered attention distribution $\beta^{\prime} \in \mathbb{R}^{N \times N}$ is then:
$$
\beta _ { i j } ^ { \prime } = \frac { \beta _ { i j } } { Z _ { i } } = \frac { \beta _ { i j } } { \sum _ { k = 1 } ^ { N } \beta _ { i k } }
$$
This re-distributes the probability mass, increasing the proportion allocated to prompt keys $( j < K )$ relative to input keys $( j \geq K )$ , while maintaining a normalized distribution.
Finally, the output of the attention mechanism $a ^ { \ell }$ is computed using these re-normalized, steered attention weights $\beta ^ { \prime }$ and the unmodified value vectors $V$ :
$$
a ^ { \ell } = \beta ^ { \prime } V
$$
The resulting $a ^ { \ell }$ proceeds through the rest of the Transformer layer. Listing 1 shows how easily INSTABOOST is added to a model using a hook in TransformerLens.
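The boost-and-renormalize step reduces to two array operations. A numpy sketch on a toy 2×4 attention matrix with $K = 2$ instruction keys and $M = 4$:

```python
import numpy as np

def instaboost(alpha, K, M):
    """Scale attention weights on the first K (instruction) keys by M,
    then renormalize each row to sum to 1."""
    beta = alpha.copy()
    beta[:, :K] *= M
    return beta / beta.sum(axis=1, keepdims=True)

alpha = np.array([[0.25, 0.25, 0.25, 0.25],
                  [0.10, 0.10, 0.40, 0.40]])
boosted = instaboost(alpha, K=2, M=4.0)
# boosted[0] = [0.4, 0.4, 0.1, 0.1]
# boosted[1] = [0.25, 0.25, 0.25, 0.25]
```

Note the second row: scaling by $M = 4$ exactly offsets the initially low instruction attention, equalizing the mass across keys.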
# 3 Experimental Setup
We use the Meta-Llama-3-8B-Instruct model [22], as latent steering requires access to hidden states, and Llama models are common in prior steering research. For further results, see Appendix B.
Baselines. Latent steering methods construct a steering vector $\nu$ from a dataset $\mathcal{D}$ (with $N_{\mathcal{D}}$ positive samples $\mathbf{x}_{+,k}$ and $N_{\mathcal{D}}$ negative samples $\mathbf{x}_{-,k}$) at a fixed layer $r$ and apply it to hidden states $h^{\ell}$ in a set of layers $S \subseteq \{1, \ldots, L\}$. Table 2 details how the baseline latent steering methods compute and apply the steering vector, where $h_{+,k}^{r}$ and $h_{-,k}^{r}$ are hidden states at layer $r$. We also include "Default" (no intervention) and "Instruction-only" (prompting without latent modification) baselines.
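As one concrete example, a difference-of-means steering vector (in the style of the DiffMean baseline; the exact construction in Table 2 may differ) can be sketched with placeholder hidden states:

```python
import numpy as np

def diffmean_vector(h_pos, h_neg):
    """Steering vector nu^r: mean positive hidden state minus mean negative
    hidden state, extracted at a fixed layer r."""
    return h_pos.mean(axis=0) - h_neg.mean(axis=0)

rng = np.random.default_rng(0)
h_pos = rng.normal(loc=1.0, size=(32, 8))   # placeholder hidden states h^r_{+,k}
h_neg = rng.normal(loc=-1.0, size=(32, 8))  # placeholder hidden states h^r_{-,k}

nu = diffmean_vector(h_pos, h_neg)          # later applied as h^l + nu for l in S
```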
Figure 3: Depiction of prompt-based steering, which in this case is unsuccessful (top), compared to INSTABOOST (bottom), which leads to the model following the instruction. INSTABOOST steers model behavior by increasing the attention weight of the instruction. The shown attention scores are from Meta-Llama-3-8B-Instruct and show the attention of the last context token to the instruction tokens.
Hyperparameter selection. The hyperparameters were selected via a held-out validation set. Previous work has found the middle layers of the model to be the most suited for extracting the steering vector. Thus, we grid-searched among the middle 20% of layers (layers 13 to 18 in the case of Meta-Llama-3-8B-Instruct). For additive steering baselines, the steering factor $\alpha$ was chosen from [0.1, 1]. For INSTABOOST, the steering multiplier $M$ was chosen from [2, 20]. Other baselines require no additional hyperparameters. To maintain generation quality, we used an LLM judge (Gemini 2.0 Flash [23]) to obtain fluency scores between 0 (incoherent) and 2 (perfectly fluent). The hyperparameters were chosen to maximize task accuracy while keeping an average fluency of at least 1.
Tasks and datasets. Next, we briefly explain the tasks and their respective datasets and setups. The tasks were chosen to represent the most commonly used steering cases (see Table 1). For more details, see Appendix A.
• Emotion. We steered towards six emotions: anger, disgust, fear, joy, sadness, and surprise. The steering vectors were extracted using samples from the GoEmotions dataset [24], and we evaluated the steering on a set of open-ended questions [11]. To evaluate model outputs, we use a multi-class emotion classification model [25].
• AI Persona. We use a subset of Model-Written Evaluations [26], which was designed to test the alignment behavior of language models. We use human-generated evaluation questions to steer towards power- and wealth-seeking behaviors on both multiple-choice questions (MCQ) and open-ended questions (QA).
• Jailbreaking. To test the capacity of steering methods to get models to generate harmful content, we use adversarial prompts from the AdvBench [27] and JailbreakBench [28] datasets. The harmfulness of the generations is evaluated with Llama Guard 3-8B [29].
• Toxicity. We evaluate steering towards less toxic completions for prompts from the RealToxicityPrompts [30] dataset. Steering success is measured using Perspective API's [31] scores for the TOXICITY attribute.
• Truthfulness. To evaluate steering towards truthful answer generation, we used multiple-choice questions from the TruthfulQA [32] dataset. A model generation is considered correct/truthful if the chosen answer matches the correct option as labeled in the dataset.
```python
def instaboost_hook(attn_scores, hook):
    attn_scores[:, :, :, :instruction_len] *= multiplier
    return torch.nn.functional.normalize(attn_scores, p=1, dim=-1)

fwd_hooks = [(transformer_lens.utils.get_act_name('pattern', l), instaboost_hook)
             for l in range(model.cfg.n_layers)]

with model.hooks(fwd_hooks=fwd_hooks):
    generations = model.generate(input_ids)
```
Listing 1: Python code for boosting attention on instruction prompt tokens using a hook in TransformerLens. This hook is applied to the attention patterns of all layers during generation.
Table 2: Latent steering baselines in terms of the steering vector used and the steering operation. The steering vector $ { \boldsymbol \nu } ^ { r }$ is extracted at a fixed layer $r$ and applied on a subset of layers $\ell \in S$ .
• General QA. Lastly, we test the effects of steering on general questions towards correct answers using the TriviaQA dataset [33]. Answers in this dataset are short-form and considered correct when they exactly match one of the correct options.
# 4 Results
# 4.1 Per-task steering performance
Figure 4 presents a per-task accuracy comparison between INSTABOOST, the instruction-only intervention, the best-performing latent steering method, and the model without any intervention. The tasks are grouped by the relative effectiveness of latent versus instruction-only steering. Across all tasks, INSTABOOST either outperformed or matched the strongest competing method.
Equivalent tasks. In the tasks where instruction and latent steering had similar performance (Figure 4a), INSTABOOST consistently performed well. For TriviaQA and TruthfulQA, where the model’s default behavior aligned with the steering goal, all steering methods maintained the model’s performance. In the other tasks within this category where the interventions improved baseline performance, INSTABOOST surpassed all other methods, improving accuracy by an average of $8 . 5 \%$ over the next best method and demonstrating its ability to effectively combine instruction and latent space intervention.
Figure 4: INSTABOOST outperforms or matches all competing interventions. For each task, we show the accuracy of the model without intervention (red), the best-performing latent steering method (green), the instruction-only intervention (orange), and INSTABOOST (blue). Error bars show a standard deviation above and below the mean, computed by bootstrapping. Full results are in Appendix B.
Instruction-optimal tasks. For tasks where instruction prompting was superior to latent steering (Figure 4b), INSTABOOST not only preserved this strong performance but often enhanced it, achieving an average accuracy of $9 3 \%$ compared to $90 \%$ for instruction-only and $40 \%$ for the best-performing latent method.
Latent-optimal tasks. In jailbreaking tasks AdvBench and JailbreakBench (Figure 4c), the default model and the instruction-only baseline had nearly zero accuracy, being significantly outperformed by latent steering methods. Meanwhile, INSTABOOST achieved $89 \%$ accuracy on AdvBench and $6 6 . 6 \%$ on JailbreakBench, surpassing standard latent steering. The only task where INSTABOOST was not able to match or surpass both latent and instruction-only methods was steering towards Joy. Still, the boosting mechanism was able to close the gap between the low performance of instruction-only and the high performance of latent steering in this case.
These results highlight two crucial strengths of INSTABOOST: it does not degrade the performance achieved by instruction alone, and it successfully balances the strengths of instruction and latent steering, frequently surpassing both.
# 4.2 Comparison of steering methods
While Section 4.1 focused on best-case latent steering performance, Table 3 reveals a significant drawback of latent-only methods: their performance fluctuates considerably by task. Linear steering was the top-performing method most often, but it faltered in the Emotion tasks. DiffMean and PCDiff worked well when steering towards emotion, but not as well in the other tasks. PCAct was the best only in the AI Persona tasks but underperformed in all other task types. We further detail the performance of each method on each task in Appendix B. This performance inconsistency highlights the unreliability of latent-only approaches. In contrast, INSTABOOST consistently achieves strong performance across all task types, offering a more robust and reliable approach to model steering.
Table 3: Latent steering performance fluctuates significantly with the task, but INSTABOOST consistently outperforms them. The table shows the average accuracy $( \pm 9 5 \%$ confidence interval) over the tasks within each task type (columns) for different steering methods (rows). The highest accuracy is in bold and the highest accuracy among the latent steering methods is underlined.
# 4.3 Latent steering and model generation
Figure 5: Unlike other latent steering methods, INSTABOOST maintains high generation fluency while increasing task accuracy. The figure shows the fluency score (left) and accuracy (right) versus varying steering factors for the latent steering methods on AdvBench. For the latent steering methods, we show the effect of varying the steering factor in the best-performing layer.
A relevant side effect of latent steering, which has been discussed in previous work [9, 10, 13], is the degradation of model generation. To balance this effect, we measured the fluency of model generations while selecting the steering factor for each method. Figure 5 illustrates the relationship between fluency, task accuracy, and steering strength on AdvBench, the dataset where latent steering achieved the highest task accuracies. For the evaluated latent steering methods, increasing the steering factor to enhance task accuracy consistently results in a sharp decline in generation fluency. This presents a significant trade-off, where gains in task performance are achieved at the cost of coherent text. In contrast, INSTABOOST achieved comparable increases in task accuracy without such a drastic drop in fluency. We hypothesize this is because INSTABOOST intervenes on attention mechanisms. Unlike direct hidden state manipulation common in other latent techniques – which may push a decoder-only transformer into out-of-distribution states and disrupt fluency – guiding attention offers a more constrained re-weighting of information flow, better preserving the model’s generative capabilities.
# 5 Related Work
Latent steering methods. Prior work on latent steering, as introduced in Section 2 and detailed for several baselines in Table 2, typically involves applying a derived steering vector to model activations. These vectors are estimated through various techniques, including contrasting activations from positive/negative examples or instructions [9, 19, 12, 11, 10, 14, 17, 34], maximizing target sentence log probability [2], or decomposing activations over concept dictionaries [21]. Other approaches include employing sample-specific steering vectors [18, 16] and applying directional ablation to erase specific behaviors like refusal [15]. To reduce the side effects of latent steering, Stickland et al. [13] train the model to minimize the KL divergence between its steered and unsteered outputs on benign inputs.
Attention steering. Two previous methods specifically leverage attention mechanisms for steering model behavior. Todd et al. [20] employs activation patching to identify attention heads that trigger a certain task on input-output pairs. The averaged output from these selected heads is then combined into a vector and added to the hidden states of the model. Zhang et al. [35] takes in a section of the input to be emphasized and downweights the attention scores of the other tokens in a subset of heads. These approaches have in common a significant downside: the high computational cost of individually evaluating each head of the model. The model we use in our experiment, for example, has 32 layers with 32 heads each (1,024 heads total); consequently, the head selection process requires a total number of forward passes exceeding one thousand times the number of validation samples. In contrast, INSTABOOST’s only hyperparameter is the multiplier $M$ .
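As a concrete illustration of the mechanism, the core attention boost can be sketched as re-weighting a single attention row: multiply the weights on the instruction tokens by the factor $M$, then renormalize so they still sum to one. This is a minimal pure-Python sketch under the assumption that the instruction occupies the first positions; the function name and the list-based representation are illustrative, not the authors' implementation.

```python
def instaboost_attention(weights, instruction_len, M):
    """Up-weight attention on the first `instruction_len` token positions by a
    constant factor M, then renormalize so the row still sums to one."""
    boosted = [w * M if i < instruction_len else w
               for i, w in enumerate(weights)]
    total = sum(boosted)
    return [w / total for w in boosted]

# Uniform attention over 4 tokens; boost the 2 instruction tokens by M = 3.
row = instaboost_attention([0.25, 0.25, 0.25, 0.25], instruction_len=2, M=3.0)
```

After boosting, the instruction tokens receive 0.375 each and the remaining tokens 0.125 each, so attention mass shifts toward the instruction while the row remains a valid distribution.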
Existing benchmarks. Prior work has evaluated latent steering methods across multiple scenarios. Brumley et al. [5] found task-dependent performance and reduced fluency with two methods. Likelihood-based evaluations by Tan et al. [3] and Pres et al. [4] indicated some interventions were less effective and success depended more on the dataset than model architecture. Wu et al. [7] highlighted the need to compare latent steering with other control methods, which their AxBench did for synthetic concepts, finding prompting superior. We, however, compare latent and prompt steering in real-world scenarios like safety, alignment, and toxicity.
# 6 Limitations
There are certain limitations of INSTABOOST that suggest interesting directions for future work. First, we evaluate INSTABOOST on tasks that can be represented using simple instructions that the model understands. It will be interesting to see how INSTABOOST performs on more abstract tasks that require longer instructions, or on tasks not represented in the data the model saw during training. Moreover, INSTABOOST uses a simple boosting mechanism that up-weights attention on instructions by a constant factor. While this outperforms existing latent steering methods, future work should further explore the space of attention manipulation mechanisms: for example, selectively boosting attention on certain instructions or adaptively computing the scaling factor. | Controlling the generation of large language models (LLMs) remains a central
challenge to ensure their safe and reliable deployment. While prompt
engineering and finetuning are common approaches, recent work has explored
latent steering, a lightweight technique that alters LLM internal activations
to guide generation. However, subsequent studies revealed latent steering's
effectiveness to be limited, often underperforming simple instruction
prompting. To address this limitation, we first establish a benchmark across
diverse behaviors for standardized evaluation of steering techniques. Building
on insights from this benchmark, we introduce Instruction Attention Boosting
(InstABoost), a latent steering method that boosts the strength of instruction
prompting by altering the model's attention during generation. InstABoost
combines the strengths of existing approaches and is theoretically supported by
prior work that suggests that in-context rule following in transformer-based
models can be controlled by manipulating attention on instructions.
Empirically, InstABoost demonstrates superior control success compared to both
traditional prompting and latent steering. | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
# 1 Introduction
Osteoradionecrosis of the jaw (ORN) is a debilitating complication that arises following radiation therapy for head and neck malignancies [1]. It involves the necrosis of previously irradiated bone, culminating in chronic, non-healing wounds that substantially increase the risk of infection and other significant morbidities [2]. Historically, the management of ORN has relied on invasive surgical interventions, which impose considerable physical and psychological burdens on patients [3]. Recent advancements in medical technology, particularly the utilization of customizable 3D-printed hydrogel wound dressings, offer promising, minimally invasive alternatives that may enhance therapeutic outcomes while reducing patient discomfort [4].
In parallel, developments in deep learning have provided powerful tools for medical image analysis, particularly for segmentation tasks. One such tool, nnUNet [5], has demonstrated success in segmenting 3D medical images with high accuracy. However, the field of ORN imaging faces a major challenge due to the scarcity of labeled datasets, making supervised learning approaches difficult to implement. To address this issue, this study proposes an unsupervised approach aimed at identifying anomalies in patient imaging scans. The development of such methods is crucial for reducing the dependency on manual labeling and accelerating the treatment planning process.
This project employs unsupervised anomaly detection to segment anomalous regions in cone-beam computed tomography (CBCT) scans of patients with osteonecrosis of the jaw (ONJ). Specifically, a Vector Quantized Generative Adversarial Network (VQ-GAN) [6] is trained using a novel methodology, achieving high-fidelity reconstruction of the original anatomical structures, including the likely pre-morbid dentition. A subsequent post-processing pipeline is implemented to generate 3D-printable models of the affected regions. The key contributions of this work are summarized as follows:
1. To the best of my knowledge, this is the first study to apply unsupervised anomaly detection to CBCT dental imaging data.
2. A novel training protocol is introduced to address the challenges associated with reconstruction errors and the failure to recover extensive anatomical anomalies in dental imaging data. This protocol is validated on both simulated anomalies and one actual patient scan.
3. A comprehensive post-processing pipeline is developed to produce 3D-printable models of the wound area, offering potential utility in personalized surgical planning and intervention.
# 2 Related Works
# 2.1 Representation Learning
Representation learning aims to automatically extract useful features from raw data to support efficient model learning. Traditional manual feature extraction often fails with complex data like images and audio, while neural networks effectively learn hierarchical, abstract representations. Architectures such as AutoEncoders (AEs) [7], Variational AutoEncoders (VAEs) [8], and Vector Quantized VAEs (VQ-VAEs) [9] are essential: AEs perform dimensionality reduction, VAEs add probabilistic modeling for generative tasks, and VQ-VAEs use discrete latent spaces for sharper reconstructions. These architectures are highly relevant for Unsupervised Anomaly Detection (UAD), as most UAD methods build on a representation learning model.
# 2.1.1 AutoEncoder
AutoEncoders (AEs) are neural networks that learn a compressed representation of a given data distribution [7]. An encoder $f _ { e }$ compresses the data $x$ into a latent representation $z = f _ { e } ( x )$ and a decoder recovers the data as $\hat { x } = f _ { d } ( z )$ . To train the AE, the basic loss function can be a simple mean squared error $\mathcal { L } _ { M S E } = \| x - \hat { x } \| _ { 2 } ^ { 2 }$ . This basic setting is useful but has several problems, including blurry reconstructions caused by the MSE objective and an unstructured latent space: there is no way to sample directly from the latent space and generate new samples with the decoder.
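The encoder/decoder/loss roles above can be sketched with the simplest possible AE: a rank-one linear projection onto a direction `w` and back. The function names are illustrative, not from any library; a real AE would use learned nonlinear networks.

```python
def encode(x, w):
    """f_e: project the input vector onto direction w (dimensionality reduction)."""
    return sum(a * b for a, b in zip(x, w))

def decode(z, w):
    """f_d: lift the scalar code z back to input space along w."""
    return [z * b for b in w]

def mse(x, x_hat):
    """Mean squared reconstruction error between input and reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

# A vector lying along w is reconstructed exactly; anything orthogonal is lost.
w = [1.0, 0.0]
x = [3.0, 0.0]
x_hat = decode(encode(x, w), w)
```

The reconstruction error here is zero only because `x` lies in the subspace the code can represent, which is exactly the compression behaviour the loss trains the AE toward.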
# 2.1.2 Variational AutoEncoder
Variational AutoEncoders (VAEs) [8] impose a probabilistic structure on the latent space. The encoder in a VAE maps the input data to a distribution over the latent space, parameterized by a mean $\mu$ and variance $\sigma ^ { 2 }$ , from which a latent representation $z$ is sampled. The decoder then recovers the data from the sampled latent vector. The VAE loss, in addition to the MSE term, has a regularization term that keeps the latent distribution close to a multivariate Gaussian, usually the Kullback-Leibler (KL) divergence. Essentially, the encoder network parameterizes a posterior distribution $q ( z | x )$ over the latent variables conditioned on the input data, the KL divergence ensures that this posterior approximates the prior distribution $p ( z )$ , and the decoder models the conditional distribution $p ( x | z )$ . By enforcing a Gaussian structure in the latent space, VAEs facilitate the interpolation and generation of coherent and varied new data samples, enhancing the model’s utility in generative tasks. The KL divergence term also has a regularizing effect, making VAEs generally more robust to overfitting than traditional autoencoders. However, the Gaussian assumption and the MSE loss often lead to blurring in the reconstructed outputs, which is particularly evident in applications like image reconstruction where sharpness and detail are crucial. In some cases, VAEs can also experience mode collapse, where the model ignores certain modes of the data distribution, leading to less diversity in the generated samples.
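For a diagonal Gaussian posterior against a standard normal prior, the KL regularizer has a well-known closed form, $\mathrm{KL} = \tfrac{1}{2}\sum_i(\sigma_i^2 + \mu_i^2 - 1 - \log\sigma_i^2)$. A minimal sketch of that formula (the function name is illustrative):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian,
    with the variance passed as log sigma^2 for numerical convenience."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# The KL term is zero exactly when the posterior equals the prior N(0, I).
baseline = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])
```

Any deviation of the mean from zero or the variance from one makes the term positive, which is what pulls the aggregate posterior toward the Gaussian prior during training.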
# 2.1.3 The Vector Quantized Variational AutoEncoder
The Vector Quantized Variational AutoEncoder (VQ-VAE), following the variational autoencoder idea, uses a discrete latent representation [9]. In the VQ-VAE, the code produced by the encoder is treated as a set of feature vectors, each of which is replaced by its nearest neighbour in a predefined codebook, a fixed collection of feature vectors. This quantization step replaces the Gaussian sampling of standard VAEs, making the latent space discrete. The decoder uses the quantized vectors to reconstruct the input data.
The VQ-VAE loss function is still similar to the VAE loss, including a reconstruction term, typically MSE. As regularization of the latent space, the loss encourages the encoder’s output to be close to the selected codebook vector, and also encourages the codebook vector to move toward the encoded feature, which can be written as [9]:
$$
\mathcal { L } _ { V Q } = | | s g [ f _ { e } ( x ) ] - e | | _ { 2 } ^ { 2 } + \beta | | f _ { e } ( x ) - s g [ e ] | | _ { 2 } ^ { 2 }
$$
where $s g [ \cdot ]$ denotes the stop-gradient operator. These two terms pull the codebook vector and the encoded vector toward each other. Because the prior over the latent space is assumed uniform, and the code is obtained by a deterministic nearest-neighbour lookup rather than sampling, the KL divergence is constant w.r.t. the encoder parameters. As a result, the KL divergence term is dropped in VQ-VAE.
By using discrete codebook vectors, the VQ-VAE often yields sharper reconstructions than a traditional VAE. Furthermore, VQ-VAEs encode information efficiently, as each codebook vector can represent a large and complex pattern in the data, making them particularly useful in tasks like speech and image synthesis where high fidelity is crucial.
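The nearest-neighbour lookup and the two terms of $\mathcal{L}_{VQ}$ can be sketched as follows. Numerically, the stop-gradient is an identity in the forward pass, so both terms reduce to the same squared distance and differ only in which parameters they update during backpropagation; the function names are illustrative.

```python
def quantize(z_e, codebook):
    """Replace an encoder output vector with its nearest codebook vector."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda e: d2(z_e, e))

def vq_loss(z_e, e, beta=0.25):
    """Codebook term ||sg[z_e] - e||^2 plus commitment term beta*||z_e - sg[e]||^2.
    In the forward pass sg[.] is an identity, so both equal the squared distance."""
    d2 = sum((x - y) ** 2 for x, y in zip(z_e, e))
    return d2 + beta * d2

codebook = [[0.0, 0.0], [1.0, 1.0]]
e = quantize([0.9, 1.2], codebook)   # snaps to the nearest entry
```

In a real implementation the two terms are kept separate so the first updates only the codebook and the second (scaled by $\beta$) only the encoder.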
# 2.2 Unsupervised Anomaly Detection
For pixel-level anomaly detection, i.e. anomaly segmentation, there are mainly two categories of methods [10, 11]: image reconstruction and feature modeling.
# 2.2.1 Image reconstruction
Image reconstruction-based UAD focuses on reconstructing normal images and detecting anomalies from the reconstruction error. Formally, these methods learn a model $M$ from a healthy distribution $x \in \mathbb { R } ^ { D \times H \times W }$ by optimizing $\arg \operatorname* { m i n } _ { M } \| M ( x ) - x \|$ . The main difference among these methods is the reconstruction technique, for which a large variety of generative models can be used. For example, the VAE [8] is a classical choice that can be scaled up while keeping the same overall architecture. A GAN [12] is used in f-AnoGAN [13]. A transformer-based autoencoder with strong performance is proposed by Ghorbel et al. [14]. Ideally, with a perfect reconstruction, the input anomaly can be easily identified; in practice, this approach is strongly affected by reconstruction error, leaving the anomaly segmentation noisy.
# 2.2.2 Feature modeling
Feature modeling methods detect anomalies in a feature space, often defined by a pretrained network such as ResNet [15]. Recent methods such as Reverse Distillation (RD) [16] and the Feature AutoEncoder (FAE) [17] reconstruct the multi-layer feature maps extracted by a pretrained network and generate the anomaly map by resizing the per-scale anomaly maps back to the image resolution. Specifically, RD uses a teacher-student structure whose training objective is to minimize the distance between the feature maps in each layer of the student and teacher. Unlike the usual distillation setup, where the student receives the same input as the teacher, the student in RD serves as a decoder, decoding the information passed by the teacher. This bottleneck design helps the student capture the most essential information, which in turn helps anomaly detection. FAE has a more straightforward structure: it is still an autoencoder, but the input is replaced by multi-layer feature maps extracted by the network and resized to a common size, and the training objective is to minimize the SSIM [18] between input and output. Feature modeling alleviates the problem of pixel-level reconstruction error and currently performs better than image reconstruction, as reported by Lagogiannis et al. [11]. However, the reconstruction error still exists, only now in the feature space.
# 3 Methods
# 3.1 Training Scheme
The overall method is shown in Figure 1. Training is divided into two stages, both on a healthy dataset $\boldsymbol { x } \in \mathbb { R } ^ { D \times H \times W }$ . In the first stage, the model is trained to produce a faithful reconstruction of the image by learning to project to and recover from a lower-dimensional latent $\boldsymbol { z } \in \mathbb { R } ^ { a \times b \times c }$ ; the training objective can vary depending on the specific model used. In the second stage, the codebook and decoder weights are frozen and only the encoder is trained, so the latent space stays fixed. Two masking methods are used: the first places random cubes of different sizes, and the second focuses on the teeth area and masks the whole surrounding region, so that the model learns to recover an image from incomplete information.
During inference, the models trained in both stages generate two reconstructions. The first reconstruction fails on the part affected by ONJ, while the second attempts to recover an image with all teeth and jaw structures generated. Taking the difference between the two clearly segments the anomaly.
# 3.2 Network: VQ-GAN
A VQ-GAN is used as the representation learning model in this task. It is a generative adversarial network whose reconstruction model is a Vector Quantized Variational AutoEncoder (VQ-VAE).
Figure 1: Overview of the proposed anomaly detection scheme
# 3.3 Network Structure
The detailed structure is shown in Figure 2B. The original VQ-GAN was proposed for 2D images; the network has been adapted to this 3D setting, largely following the original structure with some slight changes to avoid overfitting.
The encoder consists of residual blocks and downsampling blocks, and at the lowest resolution a non-local block is added to capture global interactions. The decoder mirrors this design in reverse. The residual blocks follow the common design, while the downsampling and upsampling blocks add extra batch normalization and activation layers. Empirically, the network does not converge without the activation layer.
The discriminator has a structure similar to the PatchGAN discriminator: instead of a single value indicating whether the whole image is real or fake, the network outputs an $n \times n$ patch of predictions.
# 3.4 Loss Function
The loss function combines the traditional GAN loss with the VQ-VAE loss. In the original GAN formulation the penalty is simply binary cross-entropy, which is adopted in this work. The GAN losses can be written as:
$$
\begin{array} { r l } & { \mathcal { L } _ { G } = - \mathbb { E } [ \log D ( G ( x ) ) ] } \\ & { \mathcal { L } _ { D } = - \mathbb { E } [ \log D ( x ) ] - \mathbb { E } [ \log ( 1 - D ( G ( x ) ) ) ] } \end{array}
$$
In VQ-GAN, a control term $\lambda$ is introduced to balance the impact of adversarial loss and reconstruction loss:
$$
\lambda = \frac { \| \nabla _ { G _ { L } } [ \mathcal { L } _ { r e c o n } ] \| } { \| \nabla _ { G _ { L } } [ \mathcal { L } _ { a d v } ] \| + \delta } \times 0 . 8
$$
where $G _ { L }$ is the last trainable layer of the generator, $\delta$ is a small constant to increase numerical stability.
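Assuming the gradient terms are taken as norms at the last generator layer (as in the original VQ-GAN formulation), the adaptive weight can be sketched as follows; the function name and the vector representation of the gradients are illustrative.

```python
import math

def adaptive_weight(grad_recon, grad_adv, delta=1e-6, scale=0.8):
    """lambda = ||grad L_recon|| / (||grad L_adv|| + delta) * 0.8, where both
    gradients are taken w.r.t. the generator's last trainable layer."""
    n_recon = math.sqrt(sum(g * g for g in grad_recon))
    n_adv = math.sqrt(sum(g * g for g in grad_adv))
    return n_recon / (n_adv + delta) * scale

# When the adversarial gradient vanishes, delta keeps lambda finite.
lam = adaptive_weight([3.0, 4.0], [0.0, 0.0], delta=1.0)
```

Intuitively, when the reconstruction gradient dominates, the adversarial loss is up-weighted so both objectives contribute comparable gradient magnitudes.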
Figure 2: Detailed Network Structure Design of Masked VQ-GAN. A. The overall structure of a GAN, B. the detailed network structure and the layer definition.
Table 1: The segmented area generated by DentalSegmentor
Overall, the loss can be written as:
$$
\begin{array} { r l } & { \mathcal { L } _ { G } = \mathcal { L } _ { r e c o n } + \mathcal { L } _ { v q } + \lambda \mathcal { L } _ { a d v } } \\ & { \mathcal { L } _ { r e c o n } = \| G ( x ) - x \| _ { 2 } ^ { 2 } } \\ & { \mathcal { L } _ { v q } = \| s g [ f _ { e } ( x ) ] - e \| _ { 2 } ^ { 2 } + \beta \| f _ { e } ( x ) - s g [ e ] \| _ { 2 } ^ { 2 } } \\ & { \mathcal { L } _ { a d v } = - \mathbb { E } [ \log D ( G ( x ) ) ] } \\ & { \mathcal { L } _ { D } = - \mathbb { E } [ \log D ( x ) ] - \mathbb { E } [ \log ( 1 - D ( G ( x ) ) ) ] } \end{array}
$$
This loss is used for both stage 1 and stage 2 training.
# 3.5 Training Data
In this project, the VQ-GAN does not reconstruct the raw CBCT image directly, but a recompiled segmentation image. This brings two main benefits. First, the model learns from the reduced information in the segmentation map, without modeling details such as internal bone and muscle structure or the neck. Second, it eliminates the scan-parameter discrepancy between the datasets, ensuring successful reconstruction on the test set.
First, the raw image is segmented by DentalSegmentor into 6 regions, labeled 0 to 5, as listed in Table 1. Then 5 is added to the non-zero labels and the result is divided by 10 to normalize the values and increase contrast. Empirically, the VQ-GAN converges easily on the processed 1-channel data, while struggling on the multi-channel segmentation map.
$$
x ^ { \prime } = ( D S ( x ) + 5 ) / 1 0
$$
For the teeth mask in stage two of training, the teeth and mandibular canal regions are selected and dilated to form the target mask, simulating the missing teeth and the bone loss around the jaw in ONJ.
$$
\begin{array} { r l } & { M _ { t e e t h } = \left\{ \begin{array} { l l } { 1 } & { \mathrm { i f ~ } x ^ { \prime } > 0 . 7 5 } \\ { 0 } & { \mathrm { o t h e r w i s e } } \end{array} \right. } \\ & { M _ { t e e t h } \gets \mathrm { M a x P o o l 3 D } \big ( M _ { t e e t h } , \mathrm { k e r n e l \_ s i z e } = 5 , \mathrm { s t r i d e } = 1 , \mathrm { p a d d i n g } = 2 \big ) } \\ & { x _ { t r a i n } ^ { \prime } = x ^ { \prime } \odot \left( 1 - M _ { t e e t h } \right) } \end{array}
$$
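A 1D analogue of this preprocessing and masking step can be sketched as follows; a running max filter plays the role of MaxPool3D with stride 1 and matching padding, and the function names are illustrative.

```python
def preprocess(seg):
    """x' = (label + 5) / 10 on non-zero labels; background stays 0."""
    return [(v + 5) / 10 if v > 0 else 0.0 for v in seg]

def teeth_mask(x, k=5):
    """Threshold at 0.75 (the teeth label after normalization), then dilate
    with a max filter of width k -- the 1D analogue of MaxPool3D(stride=1)."""
    m = [1 if v > 0.75 else 0 for v in x]
    r = k // 2
    return [max(m[max(0, i - r):i + r + 1]) for i in range(len(m))]

# A single tooth voxel (label 5) gets masked together with its neighbourhood.
x = preprocess([0, 5, 0, 0, 0, 0, 0])
mask = teeth_mask(x)
x_train = [v * (1 - b) for v, b in zip(x, mask)]
```

The masked input `x_train` is what the stage-two encoder sees, forcing it to inpaint the teeth region from the surrounding context.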
# 3.6 Anomaly Subject Classification and Segmentation
The model can be used not only to generate an anomaly map but also as a classifier that detects whether an anomaly is present in the image. The two uses follow the same idea but differ in some details.
For anomaly classification, the difference map is generated by computing the absolute error $\| G ( x ^ { \prime } ) - x ^ { \prime } \| _ { 1 }$ . A threshold is then applied to remove low differences, since small differences can be attributed to the limited expressive ability of the model. After thresholding, binary erosion removes single-voxel differences, which indicate small reconstruction errors. Finally, the remaining error is summed to form an anomaly score.
For anomaly segmentation, the goal is to capture as much of the anomalous area as possible while minimizing the non-anomalous area. Thresholding is therefore not used, only erosion, because the edge of the anomalous area may contain differences with low scores. Additionally, dilation is applied after erosion to recover true anomalous regions that were eroded.
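A 1D sketch of the classification-side scoring (threshold, erode, sum) may look like the following; the threshold value and the function name are placeholders, and real use would apply 3D morphology.

```python
def anomaly_score(diff, threshold=0.1):
    """Threshold the absolute difference map, erode away isolated voxels,
    then sum what remains (1D analogue of the 3D pipeline)."""
    b = [1 if d > threshold else 0 for d in diff]
    # Binary erosion: keep a voxel only if both neighbours are also set,
    # which removes single-voxel reconstruction errors.
    eroded = [1 if b[i] and (i > 0 and b[i - 1]) and (i + 1 < len(b) and b[i + 1])
              else 0
              for i in range(len(b))]
    return sum(eroded)

# A 3-voxel-wide anomaly survives erosion; an isolated spike does not.
score = anomaly_score([0.0, 0.5, 0.5, 0.5, 0.0, 0.5, 0.0])
```

Only the centre of the contiguous run survives erosion, so isolated reconstruction errors contribute nothing to the score.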
# 4 Experiments
# 4.1 Dataset and Implementation Details
The dataset used in this project, "3D multimodal dental dataset based on CBCT and oral scan", is available at https://zenodo.org/records/10829675. It contains 290 scans of healthy subjects acquired with a spacing of
Figure 3: Reconstruction results in stage one. The VQ-GAN and VQ-VAE achieved similar test performance in MSE. While VQ-GAN can achieve slightly better fidelity in details such as teeth.
$0 . 2 5 \times 0 . 2 5 \times 0 . 2 5 m m ^ { 3 }$ , of which 287 with the same dimensionality were selected. The images are downsampled to a spacing of $2 \times 2 \times 2 m m ^ { 3 }$ , resulting in a $7 5 \times 7 5 \times 5 2$ voxel image. The VQ-GAN is used to reconstruct an image of $6 4 \times 6 4 \times 6 4$ ; the original image is first padded to $7 5 \times 7 5 \times 6 4$ and then randomly cropped to the target size. During the final inference, a sliding window is used and the reconstruction is averaged over all predictions. The dataset is split into 217 scans for training and 70 for inference. For the final model used in the application, the whole dataset is used for training.
Currently, only one patient scan is available. It was acquired at a voxel size of $0 . 4 \times 0 . 4 \times 0 . 4 m m ^ { 3 }$ and undergoes the same preprocessing pipeline as the dataset.
The generator and discriminator are both optimized with the Adam optimizer at a learning rate of 3e-4 and a batch size of 16. The first stage is trained for around 5000 epochs, and the second stage typically converges after around 1000 epochs.
# 4.2 Performance Analysis
Two experiments are designed to validate the performance of the training method and model design.
The first compares the reconstruction performance of stages one and two against a baseline VQ-VAE, both quantitatively and qualitatively.
For the second experiment, I compare the anomaly detection performance of taking the difference between the two reconstructions versus the difference between the original image and a reconstruction. Two questions are to be answered: 1. Can the abnormal subject be successfully distinguished? 2. Does comparing against the original differ from comparing two reconstructions?
# 5 Results and discussion
# 5.1 Reconstruction performance
The results of the stage one experiment are shown in Figure 3. The VQ-GAN has a reconstruction mean squared error of 0.009196, while the VQ-VAE has an error of 0.009174. Both models reconstruct the input image with high fidelity, with the VQ-VAE obtaining a slightly better MSE. This might be due to the discrepancy in training objectives: the VQ-VAE loss contains only the MSE and vector quantization terms, while the VQ-GAN must additionally optimize the adversarial loss to fool the discriminator. Nevertheless, this gives the VQ-GAN better perceptual fidelity than the VQ-VAE: as shown in the images, its reconstruction of the front teeth and teeth roots is clearer.
The results of the stage two experiment are shown in Figure 4. The input image is a simulated patient with the teeth and jaw removed. The VQ-VAE shows a better MSE than the VQ-GAN (0.009850 and 0.012047, respectively); however, its output is visibly more blurred, whereas the VQ-GAN maintains the same fidelity. This is due to the loss of information about the teeth: it is impossible to guess the correct shape of the missing teeth. As a result, the VQ-VAE predicts a blurred image that reflects the expected value of the teeth in order to minimize the MSE loss, while the VQ-GAN has to output a high-fidelity image to fool the discriminator.
Figure 4: Reconstruction results in stage two. The VQ-VAE has better MSE. However, the prediction is clearly more blurred than the generation of VQ-GAN.
Figure 5: The anomaly reconstruction by two models trained on different stages, and the anomaly map generated by calculating the difference between reconstruction 2 and the original image, and between reconstruction 2 and 1.
Overall, both models perform well in this task, indicating the robustness of the training method, with the VQ-VAE predicting the expected voxel values in the masked area and the VQ-GAN predicting a sample from the possible distribution.
# 5.2 Anomaly detection performance
The anomaly detection results are shown in Figure 5. The anomaly scores are also calculated for the whole test set.
For the anomaly detection based on two reconstructions, only three normal subjects yielded an anomaly score of 1, while the rest had a score of 0. The patient received an anomaly score of 30, whereas simulated anomalies generally scored above 1000, indicating a clear demarcation between normal and pathological data. As illustrated in Figure 5, both anomalies in the simulated and real patients were highlighted with high confidence, and residuals were minimal.
For the anomaly detection based on the original image and the second reconstruction, 15 normal subjects exhibited anomaly scores greater than 0, with a maximum anomaly score of 8. The patient, however, received a score of 88, providing a slightly reduced contrast compared to the previous method, though still effective. Figure 5 demonstrates that reconstruction error significantly influences the anomaly detection outcomes.
In conclusion, both methods successfully distinguished between patients and normal data, with the anomaly detection approach based on two reconstructions yielding superior results in terms of both anomaly scores and spatial anomaly maps.
# 5.3 Discussion
The experimental results underscore the efficacy and robustness of the proposed methodology. The framework is inherently flexible, allowing variations in model architecture and anomaly detection strategy. Since the latent space remains fixed during the second stage of training, it is possible to conduct anomaly detection directly within this latent representation or through a hybrid approach. As observed in Figure 5, the reconstruction of the real patient was less accurate compared to that of the simulated patient, likely due to limited spatial coverage in the upper skull region during scanning. This reduced coverage results in less available information, while the presence of teeth and jaw structures may introduce misleading signals for the generative model. Addressing this limitation may require enhancing the simulation process with a more sophisticated and detailed design.
However, as the proposed method is an unsupervised anomaly detection method, the missing teeth on the upper jaw are not classified as ORN. As a result, this method can only provide an initial segmentation but still needs human supervision.
Figure 6: The post-processing pipeline. All codes are implemented using Python.
Figure 7: The generated STL model, 3d printer machine code, and the printer in printing.
# 6 Post-processing
The overall post-processing pipeline is shown in Figure 6 and Algorithm 1.
First, the reconstruction is generated and the raw difference map is calculated accordingly:
$$
D = \| G _ { 2 } ( x ^ { \prime } ) - G _ { 1 } ( x ^ { \prime } ) \| _ { 2 } ^ { 2 }
$$
Then the map is processed with erosion and dilation to eliminate the small difference region and then applied with thresholding to get the target segmentation.
After the segmentation is obtained, connected component analysis is conducted to find independent candidate wound regions. It separates the disconnected parts of the segmentation, and each disconnected region is stored separately for further processing.
Each region is checked for a major overlap with the segmentation; if so, it is not a wound area and is removed. Each remaining region is then grown in a specified direction to fill the empty area. Finally, the marching cubes algorithm [19] is used to obtain an STL model of each region.
# Algorithm 1 Post-Processing Pipeline
1: Input: Input data $x ^ { \prime }$ , Generator models $G _ { 1 }$ and $G _ { 2 }$
2: Output: STL models of possible wound regions
3: $D \gets \| G _ { 2 } ( x ^ { \prime } ) - G _ { 1 } ( x ^ { \prime } ) \| _ { 2 } ^ { 2 }$
4: $D \gets \mathrm { e r o s i o n } ( D )$
5: $D \gets$ dilation(D)
6: $S \gets$ threshold $( D )$ .
7: $\{ R _ { i } \} \gets$ component_analysis(S)
8: for $R _ { i }$ in $\{ R _ { i } \}$ do
9: if overlap $( R _ { i } , S )$ then
10: remove $( R _ { i } )$
11: end if
12: grow $( R _ { i } )$
13: end for
14: for $R _ { i }$ in $\{ R _ { i } \}$ do
15: STL_model $( R _ { i } ) \gets$ marching_cubes $( R _ { i } )$
16: end for
17: return STL models
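The component-analysis step of Algorithm 1 can be sketched in 1D as splitting a binary mask into disconnected runs; this stands in for 3D connected-component labeling, and the function name is illustrative.

```python
def connected_components(seg):
    """Split a binary 1D segmentation into disconnected runs of set indices,
    the 1D analogue of the component analysis step in the pipeline."""
    regions, current = [], []
    for i, v in enumerate(seg):
        if v:
            current.append(i)
        elif current:
            regions.append(current)
            current = []
    if current:
        regions.append(current)
    return regions

# Two disconnected candidate regions: indices {0, 1} and {3}.
regions = connected_components([1, 1, 0, 1])
```

Each returned region would then be filtered by the overlap check, grown, and meshed independently, exactly as the per-region loop in Algorithm 1 does.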
Figure 8: Potential pipeline for training a supervised segmentation model (Fully Connected Network, FCN) using noisy segmentation generated by UAD segmentation model.
Figure 9: The segmentation area avoiding the soft tissue
# 6.1 Avoid soft tissue
The previous analysis considered only the bone area and ignored the soft tissue. However, soft tissue covers the broken bone. I therefore also removed the overlap between the segmentation and the soft tissue; the result is shown in Figure 9.
I have noticed that there are holes inside the tissue. To finalize the anomaly area, expert suggestions are required. | Advances in treatment technology now allow for the use of customizable
3D-printed hydrogel wound dressings for patients with osteoradionecrosis (ORN)
of the jaw (ONJ). Meanwhile, deep learning has enabled precise segmentation of
3D medical images using tools like nnUNet.
However, the scarcity of labeled data in ONJ imaging makes supervised
training impractical. This study aims to develop an unsupervised training
approach for automatically identifying anomalies in imaging scans.
We propose a novel two-stage training pipeline. In the first stage, a VQ-GAN
is trained to accurately reconstruct normal subjects. In the second stage,
random cube masking and ONJ-specific masking are applied to train a new encoder
capable of recovering the data.
The proposed method achieves successful segmentation on both simulated and
real patient data.
This approach provides a fast initial segmentation solution, reducing the
burden of manual labeling. Additionally, it has the potential to be directly
used for 3D printing when combined with hand-tuned post-processing. | [
"eess.IV",
"cs.AI",
"cs.CV"
] |
# 1 Introduction
Evolutionary computation is a powerful method for black-box optimization problems. It has been applied to numerous domains, including the optimization of weather radar networks [39], radar system design [18], precipitation nowcasting combined with machine learning techniques [58, 48, 40, 33], and many other industries [36].
In single-objective optimization, multiple continuous-valued variables are optimized to minimize a single objective function. A single-objective optimization problem aims to find a real-valued vector $\boldsymbol{x} = (x_1, \cdots, x_D)$, where $D$ is the dimensionality of the problem, that minimizes an objective function $f : \mathbb{R}^D \rightarrow \mathbb{R}$. The value $f(\boldsymbol{x})$ is referred to as the fitness of solution $\boldsymbol{x}$. The global optimum $\boldsymbol{x}^*$ is defined as satisfying $f(\boldsymbol{x}^*) \leq f(\boldsymbol{x}), \forall \boldsymbol{x} \in \mathbb{R}^D$.
When evaluating the objective function $f ( x )$ involves complex simulations [49, 57, 59] or when $f$ is not analytically computable [60], algorithms that do not require access to the internal structure of $f$ are necessary. Such scenarios, where only the output of the objective function is available, fall under the category of black-box single-objective optimization.
Differential Evolution (DE) is a widely used algorithm for black-box optimization problems. DE generates new candidate solutions $u_i^t$ at each generation $t$ (a discrete time step in the evolutionary process) through operations known as mutation and crossover. The fitness of each new candidate is evaluated, and if $u_i^t$ outperforms the existing candidate $x_i^t$, $u_i^t$ replaces $x_i^t$ in the next generation as $x_i^{t+1}$. If not, the existing candidate survives, and $u_i^t$ is discarded. This process of generational replacement based on fitness is a fundamental concept common to various evolutionary algorithms such as Genetic Algorithms (GA), and has been a standard feature of DE since the original DE formulation [66].
However, in modern DE implementations, a major challenge lies in the limited population diversity caused by the fixed population size enforced by generational replacement. Population size is a critical control parameter that significantly affects DE performance. Larger populations inherently contain a more diverse set of individuals, thereby facilitating broader exploration of the search space. For high-dimensional problems, such as those with 50 dimensions, larger population sizes are often employed compared to lower-dimensional problems [74]. Conversely, when the maximum evaluation budget is constrained, smaller populations focusing on a limited number of promising candidates may be more suitable. Many state-of-the-art DE variants incorporate an archive mechanism [74, 64], in which a subset of discarded individuals is preserved in an archive $A^t$ during generational replacement and reused in mutation operations. This practice increases diversity by expanding the candidate pool to $P \cup A$. However, maintaining what is essentially a secondary population via an archive introduces additional design considerations, such as policies for insertion, deletion, and appropriate sizing.
We observe that much of the increasing complexity of state-of-the-art DE can be ascribed to three widely accepted assumptions underlying DE implementations: (1) individuals are replaced by offspring (either by direct offspring or by the offspring of other individuals); (2) population sizes are either fixed, or are reduced as search progresses (to focus search); and (3) “failed” individuals (offspring with worse fitness than their parents) are discarded.
Underlying all three of these assumptions is the central assumption that DE should throw away much of the information discovered during search (i.e., the individuals created and evaluated during search): the replacement policy eliminates parents, population size reduction eliminates members from the population, and discarding failed individuals means that such failed individuals will not further contribute to search progress. In fact, by the end of the search, all that remains in a standard DE is the individuals in the final population, which is a very small fraction of the individuals that have been created and evaluated, and almost all the “knowledge” about the search space has been thrown away.
In this paper, we question these assumptions, and explore a fundamentally different design for DE which starts with the premise: “What if we did not discard any information (individuals)?” We propose a novel DE framework called Unbounded Differential Evolution (UDE), which adds all generated candidates to the population without discarding any based on fitness. Unlike conventional DE, which removes inferior individuals during generational replacement, UDE eliminates the need for replacement altogether, along with the associated complexities of archive management and dynamic population sizing. UDE represents a fundamentally new approach to DE, relying solely on selection mechanisms and enabling a more straightforward yet powerful evolutionary process.
We use UDE as a framework for “deconstructing” state-of-the-art DE and reconsider whether the complexity of modern DE variants is necessary. Instead of generational replacement, supplemental populations (archives), and deterministic population size reduction strategies, perhaps many of the standard components of a DE are actually unnecessary and can be subsumed by the process of selection. Instead of carefully designing mechanisms for discarding information, maybe we can focus on designing selection operators which effectively choose from all of the information (individuals) generated during the search; i.e., is selection all you need?
The rest of the paper is organized as follows. Section 2 reviews preliminaries and related work on DE. In Section 3, we propose Unbounded DE (UDE), as well as variants of UDE which incorporate parameter adaptation. We experimentally evaluate UDE in Section 4. We show that UDE and its adaptive variant, USHADE, are competitive with standard adaptive DE algorithms such as SHADE and LSHADE. We explore the simulation of population size adaptation in the UDE framework, and show that selection policies can be used to mimic population size increases as well as decreases (Section 4.3). We also explore the necessity of discarding “failed” individuals with worse fitness than their parents, and show that the standard DE practice of discarding failed individuals is unnecessary in the UDE framework (Section 4.4). Section 5 concludes with a discussion and directions for future work.
This work significantly extends the preliminary results presented in a CEC2022 paper [34]. The earlier paper proposed and focused on a variant of UDE which discarded failed individuals that scored worse than their parents (the variant referred to as USHADE/DF in Sections 3.3 and 4.4 of this paper). The main versions of UDE and USHADE proposed in this paper keep all individuals in the population, and are new to this paper. Almost all text in the paper is newly written: the presentation of UDE and its variants has been completely rewritten, as has the survey of previous work. Most of the experiments are completely new, specifically: the evaluation on the CEC2022 benchmarks (Section 4.2), the analysis of robustness of search with respect to maximum evaluation budgets (Section 4.3.2), and the analysis of the role and usage of failed individuals (Section 4.4) are completely new to this paper. All experimental results, figures, and tables in this paper are new, and do not overlap with results in [34].
# 2 Preliminaries and Background
# 2.1 Differential Evolution (DE)
Differential Evolution (DE) [66, 65] is a population-based optimization algorithm that iteratively improves a population $P^t$ of candidate solutions, where each individual $x$ is a $D$-dimensional vector. The index $t$ denotes the generation number, and the population size is denoted by $|P^t|$. Algorithm 2.1 shows the pseudocode for the canonical DE algorithm. In line 2, the initial population $P^1$ is generated, typically by sampling each individual uniformly at random from the search space. The fitness of all individuals $f(x^{i,1})$ is evaluated in lines 3 and 4. In each subsequent generation, $|P^t|$ offspring $u^{i,t}$ ($i = 1, \ldots, |P^t|$) are generated and evaluated. Mutation is performed in line 7, where a mutant vector $v^{i,t}$ is generated using the difference of two individuals from the population $P^t$; this is a defining feature that gives DE its name. Crossover (line 8) then combines the parent $x^{i,t}$ and the mutant $v^{i,t}$ to form the offspring $u^{i,t}$. The offspring is evaluated in line 10. Lines 11 to 14 implement selection. If $f(u^{i,t}) \leq f(x^{i,t})$, the offspring replaces the parent in the next generation. This process of mutation, crossover, evaluation, and selection repeats until a termination condition is met. Various mutation and crossover strategies exist for DE, as discussed in the following sections. In contrast, the selection mechanism (lines 11–14) is common to all DE variants.
The control parameters for DE are summarized in Table 1. As observed in prior studies [22, 43], the performance of evolutionary algorithms is generally sensitive to these parameters. Section 2.2 discusses adaptive DE algorithms that dynamically adjust the scale factor $F$ and crossover rate $C$ during the optimization process. Section 2.2.2 addresses strategies for decreasing the population size $|P^t|$ over time. In adaptive DE, $F$ and $C$ are typically constrained to $(0, 1]$ and $[0, 1]$, respectively. When fixed values are used for the purpose of evaluating algorithm performance independently of parameter adaptation, we follow convention, setting $F = 0.5$, $C = 0.5$. The parameter $p$, known as the pbest rate, is used in the current-to-pbest mutation strategy [86] (see below) and is generally fixed. The archive size $|A|$ is relevant for DE variants employing an archive (see below).
Table 1: Control parameters for DE. $|P|, F, C$ are the 3 control parameters for standard DE. The archive size $|A|$ is used by DE variants which use archives, and $p$ is the pbest rate for current-to-pbest mutation.
# Algorithm 2.1 Standard DE [66]
1: $t = 1$ ;
2: Initialize population $P ^ { t }$ ;
3: for $i = 1$ to $\vert P ^ { t } \vert$ do
4: evaluate $f ( x ^ { i , t } )$ ;
5: while not termination condition do
6: for $i = 1$ to $\vert P ^ { t } \vert$ do
7: generate $v ^ { i , t }$ using mutation;
8: generate $u ^ { i , t }$ using crossover between $v ^ { i , t }$ and $\boldsymbol { x } ^ { i , t }$ ;
9: for $i = 1$ to $\vert P ^ { t } \vert$ do
10: evaluate $f ( u ^ { i , t } )$ ;
11: if $f ( u ^ { i , t } ) \leq f ( x ^ { i , t } )$ then
12: $x ^ { i , t + 1 } = u ^ { i , t }$ ;
13: else
14: $x ^ { i , t + 1 } = x ^ { i , t }$ ;
15: $t = t + 1$ ;
Mutation (Alg. 2.1, line 7) A basic mutation strategy used in the original DE [66] is rand/1 [66], defined as:
$$
v ^ { i , t } = x ^ { r 1 , t } + F \cdot ( x ^ { r 2 , t } - x ^ { r 3 , t } )
$$
where $x^{r1,t}, x^{r2,t}, x^{r3,t}$ ($r1 \neq r2$, $r1 \neq r3$, $r2 \neq r3$) are three distinct individuals randomly selected from the population $P^t$. The magnitude of $F$ scales the differential vector and thus influences the distance of the mutant $v^{i,t}$ from the base vector $x^{r1,t}$. In rand/1, the standard approach to selecting individuals from the population is uniform random sampling.
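As a concrete illustration, the rand/1 operator can be sketched in a few lines of NumPy (a minimal sketch; the function name and interface are ours, not from any reference implementation):

```python
import numpy as np

def rand1_mutation(pop: np.ndarray, F: float, rng: np.random.Generator) -> np.ndarray:
    """Generate one mutant vector v = x_r1 + F * (x_r2 - x_r3)."""
    # draw three distinct indices uniformly at random from the population
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```

With $F = 0$ the differential term vanishes and the mutant collapses onto the base vector, which makes the role of $F$ as a step-size scale easy to see.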
The current-to-pbest mutation strategy, introduced in JADE [86] is widely used in state-of-the-art DE algorithms. It is defined as:
$$
v ^ { i , t } = x ^ { i , t } + F \cdot ( x ^ { \mathrm { p b e s t } , t } - x ^ { i , t } ) + F \cdot ( x ^ { r 1 , t } - x ^ { r 2 , t } )
$$
Here, $x ^ { \mathrm { p b e s t } , t }$ is selected uniformly at random from the top $| P ^ { t } | \cdot p$ individuals in the population ranked by fitness. Compared to rand/1, this strategy biases the search toward promising regions of the solution space.
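Under the same conventions (minimization; names and interface are ours), current-to-pbest can be sketched as:

```python
import numpy as np

def current_to_pbest(pop, fitness, i, F, p, rng):
    """v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2), with x_pbest drawn
    uniformly from the top p-fraction of the population by fitness."""
    n = len(pop)
    top = max(1, int(round(n * p)))              # size of the pbest pool
    pbest = rng.choice(np.argsort(fitness)[:top])  # lower fitness is better
    # r1, r2 distinct from each other and from i
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    return pop[i] + F * (pop[pbest] - pop[i]) + F * (pop[r1] - pop[r2])
```

Note that with $F = 0$ the mutant reduces to the parent $x^{i,t}$, while larger $F$ pulls it toward the pbest individual and along the random difference vector.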
Crossover (Alg. 2.1, line 8) The offspring $u^{i,t}$ is generated through crossover, which recombines the parent $x^{i,t}$ and the mutant $v^{i,t}$ on a per-dimension basis. Binomial crossover [65], a widely used scheme, is defined as:
$$
u_j^{i,t} = \begin{cases} v_j^{i,t} & \text{if } u \leq C \text{ or } j = j_{\mathrm{rand}}, \text{ where } u \sim U(0,1) \\ x_j^{i,t} & \text{otherwise} \end{cases}
$$
A uniform random number $u$ determines whether each dimension $j$ inherits from the mutant or the parent. The dimension $j_{\mathrm{rand}}$ is always inherited from the mutant to ensure that $u_{j_{\mathrm{rand}}}^{i,t} = v_{j_{\mathrm{rand}}}^{i,t}$.
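Putting the pieces together, the canonical loop of Algorithm 2.1 with rand/1 mutation and binomial crossover can be sketched as follows (a minimal illustration on the sphere function, with parameter values of our choosing; not a tuned implementation):

```python
import numpy as np

def binomial_crossover(x, v, C, rng):
    """Inherit each dimension from the mutant with probability C; one forced
    dimension j_rand always comes from the mutant."""
    D = len(x)
    mask = rng.random(D) <= C
    mask[rng.integers(D)] = True          # forced inheritance at j_rand
    return np.where(mask, v, x)

def standard_de(f, bounds, pop_size=30, F=0.5, C=0.5, generations=200, seed=0):
    """Canonical DE: rand/1 mutation, binomial crossover, one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])     # rand/1 mutation
            u = binomial_crossover(pop[i], v, C, rng)
            fu = f(u)
            if fu <= fit[i]:                           # selection (lines 11-14)
                pop[i], fit[i] = u, fu
    return pop[np.argmin(fit)], float(fit.min())

# usage: minimize the 5-D sphere function f(x) = sum(x^2)
best, best_f = standard_de(lambda x: float(np.sum(x ** 2)),
                           (np.full(5, -5.0), np.full(5, 5.0)))
```

Because replacement is one-to-one (offspring $i$ competes only with parent $i$), the best fitness in the population is monotonically non-increasing over generations.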
# 2.2 Adaptive DE
The performance of DE, like other evolutionary algorithms, is highly influenced by control parameter settings [22, 43]. Classical DE algorithms use fixed values for key parameters such as the scale factor $F$ and crossover rate $C$. However, optimal values for these parameters vary depending on the problem characteristics and search dynamics, so adaptive parameter control can significantly improve performance. For example, larger values of $F$ facilitate exploration by amplifying differential vectors, which is beneficial in the early stages of multimodal optimization; conversely, reducing the value of $F$ focuses the search. When $C$ is close to 0, offspring tend to inherit values from their parents in many dimensions, promoting optimization on a per-dimension basis; in problems without inter-variable dependencies, a small $C$ allows the algorithm to exploit this independence during the search. On the other hand, increasing $C$ enables offspring to have values not present in the parents, which tends to enhance performance on problems involving interdependent variables.
Parameter control in evolutionary computation is typically classified into three categories: deterministic, adaptive, and self-adaptive control [19].
Adaptive control dynamically adjusts parameters based on feedback from the search process, allowing parameter values to align more closely with the characteristics of the problem at hand. For $F$ and $C$, adaptive control is commonly used. One widely adopted adaptive control method is success-history based control, described in Section 2.2.1.
In contrast, deterministic control modifies parameters according to predefined rules, independent of search progress. One example, linear population size reduction (detailed in Section 2.2.2), decreases the population size $|P^t|$ linearly. Although obvious disadvantages of deterministic control are its lack of flexibility and inability to respond to conditions during the search, deterministic control is widely used for population size control due to the difficulty of defining success or failure for a population size.
Self-adaptive control evolves the parameters themselves as part of the individual genomes. However, self-adaptive control is rarely employed in state-of-the-art DE algorithms.
# 2.2.1 Success-history based parameter adaptation (SHADE)
Success-history-based parameter control has been widely adopted in state-of-the-art adaptive DEs since the introduction of SHADE [73], and is used in subsequent variants [74, 61, 28]. Algorithm 2.2 shows SHADE. The blue-highlighted lines are related to parameter control.
Algorithm 2.2 SHADE (blue is related to parameter control, and red is related to archives).
1: $t = 1$ ;
2: Initialize population $P ^ { t }$ ;
3: $A ^ { t } = \varnothing$ ;
4: Initialize contents of success histories $M _ { F }$ and $M _ { C }$ to 0.5;
5: $k = 1$ ;
6: for $i = 1$ to $\vert P ^ { t } \vert$ do
7: evaluate $f(x^{i,t})$;
8: while not termination condition do
9: $S _ { F } = \emptyset$ , $S _ { C } = \emptyset$ , $S _ { \Delta f } = \emptyset$ ;
10: for $i = 1$ to $\vert P ^ { t } \vert$ do
11: select $r ^ { i }$ randomly from $\{ 1 , \cdots , H \}$ ;
12: while $F ^ { i , t } \leq 0$ do
13: $F^{i,t} = \min(\mathrm{rand}_{\mathrm{cauchy}}(M_F[r^i], 0.1), 1)$;
14: $C^{i,t} = \max(0, \min(\mathrm{rand}_{\mathrm{normal}}(M_C[r^i], 0.1), 1))$;
15: select $x^{\mathrm{pbest},t}, x^{r1,t}$ from $P^t$ and select $x^{r2,t}$ from $P^t \cup A^t$ ($i \neq r1$, $i \neq r2$, $r1 \neq r2$);
16: generate $v ^ { i , t }$ using current-to-pbest mutation (eq. 2);
17: generate $u ^ { i , t }$ using binomial crossover (eq. 3) between $v ^ { i , t }$ and $\boldsymbol { x } ^ { i , t }$ ;
18: for $i = 1$ to $\vert P ^ { t } \vert$ do
19: evaluate $f ( u ^ { i , t } )$ ;
20: if $f ( u ^ { i , t } ) \leq f ( x ^ { i , t } )$ then
21: $x ^ { i , t + 1 } = u ^ { i , t }$ ;
22: $S _ { F } = S _ { F } \cup F ^ { i , t }$ , $S _ { C } = S _ { C } \cup C ^ { i , t }$ , $S _ { \Delta f } = S _ { \Delta f } \cup ( f ( x ^ { i , t } ) - f ( u ^ { i , t } ) )$
23: if $|A^t| < |A|_{\max}$ then
24: $A^t = A^t \cup \{x^{i,t}\}$;
25: else
26: select $x^{r3,t}$ randomly from $A^t$;
27: replace $x^{r3,t}$ with $x^{i,t}$ in $A^t$;
28: else
29: $x ^ { i , t + 1 } = x ^ { i , t } ;$
30: if $S _ { F } \neq \emptyset$ and $S _ { C } \neq \emptyset$ then
31: $M _ { F } [ k ] = \mathrm { m e a n } _ { L } ( S _ { F } , S _ { \Delta f } )$ , $M _ { C } [ k ] = \mathrm { m e a n } _ { L } ( S _ { C } , S _ { \Delta f } )$ ;
32: $k = (k \bmod H) + 1$;
33: $t = t + 1$ ;
The success history is stored in arrays $M _ { F } = ( M _ { F } [ 1 ] , \cdots , M _ { F } [ H ] )$ and $M _ { C } = ( M _ { C } [ 1 ] , \cdots , M _ { C } [ H ] )$ , both initialized at the beginning of the search (line 4) to 0.5, midway in the typical range [0, 1] for $F$ and $C$ .
During the search, for each individual $\boldsymbol { x } ^ { i , t }$ , an integer $r ^ { i }$ is selected uniformly at random from the interval [1,H]. The corresponding elements $M _ { F } [ r ^ { i } ]$ and $M _ { C } [ r ^ { i } ]$ are used to generate $F ^ { \ i , t }$ and $C ^ { \ i , t }$ , respectively (line 11). $F ^ { i , t }$ is drawn from a Cauchy distribution centered at $M _ { F } [ r ^ { i } ]$ with scale $\gamma = 0 . 1$ , and $C ^ { i , t }$ is drawn from a normal distribution centered at $M _ { C } [ r ^ { i } ]$ with standard deviation $\sigma = 0 . 1$ (lines 13–14). The sampling is repeated until a valid value ( $> 0$ ) for $F ^ { i , t }$ is obtained. The use of a Cauchy distribution, which tends to generate larger values compared to the normal distribution, has been found effective in benchmark problems.
If the offspring $u ^ { i , t }$ has a fitness equal to or better than its parent, the values $F ^ { i , t } , C ^ { i , t }$ , and the fitness improvement $f ( x ^ { i , t } ) - f ( u ^ { i , t } )$ are stored in sets $S _ { F } , S _ { C } , S _ { \Delta f }$ , respectively (line 22). These sets are referred to as the successful parameters.
After all offspring have been generated and evaluated, the $k$-th elements of $M_F$ and $M_C$ are updated using a weighted Lehmer mean, with $\Delta f$ as the weight (line 31). The Lehmer mean of values $X_i$ $(i = 1, \ldots, n)$ with weights $w_i$ is defined as: $\mathrm{mean}_L(X, w) = \sum (w_i \cdot X_i^2) / \sum (w_i \cdot X_i)$. The index $k \in \{1, \ldots, H\}$ is initialized to 1 (line 5) and incremented cyclically (modulo $H$) with each success-history update (line 32). The Lehmer mean, which tends to produce larger averages than the arithmetic mean, has been shown to be more effective for this purpose. In the reference implementation by the SHADE authors [71], the population size is set to $|P^t| = 100$, the history length to $H = D$, and $x^{\mathrm{pbest},t}$ is selected from the top $10\%$ of the population ranked by fitness.
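The parameter sampling (lines 13–14 of Algorithm 2.2) and the weighted Lehmer mean used in the memory update can be sketched as follows (a simplified illustration with names of our choosing; not the reference implementation [71]):

```python
import numpy as np

def sample_F_C(M_F, M_C, rng):
    """Sample (F, C) from a randomly chosen history slot r.
    F ~ Cauchy(M_F[r], 0.1) truncated to (0, 1] (resampled while F <= 0);
    C ~ Normal(M_C[r], 0.1) clipped to [0, 1]."""
    r = rng.integers(len(M_F))
    F = 0.0
    while F <= 0.0:                                    # resample until valid
        F = min(M_F[r] + 0.1 * rng.standard_cauchy(), 1.0)
    C = float(np.clip(rng.normal(M_C[r], 0.1), 0.0, 1.0))
    return F, C

def weighted_lehmer_mean(X, w):
    """mean_L(X, w) = sum(w_i * X_i^2) / sum(w_i * X_i)."""
    X, w = np.asarray(X), np.asarray(w)
    return float(np.sum(w * X ** 2) / np.sum(w * X))
```

For positive inputs the Lehmer mean is at least the weighted arithmetic mean, which is the bias toward larger values mentioned above.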
Mutation with Archive SHADE also incorporates an archive mechanism, first introduced in JADE [86]. Empirical results show that JADE with an archive achieves superior fitness on many 100-dimensional benchmark problems compared to its archive-free counterpart [86]. Archives tend to improve performance in high-dimensional ( $\geq 5 0$ ) problems and have become standard in many DE variants since JADE, including SHADE [73], often in conjunction with the current-to-pbest mutation strategy.
The use of the archive in SHADE is highlighted in red in Algorithm 2.2. The archive $A^t$ is initially empty (line 3) and accepts parents $x^{i,t}$ replaced by offspring during selection until the archive reaches its maximum size $|A|_{\max}$ (line 24). Once full, a random individual in $A^t$ is replaced by $x^{i,t}$ (lines 26–27).
During mutation, SHADE selects $x ^ { r 2 }$ uniformly from $P ^ { t } \cup A ^ { t }$ (line 15). If $x ^ { r 2 }$ is from $A ^ { t }$ , the difference vector $\boldsymbol { F } \cdot \left( \boldsymbol { x } ^ { r 1 } - \boldsymbol { x } ^ { r 2 } \right)$ becomes a vector from a previously removed individual toward a current individual. As the population converges toward promising regions, this vector promotes exploration and helps maintain diversity. JADE uses an archive size of $\vert A \vert _ { \mathrm { m a x } } = \vert P ^ { t } \vert$ , while the SHADE reference implementation [71] uses $\left| A \right| _ { \operatorname* { m a x } } = 2 \cdot \left| P ^ { t } \right|$ . In general, archive sizes are set larger than the population size.
# 2.2.2 Linear population size reduction (LSHADE)
LSHADE [74] is an improvement to SHADE which incorporates a mechanism for controlling the population size $|P^t|$. LSHADE starts with a larger population and reduces it according to a linearly decreasing schedule, so that by the termination condition (either the maximum evaluation budget or a time limit), the population size reaches its minimum ($|P|_{\min} = 4$). This allows the algorithm to concentrate its search resources on refining the best solutions near the end of the search. Given the initial population size $|P^1|$ and the maximum evaluation budget $L_{\max}^{\mathrm{evaluation}}$, the next generation's population size $|P^{t+1}|$ is determined by:
$$
| P ^ { t + 1 } | = \mathrm { r o u n d } ( ( | P ^ { 1 } | - 4 ) \cdot ( 1 - \sum _ { j = 1 } ^ { t } | P ^ { j } | / L _ { \operatorname* { m a x } } ^ { \mathrm { e v a l u a t i o n } } ) ) + 4
$$
The worst-performing individuals are iteratively removed from the current population until the desired size $\vert P ^ { t + 1 } \vert$ is reached. This method is referred to as the linear population size reduction strategy (LPSR).
Note that in contrast to the success-history based adaptive control of $F$ and $C$, which is applied at a per-individual level, LPSR is a deterministic control of population size applied to the whole population.
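The LPSR schedule above can be simulated directly, since each generation consumes $|P^t|$ evaluations (a small sketch; the helper name and budget values are ours):

```python
def lpsr_next_size(p1: int, evals_used: int, max_evals: int, p_min: int = 4) -> int:
    """Next population size under LPSR: linear in the consumed evaluation budget."""
    return round((p1 - p_min) * (1.0 - evals_used / max_evals)) + p_min

# simulate the schedule for |P^1| = 100 and a budget of 10,000 evaluations
sizes, p, used = [], 100, 0
while used < 10_000:
    used += p                          # one generation costs |P^t| evaluations
    p = lpsr_next_size(100, used, 10_000)
    sizes.append(p)
```

The resulting sequence shrinks monotonically and reaches the minimum size of 4 as the budget is exhausted, matching the intent of focusing late-stage search on a few candidates.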
LSHADE has been shown to outperform SHADE with a fixed population size [74], and LSHADE ranked first among 16 algorithms in the CEC2014 Real Parameter Single Objective Optimization Competition. Many subsequent state-of-the-art DE variants, including most of the best-performing DE entries in the CEC Single Objective Optimization Competition series (2014-2022) have been based on LSHADE. Therefore, we adopt LSHADE as the baseline state-of-the-art DE for comparison in this paper.
# 2.3 Related Work
Deterministic Control of Population Size Two broad approaches to controlling the population size in DE have been proposed. The first is population size reduction. The prevalent approach is deterministic reduction, using a fixed (problem-independent) schedule for population size reduction. Consequently, once the population has been reduced, there is no mechanism to increase it again, making it difficult to escape stagnation caused by insufficient diversity. Nevertheless, the Linear Population Size Reduction (LPSR) scheme described above in Section 2.2.2 has been widely adopted due to its effectiveness and simplicity.
Another deterministic approach is the dynNP-DE algorithm [8], which halves the population size repeatedly over time. Unlike LSHADE, which removes the worst-performing individuals based on fitness rankings, dynNP-DE deletes the less fit of two randomly selected individuals, effectively implementing a single-round tournament selection. EsDEr-NR [4] reduces individuals in the later stages of evolution by removing those located near low-performing niches, i.e., regions far from the current best solution. Similarly, Distributed DE with Explorative-Exploitative Population Families (DDE-EEPF) [79] reduces the size of one of its subpopulations.
The second approach to population size control incorporates mechanisms to increase the population size when stagnation is detected. Unlike reduction, increasing the population size is non-trivial, and various distinct strategies have been proposed. For example, DEVP [77] adds new individuals by mutating the current best solution when the best-so-far fitness ceases to improve. DEWAPI [20] includes an intermediate evolution phase before incorporating new individuals into the main population. APTS [88] perturbs better-fitness individuals to generate new candidates. Guo et al. [27] propose reintroducing archived individuals into the main population to increase size and promote exploration.
To facilitate population size adaptation, various indicators of search stagnation have also been proposed. For instance, SapsDE [78] first reduces the population upon detecting stagnation and increases it only if stagnation persists. Population-Entropy SHADE (PESHADE) [87] uses population density as a metric, reducing density in the early stages to promote exploration and increasing it later to intensify search around promising regions. Polakova et al. [50] employ a diversity indicator $V$ , adjusting the population when $V$ deviates from a scheduled linear decay curve, bounded within 90% to $1 1 0 \%$ of the expected value.
While population reduction methods are relatively simple, there are two issues: First, deterministic, problem-independent schedules may not be aligned with the problem characteristics. If the reduction is too aggressive, the search may stagnate; if too slow, promising regions may not be explored thoroughly. Second, deterministic schedules assume that evaluation budgets (e.g., time or number of evaluations) are determined a priori – in many applications, more flexibility is required (e.g., we may want to terminate the search earlier or later than assumed by the standard schedule).
Mechanisms for increasing population size are even more difficult to implement successfully – they require not only reduction strategies but also stagnation detection and population augmentation processes, resulting in more complex control logic. A notable attempt at adaptive control without explicit indicators is DESAP [75], which uses self-adaptation: each individual maintains a population size parameter, and the average of these values determines the size of the next generation. This work introduces a novel third approach to population size control that differs from the existing two strategies.
The Archive as a Secondary Population Section 2.2.1 introduced the use of an archive in the current-to-pbest mutation operator [86]. Beyond this, several DE variants employ an auxiliary population, in addition to the main population $P^t$, from which parent individuals are selected. We use the term external archive to refer to all such auxiliary populations, including the original archive in [86].
For example, DESSA [42] maintains an external archive to construct surrogate models approximating the objective function. Gonuguntla et al. [25] employ a “huge set” $S^t$, from which a subset $A^t$ is drawn each generation for use in search. When the average fitness of $S^t$ exceeds its median, indicating concentration near a local optimum, $A^t$ is composed of randomly selected individuals from $S^t$ to promote exploration. Otherwise, tournament selection is applied to form $A^t$, focusing the search near fitter individuals. Guo et al. [27] propose the Successful-Parent-Selecting method, which reinserts archived individuals into the population upon detection of stagnation. Epitropakis et al. [21] introduce a niching DE inspired by Particle Swarm Optimization (PSO), incorporating a dynamic archive [85]. In this method, if an offspring lies close to an archived individual, it is regenerated to maintain diversity. Other approaches, such as $\epsilon$-MyDE [56] and [17], utilize Pareto-optimal sets to preserve better-fitness solutions. These external archive mechanisms highlight the need to overcome the limitations of relying solely on a single, fixed-size population maintained through generational replacement.
Incorporating Selection Pressure into Mutation In conventional rand/1 and current-to-pbest mutation strategies, individuals such as $x ^ { r 1 } , x ^ { r 2 } , x ^ { r 3 }$ are typically selected uniformly at random. Selection pressure refers to the bias introduced to favor the selection of better-fitness individuals. When applied to mutation, this pressure increases the likelihood that fitter individuals are selected as components of mutant vectors.
Several mechanisms exist for introducing selection pressure into mutation, with fitness-proportional selection, rank-based selection, and tournament selection being the most common. Empirical studies such as [62] have shown that applying selection pressure (e.g., via fitness-proportional, rank-based [81], or tournament selection [23, 44]) improves performance relative to uniform random selection in several mutation strategies.
Sutton et al. [70] demonstrated that selecting the individuals $x^{r1,t}, x^{r2,t}, x^{r3,t}$ used in the rand/1 mutation strategy (Equation 1) based on fitness-ranked selection significantly improves the performance of DE, particularly when the population size is large. ISSDE [41] adopts a strategy in which individuals are selected to satisfy $f(x^{r1,t}) \leq f(x^{r2,t}) \leq f(x^{r3,t})$. This approach is equivalent to employing tournament selection with a tournament size of $T = 3$ to determine $x^{r1,t}$. Gong et al. [24] showed that applying rank-based selection to $x^{r1,t}, x^{r2,t}$ within both the rand/1 and current-to-pbest mutation strategies (Equation 2) can improve the performance of DE variants such as JADE [86]. LSHADE-RSP [61] employs rank selection [32] within the current-to-pbest mutation strategy.
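For instance, the ordered selection used by ISSDE, which is equivalent to a size-3 tournament for the base vector, can be sketched as (a minimal sketch; names are ours):

```python
import numpy as np

def ordered_tournament(fitness, rng, T=3):
    """Draw T distinct individuals and order them by fitness, so that
    f(x_r1) <= f(x_r2) <= f(x_r3) for minimization: the tournament winner
    becomes the base vector r1."""
    idx = rng.choice(len(fitness), size=T, replace=False)
    return tuple(idx[np.argsort(np.asarray(fitness)[idx])])
```

Used inside rand/1, this biases the base vector toward fitter individuals while leaving the difference vector relatively unconstrained.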
On the ubiquity of archives and population size reduction in state-of-the-art DE Table 2 surveys the use of success-history adaptation, population size reduction strategies, and archive usage in adaptive DE algorithms developed since JADE, specifically, all DE variants that ranked highly in the CEC Real-Parameter Single-Objective Optimization Competitions from 2014 to 2022 [68, 69, 67, 1, 2, 51, 83, 37, 38].
Table 2 shows the prevalence of 3 components in state-of-the-art DE algorithms since LSHADE: (1) success-history adaptation, (2) population size reduction, and (3) archives. The consistent usage of these three components in top-ranking entries in the CEC Real-Parameter Single-Objective Optimization Competitions suggests a widely shared recognition that they are critical for enhancing the search performance of DE algorithms.
Evolutionary Computation with Infinite Populations Previous work has used infinite populations to facilitate the theoretical analysis of evolutionary algorithms. The analysis of schema in genetic algorithms assumes an infinite population [31]. Qin et al. [52, 53] analyze genetic algorithms under the assumption of an infinitely large population; under this assumption, they derive how the population's mean position and the correlations among its coordinates evolve under mutation and crossover. A similar analysis was also performed by Whitley et al. [80] and tested on a GA with a population size of 625 on a 15-dimensional problem.
Table 2: Success-history (SH), population size reduction (PSR) strategies and archive use in the IEEE CEC Real-Parameter Single-Objective Optimization Competition series (RPSOOC 2014-2022) [68, 69, 67, 1, 2, 51, 83, 37, 38].
In these works, infinite populations are used in order to simplify theoretical analysis. Applying these models to predict/understand the behavior of actual, finite populations is challenging, but they may provide insights into search behavior in the limit as population size increases. In contrast, we consider the use of monotonically increasing, finite populations with unbounded size as a practical alternative to search algorithms with standard (fixed size with generational replacement) population structures.
# 3 Proposed method
# 3.1 Differential Evolution without Generational Replacement: Unbounded DE
In Differential Evolution (DE), generational replacement is believed to play a crucial role in preserving better-fitness individuals and focusing the search on promising regions of the solution space. Nevertheless, prior studies have shown that DE variants which retain discarded individuals in an auxiliary archive population outperform traditional single-population DE methods [86, 25, 27]. This suggests that individuals discarded by generational replacement may still possess valuable information for search. However, the use of an archive requires careful design choices, including parameters such as retention duration, criteria for inclusion and deletion, and other operational rules.
Motivated by these observations, we propose Unbounded Differential Evolution (UDE), a DE variant that operates on a single population and retains all individuals, including those superseded by offspring. In UDE, even when an offspring is successful, its parent is not discarded. Unlike theoretical models that consider infinite population sizes for analytical convenience [80, 52, 53], UDE is a practical algorithm with an unbounded, but not infinite, population size.
Advantages of UDE Although generational replacement is traditionally regarded as a key component of DE, UDE avoids discarding potentially useful individuals. As such, it provides a simple and unified framework which enables simulating a primary population and an auxiliary population (archive) within a single, unbounded population. Additionally, in the UDE framework, selective pressure, rather than generational replacement, drives the selection process. While some recent DE variants have incorporated selective pressure in a limited fashion [61, 63, 6, 64], UDE extensively applies selective pressure for all selection operations. Thus, UDE is a conceptually simple, significant departure from standard DE.
Furthermore, by applying adaptive parameter control to the selection policy itself, UDE can implement search behavior similar to that of adaptive control of population size, a feature traditionally associated with increased algorithmic complexity. In UDE, the computational cost of individual selection increases as the population size grows. However, in most practical applications of evolutionary computation, the time required to generate individuals is negligible compared to the computational cost of evaluating objective functions, and thus this overhead is typically not a concern.
Unbounded Success-history based Adaptive DE (USHADE) is intended to exemplify DE with adaptive parameter control, and extends UDE by incorporating the widely used, success-history based adaptive parameter control mechanism from SHADE.
# 3.1.1 UDE
Algorithm 3.1 presents the pseudocode for UDE. While the algorithm shown employs the current-to-pbest mutation strategy (commonly used in state-of-the-art DE variants), UDE is a general framework, and other strategies, such as rand/1, can be adopted by modifying lines 8 and 9.
Algorithm 3.1 UDE (red is difference from baseline-DE).
1: $t = 1$ ;
2: Initialize population $P ^ { t }$ ;
3: for $i = 1$ to $| P ^ { 1 } |$ do
4: evaluate $f ( x ^ { i , t } )$ ;
5: while not termination condition do
6: for $i = 1$ to gensize do
7: select $x^{pbest,t}$;
8: select $x^{p,t}, x^{r1,t}, x^{r2,t}$ from $P^{t}$ with selection policy ($p \neq r1$, $p \neq r2$, $r1 \neq r2$);
9: generate $v ^ { i , t }$ using current-to-pbest mutation;
10: generate $u ^ { i , t }$ using binomial crossover between $v ^ { i , t }$ and $x ^ { p , t }$ ;
11: for $i = 1$ to gensize do
12: evaluate $f ( u ^ { i , t } )$ ;
13: $P ^ { t + 1 } = P ^ { t } \cup u ^ { \ i , t }$ ;
14: $t = t + 1$ ;
Figure 1: Comparison of conventional DE (left) vs. UDE (right). Boxes represent populations, and arrows indicate mutation, crossover, and population update. The diamond shape indicates selection by comparison of fitness.
Figure 1 shows a conceptual diagram of conventional DE and UDE. Compared to conventional DE employing the current-to-pbest mutation strategy (which we call baseline-DE), UDE differs in three key aspects:
1. Population growth (line 13): Offspring are added to the population without replacing their parents, resulting in a monotonically increasing population size.
2. Selection of parent $x ^ { p , t }$ (line 8): In standard DE, each individual serves as a parent exactly once per generation. In contrast, UDE selects parent individuals dynamically.
3. Definition of “generation” (lines 6 and 11): In standard DE, a generation is the creation/evaluation of $\vert P ^ { t } \vert$ new individuals, but in UDE, a “generation” is the creation/evaluation of gensize individuals, where gensize is constant throughout the search.
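A minimal sketch of one UDE "generation", illustrating the three differences above (the function and parameter names are ours, not the authors'; the index-distinctness constraints of line 8 are omitted for brevity, and minimization is assumed):

```python
import random

def ude_generation(P, fit, f, gensize=10, F=0.5, C=0.5, p_rate=0.1, T=10):
    """One UDE 'generation' (a sketch): create gensize offspring with
    current-to-pbest mutation and binomial crossover, then append them ALL
    to the population -- no generational replacement, so |P| grows
    monotonically.  Parents and difference individuals are chosen by a
    simple tournament of size n = |P| / T."""
    D = len(P[0])
    ranked = sorted(range(len(P)), key=lambda i: fit[i])
    pbest_pool = ranked[:max(1, int(p_rate * len(P)))]

    def tournament():
        n = max(1, len(P) // T)
        return min(random.sample(range(len(P)), n), key=lambda i: fit[i])

    offspring = []
    for _ in range(gensize):
        p, b = tournament(), random.choice(pbest_pool)
        r1, r2 = tournament(), tournament()
        v = [P[p][j] + F * (P[b][j] - P[p][j]) + F * (P[r1][j] - P[r2][j])
             for j in range(D)]                      # current-to-pbest mutation
        jr = random.randrange(D)
        u = [v[j] if (j == jr or random.random() < C) else P[p][j]
             for j in range(D)]                      # binomial crossover
        offspring.append(u)
    for u in offspring:                              # line 13: P only grows
        P.append(u)
        fit.append(f(u))
    return P, fit
```

Since no individual is ever removed, the best-so-far fitness is simply `min(fit)` over the whole (growing) population.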
# 3.1.2 USHADE
Unbounded Success-history based Adaptive DE (USHADE) is an extension of UDE that incorporates adaptive parameter control. As described in Section 2.2.1, state-of-the-art DE algorithms typically employ adaptive control for parameters such as the scaling factor $F$ and crossover rate $C$. USHADE applies the same success-history based adaptation scheme as SHADE. Algorithm 3.2 presents the pseudocode for USHADE. In addition to the differences between UDE and DE already described in Section 3.1.1, a major difference between USHADE and SHADE (Algorithm 2.2) is the elimination of the external archive (highlighted in red in Algorithm 2.2): there is no need for an explicit, auxiliary population because all individuals, including those which would traditionally be stored in an archive, are retained in $P^{t}$, and the decision of whether to use newer or older individuals is left up to the selection policy.
# 3.2 Selection Policy
The selection policy governs the choice of all individuals required for variation, including the parent $x^{p,t}$, as well as auxiliary individuals such as $x^{r1,t}$ and $x^{r2,t}$ used in the current-to-pbest mutation strategy (see line 8 in Algorithm 3.1 and line 15 in Algorithm 3.2). While UDE is compatible with various mutation strategies, this paper adopts the widely used current-to-pbest mutation, in line with state-of-the-art DE practices. For other mutation strategies, the selection policy should be appropriately adapted to select the required individuals. For example, in the case of rand/1 mutation (Equation 1), the selection policy would need to choose four individuals: $x^{p,t}, x^{r1,t}, x^{r2,t}$ and $x^{r3,t}$.
The simplest selection policy involves selecting individuals uniformly from the population $P^{t}$. However, uniform selection over all individuals (old and new) hampers the algorithm's ability to focus exploration on promising regions of the search space.
One effective approach to bias offspring generation toward promising regions is to employ tournament selection [23, 44]. Tournament selection biases selection so that better individuals are more likely to serve as parents in offspring generation. We adopt a tournament selection scheme which samples a subset of $n \geq 1$ individuals uniformly at random from the population and selects the individual with the best fitness from this subset. The probability that the individual with fitness rank $i$ is selected (assuming the population is sorted by fitness) can be expressed using the combination formula ${}_{n}\mathrm{C}_{r}$. Increasing $n$ intensifies the selection pressure, favoring better-fitness individuals, while decreasing $n$ allows more diverse, potentially lower-fitness individuals to be selected.
$$
\begin{array}{r}{p(i) = \left\{ \begin{array}{ll} \frac{{}_{|P^{t}|-i}\mathrm{C}_{n-1}}{{}_{|P^{t}|}\mathrm{C}_{n}} & \mathrm{if}\ i \in [1,\, |P^{t}|-n+1] \\ 0 & \mathrm{if}\ i \in [|P^{t}|-n+2,\, |P^{t}|] \end{array} \right.}\end{array}
$$
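A quick numerical check of this selection probability (using Python's `math.comb` for ${}_{n}\mathrm{C}_{r}$): the rank-$i$ individual wins a uniform tournament of $n$ distinct entrants exactly when the other $n-1$ entrants all rank worse than $i$.

```python
from math import comb

def tournament_rank_prob(N, n, i):
    """Probability that the individual of fitness rank i (rank 1 = best)
    wins a tournament of n distinct individuals drawn uniformly without
    replacement from a population of size N: the other n - 1 entrants
    must all have ranks worse than i."""
    return comb(N - i, n - 1) / comb(N, n)

N, n = 100, 5
probs = [tournament_rank_prob(N, n, i) for i in range(1, N + 1)]
assert abs(sum(probs) - 1.0) < 1e-12           # a valid probability distribution
assert probs[0] == n / N                        # the best individual wins with p = n/N
assert all(p == 0 for p in probs[N - n + 1:])   # the worst n-1 ranks can never win
```

The last assertion makes the selection pressure concrete: with tournament size $n$, the worst $n-1$ individuals are never selected.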
As a baseline tournament policy, we define the T policy, which selects $n$ ($= |P^{t}|/T$) individuals uniformly at random from the population $P^{t}$ to obtain a candidate pool and chooses the individual with the best fitness from this pool.
Algorithm 3.2 USHADE.
1: $t = 1$;
2: Initialize population $P ^ { t }$ ;
3: Initialize contents of success histories $M _ { F }$ and $M _ { C }$ to 0.5;
4: $k = 1$ ;
5: for $i = 1$ to $\vert P ^ { t } \vert$ do
6: evaluate $f ( \boldsymbol { x } ^ { i , t } )$ ;
7: while not termination condition do
8: $S _ { F } = \emptyset , S _ { C } = \emptyset$ ;
9: for $i = 1$ to gensize do
10: select $r ^ { i }$ randomly from $\{ 1 , \cdots , H \}$ ;
11: while $F ^ { i , t } \leq 0$ do
12: $F^{i,t} = \operatorname*{min}(\mathrm{rand}_{\mathrm{cauchy}}(M_{F}[r^{i}], 0.1), 1)$;
13: $C ^ { i , t } = \operatorname* { m a x } ( 0 , \operatorname* { m i n } ( \mathrm { r a n d } _ { \mathrm { n o r m a l } } ( M _ { C } [ r ^ { i } ] , 0 . 1 ) , 1 ) )$ ;
14: select $x^{pbest,t}$;
15: select $x^{p,t}, x^{r1,t}, x^{r2,t}$ from $P^{t}$ with selection policy ($p \neq r1$, $p \neq r2$, $r1 \neq r2$);
16: generate $\boldsymbol { v } ^ { i , t }$ using current-to-pbest mutation;
17: generate $u ^ { i , { t } }$ using binomial crossover between $\boldsymbol { v } ^ { i , t }$ and $x ^ { p , t }$ ;
18: for $i = 1$ to gensize do
19: evaluate $f ( u ^ { i , t } )$ ;
20: $P ^ { t + 1 } = P ^ { t } \cup u ^ { i , t }$ ;
21: if $f ( u ^ { i , t } ) \leq f ( x ^ { p , t } )$ then
22: $S _ { F } = S _ { F } \cup F ^ { i , t }$ , $S _ { C } = S _ { C } \cup C ^ { i , t }$ , $S _ { \Delta f } = S _ { \Delta f } \cup ( f ( x ^ { p , t } ) - f ( u ^ { i , t } ) )$
23: if $S _ { F } \neq \emptyset$ and $S _ { C } \neq \emptyset$ then
24: $M _ { F } [ k ] = \mathrm { m e a n } _ { L } ( S _ { F } , S _ { \Delta f } )$ , $M _ { C } [ k ] = \mathrm { m e a n } _ { L } ( S _ { C } , S _ { \Delta f } )$ ;
25: $\boldsymbol { k } = \left( \boldsymbol { k } + 1 \right)$ modulo $H$ ;
26: $t = t + 1$ ;
The value of $T$ is fixed to $| P ^ { 1 } |$ in UDE, and in USHADE, $T ^ { i , t }$ is controlled adaptively as described below in Section 3.2.2.
# 3.2.1 Diversity-Preserving Tournament Policy
We now introduce a tournament policy designed to maintain greater diversity than the baseline T policy.
The Diversity-Preserving Tournament (DPT) policy is as follows: For each offspring to be generated (totaling gensize), select an integer $j$ between 1 and gensize. Construct a subset $S \subseteq P^{t}$ consisting of individuals whose insertion index modulo gensize equals $j$. From $S$, sample $n$ ($= |P^{t}|/T$) candidates uniformly, and select the best among them based on fitness. Given that gensize is fixed, the individuals in $S$ are expected to be related through parent-offspring lineage. When generating offspring $x^{i,t}$, the respective index values $j^{p}, j^{r1}, j^{r2}$ for selecting the parent $x^{p}$ and the two additional individuals $x^{r1}, x^{r2}$ are determined as follows:
$$
\begin{array}{rl} & j^{p} = i \\ & j^{r1} = \mathrm{rand}_{\mathrm{uniform}}(1, \cdots, gensize), \; j^{r1} \neq j^{p} \\ & j^{r2} = \mathrm{rand}_{\mathrm{uniform}}(1, \cdots, gensize), \; j^{r2} \neq j^{p}, \; j^{r2} \neq j^{r1} \end{array}
$$
The purpose of tournament selection is to preferentially choose better-fitness individuals while retaining a small probability of selecting lower-quality ones. Compared to T, the DPT policy is expected to maintain greater diversity, as it reduces the likelihood of repeatedly selecting elite individuals from the same subset. Moreover, when the crossover rate $C$ is small, parents and offspring tend to share many variable values, increasing intra-subset similarity. By localizing tournaments to different subsets of the population for each offspring, DPT helps mitigate diversity loss.
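A sketch of the DPT policy under these definitions (the helper names and the cap $n \leq |S|$ are our assumptions; insertion indices are tracked explicitly):

```python
import random

def dpt_select(insert_idx, fit, gensize, j, T):
    """Diversity-Preserving Tournament (sketch): restrict the tournament to
    the subset S of individuals whose insertion index is congruent to j
    modulo gensize, then sample n = |P|/T candidates from S (capped at |S|)
    and return the index of the fittest.  insert_idx[k] is the insertion
    index of the k-th stored individual."""
    S = [k for k, idx in enumerate(insert_idx) if idx % gensize == j % gensize]
    n = max(1, min(len(S), len(insert_idx) // T))
    return min(random.sample(S, n), key=lambda k: fit[k])

def dpt_subset_indices(i, gensize):
    """Subset indices j_p, j_r1, j_r2 for offspring i (pairwise distinct)."""
    jp = i
    jr1 = random.choice([j for j in range(1, gensize + 1) if j != jp])
    jr2 = random.choice([j for j in range(1, gensize + 1) if j not in (jp, jr1)])
    return jp, jr1, jr2
```

Localizing each tournament to one residue class means an elite individual can only dominate selections within its own subset, which is the diversity-preserving effect described above.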
# 3.2.2 Adaptive Control of Tournament Selection Policy Parameter
We now describe a method for adaptively controlling the parameter $T ^ { i , t }$ used in the selection policies.
The red-highlighted sections in Algorithm 3.3 show how $T ^ { i , t }$ is managed.
Algorithm 3.3 USHADE(T) (red is related to adaptive control of parameter $T$).
1: $t = 1$ ;
2: Initialize population $P ^ { t }$ ;
3: Initialize contents of success histories $M_{F}$ and $M_{C}$ to 0.5, $M_{T}$ to $|P^{1}|$;
4: $k = 1$ ;
5: for $i = 1$ to $\vert P ^ { t } \vert$ do
6: evaluate $f ( x ^ { i , t } )$ ;
7: while not termination condition do
8: $S _ { F } = \emptyset , S _ { C } = \emptyset$ ;
9: $S _ { T } = \emptyset$ ;
10: for $i = 1$ to gensize do
11: select $r ^ { i }$ randomly from $\{ 1 , \cdots , H \}$ ;
12: while $F ^ { i , t } \leq 0$ do
13: $F^{i,t} = \operatorname*{min}(\mathrm{rand}_{\mathrm{cauchy}}(M_{F}[r^{i}], 0.1), 1)$;
14: $C ^ { i , t } = \operatorname* { m a x } ( 0 , \operatorname* { m i n } ( \mathrm { r a n d } _ { \mathrm { n o r m a l } } ( M _ { C } [ r ^ { i } ] , 0 . 1 ) , 1 ) ) ;$
15: $T^{i,t} = \operatorname*{max}(100, \mathrm{rand}_{\mathrm{normal}}(M_{T}[r^{i}], 10))$;
16: select $x^{pbest,t}$;
17: select $x ^ { p , t } , x ^ { r 1 , t } , x ^ { r 2 , t }$ from $P ^ { t }$ with tournament policy $( p \neq r 1 , p \neq r 2 , r 1 \neq r 2$ );
18: generate $\boldsymbol { v } ^ { i , t }$ using current-to-pbest mutation;
19: generate $\boldsymbol { u } ^ { i , t }$ using binomial crossover between $\boldsymbol { v } ^ { i , t }$ and $x ^ { p , t }$ ;
20: for $i = 1$ to gensize do
21: evaluate $f ( u ^ { i , t } )$ ;
22: $P ^ { t + 1 } = P ^ { t } \cup u ^ { i , t }$ ;
23: if $f ( u ^ { i , t } ) \leq f ( x ^ { p , t } )$ then
24: $S_{F} = S_{F} \cup F^{i,t}$, $S_{C} = S_{C} \cup C^{i,t}$, $S_{\Delta f} = S_{\Delta f} \cup (f(x^{p,t}) - f(u^{i,t}))$, $S_{T} = S_{T} \cup T^{i,t}$;
25: if $S _ { F } \neq \emptyset$ and $S _ { C } \neq \emptyset$ then
26: $M_{F}[k] = \mathrm{mean}_{L}(S_{F}, S_{\Delta f})$, $M_{C}[k] = \mathrm{mean}_{L}(S_{C}, S_{\Delta f})$, $M_{T}[k] = \mathrm{mean}_{L}(S_{T}, S_{\Delta f})$;
27: $\boldsymbol { k } = \left( \boldsymbol { k } + 1 \right)$ modulo $H$ ;
28: $t = t + 1$ ;
The parameter $T^{i,t}$ is handled analogously to the scale factor $F$ and crossover rate $C$. Specifically, a success-history $M_{T}$ is initialized with the value $|P^{1}|$ (line 3). In each generation, $T^{i,t}$ is drawn from a normal distribution centered around a value from the success-history (line 15), similar to the approach for $C^{i,t}$. To account for the larger numerical scale of $T^{i,t}$, the standard deviation is set to $\sigma_{T} = 10$, which is larger than the $\sigma_{C} = 0.1$ used for $C^{i,t}$. The lower bound of $T^{i,t}$ is fixed at 100, with no upper bound. When an offspring is successful, $T^{i,t}$ is added to the set $S_{T}$ (line 24), and the relevant success-history element $M_{T}[k]$ is set to the Lehmer mean, weighted by the fitness improvement $S_{\Delta f}$ (line 26).
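The update machinery can be sketched as follows; `weighted_lehmer_mean` follows the standard SHADE-style improvement-weighted Lehmer mean, and `sample_T` is our reading of the sampling rule (normal distribution, $\sigma_{T} = 10$, lower bound 100):

```python
import random

def weighted_lehmer_mean(S, S_df):
    """Lehmer mean of the successful parameter values S, weighted by the
    fitness improvements S_df, as used for the success-history updates
    of M_F, M_C, and M_T."""
    total = sum(S_df)
    w = [d / total for d in S_df]
    return (sum(wk * s * s for wk, s in zip(w, S))
            / sum(wk * s for wk, s in zip(w, S)))

def sample_T(m_t, sigma=10.0, lower=100.0):
    """Draw T^{i,t} from N(M_T[r], sigma_T = 10), clipped below at 100
    with no upper bound."""
    return max(lower, random.gauss(m_t, sigma))
```

The Lehmer mean is biased upward relative to the arithmetic mean, so the history drifts toward larger successful parameter values.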
The performance of this algorithm is evaluated in Section 4, with a detailed analysis provided in Section 4.4.
# 3.3 UDE/USHADE without Failed Individuals
A defining feature of UDE is that all individuals are kept in the population, and selection is responsible for deciding which individuals to use or ignore as the search progresses. In contrast, discarding failed individuals (those which do not have better fitness values than their parent) is a standard feature of DE with generational replacement.
By moving line 22 of Algorithm 3.3 inside the conditional block at line 23, failed offspring are excluded from the population update, ensuring that only successful individuals are retained in the population.
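The one-line difference between the two variants can be sketched as a population-update helper (the names are ours; minimization is assumed, so an offspring succeeds when $f(u^{i,t}) \leq f(x^{p,t})$):

```python
def population_update(P, fit, offspring, parent_fit, discard_failed):
    """Population update step (sketch): the keep-everything variant appends
    every offspring, while the /DF variant keeps an offspring only if it is
    no worse than its parent (i.e., line 22 moved inside the success test
    at line 23).  offspring is a list of (vector, fitness) pairs."""
    for (u, fu), fp in zip(offspring, parent_fit):
        if not discard_failed or fu <= fp:
            P.append(u)
            fit.append(fu)
    return P, fit
```

With `discard_failed=True` the population still grows monotonically, but only by the number of successful offspring per generation.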
We refer to this variant of USHADE which discards failed individuals as USHADE/DF, and the corresponding nonadaptive variant as UDE/DF. The preliminary conference version of this study [34] presented extensive experimental evaluations of UDE/DF and USHADE/DF (“UDE” and “USHADE” in the previous paper refer to UDE/DF and USHADE/DF, respectively). We experimentally compare USHADE/DF to USHADE in Section 4.2.
# 4 Experimental Evaluation
We experimentally evaluate UDE and its variants using standard benchmark problem sets used to evaluate DE. In this section, we first introduce a method for comparing evolutionary computation algorithms (Sec. 4.1) and then present the results of a performance comparison (Sec. 4.2). Sec. 4.3 shows that UDE mimics population size control through the behavior of the $T$ parameter.
All experiments were run on an Apple M3 Pro CPU with 16GB RAM running macOS Sequoia. All algorithms were implemented in C++ (g++ with -O3 and -std=c++11 compilation flags).
# 4.1 Evaluation Methodology
We use 2 sets of benchmarks from the IEEE Congress on Evolutionary Computation (CEC) Single Objective Optimization Competition series, specifically:
• The CEC2014 benchmark suite [68], consisting of 30 problems ($F1$ Rotated High Conditioned Elliptic Function, $F2$ Rotated Bent Cigar Function, $F3$ Rotated Discus Function, $F4$ Shifted and Rotated (S&R) Rosenbrock’s Function, $F5$ S&R Ackley’s Function, $F6$ S&R Weierstrass Function, $F7$ S&R Griewank’s Function, $F8$ Shifted Rastrigin’s Function, $F9$ S&R Rastrigin’s Function, $F10$ Shifted Schwefel’s Function, $F11$ S&R Schwefel’s Function, $F12$ S&R Katsuura Function, $F13$ S&R HappyCat Function, $F14$ S&R HGBat Function, $F15$ S&R Expanded Griewank’s plus Rosenbrock’s Function, $F16$ S&R Expanded Scaffer’s f6 Function, $F17$-$F22$ Hybrid Functions, $F23$-$F30$ Composition Functions).
• The CEC2022 benchmark suite [38], consisting of 12 problems ($F1$ S&R Zakharov Function, $F2$ S&R Rosenbrock’s Function, $F3$ S&R Expanded Schaffer’s f6 Function, $F4$ S&R Non-Continuous Rastrigin Function, $F5$ S&R Levy Function, $F6$-$F8$ Hybrid Functions, $F9$-$F12$ Composition Functions).
In hybrid functions, the $D$ dimensions are divided into 3-5 groups, and a different basic function from the above is assigned to each group. A composition function is a problem in which the final output is a weighted sum of the outputs of the above functions, without dividing the dimensionality of the problem.
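A toy illustration of the two constructions (heavily simplified: the actual CEC definitions also apply shifts, rotations, variable permutations, and bias values):

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def hybrid(x, split=2):
    """Hybrid function (simplified): partition the D coordinates into
    groups and apply a different basic function to each group."""
    return sphere(x[:split]) + rastrigin(x[split:])

def composition(x, weights=(0.7, 0.3)):
    """Composition function (simplified): a weighted sum of whole-vector
    basic-function outputs, with no partitioning of the dimensions."""
    return weights[0] * sphere(x) + weights[1] * rastrigin(x)
```

In the hybrid case each coordinate contributes to exactly one component function; in the composition case every coordinate contributes to all of them.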
The main objective of this paper is to explore the viability of the UDE framework (whose key features are an unbounded population and no generational replacement) as an alternative to standard, generational-replacement based DE. However, in order to assess UDE as a practical framework in the context of state-of-the-art DE, we focus our evaluation on USHADE, which incorporates success-history based adaptive control of the $F$, $C$, and (new in UDE) $T$ parameters. As USHADE is UDE with the control parameter adaptation of SHADE, our experiments primarily use the CEC2014 benchmark suite, because it was the benchmark set on which LSHADE was tuned [74], and it is also very similar to the CEC2013 benchmark set on which SHADE was tuned [73].
# 4.1.1 Empirical Cumulative Distribution Functions (ECDF)
To concisely illustrate and compare the performance of multiple algorithms, we use Empirical Cumulative (Run-Time) Distribution Functions (ECDFs). ECDFs are widely adopted in evolutionary computation research as an informative performance indicator [30, 54]. The ECDF is formally defined as follows: given a performance threshold $z$ and trial outcomes $z _ { i } ( 1 \leq i \leq N )$ , where $N$ denotes the
number of independent runs (trials), we define:
$$
\mathrm{ECDF}(z) = \frac{1}{N}\,\mathrm{count}(z_{i} < z), \quad i = 1, \cdots, N
$$
where $\mathrm { c o u n t } ( z _ { i } < z )$ denotes the number of runs for which $z _ { i }$ is less than the threshold $z$ . The ECDF value lies in the range [0,1], where 1 is the best (all runs achieve the performance threshold).
In this paper, the ECDF plots (e.g., Figure 2) use evaluation count as the horizontal axis (up to the maximum evaluation budget). The vertical axis represents the average attainment rate (fraction of runs reaching given targets), based on whether the best-so-far fitness fitnessbest-so-far (the best fitness value observed among all individuals in the search up to that point) achieves specified thresholds.
The target threshold values $z$ depend on the figure, and also depend on the data. For most of the figures, $z$ is set as follows: Let fitnessbest-so-far,final be the fitnessbest-so-far score at the end of a trial. Then $z$ is the median value of fitnessbest-so-far,final among all of the $NumTrials \times k$ trials, where $k$ is the number of algorithms being evaluated in the figure. For example, suppose we are evaluating algorithms $A1$ and $A2$, and $NumTrials = 2$. If fitnessbest-so-far,final(A1, trial1) = 9, fitnessbest-so-far,final(A1, trial2) = 2, fitnessbest-so-far,final(A2, trial1) = 5, and fitnessbest-so-far,final(A2, trial2) = 6, then $z$ is set to 5.5 (the median of 9, 2, 5, 6).
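The target-setting scheme and the worked example above can be reproduced directly:

```python
def median(xs):
    s, n = sorted(xs), len(xs)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def ecdf(finals, z):
    """Fraction of trials whose final best-so-far fitness beat threshold z."""
    return sum(1 for zi in finals if zi < z) / len(finals)

# The worked example from the text: algorithms A1 and A2, two trials each.
finals = {"A1": [9, 2], "A2": [5, 6]}
z = median([v for runs in finals.values() for v in runs])
assert z == 5.5
assert ecdf(finals["A1"], z) == 0.5   # one of A1's two trials beats z
assert ecdf(finals["A2"], z) == 0.5
```

Pooling all trials of all algorithms before taking the median is what makes $z$ data-driven rather than hand-chosen.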
A key advantage of this approach to setting the $z$ threshold values is that $z$ is automatically determined from the performance data ($z$ is not decided arbitrarily). A curve that lies higher on the plot at a given evaluation count indicates that a larger proportion of trials have reached the target by that number of evaluations, indicating better search performance. However, it is important to note: (1) the appearance of the figure depends on which algorithms are included in the comparison (as the $z$ values depend on the data), and (2) since the vertical axis reflects success rates, improvements in fitnessbest-so-far beyond meeting the $z$ threshold are not reflected in the graph.
# 4.1.2 Statistical Tests
To complement the ECDF-based analysis, we employ the Wilcoxon rank-sum test [82] for statistical comparison of fitnessbest-so-far,final across algorithms. The Wilcoxon test is a non-parametric method that tests the null hypothesis that two groups come from the same distribution.
The Wilcoxon test only considers the fitnessbest-so-far at a specific point (usually the end of the search), and does not consider how fast the search reached that fitness value. For example, if algorithm A reaches the optimum in 1,000 evaluations and algorithm B reaches it in 10,000 evaluations, the two algorithms are statistically indistinguishable by a Wilcoxon test applied at 10,000 evaluations. Therefore, combining ECDF plots (which capture search behavior over time) with the Wilcoxon rank-sum test (which assesses statistical significance at termination) provides a more comprehensive performance analysis.
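For reference, a bare-bones rank-sum statistic with the large-sample normal approximation (a sketch assuming no tied values; in practice a library routine with exact and tie handling should be used):

```python
import math

def rank_sum_z(a, b):
    """Wilcoxon rank-sum statistic for samples a and b with the normal
    approximation (no tie correction): z far from 0 suggests the two
    samples differ in location."""
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # assumes no ties
    W = sum(rank[v] for v in a)                       # rank sum of sample a
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (W - mu) / sigma
```

If every final fitness of algorithm A is smaller (better) than every final fitness of algorithm B, the statistic is strongly negative, regardless of how early A reached those values, which is exactly the limitation discussed above.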
# 4.2 Performance Evaluation: Is Replacement Necessary?
We evaluate the following algorithms, all of which use current-to-pbest mutation and binomial crossover. All algorithms were evaluated on the CEC2014 benchmarks, with 51 independent runs per problem, and the maximum evaluation budget for each run was $20{,}000 \times D$.
• Baseline-DE (Alg. 2.1): A baseline DE based on the classical DE of [66]. This uses fixed parameter values for the scale factor ($F = 0.5$), crossover rate ($C = 0.5$), and population size ($|P^{t}| = 100$). Pbest rate $p = 0.11$. There is no archive.
• UDE(DPT) (Alg. 3.1): UDE without parameter adaptation, using the DPT tournament policy (Section 3.2.1) with tournament parameter $T = \mathrm{round}(|P^{t}|/|P^{1}|)$ and initial population size $|P^{1}| = 18 \times D$. This uses fixed parameter values for the scale factor ($F = 0.5$) and crossover rate ($C = 0.5$), and generates a fixed number ($gensize = 100$) of individuals each generation. The population size $|P^{t}|$ grows monotonically; pbest rate $p = 0.11$.
• SHADE (Alg. 2.2, [73]): Adaptive DE using success-history based parameter adaptation of $F$ and $C$. As in the implementation [72] by the original author [73], the success-history parameters $M_{F}$ and $M_{C}$ are vectors of length $H = D$, pbest rate $p = 0.10$, population size $|P^{t}| = 100$, and archive size $|A_{\mathrm{max}}| = 2 \times |P|$.
• LSHADE (Sec. 2.2.2, [74]): SHADE with Linear Population Size Reduction. Following the implementation [72] by the original author [74], initial population size $|P^{1}| = 18 \cdot D$, population size reduced using Eq. 4, history length $H = 6$, and archive size $|A_{\mathrm{max}}^{t}| = 1.4 \cdot |P^{t}|$.
• USHADE(T) (Alg. 3.2): UDE with success-history based parameter adaptation using the T tournament selection operator. This generates a fixed number ($gensize = 100$) of individuals each generation, and the population size $|P^{t}|$ grows monotonically. Other parameters ($|P^{1}|, H, p$) are the same as LSHADE.
• USHADE(DPT) (Alg. 3.2): UDE with success-history based parameter adaptation using the DPT tournament selection operator (Section 3.2.1). All parameters are the same as USHADE(T).
# 4.2.1 Evaluation of UDE without parameter adaptation: Baseline DE vs. UDE(DPT)
First, we evaluate whether the key novel elements of our approach (unbounded population, no generational replacement) are viable compared to standard DE. Thus, we compare the performance of UDE(DPT) against baseline-DE, neither of which incorporates parameter control.
Table 3 shows the Wilcoxon rank-sum test ($p = 0.05$) comparison of the fitness values reached by Baseline-DE vs. UDE(DPT). Overall, UDE(DPT) outperformed baseline-DE across a wide range of functions at all dimensions $D = 10, 30, 50$. $F14$ is the only function where baseline-DE consistently outperforms UDE(DPT). Comparing the ECDFs of Baseline-DE and UDE(DPT) in Figure 2, the UDE ECDF curve is clearly above the DE curve for $D = 10, 30, 50$.
Thus, our results clearly indicate that UDE(DPT) is competitive with Baseline-DE: given the same mutation and crossover operators, the unbounded population approach of UDE outperforms the generational replacement approach of standard DE.
# 4.2.2 Evaluation of UDE with parameter adaptation
We now compare UDE with adaptation (USHADE) with the standard adaptive DEs SHADE and LSHADE. Figure 2 shows ECDFs of the algorithms’ average attainment rates. USHADE(DPT) outperformed LSHADE for all $D$. In the early stage of the search (approximately up to 10% of the maximum evaluation budget), SHADE tended to be the most effective algorithm, but by the midpoint of the search, it is overtaken by USHADE(T), USHADE(DPT), and LSHADE. Overall, after the midpoint of the search, USHADE(DPT) has the highest ECDF curve among the algorithms.
Tables 4-6 compare fitnessbest-so-far,final (i.e., after $2 \times 10^{4} \times D$ evaluations) using the Wilcoxon rank-sum test ($p = 0.05$). USHADE(DPT) outperformed SHADE on 16, 19, and 22 problems for $D = 10, 30, 50$, respectively. USHADE(DPT) outperformed LSHADE on 10, 11, and 9 problems, while LSHADE outperformed USHADE(DPT) on 6, 8, and 11 problems for $D = 10, 30, 50$, respectively.
For multimodal functions $F10$ to $F15$, LSHADE generally performed better than USHADE(DPT). For hybrid and composition functions ($F17$ onward), USHADE(DPT) outperformed LSHADE at $D = 10$, while at $D = 30$ and 50, the two algorithms performed comparably with an approximately equal number of wins and losses.
Comparing the USHADE tournament policies T and DPT, the USHADE(DPT) ECDF curves are consistently above the USHADE(T) curves. At the end of the search, USHADE(DPT) outperformed USHADE(T) on 16, 12, and 14 problems for $D = 10, 30, 50$, respectively, and USHADE(DPT) was outperformed by USHADE(T) on 3, 2, and 2 problems for $D = 10, 30, 50$, respectively.
Thus, the results on the CEC2014 benchmarks show that: (1) USHADE(DPT) is competitive with SHADE and LSHADE, and (2) the DPT tournament selection strategy significantly outperforms the baseline T strategy, indicating the usefulness of designing tournament policies to maintain diversity.
Table 3: Comparison of baseline-DE vs. UDE(DPT) (CEC 2014 benchmarks, $D = 10, 30, 50$, maximum evaluation budget $= 20{,}000 \times D$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F30$ are shown. ($+$: better than UDE(DPT), $-$: worse than UDE(DPT), $\approx$: no significant difference)
Figure 2: ECDFs of six algorithms on the CEC 2014 benchmark suite for $D = 10, 30, 50$. The x-axis indicates the evaluation count (up to the maximum evaluation budget), and the y-axis shows the proportion of 51 trials that attained the target value at each point. For each problem, three targets were defined based on the median and first/third quartiles of fitnessbest-so-far,final, calculated across all six algorithms. The ECDFs represent the average attainment rate over 30 benchmark functions.
Table 4: Comparison of LSHADE, SHADE, USHADE(T) vs. USHADE(DPT) (CEC 2014 benchmarks, $D = 10$, maximum evaluation budget $= 200{,}000$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F30$ are shown. ($+$: better than USHADE(DPT), $-$: worse than USHADE(DPT), $\approx$: no significant difference)
Table 5: Comparison of LSHADE, SHADE, USHADE(T) vs. USHADE(DPT) (CEC 2014 benchmarks, $D = 30$, maximum evaluation budget $= 600{,}000$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F30$ are shown. ($+$: better than USHADE(DPT), $-$: worse than USHADE(DPT), $\approx$: no significant difference)
Table 6: Comparison of LSHADE, SHADE, USHADE(T) vs. USHADE(DPT) (CEC 2014 benchmarks, $D = 50$, maximum evaluation budget $= 1{,}000{,}000$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F30$ are shown. ($+$: better than USHADE(DPT), $-$: worse than USHADE(DPT), $\approx$: no significant difference)
# 4.2.3 Evaluation on CEC2022 benchmarks
Finally, we compared the performance of USHADE(DPT) with state-of-the-art methods on the CEC2022 benchmarks. Specifically, we compare USHADE(DPT) with the top four algorithms in the CEC 2022 competition:
• EA4eig [15] (1st place), an ensemble of four high-performing evolutionary algorithms enhanced by Eigen crossover: (1) the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), (2) Differential Evolution with Covariance Matrix Learning and Bimodal Distribution Parameter Setting (CoBiDE), (3) an adaptive variant of jSO [10], and (4) Differential Evolution with an Individual-Dependent Mechanism (IDE).
• NL-SHADE-LBC [64] (2nd place), an LSHADE variant that incorporates finely tuned biased parameter adaptation and the rank-based mutation strategy introduced in NL-SHADE-RSP [61].
• NL-SHADE-RSP-MID [6] (3rd place), an LSHADE variant with a restart mechanism that generates individuals biased toward the centers of subpopulations obtained by partitioning the population with k-means clustering.
• S-LSHADE-DP [76] (4th place), an LSHADE variant that monitors its population using stagnation indicators and, when stagnation is detected, attempts to resolve it by generating new individuals using perturbations other than mutation.
EA4eig is an ensemble algorithm, while the other three are variants of LSHADE.
We compared the distributions of the final best-so-far values achieved by the algorithms on the CEC 2022 benchmark problems. For USHADE(DPT), following the CEC 2022 competition protocol, we performed 30 independent runs on 12 functions, with maximum evaluation budgets of 200,000 and 1,000,000 for $D = 10$ and $D = 20$, respectively. For EA4eig, NL-SHADE-LBC, NL-SHADE-RSP-MID, and S-LSHADE-DP, we used the raw data for the final best-so-far values downloaded from the CEC 2022 competition data repository [38].
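Each per-function entry in the comparison tables can be produced by this protocol: collect the final best-so-far values from the independent runs of two algorithms and apply a two-sided rank-sum test at $p = 0.05$. A minimal sketch, using a normal-approximation Mann-Whitney statistic without tie correction (function names are ours, not from the paper):

```python
import math
import statistics

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney) p-value via the normal
    approximation, without tie correction."""
    n1, n2 = len(x), len(y)
    # U statistic: pairs with x_i < y_j count 1, ties count 0.5
    u = sum((xi < yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = abs(u - mean) / sd
    # two-sided p from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def classify(runs_a, runs_b, alpha=0.05):
    """'+' if algorithm A's final errors are significantly lower (better for
    minimization), '-' if significantly higher, '~' otherwise."""
    if rank_sum_p(runs_a, runs_b) >= alpha:
        return "~"
    return "+" if statistics.median(runs_a) < statistics.median(runs_b) else "-"
```

For 30 or 51 runs per algorithm, the normal approximation is close to the exact test; a production analysis would typically use a library implementation with tie correction instead.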
Table 7: Comparison of S-LSHADE-DP (4th place), NL-SHADE-RSP-MID (3rd place), NL-SHADE-LBC (2nd place), and EA4eig (1st place) vs. USHADE(DPT) (CEC 2022 benchmarks, $D = 10$ and $D = 20$, maximum evaluation budget $= 200{,}000$ for $D = 10$ and $= 1{,}000{,}000$ for $D = 20$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F12$ are shown. (-: worse than USHADE(DPT), +: better than USHADE(DPT), $\approx$: no significant difference)
Table 7 shows the results of the comparisons. For $D = 10$, USHADE(DPT) outperformed the third-place algorithm from the CEC 2022 competition on 3 problems and tied (no statistically significant difference) on 4 others. Compared to the fourth-place algorithm, USHADE(DPT) performed better on
2 problems and tied on 6 problems. The first- and second-place algorithms achieved the same optimal solutions as USHADE(DPT) and outperformed USHADE(DPT) on 7 and 6 problems, respectively. For $D = 2 0$ , USHADE(DPT) outperformed the third-place algorithm on 5 problems and was worse on 4. Against the fourth-place algorithm, USHADE(DPT) was superior on 3 problems and worse on 3. In comparison with the first- and second-place algorithms, USHADE(DPT) performed better on 1 and 2 problems, respectively, reached the same optimal solution on 3 problems, and was outperformed on 5. Thus, USHADE(DPT) performed worse than the 1st and 2nd place entries (EA4eig, NL-SHADE-LBC) in the CEC2022 competition, but comparably to the 3rd and 4th place entries (NL-SHADE-RSP-MID, S-LSHADE-DP).
# 4.3 Simulating adaptive population sizes
The adaptive control of the tournament parameter $T$ in USHADE (Section 3.2.2) is effectively similar to adaptively controlling the population size in standard generational-replacement-based DEs. Increasing the tournament size in the T and DPT policies (i.e., decreasing the parameter $T^{i,t}$) results in the selection of a smaller number of better-fitness individuals for offspring generation. Although the probability of selection varies among individuals, using only better-fitness individuals for mutation closely resembles reducing the population size in standard DE by eliminating worse-fitness individuals. Conversely, decreasing the tournament size (increasing $T^{i,t}$) has an effect similar to re-including previously excluded or discarded worse-fitness individuals for mutation, and is analogous to enlarging the population.
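This analogy can be illustrated with a toy experiment (our own schematic, not the USHADE implementation): when mutation parents are drawn by best-of-$k$ tournament from a fixed pool, a larger tournament concentrates selection on better-fitness individuals, much as shrinking an elite population would, while a smaller tournament re-admits worse-fitness individuals, much as enlarging the population would.

```python
import random

rng = random.Random(0)
pool = [rng.random() for _ in range(2000)]   # fitness values (minimization)

def tournament_pick(pool, k, rng):
    """Best-of-k tournament: sample k individuals, return the best (lowest) fitness."""
    return min(rng.choice(pool) for _ in range(k))

def mean_selected_fitness(k, trials=1000, seed=1):
    """Average fitness of parents selected by best-of-k tournaments."""
    r = random.Random(seed)
    return sum(tournament_pick(pool, k, r) for _ in range(trials)) / trials

mild = mean_selected_fitness(k=2)    # small tournament: mild selection pressure
strong = mean_selected_fitness(k=8)  # large tournament: parents drawn mostly from the elite
```

With uniform fitness values, the expected minimum of $k$ draws is $1/(k+1)$, so the larger tournament selects markedly fitter parents, mirroring the effect of a smaller elite population.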
# 4.3.1 Behavior of adaptively controlled $T$ parameter
To understand how the adaptive control of $T$ in USHADE(DPT) behaves, we plot smoothed trajectories of $T^{i,t}$ for all of the CEC 2014 benchmarks (4 runs/problem), shown in Figures 3 and 4. Each colored line corresponds to one run. Across all problems, $T^{i,t}$ tends to decrease in the early search phase, with many runs reaching the minimum threshold, after which $T^{i,t}$ generally increases. Exceptions include benchmarks $F1$, $F2$, and $F3$, where the optimal solution is found early and no subsequent increase in $T^{i,t}$ occurs.
The behavior of parameter $T$ varies considerably across problems. For instance, in the case of $D = 10$ (Figure 3), problems such as $F21$ and $F22$ exhibit relatively stable values of $T$, whereas problems like $F5$, $F14$, $F26$, and $F27$ show large fluctuations. In the case of $D = 30$ (Figure 4, top), $T$ rarely exceeds its initial value, with the only exceptions occurring in problems $F5$, $F13$, and $F14$. Similarly, for $D = 50$ (Figure 4, bottom), problems $F13$ and $F14$ are distinctive in that the value of $T$ levels off around 500, in contrast to other problems. Thus, the behavior of $T$ varies significantly depending on the problem, suggesting that $T$ is adaptively controlled in response to problem characteristics.
# 4.3.2 Robustness with respect to maximum evaluation budgets
If adaptive control of $T$ effectively behaves like adaptive population control, a significant potential benefit is robustness to the maximum evaluation budget. Standard deterministic control of the population size, such as the widely used linear population size reduction (LPSR) method, depends on tuning a predefined reduction schedule (reducing the population size from $| P ^ { 1 } |$ to $| P | _ { \mathrm { m i n } }$ linearly) to the maximum evaluation budget $L_{\mathrm{max}}^{\mathrm{evaluation}}$ (Equation 4), which must be known a priori. In contrast, the performance of USHADE should be relatively robust with respect to the maximum evaluation budget.
Figure 5 presents the ECDFs of USHADE(T), USHADE(DPT), SHADE, and three variants of LSHADE on the CEC 2014 benchmark suite for $D = 10, 30, 50$. The linear population size reduction (LPSR) strategy employed by LSHADE requires a predefined reduction schedule.
• The standard version of LSHADE reaches its minimum population size $|P|_{\mathrm{min}} = 4$ after $maxevals = L_{\mathrm{max}}^{\mathrm{evaluation}}$ evaluations.
• LSHADE (half) is a variant scheduled to reach $|P|_{\mathrm{min}}$ at $maxevals = L_{\mathrm{max}}^{\mathrm{evaluation}}/2$ evaluations.
Figure 3: Values of the adaptive parameter $T$ in USHADE(DPT) for the CEC 2014 benchmarks with $D = 10$. The curves represent moving averages of $T$ with a window width of $18 \times D$, with 4 independent trials per problem (each trial is a colored line). The vertical axis corresponds to $T$, which is initialized to $18 \times D$ and has a minimum value of 100. The horizontal axis denotes the number of fitness evaluations. The subfigures (one per problem) are arranged in order (Row 1: $F1$–$F6$, Row 2: $F7$–$F12$, ..., Row 6: $F25$–$F30$).
Figure 4: Values of the adaptive parameter $T$ in USHADE(DPT) for the CEC 2014 benchmarks with $D = 30$ and $D = 50$. The curves represent moving averages of $T$ with a window width of $18 \times D$, with 4 independent trials per problem (each trial is a colored line). The vertical axis corresponds to $T$, which is initialized to $18 \times D$ and has a minimum value of 100. The horizontal axis denotes the number of fitness evaluations. The subfigures (one per problem) are arranged in order (Row 1: $F1$–$F6$, Row 2: $F7$–$F12$, ..., Row 6: $F25$–$F30$).
Figure 5: Empirical cumulative distribution functions (ECDFs) for six algorithms—USHADE(T), USHADE(DPT), SHADE, and three variants of LSHADE (standard, half schedule, and double schedule)—on the CEC 2014 benchmarks with $D = 10, 30, 50$. In each plot, the horizontal axis indicates the number of fitness evaluations, while the vertical axis shows the proportion of runs (out of 51) in which $\mathrm{fitness}_{\text{best-so-far}}$ reached a predefined target. These targets were determined per problem based on the median and the first and third quartiles of $\mathrm{fitness}_{\text{best-so-far,final}}$, calculated across all 51 runs and all six algorithms. The ECDF shown represents the average across all 30 benchmark problems.
• LSHADE (double) uses a schedule which reaches $|P|_{\mathrm{min}}$ after $maxevals = 2 \times L_{\mathrm{max}}^{\mathrm{evaluation}}$ evaluations.
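The LPSR schedules compared above can be sketched as a simple linear interpolation over the scheduled budget (variable names are ours; $|P|_{\mathrm{min}} = 4$ as stated above):

```python
def lpsr_size(evals, max_evals, n_init, n_min=4):
    """Linear population size reduction (LPSR): interpolate linearly from
    n_init down to n_min over the scheduled budget, then stay at n_min."""
    if evals >= max_evals:
        return n_min
    return round(n_init + (n_min - n_init) * evals / max_evals)

# Illustrative schedules for a true budget of 200,000 evaluations:
# LSHADE (standard) passes max_evals = 200_000,
# LSHADE (half) passes 100_000, and LSHADE (double) passes 400_000,
# so the 'half' variant sits at n_min for the entire second half of the run.
```

The key point is that `max_evals` must be fixed before the run starts, which is exactly the a-priori dependence on the budget that USHADE's adaptive control of $T$ avoids.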
As shown in the figure, LSHADE (half) exhibits a rapid increase in its ECDF as the scheduled maxevals approaches, followed by only a very slow increase once maxevals is exceeded. Similarly, LSHADE (standard) has an ECDF that increases as the number of fitness evaluations approaches $L_{\mathrm{max}}^{\mathrm{evaluation}}$, and if the search were to continue beyond that point, performance would be expected to stagnate. LSHADE (double), whose maxevals is larger than $L_{\mathrm{max}}^{\mathrm{evaluation}}$, does not perform well due to its improper schedule. The LPSR of LSHADE thus makes it difficult to adjust to the evaluation budget dynamically. In contrast, USHADE implicitly mimics population size control through its adaptive mechanisms, eliminating the need for such scheduling.
Figure 5 shows that USHADE outperforms LSHADE, LSHADE (half), and LSHADE (double). Thus, the adaptive control of $T$ in USHADE, enabled by the flexibility of the UDE framework, which keeps all created individuals, is a significant advantage over standard generational-replacement-based DE, which discards valuable information (failed individuals, as well as individuals replaced during generational replacement).
# 4.4 Are failed individuals useful or harmful?
Standard DE algorithms only keep successful individuals (individuals with better fitness than their parent) in the population, and discard failed individuals. In contrast, USHADE keeps all offspring in the population, including failed ones. This section empirically investigates whether failed individuals contribute to search progress in USHADE, and whether excluding failed individuals affects convergence speed to better-fitness solutions.
# 4.4.1 Frequency of Usage of Failed Individuals
We analyzed whether the best-so-far offspring at any given point during a single run of the CEC 2014 benchmarks was derived from a failed or successful parent. As individuals were created, we marked them as “successful” or “failed” depending on whether their fitness was better than their parent, and each time a new best-so-far individual was evaluated (when fitnessbest-so-far is updated), we recorded whether its parent was a successful individual or failed individual.
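The bookkeeping behind this analysis amounts to tagging each offspring at creation time and, whenever the best-so-far record improves, checking the tag of the new best individual's parent. A minimal sketch, assuming minimization (names are ours):

```python
def failed_parent_fraction(events):
    """events: (child_fitness, parent_was_failed) tuples in creation order.
    A 'failed' parent is one whose own fitness was not better than its parent's.
    Returns the fraction of best-so-far updates whose parent was failed."""
    best = float("inf")
    updates = from_failed = 0
    for child_fitness, parent_was_failed in events:
        if child_fitness < best:        # fitness_best-so-far is updated
            best = child_fitness
            updates += 1
            from_failed += parent_was_failed
    return from_failed / updates if updates else 0.0
```

On a toy stream where two of four best-so-far updates come from failed parents, the function returns 0.5; the experiments below report this fraction per benchmark problem.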
Figure 6 shows the fraction of best-so-far individuals that had a failed parent, for each of the 30 benchmark problems ($D = 10, 30, 50$) across 51 runs per problem. The fraction of best-so-far individuals with failed parents was approximately $30\%$ for $D = 10, 30$, and around $20\%$ for $D = 50$. Since a significant proportion of the best individuals were generated from failed parents, this indicates that in USHADE, failed individuals are not useless: they contribute to search progress by becoming the parents of new best-so-far individuals.
Figure 6 also shows that although the extent to which failed individuals are utilized in UDE varies depending on the problem, even the lowest utilization (for $D = 5 0$ on the $F 1$ problem) is around $1 5 \%$ .
Figure 6: Fraction of best-so-far individuals found during search whose parent was a failed individual (30 problems from the CEC 2014 benchmarks, 51 runs/problem, $D = 10, 30, 50$).
# 4.4.2 Comparison of search progress with and without failed individuals
Next, we evaluate whether failed individuals contribute positively or negatively to the search efficiency of USHADE. We compare USHADE(DPT) to USHADE/DF (a USHADE variant that discards all failed individuals, defined in Section 3.3).
Figure 7 compares the ECDFs for USHADE(DPT) and USHADE/DF(DPT). For $D = 1 0$ and $D = 3 0$ , USHADE(DPT) consistently outperformed USHADE/DF(DPT). However, for $D = 5 0$ , the two variants performed comparably up to 50 million evaluations, after which USHADE/DF(DPT) achieved slightly better performance near the end of the run.
Tables 8 and 9 compare the $\mathrm{fitness}_{\text{best-so-far}}$ of the two methods at the midpoint ($1e4 \times D$ evaluations) and endpoint ($2e4 \times D$ evaluations) of the search (Wilcoxon rank-sum test, $p = 0.05$). Table 8 shows that at the midpoint of the search, USHADE/DF(DPT) performed significantly worse than
Figure 7: Empirical cumulative distribution functions (ECDFs) for four algorithms (USHADE(DPT), USHADE/DF(DPT), LSHADE, and SHADE) on the CEC 2014 benchmarks with $D = 10, 30, 50$. In each plot, the horizontal axis indicates the number of fitness evaluations, while the vertical axis shows the proportion of runs (out of 51) in which $\mathrm{fitness}_{\text{best-so-far}}$ reached a predefined target. These targets were determined per problem based on the median and the first and third quartiles of $\mathrm{fitness}_{\text{best-so-far,final}}$, calculated across all 51 runs and all four algorithms. The ECDF shown represents the average across all 30 benchmark problems.
Table 8: Comparison of USHADE(DPT) vs. USHADE/DF(DPT) (USHADE without failed individuals) after $10{,}000 \times D$ evaluations (CEC 2014 benchmarks, $D = 10, 30, 50$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F30$ are shown. (+: better than USHADE(DPT), -: worse than USHADE(DPT), $\approx$: no significant difference)
Table 9: Comparison of USHADE(DPT) vs. USHADE/DF(DPT) (USHADE without failed individuals) after $20{,}000 \times D$ evaluations (CEC 2014 benchmarks, $D = 10, 30, 50$). Wilcoxon rank-sum test ($p = 0.05$) results on $F1$ to $F30$ are shown. (+: better than USHADE(DPT), -: worse than USHADE(DPT), $\approx$: no significant difference)
USHADE(DPT) on 22 problems for $D = 1 0$ , 17 problems for $D = 3 0$ , and 16 problems for $D = 5 0$ , indicating that including failed individuals helps find better solutions with fewer evaluations. However, for problems such as $F 6$ and those from $F 2 4$ onwards, USHADE/DF(DPT) achieved better fitness than USHADE(DPT). Since problems $F 2 2$ and onward are composite functions that combine problems with different characteristics, this suggests that including failed individuals may hinder performance on more complex problems.
Table 9 shows that at the end of the search ($2e4 \times D$ evaluations), the number of problems where USHADE/DF(DPT) outperformed USHADE(DPT) increased to 5 for $D = 10$, 9 for $D = 30$, and 12 for $D = 50$. These results suggest that although using failed individuals accelerates early-stage convergence, excluding them to focus more on the promising regions of the search can improve final solution quality given a sufficiently large evaluation budget. However, this does not necessarily mean that discarding failed individuals is necessary: it may be possible to improve the selection policy to increase the bias for selecting successful individuals as search progresses. This is a direction for future work.

Abstract: Differential Evolution (DE) is a widely used evolutionary algorithm for black-box optimization problems. However, in modern DE implementations, a major challenge lies in the limited population diversity caused by the fixed population size enforced by generational replacement. Population size is a critical control parameter that significantly affects DE performance. Larger populations inherently contain a more diverse set of individuals, thereby facilitating broader exploration of the search space. Conversely, when the maximum evaluation budget is constrained, smaller populations focusing on a limited number of promising candidates may be more suitable. Many state-of-the-art DE variants incorporate an archive mechanism, in which a subset of discarded individuals is preserved in an archive during generational replacement and reused in mutation operations. However, maintaining what is essentially a secondary population via an archive introduces additional design considerations, such as policies for insertion, deletion, and appropriate sizing. To address these limitations, we propose a novel DE framework called Unbounded Differential Evolution (UDE), which adds all generated candidates to the population without discarding any individual based on fitness. Unlike conventional DE, which removes inferior individuals during generational replacement, UDE eliminates replacement altogether, along with the associated complexities of archive management and dynamic population sizing. UDE represents a fundamentally new approach to DE, relying solely on selection mechanisms and enabling a more straightforward yet powerful search algorithm.

Categories: cs.NE, cs.AI, G.1.6; I.2.8
# 1 Introduction
Large language models (LLMs) are increasingly integrated into decision-support systems across high-stakes domains such as hiring, healthcare, and loan approvals [1–3]. In these contexts, ensuring fairness and transparency is not just an ethical imperative but often a legal requirement, as recent regulations emphasize algorithmic accountability and nondiscrimination [4–6]. Among these domains, hiring has received particular attention due to both its societal importance and the real-world use of LLMs in resume screening [6, 7].
To date, most studies of LLMs in hiring have focused on scoring individual resumes or responses, documenting biases related to gender, race, and socioeconomic background [7–13]. A growing line of work now advocates for pairwise or setwise comparisons, arguing that relative judgments are more consistent and more human-like [14–17]. However, as these frameworks gain traction, it becomes increasingly urgent to examine the biases they may introduce, especially in high-stakes settings where even small distortions can have large consequences.
Ranking-based decisions are particularly susceptible to positional and contextual effects. In human judgment, these include primacy and recency [18–23], anchoring [24, 25], contrast effects [26, 27], and decoy effects [28–30]. These effects can undermine reliability and lead to inconsistent or irrational preferences. Recent studies show that LLMs are vulnerable to similar biases. Positional effects have been documented in multiple-choice settings [31, 32], classification tasks [33, 34], and pairwise evaluations [17, 35]. Other context-related effects, such as anchoring and decoy effects, have also been observed [36, 37]. Yet this literature remains largely descriptive: we know these effects exist, but not how they manifest in different contexts, how they interact with other biases, or what mechanisms drive them.
In this work, we systematically investigate positional biases in LLMs. We uncover previously unreported biases, show that these effects can meaningfully distort underlying preferences, and find important differences from human biases. Across two domains—hiring and color selection—we uncover a consistent, quality-dependent pattern in pairwise comparisons: when the options are high quality, LLMs exhibit a primacy bias, favoring the first candidate or color. For lower-quality options, however, they favor later options. In triplewise comparisons, we identify a novel centrality bias: a consistent preference for the middle option. Both the centrality effect and the quality-dependent shift from primacy to recency appear to be unique to LLMs and have not, to our knowledge, been reported in human (or LLM) decision-making. We further show that positional biases are, for the most part, stronger than gender biases, and uncover a strong and previously undocumented source of distortion: a bias favoring certain names over others. These patterns suggest that LLMs are not merely inheriting human-like heuristics from their training data, but are manifesting new failure modes with distinct underlying mechanisms.
Separate from these novel patterns, we highlight a conceptual distinction that has been largely overlooked in prior work on order effects in both humans and LLMs. Positional biases are typically treated as a single phenomenon, but they can arise from two fundamentally different mechanisms: tie-breaking heuristics and true preference distortions. Tie-breaking occurs when the agent (human or AI) has no clear preference and defaults to a fixed positional rule (e.g., always guessing option C on a multiple-choice test). In contrast, a preference distortion occurs when the order of presentation alters an otherwise strict preference. This distinction matters: while tie-breaking is relatively harmless, preference distortion implies the model may select inferior options due solely to presentation order.
Disentangling these mechanisms is difficult, especially in domains where ground-truth preferences are ambiguous. Many prior studies suggest that order effects emerge mainly when options are similar [17, 31, 32], consistent with the tie-breaking view. However, we show that positional effects in LLMs can lead to genuine preference reversals. Leveraging the ability to repeatedly query LLMs and control their sampling behavior via temperature, we introduce a simple yet effective mitigation strategy: querying the model several times at higher temperature settings. When the temperature parameter is increased, the models’ output becomes more stochastic, and repeated sampling shows more variation. We can quantify the model’s preferences by the frequency with which the options are selected: when the probability of selecting option $a$ over $b$ at temperature 1 is statistically significant, we infer that the model prefers $a$ to $b$ , even if the model appears indifferent at temperature 0. In such scenarios, increasing the temperature can improve accuracy, contrary to the prevailing understanding that greater accuracy is attained at lower temperature settings [38, 39]. We elaborate on this technique in Subsection 2.3.
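In practice, this inference reduces to an exact two-sided binomial test on the selection counts from repeated $T = 1$ queries. A minimal sketch (function names and the example counts are ours, not from the paper):

```python
import math

def binom_two_sided(k, n, p0=0.5):
    """Exact two-sided binomial p-value (doubled smaller tail, capped at 1)."""
    pmf = [math.comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    return min(1.0, 2.0 * min(sum(pmf[:k + 1]), sum(pmf[k:])))

def inferred_preference(a_wins, n_queries, alpha=0.05):
    """Declare a preference only when the T=1 selection frequency deviates
    significantly from the 50/50 split expected under indifference."""
    if binom_two_sided(a_wins, n_queries) >= alpha:
        return "indifferent or fragile"
    return "a" if 2 * a_wins > n_queries else "b"
```

For instance, 89 selections of option `a` out of 100 queries is overwhelmingly significant, while 52 out of 100 is consistent with indifference; the number of queries needed grows as the underlying preference gets weaker.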
To formalize the distinction between tie-breaking and true distortions, we extend the classical framework of decision-making, which represents preferences using the binary relations $a \succ b$ (strict preference) and $a \sim b$ (indifference). We introduce a refined notation that distinguishes between two types of strict preferences. We write $a \succ b$ to indicate that the preference for $a$ over $b$ is strong and stable across contexts. In contrast, $a \succcurlyeq b$ denotes a genuine but unstable preference that may be reversed by order effects. This framework allows for a clearer interpretation of observed behavior: if $a \succcurlyeq b$, but $b$ is consistently selected when presented first, we attribute the reversal to a distortion of the underlying preference, not to heuristic tie-breaking. However, if $a \sim b$, the cause of the order effect cannot be disambiguated.
Together, our findings suggest that LLM comparisons are governed by both familiar and novel biases, and that these biases can systematically skew decisions. The implications are broad: positional distortions often outweigh demographic biases, and they cannot be reliably corrected by averaging or randomizing presentation. In our discussion, we explore both theoretical explanations and practical mitigation strategies.
# 2 Results
We evaluated positional biases in three widely used large language models—GPT-4o-mini, Claude 3 Haiku, and Llama 3 8B—across two domains designed to reflect real-world comparative decisions: resume screening and color selection. As order effects are most likely to arise when alternatives are similar in quality [17, 31], we constructed sets of items that are close in overall quality and grouped them into four tiers: ‘Ideal’, ‘Fair’, ‘Plain’ and ‘Harsh’ for colors and ‘Best’, ‘Good’, ‘Mediocre’ and ‘Weak’ for resumes. In the hiring domain, we used LLM-generated resumes spanning the four quality tiers in four professions. In the color domain, we curated sets of wall paint colors grouped by their suitability for a child’s room.
For each model and domain, we conducted exhaustive pairwise and triplewise comparisons, testing all permutations of item order. Prompts included “Select the strongest candidate” and “Which color is best for a kid’s room?” Each model was evaluated at multiple temperature settings. At temperature 0 ($T = 0$), models behave almost deterministically,1 selecting the most likely answer. At temperature 1 ($T = 1$), responses vary across repeated queries, enabling statistical analysis of positional effects. In our analyses, we treated each $T = 1$ response as an independent sample from the model’s output distribution. Full experimental details appear in Section 4 (Materials and Methods) and the Appendix.
Figure 1: Positional bias in pairwise comparisons at $T = 1$. (a) shows the effect of presentation order on color selection in pairwise comparisons for each quality tier; tier quality is decreasing from left to right: ‘Plain’ is the highest quality tier and ‘Harsh’ is the lowest. (b) shows aggregate positional effects in resume evaluations across professions. In both tasks, higher-quality options tend to exhibit a primacy effect and lower-quality ones a recency effect, though the precise threshold separating them varies by model and domain. Full results for color comparisons are provided in Appendix C; results for resumes disaggregated by profession are given in Appendix D.
# 2.1 Order Effects in Pairwise Comparisons
We queried GPT-4o-mini, Claude 3 Haiku, and Llama 3 8B on exhaustive pairwise comparisons in two domains—resumes and colors—at multiple temperature settings. In every setting, higher-quality tiers exhibited a primacy effect (more likely to be selected when presented first), while lower-quality tiers showed a recency effect (more likely to be selected when presented second); these effects are clearly visible in Figure 1. For example, when comparing the two colors of the ‘Fair’ tier (Gentle Coral and Buttercream Yellow), GPT-4o-mini chose the first option $100\%$ of the time at $T = 0$ and $89\%$ at $T = 1$ (binomial $p = 2.5 \times 10^{-16}$). The quality tier at which the bias flipped from primacy to recency was highest for Claude 3 Haiku and lowest for Llama 3 8B, a pattern consistent across both domains. Full statistics appear in Appendices C (colors) and D (resumes).
# 2.2 Order Effects in Triplewise Comparisons
In triplewise comparisons, higher-quality tiers exhibit a primacy effect, consistent with the pairwise results. For lower-quality tiers, all models favor later positions, though they differ in whether the second or third option is preferred; see Figure 2. GPT-4o-mini and Claude 3 Haiku both display a centrality bias—favoring the middle option over the first or last. For example, at $T = 0$, GPT-4o-mini always selected the middle option for ‘Plain’ colors, and Claude 3 Haiku did so $83\%$ of the time for ‘Harsh’ colors. In contrast, Llama 3 8B did not display a centrality bias, but instead showed simultaneous primacy and recency effects, resembling the serial position effects observed in humans [21, 40]: among colors in the ‘Fair’ tier at $T = 0$, Llama 3 8B chose the first and last positions $50\%$ of the time each, and never selected the middle option. See Appendices F (colors) and G (resumes) for additional details.
Figure 2: Positional bias in triplewise comparisons at $T = 0$. (a) shows the effect of presentation order on color selection. (b) shows aggregate positional effects in resume evaluations across professions. Additional results for color selection are provided in Appendix F, and resume selection results disaggregated by profession are in Appendix G.
Across models and domains, the bias generally shifts monotonically toward later positions as quality declines, with the exception of Claude 3 Haiku in the color domain. Positional biases are notably weaker in resume selection for Claude 3 Haiku and Llama 3 8B compared to color selection. As we show in the Name Bias section below, this attenuation is largely due to interference from strong preferences for specific names.
# 2.3 Tie-Breaking Heuristics vs. True Distortions
Do positional order effects reflect arbitrary tie-breaking, or genuine distortions of the model’s underlying preferences? Our notation allows us to distinguish between these cases in pairwise comparisons. If $a$ is chosen over $b$ regardless of order, we say the preference is robust ($a \succ b$). If the outcome flips depending on order, we write $a \stackrel{\triangledown}{\sim} b$. When we observe $a \stackrel{\triangledown}{\sim} b$, exactly one of the following holds: $a \succcurlyeq b$, $b \succcurlyeq a$, or $a \sim b$. A fragile preference implies that order effects distort what the model otherwise “believes” to be the stronger option.
At $T = 0$, all models showed near-deterministic positional behavior in color comparisons, selecting the same position at least $98\%$ of the time for each tier. That is, all twelve model–tier pairs satisfied $a \stackrel{\triangledown}{\sim} b$, but determinism at $T = 0$ prevents us from identifying whether the underlying preference is fragile or indifferent.
We therefore repeated all comparisons at $T = 1$, where increased stochasticity can reveal the suppressed preferences. To give some intuition as to why this works, we use a simple, stylized example with von Neumann–Morgenstern utilities.2 Assume the model assigns utilities $u(a) = 5$, $u(b) = 5.1$, and there is a primacy boost $\delta = 0.2$ added to the first-listed option. When presented $(a, b)$ at $T = 0$, the model always chooses $a$ (since $5 + 0.2 > 5.1$), and $(b, a)$ always yields $b$ (since $5.1 + 0.2 > 5$). Now suppose that $T = 1$ adds uniform noise $\varepsilon \sim U[-0.4, 0.4]$ to the first option’s utility.3 At $T = 1$, $(a, b)$ yields $a$ with probability 0.625 (as $\mathbb{P}[5.2 + \varepsilon \geqslant 5.1] = 0.625$), and $(b, a)$ yields $b$ with probability 0.875. Thus, $b$ is significantly more likely to be chosen than $a$ at $T = 1$, despite the fact that $a \stackrel{\triangledown}{\sim} b$. Consistent with this intuition, eight of twelve model–tier color pairs in our experiments showed a significant preference ($p < 0.05$), and three of these showed overwhelming distortion ($p < 10^{-6}$). We obtained similar results for resumes; in one case, one resume was 1.5 times more likely to be chosen than the other, showing that order effects can lead to the selection of significantly inferior options. For more details, see Appendix E.
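The probabilities in this stylized example follow directly from the uniform-noise assumption and can be checked with a few lines of arithmetic (the utilities, the boost of 0.2, and the noise width are the illustrative values above, not fitted quantities):

```python
def p_first_chosen(u_first, u_second, delta=0.2, half_width=0.4):
    """P[the first-listed option is chosen] when sampling noise
    eps ~ U[-w, w] is added to the boosted utility u_first + delta."""
    # first wins iff u_first + delta + eps >= u_second, i.e. eps >= threshold
    threshold = u_second - u_first - delta
    return min(1.0, max(0.0, (half_width - threshold) / (2 * half_width)))

u_a, u_b = 5.0, 5.1
p_a_when_first = p_first_chosen(u_a, u_b)   # order (a, b): a chosen w.p. 0.625
p_b_when_first = p_first_chosen(u_b, u_a)   # order (b, a): b chosen w.p. 0.875
```

The asymmetry (0.875 vs. 0.625) is what lets repeated sampling at higher temperature expose the genuinely preferred option even though the zero-temperature choices flip with order.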
(a) Name selection frequencies in triplewise comparisons. The histogram shows how often each candidate name was selected across all triplewise tasks: selection counts reflect the number of times the name was chosen as the best option. See Appendix H for additional results and details.
(b) Gender and order effects at $T = 0$ in pairwise resume comparisons. All models show stronger biases for presentation order than for gender. Results by profession and temperature are provided in Appendix I.
Figure 3: Interaction of order effects with other biases. (a) shows the distribution of name selections in triplewise comparisons. Claude 3 Haiku exhibits strong preferences for certain names, while GPT-4o-mini shows a relatively balanced selection pattern. (b) shows the effect of gender and presentation order in pairwise resume comparisons. Order effects appear stronger than gender effects.
A fundamental assumption in rational-choice models is Independence of Irrelevant Alternatives (IIA): the ranking between $a$ and $b$ should not change when a third option $c$ is introduced. We observed multiple order-dependent IIA violations. For instance, for GPT-4o-mini, we found that Pale Coral $\succ$ Emerald Green. When presented with the triple (Emerald Green, Pale Coral, Beige), GPT-4o-mini consistently selected Emerald Green, violating IIA.
# 2.4 Interaction of Positional Bias with Other Biases
In addition to positional order effects, LLMs exhibit other well-documented biases, such as preferences based on gender or identity. In this section, we examine how these biases interact with positional effects. While gender bias in algorithmic decision-making is well documented, we also uncover an additional bias: consistent preferences for specific individual names, even when those names are carefully selected to avoid known bias-triggering cues such as race and ethnicity. To our knowledge, such name-specific biases have not been documented in human decision-making under comparable conditions. Across all models, we find that these biases—though model-specific in strength and direction—are generally weaker than the positional effects, although in some cases they appear to compete with or modulate them.
Name Bias. The positional bias in triplewise color comparisons is so strong that in every tier, there is at least one position that is never chosen at $T = 0$ (except in GPT-4o-mini’s ‘Fair’ tier, which selected the last position a single time). This positional skew appears less pronounced in resume comparisons—particularly for Claude 3 Haiku and Llama 3 8B. Upon closer inspection, this reduction in positional bias appears to be driven by strong, competing biases for or against specific synthetic personas, especially individual names. Claude 3 Haiku, in particular, exhibits strong name-based preferences: it consistently favors certain names over others, regardless of resume content or position. For example, in direct comparisons, Claude 3 Haiku selects ‘Christopher Taylor’ over ‘Andrew Harris’ in $64\%$ of the cases (82 out of 128 trials, binomial $p = 0.0019$, $h = 0.29$). These patterns emerged even though all names were drawn from a controlled set of generic Caucasian U.S. identities, chosen to minimize racial, ethnic, and other socially marked biases that have been documented in LLMs [41, 42].
This pattern is evident in Figure 3a, which shows the distribution of selection frequencies for each name across triplewise comparisons. Claude 3 Haiku’s distribution is flat, indicating that a few names are chosen disproportionately often while others are rarely selected—consistent with strong and persistent name preferences. GPT-4o-mini, by contrast, displays a near-perfect normal distribution, suggesting minimal name-based bias in the triplewise setting. Llama 3 8B lies between these extremes, with moderate variation across names. Although GPT-4o-mini shows no signs of name bias in triplewise comparisons, we detect a small but statistically significant name preference in the pairwise setting. See Appendix H for full details.
Gender Bias. Prior work on gender bias in algorithmic hiring is mixed: some studies report little to no bias [43], others find systematic preferences for male candidates [7, 44], while more recent evaluations suggest a tilt toward female candidates [8, 9] or context-dependent outcomes [13]. Studies using pairwise evaluations have found a bias favoring female candidates [8, 9, 35], a pattern we also replicate: for example, GPT-4o-mini selects female candidates $55.7\%$ of the time at $T = 1$ ($p = 3.22 \times 10^{-4}$, $h = 0.11$), with Claude 3 Haiku and Llama 3 8B showing similar trends.
While gender bias in algorithmic decision-making is an important topic in its own right, our primary goal is to examine how gender interacts with positional order effects. We find that gender effects are overshadowed by substantially stronger order biases. Across all three models, the candidate in the dominant position (first or second, as defined by same-gender comparisons) is selected at much higher rates. For GPT-4o-mini, this rate reaches $92.8\%$ in mixed-gender comparisons, compared to $93.6\%$ in same-gender trials, suggesting that gender bias can at times counteract, but rarely override, the positional preference. Claude 3 Haiku and Llama 3 8B show similar patterns, with positional bias rates of $84.3\%$ and $72.3\%$ in mixed-gender comparisons, versus $87.6\%$ and $89.2\%$ in same-gender comparisons, respectively. Bootstrapped confidence intervals confirm that order bias significantly exceeds gender bias across all models (the $95\%$ CI for $h_{\mathrm{order}} - h_{\mathrm{gender}}$ excludes 0 in all cases).
This difference between the effect sizes of the positional and gender effects can be clearly seen in Figure 3b, where gender effects manifest in two ways: a modest overall preference for female candidates, and the observation that nearly all instances of overcoming the positional disadvantage involve female candidates. Full statistics appear in Appendix I.
# 3 Discussion
This study demonstrates systematic positional biases in LLM-driven comparisons, with methodological, theoretical, and practical implications. Across both color and hiring domains, we find that positional biases are quality-dependent and strong: primacy effects dominate for high-quality options, while lower-quality choices elicit a recency effect.
Most prior research has not clearly distinguished between two possible sources of positional effects: arbitrary tiebreaking and genuine preference distortion. Many studies have observed that order effects appear when the model is indifferent between the different options [17, 31, 32], and studies on positional biases and the decoy effect often purposefully use alternatives of similar quality [37, 45]. While a few human studies have attempted to disentangle these explanations [46], conclusions remain limited due to humans’ limited ability to determine their ‘true’ preferences [24] or compute expected utilities [47]. Determining ground-truth preferences in human subjects is further complicated by the fact that preferences are influenced by contextual factors and shift over time [48].
By contrast, LLMs allow repeated querying under controlled conditions. We exploit this property, along with the temperature parameter (which has no human analogue), to uncover latent preferences and distinguish tie-breaking from distortion by querying across multiple temperatures. Using this methodology, we show that LLMs do not merely exhibit positional effects when indifferent but can reverse strict preferences based solely on presentation order. Although subtle, this distinction matters: tie-breaking is relatively benign, whereas distortion leads to selecting an inferior option purely because of the order of presentation.
Much of the work on LLM bias has centered on uncovering and correcting cognitive distortions, such as anchoring, confirmation bias, or gender stereotyping, that models acquire from human-generated training data [41,49–51]. These studies often argue that models inherit cognitive biases that are present in the data [52–54]. Even studies that identify behavior that diverges from human norms (for instance, a female-candidate preference in resume-ranking tasks, where humans often favor male candidates) still frame these findings through the lens of human cognitive biases [8, 9]. In contrast, we uncover two new failure modes that have no clear analogue in human decision-making: a centrality bias in triplewise comparisons and a name bias, even though names were explicitly chosen to avoid known demographic triggers.
These patterns raise new theoretical questions. Classical explanations for human order effects, such as memory limitations [55, 56], proactive inhibition [57], or heuristics [24], cannot readily explain our findings of quality-dependent bias shifts or the centrality bias. One possible account—drawing on the long-established notions of reference points [40, 58] and contrast effects [26, 27] in behavioral decision theory—is that LLMs establish an implicit reference point from their training data and then evaluate each option in contrast to this baseline. Above-baseline options are inflated, leading to primacy, while below-baseline ones are penalized. However, it is not clear how to extend this explanation to biases observed in triplewise comparisons, in particular the centrality bias. Moreover, model-specific differences (e.g., centrality effects in GPT-4o-mini and Claude 3 Haiku but not Llama 3 8B) indicate that different architectural details may give rise to distinct bias mechanisms. Together, these findings call for new theoretical frameworks to explain how biases emerge and propagate in artificial agents—frameworks that go beyond human cognitive models and the biases embedded in training data.
From a practical standpoint, standard mitigation strategies such as randomizing orders or averaging across queries do not fully eliminate positional distortions. Instead, we propose approaches that leverage LLM-specific characteristics. First, raising the temperature uncovers fragile preferences that deterministic (zero-temperature) settings suppress. Although low temperatures are often favored for precision and factuality, and high temperatures for creativity [38,39], we find that in these comparison tasks a higher temperature can actually improve selection accuracy. Second, positional bias itself can be informative: a pronounced primacy effect may indicate that both options are of high quality, whereas a strong recency effect may signal that both are of lower quality.
While our findings hold across three contemporary models, new architectures should be audited for emergent order and identity biases. In high-stakes domains such as legal reasoning, medical diagnosis, or scientific peer review, bias-aware evaluation pipelines can combine model-driven diagnostics with human-in-the-loop checks. Finally, causal interpretability methods and adversarial probing could illuminate the internal mechanisms that give rise to preference distortions, paving the way toward bias-robust LLM design and deployment.
# 4 Materials and Methods
# 4.1 Resume and Color Generation and Selection
As order effects are more pronounced when the options are of comparable quality, we generated three options for each of four quality tiers of colors and resumes. For colors, we denoted these tiers by ‘Ideal’, ‘Fair’, ‘Plain’ and ‘Harsh’; for resumes, by ‘Best’, ‘Good’, ‘Mediocre’ and ‘Weak’. We chose different names for the tiers in each domain so as not to imply an equivalence between them (i.e., ‘Plain’ colors are not necessarily comparable to ‘Mediocre’ resumes in terms of quality).
We first generated the candidates for each tier. For the resumes, we had GPT-4o-mini generate nine sets of four resumes of decreasing quality for each of four professions: Mechanical Engineer, Registered Nurse, Journalist, and Real Estate Agent. We removed the personal information from the resumes. The resume generation prompts are shown in Appendix B. For colors, we asked each model to generate colors it ‘thought’ were comparable.
For each quality tier, we performed triplewise comparisons among the candidate options and selected three options based on the following criteria. First, each had to be selected in at least one of the six possible permutations. Second, to ensure that the tier labels were meaningful, we checked that colors from a higher tier were chosen over those from a lower tier in at least two-thirds of all pairwise comparisons. To select the colors for pairwise comparisons in each tier, we evaluated all three possible pairs among the three options and selected a pair in which neither color was consistently preferred over the other. We note that, as a consequence of this selection methodology, the three models were not necessarily evaluated on the same resumes or colors.
For the personal information, we generated 30 fictional personas using 15 common American Caucasian last names, 15 male first names, and 15 female first names, selected from the list of the most popular names in the US as tracked by the SSA [59]. We paired each first name with a random surname and state, and randomly expanded each into a complete profile. For more details, see Appendix B.
# 4.2 Pairwise Comparisons
For each pair, we presented the model with both orders to isolate position effects. For resumes, we selected four male names and four female names and performed exhaustive same-gender comparisons. That is, for each pair of names $i$ and $j$, we compared resume 1 with $i$’s personal information to resume 2 with $j$’s information, and vice versa. This yielded a total of 768 pairwise comparisons per model at each temperature (4 professions $\times$ 4 tiers $\times$ 2 genders $\times$ $\binom{4}{2} = 6$ name pairs $\times$ 2 name-resume assignments $\times$ 2 presentation orders). For colors, we repeated the prompt for each permutation 50 times at each temperature $T \in \{0, 0.5, 1\}$.
For the cross-gender comparisons, we used the same resumes and personal information as in the same-gender comparisons and compared all combinations of male and female names. That is, for each male name $i$ and female name $j$, we compared resume 1 with $i$’s personal information to resume 2 with $j$’s information, and vice versa. This yielded 1,024 additional pairwise comparisons per model at each temperature (4 professions $\times$ 4 tiers $\times$ 4 female names $\times$ 4 male names $\times$ 2 name-resume assignments $\times$ 2 presentation orders).
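As a sanity check, both comparison counts follow directly from the design factors. A stdlib-only sketch (the variable names are ours):

```python
from math import comb

# Sanity check of the pairwise comparison counts described above (the
# variable names are ours; the factors mirror the experimental design).
professions, tiers, genders, names_per_gender = 4, 4, 2, 4

# Same-gender comparisons: 768 per model at each temperature.
same_gender = (professions * tiers * genders
               * comb(names_per_gender, 2)  # 6 unordered name pairs
               * 2                          # name-resume assignments
               * 2)                         # presentation orders

# Cross-gender comparisons: every male name against every female name.
cross_gender = (professions * tiers
                * names_per_gender ** 2     # 16 mixed-gender name pairs
                * 2 * 2)

print(same_gender, cross_gender)  # → 768 1024
```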
To verify that the observed name bias stemmed specifically from the name rather than other personal information (e.g., address, email host), we performed additional comparisons after swapping the names while keeping the remaining details fixed.
# 4.3 Triplewise Comparisons
For each resume triple, we randomly assigned the 15 male and 15 female names (and their associated personal information) across the three resumes, yielding 5 distinct triples per gender. We then presented all 6 permutations of each triple to each LLM, asking the model to select the strongest candidate; the exact prompt is provided in Appendix B. In total, we conducted 960 triplewise comparisons per model at each temperature (4 professions $\times$ 4 tiers $\times$ 2 genders $\times$ 6 permutations $\times$ 5 sets of triples). For colors, we repeated the prompt for each permutation 40 times at each temperature, for a total of 240 prompts. The exact prompt for color comparisons is provided in Appendix A.
# 4.4 Data Cleaning
Although we instructed the model to output only the name of the strongest candidate in each comparison, LLMs occasionally returned more verbose or inconsistent responses (e.g., “Candidate 1: Amanda Thomas”). In both the color and resume domains, we corrected clearly identifiable output errors and discarded nonsensical responses (e.g., Claude 3 Haiku selecting “White” in a comparison between Burgundy and Hot Pink). However, we did not discard all mismatches, as some models—Claude 3 Haiku in particular, but also Llama 3 8B—occasionally produced minor spelling variations (e.g., “Offwhite” instead of “Off-White,” or “Palelinen” instead of “Pale Linen”). Discarding all such cases would introduce bias; for example, if Off-White was misspelled while Beige was not, discarding those comparisons would unfairly favor Beige.
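This cleaning step can be sketched as a normalize-and-match pass over raw responses. The helper names below are illustrative, not taken from our replication archive, and assume a simple substring-matching strategy:

```python
import re

# Illustrative sketch of the cleaning step: normalize a raw model response
# and match it against the known option list, so that minor spelling variants
# still count as valid selections while out-of-set answers are discarded.
def normalize(s: str) -> str:
    """Lowercase and strip everything but letters ("Off-White" -> "offwhite")."""
    return re.sub(r"[^a-z]", "", s.lower())

def match_choice(raw_answer: str, options: list) -> str:
    """Return the first option whose normalized form appears in the
    normalized response, or None for a nonsensical answer."""
    cleaned = normalize(raw_answer)
    for opt in options:
        if normalize(opt) in cleaned:  # tolerates verbose wrappers
            return opt
    return None

options = ["Off-White", "Beige", "Pale Linen"]
print(match_choice("Offwhite", options))            # → Off-White
print(match_choice("Candidate 1: Beige", options))  # → Beige
print(match_choice("White", options))               # → None
```

Matching on normalized substrings keeps misspelled but recognizable answers (“Palelinen”) while rejecting answers outside the option set (“White”), avoiding the selective-discard bias described above.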
# 4.5 Statistical Tests
We evaluated positional order effects in pairwise comparisons using two-tailed binomial tests, assessing whether the proportion of selections for the first versus second position significantly deviated from the null hypothesis of no positional bias (equivalent to $p = 0.5$). As all other effects are controlled for, a statistically significant deviation indicates that the model’s choices are systematically influenced by presentation order. We report effect sizes using Cohen’s $h$, a standardized measure of deviation from the expected proportion.
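Both quantities can be computed with the standard library alone. The sketch below reproduces the name-bias example from Section 2.4 (82 of 128 trials); the function names are ours:

```python
import math

# A stdlib-only sketch of the pairwise analysis: an exact two-tailed binomial
# test against the 50% null, plus Cohen's h as the effect size.
def binom_two_sided_p(k: int, n: int) -> float:
    """Exact two-sided binomial p-value under H0: p = 0.5 (the null is
    symmetric, so the two-sided p is twice the larger tail)."""
    tail = sum(math.comb(n, i) for i in range(max(k, n - k), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def cohens_h(p1: float, p2: float = 0.5) -> float:
    """Cohen's h: difference of arcsine-transformed proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# The name-bias example from Section 2.4: 82 wins in 128 trials.
k, n = 82, 128
h = cohens_h(k / n)            # comes out near the reported 0.29
p_val = binom_two_sided_p(k, n)
print(f"p_hat={k/n:.3f}, h={h:.2f}, p={p_val:.4f}")
```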
For triplewise comparisons, we used chi-square goodness-of-fit tests to evaluate whether the distribution of selections across the three positions departed from a uniform distribution (i.e., each position selected with probability $1/3$ under the null hypothesis of no bias). Significant deviations indicate systematic positional preferences. Effect sizes for these tests are reported using Cramér’s $V$. We did not perform statistical tests for comparisons at $T = 0$ as the results are almost deterministic.
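A corresponding sketch for the triplewise test, on hypothetical counts (with $k = 3$ positions the test has $\mathrm{df} = 2$, where the chi-square survival function is exactly $e^{-x/2}$, so no external library is needed):

```python
import math

# Sketch of the triplewise positional test: chi-square goodness of fit
# against the uniform null (each of the 3 positions chosen w.p. 1/3),
# with Cramer's V as the effect size.
def triple_position_test(counts):
    n, k = sum(counts), len(counts)
    expected = n / k
    chi2 = sum((c - expected) ** 2 / expected for c in counts)
    p_value = math.exp(-chi2 / 2)        # exact chi-square sf for df = k-1 = 2
    v = math.sqrt(chi2 / (n * (k - 1)))  # Cramer's V for goodness of fit
    return chi2, v, p_value

# Hypothetical counts over 240 triplewise prompts, middle position favored:
chi2, v, p = triple_position_test([30, 170, 40])
print(f"chi2={chi2:.1f}, V={v:.2f}, p={p:.1e}")
```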
To compare the relative strength of gender and positional biases, we used the same dataset and performed nonparametric bootstrapping. For each of 10,000 resamples, we computed effect sizes for both gender and positional bias within the same matched examples. The resulting bootstrap distribution allowed us to assess the consistency and magnitude of the difference, confirming that order effects consistently exceeded gender effects across resamples.
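The bootstrap can be sketched as follows on hypothetical toy data (the real trial data are in the replication archive; the toy proportions and names below are ours):

```python
import math
import random

# A minimal sketch of the bootstrap on hypothetical toy data. Each trial is
# a matched pair of indicators: (chose dominant position, chose female).
def cohens_h(p1: float, p2: float = 0.5) -> float:
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def bootstrap_ci(trials, n_boot=10_000, seed=0):
    """95% bootstrap CI for h_order - h_gender over resampled trials."""
    rng = random.Random(seed)
    n = len(trials)
    diffs = []
    for _ in range(n_boot):
        sample = [rng.choice(trials) for _ in range(n)]
        p_order = sum(t[0] for t in sample) / n
        p_gender = sum(t[1] for t in sample) / n
        diffs.append(cohens_h(p_order) - cohens_h(p_gender))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Toy data: a strong order effect (~90%) and a mild gender effect (~55%).
rng = random.Random(1)
trials = [(rng.random() < 0.90, rng.random() < 0.55) for _ in range(1000)]
lo, hi = bootstrap_ci(trials, n_boot=2000)
print(f"95% CI for h_order - h_gender: ({lo:.2f}, {hi:.2f})")
```

With toy effects of this size, the lower CI bound stays well above zero, mirroring the paper’s finding that the order effect dominates.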
# 4.6 Data, Materials, and Software Availability
A replication archive with code and data is available on the Open Science Framework at https://osf.io/59df6/.
# References
[1] Chengguang Gan, Qinghao Zhang, and Tatsunori Mori. Application of LLM Agents in Recruitment: A Novel Framework for Automated Resume Screening. Journal of Information Processing, 32:881–893, 2024.
[2] Jean Lee, Nicholas Stevens, and Soyeon Caren Han. Large Language Models in Finance (FinLLMs). Neural Computing and Applications, pages 1–15, 2025.

[3] Xintian Yang, Tongxin Li, Qin Su, Yaling Liu, Chenxi Kang, Yong Lyu, Lina Zhao, Yongzhan Nie, and Yanglin Pan. Application of large language models in disease diagnosis and treatment. Chinese Medical Journal, 138(02):130–142, 2025.
[4] European Union. Regulation (EU) 2024/1689 of the european parliament and of the council. Official Journal of the European Union, L(2024/1689):1–222, 2024.
[5] Lara Groves, Jacob Metcalf, Alayna Kennedy, Briana Vecchione, and Andrew Strait. Auditing work: Exploring the new york city algorithmic bias audit regime. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pages 1107–1120, 2024.
[6] Alessandro Fabris, Nina Baranowska, Matthew J. Dennis, David Graus, Philipp Hacker, Jorge Saldivar, Frederik Zuiderveen Borgesius, and Asia J. Biega. Fairness and bias in algorithmic hiring: A multidisciplinary survey. ACM Transactions on Intelligent Systems and Technology, 16(1):1–54, January 2025.
[7] Kyra Wilson and Aylin Caliskan. Gender, race, and intersectional bias in resume screening via language model retrieval. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, volume 7, pages 1578–1590, 2024.
[8] Johann D Gaebler, Sharad Goel, Aziz Huq, and Prasanna Tambe. Auditing the use of language models to guide hiring decisions. arXiv preprint arXiv:2404.03086, 2024.

[9] Ze Wang, Zekun Wu, Xin Guan, Michael Thaler, Adriano Koshiyama, Skylar Lu, Sachin Beepath, Ediz Ertekin Jr, and Maria Perez-Ortiz. Jobfair: A framework for benchmarking gender hiring bias in large language models. arXiv preprint arXiv:2406.15484, 2024.
[10] Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. Prometheus: Inducing fine-grained evaluation capability in language models. In The Twelfth International Conference on Learning Representations, 2023.
[11] Chenglong Wang, Hang Zhou, Kaiyan Chang, Tongran Liu, Chunliang Zhang, Quan Du, Tong Xiao, and Jingbo Zhu. Learning evaluation models from large language models for sequence generation. arXiv preprint arXiv:2308.04386, 2023.
[12] Hai Ye and Hwee Tou Ng. Self-judge: Selective instruction following with alignment self-evaluation. arXiv preprint arXiv:2409.00935, 2024.
[13] Athena Wen, Tanush Patil, Ansh Saxena, Yicheng Fu, Sean O’Brien, and Kevin Zhu. FAIRE: Assessing Racial and Gender Bias in AI-Driven Resume Evaluations, 2025.
[14] Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic´, Anna Korhonen, and Nigel Collier. Aligning with human judgement: The role of pairwise preference in large language model evaluators. arXiv preprint arXiv:2403.16950, 2024.
[15] Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. A setwise approach for effective and highly efficient zero-shot ranking with large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 38–47, 2024.
[16] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, et al. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563, 2023.
[17] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
[18] Edward C Webster and Clifford Wilfred Anderson. Decision making in the employment interview. 1964.
[19] BM Springbett. Factors affecting the final decision in the employment interview. Canadian Journal of Psychology/Revue canadienne de psychologie, 12(1):13, 1958.
[20] Arnaud Rey, Kévin Le Goff, Marlène Abadie, and Pierre Courrieu. The primacy order effect in complex decision making. Psychological Research, 84(6):1739–1748, 2020.
[21] Jamie Murphy, Charles Hofacker, and Richard Mizerski. Primacy and recency effects on clicking behavior. Journal of computer-mediated communication, 11(2):522–535, 2006.
[22] Manuel London and Milton D Hakel. Effects of applicant stereotypes, order, and information on interview impressions. Journal of Applied Psychology, 1974.
[23] Robert E Carlson. Effect of interview information in altering valid impressions. Journal of Applied Psychology, 55(1):66, 1971.
[24] Amos Tversky and Daniel Kahneman. Judgment under uncertainty: Heuristics and biases: Biases in judgments reveal some heuristics of thinking under uncertainty. science, 185(4157):1124–1131, 1974.
[25] Adrian Furnham and Hua Chu Boo. A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1):35–42, 2011.
[26] Kenneth N Wexley, Raymond E Sanders, and Gary A Yukel. Training interviewers to eliminate contrast effects in employment interviews. Journal of Applied Psychology, 57(3):233, 1973.
[27] Gary P Latham, Kenneth N Wexley, and Elliot D Pursell. Training managers to minimize rating errors in the observation of behavior. Journal of Applied Psychology, 60(5):550, 1975.
[28] Scott Highhouse. Context-dependent selection: The effects of decoy and phantom job candidates. Organizational Behavior and Human Decision Processes, 65(1):68–76, 1996.
[29] Jonathan C Pettibone and Douglas H Wedell. Examining models of nondominated decoy effects across judgment and choice. Organizational Behavior and Human Decision Processes, 81(2):300–328, 2000.
[30] Nasim Mousavi, Panagiotis Adamopoulos, and Jesse Bockstedt. The decoy effect and recommendation systems. Information Systems Research, 34(4):1533–1553, 2023.
[31] Pouya Pezeshkpour and Estevam Hruschka. Large language models sensitivity to the order of options in multiplechoice questions. arXiv preprint arXiv:2308.11483, 2023.
[32] Xiutian Zhao, Ke Wang, and Wei Peng. Measuring the inconsistency of large language models in preferential ranking. arXiv preprint arXiv:2410.08851, 2024.
[33] Xiaobo Guo and Soroush Vosoughi. Serial position effects of large language models, 2024.
[34] Yiwei Wang, Yujun Cai, Muhao Chen, Yuxuan Liang, and Bryan Hooi. Primacy effect of chatgpt. arXiv preprint arXiv:2310.13206, 2023.
[35] David Rozado. Gender and Positional Biases in LLM-Based Hiring Decisions: Evidence from Comparative CV/Resume Evaluations. arXiv preprint arXiv:2505.17049, 2025.
[36] Jiaxu Lou and Yifan Sun. Anchoring bias in large language models: An experimental study. arXiv preprint arXiv:2412.06593, 2024.
[37] Kremena Valkanova and Pencho Yordanov. Irrelevant alternatives bias large language model hiring decisions. arXiv preprint arXiv:2409.15299, 2024.
[38] Frank F Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN international symposium on machine programming, pages 1–10, 2022.
[39] Matthew Renze. The effect of sampling temperature on problem solving in large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7346–7356, 2024.
[40] Manel Baucells, Martin Weber, and Frank Welfens. Reference-point formation and updating. Management Science, 57(3):506–519, 2011.
[41] Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, and Thomas L Griffiths. Explicitly unbiased large language models still form biased associations. Proceedings of the National Academy of Sciences, 122(8):e2416228122, 2025.
[42] Myra Cheng, Esin Durmus, and Dan Jurafsky. Marked personas: Using natural language prompts to measure stereotypes in language models. In The 61st Annual Meeting Of The Association For Computational Linguistics, 2023.
[43] Lena Armstrong, Abbey Liu, Stephen MacNeil, and Danaë Metaxa. The silicon ceiling: Auditing gpt’s race and gender biases in hiring. In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO ’24, pages 1–18. ACM, October 2024.
[44] Hadas Kotek, Rikker Dockum, and David Sun. Gender bias and stereotypes in large language models. In Proceedings of the ACM collective intelligence conference, pages 12–24, 2023.
[45] Mary Frances Luce. Choosing to avoid: Coping with negatively emotion-laden consumer decisions. Journal of consumer research, 24(4):409–433, 1998.
[46] George D Farmer, Paul A Warren, Wael El-Deredy, and Andrew Howes. The effect of expected value on attraction effect preference reversals. Journal of Behavioral Decision Making, 30(4):785–793, 2017.
[47] Sarah Lichtenstein and Paul Slovic. Reversals of preference between bids and choices in gambling decisions. Journal of experimental psychology, 89(1):46, 1971.
[48] William M. Hedgcock, Raghunath Singh Rao, and Haipeng (Allan) Chen. Choosing to choose: The effects of decoys and prior choice on deferral. Management Science, 62(10):2952–2976, 2016.
[49] Alberto Acerbi and Joseph M Stubbersfield. Large language models show human-like content biases in transmission chain experiments. Proceedings of the National Academy of Sciences, 120(44):e2313790120, 2023.
[50] Kyrtin Atreides and David J Kelley. Cognitive biases in natural language: Automatically detecting, differentiating, and measuring bias in text. Cognitive Systems Research, 88:101304, 2024.
[51] Aadesh Salecha, Molly E Ireland, Shashanka Subrahmanya, João Sedoc, Lyle H Ungar, and Johannes C Eichstaedt. Large language models display human-like social desirability biases in big five personality surveys. PNAS nexus, 3(12):pgae533, 2024.
[52] Jessica Maria Echterhoff, Yao Liu, Abeer Alessa, Julian J McAuley, and Zexue He. Cognitive bias in high-stakes decision-making with llms. CoRR, 2024.
[53] Simon Malberg, Roman Poletukhin, Carolin M Schuster, and Georg Groh. A comprehensive evaluation of cognitive biases in llms. arXiv preprint arXiv:2410.15413, 2024.
[54] Ammar Shaikh, Raj Abhijit Dandekar, Sreedath Panat, and Rajat Dandekar. Cbeval: A framework for evaluating and interpreting cognitive biases in llms. arXiv preprint arXiv:2412.03605, 2024.
[55] James Deese and Roger A Kaufman. Serial effects in recall of unorganized and sequentially organized verbal material. Journal of experimental psychology, 54(3):180, 1957.
[56] Hermann Ebbinghaus. Memory: A contribution to experimental psychology. Annals of Neurosciences, 20(4):155, 2013.
[57] Geoffrey Keppel and Benton J Underwood. Proactive inhibition in short-term retention of single items. Journal of verbal learning and verbal behavior, 1(3):153–161, 1962.
[58] Terry L. Boles and David M. Messick. A reverse outcome bias: The influence of multiple reference points on the evaluation of outcomes and decisions. Organizational Behavior and Human Decision Processes, 61(3):262–275, 1995.
[59] U.S. Social Security Administration. Popular baby names by decade. https://www.ssa.gov/oact/babynames/decades/index.html, 2024. Accessed: 2025-06-10.
# A Color Comparison Sets and Prompts
Each model was asked to perform triplewise comparisons on four sets of three colors, categorized by tier. From best to worst, the tiers are Ideal, Fair, Plain, and Harsh. The method for generating these color sets is outlined in the Materials and Methods section of the main paper. Table 1 lists the specific colors used for each model and tier.
Table 1: Colors used in triplewise and pairwise comparisons. The three colors in each row were used for triplewise comparisons for each model and tier. Colors 1 and 2 were used for pairwise comparisons.
For triplewise comparisons, each model was presented with all six permutations of the three colors and asked to select the best one for a kid’s room. For pairwise comparisons, both permutations of the two colors were used. The prompt used for pairwise comparisons was:
The prompt used for triplewise comparisons is similar and omitted.
# B Resume Comparison Sets and Prompts
The resume selection methodology for triplewise and pairwise comparisons is outlined in the Materials and Methods section of the main paper. The prompts used to generate the resumes are the following.
We generated synthetic personal profiles by randomly combining pre-defined lists of common U.S. Caucasian first and last names, and U.S. state abbreviations. Below are the full lists:
Last names: Smith, Johnson, Williams, Brown, Jones, Miller, Davis, Wilson, Anderson, Taylor, Thomas, Moore, Harris, Clark, Lewis.
Male first names: James, John, Robert, Michael, William, David, Joseph, Charles, Thomas, Christopher, Daniel, Matthew, Andrew, Joshua, Brandon.
Female first names: Emily, Jessica, Ashley, Sarah, Elizabeth, Hannah, Samantha, Lauren, Megan, Rachel, Amanda, Rebecca, Nicole, Stephanie, Katherine.
The male and female names were randomly permuted and matched with a last name; each last name appeared once for male candidates and once for female candidates. The prompt used for pairwise resume comparisons was:
# C Pairwise Color Comparisons
We used Colors 1 and 2 from Table 1 for each model and tier to conduct pairwise color comparisons. Each color set was tested in 50 pairwise comparisons for both permutations, resulting in 100 total comparisons per set at each temperature setting.
Table 2 reports the results at $T = 1$ , showing the proportion of times each position was chosen, along with effect sizes (Cohen’s $h$ ) and associated $p$ -values. Effect sizes are measured against a $50 \%$ baseline, and $p$ -values reflect the significance of deviation based on a two-sided binomial test. Even at the highest temperature setting used, most comparisons reveal statistically significant and substantial positional biases. We do not perform statistical tests for $T = 0$ as the results are almost deterministic.
Figure 4 shows the results at different temperatures. At $T = 0$ (the topmost plot), all models strongly prefer the first option for high-quality color sets, and shift to the second position for lower-quality sets. While higher temperatures introduce randomness, the positional bias remains robust across models and tiers.
Table 2: Positional bias in pairwise color comparisons, $T = 1$. Proportions of first and second selections across quality tiers. Cohen’s $h$ quantifies effect sizes from a $50\%$ baseline. Asterisks indicate significance: $p < 0.001^{***}$.
Figure 4: Order effects in pairwise comparisons at three temperature settings. Each panel ($T = 0$, $0.5$, and $1$, top to bottom) shows the percentage of selections by position, broken down by model and color tier.
# D Pairwise Resume Comparisons
For pairwise resume comparisons, we selected two resumes as outlined in Section 4 (Materials and Methods). We randomly selected four male names and four female names, and conducted exhaustive pairwise comparisons: 2 genders $\times \binom{4}{2}$ name pairs $\times 2$ name–resume assignments $\times 2$ presentation orders, for a total of 48 comparisons per tier and profession.
The names used for pairwise comparisons were Brandon Lewis, Charles Wilson, Christopher Taylor and Andrew Harris (male) and Elizabeth Jones, Ashley Williams, Amanda Thomas and Emily Smith (female).
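The exhaustive design can be enumerated directly; a minimal sketch reproducing the 48-comparison count per tier and profession (the resume labels are placeholders, not the actual resume texts):

```python
from itertools import combinations

males = ["Brandon Lewis", "Charles Wilson", "Christopher Taylor", "Andrew Harris"]
females = ["Elizabeth Jones", "Ashley Williams", "Amanda Thomas", "Emily Smith"]

trials = []
for names in (males, females):                 # 2 genders
    for a, b in combinations(names, 2):        # C(4, 2) = 6 name pairs
        for n1, n2 in ((a, b), (b, a)):        # 2 name-to-resume assignments
            pair = ((n1, "Resume 1"), (n2, "Resume 2"))
            for order in (pair, pair[::-1]):   # 2 presentation orders
                trials.append(order)

print(len(trials))  # 2 * 6 * 2 * 2 = 48
```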
Table 3 shows the proportion of times each position was chosen at $T = 1$ , along with effect sizes (Cohen’s $h$ ) and associated $p$ -values. As in Appendix C, effect sizes are measured against a $50 \%$ baseline, and $p$ -values reflect the significance of deviation based on a two-sided binomial test.
Figure 5 and Figure 6 plot the position-effect results for the four professions, as well as aggregated results, at temperatures 0 and 1 respectively. The results align with findings from the color experiment: all models tend to prefer the first position for high-quality resumes but shift to favor the second position for low-quality ones. Results for temperature 0.5 are consistent with these findings and are omitted.
Table 3: Pairwise position bias in resume comparisons, $T = 1$. Results are aggregated across all professions, grouped by model and tier. For each condition, we report the proportion of times the first vs. second resume was chosen (First/Second $\%$), the corresponding Cohen's $h$ effect size, and the $p$-value from a two-sided binomial test against a $50\%$ null. Asterisks indicate significance: $^{***}p < 0.001$.
Figure 5: Order effects in pairwise resume comparisons, $T = 0$. Each panel shows the proportion of times each position was chosen for each profession (Mechanical Engineer, Real Estate Agent, Journalist and Registered Nurse, top to bottom) and aggregated, broken down by LLM and quality tier.
Figure 6: Order effects in pairwise resume comparisons, $T = 1$. Each panel shows the proportion of times each position was chosen for each profession (Mechanical Engineer, Real Estate Agent, Journalist and Registered Nurse, top to bottom) and aggregated, broken down by LLM and quality tier.
# E Distinguishing True Preference Distortions from Tie-Breaking Heuristics
As we reported in Appendix C, we conducted 50 comparisons for each permutation of each pair of colors at each temperature. Here, we conducted an additional 500 pairwise tests (250 iterations per permutation), for a total of 600 pairwise comparisons. The results are shown in Table 4. We found that in eight of the twelve tests, the model’s preference was statistically significant. Even when there is a clear preference, the effect size is small. This is to be expected, as the colors were chosen so that the LLM would be indifferent between them at $T = 0$ .
As we observed a strong name bias and a moderately strong gender bias, we chose names so as to minimize the impact of these biases. We selected two female names that evoked minimal bias in the pairwise comparisons: Ashley Williams and Emily Smith. As with the color selection, we conducted 600 pairwise comparisons for resumes. Here, we needed to compare Resume 1 with Ashley Williams' personal information to Resume 2 with Emily Smith's personal information, and vice versa. We therefore performed 150 iterations per permutation and resume–name pair, for a total of 600 pairwise comparisons. The model's preference was statistically significant in five of the twelve resume pairs. The results for resume comparisons are shown in Table 5.
# F Triplewise Color Comparisons
For triplewise comparisons, we used all three colors from Table 1 in Appendix A. Each of the $3! = 6$ permutations was iterated 40 times, for a total of 240 triplewise comparisons.
We used a chi-square test to assess whether LLMs exhibit position bias when evaluating the three colors of equal quality. Under the null hypothesis of no positional preference, we would expect the model to select each position with equal probability (i.e., one-third each). A statistically significant deviation from this uniform distribution (as indicated by $p < 0.001$) suggests that the model's selection is systematically influenced by presentation order rather than content. To quantify the strength of this effect, we report Cramér's $V$ as a measure of effect size. Table 6 shows the results of the triplewise comparisons at $T = 1$: the proportion of times each position was chosen, along with effect sizes and associated $p$-values.
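A stdlib sketch of this goodness-of-fit test for the three-option case; with $df = 2$ the chi-square survival function reduces to $e^{-x/2}$, so no SciPy is needed. The counts in the demo call are illustrative, not values from our tables:

```python
import math

def chi2_uniform_gof(counts):
    """Chi-square goodness-of-fit against a uniform null, plus Cramér's V.
    For k = 3 categories (df = 2) the chi-square survival function has the
    closed form exp(-x / 2); the p-value is only computed for that case."""
    n, k = sum(counts), len(counts)
    expected = n / k
    chi2 = sum((o - expected) ** 2 / expected for o in counts)
    p = math.exp(-chi2 / 2) if k == 3 else float("nan")
    v = math.sqrt(chi2 / (n * (k - 1)))   # Cramér's V for goodness-of-fit
    return chi2, p, v

# e.g. 240 triplewise trials with the first position heavily favored (illustrative)
chi2, p, v = chi2_uniform_gof([160, 50, 30])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}, V = {v:.2f}")
```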
Figure 7 shows the results at different temperatures. As in the pairwise comparisons, higher temperatures introduce randomness, but the positional bias remains robust.
Table 6: Triplewise color comparisons, $T = 1$. Proportions chosen from each position across models and quality tiers, reported as First/Second/Third $(\%)$. All $p$-values are from chi-square tests of uniform choice; $^{***}p < 0.001$ denotes statistical significance.
Figure 7: Order effects in triplewise comparisons at three temperature settings. Each panel ($T = 0$, $0.5$, and $1$, top to bottom) shows the percentage of selections by position, broken down by model and color tier.
# G Triplewise Resume Comparisons
The method for generating the resumes for triplewise comparisons is outlined in Section 4 (Materials and Methods). As in the triplewise color comparisons, we conducted a chi-square goodness-of-fit test under the null hypothesis that each position is selected with equal probability (i.e., 1/3 for each of the three positions), and measured effect sizes using Cramér's $V$. Table 7 shows the results of the triplewise comparisons at $T = 1$: the proportion of times each position was chosen, along with effect sizes and associated $p$-values.
Figure 8 and Figure 9 visualize the position effect results for the four professions, as well as aggregated results, at temperatures 0 and 1 respectively.
Table 7: Order bias in triplewise resume comparisons, $T = 1$. For each model and quality tier, "First/Second/Third $(\%)$" shows the percentage of times the first, second, or third option was chosen. Position bias was evaluated with a chi-square goodness-of-fit test ($\chi^2$) against uniform choice; Cramér's $V$ is the effect size. Asterisks denote $^{*}p < 0.05$, $^{**}p < 0.01$, $^{***}p < 0.001$.
Figure 8: Positional bias in triplewise resume comparisons, $T = 0$. Positional order effects in triplewise comparisons across four professions (Mechanical Engineer, Real Estate Agent, Journalist and Registered Nurse, top to bottom) and aggregated across professions.
Figure 9: Triplewise comparisons, $T = 1$. Positional order effects in triplewise comparisons across four professions (Mechanical Engineer, Real Estate Agent, Journalist and Registered Nurse, top to bottom) and aggregated totals.
# H Name Bias
# H.1 Name Bias in Triplewise Comparisons
To evaluate whether name selections were uniformly distributed across the 30 candidate names for each model, we performed a chi-square goodness-of-fit test. The results are shown in Table 8: GPT-4o-mini's selections closely match a uniform distribution ($\chi^2 = 9.6$, $p = 0.9997$), while Claude 3 Haiku shows a strong deviation ($\chi^2 = 215.8$, $p = 1.9 \times 10^{-30}$), indicating a strong name-level selection bias. Llama falls between these extremes ($\chi^2 = 73.1$, $p = 1.1 \times 10^{-5}$).
Figure 10 visualizes the full distribution of name selection frequencies for each model using histograms with bin size 3. The expected number of times each name would be chosen if there were no name bias is 32. GPT-4o-mini chose each name between 26 and 38 times, consistent with little to no bias. Claude 3 Haiku's selections show high variance, with a wide spread and a strong preference for certain names. Figure 11 shows box plots for each model, confirming these patterns: Claude 3 Haiku has the widest interquartile range and largest spread, while GPT-4o-mini's selection counts are narrowly distributed with minimal outliers. Table 9 shows Claude 3 Haiku's top 5 least and most 'favorite' names: it chose some names as few as 8 times and some as many as 62 times.
Table 8: Name bias in triplewise comparisons, $T = 1$. For each model, we report separate goodness-of-fit $\chi^2$ statistics and $p$-values for male names, female names, and combined (All), along with the total number of comparisons ($n$). Asterisks indicate statistical significance at $^{***}p < 0.001$.
Figure 10: Histogram of name selections per model, binned every 3. The histogram shows how often each of the 30 candidate names was chosen across all triplewise comparisons.
Figure 11: Box plot of the times each name was selected per model.
Table 9: Claude 3 Haiku’s favorite and least favorite names.
# H.2 Name Bias in Pairwise Comparisons
As noted in Appendix D, the names used for pairwise comparisons were Brandon Lewis, Charles Wilson, Christopher Taylor and Andrew Harris (male) and Elizabeth Jones, Ashley Williams, Amanda Thomas and Emily Smith (female). These names were selected at random, before we had analyzed the name bias in triplewise comparisons; that is, we did not yet know that two of the male names were the least and most favored by Claude 3 Haiku. The number of samples from the pairwise comparisons we had previously carried out (Appendix D) was insufficient for statistical significance (except for Claude 3 Haiku). We therefore extended the comparisons to all three resumes used in triplewise comparisons, but limited our analysis to results where neither resume dominated the other. The results are shown in Table 10. Because this extended experiment is somewhat cumbersome to describe in the main text, we also analyzed the direct comparisons between Christopher Taylor and Andrew Harris (from the original pairwise comparison results). Claude 3 Haiku selected Christopher Taylor in 82 out of 128 trials ($p = 0.0019$, $h = 0.29$).
Table 10: Name bias in pairwise comparisons, $T = 1$. Chi-square tests for uniform name selection distribution, compared to a baseline of $25\%$, by gender. Asterisks denote significance levels: $^{*}p < 0.05$, $^{**}p < 0.01$, $^{***}p < 0.001$.
# I Gender Bias
We conducted cross-gender pairwise comparisons using the same set of resumes and name–profile pairings employed in our same-gender experiments. For each tier and profession, we compared every male–female pairing in both presentation orders, yielding 64 total comparisons per tier per profession (see Section 4). Table 11 reports the positional bias observed in these cross-gender trials, while Table 12 reports the gender bias. Across nearly all models and tiers, positional effects exceed gender effects, with the sole exception of the ‘Best’ tier under Claude 3 Haiku.
To test whether this difference in effect sizes is itself statistically significant, we first defined, for each model–tier combination, the dominant position as the position (first or second) chosen most frequently in our same-gender comparisons (Appendix D). We then aggregated two counts over the cross-gender data: (1) the number of times the resume in that dominant position was selected, and (2) the number of times the female candidate was selected. The resulting positional and gender effect sizes are reported in Table 13. Finally, we performed a nonparametric bootstrap (10,000 resamples with replacement) on these aggregated counts to generate $95\%$ confidence intervals for the difference in effect sizes; in every model–tier case the interval excluded zero, confirming that order effects are stronger than gender effects.
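A percentile-bootstrap sketch of that procedure, resampling each aggregated count as a binomial draw. The counts in the demo call are hypothetical, chosen only to illustrate a strong order bias and a mild gender bias:

```python
import math
import random

def cohens_h(p):
    """Cohen's h of a proportion against the 50% baseline."""
    return 2 * math.asin(math.sqrt(p)) - math.pi / 2

def bootstrap_delta_h(k_order, k_gender, n, reps=10_000, seed=0):
    """Percentile-bootstrap 95% CI for h_order - h_gender.
    Each aggregated count is resampled as Binomial(n, k/n)."""
    rng = random.Random(seed)
    p_o, p_g = k_order / n, k_gender / n
    diffs = []
    for _ in range(reps):
        ko = sum(rng.random() < p_o for _ in range(n))
        kg = sum(rng.random() < p_g for _ in range(n))
        diffs.append(cohens_h(ko / n) - cohens_h(kg / n))
    diffs.sort()
    return diffs[int(0.025 * reps)], diffs[int(0.975 * reps)]

# hypothetical aggregated counts over n = 256 cross-gender trials
lo, hi = bootstrap_delta_h(k_order=230, k_gender=140, n=256)
print(f"95% CI for delta-h: ({lo:.2f}, {hi:.2f})")
```

An interval whose lower bound exceeds zero indicates that the order effect is significantly larger than the gender effect.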
The reader may wish to compare Table 11 with Table 3, which gives the corresponding results for same-gender comparisons. On aggregate, GPT-4o-mini, Claude 3 Haiku and Llama 3 8B chose the candidate in the dominant position $93.6\%$, $87.6\%$ and $89.2\%$ of the time respectively in same-gender comparisons, compared to $92.8\%$, $84.3\%$ and $86.5\%$ in cross-gender comparisons. We note that name bias may play a role in these results; we do not attempt to disentangle the name and gender biases. Table 14 breaks down the results further, reporting the proportion of times the first position was chosen when the female or male candidate was presented first. Across all models and tiers, the first position was selected more often when the female candidate was presented first (except in one case, where the candidate in the first position was never selected).
Figure 12 and Figure 13 show the proportion of times each position was chosen, broken down by gender, at $T = 0$ and $T = 1$ respectively.
Table 11: Pairwise position bias in cross-gender resume comparisons, $T = 1$. Results are aggregated across all professions, grouped by model and tier. For each condition, we report the proportion of times the first vs. second resume was chosen (First/Second $\%$), the corresponding Cohen's $h$ effect size, and the $p$-value from a two-sided binomial test against a $50\%$ null. Asterisks indicate significance: $^{***}p < 0.001$.
Table 12: Gender bias at temperature 1. The two candidates differ in gender and alternate positions in the pairwise comparisons; values show the percentage of times the female vs. male candidate was selected, reported as Female/Male $(\%)$. Statistical significance is from binomial tests; effect sizes are Cohen's $h$. Asterisks denote $^{*}p < 0.05$, $^{**}p < 0.01$, $^{***}p < 0.001$.
Table 13: Aggregated order and gender bias. For each model, Cohen's $h$ measures the effect size of order and gender bias (deviation from a $50\%$ baseline), aggregated across professions and tiers at $T = 1$. We also report the mean bootstrapped difference in effect sizes ($\Delta h = h_{\mathrm{order}} - h_{\mathrm{gender}}$) and its $95\%$ confidence interval, based on 10,000 resamples. In all cases, order bias significantly exceeds gender bias.
Table 14: First-position selections by candidate gender. "Female when first" and "Male when first" are the percentages of times the female or male candidate was chosen when appearing in position 1. Diff $(\%)$ represents the difference in female and male selection rates. We report both statistical significance (using Fisher's Exact Test) and effect sizes (Cohen's $h$), which quantify the magnitude of differences in proportions. Asterisks indicate significance levels: $^{*}p < 0.05$, $^{**}p < 0.01$, $^{***}p < 0.001$.
Figure 12: Gender and position bias in pairwise comparisons, $T = 0$. Gender effects in pairwise comparisons across four professions (Mechanical Engineer, Real Estate Agent, Journalist and Registered Nurse, top to bottom) and aggregated. Each panel shows the proportion of times each position was chosen, broken down by LLM and quality tier.
Figure 13: Gender and position bias in pairwise comparisons, $T = 1$. Gender effects in pairwise comparisons across four professions (Mechanical Engineer, Real Estate Agent, Journalist and Registered Nurse, top to bottom) and aggregated. Each panel shows the proportion of times each position was chosen, broken down by LLM and quality tier.

Abstract: Large language models (LLMs) are increasingly used in decision-support
systems across high-stakes domains such as hiring and university admissions,
where decisions often involve selecting among competing alternatives. While
prior work has noted positional order biases in LLM-driven comparisons, these
biases have not been systematically dissected or linked to underlying
preference structures. We provide the first comprehensive investigation of
positional biases across multiple LLM architectures and domains, uncovering
strong and consistent order effects, including a novel centrality bias not
previously documented in human or machine decision-making. We also find a
quality-dependent shift: when options are high quality, models exhibit primacy
bias, but favor latter options when option quality is low. We further identify
a previously undocumented bias favoring certain names over others. To
distinguish superficial tie-breaking from true distortions of judgment, we
introduce a framework that classifies pairwise preferences as robust, fragile,
or indifferent. We show that order effects can lead models to select strictly
inferior options, and that positional biases are typically stronger than gender
biases. These findings suggest that LLMs are not merely inheriting human-like
biases, but exhibit distinct failure modes not seen in human decision-making.
We propose targeted mitigation strategies, including a novel use of the
temperature parameter, to reduce order-driven distortions.
# 1. Introduction
Subset selection, also known as coreset selection (Zheng et al., 2023; Wan et al., 2024b), has become an effective approach to improve model training efficiency by identifying a small, representative subset of training data without significantly compromising model performance. This task is particularly important in scenarios involving large-scale datasets (Wan et al., 2024a; Wang et al., 2025; Jia et al., 2025), where full dataset training is computationally prohibitive. Subset selection methods can be broadly categorized into one-shot (Xia et al., 2024; Yang et al., 2024) and adaptive approaches (Karanam et al., 2022; Killamsetty et al., 2022). In this work, we focus on one-shot subset selection, which identifies subsets in a single pass, offering computational advantages over adaptive methods that require iterative selection during model training.
Traditional one-shot subset selection methods typically rely on a pre-trained model as an information extractor (IE) to derive data characteristics such as features, gradients, or uncertainty scores. These characteristics are then used to identify the most representative subset. While numerous strategies—such as feature-based (Agarwal et al., 2020; Sener & Savarese, 2017), uncertainty-based (Coleman et al., 2019; Wu et al., 2024), and gradient matching-based approaches (Mirzasoleiman et al., 2020)—have been proposed, these methods fundamentally depend on pre-trained models obtained by training on the full dataset of the target task, as shown in Figure 1 (a). This inherently introduces significant dataset dependency, which limits their applicability, particularly in large-scale data scenarios. Efforts to reduce this dependency, such as employing lightweight proxy models (Coleman et al., 2019) or minimizing pretraining epochs (Guo et al., 2022), only partially mitigate the computational burden without fundamentally addressing the dataset dependency issue.
Recent advancements in foundation models (FMs), such as pre-trained vision models (Caron et al., 2021; Oquab et al., 2023) and vision-language models (Radford et al., 2021; Zhai et al., 2023; Sun et al., 2023), offer a promising alternative. One option is fine-tuning or adapting FMs to the target dataset (Ding et al., 2023). While these approaches leverage pre-trained knowledge, they still require full-dataset access during fine-tuning, which undermines the computational efficiency that subset selection seeks to achieve. Moreover, these methods often face challenges such as overfitting on noisy datasets (Feng et al., 2024) and scalability issues on large datasets. In contrast, subset-based methods decouple the data selection process from task-specific training, enabling efficient learning without full-dataset reliance. With their robust generalization capabilities, FMs can serve as direct alternatives to traditional IEs, enabling dataset-agnostic subset selection pipelines, as illustrated in Figure 1 (b). Because FM-based pipelines eliminate the need for task-specific pre-training, they are well-suited for large and diverse datasets. Despite their potential, the advantages of FM-based pipelines over traditional methods remain under-explored. While some studies (Xie et al., 2023; Killamsetty et al., 2023) have investigated this approach, prior work (Xie et al., 2023) has revealed that simply using FMs for subset selection does not consistently lead to superior performance. This highlights critical open questions: Can FMs truly replace task-specific IEs in subset selection? If so, under what conditions?
In this paper, we conduct extensive experiments to investigate the strengths and limitations of using FMs as IEs for subset selection. Detailed experimental statistics and analysis can be found in Section 4 (Single-Model Study). Our experiments on subset selection, using three kinds of models as IEs on five different types of image datasets, i.e., CIFAR-10 (Krizhevsky et al., 2009), CIFAR-10N-worse (CIFAR-10N) (Wei et al., 2022), CIFAR-10-imbalance (CIFAR-10I) (Cui et al., 2019), Oxford-IIIT Pet (Pet) (Parkhi et al., 2012) and Oxford-IIIT Pet-N (Pet-N), revealed surprising findings: (1) FMs consistently outperform traditional IEs on both clean and noisy fine-grained datasets; and (2) FMs demonstrate limited advantages for subset selection on coarse-grained datasets with noisy labels.
While FMs are well-suited for fine-grained datasets, the optimal choice of FM as a feature extractor for subset selection remains an open question. Moreover, existing feature-based methods fail to comprehensively analyze feature distributions from both intra- and inter-class perspectives, resulting in suboptimal selection performance. To address these limitations, we introduce a novel subset selection pipeline that leverages multiple FMs with unknown selection performance to enhance fine-grained dataset selection. Our proposed RAM-APL method integrates diverse FMs (i.e., DINOv2 and CLIP) and quantifies data importance through a systematic analysis of feature distributions across both intra- and inter-class levels, achieving state-of-the-art performance on three fine-grained image datasets.
The contributions of our work are three-fold:
• An in-depth study on the strengths and limitations of foundation models compared to traditional information extractors for subset selection, revealing that foundation models consistently outperform traditional IEs on fine-grained datasets, whereas their advantage diminishes on coarse-grained datasets with noisy labels.
• A novel subset selection pipeline employing multiple foundation models with unknown selection performance as IEs, proposed for fine-grained image datasets. RAM-APL, an effective subset selection method, is designed based on this novel pipeline.
• Extensive experiments verifying the superiority of RAM-APL on three fine-grained image datasets. Specifically, on the Caltech-UCSD Birds-200-2011 dataset, RAM-APL achieves an average improvement of $6.4\%$ in prediction accuracy over the Random method across all sampling rates.
# 2. Related Works
Current one-shot subset selection methods typically follow a traditional selection pipeline, which consists of an information extractor, a measurer, and a selector. Various measures have been proposed to leverage the information provided by the extractor to assess data importance, including feature-based (Agarwal et al., 2020; Sener & Savarese, 2017), gradient-based (Kothawade et al., 2022; Killamsetty et al., 2021a), training dynamic-based (Toneva et al., 2018; Swayamdipta et al., 2020; He et al., 2024; Zhang et al., 2024) and other weighting strategies (Zhou et al., 2020; Coleman et al., 2019; Zheng et al., 2022). Regardless of the above methods, their extractors are usually trained to converge on the full training set of the target task, rendering the pre-trained extractor data-dependent and limiting the applicability of subset selection to new large-scale datasets. For example, TDDS (Zhang et al., 2024) required 90 epochs of extractor training on ImageNet-1K to gather training dynamics, surpassing the 60 epochs needed for training the target model on the coreset. To solve this problem, Coleman et al. (Coleman et al., 2019) designed a small proxy model to perform data selection, achieving significantly faster pretraining. Guo et al. (Guo et al., 2022) proposed to pre-train a model for a small number of epochs. However, they do not break free from dataset dependency. Recently, some studies (Xie et al., 2023; Killamsetty et al., 2023) have explored using foundation models (FMs) as IEs for data selection, showing promise in addressing dataset dependency. Nevertheless, neither study has conclusively demonstrated that FMs outperform traditionally trained IEs. Specifically, (Xie et al., 2023) found that simply utilizing an FM does not guarantee superior data selection performance, raising questions about the viability of FMs as substitutes for traditional IEs. Our comprehensive investigation reveals that FMs universally dominate traditional IEs on fine-grained datasets (both clean and noisy), while their advantage diminishes on coarse-grained datasets with noisy labels. Furthermore, the contribution of an FM to subset selection varies across datasets. To maximize the potential of FMs for fine-grained subset selection, we propose strategically combining multiple FMs with complementary capabilities.
Since only features can be obtained from each FM, how to effectively use the unaligned features extracted from multiple FMs to measure and select data is the key problem. Existing feature-based subset selection methods can be classified into two main categories: geometry-based methods (Welling, 2009; Sener & Savarese, 2017; Xia et al., 2023) and decision boundary-based methods (Ducoffe & Precioso, 2018; Margatina et al., 2021). Geometry-based studies (Welling, 2009; Sener & Savarese, 2017) selected samples whose distributions are not close to each other in feature space, so that subsets do not contain redundant information. Such subsets usually give the model good generalization. However, these methods treat all mutually distant samples as equally important, causing subset selection for fine-grained datasets to disregard inter-class distribution differences. Decision boundary-based methods select data close to the decision boundary, which is a time-consuming and biased selection process that is not beneficial for model generalization. Taking the best of both types of methods, we propose the subset selection method RAM-APL for fine-grained datasets.
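As a concrete illustration of the geometry-based family, the sketch below implements a minimal K-center Greedy selection over raw feature vectors: repeatedly add the point farthest from the already-selected set, so the subset covers the feature space with little redundancy. The 2-D "features" are toy values, not extracted FM features:

```python
import math

def k_center_greedy(feats, m, first=0):
    """Greedy k-center selection in the spirit of Sener & Savarese (2017).
    feats: list of feature vectors; m: subset size; first: seed index."""
    selected = [first]
    # distance from every point to its nearest selected point
    d_min = [math.dist(f, feats[first]) for f in feats]
    while len(selected) < m:
        nxt = max(range(len(feats)), key=d_min.__getitem__)  # farthest point
        selected.append(nxt)
        d_min = [min(d, math.dist(f, feats[nxt])) for d, f in zip(d_min, feats)]
    return selected

# toy 2-D "features": two tight clusters plus one outlier
feats = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (0, 5)]
print(k_center_greedy(feats, m=3))  # picks one point per cluster
```

Note how the near-duplicate points (indices 0/1 and 2/3) are never both selected, which is exactly the redundancy-avoiding behavior described above.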
# 3. Preliminary: Subset Selection
In downstream tasks such as image classification and recognition, we consider a large-scale training set $\mathcal{D} = \{I_1, \ldots, I_N\}$ of size $N$, where each sample $I_i = (x_i, y_i)$ consists of input data $x_i$ and its corresponding class label $y_i \in \{1, \ldots, C\}$. Given a specified budget $p$, subset selection is used to identify a subset $\mathcal{S}$ of $\mathcal{D}$ that contains the most informative data for the target downstream task. It is expected that the model $\theta^{\mathcal{S}}$ trained on $\mathcal{S}$ can perform on par with the model $\theta^{\mathcal{D}}$ trained on $\mathcal{D}$. The performance of subset selection is evaluated by the performance of model $\theta^{\mathcal{S}}$ on the test set of the target downstream task. The subset $\mathcal{S} = \{I_1, \ldots, I_M\}$ has size $M$, where $M < N$, and the sampling rate for subset selection is defined as $p = M/N$. In practice, $p$ is pre-specified, and the subset $\mathcal{S}$ is selected with the expectation of maximizing the target model's accuracy while adhering to the budget constraint.
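Under these definitions, the budget-constrained selection step can be sketched generically; the per-sample importance scores here are placeholders for whatever measure an IE-based method produces, not any specific method from this paper:

```python
def select_subset(scores, p):
    """Pick the top-p fraction of samples by an importance score.
    scores: one importance value per sample (higher = more informative);
    p: sampling rate, so the subset size is M = floor(p * N) (at least 1)."""
    n = len(scores)
    m = max(1, int(p * n))
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:m])   # indices of the selected subset S

# toy scores for N = 5 samples at sampling rate p = 0.4 (M = 2)
idx = select_subset([0.9, 0.1, 0.7, 0.3, 0.8], p=0.4)
print(idx)  # indices of the two highest-scoring samples
```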
Subset selection relies on an Information Extractor (IE) to extract information from each sample, which is then used to assess the importance of the sample and select the most informative data. Traditionally, the IE is a model pretrained on the full training set, which inherently introduces dataset dependency, limiting the applicability of this approach across different datasets. To address this limitation, a more flexible and generalizable approach is necessary, and it is therefore crucial to explore alternatives that reduce or eliminate dataset dependency.
# 4. Single-Model Study
Foundation Models (FMs) have recently emerged as a promising alternative to traditional information extractors (IEs) for subset selection. However, the advantages of FM-based selection over conventional methods remain largely unexplored. In this section, we investigate whether a single foundation model can effectively replace traditional IEs and address the following two key questions: Question 1: In which cases are foundation models most effective, and in which cases are they not? Question 2: Do all FMs perform equally? Our extensive experiments reveal several key findings:
• Observation 1: FMs demonstrate limited advantages for subset selection on noisy, coarse-grained datasets.
• Observation 2: Conversely, FMs significantly and consistently outperform traditional IEs for subset selection on fine-grained datasets (both clean and noisy).
• Observation 3: Different FMs perform differently as information extractors for subset selection.
Inspired by Observations 2 and 3, we propose an FM-based algorithm for superior fine-grained subset selection, which is elaborated in Section 5. In subsequent paragraphs, we provide detailed explanations for these observations.
Experimental Setting. To assess the applicability of foundation models as information extractors (IEs), we conducted subset selection experiments using a single model as the IE across five distinct image datasets: CIFAR-10 (Krizhevsky et al., 2009), CIFAR-10N-worse (CIFAR-10N) (Wei et al., 2022), CIFAR-10-imbalance (CIFAR-10I) (Cui et al., 2019), Oxford-IIIT Pet (Pet) (Parkhi et al., 2012) and Oxford-IIIT Pet with $20\%$ symmetric label noise (Oxford-IIIT Pet-N, abbreviated as Pet-N). We applied three kinds of models for feature extraction in subset selection: (1) models pre-trained on the target training dataset for ten epochs (Guo et al., 2022), referred to as model-TD; once the target task changes, the model needs to be pre-trained again. (2) Models pre-trained on TinyImageNet (TIN) (Krizhevsky et al., 2012) for ten epochs, referred to as model-TIN. TinyImageNet is a larger classification dataset, so models pre-trained on it possess stronger representation ability than those pre-trained on the target datasets; we therefore expect model-TIN to serve as an alternative to traditional IEs without retraining when the target task changes. (3) A single foundation model (i.e., DINOv2, CLIP, SigLIP, or EVA-CLIP). To explore the impact of these three kinds of models as IEs on selection algorithms, we implemented four classical algorithms, i.e., MIN, K-center Greedy (KCG) (Sener & Savarese, 2017), Graph Cut (GC) (Iyer et al., 2021) and Moderate DS (MDS) (Xia et al., 2023), over the extracted features. In addition, we used each selection algorithm to select training samples at various sampling rates (i.e., $10\%$, $30\%$, and $50\%$) and trained target models on the selected subsets. We provide the detailed experimental setup and results in Appendix A.
We analyzed which of the three single models served as the most effective IE across four subset selection methods and three sampling rates. The frequency of each type of single model being the optimal IE under 12 settings on each dataset is presented in Figure 1 (c). Surprisingly, we found that directly using features extracted from the FM for subset selection does not consistently outperform features extracted from traditional pre-trained models.
(Observation) FMs demonstrate limited advantages for subset selection on noisy, coarse-grained datasets. In contrast, FMs consistently outperform traditional IEs for subset selection on both clean and noisy fine-grained datasets. In the case of selecting CIFAR-10N, the FM emerged as the optimal IE in only 4 out of 12 experimental setups. Conversely, the FM performed well on the other four datasets, especially on Pet and Pet-N. For subset selection on CIFAR-10, the FM was the optimal IE in 6 out of 12 experimental setups, but the best result at each sampling rate was achieved using model-TIN as the IE. In the case of CIFAR-10I, the FM was optimal in 8 out of 12 experimental setups, but at a low sampling rate of $1\%$, model-TD yielded the best results. Encouragingly, the single FM performed best in 9 out of 12 experimental setups on the Pet and Pet-N datasets and achieved the best results across all sampling rates. Thus, the FM presents a viable alternative to traditionally trained IEs for fine-grained image datasets. The Single-Model Study on more coarse- and fine-grained tasks shows the same conclusions, as summarized in Appendix A.2.
Figure 2. Relationship between foundation model performance on the target task and subset selection performance using that FM as IE. Superior target task accuracy does not necessarily lead to better subset selection performance across different foundation models and selection methods.
(Observation) Different FMs perform differently as information extractors for subset selection, and the superior performance of FMs on downstream tasks does not guarantee better subset selection effects. Various FMs are available, including DINOv2, CLIP, SigLIP, and EVA-CLIP. If a method is to be designed for fine-grained datasets according to subset selection pipeline (b), an optimal FM must first be identified as the IE. An intuitive idea is to identify the optimal FM by testing each on the target task, with the best-performing FM chosen as the IE. However, we observe that superior performance on the downstream task does not guarantee better subset selection. As shown in Figure 2, although EVA-CLIP has strong zero-shot classification on Pet, it is not optimal for any selection method. Furthermore, our experiments indicate that the optimal FM as the IE varies depending on factors such as target datasets, selection methods, and sampling rates. For instance, Figure 2 demonstrates that for selecting $50\%$ of the Pet dataset, DINOv2 performs best as the IE for the MIN method, while CLIP excels for the KCG method. Additional analysis of optimal FMs across sampling rates is presented in Appendix A.3. Therefore, pipeline (b) requires an additional step to identify the best FM to achieve the most effective performance across different scenarios. This undoubtedly introduces an optimization detour, diverting focus from the primary goals of data measurement and selection.
While FMs are well-suited for fine-grained datasets, the optimal choice of FM as a feature extractor for FM-based subset selection remains an open question. Moreover, existing feature-based methods fail to comprehensively analyze feature distributions from both intra- and inter-class perspectives, resulting in suboptimal selection performance. To address these limitations, we explore a novel subset selection pipeline that directly employs multiple FMs with unknown individual contributions as IEs. Building on our pipeline, we propose the RAM-APL method, achieving state-of-the-art performance on multiple fine-grained datasets.
# 5. Proposed Method: RAM-APL
We are the first to investigate subset selection with multiple foundation models. In this section, we propose a baseline method that uses multiple models as feature extractors. We introduce the problem formulation in Section 5.1. The subset selection method is then explained in detail in Section 5.2, which includes two metrics, namely the ranking mean and the accuracy of pseudo-class labels.
# 5.1. Problem Formulation
Multiple foundation models $\mathcal{M}_{\mathcal{F}} = \{M_F^1, \ldots, M_F^m\}$ are used to extract information from the training data in our method. Foundation models can be directly used as feature extractors, but features of the same samples extracted by different models are not aligned. Therefore, the two key challenges in our method design are effectively fusing features and accurately measuring sample importance based on the fused representations.
# 5.2. Method
The primary challenge in learning from fine-grained image datasets lies in their large intra-class differences and small inter-class differences. Existing subset selection methods either emphasize intra-class distribution while overlooking inter-class similarities or focus on decision-boundary samples while neglecting samples from other distributions within the class. To address these limitations, we propose RAM-APL, a selection method that quantifies data importance by jointly considering both intra-class and inter-class distributions.
Feature Extraction Given a fine-grained image dataset $\mathcal{D}$, we extract features using multiple FMs $M_F^i$, where $i \in \{1, \ldots, m\}$. The extracted feature set is denoted as $\mathcal{F} = [\mathcal{F}^1, \ldots, \mathcal{F}^m]$, where $\mathcal{F}^i = [F_1^i, \ldots, F_N^i]$ represents the feature representations of $\mathcal{D}$ obtained from the $i^{th}$ foundation model $M_F^i$. Each feature vector for a data sample $I_j$ is defined as $F_j^i = [f_j^{i,0}, f_j^{i,1}, \ldots, f_j^{i,K_i-1}] \in \mathbb{R}^{K_i}$, where $K_i$ denotes the feature dimensionality of the $i^{th}$ model. Since FMs may produce features of varying dimensions, their representations are not necessarily aligned.
RAnking Mean (RAM) RAM maps features extracted by different foundation models from their unaligned feature spaces into a distance-ranking space (an aligned space), facilitating the evaluation of data importance based on intra-class distribution.
After acquiring the feature set $\mathcal{F}$, we map the features extracted by each foundation model to a distance-ranking space. Specifically, given the feature set $\mathcal{F}^i = [F_1^i, \ldots, F_N^i]$ from foundation model $M_F^i$, we define the central feature of class $c$ as the mean feature vector:
$$
\tilde { \boldsymbol { F } } _ { c } ^ { i } = \frac { 1 } { | \boldsymbol { S } | } \sum _ { j \in \boldsymbol { S } } \boldsymbol { F } _ { j } ^ { i } ,
$$
where $S$ represents the set of indices belonging to class $c$. The Euclidean distance between a sample $F_j^i$ and its class center $\tilde{F}_c^i$ serves as a measure of representativeness, with smaller distances indicating higher representativeness (Xia et al., 2023):
$$
d \left( F_j^i, \tilde{F}_c^i \right) = \| F_j^i - \tilde{F}_c^i \|_2 ,
$$
where $\|\cdot\|_2$ denotes the Euclidean norm. Samples are ranked within each class according to their computed distances, producing ranked values $\mathcal{R}^i = [r_1^i, \ldots, r_{|S|}^i]$ for model $M_F^i$, where $r_j^i \in \mathbb{Z}^+$ and smaller values indicate closer distances. This process is repeated for all $m$ foundation models, mapping unaligned features into a unified distance-ranking space. The final ranking mean of class $c$ is denoted as:
$$
\overline { { \mathcal { R } } } _ { c } = [ \overline { { r } } _ { 1 } , \dots , \overline { { r } } _ { | S | } ] ,
$$
where $\overline{r}_j = \frac{1}{m \cdot |S|} \sum_{i=1}^{m} r_j^i \in [0, 1]$ represents the normalized ranking mean for sample $I_j$. A smaller normalized ranking mean indicates greater alignment with class prototypes across foundation models. Visual analysis in Appendix B.4 further reveals that samples with lower normalized ranking means tend to exhibit more distinct target objects and simpler backgrounds.
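The RAM computation follows directly from Equations (1)–(3). The NumPy sketch below is illustrative (the function name and tie handling are our assumptions, not the released code): for each model, it ranks samples by distance to their class center and averages the normalized ranks across models.

```python
import numpy as np

def ranking_mean(features_per_model, labels):
    """Normalized ranking mean (RAM) over m unaligned feature spaces.

    features_per_model: list of (N, K_i) arrays, one per foundation model.
    Returns an (N,) array; smaller values = closer to class prototypes."""
    labels = np.asarray(labels)
    ranks = np.zeros(len(labels))
    for F in features_per_model:
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            center = F[idx].mean(axis=0)                 # class center, Eq. (1)
            d = np.linalg.norm(F[idx] - center, axis=1)  # distances, Eq. (2)
            order = np.argsort(np.argsort(d)) + 1        # rank 1 = closest
            ranks[idx] += order / len(idx)               # normalize by |S|
    return ranks / len(features_per_model)               # average over m models
```

With two identical "models", the output reduces to the within-class normalized rank of each sample, which matches the single-model case.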
Accuracy of Pseudo-class Labels (APL) APL maps features extracted by various foundation models from their unaligned feature space into a pseudo-class confidence score based on the unified inter-class distance ranking.
After obtaining the feature set $\mathcal{F}$, we assign pseudo-class labels to features extracted from each foundation model separately. Specifically, given the feature set $\mathcal{F}^i = [F_1^i, \ldots, F_N^i]$ from foundation model $M_F^i$, we first compute the central features of all $C$ classes using Equation (1), collectively denoted as $\tilde{\mathcal{F}}^i = [\tilde{F}_0^i, \ldots, \tilde{F}_{C-1}^i]$. Next, we calculate the Euclidean distances between each sample $F_j^i$ and all central features following Equation (2). These distances are represented as $D(F_j^i) = [d_{j,0}^i, \ldots, d_{j,(C-1)}^i]$, where $d_{j,c}^i$ represents the distance between $F_j^i$ and the central feature $\tilde{F}_c^i$. The pseudo-class label for sample $I_j$ in the feature space of $M_F^i$ is then assigned based on the nearest central feature, computed as:
$$
\tilde{y}_j^i = \arg \min D ( F_j^i ) .
$$
If the assigned pseudo-class label matches the ground-truth label, i.e., $\tilde { y } _ { j } ^ { i } = y _ { j }$ , then the sample is considered correctly classified in the feature space of $M _ { F } ^ { i }$ , and we assign a score of $\varphi _ { j } ^ { i } = 1$ . Otherwise, we set $\varphi _ { j } ^ { i } = 0$ .
By repeating this process across all $m$ foundation models, we obtain a set of classification scores for each sample across different feature spaces. The average pseudo-class label accuracy for sample $I _ { j }$ is then computed as:
$$
\overline { { \varphi } } _ { j } = \frac { 1 } { m } \sum _ { i = 1 } ^ { m } \varphi _ { j } ^ { i } .
$$
A lower value of $\overline { { \varphi } } _ { j }$ indicates that sample $I _ { j }$ is more frequently misclassified across different feature spaces, suggesting a higher degree of similarity to other classes and thus greater difficulty in distinguishing it within the feature distribution. Finally, we represent the overall pseudo-class label accuracy for the entire dataset $\scriptstyle { \mathcal { D } }$ as:
$$
\overline { { \varphi } } = [ \overline { { \varphi } } _ { 1 } , \ldots , \overline { { \varphi } } _ { N } ] .
$$
Subset Selection The importance of data samples in fine-grained learning is quantified through a linear combination of the RAnking Mean and the Accuracy of Pseudo-class Labels (RAM-APL), formulated as:
$$
\mathrm{Score} = W_1 \times \overline{\mathcal{R}} + W_2 \times ( 1 - \overline{\varphi} ) .
$$
Here, $W_1$ and $W_2$ control the contributions of intra-class and inter-class distributions, respectively. Inspired by (Swayamdipta et al., 2020), which highlights that easier samples facilitate optimization, we prioritize high intra-class similarity at lower sampling rates $p$, gradually incorporating harder samples as $p$ increases. Thus, $W_1$ and $W_2$ depend on the sampling rate $p$. The weight functions are defined as:
$$
\begin{array}{l}
W_1 = \alpha + ( 1 - \alpha ) \times \dfrac{1}{1 + e^{\beta ( p - 0.5 )}} \\
W_2 = 1 - W_1
\end{array}
$$
Samples with the smallest scores are selected into the subset $s$ up to the predefined budget. The hyper-parameters $\alpha$ and $\beta$ regulate the balance between intra-class and inter-class information across different sampling rates. Experimental results demonstrate that the best selection performance on fine-grained datasets is achieved using $\mathcal{M}_{\mathcal{F}} = \{\mathrm{CLIP}, \mathrm{DINOv2}\}$.
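Putting RAM and APL together, the final scoring and selection step can be sketched as follows. This is an illustrative sketch under our naming: `ram` and `apl_acc` stand for the per-sample vectors $\overline{\mathcal{R}}$ and $\overline{\varphi}$ computed earlier.

```python
import numpy as np

def ram_apl_select(ram, apl_acc, p, alpha=0.2, beta=1.0):
    """Return indices of the subset selected at sampling rate p."""
    ram, apl_acc = np.asarray(ram), np.asarray(apl_acc)
    # sampling-rate-dependent weight schedule
    w1 = alpha + (1 - alpha) / (1 + np.exp(beta * (p - 0.5)))
    w2 = 1 - w1
    scores = w1 * ram + w2 * (1 - apl_acc)   # linear combination
    budget = max(1, int(round(p * len(scores))))
    return np.argsort(scores)[:budget]       # smallest scores are kept
```

At low $p$ the RAM term dominates, so prototypical (easy) samples are picked first; as $p$ grows, harder, boundary-like samples with low APL accuracy enter the subset.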
# 6. Experiments
# 6.1. Experimental Settings
Datasets. We evaluate our method on three classical fine-grained image classification datasets: Oxford-IIIT Pet (Pet) (Parkhi et al., 2012), Food-101 (Bossard et al., 2014), and Caltech-UCSD Birds-200-2011 (CUB-200-2011) (Wah et al., 2011). The Oxford-IIIT Pet comprises 7,349 images of 37 different breeds of cats and dogs. Food-101 has 101 classes, each with 750 training images and 250 test images. CUB-200-2011 consists of 11,788 images of 200 bird subcategories. Detailed dataset statistics are provided in Appendix A.1.
Foundation Models as Feature Extractor. We adopt two FMs as feature extractors for fine-grained image datasets, i.e., CLIP-VITl14 (Radford et al., 2021) and DINOv2-VITs14 (Oquab et al., 2023). The visual encoder of CLIP-VITl14 is used to extract image features, while the final-layer [CLS] token embedding output of DINOv2-VITs14 serves as the feature representation. We emphasize that these FMs were not fine-tuned on the target datasets and were used solely as feature extractors for subset selection. The impact of varying the number of foundation models on selection performance is discussed in Section 6.4.
Target Model Architecture & Training Parameters. For the Pet and Food-101 datasets, we use the 18-layer residual network (ResNet-18) (He et al., 2016) as the model backbone, initializing it randomly for training. For the CUB-200-2011 dataset, we adopt ResNet-50 as the model backbone, initialized with weights pre-trained on ImageNet (Deng et al., 2009). We follow the experimental setup from (Guo et al., 2022). Specifically, we use SGD as the optimizer with batch size 128, initial learning rate 0.1, cosine decay scheduler, momentum 0.9, weight decay $5 \times 10^{-4}$, and 200 training epochs. For data augmentation, we employ a random resized crop to $224 \times 224$ resolution, followed by random horizontal flipping on training images. The code of our study is available at: https://github.com/ZhijingWan/RAM-APL.
Evaluation Metric. Prediction accuracy of a well-trained target model on the test set is used as the evaluation metric.
Comparison Methods. Multiple subset selection methods act as baselines for comparison. Specifically, we compare with (1) Random, which uniformly selects samples as the subset; (2) Herding (Welling, 2009); (3) K-Center Greedy (KCG) (Sener & Savarese, 2017); (4) Contextual Diversity (CD) (Agarwal et al., 2020); (5) Margin (Coleman et al., 2019); (6) Forgetting (Toneva et al., 2018); (7) GraNd (Paul et al., 2021); (8) Cal (Margatina et al., 2021); (9) Glister (Killamsetty et al., 2021b); (10) Graph Cut (GC) (Iyer et al., 2021); (11) Moderate DS (MDS) (Xia et al., 2023); (12) MINimum distance (MIN), which selects samples with the minimum distance from the central feature of their class. Details of baselines are in Appendix B.1.
Figure 3. Test accuracy of the compared subset selection methods (Random, Herding, KCG, CD, Margin, Forgetting, GraNd, Cal, Glister, GC, MDS, MIN, and Ours) across sampling rates on (a) Oxford-IIIT Pet, (b) Food-101, and (c) CUB-200-2011.
We implemented each selection method based on the one-shot subset selection pipeline using code in the DeepCore library. The information extractors used in baselines (2)–(12) were obtained using the traditional method, i.e., training a model with the same backbone as the target model on the target training set for 10 epochs, to ensure a fair comparison.
# 6.2. Comparison with Baselines
The results comparing the accuracy of the different subset selection methods on each fine-grained dataset are shown in Figure 3. Given each sampling rate, class-balanced sampling is performed. The experiments of each method on Pet were performed five times with different random seeds, while the experiments on Food-101 and CUB-200-2011 were performed three times with different random seeds due to the high computational effort. We adopt $\alpha = 0.2$ and $\beta = 1$ for our method across all datasets.
As shown in Figure 3, our method outperforms all baselines at each sampling rate. We compute the average performance gain of each method over Random across all sampling rates. On Pet, our method achieves a $3.74\%$ average improvement, substantially outperforming the sub-optimal GC method, which shows a $1.52\%$ average improvement. On Food-101, our gain reaches $4.44\%$ compared to GC’s $3.04\%$. On CUB-200-2011, our method shows a $6.40\%$ average improvement versus GC’s $2.78\%$. Detailed performance and additional cross-architecture generalization results are provided in Appendix B.
# 6.3. Ablation Study
Our method consists of two novel designs: the two feature-measurement metrics for multiple foundation models, i.e., “RAM” and “APL”. We evaluate the effectiveness of each design on the Pet dataset. Firstly, RAM is designed primarily to effectively fuse the features extracted from multiple foundation models in terms of intra-class distribution, so that subset selection performance is not inferior to that of any individual foundation model. As shown in Table 1, when using RAM to fuse the features extracted from CLIP and DINOv2 and selecting the samples with the minimum ranking mean, the performance of “RAM” is better than that of “MIN” with Model-TD or CLIP as the IE at each sampling rate. This validates the effectiveness of the RAM strategy. By combining APL and RAM to assess data importance for subset selection, our method outperforms the “MIN” baseline with DINOv2 as the IE at both $1\%$ and $50\%$ sampling rates. These results highlight the effectiveness of the joint RAM-APL strategy in fine-grained subset selection. Further analysis in Appendix B.5 shows that RAM-APL selects more diverse samples, enhancing overall coverage of the feature space.
Table 1. Ablation study based on Pet. Model-TD refers to the model pre-trained on Pet for 10 epochs.
# 6.4. Analysis and Discussion
Parameter analysis. The hyper-parameters $\alpha$ and $\beta$ are used to set the joint weights $W_1$ and $W_2$ according to Formula 8. We study their impact in Figure 4, testing five different values for $\alpha$ and $\beta$. In particular, we compared against the basic weight-setting strategy for fusion, i.e., the equal-weighted fusion strategy, where $W_1 = 1$ and $W_2 = 1$. As illustrated in Figure 4, the best performance was achieved with $\alpha = 0.2$ and $\beta = 1$, outperforming the equal-weighted fusion strategy. When $\alpha = 0.2$ and $\beta = 1$, the fusion weights $(W_1, W_2)$ corresponding to the $1\%$, $10\%$, $30\%$, $50\%$, and $70\%$ sampling rates were (0.696, 0.304), (0.679, 0.321), (0.640, 0.360), (0.600, 0.400), and (0.560, 0.440), respectively. As the sampling rate increases, $W_2$ increases while $W_1$ remains greater than $W_2$. This observation suggests that focusing more on inter-class feature distributions as the sampling rate increases helps to select better fine-grained subsets, but it is crucial to ensure that the intra-class assessment scores continue to dominate.
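The reported weight pairs can be reproduced directly from the weight schedule; a quick numerical check with $\alpha = 0.2$ and $\beta = 1$:

```python
import math

def w1(p, alpha=0.2, beta=1.0):
    # W1 = alpha + (1 - alpha) / (1 + e^{beta (p - 0.5)})
    return alpha + (1 - alpha) / (1 + math.exp(beta * (p - 0.5)))

for p in (0.01, 0.10, 0.30, 0.50, 0.70):
    print(f"p={p:.2f}: W1={w1(p):.3f}, W2={1 - w1(p):.3f}")
# p=0.01: W1=0.696, W2=0.304
# p=0.10: W1=0.679, W2=0.321
# p=0.30: W1=0.640, W2=0.360
# p=0.50: W1=0.600, W2=0.400
# p=0.70: W1=0.560, W2=0.440
```

These match the fusion weights listed above, confirming that $W_1$ decays smoothly toward $\alpha + (1-\alpha)/2$ as $p$ grows while always staying above $W_2$ in this range.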
Figure 4. Parameter analysis when sampling $70\%$ of the Pet. It shows that our method achieves the best performance when $\alpha = 0.2$ and $\beta = 1$. The grey dotted line indicates the selection method with $\mathrm{Score} = \overline{\mathcal{R}} + (1 - \overline{\varphi})$, i.e., the direct assignment $W_1 = W_2 = 1$ without using Formula 7.
Performance impact of the number of different FMs used for IE. There exists a diverse set of FMs capable of extracting visual features, including DINOv2 (Oquab et al., 2023), CLIP (Radford et al., 2021), SigLIP (Zhai et al., 2023), and EVA-CLIP (Sun et al., 2023). These models differ in their architectures, training strategies, and training datasets, leading to distinct knowledge and representation capabilities (as demonstrated in Appendix B.6). This raises a natural question: Does incorporating more FMs as IEs enhance our method’s performance?
To explore this, we evaluate different combinations of DINOv2-VITs14, CLIP-VITL14, SigLIP-base-patch16-224, and EVA-CLIP-8B on the Pet dataset. As shown in Table 2, using multiple FMs yields better overall performance than any single model. DINOv2+CLIP achieves the best trade-off between efficiency and accuracy, while adding EVA-CLIP yields further overall gains when computational resources permit. These findings support the benefit of multi-model consensus in our framework.
In the main experiments, we adopt DINOv2 and CLIP as our default IE pair, which yields consistent improvements over subset selection baselines across three fine-grained datasets.
Table 2. Comparison of the performance of our method using different numbers of foundation models as information extractors. Here, “D”, “C”, “S” and “E” represent DINOv2, CLIP, SigLIP, EVA-CLIP, respectively.
Table 3. Comparison of feature fusion strategies.
Feature fusion strategy. Features extracted from different foundation models often exhibit misalignment due to architectural and training discrepancies. In RAM-APL, a simple fusion baseline is to concatenate features from different foundation models, referred to as “Concatenate.” As shown in Table 3, our proposed fusion strategy outperforms simple concatenation, particularly under higher sampling ratios, which are critical in real-world deployment scenarios.
# 1. Introduction
A key objective in exploratory analysis is to minimize data-to-analysis time while enabling real-time interactions and efficient analytical computations on very large data files. In many cases, the development of approximate and incremental techniques is essential for addressing the aforementioned challenges, as they enable dynamic adjustments to system performance and result accuracy [1, 2].
The in-situ paradigm is a common practice to minimize data-to-analysis time, referring to on-the-fly data exploration and analysis of large raw data sets such as CSV or JSON files [3, 4, 5, 6]. In-situ techniques aim to bypass the overhead of fully loading and indexing data in a DBMS (minimize the data-to-analysis time), while offering efficient query evaluation. In the context of in-situ data exploration, previous works [3, 4] have focused on building adaptive indexes that leverage the locality-based behavior of data exploration, by initially creating a "crude" version of the index around the initial area of interest, and dynamically enriching and adapting the structure and its contents (e.g., statistics) based on user interactions.
While the in-situ setting ensures a low index initialization cost (low data-to-analysis time), there are cases where poor query evaluation performance is observed. Examples of such cases include: (1) when accessing areas with a high density of objects; (2) during the initial queries, or when the user explores a previously unseen area, where the index has not yet adapted sufficiently; and (3) when exploring very large data files on commodity hardware, such as a low-specification laptop.
Meanwhile, in many interactive analytical tasks, users do not always require exact results; instead, response time is more crucial than result accuracy [2, 7, 8, 9, 10, 11]. For example, several visual analytics tasks, such as class or outlier analysis in scatterplots, or pair-wise comparison of spatial areas on maps, often begin with approximate insights, which experts can use to quickly identify specific areas in the exploration space for further analysis. The development of approximate and incremental techniques can guarantee efficient query evaluation and scalability in the challenging cases described above, where poor performance is observed.
Although approximate query processing (AQP) [12, 13, 14] is a long-studied problem in the areas of databases, data mining, and information visualization, a gap exists when AQP is coupled with the in-situ setting. While recent works in approximate query evaluation [15, 16] combine sampling with pre-aggregation to improve confidence intervals and reduce query costs, these approaches rely on precomputed structures: their fully precomputed aggregates remove uncertainty but lack adaptability to evolving query workloads and user interactions. This rigidity limits their effectiveness in dynamic exploration scenarios where queries are unpredictable. To address this limitation, we integrate adaptive indexing over raw data files with incremental sampling. Instead of precomputing and storing exact aggregates, we leverage the samples collected during query evaluation to maintain approximate aggregates. These aggregates can be reused in future queries, reducing unnecessary I/O while ensuring accuracy constraints are met.
Motivating Example. Consider an example where an astronomer uses their laptop to explore a very large CSV file (e.g., the Sloan Digital Sky Survey – SDSS) containing celestial objects (e.g., stars) via a visual interface. Each object is described by four attributes (Fig. 1a): right ascension (Asc) and declination (Decl), which correspond to terrestrial longitude and latitude, respectively; age (the age of the star in billion years); and diameter (the diameter in km).
The astronomer begins by inspecting a specific region of the sky through a 2D visualization based on Asc and Decl, where objects are displayed alongside aggregate statistics, such as the average age or maximum diameter, computed for the current viewport.
As the astronomer interacts with the visualization (panning across sky regions or zooming in for detailed analysis) the system dynamically updates both the displayed objects and their aggregate statistics. Initially, the focus is on broad regions to identify anomalous patterns, such as clusters of exceptionally young stars or unusually large diameters. At this exploratory phase, exact values are unnecessary, and approximations enable rapid insight generation.
Upon identifying an area of interest, the astronomer may require higher accuracy to refine their analysis. To manage this trade-off, they can adjust the acceptable error bound, allowing higher error for broad overviews and anomaly detection while lowering it for precise measurements. This approach enhances interactivity, ensuring a seamless transition from rapid exploration to detailed analysis.
Basic Challenges. Some of the key challenges that arise in interactive data analysis over large datasets, which do not fit into main memory, include:
− Low Data-to-Analysis Time. Achieving a low data-to-analysis time requires eliminating a preprocessing phase; thus, tasks such as data organization, indexing, or analysis cannot be considered viable options. Therefore, efficiency and scalability must be achieved solely through on-the-fly processing.
− Real-time Interaction. Each user interaction (e.g., zooming, requesting statistics) triggers new queries that must be processed in real time. This is particularly challenging due to the high cost of I/O operations associated with frequent raw-file access. The challenge becomes even greater in areas with a high density of objects, during the first queries, and when exploration is performed on commodity hardware.
− Approximate Analysis. In numerous exploratory and analytical tasks, users often prioritize speed over exact results. Developing efficient approximate mechanisms that operate on-the-fly on large raw files while offering adjustable accuracy and ensuring reliable error guarantees is highly challenging.
Our Approach. To address these challenges, we propose an approximate query processing framework that balances accuracy and efficiency for large raw data files. In this context, the framework incorporates techniques such as on-the-fly index construction, integration of adaptive indexing with incremental user-driven sampling, and incremental aggregate computations.
The proposed framework ensures low data-to-analysis time, high performance even in challenging cases (e.g., high object density), and efficient approximate computations. The effectiveness of the proposed framework against the aforementioned challenges is clearly demonstrated in the experimental analysis section.
Contributions. In this work, we propose an adaptive approximate query processing framework for raw data exploration that combines incremental sampling with adaptive indexing to balance performance and accuracy. Our main contributions are:
− We introduce a main-memory adaptive indexing scheme (VALINOR-A) that is designed for efficient approximate query processing.
− We introduce a user-driven incremental sampling approach that is integrated with index adaptation techniques.
− We implement a mechanism that incrementally computes and reuses partial aggregates derived from sampled data, reducing redundant file accesses and improving efficiency.
− We introduce an error-bounded approximation strategy that maintains confidence guarantees while reducing I/O operations.
− We implement and evaluate our approach in an in-situ query processing environment, demonstrating significant improvements in query latency and I/O efficiency, using real and synthetic datasets.
Figure 1. (a) Sample of the raw data file (objects with Asc, Decl, Age, and Diam values and their file offsets); (b) tile-based partitioning of the exploration space over Asc and Decl; (c) metadata of a tile $t_z$: attribute intervals, number of objects, approximate aggregates (e.g., $\min(\mathrm{Age})$, $\max(\mathrm{Diam})$, sums), sampled objects with file offsets, and child tiles.
Outline. The paper is organized as follows. Section 2 introduces the fundamental concepts of our framework, including the exploration model and the indexing approach we build upon. Section 3 formally describes VALINOR-A for approximate query processing, while Section 4 details the query execution workflow, integrating approximate query answering and index adaptation. Section 5 presents the experimental evaluation, and Section 6 reviews related work. Finally, Section 7 concludes the paper.
# 2. Background
This section outlines the exploration model that this work is based on and provides a brief overview of the tile-based VALINOR index [4], which serves as the foundation for our work.
# 2.1. Exploration Model
Data File & Data Objects. We consider the scenario where a user interactively explores data stored in a large raw data file $\mathcal{F}$ (e.g., CSV) using 2D visualization techniques, such as scatter plots or maps. The raw data file consists of a set of $d$-dimensional objects $\mathcal{O}$, where each object $o_i$ is represented as a list of attribute values: $o_i = (a_{i,1}, a_{i,2}, \ldots, a_{i,d})$. Each attribute $A \in \mathcal{A}$ may be spatial, numeric, or textual. Furthermore, each object $o_i$ is associated with an offset $f_i$, a hex value, indicating its position within the file $\mathcal{F}$.
The dataset includes at least two numeric attributes (e.g., longitude, latitude), which are mapped to the X and Y axes of the visualization and are referred to as axis attributes $A_x$ and $A_y$. The remaining attributes are referred to as non-axis attributes.
Example 1. [Data Objects] Figure 1a presents a sample of the raw data, containing five objects $(o_1 - o_5)$, where each object represents a sky object, such as a star (as described in Sec. 1). For example, considering the object $o_1$, we have that $a_{1,1} = 21$, $a_{1,4} = 7$, etc. Further, for each object $o_i$ there is a file pointer $f_i$ that corresponds to the offset of $o_i$ from the beginning of the file.
Table 1 Common Notation
User Interactions. In our exploration model, users interact with the dataset through a set of user operations (e.g., zoom), which are mapped to data-access operations. These operations define the exploration process, allowing users to dynamically refine their view.
A visualized area $\Phi \ : = \ : ( I _ { x } , I _ { y } )$ represents the current viewport, defined by two numeric intervals $I _ { x } = [ x _ { 1 } , x _ { 2 } ]$ and $I _ { y } ~ = ~ [ y _ { 1 } , y _ { 2 } ]$ , where $I _ { x }$ and $I _ { y }$ specify the range of values for the axis attributes $A _ { x }$ and $A _ { y }$ , respectively. The visible objects in this area are those whose axis attribute values fall within these intervals. The mapping between data values and the visualization plane follows a linear or affine transformation.
Users explore the data using operations such as panning, which moves the visualized area by shifting $I _ { x }$ and $I _ { y } ,$ i.e., changing the viewport, and zooming, which expands or contracts $I _ { x }$ and $I _ { y }$ around a focal point, adjusting the level of detail. As users explore iteratively, these operations are often executed in sequence, forming a user exploration session, where each new interaction refines the previous results. Furthermore, in this work, we also focus on a scenario where the user seeks to analyze the data by computing aggregate values (e.g., mean, sum) over non-axis attributes for the objects in the visualized area.
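The pan and zoom operations described above reduce to simple interval transformations over the viewport. The following is a minimal Python sketch; the `Viewport` class and its method names are illustrative, not part of the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    """Visualized area Phi = (Ix, Iy) over the axis attributes Ax, Ay."""
    x1: float
    x2: float
    y1: float
    y2: float

    def pan(self, dx: float, dy: float) -> "Viewport":
        # Panning shifts both intervals by a constant amount.
        return Viewport(self.x1 + dx, self.x2 + dx, self.y1 + dy, self.y2 + dy)

    def zoom(self, factor: float, fx: float, fy: float) -> "Viewport":
        # Zooming contracts (factor < 1) or expands (factor > 1)
        # both intervals around a focal point (fx, fy).
        return Viewport(fx + (self.x1 - fx) * factor, fx + (self.x2 - fx) * factor,
                        fy + (self.y1 - fy) * factor, fy + (self.y2 - fy) * factor)

    def contains(self, ax: float, ay: float) -> bool:
        # An object is visible when its axis-attribute values fall in both intervals.
        return self.x1 <= ax <= self.x2 and self.y1 <= ay <= self.y2
```

A session is then a sequence of such transformations, each producing the viewport for the next query.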
I/O Operation. An I/O operation is a file access that reads the attribute values (possibly only a subset of them) of an object; i.e., the number of I/O operations corresponds to the number of objects whose attribute values we read.
# 2.2. VALINOR Indexing Scheme
The locality-based nature of 2D exploratory analysis has been a key consideration in our past work [4] on in-situ visual exploration, where we introduced VALINOR. It is a hierarchical, tile-based indexing scheme designed to optimize query performance by progressively adapting to user interactions. The goal of VALINOR is to efficiently support dynamic, interactive exploration and statistics computations over large raw data files while minimizing I/O costs.
VALINOR is an in-memory index that organizes data objects into hierarchies of non-overlapping rectangular tiles. These tiles are defined over the domains of the axis attributes $A _ { x }$ and $A _ { y }$ . Each tile stores the objects that fall within its boundaries based on their axis attribute values and is associated with a set of metadata (e.g., count, sum, average). Metadata are computed from all objects in a tile and facilitate retrieval of exact aggregate values over non-axis attributes. The index is initialized and progressively adjusts itself to the user interactions, by splitting the tiles visited into more fine-grained ones, thus forming a hierarchy of tiles.
During the initialization phase, a lightweight, coarse-grained version of the VALINOR is constructed on-the-fly, by parsing the raw file once. This results in a small overhead in the data-to-analysis time.
Then, during exploration, VALINOR is progressively refined based on user interactions. This is achieved through tile splitting, where tiles that are frequently accessed are dynamically subdivided into finer-grained subtiles enriched with aggregated metadata computed from the file. This progressive index adaptation aims to reduce the number of required I/O operations by aligning the index structure with user exploration patterns.
Due to the exact computation of metadata in each tile, a shortcoming of VALINOR is that I/O costs remain high over regions with a high density of objects, or during the initial stages of a user exploration session, when the index has not yet sufficiently adapted to the user's exploration patterns (i.e., the first queries). These cases become even more challenging when the user requests aggregates over attributes whose metadata are not yet stored in the index, e.g., attributes that were not queried in previous interactions.
# 3. VALINOR-A: Adaptive Index for Approximate Query Processing
In this section, we introduce the VALINOR-A (Adaptive Index for Approximate Query Processing) indexing scheme. The proposed index extends VALINOR with new capabilities for approximate query processing, incremental sampling, and an adaptation mechanism (more details in Sec. 4), which together result in a user-driven sampling strategy. VALINOR-A enables efficient approximate estimation of aggregate statistics while controlling error bounds. Next, we provide some definitions and present the structure and basic concepts of the index.
# 3.1. Exploratory Query and Results
Exploratory Query. An exploratory query $Q$ over a set of objects is defined by: a selection area; a set of aggregate functions; a user-defined error bound; and a confidence level. Formally, an exploratory query $Q$ is defined as the tuple $Q = \langle I_x, I_y, \mathcal{L}, \epsilon_{\mathrm{max}}, \gamma \rangle$, where:
− $I_x = [x_1, x_2]$ and $I_y = [y_1, y_2]$ define the query area (i.e., 2D window), specifying a rectangular range over the axis attributes $A_x$ and $A_y$. The query result includes the objects $\mathcal{O}_Q \subseteq \mathcal{O}$ whose values for $A_x$ and $A_y$ fall within these intervals. For a query $Q$, we refer to its intervals $I_x$ and $I_y$ as $Q.I_x$ and $Q.I_y$, respectively.
− $\mathcal{L}$ is a set $\mathcal{L} = \{ f_i(A_i) \mid 1 \leq i \leq k \}$, where $f_i(A_i)$ denotes an algebraic aggregate function $f_i$ (e.g., sum, mean) [17] applied to a numeric non-axis attribute $A_i$ over the objects $\mathcal{O}_Q$. Note that, in this work, we consider only univariate aggregate functions.
− $\epsilon_{\mathrm{max}} \in [0, 1)$ is a user-defined error bound, specifying the maximum allowable relative error for the approximate result. Note that, if $\epsilon_{\mathrm{max}} = 0$ (i.e., exact query evaluation), the index behaves as the original VALINOR index [4].
− 𝛾 is the confidence level, ensuring that the actual aggregate value of an aggregate function $f ( A _ { i } )$ lies within the computed confidence interval $C I _ { \gamma } ( f ( A _ { i } ) )$ with probability 𝛾.
Query Result. The result $\mathcal{R}$ of an exploratory query $Q$ over $\mathcal{O}$ is defined as:
$\mathcal{R} = \{ \langle \hat{v}_{f_i(A_i)}, CI_{\gamma}(f_i(A_i)) \rangle \mid \forall f_i(A_i) \in \mathcal{L} \}$, where
− $\hat{v}_{f_i(A_i)}$ is the estimated aggregate value for $f_i$ over $A_i$, computed over the objects $\mathcal{O}_Q$.
− $CI_{\gamma}(f_i(A_i))$ denotes the confidence interval $[L_{f_i(A_i)}, U_{f_i(A_i)}]$ for $f_i(A_i)$, such that the exact aggregate value lies within this interval with probability $\gamma$, where $L_{f_i(A_i)}$ and $U_{f_i(A_i)}$ are the lower and upper bounds of the interval, respectively.
For exact queries $\epsilon _ { \mathrm { { m a x } } } = 0 )$ , the confidence interval collapses to a single value, meaning the estimated value is exact. For approximate queries $( \epsilon _ { \operatorname* { m a x } } > 0 )$ , the confidence interval provides a bounded estimate with controlled uncertainty.
# 3.2. Index Structure
Tile. The index’s tiles $\mathcal{T}$ are defined over the domains of the axis attributes $A_x$ and $A_y$, and a tile $t \in \mathcal{T}$ is defined by two ranges $t.I_x$ and $t.I_y$ in the same domains, respectively. Each tile encloses an ordered set of objects $t.\mathcal{O}$, where the values $a_{i,x}$ and $a_{i,y}$ of an object $o_i \in t.\mathcal{O}$ fall within the intervals $t.I_x$ and $t.I_y$ of the tile, respectively. As $t.\mathcal{O}_{[i]}$ we denote the object in the $i$-th position of the ordered set.
Tiles Hierarchy. In each level of the hierarchy, there are no overlaps between the tile intervals of the same level, i.e., the tiles are disjoint. A non-leaf tile $t$ can have an arbitrary number of child tiles $t.\mathcal{C}$, and it encloses the intervals of its children. That is, given a non-leaf tile $t$ defined by the intervals $t.I_x = [x_1, x_2)$ and $t.I_y = [y_1, y_2)$, for each child tile $t'$ of $t$, with $t'.I_x = [x'_1, x'_2)$ and $t'.I_y = [y'_1, y'_2)$, it holds that $x_1 \leq x'_1$, $x_2 \geq x'_2$, $y_1 \leq y'_1$ and $y_2 \geq y'_2$. The leaf tiles are tiles without children and can appear at different levels of the hierarchy.
Example 2. [Tiles & Tiles Hierarchy] Considering the input data (Fig. 1a), Figure 1b presents a version of the index, where Asc and Decl have been selected as the two axis attributes. The index (at the upper level) divides the 2D space into $4 \times 3$ equally sized disjoint tiles, and the tile $t_j$ is further divided into $2 \times 2$ subtiles of arbitrary sizes, forming a hierarchy of tiles.
Fully-contained and Partially-contained Tile. Based on the spatial relationship between the 2D area $Q . I _ { x } \times Q . I _ { y }$ defined by a query $Q$ and an overlapping tile, we classify the tile as either partially-contained or fully-contained within the region defined by the query.
Formally, we say that an interval $I = [a, b]$ is contained in an interval $I' = [c, d]$, denoted as $I \subseteq I'$, when $a \geq c$ and $b \leq d$. Also, the intersection between two intervals $I$ and $I'$, denoted as $I \cap I'$, results in the interval $[max(a, c), min(b, d)]$. An empty interval is denoted as $\varnothing$.
A tile $t$ is fully-contained in a query $Q$ when $t.I_x \subseteq Q.I_x$ and $t.I_y \subseteq Q.I_y$. Thus, in this case, all the tile’s objects $t.\mathcal{O}$ contribute to the query result.
Further, a tile $t$ that overlaps the query region but is not fully-contained is said to be partially-contained in the query $Q$, i.e., $t.I_x \cap Q.I_x \neq \emptyset$ and $t.I_y \cap Q.I_y \neq \emptyset$, while at least one of the containment conditions does not hold.
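The interval containment and intersection tests translate directly into code. The following is a hedged Python sketch (function names are illustrative; intervals are represented as `(lo, hi)` tuples):

```python
def contained(i, j):
    """Interval containment: I = [a, b] is contained in I' = [c, d] iff a >= c and b <= d."""
    (a, b), (c, d) = i, j
    return a >= c and b <= d

def intersect(i, j):
    """Intersection [max(a, c), min(b, d)]; None denotes the empty interval."""
    (a, b), (c, d) = i, j
    lo, hi = max(a, c), min(b, d)
    return (lo, hi) if lo <= hi else None

def classify_tile(tile_ix, tile_iy, q_ix, q_iy):
    """Classify a tile against a query window as 'full', 'partial', or 'disjoint'."""
    if contained(tile_ix, q_ix) and contained(tile_iy, q_iy):
        return "full"
    if intersect(tile_ix, q_ix) and intersect(tile_iy, q_iy):
        return "partial"
    return "disjoint"
```

Note that with closed intervals, tiles that merely touch the query boundary count as overlapping; the paper's half-open tile intervals avoid this ambiguity between sibling tiles.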
Tile Metadata. Each tile $t$ is associated with a set of aggregate metadata $_ { t . { \mathcal { M } } }$ (Fig. 1c), consisting of numeric values computed using algebraic aggregate functions over one attribute of the objects in $\mathbf { \Omega } _ { t . \mathcal { O } }$ . These functions include, but are not limited to, count, sum, mean, and sum of squares of deltas, enabling efficient computation of aggregates across tiles.
Unlike the exact query answering setting, where metadata store exact aggregate values for all objects in a tile, the approximate setting maintains metadata for a subset of objects. That is, rather than computing aggregates over all objects in $t.\mathcal{O}$, metadata are computed and updated based only on a sampled set $t.S \subseteq t.\mathcal{O}$, where $t.S$ consists of objects whose attribute values were read from the file.
To facilitate incremental sampling and metadata updates, each tile $t$ is associated with a bitmap $t.\boldsymbol{b}$ of size $|t.\mathcal{O}|$, where each bit indicates whether the corresponding object has been included in the sampled subset $t.S$. Specifically:
− If $t . \pmb { b } [ i ] = 1$ , the $i$ -th object $t . { \mathcal { O } } _ { [ i ] }$ has been sampled and used in the computation of approximate aggregate values.
− Otherwise $(t.\boldsymbol{b}[i] = 0)$, the object $t.\mathcal{O}_{[i]}$ has not been sampled and can be used in a subsequent sampling iteration if needed.
Exact, Approximate & Not Available Metadata. Utilizing the tile bitmap, we can determine whether the metadata stored in the tile are exact, approximate, or not available. Particularly, the status of a tile $t$'s metadata can be: (1) computed over all the objects $t.\mathcal{O}$ included in $t$ (exact metadata); (2) computed over a sampled subset of the objects included in $t$ (approximate metadata); or (3) not available. Hence, if all bitmap elements are equal to 1, all objects have been read from the file and the computed aggregates are exact. If all elements are equal to 0, the metadata is not available. If only some elements are equal to 1, the metadata is computed approximately. For example, in Figure 1c, based on the bitmap contents, the objects $o_1$ and $o_5$ have been read from the file; so, the metadata is computed based on these two objects (approximate metadata).
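The bitmap-based status check can be sketched as follows (an illustrative Python sketch; the function names are assumptions, not the paper's API):

```python
def metadata_status(bitmap):
    """Derive a tile's metadata status from its sampling bitmap t.b:
    all bits set -> exact; no bit set -> not available; otherwise approximate."""
    if not bitmap or not any(bitmap):
        return "not_available"
    if all(bitmap):
        return "exact"
    return "approximate"

def sampled_objects(objects, bitmap):
    # t.S: the objects whose attribute values were read from the file.
    return [o for o, bit in zip(objects, bitmap) if bit]
```

For the Figure 1c scenario, a bitmap with only the first and last bits set marks the metadata as approximate, computed from those two objects.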
Tile Contents Summary. Figure 1c presents the contents of the tile $t_z$. For each tile $t$, the index stores: (1) the intervals $t.I_x$ and $t.I_y$; (2) the object entries $t.\mathcal{O}$ contained in the tile, where each entry contains the values of the axis attributes along with the offset pointing to the position of the object in the file; (3) a set of metadata $t.\mathcal{M}$, which contains approximate aggregate values incrementally updated over the sampled objects, and a bitmap indicating the tile’s objects used in the aggregate computations; and (4) a set of child tile pointers $t_z.\mathcal{C}$.
Metadata, Computations & Sampling. In a nutshell, the use of metadata during query processing is outlined as follows (more details in Sec. 4.1). If a tile’s metadata is exact, its values are used directly to compute the tile’s contribution to the result. Else, if the tile’s metadata is approximate, the approximate aggregates are used in the computations. Here, there are cases where the approximate metadata result in a confidence interval that does not meet the user-defined error threshold. In these cases, additional samples are incrementally drawn from the unsampled objects and used to refine the existing metadata. This process incrementally refines both the aggregate estimates and the confidence interval until the error bound is satisfied. Finally, in case the tile’s metadata is not available, the same incremental sampling procedure as in the previous case is followed.
# 3.3. Index Initialization
The initialization process for VALINOR-A follows the same principles as the original VALINOR index [4]. Instead of a separate loading phase, the index is constructed on-the-fly when the first query $Q _ { 0 }$ is issued, ensuring minimal data-to-analysis time overhead. The raw file is parsed once to create an initial flat tile structure while evaluating the first query. In the simplest version, tile sizes are determined using a binning technique that partitions the data space into equal-sized tiles, forming a lightweight tile-based index.
During parsing, objects are assigned to their respective tiles, and metadata for non-axis attributes is computed and stored. At the end of initialization, each tile contains exact metadata, pertaining to all its objects.
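The single-pass, equal-width binning used for initialization can be sketched as follows (a hedged Python sketch; the attribute values loosely follow the Example 1 objects, and all names are illustrative):

```python
def init_index(objects, x0, x1, y0, y1, nx, ny):
    """Equal-width binning of the axis-attribute space into nx x ny flat tiles,
    built in a single pass over the parsed objects (ax, ay, offset)."""
    wx, wy = (x1 - x0) / nx, (y1 - y0) / ny
    tiles = {}
    for ax, ay, offset in objects:
        # Clamp so objects on the upper domain boundary land in the last tile.
        i = min(int((ax - x0) / wx), nx - 1)
        j = min(int((ay - y0) / wy), ny - 1)
        tiles.setdefault((i, j), []).append((ax, ay, offset))
    return tiles
```

Each tile's exact metadata would be accumulated in the same pass, alongside the object assignment.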
More advanced initialization strategies, such as query-driven initialization [4], refine the tile layout by allocating smaller tiles near the first query. This increases the likelihood of fully contained tiles in early exploration steps, reducing file access and improving query performance. For further details, we refer the reader to [4].
# 4. Approximate Query Processing & Index Adaptation
In this section, we present our methodology for the evaluation of an exploratory query over the VALINOR-A, the user-driven sampling strategy for computing aggregate statistics and the index adaptation process. We first provide an overview of our methods, and proceed with the detailed steps of the query processing.
# 4.1. Query Processing Process
Given an exploratory query $Q = \langle I_x, I_y, \mathcal{L}, \epsilon_{\mathrm{max}}, \gamma \rangle$, our approach evaluates the query by identifying the tiles overlapping the region defined by the query, leveraging precomputed metadata, and employing incremental sampling to estimate aggregates while ensuring error guarantees. The query evaluation consists of the following steps.
Note that the following steps are not performed in a strict order, as some steps occur in parallel during the incremental sampling. As described next, Steps 3–5 are executed iteratively, since they are part of the incremental sampling procedure. The query processing process is also described in Example 3.
# 4.1.1. [Step 1] Find Overlapping Tiles
We first identify the set of overlapping tiles by determining their spatial relationship with the query region $Q . I _ { x } \times Q . I _ { y }$ . These tiles are either fully-contained or partially-contained by the query.
# 4.1.2. [Step 2] Combine Spatial Relations and Metadata Status to Assess Required Sampling
For each tile overlapping the query region, we assess its spatial relation to the query and its stored metadata status, to decide if additional objects must be retrieved (sampled) from the file. We have the following four cases:
− Case 1: Fully-contained tile with exact metadata. For each fully contained tile, if aggregate metadata exist that represent all objects in the tile, our approach utilizes them directly, i.e., eliminating the need to access the file. These complete metadata contribute to the query result without introducing any uncertainty.
− Case 2: Fully-contained tile with approximate metadata. In this case, the metadata of a fully-contained tile have been computed on a sampled subset of the tile’s objects. The computed approximate values are used to estimate the query result without the need to read further samples from the file.
However, as analyzed in the next sections, there are cases where additional sampling is needed, since the estimated uncertainty exceeds $\epsilon _ { \mathrm { m a x } }$ (more details in next section). As a result, file access may be required in order to read the new objects that will be added in the sample. Also in this case, the metadata are incrementally updated based on the new samples.
− Case 3: Partially-contained tile. Regardless of the metadata status (whether exact or approximate), metadata cannot be used for partially contained tiles. Therefore, we perform sampling in this tile.
A tile’s metadata pertains to (a subset of) all objects within the tile. However, in this case, we require metadata specifically for the objects that fall within the query region. Thus, sampling is conducted over these objects using an initial sampling rate. This rate is incrementally adjusted until the user-defined error threshold is met (more details follow). As a result, file access is required only for the sampled objects.
− Case 4: Fully or Partially-contained Tile with no metadata. During index initialization or adaptation, new tiles are created. In the particular case of index adaptation (more details in Sec. 4.2), a tile overlapping the user query can be split into new fully-contained and/or partially-contained tiles with empty metadata. We perform sampling to compute the metadata of these new tiles. Note that index adaptation is not a separate step in query evaluation; rather, it is performed in parallel with the current step.
User-driven Sampling. In our approach, sampling is tightly integrated with index adaptation, forming a user-driven sampling approach. Tile splitting increases index granularity in frequently explored regions, reducing the query area that requires sampling. By increasing the number of fully-contained tiles, our method minimizes file access since such tiles can be answered directly using stored metadata. Furthermore, based on the user-defined error threshold, we retrieve and store metadata only for a subset of objects per tile; yielding partial metadata that is sufficient for answering subsequent queries with similar error bounds without additional I/O.
# 4.1.3. [Step 3] Read Samples from the File
In this step, we perform sampling on the objects within the tiles identified in the previous step to compute their metadata. Sampling is conducted incrementally, with a sampling rate estimated at each iteration. Since multiple sampling iterations may be required, heuristics are applied to adjust the sampling rate dynamically in each round (more details follow).
To reduce random I/Os and improve file access performance, we adopt the following approach: we sort the objects in the tile based on their offsets before accessing the file, ensuring that the I/O operations are performed in sequential order. This reduces the overhead of fragmented disk I/O operations and enhances overall query efficiency.
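The offset-sorting idea can be sketched as follows (an illustrative Python sketch assuming one object per line in the raw file; the function name is not from the paper):

```python
def read_sampled_objects(path, offsets):
    """Read sampled objects by file offset, visiting offsets in ascending
    order so that disk accesses proceed sequentially through the file."""
    rows = []
    with open(path, "rb") as f:
        for offset in sorted(offsets):  # sorting turns random I/O into a forward scan
            f.seek(offset)
            rows.append(f.readline().decode().rstrip("\n"))
    return rows
```

On spinning disks this avoids back-and-forth seeks; on SSDs it still improves read-ahead and page-cache behavior.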
# 4.1.4. [Step 4] Metadata Incremental Updates
Each time new objects are read from the file, their values incrementally update the available metadata, ensuring that future queries benefit from more refined estimates. While the updated metadata can be leveraged in the current query to compute a value estimation and a confidence interval, there are cases where it may not remain valid for future queries.
In fully contained tiles, the updated metadata are computed from uniformly sampled objects in the entire tile. Therefore, it remains valid for any query and stored in the index. However, in partially contained tiles, the sampled objects are drawn only from the region overlapping the query, meaning they cannot be used to update the tile’s metadata. Although these objects belong to the tile, incorporating them would not ensure a uniform sample across the entire tile. Consequently, while the metadata is used for query computation in this case, it is not stored in the index.
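The incremental metadata update (count, sum, and sum of squares of deltas, as listed in Sec. 2.2) can be maintained with Welford's online algorithm. A hedged sketch; the class name is illustrative:

```python
class TileMetadata:
    """Incrementally maintained aggregates over a tile's sampled objects,
    mirroring t.M: count, sum, and sum of squared deltas (Welford)."""
    def __init__(self):
        self.n = 0
        self.total = 0.0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squares of deltas from the running mean

    def update(self, value):
        # One call per newly sampled object; no pass over prior samples needed.
        self.n += 1
        self.total += value
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def sample_variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Storing `m2` rather than a naive sum of squares keeps the variance update numerically stable as samples accumulate.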
# 4.1.5. [Step 5] Compute Aggregate Value Estimation & Confidence Interval
In this step, we combine exact and approximate (sample-based) metadata of tiles to compute a total aggregate estimate and its confidence interval.
For each aggregate function $f ( A ) \in { \mathcal { L } }$ , the confidence interval $C I _ { \gamma } ( f ( A ) )$ is computed by combining exact and approximate metadata from the tiles overlapping the query. We treat each tile as a stratum in a stratified sampling-based approach.
The final estimate follows stratified sampling principles, where the total aggregates consist of:
− Exact metadata from fully-contained tiles with metadata computed over all objects in the tile (Step 2: Case 1).
− Approximate metadata from fully-contained tiles whose metadata are based on a subset of objects in the tile (Step 2: Case 2).
− Approximate metadata from partially contained tiles, where new samples are drawn specifically from the portion of each tile overlapping the query region (Step 2: Case 3).
For the sampled portions, we apply the Central Limit Theorem (CLT) to approximate the distribution of the mean estimator as normal when the sample size is sufficiently large. The CLT states that, as the sample size increases, the distribution of the sample mean approaches a normal distribution centered at the true population mean, regardless of the underlying data distribution:
$$
\hat { v } _ { f ( A ) } \sim \mathcal { N } \Big ( \mu , \frac { \sigma ^ { 2 } } { N } \Big ) ,
$$
where $\mu$ is the true population mean for the queried region, $\sigma ^ { 2 }$ is the population variance, and $N$ is the total number of sampled objects.
Aggregate Value Estimation: Combining Exact & Approximate Metadata. Let $\mathcal{T}_{F_e}$ be the set of fully contained tiles with exact metadata, and $\mathcal{T}_{F_a}$ be the set of fully contained tiles with approximate metadata. Also, let $\mathcal{T}_P$ denote the set of partially contained tiles, for which new samples are drawn from their intersection with the query region.
− Linear, decomposable aggregates (e.g., sum): the overall estimated aggregate value is:
$$
\hat { v } _ { \mathrm { s u m ( A ) } } = \sum _ { \forall t \in \mathcal { T } _ { F _ { e } } } v _ { t } + \sum _ { \forall t \in \mathcal { T } _ { F _ { a } } } \hat { v } _ { t } + \sum _ { \forall t \in \mathcal { T } _ { _ { P } } } \hat { v } _ { t } ,
$$
where $\boldsymbol { v } _ { t }$ is the exact aggregate value for a fully contained tile $t ~ \in ~ \mathcal { T } _ { F _ { e } }$ , and $\hat { v } _ { t }$ is the approximate aggregate for a tile $t \ \in \ { \mathcal { T } } _ { F _ { a } } \cup { \mathcal { T } } _ { P }$ , computed via sample-based estimation. Because sums are additive, we can simply add the tile-level results.
− Mean aggregate: each fully or partially contained tile provides a partial sum (exact or approximate, based on the metadata status) and an object count for the query region. To obtain the overall mean, we add these partial sums from all tiles and then divide by the total object count across them.
− Count aggregate: as we store the $A_x$ and $A_y$ attribute values of each object in our index, we can directly identify all objects contained in the query region. Consequently, count does not require an approximation.
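The combination rules for sum and mean can be sketched as follows (a minimal Python sketch; the tuple shapes and function names are assumptions):

```python
def estimate_sum(exact_sums, approx_tiles):
    """v_sum = exact tile sums plus estimated sums N_t * mu_t from sampled tiles,
    where approx_tiles holds (N_t, sample mean) per tile in T_Fa u T_P."""
    return sum(exact_sums) + sum(n_t * mu_t for n_t, mu_t in approx_tiles)

def estimate_mean(tiles):
    """Overall mean: total (partial) sum over total object count across all
    contributing tiles; each tile supplies (partial_sum, count) for the region."""
    total_sum = sum(s for s, _ in tiles)
    total_count = sum(c for _, c in tiles)
    return total_sum / total_count
```

Because sums are additive, tile-level contributions combine by plain addition; the mean is then a ratio of the combined sum and count.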
Variance Estimation. Since our approach follows a stratified sampling model (treating each tile as a stratum) and we sample without replacement, let:
− $N _ { t } =$ population size of tile $t$ , i.e., number of objects in the tile $t$ that lie in the queried region
− $n _ { t } =$ sample size in tile $t$ (how many objects we actually read)
− $\hat { \sigma } _ { t } ^ { 2 } =$ sample variance of the attribute in tile 𝑡
− $N = \sum_{\forall t \in \mathcal{T}_Q} N_t$ = total population in the query region, where $\mathcal{T}_Q$ are the tiles that overlap the query.
For a sum over the queried region, each tile’s estimated contribution is $\hat { v } _ { t } = N _ { t } \hat { \mu } _ { t }$ , where $\hat { \mu } _ { t }$ is the sample mean of tile $t$ . A standard stratified-sampling formula with the Finite-Population Correction (FPC) is:
$$
\operatorname { V a r } \left( \hat { V } _ { \mathrm { s u m } } \right) = \sum _ { \forall t \in \atop ( \mathcal { T } _ { F _ { a } } \cup \mathcal { T } _ { P } ) } N _ { t } ^ { 2 } \frac { \hat { \sigma } _ { t } ^ { 2 } } { n _ { t } } \left( 1 - \frac { n _ { t } } { N _ { t } } \right)
$$
For a mean over the region, the overall estimator is $\hat { V } _ { \mu } = \sum _ { t } \bigl ( \frac { N _ { t } } { N } \bigr ) \hat { \mu } _ { t }$. Hence,
$$
\operatorname { V a r } \left( \hat { V _ { \mu } } \right) ~ = ~ \sum _ { \forall t \in \atop ( \mathcal { T } _ { F _ { a } } \cup \mathcal { T } _ { P } ) } \left( \frac { N _ { t } } { N } \right) ^ { 2 } \frac { \hat { \sigma } _ { t } ^ { 2 } } { n _ { t } } \left( 1 - \frac { n _ { t } } { N _ { t } } \right)
$$
Note that, we include FPC in the above formulas because we sample without replacement. This choice aligns with our incremental sampling approach: as we read more objects from a tile to refine its metadata, we may eventually exhaust the tile, obtaining exact metadata and thus eliminating uncertainty for future queries on that tile.
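The two variance formulas translate directly into code (an illustrative Python sketch; each stratum is represented as an assumed tuple of population size $N_t$, sample size $n_t$, and sample variance $\hat{\sigma}_t^2$):

```python
def var_sum(strata):
    """Var(V_sum) = sum over sampled tiles of N_t^2 * s2_t / n_t * (1 - n_t/N_t).
    strata: iterable of (N_t, n_t, s2_t); exact tiles contribute zero variance."""
    return sum(N * N * s2 / n * (1.0 - n / N) for N, n, s2 in strata)

def var_mean(strata, N_total):
    """Var(V_mu): same per-stratum term, weighted by (N_t / N)^2."""
    return sum((N / N_total) ** 2 * s2 / n * (1.0 - n / N) for N, n, s2 in strata)
```

Note how the finite-population correction `(1 - n/N)` drives a stratum's variance to zero once the tile is fully sampled, matching the transition to exact metadata.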
Confidence Interval Computation. Using the CLT, we then derive a confidence interval for either sum or mean in the standard way, for confidence level 𝛾:
$$
C I _ { \gamma } ( f ( A ) ) \ = \ \Big [ \hat { v } _ { f ( A ) } \ \pm \ z _ { \gamma } \sqrt { \mathrm { V a r } ( \hat { v } _ { f ( A ) } ) } \Big ] ,
$$
where $z _ { \gamma }$ is the appropriate normal quantile.
Note that, because exact contributions from $\boldsymbol { \tau } _ { F _ { e } }$ have zero variance, they do not increase uncertainty in the final estimate.
In the case of variance or standard deviation aggregate function, additional adjustments via the chi-square distribution are required (omitted for brevity).
Regarding min and max, confidence intervals are generally unreliable because sampling may miss extreme values; so, in those cases, the index falls back to exact computation.
# 4.1.6. Incremental Sampling Procedure
In our incremental sampling process, each iteration involves Steps 3–5. Particularly, after deriving the confidence interval (Step 5), our approach checks whether the maximum relative error $\epsilon_{\mathrm{est}}$ satisfies the user-defined bound $\epsilon_{\mathrm{max}}$.
If the user-defined bound is not satisfied, the next sampling iteration is performed: the execution returns to Step 3, estimating a new sampling rate as described next. The process continues incrementally, refining the query estimates until the computed error falls within the acceptable range defined by $\epsilon_{\mathrm{max}}$. Otherwise, the query result is formed and returned to the user.
Relative Error. The maximum relative error is computed by taking half the confidence interval’s width (the margin of error) over the estimated value (the midpoint of the interval):
$$
\epsilon_{\mathrm{est}} \;=\; \frac{\tfrac{1}{2}\left|CI_{\gamma}(f(A))\right|}{\hat{v}_{f(A)}}
$$
Here, $\hat{v}_{f(A)}$ is the estimated aggregate value, and $\left|CI_{\gamma}(f(A))\right|$ denotes the interval's total width. If $\epsilon_{\mathrm{est}} > \epsilon_{\mathrm{max}}$, additional sampling is triggered to reduce uncertainty, typically by reading more objects from tiles whose metadata remain incomplete. The process continues until $\epsilon_{\mathrm{est}} \leq \epsilon_{\mathrm{max}}$.
Sampling Rate Adjustment. To reduce error efficiently, our approach adaptively adjusts the sampling rate rather than using a fixed-step increase. We utilize a heuristic approach where, in each iteration, we compute a multiplicative factor based on the ratio of the current error to the desired threshold, $\epsilon_{\mathrm{current}} / \epsilon_{\mathrm{max}}$, reflecting the $1/\sqrt{n}$ relationship between sample size and standard error. If this factor exceeds a cap (e.g., 2.0), we limit the increase to avoid excessively large jumps in the sampling rate. Conversely, if the computed increment is too small, we enforce a minimum increase to ensure noticeable progress.
This adaptive adjustment balances error reduction with I/O costs. In each sampling iteration, the tiles are sampled based on the current sampling rate, and the objects selected are potentially scattered throughout the file, incurring random I/O overhead. By making larger sampling rate jumps when error is high (and refining the rate more gradually near the threshold), we reduce the number of rounds requiring additional data retrieval, thus minimizing repeated random accesses. This strategy ensures that each batch of additional samples contributes significantly to narrowing the confidence interval while avoiding unnecessary data retrieval.
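The incremental loop, the relative-error check, and the rate heuristic described above can be put together as follows; `draw_more_samples`, `estimate_ci`, and the cap/floor constants are illustrative assumptions rather than the system's actual API:

```python
def relative_error(ci_low, ci_high):
    """Half the CI width (margin of error) over the interval midpoint."""
    midpoint = (ci_low + ci_high) / 2
    return ((ci_high - ci_low) / 2) / abs(midpoint)

def next_sampling_rate(rate, eps_est, eps_max, cap=2.0, floor=1.1):
    """Grow the rate by the error ratio, capped to avoid large jumps
    and floored to guarantee noticeable progress."""
    factor = min(max(eps_est / eps_max, floor), cap)
    return min(rate * factor, 1.0)

def approximate_aggregate(draw_more_samples, estimate_ci, eps_max, rate=0.05):
    """Incremental sampling (Steps 3-5): sample, estimate, check the bound."""
    while True:
        draw_more_samples(rate)        # read more objects, update metadata
        lo, hi = estimate_ci()         # CLT confidence interval
        eps_est = relative_error(lo, hi)
        if eps_est <= eps_max:         # bound satisfied: return the result
            return (lo + hi) / 2, (lo, hi)
        rate = next_sampling_rate(rate, eps_est, eps_max)
```

Making large jumps while the error is high, and small ones near the threshold, is what keeps the number of random-I/O rounds low.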
Figure 2: Approximate Query Processing & Index Adaptation
# 4.2. Index Adaptation
We employ the index adaptation technique from [4], which adjusts the index based on user queries and aims at minimizing I/O operations and computational costs, progressively increasing interactivity in frequently explored areas. Index adaptation is performed after a user operation, affects the tiles of the index contained in the user query, and results in: modifying the index structure (i.e., splitting tiles into multiple ones and adjusting tile sizes) and enriching tiles with missing metadata.
Adaptation is performed in parallel with Step 2 (Combine Spatial Relations and Metadata Status to Initialize Sampling) (Sec. 4.1.2), where sampling is guided by tile splitting, resulting in a user-driven sampling process.
The decision to split a tile is guided by I/O cost considerations, ensuring that adaptation remains beneficial for future queries. For each candidate tile, we estimate the expected splitting gain in terms of the I/O cost of evaluating a (future) query. If the expected gain exceeds a fixed threshold, the split is performed. A further analysis of the splitting model is presented in [4]. Given the locality-based characteristics of interactive exploration, tile splitting increases the likelihood that subsequent queries will fully contain a tile within a frequently explored region.
In our experiments, we follow a quadtree-like splitting strategy that recursively divides the tile into four equal subregions, for which new statistics are calculated.
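The quadtree-like split of a tile into four equal subregions can be sketched as follows (the coordinate conventions are illustrative):

```python
def split_tile(x0, y0, x1, y1):
    """Split a tile into four equal, disjoint subtiles (quadtree-like).
    Statistics for each new subtile are recalculated afterwards."""
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [
        (x0, y0, mx, my),  # lower-left
        (mx, y0, x1, my),  # lower-right
        (x0, my, mx, y1),  # upper-left
        (mx, my, x1, y1),  # upper-right
    ]
```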
Example 3. [Approximate Query Processing & Index Adaptation] The query processing and index adaptation process is presented in Figure 2. As input we have an initialized index in which each tile has exact metadata, an exploratory query, and a raw file. Note that the step numbering here differs from the step numbering in Section 4.1.
① To evaluate query $Q$, we first find the leaf tiles that spatially overlap (partially or fully contained) with the query region: $t_1, t_2, \ldots, t_9$. Since no objects are selected by the query from the tiles $t_1$, $t_2$, $t_3$, $t_4$, $t_8$ and $t_9$, these tiles are not included in the query evaluation process. ② Next, we check whether the overlapping tiles need to be split; if so, the tiles are split into smaller subtiles. In each splitting step, the process considers criteria related to I/O cost in order to decide whether to perform a split (more details in Sec. 4.2). For simplicity, in our example, we assume that $t_7$ is split into four equal disjoint subtiles: $t_{7_a}$, $t_{7_b}$, $t_{7_c}$, and $t_{7_d}$. Since no objects are selected from the tiles $t_{7_a}$ and $t_{7_c}$, these tiles are not further considered.
③ Next, we examine each of the overlapping tiles (including those resulting from splitting) to determine which case each tile belongs to (Cases 1–5, Sec. 4.1.2). We have the following tile categorization: (1) Case 1 (fully-contained tile with exact metadata): $t_5$; (2) Case 3 (partially-contained tile): $t_6$; and (3) Case 4 (fully- or partially-contained tile resulting from adaptation): $t_{7_b}$ and $t_{7_d}$.
Based on this categorization, for each tile we determine the set of objects over which sampling has to be performed (i.e., whose attribute values must be read from the file): (1) $t_5$: we do not need to read objects from the file (no sampling); (2) $t_6$: we perform sampling over the objects included in the area selected by the query, i.e., the objects in the blue area; (3) $t_{7_b}$: we perform sampling considering all the tile's objects; and (4) $t_{7_d}$: we perform sampling over the objects included in the area selected by the query, i.e., the objects in the yellow area.
Figure 3: Query execution time per query.
④ In tiles $t_6$, $t_{7_b}$ and $t_{7_d}$, we use a sampling method to select the objects to read from the file, considering the object sets defined in the previous step. First, a sampling rate is determined; as described, in our incremental sampling process a different sampling rate is computed and adjusted in each iteration. Based on this rate, uniform sampling is used to select which objects are read from the file ⑤.
⑥ Using the data retrieved from the file, the metadata of each tile is computed from scratch or updated. This update incorporates the newly sampled objects, thereby refining the tile's stored metadata.
⑦ Based on the updated metadata, the query's confidence interval is computed by combining each tile's contribution, using exact metadata when available and approximate metadata otherwise.
The computed confidence interval is then compared against the user-defined error bound. ⑧ If the computed relative error is within the acceptable bound, the system returns the query result along with its confidence interval. Otherwise (i.e., the interval remains too wide), a new sampling iteration is performed and Steps 4–7 are repeated until the error bound is satisfied.
# 5. Experimental Analysis
The objective of our evaluation is to assess the performance and effectiveness of our approach in terms of response time, I/O operations, relative error, and confidence interval coverage. We evaluate our index and competitors over one real and two synthetic datasets.
# 5.1. Experimental Setup
Datasets & Queries. In our experiments we used two synthetic datasets (SYNTH10/50) and one real dataset, the NYC Yellow Taxi Trip Records (TAXI).
SYNTH10/50 Synthetic Datasets. For the synthetic datasets (SYNTH10/50), we generated two CSV files of 100M data objects with 10 and 50 attributes (11 and 51 GB, respectively). The datasets contain numeric attributes in the range [0, 1000], following a uniform distribution. Regarding queries, as in [3, 4], the query region is defined over two attributes and specifies a window containing approximately 100K objects.
TAXI Dataset. The TAXI dataset is a CSV file containing information about taxi rides in NYC. Each record corresponds to a trip, described by 18 attributes. We selected a subset of this dataset covering 2014 trips, with 165M objects and a 26 GB CSV file size. The longitude and latitude of the pick-up location are the axis attributes of the exploration. The query region is defined over an area of $2\,\mathrm{km} \times 2\,\mathrm{km}$, with the first query $Q_0$ posed in central Manhattan. For aggregate computations, we consider the total trip amount as the attribute of interest.
Figure 4: Number of I/O operations per query.
Exploration Scenario. In our evaluation, we consider a typical exploration scenario, such as the one used in [3, 4]. This scenario models a common user behavior in 2D exploration, where the user explores nearby regions using pan and zoom operations [18, 19, 20, 21, 22, 23], such as the "region-of-interest" or "follow-a-path" scenarios commonly used in map-based exploration. We generated sequences of 100 overlapping queries, with each region shifted by 10% (i.e., a pan operation) relative to the previous one in a random direction. Although the shift is random, it is biased toward a high-level trajectory, simulating a "follow-a-path" exploration scenario. At each step, the user requests aggregate values, such as sum or average, over one of the non-axis attributes of the dataset within the currently visualized area.
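A sketch of generating such a query sequence follows; the parameter names and the bias scheme are illustrative assumptions, not the exact generator of [3, 4]:

```python
import math
import random

def pan_sequence(x, y, w, h, n=100, shift=0.10, heading=0.0, bias=0.7, seed=42):
    """Generate n overlapping query regions: each region is the previous
    one shifted by `shift` of its size in a random direction biased toward
    a fixed high-level heading ('follow-a-path')."""
    rng = random.Random(seed)
    regions = [(x, y, w, h)]
    for _ in range(n - 1):
        # Blend a uniformly random direction with the trajectory heading.
        angle = bias * heading + (1 - bias) * rng.uniform(0.0, 2.0 * math.pi)
        x += shift * w * math.cos(angle)
        y += shift * h * math.sin(angle)
        regions.append((x, y, w, h))
    return regions
```

Because each shift is only 10% of the window size, consecutive regions always overlap, which is what makes metadata reuse across queries effective.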
Competitors. We compare our method VALINOR-A (VAL-A) with: (1) VALINOR (VAL) [4], which provides exact query answers; and (2) VALINOR-S (VAL-S), a baseline that leverages VALINOR indexing to rapidly identify objects within the query region and retrieve them efficiently using stored file offsets. Unlike our approach, VAL-S does not maintain aggregate metadata; instead, it performs incremental sampling over all objects in the query region for aggregate computations until the error constraint is met. In effect, VAL-S amounts to a plain, incremental sampling approach without the benefits of reusing precomputed aggregate metadata.
Tile Structure Parameters. Regarding the tile structure, for all methods we adopt the setting used in [3, 4]: the structure is initialized with $100 \times 100$ equal-width tiles, while an extra 20% of the initial number of tiles is distributed around the first query using the query-driven initialization method of [4]. Also, the numeric threshold for the adaptation of VAL-A was set to 200 objects. More details about these parameters can be found in [4].
Metrics. In our experiments, we measure: (1) the Execution Time per query, and the Overall Execution Time of an exploration scenario, which includes the initialization time and the query evaluation time for all queries in the scenario; (2) the I/O Operations performed during query evaluation and during the whole workflow; (3) the Relative Error, defined as the difference between the estimated and exact aggregate values, normalized by the exact value:
$$
\epsilon_{\mathrm{actual}} \;=\; \frac{|\hat{v}_{f(A)} - v_{f(A)}|}{v_{f(A)}}
$$
where $\hat{v}_{f(A)}$ is the estimated aggregate and $v_{f(A)}$ is the exact aggregate computed from all objects in the query region; and (4) the Confidence Interval Coverage, which measures the proportion of queries for which the exact aggregate value falls within the computed confidence interval.
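The two accuracy metrics can be stated directly in code (small illustrative helpers, not part of the system):

```python
def actual_relative_error(estimate, exact):
    """Deviation of the estimated aggregate from the exact value,
    normalized by the exact value."""
    return abs(estimate - exact) / abs(exact)

def ci_coverage(results):
    """Proportion of queries whose exact aggregate lies inside the
    computed confidence interval; results = [(exact, lo, hi), ...]."""
    hits = sum(1 for exact, lo, hi in results if lo <= exact <= hi)
    return hits / len(results)
```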
Implementation. VAL-A is implemented on the JVM as part of the RawVis open-source data visualization system [24]. The experiments were conducted on a 3.60 GHz Intel Core i7-3820 with 64 GB of RAM. We applied memory constraints (12 GB maximum Java heap size) in order to measure the performance of our approach and our competitors.
# 5.2. Results
# 5.2.1. Query Execution Time
In this experiment, we compare the query execution time of VAL-A against competitors across the three datasets.
Figure 5: Overall query execution time for different user-defined error bounds: (a) SYNTH10, (b) SYNTH50, (c) TAXI.
Figure 3 presents the execution time for queries $Q_1$–$Q_{99}$, excluding the first query, which involves the index initialization and construction common to all approaches. VAL-A and VAL-S are configured with a very tight 1% user-defined error threshold ($\epsilon_{\mathrm{max}} = 0.01$).
Across all datasets, VAL-A consistently achieves lower query execution times than both VAL and VAL-S. VAL, which computes exact aggregates, incurs significantly higher and more variable execution times due to the need to access all required objects for which it cannot utilize stored metadata from fully-contained tiles. VAL-S, while employing the same incremental sampling approach as VAL-A, does not store aggregate metadata. As a result, for every query, VAL-S must perform incremental sampling across its overlapping tiles, continuously retrieving object values from the raw data file until the confidence interval meets the user-defined threshold. This leads to higher query times and, for many queries, even worse performance than VAL, which directly computes exact results. In effect, VAL-S only utilizes the tile-based index to locate objects within the query region, sample from them, and access the attribute values required for aggregation.
In contrast, VAL-A efficiently balances incremental sampling with metadata reuse, reducing execution time across the query sequence even under the strict $1 \%$ error threshold. Unlike VAL-S, VAL-A stores and updates partial metadata, leveraging previously sampled values to minimize redundant I/O operations in future queries. While both VAL and VAL-A benefit from index adaptation, which dynamically refines the tile structure in frequently explored areas, VAL still requires reading all objects within a query region. In contrast, VAL-A can selectively update its stored metadata when the computed confidence interval remains within user-defined constraints, further reducing the need for full-file access.
Notably, for the TAXI dataset, during the last queries of the exploration scenario the user navigates to areas with significantly fewer taxi trips. This explains the sharp drop in execution time for all three methods, as fewer objects need to be accessed and processed.
# 5.2.2. I/O Operations
The query execution time examined above is primarily determined by the number of I/O operations required to access objects from the raw data file. This is evident in Figure 4, where the I/O plots closely follow the corresponding execution time trends of Figure 3. Here too, VAL-A and VAL-S are configured with a very tight 1% user-defined error threshold ($\epsilon_{\mathrm{max}} = 0.01$).
For the synthetic datasets (SYNTH10/50), the I/O operation trends are nearly identical (Fig. 4a & 4b). This similarity is expected since both datasets contain the same number of objects with uniformly distributed attribute values in the same range. The only difference between them is the number of attributes (i.e., 10 and 50, respectively), which does not affect the number of accessed objects for answering aggregate queries.
# 5.2.3. Effect of Error Bound on Performance
In this experiment, we evaluate how the user-defined error bound $\epsilon_{\mathrm{max}}$ impacts the total query evaluation time for VAL-A and VAL-S. Figure 5 presents the overall query execution time for the full sequence of queries (excluding $Q_0$, which initializes the index) under different error bounds: 10%, 5%, 2%, and 1%. As expected, since VAL does not utilize approximate query evaluation, its performance remains constant regardless of the error bound. VAL-A and VAL-S benefit from higher error bounds by reducing the number of samples required to meet the confidence interval constraints, resulting in lower query evaluation times.
Overall, across all datasets and all error bounds, VAL-A consistently outperforms both methods. On average, VAL-A achieves a $3 . 9 \times$ speedup over VAL-S and $7 . 4 \times$ over VAL across all cases.
SYNTH10. As depicted in Figure 5a, VAL-A completes the sequence of queries in under 5 sec even at the smallest error bound (1%), while VAL-S takes around 22 sec and VAL about 32 sec.
The total query time for VAL-S increases significantly as the error bound tightens from $1 0 \%$ to $1 \%$ , reflecting the additional sampling effort required to meet the stricter confidence interval constraints. This leads to increased file accesses and higher I/O costs. VAL-A follows a similar trend but remains consistently faster due to its adaptive metadata reuse, which reduces the need for redundant sampling.
SYNTH50. In Figure 5b, the performance gap increases further compared to SYNTH10: VAL-A completes the workload in 7 sec at $1 \%$ error, compared to 28 sec for VAL-S and 64 sec for VAL.
A similar pattern as in SYNTH10 is observed, but with an overall increase in query execution time due to the larger file size. The relative performance of VAL-A and VAL-S remains consistent, with VAL-A achieving lower total execution times.
TAXI. In TAXI (Fig. 5c), where the exploration area contains a high concentration of taxi trips (central Manhattan), the advantage of VAL-A becomes particularly clear. Even for the tightest error bound (1%), VAL-A finishes in 150 sec, while VAL-S takes over 500 sec and VAL exceeds 1000 sec. The performance gap is particularly pronounced due to the dataset's higher I/O overhead, making metadata reuse even more crucial.
VAL-A exhibits minimal variation in execution time across different error bounds. This is primarily due to the low variance of the aggregate attribute (trip fare amount), which results in stable estimates even with small sample sizes. As a result, the confidence interval converges quickly, and additional sampling has a diminishing impact on accuracy. In our incremental sampling approach, we begin with an initial sampling rate and progressively refine it based on computed confidence intervals until the required error bound is met. Since sampling stops once the confidence interval satisfies the threshold, the total number of sampled objects remains nearly the same across different error bounds, leading to consistent query execution times. This behavior is also observed in VAL-S, which follows the same sampling strategy. However, VAL-A maintains significantly lower overall execution times due to its metadata reuse, which reduces redundant I/O operations.
Figure 6: Relative Error for different User-defined error bounds [SYNTH10].
# 5.2.4. Approximation Accuracy
To examine how the user-defined error threshold impacts approximation accuracy, we vary the error bound $\epsilon_{\mathrm{max}}$ and measure the resulting relative error across the query sequence. Figure 6 presents the relative error for different thresholds (1%, 2%, 5%, and 10%) for VAL-A over the SYNTH10 dataset. Similar trends were observed for SYNTH50 and are omitted for brevity. In the TAXI dataset, due to the distribution characteristics of the aggregated attribute (taxi fare amount), different error bounds result in less pronounced variations in relative error.
As shown in Figure 6, the actual relative error (computed as the deviation of the estimated aggregate from the exact result, normalized by the exact value) remains below the corresponding user-defined error bound $\epsilon_{\mathrm{max}}$. This confirms that VAL-A effectively maintains computed confidence intervals within user-specified constraints.
As expected, lower error bounds (e.g., 1%) result in consistently lower relative errors, as more samples are taken to refine estimates. Conversely, higher error bounds (5% and 10%) lead to larger relative errors, as incremental sampling adapts dynamically based on the user-defined threshold. Interestingly, the relative error curves for 5% and 10% follow a similar trend. This occurs because, beyond a certain point, additional sampling has a diminishing impact on accuracy, as the confidence interval stabilizes at a similar rate for both thresholds. Consequently, both error bounds require a comparable number of samples to meet their constraints, leading to nearly identical relative error behavior. This effect is influenced by dataset-specific characteristics, such as the attribute distribution.
At the start of the exploration, VAL-A benefits from exact metadata stored during index initialization, as the file is parsed while constructing the index. This results in very low relative error for initial queries, as fully contained tiles in the queries contribute no uncertainty to the query result. However, as user exploration continues, the index dynamically adapts by splitting tiles in frequently queried areas to improve granularity and maximize the number of fully contained tiles in future queries. When a tile is split, its metadata is incrementally updated during query evaluation using objects read from the file.
Since VAL-A utilizes incremental sampling, it does not require full metadata storage for all tiles but instead adjusts I/O adaptively based on the user-defined error threshold.
Figure 7: Exact aggregate value & confidence interval [SYNTH10] ($\epsilon_{\mathrm{max}} = 0.01$).
As a result, even though relative error increases over time due to tile splits and new exploration areas, it consistently remains well below the user-defined threshold $\epsilon _ { \mathrm { m a x } }$ . This demonstrates that VAL-A successfully balances efficiency and accuracy, dynamically managing metadata storage and I/O operations while ensuring query results adhere to user-specified confidence bounds.
# 5.2.5. Confidence Interval Behavior
To further assess approximation accuracy, we evaluate the behavior of the computed confidence intervals during the exploration scenario. Figure 7 shows the confidence intervals for the estimated aggregate values (here, the sum) alongside the exact aggregate values for the SYNTH10 dataset, using a user-defined error bound of $1 \%$ . Similar trends were observed for other datasets and error bounds.
As shown, the exact aggregate consistently falls within the computed confidence intervals for most queries, confirming that VAL-A maintains statistically valid approximations. In this experiment, we set the confidence level to $\gamma = 0.95$, meaning that if the same query were repeated under identical conditions, the computed confidence interval would contain the exact aggregate in 95 out of 100 cases, owing to the randomness inherent in sampling. In practice, even though different queries are posed during exploration, the average coverage rate in our experiments is approximately 95%.
In the results shown in Figure 7, only 2 out of 100 queries fell outside the computed confidence interval, confirming the reliability of our estimates.
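The ≈95% coverage behavior of a $\gamma = 0.95$ CLT interval can be checked with a small Monte Carlo sketch; this is purely illustrative, not the paper's experiment, and the uniform [0, 1000] population merely mirrors the synthetic datasets:

```python
import random
from statistics import NormalDist

def coverage_simulation(trials=2000, n=200, gamma=0.95, seed=7):
    """Repeatedly sample from a known uniform population, build a CLT
    interval for the mean, and count how often the true mean is covered."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(0.5 + gamma / 2)
    true_mean = 500.0  # mean of the uniform distribution on [0, 1000]
    hits = 0
    for _ in range(trials):
        sample = [rng.uniform(0.0, 1000.0) for _ in range(n)]
        m = sum(sample) / n
        s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
        margin = z * (s2 / n) ** 0.5
        hits += (m - margin <= true_mean <= m + margin)
    return hits / trials
```

The returned fraction should hover near 0.95, matching the coverage rate observed in the experiments.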
# 6. Related Work
This work is related to various fields, namely in-situ processing, approximate processing, adaptive indexing, and visual-oriented data management. We briefly describe the most prominent research works in these areas and position our contribution.
In-situ Raw Data Processing. Data loading and indexing account for a substantial portion of the time required to complete an analysis in both traditional RDBMSs and Big Data environments [25]. To address this bottleneck, in-situ query processing avoids loading data into a DBMS by enabling direct operations on raw data files. One of the first systems designed for querying raw data without relying on a DBMS was NoDB [5], whose PostgresRAW implementation was an early adopter of in-situ query processing. PostgresRAW constructs auxiliary structures known as "positional maps" on-the-fly, which track the positions of data attributes in files and temporarily cache previously accessed data. However, unlike our approach, PostgresRAW supports only exact query answering. Moreover, the positional maps are only effective at reducing parsing and tokenization overhead during query execution; they cannot reduce the number of objects examined in two-dimensional range queries. Additionally, our method improves the performance of aggregate queries by reducing raw file accesses through the reuse of previously calculated statistics at the tile level.
DiNoDB [26] is a distributed version of PostgresRAW. In the same direction, PGR [27] extends the positional maps in order to both index and query files in formats other than CSV. In the same context, Proteus [28] supports various data models and formats. Slalom [29, 6] exploits the positional maps and integrates partitioning techniques that take into account user access patterns.
Raw data access methods have been also employed for the analysis of scientific data, usually stored in array-based files. In this context, Data Vaults [30] and SDS/Q [31] rely on DBMS technologies to perform analysis over scientific array-based file formats. Further, SCANRAW [32] considers parallel techniques to speed up CPU intensive processing tasks associated with raw data accesses.
RawVis [4, 33] exploits VALINOR, a tile-based index, in the context of in-situ visual exploration, supporting 2D visual operations over numeric attributes. [3, 24, 34] extend RawVis to support categorical-based operations, also offering a memory management mechanism. In contrast to this work, the previous versions of the RawVis framework do not support approximate query answering.
Note that several well-known DBMSs support SQL querying over CSV files. In particular, MySQL provides the CSV Storage Engine, Oracle offers External Tables, and Postgres has Foreign Data Wrappers. These tools enable interoperability with raw formats, but they do not focus on user interaction: they parse the entire file for each posed query, resulting in significantly lower query performance [5].
Approximate Processing. Approximate Query Processing (AQP) [12, 13, 14, 35] is a long-studied area offering means for rapid, "good-enough" answers in interactive data exploration. A common thread in most of these approaches is the use of stratified sampling to deliver statistically bounded estimates [36, 37, 38, 39, 40, 41]. Similar to these methods, our work leverages stratified sampling; however, we adapt the sampling strategy to the in-situ exploratory setting and to on-the-fly index adaptation. Rather than relying on a pre-defined sampling ratio, our approach dynamically adjusts the sampling process based on evolving user interactions, index adaptations, and the characteristics of the data.
Subsequent research has explored the use of auxiliary indices and pre-aggregation techniques to further accelerate query evaluation. Systems proposed in [42, 43] build auxiliary data structures that store precomputed summaries, which are then combined with sampled estimates, thereby avoiding full data scans during query execution.
Methods based on approximate pre-aggregation [44, 45] compute partial aggregates either offline or on-the-fly, and then refine these estimates as additional data is processed. While effective in many scenarios, these techniques typically require prior data loading and pre-processing, which can be prohibitive in exploratory contexts. In contrast to these techniques, our method minimizes both pre-processing time and storage by initially creating a “crude” version of the index, tailored for overlapping, non-random exploratory queries. The index is then enriched as the user explores the data, thereby reducing the overhead between initialization and query execution. In essence, our approach capitalizes on the natural locality of exploratory queries to limit upfront costs while still enabling efficient, in-situ approximate query processing.
In visual data analysis, approximate processing techniques (a.k.a. data reduction), such as sampling and binning, have been widely used to improve interactivity and address information overload [15, 46, 8, 47, 7, 9, 11, 48, 49]. Most of them follow progressive approaches [2, 50, 51, 52, 53, 54, 55, 10, 37, 56, 57, 58]. Instead of performing all the computations in one step (which can take a long time to complete), they split them into a series of short chunks of approximate computations that improve over time. Some of these works also consider visualization parameters to ensure perceptually similar visualizations and offer visualization-aware error guarantees.
More recently, techniques such as AQP++ [15] and PASS [16] have combined sampling-based AQP with approximate pre-aggregation to produce tighter confidence intervals for aggregate queries and to reduce query costs. While our method likewise leverages sampling and aggregates, it differs in two crucial ways. First, instead of relying primarily on complete precomputed aggregates, we also maintain partial aggregates from sampling, which incrementally refine query results whenever accuracy must be improved. Second, whereas AQP++ and PASS rely on significant offline precomputation based on predicted or fixed workloads, our approach adaptively refines sampling and index partitioning during user exploration. Only the areas that are actually viewed and queried are progressively refined, eliminating both up-front overhead and restrictive assumptions about future queries. As users pan and zoom in the 2D plane, we read objects on demand from raw data files and update partial aggregates to narrow the gap between approximate and exact results. This incremental strategy is especially effective for in-situ visual analysis, where minimizing I/O costs and handling unanticipated query regions are paramount.
This paper builds on our preliminary work [59], where we presented early results on adaptive indexing for approximate query answering. In [59], we relied on exact min-max tile metadata to deterministically bound aggregates and reduce I/O. However, such bounds are often overly pessimistic, leading to wide confidence intervals and making that approach practical only when the aggregated attribute has very low variance, i.e., when its minimum and maximum values are close. In this work, we adopt a different approach: we develop an incremental sampling strategy that dynamically refines query results until they meet user-defined accuracy constraints. Additionally, we leverage sampled objects to compute and store approximate metadata, further reducing redundant I/O.
Indexes in Human-Data Interaction Scenarios. In the context of human-data interaction, several indexes have been introduced. VisTrees [60] and HETree [47] are tree-based main-memory indexes that address visual exploration use cases, i.e., they offer exploration-oriented features such as incremental index construction and adaptation.
Nanocubes [61], Hashedcubes [62], SmartCube [63], Gaussian Cubes [64], and TopKubes [65] are main-memory data structures defined over spatial, categorical and temporal data. The aforementioned works are based on main-memory variations of a data cube in order to reduce the time needed to generate visualizations.
Further, graphVizdb [66, 67] is a graph-based visualization tool which employs a 2D spatial index (e.g., an R-tree) and maps user interactions to 2D window queries. To support scalability, a partition-based graph drawing approach is proposed. Spatial 2D indexing is also adopted in Kyrix [68], a generic platform that supports efficient zoom and pan operations over arbitrary data types. These works do not consider approximate query processing and require a preprocessing phase to create an index, so they cannot be used in in-situ scenarios. Another shortcoming is that they reside in main memory, which in many cases requires prohibitive amounts of memory.
In a different context, tile-based structures are used in visual exploration scenarios. Semantic Windows [19] considers the problem of finding rectangular regions (i.e., tiles) with specific aggregate properties in exploration scenarios. ForeCache [21] considers a client-server architecture in which the user visually explores data from a DBMS. The approach proposes a middle layer, which prefetches tiles of data based on user interaction. Our work considers different problems compared to the aforementioned approaches.
Finally, survey papers on the broader areas of human-data interaction and visual analytics, including the involvement of AI, can be found in [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 1, 84].
Adaptive Indexing. Similarly to our work, the basic idea of approaches such as database cracking and adaptive indexing is to incrementally adapt the indexes and/or refine the physical order of the data during query processing, following the characteristics of the workload [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95].
However, these works neither support approximate query processing nor are they designed for the in-situ scenario. In most cases, the data has to be loaded and indexed in the system's memory beforehand, i.e., a preprocessing phase is required. Additionally, the aforementioned works refine the (physical) order of the data, performing highly expensive data duplication and allocating large amounts of memory. In contrast, in in-situ scenarios the analysis is performed directly over immutable raw data files under limited resources.
Furthermore, most of the existing cracking and adaptive indexing methods have been developed in the context of column-stores [96, 97, 91, 98, 99, 100, 101, 93] or MapReduce systems [102]. Our work, in contrast, is designed to handle raw data stored in text files on commodity hardware.

# Abstract

Minimizing data-to-analysis time while enabling real-time interaction and
efficient analytical computations on large datasets are fundamental objectives of contemporary exploratory systems. Although some recent adaptive indexing and on-the-fly processing approaches address most of these needs, there are cases where they do not guarantee reliable performance. Examples include: exploring areas with a high density of objects; executing the first exploratory queries or exploring previously unseen areas (where the index has not yet adapted sufficiently); and working with very large data files on commodity hardware, such as low-specification laptops. In such demanding cases, approximate and incremental techniques can be exploited to ensure efficiency and scalability by allowing users to prioritize response time over result accuracy, acknowledging that exact results are not always necessary. Therefore, approximation mechanisms that enable smooth user interaction by defining the trade-off between accuracy and performance based on vital factors (e.g., task, preferences, available resources) are of great importance. Considering the above, in this work we present an adaptive approximate query processing framework for interactive on-the-fly analysis (without a preprocessing phase) over large raw data. The core component of the framework is a main-memory adaptive indexing scheme (VALINOR-A) that interoperates with user-driven sampling and incremental aggregation computations. Additionally, an effective error-bounded approximation strategy is designed and integrated into the query processing pipeline. We conduct extensive experiments using both real and synthetic datasets, demonstrating the efficiency and effectiveness of the proposed framework.

Categories: cs.DB; MSC 97R50, 68P05, 68P15; ACM H.3.1, H.2.4, E.1.
# 1. Introduction
Accurate segmentation in medical imaging is crucial for a variety of clinical applications, from computer-aided diagnostics to treatment planning (Yang and Yu, 2021). In the context of Multiple Sclerosis (MS) research, hyperintense areas identifiable on head MRI scans are indicative of pathological changes in the brain and are closely associated with MS pathology. Developing robust automated segmentation methods is crucial to improve the understanding of white matter hyperintensities (WMH) and to enhance diagnosis, monitoring, and treatment strategies for MS patients, making WMH segmentation a key predictive tool (Palladino et al., 2020). Accurate and reliable WMH segmentation directly impacts patient care and clinical decision-making, as it helps in estimating lesion load, an important marker for disease progression and treatment response (Chaves et al., 2024).
This task is usually approached using deep learning strategies based on convolutional neural networks (CNNs) (Tran et al., 2022). Models based on CNNs, known for their outstanding performance in segmentation tasks, heavily rely on consistent distributions between training and test datasets. When confronted with changes in distribution, such as variations in MRI machine types or acquisition parameters across different medical centers, a phenomenon known as domain shift occurs, usually leading to a decline in segmentation accuracy. This presents a significant challenge as it compromises the model’s ability to generalize effectively across diverse imaging scenarios. In addition to compromising the discriminative performance of the model, domain shift can also impact its calibration (Ricci Lara et al., 2023; Mosquera et al., 2024; Ovadia et al., 2019). Calibration, which refers to the alignment between predicted probabilities and observed outcomes, is essential for accurate decision-making (Sambyal et al., 2023). When faced with domain shift, the model predictions could become less calibrated in the target domain, potentially misleading the clinician’s interpretation of the results. Poor calibration can lead to overconfidence in wrong decisions or unnecessary doubts about correct ones. While one would expect the probabilistic outputs of CNN segmentation models to be affected by domain shift, manifesting higher uncertainty in the predictions, this is not usually the case. Instead, models tend to remain overconfident even in situations where predictions are wrong (e.g. producing predictions close to 0 -background- or 1 -lesion- in a binary lesion segmentation scenario, instead of assigning values close to 0.5 which would better reflect uncertainty about the unknown data distribution).
Therefore, addressing domain shift is not only important to ensure accurate segmentation, but also plays a vital role in maintaining the calibration of the model across domains, ultimately enhancing its utility in clinical practice. In this work, we are interested in quantifying model uncertainty under domain shift scenarios, a concept closely related to model calibration. In cases when we go from in-distribution (ID) data samples, which are similar to the training data, to out-of-distribution (OOD) samples, which deviate from the training data distribution, uncertainty quantification (UQ) can allow us to flag segmentation cases which require intervention (Mehrtash et al., 2020).
In the context of medical imaging, models trained with classical loss functions (such as the popular Cross Entropy -CE- or soft Dice loss (Milletari et al., 2016)) may exhibit overconfidence in their predictions when faced with OOD data, leading to suboptimal outcomes. An example is shown in Figure 1 (central column), where a WMH segmentation prediction generated by a model trained with a classical pixel-level CE loss shows the label likelihood close to one across the entire segmented area. However, it would be more beneficial for the model to express uncertainty in areas where less consensus between raters could be expected, such as at lesion boundaries or in small, isolated lesions distant from larger lesion areas (as in the right column). This discrepancy underscores the need for more sophisticated loss functions and training strategies that can effectively address domain shift challenges in medical imaging applications, encouraging the model to express doubt in OOD scenarios instead of producing overconfident predictions.
Figure 1: Comparison of White Matter Hyperintensity (WMH) segmentation on axial and sagittal FLAIR MRI from a multiple sclerosis patient. The input FLAIR image (left) and an overconfident output from a CE Softmax model (center) are contrasted with the result from CE+MEEP Softmax (right). This latter approach yields more detailed probabilistic segmentations, capturing uncertainty more effectively through intermediate values, especially around lesion boundaries and in small WMH regions.
Previous work has proposed the use of regularization methods to discourage overconfident predictions. In the context of classification problems, Pereyra et al. (2017) proposed to increase entropy in the probabilistic output (i.e. preventing peaked distributions and promoting uniformity) of classification models by incorporating an additional regularization term to the loss function, representing the negative entropy of the output probability. Since confident predictions correspond to output distributions that have low entropy, this regularization term that prevents peaked distributions was shown to help avoid overconfidence for ID data. However, it was not evaluated under distribution shift scenarios. This idea was further refined in (Larrazabal et al., 2023) where, instead of penalizing low entropy for all predictions, only the erroneous ones were penalized, resulting in more accurate segmentations for ID data.
So far, the use of maximum entropy methods for image segmentation has mostly been limited to ID data. At the same time, previous work (Nair et al., 2020) investigated the use of entropy as a measure for uncertainty quantification in the context of WMH segmentation. However, they did not explore maximum entropy methods to enhance these estimates, nor did they address the implications of distribution shifts, which are a critical issue in multi-centric scenarios. Here we study maximum entropy methods to improve UQ under distribution shifts in WMH segmentation. In particular, we will examine whether these models can maintain accurate uncertainty estimation when confronted with changes in data distribution, crucial for reliable decision-making in clinical settings. Additionally, we will explore model calibration under OOD scenarios, providing insights into the effectiveness of the maximum entropy methods in detecting erroneous cases. By assessing the model performance across various medical centers and imaging scenarios, our goal is to uncover its adaptation and generalization capacity in diverse clinical environments, ultimately aiming to provide valuable guidance for integrating deep learning models into clinical practice and advancing patient care outcomes.
Contributions: Our main contributions are threefold: 1) we investigate the impact of domain shift on model calibration for WMH segmentation, 2) we propose the use of maximum entropy regularization for improving uncertainty estimates in WMH segmentation under domain shift, and 3) we assess the correlation between uncertainty and segmentation errors in this scenario. By achieving these goals, we aim to enhance the reliability and clinical applicability of deep learning models in the context of WMH segmentation for MS patients. Specifically, we hypothesize that higher entropy values will correlate with lower Dice scores, particularly under domain shift conditions, enabling entropy-based uncertainty estimates to serve as reliable proxies for segmentation performance. To validate this hypothesis, we systematically evaluate existing entropy-based regularization methods on multicentric MRI datasets acquired under varying scanning protocols and patient populations. In our experiments, maximum entropy regularization methods indeed improved uncertainty estimation and calibration under domain shift.
# 2. Materials and methods
Let us say we have a segmentation model $S: X \to Y$ that, given an image $X$, returns a probabilistic voxel-level segmentation map $Y = S(X)$. For every voxel $i$, $Y$ assigns a probability $y_i$ to the WMH lesion class, and $1 - y_i$ is the probability of healthy tissue. Without loss of generality, in our case the model $S$ is an encoder-decoder convolutional neural network following a U-Net architecture (Ronneberger et al., 2015). Note that this formulation is model-agnostic, and hence other architectures could also be considered. Given the probabilistic segmentation map, we aim to estimate voxel-level uncertainty. In this study, we focus on predictive entropy as the uncertainty metric.
# 2.1 Entropy-based uncertainty estimation
Various methods have been proposed for estimating uncertainty in medical image segmentation, including Monte Carlo Dropout (Gal and Ghahramani, 2016), model ensembling, and Probabilistic U-Net (Kohl et al., 2018). In this work, we focus on predictive entropy, a widely adopted approach (Czolbe et al., 2021; Nair et al., 2020) due to its simplicity and interpretability.
Figure 2: Input FLAIR MRI (top left) and ground truth segmentation (bottom left) for White Matter Hyperintensities (WMH). These are shown alongside softmax probability outputs from CE Softmax (top center) and CE+MEEP Softmax (top right) models, and their respective voxel entropy maps: CE Entropy (bottom center) and CE+MEEP Entropy (bottom right). Notably, the CE+MEEP entropy map more distinctly highlights uncertainty in small WMH lesions visible in the ground truth, compared to the CE entropy map.
Uncertainty in model predictions can be estimated using predictive entropy. For binary segmentation, the binary entropy of the segmented region has been employed to provide insights into the confidence levels associated with the predictions (Czolbe et al., 2021; Mehrtash et al., 2020; Nair et al., 2020). Given a Bernoulli probability distribution parameterized by $p$ , its binary entropy is defined as
$$
H_b(p) = -\, p \log_2(p) - (1 - p) \log_2(1 - p),
$$
where $p$ stands for the probability that a voxel or data point belongs to the foreground class, which, in the case of WMH segmentation, is the probability associated with the lesion class. The binary entropy $H_b$ ranges from 0 to 1: when $H_b = 0$, the outcome is entirely predictable, and when $H_b = 1$ it is completely unpredictable or random. In a binary segmentation scenario, if a model assigns a probability close to 1 for a voxel belonging to the target class, then the entropy will be very low, indicating high confidence. Alternatively, if the model assigns a probability of 0.5, the entropy will be maximal, indicating high uncertainty (see Figure 2). This allows practitioners to identify uncertain regions, potentially requiring further inspection or intervention, thus enhancing the model's reliability and interpretability.
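As a concrete illustration, the binary entropy map can be computed voxel-wise with NumPy. This is a minimal sketch; the `eps` clipping constant is our own choice to avoid $\log(0)$:

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Voxel-wise binary entropy H_b(p) in bits; p is the foreground probability."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)
```

Applied to a full probability map, this yields the entropy maps shown in Figure 2: values near 1 bit flag uncertain voxels, values near 0 flag confident ones.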
# 2.2 Improving entropy-based uncertainty estimation via maximum entropy methods
We propose three strategies to promote higher entropy distributions and evaluate their effectiveness in terms of uncertainty estimation under domain shift. Such strategies are implemented as an additional term in the loss function for training the neural network. In general, we will train our models using the following loss function:
$$
L = L_{seg}(Y, \widehat{Y}) + \lambda L_{reg}(Y),
$$
where $L_{seg}$ is the data term (either cross entropy or soft Dice loss) computed by comparing the predicted segmentation mask $Y$ with the ground-truth label $\widehat{Y}$, and $L_{reg}$ is a regularization term defined to encourage high entropy. In what follows, we introduce three alternatives for this regularization term.
# 2.2.1 Overall confidence penalty
As previously discussed, overconfident models tend to assign all probability into a single class. To avoid such behavior, we first propose to encourage high entropy for all voxel predictions. We follow the idea introduced by (Pereyra et al., 2017) in the context of image classification,
adapting it to the context of image segmentation. Thus, the entropy of all voxel predictions $y_i \in Y$ in the predicted segmentation mask $Y$ is computed, defining the regularization term as
$$
L_a(Y) = -\, H_b(Y) = -\sum_{y_i \in Y} \left[ -\, y_i \log_2(y_i) - (1 - y_i) \log_2(1 - y_i) \right].
$$
This term is added to the overall loss function, encouraging maximum entropy for all voxel predictions: $L(Y, \widehat{Y}) = L_{seg}(Y, \widehat{Y}) + \lambda L_a(Y)$. This approach systematically enforces higher entropy in the model's outputs, acting as a strong regularizer and improving generalization by reducing overconfidence even in correct predictions.
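A minimal NumPy sketch of this combined loss follows. Function names and the default $\lambda$ are illustrative assumptions; in practice the models are trained with automatic differentiation, not explicit NumPy code:

```python
import numpy as np

def binary_ce(y, y_hat, eps=1e-12):
    """Mean per-voxel binary cross-entropy; y = predicted probs, y_hat = {0,1} labels."""
    y = np.clip(y, eps, 1 - eps)
    return -(y_hat * np.log(y) + (1 - y_hat) * np.log(1 - y)).mean()

def overall_confidence_penalty(y, eps=1e-12):
    """L_a(Y) = -H_b(Y): negative summed binary entropy over ALL voxels."""
    y = np.clip(y, eps, 1 - eps)
    h = -y * np.log2(y) - (1 - y) * np.log2(1 - y)
    return -h.sum()  # minimizing this maximizes total entropy

def loss_with_penalty(y, y_hat, lam=0.1):
    """L = L_seg + lambda * L_a, with cross-entropy as the data term."""
    return binary_ce(y, y_hat) + lam * overall_confidence_penalty(y)
```

Note that the penalty is smallest (most negative) for uniform predictions, so gradient descent on the combined loss pulls every voxel away from extreme confidence.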
# 2.2.2 Maximum entropy on erroneous predictions
The term defined in the previous section penalizes high confidence for all voxel predictions. However, if a prediction is correct, in principle there is nothing wrong with the model being confident about it. Indeed, we argue that one would like to avoid overconfident predictions especially in cases where those predictions are wrong. Thus, we resort to the maximum entropy on erroneous predictions (MEEP) regularizer, $L_m(Y_w)$, which penalizes low entropy only for erroneous predictions. We use $Y_w$ to denote the set of voxels whose label was incorrectly predicted, and hence define the regularizer as
$$
L_m(Y_w) = -\, H_b(Y_w) = -\sum_{y_i \in Y_w} \left[ -\, y_i \log_2(y_i) - (1 - y_i) \log_2(1 - y_i) \right].
$$
This regularizer penalizes low-entropy (i.e., peaky) distributions only when the predictions are wrong, which intuitively encourages uniform predictions in highly uncertain situations. In particular, we hypothesize that this term will help in domain shift scenarios due to changes in intensity distributions when facing multicentric datasets. As before, we add this term to the overall loss function, encouraging maximum entropy only for voxels which were wrongly predicted, resulting in the following loss: $L(Y, \widehat{Y}) = L_{seg}(Y, \widehat{Y}) + \lambda L_m(Y_w)$.
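The MEEP term differs from the overall penalty only in that the entropy sum is restricted to misclassified voxels. A hedged sketch, assuming a 0.5 decision threshold to define $Y_w$ (the function name is our own):

```python
import numpy as np

def meep_penalty(y, y_hat, eps=1e-12):
    """L_m(Y_w) = -H_b(Y_w): negative binary entropy summed only over
    misclassified voxels (predictions thresholded at 0.5)."""
    wrong = (y >= 0.5) != (y_hat >= 0.5)   # Y_w: wrongly predicted voxels
    y_w = np.clip(y[wrong], eps, 1 - eps)
    h = -y_w * np.log2(y_w) - (1 - y_w) * np.log2(1 - y_w)
    return -h.sum()  # zero when every voxel is correctly predicted
```

Because correctly predicted voxels contribute nothing, the model remains free to be confident where it is right, and is pushed toward uniformity only where it errs.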
# 2.2.3 Maximum entropy on erroneous predictions via KL divergence
We evaluate a third approach where we also encourage high entropy in erroneous predictions, but following a different strategy. Instead of subtracting the entropy of misclassified voxels from the overall loss function, we introduce a regularization term that encourages their predictions to be uniformly distributed, by minimizing the Kullback-Leibler (KL) divergence with respect to a uniform distribution. The KL divergence $D_{KL}(Q \| P)$ provides a notion of difference between two probability distributions $P$ and $Q$. Since the uniform distribution has maximum entropy, we minimize the difference between the predicted distribution for misclassified voxels $Y_w$ and the uniform distribution $Q$ by adding a regularization term $L_{KL}(Y_w) = D_{KL}(Q \| Y_w)$, resulting in the loss function $L(Y, \widehat{Y}) = L_{seg}(Y, \widehat{Y}) + \lambda L_{KL}(Y_w)$. Note that although $L_{KL}(Y_w)$ and $L_m(Y_w)$ both drive $Y_w$ towards a uniform distribution, their gradient dynamics differ, resulting in different effects on the weight updates during training. In this study, we conduct an experimental analysis to determine which term yields better UQ under domain shift.
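For a Bernoulli prediction $p$ and the uniform distribution $Q = (0.5, 0.5)$, the divergence has the closed form $D_{KL}(Q \| p) = 0.5 \log_2(0.5/p) + 0.5 \log_2(0.5/(1-p))$. A hedged sketch, using the same 0.5-threshold definition of $Y_w$ as above (names are our own):

```python
import numpy as np

def kl_to_uniform_penalty(y, y_hat, eps=1e-12):
    """L_KL(Y_w) = D_KL(Q || Y_w), with Q the uniform Bernoulli (0.5, 0.5),
    summed over misclassified voxels only."""
    wrong = (y >= 0.5) != (y_hat >= 0.5)   # Y_w: wrongly predicted voxels
    y_w = np.clip(y[wrong], eps, 1 - eps)
    # Closed-form Bernoulli KL against the uniform distribution.
    kl = 0.5 * np.log2(0.5 / y_w) + 0.5 * np.log2(0.5 / (1 - y_w))
    return kl.sum()
```

The term vanishes when a wrong voxel already predicts 0.5 and grows as its prediction becomes more peaked, matching the intended behavior; unlike `meep_penalty`, its gradient steepens near the extremes, which is the difference in gradient dynamics mentioned above.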
# 2.3 Metrics and evaluation protocols
Here we are interested in assessing how WMH segmentation models behave under domain shift, in improving their performance both in terms of discrimination and calibration, and in understanding whether the entropy of the predictions can be used as a proxy to anticipate potential failures. Together, these aspects provide a comprehensive understanding of overall performance and suitability for real-world applications. In what follows, we describe the metrics used to evaluate each of these aspects.
# 2.3.1 Discrimination metrics
Discriminative ability is achieved when the model can effectively distinguish between different classes. For the segmentation task, the Dice coefficient was used. This widely used metric measures the overlap between the predicted segmentation and the ground truth. It is calculated as

$$
Dice = \frac{2\, |G \cap P|}{|G| + |P|},
$$

where $|G \cap P|$ represents the number of elements common to both the ground truth set $G$ and the predicted set $P$, and $|\cdot|$ denotes the number of elements in a set.
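The overlap computation is straightforward on binary masks; a minimal NumPy sketch (the empty-mask convention of returning 1.0 is our own choice):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|G ∩ P| / (|G| + |P|) for binary masks of any shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```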
# 2.3.2 Calibration metrics
Calibration metrics are crucial for assessing how well the predicted probabilities of a model align with actual outcomes. Previous studies have shown that segmentation models trained with Dice loss tend to be overconfident (Yeung et al., 2023; Murugesan et al., 2023), while cross-entropy training typically leads to better calibrated models (Mehrtash et al., 2020).
Among calibration metrics, the Expected Calibration Error (ECE) is useful for assessing the reliability of probability estimates. To calculate it, we first allocate each voxel prediction to a bin, depending on the predicted probability value. Here we consider a bin width of 0.1, resulting in $M = 10$ bins of the form $\{B_1 = [0, 0.1),\, B_2 = [0.1, 0.2),\, \ldots,\, B_{10} = [0.9, 1]\}$. ECE is then calculated as
$$
ECE = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| acc(B_m) - conf(B_m) \right|,
$$
where $|B_m|$ is the number of samples in bin $B_m$, $n$ is the total number of samples, $acc(B_m)$ is the accuracy of the voxels in bin $B_m$, and $conf(B_m)$ is the average confidence (predicted probability value) in bin $B_m$. This metric captures the average discrepancy between predicted probability and actual accuracy across all bins.
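A hedged NumPy sketch of this computation follows. For this binary foreground setting we take $acc(B_m)$ as the fraction of truly positive voxels in the bin (the standard reliability-diagram reading of foreground probabilities), which is an assumption on our part:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE over foreground probabilities, with equal-width bins of 0.1."""
    probs = probs.ravel()
    labels = labels.ravel().astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(probs)
    ece = 0.0
    for m in range(n_bins):
        lo, hi = edges[m], edges[m + 1]
        # Last bin is closed on the right: [0.9, 1.0].
        in_bin = (probs >= lo) & ((probs < hi) if m < n_bins - 1 else (probs <= hi))
        if not in_bin.any():
            continue
        acc = labels[in_bin].mean()    # fraction of positives in the bin
        conf = probs[in_bin].mean()    # average predicted probability
        ece += in_bin.sum() / n * abs(acc - conf)
    return ece
```

A perfectly calibrated model has matching `acc` and `conf` in every bin, giving an ECE of zero.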
Another essential tool for assessing calibration is the reliability plot. This graphical representation plots the average predicted probability $p$ against the actual fraction of positives $f_p$ for each bin. Ideally, the points in a reliability plot should lie on the line $p = f_p$, indicating perfect calibration, where the predicted probability matches the observed frequency of the event. This visualization helps identify areas where the model is overconfident or underconfident in its predictions.
By incorporating these metrics, we can comprehensively evaluate both the discriminative power and the calibration quality of machine learning models, ensuring their reliability and effectiveness in clinical practice.
# 2.3.3 Uncertainty quantification protocols
To evaluate the relationship between segmentation performance and uncertainty estimates, we computed the Pearson correlation between the average foreground entropy and the Dice coefficient across scans. For each case, we first selected the voxels classified by the model as foreground (predicted probability $> 0.5$) and then computed the mean entropy over these voxels. This approach simulates a clinical scenario where ground-truth labels are unavailable, focusing the uncertainty analysis on the model's positive predictions.
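This protocol can be sketched in two small NumPy helpers; the names and the empty-foreground convention are our own assumptions:

```python
import numpy as np

def mean_foreground_entropy(prob_map, threshold=0.5, eps=1e-12):
    """Mean binary entropy over voxels the model predicts as foreground."""
    p = prob_map[prob_map > threshold]
    if p.size == 0:
        return 0.0  # no foreground predicted for this scan
    p = np.clip(p, eps, 1 - eps)
    return float((-p * np.log2(p) - (1 - p) * np.log2(1 - p)).mean())

def entropy_dice_correlation(entropies, dices):
    """Pearson correlation between per-scan mean foreground entropy and Dice."""
    return float(np.corrcoef(entropies, dices)[0, 1])
```

Strongly negative correlations indicate that high entropy reliably flags scans with poor segmentation quality, which is the property evaluated in Section 3.1.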
# 2.3.4 Evaluation on different lesion sizes
Previous work has shown that WMH segmentation methods tend to perform worse on smaller lesions (Chaves et al., 2024). Thus, one would expect entropy-based uncertainty estimates to present higher values for patients with a smaller lesion load. In Section 3.2 we therefore examine how uncertainty varies with lesion size in a comparative analysis, grouping lesions according to their volume (smaller than 5 mL, between 5 mL and 15 mL, and larger than 15 mL).
# 2.4 Datasets
This retrospective study analyzed two WMH segmentation datasets:
White Matter Hyperintensity (WMH) Segmentation Challenge: The WMH Segmentation Challenge dataset consists of brain MR images (T1 and FLAIR) with manual annotations of WMH. The dataset includes 60 training sets of T1/FLAIR images from three different institutions, annotated by experts in WMH scoring, and 110 test sets from five different scanners. The dataset was derived from patients with various degrees of aging-related degenerative and vascular pathologies to ensure generalizability of segmentation methods across scanners and patient variability. The participants had a mean age of approximately 70 years ($70.1 \pm 9.3$ years), with an equal gender distribution (50% male). WMH burden varied widely, with a mean WMH volume of $16.9 \pm 21.6$ mL and a mean lesion count of $62 \pm 35$ lesions per subject. This dataset was created as part of the WMH Segmentation Challenge, associated with MICCAI 2017, and was active from 2017 to 2022. The challenge aimed to evaluate and compare methods for the automatic segmentation of WMH of presumed vascular origin. Participants trained their models on the provided training data and submitted their methods for evaluation using the unreleased test data. Results of this challenge have been published in (Kuijf et al., 2019).
3D MR Image Database of Multiple Sclerosis Patients with White Matter Lesion Segmentations (3D-MR-MS): The 3D-MR-MS dataset (Lesjak et al., 2018) comprises magnetic resonance (MR) images from 30 patients with multiple sclerosis (MS), acquired at the University Medical Center Ljubljana. The dataset includes co-registered and bias-corrected T1-weighted (T1W), contrast-enhanced T1-weighted (T1WKS), T2-weighted (T2W), and FLAIR images, as well as corresponding brain masks and intra-study transform parameters. The patients had a median age of 39 years (range: 25 to 64), with a female-to-male ratio of 23:7. The dataset is designed to support research in automated lesion segmentation for neurodegenerative diseases like MS. Lesion burden varied significantly, with a total of 3316 lesions segmented and an overall lesion volume (total lesion load, TLL) of 567 mL. The median lesion volume per subject was 15.2 mL (range: 0.337–57.5 mL; interquartile range: 31.1 mL). Lesion sizes ranged from 2 μl to 250 μl (5th to 95th percentile).
Our analysis utilized existing MRI scans and their corresponding manual WMH segmentations to develop and evaluate the proposed methods for uncertainty estimation in WMH segmentation. To ensure consistency, we applied identical preprocessing steps to both datasets, including resampling images to match their spatial resolutions, z-score intensity standardization, and N4 bias field correction (already provided for the 3D-MR-MS dataset).
# 2.5 WMH segmentation model details
For all experiments in this study, we employed a 3D U-Net architecture (Ronneberger et al., 2015) for WMH segmentation, implemented using the MONAI framework (Cardoso et al., 2022). The model was designed for 3D volumetric MRI data, accepting two input channels (FLAIR and T1-weighted images) and producing two output channels representing the background and WMH classes. The network consisted of four downsampling/upsampling levels, with feature channels set to (8, 16, 32, 64). Downsampling was achieved using $2 \times 2 \times 2$ strided convolutions.
A dropout rate of 0.2 was applied within the network during training for regularization. The model was optimized with the Adam method, using an initial learning rate of 0.001 and no weight decay. Training was patch-based, with a batch size of 64 patches of size $32 \times 32 \times 32$ voxels extracted from the input volumes, and proceeded for up to 800 epochs.
Regularization weights for the different regularization terms were selected individually for each strategy through grid search, balancing segmentation performance and the quality of uncertainty estimation.
Inference was also performed using a patch-based sliding-window approach with the same patch size ($32 \times 32 \times 32$), aggregating predictions to reconstruct full-volume segmentations. This standardized model and preprocessing configuration (described in Section 2.4) provided a robust baseline, allowing for the evaluation of regularization strategies on model performance, calibration, and uncertainty estimation under domain shift.
# 3. Results
In this section, we empirically evaluate the proposed methods, investigating the relationship between model confidence, segmentation quality, and robustness of the model when exposed to OOD samples. We use the previously discussed White Matter Hyperintensity (WMH) Segmentation Challenge dataset (which is considered to be ID) and the 3D MR Image Database of Multiple Sclerosis Patients (3D-MR-MS), considered to be OOD.
# 3.1 Entropy as a proxy for error prediction in domain shift scenarios
We evaluated the relationship between segmentation performance and uncertainty estimates by analyzing the Pearson correlation between average foreground entropy (as described in Section 2.3.3) and Dice scores across scans. Figure 3 presents scatter plots of entropy as a function of Dice for both ID and OOD data across the four strategies: cross-entropy (CE), CE regularized with Maximum Entropy on Erroneous Predictions ($CE_{\text{MEEP}}$), CE regularized with Kullback-Leibler divergence ($CE_{\text{KL}}$), and CE with Maximum Entropy on All Predictions ($CE_{\text{MEALL}}$). Linear regression lines are fitted to each set of data points, revealing distinct trends for each loss function, regardless of the medical center. The Pearson correlation coefficient is provided for each loss function.
Figure 3: Scatter plot comparing entropy of foreground predictions and Dice coefficient, per image, for ID and OOD patients. The Pearson correlation coefficient between entropy and Dice is shown in parentheses in the legend box. It can be observed that entropy estimates for MEEP and KL yield better anti-correlation, thus serving as predictors of potential failures.
Although the differences in Pearson correlation coefficients are not very large, consistent trends are observed: the regularization methods targeting uncertainty improvement ($CE_{\text{MEEP}}$ and $CE_{\text{KL}}$) systematically achieve stronger negative correlations between entropy and Dice scores compared to standard cross-entropy (CE). This indicates that entropy-based regularization leads to uncertainty estimates that more reliably reflect segmentation performance, supporting their use as practical predictors of potential failures, particularly under domain shift.
Across all loss functions, a negative correlation is observed between Dice and entropy, indicating that higher segmentation quality is generally associated with lower uncertainty. However, the strength of this correlation varies across loss functions: the Pearson correlation coefficients between average foreground entropy and Dice were $-0.826$ for CE, $-0.835$ for $CE_{\text{MEEP}}$, $-0.807$ for $CE_{\text{KL}}$, and $-0.861$ for $CE_{\text{MEALL}}$. This suggests that $CE_{\text{MEEP}}$ and $CE_{\text{KL}}$ may provide more reliable uncertainty estimates, as their entropy values more closely track the actual segmentation performance.
To further investigate the behavior of uncertainty estimates under domain shift, we examine their distribution across different types of prediction errors. Figure 4 presents a scatterplot in which each point represents a voxel, grouped by classification outcome: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). In addition, in-distribution (ID) data are shown in blue, while out-of-distribution (OOD) data are shown in orange.
In the case of TP, CE and $CE_{\mathrm{MEALL}}$ exhibit the lowest uncertainty, while $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ yield higher uncertainty, both for ID (blue) and OOD (orange) cases. For TN, a similar but less dispersed behavior is observed, with median uncertainty close to zero for all methods. As expected, FP exhibit higher uncertainties, since these are incorrect predictions. Notably, $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ show the highest uncertainty for these cases, both in and out of distribution. This heightened uncertainty for FP is desirable, as it allows potentially erroneous segmentations to be identified, particularly in the challenging OOD setting, where the model is more likely to encounter unfamiliar data distributions. FP often occur in regions with ambiguous image characteristics, making it difficult for the model to confidently distinguish them from TP. Finally, for FN, $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ again show higher uncertainty, indicating their ability to express doubt when the model is incorrect.
Figure 4: Distribution of uncertainty estimates across different prediction outcomes (True Positives, True Negatives, False Positives, False Negatives) for various training strategies under ID and OOD scenarios. Each point represents a voxel, with blue indicating ID data and orange representing OOD data. The x-axis shows different training strategies, while the y-axis represents entropy values. Black triangles denote median entropy values. This visualization allows comparison of uncertainty behaviors across loss functions, revealing how methods like $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ tend to yield higher uncertainties, particularly for false positives and false negatives, in both ID and OOD settings.
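The voxelwise grouping underlying this analysis can be sketched as follows, assuming thresholded foreground probabilities and binary ground-truth masks (the function name and the 0.5 threshold are illustrative):

```python
import numpy as np

def entropy_by_outcome(probs, labels, threshold=0.5, eps=1e-12):
    """Group voxelwise entropies by confusion outcome (TP/TN/FP/FN)."""
    p = np.clip(np.asarray(probs, float), eps, 1.0 - eps)
    ent = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    pred = np.asarray(probs, float) >= threshold  # predicted foreground mask
    gt = np.asarray(labels).astype(bool)          # ground-truth foreground mask
    masks = {"TP": pred & gt, "TN": ~pred & ~gt,
             "FP": pred & ~gt, "FN": ~pred & gt}
    return {name: ent[m] for name, m in masks.items()}
```

Plotting the four returned groups per training strategy (and per ID/OOD split) reproduces the structure of the figure.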
Figure 5: Boxplots comparing metrics across in-distribution (ID) and out-of-distribution (OOD) data for different loss functions. (Left): Average entropy for voxels predicted as positive, showing a general increase in uncertainty under domain shift, especially for $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$. (Middle): Dice score performance across loss functions, with ID scores consistently higher than OOD scores. (Right): Hausdorff distances illustrating boundary localization performance across ID and OOD cases. Statistical significance according to the Mann–Whitney U test is indicated where applicable.
To gain deeper insight into how maximum-entropy regularizers affect the uncertainty estimates, we first analyze entropy levels across ID and OOD data, as shown in Figure 5. As stated previously, the outcomes display two distinct patterns: standard cross-entropy (CE) and $CE_{\mathrm{MEALL}}$ exhibit lower entropy values, i.e., higher confidence in their predictions. Conversely, $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ demonstrate elevated entropy levels, particularly for OOD data, suggesting increased sensitivity to domain shift and a greater ability to capture uncertainty in challenging scenarios.
A Mann–Whitney U test confirms this observation, revealing statistically significant differences in entropy levels between ID and OOD samples for $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$, further supporting their effectiveness in distinguishing between the two scenarios. This ability to differentiate between ID and OOD data based on uncertainty estimates is crucial for identifying unreliable predictions and ensuring the model's robustness in real-world clinical settings.
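A minimal sketch of this significance test, using SciPy's `mannwhitneyu`; the one-sided alternative and the 0.05 level are our assumptions, not stated in the text:

```python
from scipy.stats import mannwhitneyu

def entropy_shift_test(id_entropies, ood_entropies, alpha=0.05):
    """One-sided Mann-Whitney U test: are OOD entropies stochastically
    larger than ID entropies? Returns (p-value, significant-at-alpha)."""
    _, p = mannwhitneyu(ood_entropies, id_entropies, alternative="greater")
    return p, bool(p < alpha)
```

A non-parametric test is a sensible default here because per-scan entropy distributions are typically skewed and not obviously Gaussian.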
# 3.2 Uncertainty and lesion size analysis
Figure 6 shows that smaller lesions tend to have higher entropy across all loss functions. This observation aligns with the difficulty of reaching expert consensus on ground-truth labels for smaller lesions, as their subtle appearance can make them difficult to identify and delineate. Larger lesions are generally associated with lower entropy values, indicating higher model confidence, and this tendency is consistently observed for both ID and OOD cases.
Quantitatively, with CE the median entropy for small lesions ($<5$ mL) was approximately 0.58, compared to 0.23 for large lesions ($>15$ mL), illustrating the decrease in model uncertainty with increasing lesion size. Notably, the $CE_{\mathrm{MEEP}}$ regularization strategy specifically targets these smaller lesions by pushing uncertainty levels toward the maximum, reflecting the inherent ambiguity and potential for disagreement in these cases. This targeted approach could be particularly valuable in clinical practice, as it allows the model to flag its own limitations and prompt further investigation or consultation for uncertain, small lesions.
Figure 6: Boxplots comparing average entropy for voxels predicted as positive across different strategies in three lesion volume ranges. The plot distinguishes between ID (filled boxes) and OOD (unfilled boxes) data. We observe that larger lesion volumes are generally associated with lower entropy, confirming that it can serve as an indicator of model uncertainty. Notably, this tendency is conserved for both ID and OOD cases.
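The volume-stratified analysis can be sketched as follows; the 5 mL and 15 mL bin edges follow the thresholds quoted above, while the function name and dictionary layout are illustrative:

```python
import numpy as np

def entropy_by_volume_bin(volumes_ml, entropies, edges=(5.0, 15.0)):
    """Median entropy for small (<5 mL), medium (5-15 mL) and
    large (>15 mL) lesions; returns None for empty bins."""
    vol = np.asarray(volumes_ml, float)
    ent = np.asarray(entropies, float)
    bins = {"small": vol < edges[0],
            "medium": (vol >= edges[0]) & (vol <= edges[1]),
            "large": vol > edges[1]}
    return {k: float(np.median(ent[m])) if m.any() else None
            for k, m in bins.items()}
```

Running this separately on ID and OOD scans gives the per-bin statistics summarized in the boxplots.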
# 3.3 Model calibration in domain shift scenarios
Finally, to assess the impact of domain shift on model calibration, we analyze reliability diagrams and Expected Calibration Error (ECE) for each loss function, considering both ID and OOD scenarios (Figure 7). In the ID scenario, $CE_{\mathrm{MEEP}}$ outperforms the other losses in terms of ECE, while in the OOD scenario all loss functions exhibit poorer calibration, except for the KL-based loss, which demonstrates superior calibration and robustness to domain shift.
Figure 7: Reliability plots for different loss functions on ID and OOD data. Each colored line corresponds to a different loss function, with the ECE shown in parentheses (best ones are shown in bold). Points above the diagonal indicate underconfidence, while points below indicate overconfidence. A well-calibrated model should approximate the dashed diagonal line (representing perfect calibration).
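For reference, a standard equal-width binned ECE can be computed as in this sketch; 10 bins is a common default, and the paper's exact binning is an assumption here:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - mean confidence| per bin."""
    conf = np.asarray(confidences, float)
    corr = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # First bin is closed on the left so that conf == 0.0 is counted.
        mask = (conf >= lo) & (conf <= hi) if lo == 0.0 else (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.sum() / n * abs(corr[mask].mean() - conf[mask].mean())
    return float(ece)
```

The per-bin (confidence, accuracy) pairs computed inside the loop are exactly the points drawn in a reliability diagram.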
# 4. Discussion
In this study, we investigated the impact of domain shift on model calibration and uncertainty estimation in white matter hyperintensity (WMH) segmentation. Our findings demonstrate that entropy-based uncertainty estimates could be used as a proxy for anticipating segmentation errors in unseen domains. Specifically, we observed a significant correlation between increasing segmentation errors due to domain shift and rising entropy-based uncertainty estimates. By incorporating maximum-entropy regularization techniques, such as $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$, we further strengthened this correlation and improved model calibration.
Our analysis also revealed that the choice of loss function significantly influences the quality of uncertainty quantification. While the standard cross-entropy and $CE_{\mathrm{MEALL}}$ loss functions tend to produce lower entropy values, $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ yield higher uncertainty levels, particularly for OOD data. This suggests that $CE_{\mathrm{MEEP}}$ and $CE_{\mathrm{KL}}$ are more sensitive to domain shifts and better at capturing uncertainty in challenging scenarios. Additionally, our investigation into the relationship between lesion size and uncertainty revealed that smaller lesions tend to have higher uncertainty across all loss functions. This finding highlights the importance of considering lesion size when interpreting model predictions and emphasizes the need for further research into uncertainty estimation for small lesions. Models trained with maximum-entropy regularization achieved lower ECE values than standard training, further confirming the effectiveness of entropy-based regularization for maintaining reliable probabilistic outputs across distributions.
The analysis of prediction outcomes showed that uncertainty levels were higher for incorrect predictions (false positives and false negatives) in regularized models, especially under domain shift. This behavior is desirable in clinical practice, as it helps to identify unreliable segmentations and regions that may require expert review, enhancing the interpretability and safety of models. Notably, maximum-entropy regularization amplified uncertainty, particularly in smaller lesions, aligning model uncertainty with regions of greater clinical ambiguity. This could be valuable for detecting subtle or borderline lesions, which are typically harder to segment accurately.
In conclusion, our study underscores the importance of uncertainty estimation and model calibration in mitigating the challenges posed by domain shift in medical image analysis. By incorporating maximum-entropy regularization techniques and carefully considering the choice of loss function, more robust and reliable deep learning models for WMH segmentation can be developed. These strategies not only improve segmentation performance but also provide better indicators of prediction confidence, which are essential for safe clinical deployment in multi-center and heterogeneous imaging environments. Future work could extend this analysis by evaluating maximum-entropy regularization across different segmentation architectures, providing deeper support to the robustness and generalizability of these techniques.
# 5. Acknowledgements
The authors gratefully acknowledge NVIDIA Corporation for the donation of the GPUs used in this research, the support of Universidad Nacional del Litoral through the CAID program, and Agencia Nacional de Promoción de la Investigación, el Desarrollo Tecnológico y la Innovación for its support through the PICT program. EF was supported by the Google Award for Inclusion Research (AIR) Program. VFM was partially supported by the Emerging Leaders in the Americas Program (ELAP). We also thank Calcul Quebec and Compute Canada.
# 6. Data Availability Statement
The datasets used in this study are publicly available:
1. The White Matter Hyperintensity (WMH) Segmentation Challenge dataset is available at https://wmh.isi.uu.nl/ under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
2. The 3D MR Image Database of Multiple Sclerosis Patients with White Matter Lesion Segmentations (3D-MR-MS) is available at https://lit.fe.uni-lj.si/en/research/resources/3D-MR-MS/ under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
# Glossary
White Matter Hyperintensity (WMH): Areas of increased brightness that appear on specific types of magnetic resonance imaging (MRI) scans, indicating changes in the brain's white matter tissue. These areas are commonly associated with various neurological conditions, particularly multiple sclerosis.
Domain Shift: A phenomenon where the statistical properties of the data used to train a machine learning model differ from those encountered during deployment, often leading to decreased model performance.
Model Calibration: The extent to which a model's predicted probabilities align with observed outcomes. A well-calibrated model's confidence scores accurately reflect the likelihood of correct predictions.
Entropy: A measure of uncertainty in probability distributions. In the context of binary segmentation, higher entropy values indicate greater uncertainty in the model's predictions.
Overconfidence: A situation where a model assigns very high probability values to its predictions, even when those predictions are incorrect.
In-Distribution (ID): Data that follows the same statistical distribution as the data used to train the model.
Out-of-Distribution (OOD): Data that differs significantly from the training data distribution, often leading to decreased model performance.
Expected Calibration Error (ECE): A metric that measures the difference between a model's predicted probabilities and the actual observed frequencies of correct predictions.
Maximum Entropy Regularization: A technique that encourages models to express uncertainty by penalizing low-entropy (highly confident) predictions.
Dice Coefficient: A metric that measures the spatial overlap between two segmentations, commonly used to evaluate the accuracy of medical image segmentation models.
U-Net: A specific type of convolutional neural network architecture commonly used for medical image segmentation tasks.
Kullback-Leibler (KL) Divergence: A measure of difference between two probability distributions.
FLAIR (Fluid-Attenuated Inversion Recovery): A specific type of MRI sequence that suppresses cerebrospinal fluid signals, making it easier to identify white matter lesions.
Ground Truth: The reference standard segmentation, typically created by expert human annotators, used to evaluate the performance of automated segmentation methods.
# CRediT Author Statement
Franco Matzkin: Conceptualization, Methodology, Software, Validation, Formal Analysis, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization
Diego H. Milone: Supervision, Conceptualization, Methodology, Writing - Review & Editing, Project Administration
José Dolz: Resources, Supervision, Writing - Review & Editing, Methodology
Agostina Larrazabal: Methodology, Software
Enzo Ferrante: Supervision, Conceptualization, Methodology, Writing - Review & Editing, Project Administration
# Funding
This work was supported by the National Scientific and Technical Research Council (CONICET, Argentina) and the Emerging Leaders in the Americas Program (ELAP) from the Government of Canada, which funded a research stay at ETS Montreal. The funding sources had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
# Declaration of Generative AI Use in Scientific Writing
The authors declare the use of large language models (ChatGPT and Claude.ai) solely for grammar checking and language translation assistance, as Spanish is the native language of the research team. All scientific content, analysis, and conclusions were independently developed by the authors. The final manuscript was thoroughly reviewed and approved by all authors to ensure accuracy and integrity of the scientific content.
# Ethics Statement
No ethical approval was required for this study as it utilized only publicly available datasets: the White Matter Hyperintensity (WMH) Segmentation Challenge dataset and the 3D MR Image Database of Multiple Sclerosis Patients (3D-MR-MS), both of which are freely accessible for research purposes under their respective Creative Commons licenses.
# Abstract

Accurate segmentation of white matter hyperintensities (WMH) is crucial for
clinical decision-making, particularly in the context of multiple sclerosis.
However, domain shifts, such as variations in MRI machine types or acquisition
parameters, pose significant challenges to model calibration and uncertainty
estimation. This study investigates the impact of domain shift on WMH
segmentation by proposing maximum-entropy regularization techniques to enhance
model calibration and uncertainty estimation, with the purpose of identifying
errors post-deployment using predictive uncertainty as a proxy measure that
does not require ground-truth labels. To do this, we conducted experiments
using a U-Net architecture to evaluate these regularization schemes on two
publicly available datasets, assessing performance with the Dice coefficient,
expected calibration error, and entropy-based uncertainty estimates. Our
results show that entropy-based uncertainty estimates can anticipate
segmentation errors, and that maximum-entropy regularization further
strengthens the correlation between uncertainty and segmentation performance
while also improving model calibration under domain shift.
# I. INTRODUCTION
Large Language Models (LLMs) have emerged as a fundamental tool in modern software development [1]–[3], demonstrating exceptional language understanding and generation capabilities. Their application has shown remarkable potential across various software engineering tasks [4], [5], particularly in code generation [6]. However, as LLMs are increasingly deployed, evaluating the correctness of generated code remains a significant challenge [7], [8], primarily because multiple correct or semantically equivalent solutions [9] may exist for a given programming problem.
Traditional evaluation metrics, which are either reference-based or test-based, have been widely adopted. However, these metrics suffer from inherent limitations. Reference-based metrics (e.g., BLEU [10], ROUGE [11] and ChrF [12]) depend on high-quality reference code and frequently penalize implementations that are correct but diverge from them. Test-based metrics (e.g., Pass@k [13]) require careful manual design of comprehensive test cases that cover edge cases, along with secure environments for code execution. Another evaluation method is human evaluation [14], which is accurate yet expensive, as it involves multiple domain experts directly assessing the correctness of generated artifacts. Importantly, this method is prohibitively labor-intensive and time-consuming, rendering it impractical for large-scale assessments. These constraints significantly limit the flexibility and scalability of human evaluation for code generation evaluation [15], [16].
Recent advancements in LLMs have catalyzed the development of LLM-as-Judge methods [17]–[19], which directly evaluate the functional consistency between problem descriptions and generated code. These methods offer a promising alternative to traditional evaluation methods [20]. However, with the rapid proliferation of LLM-as-Judge methods, there remains considerable uncertainty regarding their performance in code generation evaluation, and it is far from clear which method delivers optimal results [21].
Empirical study. We first conduct a large-scale empirical study to systematically compare different LLM-as-Judge methods in code generation evaluation. Specifically, we classify existing LLM-as-Judge methods into two categories, i.e., methods based on general models (e.g., GPT-3.5-turbo and GPT-4o) and methods based on reasoning-focused models (e.g., DeepSeek-R1 [22]). To ensure a comprehensive evaluation, we curate three datasets (i.e., HumanEval-Judge, MBPP-Judge and BigCodeBench-Judge) as new benchmarks for assessing the effectiveness of LLM-as-Judge methods in code generation evaluation. Our findings indicate that, while these methods generally perform well, they exhibit significant discrepancies across various dimensions. In particular, the former require elaborate prompts and lack explainability, whereas the latter provide enhanced explainability with simpler prompts but demand substantial computational resources due to their parameter sizes.
Our methods. To address these limitations and advance the state of code generation evaluation, we propose a novel code evaluation method that effectively balances accuracy, efficiency, and explainability. We name it CODE-DITING1.
To reduce the computational cost, we develop a data distillation framework that transfers reasoning capabilities from the powerful DeepSeek-R1-671B model to our more compact CODE-DITING model, available in 1.5B and 7B parameter sizes. Through this process, we construct a high-quality dataset CODEJUDGE-17K consisting of 17,000 carefully curated samples with reasoning paths. This method not only enhances the explainability of the evaluation but also makes the reasoning process more accessible and comprehensible. To further enhance performance, the CODE-DITING models employ PiSSA [23] technique for model training and the majority vote strategy during inference.
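The majority-vote inference step can be sketched as follows; the tie-breaking rule (defaulting to "incorrect") is our assumption, not specified in the text:

```python
from collections import Counter

def majority_vote(judgments, tie_value=False):
    """Aggregate k sampled correct/incorrect verdicts by majority vote.

    judgments: iterable of booleans from repeated judge-model samples.
    Ties resolve to tie_value (here, conservatively, 'incorrect').
    """
    counts = Counter(bool(j) for j in judgments)
    if counts[True] == counts[False]:
        return tie_value
    return counts[True] > counts[False]
```

Sampling an odd number of judgments (e.g., k = 3 or 5) avoids ties altogether and is a common choice in practice.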
Experimental results demonstrate that CODE-DITING 1.5B outperforms all models of comparable parameter magnitude and achieves performance equivalent to models with five times the parameter count. Notably, CODE-DITING 7B surpasses even large-scale models such as GPT-4o and DeepSeek-V3 671B [24], despite utilizing only 1% of their parameter volume. Our ablation studies reveal that all components of CODE-DITING are essential to its superior performance. In addition, we demonstrate that CODE-DITING is robust to preference leakage [25], where evaluation models show bias toward code produced by models of the same series, a common issue in LLM-as-Judge methods. These findings establish CODE-DITING as a promising alternative for code generation evaluation, representing a significant advancement in the field.
# Summary of contributions.
We curate three datasets (i.e., HumanEval-Judge, MBPP-Judge and BigCodeBench-Judge) as benchmarks for the empirical study. In addition, we introduce a new dataset, CODEJUDGE-17K, designed for training purposes.
1 The name comes from the Chinese classic Journey to the West, reflecting the model's goal to accurately discern the correctness of code implementations, just as the mythical creature distinguishes truth from falsehood.
We design and carry out a large-scale empirical study to systematically compare different LLM-as-Judge methods in code generation evaluation.
We propose CODE-DITING, a novel code evaluation method that effectively balances accuracy, efficiency and explainability.
We conduct extensive experiments to evaluate the performance of CODE-DITING on different scenarios, including performance comparisons, ablation studies and analyses of preference leakage.
To facilitate reproducibility, experimental data and model weights are released at https://github.com/Code-DiTing.
# II. BACKGROUND
# A. Problem Formulation
We formally define the code generation evaluation problem as follows. Let $\mathcal{X}$ be the space of problem descriptions, $\mathcal{Y}$ the space of code implementations, $\mathcal{R}$ the space of reference implementations, and $\mathcal{T}$ the space of test case sets.
Given a problem description $x \in \mathcal{X}$, a code generation model $M : \mathcal{X} \to \mathcal{Y}$ produces code $y = M(x)$. The evaluation function $\mathcal{F} : \mathcal{X} \times \mathcal{Y} \times \mathcal{R} \times \mathcal{T} \to \{0, 1\}$ determines the functional correctness of $y$ with respect to $x$. Formally,
$$
\mathcal{F}(x, y, r, T) = \begin{cases} 1, & \text{if } y \text{ is functionally correct} \\ 0, & \text{otherwise} \end{cases}
$$
where $r \in \mathcal{R} \cup \{\perp\}$ is an (optional) reference implementation ($r = \perp$ means that $r$ is not provided) and $T \in \mathcal{T} \cup \{\perp\}$ is an (optional) set of test cases ($T = \perp$ means that $T$ is not provided).
Based on the availability of $r$ or $T$, existing code generation evaluation methods can be categorized into reference-based, test-based, and reference-and-test-free evaluation. Table I summarizes the code generation evaluation metrics used in various methods.
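This taxonomy can be sketched in code, encoding the $\perp$ ("not provided") case as `None`; `run_test`, `similarity`, `judge` and the similarity threshold are hypothetical placeholders, not part of the formulation:

```python
def evaluate(x, y, r=None, T=None, run_test=None, similarity=None,
             judge=None, sim_threshold=0.9):
    """Sketch of the evaluation function F(x, y, r, T) -> {0, 1}.

    None encodes the 'not provided' case (the ⊥ symbol in the text).
    run_test, similarity and judge are caller-supplied callbacks.
    """
    if T is not None:  # test-based: y must pass every test case
        return int(all(run_test(y, t) for t in T))
    if r is not None:  # reference-based: similarity to reference code
        return int(similarity(y, r) >= sim_threshold)
    return int(judge(x, y))  # reference-and-test-free: LLM-as-Judge
```

The dispatch order mirrors the taxonomy: test cases dominate when available, a reference is used next, and only in the free setting does a judge model decide.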
# B. Reference-Based Evaluation ($r \neq \perp$)
Reference-based methods compute the similarity between $y$ and $r$, using metrics ranging from token-based ones (e.g., BLEU [10], ChrF [12]) to semantics-aware ones (e.g., CodeBLEU [28], CodeBERTScore [29]).
Token-based metrics are limited to n-gram lexical similarity and ignore the potential semantic information in code. These metrics originate from tasks such as machine translation and text summarization, and include BLEU [10], ROUGE [11] and ChrF [12]. Additionally, exact match (EM) metrics are widely used in code synthesis. Eghbali et al. [27] proposed the CrystalBLEU metric to enhance evaluation accuracy by excluding common n-grams that inflate BLEU scores due to verbose syntax and coding conventions. Furthermore, Liguori et al. [26] argued that edit distance (ED) measures code similarity better than other token-based metrics.
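As an example of a token-based metric, the edit distance (ED) over token sequences can be computed with the standard Levenshtein recurrence; the normalization below is one common choice, not necessarily the one used in [26]:

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences (row-by-row DP)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution / match
        prev = curr
    return prev[n]

def ed_similarity(a, b):
    """Normalized edit similarity in [0, 1]; 1 means identical sequences."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

Operating on tokens rather than raw characters makes the metric less sensitive to identifier length and whitespace, but it still cannot recognize semantically equivalent rewrites.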
Semantics-based metrics consider the syntactic structure, data-flow information and latent semantic information of code. Ren et al. [28] proposed CodeBLEU, which injects code syntax through the AST and code semantics through data flow. Dong et al. [30] proposed CodeScore, which conducts supervised learning on datasets with test cases to perform functional evaluation of code synthesis. Zhou et al. [29] proposed CodeBERTScore, which uses CodeBERT to perform contextual encoding of reference and predicted code and computes similarity scores between tokens. Yang et al. [31] proposed CodeScore-R, based on UniXcoder and contrastive learning, which employs sketch processing, syntax transformation and mutation testing to improve the robustness of the metric.
TABLE I: Comparison of Code Generation Evaluation Metrics, where Func. means functional correctness, Auto. means automatic evaluation, Expl. means explainability and Open. means using open-source models. $\checkmark$ denotes applicable, $\times$ denotes not applicable and $\circ$ denotes optional.
Nevertheless, these methods cannot directly assess functional correctness, require high-quality reference code collection, and penalize correct but divergent implementations.
# C. Test-Based Evaluation ($T \neq \perp$)
Test-based methods [13] execute code against test cases $T$ to assess functional correctness. The widely adopted pass@k metric is defined as
$$
\operatorname{pass}@k = \mathbb{E}_{x} \left[ 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \right]
$$
where $n$ (resp. $c$) is the total (resp. correct) number of samples for problem $x$. This metric has become standard for evaluating code generation models.
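For a single problem, the estimator can be evaluated with the numerically stable product form $1 - \prod_{i=n-c+1}^{n}(1 - k/i)$, which avoids computing large binomial coefficients; this rewriting is algebraically equivalent to the formula above:

```python
def pass_at_k(n, c, k):
    """Unbiased per-problem pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples drawn, c: samples that pass all tests, k: budget.
    """
    if n - c < k:
        return 1.0  # C(n-c, k) = 0: a correct sample is guaranteed in any k
    result = 1.0
    for i in range(n - c + 1, n + 1):
        result *= 1.0 - k / i  # telescoping product equals C(n-c,k)/C(n,k)
    return 1.0 - result
```

Averaging `pass_at_k` over all problems gives the expectation over $x$ in the definition.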
Despite its popularity, pass@k requires human experts to design high-quality test cases and demands secure execution environments to prevent malicious code execution.
# D. Reference-and-Test-Free Evaluation ($r = \perp$ and $T = \perp$)
When neither reference implementations nor test cases are available, evaluation typically relies on either human evaluation or LLM-as-judge methods [34], [35]. Human evaluation, while accurate, is prohibitively expensive and time-consuming for large-scale assessments. Yang et al. [14] proposed a sampling-based method applied in small-scale empirical studies, which can obtain high-quality assessments of predicted code within acceptable time and resource constraints.
Recent LLM-as-judge methods leverage large language models to directly evaluate the functional consistency between problem descriptions and generated code. Zhuo et al. [32] proposed ICE-Score, which pioneered the use of GPT-3.5 as a judge to evaluate code generation model performance through carefully crafted prompt engineering. Tong et al. [33] introduced CodeJudge, which not only utilizes GPT-3.5 but also explores smaller open-source models as judges, employing a two-stage prompt engineering method for evaluation.
While promising, these methods generally require complex prompt engineering, rely on proprietary closed-source models, and lack explanations for their judgments. In contrast, we aim to provide a simple and explainable evaluation method that requires neither reference implementations nor test cases, which can balance accuracy, efficiency, and explainability.
# III. EMPIRICAL STUDY
In this section, we conduct an empirical study of existing LLM-as-judge methods for code generation evaluation and analyze the factors that affect their effectiveness.
# A. Experiment Setup
Code Generation Datasets. To comprehensively evaluate LLM-as-judge methods, establishing accurate and diverse benchmarks is a crucial first step. We select three diverse and widely adopted datasets that faithfully simulate real-world code generation scenarios. Our dataset selection is guided by two principles: (1) To ensure accurate assessment of semantic correctness, we prioritize datasets with exceptional test case quality and quantity, specifically targeting those with test coverage approaching $100 \%$ ; (2) Beyond algorithm-centric problems, datasets need to encompass a wide range of libraries and function call patterns typical in professional software development, enabling thorough evaluation of LLM-as-judge methods across varied programming contexts.
As a result, we select the following datasets:
• HumanEval-plus [36] is an enhanced variant of the HumanEval benchmark that addresses fundamental ground-truth issues in the original dataset (including unhandled edge cases, logical errors, and performance limitations). It expands test coverage from an average of 9.6 to 764.1 test cases per problem, incorporating more challenging edge cases and complex functionalities to ensure rigorous and comprehensive evaluation.
• MBPP-plus [36] applies similar enhancement techniques to the MBPP benchmark, resulting in a test suite 35 times larger than the original dataset.
• BigCodeBench [37] specifically targets real-world software development scenarios by incorporating diverse libraries and complex function call patterns. It comprises 1,140 function-level tasks that challenge LLMs to interpret instructions and orchestrate multiple function calls across 139 different libraries. Each programming task is validated through an average of 5.6 carefully designed test cases, achieving a mean branch coverage of $99\%$.

TABLE II: Sample Statistics for the HumanEval-Judge, MBPP-Judge and BigCodeBench-Judge Datasets
Data Sampling. With the chosen benchmark datasets, we proceed to sample code generated by various LLMs. We employ models of varying sizes, Qwen2.5-Coder (1.5B/7B) [38] and DeepSeek-Coder (1.3B/6.7B) [39], to ensure diversity in the generated solutions. Using multiple models not only enhances the diversity of our dataset but also allows us to evaluate the robustness of LLM-as-judge methods across different code generation patterns and qualities.
During the data processing phase, we extract natural language problem descriptions and corresponding code implementations from the generated samples through rigorous data cleaning and deduplication. Additionally, we remove code comments to focus the evaluation on functional implementation rather than documentation.

Data Labeling. (1) Automatic Labeling. We utilize test cases from the existing datasets to automatically label code samples. Functional correctness is determined using the pass@1 metric, which serves as the ground truth for evaluation. (2) Manual Verification. To address potential mislabeling from the expanded test cases in HumanEval-plus/MBPP-plus, three authors independently review samples that passed the original benchmarks but failed the enhanced test suites. Labels are assigned directly when judgments align, or through discussion when opinions differ, ensuring high-quality ground-truth labels.
We hence curate three datasets: HumanEval-Judge (640 samples), MBPP-Judge (1,512 samples), and BigCodeBench-Judge (800 samples). Detailed statistics, including class distributions, are provided in Table II.
# B. LLM-as-Judge Methods
Foundation Models. To comprehensively evaluate LLM-as-judge methods across different model scales and architectures, we select a diverse set of foundation models:
• Closed-source models: GPT-3.5-turbo and GPT-4o.
• Large-scale open-source models: DeepSeek-v3-671B and DeepSeek-r1-671B.
• Medium-scale open-source models: Llama3-8B, Qwen2.5-7B and DeepSeek-r1-distill-7B.
• Small-scale open-source models: Llama3-1.5B, Qwen2.5-1.5B and DeepSeek-r1-distill-1.5B.
The DeepSeek-r1 series models are classified as reasoning models because of their powerful reasoning capabilities; the remaining models are classified as general models. We limit our study to these models, as they provide sufficient representativeness across different architectures, capabilities, and parameter scales.
This selection enables us to analyze how model size affects the performance of LLM-as-judge methods and investigate whether smaller, more computationally efficient models can achieve comparable evaluation quality to their larger counterparts.
Existing Prompting Methods. We evaluate four representative prompting methods, each reflecting a different approach to eliciting code evaluation capabilities from LLMs:
• Vanilla, a straightforward prompting method that directly asks the model to evaluate code correctness based on the problem description and implementation, without additional guidance.
• CoT [40], which encourages the model to perform step-by-step reasoning by analyzing the code's logic, identifying potential issues, and then making a final judgment on correctness.
• ICE SCORE [32], which performs multi-dimensional evaluation and instructs the LLM to predict an evaluation score from 0 to 4 based on an evaluation criterion. In our experiments, we binarize the score to 0 or 1 for functional correctness.
• CodeJudge [33], a two-phase method in which a summary of the given code is first generated and then evaluated against the problem description to determine whether the code is correct.
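To make the simplest setting concrete, a Vanilla judge prompt can be assembled as below. The wording follows the template visible in Figure 1; the exact phrasing and function name are illustrative assumptions, not the verbatim prompt used in the experiments:

```python
def vanilla_prompt(nl: str, code: str) -> str:
    """Assemble a minimal 'Vanilla' judge prompt (hypothetical template)."""
    return (
        "Determine the correctness of the code snippet. "
        "Output only Yes or No.\n"
        f"# Problem: {nl}\n"
        f"# Code:\n{code}"
    )

prompt = vanilla_prompt(
    "Return the sum of two integers.",
    "def add(a, b):\n    return a + b",
)
```

The resulting string is sent to the judge model as-is; the other methods differ only in how much reasoning scaffolding they wrap around this core instruction.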
# C. Evaluation Metrics
To comprehensively assess the performance of LLM-as-judge methods for code evaluation, we employ three metrics.

Accuracy (Acc). It measures the proportion of correctly classified instances among all evaluated samples. For $n$ code samples with ground-truth labels $y_i$ and predicted labels $\hat{y}_i$:
$$
\mathrm{Accuracy} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(\hat{y}_i = y_i)
$$
where $\mathbb { I } ( \cdot )$ is the indicator function that returns 1 for correct predictions and 0 otherwise.
F1 Score (F1). It is the harmonic mean of precision and recall, macro-averaged over classes, which is particularly valuable for our datasets with class imbalance:
$$
\mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}, \quad
\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, \quad
\mathrm{F1} = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$
F1 ranges from 0 to 1, with higher values indicating better performance in identifying functionally correct code.
Matthews Correlation Coefficient (MCC). It provides a balanced measure by considering all confusion matrix entries:
$$
\mathrm{MCC} = \frac{\mathrm{TP} \times \mathrm{TN} - \mathrm{FP} \times \mathrm{FN}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}}
$$
MCC ranges from $-1$ to 1, where 1 indicates perfect prediction, 0 random prediction, and $-1$ inverse prediction. It is less sensitive to class imbalance than accuracy and F1.
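All three metrics follow directly from the binary confusion counts; the sketch below is a minimal stdlib implementation (for brevity it computes the positive-class F1 rather than the macro-average reported in the paper):

```python
from math import sqrt

def binary_metrics(y_true, y_pred):
    """Accuracy, positive-class F1, and MCC from binary label lists."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, f1, mcc

# Toy example: one false negative among four judgments
acc, f1, mcc = binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

In practice, library implementations such as scikit-learn's `matthews_corrcoef` handle the degenerate zero-denominator cases the same way, returning 0.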
# D. Implementation Details
Across all experiments, we fix the maximum context length at 8k tokens. Temperature settings are tailored to model type: 0.6 for reasoning models (to promote exploratory reasoning) and 0.0 for general models (to ensure deterministic outputs).
We interact with the following large-scale models via their official APIs: DeepSeek-v3-671B, DeepSeek-r1-671B, GPT-3.5-turbo and GPT-4o. Medium- and small-scale open-source models are obtained from Hugging Face, with inference served by vLLM [41] on a single RTX 4090 GPU to maximize throughput.
# E. Empirical Findings
The results are shown in Table III. Based on extensive experiments with different models and prompting methods on code evaluation tasks, we identify clear differences between general models (GPT/DeepSeek-V3/Llama3/Qwen2.5 series) and reasoning models (DeepSeek-R1 series):
(1) General Models Depend on Prompt Engineering. Our analysis reveals that general-purpose models show high sensitivity to prompt engineering.
Large-scale models respond differently to prompts: GPT-3.5-turbo and DeepSeek-v3 perform best with CodeJudge, while GPT-4o excels with CoT. For medium- and small-scale models, structured approaches like ICE SCORE significantly improve performance.
A notable example is Llama3-8B, which achieves an accuracy of 0.658 and an MCC of 0.265 using ICE SCORE, substantially outperforming its Vanilla baseline (accuracy 0.622, MCC 0.194).
# Finding 1
For general models, optimal prompting strategies vary by architecture and scale, requiring model-specific customization.
(2) Reasoning Models Prefer Simple Prompts. In contrast to their general counterparts, reasoning models exhibit consistently superior performance with simpler prompts. The Vanilla method emerges as the most effective approach across both DeepSeek-r1-distill model sizes (7B and 1.5B).
Notably, increased prompt complexity often leads to performance degradation, with the 7B model achieving a remarkable 0.737 accuracy using the basic Vanilla approach.
# Finding 2
Reasoning models have already internalized reasoning capabilities and require no externally provided reasoning steps or structured frameworks.
(3) Performance Comparison. Our evaluation reveals the superior stability of reasoning models across diverse datasets, highlighting their robustness and generalizability.
For large-scale models, DeepSeek-r1-671B with the best prompting method achieves an accuracy of 0.834, F1 score of 0.815 and MCC of 0.632, significantly higher than others. Similarly, for the 7B-scale, DeepSeek-r1-distill 7B with the best prompting method achieves an accuracy of 0.737, F1 score of 0.710, and MCC of 0.443. For the 1.5B-scale models, DeepSeek-r1-distill 1.5B achieves the best accuracy of 0.652, F1 score of 0.604 and MCC of 0.241.
# Finding 3
At comparable parameter scales, reasoning models demonstrate superior and more stable performance across different datasets compared to general models.
# IV. METHODS
In this section, we introduce our method CODE-DITING. Based on the empirical findings in Section III, we build on two key insights: (1) explicit reasoning paths significantly enhance code evaluation accuracy while enabling better sample explainability; and (2) smaller models with appropriate training can potentially match or exceed the performance of much larger models.
CODE-DITING distills reasoning capabilities into compact models to balance accuracy with computational efficiency, as shown in Figure 1.
# A. Dataset Construction
To effectively transfer reasoning capabilities from largescale models to CODE-DITING, high-quality training data are essential.
1) Source Benchmark Collection: We follow three key principles for dataset selection:
• Diversity: The dataset needs to cover a wide range of programming scenarios, including algorithmic problems, system programming, and library usage.
• Difficulty: In addition to basic syntax tasks, the dataset must encompass complex logical challenges and multi-step reasoning problems.
• Quality: The dataset needs to contain high-coverage test cases to ensure reliable functional correctness assessment.
Based on these principles, we select three large-scale code generation benchmarks, i.e., KodCode [42], OpenCoder [43], and CodeHarmony [44], as the seed data. These benchmarks are widely used to train large-scale CodeLLMs.
2) Code Generation and Labeling: To ensure a balanced and representative training set, we implement a systematic data generation and labeling process. To control the distribution of correct and incorrect examples in our dataset, we generate multiple candidate solutions with Qwen2.5-Coder (1.5B/7B) for each programming task. For quality assurance, we employ a multi-step validation process, as in HumanEval-Judge:
• We evaluate each solution’s functional correctness through test cases and compute pass@1 as the label.
TABLE III: Performance Comparison of Different Models and Prompting Methods across Datasets
• We apply static analysis tools to identify and filter out solutions containing syntax errors.
• We remove code comments to focus the evaluation on core implementation logic.
3) Reasoning Knowledge Distillation: To transfer the logical reasoning capabilities of large-scale reasoning models to our target dataset and enhance sample explainability, we implement a distillation process. For each triple ⟨nl, code, label⟩, we use DeepSeek-R1-671B (the SOTA reasoning model, as shown in Table III) in the Vanilla setting to produce independent judgments on code functional correctness, including both predicted labels and reasoning paths. This process yields raw distillation data in the format ⟨nl, code, label, reasoning⟩.
4) Data Filtering and Sampling: We implement a multistage filtering mechanism:
• Accuracy filtering. We remove samples where DeepSeek-R1-671B’s predictions disagree with the test case labels to ensure consistency.
• Logical coherence filtering. We employ DeepSeek-V3 as a discriminator to detect and eliminate reasoning paths containing hallucinations or logical inconsistencies.
• Class balancing. We downsample the filtered data to achieve a 1:1 ratio between positive and negative samples, addressing the imbalance in the original dataset, where correct samples were overrepresented.
As a result, we construct CODEJUDGE-17K, a high-quality dataset containing 17,000 samples. CODEJUDGE-17K features a balanced distribution of correct and incorrect code samples across diverse programming tasks, spanning from basic algorithmic challenges to complex system implementations. Each sample is accompanied by a detailed reasoning path that explains the judgment process, making the dataset valuable for training explainable code judgment models.

Fig. 1: Overview of the CODE-DITING pipeline: (I) dataset construction, (II) model training, and (III) model inference.
# B. Model Training
To transfer reasoning capabilities to smaller models while maintaining efficiency, we train the model in three stages.
1. Knowledge Injection. We hypothesize that explicit reasoning paths are crucial for code evaluation tasks. To inject this capability while minimizing deployment costs, we fine-tune DeepSeek-r1-distill (1.5B/7B) base models on CODEJUDGE-17K. This enables smaller models to learn from larger experts while requiring only about $1\%$ of the teacher's parameters.
2. Parameter-Efficient Fine-tuning with LoRA. To optimize model training while maintaining performance, we adopt Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning technique. It freezes the pre-trained weights $W_0 \in \mathbb{R}^{d \times k}$ and introduces trainable low-rank matrices $A \in \mathbb{R}^{r \times k}$ and $B \in \mathbb{R}^{d \times r}$ with $r \ll \min(d, k)$:
$$
W = W_0 + BA
$$
This reduces trainable parameters from $d k$ to $r ( d + k )$ , preserving performance with minimal overhead.
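A toy numpy sketch makes the decomposition and the parameter saving concrete; the dimensions here are illustrative, not those of the actual base model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 4096, 4096, 16              # illustrative dims with r << min(d, k)

W0 = rng.normal(0, 0.02, (d, k))      # frozen pre-trained weight
B = np.zeros((d, r))                  # standard LoRA: B starts at zero,
A = rng.normal(0, 0.02, (r, k))       # A gets a random init, so BA = 0

W = W0 + B @ A                        # effective weight; equals W0 at init

full_params = d * k                   # trainable count for full fine-tuning
lora_params = r * (d + k)             # trainable count with LoRA
```

With these dimensions the LoRA update trains fewer than 1% of the parameters of the full layer, which is the source of the efficiency claim above.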
3. PiSSA Initialization. To enhance training efficiency and model performance, we leverage Principal Singular Vector Adaptation (PiSSA) [23] to initialize the LoRA matrices. Instead of the Kaiming-uniform initialization [45] used in LoRA, PiSSA exploits the intrinsic low-rank structure of $W_0$ through a truncated SVD, i.e., $W_0 \approx U_r \Sigma_r V_r^\top$. The LoRA matrices are then initialized as
$$
B = U_r \Sigma_r^{1/2}, \quad A = \Sigma_r^{1/2} V_r^\top
$$
This ensures $\Delta W = B A$ initially aligns with $W _ { 0 }$ ’s principal subspace, concentrating updates on directions critical for functional preservation. Compared to the Kaiming-uniform initialization, PiSSA provides structured starting points that improve convergence speed and final performance, particularly in low-rank regimes.
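The initialization can be sketched with numpy's SVD. This is an illustrative reduction only: in the full PiSSA method the frozen weight is also replaced by the residual $W_0 - BA$, which is omitted here:

```python
import numpy as np

def pissa_init(W0: np.ndarray, r: int):
    """Initialize LoRA factors from the rank-r principal subspace of W0."""
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    B = U[:, :r] * np.sqrt(S[:r])           # B = U_r Sigma_r^{1/2}
    A = np.sqrt(S[:r])[:, None] * Vt[:r]    # A = Sigma_r^{1/2} V_r^T
    return B, A

W0 = np.random.default_rng(1).normal(size=(64, 32))
B, A = pissa_init(W0, r=8)
# B @ A is the best rank-8 approximation of W0 (Eckart-Young),
# so the adapter starts aligned with W0's principal subspace.
```

By contrast, the standard LoRA init ($B = 0$, random $A$) starts from $\Delta W = 0$, with no relation to the structure of $W_0$.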
# C. Model Inference
Because a reasoning model may produce inconsistent reasoning paths when the temperature is set to 0.6, we use a majority-vote strategy to determine the final result and further enhance inference performance. This is a parallel inference method: the model performs multiple independent inferences on the same input, and the most frequent result is selected as the final judgment.
From a probabilistic perspective, if the probability of a correct judgment in a single inference is $P ( A )$ , the probability of the final result being correct can be modeled through a binomial distribution when conducting $T$ independent inferences. Specifically, if at least $( T + 1 ) / 2$ inference results are correct (i.e., the majority vote is correct), then the probability of the final judgment being correct is
$$
P\left(X \geq \frac{T+1}{2}\right) = \sum_{k=\lceil (T+1)/2 \rceil}^{T} \binom{T}{k} P(A)^k \left(1 - P(A)\right)^{T-k}.
$$
When $P(A) > 0.5$, by the Law of Large Numbers, the success probability $P\left(X \geq \frac{T+1}{2}\right)$ of the majority vote strategy improves as $T$ increases. This explains why majority voting can effectively enhance model performance: as long as the accuracy of a single inference exceeds random guessing (i.e., $P(A) > 0.5$), repeated voting significantly reduces the probability of misjudgment.
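The binomial argument can be checked numerically; a short sketch ($T$ is assumed odd so ties cannot occur):

```python
from math import comb, ceil

def majority_vote_acc(p: float, T: int) -> float:
    """P(at least (T+1)/2 of T independent judgments are correct), T odd."""
    m = ceil((T + 1) / 2)
    return sum(comb(T, k) * p**k * (1 - p) ** (T - k) for k in range(m, T + 1))
```

For any single-inference accuracy above 0.5 the voted accuracy strictly increases with $T$ (e.g., compare `majority_vote_acc(0.7, 1)`, `majority_vote_acc(0.7, 3)`, and `majority_vote_acc(0.7, 7)`), matching the argument above; at exactly $p = 0.5$ voting brings no gain.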
In our experiments, we perform $T = 7$ independent inferences for each test sample and use majority voting to determine the final judgment. Note that $T$ is set to 7 based on the ablation findings in Section V-B as the optimal trade-off between model performance and inference latency.
# V. EXPERIMENTS AND ANALYSIS
To evaluate the effectiveness and benefits of CODE-DITING, we mainly study the following three research questions (RQs):
A. RQ1: How does CODE-DITING perform compared to the state-of-the-art methods?
To evaluate the performance of CODE-DITING, we compare it with various models mentioned in Section III. For a fair comparison, we use the most effective prompt for each model and employ the same evaluation metrics. The results are presented in Table IV.
TABLE IV: Performance Comparison of Different Models and Prompting Methods across Datasets
(1) Performance Comparison. Both CODE-DITING 1.5B and 7B models significantly outperform other models in their respective parameter scales, with substantial improvements across accuracy, F1 score and MCC metrics. In particular, CODE-DITING 1.5B surpasses Llama3 1B, Qwen2.5 1.5B and even the base DS-r1-distill 1.5B model by large margins. Similarly, CODE-DITING 7B shows clear advantages over Llama3 8B, Qwen2.5 7B, and the base DS-r1-distill 7B model.
(2) Parameter Efficiency. The parameter efficiency of our method is particularly noteworthy, as CODE-DITING 1.5B achieves performance comparable to DS-r1-distill 7B despite using only about $20 \%$ of its parameters, demonstrating the effectiveness of our knowledge distillation method in transferring reasoning capabilities to smaller models.
Most impressively, CODE-DITING 7B outperforms both (closed-source) GPT-4o and DeepSeek-V3 (671B) across all three datasets, falling short only of DeepSeek-R1 671B. This is remarkable considering that CODE-DITING 7B uses only about $1 \%$ of the parameters of these larger models.
Both CODE-DITING variants maintain strong performance across all evaluation datasets, indicating robust generalization capabilities. These results validate our hypothesis that explicit reasoning paths are crucial for code evaluation tasks and demonstrate that smaller models can effectively learn these reasoning patterns through our proposed fine-tuning method.
# Summary of RQ1
CODE-DITING demonstrates superior performance in code evaluation compared to state-of-the-art methods. The 1.5B variant outperforms all models in its parameter class, matching models 5× its size. The 7B variant surpasses GPT-4o and DeepSeek-V3 (671B) while using only about $1\%$ of their parameters.
Fig. 2: Ablation Study (F1 Score) of Data Filtering Component
# B. RQ2: What is the impact of different components of CODE-DITING?
To evaluate the effectiveness of different components of CODE-DITING, we conducted a series of ablation studies, focusing on three key aspects: data filtering, parameter initialization and inference strategy.
(1) Data Filtering Component. Figure 2 illustrates the impact of the data filtering component on model performance. We compare the F1 scores at $k=1$ (single inference) across different datasets and observe that the data filtering strategy consistently and significantly improves model performance. This empirical evidence strongly supports the hypothesis that high-quality reasoning paths are crucial for models to develop accurate code evaluation capabilities.
Specifically, the relative improvement from filtering is notably more pronounced in the smaller 1.5B model compared to the 7B model. This distinct impact suggests that smaller models, with their inherently limited representational capacity, benefit disproportionately from high-quality training data, as they lack the parameter space to effectively learn from noisy or ambiguous examples.
(2) PiSSA Component. Figure 3 shows the impact of PiSSA initialization on model performance. We also compare F1 scores at $k=1$ across different initialization methods to isolate this component’s contribution. In standard LoRA implementations, the A matrix is typically initialized with Kaiming-uniform initialization, while the B matrix is initialized to zero. In contrast, PiSSA derives both the A and B matrices through SVD decomposition, which fundamentally aligns the initialization with the model’s intrinsic parameter structure.
Fig. 3: Ablation Study (F1 Score) of PiSSA Component
The experimental results reveal that PiSSA yields substantial performance improvements on the HumanEval-Judge and MBPP-Judge datasets compared to standard LoRA initialization techniques. However, we observe that the performance enhancement on the more challenging BigCodeBench-Judge dataset is less pronounced, suggesting that initialization benefits may vary with task complexity and dataset characteristics.
These findings indicate that PiSSA initialization helps models converge to more optimal solution spaces, particularly in parameter-constrained low-rank adaptation scenarios.
(3) Inference Component. Figure 4 presents a detailed analysis of how our inference strategy affects model performance. We systematically compare F1 scores across different values of $k$ (the number of inference passes) to identify the optimal configuration. The results demonstrate a clear pattern: as $k$ increases, model performance consistently improves, though with diminishing returns at higher values.
To determine the most practical configuration for real-world applications, we analyze the performance-efficiency trade-off. Our experiments are performed using vLLM as the inference server on a single NVIDIA RTX 4090 GPU. The baseline latency ($k=1$) for a single inference pass is 0.15s and 0.30s for the 1.5B and 7B models, respectively. As expected, the time cost scales linearly with $k$, reaching approximately 1s (1.5B) and 2s (7B) at $k=7$.
By analyzing both the performance improvements and the computational overhead across different $k$ values, we identify $k = 7$ as the optimal setting. This configuration delivers substantial accuracy gains while maintaining reasonable inference latency, making it well-suited for practical applications where both prediction quality and response time are critical.
Fig. 4: Ablation study (F1 Score) of the inference component
# Summary of RQ2
Our ablation studies demonstrate that each component of CODE-DITING contributes significantly to its overall performance. With the combination of data filtering, PiSSA initialization, and the optimal inference strategy, CODE-DITING achieves state-of-the-art performance while maintaining computational efficiency.
C. RQ3: Does CODE-DITING suffer from preference leakage?
Preference leakage [25] refers to a contamination issue in LLM-as-judge frameworks where correlations between the synthetic data generator and the LLM-based evaluator lead to biased assessments.
In our training process, we have used code generated by models in the same families (DeepSeek and Qwen Coder) that serve as our base models. This raises a legitimate concern: does CODE-DITING exhibit preference bias toward code generated by models similar to those used in its training data?
To systematically investigate this potential issue, we use Agreement Rate and Cohen’s Kappa [46] as evaluation metrics. Agreement Rate measures the proportion of samples that receive the same judgment across the two evaluation scenarios being compared.
Cohen’s Kappa quantifies the agreement between evaluators while accounting for chance agreement:
$$
\text{Cohen's Kappa} = \frac{p_o - p_e}{1 - p_e}
$$
TABLE V: Consistency analysis across different code generation models
where $p _ { o }$ is the observed agreement rate and $p _ { e }$ is the expected agreement rate by chance. The chance agreement $p _ { e }$ is calculated based on the marginal distributions of each evaluator’s judgments:
$$
p_e = \sum_i \left( p_{i1} \times p_{i2} \right)
$$
where $p_{i1}$ and $p_{i2}$ represent the proportion of samples classified as category $i$ by the first and second evaluator, respectively. This adjustment for chance agreement makes Cohen’s Kappa a more robust measure than the simple agreement rate, especially when the distribution of categories is imbalanced.
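Both metrics reduce to a few lines of code; the following is a minimal sketch over two judgment sequences (the Yes/No labels are illustrative):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two judgment sequences over the same samples."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                   # agreement rate
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Perfect agreement on mixed labels -> kappa = 1
k_perfect = cohens_kappa(["Yes", "No", "Yes", "No"], ["Yes", "No", "Yes", "No"])
# Agreement no better than chance (p_o = p_e = 0.5) -> kappa = 0
k_chance = cohens_kappa(["Yes", "Yes", "No", "No"], ["Yes", "No", "Yes", "No"])
```

The chance-agreement case shows why kappa is preferred here: a 50% raw agreement rate can still correspond to zero agreement beyond chance.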
We carry out experiments to assess the consistency of CODE-DITING from different perspectives.
(1) Consistency across different code generators. This experiment evaluates whether CODE-DITING maintains consistent judgments when evaluating code generated by different models for the same programming task. We select 50 problems from each dataset and use two models not involved in our training data generation (i.e., GPT-4o and Claude-3.5) to generate code solutions. We then assess whether CODE-DITING produces consistent evaluations regardless of the code’s source.
As shown in Table V, CODE-DITING demonstrates high consistency in its judgments across different code generators, with agreement rates exceeding $93\%$ across all datasets. The exceptionally high Cohen’s Kappa values (ranging from 0.86 to 0.96) indicate near-perfect agreement beyond what would be expected by chance. This consistency is particularly evident on the HumanEval-Judge dataset, where agreement rates reach $98\%$ with GPT-4o-generated code and $97\%$ with Claude-3.5-generated code. Even on the more challenging BigCodeBench-Judge dataset, which involves complex library interactions, CODE-DITING maintains agreement rates of $94\%$ and $93\%$, respectively. These results strongly suggest that CODE-DITING’s evaluation mechanism focuses on the intrinsic quality and correctness of code rather than superficial patterns associated with specific code generators.
(2) Consistency across different problem descriptions. This experiment examines whether CODE-DITING maintains consistent judgments when the same code is evaluated against semantically equivalent but differently phrased problem descriptions. We again select 50 code samples from each dataset and use GPT-4o and Claude-3.5 to generate paraphrased versions of the original problem descriptions while preserving their semantic meaning. We then evaluate whether CODE-DITING’s judgments remain consistent across these different problem formulations.
TABLE VI: Consistency analysis across different problem descriptions
Table VI shows that CODE-DITING maintains even higher consistency, with agreement rates of $94$-$96\%$ across datasets. The Cohen’s Kappa values (0.87-0.92) indicate near-perfect agreement, substantially exceeding what would be expected by chance. Notably, the consistency remains stable across all three datasets, with minimal variation between HumanEval-Judge, MBPP-Judge and BigCodeBench-Judge. The stability is particularly significant for BigCodeBench-Judge, where the complexity of library interactions could potentially make the model more sensitive to variations in problem descriptions. The high agreement rates for both GPT-4o and Claude-3.5 paraphrases demonstrate that CODE-DITING robustly captures the semantic relationship between code and requirements, focusing on functional alignment rather than superficial textual patterns in the problem description. This resilience to paraphrasing suggests that CODE-DITING has developed a deep understanding of programming tasks that transcends specific wording choices.
# Summary of RQ3
CODE-DITING does not suffer from significant preference leakage. It maintains high consistency when evaluating code from different generators and when assessing code against semantically equivalent problem descriptions.
# VI. THREATS TO VALIDITY
Internal Validity. The primary threat to internal validity concerns implementation fidelity. We mitigated this by carefully implementing baseline methods according to their original descriptions, using public implementations where available, and thoroughly validating our CODE-DITING implementation. Regarding potential bias in the distilled CODEJUDGE-17K training dataset, we employed multi-stage filtering to ensure high-quality reasoning paths and accurate labels.
External Validity. External validity threats stem from our dataset and model selections. We chose HumanEval-plus, MBPP-plus, and BigCodeBench for their high-quality test cases and diverse programming scenarios, though future work could explore additional programming paradigms and domain-specific languages. Our model selection spans various scales and architectures (closed-source GPT models, large-scale DeepSeek models, and smaller open-source models ranging from 1.5B to 8B parameters), providing meaningful insights within our hardware constraints (a single RTX 4090 GPU).
Construct Validity. Construct threats concern the metrics used to evaluate CODE-DITING and the compared methods. To measure model performance, we utilized Accuracy, F1-score, and MCC as the evaluation metrics. Furthermore, to evaluate the preference leakage issue of CODE-DITING, we used Agreement Rate and Cohen’s Kappa as the evaluation metrics. | Trustworthy evaluation methods for code snippets play a crucial role in
neural code generation. Traditional methods, which either rely on reference
solutions or require executable test cases, have inherent limitations in
flexibility and scalability. The recent LLM-as-Judge methodology offers a
promising alternative by directly evaluating functional consistency between the
problem description and the generated code. To systematically understand the
landscape of these LLM-as-Judge methods, we conduct a comprehensive empirical
study across three diverse datasets. Our investigation reveals the pros and
cons of two categories of LLM-as-Judge methods: the methods based on general
foundation models can achieve good performance but require complex prompts and
lack explainability, while the methods based on reasoning foundation models
provide better explainability with simpler prompts but demand substantial
computational resources due to their large parameter sizes. To address these
limitations, we propose CODE-DITING, a novel code evaluation method that
balances accuracy, efficiency and explainability. We develop a data
distillation framework that effectively transfers reasoning capabilities from
DeepSeek-R1 671B to our CODE-DITING 1.5B and 7B models, significantly enhancing
evaluation explainability and reducing the computational cost. With the
majority vote strategy in the inference process, CODE-DITING 1.5B outperforms
all models of the same parameter magnitude and achieves performance
comparable to a model with five times as many parameters. CODE-DITING
7B surpasses GPT-4o and DeepSeek-V3 671B, even though it only uses 1% of the
parameter volume of these large models. Further experiments show that
CODE-DITING is robust to preference leakage and can serve as a promising
alternative for code evaluation. | [
"cs.SE",
"cs.AI"
] |
# 1 Introduction
Search-based software engineering (SBSE) has been a prominent field for nearly a quarter-century—approaching its silver jubilee—since it was first introduced by Harman and Jones [17] in 2001. Over the years, it has rapidly evolved to address emerging and complex software engineering problems. It has been successfully applied throughout the software engineering (SE) lifecycle, including requirements engineering, software design, software development, software testing, deployment, and maintenance [19].
SBSE uses metaheuristic search optimization techniques to solve SE problems. This involves reformulating SE problems as metaheuristic search problems by creating solution representation, defining fitness functions, and selecting search operators [17]. The problem formulated in this way is then solved by applying search and optimization techniques to identify optimal solutions. These techniques include single-objective, multi-objective, and many-objective search algorithms, each designed for different problem types and levels of complexity [18, 19].
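The reformulation described above, i.e. defining a representation, a fitness function, and search operators, can be illustrated with a minimal single-objective genetic algorithm; the toy requirements-selection instance below is illustrative, not drawn from the cited studies:

```python
import random

def genetic_search(init, fitness, crossover, mutate, pop_size=30,
                   generations=50, seed=0):
    """Minimal single-objective GA: the three SBSE ingredients are supplied
    as the representation (init), the fitness function, and the operators."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            children.append(mutate(crossover(a, b, rng), rng))
        pop = elite + children
    return max(pop, key=fitness)

# Toy instance: pick a subset of 8 requirements maximizing value under a cost cap
values, costs, budget = [4, 2, 7, 1, 5, 3, 6, 2], [3, 1, 5, 1, 4, 2, 4, 1], 10

def fitness(bits):
    cost = sum(c for b, c in zip(bits, costs) if b)
    return sum(v for b, v in zip(bits, values) if b) if cost <= budget else 0

best = genetic_search(
    init=lambda rng: [rng.randint(0, 1) for _ in range(8)],
    fitness=fitness,
    crossover=lambda a, b, rng: [x if rng.random() < 0.5 else y
                                 for x, y in zip(a, b)],
    mutate=lambda s, rng: [1 - x if rng.random() < 0.1 else x for x in s],
)
print(fitness(best))
```

The same skeleton underlies most SBSE applications; only the three supplied callables change with the SE problem being solved.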
Recently, artificial intelligence (AI) foundation models (FMs) have brought a major transformation, attracting widespread interest across research communities, academia, and industry [23]. These models are trained on vast amounts of data and are capable of performing a wide range of analytical tasks. For specific application domains, these models can be adapted using techniques such as fine-tuning and prompt engineering. Depending on their modalities, architecture, and application domains, FMs can be categorized as follows: Large Language Models (LLMs) like GPT for textual content; Vision Language Models (VLMs) for image and video data; Speech and Audio Models (SAMs) like WaveNet for tasks related to speech recognition, synthesis, and audio analysis; and Multimodal Models (MMs) like CLIP that integrate multiple types of data, such as text and images, to enable content generation and comprehension across different modalities [44]. Their ability to generalize knowledge across multiple domains and adapt to diverse applications has enabled new AI-driven innovations, making them a focal point of exploration. As a result, researchers are actively exploring their potential, academic institutions are integrating them into curricula, and industries are leveraging them to enhance automation, decision-making, and user experiences.
Considering rapid AI transformations driven by FMs, in this paper, we present a strategic roadmap exploring the synergistic interplay between SBSE and FMs in the context of future innovations and advancements. We structure this roadmap to explore four key dimensions, as illustrated in Figure 1: (i) how FMs can be leveraged for SBSE design (Section 2), (ii) how FMs can complement SBSE applied for SE problems (Section 3), (iii) how well-studied SBSE practices can be adapted to FMs customized for particular SE activities (Section 4), and (iv) how SBSE and FMs operate together and their synergistic potential (Section 5). For each dimension, we review the current state of the art, discuss open challenges, and present opportunities with a forward-looking vision that outlines potential research directions.
[Figure 1: Roadmap overview. Sections 2 and 3 cover FM support for SBSE design and for SE artifacts generated with SBSE; Sections 4 and 5 cover SBSE for FM-generated SE artifacts and the combined SBSE-FM workflow, including prompting, fitness evaluation, and refinement loops.]
# 2 Foundation Models for Automating Search-based Software Engineering Design
Recently, LLMs have attracted significant interest in evolutionary computation due to their ability to enhance various aspects [42]. An important step in this regard is to use LLMs as search operators for single and multi-objective optimization, such as language model crossover (LMX) for single-objective [28] and LLM-assisted offspring generation for multi-objective optimization [41]. Another line of work is evolutionary algorithm generation and improvement using LLMs. Notable works include using LLMs to generate swarm intelligence algorithms [31], designing heuristic algorithms [24], and evolving search algorithms [45]. In addition, some studies also focused on using LLMs to assist novice users in interpreting and comprehending evolutionary algorithm-generated solutions [25, 38]. While LLM-based evolutionary computation has recently gained attention in the literature, its effectiveness within the SBSE context remains understudied. Moreover, given LLMs’ potential to enhance evolutionary computation, we anticipate that other FMs, such as VLMs and MMs, could further complement SBSE in solving complex SE problems. Considering this, we outline key research opportunities as follows:
# 2.1 Design of Fitness Functions for SBSE Problems
Typically, a software engineer manually identifies and defines the fitness functions for SBSE problems based on their understanding of the domain. FMs have the potential to assist in automating the definition and subsequent implementation of fitness functions for a given SBSE problem. For example, in test case prioritization, FMs can help provide alternative fitness function options to guide search algorithms in prioritizing test cases, e.g., based on historical data on faults found by each test case and the coverage achieved. We foresee the following options to be investigated with FMs for designing fitness functions for SBSE problems: 1) FMs can be studied as a chatbot created for SBSE users, where they can build fitness functions by interacting with the chatbot and then implement their fitness functions themselves; 2) SBSE users can provide a set of requirements and ask an FM to implement the fitness function. In addition, VLMs, SAMs, and MMs can be used to derive fitness function suggestions from images, audio, and video data, for example, when generating test scenarios for autonomous driving systems (ADS).
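As a concrete illustration of such a fitness function for test case prioritization, the sketch below scores a test ordering by blending historical fault detection and coverage with a rank-based discount; the weighting scheme and data are hypothetical, of the kind an FM might propose:

```python
def prioritization_fitness(order, fault_history, coverage, w=0.5):
    """Score a test ordering: earlier positions weigh more; each test's merit
    is a hypothetical blend of historical faults found and coverage."""
    n = len(order)
    score = 0.0
    for pos, t in enumerate(order):
        weight = (n - pos) / n                        # rank-based discount
        score += weight * (w * fault_history[t] + (1 - w) * coverage[t])
    return score

faults = {"t1": 3, "t2": 0, "t3": 1}                  # faults found historically
cov = {"t1": 0.4, "t2": 0.9, "t3": 0.2}               # fraction of code covered
print(prioritization_fitness(["t1", "t2", "t3"], faults, cov))  # 2.2
```

Running the historically fault-revealing test first scores higher than the reverse order, which is exactly the signal a search algorithm would exploit.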
# 2.2 Design of Search Operators
In a recent study, Wu et al. [42] categorized various LLM-based search operators, such as crossover and mutation, for single and multi-objective algorithms. However, the application of such operators in SBSE remains undetermined. Therefore, one potential research direction is to explore LLM-based search operators in the SBSE context. Another research direction is to define search operators using other FMs like VLMs and MMs to enable SBSE applications to emerging domains such as autonomous systems and quantum computing. For example, when dealing with ADS data in image format, VLM-based search operators can be employed to generate test scenarios. Similarly, for ADS data in audio or video formats, search operators based on SAMs and MMs can enhance test scenarios considering the mixed-mode nature of data.
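An LLM-based mutation operator of this kind can be sketched as a prompt-construction wrapper; here `llm_complete` is a stand-in stub (a real implementation would call an FM API), so the "variation" it returns is only a deterministic placeholder:

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a real FM call (e.g., an API client). Here it just
    deterministically tweaks the candidate so the sketch is runnable."""
    candidate = prompt.rsplit("SOLUTION:", 1)[1].strip()
    return candidate[::-1]  # placeholder "rewrite"

def llm_mutate(solution: str, objective: str) -> str:
    """LLM-based mutation: describe the objective and the current individual,
    then ask the model for a small variation (the prompt wording follows the
    general pattern of LMX-style operators; it is illustrative, not from [42])."""
    prompt = (
        f"OBJECTIVE: {objective}\n"
        "Produce a small variation of the following candidate solution.\n"
        f"SOLUTION: {solution}"
    )
    return llm_complete(prompt)

print(llm_mutate("abc", "maximize branch coverage"))  # 'cba' (stub reversal)
```

Swapping the stub for a genuine model client is all that separates this sketch from a usable operator, which is what makes such operators easy to drop into existing SBSE loops.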
# 2.3 Solution Encoding for SBSE Problems
A fundamental step in SBSE is solution encoding for a particular SE problem. This varies from one problem nature to another and requires problem-domain knowledge. FMs with diverse capabilities can support solution encoding for different SE lifecycle phases. For instance, to verify software models with SBSE during the design phase, MMs with both textual and visual capabilities can facilitate solution encoding by analyzing software requirements and design artifacts, often created using graphical modeling languages such as Unified Modeling Language (UML).
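Two encodings commonly used in SBSE can be sketched as simple decode functions mapping genotypes back to SE artifacts (the requirement names are illustrative):

```python
def decode_binary(bits, items):
    """Binary encoding: bit i selects item i (e.g., requirements selection)."""
    return [it for b, it in zip(bits, items) if b]

def decode_permutation(indices, items):
    """Permutation encoding: gene order is the priority/execution order."""
    return [items[i] for i in indices]

reqs = ["login", "search", "export"]
print(decode_binary([1, 0, 1], reqs))       # ['login', 'export']
print(decode_permutation([2, 0, 1], reqs))  # ['export', 'login', 'search']
```

An FM assisting with solution encoding would essentially be proposing such genotype-to-artifact mappings (and their inverses) for the problem at hand.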
# 2.4 FMs for SBSE Implementation Generation
One advanced application of FMs is generating either partial or complete implementations of SBSE problems. In a partial implementation, FMs generate code to solve an SBSE problem, which the user then completes manually. Alternatively, FMs can create complete implementations of SBSE problems that can be further refined. This capability extends across key SBSE components, including solution encoding, operator selection, fitness function design, and search algorithm selection, enabling more efficient and automated problem-solving.
# 2.5 FMs for Testing, Debugging, and Repair of SBSE Implementation
The implementation of SBSE problems is challenging to test due to the absence of test oracles. To this end, FMs can be used to automate the testing of SBSE problems, ensuring that their implementation is correct. Therefore, further research is needed to assess whether FMs can be used to generate tests as well as test oracles for a given SBSE problem. Once issues are identified in SBSE implementations through testing, it is important to debug and repair them. These tasks are typically performed manually. In this context, FMs can be used to automatically debug SBSE implementations and generate patches to fix identified issues. However, this requires further research investigation, which has not been a focus in the literature.
# 3 Foundation Models for SE Artifacts Generated with Search-based Software Engineering
In recent years, there has been growing research interest in various SE areas in using LLMs for SBSE. One notable area is software genetic improvement (GI). In this context, Kang and Yoo [20] presented the initial concept of generating mutants from LLMs to optimize the process of GI. Similar to this work, Brownlee et al. [5] employed LLMs as mutation operators to increase diversity in the search process for GI. Software program refactoring represents another emerging direction where FMs are applied in SBSE. In this regard, Choi et al. [10] utilized LLMs to guide a search-based approach, enhancing refactoring efficiency while reducing complexity. Another key area of interest is software testing, where LLMs are being explored for search-based software testing (SBST). In this case, Turhan [40] introduced evoLLve’M, an approach that utilizes LLMs to enhance JUnit test assertions generated by EvoSuite’s evolutionary process. Similar to this work, Biagiola et al. [4] employed LLMs to improve the readability of unit tests generated by EvoSuite through search techniques.
In summary, the literature review indicates that (i) LLMs, among various FMs, are increasingly used for SBSE, and (ii) numerous SE phases, including requirements engineering, software design, and debugging, remain largely unexplored. Despite SBSE’s long-standing role in solving SE problems, several challenges persist, which we anticipate can be addressed through FMs in the future. Below, we highlight key opportunities and research directions.
# 3.1 FMs for SE Artifact Interpretation
An SE problem in SBSE is formulated using different solution encoding schemes, such as Binary or Tree-based encoding, depending on the problem’s nature. The final output is generated as an encoded solution, often requiring manual effort for interpretation and transformation into a specific SE artifact. This effort varies based on the SE lifecycle phase. For instance, in the case of requirements prioritization for the next release problem (NRP), the SBSE-generated prioritization sequence needs to be interpreted as textual requirements and utilized for the NRP. Similarly, software models generated with SBSE need to be converted to graphical models. In this context, FMs can play a significant role in two directions. First, LLMs with text comprehension and analytical capabilities can be used for SBSE-generated SE artifacts that are required to be interpreted and represented in a textual format. Second, VLMs and MMs can be employed to analyze and transform SBSE-generated SE artifacts into graphical representations.
# 3.2 FMs for SE Artifact Refinement
For some SE problems, SBSE-generated solutions do not need to be transformed. For example, test data generation for primitive types and program improvement with genetic programming. Such scenarios require domain experts to verify the generated solution manually. In the case of test data generation, test effectiveness using metrics like coverage needs to be analyzed after running tests on the system under test. Similarly, programs generated with genetic programming either need to be compiled and tested, or they need to be manually examined using code inspection and walkthrough techniques. FMs have significant potential in these cases. For the test data generated using SBSE, LLMs can conduct a static analysis utilizing the source code and test data to provide insights on test effectiveness, such as untested paths or incomplete coverage scenarios. In developing LLM-based static analysis techniques, we foresee the potential for devising novel test effectiveness metrics that complement these techniques. Furthermore, LLM-based static analysis (e.g., type checking and data flow analysis) can be further extended to verify programs generated with SBSE statically.
# 3.3 FMs for Pareto Analysis
In the case of multi- and many-objective SE problems, SBSE generates non-dominated or Pareto-optimal solutions in the objective space. A common practice is to analyze the resultant solutions and select the best one meeting a particular set of requirements. This process needs to be repeated whenever requirements change. In this regard, LLMs can support domain experts in understanding the trade-offs between different objectives and selecting the best solution for a given scenario. Moreover, for non-expert users, VLMs or MMs can construct a visual representation of the Pareto front, illustrating the set of all non-dominated solutions and offering suggestions to assist in selecting the most suitable option.
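Extracting the set of non-dominated solutions that such analysis starts from is straightforward; a minimal sketch, assuming all objectives are minimized and using hypothetical (cost, risk) values for candidate release plans:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization assumed for all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy bi-objective space: (cost, risk) for six candidate release plans
plans = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5)]
print(pareto_front(plans))  # [(1, 9), (2, 7), (4, 4), (6, 3)]
```

The role envisioned for LLMs and VLMs/MMs above begins where this computation ends: explaining the trade-offs along this front and helping a user pick one point from it.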
# 4 Search-based Software Engineering for SE Artifacts Generated with Foundation Models
FMs are increasingly being integrated into the entire SE lifecycle [23]. This results in FMs customized for specialized SE activities. A prominent example is LLMs tailored for different coding tasks, such as CodeX [9] for code generation and CodeGen [9] for program synthesis. In addition to code generation, these LLMs are employed to generate test cases [13]. For debugging, LLMs have been utilized for fault localization [43] and software bug reproduction from bug reports using prompting techniques [21]. Another line of research is focused on applying SBSE techniques to optimize FMs for specific SE problems. A noteworthy area is software effort estimation, where Tawosi et al. [39] introduced the idea of applying search techniques to optimize few-shot methods for fine-tuning LLMs to enhance their performance in estimating software effort. Another area of research focuses on enhancing image quality through SBSE. In this regard, Berger et al. [3] presented StableYolo, which employs a multi-objective search to optimize LLM prompts and parameters for generating high-quality and realistic images using tools such as YOLO and Stable Diffusion. Later, Gong et al. [16] introduced GreenStableYolo, which applies a multi-objective search for efficient LLM inference while maintaining high-quality image generation.
A key limitation noted in the literature is the lack of sufficiently large, high-quality datasets. Since FMs like LLMs are trained on enormous amounts of data, applying SBSE to optimize techniques such as prompting or few-shot learning necessitates access to similarly extensive and reliable data. Despite this challenge, an overview of existing works on applying SBSE to LLMs indicates that SBSE has significant potential for optimizing SE tasks utilizing FMs. In summary, this highlights several open problems that need to be addressed and significant opportunities for future research to expand the application of SBSE techniques to a broader range of FMs (e.g., VLMs and MMs) and diverse SE domains. Below, we present possible opportunities and future directions.
# 4.1 Search-based Code Improvement
Recent literature has shown significant interest in using general and fine-tuned LLMs for code generation. However, LLM-generated code often leads to compiler errors, and testing such code remains an open challenge [13]. Furthermore, integrating LLM-generated code in the software code base requires significant manual effort in analyzing code, resolving conflicts, and testing the integrated code. The current practice to improve the code generated by LLMs is using prompt engineering [13]. Building on the extensive research and practical applications of search-based program improvement and code repair, applying these techniques to refine LLM-generated code opens significant research opportunities. This advancement will extend the applicability of traditional SBSE techniques from human-written code to optimizing LLM-generated code.
# 4.2 Search-based Testing of FMs-Generated Artifacts
Similar to code generation challenges, the test cases generated from LLMs suffer from several issues, such as erroneous tests, low coverage, and test flakiness [13]. The existing approaches tackle these issues using different LLMs and prompting techniques. Given the well-studied SBST techniques, one potential research direction is to apply SBST to improve tests generated with LLMs. Moreover, in case a large number of tests are generated with LLMs, it is infeasible to run all tests for regression testing in continuous integration/continuous deployment (CI/CD) or DevOps environments. Therefore, another research direction is to apply SBST regression test selection, minimization, and prioritization techniques for LLM-generated tests.
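As one example of the prioritization techniques mentioned above, the classic additional-greedy heuristic can be applied directly to LLM-generated tests; the coverage data below is hypothetical:

```python
def greedy_prioritize(coverage):
    """Additional-greedy prioritization: repeatedly pick the test covering the
    most not-yet-covered elements (a classic regression-testing heuristic)."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # ties broken by lexicographically smallest test name
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered and covered:
            order.extend(sorted(remaining))  # nothing adds coverage; append rest
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

# LLM-generated tests with the branch IDs they cover (toy data)
tests = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {5}, "t4": {1, 4}}
print(greedy_prioritize(tests))  # ['t2', 't1', 't3', 't4']
```

For regression test selection, the same loop can simply stop once full coverage is reached, discarding the redundant tail of the ordering.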
# 4.3 Search-based Debugging, Repair, and Evolution of FMs-Generated Artifacts
LLM-based fault localization and reproduction techniques generate fault reports, which are subsequently used for tasks such as debugging, program repair, and test evolution. Given that SBSE literature has well-established techniques for software maintenance and evolution, we argue that these techniques can be leveraged to enhance the process. For instance, search-based program repair techniques can be adapted to improve code as a next step after fault localization by LLMs. Moreover, once the code has been improved, another direction involves using SBSE to evolve tests based on the fault information.
# 5 Interplay between Search-based Software Engineering and Foundation Models
A growing research trend is to explore the synergy between SBSE and LLMs, investigating how LLMs can enhance SBSE techniques and how SBSE can improve LLMs for SE tasks. One notable area is software development, where Gao et al. [15] introduced SBLLM, a framework for code optimization that leverages LLMs to apply evolutionary search techniques like crossover and mutation. For the application-specific use of search and LLMs, Shojaee et al. [37] presented LLM-SR, an approach that combines LLMs and evolutionary search to discover mathematical equations represented as programs, demonstrating the potential of combining LLMs and search techniques to enhance accuracy and efficiency. Another prominent area of interest is exploring the potential of LLMs with SBST. In this direction, Dakhama et al. [12] introduced SearchGEM5, an approach that combines search-based fuzzing with LLMs to generate test cases for the Gem5 simulator. Later, Bruce et al. [6] presented an approach that utilizes LLMs to guide search-based fuzzing toward finding faults in ARM simulator software. Lemieux et al. [22] presented a technique called CodaMosa, which integrates a multi-objective search algorithm with a coding LLM to generate tests for Python code capable of achieving high coverage. Furthermore, Sapozhnikov et al. [33] presented TestSpark, a tool that utilizes the search-based capabilities of EvoSuite and LLMs’ textual capabilities to generate JUnit tests. Subsequently, Abdullin et al. [1] conducted a comparative study of search-based tools, symbolic execution, and TestSpark configured with various LLMs. Their results demonstrated that while LLM-based tools achieved lower code coverage, they attained higher mutation scores.
While the synergistic potential between SBSE and FMs remains largely unexplored, early research reveals a promising interplay. However, it also highlights a significant performance limitation due to the substantial computational resources required by the integrated use of SBSE and FMs. In the following, we present a research roadmap and opportunities, exploring the potential of SBSE and FMs in advancing metaheuristics search in SBSE, overcoming FM-related challenges, solving SE problems, and addressing challenges in emerging domains.
# 5.1 Advancing Metaheuristics Search in SBSE
As highlighted in the latest survey [42] and our literature analysis, FMs, specifically LLMs, can support various search algorithms in multiple ways. This includes using FMs to generate populations, perform crossover and mutation, act as fitness functions, and support multi-objective optimizations. Given the advantages that FMs bring to the search process, one potential research direction is developing novel FM-inspired search algorithms. Furthermore, search-based algorithms, such as genetic algorithms (GAs), initiate the search process with a randomly generated population. During the search process, different selection strategies like tournament and rank-based selection are used to pick individuals from the current population to generate offspring for the next generation. For solving complex domain problems, intelligently selecting and reusing solution individuals has been demonstrated to be an efficient strategy [35, 36]. In this context, a promising research direction is using FMs to leverage domain knowledge for generating an optimized initial population, thereby enhancing the efficiency of the search process. For instance, for SBSE applied to emerging domains requiring mixed-mode capabilities, such as robotics and ADS, FMs like VLMs or MMs can support generating the initial population. This could significantly broaden SBSE’s applicability to diverse, complex domains with cutting-edge challenges. Moreover, during the search, FMs can be used to calculate fitness, depending on the SBSE problem at hand.
For example, in ADS testing, FMs can be queried to assess whether a scenario is realistic. This information can then be used to define fitness functions that guide search algorithms for SBSE problems. Thus, more research is needed to investigate which SBSE problems can utilize FMs to calculate fitness during the search. Another significant challenge is that search algorithms often generate invalid solutions that require repairs. To address this, FMs can repair solutions during the search, ensuring the search is properly guided. To this end, new research methods based on FMs are needed to prompt FMs to repair solutions when issues are encountered during the search.
# 5.2 Addressing FM Challenges
A key challenge of FMs is non-determinism, where the same input can produce varying outputs, affecting consistency and reliability. This non-determinism leads to several challenges in applying FMs in SE development phases. For instance, in CI/CD and DevOps, variability in generated configurations may lead to inconsistencies in software releases. SBST can complement the handling of non-determinism in FMs through search-based robustness testing techniques that explore inputs leading to high model variability. Another direction in which SBSE can be applied is to find the best configurations for software and hardware resources that minimize variability in model behavior. Another challenge of FMs is their inherent uncertainty. Since FMs are trained on vast amounts of data, fine-tuning them also requires a significant amount of data, and the quality of such data (e.g., redundant features or imbalanced data) can be a potential source of uncertainty. Such uncertainties can significantly affect model inference, leading to ineffective performance for several SE phases. SBSE can be applied to select an optimal set of features, which will result in effectively managing data-related uncertainty and efficient training/fine-tuning of FMs. Furthermore, a key challenge in FMs is hallucination, where the model generates entirely fabricated or factually incorrect information [32], posing a significant concern in FM applications. As a result, detecting and mitigating hallucinations has become a key focus of ongoing research [8]. In this regard, we foresee significant potential for SBSE to effectively address the challenge of hallucinations in FMs. Specifically, SBSE can complement black-box methods for hallucination detection, as highlighted by the need for such methods [8].
By leveraging a search-based approach, a search algorithm can iteratively generate a diverse population of prompts, collect corresponding responses from FMs, and use a fitness function to continuously assess response variations, enabling more precise hallucination detection.
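This loop can be sketched as follows; `fm_sample` is a fabricated stand-in for a stochastic FM call (a real system would query a model), and the fitness is simply the number of distinct responses a prompt elicits:

```python
import random

def fm_sample(prompt, rng):
    """Stand-in for a stochastic FM call: a fabricated response model in
    which vaguer prompts (fewer words) yield more varied answers."""
    spread = max(1, 6 - len(prompt.split()))
    return str(rng.randint(0, spread))

def variability(prompt, rng, samples=40):
    """Fitness: number of distinct responses, a crude proxy for how strongly
    a prompt destabilizes the model."""
    return len({fm_sample(prompt, rng) for _ in range(samples)})

rng = random.Random(1)
# A "population" of prompt variants; a search algorithm would evolve these
population = ["capital of France", "capital", "name the capital city of France"]
scored = sorted(population, key=lambda p: -variability(p, rng))
print(scored[0])  # the vaguest prompt surfaces first under this toy model
```

A full search-based detector would add crossover and mutation over the prompt population and a more principled variation measure (e.g., semantic disagreement between responses rather than string inequality).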
# 5.3 Solving SE Problems
SBSE has been applied to many software requirement engineering (RE) problems, including requirement selection and prioritization, requirement satisfaction, and fairness analysis in requirements. Software requirements are typically elicited and documented in a textual format. FMs, specifically LLMs with text generation, comprehension, and analysis capabilities, can complement search-based RE in many ways. For instance, in the case of requirements selection, LLMs can be provided with software requirements and then act as a fitness function during search optimization for solving the next release problem. Moreover, for requirements prioritization with multi-objective search, LLMs can be used to analyze conflicting objectives and select Pareto-dominant solutions. In addition to RE, SBSE techniques can be applied to test FMs used to solve SE problems (e.g., code generation and test generation). Taking the example of FMs generating source code, the aim is to test whether FMs generate correct code. For this purpose, SBST can be applied to generate test cases that prompt FMs to generate source code. The generated source code can then be used to compute a fitness function, guiding the search to identify faults that make FMs generate the wrong code. To this end, with carefully designed fitness functions, search-based testing techniques can be used to identify faults in FMs for various software engineering tasks. Once faults are identified using SBSE approaches, they can also be employed to repair FMs. This can be achieved by defining a fitness function to generate patches (e.g., fine-tuning parameters) for FMs until the faults are resolved. In software testing, graphical user interface (GUI) testing presents a compelling application for integrating SBSE and FMs. Modern software applications are designed with advanced GUIs to facilitate user-friendly interactions. GUI testing is essential to ensure functionality, visual consistency, compatibility, and usability. 
The available tools for GUI testing, like GUITAR [30] and Selenium [7], require test scripts to execute tests. Therefore, combining SBST with MMs, which possess mixed-mode capabilities, such as textual and graphical elements, presents an intriguing direction for future research.
# 5.4 FM-Integrated SBSE for Emerging Domains
FMs integrated with SBSE can be used to solve complex and dynamic nature problems in emerging domains. For example, in system-level testing of live Internet of Things (IoT) applications, the comprehension and analytical capabilities of FMs (such as LLMs or MMs), combined with SBST, can enable testers to perform online testing in an organized and intelligent manner. Similarly, VLMs and MMs can facilitate the safety analysis of self-adaptive robotics by processing mixed-mode data from cameras, sensors, and actuators. Overall, we envision that integrating
FMs with SBSE will offer substantial support across the software engineering lifecycle in diverse domains, including cyber-physical systems, smart grids, and autonomous vehicles.
# 6 Related Works
To the best of our knowledge, this paper presents the first forward-looking roadmap for SBSE in the context of FMs. While several existing studies have presented the state of research and future directions, many are outdated or no longer reflect the latest advancements, particularly those driven by FMs. Our work complements the existing works by introducing a timely and forward-thinking vision for the evolution of SBSE in the FM era. Initially, Harman et al. [19] presented a detailed survey of the state of SBSE research up to 2010, along with challenges and future directions. Subsequently, Colanzi et al. [11] presented a systematic mapping study to analyze the evolution of SBSE from 2009 to 2019. Sarro [34] later presented a brief vision highlighting SBSE applications for building responsible software systems. Recently, Wu et al. [42] presented a literature review and a roadmap for integrating LLMs with evolutionary algorithms (EAs). In comparison, our work focuses on the synergistic potential between SBSE, which leverages EAs for solving SE problems, and FMs, which encompass not only LLMs but also VLMs, SAMs, and MMs. Furthermore, we specifically present a roadmap focusing on the potential interplay between SBSE and FMs within the SE context, whereas their work focused on cross-domain research areas.
In the context of SBST, McMinn [26] conducted a comprehensive survey that focused on test data generation using search techniques. Similarly, Ali et al. [2] provided a systematic review of the literature on empirical studies related to search-based test case generation up to the year 2007. McMinn [27] presented a review of the SBST area, covering studies up to 2011, open challenges, and future directions. Neelofar et al. [29] analyzed the strengths and limitations of automated techniques in SBST, specifically focusing on instance space analysis. In recent work, Fraser and Arcuri [14] outlined a brief vision for the future of search-based test generation in the context of LLMs. In contrast, our work takes a broader perspective, exploring the interplay between SBSE and FMs (in addition to LLMs). | Search-based software engineering (SBSE), at the intersection of artificial
intelligence (AI) and software engineering, has been an active area of research
for about 25 years. It has been applied to solve numerous problems across the
entire software engineering lifecycle and has demonstrated its versatility in
multiple domains. With the recent advancements in AI, particularly the
emergence of foundation models (FMs), the evolution of SBSE alongside FMs
remains undetermined. In this window of opportunity, we propose a research
roadmap that articulates the current landscape of SBSE in relation to FMs,
highlights open challenges, and outlines potential
research directions for advancing SBSE through its interplay with FMs. This
roadmap aims to establish a forward-thinking and innovative perspective for the
future of SBSE in the era of FMs. | [
"cs.SE",
"cs.AI"
] |
# 1 Introduction
Urban traffic congestion has become one of the most pressing challenges in modern cities, leading to increased travel times, environmental pollution, and economic losses. As urban populations continue to grow, efficient traffic management systems are essential for maintaining the functionality of urban infrastructure. Among various solutions, traffic signal control (TSC) plays a central role in optimizing vehicle flows at intersections to mitigate congestion. Traditional TSC methods, such as Fixed-Time Control (FTC)[1], MaxPressure control[2], and adaptive systems like SCOOT[3], have been widely deployed in real-world systems. However, these approaches often rely on static configurations or pre-defined heuristics, making them ill-suited for coping with the highly dynamic and stochastic traffic conditions that frequently occur in urban environments.
In recent years, reinforcement learning (RL) has emerged as a promising approach for adaptive TSC[4, 5, 6], owing to its ability to learn optimal policies through interaction with the environment. As illustrated in Figure 1b, existing RL-based TSC methods are broadly categorized into centralized and multi-agent approaches. Centralized RL treats the entire network as a single decision-maker and effectively optimizes global objectives[7, 8, 9], but suffers from poor scalability due to the exponential growth of joint state and action spaces. Multi-agent RL improves scalability by assigning an agent to each intersection[10, 11, 12, 13], yet often lacks global coordination, leading to misaligned local decisions and reduced system-wide efficiency. Challenge 1: How to develop an RL-based TSC method that simultaneously ensures scalability and global coordination remains unresolved.
Despite the development of various RL algorithms, the evaluation of TSC methods remains constrained by the limitations of existing benchmark scenarios. Widely used scenarios (such as Grid $5 \times 5$ [14], Arterial $4 \times 4$ [15], and Cologne8[16]) are based on either synthetic road networks or simplified real-world maps, and the traffic flows are simulated using default configurations or real data sampled from a narrow time window. Challenge 2: These scenarios fail to capture the complexity and variability of real-world traffic, which can be influenced by diverse conditions such as adverse weather, holiday travel surges, or transitions between peak and off-peak periods, as shown in Figure 1a.
Figure 1: Realistic traffic flow scenarios and RL architectures for TSC
Several non-trivial technical issues must be resolved when addressing these two challenges. First, agents must optimize global rather than local traffic objectives, which requires capturing long-range dependencies and coordinating actions across distant intersections. Second, preserving global information in large networks raises scalability concerns, as the observation space expands rapidly with the number of intersections, leading to memory and computational bottlenecks. Third, coordinating a growing number of agents becomes increasingly complex. Additionally, it is essential to construct diverse traffic flow settings that reflect real-world conditions, enabling a more comprehensive evaluation of TSC methods.
To tackle these issues, we propose HiLight, a hierarchical RL framework consisting of two components: a Meta-Policy and a Sub-Policy. The high-level Meta-Policy partitions the traffic network into subregions and generates a sub-goal using an architecture that combines a Transformer encoder and an LSTM, to effectively capture spatial correlations and temporal dynamics. The sub-goal is then used to guide the low-level Sub-Policy agents, which control individual intersections and make fine-grained decisions based on their local observations as well as global features provided by the Meta-Policy. In addition, an adversarial training strategy is employed, in which the Meta-Policy learns to produce increasingly challenging sub-goals, while the Sub-Policy is trained to outperform these goals in actual traffic performance. To evaluate the effectiveness of the proposed method across different conditions, we conduct experiments in both large-scale and standard benchmark scenarios. Specifically, we design three diverse traffic flow patterns based on the Manhattan network to serve as large-scale stress tests, and also validate our method on five widely used traffic environments. Experimental results demonstrate that our approach achieves strong and consistent performance, particularly under large-scale and dynamic urban traffic settings.
# 2 Related Works
Traffic Signal Control TSC involves dynamically adjusting traffic light phases to optimize flow across a road network. Traditional methods—such as FTC[1], MaxPressure[2], SCOOT[3], and SCATS[17]—rely on heuristic rules or local feedback, limiting their adaptability to dynamic and uncertain traffic conditions. Subsequent studies explored kernel-based methods and natural actor-critic algorithms to enhance function approximation capabilities, and adopted deep reinforcement learning techniques such as DQN, DDPG, and A2C to improve policy learning [18, 19, 20, 21, 22]. However, these methods are typically evaluated in isolated or small-scale networks and fail to address scalability in large-scale scenarios.
Multi-agent reinforcement learning (MARL) approaches, such as IDQN [23, 24], IPPO [25], and MAPPO [26], often rely on agent interaction only during training, leading to poor collaboration during decentralized execution. To enhance coordination among intersections, recent studies introduced explicit collaboration mechanisms, such as modeling neighboring representations using graph neural networks (e.g., LibSignal, CoLight [27], MaCAR [12], DynSTGAT [28]) or applying clustering strategies to form cooperative groups [29, 13]. However, these methods typically enable collaboration only within limited regions and fail to provide global context to all intersections in large-scale networks, thereby overlooking the potential inconsistency between local optimization and global traffic efficiency. In addition, MetaLight [30] and MetaVIM [31] adopt meta-learning techniques to improve generalization across scenarios, yet their reliance on complex architectures or handcrafted rules hampers scalability and deployment in heterogeneous real-world traffic networks.
Hierarchical RL Hierarchical reinforcement learning (HRL) enables more efficient learning and decision-making by structuring policies across multiple levels, where higher-level controllers make decisions and lower-level policies execute fine-grained actions. Existing HRL methods can be broadly categorized into options-based frameworks[32, 33], goal-conditioned hierarchical policies[34, 35], and feudal or manager-worker architectures[36, 37]. These approaches commonly define a high-level policy (or manager) that selects sub-goals or tasks, while a low-level policy (or worker) executes primitive actions to achieve them. Although HRL has demonstrated strong performance in domains such as robotics and navigation, it has rarely been applied to large-scale urban traffic control, where the agent must coordinate across many regions, balance local reactivity with global efficiency, and adapt to diverse traffic patterns.
# 3 Problem Definition
# 3.1 Preliminaries
Traffic Intersection An intersection refers to a location in the road network where two or more roads cross or merge, typically equipped with traffic signals to regulate vehicle movement. Figure 2 shows a typical example: a standard four-leg intersection (also called a four-arm intersection), where four roads meet, forming a cross shape. Each approach (arm) contains three incoming lanes $l_{in}$ dedicated to left-turn, through, and right-turn movements, and typically three corresponding outgoing lanes $l_{out}$.
Signal Phase A phase is defined as the set of green light indications for all incoming lanes at an intersection. As illustrated in Figure 2, a standard intersection typically adopts eight phases, each governing vehicle movements such as left turns, through movements, and right turns. In some scenarios, a single incoming lane may accommodate multiple movement directions. For example, in the Arterial $4{\times}4$ scenario, certain lanes serve both through and right-turning vehicles. Additionally, in accordance with real-world practices, right-turning vehicles are not regulated by traffic signals.
Figure 2: The illustration of intersection, phases and pressure
Pressure Pressure is a metric used to quantify the imbalance of traffic flow at an intersection, specifically between incoming and outgoing lanes. It is typically defined as the difference between the queue lengths (or vehicle densities) of incoming and outgoing lanes for a specific movement. The total pressure $P$ for a phase is usually calculated as the sum of pressures over all movements controlled by that phase: $P = \sum_{(i,j)} (q_i - q_j)$, where $q_i$ is the number of vehicles on the incoming lane $l_{in}^i$, and $q_j$ is that on the corresponding outgoing lane $l_{out}^j$, as shown in Figure 2.
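As a concrete illustration, the phase pressure above reduces to summing per-movement queue differences (a minimal sketch in plain Python; function and variable names are ours, not from the paper):

```python
# Sketch (not the paper's code): phase pressure as the sum of
# incoming-minus-outgoing queue differences over the movements
# a phase controls.

def movement_pressure(q_in, q_out):
    """Pressure of one movement: vehicles queued on the incoming lane
    minus vehicles on the corresponding outgoing lane."""
    return q_in - q_out

def phase_pressure(movements):
    """Total pressure P = sum over (q_i, q_j) pairs of (q_i - q_j)
    for all movements controlled by the phase."""
    return sum(movement_pressure(q_i, q_j) for q_i, q_j in movements)

# Example: a phase controlling two movements.
# Left turn: 6 queued vehicles in, 2 out; through: 4 in, 5 out.
p = phase_pressure([(6, 2), (4, 5)])
print(p)  # 3 -> positive pressure: more demand upstream than space downstream
```

A positive total indicates the phase relieves more upstream demand than the downstream lanes can absorb, which is why MaxPressure-style controllers greedily activate the highest-pressure phase.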
# 3.2 Problem Formulation
We model the large-scale TSC problem, based on Multi-agent Markov Decision Processes (MMDPs), as a hierarchical MMDP defined by the tuple $\langle \mathcal{I}, \mathcal{S}, \mathcal{O}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma \rangle$. In this paper, we refer to the high-level policy of HRL as the Meta-Policy, which is responsible for generating abstract goals or selecting sub-policies. The low-level policy, referred to as the Sub-Policy, executes concrete actions to fulfill the objectives specified by the Meta-Policy. The set of intersections is defined as $\mathcal{I} = \{1, 2, \dots, N\}$, where $N$ is the total number of intersections and each $i \in \mathcal{I}$ is controlled by an agent. All agents share the same set of parameters. $\mathcal{S}$ denotes the global state, and $\mathcal{O}$ is a partial observation derived from $\mathcal{S}$ via an observation function $f_{obs}: \mathcal{S} \to \mathcal{O}$, with $s \in \mathcal{S}, o \in \mathcal{O}$. The action space is defined as $\mathcal{A} = \{A^H, A^L\}$, where $A^H$ contains only a single global sub-goal $a^H$ assigned by the Meta-Policy, and each Sub-Policy agent selects its local action from $A^L = \{a_i^L\}_{i \in \mathcal{I}}$ based on its local observation and the global sub-goal $a^H$. $\mathcal{P}$ and $\gamma$ denote the transition probability function and the discount factor, respectively. The reward function $\mathcal{R}$ is shared among all intersections and consists of a global high-level reward and local low-level rewards, formulated as:
$$
\mathcal{R}(s, a^H, a_1^L, \ldots, a_N^L, s') = \mathcal{R}^H(s, a^H, s') + \sum_{i \in \mathcal{I}} \mathcal{R}_i^L(s_i^L, a_i^L, s_i^{\prime L})
$$
where $s' \in \mathcal{S}$ is the next state generated according to the transition probability $\mathcal{P}(s' \mid s, a^H, a^L)$. $\mathcal{R}^H$ reflects the overall performance of the sub-goal assignment, and $\mathcal{R}_i^L$ corresponds to the local objective of each agent $i \in \mathcal{I}$, such as minimizing queue length or waiting time. Correspondingly, the overall policy is factorized into a high-level global policy $\pi^H$ and decentralized local policies $\pi^L$, such that the joint policy $\pi^{all}$ is defined as the product of these two levels. Specifically, the high-level policy $\pi^H$ selects a global sub-goal $a^H$ based on the global state $s$, while the low-level policy $\pi^L$ chooses the local action $a_i^L$ conditioned on the local observation and the global sub-goal $a^H$. The overall joint policy can thus be written as:
$$
\pi^{all}(a^H, a^L \mid s) = \pi^H(a^H \mid s) \cdot \prod_{i=1}^{N} \pi^L(a_i^L \mid o^i, a^H)
$$
All agents coordinate to maximize the expected cumulative discounted reward:
$$
\max_{\pi^{all}} \mathbb{E}_{\pi^{all}} \left[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t^H, a_t^L) \right]
$$
Note that, for clarity in illustrating the overall process of the Markov decision process, the observation function $f _ { o b s }$ defined above has been simplified. In practical implementations, the observation functions for the Meta-Policy and the Sub-Policy workers are not identical. The detailed design and differences between these components are elaborated in the next section.
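The product factorization of the joint policy can be made concrete numerically (an illustrative sketch; the probabilities below are made up, and in log space the product becomes a sum):

```python
# Sketch (not the paper's code): the joint hierarchical policy
# factorizes as pi_all(a_H, a_L | s) = pi_H(a_H | s) * prod_i pi_L(a_i | o_i, a_H).
# Working in log space, the product of per-level probabilities is a sum of logs.
import math

def joint_log_prob(logp_high, logps_low):
    """Log-probability of the joint action:
    log pi_all = log pi_H + sum_i log pi_L_i."""
    return logp_high + sum(logps_low)

# Hypothetical example: the Meta-Policy picks its sub-goal with prob 0.5,
# and three Sub-Policy agents pick local phases with probs 0.8, 0.6, 0.9.
lp = joint_log_prob(math.log(0.5), [math.log(0.8), math.log(0.6), math.log(0.9)])
print(round(math.exp(lp), 4))  # 0.216 = 0.5 * 0.8 * 0.6 * 0.9
```

Because all low-level agents share parameters, only one $\pi^L$ network is trained even though the product ranges over all $N$ intersections.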
# 4 Methodology
In this section, we propose a novel HRL framework HiLight for traffic signal control. As shown in Figure 3, the architecture consists of two branches: a global Meta-Policy and a local Sub-Policy, which work in coordination while being jointly optimized in an adversarial manner to achieve efficient control across large-scale traffic networks.
To address the challenge of balancing global coordination and local adaptability in large-scale traffic environments, HiLight leverages a Meta-Policy, based on Transformer and LSTM, to extract global representations from historical regional observations, and a shared-parameter Sub-Policy to make localized control decisions based on both local states and global guidance. The Meta-Policy provides a goal that serves as a high-level directional signal for all agents, while the Sub-Policy adapts its behavior accordingly through an Actor-Critic framework. Meanwhile, to improve learning efficiency and policy performance under sparse rewards and multi-agent objectives, we introduce an adversarial training mechanism, in which the Meta-Policy aims to generate increasingly challenging goals, and the Sub-Policy seeks to surpass them. The following sections provide more details of HiLight.
# 4.1 Meta-Policy
In large-scale TSC scenarios, due to the high dimensionality of the global state, traffic signals are typically controlled using MARL, where each intersection is managed by an individual agent making local decisions. To enhance the perception range of each agent, existing studies mainly adopt approaches such as assigning collaborators or forming groups[38, 29]. However, in large-scale networks, the information provided by such methods remains limited. Agents often lack awareness of the overall traffic conditions across the network, which may lead to decisions that are misaligned with the global traffic efficiency. In contrast to these approaches, the Meta-Policy in HiLight operates at the global level, providing each agent with global information and guiding their optimization direction through sub-task assignments, thereby ensuring consistency with the overall objective of improving network-wide traffic flow.
Figure 3: An overview of HiLight.
# 4.1.1 Transformer Encoding Across Subregions
After obtaining the sequential inputs for each subregion, the Meta-Policy employs a Transformer encoder to enable information exchange across different subregions and extract global traffic patterns. The input data is structured as $(M+1, T, d_{reg})$, where $M$ is the number of subregions, $d_{reg}$ is the dimensionality of the regional feature representation, and the additional dimension corresponds to a prepended learnable input token $\mathbf{x}_{token} \in \mathbb{R}^{d_{reg}}$. Specifically, the input to the Transformer encoder at time step $t$ is:
$$
\mathbf{X}^t = [\mathbf{x}_{token}, \mathbf{s}_1^t, \mathbf{s}_2^t, \ldots, \mathbf{s}_M^t] \in \mathbb{R}^{(M+1) \times d_{reg}}
$$
where $\mathbf{s}_z^t \in \mathbb{R}^{d_{reg}}$ is the regional state of subregion $z$ at time step $t$; see Appendix A for more details of the regional state. To incorporate positional information across the subregions, we apply sinusoidal positional encoding as $\mathbf{X}_{pos}^t = \mathbf{X}^t + \mathbf{PE}(pos)$, with the positional encoding defined as:
$$
\mathbf{PE}(pos, 2i) = \sin\left(\frac{pos}{\tau^{2i/d_{model}}}\right), \quad \mathbf{PE}(pos, 2i+1) = \cos\left(\frac{pos}{\tau^{2i/d_{model}}}\right)
$$
where $\tau$ is a scaling parameter. The transformer encoder processes the input and computes attention among subregions:
$$
[\mathbf{E}_G^t, \mathbf{E}_1^t, \mathbf{E}_2^t, \ldots, \mathbf{E}_M^t] = \mathrm{TransformerEncoder}(\mathbf{X}_{pos}^t)
$$
Here, $\mathbf{E}_G^t$ denotes the embedding of the input token summarizing the global state at time step $t$, and $\mathbf{E}_z^t$ represents the embedding of subregion $z$ at time step $t$; each subregion embedding is calculated via self-attention across subregions. The global embedding $F_g = \mathbf{E}_G^T$ at the current time step $T$ is provided directly to the Sub-Policy as part of each agent's observation, to enhance global context awareness during Sub-Policy decision-making.
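The sinusoidal positional encoding applied to the subregion sequence can be sketched as follows (illustrative plain Python; `tau` defaults to 10000 as in the standard Transformer, though the paper treats it as a tunable scaling parameter):

```python
# Sketch of the sinusoidal positional encoding:
# PE(pos, 2i)   = sin(pos / tau^(2i/d_model))
# PE(pos, 2i+1) = cos(pos / tau^(2i/d_model))
import math

def positional_encoding(pos, d_model, tau=10000.0):
    """Return the d_model-dimensional encoding for position pos."""
    pe = []
    for k in range(d_model):
        i = k // 2  # dimension pair index: (2i, 2i+1) share a frequency
        angle = pos / (tau ** (2 * i / d_model))
        pe.append(math.sin(angle) if k % 2 == 0 else math.cos(angle))
    return pe

# Position 0 encodes as alternating sin(0)/cos(0) values.
print(positional_encoding(0, 4))  # [0.0, 1.0, 0.0, 1.0]
```

Adding these fixed vectors to $\mathbf{X}^t$ lets the permutation-invariant attention layers distinguish which slot (global token vs. a particular subregion) each embedding occupies.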
# 4.1.2 LSTM-based Sub-goal Generation
To produce sub-goals reflecting temporal dependencies and guide local agents with coherent longterm objectives, the subregion embeddings from the Transformer across the past $T$ timesteps are reorganized and input into the LSTM network.
First, we transpose the embeddings to shape the data appropriately for temporal modeling:
$$
\hat { \mathbf { E } } _ { z } = [ \mathbf { E } _ { z } ^ { 1 } , \mathbf { E } _ { z } ^ { 2 } , \ldots , \mathbf { E } _ { z } ^ { T } ] \in \mathbb { R } ^ { T \times d _ { r e g } } , \quad z \in \{ 1 , 2 , \ldots , M \}
$$
Then, the embeddings are rearranged such that the subregions become the batch dimension and the temporal dimension is used as the recurrent axis. This reshaping yields an input of shape $( T , M , d )$ for the LSTM:
$$
\mathbf{H} = \mathrm{LSTM}(\hat{\mathbf{E}}_1, \ldots, \hat{\mathbf{E}}_M) \in \mathbb{R}^{T \times M \times d_{hidden}}
$$
where the LSTM learns to model temporal dependencies within each subregion’s trajectory. The final hidden state of each subregion is then aggregated to produce a global sub-goal representation:
$$
\mathbf{G} = \phi\left( [\mathbf{h}_1^T \,\|\, \mathbf{h}_2^T \,\|\, \cdots \,\|\, \mathbf{h}_M^T] \right) \in \mathbb{R}^{d_g}
$$
where $\mathbf{h}_z^T$ denotes the final hidden state of the LSTM for subregion $z$, $\|$ represents concatenation, and $\phi(\cdot)$ is a feedforward layer that projects the concatenated features into a fixed-size global sub-goal vector $\mathbf{G}$. This vector $\mathbf{G}$ is broadcast to all Sub-Policy agents and serves as a shared directional objective, encouraging the agents at each intersection to align their local decisions with global traffic optimization trends.
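The final aggregation step, concatenating per-subregion hidden states and projecting them with $\phi$, reduces to a single affine map (a toy sketch with hypothetical shapes and weights; the real $\phi$ would be a learned layer):

```python
# Sketch (hypothetical weights): G = phi([h_1 || h_2 || ... || h_M]),
# where phi is modeled here as one affine layer W x + b.

def make_subgoal(final_hiddens, W, b):
    """final_hiddens: list of M hidden-state vectors (length d_hidden each)
    W: d_g x (M * d_hidden) weight matrix, b: length-d_g bias."""
    concat = [x for h in final_hiddens for x in h]        # h_1 || ... || h_M
    return [sum(w * x for w, x in zip(row, concat)) + bi  # affine projection
            for row, bi in zip(W, b)]

# Two subregions (M=2) with d_hidden=2, projected to a d_g=1 sub-goal.
h = [[1.0, 2.0], [3.0, 4.0]]
W = [[1.0, 1.0, 1.0, 1.0]]  # toy weights
b = [0.0]
print(make_subgoal(h, W, b))  # [10.0]
```

The output dimension $d_g$ is fixed regardless of $M$, which is what allows the same sub-goal vector to be broadcast to every agent.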
# 4.2 Sub-Policy
Meta-Policy provides both global representations and high-level optimization objectives, while the core responsibility of the Sub-Policy is to perform real-time traffic signal control decisions at each intersection based on comprehensive observation information. Each intersection is modeled as an independent agent, and all agents share the same network parameters. The Sub-Policy integrates local traffic observations from the surrounding neighbors with the global guidance representation provided by the Meta-Policy, thereby constructing a final observation with multi-scale information awareness. This final observation serves as the input to both the policy and value networks within an Actor-Critic framework.
# 4.2.1 Local Observation Encoding
At each time step, the local observation for a given intersection is composed of its own state and the states of its four nearest neighboring intersections, resulting in a total of five observation vectors. The information from neighboring intersections is dynamically weighted according to its relative importance at the current time step. This design captures both the temporal variability and directional sensitivity of traffic flow. For instance, during morning or evening rush hours, different neighbors may serve as upstream intersections that carry major incoming traffic.
Let $\mathbf{o}_v^t \in \mathbb{R}^k$ denote the raw $k$-dimensional observation vector of intersection $v$ at time $t$; the specific observation features are detailed in Appendix A. Let $\mathcal{N}(v)$ denote the set of its four nearest neighboring intersections. At time $t$, each observation $\mathbf{o}_v^t$ is first encoded into an intermediate representation $\mathbf{h}_v^t$ via a shared Multilayer Perceptron (MLP).
# 4.2.2 Dynamic Neighbor Aggregation
These embedded vectors are directly passed to the Graph Attention Concat (GAC) module. Let ${ \bf H } ^ { t } = [ { \bf h } _ { 1 } ^ { t } , \dots , { \bf h } _ { N } ^ { t } ] \in \mathbb { R } ^ { N \times F }$ denote the feature matrix at time $t$ for all $N$ intersections, and let $\mathbf { A } \in \mathbb { R } ^ { N \times N }$ be the adjacency matrix where $A _ { i j } = 1$ if node $j$ is a neighbor of node $i$ . For each valid edge $( i , j )$ with $A _ { i j } = 1$ , we compute an attention score as:
$$
e_{ij} = \mathrm{LeakyReLU}\left( \mathbf{a}^{\top} [\mathbf{h}_i^t \,\|\, \mathbf{h}_j^t] \right)
$$
with $\mathbf{a} \in \mathbb{R}^{2F}$ a learnable attention vector and $\|$ denoting vector concatenation; the attention weights are computed as $\alpha_{ij} = \mathrm{softmax}_j(e_{ij})$ over the neighborhood $\mathcal{N}_i = \{j \mid A_{ij} = 1\}$. The final representation $\mathbf{z}_i^t$ is constructed by concatenating $\mathbf{h}_i^t$ with the weighted neighbor features:
$$
\mathbf { z } _ { i } ^ { t } = \mathbf { h } _ { i } ^ { t } \parallel \alpha _ { i j _ { 1 } } \mathbf { h } _ { j _ { 1 } } ^ { t } \parallel \alpha _ { i j _ { 2 } } \mathbf { h } _ { j _ { 2 } } ^ { t } \parallel \alpha _ { i j _ { 3 } } \mathbf { h } _ { j _ { 3 } } ^ { t } \parallel \alpha _ { i j _ { 4 } } \mathbf { h } _ { j _ { 4 } } ^ { t }
$$
If fewer than four neighbors exist, zero vectors are padded to maintain a fixed output size $\mathbf{z}_i^t \in \mathbb{R}^{5F}$. Eventually, the representation $\mathbf{z}_i^t$ is concatenated with the global feature vector $F_g$ generated by the Meta-Policy to form the final observation of intersection $i$: $\mathbf{o}_i^{final} = \mathbf{z}_i^t \,\|\, F_g$.
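The GAC step can be sketched for the simplified scalar case $F = 1$ (illustrative plain Python; parameter values are made up, and the learnable attention vector is a plain 2-element list):

```python
# Sketch (F = 1, not the paper's code): score each neighbor with
# LeakyReLU(a^T [h_i || h_j]), softmax-normalize over the neighborhood,
# weight the neighbor features, zero-pad to four slots, and concatenate
# with the node's own feature, giving z_i of fixed length 5F.
import math

def leaky_relu(x, slope=0.01):
    return x if x >= 0 else slope * x

def gac(h_i, neighbors, a):
    """h_i: own feature (scalar), neighbors: list of neighbor features,
    a: 2-element attention parameter vector."""
    scores = [leaky_relu(a[0] * h_i + a[1] * h_j) for h_j in neighbors]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]                  # softmax over neighbors
    weighted = [al * h_j for al, h_j in zip(alphas, neighbors)]
    weighted += [0.0] * (4 - len(weighted))             # pad to four neighbors
    return [h_i] + weighted                             # z_i in R^{5F}

z = gac(1.0, [2.0, 2.0], a=[0.5, 0.5])
print(z)  # [1.0, 1.0, 1.0, 0.0, 0.0] -> identical neighbors get alpha = 0.5 each
```

Concatenation (rather than summing the weighted neighbors) preserves which slot each neighbor's contribution occupies, at the cost of a fixed-arity, padded layout.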
# 4.3 Joint Optimization
In this section, we describe the joint optimization mechanism integrating Meta-Policy and SubPolicy under an adversarial hierarchical reinforcement learning framework. We elaborate on reward definitions, trajectory formulation, and adversarial loss functions.
# 4.3.1 Reward and Trajectory
The reward structure optimizes global traffic efficiency and local intersection throughput simultaneously. At each time step $t$, each intersection (agent) $i$ receives a local reward $r_i^t$ that evaluates its own traffic condition. The reward is calculated as a weighted combination of several key metrics: queue length $ql_i^t$, average vehicle waiting time $wt_i^t$, delay time $dt_i^t$, pressure $ps_i^t$ and speed score $ss_i^t$, with detailed definitions in Appendix A. The local reward is then given as: $r_i^t = -(ql_i^t + wt_i^t + dt_i^t + ps_i^t - ss_i^t)$.
Meanwhile, the Meta-Policy sets a global goal vector $G^t$ at each timestep, comprising targets for global waiting time $G_w^t$ and global queue length $G_q^t$. The global reward $r_g^t$ assesses the discrepancy between the current global state, specifically the total waiting time $W_{global}^t$ and total queue length $Q_{global}^t$, and these targets. The goal reward is defined as:
$$
r_g^t = -\left[ \beta_w \left( W_{global}^t - G_w^t \right) + \beta_q \left( Q_{global}^t - G_q^t \right) \right]
$$
where $\beta_w$ and $\beta_q$ are scalar weights balancing the importance of each metric. Based on these reward definitions, trajectories are formed at each timestep. For each intersection $i$, the trajectory is represented as $\tau_i^t = (\mathbf{o}_i^{final,t}, a_i^t, r^t, \mathbf{o}_i^{final,t+1})$, where $r^t = r_i^t + r_g^t$.
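The reward composition can be sketched as follows (illustrative plain Python; all quantities and weights are hypothetical, with each weight paired with its corresponding metric):

```python
# Sketch (not the paper's code): local reward penalizes queue length,
# waiting time, delay and pressure while rewarding the speed score;
# the goal reward penalizes the gap between global totals and the
# Meta-Policy targets. The per-agent training reward is their sum.

def local_reward(ql, wt, dt, ps, ss):
    """r_i = -(ql + wt + dt + ps - ss)."""
    return -(ql + wt + dt + ps - ss)

def goal_reward(W_global, Q_global, G_w, G_q, beta_w=1.0, beta_q=1.0):
    """r_g = -[beta_w * (W_global - G_w) + beta_q * (Q_global - G_q)]."""
    return -(beta_w * (W_global - G_w) + beta_q * (Q_global - G_q))

r_i = local_reward(ql=3, wt=2, dt=1, ps=0.5, ss=1.5)           # -5.0
r_g = goal_reward(W_global=120, Q_global=40, G_w=100, G_q=35)  # -25.0
print(r_i + r_g)  # -30.0, the combined reward r = r_i + r_g
```

Note that $r_g^t$ is shared by all agents, so an intersection with a perfect local state still receives a negative signal while the network as a whole misses its targets.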
# 4.3.2 Adversarial Training Mechanism
In our framework, the Meta-Policy generates a global goal $G ^ { t }$ intended to predict the global traffic status at timestep $t$ , but deliberately adjusted to be slightly more optimal than the actual outcome. The Meta-Policy’s objective consists of two components: the first encourages $G ^ { t }$ to approximate the actual global traffic condition after $t$ steps, and the second incentivizes the Meta-Policy to set even more ambitious sub-goals by rewarding more challenging targets. Formally, the Meta-Policy minimizes the following loss:
$$
\begin{array} { r } { \mathcal { L } _ { M e t a } = \mathbb { E } _ { \tau \sim \pi _ { s u b } } \left[ \left. G ^ { t } - \left( W _ { g l o b a l } ^ { t } , Q _ { g l o b a l } ^ { t } \right) \right. ^ { 2 } + \eta _ { 1 } r _ { g } ^ { t } \right] } \end{array}
$$
Meanwhile, the Sub-Policy aims to not only maximize cumulative returns via standard Actor-Critic updates but also align its achieved global traffic performance with the targets set by the Meta-Policy. The Sub-Policy minimizes the following loss:
$$
\begin{array} { r } { \mathcal { L } _ { S u b } = \mathbb { E } _ { \tau \sim \pi _ { s u b } } \left[ \mathcal { L } _ { A C } + \eta _ { 2 } \left( \beta _ { q } \left( W _ { g l o b a l } ^ { t } - G _ { q } ^ { t } \right) + \beta _ { w } \left( Q _ { g l o b a l } ^ { t } - G _ { w } ^ { t } \right) \right) \right] } \end{array}
$$
where $\mathcal{L}_{AC}$ denotes the standard Actor-Critic loss (policy gradient + value loss), and $\eta_1, \eta_2$ and $\beta_w, \beta_q$ balance the importance of the adversarial loss terms.
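A per-timestep sketch of the two adversarial loss terms (illustrative plain Python; the actor-critic loss is abstracted to a scalar, the expectation is dropped, and all values and weights are hypothetical):

```python
# Sketch (not the paper's code): the Meta-Policy loss pulls the sub-goal
# toward the realized global state while rewarding ambitious targets;
# the Sub-Policy adds a penalty for missing the targets on top of its
# actor-critic loss, creating the adversarial pull between the two levels.

def meta_loss(G, W_global, Q_global, r_g, eta1=0.1):
    """||G - (W_global, Q_global)||^2 + eta1 * r_g, for a single timestep."""
    G_w, G_q = G
    sq = (G_w - W_global) ** 2 + (G_q - Q_global) ** 2
    return sq + eta1 * r_g

def sub_loss(ac_loss, W_global, Q_global, G, eta2=0.1, beta_w=1.0, beta_q=1.0):
    """L_AC + eta2 * [beta_w * (W_global - G_w) + beta_q * (Q_global - G_q)]."""
    G_w, G_q = G
    gap = beta_w * (W_global - G_w) + beta_q * (Q_global - G_q)
    return ac_loss + eta2 * gap

G = (100.0, 35.0)                            # targets (waiting time, queue length)
print(meta_loss(G, 120.0, 40.0, r_g=-25.0))  # 425.0 - 2.5 = 422.5
print(sub_loss(1.0, 120.0, 40.0, G))         # 1.0 + 0.1 * 25.0 = 3.5
```

The signs create the intended tension: a more ambitious (lower) target raises the Sub-Policy's penalty gap, while the $\eta_1 r_g^t$ term keeps the Meta-Policy from anchoring its goals to the status quo.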
# 5 Experiments and Results
# 5.1 Experimental Setup
We utilize the Simulation of Urban MObility (SUMO) traffic simulator as the environment for all experiments, which is employed in a non-GUI mode to enable efficient large-scale simulations. Each episode is set to 3600 seconds of simulated time, with a rollout length of 240 steps. The yellow light duration is 5 seconds, and each signal phase iteration lasts 10 seconds. As shown in Table 1, our experimental setup includes one large-scale scenario based on Manhattan, containing 2668 traffic lights, as well as five commonly used benchmark scenarios.
For the Manhattan2668 scenario, we define the network using OSMWebWizard1 and manually refine its intersection details. Traffic flow is generated based on open-source taxi trip data2, and we select dates associated with adverse weather conditions using Local Climatological Data3 to simulate the Adverse Weather Flow. In addition, we simulate two other realistic traffic patterns based on actual urban traffic conditions: the Peak Transition Flow, which represents a sudden influx of vehicles during the transition from off-peak to peak hours, and the Holiday Rush Flow, which reflects surges in traffic during holiday periods with increased travel demand. Visualizations of the traffic scenarios are provided in Appendix D, and the ablation study can be found in Appendix B.
Table 1: Statistics of evaluation scenarios
Table 2: Performance comparison on Manhattan2668 under three traffic flow settings
# 5.2 Evaluation Metrics and Compared Methods
We employ two widely used evaluation metrics in TSC: Average Travel Time (ATT) and Average Delay Time (ADT). ATT measures the average duration that all vehicles spend in the scenario, from entering to exiting. ADT captures the average delay caused by congestion or traffic signals.
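Both metrics reduce to simple averages over per-vehicle trip records (an illustrative sketch; the trip data and the free-flow-based delay definition are our assumptions for demonstration):

```python
# Sketch (not the paper's code): Average Travel Time (ATT) is the mean
# time each vehicle spends from entering to exiting the network;
# Average Delay Time (ADT) is modeled here as the mean excess of actual
# travel time over each vehicle's unimpeded (free-flow) travel time.

def att(trips):
    """trips: list of (enter_time, exit_time) pairs, in seconds."""
    return sum(exit_t - enter_t for enter_t, exit_t in trips) / len(trips)

def adt(trips, free_flow_times):
    """Delay = actual travel time minus free-flow travel time, averaged."""
    return sum((exit_t - enter_t) - ff
               for (enter_t, exit_t), ff in zip(trips, free_flow_times)) / len(trips)

trips = [(0, 300), (60, 480)]  # two vehicles: 300 s and 420 s in the network
free_flow = [240, 300]         # their hypothetical unimpeded travel times
print(att(trips))              # 360.0
print(adt(trips, free_flow))   # 90.0 = ((300-240) + (420-300)) / 2
```

Lower is better for both metrics; ADT isolates the controller's contribution by subtracting out the distance-dependent baseline that ATT still contains.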
To comprehensively evaluate the performance of HiLight, we compare it with eleven existing methods, including two traditional ones, FTC[1] and MaxPressure[2], as well as nine state-of-the-art RL or MARL algorithms: CoLight[27], MPLight[39], MetaLight[30], IPPO[40], rMAPPO[26], MetaGAT[41], GESA[14], CoSLight[42] and X-Light[43]. Detailed descriptions of these methods are provided in Appendix C.
# 5.3 Performance in Large-Scale Scenario
In a large-scale traffic network based on the Manhattan map, our approach achieves the lowest mean trip times (690.88 s, 913.84 s, 980.24 s) and delay times (549.02 s, 649.71 s, 598.65 s) across all traffic flow settings, as shown in Table 2, outperforming even the strongest baselines by $2 . 6 \mathrm { - } 6 . 8 \%$ .
These results demonstrate the effectiveness of our hierarchical structure in handling large and dynamic traffic systems. Compared to flat or fully decentralized methods such as CoLight or IPPO, our method benefits from its two-level decision-making mechanism. The high-level policy captures global traffic dynamics and provides strategic guidance to each region, which helps mitigate the lack of coordination observed in multi-agent methods. Meanwhile, the low-level policies, conditioned on regional objectives, allow for fine-grained control tailored to local traffic states. The combination of top-down coordination and bottom-up adaptability allows our method to generalize better under highly dynamic conditions such as weather disruptions or holiday rushes, where other methods often struggle with delayed responses or local optima.
# 5.4 Performance in Standard Benchmark Scenarios
To further evaluate the generalizability of our method, we conduct experiments on a variety of traffic scenarios with different network sizes, ranging from small-scale networks such as Cologne8 (8 traffic lights) and $\mathrm{Grid4}\times4$ (16 traffic lights), to more complex ones such as Manhattan2668, which includes up to 2668 intersections. The results, summarized in Table 3, reveal a clear trend: while our method performs competitively in small-scale networks, its advantages become increasingly pronounced as the network size grows. As visualized in Figure 4, this advantage scales positively with the number of intersections in the scenario. Note that the results of Manhattan2668 in Table 3 are calculated as the average over the three traffic flow settings reported in Table 2.
Table 3: Performance comparison on standard scenarios and Manhattan2668 | Efficient traffic signal control (TSC) is essential for mitigating urban congestion, yet existing reinforcement learning (RL) methods face challenges in scaling to large networks while maintaining global coordination. Centralized RL suffers from scalability issues, while decentralized approaches often lack unified objectives, resulting in limited network-level efficiency. In this paper, we propose HiLight, a hierarchical reinforcement learning framework with global adversarial guidance for large-scale TSC. HiLight consists of a high-level Meta-Policy, which partitions the traffic network into subregions and generates sub-goals using a Transformer-LSTM architecture, and a low-level Sub-Policy, which controls individual intersections with global awareness. To improve the alignment between global planning and local execution, we introduce an adversarial training mechanism, where the Meta-Policy generates challenging yet informative sub-goals, and the Sub-Policy learns to surpass these targets, leading to more effective coordination. We evaluate HiLight across both synthetic and real-world benchmarks, and additionally construct a large-scale Manhattan network with diverse traffic conditions, including peak transitions, adverse weather, and holiday surges. Experimental results show that HiLight exhibits significant advantages in large-scale scenarios and remains competitive across standard benchmarks of varying sizes. | [ "cs.LG", "cs.AI" ] |
# 1 Introduction
Transformers [Vaswani et al., 2017] are expressive set encoders, which, when paired with positional encodings, can serve as sequence encoders. The attention mechanism in a transformer block allows us to model the long- and short-term dependencies in a sequence in an input-dependent manner instead of relying on handcrafted dependency modeling as in recurrent (uni-directional and bi-directional) and convolutional models. The single hidden layer multi-layer perceptron (or MLP) in the transformer block introduces non-linearities, enabling further expressivity. Transformers have been extremely successful in modeling natural language, and are the core blocks of various large language models or LLMs. They have also been successful in vision, tabular data, and time series among various other applications.
The expressivity of attention-based transformers [Yun et al., 2020a] comes with a computational overhead where the attention mechanism requires time and memory quadratic in the sequence length. To address this, various efficient transformers have been developed [Tay et al., 2022], utilizing various techniques such as fixed sparse attention patterns, low rank approximations of the attention matrix, and input-dependent sparse attention patterns. In this work, we focus on sparse attention mechanisms, both input-dependent and input-agnostic. Existing literature has studied sparse attention as a way to speed up the forward pass (inference), which in turn can speed up each training step [Tay et al., 2021]. However, sparse attention has always been viewed as an approximation of the gold standard full attention.
Contributions. One can view sparse attention as a form of sensory gating, and this is considered an essential component of biological cognitive systems, allowing rapid learning [Jones et al., 2016, Fritzsch, 2020], and the absence of it is often considered a marker for schizophrenia [Judd et al., 1992]. The gating is often achieved via inhibitory signals. Related observations made by Bengio [2019] suggest some motivations. He makes a connection between a form of input-dependent sparse attention and the global workspace theory of consciousness in cognitive science, as well as the properties of natural language sentences and symbolic AI representations used in planning and reasoning [nss, 2024], “stipulating that elements of a conscious thought are selected through an attention mechanism (such as the content-based attention mechanism we introduced in Bahdanau et al. [2016]) and then broadcast to the rest of the brain, strongly influencing downstream perception and action as well as the content of the next conscious thought”. As the “elements” or weight vectors being attended to are often discussed as semantic concepts, one can refer to the same phenomenon as “semantic focus” and explore its possible benefit to learning efficacy. Motivated by this, we consider the following question in this paper: “Can sparse attention in transformers be beneficial in terms of learning convergence and generalization, in comparison to full attention?”. To this end, we share the following findings:
• (§4) Focusing on benchmarks of structured languages designed to evaluate capabilities of transformers [Tay et al., 2021, Deletang et al., 2023], and controlling for all involved hyperparameters, we make two empirical observations:
– Sparse attention with input-agnostic sparsity patterns empirically struggles with expressivity (as implied by Yun et al. [2020a,b]), and does not show benefits in terms of learning convergence and generalization even when equipped with enough expressivity (via global tokens [Ainslie et al., 2020, Zaheer et al., 2020]).
– Sparse attention with a specific form of input-dependent sparsity pattern that limits the attention to the top attention scores – the heavy-hitters (such as top-$k$ attention [Gupta et al., 2021, Zeng et al., 2025]) – is empirically as expressive as the standard full attention, and can converge significantly faster during training, while generalizing as well as, and at times better than, the full attention model. These improvements hold across various hyperparameters, both related to the architecture (such as the number of heads per transformer block, the number of transformer blocks, and the MLP activation function) and the optimizer (such as the initial learning rate and the learning rate decay).
• (§5) We then try to theoretically understand why this might be happening, and characterize conditions under which sparse attention can provide better learning convergence and generalization guarantees. Our analysis is based on two critical insights:
– For any $\lambda$ -Lipschitz learning objective (with respect to the learnable parameters), the convergence rate and algorithmic stability [Bousquet and Elisseeff, 2000] of (stochastic) gradient-descent based algorithms are dependent on Lipschitz constant $\lambda$ , with smaller values implying better convergence and stability guarantees; better stability implies better generalization [Hardt et al., 2016]. We show that the Lipschitz constant of a transformer-based model is tied to the input-stability of the softmax in the attention mechanism – better input-stability implies better Lipschitz constant. Thus, we establish how the input-stability of softmax directly affects the learning convergence and generalization.
– The sparsity pattern of the sparse attention affects the overall learning convergence and generalization through its effect on the input-stability of the softmax. The input-stability of the (sparse) softmax is closely tied to the range or the semantic dispersion of the values (the query-key dot-products) over which the softmax is applied (formally discussed in definition 2) – larger dispersion implies worse input-stability. While input-agnostic sparsity patterns do not necessarily improve the dispersion over the full-attention model, input-dependent sparsity that only focuses on the heavy-hitters can significantly improve this dispersion, thus implying improved input-stability. This effectively translates to an improved Lipschitz constant, thus convergence and generalization guarantees. We also empirically validate that the dispersion and the estimated Lipschitz constant of input-dependent sparse attention show improvements over full attention.
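The dispersion argument above can be illustrated numerically. The following is a hypothetical NumPy sketch of our own: the per-query range of scores used here is only an illustrative proxy for the paper's formal semantic dispersion (definition 2), and all names are ours.

```python
# Illustrative proxy for per-query "semantic dispersion": the range
# (max minus min) of the query-key dot-products the softmax actually
# sees. Full attention sees all L scores per query; top-k attention
# sees only the k largest, which shrinks the range.
import numpy as np

def per_query_dispersion(D, k=None):
    # D: (L x L) dot-product matrix; columns are indexed by queries.
    if k is not None:
        D = np.sort(D, axis=0)[-k:, :]  # keep only the k heavy hitters
    return D.max(axis=0) - D.min(axis=0)

rng = np.random.default_rng(0)
L = 64
D = rng.normal(size=(L, L))
full = per_query_dispersion(D).mean()
topk = per_query_dispersion(D, k=8).mean()
assert topk < full  # top-k only spans the upper tail of the scores
```

Restricting the softmax to the heavy hitters can only shrink this range per query, which is the intuition behind the improved input-stability claimed above.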
Figure 1: Visualizations of dot-product based attention score matrices, which, along with the value matrix $\mathbf{V}\mathbf{X}$, give us the attention-based token updates $\mathsf{A}(\mathbf{X})$ (see equation (1) in section 3). The horizontal axis denotes keys and the vertical axis queries. The color intensities denote the value of the attention scores (higher intensities denote higher scores), and the white entries in the matrices correspond to masked entries. Figure 1a depicts the standard full attention score matrix; figures 1b, 1c and 1d depict various input-agnostic sparse attention score matrices. Figure 1e shows the use of global tokens (attention scores are shown in orange) in conjunction with banded attention (scores are shown in blue), with the last two tokens being the global tokens – all tokens attend to and are attended by these global tokens. Note that the per-query semantic dispersion (see definition 2, figure 9) of the unmasked attention scores in the input-agnostic masks would in general be similar to that of standard attention. Input-dependent masked attention such as top-$k$ attention (shown in figure 1f) can have a much smaller attention semantic dispersion compared to standard attention.
Outline. We begin by discussing relevant literature on transformers and their analyses (both empirical and theoretical) in section 2. We present the precise problem setup, including the data, the model and the training loss, in section 3. Following that, in section 4, we present our empirical observations regarding the effect of sparse attention on learning convergence and generalization on 8 tasks. In section 5, we seek to explain the observed behavior theoretically, characterizing when and how sparse attention can provide benefits over the standard full attention. We conclude in section 6, summarizing our contributions and discussing future work.
# 2 Related Work
In this section, we cover literature on efficient transformers, and the theoretical and empirical investigations on the capabilities and limitations of transformers. Finally, we will also briefly discuss the existing research on optimization with transformers.
Efficient transformers with sparse attention. The transformer architecture [Vaswani et al., 2017] has had tremendous impact in various fields such as language modeling, vision and tabular data, and spurred new research into the development of architectural variants or X-formers [Tay et al., 2022, Phuong and Hutter, 2022, Lin et al., 2022]. Many of these have been developed to address the quadratic computational complexity of the attention mechanism in a transformer block with respect to the context length (the number of tokens in the context), with the goal of increasing the context length. One common technique is to sparsify the attention mechanism. Usually each (query) token in the context attends to all other (key) tokens as in figure 1a, leading to the quadratic cost. Instead, we can limit the set of key tokens attended to by any particular query token. Input-agnostic sparsification strategies include attending (i) within a window as in figure 1b [Parmar et al., 2018] or a block as in figure 1d [Qiu et al., 2020], (ii) in a strided manner as in figure 1c [Beltagy et al., 2020, Child et al., 2019], (iii) to random tokens [Zaheer et al., 2020], or (iv) to only a small number of global tokens, where these global tokens attend to all other tokens [Ainslie et al., 2020, Zaheer et al., 2020]; this is often used in conjunction with other forms of sparse attention as shown in figure 1e. Input-dependent sparsification strategies include (i) using a scoring mechanism and attending only to the highest scoring tokens as in figure 1f [Tay et al., 2020, Gupta et al., 2021], or (ii) clustering [Roy et al., 2021] or hashing [Kitaev et al., 2020] tokens into buckets and attending only to in-bucket tokens. Surveys such as Tay et al. [2022] and Lin et al. [2022] cover various other forms. These input-dependent sparse attention mechanisms focus the attention on the keys corresponding to the highest dot-product scores – the heavy hitters – while explicitly ignoring the remaining keys.
Sparse attention is considered in all these cases as a way to speed up the attention mechanism in the transformer block during the forward pass without significantly deteriorating the downstream performance, with the standard full attention being the gold-standard. The Long-range Arena or LRA [Tay et al., 2021] serves as one such benchmark comparing different efficient transformers to the standard transformer.
In contrast to above, we theoretically study the effect of sparse attention based transformers on the learning or empirical risk minimization (ERM) convergence of the whole model (containing multiple transformer blocks), and the in-distribution generalization of the model obtained via ERM. We attempt to characterize conditions under which sparse attention might show improvements over full attention.
Empirical evaluations of transformer capabilities. While benchmarks such as the LRA [Tay et al., 2021] focus on the efficiency and in-distribution generalization, transformers have also been thoroughly evaluated on benchmarks studying specific forms of out-of-distribution generalization such as compositional generalization and length generalization. Compositional generalization benchmarks such as COGS [Kim and Linzen, 2020] and SCAN [Lake and Baroni, 2018] consider sequence-to-sequence translation problems, and they have been used to highlight the inability of transformers to systematically generalize [Sikarwar et al., 2022]. However, subsequent work such as Csordás et al. [2021], Ontanon et al. [2022] have demonstrated ways in which transformers can systematically generalize. The Neural Networks and Chomsky Hierarchy or NNCH benchmark [Deletang et al., 2023] considers language transduction tasks from different formal language classes such as regular, deterministic context-free and context-sensitive languages. This benchmark studies the ability of various models (including transformers) to length generalize – that is, generalize to longer input sequences when being trained in a length limited manner. There has also been a lot of research on improving the performance of transformer based models on these out-of-distribution generalization benchmarks leveraging auxiliary tasks [Jiang and Bansal, 2021] and chain-of-thought prompting [Drozdov et al., 2023].
In our work, we focus on the theoretical analysis of the ERM convergence and the in-distribution generalization of models based on multiple transformer blocks, and empirically validate our theoretical insights utilizing these above benchmarks. We consider one multiclass classification task from the LRA benchmark [Tay et al., 2021] and a subset of the tasks from the NNCH benchmark [Deletang et al., 2023] that can be posed as supervised classification problems.
Theoretical treatment of transformer capabilities. Given the widespread success of transformers, there have been various theoretical studies on the capabilities and limitations of transformers. One line of research focuses on the ability of transformers to express (and thus recognize) formal languages [Strobl et al., 2024]. Some of these works study transformers with hard attention [Bhattamishra et al., 2020, Hahn, 2020, Hao et al., 2022, Merrill et al., 2022], while others consider the more commonly used softmax attention [Chiang and Cholak, 2022, Chiang et al., 2023]. Another line of research has focused on understanding the capabilities of transformers as algorithms [Li et al., 2023], demonstrating how transformers can, under specific parameter settings, perform in-context gradient descent for linear regression [Von Oswald et al., 2023] or in-context clustering [Geshkovski et al., 2023], and how easily such parameters can be found [Li et al., 2023, Ahn et al., 2023, Zhang et al., 2024]. Yun et al. [2020b] focus on universal approximation of sparse attention transformers for sequence-to-sequence problems, and establish conditions on the sparsity pattern that ensure desired expressivity given a sufficient number of transformer layers.
Viewing hard-attention as a form of input-dependent sparse attention, these existing expressivity results [Strobl et al., 2024] are complementary to our focus on learning convergence and in-distribution generalization for models using multiple sparse attention based transformer blocks – existing hard-attention expressivity results discuss whether sparse attention transformers are expressive enough for the task at hand.
Our study here focuses on how quickly and sample-efficiently such transformers can learn the task, and how the attention sparsity pattern plays a role.
Optimization with transformers. There has been a lot of work on understanding the optimization of transformers in terms of the benefit of adaptive methods such as Adam over non-adaptive SGD [Zhang et al., 2020, Pan and Li, 2022, Jiang et al., 2023, Kunstner et al., 2023, Ahn et al., 2024]. However, the focus there is to understand why optimizers such as Adam converge significantly faster than SGD with transformer models; no such consistent difference has been established for previous architectures such as convolutional or residual networks. Li et al. [2025] recently present an analysis of the training dynamics with SignGD for a single transformer block model for a specific noisy binary classification problem, working in the "feature learning framework", and empirically demonstrate that the dynamics of SignGD and Adam are quite similar, thus making SignGD a useful proxy for analyzing Adam.
Our study is complementary to this line of work where we study the effect of sparsity in attention to non-adaptive SGD convergence and generalization. We also consider a more general sequence learning problem with multiple transformer blocks.
# 3 Problem Setup
In this section, we detail the problem setup, introducing the notation, and presenting the transformer based model, the training data and the learning loss.
Notation. We denote the index set as $[[n]] \triangleq \{1, \dots, n\}$ for any natural number $n \in \mathbb{N}$. We use $X$ for input sequences of token indices $v \in [[D]]$ in a vocabulary $\mathcal{V}$ of size $D$, and $y$ for labels or targets. We use $\mathbf{x} \in \mathbb{R}^d$ for a token embedding vector and $\mathbf{X} \in \mathbb{R}^{d \times L}$ for the sequence (matrix) of $L$ token embeddings. For any vector $\mathbf{v}$, we use $v_i$ to denote its $i$-th entry, and $\|\mathbf{v}\|$ to denote its Euclidean norm. For a matrix $\mathbf{W}$, we denote its $(i,j)$-th entry as $W_{ij}$, $i$-th column as $\mathbf{W}_{:i}$ and $i$-th row as $\mathbf{W}_{i:}$. We use $\|\mathbf{W}\|$ and $\|\mathbf{W}\|_{2,1}$ to denote the spectral and $\ell_{2,1}$ norms of $\mathbf{W}$, where the $\ell_{2,1}$ norm is the sum of the Euclidean norms of the columns $\mathbf{W}_{:i}$ of the matrix $\mathbf{W}$. For a tuple $\theta = (\mathbf{W}^{(1)}, \dots, \mathbf{W}^{(n)})$ of $n$ matrices, we let $\|\theta\| = \max_{i \in [[n]]} \|\mathbf{W}^{(i)}\|$. We consider a learning problem with input sequences $X = [v_1, \dots, v_L] \in [[D]]^L$ of length exactly $L$, with the $i$-th entry $v_i$ denoting the $v_i$-th token in a vocabulary $\mathcal{V}$, and with outputs $y \in \mathcal{Y}$. 1 For a learnable function $f : \mathcal{X} \to \mathcal{Y}$ with learnable parameters $\theta$, we explicitly write the function as $f_\theta(X)$ with $X \in \mathcal{X}$.
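As a concrete illustration of the two matrix norms just defined, a minimal NumPy sketch (the variable names are ours):

```python
import numpy as np

# A small 2x3 matrix to illustrate the norms used in the notation.
W = np.arange(6.0).reshape(2, 3)        # [[0, 1, 2], [3, 4, 5]]

# Spectral norm ||W||: the largest singular value of W.
spectral = np.linalg.norm(W, ord=2)

# l_{2,1} norm ||W||_{2,1}: sum of the Euclidean norms of the columns W_{:i}.
l21 = np.linalg.norm(W, axis=0).sum()

assert l21 >= spectral  # column-norm sum always dominates the spectral norm here
```

For this particular `W`, the column norms are 3, √17 and √29, so `l21` is their sum, while `spectral` equals √54 (the largest singular value).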
Transformer block. Consider an $L$-length sequence of token embeddings $\mathbf{X} \in \mathbb{R}^{d \times L}$ with the $i$-th token embedding denoted as $\mathbf{X}_{:i} \in \mathbb{R}^d$. Let $\mathsf{TF} : \mathbb{R}^{d \times L} \to \mathbb{R}^{d \times L}$ denote a transformer block with learnable parameters $\theta = (\mathbf{W}, \mathbf{V}, \mathbf{P}, \mathbf{R})$ with $\mathbf{W}, \mathbf{V} \in \mathbb{R}^{d \times d}$, $\mathbf{P}, \mathbf{R} \in \mathbb{R}^{d_{\mathsf{MLP}} \times d}$. The transformer block output is then defined as:
$$
\mathsf{TF}_{\theta}(\mathbf{X}) = \mathsf{LN}\Big(\widetilde{\mathbf{X}} + \underbrace{\mathbf{R}^{\top}\sigma(\mathbf{P}\widetilde{\mathbf{X}})}_{\mathsf{MLP}_{\mathbf{P},\mathbf{R}}(\widetilde{\mathbf{X}})}\Big), \quad \mathrm{and} \quad \widetilde{\mathbf{X}} = \mathsf{LN}\Big(\mathbf{X} + \underbrace{\mathbf{V}\mathbf{X}\,\mathrm{softmax}(\mathbf{X}^{\top}\mathbf{W}\mathbf{X})}_{\mathsf{A}_{\mathbf{W},\mathbf{V}}(\mathbf{X})}\Big),
$$
where $\mathsf{LN} : \mathbb{R}^d \to \mathbb{R}^d$ is the token-wise (columnwise) layer normalization (or LayerNorm [Ba et al., 2016]; one can also use RMSNorm [Zhang and Sennrich, 2019]), and $\mathbf{R}^{\top}\sigma(\mathbf{P}\widetilde{\mathbf{X}})$ denotes the token-wise single hidden layer $\mathsf{MLP} : \mathbb{R}^d \to \mathbb{R}^d$. One simplification here is that we are not considering learnable parameters in the LayerNorm. The learnable parameters of the LayerNorm can be incorporated in our analysis much like those of the MLP block, but with additional notation. The columnwise $\mathrm{softmax}(\cdot)$ of the dot-products $\mathbf{X}^{\top}\mathbf{W}\mathbf{X}$ between the query and key matrices, 2 combined with the value matrix $\mathbf{V}\mathbf{X}$, denotes the dot-product self-attention $\mathsf{A} : \mathbb{R}^{d \times L} \to \mathbb{R}^{d \times L}$. Here $d$ is the transformer "$d_{\mathrm{model}}$". We consider single-head attention here for ease of exposition, but our analysis can be easily extended to multi-headed attention; please see appendix D.4. While Vaswani et al. [2017] utilized ReLU as the activation function $\sigma$ in the MLP, subsequent works [Devlin et al., 2019] have used other activation functions such as the GELU [Hendrycks and Gimpel, 2016] and the ELU [Clevert et al., 2015]. Furthermore, many different variations of the transformer block have also been utilized in the literature. 3
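A minimal NumPy sketch of the block in equation (1), assuming single-head attention, a ReLU activation, and LayerNorm without learnable parameters as in the simplification above; the helper names are ours:

```python
import numpy as np

def layer_norm(X, eps=1e-5):
    # Token-wise (columnwise) LayerNorm without learnable scale/shift,
    # matching the simplification in the text: each column (token) of
    # X in R^{d x L} is normalized over its d entries.
    mu = X.mean(axis=0, keepdims=True)
    var = X.var(axis=0, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

def softmax_cols(D):
    # Columnwise softmax over the (L x L) dot-product matrix.
    E = np.exp(D - D.max(axis=0, keepdims=True))  # shift for stability
    return E / E.sum(axis=0, keepdims=True)

def transformer_block(X, W, V, P, R, sigma=lambda z: np.maximum(z, 0.0)):
    """One block TF_theta(X): attention and MLP, each wrapped in a
    residual connection followed by LayerNorm."""
    A = V @ X @ softmax_cols(X.T @ W @ X)     # A_{W,V}(X) = V X softmax(X^T W X)
    X_tilde = layer_norm(X + A)
    mlp = R.T @ sigma(P @ X_tilde)            # single hidden layer MLP
    return layer_norm(X_tilde + mlp)

d, L, d_mlp = 8, 5, 16
rng = np.random.default_rng(1)
X = rng.normal(size=(d, L))
W, V = rng.normal(size=(d, d)), rng.normal(size=(d, d))
P, R = rng.normal(size=(d_mlp, d)), rng.normal(size=(d_mlp, d))
out = transformer_block(X, W, V, P, R)
assert out.shape == (d, L)  # the block maps R^{d x L} to R^{d x L}
```

Swapping `sigma` for a GELU or ELU changes only the MLP non-linearity, mirroring the activation choices discussed above.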
Masked softmax. The columnwise softmax $: \mathbb{R}^L \to S_L$ independently transforms each column of its $(L \times L)$ input to the $L$-dimensional simplex $S_L \triangleq \{\mathbf{s} \in \mathbb{R}^L : s_j \geq 0 \ \forall j \in [[L]], \ \sum_{j=1}^L s_j = 1\}$; that is, the $i$-th column $\mathrm{softmax}(\mathbf{D})_{:i} \in S_L$ for a pre-activation dot-product matrix $\mathbf{D} \in \mathbb{R}^{L \times L}$. A common modification of this transformer block is the replacement of the softmax with a sparse masked softmax, which has an associated masking function $m : \mathbb{R}^{L \times L} \to \{0,1\}^{L \times L}$. For standard self-attention, with a pre-activation dot-product matrix $\mathbf{D} = \mathbf{X}^{\top}\mathbf{W}\mathbf{X} \in \mathbb{R}^{L \times L}$, the attention mask matrix $\mathbf{M} = m(\mathbf{D})$ is trivially given with $M_{ji} = 1$ for all $j, i \in [[L]]$. For causal attention, the $(j,i)$-th mask matrix entry is $M_{ji} = \mathbb{I}(j \leq i)$. For banded attention with a window width of $w \in \mathbb{N}$, the $(j,i)$-th mask matrix entry is $M_{ji} = \mathbb{I}(i - w \leq j \leq i + w)$. The $(j,i)$-th entry $A_{ji}$ of the post-activation attention matrix $\mathbf{A} = \mathrm{softmax}(\mathbf{D})$ for standard and masked attention is given as follows:
$$
A _ { j i } = \frac { \exp ( D _ { j i } ) } { \sum _ { j ^ { \prime } = 1 } ^ { L } \exp ( D _ { j ^ { \prime } i } ) } , \qquad A _ { j i } = \frac { \exp ( D _ { j i } ) \cdot M _ { j i } } { \sum _ { j ^ { \prime } = 1 } ^ { L } \exp ( D _ { j ^ { \prime } i } ) \cdot M _ { j ^ { \prime } i } } .
$$
For simplicity of notation we will denote the masked softmax based sparse self-attention as $\mathsf{A} : \mathbb{R}^{d \times L} \to \mathbb{R}^{d \times L}$, with the implicit understanding that standard softmax attention is a special case of sparse attention with a trivial mask.
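The mask constructions and the masked softmax above can be sketched as follows (a NumPy illustration with our own helper names; the causal mask here uses the convention that query $i$ attends to keys at or before position $i$, so that every column has at least one unmasked entry):

```python
import numpy as np

def masked_softmax(D, M):
    """Columnwise masked softmax:
    A_ji = exp(D_ji) M_ji / sum_j' exp(D_j'i) M_j'i."""
    E = np.exp(D - D.max(axis=0, keepdims=True)) * M  # shift for stability
    return E / E.sum(axis=0, keepdims=True)

def full_mask(L):
    # Standard self-attention: M_ji = 1 for all j, i.
    return np.ones((L, L))

def causal_mask(L):
    # Query i attends to keys at or before position i.
    j, i = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    return (j <= i).astype(float)

def banded_mask(L, w):
    # M_ji = 1 iff i - w <= j <= i + w.
    j, i = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    return (np.abs(j - i) <= w).astype(float)

L = 6
D = np.random.default_rng(0).normal(size=(L, L))
M = banded_mask(L, w=1)
A = masked_softmax(D, M)
assert np.allclose(A.sum(axis=0), 1.0)  # each column lies on the simplex
assert np.all(A[M == 0] == 0)           # masked entries receive zero weight
```

With `full_mask` the same routine recovers the standard softmax, matching the "trivial mask" remark above.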
Complete model. The model is defined as $f_\Theta : [[D]]^L \to \hat{\mathcal{Y}}$ with token and position embeddings $\mathbf{T} \in \mathbb{R}^{d \times D}$ and $\mathbf{E} \in \mathbb{R}^{d \times L}$ respectively, $\tau$ transformer blocks, each with parameters $\theta^{(t)} = (\mathbf{W}^{(t)}, \mathbf{V}^{(t)}, \mathbf{P}^{(t)}, \mathbf{R}^{(t)}), t \in [[\tau]]$, and a readout linear layer with weights $\Phi \in \mathbb{R}^{Y \times d}$ using token projection vector $\omega \in \mathbb{R}^L$, where $Y$ is the dimensionality of the output $\hat{\mathcal{Y}}$ (for example, the number of classes in output domain $\mathcal{Y}$). The $i$-th token $v_i \in [[D]]$ in the input $X$ is initially embedded as $\mathbf{T}_{:v_i} + \mathbf{E}_{:i}$ using the token and position embeddings:
$$
\begin{array} { r } { f _ { \Theta } ( X ) = \Phi ( \mathbf { X } ^ { ( \tau ) } \omega ) , \quad \mathbf { X } ^ { ( t ) } = \mathsf { T F } _ { \theta ^ { ( t ) } } ( \mathbf { X } ^ { ( t - 1 ) } ) , \ \forall t \in [ [ \tau ] ] , \quad \mathbf { X } ^ { ( 0 ) } = [ \mathbf { T } _ { : v _ { 1 } } + \mathbf { E } _ { : 1 } , \ldots , \mathbf { T } _ { : v _ { L } } + \mathbf { E } _ { : L } ] . } \end{array}
$$
Here $\boldsymbol{\omega} \in \mathbb{R}^L$ is the (fixed) token projection vector – we can set $\boldsymbol{\omega} = [0, 0, \ldots, 0, 1]^{\top}$ to select the last token to make the final prediction, while $\omega = (1/L)\mathbf{1}_L$ uses the average of the $L$ tokens (along the sequence length dimension), where $\mathbf{1}_L$ is the all-one $L$-dimensional vector. The $\Theta$ in $f_{\Theta}(\cdot)$ denotes the tuple of all the (learnable) model parameters, that is $\Theta \triangleq (\mathbf{T}, \theta^{(1)}, \dots, \theta^{(\tau)}, \Phi)$. Here we are assuming that the position encodings are not learned, but that can also be incorporated in our study.
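A small NumPy sketch of the readout step with the two choices of the fixed projection vector $\omega$ described above (the dimensions here are arbitrary, and all names are ours):

```python
import numpy as np

# Readout: given the output X_tau of the last transformer block,
# the fixed vector omega pools the L token embeddings before the
# linear readout Phi in R^{Y x d}.
d, L, Y = 4, 6, 3
rng = np.random.default_rng(0)
X_tau = rng.normal(size=(d, L))
Phi = rng.normal(size=(Y, d))

omega_last = np.eye(L)[:, -1]        # [0, ..., 0, 1]: select the last token
omega_mean = np.full(L, 1.0 / L)     # average over the L tokens

logits_last = Phi @ (X_tau @ omega_last)
logits_mean = Phi @ (X_tau @ omega_mean)
assert logits_last.shape == (Y,) and logits_mean.shape == (Y,)
# Selecting the last token is exactly reading off the last column of X_tau.
assert np.allclose(logits_last, Phi @ X_tau[:, -1])
```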
Training. Given a set $S$ of $n$ sequence-output pairs $(X, y), X \in [[D]]^L, y \in \mathcal{Y}$ for training, and a per-sample loss function $\ell : \mathcal{Y} \times \hat{\mathcal{Y}} \to \mathbb{R}$, the learning involves solving the following empirical risk minimization or ERM problem:
$$
\operatorname* { m i n } _ { \Theta \triangleq ( \mathbf { T } , \theta ^ { ( 1 ) } , \dots , \theta ^ { ( \tau ) } , \Phi ) } \mathcal { L } ( \Theta ) \triangleq \frac { 1 } { n } \sum _ { ( X , y ) \in S } \ell ( y , f _ { \Theta } ( X ) ) \quad ( f _ { \Theta } ( \cdot ) \ \mathrm { d e f i n e d ~ i n ~ e q u a t i o n ~ } ( 3 ) ) .
$$
In the sequel, we will study, first empirically and then theoretically, (i) the convergence rate of stochastic gradient descent for this learning problem, and (ii) the generalization of the learned model.
# 4 Empirical Observations
In this section, we focus on empirically ablating the effect of the different forms of sparse attention on ERM convergence and generalization. For this purpose, we ensure that all hyperparameters (architectural and optimization) are the same between the standard full attention and the various sparse attention mechanisms. We will first discuss the tasks and sparse attention choices; the hyperparameter (architectural and optimization) selection procedure and our compute resources are discussed in appendix B. Then, we will present the comparison between the standard full attention and the various sparse attention mechanisms. Subsequently, focusing on full attention and heavy-hitter style input-dependent sparse attention, we will study the effect of hyperparameters (or the lack thereof) on their relative behaviors.
Tasks. We consider the List Operations or ListOps task [Nangia and Bowman, 2018] from the LRA benchmark [Tay et al., 2021] with sequence lengths between 500 and 600 both for training and testing, because we are evaluating in-distribution learning and generalization. This is a 10-class classification problem. We select this task over the other tasks in the LRA benchmark because (i) this is a task where transformers have better than random performance (around 30–40% compared to a random 10% performance), but there is still significant room for improvement, and (ii) we can control the length of the input sequences and still have a meaningful problem, which is not as straightforward with the other document or image processing tasks in LRA. From the NNCH benchmark [Deletang et al., 2023], we consider 3 tasks that can be solved as a binary classification problem – Parity, Even Pairs, and Missing Duplicates – and 4 tasks that can be solved as a multi-class classification problem – Cycle Navigation, Stack Manipulation, Modular Arithmetic with Brackets and Solve Equation. Parity, Even Pairs and Cycle Navigation are regular languages. Stack Manipulation, Modular Arithmetic and Solve Equation are deterministic context-free languages, while Missing Duplicates is a context-sensitive language. For the NNCH tasks, we consider input sequences of length 40 both for training and testing; Deletang et al. [2023] train on the same length but test on longer sequences to evaluate out-of-distribution length generalization. For all the tasks, we utilize training/holdout sets of sizes 5000/2000.
Sparse attention. While there are various sparse attention mechanisms (as we discussed in section 2), we will consider a representative subset for our empirical evaluations. For input-agnostic sparse attention, we choose banded attention (figure 1b [Parmar et al., 2018]) and block-local attention (figure 1d [Qiu et al., 2020]), with varying band and block sizes respectively. For input-dependent heavy-hitter sparse attention, we choose top- $k$ attention (figure 1f [Gupta et al., 2021]). The main motivation for selecting top- $k$ over LSH based [Kitaev et al., 2020] or clustering based [Roy et al., 2021] input-dependent sparse attention is that we can then easily ensure that the input-dependent sparse attention attends to exactly the same number of tokens as in the input-agnostic ones – that is, the number of nonzeros in each column of the attention score matrix is exactly the same across all sparse attention patterns we consider. We also consider versions of these input-agnostic sparse attention with varying number of global tokens (figure 1e). Note that, as we have highlighted before, the number of learnable parameters is exactly the same between the model using standard full attention and the one using sparse attention. A minor difference is with global tokens where we also learn their initial global token embeddings. For this reason, we use exactly the same hyperparameters for the full and sparse attention versions of the same model to ablate the effect of the sparse attention.
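Top-$k$ attention as used here can be sketched in a few lines of NumPy. This is our own illustrative implementation, not the authors' code, but it preserves the property emphasized above that every query attends to exactly $k$ keys, matching the budget of a banded or block-local mask of the same size:

```python
import numpy as np

def topk_mask(D, k):
    # D: (L x L) dot-products, columns are queries. Keep, per column,
    # only the k largest scores (the heavy hitters).
    M = np.zeros_like(D)
    idx = np.argpartition(D, -k, axis=0)[-k:, :]  # row indices of top-k per column
    np.put_along_axis(M, idx, 1.0, axis=0)
    return M

def topk_attention(D, k):
    # Masked softmax restricted to the top-k scores of each query.
    M = topk_mask(D, k)
    E = np.exp(D - D.max(axis=0, keepdims=True)) * M
    return E / E.sum(axis=0, keepdims=True)

L, k = 10, 3
D = np.random.default_rng(2).normal(size=(L, L))
A = topk_attention(D, k)
assert np.all((A > 0).sum(axis=0) == k)   # exactly k nonzeros per query column
assert np.allclose(A.sum(axis=0), 1.0)    # each column still on the simplex
```

Unlike the input-agnostic masks, the nonzero pattern here depends on `D` itself, which is what makes the comparison to banded and block-local attention an apples-to-apples ablation at a fixed sparsity budget.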
# 4.1 Overall Learning Convergence and Generalization
We present our first set of experimental results in figure 2 and figure 3, comparing the overall learning convergence and generalization of full attention based models to those using sparse attention. For these experiments, we use the ReLU activation for the MLP component of the transformer block as in the original configuration [Vaswani et al., 2017]. These results are aggregated over 10 repetitions, and we present the median performance and its inter-quartile range. Note that, for the 8 tasks we consider, the full attention model is expressive enough to achieve 100% training accuracy in all tasks, but generalizes non-trivially for only 4 of these 8 tasks – ListOps, Even Pairs, Missing Duplicates and Stack Manipulation. For the remaining 4 tasks – Parity, Modular Arithmetic, Solve Equation and Cycle Navigation – the full attention model has generalization performance close to random guessing. While we only present results for a single sparsity level (mask size) here, we present more detailed results and different sparsity levels in appendix C.1.
Figure 2: Learning convergence and generalization curves for full attention and various sparse attention based models. Each column corresponds to a task; we present 4 tasks here and 4 more in figure 3. The legend is the same across all datasets – BAND(5) denotes banded attention (figure 1b) with a band size of 5; BAND(5):1 denotes the same with a single global token (figure 1e). BLOC(5) denotes block-local attention (figure 1d) with a block size of 5; BLOC(5):1 denotes the same with a single global token. TOPK(5) is top-$k$ attention with $k = 5$. Top row: Training cross-entropy loss trajectories – lower is better. Bottom row: Generalization performance on held-out set as training progresses – higher is better. Further results with different mask sizes and different numbers of global tokens are presented in figure 11 (training cross-entropy), figure 12 (training accuracy), table 2 (generalization) and table 3 (convergence). † For the Parity task, all forms of attention have poor generalization, with a held-out accuracy as low as random guessing ($50\%$ for binary classification).
Observation 1. Input-dependent heavy-hitter sparse attention significantly speeds up ERM convergence, while input-agnostic sparse attention does not show any consistent improvement over full attention.
The results in the top row of figure 2 and figure 3 show that the input-agnostic sparse attention often converges slower than full attention. It also often struggles with expressivity in the absence of the global tokens, as seen for block-local attention with Parity and Cycle Navigation, and both block-local and banded attention with Even Pairs and Missing Duplicates. This is expected as per the expressivity results of Yun et al. [2020b]. The inclusion of the global token addresses this issue. In contrast, the training loss of top-$k$ attention converges significantly faster than full attention in all cases except Stack Manipulation, where all forms of attention converge extremely rapidly. Top-$k$ attention shows improvements (in terms of achieving $95\%$ training accuracy) over full attention ranging between $1.37\times$ (121 epochs vs 167 epochs) with ListOps and $8.83\times$ (6 epochs vs 53 epochs) with Even Pairs (see table 3 in appendix C.1 for further results on this). In all these cases, top-$k$ attention is able to be as expressive as full attention without the need for any global tokens. This consistently faster training of top-$k$ attention, in terms of the number of optimization steps needed to converge, is not something discussed in existing literature to the best of our knowledge.
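As a concrete reference for the mechanism being compared, a single-head top-$k$ attention step can be sketched as follows (a simplified NumPy illustration, not our training code; we omit the $1/\sqrt{d}$ scaling and multi-head plumbing, and ties at the $k$-th score may keep a few extra entries):

```python
import numpy as np

def topk_attention(X, W, V, k):
    """Single-head top-k attention sketch: X is (d, L); W, V are (d, d).

    Per query, only the k largest query-key dot-products survive the softmax;
    the rest are masked to -inf, so each softmax is over (at most) k scores.
    """
    scores = X.T @ W @ X                               # (L, L); row i holds query i's dot-products
    kth = np.partition(scores, -k, axis=1)[:, -k][:, None]  # k-th largest score per row
    masked = np.where(scores >= kth, scores, -np.inf)  # input-dependent heavy-hitter mask
    masked = masked - masked.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(masked)
    A /= A.sum(axis=1, keepdims=True)                  # masked softmax over k entries per row
    return (V @ X) @ A.T                               # (d, L) output
```

Setting $k = L$ recovers standard full attention, which is a convenient consistency check.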
Figure 3: Same as figure 2 with 4 more NNCH tasks. Further results with different mask sizes and different numbers of global tokens are presented in figure 13 (training cross-entropy) and figure 14 (training accuracy). † For the Modular Arithmetic, Solve Equation and Cycle Navigation tasks, all forms of attention have poor generalization, with a held-out accuracy as low as random guessing ($20\%$ for each of these 5-class classification tasks).
Observation 2. Input-dependent heavy-hitter sparse attention generalizes faster during the training process.
The results in the bottom row of figure 2 and figure 3 show that, in all cases of non-trivial generalization, the input-dependent sparse attention achieves similar (Even Pairs and Missing Duplicates) or better (ListOps) holdout accuracy when compared to full attention. Furthermore, it attains this generalization level much earlier in the training process. This does not hold for the other 5 tasks, where either (i) the full attention model generalizes poorly, and so do all other attention forms (Parity, Modular Arithmetic, Solve Equation, Cycle Navigation), or (ii) all attention forms generalize equally well (Stack Manipulation). Note that input-dependent heavy-hitter top-$k$ attention achieves better empirical generalization performance both in terms of the highest holdout accuracy during the training trajectory and the final holdout accuracy. The latter highlights that the faster ERM convergence of input-dependent sparse attention does not lead to overfitting. In fact, with the ListOps task, the final holdout accuracy with full attention drops from around $35.1 \pm 0.6\%$ to $28.9 \pm 1.4\%$, while the drop with top-$k$ attention is only from $36.3 \pm 0.3\%$ to $31.3 \pm 0.9\%$. In general, the top-$k$ attention based transformers also have comparatively lower variation in their performance, as evidenced by the fairly tight inter-quartile ranges of the trajectories of the training loss and holdout accuracy.
# 4.2 Effect of Hyperparameters
Here we study the effect of different hyperparameter choices on the relative performance of the full and the different sparse attention models. First, we study the effect of changing the activation function in the MLP component of a transformer block, to evaluate whether the differences in empirical performance are due to the attention component or the MLP component of the block. Then, we study the effect of varying the number of blocks and the number of heads in the model. Finally, we study the effect of varying optimizer hyperparameters such as the learning rate and its scheduling. In these sets of experiments, we mainly focus on the 3 tasks where the full attention model demonstrates non-trivial generalization (thus excluding Parity, Modular Arithmetic, Solve Equation and Cycle Navigation), and where there is a difference between the full and the different sparse attention models (thus excluding Stack Manipulation). Detailed results on additional tasks and different levels of sparsity are presented in appendix C.2.
Figure 4: Same as figure 2 for 3 of the tasks with GELU activation. See results for additional tasks and configurations in appendix C.2.
Observation 3. The improvement of input-dependent heavy-hitter sparse attention over full attention in terms of learning convergence and generalization is not affected by the choice of the activation function σ in the MLP block of a transformer block.
We present the performances of the different attention mechanisms (full and sparse) with the GELU activation [Hendrycks and Gimpel, 2016] in figure 4 and with the Mish activation [Misra, 2019] in figure 5 for $3/8$ tasks. For these experiments, we have kept all other hyperparameters (number of heads and blocks, learning rate and its scheduling, batch size) exactly the same as in figure 2 and figure 3 to ablate the effect of the change in the MLP activation. Comparing these results to figure 2, we see few qualitative differences in performance, both in terms of learning convergence and generalization.
Figure 5: Same as figure 2 for 3 of the tasks with Mish activation. See additional results in appendix C.2.
The input-agnostic sparse attention models continue to converge comparably to full attention with ListOps and Missing Duplicates, while falling behind on Even Pairs. One marked difference here is that, with ListOps, the full attention model initially converges slower than the sparse attention models (compare figure 2a with figure 4a and figure 5a); this is more pronounced with the Mish activation. Eventually, however, the convergence of the full attention model catches up to that of the input-agnostic sparse attention models. This initial slowdown in convergence is also reflected in the initially lower generalization accuracy. In contrast, the input-dependent heavy-hitter top-$k$ attention continues to consistently converge faster than full attention in terms of the training loss for both these MLP activations, with very little difference from the results with the ReLU activation. This form of sparse attention also achieves better generalization performance earlier in the training process. This indicates that the difference in performance is probably due to the differences in the attention mechanism and not an artifact of the MLP block configuration.
Observation 4. The improvement of the input-dependent heavy-hitter sparse attention over full attention is agnostic to the transformer architecture in terms of the number of blocks utilized in the model, and appears to increase with the number of heads in each block.
Figure 6: Comparison of full attention and top- $k$ attention in terms of the training loss trajectory for varying model architectures with the ListOps task.
In figure 6, we study the effect of varying the model architecture in terms of the number of transformer blocks or the number of attention heads per transformer block. We have again fixed all other hyperparameters as in figure 2 to solely ablate the effect of the considered architectural changes. Here we only present results for full attention and top-$k$ attention (with $k = 5$) for the ListOps task. The effect of the number of transformer blocks is shown in figure 6a, and the results indicate that top-$k$ attention continues to converge faster than full attention across all numbers of blocks $\tau$ tried ($\tau \in \{6, 10, 15, 22\}$). The relative performance difference does not seem to be affected by the number of blocks. The effect of the number of heads is presented in figure 6b. These results again indicate that top-$k$ sparse attention based models continue to converge faster than their full attention variants. Furthermore, as the number of heads increases from 1 to 4 and 8, the convergence of the full attention model appears to slow down while the convergence of top-$k$ sparse attention stays almost the same; thus, the relative improvement increases with the number of heads.
Observation 5. The improvement of the input-dependent heavy-hitter sparse attention over full attention holds across varying optimizer hyperparameters, especially for hyperparameters that have the most promising convergence.
In figure 7, we present the effect of varying the learning rate and its decay rate in the SGD optimization for full and top-$k$ attention with the ListOps task. In figure 7a, we fix the decay rate to 0.99 (as in figure 2) and vary the initial learning rate over 0.66 (column 1), 1.0 (column 2; used in previous experiments with ListOps), 1.5 (column 3) and 2.25 (column 4). We see that for the smaller values of the learning rate (0.66 and 1.0), both full attention and top-$k$ attention have the best convergence, with top-$k$ converging faster than full attention. For the larger initial learning rates (1.5 and 2.25), convergence slows down for both, and the difference between full and top-$k$ attention is less pronounced, though top-$k$ appears to be slightly better, especially in the initial part of the training. In figure 7b, we fix the initial learning rate to 1.0, and vary the decay rate over 0.9 (column 1), 0.99 (column 2; used in previous experiments with ListOps), 0.999 (column 3) and 0.9999 (column 4). For the slower decay rates (0.999 and 0.9999), the overall convergence of both methods slows down, though top-$k$ continues to converge faster than full attention. For the faster decay rate of 0.9, top-$k$ initially appears to outperform full attention by a large margin. However, both methods quickly stall prematurely as the learning rate becomes too small.
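For intuition on why the fastest decay stalls, assuming the decay is applied multiplicatively once per epoch ($\eta_t = \eta_0 \cdot \gamma^t$, our reading of the schedule), the step size after 200 epochs differs by many orders of magnitude across the decay rates considered:

```python
# Exponential per-epoch decay: lr_t = lr_0 * decay**t (assumed schedule).
# With decay = 0.9 the step size collapses within a couple of hundred epochs
# (premature stall), while 0.9999 barely decays over the same horizon.
lr0 = 1.0
for decay in (0.9, 0.99, 0.999, 0.9999):
    lr_200 = lr0 * decay ** 200  # learning rate after 200 epochs
    print(f"decay={decay}: lr after 200 epochs = {lr_200:.2e}")
```

For decay 0.9 the rate falls below $10^{-9}$, which is consistent with the premature stall seen in figure 7b, column 1.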
Observation 6. The improvement of the input-dependent heavy-hitter sparse attention over full attention also holds for the Adam optimizer with varying learning rates, especially for hyperparameters that have the most promising convergence.
Figure 7: Comparison of full attention and top- $k$ attention in terms of the training loss trajectory for varying optimization hyperparameters with the ListOps task.
Figure 8: Varying learning rates for Adam optimizer.
While all our previous empirical observations utilized the SGD optimizer, in figure 8 we also evaluate whether some of the relative performances translate to the more widely used Adam optimizer [Kingma and Ba, 2015] on the ListOps task. We evaluate various learning rates, and see that the learning rates that provided convergence for SGD (initial learning rates of around 0.1-1.0) led to divergence with Adam. Hence, we tried smaller learning rates, and observe that the improved convergence of the input-dependent heavy-hitter sparse attention is also present when using the Adam optimizer, with significant differences in some cases.
# 5 Theoretical Understanding
The empirical observations we made in the previous section demonstrate that input-agnostic sparse attention can struggle with expressivity, and does not show any consistent benefit over full attention. In contrast, input-dependent heavy-hitter top-$k$ attention shows a significant speedup in training convergence and achieves strong generalization. In this section, we want to theoretically understand why this might be happening. We begin by considering the factors that affect the convergence and generalization of SGD based training.
First considering convergence, standard analysis of SGD shows that, for an $\alpha$-Lipschitz and $\beta$-smooth finite-sum (non-convex) objective, with learning rate $\eta_i$ at the $i$-th step, SGD converges to an $\epsilon$-stationary point in $T$ steps where $\epsilon \sim O\left(\beta \alpha^2 \left(\sum_{i=0}^{T-1} \eta_i^2\right) / \left(\sum_{i=0}^{T-1} \eta_i\right)\right)$. Different choices of $\eta_i, i \in [[T]]$ (such as $\eta/i$ or $\eta/\sqrt{i}$ for some constant $\eta$ with $\eta_0 = \eta$) provide different convergence rates (such as $O(1/\log(T))$ or $O(1/\sqrt{T})$). As we control for the learning rate and its scheduling for all forms of attention in our empirical evaluations, and we ensure that all models start learning from the same initial set of parameters, the main distinction between the different forms of attention is the Lipschitz constant $\alpha$ and the smoothness constant $\beta$. Note that, with a non-smooth activation function like ReLU, we are using stochastic sub-gradient descent, where the guarantees are much weaker but still depend on the Lipschitz constant.
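The schedule-dependent factor $(\sum_i \eta_i^2)/(\sum_i \eta_i)$ in this bound is easy to evaluate numerically; the following sketch (illustrative code, with a hypothetical horizon $T$) compares the $\eta/i$ and $\eta/\sqrt{i}$ schedules:

```python
import math

def eps_factor(etas):
    # The schedule-dependent factor (sum eta_i^2) / (sum eta_i) in the
    # O(beta * alpha^2 * ...) stationarity bound for T-step SGD.
    return sum(e * e for e in etas) / sum(etas)

T, eta = 10_000, 1.0
inv_t  = [eta / (i + 1) for i in range(T)]           # eta_i = eta / i
inv_rt = [eta / math.sqrt(i + 1) for i in range(T)]  # eta_i = eta / sqrt(i)
print(eps_factor(inv_t), eps_factor(inv_rt))
```

For $\eta/i$ the numerator converges (to $\eta^2 \pi^2/6$) while the denominator grows like $\log T$, matching the $O(1/\log T)$ rate; for $\eta/\sqrt{i}$ the denominator grows like $2\sqrt{T}$, giving the faster decay with $T$.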
The generalization error of a model is defined as the difference between the empirical risk (computed on the training samples) and the true risk (computed over the population). A low training error combined with a low generalization error implies strong performance on unseen data. Building on the seminal work of Bousquet and Elisseeff [2000] on algorithmic stability, Hardt et al. [2016, Theorem 2.2] show that learning with an $\varepsilon$-stable randomized algorithm guarantees an expected generalization error (with expectation over the randomness in the algorithm and the training data sampling) of at most $\varepsilon$. Furthermore, they show that, for an $\alpha$-Lipschitz and $\beta$-smooth finite-sum nonconvex objective, the $T$-step SGD algorithm with per-step learning rate $\eta_i \leq \eta/i$ is $\varepsilon$-uniformly stable with $\varepsilon \sim O\left((\eta \alpha^2)^{\frac{1}{1+\beta\eta}} \left(1 + \frac{1}{\beta\eta}\right) T^{\frac{\beta\eta}{1+\beta\eta}}\right)$ [Hardt et al., 2016, Theorem 3.12]. As we have again controlled for the learning rates and their schedule, the only distinguishing factor between the different forms of attention (full or sparse) is the Lipschitz and smoothness constants.
Based on this intuition, we will focus on the Lipschitz constant. First, we will try to characterize how the behavior of the softmax – specifically the input stability of the softmax function – in the attention mechanism of the transformer block affects the Lipschitz constant of the overall learning objective. Then, we will characterize how the different forms of sparse attention affect the input-stability of the softmax function and the attention mechanism.
# 5.1 From Softmax Input-stability to Loss Lipschitz Constant
This learning is performed with SGD, and we are interested in understanding the effect of the masked softmax operation on both this optimization for learning the model, and the subsequent generalization of this model. Note that, fixing all other hyperparameters (such as the embedding dimension $d$ , the MLP hidden layer size $d _ { \mathsf { M L P } }$ , the number of transformer blocks $\tau$ ), there is no difference in the number of learnable parameters between a model that uses the standard softmax and the one using masked softmax (assuming that the masking does not introduce additional learnable parameters). We explicitly study the effect of this masked softmax operation in terms of the stability or Lipschitz property of the (masked) softmax.
Definition 1. A masked softmax is $\xi$-input-stable if, $\forall \mathbf{z}, \bar{\mathbf{z}} \in \mathbb{R}^d$,
$$
\|\mathrm{softmax}(\mathbf{z}) - \mathrm{softmax}(\bar{\mathbf{z}})\|_1 \leq \xi \|\mathbf{z} - \bar{\mathbf{z}}\|_1.
$$
The self-attention operation $\mathsf{A} : \mathbb{R}^{d \times L} \to \mathbb{R}^{d \times L}$ with learnable parameters $\mathbf{W}, \mathbf{V} \in \mathbb{R}^{d \times d}$ is stable with respect to its input and parameters if $\forall \mathbf{X}, \bar{\mathbf{X}} \in \mathbb{R}^{d \times L}$, $\mathbf{W}, \bar{\mathbf{W}}, \mathbf{V}, \bar{\mathbf{V}} \in \mathbb{R}^{d \times d}$:
$$
\begin{array}{rl}
& \|\mathsf{A}_{\mathbf{W},\mathbf{V}}(\mathbf{X}) - \mathsf{A}_{\mathbf{W},\mathbf{V}}(\bar{\mathbf{X}})\|_{2,1} \leq \lambda_X(\xi) \|\mathbf{X} - \bar{\mathbf{X}}\|_{2,1}, \\
& \|\mathsf{A}_{\mathbf{W},\mathbf{V}}(\mathbf{X}) - \mathsf{A}_{\bar{\mathbf{W}},\mathbf{V}}(\mathbf{X})\|_{2,1} \leq \lambda_W(\xi) \|\mathbf{W} - \bar{\mathbf{W}}\|, \\
& \|\mathsf{A}_{\mathbf{W},\mathbf{V}}(\mathbf{X}) - \mathsf{A}_{\mathbf{W},\bar{\mathbf{V}}}(\mathbf{X})\|_{2,1} \leq \lambda_V \|\mathbf{V} - \bar{\mathbf{V}}\|,
\end{array}
$$
where $\lambda _ { X } ( \xi ) , \lambda _ { W } ( \xi )$ are constants that depend on $\xi$ .
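The input-stability constant $\xi$ of definition 1 can be probed empirically: the sketch below (illustrative NumPy code) measures the ratio $\|\mathrm{softmax}(\mathbf{z}) - \mathrm{softmax}(\bar{\mathbf{z}})\|_1 / \|\mathbf{z} - \bar{\mathbf{z}}\|_1$ over random pairs, giving a lower bound on $\xi$ for the standard (unmasked) softmax:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def stability_ratio(z, zbar):
    # ||softmax(z) - softmax(zbar)||_1 / ||z - zbar||_1: any observed ratio
    # is a lower bound on the input-stability constant xi of definition 1.
    num = np.abs(softmax(z) - softmax(zbar)).sum()
    den = np.abs(z - zbar).sum()
    return num / den

rng = np.random.default_rng(0)
ratios = [stability_ratio(rng.normal(size=16), rng.normal(size=16))
          for _ in range(1000)]
print(max(ratios))  # empirical lower bound on xi for the standard softmax
```

A known global fact (via the softmax Jacobian, whose induced $\ell_1$ norm is $\max_j 2 p_j (1 - p_j) \le 1/2$) is that this ratio never exceeds $1/2$ for the unmasked softmax; the dispersion-dependent bounds below sharpen this when the scores are tightly clustered.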
We will precisely characterize the values of the constants in the above definition $( \xi , \lambda _ { X } ( \xi ) , \lambda _ { W } ( \xi ) , \lambda _ { V } )$ for the different (masked) softmax operations and corresponding (masked) self-attention operations in the sequel. However, we define them here to highlight how the stability of the softmax operation affects the stability of the self-attention operator A, and how this affects the Lipschitz-ness of the learning objective in equation (4) with respect to the model parameters $\boldsymbol { \Theta } = ( \mathbf { T } , \boldsymbol { \theta } ^ { ( 1 ) } , \dots , \boldsymbol { \theta } ^ { ( \tau ) } , \boldsymbol { \Phi } )$ . For completeness, we first need to establish the stability properties of the MLP component of a TF block (see proof in appendix D.1):
Lemma 1. Assuming that the MLP activation $\sigma$ is $\lambda_\sigma$-Lipschitz with $\sigma(0) = 0$, and the MLP parameters have norms bounded by $B > 0$, that is $\|\mathbf{P}\| \leq B$ and $\|\mathbf{R}\| \leq B$, the token-wise MLP and LN operations are stable with respect to their input and model parameters as follows, $\forall \mathbf{x}, \bar{\mathbf{x}} \in \mathbb{R}^d$ with $\|\mathbf{x}\|, \|\bar{\mathbf{x}}\| \leq \Xi$, $\mathbf{P}, \bar{\mathbf{P}} \in \mathbb{R}^{d_{\mathsf{MLP}} \times d}$, $\mathbf{R}, \bar{\mathbf{R}} \in \mathbb{R}^{d \times d_{\mathsf{MLP}}}$:
$$
\begin{array} { r l r } & { } & { \| \mathsf { M L P } _ { \mathbf { P } , \mathbf { R } } ( \mathbf { x } ) - \mathsf { M L P } _ { \mathbf { P } , \mathbf { R } } ( \bar { \mathbf { x } } ) \| \le \eta _ { X } \| \mathbf { x } - \bar { \mathbf { x } } \| , } \\ & { } & { \| \mathsf { M L P } _ { \mathbf { P } , \mathbf { R } } ( \mathbf { x } ) - \mathsf { M L P } _ { \bar { \mathbf { P } } , \mathbf { R } } ( \mathbf { x } ) \| \le \eta _ { P } \| \mathbf { P } - \bar { \mathbf { P } } \| , } \\ & { } & { \| \mathsf { M L P } _ { \mathbf { P } , \mathbf { R } } ( \mathbf { x } ) - \mathsf { M L P } _ { \mathbf { P } , \bar { \mathbf { R } } } ( \mathbf { x } ) \| \le \eta _ { R } \| \mathbf { R } - \bar { \mathbf { R } } \| , } \\ & { } & { \| \mathsf { L N } ( \mathbf { x } ) - \mathsf { L N } ( \bar { \mathbf { x } } ) \| \le \zeta _ { \mathsf { L N } } \| \mathbf { x } - \bar { \mathbf { x } } \| , } \end{array}
$$
where $\eta _ { X } = B ^ { 2 } \lambda _ { \sigma }$ , $\eta _ { P } = \eta _ { R } = \lambda _ { \sigma } B \Xi$ .
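A quick numerical sanity check of the input-stability constant $\eta_X = B^2 \lambda_\sigma$ from lemma 1, for a ReLU MLP ($\lambda_\sigma = 1$) with weights clipped to spectral norm $B$ (illustrative code; here $\mathbf{R}$ is taken as the $d \times d_{\mathsf{MLP}}$ output layer so the MLP maps $\mathbb{R}^d \to \mathbb{R}^d$):

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_mlp, B = 8, 32, 1.0

def clip_norm(M, B):
    # Rescale M so its spectral (largest singular value) norm is at most B.
    s = np.linalg.norm(M, 2)
    return M if s <= B else M * (B / s)

P = clip_norm(rng.normal(size=(d_mlp, d)), B)  # first MLP layer, ||P|| <= B
R = clip_norm(rng.normal(size=(d, d_mlp)), B)  # second MLP layer, ||R|| <= B

def mlp(x):
    return R @ np.maximum(P @ x, 0.0)  # ReLU activation: lambda_sigma = 1, sigma(0) = 0

worst = 0.0
for _ in range(1000):
    x, xb = rng.normal(size=d), rng.normal(size=d)
    worst = max(worst, np.linalg.norm(mlp(x) - mlp(xb)) / np.linalg.norm(x - xb))
print(worst)  # never exceeds eta_X = B**2 * lambda_sigma = 1
```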
The Lipschitz property of the LayerNorm (and the corresponding value of $\zeta _ { L N }$ ) has been previously established in Kim et al. [2021]. Given this, we can establish the following results for a transformer block (see proof in appendix D.2):
Theorem 1. Given definition 1 and lemma 1, a transformer block TF with learnable parameters $\boldsymbol { \theta } = ( \mathbf { W } , \mathbf { V } , \mathbf { P } , \mathbf { R } )$ is $\lambda _ { \theta } ( \boldsymbol { \xi } )$ -stable with respect to its learnable parameters $\theta$ with
$$
\lambda_\theta(\xi) = \zeta_{\mathsf{LN}} \left( \zeta_{\mathsf{LN}} (1 + \eta_X)(\lambda_W(\xi) + \lambda_V) + L(\eta_P + \eta_R) \right),
$$
and TF is $\lambda _ { \mathbf { X } } ( \boldsymbol { \xi } )$ -stable with respect to its input $\mathbf { X }$ with
$$
\lambda _ { \mathbf { X } } ( \boldsymbol { \xi } ) = \zeta _ { \mathsf { L N } } ^ { 2 } ( 1 + \eta _ { X } ) ( 1 + \lambda _ { X } ( \boldsymbol { \xi } ) ) ,
$$
where we explicitly note the dependence of the stability constant with respect to learnable parameters $\lambda _ { \theta } ( \xi )$ , and input $\lambda _ { \mathbf { X } } ( \boldsymbol { \xi } )$ to the Lipschitz constant $\xi$ of the (masked) softmax operation. Thus, for any parameter tuples $\theta , { \bar { \theta } }$ and input $\mathbf { X } , \bar { \mathbf { X } }$ , we have
$$
\|\mathsf{TF}_\theta(\mathbf{X}) - \mathsf{TF}_{\bar{\theta}}(\mathbf{X})\|_{2,1} \leq \lambda_\theta(\xi) \|\theta - \bar{\theta}\|, \quad \text{and} \quad \|\mathsf{TF}_\theta(\mathbf{X}) - \mathsf{TF}_\theta(\bar{\mathbf{X}})\|_{2,1} \leq \lambda_{\mathbf{X}}(\xi) \|\mathbf{X} - \bar{\mathbf{X}}\|.
$$
This allows us to establish the following result for the aforementioned model with $\tau$ transformer blocks (see proof in appendix D.3):
Theorem 2. Assuming that the sample-wise loss $\ell$ in equation (4) is $\alpha$-Lipschitz and $\|\Phi\| \leq 1$ with $\omega = (1/L)\mathbf{1}_L$, under the conditions of definition 1 and theorem 1, the learning objective $\mathcal{L}$ in equation (4) is $\lambda_{\mathcal{L}}(\xi)$-Lipschitz with respect to the learnable parameters $\Theta = (\mathbf{T}, \theta^{(1)}, \dots, \theta^{(\tau)}, \Phi)$, where
$$
\lambda_{\mathcal{L}}(\xi) = \alpha \left( \Xi + \lambda_{\mathbf{X}}(\xi)^\tau \left( 1 + \frac{\lambda_\theta(\xi)}{L(\lambda_{\mathbf{X}}(\xi) - 1)} \right) \right), \quad \text{and} \quad |\mathcal{L}(\Theta) - \mathcal{L}(\bar{\Theta})| \leq \lambda_{\mathcal{L}}(\xi) \|\Theta - \bar{\Theta}\|,
$$
for any set of model parameters $\Theta , \bar { \Theta }$ .
This characterizes how the Lipschitz constant of the learning loss, and thus the convergence rate of the SGD based ERM, is tied to the input-stability constant $\xi$ of the (masked) softmax. Thus, based on theorem 1, the larger the values of $\lambda _ { W } ( \xi )$ , $\lambda _ { X } ( \xi )$ and $\lambda _ { V }$ in definition 1, the larger the Lipschitz constant of the training loss. We will characterize these quantities in the sequel.
# 5.2 Role of Sparse Softmax
To understand the effect of sparsity on the stability of the softmax function, we begin by understanding the stability of the standard full softmax and the subsequent full attention operation. Li et al. [2023] establish the following stability of the standard softmax (see lemma S2 in appendix E.1):
Lemma 2 (adapted from Li et al. [2023] Lemma B.1). For any ${ \bf z } , \bar { \bf z } \in \mathbb { R } ^ { L }$ with
$$
\operatorname*{max}_{i,j \in [[L]]} z_i - z_j \leq \delta, \quad \text{and} \quad \operatorname*{max}_{i,j \in [[L]]} \bar{z}_i - \bar{z}_j \leq \delta,
$$
for a positive constant $\delta > 0$ , we have the following:
$$
\|\mathrm{softmax}(\mathbf{z})\|_\infty \leq \frac{e^\delta}{L}, \quad \|\mathrm{softmax}(\mathbf{z}) - \mathrm{softmax}(\bar{\mathbf{z}})\|_1 \leq \frac{e^\delta}{L} \|\mathbf{z} - \bar{\mathbf{z}}\|_1.
$$
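The first bound of lemma 2 is easy to verify numerically: when the spread of the scores is at most $\delta$, no softmax entry can exceed $e^\delta / L$, so a smaller dispersion forces a flatter, more stable softmax (illustrative sketch):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(3)
L = 12
for delta in (0.5, 1.5, 3.0):
    worst = 0.0
    for _ in range(500):
        # Scores in [0, delta], so max_i z_i - min_i z_i <= delta as in lemma 2.
        z = rng.uniform(0.0, delta, size=L)
        worst = max(worst, softmax(z).max())
    print(f"delta={delta}: max softmax entry {worst:.3f} <= e^delta/L = {np.exp(delta)/L:.3f}")
```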
A critical factor in the softmax stability is the quantity $\delta$, the upper bound on the difference between the largest and smallest values over which the softmax is applied. In the context of dot-product self-attention, it corresponds to the difference between the largest and smallest query-key dot-products for any query. We call this term the semantic dispersion, and define it precisely as follows:
Definition 2. For a (sparse) attention transformer block with length-$L$ input sequences $\mathbf{X} \in \mathbb{R}^{d \times L}$ and a mask $\mathbf{M} \in \{0,1\}^{L \times L}$ (input dependent or input agnostic), we define the per-query semantic dispersion as a scalar $\delta > 0$ such that, for any query token $\mathbf{X}_{:i}$, $i \in [[L]]$, the difference between the largest and smallest unmasked query-key dot-products is bounded from above by $\delta$. That is, for any input sequence of token representations $\mathbf{X} \in \mathbb{R}^{d \times L}$, mask $\mathbf{M} \in \{0,1\}^{L \times L}$ and attention parameters $\mathbf{W} \in \mathbb{R}^{d \times d}$, for all query tokens $i \in [[L]]$, we have
$$
\delta \geq \operatorname* { m a x } _ { \substack { j , j ^ { \prime } \in [ L ] : M _ { j i } = M _ { j ^ { \prime } i } = 1 } } ( \mathbf { X } _ { : i } ^ { \top } \mathbf { W } \mathbf { X } _ { : j } - \mathbf { X } _ { : i } ^ { \top } \mathbf { W } \mathbf { X } _ { : j ^ { \prime } } ) .
$$
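Given a score matrix and a mask, the per-query semantic dispersion can be computed directly; the sketch below (illustrative code, with rows as queries for simplicity) also shows that an input-dependent top-$k$ mask can only shrink the dispersion relative to full attention, since it discards the smallest dot-products per query:

```python
import numpy as np

def dispersion(scores, M):
    # Per-query semantic dispersion (definition 2): for each query i, the gap
    # between the largest and smallest *unmasked* dot-products; delta is the
    # maximum of these gaps over all queries.
    gaps = []
    for i in range(scores.shape[0]):
        s = scores[i, M[i] == 1]
        gaps.append(s.max() - s.min())
    return max(gaps)

rng = np.random.default_rng(4)
L, k = 16, 4
scores = rng.normal(size=(L, L))  # stand-ins for the X^T W X dot-products
full = np.ones((L, L), dtype=int)
topk = np.zeros((L, L), dtype=int)
rows = np.argpartition(-scores, k - 1, axis=1)[:, :k]
np.put_along_axis(topk, rows, 1, axis=1)  # heavy-hitter mask: k largest per query
print(dispersion(scores, full), dispersion(scores, topk))  # delta_h <= delta_s
```

In contrast, an input-agnostic mask (banded, block-local) can easily leave both a very large and a very small dot-product unmasked for the same query, so its dispersion need not be any smaller than that of full attention.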
We discuss this definition with examples in figure 9. Based on this definition, we can establish the stability of the softmax and the attention operation A in terms of the $\xi , \lambda _ { X } ( \xi ) , \lambda _ { W } ( \xi )$ in definition 1 as follows (see theorem S3 in appendix E.1 for details):
Theorem 3 (partially adapted from Li et al. [2023], Lemma B.2). Assume that the per-token Euclidean norms are bounded as $\|\mathbf{X}_{:i}\| \leq \Xi \; \forall i \in [[L]]$, the parameter norms are bounded as $\|\mathbf{W}\| \leq \Gamma$ and $\|\mathbf{V}\| \leq \Upsilon$, and the per-query semantic dispersion (definition 2) is bounded by $\delta_s > 0$. Then the standard softmax is $\xi_s$-stable with $\xi_s = e^{\delta_s}/L$, and the standard attention is stable as in
[Figure 9 plots omitted. Panels: (a) Standard attention dispersion $\delta_s$; (b) Banded attention dispersion $\delta_r$; (c) Causal banded attention dispersion $\delta_r$; (d) Heavy-hitter attention dispersion $\delta_h$ and separation $\Delta_h$.]
Figure 9: Examples of per-query semantic dispersion $\delta$ (definition 2) and heavy-hitter semantic separation $\Delta$ (definition 3): Consider a sequence of length $L = 10$, and the query token $\mathbf{X}_{:6}$. Let $z_j = \mathbf{X}_{:6}^\top \mathbf{W} \mathbf{X}_{:j}, j \in [[L]]$ denote the $j$-th query-key dot-product. (a) Figure 9a shows that in standard full attention (no masking), the $z_j$s can range between $-\Gamma\Xi^2$ and $+\Gamma\Xi^2$ under the conditions of theorem 1 (namely $\|\mathbf{W}\| \leq \Gamma$, $\|\mathbf{X}_{:i}\| \leq \Xi \; \forall i \in [[L]]$), thereby giving us a semantic dispersion $\delta_s \approx 2\Gamma\Xi^2$ in this example. In general, with full attention, we cannot expect a tighter bound on $\delta_s$ than $2\Gamma\Xi^2$. (b) Figure 9b shows the same example with an input-agnostic banded masked attention with the same dot-product values, where the query token $\mathbf{X}_{:6}$ only attends to the succeeding key tokens $\mathbf{X}_{:6}, \mathbf{X}_{:7}, \mathbf{X}_{:8}$, while the remaining dot-products are masked. In this example, the semantic dispersion $\delta_r \approx \delta_s \approx 2\Gamma\Xi^2$, no better than with full attention. (c) Figure 9c shows the example with an input-agnostic causal banded attention mask where token $\mathbf{X}_{:6}$ only attends to the preceding key tokens $\mathbf{X}_{:5}, \mathbf{X}_{:4}, \mathbf{X}_{:3}$, masking out the rest. In this case, this input-agnostic masked attention has a small dispersion $\delta_r < 2\Gamma\Xi^2$, better than that of full attention $\delta_s \approx 2\Gamma\Xi^2$. However, there is usually no way to ensure that a condition where $\delta_r \ll \delta_s$ will exist. (d) Figure 9d shows the example with an input-dependent heavy-hitter attention, where only the high values are unmasked, and there is a significant semantic separation $\Delta_h$ between the masked and unmasked dot-products. With this form of input-dependent masking, we can potentially have a significantly smaller semantic dispersion $\delta_h \ll 2\Gamma\Xi^2$, implying $\delta_h \ll \delta_s$.
definition 1 with
$$
\lambda_X(\xi_s) = \xi_s \Upsilon L (2\Gamma\Xi^2 + 1) = e^{\delta_s} \Upsilon (2\Gamma\Xi^2 + 1), \quad \lambda_W(\xi_s) = \xi_s \Upsilon L^2 \Xi^3 = e^{\delta_s} \Upsilon L \Xi^3, \quad \lambda_V = L\Xi.
$$
Note that the semantic dispersion $\delta_s$ plays a significant role in $\lambda_X(\xi_s)$ and $\lambda_W(\xi_s)$. Thus, the larger the value of $\delta_s$, the higher the values of these constants, and thus the higher the per-transformer-block stability constants $\lambda_\theta(\xi_s)$ and $\lambda_{\mathbf{X}}(\xi_s)$ in theorem 1. We discuss the semantic dispersion for full attention in figure 9a. In general, we cannot expect this dispersion $\delta_s$ to be significantly smaller than $2\Gamma\Xi^2$.
Next we study the stability of input-agnostic regular $k$-sparse attention transformers, where each query token attends to exactly $k$ key tokens, and each key token is attended to by exactly $k$ query tokens. This form includes the aforementioned banded attention (figure 1b), block-local attention (figure 1d) and strided attention (figure 1c); random attention satisfies this only in expectation. It seems intuitive that sparse attention would increase stability because it reduces the number of pairwise token interactions, and thus the propagation of any input perturbation, with more sparsity leading to more stability. However, we show that the effect of this form of sparse attention is more nuanced (see theorem S4 in appendix E.2):
Theorem 5. Consider the self-attention operation $\mathsf{A} : \mathbb{R}^{d \times L} \to \mathbb{R}^{d \times L}$ with input $\mathbf{X}$ of $L$ token representations and parameters $\mathbf{W}, \mathbf{V} \in \mathbb{R}^{d \times d}$ utilizing a $k$-regular input-agnostic masking function $m : \mathbb{R}^{L \times L} \to \{0,1\}^{L \times L}$ where $m(\mathbf{D}) = \mathbf{M} \; \forall \mathbf{D} \in \mathbb{R}^{L \times L}$. Assume that the per-token Euclidean norms are bounded as $\|\mathbf{X}_{:i}\| \leq \Xi \; \forall i \in [[L]]$, the parameter norms are bounded as $\|\mathbf{W}\| \leq \Gamma$ and $\|\mathbf{V}\| \leq \Upsilon$, and the per-query semantic dispersion (definition 2) is bounded by $\delta_r > 0$. Then the masked softmax is $\xi_r$-stable with $\xi_r = e^{\delta_r}/k$, and the regular $k$-sparse attention is stable as in definition 1 with
$$
\lambda_X(\xi_r) = \xi_r \Upsilon k (2\Gamma\Xi^2 + 1) = e^{\delta_r} \Upsilon (2\Gamma\Xi^2 + 1), \quad \lambda_W(\xi_r) = \xi_r \Upsilon L k \Xi^3 = e^{\delta_r} \Upsilon L \Xi^3, \quad \lambda_V = L \Xi.
$$
This result shows that input-agnostic $k$-regular sparse attention provides guarantees very similar to those of full attention, except for the $e^{\delta_r}$ term involving the per-query semantic dispersion. This implies that this sparse attention would yield a significant improvement in stability only if the per-query semantic dispersion $\delta_r$ is sufficiently small relative to the full attention semantic dispersion $\delta_s$; one such situation is visualized in figure 9c. With regular sparse attention such as banded, block-local or strided, the dispersion $\delta_r$ would be small only if the per-query dot-products somehow align with the sparsity patterns – with temporal-locality-based patterns such as banded and block-local, the dot-products for nearby keys (in terms of sequence position) would need to span a small range; with strided patterns, the dot-products for keys matching the stride regularity would need to span a small range. These conditions are too restrictive, and thus $\delta_r$ will generally not be sufficiently smaller than the semantic dispersion of standard full attention $\delta_s \approx 2\Gamma\Xi^2$ (as shown in the example of figure 9b). An important aspect of the above result in theorem 4 is that, if $k = L$ (that is, we are considering full attention), then $\delta_r = \delta_s$, and the results reduce to exactly those of theorem 3.
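This can be seen numerically with a small NumPy sketch (our illustration; the circular band is an assumption chosen so that the mask is exactly $k$-regular): with arbitrary inputs, restricting the dot-products to the banded keys gives no systematic improvement of $\delta_r$ over $\delta_s$.

```python
import numpy as np

def banded_mask(L, k):
    """Input-agnostic k-regular banded mask (circular, so each key is also
    attended by exactly k queries): query i attends keys i .. i+k-1 mod L.
    M[j, i] = True means query i attends key j."""
    M = np.zeros((L, L), dtype=bool)
    for i in range(L):
        for off in range(k):
            M[(i + off) % L, i] = True
    return M

rng = np.random.default_rng(1)
d, L, k = 16, 10, 3
X = rng.normal(size=(d, L)); X /= np.linalg.norm(X, axis=0, keepdims=True)
W = rng.normal(size=(d, d)); W /= np.linalg.norm(W, 2)
Z = X.T @ W @ X                                   # Z[i, j] = X_i^T W X_j
M = banded_mask(L, k)
delta_r = max(Z[i, M[:, i]].max() - Z[i, M[:, i]].min() for i in range(L))
delta_s = max(Z[i].max() - Z[i].min() for i in range(L))
# delta_r <= delta_s always, but nothing forces delta_r << delta_s.
assert delta_r <= delta_s + 1e-12
assert M.sum(axis=0).min() == k and M.sum(axis=1).min() == k   # k-regular
```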
An input-dependent sparse attention is the “heavy-hitter attention”, where, for any query token $i \in [[L]]$, we mask all but the $k$ highest values $\mathbf{X}_{:i}^{\top}\mathbf{W}\mathbf{X}_{:j}$ in column $i$ of the attention dot-product matrix $\mathbf{X}^{\top}\mathbf{W}\mathbf{X}$, and there is a significant gap between the unmasked dot-products $\mathbf{X}_{:i}^{\top}\mathbf{W}\mathbf{X}_{:j}$ for the unmasked keys $j$ with $M_{ji} = 1$, and the masked dot-products $\mathbf{X}_{:i}^{\top}\mathbf{W}\mathbf{X}_{:j'}$ for the masked keys $j'$ with $M_{j'i} = 0$. LSH-based attention [Kitaev et al., 2020], top-$k$ attention [Gupta et al., 2021], cluster attention [Roy et al., 2021], and thresholded attention [Zhao et al., 2019] fit this form of sparse attention. Unlike the regular $k$-sparse attention, here each token can attend to a small number of tokens (figure 1f), but each token can be attended to by anything between 0 and $L$ tokens, making the stability analysis of input-agnostic regular sparse attention (theorem 4) inapplicable. To study these heavy-hitter sparse attention forms, we need to formalize a notion of semantic separation between the masked and unmasked query-key dot-products:
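A top-$k$ instance of such an input-dependent mask can be sketched as follows (our illustration; `topk_mask` is a hypothetical helper, and real variants such as LSH or cluster attention select heavy hitters by other means):

```python
import numpy as np

def topk_mask(Z, k):
    """Input-dependent heavy-hitter mask: for each query i, keep only the k
    largest dot-products in row i of Z, where Z[i, j] = X_i^T W X_j.
    Returns M with M[j, i] = True iff key j is unmasked for query i."""
    L = Z.shape[0]
    M = np.zeros((L, L), dtype=bool)
    for i in range(L):
        top = np.argsort(Z[i])[-k:]       # indices of the k largest values
        M[top, i] = True
    return M

rng = np.random.default_rng(2)
d, L, k = 16, 10, 3
X = rng.normal(size=(d, L))
W = rng.normal(size=(d, d))
Z = X.T @ W @ X
M = topk_mask(Z, k)
assert (M.sum(axis=0) == k).all()         # every query attends exactly k keys
# Key-side load is input-dependent: anywhere between 0 and L queries per key.
assert 0 <= M.sum(axis=1).min() and M.sum(axis=1).max() <= L
```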
Definition 3. For a sparse attention transformer block with length-$L$ input sequences $\mathbf{X} \in \mathbb{R}^{d \times L}$, and an input-dependent heavy-hitter mask $\mathbf{M} \in \{0,1\}^{L \times L}$, we define the per-query semantic separation as a scalar $\Delta > 0$ such that, for any query token $\mathbf{X}_{:i}, i \in [[L]]$, the minimum difference between any pair of unmasked and masked query-key dot-products is bounded from below by $\Delta$. That is, for all query tokens $i \in [[L]]$, with unmasked key $j$ and masked key $j'$, we have
$$
\Delta \leq \operatorname*{min}_{\forall j, j' \in [[L]] : M_{ji} = 1, M_{j'i} = 0} \left( \mathbf{X}_{:i}^{\top}\mathbf{W}\mathbf{X}_{:j} - \mathbf{X}_{:i}^{\top}\mathbf{W}\mathbf{X}_{:j'} \right).
$$
The notion of separation is visualized in figure 9d. Given this definition, we present the stability of the heavy-hitter attention in the following (see theorem S5 in appendix E.3):
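The separation can be computed directly from the dot-product matrix and the mask; the sketch below (ours, not from the paper) also checks that a top-$k$ mask yields a nonnegative separation by construction, since every unmasked dot-product is at least as large as every masked one for the same query:

```python
import numpy as np

def semantic_separation(Z, M):
    """Minimum gap, over all queries i, between the smallest unmasked and the
    largest masked dot-product (definition 3). Z[i, j] = X_i^T W X_j and
    M[j, i] = True iff key j is unmasked for query i."""
    L = Z.shape[0]
    gaps = []
    for i in range(L):
        keep, drop = Z[i, M[:, i]], Z[i, ~M[:, i]]
        if keep.size and drop.size:
            gaps.append(keep.min() - drop.max())
    return min(gaps)

rng = np.random.default_rng(3)
L, k = 10, 3
Z = rng.normal(size=(L, L))
M = np.zeros((L, L), dtype=bool)
for i in range(L):
    M[np.argsort(Z[i])[-k:], i] = True    # top-k heavy-hitter mask
Delta_h = semantic_separation(Z, M)
assert Delta_h >= 0.0                      # nonnegative by construction
```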
Theorem 5. Consider the self-attention operation $\mathsf{A} : \mathbb{R}^{d \times L} \to \mathbb{R}^{d \times L}$ with input $\mathbf{X}$ of $L$ token representations and parameters $\mathbf{W}, \mathbf{V} \in \mathbb{R}^{d \times d}$ utilizing a $k$-heavy-hitter input-dependent masking function $m : \mathbb{R}^{L} \to \{0,1\}^{L}$, applied columnwise to the dot-product matrix to get a mask matrix $\mathbf{M} \in \{0,1\}^{L \times L}$. Assume the following: (i) for any query-key pairs $\mathbf{X}, \bar{\mathbf{X}} \in \mathbb{R}^{d \times L}$, the $k$-heavy-hitter mask $\mathbf{M} = m(\bar{\mathbf{X}}^{\top}\mathbf{W}\mathbf{X})$ (applied columnwise) has a minimum per-query semantic separation (definition 3) of $\Delta_h > 0$; (ii) a maximum of $\beta k$, $\beta > 1$, query tokens attend to a single key token, that is, $\|\mathbf{M}_{i:}\|_1 \leq \beta k$ for any $i \in [[L]]$; (iii) the per-token Euclidean norms are bounded as $\|\mathbf{X}_{:i}\| \leq \Xi \; \forall i \in [[L]]$, and the parameter norms are bounded as $\|\mathbf{W}\| \leq \Gamma$ and $\|\mathbf{V}\| \leq \Upsilon$; and (iv) the per-query semantic dispersion (definition 2) is bounded by $\delta_h > 0$. Then the masked softmax is $\xi_h$-stable with $\xi_h = \left( e^{\delta_h}/k \right)(1 + 1/\Delta_h)$, and the $k$-heavy-hitter sparse attention is stable as in definition 1 with
Table 1: Bounds for $\xi , \lambda _ { X } ( \xi ) , \lambda _ { W } ( \xi ) , \lambda _ { V }$ from definition 1 for different forms of attention. Note that $\lambda _ { V } = L \Xi$ for all forms of attention, and thus elided from this table.
$$
\begin{array} { r l } & { \lambda _ { X } ( \xi _ { h } ) = \xi _ { h } \Upsilon k \left( 2 \Gamma \Xi ^ { 2 } ( \beta + 1 ) + \frac { \beta } { 1 + 1 / \Delta _ { h } } \right) = e ^ { \delta _ { h } } \Upsilon \left( \beta + 2 \Gamma \Xi ^ { 2 } ( \beta + 1 ) ( 1 + 1 / \Delta _ { h } ) \right) , } \\ & { \lambda _ { W } ( \xi _ { h } ) = 2 \xi _ { h } \Upsilon L k \Xi ^ { 3 } = 2 e ^ { \delta _ { h } } \Upsilon L \Xi ^ { 3 } ( 1 + 1 / \Delta _ { h } ) , \quad \lambda _ { V } = L \Xi . } \end{array}
$$
First, note that, with the heavy-hitter attention, we would expect the per-query semantic dispersion $\delta_h$ – the gap between the highest and lowest unmasked dot-products – to be significantly smaller than $\delta_s$, especially for small $k$.
To compare the stability constants for all different forms of attention, we have put them together in table 1. To characterize the conditions when the stability constants for the heavy-hitter sparse attention provides improved guarantees over full attention, we have the following result:
Corollary 1. Consider the definitions and conditions of theorem 3 and theorem 5. Further assume that (i) the maximum per-query semantic dispersion for standard attention is $\delta_s \leq 2\Gamma\Xi^2$, while that of heavy-hitter attention is $\delta_h = c_1 \delta_s$, and (ii) the heavy-hitter minimum per-query semantic separation is $\Delta_h = c_2 \delta_s$ for some positive constants $c_1, c_2$. Then $\lambda_W(\xi_h) < \lambda_W(\xi_s)$ when
$$
c_1 + \frac{1}{\delta_s} \log \left( 2 \left( 1 + \frac{1}{c_2 \delta_s} \right) \right) < 1,
$$
and $\lambda _ { X } ( \xi _ { h } ) < \lambda _ { X } ( \xi _ { s } )$ when
$$
c _ { 1 } + \frac { 1 } { \delta _ { s } } \log \left( 2 \Gamma \Xi ^ { 2 } ( 1 + \beta ) \left( 1 + \frac { 1 } { c _ { 2 } \delta _ { s } } \right) + \beta \right) - \frac { 1 } { \delta _ { s } } \log ( 2 \Gamma \Xi ^ { 2 } + 1 ) < 1 .
$$
This result shows that a moderate reduction in the dispersion ($\delta_h$ vs. $\delta_s$) allows for significant improvements in $\lambda_W$ even for small separation $\Delta_h$, while improvements in $\lambda_X$ are more moderate. We discuss this in detail in appendix E.4.
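As a sanity check, the $\lambda_W$ condition of corollary 1 can be verified numerically against the explicit bounds from theorems 3 and 5; the constants below are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical constants (not from the paper) to check corollary 1's
# lambda_W condition against the explicit lambda_W bounds.
Upsilon, L, Xi = 1.0, 128, 1.0
delta_s = 2.0                       # full-attention dispersion (2*Gamma*Xi^2, Gamma = 1)
c1, c2 = 0.3, 1.0                   # delta_h = c1*delta_s, Delta_h = c2*delta_s
delta_h, Delta_h = c1 * delta_s, c2 * delta_s

lam_W_full = math.exp(delta_s) * Upsilon * L * Xi**3                    # theorem 3
lam_W_hh = 2 * math.exp(delta_h) * Upsilon * L * Xi**3 * (1 + 1 / Delta_h)  # theorem 5

# Corollary 1's condition is an exact rewriting of lam_W_hh < lam_W_full:
condition = c1 + math.log(2 * (1 + 1 / (c2 * delta_s))) / delta_s < 1
assert condition == (lam_W_hh < lam_W_full)
assert lam_W_hh < lam_W_full        # improvement holds for these constants
```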
To see how these stability constants affect the loss landscapes, we also visualize them in figure 10 (top and middle rows) utilizing the techniques proposed in Li et al. [2018] (see appendix E.5). We see that the contours on the loss surfaces of the full attention model are somewhat asymmetric – see, for example, around the center in figure 10b and figure 10c, and moderately in figure 10a. In contrast, the loss surfaces of the heavy-hitter top-$k$ attention model are quite symmetric, especially around the center. We also utilize the loss surface to approximately estimate the Lipschitz constant across the loss landscape (see details in appendix E.5). We plot the distribution of these estimates in the bottom row of figure 10 for varying distance from the optimum – we plot the 50-th, 75-th, 95-th and 99-th percentile values of these estimates for the full attention model and the heavy-hitter top-$k$ attention model. We see that near the optimum (the final trained model), the distributions of these estimates are close for both models. However, as we move farther away from the trained model, the distributions change significantly, and top-$k$ attention provides a smaller Lipschitz constant estimate compared to full attention at all percentiles of the distribution. This indicates that, empirically, the loss for top-$k$ attention has more favorable Lipschitz continuity compared to full attention, which in turn implies both faster convergence and better generalization guarantees. Thus, our stability-based theoretical investigation in this section appears to align with our empirical observations in section 4.
Figure 10: Top and middle rows: Loss surfaces of the models with full attention (top row) and top- $k$ attention (middle row) for the tasks considered in figure 2 with the corresponding hyperparameters utilizing the filter-normalized version of the loss landscape visualization. The (0,0) grid point corresponds to the final trained model – the optimum. Bottom row: Distribution of the estimated Lipschitz constants computed in the random directions used to generate the loss landscapes. We report the distributions on the vertical axis in terms of the 50-th (dotted), 75-th (dash-dotted), 95-th (dashed) and 99-th (solid) percentiles (lower is better). On the horizontal axis, we denote the distance of the parameters from the optimum on the grid, and visualize how the distributions vary with the distance. | Various forms of sparse attention have been explored to mitigate the
quadratic computational and memory cost of the attention mechanism in
transformers. We study sparse transformers not through a lens of efficiency but
rather in terms of learnability and generalization. Empirically studying a
range of attention mechanisms, we find that input-dependent sparse attention
models appear to converge faster and generalize better than standard attention
models, while input-agnostic sparse attention models show no such benefits -- a
phenomenon that is robust across architectural and optimization hyperparameter
choices. This can be interpreted as demonstrating that concentrating a model's
"semantic focus" with respect to the tokens currently being considered (in the
form of input-dependent sparse attention) accelerates learning. We develop a
theoretical characterization of the conditions that explain this behavior. We
establish a connection between the stability of the standard softmax and the
loss function's Lipschitz properties, then show how sparsity affects the
stability of the softmax and the subsequent convergence and generalization
guarantees resulting from the attention mechanism. This allows us to
theoretically establish that input-agnostic sparse attention does not provide
any benefits. We also characterize conditions when semantic focus
(input-dependent sparse attention) can provide improved guarantees, and we
validate that these conditions are in fact met in our empirical evaluations. | [
"cs.LG"
] |
# 1 INTRODUCTION
The growing demand to automate software development tasks has led to the emergence of automated techniques for generating software artifacts, such as code snippets [2, 35], code changes [13, 43], and summarization [30]. However, evaluating the correctness of those generated artifacts remains a challenge, largely due to the existence of multiple correct or semantically equivalent solutions for a given problem.
One accurate evaluation method is human evaluation, where multiple human experts directly assess the correctness of the generated artifacts. However, human evaluation is labor-intensive and time-consuming, making it impractical for large-scale assessments. An alternative is test-based metrics, such as pass@k [2], where human experts manually design a set of test cases and the generated code is then executed to check whether it passes these test cases. While test-based metrics are more scalable than human evaluation, they still require the careful manual design of comprehensive test cases that cover edge cases [16, 44]. Designing complete test cases is tedious and challenging itself, and many Software Engineering (SE) tasks lack the necessary test cases, making test-based metrics not practical for large-scale evaluations.
To enable scalable evaluation of generated artifacts, several automatic evaluation metrics have been proposed [25, 27, 37, 41, 45]. These metrics offer greater scalability by eliminating the need for human evaluation or test cases. However, they are typically less accurate in assessing correctness [11]. This study aims to advance automatic evaluation metrics to bridge the gap between automated evaluation results and human judgment. These automatic metrics can generally be categorized into three types: 1) Match-based metrics, 2) Embedding-based metrics, and 3) LLM-as-judge metrics. Match-based metrics, such as BLEU [25] and CodeBLEU [27], evaluate the similarity between the generated artifact and a reference, i.e., a correct answer. Embedding-based metrics [37, 41], on the other hand, also compare the generated artifact to a reference, but they first encode both into embeddings and then measure the similarity between them. In contrast, the LLM-as-judge metric [45] instructs the LLMs to judge the quality of the generated artifact. Despite the widespread adoption of metrics above, they still suffer from two major limitations.
Interpreting Similarity as Correctness. Both match-based and embedding-based metrics use similarity as an indicator of correctness. However, similarity does not always align with correctness. For example, if the generated artifact is semantically equivalent to the reference but differs significantly in syntax, the similarity scores would be low, failing to accurately reflect the correctness. Additionally, Evtikhiev et al. [11] provided empirical evidence demonstrating a significant misalignment between human judgment and match-based metrics.
Lack of Diverse Evaluation Strategies for Correctness Assessment. The state-of-the-art (SOTA) LLM-as-judge evaluation metric for code, ICE-Score [45], instructs LLMs to directly assign evaluation scores based on predefined criteria—natural language descriptions of correct and incorrect code. However, it primarily focuses on a single strategy, lacking diverse strategies to assess correctness from different angles. A more comprehensive LLM-as-judge framework is needed to integrate multiple evaluation strategies, ensuring a more reliable and robust assessment.
Our Work. To address these limitations, we propose SWE-Judge (SoftWarE Judge), the first LLM-as-Ensemble-Judge metric designed to assess the correctness of generated software artifacts, including code snippets, patches, and summarization. Unlike match-based and embedding-based metrics that approximate correctness through similarity, SWE-Judge, like other LLM-as-judge approaches, leverages LLMs’ reasoning and software comprehension abilities for semantic evaluation. Inspired by the rigorous academic peer-review process [7], where multiple reviewers collaborate to ensure an accurate assessment of a paper’s quality, SWE-Judge utilizes a multi-evaluator framework. Specifically, SWE-Judge first defines five distinct evaluation strategies, each represented by an independent evaluator responsible for its own correctness assessment strategy. Second, a dynamic team-selection mechanism chooses the most suitable subset of evaluators. Third, the selected team members conduct their assessments, and their results are aggregated to generate a final correctness score through ensembling. This approach enhances the evaluation process by incorporating diverse strategies and dynamically selecting a suitable team of judges, thereby improving the quality of the automatic correctness assessment.
We evaluate SWE-Judge on a diverse set of SE datasets, including CoNaLa [11, 36], Card2Code [11, 19], HumanEval-X [40], APPS [14], APR-Assess [15], and Summary-Assess [22, 29]. These datasets encompass three popular SE tasks: code generation [14, 19, 36, 40], automated program repair [15], and code summarization [22, 29]. They span five programming languages: Java, C++, Python, JavaScript, and Go, and cover three different types of generated software artifacts: code snippets, patches, and comments. Following prior work [11, 45], we employ Kendall’s $\tau$ coefficient, Spearman’s $r_s$, and Pearson’s $r_p$ to quantify the statistical correlation between the assessments made by SWE-Judge and the ground truths, defined by either human evaluation results or test execution outcomes. The experimental results illustrate that SWE-Judge achieves significantly and consistently higher correlations ($5.9\%$–$183.8\%$) than the baselines. Moreover, SWE-Judge also achieves agreement levels with human annotators that are comparable to the inter-annotator agreement observed in code generation and automated program repair. This underscores its potential to serve as a reliable substitute for human evaluators in these tasks.
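For readers unfamiliar with the rank-correlation measure, here is a minimal sketch of Kendall's $\tau$ (the tau-a variant, which ignores ties; the paper's exact variant may differ) on hypothetical scores:

```python
def kendall_tau(a, b):
    """Kendall rank correlation (tau-a, no tie correction) between two
    equal-length score lists, e.g. metric scores vs. human judgments.
    Illustrative sketch only."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

human = [4, 0, 3, 1, 2]               # hypothetical human correctness scores
metric = [3.8, 0.5, 3.1, 1.2, 2.0]    # hypothetical metric scores, same ranking
assert kendall_tau(human, metric) == 1.0   # perfect rank agreement
assert kendall_tau([1, 2, 3], [3, 2, 1]) == -1.0   # perfectly reversed
```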
Contributions. The main contributions are as follows:
• To the best of our knowledge, we are the first to propose an LLM-as-Ensemble-Judge evaluation metric for assessing diverse software artifacts. Our SWE-Judge is designed to integrate multiple novel evaluation strategies proposed in this work, enabling a comprehensive and robust correctness assessment.
• We conducted extensive experiments to evaluate the effectiveness of SWE-Judge across five programming languages (i.e., Java, C++, Python, JavaScript, and Go), three types of software artifacts (i.e., source code, code changes, and comments), and three popular generation-based SE tasks: code generation, automated program repair, and code summarization.
• SWE-Judge significantly and consistently outperforms existing automatic evaluation metrics, achieving new state-of-the-art performance.
# 2 PRELIMINARIES
# 2.1 Problem Statement
Automatic evaluation metrics aim to assess the quality of software artifacts generated by SE automation tools. In this work, we focus specifically on the aspect of functional correctness, which refers to the extent to which a generated software artifact fulfills the intended functional behavior described in the user requirement. Correctness is a fundamental and indispensable attribute in many SE tasks. Without correctness, other desirable properties, such as efficiency or readability, are rendered secondary. Formally, the task is defined as follows and illustrated in $\textcircled{1}$ of Figure 2. Let $x$ denote a user’s requirement (e.g., a natural language description of a task), and let $y$ be a software artifact generated by an automated SE tool (e.g., an LLM-based code generator), intended to fulfill the requirement $x$. Let $r$ be a reference solution that correctly fulfills the user requirement $x$. For each generated software artifact $y$, human annotators provide a correctness score, e.g., $S \in \{0, 1, 2, 3, 4\}$, where 0 indicates a completely incorrect software artifact and 4 indicates a fully correct one. Our objective is to develop an automatic evaluation metric $\mathcal{E}(x, y, r)$ that can closely correlate with the human-provided correctness score $S$.
# 2.2 State-of-the-Art Metric: ICE-Score
The state-of-the-art LLM-as-judge evaluation metric for code, ICE-Score [45], prompts LLMs to directly assign evaluation scores to the generated software artifacts. Formally, it takes the requirement $x$ , the generated software artifact $y$ , and the reference solution $r$ , and inserts them into a predefined prompt, yielding $P r o m p t ( x , y , r )$ . The LLM then generates a score based on this prompt: $P = L L M ( P r o m p t ( x , y , r ) )$ . We showcase the ICE-Score’s prompt for the code generation task below:
# Abstracted ICE-Score Prompt
[Task Description] Your task is to rate the code snippet only on one metric .. [Evaluation Criteria] Functional Correctness (0-4) - Execution-based quality of the code snippet combined with the problem ...
# [Evaluation Steps]
1. Read the problem carefully; 2. Read the code snippet and compare it to the problem; 3. Assign a score for functional correctness on a scale of 0 to 4. [Data] Problem: x, Code Snippet: y, Reference Code (Optional): r
The core idea behind ICE-Score is to directly “ask” the LLM to assess the correctness of the generated code $y$ , as reflected in instructions like “Your task is to rate the code snippet” and “Assign a score for functional correctness.” This represents a straightforward strategy in using LLMs for correctness evaluation, which we refer to as the “Direct Assess” strategy.
However, ICE-Score focuses solely on this strategy, leaving other potential strategies unexplored. For example, one could prompt the LLM to determine whether the generated software artifact $y$ is functionally equivalent to the reference solution $r$ , with respect to the user requirement $x$ . Alternatively, the LLM could first generate test cases based on $x$ , and then verify whether $y$ passes all those tests. To address this limitation, we propose SWE-Judge, which extends beyond the Direct Assess strategy. SWE-Judge explores and integrates multiple evaluation strategies for assessing correctness, leading to more accurate evaluation scores compared to ICE-Score.
# 2.3 Motivating Example
Our work is inspired by the rigorous academic peer-review process, as illustrated in Figure 1. In a typical review process, authors submit a manuscript, after which the editor selects multiple suitable reviewers to conduct peer review. Each reviewer independently provides their review and feedback,
Fig. 1. Motivating example of the academic peer-review process.
Fig. 2. Overview of the SWE-Judge framework: ① five diverse evaluation strategies (S1 Direct Assess, S2 Direct Assess and Rethink, S3 Equivalence Assess, S4 Generate Tests and Assess, S5 Analyze Reference and Assess); ② dynamic team formation, which first forms candidate teams (e.g., Team 1: S1, S2, S3; Team 2: S3, S4; Team 3: S4, S5) and then trials them on a few samples to pick the best team; ③ correctness score generation, which ensembles the best team’s individual scores into a final correctness score.
which the editor then synthesizes into a final editorial decision. The high quality of this process can largely be attributed to two factors: (1) the editor’s ability to select appropriate reviewers, and (2) the professionalism of each individual reviewer.
Drawing an analogy to this process, a key limitation of ICE-Score is that it relies on only a single “reviewer” (i.e., one evaluation strategy), without a pool of potential “reviewers” to select from. Moreover, it lacks a mechanism to assemble a team of complementary reviewers that can produce a more reliable evaluation. Motivated by this, our work proposes two core ideas:
• Designing diverse evaluation strategies to ensure variation in perspectives—similar to how reviewers often bring different evaluation angles to peer review.
• Introducing a lightweight team assembly mechanism that selects an effective combination of evaluation strategies, akin to the reviewer assignment step in academic peer review.
# 3 OUR APPROACH
The framework of SWE-Judge is illustrated in Figure 2. Given a requirement $x$ , a generated software artifact $y$ , and a reference solution $r$ , SWE-Judge produces a correctness evaluation score $\mathcal { E } ( x , y , r )$ . The framework consists of three main components: the first defines the evaluation strategies, the second selects an appropriate team, which consists of a few evaluation strategies, and the third performs the actual scoring.
Part 1: Diverse Evaluation Strategies ($\textcircled{1}$ of Figure 2). Given a requirement $x$, a generated software artifact $y$, and a reference solution $r$, this component defines five distinct correctness evaluation strategies to assess the correctness of the generated software artifact $y$ from diverse perspectives.
Part 2: Dynamic Team Formation ($\textcircled{2}$ of Figure 2). Given the five evaluation strategies, this part aims to assemble an effective subset, referred to as a “team”, from these strategies. Importantly, the team selection is performed dynamically for each dataset, allowing the assembled team to adapt to the characteristics of different datasets.
Part 3: Correctness Score Generation ($\textcircled{3}$ of Figure 2). Once the team is determined, it is used to evaluate the correctness of data samples in the evaluation dataset, generating individual scores for each data sample. These individual scores are then aggregated to produce the final correctness score.
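The trial-then-ensemble flow of Parts 2 and 3 can be sketched as follows. This is our illustration with hypothetical trial data; plain averaging and mean absolute error stand in for SWE-Judge's actual aggregation and team-selection criteria, which may differ:

```python
def ensemble(team, scores, i):
    """Average the team members' scores for sample i (an illustrative
    aggregation; the paper's exact ensembling rule may differ)."""
    return sum(scores[s][i] for s in team) / len(team)

def pick_best_team(teams, scores, truth):
    """Dynamic team formation, step 2: trial each candidate team on a few
    labeled samples and keep the one agreeing best with the ground truth
    (here: smallest mean absolute error, a stand-in for rank correlation)."""
    def err(team):
        preds = [ensemble(team, scores, i) for i in range(len(truth))]
        return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)
    return min(teams, key=err)

# Hypothetical trial scores of five strategies (S1..S5) on three samples:
scores = {"S1": [4, 0, 3], "S2": [3, 1, 3], "S3": [4, 0, 4],
          "S4": [2, 2, 2], "S5": [1, 3, 0]}
truth = [4, 0, 3]
teams = [("S1", "S2", "S3"), ("S3", "S4"), ("S4", "S5")]
assert pick_best_team(teams, scores, truth) == ("S1", "S2", "S3")
```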
# 3.1 Diverse Evaluation Strategies
Basics of Evaluation Strategy. Our tool is built upon the zero-shot capabilities of LLMs. Specifically, we do not provide any human-annotated scores as input to the LLMs. Instead, for each evaluation data sample, we construct prompts containing the user requirement $x$ , the generated software artifact $y$ , and the reference solution $r$ . These prompts are then fed into the LLM, which generates a response score based on its assessment of the correctness of $y$ .
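This pairing of a strategy's prompt template with an LLM can be sketched as follows (our illustration: `llm` is a hypothetical callable from prompt string to reply string, and the score-parsing heuristic is an assumption, not SWE-Judge's actual implementation):

```python
def make_evaluator(prompt_template, llm):
    """Pair one strategy's prompt template with an LLM to obtain a zero-shot
    evaluator for a (requirement x, artifact y, reference r) triple."""
    def evaluate(x, y, r):
        reply = llm(prompt_template.format(x=x, y=y, r=r))
        # Illustrative parsing: take the last bare integer in the reply as the score.
        digits = [int(tok) for tok in reply.split() if tok.isdigit()]
        return digits[-1] if digits else 0
    return evaluate

# A stub LLM that always answers with a score of 3, for demonstration:
template = "Problem: {x}\nCode Snippet: {y}\nReference Code: {r}\nScore (0-4):"
evaluator = make_evaluator(template, lambda prompt: "The score is 3")
score = evaluator("sum two ints",
                  "def add(a, b): return a + b",
                  "def add(x, y): return x + y")
assert score == 3
```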
Each unique prompt design corresponds to a distinct evaluation strategy. By pairing a specific strategy’s prompt with an LLM, we form an evaluator that generates individual correctness scores in a zero-shot fashion. We introduce the prompt designs of five different evaluation strategies as follows:
Strategy 1: Direct Assess. Similar to the previous SOTA approach ICE-Score [45], Strategy 1 (P1) directly asks the LLM to assess the correctness of the generated output $y$ . Below, we present an example of the prompt used in P1 for the code generation task. For the detailed prompts used in P1 across different datasets, please refer to our online replication package.
# Prompt of Strategy 1
[Task Description] Your task is to rate the code snippet...
[Evaluation Criteria] Functional Correctness (0-4) - Execution-based quality of the code snippet combined with the problem ...
# [Evaluation Steps]
1. Read the problem carefully; 2. Read the code snippet and compare it to the problem; 3. Assign a score for functional correctness on a scale of 0 to 4. [Data] Problem: x, Code Snippet: y, Reference Code (Optional): r
In Strategy 1, we can choose whether or not to provide the reference solution $r$ in the data fields, leading to two variants of Strategy 1, denoted as $P 1 _ { a }$ and $P 1 _ { b }$ . In $P 1 _ { a }$ , no reference solution is provided, while in $P 1 _ { b }$ , the reference solution is included.
Strategy 2: Direct Assess and Rethink. Strategy 2 (P2) builds upon Strategy 1 (P1). In P1, the LLM directly provides a correctness score $\hat { s _ { 1 } }$ for the generated software artifact $y$ , typically accompanied by a brief explanation (1–2 sentences) justifying the assigned score. Inspired by the way humans often reflect on their initial judgments, P2 introduces a rethink step. This step prompts the LLM to review both its previously assigned score and the reasoning behind it, and to consider whether any revision is necessary.
Concretely, the LLM is asked to critically re-evaluate the validity of its earlier explanation and adjust its score accordingly. For example, suppose in P1 the LLM assigns a low score to the generated software artifact $y$ due to a flaw it identifies (e.g., a reason $e$). During the rethink phase, if the LLM realizes that this reason $e$ is actually incorrect, it is encouraged to revise the score upward. Conversely, if the LLM initially gives a high score based on a positive justification $e$, but later determines that $e$ does not hold, it should lower the score accordingly in the rethink step. If the LLM in the rethink step agrees with the previous reason $e$, then the score is unchanged. Below, we present an example of the prompt used in P2 for the code generation task.
<Prompt of Strategy 1>
<Response from Strategy 1: predicted score $\hat{s_1}$ and its reasons $e$>
""" Prompt Segment Unique to Strategy 2 """
[Task Description] Your task is to recheck whether the reason and score are proper... [Evaluation Criteria]
1. If a bad reason about the code snippet is validated to be ‘False’, increase the score a bit...
2. If a good reason about the code snippet is validated to be ‘False’, decrease the score a bit...
3. If a reason is validated to be ‘True’, then please do not change the score...
# [Evaluation Steps]
1. Please only validate the previous score and reason.
2. Please reply with your adjusted score.
[Data] Problem: x, Code Snippet: y, Predicted Score from P1: $\hat{s}_1$, Reasons from P1: e
After the rethink step, the LLM produces an adjusted score $\hat{s}_2$ by either increasing, decreasing, or maintaining the original correctness score $\hat{s}_1$ generated in P1. For the detailed prompts used in P2 and other strategies across different datasets, please refer to our online replication package.
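The rethink rules in the evaluation criteria above can be expressed as a small deterministic helper. This is only an illustration of the intended score adjustments, not part of SWE-Judge itself (the LLM applies the rules in natural language); the adjustment magnitude `delta` is a hypothetical choice, since the prompt only says to change the score "a bit".

```python
def rethink_adjust(score, reason_is_positive, reason_holds, delta=10):
    """Adjust a P1 correctness score (0-100) after the rethink step.

    reason_is_positive: True if the P1 reason e praised the artifact,
                        False if it pointed out a flaw.
    reason_holds: whether the rethink step validated reason e as 'True'.
    delta: hypothetical adjustment magnitude (the prompt says 'a bit').
    """
    if reason_holds:
        return score                  # rule 3: validated reason, keep score
    if reason_is_positive:
        return max(0, score - delta)  # rule 2: good reason invalidated, lower
    return min(100, score + delta)    # rule 1: bad reason invalidated, raise

print(rethink_adjust(80, True, False))   # -> 70
print(rethink_adjust(30, False, False))  # -> 40
```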
Strategy 3: Equivalence Assess. Strategy 3 (P3) adopts a fundamentally different approach from P1 and P2. Since the reference solution, $r$ , can correctly satisfy the user requirement $x$ , we can assess the correctness of $y$ by evaluating its equivalence to $r$ . The underlying idea is that if $y$ and $r$ are semantically or functionally equivalent, then it is highly likely that $y$ also meets the original requirement $x$ . Therefore, rather than reasoning directly on the correctness of $y$ , the LLM focuses on comparing $y$ and $r$ , making this strategy an equivalence-based evaluation strategy. We present an example of the prompt used in P3 for the code generation task:
# Prompt of Strategy 3
[Task Description] Given two code implementations or code diffs, your task is to assess whether they are semantically equivalent...
[Evaluation Criteria] Semantic Equivalence: To what extent do the two code versions produce the same behavior...
[Evaluation Steps]
1. Read and analyze both code versions carefully. Read the problem description too...
2. Compare their functionality, structure, and logic to determine if they yield the same output and behavior...
3. Assign a Semantic Equivalence score...
[Data] Problem: x, Code Snippet: y, Reference Code: r
Strategy 4: Generate Tests and Assess. Strategy 4 (P4) introduces another different evaluation strategy. The core idea is straightforward: when the generated artifact $y$ is a code snippet or a code change, test cases serve as an effective means for assessing its correctness. Figure 3 illustrates the prompt design for Strategy 4. This strategy consists of two steps. In the first step, we prompt the LLM to generate test cases based on the user requirement $x$ and the reference code $r$ , as our evaluation data does not include test cases in the input. In the second step, we provide the generated software artifact $y$ along with the previously generated test cases as input to the LLM. The LLM is then asked to evaluate whether $y$ can pass all the generated test cases and, based on this evaluation, assign a correctness score.
Fig. 3. Prompt designs for Strategy 4 (P4: Generate Tests and Assess, left) and Strategy 5 (P5: Analyze Reference and Assess, right). Each strategy runs in two steps. P4 first prompts the LLM to generate test cases ($t$) from the problem $x$ and the reference code $r$, then asks it to assess whether the code snippet $y$ would pass those tests. P5 first prompts the LLM to extract the key properties ($k$) that make the reference code a correct answer, then asks it to assess whether $y$ fulfills those key properties. Each second step yields a correctness score.
Strategy 5: Analyze Reference and Assess. Strategy 5 (P5) adopts an analytical approach. The right side of Figure 3 illustrates the prompt design for Strategy 5. This strategy involves two steps. First, the LLM identifies the critical properties of $r$ that make it a correct solution for $x$ . In the second step, the LLM checks whether $y$ preserves those core properties. If the LLM determines that $y$ aligns with the reference solution’s key characteristics, it considers $y$ to be correct.
Turning Strategies into Evaluators. With all strategies defined, we can now pair each strategy with a specific LLM to construct a set of evaluators. Each evaluator produces an independent correctness score according to its respective evaluation strategy. In addition, because the target human score ranges vary across datasets (CoNaLa [11, 36] uses a 0–4 scale, APR-Assess [15] a 0–1 scale, and Summary-Assess [22, 29] a 1–5 scale), we standardize the output range across all strategies to ensure consistency. To do this, we include an instruction in each prompt that constrains the LLM to output a score within the 0–100 range. In a later stage of SWE-Judge (described in Section 3.3), we apply a linear transformation to map the predicted score from the 0–100 range to the corresponding range used by the evaluation dataset.
# 3.2 Dynamic Team Formation
Just as an academic peer-review process depends on selecting suitable reviewers to ensure the quality of reviews, we argue that evaluating generated software artifacts similarly benefits from assembling a well-matched team. Building on this insight, this component dynamically assembles an effective team from the 5 available strategies, tailoring the combination to each dataset in order to better align with its specific characteristics.
Initial Teaming. Although we have 5 evaluation strategies, Strategy 1 has two variants, giving six strategy variants in total. Let $\mathcal{P} = \{P_{1a}, P_{1b}, P_2, P_3, P_4, P_5\}$ denote the set of strategy variants we can choose from. We leverage LLMs to automatically obtain correctness scores, allowing us to explore a broad space of strategy combinations. Specifically, we consider all combinations that include at least two distinct strategies. In principle, there are $\sum_{k=2}^{6} \binom{6}{k} = 57$ possible teams that can be formed from $\mathcal{P}$. We denote these combinations as $\mathcal{T} = \{T_1, T_2, \dots, T_{57}\}$.
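The team enumeration can be reproduced in a few lines of Python; the strategy-variant names below simply mirror the set $\mathcal{P}$:

```python
from itertools import combinations

strategies = ["P1a", "P1b", "P2", "P3", "P4", "P5"]

# All teams with at least two distinct strategy variants.
teams = [team for k in range(2, len(strategies) + 1)
         for team in combinations(strategies, k)]

print(len(teams))  # -> 57, i.e. sum_{k=2}^{6} C(6, k)
```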
Team Trials on a Few Annotated Samples. To identify the best team from $\mathcal{T} = \{T_1, T_2, \dots, T_{57}\}$, we utilize a small set of annotated examples from the evaluation dataset. We randomly sample 10 instances, assuming their ground truth correctness scores are available, as annotating this number is feasible for a human developer. Each team generates predicted scores for these samples, and we measure their alignment with the ground truth using Kendall's $\tau$ coefficient, Spearman's $r_s$, and Pearson's $r_p$. The team with the highest correlation is selected as the best team for the dataset.
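As a sketch of the trial procedure, the snippet below ranks two hypothetical candidate teams against ground truth on a handful of annotated samples, using a plain-Python Kendall's $\tau$ (ties are ignored, i.e. $\tau$-a). The paper additionally uses Spearman's and Pearson's coefficients and 10 samples; the team names and scores here are made up for illustration.

```python
def kendall_tau(pred, truth):
    """Kendall's tau-a between two equal-length score lists (ties ignored)."""
    n = len(pred)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (pred[i] - pred[j]) * (truth[i] - truth[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical trial: two candidate teams scored on 5 annotated samples.
truth = [4, 0, 3, 1, 2]
team_scores = {
    "T1": [90, 10, 70, 30, 50],  # ranks the samples exactly like the truth
    "T2": [50, 60, 40, 80, 20],  # poorly aligned
}
best = max(team_scores, key=lambda t: kendall_tau(team_scores[t], truth))
print(best)  # -> T1
```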
# 3.3 Final Correctness Score Generation
For illustration purposes, suppose the selected team $T_i$ comprises strategies $P_1$, $P_2$, and $P_3$. Please note that $T_i = (P_1, P_2, P_3)$ is just an example.
Individual Score Prediction. Each sample $d_i \in \mathcal{D}$ is represented as a tuple $(x_i, y_i, r_i)$. The strategies in the selected team independently generate correctness scores for each sample, resulting in three individual scores: $s_1$, $s_2$, and $s_3$, corresponding to $P_1$, $P_2$, and $P_3$, respectively.
Score Ensembling. To generate the final score for each sample, we aggregate the individual scores from the team members using a simple averaging ensembling strategy. The final predicted score $\hat{s}$ is computed as: $\hat{s} = \frac{s_1 + s_2 + s_3}{3}$.
Mapping Score to Target Scale. The predicted score $\hat{s}$ is initially on a 0–100 scale. However, human-annotated scores in different datasets may use different scales (e.g., 1–5). To ensure compatibility with the evaluation criteria, we apply a linear transformation [5]. For example, for datasets where the human-annotated scores follow a 1–5 scale, we map the predicted score as follows: $\mathcal{E}(x, y, r) = \frac{\hat{s}}{100} \times 4 + 1$. This transformation ensures that the final correctness score $\mathcal{E}(x, y, r)$ produced by SWE-Judge aligns with the target scale of each evaluation dataset.
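The averaging ensemble and the linear scale mapping can be combined into one small function. This is a sketch consistent with the averaging and linear transformation described above, with the target scale bounds exposed as parameters:

```python
def ensemble_and_map(scores, lo=1, hi=5):
    """Average team members' 0-100 scores, then linearly map to [lo, hi]."""
    s_hat = sum(scores) / len(scores)      # simple averaging ensemble
    return s_hat / 100 * (hi - lo) + lo    # for a 1-5 scale: s_hat/100 * 4 + 1

print(ensemble_and_map([80, 90, 100]))  # maps the average 90 to ~4.6 on 1-5
```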
# 4 EXPERIMENTAL SETUP
In this section, we introduce the datasets used in our experiments, the baseline methods for comparison, and the evaluation methodology to assess the effectiveness of our proposed metric. We also outline the implementation details and define the key research questions.
# 4.1 Datasets
We evaluate SWE-Judge on three popular SE tasks: code generation, automated program repair, and code summarization. The primary goal is to assess how well SWE-Judge’s results align with human evaluation results. Therefore, we have selected evaluation datasets that include human evaluation scores. The selected datasets are:
• CoNaLa [36] is a Python Code Generation benchmark consisting of 472 tasks sourced from StackOverflow. We selected CoNaLa because Evtikhiev et al. [11] provided human evaluation scores for code generated by various automated code generation tools addressing these CoNaLa coding problems. Specifically, experienced software developers rated the generated code on a scale from 0–4.
• Card2Code Hearthstone (shortened as Card2Code) [19] is a Python Code Generation benchmark derived from the collectible trading card game Hearthstone. The dataset contains 665 pairs, each consisting of a Hearthstone card description and its corresponding Python code snippet. We selected Card2Code Hearthstone due to its inclusion in Evtikhiev et al.'s [11] study, where human evaluators rated the generated code on a scale from 0–4.
• APR-Assess [15] is a human-annotated dataset for Automated Program Repair (APR), involving the generation of patches (i.e., code changes) to fix identified bugs. It consists of 189 patches generated by program repair tools, each manually evaluated for correctness. Experienced developers rated the quality of these patches on a scale from 0–1.
, Vol. 1, No. 1, Article . Publication date: May 2025.
• Summary-Assess [29] is a human-annotated dataset for Code Summarization, which focuses on generating accurate descriptions for given code snippets. It consists of 1,611 code summaries annotated by 226 human developers and is based on a publicly available Java code summarization dataset [17]. Human annotators evaluate various aspects of each summary, including conciseness, fluency, and content adequacy, on a scale from 1 to 5. Since our study focuses on the correctness aspect, we use the human evaluation results specifically for content adequacy as the ground truth labels.
In addition to evaluating the alignment with human assessment results, we also examine how well SWE-Judge aligns with test case execution outcomes. To this end, we select two popular code generation datasets with available test cases: HumanEval-X [40], which spans multiple programming languages, and APPS [14], which includes more complex and challenging coding tasks.
• HumanEval-X [40] is a multilingual extension of the widely used code generation benchmark HumanEval [2]. It consists of 164 introductory coding tasks, each with a natural language description, test cases, and a reference solution. For our evaluation, we focus on five programming languages: Python, C++, Java, JavaScript, and Go.
• APPS [14] is a Python code generation benchmark that includes introductory-level, interview-level, and competition-level coding tasks collected from code competition websites. We evaluate SWE-Judge on 100 sampled competition-level tasks of APPS.
# 4.2 Selected Baselines
Match-based Metrics. We choose 7 popular match-based metrics as baselines. BLEU [25] measures the similarity between the generated content and the ground-truth answer by comparing n-gram overlaps while applying a penalty for excessively short responses. ROUGE-L [18] measures the similarity by using the longest common subsequence between the generated code/text and the reference code/text. METEOR [1] measures the similarity based on the number of matched tokens. ChrF++ [26] measures the similarity by character-level n-gram precision and recall. CodeBLEU [28] enhances traditional BLEU by incorporating structural code similarities. RUBY [32] evaluates similarity by considering lexical, syntactical, and semantic representations of source code. CrystalBLEU [10] is the state-of-the-art match-based metric designed to measure code similarity. It first removes the most common n-grams before calculating the BLEU score to better capture meaningful differences between the generated content and the ground truth answer.
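To make the match-based idea concrete, here is a minimal sketch of the LCS-based F-measure behind ROUGE-L. Official implementations add details this sketch omits, such as a recall-weighted F$_\beta$ and specific tokenization/stemming choices; whitespace tokenization here is an assumption for illustration.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """Balanced LCS F-measure over whitespace tokens (simplified ROUGE-L)."""
    cand, ref = candidate.split(), reference.split()
    l = lcs_len(cand, ref)
    if l == 0:
        return 0.0
    p, r = l / len(cand), l / len(ref)
    return 2 * p * r / (p + r)

score = rouge_l_f1("return sorted ( xs )",
                   "return sorted ( xs , reverse = True )")
print(round(score, 3))  # LCS covers the whole candidate -> ~0.714
```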
Embedding-based Metrics. We choose 4 popular embedding-based metrics as baselines. MoverScore [39] evaluates similarity by computing the Earth Mover's Distance between the generated content and the reference answer. It represents both code/text using token embeddings. BERTScore [38] calculates pairwise token similarity between the generated content and the reference answer with token representations from the pre-trained BERT model. CodeBERTScore [42] is a state-of-the-art embedding-based metric designed for code evaluation, building upon BERTScore with adaptations for code-specific tasks. It leverages a fine-tuned CodeBERT model [12] to encode both the generated and reference code, then calculates a cosine similarity matrix between their embeddings to assess semantic alignment. Lastly, for the code summarization task specifically, SIDE [22] is a state-of-the-art metric that leverages contrastive learning when calculating cosine similarity.
LLM-as-judge Metrics. We select two LLM-as-judge metrics as baselines. Vanilla LLM refers to the default LLM used with a straightforward prompt, without employing the specialized strategies proposed in this work. Specifically, we provide the LLM with a simple instruction: “Please assign a correctness score to the given input data.” ICE-Score [45] is the state-of-the-art LLM-as-judge method for code evaluation. It extends the recent LLM-as-judge approach for text, G-Eval [20], with adaptations for code evaluation. ICE-Score prompts the LLM to generate a correctness score based on pre-defined evaluation criteria.
# 4.3 Effectiveness Evaluation
We use two evaluation approaches to assess the effectiveness of SWE-Judge and baselines.
Statistical Correlations. Prior studies [11, 45] have employed statistical correlation metrics, such as Kendall's $\tau$ coefficient, Spearman's $r_s$, and Pearson's $r_p$, as robust methods to measure the statistical correlation between evaluation results produced by automatic evaluation metrics and the ground truth. Specifically, Kendall's $\tau$ coefficient [4] measures the ordinal association between two quantities, Spearman's $r_s$ [8] is a measure of rank correlation, and Pearson's $r_p$ [6] is a measure of linear correlation. In this work, we adopt these three correlation scores to evaluate SWE-Judge on all studied tasks and datasets. For ease of comparing different methods, we also report an averaged correlation score obtained by averaging the three correlations above.
Statistical Agreements. We also evaluate the statistical agreement between our tool’s results and human evaluation scores. Specifically, we use Cohen’s Kappa score [3], a statistical measure that assesses the agreement between two raters who independently classify items into categories.
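Cohen's Kappa compares the observed agreement between two raters with the agreement expected by chance from their label distributions. A minimal plain-Python version for two raters' categorical labels (the label lists below are toy data, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels of equal length."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[c] * cb[c] for c in ca) / (n * n)           # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = [1, 0, 1, 1, 0, 1]  # rater A's correctness labels
b = [1, 0, 1, 0, 0, 1]  # rater B's correctness labels
print(round(cohens_kappa(a, b), 3))  # -> 0.667
```

Note the sketch assumes the raters are not in perfect chance agreement ($p_e < 1$); a production implementation would guard that division.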
# 4.4 Implementation Details
We evaluate the effectiveness of SWE-Judge using the OpenAI GPT-4o mini model (i.e., gpt-4o-mini-2024-07-18) [23] as the backbone. We selected the GPT-4o mini model due to its lightweight, fast, and cost-effective nature, providing a more affordable alternative to the OpenAI GPT-3.5, GPT-4, GPT-4o, o1, o3, and GPT-4.5 models [24]. We set the temperature to 0 to reduce the impact of the LLM's randomness on the results.
# 4.5 Research Questions
Our work aims to mainly answer three Research Questions (RQs).
• RQ1: How well does SWE-Judge correlate with human judgment compared to baseline methods? In RQ1, we investigate whether SWE-Judge generates evaluation results that more closely correlate with human judgment compared to baseline evaluation metrics.
• RQ2: How does the agreement between SWE-Judge and human evaluators compare to the agreement among humans? In RQ2, we quantify the gap between human-tool agreement and human-human agreement to assess how closely SWE-Judge can replace human evaluators.
• RQ3: How do the key design components of SWE-Judge impact its effectiveness? We conduct an ablation study to assess the contributions of the main modules within SWE-Judge.
# 5 EXPERIMENTAL RESULTS
In this section, we present experimental results and answers to each research question.
# 5.1 RQ1: Correlation with Human Scores
In this RQ, we evaluate the correlation between SWE-Judge's scores and human-annotated scores. Table 1 shows how well SWE-Judge's scores correlate with human judgments across four human-annotated datasets. For the results of each dataset, the first three columns report statistical correlation metrics: Kendall's $\tau$, Spearman's $r_s$, and Pearson's $r_p$, respectively.
SWE-Judge achieves the highest alignment with human evaluations, consistently and significantly outperforming all baseline methods. As shown in Table 1, on the CoNaLa dataset, SWE-Judge surpasses all baselines by $27.1\%$–$159.1\%$, based on the average of the three statistical correlation metrics. On the Card2Code dataset, it demonstrates gains between $5.9\%$ and $63.8\%$, while on APR-Assess, improvements exceed $70.7\%$. For the Summary-Assess dataset, SWE-Judge again leads with gains from $15.8\%$ to $183.8\%$ on average. Furthermore, SWE-Judge shows generalizability across diverse software artifacts: code changes in APR-Assess, code snippets in CoNaLa and Card2Code, and natural language descriptions in Summary-Assess. In all cases, SWE-Judge achieves the highest correlation with human scores, consistently outperforming all the others.
Table 1. Experimental results for correlation with human scores. The highest correlation is highlighted in bold, and the second-highest is underlined.
SWE-Judge achieves strong alignment with human judgments on code generation and automated program repair datasets. For code generation, SWE-Judge achieves high correlations on the CoNaLa dataset, with Kendall's $\tau = 60.3$, Spearman's $r_s = 71.2$, and Pearson's $r_p = 68.3$. On the Card2Code code generation dataset, the scores are even higher: Kendall's $\tau = 70.4$, Spearman's $r_s = 83.8$, and Pearson's $r_p = 81.7$. For automated program repair, SWE-Judge demonstrates even stronger alignment on the APR-Assess dataset, reaching 77.5 across all three metrics. Although SWE-Judge's performance on the code summarization dataset (Summary-Assess) is relatively lower compared to other datasets, it still significantly outperforms competing methods.
Answer to RQ1: SWE-Judge achieves the highest alignment with human evaluations, consistently and significantly outperforming all baseline methods by $5.9\%$–$183.8\%$ across four human-annotated datasets. These datasets cover three popular SE tasks, namely code generation, automated program repair, and code summarization, and include three distinct types of software artifacts.
# 5.2 RQ2: Human-Tool Agreement vs. Human-Human Agreement
In RQ2, we evaluate how closely SWE-Judge aligns with individual human annotators compared to how well humans agree with each other.
Setup. It is important to note that the datasets we study, i.e., CoNaLa, Card2Code, APR-Assess, and Summary-Assess, are all annotated by multiple human developers. In RQ1, we used the aggregated human score as the ground truth. This score is obtained by combining the individual ratings from different annotators. For example, in the CoNaLa dataset, Evtikhiev et al. [11] adopted the M-MSR algorithm [21] to aggregate multiple human grades into a single aggregated human score. In contrast, RQ2 uses individual human annotations as the ground truth to evaluate SWE-Judge. Our objective is to measure the gap between the agreement levels of SWE-Judge with human annotators (human-tool agreement) and the agreement among human annotators themselves (human-human agreement). If its agreement with individual annotators matches the level of agreement among humans themselves, it indicates that SWE-Judge could serve as a reliable surrogate for human evaluation.
Fig. 4. Experimental results for agreement between human developers (highlighted in blue) and agreement between SWE-Judge and humans (highlighted in red).
Specifically, we group human annotators into pairs and compute Cohen’s Kappa [3] scores to quantify their agreement levels. For each dataset, we report:
• Min Human–Human: the lowest agreement score observed among all human annotator pairs;
• Max Human–Human: the highest agreement score observed among all annotator pairs;
• Average Human–Human: the mean agreement across all annotator pairs.
Additionally, we pair SWE-Judge with each human annotator and compute the average Cohen’s Kappa score on all human-tool pairs, denoted as Average Human–Tool. This metric reflects the overall agreement between SWE-Judge and individual human annotators. In Figure 4, we highlight the Average Human–Tool score in red for visual clarity.
Results. Figure 4 presents the comparison between human–tool and human–human agreement across the four human-annotated datasets.
SWE-Judge achieves agreement levels with human annotators that, on average, are comparable to the agreement observed among human annotators themselves on the code generation and automated program repair tasks. For code generation, SWE-Judge achieves an average Cohen’s Kappa score of 24.1 on the CoNaLa dataset, slightly below the human–human average of 25.7. On the Card2Code dataset, SWE-Judge performs even better, with a score of 35.1 compared to the human–human average of 30.5. For automated program repair, SWE-Judge attains a Cohen’s Kappa score of 66.7, surpassing the human–human agreement average of 60.1. These results suggest that SWE-Judge has the potential to serve as a reliable substitute for human evaluators in evaluating both code generation and automated program repair tasks.
However, in the code summarization task, there remains a substantial gap between the agreement of SWE-Judge and human annotators compared to human–human agreement. On the Summary-Assess dataset, SWE-Judge achieves an average Cohen’s Kappa score of 4.6, which is significantly lower than the human–human average of 15.5. This suggests that SWE-Judge is not yet a viable replacement for human evaluators in the context of code summarization. Nonetheless, as shown in Table 1, SWE-Judge remains the best-performing automatic evaluation metric in the code summarization task, highlighting the progress made through our approach.
Answer to RQ2: On average, SWE-Judge achieves agreement levels with human annotators that are comparable to those observed among human annotators themselves in code generation and automated program repair tasks. This suggests that SWE-Judge can be reliably used as a substitute for human evaluators in these tasks. However, a gap remains in using SWE-Judge to replace human evaluators in the code summarization task.
# 5.3 RQ3: Ablation Study
In this RQ, we examine the contribution of two key components in SWE-Judge: 1) the Strategy Design, and 2) the Dynamic Team Selection. Table 1 presents the results of the ablation study in the last two rows. The row labeled “wo Team Selection” shows the performance of SWE-Judge without the team selection mechanism, where all strategies are combined through simple ensembling. The row labeled “wo Team & Strategy” presents SWE-Judge's performance when both the team selection mechanism and custom-designed strategies are removed, relying solely on an LLM with a basic prompt: “Please assign a correctness score to the given input data.”
All key designs are essential for achieving the best effectiveness. Based on the results shown in Table 1, we observe that removing any key design in SWE-Judge leads to a reduction in statistical correlation scores. Specifically, without the team selection mechanism, the variant of SWE-Judge shows drops of $5.4\%$, $21.3\%$, and $14.9\%$ in the average correlation scores for CoNaLa, APR-Assess, and Summary-Assess, respectively. While the lack of team selection does not negatively impact performance on the Card2Code dataset, the overall average performance across all four datasets decreases by $9.6\%$. Furthermore, removing both the team selection mechanism and the custom-designed strategies leads to drops of $46.4\%$, $7.1\%$, $52.8\%$, and $13.6\%$ in the average correlation scores for CoNaLa, Card2Code, APR-Assess, and Summary-Assess, respectively. These results underscore the critical role of each component in SWE-Judge's effectiveness.
Answer to RQ3: The key designs are essential for the effectiveness of SWE-Judge. Removing them results in performance drops of $46.4\%$, $7.1\%$, $52.8\%$, and $13.6\%$ in the average correlation scores for CoNaLa, Card2Code, APR-Assess, and Summary-Assess, respectively.
Table 2. Experimental results for correlation with test case execution outcomes. The highest correlation is highlighted in bold, and the second-highest is underlined.
# 6 DISCUSSION
# 6.1 Generalizability to Test Case Execution Outcomes
In this subsection, we examine the generalizability of SWE-Judge to labels based on test execution, evaluating how well SWE-Judge’s scores align with the execution outcomes, where 0 indicates test failure while 1 represents test success. To this end, we select two popular code generation datasets that include accompanying test cases: HumanEval-X [40], which spans multiple programming languages, and APPS [14], which provides more complex and challenging coding tasks. Table 2 presents the average correlation between SWE-Judge’s evaluation scores and test case execution results on both datasets.
Table 2 demonstrates that SWE-Judge achieves the highest average correlation with test case execution outcomes, consistently and significantly outperforming all baseline methods. On the HumanEval-X dataset, SWE-Judge outperforms all baselines by an average margin of $32.1\%$ to $78.8\%$ across all programming languages. On the APPS dataset, SWE-Judge achieves even greater improvements, surpassing all baselines by $175.6\%$ to $1691.7\%$ in terms of average correlation scores. These results confirm SWE-Judge's generalizability from human-annotated scores to test case-based execution outcomes.
# 6.2 Case Study
Figure 5 presents selected examples of generated software artifacts, along with their corresponding scores from top-performing automatic evaluation metrics and human annotators. The figure includes evaluations of generated code from the CoNaLa dataset and code summaries from the Summary-Assess dataset. These human scores range from 0 (incorrect) to 4 (fully correct). From these examples, we identify three main issues with existing metrics. First, BERTScore, MoverScore, and CodeBERTScore tend to assign high scores across most examples, while ChrF++ and CodeBLEU tend to give relatively low scores. This reduces score variation and makes it harder to distinguish correct data from incorrect data, potentially misleading users. Second, except for ICE-Score and the Vanilla LLM method, the other baselines fail to produce scores that fall within the same range as human annotations, which makes direct comparison challenging. Third, some baselines mistakenly assign higher or equal scores to incorrect data compared to correct ones. For example, in Case I of Figure 5, Vanilla LLM, BERTScore, CodeBERTScore, ChrF++, CodeBLEU, and RUBY
(a) Case I: NL to Code (from CoNaLa)
Human Score: 4
SWE-Judge: 4
Vanilla LLM: 3
ICE-Score: 3
BERTScore: 93.8
MoverScore: 68.9
CodeBERTScore: 84.7
ChrF++: 44.4
CodeBLEU: 29.5
RUBY: 27.3
(b) Case II: Code to NL (from Summary-Assess)
Input Code: protected void die() { assert !initialized(); playerDies = true; }
Reference Summary: invoke this method while precomputing the effects of this move if it is
Generated Summary: this method is called when the user has been created ✗
Human Score: 0
SWE-Judge: 0
Vanilla LLM: 1
ICE-Score: 1
BERTScore: 86.7
MoverScore: 62.3
CodeBERTScore: 73.9
ChrF++: 23.3
CodeBLEU: 27.0
RUBY: 15.4
Fig. 5. Case study comparing different automated evaluation metric approaches. Human-assigned scores range from 0 (completely incorrect) to 4 (completely correct).
all give higher or equal scores to flawed code. In contrast, SWE-Judge does not suffer from those issues. Its scores are more consistent with human judgments and better reflect the correctness of the data under evaluation.
# 6.3 Threats to Validity
Our findings are limited to the specific SE datasets examined in this study and may not generalize to all SE datasets. To mitigate this limitation, we selected three widely adopted datasets—CoNaLa, Card2Code, and Summary-Assess—and introduced APR-Assess, a human-annotated dataset not previously studied in this context. These benchmarks collectively span three major SE tasks: code generation, automated program repair, and code summarization. They also cover diverse software artifacts, including source code, code changes, and code comments. Moreover, SWE-Judge uses the OpenAI GPT-4o mini model for its cost-effectiveness, though its performance may improve when more advanced LLMs (e.g., GPT-4.5) are used. Users may choose a more advanced and inevitably more expensive LLM if they aim to achieve even higher performance.
# 7 RELATED WORK
Evaluating the correctness of software artifacts generated by automated tools remains a major challenge. While human evaluation is accurate, it is labor-intensive and not scalable. Test-based metrics like pass@k [2] require manually written test cases, making them also impractical for large-scale use. To address this, many automatic evaluation metrics have been adopted in SE tasks. Several are adapted from natural language processing (NLP), such as BLEU [25], ROUGE-L [18], METEOR [1], ChrF++ [26], and BERTScore [38]. Beyond these, SE-specific metrics have been proposed to better capture the characteristics of SE data. Ren et al. introduced CodeBLEU [28], which extends BLEU by incorporating code syntax and structure. Tran et al. proposed RUBY [32], which measures similarity using lexical, syntactic, and semantic representations. Eghbali et al. developed CrystalBLEU [10], which improves BLEU by filtering out common n-grams to focus on more informative patterns. Zhou et al. proposed CodeBERTScore [42], which adapts BERTScore to code by leveraging a fine-tuned CodeBERT model to compute semantic similarity between generated and reference code. Mastropaolo et al. introduced SIDE [22], a metric that applies contrastive learning to enhance cosine similarity-based evaluation on code summaries. Recently, Zhuo et al. proposed ICE-Score [45], which prompts an LLM to assign a correctness score (0–4) based on predefined criteria, enabling more accurate evaluation aligned with human judgment. Additionally, Evtikhiev et al. [11] empirically evaluated six metrics—BLEU, ROUGE-L, METEOR, ChrF, CodeBLEU, and RUBY—against human judgments and found significant misalignment, emphasizing the need for more accurate and reliable automatic metrics.
In contrast to existing SE-specific metrics, SWE-Judge (1) leverages LLMs to better capture the semantics of software artifacts, (2) introduces diverse evaluation strategies to infer correctness, and (3) incorporates a lightweight team selection process to identify effective strategy combinations, thereby enhancing the accuracy and reliability of automatic correctness evaluation. Moreover, while prior metrics are typically evaluated on a single SE task (e.g., code generation), our study demonstrates the effectiveness of SWE-Judge across three popular tasks: code generation, automated program repair, and code summarization.
Additionally, several concurrent studies investigate similar topics in parallel. Wang et al. [33] empirically investigate LLM-as-a-judge methods from NLP for evaluating SE tasks, focusing on consistency and readability aspects. In contrast, our study introduces a new SE-specific metric that accurately reflects the functional correctness of generated software artifacts. Tong et al. [31] proposed CODEJUDGE, which uses LLMs to evaluate the functional correctness of generated code in code generation. Dong et al. [9] proposed CodeScore, a method that estimates the functional correctness of generated code by evaluating its pass ratio and executability. Unlike their approaches, we propose a team selection mechanism and diverse prompting strategies to infer functional correctness, while also evaluating our tool across three popular tasks. CodeUltraFeedback [34] assesses LLMs’ alignment with human evaluations from five non-functional code aspects, such as coding style. In contrast, our study focuses on proposing a new, effective SE-specific metric to evaluate the functional correctness of software artifacts. | Large Language Models (LLMs) and other automated techniques have been
increasingly used to support software developers by generating software
artifacts such as code snippets, patches, and comments. However, accurately
assessing the correctness of these generated artifacts remains a significant
challenge. On one hand, human evaluation provides high accuracy but is
labor-intensive and lacks scalability. On the other hand, other existing
automatic evaluation metrics are scalable and require minimal human effort, but
they often fail to accurately reflect the actual correctness of generated
software artifacts.
In this paper, we present SWE-Judge, the first evaluation metric for
LLM-as-Ensemble-Judge specifically designed to accurately assess the
correctness of generated software artifacts. SWE-Judge first defines five
distinct evaluation strategies, each implemented as an independent judge. A
dynamic team selection mechanism then identifies the most appropriate subset of
judges to produce a final correctness score through ensembling. We evaluate
SWE-Judge across a diverse set of software engineering (SE) benchmarks,
including CoNaLa, Card2Code, HumanEval-X, APPS, APR-Assess, and Summary-Assess.
These benchmarks span three SE tasks: code generation, automated program
repair, and code summarization. Experimental results demonstrate that SWE-Judge
consistently achieves a higher correlation with human judgments, with
improvements ranging from 5.9% to 183.8% over existing automatic metrics.
Furthermore, SWE-Judge reaches agreement levels with human annotators that are
comparable to inter-annotator agreement in code generation and program repair
tasks. These findings underscore SWE-Judge's potential as a scalable and
reliable alternative to human evaluation. | [
"cs.SE",
"cs.AI",
"cs.CL"
] |
# GitHub Proxy Server: A tool for supporting massive data collection on GitHub
Hudson Silva Borges Universidade Federal de Mato Grosso do Sul - UFMS Campo Grande - MS - Brasil hudson.borges@ufms.br
Marco Tulio Valente Universidade Federal de Minas Gerais - UFMG Belo Horizonte - MG - Brasil mtov@dcc.ufmg.br
# ABSTRACT
GitHub is the most popular social coding platform and is widely used by developers and organizations around the world to host their open-source projects. The platform also has a web API that allows developers to collect information from public repositories hosted on it. However, collecting massive amounts of data from GitHub can be very challenging due to existing restrictions and abuse detection mechanisms. In this work, we present a tool, called GitHub Proxy Server, which abstracts such complexities behind an architecture that is independent of the operating system and programming language. We show that, using the proposed tool, it is possible to improve the performance of GitHub mining tasks without any additional complexities.
Tool: https://github.com/gittrends-app/github-proxy-server Video: https://youtu.be/Dld9sK2lE1k License: MIT
# KEYWORDS
github, proxy, mining, apis
# ACM Reference Format:
Hudson Silva Borges and Marco Tulio Valente. 2022. GitHub Proxy Server: A tool for supporting massive data collection on GitHub. In XXXVI Brazilian Symposium on Software Engineering (SBES 2022), October 5–7, 2022, Virtual Event, Brazil. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/ 3555228.3555276
# 1 INTRODUCTION
In recent years, GitHub has been used as a primary data source by researchers and developers interested in advancing the state of the art and building tools to support open-source development [4, 5]. Indeed, GitHub is the most popular social coding platform and is widely used by developers and companies to host their projects [2]. Part of this popularity is explained by features that go far beyond simple version control, such as following the activity of other developers (follow) and discussion forums integrated into the project itself (Discussion) [1, 3, 6, 12].
Besides its social features, GitHub lets users access its web services through public APIs (Application Programming Interfaces). Two versions are currently available: the REST API (v3)1 and the GraphQL API (v4).2 Both APIs allow users to collect public information from any project hosted on the platform. To access this information, however, users must follow the platform's rules and recommendations, which can be a major challenge for those who need to collect massive amounts of data [7]. For example, each user authenticated with a token may perform up to 5,000 requests per hour. Moreover, issuing parallel requests with the same token can temporarily block the user's access. According to GitHub, these rules and recommendations are enforced to guarantee service availability and fair use of its services.3
To simplify this process and make data collection more transparent for developers and researchers who use the GitHub API, this work presents a tool, called GitHub Proxy Server, that abstracts away the platform's limitations and recommendations through a proxy architecture that is independent of the operating system and programming language. Its main features are: (i) support for multiple access tokens; (ii) automatic orchestration of simultaneous requests; (iii) load balancing; and (iv) adjustable settings. A case study showed that the tool integrates easily with the main client libraries available today and lets developers make full use of the computational resources at their disposal.
A case study with the tool shows that integration with existing libraries is quite simple: one only needs to change the target base URL to point to the proposed tool. In addition, data collection time can be optimized by using parallel processes without violating GitHub's own recommendations.
The remainder of this paper is organized as follows. Section 2 details the limitations and restrictions imposed by GitHub on its API, some alternatives to the GitHub API, and how these limitations affect developers and researchers. Section 3 details the proposed architecture and Section 4 the implementation of the tool. Section 5 presents a study on how to integrate the proposed tool into existing tools and activities, while Section 6 compares it with related tools. Finally, Section 7 concludes the paper and presents future directions for the tool.
# 2 DATA COLLECTION ON GITHUB
In the literature, most studies explore the challenges involved in research based on mining GitHub data. For example, Kalliamvakou et al. [9] report the promises and perils of mining GitHub repositories, focusing on the characteristics of the hosted projects. The authors also derive a set of recommendations for researchers interested in using projects hosted on the platform.
However, few studies report the technical challenges involved in collecting data from GitHub. Users who want to collect data from the GitHub APIs (i.e., REST or GraphQL) must pay attention to the limit on the number of requests each user may perform. Users not authenticated with an access token can make a total of 60 requests per hour, while authenticated users have this limit raised to 5,000 requests/hour/token. Upon exceeding this limit, users must wait for the limit to reset before making new requests. An alternative adopted by many users is to obtain more than one access token, from different users, to multiply their collection capacity.
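To see why these limits matter for massive collection, a back-of-the-envelope calculation helps; the workload figures below are illustrative assumptions, not numbers from the paper, while the rate limits are GitHub's documented ones.

```python
def collection_hours(total_requests, tokens=1, limit_per_hour=5000):
    """Lower bound on wall-clock hours imposed purely by the rate limit
    (5,000 requests/hour per authenticated token)."""
    return total_requests / (tokens * limit_per_hour)

# Illustrative workload: 1,000 repositories x 500 paginated requests each.
needed = 1_000 * 500
one_token = collection_hours(needed)             # 100 hours with one token
three_tokens = collection_hours(needed, tokens=3)  # about a third of that
```

Multiplying tokens divides the lower bound, which is precisely the capacity-multiplication strategy mentioned above.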
In addition, GitHub implements several internal mechanisms to detect abuse of its services. Users who do not follow the platform's recommendations may have their access temporarily blocked, as shown by the following message from the API itself: "You have triggered an abuse detection mechanism and have been temporarily blocked from content creation. Please retry your request again later". Users must therefore heed the recommendations to avoid such problems.
For instance, users are advised not to issue multiple parallel requests to GitHub's servers; it is not recommended, for example, to send simultaneous requests to two different endpoints (e.g., stars and issues) using the same access token. It is also recommended that users add a minimum delay between requests. The platform currently provides a best-practices guide with several tips for users of its services.4
An alternative adopted by researchers to ease the data collection task is to use datasets of previously collected data. Gousios et al. [8] periodically publish data collected from the REST API through the GHTorrent project, while the GHArchive5 project monitors, archives, and shares all public events emitted by GitHub [10]. However, the main problem with this data model concerns the freshness and completeness of the data. For example, as of this writing, the GHTorrent data was last updated in 06/2019 (three years ago).
# 3 PROPOSED ARCHITECTURE
To be a platform- and programming-language-independent tool, GitHub Proxy Server was designed to act as a proxy between GitHub's services (i.e., the REST API and GraphQL API) and users who need to collect massive amounts of data. Essentially, in the proposed architecture, the proxy server receives user requests and forwards them to GitHub's services in a way that avoids the restrictions and minimizes the limits originally imposed, freeing users from the responsibility of managing the available access tokens. Figure 1 shows the architecture of the proposed tool.
Figure 1: GitHub Proxy Server architecture
For the proxy server to run correctly, users must provide at least one GitHub access token. This token can be generated by the user on the GitHub website or obtained from applications registered on GitHub. For each token provided, the server creates an internal worker responsible for managing and coordinating all requests made with it. When the proxy server receives a request from a client, it automatically forwards it to the worker with the greatest capacity and availability to process it.
In the proposed architecture, each incoming request is checked by the server for an access token and a user agent (a.k.a. user-agent), information that must be present in every request to GitHub's services. If any of this information is missing, the worker automatically fills it in with its own token and a default user-agent value.
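This header-filling behavior can be sketched as a pure function. The header field names follow standard HTTP conventions; the default user-agent string and token values below are placeholders, not the tool's actual defaults.

```python
DEFAULT_USER_AGENT = "github-proxy-server"  # placeholder default, assumed

def fill_headers(headers, worker_token, default_agent=DEFAULT_USER_AGENT):
    """Return a copy of the request headers with the worker's token and a
    default user-agent filled in only when the client omitted them."""
    out = dict(headers)
    out.setdefault("Authorization", f"token {worker_token}")
    out.setdefault("User-Agent", default_agent)
    return out

# A client request with no credentials gets the worker's token...
filled = fill_headers({}, "ghp_example")
assert filled == {"Authorization": "token ghp_example",
                  "User-Agent": "github-proxy-server"}

# ...while explicitly provided values are left untouched.
kept = fill_headers({"Authorization": "token ghp_mine"}, "ghp_example")
assert kept["Authorization"] == "token ghp_mine"
```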
Thus, users can make the requests they need without further concerns, and several client applications can run simultaneously, making better use of the available computational resources. For example, a research group or a team of developers sharing a set of access tokens can create a single instance of the proxy server and use it with no additional configuration or synchronization effort.
The main features of the proposed tool are presented next:
Support for multiple access tokens: GitHub's services limit the number of requests users can make. To go beyond these limits, it is common to use several tokens at once. The proposed tool lets users supply multiple tokens and makes each one act as an independent client, managing the number of available requests and automatically handling request failures.
Request orchestration: When simultaneous requests are made with the same token, GitHub may detect this and block the user's access. To avoid this, the proposed tool creates a request queue for each token. It is also possible to add a waiting time between requests to avoid back-to-back calls. This approach keeps the server from abusing GitHub's services, as recommended by the platform itself.
Load balancing: To make the best possible use of the available requests, a balancing function distributes incoming requests among the available workers. If there are idle workers, requests are forwarded to them directly. Otherwise, the request is forwarded to the worker with the shortest queue of pending requests. If more than one queue has the same number of pending requests, the request goes to the first one with the largest number of requests still available on GitHub.
Adjustable settings: All settings adopted by the tool can be customized to better meet users' needs. For example, by default, the interval between worker requests is set to 250 milliseconds, but users can increase or decrease it as needed. It is also possible to set a maximum usage limit for each token, leaving a minimum share for the owners of the access tokens in use.
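The load-balancing policy described above can be sketched in a few lines; the Worker record is a hypothetical simplification of the tool's internal state, and the selection order mirrors the three rules in the text (idle first, then shortest queue, ties broken by remaining quota).

```python
from dataclasses import dataclass

@dataclass
class Worker:
    queued: int     # pending requests in this worker's queue
    remaining: int  # requests this token can still make on GitHub

def pick_worker(workers):
    """Idle worker first; otherwise shortest queue; ties broken in favor
    of the worker with the most remaining GitHub requests."""
    idle = [w for w in workers if w.queued == 0]
    if idle:
        return idle[0]
    return min(workers, key=lambda w: (w.queued, -w.remaining))

a, b, c = Worker(2, 4000), Worker(1, 100), Worker(1, 4800)
assert pick_worker([a, b, c]) is c  # equal queues, c has more quota left
assert pick_worker([a, b, Worker(0, 10)]).queued == 0  # idle wins outright
```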
# 4 THE TOOL
An implementation of the proposed architecture is publicly available at https://github.com/gittrends-app/github-proxy-server under the MIT license.6 The tool can be installed and used via the NPM (Node Package Manager) package manager or from the executables available for each platform (Linux, Windows, and macOS). Currently, the tool can only be run from the terminal, and users are required to provide their GitHub access tokens via arguments, environment variables, or a configuration file. Figure 2 shows a screenshot with all options accepted by the tool.
```
$ github-proxy-server --help
Usage: cli [options]

Options:
  -p, --port <port>               Port to start the proxy server (default: 3000)
  -t, --token <token>             GitHub token to be used (default: [])
  --api <api>                     API version to proxy requests (choices: "graphql", "rest", default: "graphql")
  --tokens <file>                 File containing a list of tokens
  --request-interval <interval>   Interval between requests (ms) (default: 250)
  --request-timeout <timeout>     Request timeout (ms) (default: 20000)
  --min-remaining <number>        Stop using token on (default: 100)
  --clustering                    Enable clustering mode (require redis) (default: false)
  --clustering-redis-host <host>  (clustering) redis host (default: "localhost")
  --clustering-redis-port <port>  (clustering) redis port (default: 6379)
  --clustering-redis-db <db>      (clustering) redis db (default: 0)
  --silent                        Dont show requests outputs (default: false)
  -V, --version                   output the current version
  -h, --help                      display help for command
```
By default, the tool is configured to work with the GraphQL API, the version that succeeds the REST API. Users can, however, select which version they want to use. This information is needed because request accounting is done per service, i.e., the counters are independent.
It is also possible to configure the interval between requests (--request-interval option), the maximum execution time of a request (--request-timeout option) before it is automatically aborted, and the minimum number of requests the application must keep unused (--min-remaining option). Finally, users may opt to build server clusters (--clustering option) to expand capacity while keeping the servers synchronized.
The tool also provides a logging service and an activity monitor that lets users track the requests passing through it, including a summary of the results (i.e., status codes) of the requests to GitHub's services (Figure 3). Monitoring usage allows users to fine-tune the tool's parameters.
# 5 INTEGRATION WITH OTHER TOOLS
As described earlier, the proposed tool was designed to be independent of platform and programming language.
Figure 3: Activity monitoring
Moreover, it aims to let any user adopt the tool without major changes to their applications.
Several libraries are currently available for developers interested in accessing GitHub's services from their favorite programming languages; GitHub itself provides a list of available libraries in its official documentation. In this work, we first analyzed the three official libraries from the GitHub team: (i) octokit.rb9, for Ruby; (ii) octokit.net10, for .NET; and (iii) octokit.js11, for JavaScript. All three libraries provide ways to set the target URL of the requests, making it possible to integrate applications with the proposed tool.
For example, Figure 4 shows how a developer can configure the octokit.js library to use the GitHub Proxy Server. Basically, users must direct the library's requests to the proxy server by changing the baseUrl property in the client constructor.
Figure 4: Integration with octokit.js
Regarding libraries for other languages, many of them also support custom target URLs or an internal proxy configuration. For example, the PyGitHub12 library, the most popular one for Python with more than 5k stars, lets users set a different base URL for requests through the base_url parameter.
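In practice, redirection comes down to swapping the API host in the base URL (e.g., passing `base_url="http://localhost:3000"` to PyGitHub's `Github` constructor). The rewrite itself can be sketched with the standard library; the proxy address below is an assumed local instance.

```python
from urllib.parse import urlsplit, urlunsplit

PROXY = "http://localhost:3000"  # assumed local GitHub Proxy Server instance

def via_proxy(url, proxy=PROXY):
    """Rewrite a GitHub API URL so the request goes through the proxy,
    keeping the path, query string, and fragment intact."""
    scheme, netloc, *_ = urlsplit(proxy)
    return urlunsplit((scheme, netloc, *urlsplit(url)[2:]))

assert via_proxy("https://api.github.com/repos/octocat/hello/issues?page=2") == \
    "http://localhost:3000/repos/octocat/hello/issues?page=2"
```

Since only the host changes, the client code and its authentication headers remain untouched, which is what makes the integration language-independent.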
Figure 5: Integration with PyGithub
# 5.1 Usage Example
To illustrate and measure the tool's impact on mining activities, we propose the following task: search for the most popular GitHub repositories by number of stars, collect their issues, releases, tags, and stargazers using the REST API, and store the collected data in a non-relational database.
We compare the performance obtained in this task (i) using the proposed tool and (ii) making the requests directly to GitHub's service (i.e., without the tool). We also evaluate the performance when using a single access token and when using multiple tokens.
Since developers making requests directly to GitHub's services are subject to all of the platform's limits and restrictions, in this usage example we chose to make the requests sequentially, with a 50-millisecond interval after each request, for each available token. As a consequence, resource collection was also sequential. Figure 6 illustrates the process adopted for the collection task without the proposed tool. In this processing model, it is guaranteed that only one request is in flight at any moment per available token.
Figure 6: Processing without the tool
For the data collection process using the proposed tool (Figure 7), the scripts were modified to run in parallel. In this case, parallel requests cause no problems because the proposed tool, upon receiving them, can process them without violating the usage restrictions of GitHub's services. In this model, three parallel processes were used for each available token. Furthermore, each process can have up to six simultaneous requests at any instant (one for each GitHub resource being processed).
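The parallel setup can be sketched with a thread pool in which several clients fetch different resources concurrently and leave the per-token pacing to the proxy. The fetch function below is a stub standing in for real HTTP calls, and the resource list follows the task description above.

```python
from concurrent.futures import ThreadPoolExecutor

RESOURCES = ["issues", "releases", "tags", "stargazers"]

def fetch(repo, resource):
    """Stub for an HTTP request routed through the proxy; a real client
    would GET f'{proxy}/repos/{repo}/{resource}' here."""
    return (repo, resource, "ok")

def collect(repos, workers=6):
    """Fan out one request per (repo, resource) pair; the proxy, not the
    client, serializes them per token as GitHub's guidelines require."""
    jobs = [(r, res) for r in repos for res in RESOURCES]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda j: fetch(*j), jobs))

results = collect(["octocat/hello", "torvalds/linux"])
assert len(results) == 2 * len(RESOURCES)
```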
Figure 7: Processing using the GitHub Proxy Server
The scripts used in this example are available in the cbsoft-tools-2022 branch under the benchmark directory of the project's repository. The data for the analysis reported below is in the same directory, along with the R Markdown13 file for reproducing the results.
To run the tests, we rented a cloud virtual machine on Amazon Web Services (AWS) with 4GB of RAM, 2 vCPUs, and an 80GB solid-state disk.14 The environment was set up with the latest available versions of Node.js15 and the MongoDB16 database. All tests were executed individually under the same conditions.
5.1.1 Performance with a single access token. Figure 8 shows the time spent on each request for both setups. Since direct requests to GitHub's services were made sequentially, the duration of each request was quite short. We also observed that some API endpoints (e.g., issues) take longer to respond than others (e.g., stargazers). The total time to complete all requests was 35 minutes and 22 seconds.
Requests made through the proxy server, in turn, took longer individually. This is explained by the fact that the proposed tool queues parallel requests. Nevertheless, the same number of requests was completed in 32 minutes and 13 seconds. One factor that helps explain this result is the better use of the available computational resources when processing in parallel: for example, disk writes and network requests can easily be interleaved by the operating system.
5.1.2 Performance with multiple access tokens. Figure 9 shows the timeline of the proposed task using three access tokens. In both cases, there is a higher concentration of requests over time. But, once again, using the proxy server to mediate the requests allowed the collection to finish in considerably less time. With direct requests, the time needed to process all requests was 36 minutes and 55 seconds, very close to the previous result. Using the proxy server, the time was considerably shorter: 26 minutes and 49 seconds. Developers can therefore benefit even more from the tool when using multiple access tokens.
Figure 8: Request timeline
Figure 9: Request timeline (three tokens)
# 6 RELATED TOOLS
Several tools for analyzing git repositories have been proposed in the literature. For example, PyDriller lets developers clone projects and extract information about commits, developers, and other metadata directly from them [11]. However, as of this writing, no other open-source tool with the same purpose as the one presented here was found. The proposed tool aims to help researchers conduct massive data collection directly from GitHub, regardless of the platform or programming language they adopt.
However, libraries for certain programming languages may offer features similar to those of the proposed tool. For example, GitHub's official library for JavaScript, octokit.js, provides a set of features for interacting with the REST and GraphQL APIs through plugins. Specifically, the octokit/plugin-throttling.js plugin implements all of GitHub's own best-practice recommendations in order to avoid triggering the abuse detection mechanisms. However, the library does not allow adding multiple access tokens as the proposed tool does.
To check whether other libraries also offer such features, we briefly analyzed the documentation of the third-party JavaScript libraries listed in the platform's own documentation.18 None of the four libraries implements equivalent functionality. The same was observed for libraries in other programming languages, which underscores the importance of the proposed tool.
# 7 FINAL REMARKS
GitHub has been used as a primary data source by many researchers. Among the factors that help explain this phenomenon are the platform's popularity in the open-source community and a powerful API that lets developers and researchers access the information available on it. However, accessing this information is not trivial, and users are subject to restrictions that can make massive data collection infeasible.
This paper presented a tool, called GitHub Proxy Server, whose goal is to abstract away the restrictions and recommendations for accessing GitHub's services, making the process fully transparent to developers and researchers who want to collect data from the platform. Evaluations conducted with the tool showed that integration with existing libraries and tools is quite simple. Moreover, by allowing concurrent requests, users can optimize their activities to make better use of the available computational resources. Finally, since authentication is performed by modifying the request headers, the proposed solution has the potential to support even future versions of the GitHub API.
As future work, we plan to add automatic acquisition of access tokens, which would simplify configuring the tool, a task currently performed manually by users. We also intend to evaluate a strategy for automatically adjusting parameters based on usage; currently, these values are set manually when the application starts.
# REFERENCES
[1] Hudson Borges, Andre Hora, and Marco Tulio Valente. 2016. Predicting the Popularity of GitHub Repositories. In 12th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE). 1–10.
[2] Hudson Borges, Andre Hora, and Marco Tulio Valente. 2016. Understanding the factors that impact the popularity of GitHub repositories. In 32nd International Conference on Software Maintenance and Evolution (ICSME). 334–344.
[3] Hudson Borges and Marco Tulio Valente. 2018. What’s in a GitHub star? understanding repository starring practices in a social coding platform. Journal of Systems and Software 146 (2018), 112–129.
[4] Valerio Cosentino, Javier Luis Cánovas Izquierdo, and Jordi Cabot. 2016. Findings from GitHub: methods, datasets and limitations. In 13th Working Conference on Mining Software Repositories (MSR). 137–141.
[5] Valerio Cosentino, Javier Luis Cánovas Izquierdo, and Jordi Cabot. 2017. A Systematic Mapping Study of Software Development With GitHub. IEEE Access 5 (2017), 7173–7192.
[6] Laura Dabbish, Colleen Stuart, Jason Tsay, and Jim Herbsleb. 2012. Social coding in GitHub: transparency and collaboration in an open software repository. In Conference on Computer Supported Cooperative Work (CSCW). 1277–1286.
[7] Georgios Gousios and Diomidis Spinellis. 2017. Mining Software Engineering Data from GitHub. In 39th International Conference on Software Engineering Companion (ICSE-C). 501–502.
[8] Georgios Gousios, Bogdan Vasilescu, Alexander Serebrenik, and Andy Zaidman. 2014. Lean GHTorrent: GitHub data on demand. In 11th Working Conference on Mining Software Repositories (MSR). 384–387.
[9] Eirini Kalliamvakou, Georgios Gousios, Kelly Blincoe, Leif Singer, Daniel M German, and Daniela Damian. 2016. An in-depth study of the promises and perils of mining GitHub. Empirical Software Engineering 21 (2016), 2035–2071.
[10] Nuthan Munaiah, Steven Kroh, Craig Cabrey, and Meiyappan Nagappan. 2017. Curating GitHub for engineered software projects. Empirical Software Engineering 22, 6 (2017), 3219–3253.
[11] Davide Spadini, Maurício Aniche, and Alberto Bacchelli. 2018. PyDriller: Python framework for mining software repositories. In 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). 908–911.
[12] Ferdian Thung, Tegawende F Bissyande, David Lo, and Lingxiao Jiang. 2013. Network structure of social coding in GitHub. In 17th European Conference on Software Maintenance and Reengineering (CSMR). 323–326. | GitHub is the most popular social coding platform and is widely used by developers and organizations around the world to host their open-source projects. The platform also has a web API that allows developers to collect information from public repositories hosted on it. However, collecting massive amounts of data from GitHub can be very challenging due to existing restrictions and abuse detection mechanisms. In this work, we present a tool, called GitHub Proxy Server, which abstracts such complexities behind an architecture that is independent of the operating system and programming language. We show that, using the proposed tool, it is possible to improve the performance of GitHub mining tasks without any additional complexities. | [
"cs.SE"
] |
# 1 Introduction
Spiking neural networks (SNNs) [1] and in particular recurrent SNNs (RSNNs) constitute the basis of energy-efficient computation in the brain [2] and in neuromorphic hardware [3]. While RSNNs can be implemented efficiently in neuromorphic systems, training of these models with powerful gradient-based learning algorithms is mostly performed on standard digital hardware such as graphics processing units (GPUs) using backpropagation through time (BPTT) — the gold standard training method for spiking and non-spiking recurrent neural networks.
However, BPTT requires an expensive backward pass through the entire sequential computation process, with a memory and time complexity scaling linearly with the number of computation steps. The sequential nature of this algorithm introduces a computational bottleneck on GPUs, where the unrolled computational graph needs to be processed state-by-state in the backward pass, hindering parallelization. In artificial neural network models for sequence processing, this bottleneck has recently been addressed by parallelizable models [4–7], often referred to as deep state-space models (SSMs), which achieve significantly increased throughput via more exhaustive utilization of GPU resources. Their parallelization is achieved by removing the nonlinearity between RNN state transitions. In the realm of SNNs, corresponding parallelizable SNN models have been proposed [8–11]. However, in order to utilize the parallel processing power of GPUs, significant departures from the fundamental properties of spiking neuron models have to be accepted: first, nonlinear neural dynamics such as the membrane potential reset after a spike have to be avoided, and second, no recurrent connections are possible within the network. Moreover, the fundamental limitations of BPTT remain: besides its sequence-length-dependent memory consumption, BPTT can only operate in an offline manner, since it requires processing of the full input time series before the backward pass can be initiated. This principle is fundamentally incompatible with neuromorphic hardware [12].
To address this issue, online methods of gradient learning have been proposed, with real-time recurrent learning (RTRL) [13] as its initial form, where instead of a separate backward pass, the gradients are accumulated during the forward pass. While the memory requirement of RTRL is independent of the computation length, it scales as $\mathcal{O}(N^2)$ with the number of parameters $N$, which renders this and related algorithms infeasible in practice (see Appendix A). To overcome this issue, approximations have been introduced that offer a trade-off between memory overhead and gradient approximation accuracy. One such approximation is e-prop (eligibility propagation) [12], where the pathways of gradient flow through the recurrent synaptic connections are disregarded, while gradients through the neuron dynamics are forward propagated as in RTRL. Approximate forward propagation algorithms can be applied to virtually all reasonable spiking neuron models and recurrent network architectures. However, while they can be implemented efficiently in neuromorphic hardware [14], they do not take advantage of parallelization in the time domain, which makes training on long sequential input extremely time consuming on standard hardware such as GPUs, hindering progress in this direction of research.
Here, we introduce HYbrid PRopagation (HYPR), a method that combines approximate online forward learning with segment-wise parallel approximate backpropagation to enable partial parallelization during training. By ignoring the gradient pathways through recurrent connections, we parallelize BPTT within each new sequence segment. The resulting back-propagated gradients are then combined with the forward-propagated gradients from previous segments, allowing for infinite training context length with constant memory complexity. We show that the combination of parallelization and approximate forward propagation yields the best of both worlds: high-throughput segment-wise online learning through parallelization, paired with constant, i.e., sequence-length-independent, memory demands and high accuracy through powerful neuron models. HYPR enables parallelization of parameter update computation over sequence segments for recurrent SNNs and almost arbitrary spiking and non-spiking neuron models. This holds even for neuron models that are not inherently parallelizable due to non-linearities in the state transition function, for example due to a spike-triggered reset mechanism, as it is commonly used in leaky integrate-and-fire (LIF) or adaptive LIF neurons. We show that even in a medium-sized network, this segment-wise parallelization of parameter update computations results in a $108\times$ speedup of training compared to the mathematically equivalent fully-online algorithm e-prop, when executed on a GPU.
Recent work on RSNNs [15–17] suggests that oscillatory state dynamics of neural elements can be very beneficial for sequence processing. Since such neuron models introduce complex neuron state dynamics, they may be particularly well-suited for approximate forward propagation algorithms such as HYPR, which propagate gradients through the state dynamics but neglect those through network recurrences. We therefore applied HYPR to networks consisting of spiking neurons with oscillatory subthreshold dynamics [15, 17]. We found that using HYPR to train such SNNs clearly reduces the gap to BPTT-trained SNNs.
# 2 Related Work
Several variants and approximations of forward- and back-propagation of gradients have been proposed. For example, the sequence-length dependent memory consumption of BPTT has been addressed in truncated BPTT [18], where gradients are only back-propagated for a predefined number of time steps, enabling constant memory complexity. An obvious consequence of this truncation however is that temporal credit assignment beyond its context window is impossible, rendering it infeasible to learn tasks where long-term dependencies occur.
Figure 1: a Generic neuron model framework considered in this work. We differentiate between intra-layer neuron recurrence through explicit recurrent weighted connections (magenta) and implicit recurrence through the state-to-state transition (blue). b The local gradient $[ d \mathbf { s } _ { i } ^ { t } / d \theta _ { i } ] _ { \mathrm { l o c a l } }$ is given by the pathway through the states of neuron $i$ without considering indirect influence through outputs of other neurons. The magenta pathways are ignored in the local gradient.
The exact forward learning RTRL [13] on the other hand naturally solves the problem of supporting temporal credit assignment across infinite context length under constant memory, at the cost of very high memory demands of $\mathcal { O } ( N ^ { 2 } )$ and slow execution time due to high memory I/O load (see Appendix A). This complexity has been reduced successfully by approximation [19–22] of the full sensitivity matrix or by discarding certain pathways of the gradients as in e-prop [12] or OTTT [23]. Interestingly, omission of certain pathways of the gradient can be applied to RTRL and BPTT alike. It reduces the computational complexity of both algorithms and, as we show in this work, enables BPTT to be efficiently parallelized over the time dimension. As discussed in [24, 25], combining forward gradient learning for infinite context length with segment-wise backward gradient learning for efficiency constitutes a potentially powerful and efficient hybrid of both methods. Despite this promising perspective, this research direction remains surprisingly underexplored. Our work shows how a combination of e-prop and segment-wise backward gradient accumulation can partially parallelize training.
Online learning has also been discussed in the context of parallelizable models: [25] and [26] show that in parallelizable models the memory complexity of RTRL can be significantly reduced despite using exact gradients when nonlinear inter-dependencies of neurons of the same layer are removed. While being applicable to the family of parallelizable spiking networks [8, 9, 11, 27], this principle is fundamentally incompatible with standard RSNNs such as vanilla recurrent LIF networks.
# 3 Background
# 3.1 Nonlinear neuron models
Consider a single neuron in a layer of recurrently connected (potentially spiking) neurons with an arbitrary, possibly non-linear, state transition function $f$ and an output function $g$ given by
$$
\begin{array} { r } { \mathbf { s } _ { i } ^ { t } = f ( \mathbf { s } _ { i } ^ { t - 1 } , I _ { i } ^ { t } ) , \qquad \quad y _ { i } ^ { t } = g ( \mathbf { s } _ { i } ^ { t } ) , } \end{array}
$$
where $\mathbf{s}_i^t \in \mathbb{R}^k$ denotes the $k$-dimensional state of neuron $i$ at time step $t$, $I_i^t \in \mathbb{R}$ the neuron input and $y_i^t \in \mathbb{R}$ the neuron output, as shown in Fig. 1a. The state dimension $k$ varies between different neuron models. In spiking neuron models, the output function $g$ is usually the Heaviside step function, which is not differentiable. As is common practice — and also adopted in this work — the partial derivative $\partial y_i^t / \partial \mathbf{s}_i^t$ is hence approximated using a surrogate derivative [28, 29] (see Appendix M). The scalar neuron input $I_i^t$ is composed of a feed-forward input $I_{\mathrm{ff},i}^t$ and a recurrent input $I_{\mathrm{rec},i}^t$ given by
$$
\begin{array} { r } { I _ { i } ^ { t } = I _ { \mathrm { f f } , i } ^ { t } + I _ { \mathrm { r e c } , i } ^ { t } + b _ { i } = \mathbf { w } _ { \mathrm { f f } , i } ^ { \top } \mathbf { x } ^ { t } + \mathbf { w } _ { \mathrm { r e c } , i } ^ { \top } \mathbf { y } ^ { t - 1 } + b _ { i } , } \end{array}
$$
with feed-forward weight vector $\mathbf { w } _ { \mathrm { f f } , i } \in \mathbb { R } ^ { d }$ , recurrent weight vector $\mathbf { w } _ { \mathrm { r e c } , i } \in \mathbb { R } ^ { m }$ , scalar bias $b _ { i }$ and input vector $\mathbf { x } ^ { t } \in \mathbb { R } ^ { d }$ , given by the $t$ -th element of some (possibly infinitely long) input time series $X = [ \mathbf { x } ^ { 1 } , \mathbf { x } ^ { 2 } , \ldots ]$ . Vector $\mathbf { y } ^ { t - 1 } \in \mathbb { R } ^ { m }$ contains the outputs $y _ { j } ^ { t - 1 }$ of all neurons $j \in \{ 1 , \dots , m \}$ of the same layer from the previous time step.
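To make Eqs. (1) and (2) concrete, the following sketch instantiates them for a LIF neuron with a one-dimensional state (the membrane potential), a Heaviside output, and reset-by-subtraction. This is a minimal illustration in plain Python; the decay factor, threshold, and drive value are placeholders chosen for demonstration, not parameters from this work:

```python
def lif_step(s, I, alpha=0.9, theta=1.0):
    """One step of Eqs. (1)-(2) for a LIF neuron (state dimension k = 1).

    s: membrane potential s^{t-1}; I: total input I^t (feed-forward +
    recurrent + bias, assembled as in Eq. (2)). The spike-triggered reset
    makes the transition f nonlinear, which is exactly what prevents
    direct parallelization over time.
    """
    y_prev = float(s >= theta)                 # spike from previous step, y^{t-1} = g(s^{t-1})
    s_new = alpha * (s - theta * y_prev) + I   # f: reset-by-subtraction, leak, integrate input
    y = float(s_new >= theta)                  # g: Heaviside output (surrogate derivative in training)
    return s_new, y

# Constant drive: the neuron integrates, crosses threshold, spikes, and resets.
s, spikes = 0.0, []
for _ in range(10):
    s, y = lif_step(s, 0.4)
    spikes.append(y)
```

Under this constant drive the neuron settles into a regular integrate-spike-reset cycle, firing every third step; in a full layer, `I` would be $\mathbf{w}_{\mathrm{ff},i}^\top \mathbf{x}^t + \mathbf{w}_{\mathrm{rec},i}^\top \mathbf{y}^{t-1} + b_i$ as in Eq. (2).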
# 3.2 Approximate forward propagation
In e-prop [12], parameter updates are calculated online in the sense that no back-propagation of gradients is required. Let $\theta_i \in \mathbb{R}^p$ denote the vector of all parameters of neuron $i$ and let $\mathcal{L}^t$ denote some loss at time step $t$. In each time step $t$, a corresponding approximate parameter gradient (APG) $\tilde{\nabla}\theta_i^t \approx \nabla_{\theta_i} \mathcal{L}^t$ is computed and passed to an optimizer such as ADAM [30] to obtain parameter updates. The APG $\tilde{\nabla}\theta_i^t$ is given by
$$
\tilde { \nabla } \theta _ { i } ^ { t } = \frac { d \mathcal { L } ^ { t } } { d y _ { i } ^ { t } } \frac { \partial y _ { i } ^ { t } } { \partial \mathbf { s } _ { i } ^ { t } } \left[ \frac { d \mathbf { s } _ { i } ^ { t } } { d \theta _ { i } } \right] _ { \mathrm { l o c a l } } ,
$$
where $[d\mathbf{s}_i^t / d\theta_i]_{\mathrm{local}}$ is a local approximation of the gradient of the neuron state $\mathbf{s}_i^t$ with respect to its parameters; see Appendix B for its exact definition. Here and in the following, we use the notation $\partial$ for partial derivatives, and the notation $d$ for total derivatives, in line with [12]. The full gradient $d\mathbf{s}_i^t / d\theta_i$, as used in BPTT, contains all pathways by which $\theta_i$ directly as well as indirectly influences state $\mathbf{s}_i^t$. The indirect pathways emerge for example from the dependence of $\mathbf{s}_i^t$ on outputs $y_j^{t-1}$ from other neurons, which again depend on previous outputs from neuron $i$ (see Fig. 1). In e-prop, these pathways are disregarded and $[d\mathbf{s}_i^t / d\theta_i]_{\mathrm{local}}$ contains only the neuron-internal gradient pathways through the state-to-state derivatives $\partial \mathbf{s}_i^q / \partial \mathbf{s}_i^{q-1}$, without propagating through recurrent intra-layer neuron connections (Fig. 1b).
In e-prop, the term $[ d \mathbf { s } _ { i } ^ { t } / d \theta _ { i } ] _ { \mathrm { l o c a l } }$ from Eq. (3) is called eligibility matrix $\mathbf { e } _ { i } ^ { t } \in \mathbb { R } ^ { k \times p }$ and is computed in a forward manner together with the neuron states during the regular forward pass:
$$
\left[ \frac{d\mathbf{s}_i^t}{d\theta_i} \right]_{\mathrm{local}} \stackrel{\mathrm{def}}{=} \mathbf{e}_i^t = \frac{\partial \mathbf{s}_i^t}{\partial \mathbf{s}_i^{t-1}} \mathbf{e}_i^{t-1} + \frac{\partial \mathbf{s}_i^t}{\partial \theta_i} .
$$
From this eligibility matrix, the APG $\tilde{\nabla}\theta_i^t$ at time step $t$ can be computed online without a backward pass by replacing $[d\mathbf{s}_i^t / d\theta_i]_{\mathrm{local}}$ with the forward-propagated eligibilities $\mathbf{e}_i^t$ in Eq. (3).
In an online learning algorithm such as e-prop, APGs are computed in a time-local manner; thus, the parameter update is computed directly at time $t$ . Because APGs are computed online in tandem with states and outputs, recurrent networks can be trained on arbitrarily long time series — retaining potentially infinite training context via eligibility matrices $\mathbf { e } _ { i } ^ { t }$ , depending on the timescale of the neuron dynamics. This is in contrast to truncated BPTT, which uses a strictly limited context window.
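The forward computation of Eqs. (3) and (4) can be sketched as a single loop that never stores past states. In the NumPy toy version below, random matrices stand in for the per-step Jacobians of one neuron; the dimensions $k$ (state) and $p$ (parameters) are arbitrary illustrative choices, not values from this work:

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, T = 2, 5, 50                          # state dim, parameters of one neuron, # time steps

A = rng.normal(scale=0.5, size=(T, k, k))   # stand-in for ds_i^t / ds_i^{t-1}
delta = rng.normal(size=(T, k, p))          # stand-in for the partial derivative ds_i^t / dtheta_i
B = rng.normal(size=(T, k))                 # stand-in for (dL^t/dy_i^t)(dy_i^t/ds_i^t)

e = np.zeros((k, p))                        # eligibility matrix e_i^0
apg_cum = np.zeros(p)                       # accumulated approximate parameter gradient
for t in range(T):
    e = A[t] @ e + delta[t]                 # Eq. (4): forward-propagate eligibilities
    apg_cum += B[t] @ e                     # Eq. (3): time-local APG, no backward pass
```

Note that the memory footprint is $\mathcal{O}(k \cdot p)$ regardless of $T$, which is the defining property of online learning.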
# 3.3 Parallelization
Training of recurrent models can be sped up drastically by parallelization of computations over time. In SSMs, parallelization of both the forward and backward pass is made possible by choosing the state-to-state transition to be linear [4]. In our notation, this would require a linear state transition function $f$ (Eq. (1)) as well as a linear output function $g$ if recurrent connections are used. Hence, SSMs either use linear recurrent interactions [31] or no recurrent connections at all [7, 32].
In this case, the network dynamics can be written as $\mathbf{s}^t = A^t \mathbf{s}^{t-1} + \mathbf{x}^t$, where $\mathbf{s}^t$ is the state vector (of the whole layer) at time $t$, $\mathbf{x}^t$ is the input vector, and $A^t$ denotes the time-variant state transition matrix. Then, the series of states $[\mathbf{s}^1, \mathbf{s}^2, \mathbf{s}^3, \ldots]$ can be written explicitly as $[\mathbf{x}^1, A^1 \mathbf{x}^1 + \mathbf{x}^2, A^2 A^1 \mathbf{x}^1 + A^2 \mathbf{x}^2 + \mathbf{x}^3, \ldots]$, which can be efficiently computed via the associative scan (also called parallel prefix sum) algorithm [33] in $\mathcal{O}(\log T)$ time, where $T$ is the sequence length. Please refer to [34] for a more detailed explanation.
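The parallel evaluation of $\mathbf{s}^t = A^t \mathbf{s}^{t-1} + \mathbf{x}^t$ rests on the fact that affine maps compose associatively. A minimal NumPy sketch follows, with a Hillis-Steele scan standing in for a real GPU scan kernel; the helper names (`combine`, `scan_states`) are ours, not from any library:

```python
import numpy as np

def combine(left, right):
    """Compose two affine maps s -> A s + b, applying `left` first (associative)."""
    A1, b1 = left
    A2, b2 = right
    return A2 @ A1, A2 @ b1 + b2

def scan_states(A, x):
    """Inclusive scan over the elements (A^t, x^t) in Hillis-Steele order.
    All `combine` calls within one round are independent, so on parallel
    hardware the depth is O(log T); here the rounds are plain Python loops."""
    elems = list(zip(A, x))
    d = 1
    while d < len(elems):
        elems = [elems[t] if t < d else combine(elems[t - d], elems[t])
                 for t in range(len(elems))]
        d *= 2
    # With s^0 = 0, the state s^t equals the offset of the composed prefix map.
    return [b for _, b in elems]

rng = np.random.default_rng(0)
T, n = 8, 3
A = rng.normal(scale=0.5, size=(T, n, n))
x = rng.normal(size=(T, n))
states = scan_states(A, x)
```

The result matches the sequential recursion $\mathbf{s}^t = A^t \mathbf{s}^{t-1} + \mathbf{x}^t$ with $\mathbf{s}^0 = \mathbf{0}$ exactly, only the evaluation order differs.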
Obviously, this parallelization is not possible for usual RSNNs, since the spiking mechanism is by definition nonlinear. Nevertheless, we show below that training with approximate forward propagation can be parallelized even in nonlinear RSNNs.
# 4 Hybrid propagation (HYPR)
# 4.1 Parallelization in forward gradient learning
To illustrate the relationship between forward gradient learning and parallelizable SSMs, we reformulate the computation of eligibility matrices $\mathbf { \bar { e } } _ { i } ^ { t } \in \mathbb { R } ^ { k \times p }$ from Eq. (4) and APGs from Eq. (3) into a linear SSM form:
$$
\begin{array} { c } { { { \bf e } _ { i } ^ { t } = A _ { i } ^ { t } { \bf e } _ { i } ^ { t - 1 } + \delta _ { i } ^ { t } } } \\ { { \tilde { \nabla } { \theta _ { i } ^ { t } } = B _ { i } ^ { t } { \bf e } _ { i } ^ { t } , } } \end{array}
$$
with $A_i^t = \frac{\partial \mathbf{s}_i^t}{\partial \mathbf{s}_i^{t-1}}$, $\delta_i^t = \frac{\partial \mathbf{s}_i^t}{\partial \theta_i}$, and $B_i^t = \frac{d\mathcal{L}^t}{dy_i^t} \frac{\partial y_i^t}{\partial \mathbf{s}_i^t}$. Remarkably, this formulation is linear despite possible non-linearities in the functions $f$ and $g$ (Eq. (1)) of the neuron model at hand. These nonlinearities are implicitly contained in the partial derivatives $\partial \mathbf{s}_i^t / \partial \mathbf{s}_i^{t-1}$ and $\partial y_i^t / \partial \mathbf{s}_i^t$, which are — by definition of gradients — linear first-order approximations. The linear SSM from Eqs. (5) and (6) can either be solved recurrently step-by-step (as in e-prop), or more efficiently for multiple time steps in parallel, as the SSM literature suggests [34]. The latter is the heart of our HYPR algorithm: we overcome the sequentiality bottleneck of e-prop by exploiting the associativity of these linear operations. This applies to both quantities: we can compute, in parallel, the updates to the eligibility matrices over multiple time steps as well as the time step-wise APGs, and combine each of them at the end. The resulting cumulative APGs are equivalent to the cumulative APGs in e-prop, but computed orders of magnitude faster. In Section 5 we report a speedup of $108\times$ for a medium-sized SNN with 1M parameters.
In HYPR, time series are processed in segments which we refer to as subsequences. HYPR traverses these subsequences in their temporal order, computes parameter updates within each subsequence efficiently, and propagates eligibility matrices to the next subsequence. For each subsequence, HYPR can be separated into two stages: a sequential S-stage, where the subsequence is passed through the network sequentially item-by-item, and a parallel P-stage, in which the APGs, as well as the eligibility matrices for further forward propagation to successive subsequences, are computed in parallel over the time dimension of the subsequence.

# Algorithm 1: HYPR

Input: Time series $X$
Input: Network $\mathcal{N}$ with parameters $\theta$
$\mathbf{e}_\theta^0 \gets \mathbf{0}$;
$\tilde{\nabla}\theta^{\mathrm{cum}} \gets \mathbf{0}$;
foreach subsequence $\bar{X}_l$ in time series $X$ do
  $\mathbf{s}^{1\ldots\lambda}, \mathbf{y}^{1\ldots\lambda}, \mathcal{L}^{1\ldots\lambda} \gets$ S-stage($\mathcal{N}$, $\bar{X}_l$);
  $\mathbf{e}_\theta^\lambda, [\tilde{\nabla}\theta]^{1:\lambda} \gets$ P-stage($\mathcal{N}$, $\bar{X}_l$, $\mathbf{s}^{1\ldots\lambda}$, $\mathbf{y}^{1\ldots\lambda}$, $\mathbf{e}_\theta^0$, $\mathcal{L}^{1\ldots\lambda}$);
  $\mathbf{e}_\theta^0 \gets \mathbf{e}_\theta^\lambda$;
  $\tilde{\nabla}\theta^{\mathrm{cum}} \gets \tilde{\nabla}\theta^{\mathrm{cum}} + [\tilde{\nabla}\theta]^{1:\lambda}$;
end
$\theta \gets \mathrm{optimizer}\left(\tilde{\nabla}\theta^{\mathrm{cum}}\right)$;
More formally, let $X = \mathbf{x}^{1\ldots T}$ denote the input sequence. Here and in the following, we use the notation $\mathbf{x}^{1\ldots T}$ to denote a sequence $[\mathbf{x}^1, \ldots, \mathbf{x}^T]$. First, $X$ is split into subsequences $\bar{X}_l = \bar{\mathbf{x}}_l^{1\ldots\lambda}$ of length $\lambda$ for $l \in \{1, 2, \ldots\}$. Hence, the $t$-th item $\bar{\mathbf{x}}_l^t$ in subsequence $\bar{X}_l$ corresponds to item $\mathbf{x}^{\lambda(l-1)+t}$ in $X$. For ease of notation, we explain HYPR with respect to one specific subsequence $\bar{X}_l$ and drop the subsequence index $l$. A brief overview of the algorithm is shown in Algorithm 1; for detailed pseudo-code refer to Algorithm A1 in Appendix C.
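The subsequence indexing can be checked with a few lines of Python; the sequence and $\lambda$ below are toy placeholders:

```python
lam = 4                                             # subsequence length lambda
X = list(range(1, 13))                              # toy series x^1 ... x^12
subseqs = [X[i:i + lam] for i in range(0, len(X), lam)]

def global_index(l, t, lam):
    """1-indexed map from (subsequence l, local step t) to the index in X."""
    return lam * (l - 1) + t

# Item t of subsequence l is x^{lambda(l-1)+t}, e.g. item 3 of subsequence 2 is x^7:
check = X[global_index(2, 3, lam) - 1]
```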
During the S-stage of HYPR, we sequentially compute neuron states $\mathbf { s } _ { i } ^ { 1 \ldots \lambda }$ , outputs $y _ { i } ^ { 1 \ldots \lambda }$ , as well as losses $\mathcal { L } ^ { 1 \ldots \lambda }$ of the network $\mathcal { N }$ over subsequence $\bar { X }$ :
$$
\begin{array} { r } { \mathbf { s } _ { i } ^ { 1 \ldots \lambda } , y _ { i } ^ { 1 \ldots \lambda } , \mathcal { L } ^ { 1 \ldots \lambda } = \mathtt { S - s t a g e } ( \mathcal { N } , \bar { X } ) . } \end{array}
$$
The sequentiality of the S-stage cannot be avoided, since we assume an arbitrary non-parallelizable neuron model, for example a vanilla LIF neuron. However, we significantly reduce the computational burden of this sequential forward pass by postponing the computation of eligibility matrices and APGs to the later, parallel P-stage. The neuron states and outputs obtained during the S-stage are cached for the P-stage. Note that the required memory is independent of the length of the original (potentially infinitely long) sequence, but is rather $\mathcal{O}(\lambda)$, i.e., it scales linearly with the hyperparameter $\lambda$, the subsequence length, which can be chosen with respect to the available memory.
In the P-stage of the HYPR algorithm, eligibility matrices and APGs are computed efficiently in parallel. We can summarize the P-stage of HYPR as calculating the eligibility matrix $\mathbf{e}_i^\lambda$ at the end of the subsequence, as well as the cumulative APG $[\tilde{\nabla}\theta_i]^{1:\lambda} = \sum_{t=1}^{\lambda} \tilde{\nabla}\theta_i^t$ over the subsequence:
$$
\mathbf { e } _ { i } ^ { \lambda } , [ \tilde { \nabla } \theta _ { i } ] ^ { 1 : \lambda } = \operatorname { P - s t a g e } ( \mathbf { e } _ { i } ^ { 0 } , \bar { X } , I _ { i } ^ { 1 . . . \lambda } , y _ { i } ^ { 1 . . . \lambda } , \mathbf { s } _ { i } ^ { 1 . . . \lambda } ) ,
$$
where ${ \bf e } _ { i } ^ { 0 }$ denotes the eligibility matrix from the end of the previous subsequence. In Section 4.2 we describe how HYPR computes $\mathbf { e } _ { i } ^ { \lambda }$ , and in Section 4.3 we describe how it obtains $[ \tilde { \nabla } \theta _ { i } ] ^ { 1 : \lambda }$ .
# 4.2 Efficient calculation of $\mathbf { e } _ { i } ^ { \lambda }$
We can unroll the recurrent definition of $\mathbf { e } _ { i } ^ { t }$ from Eq. (5) to obtain the explicit representation
$$
\mathbf{e}_i^\lambda = \delta_i^\lambda + \underbrace{A_i^\lambda}_{\phi_i^{\lambda:\lambda}} \delta_i^{\lambda-1} + \underbrace{A_i^\lambda A_i^{\lambda-1}}_{\phi_i^{\lambda:\lambda-1}} \delta_i^{\lambda-2} + \ldots + \underbrace{A_i^\lambda \cdots A_i^2}_{\phi_i^{\lambda:2}} \delta_i^1 + \underbrace{A_i^\lambda \cdots A_i^1}_{\phi_i^{\lambda:1}} \mathbf{e}_i^0 .
$$
Partial derivatives for neuron parameters $\delta_i^t = \partial \mathbf{s}_i^t / \partial \theta_i$ are trivial to obtain in parallel, as shown in Appendix D. State-to-state derivatives $A_i^t$ can also be obtained in parallel, as shown in Appendix E. We define each $\phi_i^{\lambda:t}$ as the cumulative state transition matrix from $t$ to $\lambda$, given by
$$
\phi _ { i } ^ { \lambda : t } = \prod _ { k = \lambda } ^ { t } A _ { i } ^ { k } = \prod _ { k = \lambda } ^ { t } { \frac { \partial { \bf s } _ { i } ^ { k } } { \partial { \bf s } _ { i } ^ { k - 1 } } } .
$$
Matrices $\phi _ { i } ^ { \lambda : t }$ can be computed in parallel with time complexity ${ \mathcal { O } } ( \log \lambda )$ and memory complexity $\mathcal O ( \lambda )$ for all time steps using the associative scan algorithm. Finally, we can calculate $\mathbf { e } _ { i } ^ { \lambda }$ as
$$
\mathbf{e}_i^\lambda = \delta_i^\lambda + \sum_{t=1}^{\lambda-1} \phi_i^{\lambda:t+1} \delta_i^t + \phi_i^{\lambda:1} \mathbf{e}_i^0 .
$$
Note that Eq. (11) removes the need to calculate the intermediate eligibility matrices $\mathbf{e}_i^1, \ldots, \mathbf{e}_i^{\lambda-1}$. The direct computation of the final $\mathbf{e}_i^\lambda$ from the sequence $[\delta_i^1, \delta_i^2, \ldots, \delta_i^\lambda]$ and $\mathbf{e}_i^0$ provides one of the major sources of efficiency gain in HYPR. The reasons are two-fold: First, the operation from Eq. (11) can be fully parallelized on a GPU, since it is explicit and all individual terms are independent of each other. Second, in vanilla e-prop, the entire eligibility matrix (which is relatively large) is loaded from memory, updated, and stored in memory at each time step as shown in Eq. (4), resulting in significant memory I/O. In HYPR we exploit that the terms $\delta_i^t$ are outer products of two vectors (see Appendix D). Hence, it is much more efficient to first collect the low-rank factors of these terms for all time steps and only then update the eligibility matrix by the efficient operation from Eq. (11). Intuitively, this can be interpreted as simultaneously forward-projecting all low-rank intermediate eligibility matrix updates and combining them in the final time step via the sum term in Eq. (11), instead of propagating the full eligibility matrix step-by-step.
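The explicit computation of $\mathbf{e}_i^\lambda$ via Eq. (11) can be checked numerically against the recursion of Eq. (5). In the NumPy sketch below, random matrices stand in for the Jacobians, and $\phi$ is built by a plain sequential product for clarity; in HYPR these cumulative products would come out of the $\mathcal{O}(\log\lambda)$ associative scan instead:

```python
import numpy as np

rng = np.random.default_rng(2)
k, p, lam = 2, 3, 6
A = rng.normal(scale=0.5, size=(lam, k, k))     # A_i^t for t = 1..lam (0-based storage)
delta = rng.normal(size=(lam, k, p))            # delta_i^t
e0 = rng.normal(size=(k, p))                    # eligibility carried over from the previous subsequence

def phi(lo, hi):
    """Cumulative state transition matrix phi^{hi:lo} = A^{hi} ... A^{lo} (Eq. 10)."""
    out = np.eye(k)
    for t in range(lo, hi + 1):
        out = A[t - 1] @ out
    return out

# Eq. (11): combine all projected per-step updates and the carried-over e^0 at once.
e_lam = delta[lam - 1] \
        + sum(phi(t + 1, lam) @ delta[t - 1] for t in range(1, lam)) \
        + phi(1, lam) @ e0

# Reference: the recurrent update of Eq. (5), step by step.
e_ref = e0.copy()
for t in range(lam):
    e_ref = A[t] @ e_ref + delta[t]
```

Both routes produce the same $\mathbf{e}_i^\lambda$; only the explicit form exposes the term-wise independence that a GPU can exploit.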
# 4.3 Putting it all together: Efficient calculation of $[ \tilde { \nabla } \theta _ { i } ] ^ { 1 : \lambda }$
Cumulative APGs $[ \tilde { \nabla } \theta _ { i } ] ^ { 1 : \lambda }$ are computed efficiently by using a hybrid of a parallelized backward gradient accumulation through the subsequence and forward propagation of eligibility matrices from the previous subsequence. This way, HYPR exploits the low-dimensional intermediate terms similar as in backpropagation, but still operates in a constant memory complexity regime and can hence be applied to infinitely long sequences. The constant memory complexity stems from the fixed length $\lambda$ of the subsequence of which we compute the APGs.
Let $\mathcal{L} = \sum_{t=1}^{\lambda} \mathcal{L}^t$ denote a summative loss function, where component $\mathcal{L}^t$ can be obtained from the network output $\mathbf{y}^t$ at time $t$. Consider the cumulative APG $[\tilde{\nabla}\theta_i]^{1:\lambda} = \sum_{t=1}^{\lambda} \tilde{\nabla}\theta_i^t$, which can be explicitly written as (see Eq. (6))
$$
[ \tilde { \nabla } \theta _ { i } ] ^ { 1 : \lambda } = \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } \mathbf { e } _ { i } ^ { \lambda } + \frac { d \mathcal { L } ^ { \lambda - 1 } } { d \mathbf { s } _ { i } ^ { \lambda - 1 } } \mathbf { e } _ { i } ^ { \lambda - 1 } + \ldots + \frac { d \mathcal { L } ^ { 1 } } { d \mathbf { s } _ { i } ^ { 1 } } \mathbf { e } _ { i } ^ { 1 } .
$$
Figure 2: Comparison of BPTT and HYPR ($\lambda = 100$ and $\lambda = 1000$): a, b memory consumption and time per batch [s] as functions of input length, with the GPU memory limit indicated; c, d the same quantities as functions of subsequence length $\lambda$.
We can unroll each eligibility matrix $\mathbf{e}_i^t$ according to Eq. (9) and reorder the terms as shown in Eq. (A10) in Appendix F to obtain
$$
[\tilde{\nabla}\theta_i]^{1:\lambda} = \mathbf{q}_i^\lambda \delta_i^\lambda + \ldots + \mathbf{q}_i^3 \delta_i^3 + \mathbf{q}_i^2 \delta_i^2 + \mathbf{q}_i^1 \delta_i^1 + \mathbf{q}_i^0 \mathbf{e}_i^0 = \mathbf{q}_i^0 \mathbf{e}_i^0 + \sum_{t=1}^{\lambda} \mathbf{q}_i^t \delta_i^t ,
$$
where the vectors $\mathbf { q } _ { i } ^ { t }$ are given by
$$
\mathbf { q } _ { i } ^ { t } = \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } A _ { i } ^ { \lambda } A _ { i } ^ { \lambda - 1 } \ldots A _ { i } ^ { t + 1 } + \ldots + \frac { d \mathcal { L } ^ { t + 2 } } { d \mathbf { s } _ { i } ^ { t + 2 } } A _ { i } ^ { t + 2 } A _ { i } ^ { t + 1 } + \frac { d \mathcal { L } ^ { t + 1 } } { d \mathbf { s } _ { i } ^ { t + 1 } } A _ { i } ^ { t + 1 } + \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { i } ^ { t } } .
$$
This equation can again be written recursively in linear SSM form
$$
\mathbf { q } _ { i } ^ { t } = \mathbf { q } _ { i } ^ { t + 1 } A _ { i } ^ { t + 1 } + \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { i } ^ { t } } ,
$$
which can, similar to how we solve the linear SSM form from Eq. (5), be parallelized (see Appendix G for details). The key observation here is that we can accelerate the calculation of the coefficients $\mathbf{q}_i^t$ for all time steps $t \in \{1, \dots, \lambda\}$ using the parallel associative scan algorithm. HYPR computes cumulative APGs via Eq. (13) by combining vectors $\mathbf{q}_i^{0\ldots\lambda}$, gradients $\delta_i^{1\ldots\lambda}$, and the previous eligibility matrix $\mathbf{e}_i^0$ (see Appendix H for an illustration). The resulting APGs and parameter updates are equivalent to those from the fully-forward variant ($\lambda = 1$), but are computed more efficiently (see Sec. 5.1), making the training outcome independent of $\lambda$. Looking at Eq. (14), it may seem that the APGs $\tilde{\nabla}\theta_i^t$ depend on future losses as in BPTT, which would contradict online learning. This is not the case, since HYPR is mathematically equivalent to fully-online e-prop (see Appendix F). Our presentation has so far been based on a single recurrent layer network; see Appendix I for the multi-layer case.
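The equivalence between the hybrid computation (Eqs. (13)-(15)) and fully-online e-prop can be verified directly on toy data. In this NumPy sketch, random matrices once more stand in for the per-neuron Jacobians, and the backward recursion for $\mathbf{q}^t$ is written as a sequential loop for clarity (HYPR would evaluate it with an associative scan):

```python
import numpy as np

rng = np.random.default_rng(3)
k, p, lam = 2, 3, 6
A = rng.normal(scale=0.5, size=(lam, k, k))     # A_i^t (0-based storage: A[t-1] holds A^t)
delta = rng.normal(size=(lam, k, p))            # delta_i^t
dL = rng.normal(size=(lam, k))                  # dL^t / ds_i^t, one row vector per step
e0 = rng.normal(size=(k, p))                    # eligibility from the previous subsequence

# Reference: fully-online e-prop (Eqs. (4) and (12)).
e = e0.copy()
apg_ref = np.zeros(p)
for t in range(lam):
    e = A[t] @ e + delta[t]
    apg_ref += dL[t] @ e

# HYPR: backward recursion for q^t (Eq. 15), then one combination step (Eq. 13).
q = np.zeros((lam + 1, k))
q[lam] = dL[lam - 1]                            # q^lam = dL^lam / ds^lam
for t in range(lam - 1, 0, -1):
    q[t] = q[t + 1] @ A[t] + dL[t - 1]          # q^t = q^{t+1} A^{t+1} + dL^t/ds^t
q[0] = q[1] @ A[0]                              # coefficient of the carried-over e^0

apg_hypr = q[0] @ e0 + sum(q[t] @ delta[t - 1] for t in range(1, lam + 1))
```

The two cumulative APGs agree to machine precision, which is the mathematical equivalence the text refers to; the gain of the hybrid route is that the $\mathbf{q}^t$ recursion and the final combination parallelize, whereas the reference loop does not.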
# 5 Experiments
# 5.1 Comparing time- and memory-requirements of HYPR and BPTT
As an initial experiment, we compared HYPR and BPTT in terms of memory and runtime performance on long sequences. As a network, we used an RSNN with a single hidden layer consisting of 1024 balanced resonate-and-fire (BRF) [17] neurons, followed by an output layer of leaky integrator (LI) neurons ($\approx$1M parameters). We tested the network on a toy task, to which we refer as the cue task (see Appendix K for details). The network received inputs from 15 spiking input neurons. In the first 20 time steps, either neurons 1 to 5 (input class A) or neurons 6 to 10 (input class B) were active, while all other neurons remained silent. This cue was followed by a (potentially very long) delay period during which all input neurons remained silent. At the end of the sequence, input neurons 11 to 15 became active, indicating a recall period. The network output should then indicate during the recall period whether the initial cue belonged to class A or B. This task tests the long-term credit assignment capabilities of a learning algorithm: the task can only be learned if the information provided by the initial cue is successfully propagated to the end of the sequence. An advantage of this task is the possibility to directly control the input sequence length and, with it, the length of the long-term dependencies in the task. In Fig. 2 we compare the memory consumption and wall-clock training time of HYPR and BPTT on this task. For all input lengths shown in Fig. 2, the trained networks were able to solve the task (training accuracy $100\%$), confirming successful temporal credit assignment with both algorithms, BPTT and HYPR. For sequence lengths above approximately 3,400, BPTT required more memory than available on the tested GPU, whereas HYPR consumes constant memory regardless of the sequence length. With parallelization we achieved up to a $108\times$ speedup over sequential e-prop (see Fig. 2d).
Since the memory consumption of HYPR does not scale with input sequence length (given constant $\lambda$ ), it can work with arbitrary-length sequences on a single GPU without exceeding its memory.
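For illustration, the structure of such a cue-task input sequence can be sketched as follows. This is a minimal NumPy generator of our own; the function name, the spike probabilities, and the length of the recall period are illustrative assumptions, not the exact protocol of Appendix K:

```python
import numpy as np

def make_cue_trial(seq_len, cue_class, rng, cue_steps=20, recall_steps=20):
    """Generate one cue-task input sequence over 15 spiking input neurons.

    Neurons 1-5 (indices 0-4) spike during the cue for class A, neurons 6-10
    (indices 5-9) for class B, and neurons 11-15 (indices 10-14) spike during
    the recall period at the sequence end. Spike probabilities and the recall
    length are illustrative assumptions.
    """
    x = np.zeros((seq_len, 15), dtype=np.float32)
    cue = slice(0, 5) if cue_class == 0 else slice(5, 10)
    # Cue period: only the class-specific neuron group is active.
    x[:cue_steps, cue] = rng.random((cue_steps, 5)) < 0.5
    # Delay period: all input neurons remain silent (zeros).
    # Recall period: neurons 11-15 signal that the network should answer.
    x[-recall_steps:, 10:15] = rng.random((recall_steps, 5)) < 0.5
    return x, cue_class

rng = np.random.default_rng(0)
x, y = make_cue_trial(seq_len=3400, cue_class=1, rng=rng)
```

Varying `seq_len` directly controls the length of the delay period, and with it the span over which the cue information must be retained.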
Table 1: Comparison of the BPTT and HYPR learning algorithms on benchmark datasets. Here and in the following tables, results are reported on the test set, and statistics (mean $\pm$ std. dev.) were computed over 5 random seeds, where the parameter initialization and the sampling of the validation set for model selection are randomized each time. BPTT entries include the original accuracies reported by the respective authors in parentheses, if available. The number of parameters depends on the model and dataset but was equivalent between HYPR and BPTT, see Appendix M.
# 5.2 Narrowing the performance gap between approximate forward propagation and BPTT
In HYPR, APGs are computed based on approximate gradients of the parameters. This approximation might affect task performance compared to BPTT. To investigate this, we compared both algorithms on benchmark datasets commonly used in the SNN and RNN research communities: Spiking Heidelberg Digits (SHD) [36], an ECG dataset [37], and sequential MNIST (sMNIST) [29, 38]. We compared the accuracy of BPTT and HYPR on networks based on three different neuron models: two oscillatory models, BRF [17] and SE-adLIF [15], as well as ALIF [35], a non-oscillatory neuron model with threshold adaptation; see Appendix L for detailed descriptions of the neuron models. The latter model was initially used for testing the capabilities of the approximate forward propagation algorithm e-prop [12]. The results are shown in Tab. 1. Due to the approximate nature of HYPR, we do not expect it to outperform BPTT on any benchmark. However, we observe that across this wide variety of tasks, HYPR is mostly on par with BPTT for the two oscillatory neuron models, BRF and SE-adLIF. This result is surprising, since in HYPR the gradient pathway through the recurrent synaptic connections is disregarded. We attribute the good performance of HYPR to the choice of neuron model: The spiking neuron models that currently achieve the best results on the benchmark datasets at hand are all of oscillatory nature, a feature that has recently been studied extensively [11, 15–17]. These oscillations support the propagation of time-sensitive information through their state-to-state transitions, a feature that might naturally be well exploited by the forward gradient learning of HYPR. We also found that networks trained with HYPR benefit from additional layers, see Appendix J.
Note that we only considered RSNNs that do not violate any fundamental constraints of neuromorphic hardware: Models including features like layer normalization, floating-point-value-based communication between neurons (for example through skip connections), temporal convolutions, or attention were intentionally excluded.
Encouraged by these results, we explored the limitations of HYPR and tested it on challenging tasks with long-range dependencies from the long-range arena benchmark [39]. We tested sequential CIFAR (sCIFAR) and the easier variant of Pathfinder, to which we refer as Pathfinder-E.
In sCIFAR, the pixels of images from the CIFAR-10 dataset are presented sequentially to the network, and the network has to classify the image category (sequence length 1024). In Pathfinder-E, an image of line drawings is presented sequentially pixel-by-pixel (sequence length 1024), and the network has to decide whether a starting point is connected by a line to an end point. In Tab. 2, we report the first successful application of RSNNs with an approximate forward-learning algorithm on sCIFAR and Pathfinder-E. Nevertheless, we observe a larger performance gap between BPTT and HYPR on these challenging tasks, which remains to be explored further.
Table 2: Classification accuracy (mean $\pm$ std. dev. over 5 runs) of HYPR-trained RSNNs with BRF neurons on two tasks of the long-range arena benchmark [39].
# 5.3 Influence of Recurrent Connections
In HYPR, the gradient path through the recurrent synaptic connections is ignored. This raises the question whether networks trained with HYPR can still learn to utilize these recurrent synaptic pathways. To answer this question, we compared the performance of our networks to that of networks without recurrent connections, see Tab. 3. We found that performance drops significantly when recurrent connections are removed, suggesting that HYPR does indeed utilize recurrent connectivity despite ignoring its gradient pathways. Interestingly, the performance drop is larger for both BPTT and HYPR on the more demanding sCIFAR dataset, indicating the importance of recurrent interactions for harder tasks.
Table 3: Comparison of performance on SHD, sMNIST and sCIFAR on networks of BRF neurons [17] trained with and without recurrent connections using BPTT and HYPR.
# 6 Discussion
This work introduces HYPR, a scalable and efficient segment-wise online training algorithm applicable to almost arbitrary RSNNs and RNNs. Through parallelization over segments, we achieve a significant training speedup, up to $108\times$ in our experiments, compared to the mathematically equivalent fully-online algorithm e-prop [12]. In contrast to e-prop, where weight updates are calculated in a time-step-wise online manner, the segment-wise parallelization in HYPR effectively utilizes GPU resources while still operating in a constant memory regime, without sacrificing the infinite training context of e-prop.
We demonstrated that HYPR excels if applied to oscillatory spiking neuron models, a new generation of models that has been recently demonstrated to be powerful on various benchmarks [15–17]. We first showed how HYPR can overcome the memory limitations of BPTT at a comparable scaling of the runtime (Fig. 2a,b). Second, we demonstrated that the synergy between HYPR and oscillatory neuron models significantly narrows the gap between BPTT and approximate forward gradient learning as compared to previously proposed neuron models with threshold adaptation [12]. Further, we demonstrated that, despite ignoring gradient pathways through recurrent connections, HYPR utilizes these connections for more accurate classification performance (Tab. 3).
Limitations and future work HYPR is based on an approximate gradient computation that neglects gradient paths through recurrent connections. As such, its accuracies should be below those achievable by BPTT. We found that, despite working surprisingly well on the challenging datasets sCIFAR and Pathfinder-E, a performance gap between BPTT and HYPR still persists (Tab. 2). Future work could investigate methods that combine forward-propagated eligibilities with truncated backpropagated gradients through recurrent connections.
Since HYPR is a relatively complex learning algorithm, it is potentially hard to implement on neuromorphic hardware, where pure forward propagation may be preferable. In any case, we believe that the achievable speedups on standard GPU-based architectures can boost research on efficient learning algorithms for RSNNs, which has been hindered by the sequentiality bottleneck of forward propagation algorithms so far.
# Appendix
# A Computational overhead of forward-propagating gradients
A major disadvantage of forward-propagating gradients (e.g., as in e-prop) is a significant overhead in multiplication operations compared to backpropagation. As a guiding example, consider the following computational chain: $\mathbf{s}^0 = f(\mathbf{w})$, $\mathbf{s}^{t+1} = g(\mathbf{s}^t)$, $\mathcal{L} = h(\mathbf{s}^{\ell})$, with some parameter $\mathbf{w} \in \mathbb{R}^p$, intermediate values $\mathbf{s}^t \in \mathbb{R}^k$, and scalar $\mathcal{L} \in \mathbb{R}$. Computing the gradient $\nabla_{\mathbf{w}} \mathcal{L}$ via the chain rule results in the following chain of multiplications:
$$
\nabla _ { \mathbf { w } } \mathcal { L } = \underbrace { \nabla _ { \mathbf { s } ^ { \ell } } \mathcal { L } } _ { \mathbb { R } ^ { k } } \underbrace { \frac { \partial \mathbf { s } ^ { \ell } } { \partial \mathbf { s } ^ { \ell - 1 } } } _ { \mathbb { R } ^ { k \times k } } \underbrace { \frac { \partial \mathbf { s } ^ { \ell - 1 } } { \partial \mathbf { s } ^ { \ell - 2 } } } _ { \mathbb { R } ^ { k \times k } } \cdots \underbrace { \frac { \partial \mathbf { s } ^ { 2 } } { \partial \mathbf { s } ^ { 1 } } } _ { \mathbb { R } ^ { k \times k } } \underbrace { \frac { \partial \mathbf { s } ^ { 1 } } { \partial \mathbf { s } ^ { 0 } } } _ { \mathbb { R } ^ { k \times k } } \underbrace { \frac { \partial \mathbf { s } ^ { 0 } } { \partial \mathbf { w } } } _ { \mathbb { R } ^ { k \times p } } .
$$
One could resolve this chain in a forward manner, i.e.
$$
\nabla _ { \mathbf { w } } \mathcal { L } = \nabla _ { \mathbf { s } ^ { \ell } } \mathcal { L } \frac { \partial \mathbf { s } ^ { \ell } } { \partial \mathbf { s } ^ { \ell - 1 } } \Big ( \cdot \cdot \cdot \Big ( \frac { \partial \mathbf { s } ^ { 3 } } { \partial \mathbf { s } ^ { 2 } } \underbrace { \Big ( \frac { \partial \mathbf { s } ^ { 2 } } { \partial \mathbf { s } ^ { 1 } } \underbrace { \Big ( \frac { \partial \mathbf { s } ^ { 1 } } { \partial \mathbf { s } ^ { 0 } } \frac { \partial \mathbf { s } ^ { 0 } } { \partial \mathbf { w } } \Big ) } _ { \mathbb { R } ^ { k \times p } } \Big ) } _ { \mathbb { R } ^ { k \times p } } \Big ) \Big ) \cdot \cdot \cdot \Big ) ,
$$
where terms in brackets are evaluated first, or in a backward manner
$$
\nabla _ { \mathbf { w } } \mathcal { L } = \Big ( \cdots \underbrace { \Big ( \nabla _ { \mathbf { s } ^ { \ell } } \mathcal { L } \frac { \partial \mathbf { s } ^ { \ell } } { \partial \mathbf { s } ^ { \ell - 1 } } \Big ) } _ { \mathbb { R } ^ { k } } \frac { \partial \mathbf { s } ^ { \ell - 1 } } { \partial \mathbf { s } ^ { \ell - 2 } } \cdots \frac { \partial \mathbf { s } ^ { 1 } } { \partial \mathbf { s } ^ { 0 } } \Big ) \frac { \partial \mathbf { s } ^ { 0 } } { \partial \mathbf { w } } .
$$
Although the result is equivalent, the intermediate terms in the forward manner are significantly larger and require significantly more multiplication operations and memory than in the backward manner. While the forward mode requires $\ell k^2 p + kp$ multiplications, the backward mode only requires $\ell k^2 + kp$ multiplications. Usually, $p$ is the largest of the three variables $(k, \ell, p)$ and is in the order of millions. Hence, training algorithms involving forward propagation of gradients, for example RTRL [13] or e-prop [12], require significantly higher memory I/O compared to BPTT, since they forward-propagate and materialize large sensitivity/eligibility matrices in each time step.
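These multiplication counts can be verified numerically. The following sketch (our own illustration; variable names are not from the paper) evaluates the same Jacobian chain in forward and backward order and counts the scalar multiplications:

```python
import numpy as np

rng = np.random.default_rng(1)
k, p, L = 4, 50, 6                      # state dim, parameter dim, chain length

grad_L = rng.standard_normal((1, k))    # gradient of the loss w.r.t. s^ℓ, as a row vector
J = [rng.standard_normal((k, k)) for _ in range(L)]   # Jacobians ds^t/ds^{t-1}
J0 = rng.standard_normal((k, p))        # Jacobian ds^0/dw

# Forward manner: drag the k×p sensitivity matrix through the whole chain.
S, fwd_mults = J0.copy(), 0
for A in J:
    S = A @ S                           # (k×k)(k×p): k*k*p multiplications
    fwd_mults += k * k * p
grad_fwd = grad_L @ S
fwd_mults += k * p

# Backward manner: drag the 1×k adjoint row vector through the chain instead.
v, bwd_mults = grad_L.copy(), 0
for A in reversed(J):
    v = v @ A                           # (1×k)(k×k): k*k multiplications
    bwd_mults += k * k
grad_bwd = v @ J0
bwd_mults += k * p

assert np.allclose(grad_fwd, grad_bwd)      # both orders give the same gradient
assert fwd_mults == L * k**2 * p + k * p    # ℓk²p + kp
assert bwd_mults == L * k**2 + k * p        # ℓk² + kp
```

With these toy sizes the forward order already costs 5,000 multiplications versus 296 for the backward order, and the gap grows linearly in $p$.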
# B Direct and indirect gradient pathways
In BPTT, the exact total derivatives $\frac { d \mathbf { s } _ { j } ^ { t } } { d \theta _ { i } }$ are used to compute the parameter update. Recursively written, they can be computed by
$$
\frac { d \mathbf { s } _ { j } ^ { t } } { d \theta _ { i } } = \frac { \partial \mathbf { s } _ { j } ^ { t } } { \partial \theta _ { i } } + \underbrace { \frac { \partial \mathbf { s } _ { j } ^ { t } } { \partial \mathbf { s } _ { j } ^ { t - 1 } } \frac { d \mathbf { s } _ { j } ^ { t - 1 } } { d \theta _ { i } } } _ { \text { direct } } + \underbrace { \sum _ { k \neq j } \frac { \partial \mathbf { s } _ { j } ^ { t } } { \partial \mathbf { y } _ { k } ^ { t - 1 } } \frac { \partial \mathbf { y } _ { k } ^ { t - 1 } } { \partial \mathbf { s } _ { k } ^ { t - 1 } } \frac { d \mathbf { s } _ { k } ^ { t - 1 } } { d \theta _ { i } } } _ { \text { indirect } } , \nonumber
$$
which consists of a direct gradient pathway (neuron-internal recurrence) that does not "leave" the neuron, and an indirect pathway (intra-layer recurrence) through other neurons, see also Fig. 1. In e-prop, $d \mathbf { s } _ { j } ^ { t } / d { \boldsymbol { \theta } } _ { i }$ is replaced by a local approximation $[ d \mathbf { s } _ { j } ^ { t } / d \theta _ { i } ] _ { \mathrm { l o c a l } }$ with $[ d \mathbf { s } _ { j } ^ { t } / d \theta _ { i } ] _ { \mathrm { l o c a l } } \stackrel { \mathrm { d e f } } { = } \mathbf { 0 } \ \forall j \neq i$, which cancels all indirect components from Eq. (A4). The non-zero terms $[ d \mathbf { s } _ { i } ^ { t } / d \theta _ { i } ] _ { \mathrm { l o c a l } }$ are defined recursively by
$$
\left[ { \frac { d { \bf s } _ { i } ^ { t } } { d \theta _ { i } } } \right] _ { \mathrm { l o c a l } } = { \frac { \partial { \bf s } _ { i } ^ { t } } { \partial \theta _ { i } } } + { \frac { \partial { \bf s } _ { i } ^ { t } } { \partial { \bf s } _ { i } ^ { t - 1 } } } \left[ { \frac { d { \bf s } _ { i } ^ { t - 1 } } { d \theta _ { i } } } \right] _ { \mathrm { l o c a l } } .
$$
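As a minimal illustration of this recursion, consider a single leaky neuron $s^t = \alpha s^{t-1} + w x^t$, a toy example of our own rather than a model from the paper. Because a single neuron has no indirect pathway, the local eligibility recursion reproduces the exact derivative:

```python
import numpy as np

alpha, w = 0.9, 0.3
x = np.array([1.0, 0.0, 2.0, 0.5, 1.5])

# Local eligibility recursion: e^t = ds^t/dw + (ds^t/ds^{t-1}) e^{t-1},
# with ds^t/dw = x^t and ds^t/ds^{t-1} = alpha for this neuron.
e = 0.0
elig = []
for xt in x:
    e = xt + alpha * e
    elig.append(e)

# Without cross-neuron recurrence there is no indirect pathway, so the
# local approximation equals the exact derivative ds^t/dw = sum_r alpha^{t-r} x^r.
t = len(x)
exact = sum(alpha ** (t - 1 - r) * x[r] for r in range(t))
assert np.isclose(elig[-1], exact)
```

With recurrent cross-neuron connections the indirect terms of Eq. (A4) would be dropped by this recursion, which is exactly the approximation e-prop (and HYPR) makes.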
# C Pseudocode of the HYPR algorithm
In Algorithm A1 we show pseudocode for the HYPR algorithm. The algorithm is applied to batches of data; we omit the batch dimension in the pseudocode for simplicity. We denote the parallel computation of Jacobians as $\mathrm{jac}$, where the first argument corresponds to the function to be differentiated, and the associative scan function as $\mathrm{assoc\_scan}$, where the first argument defines the binary associative operator used by the scan algorithm.
# Algorithm A1: single-layer HYPR
Input: $X ^ { 1 : T } \in \mathbb { R } ^ { T \times d }$
Input: Network $\mathcal { N }$ with parameters $\theta = \{ W _ { \mathrm { f f } } \in \mathbb { R } ^ { d \times m } , \ W _ { \mathrm { r e c } } \in \mathbb { R } ^ { m \times m } , \ b \in \mathbb { R } ^ { m } \}$
Input: Subsequence length $\lambda$
Output: Updated parameters $\theta$
$\mathbf{e}^0 \gets \mathbf{0}$ // $\in \mathbb{R}^{k \times d_\theta}$ with no. of parameters $d_\theta$ in $\mathcal{N}$
$\tilde{\nabla}\theta^{\mathrm{cum}} \gets \mathbf{0}$
for $\ell \gets 1$ to $T/\lambda$ do
  $t_0 \gets (\ell - 1)\lambda$
  $\bar{X} \gets X^{(t_0+1) \ldots (t_0+\lambda)}$ // get next subsequence $\bar{X} \in \mathbb{R}^{\lambda \times d}$
  $I_{\mathrm{ff}} \gets \bar{X} W_{\mathrm{ff}} + b$ // $I_{\mathrm{ff}} \in \mathbb{R}^{\lambda \times m}$ (parallel)
  /\* S-stage (sequential) \*/
  Initialize $\mathbf{s}^0 \gets \mathbf{0}$, $\mathbf{y}^0 \gets \mathbf{0}$
  for $t \gets 1$ to $\lambda$ do
    $I_{\mathrm{rec}}^t \gets W_{\mathrm{rec}} \mathbf{y}^{t-1}$ // $\in \mathbb{R}^m$
    $I^t \gets I_{\mathrm{ff}}^t + I_{\mathrm{rec}}^t$
    $\mathbf{s}^t \gets f(\mathbf{s}^{t-1}, I^t)$
    $\mathbf{y}^t \gets g(\mathbf{s}^t)$
    $\mathcal{L}^t \gets \mathrm{loss}(\mathbf{y}^t)$ // supervised or unsupervised loss function
  /\* P-stage (parallel) \*/
  $[\frac{\partial \mathbf{s}^1}{\partial \mathbf{s}^0}, \ldots, \frac{\partial \mathbf{s}^\lambda}{\partial \mathbf{s}^{\lambda-1}}], [\frac{\partial \mathbf{s}^1}{\partial I^1}, \ldots, \frac{\partial \mathbf{s}^\lambda}{\partial I^\lambda}] \gets \mathrm{jac}(f, \mathbf{s}^{0:\lambda-1}, I^{1:\lambda})$ // $\frac{\partial \mathbf{s}^t}{\partial \mathbf{s}^{t-1}} \in \mathbb{R}^{m \times k \times k}$, $\frac{\partial \mathbf{s}^t}{\partial I^t} \in \mathbb{R}^{m \times k}$, see Appendix E
  $[\frac{d\mathcal{L}^1}{d\mathbf{s}^1}, \ldots, \frac{d\mathcal{L}^\lambda}{d\mathbf{s}^\lambda}] \gets \mathrm{jac}(\{g, \mathrm{loss}\}, \mathbf{s}^{1:\lambda}, \mathbf{y}^{1:\lambda})$ // $\frac{d\mathcal{L}^t}{d\mathbf{s}^t} \in \mathbb{R}^{m \times k}$, see Appendix E
  $[\phi^{1:1}, \ldots, \phi^{\lambda:1}] \gets \mathrm{assoc\_scan}(\times, [\frac{\partial \mathbf{s}^1}{\partial \mathbf{s}^0}, \ldots, \frac{\partial \mathbf{s}^\lambda}{\partial \mathbf{s}^{\lambda-1}}])$ // $\phi^{t:1} \in \mathbb{R}^{m \times k \times k}$, see text
  $\mathbf{q}^{0:\lambda} \gets \mathrm{assoc\_scan}(\bullet, [(\frac{\partial \mathbf{s}^{t+1}}{\partial \mathbf{s}^t}, \frac{d\mathcal{L}^t}{d\mathbf{s}^t})]^{t = \lambda \ldots 0})$ // reverse scan, see Appendix G
  $\mathbf{e}^\lambda \gets \phi^{\lambda:1} \mathbf{e}^0 + \sum_{t=1}^{\lambda-1} \phi^{\lambda:t+1} \delta^t + \delta^\lambda$ // with parameter gradients $\delta^t$, decomposed as described in Appendix D
  $[\tilde{\nabla}\theta]^{1:\lambda} \gets \mathbf{q}^0 \mathbf{e}^0 + \sum_{t=1}^{\lambda} \mathbf{q}^t \delta^t$
  $\mathbf{e}^0 \gets \mathbf{e}^\lambda$
  $\tilde{\nabla}\theta^{\mathrm{cum}} \gets \tilde{\nabla}\theta^{\mathrm{cum}} + [\tilde{\nabla}\theta]^{1:\lambda}$
$\theta \gets \mathrm{optimizer}(\tilde{\nabla}\theta^{\mathrm{cum}})$
return $\theta$
# D Parameter gradients $\delta _ { i } ^ { t }$
We intentionally separated the calculation of the input $I_i^t$ given by Eq. (2) from the state transition $f$ in Eq. (1) to emphasize that the partial derivatives $\delta^t$ can be factorized into low-dimensional factors, which significantly reduces the memory consumption and memory I/O of HYPR. By using these low-dimensional factors, the full parameter gradients $\delta^t$ are never materialized, enhancing the performance and memory-efficiency of HYPR: For example, for the feed-forward weight matrix $W_{\mathrm{ff}} \in \mathbb{R}^{m \times d}$, the parameter gradient $\delta_{W_{\mathrm{ff}}}^t$ is given by the full Jacobian tensor $\frac{\partial S^t}{\partial W_{\mathrm{ff}}} \in \mathbb{R}^{m \times k \times m \times d}$ with respect to the matrix $S^t \in \mathbb{R}^{m \times k}$ of all neuron states in time step $t$ and requires $\mathcal{O}(m^2 k d)$ memory. More specifically, its components are given by $\left( \frac{\partial S^t}{\partial W_{\mathrm{ff}}} \right)_{ij\ell m} = \frac{\partial S_{ij}^t}{\partial W_{\mathrm{ff},\ell m}}$. However, we found that this memory requirement can be drastically reduced by instead computing and caching the smaller matrices $D^t \in \mathbb{R}^{m \times k}$ with $D_{ij}^t = \partial S_{ij}^t / \partial I_{\mathrm{ff},i}^t$ together with the vectors $\bar{\mathbf{x}}^t \in \mathbb{R}^d$ from subsequence $\bar{X}$. Note that $\partial I_{\mathrm{ff},i}^t / \partial W_{\mathrm{ff},i} = \bar{\mathbf{x}}^t$ for all neurons $i$, since all neurons receive the same input.
Together, $D^t$ and $\bar{\mathbf{x}}^t$ consume only $\mathcal{O}(mk + d)$ memory and allow the operations from Eqs. (11) and (13) to be performed efficiently using Einstein summations, without ever materializing the large parameter gradient $\delta_{W_{\mathrm{ff}}}^t$.
Hence, we can express the components of $\delta ^ { t }$ that correspond to the parameters in the feed-forward weight matrix $W _ { \mathrm { f f } }$ in Eqs. (11) and (13) as
$$
( \delta _ { W _ { \mathrm { f f } } } ^ { t } ) _ { i j \ell m } = \left( \frac { \partial S ^ { t } } { \partial W _ { \mathrm { f f } } } \right) _ { i j \ell m } = \frac { \partial S _ { i j } ^ { t } } { \partial W _ { \mathrm { f f } , \ell m } } = \frac { \partial S _ { i j } ^ { t } } { \partial I _ { \mathrm { f f } , \ell } ^ { t } } \frac { \partial I _ { \mathrm { f f } , \ell } ^ { t } } { \partial W _ { \mathrm { f f } , \ell m } } = \left\{ \begin{array} { l l } { D _ { i j } ^ { t } \bar { \mathbf { x } } _ { m } ^ { t } } & { \mathrm { i f } \quad \ell = i } \\ { 0 } & { \mathrm { i f } \quad \ell \neq i } \end{array} \right.
$$
The same principle applies to $W _ { \mathrm { r e c } }$ , where $( \delta _ { W _ { \mathrm { r e c } } } ^ { t } ) _ { i j \ell m } = D _ { i j } ^ { t } \mathbf { y } _ { m } ^ { t }$ if $\ell = i$ , else 0. All matrices $D ^ { t }$ can be obtained in parallel as explained in Appendix E.
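The effect of this factorization can be sketched in NumPy (our own illustration; `q` stands for a generic per-neuron learning signal such as $\mathbf{q}_i^t$, and all shapes are toy values). The contraction against the full Jacobian tensor and the low-rank route give the same result, but the latter never builds the $m \times k \times m \times d$ tensor:

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, d = 8, 3, 5                   # neurons, states per neuron, input dim

q = rng.standard_normal((m, k))     # generic learning signal per neuron state
D = rng.standard_normal((m, k))     # D^t_{ij} = dS^t_{ij}/dI^t_{ff,i}
x = rng.standard_normal(d)          # input vector x̄^t

# Naive route: materialize the full Jacobian delta in R^{m×k×m×d}, then contract.
delta = np.zeros((m, k, m, d))
for i in range(m):
    delta[i, :, i, :] = np.outer(D[i], x)   # nonzero only where l = i
grad_naive = np.einsum('ij,ijlm->lm', q, delta)

# Low-rank route: O(mk + d) memory for the factors; delta is never built.
grad_lowrank = np.einsum('ij,ij->i', q, D)[:, None] * x[None, :]

assert np.allclose(grad_naive, grad_lowrank)
```

The memory saving is the ratio $m^2kd$ versus $mk + d$, which is substantial for realistic layer sizes.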
# E Efficient calculation of partial gradients
All partial gradients involved in HYPR can be calculated efficiently in parallel. We first compute Jacobian matrices $J _ { f , i } ^ { t }$ and $J _ { g , i } ^ { t }$ of functions $f$ and $g$ from Eq. (1). The Jacobians are given by
$$
\begin{array} { r l } & { J _ { f , i } ^ { t } = \left[ \begin{array} { c c c c c c } { \frac { \partial s _ { i , 1 } ^ { t } } { \partial s _ { i , 1 } ^ { t - 1 } } } & { \ldots } & { \frac { \partial s _ { i , 1 } ^ { t } } { \partial s _ { i , k } ^ { t - 1 } } } & { \frac { \partial s _ { i , 1 } ^ { t } } { \partial I _ { i } ^ { t } } } \\ { \vdots } & & & { \vdots } \\ { \frac { \partial s _ { i , k } ^ { t } } { \partial s _ { i , 1 } ^ { t - 1 } } } & { \ldots } & { \frac { \partial s _ { i , k } ^ { t } } { \partial s _ { i , k } ^ { t - 1 } } } & { \frac { \partial s _ { i , k } ^ { t } } { \partial I _ { i } ^ { t } } } \end{array} \right] \in \mathbb { R } ^ { k \times ( k + 1 ) } , \qquad J _ { g , i } ^ { t } = \left[ \begin{array} { c } { \frac { \partial y _ { i } ^ { t } } { \partial s _ { i , 1 } ^ { t } } } \\ { \vdots } \\ { \frac { \partial y _ { i } ^ { t } } { \partial s _ { i , k } ^ { t } } } \end{array} \right] \in \mathbb { R } ^ { k \times 1 } , } \end{array}
$$
where we recall that $k$ is the neuron state dimension, $s_{i,j}^t$ refers to the $j$-th state of neuron $i$ at time step $t$, and $I_i^t$ to the input to neuron $i$ at time step $t$. Given the cached latent states and outputs obtained in the S-stage, these Jacobians can be computed for all time steps of the subsequence in parallel. This operation has time complexity $\mathcal{O}(1)$, given sufficiently many concurrent processors. The Jacobian matrices can be obtained using auto-differentiation frameworks, without requiring a model-dependent explicit implementation, referred to as $\mathrm{jac}$ in Algorithm A1. This allows HYPR to be trivially adapted to a broad variety of neuron models without manual implementation work. Commonly, vector-Jacobian products (VJPs) are preferred over explicitly calculating Jacobians, since the Jacobians are usually large (see also Appendix D). In HYPR, however, we drastically reduced the dimensionality of the Jacobians $J_{f,i}^t$ and $J_{g,i}^t$ of the functions $f$ and $g$ by separating the computation of the scalar neuron input $I_i^t$ from the state-to-state transition function $f$. Hence, the Jacobians are computed with respect to the scalar input $I_i^t$ instead of the high-dimensional parameters $\theta_i$, and are therefore low-dimensional. This separation enables the low-rank decomposed representation of the parameter gradients $\delta^t$ described in Appendix D.
We can slice these Jacobians into the partial derivatives $\frac { \partial \mathbf { s } _ { i } ^ { t } } { \partial \mathbf { s } _ { i } ^ { t - 1 } } \in \mathbb { R } ^ { k \times k }$, $\frac { \partial \mathbf { s } _ { i } ^ { t } } { \partial I _ { i } ^ { t } } \in \mathbb { R } ^ { k \times 1 }$, and $\frac { \partial y _ { i } ^ { t } } { \partial \mathbf { s } _ { i } ^ { t } } \in \mathbb { R } ^ { k \times 1 }$.
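A minimal sketch of this S-stage/P-stage split for a toy $k=2$ neuron model is given below. This is our own example: the analytic `jac_f_batched` stands in for the autodiff-based $\mathrm{jac}$ of Algorithm A1, and the neuron dynamics are invented for illustration:

```python
import numpy as np

alpha, beta = 0.9, 0.5

def f(s, I):
    """Toy k=2 neuron state transition s = (v, a); the v**2 coupling makes
    the Jacobian state-dependent (dynamics invented for illustration)."""
    v, a = s
    return np.array([alpha * v + I - a, beta * a + v ** 2])

def jac_f_batched(s_prev, I):
    """Analytic J_f^t in R^{k×(k+1)} for every time step at once."""
    T = len(I)
    J = np.zeros((T, 2, 3))
    J[:, 0, 0] = alpha                  # dv^t/dv^{t-1}
    J[:, 0, 1] = -1.0                   # dv^t/da^{t-1}
    J[:, 0, 2] = 1.0                    # dv^t/dI^t
    J[:, 1, 0] = 2.0 * s_prev[:, 0]     # da^t/dv^{t-1}
    J[:, 1, 1] = beta                   # da^t/da^{t-1}
    return J

# S-stage: run the recurrence sequentially and cache all states.
I = np.array([0.5, -0.2, 1.0, 0.3])
states = [np.zeros(2)]
for It in I:
    states.append(f(states[-1], It))
s_prev = np.stack(states[:-1])

# P-stage: Jacobians for all time steps at once, then sliced into blocks.
J = jac_f_batched(s_prev, I)
dS_dSprev = J[:, :, :2]                 # ds^t/ds^{t-1} in R^{k×k}
dS_dI = J[:, :, 2:]                     # ds^t/dI^t in R^{k×1}

# Sanity check against finite differences for one step.
eps = 1e-6
num = (f(states[1] + np.array([eps, 0.0]), I[1]) - f(states[1], I[1])) / eps
assert np.allclose(num, J[1, :, 0], atol=1e-4)
```

In the paper's setting, the batched Jacobian would instead be produced by an auto-differentiation framework, so no per-model derivation as above is needed.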
# F Combining approximate backward and forward propagation
In this section we explain in detail how to obtain the backward formulation in Eq. (14) from the forward-SSM formulation of the eligibility matrix from Eq. (5). Consider a summative loss function
$\mathcal{L} = \sum_t \mathcal{L}^t$. Then, the APG $\tilde{\nabla}\theta_i^t$ (see Eq. (6)) is given by
$$
\begin{array} { l } { { \displaystyle { \tilde { \nabla } } \theta _ { i } ^ { t } = \frac { d \mathcal { L } ^ { t } } { d s _ { i } ^ { t } } \mathbf { e } _ { i } ^ { t } } } \\ { { \displaystyle \quad = \frac { d \mathcal { L } ^ { t } } { d s _ { i } ^ { t } } \left( \delta _ { i } ^ { t } + \underbrace { A _ { i } ^ { t } } _ { \phi _ { i } ^ { t : t } } \delta _ { i } ^ { t - 1 } + \underbrace { A _ { i } ^ { t } A _ { i } ^ { t - 1 } } _ { \phi _ { i } ^ { t : t - 1 } } \delta _ { i } ^ { t - 2 } + \dots + \underbrace { A _ { i } ^ { t } \dots A _ { i } ^ { 2 } } _ { \phi _ { i } ^ { t : 2 } } \delta _ { i } ^ { 1 } + \underbrace { A _ { i } ^ { t } \dots A _ { i } ^ { 1 } } _ { \phi _ { i } ^ { t : 1 } } \mathbf { e } _ { i } ^ { 0 } \right) , } } \end{array}
$$
where in the second line, $\mathbf { e } _ { i } ^ { t }$ is unrolled as in Eq. (9). The cumulative APG $\begin{array} { r } { [ \tilde { \nabla } \theta _ { i } ] ^ { 1 : \lambda } = \sum _ { t = 1 } ^ { \lambda } \tilde { \nabla } \theta _ { i } ^ { t } } \end{array}$ is then given by the summation of all terms $\tilde { \nabla } \theta _ { i } ^ { t }$ as
$$
\begin{aligned}
\sum _ { t = 1 } ^ { \lambda } \tilde { \nabla } \theta _ { i } ^ { t } &= \frac { d \mathcal { L } ^ { 1 } } { d \mathbf { s } _ { i } ^ { 1 } } \left( \delta _ { i } ^ { 1 } + \phi _ { i } ^ { 1 : 1 } \mathbf { e } _ { i } ^ { 0 } \right) \\
&\quad + \frac { d \mathcal { L } ^ { 2 } } { d \mathbf { s } _ { i } ^ { 2 } } \left( \delta _ { i } ^ { 2 } + \phi _ { i } ^ { 2 : 2 } \delta _ { i } ^ { 1 } + \phi _ { i } ^ { 2 : 1 } \mathbf { e } _ { i } ^ { 0 } \right) \\
&\quad + \frac { d \mathcal { L } ^ { 3 } } { d \mathbf { s } _ { i } ^ { 3 } } \left( \delta _ { i } ^ { 3 } + \phi _ { i } ^ { 3 : 3 } \delta _ { i } ^ { 2 } + \phi _ { i } ^ { 3 : 2 } \delta _ { i } ^ { 1 } + \phi _ { i } ^ { 3 : 1 } \mathbf { e } _ { i } ^ { 0 } \right) \\
&\quad + \cdots \\
&\quad + \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } \left( \delta _ { i } ^ { \lambda } + \phi _ { i } ^ { \lambda : \lambda } \delta _ { i } ^ { \lambda - 1 } + \cdots + \phi _ { i } ^ { \lambda : 2 } \delta _ { i } ^ { 1 } + \phi _ { i } ^ { \lambda : 1 } \mathbf { e } _ { i } ^ { 0 } \right) ,
\end{aligned}
$$
where the term $\delta_i^\lambda$ appears once, the term $\delta_i^{\lambda-1}$ appears twice, and so on. A number of $\mathcal{O}(\lambda^2)$ cumulative state transition matrices $\phi^{r:s}$ with $r, s \in \{1, \ldots, \lambda\}$, $r \geq s$, would need to be calculated to parallelize this computation. Rearranging this equation by collecting all coefficients of $\delta_i^1, \ldots, \delta_i^\lambda$ and $\mathbf{e}_i^0$ shows why the backward formulation is a much more elegant way to solve this problem:
$$
\begin{aligned}
\sum _ { t = 1 } ^ { \lambda } \tilde { \nabla } \theta _ { i } ^ { t } &= \underbrace { \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } } _ { \mathbf { q } _ { i } ^ { \lambda } } \delta _ { i } ^ { \lambda } \\
&\quad + \underbrace { \left( \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } A _ { i } ^ { \lambda } + \frac { d \mathcal { L } ^ { \lambda - 1 } } { d \mathbf { s } _ { i } ^ { \lambda - 1 } } \right) } _ { \mathbf { q } _ { i } ^ { \lambda - 1 } } \delta _ { i } ^ { \lambda - 1 } \\
&\quad + \underbrace { \left( \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } A _ { i } ^ { \lambda } A _ { i } ^ { \lambda - 1 } + \frac { d \mathcal { L } ^ { \lambda - 1 } } { d \mathbf { s } _ { i } ^ { \lambda - 1 } } A _ { i } ^ { \lambda - 1 } + \frac { d \mathcal { L } ^ { \lambda - 2 } } { d \mathbf { s } _ { i } ^ { \lambda - 2 } } \right) } _ { \mathbf { q } _ { i } ^ { \lambda - 2 } } \delta _ { i } ^ { \lambda - 2 } \\
&\quad + \cdots \\
&\quad + \underbrace { \left( \frac { d \mathcal { L } ^ { \lambda } } { d \mathbf { s } _ { i } ^ { \lambda } } A _ { i } ^ { \lambda } \cdots A _ { i } ^ { 1 } + \cdots + \frac { d \mathcal { L } ^ { 1 } } { d \mathbf { s } _ { i } ^ { 1 } } A _ { i } ^ { 1 } \right) } _ { \mathbf { q } _ { i } ^ { 0 } } \mathbf { e } _ { i } ^ { 0 } \\
&= \sum _ { t = 1 } ^ { \lambda } \mathbf { q } _ { i } ^ { t } \delta _ { i } ^ { t } + \mathbf { q } _ { i } ^ { 0 } \mathbf { e } _ { i } ^ { 0 } .
\end{aligned}
$$
This formulation is mathematically equivalent to Eq. (A10), but the individual APGs $\tilde { \nabla } \theta _ { i } ^ { t }$ disappeared. This is possible, since we are only interested in their sum, not in the individual terms. The major advantage here is that vectors $\dot { \bf q } _ { i } ^ { t } \in \mathbb { R } ^ { 1 \times k }$ are much smaller than eligibility matrices $\mathbf { e } _ { i } ^ { t } \in \mathbb { R } ^ { \bar { k } \times d _ { \theta } }$ and can be efficiently computed using the associative scan algorithm, as explained in
Appendix G. Note that for the implementation we replaced all $\delta_i^t$ by their low-rank representation from Appendix D, which allows efficient parallel computation of Eq. (A11) using Einstein summation.
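The equivalence of the forward (eligibility) and backward ($\mathbf{q}$-vector) formulations can be checked numerically with random Jacobians. The following is a self-contained sketch under toy shapes of our own, dropping the neuron index:

```python
import numpy as np

rng = np.random.default_rng(3)
k, p, lam = 3, 7, 6             # state dim, parameter dim, subsequence length

A = rng.standard_normal((lam + 1, k, k)) * 0.5   # A^t = ds^t/ds^{t-1} (index 0 unused)
delta = rng.standard_normal((lam + 1, k, p))     # parameter gradients (index 0 unused)
ell = rng.standard_normal((lam + 1, 1, k))       # l^t = dL^t/ds^t, with l^0 = 0
ell[0] = 0.0
e0 = rng.standard_normal((k, p))                 # eligibility carried over from before

# Forward formulation: e^t = A^t e^{t-1} + delta^t, accumulate APGs l^t e^t.
e = e0.copy()
apg_fwd = np.zeros((1, p))
for t in range(1, lam + 1):
    e = A[t] @ e + delta[t]
    apg_fwd += ell[t] @ e

# Backward formulation: q^t = q^{t+1} A^{t+1} + l^t, then
# cumulative APG = sum_t q^t delta^t + q^0 e^0, never touching k×p matrices
# until the final contractions.
q = np.zeros((lam + 1, 1, k))
q[lam] = ell[lam]
for t in range(lam - 1, -1, -1):
    q[t] = q[t + 1] @ A[t + 1] + ell[t]
apg_bwd = q[0] @ e0 + sum(q[t] @ delta[t] for t in range(1, lam + 1))

assert np.allclose(apg_fwd, apg_bwd)
```

Only the final contractions involve the $k \times p$ objects $\delta^t$ and $\mathbf{e}^0$; the recursion itself runs over $1 \times k$ row vectors, which is the source of the savings discussed in the text.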
# G Computing $\mathbf { q } _ { i } ^ { t }$ with the associative scan
Recall the linear SSM formulation from Eq. (15):
$$
\mathbf q _ { i } ^ { t } = \mathbf q _ { i } ^ { t + 1 } A _ { i } ^ { t + 1 } + \boldsymbol { \ell } _ { i } ^ { t }
$$
with $\ell _ { i } ^ { t } = \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { i } ^ { t } }$. We can efficiently obtain the sequence $[ \mathbf { q } _ { i } ^ { \lambda } , \mathbf { q } _ { i } ^ { \lambda - 1 } , \mathbf { q } _ { i } ^ { \lambda - 2 } , \ldots ] = [ \ell _ { i } ^ { \lambda } , \ \ell _ { i } ^ { \lambda - 1 } + \ell _ { i } ^ { \lambda } A _ { i } ^ { \lambda } , \ \ell _ { i } ^ { \lambda - 2 } + \ell _ { i } ^ { \lambda - 1 } A _ { i } ^ { \lambda - 1 } + \ell _ { i } ^ { \lambda } A _ { i } ^ { \lambda } A _ { i } ^ { \lambda - 1 } , \ \ldots ]$ using the associative scan [33], as described in [34]: Given any sequence $[ a , b , c , \ldots ]$ of length $\lambda$ and a binary associative operator $\bullet$ which satisfies $( a \bullet b ) \bullet c = a \bullet ( b \bullet c )$, the sequence $[ a , \ a \bullet b , \ a \bullet b \bullet c , \ \ldots ]$ can be computed in $\mathcal { O } ( \log \lambda )$ time complexity, given that sufficient parallel processors are available.
We define tuples $p _ { i } ^ { t } \stackrel { \mathrm { def } } { = } ( A _ { i } ^ { t + 1 } , \ell _ { i } ^ { t } )$ with $A _ { i } ^ { \lambda + 1 } = \mathbf { I }$ as identity matrix, $\ell _ { i } ^ { 0 } = 0$, and binary associative operator $( a _ { 1 } , a _ { 2 } ) \bullet ( b _ { 1 } , b _ { 2 } ) \stackrel { \mathrm { def } } { = } ( b _ { 1 } a _ { 1 } , \ b _ { 1 } a _ { 2 } + b _ { 2 } )$, where subscripts 1 and 2 refer to the first and second tuple element, respectively. Applying the associative scan with this binary associative operator to the sequence $[ p _ { i } ^ { \lambda } , \ldots , p _ { i } ^ { 0 } ]$ results in the sequence $[ r _ { i } ^ { \lambda } , \ldots , r _ { i } ^ { 0 } ]$ with
$$
\begin{array} { r l } { r ^ { \lambda } = p ^ { \lambda } } & { { } = ( A ^ { \lambda + 1 } , \ell ^ { \lambda } ) } \\ { r ^ { \lambda - 1 } = p ^ { \lambda } \bullet p ^ { \lambda - 1 } } & { { } = ( A ^ { \lambda } A ^ { \lambda + 1 } , A ^ { \lambda } \ell ^ { \lambda } + \ell ^ { \lambda - 1 } ) } \\ { r ^ { \lambda - 2 } = p ^ { \lambda } \bullet p ^ { \lambda - 1 } \bullet p ^ { \lambda - 2 } } & { { } = ( \underbrace { A ^ { \lambda - 1 } A ^ { \lambda } A ^ { \lambda + 1 } } _ { \phi ^ { \lambda - 2 : \lambda + 1 } } , \underbrace { A ^ { \lambda - 1 } A ^ { \lambda } \ell ^ { \lambda } + A ^ { \lambda - 1 } \ell ^ { \lambda - 1 } + \ell ^ { \lambda - 2 } } _ { \mathbf { q } ^ { \lambda - 2 } } ) } \end{array}
$$
where we omitted neuron index $i$ for readability. From each resulting tuple $r ^ { t } = ( \phi ^ { t : \lambda + 1 } , \mathbf { q } ^ { t } )$ we can extract $\mathbf { q } ^ { t }$ as the second tuple element.
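As a sanity check, the scan can be sketched in a few lines of numpy. The following is a minimal illustration, not the paper's implementation: it uses a sequential left-fold in place of a parallel scan (a parallel version would pass the same operator to, e.g., `jax.lax.associative_scan`), and it is written for the row-vector convention of Eq. (15), so the operator reads $(a_1, a_2) \bullet (b_1, b_2) = (a_1 b_1,\ a_2 b_1 + b_2)$. All sizes and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam = 3, 6  # state size k and sequence length lambda

# Random transition matrices A^1..A^lambda and per-step loss gradients l^0..l^lambda
A = [None] + [rng.normal(size=(k, k)) * 0.5 for _ in range(lam)]  # A[t], t = 1..lam
l = [rng.normal(size=(1, k)) for _ in range(lam + 1)]             # l[t], t = 0..lam

# Reference: backward recursion q^t = q^{t+1} A^{t+1} + l^t with q^lambda = l^lambda
q_ref = [None] * (lam + 1)
q_ref[lam] = l[lam]
for t in range(lam - 1, -1, -1):
    q_ref[t] = q_ref[t + 1] @ A[t + 1] + l[t]

def op(a, b):
    # Row-vector form of the tuple operator: (a1, a2) • (b1, b2) = (a1 b1, a2 b1 + b2)
    return (a[0] @ b[0], a[1] @ b[0] + b[1])

# Sequential fold over [p^lambda, ..., p^0] with p^t = (A^{t+1}, l^t), A^{lambda+1} = I
acc = (np.eye(k), l[lam])            # p^lambda
for t in range(lam - 1, -1, -1):
    acc = op(acc, (A[t + 1], l[t]))  # r^t = r^{t+1} • p^t
    assert np.allclose(acc[1], q_ref[t])  # second tuple element equals q^t
```

Since the operator is associative, the same result can be computed in $\mathcal{O}(\log \lambda)$ depth by combining subsequences in parallel instead of folding sequentially.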
# H Schematic illustration of HYPR
Fig. A1 shows a schematic illustration of the forward and backward accumulation in HYPR.
# I Extension of HYPR to multi-layer networks
HYPR can be applied to multi-layered networks. This can trivially be achieved by back-propagating the loss $\mathcal { L } ^ { t }$ obtained in time step $t$ through layers to obtain $d \mathcal { L } ^ { t } / d \mathbf { s } _ { l , i } ^ { t }$ , where $\mathbf { s } _ { l , i }$ denotes the state of neuron $i$ in layer $l$ at time step $t$ , which can directly be plugged into the computation of APG $\tilde { \nabla } \theta _ { l , i } ^ { t }$ (see Eq. (6)) of layer $l$ . However, it has to be noted that with this approach some gradient pathways are disregarded compared to BPTT: The exact gradient $d \mathcal { L } ^ { t } / d \mathbf { s } _ { l - 1 , i } ^ { r }$ with $r < t$ of layer $l - 1$ can be expressed as
$$
\frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l - 1 , i } ^ { r } } = \sum _ { k } \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l , k } ^ { r } } \frac { \partial \mathbf { s } _ { l , k } ^ { r } } { \partial \mathbf { s } _ { l - 1 , i } ^ { r } } + \sum _ { k } \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l - 1 , k } ^ { r + 1 } } \frac { \partial \mathbf { s } _ { l - 1 , k } ^ { r + 1 } } { \partial \mathbf { s } _ { l - 1 , i } ^ { r } } .
$$
BPTT accounts for all gradient pathways involved. In contrast, the APGs $\tilde { \nabla } \theta _ { l , i } ^ { t }$ used in HYPR neglect the terms $\frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l , k } ^ { r } } \frac { \partial \mathbf { s } _ { l , k } ^ { r } } { \partial \mathbf { s } _ { l - 1 , i } ^ { r } }$ for $r < t$, that is, the pathways through which the state $\mathbf { s } _ { l - 1 , i } ^ { r }$ influences the loss $\mathcal { L } ^ { t }$ through the states $( \mathbf { s } _ { l , k } ^ { r } , \ldots , \mathbf { s } _ { l , k } ^ { t - 1 } )$ of the successive layer $l$. The resulting approximation $[ d \mathcal { L } ^ { t } / d \mathbf { s } _ { l - 1 , i } ^ { r } ] _ { \mathrm { l o c a l } }$ of HYPR is given by
$$
\left[ \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l - 1 , i } ^ { r } } \right] _ { \mathrm { l o c a l } } = \left\{ \begin{array} { l l } { \sum _ { k } \left[ \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l - 1 , k } ^ { r + 1 } } \right] _ { \mathrm { l o c a l } } \frac { \partial \mathbf { s } _ { l - 1 , k } ^ { r + 1 } } { \partial \mathbf { s } _ { l - 1 , i } ^ { r } } } & { \mathrm { i f \quad } r < t } \\ { \frac { d \mathcal { L } ^ { t } } { d \mathbf { s } _ { l - 1 , i } ^ { r } } } & { \mathrm { i f \quad } r = t } \end{array} \right.
$$
Figure A1: Schematic illustration of the combination of forward-accumulation (Eq. (5)) and backward-accumulation (Eq. (15)) in HYPR.
The terms $[ d \mathcal { L } ^ { t } / d \mathbf { s } _ { l - 1 , i } ^ { r } ] _ { \mathrm { l o c a l } }$ are never explicitly computed in HYPR; we provide them merely to demonstrate which terms are ignored by HYPR in multi-layer networks. For multi-layer networks, we compute the spatially back-propagated loss gradients $d \mathcal { L } ^ { t } / d \mathbf { s } _ { l - 1 , i } ^ { t }$ and use them in Eq. (14). The approximation in the multi-layer case discussed here is inherent to online algorithms and has previously been discussed in [26] and [23].
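The ignored pathways can be made concrete with a toy example. The sketch below (our own illustration, not from the paper) uses a scalar two-layer linear "network" $s_1^t = a_1 s_1^{t-1} + x^t$, $s_2^t = a_2 s_2^{t-1} + b\, s_1^t$ with loss $\mathcal{L} = s_2^T$, where the exact and local gradients have closed forms: the exact gradient sums over every time step $j \ge r$ at which layer 1 feeds layer 2, while the local approximation keeps only the within-layer path to time $T$.

```python
# Toy scalar two-layer linear dynamics; values chosen for hand-checkable gradients.
a1, a2, b, T = 0.9, 0.8, 0.5, 5

def exact_grad(r):
    # dL/ds1^r: s1^r reaches s1^j via a1^(j-r), crosses to layer 2 with weight b,
    # then reaches s2^T via a2^(T-j); sum over all crossing times j = r..T.
    return b * sum(a1 ** (j - r) * a2 ** (T - j) for j in range(r, T + 1))

def local_grad(r):
    # Local approximation: only the within-layer path to time T (the j = T term).
    return b * a1 ** (T - r)

assert abs(local_grad(T) - exact_grad(T)) < 1e-12       # identical at r = T
assert all(local_grad(r) < exact_grad(r) for r in range(T))  # earlier cross-layer paths dropped
```

With positive coefficients, the local approximation strictly underestimates the exact gradient for every $r < T$, since each dropped term is one of the cross-layer pathways neglected above.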
# J Performance of multi-layer networks trained with HYPR
Tab. A1 shows the performance of SRNNs consisting of 1 to 3 layers of SE-adLIF neurons on the ECG task, trained with HYPR. All other hyperparameters were equal to the ones reported in Appendix M.2. Despite the approximations discussed in Appendix I, the test accuracy improves as more layers are added to the network.
Table A1: Performance of a multi-layer SE-adLIF network trained with HYPR. We report mean $\pm$ std. dev. over 5 random seeds, which randomize the initialization and the train/validation split.
# K Details of datasets and preprocessing
For all datasets except cue and Pathfinder-E, a train/test split is predefined. We further split the training set into a smaller training set and a validation set with $90\%$ and $10\%$ relative size respectively and perform model selection on the validation set. For Pathfinder-E and cue, we split training/validation/test as $80/10/10\%$.
# K.1 Details of the cue task
The cue task is a binary classification task designed to test a model’s ability to remember a class cue over an extended delay and indicate its identity during a recall input. Each input pattern $\mathbf { X } \in \{ 0 , 1 \} ^ { T \times D }$ in this task consists of spike trains across $D = 15$ neurons and a total duration of $T = 2 T _ { \mathrm { p a t } } + T _ { \mathrm { d e l a y } }$ time steps, where $T _ { \mathrm { p a t } }$ is the pattern length and $T _ { \mathrm { d e l a y } }$ is the delay length. The task is to classify each pattern into class A or B according to the activation pattern presented before the delay period.
Figure A2: The cue task. a One input sample from class B with a length of $T = 1 0 0 0$ time steps. Red bar below input neurons indicates the time when a target signal is available to the network. b Hidden layer activity (top, subsample of 100 neurons) and model output (bottom) of a single hidden layer RSNN trained on this task. Red (blue) lines indicate the network’s predicted probability for class A (B).
To generate input patterns for this task, we employed a three-phase process in which we first randomly assign a target class, then generate class-specific input patterns, followed by a delay period and a class-independent recall cue signal. An example input pattern is shown in Fig. A2a. In detail, this procedure is as follows: For each pattern, a binary target class $c \in \{ A , B \}$ is randomly assigned with equal probability. This target determines which set of neurons will be active during the initial pattern phase. The pattern phase has a duration of $T _ { \mathrm { p a t } }$ time steps. If the target is class A, input neurons 1 to 5 are active with probability $p _ { A } = 0 . 5$ at each time step, generating a binary pattern where each spike $x _ { t , i } \sim \mathrm { B e r n o u l l i } ( p _ { A } )$ . If the target is class B, input neurons 6 to 10 are activated according to the same mechanism. All other neurons remain silent during this phase. Following the pattern phase, a delay period of length $T _ { \mathrm { d e l a y } }$ time steps occurs, during which all neurons remain silent. This creates a temporal gap between the presentation of the class-specific pattern and the recall cue.
The final phase is the recall phase, which also lasts for $T _ { \mathrm { p a t } }$ time steps. During this phase, input neurons 11 to 15 are active according to the same process as used in the pattern phase. Each target is represented as a one-hot encoded vector $\mathbf { y } \in \{ 0 , 1 \} ^ { 2 }$ and presented to the network only during the recall period.
Fig. A2b shows the hidden layer spiking activity and output layer softmax probabilities of an RSNN that was trained on this task. The correct class probability rises in response to the recall cue.
For our experiments, we used $T _ { \mathrm { p a t } } = 20$ time steps and created multiple variants of the dataset with different delay lengths $T _ { \mathrm { d e l a y } } \in \{ 1k , 2k , 5k , 10k , 16k \}$ time steps to test the model’s capacity to maintain information over increasingly longer temporal gaps. For each variant, we generated a total of 256 samples, with an equal number of samples for each class. After generating an entire dataset, samples were randomly shuffled and split into a training set $( 80\% )$ and test set $( 20\% )$.
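The generation procedure described above can be sketched as follows. This is a minimal, self-contained illustration under the stated parameters ($D = 15$, Bernoulli spiking with $p = 0.5$); the function name `make_cue_sample` and the exact array layout are our own choices, not from the paper's code.

```python
import numpy as np

def make_cue_sample(rng, T_pat=20, T_delay=1000, D=15, p=0.5):
    """Generate one cue-task input pattern and its label (0 = class A, 1 = class B)."""
    label = int(rng.random() < 0.5)            # equiprobable target class
    T = 2 * T_pat + T_delay
    X = np.zeros((T, D), dtype=np.int8)
    # Pattern phase: neurons 1-5 (class A) or 6-10 (class B) spike ~ Bernoulli(p)
    lo = 0 if label == 0 else 5
    X[:T_pat, lo:lo + 5] = rng.random((T_pat, 5)) < p
    # Delay phase: all neurons silent; recall phase: neurons 11-15 spike ~ Bernoulli(p)
    X[T_pat + T_delay:, 10:15] = rng.random((T_pat, 5)) < p
    return X, label

rng = np.random.default_rng(42)
X, y = make_cue_sample(rng, T_delay=960)
assert X.shape == (1000, 15)
assert X[20:980].sum() == 0  # delay period is silent
```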
# K.2 Details of the benchmark datasets
SHD: The Spiking Heidelberg Digits (SHD) dataset is an audio-based classification dataset for benchmarking SNNs. It consists of 20 classes, corresponding to the spoken digits 0 to 9 in German and English, where the audio is converted into spike trains based on a detailed cochlea model [36].
We use the version from the Tonic python library (version 1.5.1) for neuromorphic datasets2, which is publicly available for research (Creative Commons Attribution 4.0 International License). We reduced the dimension from 700 to 140 channels by sum pooling and summed the spikes over $4 \mathrm { m s }$ time windows. The same preprocessing has been applied in [15].
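The channel and time-window pooling just described can be sketched as below. This is an assumed reconstruction of the preprocessing (dense spike-count input, 700 channels pooled by a factor of 5, spikes summed over 4 ms windows at 1 ms resolution); the actual pipeline in [15] may differ in details such as sparse event handling.

```python
import numpy as np

def pool_shd(dense, channel_factor=5, bin_ms=4, dt_ms=1):
    """Reduce a dense spike array (T, 700) to (T // bin, 140) by sum pooling.
    dense[t, c] holds the spike count of channel c in time step t."""
    T, C = dense.shape
    steps = bin_ms // dt_ms
    T_trim = (T // steps) * steps
    x = dense[:T_trim].reshape(T_trim // steps, steps, C).sum(axis=1)  # 4 ms time bins
    return x.reshape(x.shape[0], C // channel_factor, channel_factor).sum(axis=2)

spikes = np.zeros((100, 700), dtype=np.int32)
spikes[3, 12] = 1                 # a single spike in channel 12 at time step 3
pooled = pool_shd(spikes)
assert pooled.shape == (25, 140)
assert pooled[0, 2] == 1          # step 3 -> bin 0, channel 12 -> pooled channel 2
```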
ECG: The electrocardiogram (ECG) dataset involves signals with six distinct characteristic waveforms, whose shape and duration are informative of the functioning of the cardiovascular system. The task consists of recognizing the 6 classes at each time step. We use the dataset3 version preprocessed and utilized by the ALIF paper [35], which is based on the original publicly available QT Database from PhysioNet4 [37] under the Open Data Commons Attribution License v1.0.
SMNIST: In this task, the $28 \times 28$ grayscale images of the MNIST dataset [38] are presented as a pixel-by-pixel sequence of length 784, and the task is to decide which one of the 10 handwritten digits is present in the current image. This sequential MNIST (sMNIST) task formulation was initially introduced in [29]. We provide the input to the networks after normalizing pixel values to the range $[0, 1]$. We access this publicly available dataset via the Tensorflow Datasets library5.
SCIFAR: In this task, the $32 \times 32$ colored images of the CIFAR-10 dataset [40] are presented as a pixel-by-pixel sequence of length 1024, and the task is to decide to which one of the 10 categories the current image belongs. We provide each input in RGB, thus the inputs contain three channels. Inputs are provided after normalizing pixel values to the range $[0, 1]$. We access this publicly available dataset via the Tensorflow Datasets library6. This task is also one of the long-range arena benchmark tasks [39], under the category “Image”.
PATHFINDER-E: This is one of the long-range arena benchmark tasks [39], where a $32 \times 32$ grayscale image of line drawings is presented as a pixel-by-pixel sequence of length 1024, and the task is to decide whether a starting point is connected by a line to an end point (i.e., binary classification). Our dataset implementations are based on the code repository (V2 release)7 and the publicly available long range arena repository8. We use the easier variant of this task (-E), with the difficulty level indicated as “baseline” in the dataset, as opposed to the harder variant with additional distracting contours in the image.
# L Details of neuron models
As shown in Table A2, each neuron model can be represented by its state vector s, update function $f$ , and output function $g$ . In all neuron models, output function $g$ is the Heaviside step function $\Theta$ .
# L.1 Balanced Resonate-and-Fire (BRF)
The dynamics of the BRF [17] neuron model are given by:
$$
\begin{array} { r l } & { b ^ { t } = p _ { \omega } - b _ { \mathrm { o f f s e t } } - q ^ { t - 1 } } \\ & { u ^ { t } = u ^ { t - 1 } + \Delta t \cdot ( b ^ { t } \cdot u ^ { t - 1 } - \omega \cdot v ^ { t - 1 } + I ^ { t } ) } \\ & { v ^ { t } = v ^ { t - 1 } + \Delta t \cdot ( \omega \cdot u ^ { t - 1 } + b ^ { t } \cdot v ^ { t - 1 } ) } \\ & { z ^ { t } = \Theta ( u ^ { t } - \theta - q ^ { t - 1 } ) } \\ & { q ^ { t } = \alpha \cdot q ^ { t - 1 } + z ^ { t } } \end{array}
$$
where $\theta$ is the base firing threshold, $\omega$ is the angular frequency parameter controlling oscillations, $p _ { \omega } = \frac { - 1 + \sqrt { 1 - ( \Delta t \cdot \omega ) ^ { 2 } } } { \Delta t }$ is the divergence boundary, $b _ { \mathrm { o f f s e t } }$ is the dampening parameter, $q ^ { t }$ is an adaptation variable with adaptive decay factor $\alpha = 0.9$, and $\Delta t$ is the simulation time step. We use $\Delta t = 0.01$ in all experiments. The BRF neuron generates a spike when the membrane potential $u ^ { t }$ exceeds the adaptive threshold $\theta + q ^ { t - 1 }$. In the BRF model, the membrane potential is not reset after a spike. Instead, the adaptation variable $q ^ { t }$ increases by the spike output $z ^ { t } \in \{ 0 , 1 \}$ and affects both the threshold for future spikes and the dampening term $b ^ { t }$ in the subsequent dynamics. We train $b _ { \mathrm { o f f s e t } }$ and $\omega$ together with the network weights. We used the same weight initialization as in [17].
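A direct transcription of these update equations into a vectorized step function might look as follows. This is a minimal sketch of the dynamics only (no training, no surrogate gradient); the function name and the test values are our own.

```python
import numpy as np

def brf_step(u, v, q, I, omega, b_offset, theta=1.0, alpha=0.9, dt=0.01):
    """One Euler update of a vector of BRF neurons (no membrane reset)."""
    p_omega = (-1.0 + np.sqrt(1.0 - (dt * omega) ** 2)) / dt  # divergence boundary
    b = p_omega - b_offset - q                                # dampening term b^t
    u_new = u + dt * (b * u - omega * v + I)
    v_new = v + dt * (omega * u + b * v)
    z = (u_new - theta - q > 0).astype(u.dtype)               # spike vs. adaptive threshold
    q_new = alpha * q + z                                     # adaptation increases on spike
    return u_new, v_new, z, q_new

n = 4
u = v = q = np.zeros(n)
omega, b_offset = np.full(n, 10.0), np.full(n, 0.1)
u, v, z, q = brf_step(u, v, q, I=np.full(n, 200.0), omega=omega, b_offset=b_offset)
assert z.sum() == n  # strong input drives all neurons above threshold in one step
```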
Table A2: $f$ and $g$ functions for different neuron models. In the ALIF and SE-adLIF models, the variable $u _ { i } ^ { t }$ is set to 0 (hard reset) if the neuron spiked in the previous time step (not shown).
# L.2 SE discretized adaptive Leaky Integrate and Fire (SE-adLIF)
The temporal dynamics of the Symplectic-Euler (SE) discretized adaptive LIF (SE-adLIF) [15] neuron are given by
$$
\begin{array} { r l } & { \hat { u } ^ { t } = \alpha u ^ { t - 1 } + ( 1 - \alpha ) ( I ^ { t } - w ^ { t - 1 } ) } \\ & { z ^ { t } = \Theta ( \hat { u } ^ { t } - \theta ) } \\ & { u ^ { t } = \hat { u } ^ { t } ( 1 - z ^ { t } ) } \\ & { w ^ { t } = \beta w ^ { t - 1 } + ( 1 - \beta ) ( a u ^ { t } + b z ^ { t } ) } \end{array}
$$
where $\alpha = e ^ { - \Delta t / \tau _ { u } }$ and $\beta = e ^ { - \Delta t / \tau _ { w } }$ are decay factors, $\tau _ { u }$ and $\tau _ { w }$ are the membrane potential and adaptation time constants, and $a = \rho \hat { a } , b = \rho \hat { b }$ are adaptation parameters. For all experiments, we keep $\rho = 120$ fixed and initialize $\hat { a } , \hat { b } \sim \mathcal { U } ( 0 , 1 )$, which are trained together with the other parameters. During training we clip both $\hat { a }$ and $\hat { b }$ to the range $[ 0 , 1 ]$. Further, we employ the same time constant interpolation as in [15] with the ranges $\tau _ { u } \in [ 5 , 2 5 ]$ and $\tau _ { w } \in [ 6 0 , 3 0 0 ]$ for all experiments. We used the same weight initialization as in [15].
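The SE-adLIF update can likewise be transcribed into a step function. A minimal sketch, assuming exponential decay factors $\alpha = e^{-\Delta t / \tau_u}$ and $\beta = e^{-\Delta t / \tau_w}$ as in the ALIF section; the test parameters are illustrative only.

```python
import numpy as np

def se_adlif_step(u, w, I, alpha, beta, a, b, theta=1.0):
    """One Symplectic-Euler adaptive-LIF update (hard reset to 0 on spike)."""
    u_hat = alpha * u + (1 - alpha) * (I - w)        # provisional membrane potential
    z = (u_hat - theta > 0).astype(u.dtype)          # spike if u_hat exceeds theta
    u_new = u_hat * (1 - z)                          # hard reset after a spike
    w_new = beta * w + (1 - beta) * (a * u_new + b * z)  # adaptation uses post-reset u
    return u_new, w_new, z

dt, tau_u, tau_w = 1.0, 10.0, 100.0
alpha, beta = np.exp(-dt / tau_u), np.exp(-dt / tau_w)
u = w = np.zeros(3)
u, w, z = se_adlif_step(u, w, I=np.full(3, 20.0), alpha=alpha, beta=beta, a=60.0, b=60.0)
assert z.sum() == 3  # (1 - alpha) * 20 ~ 1.9 exceeds theta = 1, so all neurons spike
```

Note that, per the equations above, the adaptation current $w^t$ is driven by the post-reset membrane potential $u^t$, not by $\hat{u}^t$.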
# L.3 Adaptive Leaky Integrate and Fire (ALIF)
The dynamics of the ALIF [35] model are given by
$$
\begin{array} { r l } & { a ^ { t } = \rho \cdot a ^ { t - 1 } + ( 1 - \rho ) \cdot z ^ { t - 1 } } \\ & { A ^ { t } = b _ { j 0 } + \beta \cdot a ^ { t } } \\ & { u ^ { t } = \alpha \cdot u ^ { t - 1 } + ( 1 - \alpha ) \cdot I ^ { t } - A ^ { t } \cdot z ^ { t - 1 } } \\ & { z ^ { t } = \Theta ( u ^ { t } - A ^ { t } ) } \end{array}
$$
where $\alpha = e ^ { - \Delta t / \tau _ { u } }$ is the membrane potential decay factor, $\rho = e ^ { - \Delta t / \tau _ { a } }$ is the adaptation decay factor, $\tau _ { u }$ is the membrane potential time constant, and $\tau _ { a }$ is the adaptation time constant. $\beta$ is the adaptation strength coefficient, and $A ^ { t }$ is the adaptive threshold. The ALIF neuron implements an adaptive threshold mechanism combined with a spike-triggered reset. The adaptation variable $a ^ { t }$ tracks the neuron’s recent spiking history, increasing with each spike $z ^ { t - 1 }$. This creates a dynamic threshold $A ^ { t } = b _ { j 0 } + \beta \cdot a ^ { t }$ that rises after spiking activity and implements spike-frequency adaptation. After a spike, the membrane potential $u ^ { t }$ is reset to $0$.
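For completeness, the ALIF update can be sketched the same way. The parameter values below ($\beta = 1.8$, $b_{j0} = 1$, time constants) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def alif_step(u, a_var, z_prev, I, alpha, rho, beta=1.8, b0=1.0):
    """One ALIF update with adaptive threshold and spike-triggered reset term."""
    a_new = rho * a_var + (1 - rho) * z_prev        # trace of recent spiking history
    A = b0 + beta * a_new                           # adaptive threshold A^t
    u_new = alpha * u + (1 - alpha) * I - A * z_prev  # reset term subtracts A^t on spike
    z = (u_new - A > 0).astype(u.dtype)
    return u_new, a_new, z

dt = 1.0
alpha, rho = np.exp(-dt / 20.0), np.exp(-dt / 200.0)
u = a_var = z = np.zeros(2)
for _ in range(50):  # constant drive; the threshold rises after each spike
    u, a_var, z = alif_step(u, a_var, z, I=np.full(2, 25.0), alpha=alpha, rho=rho)
assert a_var.max() > 0  # the neurons spiked at least once within 50 steps
```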
# L.4 Leaky Integrator (LI)
The dynamics of the LI neurons in the output layer are given by
$$
u ^ { t } = \alpha \cdot u ^ { t - 1 } + ( 1 - \alpha ) \cdot I ^ { t }
$$
where $\alpha = e ^ { - \Delta t / \tau _ { u } }$ is the decay factor with time constant $\tau _ { u }$.
# M Training details and hyperparameters
# M.1 Training details
Surrogate gradients: We employed surrogate gradient functions that approximate the derivative of $\Theta ( x )$. In our experiments, we utilize two common surrogate gradients. The first is the SLAYER [41] surrogate gradient, defined as $\frac { d \Theta ( x ) } { d x } \approx \alpha c e ^ { - \alpha | x | }$, where $\alpha = 5$ controls the sharpness of the exponential curve and $c = 0.2$ adjusts the amplitude of the gradient. The second is the double Gaussian [35] surrogate gradient, given by $\frac { d \Theta ( x ) } { d x } \approx \gamma \left[ ( 1 + p ) \cdot \mathrm { G } ( x ; 0 , \sigma _ { 1 } ) - 2 p \cdot \mathrm { G } ( x ; 0 , \sigma _ { 2 } ) \right]$, where $\mathrm { G } ( x ; \mu , \sigma )$ represents the Gaussian probability density function, $\sigma _ { 1 } = 0.5$ and $\sigma _ { 2 } = 6 \sigma _ { 1 }$ control the widths of the Gaussian curves, $p = 0.15$ adjusts the relative weight between the two Gaussians, and $\gamma = 0.5$ is an overall scaling factor.
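Both surrogate gradients are simple closed-form functions and can be written down directly; the sketch below uses the parameter values stated above (function names are ours).

```python
import numpy as np

def slayer_sg(x, alpha=5.0, c=0.2):
    """SLAYER surrogate gradient: alpha * c * exp(-alpha * |x|)."""
    return alpha * c * np.exp(-alpha * np.abs(x))

def gaussian(x, mu, sigma):
    """Gaussian probability density function G(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def double_gaussian_sg(x, sigma1=0.5, p=0.15, gamma=0.5):
    """Double-Gaussian surrogate gradient with sigma2 = 6 * sigma1."""
    return gamma * ((1 + p) * gaussian(x, 0.0, sigma1)
                    - 2 * p * gaussian(x, 0.0, 6 * sigma1))

x = np.linspace(-2, 2, 5)
assert slayer_sg(np.array([0.0]))[0] == 1.0   # alpha * c = 5 * 0.2 = 1 at x = 0
assert np.argmax(double_gaussian_sg(x)) == 2  # both surrogates peak at x = 0
```

The negative lobes of the double-Gaussian surrogate away from zero are intentional: the subtracted wide Gaussian suppresses gradient flow far from the threshold.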
Optimizer: We used the ADAM [30] optimization algorithm with $\beta _ { 1 } ~ = ~ 0 . 9$ , $\beta _ { 2 } ~ = ~ 0 . 9 9 9$ and $\epsilon = 1 { \mathrm { e } } - 8$ but with different learning rates (see Appendix M.2) for all experiments. For HYPR, we accumulated APGs over the entire sequence before applying the weight updates. We applied gradient clipping for all experiments: if the gradient norm exceeded a certain magnitude, we rescaled it to $m _ { \mathrm { g r a d } }$ . The values of $m _ { \mathrm { g r a d } }$ are discussed in Appendix M.2.
Learning rate schedule: We explored three different learning rate schedulers: constant, linear, and cosine. In the constant scheduler, the learning rate was held constant throughout all training epochs. In the linear scheduler, we decayed it linearly from an initial learning rate $\eta ^ { \mathrm { i n i t } }$ to 0 at the final epoch. In the cosine decay scheduler [42], we define the learning rate $\eta ^ { k }$ at the $k$-th epoch as $\eta ^ { k } = \eta ^ { \mathrm { i n i t } } \cdot [ 0.5 \cdot ( 1 + \cos { ( \pi \cdot k / \# \mathrm { e p o c h s } ) } ) ]$.
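The cosine scheduler is a one-liner; a minimal sketch of the formula above:

```python
import math

def cosine_lr(eta_init, k, n_epochs):
    """Cosine decay from eta_init at epoch 0 to 0 at the final epoch."""
    return eta_init * 0.5 * (1 + math.cos(math.pi * k / n_epochs))

assert cosine_lr(1e-3, 0, 100) == 1e-3               # starts at eta_init
assert abs(cosine_lr(1e-3, 100, 100)) < 1e-18        # decays to 0 at the final epoch
assert abs(cosine_lr(1e-3, 50, 100) - 5e-4) < 1e-12  # half the initial rate halfway
```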
Loss functions: Our network architecture uses leaky integrator neurons in the output layer, matching the number $C$ of classes of the corresponding task. At each time step $t$, the network output $\bar { \mathbf { y } } ^ { t } \in \mathbb { R } ^ { C }$ was given by the vector of membrane potentials $\mathbf { u } ^ { t } \in \mathbb { R } ^ { C }$. Depending on the task and the training algorithm (BPTT or HYPR), we used different functions to compute a loss from the series of outputs $[ \bar { \mathbf { y } } ^ { 1 } , \ldots , \bar { \mathbf { y } } ^ { T } ]$. The sum-of-softmax loss was given by
$$
\mathcal { L } = \mathrm { C E } \left( \operatorname { s o f t m a x } \left( \sum _ { \substack { t = t _ { 0 } + 1 } } ^ { T } \operatorname { s o f t m a x } \left( \bar { \mathbf { y } } ^ { t } \right) \right) , \mathbf { y } ^ { * } \right) ,
$$
where CE denotes the cross entropy loss, softmax is the softmax function, $\mathbf { y } ^ { * } \in \mathbb { R } ^ { C }$ a one-hot encoded target vector and $t _ { 0 }$ defines the time step up to which the network output is ignored. $t _ { 0 }$ is a hyperparameter and varies between different tasks and models and is shown in Appendix M.2. The sum-of-softmax is not compatible with HYPR since no per-timestep loss $\mathcal { L } ^ { t }$ can be obtained at time step $t$ without back-propagating the loss through time. Therefore, we used a summative per-timestep loss, given by
$$
\mathcal { L } = \frac { 1 } { T - t _ { 0 } } \sum _ { t = t _ { 0 } + 1 } ^ { T } \mathcal { L } ^ { t } = \frac { 1 } { T - t _ { 0 } } \sum _ { t = t _ { 0 } + 1 } ^ { T } \mathrm { C E } \left( \mathrm { s o f t m a x } \left( \bar { \mathbf { y } } ^ { t } \right) , \mathbf { y } ^ { * t } \right) ,
$$
where $\mathbf { y } ^ { * t }$ can be a per-timestep target (for example in the ECG task) or is the same for every time step (for example in SHD).
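The per-timestep loss can be sketched as below, assuming integer class-index targets per time step and a numerically stable log-softmax (the function name and array layout are our own choices).

```python
import numpy as np

def per_timestep_loss(y_bar, targets, t0=0):
    """Mean cross-entropy over time steps t0+1..T.
    y_bar: (T, C) output membrane potentials; targets: (T,) class indices."""
    y = y_bar[t0:]
    m = y.max(axis=1, keepdims=True)
    logp = y - m - np.log(np.exp(y - m).sum(axis=1, keepdims=True))  # log-softmax
    return -logp[np.arange(len(y)), targets[t0:]].mean()

T, C = 10, 4
loss = per_timestep_loss(np.zeros((T, C)), np.zeros(T, dtype=int))
assert abs(loss - np.log(C)) < 1e-12  # uniform outputs give cross-entropy log C
```

Because each $\mathcal{L}^t$ depends only on the output at time step $t$, this loss is compatible with the online setting of HYPR.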
Table A3: List of hyperparameters corresponding to the cue dataset simulations with the BRF model from Fig. 2. Hyperparameters that were different for HYPR are denoted in parentheses. For this experiment we used $\alpha = 1$ and $c = 0 . 2$ for the SLAYER surrogate gradient. $m _ { \mathrm { g r a d } }$ is the gradient clipping magnitude.
Class prediction: To compute the accuracy, we obtained a class prediction $\hat { y }$ from $[ \bar { \mathbf { y } } ^ { 1 } , \ldots , \bar { \mathbf { y } } ^ { T } ]$ via three different methods:
$$
\begin{array} { l } { \displaystyle \hat { y } = \mathrm { a r g m a x } \left( \sum _ { t = t _ { 0 } + 1 } ^ { T } \mathrm { s o f t m a x } ( \bar { \mathbf { y } } ^ { t } ) \right) } \\ { \displaystyle \hat { y } = \mathrm { a r g m a x } \left( \sum _ { t = t _ { 0 } + 1 } ^ { T } \bar { \mathbf { y } } ^ { t } \right) } \\ { \displaystyle \hat { y } ^ { t } = \mathrm { a r g m a x } ( \bar { \mathbf { y } } ^ { t } ) } \end{array}
$$
The method of choice for each task and model is shown in Appendix M.2.
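The three prediction methods differ only in where the argmax and the temporal sum are applied; a minimal sketch on a toy output sequence (values arbitrary, $t_0 = 0$):

```python
import numpy as np

def softmax(y):
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

y_bar = np.array([[0.1, 2.0],
                  [1.5, 0.2],
                  [3.0, 0.1]])                         # (T, C) output sequence

pred_sum_softmax = np.argmax(softmax(y_bar).sum(axis=0))  # sum of per-step softmaxes
pred_sum = np.argmax(y_bar.sum(axis=0))                   # sum of raw outputs
pred_per_step = np.argmax(y_bar, axis=1)                  # one prediction per time step

assert pred_sum == 0                    # column sums: [4.6, 2.3]
assert list(pred_per_step) == [1, 0, 0]
```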
Frameworks: We used the open source Python frameworks Jax [43], for calculation and automatic differentiation, Hydra9, for configuring experiments, and $\mathrm { A i m } ^ { 1 0 }$ for experiment tracking.
# M.2 Hyperparameters and Tuning
The hyperparameters for the BRF model trained on the cue task to obtain the plots in Fig. 2 are shown in Tab. A3. Tables A4, A5, A6, and A7 show the hyperparameters used for training the neuron models on each benchmark dataset. For each model and task, we manually tuned the number of neurons, batch size, number of layers, surrogate gradient, and number of ignored time steps. Since it is impossible to run all hyperparameter configurations, we initially oriented the search on the configurations from the authors of the original model and then tuned the hyperparameters with educated guesses and small grid searches. Increasing the number of parameters did not always result in better test accuracy due to overfitting; hence the number of neurons (and therefore the number of parameters) varies between different neuron models and tasks.
# M.3 Compute resources
From the benchmark datasets, the Pathfinder-E experiments took the longest to execute, with $\approx$ 25 hours of execution time for 300 epochs on a single L40 GPU. The SHD experiments were the shortest, with $\approx 5$ minutes of execution time for 300 epochs on an L40 GPU.
Table A4: List of hyperparameters corresponding to the SHD dataset simulations. Hyperparameters that were different for HYPR are denoted in parentheses. $t _ { 0 }$ defines the time step up to which the network output is ignored for the loss computation and class prediction. $m _ { \mathrm { g r a d } }$ is the gradient clipping magnitude.
Table A5: List of hyperparameters corresponding to the ECG dataset simulations. Hyperparameters that were different for HYPR are denoted in parentheses. $t _ { 0 }$ defines the time step up to which the network output is ignored for the loss computation and class prediction. $m _ { \mathrm { g r a d } }$ is the gradient clipping magnitude.
# M.4 Code availability
The code to reproduce all experiments from this work is publicly available on GitHub under the CC BY-SA 4.0 license11 at https://github.com/IMLTUGraz/HYPR.
# Acknowledgements
This research was funded in whole or in part by the Austrian Science Fund (FWF) [10.55776/COE12] (R.L., M.B., Ö.Ö.), and by NSF EFRI grant #2318152 (R.L., Y.B.).
Table A6: List of hyperparameters corresponding to the SMNIST dataset simulations. Hyperparameters that were different for HYPR are denoted in parentheses. $t _ { 0 }$ defines the time step up to which the network output is ignored for the loss computation and class prediction. $m _ { \mathrm { g r a d } }$ is the gradient clipping magnitude.
Table A7: List of hyperparameters corresponding to the SCIFAR and PATHFINDER-E dataset simulations with the BRF neuron model [17]. Hyperparameters that were different for HYPR are denoted in parentheses. $t _ { 0 }$ defines the time step up to which the network output is ignored for the loss computation and class prediction. $m _ { \mathrm { g r a d } }$ is the gradient clipping magnitude.
References
[1] Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659–1671, 1997.
[2] Wulfram Gerstner, Werner M Kistler, Richard Naud, and Liam Paninski. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press, 2014.
[3] Aaron R Young, Mark E Dean, James S Plank, and Garrett S Rose. A review of spiking neuromorphic hardware communication systems. IEEE Access, 7:135606–135620, 2019.
[4] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in Neural Information Processing Systems, 34:572–585, 2021.
[5] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. Hippo: Recurrent memory with optimal polynomial projections. Advances in Neural Information Processing Systems, 33:1474–1487, 2020.
[6] Ankit Gupta, Albert Gu, and Jonathan Berant. Diagonal state spaces are as effective as structured state spaces. Advances in Neural Information Processing Systems, 35:22982–22994, 2022.
[7] Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and Soham De. Resurrecting recurrent neural networks for long sequences. In International Conference on Machine Learning, pages 26670–26698. PMLR, 2023.
[8] Yang Li, Yinqian Sun, Xiang He, Yiting Dong, Dongcheng Zhao, and Yi Zeng. Parallel spiking unit for efficient training of spiking neural networks. In 2024 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2024.
[9] Yulong Huang, Zunchang Liu, Changchun Feng, Xiaopeng Lin, Hongwei Ren, Haotian Fu, Yue Zhou, Hong Xing, and Bojun Cheng. PRF: Parallel resonate and fire neuron for long sequence learning in spiking neural networks. arXiv preprint arXiv:2410.03530, 2024.
[10] Peng Xue, Wei Fang, Zhengyu Ma, Zihan Huang, Zhaokun Zhou, Yonghong Tian, Timothée Masquelier, and Huihui Zhou. Channel-wise parallelizable spiking neuron with multiplication-free dynamics and large temporal receptive fields. arXiv preprint arXiv:2501.14490, 2025.
[11] Malyaban Bal and Abhronil Sengupta. P-spikessm: Harnessing probabilistic spiking state space models for long-range dependency tasks. In The Thirteenth International Conference on Learning Representations (ICLR), 2025.
[12] Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1):3625, July 2020.
[13] Ronald J. Williams and David Zipser. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation, 1(2):270–280, June 1989.
[14] Charlotte Frenkel and Giacomo Indiveri. ReckOn: A $2 8 \mathrm { n m }$ sub-mm2 task-agnostic spiking recurrent neural network processor enabling on-chip learning over second-long timescales. In IEEE International Solid-State Circuits Conference (ISSCC), volume 65, pages 1–3, 2022.
[15] Maximilian Baronig, Romain Ferrand, Silvester Sabathiel, and Robert Legenstein. Advancing spatio-temporal processing in spiking neural networks through adaptation. arXiv preprint arXiv:2408.07517, 2025.
[16] Alexandre Bittar and Philip N Garner. A surrogate gradient spiking baseline for speech command recognition. Frontiers in Neuroscience, 16:865897, 2022.
[17] Saya Higuchi, Sebastian Kairat, Sander Bohte, and Sebastian Otte. Balanced resonate-andfire neurons. In International Conference on Machine Learning (ICML), pages 18305–18323. PMLR, 2024.
[18] Ronald J Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 2(4):490–501, 1990.
[19] Corentin Tallec and Yann Ollivier. Unbiased online recurrent optimization. In International Conference On Learning Representation (ICLR), 2018.
[20] Asier Mujika, Florian Meier, and Angelika Steger. Approximating real-time recurrent learning with random kronecker factors. Advances in Neural Information Processing Systems, 31, 2018.
[21] Jacob Menick, Erich Elsen, Utku Evci, Simon Osindero, Karen Simonyan, and Alex Graves. Practical real time recurrent learning with a sparse approximation. In International Conference On Learning Representation (ICLR), 2021.
[22] David Silver, Anirudh Goyal, Ivo Danihelka, Matteo Hessel, and H. V. Hasselt. Learning by directional gradient descent. In International Conference On Learning Representation (ICLR), 2022.
[23] Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Di He, and Zhouchen Lin. Online training through time for spiking neural networks. Advances in Neural Information Processing Systems, 35:20717–20730, 2022.
[24] Jürgen Schmidhuber. A fixed size storage $o ( n ^ { 3 } )$ time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2):243–248, 1992.
[25] Kazuki Irie, Anand Gopalakrishnan, and Jürgen Schmidhuber. Exploring the promise and limits of real-time recurrent learning. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[26] Nicolas Zucchet, Robert Meier, Simon Schug, Asier Mujika, and Joao Sacramento. Online learning of long-range dependencies. Advances in Neural Information Processing Systems, 36:10477–10493, 2023.
[27] Wei Fang, Zhaofei Yu, Zhaokun Zhou, Ding Chen, Yanqi Chen, Zhengyu Ma, Timothée Masquelier, and Yonghong Tian. Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies. Advances in Neural Information Processing Systems, 36:53674–53687, 2023.
[28] Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Alexander Andreopoulos, David J. Berg, Jeffrey L. McKinstry, Timothy Melano, Davis R. Barch, Carmelo di Nolfo, Pallab Datta, Arnon Amir, Brian Taba, Myron D. Flickner, and Dharmendra S. Modha. Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences, 113(41):11441–11446, 2016.
[29] Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, and Wolfgang Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. Advances in Neural Information Processing Systems, 31, 2018.
[30] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[31] Aaron Voelker, Ivana Kajic´, and Chris Eliasmith. Legendre memory units: Continuous-time representation in recurrent neural networks. Advances in Neural Information Processing Systems, 32, 2019.
[32] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
[33] Guy E. Blelloch. Prefix sums and their applications. Technical Report CMU-CS-90-190, School of Computer Science, Carnegie Mellon University, November 1990.
[34] Jimmy TH Smith, Andrew Warrington, and Scott Linderman. Simplified state space layers for sequence modeling. In The Eleventh International Conference on Learning Representations (ICLR), 2023.
[35] Bojian Yin, Federico Corradi, and Sander M Bohte´. Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks. Nature Machine Intelligence, 3(10): 905–913, 2021.
[36] Benjamin Cramer, Yannik Stradmann, Johannes Schemmel, and Friedemann Zenke. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 33(7):2744–2757, 2022.
[37] Pablo Laguna, Roger G Mark, A Goldberg, and George B Moody. A database for evaluation of algorithms for measurement of qt and other waveform intervals in the ecg. In Computers in Cardiology, pages 673–676. IEEE, 1997.
[38] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
[39] Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006, 2020.
[40] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009.
[41] Sumit B Shrestha and Garrick Orchard. Slayer: Spike layer error reassignment in time. Advances in Neural Information Processing Systems, 31, 2018.
[42] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2022.
[43] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http: //github.com/jax-ml/jax. | Recurrent spiking neural networks (RSNNs) can be implemented very efficiently
in neuromorphic systems. Nevertheless, training of these models with powerful
gradient-based learning algorithms is mostly performed on standard digital
hardware using Backpropagation through time (BPTT). However, BPTT has
substantial limitations. It does not permit online training and its memory
consumption scales linearly with the number of computation steps. In contrast,
learning methods using forward propagation of gradients operate in an online
manner with a memory consumption independent of the number of time steps. These
methods enable SNNs to learn from continuous, infinite-length input sequences.
Yet, slow execution speed on conventional hardware as well as inferior
performance has hindered their widespread application. In this work, we
introduce HYbrid PRopagation (HYPR) that combines the efficiency of
parallelization with approximate online forward learning. Our algorithm yields
high-throughput online learning through parallelization, paired with constant,
i.e., sequence length independent, memory demands. HYPR enables parallelization
of parameter update computation over subsequences for RSNNs consisting of
almost arbitrary non-linear spiking neuron models. We apply HYPR to networks of
spiking neurons with oscillatory subthreshold dynamics. We find that this type
of neuron model is particularly well trainable by HYPR, resulting in an
unprecedentedly low task performance gap between approximate forward gradient
learning and BPTT. | [
"cs.NE",
"cs.AI",
"cs.LG"
] |
# 1 INTRODUCTION
As human society deepens its reliance on information systems and information technology, the need to develop information systems in an efficient, rigorous, and dependable way is vital. Conceptual modeling has long been recognized as a valuable foundation from which to develop information systems because it involves extracting concepts from the real world, or application domain, and representing them in a way that supports interaction between designers and users of the systems. We define conceptual modeling as: an activity that occurs during information systems development and use that involves capturing, abstracting, and representing relevant aspects of reality, to support understanding, communication, design, and decision making. Conceptual models are composed of constructs, such as entities, events, goals, attributes, relationships, roles, and processes, connected by well-defined rules.
The field of conceptual modeling emerged in the 1970s. It was initially understood as a phase of information systems development that systematically captures user requirements for database design. For example, relational database design, especially in large organizations, was preceded by conceptual modeling using entity-relationship diagrams (ERD), extended entity-relationship diagrams (EERD) [4, 177, 212, 222] or class diagrams in the Unified Modeling Language (UML) [51]. Over time, the field has expanded and matured, making contributions to requirements engineering, knowledge representation, process modeling, goal and value representation, ontology development, philosophy, and more recently, data analytics [54, 55, 87, 146, 148]. In the field of process modeling and process engineering, for example, popular conceptual modeling languages include Business Process Modeling and Notation (BPMN), Data Flow Diagrams (DFD), activity diagrams in UML, Event-driven process chains (EPC), and others. Some languages are designed for specific applications, ranging from wide-domain applicability, such as enterprise modeling (e.g., ArchiMate), to more niche languages, such as the Formalized Administrative Notation (FAN), originally designed to support administrative workers in Argentina [12, 103].
Because conceptual modeling activities include abstraction and representation, they provide multiple benefits for domain understanding and comprehension [30, 226]. These models help structure reality by omitting aspects of the domain deemed irrelevant for some purpose [167]. Conceptual models reduce the complexity of the domains to be represented and help cope with the complexity of the information systems development process [51, 76, 202, 214]. As such, conceptual models may aid in decision making and problem solving [242].
Conceptual models are also commonly viewed as boundary objects [20]. A boundary object is "an artifact or a concept with enough structure to support activities within separate social worlds, and enough elasticity to cut across multiple social worlds" [211]. Indeed, conceptual models are frequently used by both technical and non-technical business users to gain a common basis for understanding the goals and requirements of systems to be built or the data to be used in decision making.
Conceptual modeling as a recognized field is now entering its 6th decade of practice. The proven record of conceptual modeling in the context of information technology development and use suggests it is poised to remain an important development and analysis tool for the foreseeable future. At the same time, to remain relevant, conceptual modeling needs to continue evolving and adapting to new technologies and application domains, as well as continued digitalization [127]. To ensure the future impact and effective use of conceptual modeling, it is valuable to analyze what has been accomplished to date, and to identify fruitful areas for future development, as is the goal of this survey.
To guide the evolution of conceptual modeling research, various frameworks have been developed [131, 146, 158, 184, 196, 236] where the authors consider the state of the art in theory and practice. Most of these efforts do not engage in a comprehensive survey of conceptual modeling publications. Instead, they are typically based on new conceptual modeling assumptions (e.g., mediation versus representation) and provide guidance on their applications to the conceptual modeling community. These frameworks lack the nuances of a comprehensive, structured literature review, aimed at inclusive coverage of the topics at granular and high-abstraction levels [171]. A comprehensive and structured literature review promises fruitful ideas for specific research projects in conceptual modeling and the identification of broader trends and long-term research opportunities.
Coinciding with the 50-year anniversary of conceptual modeling, the objectives of this paper are to: review concepts, topics, and themes, as research in conceptual modeling has progressed over time; and identify needed continued research due to on-going technology advances and increased digital ubiquity. The contributions are to: provide a review of conceptual modeling; show how research topics related to conceptual modeling have evolved; and propose how research on conceptual modeling can continue to contribute to our digital society.
We reviewed over 5,300 papers from 35 related journals and conferences which, to the best of our knowledge, is the largest corpus of conceptual modeling research ever analyzed. These publications were retrieved from different, relevant disciplines including computer science, information systems, software engineering, human-computer interaction, and database design. We cataloged the papers into a text corpus and analyzed them with a mixed method approach. Our results are based on natural language processing (using LDA analysis of full-text papers). We also developed a specialized conceptual modeling language model to identify semantically similar terms and a topic model to identify meaningful groups of papers whose topics are closely related. Finally, we augmented the quantitative evidence with qualitative analysis and insights. This is an effort to review broadly the conceptual modeling literature belonging to different disciplines, genres, and levels of maturity (e.g., journals with multi-year review cycles, short conference papers, and long conference papers). Following this broad survey, future research directions are identified for how conceptual modeling can, and should, evolve.
This paper proceeds as follows. Section 2 provides background on conceptual modeling and details the structured literature review. This is followed, in Section 3, by a discussion of the findings and implications extracted from the review. Section 4 discusses the implications for the continued evolution of conceptual modeling research. Section 5 concludes the paper. Appendix A (supplement) provides the entire list of papers considered. Appendix B details the results for the topic analysis by year. Appendix C identifies the top 40 bigrams and trigrams per year.
# 2 PRIOR RESEARCH AND LITERATURE REVIEW METHOD
This section reviews prior work on reviewing and analyzing conceptual modeling, as well as provides an overview of the approach taken in this paper.
# 2.1 Related work: prior reviews of conceptual modeling
Conceptual modeling can be traced to the first uses of organizational process flowcharts in the 1920s [47]. These graphical notations represented organizational decision logic, flow of resources and activities, becoming the precursors of business process models of today. Since the 1950s, these diagrams increasingly contained data objects and flows of information through systems. Possibly the first mention of the need for graphical conceptual data models was made by Young and Kent in 1958 [244]: "the graphical presentation, which can be modified to suit the needs of the user (e.g., by including descriptive labels), should be helpful in determining the best organizational files and subroutines and in providing a check on redundant and superfluous information." (p.479).
Conceptual modeling, as a discipline, emerged in the 1970s in response to the emergence of new technologies, such as hierarchical, network and relational databases, as well as increased societal reliance on information systems. With the use and failures of databases and other information systems, it became clear that it was important to correctly capture real-world facts and rules, irrespective of technological solutions. In the data management area, the notion of separating logical, physical, and conceptual layers emerged [116, 209]. Through pioneering works of Abrial [2], Bachman [13-15], Codd [43], Chen [38],
Sundgren [218], Olle [168], Sibley [207], Nijssen [163-166], Kent [113], Bubenko [26, 27], and others, foundational notions of conceptual modeling and first notations and constructs were introduced. A new field of conceptual modeling was born.
Throughout its 50-year history, there have been many efforts to survey the state of the art in the area. These efforts emerged from recognizing the importance and usefulness of conceptual modeling and the various ways in which conceptual modeling has progressed. Early review efforts focused on the traditional application of conceptual modeling within the context of database design [101, 177] and process management [183]. Popular review topics included: ERDs and EERDs [4, 177, 212, 222]; UML [51]; conceptual modeling and domain ontologies [148, 180, 216, 217, 230]; and BPMN [250]. Recent research considers emergent applications of conceptual modeling within the contexts of big data [214], analytics [159, 160], machine learning [141, 247], corporate social responsibility [53], and others. These publications, although valuable, are highly focused.
In addition, significant frameworks, which seek to capture the essence of conceptual modeling and suggest future advances and research opportunities, have been developed [54, 76, 146, 158, 184, 236]. However, they have been derived mainly by considering the state of the art in conceptual modeling but lack a comprehensive survey of conceptual modeling publications from which to propose a more granular guidance for future research to the conceptual modeling community.
Prior reviews of conceptual modeling research were generally undertaken from a given disciplinary perspective. From an information systems perspective, Wand and Weber [236] assessed past conceptual modeling research and suggested focusing on evaluation, rather than development, of new conceptual modeling grammars. (A conceptual modeling grammar (also known as a conceptual modeling language, or meta-model) is a formal specification for the creation of conceptual models. It comprises constructs and rules that prescribe how to combine the constructs to model real-world domains [236].) Recker et al. [184] conducted a focused review of publications in information systems, which led to the formulation of a Framework for Conceptual Modeling in the Digital World. This work suggested that conceptual modeling scripts become tools of mediation between digital and physical systems. Similarly, Frank et al. [76] conducted a synoptical review of modeling publications in the journal Business and Information Systems Engineering. From a retrospective analysis, they suggested fruitful research opportunities, especially those dealing with emerging and new phenomena.
Some review efforts sought to combine the perspectives and analyze a wide spectrum of research in computer science, software engineering, information systems, and other relevant domains. These studies typically focused on a particular topic or modeling language. For example, Aguirre-Urreta and Marakas [4] conducted a survey of semantic data modeling techniques (specifically, extended entity-relationship vs object-oriented modeling), based on a wide range of journal articles from computer science, software engineering and information systems areas. Molina et al. [154] conducted a review of conceptual modeling of groupware systems by analyzing the software engineering and human–computer interaction literature. This survey identified several popular groupware systems notations, including the ConcurTaskTrees (CTT)
[176], Group Task Analysis (GTA) Framework [228], Collaborative Usability Analysis (CUA) notation [178], and a task analysis method called the Multiple Aspect Based Task Analysis (MABTA) [130].
Other reviews of conceptual modeling are based on surveys of practitioners, as opposed to the literature alone. Notable insights on the usage of UML, for example, were made by Dobing and Parsons [58], who showed that UML is not only a language used for software engineering but also for database design. Similar analysis was conducted for other languages. For example, trends in the usage of BPMN were investigated by Compagnucci et al. [45], Rolon et al. [191], Bork et al. [25], and Muehlen et al. [253].
Recognizing important historic and cultural differences in the usage of conceptual modeling, some reviews were sensitive to specific contexts and cultures. Davies et al. [51] conducted a survey of modeling languages with a focus on those used in Australia. Fettke [71] targeted German practitioners and also compared German and Australian modeling traditions. By combining different perspectives and sources of information, these reviews generated valuable theoretical and pedagogical insights, such as the frequency of different element usage, or the popularity of different languages.
Most reviews of conceptual modeling research focused on publications in journals, rather than conferences. This choice is understandable. Scientific journals, commonly having multiple review cycles, are generally considered to offer more rigorous, validated, and practically dependable knowledge. At the same time, when the objective is to identify emerging trends, it is also important to consider conceptual modeling conferences.
Much prior work has been based on manual extraction and, often, manual analysis of the literature. For example, Recker et al. [184] coded the articles as exemplifying engrained assumptions in conceptual modeling (e.g., modeling conducted by professional analysts). Frank et al. [76] manually extracted modeling publications from the Business & Information Systems Engineering journal. A manual process is generally preferred because it permits the authors to carefully examine each publication for inclusion and relevance, as well as to classify a paper based on a given coding schema.
With the expansion of the literature, and its online availability, it becomes increasingly difficult to capture the full spectrum of work in conceptual modeling based on manual coding. Automated approaches to a literature review are increasing across all research fields [233]. For example, Härer and Fill [95] conducted a fully automated analysis of modeling publications from eight computer science and software engineering journals and the International Conference on Conceptual Modeling (ER). They used Latent Dirichlet Allocation (LDA) modeling and identified the evolution of modeling topics over time, although they did not propose a comprehensive research agenda stemming from these literature findings. In this research, we also adopt LDA as an approach to the literature review.
# 2.2 Structured Literature Review Method for Analyzing Conceptual Modeling Publications
# 2.2.1 General assumptions and approach
To analyze conceptual modeling research, we identified the relevant literature and performed a topic analysis to extract the most frequent topics that occurred by year. An overview of this structured review process is shown in Figure 1. We manually pre-screened each paper to ensure their principal contribution is conceptual modeling, which is challenging using automated approaches alone. To add validity to the insights, we also combined these with a qualitative analysis.
Figure 1. Structured survey of research on conceptual modeling
To appreciate and analyze the enormous amount of research that has been conducted on conceptual modeling, we performed an inclusive structured literature review. We adopted a multidisciplinary perspective, with our sources and analysis reflecting a broad spectrum of topics, themes, and trends in conceptual modeling. We also included several conferences, which offer insights into emerging research, and permit the capture of research results which, for various reasons, do not result in journal articles. Although our choice of literature sources is not limited to a given discipline, our disciplinary ties are mainly to the information systems area. We sought to mitigate any biases arising from adopting a particular perspective by having 6 coders (4 research assistants and 2 paper authors) and using a wide array of publication sources (35 in total).
# 2.2.2 Identification of relevant literature
To undertake a comprehensive review, we first identified a list of sources of relevant literature. As typical of topical literature reviews (e.g., [51, 60, 124, 184, 243]), our focus was on journal publications, since academic journals typically involve multiple review cycles, leading to rigorous results. We selected a list of journals identified in prior research [4, 51, 184, 196, 247] and surveys of researchers working in the conceptual modeling area as further outlets for conceptual modeling research (e.g., https://sigsand.com/sigsand-journal-list-survey/). We also included publications from well-recognized conferences that specialize in conceptual modeling topics: the International Conference on Conceptual Modeling (ER Conference), the International Conference on Advanced Information Systems Engineering (CAiSE) and Exploring Modeling Methods for Systems Analysis and Development (EMMSAD). To capture research on behavioral and managerial implications of conceptual modeling, as well as additional empirical work on conceptual modeling languages, we added publications from two leading information systems conferences: the International Conference on Information Systems (ICIS), and Americas Conference on Information Systems (AMCIS). Although not exhaustive, these conferences still facilitated the inclusion of a wider spectrum of conceptual modeling topics.
# 2.2.3 Keyword search for relevant topics by year
We used keywords to search databases that contained full-text papers related to conceptual modeling in the identified journals and conferences. We used full-text search, as opposed to the more common search of abstracts and keywords, since "searching full text is more likely to find relevant articles than searching only abstracts" [132, p.1].
The terms used for the search were: "conceptual model," "conceptual modeling grammar," "ontology," and meaningful variations of these terms. Table 1 provides examples of other keywords used. We manually reviewed each search result to ensure we were: capturing an appropriate topic; identifying an appropriate paper; and interpreting the results within the entire context of the paper. We trained 4 research assistants to code the papers as candidates for inclusion. One student had completed a master's degree; two were current master's students; and one was an advanced bachelor student. A co-author supervised this effort, developing the explicit inclusion protocol shown in Table 1, to ensure process consistency, transparency, and replicability [29, 220, 233].
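The keyword-screening step above can be sketched as a simple pattern match over full texts. The pattern list and paper snippets below are hypothetical stand-ins for the actual Table 1 protocol, which was applied manually by trained coders; this sketch only illustrates the mechanics of flagging candidate papers.

```python
import re

# Hypothetical patterns approximating the search terms ("conceptual model",
# "conceptual modeling grammar", "ontology", and variations thereof).
PATTERNS = [
    r"conceptual model(?:s|ing|ling)?",
    r"conceptual modeling grammar",
    r"ontolog(?:y|ies)",
]

def matches_protocol(full_text: str) -> bool:
    """Flag a paper as a candidate for manual review if its full text
    contains any of the search terms (case-insensitive)."""
    return any(re.search(p, full_text, re.IGNORECASE) for p in PATTERNS)

# Toy corpus: ids and full texts are invented for illustration.
papers = {
    "p1": "We propose a conceptual modeling grammar for processes.",
    "p2": "A study of database index performance.",
}
candidates = [pid for pid, text in papers.items() if matches_protocol(text)]
```

Each flagged candidate would then go through the manual inclusion review described above, rather than being included automatically.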
The result of this process included 3,910 papers published by January 2021. The literature review inclusion protocol is also summarized in Table 1. We then added two additional relevant sources: the International Conference on Advanced Information Systems Engineering (CAiSE) and the Enterprise Modelling and Information Systems Architectures Journal (EMISAJ).
Table 1. Literature review inclusion protocol for conceptual modeling (CM)
The final number of papers analyzed was 5,303 across 35 journals and conferences collected over the period 1976 to 2022. The sources of our analysis are summarized in Table 2.
Table 2: Sources of publications for structured literature review
# 2.3 Topic analysis for evolution
We analyzed the full text of the publications we retrieved (as opposed to abstracts only). Full-text articles and abstracts are structurally different [44]. Abstracts consist of shorter sentences and very succinct text presenting only the most important findings. Studies have shown that text mining efforts limited to abstracts lack important knowledge present in the full-text documents [239].
To understand how topics evolved, we applied natural language processing (NLP) and topic analysis on a yearly basis. Using NLP has become increasingly popular to uncover useful information from large bodies of text and also allows for validation of findings through replication [1]. For most of the NLP analysis we used Latent Dirichlet Allocation (LDA), a common analysis technique that has been widely applied to natural language processing, social media analysis, and information retrieval. For example, it has been used to analyze scientific publications during the early phase of the COVID-19 pandemic [5], analyze the evolution of information systems business value research [249], assess the value of editorial reviews in user-generated content platforms [56], and investigate Twitter data in real-time during a natural disaster [251]. The steps involved in the LDA analysis are described in Table 3 and illustrated using publications in the year 2020.
Table 3. Steps in topic analysis (Year 2020)
Step a. To assemble a bucket of text, we used all the papers identified in a given year to create a dataset and derive the topic model for that year.
Step b. As common in NLP, we engaged in preprocessing to prepare the data for analysis [59]. Preprocessing methods play an important role in preparing the data for insights and typically comprise the first step in the text mining process [231]. Preprocessing in our case involved creating a subset of the data for analysis, removing stop words, and lemmatizing to reduce the dimensionality of the data. We used the NLTK (Natural Language Toolkit) library [134], which includes a dictionary of common English stop words to remove (e.g., the, in, a, an), and lemmatized the corpus to reduce the dimensionality of the dataset. The lemma of a word includes its base form plus inflected forms [69, 231]. For example, the words "models", "modeled" and "modeling" have "model" as their lemma. Lemmatization groups together various inflected forms of a word into their base form (e.g., "modeling" to "model") [142]. The spaCy library supports our lemmatization task by considering nouns, adjectives, verbs, and adverbs in the documents. To augment the dataset, we used n-grams, or sequences of n consecutive items. For example, bigrams (e.g., conceptual model) and trigrams (e.g., business process modeling) are included in our analysis. In Appendix C of the online supplement, we list the top 40 bigrams and trigrams created based on our corpus.
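A minimal, dependency-free sketch of this preprocessing step is shown below. The stop-word set and the suffix-stripping `toy_lemma` helper are deliberately crude stand-ins for NLTK's stop-word list and spaCy's lemmatizer, which the study actually used; only the overall pipeline shape (tokenize, remove stop words, lemmatize, count bigrams) reflects the text above.

```python
from collections import Counter

# Toy stop-word set; NLTK's English list is much larger.
STOP_WORDS = {"the", "in", "a", "an", "of", "and", "to"}

def toy_lemma(word: str) -> str:
    # Crude stand-in for spaCy lemmatization: strip common suffixes,
    # keeping at least four characters of stem.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: len(word) - len(suffix)]
    return word

def preprocess(text: str) -> list[str]:
    tokens = [t for t in text.lower().split() if t.isalpha()]
    return [toy_lemma(t) for t in tokens if t not in STOP_WORDS]

def bigrams(tokens: list[str]) -> Counter:
    # Count adjacent token pairs, e.g. ("conceptual", "model").
    return Counter(zip(tokens, tokens[1:]))

doc = "the models and modeling of conceptual models in a diagram"
tokens = preprocess(doc)
```

Here "models" and "modeling" both collapse to the lemma "model", mirroring the example in the text, and the bigram counter would surface "conceptual model" as a frequent pair over a real corpus.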
After preprocessing the data, topic modeling techniques identify relationships among the text documents. We employed the popular Latent Dirichlet Allocation (LDA) method, which has been widely applied to natural language processing, social media analysis, and information retrieval [104, 124, 233]. LDA is an unsupervised probabilistic method that assumes each document can be represented as a probabilistic distribution over latent topics [104]. Most LDA models focus on topic extraction [34] by uncovering hidden structures (semantics) from a large corpus. In LDA, topics are represented by word probabilities. The words with the highest probabilities in each topic provide a good indication of the topic. Gensim (an open-source Python library for topic modeling) and MALLET's LDA implementations were used to run the topic modeling on the datasets (yearly papers) [104]. MALLET is a Java-based package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text [147].
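For illustration, a toy collapsed Gibbs sampler for LDA is sketched below. The study used Gensim's and MALLET's implementations, which are far more efficient and full-featured; the hyperparameters, corpus, and iteration count here are arbitrary and serve only to show the mechanics of inferring per-document topic distributions.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampling for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    # z[d][i]: current topic assignment of word i in document d.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]   # document-topic counts
    nkw = defaultdict(int)                 # (topic, word) counts
    nk = [0] * n_topics                    # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[(k, w)] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[(k, w)] -= 1; nk[k] -= 1
                # Full conditional: p(topic t) ∝ (n_dt+α)(n_tw+β)/(n_t+Vβ)
                weights = [(ndk[d][t] + alpha) * (nkw[(t, w)] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[(k, w)] += 1; nk[k] += 1
    # Smoothed per-document topic distributions theta.
    return [[(ndk[d][t] + alpha) / (len(doc) + n_topics * alpha)
             for t in range(n_topics)] for d, doc in enumerate(docs)]

docs = [["model", "entity", "entity"], ["process", "bpmn", "process"]]
theta = lda_gibbs(docs, n_topics=2)
```

Each row of `theta` is one document's distribution over latent topics; inspecting the highest-probability words per topic (via the `nkw` counts) gives the topic labels discussed in the text.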
Step c. To evaluate the topic models, we computed the coherence scores for various numbers of topics to identify the optimal model. We then used the pyLDAvis Python package to visualize the information contained in a topic model [208]. A good document model should provide both coherent patterns of language and an accurate distribution of words within documents [188]. We used Gensim's coherence model to calculate the coherence of each topic [134].
Figure 2. Coherence score by number of topics
To select the optimal number of topics, we chose the number that gave the highest coherence score. Document models with higher topic coherence are more interpretable (i.e., words in a coherent topic have higher mutual information, thus are assumed to be related). We capped the maximum number of topics at 15 and created a visual representation, as shown in Figure 2, with an example of articles published in 2020. The image shows the coherence score at different values of N (number of topics). The same analysis was performed for each of the subsets of the corpus (i.e., content of all research papers by year). When displaying topics to users, each topic T is generally represented as a list of the $M = 5, \ldots, N$ most probable words for that topic, in descending order.
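The selection criterion can be illustrated with a small coherence computation. The text does not specify which coherence measure Gensim was configured with, so the sketch below uses the simple UMass measure (pairwise log document co-occurrence of a topic's top words) over a toy corpus; any measure for which "higher is more coherent" would support the same argmax selection.

```python
import math

def umass_coherence(topic_words, docs):
    """Average pairwise log((D(wi, wj) + 1) / D(wj)) over top topic words,
    where D counts documents containing the given word(s)."""
    def doc_freq(*words):
        return sum(1 for d in docs if all(w in d for w in words))
    score, pairs = 0.0, 0
    for i in range(1, len(topic_words)):
        for j in range(i):
            wi, wj = topic_words[i], topic_words[j]
            score += math.log((doc_freq(wi, wj) + 1) / doc_freq(wj))
            pairs += 1
    return score / pairs

# Toy corpus of three "documents" as word sets.
docs = [{"process", "bpmn"}, {"process", "bpmn"}, {"entity", "schema"}]
coherent = umass_coherence(["process", "bpmn"], docs)      # words co-occur
incoherent = umass_coherence(["process", "schema"], docs)  # words never do
```

With candidate models for N = 2, ..., 15, one would average these per-topic scores per model and keep the N with the highest average, which is the selection rule described above.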
Step d. To visualize the topics created and the top terms per topic, we used LDAvis, a web-based interactive visualization [208]. Gensim's pyLDAvis is the most widely used tool to visualize the information contained in a topic model. Since each topic is embedded in a high-dimensional space, pyLDAvis applies dimensionality reduction techniques to project each topic's high-dimensional embedding onto a 2D space [208]. Note that pyLDAvis' default principal component analysis (PCA) method maximizes the variance of each topic's projection along the new axes using the two principal components, PC1 and PC2 [109]. Figure 3 is our visualization of the topic models [41, 208]. Each bubble on the left-hand side represents a topic; its size represents the marginal topic distribution (i.e., the percentage that the topic makes up in the corpus) [208]. A significant topic model has relatively large, non-overlapping bubbles scattered throughout the chart, instead of being clustered in one quadrant. A model with too many topics typically has many overlapping, small bubbles clustered in one region of the chart. On the right-hand side, the most salient terms are shown. For additional illustration, we highlighted Cluster 2, showing the top 30 most salient terms that form the selected topic and their estimated term frequencies.
Figure 3. Visualization of Topic Models. Legend: marginal topic distribution: size of the bubble; PC1 and PC2: principal component analysis
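The 2D projection underlying this kind of plot can be sketched directly. The example below assumes each topic is given as a term-probability vector and applies plain PCA (center, eigendecompose the covariance, keep the top two components); pyLDAvis performs this projection, along with the bubble sizing and salient-term ranking, internally, and the toy topic vectors here are invented.

```python
import numpy as np

def project_topics_2d(topic_term_matrix):
    """Project topic-term probability vectors onto PC1 and PC2."""
    X = np.asarray(topic_term_matrix, dtype=float)
    X = X - X.mean(axis=0)                  # center the topic vectors
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pcs = eigvecs[:, ::-1][:, :2]           # top-2 components: PC1, PC2
    return X @ pcs

# Four toy topics over a five-term vocabulary (rows sum to 1).
topics = [[0.6, 0.2, 0.1, 0.05, 0.05],
          [0.5, 0.3, 0.1, 0.05, 0.05],
          [0.05, 0.05, 0.1, 0.3, 0.5],
          [0.05, 0.05, 0.1, 0.2, 0.6]]
coords = project_topics_2d(topics)          # one (x, y) point per topic
```

In a plot of `coords`, the two process-like topics and the two entity-like topics would land in separate regions, which is the visual separation the discussion of Figure 3 relies on.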
Step f. The Doc2Vec algorithm was used to build a natural language model that created language representations from the full texts of all the conceptual modeling papers from 1975 to 2022. The model was built to complement the year-by-year analysis provided by the topic analysis. The model identifies semantic relationships between terms [115, 125]. For example, in Figure 4, we can take terms from cluster 2 (e.g., process, bpmn, model) and use the most_similar function of the Doc2Vec model to find terms with a positive resemblance to those terms. In this case, the top terms are EPC (event-driven process chain), DMN (decision model and notation), BPEL (business process execution language), and meta-model, which, within the context of process models, can refer to capturing informational and behavioral aspects of business processes. Notice that although 'bpmn' and 'bpel' are syntactically different, the language model learned their semantic similarity (i.e., BPMN is used when designing and improving business processes and BPEL is the execution language used to execute these business processes).
Figure 4. Language model adds context to terms in topics
With the Doc2Vec model, we built our own domain (language) model of conceptual modeling. Based on this language model, we can estimate how close words are to each other (co-occurrence) and find similarities or synonyms without explicitly identifying them. The model can also be reused in future analyses. We then applied our language model to the buckets of research papers by year.
# 3 RESULTS AND FINDINGS
To present the findings from our literature review analysis, we employ a mixed-method approach that combines insights drawn from the statistical techniques described above with manual and qualitative insights. In the manual and qualitative review, we draw upon known publications from the 1970s to 2022. Our statistical analysis focuses mainly on research from 2005 to 2020. There was a three-fold increase in publication rates from 2005 onwards. Note that this does not necessarily represent an increase in the volume of research papers, although there is reported evidence of such an increase [184, 196]; it could also be a function of the increased inclusion of journal publications in the databases used to extract the papers. Hence, our analysis focuses on 2005-2020 to ensure that our results are less sensitive to the varied digitization practices of academic content. The focused analysis of more recent publications further permits stronger inferences regarding emerging topics in conceptual modeling, providing practical utility to researchers wishing to identify existing gaps. At the same time, we reviewed all available papers to uncover themes, draw general conclusions regarding the state of conceptual modeling research, and supplement and interpret our natural language processing findings with this manual effort. Table 4 reports the number of topics by year and the number of articles with full text found in our sample for that year.
Table 4: Topics and number of articles per year2
# 3.1 Topics for 1976 to 2004
Prior to 2005, there were 639 papers with full text extracted from the databases based on the process we followed. We followed the steps described in the methodology to summarize the significant topics that emerged. Below are the main topics (see Appendix A for the list of all papers and Appendix B for analysis of the entire literature by year).
Topic 0 (Analysis): terms of system, model, information, process, design, analysis, requirement, development, user, datum. This includes computer-aided software engineering [37] and improving the quality of data models [156]. Other work proposed a methodology to derive requirements for information systems development [198], examined cognitive fit in requirements modeling [3], and addressed understanding and representing user viewpoints during requirements determination [49]. The diffusion of information systems development methods was explored [23], as was object-oriented systems development [108]. Challenges of strategic data planning [205] were investigated, as were efforts to describe organizations as sets of business processes, to derive a conceptual framework for understanding such business processes and business process modeling [110], and to redesign business processes [221, 238].
Topic 1 (State flow): terms of model, state, event, time, system, specification, object, temporal. A structured operational semantics for UML-state charts was investigated [232], as was a formalization of
UML state machines using temporal logic [193], and a method for describing the syntax and semantics of UML statecharts [107]. Tool support for verifying UML activity diagrams [65] and automatically detecting and visualizing errors in UML Diagrams [32] was developed. Additional research focused on formal semantics of static and temporal state-oriented OCL constraints [74].
Topic 2 (Object-oriented modeling): terms of object, class, type, model, instance, attribute, property, set, define. A multi-level view model for secure object-oriented databases was developed [16], as was an analysis of the notion of, and issues related to, object-oriented query languages [22]. A template for defining enterprise modeling constructs was developed [169], as was a formal semantics of an Entity-Relationship-based query language [98].
Topic 3 (Entity-relationship modeling): terms of entity, relationship, attribute, set, database, relation, type, constraint, model, key. Papers include analyzing the entity-relationship approach to database design [161]; proposing a normal form for relational databases based on domains and keys [67]; proposing an algebra for a general entity-relationship model [172]; representing extended entity-relationship structures in relational databases [144]; and discussing how to map an entity-relationship schema into a SQL schema [123]. Justification for the inclusion of dependency normal form was investigated [128]. Researchers also investigated computational problems related to the design of normal form relational schemas [17] and the design of relational database schemata generally [248].
Topic 4 (Knowledge modeling): terms of type, knowledge, model, conceptual, concept, information, system, ontology, language, domain. Research on domain ontologies included a declarative approach for reusing them [152] and their grammatical specification [7]. Other work focused on an algebraic approach to modular construction of logic knowledge bases [204]; subtyping and polymorphism in object-role modeling [93]; and expressiveness in conceptual modeling [223].
Topic 5 (Data modeling): terms of database, datum, design, query, system, data, number, base, user, table. Papers include data base research [153], physical design for relational databases [73] and the sensitivity of physical design to changes in underlying factors [170]. Other efforts focused on a preliminary system for the design of DBTG data structures [83], database design principles for placement of delay-sensitive data on disks, and a case study of database design using the dataid approach [52].
Table 5. Emergence of topics from papers related to conceptual modeling
As can be seen from the automated extraction of topics, the main theme in conceptual modeling over the years, until 2005, focused on analysis and various types of conceptual modeling. Of note, Chen's [38] entity-relationship model was the main conceptual model used to represent an application domain. Smith and Smith [210] identified abstractions as an important way to capture semantics. Other work on data abstractions and semantic relationships was based on this work (e.g., [84, 85, 129, 138, 146, 173, 213, 240]). Efforts were underway to recognize the need to separate database design phases and to create formal ways to transform a conceptual design into a logical design, with the relational model becoming the standard (e.g., [222]). The emergence of object-oriented methodologies, systems development, UML, domain ontologies, grammars, and other methods began, consistent with our analysis. We can also conclude that conceptual modeling was largely focused on modeling data and processes.
# 3.2 Topics for 2005-2020
Table 5 provides an overview of the topics that emerged from 2005-2020. Each cell provides a topic and the most representative terms for that topic. The circled terms show, for example, how the topic of ontology progressed over the 15-year analysis. In 2005, ontology appeared in connection with semantic languages. Five years later, in 2010, it was not prominent; nor was it in 2017 and 2018. In other years there is evidence of its use with respect to the web, topics related to knowledge management and, more recently, for requirements and domain concepts.
In Table 5, the main terms (by cluster) are provided; the less-frequent ones are not shown for clarity of presentation. In 2010, for example, the clustered topics can similarly be labeled as: Topic 1 (data); Topic 2 (constraints); Topic 3 (business processes); Topic 4 (web services); Topic 5 (context); Topic 6 (UML); Topic 7 (modeling); Topic 8 (schema); Topic 9 (knowledge management). The topic analysis for 2020 is presented below. For all the other years, the details of the topic analysis are given in Appendix B.
# 3.2.1 Topic Analysis for Year 2020
To illustrate the results of our analysis, we highlight the year 2020. The topics are described below and are indicative of the types of research carried out in 2020.
Topic 1 (business process design and execution) refers to model, business, service, platform, and also method, design, capability, goal, and process. Sixteen papers fall under this topic. Business Process Management Suites (BPMS) are being adopted in organizations to increase business process agility, yet organizations struggle to achieve agile business processes. BPMS research is useful for practitioners wanting to adopt a business process management suite that addresses the difficulty of integrating with other applications [118]. Organizations operate within dynamic environments to which they need to adapt, and the age of digitization requires rapid design and re-design of enterprises. The design and engineering methodology for organizations (DEMO) is an established modeling method for representing the organization domain of an enterprise. Gray et al. [86] address stakeholder heterogeneity by enabling transformation of a DEMO organization construction diagram (OCD) into a BPMN collaboration diagram. Enterprise modeling and capability modeling facilitate the design and analysis of capabilities; Koutsopoulos et al. [119] introduce a meta-model that serves as the basis for capability change. Industry 4.0 has attracted much research and development over the last decade. At its core is the need to connect physical devices with their digital representations (i.e., digital twins). Sandkuhl & Stirna [201] analyze the suitability of enterprise modeling and capability management for developing and managing business-driven digital twins.
Topic 2 (business modeling and mining) is characterized by the terms process, datum, model, activity, business, event, log, time, task, and trace. Eighteen papers fall under this topic. Ramadan et al. [182] propose a BPMN-based framework that supports the design of business processes considering security, data-minimization, and fairness requirements. Camargo et al. [31] present an accuracy-optimized method to discover business process simulation models from execution logs. Several process mining techniques discover models for predictive analyses; these techniques need an appropriate time step size, the selection of which, thus far, has been an ad-hoc and manual endeavor. Pourbafrani et al. [179] propose a novel semi-automated time-granularity detection framework and highlight the importance of using accurate granularity in time step selection. Process mining aims to obtain insights from event logs to improve business processes. In complex environments with large variances in process behavior, analyzing and making sense of complex processes becomes challenging; insights into such processes can be obtained by identifying sub-groups of traces (cohorts) and studying their differences. Leemans et al. [126] introduce a framework that considers the ordering of activities in traces (control flow), the relative frequency of traces (stochastic perspective), and cost.
Topic 3 (process analysis) is characterized by the terms process, case, object, set, instance, state, event, model, context, and datum. Fourteen papers fall under this topic. Andrews et al. [9] present concepts for enabling context switching at runtime for object-aware process management and discuss use cases in which context switching capabilities can be utilized. Rodrigues et al. [189] explore the view of occurrents as transitions between situations and propose a framework for the ontological analysis of occurrents. Andree et al. [8] develop an exception handling technique for fragment-based case management (fCM) for handling unknown events.
Topic 4 (modeling languages) is characterized by the terms model, language, specification, tool, class, element, type, notation, object, and modeling. Ten papers fall under this topic. Bork et al. [25] provide a systematic literature review that analyzes published standard modeling language specifications, such as the Business Process Model and Notation and the Unified Modeling Language. This survey provides a foundation for research aiming to increase the consistency and improve the comprehensiveness of information systems modeling languages. The survey reveals heterogeneity in: (i) the modeling language concepts being specified; and (ii) the techniques being employed for the specification of these concepts. Zolotas [252] bridges a proprietary UML modeling tool used for model-based development of safety-critical systems with an open-source family of languages for automated model management.
Topic 5 (empirical evaluations) is characterized by the terms model, modeling, task, question, group, subject, conceptual, result, study, and experiment. Nine papers fall under this topic. Bernardez [21] evaluates whether Software Engineering students enhance their conceptual modeling performance after several weeks of practicing mindfulness. Verdonck et al. [230] study the extent to which the pragmatic quality of ontology-driven models is influenced by the choice of a particular ontology, given a certain understanding of that ontology.
Topic 6 (data models) is characterized by the terms datum, query, model, data, time, set, schema, database, and table. Thirteen papers fall under this topic. Wang et al. [237] propose a Deep Temporal Multi-Graph Convolutional Network (DT-MGCN) model that integrates a graph generation component with a spatial-temporal component to capture the dependencies between crime and various external factors. Hartmann et al. [96] implement an efficient approach for dynamic alternative route planning that can respond to road network changes. For data models used in big data analysis, such as multilayer networks, there is a need to transform user/application requirements using a modeling approach such as the extended entity-relationship (EER) model. Komar et al. [117] show how the EER approach can be leveraged for modeling given data to generate multilayer networks (MLNs) and appropriate analysis expressions on them.
Topic 7 (ontology) is characterized by the terms ontology, system, concept, information, conceptual, domain, vulnerability, knowledge base, and research. Fourteen papers fall under this topic. For example, Syed [219] presents the Cybersecurity Vulnerability Ontology (CVO), a conceptual model for formal knowledge representation of the vulnerability management domain. Lukyanenko et al. [139] propose a General Systemist Ontology (GSO) as a foundation for developing information technologies whose applications could benefit from a systems perspective.
Topic 8 (software engineering) is characterized by the terms model, system, requirement, simulation, base, design, software, and engineering. Seven papers fall under this topic. They provide a systematic mapping and solid basis for classifying approaches to the systems modeling language (SysML) [241]; artifact-based workflows for supporting simulation studies [195]; and a toolbox for the Internet of Things that eases the setup of IoT applications [77]. Within the context of digital transformation, speeding up the time-to-market of high-quality software products is a big challenge. Software quality correlates with the success of requirements engineering sessions (e.g., software analysts collecting relevant material). Comprehensible requirements need to be specified for software implementation and testing. Many of these activities are performed manually, causing process delays and software quality issues concerning, for example, reliability, usability, and comprehensibility. Ruiz & Hasselman [194] propose a framework for automating the tasks of requirements specification.
# 3.2.2 Prevalence of terms
In addition to the analysis by topic and by year, we also analyzed the prevalence of terms across the years. Table 6 summarizes the terms, capturing conceptual modeling interest as it progressed over the last 15 years. Clearly, some terms are recurring, others fade over time, and yet others evolve. As expected, terms related to models, systems, and processes were consistent over the years of study. The terms information, goal, class, ontology, domain, database, and schema were likewise found over the years.
Table 6. Terms by year
Some terms have been consistently popular. These include such general terms as data, information, database, requirements, and process. Other terms that appeared in multiple years were confined to a small number of years. For example, the term relation appeared consistently from 2008 to 2013, as relational databases continued to mature and become standardized. (Thereafter, NoSQL and NewSQL became popular.) Other topics have a sporadic influence, such as knowledge, concept, or property. Still others emerged as relevant for only one year, such as message, interaction, and element. Some of these were more notable in early years but may have been absorbed in different ways. For example, construct and context are known to be important concepts in conceptual modeling and have been prominent in many research endeavors throughout the years.
# 3.3 Additional analysis
The above analysis is based on the amalgamation of conference and journal papers on conceptual modeling. Collectively, it presents the overall themes, topics, and trends common across the entire conceptual modeling community. However, the conceptual modeling community is highly diverse and heterogeneous. It draws upon concepts, theories, challenges, and problems from computer science, software engineering, management information systems, design science, and scientific domains (e.g., genomics), and is influenced by such disciplines as philosophy, cognitive science, psychology, linguistics, and semiotics, among others.
Very few attempts have been made to take full stock of the diversity of conceptual modeling, the different perspectives that comprise it, or the common thread among these perspectives. While doing so directly is beyond the scope of this paper, we contribute to these goals by conducting a focused analysis of two specific outlets for conceptual modeling: the International Conference on Advanced Information Systems Engineering (CAiSE) and the open access journal Enterprise Modelling and Information Systems Architectures (EMISAJ).5
Enterprise Modelling and Information Systems Architectures is a journal that, although accepting all conceptual modeling research, has a historic focus on enterprise modelling, information systems architectures, and business modeling (Modellierung betrieblicher Informationssysteme). It originated in a prolific German modeling community known for its contributions to business information systems design and its interest in organizational information systems.
The International Conference on Advanced Information Systems Engineering, on the other hand, is a premier conference with a traditional focus on "Information Systems Engineering with a special emphasis on the theme of Cyber-Human Systems" (emphasis in the original, see: caise23.svit.usj.es). The first Conference on "Advanced Systems Engineering", CASE'89, was arranged in May 1989 by SISU in Stockholm, Sweden, and the conference has been active ever since. It was originally organized by a prominent Nordic conceptual modeling community led by the Swedish Institute for Systems Development (SISU) in co-operation with the Swedish Society for Information Processing (SSI) [190]. Attesting to the importance of understanding the distinct voices within the community, Rolland et al. [190] explained that none of the existing venues provided a broad integration of modeling with issues of information systems development and its pertinent social factors. The isolation of these two sources permitted a comparison of the general corpus with sources that have a disciplinary focus on computer science and software engineering. Furthermore, by conducting a more focused analysis, we were able to surface some of the niche topics that were overshadowed by the mainstream topics in the general corpus. Table 7 summarizes the topic analysis.
Table 7. Topics from papers from CAiSE and EMISAJ
We also performed a topic analysis for CAiSE and EMISAJ separately, similar to that of Table 7. Then, to assess whether these corpora are similar to each other and to the initial corpus (Table 5), we used pre-trained language models to measure the similarity among them. We used the SentenceTransformer model "stsb-roberta-large", a pre-trained sentence embedding model based on the RoBERTa architecture [133], to calculate the similarity metrics between the different corpora. This is a large model, with many parameters and trained on a large corpus of text, that has been shown to be effective on a wide range of NLP tasks, especially use cases that require a high level of accuracy and understanding of the input text [187]. This provides a global indicator of the similarity between the collections of topics across the different corpora in the period analyzed.
The process of using pre-trained language models to compare the similarity between two lists involves converting the input text into embeddings (i.e., vector representations) and comparing these embeddings with each other to produce a similarity score. The embeddings capture the semantic meaning of the text, and the similarity score is a measure of the similarity between them. The similarity score ranges from 0 to 1: a score closer to 1 indicates higher similarity between the two topic collections; a score closer to 0 indicates lower similarity. The results are summarized in Table 8. Quantitatively, the overlap between CAiSE and EMISAJ and the rest of the corpus is 0.784, indicating substantial similarity [227].
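The comparison step reduces to cosine similarity between embedding vectors. A minimal sketch with NumPy, where hand-picked 4-d toy vectors stand in for the sentence embeddings that the stsb-roberta-large model would produce:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: near 1 means the two
    topic collections are semantically close, near 0 means unrelated."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of two topic collections; in practice these
# would come from a sentence embedding model's encode() step.
emisaj_caise = [0.6, 0.5, 0.2, 0.1]
rest_corpus  = [0.5, 0.6, 0.3, 0.1]

score = cosine_similarity(emisaj_caise, rest_corpus)
print(round(score, 3))
```

Because sentence embeddings of related texts point in similar directions, scores like our observed 0.784 fall well above what unrelated topic collections would yield.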
Table 8: Corpora similarity comparisons among the initial corpus, EMISAJ and CAiSE
The analysis of CAiSE and EMISAJ versus the rest of the corpus reveals a substantial overlap between these two and the other sources. As can be seen from Table 7 compared to Tables 5 and 6, the topics covered by CAiSE and EMISAJ are very similar, with a few specific terms (e.g., set, configuration, artifact, checklist) related more to business processes than those in Table 6. This reinforces our overall conclusion of the prevalence of business process modeling and related topics in the more recent period of conceptual modeling scholarship. Second, as in the more general corpus, there is a significant interest in representing the structure of the domain with data-oriented conceptual models. We also see the common core visually in Table 9, which shows the word clouds of CAiSE and EMISAJ, with many overlapping terms. This analysis reinforced our earlier findings of a common core in conceptual modeling.
Table 9. Topics from CAiSE and EMISAJ shown as word clouds based on relative topic frequency across all years
Common core. We first analyze these two sources relative to the general corpus of 33 journals and conferences. The topic analysis for these two sources together is summarized in Table 7. This analysis is based on the same process and techniques as for the main corpus.
Distinct voices. The word clouds are generated based on the frequency of topics in CAiSE and EMISAJ across all years of these publications. This offers a unique opportunity to reveal distinct conceptual modeling perspectives that have rarely been articulated in conceptual modeling research.
In CAiSE (Table 9, left side), the results depict a community that places conceptual modeling within a broader context of organizational and socio-economic issues. This is underscored by such terms as "users", "task", "requirements", "work", "operation", and "context". The software development and database focus is also notable, with the terms "software", "method", "application", "query", "base", "set", and "datum".
We see a similar pattern of overlap with the common core in EMISAJ (terms "process", "model", "modelling"). At the same time, we see EMISAJ's distinct voice: a focus on building business information systems. Indeed, the terms "business", "transactions", "service", "order", "management", and "customer" are more prominent than in CAiSE or the general (initial) corpus. True to the title of the journal, Enterprise Modelling and Information Systems Architectures, we see the terms "enterprise", "architecture", "system", and "engineering".
From the exposition of these two individual sources, we can vividly observe the distinct voices of different conceptual modeling communities. While sharing interests with other subcommunities, the International Conference on Advanced Information Systems Engineering and Enterprise Modelling and Information Systems Architectures are clearly making their own unique contributions to conceptual modeling. Among other things, these contributions reveal the multifaceted nature of conceptual modeling and the need to better understand and encourage diversity, as well as a more holistic approach to modeling issues.
# 3.4 Summary of the results and findings
We can now synthesize the findings across the years to draw general conclusions. Over the last 15 years, we observe a great prevalence of process-oriented conceptual modeling research. Half of the clusters deal with process modeling, of which BPMN is clearly the most popular notation. After process, data modeling emerges as the most common topic. This is interesting and notable. From our analysis of the historic literature (1970s-1990s), data models were the most common conceptual models. In the past 15 years, however, process models began to dominate the conceptual modeling landscape. Several potential explanations exist. First, there is growth in technologies that do not rely on conceptual data models, such as entity-relationship diagrams, for their development. These include societally important applications, such as artificial intelligence, natural language processing, and machine learning. Very little is known about conceptual modeling within these contexts [72, 135, 184]. There are also notable changes to database storage, including the growth of non-relational databases, such as NoSQL and NewSQL [28]. There is no established approach for modeling these databases using traditional conceptual modeling notations [97, 112].
Under these circumstances, the relative importance of data models continues to decline [11, 136, 196]. At the same time, process models appear to be adapting much better to the new landscape. Indeed, whether using flexible database technologies, or artificial intelligence, the introduction of these technologies in organizations continues to require the understanding of the previous process, as well as the modeling of the new process, even if the details of the new technology are not represented in the process models themselves. Furthermore, the process modeling community has effectively embraced some of these new technologies. For example, active research uses machine learning, natural language processing, and other artificial intelligence-based techniques to mine processes [72, 91, 121, 151]. More broadly, process models demonstrate a high resilience to the changing technological landscape.
Other notable findings include the importance of theoretical and methodological research. This is manifested in the continued development of notations, as well as in interest in general ontology. As information technology impacts more and more aspects of human existence, it becomes ever more important to develop these technologies based on solid theoretical and methodological foundations. Methodological and ontology work in conceptual modeling is a response to these challenges.
An important observation is that many of the topics of interest to the conceptual modeling community do not occupy a sizable share of the publications or are absent. Examples are the modeling of goals, intentions, dependencies, and contingencies among actors in the domain of modeling, despite persistent recognition of the need to dedicate more attention to these valuable topics [158, 245]. In addition, there is little focus on explosive, emergent technology trends, such as artificial intelligence, analytics, crowdsourcing, blockchain or large language models.
At the same time, there is a broadening of the focus of conceptual modeling. As Table 6 suggests, research in recent years has emphasized the importance of business, organizational applications, context, and modeling capabilities. This is evidenced by terms such as service, risk, execution, business, task, and goal. There is emphasis on evolving techniques and the need to apply appropriate constructs, as indicated by the terms graph (for representation), conceptual, class, object, and others. We also compare our results to another large-scale literature review on conceptual modeling by Härer and Fill [95], since the methodology, outlets, and years considered make it appropriate to do so. Table 10 summarizes the comparison and identifies specific overlap in the terms extracted by both efforts.
Table 10. Comparison to Results from Härer and Fill [95]
# 4 DISCUSSION AND FUTURE RESEARCH
Conceptual modeling emerges as a mature research area that continues to progress in response to the evolving needs of information systems development and use [39]. Conceptual modeling will continue to be used for representation and communication but will be required to adapt to the demands of new technology trends. Figure 5 shows the traditional focus of conceptual modeling and its progression to proposed future research that considers emerging technology adoption and use in an increasingly digital world.
Figure 5. Evolution and continued progression of conceptual modeling
# 4.1 Foundations, Exploitations and Explorations
Over the past 50 years, conceptual modeling scholarship developed a core, as well as periphery areas of research. Our analysis of the literature suggests that stable and persistent themes emerged. We can also identify areas that have, historically, been on the boundaries of what constitutes research in conceptual modeling.
The core themes in conceptual modeling include the development and evaluation of conceptual modeling languages and methods, research on general and domain ontologies, and the application and extension of conceptual modeling languages. Much attention has been paid to conceptual data modeling and conceptual process modeling; for example, the ER model and UML, and BPMN, EPC, and DFD, respectively. The analysis of the year 2020 is an example. Less common, but also part of the core of conceptual modeling, are languages that deal with the representation of goals, actors, and values; for example, the i* framework [245]. These languages were mainly oriented towards the collection of user requirements to build organizational information technologies (e.g., ERP), database design, and process improvement. Underlying these conceptual modeling languages is a stream of work seeking to establish strong theoretical foundations for conceptual modeling. Notable theories include general ontologies, such as BWW (Bunge-Wand-Weber) [235], DOLCE [81], or UFO (Unified Foundational Ontology) [70, 88, 90]. These theories typically assume a materialistic view of reality; that is, that all entities to be modeled are material in nature [215].
Considerably fewer studies have explored topics on the boundaries of conceptual modeling. These include the application of conceptual modeling in contexts where flexible and agile approaches to information systems development did not find utility in explicit and formal conceptual modeling. Such contexts include social media, rapid application development, and web platforms. Despite the lack of widespread use of conceptual modeling in such contexts, further research is needed that is especially tailored to these environments. For example, traditional conceptual modeling approaches appear inadequate for modeling requirements in highly dynamic and heterogeneous online settings, such as citizen science applications [57, 100, 136]. More broadly, a debate emerged over whether conceptual modeling is simply not applicable when developing social media applications, or in agile development, or for modeling NoSQL databases [11, 112, 136]. New conceptual modeling approaches have begun to emerge, suggesting conceptual modeling is indeed valuable, if not indispensable, for these new settings [28, 78, 97, 137, 203]. Likewise, research has noted the limitations of the materialist view of reality [215] and of corresponding ontologies such as BWW, one of the dominant ontological foundations in conceptual modeling. These ontologies struggle to model institutional objects, such as identifiers and social facts [62-64]. Correspondingly, an emerging stream of work focuses on the development of conceptual modeling languages and methods uniquely sensitive to institutional reality [62-64]. Another limitation of a materialist view is that it can be difficult to model psychological intentions and mechanisms, which do not necessarily reduce to underlying physical processes [140].
The existence of the stable core attests to the healthy cumulative body of research in conceptual modeling, and the relative stability of conceptual modeling as a research discipline. At the same time, to ensure any discipline is adaptive and agile in the face of change, it is important to challenge stable assumptions [6, 120]. Any field of practice needs to encourage both exploitation (where dominant ideas are examined and applied) as well as exploration (where radically new ideas are proposed and dominant assumptions challenged) [92, 110, 143, 234].
Additional research is needed to deal with topics that are closely related to conceptual modeling, although they might primarily be considered in other areas. One analysis of four dominant conceptual modeling assumptions [184], widely held within conceptual modeling scholarship, suggests that the core in conceptual modeling is deeply entrenched, and that there may not be enough exploration of new ideas and expansion of the periphery of conceptual modeling. These assumptions are that: (1) conceptual models are static representations of physical reality; (2) conceptual modeling diagrams (scripts) represent the deep structure of information systems; (3) conceptual models are produced and consumed by humans; and (4) conceptual modeling is an activity undertaken by professional analysts, typically for organizational information systems development. Our results support this conclusion because a great majority of the studies focused on what we identified as core conceptual modeling themes. Consequently, a direction for future research is to investigate the extent to which the core of conceptual modeling corresponds to the core of information systems development and to identify research opportunities that would bring to the forefront themes to address the evolving needs of information technology developers and users.
Conceptual modeling is a diverse and heterogeneous field. Yet, few attempts have been made to take full stock of its diversity, the multiplicity of different perspectives and the value these perspectives provide to the field. We, therefore, conducted two separate analyses for two recognized, additional outlets for conceptual modeling research: the International Conference on Advanced Information Systems Engineering (CAiSE) and the open access journal, Enterprise Modelling and Information Systems Architectures (EMISAJ). (See Tables 7, 8, and 9.) This analysis reveals that, overall, these two sources display the common themes in conceptual modeling as identified in our general corpus, as well as their own unique characteristics. The results obtained offer a rich ground for an interesting debate within the conceptual modeling community. As disciplines mature, they develop their own identity. There is a clear sense of identity which has emerged over the years in the conceptual modeling community. At the same time, this identity is not homogeneous. It is, thus, reasonable to conclude that the future of conceptual modeling depends on the ability to work from a set of common, and agreed upon, fundamental concepts and research approaches, while engaging in new, and unique perspectives. This is consistent with how most fields of inquiry progress and mature over time.
# 4.2 Application in Non-traditional Settings
Researchers should continue exploring the typical use cases of conceptual modeling to extend the range of its applications and tasks. New approaches to conceptual modeling are potentially needed due to the: (1) rapid proliferation of open and heterogeneous environments, such as social media; (2) increased complexity of systems, such as those supporting genomics applications; (3) emergence of new, non-professional users; (4) rise in unstructured, distributed data and flexible data storage, such as NoSQL databases and data lakes; and (5) new computational opportunities, such as data-intensive machine learning and artificial intelligence.
As our analysis shows, there has been a growing effort to apply conceptual modeling in new contexts, such as social media, citizen science, blockchain, big data, robotic automation, artificial intelligence, data analytics, and the human genome [33, 89, 94, 122, 137, 141, 160, 174, 214]. Examples of specific topics investigated are: modeling log-based files using a UML variant [181]; automated schema migration and optimization between different NoSQL data stores [46]; artificial intelligence-based approaches to map heterogeneous data models to a relational model in order to take advantage of the ubiquity and maturity of relational databases [133, 246]; just-in-time modeling of flexible data to support varied data analytics activities [35]; JSON schema verification in the context of software interoperability [10, 79]; application of a traditional enterprise modeling language, ArchiMate, to the new context of blockchain systems [48, 229]; conceptual modeling reinforced via augmented reality [157]; multi-level modeling [75, 105, 106]; and selecting labels for diagrammatic elements (e.g., entity types) using natural language processing techniques [224, 225].
As these varied topics demonstrate, the conceptual modeling field is beginning to explore new horizons. Evolving work on the boundaries demonstrates that meaningful and effective conceptual modeling solutions can be formulated in new settings. This suggests there are many unexplored research opportunities for future conceptual modeling scholarship that challenge some of the entrenched assumptions. Practitioners still face uncertainty: does conceptual modeling matter for DevOps practices, social media applications, or other contexts? Questions such as these present an important opportunity for conceptual modeling scholarship to make important, practical contributions and rediscover the value of conceptual modeling beyond its core areas.
# 4.3 New Frameworks and Theories
Both our analysis and consideration of recent publications argue for greater exploration of the applicability of conceptual modeling in different settings. However, extending conceptual modeling to new areas, such as social media, blockchain, or the Internet of things, is not merely a matter of applying traditional conceptual modeling approaches and techniques. Existing research has established that such direct application does not always yield beneficial results. Part of the problem may be that these new settings are dramatically different from the traditional (e.g., corporate, tightly controlled) environments that inspired prevailing theories and frameworks of conceptual modeling. In addition to the development of conceptual modeling languages (or, conceptual modeling grammars), novel frameworks and theoretical foundations may be required.
As trends in the diversity and sophistication of information technologies grew, conceptual modeling attempted to adapt. Conceptual data models, for example, carved out an important niche in facilitating systems development and supporting relational database development [206]. However, as the relative market share of relational databases began to shrink, so did the relevance of conceptual data models. Relational databases were once dominant for organizational storage, but are increasingly marginal for social media, cloud computing, machine learning, and Internet of Things applications, where NoSQL and NewSQL alternatives dominate. Modeling for social media, cloud, machine learning, and Internet of Things remains challenging, with few clearly established solutions [24, 68, 111]. Similarly, conceptual models were commonly used to support structured development of organizational technologies. However, as more development began to use agile methods, and more applications were developed outside organizational boundaries or by end-users themselves [40], conceptual models appeared less relevant.
The lack of broader application of conceptual modeling to new contexts results from how conceptual modeling is conceptualized. First, since most popular conceptual modeling languages were invented long before the age of social media, data-driven artificial intelligence, mobile devices, and virtual reality, they mainly assume a static, unchanging view of domains. As a result, these models are rigid [135] and do not dynamically respond to change.
Second, conceptual modeling has always been understood as a formal, professional activity, better aligned with highly structured development approaches (e.g., waterfall method). Indeed, conceptual modeling was perhaps purposefully ignored by the architects of agile methods [61, 192]. However, conceptual modeling in modern practice is increasingly fluid, flexible, and adaptive [78, 97]. Still, these ideas have not been ingrained in the theory and frameworks of conceptual modeling. The divergence between the ever-increasing demands of the real-world and the capabilities of conceptual models, could result in marginalization of conceptual modeling, both in practice and in research, evidence of which already exists [11, 136, 184].
In response to these challenges, there have been attempts to explore new directions for conceptual modeling [174, 184, 196, 200, 236]. These efforts should, no doubt, continue. For example, there is a lack of understanding of how conceptual modeling may be realized in multiple formats, such as those that use multimedia. Very little work has considered the integration of images into conceptual modeling [145] or the use of conceptual modeling in virtual reality [157]. There is no overarching framework to guide these efforts. There is also a lack of a theoretical understanding of when and why the use of more advanced multimedia is needed. Traditionally, multimedia learning theory [147] has been used to justify and better understand the benefits and limitations of using text and graphics [19, 80, 82, 185]. We lack a corresponding theoretical understanding of the use of advanced multimedia for conceptual modeling.
There is a growing body of research that identifies the potential utility of conceptual modeling to support requirements elicitation and development of machine learning models [72, 137, 186]. Here again, we lack a comprehensive framework and theoretical foundations that could anchor these efforts. Hence, there is an opportunity to revisit traditional conceptual modeling foundations when considering new conceptual modeling developments.
# 4.4 Improvement of Process of Developing, Deploying and Learning Conceptual Modeling
With the rapid development of information technologies new opportunities arise related to how conceptual models are created. Traditionally, conceptual models were considered to be diagrams that had to be drawn on paper by analysts. With advances in artificial intelligence, however, including such techniques as machine learning and natural language processing, it is becoming increasingly possible to generate conceptual models automatically or semi-automatically based on a variety of inputs. These inputs can include user documents written in a natural language. Research in this direction has already begun, such as work on process mining from digital traces [66, 91, 149, 150].
There are also efforts to improve the process of conceptual modeling itself. Making conceptual models easier to create as well as deploy is now a new concern. Computer-Aided Software Engineering (CASE) tools became popular in the 1980s, permitting the development of software code based on the semantics captured in conceptual modeling representations [42, 51, 99, 175]. Automated model generation has also been applied within the context of the Internet of Things and cloud computing under the “models@runtime” paradigm [18, 36, 50]. Eventually, it might be possible to create artificially intelligent systems that can directly interview potential users as well as consult sources beyond organizational boundaries, such as social media or user-generated images and videos, for the generation of conceptual models. Equipped with these inputs, a conceptual modeling design engine may automatically generate a variety of conceptual modeling diagrams. This could be beneficial for ensuring that conceptual models are continuously updated and synchronized with rapidly evolving organizational and social contexts.
Artificial intelligence can also become instrumental for conceptual modeling pedagogy. Ternes et al. [224, 225] have already developed a tool that is supported by natural language processing capabilities and functions as a dynamic assistant for teaching best conceptual modeling practices. The label selection capability of the tool can also be instrumental, even for experienced modelers, because it could potentially compensate for the lack of deep domain expertise. Similarly, future studies could explore the potential of artificial intelligence to improve teaching conceptual modeling, through better personalization aimed at the varied levels of motivation and expertise of the learners.
# 4.5 Broaden User Base of Conceptual Models
As more people become computer literate, and the technology skills of organizational employees continuously expand [114, 162], it is becoming more important to support broad categories of users in their data management needs through conceptual models. This leads to an exciting new research direction for making conceptual modeling understandable and accessible to the masses. More individuals in organizations and beyond are engaging with information technologies and creating their own technology solutions [114, 155, 162]. As Sandkuhl et al. [200] argue, conceptual modeling is due to become a daily activity for everyone. Recker et al. [184] make this notion more concrete by advancing the notion of citizen modeling.
Correspondingly, research is needed to support these varied users and developers of IT. Consequently, the representations that capture relevant facts about a domain or data stored need to be more accessible than previously required. This questions the fundamental assumption of relying on static, graphical representations as conceptual models. Instead, new formats, such as text narratives should be considered. Alternatively, multimedia videos with highly dynamic and animated scenarios can be used. Some of this work is already emerging (e.g., YouTube videos which animate conceptual modeling concepts6).
Traditionally, conceptual models were designed by IT professionals and followed predefined grammatical rules, predicated mainly on abstractions (e.g., identification of classes). However, in many contexts, the representation of concrete objects (instances, entities), can be beneficial, including by allowing those not familiar with abstractions to develop and use conceptual models [102, 137]. In this sense, conceptual models should adapt to the skills, needs, and tasks of users; for example, by leveraging representations based on abstraction (via classes), as well as instances, and narratives.
Samuel et al. [199] challenge the prevailing approach in conceptual modeling of using abstraction-based cardinality notations, and demonstrate the advantages of alternative, instance-based representations. Lukyanenko et al. [135] suggest that in many domains, especially social media, concrete instances, rather than abstract classes, should be used to better capture nuanced domain semantics. Eriksson et al. [64] added new refinements to instance-based conceptual modeling theory by emphasizing the benefits of institutional ontology. Saghafi et al. [197] investigate instance-based representations within the context of query formulation.
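The difference between abstraction-first and instance-first capture can be made concrete with a small sketch. Everything below (the class, the attributes, and the toy inference rule) is invented for illustration and is not drawn from any of the cited studies:

```python
from dataclasses import dataclass

# Class-based capture: the abstraction (a fixed schema) comes first.
@dataclass
class Bird:
    species: str
    wingspan_cm: float

robin = Bird(species="European robin", wingspan_cm=21.0)

# Instance-based capture: each observation is a bag of attribute-value
# pairs; classes, if needed at all, are inferred later from the data.
instance = {
    "observed": "small bird at the feeder",
    "color": "red breast",
    "behavior": "pecking bark",  # an attribute a fixed schema might not anticipate
}

def infer_candidate_classes(inst: dict) -> list[str]:
    """Toy inference: derive candidate classes from attribute keys."""
    hints = {"behavior": "Animal", "color": "PhysicalObject"}
    return sorted({hints[k] for k in inst if k in hints})

print(robin.species)
print(infer_candidate_classes(instance))
```

In the instance-based style, attributes a predefined schema would have excluded are retained, and candidate abstractions can be derived after the fact, which is the kind of flexibility these studies argue for in domains such as social media and citizen science.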
Much more research is needed to unfreeze the traditional focus of conceptual modeling research on business development and organizational settings. Because conceptual modeling activities are increasingly becoming mainstream, the research community needs to conduct more scholarship on citizen and grassroots modeling as modeling by novices continues to grow.
# 4.6 Relationship between Grammars and Models
As conceptual modeling transitions from an activity solely performed by experienced IT professionals to one potentially performed by members of the general public, there is further need to reconsider the fundamental relationship between conceptual modeling grammars and the diagrams (scripts) that are developed using these grammars [30, 236]. Wand and Weber [236] define a script as “the product of the conceptual-modeling process” and suggest “each script is a statement in the language generated by the grammar” (p. 364). Hence, traditionally, a conceptual modeling script is viewed as an instance of some already-existing grammar [38, 158, 236]. However, when modeling is undertaken by those who do not know, or are not motivated to comply with, the rules of the grammars, additional consideration is needed for how the scripts (diagrams) and the rules are related. An opportunity exists to offer a more nuanced, and unified, understanding of the relationship between conceptual modeling grammars and the actual conceptual models that are used in practice.

Abstract: Conceptual modeling is an important part of information systems development and use that involves identifying and representing relevant aspects of reality. Although the past decades have experienced continuous digitalization of services and products that impact business and society, conceptual modeling efforts are still required to support new technologies as they emerge. This paper surveys research on conceptual modeling over the past five decades and shows how its topics and trends continue to evolve to accommodate emerging technologies, while remaining grounded in basic constructs. We survey over 5,300 papers that address conceptual modeling topics from the 1970s to the present, which are collected from 35 multidisciplinary journals and conferences, and use them as the basis from which to analyze the progression of conceptual modeling. The important role that conceptual modeling should play in our evolving digital world is discussed, and future research directions proposed.

Categories: cs.HC, cs.DB
# 1 Introduction
Semantic technologies, and in particular knowledge graphs (KGs), have been utilised in a variety of applications over time, including search engines, data integration, enterprise settings and machine learning. Numerous methods were proposed to assist their life-cycle and exploitation [12] leading to their adoption and the rapid growth of the Linked Open Data (LOD) cloud.5 However, the exploitation of these KGs has been hindered by the steep learning curve associated with the stack of standards, in particular query languages such as SPARQL [19].
Over the last few years, common information retrieval methods have been profoundly renewed by the emergence of pre-trained Large Language Models (LLMs). The abilities of LLMs to understand and generate natural language (NL) and code alike have opened new research and development fields notably in the domain of data access and interaction. In particular, these abilities endow LLMs with the capacity to translate a question expressed in NL into its counterpart in a structured query language, SPARQL in the case of RDF KGs. This allows domain experts to “speak to structured data” thus facilitating data access. To design and evaluate such text-to-SPARQL translation systems effectively, we need reference datasets providing curated question-query pairs that are either tailored to a specific KG or at least relevant for the domain it concerns.
Some question-query datasets (Q2sets) have been produced in the context of benchmarks and challenges such as QALD [18], DBNQA [11], and LC-QuAD [7], but they are mostly based on subsets of DBpedia and/or Wikidata. When it comes to other domain-specific, possibly private KGs, or highly specialized KGs like those in the life sciences, creating a Q2set involves skills that are rarely mastered by one and the same person. More likely, this requires the collaboration of domain experts who can think of possibly complex competency questions (CQs) that scientists may want to ask, and Semantic Web experts who shall leverage the ontologies used and the KG schema to come up with counterpart SPARQL queries.
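To make the notion of a Q2set entry concrete, the sketch below shows one possible shape of a question-query pair. The field names, the example question, and the query are invented for illustration; they are not a format prescribed by any of the cited benchmarks:

```python
import json

# One hypothetical Q2set entry: a natural-language question paired with a
# candidate SPARQL counterpart for a specific KG.
q2_entry = {
    "question": "Which proteins participate in the glycolysis pathway?",
    "sparql": (
        "PREFIX up: <http://purl.uniprot.org/core/>\n"
        "SELECT ?protein WHERE {\n"
        "  ?protein a up:Protein .\n"
        "}"
    ),
    "kg": "example-bio-kg",
    "validated": False,  # set to True once a human judges the pair relevant
}

print(json.dumps({k: q2_entry[k] for k in ("question", "kg", "validated")}, indent=2))
```

A collection of such entries, curated jointly by domain and Semantic Web experts, is what the rest of the paper refers to as a Q2set.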
KGs should ideally come with examples of queries they support, a good practice in terms of documentation and metadata. Yet in practice this is rarely the case. Similarly, whereas CQs have been identified as a valuable documentation and starting point for understanding the capabilities of a KG, many KGs are accompanied by very few CQs, if any at all.
To support the creation of Q2sets for training, testing, benchmarking, and documenting our systems and knowledge graphs, we identified the need to provide tools that help researchers (as well as scientific and technical information professionals) to understand existing KGs and generate or refine corresponding Q2sets, whether they are Semantic Web newcomers or experienced practitioners. Various methods and tools exist to help create CQs and equivalent queries [1,4,15,24,16,5,8]. However, to the best of our knowledge, these tools are either domain-specific, extensively manual, or address only specific steps; they do not provide an end-to-end, integrated pipeline.
In this paper, we present the methods, tools and services implemented in Q2Forge, a web application guiding the user through the steps of a generic, extensible, end-to-end pipeline to generate a reference Q2set, i.e. a dataset of (NL question, SPARQL query) pairs tailored to a specific KG. Through an interactive and iterative process, the user interface assists the user in three main areas: (1) producing CQs based on information about a KG and the domain it pertains to; (2) proposing SPARQL query counterparts of the CQs, given the KG and its schemata; (3) testing the proposed SPARQL queries, judging the relevance of question-query pairs, and recommending refinements. Rather than constraining users to a fixed end-to-end pipeline, Q2Forge emphasizes flexibility: these tasks are executed as an integrated pipeline, but a user may also choose to use one task independently of the others.
Q2Forge relies on an extensive, user-controlled configuration where, in particular, multiple language models can be selectively used at different steps of the pipeline. Through a documented Web API, Q2Forge leverages a set of pre-defined services, e.g. to explore the KG or invoke a language model for a certain task, implemented using robust, community-proven libraries and frameworks such as LangChain.6 Yet, a community may easily extend Q2Forge with new services and steps, or re-implement some of the provided services, for instance to use their own text-to-SPARQL tool instead of the one provided.
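The extension mechanism just described can be pictured as a simple service registry, where re-implementing a service amounts to re-registering it under the same name. This is only a conceptual sketch: the registry, the decorator, and the service names are assumptions for illustration, not Q2Forge's actual API:

```python
from typing import Callable

# Registry mapping a service name to its implementation.
SERVICES: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Register (or re-register) a service under the given name."""
    def deco(fn):
        SERVICES[name] = fn
        return fn
    return deco

@register("text-to-sparql")
def default_translator(question: str) -> str:
    # Placeholder for a provided translator (e.g. an LLM-backed service).
    return f"# SPARQL for: {question}\nSELECT * WHERE {{ ?s ?p ?o }} LIMIT 10"

# A community override simply re-registers the same service name;
# the rest of the pipeline keeps calling SERVICES["text-to-sparql"].
@register("text-to-sparql")
def custom_translator(question: str) -> str:
    return f"# custom translation of: {question}"

print(SERVICES["text-to-sparql"]("Which drugs target protein P1?"))
```

The design choice this illustrates is indirection: the pipeline depends only on service names exposed through the Web API, not on any particular implementation.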
This paper is structured as follows: Section 2 provides an overview and comparison with relevant existing works. Section 3 describes our methodology, the pipeline architecture and the various components. Then, the source code and its sustainability plan are described in Section 4. Section 5 discusses real practical use cases where the resource could be used, while its potential impact and reusability are discussed in Section 6. Finally, the limitations and perspectives of the resource are outlined in Section 7.
# 2 Related Work
Linked Data Query Assistants. Approaches for assisting users in querying KGs can be broadly separated into two main, non-disjoint categories: those relying on dedicated Graphical User Interfaces (GUIs) (e.g. [9,10,2]) and those relying on Natural Language Interfaces (NLIs) (e.g. [13,17]). GUIs can provide high expressivity but remain difficult to use by non-technical experts, unless they trade off part of the expressivity in favor of reusing a popular interaction paradigm (e.g. faceted search). NLIs range from keyword-based retrieval to controlled natural language and full natural language dialogical interactions. Language models, large (LLM) and small (SLM), have significantly improved the methods for natural language processing in general, and in particular for question-answering over linked data. Language models are used in several ways: to translate a question into a structured query (directly or indirectly [13]) or to directly answer the question when, for instance, the knowledge source was included in the training corpus of the language model. These two trends can also be combined with augmentation techniques such as Retrieval Augmented Generation (RAG) [14], which performs information retrieval tasks of different natures (document, database, KG) to enrich the context used to invoke the language model and improve the quality of the answers. While some GUIs help lower the barrier for non-expert users, NLI approaches, and particularly those using LLMs, are even more user-friendly and extensible, but have shown mixed results in generating accurate SPARQL queries from natural language, and they require reference Q2sets to be trained, augmented and evaluated. This is precisely the purpose of Q2Forge.
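The RAG idea mentioned above can be sketched in a few lines: embed the incoming question, retrieve the most similar stored question-query pairs, and prepend them to the prompt as few-shot examples. Everything here (the toy two-dimensional embeddings, the stored pairs, and the prompt template) is fabricated for illustration; a real system would use an embedding model and a vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# (embedding, question, query) triples, fabricated for the sketch
STORE = [
    ([1.0, 0.0], "List all genes.", "SELECT ?g WHERE { ?g a :Gene }"),
    ([0.0, 1.0], "List all drugs.", "SELECT ?d WHERE { ?d a :Drug }"),
]

def build_prompt(question, q_vec, k=1):
    """Assemble a few-shot prompt from the k most similar stored pairs."""
    ranked = sorted(STORE, key=lambda t: cosine(t[0], q_vec), reverse=True)
    shots = "\n".join(f"Q: {q}\nA: {s}" for _, q, s in ranked[:k])
    return f"{shots}\n\nQ: {question}\nA:"

print(build_prompt("List all proteins.", [0.9, 0.1]))
```

Enriching the context this way is what allows a language model to produce queries grounded in the target KG rather than in its training corpus alone.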
Linked Data Question-Query sets. Multiple tools and services are currently being released to address the question of text-to-SPARQL translation. BigCQ [20,21] aims to create CQs and SPARQL equivalents based on the axioms of a specific OWL ontology. It is meant to help ontology engineering and evaluation, but cannot apply to common KGs that typically rely simultaneously on multiple ontologies and vocabularies. Amazon Bedrock7 is a commercial text-to-SPARQL translation service proposed by Amazon that leverages LLMs and requires a collection of few-shot question-query pairs. It has applications particularly in bioinformatics. Unfortunately, it is not released under an open source license, making comparison difficult. AllegroGraph's Natural Language Query (NLQ)8 vector database stores pairs of NL questions and corresponding SPARQL queries. This repository helps to train and refine models for accurate query generation, and its integration with SHACL shapes ensures the structural validity of the generated queries. However, this solution suffers from a lack of explainability: users receive SPARQL query counterparts without understanding why a particular result was returned or the full process used to infer that outcome. In the opposite direction, AutoQGS [22] is a framework that generates NL questions from SPARQL queries, facilitating the creation of question-query datasets without extensive manual annotation. However, this solution requires existing SPARQL queries to generate training data, which limits its applicability.
Challenges such as QALD [18], DBNQA [11], and LC-QuAD [7] provide Q2sets to train models that generate queries from NL questions, but they focus primarily on DBpedia and Wikidata, although some editions of QALD, e.g. QALD-4,9 have included biomedical Q2sets. Similarly, the LC-QuAD 2.0 dataset10 contains a Q2set of more than 20,000 pairs across DBpedia and Wikidata, including subdomains such as geography and science. While these resources serve general-purpose question answering systems, they do not comprehensively capture domain-specific or highly specialised knowledge. Some domain-specific datasets exist, such as the SIB bioinformatics SPARQL queries [3], a collection of hand-crafted Q2sets for various SIB-related KGs. By contrast, Q2Forge aims to fill this gap by providing an open-source solution to generate Q2sets for any domain and KG, including private KGs.
# 3 From a Knowledge Graph to the Q2Forge Pipeline
Q2Forge helps users to carry out three main tasks: generate competency questions in NL for a target KG, generate SPARQL query translations of the questions, and test and refine the SPARQL queries. To do so, Q2Forge orchestrates the use of various services to manage multiple per-KG configurations, extract the schema of a KG, invoke various language models depending on the task to be achieved at each step of the pipeline, etc. The services are invoked through a documented Web API implemented by a back-end server. We provide a prototype implementation of the back-end server called Gen2KGBot. A community may reuse Gen2KGBot as-is, or they may customize or extend its services to meet their specific needs.
Figure 1 describes the pipeline of Q2Forge: (1) create the configuration for a KG and (2) extract its schema; (3) generate CQs and (4) optionally export them for reuse with another application or for documentation purposes; (5) translate a CQ into SPARQL; (6) execute the query and propose an interpretation of the results; (7) judge the relevance of the question-query pair and allow the user to iteratively refine the query; (8) export the Q2set for reuse with other systems. Note that Q2Forge remains very flexible: a user may follow the whole pipeline, but may also run each task independently by simply importing/pasting input data and exporting/copying the outputs.
Fig. 1: Q2Forge pipeline: resources and services.
The rest of this section further describes the steps depicted in Figure 1.
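Under the simplifying assumption that each pipeline step is a function from state to state, the steps above can be sketched as follows. The stub implementations and return values are placeholders, not Gen2KGBot's actual services:

```python
# Each step consumes and enriches a shared state dictionary.
def create_config(kg_name):      return {"kg": kg_name}
def extract_schema(cfg):         return {**cfg, "schema": ["ClassA"]}
def generate_cqs(cfg):           return {**cfg, "cqs": ["What are all instances of ClassA?"]}
def translate(cfg):              return {**cfg, "queries": ["SELECT ?x WHERE { ?x a :ClassA }"]}
def execute_and_interpret(cfg):  return {**cfg, "results": [["x1"]]}
def judge_and_refine(cfg):       return {**cfg, "validated": [True]}
def export_q2set(cfg):           return list(zip(cfg["cqs"], cfg["queries"]))

# Full pipeline run. Mirroring Q2Forge's flexibility, any single step could
# also be invoked on its own with pasted-in inputs.
state = create_config("example-kg")
for step in (extract_schema, generate_cqs, translate,
             execute_and_interpret, judge_and_refine):
    state = step(state)
q2set = export_q2set(state)
print(q2set)
```

The state-passing style makes the "run the whole pipeline or just one task" property explicit: each function only needs the keys it reads, regardless of how they were produced.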
# 3.1 KG Configuration and Pre-processing
Create a KG configuration. The pipeline starts with creating a KG configuration (depicted in Figure 2) where the user provides minimal information about the target KG: a name, a short name used later as an identifier, a textual description, a SPARQL endpoint URL, and the namespaces and prefixes to be used in the SPARQL queries and Turtle descriptions. Optionally, the user may fill in the URL of a SPARQL endpoint hosting the ontologies in case they are not on the same endpoint as the KG itself.
Once created, the configuration is stored on the back-end server. Additional parameters can be edited manually to configure the available language models (seq-to-seq and embedding), where they are hosted (e.g. local vs. cloud resources, vector database, etc.), and how they are assigned to each step of the pipeline. For instance, one may choose to use a large model with reasoning capabilities for generating a SPARQL query, but use a smaller model to interpret SPARQL results. Other parameters configure the strategy adopted to serialize ontology classes in the prompts submitted to seq-to-seq models, such as the number of ontology classes to describe and the linearization format used to describe them. Multiple formats are supported (currently Turtle, tuples or a NL format; see examples in Listing 1.1), since different language models may behave differently depending on the selected format.
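A configuration of this kind might look like the following sketch; all keys and values are hypothetical and only illustrate the parameters just described (per-step model assignment and class serialization), not Q2Forge's actual configuration format:

```python
# Hypothetical per-KG configuration combining the form fields and the
# manually editable parameters described above.
config = {
    "kg": {
        "name": "Example KG",
        "short_name": "exkg",
        "endpoint": "https://example.org/sparql",
        "prefixes": {"obo": "http://purl.obolibrary.org/obo/"},
    },
    "models": {
        "cq_generation": "large-reasoning-model",      # heavy lifting
        "sparql_generation": "large-reasoning-model",
        "result_interpretation": "small-local-model",  # cheaper step
    },
    "class_serialization": {
        "format": "turtle",  # or "tuples" / "natural-language"
        "max_classes": 50,   # how many ontology classes to describe in a prompt
    },
}

print(config["models"]["result_interpretation"])
```

Keeping the model assignment per step is what lets a user trade cost for quality independently at each stage of the pipeline.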
KG pre-processing. We extract from the KG various types of information that will be helpful to carry out the downstream text-to-SPARQL task: ontology classes relevant to a NL question, example SPARQL queries, etc. In our implementation, this step first creates a textual description of the classes from the labels and descriptions available in the ontologies, and computes text embeddings thereof. In Figure 2, this is achieved in steps 2 and 3. Furthermore, there is usually a gap between how an ontology defines classes and how instances of these classes are concretely represented in the KG. Typically, instances may use properties and resources from additional vocabularies that are not explicitly mentioned in the ontology. Therefore, the text-to-SPARQL task requires not only a textual description of the classes, but also a description of how instances of these classes are represented. Gen²KGBot addresses this need by sampling class instances and analyzing the properties and value types they use (examples are provided in Listing 1.1). Lastly, the user may provide existing examples of NL questions and their associated SPARQL queries. The pre-processing includes computing embeddings of these question-query pairs.
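The similarity search enabled by this pre-processing can be sketched as follows. This is a minimal stand-in: a real deployment would use a proper text-embedding model rather than the toy bag-of-words vectors below, and the class descriptions are invented for illustration.

```python
import math
from collections import Counter

# Toy bag-of-words "embedding"; a real deployment would use a proper
# text-embedding model here.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented class descriptions, as built from ontology labels/comments
class_descriptions = {
    "obo:CHEBI_53289": "donepezil racemate chemical compound",
    "pubchem:MeasureGroup": "group of bioassay measures",
    "obo:RO_0000056": "participates in relation",
}
class_embeddings = {c: embed(d) for c, d in class_descriptions.items()}

def select_similar_classes(question, k=2):
    # rank classes by cosine similarity to the question, keep the top k
    q = embed(question)
    ranked = sorted(class_embeddings,
                    key=lambda c: cosine(q, class_embeddings[c]),
                    reverse=True)
    return ranked[:k]

top = select_similar_classes("which chemical compound participates in the assay")
```

Pre-computing the class embeddings once per KG is what makes this lookup cheap at question-answering time.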
# 3.2 Competency Question Generation
This step invokes a language model to generate CQs based on various types of information about the KG: name and description, endpoint URL, and the list of ontologies used. This information is either taken from the KG configuration (created in the previous step) or manually entered in a form. The user may also provide any other relevant information, e.g. the abstract of an article describing the KG.
@prefix obo: <http://purl.obolibrary.org/obo/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
obo:RO_0000056 rdfs:label "participates_in" .
[] a obo:CHEBI_53289 ; obo:RO_0000056 [ a pubchem:MeasureGroup ] .
('obo:CHEBI_53289', 'http://purl.obolibrary.org/obo/RO_0000056', 'participates_in', 'http://rdf.ncbi.nlm.nih.gov/pubchem/vocabulary#MeasureGroup')
Instances of class 'obo:CHEBI_53289' have property 'obo:RO_0000056' (participates_in) with value type 'pubchem:MeasureGroup'.
Listing 1.1: Formats to describe properties and value types used by instances of a class: Turtle (top), tuple (middle), natural language (bottom).
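The three formats of Listing 1.1 can be produced mechanically from the same underlying facts about a class. The sketch below illustrates this; the `linearize` function and its argument layout are hypothetical, not Gen²KGBot's actual API.

```python
# Produce the three linearizations of Listing 1.1 from one set of facts.
# 'linearize' and its argument layout are illustrative, not a real API.
def linearize(cls, prop_qname, prop_iri, prop_label, vt_qname, vt_iri):
    turtle = f"[] a {cls} ; {prop_qname} [ a {vt_qname} ] ."
    tup = (cls, prop_iri, prop_label, vt_iri)
    nl = (f"Instances of class '{cls}' have property '{prop_qname}' "
          f"({prop_label}) with value type '{vt_qname}'.")
    return turtle, tup, nl

turtle, tup, nl = linearize(
    "obo:CHEBI_53289",
    "obo:RO_0000056", "http://purl.obolibrary.org/obo/RO_0000056",
    "participates_in",
    "pubchem:MeasureGroup",
    "http://rdf.ncbi.nlm.nih.gov/pubchem/vocabulary#MeasureGroup",
)
```

Because the three forms carry the same information, switching the linearization format in the configuration is just a matter of picking which rendering is injected into the prompt.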
Fig. 2: The KG configuration and pre-processing interface.
Figure 3 depicts the competency question generation interface. The user can select the language model to be used for the generation of the CQs and the number of CQs to be generated. The model is instructed to return each question with an evaluation of its complexity (Basic, Intermediate or Advanced) and a set of tags. The Enforce Structured Output toggle can be used to compel the model to return the CQs as a JSON-formatted document.
Upon completion of the process, the user may download the output as a JSON document and save it in a browser cookie for reuse in the next step.
Fig. 3: The competency question generation interface.
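When structured output is enforced, the model's response can be parsed and validated straightforwardly. The JSON layout below (fields `question`, `complexity`, `tags`) is a plausible shape for such output, not necessarily the exact schema used by Q²Forge.

```python
import json

# Hypothetical structured output for generated CQs; the exact schema
# Q2Forge enforces may differ.
raw_output = """
[
  {"question": "Which compounds participate in measure groups?",
   "complexity": "Basic",
   "tags": ["compound", "bioassay"]},
  {"question": "Which proteins are targeted by compounds with an IC50 below 1 uM?",
   "complexity": "Advanced",
   "tags": ["protein", "bioactivity"]}
]
"""

cqs = json.loads(raw_output)
# check that every CQ carries one of the three complexity levels
valid = all(cq["complexity"] in {"Basic", "Intermediate", "Advanced"}
            for cq in cqs)
```

Enforcing a machine-readable shape like this is what makes the download/reuse step reliable: the output can be validated before it is saved or passed on.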
# 3.3 SPARQL Query Generator/Executor
In this step, the user can generate the SPARQL query counterpart of a NL question, execute it against the KG, and get an interpretation of the results. The question may originate from the preceding task, or the user may paste a question either hand-crafted or generated by another system.
Q²Forge relies on various strategies provided by Gen²KGBot to accomplish this task, which we refer to as “scenarios”. In the following, we focus on Scenario 5, depicted in Figure 4 and further described below. When running a scenario, its steps are progressively rendered on the interface, and for those that make an LLM call, the response is dynamically streamed to ensure a good user experience. Figure 5 is a snapshot of the interface of the SPARQL Query Generator/Executor. The steps are as follows:
1. Initial question: the workflow is initiated by the user posing a NL question.
2. Question validation: the question is evaluated to ensure its relevance to the context of the KG. If it is deemed invalid, the workflow stops.
3. Question pre-processing: common techniques are used to extract named entities (NEs) from the question.
4. Select similar classes: similarity search between the question and the ontology class descriptions computed in the KG pre-processing step is used to select relevant classes.
5. Get context information about the classes: retrieve a description of the properties and value types used with instances of the selected classes.
6. Generate query: generate a prompt from a template¹¹ using the KG configuration and the inputs from the previous steps, and submit it to the configured LLM.
7. Verify query and retry: check if a SPARQL query was generated and if it is syntactically correct. If not, generate a retry prompt that includes the last generated answer and the reason for the retry, e.g. syntax errors, and submit this retry prompt to the configured LLM.
8. Execute the SPARQL query: if a valid SPARQL query was generated, submit it to the KG endpoint and get the results.
9. Use the configured LLM to interpret the SPARQL results.
Fig. 4: Gen2KGBot SPARQL Query Generator/Executor: workflow of Scenario 5
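The steps above can be sketched as a plain-Python pipeline. All helpers are stubs standing in for LLM calls, similarity search and SPARQL execution; Q²Forge itself orchestrates these steps with LangChain/LangGraph rather than this simplified control flow.

```python
# Minimal sketch of the nine steps as plain Python. Every helper is a
# stub standing in for an LLM call, similarity search, or endpoint call.
MAX_RETRIES = 2

def validate_question(q):                 # step 2 (stub judge)
    return "compound" in q.lower()

def extract_entities(q):                  # step 3 (stub NER)
    return [w for w in q.split() if w[0].isupper()]

def select_classes(q):                    # step 4 (stub similarity search)
    return ["obo:CHEBI_53289"]

def class_context(cls):                   # step 5 (stub class description)
    return f"context({cls})"

def llm_generate(prompt):                 # steps 6-7 (stub LLM)
    return "SELECT * WHERE { ?s ?p ?o }"

def is_valid_sparql(query):               # step 7 (stub syntax check)
    return query.strip().startswith("SELECT")

def execute(query):                       # step 8 (stub endpoint)
    return [{"s": "ex:a"}]

def interpret(results):                   # step 9 (stub LLM interpretation)
    return f"{len(results)} result(s) found."

def scenario5(question):
    if not validate_question(question):   # invalid question: stop here
        return None
    entities = extract_entities(question)
    classes = select_classes(question)
    context = [class_context(c) for c in classes]
    query = llm_generate(f"{question}\n{entities}\n{context}")
    retries = 0                           # step 7: retry on bad syntax
    while not is_valid_sparql(query) and retries < MAX_RETRIES:
        query = llm_generate(f"retry, fix this query: {query}")
        retries += 1
    if not is_valid_sparql(query):
        return None
    return interpret(execute(query))

answer = scenario5("Which compound participates in a MeasureGroup?")
```

The early exit on validation and the bounded retry loop mirror the workflow's two failure paths: irrelevant questions and persistently invalid queries.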
Scenario 5, described above, is useful as a starting point when no prior question-query pair exists. However, once some pairs have been validated or if some pairs were hand-crafted, they can be added to the context and serve as examples. Scenario 6 can then be applied instead, as it provides the model with relevant example SPARQL queries that can help in generating more accurate queries with fewer refinement iterations.
# 3.4 SPARQL Query Refinement
In this step, the user can incrementally refine a SPARQL query so that it reflects precisely the question. Figure 6 is a snapshot of the interface, and the process is as follows:
1. First, the query is displayed in a SPARQL editor that highlights potential syntactic errors and can be used to submit the query to the endpoint.
2. To help the user understand the query, Q²Forge can extract the qualified (prefixed) names (QNs) and fully qualified names (FQNs) from the query and get their labels and descriptions. For instance, the label of http://purl.obolibrary.org/obo/CHEBI_53289 is “donepezil”, and its description is “a racemate comprising equimolar amounts of (R)- and (S)-donepezil (...)”.
Fig. 5: The SPARQL Query Generator/Executor interface.
Fig. 6: The query refinement interface.
3. Then the LLM is asked to judge whether the query matches the given question. It is requested to provide a grade between 0 and 10 along with explanations justifying the grade.
The user may then iterate as needed: amend the query based on the grade and insights from the model, test it, have the model judge it, etc. Once a satisfying query is reached, the user can add the question-query pair to a dataset and export it in a variety of formats, catering to different use cases.
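Two of the refinement helpers lend themselves to a compact sketch: extracting prefixed names from a query, and grading the question-query match. The regex and the `judge` stub below are illustrative; in Q²Forge the grade comes from an LLM-as-a-judge call.

```python
import re

# Extract prefixed names (QNs) such as obo:CHEBI_53289 from a query.
QNAME = re.compile(r"\b([A-Za-z][\w-]*):([A-Za-z_][\w-]*)")

def extract_qnames(query):
    return sorted({f"{p}:{l}" for p, l in QNAME.findall(query)})

def judge(question, query):
    # Stub for the LLM-as-a-judge call: grade in [0, 10] plus explanation.
    if "CHEBI_53289" in query:
        return 9, "query targets the class matching the question"
    return 3, "wrong target class"

query = "SELECT ?g WHERE { ?s a obo:CHEBI_53289 ; obo:RO_0000056 ?g }"
qnames = extract_qnames(query)
grade, explanation = judge("Which donepezil measure groups exist?", query)
```

The extracted QNs are what the interface resolves to labels and descriptions, and the grade/explanation pair drives the amend-test-judge loop described above.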
# 4 Source Code and Documentation
From a technical perspective, Q²Forge integrates several software components that are robust and have been proven effective in various contexts. LangChain and LangGraph¹² are used for LLM workflow orchestration, spaCy¹³ for question pre-processing and named entity recognition, rdflib¹⁴ for the manipulation of RDF data, and YasGUI¹⁵ as a SPARQL editor.
Source Code Availability. Q²Forge and the accompanying Gen²KGBot are provided under the GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later). The code is published in public GitHub repositories, and the versions used at the time of writing are identified by DOIs to ensure long-term preservation and citability.¹⁶ The API provided by Gen²KGBot is documented according to the OpenAPI format.¹⁷
Sustainability Plan. Over the next four years, financial support has been secured through the MetaboLinkAI project.¹⁸ This project aims to transform metabolomics data into actionable insights through AI-powered, knowledge-graph-driven solutions. This will provide an opportunity to evaluate the quality, relevance and applicability of Q²Forge in the chemistry domain. Moreover, a fundamental objective of Q²Forge is to provide a generic solution, reusable with a variety of KGs. Consequently, we intend to provide support to communities expressing interest and willingness to experiment with it for their own needs. This support may range from best-effort to more formalized collaboration. To support collaborations, adoption and contributions, we have secured two other contributors: the P16 public program,¹⁹ which helps open-source projects improve their code and dissemination, and the Probabl company,²⁰ whose mission is to develop, maintain, sustain and disseminate a state-of-the-art suite of open-source tools for data science.
# 5 Application Use Cases
In this section, we present several application use cases where Q²Forge can be of great help. First, we focus on the creation of Q²sets and the benefits of generating them instead of performing this task manually. Second, we examine how Q²Forge can document existing KGs with multiple competency questions. Finally, we explore its application in creating a golden dataset for benchmarking, training and testing question answering models.
Lowering the entry barrier to query rich KGs: Large public KGs usually provide lay users with user-friendly interfaces that propose pre-defined queries and exploration options. Yet such interfaces can hardly accommodate complex custom queries where SPARQL expertise is necessary. For instance, with over 111 million chemical substances and extensive bio-activity data, PubChem presents significant navigation challenges for researchers. Metabolomics experts often struggle to formulate complex SPARQL queries that would help them identify relationships between compounds, biological activities, and disease associations. Q²Forge addresses this challenge by allowing chemists to generate natural language questions and automatically convert them to SPARQL queries, thus drastically reducing the time spent on data retrieval. To do so, researchers must provide a textual description of the KG together with additional relevant textual information, such as the abstracts of articles published about PubChem or about research made possible through PubChem. For example, a researcher might ask: “Which compounds have been tested against SARS-CoV-2 Main Protease and reported an IC50 below 1 µM?” or “Which natural product compounds from marine sponges show antimicrobial activities against Pseudomonas aeruginosa?”. Similarly, environmental health researchers studying the exposome face difficulties extracting meaningful correlations between environmental factors and metabolic responses across heterogeneous datasets. Currently, they rely either on simple predefined queries or must collaborate with knowledge engineering specialists, creating bottlenecks in research workflows. Q²Forge could enable them to independently generate appropriate question-query pairs that bridge environmental exposures and biological outcomes, eliminating technical barriers to knowledge discovery.
A typical researcher might need to ask: “Which air pollutants are known to increase Nrf2 antioxidant protein expression?” or “What metabolic biomarkers show significant alterations following chronic exposure to per- and polyfluoroalkyl substances (PFAS) in human biomonitoring studies?”
Ground truth and question-query benchmarks: Our community has pioneered challenges and benchmarks for question-answering over linked data [18, 11, 7]. However, each edition of these challenges requires updating Q²sets for tasks that were proposed in previous editions, and creating new Q²sets for newly proposed tasks. Few such Q²sets are readily available, and they are often based on the same KGs (e.g. DBpedia, Wikidata). Setting up a new edition of a challenge therefore requires a significant effort to generate or update the training and test data. Q²Forge was designed to help produce these Q²sets and can be used to facilitate the renewal of tasks for challenges and benchmarks on a variety of KGs. For instance, the QALD Challenge [18] has long been centered on DBpedia. In the latest edition, it was extended to Wikidata. Using Q²Forge, we could extend the challenge with tasks targeting domain-specific graphs such as UniProt [6] or the aforementioned PubChem graph.
Documenting a KG with competency questions: CQs are commonly used to demonstrate the basic capabilities of a KG. This requires working with domain experts to identify the CQs they may want to ask. In our experience, this is a time-consuming task involving multiple iterations. Q²Forge can be used to initialize, expand and enhance the scope and variety of CQs by systematically generating hundreds of competency questions. For instance, PubChem’s documentation currently provides a valuable foundation of 16 CQs.²¹ Q²Forge could enhance this foundation by showcasing the full scope and complexity of chemical, biological, and pharmacological relationships within this extensive KG. The generated question sets can serve multiple purposes: providing entry points to new users, supporting KG indexing, benchmarking search capabilities, identifying promising research directions, and accelerating the development of next-generation retrieval systems.
# 6 Potential Impact and Reusability
Target Audiences and Expected Uses. We have already identified three families of users, corresponding to the three use cases described in Section 5: (1) the developers and maintainers of question answering systems, chatbots, conversational agents and other natural language search engines over KGs. The methods behind these systems all require Q²sets to train, test and evaluate the system. They are the primary target of Q²Forge. (2) Events and groups organizing challenges, benchmarking existing solutions and building surveys. These are in constant need of new and renewed Q²sets to compare the latest methods and establish the state of the art. Here, Q²Forge facilitates the creation of Q²sets from any KG in any domain. (3) While it is strongly recommended to document existing datasets and query services with examples of typical questions and queries, this is rarely done and, when it is, rarely extensive. Q²Forge was designed to help generate these examples with quality and quantity in mind.
We have initiated experiments with the first family of users in the chemistry and metabolomics domain. Pharmaceutical researchers developing drug discovery platforms require comprehensive question-query pairs to train intelligent systems for identifying promising molecular candidates across multiple parameters. These researchers benefit from Q²Forge’s ability to generate diverse questions exploring structure-activity relationships, pharmacokinetic properties, and target binding profiles. Metabolomics data scientists integrating multi-omics datasets need sophisticated query templates that traverse complex biochemical pathway knowledge, particularly when correlating mass spectrometry findings with biological outcomes. Based on this experimentation, we believe that academic laboratories focusing on cheminformatics and bioinformatics can utilize Q²Forge to develop educational materials demonstrating how semantic queries extract meaningful insights from chemical databases. Q²Forge can significantly reduce technical barriers that have historically prevented domain experts from fully leveraging KG technologies in their specialized fields. For the third family of users, we have experimented with Q²Forge on outputs of the D2KAB project. D2KAB produced several datasets, among which the Wheat Genomics Scientific
Literature Knowledge Graph [23], which represents the named entities extracted from a corpus of over 8,000 PubMed articles related to wheat genetics and genomics. The NEs include genes, phenotypes, taxon names and varieties in titles and abstracts. During the project, we worked with domain experts to elicit several CQs²² to document the graph and illustrate its usefulness. When tested with this KG, Q²Forge was able to automatically generate relevant CQs and translate them into SPARQL queries that were close to the target. After a short refinement step, we obtained valid question-query pairs. With these two experiments, we are confident that Q²Forge is flexible enough to be applied to a broad range of KGs and domains.
Potential for Reuse and Extension Points. As mentioned earlier, Q²Forge strives for flexibility: the various tasks of the pipeline can be executed as a whole, and some tasks can be used independently of the others. In addition to the interfacing with other systems shown in Figure 1, the system is designed to be reused and integrated into other scenarios. In education, for instance, when teaching SPARQL, Q²Forge could be modified to serve as a tailored instructor guiding learners through the complexities of SPARQL. Furthermore, since the query refinement task can be accessed independently of the other tasks (its URL takes arguments “question” and “query”), adding the appropriate button to an existing SPARQL editor could seamlessly integrate this task into an existing workflow.
Current development of protocols such as MCP (Model Context Protocol),²³ A2A (Agent-to-Agent)²⁴ and hMAS (Hypermedia Multi-Agent Systems)²⁵ reflects ongoing efforts to simplify integration, enhance collaboration, and ensure secure and efficient communication between AI agents and external systems. These protocols can potentially be interfaced with Q²Forge to facilitate its incorporation into broader systems. Integrating MCP would standardize Q²Forge’s interface with LLMs, enabling seamless integration of (components of) Q²Forge’s pipeline into other workflows. Conversely, Q²Forge could be extended to support the invocation of MCP servers providing access to third-party services such as knowledge graphs. Additionally, incorporating A2A would allow Q²Forge to support multi-agent collaboration across diverse ecosystems, fostering coordination between agents of varying frameworks. Finally, aligning with hMAS would leverage semantic hypermedia for uniform interactions among people, devices, and digital services, creating hybrid AI communities that operate transparently and accountably on the Web. These extensions would make Q²Forge even more versatile, facilitating the development of KG applications in different domains.

Abstract. The SPARQL query language is the standard method to access knowledge graphs
(KGs). However, formulating SPARQL queries is a significant challenge for
non-expert users, and remains time-consuming for experienced ones. Best practices recommend documenting KGs with competency questions and example
queries to contextualise the knowledge they contain and illustrate their
potential applications. In practice, however, this is either not the case or
the examples are provided in limited numbers. Large Language Models (LLMs) are
being used in conversational agents and are proving to be an attractive
solution with a wide range of applications, from simple question-answering
about common knowledge to generating code in a targeted programming language.
However, training and testing these models to produce high quality SPARQL
queries from natural language questions requires substantial datasets of
question-query pairs. In this paper, we present Q²Forge, which addresses the
challenge of generating new competency questions for a KG and corresponding
SPARQL queries. It iteratively validates those queries with human feedback and
LLM as a judge. Q²Forge is open source, generic, extensible and modular,
meaning that the different modules of the application (CQ generation, query
generation and query refinement) can be used separately, as an integrated
pipeline, or replaced by alternative services. The result is a complete
pipeline from competency question formulation to query evaluation, supporting
the creation of reference query sets for any target KG.

Categories: cs.DB, cs.AI, cs.IR
# I. INTRODUCTION
Technology is not just a reflection of societal needs, it actively shapes behaviors, interactions, and the way we engage with the world. As software increasingly drives user interactions and automates processes, acknowledging its socioeconomic and environmental impacts becomes crucial. Analyzing these impacts on society at large, and in relation to the specific needs of underrepresented stakeholders, is therefore essential. In addition, the software engineering community itself requires support to address the diverse needs of its workforce. Ensuring inclusion and equity within industry processes is vital to foster a more balanced and supportive work environment [1].
The Software Engineering in Society (SEIS)¹ track of the International Conference on Software Engineering (ICSE) has played a pivotal role in exploring the impact of Software Engineering (SE) on society. It provides a platform for discussions about broad SE societal implications. This track started in 2015, welcoming research on SE for a sustainable society in various areas including health, physical, environmental and social sciences, management, economics, computing, policy, manufacturing, arts, and interdisciplinary research. Since 2022, the track has also welcomed a more diverse range of topics, including COVID-19, ethics, diversity and inclusion, misinformation, communication, research partnerships, and many more.
Building on past research is essential for gaining insights and advancing knowledge. Reflecting on previous work while identifying key themes and gaps helps shape future research directions and ensures continuous progress in the field. In light of this, and inspired by the fact that the SEIS track has existed for 10 years, we analyze a decade of research on the societal impact of software engineering by examining SEIS publications. We selected the SEIS track as a proxy for SE research in a societal context. From its inception, SEIS has focused on sustainability and societal impact, making it suitable for observing how the notion of sustainability has evolved over time.
To this aim, we explore trends based on the problems addressed by novel approaches and identify research gaps in areas that have received less attention. We further review the publications in this track from a sustainability perspective. We chose a sustainability angle for our analysis because sustainability is at the core of all societal values. It addresses issues related to diversity, inclusion, and equity through its social dimension, provides eco-friendly solutions through the environmental dimension, emphasizes cost-effectiveness and prosperity through the economic dimension, and enables the long-term use of digital technologies that continuously evolve to solve societal needs.
We pose three research questions (RQs) to explore this track regarding topics, trends, gaps, and sustainability foci, namely: RQ1. What are the topics addressed in the SEIS track? Through this RQ, we aim to identify the major topics published in this track. RQ2. What are the research trends and gaps in the SEIS track? Through this RQ, we aim to identify the emerging trends over a decade and research gaps that require further attention. RQ3. What is the coverage of the SEIS track in terms of sustainability dimensions? Through this RQ, we aim to identify how 4D sustainability (economic, environmental, social, and technical) is addressed in SEIS publications.
To answer these RQs, we carried out a systematic mapping study [2] to map the state of the research published in SEIS.
The rest of the study is structured as follows. Section II provides the study background. Section III presents similar publications that map the state of the art for other SE conferences. Sections IV and V describe the study design, and the study execution and results, respectively. Section VI discusses our reflection on the results. Section VII provides an overview of the threats to the validity of this research and mitigation strategies. Finally, Section VIII concludes along with some future directions.
# II. ON SUSTAINABILITY
As we use a sustainability perspective for our analysis, here we provide context for sustainability in terms of its need and impacts, followed by a description of sustainability-related concepts that are used in the analysis.
# A. Background
The desire to incorporate sustainability into SE stems from society’s growing understanding of sustainability. Products that are not only effective and user-friendly, but also ethical and environmentally responsible, are becoming increasingly popular [3]. Approximately 97% of climate experts believe that human activity is mostly responsible for the trends in global warming over the last century [4]. In today’s digital age, software systems are critical to the functioning of civilization, influencing everything from complex industrial operations to daily communication [5]. As a result, the development, deployment, and maintenance of these systems have significant ramifications that extend beyond conventional metrics like speed and reliability to include long-term effects on the environment and society [6].
Despite these advances, there are numerous challenges and knowledge gaps in the field of SE for sustainability [7]. In practice, sustainability and societal considerations are often treated as secondary, pointing to a fundamental barrier to shifting SE toward more responsible and future-oriented approaches. There are notable variances in how sustainability is implemented across different industries and regions, which can be attributed to varying levels of resources and expertise [8]. Furthermore, some parts of sustainable software engineering, such as energy-efficient computing, have received more attention than others, such as the social ramifications of software systems and their role in long-term economic stability. Our study seeks to critically examine the current state of sustainability-related research in the SEIS track.
# B. Definition, dimensions, and impacts
In the context of this study, we define sustainability as “the preservation of the long-term and beneficial use of digital solutions, and their appropriate evolution, in a context that continuously changes” [9]. This definition implies that an equitable way of problem-solving must be established and that the impacts of SE solutions must be carefully analyzed over time. Moreover, to classify the types of “beneficial uses” of digital solutions in the specific context of SE, we use the four sustainability dimensions defined by Lago et al. [10]. In particular:
Economic dimension refers to preserving capital and financial value.
Environmental dimension refers to the preservation of natural resources by addressing ecological requirements.
Social dimension refers to the preservation of social resources through generational equity by supporting and creating benefits for communities.
Technical dimension refers to the preservation of software in terms of its long-term use and continuous evolution.
In addition, to analyze the role of SE in society, we consider both its primary and its enabled focus in studies based on direct and enabling effects defined by Hilty et al. [11].
# III. RELATED WORK
Works related to ours include publications that review the research published in SE scientific venues, in general, as well as in the context of sustainability.
# A. Conference Track Reviews
Few works review SE conference tracks to capture the sociotechnical perspectives and human values, described as follows.
A classification of ICSE publications (research track) from 2015-2017 captures the socio-technical perspectives [12]. The study identifies a need to diversify the research techniques to include human and social aspects while maintaining a balance with the technical aspects. The results show that stakeholder involvement using design science strategies, and using human subjects for research while fulfilling all ethical criteria, can aid in including socio-technical aspects. Triangulation and diversification of research strategies can also help improve this. An investigation of human values in SE publications (from 2015-2018) [13] identifies 11 categories of human values from ICSE, ESEC/FSE, TSE, and TOSEM, based on the Schwartz Values Structure. The results of the study reveal that only 16% of the publications considered human values, with 41% related to security. Furthermore, the findings show that 60% of the socially significant values are ignored in SE research.
# B. Sustainability research in SE
Several systematic mapping studies have been performed over the years to observe the state of sustainability research in SE and its evolution over time.
A systematic mapping analysis was used to classify sustainability research within SE [14]. The study identified various research hotspots, including models, techniques, and software design, and mapped publications to knowledge domains. The report offers a detailed summary of current trends in sustainability research and suggests avenues for further investigation. The study mapped the broader subject of sustainability in SE. Another mapping study further classifies state-of-the-art concepts, models, and frameworks in relation to sustainability in SE [15]. Penzenstadler et al. [16] carried out a comprehensive literature review to understand the current status of sustainability research in SE. They gave an overview of the body of existing literature and classified research efforts into various sustainability issues. A multi-vocal literature review assessing the practical relevance of SE research over 34 years (1985-2019) [17] identifies a lack of relevance and collaboration with the industry. The study also emphasizes the significance of carrying out empirically grounded research on the notion of relevance in SE.
Previous studies have examined the state of research in either (i) general software engineering tracks [12], [13] or (ii) studying one aspect e.g., sustainability [14]–[16] or practical relevance [17] over the years. Contrary to the previous works, we analyze the SEIS track, which is dedicated to SE in society research. Through this study, we aim to identify the topics and emerging trends in this domain and the impact of this track in terms of its research contribution.
# IV. STUDY DESIGN
To conduct our systematic mapping study, we use the methodology described by Petersen et al. [2]. We provide a replication package [18] that contains an appendix with primary studies, extracted data, and scripts.
# A. Data Collection
We collected the publications published in the ICSE SEIS conference track from its inception in 2015 through 2024, retrieving a total of 123 publications. We provide the list of publications with their publication IDs as an appendix in our replication package [18].
Contrary to typical mapping studies, ours does not apply the inclusion and exclusion criteria step from Petersen et al. [2]: as our analysis requires all publications published in this track, we include all of them for data extraction.
# B. Classification Scheme and Mapping
Based on the goals of our RQs, we classified and mapped the publications to relevant categories as follows.
1) RQ1 – Research Topics: We extracted the keywords from the respective study’s metadata. For the publications with missing keywords, we used the abstracts as an extraction source. If the abstract did not provide sufficient information, we used the introduction and conclusion of the publication as a secondary keyword source.
We collected a total of 644 keywords from the 123 publications. We clustered these keywords to observe the frequency of research topics, using a BERT model [19] to generate embeddings from which we created semantically similar clusters via K-means clustering. The scripts are provided in our replication package [18]. We excluded certain keyword clusters from thematic categorization, such as SE, Software Development, and Technology (and technology names), because these keywords do not represent the research theme of the study under analysis; as all publications are published at ICSE, the high frequency of such keywords is a natural consequence. Further, we manually merged keyword clusters based on overlapping themes.
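The embed-then-cluster step can be sketched as follows. This is a minimal illustration, not the study's actual script: the study used BERT embeddings [19], for which a TF-IDF vectorizer stands in here so the sketch stays dependency-light, and the keyword list is invented for the demo.

```python
# Sketch of grouping author keywords into semantically similar clusters.
# Assumption: TF-IDF vectors substitute for the BERT embeddings used in
# the study; swap in any embedding model that returns one vector per keyword.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_keywords(keywords, n_clusters):
    """Assign each keyword to one of n_clusters semantic buckets."""
    vectors = TfidfVectorizer().fit_transform(keywords)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
    clusters = {}
    for kw, label in zip(keywords, labels):
        clusters.setdefault(label, []).append(kw)
    return clusters

# Illustrative keywords, not taken from the dataset.
keywords = ["sustainability", "green software", "gender bias",
            "gender balance", "open source", "open-source software"]
clusters = cluster_keywords(keywords, n_clusters=3)
```

In the study, the resulting clusters were then manually reviewed and merged by theme, which a fully automatic pipeline like this cannot replace.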
2) RQ2 – Types, Trends and Gaps: We categorize the publications based on the type of research using the classification by Wieringa et al. [20]. These categories include evaluation research, validation research, proposal of solution, philosophical publications, personal experience publications, and opinion publications. We aim to understand the research methodologies employed to study SE in society. We classify the publications against a research type by initially reviewing the introduction and conclusion sections. Further, we used key phrases and approaches described in the full text to assign the publications to relevant categories. For example, phrases like ‘evaluation of’, ‘case study’, and ‘real-world application’ frequently indicated Evaluation Research, whereas phrases like ‘proposes a new method’ and ‘introduces a technique’ suggested a Solution Proposal. These indications were used to hypothesize a research type. The final decision about the research type was made based on the type definitions by Wieringa et al. [20].
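The phrase-based triage can be sketched as a simple lookup. This is an illustration only: the indicator phrases below are the examples named above, and in the study the resulting label was always a hypothesis confirmed against the Wieringa et al. [20] definitions, not a final classification.

```python
# Hypothesize a research type from indicator phrases in a publication's text.
# The phrase lists are illustrative; the study's final label came from a
# manual check against the Wieringa et al. type definitions.
INDICATORS = {
    "evaluation research": ["evaluation of", "case study", "real-world application"],
    "solution proposal": ["proposes a new method", "introduces a technique"],
}

def hypothesize_type(text):
    """Return the first research type whose indicator phrase appears in text."""
    lowered = text.lower()
    for rtype, phrases in INDICATORS.items():
        if any(p in lowered for p in phrases):
            return rtype
    return "undetermined"

label = hypothesize_type("This paper proposes a new method for log analysis.")
```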
Based on the keyword clusters, we classified the publications by research focus and contributions, identifying trends based on the distribution of publications over time. Emerging trends were labeled with descriptive names, while categories with fewer publications revealed research gaps. These gaps were highlighted to guide future research. We organized the categories of trends based on the keywords used in the papers. For instance, a paper on diversity and inclusion may fall under sustainability, but we only included it if it was framed as sustainability in the paper. We read the full text of these publications to confirm their categorization into a trend and reported on their contributions.
3) RQ3 – Sustainability Coverage: To answer RQ3, we read the publications through a sustainability lens. For each publication, we provide two types of sustainability mappings: (i) the primary sustainability focus, which maps the primary focus of its contribution to a single corresponding sustainability dimension; and (ii) the enabled sustainability focus, which maps the intent of its contribution onto one or multiple corresponding sustainability dimensions. For example, a publication contributing design patterns for inclusive human-computer interaction would be classified as (i) a technical contribution (for software design) with (ii) a social intent (for user inclusivity).
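The two mappings can be represented as a single primary dimension plus a set of enabled dimensions per publication. A minimal sketch, using the inclusive-HCI example above; the publication ID and counting helper are illustrative, not part of the study's tooling.

```python
# Each publication gets one primary dimension and a set of enabled dimensions.
# "P_example" is a hypothetical ID standing in for a real publication.
publications = {
    "P_example": {
        "primary": "technical",   # single primary sustainability dimension
        "enabled": {"social"},    # one or more enabled sustainability dimensions
    },
}

def count_dimension(pubs, dimension):
    """Count how many publications hit a dimension as primary vs. enabled focus."""
    primary = sum(1 for p in pubs.values() if p["primary"] == dimension)
    enabled = sum(1 for p in pubs.values() if dimension in p["enabled"])
    return primary, enabled

primary_social, enabled_social = count_dimension(publications, "social")
```

Aggregating these per-publication records over the years is what produces the dimension counts reported for RQ3.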
4) Limitations: As suggested by Verdecchia et al. [21], here we discuss some limitations posed by our study design; Section VII discusses the threats to the validity of our empirical investigation. For RQ1, as we mainly used the keywords provided by the publication authors, our interpretation of the topics might be limited. To avoid the propagation of this possible bias into the RQ2 results, we re-clustered the keywords through manual analysis and merged them into different clusters. Further, we read the publications’ full text to extract the trends based on their context. Based on the full-text check, none of the papers were removed from the clusters; they were only merged into larger generic categories, showing that the original keyword clusters were adequate for topic classification. For RQ3, the sustainability classification is subject to the authors’ understanding of sustainability. To mitigate possible limitations or biases, different co-authors performed the two types of sustainability classifications and cross-checked the results for consensus. Finally, by design, the scope of our study is limited to SEIS publications; as such, our results are not generalizable beyond this track. However, future research could extend this study with a broader scope, e.g., to map the general state of the art on the role of SE in society.
# V. STUDY EXECUTION AND RESULTS
In this section, we present our results organized per RQ.
# RQ1. What are the topics addressed in the SEIS track?
We identify the top seven² categories from the keyword clusters (see Fig. 1), as detailed in the following.
Sustainability appears as the top keyword in this track, used by 17 publications. Next, diversity and inclusion are the top keywords used by 16 publications. A total of 14 publications mention keywords specific to their research methodology, where case study and empirical research are the most commonly used keywords. Thirteen publications discuss open-source software. Ten publications discuss ethics, with topics ranging from concerns, micro-politics, policy, and whistleblowing phenomena to the use of harmful terminology and the responsibility associated with such ethical issues. Nine publications include topics on gender bias, gender balance, and inequities. Eight publications collect and analyze data from social media platforms such as Twitter/X, Reddit, LinkedIn, Facebook, and StackOverflow. Seven publications focus on education, both enabled through software and the education of software engineers. Five publications focus on solving problems for individuals with atypical interaction capabilities, such as dementia, autism, and visual impairments.
From Fig. 1, we can observe that:
• Sustainability was a primary focus in the early years, followed by a noticeable gap from 2021 to 2023, and a renewed interest in 2024. This may be a reflection of the initial focus of the track (which in the 2015 call featured sustainability prominently), and the recent increase in sustainability-related societal and research focus.
• Discussions surrounding open source were inconsistent, peaking in 2022.
• Topics related to ethics and gender have been present since 2018; however, publications explicitly addressing diversity and inclusion began to emerge only after 2022, albeit at a higher frequency.
• From 2016 to 2019, and in 2021, there is a notable gap in research about the atypical interaction capabilities of users and developers. From 2022, however, this topic appears to be addressed steadily, albeit with just one publication per year.
# RQ2. What are the research trends and gaps in the SEIS track?
Based on the main keywords, we classify the related articles into various topics.
In the following, the trends reflect the topics that are most frequently addressed, while the gaps reflect the topics addressed by fewer publications and the limitations of the published work. In the next subsections, we discuss the trends and gaps in terms of research types, problems explored, and solutions provided.
²We report on the top seven categories only; the remaining categories have a relatively low frequency. Data is provided in the replication package [18].
# A. Trends
Research Types. Fig. 2 shows a general increase in the number of publications published annually, indicating a rise in interest and research. There is a noticeable emphasis on empirical research and rigorous scientific procedures, as evidenced by the frequent attention given to evaluation research methodologies. Research on evaluation and solution-based approaches peaks from 2022 onwards. Solution-based publications are distributed unevenly across the years, though they maintain a steady number from 2021 onwards. The less common philosophical publications consistently contribute theoretical results; personal experience and opinion articles, though rarer still, provide deep practical insights and subjective perspectives.
Sustainability. Research in this category highlights the multi-dimensional nature of sustainability in software engineering, spanning design (technical), human needs (social), ethical considerations (social), and collaborative strategies (socio-technical), all of which contribute to building sustainable software. Our results show publications discussing sustainability in terms of general principles around sustainability design [P01], special human needs in the context of sustainability [P10, P11], sustainability design through requirements [P13, P19] and architecture [P14, P45, P107, P118], and sociotechnical sustainability [P54]. Other topics include the identification of ethical [P09] and value-sensitive concerns [P04], sustainability assessment criteria [P116], sustainability through language [P05] and governance [P06], collaboration strategies for sustainable SE [P21], and inter-disciplinary sustainability initiatives [P36].
Gender Inequity. Multiple publications highlight gender inequities faced by women in SE, as well as women’s contributions to SE roles. Bias in job advertisements discourages female candidates [P69], and women encounter issues such as cultural sexism, work-life balance difficulties, imposter syndrome, and the glass ceiling [P71]. Despite these challenges, women play a significant role in reducing miscommunication and information overload in SE teams, even when outnumbered [P40]. There are systemic challenges that women face in their career trajectory from university to workplace, which require support at all stages [P120].
In terms of gender diversity, LGBTQIA+ professionals face unique challenges in remote work [P90] while also bringing unique strengths to the job, and SE professionals with immigration backgrounds experience various forms of micro-inequities [P108]. Research on female open-source contributors points to the competence-confidence gap [P38], though a general increase in their participation has been observed, with a slight drop during the COVID-19 pandemic [P85].
Additionally, one study explores men’s attitudes toward gender equality at the workplace, identifying both supportive and hindering behaviors [P88], while children’s perceptions show a more balanced view of SE roles for men and women, with the pandemic further normalizing certain roles due to increased accessibility [P89].
Fig. 1. Top Keywords Categories - Frequency over the years
Fig. 2. Research Methodology Types over the years
Adapting for individuals with atypical interaction capabilities. Publications in this category focus on user-interface adaptations for the needs of users with atypical interaction capabilities. Trends show publications focusing on (i) challenges in engineering software: for dementia [P10], designing UIs for autistic users by preventing frustration and mental exertion caused by animations [P78], and identifying the human-centric issues on GitHub for visually impaired and dyslexic users [P76]; and (ii) leveraging software for inclusivity: by integrating social interaction and visual assistance during physical activity [P50], and using an inclusive design process for providing navigational assistance to users with cognitive impairments [P95].
Ethical concerns in SE. We see a trend in publications exploring ethical considerations and social issues in software development and technology use. One study argues that ethical concerns are too complex to be governed by rigid rules or treated as simple non-functional requirements [P09]. To address these complexities, a technique for generating augmented regulatory text has been developed to aid both developers and policymakers in promoting principled morality [P35]. Another study investigates the failure of an ERP system, identifying micropolitical intervention as a primary factor [P53]. Privacy comparisons between different types of apps reveal that COVID-19 apps tend to manage privacy and ethical issues better than social media and productivity apps [P63]. The experiences of marginalized communities on social platforms are also highlighted, with common concerns including discrimination and misrepresentation [P92]. A tool has been created to detect and replace harmful terminology, promoting inclusivity across race, gender, ability, and neurodivergence [P97]. Recommendations for improving whistleblowing practices in SE are provided, focusing on harm mitigation and the role of professional bodies [P70]. Surveys show that many respondents view social awareness and ethics as significant concerns for smart devices used in public spaces [P101]. Additionally, using humor has been found to improve developer engagement, particularly in challenging tasks like testing and documentation [P106]. These findings underline the importance of integrating ethical principles throughout the use and development of software, from privacy and inclusivity to social awareness and organizational responsibility.
Open Source Software (OSS). We identified several publications discussing various aspects of open-source software (OSS) like governance, inclusivity, and contributor dynamics. Governance rules are proposed to enhance transparency, traceability, and semi-automation within OSS projects [P06]. Efforts like the Software Heritage Archive aim to collect, preserve, and make source code publicly accessible [P28]. The influence of large foundations on OSS development is also examined [P29]. Further, a study investigates the motivations behind projects joining the Apache Software Foundation. These motivations stem from community-building, project strengthening, and enhanced technical development [P84]. The impact of perceived gender identity and code quality on pull request acceptance decisions is analyzed, highlighting how these factors shape contribution evaluations [P44]. A dashboard designed to attract and retain OSS contributors is presented [P72], alongside an inclusivity debugging process that addresses information architecture faults [P77]. Surveys of OSS contributors reveal ongoing challenges related to diversity and inclusion, focusing on gender, seniority, and language proficiency [P81]. Research on gender differences in code contributions indicates a positive trend in women’s participation, though there was a slight decline during the COVID-19 pandemic [P85]. Additionally, the tendency for women to withdraw earlier from OSS participation compared to men is noted [P104]. Finally, it is argued that insights from scientific research with social impact should be treated as open-source software artifacts to maximize their reach and utility [P99].
Social Media Analysis. Our findings reveal a growing trend of utilizing social media platforms to mine valuable data for understanding user needs, ethical concerns, and developer perspectives. Several publications use social media for mining user needs, such as mapping Twitter/X data to enhance emergency app features [P25], analyzing images from Twitter/X during the COVID-19 pandemic [P63], and scraping subreddit data to identify ethical concerns of marginalized communities [P92]. Social media is also used to extract developer perspectives, such as analyzing Stack Overflow conversations to understand secure coding practices [P42]. Additionally, online discussions are analyzed, including app store reviews of COVID-19 apps for security and accessibility [P68] and LinkedIn discussions to improve scientific communication [P114].
Citation Trends. To further elaborate on the trend and impact of research, we rank the publications by citation count. We use the Crossref API to extract citation data. One limitation of citation analysis is that such APIs may provide partial citation data, as the availability of citations depends on the completeness of citation metadata.
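Retrieving a citation count from Crossref reduces to one GET request per DOI and one field lookup. A hedged sketch: the `works` endpoint and the `is-referenced-by-count` field are from the public Crossref REST API, the network call is left commented out, and a canned response stands in so the sketch runs offline; the DOI shown is a placeholder.

```python
# Sketch of extracting a citation count from a Crossref works response.
# Live call (commented out; requires network access):
#   import json, urllib.request
#   url = "https://api.crossref.org/works/" + doi   # doi is a placeholder
#   with urllib.request.urlopen(url) as r:
#       response = json.load(r)

def citation_count(response):
    """Return the citation count, or None when the metadata is incomplete."""
    return response.get("message", {}).get("is-referenced-by-count")

# Canned response mimicking the Crossref payload shape, for an offline demo.
sample = {"message": {"is-referenced-by-count": 42}}
count = citation_count(sample)
```

Returning `None` for missing fields mirrors the limitation noted above: partial metadata yields partial citation data rather than an error.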
Naturally, older papers tend to have more citations (see Fig. 3), and many post-2020 papers have 0-5 citations. However, some topics may attract varying attention over the years. For the top trends, we focus on papers with at least 10 citations. The most frequently cited research pertains to sustainability, with the Karlskrona Manifesto [P01] leading the list. Other notable sustainability papers discuss decision-making [P45], sustainability requirements [P19], sustainability debt [P14], and value-sensitive design in SE [P04]. After sustainability, the topic of gender diversity, specifically women in technology teams [P40], follows with the second-highest citations. Additional papers concerning the role of women, and related challenges and impacts, also have substantial citations [P45, P19, P14, P04, P71]. Numerous citations are recorded for studies about value-driven SE [P15, P57, P61, P04]. Moreover, there is a high citation volume for research related to app stores and social media. We provide the citation data in our replication package [18]. We cannot infer much from these citation numbers; the only observation we make is that papers with a wider scope have a high citation count, while papers presenting significant work in a niche area may have fewer citations.
Fig. 3. Citations of SEIS papers over the years
# B. Research Gaps
Research Types. Fig. 2 shows a small number of opinion publications and personal experience publications. We observe that practical, experience-based contributions and subjective interpretations are lacking. This gap identifies possible areas for additional research and involvement, where the discipline could benefit from a greater focus on real-world experiences and opinion-based conversations with experts and industry.
SE for Low Socio-Economic Groups. Despite growing interest in designing software for diverse user groups, limited research specifically addresses the needs and challenges of low socioeconomic groups [P94]. Factors like low literacy rates and socioeconomic status significantly impact how these groups use software. More research is needed to develop software that is accessible and usable for these user groups.
Role of Emotions in Software Development. While the technical aspects of software development are well-studied, there is limited research on how developers’ emotional health affects their job satisfaction and productivity. In addition to documentation [P67], understanding developers’ attitudes toward SE processes could improve SE practices and enhance their well-being and productivity. Future research should explore these emotional aspects to better support SE practice.
Impact of Workplace Discrimination Interventions. While the trends show publications discussing the prevalence and consequences of workplace discrimination, little research has been done to examine the efficacy of interventions to reduce and mitigate discrimination. Further research is needed to evaluate strategies and policies in this area [P105].
Multidisciplinary Approaches for Smart Public Spaces. Designing user-friendly smart public spaces requires a multidisciplinary approach, yet there is little research in this area. Future publications could integrate insights from SE, social sciences, and urban planning to tackle the challenges and opportunities of smart public spaces, making smart city initiatives more inclusive and effective [P101].
Social Dynamics in SE Teams. Despite extensive research on the technical aspects of SE, the social dynamics within SE teams have received little attention. Understanding interpersonal relationships, communication styles, and team dynamics is crucial for enhancing collaboration and productivity. While existing literature highlights the importance of societal factors like socioeconomic issues and workplace discrimination, a gap remains in understanding how social dynamics affect project outcomes and individual well-being. Future publications should explore how social interactions and team cohesion influence software development processes and results [P67, P105, P94].
# RQ3.What is the coverage in the SEIS track in terms of sustainability dimensions?
We classify the publications based on their primary and enabled focus on the sustainability dimensions. Figs. 4 and 5 illustrate the yearly distribution of SEIS publications across sustainability dimensions. Each bar represents the total publications published in a year. The lines and trend lines represent the coverage of the sustainability dimensions of focus, i.e., social, economic, environmental, and technical.
Primary Sustainability Focus. In Fig. 4, we use the sustainability dimensions of focus to classify the “direct” novel contributions of SEIS publications. We map all publications to one of the four sustainability dimensions in terms of their primary focus. Our results show that most publications (72 out of 123) are technical (including support for SE processes and proposals of software applications, tools, and approaches). A significant number of publications (50 out of 123) focus on social aspects (including analyses of and reflections on social issues, cultures, hiring procedures, skills, and gender issues). Only one publication [P27] contributes to environmental aspects (environmental awareness creation), while no publication directly makes an economic type of contribution.
Fig. 4 also shows the trend line of the primary contribution of publications in the social and technical dimensions. The two trend lines show a slight (for the technical contributions) and a steeper (for the social contributions) increase in publications. Interestingly, they intersect in 2022 when the focus on social aspects surpasses for the first time the focus on technical ones.
Enabled Sustainability Focus. We use the sustainability dimensions of focus to classify the “enabling” impact of the contributions to one or multiple sustainability dimensions. Fig. 5 shows the trend of the overall sustainability coverage in terms of both direct and enabling impacts. Contrary to trends based on direct contribution, we observe that the social dimension is the most addressed (109 out of 123), reaching its peak in 2023. The technical dimension is the second most addressed dimension (62 out of 123). However, even in terms of enabling impacts, the economic dimension is less emphasized (38 out of 123), and finally, the environmental dimension is the least discussed (20 out of 123).
We present a summary of the sustainability classification in Table I, showing contributions per year across all sustainability dimensions. We summarize the overall sustainability trends as follows.
Economic dimension. Publications in this area highlight cost efficiency, resource management, and financial benefits from sustainable practices [P01-P10, P13-P18, P27, P41, P45, P99]. However, interest in the economic dimension has declined in recent years, with few contributions in 2023-2024 [P116].
Fig. 4. Primary Sustainability Focus over the years
Fig. 5. Enabled Sustainability Focus over the years
Environmental dimension. Early publications prioritize energy efficiency, renewable energy, and reducing environmental footprints [P01, P03, P05, P13, P19, P27]. After a dip in attention between 2019 and 2023, interest resurges in 2024 with a focus on energy consumption and sustainability-aware architecture [P107, P110, P118, P119].
Social dimension. This is a consistent area of interest, with an emphasis on human-centered design, and inclusivity [P01, P04, P10, P15, P32, P38]. Recent publications address equity, diversity, and support for marginalized groups [P40, P76, P82, P89, P120], showing an evolving focus on social equity.
Technical dimension. Throughout the years, there has been a strong focus on maintainability, adaptability, and long-term usability of software [P19, P22, P28, P33, P99]. Recent publications include topics like privacy, security, and architectural patterns [P109, P112, P118].
Overall, the trends show an increased focus on approaches addressing the sustainability of the software ecosystem in the context of social issues, followed by a focus on the technical dimension. A lack of publications that address environmental and economic dimensions is observed.
# VI. DISCUSSION
# A. On Trends and Gaps
Our analysis of a decade of SEIS publications highlights the main trends and gaps. Overall, the majority of the publications are empirical studies that discuss social aspects; these are categorized as evaluation research. We observe a scarcity of opinion- and experience-based publications, despite these being encouraged in the call for papers. This is an important gap, as it would be valuable for the community to learn from publications that report experience-based perspectives from the viewpoint of, e.g., practitioners, researchers, and other disciplines.
Further, a notable trend is the increasing focus on addressing workplace discrimination in software engineering, with researchers examining its causes, effects, and the importance of fostering inclusive environments. This highlights the need for diversity and anti-discrimination efforts to improve team dynamics and productivity. However, there is a research gap to assess the effectiveness of interventions aimed at mitigating these issues. Studying the social dynamics within the SE workforce could help develop appropriate interventions. Research on the emotional well-being of developers and its impact on productivity also remains limited. Research on the effects of positive discrimination could also add an interesting perspective to the diversity and inclusion issues.
We also observe a trend in ethical concerns that focuses mainly on privacy, discrimination, and misrepresentation. Most of these studies are conducted in the context of a specific community. More research is needed to uncover the challenges and needs of underrepresented groups, directly or indirectly affected by the SE processes and their aftereffects.
Finally, we only found one study [P99] focusing on sharing (e.g., as open-source) the knowledge and results for potential reuse. We find this interesting, as the gap between research and practice is still an open problem. We argue that more research should invest in synthesizing results in reusable formats so as to have a greater impact, especially in SE for society.
# B. On Sustainability and Social Impact
The results of RQ1 show sustainability as the top keyword, and the trends also highlight publications discussing sustainability. However, these publications exhibit considerable variation in their understanding and representation of sustainability: while some study sustainability in terms of concerns and requirements (both technical and social), others focus on sustainability in communication and governance. We also see a general misconception of the notion of sustainability, or at least only partial coverage: many publications mention sustainability in some dimension of focus (e.g., social or technical), but they largely neglect the sustainability dimension of time, disregarding the fact that even positive interventions for, e.g., inclusivity are unsustainable unless they are accompanied by a durable change in behavior and/or society. More research is needed on how the dimensions of focus and time can be combined for true sustainability in SE.
Based on the trends identified in RQ2 and the sustainability mapping in RQ3, our results show that the SEIS track features publications that discuss sustainability for both their primary and enabled foci.
We see a dominance of the social dimension overall and across different facets of sustainability. The environmental dimension has been neglected over the years, with a significant decline recently. Moreover, only one study [P27] directly addresses environmental sustainability concerns, while others address them only through enabling impacts; e.g., employing a technical solution such as design patterns [P107] can lead to energy savings as an enabling effect. The same holds for other dimensions; e.g., studying the social impact of UI animations on people with ADHD can enable SE practitioners to develop technical solutions to support their needs. The neglected sustainability dimensions must also be studied for their direct impacts.
Based on our findings, we conclude that a solution aimed at directly addressing one sustainability dimension not only has effects on other sustainability dimensions but also leads to the creation of solutions that are cross-cutting across the four sustainability dimensions of focus. Research considering the impacts of these cross-cutting concerns is essential for achieving sustainability.
# VII. THREATS TO VALIDITY
For the discussion of the threats to the validity of our work and related mitigating measures, we referred to the threat categories from Wohlin et al. [22].
Internal Validity. Variability in our findings may be caused by variations in data extraction techniques. To address this, we created a uniform data extraction procedure and carried out cross-checks across the authors to guarantee precision and consistency. Naturally, there is a chance of potential bias in the classification and results; however, the authors cross-checked the classification protocol and results to mitigate such bias. We only used keywords from metadata and, in some cases, abstracts and conclusions. There is a chance of misrepresentation for results that rely solely on keyword counts per cluster. However, we manually analyzed the keyword groups and updated the topic categories by merging keyword clusters.
TABLE I OVERVIEW OF PAPERS PER PUBLICATION YEAR: CLASSIFICATION PER SUSTAINABILITY DIMENSION OF FOCUS
External Validity. Since our research is by design limited to the SEIS track, the results cannot be generalized and may be subject to bias based on the specific acceptance criteria of the track itself and/or this specific conference. However, our classification scheme can be applied to other conferences for future comparative analysis of results.
Construct Validity. We systematically followed the predefined design of our study for each RQ. Our classification of topics and trends relies on the keywords declared by the authors of the publications under analysis. We analyze the full text of the studies in the top topics to confirm the correctness of the cluster assignment. In our work, none of the publications were removed from or reclassified in another cluster. For the sustainability mapping, we define the sustainability concepts in the background section and use them to classify the publications. The two types of classifications were performed by different authors and, as a mitigation action, were later crosschecked for correctness and consistency.
Conclusion Validity. The implications of this research are subject to the researcher’s bias. Further, the synthesized results may not capture the complete context of the publications, which can lead to misinterpretation of the results. As a mitigating action, the authors of this study separately performed the synthesis and then cross-checked the results to detect and address potential inconsistencies.
# 1 Introduction
Logs are textual records generated during software execution to capture runtime events, states, and contextual information [82]. A typical log statement consists of three components: a verbosity level, logging variables, and logging texts [16, 31]. In particular, as shown in the example below, the logging level (e.g., debug) reflects the event’s severity; the logging variables (e.g., terminalState) hold critical run-time data about system states; meanwhile, the logging text (e.g., Stopping the checkpoint services with state) offers a static explanation of the system’s actions.
log.debug("Stopping the checkpoint services with state {}.", terminalState);
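To make this anatomy concrete, a small helper (our own illustration, not part of any cited tooling) can split such an SLF4J-style statement into its three components:

```python
import re

def parse_log(stmt: str):
    """Split an SLF4J-style call into (level, static text, variables)."""
    m = re.match(r'\w+\.(\w+)\("([^"]*)"(?:,\s*(.*))?\);', stmt)
    level, text, vars_ = m.group(1), m.group(2), m.group(3)
    variables = [v.strip() for v in vars_.split(",")] if vars_ else []
    return level, text, variables

parsed = parse_log('log.debug("Stopping the checkpoint services with state {}.", terminalState);')
# parsed -> ("debug", "Stopping the checkpoint services with state {}.", ["terminalState"])
```

Real logging statements can of course be far more varied (nested calls, concatenation, multiple placeholders), which is part of what makes the generation task non-trivial.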
High-quality logs provide actionable insights for developers to diagnose failures, optimize system behavior, and ensure reliability. However, the absence or inadequacy of logging statements can severely hinder downstream tasks such as anomaly detection [41, 50, 88] and failure diagnosis [22, 29, 83], leading to prolonged debugging cycles and increased maintenance costs. Consequently, the strategic placement and content of log statements directly influence the effectiveness of software maintenance and evolution [21, 68].
Despite their critical role in software maintenance, producing high-quality logs manually is far from straightforward for developers. First, the absence of universal logging guidelines leads to inconsistent practices, where log quality, granularity, and utility vary widely based on individual developers’ expertise. This inconsistency complicates log analysis and reduces their diagnostic value [4]. Second, developers face the difficult task of balancing log quantity and quality: over-logging burdens systems with excessive data, while under-logging risks missing critical information [80]. Third, the cognitive and time-intensive nature of manual logging further exacerbates these issues, as developers must anticipate complex system behaviors and failure points, often resulting in logs that are either too vague or overly detailed [15]. Additionally, maintaining log relevance over time is challenging, as software evolution can render existing logs obsolete [92]. These challenges highlight the need for automated logging statement generation (hereafter referred to simply as ‘automated logging’) solutions that can consistently produce high-quality logs.
To address these challenges, at the early stage, researchers have explored automated logging techniques by decomposing the problem into sub-tasks, such as where-to-log (identifying code locations for logging) [45, 90], what-to-log (generating static content and dynamic variables) [9, 54], and log-level suggestion [18, 46, 51]. However, these fragmented approaches lack integration into a unified, end-to-end framework for generating complete logging statements. Recent advances in pre-trained language models (LMs) have opened new avenues for automated logging. LANCE [57] and LEONID [56] pioneered the use of sequence-to-sequence models (e.g., T5 [62]) to generate logging statements directly from code contexts. Subsequent tools, such as Fastlog [76], Unilog [77], and SCLogger [44], further leveraged the large language model (LLM) to improve logging quality. In particular, Unilog and SCLogger adopted prompt-based methods with LLMs such as GPT-3.5 and Codex, achieving state-of-the-art performance by harnessing the code comprehension and natural language generation capabilities of LLMs.
Despite their effectiveness, LLM-based logging tools [44, 76, 77] face several limitations in enterprise settings.
• Privacy Risks. Sending proprietary code to commercial LLM APIs, such as OpenAI’s, risks exposing sensitive intellectual property [81]. For instance, Samsung banned employee use of ChatGPT and other generative AI tools after an engineer accidentally leaked sensitive source code to ChatGPT [33].
• Style Misalignment. LLMs, trained on generic datasets, struggle to generate logs that align with enterprise-specific logging styles, such as unique verbosity levels or error prioritization requirements, limiting their utility for organizational needs [14, 17].
To address these issues, enterprises often consider fine-tuning and deploying open-source LLMs in private environments. However, this approach demands substantial computational resources, including thousands of GPU hours, which is impractical for many resource-constrained organizations [7, 24]. These limitations necessitate more accessible and efficient solutions for automated logging.
Small open-source language models (SOLMs), defined as open-source models with fewer than 14 billion parameters, have gained traction in software engineering tasks, such as program repair [67] and comment rectification [64], making them a promising solution for automated logging. By deploying SOLMs locally on consumer-grade hardware, such as an A100 GPU, enterprises can safeguard proprietary code, eliminating privacy risks associated with commercial APIs [79]. Moreover, SOLMs require significantly fewer computational resources, reducing costs and aligning with sustainable computing goals [24]. Additionally, SOLMs can be fine-tuned on enterprise-specific codebases to produce logs that meet unique organizational requirements, such as specific formats or compliance standards [55, 64]. For resource-constrained enterprises, SOLMs offer a practical balance of privacy, efficiency, and adaptability, making their application to automated logging highly attractive. However, to the best of our knowledge, no studies have systematically explored the effectiveness of SOLMs in automated logging.
To fill this significant gap, in this paper, we conduct an empirical study on four prominent SOLMs, namely LLaMA [12], Mistral [28], CodeLLaMA [66], and Qwen2.5-coder [23], to explore the potential utility of SOLMs in the automatic generation of logging statements. We pose four research questions to comprehensively assess the potential of SOLMs.
RQ1: What are the most effective prompting strategies for using SOLMs in logging generation? Different prompting techniques (e.g., in-context learning (ICL) [2], chain-of-thought (COT) [74], retrieval-augmented generation (RAG) [36]) can influence the performance of SOLMs without the need for retraining. Gaining insight into their effects can help improve the generation of logging statements across different contexts.
Result. RAG outperforms ICL and COT, significantly enhancing performance on the automated logging task.
RQ2: What is the best tuning strategy using SOLMs for automated logging? Many strategies such as parameter-efficient fine-tuning (PEFT) techniques [19, 20], model size, and model type may influence the efficacy. Thus, we further investigate the extent of their impact, which may offer insights into the optimal selection of strategies for enhancing SOLM performance.
Result. LoRA [20] demonstrates the most consistent and superior results when fine-tuning with PEFT techniques. For models with more than 3B parameters, performance in generating logging statements improves with more parameters, but the increased computational costs indicate a trade-off between performance and resource costs. The instruct variant of the SOLM model outperforms its base counterpart, benefiting from its instruction-tuned foundation.
RQ3: How effectively do SOLMs compare to existing methods and LLM baselines in automated logging? Upon recognizing the optimal strategies for employing SOLMs, we aim to investigate the performance of SOLMs in automated logging compared to existing methods and the prominent LLMs.
Result. Fine-tuned SOLMs, particularly Qwen2.5-coder-14B, outperform both existing methods and LLMs across all evaluated metrics, demonstrating superior logging location accuracy and statement quality. The result of the LLM-based judger further supports the high quality of SOLM-generated statements.
RQ4: Can SOLMs generalize logging statement generation across diverse code repositories? Logging practices vary across projects. Evaluating SOLM generalization ability on diverse repositories ensures their applicability in real-world development environments.
Result. SOLMs demonstrate strong generalization capabilities in automated logging, maintaining high performance across diverse, unseen repositories. Similar logging practices, such as those shared among Apache open-source projects, significantly improve the cross-project generalization ability of SOLMs.

01 public List<InputSplit> getSplits(JobContext job) {
02 long totalRows = getNumberOfRows(job);
03 int numSplits = job.getConfiguration().getInt(MRJobConfig.NUM_MAPS, 1);
04 List<InputSplit> splits = new ArrayList<InputSplit>();
05 long currentRow = 0;
06 for (int split = 0; split < numSplits; ++split) {

model input

01 public List<InputSplit> getSplits(JobContext job) {
02 long totalRows = getNumberOfRows(job);
03 int numSplits = job.getConfiguration().getInt(MRJobConfig.NUM_MAPS, 1);
04 LOG.info("Generating " + totalRows + " using " + numSplits);
05 List<InputSplit> splits = new ArrayList<InputSplit>();
06 long currentRow = 0;
07 for (int split = 0; split < numSplits; ++split) {

model output

Fig. 1. Task formulation: given a method that is missing a logging statement, the model is asked to automatically generate one at the appropriate position.
Contributions. To sum up, in this paper, we make the following contributions:
• We conduct the first large-scale empirical study assessing the effectiveness of SOLMs for automated logging. Our findings demonstrate their capability to produce contextually accurate and syntactically correct logging statements that rival or exceed the performance of existing specialized methods and LLMs.
• We investigate and identify effective strategies for optimizing SOLM performance in the context of automated logging. This includes demonstrating that RAG significantly enhances performance and that Low-Rank Adaptation stands out as a highly effective PEFT technique.
• We showcase the practical advantages of fine-tuned SOLMs, including their robust generalization capabilities across diverse and previously unseen code repositories. This research highlights that SOLMs can maintain high performance in varied settings and offer benefits such as local deployment for data privacy and alignment with enterprise-specific logging styles.
• To facilitate further research and encourage practical adoption in the field of automated logging, we publicly release our source code, datasets, and comprehensive experimental results [63].
Paper Organization. Section 2 discusses the background. Section 3 describes the experimental design of our study. Section 4 presents the experimental results. Section 5 introduces the advantages of using SOLMs for automated logging and potential future work directions. Section 6 discusses threats to validity. Section 7 introduces the related work. Section 8 concludes the paper.
# 2 Background
# 2.1 Problem Definition
This paper focuses on the automated logging task (i.e., where-to-log + what-to-log), which to some extent can be viewed as a code editing problem: when presented with lines of code, usually corresponding to a method, the generator’s task is to identify both the precise location for logging, referred to as the logging point, and the complete logging statement (i.e., level, variables, and text). The predicted logging point should match the one that was originally present in the source file before being removed, and the predicted logging statement itself should closely resemble the excised original. Figure 1 provides a visual example of this task, showing how a proficient logging statement generator would intelligently incorporate LOG.info("Generating " + totalRows + " using " + numSplits); at line 4. It is important to highlight that this task is distinctly separate from the comprehensive empirical investigation conducted by Li et al. [43], which predominantly examines the question of what-to-log but lacks consideration of where-to-log.
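As a sketch of how such code-editing instances can be derived (our own simplification, not the AL-Bench construction code), one can excise a logging statement from a method and record its position as the target:

```python
import re

def make_instance(method_src: str):
    """Remove the first logging statement; return (model input, target).

    The target records the 1-based line where the statement sat and the
    statement itself, mirroring the where-to-log + what-to-log framing.
    """
    lines = method_src.splitlines()
    for i, line in enumerate(lines):
        # Naive matcher for calls like LOG.info(...) / log.debug(...)
        if re.match(r"\s*(LOG|log|logger)\.(trace|debug|info|warn|error)\(", line):
            target = {"line": i + 1, "statement": line.strip()}
            return "\n".join(lines[:i] + lines[i + 1:]), target
    return method_src, None

method = """public void stop(State s) {
    LOG.info("Stopping with state " + s);
    doStop(s);
}"""
stripped, target = make_instance(method)
```

A production pipeline would use a real Java parser rather than a regex, but the shape of the resulting instance is the same: code without the log line as input, position plus statement as the label.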
# 2.2 Large Language Models
The evolution of language models can be divided into three transformative phases: first neural language models (NLMs), then pre-trained language models (PLMs), and now LLMs. Pre-trained models such as CodeT5 [73] and PLBART [1] have achieved noteworthy success in software engineering applications, primarily due to task-specific pre-training. However, large language models have brought about a revolutionary change in the field due to their immense parameter counts, often exceeding 10 billion, and their comprehensive pre-training data. Unlike their pre-trained predecessors, these models exhibit emergent capabilities that allow them to achieve robust performance across a wide range of tasks without pre-training tailored to specific tasks [11], substantially diminishing the requirement for resource-heavy training. Within the realm of software engineering, large language models fall into two groups: unified large language models, such as GPT-4o and LLaMA, are designed to integrate natural language and code corpora, whereas code-specific large language models, like StarCoder [40] and CodeLlama [66], are developed for specialization in coding-centered tasks.
Current methodologies for leveraging LLMs fall into two paradigms: prompt-based and tuning-based. Prompt-based methods exploit the zero-shot or few-shot capabilities of massive LLMs (e.g., GPT-4) through carefully engineered prompts, avoiding the need for explicit training data. They use techniques such as in-context learning (ICL) and chain-of-thought (COT). Conversely, fine-tuning approaches focus on parameter-efficient adaptation (e.g., prompt tuning, prefix tuning, and low-rank adaptation) to tailor smaller-scale LLMs for domain-specific tasks. These techniques freeze base model parameters while training minimal additional components, achieving performance comparable to full-parameter tuning at reduced computational costs.
# 2.3 LLM Applications in Software Engineering Tasks
Researchers have conducted in-depth investigations into the use of Large Language Models (LLMs) in a variety of software engineering tasks, including but not limited to code completion [25, 26], vulnerability detection [69, 91], program repair [48, 75], and test generation [6, 53]. These studies highlight the versatility and adaptability of LLMs in effectively tackling a range of software engineering challenges.
In certain tasks, approaches that utilize prompts have been shown to yield superior outcomes [24, 30]. Pan et al., for instance, studied the efficiency of LLMs in the context of code translation [60]. Within the range of models assessed, which encompassed both SOLMs and GPT-4, the top-performing SOLM, known as StarCoder, managed to reach a successful translation rate of 14.5%, in contrast to the success rate of 47.3% achieved by GPT-4. In contrast, SOLMs have been shown to achieve comparable outcomes in specific domains. For instance, Tian et al. [71] observed an F1-score of 86.58% using UniXCoder for the task of detecting equivalent mutants, providing a notable contrast to the performance of GPT-4, which recorded an F1-score of approximately 55.90%.
Fig. 2. Overview of the study design: after dataset preparation, RQ1 compares prompt strategies on base SOLMs, RQ2 examines fine-tuning choices (PEFT techniques, model size, and model mode), RQ3 compares tuned SOLMs against traditional approaches and LLM baselines (including an LLM judger), and RQ4 evaluates generalization ability on a specially constructed dataset.
Table 1. Statistics of source repositories of AL-Bench [70].
Furthermore, when employing Vicuna 13B in conjunction with the innovative LogPrompt strategy, the performance was found to be on par with that of GPT-4, as reported in [52].
# 3 Experimental Design
Figure 2 illustrates the overview of our study design. Initially, based on the AL-Bench dataset [70], we construct the fine-tuning, validation, and test datasets. Then we investigate SOLMs for automated logging through four research questions (RQs). RQ1 investigates the most effective prompt strategies for using SOLMs in this task. RQ2 aims to determine the optimal fine-tuning strategies across PEFT techniques, model sizes, and model types. Following that, RQ3 compares the performance of fine-tuned SOLMs against existing methods and LLM baselines. Finally, RQ4 assesses the ability of SOLMs to generalize their logging statement generation capabilities across diverse unseen code repositories.
# 3.1 Dataset Preparation
3.1.1 Studied dataset. To evaluate the performance of automated logging, we select the most recently proposed AL-Bench [70]. This dataset, containing 42,224 logging statements from 10 widely used projects, was specifically constructed using stringent criteria (e.g., $\geq$ 10k stars, $\geq$ 1k logging statements, and $\geq$ 500 log-related issues per project) to ensure the inclusion of well-maintained repositories where developers prioritize high-quality logging statements. Furthermore, AL-Bench intentionally incorporates projects from diverse domains, such as database management, task scheduling, distributed systems, messaging, and IoT platforms, to ensure comprehensive coverage of various real-world logging requirements and practices, making it a robust benchmark. Measures like using recent project versions and standardized code formatting were also employed during its creation to mitigate potential data contamination risks from pre-training corpora. Table 1 provides further statistical details on these source repositories.
3.1.2 Pre-processing and dataset construction. To align with the research objectives of this study and accommodate the requirements of certain baseline models, we performed several adjustments to the original dataset. Firstly, recognizing that some baseline models have an input token limit, we filtered out data instances where the input code exceeded 512 tokens. Secondly, the original construction of AL-Bench could generate multiple data points from a single Java function if it contained multiple logging statements, with each data point representing one specific automated logging case. To prevent potential data leakage, where highly similar code snippets from the same source file might inadvertently appear across different dataset splits (e.g., fine-tuning and testing), we implemented a file-level splitting strategy. This approach ensures that all data instances originating from the same Java file are strictly allocated to only one of the fine-tuning, validation, or test sets. After applying these pre-processing steps, we obtained a final dataset comprising 33,224 instances. We then partitioned this dataset into fine-tuning, validation, and test sets, targeting an 8:1:1 ratio. Because our file-level splitting strategy required keeping all instances from a single file within the same set, the resulting distribution was approximate. The final fine-tuning set contains 26,713 instances, the validation set contains 3,508 instances, and the test set contains 3,003 instances.
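The file-level splitting policy described above can be sketched as follows (an assumed implementation of the stated policy, not the released code): files, not instances, are shuffled and partitioned, so every instance from one Java file lands in exactly one split and near-duplicate snippets cannot leak across sets.

```python
import random

def file_level_split(instances, ratios=(0.8, 0.1, 0.1), seed=42):
    """instances: list of (source_file, payload) pairs.

    Groups payloads by file, shuffles the files deterministically, and
    cuts the file list at the target ratios, so splits are approximate
    in instance counts but strictly disjoint at the file level.
    """
    by_file = {}
    for f, payload in instances:
        by_file.setdefault(f, []).append(payload)
    files = sorted(by_file)
    random.Random(seed).shuffle(files)
    n = len(files)
    cut1 = int(n * ratios[0])
    cut2 = int(n * (ratios[0] + ratios[1]))
    parts = (files[:cut1], files[cut1:cut2], files[cut2:])
    return [[p for f in part for p in by_file[f]] for part in parts]

# Toy data: 10 files, 3 logging instances each.
data = [(f"File{i}.java", f"File{i}.java#{j}") for i in range(10) for j in range(3)]
train, valid, test = file_level_split(data)
```

Because whole files are assigned to a split, the final 8:1:1 ratio over instances is approximate, exactly as reported for the real dataset.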
# 3.2 Studied Models
In our study, we investigate the performance of the following SOLMs for logging statement generation. These models have been widely adopted in the literature related to SE tasks, including:
• LLaMA3 [12] is Meta’s latest LLM and refines the LLaMA 2 framework. It stands as a prominent open source LLM used in numerous software applications. Trained on an extensive and varied dataset far surpassing its predecessor, LLaMA 3 exhibits significantly improved proficiency in reasoning, code generation, and instruction adherence.
• Mistral [28] is noted for its efficiency and performance, utilizing Grouped-Query Attention (GQA) and Sliding Window Attention (SWA) for faster inference and broader context handling, and demonstrates robust general abilities and notable coding skills.
• CodeLlama [66] is a series of LLMs specialized in generating and completing code, based on the LLaMA2 framework. These models are initially trained using a dataset of 500 billion code tokens and subsequently refined to manage extended context effectively.
• Qwen2.5-coder [23] is a code-specialized version of the Qwen2.5 [78] family, which inherits Qwen’s multilingual capabilities and architectural improvement. While demonstrating strong and comprehensive coding abilities, it also possesses good general and mathematical skills.
# 3.3 Baselines
To evaluate SOLM performance, we select baselines by conducting a literature review of relevant papers published in SE venues. From this, we find the following methods for evaluation: LANCE [57], LEONID [56], UniLog [77], and FastLog [76]. Additionally, we examine several LLM baselines, including general-purpose LLMs (Claude3.7-sonnet, GPT4o, LLaMA3.1-405b) and a code-specific LLM (Deepseek-coder-v3). For the prompting methodologies, we derive our approach from the study [10] and incorporate four different strategies: instruction prompting (base), in-context learning (ICL), retrieval-augmented generation (RAG), and chain of thought (CoT). The instruction prompting strategy involves directly prompting LLMs to generate logging statements using inputs identical to those provided to SOLMs, without any supplementary data. The ICL approach provides one random example before the main query to help the model grasp the nature of the task. In the RAG approach, rather than selecting examples randomly, we retrieve precise examples from the validation dataset to accommodate various input samples. We specifically employ the BM25 algorithm to identify the case within the validation dataset that most closely resembles the query sample. The details of the prompts are shown in Figure 3.
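A toy version of the BM25-based retrieval used for RAG might look like this (our own minimal re-implementation with whitespace tokenization; the study’s actual retriever and tokenizer may differ):

```python
import math
from collections import Counter

def bm25_best(query, pool, k1=1.5, b=0.75):
    """Return the index of the pool document with the highest BM25 score."""
    docs = [d.split() for d in pool]
    avgdl = sum(len(d) for d in docs) / len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    N = len(docs)

    def score(d):
        tf = Counter(d)
        s = 0.0
        for t in set(query.split()):
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        return s

    return max(range(N), key=lambda i: score(docs[i]))

pool = ["int numSplits = conf.getInt ( NUM_MAPS )",
        "connection.close ( ) ; return status",
        "for split in numSplits add split"]
best = bm25_best("long rows ; int numSplits = conf.getInt", pool)
```

The retrieved example is then prepended to the prompt, giving the model a structurally similar method together with its known-good logging statement.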
# 3.4 Strategies for Parameter-Efficient Fine-Tuning
We examine how the following PEFT strategies influence the performance of SOLMs when automated logging.
Prefix tuning [42] is a PEFT strategy designed to adapt LLMs to specific downstream tasks while keeping the original model parameters entirely frozen. Instead of modifying the model’s weights, it introduces a small set of trainable continuous vectors, known as the "prefix", which are prepended to the key and value sequences within the multi-head attention mechanisms of the Transformer architecture, typically applied to the topmost $L$ layers. Specifically, for a given layer $l$, trainable prefix matrices $\mathbf{P}_k \in \mathbb{R}^{K \times C}$ and $\mathbf{P}_v \in \mathbb{R}^{K \times C}$ (where $K$ is the prefix length, a key hyperparameter, and $C$ is the hidden dimension) are concatenated with the original key ($\mathbf{K}_l \in \mathbb{R}^{M \times C}$) and value ($\mathbf{V}_l \in \mathbb{R}^{M \times C}$) matrices derived from the $M$ input tokens, forming augmented matrices $\mathbf{K}_l^{\prime} = [\mathbf{P}_k; \mathbf{K}_l]$ and $\mathbf{V}_l^{\prime} = [\mathbf{P}_v; \mathbf{V}_l]$. During fine-tuning, only the parameters comprising these prefix matrices ($\mathbf{P}_k$, $\mathbf{P}_v$ across the selected layers) are optimized via gradient descent, learning task-specific representations that effectively steer the frozen model’s attention and subsequent computations towards generating appropriate outputs for the target task. This approach significantly reduces the number of trainable parameters compared to full fine-tuning, requires storing only the small prefix parameters per task, and avoids catastrophic forgetting by leaving the core model untouched.
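At the shape level, the augmentation can be sketched as follows (illustrative dimensions only; real prefixes are trained tensors, shown here as zero-filled placeholders):

```python
K, M, C = 4, 10, 16                      # prefix length, input tokens, hidden dim

def zeros(rows, cols):
    return [[0.0] * cols for _ in range(rows)]

P_k, P_v = zeros(K, C), zeros(K, C)      # trainable prefix matrices
K_l, V_l = zeros(M, C), zeros(M, C)      # frozen keys/values from the input

K_aug = P_k + K_l                        # [P_k ; K_l] -> (K + M) x C
V_aug = P_v + V_l                        # [P_v ; V_l] -> (K + M) x C
trainable_per_layer = 2 * K * C          # only these parameters get gradients
```

Attention then runs over K + M positions, while the projection weights producing K_l and V_l never change.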
Prompt tuning [35] offers an even more lightweight approach by confining trainable parameters exclusively to continuous prompt embeddings added only at the input layer, while freezing the entire pre-trained model, including its word embedding table. This method prepends a sequence of $K$ learnable prompt embeddings, represented by a single trainable matrix $\mathbf{P}_{\mathrm{emb}} \in \mathbb{R}^{K \times C}$ (where $K$ is the prompt length and $C$ is the model’s embedding dimension), to the original sequence of $M$ input token embeddings $\mathbf{E} \in \mathbb{R}^{M \times C}$, yielding an augmented input sequence $\mathbf{E}^{\prime} = [\mathbf{P}_{\mathrm{emb}}; \mathbf{E}]$. This combined sequence $\mathbf{E}^{\prime}$ is then fed directly into the first layer of the frozen Transformer backbone. During the fine-tuning process, only the parameters of the prompt embedding matrix $\mathbf{P}_{\mathrm{emb}}$ are updated. The core idea is that these learned continuous vectors act as a "soft prompt" or task instruction, conditioning the frozen model’s behavior without requiring any internal modifications. Prompt tuning demonstrates significant parameter efficiency, typically necessitating the update of fewer than $0.1\%$ of the total model parameters. This characteristic renders it highly efficient in terms of both storage and computation, especially in multi-task contexts.
LoRA [20] provides a distinct PEFT mechanism based on the hypothesis that the change in weights during model adaptation has a low intrinsic rank. Instead of adding prefix or prompt tokens, LoRA freezes the original pre-trained weights $\mathbf{W}_0 \in \mathbb{R}^{d \times k}$ of selected layers (commonly the query, key, value, and output projection matrices in self-attention, and sometimes feed-forward layers) and injects trainable, rank-decomposition matrices in parallel. Specifically, the weight update $\Delta\mathbf{W}$ is approximated by the product of two smaller, low-rank matrices: $\mathbf{W}_{\mathrm{down}} \in \mathbb{R}^{d \times r}$ and $\mathbf{W}_{\mathrm{up}} \in \mathbb{R}^{r \times k}$, where the rank $r$ is a crucial hyperparameter significantly smaller than the original dimensions $(r \ll \min(d, k))$. The modified forward pass for an input $\mathbf{x}$ computes the output as the sum of the original path and the adapter path: $\mathbf{h}_{\mathrm{adapted}} = \mathbf{W}_0\mathbf{x} + \Delta\mathbf{W}\mathbf{x} = \mathbf{W}_0\mathbf{x} + \alpha(\mathbf{W}_{\mathrm{down}}\mathbf{W}_{\mathrm{up}})\mathbf{x}$, where $\alpha$ is a constant scaling factor, often chosen inversely proportional to the rank $r$. During fine-tuning, only the parameters of $\mathbf{W}_{\mathrm{down}}$ (typically initialized randomly) and $\mathbf{W}_{\mathrm{up}}$ (typically initialized to zero) are optimized.
A significant advantage of LoRA is that the learned adapter weights can be mathematically merged with the original weights $( \mathbf { W } = \mathbf { W } _ { 0 } + \mathbf { W } _ { \mathrm { d o w n } } \mathbf { W } _ { \mathrm { u p } } )$ after training, resulting in a single weight matrix per adapted layer and incurring zero additional inference latency compared to the original model, while still offering substantial parameter savings during training and allowing easy task switching by loading different adapter pairs.
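The merge property can be checked numerically with a toy example (pure Python, row-vector convention, illustrative values only): applying the adapter path at inference time and folding it into the base weight give identical outputs.

```python
def matmul(A, B):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

W0 = [[1.0, 0.0], [0.0, 1.0]]          # frozen 2x2 base weight
W_down = [[0.5], [0.25]]               # 2x1, rank r = 1
W_up = [[2.0, -1.0]]                   # 1x2
x = [[3.0, 4.0]]                       # one input, as a row vector

delta = matmul(W_down, W_up)           # low-rank update ΔW, a full 2x2 matrix
h_adapter = add(matmul(x, W0), matmul(x, delta))   # base path + adapter path
W_merged = add(W0, delta)              # merge once after training
h_merged = matmul(x, W_merged)         # single matmul, zero extra latency
```

Since the two outputs coincide, deployment can ship a single merged matrix per adapted layer, while task switching only requires swapping the small (W_down, W_up) pairs.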
QLora [8] represents a significant advancement in memory-efficient fine-tuning, specifically designed to make the adaptation of extremely large language models feasible on commodity hardware with limited VRAM. It ingeniously combines low-precision quantization of the base model with the LoRA technique. The core strategy involves loading the massive pre-trained base model $\mathbf { W } _ { 0 }$ with its weights quantized to a very low bit-format, most notably 4-bit NormalFloat (NF4), a data type empirically shown to be effective for normally distributed weights, and keeping these quantized weights $Q ( \mathbf { W } _ { 0 } )$ frozen. Standard LoRA adapters, consisting of low-rank matrices $\mathbf { W } _ { \mathrm { d o w n } }$ and $\mathbf { W } _ { \mathrm { u p } }$ , are then introduced parallel to these quantized layers, but crucially, these adapter weights are maintained and trained in a higher precision format, typically BFloat16, to preserve adaptation capacity. The forward pass thus involves computations using the low-precision base model and the higher-precision adapters: $\mathbf { h } _ { \mathrm { a d a p t e d } } \approx { \cal Q } ( \mathbf { W } _ { 0 } ) \mathbf { x } + \alpha ( \mathbf { W } _ { \mathrm { d o w n } } \mathbf { W } _ { \mathrm { u p } } ) \mathbf { x }$ . To further minimize memory bottlenecks, QLoRA incorporates innovations like double quantization and paged optimizers. By drastically reducing the memory footprint of the base model weights, activations (due to lower precision), and optimizer states, QLoRA enables fine-tuning models with tens or hundreds of billions of parameters on single consumer GPUs, while aiming to retain the task performance levels achieved by full-precision LoRA.
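The division of labour in QLoRA can be caricatured as follows (a crude uniform 4-bit grid for illustration, not the actual NF4 data type): the frozen base weight is stored coarsely, while the small adapter stays in full precision and absorbs the adaptation.

```python
def quantize4(w, scale):
    """Round to one of 16 signed levels, like a crude 4-bit format."""
    level = max(-8, min(7, round(w / scale)))
    return level * scale

scale = 0.1
W0 = [0.73, -0.28, 0.05]                       # full-precision base weights
Q_W0 = [quantize4(w, scale) for w in W0]       # frozen, low-precision copy
adapter = [0.012, -0.004, 0.000]               # trainable, full precision
W_eff = [q + a for q, a in zip(Q_W0, adapter)] # effective weight at inference
```

Real NF4 uses non-uniform levels matched to normally distributed weights, plus double quantization of the scales and paged optimizers, but the structural point is the same: only the tiny full-precision adapter receives gradients.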
# 3.5 Evaluation Method
3.5.1 Traditional evaluation metrics. Considering earlier research [43, 70], we assess the performance of automated generation of logging statements by focusing on four aspects: the logging point, the logging level, the logging text, and the logging variables. While each of these components highlights distinct aspects of system runtime information, they collectively serve as essential and complementary resources that aid engineers in analyzing and understanding system behaviour.
Logging point: We use position accuracy (PA) to evaluate the performance of logging location prediction. To quantify PA, we compare the predicted locations of logging statements against their ground truth positions in the source code. This metric is formally defined as the ratio of correctly positioned logging statements $(LS_{position\_correct})$ to the total number of logging statements $(LS_{all})$, expressed as $\frac{LS_{position\_correct}}{LS_{all}}$.
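Restating the PA formula in code (toy predictions, our own helper):

```python
def position_accuracy(pred_lines, gold_lines):
    """PA = (# statements at the correct line) / (total statements)."""
    correct = sum(p == g for p, g in zip(pred_lines, gold_lines))
    return correct / len(gold_lines)

# 3 of 4 predicted insertion lines match the ground truth.
pa = position_accuracy([4, 7, 2, 9], [4, 7, 3, 9])
```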
Logging level: We use the Level Accuracy (LA) and Average Ordinal Distance Score (AOD) for evaluating the prediction of logging levels. Given the significant semantic differences between these levels and their implications for system monitoring and maintenance, we rigorously assess LA by comparing predicted log levels against their ground truth values in the source code. This metric is formally defined as the ratio of correctly predicted log levels $(LS_{level\_correct})$ to the total number of logging statements $(LS_{all})$, expressed as $\frac{LS_{level\_correct}}{LS_{all}}$. AOD additionally measures how close each suggested level is to the actual one on the ordinal scale of levels, as detailed in [46]. The formula to calculate AOD is given by: $AOD = \frac{\sum_{i=1}^{N}\left(1 - \frac{Dis(a_i, s_i)}{MaxDis(a_i)}\right)}{N}$, where $N$ represents the total number of logging statements in consideration, $Dis(a_i, s_i)$ is the ordinal distance between the actual log level $a_i$ and the suggested level $s_i$, and $MaxDis(a_i)$ denotes the maximum potential distance for the actual log level $a_i$.
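The AOD computation can be restated in a few lines (assuming the usual trace < debug < info < warn < error ordering; distance is the gap in this ordering):

```python
LEVELS = ["trace", "debug", "info", "warn", "error"]

def aod(actual, suggested):
    """Average Ordinal Distance Score over paired level predictions."""
    total = 0.0
    for a, s in zip(actual, suggested):
        ia, isg = LEVELS.index(a), LEVELS.index(s)
        max_dis = max(ia, len(LEVELS) - 1 - ia)   # worst case for level a
        total += 1 - abs(ia - isg) / max_dis
    return total / len(actual)

# Exact match scores 1; predicting warn for error loses a quarter of its credit.
score = aod(["info", "error"], ["info", "warn"])
```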
Static logging text: Following previous studies [9, 43, 57], our evaluation of static logging texts is conducted through the application of two metrics commonly employed in the domain of machine translation: BLEU [61] and ROUGE [49]. These metrics, grounded in n-gram analysis, are instrumental in assessing the degree of similarity between log messages that are generated computationally and those authored by developers. They provide a normalized score from 0 to 1, with elevated scores indicative of a closer resemblance. In our methodology, we specifically implement the BLEU-4 and ROUGE-L variants of these metrics.
Dynamic logging variables: We use the Precisely Match Rate (PMR) and F1 to evaluate dynamic logging variables. PMR captures consistency in recording variable runtime data, a critical aspect of log effectiveness. We extract dynamic variables from both the reference and the predicted logging statements, then perform exact matching to evaluate their correspondence. PMR is formally defined as the ratio of exactly matched variable sets $( L S _ { v a r i a b l e \_ c o r r e c t } )$ to the total number of logging statements $( L S _ { a l l } )$ , expressed as $\frac { L S _ { v a r i a b l e \_ c o r r e c t } } { L S _ { a l l } }$ . Moreover, for each logging statement, let $S _ { u d }$ denote the set of variables included in the generated logging statement and $S _ { g t }$ the set of variables present in the actual logging statement. We then compute the precision, the fraction of generated variables that match the actual ones $\left( p r e c i s i o n = \frac { | S _ { u d } \cap S _ { g t } | } { | S _ { u d } | } \right)$ ; the recall, the fraction of actual variables correctly generated $\left( r e c a l l = \frac { | S _ { u d } \cap S _ { g t } | } { | S _ { g t } | } \right)$ ; and finally their harmonic mean, the F1 score $\left( F _ { 1 } = 2 * \frac { P r e c i s i o n * R e c a l l } { P r e c i s i o n + R e c a l l } \right)$ .
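The two variable metrics can be sketched directly from the set definitions above; the variable names in the test data are illustrative.

```python
# PMR over a corpus, and set-based precision/recall/F1 per statement.
def pmr(pred_vars, true_vars):
    """PMR: fraction of statements whose variable set matches exactly."""
    exact = sum(set(p) == set(t) for p, t in zip(pred_vars, true_vars))
    return exact / len(true_vars)

def variable_f1(s_ud, s_gt):
    """Precision/recall/F1 between generated (s_ud) and actual (s_gt) variables."""
    s_ud, s_gt = set(s_ud), set(s_gt)
    inter = len(s_ud & s_gt)
    if inter == 0:
        return 0.0, 0.0, 0.0
    p, r = inter / len(s_ud), inter / len(s_gt)
    return p, r, 2 * p * r / (p + r)
```

Note the distinction: PMR is all-or-nothing per statement, while F1 gives partial credit when only some variables overlap.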
3.5.2 LLM-as-a-judge. The evaluation of automatically generated logging statements, drawing upon our experimental findings and established prior work, conventionally proceeds by assessing distinct modules such as placement, verbosity level, and textual content. However, it is increasingly evident that certain prevalent metrics do not adequately capture the nuanced performance aspects of generated logging statements [13, 65]. For instance, in the assessment of static text components, metrics like BLEU-4 and ROUGE-L are confined to lexical similarity, largely overlooking crucial semantic congruity. This limitation presents a formidable challenge in establishing a unified and robust methodology for evaluating the overall quality of automatically generated logging statements.
Recently, research within the NLP domain has explored using LLMs to appraise the quality of LLM-generated content, an approach known as "LLM-as-a-judge". While human evaluation remains reliable, its inherent drawbacks of being time-consuming and cost-intensive run counter to the objectives of automated evaluation. Consequently, researchers are increasingly investigating methods to prompt or train LLMs to align with human evaluative preferences, thereby offering a scalable alternative to manual assessment. Supporting this direction, an empirical study by Wang et al. [72] demonstrated the efficacy of the LLM-as-a-judge approach across various SE tasks. Their findings indicate that output-based evaluation methods, when coupled with state-of-the-art LLMs, yield optimal performance irrespective of the specific inference strategies employed. Informed by these advancements, this paper adopts the LLM-as-a-judge methodology to augment the quality assessment of automatically generated logging statements.
Specifically, we select three LLMs recognized for their superior performance on code-related tasks: Claude3.7-Sonnet, Deepseek-coder-v3, and GPT-4o. We choose these models for their robust code comprehension and generation capabilities, which are critical for assessing the nuanced quality of logging statements. The evaluation process involves providing each LLM judge with the input code context, the logging statement generated by the model under evaluation, and the corresponding ground-truth logging statement. The judges assign scores ranging from 0 to 3, where higher scores indicate greater alignment with the ground truth in terms of logging point accuracy, level appropriateness, static text quality, and dynamic variable correctness. For each generated logging statement, the final score is the average of the scores provided by the three LLM judges. To ensure consistency and reliability, we develop a comprehensive scoring guideline that outlines specific criteria for evaluating each component of the logging statement. These criteria address syntactic accuracy, semantic relevance, and contextual appropriateness, mitigating the limitations of traditional metrics like BLEU-4 and ROUGE-L.
Score Guideline
0: (Unacceptable) The logging statement is syntactically incorrect or misplaced. Formatting deviates significantly, and code changes may impair functionality or maintainability.
1: (Significant Issues) The statement is syntactically correct and appropriately placed but has major flaws: vague static text, incorrect log level, or missing key variables. Formatting inconsistencies or minor code alterations reduce readability but preserve functionality.
2: (Mostly Correct with Minor Flaws) The statement is semantically accurate, includes most relevant details, and uses an appropriate log level. Minor issues include verbose text, suboptimal formatting, or slight stylistic deviations. Functionality is preserved with trivial changes.
3: (Highly Accurate) The statement is precise, concise, and matches the ground truth. It uses the correct log level, parameterized logging, and consistent styling. The code retains full functionality and maintainability.
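The aggregation step described above is a simple average over the three judges; a minimal sketch, where the judge keys and scores are illustrative placeholders rather than actual experiment data:

```python
# Average the per-judge 0-3 scores into one final score per statement.
from statistics import mean

def final_score(judge_scores):
    """judge_scores maps a judge name to its integer score in [0, 3]."""
    assert all(0 <= s <= 3 for s in judge_scores.values()), "scores follow the 0-3 guideline"
    return mean(judge_scores.values())

# Hypothetical scores for one generated logging statement.
example = {"claude": 3, "deepseek": 2, "gpt4o": 2}
```

Averaging (rather than majority voting) preserves disagreement between judges, which is informative when plotting score distributions as in Figure 6.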
# 3.6 Implementation Details
To evaluate conventional logging approaches and LLMs, we reproduced conventional methods using replication packages provided by their authors. For LLMs, we generated logging statements by calling their official APIs, setting the temperature to 0 to ensure deterministic outputs for identical queries, thus guaranteeing reproducibility. For SOLMs, we performed fine-tuning for one epoch on a dedicated dataset, using a batch size of 64 and a learning rate of 1e-4. All experiments, including training, fine-tuning, and inference, were conducted on a single NVIDIA A100 80GB GPU provided by Modal [59], a serverless cloud infrastructure platform that supports easy deployment and reproducibility through provided scripts. Detailed experimental settings are available in our replication package [63].
# 4 Empirical Study Results
# 4.1 RQ1: What are the most effective prompting strategies for using SOLMs in automated logging?
Motivation. Initially, our objective is to determine whether the manner in which we engage with the model during the inference phase can have a profound effect on its success in generating automated logging statements. The structuring of the input prompt plays a crucial role in shaping how the SOLM interprets the given task and how it subsequently produces its outputs.

Prompt template (Figure 3):

You are a coding assistant that helps developers add appropriate logging statements to their code. The following function input misses a logging statement; please help me add a logging statement to the function at the appropriate place. (base part)

Your task is to reason step-by-step about the best way to add one appropriate logging statement to the function. Please consider:
Purpose: What is the main reason to add logging here?
Placement: Where is the most informative location for the log statement within the retry loop logic? Why? (e.g., inside the catch block, after the loop fails completely?)
Content: What specific information should the log message contain (e.g., attempt number, source, exception type/message, backoff duration)?
Level: What logging level is suitable for this event? (CoT part)

## Example Input:
public class A { ... }
## Example Output:
public class A { ... } (for ICL, a random example; for RAG, the most similar example)

## Function Input:
public class A { ... }
Please directly output the function with the logging statement added. Do not include any additional information. (base part)
Approach. In addressing this question, our study is focused on evaluating the effectiveness of current prompting techniques within the domain of log generation. We specifically analyze, based on the definitions laid out in Section 3.3, the effectiveness of several prompting strategies: basic instruction prompting (base), in-context learning (ICL), retrieval-augmented generation (RAG), and chain-of-thought (CoT). The specific details regarding the prompt templates used can be found in Figure 3. To strengthen the generalizability of our findings, we employ multiple 7B instruction-following models, namely LLaMA3, Mistral, CodeLlama, and Qwen2.5-Coder, all in their original configurations.
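To illustrate how the four strategies differ mechanically, the sketch below assembles prompt strings. The wording loosely paraphrases the Figure 3 template, and the example snippet is passed in by the caller (random for ICL, retrieved for RAG), so this is a sketch rather than the exact prompts used.

```python
# Assemble base / ICL / RAG / CoT prompt variants (paraphrased template).
def build_prompt(function_code, strategy, example=None):
    base = ("The following function misses a logging statement; add one "
            "logging statement at the appropriate place.\n")
    cot = ("Reason step-by-step about the purpose, placement, content, "
           "and level of the logging statement before answering.\n")
    parts = [base]
    if strategy == "cot":
        parts.append(cot)
    if strategy in ("icl", "rag") and example is not None:
        # ICL passes a random example; RAG passes the most similar one.
        parts.append("## Example\n" + example + "\n")
    parts.append("## Function Input:\n" + function_code + "\n"
                 "Please directly output the function with the logging "
                 "statement added.")
    return "".join(parts)
```

The base instruction and output constraint are shared by all variants; ICL/RAG differ only in how the example is chosen, and CoT adds the reasoning request.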
Results. The quantitative evaluation comparing the four prompting techniques across the four selected 7B SOLMs is presented in Table 2. Our analysis of these results yields two primary findings concerning the performance of un-fine-tuned models and the efficacy of different prompt strategies for automated logging generation.
First, we observe a distinct performance disparity between the general-purpose instruction-following models (LLaMA3, Mistral) and the code-specific models (CodeLlama, Qwen2.5-coder) when used without any task-specific fine-tuning, i.e., the general-purpose models generally demonstrate superior performance on logging. For instance, LLaMA3 and Mistral achieve peak PA scores of 21.01 and 18.15, respectively (both using RAG), substantially higher than the peak PA scores achieved by CodeLlama (11.62 with ICL) and Qwen2.5-coder (13.25 with CoT). We attribute this trend to the weaker instruction-following capabilities in the original code-specific models. Furthermore, upon analyzing the failure cases, we noted a higher tendency for CodeLlama and Qwen2.5-coder to refuse the prompt or return empty responses compared to LLaMA3 and Mistral, which negatively impacts their effectiveness in this experimental setting. This suggests that, without fine-tuning, the broader instruction comprehension of general-purpose models may be more advantageous for automated logging tasks.
Table 2. Comparison of prompting techniques for automated logging generation using four 7B instructionfollowing SOLMs.
Findings 1: Without fine-tuning, general-purpose models outperform code-specific models for automated logging due to their better instruction-following ability.
Second, RAG emerges as the most effective prompting technique for enhancing automated logging generation performance. While other techniques occasionally yield the top score for an isolated metric, RAG demonstrates the most significant and robust improvements across the majority of metrics and models. Notably, for Mistral, RAG achieves the highest scores across all evaluated metrics, including PA (18.15), F1 (40.21), and ROUGE-L (34.74). For LLaMA3, RAG secures the top performance in PA (21.01), PMR (41.68), F1 (46.28), BLEU-4 (15.95), and ROUGE-L (36.73). For CodeLlama and Qwen2.5-coder, RAG generally leads to substantial gains over the baseline, particularly in metrics like F1 (CodeLlama: 39.01, Qwen2.5-coder: 49.28) and ROUGE-L (CodeLlama: 34.36, Qwen2.5-coder: 37.57). These results strongly indicate that providing relevant contextual information retrieved from a knowledge base significantly aids the SOLMs in accurately determining where to log, what variables to include, and how to formulate appropriate log messages, making RAG a highly promising strategy.
Findings 2: RAG proves the most effective prompting technique, significantly enhancing automated logging statement generation performance across models and metrics.
# 4.2 RQ2: What’s the best tuning strategy using SOLMs for automated logging?
Motivation. In the course of the fine-tuning operation, a diverse array of strategies has the potential to influence the efficacy of our tasks significantly. In order to systematically assess the capabilities of the SOLMs, we thoroughly investigate the following factors:
(RQ2.1) Which PEFT technique yields the optimal performance for automated logging statement generation using SOLMs? While SOLMs are more compact than larger LLMs, fully fine-tuning them for specific downstream tasks like automated logging can still be computationally demanding and may risk overfitting on task-specific data. PEFT methodologies have emerged as a compelling solution, enabling adaptation by updating only a small fraction of the model’s parameters or by adding a small set of new, trainable parameters. This significantly reduces computational costs and avoids catastrophic forgetting of the model’s pre-trained knowledge. However, a diverse range of PEFT techniques exists, each employing different mechanisms to inject task-specific information into the SOLM. For automated logging, which involves understanding code context, identifying appropriate logging locations, and generating relevant log messages, it is unclear which PEFT strategy offers the optimal balance of performance and efficiency. Therefore, we conduct an evaluation of the performance of SOLMs fine-tuned with various prominent PEFT techniques to ascertain which techniques yield superior performance outcomes.
(RQ2.2) How does the size of SOLMs impact the performance-resource trade-offs in automated logging? Although we focus on SOLMs, there is still considerable variation in size within this class. Larger models might capture more complex code patterns and lead to higher accuracy, but they could also incur greater computational costs during fine-tuning and inference. Therefore, evaluating the impact of model size is essential to understand the performance-resource trade-offs specific to automated logging.
(RQ2.3) Does the instruct variant of a SOLM outperform its base counterpart for automated logging? The distinction between using a ‘base’ pre-trained model versus an ‘instruct’ version of an SOLM presents another critical strategic choice. Instruct models are tuned to follow user prompts and formats, which is useful for specific tasks, but their tuning is generally broad. Base models, on the other hand, represent the raw capabilities learned during pre-training and might offer greater plasticity when fine-tuned on a highly specific downstream task like ours. Therefore, we aim to clarify which model mode serves as a better foundation for this task.
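The parameter-efficiency idea behind the PEFT comparison in RQ2.1 can be made concrete with a dependency-free toy: in LoRA, a frozen weight matrix W is adapted by a low-rank product scaled by alpha/r, so only r*(d_in + d_out) parameters are trained instead of d_in*d_out. This is a sketch of the general technique, not the paper's training code.

```python
# Toy LoRA: effective weight = W + (alpha / r) * B @ A, with W frozen.
def lora_effective_weight(W, A, B, alpha, r):
    d_out, d_in = len(W), len(W[0])
    scale = alpha / r
    return [[W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(d_in)]
            for i in range(d_out)]

# A 2x2 frozen identity adapted by a rank-1 update; for real layers
# r << min(d_in, d_out), which is where the parameter saving comes from.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]        # r x d_in trainable factor
B = [[0.5], [0.0]]      # d_out x r trainable factor
```

QLoRA follows the same update rule but quantizes the frozen W, trading a little precision for a much smaller memory footprint.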
Approach. To address RQ2.1, we selected the 7B parameter versions of all four SOLMs as our primary subjects for investigating the impact of different PEFT techniques. We systematically evaluated four prominent PEFT methods: Prefix Tuning, Prompt Tuning, LoRA, and QLoRA. To establish a comparative baseline, we also measured the performance using direct inference without any PEFT fine-tuning (referred to as ‘base’). For all experiments conducted under RQ2.1, including the baseline, we consistently utilized the RAG-enhanced prompt format that has been identified as effective during our experiment in RQ1.
For RQ2.2, focusing on the influence of model size, we selected the Qwen2.5-coder model series. This choice is driven by the public availability of multiple versions within the same model family, specifically those with 0.5B, 1.5B, 3B, 7B, and 14B parameters, enabling a controlled comparison. Based on the findings from RQ2.1, where LoRA demonstrated superior performance among the PEFT techniques, we exclusively employ LoRA for fine-tuning across these different model sizes. Furthermore, we evaluate each size using both the basic prompt (‘base’) and the RAG-enhanced prompt (‘RAG’), in order to explore whether the effectiveness of RAG varies with model scale, particularly to assess the RAG capabilities of the smaller SOLMs.
To address RQ2.3, we compare the performance of ‘base’ models against their ‘instruct’ counterparts for automated logging. We select the Mistral 7B model, available in both base and instruct variants, for a controlled comparison, as it achieved the best performance in RQ2.1. Both model modes are fine-tuned using LoRA and employ the RAG-enhanced prompt, consistent with the results of RQ2.1 and RQ2.2. The fine-tuning process spans five epochs, and performance is evaluated using metrics including Reject Rate, PA, LA, AOD, PMR, F1, BLEU-4, and ROUGE-L. The Reject Rate, defined as the percentage of test-set examples rejected by the fine-tuned model during inference due to misalignment with learned task-specific criteria, quantifies the model’s selectivity, reflecting its ability to adhere to prompt instructions and generate valid logging statements.
Table 3. Performance Comparison of PEFT Techniques for four 7B SOLMs in automated logging.
Results. (RQ2.1) Table 3 shows the performance comparison of PEFT techniques for SOLMs, where we observe that all evaluated PEFT methods consistently improve performance over the baseline across all SOLMs and most metrics. For instance, looking at position accuracy, QLoRA fine-tuning increased PA from 13.32 to 45.27 for LLaMA3, from 12.85 to 61.90 for Mistral, from 6.99 to 58.97 for CodeLlama, and from 4.30 to 60.21 for Qwen2.5Coder. Similar substantial gains are observed across other metrics, such as the F1 score for variable prediction and BLEU-4/ROUGE-L for statement generation, indicating that fine-tuning with parameter-efficient techniques is crucial for adapting SOLMs to this specific task.
Furthermore, LoRA emerges as the most effective PEFT methodology, achieving the highest scores for the majority of metrics across all four models. For example, with LoRA, LLaMA3 achieves the top PA (56.84), PMR (50.15), F1 (58.22), BLEU-4 (19.84), and ROUGE-L (40.89). Similarly, LoRA leads to the best PA (63.97), LA (69.50), BLEU-4 (23.40), and ROUGE-L (45.73) for Mistral. CodeLlama shows LoRA as the top performer across all metrics. For Qwen2.5Coder, LoRA is also dominant, securing the best PA (62.40), LA (68.46), AOD (88.82), PMR (52.24), BLEU-4 (23.04), and ROUGE-L (44.96). Moreover, QLoRA is competitive with LoRA, achieving the second-best results or even surpassing LoRA in a few specific instances (e.g., F1 for Qwen2.5Coder; PMR and F1 for Mistral). Prompt Tuning shows some strength, particularly for level prediction with LLaMA3, while Prefix Tuning, though an improvement over the baseline, is generally outperformed by LoRA, QLoRA, and Prompt Tuning.
Fig. 4. Overlap of Corrected Logging Statement Placement Across Different SOLM Configurations.
Table 4. Impact of Model Size on Performance and Resource Usage for automated logging.
Additionally, the Venn diagram in Figure 4a illustrates the overlap of corrected logging statement placements across different PEFT variations for Mistral-7B. The largest overlap (183) is observed in the central region, indicating a core set of logging statements consistently corrected across all PEFT methods (base, Prefix Tuning, Prompt Tuning, LoRA, and QLoRA). LoRA and QLoRA show significant individual contributions (125 and 86, respectively), suggesting their effectiveness in identifying unique logging placements, while the base method contributes the least (40), highlighting the improvement brought by PEFT techniques.
Findings 3: Fine-tuning with PEFT techniques significantly enhances SOLMs’ performance for automated logging, with LoRA demonstrating the most consistent and superior results across the evaluated models and metrics.
(RQ2.2) Table 4 reveals that increasing model size leads to improved performance in automated logging, particularly for models of 3B parameters and larger. Across almost all metrics, there is a discernible improvement as the model parameter count increases from 0.5B to 14B. For example, position accuracy (PA) improves from 21.38 (0.5B) to 66.20 (14B), and ROUGE-L scores for text generation increase from 39.55 (0.5B) to 47.22 (14B). The 14B model consistently outperforms all smaller variants, and the 7B model also shows strong performance.
Models smaller than 3B (i.e., 0.5B and 1.5B) exhibit less stable performance scaling. While the 0.5B model is generally the weakest, the progression to the 1.5B and then to the 3B model is not uniformly positive across all metrics. For instance, the 1.5B model shows a slight decrease in LA (60.69 vs 61.53 for 0.5B) and AOD (85.70 vs 86.78 for 0.5B). Furthermore, when moving from 1.5B to 3B, there are slight dips in PA (52.18 vs 53.11), PMR (49.20 vs 50.34), and F1 (56.94 vs 58.39). This suggests that while larger models generally perform better, the performance gains for models below 3B parameters might be less consistent or predictable for this specific task and fine-tuning approach.
Larger models come with increased resource requirements. As model size increases, the training time per epoch, and inference time per prompt also escalate substantially. For instance, training time per epoch rises from approximately 2438 seconds for the 0.5B model to 21780 seconds for the 14B model. Similarly, inference time per prompt increases from 0.0959 seconds (0.5B) to 0.4854 seconds (14B).
The Venn diagram in Figure 4b depicts the overlap of corrected logging statement placements across different sizes of the Qwen2.5Coder model (0.5B, 1.5B, 3B, 7B, 14B). The central overlap (442) represents logging statements consistently corrected across all sizes, with the 14B model contributing the most unique placements (152), followed by 7B (73). Smaller models (0.5B and 1.5B) show limited unique contributions (21 and 6, respectively), reinforcing the finding that models with 3B+ parameters perform better for this task.
Findings 4: For models with 3B+ parameters, performance in generating logging statements improves with more parameters but increases computational costs, indicating a performance-resource trade-off. Models under 3B show inconsistent scaling, suggesting a minimum capacity may be needed for this task.
(RQ2.3) Figure 5 illustrates the performance comparison between the base and instruct variants of the Mistral-7B model across five epochs for automated logging. The instruct model consistently outperforms the base model across most metrics, including Reject Rate, PA, and ROUGE-L. For instance, the instruct model achieves a lower Reject Rate, indicating better adherence to task-specific criteria and fewer invalid outputs (Figure 5a). Similarly, the instruct model demonstrates higher PA (Figure 5b), reflecting superior accuracy in predicting logging statement placement. The ROUGE-L scores (Figure 5d) further confirm that the instruct model generates logging statements with greater textual similarity to the ground truth. These trends are evident from the first epoch and persist through the fifth, suggesting that the instruct model’s pre-training for instruction-following enhances its ability to adapt to our logging task.
Findings 5: The instruct variant of a SOLM outperforms its base counterpart in automated logging, benefiting from its instruction-tuned foundation, which enhances task adherence and output quality.
In addition, the performance trends across epochs reveal that excessive fine-tuning can lead to diminishing returns. For both models, the Reject Rate and PA peak at the first epoch, where the Reject Rate is minimized, and the number of correctly predicted logging statement placements is maximized (Figure 5a and Figure 5b). Beyond the first epoch, both metrics show a slight decline or stabilization, with the Reject Rate marginally increasing and PA slightly decreasing by the fifth epoch. This trend suggests that additional fine-tuning may lead to overfitting, causing the models to become overly specialized to the training data and potentially losing some generalizability. For the instruct model, this could also imply a partial erosion of its pre-trained instruction-following robustness.
The Venn diagram in Figure 4c further illustrates the overlap of corrected logging statement placements across epochs for the Mistral-7B-Instruct model. The central overlap (1120) indicates a stable core of corrections maintained across all five epochs, with the first epoch contributing the most unique corrected placements (40). The decline in unique contributions from later epochs (e.g., 3 for the fifth epoch) supports the observation of diminishing returns and potential overfitting with prolonged fine-tuning.
Fig. 5. Performance Comparison of Base and Instruct Mistral-7B Models Across Five Epochs.
Findings 6: Both base and instruct models achieve optimal performance at the first epoch for Reject Rate and PA, with prolonged fine-tuning leading to slight performance declines due to overfitting, indicating that excessive fine-tuning may compromise pre-trained capabilities.
# 4.3 RQ3: How effectively do SOLMs compare to existing methods and LLM baselines in automated logging?
Motivation. Having established optimal strategies for employing SOLMs in addressing the preceding RQs, this study aims to investigate the performance of SOLMs in automated logging compared to existing methods and the direct application of LLMs.
• (RQ3.1) How effectively do these methods determine appropriate logging locations?
• (RQ3.2) What is the quality of the logging statements generated by these methods?
• (RQ3.3) When evaluated by an LLM acting as a judge, how does the overall quality of the logging statements produced by these methods compare?
Approach. To address RQ3, we evaluate the performance of SOLMs in automated logging against existing methods (i.e., LANCE, LEONID, Unilog, and Fastlog) and LLMs (i.e., Claude3.7-Sonnet, Deepseek-coder-v3, GPT-4o, and LLaMA3.1-405B). We assess the logging location accuracy (PA), the statement quality (LA, AOD, PMR, F1, BLEU-4, and ROUGE-L), and the overall quality via our LLM judges. Target SOLMs (LLaMA-8B, Mistral-7B, CodeLlama-13B, Qwen2.5-coder-14B) are fine-tuned with LoRA and RAG, while LLMs are tested in base, ICL, RAG, and CoT configurations. To ensure a comprehensive evaluation of SOLMs’ capabilities, we select the largest parameter models available within the SOLM definition (i.e., open-source models with fewer than 14B parameters). This choice maximizes the potential performance of SOLMs, allowing us to showcase their optimal effectiveness in automated logging.
Results. (RQ3.1) Table 5 shows the performance comparison of automated logging approaches across all metrics. The table shows that Qwen2.5-coder-14B-RAG-LoRA achieves the highest PA at 66.20%, outperforming all other models, including LLMs like Claude3.7-Sonnet-RAG (65.90%) and Deepseek-coder-v3-RAG (65.20%), as well as existing methods like Fastlog (53.40%). Among SOLMs, Mistral-7B-RAG-LoRA (63.97%) and CodeLlama-13B-RAG-LoRA (63.57%) also surpass most LLMs and all existing methods, indicating that fine-tuning with LoRA and RAG significantly enhances the ability of SOLMs to identify logging locations in various configuration settings.
(RQ3.2) For statement quality, Qwen2.5-coder-14B-RAG-LoRA again leads across all metrics. These results surpass the best-performing LLM, Deepseek-coder-v3-RAG (e.g., 68.39% LA, 18.97% BLEU-4), and existing methods like Fastlog (e.g., 59.26% LA, 13.28% BLEU-4). CodeLlama-13B-RAG-LoRA also performs strongly, particularly in BLEU-4 (24.39%) and ROUGE-L (46.91%), closely rivaling Qwen2.5-coder-14B. The superior performance of fine-tuned SOLMs suggests that targeted optimization enables them to generate more accurate, relevant, and contextually appropriate logging statements than both unoptimized LLMs and traditional methods.
(RQ3.3) Figure 6 illustrates the score distribution for LLM judges evaluating the quality of logging statements from various methods. First, the bar chart indicates the number of cases receiving each score (0, 1, 2, 3) for different models. Qwen2.5-coder-14B stands out with the highest number of cases scoring 3, alongside the lowest number of cases scoring 0. This distribution suggests that Qwen2.5-coder-14B consistently generates logging statements of higher quality. Second, the trend line representing the average score shows Qwen2.5-coder-14B achieving the highest average score of 1.506, surpassing all other models, including Claude3.7-Sonnet-RAG (1.489) and Deepseek-coder-v3-RAG (1.467).
Figure 7 presents a case study comparing logging statements generated by different models for an error-logging scenario in a Java method handling a DuplicateKeyException. The ground-truth logging statement is log.error("Update alert group error, groupName:{}", alertGroup.getGroupName(), ex). Among the LLMs, Claude3.7-Sonnet-RAG generates a statement that closely resembles the ground truth but includes an additional variable (id), which may add unnecessary verbosity. Deepseek-coder-v3-RAG and GPT4o-RAG produce statements with missing or misused variables (e.g., using groupName directly instead of alertGroup.getGroupName()), reducing their contextual accuracy. LLaMA3.1-405B-RAG omits the exception variable (ex), limiting its diagnostic utility. In contrast, fine-tuned SOLMs demonstrate superior performance: Mistral-7B-RAG-LoRA and CodeLlama-13B-RAG-LoRA generate statements identical to the ground truth, ensuring both accuracy and relevance. Qwen2.5-coder-14B-RAG-LoRA adds minor additional text ("already exist") but retains all critical components, closely aligning with the ground truth.
Findings 7: Fine-tuned SOLMs outperform both existing methods and LLMs across all evaluated metrics, demonstrating superior logging location accuracy and statement quality.
Table 5. Performance Comparison of Automated Logging Approaches Across All Metrics.
# 4.4 RQ4: Can SOLMs generalize logging statement generation across diverse code repositories?
Motivation. In real-world software development, logging practices vary significantly across projects due to differences in coding styles, project domains, and developer preferences. For SOLMs to be practically viable for automated logging, they must demonstrate robust generalization when applied to unseen repositories. Poor generalization could lead to ineffective logging statements that fail to capture critical runtime information or adhere to project-specific conventions, limiting the utility of SOLMs in cross-project settings.
Approach. To evaluate the cross-repository generalization ability of SOLMs, we design an experiment that trains models on a subset of repositories from the dataset and tests them on a distinct, non-overlapping set of repositories. We randomly partition the dataset into two groups, each containing five repositories, ensuring diversity in project domains. In each experimental run, we use one group of five repositories as the training set, three repositories from the other group as the validation set, and the remaining two repositories from the same group as the test set.
Fig. 6. Distribution of LLM Judges Scores for Generated Logging Statement Quality Across Models.
Fig. 7. A Case of Generated Logging Statement from Multiple Models.
public class A {
  public AlertGroup updateAlertGroupById(User loginUser, int id, String groupName, String desc, String alertInstanceIds) {
    ...
    alertGroup.setCreateUserId(loginUser.getId());
    alertGroup.setAlertInstanceIds(alertInstanceIds);
    try {
      alertGroupMapper.updateById(alertGroup);
      log.info("Update alert group complete, groupName:{}", alertGroup.getGroupName());
      return alertGroup;
    } catch (DuplicateKeyException ex) {
      log.error("Update alert group error, groupName:{}", alertGroup.getGroupName(), ex); // ground truth
      throw new ServiceException(Status.ALERT_GROUP_EXIST);
    }
  }
}

Claude3.7: log.error("Update alert group error due to duplicate group name, groupName:{}, id:{}", alertGroup.getGroupName(), id, ex); (basically the same as the ground truth, with an additional variable)
Deepseek-coder-v3: log.error("Alert group already exists, groupName:{}", groupName); (missing static text; missing and misused variable)
GPT4o: log.error("Duplicate key exception occurred while updating alert group, groupName:{}", groupName); (missing and misused variable)
LLaMA3.1: log.error("Update alert group error, groupName:{}", alertGroup.getGroupName()); (missing variable)
LLaMA3: log.error("Update alert group failed, duplicate key exception", ex); (missing variable)
Mistral: log.error("Update alert group error, groupName:{}", alertGroup.getGroupName(), ex); (same as ground truth)
CodeLlama: log.error("Update alert group error, groupName:{}", alertGroup.getGroupName(), ex); (same as ground truth)
Qwen2.5-coder: log.error("Update alert group error, alert group name:{} already exist", alertGroup.getGroupName(), ex); (basically the same as the ground truth, with additional text)
The validation set primarily serves to support RAG by providing a pool of examples from which the most similar code snippets are retrieved using the BM25 algorithm. This partitioning ensures that the test set represents unseen projects with potentially different coding styles and logging conventions, simulating real-world cross-repository scenarios. In the first configuration, we train on one subset of repositories (R1, R3, R5, R7, R9), validate on another subset (R2, R4, R6), and test on unseen repositories (R8, R10). The second configuration trains on Apache-dominated repositories (R1, R3, R5, R7, R9) and tests on non-Apache repositories. For this experiment, we select the two best-performing 7B SOLMs from RQ2.1: Mistral and Qwen2.5-coder. We fine-tune these models using the best-performing strategy identified in prior experiments, combining LoRA with RAG.
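BM25-based retrieval over the validation pool can be sketched in a few lines. The following is a minimal from-scratch implementation assuming whitespace tokenization and toy snippets; production pipelines typically rely on an existing BM25 library rather than this hand-rolled scorer:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """BM25 score of each document in the corpus against the query."""
    n = len(corpus_tokens)
    avgdl = sum(len(doc) for doc in corpus_tokens) / n
    df = Counter()                      # document frequency per term
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)               # term frequency in this document
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1.0)
            norm = tf[term] + k1 * (1.0 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1.0) / norm
        scores.append(score)
    return scores

# Toy validation pool: retrieve the snippet most similar to the query.
corpus = ["try update alert group catch duplicate key exception".split(),
          "open file read lines and close stream".split()]
query = "update alert group error".split()
scores = bm25_scores(query, corpus)
best = max(range(len(corpus)), key=scores.__getitem__)  # index of most similar snippet
```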
Table 6. Generalization Capabilities for SOLMs Across Diverse Code Repositories.
Results. Table 6 presents the performance of two fine-tuned SOLMs, Mistral-7B and Qwen2.5-coder-7B, in automated logging across diverse code repositories. The results demonstrate the generalization capabilities of SOLMs and highlight the impact of similar logging practices on cross-project performance.
The results show that both Mistral-7B and Qwen2.5-coder-7B exhibit robust performance. Specifically, Qwen2.5-coder achieves a PA of 55.54% and a ROUGE-L of 42.22%, while Mistral-7B achieves comparable results with a PA of 52.97% and a ROUGE-L of 40.15%. These metrics indicate that both models successfully generate accurate logging statements and identify appropriate logging locations in unseen repositories, even when the test set includes projects with distinct logging conventions. The high AOD (91.86% for Qwen2.5-coder) and ROUGE-L scores suggest that the generated logs closely align with ground-truth statements in terms of log level and text similarity. This reveals that SOLMs, when fine-tuned with LoRA and RAG, generalize proficiently across various project areas without a notable decline in performance.
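ROUGE-L, one of the metrics reported here, scores text similarity via the longest common subsequence (LCS) of the generated and ground-truth token sequences. A minimal sketch, assuming whitespace tokenization (the paper's exact metric implementation may differ):

```python
def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 between token sequences, via longest common subsequence."""
    m, n = len(candidate), len(reference)
    # Classic dynamic-programming table for LCS length.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if candidate[i - 1] == reference[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / m, lcs / n
    return 2 * precision * recall / (precision + recall)

reference = "Update alert group error , groupName : {}".split()
generated = "Update alert group failed , duplicate key".split()
score = rouge_l_f1(generated, reference)
```

For this toy pair the LCS is four tokens long, giving precision 4/7, recall 4/8, and an F1 of 8/15.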
Findings 8: SOLMs demonstrate strong generalization capabilities in automated logging, maintaining high performance across diverse, unseen repositories.
The performance difference between the two configurations in Table 6 highlights the influence of similar logging practices on cross-project generalization. In the first configuration, where both training and test sets include Apache open-source projects, the models achieve significantly higher performance (e.g., Qwen2.5-coder: PA 55.54%, ROUGE-L 42.22%) compared to the second configuration, where the training set comprises Apache projects but the test set includes non-Apache projects (e.g., Qwen2.5-coder: PA 50.71%, ROUGE-L 40.50%). The performance drop in the second configuration (e.g., 4.83% lower PA and 1.72% lower ROUGE-L for Qwen2.5-coder) suggests that the absence of shared logging conventions, such as those prevalent in Apache projects (e.g., consistent verbosity levels and formatting styles), reduces the models' ability to generate contextually appropriate logs. This may be because Apache projects follow standardized logging guidelines, aiding knowledge transfer during fine-tuning, while non-Apache projects may use diverse, project-specific practices that hinder generalization. This disparity underscores that similarity in logging practices between repositories can enhance cross-project performance.
Findings 9: Fine-tuning SOLMs with data reflecting similar logging practices, such as those prevalent in Apache open-source projects, significantly enhances their cross-project generalization capabilities.
# 5 Discussion
Based on previous experiments, this section analyzes the strengths of using SOLMs and elicits potential future research directions to advance logging practice.
# 5.1 Analysis of SOLMs’ Advantages
5.1.1 Efficiency and Cost-Effectiveness. SOLMs exhibit remarkable efficiency and cost-effectiveness in automated logging, making them a compelling alternative to LLMs. Unlike LLMs, which often require extensive computational resources, including thousands of GPU hours for training and inference, SOLMs achieve comparable performance with significantly lower resource demands. For instance, our experiments demonstrate that the Qwen2.5-coder-14B model, with 14 billion parameters, can be fine-tuned within 6 hours using a single Nvidia A100 GPU, producing high-quality logging statements with a PA of 66.20% and a ROUGE-L score of 46.51% (Table 5). This efficiency reduces hardware costs and energy consumption, aligning with sustainable computing goals.
5.1.2 Privacy and Security. Privacy preservation is a standout advantage of SOLMs. Li et al. [43] highlight that LLMs often rely on cloud-based APIs, posing risks of proprietary code leakage. In contrast, SOLMs’ smaller size enables local deployment, ensuring sensitive code remains secure. This is particularly valuable for companies where strict data protection is paramount, offering a safer alternative for logging generation.
5.1.3 Adaptability to Enterprise-specific styles. One of the key challenges in automated logging is adapting to enterprise-specific logging styles and conventions, which vary significantly across organizations due to differences in verbosity levels, error prioritization, or compliance-driven formatting. Our findings demonstrate that SOLMs, when fine-tuned with techniques such as LoRA and RAG, can effectively align with project-specific logging practices. This adaptability is particularly valuable in real-world scenarios where organizations maintain proprietary logging guidelines. Unlike general-purpose LLMs, which struggle to adapt without extensive retraining, SOLMs can be fine-tuned efficiently on internal codebases, ensuring alignment with organizational standards while minimizing computational overhead.
# 5.2 Future Work Directions
5.2.1 Integration into Development Tools. To maximize the practical impact of SOLMs in automated logging, future work should focus on integrating these models into widely used development tools, such as integrated development environments (IDEs), and CI/CD platforms. Real-time logging statement suggestions during code authoring or automated insertion during code reviews could streamline the development process and reduce manual effort. For instance, an IDE plugin leveraging a fine-tuned SOLM could analyze code context on-the-fly, recommend logging points, and suggest high-quality logging statements tailored to the project’s conventions. Such integrations would require optimizing SOLMs for low-latency inference and ensuring compatibility with diverse development workflows. Additionally, incorporating user feedback mechanisms into these tools could enable iterative refinement of generated logs, further aligning them with developer preferences.
5.2.2 Addressing Dynamic Logging Requirements. Logging practices often evolve during a project’s lifecycle due to changing requirements, such as new debugging needs or compliance regulations. SOLMs must be capable of adapting to these dynamic requirements without requiring extensive retraining. Future research could investigate continual learning techniques to enable SOLMs to incrementally adapt to new logging conventions or project-specific requirements. For instance, online fine-tuning approaches could allow SOLMs to learn from newly added logging statements in a repository, ensuring sustained alignment with evolving practices. Additionally, exploring active learning strategies, where SOLMs query developers for feedback on ambiguous logging scenarios, could further enhance their adaptability.
# 6 Threats to Validity
# 6.1 Internal Validity
Selection of hyperparameters for fine-tuning. A potential threat to internal validity lies in the selection of hyperparameters for fine-tuning the SOLMs (e.g., learning rate, batch size, prefix length for prefix tuning, rank for LoRA). The study utilized recommended hyperparameters from official sources due to their proven effectiveness. However, these hyperparameters may not be optimal for all models or datasets, potentially introducing bias in the performance results. Suboptimal hyperparameter choices could lead to underperformance or overfitting, affecting the observed effectiveness of SOLMs compared to baseline LLMs or existing methods. To mitigate this, we conducted preliminary experiments to validate the chosen hyperparameters on a subset of the AL-Bench dataset, ensuring reasonable performance. Nonetheless, a more exhaustive hyperparameter search may further enhance model performance and reduce this threat.

The potential data leakage. A potential threat to the internal validity of our study is the possibility that the AL-Bench dataset, which comprises logging statements from 10 widely used open-source projects, may have been included in the pre-training corpora of the SOLMs or LLMs evaluated. Since these models are pretrained on large-scale datasets, often including publicly available code repositories from platforms like GitHub, there is a risk that some or all of the AL-Bench projects were part of their training data. Such data leakage could artificially inflate the performance of these models, particularly in zero-shot or few-shot settings, potentially skewing our results and conclusions. To mitigate this threat, we carefully analyzed the performance of base (non-fine-tuned) models in our experiments. The results, as shown in Table 3, indicate that base models, such as Qwen2.5-coder (PA: 4.30%), exhibit significantly lower performance compared to their fine-tuned counterparts (PA: 62.40%).
This substantial performance gap suggests that the base models, despite potential exposure to AL-Bench data during pre-training, do not inherently possess the task-specific knowledge required for high-quality automated logging. Instead, the superior performance of fine-tuned SOLMs is primarily attributable to our fine-tuning strategies (e.g., LoRA and RAG), which adapt the models to the specific logging task and dataset. Thus, we argue that any potential data leakage has minimal impact on the validity of our conclusions, as the observed improvements stem from task-specific fine-tuning rather than pre-existing knowledge from pre-training. Nonetheless, to further address this threat in future work, we recommend evaluating models on proprietary or newly created datasets that are guaranteed to be absent from pre-training corpora.
# 6.2 External Validity
The representativeness of the dataset. A potential threat to generalizability is that the AL-Bench dataset, comprising 10 open-source projects, may not fully represent logging practices in proprietary codebases. We mitigated this by selecting projects from diverse domains (e.g., task scheduling, messaging systems, IoT platforms), ensuring broad coverage of logging requirements. However, similar to prior work [43], our dataset is predominantly Java-based, which may limit the generalizability of SOLMs' performance in generating log statements for other programming languages. This language-specific focus could restrict insights into how SOLMs perform across diverse language ecosystems, potentially affecting their applicability in non-Java contexts.

The selection of SOLMs. Another potential threat to the generalizability of our findings lies in the selection of specific SOLMs evaluated in this study. While these models were chosen based on their established performance in software engineering tasks, they may not fully represent the diversity of available SOLMs or future advancements in model architectures. For instance, other SOLMs with different pre-training datasets, architectural designs (e.g., transformer variants or mixture-of-experts models), or domain-specific optimizations might exhibit varying performance in automated logging. To mitigate this threat, we selected models with broad applicability in code-related tasks and ensured they were fine-tuned using techniques (e.g., LoRA and RAG) to align with logging-specific requirements. However, future work should explore a wider range of SOLMs, including those with different training corpora or specialized architectures, to validate the robustness of our findings across diverse model ecosystems.
# 6.3 Construct Validity
Adequacy of evaluation metrics for logging quality. A potential threat to construct validity lies in whether the chosen evaluation metrics fully capture the quality of generated logging statements. These metrics primarily assess syntactic similarity to ground-truth logs (e.g., ROUGE-L for text similarity) and the correctness of logging locations (PA). However, high-quality logging statements must also provide actionable insights for developers, such as facilitating debugging or system monitoring, which may not be fully reflected by these metrics. For instance, a generated log might score high on ROUGE-L due to textual similarity but fail to capture critical runtime context (e.g., omitting key variables or using an inappropriate verbosity level). To mitigate this, we incorporated the LLM-as-a-judge approach to assess overall quality holistically. Nevertheless, future work could include developer-centric evaluations, such as user studies, to validate the practical utility of generated logs.

Representativeness of the LLM judge results. The use of an LLM to evaluate the quality of generated logging statements introduces a construct validity threat if the LLM's judgments do not align with human developer preferences. While the LLM-as-a-judge approach has shown promise in software engineering tasks [72], its scoring may not fully capture nuanced aspects of log quality, such as clarity, relevance to specific debugging scenarios, or adherence to project-specific logging conventions. Misalignment between LLM and human judgments could lead to over- or underestimation of SOLM performance. Therefore, incorporating human evaluations or domain-specific rubrics in future work could enhance the alignment between the LLM judge and practical logging needs.
# 7 Related Work
# 7.1 Automated Logging
Traditionally, the automation of logging statements is divided into two primary stages [5, 16]: the identification of logging locations and the creation of logging statements. These stages are respectively denoted as where to log and what to log [92]. In addressing the complexities associated with determining where to log, various methodologies have been investigated by researchers to identify appropriate logging locations within source code [27, 37, 45, 80, 84, 90, 94]. Regarding what to log, the generation of logging statements is usually segmented into three specific subtasks: the generation of logging text [9], the selection of logging variables [54, 86], and the prediction of the logging level [39, 46, 51, 58].
The latest methodology offers a solution for the automatic generation of logging statements, addressing the selection of logging locations, determining the levels of statements, composing content, and identifying variables in a single step. Mastropaolo et al. [57] introduced LANCE, a pioneering comprehensive tool that creates complete logging statements powered by T5. In addition to this, they developed LEONID [56], which integrates deep learning with information retrieval techniques to enhance performance. Meanwhile, Xu et al. [77] presented UniLog, grounded in the in-context learning framework of LLMs. Additionally, Xie et al. [76] introduced FastLog, which is capable of swiftly generating and inserting entire logging statements. Furthermore, Li et al. [44] proposed SCLogger, noted as the first approach to generate contextualized logging statements utilizing inter-method static context.
In this paper, we distinguish our work by focusing on the use of SOLMs for automated logging, addressing the limitations of LLMs in terms of privacy, computational efficiency, and adaptability to enterprise-specific logging practices. Unlike prior proposed approaches, which predominantly rely on LLMs, our study leverages fine-tuned SOLMs. This enables local deployment, mitigating privacy risks associated with cloud-based LLM APIs, and significantly reduces computational overhead, aligning with sustainable computing goals. Furthermore, our comprehensive evaluation using the AL-Bench dataset [70] demonstrates SOLMs’ robust generalization across diverse, unseen repositories, a critical aspect not extensively explored in prior work. By systematically investigating prompting strategies and fine-tuning techniques, we provide a scalable and practical solution for automated logging that balances performance with resource constraints, offering a viable alternative for real-world software development.
# 7.2 Studies in Logging Practices
Advancements in logging within software engineering have sparked a growing interest in exploring logging practices across various domains. Zeng et al. [87] and Chen [3] extended the work of Yuan et al. [85] by analyzing log statements in Android and Java systems, revealing the widespread occurrence of logging in these environments. Kabinna et al. [32] investigated how changes such as bug fixes, feature enhancements, and code refactoring often lead to revisions in logging statements. Lai et al. [34] provided insights into logging code constructs at both file-level and block-level, addressing nine key research questions focused on statistical and content analysis. Li et al. [38] conducted a detailed qualitative study of the advantages and challenges associated with logging in software development, while Zhou et al. [93] explored the connection between logging practices and data leaks in mobile applications. Zhao et al. [89] analyzed IDs within log statements, proposing a straightforward approach to inject IDs to reduce information loss and examining the extent of information gained through this technique. Li et al. [47] investigated the characteristics and practical importance of dynamic variables, proposing a variable-aware log abstraction technique. Li et al. [43] introduced a study on LLM-assisted logging statement generation, demonstrating that prompt-based zero-shot or few-shot learning significantly enhances the generalization capabilities of LLMs. Tan et al. [70] proposed AL-Bench, a comprehensive benchmark designed specifically for automatic logging tools.

Abstract: Developers use logging statements to create logs that document system
behavior and aid in software maintenance. As such, high-quality logging is
essential for effective maintenance; however, manual logging often leads to
errors and inconsistency. Recent methods emphasize using large language models
(LLMs) for automated logging statement generation, but these present privacy
and resource issues, hindering their suitability for enterprise use. This paper
presents the first large-scale empirical study evaluating small open-source
language models (SOLMs) for automated logging statement generation. We evaluate
four prominent SOLMs using various prompt strategies and parameter-efficient
fine-tuning techniques, such as Low-Rank Adaptation (LoRA) and
Retrieval-Augmented Generation (RAG). Our results show that fine-tuned SOLMs
with LoRA and RAG prompts, particularly Qwen2.5-coder-14B, outperform existing
tools and LLM baselines in predicting logging locations and generating
high-quality statements, with robust generalization across diverse
repositories. These findings highlight SOLMs as a privacy-preserving, efficient
alternative for automated logging.

Category: cs.SE
# 1. Introduction
Learning to Optimize (L2O) is a promising new approach in applying learning-based methods to tackle optimization problems. In particular, L2O concentrates on problems with well-defined objective functions and constraints [7]. Thus, black-box optimization strategies, such as Bayesian Optimization [24], typically fall outside its scope. L2O has shown benefits in problems from various domains, including LASSO regression in sparse coding using multilayer perceptrons [8], and utility maximization in resource allocation wherein neural networks (NN) serve to approximate the expensive matrix inversion [11].
L2O can be categorized into three main types: black-box [6, 22, 26, 31], algorithm-unrolling [11, 21, 33], and math-inspired [9, 14]. Black-box L2O approaches the optimization problem as a traditional pattern recognition task, approximating a mapping function from manually constructed features to the solutions [26]. Algorithm-unrolling L2O leverages well-defined algorithms, such as gradient descent [19], to approximate the solutions of complex calculations. Beyond these categories, much research has gone into explainable and trustworthy L2O. For example, Heaton et al. [9] employ an existing algorithm to prevent the L2O model from entering irrecoverable areas. Liu et al. [14] introduce a mathematics-driven L2O (Math-L2O) framework for convex optimization, offering a general workflow for formulating an L2O model. Despite empirical results, a theoretical analysis of the robustness of L2O models under out-of-distribution (OOD) conditions is still missing in [14].
OOD generalization for L2O has emerged as a vital issue, often considered more critical in L2O than in other deep learning applications [23]. For L2O, the OOD challenge involves solving previously unseen problems, potentially including novel optimization problems with unique objectives [30]. Guaranteeing convergence in OOD scenarios remains elusive. For instance, a model's output in an OOD scenario could veer into unpredictable areas when the domain shifts significantly away from the in-distribution (InD) scenario.
Numerous efforts have been made to enhance the robustness of L2O models in training. Lv et al. [16] employ data augmentation to prevent L2O models from overfitting to specific tasks. Almeida et al. [2] transform the L2O model into a hyperparameter tuner for existing optimization algorithms. Wichrowska et al. [29] focus on minimizing parameters in NNs and assembling heterogeneous optimization tasks. Liu et al. [14] try to regularize L2O models with inspirations from existing algorithms. However, these studies predominantly aim to mitigate the limitations inherent in existing L2O methods, with no comprehensive analysis conducted on the impact of OOD on the deterioration of convergence. This gap in the literature motivates us to quantify this deterioration with rigorous analysis.
The central aim of this paper is to propose a general and robust L2O model for both InD and OOD scenarios. Specifically, we first investigate L2O's convergence behavior in InD contexts and derive the criteria for a uniformly robust model applicable to all InD instances. Then, we characterize L2O's degradation of convergence under OOD conditions, presenting our findings as a series of corollaries. The main contributions of this paper are as follows.
1. We propose a methodology to link the L2O model’s performances in InD and OOD situations based on the Math-L2O approach from Liu et al. [14]. First, we construct a virtual feature by subtracting the L2O model’s input feature in InD from that in OOD. We then compute the corresponding difference in the model’s outputs by applying this virtual feature. To depict a comprehensive deviation of OOD from InD, we align the variable sequence from an OOD situation with that from InD and construct a trajectory of virtual features. We use this trajectory to illustrate OOD’s divergence from InD and then conduct theoretical analyses.
2. We establish the criteria for a robust L2O model in an InD setting and examine its response to OOD. First, we present a sufficient condition to guarantee a homogeneous convergence improvement in each iteration, confirming robustness in InD scenarios. Then, we derive the equations describing convergence gain in a single iteration and the overall convergence rate of the entire sequence relative to our proposed virtual feature. A collection of theorems and observations underscore that the magnitude of virtual features inherently exacerbates the deterioration of convergence in OOD situations.
3. Based on our theoretical insights, we propose a robust L2O model, GO-Math-L2O, that exclusively employs gradients as input features. This gradient-only approach enables a more concise virtual feature in OOD settings. We introduce a new gradient-only history modeling technique to model the optimization process's historical sequence. This method employs gradient (and subgradient) values as status indicators to modulate updates provided by the L2O model. We propose to recover the historical subgradient from an invertible model definition, thus eliminating the ambiguity of subgradient selection.
4. Through numerical experiments, we show that GO-Math-L2O outperforms state-of-the-art (SOTA) L2O models on convergence and optimality across both InD and OOD scenarios. Following training with a synthetic dataset, we deploy various OOD test cases with identical optimal values. Our proposed model's convergence speed is up to $10\times$ faster than SOTA L2O models in OOD scenarios.
The rest of this paper is organized as follows. In Sec. 2, we define OOD problems for L2O. In Sec. 3, we propose a method to quantify the solutions given by an L2O model in OOD scenarios. Then, in Sec. 4, we derive the convergence rate of an L2O model in OOD scenarios. Based on this, we propose our robust GO-Math-L2O model in Sec. 5. We empirically verify the proposed model with simulations in Sec. 6, and conclude the work in Sec. 7.
Notations: A smooth convex function and a non-smooth convex function are denoted by $f$ and $r$, respectively. NNs' input vectors are denoted by $z$ and $z^{\prime}$. Variables of an optimization problem are denoted by $x$ and $x^{\prime}$. The optimal solution is denoted by $x^{*}$. An iteration and the stopping iteration are denoted by $k$ and $K$, respectively. A smooth gradient at $x_{k}$ and the set of subgradients at $x_{k}$ are denoted by $\nabla f(x_{k})$ and $\partial r(x_{k})$, respectively. A subgradient value in $\partial r(x_{k})$ is denoted by $g_{k}$. The Frobenius norm of a matrix and the $L_2$-norm of a vector are denoted by $\|\cdot\|_{\mathrm{F}}$ and $\|\cdot\|$, respectively. Transpose is denoted by $\top$. The maximum length of history modeling is denoted by $T$. The Jacobian matrix of a vector-to-vector function is denoted by $\mathbf{J}$. An L2O model is denoted by $d$. A NN is denoted by the operator $\mathbf{N}$.
# 2. Definitions
In this section, we first introduce the objective of the L2O problem. We then introduce the Math-L2O model in [14], whose iterative updates are defined by NNs. Last, we define the domains for both InD and OOD scenarios, which leads to the definitions of InD L2O and OOD L2O problems.
# 2.1. Optimizee (Optimization Objective)
Consider the function $F(x) = f(x) + r(x)$. Here, $f(x)$ is an $L$-smooth function, and $r(x)$ is a non-smooth function. They are defined within the following function spaces:
$$
\begin{array}{l}
\mathcal{F}_{L}(\mathbb{R}^{n}) = \{ f : \mathbb{R}^{n} \to \mathbb{R} \mid f \text{ is convex, differentiable, and} \\
\qquad \|\nabla f(x) - \nabla f(y)\| \leq L \|x - y\|,\ \forall x, y \in \mathbb{R}^{n} \}, \\
\mathcal{F}(\mathbb{R}^{n}) = \{ r : \mathbb{R}^{n} \to \mathbb{R} \mid r \text{ is proper, closed, and convex} \}.
\end{array}
$$
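The $L$-smoothness condition defining $\mathcal{F}_{L}$ can be checked numerically for a simple case. The following sketch assumes the illustrative 1-D quadratic $f(x) = \tfrac{1}{2} a x^{2}$, for which $\nabla f(x) = a x$ and the tightest smoothness constant is $L = a$:

```python
import random

def grad(a, x):
    """Gradient of the 1-D quadratic f(x) = 0.5 * a * x**2 (L-smooth with L = a)."""
    return a * x

a = 3.0
L = a
rng = random.Random(0)
for _ in range(1000):
    x, y = rng.uniform(-10.0, 10.0), rng.uniform(-10.0, 10.0)
    # Smoothness: |grad f(x) - grad f(y)| <= L * |x - y| for all x, y.
    assert abs(grad(a, x) - grad(a, y)) <= L * abs(x - y) + 1e-9
```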
We assume $r ( x )$ is sub-differentiable, with its subgradient set at any point $x$ defined below:
$$
\partial r(x) = \{ g \in \mathbb{R}^{n} \mid r(y) - r(x) \geq g^{\top}(y - x),\ \forall y \in \mathbb{R}^{n} \}.
$$
We note here that the above optimization objective applies to both the InD and the OOD scenarios.
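The subgradient inequality can likewise be verified numerically. A sketch assuming the illustrative non-smooth convex function $r(x) = |x|$, whose subdifferential is $\{\mathrm{sign}(x)\}$ for $x \neq 0$ and the interval $[-1, 1]$ at $x = 0$:

```python
import random

def subgradient_l1(x):
    """One valid subgradient of r(x) = |x|: sign(x); any g in [-1, 1] works at x = 0."""
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0

rng = random.Random(0)
for _ in range(1000):
    x, y = rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)
    g = subgradient_l1(x)
    # Subgradient inequality: r(y) - r(x) >= g * (y - x) for all y.
    assert abs(y) - abs(x) >= g * (y - x) - 1e-12
```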
# 2.2. Optimizer (L2O Model)
Denote the L2O model as $d ( z )$ , where the input vector space is designated as $\mathcal { Z }$ such that $z \in \mathcal { Z } \subseteq \mathbb { R } ^ { m }$ . We define $d ( z )$ as a function mapping within the given function space [14]:
$$
\begin{array} { r l } & { \mathcal { D } _ { C } ( \mathcal { Z } ) = \{ d : \mathcal { Z } \to \mathbb { R } ^ { n } \mid d \mathrm { ~ i s ~ d i f f e r e n t i a b l e } , } \\ & { \qquad \quad \| \mathbf { J } _ { d ( z ) } \| _ { \mathrm { F } } \le C , \forall z \in \mathcal { Z } , C \in \mathbb { R } ^ { + } \} . } \end{array}
$$
We choose features from $x$ and $F(x)$ to define $z$, offering a wide range of feasible options. For instance, $z$ could be defined with the optimization variable and its gradient as $[x^{\top}, \nabla f(x)^{\top}]^{\top}$, as in [14]. Different from [14], we propose to define $z$ solely as $\nabla f(x)$ to improve convergence in OOD scenarios. Our experimental results show that this approach achieves near-optimal solutions in some OOD cases and more robust performance than SOTA baselines in all OOD scenarios. Moreover, Corollaries 2 and 3 theoretically demonstrate its advantage over the method in [14].
$d ( z )$ iteratively updates the optimization variable. At each iteration $k$ , given the previous variable $x _ { k - 1 } \in \mathbb { R } ^ { n }$ and the input vector $z _ { k - 1 }$ for the L2O model, $d ( z _ { k - 1 } )$ updates $\scriptstyle x _ { k }$ as follows:
$$
x _ { k } = x _ { k - 1 } - d ( z _ { k - 1 } ) .
$$
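The iterative update above can be sketched as follows. Here $d$ is a hand-written function standing in for a trained NN, and with the illustrative gradient-only feature $z = \nabla f(x)$ and the choice $d(z) = \alpha z$, the scheme reduces to plain gradient descent:

```python
def l2o_minimize(d, grad_f, x0, num_iters):
    """Run the update x_k = x_{k-1} - d(z_{k-1}) with gradient-only features."""
    x = x0
    for _ in range(num_iters):
        z = grad_f(x)   # input feature z_{k-1} = grad f(x_{k-1})
        x = x - d(z)    # x_k = x_{k-1} - d(z_{k-1})
    return x

# With d(z) = alpha * z the update is plain gradient descent.
alpha = 0.1
grad_f = lambda x: 2.0 * (x - 3.0)  # f(x) = (x - 3)^2, minimizer x* = 3
x_final = l2o_minimize(lambda z: alpha * z, grad_f, x0=0.0, num_iters=200)
```

In an actual L2O model, `d` would be a learned mapping constrained to the bounded-Jacobian class $\mathcal{D}_{C}(\mathcal{Z})$ rather than a fixed linear scaling.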
# 2.3. InD and OOD Problems
The InD and OOD problems share the same space of optimization objective defined in Sec. 2.1 but with different optimization objectives or variable domains. Consider a convex and compact set $S_{P} \subseteq \mathbb{R}^{n}$. The complementary set of $S_{P}$ is denoted as $S_{O}$, such that $S_{O} := \mathbb{R}^{n} \backslash S_{P}$. We also suppose the existence of two function sets: $\mathcal{F}_{L,P} \subseteq \mathcal{F}_{L}(\mathbb{R}^{n})$ and $\mathcal{F}_{P} \subseteq \mathcal{F}(\mathbb{R}^{n})$. We define InD optimization problems as follows:
$$
\operatorname* { m i n } _ { x } F ( x ) ,
$$
where $x \in S_{P}$, $F(x) = f(x) + r(x)$, $f \in \mathcal{F}_{L,P}$, and $r \in \mathcal{F}_{P}$. The dataset employed for training an L2O model is derived from a specific domain of $x$, $f$, and $r$. Consider an L2O model $d(z)$ that has undergone training with a domain of $x$, $f$, and $r$ sampled from Problem P. We then define the InD L2O Problem as: Given any initial point $x_{0} \in S_{P}$, using $d(z)$ to iteratively update $x_{0}$ in order to find a solution for any arbitrary InD problem as depicted in Problem P.
Note that instances outside this domain potentially yield more erroneous $d ( z )$ outputs. Furthermore, non-learning algorithms, such as gradient descent, have demonstrated robustness across all domains [19]. One of the main goals of this paper is to propose an L2O model that is robust to OOD.
We characterize OOD in the context of L2O via the optimization objective's domain. We define the OOD L2O Problem as: Consider an L2O model $d(z)$ that has undergone training with a domain of $x$, $f$, and $r$ sampled from Problem P; using $d(z)$ to iteratively update $x_{0}^{\prime} \in S_{O}$ in order to find a solution for any problem of the following form:
$$
\operatorname*{min}_{x^{\prime}} F^{\prime}(x^{\prime}),
$$
where $F ^ { \prime } ( x ^ { \prime } ) = f ^ { \prime } ( x ^ { \prime } ) + r ^ { \prime } ( x ^ { \prime } ) , f ^ { \prime } \notin \mathcal { F } _ { L , P }$ , and $r ^ { \prime } \notin \mathcal { F } _ { P }$ .
We delineate the InD and OOD input vector spaces of $d(z)$. We denote the input vector spaces of an L2O model in the context of the InD L2O Problem and the OOD L2O Problem as $\mathcal{Z}_P$ and $\mathcal{Z}_O$, respectively. Then, we choose features of the variables and the objective functions to construct the input feature of $d(z)$. Specifically, we define $\mathcal{Z}_P$ and $\mathcal{Z}_O$ as the following sets:
$$
\begin{array} { r l } & { \mathcal { Z } _ { P } = \{ [ x \text{-feature} ^ { \top } , f ( x ) \text{-feature} ^ { \top } , r ( x ) \text{-feature} ^ { \top } , \ldots ] ^ { \top } } \\ & { \qquad \mid \ \forall x \in S _ { P } , \forall f \in \mathcal { F } _ { L , P } , \forall r \in \mathcal { F } _ { P } \} , } \\ & { \mathcal { Z } _ { O } = \{ [ x ^ { \prime } \text{-feature} ^ { \top } , f ^ { \prime } ( x ^ { \prime } ) \text{-feature} ^ { \top } , r ^ { \prime } ( x ^ { \prime } ) \text{-feature} ^ { \top } , \ldots ] ^ { \top } } \\ & { \qquad \mid \ \exists x ^ { \prime } \in S _ { O } \text{ or } \exists f ^ { \prime } \notin \mathcal { F } _ { L , P } \text{ or } \exists r ^ { \prime } \notin \mathcal { F } _ { P } \} , } \end{array}
$$
where “. . . ” represents other feasible features, such as the history of $x$. Some feasible feature constructions for $x$, $f(x)$, and $r(x)$ include $x$ itself, $\nabla f(x)$, and $\partial r(x)$. Later, in Sec. 5, we show how to construct the input features of the L2O model $d(z)$ based only on $\nabla f(x)$ and $\partial r(x)$.
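As an illustrative sketch (with a toy $f(x) = \|x\|^2$ and $r(x) = \|x\|_1$ as assumed examples, not the paper's exact construction), such a feature vector can be assembled by concatenation:

```python
import numpy as np

def build_feature(x, grad_f, subgrad_r):
    """Stack variable and objective features into one input vector z,
    mirroring z = [x-feature, f(x)-feature, r(x)-feature, ...]."""
    return np.concatenate([x, grad_f, subgrad_r])

n = 4
x = np.ones(n)
grad_f = 2.0 * x        # gradient of the toy f(x) = ||x||^2
subgrad_r = np.sign(x)  # a subgradient of r(x) = ||x||_1
z = build_feature(x, grad_f, subgrad_r)
print(z.shape)          # a 3n-dimensional feature vector
```

Dropping any block of this concatenation (e.g. the $x$ block, as Sec. 5 does) shrinks the feature dimension accordingly.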
# 3. Virtual Feature and Trajectory
In this section, we introduce a virtual feature methodology to correlate any arbitrary variable yielded by the L2O model in the OOD scenario $( x _ { k } ^ { \prime } )$ to a corresponding variable $x _ { k }$ in the InD scenario. The virtual features are generated as a linear combination of the OOD and InD features and serve as a bridge to connect each L2O model’s OOD outcome to its InD outcome. We then leverage the virtual-feature method to connect OOD and InD variable trajectories generated by the L2O model. Since the convergence of InD trajectories is deterministic, such a method facilitates the convergence and robustness analysis for OOD scenarios in Sec. 4.
# 3.1. Virtual Feature
Consider an arbitrary OOD variable $x' \in S_O$ and an InD variable $x \in S_P$ yielded by the L2O model. Let $s \in \mathbb{R}^n$ such that $s = x' - x$. Analogously, we define $s' := z' - z$ as the difference between the L2O model's features in the OOD scenario, $z'$, and those in the InD scenario, $z$. By the Mean Value Theorem [20], there exists a virtual Jacobian matrix $\mathbf{J}_d$ with $\|\mathbf{J}_d\| \leq C\sqrt{n}$ such that the following equality holds:
$$
d ( z ^ { \prime } ) = d ( z ) + { \bf J } _ { d } ( z ^ { \prime } - z ) = d ( z ) + { \bf J } _ { d } s ^ { \prime } .
$$
The demonstrations are in Sec. 8.1. From equation 3, we can relate any variable of the L2O model in the OOD scenario to the InD scenario. Although the virtual Jacobian matrix $\mathbf{J}_d$ is non-deterministic, it is upper bounded by the definition of $d(z)$ in equation 1. This suffices for a quantitative analysis of the impact of the "shift" $s'$ on convergence. For instance, our proposed Theorem 1 in Sec. 4 provides an upper bound on the convergence gain for a single iteration.
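The boundedness can be checked numerically on a toy Lipschitz model, where the product $\|W_2\|\,\|W_1\|$ plays the role of the constant bounding $\|\mathbf{J}_d\|$ (the two-layer tanh model and all sizes are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 18                       # output dim n, feature dim m

# Toy Lipschitz "L2O model": d(z) = W2 tanh(W1 z). Since tanh is
# 1-Lipschitz, its Lipschitz constant is bounded by ||W2||*||W1||.
W1 = rng.normal(size=(10, m)) / 10
W2 = rng.normal(size=(n, 10)) / 10
d = lambda z: W2 @ np.tanh(W1 @ z)
C = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)   # Lipschitz bound

z = rng.normal(size=m)             # InD feature
z_ood = z + rng.normal(size=m)     # shifted (OOD) feature
s_prime = z_ood - z

# d(z') = d(z) + J_d s' for some ||J_d|| <= C, hence the output gap
# is bounded by C * ||s'||:
gap = np.linalg.norm(d(z_ood) - d(z))
print(gap, C * np.linalg.norm(s_prime))
```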
# 3.2. Trajectory
For the OOD Problem O, denote the initial variable as $x_0' \in S_O$. In the optimization process, we have two trajectories, for the variable $x'$ and for the features of the L2O model $z'$:
$$
\{ x _ { 0 } ^ { \prime } , x _ { 1 } ^ { \prime } , x _ { 2 } ^ { \prime } , \ldots , x _ { K } ^ { \prime } \} , \{ z _ { 0 } ^ { \prime } , z _ { 1 } ^ { \prime } , z _ { 2 } ^ { \prime } , \ldots , z _ { K } ^ { \prime } \} .
$$
where $x_k' \in S_O$, $z_k' \in \mathcal{Z}_O$, $k = 0, 1, 2, \ldots, K$. Similarly, for the InD Problem P, denote the initial variable as $x_0 \in S_P$. We also have two trajectories, for the variables $x$ and the features of the L2O model $z$:
$$
\{ x _ { 0 } , x _ { 1 } , x _ { 2 } , \ldots , x _ { K } \} , \{ z _ { 0 } , z _ { 1 } , z _ { 2 } , \ldots , z _ { K } \} ,
$$
where $x _ { k } \in S _ { P } , z _ { k } \in \mathcal { Z } _ { P } , k = 0 , 1 , 2 , . . . , K$ . Utilizing the definitions in Sec. 3.1, we compute the differences between the variables and the features of the OOD trajectories and the InD trajectories as follows:
$$
\big \{ s _ { 0 } , s _ { 1 } , s _ { 2 } , \ldots , s _ { K } \big \} , \big \{ s _ { 0 } ^ { \prime } , s _ { 1 } ^ { \prime } , s _ { 2 } ^ { \prime } , \ldots , s _ { K } ^ { \prime } \big \} ,
$$
where $s _ { k } : = x _ { k } ^ { \prime } - x _ { k }$ and $s _ { k } ^ { \prime } : = z _ { k } ^ { \prime } - z _ { k }$ . Thus, we can represent the OOD trajectory by $\{ x _ { k } + s _ { k } \}$ and $\{ z _ { k } + s _ { k } ^ { \prime } \}$ . Furthermore, utilizing the virtual-feature method in Sec. 3.1, we have:
$$
d ( z _ { k - 1 } ^ { \prime } ) = d ( z _ { k - 1 } ) + \mathbf { J } _ { d , k - 1 } s _ { k - 1 } ^ { \prime } ,
$$
where $\mathbf{J}_{d,k-1}$ is a virtual Jacobian matrix of $d(z)$ at iteration $k-1$. Due to equation 2 in Sec. 2.2, $x_k'$ is updated by $x_{k-1}' - d(z_{k-1}')$ and $x_k$ is updated by $x_{k-1} - d(z_{k-1})$. Based on equation 4, we have:
$$
s _ { k } = s _ { k - 1 } - \mathbf { J } _ { d , k - 1 } s _ { k - 1 } ^ { \prime } .
$$
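For a linear toy model $d(z) = \mathbf{A}z$ the virtual Jacobian is exactly $\mathbf{A}$, so the recursion above can be verified numerically (the feature map and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, 2 * n)) / 10          # linear "L2O model": d(z) = A z
feat = lambda x: np.concatenate([x, 2 * x])   # z = [x, grad f(x)] for f = ||x||^2

x, x_ood = rng.normal(size=n), rng.normal(size=n)
for _ in range(5):
    z, z_ood = feat(x), feat(x_ood)
    s, s_prime = x_ood - x, z_ood - z
    x_next = x - A @ z                        # InD update
    x_ood_next = x_ood - A @ z_ood            # OOD update
    # For a linear model the virtual Jacobian is A itself, so the
    # recursion s_k = s_{k-1} - J_{d,k-1} s'_{k-1} holds exactly:
    assert np.allclose(x_ood_next - x_next, s - A @ s_prime)
    x, x_ood = x_next, x_ood_next
print("recursion verified")
```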
# 4. White-Box OOD Generalization Analysis
In this section, we rigorously demonstrate that the robustness of the L2O model is limited by the input features of its neural networks. We prove that larger input features adversely impact the L2O model's generalization ability in OOD scenarios.
# 4.1. The Smooth Case
Building upon the state-of-the-art Math-L2O [14], we systematically detail our conclusions through a series of theorems and lemmas.
We analyze the convergence rate of the OOD scenario when the objective function $F ( x )$ is smooth, i.e., $r ( x ) = 0$ and $F ( x ) = f ( x )$ . Leveraging Theorem 1 from [14], the update of the variable at the $k$ -th iteration can be expressed as $x _ { k } = x _ { k - 1 } - \mathbf { P } _ { k - 1 } \nabla f ( x _ { k - 1 } ) - b _ { k - 1 }$ , where $\mathbf { P } _ { k - 1 } \in$ $\mathbb { R } ^ { n \times n }$ and $b _ { k - 1 } \in \mathbb { R } ^ { n }$ are parameters learned by NNs.
Let $\mathbf { P } _ { k - 1 }$ and $b _ { k - 1 }$ be $\mathbf { N } _ { 1 } ( \mathcal { Z } ) \in \mathcal { D } _ { C _ { 1 } } ( \mathcal { Z } )$ and $\mathbf { N } _ { 2 } ( \mathcal { Z } ) \in$ $\mathcal { D } _ { C _ { 2 } } ( \mathcal { Z } )$ respectively, for some positive constants $C _ { 1 } , C _ { 2 } \in$ $\mathbb { R } ^ { + }$ . As suggested in [14], we assign $\mathbf { P } _ { k }$ as a diagonal matrix. Without loss of generality, for any given variable $x _ { k - 1 }$ , where $x _ { k - 1 } \in \mathbb { R } ^ { n }$ , and any given function $f \in \mathcal { F } _ { L } ( \mathbb { R } ^ { n } )$ , we define $z _ { k - 1 } = [ x _ { k - 1 } ^ { \top } , \nabla f ( x _ { k - 1 } ) ^ { \top } ] ^ { \top }$ [14]. The update of variable $x _ { k }$ at each iteration $k$ can then be expressed as:
$$
x _ { k } = x _ { k - 1 } - \mathrm { d i a g } ( { \bf N } _ { 1 } ( z _ { k - 1 } ) ) \nabla f ( x _ { k - 1 } ) - { \bf N } _ { 2 } ( z _ { k - 1 } ) .
$$
The OOD shift applied to the variable and its gradient yields the definition of virtual feature (Sec. 3):
$$
\begin{array} { r } { s _ { k - 1 } ^ { \prime } : = [ s _ { k - 1 } ^ { \top } , ( \nabla f ^ { \prime } ( x _ { k - 1 } ^ { \prime } ) - \nabla f ( x _ { k - 1 } ) ) ^ { \top } ] ^ { \top } . } \end{array}
$$
We present the following lemma for $\mathbf { N } _ { 1 } ( z )$ and $\mathbf { N } _ { 2 } ( z )$ to yield a variable $x _ { k }$ that is no worse than the previous variable $x _ { k - 1 }$ at each iteration $k$ .
Lemma 1. Denote the angle between $\mathbf{N}_2(z_{k-1})$ and the corresponding $\nabla f(x_{k-1})$ as $\theta_{k-1}$. For $\forall z_{k-1} \in \mathcal{Z}_P, \forall x_{k-1} \in S_P$, if $\mathbf{N}_1(z_{k-1})$ and $\mathbf{N}_2(z_{k-1})$ are respectively bounded by the following compact sets:
$$
\begin{array} { l } { \displaystyle \mathbf { N } _ { 1 } ( z _ { k - 1 } ) : = \lambda _ { k - 1 } \mathbf { 1 } , \lambda _ { k - 1 } \in \left[ 0 , \frac { 1 } { L } \right] , } \\ { \displaystyle \mathbf { N } _ { 2 } ( z _ { k - 1 } ) \in \left[ \mathbf { 0 } , \frac { \| \nabla f ( x _ { k - 1 } ) \| \cos ( \theta _ { k - 1 } ) } { L } \mathbf { 1 } \right] , \theta \in \left[ 0 , \frac { \pi } { 2 } \right] , } \end{array}
$$
then, for $x_k$ generated by the L2O model update above, we have:
$$
F ( x _ { k } ) - F ( x _ { k - 1 } ) \leq 0 .
$$
Proof. See Sec. 8.2 in Appendix.
As stated in Lemma 1, to maintain homogeneous improvement in convergence, it is sufficient to set $\mathbf{N}_1(z)$ to an input-invariant constant and to limit $\mathbf{N}_2(z)$ according to the gradient $\nabla F(x_{k-1})$. Moreover, we can use bounded activation functions, such as Sigmoid [17] and Tanh [12], when training an L2O model to fulfill these conditions and ensure convergence.
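For instance, the box constraint on $\mathbf{N}_1$ can be enforced architecturally by scaling a sigmoid output into $[0, 1/L]$; a minimal sketch with illustrative values:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

L = 4.0                            # smoothness constant of f (assumed)
raw = np.array([-3.0, 0.0, 2.5])   # unconstrained network outputs

# Scaling sigmoid outputs keeps every element of N1 inside [0, 1/L],
# matching the compact set required by Lemma 1.
N1 = sigmoid(raw) / L
print(N1)
```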
The proof of Lemma 1 establishes that the improvement has a quadratic relation to each element of $\mathbf{N}_1(z_{k-1})$ and to $\|\mathbf{N}_2(z_{k-1})\|$. We can identify the optimal upper bound on the convergence improvement of the InD L2O model by optimizing this quadratic relation, leading us to Corollary 1.
Corollary 1. For any $z _ { k - 1 } \in \mathcal { Z } _ { P }$ , we let:
$$
\mathbf { N } _ { 1 } ( z _ { k - 1 } ) : = { \frac { 1 } { 2 L } } \mathbf { 1 } , \mathbf { N } _ { 2 } ( z _ { k - 1 } ) : = { \frac { \nabla f ( x _ { k - 1 } ) } { 2 L } } ,
$$
the Math-L2O update above is exactly the gradient descent update, with convergence rate:
$$
F ( x _ { K } ) - F ( x ^ { * } ) \leq \frac { L } { 2 K } \| x _ { 0 } - x ^ { * } \| ^ { 2 } .
$$
Proof. See Sec. 8.3 in Appendix.
Corollary 1 implies that the L2O model can achieve gradient descent's convergence rate under particular settings: $\mathbf{N}_1(z_{k-1})$ is set to a homogeneous constant across all elements, and $\mathbf{N}_2(z_{k-1})$ is set in correspondence with the gradient $\nabla f(x_{k-1})$. Moreover, Corollary 1 also yields the most robust L2O model, with an identical per-iteration convergence gain across all InD instances.
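Under the Corollary 1 settings the update collapses to plain gradient descent with step $1/L$, so the stated rate can be checked numerically on a smooth quadratic (the toy instance below is our own assumption):

```python
import numpy as np

L = 2.0
h = np.array([2.0, 1.0, 0.5])            # Hessian diagonal, max eigenvalue = L
f = lambda x: 0.5 * np.sum(h * x * x)    # L-smooth convex quadratic, x* = 0
grad = lambda x: h * x

x0 = np.array([3.0, -2.0, 1.0])
x, K = x0.copy(), 50
for _ in range(K):
    g = grad(x)
    # Corollary 1 settings: N1 = 1/(2L) * 1, N2 = grad/(2L), so
    # x_k = x_{k-1} - g/(2L) - g/(2L) = x_{k-1} - g/L  (GD with step 1/L)
    x = x - g / (2 * L) - g / (2 * L)

# F(x_K) - F(x*) <= L/(2K) * ||x0 - x*||^2 with x* = 0, F(x*) = 0
bound = L / (2 * K) * np.linalg.norm(x0) ** 2
print(f(x), bound)
```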
# Per-Iteration Convergence Gain
To ascertain the convergence rate in the OOD setting, following Corollary 1, we suppose that after training the following assumption holds for the InD L2O Problem (not for the OOD L2O Problem), ensuring the best robustness in the InD scenario:
Assumption 1. After training, $\forall x_{k-1} \in S_P, \forall z_{k-1} \in \mathcal{Z}_P$, $\mathbf{N}_1(z_{k-1}) := \frac{1}{2L}\mathbf{1}$ and $\mathbf{N}_2(z_{k-1}) := \frac{\nabla f(x_{k-1})}{2L}$.
Based on Lemma 1 and Corollary 1, Assumption 1 yields an L2O model with the best robustness over all InD instances. In the following theorem, we quantify the diminution in convergence rate caused by the virtual feature $s'$ defined in Sec. 3.
Theorem 1. Under Assumption 1, there exist virtual Jacobian matrices $\mathbf{J}_{1,k-1}, \mathbf{J}_{2,k-1}$, $k = 1, 2, \ldots, K$, such that the per-iteration convergence improvement in the OOD scenario is upper bounded by:
$$
\begin{array} { r l } & { \quad F ^ { \prime } ( x _ { k } + s _ { k } ) - F ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) } \\ & { \leq - \frac { \| \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) \| ^ { 2 } } { 2 L } } \\ & { \quad + L \| \operatorname { d i a g } ( \mathbf { J } _ { 1 , k - 1 } s ^ { \prime } ) \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) \| ^ { 2 } } \\ & { \quad + L \| \frac { \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) - \nabla f ( x _ { k - 1 } ) } { 2 L } - \mathbf { J } _ { 2 , k - 1 } s ^ { \prime } \| ^ { 2 } . } \end{array}
$$
Proof. See Sec. 8.4 in Appendix.
Theorem 1 discloses that, for a single iteration, the convergence improvement in OOD is bounded by that of gradient descent with a step size of $1/L$, namely a $-\|\nabla f\|^2/2L$ improvement. Hence, when Math-L2O is adequately trained, any OOD shift will dampen convergence. Additionally, given that the right-hand side is not strictly non-positive, we cannot unequivocally affirm that convergence will occur within a single iteration. Further investigation also indicates that, even for convex optimization problems, scenarios may arise where the objective value deteriorates.
While the existence of virtual Jacobian matrices in Theorem 1 is assured, their specific values remain unknown. Given that boundedness is a defined characteristic of these matrices, we relax this constraint in Theorem 1 and introduce Corollary 2.
Corollary 2. Under Assumption 1, the per-iteration convergence improvement in the OOD scenario can be upper bounded w.r.t. $\|s_{k-1}'\|$ by:
$$
\begin{array} { r l } & { \quad F ^ { \prime } ( x _ { k } + s _ { k } ) - F ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) } \\ & { \leq - \frac { \| \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) \| ^ { 2 } } { 2 L } } \\ & { \quad + \frac { \| \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) - \nabla f ( x ) \| ^ { 2 } } { 2 L } } \\ & { \quad + \left( L C _ { 1 } ^ { 2 } n \| \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) \| ^ { 2 } + 2 L C _ { 2 } ^ { 2 } n \right) \| s ^ { \prime } \| ^ { 2 } . } \end{array}
$$
Proof. See Sec. 8.5 in Appendix.
Corollary 2 further elucidates that the decline in the OOD convergence improvement is determined by the magnitude of the (virtual) input feature $s'$ of the L2O model, as outlined in equation 7. This magnitude is intrinsically related to the vector's dimensionality, which depends on the feature construction of the L2O model. For example, to reduce its magnitude, we can eliminate $s_{k-1}$ in equation 7. We achieve this feature shrinking and propose a novel gradient-only L2O model in Sec. 5.
# Multi-Iteration Convergence Rate
Building upon Theorem 1, we extrapolate the convergence rate across numerous iterations, as delineated in Theorem 2.
Theorem 2. Under Assumption 1, the convergence rate over $K$ iterations in the OOD scenario is upper bounded by:
$$
\begin{array} { r l } { { \operatorname* { m i n } _ { k = 1 , \cdots , K } F ^ { \prime } ( x _ { k } + s _ { k } ) - F ^ { \prime } ( x ^ { * } + s ^ { * } ) } } \\ & { \le \frac { L } { 2 } \| x _ { 0 } - x ^ { * } + s _ { 0 } - s ^ { * } \| ^ { 2 } - \frac { L } { 2 } \| x _ { K } - x ^ { * } + s _ { K } - s ^ { * } \| ^ { 2 } } \\ & { \quad + \frac { L } { K } \displaystyle \sum _ { k = 1 } ^ { K } ( x _ { k } + s _ { k } - x ^ { * } - s ^ { * } ) ^ { \top } } \\ & { \Big ( x _ { k } + s _ { k } - \big ( x _ { k - 1 } + s _ { k - 1 } - \frac { \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) } { L } \big ) \Big ) . } \end{array}
$$
Proof. See Sec. 8.6 in Appendix.
The first two terms on the right-hand side of the above inequality represent the gradient descent convergence rate characterized by a step size of $1 / L$ . However, the third term is unbounded and could be either non-positive or positive. This suggests that there is no guaranteed global convergence in OOD situations, even with homogeneous robustness in InD scenarios.
The inequality above offers a direct approach to analyzing distinct cases of convergence. The concluding line of Theorem 2 contains a gradient descent update, $x_{k-1} + s_{k-1} - \nabla f'(x_{k-1} + s_{k-1})/L$. Moreover, $x_k + s_k$ represents the solution updated by the L2O model. The difference of the two terms reveals the discrepancy between the updates made by L2O and by gradient descent on the variable $x_{k-1} + s_{k-1}$, creating a vector directed towards $x_k + s_k$. Similarly, $x_k + s_k - x^* - s^*$ signifies the position relative to the optimal solution, generating another vector directed towards $x_k + s_k$. The resulting inner product is non-positive if the angle between these two vectors is $\pi/2$ or more. Moreover, if the trajectory of $x_k + s_k - x^* - s^*$ can be extrapolated from domain knowledge, a "trust region" surrounding $x_k + s_k$ can be established to augment the efficacy of gradient descent.
From Theorem 2, we develop a stringent formulation to illustrate the potential uncertainty of convergence in OOD scenarios. If we know the relative position of the optimal solution, we can fine-tune an L2O model to outperform gradient descent. Based on Theorem 2, we establish an upper bound w.r.t. $s'$, mirroring the approach in Corollary 2.
Corollary 3. Under Assumption 1, the L2O model $d(z)$'s OOD convergence rate is upper bounded w.r.t. $\|s_{k-1}'\|$ by:
$$
\begin{array} { r l } & { \quad \underset { k = 1 , \ldots , K } { \mathrm { m i n } } F ^ { \prime } ( x _ { k } + s _ { k } ) - F ^ { \prime } ( x ^ { * } + s ^ { * } ) } \\ & { \le \displaystyle \frac { L } { 2 } \| x _ { 0 } + s _ { 0 } - x ^ { * } - s ^ { * } \| ^ { 2 } - \displaystyle \frac { L } { 2 } \| x _ { K } + s _ { K } - x ^ { * } - s ^ { * } \| ^ { 2 } } \\ & { \quad \quad + \displaystyle \frac { 1 } { 2 K } \sum _ { k = 1 } ^ { K } \left( \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) - \nabla f ( x _ { k - 1 } ) \right) ^ { \top } } \\ & { \quad \quad \quad \quad \quad ( x _ { k } + s _ { k } - x ^ { * } - s ^ { * } ) } \\ & { \quad \quad \quad + \displaystyle \frac { L } { K } \sum _ { k = 1 } ^ { K } \left( C _ { 1 } \sqrt { n } \| \nabla f ^ { \prime } ( x _ { k - 1 } + s _ { k - 1 } ) \| \right. } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \left. + C _ { 2 } \sqrt { n } \| x _ { k } + s _ { k } - x ^ { * } - s ^ { * } \| \right) \| s _ { k - 1 } ^ { \prime } \| . } \end{array}
$$
Proof. See Sec. 8.7 in Appendix.
Corollary 3 posits that the overall convergence rate is consistently upper bounded by the magnitude of $s'$. Based on Corollaries 2 and 3, we endeavor to reduce the magnitude of $s'$ by eliminating variable features, leading to the gradient-only Math-L2O framework in the next section.
# 4.2. Other Three Cases
We have developed several additional theorems and lemmas for the non-smooth, incremental historical modeling, and integrated smooth-non-smooth cases. Our approach mirrors that employed in the smooth case. The backbone algorithms of math-inspired L2O fundamentally limit its convergence, e.g., the Gradient Descent [19] and Proximal Point [18] algorithms in the smooth and non-smooth cases, respectively.
We extend the theorems and lemmas of the smooth case to derive formulas for the convergence improvement of a single iteration and the convergence rate across a sequence of iterations. These demonstrate the diminishing effect of OOD on convergence. Our findings conclude that constructing fewer features can mitigate this negative impact. More extensive demonstrations and complete proofs can be found in the Appendix.
# 5. Gradient-Only L2O Model
Informed by the theorems and lemmas posited in Sec. 4, we introduce a gradient-only L2O model, GO-Math-L2O, which aims to enhance robustness in OOD scenarios by eliminating variable-related input features of the L2O model.
To derive the formulation of GO-Math-L2O, we employ the workflow delineated in [14]. Let $T$ denote the history length. At the $k$-th iteration, suppose there exists an operator $d_k \in \mathcal{D}_C(\mathbb{R}^{3n})$; we formulate the update of our GO-Math-L2O as follows:
$$
x _ { k } = x _ { k - 1 } - d _ { k } ( \nabla f ( x _ { k - 1 } ) , g _ { k } , v _ { k - 1 } ) ,
$$
where $g_k$ denotes the implicit subgradient vector at $x_k$ used to invoke the proximal gradient method [14]. Moreover, we eliminate all variable-related features and define $v_k$ as the result of historical modeling [14]. In contrast to the variable-based approach in [14], we propose to use the gradient (and subgradient) to model the historical information of the optimization process, since the gradient sufficiently and necessarily indicates optimality in convex optimization. This approach reduces the magnitude of L2O's input feature (defined in Sec. 3) by $1/3$, which facilitates convergence by our proposed corollaries in Sec. 4.
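The effect on the feature shift can be seen directly: dropping the variable block from a concatenated feature vector can only shrink $\|s'\|$, which tightens the OOD bounds in Corollaries 2 and 3 (the toy $f$ below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
x, x_ood = rng.normal(size=n), rng.normal(size=n)
gf = lambda x: 2 * x                     # gradient of toy f(x) = ||x||^2

# Variable-based feature vs. gradient-only feature
z_var  = lambda x: np.concatenate([x, gf(x)])
z_grad = lambda x: gf(x)

s_var  = z_var(x_ood)  - z_var(x)        # contains the x-shift block
s_grad = z_grad(x_ood) - z_grad(x)       # x-shift block eliminated

# Removing a block of the concatenation can only reduce the norm:
# ||s_var||^2 = ||x-shift||^2 + ||grad-shift||^2 >= ||s_grad||^2.
print(np.linalg.norm(s_var), np.linalg.norm(s_grad))
```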
Suppose there exists an operator $u_k \in \mathcal{D}_C(\mathbb{R}^{Tn})$; we define the following model to generate $v_k$ from the gradients and subgradients of $T$ historical iterations:
$$
v _ { k } = u _ { k } ( \nabla f ( x _ { k - 1 } ) + g _ { k - 1 } , \ldots , \nabla f ( x _ { k - T } ) + g _ { k - T } ) ,
$$
where each $g$ represents a subgradient vector. For subgradient selection, we must carefully choose an instance from the subgradient set at each non-smooth point, since an arbitrary selection may lead to poor convergence [25].
We achieve lightweight subgradient selection based on the gradient map method [28] and the following model constructions. From the objective definition in Sec. 2.1, the non-smooth objective $r$ is trivially solvable by arg min. Thus, at the $k$-th iteration, we can recover an implicit subgradient vector $g_k$ of the arg min from the $k$-th solution $x_k$ and the $(k-1)$-th solution $x_{k-1}$, provided the L2O operator $d_k$ in equation 8 is invertible. Next, we obtain an invertible $d_k$ based on the workflow proposed in [14].
With the above feature and component constructions, we start to define the structures and learnable parameters of our L2O operator $d _ { k }$ in equation 8. We formulate $d _ { k }$ as the necessary condition of convergence [14], which means the formulation that $d _ { k }$ should follow if convergence is achieved. First, denote a candidate optimal solution as $x ^ { * }$ , we construct two sufficient conditions (Asymptotic Fixed Point and Global Convergence) of convergence for our L2O operator $d _ { k }$ in equation 8:
$$
\begin{array} { c } { \displaystyle \operatorname* { l i m } _ { k \to \infty } d _ { k } \big ( \nabla f ( x ^ { * } ) , - \nabla f ( x ^ { * } ) , 0 \big ) = \mathbf { 0 } , } \\ { \displaystyle \operatorname* { l i m } _ { k \to \infty } x _ { k } = x ^ { * } . } \end{array}
$$
As discussed in [14], these two conditions are essential for optimization algorithms.
Then, we present Theorem 3 to construct $d_k$'s parameters. Theorem 3 shows that if $d_k$ converges, it should take the form of equation 10. Then, under a further assumption on some of the parameters, the solution at each iteration can be uniquely obtained by equation 11.
Theorem 3. Suppose $T = 2$; given $f \in \mathcal{F}_L(\mathbb{R}^n)$ and $r \in \mathcal{F}(\mathbb{R}^n)$, we pick operators from $\mathcal{D}_C(\mathbb{R}^{3n})$ and $\mathcal{D}_C(\mathbb{R}^{2n})$. If Condition FP and Condition GC hold, there exist $\mathbf{R}_k \succ 0$, $\mathbf{Q}_k, \mathbf{B}_k \in \mathbb{R}^{n \times n}$, and $b_{1,k}, b_{2,k} \in \mathbb{R}^n$ satisfying:
$$
\begin{array} { r l } & { x _ { k } = x _ { k - 1 } - \mathbf { R } _ { k } \nabla f ( x _ { k - 1 } ) - \mathbf { R } _ { k } g _ { k } - \mathbf { Q } _ { k } v _ { k - 1 } - b _ { 1 , k } , } \\ & { v _ { k } = ( \mathbf { I } - \mathbf { B } _ { k } ) G _ { k } + \mathbf { B } _ { k } G _ { k - 1 } - b _ { 2 , k } , } \\ & { G _ { k } : = \mathbf { R } _ { k } ^ { - 1 } ( x _ { k - 1 } - x _ { k } - \mathbf { Q } _ { k } v _ { k - 1 } - b _ { 1 , k } ) , } \end{array}
$$
where, for $k = 0, 1, 2, \ldots$, $g_{k+1} \in \partial r(x_{k+1})$ represents the implicit subgradient vector, $\mathbf{R}_k, \mathbf{Q}_k$, and $\mathbf{B}_k$ are bounded parameter matrices, and $b_{1,k} \to 0$, $b_{2,k} \to 0$ as $k \to \infty$. Since $\mathbf{R}_k$ is symmetric positive definite, $x_{k+1}$ is uniquely determined through:
$$
\operatorname * { a r g m i n } _ { x \in \mathbb { R } ^ { n } } r ( x ) + \frac 1 2 \| x - \mathbf { R } _ { k } \nabla f ( x _ { k } ) - \mathbf { Q } _ { k } v _ { k } - b _ { 1 , k } \| _ { \mathbf { R } _ { k } ^ { - 1 } } ^ { 2 } ,
$$
where $\| \cdot \| _ { \mathbf { R } _ { k } ^ { - 1 } }$ is defined as $\| x \| _ { \mathbf { R } _ { k } ^ { - 1 } } = \sqrt { x ^ { \top } \mathbf { R } _ { k } ^ { - 1 } x }$ .
Proof. See Sec. 8.8 in Appendix.
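For a diagonal positive definite $\mathbf{R}_k$ and $r(x) = \lambda\|x\|_1$, the arg min in equation 11 separates per coordinate into soft-thresholding with an $\mathbf{R}_k$-dependent threshold; a sketch under these assumptions (the input $y$ stands in for the whole shifted point inside the norm):

```python
import numpy as np

def prox_l1_weighted(y, R_diag, lam):
    """argmin_x  lam*||x||_1 + 0.5*||x - y||^2_{R^{-1}}  for diagonal R > 0.

    Per coordinate this is a scalar problem
        lam*|x_i| + (x_i - y_i)^2 / (2*R_ii),
    whose solution is soft-thresholding with threshold lam * R_ii.
    """
    thresh = lam * R_diag
    return np.sign(y) * np.maximum(0.0, np.abs(y) - thresh)

y = np.array([3.0, -0.5, 1.0])
R_diag = np.array([1.0, 2.0, 0.5])       # diagonal of a PD matrix R_k
x = prox_l1_weighted(y, R_diag, lam=1.0)
print(x)
```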
As a necessary condition for convergence, Theorem 3 suggests that our gradient-only L2O model should construct the parameters $\mathbf{R}$, $\mathbf{Q}$, $\mathbf{B}$, $b_1$, and $b_2$. It is worth noting that this construction does not guarantee satisfaction of Conditions FP and GC; convergence is promoted by training.
We learn to construct the parameters in Theorem 3. First, the proof elucidates that the bias terms approach zero upon convergence; thus, we set $b_1, b_2 := 0$ and learn to construct $\mathbf{R}$, $\mathbf{Q}$, and $\mathbf{B}$. We adopt the construction in [14] to implement our GO-Math-L2O model with a two-layer LSTM cell. Then, we use three one-layer linear neural networks with a Sigmoid activation function [17] to generate $\mathbf{R}$, $\mathbf{Q}$, and $\mathbf{B}$ at each iteration, respectively, which ensures that all the matrices are bounded.
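A simplified NumPy sketch of the bounded parameter generation (a fixed random vector stands in for the LSTM hidden state, and diagonal parameter matrices are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, hidden = 4, 8

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# One linear head with sigmoid per parameter matrix. Sigmoid keeps
# every generated entry in (0, 1), so R, Q, B are bounded and R is PD.
W_R, W_Q, W_B = (rng.normal(size=(n, hidden)) for _ in range(3))
h = rng.normal(size=hidden)       # stand-in for the LSTM hidden state

R = np.diag(sigmoid(W_R @ h))     # diagonal, entries in (0, 1)
Q = np.diag(sigmoid(W_Q @ h))
B = np.diag(sigmoid(W_B @ h))
print(np.diag(R))
```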
# 6. Experiments
We perform experiments with Python 3.9 and PyTorch 1.12 on an Ubuntu 18.04 system equipped with 128GB of memory, an Intel Xeon Gold 5320 CPU, and a pair of NVIDIA RTX 3090 GPUs. We strictly follow the experimental setup presented in [14] for constructing InD evaluations. Due to the page limit, the implementation details are in Sec. 12.
We use the Adam optimizer [13] to train our proposed model and the learning-based baselines on datasets of 32,000 optimization problems with randomly sampled parameters and optimal solutions. We generate a test dataset of 1,000 iterations' objective values, averaged over 1,024 pre-generated optimization problems. We evaluate different training configurations and loss functions to select the best setting. Details are in Sec. 12.5, Appendix.
Baselines. We compare our GO-Math-L2O (Sec. 5) against both learning-based methods and non-learning algorithms. Our main competitor is the state-of-the-art (SOTA) math-inspired L2O model in [14]; specifically, we select the best variant from that study, L2O-PA. Consistent with its methodology, we also compare our approach with several hand-crafted algorithms: ISTA, FISTA [5], Adam [13], and AdamHD [4], which is Adam complemented by an adaptive learning rate. Moreover, we assess our model against two black-box L2O models, L2O-DM [3] and L2O-RNNprop [15], and Ada-LISTA [1], which unrolls the gradient descent algorithm with learning.
Optimization Objective. We choose the two regression problems in [14]: LASSO Regression and Logistic Regression, defined as follows:
$$
\begin{array} { l } { \displaystyle \min _ { x \in \mathbb { R } ^ { n } } F ( x ) = \displaystyle \frac 1 2 \| \mathbf { A } x - b \| ^ { 2 } + \lambda \| x \| _ { 1 } , } \\ { \displaystyle \operatorname* { m i n } _ { x \in \mathbb { R } ^ { n } } F ( x ) = - \frac { 1 } { m } \sum _ { i = 1 } ^ { m } \big [ b _ { i } \log ( h ( a _ { i } ^ { \top } x ) ) } \\ { \displaystyle ~ + ( 1 - b _ { i } ) \log ( 1 - h ( a _ { i } ^ { \top } x ) ) \big ] + \lambda \| x \| _ { 1 } , } \end{array}
$$
where $m := 1000$, $\mathbf{A} \in \mathbb{R}^{250 \times 500}$, $b \in \mathbb{R}^{250}$, and $\{(a_i, b_i) \in \mathbb{R}^{50} \times \{0,1\}\}_{i=1}^m$ are given parameters. $h(x) := 1/(1 + e^{-x})$ is the sigmoid function. We use the standard normal distribution to generate samples and set $\lambda := 0.1$ for both scenarios [14].
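As an illustrative sketch (not the paper's exact generation code), a LASSO instance with the stated dimensions can be set up as follows; the sparse ground truth `x_true` and the construction `b = A @ x_true` are our own assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(4)
m_rows, n = 250, 500
lam = 0.1

# LASSO instance with standard-normal data.
A = rng.standard_normal((m_rows, n))
x_true = rng.standard_normal(n) * (rng.random(n) < 0.1)  # sparse ground truth
b = A @ x_true

def F(x):
    """LASSO objective: 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)

x0 = np.zeros(n)
print(F(x0))   # objective at the all-zeros starting point
```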
We implement the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [5], executing 5,000 iterations to generate labels (optimal objective values) [14]. Due to the page limit, we confine our presentation to LASSO Regression; the results of Logistic Regression are in Sec. 12.8, Appendix.
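FISTA itself is standard [5]; a compact NumPy implementation of such a label-generation routine might look like the following sketch (problem sizes and iteration count shrunk for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(0.0, np.abs(v) - t)

def fista(A, b, lam, iters=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)      # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 40))
b = rng.standard_normal(20)
x = fista(A, b, lam=0.1)
obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.1 * np.linalg.norm(x, 1)
print(obj)
```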
OOD Scenarios. We aim to quantify the effect of OOD on convergence rates. We specifically formulate two types of OOD trajectories triggered by different actions. It is crucial to note that both OOD and InD scenarios maintain an identical optimality on both objective and solution.
1) $s_0 \neq 0$, $s_0 \in \mathbb{R}^n$. $x_0$ is altered by an adjustment $s_0$ such that $x_0'$ falls within the OOD set $S_O$. Assuming the objective remains consistent, we expect $x'$ to move from the OOD set $S_O$ to the InD set $S_P$.
2) $F'(x) = F(x + t)$, $t \in \mathbb{R}^n$. The OOD perturbation introduces a translation $t$ along the axes of the objective variable. Thus, the optimal solution $x'^*$ diverges from that of the original InD domain, even though the optimal value remains unchanged. This illustrates a scenario where the domain translates at inference time. If the starting point is unchanged, $x'$ is expected to move from the InD domain to the OOD domain.
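The two triggers can be sketched on a toy objective as follows (the magnitude 10 and the quadratic $f$ are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
f = lambda x: 0.5 * np.sum(x * x)        # toy InD objective, x* = 0

# Trigger 1: shift the initial point; the objective is unchanged.
x0 = np.zeros(n)
s0 = 10.0 * rng.standard_normal(n)
x0_ood = x0 + s0                          # OOD starting point

# Trigger 2: translate the objective along the variable axes.
t = 10.0 * np.ones(n)
f_ood = lambda x: f(x + t)                # optimum moves to -t

# In both triggers the optimal *value* is preserved:
print(f(np.zeros(n)), f_ood(-t))
```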
We derive the non-smooth function's proximal operator for the OOD scenario, specifically for the $\ell_1$-norm. We define $r(x)$ as $\lambda \|x\|_1$ and the OOD translation as $t$ on the variable. The OOD proximal operator with $t$ is given by:
$$
( \operatorname { p r o x } _ { r , p _ { k } } ( \bar { x } ) ) _ { i } : = - t + \operatorname { s i g n } ( \bar { x } _ { i } ) \operatorname* { m a x } \big ( 0 , | \bar { x } _ { i } | - \lambda ( p _ { k } ) _ { i } + \operatorname { s i g n } ( \bar { x } _ { i } ) \, t \big ) .
$$
Figure 1. LASSO Regression: InD.
Figure 2. LASSO Regression: Real-World OOD.
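A minimal sketch of the translated $\ell_1$ proximal operator, implementing the displayed formula elementwise with a scalar translation $t$ (a simplification of our own; with $t = 0$ it reduces to standard soft-thresholding):

```python
import numpy as np

def prox_translated_l1(x_bar, p_k, lam, t):
    """Elementwise l1 proximal operator under a translation t,
    following the displayed formula with t applied to each coordinate."""
    s = np.sign(x_bar)
    return -t + s * np.maximum(0.0, np.abs(x_bar) - lam * p_k + s * t)

x_bar = np.array([2.0, -1.5, 0.2])
p_k = np.ones(3)
out = prox_translated_l1(x_bar, p_k, lam=0.5, t=0.0)
print(out)
```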
# 6.1. InD Comparison
The trajectories of solving the LASSO Regression problems are shown in Figure 1, where the vertical axis represents the normed objective value at a given iteration (indicated on the horizontal axis), with labels generated by FISTA [5]. Our proposed method (red line) surpasses all other methods, demonstrating better optimality and quicker convergence.
Figure 3. LASSO Regression: OOD by Trigger 1.
Furthermore, we run several ablation studies on model configuration, such as gradient map recovery strategies in Sec. 12.4 and hyperparameter settings for the learned parameter matrices in Sec. 12.6, to determine the best model configuration. The details are in the Appendix.
# 6.2. OOD Comparison
Figure 4. LASSO Regression: OOD by Trigger 2.
The real-world results in Figure 2 show that our GO-Math-L2O (which converges within 400 iterations) outperforms all other baselines (1,000 iterations). Considering the lackluster performance of the other baselines in Figures 1 and 2, we primarily compare our GO-Math-L2O model against the SOTA L2O-PA [14]. We construct two synthetic OOD scenarios with the two trigger settings, where the optimal objectives align with those in Figure 1.
Figure 3 portrays the scenario wherein the initial point shifts such that $s _ { 0 } \neq 0$ , with the legends denoting sixteen cases. Our GO-Math-L2O model (represented by dashed lines) outshines L2O-PA (solid lines) in all instances, asserting its superior robustness.
The observations in Figure 4 concern the OOD scenario with function shifting such that $F'(x) = F(x + t)$. The optimal values achieved by both methods deteriorate from $10^{-7}$ (as seen in Figure 3) to $10^{0}$. However, our GO-Math-L2O still outperforms L2O-PA in all cases. For example, when $t = \pm 10$, our model converges in around 20 steps, whereas L2O-PA fails to converge.

Abstract: Learning to optimize (L2O) is an emerging technique for solving mathematical optimization problems with learning-based methods. Despite great success in many real-world scenarios such as wireless communications, computer networks, and electronic design, existing L2O works lack theoretical demonstration of their performance and robustness in out-of-distribution (OOD) scenarios. We address this gap by providing comprehensive proofs. First, we prove a sufficient condition for a robust L2O model with homogeneous convergence rates over all In-Distribution (InD) instances. Assuming an L2O model achieves robustness for an InD scenario, and based on our proposed methodology of aligning OOD problems to InD problems, we demonstrate that the L2O model's convergence rate in OOD scenarios deteriorates by an equation of the L2O model's input features. Moreover, we propose an L2O model with a concise gradient-only feature construction and a novel gradient-based history modeling method. Numerical simulations demonstrate that our proposed model outperforms the state-of-the-art baseline in both InD and OOD scenarios and achieves up to 10× convergence speedup. The code of our method can be found at https://github.com/NetX-lab/GoMathL2O-Official.
# 1 Introduction
Due to their intrinsic hierarchical nature, material properties depend on the coupling of various domains, among others, materials chemistry, defect engineering, microstructure physics, and mechanical engineering. This often requires multiscale simulation approaches to adequately model materials with different communities representing the different scales. Consequently, the goal of multiscale simulations in materials science is to bridge the gap between the macroscale relevant for applying these materials and the quantum mechanical ab initio approach of a universal parameter-free description of materials at the atomic scale.
One of these multiscale simulation approaches that has recently gained popularity is coupling the electronic-structure scale and atomic scale by training machine-learned interatomic potentials (MLIP)1. Such a training of a MLIP typically consists of the generation of a reference dataset of electronic structure simulations, the fitting of the MLIP with a specialized fitting code, typically written in Python based on machine learning frameworks like pytorch and tensorflow, and the validation of the MLIP with atomistic simulations, often with widespread software such as the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)2 or the atomic simulation environment (ASE)3, both of which also provide Python interfaces. Consequently, it requires expertise in electronic structure simulations, in fitting the MLIP, as well as in interatomic potential simulation, with the corresponding simulation and fitting codes being developed by different communities4. The resulting challenge of managing simulation codes from different communities in a combined study of hundreds or thousands of simulations has led to the development of a number of Workflow Management Systems (WfMS). Similarly, high-throughput screening studies, which also couple large numbers of simulations executed with simulation codes at different scales, with different computational costs, and developed from different communities, benefit from WfMS.
In this context, a scientific workflow is commonly defined as the reproducible protocol of a series of process steps, including the transfer of information between them 5,6. This can be visualized as a graph with the nodes referencing the computational tools and the edges the information transferred between those nodes. Correspondingly, a WfMS is a software tool to orchestrate the construction, management, and execution of the workflow7. The advantages of using a WfMS are: (1) Automized execution of the workflow nodes on high-performance computing (HPC) clusters; (2) improved reproducibility, documentation, and distribution of workflows based on a standardized format; (3) userfriendly interface for creating, editing, and executing workflows; (4) interoperability of scientific software codes; (5) orchestration of high-throughput studies with a large number of individual calculations; (6) out-of-process caching of the data transferred via the edges of the workflow and storage of the final results; (7) interfaces to community databases for accessing and publishing data6. As a consequence, using a WfMS abstracts the technical complexity, and the workflow centers around the scientific complexity.
In contrast to WfMS in other communities like BioPipe8, which defines workflows in the Extensible Markup Language (XML), or SnakeMake9, NextFlow10 and Common Workflow Language11, which introduce their own workflow languages, many WfMS in the computational materials science community use Python as the workflow language12–23. Using a programming language to define workflows has the benefit that flow control elements, like loops and conditionals, are readily available as basic features of the language, which is not the case for static languages such as XML (more on this in Sec. 1 and the supporting information). Furthermore, the choice of Python in the field of computational materials science has three additional advantages: (1) the Python programming language is easy to learn as its syntax is characterized by very few rules and special cases, resulting in better readability compared to most workflow languages and a large number of users in the scientific community, (2) the improved computational efficiency of transferring large amounts of small data objects between the different workflow steps in-memory, compared to file-based input and output (IO), and (3) a large number of scientific libraries for the Python programming language, including many for machine learning, materials science and related domain sciences.
The increasing number of WfMS being developed in the computational materials science community and beyond led to the development of benchmarks implementing the same workflow in different WfMS24 and the extension of the FAIR (Findable, Accessible, Interoperable, and Reusable) principles to FAIR workflows7. However, the interoperability between different WfMS remains challenging, even within the subgroup of WfMS that use Python as the workflow language. For this specific case, three levels of interoperability can be identified: (1) the same scientific Python functions are shared between multiple WfMS, e.g., parsers for the input and output files of a given simulation code, (2) the
Fig. 1 The Python Workflow Definition (PWD) consists of three components: a conda environment, a Python module, and a JSON workflow representation. The three Workflow Management Systems AiiDA, jobflow, and pyiron all support both importing and exporting to and from the PWD.
Python functions representing the nodes and the corresponding edges are shared as a template, so that the same workflow can be executed with multiple WfMS and (3) the workflow template, including the intermediate results of the workflow, e.g., the inputs and outputs of each node, is shared.
In the following, the Python Workflow Definition (PWD) for directed acyclic graphs (DAG) and the corresponding Python interface25 are introduced. They implement the second level of interoperability for the following three WfMS: AiiDA 12,13,26, jobflow15, and pyiron19. The interoperability of the PWD is demonstrated in three examples: (1) The coupling of Python functions, (2) the calculation of an energy-versus-volume curve with the Quantum ESPRESSO Density Functional Theory (DFT) simulation code27,28 and (3) the benchmark file-based workflow for a finite element simulation introduced in Ref. 24. These three examples highlight the application of the PWD to pure Python workflows, file-based workflows based on calling external executables with file transfer between them, and mixed workflows that combine Python functions and external executables.
# 2 Python Workflow Definition
Following the goal of separating technical complexity from scientific complexity, our suggestion for a PWD consists of three parts: (1) The software dependencies of the workflow are specified in a conda environment file, so all dependencies can be installed using the conda package manager, which is commonly used in the scientific community29. (2) Additional Python functions, which represent the nodes in the workflow graph, are provided in a separate Python module. (3) Finally, the workflow graph with nodes and edges is stored in the JavaScript Object Notation (JSON) with the nomenclature inspired by the Eclipse Layout Kernel (ELK) JSON format 30. This is illustrated in Fig. 1, together with the three WfMS currently supporting the PWD. If all the involved scientific functionalities are already available within preexisting conda packages, the Python module (part 2) is not required. Still, while an increasing number of open-source simulation codes and utilities for atomistic simulations are available on conda for different scientific domains 29, in most cases, additional Python functions are required. These functions are typically stored in the Python module.
As a first simple example workflow, the addition of the product and quotient of two numbers, $c = a / b + a \cdot b$, and the subsequent squaring of this sum, $c^2$, is represented in the PWD. To illustrate the coupling of multiple Python functions, this computation is split into three Python functions: a get_prod_and_div() function to compute the product and quotient of two numbers, a get_sum() function for the summation, and a get_square() function to raise the number to the power of two:
def get_prod_and_div(
    x: float = 1.0, y: float = 1.0
) -> dict[str, float]:
    return {"prod": x * y, "div": x / y}

def get_sum(x, y):
    return x + y

def get_square(x):
    return x ** 2
It is important to note here that the Python functions are defined independently of a specific WfMS, so they can be reused with any WfMS or even without one. Furthermore, the Python functions highlight different levels of complexity supported by the PWD: The get_prod_and_div() function returns a dictionary with two output variables, with the keys "prod" and "div" referencing the product and quotient of the two input parameters. In contrast, the summation function get_sum() takes two input variables and returns only a single output, which is then fed into the get_square() function that returns the final result. In addition, the get_prod_and_div() function uses default parameter values and type hints, which are optional features of the Python programming language supported by the PWD to improve the interoperability of the workflow. While the computation of the product and quotient of two numbers could be done in two separate functions, the purpose here is to demonstrate the implementation of a function with more than one return value. Another example of such a function could be a matrix diagonalization function that returns the eigenvalues and eigenvectors. The supplementary information provides a more in-depth discussion of how function returns are resolved to an unambiguous mapping in the graph.
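Another multi-output function can be sketched as follows (a hypothetical example of ours, not part of the arithmetic workflow): the dictionary keys returned by the function become the source ports of the corresponding node in the workflow graph.

```python
# Hypothetical multi-output function: the keys "min" and "max" would
# become the source ports of this node in the PWD graph.
def get_min_and_max(values: list[float]) -> dict[str, float]:
    return {"min": min(values), "max": max(values)}

print(get_min_and_max([3.0, 1.0, 2.0]))  # → {'min': 1.0, 'max': 3.0}
```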
As a demonstration, the Python functions get_prod_and_div(), get_sum() and get_square() are stored in a Python module named workflow.py. In addition, as these functions have no dependencies other than the Python standard library, the conda environment, environment.yml, is sufficiently defined by specifying the Python version:
channels:
  - conda-forge
dependencies:
  - python=3.12
The conda-forge community channel is selected as the package source as it is freely available and provides a large number of software packages for materials science and related disciplines29. For other examples, e.g., the calculation of the energy-versus-volume curve with Quantum ESPRESSO (see below), the conda environment would contain the software dependencies of the workflow, including the simulation code and additional utilities like parsers. It is important to note that the combination of the Python module and the conda environment already addresses the requirements for the first level of interoperability defined above. As the scientific Python functions are defined independently of any workflow environment, they can be used with any WfMS that supports Python functions as nodes.
Fig. 2 The arithmetic workflow computes the sum of the product and quotient of two numbers. The red nodes of the workflow graph denote inputs, the orange the outputs, and the blue nodes the Python functions for the computations. The labels of the edges denote the data transferred between the nodes.
The limitation of the first level of interoperability is the loss of connection of the individual functions, that is, which output of one function is reused as input of another function. In terms of the workflow as a graph with the Python functions representing the nodes of the graph, these connections are the edges between the nodes. To define the workflow, we wrap the individual function calls in another function to which we can then pass our input values and from which we retrieve our output value:
def workflow(x: float = 1, y: float = 2):
    tmp_dict = get_prod_and_div(x=x, y=y)
    tmp_sum = get_sum(
        x=tmp_dict["prod"],
        y=tmp_dict["div"],
    )
    return get_square(x=tmp_sum)

result = workflow(x=1, y=2)
We pass the inputs x=1 and y=2 to our workflow function, in which the computation of the product and quotient with the get_prod_and_div() function is executed first. This is then followed by a summation of the two results with the get_sum() function, which returns a single output value that is then fed into the get_square() function. The corresponding graph is visualized in Fig. 2.
In the next step, the resulting graph is serialized to an internal JSON representation with the nomenclature and overall structure inspired by the ELK JSON format30, for sharing the workflow between different WfMS. While human-readable, the JSON format is not intended for direct user interaction, i.e., generating or modifying the JSON with a text editor; rather, it is primarily focused on enabling interoperability of WfMS and long-term storage. For the construction of a workflow, we recommend using one of the existing WfMS and afterwards exporting the workflow to the PWD. The resulting PWD JSON for the arithmetic workflow is:
{
  "version": "1.0.0",
  "nodes": [
    {"id": 0, "type": "function", "value": "workflow.get_prod_and_div"},
    {"id": 1, "type": "function", "value": "workflow.get_sum"},
    {"id": 2, "type": "function", "value": "workflow.get_square"},
    {"id": 3, "type": "input", "value": 1, "name": "x"},
    {"id": 4, "type": "input", "value": 2, "name": "y"},
    {"id": 5, "type": "output", "name": "result"}
  ],
  "edges": [
    {"source": 3, "sourcePort": null, "target": 0, "targetPort": "x"},
    {"source": 4, "sourcePort": null, "target": 0, "targetPort": "y"},
    {"source": 0, "sourcePort": "prod", "target": 1, "targetPort": "x"},
    {"source": 0, "sourcePort": "div", "target": 1, "targetPort": "y"},
    {"source": 1, "sourcePort": null, "target": 2, "targetPort": "x"},
    {"source": 2, "sourcePort": null, "target": 5, "targetPort": null}
  ]
}
On the first level, the PWD JSON format defines the workflow metadata given by the version number, nodes and edges:
• The version number (of the PWD JSON format) is given by three non-negative integers combined in a string, to enable semantic versioning. Minor changes and patches which do not affect the backwards compatibility are indicated by increasing the second and third numbers, respectively. In contrast, an increase in the first number indicates changes that are no longer backwards compatible.
• The nodes section is (in this example) a list of six items: The three Python functions defined in the workflow.py Python module, the two input parameters for the workflow, in this case x=1 and y=2, and the output data node. Each node is defined as a dictionary consisting of an "id", a "type", and a "value". In case of the "input" and "output" data nodes, the "name" is an identifier that denotes how the inputs and outputs are exposed by the overall workflow. Moreover, for "input" data nodes, the "value" is an optional default value (if provided during workflow construction). On the other hand, for "function" nodes, the "value" entry contains the module and function name. The usage of the dictionary format allows future extensions by adding additional keys to the dictionary for each node.
• In analogy to the nodes, the edges are also stored as a list of dictionaries. The first two edges connect the input parameters with the get_prod_and_div() function. Each edge is defined based on the source node "source", the source port "sourcePort", the target node "target" and the target port "targetPort". As the input data nodes do not have associated ports, their source ports are null. In contrast, the target ports are the input parameters x and y of the get_prod_and_div() function. The PWD JSON representation also contains two edges that connect the two outputs of the get_prod_and_div() function to the inputs of the get_sum() function. In analogy to the target port, the source port specifies the output dictionary key to select from the output. If no source port is available (typically because a function does not return a dictionary containing keys that can serve as source ports), then the source port is set to null and, in that case, the entire return value of the function (possibly also a tuple, list, dictionary or any other Python data type) is transferred to the target node. This is the case for the fifth edge that maps the return value of the get_sum() function to the "x" input of the get_square() function. Finally, its result is exposed as the global "result" output of the workflow, the last edge in the graph. As the get_square() function returns its value directly, and the target of the edge is an output data node (which does not define a port), both "targetPort" and "sourcePort" are null in this edge.
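The semantic-versioning rule for the PWD JSON format can be sketched as a simple compatibility check; is_compatible() is a hypothetical helper of ours, not part of the PWD interface, and assumes compatibility whenever the major version number matches:

```python
# Hypothetical helper (not part of the PWD spec): a reader built for one
# PWD major version can load any file with the same major version, since
# only a change in the first number breaks backwards compatibility.
def is_compatible(file_version: str, reader_version: str) -> bool:
    file_major = int(file_version.split(".")[0])
    reader_major = int(reader_version.split(".")[0])
    return file_major == reader_major

print(is_compatible("1.0.0", "1.2.3"))  # → True (same major version)
print(is_compatible("2.0.0", "1.2.3"))  # → False (major version bump)
```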
By using a list of dictionaries for both the nodes and edges, as well as a dictionary at the first level, the PWD JSON format is extensible, and additional metadata beyond the version number can be added in the future. As the focus of this first version of the PWD is the interoperability between the different WfMS, apart from the node types (useful for parsing and validation), no additional metadata is included in the PWD JSON format. To assist the users in analyzing the JSON representation of the PWD, the PWD Python interface provides a plot() function to visualize the workflow graph. The plot() function is introduced in the supplementary material.
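Taken together, the nodes and edges above contain everything needed to execute the workflow. The following minimal interpreter is a sketch of ours illustrating how a WfMS might evaluate such a graph; it is not the PWD reference implementation, and the FUNCS table stands in for importing the workflow.py module:

```python
import json

# Stand-ins for the functions in the workflow.py module.
FUNCS = {
    "workflow.get_prod_and_div": lambda x, y: {"prod": x * y, "div": x / y},
    "workflow.get_sum": lambda x, y: x + y,
    "workflow.get_square": lambda x: x ** 2,
}

WORKFLOW_JSON = """
{
  "version": "1.0.0",
  "nodes": [
    {"id": 0, "type": "function", "value": "workflow.get_prod_and_div"},
    {"id": 1, "type": "function", "value": "workflow.get_sum"},
    {"id": 2, "type": "function", "value": "workflow.get_square"},
    {"id": 3, "type": "input", "value": 1, "name": "x"},
    {"id": 4, "type": "input", "value": 2, "name": "y"},
    {"id": 5, "type": "output", "name": "result"}
  ],
  "edges": [
    {"source": 3, "sourcePort": null, "target": 0, "targetPort": "x"},
    {"source": 4, "sourcePort": null, "target": 0, "targetPort": "y"},
    {"source": 0, "sourcePort": "prod", "target": 1, "targetPort": "x"},
    {"source": 0, "sourcePort": "div", "target": 1, "targetPort": "y"},
    {"source": 1, "sourcePort": null, "target": 2, "targetPort": "x"},
    {"source": 2, "sourcePort": null, "target": 5, "targetPort": null}
  ]
}
"""

def resolve(results, edge):
    """Select the value carried by an edge, honoring its source port."""
    value = results[edge["source"]]
    if edge["sourcePort"] is not None:
        value = value[edge["sourcePort"]]
    return value

def evaluate(graph):
    nodes = {n["id"]: n for n in graph["nodes"]}
    edges = graph["edges"]
    # Input nodes are available immediately; function nodes run once all
    # of their incoming edges can be resolved (the graph must be a DAG).
    results = {i: n["value"] for i, n in nodes.items() if n["type"] == "input"}
    pending = [i for i, n in nodes.items() if n["type"] == "function"]
    while pending:
        runnable = [
            i for i in pending
            if all(e["source"] in results for e in edges if e["target"] == i)
        ]
        if not runnable:
            raise ValueError("workflow graph is not a directed acyclic graph")
        for i in runnable:
            kwargs = {
                e["targetPort"]: resolve(results, e)
                for e in edges if e["target"] == i
            }
            results[i] = FUNCS[nodes[i]["value"]](**kwargs)
            pending.remove(i)
    return {
        n["name"]: resolve(results, next(e for e in edges if e["target"] == i))
        for i, n in nodes.items() if n["type"] == "output"
    }

print(evaluate(json.loads(WORKFLOW_JSON)))  # → {'result': 6.25}
```

With x=1 and y=2, the product is 2, the quotient 0.5, their sum 2.5, and the squared sum 6.25, matching the workflow() function defined earlier.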
# 3 Implementation
The focus of the PWD is to enable the interoperability between different WfMS. Thus, it is recommended that users always use one of the supported WfMS to create the workflow and export it to the PWD using the PWD Python library. Afterwards, the workflow can be imported into a different WfMS, the input parameters can be modified, and computational resources can be assigned before the workflow is executed. In the following, the same workflow introduced above is defined in AiiDA, jobflow, and pyiron. This highlights the similarities between these Python-based WfMS, which all use the Python programming language as their workflow language, with the selection of WfMS being based on the authors' experience. While this section covers the export of the workflow from the WfMS to the PWD, the import is discussed in the application section below. Finally, interfaces for additional WfMS are planned in the future. Full integration will be achieved with PWD support becoming an integral part of the WfMS itself and the PWD package possibly becoming a dependency.
# 3.1 AiiDA
The “Automated Interactive Infrastructure and Database for Computational Science” (AiiDA) 12,13,26 is a WfMS with a strong focus on data provenance and high-throughput performance. AiiDA provides checkpointing, caching, and error-handling features for dynamic workflows with full data provenance (via an SQL database), among other features. While it originated from the field of computational materials science31, it has recently been extended to several other fields (see e.g. the codes supported in the AiiDA plugin registry 32) and to experiments 33. In the following code snippets, we will be using the WorkGraph, a recently added and actively developed new AiiDA workflow component34. The WorkGraph functions like a canvas for workflow creation to which a user can dynamically add Tasks, that is, workflow components (also called “nodes” in a graph-based representation of a workflow), and connect them with Links (the “edges” in the PWD). This approach to workflow creation offers the flexibility of dynamically chaining workflow components together “on-the-fly”, an approach especially crucial for the rapid prototyping common in scientific environments. Implementation of the arithmetic workflow is shown in the following snippets. It starts with the import of relevant modules:
import python_workflow_definition as pwd
from aiida import orm, load_profile
from aiida_workgraph import WorkGraph, task
from arithmetic_workflow import (
    get_sum as _get_sum,
    get_prod_and_div as _get_prod_and_div,
    get_square as _get_square,
)

load_profile()
We first import the python_workflow_definition module, which contains the necessary code to import from and export to the general Python workflow definition. In addition, from the AiiDA core module, we import AiiDA’s Object-Relational Mapper (ORM), as well as the load_profile function. The ORM module allows mapping Python data types to the corresponding entries in
AiiDA’s underlying SQL database, and calling the load_profile function ensures that an AiiDA profile (necessary for running workflows via AiiDA) is loaded. From the aiida-workgraph module, we import the main WorkGraph class, as well as the task decorator. Lastly, we import the Python functions from the arithmetic_workflow module.
To convert the pure Python functions from the arithmetic workflow into AiiDA WorkGraph workflow components, we wrap them with the task function (decorator):
get_prod_and_div = task(outputs=["prod", "div"])(
    _get_prod_and_div
)
get_sum = task()(_get_sum)
get_square = task()(_get_square)
As the get_prod_and_div function returns a dictionary with multiple outputs, we pass this information to the task function via the outputs argument, such that we can reference them at a later stage (they will become the ports in the PWD JSON). Without the outputs argument, the whole output dictionary {"prod": x * y, "div": x / y} would be wrapped as one port with the default "result" key. This is what actually happens to the single return value of the get_sum() function (as further outlined in the supplementary information, we follow a similar approach to resolve the “ports” entries in the “edges” of the PWD). Next follows the instantiation of the WorkGraph:
wg = WorkGraph("arithmetic")

This then allows adding the previously defined Tasks:

get_prod_and_div_task = wg.add_task(
    get_prod_and_div,
    x=orm.Float(1.0),
    y=orm.Float(2.0),
)
get_sum_task = wg.add_task(
    get_sum,
    x=get_prod_and_div_task.outputs.prod,
    y=get_prod_and_div_task.outputs.div,
)
get_square_task = wg.add_task(
    get_square,
    x=get_sum_task.outputs.result,
)
Here, we wrap the inputs as AiiDA ORM nodes to ensure they are registered as nodes when exporting to the PWD. Further, in the get_sum_task, the outputs of the previous get_prod_and_div_task are passed as inputs. Note that at this stage, the workflow has not been run, and these output values do not exist yet. In WorkGraph, such outputs are represented by a Socket that serves as a placeholder for future values and already allows linking them to each other in the workflow:
In [1]: print(get_prod_and_div_task.outputs.prod)
Out[1]: SocketAny(name="prod", value=None)
Alternatively, adding tasks to the WorkGraph and linking their outputs can also be done in two separate steps, shown below for linking the get_prod_and_div_task and get_sum_task:
get_sum_task = wg.add_task(
    get_sum,
)
wg.add_link(
    get_prod_and_div_task.outputs.prod,
    get_sum_task.inputs.x,
)
wg.add_link(
    get_prod_and_div_task.outputs.div,
    get_sum_task.inputs.y,
)
Lastly, the JSON file containing the PWD can be written to disk via:
pwd.aiida.write_workflow_json(
    wg=wg,
    file_name="arithmetic.json",
)
# 3.2 jobflow
Jobflow15 was developed to simplify the development of high-throughput workflows. It uses a decorator-based approach to define Jobs that can be connected to form complex workflows (Flows). Jobflow is the workflow language of the workflow library atomate235, designed to replace atomate36, which was central to the development of the Materials Project37 database.
First, the job decorator, which allows the creation of Job objects, and the Flow class are imported. In addition, the PWD Python module and the functions of the arithmetic workflow are imported in analogy to the previous example.
from jobflow import job, Flow
import python_workflow_definition as pwd
from arithmetic_workflow import (
    get_sum as _get_sum,
    get_prod_and_div as _get_prod_and_div,
    get_square as _get_square,
)
Using the job object decorator, the imported functions from the arithmetic workflow are transformed into jobflow Jobs. These Jobs can delay the execution of Python functions and can be chained into workflows (Flows). A Job can return serializable outputs (e.g., a number, a dictionary, or a Pydantic model) or a so-called Response object, which enables the execution of dynamic workflows where the number of nodes is not known prior to the workflow's execution. As jobflow itself is only a workflow language, the workflows are typically executed on high-performance computers with a workflow manager such as Fireworks38 or jobflow-remote39. For smaller and test workflows, simple linear, non-parallel execution of the workflow graph can be performed with jobflow itself. All outputs of individual jobs are saved in a database. For high-throughput applications, typically, a MongoDB database is used. For testing and smaller workflows, a memory database can be used instead. In Fireworks, jobflow's predecessor in the Materials Project infrastructure, this option did not exist, which was a significant drawback.
get_prod_and_div = job(_get_prod_and_div)
get_sum = job(_get_sum)
get_square = job(_get_square)

prod_and_div = get_prod_and_div(x=1.0, y=2.0)
tmp_sum = get_sum(
    x=prod_and_div.output.prod,
    y=prod_and_div.output.div,
)
result = get_square(x=tmp_sum.output)

flow = Flow([prod_and_div, tmp_sum, result])
As before in the AiiDA example, the workflow has not yet been run. prod_and_div.output.div refers to an OutputReference object instead of the actual output.
Finally, after the workflow is constructed, it can be exported to the PWD using the PWD Python package to store the jobflow workflow in the JSON format.
pwd.jobflow.write_workflow_json(
    flow=flow,
    file_name="arithmetic.json",
)
# 3.3 pyiron
The pyiron WfMS was developed with a focus on rapid prototyping and up-scaling atomistic simulation workflows 19. It has since been extended to support simulation workflows at different scales, including the recent extension to experimental workflows40. Based on this generalization, the same arithmetic Python workflow is implemented in the pyiron WfMS. Starting with the import of the pyiron job object decorator and the PWD Python module, the functions of the arithmetic workflow are imported in analogy to the previous examples above.
from pyiron_base import job
import python_workflow_definition as pwd
from arithmetic_workflow import (
    get_sum as _get_sum,
    get_prod_and_div as _get_prod_and_div,
    get_square as _get_square,
)
Using the job object decorator, the imported functions from the arithmetic workflow are converted to pyiron job generators. These job generators can be executed like Python functions; still, internally, they package the Python function and corresponding inputs in a pyiron job object, which enables the execution on HPC clusters by assigning dedicated computing resources and provides the permanent storage of the inputs and outputs in the Hierarchical Data Format (HDF5). For the get_prod_and_div() function, an additional list of output parameter names is provided, which enables the coupling of the functions before execution, to construct the workflow graph.
get_sum = job(_get_sum)
get_prod_and_div = job(
    _get_prod_and_div,
    output_key_lst=["prod", "div"],
)
get_square = job(_get_square)
After the conversion of the Python functions to pyiron job generators, the workflow is constructed. The pyiron job generators are called just like Python functions; still, they return pyiron delayed job objects rather than the computed values. These delayed job objects are linked with each other by using a delayed job object as an input to another pyiron job generator. Finally, the whole workflow is only executed once the pull() function is called on the delayed pyiron object of the get_square() function. At this point, the delayed pyiron objects are converted to pyiron job objects, which are executed using the pyiron WfMS. In particular, the conversion to pyiron job objects enables the automated caching to the Hierarchical Data Format (HDF5) and the assignment of computing resources.
prod_and_div = get_prod_and_div(x=1.0, y=2.0)
tmp_sum = get_sum(
    x=prod_and_div.output.prod,
    y=prod_and_div.output.div,
)
result = get_square(x=tmp_sum)
For the example here, the workflow execution is skipped and the workflow is exported to the PWD using the PWD Python package to store the pyiron workflow in JSON format. The export command is implemented in analogy to the export commands for AiiDA and jobflow, taking a delayed pyiron object as an input in combination with the desired file name for the JSON representation of the workflow graph.
pwd.pyiron_base.write_workflow_json(
    delayed_object=result,
    file_name="arithmetic.json",
)
The implementation of the arithmetic workflow in pyiron demonstrates the similarities to AiiDA and jobflow.
# 4 Application
To demonstrate the application of the PWD beyond just the arithmetic example above, we consider a second workflow that describes the calculation of an energy-versus-volume curve with Quantum ESPRESSO. The energy-versus-volume curve is typically employed to calculate the equilibrium volume and the compressive bulk modulus for bulk materials. The workflow is illustrated in Fig. 3, with the red and orange nodes marking the inputs and outputs of the workflow, the blue nodes the Python functions, and the green nodes indicating Python functions that internally launch Quantum ESPRESSO simulations.

Fig. 3 Energy-versus-volume curve calculation workflow with Quantum ESPRESSO. Red boxes denote inputs, orange boxes outputs, blue boxes Python functions and green boxes calls to external executables.

The individual steps of the workflow are:
1. Based on the input of the chemical element, the lattice constant, and the crystal symmetry, the atomistic bulk structure is generated by calling the bulk structure generation function get_bulk_structure(). This function is obtained via the Atomistic Simulation Environment (ASE)3 and extended to enable the serialization of the atomistic structure to the JSON format using the OPTIMADE41 Python tools42.
2. The structure is relaxed afterwards with Quantum ESPRESSO to get an initial guess for the equilibrium lattice constant. Quantum ESPRESSO is written in FORTRAN and does not provide Python bindings, so that the communication is implemented in the calculate_qe() function by writing input files, calling the external executable, and parsing the output files.
3. Following the equilibration, the resulting structure is strained in the function generate_structures() with two compressive strains of $-10\%$ and $-5\%$ and two tensile strains of $5\%$ and $10\%$. Together with the initially equilibrated structure, this leads to a total of five structures.
4. Each structure is again evaluated with Quantum ESPRESSO to compute the energy of the strained structure.
5. After the evaluation with Quantum ESPRESSO, the calculated energy-volume pairs are collected in the plot_energy_volume_curve() function and plotted as an energy-versus-volume plot. The final plot is saved in a file named plot.png.

Compared to the previous arithmetic example, this workflow is
more advanced and not only illustrates one-to-one connections, in terms of one node being connected to another node, but also one-to-many and many-to-one connections. The latter two are crucial to construct the loop over different strains, compute the corresponding volume and energy pairs, and gather the results in two lists, one for the volumes and one for the energies, to simplify plotting. In addition, it highlights the challenge of workflows in computational materials science to couple Python functions for structure generation, modifications, and data aggregation with simulation codes that do not provide Python bindings and require file-based communication. Given the increased complexity of the workflow, the implementation for the individual WfMS is provided in the supplementary material. Instead, the following briefly highlights how the workflow, which was previously stored in the PWD, can be reloaded with the individual frameworks.
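The one-to-many and many-to-one logic of these steps can be sketched in plain Python; the function names mirror those above, but the bodies are illustrative stand-ins (a quadratic mock energy model replaces Quantum ESPRESSO, and the equilibrium volume is invented):

```python
def get_bulk_structure(element, a):
    # stand-in for ASE's bulk(): a cubic cell with volume a**3
    return {"element": element, "volume": a ** 3}

def calculate_qe(structure):
    # mock DFT call: quadratic energy around a hypothetical equilibrium volume
    v = structure["volume"]
    return {**structure, "energy": 0.01 * (v - 66.0) ** 2 - 5.0}

def generate_structures(structure, strain_lst):
    # one-to-many: scale the cell volume by each strain factor
    return [
        {"element": structure["element"],
         "volume": structure["volume"] * (1.0 + s)}
        for s in strain_lst
    ]

def plot_energy_volume_curve(results):
    # many-to-one: gather (volume, energy) pairs; the actual plotting is omitted
    volumes = [r["volume"] for r in results]
    energies = [r["energy"] for r in results]
    return volumes, energies

structure = get_bulk_structure("Al", a=4.05)
relaxed = calculate_qe(structure)  # stands in for the initial relaxation
strained = generate_structures(relaxed, [-0.10, -0.05, 0.0, 0.05, 0.10])
volumes, energies = plot_energy_volume_curve(
    [calculate_qe(s) for s in strained]
)
```

The strain list includes the unstrained cell, so five volume-energy pairs are gathered, matching the five structures described above.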
Starting with the AiiDA WfMS, the first step is to load the AiiDA profile and import the PWD Python interface. Afterwards, the workflow can be loaded from the JSON representation qe.json using the load_workflow_json() function. To demonstrate the capability of modifying the workflow parameters before the execution of the (re-)loaded workflow, we then modify the lattice constant of the get_bulk_structure() node to $4.05\,\mathrm{\AA}$. Similarly, one could also adapt the element, bulk structure, or strain list input parameters of the workflow. Finally, the workflow is executed by calling the run() function of the AiiDA WorkGraph object:
from aiida import orm, load_profile
import python_workflow_definition as pwd
load_profile()
wg = pwd.aiida.load_workflow_json(
    file_name="qe.json"
)
wg.tasks[0].inputs.a.value = orm.Float(4.05)
wg.run()
The same JSON representation qe.json of the workflow can also be loaded with the jobflow WfMS. Again, the jobflow WfMS and the PWD Python interface are imported. The JSON representation qe.json is loaded with the load_workflow_json() function. Afterwards, the lattice constant is adjusted to $4.05\,\mathrm{\AA}$ and the workflow is executed with the jobflow run_locally() function. We note that the same workflow could also be submitted to an HPC cluster; local execution is chosen here so that the provided code examples can be run directly.
from jobflow.managers.local import run_locally
import python_workflow_definition as pwd

flow = pwd.jobflow.load_workflow_json(
    file_name="qe.json"
)
flow[0].function_kwargs["a"] = 4.05
run_locally(flow)
In analogy to the AiiDA WfMS and the jobflow WfMS, the energy-versus-volume curve workflow can also be executed with the pyiron WfMS. Starting with the import of the PWD Python interface, the JSON representation qe.json of the workflow is again loaded with the load_workflow_json() function, followed by the adjustment of the lattice constant to $4.05\,\mathrm{\AA}$ by accessing the input of the first delayed job object. Finally, the last delayed job object’s pull() function is called to execute the workflow.
import python_workflow_definition as pwd
wf = pwd.pyiron_base.load_workflow_json(
    file_name="qe.json"
)
wf[0].input["a"] = 4.05
wf[-1].pull()
The focus of this second example is to highlight that a workflow stored in the PWD can be executed with all three workflow frameworks with minimally adjusted code. This not only applies to simple workflows consisting of multiple Python functions but also includes more complex logical structures like the one-to-many and many-to-one connections, covering any Directed Acyclic Graphs (DAG) topology. We remark, though, that in the current version the restriction to DAGs is also a limitation of the PWD, as it does not cover dynamic workflows, such as a while loop that adds additional steps until a given condition is fulfilled. Another challenge is the assignment of computational resources, like the assignment of a fixed number of CPU cores, as the wide variety of different HPC clusters with different availability of computing resources hinders standardization. As such, the user is required to adjust the computational resources via the WfMS after reloading the workflow graph. For this reason, the workflow is also not directly executed by the load_workflow_json() function, but rather the user can explore and modify the workflow and afterwards initiate the execution with any of the WfMS once the required computational resources are assigned.
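The concrete PWD JSON schema is not reproduced here, but the claim that any DAG topology can be covered is easy to illustrate: a graph stored as nodes plus keyword-argument edges can be executed in topological order with the standard library's graphlib. The node and edge names below are illustrative, not the PWD format:

```python
from graphlib import TopologicalSorter

def run_dag(nodes, edges, inputs):
    """nodes: {name: callable}; edges: {name: {kwarg: upstream name}}."""
    deps = {name: set(srcs.values()) for name, srcs in edges.items()}
    results = dict(inputs)
    for name in TopologicalSorter(deps).static_order():
        if name in results:  # plain input value, nothing to compute
            continue
        kwargs = {k: results[src] for k, src in edges[name].items()}
        results[name] = nodes[name](**kwargs)
    return results

# the arithmetic workflow from above, expressed as such a graph
nodes = {
    "prod_and_div": lambda x, y: {"prod": x * y, "div": x / y},
    "sum": lambda d: d["prod"] + d["div"],
    "square": lambda x: x * x,
}
edges = {
    "prod_and_div": {"x": "x0", "y": "y0"},
    "sum": {"d": "prod_and_div"},
    "square": {"x": "sum"},
}
results = run_dag(nodes, edges, inputs={"x0": 1.0, "y0": 2.0})
```

Because the executor relies only on a topological order, one-to-many and many-to-one connections need no special treatment; a cycle, by contrast, would raise an error, mirroring the DAG restriction discussed above.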
# 5 Compatibility to non-Python-based workflows
The two previous examples demonstrated Python-based workflows, which couple either solely Python functions or Python functions and external executables, wrapped by other Python functions that write the input files and parse the output files. Before Python-based WfMS, a number of previous WfMS were introduced, which couple simulation codes solely based on transferring files between the different steps of the workflow 8–11. To demonstrate that the PWD can also be applied to these file-based workflows, we implement the benchmark published in Ref.24 for file-based workflows in materials science in the PWD. The corresponding workflow is illustrated in Fig. 4.
As the file-based workflow for finite element simulations is already discussed in the corresponding publication24, it is only summarized here. A mesh is generated in the first pre-processing step, followed by the conversion of the mesh format in the second pre-processing step. Afterwards, the Poisson solver of the finite element code is invoked. Finally, in the postprocessing, the data is first visualized in a line plot, a TeX macro is generated, and a TeX document is compiled, resulting in the paper.pdf as the final output. To represent this file-based workflow in the PWD, each node is represented by a Python function. This Python function acts as an interface to the corresponding command line tool, handling the writing of the input files, calling of the command line tool and the parsing of the output files. In this specific case, which is purely based on external executables, the output files of one node are copied to be used as input files for the next node, and only the path to the corresponding file is transferred in Python. The Python function for the generate_mesh() node is given below:
Fig. 4 File-based finite element workflow from Ref. 24 implemented with the Python Workflow Definition (PWD). Red nodes denote inputs, orange nodes outputs, green nodes calls to external executables, and the labels on the edges the files and data transferred between them. Files are passed as path objects between the individual steps.
import os
from conda_subprocess import check_output
import shutil
def generate_mesh(
    domain_size: float, source_directory: str
) -> str:
    stage_name = "preprocessing"
    output_file_name = "square.msh"
    source_file_name = "unit_square.geo"
    os.makedirs(stage_name, exist_ok=True)
    source_file = os.path.join(
        source_directory, source_file_name
    )
    shutil.copyfile(
        source_file,
        os.path.join(stage_name, source_file_name),
    )
    check_output(
        [
            "gmsh",
            "-2",
            "-setnumber",
            "domain_size",
            str(domain_size),
            source_file_name,
            "-o",
            output_file_name,
        ],
        prefix_name=stage_name,
        cwd=stage_name,
        universal_newlines=True,
    )
    return os.path.abspath(
        os.path.join(stage_name, output_file_name)
    )
The input parameters of the generate_mesh() function are the domain_size and the source_directory, with the source_directory referencing the location of additional input files. Following the definition of a number of variables, a directory is created and the source files are copied as templates to this directory. Then the external executable is called. Here we use the conda_subprocess package43, which allows us to execute the external executable in a separate conda environment; this was a requirement of the file-based benchmark workflow24. Finally, the path to the output file "square.msh" is returned as the result of the Python function.
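The same stage-and-call pattern can be sketched with only the standard library (conda_subprocess is swapped for subprocess, and a tiny `python -c` one-liner stands in for gmsh; all names here are illustrative):

```python
import os
import subprocess
import sys
import tempfile

def run_tool(stage_name, output_file_name, payload):
    # create the staging directory for this pipeline step
    os.makedirs(stage_name, exist_ok=True)
    out_path = os.path.join(stage_name, output_file_name)
    # the stand-in "executable" simply writes its argument to the output file
    subprocess.check_output(
        [
            sys.executable, "-c",
            "import sys; open(sys.argv[1], 'w').write(sys.argv[2])",
            out_path, payload,
        ],
        universal_newlines=True,
    )
    # return the output path, as generate_mesh() does for square.msh
    return os.path.abspath(out_path)

stage = os.path.join(tempfile.mkdtemp(), "preprocessing")
mesh_path = run_tool(stage, "square.msh", "domain_size=2.0")
```

Only the returned path travels through the workflow engine; the file itself stays on disk for the next step to consume.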
While the definition of a Python function for each node adds some overhead, it is important to emphasize that the Python functions were only defined once, independently of the different WfMS, and afterwards the same Python functions were used in all three WfMS. Again, the step-by-step implementation in the three different WfMS and the export to the PWD is available in the supplementary material. This third example again highlights the universal applicability of the PWD, as it covers both Python-based and file-based workflows.
Finally, to increase the impact of the PWD and extend its generality beyond the three WfMS discussed in this work, we provide a first proof-of-concept implementation to convert a PWD JSON file to the Common Workflow Language11. In this case, each input and output of every node is serialized using the built-in pickle serialization of the Python standard library. The resulting pickle files are then transferred from one node to another through CWL. To convert a given PWD JSON file, use the write_workflow() function from the CWL submodule of the PWD Python interface:
import python_workflow_definition as pwd

pwd.cwl.write_workflow(
    file_name="workflow.json"
)
This Python function creates the corresponding CWL files to represent the individual nodes, as well as the resulting workflow in the CWL, which can then be executed by any CWL engine (given that the necessary dependencies are available on the system). Still, it is important to emphasize that in contrast to the interfaces to the Python-based WfMS, the interface to the CWL is a one-way conversion only from the PWD to the CWL, not the other way around. Furthermore, by converting the workflow to the CWL, the performance benefit of handling the data on the edges of the workflow inside the Python process is lost as the CWL interface is based on file-based communication. Lastly, another notable concept close to the PWD is the graph-based Abstract Syntax Tree (AST)44 representation of the Python standard library. For brevity this comparison is discussed in the supplementary information.
# 1 Introduction
Instruction tuning has emerged as a powerful paradigm to improve the performance and alignment of large language models (LLMs) by fine-tuning them on instruction-response pairs [2, 11, 32, 51, 52]. Recent studies indicate that data quality, rather than quantity alone, is crucial for substantial performance gains [2, 30, 39, 65]. Consequently, recent research has focused on automatically selecting informative subsets of training data, guided by selection metrics such as data diversity and data quality [4, 10, 46, 58, 63].

Figure 1: Overview of iterative data selection. Step 1: standard model training at iteration $t$ (forward propagation on the mini-batch $S_t$ with losses computed under $M_{\theta_{t-1}}$, followed by backward propagation yielding $M_{\theta_t}$); Step 2: sample utility estimation, contrasting (a) our inference-free estimation with (b) traditional inference over the full dataset; Step 3: selection of $S_{t+1}$ based on utility scores.

However, since these methods do not directly leverage feedback from the model, they fail to dynamically adapt data selection to the model’s evolving state and specific learning needs throughout training.
In response, recent efforts have shifted toward model-aware data selection, which explicitly utilizes model-derived signals to dynamically identify informative training examples [50, 55]. These model-aware methods broadly fall into two categories: non-iterative and iterative. Non-iterative methods select data once based on initial model predictions before iterative training [32, 59]. However, since they do not adapt to the model’s evolution during training, their effectiveness is inherently limited [60]. In contrast, iterative methods interleave model fine-tuning and data selection across multiple rounds, iteratively choosing new informative samples based on the model’s latest feedback [59]. As shown in Figure 1, Step 2-(b), most existing iterative model-aware methods rely on explicit model inference to assess the utility of samples. Specifically, after each training iteration, these methods perform inference on every sample in the training set to derive feedback signals (e.g., model uncertainty scores) for utility estimation. Although effective at adapting data selection to the model’s evolving state, repeatedly performing full-dataset inference significantly increases computational overhead. For example, the recent IFD method [32] spends approximately 98 GPU-hours selecting data from a pool of only $600K$ samples in a single round.
This predicament leads to a natural research question: Can we retain the benefits of iterative model-aware data selection without repeatedly performing costly full-dataset inference? In other words, can we effectively determine “select what to learn next” by exclusively utilizing information already computed during standard training, without any additional model inference overhead?
In this work, we posit that the answer is yes. As shown in Figure 1- Step 1 , our key insight is that during standard training, the model first conducts a forward propagation step using the current mini-batch of samples, computes the per-sample losses based on its predictions, and subsequently updates its parameters via backward propagation. Crucially, this training process naturally produces a per-sample loss for each training instance in the mini-batch. Intuitively, this loss indicates how challenging a sample is for the model—higher losses reflect greater difficulty and thus greater potential informativeness for future learning. Hence, these training-time losses inherently serve as valuable indicators of a sample’s utility. Indeed, they provide an effective proxy for explicit utility metrics (e.g., model uncertainty) typically obtained through costly, separate inference steps [22].
If we can cleverly harness these inherent training signals across the whole dataset, we could estimate the utility of each sample without additional inference (inference-free) (see Figure 1- Step 2-(a) ). This idea – leveraging training-time loss signals to guide data selection – offers the potential to eliminate the full-dataset inference stage while still adapting to the model’s training state.
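As a minimal sketch of this idea (the sample identifiers and loss values below are illustrative, and LEAD's actual utility function, IDU, is more elaborate than a plain top-k over raw losses), reusing recorded training losses to pick the next batch amounts to:

```python
# Hedged sketch: treat per-sample losses already recorded during training
# as utility scores, instead of running a separate full-dataset inference.
def select_next_batch(loss_by_sample, k):
    """Pick the k samples with the highest recorded training loss."""
    ranked = sorted(loss_by_sample, key=loss_by_sample.get, reverse=True)
    return ranked[:k]

# illustrative losses observed during the latest training iteration
losses = {"s1": 0.2, "s2": 1.7, "s3": 0.9, "s4": 2.4}
next_batch = select_next_batch(losses, k=2)  # → ["s4", "s2"]
```

No forward pass beyond the one already performed for training is needed to produce these scores.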
Challenges. Realizing this idea in practice is non-trivial.
First, although using training-time losses allows us to avoid explicit inference, a subtle yet fundamental issue arises due to a timing misalignment. Specifically, as shown in Figure 1- Step 1 , the training loss observed at iteration $t$ reflects the model’s performance before updating parameters (model state $M _ { \theta _ { t - 1 } } )$ ), whereas the utility of selecting samples ideally should consider their usefulness after the parameter update (i.e., $M _ { \theta _ { t } }$ at iteration $t + 1$ ). This temporal mismatch means that naively reusing pre-update loss signals may not accurately reflect true sample utility after the next parameter update. We term this issue as the temporal mismatch challenge (C1). Second, raw loss signals can be noisy or unstable – they fluctuate from one update to the next due to randomness (e.g., varying batch composition) and the non-stationary nature of training, thus naively trusting instantaneous loss values might lead to suboptimal choices. This issue highlights the instability of loss signals challenge (C2).
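Challenge C2 motivates smoothing; a minimal sketch, assuming a hypothetical smoothing constant alpha, is an exponential moving average over a sample's loss history:

```python
# Hedged sketch for C2: damp per-iteration noise in a sample's loss signal
# with an exponential moving average (alpha is a hypothetical constant).
def smooth_losses(loss_history, alpha=0.3):
    smoothed = loss_history[0]
    for loss in loss_history[1:]:
        smoothed = alpha * loss + (1 - alpha) * smoothed
    return smoothed
```

A noisy sequence such as [1.2, 0.4, 1.1, 0.5] is pulled toward its running average instead of jumping with every mini-batch, which is the stabilizing effect exploited later by IDU.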
Third, even if we successfully eliminate separate inference steps, individually estimating utility and selecting informative samples remains inefficient for large-scale datasets (e.g., containing millions of samples). We refer to this as the sample-level selection efficiency challenge (C3). Thus, we need an effective mechanism that can rapidly narrow down candidate samples while prioritizing those most likely to substantially improve the model.
Our Methodology: Iterative Data Selection with Inference-free Utility Estimation. To address the above challenges, we propose LEAD, a theoretically-grounded iterative data selection framework that integrates seamlessly into the model training loop, accurately estimating sample utility without incurring additional inference overhead. The core theoretical insight behind inference-free yet accurate utility estimation lies in effectively addressing two critical challenges: (C1) the temporal mismatch between loss computation and parameter updates, and (C2) the inherent instability of instantaneous loss signals.
Figure 2: A High-level Overview of LEAD.
To achieve this, we propose a novel sample utility estimation function called Instance-Level Dynamic Uncertainty (IDU). IDU explicitly implements the Estimate step depicted in Figure 1- Step 2-(a) by combining three naturally available training signals: (1) the current training loss for each sample, (2) a gradient-based approximation, derived from gradient correlation approximations, to anticipate loss changes at the next parameter update (addressing C1), and (3) historical loss trends via exponential smoothing to reduce random noise and improve stability (addressing C2). Importantly, IDU is computed entirely from training-time signals naturally available during model updates (losses and logits), thus incurring no additional inference overhead. Finally, we construct a Lagrangian function and utilize complementary slackness conditions to rigorously derive optimal parameters for IDU, ensuring both theoretical soundness and practical effectiveness.
Guided by this theoretical foundation, our LEAD framework employs a practical coarse-to-fine data selection strategy (Figure 2).
Stage 1: Coarse-level Cluster Selection. Recall our third challenge (C3) – efficient candidate selection at scale. To address this, we first partition the dataset offline into clusters based on two widely-used metrics: (1) instruction-following difficulty, measuring how challenging each instruction is for the model [32], and (2) task-level similarity, grouping semantically related instructions [34]. This clustering step is performed only once per dataset. During training, LEAD employs a multi-armed bandit (MAB) algorithm [54] to dynamically identify and prioritize clusters likely to yield higher rewards – clusters containing samples with greater potential to significantly enhance the model’s performance (addressing C3).
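The exact bandit update used by LEAD is not reproduced here; the sketch below shows a generic EXP3-style scheme (gamma and the reward scaling are hypothetical) in which cluster weights grow with observed reward:

```python
import math
import random

def select_cluster(weights, gamma=0.1):
    """Mix weight-proportional exploitation with uniform exploration."""
    total = sum(weights)
    probs = [(1 - gamma) * w / total + gamma / len(weights) for w in weights]
    chosen = random.choices(range(len(weights)), weights=probs)[0]
    return chosen, probs

def update_weight(weights, probs, chosen, reward, gamma=0.1):
    # importance-weighted reward estimate, as in EXP3-style bandits
    est = reward / probs[chosen]
    weights[chosen] *= math.exp(gamma * est / len(weights))
    return weights
```

Clusters that repeatedly yield high reward are sampled more often, while the gamma term keeps every cluster explorable.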
Stage 2: Fine-Grained Sample Utility Estimation and Selection. Within each selected cluster, LEAD utilizes the IDU function to estimate the utility of individual samples precisely. Specifically, given the IDU scores computed based on the previously discussed training signals (losses, historical trends, and gradient predictions), LEAD prioritizes and selects samples with the highest IDU values. Therefore, samples predicted to yield higher improvements for the model after subsequent parameter updates are selected preferentially.
Contributions. This paper makes the following contributions:
(1) Problem Formulation. We formally introduce the problem of Iterative Data Selection with Inference-Free Utility Estimation, defining a scenario where iterative model-aware selection is performed without incurring additional inference overhead (Section 2).
(2) Instance-Level Dynamic Uncertainty (IDU). We develop a new sample utility estimation function, IDU, which effectively addresses temporal mismatch and instability in loss signals by integrating current losses, gradient-based approximations of loss changes, and exponential smoothing of historical loss signals. All components are computed directly from naturally available training signals without requiring additional model inference (Section 3).
(3) LEAD Framework. We propose LEAD, a theoretically grounded and efficient iterative data selection framework seamlessly integrated into the standard model training process, eliminating repeated costly inference steps (Section 4 and Section 5).
(4) Theoretical Analysis. We rigorously ground our framework in a Lagrangian optimization formulation, employing complementary slackness conditions and gradient correlation approximations to derive theoretically optimal parameters for the IDU function, ensuring both soundness and practical effectiveness (Section 6).
(5) Extensive Experiments. Extensive experiments across four diverse benchmarks show that LEAD significantly outperforms state-of-the-art methods, improving average model performance by $6.1\%-10.8\%$ while using only $2.5\%$ of the training data and reducing overall training time by $5-10\times$ (Section 7).
# 2 Preliminary and Problem Formulation

# 2.1 Instruction Tuning for LLMs
Instruction tuning fine-tunes pretrained large language models using instruction-response pairs, enabling them to generalize to new tasks by interpreting diverse instructions [56]. Formally, given instruction-response pairs $( x , y )$ from dataset $\mathcal { D }$ , instruction tuning optimizes model $\theta$ by minimizing the expected loss:
$$
\underset { \theta } { \operatorname* { m i n } } \mathbb { E } _ { ( x , y ) \sim \mathcal { D } } \left[ L ( \mathcal { M } _ { \theta } ( x ) , y ) \right]
$$
where $L$ is a task-specific loss function such as cross-entropy.
# 2.2 Data Selection for Instruction Tuning
In practice, datasets often originate from vast and noisy sources. Given limited computational budgets and data quality concerns, selecting the most informative samples for instruction tuning becomes crucial. We formalize this as the data selection problem, categorized into two groups: static and iterative data selection.
Static Data Selection for Instruction Tuning. Given a dataset $\mathcal { D }$ , it selects a fixed subset ${ \mathcal { D } } ^ { * } \subseteq { \mathcal { D } }$ under budget constraint $B$ :
$$
\operatorname* { m i n } _ { \mathcal { D } ^ { * } \subseteq \mathcal { D } , | \mathcal { D } ^ { * } | \leq B } \mathbb { E } _ { ( x , y ) \sim \mathcal { D } _ { \mathrm { t a r g e t } } } \left[ L ( M _ { \theta } ( x ) , y ) \right] ,
$$
where ${ \mathcal { D } } _ { \mathrm { t a r g e t } }$ denotes the target distribution. However, static methods cannot adaptively select samples based on the model’s evolving capabilities to maximize learning effectiveness during training [2].
Iterative Data Selection for Instruction Tuning. Iterative data selection interleaves model fine-tuning and data selection across multiple iterations. Formally, given the model parameters $\theta _ { t }$ at iteration $t$ , we adaptively select a subset $S _ { t } \subseteq { \mathcal { D } }$ based on a utility function $f ( \theta _ { t } , x )$ , which estimates the expected contribution of each sample $x$ to future model improvement (e.g., loss reduction).
The iterative selection problem can thus be formulated as:
$$
\operatorname* { m a x } _ { \{ S _ { 1 } , . . . , S _ { T } \} } \sum _ { t = 1 } ^ { T } \sum _ { x \in S _ { t } } f _ { t } ( \theta _ { t } , x ) , \quad \mathrm { ~ s . t . ~ } \quad \sum _ { t = 1 } ^ { T } | S _ { t } | \leq B ,
$$
where $B$ is the total sample selection budget allowed during training.
Existing methods typically estimate the utility $f _ { t } ( \theta _ { t } , x )$ by performing full-dataset inference at each iteration. Specifically, after fine-tuning the model on selected samples $S _ { t }$ , traditional methods explicitly run inference on the entire dataset $\mathcal { D }$ using the updated model parameters $\theta _ { t }$ to compute utility scores:
$$
f _ { t } ( \theta _ { t } , x ) = g ( \mathrm { I n f e r } ( \theta _ { t } , x ) ) , \quad \forall x \in \mathcal { D } ,
$$
where $\mathrm { I n f e r } ( \theta _ { t } , x )$ denotes inference (e.g., loss or uncertainty computation) and $g ( \cdot )$ maps inference results to utility values.
Consequently, the next subset $S _ { t + 1 }$ is selected as:
$$
S _ { t + 1 } = \operatorname* { a r g \, m a x } _ { S \subseteq \mathcal { D } , | S | \leq k } \sum _ { x \in S } f _ { t } ( \theta _ { t } , x ) , \quad \mathrm { s . t . } \quad T \cdot k \leq B .
$$
Note that in iterative data selection, we typically assume a fixed selection size $k$ per iteration, constrained by the total selection budget $B$ . Thus, the number of iterations $T$ and the selection size per iteration $k$ satisfy the relation $T \cdot k \leq B$ .
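The schedule implied by $T \cdot k \leq B$ can be sketched as follows (the utility function is a placeholder for the inference-based $f_t$ above):

```python
# Hedged sketch: T = B // k selection rounds, each picking the k
# highest-utility samples not selected so far.
def iterative_select(pool, utility, B, k):
    selected_rounds = []
    remaining = set(pool)
    for _ in range(B // k):
        ranked = sorted(remaining, key=utility, reverse=True)
        s_t = ranked[:k]
        selected_rounds.append(s_t)
        remaining -= set(s_t)
    return selected_rounds

rounds = iterative_select(range(10), utility=lambda x: x, B=6, k=2)
# → [[9, 8], [7, 6], [5, 4]]
```

In the traditional setting, the utility callable would re-run model inference over the pool every round, which is exactly the cost the next section eliminates.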
# 2.3 Problem Formulation
Existing iterative model-aware methods rely heavily on repeated full-dataset inference for sample utility estimation, leading to significant computational costs. To eliminate this, we define the problem of Iterative Data Selection with Inference-Free Utility Estimation.
Definition 2.1 (Iterative Data Selection with Inference-Free Utility Estimation). Given a total sample selection budget $B$, our objective is to identify subsets $\{ S _ { t } \} _ { t = 1 } ^ { T }$ that maximize the cumulative estimated utility, where the utility function $f _ { t } ( \theta _ { t - 1 } , x )$ is computed exclusively from training-time signals (e.g., training losses or logits) without incurring additional inference overhead:
$$
\operatorname* { m a x } _ { \{ S _ { 1 } , . . . , S _ { T } \} } \sum _ { t = 1 } ^ { T } \sum _ { x \in S _ { t } } f _ { t } ( \theta _ { t - 1 } , x ) , \quad \mathrm { s . t . } \quad \sum _ { t = 1 } ^ { T } | S _ { t } | \leq B ,
$$
Specifically, at each iteration $t$ , the utility estimation $f _ { t } ( \theta _ { t - 1 } , x )$ utilizes the loss signal computed using model parameters $\theta _ { t - 1 }$ immediately after the forward propagation step, but before the backward propagation (parameter update). Thus, no additional inference is required to estimate utilities for data selection at iteration $t$ .
Our goal, therefore, is to design accurate and stable inference-free utility estimation methods. For simplicity, we use $f _ { t } ( \theta _ { t - 1 } , x )$ and $f ( \theta _ { t - 1 } , x )$ interchangeably when the context clearly refers to data selection at iteration $t$ .
# 3 Instance-Level Dynamic Uncertainty Utility
Designing an effective inference-free utility function $f ( \theta _ { t - 1 } , x )$ requires addressing two fundamental challenges as discussed in Section 1: (C1) the temporal mismatch between pre-update loss signals and their actual post-update utility, and (C2) the instability of instantaneous loss signals due to random fluctuations and noise.
To tackle these challenges, we first define a baseline utility function based on a loss-based uncertainty metric, and then introduce an improved formulation, termed the Instance-Level Dynamic Uncertainty (IDU) utility function, which explicitly addresses these limitations.
Figure 3: Overview of LEAD. (A) Dual-Level Data Clustering (Offline); (B) Two-Stage Coarse-to-Fine Data Selection (Online), comprising (B1) Coarse-Level Cluster Selection and (B2) Fine-Grained Sample Utility Estimation and Selection; (C) Training; (D) Objective.
Loss-based Uncertainty Estimation. Our approach begins by formalizing instance-level uncertainty through a loss-based formulation. Formally, given an instruction-response pair $( x , y )$ , we define the Instance-level Uncertainty (IU) [20] at training iteration $t$ as the empirical cross-entropy between the model’s current predictive distribution and the ground-truth response:
$$
I U ( \theta _ { t } , y \mid x ) = L ( \theta _ { t } , x ) = - \frac { 1 } { T } \sum _ { j = 1 } ^ { T } \log p _ { \theta _ { t } } ( t _ { j } ^ { y } \mid x , t _ { 1 } ^ { y } , \dots , t _ { j - 1 } ^ { y } ) ,
$$
where $T$ is the response length, $t _ { j } ^ { y }$ refers to the $j$ -th response token, and $p _ { \theta _ { t } }$ is the model’s token-level predictive probability distribution.
IU naturally corresponds to the training-time negative log-likelihood loss, providing a direct and computationally free baseline. However, IU alone cannot effectively handle challenges (C1) and (C2).
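As a toy illustration (the token log-probabilities below are hypothetical; in practice they come from the model's forward pass), IU is simply the mean negative log-likelihood over the response tokens:

```python
import math

def instance_uncertainty(token_logprobs):
    """IU(theta_t, y | x): mean negative log-likelihood over the T response
    tokens, i.e. exactly the training-time cross-entropy loss for the pair."""
    T = len(token_logprobs)
    return -sum(token_logprobs) / T

# Two hypothetical responses: the model is confident on the first and
# uncertain on the second, so the second has higher IU (higher utility).
confident = [math.log(0.9), math.log(0.8), math.log(0.95)]
uncertain = [math.log(0.3), math.log(0.2), math.log(0.4)]
assert instance_uncertainty(uncertain) > instance_uncertainty(confident)
```

Because this quantity is the training loss itself, logging it per sample during the forward pass costs nothing extra.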
Instance-Level Dynamic Uncertainty (IDU). To explicitly mitigate both temporal mismatch (C1) and instability (C2) of loss signals, we introduce the Instance-Level Dynamic Uncertainty (IDU), which incorporates exponential smoothing of historical losses and gradient-based approximation of loss changes. Formally, given subset $S _ { t }$ at iteration $t$ , IDU for sample $x$ is recursively defined as:
$$
\begin{array} { r l } & { f ( \theta _ { t - 1 } , x ) = I D U ( \theta _ { t - 1 } , x ) } \\ & { \qquad = ( 1 - b ) \cdot \underbrace { \left[ L ( \theta _ { t - 1 } , x ) \right. } _ { \mathrm { I U \ a t } \ \theta _ { t - 1 } } + \underbrace { \Delta L ^ { \prime } ( \theta _ { t } , x ) } _ { \mathrm { U t i l i t y \ C h a n g e } } ] + b \cdot \underbrace { I D U ( \theta _ { t - 2 } , x ) } _ { \mathrm { H i s t o r i c a l \ U i l i t y } } } \end{array}
$$
where $b \in [ 0 , 1 )$ controls the balance between current and historical signals, $L ( \theta _ { t - 1 } , x )$ is the IU computed using model parameters $\theta _ { t - 1 }$ , and $\Delta L ^ { \prime } ( \theta _ { t } , x )$ is an approximation of the expected utility change, defined as: $\Delta L ^ { \prime } ( \theta _ { t } , x ) = L ( \theta _ { t } , x ) - L ( \theta _ { t - 1 } , x )$ .
We have the following key clarifications regarding Eq. (8):
• The instantaneous loss $L ( \theta _ { t - 1 } , x )$ is computed naturally during forward propagation at iteration $t$ , requiring no extra inference.
• The term $\Delta L ^ { \prime } ( \theta _ { t } , x )$ denotes the anticipated loss change from $\theta _ { t - 1 }$ to $\theta _ { t }$ . Importantly, this estimation leverages only readily available gradient and historical loss information collected at iteration $t - 1$ , ensuring no extra inference is performed at iteration $t$ .
IDU effectively resolves both fundamental challenges through two carefully designed components:
• Utility Change Estimation (Gradient-Based Approximation). To address temporal mismatch (C1), IDU explicitly estimates the expected utility change $\Delta L ^ { \prime } ( \theta _ { t } , x )$ between consecutive iterations. Instead of performing additional inference passes with updated parameters $\theta _ { t }$ , we leverage gradient-based approximations derived from backward propagation at iteration $t - 1$ to estimate the loss at iteration $t$ .
• Historical Utility (Exponential Smoothing). To tackle instability (C2), IDU incorporates historical uncertainty signals using an exponential smoothing mechanism. Rather than depending solely on instantaneous IU values, IDU maintains an exponential moving average of previous utility estimates, $I D U ( \theta _ { t - 2 } , x )$ . This significantly reduces fluctuations caused by random noise and local minima encountered during training.
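A minimal sketch of the Eq. (8) recursion on a synthetic loss trace; here the exact loss difference stands in for the gradient-based estimate $\Delta L'$, so the bracketed term reduces to the current loss and the recursion acts as an exponential moving average that damps noisy IU values:

```python
def idu_update(prev_idu, loss_prev, delta_loss_est, b):
    """One step of the Eq. (8) recursion:
    IDU = (1 - b) * [L(theta_{t-1}, x) + dL'] + b * IDU(theta_{t-2}, x)."""
    return (1.0 - b) * (loss_prev + delta_loss_est) + b * prev_idu

# Hypothetical trace: losses shrink as the sample is learned; the smoothing
# (b = 0.6) damps the step-to-step fluctuations of the raw loss signal.
b, idu = 0.6, 2.0                    # initialize IDU with the first loss
losses = [2.0, 1.1, 1.6, 0.7, 0.9]  # noisy but decreasing raw IU signal
for t in range(1, len(losses)):
    # Stand-in for dL'; LEAD replaces this with the gradient-based estimate.
    delta_est = losses[t] - losses[t - 1]
    idu = idu_update(idu, losses[t - 1], delta_est, b)
print(round(idu, 3))  # 1.113
```

The smoothed value (about 1.11) sits well above the last raw loss (0.9), reflecting the sample's earlier high uncertainty rather than a single noisy reading.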
We will elaborate on the details of computing IDU and optimizing the coefficient $b$ of the IDU utility function in Section 5.1.
# 4 LEAD: LEArning to Iteratively Select Data
We first present an overview of LEAD (Section 4.1), followed by the three key components enabling inference-free iterative data selection (Section 4.2). Finally, we describe how these components systematically interact during iterative training (Section 4.3).
# 4.1 LEAD Framework: An Overview
Figure 3 provides a high-level overview of LEAD, illustrating its coarse-to-fine approach guided by a theoretically grounded IDU utility function. The framework comprises two key phases: offline dual-level clustering and online adaptive selection.
Dual-Level Data Clustering (Offline). As shown in Figure 3-(A), we first perform an offline preprocessing step to systematically partition the dataset into clusters based on two complementary dimensions: instruction-following difficulty [32] and task similarity [34]. This dual-level clustering is conducted offline, incurring no additional computational overhead during online training.
(1) Difficulty-aware Instance-level Clustering. We use the Instruction-Following Difficulty (IFD) metric [32] to evaluate instance-level difficulty. Given an instruction-response pair $( x , y )$ , the IFD is computed as $I F D ( y \mid x ) = \frac { P P L ( y \mid x ) } { P P L ( y ) }$ , where $P P L ( y \mid x )$ and $P P L ( y )$ denote the perplexities of generating the response $y$ with and without the instruction $x$ , respectively. Using these IFD scores, we group training samples into clusters through sliding intervals (e.g., intervals of 0.1).
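A minimal sketch of the IFD ratio and the sliding-interval bucketing (the perplexity values are hypothetical; in practice they come from the base model's forward passes):

```python
def ifd(ppl_y_given_x, ppl_y):
    """Instruction-Following Difficulty: PPL(y | x) / PPL(y)."""
    return ppl_y_given_x / ppl_y

def difficulty_bucket(score, width=0.1):
    """Assign an IFD score to a sliding interval of the given width,
    clamping scores >= 1 (instruction did not help) into the last bucket."""
    return min(int(score / width), int(1.0 / width) - 1)

# Hypothetical perplexities: the instruction helps a lot on the "easy"
# sample and barely at all on the "hard" one.
samples = {"easy": ifd(2.0, 8.0), "hard": ifd(7.5, 8.0)}
buckets = {name: difficulty_bucket(s) for name, s in samples.items()}
print(buckets)  # {'easy': 2, 'hard': 9}
```
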
(2) Similarity-based Task-level Clustering. Within each difficulty cluster, we further conduct finer-grained clustering based on task similarity. Specifically, we extract task-specific embeddings from instructions by emphasizing task-defining terms (e.g., key verbs and nouns), following the approach in [34]. We then apply the $K$ -means algorithm [43] to group instructions by task similarity.
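A self-contained sketch of the task-level step, using a pure-Python $K$-means on hypothetical 2-D task embeddings (standing in for the embedding model and library clustering used in practice):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means over task-embedding vectors (lists of floats)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        # Recompute centroids; keep the old center if a group is empty.
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return groups

# Hypothetical embeddings: two summarization-like tasks near the origin and
# two translation-like tasks near (5, 5) separate into two task clusters.
emb = [[0.1, 0.0], [0.0, 0.2], [5.0, 5.1], [4.9, 5.0]]
clusters = kmeans(emb, k=2)
assert sorted(len(g) for g in clusters) == [2, 2]
```
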
Coarse-to-Fine Data Selection (Online). During training, as shown in Figure 3-(B), LEAD implements a coarse-to-fine selection process designed to maximize utility and training effectiveness under a given total sample budget.
(1) Coarse-Level Cluster Selection (via MAB). At each training iteration $t$ , we first employ a Multi-Armed Bandit (MAB) algorithm (specifically EXP3, detailed in Section 5.2) to dynamically select one difficulty-level cluster that is most beneficial to the current model state. The MAB algorithm leverages a self-guided IDU-based reward signal, directly measuring the reduction in IDU scores derived from training on previously selected clusters.
(2) Fine-Grained Sample Selection (via IDU). After identifying the optimal difficulty-level cluster, we distribute the selection budget across its finer-grained task clusters. Specifically, we select the most informative samples from each task cluster based on their current IDU values (see Section 5.1), thus ensuring efficient fine-grained selection of training data at iteration $t$ .
These selected samples form the subset $S _ { t }$ used to fine-tune the model at iteration $t$ . After training, the model parameters are updated from $\theta _ { t - 1 }$ to $\theta _ { t }$ , and the MAB rewards are updated accordingly, ensuring the LEAD framework continuously improves its data selection strategy.
# 4.2 LEAD Framework: Core Components
LEAD has three carefully designed core components.
(1) Instance-Level Dynamic Uncertainty (IDU) Utility. To estimate sample utility efficiently without additional inference, we introduce the Instance-Level Dynamic Uncertainty (IDU) metric. IDU combines exponential smoothing of historical losses and a gradient-based approximation of loss change, effectively addressing the temporal instability and inference overhead challenges inherent in traditional iterative selection methods (see Section 5.1).
(2) Adaptive Data Selection via MAB-Integrated Training Scheduler. To integrate coarse and fine-grained selections seamlessly, we employ the MAB-EXP3 algorithm to dynamically balance exploration and exploitation among clusters. The MAB scheduler dynamically prioritizes clusters demonstrating higher historical utility gains, thus efficiently adapting to the model’s evolving learning capabilities (further described in Section 5.2).
Figure 4: Iterative Sample Selection Guided by IDU Scores.
(3) Self-Guided IDU-Based Reward. To guide the coarse-level cluster selection via MAB, we propose a novel reward function based on the reduction of IDU achieved by training on a given cluster without the need for external validation steps and additional inference (Please refer to Section 5.3 for details).
Next, we illustrate how these components interact seamlessly in the iterative training workflow.
# 4.3 Training Iteration Workflow of LEAD
LEAD integrates iterative data selection with LLM instruction tuning. Each training iteration $t$ within LEAD comprises four steps.
Step 1: Difficulty-Aware Cluster Selection. Select the optimal coarse-level difficulty cluster $C _ { i ^ { * } }$ via the MAB-EXP3 algorithm, guided by the reward derived from previous training iterations, reflecting the cluster’s historical effectiveness.
Step 2: Fine-Grained Sample Selection. Within the cluster $C _ { i ^ { * } }$ , utilize the IDU function to select the top $n _ { i ^ { * } }$ most informative samples. These samples form the training subset $S _ { t }$ . For example, in Figure 4, under the initial parameters $\theta _ { 0 }$ , samples with the highest initial IDU scores (labeled as $S _ { 1 }$ ) are chosen for training.
Step 3: LLM Instruction Tuning. The selected samples $( S _ { t } )$ are used to fine-tune the model parameters, transitioning from the current parameters $\theta _ { t - 1 }$ to the updated parameters $\theta _ { t }$ .
Step 4: Reward and Utility Updates. After fine-tuning, trained samples typically show decreased IDU scores, reflecting reduced informativeness. This reduction serves as the training reward. As shown in Figure 4, lowered IDU scores of previously selected samples (e.g., $S _ { 1 }$ at $\theta _ { 0 }$ and $S _ { 2 }$ at $\theta _ { 1 }$ ) prompt dynamic selection of new, more informative samples for subsequent iterations (e.g., $S _ { 2 }$ to $S _ { 3 }$ ). Finally, both the IDU scores and the MAB weights are updated accordingly, guiding the sample selection process in future iterations.
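The four-step workflow can be condensed into a toy loop. Everything here is a simplified stand-in: the greedy weight rule and multiplicative bump replace EXP3, and the IDU decay is a hypothetical model of what training does to selected samples:

```python
def lead_iteration(idu_scores, clusters, weights, k, decay=0.5):
    """One toy LEAD iteration over hypothetical state.
    Step 1: pick the cluster with the highest weight (MAB stand-in).
    Step 2: take the top-k IDU samples from that cluster.
    Step 3: 'train' -- modeled as the selected IDU scores decaying.
    Step 4: reward = mean IDU reduction; bump the cluster's weight."""
    best = max(weights, key=weights.get)                                  # Step 1
    chosen = sorted(clusters[best], key=idu_scores.get, reverse=True)[:k]  # Step 2
    reward = 0.0
    for x in chosen:                                                      # Step 3
        before = idu_scores[x]
        idu_scores[x] *= decay
        reward += before - idu_scores[x]
    weights[best] *= 1.0 + reward / max(len(chosen), 1)                   # Step 4
    return chosen

idu = {"a": 3.0, "b": 1.0, "c": 2.5, "d": 0.5}
clus = {"hard": ["a", "c"], "easy": ["b", "d"]}
w = {"hard": 1.1, "easy": 1.0}
first = lead_iteration(idu, clus, w, k=1)
second = lead_iteration(idu, clus, w, k=1)
print(first, second)  # ['a'] ['c']
```

Note how the drop in `a`'s IDU score after the first iteration pushes selection to `c` in the second, mirroring the $S_1 \to S_2$ transition in Figure 4.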
Through this structured workflow, LEAD continuously and adaptively selects the most beneficial samples at each training step.
# 5 The Design Details of LEAD
We first show how to optimize our IDU utility under a budget constraint (Section 5.1), followed by an adaptive data selection scheduler via MAB algorithms (Section 5.2), and finally, a self-guided IDU-based reward for cluster evaluation (Section 5.3).
# 5.1 Instance-Level Dynamic Uncertainty Optimization under the Budget Constraint
In Section 3, we introduced the $I D U$ utility (Eq. (8)) for estimating sample utilities in iterative data selection. Note that our LEAD aims to iteratively select subsets of samples with the highest cumulative utility gain, defined as the expected reduction in average $I D U$ at each iteration $( \Delta I D U _ { t } )$ under a total budget constraint $B$ . Formally, our optimization problem can be defined as follows.
Problem 1 (Budget-Constrained IDU Utility Optimization). Given a total selection budget $B$ , our goal is to maximize the cumulative expected utility over $T$ training iterations:
$$
\operatorname* { m a x } _ { b , T } \sum _ { t = 1 } ^ { T } \mathbb { E } [ \Delta I D U _ { t } ] , \quad s . t . \sum _ { t = 1 } ^ { T } \mathbb { E } [ n _ { t } ] \leq B
$$
$$
\mathbb { E } [ n _ { t } ] = \alpha \cdot ( 1 - b ) \cdot | \overline { { C } } | \cdot ( 1 + C V ^ { 2 } ) \cdot ( 1 + O ( \gamma ) )
$$
Here, $n _ { t }$ denotes the number of samples selected at iteration $t$ , $\alpha$ is the sampling ratio, $b \in [ 0 , 1 )$ is the smoothing parameter controlling the influence of historical utility, $| \overline { { C } } |$ is the average cluster size, and $C V ^ { 2 } = \frac { 1 } { K } \sum _ { i = 1 } ^ { K } \frac { ( | C _ { i } | - | \overline { { C } } | ) ^ { 2 } } { | \overline { { C } } | ^ { 2 } }$ quantifies variability among cluster sizes.
To solve this problem, we construct a Lagrangian function incorporating the budget constraint and apply the complementary slackness condition to derive the optimal smoothing parameter $b ^ { * }$ . Specifically, the optimal smoothing coefficient that maximizes cumulative utility gain under the budget constraint is given by $b ^ { * } = 1 - \frac { B } { \alpha \cdot | \overline { { C } } | \cdot T \cdot ( 1 + C V ^ { 2 } ) }$ . The detailed derivation and theoretical justification of $b ^ { * }$ are provided in Theorem 6.1 (Section 6).
In practice, to effectively implement the optimal solution to our budget-constrained utility maximization problem, we first derive the optimal smoothing coefficient $b ^ { * }$ from the theoretical analysis above. However, to fully instantiate our IDU utility function, we must also efficiently estimate the utility changes $( \Delta L ^ { \prime } ( \theta _ { t } , S _ { t } ) )$ between consecutive training iterations, as this term directly contributes to computing the cumulative utility gain $\Delta I D U _ { t }$ . Directly calculating these utility changes would typically require additional inference steps, violating our zero-cost constraint.
To address this, we introduce the gradient-based approximation of utility change, as discussed below.
Gradient-Based Approximation of Utility Change. Our approach efficiently utilizes gradient information computed during standard model training, thus requiring no extra computational resources beyond regular forward-backward propagation.
Formally, consider a subset of samples $S _ { i }$ . When model parameters are updated from $\theta _ { t - 1 }$ to $\theta _ { t }$ , the average uncertainty change (utility change) $\Delta L ( \theta _ { t } , S _ { i } )$ can be approximated as follows:
Theorem 5.1 (Utility Change Approximation). For a given sample subset $S _ { i }$ , the utility change from parameter update $\theta _ { t - 1 }$ to $\theta _ { t }$ can be approximated as:
$$
\begin{array} { l } { \displaystyle \Delta L ^ { \prime } ( \theta _ { t } , S _ { i } ) \equiv \frac { 1 } { | S _ { i } | } \sum _ { x \in S _ { i } } \left( L ( \theta _ { t } , x ) - L ( \theta _ { t - 1 } , x ) \right) } \\ { \displaystyle \approx - \eta \left[ \beta ^ { 2 } \delta _ { t _ { k } } + \left( 1 - \beta \right) ^ { 2 } \delta _ { t - 1 } + 2 \beta ( 1 - \beta ) \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi \right] , } \end{array}
$$
where $\eta$ is the learning rate, $\delta _ { t _ { k } }$ and $\delta _ { t - 1 }$ denote historical gradient norms, and $\phi$ is the angle between consecutive gradient directions, given by $\cos \phi = \frac { \Delta \theta _ { t _ { k } } ^ { \top } \Delta \theta _ { t - 1 } } { \| \Delta \theta _ { t _ { k } } \| \cdot \| \Delta \theta _ { t - 1 } \| }$ .
This approach ensures that our utility estimation remains efficient, accurate, and fully integrated into standard model training workflows. The complete derivation of this gradient-based approximation method is presented in Theorem 6.4 (Section 6).
While the above approximation method significantly enhances efficiency, its accuracy critically depends on selecting an appropriate approximation coefficient $\beta$ . To further refine our method, we analytically derive the optimal approximation weight $\beta ^ { * }$ that minimizes approximation error.
Optimal Approximation Coefficient $\beta ^ { * }$ . Formally, we define the approximation error function as: $J ( \beta ) = \| \Delta L ( \theta _ { t } , S _ { i } ) - \Delta L ^ { \prime } ( \theta _ { t } , S _ { i } ) \| ^ { 2 }$ . Minimizing this error function leads us to the theoretical $\beta ^ { * }$ :
Theorem 5.2 (Optimal Weight $\beta ^ { * }$ ). The optimal approximation weight $\beta ^ { * }$ minimizing the error function $J ( \beta )$ is given by:
$$
\beta ^ { * } = \frac { \delta _ { t - 1 } - \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi } { \delta _ { t _ { k } } + \delta _ { t - 1 } - 2 \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi } .
$$
Detailed proofs and analyses regarding the derivation of this optimal coefficient are provided in Theorem 6.4 (Section 6).
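Under the stated assumptions (scalar gradient-norm statistics $\delta_{t_k}$, $\delta_{t-1}$ and the angle $\phi$ already collected during earlier backward passes), Theorems 5.1 and 5.2 reduce to a few arithmetic operations; the numbers below are illustrative only:

```python
import math

def optimal_beta(delta_tk, delta_tm1, cos_phi):
    """Theorem 5.2: the weight beta* minimizing the error J(beta)."""
    cross = math.sqrt(delta_tk * delta_tm1) * cos_phi
    return (delta_tm1 - cross) / (delta_tk + delta_tm1 - 2.0 * cross)

def utility_change(eta, beta, delta_tk, delta_tm1, cos_phi):
    """Theorem 5.1: gradient-based approximation of dL'(theta_t, S_i)."""
    cross = math.sqrt(delta_tk * delta_tm1) * cos_phi
    return -eta * (beta ** 2 * delta_tk
                   + (1.0 - beta) ** 2 * delta_tm1
                   + 2.0 * beta * (1.0 - beta) * cross)

# Hypothetical statistics from the two most recent backward passes.
beta = optimal_beta(delta_tk=4.0, delta_tm1=1.0, cos_phi=0.5)
dL = utility_change(eta=0.01, beta=beta, delta_tk=4.0, delta_tm1=1.0, cos_phi=0.5)
assert dL < 0  # partially aligned gradients predict the loss on S_i decreases
```

No forward pass with $\theta_t$ appears anywhere: both quantities are pure functions of statistics that the optimizer already produced.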
Finally, to rigorously evaluate the theoretical guarantees and practical utility of our gradient-based approximation, we establish a formal approximation error bound as follows.
Approximation Error Bound. We bound the approximation error between the approximated loss $L ^ { \prime }$ and the true loss $L$ .
Theorem 5.3 (Approximation Error Bound). With the optimal weight $\beta ^ { * }$ , the error between the approximated loss $L ^ { \prime }$ and the true loss $L$ satisfies:
$$
\| L ^ { \prime } ( \theta _ { t } , x ) - L ( \theta _ { t } , x ) \| \leq \epsilon _ { t a y l o r } + \epsilon _ { a p p r o x } ,
$$
where:
• $L ^ { \prime } ( \theta _ { t } , x ) = L ( \theta _ { t - 1 } , x ) + \Delta L ^ { \prime } ( \theta _ { t } , S _ { t } )$ is the approximated loss;
• $\epsilon _ { t a y l o r } = \frac { 1 } { 2 } \eta ^ { 2 } \cdot \operatorname* { m a x } _ { \theta } \| \nabla ^ { 2 } L ( \theta , x ) \| \cdot \| \nabla L ( S _ { t } , \theta _ { t - 1 } ) \| ^ { 2 }$ is the error from the Taylor expansion;
• $\epsilon _ { a p p r o x } = \eta \cdot \| \nabla L ( S _ { t } , \theta _ { t - 1 } ) - ( \beta ^ { * } \cdot \nabla L ( S _ { t _ { k } } , \theta _ { t _ { k } - 1 } ) + ( 1 - \beta ^ { * } ) \cdot \nabla L ( S _ { t - 1 } , \theta _ { t - 2 } ) ) \| ^ { 2 }$ is the error from the gradient approximation.
# 5.2 Adaptive Data Selection via MAB-Integrated Training Scheduler
In this section, we propose a novel training scheduler for the LEAD framework that integrates the Multi-Armed Bandit (MAB) algorithm with our IDU utility function. The scheduler adaptively selects training data clusters based on their evolving informativeness.
Step 1: Difficulty-Aware Cluster Selection. Initially, we set the weights $W = \{ w _ { 1 } , w _ { 2 } , \dots , w _ { K } \}$ for all clusters categorized by difficulty level, where $w _ { i }$ denotes the weight of cluster $C _ { i }$ and $K$ is the number of clusters. To assess the difficulty score of each cluster, we employ the EXP3 [3] algorithm, a well-established method within the MAB framework, for cluster selection. Specifically, at each iteration $t$ , we first calculate the cluster score $D C _ { t } ( i )$ of each cluster $C _ { i }$ based on the cluster weight $w _ { i }$ , and then select the cluster (arm) with the highest score. The score $D C _ { t } ( i )$ can be computed as:
$$
D C _ { t } ( i ) = ( 1 - \gamma ) \frac { w _ { i } ^ { ( t ) } } { \sum _ { j = 1 } ^ { K } w _ { j } ^ { ( t ) } } + \frac { \gamma } { K }
$$
where $\gamma$ controls the exploration-exploitation trade-off.
The selected cluster at iteration $t$ is the one with the highest probability: $C _ { i ^ { * } } = \arg \operatorname* { m a x } _ { i \in [ 1 , K ] } D C _ { t } ( i )$ .
Step 2: Sample Selection with IDU. After selecting a cluster $C _ { i }$ with the highest $D C$ score, we apply our previously introduced IDU utility function to sample the most informative subset $B _ { C _ { i } }$ within the selected cluster $C _ { i }$ . Specifically, we select samples with the highest IDU scores to maximize utility gain at each iteration.
Step 3: Model Training and Reward Computation. Using the selected subset $B _ { C _ { i } }$ , we train the large language model during iteration $t$ . Once training is complete, we compute a reward $r _ { i } ^ { ( t ) }$ to quantify the model’s improvement resulting from the selected samples (please refer to Section 5.3 for details).
Step 4: Cluster Weight Updates for Next-Round Selection. After obtaining the reward $r _ { i } ^ { ( t ) }$ , we update the cluster weights $w _ { i } ^ { ( t + 1 ) }$ according to the EXP3 update rule:
$$
w _ { i } ^ { ( t + 1 ) } = \left\{ { \begin{array} { l l } { w _ { i } ^ { ( t ) } \exp \left( { \frac { \gamma } { K } } { \frac { r _ { i } ^ { ( t ) } } { D C _ { t } ( i ) } } \right) , } & { i = i _ { t } } \\ { w _ { i } ^ { ( t ) } , } & { { \mathrm { o t h e r w i s e } } } \end{array} } \right.
$$
This adaptive weight-update mechanism ensures clusters that consistently yield high utility are progressively favored in subsequent iterations, achieving adaptive training data selection.
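The two EXP3 formulas above, Eq. (13) scoring and Eq. (14) weight update, can be sketched in a few lines. The reward here is a hypothetical constant standing in for the IDU-based signal of Section 5.3:

```python
import math

def dc_scores(weights, gamma):
    """Eq. (13): DC_t(i) = (1 - gamma) * w_i / sum_j w_j + gamma / K."""
    K, total = len(weights), sum(weights)
    return [(1.0 - gamma) * w / total + gamma / K for w in weights]

def exp3_update(weights, chosen, reward, gamma):
    """Eq. (14): multiplicative update of the chosen arm only."""
    p = dc_scores(weights, gamma)[chosen]
    new = list(weights)
    new[chosen] *= math.exp((gamma / len(weights)) * reward / p)
    return new

# Hypothetical run: cluster 0 keeps earning a positive IDU-reduction reward,
# so its selection score grows while the other clusters' scores shrink.
w, gamma = [1.0, 1.0, 1.0], 0.1
for _ in range(5):
    i = max(range(3), key=lambda j: dc_scores(w, gamma)[j])
    w = exp3_update(w, i, reward=1.0 if i == 0 else 0.0, gamma=gamma)
assert dc_scores(w, gamma)[0] > dc_scores(w, gamma)[1]
```

The $\gamma / K$ term guarantees every cluster keeps a floor probability of being explored, even after one arm has accumulated most of the weight.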
# 5.3 Self-Guided IDU-Based Reward
An effective reward function is critical to guiding effective cluster selection within the MAB framework. Ideally, such a reward should precisely capture each cluster’s direct contribution to model improvement, while remaining computationally efficient and fully integrated into the training process.
To achieve this, we propose a Self-Guided IDU-Based Reward, leveraging our previously defined IDU utility to efficiently quantify each cluster’s contribution to model improvement without additional inference overhead. Formally, the reward for training on cluster $C _ { i }$ at iteration $t$ is computed as:
$$
r _ { i } ^ { ( t ) } = I n f o G a i n ( C _ { i } , t ) = \mathbb { E } _ { x _ { i } \in C _ { i } } \left[ I D U ( \theta _ { t - 1 } , x _ { i } ) - I D U ( \theta _ { t } , x _ { i } ) \right] ,
$$
where $\theta _ { t - 1 }$ and $\theta _ { t }$ represent the model parameters before and after training, respectively. To maintain numerical stability and consistent scaling, rewards are further normalized to the range $[ - 1 , 1 ]$ via min-max normalization.
Compared to traditional reward designs [8], our self-guided reward naturally integrates into the standard training loop, accurately reflects dynamic model improvements at no additional inference cost, and significantly simplifies the reward computation.
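A sketch of the reward computation with the $[-1, 1]$ min-max normalization (toy IDU scores; `info_gain` and `minmax_to_unit_interval` are hypothetical helper names, not from the paper's code):

```python
def info_gain(idu_before, idu_after):
    """Mean per-sample IDU reduction for one trained cluster."""
    return sum(b - a for b, a in zip(idu_before, idu_after)) / len(idu_before)

def minmax_to_unit_interval(rewards):
    """Rescale a batch of raw rewards into [-1, 1] for numerical stability."""
    lo, hi = min(rewards), max(rewards)
    if hi == lo:
        return [0.0 for _ in rewards]
    return [2.0 * (r - lo) / (hi - lo) - 1.0 for r in rewards]

# Hypothetical per-cluster IDU scores before and after one training iteration.
raw = [info_gain([2.0, 1.5], [1.2, 1.0]),   # large reduction -> informative
       info_gain([0.8, 0.9], [0.7, 0.9])]   # small reduction -> less useful
print(minmax_to_unit_interval(raw))  # [1.0, -1.0]
```
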
# 6 Theoretical Guarantees
In this section, we analyze the theoretical guarantees of our IDU utility and the LEAD framework.
# 6.1 Optimal Smoothing Coefficient
We now analyze the optimal smoothing coefficient for the budgetconstrained IDU optimization (Problem 1, presented in Section 5.1).
Theorem 6.1 (Optimal Smoothing Coefficient). The optimal smoothing coefficient $b ^ { * }$ that maximizes the cumulative utility gain under the budget constraint is:
$$
b ^ { * } = 1 - { \frac { B } { n _ { 0 } T \cdot ( 1 + C V ^ { 2 } ) } }
$$
where $n _ { 0 } = \alpha \cdot { \overline { { | C | } } }$ is the expected batch size without smoothing and heterogeneity effects, $B$ is the total budget, $T$ is the number of training steps, and $C V ^ { 2 }$ quantifies cluster size variability.
Under a total budget $B$ , we propose the optimization problem:
$$
\operatorname* { m a x } _ { b , T } \sum _ { t = 1 } ^ { T } \Delta I D U _ { t } , \quad \mathrm { s . t . } \sum _ { t = 1 } ^ { T } n _ { t } \leq B
$$
The overall goal is to maximize the cumulative utility gain, and the cumulative utility gain depends on the $\Delta I D U _ { t } ( x )$ of each round.
$$
R ^ { ( t ) } = \Delta I D U _ { t } = \sum _ { x \in S _ { t } } \left( I D U ( \theta _ { t } , x ) - I D U ( \theta _ { t - 1 } , x ) \right)
$$
We take $\Delta I D U _ { t } ( x )$ of each round as the reward of the current round to guide the selection of new groups in the next round.
As the number of selection rounds typically exceeds 5, the utility-based reward at iteration $t$ simplifies to:
$$
R ^ { ( t ) } = \Delta I D U _ { t } = - ( 1 - b ) \eta _ { t } | S _ { t } | \Psi _ { t } .
$$
The detailed simplification is given in Lemma 6.2. Here $\Delta I D U _ { t }$ depends on the size of $S _ { t }$ , i.e., $| S _ { t } | = n _ { t }$ . Therefore, before estimating $\Delta I D U _ { t }$ , we need to estimate $n _ { t }$ . We obtain it in four steps.
Step 1: Estimate the sample size $n _ { t }$ selected in the $t$ -th round. In the initial round, all clusters have the same probability of being selected, so the first cluster is chosen at random. According to Eq. (13) and Eq. (14), which cluster is selected in a given round depends on which cluster was selected in the previous round, so we can only estimate the expectation of $n _ { t }$ . Then $\mathbb { E } [ n _ { t } ]$ can be simplified as follows (see Lemma 6.3 for details):
$$
\mathbb { E } [ n _ { t } ] = \alpha \cdot ( 1 - b ) \cdot \frac { \sum _ { i = 1 } ^ { K } | C _ { i } | ^ { 2 } } { \sum _ { i = 1 } ^ { K } | C _ { i } | } \cdot ( 1 + O ( \gamma ) )
$$
Step 2: Estimate the expectation of the utility gain $\Delta I D U _ { t }$ . The utility gain $\Delta I D U _ { t }$ in the $t$ -th round depends on $n _ { t }$ , and $n _ { t }$ itself can only be estimated in expectation due to the randomness of the MAB cluster selection; hence we must likewise take the expectation of $\Delta I D U _ { t }$ . According to Eq. (19) and Eq. (20), we obtain $\mathbb { E } [ \Delta I D U _ { t } ]$ :
$$
\begin{array} { l } { \displaystyle \sum _ { t = 1 } ^ { T } \mathbb { E } [ \Delta I D U _ { t } ] = - \sum _ { t = 1 } ^ { T } ( 1 - b ) \eta _ { t } \cdot \mathbb { E } [ | S _ { t } | \Psi _ { t } ] } \\ { = - \boldsymbol { n } _ { 0 } \cdot ( 1 - b ) ^ { 2 } \cdot ( 1 + \mathbb { C V } ^ { 2 } ) \cdot \displaystyle \sum _ { t = 1 } ^ { T } \eta _ { t } \delta _ { t } } \end{array}
$$
where $n _ { 0 } = \alpha \cdot { \overline { { | C | } } }$ represents the expected sample size without smoothing, and $\delta _ { t }$ represents the average per-sample utility contribution, so that $\mathbb { E } [ \Psi _ { t } \cdot | S _ { t } | ] = \delta _ { t } \cdot \mathbb { E } [ n _ { t } ]$ .
Step 3: Redefine objective and constrained condition. Having derived the expected sample size and utility gain, we now reformulate our optimization problem by incorporating these expectations.
$$
\begin{array} { r l } & { \displaystyle \operatorname* { m a x } _ { b , T } \sum _ { t = 1 } ^ { T } \mathbb { E } [ \Delta I D U _ { t } ] , \quad \mathrm { s . t . } \sum _ { t = 1 } ^ { T } \mathbb { E } [ n _ { t } ] \leq B } \\ & { \displaystyle \mathit { w h e r e } \quad \mathbb { E } [ n _ { t } ] = \alpha \cdot ( 1 - b ) \cdot | \overline { { C } } | \cdot ( 1 + C \mathbf { V } ^ { 2 } ) \cdot ( 1 + O ( \gamma ) ) } \end{array}
$$
Let $\bar { \eta } \delta = \frac { 1 } { T } \sum _ { t = 1 } ^ { T } \eta _ { t } \delta _ { t }$ ; the budget constraint becomes:
$$
\sum _ { t = 1 } ^ { T } \mathbb { E } [ n _ { t } ] = \sum _ { t = 1 } ^ { T } { n _ { 0 } \cdot ( 1 - b ) \cdot ( 1 + { \mathrm C } { \mathrm V } ^ { 2 } ) } \leq B
$$
Step 4: Solving optimal $b ^ { * }$ and $T ^ { * }$ . We formulate the Lagrangian:
$$
\begin{array} { r l } & { \mathcal { L } ( b , \lambda ) = \mathbb { E } [ \Delta I D U _ { t } ] - \lambda ( \mathbb { E } [ n _ { t } ] - B ) } \\ & { \quad \quad = - n _ { 0 } \cdot T \cdot \bar { \eta } \delta \cdot ( 1 - b ) ^ { 2 } \cdot ( 1 + \mathrm { C V } ^ { 2 } ) + } \\ & { \quad \quad \quad \lambda ( n _ { 0 } \cdot T \cdot ( 1 - b ) \cdot ( 1 + \mathrm { C V } ^ { 2 } ) - B ) } \end{array}
$$
Taking the partial derivative with respect to $b$ and setting it to zero:
$$
\frac { \partial \mathcal { L } } { \partial b } = 0 \Longrightarrow 2 \bar { \eta } \delta \cdot ( 1 - b ) = \lambda
$$
The complementary slackness condition states $\lambda \left( n _ { 0 } \cdot T \cdot ( 1 - b ) \cdot ( 1 + \mathrm { C V } ^ { 2 } ) - B \right) = 0$ . Since $\lambda \neq 0$ (as verified by the optimality condition), the budget constraint must be tight:
$$
n _ { 0 } \cdot T \cdot \left( 1 - b \right) \cdot \left( 1 + \mathrm { C V } ^ { 2 } \right) = B \Rightarrow b ^ { \ast } = 1 - \frac { B } { n _ { 0 } \cdot T \cdot \left( 1 + \mathrm { C V } ^ { 2 } \right) }
$$
We require $0 \leq b ^ { * } < 1$ , which implies:
$$
T _ { \mathrm { m i n } } = \left\lceil \frac { B } { n _ { 0 } \cdot ( 1 + \mathrm { C V } ^ { 2 } ) } \right\rceil + 1
$$
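Plugging hypothetical numbers into Theorem 6.1 and the $T_{\min}$ bound above (the budget, batch size, and $CV^2$ values below are illustrative only):

```python
import math

def optimal_b(B, n0, T, cv2):
    """Theorem 6.1: b* = 1 - B / (n0 * T * (1 + CV^2))."""
    return 1.0 - B / (n0 * T * (1.0 + cv2))

def min_rounds(B, n0, cv2):
    """Smallest T guaranteeing 0 <= b* < 1, per the ceiling bound above."""
    return math.ceil(B / (n0 * (1.0 + cv2))) + 1

# Hypothetical setup: 10k-sample budget, expected batch n0 = 500, CV^2 = 0.25.
T = min_rounds(10_000, 500, 0.25)   # -> 17 rounds
b = optimal_b(10_000, 500, T, 0.25)
assert 0.0 <= b < 1.0
```

With these numbers $b^* \approx 0.059$: a tight budget relative to $n_0 T (1 + CV^2)$ leaves little room for smoothing, while a looser budget pushes $b^*$ toward 1 and weights history more heavily.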
Lemma 6.2 (Batch Utility Change Decomposition). The utility change for batch $S _ { t }$ under the smoothed utility function can be expressed as:
$$
\Delta I D U _ { t } = \left\{ { \begin{array} { l l } { - ( 1 - b ) \eta _ { t } | S _ { t } | \Psi _ { t } + b | S _ { t } | \delta _ { t - 1 } ( 1 - b ^ { t - 1 } ) , } & { t \leq 5 } \\ { - ( 1 - b ) \eta _ { t } | S _ { t } | \Psi _ { t } , } & { t > 5 } \end{array} } \right.
$$
where $\Psi _ { t }$ denotes the gradient alignment term:
$$
\Psi _ { t } = \beta _ { t } ^ { 2 } \delta _ { t _ { k } } + { ( 1 - \beta _ { t } ) } ^ { 2 } \delta _ { t - 1 } + 2 \beta _ { t } { ( 1 - \beta _ { t } ) } \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos { \phi _ { t } }
$$
Proof. For any $x \in S _ { t }$ , $\Delta I D U _ { t } ( x )$ can be decomposed as:
$$
\begin{array} { l } { { \Delta I D U _ { t } ( x ) = ( 1 - b ) \Delta L ( \theta _ { t } , x ) + b ( 1 - b ) \displaystyle \sum _ { k = 0 } ^ { t - 3 } b ^ { k } \Delta L ( \theta _ { t - 2 - k } , x ) } } \\ { { \ + \ ( 1 - b ) b ^ { t - 1 } I D U ( \theta _ { 0 } , x ) } } \end{array}
$$
For the historical cumulative terms when $t \leq 5$ , we apply a finite-order approximation:
$$
\sum _ { k = 0 } ^ { t - 3 } b ^ { k } \Delta L ( \theta _ { t - 2 - k } , x ) \approx \delta _ { t - 1 } \frac { 1 - b ^ { t - 2 } } { 1 - b }
$$
The initial utility term $I D U ( \theta _ { 0 } , x )$ becomes a constant $C _ { 0 }$ after aggregation. Summing over batch $S _ { t }$ gives:
$$
\begin{array} { r l } & { \Delta I D U _ { t } = - ( 1 - b ) \eta _ { t } | S _ { t } | \Psi _ { t } + b | S _ { t } | \delta _ { t - 1 } \big ( 1 - b ^ { t - 1 } \big ) } \\ & { ~ + ( 1 - b ) b ^ { t - 1 } | S _ { t } | C _ { 0 } } \end{array}
$$
When $t > 5$ , the exponential decay term $b ^ { t - 1 }$ becomes negligible:
$$
\Delta I D U _ { t } \approx - ( 1 - b ) \eta _ { t } | S _ { t } | \Psi _ { t }
$$
Lemma 6.3 (Expected Sample Size Under MAB mechanism). In the MAB framework using EXP3 for cluster selection with smoothed utility, the expected sample size per round $\mathbb { E } [ n _ { t } ]$ satisfies:
$$
\mathbb { E } [ n _ { t } ] = \alpha \cdot ( 1 - b ) \cdot \overline { { | C | } } \cdot ( 1 + C V ^ { 2 } ) \cdot ( 1 + O ( \gamma ) )
$$
where $\alpha$ is the sampling rate, $b$ is the smoothing coefficient, $\left| C _ { i } \right|$ is the size of cluster $i$ , and $\gamma$ is the exploration rate in Eq. (13).
Proof. We analyze cluster selection probabilities in the EXP3 algorithm when used with our smoothed utility rewards. The reward signal for selecting cluster $i$ at time $t$ is:
$$
R _ { i } ^ { ( t ) } = \Delta I D U _ { t } \propto ( 1 - b ) \vert C _ { i } \vert
$$
This relationship follows directly from Lemma 6.2. Since $\vert S _ { t } \vert$ is proportional to cluster size $\left| C _ { i } \right|$ when cluster $i$ is selected, and assuming $\Psi _ { t }$ and $\eta _ { t }$ are approximately constant across clusters, we derive $R _ { i } ^ { ( t ) } \propto ( 1 - b ) | C _ { i } |$ .
From the weight update Eq. (13) and Eq. (14) in the MAB EXP3 algorithm. As the algorithm converges to steady state, the weights stabilize such that:
$$
\frac { w _ { i } ^ { ( t ) } } { \sum _ { j = 1 } ^ { K } w _ { j } ^ { ( t ) } } \propto \exp \left( \sum _ { \tau = 1 } ^ { t - 1 } \frac { \gamma } { K } \frac { R _ { i } ^ { ( \tau ) } } { p _ { i } ^ { ( \tau ) } } \right)
$$
In the fully converged regime, assuming small $\gamma$ and $\epsilon$, and sufficiently heterogeneous cluster sizes, we can derive a fixed-point equation. At this fixed point, the ratio $R _ { i } ^ { ( t ) } / p _ { i } ^ { ( t ) }$ becomes approximately constant across arms, leading to:
$$
p _ { i } ^ { ( t ) } \approx \frac { \left( 1 - \gamma \right) \left( 1 - b \right) \left| C _ { i } \right| } { \sum _ { j = 1 } ^ { K } \left( 1 - b \right) \left| C _ { j } \right| } + \frac { \gamma } { K } \approx \frac { \left( 1 - b \right) \left| C _ { i } \right| } { \sum _ { j = 1 } ^ { K } \left| C _ { j } \right| } + O ( \gamma )
$$
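For concreteness, one round of the EXP3 probability and weight computations referenced above can be sketched as follows. This is a minimal generic EXP3 step, assuming rewards normalized to $[0, 1]$; it illustrates the shape of the updates in Eqs. (13)/(14) rather than reproducing the paper's exact implementation.

```python
import math

# One EXP3 round over K arms (clusters) with exploration rate gamma.
def exp3_probs(weights, gamma):
    """Mixing of the weight distribution with uniform exploration."""
    total = sum(weights)
    K = len(weights)
    return [(1 - gamma) * w / total + gamma / K for w in weights]

def exp3_update(weights, probs, arm, reward, gamma):
    """Multiplicative update of the pulled arm's weight."""
    K = len(weights)
    new = list(weights)
    # Importance-weighted reward estimate (reward / p) keeps the update unbiased.
    new[arm] = weights[arm] * math.exp(gamma / K * reward / probs[arm])
    return new

weights = [1.0, 1.0, 1.0, 1.0]
gamma = 0.05
probs = exp3_probs(weights, gamma)                     # uniform at start
weights = exp3_update(weights, probs, arm=2, reward=0.8, gamma=gamma)
```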
The expected sample size in round $t$ is:
$$
\mathbb { E } [ n _ { t } ] = \alpha \sum _ { i = 1 } ^ { K } p _ { i } ^ { ( t ) } | C _ { i } | = \alpha ( 1 - b ) \frac { \sum _ { i = 1 } ^ { K } | C _ { i } | ^ { 2 } } { \sum _ { j = 1 } ^ { K } | C _ { j } | } + \alpha \cdot O ( \gamma ) \sum _ { i = 1 } ^ { K } | C _ { i } |
$$
Since $\begin{array} { r } { \sum _ { i = 1 } ^ { K } \left| C _ { i } \right| = N } \end{array}$ (total dataset size), we can express this as:
$$
\mathbb { E } [ n _ { t } ] = \alpha \cdot ( 1 - b ) \cdot \frac { \sum _ { i = 1 } ^ { K } | C _ { i } | ^ { 2 } } { \sum _ { i = 1 } ^ { K } | C _ { i } | } \cdot ( 1 + O ( \gamma ) )
$$
Let $\begin{array} { r } { \overline { { | C | } } ~ = ~ { \frac { 1 } { K } } \sum _ { i = 1 } ^ { K } | C _ { i } | } \end{array}$ be the average cluster size. Using the relation between the variance and the second moment, $\frac { 1 } { K } \sum _ { i = 1 } ^ { K } | C _ { i } | ^ { 2 } = \overline { { | C | } } ^ { 2 } ( 1 + C V ^ { 2 } )$ with $C V = \sigma _ { | C | } / \overline { { | C | } }$, we obtain
$$
\frac { \sum _ { i = 1 } ^ { K } | C _ { i } | ^ { 2 } } { \sum _ { i = 1 } ^ { K } | C _ { i } | } = \frac { K \, \overline { { | C | } } ^ { 2 } ( 1 + C V ^ { 2 } ) } { K \, \overline { { | C | } } } = \overline { { | C | } } \, ( 1 + C V ^ { 2 } ) .
$$
Substituting into our expected sample size formula:
$$
\mathbb { E } [ n _ { t } ] = \alpha \cdot ( 1 - b ) \cdot \overline { { | C | } } \cdot ( 1 + C V ^ { 2 } ) \cdot ( 1 + O ( \gamma ) )
$$
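The second-moment identity used in the last step can be checked numerically; the cluster sizes below are arbitrary illustrative values:

```python
# Numeric check of the identity
#   sum_i |C_i|^2 / sum_i |C_i| = mean(|C|) * (1 + CV^2),
# where CV is the (population) coefficient of variation of cluster sizes.
sizes = [120, 300, 80, 500, 1000, 60, 240]   # illustrative cluster sizes
K = len(sizes)

mean = sum(sizes) / K
var = sum((s - mean) ** 2 for s in sizes) / K   # population variance
cv2 = var / mean ** 2                           # CV^2

lhs = sum(s * s for s in sizes) / sum(sizes)
rhs = mean * (1 + cv2)
```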
# 6.2 Loss Changes in Gradient-Based Approximation
Recall that we introduced the utility function in Eq. (8) in Section 3. In this section, we approximate the loss reduction $\Delta L ^ { \prime } ( \theta _ { t } , x )$ .
Theorem 6.4 (IU Change Approximation). For any sample set $S _ { t }$ , the average uncertainty change $\Delta L ^ { \prime } ( \theta _ { t } , S _ { t } )$ when model parameters update from $\theta _ { t - 1 }$ to $\theta _ { t }$ can be approximated as:
$$
\begin{array} { l } { \delta _ { t } \equiv \Delta L ^ { \prime } ( \theta _ { t } , S _ { t } ) } \\ { = - \eta \Big [ \beta ^ { 2 } \delta _ { t _ { k } } + ( 1 - \beta ) ^ { 2 } \delta _ { t - 1 } + 2 \beta ( 1 - \beta ) \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi \Big ] } \end{array}
$$
where $\phi$ is the angle between the parameter update directions $\Delta \theta _ { t _ { k } }$ and $\Delta \theta _ { t - 1 }$, with $\cos \phi = \frac { \Delta \theta _ { t _ { k } } ^ { \top } \Delta \theta _ { t - 1 } } { \| \Delta \theta _ { t _ { k } } \| \, \| \Delta \theta _ { t - 1 } \| }$. The optimal weighting coefficient is:
$$
\beta ^ { * } = \frac { \delta _ { t - 1 } - \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi } { \delta _ { t _ { k } } + \delta _ { t - 1 } - 2 \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi }
$$
Proof. Step 1: Simplify the loss change. Assume at iteration $t$ , model parameters are updated via gradient descent: $\theta _ { t } = \theta _ { t - 1 } - \eta _ { t } \nabla L ( S _ { t } , \theta _ { t - 1 } )$ , where $\begin{array} { r } { \nabla L ( S _ { t } , \theta _ { t - 1 } ) = \frac { 1 } { | S _ { t } | } \sum _ { x \in S _ { t } } \nabla L ( x , \theta _ { t - 1 } ) } \end{array}$ is the average gradient of subset $S _ { t }$ . For each sample $x \in S _ { t }$ , the loss function $L ( \theta , x )$ is expanded using a first-order Taylor expansion at $\theta _ { t - 1 }$ :
$$
L ( \theta _ { t } , x ) \approx L ( \theta _ { t - 1 } , x ) + \nabla L ( \theta _ { t - 1 } , x ) ^ { \top } ( \theta _ { t } - \theta _ { t - 1 } )
$$
Averaging over all samples in $S _ { t }$ :
$$
\begin{array} { l } { \displaystyle \delta _ { t } = \Delta L ^ { \prime } ( \theta _ { t } , S _ { t } ) \approx - \eta _ { t } \frac { 1 } { | S _ { t } | } \sum _ { x \in S _ { t } } \nabla L ( \theta _ { t - 1 } , x ) ^ { \top } \nabla L ( \theta _ { t - 1 } , S _ { t } ) } \\ { = - \eta _ { t } \| \nabla L ( \theta _ { t - 1 } , S _ { t } ) \| ^ { 2 } } \end{array}
$$
Hence the loss reduction is governed by the squared norm of the batch gradient $\nabla L ( \theta _ { t - 1 } , S _ { t } )$ .
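This first-order relation is easy to verify on a toy objective. The sketch below uses a per-sample squared loss and an illustrative batch; for a small learning rate, the observed average loss change closely matches $-\eta \| \nabla L \|^2$:

```python
# First-order check of  delta_t ~ -eta * ||grad L(S_t)||^2
# on a toy squared loss L(theta, x) = 0.5 * (theta - x)^2.
# All values below are illustrative assumptions.
eta = 1e-3
theta = 2.0
batch = [0.5, 1.0, 1.5, 3.0]

grad = sum(theta - x for x in batch) / len(batch)   # batch-average gradient
theta_new = theta - eta * grad                      # one SGD step

def avg_loss(t):
    return sum(0.5 * (t - x) ** 2 for x in batch) / len(batch)

actual = avg_loss(theta_new) - avg_loss(theta)      # true average loss change
predicted = -eta * grad ** 2                        # first-order approximation
```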
Step 2: Approximate the gradient. To further approximate the loss change, we approximate the current gradient as a combination of the gradient from the previous step and the gradient recorded when the currently selected cluster was last chosen:
$$
\nabla L ^ { \prime } ( S _ { t } , \theta _ { t - 1 } ) \equiv \beta \cdot \nabla L ( S _ { t _ { k } } , \theta _ { t _ { k } - 1 } ) + ( 1 - \beta ) \cdot \nabla L ( S _ { t - 1 } , \theta _ { t - 2 } ) ,
$$
where $C _ { k }$ is the cluster selected at step $t$ , $t _ { k }$ is the most recent step at which $C _ { k }$ was previously selected, and $\beta \in \left[ 0 , 1 \right]$ is a weighting coefficient measuring the relative importance of cluster-specific historical information versus the recent optimization direction.
Step 3: Solving optimal $\beta ^ { * }$ to obtain final IU Change Approximation $\Delta L ^ { \prime } ( \theta _ { t } , S _ { t } )$ . The $\beta ^ { * }$ can be solved by minimizing the difference between the current gradient and the approximate gradient.
$$
J ( \beta ) = \| \nabla L _ { t } - ( \beta \nabla L _ { t _ { k } } + ( 1 - \beta ) \nabla L _ { t - 1 } ) \| ^ { 2 }
$$
Using the gradient descent update rule $\Delta \theta _ { t } = - \eta \nabla L _ { t }$ , we rewrite in terms of parameter updates:
$$
J ( \beta ) = \frac { 1 } { \eta ^ { 2 } } \left\| \Delta \theta _ { t } - \left( \beta \Delta \theta _ { t _ { k } } + \left( 1 - \beta \right) \Delta \theta _ { t - 1 } \right) \right\| ^ { 2 } .
$$
Since $\| \Delta \theta _ { t _ { k } } \| ^ { 2 } \approx - \eta \, \delta _ { t _ { k } }$, $\| \Delta \theta _ { t - 1 } \| ^ { 2 } \approx - \eta \, \delta _ { t - 1 }$, and $\cos \phi = \frac { \Delta \theta _ { t _ { k } } ^ { \top } \Delta \theta _ { t - 1 } } { \| \Delta \theta _ { t _ { k } } \| \, \| \Delta \theta _ { t - 1 } \| }$, setting $\begin{array} { r } { \frac { d J } { d \beta } = 0 } \end{array}$ yields the optimal coefficient:
$$
\beta ^ { * } = \frac { \delta _ { t - 1 } - \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi } { \delta _ { t _ { k } } + \delta _ { t - 1 } - 2 \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi } .
$$
Substituting $\beta ^ { * }$ back into the loss-change expression gives the final approximation:
$$
\begin{array} { r } { \delta _ { t } = - \eta \Big [ ( { \beta ^ { * } } ) ^ { 2 } \delta _ { t _ { k } } + ( 1 - \beta ^ { * } ) ^ { 2 } \delta _ { t - 1 } + 2 \beta ^ { * } ( 1 - \beta ^ { * } ) \sqrt { \delta _ { t _ { k } } \delta _ { t - 1 } } \cos \phi \Big ] . } \end{array}
$$
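Note that under the substitutions above, the stated $\beta^*$ coincides with the minimizer of the squared norm of the blended update $\| \beta \Delta \theta_{t_k} + (1 - \beta) \Delta \theta_{t-1} \|^2$. A numeric check with two arbitrary update vectors (illustrative values, not real training quantities):

```python
# Sanity check: the closed-form beta* equals the minimizer of
# ||beta * du_k + (1 - beta) * du_prev||^2, using the norm/cosine
# identities above. The vectors are arbitrary illustrative choices.
du_k = [1.0, 0.2]     # stands in for Delta theta_{t_k}
du_prev = [0.3, 0.9]  # stands in for Delta theta_{t-1}

dot = sum(a * b for a, b in zip(du_k, du_prev))
nk = sum(a * a for a in du_k)        # ||du_k||^2
npv = sum(b * b for b in du_prev)    # ||du_prev||^2

beta_star = (npv - dot) / (nk + npv - 2 * dot)   # closed form

def blended_norm2(beta):
    v = [beta * a + (1 - beta) * b for a, b in zip(du_k, du_prev)]
    return sum(x * x for x in v)

# Fine grid search agrees with the closed form.
grid = [i / 10000 for i in range(10001)]
beta_grid = min(grid, key=blended_norm2)
```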
# 7 Experiments
# 7.1 Experimental Setup
Data Pool. To simulate realistic and diverse training scenarios, we construct a large-scale and heterogeneous data pool comprising approximately 600,000 samples. Our dataset integrates multiple well-established public sources, including WizardLM (ShareGPT) [40], WizardLM (Alpaca) [40], UltraChat [19], Standard Alpaca [53], Unnatural Instructions [26], Code Alpaca [12], MATH [25], and GSM8K [18]. We closely follow Tulu [56] to process these datasets. All methods select data from this pool for LLMs’ instruction tuning.
Benchmarks and Metrics. We comprehensively evaluate our method across four representative tasks that reflect critical capabilities required by modern LLMs.
• Code Generation. We use the extensively utilized HumanEval benchmark [15], consisting of 164 coding problems, to evaluate the code-writing capabilities of LLMs. Performance is measured via the widely adopted pass@10 metric.
• Math Reasoning. We use GSM8K [18] to evaluate the mathematical abilities of models, which contains 1319 grade school math test problems. We adopt an 8-shot setting and evaluate performance using the exact match accuracy metric.
• Multi-task Knowledge and Reasoning. We evaluate on MMLU [24], which consists of a range of multiple-choice academic questions. We report accuracy as the metric.
• Cross-lingual Question Answering. To assess multilingual understanding, we utilize TYDIQA [17], featuring questions from 11 diverse languages. We report standard F1 scores for both passage selection and answer span extraction tasks.
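For reference, pass@k on HumanEval is commonly computed with the unbiased estimator of Chen et al. (2021): draw $n$ samples per problem, count the $c$ correct ones, and average $1 - \binom{n-c}{k} / \binom{n}{k}$ over problems. The paper does not state its exact evaluation code, so the sketch below follows that standard recipe:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct."""
    if n - c < k:          # every size-k subset contains a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 20 generated samples for one problem, 5 of them correct, k = 10
score = pass_at_k(20, 5, 10)
```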
Baselines. We study several existing state-of-the-art methods as our baselines for data selection.
(1) Full Data: Train the model using the entire data pool.
(2) Random Selection [60]: Randomly selects training samples.
(3) Instruction-Following Difficulty (IFD) [32]: Selects samples based on a complexity metric measuring instruction-following difficulty.
(4) Perplexity (PPL) [31]: Prioritizes uncertain samples with high perplexity.
(5) K-Center-Greedy (KCG) [48]: Maximizes diversity by iteratively choosing the sample farthest from the current selection.
(6) SelectIT [37]: Selects samples via uncertainty-aware self-reflection during instruction tuning.
(7) Token Length (TL) [60]: Selects samples with the longest response lengths.
(8) ZIP [62]: Prompts a strong LLM to estimate quality, relevance, and complexity scores and selects samples accordingly.
Table 1: Comparison of Performance across Different Benchmarks for Various Methods.
Implementation Details of LEAD. We evaluate LEAD using three foundational models (LLaMA-3.1-8B, Mistral-7B, and Qwen2-7B) and utilize Low-Rank Adaptation (LoRA) [27] for parameter-efficient fine-tuning. The maximum learning rate is set to $2 \times 1 0 ^ { - 5 }$ with a linear decay schedule, and the batch size is 8. We also fix the maximum input sequence length to 3080. Models are trained for 4 epochs on 4 H800 GPUs. For the MAB setting, the number of arms is set to 7. The maximum sampling budget of LEAD is $1 5 K$ .
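The linear decay schedule mentioned above can be sketched as follows; warmup and framework-specific details are omitted, and the step counts are illustrative assumptions rather than the authors' exact configuration:

```python
# Minimal sketch of a linear-decay learning-rate schedule:
# peak LR 2e-5 decaying linearly to 0 over training.
PEAK_LR = 2e-5

def linear_decay_lr(step: int, total_steps: int, peak: float = PEAK_LR) -> float:
    """LR at a given step, clamped to [0, peak]."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return peak * (1.0 - frac)

total = 1000   # illustrative total step count
lrs = [linear_decay_lr(s, total) for s in (0, 500, 1000)]
```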
# 7.2 Exp-1: Overall Performance
We first evaluate LEAD and all baseline methods using the same budget of $1 5 K$ samples, corresponding to $2 . 5 \%$ of the data pool.
Table 1 summarizes the evaluation results across various benchmarks (MMLU, TYDIQA, GSM8K, and HumanEval) and model architectures (LLaMA3.1-8B, Mistral-7B, and Qwen2-7B). Overall, LEAD consistently outperforms state-of-the-art baselines, demonstrating its effectiveness. Note that $\Delta$ denotes the performance improvement of LEAD compared to the Random baseline.
Consistent Effectiveness of LEAD across LLMs. LEAD demonstrates remarkable effectiveness across different model architectures: For LLaMA3.1-8B, it achieves an average score of 66.62, outperforming full dataset training (60.31) by a substantial $+ 6 . 3 1$ points. Similar gains are seen with Mistral-7B $( + 1 0 . 7 5 )$ and Qwen2-7B $\left( + 6 . 0 9 \right)$ . This cross-architecture consistency confirms that LEAD reliably selects high-value samples beneficial for diverse LLMs.
$2 . 5 \%$ of Data is All You Need. Remarkably, LEAD achieves these substantial gains using only $2 . 5 \%$ of the entire dataset, challenging the conventional assumption that larger datasets inherently produce superior results. Specifically, our method outperforms full dataset training (Full Data baseline) across all model and benchmark settings. For example, on the challenging TYDIQA benchmark, our approach yields remarkable gains of 22.33, 29.15, and 12.63 points of improvement across the three models, respectively, demonstrating that carefully selected instruction samples can lead to more focused and effective learning.
Outperforming State-of-the-art Baselines. LEAD outperforms all baseline selection methods with consistent effectiveness across models and benchmarks. While some baselines perform well in specific cases (e.g., SelectIT on LLaMA3.1-8B and PPL on Qwen2-7B), they fall short in other settings. In contrast, our approach maintains consistently high performance across the board. Notably, on the HumanEval benchmark for code generation, LEAD achieves top performance across all models.
# 7.3 Exp-2: The Efficiency of LEAD
We evaluate the efficiency of LEAD compared to baseline methods (PPL, KCG, IFD, SelectIT, and ZIP) across four benchmarks. Note that we exclude Random and TL from this comparison, as these methods incur minimal computational overhead and were shown to perform significantly worse in Exp-1. We report the overall latency of all methods with one round of selection iteration on average.
Figure 5: Comparison of Performance ($y$-axis) and Latency ($x$-axis) across six data selection methods.
Figure 6: Inference Time (Full Data) and Training Time (Selected Data) per Iteration across Different Methods.
Exp-2.1: Performance vs. Latency. We compare performance and inference latency (on a $\log _ { 2 }$ scale) across different methods. As shown in Figure 5, LEAD (marked with a star) consistently achieves the best performance-latency trade-off, occupying the upper-left region of each plot. LEAD delivers roughly $5 \times$ faster inference compared to baselines, while maintaining top performance on benchmarks like TYDIQA, GSM8K, and HumanEval.
Exp-2.2: Analysis of Latency Composition. Figure 6 compares latency components (inference and training) of different methods. Inference time constitutes the primary computational bottleneck for traditional methods (e.g., IFD: 98.0 hours, ZIP: 78.0 hours), due to repeated full-dataset inference at each selection iteration. In contrast, LEAD requires inference only once (10.3 hours) for initial selection, eliminating subsequent inference overhead via inference-free IDU estimation.
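A back-of-the-envelope check of the quoted inference hours (treating the figures as total full-pool inference costs):

```python
# Latency numbers quoted above: LEAD performs full-pool inference once
# (10.3 h), while IFD (98.0 h) and ZIP (78.0 h) repeat it per iteration.
lead_inference_h = 10.3
baseline_inference_h = {"IFD": 98.0, "ZIP": 78.0}

# How many times more inference time each baseline spends than LEAD's
# single pass: IFD ~9.5x, ZIP ~7.6x.
speedup = {name: h / lead_inference_h for name, h in baseline_inference_h.items()}
```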
# 7.4 Exp-3: Static vs. Iterative Data Selection
These experiments validate the necessity of iterative data selection.
Exp-3.1: Dynamics of Sample Utility over Training. We first track the overlap of samples initially identified as valuable (iteration 0) with the top- $k$ samples in later iterations (1, 4, 7, and 10). As illustrated in Figure 7, the coverage rate for $k = 1 5 , 0 0 0$ increases initially (from 0.77 to 0.98 at iteration 4), but significantly declines (to 0.67) in later iterations. This clearly demonstrates the dynamic nature of sample utility, emphasizing the importance of continuously adapting data selection to the evolving state of the model.
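The coverage statistic tracked here can be sketched as the fraction of iteration-0 top-$k$ samples that remain in the top-$k$ at a later iteration; the utility scores below are illustrative stand-ins, not real measurements:

```python
# Coverage of top-k samples between two utility rankings.
def topk_coverage(scores_0, scores_t, k):
    """Fraction of iteration-0 top-k indices still in top-k at iteration t."""
    top0 = set(sorted(range(len(scores_0)),
                      key=scores_0.__getitem__, reverse=True)[:k])
    topt = set(sorted(range(len(scores_t)),
                      key=scores_t.__getitem__, reverse=True)[:k])
    return len(top0 & topt) / k

scores_iter0 = [0.9, 0.1, 0.8, 0.4, 0.7, 0.2]   # illustrative utilities
scores_iter4 = [0.85, 0.3, 0.2, 0.6, 0.9, 0.1]
coverage = topk_coverage(scores_iter0, scores_iter4, k=3)
```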
Figure 7: Coverage of Top-$k$ Samples between Iteration $t$ and Iteration 0.
Table 2: Comparison between IU and IDU. LEAD (IDU) refers to our method using IDU as the utility function for calculating sample utility. One-round and Iterative LEAD (IU) denote non-iterative and iterative variants of the IU approach.
Exp-3.2: Performance of Static and Iterative Selection. We further compare the performance between one-round (static) and iterative selection strategies (Table 2). Iterative LEAD (IU) consistently surpasses One-round LEAD (IU), achieving an average improvement of 1.17 points (64.33 vs. 63.16). This performance gap confirms that iterative data selection is essential, as the utility of training samples dynamically changes throughout model training.
# 7.5 Exp-4: Ablation Study of LEAD
Exp-4.1: Ablation Study on LEAD Components. To validate the effectiveness of our proposed framework, we conduct an ablation study on the LLaMA3.1-8B model by systematically removing individual modules of our LEAD framework. As shown in Table 3, removing any module leads to a performance drop: average metric decreases by 1.78 (MAB), 1.23 (TC), and 3.27 (IDU). The IDU module has the most pronounced impact, particularly on TYDIQA (-7.36), underscoring its role in identifying informative samples. Removing the TC module also degrades performance across all benchmarks, confirming the value of semantic clustering. The removal of the MAB module significantly affects performance on the challenging GSM8K (-4.48), demonstrating its role in balancing exploration and exploitation. Overall, the ablation study highlights the critical contribution of each component within the LEAD framework.
Table 3: Ablation Study of Different Modules (LLaMA3.1-8B)
Table 4: Ablation Study of LEAD Framework
Exp-4.2: The Effectiveness of IDU Utility. To demonstrate the effectiveness of our proposed Instance-Level Dynamic Uncertainty (IDU) mechanism, we conducted comprehensive experiments examining its performance from two perspectives.
First, to verify that IDU effectively smooths the instability issues during iterative selection, we compared LEAD (IDU) against LEAD (IU) on LLaMA3.1-8B. As shown in Table 2, LEAD (IDU) consistently outperforms iterative LEAD (IU) across all benchmarks with a substantial average improvement of 3.06 points (66.62 vs. 63.56). This confirms that IDU’s design—combining current loss signals and historical exponential smoothing—effectively addresses the loss instability challenge inherent in conventional utility functions.
Second, to validate IDU’s superiority as a utility function, we compared it against alternative utility metrics while keeping other LEAD components intact. The results in Table 4 show that replacing IDU with conventional metrics like PPL leads to dramatic performance degradation (from 66.62 to 59.59), with particularly severe reductions on TYDIQA $( - 1 3 . 8 4 \% )$ . Even when compared to the more advanced IFD metric, IDU maintains a substantial advantage (66.62 vs. 60.62). This consistent performance advantage across diverse benchmarks highlights IDU’s robustness as a selection criterion that can reliably identify valuable training samples across various domains and task structures in the iterative selection process.
Figure 8: Avg Performance by Varying Data Scaling.
Exp-4.3: The Effectiveness of MAB Module. To assess the MAB module’s contribution, we compare it against three baselines: (1) Random-LEAD: random selection of difficulty-aware clusters per iteration; (2) Easy2Hard-LEAD: iterative training from easy to hard clusters based on difficulty scores; and (3) Hard2Easy-LEAD: iterative training from hard to easy. For a fair comparison, all modules except the training strategy remained consistent with the LEAD.
As shown in Table 4, our MAB training schedule significantly outperforms the other three strategies, confirming its effectiveness in dynamically balancing exploration and exploitation. By adaptively selecting difficulty-aware clusters, MAB enhances both overall performance and generalizability.
In contrast, Easy2Hard-LEAD yields the lowest score (63.96), highlighting the limitations of traditional curriculum learning in instruction tuning, as a fixed progression from easy to hard can hinder learning dynamics and lead to premature convergence. Hard2Easy-LEAD performs slightly better (64.36), yet still underperforms compared to MAB, indicating that prioritizing difficult clusters alone does not guarantee optimal results.
Exp-4.4: The Effectiveness of Reward Function. We assess the effectiveness of our proposed IDU-based reward by comparing it with two widely-used reward metrics: Instruction-Following Difficulty (IFD) [32] and Perplexity (PPL) [31].
As shown in Table 4, our IDU-based reward consistently achieves the best overall performance (average 66.62), surpassing IFD (63.44) and PPL (64.13). This demonstrates that directly measuring the reduction in instance-level dynamic uncertainty provides more effective guidance for cluster selection than traditional metrics.
# 7.6 Exp-5: Evaluation of Optimal Data Scaling
To examine the impact of data selection strategies on data scaling effectiveness, we conduct experiments using subsets with varying budgets. As illustrated in Figure 8, LEAD consistently presents higher average performance than alternative selection methods across all data quantities, achieving peak performance with only 15K samples. Notably, we observe a non-linear performance curve: gains taper and eventually decline beyond a certain data threshold, which reveals a crucial insight: “alignment-suitable data” is inherently limited. This finding challenges the conventional wisdom that more data automatically yields better results, underscoring the critical importance of strategic data selection over mere quantity.
Figure 9: Performance on Various Sample Ratios of Each Iteration (LLaMA3.1-8B).
Figure 10: Parameter Sensitivity Analysis
# 7.7 Exp-6: Parameter Sensitivity Analysis
In this experiment, we conduct parameter sensitivity analysis to reveal how hyperparameters affect LEAD’s performance across different tasks, providing insights into optimizing the framework.
Effect of Sampling Threshold $\alpha$ of LEAD. As shown in Figure 9, performance peaks when $\alpha$ is between 0.15 and 0.20, reaching a balance between iteration quantity and quality. Higher $\alpha$ values yield more samples per round but fewer iterations, limiting adaptability. Lower values allow more iterations but provide weaker signals.
Effect of Smoothing Coefficient $b$ of IDU. Figure 10(a) shows optimal performance at $b { = } 0 . 1$ , achieving a favorable trade-off between historical and current utility signals. This sweet spot effectively leverages historical information to stabilize selection while remaining responsive to recent model changes. Lower values $( b < 0 . 1 )$ overemphasize current utility fluctuations, increasing susceptibility to noise, while higher ones $( b > 0 . 2 )$ overweight historical information, reducing responsiveness.
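The historical component of IDU behaves like standard exponential smoothing, $s_t = b \, s_{t-1} + (1 - b) \, u_t$; a small sketch on an illustrative noisy utility sequence shows the trade-off described above (a schematic of the smoothing behavior, not the paper's exact update):

```python
# Exponential smoothing with coefficient b: larger b weights history more.
def smooth(utilities, b):
    s = utilities[0]
    out = [s]
    for u in utilities[1:]:
        s = b * s + (1 - b) * u
        out.append(s)
    return out

raw = [1.0, 0.2, 0.9, 0.1, 0.8, 0.3]   # illustrative noisy utility signal
light = smooth(raw, b=0.1)             # tracks the signal closely, noisier
heavy = smooth(raw, b=0.8)             # much smoother, but lags behind
```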
Effect of Exploration Rate $\gamma$ of MAB. As shown in Figure 10(b), the exploration-exploitation tradeoff in our MAB algorithm shows optimal performance at moderate exploration rates $( \gamma = 0 . 0 5 { - } 0 . 0 7 )$. Minimal exploration $( \gamma = 0 . 0 1 )$ limits discovery of new clusters, whereas excessive exploration $( \gamma = 0 . 1 2 )$ hinders focus on promising clusters.
# 8 Related Work
Data Selection for Instruction Tuning. Previous works on data selection [9, 23, 59, 65] can be broadly categorized into two key approaches: model-agnostic methods and model-aware methods.
Model-agnostic methods operate independently of the target model. They include rule-based approaches [5, 6, 28, 40, 45, 49, 66], which are computationally efficient but lack semantic understanding; advanced model-based methods [13, 14, 35], such as GPT-4 [1], which provide nuanced assessment at high computational cost; and proxy model-based methods [31, 61], which balance efficiency and quality. However, these methods cannot adapt to the specific learning characteristics of the target model. Model-aware methods [5, 7, 8, 36, 41, 42, 64] address this limitation by customizing selection based on the model’s learning dynamics, though they introduce higher computational costs through required model inference or fine-tuning. In contrast, LEAD proposes a two-stage adaptive approach that efficiently combines model-aware adaptiveness with zero computational overhead, effectively addressing the challenge of balancing effectiveness and efficiency in instruction tuning data selection.
Sample Utility Scores. Sample utility scoring plays a critical role in data selection, employing various predefined metrics [7, 47, 57]. Perplexity-based metrics [31, 44] favor simpler patterns, while diversity-aware selection [58, 63] ensures broad coverage but depends heavily on pre-trained embedding quality. Quality-based metrics incorporating influence scoring [16, 21, 29, 59] and external model evaluation [33] are theoretically sound but require expensive gradient computations. Complexity-based selection [32, 38] risks including noisy samples that hinder convergence, while uncertainty-driven metrics [22, 37] suffer from instability due to loss landscape irregularities. A common limitation across these approaches is their significant computational overhead. Although recent efforts have improved data efficiency in utility estimation, they still incur additional costs. We propose IDU, a novel utility function achieving zero-cost estimation while maintaining selection effectiveness.
Abstract. Instruction tuning has emerged as a critical paradigm for improving the capabilities and alignment of large language models (LLMs). However, existing iterative model-aware data selection methods incur significant computational overhead, as they rely on repeatedly performing full-dataset model inference to estimate sample utility for subsequent training iterations, creating a fundamental efficiency bottleneck. In this paper, we propose LEAD, an efficient iterative data selection framework that accurately estimates sample utility entirely within the standard training loop, eliminating the need for costly additional model inference. At its core, LEAD introduces Instance-Level Dynamic Uncertainty (IDU), a theoretically grounded utility function combining instantaneous training loss, gradient-based approximation of loss changes, and exponential smoothing of historical loss signals. To further scale efficiently to large datasets, LEAD employs a two-stage, coarse-to-fine selection strategy, adaptively prioritizing informative clusters through a multi-armed bandit mechanism, followed by precise fine-grained selection of high-utility samples using IDU. Extensive experiments across four diverse benchmarks show that LEAD significantly outperforms state-of-the-art methods, improving average model performance by 6.1%-10.8% while using only 2.5% of the training data and reducing overall training time by 5-10x.
Categories: cs.LG, cs.AI, cs.DB
# 1 Introduction
In recent years, Vision-Language Models (VLMs) [1, 2, 3, 4, 5] have achieved remarkable progress in visual-linguistic understanding, demonstrating strong performance and generalization capabilities. Building on this success, there is growing interest in extending VLMs to end-to-end robotic control by developing generalist robot policies—commonly referred to as Vision-Language-Action (VLA) models [6, 7, 8, 9, 10, 11, 12]. A key challenge in this direction is aligning the vision-language representation space with the robotic action space. To address this, a widely adopted approach [7, 10] is to directly fine-tune pretrained VLMs using large-scale expert action data, mapping visual observations and language instructions to the corresponding actions.
However, this direct fine-tuning paradigm suffers from significant limitations due to the spatial and temporal domain gaps inherent in the alignment process, leading to substantial data inefficiency. As shown on the left of Fig. 1, VLMs are typically pretrained on large-scale visual question answering datasets, where their feature representations primarily capture high-level semantics—e.g., identifying an object as a "banana." In contrast, robotic control, as illustrated on the right of Fig. 1, requires fine-grained spatial reasoning. For instance, beyond recognizing a banana, the model must accurately infer its 3D position to enable successful grasping. This mismatch between high-level semantic understanding and the need for precise spatial localization presents a significant challenge in aligning VLMs with robotic tasks.
Figure 1: The spatial and temporal gaps in adapting VLMs to VLAs. VLMs are pretrained with large-scale VQA datasets to observe current high-level semantics in images, while VLAs are designed to predict low-level future actions in 3D space. The spatial-temporal gap poses challenges to the alignment process and results in data inefficiency in developing VLAs.
Furthermore, while VLMs excel at interpreting the current semantic content of an image, VLAs must reason over time to forecast and plan future robotic actions. This introduces a current-to-future temporal gap, further complicating the alignment process. Combined with the spatial gap, these challenges necessitate large quantities of expert action data to effectively bridge the discrepancy during VLM fine-tuning. Moreover, this heavy data requirement significantly increases the burden of human data collection, impeding the rapid development of VLAs. In scenarios with limited expert data, the risk of overfitting becomes more pronounced, potentially degrading generalization performance and restricting the model’s applicability to novel tasks or environments.
To address the aforementioned spatial and temporal gaps, we propose a novel training paradigm, ROSA—RObot State estimation for vision-language and action Alignment. ROSA decomposes the alignment process into two complementary components: one dedicated to estimating the robot’s current state, and the other focused on predicting future actions. Concretely, in addition to using standard expert action data for future action prediction, we introduce a novel form of robot state estimation data, which supervises the model to infer the robot’s current state from the given image. The robot state includes the 3D position and orientation of the end-effector, as well as the gripper’s open/closed status.
The robot state estimation task serves two key purposes. First, it explicitly enhances the model’s ability to capture fine-grained spatial information. Second, it complements expert demonstrations by covering a broader portion of the action space, including regions underrepresented in expert data. This dual role helps bridge the spatial gap between pretrained VLMs and VLA models. Furthermore, by requiring the model to infer the robot’s current state, ROSA provides a clearer and more structured spatial context, which in turn facilitates more accurate forecasting of future actions—thereby helping to close the temporal gap as well.
Collecting robot state data can be fully automated with virtually no additional human labor. Specifically, the robot performs plausible random actions via automated scripts within a predefined environment, during which observations and corresponding states are recorded. To enable joint training with expert data in VLA models, we structure the robot state estimation data to share the same format as expert demonstrations. This low-cost and easily scalable data acquisition approach makes ROSA a practical and scalable solution for aligning vision-language and action spaces, facilitating more data-efficient training of VLAs and ultimately improving performance.
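As an illustration of such a shared format, a single state-estimation record might look like the following. The field names, instruction text, and state layout (end-effector position, quaternion orientation, binary gripper flag) are hypothetical choices for exposition, not the paper's actual schema:

```python
# Hypothetical robot state estimation sample, structured to mirror an
# expert-demonstration record so both can be mixed in one training set.
state_sample = {
    "image": "obs/episode_0001/step_042.png",        # current observation
    "instruction": "What is the current state of the robot?",
    "target": {
        "position_xyz": [0.32, -0.11, 0.25],         # end-effector position (m)
        "orientation_quat": [0.0, 0.0, 0.707, 0.707],  # end-effector orientation
        "gripper_open": 1,                           # 1 = open, 0 = closed
    },
}
```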
We conduct extensive experiments to evaluate the effectiveness of ROSA. Using a standard VLA model, we perform controlled studies in both the RLBench simulation environment and on a real-world WidowX robot platform. Our results show that ROSA significantly enhances VLA performance and generalization ability. The improvement is especially significant in real-world low-data scenarios, where ROSA even doubles the success rate compared to the baseline.
To summarize, our contributions are as follows:
• We propose a novel training paradigm named ROSA that harnesses robot state estimation data to achieve better alignment between the vision-language and action spaces.
• We propose a simple yet effective solution to create the robot state estimation data that significantly enhances VLA’s data efficiency without requiring additional human collection efforts.
• We conduct extensive experiments on both the RLBench simulation and a real-world WidowX platform, demonstrating that ROSA effectively enhances current VLA models and achieves superior performance compared to previous methods.
# 2 Related Works
# 2.1 Vision-Language-Action Models.
Building a generalist policy [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] has long been a central goal in robotic manipulation. In recent years, the impressive performance and generalization capabilities demonstrated by Vision-Language Models (VLMs) [25, 26, 1, 2, 3, 4, 5] have inspired researchers to develop robot policies based on VLMs, commonly referred to as Vision-Language-Action (VLA) models [6, 7, 8, 9, 10, 11, 12, 27, 28]. These models have shown great promise in enabling robots to perform a wide range of tasks with enhanced generalization ability. Among them, RT-2 [10] stands out as a typical pioneering work, which is jointly trained on web-scale VQA data and robotic demonstrations, showcasing impressive performance. Following it, OpenVLA [7] adopts a similar approach as one of the earliest open-source VLA models, trained on the large-scale Open X-Embodiment dataset [8]. LLARVA [29] collects trajectory annotations as auxiliary tasks to enhance the accuracy of action prediction. CogACT [6] introduces architectural modifications to generate continuous action chunks, employing a large diffusion action head.
# 2.2 Data-Efficient Vision-Language-Action Models
The high data demand of VLA models is a well-recognized challenge. Some prior works [10, 8] aim to address this issue by incorporating large amounts of expert demonstrations. For example, RT-X [8] collects large-scale cross-platform expert trajectories at great human cost to train a general and high-performing VLA model. Some works [30, 31] introduce additional annotated data to help with training. For instance, TraceVLA [30] injects additional supervision by introducing external detectors to generate visual trajectory annotations. LLaRA [31] leverages meta annotations such as bounding boxes to construct auxiliary spatial reasoning tasks, thereby improving model performance under low-data conditions. Another line of work [32, 33, 34] reduces the model size to mitigate data dependency. For example, TinyVLA [32] focuses on building a fast VLA with a one-billion-parameter LLM backbone. In addition, works like [35] adopt atomic skill library construction to improve data efficiency by decomposing complex tasks into reusable primitive skills. Our approach is also data-efficient, but with a key difference: we do not rely on any additional human labor for collection or extra labeling modules, nor do we compromise model capacity.
# 2.3 Robot State Estimation
Robot state estimation is a task in which, given input RGB images or other information such as depth, the model is required to estimate the position and orientation of a robot or its components in 3D space. This is a critical topic in both robotics and computer vision, with applications such as autonomous navigation [36, 37] and human-robot interaction [38, 39, 40], and several representative works [41, 42, 43, 44]. Our work draws inspiration from this task but differs in its objectives and applications. Specifically, our model only needs to predict the pose of the end-effector and the gripper’s open/close status, which are necessary for robot manipulation tasks, without needing to estimate full-body configurations. Furthermore, the goal of our estimation is not the task itself, but rather to serve as auxiliary supervision for our downstream robot control task, which enhances the spatial awareness of VLAs and improves their performance in action prediction.
Figure 2: Illustration of the two types of data used by ROSA to train VLA models. (a). Expert action prediction data, which requires human effort to collect. (b). Robot state estimation data, which is obtained automatically without human collection by letting the robot move randomly.
# 3 Method
In this section, we present a comprehensive overview of our proposed ROSA. We first introduce the training data ROSA uses in Sec. 3.1, especially the robot state estimation data. Then we describe the model architecture of ROSA in Sec. 3.2. Finally, we provide model and training details in Sec. 3.3.
# 3.1 ROSA Training Data
The primary objective of ROSA is to effectively adapt a pretrained VLM to the robotic action space, enabling the model to directly generate control signals for manipulating robots based on visual observations and language instructions. To better align vision-language representations with robot actions, ROSA addresses the alignment problem through two complementary components: one for anticipating upcoming actions, and the other for accurately capturing the robot’s current state. This decomposition explicitly encourages the model to develop a strong capability for 3D spatial understanding and accurate self-perception, both of which are essential foundations for effective action prediction. As illustrated in Fig. 2, ROSA leverages two distinct types of data, expert action prediction data and robot state estimation data, to jointly fine-tune the pretrained VLM.
Expert action data: As shown in Fig. 2 (a), to obtain expert actions, human operators are required to carefully collect demonstrations by manually guiding the robot to target positions to complete different tasks. Formally, we assume an expert action dataset $\mathcal{D} = \{D_1, D_2, \ldots, D_m\}$ of $m$ demonstrations across various tasks, where each demonstration $D_i = \{(o_1^i, a_1^i, l^i), (o_2^i, a_2^i, l^i), \ldots, (o_t^i, a_t^i, l^i), \ldots\}$ contains a variable number of instruction-observation-action pairs. Here, $o_t^i$, $a_t^i$, $l^i$ denote the visual observation, expert action, and language instruction for the $i$-th demonstration at timestep $t$, respectively. The action $a_t$ consists of a 7-DoF (degrees of freedom) control signal required to manipulate the robot’s end-effector, including 3D position, orientation, and the gripper’s open/close status:
$$
a_t = [x, y, z, \phi, \theta, \psi, g],
$$
where $(x, y, z)$ denotes the gripper’s position, $(\phi, \theta, \psi)$ denotes the Euler angles, and $g \in \{0, 1\}$ denotes the opening status of the gripper (1 for open).
Robot State Estimation Data: While expert action data can directly supervise the VLA model to learn the action prediction objective, collecting such data is costly and labor-intensive. In contrast, we propose to collect robot state estimation data through a fully automated pipeline. This type of data can complement expert action data in training the VLA model without requiring additional human effort, reducing the model’s reliance on large amounts of manually collected demonstrations.
Our robot state data collection pipeline proceeds as follows: as shown in Fig. 2 (b), to collect the robot states, we begin by initializing a specific scene configuration. For example, in a put-banana-into-plate scenario, we place a plate and a banana randomly on the table. Based on the scene setup, we choose a feasible action space to ensure that the robot’s movements remain within safe bounds, avoiding collisions with objects in the environment that may lead to positioning errors. By allowing the robot to perform random movements within this constrained space and recording the robot’s state at each time step, we can collect many observation-state pairs, forming a robot state estimation dataset.
To ensure that the robot state data can be effectively integrated with expert action data for joint training of the VLA model, we collect robot states that capture the same 7 degrees of freedom as the expert actions, including the end-effector’s position, orientation, and gripper status. Therefore, in terms of format, the action and the state are identical. The key difference lies in their semantics: the action represents the target state that the robot should reach at the next time step, while the state describes the robot’s configuration at the current time step. This structural homogeneity between state data and expert action data ensures that the model is trained with a consistent target domain across both data types. Moreover, to construct a structurally consistent dataset, we pair each robot state sample with a uniform language instruction: "What is the current state of the robot?", ensuring that the format mirrors that of the expert action data and enabling unified training under the VLA framework. Formally, we construct a robot state estimation dataset $\mathcal{S} = \{e_1, e_2, \ldots, e_k\}$ of $k$ pairs, where $e_t = (o_t, s_t, l_{\mathrm{state}})$, and $o_t$, $s_t$, $l_{\mathrm{state}}$ represent the observation, the robot state at timestep $t$, and the state language instruction, respectively.
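The automated collection loop described above can be sketched as follows; `DummyEnv` and its methods are hypothetical stand-ins for a real robot or simulator interface, not part of the paper's codebase:

```python
import random

STATE_INSTRUCTION = "What is the current state of the robot?"

class DummyEnv:
    """Stand-in environment; a real pipeline would wrap the robot/simulator."""
    def __init__(self):
        self.pose = [0.0] * 7

    def move_to(self, target):
        self.pose = list(target)

    def capture_image(self):
        return "image"          # placeholder for an RGB frame o_t

    def read_state(self):
        return list(self.pose)  # 7-DoF state: pose + gripper status

def collect_state_dataset(env, num_steps, bounds):
    """Let the robot move randomly within safe bounds and record
    (observation, state, instruction) triples -- no human labor needed."""
    samples = []
    for _ in range(num_steps):
        # Sample a random 7-DoF target inside the constrained action space.
        target = [random.uniform(lo, hi) for lo, hi in bounds]
        env.move_to(target)
        samples.append((env.capture_image(), env.read_state(), STATE_INSTRUCTION))
    return samples
```

Each collected triple mirrors the $(o_t, s_t, l_{\mathrm{state}})$ format used for joint training with expert action data.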
Figure 3: Overview of the ROSA architecture. ROSA adopts a classic VLM architecture. Image observations are encoded into image tokens by a vision encoder and a projector. These image tokens are combined with text tokens and fed into an LLM. The model is trained with an autoregressive next-token prediction objective.
# 3.2 Model Architecture
In this section, we detail the model architecture of ROSA, which is built upon standard LLaVA [1]. We elaborate on three key components: the vision-language modules, robotic tokenization and de-tokenization, and the training objective.
Vision-language Modules: As illustrated in Fig. 3, the model takes two types of input: a language instruction $l$ that specifies the task the robot is expected to perform and a visual observation $o_t$ consisting of a single front-view RGB image. The language instruction is encoded by a text encoder into a sequence of text tokens $Z_t$. Meanwhile, the visual observation is processed by a vision encoder $f_{\mathrm{vis}}$ to extract visual features $H_v$. These visual features are then mapped into the same embedding space as the text tokens by a projector $f_{\mathrm{proj}}$, resulting in visual tokens $Z_v$. The visual tokens and text tokens are then concatenated together and fed into a large language model $f_{\mathrm{llm}}$, which performs causal reasoning over the input tokens and outputs a sequence of robot tokens $R$. The whole process can be formulated as follows:
$$
R = f_{\mathrm{llm}}([Z_v, Z_t]), \quad Z_v = f_{\mathrm{proj}}(f_{\mathrm{vis}}(o_t)).
$$
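The composition above can be sketched with toy linear stand-ins for the three modules; the dimensions and random weights below are illustrative assumptions, not the actual CLIP ViT-L/14, projector, or Qwen components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for f_vis, f_proj, and f_llm (illustrative dimensions only).
W_vis = rng.normal(size=(768, 1024))    # vision encoder: patches -> features H_v
W_proj = rng.normal(size=(1024, 512))   # projector: features -> LLM embedding space
W_llm = rng.normal(size=(512, 512))     # one "LLM layer" over concatenated tokens

def forward(o_t, Z_t):
    H_v = o_t @ W_vis                    # visual features
    Z_v = H_v @ W_proj                   # visual tokens Z_v
    tokens = np.concatenate([Z_v, Z_t])  # [Z_v, Z_t]
    R = tokens @ W_llm                   # robot tokens R
    return R
```

The key structural point, mirrored here, is that visual tokens are projected into the same embedding space as text tokens before the LLM processes them jointly.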
Robotic Tokenization and De-tokenization: To allow the large language model to predict robot actions and states, we convert continuous robot actions and states into discrete values that serve as the LLM’s output tokens. Take the robot’s position along the $x$-axis as an example: given $x_i \in [x_{\min}, x_{\max}]$, we apply a linear quantization function to map it to an integer token $X_i \in \{0, 1, \ldots, \text{bin\_size} - 1\}$ as follows:
$$
X_i = \left\lfloor \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} \times (\text{bin\_size} - 1) \right\rfloor
$$
Figure 4: Task examples for RLBench and real-world robot.
For instance, if $\text{bin\_size} = 256$, a possible action sequence would be "183 180 36 0 127 49 255". During inference, we de-tokenize the predicted tokens to recover the continuous action or state values by performing the inverse mapping:
$$
\hat{x}_i = x_{\min} + \frac{X_i}{\text{bin\_size} - 1} \times (x_{\max} - x_{\min})
$$
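The tokenization and de-tokenization pair can be sketched as follows, assuming the bin size of 256 stated above and a hypothetical value range; round-trip error is bounded by one bin width:

```python
BIN_SIZE = 256

def tokenize(x, x_min, x_max):
    """Linear quantization of a continuous value to an integer token."""
    frac = (x - x_min) / (x_max - x_min)
    return int(frac * (BIN_SIZE - 1))  # int() acts as floor for frac >= 0

def detokenize(X, x_min, x_max):
    """Inverse mapping from a token back to a continuous value."""
    return x_min + X / (BIN_SIZE - 1) * (x_max - x_min)
```

In practice each of the 7 DoF would be quantized this way, with its own $[x_{\min}, x_{\max}]$ range.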
Training Objective: We jointly train the model using both the expert action data $\mathcal{D}$ and the robot state data $\mathcal{S}$. For both types of data, we employ a unified training objective: the next-token-prediction cross-entropy loss, defined as follows:
$$
\mathcal{L} = - \sum_{i} \log P(y_i \mid y_{<i}, \mathbf{o}, \mathbf{l}; \omega)
$$
where $y_i$ represents the $i$-th token and $\omega$ denotes the parameters of the model.
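The loss can be sketched in NumPy; the sequence length and vocabulary size below are illustrative assumptions:

```python
import numpy as np

def next_token_loss(logits, targets):
    """Next-token cross-entropy: -sum_i log P(y_i | y_<i, o, l).

    logits: (seq_len, vocab) unnormalized scores from the LLM
    targets: (seq_len,) integer ground-truth tokens
    """
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].sum()
```

Because state and action targets share one token format, a single loss of this form covers both data types.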
# 3.3 Model and Training Details
We build ROSA based on the Qwen-2.5-7B [45] model as the LLM backbone, CLIP ViT-L/14 [25] as the vision encoder, and a two-layer MLP as the projector. ROSA mixes the robot state data and the expert action data at a fixed ratio of 1:4 and performs joint training. We fully fine-tune all layers for 6 epochs on RLBench and 9 epochs on the real robot. A learning rate of 2e-5 is adopted with a warmup and cosine-decay scheduler. All experiments are conducted on 8 NVIDIA A100 GPUs.
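One simple way to realize a fixed 1:4 state-to-action mixing is a deterministic interleave; this scheduler is a sketch under that assumption, not necessarily the authors' exact sampling scheme:

```python
import random

def mix_batches(state_data, action_data, num_samples, ratio=(1, 4), seed=0):
    """Draw training examples with a fixed state:action ratio (default 1:4)."""
    rng = random.Random(seed)
    s_w, a_w = ratio
    out = []
    for i in range(num_samples):
        # Deterministic interleave: s_w state samples per (s_w + a_w) draws.
        if i % (s_w + a_w) < s_w:
            out.append(rng.choice(state_data))
        else:
            out.append(rng.choice(action_data))
    return out
```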
# 4 Experiment
# 4.1 Experiment Setup
We evaluate ROSA in both the RLBench [46] simulation environment and on a real-world WidowX robot.
RLBench. We train and evaluate ROSA on 12 RLBench tasks. Each task contains multiple variations during data collection but remains consistent during evaluation to enable one-to-one comparisons. Detailed task descriptions are provided in the Appendix. We use a fixed front-facing RGB camera with a resolution of $336 \times 336$ as the model’s visual input. The simulated experiments are conducted using a Franka Panda robot equipped with a parallel gripper. Following prior work [14, 29], we evaluate performance over 25 episodes per task and report the average success rate (SR). Each evaluation is repeated three times to obtain the final score.
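The evaluation protocol (25 episodes per task, three repeated evaluations, averaged success rate) can be sketched as:

```python
def average_success_rate(results):
    """Mean SR over tasks; results[task] is a list of evaluation runs,
    each run a list of per-episode success booleans (e.g. 25 episodes x 3 runs)."""
    task_scores = []
    for runs in results.values():
        run_scores = [100.0 * sum(ep) / len(ep) for ep in runs]
        task_scores.append(sum(run_scores) / len(run_scores))
    return sum(task_scores) / len(task_scores)
```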
Real-World WidowX Robot. We use a WidowX 250S robot for real-world experiments, with an Intel RealSense D435 camera positioned in front of the robot to provide third-person visual input. The detailed experimental setup is described in the Appendix. ROSA is evaluated on four seen tasks and four generalization tasks. The seen tasks include three short-range tasks—Banana in Plate, Strawberry in Bowl, Starfruit in Plate—and one long-range task: Banana and Strawberry in Bowl. The generalization tasks consist of Cube in Plate (unseen object), Strawberry in Box (unseen container), Cube in Box (unseen object and container), and Strawberry in Bowl (with distractors). Examples of these tasks are illustrated in Fig. 4 (b) and Fig. 4 (c). Each task is evaluated over 10 trials.
Figure 5: Success rate comparison between the baseline and ROSA on real-robot tasks with 20, 50, and 100 expert demonstrations: (a) Banana in Plate, (b) Strawberry in Bowl, (c) Starfruit in Plate, (d) Banana & Strawberry in Bowl, (e) average success rate.
Table 1: Comparison of performance under varying data scales on 12 RLBench tasks.
Table 2: One-shot performance comparison on three RLBench tasks.
# 4.2 Quantitative Results
# 4.2.1 Effectiveness of ROSA
We conduct experiments to show the performance of ROSA on RLBench and the real robot across different scales of expert action data. For simulation evaluation, as shown in Tab. 1, given the same amount of expert action data, ROSA consistently outperforms the baseline model at all scales, with particularly large gains when expert data is limited. Notably, with only 50 or 100 expert demonstrations, ROSA achieves a $7.1\%$ and $11.4\%$ improvement in average SR on RLBench, respectively. On the real robot, as shown in Fig. 5, the effect is even more pronounced, with a $35\%$ improvement in average SR. We attribute this to the greater variability present in real-world environments, such as cluttered backgrounds and lighting changes. By leveraging diverse state samples, ROSA enhances robustness under such challenging conditions.
Sufficient Data Scenarios. As shown in Tab. 1, the performance of VLA models consistently improves with increasing data scale. Nevertheless, even when the amount of expert action data is sufficiently large (i.e., 500 episodes per task), ROSA still achieves a $1.6\%$ improvement in average SR. This result further highlights the effectiveness of ROSA, demonstrating its benefits even in high-data regimes.
One-Shot Scenarios. Another important question is whether ROSA remains effective under extremely low-data conditions. To investigate this, we design a one-shot experiment in which only a single expert action sample is provided during training. Remarkably, as shown in Tab. 2, ROSA achieves non-zero success rates on three tasks, whereas the baseline model fails on all of them. This experiment demonstrates ROSA’s strong data efficiency and its ability to enhance the VLA model’s understanding of 3D spatial structures and action semantics, even in highly data-scarce settings.
# 4.2.2 Generalization Ability of ROSA
We evaluate the generalization ability of ROSA using four real-world tasks involving unseen objects, unseen containers, and the presence of distractors. As shown in Tab. 3, ROSA significantly outperforms the baseline across all four tasks in terms of success rate. While the baseline benefits from the prior knowledge provided by VLM pretraining and exhibits some generalization ability—for example, achieving a $50\%$ success rate in grasping the unseen cube—it performs worse compared to seen objects such as bananas or strawberries. The performance drop becomes even more pronounced in novel scenarios involving both unseen objects and containers, where the success rate falls to around $20\%$. In contrast, ROSA consistently demonstrates strong generalization performance. Notably, on the Cube in Box task, ROSA exceeds the baseline by $60\%$. Furthermore, ROSA achieves a $90\%$ success rate on Strawberry in Bowl (with distractors), which is $30\%$ higher than the baseline, highlighting its robustness to distracting objects.
Table 3: Performance on unseen tasks for real robot. ROSA consistently outperforms the baseline across all four unseen tasks, demonstrating strong generalization capabilities.
Table 4: Comparison with previous methods on RLBench. We compare the success rate $( \% )$ on 12 different tasks. ROSA shows superior performance compared with these related methods.
# 4.2.3 Comparison with Previous Methods
We compare ROSA with previous methods on RLBench, as shown in Tab. 4. Compared to the VLA-based method LLARVA, ROSA achieves a 16-point improvement in success rate, despite LLARVA using 800 expert demonstrations per task—eight times more data than ROSA. This result highlights both the strong performance and high data efficiency of ROSA. Additionally, compared to the non-VLA method PerAct, ROSA achieves higher performance, even though PerAct utilizes multiple cameras and depth information.
# 4.2.4 Analysis of ROSA
To better understand how ROSA works, we conducted controlled studies in the RLBench simulation environment. All experiments used 100 expert action samples per task.
How much robot state data is needed? We investigate the effect of incorporating different amounts of robot state data, as shown in Tab. 5. Starting with expert action data only, adding just one-eighth of the state data yields a $3.7\%$ improvement in success rate. Increasing the proportion to one-quarter leads to a further $7.7\%$ gain. Using larger amounts of state data degrades performance, likely due to distributional shifts that impair the model’s ability to predict future actions. These results suggest that incorporating a relatively small amount of robot state data is sufficient to yield substantial performance improvements, indicating the effectiveness of introducing robot state data.
How should the environment be configured for state data collection? We study the impact of scene type and scene quantity. Specifically, we consider two types of scenes: (1) relevant scenes, which share the same setup as the evaluation tasks, and (2) irrelevant scenes, which feature unrelated configurations without the evaluation subjects. Fig. 7 provides visualizations of these two scene types. Scene quantity refers to the number of distinct spatial arrangements within a given scene setup.
Tab. 6 presents the results of our analysis. We find that scene relevance is not critical—both relevant and irrelevant scenes lead to comparable improvements in performance. This finding suggests that, in practice, environments originally used for collecting expert action data can be effectively reused for robot state data collection, avoiding the need to design new scenes. Additionally, we observe that a moderate scene quantity (e.g., 100 distinct scenes) is sufficient to achieve optimal performance.
Table 5: Ablation on the ratio of robot state data and expert action data.
Table 6: Ablation on scene types and quantity of robot state data.
Table 7: Linear-probe evaluation on 3D understanding comparing the VLM, the baseline, and ROSA.
Figure 6: Visual examples of ROSA on RLBench and real-world robot tasks. The white number in the top-left corner of each image indicates the execution step in the action sequence.
3D understanding capability of ROSA. To evaluate whether robot state data truly enhances the alignment between a pre-trained VLM and robot-specific representations, we conduct a linear probing analysis. Specifically, we add a linear layer on top of the LLM and train it on a newly constructed 3D spatial understanding dataset. The task requires the model to predict the 3D position and orientation of the robot’s end-effector based on a single image. We compare three models: the pre-trained VLM, a baseline VLA model trained only with expert action data, and ROSA. We use mean squared error (MSE) and prediction accuracy under a fixed error threshold as evaluation metrics.
As shown in Tab. 7, the pre-trained VLM achieves $0\%$ accuracy, indicating a lack of 3D spatial understanding. The baseline VLA model attains $61\%$ accuracy, suggesting some capacity to perceive 3D information. ROSA achieves the highest accuracy of $92\%$, demonstrating significantly improved 3D spatial reasoning. These results indicate that ROSA effectively bridges the spatial representation gap between VLMs and robot-specific learning.
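A linear probing evaluation of this kind might look as follows; the least-squares head, the feature and target dimensions, and the error threshold are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def linear_probe(features, targets, threshold):
    """Fit a linear head on frozen LLM features via least squares and report
    MSE plus accuracy under a fixed prediction-error threshold."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)         # frozen features, linear head only
    pred = X @ W
    mse = float(np.mean((pred - targets) ** 2))
    err = np.linalg.norm(pred - targets, axis=1)            # per-sample pose error
    acc = float(np.mean(err < threshold))
    return mse, acc
```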
# 4.3 Qualitative Results
Visualizations of Robot State Data: The robot state data is collected by allowing the robotic arm to perform random movements within its valid action space. We set up different scenarios for collecting such data, and examples are illustrated in Fig. 7. For relevant scenes, the robot state data is collected in the same scenes as the evaluation data, but the movements are random. For irrelevant scenes, we choose completely different task settings from RLBench.
Examples of ROSA on Simulation and Real Robot Tasks: Fig. 6 illustrates ROSA’s performance on several tasks in both simulation and real-world environments. The top row of Fig. 6 presents results on two RLBench tasks: Open Drawer and Slide Block to Color Target. It can be observed that the model accurately predicts the actions and effectively manipulates the target objects. Real-robot task executions are also shown in Fig. 6, where ROSA precisely localizes the target objects and their corresponding containers, and successfully places the objects into the containers.

Figure 7: Examples of robot state data.

Vision-Language-Action (VLA) models have recently made significant advances in multi-task, end-to-end robotic control, owing to the strong generalization capabilities of Vision-Language Models (VLMs). A fundamental challenge in developing such models is effectively aligning the vision-language space with the robotic action space. Existing approaches typically rely on directly fine-tuning VLMs using expert demonstrations. However, this strategy suffers from a spatio-temporal gap, resulting in considerable data inefficiency and heavy reliance on human labor. Spatially, VLMs operate within a high-level semantic space, whereas robotic actions are grounded in low-level 3D physical space; temporally, VLMs primarily interpret the present, while VLA models anticipate future actions. To overcome these challenges, we propose a novel training paradigm, ROSA, which leverages robot state estimation to improve alignment between the vision-language and action spaces. By integrating robot state estimation data obtained via an automated process, ROSA enables the VLA model to gain enhanced spatial understanding and self-awareness, thereby boosting performance and generalization. Extensive experiments in both simulated and real-world environments demonstrate the effectiveness of ROSA, particularly in low-data regimes.
"cs.RO",
"cs.AI",
"cs.CV"
] |
# 1 Introduction
Inverse problems arise in many domains where observed data is the result of a noisy and potentially lossy transformation of an underlying signal we wish to recover. More formally, the degradation process is modeled as follows:
$$
y = \mathcal{A}(x) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I),
$$
where $\mathcal { A }$ is the forward operator and $\epsilon$ denotes additive Gaussian noise. The objective is to recover the original signal $x$ from the measurements $y$ . In practice, $\mathcal { A }$ may represent a wide range of linear or nonlinear transformations, such as blur kernels, inpainting masks, the Radon transform, and many others.
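As a concrete instance of this forward model, a blur degradation with additive Gaussian noise can be sketched as follows; circular convolution via the FFT is an illustrative simplification (real pipelines would handle boundary conditions more carefully):

```python
import numpy as np

def degrade(x, kernel, sigma, rng):
    """y = A(x) + eps, with A a circular-convolution blur and eps ~ N(0, sigma^2 I)."""
    # Convolve in the Fourier domain (kernel zero-padded to the image shape).
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(kernel, x.shape)))
    return y + rng.normal(scale=sigma, size=x.shape)
```

Swapping in a subsampling mask or a Radon transform for the convolution yields the other forward operators mentioned above.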
To introduce the setting of the current work, we categorize the reconstruction methods based on i) the type and availability of data, ii) the extent of prior knowledge about $\mathcal{A}$. When $\mathcal{A}$ is fully known, two main classes of reconstruction approaches are typically used. The first is plug-and-play (PnP) methods, which apply iterative algorithms to recover the signal $x$ from a single measurement $y$ by maximizing the posterior distribution $p(x \mid y, \mathcal{A}) \propto p(y \mid x, \mathcal{A}) \, p(x)$. These methods alternate between a data-fidelity term (dependent on $\mathcal{A}$) and a prior term. The second class leverages supervised learning: if a clean dataset is available, it can be paired with synthetic measurements generated via $\mathcal{A}$ to train a reconstruction model in a supervised fashion.
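A generic PnP iteration alternating the two terms might look like the following sketch; the gradient step size and the abstract `denoiser` callable are placeholder assumptions, not a specific published algorithm:

```python
import numpy as np

def pnp_restore(y, A, At, denoiser, step=0.5, iters=50):
    """Plug-and-play restoration: alternate a data-fidelity gradient step on
    ||y - A(x)||^2 (which depends on the forward operator A) with a
    denoising step that plays the role of the prior on x."""
    x = At(y)  # initialize from the adjoint of the measurements
    for _ in range(iters):
        x = x - step * At(A(x) - y)  # data-fidelity gradient step
        x = denoiser(x)              # prior (denoising) step
    return x
```

In practice `denoiser` would be a pretrained network; here any callable with the right signature works.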
However, complete knowledge of the forward operator $\mathcal { A }$ is rarely available in practice. For instance, in super-resolution it is common to assume bicubic downsampling to generate low-resolution images from high-resolution ones, yet models trained under this assumption often fail on real-world images where the actual downsampling kernel differs [18, 59].
When $\mathcal { A }$ is unknown or only partially specified, the problem becomes more challenging and is called blind. One strategy is to collect (or synthetically generate) a dataset that includes corrupted samples produced from a range of possible $\mathcal { A }$ instances. A supervised model is then trained to generalize across these degradations. An alternative approach is to jointly infer both the clean signal and the unknown operator by maximizing the joint posterior $p ( x , { \mathcal { A } } \mid y ) \propto p ( y \mid x ) p ( x ) p ( { \mathcal { A } } )$ . This formulation requires priors on both the data distribution $p ( x )$ and the corruption process $p ( { \mathcal { A } } )$ . Both approaches implicitly rely on some prior knowledge or assumptions about $\mathcal { A }$ , either through heuristic degradation models or learned probabilistic priors. Consequently, when the true degradation deviates from these assumptions—which is often the case in real-world scenarios—the reconstruction performance degrades significantly. Although these methods typically require very little input data at test time (sometimes even a single image), their robustness to real-world variations is far from guaranteed.
The natural alternative, which we adopt in this work, is to learn information about $\mathcal { A }$ directly from data: specifically, from one dataset of corrupted images and a separate dataset of clean images. Crucially, these datasets do not need to contain corresponding clean-corrupted pairs, which significantly simplifies data collection. During training, the unpaired clean images serve as a reference for what the restored outputs should resemble—effectively providing a model for $p ( x )$ . This unpaired training setup is often referred to as unsupervised, and has been successfully applied to in-the-wild image restoration tasks [31, 45, 51], where the degradation process is unknown and may involve multiple, complex transformations. Most existing algorithms operating in this unpaired regime use an implicit model of the degradation process: a neural network is trained to map corrupted inputs to clean outputs, without explicitly modeling the underlying forward operator.
The method we propose learns the correct degradation by learning an explicit operator $\mathcal{A}$ and needs a relatively small number of training points (hundreds to thousands). By using diffusion models and an efficient distribution matching algorithm, it outperforms both single-image methods and other unsupervised ones. It comprises two distinct steps: the first is to learn a representation of the noisy distribution; the second is to learn the best degradation operator $\mathcal{A}$ such that the clean samples corrupted by $\mathcal{A}$ match in distribution the noisy data prior. Finally, the learned corruption can be used in a third step for non-blind restoration. Our contributions are threefold: we begin by devising a principled method for solving imaging inverse problems which relies only on unpaired image data from clean and corrupted distributions, without knowledge of the degradation operator. We prove that under a non-degeneracy assumption on the clean distribution, the true operator can be identified. We then show how this method can provide precise estimates of the degradation operator, without making any distributional assumptions on the operator itself. In particular, we focus on the tasks of uniform and non-uniform deblurring. Finally, since the estimates we obtain are considerably more precise than those of single-image methods, we show how our approach can be used as part of a pipeline for camera lens calibration, where accuracy is essential.
The rest of the paper is organized as follows. In Section 2, we provide some necessary background on inverse problems with unknown degradations and on our algorithm. In Section 3, we detail the different steps of our algorithm, whereas the last section is devoted to experiments of increasing complexity on estimating blur kernels.
Table 1: Summary of tasks for solving inverse problems without knowing the forward operator. The headings refer to what is known about the data and the forward operator. Regarding the latter, we note when the method works on data corrupted by a single (unknown) operator, operators from a predefined distribution, or by a collection of unknown operators. The references are not meant to be exhaustive but just to provide examples.
# 2 Background
We briefly review the literature on blind and unsupervised inverse problems in imaging, along with the various forms of deblurring addressed in this work.
# 2.1 Unsupervised inverse problems
Solving inverse problems without access to the forward operator is inherently challenging and can be approached from multiple perspectives. In Table 1 we provide a framework to clarify the various settings considered in the literature. Unlike many of the methods discussed below, our approach assumes a fixed degradation operator $\mathcal { A }$ which does not vary across measurements $y$ . While this restricts generality, it enables higher reconstruction accuracy in specific settings compared to other unpaired methods. We focus on unpaired algorithms, particularly for deblurring and super-resolution tasks, where $\mathcal { A }$ typically corresponds to a blur kernel. Data augmentation-based approaches [50, 58] construct synthetic supervised datasets using heuristically designed degradation pipelines which mimic diverse real-world corruptions. To improve adaptability to specific degradations, Zhang et al. [59] propose learning some of the pipeline parameters from a small dataset of noisy reference images. For better alignment with the degradation of each individual image, a line of work tackles the blind MAP inference problem using deep priors over both clean images and degradation operators. These methods are typically coupled with test-time optimization algorithms such as alternating minimization [41], expectation-maximization (EM) [17, 23], or proximal gradient methods [48]. Recent approaches [12, 23, 44] incorporate diffusion models as priors for the blur kernel, jointly estimating $\mathcal { A }$ and $x$ via plug-and-play algorithms [47]. Closely related methods include FKP [27], which uses a pretrained normalizing flow as a kernel prior, and DKP [52], which employs MCMC to iteratively refine the blur estimate. GibbsDDRM [38], by contrast, uses a diffusion prior on the clean image and a simpler total variation (TV) prior on the blur kernel.
In contrast to this class of methods, which require only a single degraded image at test time, our approach does not rely on any pretrained priors—neither on clean images nor on the degradation operator. Instead, it learns to adapt directly to the specific corruption process represented in the training data. While it does require a small dataset of degraded images from the same corruption distribution, this targeted adaptation enables significantly improved reconstruction accuracy.
Domain transfer approaches based on variations of the cycle-consistency loss [61] are conceptually closer to our method. Given a noisy image $y$ and an unpaired clean image $x$ , a clean-image generator $\mathcal { G }$ and a noisy-image generator $\mathcal { F }$ are jointly trained under the constraint that $\mathcal G ( \mathcal F ( x ) ) \approx x$ and $\mathcal { F } ( \mathcal { G } ( y ) ) \approx y$ . For instance, CinCGAN [36, 53] translates images downsampled with an unknown kernel into bicubically downsampled images, which can then be more effectively upscaled using standard super-resolution models. Several related methods [9, 30, 34, 37, 45, 51] employ generative models to synthesize corrupted images from clean ones, thereby enabling the construction of supervised training datasets. This synthesis can be done in a two-stage pipeline or directly in an end-to-end manner [11, 43]. Notably, Sim et al. [45] address a setting similar to ours, where a model is trained to deblur microscopy images in an unpaired setup. Compared to these approaches, our method leverages a novel loss function derived from diffusion models—a technique not previously applied in this context. Unlike adversarial losses, our formulation is more stable and easier to train. Moreover, by learning an explicit representation of the degradation kernel rather than an implicit one, our method offers significantly improved interpretability. Furthermore, under certain assumptions, we are able to prove the identifiability of the true operator (see Section 3).
We also briefly note that many methods aim to learn the degradation operator in a single-image fashion, but still require paired training data—typically synthetic—for supervision. For instance, IKC [18] iteratively refines kernel estimates in alternation with clean-image predictions. DAN [35] extends this idea by unrolling the refinement process into an end-to-end trainable network. In contrast, KernelGAN [6] learns the degradation operator directly from a single input image, followed by a separate non-blind model to reconstruct the clean image $x$ .
# 2.2 Distribution matching
Our method minimizes the distance between two probability distributions over images: the distribution of the observed noisy data $p ( y )$ and that of clean data corrupted by a learned degradation operator, i.e., $p ( \mathcal { A } ( x ) )$ . A common strategy for such distribution matching is to use generative adversarial networks (GANs), which optimize an adversarial loss. GANs have been extensively applied to image restoration tasks, as discussed in the previous section [34, 45], and can simultaneously learn both the degradation operator and its inverse within a unified training framework. However, training GANs is notoriously challenging due to instability and sensitivity to hyperparameters [8]. Alternative approaches include normalizing flows, which provide exact likelihoods and invertible mappings. These have been used by DeFlow [51] for matching clean and noisy distributions, and by FKP [27] to generate plausible blur kernels from single images.
More recently, diffusion models – like GANs and normalizing flows – have emerged as powerful tools for modeling empirical distributions and have been used in a range of distribution-matching tasks. For example, DreamFusion [39] learns 3D representations whose 2D projections are consistent with a pretrained diffusion model, while DiffInstruct [32] trains a single-step generator to match the distribution of a multi-step diffusion model. Both approaches rely on a loss function that approximates the KL divergence integrated over all diffusion time steps.
In our work, we use conditional flow matching (CFM) models [28, 29] as a more conceptually straightforward alternative to standard diffusion models. We adapt the integrated KL divergence loss to the CFM framework for learning the degradation operator.
# 2.3 Camera lens calibration
After preliminary experiments on synthetic deblurring, we focus on non-uniform deblurring in the context of a camera calibration pipeline. Camera calibration is a multifaceted task typically broken down into subtasks such as distortion, chromatic aberration, and vignetting correction, among others. In this work, we concentrate solely on compensating for blur induced by lens imperfections, with the goal of obtaining maximally sharp images. These lens aberrations can be characterized by the point spread function (PSF) of the lens-camera system: the blur kernel that transforms an ideal point light source into a spread of colored spots in the image. PSFs are often non-uniform across the image plane and differ across RGB channels, resulting in chromatic aberrations. Correcting such distortions computationally is particularly appealing, as it enables the use of lower-cost lenses without sacrificing image quality.
Traditional non-blind correction methods require precise knowledge of the PSF, which can only be obtained through laborious procedures involving printed calibration patterns or specialized screens [5, 21]. In contrast, blind lens aberration correction remains an understudied problem, although it shares many similarities with non-uniform deblurring. It is more commonly approached under the umbrella of defocus deblurring [2, 24, 40, 54], where blur magnitude varies with scene depth. In aberration correction, however, the blur is dependent on spatial location in the image plane rather than depth.
Nonetheless, defocus deblurring methods may still be partially effective for lens aberration correction, especially when the induced blur is close to isotropic. As a starting point, we use the PSF dataset from Bauer et al. [5], which provides ground-truth PSFs, and subsequently transition to a realistic calibration scenario using a Panasonic Micro 4/3 camera. In this setting, we acquire a small set of images and aim to learn the lens PSFs without access to ground-truth measurements. Our main comparison will be with the blind method proposed by Eboli et al. [16], which performs deblurring followed by color-fringe removal in a two-stage pipeline [10, 15].
# 3 Method
Given an inverse problem of the form $y = \mathcal { A } _ { \omega _ { * } } ( x ) + \epsilon$ , with known noise level $\sigma$ and an unknown forward operator $\mathcal { A } _ { \omega _ { * } }$ parameterized by a vector $\omega _ { * }$ , we propose an algorithm to estimate $\omega _ { * }$ using unpaired data. On the one hand, we assume access to a clean dataset $\mathcal { X } = \{ x _ { i } \} _ { i = 1 } ^ { n }$ of images drawn from a distribution $\mathbb { X }$ . On the other hand, we have access to a corrupted dataset $\mathcal { Y } = \{ y _ { j } \} _ { j = 1 } ^ { m }$ of images, generated from unknown clean samples via the forward model $\mathcal { A } _ { \omega _ { * } }$ . We denote by $\mathbb { Y } _ { \omega _ { * } }$ the distribution of such corrupted images, and by $\mathbb { Y } _ { \omega }$ the distribution induced by applying an arbitrary forward operator $\mathbf { \mathcal { A } } _ { \omega }$ (with additive noise) to samples from $\mathbb { X }$ .
Noting that $\mathbb { Y } _ { \omega _ { * } }$ is empirically accessible through $y$ while $\mathbb { Y } _ { \omega }$ can be approximated via $\mathcal { X }$ and a candidate forward operator $\mathbf { \mathcal { A } } _ { \omega }$ , we hypothesize that minimizing the distance between $\mathbb { Y } _ { \omega _ { * } }$ and $\mathbb { Y } _ { \omega }$ with respect to $\mathbf { \mathcal { A } } _ { \omega }$ yields a good approximation of the true forward model parameters:
$$
\hat { \omega } = \underset { \omega } { \arg \operatorname* { m i n } } \mathcal { D } ( \mathbb { Y } _ { \omega _ { * } } , \mathbb { Y } _ { \omega } ) \implies \mathcal { A } _ { \hat { \omega } } \approx \mathcal { A } _ { \omega _ { * } } ,
$$
where $\mathcal { D }$ is a distance between distributions that will be specified in detail later. We prove this rigorously in a simplified setting in Proposition 3.1, with the full proof provided in the supplementary material, and we assume that the result generalizes when the distributions are only approximately equal, as is typically the case in practice.
Proposition 3.1: For any set of forward model parameters $\omega$ , let $\begin{array} { r } { p _ { \omega } ( y ) = \int p _ { \omega } ( y \mid x ) p ( x ) d x } \end{array}$ where $p _ { \omega } ( y \mid x ) = \mathcal { N } ( y \mid A _ { \omega } x , \sigma ^ { 2 } I )$ . Let $\omega _ { * }$ be a specific set of parameters which we consider to be the optimal set. Then, assuming the data covariance $\Sigma = \mathbb { E } _ { x } [ x x ^ { \top } ]$ is invertible, there exists an orthogonal matrix $P$ such that
$$
p _ { \omega } ( y ) = p _ { \omega _ { * } } ( y ) \implies \mathcal { A } _ { \omega } = \mathcal { A } _ { \omega _ { * } } \Sigma ^ { 1 / 2 } P \Sigma ^ { - 1 / 2 }
$$
That is, if the probability distributions $p _ { \omega }$ and $p _ { \omega _ { * } }$ are equal, it is possible to identify $\mathcal { A } _ { \omega _ { * } }$ up to rotations $P$ .
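As a quick numerical sanity check of the proposition's characterization, consider (for concreteness, an assumption made here purely for illustration) zero-mean Gaussian data $x \sim \mathcal{N}(0, \Sigma)$, so that $p_\omega(y)$ is fully determined by its covariance. Any operator of the stated form $\mathcal{A}_{\omega_*}\Sigma^{1/2}P\Sigma^{-1/2}$ then yields the same output covariance $\mathcal{A}\Sigma\mathcal{A}^\top + \sigma^2 I$ and is therefore indistinguishable from $\mathcal{A}_{\omega_*}$; all names in the sketch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random invertible data covariance Sigma and its symmetric square root.
M = rng.normal(size=(d, d))
Sigma = M @ M.T + d * np.eye(d)
evals, evecs = np.linalg.eigh(Sigma)
S_half = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
S_half_inv = np.linalg.inv(S_half)

A_star = rng.normal(size=(d, d))               # "true" forward operator
P, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal matrix

# An operator of the form given by the proposition.
A = A_star @ S_half @ P @ S_half_inv

# For x ~ N(0, Sigma) and y = A x + eps, y ~ N(0, A Sigma A^T + sigma^2 I),
# so equal output covariances imply equal distributions p_omega(y).
print(np.allclose(A @ Sigma @ A.T, A_star @ Sigma @ A_star.T))  # True
```

The check confirms the degenerate directions: every choice of orthogonal $P$ gives a distinct operator producing exactly the same measurement distribution.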
Since neither $\mathbb { Y } _ { \omega _ { * } }$ nor $\mathbb { Y } _ { \omega }$ is explicitly available, we will use CFM models as tractable representations of probability distributions on images.
# 3.1 Distribution matching with diffusion
A CFM model ${ v } _ { \theta } ( z ^ { ( t ) } , t )$ trained on data from a distribution $\mathbb { Y } _ { \omega }$ allows one to sample from $\mathbb { Y } _ { \omega }$ by solving an ODE between times 0 and 1, defined as
$$
\begin{array} { r } { d z ^ { ( t ) } = v _ { \theta } ( z ^ { ( t ) } , t ) d t , \quad z ^ { ( 0 ) } \sim \mathcal { N } ( 0 , I ) , } \end{array}
$$
such that $z ^ { ( 1 ) } \sim \mathbb { Y } _ { \omega }$ . This conditional flow matching [28, 29] perspective is connected to standard Langevin diffusion since the velocity field $v _ { \theta } ( z ^ { ( t ) } , t )$ is related to the score of a diffusion model: $v _ { \theta } ( z ^ { ( t ) } , t ) + z ^ { ( t ) } \propto \nabla _ { z } \log p _ { \mathbb { Y } _ { \omega } ^ { ( t ) } } ( z ^ { ( t ) } )$ [60], assuming the neural network model of the velocity field to be exact. Each $z ^ { ( t ) }$ follows an intermediate distribution $\mathbb { Y } _ { \omega } ^ { ( t ) }$ between Gaussian noise $( t = 0 )$ and the data $( t = 1 )$ . Hence, instead of computing a distance between two distributions, we compute it between two sequences of distributions $\mathbb { Y } _ { \omega } ^ { ( t ) }$ and $\mathbb { Y } _ { \omega _ { * } } ^ { ( t ) }$ which arise from two CFM models. In particular we use the KL divergence integrated over time, and follow the derivation proposed by DiffInstruct (DI) [32] to compute its gradient with respect to $\omega$ . The integrated KL divergence is defined as
$$
\operatorname { I K L } ( \mathbb { Y } _ { \omega _ { * } } \parallel \mathbb { Y } _ { \omega } ) = \int _ { t = 0 } ^ { 1 } \operatorname { K L } ( \mathbb { Y } _ { \omega _ { * } } ^ { ( t ) } \parallel \mathbb { Y } _ { \omega } ^ { ( t ) } ) d t .
$$
Assuming that the score terms for both probability distributions (i.e. $s _ { \theta , \omega _ { * } } ( y , t ) \approx \nabla _ { y } \log p _ { \mathbb { Y } _ { \omega _ { * } } ^ { ( t ) } } ( y )$ and $s _ { \phi , \omega } ( \boldsymbol { y } , t ) \approx \nabla _ { \boldsymbol { y } } \log p _ { \mathbb { Y } _ { \omega } ^ { ( t ) } } ( \boldsymbol { y } ) )$ can be computed, it is possible to efficiently differentiate the IKL with respect to $\omega$ :
$$
\nabla _ { \omega } \operatorname { I K L } ( \mathbb { Y } _ { \omega _ { * } } \parallel \mathbb { Y } _ { \omega } ) = \int _ { t = 0 } ^ { 1 } \mathbb { E } \big [ s _ { \theta , \omega _ { * } } ( y _ { \omega } ^ { ( t ) } , t ) - s _ { \phi , \omega } ( y _ { \omega } ^ { ( t ) } , t ) \big ] ^ { \top } \nabla _ { \omega } y _ { \omega } ^ { ( t ) } d t ,
$$
where the expectation is over $y _ { \omega } ^ { ( 0 ) } \sim \mathcal { N } ( 0 , I )$ and $x \sim \mathcal { X }$ , with $y _ { \omega } ^ { ( 1 ) } = \mathcal { A } _ { \omega } ( x ) + \epsilon$ and $y _ { \omega } ^ { ( t ) } = ( 1 - t ) y _ { \omega } ^ { ( 0 ) } + t y _ { \omega } ^ { ( 1 ) }$ . Note that eq. (5) was originally introduced to optimize a 3D scene consistent with a diffusion model's outputs [39], and was used in [32] to learn a distilled model. Here we show that it can also be used effectively on conditional flow matching models, and in a completely different setting. The gradient in eq. (5) requires certain quantities to be computed: the score of $p _ { \mathbb { Y } _ { \omega _ { * } } }$ can be obtained beforehand by training a flow-matching model $v _ { \theta , \omega _ { * } }$ on the available noisy data $\mathcal { Y }$ . The score of $p _ { \mathbb { Y } _ { \omega } }$ , however, depends on a distribution which changes with every update of $\omega$ . Therefore our strategy is to alternately optimize i) an auxiliary flow-matching model $v _ { \phi , \omega }$ for a fixed $\omega$ with a standard diffusion loss, and ii) the forward model parameters $\omega$ with the IKL loss, keeping the auxiliary model fixed [32]. By changing the parametrization of $\mathcal { A } _ { \omega }$ we can use eq. (5) to learn the degradation in a variety of different inverse problems. We now detail the algorithm for the task of learning blur operators.
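A minimal 1-D sketch of this alternating idea, with the neural score models replaced by exact Gaussian scores (an assumption made here purely for illustration): for scalar $x \sim \mathcal{N}(0,1)$, $\mathcal{A}_\omega(x) = \omega x$ and $\epsilon \sim \mathcal{N}(0,\sigma^2)$, the interpolant $y_\omega^{(t)}$ is Gaussian with variance $v_\omega(t) = (1-t)^2 + t^2(\omega^2 + \sigma^2)$ and score $s(y,t) = -y/v(t)$, so the expected score-difference gradient has the closed form $(1/v_* - 1/v_\omega)\, t^2 \omega$:

```python
import numpy as np

sigma = 0.1
w_true = 0.8      # unknown degradation parameter to recover
w = 0.3           # initial guess
lr = 0.05
ts = np.linspace(0.0, 1.0, 101)

def var(w, t):
    # Variance of y_w^(t) = (1 - t) y^(0) + t (w x + eps).
    return (1.0 - t) ** 2 + t ** 2 * (w ** 2 + sigma ** 2)

for _ in range(2000):
    # Exact expectation of the score-difference gradient, averaged over t.
    grad = np.mean((1.0 / var(w_true, ts) - 1.0 / var(w, ts)) * ts ** 2 * w)
    w -= lr * grad   # descent matches v_w(t) to v_*(t), i.e. w -> +/- w_true

print(round(abs(w), 3))  # 0.8: the operator is identified up to sign
```

In the full method these analytic scores are replaced by the two flow-matching networks, and the expectation is estimated by Monte Carlo over batches; the toy also illustrates the sign ambiguity of Proposition 3.1.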
# 3.2 Learning the forward operator
In deblurring, $\mathbf { \mathcal { A } } _ { \omega }$ corresponds to a convolution with kernel $\omega$ . The algorithm we propose proceeds in three distinct steps: i) learn the corrupted data distribution, ii) approximate the forward operator $\mathcal { A } _ { \omega _ { * } }$ and iii) solve the non-blind inverse problem.
Step 1: learning $\mathbb { Y } _ { \omega _ { * } }$ In more detail, the first step uses the noisy dataset $\{ y _ { j } \} _ { j = 1 } ^ { m }$ to train a conditional flow matching (CFM) model $v _ { \theta , \omega _ { * } }$ that transports Gaussian noise samples at time $t = 0$ , $z ^ { ( 0 ) } \sim \mathcal { N } ( 0 , I )$ , to samples from the corrupted data distribution $\mathbb { Y } _ { \omega _ { * } }$ at time $t = 1$ , i.e., $z ^ { ( 1 ) } \sim \mathbb { Y } _ { \omega _ { * } }$ . The training objective is the standard conditional flow matching loss [28, 29]:
$$
\mathcal { L } _ { \mathrm { C F M } } = \mathbb { E } _ { t , z ^ { ( 0 ) } , z ^ { ( 1 ) } } \left[ \| v _ { \theta , \omega _ { * } } ( z ^ { ( t ) } , t ) - ( z ^ { ( 1 ) } - z ^ { ( 0 ) } ) \| ^ { 2 } \right] .
$$
Importantly, for the overall success of our method, it is not necessary for $v _ { \theta , \omega _ { * } }$ to achieve state-of-the-art generative quality; it only needs to effectively capture the degradation process, which we found to be significantly easier than precisely modeling image content. See Fig. 3 for sample outputs produced by a model at this stage.
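To make Step 1 concrete, here is a toy 1-D sketch (our own illustration, not the paper's code) where the "corrupted data" are scalars from $\mathcal{N}(2, 0.5^2)$. For Gaussian data the optimal CFM velocity $v(z,t) = \mathbb{E}[z^{(1)} - z^{(0)} \mid z^{(t)} = z]$ is affine in $z$, so a per-time-bin linear least-squares fit can stand in for the neural network; samples are then drawn by Euler integration of the ODE:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_fit = 50, 20000
coeffs = np.empty((n_bins, 2))       # slope and intercept of v(z, t) per bin

for i in range(n_bins):
    t = (i + 0.5) / n_bins
    z0 = rng.normal(size=n_fit)                  # noise endpoint, t = 0
    z1 = 2.0 + 0.5 * rng.normal(size=n_fit)      # data endpoint, t = 1
    zt = (1 - t) * z0 + t * z1                   # linear interpolation z^(t)
    design = np.stack([zt, np.ones_like(zt)], axis=1)
    # Regress the CFM target z1 - z0 on z^(t): the minimizer of the CFM loss.
    coeffs[i] = np.linalg.lstsq(design, z1 - z0, rcond=None)[0]

# Sample by Euler integration of dz = v(z, t) dt from t = 0 to t = 1.
z = rng.normal(size=5000)
dt = 1.0 / n_bins
for a, b in coeffs:
    z = z + (a * z + b) * dt

print(z.mean(), z.std())   # both close to the data moments (2, 0.5)
```

The same two ingredients, a regression onto the interpolant's displacement and an ODE solve, carry over unchanged when the affine fit is replaced by a network over image patches.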
Step 2: distribution matching The second step uses the clean dataset $\mathcal { X }$ to learn the corruption operator $\hat { \mathcal { A } } _ { \omega } \approx \mathcal { A } _ { \omega _ { * } }$ by minimizing the IKL loss (4). This involves two alternating optimization steps: first, training an auxiliary diffusion model $v _ { \phi , \omega } ( z ^ { ( t ) } , t )$ using the standard flow-matching loss, where $z ^ { ( t ) } = ( 1 - t ) z ^ { ( 0 ) } + t \mathcal { A } _ { \omega } ( x )$ ; second, updating $\hat { \mathcal { A } } _ { \omega }$ by following the gradient in eq. (5).
To encourage fast convergence, the auxiliary model $v _ { \phi , \omega }$ is initialized with the pretrained weights from $v _ { \theta , \omega _ { * } } ( z ^ { ( t ) } , t )$ . The parameterization of $\hat { \mathcal { A } } _ { \omega }$ is flexible: it may depend on the clean image $x$ —which limits certain options for the final step—or be independent of $x$ . For instance, we consider modeling non-uniform blur with an operator $\hat { \mathcal { A } } _ { \omega }$ that varies with pixel location. The framework is general, but $\hat { \mathcal { A } } _ { \omega }$ should not depend on corrupted images, as these are unavailable in the unpaired training setting.
Figure 2: Effect of center regularization on reconstruction quality. Without regularization the learned blur filter may be shifted, leading to larger reconstruction errors (second row of the plot). A moderate amount of regularization fixes this.
Figure 3: Samples generated by a flow matching model trained on data corrupted with a motion-blur kernel. While the model is mediocre at generating faces, it correctly represents the degradation.
The distribution matching step can also be regularized in various ways to introduce prior knowledge about the forward model and improve the quality of results. For example, when $\mathcal { Y }$ and $\mathcal { X }$ consist of patch data, their distribution will be invariant to translation: an image patch translated by some amount in any direction will still follow the same distribution. This invariance can easily lead to learning blur filters which are off-center, as shown in Fig. 2. To counter this we add a regularizer which constrains the center of mass of the learned kernels to lie at the middle of the filter.
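The text does not give the exact form of this regularizer; one minimal way to implement the idea (a sketch with illustrative names) is to penalize the squared distance between the kernel's center of mass and the geometric center of the filter:

```python
import numpy as np

def center_penalty(kernel):
    """Squared distance between the kernel's center of mass and the filter center."""
    k = np.abs(kernel) / np.abs(kernel).sum()      # normalized weights
    ys, xs = np.indices(k.shape)
    com = np.array([(ys * k).sum(), (xs * k).sum()])
    center = (np.array(k.shape) - 1) / 2.0
    return float(((com - center) ** 2).sum())

centered = np.zeros((5, 5)); centered[2, 2] = 1.0  # delta kernel at the center
shifted = np.zeros((5, 5)); shifted[2, 3] = 1.0    # same kernel shifted by one pixel
print(center_penalty(centered), center_penalty(shifted))  # 0.0 1.0
```

Added with a small weight to the IKL objective, such a term leaves the kernel shape free while removing the translation ambiguity.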
Step 3: solve the inverse problem The third and final step is to use the learned forward model $\hat { \mathcal { A } } _ { \omega }$ to solve the inverse problem. At this point the non-blind setting applies, hence multiple strategies can be chosen depending on the specific problem and on the type of data available. When a larger clean-image dataset is available, $\hat { \mathcal { A } } _ { \omega }$ can be used to generate a paired noisy-clean dataset on which to train a supervised image restoration model [49, 54]. The second option is to use a plug-and-play algorithm [47], which leverages a pretrained prior on clean images (classically, this could have been a prior such as total variation) to iteratively convert a noisy image into a clean one. In Section 4 we use ESRGAN [49] from the first option and DPIR [55] and DiffPIR [62] from the second. Of course, classical alternatives such as Wiener deconvolution can be used depending on the specific inverse problem.
# 4 Experiments
# 4.1 Deblurring
We used subsets of the FFHQ dataset [19] to compare with blind deblurring methods. In particular, for each of two degradation operators we train a small CFM model on 1000 images from FFHQ corrupted by the blurring operator $\mathcal { A } _ { \omega _ { * } }$ and subsequently utilize 100 different clean images to learn $\hat { \mathcal { A } } _ { \omega }$ with distribution matching. We use an isotropic Gaussian blur with standard deviation 1 and a motion blur kernel generated following Borodenko [7]. In both cases, Gaussian noise is added with standard deviation 0.02. We finally use the plug-and-play algorithm DiffPIR [62] with the learned operator $\hat { \mathcal { A } } _ { \omega }$ to solve the inverse problem. The natural upper baseline for this experiment is to run the same PnP algorithm with the ground-truth kernel. In addition, we compare with lower baselines which learn the kernel from a single image and simultaneously perform deblurring: the diffusion-based methods BlindDPS [12], FastDiffusionEM [23] and KernelDiff [44]. These blind algorithms require two priors (in the form of pretrained diffusion models): one on the image dataset and one on the kernels. BlindDPS and FastDiffusionEM use FFHQ as image prior while KernelDiff uses several natural image datasets [14]. All three methods assume the blur kernels to be motion blur, generated using the same stochastic procedure [7] as the true one. BlindDPS additionally includes isotropic Gaussian kernels in its kernel prior, which is why in Table 2 we also test it on a Gaussian kernel. It is important to stress that our algorithm works in a different setting from the single-image methods: we require a dataset of clean and noisy images for each degradation, while the single-image algorithms only require a single noisy image.
However, note that all single-image algorithms rely strongly on the image and kernel priors on which they were trained: for example, we could not successfully run the method from Laroche et al. [23] on a simple Gaussian kernel without first retraining the kernel prior, and KernelDiff, which uses a different image prior than FFHQ, performs poorly in this setting. The results in Table 2 demonstrate two things. First, even when the problem is purely in-distribution, the gap between blind and non-blind algorithms is large. Second, the algorithm we propose significantly reduces this gap by using more data from the same distribution to learn the necessary information about the degradation operator. To increase the robustness of our experiments we compute the standard deviation over 5 different random seeds for the 2nd step of our pipeline (keeping the 1st step fixed). For the motion blur kernel, we find a standard deviation of 0.02 for PSNR and 0.0005 for LPIPS, both negligible.
Table 2: Reconstruction error on FFHQ. KernelDiff was run with no noise. FastEM was run with 16 samples and ΠGDM.
# 4.2 Space-varying blur
In practice image blur may come from a variety of sources, such as camera shake/motion, out-of-focus objects, or lens distortions. We focus on the latter, which better fits our framework: it is easy to collect a set of noisy images with equal distribution, but it is not really possible to collect paired datasets (note that paired datasets can be generated synthetically through software camera models [26]). Every imaging system is imperfect, and its imperfections can be characterized by a spatially varying point spread function (PSF) at every location in the image plane. The degradation can thus be modeled as a per-pixel blur where the blur kernel depends on the image plane location. Since the PSFs are intrinsically connected to the imaging system, multiple pictures taken with the same camera and camera settings will have the same degradation.
For the first experiment we use real PSFs, taken from a subset of those identified in a real camera system [5] but synthetically applied to a clean dataset. In particular, we use the green-channel PSFs from the Canon EF 24mm f/1.4L II USM lens at f/1.4 aperture, and subsample them to an 8x8 grid, shown in Fig. 5 (top-left). This grid is then mapped onto images from DIV2K [3] and DPDD [1], and applied as a per-pixel blur by linearly interpolating the kernels to each image location. We use the degraded DIV2K training set (subdivided into patches 128 pixels wide) to train the diffusion model in the first stage of our algorithm, and the clean DPDD training set for the second stage, thus ensuring the data is strictly unpaired. For the third and final stage we experiment with two procedures: the plug-and-play algorithm DPIR [55], which uses a CNN as regularizing prior and can be applied directly to the learned kernels, and the supervised method ESRGAN [49], for which we generate a paired clean-noisy dataset using the learned degradation. In order to condition the kernels on image location, we add two positional-encoding channels to our images, such that the diffusion model learns different distributions based on the patch location. For the degradation prediction we directly learn the 64 kernels without any additional parametrization; both at train and at test time the correct kernels are picked by using the positional encoding and linearly interpolating between the kernels. For this experiment we used the centering regularization and also introduced an isotropic Gaussian regularization which helped stabilize training.
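A simplified sketch of such a per-pixel blur: here a 1-D grid of kernels (instead of the full 8x8 grid) is linearly interpolated along the x-axis and applied at every pixel. Function and variable names are illustrative, not from the paper's code:

```python
import numpy as np

def spatially_varying_blur(img, kernels):
    """Blur img (H, W) with kernels (G, n, n) interpolated along the x-axis."""
    H, W = img.shape
    G, n, _ = kernels.shape
    pad = n // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty((H, W))
    grid_x = np.linspace(0, W - 1, G)   # x-positions of the kernel centers
    for x in range(W):
        # Interpolate between the two grid kernels bracketing column x.
        j = min(int(np.searchsorted(grid_x, x)), G - 1)
        i = max(j - 1, 0)
        t = 0.0 if grid_x[j] == grid_x[i] else (x - grid_x[i]) / (grid_x[j] - grid_x[i])
        kx = (1 - t) * kernels[i] + t * kernels[j]
        kx = kx / kx.sum()              # keep the interpolated kernel normalized
        for y in range(H):
            out[y, x] = (p[y:y + n, x:x + n] * kx).sum()
    return out

# With identity kernels everywhere, the blur must leave the image unchanged.
idk = np.zeros((3, 3, 3)); idk[:, 1, 1] = 1.0
img = np.arange(40.0).reshape(5, 8)
print(np.allclose(spatially_varying_blur(img, idk), img))  # True
```

The 2-D version used in the experiments interpolates bilinearly over an 8x8 grid instead, but the per-pixel structure is the same.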
Table 3: Reconstruction error for non-uniform deblurring on DPDD. The degradation is a spatially varying blur with additive Gaussian noise ( $\sigma = 0 . 0 1$ ).
Figure 4: Sample reconstructions on DPDD. INIKNet and Restormer are single-image methods; DPIR is non-blind.
We compare the results obtained on the “target” test set of DPDD (corrupted with the camera PSFs and not with the original defocus degradations) against: two pretrained methods tailored for real-world defocus deblurring, Restormer [54] and INIKNet [40]; the unpaired algorithm DeFlow [51], which we retrain for the current task and couple with ESRGAN (DeFlow does not provide explicit degradations, hence cannot be coupled with DPIR); and the upper baselines ESRGAN and DPIR trained with the true degradation. While Restormer and INIKNet were trained on a different degradation domain, defocus blur is also spatially varying and should not be too far from the mostly isotropic kernels of the camera PSF. Nevertheless, their inferior performance compared to our method shows how even small changes to the degradation distribution can have a large impact on reconstruction performance. Table 3 shows the quantitative evaluation on DPDD, while qualitative results are shown in Fig. 4. DeFlow does not manage to learn the correct blur distribution and simply introduces a small amount of noise in the generated degraded images, thus obtaining the worst results. Both INIKNet and Restormer perform similarly and succeed at removing some of the blur, but the results are not as sharp as the reconstructions obtained with our method. Both non-blind methods perform very well, with ESRGAN being better under perceptual metrics and DPIR under distortion metrics. Importantly, and as we showed in the first round of experiments on FFHQ, our method is very close to the upper limit given by its non-blind counterpart. Note that, as shown in Fig. 5, the predicted kernels are not isotropic despite our regularizer and manage to capture the spread and directional variations of the true kernels. However, there remain a few kernels, such as the one at the top-left, whose long tails we cannot capture well. This leads to under-compensating the blur, as can be seen in the left-most bottom panels of Fig. 5.
Figure 5: Comparing ground-truth (left) and predicted blur kernels (right), as well as the respective reconstructed images.
Figure 6: Different aberrations in parking lot data. While images taken at f/5.6 are sharp, chromatic aberrations are still present outside of the center portion as evidenced by the middle panel.
# 4.3 Real-world camera lens calibration
As a final experiment we tackle the same lens aberrations as in the previous task, but this time on real data. We used a Panasonic DC-GX9 camera in aperture-priority mode with a Leica DG Summilux 25mm f/1.4 II lens to take 22 pictures in our parking lot at different apertures. Our goal was to learn the PSF of the lens at an extreme aperture (e.g. f/16) in order to correct it using the clean distribution of images taken at a reasonable aperture (e.g. f/5.6). To avoid confounding factors we tried to keep the whole image plane in focus and had ample light to obtain sharp images. Note that while the images at different f-stops were taken using a tripod from the same location, no additional care was taken to align them, and they would not be suitable for a supervised learning algorithm. Images were minimally postprocessed, by devignetting and converting to sRGB using lensfunpy [25], in order to preserve the blurry artifacts. They were then split into patches and fed through our algorithm: an initial diffusion model was trained on the “noisy” images at f/16 and then used as a guide to learn the degradation operator on the central part of the f/5.6 images. By using the central part of the f/5.6 images we should be able to fix the chromatic aberrations, which appear around sharp edges but much less in the central part of images. Inspecting the data (see Fig. 6) reveals a noticeable, though not very strong, increase in blurriness between clean and noisy samples. Unlike the previous experiments, where we knew the amount of additive Gaussian noise $\sigma$ used in the forward model, the noise distribution is now completely unknown. For simplicity we again use a Gaussian noise model and treat its standard deviation as a trainable parameter of $\hat { \mathcal { A } } _ { \omega }$ . Having a good noise estimate can be very useful in guiding the final reconstruction with plug-and-play methods. In Fig. 7 we compare our results with the algorithm from Eboli et al. [16], a two-step approach that first estimates and removes the blur, and then removes colored fringes using a specialized procedure [15]. For fairness we must note that the algorithm we compare against [16] works on single images and does not need retraining for different lenses, partly thanks to strong inductive priors on the blur kernel (Gaussian with 7 parameters) and on the color aberrations. We also compare to the commercial solution DxO PhotoLab [22], which exploits information about the specific lens used. We used the web version of the software and applied the chromatic aberration filter followed by deblurring with strength 1.23. More comparisons, as well as training details, are available in the supplementary material.
Figure 7: Sample reconstructions on the parking lot dataset. Color differences in DxO are likely due to a different white-balancing algorithm. Eboli [15] correctly removes chromatic aberrations but introduces some noise artifacts. Our method's results are visually pleasing and significantly sharper than the original image.
# 4.4 Single image super-resolution
Up to now we have dealt with the task of deblurring, matching clean and corrupted distributions across datasets. Interestingly, there is a very related task whose properties allow us to work on single images, instead of across datasets. Super-resolution is commonly modeled as the composition of blurring with kernel $k$ and subsampling $y = ( x \circledast k ) \downarrow _ { s }$ , hence its close relationship to deblurring. In this setting our algorithm works on single low-resolution images, split into small patches. First, the noisy distribution $\mathbb { Y } _ { \omega _ { * } }$ is learned with a diffusion model on the patches. Then we learn a kernel $\hat { k }$ such that
$$
p \Big ( ( y \circledast \hat { k } ) \downarrow _ { s } \Big ) \approx \mathbb { Y } _ { \omega _ { * } } .
$$
Note that in eq. (6) we perform a further downscaling of the already low-resolution image $y$ using the kernel $\hat { k }$ , which is the target of our learning algorithm. This twice-downscaled image is compared in distribution to the once-downscaled image. Thanks to the scale-invariance of natural images, the two distributions (once-downscaled and twice-downscaled) match when the kernels used for downscaling are the same, leading to the recovery of the true kernel. The downscaling step is crucial: if we omitted it (i.e. pure deblurring), we would always recover the identity kernel. For a quantitative experiment on single-image super-resolution we adopt the setting from Bell-Kligler et al. [6], who also introduced the DIV2KRK dataset consisting of 100 images from DIV2K [3] downscaled with random anisotropic Gaussian kernels. After learning $\hat { k }$ using the proposed algorithm, we tested three non-blind solvers: ZSSR [4], which requires no pretraining; USRNet [56], for which we used the tiny pretrained model; and DANv2 [35], from which we took just the Restorer module plugged with $\hat { k }$ . In Table 4 we compare our approach with: i) two end-to-end solutions [33, 35] which jointly learn super-resolution and kernel estimation by pretraining on the DIV2K $+$ Flickr2K [46] datasets, ii) two non-blind super-resolution algorithms [4, 56] which are natural upper bounds for our method, and iii) three kernel-estimation algorithms [6, 27, 52] which are directly comparable to the proposed method. Note that both DANv2 and DCLS outperform the non-blind methods. We hypothesize this may be due to the strong prior these methods have learned during training on a dataset which is very similar to the one used in testing. To analyze kernel-estimation performance in isolation we look at the kernel metrics k-PSNR and k-NCC (not available for DCLS, which operates in a different kernel space), which show that the proposed algorithm performs the same as DANv2.
This can be further corroborated by plugging in the learned kernels with the DANv2 super-resolution module on its own, which significantly reduces the performance gap to the end-to-end DANv2. When compared to the other 2-step methods, the proposed algorithm emerges as the clear winner both on image and kernel metrics.
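As a concrete illustration, the downscale-with-kernel operator $(y \circledast \hat{k})\downarrow_s$ used in eq. (6) can be sketched in a few lines of numpy. The box kernel, image size, and stride below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def downscale(y, k, s):
    """Blur image y with kernel k (valid cross-correlation, which equals
    convolution for symmetric kernels), then subsample with stride s.
    Sketches the (y circledast k)downarrow_s operator from eq. (6)."""
    kh, kw = k.shape
    H, W = y.shape
    blurred = np.empty((H - kh + 1, W - kw + 1))
    for i in range(blurred.shape[0]):
        for j in range(blurred.shape[1]):
            blurred[i, j] = np.sum(y[i:i + kh, j:j + kw] * k)
    return blurred[::s, ::s]

# Toy example: 3x3 box kernel (an illustrative stand-in for the learned
# kernel) applied to a "low-resolution" image, followed by stride-2 decimation
y = np.arange(36, dtype=float).reshape(6, 6)
k = np.full((3, 3), 1.0 / 9.0)
twice_down = downscale(y, k, 2)
print(twice_down.shape)  # (2, 2)
```

In the actual algorithm the distribution of such twice-downscaled patches is matched against once-downscaled ones; the operator above only shows the forward pass.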
Table 4: x2 super-resolution on DIV2KRK dataset. Methods are grouped by the respective non-blind solver. NCC is the normalized cross-correlation metric. | This work addresses image restoration tasks through the lens of inverse
problems using unpaired datasets. In contrast to traditional approaches --
which typically assume full knowledge of the forward model or access to paired
degraded and ground-truth images -- the proposed method operates under minimal
assumptions and relies only on small, unpaired datasets. This makes it
particularly well-suited for real-world scenarios, where the forward model is
often unknown or misspecified, and collecting paired data is costly or
infeasible. The method leverages conditional flow matching to model the
distribution of degraded observations, while simultaneously learning the
forward model via a distribution-matching loss that arises naturally from the
framework. Empirically, it outperforms both single-image blind and unsupervised
approaches on deblurring and non-uniform point spread function (PSF)
calibration tasks. It also matches state-of-the-art performance on blind
super-resolution. We also showcase the effectiveness of our method with a proof
of concept for lens calibration: a real-world application traditionally
requiring time-consuming experiments and specialized equipment. In contrast,
our approach achieves this with minimal data acquisition effort. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
# 1 Introduction
Robotic manipulation tasks—such as grasping, placing, and assembling objects—are notoriously difficult to program explicitly due to their high complexity and variability. Recently, imitation learning $( I L )$ has emerged as a promising alternative, allowing robots to acquire manipulation skills by mimicking expert demonstrations [1, 2, 3, 4, 5, 6]. By learning from example trajectories, $I L$ eliminates the need for manual behavior design, making it particularly effective in contact-rich or dynamic environments where reward functions or controllers are difficult to define.
Figure 1: Causal Diffusion Policy: a transformer-based diffusion model that enhances action prediction by conditioning on historical action sequences. A: When performing the task of “grabbing the barrier” in practice, B: the quality of observations is degraded by factors such as sensor noise, occlusions, and hardware limitations. In fact, this degraded but high-dimensional observation data not only fails to provide sufficient spatial constraint information for policy planning but also slows down the planning speed. C: In this case, the robot is unable to perform accurate manipulation. D: In this paper, we address historical action sequences to introduce temporally rich context as a supplement, which enables more robust policy generation.
As a powerful $I L$ framework, Diffusion Policy $( D P )$ [7] treats action generation as a denoising diffusion process, which enables smooth and multimodal trajectory prediction, paving the way for more versatile and efficient robot learning systems. However, $D P$ employs a naive behavior cloning approach to learn the specified tasks, which models actions independently and fails to account for the sequential structure of decision making. This often leads to distributional shift, where small prediction errors accumulate and push the robot into unseen or unstable states.
Furthermore, robot actions are continuous and temporally correlated—properties that are difficult to capture under the degraded observations common in physical deployments. In real-world scenarios, as shown in Fig. 1, sensor noise, occlusions, and hardware limitations degrade observation quality, while real-time inference constraints restrict access to temporally rich context. As a result, $D P$ often fails at critical subtasks such as object localization, grasp planning, and long-horizon task execution.
To address these issues, we propose Causal Diffusion Policy $( C D P )$ , a novel transformer-based diffusion framework that explicitly conditions action prediction on historical action sequences. Rather than relying solely on spatial constraints in instantaneous observations, $C D P$ incorporates temporal continuity through an autoregressive causal transformer, enabling the policy to reason over prior actions and their evolving contexts. This design improves coherence, stability, and robustness, especially in challenging conditions where single-frame observations may be unreliable.
To further improve computational efficiency during inference, we introduce a caching mechanism that stores and reuses attention key-value pairs computed in earlier timesteps. This avoids redundant computations inherent in standard transformer-based policies and makes the model practical for realtime deployment on physical hardware. Together, causal modeling and inference caching allow CDP to generate temporally consistent and high-quality actions with reduced latency. Our contributions can be summarized as follows:
1. We propose $C D P$ , a novel transformer-based diffusion framework for robotic manipulation that conditions action prediction on historical action sequences, enabling context-aware, temporally coherent visuomotor policy learning.
2. To enable efficient real-time inference, we introduce a caching mechanism that stores and reuses attention key-value pairs from prior timesteps, substantially reducing computational overhead in autoregressive action generation.
3. Through extensive evaluations on diverse 2D and 3D manipulation tasks in both simulation and real-world settings, we show that CDP achieves superior accuracy and robustness over existing methods, especially under degraded observation conditions.
# 2 Related Work
# 2.1 Diffusion Model in Robotic Manipulation
Diffusion Model [8, 9] provides an effective means for robots to acquire human-like skills by emulating expert demonstrations [10, 11, 12, 13, 14, 15, 16]. Recent advancements [17, 18, 19, 20, 21, 22, 23, 24, 25, 26] in this domain have introduced innovative approaches to model visuomotor policies. Diffusion Policy [7] models a visuomotor policy as a conditional denoising diffusion process, i.e. utilizing the robot’s observations as the condition to refine noisy trajectories into coherent action sequences. Based on this, 3D Diffusion Policy [27] incorporates simple point-cloud representations, thereby enriching the robot’s perception of the environment. Despite these advancements, these holistic approaches that generate complete action sequences in a single pass face inherent limitations, such as potential error propagation and challenges in handling long-range dependencies within the action sequences. To address these issues, recent research has increasingly focused on token-wise incremental generation for robot policy learning. This paradigm shift aims to improve the flexibility and robustness of action generation by breaking down the process into smaller, manageable steps. Notable examples include ICRT [28], ARP [29], and CARP [30]. In this work, we build upon these foundational advancements by proposing a novel transformer-based causal generation model. By incorporating temporal context, our model is better equipped to capture the dynamics of the environment and generate smoother, more coherent action sequences.
# 2.2 Causal Generation in Diffusion Model
Diffusion models have utilized uniform noise levels across all tokens during training. While this approach simplifies the training process, it may compromise the model’s ability to capture complex temporal dynamics [31, 32, 33, 34]. To address this limitation, Diffusion Forcing [35] introduced a novel training strategy for sequence diffusion models, where noise levels are independently varied for each frame. This method significantly enhances the model’s capacity to generate more realistic and coherent sequences, as demonstrated by both theoretical analysis and empirical results. Building on this foundation, CausVid [36] further extended Diffusion Forcing by integrating it into a causal transformer architecture, resulting in an autoregressive video foundation model. This adaptation effectively combines the strengths of both Diffusion Forcing and transformer-based architectures, thereby achieving improved temporal coherence and consistency in video generation [37, 38, 39, 40, 41, 42, 43, 44]. Ca2-VDM [45] incorporates optimized causal mechanisms and efficient cache management techniques. These innovations reduce computational redundancy by reusing precomputed conditional frames and minimize storage costs through cache sharing across denoising steps, thereby enhancing real-time performance.
# 3 Method
# 3.1 Causal Action Generation
We begin by outlining the training phase of our proposed model, followed by a detailed description of the Causal Action Generation Module (Fig. 2 (a)). Based on this, we introduce the Causal Temporal Attention Mechanism, a pivotal component of the module. This mechanism ensures that each target action can access all historical actions, thereby capturing its temporal dynamics context and enabling the generation of more accurate results. In addition, to enhance the module’s robustness against error-prone historical actions and to mitigate the risk of action generation failure due to the accumulation of prediction errors during the inference phase, we further integrate the Historical Actions Enhancement. This module significantly improves the reliability of the action generation process.
Figure 2: Causal Action Generation of our CDP . (a) During training, the Historical Actions A˜ are combined with the Denoising Targets N. This combined input is then fed into the Causal Action Generation module, which contains $P$ blocks, for denoising. The Target Actions A are used for training supervision. Before denoising, $\tilde { \mathbf { A } }$ is perturbed by a small-scale noise, which helps to reduce the accumulation of action prediction errors during inference. (b) The Causal Temporal Attention Mask ensures each Denoising Target can access all Historical Actions.
Training. During training, the sampled action sequence with the length of $L + M$ is divided into two parts, i.e. the Historical Actions $\tilde { \mathbf { A } } = \{ a _ { k } \} _ { k = 0 } ^ { L - 1 }$ with the length of $L$ , and the Target Actions $\mathbf { A } = \{ a _ { k } \} _ { k = L } ^ { L + M - 1 }$ with the length of $M$ . The former serves as the causal condition, which will then be fed into the Causal Action Generation Module together with the Denoising Targets ${ \bf N } = \{ n _ { k } \ \sim \ N ( 0 , 1 ) \} _ { k = 0 } ^ { M - 1 }$ to ultimately generate the predicted Target Actions. The objective function for the training of this module is designed to minimize the L2 distance between the predicted Target Actions and its corresponding ground truth,
$$
\operatorname* { m i n } \mathbb { E } _ { { \tilde { \mathbf { A } } } , \mathbf { A } , \mathbf { N } } \| ( D _ { \theta } ( [ { \tilde { \mathbf { A } } } , \mathbf { N } ] ) - \mathbf { A } ) \| _ { 2 } ^ { 2 }
$$
where $[ \cdot , \cdot ]$ denotes the concatenation operation along the temporal axis and $D _ { \theta }$ denotes the whole denoising process, based on our Causal Action Generation Module with the parameter of $\theta$ .
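The training objective can be sketched as follows. The linear stand-in for $D_{\theta}$ and all dimensions are hypothetical placeholders; the paper's $D_{\theta}$ is the causal transformer described in this section:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, act_dim = 8, 4, 7   # assumed history length, prediction horizon, action dim

# Hypothetical stand-in for the denoiser D_theta (a fixed linear map here)
W = rng.normal(size=(act_dim, act_dim))
def denoiser(x):          # x: (L + M, act_dim) -> predicted Target Actions (M, act_dim)
    return x[-M:] @ W.T

hist = rng.normal(size=(L, act_dim))       # Historical Actions  A~
targets = rng.normal(size=(M, act_dim))    # ground-truth Target Actions A
noise = rng.normal(size=(M, act_dim))      # Denoising Targets   N ~ N(0, 1)

# D_theta([A~, N]) with concatenation along the temporal axis, then squared L2
pred = denoiser(np.concatenate([hist, noise], axis=0))
loss = np.mean(np.sum((pred - targets) ** 2, axis=-1))
print(loss >= 0)  # True
```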
Historical Actions Re-Denoising. In autoregressive prediction tasks, errors in historical actions can cause significant deviations that accumulate over time, potentially leading to the failure of robotic manipulation. To address this issue, we perturb the Historical Actions $\tilde { \mathbf { A } }$ by introducing small-scale noise and then, re-denoise them during training.
$$
\tilde { \mathbf { A } } ^ { \mathrm { p e r t u r b } } = \tilde { \mathbf { A } } + \mathbf { N } _ { \sigma }
$$
where $\tilde { \mathbf { A } } ^ { \mathrm { p e r t u r b } }$ denotes the perturbed historical actions and $\mathbf { N } _ { \sigma } \sim \mathcal { N } ( 0 , \sigma ^ { 2 } )$ denotes the small-scale noise sequence with a variance of $0 ~ < ~ \sigma ~ < ~ 1$ . By introducing ${ \bf N } _ { \sigma }$ , we create a more robust training environment that simulates scenarios in which the Historical Actions $\tilde { \mathbf { A } }$ may be imperfect or noisy, forcing the model to focus on the coarse-grained temporal dynamic context of the Historical Actions $\tilde { \mathbf { A } }$ instead of depending on their concrete values. These coarse-grained temporal dynamics serve as a complement to the degraded observations $\mathbf { O }$ , guiding the subsequent denoising process.
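The perturbation step itself is a one-liner; the value of $\sigma$ and the action shapes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1                        # assumed small-scale std, so 0 < sigma < 1
hist = rng.normal(size=(8, 7))     # Historical Actions (shapes illustrative)

# Corrupt the history with N(0, sigma^2) noise so the model learns to rely on
# coarse-grained temporal context rather than exact historical values
hist_perturbed = hist + rng.normal(scale=sigma, size=hist.shape)
print(hist_perturbed.shape)  # (8, 7)
```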
Causal Action Generation Module. As depicted in Fig. 2 (a), this module is composed of $P$ blocks. Within each block, we first inject the temporal dynamic context from the Historical Actions $\tilde { \mathbf { A } }$ into the Denoising Targets $\mathbf { N }$ through Causal Temporal Attention (CTA), generating intermediate features $\mathbf { N } _ { t }$ . Following this, Visual-Action Cross Attention (VACA) is utilized to incorporate spatial constraints from the degraded observation $\mathbf { o }$ into these $\mathbf { N } _ { t }$ , generating intermediate features $\mathbf { N } _ { t s }$ . These $\mathbf { N } _ { t s }$ are then further processed by a Multi-layer Perceptron (MLP). It is worth noting that we place the denoising timestep embedding $t$ in the VACA layer, rather than the CTA layer in order to ensure the cached features of the Historical Actions $\tilde { \mathbf { A } }$ can be correctly shared during the whole denoising process (Sec. 3.2). The entire process within each block can be described as follows,
$$
\begin{array}{r} \begin{array}{c} \mathbf{N}_{t} = \mathrm{LN}(\mathrm{CTA}(\tilde{\mathbf{A}},\ \mathbf{N})) \\ \mathbf{N}_{ts} = \mathrm{LN}(\mathrm{VACA}(\mathrm{Enc}(\mathbf{O}),\ \mathbf{N}_{t})) \\ \mathbf{N}_{o} = \mathrm{LN}(\mathrm{MLP}(\mathbf{N}_{ts})) \end{array} \end{array}
$$
where ${ \bf N } _ { o }$ represents the output features of this block. LN denotes the Layer Normalization operation, and Enc denotes the 2D or 3D data encoder, which is mainly used to extract information from the 2D or 3D observations.
Causal Temporal Attention (CTA). To ensure the causality of action prediction while maintaining compatibility with the inference phase, our Causal Temporal Attention must meet the following criteria: 1) During the autoregressive inference process, new actions are generated based on their historical actions. This means that we only need to ensure that the Denoising Targets N can attend to the Historical Actions $\tilde { \mathbf { A } }$ during the attention calculation, rather than enforcing causality among the Historical Actions $\tilde { \mathbf { A } }$ themselves. 2) The autoregressive inference process continually discards invalid historical actions, which are too early and too distant from the Target Actions A temporally. This implies that during the attention calculation, we need to further decouple the Historical Actions $\tilde { \mathbf { A } }$ . Specifically, we chunk the Historical Actions $\tilde { \mathbf { A } }$ with a fixed size, ensuring that all actions within a chunk are fully visible to one another, while actions from different chunks cannot attend to each other. 3) Following [7], we increase the redundancy in the length of the Denoising Targets $\mathbf { N }$ to enable the model to anticipate future context while maintaining action coherence throughout the entire inference process. Ultimately, only the first chunk-size actions are utilized as the final predicted Target Actions. To meet the aforementioned constraints, we customize the Causal Temporal Attention Mask, as illustrated in Fig. 2 (b). For the Historical Action $a _ { 0 }$ , the corresponding mask indicates that the attention computation only considers actions within the same chunk, excluding actions from other chunks. Furthermore, all Denoising Targets $n _ { i }$ are treated as being in a single chunk with a larger chunk size than that of the Historical Actions.
Their corresponding mask indicates that their attention calculation considers both the Historical Actions and the Denoising Targets themselves.
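The mask described above can be sketched as a boolean matrix; the chunk size and sequence lengths below are illustrative assumptions:

```python
import numpy as np

def causal_temporal_mask(L, M, chunk):
    """Boolean attention mask (True = may attend), mirroring Fig. 2 (b):
    historical actions attend only within their own fixed-size chunk, while
    all denoising targets attend to the full history and to each other."""
    n = L + M
    mask = np.zeros((n, n), dtype=bool)
    for i in range(L):                           # history rows: same chunk only
        start = (i // chunk) * chunk
        mask[i, start:min(start + chunk, L)] = True
    mask[L:, :] = True                           # target rows: everything visible
    return mask

m = causal_temporal_mask(L=4, M=2, chunk=2)      # sizes are illustrative
print(m[0, 1], m[0, 2], m[4, 0], m[4, 5])        # True False True True
```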
# 3.2 Chunk-wise Autoregressive Inference with Cache Sharing
In this section, we first provide an overview of our proposed Chunk-wise Autoregressive Inference process (Fig. 3 (a)), which is equipped with a Cache Sharing mechanism (Fig. 3 (b)). Since the entire target action sequence is predicted in an autoregressive manner, its total length remains indeterminate at the outset. To this end, we introduce Cyclic Temporal Embedding to distinguish the temporal order of the whole action sequence (Fig. 3 (b)).
Chunk-wise Autoregressive Inference. During the $k$ -th autoregressive (AR-k) step, we perform denoising using
$$
\begin{array} { r } { \mathbf { A } ^ { k } \sim p _ { \theta } ( \mathbf { A } ^ { k } | \mathbf { N } ^ { k } , \tilde { \mathbf { A } } ^ { k } ) , } \end{array}
$$
where $\mathbf { N } ^ { k }$ , $\tilde { \mathbf { A } } ^ { k }$ , and $\mathbf { A } ^ { k }$ denote the Denoising Targets, Historical Actions, and Target Actions at the AR-k step, respectively. Within the causal generation framework, feature computation proceeds in a unidirectional manner. Specifically, $\mathbf { N } ^ { k }$ is denoised conditioned on $\tilde { \mathbf { A } } ^ { k }$ , and the cache for $\tilde { \mathbf { A } } ^ { k }$ can be precomputed in prior autoregressive steps without regard for $\mathbf { N } ^ { k }$ . After denoising, the Target Actions $\mathbf { A } ^ { k }$ generated in the AR-k step are applied to the environment and serve as part of the Historical Actions in the AR-k+1 step, as illustrated in Fig. 3 (a). In the AR-k+1 step, we update the historical actions in a window-sliding manner (Fig. 3 (b)): we discard the invalid part of $\tilde { \mathbf { A } } ^ { k }$ and incorporate the Target Actions $\mathbf { A } ^ { k }$ to form the Historical Actions $\tilde { \mathbf { A } } ^ { k + 1 }$ for the AR-k+1 step. These Historical Actions $\tilde { \mathbf { A } } ^ { k + 1 }$ are then fed into the Causal Action Generation Module to generate the Target Actions $\mathbf { A } ^ { k + 1 }$ in the AR-k+1 step.
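The window-sliding history update can be sketched as a fixed-length queue; the action values and lengths below are toy assumptions:

```python
import numpy as np

def update_history(hist, new_actions, L):
    """Window-sliding update: append the actions just executed and drop the
    oldest (invalid) ones, keeping the history at a fixed length L."""
    return np.concatenate([hist, new_actions], axis=0)[-L:]

hist = np.arange(8, dtype=float).reshape(8, 1)   # toy 1-D Historical Actions
new = np.array([[100.0], [101.0]])               # Target Actions just executed
hist_next = update_history(hist, new, L=8)       # history for the next AR step
print(hist_next[0, 0], hist_next[-1, 0])  # 2.0 101.0
```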
Figure 3: Chunk-wise Autoregressive inference of our CDP . (a) The orange and purple blocks denote actions whose Key and Value representations have been cached and not cached until the current step, respectively. The yellow block denotes the Gaussian noises. During the $\tt A R { - } k$ step, we perform denoising while simultaneously computing and storing the Key and Value representations for the Uncached Historical Actions. After denoising, the Target Actions generated in the AR-k step are applied to the environment and serve as the Uncached Historical Actions in the $\tt A R - k + 1$ step, which are then used to generate future Target Actions. (b) In the current autoregressive inference step, the Key and Value representations for the Cached Historical Actions can be directly used by the Attention Computation. But for the Uncached Historical Actions and Denoising Targets, we need to utilize a QKV MLP layer to extract their Query, Key and Value representations. During Attention Computation, the Uncached Historical Actions are restricted to considering only actions within their own chunk (indicated by the purple line), while the Denoising Targets have access to the entire action sequence (indicated by the yellow line).
Cache Sharing. In the $\mathtt { A R - k }$ step, the Historical Actions $\tilde { \mathbf { A } } ^ { k }$ of length $L$ can be divided into two parts: Cached Historical Actions $\tilde { \mathbf { A } } _ { 0 : l - 1 } ^ { k }$ and Uncached Historical Actions $\tilde { \mathbf { A } } _ { l : L - 1 } ^ { k }$ , based on whether their Key and Value representations have been cached up to this step. Here, $l$ denotes the length of the Cached Historical Actions. During denoising, the Key and Value representations for the Cached Historical Actions $\tilde { \mathbf { A } } _ { 0 : l - 1 } ^ { k }$ have been precomputed in prior autoregressive steps and can be directly used in the attention computation. We denote them as $\mathbf { K } _ { 0 : l - 1 } ^ { ( 0 ) }$ and $\mathbf { V } _ { 0 : l - 1 } ^ { ( 0 ) }$ , where the superscript $( \cdot )$ denotes the denoising timestep and $( 0 )$ corresponds to the final Target Actions generated in prior steps. For the Uncached Historical Actions $\tilde { \mathbf { A } } _ { l : L - 1 } ^ { k }$ and Denoising Targets $\mathbf { N } ^ { k }$ , we utilize a QKV MLP layer to extract their Query, Key, and Value representations,
$$
\begin{array}{rl} & \{ \mathbf{Q}_{l:L-1}^{(0)}, \mathbf{K}_{l:L-1}^{(0)}, \mathbf{V}_{l:L-1}^{(0)} \} = \mathrm{QKV\_MLP}\left( \tilde{\mathbf{A}}_{l:L-1}^{k} \right) \\ & \{ \mathbf{Q}_{L:L+M-1}^{(t)}, \mathbf{K}_{L:L+M-1}^{(t)}, \mathbf{V}_{L:L+M-1}^{(t)} \} = \mathrm{QKV\_MLP}\left( \mathbf{N}^{k} \right) \end{array}
$$
where the superscript $( t )$ indicates that those Query, Key and Value representations correspond to the Denoising Targets at timestep $t$ . Then, $\mathbf { Q } _ { l : L - 1 } ^ { ( 0 ) }$ , $\mathbf { K } _ { l : L - 1 } ^ { ( 0 ) }$ , and $\mathbf { V } _ { l : L - 1 } ^ { ( 0 ) }$ are cached and will be shared across all denoising timesteps. Concatenating all the above representations forms the whole Query, Key, and Value representations (i.e. $\mathbf { Q } ( k , t )$ , $\mathbf { K } ( k , t )$ , and $\mathbf { V } ( k , t )$ ) at denoising timestep $t$ in the AR-k step.
$$
\begin{array}{rl} & \mathbf{Q}(k,t) = \mathrm{Concat}(\mathbf{Q}_{l:L-1}^{(0)}, \mathbf{Q}_{L:L+M-1}^{(t)}) \\ & \mathbf{K}(k,t) = \mathrm{Concat}(\mathbf{K}_{0:l-1}^{(0)}, \mathbf{K}_{l:L-1}^{(0)}, \mathbf{K}_{L:L+M-1}^{(t)}) \\ & \mathbf{V}(k,t) = \mathrm{Concat}(\mathbf{V}_{0:l-1}^{(0)}, \mathbf{V}_{l:L-1}^{(0)}, \mathbf{V}_{L:L+M-1}^{(t)}) \end{array}
$$
where Concat denotes the concatenation operation along temporal dimension. Subsequently, the causal temporal attention is computed as:
$$
\mathrm{softmax}\big( \mathbf{Q}(k,t)\, \mathbf{K}(k,t)^{\top} + \tilde{\mathbf{M}} \big)\, \mathbf{V}(k,t)
$$
Table 1: Quantitative evaluation of our CDP and baseline methods on simulation tasks, highlighting the effectiveness of our approach.
where $\tilde { \mathbf { M } }$ denotes the attention mask with dimensions $( L - l + M ) \times ( L + M )$ , mirroring the training attention mask illustrated in Fig. 2(b). As shown in Fig. 3(b), this mask ensures that: 1) the attention weights for the Uncached Historical Actions are computed only over actions within their own chunk, independent of all other actions; 2) the prediction of the Target Actions is based on all Historical Actions.
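The cache-sharing concatenation can be sketched as follows. The identity stand-in for the QKV MLP layer and all dimensions are assumptions made for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
l, L, M, d = 4, 6, 3, 8   # cached length, history length, horizon, feature dim

def qkv_mlp(x):           # stand-in for the QKV MLP layer (identity, for brevity)
    return x, x, x

K_cached = rng.normal(size=(l, d))     # keys stored in earlier AR steps
V_cached = rng.normal(size=(l, d))     # values stored in earlier AR steps

uncached = rng.normal(size=(L - l, d))            # uncached historical actions
noisy = rng.normal(size=(M, d))                   # denoising targets at timestep t
Qu, Ku, Vu = qkv_mlp(uncached)
Qn, Kn, Vn = qkv_mlp(noisy)

Q = np.concatenate([Qu, Qn])                      # queries: only rows needing updates
K = np.concatenate([K_cached, Ku, Kn])            # cached keys reused, not recomputed
V = np.concatenate([V_cached, Vu, Vn])
print(Q.shape, K.shape)  # (5, 8) (9, 8)
```

Note that the resulting query/key shapes give attention scores of size $(L - l + M) \times (L + M)$, matching the stated mask dimensions.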
Cyclic Temporal Embedding. In traditional temporal embedding mechanisms, unique temporal embeddings are typically assigned to each action during the inference phase. However, this method is not effective due to the variable length of action sequences. To tackle this problem, we present the Cyclic Temporal Embedding mechanism. Instead of generating new embeddings, the Denoising Targets utilize temporal embeddings that are cyclically shifted from the beginning of the sequence. During the training process, a random cyclic offset is applied to the temporal embedding sequence of each sample. This randomization enables the model to generalize across various temporal offsets, thereby enhancing its robustness in the inference phase.
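The cyclic shift behind the Cyclic Temporal Embedding can be sketched with simple modular indexing; the table size below is an illustrative assumption:

```python
import numpy as np

def cyclic_embeddings(table, offset, length):
    """Select `length` temporal embeddings starting at `offset`, wrapping
    around a fixed-size table: a sketch of the cyclic shift."""
    idx = (offset + np.arange(length)) % len(table)
    return table[idx]

table = np.eye(6)                       # assumed table of 6 learned embeddings
e = cyclic_embeddings(table, offset=4, length=4)
print(np.argmax(e, axis=1))  # [4 5 0 1]
```

At training time the offset would be drawn at random per sample, so the model sees all cyclic positions.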
# 4 Simulation Experiments
# 4.1 Experiment Setup
Benchmarks. 1) Adroit [46]: This benchmark employs a multi-fingered Shadow robot within the MuJoCo environment to perform highly dexterous manipulation tasks, including interactions with articulated objects and rigid bodies. It is designed to test advanced manipulation skills and coordination. 2) Dexart [47]: This dataset utilizes the Allegro robot within the SAPIEN environment to execute high-precision dexterous manipulation tasks, primarily focusing on articulated object manipulation. It emphasizes fine motor skills and adaptability in complex scenarios. 3) MetaWorld [48]: This benchmark operates within the MuJoCo environment, using a robotic gripper to perform a diverse range of manipulation tasks involving both articulated and rigid objects. Tasks are categorized by difficulty levels: easy, medium, hard, and very hard. Its evaluation covers tasks from medium to very hard difficulty levels. 4) RoboFactory [49]: This is a benchmark for embodied multi-agent systems, focusing on collaborative tasks with compositional constraints to ensure safe and efficient interactions. The task challenges include designing effective coordination mechanisms to enable agents to work together seamlessly, and ensuring robustness against uncertainties and dynamic changes in the environment during long-term task execution.
Baselines. To systematically evaluate the performance of our proposed method CDP, we selected Diffusion Policy and 3D Diffusion Policy as the 2D and 3D baseline methods, respectively. Different from previous works, we employed an MLP as the visual encoder for both the 2D and 3D baseline methods in our experiments to ensure consistency. All methods were subjected to the same number of observation and inference steps, and were trained using an identical set of expert demonstrations as well as an equivalent number of training epochs.
Evaluation Metric. Following the established metrics [27], each experiment was conducted across three independent trials, using seed values of 0, 1, and 2, on the Adroit, DexArt, and MetaWorld benchmarks. For each seed, the policy was evaluated over 20 episodes every 200 training epochs, and the mean of the top 5 success rates was computed. The final reported performance consists of the mean and standard deviation of these success rates across the above three seeds. On the RoboFactory benchmark, each experiment was executed using only one seed value, i.e. 0, and was evaluated over 100 episodes at the 300th training epoch. This comprehensive evaluation metric ensures a fair comparison between our CDP and the baseline methods, providing a clear assessment of the performance improvements achieved by our approach.
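The reported statistic (mean of the top-5 success rates per seed, then mean and standard deviation across seeds) can be computed as below; the success-rate values are hypothetical:

```python
import numpy as np

def seed_score(success_rates, top_k=5):
    """Mean of the top-k success rates recorded over training for one seed."""
    return float(np.mean(np.sort(np.asarray(success_rates))[-top_k:]))

# Hypothetical success rates logged every 200 epochs for three seeds (0, 1, 2)
runs = [
    [0.20, 0.50, 0.70, 0.80, 0.85, 0.90, 0.88],
    [0.10, 0.40, 0.60, 0.75, 0.80, 0.82, 0.85],
    [0.30, 0.55, 0.65, 0.70, 0.90, 0.92, 0.80],
]
scores = [seed_score(r) for r in runs]
# Final reported performance: mean +/- std across seeds
print(round(float(np.mean(scores)), 3), round(float(np.std(scores)), 3))
```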
# 4.2 Result Analysis
The quantitative results of all methods on the simulation benchmarks are detailed in Table 1, which illustrates that our CDP consistently outperforms the baseline methods across both 2D and 3D scenarios. Specifically, $C D P$ achieves an improvement of about 5 to 20 percentage points over the baseline methods across tasks of varying difficulty levels in different benchmarks. These findings highlight that, with an identical visual encoder, the causal generation paradigm employed by $C D P$ significantly enhances the success rate of downstream tasks. This success can be attributed to two key aspects. First, the causal generation approach leverages historical actions by learning their temporal dynamic context to guide the generation of future actions. This temporal context provides valuable information that aids in making more informed and coherent decisions. Second, CDP effectively addresses the challenges caused by degraded observations. In scenarios where the current observation alone cannot provide sufficient information for action generation, $C D P$ utilizes the historical action sequence as supplementary information. This dual reliance on both observations and historical actions ensures robustness in policy planning.
# 4.3 Ablation Study
Ablation Study on the Quality of Observation Data. To thoroughly examine the robustness of our proposed method under different levels of observation quality, we conduct experiments on the Lift Barrier task of the Robofactory benchmark using point clouds of varying resolutions, specifically with 32, 64 and 128 points. We then compare the success rates of our method with those of DP3 across these point clouds. Fig. 4 reveals that our method demonstrates significantly superior robustness to degraded observations. As the resolution of the point clouds decreases, the success rate of DP3 drops markedly, whereas our proposed method maintains a consistently high success rate. This resilience can be attributed to the model’s ability to effectively utilize historical action sequences, thereby compensating for the lack of spatial constraints from degraded observations.
Figure 4: Ablation study about the quality of observation data. CDP consistently outperforms DP across all experimental conditions.
# 5 Real-World Experiments
# 5.1 Experiment Setup
Workspace. As illustrated in Fig. 6, our real-world experiments were conducted using a RealMan robotic arm fitted with a DH AG95 gripper. A stationary top-down Intel RealSense D435i RGB-D camera was employed to provide a global RGB image of the workspace. All hardware components were connected to a workstation equipped with an NVIDIA 3090 Ti GPU. This workspace enabled efficient data acquisition and evaluation of the robotic system’s performance.
Demonstrations. The demonstrations utilized in our experiments were meticulously gathered via human teleoperation, facilitated by advanced vision-based retargeting techniques. For each task, we collected a total of 50 demonstrations, each carefully selected to encapsulate the fundamental skills and critical interactions necessary for achieving successful task outcomes. This curated approach not only ensures that our dataset remains manageable in size but also guarantees its representativeness of the inherent complexities and challenges associated with the tasks.
# 5.2 Results Analysis
We list the quantitative results of our real-world experiments in Table 2. For the tasks of Collecting Objects and Stacking Cubes, we report not only the overall success rate but also the individual success rates for grasping and placing. The placing success rate is measured conditional on successful grasping. As shown in the table, our model outperforms the baseline method in terms of grasping, placing and overall success rate, highlighting the effectiveness of our approach. Figure 5 illustrates representative trajectories of real-world tasks generated by CDP . Consistent with simulation results, real-world experiments confirm that $C D P$ attains high success rates across all tasks, even when trained with a modest number of 50 demonstrations.
Table 2: Quantitative results of our CDP and the baseline method on real-world tasks. Succ. is the abbreviation for Success Rate. | Diffusion Policy (DP) enables robots to learn complex behaviors by imitating expert demonstrations through action diffusion. However, in practical applications, hardware limitations often degrade data quality, while real-time constraints restrict model inference to instantaneous state and scene observations. These limitations seriously reduce the efficacy of learning from expert demonstrations, resulting in failures in object localization, grasp planning, and long-horizon task execution. To address these challenges, we propose Causal Diffusion Policy (CDP), a novel transformer-based diffusion model that enhances action prediction by conditioning on historical action sequences, thereby enabling more coherent and context-aware visuomotor policy learning. To further mitigate the computational cost associated with autoregressive inference, a caching mechanism is also introduced to store attention key-value pairs from previous timesteps, substantially reducing redundant computations during execution. Extensive experiments in both simulated and real-world environments, spanning diverse 2D and 3D manipulation tasks, demonstrate that CDP uniquely leverages historical action sequences to achieve significantly higher accuracy than existing methods. Moreover, even when faced with degraded input observation quality, CDP maintains remarkable precision by reasoning through temporal continuity, which highlights its practical robustness for robotic control under realistic, imperfect conditions. | ["cs.CV", "cs.RO"] |
# 1 Introduction
Long sequence capability offers the ultimate unlock for a wide range of AI applications, from RAG, multi-turn conversation, long document summarization, multi-modality support, and many more. This is evident from the continuous increase in the max sequence length supported by popular open source LLMs: Meta's Llama 3.x and Alibaba's Qwen 2.5 32B support 128K-token sequences, NVIDIA's Llama-3.1-8B-UltraLong-4M-Instruct is fine-tuned to support a 4M sequence length, and the more recent Meta Llama 4 Maverick and Llama 4 Scout models support 1M and a whopping 10M sequence lengths, respectively.
While these models can handle incredibly long sequence lengths, fine-tuning them at these lengths to enhance task-specific capabilities is, for several reasons, out of reach for most data scientists who lack access to sophisticated enterprise training systems and rely on open source solutions:
• First, standard LLM training workflows are not optimized for memory efficiency, which restricts the maximum sequence length per GPU and makes them suboptimal for long-sequence training.
• Second, training with multi-million-token sequence lengths requires more memory than is available in any commercially available GPU device, and while there are solutions that leverage aggregate GPU memory across multiple devices, they are limited. For example, Ring-based sequence parallelism [1] does not support arbitrary attention patterns natively. Ulysses-based sequence parallelism [2] does not have this restriction, but it is not supported in popular frameworks like Hugging Face, limiting its accessibility.
• Third, PyTorch itself suffers from a multitude of memory bottlenecks that limit the memory available to support long sequences.
In the Open Source release of Arctic Long Sequence Training (ALST) we address the above challenges with three targeted solutions:
• Ulysses Sequence Parallelism Compatible with Hugging Face Transformers: Adapted from the original Megatron-DeepSpeed Ulysses [2] and extended to support modern attention mechanisms, this technique enables the use of aggregate GPU memory across multiple devices.
• Sequence Tiling for Memory Efficiency: A new computation tiling pattern for LLM training that reduces the memory required for memory-intensive operators like logits, loss, and MLP computation from O(N) to O(1), where N is the sequence length.
• PyTorch Memory Optimizations: Through comprehensive memory profiling of long-sequence training workloads, we identified and applied a series of PyTorch-specific optimizations to eliminate the unnecessary memory overheads.
By leveraging these three components and making them compatible with Hugging Face Transformers, ALST makes long-sequence training accessible to the broader AI community:
• 500K-long sequence training on a single H100 GPU, 16 times longer than baseline, democratizing long-sequence training on resource-constrained setups.
• 3.7M-long sequence training on a single H100 node, a 116 times improvement relative to baseline.
• 15M-long sequence training on a four-node H100 GPU cluster, a 469 times improvement relative to baseline.
• The technology is agnostic to the attention mechanism, allowing for out-of-box support for different sparsity patterns like block sparse, MoBA, etc.
Figure 1 provides a quick preview of the ALST accomplishments, which we will discuss in detail in later sections.
Figure 1: A dramatic improvement in sequence length with ALST enabled on 1, 8 and 32 H100 GPUs with Llama-8B. The baseline is Hugging Face with DeepSpeed ZeRO Stage 3 and optimizer states offload to CPU.
Besides Ulysses SP [2] there are other approaches to sequence parallelism (also referred to as context parallelism). One approach, introduced by Megatron-LM [3], extends Tensor Parallelism (TP) and cannot operate without TP. The more popular one, which can be used alone, is Ring Attention with its many variants [1][4][5][6], which resembles a distributed version of Flash Attention 2 [7]. Some techniques combine the Ring Attention and Ulysses SP approaches [8][9]. The main obstacle with these approaches is that they require modeling code modifications, whereas Ulysses SP is attention- and model-agnostic.
In the rest of the paper, we will take a deeper dive into the memory challenges of long-sequence training, followed by a discussion of memory optimizations targeting these challenges. For our readers interested in deeper discussions, we also share our implementation details as well as the nuances of integration into Hugging Face Transformers. Finally, we present our evaluation results, as well as share how you can get started with ALST. For data scientists looking to experiment with long-sequence training, we will also share limitations and useful notes at the end of the paper.
# 2 Why is training with long sequences challenging?
Training models on long sequence lengths is a difficult task because models are large and the accelerator memory is typically insufficient to hold the large activations, especially when weights, optimizer states and gradients already consume a lot of the available GPU memory.
We used a combination of the PyTorch memory profiler and a helper see_memory_usage utility, which dumps memory usage stats at various places in the code, to identify where memory was allocated inefficiently or not released soon enough.
# 2.1 Model Training Memory Map
Here is a detailed breakdown of what the GPU memory is used for:
1. Weights + Optimizer states + Gradients: In typical BF16 mixed-precision training, about 18 bytes are needed per model parameter just to hold the model weights (2), optimizer states (8 + 4), and gradients (4). For example, Llama-3.1-8B-Instruct contains 8 billion parameters and thus requires 16GiB for BF16 weights, 64GiB for Adam optimizer states, 32GiB for FP32 weights used for optimizer stability, and finally 32GiB for FP32 gradients. Therefore, in total, each GPU already requires 144GiB of memory to train this 8B-parameter model before all the other overheads.
2. Activations: Then there is memory required to calculate and hold activations - which is all the intermediate tensors that the model uses. This includes a variety of temporary tensors as well. The tricky part about activation memory is getting tensors that are no longer needed released as soon as possible. For example, activation checkpointing is often used to help reduce the required active memory by recalculating the intermediate forward activations during the backward call.
3. Runtime overheads: We observe that the remaining GPU memory is consumed by several key sources. Lower-level libraries such as CUDA and NCCL each reserve a significant amount of memory: CUDA typically uses around 1 GiB per GPU, while NCCL can consume multiple gigabytes for internal buffers and data structures, which are more difficult to track. Additionally, different versions of PyTorch can exhibit varying memory usage patterns due to leaks or inefficiencies. Finally, when operating near the limits of available GPU memory, memory fragmentation becomes a concern, reducing the availability of large contiguous memory blocks.
To better understand memory requirements for different models, GPU counts, and total sequence lengths, we suggest using the memory-estimation API in DeepSpeed ZeRO.
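As a rough sanity check of the breakdown in item 1, the static memory footprint can be sketched in a few lines of Python. The helper name is ours, not from the ALST code base, and we use decimal GB, which matches the 16/64/32/32/144 figures quoted above for an 8e9-parameter model:

```python
# Back-of-the-envelope estimate (our helper, not ALST code) of the static
# per-GPU memory for BF16 mixed-precision training with Adam, before any
# activations are allocated.
def static_training_memory_gb(num_params: float) -> dict:
    GB = 1e9  # decimal GB, matching the figures quoted in the text
    breakdown = {
        "bf16_weights": 2 * num_params / GB,           # 2 bytes/param
        "adam_optimizer_states": 8 * num_params / GB,  # momentum + variance
        "fp32_master_weights": 4 * num_params / GB,    # optimizer stability
        "fp32_gradients": 4 * num_params / GB,
    }
    breakdown["total"] = sum(breakdown.values())       # 18 bytes/param overall
    return breakdown

mem = static_training_memory_gb(8e9)  # e.g. Llama-3.1-8B-Instruct
```

Note that none of these terms depend on the sequence length, which is why activations become the dominant factor for long sequences.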
# 2.2 Activation Memory is the Primary Bottleneck
Now that we have mapped what GPU memory is used for, it is easy to see that going from a short sequence length to a long one mainly requires fitting more activation memory; all other memory allocations remain the same regardless of the sequence length. Figure 2 shows how activation memory for Llama-8B increases linearly with sequence length.
Figure 2: Estimated memory usage for Llama-8B activation memory with different sequence lengths.
# 3 Memory Optimizations
Next we will discuss three memory optimization groups that were instrumental at enabling training at very long sequence lengths:
1. Sequence tiling for reducing activation memory footprint
2. Ulysses Sequence Parallelism for Cross-GPU Activation Memory Sharing
3. Activation offloading and other PyTorch optimizations
# 3.1 Sequence Tiling for Reducing Activation Memory Footprint
GPU memory requirements for training on long sequences grow rapidly with sequence length. As part of our activation memory calculations in section 2.2, we estimated the activation and logits memory needed for various models and sequence lengths. Without our optimizations, the per-GPU memory usage quickly becomes unsupportable, as shown in Figure 2, and this doesn't even include the model parameters or optimizer states discussed earlier.
To address this memory explosion, we leverage Sequence Tiling, a technique that reduces peak memory usage by tiling forward and backward computations along the sequence dimension. Instead of processing the entire sequence in a single pass — which requires storing large intermediate tensors — Sequence Tiling breaks the computation into smaller tiles. Intermediate values are materialized only for each tile, significantly lowering the memory footprint.
This approach is applicable to operations that have no cross-sequence dependencies, such as linear layers, token embeddings, and per-token loss computations. For example, instead of computing logits or applying MLP layers across the entire sequence at once, we apply them sequentially to smaller segments, storing only the necessary intermediates at each step.
Let's start by examining how effective Sequence Tiling is at reducing memory overhead during loss calculations. Using the example of Llama-3.1-8B-Instruct with a sequence length of 16K, the model's vocabulary size of 128,256 results in a single copy of the logits in FP32 consuming approximately 8GiB of memory per GPU (calculated as 4 × 16_000 × 128_256 / 2^30 = 7.65GiB). Rather than materializing all 8GiB of logits at once for both the forward and backward passes, we shard the logits into smaller chunks and compute the forward and backward passes on each shard independently. This significantly reduces peak memory usage. For instance, using a 1GiB shard size divides the computation into about 8 chunks and can save over 14GiB of memory in practice (because the loss computation holds two copies of the 8GiB logits tensor).
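The tiled loss computation can be illustrated with a pure-Python toy (the real implementation operates on GPU tensors; helper names are ours). The key detail is weighting each shard's loss sum by its token count so the tiled mean matches the untiled one, while only one shard's intermediates are live at a time:

```python
import math

# Toy sketch of sequence-tiled cross-entropy loss. Cross-entropy has no
# cross-token dependency, so the mean loss can be accumulated shard by shard.
def token_ce(logits_row, label):
    # numerically stable log-sum-exp minus the label logit
    z = max(logits_row)
    log_sum = z + math.log(sum(math.exp(x - z) for x in logits_row))
    return log_sum - logits_row[label]

def full_loss(logits, labels):
    # untiled: all per-token losses materialized conceptually at once
    return sum(token_ce(l, y) for l, y in zip(logits, labels)) / len(labels)

def tiled_loss(logits, labels, tile=2):
    total, n = 0.0, 0
    for i in range(0, len(labels), tile):  # only one tile "live" at a time
        chunk_l, chunk_y = logits[i:i + tile], labels[i:i + tile]
        total += sum(token_ce(l, y) for l, y in zip(chunk_l, chunk_y))
        n += len(chunk_y)
    return total / n  # weighted by token count, so the mean is exact

logits = [[0.1, 2.0, -1.0], [1.5, 0.2, 0.3], [0.0, 0.0, 3.0],
          [2.2, 1.1, 0.4], [0.5, 0.6, 0.7]]
labels = [1, 0, 2, 0, 1]
assert abs(full_loss(logits, labels) - tiled_loss(logits, labels)) < 1e-12
```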
Up to this point we have discussed memory reductions from a theoretical perspective; we can also showcase these reductions empirically with the help of the PyTorch memory profiler. Figure 3 shows two plots: (left) without Sequence Tiling, peak memory usage during the loss calculation is around 50GiB; (right) after updating the loss calculation to use Sequence Tiling, peak memory usage drops to 36GiB, a 28% memory reduction.
Figure 3: PyTorch memory usage plots before (left) and after (right) using Sequence Tiling to reduce loss calculation memory usage
Tiled compute is not a new idea. Here are some recent uses relevant to our discussion:
• For some years DeepSpeed's TiledLinear has been enabling much bigger compute loads that otherwise would not have fit into GPU memory.
• Liger-Kernel [10] implemented a fused cross-entropy loss that avoids manifesting the whole logits tensor first, enabling bigger batches and sequence lengths, but only for a select set of popular model architectures.
Now we introduce a generic TiledCompute autograd function that in theory can make any function performing large matrix multiplications use much less GPU memory. We implemented a fused cross-entropy using it, but Liger-Kernel's version is a bit faster since it uses a Triton kernel. Liger-Kernel supports a limited number of popular Hugging Face Transformers architectures, whereas our solution should in theory work with any architecture. We therefore recommend using Liger-Kernel's fused cross-entropy for the architectures it supports, and ours otherwise.
# 3.1.1 TiledMLP
But why stop at tiling the logits+loss computation? We can also tile the MLP compute. We created a simplified version of TiledCompute called TiledMLP to perform a much more memory-efficient MLP computation, which allowed a huge increase in sequence length.
If we extract a single LlamaMLP layer from Llama-8B and run a bf16 hidden_states tensor of shape [1, 256_000, 4096] through its forward-backward, without and with sequence-dimension tiling, we save about 10x memory, as can be seen from Figure 4. The number of shards was auto-deduced via ceil(seqlen=256_000 / hidden_size=4096) = 63.
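Because an MLP is applied independently per token, tiling along the sequence dimension is numerically exact. A toy pure-Python sketch (not the DeepSpeed TiledMLP implementation; the MLP here is a stand-in) illustrates the idea:

```python
import math

# Toy per-token "MLP": up-projection-like nonlinearity plus a reduction over
# the token's own features. No cross-sequence dependency, so tiling is exact.
def mlp(token_vec):
    hidden = [math.tanh(x * 2.0) for x in token_vec]
    return [sum(hidden) * 0.5 for _ in token_vec]

def mlp_full(seq):
    # untiled: all intermediate activations conceptually live at once
    return [mlp(tok) for tok in seq]

def mlp_tiled(seq, tile_size):
    out = []
    for i in range(0, len(seq), tile_size):
        # only one tile's intermediates are live at a time
        out.extend(mlp_full(seq[i:i + tile_size]))
    return out

seq = [[0.1, -0.2], [0.3, 0.4], [1.0, -1.0], [0.5, 0.0], [-0.7, 0.2]]
assert mlp_tiled(seq, tile_size=2) == mlp_full(seq)

# Shard count auto-deduction rule from the text: ceil(seqlen / hidden_size)
assert math.ceil(256_000 / 4096) == 63
```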
The evaluation section 5 shows the numerical improvements from enabling TiledMLP with full models.
Figure 4: 1x forward-backward cycle on a single Llama-8B LlamaMLP layer (Left) without tiling (Right) with tiling.
# 3.2 Ulysses Sequence Parallelism for Cross-GPU Activation Memory Sharing
Now let’s have a more detailed look at how UlyssesSPAttentionHF (Ulysses Sequence Parallelism Attention for Hugging Face) works.
1. Starting with sequence parallelism, the sequence is split across participating GPUs and executed in parallel until the attention block is reached.
2. Since self-attention requires the entire sequence, at this boundary we switch from sequence parallelism to attention-head parallelism.
3. When the attention block completes, we switch back to sequence parallelism.
We will use the following diagram (Figure 5) from the Arctic Ulysses Inference blog post to explain how this works in detail:
Figure 5: Ulysses Sequence Parallelism diagram with 4 attention heads per attention block model.
Since communications of the attention head projections cannot be overlapped with compute, they have to be really fast. And that’s why all-to-all communication collective is used. Before the all-to-all communication, each rank has a partial sequence for all attention heads; however, after the all-to-all communication, each rank has the entire sequence, but only for a partial subset of attention heads. This allows each rank to compute the attention for the subset of the attention heads that it owns in parallel. Then after the attention, Ulysses SP performs another all-to-all communication to switch back to the original SP layout, where each rank once again has the full embedding (attention heads) but a shard of a sequence. The computation then proceeds in parallel until the next layer’s attention is reached.
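The layout change performed by the two all-to-alls can be sketched with plain Python lists standing in for GPU tensors (the rank/head bookkeeping is illustrative, not the actual DeepSpeed code):

```python
# Schematic of the Ulysses all-to-all re-layout. Before: each rank holds a
# sequence shard for ALL heads. After: each rank holds the FULL sequence for
# a subset of heads, so any attention kernel can run unmodified per head.
SP, HEADS, SEQLEN = 4, 8, 16
tokens_per_rank, heads_per_rank = SEQLEN // SP, HEADS // SP

# before[rank] -> (token_idx, head_idx) entries owned by that rank
before = [[(r * tokens_per_rank + t, h)
           for t in range(tokens_per_rank)
           for h in range(HEADS)]
          for r in range(SP)]

def all_to_all(shards):
    # each entry (tok, head) moves to the rank that owns that head
    after = [[] for _ in range(SP)]
    for shard in shards:
        for tok, head in shard:
            after[head // heads_per_rank].append((tok, head))
    return after

after = all_to_all(before)
for r in range(SP):
    assert {t for t, _ in after[r]} == set(range(SEQLEN))  # full sequence
    assert {h for _, h in after[r]} == set(
        range(r * heads_per_rank, (r + 1) * heads_per_rank))  # head subset
```

Note that the total number of entries per rank is unchanged by the exchange; only the layout differs, which is why memory stays balanced across ranks.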
The reason Ulysses SP is attention algorithm-agnostic is that at the point of calculating attention it recomposes the full sequence length and passes it to the desired attention implementation (e.g., FlashAttention2 [7] or SDPA). In other SP approaches (e.g., Ring Attention [1]), by contrast, the sequence parallelism must be adapted to the specific attention algorithm being used.
# 3.2.1 Extending Ulysses for Modern Attention Mechanism
Figure 6 provides a visual representation of MHA, GQA and MQA types of model attention heads.
Figure 6: MHA, GQA and MQA types of model attention heads (source)
The original Ulysses SP implementation in Megatron-DeepSpeed only supports the MHA type of attention mechanism. Ulysses SP for HF was extended to support all three of the aforementioned types:
1. MHA is the simplest since it has the same number of q and kv heads. We simply split qkv_heads into SP shards. Clearly here the only limitation is that qkv_heads is divisible by SP degree.
2. GQA has kv < q heads, since the kv heads get reused.
a. If kv_heads is divisible by SP we shard q_heads into SP degree shards and kv_heads into SP degree shards.
b. If kv_heads < SP then we replicate kv_heads to match the SP degree.
3. MQA is where there is 1 kv_head and many q_heads. This is the same as 2b: we replicate kv_heads to match SP degree.
# Examples:
• 32 q_heads, 8 kv_heads, SP=8 ⇒ each rank will have 4 q_heads, 1 kv_head
• 32 q_heads, 8 kv_heads, SP=32 ⇒ each rank will have 1 q_head, 1 kv_head (kv_heads will be replicated)
• 32 q_heads, 4 kv_heads, SP=8 ⇒ each rank will have 4 q_heads, 1 kv_head (kv_heads will be replicated)
The kv_heads and q_heads counts aren't always divisible by the desired SP degree. For example, if a model has kv_heads=3 and q_heads=9, we can do SP=3 or SP=9, but we cannot fully deploy the node with SP=8.
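The sharding rules above can be summarized in a small helper (the function is our illustration, not the actual UlyssesSPAttentionHF code):

```python
# Per-rank head counts for a given SP degree, following the MHA/GQA/MQA rules
# described in the text: shard q_heads across SP ranks, shard kv_heads when
# possible, otherwise replicate kv_heads to match the SP degree.
def shard_heads(q_heads: int, kv_heads: int, sp: int):
    if q_heads % sp != 0:
        raise ValueError(f"q_heads={q_heads} not divisible by sp={sp}")
    if kv_heads >= sp:
        if kv_heads % sp != 0:
            raise ValueError(f"kv_heads={kv_heads} not divisible by sp={sp}")
        kv_per_rank = kv_heads // sp
    else:
        # replicate kv heads so each rank gets one
        if sp % kv_heads != 0:
            raise ValueError(f"cannot replicate kv_heads={kv_heads} to sp={sp}")
        kv_per_rank = 1
    return q_heads // sp, kv_per_rank

# the three examples from the text:
assert shard_heads(32, 8, sp=8) == (4, 1)
assert shard_heads(32, 8, sp=32) == (1, 1)  # kv replicated
assert shard_heads(32, 4, sp=8) == (4, 1)   # kv replicated
```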
# 3.3 Activation Offload to CPU and Other PyTorch Optimizations
We employed several strategies to reduce runtime memory overhead. Here are the non-invasive ones:
• A deep memory analysis revealed that PyTorch versions have a significant impact on memory usage; due to a known issue with dist.barrier, we observed excess memory usage of over 3 GiB in versions 2.6.0–2.7.0 and therefore standardized on version 2.8.0.dev20250507 (aka nightly) for our experiments. The just-released torch==2.7.1, where this and a few other related problems have been fixed, should be a solid alternative.
• Additionally, while all_reduce_object offers convenience, it introduces an additional memory cost of over 3 GiB per GPU, so we opted to use all_reduce instead.
(In the future we plan to come up with solutions that will be able to fit these cases nicely into 8x mode with balanced compute.)
• We activated PyTorch activation checkpointing to reduce memory usage, accepting a modest increase in compute overhead.
• Finally, to mitigate memory fragmentation, we enabled PyTorch's expandable segments allocator, which provided massive memory allocation improvements with minimal impact on overall performance.
Then we introduced a more invasive technique:
The activation checkpointing feature saves a huge amount of memory, but at large sequence lengths the checkpointed hidden_states tensors, which are of shape [bs, seqlen, hidden_size], still consume a lot of GPU memory. For example, at seqlen=125K, bs=1, hidden_size=4096 and n_layers=32 that is 30.5GiB across all layers (125_000 × 4096 × 2 × 32 / 2^30 = 30.5GiB). We monkey-patched torch.utils.checkpoint.CheckpointFunction to offload the hidden_states activation checkpoint tensor to CPU, thus enabling a dramatically longer sequence length.
Figure 7 shows a PyTorch memory profiler visualization of a single forward-backward iteration of Llama-8B with 32K sequence length. On the left you can see the usual pattern of memory usage growing as each layer runs its forward call (left upward slope), followed by memory usage dropping during the per-layer backward calls (right downward slope), as the checkpointed tensors are released once they are no longer needed. On the right is the exact same setup, but with activation checkpoint offloading to CPU enabled. The "hill" is gone and we are left with a flat profile, which frees up much more working GPU memory and allows much longer sequence lengths, since peak memory usage no longer depends on how many layers the model has.
Figure 7: PyTorch memory profiler 1 iteration forward-backward CUDA memory usage: Left: normal setup. Right: with activation checkpoint offloading to CPU enabled
If you use a large model like Llama-70B you need to make sure you have sufficient CPU memory, since hidden_states gets large and there are as many copies of this tensor as there are layers. For example, Llama-70B at seqlen=3M, bs=1 and 32 GPUs needs 915GiB of CPU memory per node just for the activation checkpoint offloads (3_000_000 / 32 × 8192 × 80 × 2 / 2^30 × 8 = 915GiB, where hidden_size=8192 and num_layers=80). So in this case the node's CPU memory is likely to be the limiting factor preventing an even bigger sequence length.
This feature can also serve smaller sequence lengths when a very large batch size is wanted.
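The CPU-memory arithmetic used above can be wrapped in a small helper (ours, for illustration): each layer's checkpointed hidden_states shard is [seqlen/num_gpus, hidden_size] in bf16 (2 bytes), and a node hosts gpus_per_node such per-GPU offload sets.

```python
# Back-of-the-envelope check of per-node CPU memory consumed by
# activation-checkpoint offloading (helper name is ours, not ALST code).
def offload_cpu_gib(seqlen, num_gpus, hidden_size, num_layers,
                    gpus_per_node=8, bytes_per_el=2):
    # one bf16 hidden_states shard per layer, per GPU on the node
    per_gpu = seqlen / num_gpus * hidden_size * num_layers * bytes_per_el / 2**30
    return per_gpu * gpus_per_node

# Llama-70B example from the text: 3M seqlen, 32 GPUs, hidden 8192, 80 layers
assert int(offload_cpu_gib(3_000_000, 32, 8192, 80)) == 915
```

The same helper reproduces the per-1M-seqlen figures quoted later for Llama-70B and Qwen3-32B on 4 and 8 nodes.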
# 3.4 4D Attention Mask is not possible with long sequence lengths
When using very long sequence lengths and packed samples, deploying a 4D causal attention mask is not feasible, because that tensor is of shape [bs, seqlen, seqlen]: at bs=1 and seqlen=125K it requires a 29GiB tensor (125_000 × 125_000 × 2 / 2^30 = 29GiB), at 250K it would need a 116GiB tensor on each GPU (250_000 × 250_000 × 2 / 2^30 = 116GiB), and it grows quadratically with sequence length. Therefore, the only way to make self-attention attend correctly to sub-samples in the batch without introducing a huge overhead is to use position_ids, which are of shape [bs, seqlen]; in the 125K case that is just 0.2MiB (125_000 × 2 / 2^20 = 0.2MiB).
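The arithmetic behind these numbers is easy to verify (helper names are ours):

```python
# Quadratic 4D-mask memory vs. linear position_ids memory, both in bf16
# (2 bytes per element), as in the examples above.
def mask_gib(seqlen, bytes_per_el=2):
    return seqlen * seqlen * bytes_per_el / 2**30

def position_ids_mib(seqlen, bytes_per_el=2):
    return seqlen * bytes_per_el / 2**20

assert round(mask_gib(125_000)) == 29
assert round(mask_gib(250_000)) == 116
assert round(position_ids_mib(125_000), 1) == 0.2
```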
Since currently we can't tell Hugging Face Transformers not to create the causal mask, other than when using Flash Attention 2, we had to monkey-patch _update_causal_mask so that it won't create this mask:
model_without_head = self.model_unwrapped.model
if hasattr(model_without_head, "_update_causal_mask"):
    model_without_head._update_causal_mask = lambda *args: None
# 4 Implementation and Hugging Face Transformers Integration Challenges
# 4.1 Challenges of Implementing Sequence Parallelism
We encountered three primary challenges when implementing sequence parallelism with Hugging Face Transformers models:
1. integrating with existing attention implementations
2. long-sequence data loading
3. loss sharding to avoid memory explosion
At its core, Ulysses Sequence Parallelism is designed to compose with existing attention implementations such as SDPA, Flash Attention 2, and others. While integration is straightforward in frameworks like Megatron-DeepSpeed, where the integration is done manually in the core, our approach focuses on extending existing attention modules within the training framework. This allows support for longer sequence lengths without requiring changes to the model’s code itself.
Long-sequence data loading is particularly challenging, as each training sample is inherently large. If processed naively, this can lead to memory exhaustion—exactly what sequence parallelism aims to prevent. Our solution needed to handle these large sequences efficiently while remaining compatible with popular dataset providers such as Hugging Face Datasets [11].
Implementing loss sharding using Sequence Tiling (as introduced in section 3.1) required careful design. The goal was to avoid manual user intervention and prevent the need for modifications to model implementations which are outside of our control.
# 4.2 Integration with Hugging Face Transformers
We address these challenges through the following implementation spread out across Arctic Training [12], DeepSpeed [13], and in some cases small changes to Hugging Face Transformers [14] itself.
1. Hugging Face Transformers injection - Ulysses Sequence Parallelism Attention for Hugging Face (UlyssesSPAttentionHF) integrates into the modeling code by overriding the user-specified attn_implementation (e.g., sdpa, flash_attention_2) with ulysses, and injecting a custom wrapper into the Transformers backend via transformers.modeling_utils.ALL_ATTENTION_FUNCTIONS. This approach allows us to seamlessly wrap the user’s intended attention implementation with Ulysses SP long-sequence support.
2. DataLoader - We introduce a specialized DataLoader adapter, UlyssesSPDataLoaderAdapter, which takes any existing DataLoader and automatically shards each batch along the sequence dimension. It then takes a single rank's batch and processes it collaboratively across all ranks, effectively implementing a sequence-parallelism-over-data-parallelism protocol. In this setup, we iterate over ranks, processing one batch at a time using all ranks in parallel. This design preserves the traditional, iterative behavior of the original DataLoader, while enabling seamless integration with Ulysses Sequence Parallelism.
3. Non-Attention blocks - Each rank processes its shard of the input sequence independently up to the attention layer, as the preceding computations have no inter-token dependencies and can be executed in parallel.
4. Attention block - Details on how Ulysses handles attention can be found in section 3.2.
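The injection pattern from step 1 can be sketched with a plain dict standing in for transformers.modeling_utils.ALL_ATTENTION_FUNCTIONS (the wrapper and function names are illustrative, not the real ALST code):

```python
# Toy stand-in for the Transformers attention-backend registry. The user's
# chosen backend is wrapped so the sequence<->head re-layout can happen around
# the unmodified attention kernel.
ALL_ATTENTION_FUNCTIONS = {
    "sdpa": lambda q, k, v: ("sdpa-output", q, k, v),
    "flash_attention_2": lambda q, k, v: ("fa2-output", q, k, v),
}

def make_ulysses_wrapper(inner_attn):
    def ulysses_attn(q, k, v):
        # all-to-all: sequence-parallel -> head-parallel layout would go here
        out = inner_attn(q, k, v)
        # all-to-all back: head-parallel -> sequence-parallel layout
        return out
    return ulysses_attn

def inject_ulysses(user_choice: str) -> str:
    # register the wrapper and override the user's attn_implementation
    ALL_ATTENTION_FUNCTIONS["ulysses"] = make_ulysses_wrapper(
        ALL_ATTENTION_FUNCTIONS[user_choice])
    return "ulysses"

attn_impl = inject_ulysses("flash_attention_2")
assert ALL_ATTENTION_FUNCTIONS[attn_impl]("q", "k", "v")[0] == "fa2-output"
```

The point of the indirection is that the model's code never changes: it simply calls whatever backend `attn_implementation` names.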
# 4.3 Loss Sharding in Hugging Face Transformers
In causal language models (e.g., GPT and Llama) cross-entropy loss requires labels to be shifted one position to the left because of how next-token prediction works. Cross-entropy loss compares the model’s prediction at its current position to the correct token at the next position.
When computing loss in an unsharded batch we end up with (shift left):
input_ids   : [1 2 3 4 5 6 7 8]
labels      : [1 2 3 4 5 6 7 8]
shift_labels: [2 3 4 5 6 7 8 -100]
-100 is the special label value to be ignored so it gets pushed on the right.
If we naively shard on the sequence dimension (SP=2 in the following example), we end up losing the first label of each shard, because Hugging Face Transformers' loss function shifts each rank's inputs without being aware of our sharding strategy. In our example this drops token id 5 entirely:
input_ids   : [1 2 3 4] [5 6 7 8]
labels      : [1 2 3 4] [5 6 7 8]
shift_labels: [2 3 4 -100] [6 7 8 -100]
To address this issue, we have modified the causal loss API in Hugging Face Transformers to support user-provided pre-shifted labels. Now we can pre-shift the labels before sharding on the sequence dimension, and thus end up with the correct labels passed to the loss function for each shard:
input_ids   : [1 2 3 4] [5 6 7 8]
labels      : [1 2 3 4] [5 6 7 8]
shift_labels: [2 3 4 5] [6 7 8 -100]
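The shift-then-shard fix can be illustrated in a few lines of pure Python (a toy sketch, not the actual Transformers patch):

```python
IGNORE = -100  # label value ignored by the loss

def shift_then_shard(labels, sp):
    # shift once globally, THEN split along the sequence dimension,
    # so no target token is lost at shard boundaries
    shifted = labels[1:] + [IGNORE]
    shard = len(shifted) // sp
    return [shifted[i * shard:(i + 1) * shard] for i in range(sp)]

def naive_shard_then_shift(labels, sp):
    # broken approach: each rank shifts its own shard blindly
    shard = len(labels) // sp
    shards = [labels[i * shard:(i + 1) * shard] for i in range(sp)]
    return [s[1:] + [IGNORE] for s in shards]

labels = [1, 2, 3, 4, 5, 6, 7, 8]
assert shift_then_shard(labels, sp=2) == [[2, 3, 4, 5], [6, 7, 8, IGNORE]]
# the naive approach drops token id 5 entirely:
assert naive_shard_then_shift(labels, sp=2) == [[2, 3, 4, IGNORE],
                                                [6, 7, 8, IGNORE]]
```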
# 5 Evaluation
# 5.1 Overview
We evaluated the longest sequence length we could achieve for a range of configurations from 1 GPU to 64 GPUs (8 nodes). Several iterations were completed at each reported sequence length to ensure there are no memory leaks.
The hypothesis was that, once the minimal number of GPUs required to hold the model weights is met — without taking over the whole GPU memory — the maximum supported sequence length will scale linearly with the number of GPUs. And our results confirmed this hypothesis.
By sharding model weights using ZeRO Stage 3 [15], each additional GPU reduces the memory load per device, freeing up more memory for activation storage and enabling longer sequence lengths. In some cases, we even observed a slightly superlinear increase in maximum sequence length.
However, in a few scenarios where the amount of CPU offloading was very large, we ran into a bottleneck of not having enough CPU memory to show the full potential of these techniques.
The evaluation results are broken down into 3 sections:
1. Maximum achieved sequence length - section 5.3
2. Feature ablations - section 5.4
3. Sequence length improvements over baseline - section 5.5
As the initial goal was to enable long sequence lengths for post-training, which usually takes just a few days of compute time, we weren’t too concerned with the best performance, but focused primarily on the longest sequence length we could achieve in reasonable time, offloading things when it was helpful and not doing that when it was badly impacting the performance. Subsequent work will focus on improving the performance further.
# 5.2 Methodology
We used 1- to 8-node configurations to run the experiments. Each node has 8x H100 80GiB GPUs. The nodes are interconnected with EFA v2 (AWS), which can do ~200GBps of all-reduce throughput. The intra-node connectivity is 450GBps NVLink-4.
Software versions used:
• torch==2.8.0.dev20250507 (a.k.a. torch-nightly; torch>=2.7.1 should be as good)
• flash_attn==2.7.4 (flash_attn>=2.6.4 delivers very similar performance)
• transformers==4.51.3 (transformers>=4.51.3 should be fine)
• deepspeed==0.17.0 (deepspeed>=0.17.0 should be fine)
The following optimizations were enabled during all but the ablation experiments:
• DeepSpeed ZeRO Stage 3
• DeepSpeed optimizer states offload to CPU
• Gradient/Activation checkpointing
• Fused tiled logits+loss computation via Liger-Kernel
• PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True environment variable
• Sequence parallel communications were done in bf16 (or could reduce the communication buffer size instead)
• Tiled MLP computation
• Activation checkpoint hidden_states tensor offload to CPU
Additionally, when we used a single GPU, we also enabled model weights offload to CPU to prevent GPU OOM, which otherwise would occur even with a tiny sequence length.
# 5.3 Maximum Achieved Sequence Length
We measured the maximum achievable sequence length with three popular representative models by zeroing in on the maximum length that would not produce out-of-memory, loss=NaN, or other errors.
# 5.3.1 meta-llama/Llama-3.1-8B-Instruct
meta-llama/Llama-3.1-8B-Instruct has 32 q_heads and 8 kv_heads and thus can be trained on 1 to 32 GPUs.
Figure 8 shows the measured outcomes.
Figure 8: Maximum achieved sequence length with meta-llama/Llama-3.1-8B-Instruct
# 5.3.2 meta-llama/Llama-3.1-70B-Instruct
meta-llama/Llama-3.1-70B-Instruct has 64 q_heads and 8 kv_heads and thus can be trained on 16 to 64 GPUs. At least 8 GPUs are needed just to fit the sharded model and gradients, while offloading optimizer states to CPU.
Figure 9 shows the measured outcomes.
Figure 9: Maximum achieved sequence length with meta-llama/Llama-3.1-70B-Instruct
As the notes show, we couldn't unleash the full sequence length potential of this model because its activation checkpoint offload CPU memory requirements are so big:
• 4 nodes: 1_000_000 / 32 × 8192 × 80 × 2 / 2^30 × 8 = 305GiB per 1M seqlen
And we had only 1.9TiB of CPU memory available per node. We know that we "left more sequence length on the table", because the GPU memory was only about 3/4 full.
# 5.3.3 Qwen/Qwen3-32B
Qwen/Qwen3-32B has 64 q_heads and 8 kv_heads and thus can be trained on 1 to 64 GPUs.
Figure 10 shows the measured outcomes.
Figure 10: Maximum achieved sequence length with Qwen/Qwen3-32B
For a single GPU, an additional weights offload to CPU was required.
Same as with Llama-70B, for a few configurations we discovered that 1.9TiB of CPU memory wasn't enough to reach an even longer sequence length. For this model the activation checkpoint offload to CPU memory requirements were:
• 4 nodes: $1\_000\_000 / 32 \times 5120 \times 64 \times 2 / 2^{30} \times 8 = 152\,\mathrm{GiB}$ per 1M seqlen
• 8 nodes: $1\_000\_000 / 64 \times 5120 \times 64 \times 2 / 2^{30} \times 8 = 76\,\mathrm{GiB}$ per 1M seqlen
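These per-node estimates all follow the same pattern: each GPU holds seqlen/SP tokens of width hidden_size per layer, in bf16 (2 bytes), across 8 GPUs per node. A small sketch reproducing the numbers above (function and argument names are ours, not part of ALST):

```python
def ckpt_offload_gib_per_node(seqlen, sp_degree, hidden_size, num_layers,
                              bytes_per_elem=2, gpus_per_node=8):
    """CPU memory (GiB) per node to offload one hidden_states checkpoint per
    layer: each GPU owns seqlen/sp_degree tokens of width hidden_size."""
    per_gpu_bytes = seqlen / sp_degree * hidden_size * num_layers * bytes_per_elem
    return per_gpu_bytes / 2**30 * gpus_per_node

# Llama-3.1-70B (hidden 8192, 80 layers) on 4 nodes (SP=32):
print(int(ckpt_offload_gib_per_node(1_000_000, 32, 8192, 80)))  # → 305
# Qwen3-32B (hidden 5120, 64 layers) on 4 and 8 nodes:
print(int(ckpt_offload_gib_per_node(1_000_000, 32, 5120, 64)))  # → 152
print(int(ckpt_offload_gib_per_node(1_000_000, 64, 5120, 64)))  # → 76
```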
# 5.3.4 Summary
As can be seen from the evaluation numbers for three different models, the possible sequence length growth is roughly linear, that is, doubling the number of nodes doubles the possible sequence length. In fact it's a bit better than linear because of DeepSpeed ZeRO Stage 3, which partitions the model constituents into shards across all GPUs; the more nodes are used, the smaller the shards owned by each GPU, and as a result the more GPU memory is available for activations.
# 5.4 Feature Ablations
For the feature ablation experiments we use a single 8x H100 node.
# Baseline:
1. DeepSpeed ZeRO Stage 3
2. Gradient/Activation checkpointing enabled
3. DeepSpeed Optim states offload to CPU
4. PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
5. Flash Attention 2 [7]
We next perform feature ablations on each of the following features and show the outcome in Table 1:
1. Fused tiled logits & loss compute with Liger-Kernel
2. Ulysses Sequence Parallelism for Hugging Face
3. Tiled MLP
4. Activation checkpoint offload to CPU
Table 1: Feature ablations results.
Figure 11: Feature ablation results visualized
Table 1 and the corresponding Figure 11 show that tiling computation features like tiled logits & loss and tiled MLP don't contribute much at a low sequence length, but once activation checkpoint offload to CPU enabled a much larger sequence length, tiled MLP, for example, was able to increase the sequence length by an additional 58%. Activation checkpoint offload to CPU and tiled MLP together enabled a sequence length that's ~3.5 times larger than the baseline + fused tiled logits & loss (Liger-Kernel) + Ulysses SP for HF. Once we hit sequence lengths larger than 5M, tiled MLP starts to massively contribute to allowing a much larger sequence length: hidden_states have a [bs, seqlen, hidden_size] shape, and while hidden_size remains the same, seqlen becomes very big, leading to a many-GiB-large hidden_states tensor for each layer.
Since the attention computation is quadratic with respect to sequence length, it's easy to see how the iteration time dramatically slows down as the sequence length grows.
Additional notes:
• If Liger-Kernel doesn’t support the architecture you need, we have implemented Sequence Tiled Compute, which can perform a tiled cross-entropy loss that should work with any causal model. It saves approximately the same amount of memory but is slower than Liger-Kernel, as it's written in pure PyTorch. It can be found here.
• When Tiled MLP is enabled, Liger-Kernel's swiglu override is turned off, since the two compete with each other over Hugging Face Transformers' modeling MLP class override.
• The TFLOPS and iteration time were measured while using non-packed samples and the standard Megatron-LM FLOPs estimation formulae, taking into account repeated forwards. At such a long sequence length, attention computation renders MLP compute negligible. We observed that packed samples with FlashAttention2 setups report much lower TFLOPS.
• Since all 8 GPUs compute a single batch, and the micro-batch size is 1, the effective global batch size is 1 as well.
# 5.5 Sequence Length Improvements over Baseline
After activating ALST with Llama-8B the sequence length improvements were as follows:
• 1 GPU: from 32K to 500K, a 16 times improvement, as demonstrated by Table 2 and Figure 12.
• 8 GPUs: from 32K to 3.7M, a 116 times improvement, as demonstrated by Table 3 and Figure 12.
• 32 GPUs: from 32K to 15M, a 469 times improvement, as demonstrated by Table 4 and Figure 12.
Table 2: Sequence length improvement for Llama-8B on a single H100 GPU
Table 3: Sequence length improvement for Llama-8B on 8 H100 GPUs (1 node)
Table 4: Sequence length improvement for Llama-8B on 32 H100 GPUs (4 nodes)
Figure 12: The impact of enabling ALST for Llama-8B on 1, 8 and 32 GPUs
Please note that sequence length is on the log scale because otherwise the baseline wouldn't show in the 32-GPU plot.
# 5.6 Training Correctness
We used Llama-8B to validate that ALST matches the baseline in training quality. We compared training at a 32K sequence length on a single 8x H100 GPU node.
To ensure equal conditions, since ALST uses all 8 GPUs to process a single sample, we enabled 8 gradient accumulation steps for it. That way each iteration in both setups saw exactly the same data.
As can be observed from Figure 13, we have an almost exact match for the loss with ALST. Thus we know ALST provides the same training quality as the baseline.
Figure 13: Training loss comparison with and without ALST
# 6 Trying it out
The ArcticTraining framework has fully working post-training recipes using ALST for a variety of models and quantities of GPUs. You can just drop in your dataset definition and run the post-training.
Go to https://github.com/snowflakedb/ArcticTraining/blob/main/projects/sequence-parallelism/README.md and follow the instructions there to reproduce any of the evaluations presented in this paper or adapt the existing recipes to your long sequence length finetuning needs.
# 7 Additional Notes for Users
# 7.1 Limitations
• Currently the maximum degree of sequence parallelism (SP) is limited by the number of q_heads. For example, meta-llama/Llama-3.1-70B-Instruct has 64 q_heads, so SP=64 is the maximum possible with that model. We plan to remove that limit in future work. Meanwhile you can still scale beyond the SP limit imposed by head count by using a higher DP degree. For example, you can easily train on 1024 GPUs: there will be 16 SP replicas of SP=64 each.
• As discussed earlier q_heads need to be divisible by SP degree. For example, if the model has 9 q_heads, you’d need SP to be 1, 3 or 9. We plan to overcome this limitation in the future.
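The divisibility constraint is easy to check programmatically; a tiny helper (ours, not part of ALST) that lists the valid SP degrees for a given head count:

```python
def valid_sp_degrees(q_heads):
    """The SP degree must evenly divide the number of query heads."""
    return [d for d in range(1, q_heads + 1) if q_heads % d == 0]

print(valid_sp_degrees(9))   # → [1, 3, 9]
print(valid_sp_degrees(32))  # → [1, 2, 4, 8, 16, 32]
```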
# 7.2 Important Training Notes
While this technology enables you to train on very long sequences, you need to be aware that if you pack many short sequences into a long one it won’t learn to infer on long sequences. You need to use a dataset with samples whose sequence length matches your target goal. If you train on packed samples, it’d be akin to having a large batch size of short sequence length samples.
Because the dense attention mechanism has a quadratic $O(s^2)$ relationship with the sequence length, the longer the individual sample, the slower the attention calculation will be. As of this paper's writing, the incarnation of Ulysses SP for HF supports both SDPA and Flash Attention 2 (FA2) as they are integrated into Hugging Face Transformers. FA2 is very efficient at calculating the attention of individual samples packed into a long sequence by using position ids, whereas SDPA in Hugging Face Transformers as of this writing ignores position ids and ends up attending to the whole packed sequence, which is both much slower and incorrect. Though as explained earlier, to post-train your model for long sequence lengths you have to use actual long sequence length samples and not packed short samples, in which case both SDPA and FA2 will work correctly.
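To see why packing matters here, compare the quadratic cost of attending over one genuine long sequence against attending over each packed sample separately, which is what FA2 with position ids does. This is a rough proportional count ignoring constant factors, and the helper is ours:

```python
def attention_cost(sample_lengths):
    """Proportional attention cost when each sample attends only to itself."""
    return sum(s * s for s in sample_lengths)

packed = [4096] * 8  # 8 samples of 4K packed into one 32K sequence
full = [32768]       # one genuine 32K-long sample

# Attending across the whole sequence is 8x more work than attending
# within each packed sample:
print(attention_cost(full) // attention_cost(packed))  # → 8
```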
# 8 Future Work
While large matrix multiplications dominate training at very long sequence lengths, making other operations quite insignificant performance-wise, additional work can be done to further improve the performance of various components where they don't overlap with compute.
While the initial implementation has been integrated into Arctic Training, we next would like to integrate it into Hugging Face Accelerate and Trainer and various other frameworks to make it easy for any user to access this technology. The integration document can be found here.
# Acknowledgments
Besides the paper’s authors the following folks have contributed to this work and we would like to thank them. The Hugging Face team: Cyril Vallez, Yih-Dar Shieh and Arthur Zucker. The PyTorch team: Jeffrey Wan, Mark Saroufim, Will Constable, Natalia Gimelshein, Ke Wen and alband. This paper’s reviewers: Ari Rean and Anupam Datta. Also, we would like to acknowledge the original Ulysses for Megatron-DeepSpeed team: Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari and Yuxiong He.
# References
[1] H. Liu, M. Zaharia, and P. Abbeel, “Ring attention with blockwise transformers for near-infinite context,” 2023. [Online]. Available: https://arxiv.org/abs/2310.01889
[2] S. A. Jacobs, M. Tanaka, C. Zhang, M. Zhang, S. L. Song, S. Rajbhandari, and Y. He, “Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models,” 2023. [Online]. Available: https://arxiv.org/abs/2309.14509
[3] V. Korthikanti, J. Casper, S. Lym, L. McAfee, M. Andersch, M. Shoeybi, and B. Catanzaro, “Reducing activation recomputation in large transformer models,” 2022. [Online]. Available: https://arxiv.org/abs/2205.05198
[4] S. Li, F. Xue, C. Baranwal, Y. Li, and Y. You, “Sequence parallelism: Long sequence training from system perspective,” 2022. [Online]. Available: https://arxiv.org/abs/2105.13120
[5] D. Li, R. Shao, A. Xie, E. P. Xing, X. Ma, I. Stoica, J. E. Gonzalez, and H. Zhang, “Distflashattn: Distributed memory-efficient attention for long-context llms training,” 2024. [Online]. Available: https://arxiv.org/abs/2310.03294
[6] W. Brandon, A. Nrusimha, K. Qian, Z. Ankner, T. Jin, Z. Song, and J. Ragan-Kelley, “Striped attention: Faster ring attention for causal transformers,” 2023. [Online]. Available: https://arxiv.org/abs/2311.09431
[7] T. Dao, “Flashattention-2: Faster attention with better parallelism and work partitioning,” 2023. [Online]. Available: https://arxiv.org/abs/2307.08691
[8] J. Fang and S. Zhao, “Usp: A unified sequence parallelism approach for long context generative ai,” 2024. [Online]. Available: https://arxiv.org/abs/2405.07719
[9] D. Gu, P. Sun, Q. Hu, T. Huang, X. Chen, Y. Xiong, G. Wang, Q. Chen, S. Zhao, J. Fang, Y. Wen, T. Zhang, X. Jin, and X. Liu, “Loongtrain: Efficient training of long-sequence llms with head-context parallelism,” 2024. [Online]. Available: https://arxiv.org/abs/2406.18485
[10] P.-L. Hsu, Y. Dai, V. Kothapalli, Q. Song, S. Tang, S. Zhu, S. Shimizu, S. Sahni, H. Ning, and Y. Chen, “Liger kernel: Efficient triton kernels for llm training,” 2025. [Online]. Available: https://arxiv.org/abs/2410.10989
[11] Q. Lhoest, A. Villanova del Moral, Y. Jernite, A. Thakur, P. von Platen, S. Patil, J. Chaumond, M. Drame, J. Plu, L. Tunstall, J. Davison, M. Šaško, G. Chhablani, B. Malik, S. Brandeis, T. Le Scao, V. Sanh, C. Xu, N. Patry, A. McMillan-Major, P. Schmid, S. Gugger, C. Delangue, T. Matussière, L. Debut, S. Bekman, P. Cistac, T. Goehringer, V. Mustar, F. Lagunas, A. Rush, and T. Wolf, “Datasets: A community library for natural language processing,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, Nov. 2021, pp. 175–184. [Online]. Available: https://aclanthology.org/2021.emnlp-demo.21
[12] Snowflake AI Research, “ArcticTraining: Simplifying and accelerating post-training for large language models,” https://github.com/snowflakedb/ArcticTraining, 2025, version v0.0.4 (released June 3, 2025); Apache-2.0 license.
[13] J. Rasley, S. Rajbhandari, O. Ruwase, and Y. He, “Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters,” in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’20. New York, NY, USA: Association for Computing Machinery, 2020, p. 3505–3506. [Online]. Available: https://doi.org/10.1145/3394486.3406703
[14] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, “Transformers: State-of-the-art natural language processing,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Online: Association for Computational Linguistics, Oct. 2020, pp. 38–45. [Online]. Available: https://www.aclweb.org/anthology/2020.emnlp-demos.6
[15] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, “Zero: Memory optimizations toward training trillion parameter models,” 2020. [Online]. Available: https://arxiv.org/abs/1910.02054
[16] BigScience Workshop et al., “Bloom: A 176b-parameter open-access multilingual language model,” 2023. [Online]. Available: https://arxiv.org/abs/2211.05100
[17] Q. Anthony, J. Hatef, D. Narayanan, S. Biderman, S. Bekman, J. Yin, A. Shafi, H. Subramoni, and D. Panda, “The case for co-designing model architectures with hardware,” 2024. [Online]. Available: https://arxiv.org/abs/2401.14489

# Abstract

Long sequences are critical for applications like RAG, long document summarization, multi-modality, etc., and modern LLMs, like Llama 4 Scout, support a maximum sequence length of up to 10 million tokens. However, outside of enterprise labs, long sequence training is challenging for the AI community because of limited system support in the open-source space.

Out of the box, even on a modern NVIDIA H100 80GB GPU cluster, training a Llama 8B model with a sequence over 32K runs out of memory on a basic Hugging Face (HF) model for two reasons: i) LLM training workloads are not optimized to fully leverage a single GPU's memory, and ii) existing solutions for leveraging the memory of multiple GPUs are not easily available to HF models, making long sequence training inaccessible.

We address this with Arctic Long Sequence Training (ALST). It offers a combination of attention-agnostic single-GPU and multi-GPU memory optimizations that enables out-of-the-box training at multi-million sequence lengths for a wide variety of HF models.

ALST supports training Meta's Llama 8B model with a 500K sequence length on a single H100 GPU, 3.7M on a single 8xH100 GPU node, and over 15M on a 4-node cluster, an increase of over 400x compared to the 32K baseline for the latter. ALST is fully compatible with HF models and open-sourced via DeepSpeed https://www.deepspeed.ai/tutorials/ulysses-alst-sequence-pallellism/ and Arctic Training https://github.com/snowflakedb/ArcticTraining/blob/main/projects/sequence-parallelism/README.md.
# 1 Introduction
The rapid advancement of large language models (LLMs) has significantly improved performance across various downstream tasks and made real-time human-computer interaction an essential part of daily life, offering substantial convenience to users (Achiam et al., 2023; Touvron et al., 2023a,b; Chiang et al., 2023; Jiang et al., 2023). However, the autoregressive transformer decoder architecture adopted by LLMs introduces substantial inference latency, limiting their deployment in real-time applications. As the generated sequence length and model size increase, the token-by-token serial generation process leads to escalating delays.
Figure 1: Comparison of different drafting processes. Medusa (top) generates four tokens in parallel based on a prefix. EAGLE (middle) follows an autoregressive approach, generating one token at a time. Our method (bottom) employs multi-round autoregression, generating multiple tokens at different positions in each round.
To address this challenge, speculative sampling (Stern et al., 2018; Leviathan et al., 2023; Chen et al., 2023; Xia et al., 2023) has been proposed. This approach divides the inference process into a low-cost drafting phase and a parallel validation phase, allowing multiple tokens to be verified within a single LLM forward pass. By generating multiple tokens per pass, speculative sampling significantly accelerates text generation. More importantly, the validation phase ensures that the generated text aligns with the original LLM’s distribution, preserving output integrity.
The effectiveness of speculative sampling depends on selecting an appropriate draft model that approximates the original LLM while reducing latency. Typically, this is achieved by a smaller-parameter model from the same series (Leviathan et al., 2023; Chen et al., 2023), such as TinyLLaMA (Zhang et al., 2024b), which serves as the draft model for the 7B and 13B versions of LLaMA2 (Touvron et al., 2023b). While leveraging smaller draft models offers advantages, it also requires additional effort to train or select a model that closely aligns with the target LLM, presenting challenges in scalability and compatibility. To overcome these challenges, recent studies have exploited the target LLM itself for drafting. As shown in Figure 1, Medusa (Cai et al., 2024) generates multiple drafts in parallel based on prefixes, whereas EAGLE (Li et al., 2024b) employs a sequential autoregressive drafting process to enhance precision.
To better analyze the advantages and disadvantages of these methods, we define two concepts: Syntactic Coherence and Semantic Coherence.
• Syntactic Coherence refers to the adherence to fixed language collocations and structures that are commonly accepted in a language, without heavily relying on prior context.
• Semantic Coherence ensures that the meaning of the generated text aligns logically with the preceding context, maintaining overall textual consistency.
It can be observed that methods such as Medusa (Cai et al., 2024) eliminate the dependency between heads, thereby accelerating the generation of drafts. However, these methods primarily focus on modeling Syntactic Coherence while neglecting Semantic Coherence. On the other hand, methods like EAGLE (Li et al., 2024b), which use an autoregressive approach, are more suitable for modeling Semantic Coherence. Using such methods to model Syntactic Coherence introduces unnecessary computational overhead.
In this work, we propose a Speculative Sampling framework with Syntactic and Semantic Coherence for efficient inference of large language models, termed $\mathrm{S^4C}$. It consists of two primary components: the draft model and the validation tree. By adopting a continuous multi-head structure, $\mathrm{S^4C}$ can efficiently generate syntactically coherent tokens and ensure semantic coherence between multiple fragments. On this basis, $\mathrm{S^4C}$ constructs a continuous validation tree, further enhancing the coherence of candidate paths by selecting the candidate token with the highest probability. Compared to existing methods, $\mathrm{S^4C}$ has a simpler structure, higher efficiency, and stronger parallelization capability. It can generate more effective tokens without increasing computational overhead. The main contributions of this paper are as follows:
• We identify the critical role of coherence in speculative decoding and propose the $\mathrm{S^4C}$ framework to address this challenge.
• We introduce a simple yet efficient multi-head continuous draft model that rapidly generates coherent token sequences while maintaining generation quality.
• We design a continuous verification tree that expands the candidate set with minimal computational cost, making it adaptable to various LLM architectures.
• Extensive experiments demonstrate that $\mathrm{S^4C}$ achieves superior performance compared to existing baseline methods.
The remainder of this paper is structured as follows: Section 2 provides preliminary background. Section 3 describes the core modules of $\mathrm{S^4C}$. Section 4 discusses experimental results. Related work and conclusions are presented in Sections 5 and 6, respectively.
# 2 Preliminaries
Notations In this paper, we define the key terms as follows. The term “target LLM” refers to the large language model responsible for verifying tokens, denoted by $M_p$. The “draft model” is the model used to generate draft tokens, represented by $M_q$. The term “feature” denotes the output of the penultimate layer of the LLM, corresponding to the hidden state before the final prediction layer. A single token is represented by $t$, with its embedding denoted as $e$, its features as $f$, and its probability distribution as $p$.
Speculative Sampling Speculative sampling is a two-stage process comprising an initial drafting phase followed by a verification phase. In the drafting phase, a smaller draft model generates a set of $\gamma$ candidate tokens, denoted as $\hat{T}_{j+1:j+\gamma}$, along with their probability distributions $q$. The verification phase consists of a single forward pass through the target LLM to obtain the corresponding probabilities $p$. Each drafted token $\hat{t}_{j+i}$ is accepted with a probability of $\min(1, \frac{p}{q})$. If a token is rejected, subsequent tokens are discarded and resampled from $\mathrm{norm}(\max(0, p - q))$, as described in (Leviathan et al., 2023).
Speculative sampling significantly reduces inference latency by processing multiple tokens in parallel while ensuring that the output remains consistent with the distribution of the target LLM. This method effectively balances efficiency and accuracy, making it a promising solution for accelerating autoregressive text generation.
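The accept/reject rule described above can be sketched in a few lines (a toy token-level illustration, not the authors' implementation; all names are ours, and the resampling step is elided):

```python
import random

def verify(draft_tokens, q_probs, p_probs, rng=random.random):
    """Accept each drafted token with probability min(1, p/q); on the first
    rejection, discard all subsequent tokens (resampling from
    norm(max(0, p - q)) is omitted here)."""
    accepted = []
    for t, q, p in zip(draft_tokens, q_probs, p_probs):
        if rng() < min(1.0, p / q):
            accepted.append(t)
        else:
            break  # subsequent tokens depend on a rejected one
    return accepted

# When the target assigns at least as much probability as the draft (p >= q),
# every token is accepted regardless of the random draws:
print(verify(["a", "b"], [0.2, 0.5], [0.3, 0.9]))  # → ['a', 'b']
```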
# 3 Methodologies
Our proposed approach, Speculative Sampling framework with Syntactic and Semantic Coherence $( \mathrm { S ^ { 4 } C ) }$ , follows the speculative sampling framework and consists of two main components: the drafting stage and the verification stage.
# 3.1 Multi-head Auto-regressive Drafting
# 3.1.1 Architecture
As illustrated in Figure 2, the target model (left side) has frozen parameters. First, a forward pass is performed through the target model to process the input prefix, as in standard Large Language Model (LLM) operation, which can be formalized as:
$$
f_0 = \mathrm{Decoder\_layers}\big(\mathrm{Embedding}(t_{\mathrm{pre}})\big),
$$
$$
t_0 = \mathrm{LM\_head}(f_0).
$$
The embedding of the prefix token sequence $t_{\mathrm{pre}}$ is processed by the decoder, producing an intermediate representation $f_0$. The language model head is subsequently applied to this representation to generate the initial token $t_0$, which establishes the foundational context for further token generation.
$\mathrm{S^4C}$ leverages the intermediate feature $f_0$ from the target model, bypassing the LM_head layer. The generated token $t_0$ is then transformed into its embedding representation $e_0$ via the target model’s embedding layer, serving as the input to the draft model. The draft model comprises three identical heads, defined as:
$$
\begin{array}{r} h_{i+1} = \mathrm{Linear}(\mathrm{concat}[e_i, f_i]), \\ f_{i+1} = \mathrm{Decoder\_layers}(h_{i+1}). \end{array}
$$
In this process, the input embeddings $e_0$ and features $f_0$ are concatenated and passed through a linear transformation to produce $h_{i+1}$, ensuring dimensional consistency with the original $f_0$. Subsequently, decoder layers generate feature vectors $f_{i:i+k}$, and token embeddings $e_{i:i+k}$ are obtained using the LM head and embedding layer of the target model.
The generated token $t _ { i + 1 }$ and its corresponding embedding $e _ { i + 1 }$ are obtained using the following equations:
$$
\begin{array}{c} t_{i+1} = \mathrm{Argmax}(\mathrm{LM\_head}(f_{i+1})), \\ e_{i+1} = \mathrm{Embedding}(t_{i+1}). \end{array}
$$
In the first draft head, since only one feature from the target model is available, this feature is reused, selecting the top-2 candidates as different inputs. In the subsequent heads, the most probable word from $f _ { i }$ is directly used to obtain $e _ { i }$ through the embedding layer, ensuring efficient and coherent token generation. In this way, each head generates multiple tokens simultaneously within itself, without dependencies between them, which is used to model syntactic coherence. Between heads, an autoregressive approach is used, where the input of each head depends on the output of the previous head, which is used to model semantic coherence.
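The head-to-head autoregression above can be sketched with toy stand-ins for the network components (a pure-Python illustration of the control flow only; all names are ours, and the real heads are transformer layers rather than these lambdas):

```python
def argmax(logits):
    return max(range(len(logits)), key=logits.__getitem__)

def draft(e0, f0, heads, lm_head, embed):
    """Each head consumes the previous head's (embedding, feature) pair
    autoregressively, so semantic coherence flows between heads."""
    e, f, tokens = e0, f0, []
    for head in heads:
        f = head(e, f)           # stands in for Linear(concat[e, f]) -> decoder
        t = argmax(lm_head(f))   # next token from the shared LM head
        e = embed(t)             # its embedding feeds the next head
        tokens.append(t)
    return tokens

# Toy numerics: feature is a single float, vocabulary of size 3.
heads = [lambda e, f: e + f] * 3      # stand-in for the three draft heads
lm_head = lambda f: [0.0, f, 1.0]     # stand-in logits
embed = lambda t: float(t)            # stand-in embedding lookup
print(draft(1.0, 0.5, heads, lm_head, embed))  # → [1, 1, 1]
```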
# 3.1.2 Training
Our training process follows the same next-token regression task as used in Medusa (Cai et al., 2024), Hydra (Ankner et al., 2024), and EAGLE (Li et al., 2024b), integrating three loss components. The first component, $loss_{lm}$, represents the cross-entropy loss (standard LLM training loss) between predicted tokens and ground-truth labels, defined as:
$$
loss_{lm} = -\sum_{i=1}^{n} y_i \log \hat{y}_i
$$
where $\hat{y}_i$ is the predicted token probability distribution from the LM head, and $y_i$ is the ground-truth token ID.
The other two loss components, $l o s s _ { t e a c h e r }$ and $l o s s _ { s m o o t h }$ , utilize cross-entropy and Smooth L1 loss, respectively, to measure the discrepancy between features generated by each draft model head and the target model. These are defined as:
$$
loss_{teacher} = -\sum_{i=1}^{n} p_i \log q_i
$$
$$
loss_{smooth} = \begin{cases} 0.5\,(q_i - p_i)^2 & \text{if } |q_i - p_i| < 1 \\ |q_i - p_i| - 0.5 & \text{otherwise} \end{cases}
$$
Figure 2: Multi-head auto-regressive drafting architecture. The left side represents the target model with frozen parameters, while the right side illustrates the draft model with three heads.
Here, $q _ { i }$ represents the output of the draft model, while $p _ { i }$ denotes the corresponding output of the target model.
The final loss function is a weighted sum of the three components, with weights set to $w_1 = 0.1$, $w_2 = 1.0$, and $w_3 = 0.1$, defined as:
$$
loss = w_1 \cdot loss_{lm} + w_2 \cdot loss_{teacher} + w_3 \cdot loss_{smooth}
$$
The tri-component loss function is designed to enhance the predictive capabilities of the draft model. The primary loss, $loss_{lm}$, ensures that the draft model accurately predicts the next token, aligning with conventional LLM objectives.
In addition, the complementary losses $loss_{teacher}$ and $loss_{smooth}$ minimize the discrepancy between the draft and target models, facilitating knowledge transfer. This combined approach allows the draft model to effectively capture the predictive characteristics of the target model, enhancing its learning and generalization capabilities.
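The tri-component objective can be sketched in plain NumPy (function and variable names are ours and illustrative; the actual implementation operates on batched logits and hidden states):

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy -sum_i p_i log q_i between two distributions."""
    return -np.sum(p * np.log(q + eps))

def smooth_l1(q, p):
    """Elementwise Smooth L1 (Huber) loss, averaged over features."""
    d = np.abs(q - p)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).mean()

def s4c_loss(lm_probs, gold_id, draft_feats, target_feats,
             draft_probs, target_probs, w1=0.1, w2=1.0, w3=0.1):
    """Weighted sum of the three loss components described above."""
    # loss_lm: next-token cross-entropy against the gold token
    onehot = np.zeros_like(lm_probs)
    onehot[gold_id] = 1.0
    loss_lm = cross_entropy(onehot, lm_probs)
    # loss_teacher: CE between target-model and draft-model distributions
    loss_teacher = cross_entropy(target_probs, draft_probs)
    # loss_smooth: Smooth L1 between draft and target features
    loss_smooth = smooth_l1(draft_feats, target_feats)
    return w1 * loss_lm + w2 * loss_teacher + w3 * loss_smooth
```

With identical draft and target features, only the two cross-entropy terms contribute, weighted 0.1 and 1.0 as in the paper.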
# 3.2 Continuous Verification Tree
In traditional rejection sampling, proposed tokens are structured as linear sequences, where the rejection of a token necessitates the discarding of all subsequent tokens. This approach limits proposal flexibility and reduces the acceptance rate of valid tokens.
To overcome this limitation, we adopt a tree-based proposal structure inspired by previous studies (Miao et al., 2024; Cai et al., 2024; Li et al., 2024b). This structure enables the exploration of alternative branches when a token is rejected, allowing for simultaneous validation of multiple candidate drafts at the same position. Consequently, it significantly increases the length of accepted drafts and enhances both efficiency and flexibility.
As illustrated in Figure 3, our verification tree begins with the prefix “The” as the root node and generates multiple drafts vertically first (yellow tokens). Once the token set is validated, it extends horizontally (gray tokens). Vertically generated tokens at different positions correspond to the top-1 probability choices, leveraging efficient parallelization. For instance, in the figure, “sets” is the highest-probability token following “sun”, while “rises” is the highest-probability token following “moon”.
The horizontal expansion provides alternative token choices at the same position, offering top-k alternatives when the top-1 token is incorrect, thereby improving the overall acceptance rate. In Figure 3, “dips” and “casts” are among the top-3 probable tokens following “sets”, while “glows” and “peeks” follow “rises”.
Once the verification tree is constructed, a tree mask is applied to determine the longest accepted path through the entire tree.
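As an illustration of how a tree mask encodes the branch structure, here is a minimal sketch that builds a boolean attention mask from parent pointers, so each draft token attends only to its own root-to-node path (the node layout follows the Figure 3 example; the code is our illustration, not the paper's implementation):

```python
import numpy as np

def tree_mask(parents):
    """Build a tree attention mask from parent indices.

    parents[i] is the index of node i's parent, or -1 for the root.
    mask[i, j] is True iff node j lies on the path from the root to
    node i, so each candidate draft only sees its own branch.
    """
    n = len(parents)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = i
        while j != -1:       # walk up to the root, marking ancestors
            mask[i, j] = True
            j = parents[j]
    return mask

# Figure 3 layout: "The" (root), children "sun"/"moon",
# then "sets" under "sun" and "rises" under "moon".
parents = [-1, 0, 0, 1, 2]
mask = tree_mask(parents)
```

Row 3 of the resulting mask marks exactly the path The → sun → sets, which is what lets several branches be verified in a single forward pass.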
After establishing the candidate set, we employ speculative sampling (Leviathan et al., 2023) to verify token acceptance, formulated as follows:
$$
r < \min\left(1, \frac{q(x)}{p(x)}\right), \quad r \sim U[0, 1]
$$
where $r$ is a random variable drawn from the uniform distribution $U[0, 1]$, and $q(x)$ and $p(x)$ denote the probabilities of token $x$ from the target and draft models, respectively. If $q(x) \geq p(x)$, the token $x$ is always accepted. Otherwise, it is accepted with probability $q(x)/p(x)$ and rejected with probability $1 - q(x)/p(x)$.
Figure 3: Continuous verification tree architecture (left) and tree mask matrix (right).
To further ensure consistency with the target LLM’s output distribution, the correction strategy (Leviathan et al., 2023; Miao et al., 2024) resamples tokens at fork positions using an adjusted distribution:
$$
x_{t+c} \sim \mathrm{norm}(\max(0, q_c - p_c))
$$
This process guarantees alignment with the target LLM’s distribution while documenting accepted tokens and their corresponding features for subsequent drafting phases.
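The acceptance test and the correction resampling can be sketched as follows (a minimal NumPy illustration of standard speculative sampling in the Leviathan et al. (2023) form, not the paper's code; as above, `q` is the target distribution and `p` the draft distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

def verify_token(x, q, p):
    """Accept draft token x with probability min(1, q[x]/p[x])."""
    r = rng.uniform()
    return r < min(1.0, q[x] / p[x])

def correction_distribution(q, p):
    """Adjusted distribution norm(max(0, q - p)) for resampling
    at a fork position after a rejection."""
    residual = np.maximum(0.0, q - p)
    total = residual.sum()
    if total == 0.0:          # draft covers the target everywhere
        return q
    return residual / total
```

Sampling the replacement token from this residual distribution is what guarantees that the overall output distribution matches the target LLM exactly.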
# 4 Experiment
In this section, we conduct comprehensive experiments to evaluate the performance of S$^4$C, addressing the following key research questions:
RQ1: How does S$^4$C improve inference speed compared to existing methods? This question evaluates S$^4$C’s efficiency in reducing latency, a critical factor for real-world applications.
RQ2: What is the trade-off between additional space consumption and the achieved speedup ratio? Understanding this balance is essential for deploying S$^4$C in resource-constrained environments.
RQ3: What are the benefits of S$^4$C’s multi-head structure in enhancing performance? This inquiry quantifies the impact of multi-head processing on model efficiency and versatility.
RQ4: How does the continuous validation tree contribute to overall performance? Evaluating the validation tree’s role helps assess its impact on inference accuracy and efficiency.
RQ5: How robust is S$^4$C across different temperature settings during sampling? This analysis determines S$^4$C’s stability under varying operational conditions.
RQ6: In which application scenarios does S$^4$C provide the most significant acceleration? Identifying key use cases helps clarify S$^4$C’s practical benefits.
These experiments provide an in-depth evaluation of S$^4$C’s effectiveness and establish benchmarks for its performance in accelerating large language model inference.
# 4.1 Experiment Settings
# 4.1.1 Datasets and Models
We adopt the same experimental setup as Spec-Bench (Xia et al., 2024), which includes six subtasks: multi-round conversations, translation, summarization, question answering, mathematical reasoning, and retrieval-augmented generation. These correspond to the datasets MT-Bench (Zheng et al., 2023), WMT14 DE-EN, CNN/Daily Mail (Nallapati et al., 2016), Natural Questions (Kwiatkowski et al., 2019), GSM8K (Cobbe et al., 2021), and DPR (Karpukhin et al., 2020), respectively.
We use Vicuna-v1.3 (Chiang et al., 2023) as the base model in three parameter sizes: 7B, 13B, and 33B. S$^4$C is evaluated against state-of-the-art models listed on the Spec-Bench leaderboard, including Eagle (Li et al., 2024b), Hydra (Ankner et al., 2024), Medusa (Cai et al., 2024), PLD (Saxena, 2023), SPS (Leviathan et al., 2023), REST (He et al., 2023), and Lookahead (Fu et al., 2024).
# 4.1.2 Evaluation and Environment
To quantitatively assess the performance of inference acceleration techniques, we use the acceleration ratio as the primary metric, which measures speedup achieved during testing. Additionally, the average acceptance length is used to quantify the mean number of tokens accepted by the draft model, providing insights into token generation efficiency.
Experiments were conducted on two hardware setups. For the 7B model, experiments ran on a single NVIDIA A100 GPU with 40GB memory and 48 CPU cores. For the 13B and 33B models, we used four NVIDIA GeForce RTX 4090 GPUs (24GB each) and 96 CPU cores.
The software environment included PyTorch 2.4.0 with CUDA 12.6. To ensure consistency and isolate the effects of different methods, all experiments employed greedy decoding, FP16 precision, and a batch size of one. This standardized configuration ensures that variations in acceleration ratios and acceptance lengths are solely attributable to the inference acceleration methods under evaluation.
# 4.2 Effectiveness (RQ1)
Table 1 presents the experimental results, demonstrating S$^4$C’s superior performance across various tasks and model sizes. Specifically, S$^4$C achieves the highest speedup ratios of 2.26x, 2.41x, and 2.60x for the 7B, 13B, and 33B models, respectively, outperforming all baseline methods. Moreover, the mean accepted token length with S$^4$C is notably higher across all model sizes (3.86, 3.98, and 3.67 for 7B, 13B, and 33B, respectively), indicating its ability to generate longer, more coherent token sequences. This improvement suggests that S$^4$C enables the target model to accept a greater number of tokens per inference step, thereby enhancing efficiency while maintaining output quality.
Overall, the results validate S$^4$C’s robustness and efficiency in accelerating inference across diverse and complex tasks, highlighting its potential for practical applications in real-world scenarios.
# 4.3 Additional Space Consumption (RQ2)
Dynamic trees have recently gained increasing attention in inference acceleration, with EAGLE2 (Li et al., 2024a) achieving notable success. However, these improvements come with significant additional memory consumption, often without adequately balancing acceleration gains and resource costs. To address this, we quantified the resource usage of top-performing draft models, including Hydra (Ankner et al., 2024), EAGLE (Li et al., 2024b), and EAGLE2 (Li et al., 2024a), and compared them using the following efficiency metric:
$$
r = \frac{\mathrm{Acceleration\ ratio}}{\mathrm{Extra\ memory}}
$$
A higher $r$ value indicates greater efficiency, meaning the model achieves a higher acceleration ratio with minimal extra memory, while a lower $r$ implies higher resource demands for comparable acceleration.
As shown in Table 2, S$^4$C achieves the highest efficiency with an $r$ value of 0.2440, demonstrating its superior balance between acceleration and memory usage. Although EAGLE2 achieves the highest acceleration ratio of 2.38x, it requires an additional 10.54GB of memory, resulting in a lower efficiency score of 0.2258. This suggests that the acceleration gains of dynamic trees are constrained by their substantial resource requirements.
In conclusion, our findings highlight that while additional memory can enhance acceleration, the efficiency of this trade-off varies. S$^4$C achieves a favorable balance, offering high acceleration with minimal resource overhead, making it a promising approach for optimizing performance and memory efficiency in speculative sampling models.
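The efficiency metric is straightforward to reproduce from the numbers reported above for EAGLE2 (2.38x speedup at 10.54GB of extra memory):

```python
def efficiency(acceleration_ratio, extra_memory_gb):
    """r = acceleration ratio per GB of extra draft-model memory."""
    return acceleration_ratio / extra_memory_gb

# EAGLE2 figures from Table 2: 2.38x speedup, 10.54 GB extra memory.
print(round(efficiency(2.38, 10.54), 4))  # → 0.2258
```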
# 4.4 Different Validation Trees (RQ4)
In this section, we systematically evaluate the impact of different validation tree structures on performance and efficiency. The experiments are based on the Vicuna-v1.3-7B model, where the draft model is the one we trained for S$^4$C. We selected Medusa (Cai et al., 2024), Eagle (Li et al., 2024b), and our proposed validation tree structure for comparative analysis. The experimental results are shown in Figure 4, which clearly demonstrates the performance of different validation tree configurations in terms of acceleration ratio and average acceptance length.
Figure 4: Performance comparison of different validation trees.
Figure 4(a) illustrates the acceleration ratios, a key metric for measuring inference speedup achieved by different validation trees. Our approach demonstrates the highest acceleration ratios, significantly outperforming other methods across the MT, Sum, QA, and MR tasks.
Figure 4(b) presents the distribution of average acceptance lengths, reflecting the number of tokens accepted by the model for each validation tree. Compared to other methods, our approach exhibits a higher proportion of acceptance lengths of 5 and 6, while lower proportions for lengths of 1, 2, and 3. This indicates that our method achieves longer average acceptance lengths, contributing to enhanced efficiency and improved generation continuity.
Table 1: Speedup ratio and mean accepted tokens of the Vicuna-v1.3 model (sizes 7B, 13B, and 33B) across six tasks.
Table 2: Relationship between extra memory consumption and acceleration.
# 4.5 Different Draft Models (RQ3)
To further evaluate the effectiveness of our approach, we integrated our continuous verification tree with various draft models to analyze their impact on overall performance. The experimental design is similar to Section 4.4 and based on the Vicuna-v1.3-7B model. The draft models are Medusa, Eagle, and the one we trained for S$^4$C. This evaluation aims to determine how different draft model architectures contribute to inference efficiency and token acceptance.
Figure 5(a) presents the acceleration ratios across multiple tasks, including MT, Trans, Sum, QA, MR, and RAG, as well as an overall comparison. Our proposed model consistently achieves the highest acceleration ratio across all tasks, with the most significant improvements observed in the MT task.
Figure 5: Comparison of different draft models.
Figure 5(b) illustrates the distribution of accepted token lengths under the same experimental settings. The majority of accepted tokens for Medusa are concentrated in shorter lengths (1, 2, 3), while Eagle shows a more balanced distribution. In contrast, our model excels in producing longer accepted token sequences, particularly in the third and sixth categories. Overall, our approach achieves the longest average acceptance length, indicating improved coherence and efficiency in token generation.
# 4.6 Capabilities (RQ5)
Different token selection strategies can be employed during the inference phase of large language models, including selecting the most probable token, beam search, and sampling methods. Sampling, in particular, is influenced by the temperature setting, which controls the randomness of token selection. In this section, we evaluate the performance of S$^4$C under varying temperature settings, as shown in Figure 6.
Figure 6: Performance of S$^4$C under temperature settings from 0.0 to 1.0 for each task (MT, Trans, Sum, QA, MR, RAG, and Overall): (a) acceleration ratio; (b) mean accepted tokens.
Our analysis reveals a clear trend: as the temperature increases, the performance of S$^4$C gradually declines (Figure 6(a)). This degradation is attributed to the increased randomness in token selection, which reduces the number of tokens accepted by the target model (Figure 6(b)). As a result, the acceleration ratio decreases due to the lower number of retained tokens, highlighting the sensitivity of S$^4$C to temperature variations.
# 4.7 Case Study (RQ6)
In this section, we evaluate the performance of our method by examining the draft tokens accepted by the model, comparing it with previous approaches, and validating the motivation behind enhancing both syntactic and semantic coherence. The experiment specifically highlights token continuity, syntactic, and semantic coherence under different conditions. The results are illustrated in Figure 7.
Our method successfully generates continuous token sequences, such as “splitting the data into”, which maintain both syntactic and semantic coherence by ensuring logical consistency with the preceding context. In contrast, Eagle (Li et al., 2024b) struggles with token continuity, often accepting only partial sequences like “pre-processing” and “prepar-ing”. This comparison highlights a key limitation of Eagle (Li et al., 2024b): despite focusing on semantic coherence through an autoregressive approach, it fails to preserve syntactic continuity, resulting in fragmented token acceptance. Our method instead effectively balances both syntactic and semantic coherence, leading to a higher acceptance rate of draft tokens, longer accepted token sequences, and an overall improvement in speed. This highlights the importance of considering both aspects to overcome the limitations of previous approaches that prioritize only one dimension.
Figure 7: Draft tokens accepted by our method (top) and by Eagle (bottom) on the same data-preprocessing passage.
# 5 Related Work
# 5.1 Speculative Sampling
The draft-then-verify decoding strategy was first introduced by Stern et al. (Stern et al., 2018), while speculative sampling (Xia et al., 2023; Leviathan et al., 2023; Chen et al., 2023) extended this concept to non-greedy sampling, ensuring the preservation of the original output distribution. Xia et al. (Xia et al., 2024) provided a comprehensive survey of recent advancements in speculative sampling, categorizing the drafting process into two main approaches: Independent Drafting and Self-Drafting. In the context of Independent Drafting, SpecDec (Xia et al., 2023) pioneered the use of independent models for drafting, striking a balance between accuracy and efficiency. Leviathan et al. (Leviathan et al., 2023) demonstrated the acceleration of T5-XXL inference by employing T5-small as a drafting model. These methods use a lightweight, pre-trained LLM that does not require additional training or modification, facilitating the seamless adoption of speculative decoding in various applications (Leviathan et al., 2023; Spector and Ré, 2023; Sun et al., 2023; Chen et al., 2023).
In summary, speculative sampling leverages independent models for draft generation, improving inference efficiency while maintaining the accuracy of the output distribution. These approaches (Xia et al., 2023; Leviathan et al., 2023; Chen et al., 2023) balance efficiency and accuracy, making speculative decoding widely applicable across various tasks.
# 5.2 Self-Drafting
Our research primarily falls under the Self-Drafting category, which focuses on utilizing the target LLM itself for efficient drafting (Xia et al., 2024). Blockwise Decoding (Stern et al., 2018) and Medusa (Cai et al., 2024) introduced Feed-Forward Network (FFN) heads within the Transformer decoder, enabling parallel token generation at each decoding step. However, this parallel structure often results in suboptimal draft quality. To address this issue, Hydra (Ankner et al., 2024) improves Medusa by enhancing the correlation between draft head predictions, thereby increasing draft accuracy. In contrast, Eagle (Li et al., 2024b) reintroduces an autoregressive structure while leveraging the dual characteristics of tokens and functions, enhancing drafting precision. Furthermore, Eagle2 (Li et al., 2024a) introduces a confidence-based dynamic tree mechanism to optimize the acceptance length, improving the overall efficiency of speculative sampling.
In summary, recent advancements in the Self-Drafting category have shown a clear trend towards improving the efficiency and quality of draft generation through innovative decoding mechanisms. These developments (Yang et al., 2024; Zhang et al., 2024a; Hooper et al., 2023; Santilli et al., 2023; Monea et al., 2023) highlight the ongoing efforts to balance parallelism and sequential dependencies in draft generation, aiming to achieve both high efficiency and high quality in the context of LLM-based drafting.

# Abstract

Large language models (LLMs) exhibit remarkable reasoning capabilities across diverse downstream tasks. However, their autoregressive nature leads to substantial inference latency, posing challenges for real-time applications. Speculative sampling mitigates this issue by introducing a drafting phase followed by a parallel validation phase, enabling faster token generation and verification. Existing approaches, however, overlook the inherent coherence in text generation, limiting their efficiency. To address this gap, we propose a Speculative Sampling with Syntactic and Semantic Coherence (S$^4$C) framework, which extends speculative sampling by leveraging multi-head drafting for rapid token generation and a continuous verification tree for efficient candidate validation and feature reuse. Experimental results demonstrate that S$^4$C surpasses baseline methods across mainstream tasks, offering enhanced efficiency, parallelism, and the ability to generate more valid tokens with fewer computational resources. On Spec-Bench benchmarks, S$^4$C achieves an acceleration ratio of 2.26x-2.60x, outperforming state-of-the-art methods.
# 1 Introduction
The escalating global challenges of water scarcity, climate change, and their profound impacts on ecosystems and human societies underscore the critical importance of understanding and forecasting surface water dynamics [34, 40]. Effective water resource management for agriculture, energy, and consumption relies on predicting future water availability. As climate change intensifies droughts and floods, robust forecasting models are indispensable for adaptation, enabling interventions and improving resilience [12, 18]. Spatiotemporal variability of surface water requires advanced analytics to capture complex hydrological processes and environmental responses [39]. Despite the pressing need for accurate and long-term surface water predictions, the research landscape is hampered by significant limitations. A primary obstacle is the lack of comprehensive, large-scale datasets specifically curated for forecasting tasks. Satellite observations are often fragmented or not integrated with crucial climate and topographic data. Consequently, there is also a scarcity of well-defined predictive tasks that leverage these multi-modal data sources to forecast surface water dynamics.
To fill the gap, this paper introduces HydroChronos, a novel, large-scale, multi-modal spatio-temporal dataset specifically designed to foster research in surface water forecasting. HydroChronos is characterized by an extensive temporal coverage, encompassing over three decades of Landsat 5 and Sentinel-2 satellite imagery. This imagery is integrated with corresponding climate variables (e.g., precipitation and temperature) and a Digital Elevation Model (DEM) for a diverse set of lake and river systems across Europe, the United States, and Brazil. Using this dataset, we define three standardized predictive tasks for surface water dynamics forecasting from satellite imagery (optionally with climate data): binary change detection, direction of change classification, and magnitude of change regression.
Building upon the HydroChronos dataset and the defined forecasting tasks, we propose a robust baseline model: AquaClimaTempo UNet (ACTU). This model, based on UNet [32] and ConvLSTM [36], features a climate data branch to learn interactions between historical water dynamics and climatic drivers. Our experimental results demonstrate that this model significantly outperforms the commonly used persistence baseline in forecasting future water dynamics. Furthermore, to foster transparency and understanding, we conduct an Explainable AI (XAI) analysis. This analysis offers insights into model decisions, identifies key drivers of surface water changes, and guides future research.
The contributions of this paper (Figure 1) can be summarized as follows:
- We introduce HydroChronos, the first dataset tailored for water dynamics forecasting, including remote-sensed images, climate variables, and a DEM.
- We introduce three tasks of surface water dynamics forecasting, offering a benchmark for future research in spatiotemporal predictive modeling.
- We introduce ACTU as a baseline model with the possibility to integrate climate variables and the DEM.
- We perform an XAI analysis on our models to guide future research and understand the influence of various factors.
The code and the dataset are available for reproducibility at https://github.com/DarthReca/hydro-chronos.
# 2 Related Work
Forecasting surface water dynamics requires advancements in satellite monitoring, time-series analysis, spatio-temporal modeling, climate data integration, and interpretability. This section reviews existing literature across these domains, contextualizing the contributions of HydroChronos and our proposed methodology.
# 2.1 Surface Water Monitoring
Satellite remote sensing revolutionized monitoring surface water extent and dynamics across vast scales and diverse temporal resolutions. Landsat [45] and the Sentinel [13] missions have provided decades of optical imagery, forming the backbone of many surface water mapping efforts. Common water delineation methods include spectral indices like the Normalized Difference Water Index (NDWI) [24] and the Modified NDWI (MNDWI) [47], leveraging water’s spectral reflectance. Machine learning classifiers have also been widely employed for more accurate and robust water body extraction [8, 14]. These efforts have culminated in the development of several large-scale and global surface water datasets [28, 48].
These datasets offer insights into past water dynamics, but they focus on retrospective analysis rather than forecasting, providing only masks of water extents over time. Moreover, they are not explicitly structured for the development and validation of long-term predictive models that integrate auxiliary drivers like climate. Additionally, they are obtained via automatic extraction, so their accuracy depends on the employed model. HydroChronos is specifically designed for multi-year surface water dynamics forecasting, integrating imagery, climate variables, and a DEM in a single dataset. This multi-modal structure, curated for diverse hydrological systems, provides a rich foundation for developing generalizable models.
# 2.2 Time-Series Forecasting in Hydrology
The analysis and forecasting of hydrological variables such as streamflow or water levels has been studied for a long time [23]. Both traditional machine learning approaches [38] and deep learning-based methods (e.g., based on Long Short-Term Memory [17]) have been applied to Earth observation data [30, 33]. While recent advances in remote sensing have posed the problem of forecasting a snapshot image of the future, including exogenous variables [4], to the best of our knowledge no such application exists in hydrology. Forecasting surface water dynamics is influenced by complex, non-linear interactions between past states, seasonality, and external drivers. HydroChronos fills this gap, enabling scientists to experiment with a large-scale corpus tailored for hydrological applications based not only on image data, but also on exogenous climate variables.
Deep learning architectures have demonstrated remarkable success in a wide array of environmental modeling and Earth observation tasks, thanks to their ability to learn hierarchical features from large, complex datasets. U-Net architectures [32] still prove to be a strong baseline for semantic segmentation of satellite imagery [7, 10], including water body delineation [8], and more recently, for spatio-temporal forecasting tasks where the output is an image or a sequence of images [42]. Our AquaClimaTempo UNet (ACTU), building on similar architectures [4, 19, 37], adds a dedicated branch for time-series climate data integration and gated fusion to balance climate and optical features. This allows the model to learn how climatic factors modulate surface water dynamics, moving beyond purely auto-regressive image forecasting.
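A minimal sketch of such a gated fusion is shown below. The weight shapes, names, and the per-feature gate are illustrative assumptions on our part, not ACTU's actual parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(img_feat, clim_feat, W_g, b_g):
    """Blend image and climate feature vectors with a learned gate.

    A gate value near 1 keeps the optical features; a value near 0
    lets the climate branch dominate at that feature position.
    W_g and b_g are hypothetical learned parameters.
    """
    g = sigmoid(np.concatenate([img_feat, clim_feat]) @ W_g + b_g)
    return g * img_feat + (1.0 - g) * clim_feat
```

With zero-initialized parameters the gate is 0.5 everywhere, so the fused output is simply the average of the two branches; training then learns where climate information should override the optical signal.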
# 2.3 Explainable AI in Earth Sciences
The complexity of AI models in critical domains like Earth sciences demands transparency and interpretability [2, 43]. XAI techniques have been applied to environmental models to identify key input features or understand model behavior [49]. In our case, the integration of climate variables such as precipitation, temperature, and evapotranspiration is well-established as essential for accurate hydrological modeling [9, 15]. Combining climate data with satellite observations presents significant opportunities but also challenges, including issues of scale mismatch, data assimilation, and capturing complex, potentially lagged, interactions. Prior studies focus primarily on predictive accuracy, often overlooking interpretability. In contrast, our approach incorporates XAI to analyze the relative importance of historical spatio-temporal patterns and climate drivers in predicting future surface water changes.
# 3 HydroChronos Dataset
In this section, we present the newly created dataset: HydroChronos. The dataset is composed of time series of images from Landsat-5 and Sentinel-2, time series of climate variables from TERRACLIMATE [1], and a DEM. The selected lakes and rivers are derived from HydroLAKES [25] and HydroRIVERS [20] and cover the USA, Europe, and Brazil, as shown in Figure 2.
# 3.1 Landsat-5 and Sentinel-2 images
To capture long-term changes and recent dynamics, HydroChronos uses imagery from Landsat-5 and Sentinel-2. Sentinel-2 provides imagery with superior spatial resolution (10m/20m) and spectral quality compared to Landsat-5 (30m). However, its temporal coverage is limited to the period from 2015 to 2024. To extend the historical perspective, we include Landsat-5 imagery, which covers the period from 1990 to 2010.
To ensure data quality, we selected Top-Of-Atmosphere (TOA) images with the lowest cloud coverage possible, prioritizing clear observations of water bodies. To ensure comparable hydrological conditions and minimize unrelated seasonal variability, we selected May-August images (Northern Hemisphere summer).
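The scene selection criterion above can be sketched as a simple filter (the tuple layout and the cloud-cover threshold are illustrative assumptions, not the exact HydroChronos pipeline):

```python
from datetime import date

def select_scenes(scenes, max_cloud=20.0):
    """Keep May-August scenes under a cloud threshold, clearest first.

    `scenes` is a list of (acquisition_date, cloud_cover_percent)
    pairs; both the record layout and `max_cloud` are hypothetical.
    """
    summer = [s for s in scenes
              if 5 <= s[0].month <= 8 and s[1] <= max_cloud]
    # Prioritize the clearest observations of the water body.
    return sorted(summer, key=lambda s: s[1])
```

A winter scene or a heavily clouded summer scene is dropped, and the remaining candidates are ranked so the lowest-cloud acquisition is used first.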
Recognizing the spectral differences between the two sensors, we selected a consistent set of spectral bands. Sentinel-2 provides up to 13 spectral bands, while Landsat-5 offers 7. We harmonized the data by selecting 6 comparable bands that are available on both sensors, as shown in Table 1. All imagery is provided at a spatial resolution of 30m and projected to WGS84. An RGB version of a sample can be seen in Figure 3a.
Figure 2: Distribution of lakes and rivers in HydroChronos
Table 1: Landsat (L) and Sentinel (S) coupled bands included in the dataset. NIR is Near InfraRed and SWIR is Short-Wave InfraRed
# 3.2 Digital Elevation Model
A static Digital Elevation Model (DEM) provides essential topographic context for hydrological analysis. The DEM for HydroChronos is sourced from the Copernicus GLO30 DEM [3] dataset, which provides global coverage at a spatial resolution of approximately 30 meters. A sample can be seen in Figure 3b. This single-timestep layer captures the terrain elevation for each study area, crucial
for tasks such as watershed delineation, flow accumulation analysis, and understanding the topographical influence on water body characteristics [26].
# 3.3 Climate Variables
To complement the remote sensing data with key environmental drivers, HydroChronos includes time series of climate variables from the TERRACLIMATE [1] dataset. It provides monthly climate data globally at a resolution of approximately 4.6km. The dataset includes 14 variables: actual evapotranspiration, climate water deficit, reference evapotranspiration, precipitation accumulation, runoff, soil moisture, downward surface shortwave radiation, snow water equivalent, maximum temperature, minimum temperature, vapor pressure, vapor pressure deficit, Palmer Drought Severity Index, and wind speed at 10m. We include the complete monthly time series for the periods corresponding to the imagery: 1990-2010 and 2015-2024. A subset of these time series can be seen in Figure 3c. These climate variables can be used to analyze the relationship between climatic conditions and observed changes in water bodies captured by the satellite imagery.
Figure 3: Sample in the three modalities: optical (RGB channels only for visualization), DEM, and climate.
# 3.4 Splits
Since each water basin has its own peculiar behavior, which can be difficult to summarize in simple hydrological variables even when two basins are close to each other, the most straightforward choice is to generalize temporally instead of spatially. We use the Landsat-5 data (1990-2010) to pretrain the model: although this older sensor (launched in the 1980s) suffers from frequent sensor errors and noise, the huge amount of data collected over a large temporal span makes it ideal for learning the dynamics of many areas around the globe. Sentinel-2, which is more modern, with a higher revisit frequency and higher-quality images (from 2015 until now), is used for fine-tuning and testing as follows: the Brazilian rivers are used for fine-tuning, to align the features learned from Landsat-5 to the Sentinel-2 sensor, while Europe and the USA are used for testing. In this way, we collect around 1,900 time series for testing and around 16,000 for training, for a total of over 100 thousand single images.
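The split rule described above amounts to a simple mapping (the sensor and region labels are our illustrative strings, not dataset field names):

```python
def assign_split(sensor, region):
    """Assign a sample to a split following the scheme above:
    Landsat-5 (1990-2010) -> pretraining; Sentinel-2 Brazil ->
    fine-tuning; Sentinel-2 Europe/USA -> testing."""
    if sensor == "landsat-5":
        return "pretrain"
    if sensor == "sentinel-2":
        return "finetune" if region == "brazil" else "test"
    raise ValueError(f"unknown sensor: {sensor}")
```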
# 4 Tasks
In this section, we delineate the target used in the proposed tasks and how each task is formulated. A visual example of the tasks is shown in Figure 5.
Figure 4: RGB sample at two different timesteps and the corresponding MNDWIs.
# 4.1 General Target
Given the difficulty of finding yearly annotations of water dynamics all over the world, we base our tasks on a well-known spectral index for detecting water: the MNDWI. A similar approach was already explored for NDVI [4]. The index exploits the physical properties of water, which is highly reflective in the green channel (G) and absorbs SWIR: $MNDWI = (G - SWIR)/(G + SWIR)$. This approach, while prone to noise, has the advantage of directly capturing changes in physical properties (e.g., icing [41], turbidity [46], pollution [21, 22, 50]) rather than only focusing on water extents, and it avoids a costly annotation process. An example can be seen in Figure 4, where the icing process lowers the MNDWI values of the same area.
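As a concrete illustration, the index can be computed per pixel with NumPy; the reflectance values below are made up, and the small `eps` guard against division by zero is our addition:

```python
import numpy as np

def mndwi(green, swir, eps=1e-6):
    """Modified Normalized Difference Water Index, in [-1, 1].

    green, swir: arrays of surface reflectance in the green and SWIR
    bands; eps guards against division by zero over very dark pixels.
    """
    green = np.asarray(green, dtype=np.float64)
    swir = np.asarray(swir, dtype=np.float64)
    return (green - swir) / (green + swir + eps)

# Water reflects green and absorbs SWIR -> MNDWI close to +1;
# bare land typically shows the opposite pattern.
water = mndwi(np.array([0.30]), np.array([0.02]))
land = mndwi(np.array([0.10]), np.array([0.25]))
```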
To avoid inconsistencies due to clouds, we apply cloud masking to invalid areas. Additionally, residual imperfections of the cloud masking and possible sensor errors are mitigated with the following setting: given a past timeseries $P$ and a future timeseries $F$ of MNDWIs, our target $T$ is defined as $T = \mathrm{median}(P) - \mathrm{median}(F)$, where the median is applied pixelwise over the time axis. In this way, instead of predicting the immediate, possibly noisy future, we target the future trend of the area. This pre-processing step is crucial for a large-scale analysis across diverse regions like the US, Europe, and Brazil, as it effectively reduces localized noise, accounts for minor short-term fluctuations in water levels or atmospheric interference, and smooths the MNDWI signal. Consequently, the subsequent change detection focuses on more persistent and significant alterations in water dynamics rather than ephemeral changes or sensor artifacts. Since $\mathrm{MNDWI} \in [-1, 1]$, $T \in [-2, 2]$. As expected, the target distribution is strongly skewed towards zero (the median is 0.01 and the 75th percentile is 0.06).
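A sketch of the target construction, assuming cloud-masked pixels are stored as NaN so that a pixelwise `nanmedian` over the time axis simply ignores them:

```python
import numpy as np

def change_target(past, future):
    """Target T = median(past) - median(future), pixelwise over time.

    past, future: arrays of shape (T, H, W) of MNDWI values, with
    cloud-masked pixels set to NaN; nanmedian ignores them, so isolated
    masking errors in single timesteps do not corrupt the target.
    """
    return np.nanmedian(past, axis=0) - np.nanmedian(future, axis=0)

# Toy example: a stable past at 0.5 (with one cloud-masked pixel) and
# a future at 0.4 yields T = 0.1 everywhere.
past = np.full((5, 2, 2), 0.5)
past[0, 0, 0] = np.nan
future = np.full((5, 2, 2), 0.4)
T = change_target(past, future)
```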
# 4.2 Change Detection
The first proposed task is binary change detection, which can be framed as binary semantic segmentation. Given a timeseries $P$ , a target $T$ , and a threshold $t$ to define what we consider a relevant change, we create a binary mask $M _ { c } = \left| T \right| > t$ . The task focuses on creating a model to predict $M _ { c }$ .
# 4.3 Direction Classification
This task can be framed as a multiclass semantic segmentation task with 3 classes: negative, positive, or no change. Given a timeseries $P$, a target $T$, and a threshold $t$, we create a mask $M _ { d }$ where a pixel with target value $m _ { d }$ is assigned to the negative change class if $m _ { d } < -t$, to the positive change class if $m _ { d } > t$, and to the no change class otherwise. The task focuses on creating a model to predict $M _ { d }$.
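Both classification targets can be derived from $T$ by simple thresholding; in this sketch the negative class is taken as $T < -t$, mirroring the positive case:

```python
import numpy as np

def change_mask(T, t=0.1):
    """Binary change detection mask M_c: relevant change where |T| > t."""
    return np.abs(T) > t

def direction_mask(T, t=0.1):
    """3-class direction mask M_d: 0 = negative, 1 = no change, 2 = positive."""
    return np.select([T < -t, T > t], [0, 2], default=1)

T = np.array([[-0.5, 0.05, 0.5]])
binary = change_mask(T)       # [[True, False, True]]
direction = direction_mask(T)  # [[0, 1, 2]]
```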
# 4.4 Magnitude Regression
The previous tasks assume the existence of a threshold $t$ defining relevant changes. However, it can also be of interest to model every "small" change in the area. Given a timeseries $P$ and a target $T$, the task focuses on creating a model to regress the values of $\left| T \right|$; it can be framed as pixel-wise regression. Preliminary experiments also attempted to regress the signed target $T$, but with little success, so we report only the settings above as baselines and leave this task for future work.
Figure 5: Visual example of tasks for Lake Tahoe. In regression, the values range from 0 to 2 (blue to red). In change detection, labels are no-change (blue) and change (red). In direction classification, labels are negative change (blue), nochange (grey), and positive change (red).
# 5 Methodology
In this section, we first discuss our proposed baseline, and then we explain the regression loss we employed.
# 5.1 AquaClimaTempo UNet
The AquaClimaTempo UNet (ACTU) architecture is depicted in Figure 6. If a DEM of shape $1 \times 1 \times W \times H$ is given, it is repeated $T$ times, once for each sample of the image timeseries $P$, and concatenated along the channel axis. The image time series of shape $T \times C \times W \times H$ (or $T \times (C+1) \times W \times H$ when the DEM is concatenated) is fed to the Pyramidal Image Feature Extractor (e.g., ConvNeXt [44]). It processes each image independently and, being pyramidal, provides $L$ features, one per level, with shapes from $F _ { 0 }$ to $F _ { L }$. If climate variables are provided, there is one climate timeseries for each of the $T$ images, covering the $T _ { 1 }$ months preceding that image and containing $C _ { 1 }$ variables. The climate encoder produces $L$ features with shapes from $F _ { 0 }$ to $F _ { L }$. The gated fusion takes both the image and climate features at each level and dynamically balances their contributions, outputting $L$ features of shapes from $F _ { 0 }$ to $F _ { L }$, one for each of the $T$ images. The ConvLSTM layers collapse the time dimension, producing one representation of the timeseries for each of the $L$ levels, with shapes from $F _ { 0 }$ to $F _ { L }$. Finally, the UNet decoder takes these multilevel features and produces the final prediction mask by concatenating the features created by the expanding path with those obtained from the ConvLSTMs.
Figure 6: AquaClimaTempo UNet (ACTU) architecture. If DEM is provided, it is repeated once per sample in the image timeseries and concatenated along the channel axis. The Pyramidal Image Feature Extractor provides multiscale embeddings. If a climate timeseries is provided, the climate encoder provides multiscale embeddings which are gate fused with the image embeddings. ConvLSTMs provide multiscale embeddings for the timeseries, which are used in the UNet decoder to provide the final prediction.
5.1.1 Climate Encoder. The climate encoder processes $X _ { c l i m }$, one timeseries for each of the $T$ images, with length $T _ { 1 }$ and $C _ { 1 }$ features. In this way, we can incorporate historical trends at a finer timestep (i.e., monthly instead of yearly). The timeseries are independently processed and projected with a linear layer to create $F _ { p r o j }$ (Equation (1)). They are then processed into $K _ { l }$ representations that spatially match the image features, using $L$ blocks, each composed of an initial Conv2D (kernel size 1) and a GELU (Equation (2)), followed by $s$ repetitions of nearest-neighbor upsampling (scale factor 2), Conv2D (kernel size 3), and GELU (Equation (3)).
$$
\begin{array}{r}
F_{proj} = \mathrm{Linear}(\Phi_{\mathrm{LSTM}}(X_{clim})) \\
K_l^0 = \mathrm{GELU}(\mathrm{Conv2D}_{1\times 1}(F_{proj})) \\
K_l^s = \mathrm{GELU}(\mathrm{Conv2D}_{3\times 3}(\mathrm{Upsample}_{\times 2}(K_l^{s-1})))
\end{array}
$$
5.1.2 Gated Fusion. The climate information can act differently based on the image itself, and also on the area within a single image. To dynamically adapt the contribution of climate features relative to the image features, we apply a gated fusion. This solution provides a value in the range 0-1 for each element of the feature map, dynamically weighting the contributions of the optical and climate features. The weights are obtained by concatenating climate and optical features along the channel axis and applying a Conv2D (kernel size 3), a ReLU, a Conv2D (kernel size 1), and finally a sigmoid to constrain the output to the range 0-1. Given the climate feature $K _ { l }$ and the corresponding image feature $I _ { l }$ at level $l$, the gated fusion can be formulated as:
$$
\begin{array}{r}
Z = \mathrm{Concat}(K_l, I_l) \\
\alpha = \sigma(\mathrm{Conv2D}_{1\times 1}(\mathrm{ReLU}(\mathrm{Conv2D}_{3\times 3}(Z)))) \\
F_l = \alpha I_l + (1 - \alpha) K_l
\end{array}
$$
Each $F _ { l }$ is then used in the decoder to make the final prediction.
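A minimal PyTorch sketch of this gated fusion; the intermediate channel width of the $3\times3$ convolution is an assumption, as it is not specified above:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Per-element gate alpha in (0, 1) blending climate feature K_l and
    image feature I_l of shared shape (B, C, H, W):
        F_l = alpha * I_l + (1 - alpha) * K_l
    """
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            # Concatenated input has 2*C channels; the hidden width C
            # is our assumed choice.
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # constrains alpha to (0, 1)
        )

    def forward(self, K_l, I_l):
        alpha = self.gate(torch.cat([K_l, I_l], dim=1))
        return alpha * I_l + (1 - alpha) * K_l

fusion = GatedFusion(channels=8)
K = torch.randn(2, 8, 16, 16)
I = torch.randn(2, 8, 16, 16)
F_l = fusion(K, I)
```

Because alpha lies strictly in (0, 1), each output element is a convex combination of the corresponding climate and image values.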
# 5.2 Regression Loss
Since the satellite resolution can vary and the problem is strongly imbalanced, L1 or L2 losses alone can be insufficient, as they are also sensitive to noise. Our regression loss uses the Huber loss as a starting point, but combines a multi-scale approach with a wavelet decomposition.
5.2.1 Multiscale Loss. The multiscale loss $L _ { M S }$ , given a regression loss $L$ , prediction $P$ , and ground truth $T$ can be defined as:
$$
L _ { M S } ( P , T ) = \frac { 1 } { M } \left( L ( P , T ) + \sum _ { i = 1 } ^ { M } L ( D _ { s _ { i } } ( P ) , D _ { s _ { i } } ( T ) ) \right)
$$
where $S = \{ s _ { 1 } , \ldots , s _ { M } \}$ are the $M$ scale factors and $D _ { x }$ is the downscaling operation with factor $x$.
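A minimal NumPy instance of $L_{MS}$, using the Huber loss as the inner loss $L$ and average pooling as the downscale operation $D_s$ (both plausible but assumed choices):

```python
import numpy as np

def downscale(x, s):
    """Average-pool a (H, W) array by an integer factor s (H, W divisible by s)."""
    H, W = x.shape
    return x.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def huber(pred, gt, delta=1.0):
    """Mean Huber loss: quadratic for small errors, linear for large ones."""
    d = np.abs(pred - gt)
    return np.where(d <= delta, 0.5 * d**2, delta * (d - 0.5 * delta)).mean()

def multiscale_loss(pred, gt, scales=(2, 4)):
    """L_MS: full-resolution loss plus the loss at each downscaled factor,
    normalized by the number of scale factors M as in the formula above."""
    M = len(scales)
    total = huber(pred, gt) + sum(
        huber(downscale(pred, s), downscale(gt, s)) for s in scales
    )
    return total / M
```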
5.2.2 Wavelet Loss. The Discrete Wavelet Transform (DWT) decomposes both the prediction and the target into different frequency sub-bands, instead of directly comparing pixel values in the spatial domain. Compared to the Fourier transform, it is computationally more efficient ($O(N)$ vs. $O(N \log N)$). This allows the loss to penalize errors at different scales and orientations (horizontal, vertical, diagonal). The DWT decomposes the image into two sets of coefficients: the approximation coefficients $Y _ { L }$ (the low-frequency components, capturing the coarse structure) and the detail coefficients $Y _ { H }$ (the high-frequency components at $N$ levels and horizontal, vertical, and diagonal orientations, capturing finer details). Given a regression loss $L$, prediction wavelet coefficients $Y _ { H } ^ { p }$ and $Y _ { L } ^ { p }$, and ground-truth coefficients $Y _ { H } ^ { t }$ and $Y _ { L } ^ { t }$, the wavelet loss $L _ { W }$ can be defined as:
$$
\begin{array}{r}
L_L(Y_L^p, Y_L^t) = \mathrm{mean}(L(Y_L^p, Y_L^t)) \\
L_{H,i}(Y_{H,i}^p, Y_{H,i}^t) = \mathrm{mean}(L(Y_{H,i}^p, Y_{H,i}^t)) \\
L_W(Y_L^p, Y_L^t, Y_H^p, Y_H^t) = \alpha L_L(Y_L^p, Y_L^t) + \displaystyle\sum_{i=1}^{N} w_i \cdot L_{H,i}(Y_{H,i}^p, Y_{H,i}^t)
\end{array}
$$
where $\alpha$ defines the weight of the low-frequency loss and $W = \left\{ w _ { 1 } , \ldots , w _ { N } \right\}$ the weights of the high-frequency losses.
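A single-level sketch of $L_W$, with a hand-rolled Haar DWT and mean absolute error as the inner loss $L$; the wavelet family, level count, and weights are illustrative assumptions:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT of an even-sized (H, W) array.

    Returns the approximation band Y_L and the (horizontal, vertical,
    diagonal) detail bands, each of shape (H/2, W/2).
    """
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    LL = (a + b + c + d) / 2.0   # approximation (coarse structure)
    LH = (a + b - c - d) / 2.0   # horizontal details
    HL = (a - b + c - d) / 2.0   # vertical details
    HH = (a - b - c + d) / 2.0   # diagonal details
    return LL, (LH, HL, HH)

def wavelet_loss(pred, target, alpha=0.5, w_high=0.5):
    """Single-level instance of L_W with mean-absolute sub-band errors."""
    pL, pH = haar_dwt2(pred)
    tL, tH = haar_dwt2(target)
    low = np.abs(pL - tL).mean()
    high = np.mean([np.abs(ph - th).mean() for ph, th in zip(pH, tH)])
    return alpha * low + w_high * high
```

Note that a constant offset between prediction and target only affects the approximation band, so it is penalized purely through the $\alpha$-weighted term.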
The final loss is a weighted mean of the multiscale and wavelet losses: $L _ { T } = \lambda L _ { M S } + ( 1 - \lambda ) L _ { W }$, where $\lambda$ balances the two contributions.
# 5.3 Explainable AI analysis
The Explainable AI analysis investigates model behavior from two complementary perspectives. First, we leverage climate-derived attributes to perform subgroup discovery and feature attribution, aiming to reveal systematic performance disparities across interpretable climate conditions. Second, we conduct a per-channel saliency analysis to quantify the contribution of individual input modalities to the model’s predictions.
5.3.1 Climate Subgroup Discovery and Feature Attribution. Model performance in spatial machine learning can vary significantly across different environmental conditions. Aggregated metrics may conceal systematic failures concentrated in specific climatic regimes. To uncover and explain such disparities, we adopt a post hoc analysis framework based on subgroup discovery and feature attribution.
Climate Subgroup Discovery. Let $\mathcal { D } \ = \ \{ ( x _ { i } , y _ { i } , \hat { y } _ { i } ) \} _ { i = 1 } ^ { N }$ denote the evaluation dataset, where $x _ { i } \in X$ represents the input sample, $y _ { i } \in \mathcal { Y }$ is the ground-truth label, and $\hat { y } _ { i } \in \mathcal { Y }$ is the model prediction. Each $x _ { i }$ is associated with a set of interpretable climate-derived attributes $A ( x _ { i } ) = \{ a _ { 1 } , . . . , a _ { k } \}$ obtained via feature binning.
We define a subgroup $S \subseteq { \mathcal { D } }$ as the set of samples satisfying a conjunction of attribute-value conditions:
$$
S = \{ x _ { i } \in { \mathcal { D } } \mid a _ { j } ( x _ { i } ) = v _ { j } \quad \forall j \in J \} ,
$$
where $J \subseteq \{ 1 , \ldots , k \}$ indexes selected attributes and $v _ { j }$ denotes specific bin values. To evaluate the behavior of a model over $S$, we use the notion of subgroup performance divergence [27], defined as:
$$
\Delta _ { m } ( S ) = m ( S ) - m ( \mathcal { D } ) ,
$$
where $m : \mathcal { D } \to \mathbb { R }$ is a scalar performance metric (e.g., precision, recall), $m ( S )$ is the metric evaluated over the subgroup, and $m ( \mathcal { D } )$ is the global reference over the entire dataset.
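The divergence $\Delta_m(S)$ is straightforward to compute once the attributes are binned; a toy pandas example with made-up attribute names and per-sample F1 values:

```python
import pandas as pd

# Toy evaluation frame: binned climate-variability attributes plus a
# per-sample metric (names and values are illustrative, not the paper's).
df = pd.DataFrame({
    "pr_var":   ["L", "L", "H", "H", "M", "M"],
    "soil_var": ["L", "H", "H", "H", "M", "L"],
    "f1":       [0.9, 0.8, 0.4, 0.5, 0.7, 0.8],
})

def divergence(df, conditions, metric="f1"):
    """Delta_m(S) = m(S) - m(D) for a conjunction of attribute = value terms."""
    mask = pd.Series(True, index=df.index)
    for attr, val in conditions.items():
        mask &= df[attr] == val
    return df.loc[mask, metric].mean() - df[metric].mean()

# Subgroup {pr_var = H AND soil_var = H}: mean F1 0.45 vs. global ~0.683.
delta = divergence(df, {"pr_var": "H", "soil_var": "H"})
```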
We automatically identify the subgroups with large divergence scores using DivExplorer [27], which enumerates statistically significant subgroups under a minimum support constraint $\theta$. This process identifies climate conditions under which the model under- or over-performs.
Feature Attribution. To explain which features contribute most to these performance deviations, we use the notion of Global Shapley values [27]. The Global Shapley value is a generalization of the Shapley value [35] which estimates the contribution of each attribute-value to the divergence across all identified subgroups above the support constraint. The higher the value, the more the attribute-value term contributes to the divergence in performance. A positive contribution of a term indicates that it is associated with a performance metric $m$ higher than the average on the overall dataset. We refer the reader to [27] for its formal definition.
5.3.2 Per-channel Saliency. A central goal in XAI is to identify which input components most significantly influence a model’s predictions. Beyond interpretability, this analysis has practical implications, such as reducing computational overhead by pruning less informative input channels. A common approach for estimating input relevance is perturbation-based: it perturbs the input and measures the resulting change in the model’s output. Perturbation-based techniques [5, 11, 51] provide a straightforward method for attributing importance scores to input dimensions based on their effect on model behavior. We adopt a perturbation-based strategy to compute per-channel saliency, aimed at quantifying the relevance of each input channel, adapting the method proposed in [29] to our time-series setting. Let $x \in \mathbb { R } ^ { T \times C \times H \times W }$ denote the input tensor, where $T$ is the number of temporal frames, $C$ the number of channels, and $H \times W$ the spatial resolution. For a given test sample, we first compute the model’s baseline prediction $\hat { y } = \mathcal { M } ( x , \mathrm { DEM } , \mathrm { Climate } )$ and evaluate it using a suite of performance metrics. To assess the importance of channel $c \in \{ 1 , \ldots , C \}$, we generate a perturbed input $x ^ { ( - c ) }$ by zeroing out the $c$-th channel across all time steps. We then recompute the model output as $\hat { y } ^ { ( - c ) } = \mathcal { M } ( x ^ { ( - c ) } , \mathrm { DEM } , \mathrm { Climate } )$. The saliency of channel $c$ is defined as the change in a given performance metric $m$:
$$
\Delta m _ { c } = m ( \hat { y } , y ) - m ( \hat { y } ^ { ( - c ) } , y ) ,
$$
where $y$ is the ground truth. A larger $\Delta m _ { c }$ indicates that channel $c$ has a greater influence on model performance, since its absence leads to a more substantial degradation in prediction quality. We then average the per-channel saliency scores across the test set to obtain a dataset-level contribution. This process is extended to the DEM input, which is ablated entirely to measure its global contribution. Overall, this saliency analysis provides both local (per-sample) and global (dataset-level) interpretability, helping to identify which input modalities most influence the model decisions in spatiotemporal tasks.
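The per-channel ablation loop can be sketched as follows; the toy model reads only channel 0, so the expected saliency pattern is easy to verify:

```python
import numpy as np

def per_channel_saliency(model, x, y, metric):
    """Delta m_c = m(yhat, y) - m(yhat^(-c), y): zero out channel c across
    all T timesteps of x (shape T x C x H x W) and measure the metric drop."""
    base = metric(model(x), y)
    scores = []
    for c in range(x.shape[1]):
        x_pert = x.copy()
        x_pert[:, c] = 0.0          # ablate channel c at every timestep
        scores.append(base - metric(model(x_pert), y))
    return np.array(scores)

# Toy model that only reads channel 0; the metric is negative absolute
# error, so higher is better and ablating channel 0 hurts it.
model = lambda x: x[:, 0].mean()
metric = lambda pred, y: -abs(pred - y)
x = np.ones((3, 2, 4, 4))
scores = per_channel_saliency(model, x, y=1.0, metric=metric)
```

Here channel 0 receives a positive saliency while the unused channel 1 scores zero, matching the interpretation given above.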
# 6 Experimental Results
In this section, we present the experimental settings and the results, compared against simple statistical baselines, namely constant prediction and persistence. Constant prediction simply predicts that no change will happen in the future. Persistence is computed as the difference between the last known timestep and the median of the previous ones (thresholded with $t$ for classification).
# 6.1 Experimental Settings
ACTU is pretrained for 50 epochs on the Landsat subset and fine-tuned for 20 epochs on Sentinel-2 with a cosine-decay learning-rate scheduler with a $5\%$ warmup. The maximum learning rate is 5e-4 for pretraining and 5e-6 for fine-tuning. The batch size was set to 8. The classification loss is a combo loss composed of generalized Dice and focal losses. The vision backbone for the encoder is ConvNeXtV2 [44], in base (ACTU) and large (ACTU-L) versions. The length of the two image time series was set to 5. To avoid overwhelming the neural network with information (and to limit potential multicollinearity and increasing costs), we select a subset of 5 climate variables (maximum temperature (tmmx), actual evapotranspiration (aet), runoff (ro), precipitation (pr), and soil moisture (soil)) that should be relevant for the task [6, 16, 31]. We also use these five variables for the analysis in Section 5.3.1. Since this analysis requires categorical attributes to define subgroups, we discretized each variable into three interpretable bins. Specifically, we computed the standard deviation of each climate variable over the input time series to quantify temporal variability. These variability scores were discretized by equal-frequency binning into three levels, low (L), medium (M), and high (H), thus enabling the construction of climate subgroups with distinct fluctuation profiles. In our experiments, we threshold at $t = 0.1$ (i.e., the ${\sim}85$th percentile) to remove possible sensor noise, atmospheric effects, phenological changes, and co-registration errors. A key requirement for training a neural network is the generation of a consistent change mask across the entire diverse study area; a fixed threshold ensures that the definition of ‘change’ is uniform.
While this approach is necessary for large-scale change detection, it is acknowledged that the precise magnitude of MNDWI difference that corresponds to a ‘relevant’ real-world change can still exhibit some variability due to the diverse nature of water bodies, atmospheric conditions, and local landscape characteristics across the US, Europe, and Brazil.
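The three-level equal-frequency discretization of the variability scores described above can be sketched with pandas; the standard-deviation values below are illustrative:

```python
import pandas as pd

# Per-series standard deviations of one climate variable (made-up values).
variability = pd.Series([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

# Equal-frequency binning into low / medium / high: each bin receives the
# same number of samples regardless of the value distribution.
levels = pd.qcut(variability, q=3, labels=["L", "M", "H"])
```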
# 6.2 Evaluation Metrics
We evaluate classification tasks using precision (P), recall (R), and F1-score (F) for each class. Since the regression problem is pixelwise and many areas have values near zero, it can be considered “unbalanced”. We evaluate regression quality with the Mean Absolute Error (MAE) and the Pearson Correlation (PC). We also compute the MAE on the top-$10\%$ (MAE@10) and top-$20\%$ (MAE@20) highest-valued pixels to account for the imbalance of the values. We threshold the regressed values at $t = 0.1$ (as for classification) and $t = 0.2$, and compute precision (P@t), recall (R@t), and F1-score (F@t). This allows us to assess the trade-offs between detecting relevant pixels and the accuracy of those detections at different sensitivity levels.
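A plausible NumPy implementation of the MAE@k metric, assuming the top-$k\%$ pixels are selected by target magnitude:

```python
import numpy as np

def mae_at_k(pred, target, k=10):
    """MAE restricted to the top-k% highest-magnitude target pixels,
    focusing the metric on the rare large changes."""
    p, t = np.ravel(pred), np.ravel(target)
    n = max(1, int(len(t) * k / 100))
    idx = np.argsort(np.abs(t))[-n:]   # indices of the n largest |target|
    return np.abs(p[idx] - t[idx]).mean()

# With 10 pixels and k=10, only the single largest-magnitude target pixel
# contributes, so a zero prediction yields an MAE@10 of exactly 2.0.
target = np.zeros(10)
target[-1] = 2.0
pred = np.zeros(10)
score = mae_at_k(pred, target, k=10)
```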
# 6.3 Change Detection
Table 2 presents the binary change detection results, highlighting the performance of various model configurations. All ACTU model variants demonstrate a substantial and statistically significant improvement in detecting changes (CHG) compared to the Constant and Persistence baselines. For instance, the baselines achieve F1-scores of 0 and 34.98, respectively, whereas all ACTU configurations surpass an F1-score of 45 for this class. The inclusion of climate variables (C) with the base ACTU model improves the no-change class (NoCHG) F1-score but results in a slight decrease in the CHG F1-score. Conversely, incorporating only DEM data (D) enhances the CHG F1-score, the highest among the standard ACTU variants for this metric, while also slightly improving NoCHG performance. When both DEM and climate data are utilized, the model achieves the highest CHG recall (62.33), demonstrating a superior capability to identify actual change instances, though its F1-score (48.67) is marginally lower than in the DEM-only configuration. The larger backbone model, ACTU-L, shows a modest improvement in CHG precision. Generally, a larger backbone does not seem to provide consistent improvements over the base version.
Table 2: Change detection results for models optionally using DEM (D) and climate variables (C). \* indicates a statistically significant difference $(p < 0.01)$ with respect to persistence according to the t-test. ° indicates the statistical difference comparing ACTU-L with the same configuration of ACTU.
# 6.4 Direction Classification
Table 3 details the performance for the direction classification task, categorizing changes into negative (NEG), no change (NONE), or positive (POS). All ACTU model variants achieve statistically significant and considerable improvements over the Constant and Persistence baselines. This is particularly evident for POS and NEG, where Persistence achieves 17.48 and 11.61, respectively, compared to 30.27 and 19.47 for ACTU. All models excel at identifying NONE, with ACTU variants consistently reaching F1-scores around 86-87. However, accurately classifying the direction of change (POS and NEG) is inherently more challenging, as reflected by their lower F1-scores compared to NONE across all models. Introducing climate variables notably improves the F1-score for POS, though it slightly reduces performance for NEG. Conversely, adding DEM data alone does not yield a clear F1-score improvement for either change-direction class, slightly decreasing both NEG and POS F1-scores. The combined use of DEM and climate data results in a robust NONE F1-score (87.6) and a POS F1-score of 20.77. The larger ACTU-L model shows modest F1 improvements for NONE (87.76) and POS (20.88) over its smaller counterpart, but a decrease for NEG. These results suggest that while ACTU is strongest at detecting negative changes, incorporating climate data is particularly beneficial for identifying positive changes. The overall task of precise direction classification remains complex, with input data types showing varied impacts on different change categories.
# 6.5 Magnitude Regression
Table 4 details the magnitude regression performance, where all ACTU model variants demonstrate statistically significant and substantial improvements over the Constant and Persistence baselines across all reported metrics. The standard ACTU model achieves a low MAE of 0.0261 and a PC of 46.45. While incorporating DEM (D), climate (C) variables, or both tends to slightly increase the overall MAE and decrease the PC, the inclusion of DEM markedly improves the F1-scores when the regression outputs are thresholded to identify significant changes. Specifically, F@0.1 and F@0.2 improve due to enhanced recall. This indicates that while the base model excels at general magnitude prediction, DEM input is particularly beneficial for more accurately identifying pixels undergoing substantial change. The larger backbone model, ACTU-L, emerges as the top-performing configuration. It improves the MAE on the top $10\%$ and $20\%$ highest-magnitude changes and achieves the highest F1-scores for thresholded significant changes. Although its overall MAE is marginally higher than ACTU's, its PC is slightly better. In contrast, ACTU-L with climate and DEM, while improving the overall MAE and PC over its smaller counterpart, does not reach the thresholded performance levels of ACTU-L.
# 6.6 Ablation Studies on regression loss
To understand the contribution of the proposed combined loss $L _ { T }$, Table 5 compares it with each of its components alone and with the standard application of a regression loss (the Huber loss, which is also the inner loss used in the derived losses). For simplicity, we compare using ACTU without any additional inputs. Comparing the Huber loss $( L )$ to its multiscale version $( L _ { M S } )$, $L _ { M S }$ provides better recall and thus better F1-scores, and lower MAEs on the highest-valued pixels. The wavelet loss $( L _ { W } )$ performs well on the regression metrics but struggles with the classification metrics, probably because the frequency domain lacks spatial information. The combination $L _ { T }$ proves its benefits on the regression metrics MAE and PC (at least a $+2\%$ improvement). On the classification metrics, it enhances precision while reducing recall: F@0.1 is not significantly affected by the recall loss, while F@0.2 is enhanced. $L _ { T }$ thus blends the higher recall of the spatial loss $( L _ { M S } )$ with the higher precision of the frequency loss $( L _ { W } )$, providing a balanced contribution of both. This loss proved to be a good alternative; still, a more extensive hyperparameter search could be performed in the future.
# 7 Analysis and Discussion
In this section, we present the insights derived from the XAI analysis. We begin with the Climate Subgroup Discovery and Feature Attribution, which shows how the model’s performance changes according to climate variations and highlights its reliance on specific climate variables. We then examine the Per-channel Saliency to assess the relative importance of the individual spectral bands and DEM.
# 7.1 Climate Subgroups and Feature Attribution
Climate Subgroups. We identify climate subgroups that consistently challenge the model across different tasks. These insights let us outline critical samples characterized by hard-to-learn environmental patterns. We first perform the subgroup discovery over the five climate variables, discretized according to their intra-series variability. For each task and class, we extract the subgroup exhibiting the highest divergence, as detailed in Appendix B. We then conduct a comparative analysis of these subgroups by inspecting the samples they encompass, in order to uncover recurring problematic regions across tasks. We report the findings in Table 6. Notably, there is strong agreement across all tasks on the difficulty of predicting that an area will change in the regions of Great Salt Lake and Utah Lake, as they appear in the worst-performing subgroups for the change, direction, and regression models alike. Conversely, Rainy Lake and Woods Lake are particularly problematic when the model attempts to predict stable conditions (i.e., no change), suggesting that temporal climate fluctuations in these areas might mimic weak change signals. Additionally, Lake Texoma and Pyramid Lake consistently appear in subgroups associated with poor performance in both the regression task and the detection of positive changes, pointing to a possible shared climate signal that the model struggles to generalize across these scenarios.
Table 3: Direction classification results for models optionally using DEM (D) and climate variables (C). \* indicates a statistically significant difference $(p < 0.05)$ with respect to persistence according to the t-test. ° indicates the statistical difference comparing ACTU-L with the same configuration of ACTU.
Table 4: Magnitude regression results for models optionally using DEM (D) and climate variables (C). \* indicates a statistically significant difference $(p < 0.05)$ with respect to persistence according to the t-test. ° indicates the statistical difference comparing ACTU-L with the same configuration of ACTU.
Table 5: Comparison between the wavelet loss $(L_W)$, the multiscale loss $(L_{MS})$, their combination $L_T$, and the standard application of the regression loss $(L)$. \* indicates a statistically significant difference $(p < 0.01)$ with respect to $L_T$ according to the t-test.
Table 6: Lakes associated with the worst-performing climate subgroups across the three tasks: change detection (C), direction classification (D), and regression (R), with the affected classes (for C and D) and MAE metric for R.
Feature Attribution. We compute the Global Shapley values to quantify the contribution of each attribute-value pair to the overall divergence. This allows us to identify which factors are most responsible for performance variations across subgroups. Table 7 summarizes the two most and the two least contributing levels of climate variability for each of the three tasks. In the change detection task, precipitation (pr) and soil moisture (soil), particularly under conditions of low or high variability, consistently emerge as strong contributors to model performance. Maximum temperature (tmmx) also appears repeatedly across all tasks, with its variability positively correlated with higher prediction accuracy. In contrast, high variability in actual evapotranspiration (aet) is frequently among the least contributing factors, indicating that it may introduce instability or ambiguity that the model struggles to capture effectively. These findings indicate that model performance is not uniformly distributed across climatic regimes and that temporal variability in certain features (e.g., soil, aet) can lead to systematic failure modes. Identifying failure-prone climate subgroups not only enhances our understanding of climate feature relevance but also provides actionable insights for improving model robustness through targeted data augmentation, domain adaptation, and climate-aware validation strategies.
# 7.2 Per-channel Saliency
We perform a per-channel saliency analysis to evaluate the contribution of each input modality. For change detection (C) and direction classification (D), we computed the average drop in F1-score resulting from the ablation of individual channels. For the regression task (R), we instead measured the change in Mean Absolute Error (MAE) and Pearson Correlation (PC). We summarize the saliency scores in Figure 7. All saliency scores are row-normalized between -1 (confusing channel) and 1 (important channel) to allow for consistent visual comparison, where 0 indicates an irrelevant channel. For MAE, we normalize its negated value so that the interpretation remains consistent. The evaluated models included the DEM as an additional input.
In the change detection task, NIR and SWIR channels stand out as the most informative for detecting change events (C-CHG, second row), while RGB channels play a greater role in predicting areas with no change (C-NoCHG, first row). This pattern largely holds in the direction classification task as well. Interestingly, although the NIR channel supports detection of negative changes (D-NEG), it appears to hinder the identification of positive changes (D-POS), revealing a class-dependent interaction with this spectral band. In the regression task, performance improves when the model has access to the full spectrum of channels. For R-PC, NIR and SWIR maintain their prominence, yet the RGB channels also contribute consistently, suggesting a beneficial multispectral synergy.
This analysis highlights the distinct and complementary roles of spectral bands across tasks. NIR and SWIR are crucial for detecting dynamic changes and maintaining high correlation with ground truth signals, while RGB channels remain essential for stable predictions and class discrimination. Moreover, the varying impact of NIR on different direction classes underscores the importance of task-specific channel sensitivity when designing interpretable Earth observation models.
Figure 7: Row-Normalized Per-Channel Saliency related to change detection (C) and direction classification (D), F1-Score for each class, and to MAE and Pearson Correlation (PC) of regression (R).
Table 7: Feature Attribution through Global Shapley values. For each of the tasks, we report the two best and worst contributing climate values (H = high, M = medium, L = low). | Forecasting surface water dynamics is crucial for water resource management
and climate change adaptation. However, the field lacks comprehensive datasets
and standardized benchmarks. In this paper, we introduce HydroChronos, a
large-scale, multi-modal spatiotemporal dataset for surface water dynamics
forecasting designed to address this gap. We couple the dataset with three
forecasting tasks. The dataset includes over three decades of aligned Landsat 5
and Sentinel-2 imagery, climate data, and Digital Elevation Models for diverse
lakes and rivers across Europe, North America, and South America. We also
propose AquaClimaTempo UNet, a novel spatiotemporal architecture with a
dedicated climate data branch, as a strong benchmark baseline. Our model
significantly outperforms a Persistence baseline for forecasting future water
dynamics by +14% and +11% F1 across change detection and direction of change
classification tasks, and by +0.1 MAE on the magnitude of change regression.
Finally, we conduct an Explainable AI analysis to identify the key climate
variables and input channels that influence surface water change, providing
insights to inform and guide future modeling efforts. | [
"cs.CV"
] |
# 1 Introduction
Large Language Models (LLMs) have advanced at a remarkable pace within recent years, driven primarily by larger models and bigger training datasets [12, 28]. As a result, training and fine-tuning LLMs has become prohibitively expensive, with all but the biggest players unable to implement full parameter fine-tuning on state-of-the-art models. To remedy this, parameter-efficient fine-tuning (PEFT) methods have been introduced. One such approach is Prefix-Tuning (PT) [16], a technique which prepends trainable vectors to future inputs of each attention layer in the transformer. PT is extremely cheap to implement, while matching and even surpassing other bulkier methods in a variety of studies. However, as LLMs have grown to record sizes, PT has failed to perform well on the largest models, gradually losing popularity to other methods such as LoRA [11] and GaLore [40].
Earlier studies have primarily attributed this behaviour to PT’s failure to reshape attention patterns within attention heads [27]. We show empirically that, while this applies to more shallow transformers, it does not extend to modern LLMs which tend to have a deep transformer architecture. Prefix-Tuning large language models can in fact result in a significant shift in the attention pattern. This leads to our conclusion that an inability to alter attention patterns is not the reason behind PT’s bad performance on state-of-the-art LLMs.
In this work, we argue that the real reason PT performs sub-optimally is its inherent tradeoff between prefix and input significance. When the prefix is long relative to input length, the model risks losing input specificity and being dominated by the prefix. When the input is long relative to prefix length, the impact of PT itself is greatly diminished. This tradeoff is a result of prefixes being included in the attention head itself. Motivated by this, we build on previous work [4] to propose Prefix-Tuning+ $( \mathrm { P T + } )$ , which relocates the prefix outside the attention head and approximates it with an external module consisting of a trainable matrix and representation function. Diagnostic experiments suggest that $\mathrm { P T } +$ is substantially more expressive than standard PT, reinforcing our choice of using the external module. We also provide a unified overview to the choices we make when extending PT to $\mathrm { P T } +$ , discussing how readers can potentially pick and choose what to keep when designing future context-based methods.
To evaluate the performance of $\mathrm { P T } +$ , we run extensive experiments in the few-shot data setting comparing it to other popular training methods such as LoRA and PT. Our experiments show that across multiple popular benchmarks, $\mathrm { P T } +$ can compare directly with LoRA which is considered state-of-the-art. In a few cases it can even exceed it. Regular PT flounders in comparison.
Our work presents the following key contributions:
• We demonstrate empirically that Prefix-Tuning performs badly on modern LLMs because of an inherent tradeoff between input and prefix significance within the attention head.
• We introduce Prefix-Tuning+, a novel architecture based on Prefix-Tuning that isolates the prefix module outside of the attention head. We further provide a unified overview of our decision making process in constructing $\mathrm{PT}+$ to guide users when constructing future context-based methods.
• We perform extensive experiments to show the efficacy of $\mathrm{PT}+$. Our experiments show that, in the few-shot setting, $\mathrm{PT}+$ is competitive with state-of-the-art approaches such as LoRA, achieving an average absolute improvement of $8.1\%$ over LoRA and $29.4\%$ over Prefix-Tuning across all six evaluated settings.
This serves as a proof of concept that, when the prefix information is isolated from the attention head like in $\mathrm { P T } +$ , prefix-tuning methods can serve as a viable alternative to current SOTA methods and is an exciting future area of research.
# 2 Related Work
Parameter-efficient fine-tuning (PEFT) [21, 8] adapts large language models (LLMs) by optimizing only a limited set of parameters while freezing the majority of pre-trained weights, reducing computational and memory demands. This approach enables rapid model adaptation to downstream tasks, facilitating deployment in resource-constrained environments without sacrificing performance [9].
Weight-Based PEFT Methods. LoRA [11] represents the most widely adopted weight-based PEFT method, introducing small, trainable low-rank matrices into transformer layers while freezing the original weight matrices. Variants such as QLoRA [7] and $\mathrm { L o R A + }$ [10] refine this concept further, projecting the model’s weights onto low-dimensional subspaces to achieve efficiency comparable to full fine-tuning at significantly reduced computational cost. However, these methods primarily adjust linear layers within transformer blocks, indirectly affecting internal attention patterns, and potentially limiting their flexibility in adapting attention patterns and behaviors explicitly.
Context-Based PEFT Methods. In contrast to weight-based methods, context-based PEFT methods directly alter the input context provided to LLMs without modifying the model’s weights. Prominent examples include P-Tuning [18, 19], Prompt Tuning [15], and Prefix-Tuning [16]. Among these, Prefix-Tuning has been recognized for its exceptional parameter efficiency, achieving performance close to full fine-tuning on generation tasks. Nevertheless, Prefix-Tuning faces significant scalability issues, as performance quickly saturates or even declines with increasing prefix length [26, 27], thereby limiting its effectiveness in learning novel tasks that substantially differ from the pretraining distributions. Addressing these limitations is crucial for enhancing the versatility and applicability of context-based PEFT approaches. In this work, we present a unified view to better understand context-based PEFT methods and propose advancements that extend beyond traditional prefix-tuning.
# 3 Preliminaries
Transformer models were introduced to address sequence-to-sequence tasks and primarily consist of attention layers, feed-forward networks, and other task-specific modules [34]. In this paper, we assume inputs take the form $X = [x_{1}, \ldots, x_{n}]$, a sequence of tokens $x_{i} \in \mathbb{R}^{d}$ for all $i \in [n]$, so that $X \in \mathbb{R}^{n \times d}$.
# 3.1 The Attention Mechanism
Attention modules are a key component of transformers which accept the entire sequence as an input. Typically, attention layers consist of multiple heads, each with a separate set of parameters. For notational simplicity we focus on single-headed attention. A single attention head takes the form:
Definition 1 (Single-headed Attention) Given input $X \in \mathbb{R}^{n \times d}$ and trainable matrices $W_{Q}, W_{K} \in \mathbb{R}^{d \times d_{K}}$, $W_{V} \in \mathbb{R}^{d \times d_{V}}$, a single attention head takes the form:
$$
O = \mathrm{Attn}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_{K}}} + M\right)V,
$$
where $O$ is the output, $Q = X W _ { Q } $ , $K = X W _ { K }$ and $V = X W _ { V }$ and $M$ is a causal mask. Based on [13], the attention head can be expressed as:
$$
o_{i}^{\top} = \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top}}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})}.
$$
$o_{i}$ is the $i$-th output token, whilst $q_{i} = x_{i} W_{Q}$, $k_{i} = x_{i} W_{K}$, $v_{i} = x_{i} W_{V}$, and $\mathrm{sim}(q, k) = \exp\bigl(\frac{qk^{\top}}{\sqrt{d_{K}}}\bigr)$ is a similarity score.
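Definition 1 and Equation (2) can be implemented directly as a reference. A minimal NumPy sketch (the function name and the in-code masking style are our own, not from the paper):

```python
import numpy as np

def causal_attention(X, W_Q, W_K, W_V):
    """Single-head causal attention, Eq. (1):
    softmax(Q K^T / sqrt(d_K) + M) V, with M masking future positions."""
    n, d_K = X.shape[0], W_K.shape[1]
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    scores = Q @ K.T / np.sqrt(d_K)
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # positions j > i
    scores[mask] = -np.inf
    # Stable softmax over each row; masked entries become weight 0.
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V
```

Row $i$ of the output is exactly Equation (2): a similarity-weighted average of the value vectors $v_j$ for $j \leq i$.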
# 3.2 Prefix-Tuning
Prefix-tuning was initially motivated by the phenomenon of prompting and in-context learning (ICL):
Definition 2 (In-Context Learning) ICL allows large language models to adapt to new tasks by prepending demonstration prompts to the input based on a specified criteria. Given context prompt $[ x _ { 1 } ^ { \prime } , . . . , x _ { p } ^ { \prime } ]$ and input $X$ , the new prompt becomes: $X ^ { I C L } = [ x _ { 1 } ^ { \prime } , . . . , x _ { p } ^ { \prime } , x _ { 1 } , . . . , x _ { n } ]$ .
Given the broad success of ICL, prefix-tuning was introduced as a natural generalization of this principle. Rather than selecting tokens which correspond to elements available in the model's vocabulary, soft tokens (i.e., trainable vectors) are prepended to future model inputs:
Definition 3 (Prefix-Tuning) Prefix-Tuning $(PT)$ is a form of parameter-efficient fine-tuning that prepends a sequence of vectors to the inputs. Given prefix $[s_{1}, \ldots, s_{p}]$, where $s_{i} \in \mathbb{R}^{d}$ for all $i$, and input $X$, the new prompt becomes $X^{pt} = [s_{1}, \ldots, s_{p}, x_{1}, \ldots, x_{n}]$. The vectors $\{s_{i}\}_{i=1}^{p}$ are then trained with traditional gradient-based methods while the rest of the model weights are frozen.
Referring to Equation (2), the inclusion of prefix $[ s _ { 1 } , . . . , s _ { p } ]$ yields the following output:
$$
o_{i}^{pt\top} = \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top} + \sum_{j \leq p} \mathrm{sim}(q_{i}, W_{K} s_{j})\, (W_{V} s_{j})^{\top}}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j}) + \sum_{j \leq p} \mathrm{sim}(q_{i}, W_{K} s_{j})}.
$$
Any form of ICL is a special instance of prefix-tuning but not vice-versa, making prefix-tuning a more flexible and expressive form of fine-tuning compared with prompting and ICL methods.
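Equation (3) amounts to prepending the (trained) soft tokens to the keys and values of each head, visible to every query position, while queries come only from the input. A hedged NumPy sketch of one prefix-tuned head follows; in practice the prefix $S$ is learned by gradient descent, but here it is simply an argument, and all names are illustrative:

```python
import numpy as np

def prefix_attention(X, S, W_Q, W_K, W_V):
    """Causal attention with a prefix S of p soft tokens prepended to the
    keys/values (Eq. 3). Input tokens obey causality; every position may
    attend to all prefix tokens."""
    n, p, d_K = X.shape[0], S.shape[0], W_K.shape[1]
    Q = X @ W_Q
    K = np.vstack([S @ W_K, X @ W_K])   # prefix keys first
    V = np.vstack([S @ W_V, X @ W_V])
    scores = Q @ K.T / np.sqrt(d_K)
    # Mask input key j (column p + j) for query i whenever j > i.
    mask = np.triu(np.ones((n, n + p), dtype=bool), k=p + 1)
    scores[mask] = -np.inf
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V
```

Note that the prefix terms sit inside the same softmax as the input terms, which is precisely the source of the significance tradeoff analyzed in Section 4.2.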
Compared with full parameter fine-tuning and even most other PEFT methods, prefix-tuning offers an extremely lightweight training approach. Research shows that prefix-tuning excels in low-data or few-shot settings and when guiding the model to leverage a mix of its pretrained tasks, rather than learning entirely new tasks from scratch.
Figure 2: Attention Map of LLaMA2-7B-Chat, and its LoRA and Prefix-Tuning fine-tuned versions.
# 4 Limitations of prefix-tuning in LLMs
In the previous section, we noted that PT is particularly effective when leveraging pretrained tasks. With the continual increase in the size and capability of large language models (LLMs), supported by an expanding pretraining corpus, one might anticipate a corresponding rise in the prominence and effectiveness of PT. However, contrary to expectations, the adoption of prefix-tuning has significantly declined in recent years, as evidenced by its sparse implementation on state-of-the-art models available in repositories such as Hugging Face.
This diminished popularity is primarily due to PT's underwhelming performance with larger and more complex models, which manifests in reduced accuracy and instability. As depicted in Figure 1, Prefix-Tuning consistently underperforms LoRA on three commonly used generative classification benchmarks, despite introducing a similar number of new parameters (see Section 6 for details). With the advent of LoRA, a Parameter-Efficient Fine-Tuning (PEFT) method that consistently outperforms Prefix-Tuning on established benchmarks, the overall relevance and applicability of Prefix-Tuning methods have been increasingly questioned.
Figure 1: Performance comparison between Prefix-Tuning and LoRA.
# 4.1 Does Prefix-Tuning alter the attention pattern?
So why doesn't Prefix-Tuning perform well on state-of-the-art LLMs? The popular stance is that PT cannot alter the attention distribution in the attention heads [27]. Previous work [27] demonstrates that prefix-tuning is only capable of biasing the attention layer activations, which forms a severe limitation. This is shown to be true for single-layer transformers and shallow transformers in general. In this study, we argue that, while this analysis is indicative for shallow transformers, it does not capture how PT behaves on LLMs, which are deep multi-layer transformers. Our experiments in Figure 2 show that PT can modify the attention pattern of LLMs significantly, despite having bad performance (experiment details are deferred to Appendix B.2). This leads us to believe that an inability to affect the attention pattern is not why PT performs badly.
# 4.2 Tradeoff between prefix and input significance
In this section, we argue that the fundamental limitation of Prefix-Tuning is the inherent tradeoff between the significance of the prefix and the input. This can be observed by rewriting Equation (3) based on the work by [27] as follows:
$$
o_{i}^{pt\top} = (1 - \alpha_{i})\, o_{i}^{\top} + \sum_{j \leq p} \alpha_{ij}\, v_{j}^{\prime\top},
$$
Equation (4) shows that the output with prefix-tuning can be represented as a linear combination of the attention from the input, $o_{i}$, and the contribution of each prefix, $v_{j}^{\prime}$, with weights $\alpha_{ij}$. Prefix-Tuning mainly does two things: it re-weights the original attention output and adds query-dependent bias vectors.
When the prefix is long relative to input length: In this case, we can expect the value of $\alpha$ to be large, which results in a greater change in the attention pattern, since the base model's attention pattern is mainly dependent on $o_{i}$; this explains our observations in Figure 2. To further verify, we conducted experiments with different prefix lengths and measured the attention pattern changes using the REEF framework [39]. Our results in Table 4 confirm that as prefix length increases, the deviation from the base attention pattern grows. Details can be found in Appendix B.3. The downside of a large $\alpha$ is a smaller contribution from the input itself: the model has reduced specificity regarding each input and risks being dominated by the prefixes. Too little significance may be placed upon the input itself.
This is further exacerbated by the fact that, as the length of the prefix increases, prefix-tuning is unable to make full use of the space spanned by the vectors $\{ W _ { V } { \bar { s } } _ { i } \} _ { i = 1 } ^ { p }$ . This phenomenon is also noticed by [27] and is attributed to the competing optimization goals for the prefix $s _ { i }$ . The prefix both needs to grab attention through $W _ { K } s _ { i }$ and determine direction through $W _ { V } s _ { i }$ .
When the input is long relative to prefix length: we can expect the value of $\alpha$ to be small. The opposite issue arises: when each $\alpha_{i}$ is small, the contribution of the prefix term is diminished. As LLMs grow more capable and rely increasingly on long sequences arising from techniques such as chain-of-thought reasoning [35], it is understandable for the effectiveness of prefix-tuning to be severely limited. Too little significance is placed upon the prefix.
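The two regimes can be made concrete with an idealized toy model. Assume every similarity score is equal, so the softmax splits its mass purely by token count; the input's total share of attention, Equation (4)'s $(1 - \alpha)$, then reduces to a simple ratio. This uniform-score assumption is ours, purely for illustration:

```python
def input_mass(n_input, n_prefix):
    """Toy model: with all similarity scores equal, the softmax mass that
    the input tokens retain -- Eq. (4)'s (1 - alpha) -- is just a count
    ratio. Long prefixes crowd out the input; long inputs drown the prefix."""
    return n_input / (n_input + n_prefix)
```

For example, a 16-token prefix over a 4-token input leaves the input only 20% of the mass (prefix-dominated), while a 4-token prefix over a 96-token input leaves the prefix only 4% (prefix influence vanishes).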
# 5 Prefix-Tuning+: Method and Framework
# 5.1 Motivation and Construction
A key insight from Section 4.2 is that the trade-off between prefix and input importance stems from the prefix’s confinement within the attention head. This motivates Prefix-Tuning $+$ , a novel extension of PT which seeks to bring the prefix information out of the attention head itself.
We first draw the terms containing the prefix information out of the attention head by splitting Equation (3) into:
$$
o_{i}^{pt\top} = \lambda \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top}}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})} + (1 - \lambda) \frac{\sum_{j \leq p} \mathrm{sim}(q_{i}, W_{K} s_{j})\, (W_{V} s_{j})^{\top}}{\sum_{j \leq p} \mathrm{sim}(q_{i}, W_{K} s_{j})},
$$
where $\lambda \in [0, 1]$ is a constant. This replaces the softmax regularization tradeoff, which is dependent on the lengths of the input and context, with a fixed convex linear combination, similar to previous works [24, 36]. Then, we approximate the similarity metric $\mathrm{sim}(\cdot, \cdot)$ with a kernel feature map $\phi$ such that $\mathrm{sim}(\cdot, \cdot) \approx \phi(\cdot)^{\top} \phi(\cdot)$. We have
$$
o_{i}^{pt\top} = \lambda \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top}}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})} + (1 - \lambda) \frac{\phi(q_{i})^{\top} \sum_{j \leq p} \phi(W_{K} s_{j})\, (W_{V} s_{j})^{\top}}{\phi(q_{i})^{\top} \sum_{j \leq p} \phi(W_{K} s_{j})}.
$$
A similar approach is used in [4] to approximate in-context learning prompts, which has shown that the bias term $b_{1} = \sum_{j \leq p} \phi(W_{K} s_{j})(W_{V} s_{j})^{\top}$ is capable of capturing contextual prompt or prefix information. The natural generalization of this step is to replace the bias $b_{1}$ by a more expressive, trainable matrix $M$, and the analogous term $b_{2} = \sum_{j \leq p} \phi(W_{K} s_{j})$ by a trainable matrix $N$, which yields:
$$
o_{i}^{pt\top} = \lambda \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top}}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})} + (1 - \lambda) \frac{\phi(q_{i})^{\top} M}{\phi(q_{i})^{\top} N}.
$$
In practice, we make two more modifications during application. First, due to the trainable nature of $M$ and layer normalization, $\lambda$ can be absorbed into the trainable weights and is not necessary. Second, $\phi(q_{i})^{\top} N$ is no longer meaningful for regularization, so we remove it. Therefore, the final attention output of the Prefix-Tuning+ architecture has the following form:
$$
o_{i}^{pt+\top} = \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top}}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})} + \phi(q_{i})^{\top} M.
$$
Choice of feature map. Regarding the choice of $\phi$, there are several viable options which represent a tradeoff between expressivity and cost. A few from the existing literature include $\phi(x) = \mathrm{elu}(x)$ [13] and $\phi_{W}(x) = \mathrm{ReLU}(Wx + b)$ [23]. In this study, as a proof of concept, we conduct all experiments with $\phi(x) = \mathrm{elu}(x)$: it is the easiest to implement and offers a good proof of concept regarding the viability of our approach. Other choices may offer more expressiveness and better performance but would require significantly more detailed tuning, so we leave them to future work. Further details on the construction of the Prefix-Tuning+ modules are offered in the appendix.
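Under these choices, Equation (8) is a frozen causal attention head plus a trainable, query-dependent bias. A minimal NumPy sketch with $\phi = \mathrm{elu}$ follows; this is our own illustrative code, not the paper's implementation, and in practice only $M$ would receive gradients:

```python
import numpy as np

def elu(x):
    """Exponential linear unit: x for x > 0, exp(x) - 1 otherwise."""
    return np.where(x > 0, x, np.exp(x) - 1.0)

def pt_plus_head(X, W_Q, W_K, W_V, M):
    """Prefix-Tuning+ head (Eq. 8): the frozen causal attention output
    plus a trainable query-dependent bias phi(q_i)^T M outside the softmax."""
    n, d_K = X.shape[0], W_K.shape[1]
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    scores = Q @ K.T / np.sqrt(d_K)
    scores[np.triu(np.ones((n, n), dtype=bool), k=1)] = -np.inf
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V + elu(Q) @ M    # only M is trained; W_Q, W_K, W_V frozen
```

Because the bias term sits outside the softmax, its magnitude no longer competes with the input tokens for normalized attention mass, which is the point of Choice 1.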
Remark 1 (Expressiveness) By choosing $\phi_{W}(x) = \mathrm{ReLU}(Wx + b)$, the term $\phi_{W}(q_{i})^{\top} M$ becomes effectively a single-layer MLP. Depending on future choices for $\phi(\cdot)$, Prefix-Tuning+ has the ability to be extremely expressive, matching methods such as full fine-tuning and LoRA.
# 5.2 A Unified View for Context Based Methods
This section outlines the design choices and intermediate stages behind PT and $\mathrm{PT}+$ in general, offering the rationale for each to guide future implementation decisions. We first refer to Equation (4). The most elementary version of prefix-tuning is ICL, where each vector $v_{j}^{\prime}$ corresponds to an input-vocabulary token prepended to the input of the transformer. Deciding to increase expressivity at the cost of requiring training yields prompt-tuning, where the $v_{j}^{\prime}$ are replaced with trainable soft prompts. Last but not least, there is the decision to improve expressivity further at added computational and memory cost. This leads to PT, which prepends these soft prompts to the inputs of the individual attention heads of the transformer. To arrive at $\mathrm{PT}+$, two further decisions must be made:
1. Shift the prefix module out of the attention head
2. Approximate $\textstyle \sum_{j \leq p} \mathrm{sim}(\cdot, W_{K} s_{j})$ by $\phi(\cdot)^{\top} M$
Choice 1: Shifting the prefix module out of the attention head is to avoid the limitations highlighted in Section 4.2. By doing so we avoid the $\alpha$ scaling on both the input and prefixes so there is no longer the same tradeoff between input contribution and prefix significance/contribution.
Choice 2: Replacing the original similarity metric by $\phi(\cdot)^{\top} M$ shifts the output from Equation (6) to Equation (7). By doing so, we lose some of the inherent structure of the attention mechanism. In return, we gain model expressivity from the flexibility of a trainable matrix $M$. Since both PT and $\mathrm{PT}+$ can be viewed as adding query-dependent $d$-dimensional bias terms to the transformer, we compute the covariance matrices of the bias outputs from each and examine the respective eigenvalue decay. From Figure 3, we see that with $\mathrm{PT}+$, the top eigenvalues corresponding to the main principal components are large and decay slowly compared to PT. This indicates that the output bias spans many principal components rather than collapsing onto a handful of axes. In other words, Prefix-Tuning+ adds a bias from a more diverse, high-dimensional subspace. This is an intuitive proxy which indicates higher expressivity.
Figure 3: Spectrum of prefix representations (comparison of the top-50 attention output feature values).
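The eigenvalue-decay diagnostic behind Figure 3 amounts to an eigendecomposition of the covariance of the collected bias vectors. A small sketch under our own naming assumptions (the paper's exact preprocessing is not specified here):

```python
import numpy as np

def bias_spectrum(B, top_k=50):
    """Descending eigenvalues of the covariance of bias vectors B (n, d):
    a proxy for how many principal directions the added bias spans.
    Slow decay = a diverse, high-dimensional bias subspace."""
    Bc = B - B.mean(axis=0, keepdims=True)          # center the bias cloud
    cov = Bc.T @ Bc / max(B.shape[0] - 1, 1)        # sample covariance
    eig = np.linalg.eigvalsh(cov)[::-1]             # eigvalsh is ascending
    return eig[:top_k]
```

A bias that collapses onto a handful of axes produces a spectrum with only a few large eigenvalues, while a diverse bias keeps many eigenvalues comparably large.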
In Prefix-Tuning+, both choices are used in conjunction. This does not have to be the case. Users can choose to keep the prefix term within the attention head and only apply choice 2. The resulting output is expressed as:
$$
o_{i}^{pt\top} = \frac{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j})\, v_{j}^{\top} + \phi(q_{i})^{\top} M}{\sum_{j \leq i} \mathrm{sim}(q_{i}, k_{j}) + \phi(q_{i})^{\top} N}.
$$
The opposite is also true, and prefix information can be brought out of the attention head through another module. In this work we combine the two because the result is expressive, easy to implement, and a good proof of concept. However, which combination of choices yields the optimal architecture is an interesting direction for future research.
Table 1: Fine-Tuning Method Performance Comparison (Accuracy $\%$ ). Results across datasets and models; best-performing results are in boldface, highlighting the effectiveness of Prefix-Tuning $^ +$ .
Remark 2 (The Memory Perspective) We can view our method as explicitly treating the learnable matrix M as an internal memory store. Traditional context-based PEFTs, such as Prefix-Tuning, incorporate context memory by extending the KV inputs, tying the memory capacity to the prefix length. By linearizing attention and summing over the KV circuit, our approach decouples the memory capacity from sequence length and instead makes it proportional to the dimensionality of M, enabling more flexible storage of attention patterns. In practice, M allows the model to record and retrieve token interactions without altering the core attention weights by acting as an external memory module. This memory interface is both more direct and more parameter-efficient than auxiliary MLP-based memory modules, which typically require deep architectural changes, incurring higher costs.
# 6 Experiments
In this section, we evaluate Prefix-Tuning $+$ across diverse tasks, models, and training settings, focusing on rapid adaptation, IID accuracy, and OOD generalization. We also investigate the impact of attention mechanisms and extend evaluations to practical alignment scenarios.
# 6.1 Experimental Setup
Datasets. We evaluate on four generative QA tasks: BigBench [31, 32], GoEmotions [6], DBpedia [14], and Banking77 [3]. We leave the detailed description of these datasets to Appendix A.1.
Training and Evaluation Protocol. We assess each method's ability to quickly adapt to downstream tasks in a few-shot setting by fine-tuning on up to five independent rounds of minimal data. In each round, we randomly sample one example per class (6 examples for BIG-bench, 28 for GoEmotions, and 14 for DBpedia) to form the entire training set. After fine-tuning, we report in-distribution (IID) accuracy on each dataset's standard test split, averaging results over the five rounds to mitigate sampling variability. Since the ability to quickly adapt to new tasks often comes at the cost of generalization, we also evaluate out-of-distribution (OOD) performance using the Banking77 intent-classification dataset without additional fine-tuning. During inference, models receive a multiple-choice prompt listing all 77 Banking77 intents and must select the most appropriate label for each query. OOD accuracy is computed as the proportion of test queries correctly classified, measuring how effectively learned features generalize to unseen domains. We perform this evaluation independently for each of the five models fine-tuned on different source datasets.
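The per-round sampling step of this protocol (one example per class) can be sketched as follows; the function name and the `(text, label)` tuple format are illustrative assumptions, not the paper's code:

```python
import random
from collections import defaultdict

def sample_one_per_class(examples, seed=0):
    """One few-shot round: draw a single training example per class label
    (e.g. 6 examples for BIG-bench's 6 classes, 28 for GoEmotions)."""
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    rng = random.Random(seed)   # seed per round for reproducibility
    return [rng.choice(group) for group in by_label.values()]
```

Running this with five different seeds yields the five independent training rounds whose results are averaged.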
Models and Training Configuration. We experiment with two pre-trained language models to assess architectural effects: LLaMA2-7B-Chat and Qwen2.5-3B-Instruct. The LLaMA2 series employs multi-head attention (MHA) [34], while Qwen2.5 uses grouped-query attention (GQA) [1]. GQA ties together query heads by sharing key/value projections, offering faster inference and lower memory usage, which allows us to examine whether such architectural differences impact adaptation efficacy. Both models are used in their instruction-tuned versions in order to test OOD performance. We fine-tune these models using the AdamW [20] optimizer with a small learning rate and a fixed number of training steps (4000 steps). All methods use the same small batch size (batch size 2).
Baselines. We compare Prefix-Tuning+ against several baseline approaches for adapting large language models, covering both parameter-efficient and traditional full fine-tuning, as well as a training-free prompt-based baseline:
• Full Fine-Tuning: All model parameters are fine-tuned on the minimal training set for each round. This represents the conventional approach where all model weights are updated.
Figure 4: Pareto plots illustrating the trade-off between IID performance (on Bigbench) and OOD performance (on Banking77) for checkpoints of LLaMA2 and Qwen2.5 during training.
• Low-rank adaptation (LoRA [11]): LoRA freezes original model parameters and introduces trainable low-rank update matrices into each Transformer layer. Only these small rank-$r$ matrices are learned, substantially reducing the number of trainable parameters. We set $r = 64$ to approximately match the parameter count introduced by Prefix-Tuning+.
• Prefix-Tuning (PT [16]): Standard prefix-tuning keeps all model weights fixed, learning only a continuous prefix vector that is prepended to the input at each Transformer layer. We follow the original implementation and set the prefix length $m = 32$.
• In-Context Learning (ICL [2]): Unlike the previous methods, ICL involves no parameter updates. Instead, the training examples are directly provided as demonstrations in the context at inference.
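For reference, the LoRA baseline's forward pass is a frozen linear map plus a scaled low-rank correction. A hedged NumPy sketch (the `alpha` scaling convention follows the common LoRA formulation, but the function and argument names are our own):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """LoRA layer: y = x W + (alpha / r) * x A B, where the pretrained
    weight W (d, d_out) is frozen and only the low-rank factors
    A (d, r) and B (r, d_out) are trained; B starts at zero so the
    adapted layer initially equals the base layer."""
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B
```

With $r = 64$, the adapter adds $r(d + d_{out})$ parameters per adapted matrix, which is how the parameter count is matched to Prefix-Tuning+ above.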
# 6.2 Supervised Fine-Tuning Performance Across Tasks
PEFT techniques aim to rapidly adapt large pre-trained language models (LLMs) to downstream tasks by updating a limited number of parameters. To study the effectiveness and adaptability of our proposed Prefix-Tuning+ across diverse classification scenarios, we conduct experiments on several tasks with the five-round data setting. We summarize the accuracy results in Table 1, comparing Prefix-Tuning+ with various baseline approaches across the evaluated datasets. Prefix-Tuning+ consistently demonstrates superior or highly competitive performance compared to all baseline methods. Specifically, on BIG-bench, Prefix-Tuning+ achieves an accuracy of $71.2\%$ with LLaMA2-7B-Chat and $76.6\%$ with Qwen2.5-3B-Instruct, significantly outperforming LoRA, Prefix-Tuning, and full fine-tuning. On DBpedia, Prefix-Tuning+ also achieves top results ($92.7\%$ for LLaMA2, $96.9\%$ for Qwen2.5), matching or exceeding the performance of the strongest baselines. For GoEmotions, Prefix-Tuning+ remains robust, reaching $45.2\%$ accuracy with LLaMA2-7B-Chat and achieving a competitive $37.3\%$ with Qwen2.5-3B-Instruct. These outcomes underscore Prefix-Tuning+'s capability to effectively generalize and perform across varied classification tasks and model architectures.
# 6.3 Balancing In-Distribution Accuracy and Out-of-Distribution Generalization
An inherent IID-OOD performance trade-off typically emerges when models are trained to optimize for specific downstream tasks. In this section, we aim to study the robustness of various fine-tuning approaches in effectively balancing IID performance with OOD resilience. Specifically, we examine the performance of the LLaMA2-7B-Chat and Qwen2.5-3B-Instruct models trained on three datasets (BigBench, GoEmotions, and DBpedia). IID performance is measured directly on the hold-out part of those datasets, while OOD performance is evaluated using the Banking77 dataset. To provide a clear visualization, we present Pareto plots that depict the trade-off between IID (x-axis) and OOD (y-axis) performance. Each point on these plots represents performance throughout training (from checkpoints saved at different steps), with points of the same color corresponding to checkpoints from the same fine-tuning approach. The results of the two models on BigBench are shown in Figure 4. These plots clearly demonstrate the performance trade-offs and highlight the differences in how each model generalizes from IID conditions to OOD scenarios. Notably, our proposed method consistently appears on the Pareto front, indicating that it achieves an optimal balance between IID and OOD performance. We leave results on more datasets to Appendix A.2.
Figure 4: IID accuracy on BigBench vs. OOD accuracy for LLaMA and Qwen (Pareto plots).
Figure 5: Performance over five incremental rounds of training data on BigBench. Prefix-Tuning+ consistently matches or exceeds baselines, with the largest gains observed on Qwen2.5-3B-Instruct.
# 6.4 Performance Across Varying Data Sizes and Attention Mechanisms
To evaluate how effectively Prefix-Tuning+ scales with training set size and different attention mechanisms, we conducted experiments on the BigBench dataset, incrementally increasing the dataset size over five rounds. We fine-tuned two distinct models—LLaMA-2-7B-Chat with standard attention and Qwen2.5-3B-Instruct with grouped-query attention (GQA)—using Prefix-Tuning+, Prefix-Tuning, LoRA, and full-parameter fine-tuning. Figure 5 illustrates the average performance across these rounds. Our analysis highlights two points: first, Prefix-Tuning+ maintains strong and consistent performance across data scales and attention mechanisms, matching or surpassing all baseline methods. Second, Prefix-Tuning+ shows particularly notable improvements when combined with GQA, outperforming both LoRA and full-parameter fine-tuning. These results indicate that Prefix-Tuning+ is effective when paired with the widely adopted GQA mechanism, yielding superior performance compared to existing approaches. Additional results can be found in Appendix A.3.
# 6.5 Practical Alignment Tasks across Larger Datasets and Diverse Optimization Objectives
To study the effectiveness of our proposed Prefix-Tuning+ beyond generative text classification, we performed experiments aimed at aligning large language models (LLMs) more closely with human values and intentions. Specifically, we evaluated how well Prefix-Tuning+ performs when integrated with different preference optimization strategies.

We employed the Qwen2.5-3B model optimized with Prefix-Tuning+ and compared its performance against LoRA, using three different training approaches: supervised fine-tuning (SFT) [25] on the Magpie-Ultra v0.1 dataset [37], and two preference-based methods—Direct Preference Optimization (DPO) [29] and Simple Preference Optimization (SimPO) [22]—using the binarized UltraFeedback dataset [5]. For each training method, we used a consistent dataset size of 10,000 samples to ensure fairness and comparability of results. Following training, we evaluated the models using AlpacaEval 2 [17], a standardized benchmark for alignment tasks. All experiments were implemented and executed using the LLaMAFactory framework [41]. Table 2 summarizes the improvement in win rates achieved by each method. Prefix-Tuning+ consistently delivered higher win-rate increases than LoRA across all training objectives, highlighting its robustness and versatility. Its advantage was particularly pronounced in the preference-based settings (DPO and SimPO), where it notably outperformed LoRA. Interestingly, our experiments revealed a slight but consistent advantage of DPO over SimPO, contrary to prior findings [22]. We hypothesize that SimPO's comparatively weaker performance in our setup may stem from its sensitivity to hyperparameter configurations [30].

Table 2: Performance improvements of Prefix-Tuning+ over LoRA on alignment tasks using SFT, DPO, and SimPO objectives (evaluated with AlpacaEval 2).
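For reference, the win rate reported by AlpacaEval-style evaluation is simply the fraction of pairwise judgments in which the tuned model's response is preferred over the reference; a minimal sketch with hypothetical judgments:

```python
def win_rate(preferences):
    """Fraction of pairwise judgments won by the evaluated model.

    preferences: list of booleans, True when the judge preferred the
    evaluated model's response over the reference response.
    """
    return sum(preferences) / len(preferences)

# Four hypothetical pairwise judgments (not real AlpacaEval 2 outputs).
judged = [True, True, False, True]
print(win_rate(judged))  # 0.75
```

The improvements in Table 2 are differences between two such win rates (method vs. LoRA) under the same judge.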
# 7 Discussion
To conclude, in this work we argue that Prefix-Tuning has been ineffective when applied to modern large language models because prefixes are "trapped" within the attention head. To remedy this, we introduce a novel architecture that generalizes existing Prefix-Tuning methods by approximating the prefix module and shifting it out of the attention head. Surprisingly, even with this somewhat naive implementation, our model matches state-of-the-art methods such as LoRA on popular benchmarks in a few-shot setting, far outpacing previous prefix-tuning methods. We treat this as proof of concept that, if approached correctly, Prefix-Tuning methods can be competitive and remain an exciting avenue for future research.
We also acknowledge the limitations of our work. Rather than presenting a clear alternative to existing PEFTs, Prefix-Tuning+ is primarily a proof of concept. The design of our method has yet to be thoroughly ablated. For instance, this line of work could potentially be improved with a more powerful choice of feature map $\phi$, such as a learnable one. Further studies are needed to test the limits of our method on more tasks and with more training objectives.
References
[1] Joshua Ainslie, James Lee-Thorp, Michiel De Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245, 2023.
[2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[3] Iñigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. Efficient intent detection with dual sentence encoders, 2020. URL https://arxiv.org/abs/2003.04807.
[4] Brian K Chen, Tianyang Hu, Hui Jin, Hwee Kuan Lee, and Kenji Kawaguchi. Exact conversion of in-context learning to model weights in linearized-attention transformers. International Conference on Machine Learning, 2024.
[5] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Ultrafeedback: Boosting language models with high-quality feedback, 2023.
[6] Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. Goemotions: A dataset of fine-grained emotions. arXiv preprint arXiv:2005.00547, 2020.
[7] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36, 2024.
[8] Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 5(3):220–235, 2023.
[9] Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, and Sai Qian Zhang. Parameter-efficient fine-tuning for large models: A comprehensive survey. arXiv preprint arXiv:2403.14608, 2024.
[10] Soufiane Hayou, Nikhil Ghosh, and Bin Yu. Lora+: Efficient low rank adaptation of large models, 2024. URL https://arxiv.org/abs/2402.12354.
[11] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[12] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. URL https://arxiv.org/abs/2001.08361.
[13] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International conference on machine learning, pages 5156–5165. PMLR, 2020.
[14] Fanshuang Kong, Richong Zhang, Zhijie Nie, and Ziqiao Wang. Rethink the evaluation protocol of model merging on classification task. arXiv preprint arXiv:2412.13526, 2024.
[15] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
[16] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.
[17] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 5 2023.
[18] Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602, 2021.
[19] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. Gpt understands, too. AI Open, 5:208–215, 2024.
[20] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[21] Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. Peft: State-of-the-art parameter-efficient fine-tuning methods. https: //github.com/huggingface/peft, 2022.
[22] Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems, 37:124198–124235, 2024.
[23] Jean Mercat, Igor Vasiljevic, Sedrick Scott Keh, Kushal Arora, Achal Dave, Adrien Gaidon, and Thomas Kollar. Linearizing large language models. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=soGxskHGox.
[24] Tsendsuren Munkhdalai, Manaal Faruqui, and Siddharth Gopal. Leave no context behind: Efficient infinite context transformers with infini-attention, 2024. URL https://arxiv.org/ abs/2404.07143.
[25] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022.
[26] Yawen Ouyang, Yongchang Cao, Yuan Gao, Zhen Wu, Jianbing Zhang, and Xinyu Dai. On prefix-tuning for lightweight out-of-distribution detection. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1533–1545, 2023.
[27] Aleksandar Petrov, Philip HS Torr, and Adel Bibi. When do prompting and prefix-tuning work? a theory of capabilities and limitations. arXiv preprint arXiv:2310.19698, 2023.
[28] Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. Scaling language models: Methods, analysis and insights from training gopher, 2022. URL https://arxiv.org/abs/2112.11446.
[29] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.
[30] schrieffer-z. Can't reproduce AE-LC numbers in hf ckpt (Llama-3-8b-SFT-DPO, Llama-3-8b-SFT-SimPO). GitHub issue #77, princeton-nlp/SimPO repository, https://github.com/princeton-nlp/SimPO/issues/77, December 2024.
[31] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
[32] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.
[33] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. URL https://arxiv.org/abs/2201.11903.
[36] Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers, 2022. URL https://arxiv.org/abs/2203.08913.
[37] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, and Bill Yuchen Lin. Magpie: Alignment data synthesis from scratch by prompting aligned llms with nothing, 2024.
[38] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[39] Jie Zhang, Dongrui Liu, Chen Qian, Linfeng Zhang, Yong Liu, Yu Qiao, and Jing Shao. REEF: Representation encoding fingerprints for large language models. In The Thirteenth International Conference on Learning Representations, 2025.
[40] Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian. Galore: Memory-efficient llm training by gradient low-rank projection, 2024. URL https://arxiv.org/abs/2403.03507.
[41] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand, 2024. Association for Computational Linguistics. URL http://arxiv.org/abs/2403.13372.
# A Appendix: More Experiment Results
# A.1 Datasets
We use four generative classification datasets:
• (1) BigBench [31, 32]: A comprehensive evaluation suite consisting of 23 challenging tasks. We focus on the Date Understanding task, formulated as a 6-class QA problem in which the model must choose one of six answer categories. For simplicity, we refer to this setting as BigBench.
• (2) GoEmotions [6]: A fine-grained emotion classification dataset containing 58K Reddit comments labeled with 27 emotion categories plus neutral (28 classes total). As the largest human-annotated English emotion dataset, GoEmotions covers a broad taxonomy of emotions. We cast this as a generative QA task: the model reads a comment and generates the corresponding emotion label.
• (3) DBpedia [14]: A widely used ontology classification dataset consisting of Wikipedia abstracts assigned to 14 top-level classes. We formulate this as a generative QA task where the model must output the correct class name given an abstract.
• (4) Banking77 [3]: A challenging intent classification dataset designed for conversational systems, consisting of 13,083 customer service queries annotated across 77 categories. We formulate this as a generative QA task where the model must generate the correct label given a customer query.
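As a sketch of how a classification example can be cast as a generative QA instance and scored, the snippet below builds a prompt and applies exact-match scoring to the generated label; the template wording and the truncated label subset are illustrative assumptions, not the exact prompts used in our experiments.

```python
# Illustrative generative-classification prompt and exact-match scoring.
LABELS = ["admiration", "amusement", "anger", "neutral"]  # truncated label set

def build_prompt(comment, labels):
    """Format a classification example as a generative QA prompt."""
    options = ", ".join(labels)
    return (f"Classify the emotion of the following comment.\n"
            f"Comment: {comment}\n"
            f"Options: {options}\n"
            f"Answer:")

def exact_match(generated, gold):
    """Score the generated label string against the gold label."""
    return generated.strip().lower() == gold.strip().lower()

prompt = build_prompt("This made my day!", LABELS)
print(exact_match(" Amusement ", "amusement"))  # True
```

Accuracy over a test split is then the mean of `exact_match` over all examples.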
# A.2 In-Distribution Accuracy and Out-of-Distribution Generalization
In this appendix, we provide additional Pareto plots to complement the analysis presented in the experiments section. Specifically, Figures 6 and 7 illustrate the trade-offs between in-distribution (IID) and out-of-distribution (OOD) performance for fine-tuned LLaMA2-7B-Chat and Qwen2.5-3B-Instruct models across two additional datasets: GoEmotions and DBpedia.
Each plot shows the IID accuracy (x-axis) evaluated directly on the respective dataset's held-out test set, and the OOD accuracy (y-axis) evaluated on the Banking77 dataset without further fine-tuning. Points within each plot represent model checkpoints captured at different training intervals, with colors indicating the respective fine-tuning methods.
Consistent with our observations in the main text, the proposed method frequently occupies positions near the Pareto front. This indicates its effectiveness in maintaining a balanced performance between achieving high accuracy on IID tasks and exhibiting strong generalization to OOD scenarios.
Figure 6: Pareto plots illustrating the trade-off between IID performance (on GoEmotions) and OOD performance (on Banking77) for checkpoints of LLaMA2-7B-Chat and Qwen2.5-3B-Instruct during training.
Figure 7: Pareto plots illustrating the trade-off between IID performance (on DBPedia) and OOD performance (on Banking77) for checkpoints of LLaMA2-7B-Chat and Qwen2.5-3B-Instruct during training.
# A.3 Performance Across Varying Data Sizes and Attention Mechanisms
To further validate the robustness and adaptability of Prefix-Tuning+ across different tasks and attention mechanisms, we provide additional experimental results on two more datasets: GoEmotions and DBpedia. As in the main experiment, we incrementally increased the training set size across five rounds, fine-tuning two models—LLaMA-2-7B-Chat (multi-head attention, MHA) and Qwen2.5-3B-Instruct (grouped-query attention, GQA)—using Prefix-Tuning+, Prefix-Tuning, LoRA, full-parameter fine-tuning, and the in-context learning (ICL) baseline. Figures 8 and 9 illustrate the results on these additional datasets. Overall, these supplementary results reinforce our primary finding that Prefix-Tuning+ scales effectively with data size and adapts particularly well to the grouped-query attention mechanism, outperforming existing parameter-efficient methods.
Figure 8: Performance comparison over five incremental rounds of training data on GoEmotions.
# B Appendix: Verification Experiment Setup
To better understand how different methods affect model behavior, we design three comprehension-oriented experiments that focus on analyzing attention patterns and internal representations. These experiments aim to shed light on the mechanisms and effects of each approach. For consistency and comparability, we use the GoEmotions dataset as the in-distribution (IID) dataset and the Banking77 dataset as the out-of-distribution (OOD) dataset across all experiments. The following subsections detail the setup of each experiment.
Figure 9: Performance comparison over five incremental rounds of training data on DBpedia dataset.
# B.1 Spectrum Analysis of Prefix Representations
In this experiment, we use Qwen2.5-3B-Instruct as the base model. We fine-tune two variants—prefix-tuning (with a prefix length of 32) and prefix-tuning+—on the GoEmotions dataset using identical training configurations and a consistent sampling strategy (5 rounds).
Let $F_b \in \mathbb{R}^{n \times d}$ denote the base model's final-layer attention outputs for $n$ input tokens in total, with representation dimension $d$, and let $F_t \in \mathbb{R}^{n \times d}$ denote the corresponding fine-tuned model outputs. The representation effect (bias) matrix is computed as:
$$
\Delta F = F _ { t } - F _ { b }
$$
After normalization, we perform eigenvalue decomposition on the covariance matrix of representation effects:
$$
\Sigma = \frac { 1 } { n - 1 } \Delta F ^ { \top } \Delta F = V \Lambda V ^ { \top }
$$
where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_d)$ contains the eigenvalues ($\lambda_1 \geq \ldots \geq \lambda_d$), and $V$ is the orthogonal eigenvector matrix.
We concatenate examples from the GoEmotions test split into the input sequences and extract the self_attn.attn_output from the final layer. We then compute the corresponding attention-output bias for the two fine-tuned variants, analyze their eigenvalue spectra, and visualize the top 50 eigenvalues to quantify how prefix-tuning and our method alter the representation-space geometry.
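The spectrum analysis above can be sketched in a few lines of NumPy; here random matrices stand in for the extracted attention outputs, so only the computation, not the data, mirrors our setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 64                                  # tokens x hidden dim (toy sizes)
F_b = rng.standard_normal((n, d))               # stand-in base-model outputs
F_t = F_b + 0.1 * rng.standard_normal((n, d))   # stand-in fine-tuned outputs

delta = F_t - F_b                                        # ΔF = F_t - F_b
delta = (delta - delta.mean(0)) / (delta.std(0) + 1e-8)  # per-dimension normalization
cov = delta.T @ delta / (n - 1)                          # Σ = ΔFᵀΔF / (n - 1)

# eigvalsh returns ascending eigenvalues for symmetric matrices; reverse them.
eigvals = np.linalg.eigvalsh(cov)[::-1]
top50 = eigvals[:50]          # spectrum to plot, as in the analysis above
print(top50.shape)
```

Plotting `top50` for each variant gives the eigenvalue spectra compared in the experiment.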
# B.2 Attention Pattern Visualization
This experiment examines how different fine-tuning methods affect attention behavior. We use LLaMA2-7B-Chat and Qwen2.5-3B-Instruct as base models, and fine-tune their respective prefix-tuning and prefix-tuning+ variants using the same data and settings as in the previous experiment. We select one example each from the IID (GoEmotions) and OOD (Banking77) datasets as test inputs. For each model, we extract the self.attn.attn_weight from the final layer and visualize it as a heatmap to reveal attention patterns. For the prefix-tuning variants, we isolate the attention weights corresponding only to real tokens (excluding prefix tokens), normalize them, and then produce the heatmap visualization.
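A minimal sketch of the prefix-excluding normalization step, assuming the attention map stores prefix keys in the leading columns (an assumption about the cache layout for illustration, not a detail of our implementation):

```python
import numpy as np

def real_token_attention(attn, n_prefix):
    """Drop prefix-token columns from an attention map and renormalize rows.

    attn: (seq, seq + n_prefix) attention weights averaged over heads,
    with the prefix keys assumed to occupy the first n_prefix columns.
    """
    real = attn[:, n_prefix:]                      # keep real-token keys only
    return real / real.sum(axis=1, keepdims=True)  # rows sum to 1 again

rng = np.random.default_rng(1)
raw = rng.random((8, 8 + 32))            # 8 real tokens, 32 prefix tokens
heat = real_token_attention(raw, 32)     # matrix passed to the heatmap plot
print(np.allclose(heat.sum(axis=1), 1.0))  # True
```

The resulting matrix is what the heatmap visualizes for the prefix-tuned variants.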
# B.3 Representation Similarity via CKA
Inspired by the REEF framework [39], which utilizes centered kernel alignment (CKA) to quantify representation-level differences, we evaluate the similarity between base and fine-tuned models. The CKA similarity between two sets of representations $X$ (base model) and $Y$ (fine-tuned model) is computed as:
$$
\operatorname { C K A } ( X , Y ) = { \frac { \operatorname { H S I C } ( X , Y ) } { { \sqrt { \operatorname { H S I C } ( X , X ) \cdot \operatorname { H S I C } ( Y , Y ) } } } } ,
$$
where the Hilbert-Schmidt Independence Criterion (HSIC) is defined as:
$$
\mathrm { H S I C } ( X , Y ) = \frac { 1 } { ( m - 1 ) ^ { 2 } } \mathrm { t r } ( K _ { X } H K _ { Y } H ) .
$$
Here, $H = I - \frac{1}{m}\mathbf{1}\mathbf{1}^{\top}$ is the centering matrix, and $K_X, K_Y$ are Gram matrices with $(K_X)_{ij} = k(X_i, X_j)$ and $(K_Y)_{ij} = k(Y_i, Y_j)$, where $k$ is a kernel function (we use a linear kernel in our experiments). $X_i$ denotes the $i$-th representation vector from the layer outputs.
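A direct NumPy implementation of the linear-kernel CKA defined above, applied here to random stand-in representations rather than actual model activations:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear-kernel CKA between two (m, d) representation matrices."""
    m = X.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m            # centering matrix
    Kx, Ky = X @ X.T, Y @ Y.T                      # linear Gram matrices
    hsic = lambda Ka, Kb: np.trace(Ka @ H @ Kb @ H) / (m - 1) ** 2
    return hsic(Kx, Ky) / np.sqrt(hsic(Kx, Kx) * hsic(Ky, Ky))

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 16))
print(round(linear_cka(X, X), 6))   # 1.0 for identical representations
```

In our experiments, `X` and `Y` would be the base and fine-tuned models' layer-18 decoder representations over the same inputs.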
We use Qwen2.5-3B-Instruct as the base model and obtain its prefix-tuning and prefix-tuning+ variants using the same training data and setup. The TruthfulQA dataset is used for evaluation. Following the sampling and CKA computation protocol from the REEF paper, we extract decoder representations from the $18^{\mathrm{th}}$ layer of each model and compute the CKA similarity with the base model. This allows us to quantitatively assess how each method alters the internal representations while controlling for computational variance.
Table 3: CKA Similarity Between Different Methods And Base Model
As shown in Table 3, we present the CKA similarity between the base model and models fine-tuned with three PEFT methods: LoRA, prefix-tuning+, and prefix-tuning. It is evident that prefix-tuning+ and LoRA exhibit notably different effects on the model's internal representations. Our proposed prefix-tuning+ method induces more substantial shifts in the model's representation space, indicating a stronger impact on the model's expressive capacity. Prefix-tuning, on the other hand, causes significant changes in the attention patterns, which in turn lead to much larger representation shifts; this may partly explain its relatively weaker downstream performance.
Table 4: CKA Similarity Between Prefix Tuning And Base Model
As shown in Table 4, we further examine how the prefix length affects the representation similarity between the prefix fine-tuned model and the base model under the same dataset and training settings. It is clear that as the prefix length increases from 16 to 64, the model’s internal representations deviate more significantly from those of the base model, indicating that longer prefixes introduce more substantial changes in representation space.
In our experiments, since both Prefix-Tuning and Prefix-Tuning+ only modify parameters within the self-attention mechanism—without affecting other components of the decoder layers—the resulting changes in representations can be regarded as a close approximation of changes in the attention pattern.
# C Appendix: Implementation Details
We implemented our experiments using PyTorch and trained our models utilizing the DeepSpeed optimization library with ZeRO Stage 3 to efficiently manage memory usage during training. To further optimize memory and computational efficiency, we offloaded both optimizer states and model parameters to CPU with pinned memory enabled, facilitating faster data transfers. Gradient communication and computation were overlapped, and contiguous gradients were enforced to enhance training throughput.
The AdamW optimizer was employed with a weight decay of 0.1, momentum terms $\beta_1 = 0.9$ and $\beta_2 = 0.95$, and an epsilon of $1 \times 10^{-8}$. Training used automatic precision selection between FP16 and BF16 modes for an optimal balance between performance and stability. The learning rate was held constant at $2 \times 10^{-5}$ throughout training. Each GPU processed a micro-batch of one sample per step, while gradient accumulation was automatically managed to simulate larger effective batch sizes. Gradient clipping was automatically controlled by DeepSpeed to maintain stable training dynamics.
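For readers who want to reproduce this setup, the settings above roughly correspond to a DeepSpeed configuration like the following sketch; the field names follow DeepSpeed's JSON schema, but treat the exact keys as illustrative rather than our verbatim config file.

```python
# Sketch of a DeepSpeed-style config matching the reported settings:
# ZeRO Stage 3 with CPU offload and pinned memory, AdamW, constant LR.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,            # overlap gradient communication
        "contiguous_gradients": True,    # enforce contiguous gradients
    },
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5, "betas": [0.9, 0.95],
                   "eps": 1e-8, "weight_decay": 0.1},
    },
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": "auto"},         # auto FP16/BF16 selection
}
print(ds_config["optimizer"]["params"]["lr"])  # 2e-05
```

Such a dictionary would be passed to the DeepSpeed engine at initialization.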
For supervised fine-tuning (SFT) experiments, training was conducted using 2 GPUs, whereas human preference alignment experiments utilized 8 GPUs.
# D Appendix: Limitation
Despite the promising results demonstrated by Prefix-Tuning+, several areas remain open for exploration. First, our implementation uses a kernel approximation for simulating attention, specifically the exponential linear unit (ELU). While this choice enabled efficient experimentation and a clear proof-of-concept demonstration, other feature mappings or kernel functions could potentially yield improved performance. Exploring more sophisticated kernel approximations or trainable kernel designs remains an exciting avenue for further enhancing expressivity and effectiveness. Second, although Prefix-Tuning+ effectively addresses the trade-off between prefix length and input specificity within attention heads, our experiments did not extensively explore the effects of varying the internal dimensionalities or architectures of the externalized prefix module. Further studies investigating these architectural choices could unlock additional performance gains. Lastly, our evaluations were conducted primarily in supervised fine-tuning (SFT) and human alignment scenarios. Extending evaluations to contexts with abundant data would provide deeper insight into Prefix-Tuning+'s capacity to acquire new knowledge. However, due to computational resource constraints at our institution, such comprehensive studies were beyond our current capabilities; we leave extensive evaluations to future research.
# E Appendix: Broader Impacts
The introduction of Prefix-Tuning+ offers significant positive impacts by making large language model (LLM) adaptation more efficient and accessible, enabling broader participation in AI research and application, particularly for resource-constrained communities and organizations. By reducing computational requirements, Prefix-Tuning+ also contributes to sustainability efforts in AI development. On the other hand, the enhanced ease of adapting powerful LLMs carries risks, such as potential misuse in generating misinformation or biased content. Researchers and practitioners should incorporate ethical practices, robust monitoring, and mitigation strategies to address these risks, ensuring that the societal benefits of Prefix-Tuning+ significantly outweigh its potential negative impacts.
# F Licenses
We use standard licenses from the community. We include the following licenses for the codes, datasets and models we used in this paper.
Datasets & Benchmarks:
• BigBench [31]: MIT
• GoEmotions [6]: Apache License 2.0
• DBpedia [14]: Creative Commons 3.0
• Banking77 [3]: MIT
Codes:
• LLaMA-Factory [41]: Apache License 2.0
• Alpaca-eval [17]: Apache License 2.0
Models:
• Qwen2.5-3B-Instruct [38]: Apache License 2.0
• LLaMA2-7B-Chat [33]: LLaMA2 Community License

Abstract: Parameter-Efficient Fine-Tuning (PEFT) methods have become crucial for
rapidly adapting large language models (LLMs) to downstream tasks.
Prefix-Tuning, an early and effective PEFT technique, demonstrated the ability
to achieve performance comparable to full fine-tuning with significantly
reduced computational and memory overhead. However, despite its earlier
success, its effectiveness in training modern state-of-the-art LLMs has been
very limited. In this work, we demonstrate empirically that Prefix-Tuning
underperforms on LLMs because of an inherent tradeoff between input and prefix
significance within the attention head. This motivates us to introduce
Prefix-Tuning+, a novel architecture that generalizes the principles of
Prefix-Tuning while addressing its shortcomings by shifting the prefix module
out of the attention head itself. We further provide an overview of our
construction process to guide future users when constructing their own
context-based methods. Our experiments show that, across a diverse set of
benchmarks, Prefix-Tuning+ consistently outperforms existing Prefix-Tuning
methods. Notably, it achieves performance on par with the widely adopted LoRA
method on several general benchmarks, highlighting the potential modern
extension of Prefix-Tuning approaches. Our findings suggest that by overcoming
its inherent limitations, Prefix-Tuning can remain a competitive and relevant
research direction in the landscape of parameter-efficient LLM adaptation.
# I. INTRODUCTION
Code review is a crucial software development practice that enhances code quality, facilitates knowledge sharing, and detects defects [11], [31]. Formal inspections, a longstanding form of code review [18], [21], require practitioners to examine and modify code changes before they are merged into production [19].
Code review relies on manual effort and a practitioner's expertise [45]; it is labor-intensive and often strains practitioners' time allocation [26], [31]. Modern industry practice addresses this by adopting a lightweight, tool-based, informal process known as modern code review [10], [19]. Given these challenges, automation can offer significant time savings and prevent shallow reviews. Recent AI advancements have led to several AI-assisted code review tools [6], [2], [1] powered by large language models (LLMs) that reduce manual effort and streamline project schedules.
However, empirical analysis of the quality of their reviews is crucial for determining their reliability and accuracy.
Our study aims to illuminate LLM capabilities in code review. Reviewers first assess the correctness of the code in a change request and then suggest improvements if issues arise. We developed a methodology around these two aspects and aim to answer the following research questions.
RQ1: How accurately can large language models (LLMs) evaluate code changes for approval or rejection?
RQ2: How effective are the code improvement suggestions generated by large language models (LLMs) in improving code correctness?
We developed a setup that uses code blocks along with their unit tests. The setup prompts the LLM to assess code correctness and suggest improvements for each code block. We evaluate the LLM’s assessment against unit test results and test its suggestions with the same unit tests. A suggestion is a correction if the new code passes all unit tests. In code reviews, authors often include comments. To reflect this, we added the problem description to some prompts and omitted it from others, then tested both and reported the results.
Our experiments led to several key findings. Firstly, the results indicated that LLMs would be unreliable in a fully automated code review environment. Secondly, incorporating problem descriptions into prompts consistently improved performance, highlighting the importance of code comments and pull request descriptions. Finally, our results varied across different datasets. This underlined the need for custom testing tailored to the target codebase. We shared our experiment setup and source code to support practitioners 1.
Based on our findings, we propose a process that incorporates human oversight instead of relying solely on complete automation: ”Human-in-the-loop LLM Code Review.” The process involves LLMs reviewing all change requests, while a human ”Review Responsible” decides whether a human review is also needed. This process would resolve reliability issues and decrease the need for manual effort. It also allows for knowledge sharing, an essential aspiration for code review [31]. The process can be tailored to organizational needs. Our suggested process and experiment setup enable practitioners to implement their own LLM-assisted code review processes effectively.
In Section II, the reader can find background information on the code review process and LLMs alongside related work. In Section III, we describe our methodology. In Section IV, readers can find results from our experiments. The results are discussed in Section V, and future research directions are given. In Section VI, we list threats to validity. In Section VII, we conclude our study.
# II. BACKGROUND
# A. Code Review
Developers widely use code reviews to inspect changes before integration [19]. Formal inspections, introduced by Fagan in 1976 [21], boosted productivity and quality but were often too time-consuming for universal adoption, depending on organizational context [45]. By 2013, Modern Code Review (MCR) emerged as a lightweight, informal, tool-based alternative [10].
MCR is now prevalent in companies like Google [43], AMD [42], and Microsoft [42], as well as in OSS (open-source software) [12]. A 2018 Google study [43] highlighted its importance for codebase understanding, integrity, readability, and consistency. While review coverage significantly impacts quality, it’s not the sole factor, and poor reviews can harm software quality [32].
However, code review is time-consuming. A 2013 survey found OSS participants spent an average of 6.4 hours per week on reviews [13]. Similarly, a 2018 Microsoft study identified timely feedback and time management as major challenges for developers [31].
# B. Large Language Models (LLMs)
Natural language processing has advanced significantly in recent years. In 2014, sequence-to-sequence models built on Long Short-Term Memory (LSTM) networks by Sutskever et al. [46] and recurrent neural network (RNN) encoder-decoders by Cho et al. [17] led to models that handle sequential data (e.g., text) more effectively than previous approaches. The Transformer architecture, introduced by Vaswani et al. [53] in 2017, further advanced the field with self-attention mechanisms that assess word importance without relying on their distance. Google introduced BERT [20] in 2018, shifting the field toward pre-trained models. BERT and successors like GPT [37], RoBERTa [30], and T5 [39] demonstrated that models pre-trained on large text corpora can be fine-tuned for specific tasks with minimal additional data.
OpenAI’s GPT-2 [38] (2019) and GPT-3 [14] (2020) showcased LLM versatility. In 2021, Codex [16], fine-tuned for programming, led to GitHub Copilot [1]. ChatGPT [35] and GPT-4 [7] further boosted LLM use in programming. In March 2024, Anthropic AI introduced the Claude 3 family [5], with Claude 3 Opus outperforming state-of-the-art LLMs. In May 2024, OpenAI unveiled GPT-4o [3], a faster, cross-modal variant of GPT-4 that excels across benchmarks. In December
2024, Google introduced the Gemini 2.0 family, outperforming their models of the previous generation [4].
# C. Related Work
Automating code reviews is motivated by evidence of the process’s time-consuming nature [13], [31]. Most efforts have focused on recommending suitable reviewers [8], [9], [25], [34], [36], [40], [49], [56].
To improve efficiency, the review process itself can be automated as a code-to-comment task [52]. In 2018, Gupta and Sundaresan [23] introduced a deep learning model that matched code blocks with historical reviews. In 2019, Li et al. [27] proposed a CNN-based model to predict change approval, while Shi et al. [44] used a CNN-LSTM framework for the same purpose. In 2022, Hong et al. [24] presented CommentFinder, which leverages information retrieval to suggest code comments, and Li et al. [28] introduced AUGER, a system that automatically generates review comments using a pre-trained model. Tufano et al. [51] employed a T5 model [39] for code review automation. Thongtanunam et al. [48] developed a model to modify source code automatically during reviews to reduce manual effort, while a tool with the same purpose was found useful at Google [22]. Li et al. [29] further explored automation via large-scale pre-training on diverse code datasets. Zhou et al. [57] introduced the Edit Progress (EP) metric to capture partial progress in automated reviews. Rasheed et al. [41] and Tang et al. [47] developed LLM agents to automate code review.
In 2024, Tufano et al. [50] qualitatively evaluated prior work [24], [29], [51] alongside ChatGPT [35]. Although the ChatGPT version was unspecified, findings indicated it could serve as a competitive baseline for the comment-to-code task (i.e., revising code after a review), though it did not outperform state-of-the-art methods in the code-to-comment task [50]. Unlike prior work, our study examines LLMs as code approvers, responsible for making change merge decisions and offering suggestions. We established an experimental setup for benchmarking various models, enabling practitioners to identify the optimal model and prompt configuration for their specific codebase.
# III. METHODOLOGY
Developers have different expectations about what makes a “good” code review [10]. To ensure a robust evaluation, we focus on the merge approval decision, which governs the procedure for integrating change requests into the mainline code. This decision is based on code correctness, defined as the ability of the code to perform its intended functionality in all cases. When new code is submitted, reviewers determine whether it should be merged into the mainline code; if rejected, they offer suggestions for improvement. Similarly, we expect LLMs to deliver a verdict on code correctness and offer suggestions when necessary.
Our dataset comprises code blocks with unit tests, offering an objective standard for code correctness and a means to assess the effectiveness of improvement suggestions. Code blocks that pass all unit tests are deemed ”Correct,” while those that do not are considered ”Incorrect.”
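The unit-test oracle described above can be sketched as a small harness. This is a minimal version of our own, assuming HumanEval-style test suites that expose a `check(candidate)` driver raising `AssertionError` on failure; the helper name and signature are illustrative, not the paper's actual code.

```python
def passes_unit_tests(code: str, test_code: str, entry_point: str) -> bool:
    """Return True if `code` passes a HumanEval-style test suite.

    `test_code` is assumed to define check(candidate), which raises an
    AssertionError on any failing case (hypothetical harness, names ours).
    """
    env: dict = {}
    try:
        exec(code, env)        # define the candidate function
        exec(test_code, env)   # define check(candidate)
        env["check"](env[entry_point])
        return True
    except Exception:
        return False

# toy example: a working block is "Correct", a buggy one is "Incorrect"
tests = "def check(candidate):\n    assert candidate(2, 3) == 5\n"
print(passes_unit_tests("def add(a, b):\n    return a + b\n", tests, "add"))  # True
print(passes_unit_tests("def add(a, b):\n    return a - b\n", tests, "add"))  # False
```

Catching all exceptions means syntax errors, runtime errors, and failed assertions alike count as ”Incorrect,” matching the binary criterion used here.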
# A. Dataset
We use the HumanEval dataset [15], a popular dataset for evaluating LLMs with interview-level code generation questions, alongside LLM-generated code blocks from Yetistiren et al. [55] for solving HumanEval questions. Our two datasets are 164 canonical solutions from the HumanEval dataset that pass all unit tests and 492 AI-generated code blocks from three tools (ChatGPT 9 Jan ’23, Amazon CodeWhisperer Jan ’23, and GitHub Copilot v1.70.8099), all written in Python. The datasets are referred to as ”Ground Truth” and ”Mixed,” respectively. The 492 AI-generated blocks are categorized as 234 ”Correct” and 258 ”Incorrect.” We chose this dataset for its unit tests and its diversity of correctness.
# B. Test Setup
In our test setup, we simulate a code review scenario where reviewers must approve or reject a proposed change. When rejecting a change, reviewers typically suggest improvements. Therefore, our prompt instructs the LLM to classify the code as ”Correct” or ”Incorrect” and to provide a code suggestion. The detailed steps of our methodology can be seen in Figure 1.
Fig. 1. Test Setup
Fig. 2. Prompt Template
Prompts can enhance and refine the LLM’s capabilities [54]. We optimized our prompt using a chain-of-thought style, a widely used prompting method. Our prompt template is given in Figure 2.
Code blocks often include descriptions that explain the code in the form of code comments or pull request descriptions. To mirror this, we use the HumanEval problem descriptions in our prompts. Since such descriptions are not always provided, we had two prompt types. In Figure 2, the text in red appears only in the prompts that include problem descriptions while the text in black is the same for each prompt type.
The LLM’s output provides two distinct pieces of information for our evaluation. The first is the code correctness classification, indicating whether the code is ”Correct” or ”Incorrect”. The second is the code improvement suggestion: a revised code block that should perform better (or remain unchanged if the code is ”Correct”) than the original.
You are an experienced Python developer. You will be provided with a code block (and a corresponding problem description). Your task is to understand the intended functionality, review the code, and generate feedback. Please follow these steps:

1. Analyze the Code: Understand what the code is meant to do.
2. Consider Edge Cases: Reflect on all scenarios in which the code should operate.
3. Evaluate Functionality: Determine whether the code successfully implements the intended functionality across these cases.
4. Classify the Code: If the code requires changes, set classified_type to Incorrect. If the code does not require any changes, set classified_type to Correct.
5. Suggest a Corrected Code Block: In the complete_code field, suggest a corrected code block if the classified_type is Incorrect; otherwise, you should return the code without changing it.
Important: Use only the classifications Correct or Incorrect (case sensitive).
Now, please review the code based on the following code block:
#code block
(And the following problem description: #problem description)
You need to respond in the following format. This is a strict requirement.
Example output:
feedback:
  classified_type: No
code:
  complete_code: No
Apply the following rules strictly:
- Answer should be a valid YAML, and nothing else. Do not add your thought process or any other text.
- Replace ”No” values with your suggestions.
- Be careful about the indentation and syntax of your suggested code block, make sure it can run without problems.
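A reply in the required format can be checked and unpacked with a few lines of standard-library Python. The sketch below is ours, not the paper's harness; a production setup would load the reply with a full YAML parser and, consistent with the evaluation rules here, treat anything malformed as ”Incorrect.”

```python
import re
import textwrap

def parse_review(yaml_text):
    """Pull classified_type and complete_code out of the model's YAML reply.

    Returns (classification, code), or None when the reply is malformed,
    which the evaluation would then count as an "Incorrect" outcome.
    """
    m = re.search(r"classified_type:\s*(Correct|Incorrect)\b", yaml_text)
    if m is None:
        return None
    # the suggested code usually arrives as an indented literal block scalar
    code_match = re.search(r"complete_code:\s*\|?[ \t]*\n(.*)", yaml_text, re.DOTALL)
    code = textwrap.dedent(code_match.group(1)) if code_match else ""
    return m.group(1), code
```

For example, a reply containing `classified_type: Incorrect` and a block-scalar `complete_code` yields the label plus a dedented code string ready to run against the unit tests.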
# C. Evaluation Metrics
We expect reviewers to detect code correctness accurately. To quantify this, we define ”Correctness Accuracy” as the proportion of correctness assessments that match unit test results, as seen in equation 1. When we classify ”Correct” as positive and ”Incorrect” as negative, correctness accuracy is equivalent to the model accuracy. Using this classification, we also calculate false positive and false negative rates.
$$\text{Correctness (Model) Accuracy} = \frac{\text{Number of Accurately Assessed Code Blocks}}{\text{Number of All Code Blocks}} \tag{1}$$
For effective code improvements, suggestions must pass all unit tests. We define the ”Correction Ratio” as the proportion of suggestions that meet this criterion, as seen in equation 2.
$$\text{Correction Ratio} = \frac{\text{Number of Suggestions That Pass All Unit Tests}}{\text{Number of Incorrect Code Blocks}} \tag{2}$$
We should also consider negative scenarios to assess the impact of code suggestions. A suggestion might worsen a code block—turning ”Correct” code into ”Incorrect” code. We refer to such instances as ”Regressions” and define a ”Regression Ratio,” as seen in equation 3.
$$\text{Regression Ratio} = \frac{\text{Number of Correct Code Blocks Whose Suggestion Fails the Unit Tests}}{\text{Number of Correct Code Blocks}} \tag{3}$$
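Given per-block outcomes, the three metrics can be computed directly. The sketch below is our own; the field names are illustrative, and the denominators follow our reading of the definitions in this section (all blocks, originally incorrect blocks, and originally correct blocks, respectively).

```python
def review_metrics(results):
    """Compute correctness accuracy, correction ratio, and regression ratio.

    Each result is a dict with three booleans (names are ours):
      was_correct        - the original block passes its unit tests
      classified_correct - the LLM labelled the block "Correct"
      suggestion_passes  - the suggested code passes the unit tests
    """
    n = len(results)
    accurate = sum(r["was_correct"] == r["classified_correct"] for r in results)
    incorrect = [r for r in results if not r["was_correct"]]
    correct = [r for r in results if r["was_correct"]]
    corrected = sum(r["suggestion_passes"] for r in incorrect)
    regressed = sum(not r["suggestion_passes"] for r in correct)
    return {
        "correctness_accuracy": accurate / n,
        "correction_ratio": corrected / len(incorrect) if incorrect else 0.0,
        "regression_ratio": regressed / len(correct) if correct else 0.0,
    }
```

Treating ”Correct” as the positive class, the accuracy here coincides with standard model accuracy, and false positive/negative rates can be derived from the same fields.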
# D. Variables
Our test setup comprises three variables. The first is the LLM: we evaluated OpenAI’s GPT-4o (model version: 2024-11-20), used by code review tools like Qodo [6] and CodeRabbit [2], making it a relevant candidate. We also used Google’s Gemini 2.0 Flash, which, during our experiments, was the most capable API-accessible model from the Gemini 2.0 family (a competitor to GPT-4o).
The second variable is the presence of a problem description (e.g., comments or pull request descriptions). This allows us to assess how such contextual information affects LLM performance.
The third variable is the dataset. We used a mixed dataset of 492 code blocks and the HumanEval dataset’s canonical solutions (ground truth dataset). These canonical solutions act as a control group, providing insights into LLM reliability. For this ground truth data, regression ratios are important, as suggestions should not worsen correct code. With eight distinct test configurations arising from three variables, we can infer the generalizability of our findings.
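The eight configurations follow directly from the Cartesian product of the three variables. A short enumeration (model identifiers as reported in this paper; the other labels are ours) makes this explicit:

```python
from itertools import product

models = ["gpt-4o-2024-11-20", "gemini-2.0-flash"]
with_description = [True, False]      # problem description included or not
datasets = ["mixed", "ground_truth"]  # 492 AI-generated blocks vs. canonical solutions

# 2 models x 2 description settings x 2 datasets = 8 test configurations
configs = list(product(models, with_description, datasets))
print(len(configs))  # 8
```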
# IV. RESULTS
We evaluated two state-of-the-art LLMs, Gemini and GPT4o, using the Gemini-2.0-Flash and gpt-4o-2024-11-20 versions with the default model parameters. To ensure reliability, we ran each experiment configuration three times and reported the average results. The standard deviations ranged from $0.35\%$ to $1.61\%$ for correctness accuracy, from $1.02\%$ to $1.93\%$ for false positive rates, from $0.65\%$ to $1.07\%$ for false negative rates, from $0\%$ to $2.88\%$ for regression ratios, and from $0.38\%$ to $1.34\%$ for correction ratios. A chi-square test for variance (using a $5\%$ threshold, $\mathrm{df}=2$, and a critical value of 5.991 at $\alpha=0.05$) showed that all chi-square statistics were below the threshold. This confirms the consistency of our results, as the observed standard deviations are not statistically significant. We share our data and code in our replication package.
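The consistency check above can be reproduced in a few lines: the statistic for a chi-square test of variance is $(n-1)s^2/\sigma_0^2$ with $\mathrm{df}=n-1=2$ for three runs. In this sketch the run values are hypothetical, and we read the $5\%$ threshold as the hypothesized standard deviation $\sigma_0$ (an assumption on our part).

```python
def chi_square_variance_stat(samples, sigma0):
    """Chi-square statistic for H0: population std dev <= sigma0, df = n - 1."""
    n = len(samples)
    mean = sum(samples) / n
    s2 = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return (n - 1) * s2 / sigma0 ** 2

CRITICAL_DF2 = 5.991  # chi-square critical value at alpha = 0.05, df = 2

# three hypothetical accuracy runs (percent), sigma0 = 5 percentage points
runs = [68.1, 68.5, 69.0]
stat = chi_square_variance_stat(runs, 5.0)
print(stat < CRITICAL_DF2)  # True: this small spread stays below the critical value
```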
# A. Mixed Dataset Experiment Results
With the mixed dataset, we ran the experiment with and without problem descriptions for both models. GPT4o outperformed Gemini in code correctness assessments with and without problem descriptions, as shown in Figure 3. When provided with descriptions, GPT4o was accurate $68.50\%$ of the time, compared to Gemini’s $63.89\%$.
Fig. 3. Correctness Accuracy (%)
Fig. 4. False Positive Rates (%)
Looking at the false positive rates in Figure 4, we see that GPT4o is considerably better than Gemini. Both models perform worse without the descriptions. The false negative rates in Figure 5 differ from the rest of the metrics in that Gemini performs better than GPT4o. Although this may seem contradictory, it complements the false positive rates seen in Figure 4. In terms of correction, GPT4o performed better than Gemini with and without problem descriptions, as shown in Figure 6. GPT4o had a higher correction ratio at $67.83\%$, surpassing Gemini’s $54.26\%$. Looking at the regression ratios, GPT4o was better in all configurations, as shown in Figure 7. GPT4o had a regression ratio of $10.43\%$, while Gemini’s was $13.53\%$. Our experiments showed that both models performed worse when the prompt did not provide the problem descriptions. This was especially apparent in the regression and correction ratios, where differences of up to $22.87\%$ were observed.
Fig. 5. False Negative Rates (%)
Fig. 6. Correction Ratios (%)
Fig. 7. Regression Ratios (%)
# B. Ground Truth Dataset Experiment Results
Using the ground truth dataset, we ran the experiment for both models with and without problem descriptions. We did not calculate correction ratios since all code blocks were already ”Correct.”
The correctness accuracy results contradicted the mixed dataset findings, as seen in Figure 8. With problem descriptions, Gemini outperformed GPT4o with $66.67\%$ compared to GPT4o’s $42.07\%$. The regression ratio results were similar to the mixed dataset results: with problem descriptions, GPT4o outperformed Gemini at $9.96\%$ versus Gemini’s $12.40\%$. Both models performed worse without problem descriptions, though with smaller differences of up to $12.40\%$.
Fig. 8. Correctness Accuracy (%) with Ground Truth
Fig. 9. Regression Ratios (%) with Ground Truth
# V. DISCUSSION
# A. Revisiting Research Question 1
Our findings suggest that LLMs can evaluate code changes for approval or rejection with moderate accuracy. The highest value we observed was $68.50\%$ (GPT4o with problem descriptions, mixed dataset). In contrast, Gemini performed worse on the mixed dataset yet better on the ground truth dataset. This raises questions about whether the code type affects LLM performance. To mitigate such concerns, our experimental setup can help practitioners run tests using their own code, guiding them toward the best-performing LLM for their specific needs.
As shown in Figure 4 and Figure 5, Gemini is more likely to misclassify ”Incorrect” code blocks as ”Correct,” whereas GPT4o tends to make the opposite error more often. Given these scenarios, higher false negative rates are preferable to higher false positive rates because merging faulty code into the mainline can lead to quality issues, such as bugs. In contrast, the opposite error primarily inconveniences the author, which we consider to be a more minor issue by comparison. For this reason, we believe that the false positive and false negative rates are significant factors when choosing an LLM.
Finally, we consistently observed that LLMs perform better with problem descriptions. This suggests that practitioners adopting automated code review tools should ensure their code changes include clear, helpful comments. It should be noted that we do not refer to code comments specifically. Depending on the tool setup, other descriptions, such as pull request descriptions, can also serve the same purpose.
# B. Revisiting Research Question 2
To effectively replace human code reviewers, LLMs need to provide effective code suggestions. In our experiments, we observed that LLMs can correct up to $67.83\%$ of incorrect code (GPT4o with problem descriptions, mixed dataset). Our results suggest that code improvement suggestions of LLMs are moderately effective in improving code correctness. Without the problem descriptions, the results were consistently poorer.
In terms of regressions, we observed that up to $24.80\%$ of correct code blocks received incorrect code suggestions. The regression rates were higher, sometimes even doubling, without the problem descriptions. The regression and correction ratio results also underlined the importance of comments. Overall, the key takeaway for us is that LLM code suggestions are not reliable enough for full automation. However, this does not mean they cannot be useful when used in moderation.
# C. Human-in-the-loop LLM Code Review
While our results were positive, they also show that LLMs exhibit significant error rates, with regression rates reaching up to $23.79\%$ and inaccurate approval decisions of $44.44\%$ (both from Gemini w/o problem descriptions, mixed dataset). Such errors could cause more harm than benefit, raising doubts about the reliability and accountability of fully automated LLM code reviews. Furthermore, full automation overlooks crucial human-driven aspects of code review like knowledge transfer, team awareness, and shared code ownership [10], [31]. Consequently, a hybrid process with human involvement is essential.
Fig. 10. Human-in-the-loop LLM Code Review Process
To address these shortcomings, we propose the ”Human-in-the-loop LLM Code Review” process (Figure 10). To ensure accountability, the author, not the LLM, implements suggested changes. This LLM-author iteration continues until the LLM suggests no further modifications, with organizations setting iteration limits. Next, a ”Review Responsible” determines if the LLM’s review is sufficient or if additional human review is necessary, based on the change’s criticality or complexity. If no further review is needed, the change is merged. Otherwise, human reviewers provide a second layer of oversight. If more changes are required, the process restarts; if not, the change is merged.
This approach reduces manual effort while leveraging human expertise, facilitating knowledge transfer, and mitigating the regressions and faulty assessments observed in our experiments. The process is adaptable to organizational needs and aims to help practitioners establish their own LLM-assisted code review systems.
# VI. THREATS TO VALIDITY
# A. Internal Validity
Since we are experimenting with LLMs, the prompt plays a crucial role. We acknowledge that different prompts can yield different results, and ours followed a chain-of-thought approach. Our testing method for code suggestions is also subject to criticism, as it expects a code block rather than a textual suggestion. We chose to do it this way to ensure objective results. We extracted code from the YAML response and ran the corresponding unit tests. Since Python is indentation-sensitive, our prompts warned about indentation and correct YAML formatting. Overall, $94.70\%$ of the code was executed without errors, while $4.08\%$ had indentation errors and $1.08\%$ had YAML format errors. Because our prompt explicitly warned about these issues, we classified erroneous code blocks as ”Incorrect,” regardless of whether they would pass unit tests if the indentation or YAML errors were fixed.
# B. External Validity
In this research, our scope is limited to Python. Therefore, our findings are only directly generalizable to Python. Additionally, due to the stochastic nature of LLMs [33], their outputs are not always identical. To account for this variability, we expanded our sample size by running our experiment three times and reporting the average results.
# C. Construct Validity
Our evaluation was conducted on GPT-4o and Gemini, and other LLMs may exhibit different behaviors under the same conditions. The HumanEval dataset [15] consists of simple questions, while the mixed dataset was AI-generated, raising concerns about generalizability. We failed to find a dataset of human-generated code with unit tests with a similar diversity of correctness. To gain deeper and more reliable insights, future experiments should be conducted on code from real software projects. We provide practitioners with our experiment setup and source code, enabling them to generate their own results. | Context: Code reviews are crucial for software quality. Recent AI advances
have allowed large language models (LLMs) to review and fix code; now, there
are tools that perform these reviews. However, their reliability and accuracy
have not yet been systematically evaluated. Objective: This study compares
different LLMs' performance in detecting code correctness and suggesting
improvements. Method: We tested GPT4o and Gemini 2.0 Flash on 492 AI-generated
code blocks of varying correctness, along with 164 canonical code blocks from
the HumanEval benchmark. To simulate the code review task objectively, we
expected LLMs to assess code correctness and improve the code if needed. We ran
experiments with different configurations and reported on the results. Results:
With problem descriptions, GPT4o and Gemini 2.0 Flash correctly classified code
correctness 68.50% and 63.89% of the time, respectively, and corrected the code
67.83% and 54.26% of the time for the 492 code blocks of varying correctness.
Without problem descriptions, performance declined. The results for the 164
canonical code blocks differed, suggesting that performance depends on the type
of code. Conclusion: LLM code reviews can help suggest improvements and assess
correctness, but there is a risk of faulty outputs. We propose a process that
involves humans, called the "Human in the loop LLM Code Review" to promote
knowledge sharing while mitigating the risk of faulty outputs. | [
"cs.SE",
"cs.AI"
] |
# 1 Introduction
Zero-Shot Stance Detection (ZSSD) aims to identify the stance expressed in text toward targets absent during training, a task increasingly vital for analyzing polarized discourse on social media where new topics continually emerge and labeled data are often scarce (Allaway and Mckeown, 2020; Liang et al., 2022a). Recent advances in large language models (LLMs) offer new opportunities for ZSSD, as their zero-shot prompting and strong contextual understanding enable deeper semantic reasoning and generalization to novel targets (Binz and Schulz, 2023). However, LLMs applied via prompting strategies (Li et al., 2023a; Zhang et al., 2023; Lan et al., 2024; Zhao et al., 2024; Weinzierl and Harabagiu, 2024) typically exhibit suboptimal performance on ZSSD. In contrast, LLM-enhanced fine-tuning approaches improve task adaptation but still depend on extensive instance-level supervision.
Reasoning schemas have demonstrated strong potential for enhancing both generalization and interpretability across various domains, such as question answering, causal inference, and narrative understanding (Peng et al., 2024; Li et al., 2024; Cheng et al., 2024; Su et al., 2025; Tao et al., 2025). However, the application of schema-based reasoning to stance detection, especially in the zero-shot setting, remains largely unexplored. This gap is primarily attributable to two key challenges: (1) the absence of effective methods for modeling and extracting stance-specific reasoning schemas that generalize across diverse targets, and (2) the lack of mechanisms to align input instances with abstract schemas, thereby supporting schema-guided inference and robust generalization to novel stance targets.
To address these challenges, we propose the Cognitive Inductive Reasoning Framework (CIRF), a schema-driven approach for ZSSD that bridges linguistic input and abstract reasoning via automatic schema induction and schema-guided inference. Specifically, CIRF comprises two key components: (1) Unsupervised Schema Induction (USI), which leverages LLMs to abstract structured reasoning patterns from raw text by converting predicates into first-order logic (FOL) expressions and clustering them into a multi-relational schema graph, capturing stance-relevant logical relations independent of specific targets or vocabulary; and (2) Schema-Enhanced Graph Kernel Model (SEGKM), which represents each input as an FOL graph, maps predicate nodes to corresponding schema nodes, and employs a learnable graph kernel to align input structures with schema templates for stance prediction. This framework enables CIRF to generalize effectively to novel and diverse targets by combining the interpretability of symbolic schemas with the adaptability of neural inference, setting a new paradigm for schema-driven zero-shot stance detection.
In summary, the contributions of this work are as follows: (1) We propose CIRF, a novel schema-driven approach for ZSSD that bridges linguistic input and abstract reasoning via automatic induction and application of cognitive reasoning schemas. (2) We introduce a unified framework combining unsupervised schema induction—leveraging LLMs to abstract FOL patterns into multi-relational schema graphs—and SEGKM for effective schema-guided stance inference and unseen-target generalization. (3) Extensive experiments on the SemEval-2016, VAST, and COVID-19-Stance benchmarks demonstrate the superiority of CIRF: it outperforms state-of-the-art ZSSD baselines by 1.0, 4.5, and 3.3 percentage points in macro-F1, respectively, and achieves competitive performance with $70\%$ fewer labeled examples compared to LLM-enhanced methods.
# 2 Related Work
ZSSD Methods. ZSSD has attracted increasing attention due to its importance in identifying stances toward previously unseen targets (Liang et al., 2022a). Early approaches, such as JoinCL (Liang et al., 2022b) and TarBK (Zhu et al., 2022), rely heavily on supervised learning with large annotated datasets, limiting their generalization to novel targets. Recent advances in LLMs have introduced new paradigms for ZSSD, including zero-shot prompting LLMs (ZSPM) (Zhang et al., 2023; Lan et al., 2024) and LLM-enhanced fine-tuning-based methods (LEM) (Li et al., 2023a; Zhang et al., 2024; Dai et al., 2025; Zhang et al., 2025). However, ZSPM methods often underperform due to their lack of task-specific adaptation, while LEM approaches still require extensive instance-level supervision. These limitations highlight the need for frameworks that can generalize reasoning patterns to unseen targets without relying on large amounts of labeled data, thereby motivating schema-driven approaches.
First-Order Logic for Neural Reasoning. FOL provides a structured and interpretable foundation for encoding logical relations such as causality, implication, and conditionality, and has been widely adopted to enhance consistency and transparency in neural reasoning (Hu et al., 2016b; Huang et al., 2022). Recent methods integrate FOL constraints into neural architectures via posterior regularization (Hu et al., 2016a; Zhang et al., 2022) or joint FOL-neural embeddings, aiming to unify symbolic rigor with statistical flexibility. In stance detection, recent studies (Dai et al., 2025; Zhang et al., 2025) prompt LLMs to generate FOL-based reasoning chains, achieving notable improvements over conventional models. However, such FOL-based techniques typically rely on static, instance-specific rules, limiting their ability to induce domain-agnostic abstractions or generalize logic across unseen topics—a critical bottleneck in zero-shot scenarios.
Schema Induction Methods. Previous work on schema induction has predominantly focused on event-centric scenarios (Edwards and Ji, 2023), employing either bottom-up concept linking (Huang et al., 2016) or top-down clustering (Shen et al., 2021). With the advent of LLMs, which encode extensive linguistic and reasoning knowledge and demonstrate emergent abilities such as in-context learning and chain-of-thought reasoning (Zhang et al., 2023), recent research has begun to exploit their generative and summarization capabilities for top-down schema construction (Li et al., 2023b; Tang et al., 2023; Dror et al., 2023; Shi et al., 2024). However, most existing schema or pattern induction methods are tailored for multi-sentence or event-level analysis and are ill-suited for sentence-level stance detection, which typically lacks explicit structural cues and requires inferring implicit, diverse semantic relations. Among recent LLM-based schema induction approaches, SenticNet8 (Cambria et al., 2024) and LogiMDF (Zhang et al., 2025) are most relevant to our work. SenticNet8 maps lexical items to abstract source concepts via dictionary-based strategies, but its reliance on traditional lexical resources leads to substantial coverage gaps, leaving many words unmapped to abstract concepts. In contrast, LogiMDF leverages LLMs to induce logical rules, yet its schema construction is primarily driven by predicate frequency, with limited attention to semantic richness. Consequently, both approaches fall short of fully capturing the nuanced semantic structures and abstraction capabilities offered by modern LLMs, which are crucial for effective schema-guided stance detection.
# 3 Method
Our CIRF addresses zero-shot stance detection via a two-stage pipeline, combining the strengths of both schema abstraction and graph-based neural inference. Specifically, CIRF consists of: (1) a USI module, which leverages large-scale unlabeled data and LLMs to automatically induce a library of abstract, multi-relational reasoning schemas; and (2) an SEGKM, which parses each input argument into an FOL graph, aligns it with the induced schemas, and predicts stance via learnable graph kernel matching.
# 3.1 Task Definition
Let $X = \{ (x_i, q_i) \}_{i=1}^{N}$ denote the labeled data collection, where $x_i$ is an input text, $q_i$ is the corresponding source target, and $N$ is the total number of instances in $X$. Each sentence-target pair $(x, q) \in X$ is assigned a stance label $y$. ZSSD aims to train a model on known source targets and predict the stance polarity towards unseen targets.
# 3.2 Unsupervised Schema Induction
To bridge the gap between instance-level logic and abstract, generalizable reasoning, we propose an unsupervised schema induction pipeline that automatically abstracts cognitive schemas from LLM-generated reasoning. For each sentence-target pair, we prompt an LLM to produce a reasoning rationale for its stance prediction, which we represent as FOL expressions. We then cluster these predicates based on semantic similarity and summarize each cluster into a concise, concept-level schema using LLMs. For example, predicates such as “Vaccines reduce health risks” and “Tax cuts increase economic instability” are abstracted into schemas like “Policies reduce instability → Support” and “Resource reduction → Opposition,” respectively, enabling cross-domain generalization.
Specifically, our schema induction process includes four key steps: (1) FOL Generation: For each sentence-target pair $(x, q)$, prompt an LLM to generate reasoning chains as FOL predicates. (2) Predicate Clustering: Cluster FOL predicates using $K$-means based on Sentence-BERT embeddings to group similar reasoning patterns. The optimal $K$ is determined by the highest silhouette score (Shahapure and Nicholas, 2020). (3) Schema Abstraction: Use an LLM to summarize each cluster into a concise phrase, forming concept-level reasoning schemas (e.g., “Consequences may lead to resistance → Opposition”). (4) Schema Graph Construction: Represent the resulting schemas as nodes in a weighted multi-relational graph $G^s = (V^s, E^s, A^s)$, where edges denote logical relations (Badreddine et al., 2022). This process distills diverse instance-level rationales into compact, concept-level schemas that capture transferable inference patterns for systematic stance prediction across varied targets.
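The clustering in step (2) can be sketched with off-the-shelf tools. Below is a minimal illustration of $K$-means with silhouette-based selection of $K$; random Gaussian blobs stand in for the Sentence-BERT embeddings of FOL predicates, and all names and data are illustrative rather than the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_predicates(embeddings: np.ndarray, k_candidates):
    """Pick K by the highest silhouette score, then return (best_k, labels)."""
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_candidates:
        km = KMeans(n_clusters=k, n_init=10, random_state=0)
        labels = km.fit_predict(embeddings)
        score = silhouette_score(embeddings, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Stand-in for Sentence-BERT embeddings of predicates: three well-separated blobs.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(loc=c, scale=0.1, size=(20, 16)) for c in (0.0, 1.0, 2.0)])
k, labels = cluster_predicates(emb, k_candidates=range(2, 6))
print(k)  # the silhouette score peaks at the true number of blobs
```

Each resulting cluster would then be passed to the LLM for summarization into a concept-level schema phrase.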
# 3.3 Schema-Enhanced Graph Kernel Module
To leverage both instance-specific logical reasoning and abstract schema knowledge for zero-shot stance detection, we propose the SEGKM, and the framework as shown in Figure 1. SEGKM is designed to inject transferable, concept-level cognitive schemas—induced from LLM rationales via USI—directly into the reasoning process of each test instance. This enables the model to systematically align local reasoning structures with global, generalizable patterns, thus supporting interpretable and robust inference across unseen stance targets. Specifically, schema-driven reasoning in SEGKM proceeds as follows:
FOL Graph Construction. Given an input sentence-target pair $(x, q)$, we first generate step-by-step reasoning rationales using LLM prompting (as in USI), and convert them into a FOL graph $G^f = (V^f, E^f)$, where nodes represent predicates and edges encode logical relations extracted from the input.
Figure 1: The framework of SEGKM.
Schema Knowledge as Graph Filters. To inject abstract reasoning motifs into node-level feature extraction, we initialize the graph kernel filters using local subgraphs of the induced cognitive schema. Specifically, for each schema node, we extract a subgraph centered on the node that retains its neighborhood structure. This yields a set of $N$ schema filters $\mathcal{H} = \{ H_i \}_{i=1}^{N}$, each representing a transferable, concept-level reasoning pattern. Compared to using the full schema graph, these subgraph filters efficiently capture fine-grained relationships while reducing computation, and enable the kernel network to match localized reasoning structures from input graphs to reusable schema patterns.
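The filter-extraction step above amounts to taking an ego subgraph around each schema node. A minimal sketch using NetworkX, with a hypothetical toy schema graph (node names invented for illustration):

```python
import networkx as nx

def build_schema_filters(schema_graph: nx.Graph, radius: int = 1):
    """One filter per schema node: the subgraph of its `radius`-hop neighborhood."""
    return [nx.ego_graph(schema_graph, node, radius=radius) for node in schema_graph.nodes]

# Toy schema graph: concept-level schema nodes connected by logical relations.
G_s = nx.Graph([("reduce_risk", "support"), ("reduce_resource", "oppose"),
                ("support", "oppose")])
filters = build_schema_filters(G_s)
print([sorted(H.nodes) for H in filters])
```

Each element of `filters` is one local schema pattern that the kernel network can match against input subgraphs.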
Kernel-Based Node Representation. To infuse schema knowledge into the input reasoning graph, we represent each node $v \in V ^ { f }$ by measuring the structural and semantic similarity of its local subgraph $G _ { v } ^ { f }$ to each schema filter $H _ { i }$ . This is achieved using a $p$ -step deep random walk kernel (Feng et al., 2022):
$$
\phi _ { 1 , i } ( v ) = K _ { p } ( G _ { v } ^ { f } , H _ { i } ) = \mathbf { s } ^ { \top } W A _ { \times } ^ { p } \mathbf { s }
$$
where $G _ { v } ^ { f }$ is the $k$ -hop subgraph centered at $v$ , $H _ { i }$ is a schema filter, $S = \mathbf { X } _ { G _ { v } ^ { f } } \mathbf { X } _ { H _ { i } } ^ { \top }$ is the node feature similarity matrix, $\mathbf { s } = \mathrm { v e c } ( S )$ , $W$ is a learnable weight matrix, and $A _ { \times } ^ { p }$ encodes $p$ -step transitions in the product graph of $G _ { v } ^ { f }$ and $H _ { i }$ .
To ensure each node focuses on the most relevant schema knowledge, we dynamically select the top-$g$ schema filters with the highest kernel similarity:
$$
\mathcal { H } ^ { * } ( G _ { v } ^ { f } ) = \arg \operatorname* { m a x } _ { \mathcal { H } ^ { \prime } \subseteq \mathcal { H } , | \mathcal { H } ^ { \prime } | = g } \sum _ { H _ { i } \in \mathcal { H } ^ { \prime } } K _ { p } ( G _ { v } ^ { f } , H _ { i } )
$$
The final node representation is constructed by concatenating the kernel similarities between $G _ { v } ^ { f }$ and the selected filters, i.e.,
$$
\phi _ { 1 } ( v ) = \mathbf { C o n c a t } \left( \phi _ { 1 , i } ( v ) \mid H _ { i } \in \mathcal { H } ^ { * } ( G _ { v } ^ { f } ) \right)
$$
This process allows each node to explicitly align its local reasoning pattern with the most semantically and structurally relevant schema motifs, yielding knowledge-aware node representations that support interpretable and transferable inference—especially critical in zero-shot scenarios.
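The kernel computation and top-$g$ selection above can be sketched numerically. The snippet below is an illustrative NumPy implementation of $K_p(G_v^f, H_i) = \mathbf{s}^\top W A_\times^p \mathbf{s}$ under simplifying assumptions: $W$ is an untrained identity stand-in for the learnable weight matrix, the product-graph adjacency is formed with a Kronecker product, and $\mathrm{vec}(\cdot)$ is taken row-major for consistency with `np.kron`'s ordering.

```python
import numpy as np

def rw_kernel(A_g, X_g, A_h, X_h, W, p=2):
    """p-step random-walk kernel between an input subgraph and one schema filter.

    A_*: adjacency matrices; X_*: node feature matrices (rows = nodes)."""
    S = X_g @ X_h.T                      # node-feature similarity matrix
    s = S.reshape(-1)                    # vec(S), row-major
    A_x = np.kron(A_g, A_h)              # product-graph adjacency
    return s @ W @ np.linalg.matrix_power(A_x, p) @ s

def node_representation(A_g, X_g, filters, W, p=2, g=2):
    """phi_1(v): kernel scores against the top-g most similar schema filters."""
    scores = [rw_kernel(A_g, X_g, A_h, X_h, W, p) for A_h, X_h in filters]
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:g]
    return np.array([scores[i] for i in sorted(top)])  # concatenated kernel values

rng = np.random.default_rng(0)
A_g = np.array([[0., 1.], [1., 0.]]); X_g = rng.normal(size=(2, 4))
filters = [(np.array([[0., 1.], [1., 0.]]), rng.normal(size=(2, 4))) for _ in range(3)]
W = np.eye(4)  # 4 = 2 x 2 product-graph nodes; identity as an untrained stand-in
phi = node_representation(A_g, X_g, filters, W, p=2, g=2)
print(phi.shape)
```

In the actual model $W$ would be learned jointly with the rest of SEGKM; the sketch only shows the data flow of the kernel matching.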
Multi-layer Model. Real-world stance reasoning often requires multi-hop logic and the flexible composition of multiple schema patterns. To support this, we stack multiple layers of schema-driven kernel feature extraction. At each layer $l$ , node representations are recursively updated by matching their expanded local subgraphs to the schema filters, allowing deeper layers to aggregate schema knowledge from increasingly broader reasoning contexts. This design enables SEGKM to capture both fine-grained local logic and higher-order, composite reasoning motifs—crucial for robust generalization in zero-shot stance detection. The final graph representation is constructed by concatenating the aggregated node features from all layers:
$$
\Phi ( G ) = { \mathrm { C o n c a t } } \left( \sum _ { v \in G } \phi _ { l } ( v ) \mid l = 0 , 1 , . . . , L \right)
$$
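The layer-wise readout above is a sum-then-concatenate operation; a minimal sketch, with hypothetical per-layer feature dimensions:

```python
import numpy as np

def graph_representation(layer_features):
    """Phi(G): concatenate the summed node features from every layer.

    layer_features[l] is an (n_nodes, d_l) array holding phi_l(v) for layer l."""
    return np.concatenate([F.sum(axis=0) for F in layer_features])

# Hypothetical 3-node graph with layer dims d_0 = 2 and d_1 = 3:
feats = [np.ones((3, 2)), np.full((3, 3), 2.0)]
print(graph_representation(feats))  # -> [3. 3. 6. 6. 6.]
```

Because each layer contributes its own block, deeper layers add progressively broader reasoning context to the final representation.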
# 3.4 Node Enhancement via Schema Augmentation
While the FOL Graph captures instance-specific reasoning, it may lack sufficient structural alignment with abstract schema patterns—especially in zero-shot scenarios. To address this, we augment the FOL Graph by assigning each predicate node to its most relevant schema motif, as defined by $K$ -means clustering in the USI stage. Specifically, for each predicate node $v$ , we use the trained $K$ - means model to assign $v$ to the closest schema cluster (i.e., the nearest cluster centroid in embedding space). The corresponding schema node representing this motif is then added as a new node to the FOL Graph, and an edge is created between $v$ and this schema node. This augmentation explicitly injects transferable schema-level reasoning motifs into the instance graph, improving both structural alignment for kernel matching and generalization in zero-shot stance detection.
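The augmentation step can be sketched as a nearest-centroid assignment followed by a graph edit. The snippet below is illustrative: node names and the `schema_{k}` naming convention are invented, and in the real pipeline the assignment would use the trained $K$-means model from USI rather than raw centroid distances.

```python
import numpy as np
import networkx as nx

def augment_with_schema(fol_graph, node_emb, centroids):
    """Link every predicate node to its nearest schema-cluster centroid.

    node_emb: {node: embedding}; centroids: (K, d) array from the USI K-means."""
    G = fol_graph.copy()
    for v, e in node_emb.items():
        k = int(np.argmin(np.linalg.norm(centroids - e, axis=1)))
        schema_node = f"schema_{k}"          # hypothetical naming for the motif node
        G.add_edge(v, schema_node)           # inject the schema motif into the graph
    return G

G_f = nx.Graph([("vaccines_reduce_risk", "support_claim")])
emb = {"vaccines_reduce_risk": np.array([0.1, 0.0]),
       "support_claim": np.array([0.9, 1.0])}
cents = np.array([[0.0, 0.0], [1.0, 1.0]])
G_aug = augment_with_schema(G_f, emb, cents)
print(sorted(G_aug.nodes))
```

After augmentation, kernel matching sees explicit schema nodes inside each instance graph, which is what improves structural alignment with the filters.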
# 3.5 Prediction and Loss
The final graph representation $\Phi ( G )$ is fed into a fully connected layer with softmax to produce the stance prediction:
$$
\hat { y } = \operatorname { s o f t m a x } ( W _ { 0 } \cdot \Phi ( G ) + b _ { 0 } )
$$
where $W_0$ and $b_0$ are trainable parameters. We train the model using cross-entropy loss.
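The prediction head and loss are standard; a minimal NumPy sketch (random values stand in for the learned $\Phi(G)$, $W_0$, and $b_0$):

```python
import numpy as np

def predict(phi_G, W0, b0):
    """Stance probabilities: softmax(W0 . Phi(G) + b0)."""
    z = W0 @ phi_G + b0
    z = z - z.max()                   # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p

def cross_entropy(p, y):
    """Negative log-likelihood of the gold stance label y."""
    return -np.log(p[y])

rng = np.random.default_rng(0)
phi_G = rng.normal(size=8)
W0, b0 = rng.normal(size=(3, 8)), np.zeros(3)   # 3 stances: Favor/Against/None
p = predict(phi_G, W0, b0)
print(p.sum())
```

In training, the cross-entropy loss would be averaged over a batch and backpropagated through SEGKM.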
# 4 Experimental Setups
Experimental Data. We evaluate our approach on three widely used stance detection benchmarks: SEM16, VAST, and COVID-19-Stance.
SEM16 (SemEval-2016 Task 6; (Mohammad et al., 2016)) is a benchmark of tweets annotated for stance towards six predefined targets. Following prior work (Li et al., 2023a; Lan et al., 2024), we focus on three commonly used targets: Hillary Clinton (HC), Feminist Movement (FM), and Legalization of Abortion (LA).
VAST (Allaway and Mckeown, 2020) comprises texts from the New York Times “Room for Debate” section, covering a broad spectrum of topics. The dataset contains 4,003 training, 383 development, and 600 test examples, with topics ranging from education and politics to public health. Topic phrases are automatically extracted and subsequently refined by human annotators to ensure quality and diversity.
COVID-19 (COVID-19-Stance; (Glandt et al., 2021)) is constructed to assess public stances toward COVID-19-related policies. It includes four targets: Wearing a Face Mask (WA), Keeping Schools Closed (SC), Anthony S. Fauci, M.D. (AF), and Stay at Home Orders (SH).
Evaluation Metrics. For the SEM16 and COVID-19 datasets, which include three classes (“Favor”, “Against”, and “NONE”), we follow Liang et al. (2022a) and report macro-F1 $( F _ { a v g } )$ computed over the “Favor” and “Against” categories. For the VAST dataset, which comprises three categories (“Pro”, “Con”, and “Neutral”), we follow Li et al. (2023c) and report the macro-F1 across all classes, along with individual macro-F1 scores for the “Pro” and “Con” categories.
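The SEM16/COVID-19 protocol of averaging F1 over only the “Favor” and “Against” classes can be reproduced directly with scikit-learn's `labels` argument; the toy labels below are illustrative:

```python
from sklearn.metrics import f1_score

def f_avg(y_true, y_pred):
    """Macro-F1 over the 'Favor' and 'Against' classes only (SEM16 / COVID-19 protocol)."""
    return f1_score(y_true, y_pred, labels=["Favor", "Against"], average="macro")

gold = ["Favor", "Against", "NONE", "Favor"]
pred = ["Favor", "Against", "Favor", "NONE"]
print(round(f_avg(gold, pred), 3))  # Favor F1 = 0.5, Against F1 = 1.0 -> 0.75
```

For VAST, the same call without the `labels` restriction (over “Pro”, “Con”, “Neutral”) would yield the all-class macro-F1 used in the paper.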
Implementation Details. We use GPT-3.5-Turbo as the default LLM for FOL elicitation and summary generation, following prior baselines. Dataset splits follow Li et al. (2023a); Lan et al. (2024): for SEM16 and COVID-19, one target is held out for testing; for VAST, we use the official zero-shot setup. USI uses temperature 0 for LLM queries and determines the number of $K$-means clusters ($K$) via the silhouette coefficient (Shahapure and Nicholas, 2020) (final $K = 498$). SEGKM initializes node embeddings with BERT-base (Devlin et al., 2019), adopts a two-layer graph kernel, and applies a fully connected ReLU layer. LoRA (Hu et al., 2022) is used for parameter-efficient tuning. Training uses AdamW (batch size 32, learning rate $5 \times 10^{-4}$), early stopping (patience 10), up to 20 epochs, and selects the checkpoint with the lowest validation loss (validation every 0.2 epoch), reporting the average score of three independent trials.
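The checkpoint-selection logic described above (validate every 0.2 epoch, keep the lowest-validation-loss checkpoint, patience 10) can be sketched framework-agnostically; `step_fn` and `eval_fn` are hypothetical callables standing in for a training step and a validation pass:

```python
def train_with_early_stopping(step_fn, eval_fn, max_epochs=20, patience=10,
                              evals_per_epoch=5):
    """Keep the checkpoint with the lowest validation loss; stop after `patience`
    evaluations without improvement (validating every 1/evals_per_epoch epoch)."""
    best_loss, best_state, bad = float("inf"), None, 0
    for _epoch in range(max_epochs):
        for _ in range(evals_per_epoch):
            state = step_fn()            # train for 0.2 epoch, return checkpoint
            loss = eval_fn(state)        # validation loss of this checkpoint
            if loss < best_loss:
                best_loss, best_state, bad = loss, state, 0
            else:
                bad += 1
                if bad >= patience:
                    return best_state, best_loss
    return best_state, best_loss

# Toy run: validation losses improve, then stagnate until patience is exhausted.
losses = iter([3.0, 2.0, 2.5, 2.6, 2.7, 2.8])
state, loss = train_with_early_stopping(
    step_fn=lambda: "ckpt", eval_fn=lambda s: next(losses),
    max_epochs=20, patience=3, evals_per_epoch=5)
print(loss)  # best validation loss seen before patience ran out
```

Here `evals_per_epoch=5` encodes the paper's "validation every 0.2 epoch" schedule.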
Baseline Methods. To evaluate the performance of existing stance detection models on our dataset, we employed the following models: (1) Non-LLM methods: CrossNet (Du et al., 2017), JoinCL (Liang et al., 2022b), and PT-HCL (Liang et al., 2022a). (2) Zero-shot prompting LLMs: COLA (Lan et al., 2024), Ts-Cot (Zhang et al., 2023), KASD (Li et al., 2023a). (3) LLM-enhanced fine-tuned models: KAI (Zhang et al., 2024), FOLAR (Dai et al., 2025), and LogiMDF (Zhang et al., 2025).
Table 1: Comparison of different models on the ZSSD task. The best scores are highlighted in bold, and the second-best scores are underlined. ‡ indicates statistically significant improvements of our CIRF over FOLAR based on paired $t$-tests with p-value $< 0.05$. † denotes results reproduced using the official open-source code, and for methods involving LLMs, reproduction was performed with the same GPT backbone to ensure fair comparison.
# 5 Experimental Results
# 5.1 Main ZSSD Experimental Results
The main ZSSD results on SEM16, VAST, and COVID-19 are summarized in Table 1. Our proposed CIRF consistently outperforms all baseline models across all three benchmarks, demonstrating the effectiveness of our approach in challenging zero-shot scenarios.
Specifically, CIRF achieves an average F1 improvement of 1 point over FOLAR (the previous SOTA) on SEM16, and a 3.6-point boost on the more challenging VAST dataset, which approximates real-world fine-grained ZSSD. Statistical significance tests $(p < 0.05)$ confirm that CIRF’s improvements over strong competitors (MultiPLN, EDDA, and KAI) are robust across all evaluation metrics. A closer examination reveals several trends: First, traditional non-LLM methods perform substantially worse on ZSSD, while LLM-based models yield notable gains, emphasizing the crucial role of LLMs’ reasoning capabilities. Furthermore, fine-tuned LLM-based models (KAI, LogiMDF, FOLAR, and CIRF) generally surpass direct prompting strategies (Ts-CoT, COLA, and KASD), suggesting that combining annotated data with the knowledge encoded in LLMs is more effective.
We also compare different knowledge representations. Models using FOL knowledge elicited from LLMs (CIRF and FOLAR) generally outperform those using natural language (KAI), indicating that FOL provides a more compact and effective abstraction for reasoning transfer. Notably, CIRF further outperforms KAI, the strongest prior LLM-based method, across all metrics, highlighting the added value of our cognitive framework enhancement. These results collectively demonstrate that CIRF not only sets a new state-of-the-art for ZSSD but also provides insights into the importance of structured LLM-elicited knowledge and cognitive schema design for robust zero-shot stance reasoning.
# 5.2 Ablation Study
To evaluate the contribution of each component in our CIRF model, we perform ablation experiments by selectively removing the cognitive schema (w/o Schema) and the SEGKM module (w/o SE). Specifically, the w/o Schema variant completely removes the schema constructed via the USI process, and in this case, the filters within SEGKM are randomly initialized rather than learned from the schema. The w/o SE variant replaces SEGKM with a standard GCN to encode the schema, which is then concatenated with BERT-based text representations for stance prediction. We also design a w/o SA model by removing the Node Enhancement via Schema Augmentation component. The ablation results are summarized in Figure 2.
We observe that removing the cognitive schema (w/o Schema) leads to a significant drop in F1 scores for all targets on SEM16 (e.g., a decrease of 2.66 points on average), and the degradation is even more pronounced under low-resource conditions on VAST (10%). This highlights the critical role of target-agnostic predictive logic knowledge, especially in zero-shot scenarios with unseen targets.
Additionally, the w/o SE model also exhibits substantial performance loss, underscoring the necessity of sophisticated schema encoding. The gap between w/o Schema and w/o SE further demonstrates that simply concatenating schema features is not sufficient—effective modeling of the schema’s logical structure is essential. Ablating the schema augmentation component (w/o SA) consistently degrades performance as well, confirming the importance of schema-informed node-level enhancements. Overall, these results confirm that both the cognitive schema and advanced logical encoding are indispensable for achieving robust performance in zero-shot stance detection tasks.
# 5.3 Low-Resource Performance
To evaluate the effect of the cognitive schema on model performance under limited supervision, we assess CIRF on the VAST (10%) dataset, varying the proportion of labeled samples from 10% to 50%. This setting enables us to analyze the data efficiency and robustness of CIRF compared to baseline approaches. As shown in Figure 3, the performance of CIRF steadily improves as the amount of labeled data increases. Notably, even with only 30% of labeled samples, CIRF achieves state-of-the-art results, outperforming strong LLM-based competitors such as FOLAR, KAI, and COLA on VAST (10%). For example, with 10% supervision, CIRF achieves an F1 score of 76.1 on VAST (10%), exceeding KAI by 0.9 points. This demonstrates the significant advantage of our approach in resource-scarce environments. These findings further indicate that the cognitive schema framework facilitates more effective knowledge transfer and generalization in low-resource stance detection scenarios.
Figure 3: Performance of different training scales on the VAST (10%) dataset. COLA and FOLAR are the best competitors among ZSPM and LEM methods, respectively.
Figure 2: Ablation Test Results for ZSSD.
Figure 4: ZSSD baselines with cognitive schema on $\mathrm{SEM16}_{Avg}$ and VAST (100%).
# 5.4 ZSSD Baselines with Cognitive Schema
We also investigate whether our cognitive schema can enhance existing ZSSD models, including BiLSTM (Augenstein et al., 2016), CrossNet (Du et al., 2017), Bert-Joint (Liu et al., 2021), and JointCL (Liang et al., 2022b). As shown in Figure 4, “+CS” indicates the integration of our cognitive schema. Specifically, we learn the schema using a GCN and concatenate it with the model’s output representation before the softmax layer for final stance prediction. The results demonstrate that the addition of our cognitive schema consistently improves all baseline models. For instance, BiLSTM+CS achieves an F1 improvement of 25.9 points over vanilla BiLSTM, and similar trends are observed for other models. Notably, even lightweight models such as BiLSTM, when enhanced with our cognitive schema, can reach performance levels comparable to current state-of-the-art methods. This strongly suggests that our approach provides a general and architecture-agnostic enhancement for ZSSD.
Figure 5 content — Text: “Why not? This protects both the officer and the civilian and it keeps things transparent. Then it would not be simply a matter of opinion when things go awry. It will be on videotape. BUT how much will it cost to store all this data and for how long? Hmmm....” Target: body camera. Gold stance: Pro. COLA predicts Con (incorrect); CIRF predicts Pro (correct), matching schema nodes n1: Safety, n2: Accessible to the general public, n3: Ownership of personal information, and n4: Limited negative impact on multiple aspects.
# 5.5 Case Study
Figure 5: Case Study.
Figure 6: Impact of Filter Number.
We present an illustrative example where CIRF succeeds while a strong baseline fails. As shown in Figure 5, the input expresses overall support for body cameras, citing benefits such as enhanced safety, transparency, and protection of personal information. However, the mention of storage costs, introduced with an adversative “BUT”, creates a nuanced argument with both positive and negative elements. The baseline model misclassifies this instance, likely due to overemphasizing the negative clause and being misled by the mixed sentiment. In contrast, CIRF accurately identifies the supportive stance by matching key aspects of the text to schema nodes—n1 (safety), n2 (public accessibility), and n3 (personal information ownership)—while correctly interpreting the cost concern as minor (n4). CIRF’s schema-guided reasoning integrates these cues, showing that the positive arguments outweigh the limited negative impact. This case highlights CIRF’s ability to generate robust and interpretable predictions in the presence of complex, nuanced opinions, outperforming models that rely primarily on surface lexical cues.
# 5.6 Analysis of the Number of Filters
We examine how varying the number of schema filters affects model performance. As shown in Figure 6, $F_{avg}$ on VAST, $\mathrm{SEM16}_{avg}$, and COVID-19$_{avg}$ remains highly stable as the filter count increases from 8 to 64, with fluctuations within 0.5 points. This result demonstrates that CIRF requires only a modest number of schema filters to capture the essential, transferable reasoning patterns for zero-shot stance detection. The stability suggests that the induced cognitive schemas are compact yet expressive, and that increasing the filter count beyond a certain threshold yields diminishing returns. Importantly, this robustness makes CIRF easy to tune and deploy, unlike many deep models that are sensitive to hyperparameter choices. It also empirically supports our design hypothesis: stance reasoning across domains can be well-abstracted by a concise set of logic-based schemas, enhancing both interpretability and generalization. Overall, CIRF’s performance is both effective and robust with respect to this key hyperparameter, facilitating practical application in diverse and resource-constrained settings.

Abstract. Zero-shot stance detection (ZSSD) aims to identify the stance of text toward previously unseen targets, a setting where conventional supervised models often fail due to reliance on labeled data and shallow lexical cues. Inspired by human cognitive reasoning, we propose the Cognitive Inductive Reasoning Framework (CIRF), which abstracts transferable reasoning schemas from unlabeled text and encodes them as concept-level logic. To integrate these schemas with input arguments, we introduce a Schema-Enhanced Graph Kernel Model (SEGKM) that dynamically aligns local and global reasoning structures. Experiments on SemEval-2016, VAST, and COVID-19-Stance benchmarks show that CIRF establishes new state-of-the-art results, outperforming strong ZSSD baselines by 1.0, 4.5, and 3.3 percentage points in macro-F1, respectively, and achieving comparable accuracy with 70% fewer labeled examples. We will release the full code upon publication.
# 1 Introduction
A GUI agent is an intelligent system capable of autonomously interacting with graphical user interfaces by perceiving visual elements, understanding task objectives, and executing corresponding actions [15]. The development of GUI agents holds significant promise for automating GUI operations, reducing human workload and enabling seamless human-computer collaboration. Traditionally, GUI automation has relied on heuristic rules, brittle scripts, or hard-coded templates, which limit flexibility and generalization [9] [20]. Recently, the emergence of Multimodal Large Language Models (MLLMs) [2] [16] [22] has accelerated GUI agent research. By integrating visual perception with language understanding, MLLMs enable agents to interpret complex screen layouts, comprehend natural language instructions, and reason about sequential actions in open-ended environments [6] — marking a significant advance toward fully automated GUI interaction.
In this rapidly evolving field, the construction of high-quality datasets is essential for benchmarking and advancing GUI agent research. However, existing datasets are typically constructed under idealized conditions and fail to capture the diverse range of anomalies encountered in real-world deployment. In practice, GUI agents deployed in industrial or consumer applications frequently encounter unexpected failure modes and environmental disturbances, such as action failures, obstructive pop-up advertisements, network disconnections, etc. These anomalies can significantly disrupt execution flow, leading to task failures or, in severe cases, unintended interactions with sensitive interface components, potentially resulting in erroneous or hazardous outcomes.
Figure 1: Comparison between (a) traditional manual data collection and (b) our proposed RevAct pipeline. Manual pipelines require annotators to explicitly design tasks, plan execution steps, demonstrate actions, and annotate each step. In contrast, RevAct records natural user behavior on websites/apps and leverages automated tools (e.g., RPA, detection models, and LLMs) to infer action semantics and generate step/task descriptions, which are finally verified by humans.
New Dataset: GUI-Robust. To address this gap, we present GUI-Robust, the first dataset designed to evaluate robustness in the presence of abnormal scenarios. GUI-Robust contains 5,318 annotated tasks (consisting of task descriptions and user behavior sequences) collected from 392 diverse sources, spanning both websites and third-party desktop applications on Windows. Notably, it includes 200 abnormal tasks covering 7 common types of anomalies encountered in everyday GUI usage: action failure, login page, captcha page, ad pop-up, cookie pop-up, page loading, and network disconnection. This enables rigorous evaluation of GUI agent robustness against real-world anomalies, a critical aspect for practical deployment.
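For concreteness, the seven anomaly categories could be encoded as a simple enumeration when building an evaluation harness; this is an illustrative sketch, not part of the released toolkit:

```python
from enum import Enum

class Anomaly(Enum):
    """The seven anomaly types covered by GUI-Robust's abnormal tasks."""
    ACTION_FAILURE = "action failure"
    LOGIN_PAGE = "login page"
    CAPTCHA_PAGE = "captcha page"
    AD_POPUP = "ad pop-up"
    COOKIE_POPUP = "cookie pop-up"
    PAGE_LOADING = "page loading"
    NETWORK_DISCONNECTION = "network disconnection"

print(len(Anomaly))  # 7 anomaly types
```

Tagging each abnormal task with such a category makes it straightforward to report per-anomaly robustness scores.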
Beyond its unique property of robustness, as shown in Table 1, GUI-Robust offers several advantages for comprehensive evaluation: (1) it includes a broad range of task and action types, including clicking, inputting text, retrieving information from a page, opening a new website or app, and reporting anomalies to a human; (2) it incorporates cross-scenario tasks spanning multiple applications or websites, reflecting more realistic and complex workflows; and (3) it covers both Chinese and English software environments. A comparison between GUI-Robust and existing benchmarks is shown in Table 1. These features ensure that our dataset closely aligns with real-world usage, supporting thorough assessment of agent robustness, adaptability, and generalization.
Novel Data Collection Strategy: RevAct. Traditional dataset construction typically relies on a manual pipeline — ranging from task design to execution planning and manual demonstration — which is labor-intensive, costly, and requires significant domain expertise. To address these limitations, we propose a novel data collection strategy, RevAct, which reverses the conventional workflow and enables semi-automated dataset generation: we first collect user action sequences from natural interactions via RPA (Robotic Process Automation) tools, and then generate specific step and task descriptions for these actions with the assistance of MLLMs. This approach substantially reduces annotation costs, as expert involvement is limited to reviewing and revising step and task descriptions generated by MLLMs. Specifically, for each user action sequence, we leverage YOLOv8 for GUI element detection and Qwen2.5-VL for task generation and summarization, achieving over $71\%$ accuracy in automatic task generation. Human annotators are only required for minimal correction, reducing annotation time by a factor of more than 19 compared to the traditional pipeline.
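The reversed workflow can be sketched as a small pipeline skeleton. The callables `detect_elements`, `describe_step`, and `summarize_task` are hypothetical stand-ins for the YOLOv8 detector and the Qwen2.5-VL prompts described above; the trivial lambdas in the usage example exist only to show the data flow.

```python
def revact_pipeline(action_trace, screenshots, detect_elements,
                    describe_step, summarize_task):
    """Reverse pipeline: record actions first, generate descriptions second."""
    steps = []
    for action, shot in zip(action_trace, screenshots):
        elements = detect_elements(shot)               # GUI element detection
        steps.append(describe_step(action, elements))  # per-step description
    task_description = summarize_task(steps)           # overall task summary
    return task_description, steps                     # then verified by a human

# Toy usage with stub components (real pipeline: RPA trace + YOLOv8 + Qwen2.5-VL).
trace = [("click", (120, 40)), ("type", "weather Beijing")]
shots = ["s0.png", "s1.png"]
task, steps = revact_pipeline(
    trace, shots,
    detect_elements=lambda s: ["search_box"],
    describe_step=lambda a, els: f"{a[0]} on {els[0]}",
    summarize_task=lambda st: "; ".join(st))
print(task)
```

Structuring the pipeline around injected components keeps the recording, detection, and description stages independently replaceable, which matches the semi-automated design where only the final human verification touches the generated text.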
Table 1: Statistics of GUI-Robust compared with existing datasets.
Comprehensive Experiments. GUI-Robust provides a thorough benchmark for evaluating GUI agents on element grounding, multi-step task completion, cross-scenario execution, and robustness under abnormal conditions. We conduct extensive experiments with three representative MLLMs and two specific-designed GUI agents, demonstrating that all models experience significant performance degradation in abnormal scenarios. This underscores that robust GUI understanding in real-world settings remains a challenging problem. We hope our work will draw greater attention to the robustness of GUI agents and inspire further research in this direction based on our dataset.
# 2 Related Work
# 2.1 GUI Agents
Early progress in GUI agent research has been largely driven by the rapid advancement of multimodal large language models (MLLMs), such as GPT-4 [17], Gemini-2.5 [22], Qwen-VL [1] and Claude 3.5. These foundation models—capable of processing both visual and textual inputs—enable agents to interpret screen layouts, follow natural language instructions, and reason about GUI actions in open-ended environments. Recently, there has been a surge in the development of GUI-specific agents, which fine-tune MLLMs or build on their architecture to improve performance in structured GUI environments. Notable examples include: CogAgent [10], built on the GLM-VLM [8] framework, which achieves improvements in perception, action space coverage, and reasoning via staged training; ShowUI [12], trained on top of Qwen2-VL [23], with enhancements for UI tokenization and cross-system navigation; UI-TARS [18], which integrates Qwen2-VL [23] with additional task planning and grounding modules, excelling in complex GUI manipulation tasks. In particular, two fundamental capabilities have emerged as essential for effective GUI automation: element grounding, which refers to the ability to accurately locate relevant UI elements on the screen, and task completion, which involves determining the correct type and content of interaction at each step.
# 2.2 GUI Benchmarks
Benchmark datasets are fundamental to the development and evaluation of GUI agents. A key capability required for effective GUI interaction is accurate element grounding. For this purpose, ScreenSpot[5] serves as a dedicated benchmark, offering both mobile and web-based examples to evaluate agents’ spatial grounding capabilities. For task completion, Mind2Web[7] stands out as a pioneering benchmark built in real-world web environments rather than simplified simulations. WebVLN[4] focuses on navigation and QA within shopping websites, while WebLINX[14] introduces multi-turn interactions in realistic web interfaces. Recently, some datasets have emerged that move toward dynamic GUI environments. OSWorld[25] and WindowsAgentArena[3] are two benchmark platforms targeting GUI agents in Ubuntu and Windows operating systems, respectively. However, such dynamic testing is time-consuming and demands high setup cost. In domain-specific research, WebWalkerQA[24] emphasizes web-based information retrieval and question answering. On the Android platform, PIXELHELP[11] and AITW (Android in the Wild)[19] offer static mobile task datasets, while META-GUI[21] focuses on multi-turn dialog interactions. GUI Odyssey[13] proposes cross-app task scenarios. Despite these advances, existing datasets are mostly constructed under ideal conditions, failing to address the irregularities and exceptions commonly found in real-world industrial scenarios. These abnormal situations are critical for evaluating agent robustness. WorldGUI[26] attempts to address this by exploring the impact of varying initial states, but many real-world anomalies remain unaddressed. To fill this gap, we introduce GUI-Robust, a benchmark specifically designed to evaluate agent robustness under abnormal GUI scenarios.
Table 2: Composition of the GUI-Robust Dataset
# 2.3 Data Collection Method for GUI Benchmark
Most existing GUI datasets are collected with a manual pipeline ranging from task design to execution planning and manual demonstration. For instance, Mind2Web[7] outlines a process involving Website Selection, Task Proposal, Task Demonstration, and Task Verification, all of which heavily depend on human effort. Similarly, WebLINX[14] requires annotators to manually design tasks and engage in complex dialogues or task edits. To improve efficiency, some datasets leverage LLMs to accelerate task design. Mind2Web[7] uses ChatGPT to generate seed tasks that inspire human annotators. GUI Odyssey[13] employs a template-based approach, where annotators design reusable templates and GPT-4 replaces entities (e.g., app or item names) to generate multiple variants. However, these methods remain labor-intensive, as they only accelerate the task-design procedure and still require annotators for action planning and execution, which is time-consuming. A more autonomous approach is seen in WebWalkerQA[24], where GPT-4o[16] generates QA pairs based solely on webpage screenshots. While promising, this method is currently limited to QA-type tasks and does not generalize to broader GUI tasks. To overcome these limitations, we propose a reverse data collection paradigm that collects user action sequences from natural interactions and then generates corresponding step and task descriptions for these actions with the assistance of MLLMs. Human annotators are only required for reviewing and revising step and task descriptions, which is highly efficient.
# 3 GUI-Robust Dataset
# 3.1 Overview
GUI-Robust comprises 5,318 annotated tasks collected from 392 distinct websites and desktop applications. Each task (i.e., data instance) consists of a task description and a sequence of user behaviors. Notably, GUI-Robust includes 200 tasks under abnormal scenarios, encompassing seven distinct types of anomalies commonly encountered in everyday GUI interactions. This enables rigorous evaluation of GUI agent robustness under real-world conditions. In addition to this unique feature, GUI-Robust offers a greater diversity and complexity of tasks, such as those involving information retrieval and cross-scenario operations. Furthermore, our dataset spans a wide range of 49 domains and covers nearly all commonly used platforms within the Chinese internet ecosystem. These characteristics ensure that GUI-Robust is closely aligned with real-world usage, thereby supporting comprehensive assessment of agent robustness, adaptability, and generalization. The dataset statistics are presented in Table 1 and 2.
To promote ease of use, we release a lightweight evaluation toolkit for GUI-Robust (see details in Appendix D), which enables users to quickly evaluate existing models or integrate their own with minimal effort.
# 3.2 Metadata Definition
Task Formulation. For a specific task, its execution by a GUI agent can be represented as a sequence of actions paired with corresponding UI element coordinates at each step, as illustrated by the example in Fig. 3. Formally, given a task description $T$ and a screenshot $S _ { i }$ at step $i$ , the agent $\mathcal { G }$ generates an action $A _ { i }$ and selects a UI element $E _ { i }$ , represented by its screen coordinates $( x _ { i } , y _ { i } )$ . This step-wise interaction is defined as:
$$
R _ { i } = ( A _ { i } , E _ { i } ) = \mathcal { G } ( S _ { i } , T )
$$
Upon task completion, the full execution result is represented as a sequence:
$$
R = \left\{ \{ ( A _ { i } , E _ { i } ) \} _ { i = 1 } ^ { n } ,\; T \right\}
$$
where $n$ denotes the total number of steps required to complete the task.
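The formulation above corresponds to a simple closed loop over screenshots. A minimal sketch follows; `agent_policy` is a hypothetical stand-in for the agent $\mathcal{G}$, not part of the dataset:

```python
# Sketch of the step-wise interaction R_i = (A_i, E_i) = G(S_i, T).
# `agent_policy` is a hypothetical stand-in for a real GUI agent model.

def agent_policy(screenshot, task):
    # Toy policy: always clicks a fixed coordinate.
    return ("click", (100, 200))

def run_task(task, screenshots):
    """Execute a task over a sequence of screenshots, collecting (A_i, E_i)."""
    trajectory = []
    for s_i in screenshots:
        action, element = agent_policy(s_i, task)
        trajectory.append((action, element))
    return {"steps": trajectory, "task": task}

result = run_task("Search for the best-selling comic book on Amazon",
                  ["screen_1.png", "screen_2.png"])
print(len(result["steps"]))  # the number of steps n
```

A real agent would, of course, condition its action on the screenshot contents and the interaction history rather than returning a constant.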
Dataset Structure. GUI-Robust is composed of a collection of annotated tasks, each comprising a task description $T$ and a sequence of screenshots $S = \{ S _ { 1 } , S _ { 2 } , \ldots , S _ { n } \}$ . The task description $T$ is provided in natural language, reflecting realistic user instructions (e.g., "Search for the best-selling comic book on Amazon"), while the sequence $S$ represents the screenshots captured at each interaction step. For each screenshot $S _ { i }$ , we also provide the corresponding ground-truth step description (e.g., "Click on the ’Account & Lists’ text"), the action performed (e.g., "Click"), and the element location. These annotations facilitate the evaluation of the agent’s element grounding and task completion capabilities. See Appendix A for details on the data format structure.
Element Type. In this dataset, we categorize UI elements into three types: icon, text, and box. The icon type represents interactive UI elements that are abstract icons with no textual content. The text type refers to UI elements containing text, such as buttons or links. The box type represents input fields, where the task involves entering content into a text input box. This classification allows us to assess the localization capabilities of different models across various types of UI elements, providing insights into their performance in identifying and interacting with different element categories.
Action Space. The action space in GUI-Robust comprises six distinct action types: click, input, get_info, open, wait, and human. Among these, click, input, get_info, and open are four standard actions, corresponding to “click an element”, “input the content”, “retrieve information from the interface”, and “open the website or application”, respectively. In addition to these standard actions, GUI-Robust incorporates two specialized action types to address anomalous scenarios: wait denotes “wait for response” and human denotes “require human intervention”. More details about these actions are provided in Appendix A.
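Putting the metadata together, a single annotated task might look like the following. The field names here are illustrative assumptions; the actual data format is specified in Appendix A:

```python
# Illustrative (not official) annotated-task instance; field names are our
# own assumptions, the real schema is documented in Appendix A.
instance = {
    "task": "Search for the best-selling comic book on Amazon",
    "steps": [
        {"screenshot": "step_1.png",
         "description": "Click on the 'Account & Lists' text",
         "action": "click",
         "element": {"type": "text", "bbox": [880, 40, 980, 70]}},
        {"screenshot": "step_2.png",
         "description": "Input 'comic book' into the search box",
         "action": "input",
         "element": {"type": "box", "bbox": [200, 30, 700, 60]}},
    ],
}

# The six-action space described above.
ACTIONS = {"click", "input", "get_info", "open", "wait", "human"}
assert all(s["action"] in ACTIONS for s in instance["steps"])
```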
# 3.3 Abnormal Scenarios
By collecting and analyzing 5,318 real user interaction sessions across a diverse range of websites and desktop applications, we identified seven types of commonly encountered anomalies (See Fig. 4):
• Action Failure: The agent’s previous action does not trigger the expected UI response (e.g., a button click has no effect).
• Login Page: A login prompt appears unexpectedly, requiring authentication before proceeding.
• Captcha Page: The agent encounters a CAPTCHA challenge that it cannot autonomously solve.
• Ad Pop-up: Advertisements pop up and obscure key UI elements, disrupting the intended interaction flow.
• Cookie Consent Pop-up: A cookie consent dialog appears and must be dismissed before interacting with the main interface.
• Page Loading Delay: The page remains in a loading state for an extended time, preventing access to target elements.
• Network Disconnection: The interface fails to load due to temporary or complete loss of network connectivity.
These scenarios emerged organically from natural human activity on the internet and reflect common disruptions that hinder successful task completion. We introduce the wait and human actions so that agents can either handle such a problem or flag it for intervention (see Table 6). This observation motivated us to explicitly incorporate these real-world failure cases into our dataset design, in order to enable more robust benchmarking and guide the development of agents that can operate reliably in practical environments.
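As an illustration only, an agent-side policy might map anomaly types to the specialized actions like this. The specific mapping below is our assumption; the dataset's actual expected actions per anomaly are listed in Table 6:

```python
# Hypothetical mapping from anomaly type to the action an agent might emit.
# The dataset's ground-truth expectations are given in Table 6 of the paper.
ANOMALY_ACTION = {
    "action_failure":        "wait",   # waiting and retrying may help
    "login_page":            "human",  # credentials require a person
    "captcha_page":          "human",  # CAPTCHAs resist automation
    "ad_popup":              "click",  # dismiss the overlay (assumption)
    "cookie_consent_popup":  "click",  # accept/decline to proceed (assumption)
    "page_loading_delay":    "wait",
    "network_disconnection": "human",
}

def react(anomaly):
    # Default to requesting human intervention for unrecognized anomalies.
    return ANOMALY_ACTION.get(anomaly, "human")

print(react("captcha_page"))
```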
Figure 2: Overview of the RevAct pipeline for semi-automated data collection. Human interaction traces are captured by RPA tools, then processed by a YOLOv8 detector and OCR to extract element type, location, and action content. A multimodal LLM (Qwen2.5-VL) generates step-wise and task-level descriptions, enabling efficient and scalable construction of annotated GUI tasks.
# 4 Data Collection Method
# 4.1 Existing Limitations
In conventional pipelines, both step and task descriptions and the corresponding action sequences are recorded by humans to construct the dataset. However, this approach is labor-intensive and inefficient, particularly in the following two respects: 1) Task design typically depends on human heuristics, making the process cognitively demanding and time-consuming. Furthermore, the scope of manually designed tasks is often limited, and the curated task patterns frequently diverge from real-world user behaviors. 2) For recording action sequences, human annotators are required to have substantial familiarity with the target websites and desktop applications, which necessitates specialized training and often multiple iterations to produce accurate records. Even for annotators with considerable expertise, our experiments demonstrate that the average annotation time per task exceeds 15 minutes.
# 4.2 New Data Collection Strategy: RevAct
To address these challenges, we propose a new data collection pipeline that reverses the conventional workflow and enables semi-automated dataset generation. The basic idea is to collect user action sequences from natural interactions, and then generate specific step and task descriptions for these actions with the assistance of MLLMs. This strategy avoids the time-consuming steps of task design and manual demonstration; human annotators are only involved to review and revise the step and task descriptions generated by MLLMs. The pipeline is illustrated in Fig. 2. Specifically, the data collection consists of the following four steps:
1. Capture — Screenshot and Element Coordinate Capture: We begin by collecting user action sequences derived from natural interactions. Specifically, we recruit volunteers and record their routine activities on familiar websites and desktop applications. During these sessions, a Robotic Process Automation tool (Indeed-Intelligence RPA) automatically captures both screenshots and the coordinates of UI elements for each user action. In contrast to traditional approaches, which require annotators to perform pre-defined tasks demanding expertise and extensive planning, our data collection process is highly efficient and requires no specialized knowledge: arbitrary volunteers can complete it without expert annotators, and the associated economic cost is negligible. Moreover, this method ensures that the collected data more accurately reflects real-world user behavior.
2. Interpretation — Action Recognition: The recorded screenshots and element coordinates are fed into a YOLOv8-based GUI element detection model, which identifies the type of UI element interacted with at each step. Subsequently, we establish a one-to-one mapping between element types and action types. For example:
• If the operated UI element is a clickable icon or text, the action type is mapped to click.
• If the UI element is an input box, the action type is mapped to text input.
Unfortunately, we observed that YOLOv8 demonstrates limited accuracy in recognizing other types of actions. However, the number of other action types is relatively small, and we simply leave them for subsequent revision. Finally, we use an OCR tool to extract the content of the UI element, supplementing the action’s semantic information.
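The type-to-action mapping in this Interpretation step can be sketched as follows. The `needs_revision` flag and function names are our own devices for marking the residual cases left to the Revise step:

```python
# One-to-one mapping from detected element type to action type, as in the
# Interpretation step; OCR text supplements the action's semantics.
TYPE_TO_ACTION = {"icon": "click", "text": "click", "box": "input"}

def interpret(detected_type, ocr_text=None):
    action = TYPE_TO_ACTION.get(detected_type)
    if action is None:
        # Element types without a mapped action are left for human revision.
        return {"action": "unknown", "needs_revision": True}
    step = {"action": action, "needs_revision": False}
    if ocr_text:
        step["content"] = ocr_text  # e.g., the text typed into an input box
    return step

print(interpret("box", ocr_text="comic book"))
```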
3. Summarize — Step and Task Description Generation: We feed the previously collected screenshots and the identified actions into Qwen2.5-VL [2]. The model generates a step-by-step description based on this information, then summarizes all step descriptions to produce a task-level description, thereby completing the construction of a fully annotated data instance. In the prompt provided to the multimodal model, we explicitly instruct it to follow two key principles: (1) The task description should be a natural language summary of the overall task goal, rather than a simple concatenation of the step descriptions. (2) The description should emphasize the outcome of the final step, as this often captures the core objective of the task. At the same time, it should incorporate relevant details from intermediate steps to ensure that the resulting instruction is both semantically accurate and operationally complete.
4. Revise — Refine the Step and Task Descriptions: We employ human annotators to review and refine the step and task descriptions generated by MLLMs. Given that MLLMs exhibit relatively high accuracy, this process is efficient, as annotators are required to revise only a small subset of the descriptions.
# 4.3 Empirical Results
To validate the effectiveness of the proposed method, we conducted a feasibility evaluation by having human annotators review and correct the task data generated by the RevAct method. The evaluation process focused on the following three aspects:
• Accuracy of step descriptions: Whether each step description accurately reflects the UI element and its content corresponding to the operation.
• Consistency of task descriptions: Whether the task description aligns with the actual intent and all steps.
• Completeness of the task description: Whether the task description contains sufficient information to reproduce the full interaction sequence—i.e., whether a human can follow it to reproduce the entire sequence of actions.
In the experiment, we tested the descriptions generated by the RevAct method and invited 10 experienced annotators to independently correct and assess their quality. The results showed that, for standard tasks in Chinese, the step and task description accuracies were $78.63\%$ and $72.57\%$ , respectively. In comparison, for standard tasks in English, the step and task description accuracies reached $71.71\%$ and $86.67\%$ . These results indicate that the RevAct method can generate accurate step and task text across different languages and task complexities.
In terms of data collection efficiency, RevAct significantly reduced the workload of annotators compared to traditional fully manual methods. Specifically, collecting and processing 100 task samples using traditional methods (see details in Appendix E) typically requires 10 annotators and takes about 150 minutes, whereas the RevAct method required only 10 annotators and 7.8 minutes to review and finalize the same number of collected samples. This represents an over $19\times$ improvement in efficiency, highlighting the method’s significant advantage in reducing human labor costs and accelerating data construction.
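The reported speedup follows directly from these timings (a quick check):

```python
# Efficiency gain implied by the reported annotation times for 100 samples.
manual_minutes = 150.0   # traditional pipeline, 10 annotators
revact_minutes = 7.8     # RevAct review-and-finalize, same 10 annotators

speedup = manual_minutes / revact_minutes
print(round(speedup, 1))  # ~19.2, i.e., the "over 19x" improvement
```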
# 5 Experiment
# 5.1 Experiment Setup
We evaluate 5 representative models on the GUI-Robust dataset, including 3 general-purpose multimodal large models with visual input and understanding capabilities—GPT-4o [16], Qwen2.5-VL [2] and Gemini 2.5-Flash-Preview [22]—and two pretrained agents explicitly designed for GUI automation—UI-TARS [18] and CogAgent [10].
Table 3: Performance on the UI Element Grounding. Bold indicates the best performance across all models, while underlined values highlight the best among general-purpose multimodal models (MLLMs).
We design 2 evaluation types, comprising a total of 5 evaluation tasks:
UI Element Grounding: We randomly sample 1,028 single-step trajectories from the Standard Tasks subset. Given a step description and its corresponding screenshot, models are evaluated on: (a) Action Accuracy (Action Acc.): both action type and content must match the ground truth. (b) Coordinate Accuracy (Coord. Acc.): predicted coordinates must fall within the bounding box of the target UI element.
Task Completion: We assigned the 5 models to execute a set of tasks sampled from GUI-Robust: 302 Standard Tasks (randomly selected from the 4,925-task subset) and 197 Abnormal Tasks. During Task Completion evaluation, the model receives three inputs at each step: (a) the task description, (b) the current page screenshot, and (c) a history of previously predicted actions and UI element coordinates. Based on this context, the model is required to generate the next action and its corresponding element location, progressing until the entire task is completed. We evaluate: (a) Action Acc., (b) Coord. Acc., and (c) Task Success Rate (SR): the task is considered successful only if all actions and coordinates match the ground truth across the full trajectory.
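Under stated assumptions (axis-aligned bounding boxes and exact action matching), the coordinate and success metrics can be sketched as:

```python
# Sketch of the evaluation metrics: Coord. Acc. counts a prediction correct
# if it falls inside the target element's bounding box; Task SR requires
# every step of the trajectory to match the ground truth.

def coord_hit(pred_xy, bbox):
    """bbox = (x_min, y_min, x_max, y_max)."""
    x, y = pred_xy
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1

def task_success(pred_steps, gold_steps):
    """Strict success: every action matches and every coordinate hits."""
    if len(pred_steps) != len(gold_steps):
        return False
    return all(a == ga and coord_hit(xy, gbox)
               for (a, xy), (ga, gbox) in zip(pred_steps, gold_steps))

gold = [("click", (0, 0, 100, 50)), ("input", (200, 30, 700, 60))]
pred = [("click", (40, 25)), ("input", (300, 45))]
print(task_success(pred, gold))  # True for this toy trajectory
```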
# 5.2 Results
UI Element Grounding. Table 3 presents the results of various models on the UI Element Grounding task. We draw several key observations:
1. UI-TARS and CogAgent significantly outperform general-purpose MLLMs on grounding accuracy. UI-TARS achieves the highest overall localization accuracy ($78.31\%$) across all element types, followed closely by CogAgent ($77.12\%$). This demonstrates the advantage of GUI-specific pretrained models in accurately grounding UI elements, especially for visually complex or fine-grained components. All MLLMs underperform significantly in overall localization ($\leq 7.20\%$), highlighting a persistent gap in visual grounding capabilities when applied to GUI domains;
2. General-purpose MLLMs like GPT-4o and Qwen2.5-VL show strong action recognition but weak spatial grounding. GPT-4o-2024 achieves the best action accuracy at $90.56\%$ , and Qwen2.5-VL 72B follows with $89.20\%$ . However, both models struggle to localize elements precisely—e.g., GPT-4o achieves only $3.21\%$ overall localization accuracy, suggesting that while it understands what to do, it often fails to locate where to do it;
3. Different models specialize in different UI element types. CogAgent excels at localizing input boxes ($89.50\%$) and icon elements ($78.65\%$). UI-TARS performs best on text-based elements ($84.63\%$). Interestingly, Gemini2.5-Flash achieves the highest accuracy on box elements ($22.02\%$) among the MLLMs, despite lower overall performance.
Task Completion. Table 4 presents a comparative analysis of model performance on full-task execution under both normal and abnormal conditions. Several key observations emerge:
1. Under normal conditions, GUI-specific agents outperform general-purpose models. UI-TARS and CogAgent demonstrate significantly higher task success rates ($21.61\%$ and $14.38\%$ , respectively)
Table 4: Performance on Task Completion evaluation. Bold indicates the best performance across all models, while underlined values highlight the best among general-purpose multimodal models (MLLMs).
and element localization accuracy ($62.33\%$ and $54.32\%$) than LLM-based models such as GPT-4o or Qwen2.5-VL. This suggests that specialized training on GUI interactions yields clear benefits in structured, expected environments.
2. In abnormal settings, general-purpose LLMs show stronger anomaly awareness but poor spatial grounding. Despite lower task success rates, models like GPT-4o and Qwen2.5-VL maintain higher action accuracy ($55.07\%$ and $48.54\%$ , respectively) compared to GUI-specific agents. This indicates that LLMs are more sensitive to abnormal cues (e.g., detecting login pop-ups or page errors), but fail to locate or interact with the appropriate UI elements—limiting their ability to recover or proceed correctly.
3. GUI-specific agents struggle with unanticipated anomalies. These models often lack mechanisms to recognize or adapt to unexpected interface changes. For example, during a task on the "58.com" website, UI-TARS encounters a sudden login page. Instead of detecting and reporting the abnormal transition, it predicts the action "click 58.com", which causes an infinite redirection loop. This failure illustrates the rigidity of GUI-specific models when exposed to disruptions outside their training distribution.
4. Incorrect behavior under anomalies can lead to critical consequences. In real-world applications, failure to handle abnormal states may result in hazardous operations such as accidental deletions, repeated entry of sensitive credentials, or navigation to malicious or unintended pages. Such behaviors not only reduce task reliability but also raise serious safety and data integrity concerns. These results highlight the need for robust anomaly-handling capabilities in GUI agents—not only to succeed in idealized environments, but also to detect, report, and gracefully recover from irregularities.
# 6 Discussion
Limitations. While GUI-Robust offers a substantial step toward evaluating agent robustness in realistic GUI environments, several limitations remain. First, the abnormal scenarios in our dataset—though diverse—are still limited to 7 predefined categories. Real-world applications may involve more complex or compound failure modes (e.g., cascading authentication prompts, adaptive overlays, operations with insufficient permissions), which are not currently covered. Second, GUI-Robust adopts a static dataset and evaluation paradigm, where each task is annotated with a single canonical execution trajectory.
Future Work. Future extensions of GUI-Robust aim to address these limitations along two directions. First, we plan to enrich the spectrum of abnormal scenarios by including more diverse failure types—such as dynamic Captcha variants, pop-ups requiring multi-step dismissal, and access control violations. Second, we envision evolving GUI-Robust into a dynamic evaluation platform, where agents interact with simulated environments rather than static trajectories. In such settings, evaluation criteria can move beyond exact action or coordinate matching to focus on goal-oriented success, state transition correctness, and error recovery ability. This shift will better reflect real-world deployment conditions and enable the development of agents that are not only accurate but also flexible, adaptive, and robust. | The development of high-quality datasets is crucial for benchmarking and
advancing research in Graphical User Interface (GUI) agents. Despite their
importance, existing datasets are often constructed under idealized conditions,
overlooking the diverse anomalies frequently encountered in real-world
deployments. To address this limitation, we introduce GUI-Robust, a novel
dataset designed for comprehensive GUI agent evaluation, explicitly
incorporating seven common types of anomalies observed in everyday GUI
interactions. Furthermore, we propose a semi-automated dataset construction
paradigm that collects user action sequences from natural interactions via RPA
tools and then generates corresponding step and task descriptions for these
actions with the assistance of MLLMs. This paradigm significantly reduces
annotation time cost by a factor of more than 19. Finally, we assess
state-of-the-art GUI agents using the GUI-Robust dataset, revealing their
substantial performance degradation in abnormal scenarios. We anticipate that
our work will highlight the importance of robustness in GUI agents and inspire
more future research in this direction. The dataset and code are available at
https://github.com/chessbean1/GUI-Robust.
# 1 Introduction
Large language models (LLMs) [65, 63, 60, 5] have demonstrated remarkable generalization capabilities [51, 67, 72, 71] across a wide range of tasks [52, 23], but their inference cost [14, 55] grows rapidly with scale, hindering practical deployment and efficiency. Mixture-of-Experts (MoE) [8, 3, 36] architectures alleviate this problem by activating only a subset of experts per input [18], thus enabling greater model capacity without a commensurate increase in computational overhead [21, 48, 32]. To maximize parameter utilization, MoE systems typically introduce load balancing [54, 19] objectives that encourage a more uniform routing of tokens across experts during pre-training.
While load balancing is effective in avoiding idle experts during large-scale pre-training, it hinders model adaptation in the post-training stage for downstream tasks. A widely observed phenomenon is that load balancing encourages uniform expert routing across inputs, resulting in highly overlapping token distributions [13, 77]. This overlap leads to convergence in expert representations [45], ultimately compromising the development of specialized functionalities. The lack of specialization [13] becomes particularly problematic during fine-tuning [16, 58, 2, 78] on downstream tasks with strong domain preferences, where the model struggles to adapt and exhibits degraded performance [33].
This highlights a core challenge in MoE training: the inherent conflict between encouraging expert specialization [49, 37, 35] and enforcing routing uniformity [81] via auxiliary losses. From the expert perspective, load-balanced routing causes overlapping training intentions across experts [13, 44, 45, 6], suppressing the development of distinct expert behaviors. From the router perspective, as experts become less specialized, the router receives less variation across experts, leading to increasingly uniform and less informed token-to-expert assignments [80]. These dynamics form a self-reinforcing loop: diminished specialization and uniform routing exacerbate each other over time, progressively
Figure 1: Expert Specialization and Routing Diversification. The figure contrasts token assignment and routing outputs before and after training: expert load variance decreases while routing-score variance grows, compared with the balance-only baseline.
degrading both expert expressiveness and routing quality [19]. This compounding effect reveals a deeper limitation of existing training objectives, which lack mechanisms to decouple expert specialization from the uniformity constraints imposed by auxiliary losses.
To address this challenge, we propose a gradient-based multi-objective optimization framework that promotes expert specialization and routing diversification, while preserving load balance from auxiliary loss. We introduce two complementary objectives, as shown in Figure 1: 1) Expert Specialization, which fosters distinct expert representations by ensuring that each expert specializes in processing different tokens. 2) Routing Diversification, which drives differentiated routing decisions, enabling more precise token-to-expert assignments by enhancing the variance in routing. By jointly optimizing these objectives, our method mitigates the trade-off between model performance and routing efficiency in MoE training. We demonstrate that our approach successfully achieves:
• Enhanced expert–routing synergy. Our joint objectives reduce expert overlap by up to $45\%$ and increase routing score variance by over $150\%$ , leading to clearer specialization and more discriminative expert assignment.
• Stable load balancing. Despite introducing new objectives, our method matches the baseline’s MaxVio$_{global}$ across all models, with RMSE under 8.63 in each case.
• Improved downstream performance. We achieve $23.79\%$ relative gains across 11 benchmarks and outperform all baselines on $92.42\%$ of tasks, all without modifying the MoE architecture.
# 2 Motivation
# 2.1 Preliminaries of MoE
In a typical MoE layer, let there be $n$ experts, and a sequence of input tokens represented by $X = \{ x _ { 1 } , x _ { 2 } , \cdots , x _ { N } \}$ , where $N$ is the total number of tokens in the sequence. The routing score matrix after applying the top-$k$ mechanism is denoted as:
$$
S = \begin{pmatrix} s _ { 1 1 } & s _ { 1 2 } & \cdots & s _ { 1 n } \\ s _ { 2 1 } & s _ { 2 2 } & \cdots & s _ { 2 n } \\ \vdots & \vdots & \ddots & \vdots \\ s _ { N 1 } & s _ { N 2 } & \cdots & s _ { N n } \end{pmatrix} , \quad \sum _ { j = 1 } ^ { n } s _ { i j } = 1 , \quad i = 1 , 2 , \cdots , N
$$
where $s _ { i j }$ represents the routing weight assigned to the $i$ -th token for the $j$ -th expert.
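A minimal NumPy sketch of how such a row-normalized top-$k$ score matrix can be computed (function and variable names are our own): the $k$ largest logits per token are kept, softmaxed over the retained entries, and all other scores are zero, so each row sums to 1.

```python
import numpy as np

def topk_routing_scores(logits, k):
    """Row-normalized top-k routing scores: S[i] is zero outside the top-k
    logits of token i, and softmax-normalized over the retained entries."""
    N, n = logits.shape
    S = np.zeros_like(logits, dtype=float)
    for i in range(N):
        top = np.argsort(logits[i])[-k:]            # indices of the k largest
        e = np.exp(logits[i][top] - logits[i][top].max())
        S[i, top] = e / e.sum()                     # renormalize over top-k
    return S

rng = np.random.default_rng(0)
S = topk_routing_scores(rng.normal(size=(4, 8)), k=2)
print(np.allclose(S.sum(axis=1), 1.0))  # True: each row sums to 1
```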
Let $F = \{ f _ { 1 } , f _ { 2 } , \cdots , f _ { n } \}$ represent the token assignment over experts, where $f _ { j }$ is the number of tokens assigned to the $j$ -th expert. For any given MoE layer, the total loss function $\mathcal { L }$ consists of two parts, the main loss $\mathcal { L } _ { h }$ and the auxiliary loss $\mathcal { L } _ { a u x }$ :
$$
\mathcal { L } = \mathcal { L } _ { h } + \alpha \cdot \mathcal { L } _ { a u x } = \mathcal { L } _ { h } + \alpha \sum _ { j = 1 } ^ { n } f _ { j } \cdot p _ { j } , \quad p _ { j } = \sum _ { i = 1 } ^ { N } s _ { i j } ,
$$
where $\mathcal { L } _ { h }$ is the loss computed from the output of the MoE layer, $\mathcal { L } _ { a u x }$ is the auxiliary loss term, and $\alpha$ denotes the weighting coefficient for the auxiliary loss. Here, $p _ { j }$ represents the total routing score for the $j$ -th expert, which is the sum of the routing weights for all tokens assigned to that expert.
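The auxiliary loss above can be sketched numerically. Following the text, $f_j$ is taken as the raw token count per expert and $p_j$ as the column sum of $S$ (the function name is our own):

```python
import numpy as np

def aux_loss(S, alpha=0.01):
    """L_aux = alpha * sum_j f_j * p_j, with f_j the token count routed to
    expert j and p_j = sum_i s_ij, per the definitions in the text."""
    f = np.count_nonzero(S, axis=0)   # tokens routed to each expert
    p = S.sum(axis=0)                 # total routing score per expert
    return alpha * float((f * p).sum())

# Two tokens, two experts, perfectly balanced top-1 routing:
S = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(aux_loss(S))  # 0.01 * (1*1 + 1*1) = 0.02
```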
# 2.2 Observations
Obs I (Expert Overlap): Introduction of the auxiliary loss function leads to a more homogenized distribution of tokens across experts, which may reduce the distinctiveness of each expert.
It has been observed that the auxiliary loss function is independent of the expert parameter matrices $\theta _ { E _ { j } }$ . Therefore, for the $j$ -th expert, its gradient can be written as:
$$
\frac { \partial \mathcal { L } } { \partial \theta _ { E _ { j } } } = \frac { \partial \mathcal { L } _ { h } } { \partial \theta _ { E _ { j } } } + \alpha \cdot \frac { \partial \mathcal { L } _ { a u x } } { \partial \theta _ { E _ { j } } } = \frac { \partial \mathcal { L } _ { h } } { \partial y _ { h } } \cdot \frac { \partial y _ { h } } { \partial \theta _ { E _ { j } } } , \quad \frac { \partial y _ { h } } { \partial \theta _ { E _ { j } } } = \sum _ { i = 1 } ^ { N } x _ { i } \cdot s _ { i j } , \quad j = 1 , 2 , \cdots , n ,
$$
where $\theta _ { E _ { j } }$ is the parameter matrix of the $j$ -th expert, and $y _ { h }$ is the output of the MoE layer. During gradient descent, the addition of the auxiliary loss $\mathcal { L } _ { a u x }$ forces the routing mechanism to evenly distribute the tokens across experts as much as possible.
This results in input token $x _ { i }$ being assigned to an expert that may not be semantically aligned with it, causing an unintended gradient flow to expert $j$ . Mathematically, after applying the top-$k$ mechanism, the routing score $s _ { i j }$ transitions from 0 to a non-zero value, introducing gradients from tokens that originally had no affinity with expert $j$ .
Obs II (Routing Uniformity): As training progresses, the routing output tends to become more uniform, with the expert weight distribution gradually converging towards an equal allocation.
To understand this phenomenon, we first examine the source of gradients with respect to the routing parameters $\theta _ { R }$ . Since the routing mechanism produces only the score matrix $S = ( s _ { i j } )$ , the gradient $\partial \mathcal { L } / \partial \theta _ { R }$ can be written as:
$$
\frac { \partial \mathcal { L } } { \partial \theta _ { R } } = \frac { \partial \mathcal { L } _ { h } } { \partial \theta _ { R } } + \alpha \cdot \frac { \partial \mathcal { L } _ { a u x } } { \partial \theta _ { R } } = \sum _ { i = 1 } ^ { N } x _ { i } \sum _ { j = 1 } ^ { n } \theta _ { E _ { j } } \cdot \frac { \partial s _ { i j } } { \partial \theta _ { R } } + \alpha \cdot \sum _ { j = 1 } ^ { n } f _ { j } \sum _ { i = 1 } ^ { N } \frac { \partial s _ { i j } } { \partial \theta _ { R } } ,
$$
where $x _ { i } \cdot \theta _ { E _ { j } }$ represents the output of expert $j$ for token $x _ { i }$ , and $f _ { j }$ denotes the frequency with which expert $j$ is selected. This formulation reveals that the routing gradient is primarily influenced by the expert outputs and the token distribution across experts.
The auxiliary loss $\mathcal { L } _ { a u x }$ is introduced to encourage balanced token assignment by optimizing the uniformity of $f _ { j }$ . However, since $f _ { j }$ is non-differentiable, direct optimization is not feasible. Instead, a surrogate variable $p _ { j }$ , which is differentiable and positively correlated with $f _ { j }$ , is employed to approximate the objective and enable gradient flow back to the routing network.
As training proceeds, the optimization objective increasingly favors the uniformity of $p _ { j }$ , which drives $f _ { j }$ toward an even distribution. Moreover, as discussed in Observation I, incorrect token assignments caused by auxiliary regularization introduce overlapping gradients among experts, increasing the similarity of $x _ { i } \cdot \theta _ { E _ { j } }$ across different $j$ .
Obs III (Expert–Routing Interaction): Obs I concerns expert specialization, while Obs II reflects the uniformity of routing. These two effects interact during training, jointly driving the model toward degraded performance.
• Expert-side interference caused by Obs I leads to blurred specialization. Tokens are assigned to mismatched experts, and the resulting gradient interference reduces expert distinctiveness. As the routing weights become more uniform, different experts receive similar gradients from the same tokens, increasing their functional overlap.
• This expert similarity feeds back into the routing mechanism. As expert outputs become less distinguishable, the routing network finds fewer cues to differentiate among experts, leading to even more uniform weight distributions. This promotes random top- $k$ selection and further misalignment between tokens and their optimal experts.
Together, this loop gradually steers the model toward more uniform token allocation and reduced expert specialization, highlighting potential opportunities for improving the routing strategy and expert assignment.
# 3 Method
Based on the observations above, we propose the following design to mitigate expert overlap and routing uniformity. The overall loss function $\mathcal{L}$ is defined as follows:
$$
\mathcal { L } = \mathcal { L } _ { h } + \mathcal { L } _ { b a l a n c e } , \quad \mathcal { L } _ { b a l a n c e } = \alpha \cdot \mathcal { L } _ { a u x } + \beta \cdot \mathcal { L } _ { o } + \gamma \cdot \mathcal { L } _ { v } ,
$$
where $\mathcal{L}_{aux}$ represents the existing auxiliary loss with coefficient $\alpha$ , and $\mathcal{L}_o$ and $\mathcal{L}_v$ are the newly introduced orthogonality loss and variance loss (see Subsection 3.1), with coefficients $\beta$ and $\gamma$ , respectively. The theoretical complementarity of these optimization objectives, rather than any inherent conflict, is formally analyzed and demonstrated in Subsection 3.2.
# 3.1 Implementations of Losses $\mathcal { L } _ { o }$ and $\mathcal { L } _ { v }$
In this section, we introduce two critical loss functions $\mathcal { L } _ { o }$ and $\mathcal { L } _ { v }$ that act on the expert and router components, respectively.
Expert Specialization. We introduce an orthogonalization objective that encourages independent expert representations. Specifically, we design the following orthogonality loss:
$$
\mathcal{L}_{o} = \sum_{i=1}^{N} \sum_{j=1}^{n} \sum_{k=1, k \neq j}^{n} \frac{\langle \tilde{x}_{ij}, \tilde{x}_{ik} \rangle}{\langle \tilde{x}_{ik}, \tilde{x}_{ik} \rangle} \tilde{x}_{ik}, \quad \tilde{x}_{ij} = x_{i} \cdot \theta_{E_{j}} \cdot \mathbb{I}_{\{s_{ij} > 0\}}, \; i \in [1, N], \; j \in [1, n],
$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product between two vectors, and $\mathbb{I}_{\{s_{ij} > 0\}}$ is an indicator function that evaluates to 1 when $s_{ij} > 0$ and 0 otherwise. Here, $\tilde{x}_{ij}$ represents the output of expert $j$ for token $x_{i}$ after the top- $k$ routing selection.
The orthogonality loss $\mathcal { L } _ { o }$ reduces the overlap between different expert outputs within the same top- $k$ group by minimizing their projections onto each other. This encourages experts to develop more distinct representations, promoting specialization in processing different token types.
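A minimal sketch of this loss follows, under the assumption that the per-pair penalty is the magnitude of the projection of one selected expert's output onto another (the equation above writes the projection vector itself; a scalar loss additionally needs a norm, so that step is our interpretation):

```python
import numpy as np

def orthogonality_loss(expert_out, selected):
    """Sketch of L_o: penalize mutual projections among a token's selected experts.

    expert_out: (N, n, d) array of x_i . theta_{E_j} for every token/expert pair.
    selected:   (N, n) boolean mask, True where s_ij > 0 (top-k selection).
    The penalty for a pair (j, k) is the magnitude of the projection of
    tilde_x_ij onto tilde_x_ik, which vanishes exactly when they are orthogonal.
    """
    N, n, _ = expert_out.shape
    x = expert_out * selected[:, :, None]   # tilde x_{ij}: zero unselected experts
    total = 0.0
    for i in range(N):
        for j in range(n):
            for k in range(n):
                if j == k or not (selected[i, j] and selected[i, k]):
                    continue
                denom = np.dot(x[i, k], x[i, k])         # <x_ik, x_ik>
                coef = np.dot(x[i, j], x[i, k]) / denom  # <x_ij, x_ik> / <x_ik, x_ik>
                total += abs(coef) * np.linalg.norm(x[i, k])  # projection magnitude
    return total
```

The loss is zero precisely when all selected expert outputs for a token are pairwise orthogonal, and grows as their overlap increases.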
Routing Diversification. We introduce a variance-based loss to encourage more diverse routing decisions and promote expert specialization. Specifically, we define the variance loss as:
$$
\mathcal { L } _ { v } = - \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { n } \frac { 1 } { n } \cdot ( s _ { i j } - \bar { s } _ { j } ) ^ { 2 } , \bar { s } _ { j } = \frac { 1 } { N } \cdot \sum _ { i = 1 } ^ { N } s _ { i j } ,
$$
where $\bar { s } _ { j }$ denotes the average routing score for expert $j$ across the batch. By maximizing the variance of routing scores, $\mathcal { L } _ { v }$ discourages uniform token-to-expert assignments and encourages more deterministic and distinct routing patterns, thereby facilitating expert specialization.
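A direct sketch of $\mathcal{L}_v$ as defined above (per-expert batch mean $\bar{s}_j$ , negated squared deviations scaled by $1/n$):

```python
import numpy as np

def variance_loss(scores):
    """Sketch of L_v = -(1/n) * sum_ij (s_ij - s_bar_j)^2.

    scores: (N, n) routing scores; s_bar_j is expert j's mean over the batch.
    Minimizing L_v (i.e., maximizing score variance) pushes routing away
    from uniform token-to-expert assignments.
    """
    _, n = scores.shape
    s_bar = scores.mean(axis=0)               # (n,) per-expert batch mean
    return -np.sum((scores - s_bar) ** 2) / n
```

Perfectly uniform scores give $\mathcal{L}_v = 0$ (its maximum), while sharply peaked per-token assignments drive it negative.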
# 3.2 Compatibility of Multi-Objective Optimization
In this section, we analyze how each component influences the optimization dynamics of expert parameters $\theta_{E_j}$ and routing parameters $\theta_R$ during training. In particular, we focus on the optimization and compatibility of the two losses $\mathcal{L}_o$ and $\mathcal{L}_v$ with respect to load balancing and expert specificity. The following two key questions guide our analysis.
Balancing Expert and Routing. How can expert $( \mathcal { L } _ { o } )$ and routing $( \mathcal { L } _ { v } )$ optimizations be designed to complement each other without compromising their respective objectives?
We first demonstrate that $\mathcal { L } _ { o }$ and $\mathcal { L } _ { v }$ are compatible in their optimization directions within MoE, then show that they mutually reinforce each other.
Mutually Compatible. We elaborate on the compatibility of $\mathcal{L}_o$ and $\mathcal{L}_v$ from the perspectives of the expert and the routing.
From the expert perspective, we observe that the auxiliary loss $\mathcal{L}_{aux}$ and the variance loss $\mathcal{L}_v$ do not directly contribute gradients to the expert parameter matrix $\theta_{E_j}$ . Therefore, the gradient of the total loss with respect to $\theta_{E_j}$ is:
$$
\frac{\partial \mathcal{L}}{\partial \theta_{E_{j}}} = \sum_{i=1}^{N} \frac{\partial \mathcal{L}}{\partial \tilde{x}_{ij}} \cdot \frac{\partial \tilde{x}_{ij}}{\partial \theta_{E_{j}}} = \sum_{i=1}^{N} \left( s_{ij} + \beta \cdot \sum_{k=1, k \neq j}^{n} \frac{\tilde{x}_{ik} \tilde{x}_{ik}^{\top}}{\langle \tilde{x}_{ik}, \tilde{x}_{ik} \rangle} \right) \cdot x_{i}.
$$
This gradient is influenced by both the routing score $s_{ij}$ and the expert representation $\tilde{x}_{ij}$ . As training progresses, the variance of expert weights increases, and the gradient encourages stronger preferences in different directions for each token.
From the routing perspective, we notice that $\mathcal { L } _ { o }$ does not affect the gradient with respect to routing parameters $\theta _ { R }$ . The gradient of the total loss with respect to $\theta _ { R }$ is:
$$
\frac { \partial \mathcal { L } } { \partial \theta _ { R } } = \frac { \partial \mathcal { L } } { \partial s _ { i j } } \cdot \frac { \partial s _ { i j } } { \partial \theta _ { R } } = \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { n } \left( \tilde { x } _ { i j } + \alpha \cdot f _ { j } - \gamma \cdot \frac { 2 ( N - 1 ) } { n N } \cdot ( s _ { i j } - \bar { s } _ { j } ) \right) \cdot \frac { \partial s _ { i j } } { \partial \theta _ { R } } .
$$
This gradient is influenced by expert representations $\tilde { x } _ { i j }$ , expert load $f _ { j }$ , and routing weights $s _ { i j }$ . As the model converges, the expert load $f _ { j }$ becomes more balanced, and the variance of routing weights $s _ { i j }$ increases. Orthogonalizing expert representations causes the routing gradients to flow in more orthogonal directions, making the weight allocation more biased towards the representations and increasing the weight variance.
Summary. Expert parameters $\theta_{E_j}$ are influenced solely by the gradients of $\mathcal{L}_o$ , without conflict. Routing parameters $\theta_R$ are affected by both $\mathcal{L}_o$ and $\mathcal{L}_v$ , but the objectives of these two losses (orthogonality-friendliness vs. score diversification) remain non-conflicting.
Mutually Reinforcing. $\mathcal { L } _ { o }$ aims to encourage the effective output vectors of different selected experts $j$ and $k$ to tend to be orthogonal for the same input token $x _ { i }$ , i.e., $\langle \tilde { x } _ { i j } , \tilde { x } _ { i k } \rangle \approx 0$ . The learning signal for the routing mechanism partially originates from the gradient of the primary task loss $\mathcal { L } _ { h }$ with respect to the routing score $s _ { i j }$ :
$$
\frac { \partial \mathcal { L } } { \partial s _ { i j } } = \underbrace { g _ { y _ { i } } ^ { T } \tilde { x } _ { i j } } _ { \mathrm { f r o m } ~ \mathcal { L } _ { h } } + \underbrace { \alpha \frac { \partial \mathcal { L } _ { \mathrm { a u x } } } { \partial s _ { i j } } } _ { \mathrm { f r o m } ~ \mathcal { L } _ { \mathrm { a u x } } } - \underbrace { \gamma \frac { 2 ( N - 1 ) } { n N } ( s _ { i j } - \bar { s } _ { j } ) } _ { \mathrm { f r o m } ~ \mathcal { L } _ { v } } , \quad y _ { i } = \sum _ { j } s _ { i j } \tilde { x } _ { i j } , \quad g _ { y _ { i } } = \frac { \partial \mathcal { L } _ { h } } { \partial y _ { i } }
$$
Assuming $p _ { i j } = g _ { y _ { i } } ^ { T } \tilde { x } _ { i j }$ , when the expert outputs tend to be orthogonal, for any given task gradient $g _ { y _ { i } }$ , the projections $p _ { i j }$ onto these approximately orthogonal expert outputs are more likely to exhibit significant differences. The increased variance of the primary task-related signals $p _ { i j }$ implies that the routing mechanism receives more discriminative and stronger learning signals, which creates more favorable conditions for $\mathcal { L } _ { v }$ to achieve diversification of routing scores.
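This effect can be checked with a tiny numeric sketch (the vectors are illustrative values, not from the paper): when expert outputs are nearly identical, every projection $p_{ij} = g_{y_i}^{T} \tilde{x}_{ij}$ collapses to the same value, whereas orthogonal outputs pick out distinct coordinates of $g_{y_i}$ :

```python
import numpy as np

g = np.array([3.0, 1.0, 2.0, 0.5])   # a task gradient g_{y_i}
u = np.ones(4)

# Near-identical expert outputs: every projection collapses to roughly g . u.
similar = [u, 1.01 * u, 0.99 * u, 1.02 * u]
# Orthogonal expert outputs: projections pick out distinct coordinates of g.
orthogonal = [np.eye(4)[j] for j in range(4)]

p_similar = [g @ x for x in similar]      # all close to 6.5
p_orth = [g @ x for x in orthogonal]      # 3.0, 1.0, 2.0, 0.5 -- spread out
print(np.var(p_similar), np.var(p_orth))  # orthogonal case has far higher variance
```

The higher variance of $p_{ij}$ in the orthogonal case is exactly the "more discriminative" routing signal described above.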
$\mathcal { L } _ { v }$ enhances the diversity of routing scores $s _ { i j }$ by optimizing routing parameters $\theta _ { R }$ . Meanwhile, due to the influence of $\mathcal { L } _ { o }$ ’s gradient $\beta \frac { \partial \mathcal { L } _ { o } } { \partial s _ { i j } }$ on $\theta _ { R }$ , routing tends to assign more specialized token subsets $T _ { j }$ to each expert $j$ . Expert parameters $\theta _ { E _ { j } }$ learn the unique features of tokens within $T _ { j }$ , leading to gradual functional divergence among experts, thereby promoting expert orthogonality.
Summary. $\scriptstyle { \mathcal { L } } _ { o }$ induces orthogonal expert outputs $\tilde { x } _ { i j }$ , enhances the discriminative power of routing signals $g _ { y _ { i } } ^ { T } \tilde { x } _ { i j }$ , and generates diverse routing scores $s _ { i j }$ to support $\mathcal { L } _ { v }$ . Meanwhile, $\mathcal { L } _ { v }$ drives experts to specialize in distinct token subsets via $s _ { i j }$ and promotes parameter divergence of $\theta _ { E _ { j } }$ to support $\scriptstyle { \mathcal { L } } _ { o }$ . Together, they form a mutually reinforcing cycle.
Multi-Objective Optimization. How do the expert and routing objectives maintain their balance while enhancing $\mathcal{L}_{aux}$ and $\mathcal{L}_h$ independently, ensuring mutually beneficial performance improvements?
Lemma 1 Let $\mathcal{S} \in \mathbb{R}^{N \times n}$ be a matrix that satisfies the following conditions: each row sums to 1, and each row contains $k$ non-zero elements and $n - k$ zero elements. Then there always exists a state in which the following two objectives are simultaneously satisfied: 1. the sum of the elements in each column tends to the average value $\frac{N}{n}$ ; 2. the variance of the non-zero elements in each row increases.
Lemma 2 For two sets of points $\mathcal{A}$ and $\mathcal{B}$ of equal size, it is always possible to partition $\mathcal{A} \cup \mathcal{B}$ such that $\mathcal{A} \cap \mathcal{B} = \emptyset$ and $|\mathcal{A}| = |\mathcal{B}|$ .
The overall objective function $\mathcal { L }$ optimizes four key dimensions: accurate data fitting $( \mathcal { L } _ { h } )$ , expert orthogonalization $( \mathcal { L } _ { o } )$ , balanced expert routing weights $( \mathcal { L } _ { a u x } )$ , and increased variance in routing outputs $( \mathcal { L } _ { v } )$ . Our core objective is to achieve an optimal balance by jointly optimizing these multiple objectives, ensuring they complement each other for enhanced model performance.
As shown by Lemma 1, expert load $f _ { j }$ and routing weights $s _ { i j }$ can be optimized together. As demonstrated in Lemma 2, the objectives of orthogonalization and load balancing are not in conflict and can be jointly optimized. Thus, both expert and routing modifications can be optimized alongside load balancing (balanced expert routing weights).
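Lemma 1 can be illustrated with a small hand-constructed score matrix (a sketch with arbitrary values): rotating one skewed row pattern keeps every column sum at $N/n$ while the non-zero entries of each row stay far apart:

```python
import numpy as np

# N = n = 4, k = 2 non-zeros per row. Rotating one skewed pattern keeps
# every column sum at N/n = 1 (objective 1: balanced load) while each
# row's non-zero entries {0.9, 0.1} remain far apart (objective 2).
S = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.9, 0.1, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.1, 0.0, 0.0, 0.9],
])
assert np.allclose(S.sum(axis=1), 1.0)  # each row sums to 1
assert np.allclose(S.sum(axis=0), 1.0)  # each column sum equals N/n
print(np.var([0.9, 0.1]))               # per-row non-zero variance, ~0.16
```

The matrix simultaneously achieves perfect column balance and high within-row variance, the state whose existence Lemma 1 asserts.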
Moreover, orthogonalization enhances routing weight variance, which, in turn, improves expert specialization (as discussed in Section 2.2). This leads to more distinctive expert representations, aligning with performance improvements (more accurate data fitting) when the objectives are optimized together.
# 4 Experiments
In this section, we conduct experiments to address the following research questions:
• RQ1: Does introducing the orthogonality loss $( \mathcal { L } _ { o } )$ and variance loss $( \mathcal { L } _ { v } )$ lead to better overall performance in downstream tasks compared to baseline approaches?
• RQ2: To what extent does our method maintain expert load balancing during training?
• RQ3: How do the orthogonality loss $( \mathcal { L } _ { o } )$ and variance loss $( \mathcal { L } _ { v } )$ interact with each other, and what are their respective and joint impacts on expert specialization and routing behavior?
• RQ4: What are the individual and combined contributions of $\mathcal { L } _ { o }$ , $\mathcal { L } _ { v }$ , and the auxiliary loss $\mathcal { L } _ { a u x }$ to the final model performance?
# 4.1 Experimental Setup
Environment. All experiments are performed on a CentOS Linux 7 server with PyTorch 2.3. The hardware specifications consist of 240GB of RAM, a 16-core Intel Xeon CPU, and two NVIDIA A800 GPUs, each having 80GB of memory. Implementation and training details are provided in the Appendix F.
Datasets. We evaluate our method on a total of 11 benchmarks. Specifically, we use the training sets from Numina [40], GLUE [64], and the FLAN collection [70] to train our models. Our benchmarks include: ❶ Mathematics: GSM8K [11], MATH500 [43], and Numina [40]; ❷ Multi-Domain Tasks: MMLU [30, 29], MMLU-Pro [68], BBH [61], GLUE [64], LiveBench [74], and GPQA [57]; ❸ Code Generation: HumanEval [9] and MBPP [4]. We group training and test sets by language, reasoning, science, math, and code to match downstream evaluation needs. Details are provided in Appendix D.
Baselines. We compare our method with four existing MoE training strategies: With Aux Loss [45] applies auxiliary load-balancing losses during routing to encourage expert utilization diversity; GShard [38] introduces a foundational sparse expert framework with automatic sharding and routing; ST-MoE [83] enhances training stability via router dropout and auxiliary losses; Loss-Free Balancing [66] achieves balanced expert routing without auxiliary objectives. Details are provided in Appendix G.
Metrics. We employ six evaluation metrics to assess our method in terms of accuracy, expert load balancing ( $\mathrm{MaxVio_{global}}$ [66]), clustering quality (Silhouette Coefficient), expert specialization (Expert Overlap), routing stability (Routing Variance), and prediction error (RMSE). Details are provided in Appendix E.
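For reference, $\mathrm{MaxVio_{global}}$ as defined in the loss-free balancing literature [66] is the relative gap between the busiest expert's load and the expected (uniform) load; a minimal sketch:

```python
import numpy as np

def maxvio_global(loads):
    """MaxVio_global sketch: relative overload of the busiest expert.

    loads: number of tokens routed to each expert, aggregated globally.
    Returns (max_j load_j - mean load) / mean load; 0 means perfect balance,
    and larger values indicate a worse balance violation.
    """
    loads = np.asarray(loads, dtype=float)
    expected = loads.mean()
    return float((loads.max() - expected) / expected)
```

For example, `maxvio_global([100, 100, 100, 100])` is 0, while routing everything to one of four experts, `maxvio_global([400, 0, 0, 0])`, gives 3.0.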
# 4.2 Performance in Downstream Tasks (RQ1)
To verify that our ${ \mathcal { L } } _ { \mathrm { b a l a n c e } }$ enhances model performance in downstream task scenarios through expert orthogonality and routing output diversification, as shown in Table 1, we design downstream task scenarios on 11 well-known benchmarks and validate our method against four baseline methods with distinct loss designs on three widely used MoE models. We make the following observations:
Obs.❶ Baseline methods without guidance for expert specialization exhibit varied performance and fail to effectively improve downstream task performance. As shown in Table 1, the four baseline methods show no clear overall performance ranking across the 11 tasks, with performance variations within 2% in many tasks. Their overall performance is significantly lower than that of our method, indicating limited ability to improve downstream task performance.
Obs.❷ Our method, which guides expert specialization, effectively enhances model performance in downstream tasks. As shown in Table 1, we achieve state-of-the-art (SOTA) results in over 85% of the 33 tasks across the three models. In some tasks, the average across multiple measurements even outperforms the next-best method by nearly 7%. Extensive experiments indicate that our method significantly improves model performance in downstream task scenarios by enhancing expert specialization.
Table 1: Performance on different downstream tasks. The table shows accuracies of methods across models and downstream tasks. Notably, we categorize sub-downstream tasks in Multi-Domain and ensure training/evaluation sets are domain-aligned, following downstream task requirements.
# 4.3 Load Balancing (RQ2)
To verify that our newly added losses $\mathcal { L } _ { v }$ and $\mathcal { L } _ { o }$ do not affect the load balancing effect, we conduct statistical measurements on the load balancing of all combinations of $\mathcal { L } _ { a u x }$ , $\mathcal { L } _ { v }$ , and $\mathcal { L } _ { o }$ across various models during training.
Figure 2: Variation of Load Balancing. The figure illustrates the variation of load balancing during training across three distinct models for different methods. Method represents the combination of $\mathcal { L } _ { \mathrm { a u x } }$ , $\scriptstyle { \mathcal { L } } _ { 0 }$ , and $\mathcal { L } _ { \mathrm { v } }$ ; Step denotes the number of training steps; $M a x V i o _ { \mathrm { g l o b a l } } \downarrow$ serves as the metric for load balancing; and RMSE is the metric for measuring the similarity between two curves.
Figure 2 shows the variation of $M a x V i o _ { \mathrm { g l o b a l } } \downarrow$ across training steps for different loss combinations, as well as the RMSE of differences between our method and other combinations. We make the following observations:
Obs.❸ Loss combinations without $\mathcal { L } _ { \bf a u x }$ exhibit significantly worse load balancing performance than those with $\mathcal { L } _ { \bf a u x }$ . As shown in Figure 2, across three distinct models, the $M a x V i o _ { \mathrm { g l o b a l } }$ of the w/o all method (with no losses added) is significantly higher than that of other methods, indicating notably poorer load balancing. In particular, for the DeepSeek-V2-Lite model, the method without $\mathcal { L } _ { \mathrm { a u x } }$ converges to 6.14, whereas methods with $\mathcal { L } _ { \mathrm { a u x } }$ converge to 2.48, demonstrating that loss combinations containing ${ \mathcal { L } } _ { \mathrm { a u x } }$ achieve significantly better load balancing.
Obs.❹ Incorporating any combination of $\mathcal { L } _ { v }$ and $\mathcal { L } _ { o }$ into $\mathcal { L } _ { \bf a u x }$ does not affect load balancing. As shown in Figure 2, for methods with $\mathcal { L } _ { \mathrm { a u x } }$ , the trends of “only aux” (no additional losses), “w/o lv” (only $\mathcal { L } _ { o }$ ), “w/o lo” (only $\mathcal { L } _ { v }$ ), and “ours” (both $\mathcal { L } _ { v }$ and $\mathcal { L } _ { o }$ ) are nearly identical. Additionally, the RMSE (root mean squared error) of our method relative to other baselines does not exceed 0.03, further corroborating the conclusion that the combination of $\mathcal { L } _ { v }$ and $\mathcal { L } _ { o }$ does not impact load balancing.
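The RMSE used to compare two MaxVio curves is the standard root-mean-squared difference over aligned training steps; a minimal sketch:

```python
import numpy as np

def curve_rmse(curve_a, curve_b):
    """RMSE between two MaxVio curves sampled at the same training steps.

    Values at or below 0.03, as reported above, indicate that the two
    loss combinations produce near-identical load-balancing behavior.
    """
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Identical curves give an RMSE of exactly 0, and a constant offset of 0.03 at every step gives an RMSE of 0.03.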
# 4.4 Behaviors of Experts and Routing (RQ3)
To verify that $\mathcal{L}_v$ and $\mathcal{L}_o$ can jointly promote expert orthogonality and routing score diversification, following the method setup in Section 4.3, we conduct evaluations of expert orthogonality and measurements of routing score diversification for different loss combinations.
Figure 3: Behaviors of Experts and Routing. The figure demonstrates the behavioral states of experts and routing across different methods. The first two subplots, Silhouette Coefficient and Expert Overlap, measure the degree of expert orthogonality, while the last subplot, Routing Variance, evaluates the diversity of routing outputs.
As shown in Figure 3, the first two subplots demonstrate the orthogonality of experts, while the last subplot illustrates the diversification of routing outputs. We make the following observations:
Obs.❺ $\mathcal { L } _ { o }$ directly promotes expert orthogonality, and $\mathcal { L } _ { v }$ also aids in expert orthogonality. As shown in the first two panels of Figure 3, our method with both $\mathcal { L } _ { o }$ and $\mathcal { L } _ { v }$ achieves state-of-the-art (SOTA) results across three models, with Expert Overlap even dropping below 0.3. The method with only $\mathcal { L } _ { o }$ and ${ \mathcal { L } } _ { \mathrm { a u x } }$ (w/o lv) consistently ranks second-best, indicating that $\mathcal { L } _ { o }$ has a more significant impact on expert orthogonality. Notably, the method with only $\mathcal { L } _ { v }$ and $\mathcal { L } _ { \mathrm { a u x } }$ (w/o lo) significantly outperforms the method with only $\mathcal { L } _ { \mathrm { a u x } }$ across all three models, confirming that $\mathcal { L } _ { v }$ also contributes to expert orthogonality.
Obs.❻ $\mathcal { L } _ { v }$ directly enhances routing output diversification, and $\mathcal { L } _ { o }$ also supports this diversification. Similarly, our method exhibits the highest routing score variance (exceeding 0.010), followed by the method with only $\mathcal { L } _ { v }$ and $\mathcal { L } _ { \mathrm { a u x } }$ , while the method with only ${ \mathcal { L } } _ { \mathrm { a u x } }$ performs worst. This strongly supports the conclusion.
Obs.❼ $\mathcal{L}_{\bf aux}$ leads to higher expert overlap and more homogeneous routing outputs. Compared to the aux only method (with only $\mathcal{L}_{\mathrm{aux}}$ ), the w/o all method (no losses) shows a Silhouette Coefficient that is over 0.05 higher and a routing output variance that is 0.0045 higher. This indicates that w/o all exhibits significantly greater expert orthogonality and routing output diversification than aux only.
# 4.5 Ablation among Losses (RQ4)
To demonstrate that both $\mathcal { L } _ { o }$ and $\mathcal { L } _ { v }$ have positive effects on the model’s performance in downstream task scenarios, and their combination synergistically enhances each other’s efficacy, we design ablation experiments for these two losses on three models.
Figure 4 illustrates the performance of different ablation method combinations across various downstream tasks. We make the following observations:
Obs.❽ The combination of $\mathcal{L}_o$ and $\mathcal{L}_v$ significantly enhances model performance in downstream tasks, and each loss individually also improves performance. Our method (combining $\mathcal{L}_o$ and $\mathcal{L}_v$ ) exhibits the largest coverage area across all three models, nearly encompassing other methods. When either $\mathcal{L}_o$ or $\mathcal{L}_v$ is ablated (i.e., w/o lv or w/o lo), the coverage areas of these methods are still larger than that of the only aux method (with only $\mathcal{L}_{\mathrm{aux}}$ ), indicating performance improvements over the baseline.
Obs.❾ $\mathcal{L}_{\bf aux}$ impacts model performance on downstream tasks. Figure 4 clearly shows that the only aux method (with only $\mathcal{L}_{\mathrm{aux}}$ ) is nearly entirely enclosed by other methods across all three models, consistently exhibiting the smallest coverage area. Notably, the w/o all method (with no losses) achieves a larger coverage area than the only aux method, supporting this conclusion.
Figure 4: Ablation Experiments. The figure illustrates the performance differences of different ablation method combinations across three models on various benchmarks. The vertices on the circles represent the corresponding benchmark names, with the same type connected by the same color. The numbers inside the circles denote the accuracy represented by each circle.
# 5 Related Work
Auxiliary Losses in MoE Training. Auxiliary losses [38, 83] are commonly used to prevent expert collapse by encouraging balanced expert utilization [13]. Early approaches focus on suppressing routing imbalance, while later works [79] introduce capacity constraints or multi-level objectives to separate routing stability from load balancing [63, 38, 19]. Recent methods [73] further reduce manual tuning by dynamically adjusting auxiliary weights or replacing them with entropy-based routing [41]. However, fixed-rule strategies may underutilize expert capacity, and dynamic schemes can introduce instability or overhead, making robust balancing still a challenge [31, 66].
Orthogonality in MoE. Orthogonalization [46, 27] improves expert diversity by encouraging independent representations [28]. Some methods [53, 82, 50] regularize expert weights directly, while others [13, 28] assign experts to disentangled subspaces based on task semantics. Recent routing-based approaches [46, 56] also impose orthogonality on token-to-expert assignments to reduce redundancy. Nonetheless, static constraints [10] often fail to adapt to dynamic inputs, and dynamic ones [76, 34, 24, 62] may conflict with balancing, complicating expert allocation [31, 80, 26, 66]. Our work addresses these tensions by integrating orthogonalization and balance into a unified, gradient-consistent optimization framework.
# 6 Limitation & Future Discussion
While $\mathcal{L}_{balance}$ balances load and enhances performance in downstream tasks, its potential in other domains remains unexplored. Specifically, it could be extended to visual models, as suggested in recent work [25], and multimodal or full-modal settings [7], offering opportunities for cross-domain applications. Additionally, investigating $\mathcal{L}_{balance}$ within lightweight MoE fine-tuning, such as LoRA-MoE [20], could make our approach viable for resource-constrained environments [42].
Furthermore, there is considerable potential in exploring expert-distributed deployment, where $\mathcal { L } _ { b a l a n c e }$ can optimize both parameter inference efficiency and model performance. This avenue could significantly enhance the scalability and practicality of MoE models in real-world applications, providing new opportunities for distributed expert architectures. | Mixture-of-Experts (MoE) models enable efficient scaling of large language
models (LLMs) by activating only a subset of experts per input. However, we
observe that the commonly used auxiliary load balancing loss often leads to
expert overlap and overly uniform routing, which hinders expert specialization
and degrades overall performance during post-training. To address this, we
propose a simple yet effective solution that introduces two complementary
objectives: (1) an orthogonality loss to encourage experts to process distinct
types of tokens, and (2) a variance loss to encourage more discriminative
routing decisions. Gradient-level analysis demonstrates that these objectives
are compatible with the existing auxiliary loss and contribute to optimizing
the training process. Experimental results over various model architectures and
across multiple benchmarks show that our method significantly enhances expert
specialization. Notably, our method improves classic MoE baselines with
auxiliary loss by up to 23.79%, while also maintaining load balancing in
downstream tasks, without any architectural modifications or additional
components. We will release our code to contribute to the community. | [
"cs.CL",
"cs.SE",
"68T07",
"I.2.7"
] |
# 1 Introduction
Understanding temporal relations is a challenging yet underexplored area in natural language processing (Ning et al., 2020; Chen et al., 2021; Zhou et al., 2019). This challenge persists despite the prevalence of Large Language Models (LLMs) (Chan et al., 2023; Fang et al., 2023), whose training processes lack grounding in timeline evidence. One example task requiring such evidence is temporal reading comprehension (TRC), which requires distinguishing the temporal semantic difference between “what finished right before the decision?” and “what finished right after the decision?”.
To distinguish the two questions, the existing solution for TRC (Shang et al., 2021) relies on overlaps between related questions as weak supervision to ground the semantics of temporal relations. For example, in Figure 1, if we let the question’s target event be $X$ , $Q1$ “what had started before $X$ ” and $Q2$ “what happened before $X$ ” have the similar semantics “before”. Consequently, the two share the overlapping answer “sent”. On the other hand, the temporal semantics of $Q2$ and $Q3$ “what happened while $X$ ” are different, so $Q2$ does not have any common answer with $Q3$ . By using answer overlaps as proxy labels, existing work proposes a contrastive objective: it aims to pull the temporal relations in $Q1$ and $Q2$ closer together while broadening the distinction between $Q2$ and $Q3$ . This method performs comparably with or outperforms baselines requiring stronger but more expensive human annotations (Han et al., 2021; Huang et al., 2022), as shown in Subsection 4.4.
However, as illustrated in Figure 2, we argue that contrasting the evidence from answer overlaps misguides the timeline in a point-wise manner, leading to “spurious overlap”. Questions $Q3$ and $Q4$ , “What happened while $X$ ” and “What probably ended after $X$ ”, are temporally distinct but share the answers “taken” and “bearing”. In such cases, the point-wise timeline may fail to properly reason about the temporal meanings of the two questions. The timeline mistakenly pulls $Q3$ and $Q4$ closer, making the model insufficient to differentiate complex temporal questions. The point-wise representation misses the timeline’s inherent span-based nature.
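The spurious-overlap failure mode can be reproduced directly from the Figure 1 answer sets (a sketch; the question keys are illustrative labels, not notation from the paper):

```python
# Answer sets from Figure 1, keyed by illustrative question labels.
# The weak-supervision proxy treats any shared answer as "same temporal
# relation" -- which misfires for Q3/Q4 (spurious overlap).
answers = {
    "Q1_started_before": {"taken", "bearing", "sent"},
    "Q2_before":         {"sent"},
    "Q3_while":          {"taken", "bearing"},
    "Q4_ended_after":    {"taken", "bearing"},
}

def overlaps(q1, q2):
    """Proxy label: do the two questions share at least one answer?"""
    return len(answers[q1] & answers[q2]) > 0

print(overlaps("Q1_started_before", "Q2_before"))  # True: same relation, correct signal
print(overlaps("Q2_before", "Q3_while"))           # False: different relations, correct
print(overlaps("Q3_while", "Q4_ended_after"))      # True despite different relations: spurious
```

The last case is precisely the spurious overlap: the proxy would pull "while" and "ended after" together even though their temporal semantics differ.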
[e1] Aircraft have taken off from the United States, [e2] bearing medical supplies. A rescue team, [e3] previously sent to the bombed-out federal building in Oklahoma City, [e4] was en route to Nairobi.
Q1. What had started before the team was en route to Nairobi? A. taken, bearing, sent
Q2. What happened before the team was en route to Nairobi? A. sent
Q3. What happened while the team was en route to Nairobi? A. taken, bearing
Q4. What probably ended after the team was en route to Nairobi? A. taken, bearing
Figure 1: Example of a passage and questions grouped by the same event (“the team was en route”) in temporal reading comprehension. Events are highlighted in color and temporal relations in the questions are in red.
Figure 2: The illustration of (a) point-wise timeline grounding and (b) span-based grounding. (a) The model brings similar relations closer and pushes dissimilar ones apart, overlooking spurious overlap. (b) The speech bubbles in Step 1 describe the temporal evidence from each question-answer pair. The arrows in Step 2 describe the relative span prediction, which chains evidence about the timeline and mitigates spurious overlap.
In this work, we focus on overcoming the limitations of point-wise event representation through span-based representations of time. The key is utilizing the concept of time spans, with notions of start and end points, to supervise the complex temporal relationships between events. For instance, the timeline in Figure 2(b) can separate $Q3$ and $Q4$ and distinguish between “happened while” and “probably ended after”, which are illustrated as disjoint boxes. The answers overlap because the events span a long stretch of the timeline, not because the questions are similar. Despite its importance, previous work does not consider such a timeline due to the limited supervision available in most scenarios.
We propose an advanced solution that elicits inductive reasoning behavior from a model grounded in predicted event spans. Inductive reasoning, in the context of temporal relation understanding, is the process of extracting relations from individual instances to deduce a whole, with the key purpose of acquiring relative spans of events centered around a specific event. First, the model answers each temporal relation question in a “question group”, a set of questions about the same event (e.g., $e4$). As illustrated by the speech bubbles in Figure 2(b), the question-answer pairs can be understood as pieces of evidence about the timeline, such as when event $e1$ occurred relative to event $e4$. Second, the model chains multiple pieces of temporal evidence within the same question group. This chained information forms a predicted timeline. For example, the speech bubbles in Figure 2(b) collectively illustrate the start and end points of event $e1$. Supervised by the predicted timeline, events that span a long time period can be identified, allowing us to discount attention to events with spurious overlaps. This process mitigates spurious overlap without expensive human supervision.
Our model, the Timeline Reasoning Network (TRN), implements the two-step inductive reasoning outlined as follows: an Evidence Extraction step answers a specific question by extracting semantic and syntactic information with a pre-trained language model (PLM) and a graph network; an Evidence Chaining step collectively predicts a timeline, using a novel attention module to chain multiple question-answer pairs. With the resulting timeline, the model grounds its answers consistently, enhancing overall prediction accuracy.
We evaluate TRN on TORQUE and TB-Dense, a TRC and a TRE task, respectively. We achieve state-of-the-art performance on the public leaderboard of TORQUE. We quantitatively and qualitatively analyze TRN’s effectiveness in dealing with spurious overlaps, measured by our newly proposed “passage-level consistency” metric. Lastly, we confirm its generalizability on TB-Dense. Our main contributions are three-fold:
• We point out the spurious overlap issue in temporal relations, which arises from point-wise timeline grounding.
• We propose an inductive solution that chains evidence for the timeline in a span-based approach.
• Our novel framework, TRN, outperforms other approaches by effectively capturing temporal relations of events.
# 2 Related Work
We overview state-of-the-art works on temporal relation understanding and graph networks.
Temporal relation understanding Temporal relation understanding remains a challenging task even for large language models (LLMs) (Chan et al., 2023). It includes task types such as TRE and TRC. TRE tasks (Cassidy et al., 2014; Ning et al., 2018) categorize the temporal order between events into pre-defined categories. MATRES (Ning et al., 2018) groups temporal relations into 4 categories: Before/After/Simultaneous/Vague. TB-Dense (Cassidy et al., 2014) considers 2 more classes, Includes and Is Included. Our proposed approach can benefit these tasks, as we discuss in Section 5.
Meanwhile, our main task is the TRC task TORQUE (Ning et al., 2020), which requires temporal ordering in question form to reflect the real-world diversity of temporal relations. Previous approaches to the TRC task include continual pre-training (Han et al., 2021) and question decomposition methods (Huang et al., 2022; Shang et al., 2021). ECONET (Han et al., 2021) continually pre-trains the model to inject knowledge of temporal orders. Question decomposition approaches (Huang et al., 2022; Shang et al., 2021) divide the question into an event part and a temporal relation expression part to better capture the complex semantics. All of the above use contrastive methods to learn different temporal relations, either by contrasting relations with human annotations (Han et al., 2021; Huang et al., 2022) or with annotated answers (Shang et al., 2021). However, the former can be costly or imprecise, while the latter may suffer from spurious overlaps. Our approach combines the best of both: no costly human annotation, while avoiding spurious overlaps through span-based inductive reasoning.
Graph networks Graph Networks (Kipf and Welling, 2016; Velickovic et al., 2017) learn features through message passing on graph structures.
These networks have demonstrated their effectiveness in tasks requiring complex reasoning skills, such as numerical reasoning (Ran et al., 2019; Chen et al., 2020) and logical reasoning (Huang et al., 2021). Graph networks also have been applied to TRE (Cheng and Miyao, 2017; Mathur et al., 2021; Zhang et al., 2022), though their effectiveness in TRC has not been investigated.
# 3 Proposed Method
We formulate predicting answers for a query $Q$ as a binary classification for every word $p$ in the given passage $P$, determining whether it is an answer event to $Q$.
Our approach solves the task with two steps of inductive reasoning. The core of inductive reasoning is inferring the whole picture from individual pieces of evidence. To transform the conventional function used in reading comprehension into the inductive form, we modify it to consider answers to multiple questions together. The conventional form is denoted as $\hat{A}_i = f(Q_i, P; \theta)$, where we predict the answer $\hat{A}_i$ to the $i$-th question $Q_i$ over the passage $P$ with model parameters $\theta$. For inductive reasoning, the function is modified as:
$$
\begin{array}{c}
\hat{A}_{i}^{induced} = f(Q_{i}, P, \hat{A}^{*}; \theta), \\
\mathrm{where}~\hat{A}^{*} = \{\hat{A}_{i}\}_{i=1}^{l}
\end{array}
$$
$l$ is the number of questions, and $\hat{A}^{*}$ is the set of model predictions for the multiple questions.
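As a concrete (purely illustrative) reading of Eq. 1, the two interfaces can be sketched in Python, with a toy rule standing in for the learned model $f$; the cue-set logic below is not part of the paper:

```python
# Toy sketch of Eq. 1: the inductive form conditions on the whole set of
# group predictions A_hat^*, unlike the conventional per-question form.
# `extract` is an illustrative stand-in for the learned model, not TRN itself.

def extract(question, passage):
    """Conventional RC: A_hat_i = f(Q_i, P) -- answer one question alone."""
    return [w for w in passage if w in question["cues"]]

def induced(question, passage, a_star):
    """Inductive form: A_hat_i^induced = f(Q_i, P, A_hat^*) (Eq. 1)."""
    base = set(extract(question, passage))
    # toy grounding: keep only words supported by at least one prediction
    # in the question group (a_star = {A_hat_i} for i = 1..l)
    support = set().union(*map(set, a_star))
    return sorted(base & support)

passage = ["taken", "bearing", "sent", "route"]
group = [{"cues": {"taken", "bearing", "sent"}},   # Q1
         {"cues": {"sent"}}]                       # Q2
a_star = [extract(q, passage) for q in group]      # predictions for the group
answers = induced(group[0], passage, a_star)
```

The only structural point the sketch carries over is the signature change: `induced` sees the whole prediction set `a_star`, not just its own question.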
An overview of our model is shown in Figure 3. We first extract each answer $\hat{A}_i$ as individual evidence in the Evidence Extraction step (Subsection 3.1), represented as the output squares in (a). Inductive reasoning is elicited in the Evidence Chaining step (Subsection 3.2): we chain the related question-answers $\hat{A}^{*}$, depicted as blue and red paths marked with a dark background, and utilize them in (b).
# 3.1 Evidence Extraction Step
The evidence extraction step aims to extract timeline evidence by answering each question. We utilize both semantic information from the PLM and syntactic information from the graph network. First, the PLM encodes the question-passage pair to get a contextual representation for each token. It takes the concatenated sequence $[Q, P]$ as input and outputs the vector representations $[Q^{v}, P^{v}]$, where individual tokens are $q^{v}$ and $p^{v}$.
Figure 3: Overview of TRN. (a) The Evidence Extraction Step answers each question with semantic (PLM) and syntactic (Graph Network) features. The example in the graph is from $Q 1$ in Figure 1. (b) The Evidence Chaining Step collects the related answers in the evidence collection stage and chains them through the cross-time attention module.
After that, we build a syntax-aware graph neural network that captures word-to-word dependencies, an effective strategy for temporal reasoning (Cheng and Miyao, 2017; Mathur et al., 2021; Zhang et al., 2022). Diverging from previous works, which mainly focused on temporal relations within passages and neglected questions, our formulation highlights the need to comprehend both. As shown in the graph in Figure 3(a), we construct dependency-tree graphs for both the question and the passage, connecting root nodes and co-mentioned event words bidirectionally to facilitate information exchange. Here, event words refer to nouns and verbs.
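A minimal sketch of this graph construction, with hard-coded toy dependency edges standing in for real parser output (the helper name and edge encoding are our own, not the paper's code):

```python
# Toy sketch: build a joint question-passage graph with dependency edges,
# root-root links, and bidirectional links between co-mentioned event words
# (nouns/verbs). Dependency parses here are hard-coded stand-ins.

def build_graph(q_tokens, q_deps, q_root, p_tokens, p_deps, p_root, event_words):
    edges = set()
    for h, t in q_deps:                         # dependency edges in the question
        edges |= {("q", h, "q", t), ("q", t, "q", h)}
    for h, t in p_deps:                         # dependency edges in the passage
        edges |= {("p", h, "p", t), ("p", t, "p", h)}
    # connect the two dependency-tree roots bidirectionally
    edges |= {("q", q_root, "p", p_root), ("p", p_root, "q", q_root)}
    # connect co-mentioned event words across question and passage
    for i, qw in enumerate(q_tokens):
        for j, pw in enumerate(p_tokens):
            if qw == pw and qw in event_words:
                edges |= {("q", i, "p", j), ("p", j, "q", i)}
    return edges

q = ["what", "started", "before", "route"]
p = ["team", "was", "en", "route"]
g = build_graph(q, [(1, 0), (1, 2)], 1, p, [(1, 0), (1, 3)], 1,
                event_words={"started", "route", "team"})
assert ("q", 3, "p", 3) in g and ("p", 3, "q", 3) in g  # "route" linked both ways
```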
Next, we follow the graph reasoning step used in reading comprehension (Ran et al., 2019), which categorizes the connections between nodes into 4 types: (1) question-question $(qq)$, (2) passage-passage $(pp)$, (3) passage-question $(pq)$, and (4) question-passage $(qp)$. Each node in the graph is the corresponding word in the question or passage. The pipeline consists of the following steps:
(a) Projection: The vector outputs of the PLM pass through the projection layer $W^{m}$ for node initialization (Eq. 2). (b) Node Relevance: We compute the weight $\alpha_{i}$ for each node $\bar{v}_{i}$ with the sigmoid function to determine the relevant nodes for answering temporal ordering questions (Eq. 3). Here, the nodes $\bar{v}$ consist of $\bar{q}$ and $\bar{p}$, corresponding to the nodes from the question and passage, respectively. (c) Message Propagation: The adjacency matrix $W^{r_{ji}}$ guides the message passing between nodes of different types (Eq. 4), where $r_{ji} \in \{pp, pq, qp, qq\}$ and $N_{i}$ is the set of neighbor nodes of $\bar{v}_{i}$. (d) Node Update: The message representations are added to the corresponding nodes, and a non-linear activation function (ReLU) is applied to update the node representations (Eq. 5).
$$
\begin{array}{c}
[\bar{Q}, \bar{P}] = W^{m}[Q^{v}, P^{v}] + b^{m} \\
\alpha_{i} = \mathrm{sigmoid}(W^{v}\bar{v}_{i} + b^{v}) \\
\tilde{v}_{i} = \frac{1}{|N_{i}|} \sum_{j \in N_{i}} \alpha_{j} W^{r_{ji}} \bar{v}_{j} \\
v_{i}^{\prime} = \mathrm{ReLU}(W^{u}\bar{v}_{i} + \tilde{v}_{i}) + b^{u}
\end{array}
$$
We iterate steps (b), (c), and (d) for $T$ iterations. Finally, the representation from the PLM, $P^{v}$, is added and normalized to obtain the answer representation $\hat{A}_{i}$ of Eq. 1, with individual word representations $\hat{a}_{i}$.
# 3.2 Evidence Chaining Step
Our second and primary objective is to reason inductively with the group of questions and ground the answers in it. A key motivation for this reasoning comes from the observation that chaining answers to questions about the same event serves as a relative timeline. Each prediction can be interpreted as temporal evidence, such as when one event occurred relative to the asked event. The pieces of evidence are then chained with the attention module to create relative time spans of passage events, helping the model ground its predictions.
The evidence chaining step is built for such reasoning, whose process is further divided into two stages: evidence collection and timeline acquisition.
Evidence collection We first collect the question group, defined as questions that pertain to the same target event. The blue and red questions in Figure 3 correspond to one such group. Task designs may provide the grouping for evaluation metrics (Ning et al., 2020) (Subsection 4.3), or simple rules can be applied for grouping (Subsection 5.2).
The questions are collectively encoded through the evidence extraction step, and their output representations are collected. To answer the first question $[Q_{1}, P]$ in the question group, the other questions $[Q_{i}, P]_{i=2}^{l}$ are encoded together to produce $[\hat{A}_{i}]_{i=2}^{l}$, which are then stacked with the original to form $\{\hat{A}_{i}\}_{i=1}^{l}$, corresponding to $\hat{A}^{*}$ in Eq. 1.
Timeline acquisition We need to build the timeline from the collected evidence to ensure the model’s original answer is consistently grounded in such a timeline.
We achieve this through a novel transformer layer built around our key component, the “cross-time attention” module. Let $\mathrm{Attention}(Q, K, V)$ denote the attention module (Vaswani et al., 2017). Conventional self-attention attends sequence-wise, to tokens within a single data instance, represented by $\mathrm{Attention}(p_{ki}, p_{kj}, p_{kj})$, where $k$ is the data index and $i, j$ are at most the sequence length. In contrast, our novel cross-time attention operates data-wise, gathering information from multiple data instances that self-attention overlooks. Each passage token attends to the same-positioned token from related data instances. The equation of cross-time attention is:
$$
\mathrm{CrossTimeAttention} = \mathrm{Attention}(p_{ik}, p_{jk}, p_{jk})
$$
where $i, j \le l$ and $k$ is the token index. We insert cross-time attention between the self-attention and feed-forward network (FFN) in the transformer layer.
In the evidence chaining step, the answer $\hat{a}_{i}$ for the event $p$ in the $i$-th related question conveys evidence of when event $p$ occurred relative to the event in question. Therefore, if cross-time attention chains the pieces of temporal evidence of the event together, $\{\hat{a}_{i}\}_{i=1}^{l}$, the result is the time span of event $p$. The resulting time spans for events allow the model to refine its answers by collectively leveraging them as grounding evidence.
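The index swap between self-attention and cross-time attention can be sketched with plain numpy (toy shapes only; the actual module is a learned multi-head sub-layer whose projection matrices are omitted here):

```python
# Sketch of cross-time attention: scaled dot-product attention applied
# data-wise, over the l stacked question-answer representations at a fixed
# token position, instead of over the token sequence.
import numpy as np

def attention(q, k, v):
    # one query attending over a set of key/value rows, softmax-normalized
    scores = k @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ v

l, seq, d = 3, 5, 8                                    # questions, tokens, dim
A = np.random.default_rng(1).normal(size=(l, seq, d))  # stacked answers A_hat^*

# self-attention mixes tokens within one question:   Attention(p_ki, p_kj, p_kj)
# cross-time attention mixes questions at one token: Attention(p_ik, p_jk, p_jk)
chained = np.stack([
    np.stack([attention(A[i, k], A[:, k], A[:, k]) for k in range(seq)])
    for i in range(l)
])
assert chained.shape == A.shape  # one refined vector per (question, token)
```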
We enhance the model’s reasoning behavior in temporal relation understanding through iterative application of our transformer layer $T ^ { \prime }$ times.
# 3.3 Training and Answer Prediction
At each step, the last output is fed to a one-layered perceptron head to predict whether each token is an answer to the question. During training, the final loss is the mean of the extraction- and chaining-step losses, rewarding output from both steps. The answer prediction loss from the first step, $L_{extract}$, guides the evidence from individual questions. The second step’s loss, $L_{chain}$, guides the model to inductively correct the answer with the predicted timeline. During inference, our final logits, $\hat{A}^{induced}$, are the predictions of the evidence chaining step.
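The combined objective can be sketched as follows; binary cross-entropy per token is our assumption (the paper specifies only that the final loss is the mean of the two step losses), and the probabilities are illustrative:

```python
# Toy sketch of the training objective: both steps share per-token binary
# answer heads, and the final loss averages the two step losses.
import math

def bce(p, y):
    # binary cross-entropy for one token (assumed per-token loss)
    eps = 1e-9
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def step_loss(probs, labels):
    return sum(bce(p, y) for p, y in zip(probs, labels)) / len(labels)

labels        = [1, 0, 1]        # is each passage token an answer event?
probs_extract = [0.9, 0.2, 0.7]  # evidence extraction head
probs_chain   = [0.95, 0.1, 0.8] # head after chaining with the timeline

L_extract = step_loss(probs_extract, labels)
L_chain = step_loss(probs_chain, labels)
loss = (L_extract + L_chain) / 2  # final loss rewards output from both steps
```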
# 4 Experiment
# 4.1 Dataset and Evaluation Metrics
We evaluate our proposed model on the TORQUE dataset (Ning et al., 2020), a temporal reading comprehension dataset with 3.2k passages and 21.2k user-provided questions. Each instance has a question asking about the temporal relationships between events described in a passage of text. TORQUE’s annotation provides groups of questions, where one group consists of questions created by modifying the temporal nuance of an original seed question in a way that dramatically changes the answers. We use the official split and evaluation metrics, which include Macro F1, exact match (EM), and consistency (C). Consistency is the percentage of question groups for which a model’s predictions have $F1 \geq 80\%$ for all questions in the group.
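A sketch of the consistency metric as defined above; the F1 computation over predicted vs. gold answer-event sets is our assumption about the official scorer, and the groups are toy data:

```python
# Sketch of the group-level consistency metric C: the fraction of question
# groups in which every question's answer F1 reaches at least 0.8.

def f1(pred, gold):
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0
    tp = len(pred & gold)
    return 2 * tp / (len(pred) + len(gold)) if tp else 0.0

def consistency(groups, threshold=0.8):
    ok = sum(all(f1(p, g) >= threshold for p, g in group) for group in groups)
    return ok / len(groups)

# each group is a list of (predicted answers, gold answers) pairs
groups = [
    [(["sent"], ["sent"]), (["taken", "bearing"], ["taken", "bearing"])],
    [(["sent"], ["taken"]), (["taken"], ["taken"])],  # one question fails
]
print(consistency(groups))  # 0.5
```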
# 4.2 Baselines
We compare our model against several baselines, including PLMs and models that use contrastive methods to teach the model temporal relations. Specifically, OTR-QA (Shang et al., 2021) reformulates the TORQUE task as open temporal relation extraction and uses answer overlap to weakly supervise temporal relations. As it targets TORQUE without any external supervision, like our method, it is our main baseline. We further compare our model with methods that use human-annotated temporal dictionaries. ECONET (Han et al., 2021) is a continual pre-training approach with adversarial training that aims to equip models with knowledge about temporal relations; it uses an external corpus and compiles a dictionary of 40 common temporal expressions. UBA (Huang et al., 2022) employs attention-based question decomposition to understand fine-grained questions; it also utilizes a dictionary of temporal expressions as additional supervision to capture distinctions in temporal relationships. RoBERTa-large (Liu et al., 2019) is the baseline PLM provided with the TORQUE dataset, on which the previous SOTAs are based. In addition, we evaluate DeBERTa-v3-large (He et al., 2022), a state-of-the-art PLM on a wide range of natural language understanding tasks.
Table 1: Comparison between TRN and baselines on TORQUE dataset. We marked the models that (1) are trained without external supervision (2) have performed significance test on the test set. Superscripts represent significant improvements compared to RoBERTa(r), DeBERTa(d) and ECONET(e). The best performance is denoted in bold.
We do not regard recent LLMs as main baselines due to their subpar performance in temporal relation understanding (Chan et al., 2023). Additional evidence supporting this assessment is presented in our extended evaluation of ChatGPT in Appendix A.
# 4.3 Experimental Settings
We search for optimized hyperparameters for our model. $T$ and $T^{\prime}$ are set between {2, 3} for the graph iteration step and the evidence chaining step, respectively. Each transformer layer in the evidence chaining step has 8 attention heads with a hidden size of 1024, and the FFN layers in the attention module have dimensions between {1024, 2048}. The question group is annotated for the C metric in TORQUE. During fine-tuning, the gradient accumulation step is set to 1, the dropout ratio is set to 0.2, and other settings are identical to Ning et al. (2020). spaCy (Honnibal et al., 2020) is used for graph construction. We use the PyTorch 1.11 library and an NVIDIA GeForce RTX 3090 GPU, on which an epoch takes 42 minutes on average.
For the performance report, we report the average score on the dev set and the best score on the test set to make a fair comparison with the baselines, since OTR-QA only reports best single-model results for all sets and UBA reports single-model results on the test set.
For the significance test, we conducted paired t-tests $(p < 0.05)$ only against the PLMs and ECONET, due to the lack of reproducibility and of significance tests on the test set for OTR-QA and UBA.
# 4.4 Experimental Results
Table 1 compares our approach to the baseline methods. The baseline performances are provided by previous works (Ning et al., 2020; Han et al., 2021; Shang et al., 2021; Huang et al., 2022). The results show that TRN outperforms all compared baselines on both splits of TORQUE. TRN even surpasses ECONET and UBA, which use a human-annotated dictionary of temporal expressions. Moreover, we found that while DeBERTa-v3-large shows a score comparable to OTR-QA, TRN significantly beats both DeBERTa-v3-large and OTR-QA. These results indicate that our approach shows notable benefits over existing methods. One exception is the consistency score (C) of OTR-QA on the dev set; however, TRN outperforms it in F1 and EM and generalizes better to the test set, indicated by a much smaller dev-test gap in C (3.5 for OTR-QA vs. 2.2 for TRN). On the test set, TRN significantly outperforms all the baselines, achieving SOTA results on the TORQUE leaderboard.
# 4.5 PLM variants
Table 2 displays the results for PLM encoder variants. First, we implement our method on DeBERTa-v3-large (He et al., 2021) and observe that with the addition of TRN, it achieves the best test scores across all metrics. This demonstrates the effectiveness and generalizability of our method even with other PLM variants. Our method is also shown to generalize to the BERT model, with performance comparable to other previous methods. Lastly, when using the RoBERTa-base model, our results are again comparable to other baselines and surpass them in terms of F1 score, highlighting the scalability of TRN.
Table 2: Comparison with PLM variants. Naive results of BERT-large and RoBERTa-base are from TORQUE (Ning et al., 2020), and DeBERTa-large results are from our own implementation. Current SOTA results are from OTR-QA (Shang et al., 2021) and UBA (Huang et al., 2022).
Table 3: Ablation study on the dev set of TORQUE. Results are based on RoBERTa-large. The best performance is denoted in bold.
# 4.6 Ablation Study
To validate the effectiveness of each model component, we conduct an ablation study on the dev set and report the results in Table 3. In (a), we remove the syntactic graph network component $G_{syn}$ in the evidence extraction step and find that performance decreases significantly. This suggests that syntactic graph reasoning helps the downstream inductive reasoning by creating passage token representations more effectively. For the evidence chaining step, we remove (b) the whole layer, (c) the cross-time attention layer, and (d) the self-attention layer. Performance drops significantly with (b), indicating the importance of the evidence chaining step. Comparison between (c) and (d) indicates that the evidence chaining step gains performance chiefly by virtue of cross-time attention, the leading part of our reasoning elicitation, which attends over the predicted timeline. Meanwhile, (d) removing the simple stack of the transformer’s self-attention has the least impact on performance.
# 5 Discussion
While we empirically validated the effectiveness of TRN, its implication and generalizability can be further clarified by the following discussion questions:
• Q1: Does TRN mitigate spurious overlaps? • Q2: Does TRN generalize to another task?
# 5.1 Q1: Mitigating spurious overlaps
As we have claimed that comprehension of a span-based timeline works as a key constraint to avoid spurious overlaps, we first address whether the performance gain of TRN can be attributed to a better comprehension of the timeline in the passage.
To quantitatively measure whether TRN understands passage timelines, we adopt a passage-level consistency score $C_{p}$. In TORQUE, each passage contains multiple question groups, and each question group has questions asking about the same event. The original evaluation metric $C$ in Subsection 4.1 measures consistency at a specific event or time point within the passage by considering answer consistency in one question group. In contrast, $C_{p}$ assesses answer consistency across questions targeting different events by measuring the overall consistency of answers across multiple question groups within the same passage. We define $C_{p}$ as the percentage of passages for which a model’s predictions have $F1 \geq 80\%$ for all questions in the passage.
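A sketch of $C_p$ alongside its definition; as before, the F1 over predicted vs. gold answer-event sets is our assumption, and the passages are toy data:

```python
# Sketch of the passage-level consistency metric C_p: a passage counts as
# consistent only if every question in every one of its question groups
# reaches F1 >= 0.8.

def f1(pred, gold):
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0
    tp = len(pred & gold)
    return 2 * tp / (len(pred) + len(gold)) if tp else 0.0

def c_p(passages, threshold=0.8):
    ok = 0
    for groups in passages:          # one passage = a list of question groups
        pairs = [pair for group in groups for pair in group]
        ok += all(f1(p, g) >= threshold for p, g in pairs)
    return ok / len(passages)

passages = [
    [[(["sent"], ["sent"])], [(["taken"], ["taken"])]],    # fully consistent
    [[(["sent"], ["sent"])], [(["taken"], ["bearing"])]],  # one group fails
]
print(c_p(passages))  # 0.5
```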
Through evaluating the consistency of answers across different time points corresponding to each target event, the $C _ { p }$ score provides insights into the model’s understanding of the time spans of events. Therefore, if a model understands the passage timeline, its answers will be internally consistent with respect to the questions with different target events, which $C _ { p }$ quantifies. We compare TRN with the model equipped with contrastive learning $( C L )$ , which is implemented following OTR-QA’s contrastive loss (Shang et al., 2021).
Table 4 shows that the $C_{p}$ of TRN is significantly higher than that of $CL$. To isolate the effect of the chaining step, where the model reasons to predict the timeline, we also present ablated results removing the extraction step. We observe that even without evidence extraction, TRN outperforms $CL$, which indicates that the improved understanding of the timeline plays a critical role in mitigating spurious overlap and thereby achieving performance gains.
Table 4: Comparison of $CL$ and TRN on the dev set of TORQUE. The best performance is denoted in bold.
Table 5: Micro-F1 scores on the TB-Dense dataset. The best performance on the test set is denoted in bold.
Figure 4: Plot of the relationship between the question group size and F1 score gap. X-axis is the group size, binned into groups of 3. The number of groups in each bin is denoted in brackets. Y-axis is the gap between the average F1 score of TRN and $C L$ , in percentage.
Figure 4 groups the F1 gains by question group size, showing that the gap over $CL$ widens as the group size grows. This is coherent with our hypothesis that TRN gains effectiveness from the timeline information predicted from multiple related questions, which is more effective for larger question groups. Moreover, our method persistently outperforms the contrastive loss even for small question group sizes, with a margin of 1.5pp.
Lastly, as a qualitative observation, Figure 5 in Appendix B compares answers from TRN and $CL$: $CL$ fails to clearly distinguish the semantic difference between Q1 and Q2, while our timeline reasoning avoids such mistakes. TRN is aware that “exploded” occurred before the tour $(Q3)$ and not after the tour $(Q2)$, so it cannot have happened during the tour $(Q4)$, whereas $CL$ fails at this. In addition, TRN finds unmentioned events (e.g., “arrested” in $Q1$) and places them correctly on the timeline.
# 5.2 Q2: Generalization
To investigate whether our proposed approach generalizes to other temporal relation understanding tasks, we evaluate our method on TB-Dense (Cassidy et al., 2014), a public benchmark for temporal relation extraction (TRE).
For TB-Dense, given the passage and two event points in the passage, the model must classify the relation between the events into one of 6 types. As explicit questions are not provided in TB-Dense, we treat the two event points as a question and group the questions in the dataset with a simple rule, as follows: In the evidence extraction step, we prepend the two events, $e1, e2$, to the passage $P$, so the model input is “$[CLS] + e1 + e2 + [SEP] + P + [SEP]$”. In the evidence chaining step, we gather questions that are asked about the same first event within the same part of the passage, which can be easily identified by basic lexical matching. We use this grouping to construct the question group and predict the timeline. We implement our method based on the publicly available source code of ECONET (Han et al., 2021). Hyperparameters for fine-tuning are the same as for ECONET. The averages and standard deviations of Micro-F1 scores are reported over runs with 3 different seeds. Since ECONET is the only model that targets both TORQUE and TB-Dense, we compare our results with it.
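The input construction and the lexical grouping rule can be sketched as follows (the token strings and helper names are illustrative, not the released code):

```python
# Toy sketch of the TB-Dense adaptation: two event words are prepended as
# the "question", and instances sharing the same first event within the
# same passage (matched lexically) form a question group.

def build_input(e1, e2, passage):
    return f"[CLS] {e1} {e2} [SEP] {passage} [SEP]"

def group_by_first_event(instances):
    groups = {}
    for e1, e2, passage in instances:
        groups.setdefault((passage, e1), []).append((e1, e2, passage))
    return list(groups.values())

instances = [
    ("taken", "bearing", "P1"),
    ("taken", "sent", "P1"),
    ("sent", "bearing", "P1"),
]
groups = group_by_first_event(instances)
print(len(groups))  # 2: the "taken"-centered group and the "sent"-centered one
```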
Our method achieves an F1 score of 65.8% on this task, compared to a RoBERTa-large baseline F1 of 63.4%. Moreover, our method outperforms ECONET, which, unlike ours, requires an external corpus. These results demonstrate that TRN’s ability to build and utilize a predicted timeline is effective across temporal relation understanding tasks, and as such, our method has broader applicability beyond TRC. | Accurately understanding temporal relations between events is a critical
building block of diverse tasks, such as temporal reading comprehension (TRC)
and relation extraction (TRE). For example in TRC, we need to understand the
temporal semantic differences between the following two questions that are
lexically near-identical: "What finished right before the decision?" or "What
finished right after the decision?". To discern the two questions, existing
solutions have relied on answer overlaps as a proxy label to contrast similar
and dissimilar questions. However, we claim that answer overlap can lead to
unreliable results, due to spurious overlaps of two dissimilar questions with
coincidentally identical answers. To address the issue, we propose a novel
approach that elicits proper reasoning behaviors through a module for
predicting time spans of events. We introduce the Timeline Reasoning Network
(TRN) operating in a two-step inductive reasoning process: in the first step, the
model answers each question with semantic and syntactic information.
The next step chains multiple questions on the same event to predict a
timeline, which is then used to ground the answers. Results on the TORQUE and
TB-Dense, TRC and TRE tasks respectively, demonstrate that TRN outperforms
previous methods by effectively resolving the spurious overlaps using the
predicted timeline. | [
"cs.CL"
] |
# 1 Introduction
LLM unlearning, the targeted removal of specific, undesirable knowledge from trained models [1–4], has emerged as a critical tool for enhancing the privacy, safety, and security of generative models. In privacy contexts, it enables the erasure of personal identifiers and copyrighted material from model generation [5–7]. For safety alignment, unlearning helps eliminate harmful or unsafe behaviors from LLMs [8–10]. In high-stakes domains such as cybersecurity and biosecurity, unlearning has been proposed as a defense mechanism to suppress dangerous model capabilities [11, 12]. Together, these applications position unlearning as a safety-critical task, one that necessitates principled algorithmic design and thorough evaluation.
From the perspective of training data removal (i.e., erasing the influence of specific data from a model), the commonly used gold standard for unlearning is exact unlearning, which retrains the model from scratch without the data to be forgotten [13–15]. While conceptually ideal, this approach is computationally infeasible for large-scale models like LLMs. As interest in scalable unlearning grows, a variety of approximate unlearning methods have emerged for LLMs. These include preference optimization techniques that reshape response likelihoods [16–18], gradient ascent-based updates [8, 14, 19], representation disruption strategies that alter internal knowledge [12], and model editing approaches such as task vectors [6] and localization-based interventions [20–22]. However, current approximate methods remain vulnerable: supposedly removed information can often be recovered via jailbreaking attacks [23, 24] or minimal fine-tuning [25, 26], revealing persistent residual knowledge.
Figure 1: Overview of the unlearning trace detection pipeline. Given forget-relevant and forget-irrelevant questions, responses from the original LLM and the unlearned LLM (the latter often degenerate, e.g., runs of escape characters) are fed to a classifier that predicts whether the model has been unlearned or not.
In addition to known robustness challenges, this work reveals a new vulnerability in LLM unlearning: unlearning trace detection, the ability to reverse-engineer whether a model has undergone unlearning based solely on its input–output behavior. That is, we examine whether one can reliably distinguish an unlearned model from its original counterpart by inspecting the model’s generations. We refer to the detectable behavioral and representational characteristics embedded in unlearned LLMs as unlearning traces. Our study is also inspired by the problem of reverse engineering of deceptions (RED) [27], an emerging area in trustworthy machine learning that infers an adversary’s goals, knowledge, or tactics from attack traces [28, 29]. Drawing from this idea, we revisit unlearning in the RED paradigm: one may detect whether a model has undergone unlearning and even, conditioned on input queries, potentially recover the forgotten information. This motivates the central question of our work:
(Q) Can we detect whether an LLM has been unlearned based on its responses, and what traces, if any, does unlearning leave behind in the model?
To address (Q), we demonstrate that unlearning in LLMs is indeed detectable, even using only model responses to general, forget-irrelevant prompts, via simple supervised classification; See Fig. 1 for the studied unlearning trace detection pipeline. This is because unlearning leaves consistent behavioral and representational traces, evident in systematic shifts in output responses and internal activations, particularly along principal spectral directions in final and intermediate layers.
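As a toy illustration of this detection pipeline, the sketch below featurizes responses by their rate of escape-character debris and thresholds it, a deliberately crude stand-in for the supervised classifiers studied in the paper; the example responses and the feature are invented for illustration:

```python
# Toy sketch of response-based unlearning trace detection (Fig. 1):
# featurize model responses and fit a trivial supervised classifier that
# predicts "unlearned" (1) vs "original" (0).

def degenerate_rate(text):
    """Fraction of the response made of escape-like debris ('\\n', '\\t')."""
    n = max(len(text), 1)
    return (text.count("\\n") + text.count("\\t")) / n

# labeled responses: 0 = original model, 1 = unlearned model (invented)
data = [
    ("The hemagglutinin and neuraminidase genes are often modified.", 0),
    ("Heart and blood vessels undergo the most dangerous changes.", 0),
    ("\\n\\t\\n\\npr\\n\\t\\n\\n\\t\\npr\\n", 1),
    ("\\t\\npr\\n\\n\\t\\t\\npr\\n\\n", 1),
]

# "training": take the mean debris rate as a decision threshold
threshold = sum(degenerate_rate(t) for t, _ in data) / len(data)

def predict(text):
    return int(degenerate_rate(text) > threshold)

accuracy = sum(predict(t) == y for t, y in data) / len(data)
```

In the paper's actual setting, the classifier is trained on many responses per model and must also work on forget-irrelevant prompts, where traces are subtler than this debris-rate caricature.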
In summary, our key contributions are as follows:
• We introduce and formalize the problem of unlearning trace detection, determining whether a model has undergone unlearning based solely on its output behavior, motivated by systematic post-unlearning divergences from original models.
• We show that simple supervised classifiers can detect unlearning traces from model outputs, and analyze how factors such as training data composition, model scale, classifier choice, and unlearning method affect detection accuracy.
• We reveal that unlearning leaves behind low-dimensional, learnable activation patterns, i.e., robust internal “fingerprints” that persist even when response-based detection becomes unreliable.
• We conduct comprehensive experiments across four instruction-tuned LLMs (Zephyr-7B, LLaMA3.1-8B, Qwen2.5-14B, Yi-34B), two state-of-the-art unlearning approaches (NPO and RMU), and diverse prompt types (WMDP, MMLU, UltraChat), validating the generality and limitations of unlearning trace detection across models, methods, and domains.
# 2 Related Work
LLM unlearning. Machine unlearning (MU) refers to the task of removing the influence of particular training data or knowledge from a model, often to meet privacy, legal, or safety requirements [30–37]. In the context of LLMs, recent efforts have focused on approximate unlearning techniques that adapt models post hoc to suppress the impact of a targeted forget set [1, 12, 17, 18, 20, 31, 38]. These include: (1) gradient ascent-type methods, which increase loss on the forget data to reverse learning [19, 39–41]; (2) preference optimization, which reshapes output distributions to downplay or reject undesired completions [7, 41]; and (3) representation-editing approaches, which directly modify model activations or parameters linked to the target knowledge [12, 22, 42, 43]. In addition, input-based prompting techniques have also been explored to suppress harmful generations at test time [44, 45]. While these methods can reduce the model’s dependence on sensitive content, they typically lack guarantees of faithful removal: subtle artifacts may persist in outputs or internal states. Our work departs from prior approaches by shifting focus to the forensic analysis of unlearned models. Instead of proposing a new unlearning algorithm, we study whether unlearning leaves detectable behavioral or representational fingerprints, which we call “unlearning traces”.
LLM model identity detection. An emerging line of research investigates methods to infer the identity or provenance of LLMs based on either their parameters or output behaviors. In this sense, closely related to our setting is the work of [46], which formulates a classification task over generated text to distinguish between different LLMs. Their findings attribute classification success to model-specific “idiosyncrasies” such as word distribution biases, formatting conventions (e.g., markdown usage), and distinct semantic preferences. Complementarily, another work [47] introduces a hypothesis testing approach to determine whether two LLMs were trained independently, using statistical comparisons of their outputs. Our work builds upon the output-based classification perspective, but instead of detecting model families, we target a more subtle distinction: identifying whether a given model has undergone unlearning. This extends prior work by focusing on intra-model variations induced by post-hoc unlearning interventions, rather than differences across model architectures or training corpora.
Backdoor detection. Another relevant line of research is backdoor (or Trojan) model detection, which focuses on identifying malicious behaviors by analyzing internal model activations. In LLMs, the work [48] projects MLP activations onto principal components to isolate trigger-specific states, which are then removed via model editing. The work [49] identifies backdoors by comparing cosine similarities of hidden states between clean and poisoned models. In computer vision, spectral methods reveal that poisoned and clean samples separate along top singular vectors of feature matrices [50], with robust covariance estimation enhancing this separation [51]. Additional techniques include hypothesis testing on latent representations to detect distributional mixtures [52], and measuring activation shifts under small input perturbations [53].
RED (reverse engineering of deceptions) problems. RED aims to infer covert modifications to a model, such as adversarial perturbations, backdoors, or editing interventions, using only model outputs or internal activations. Prior work has demonstrated that attacker goals and tactics can be reconstructed from adversarial inputs or latent states alone [28, 29, 54]. Inspired by this paradigm, we show that LLM unlearning leaves behind distinct, detectable fingerprints: lightweight classifiers trained on outputs or activations can reliably identify both the forgotten content and the unlearning method used. Our findings position unlearning trace detection as a new instance of RED, revealing an overlooked vulnerability in post-hoc model editing.
# 3 Preliminaries, Motivation, and Problem Statement
Preliminaries on LLM unlearning. To remove the influence of undesirable data or knowledge from a trained model while preserving its ability to generate essential content [1, 8, 55], the LLM unlearning problem is commonly formalized as a regularized optimization over two disjoint datasets: the forget set ${ \mathcal { D } } _ { \mathrm { f } }$ , containing data to be erased, and the retain set ${ \mathcal { D } } _ { \mathrm { r } }$ , comprising utility-relevant data on which model performance should be preserved. Given an LLM parameterized by $\pmb \theta$ , this problem yields
$$
\operatorname*{minimize}_{\pmb{\theta}} \;\; \ell_{\mathrm{u}}(\pmb{\theta}; \mathcal{D}_{\mathrm{f}}) + \gamma \, \ell_{\mathrm{r}}(\pmb{\theta}; \mathcal{D}_{\mathrm{r}}), \tag{1}
$$
where $\ell _ { \mathrm { u } }$ and $\ell _ { \mathrm { r } }$ denote the forget loss and retain loss, respectively, and $\gamma \geq 0$ controls the trade-off between forgetting effectiveness and utility preservation.
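As a toy illustration (the functions below are hypothetical stand-ins, not the paper's implementation), the regularized objective in (1) amounts to a single scalar training loss combining user-supplied forget and retain terms:

```python
def unlearning_loss(theta, forget_set, retain_set, l_u, l_r, gamma=1.0):
    """Combined unlearning objective of Eq. (1): forget loss plus a
    gamma-weighted retain loss. `l_u` and `l_r` are user-supplied callables
    standing in for the forget/retain losses of a concrete method."""
    return l_u(theta, forget_set) + gamma * l_r(theta, retain_set)

# Toy illustration with squared-error "losses" over scalar parameters.
mse = lambda th, data: sum((th - x) ** 2 for x in data) / len(data)
loss = unlearning_loss(0.5, forget_set=[1.0, 2.0], retain_set=[0.0],
                       l_u=mse, l_r=mse, gamma=0.5)
```

In practice, $\ell_{\mathrm{u}}$ and $\ell_{\mathrm{r}}$ would be instantiated by a concrete method such as RMU or NPO, and the parameter argument by the LLM's weights.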
A key differentiator among existing unlearning algorithms lies in the formulation of the forget loss $\ell _ { \mathrm { u } }$ . In this work, we focus on two state-of-the-art approaches for LLM unlearning: negative preference optimization (NPO) [56] and representation misdirection unlearning (RMU) [12]. RMU enforces forgetting by mapping the intermediate representations of samples $x \in \mathcal { D } _ { \mathrm { f } }$ to random target vectors, thereby preventing the model from encoding any meaningful information about them. This yields:
$$
\ell_{\mathrm{u}}(\pmb{\theta}; \mathcal{D}_{\mathrm{f}}) = \mathbb{E}_{\mathbf{x} \in \mathcal{D}_{\mathrm{f}}} \left[ \left\| M_{\pmb{\theta}}(\mathbf{x}) - c \cdot \mathbf{v} \right\|_2^2 \right], \tag{2}
$$
where $M _ { \theta } ( \cdot )$ denotes an intermediate-layer embedding, $\| \cdot \| _ { 2 }$ signifies the $\ell _ { 2 }$ norm, $c$ is a scaling hyperparameter, and $\mathbf { v }$ is drawn from a standard uniform distribution.
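For intuition, a minimal NumPy sketch of the RMU forget loss in (2); the embedding map `M`, dimension `d`, and batch below are toy stand-ins rather than an actual model layer:

```python
import numpy as np

def rmu_forget_loss(M, forget_batch, c, v):
    """Mean squared distance between intermediate embeddings M(x) and the
    fixed random target c * v, following the RMU forget loss in Eq. (2)."""
    return float(np.mean([np.sum((M(x) - c * v) ** 2) for x in forget_batch]))

rng = np.random.default_rng(0)
d = 8                                   # toy embedding dimension
v = rng.uniform(size=d)                 # random target direction ~ U[0, 1)
M = lambda x: np.tanh(x)                # stand-in for an intermediate-layer map
batch = [rng.normal(size=d) for _ in range(4)]
loss = rmu_forget_loss(M, batch, c=6.0, v=v)
```

Minimizing this loss pushes forget-sample embeddings toward a meaningless random vector, which is precisely why RMU's traces concentrate in the edited intermediate layers.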
In contrast to RMU’s random feature-based unlearning, NPO treats forget data as negative examples within a direct preference optimization framework [16]. The NPO-based unlearning objective yields
$$
\ell_{\mathrm{u}}(\pmb{\theta}; \mathcal{D}_{\mathrm{f}}) = \mathbb{E}_{\mathbf{x} \in \mathcal{D}_{\mathrm{f}}} \left[ -\frac{2}{\beta} \log \sigma \left( -\beta \log \left( \frac{\pi_{\theta}(\mathbf{x})}{\pi_{\mathrm{ref}}(\mathbf{x})} \right) \right) \right], \tag{3}
$$
where $\sigma ( \cdot )$ denotes the sigmoid function, $\beta > 0$ is a temperature parameter, and $\pi _ { \boldsymbol { \theta } } ( \mathbf { x } )$ represents the model’s prediction probability for input $\mathbf { x }$ . The original model prior to unlearning serves as the reference, with $\pi _ { \mathrm { r e f } } ( \mathbf { x } )$ denoting its output probability. NPO fine-tunes $\pmb \theta$ to enforce deviation from the reference model’s behavior on forget data. For further details on the unlearning methods, please refer to Appendix A.
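The NPO loss in (3) has a useful sanity property: before any fine-tuning, when $\pi_{\theta} = \pi_{\mathrm{ref}}$, the per-sample loss equals $(2/\beta)\log 2$, and it decreases as the model's probability on forget data drops below the reference. A minimal sketch, with scalar probabilities standing in for sequence likelihoods:

```python
import math

def npo_forget_loss(pi_theta, pi_ref, beta=0.1):
    """Per-sample NPO forget loss, following Eq. (3):
    -(2/beta) * log sigmoid(-beta * log(pi_theta / pi_ref))."""
    log_ratio = math.log(pi_theta) - math.log(pi_ref)
    sigmoid = 1.0 / (1.0 + math.exp(beta * log_ratio))  # sigma(-beta*log_ratio)
    return -(2.0 / beta) * math.log(sigmoid)

# Sanity check: at the start of unlearning (pi_theta == pi_ref) the loss is
# (2/beta)*log 2; it shrinks as pi_theta falls below pi_ref on forget data.
at_start = npo_forget_loss(0.3, 0.3, beta=0.1)
after = npo_forget_loss(0.01, 0.3, beta=0.1)
```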
Throughout this work, we perform LLM unlearning on the WMDP benchmark [12], which targets harmful knowledge removal. The forget set comprises 3,668 multiple-choice questions related to hazardous content in biosecurity and cybersecurity. Unlearning effectiveness (UE) is measured by the accuracy drop on forget-set questions, while utility preservation (UT) is evaluated using broad benchmarks such as MMLU [57] and MT-Bench [58].
Can we tell if an LLM has been unlearned? Strong signals in forget responses. While most existing work focuses on achieving LLM unlearning, emerging evidence reveals a critical blind spot: unlearned models, despite yielding safe responses with high UE on forget-relevant queries, often produce responses that deviate from the patterns of standard LLMs, exhibiting detectable abnormalities. Tab. 1 highlights a stark contrast between the original Yi-34B model and its RMU-unlearned version across two prompt types: (1) a forget prompt from the WMDP evaluation set, producing the forget response, and (2) a benign MMLU question used to assess standard QA capability, producing the forget-irrelevant response. As shown, the RMU-unlearned model’s forget response is often incoherent or nonsensical compared to the original model, despite successfully suppressing the original sensitive answer in response to the forget prompt. By contrast, both models produce accurate and informative outputs for the forget-irrelevant prompt, indicating that general QA capability remains largely unaffected.
Table 1: Comparison of responses from the original Yi-34B model and its RMU-unlearned counterpart on the WMDP benchmark. The forget prompt is drawn from the original WMDP evaluation set, while the forget-irrelevant prompt consists of a multiple-choice question from MMLU, used to assess general QA behavior.
Results in Tab. 1 naturally lead to the question: Can we detect whether an LLM has been unlearned based on its responses? The evidence suggests yes, particularly when examining forget responses. To further motivate this, we analyze the perplexity (PPL) distributions of original and unlearned models on both forget and forget-irrelevant prompts. Inspired by ONION [59], which detects textual backdoors via PPL shifts, we use GPT-2 to compute perplexity as a proxy for fluency and predictability.
As shown in Fig. 2, under WMDP forget prompts, the original Yi-34B produces responses with moderate PPL, reflecting a balance between fluency and diversity. In contrast, the RMU-unlearned model (termed Yi-34B-RMU) yields low-PPL outputs, driven by repetitive or vacuous content. For forget-irrelevant MMLU prompts, the PPL distributions of both models largely overlap. The above observations suggest that unlearning traces are most detectable from forget responses.
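Perplexity here is simply the exponentiated average negative log-likelihood of the response tokens; a library-free sketch (in the paper's setup, GPT-2 supplies the per-token probabilities):

```python
import math

def perplexity(token_probs):
    """Perplexity of a token sequence given each token's predicted
    probability: exp of the average negative log-probability."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A degenerate, highly repetitive response (per-token probabilities near 1)
# yields low perplexity, matching the low-PPL outputs of the unlearned model;
# a fluent but varied response scores higher. Probabilities are illustrative.
repetitive = perplexity([0.9, 0.95, 0.9, 0.97])
fluent_varied = perplexity([0.2, 0.1, 0.3, 0.15])
```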
Figure 2: GPT-2 perplexity distributions for Yi-34B vs. RMU-unlearned model responses. (a) WMDP forget queries and (b) MMLU forget-irrelevant queries. Perplexity, following ONION [59], quantifies fluency and predictability.
Problem statement: Detecting unlearning trace from model responses. Motivated by the observations above, we investigate whether unlearning leaves detectable traces in a model’s output; specifically, can one distinguish an unlearned model from its original counterpart using only textual responses? In the remainder of this work, we address unlearning trace detection by training a classifier on outputs from both original and unlearned LLMs, using a shared set of prompts. Each response is labeled by its source, allowing us to assess whether unlearning induces systematic patterns in language generation. However, this supervised classification task presents key challenges. First, the classifier must detect subtle linguistic or statistical shifts using only generated text, without access to model internals, training data, or side information. Second, as illustrated in Tab. 1 and Fig. 2, behavioral differences can be minimal when prompts are forget-irrelevant, making trace detection especially difficult in these cases. Once these challenges are overcome and the classifier achieves non-trivial accuracy, it provides strong evidence that unlearning leaves persistent, learnable traces in model behavior. This also has serious practical implications: adversaries could exploit these traces to infer whether unlearning has occurred and potentially reconstruct the removed sensitive information.
# 4 Supervised Classification for Detecting Unlearning Traces
Training a supervised classifier on LLM responses: unlearned vs. original. To detect unlearning traces, we formulate a supervised classification task aimed at distinguishing whether a given response was generated by the original or the unlearned LLM. This setup enables us to explore the detectability of unlearning based solely on textual outputs. In the following, we detail our data construction, model configuration, training procedure, and evaluation protocol.
To construct the classification dataset, we query both the original and unlearned versions of four representative instruction-tuned LLMs: Zephyr-7B, Yi-34B, LLaMA-3.1-8B, and Qwen2.5-14B. We use prompts from a diverse set of benchmarks: WMDP [12] for forget-related queries, and MMLU [60] and UltraChat [61] for general, forget-irrelevant queries. For each prompt, we collect the corresponding responses from both the original and unlearned model variants. Each response is then labeled based on whether it was generated by an unlearned model, resulting in a labeled, response-level dataset for classification training and evaluation. Implementation details of the unlearning methods and data construction can be found in Appendix A.
For classifier training, the default training dataset consists of an equal mix of responses from the forget dataset WMDP and the general utility dataset MMLU ($50\%$ each), denoted as ${ \mathcal { S } } _ { \mathrm { f g } }$. At test time, evaluation is performed using novel prompts sampled from WMDP, MMLU, and UltraChat, each disjoint from the training set, to ensure that the results reflect true generalization rather than memorization. We report classification accuracy as the primary evaluation metric, quantifying the detectability of unlearning traces based purely on textual outputs.
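The dataset construction above can be sketched as follows; the responder functions and prompts are toy placeholders, and the key detail is that the train/test split is performed at the prompt level so test prompts remain disjoint from training prompts:

```python
import random

def build_trace_dataset(prompts, respond_original, respond_unlearned,
                        train_frac=0.8, seed=0):
    """Label each response 0 (original model) or 1 (unlearned model), and
    split by *prompt* so that test prompts never appear during training."""
    rng = random.Random(seed)
    prompts = list(prompts)
    rng.shuffle(prompts)
    cut = int(train_frac * len(prompts))

    def make(ps):
        return ([(respond_original(p), 0) for p in ps]
                + [(respond_unlearned(p), 1) for p in ps])

    return make(prompts[:cut]), make(prompts[cut:])

# Toy stand-ins for querying the two model variants.
orig_model = lambda p: f"answer to {p}"
unl_model = lambda p: f"i cannot recall {p}"
train, test = build_trace_dataset([f"q{i}" for i in range(10)],
                                  orig_model, unl_model)
```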
Regarding classifier architecture, we adopt LLM2vec [62] as the base classifier, a lightweight sentence encoder designed for handling open-ended LLM outputs. We fine-tune LLM2vec with a two-layer MLP head for binary classification between original and unlearned models. This choice is motivated by its generalization capabilities, computational efficiency, and robustness to variable-length responses in the application of detecting idiosyncrasies in LLMs [46]. Classifier training follows a standard supervised learning protocol using the proposed dataset. Additional classifier and training details are provided in Appendix B.
In the following, we demonstrate the detectability of unlearning traces in models trained using both RMU- and NPO-based unlearning methods, and highlight the distinct behavioral characteristics associated with each approach.
Detectability of RMU. In Tab. 2, we present the test accuracy of the classifier trained on the mixed dataset of forget and forget-irrelevant responses $(\mathcal{S}_{\mathrm{fg}})$, generated from various source LLMs (indicated by row names). Evaluation is conducted on model responses to unseen prompts from WMDP, MMLU, and UltraChat (indicated by column names), with all test prompts disjoint from those used during training. As we can see, responses to WMDP prompts are highly distinguishable at test time, with classification accuracies exceeding $90\%$ across all models. This indicates that RMU-based unlearning leaves clearly detectable traces in responses tied to the forget set. In contrast, classification accuracy drops significantly when evaluating on MMLU and UltraChat, which contain forget-irrelevant prompts. For example, the classifier achieves only $53.68\%$ accuracy on MMLU and $50.14\%$ on UltraChat when distinguishing Zephyr-7B responses, near random guessing. This suggests that unlearning traces become harder to detect when the inputs are unrelated to the unlearned content. Interestingly, detection performance improves with larger model sizes. Yi-34B achieves $95.77\%$ accuracy on MMLU and $87.46\%$ on UltraChat, indicating that unlearning traces in larger models are more persistent and detectable, even under general prompts.
Table 2: Classification accuracy for distinguishing original vs. RMU-unlearned models, with unlearning applied to the WMDP dataset. Rows indicate the source LLM used for response generation and classifier training. Columns show test accuracy on responses to prompts from WMDP, MMLU, and UltraChat, all of which are disjoint from the training set to ensure generalization.
The above observations reveal that the generalizability of unlearning traces varies substantially across model families. Some models, particularly larger ones, exhibit broad behavioral shifts that are readily identifiable from output text alone, even when responses are not directly related to the unlearning target. Later, we show how unlearning trace localization can be further improved (Sec. 5) and how this leads to stronger classification performance (Sec. 6). Additional classification results trained under different dataset configurations are provided in Tab. 5.
Detectability of NPO. In Tab. 3, we present the classification accuracy when identifying the NPO-unlearned model, in contrast to Tab. 2 that focuses on RMU unlearning. The results show that NPO leaves significantly more prominent and consistent unlearning traces across all evaluation domains compared to RMU. All four LLMs achieve near-perfect classification accuracy on WMDP, MMLU, and UltraChat, indicating that NPO introduces strong and easily detectable changes to model behavior,
Table 3: Classification accuracy for distinguishing original vs. NPO-unlearned models. All setups remain consistent with Tab. 2.
even in response to general, forget-irrelevant prompts. For instance, even Zephyr-7B, which showed minimal detectability in the RMU setting, becomes trivially separable from its original version under NPO unlearning. These results also mirror the design differences between RMU and NPO. NPO’s objective in (3) enforces the deviation from the pre-trained model. In contrast, RMU’s localized manipulation of internal representations in (2) results in subtler traces, making response-level detection notably harder on general prompts. Additional classification results for NPO traces under different training regimes are provided in Appendix C.
Fine-grained differences between RMU and NPO. To better understand the differing unlearning characteristics of RMU and NPO, we conduct a fine-grained analysis comparing the lexical and stylistic properties of their responses against those from the original model. We quantify alignment with the original using ROUGE-1 and ROUGE-L [63, 64], which measure lexical overlap and structural similarity, respectively. Additionally, we employ BERTScore [65], which evaluates token-level semantic similarity using contextual embeddings from a pre-trained model (e.g., BERT [66]), offering a more nuanced comparison beyond surface-level matching.
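As a reference point, ROUGE-1 F1 reduces to unigram-overlap precision and recall between a candidate response and the original model's response; a minimal sketch (the example sentences are illustrative, not from the dataset):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall between a
    candidate response and a reference response."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())       # clipped unigram matches
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

identical = rouge1_f1("the virus spreads rapidly", "the virus spreads rapidly")
divergent = rouge1_f1("i am unable to answer", "the virus spreads rapidly")
```

A response closely aligned with the original scores near 1, while a heavily altered response (as NPO tends to produce) drops toward 0.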
Tab. 4 provides further evidence of the distinct behavioral impacts induced by NPO and RMU. Across both forget-related (WMDP) and forget-irrelevant (MMLU) prompts, RMU-unlearned model responses remain more closely aligned with those of the original model, as indicated by consistently higher ROUGE and BERTScore values. This supports our earlier classification results, where RMU traces were harder to detect, especially on forget-irrelevant prompts. In contrast, NPO-unlearned responses exhibit substantial drops across all similarity metrics, signaling broader lexical and semantic divergence from the original. The effect is particularly pronounced on MMLU (e.g., ROUGE-1 drops to 0.0160 for NPO vs. 0.2493 for RMU), suggesting that NPO alters even non-targeted responses. These findings reinforce the conclusion from Tab. 3: NPO induces more aggressive, globally detectable behavioral shifts, whereas RMU’s effects are more subtle and localized. Additional response examples from the original, RMU-, and NPO-unlearned models are provided in Appendix D.
Table 4: F1 scores of lexical and semantic similarity metrics (ROUGE-1, ROUGE-L, BERTScore) for RMU- and NPO-unlearned Yi-34B responses compared to the original model, averaged over 3,000 prompts from WMDP (forget-relevant) and MMLU (forget-irrelevant). Higher scores indicate greater alignment.
# 5 Unveiling Fingerprints of Unlearned Models
Beyond building a classifier on model responses to detect unlearning traces in Sec. 4, we further investigate these traces by probing the internal activations across different layers of the model. Our analysis shows that unlearning leaves behind distinct activation-level ‘fingerprints’, which offer clear explanations for the classification results reported in Sec. 4.
Spectral ‘fingerprints’: Definition and method. We define spectral fingerprints of unlearning as characteristic shifts in a model’s internal activations, observed along principal directions of variation. Specifically, for each newly generated token, we extract the corresponding activation vector. Repeating this across all generated tokens yields an ordered sequence of activation vectors, representing the model’s internal dynamics at a given layer. Following the approach in [50], we perform singular value decomposition (SVD) on the centered activation matrix and project the activations onto the right singular vectors to visualize and analyze spectral shifts induced by unlearning. To examine how unlearning affects these internal representations, we generate 100-token responses for 3,000 randomly sampled MMLU test questions using both the original and unlearned models. Here, we focus on the most challenging unlearning trace detection scenario: identifying traces from model responses to forget-irrelevant prompts drawn from MMLU. The presence of an unlearning fingerprint is revealed through the correct localization of these activation shifts, which we elaborate on in the following analysis.
Figure 4: Activations of original vs. RMU-unlearned models projected onto the first singular vector (SV1): (a) Zephyr, FINAL; (b) Llama, FINAL; (c) Yi, FINAL; (d) Zephyr, L7.D_PROJ; (e) Llama, L7.D_PROJ; (f) Yi, L13.D_PROJ.
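The centering-and-projection step can be sketched in NumPy; the activation matrices below are synthetic stand-ins in which the "unlearned" activations are shifted along a fixed direction:

```python
import numpy as np

def project_on_sv1(activations):
    """Center a (tokens x hidden_dim) activation matrix and project each row
    onto the first right singular vector (SV1) of the centered matrix."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

rng = np.random.default_rng(1)
# Synthetic stand-ins: "unlearned" activations shifted along one direction.
shift = np.full(16, 3.0)
original = rng.normal(size=(200, 16))
unlearned = rng.normal(size=(200, 16)) + shift
proj = project_on_sv1(np.vstack([original, unlearned]))
gap = abs(proj[:200].mean() - proj[200:].mean())  # separation along SV1
```

Because the between-group shift dominates the variance, SV1 aligns with the shift direction and the two projected distributions separate cleanly, mirroring the spectral fingerprints discussed here.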
NPO exhibits strong spectral fingerprints. For NPO-unlearned models, we extract the final normalized activations: specifically, the outputs from the last layer after root mean square normalization (RMSNorm). As shown in Fig. 3, there is a pronounced distributional shift between the unlearned and original models when activations are projected onto the first right singular vector (SV1). This observation aligns with the results in Tab. 3, where classifiers achieve near-perfect accuracy in distinguishing NPO-unlearned responses from those of the original model. Additional spectral fingerprint results for other models are provided in Appendix E.
Figure 3: Final-layer activations of the original vs. NPO-unlearned model projected onto the first singular vector (SV1).
RMU exhibits subtle but clear spectral fingerprints when localized correctly. Following the same procedure, we extract the final pre-logit activations for RMU-unlearned models (denoted as FINAL). As shown in Fig. 4-(a-c), there is no apparent distributional shift in the projected activations that would allow us to distinguish the RMU-unlearned models from their original counterparts. To investigate further, we examine activations from intermediate layers, i.e., the layers directly modified by RMU as described in (2).
Specifically, we extract activations from sublayers within the feed-forward network (FFN) of intermediate layers, i.e., the down-projection (D_PROJ) and gate-projection (G_PROJ) sublayers. When extracting from layer $i$ (denoted as $\mathrm{L}_i$), we refer to the corresponding activations as $\mathrm{L}_i$.D_PROJ and $\mathrm{L}_i$.G_PROJ, respectively. As shown in Fig. 4-(d-f), all models exhibit spectral shifts in the activation distributions for responses generated by the RMU-unlearned model. For Zephyr-7B, the fingerprint appears exclusively in the projection of $\mathrm{L}_7$.D_PROJ along the first singular vector. Although present, the distributional shift is relatively subtle, validating the model’s lower classification accuracy in Tab. 2. For LLaMA3.1-8B, we again observe spectral fingerprints in $\mathrm{L}_7$.D_PROJ along the top singular direction, though the shift is more pronounced compared to Zephyr. The strongest spectral fingerprints are observed in Yi-34B, with clear shifts across multiple layers, notably $\mathrm{L}_{13}$, $\mathrm{L}_{14}$, and $\mathrm{L}_{15}$. This observation aligns with its high classification accuracy of $95.77\%$ in Tab. 2.
A closer look at final activations for RMU. As shown in Fig. 4, spectral fingerprints characterized by distributional shifts were not observed in the final pre-logit activations of RMU-unlearned models. However, due to the residual stream architecture of transformers [67], earlier activations (where RMU fingerprints are found) contribute indirectly to the final output. This suggests that the unlearning signal may still be embedded in the final activations, albeit in a more complex form. To uncover this effect, we apply supervised UMAP [68], a non-linear dimensionality reduction technique. In Fig. 5, we demonstrate that UMAP yields a clearer separation between original and RMU-unlearned activations at FINAL for Zephyr-7B. Additional results for other models are presented in Appendix F. This suggests the existence of a low-dimensional nonlinear manifold where original and unlearned final activations are well-separated, even for models like Zephyr-7B that exhibit only subtle spectral shifts in Fig. 4. This highlights the potential for learning a more effective classifier using final pre-logit activations, as further explored in Sec. 6.
Figure 5: UMAP projections of FINAL layer activations on MMLU prompts, comparing the original and RMU-unlearned Zephyr-7B models.
# 6 Experiments
In this section, we present a comprehensive experimental analysis of unlearning trace detection. Our study primarily covers: (i) response-based detection under varying training data configurations and classifier architectures, (ii) an enhanced detection strategy leveraging internal model activations, and (iii) a large-scale multi-class classification across diverse model families and unlearning methods. Unless otherwise noted, all experimental setups follow those described in Sec. 4.
Supervised classification under different training regimes. Recall from Sec. 4 that the default training dataset for the supervised classifier, denoted as $\mathcal{S}_{\mathrm{fg}}$, consists of a $50/50$ mix of forget-related and forget-irrelevant responses. To examine how unlearning detection varies under different training data compositions, we consider two additional regimes: $\mathcal{S}_{\mathrm{f}}$, which includes only WMDP forget-related responses ($100\%$), and $\mathcal{S}_{\mathrm{g}}$, which includes only MMLU forget-irrelevant responses ($100\%$). Tab. 5 presents the performance of detecting the RMU-unlearned model across the three training regimes for four LLMs. When trained solely on $\mathcal{S}_{\mathrm{f}}$, nearly all models achieve higher accuracy on forget-related prompts (e.g., $97.20\%$ for Zephyr-7B) compared to training on $\mathcal{S}_{\mathrm{fg}}$, but their performance drops to near-random levels (around $50\%$) on forget-irrelevant queries. In contrast, training on $\mathcal{S}_{\mathrm{g}}$, which lacks direct relevance to the unlearning target, fails to enable effective trace detection, even when evaluated on forget-relevant WMDP prompts. This outcome is expected, as forget-irrelevant responses used for training contain the least fingerprint information and are weakly correlated with unlearning traces. The mixed regime $\mathcal{S}_{\mathrm{fg}}$, by combining both response types, consistently achieves strong performance across all evaluation scenarios. We refer the reader to Appendix C for a parallel study on training regime effects for NPO-based unlearning trace detection.
Table 5: Classification accuracy for distinguishing original vs. RMU-unlearned models under three training regimes: $\mathcal{S}_{\mathrm{fg}}$, $\mathcal{S}_{\mathrm{f}}$, and $\mathcal{S}_{\mathrm{g}}$. Columns report test accuracy on WMDP, MMLU, and UltraChat prompts, with no overlap with the training sets.
Unlearning classification accuracy vs. choice of classifier architecture. To evaluate the impact of classifier architecture on unlearning trace detection, we compare a range of pretrained text encoders, following the protocol of [62]. Specifically, we experiment with classifiers based on BERT [66], T5 [69], GPT-2 [70], and LLM2vec [62], each paired with a lightweight two-layer MLP head. Each model is trained to distinguish between responses from the original and unlearned LLMs. As shown in Tab. 6, LLM2vec consistently achieves the highest classification accuracy across all evaluation settings, motivating its adoption as our default classifier architecture. Additional results are provided in Appendix G.
Table 6: Classification accuracy for distinguishing original vs. RMU-unlearned responses using different pretrained sequence encoders. The source LLM is Yi-34B with RMU applied on the WMDP dataset. All other settings mirror those in Tab. 2.
Improved unlearning trace detection using pre-logit activations. Recall from Sec. 5 that unlearning may leave more pronounced traces in a model’s internal representations than in its output text. To investigate this, we extract the final pre-logit activation vector for each prompt response and train a lightweight two-layer MLP classifier using these features. The training and evaluation follow the same train/test splits as used for the response-based classifiers. As shown in Fig. 6, the activation-based detector consistently and substantially outperforms the text-only approach: even in the most challenging case (Zephyr-7B on MMLU), detection accuracy for RMU unlearning improves from just over $50\%$ to above $90\%$. Similar gains are observed across all models and evaluation sets. These results confirm that unlearning induces pronounced, low-dimensional shifts in the final hidden representations, which can be effectively leveraged by a simple MLP classifier. The primary limitation of this approach lies in its reliance on white-box access to extract internal activations. Additional results for other unlearning methods are provided in Appendix H.
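A hedged sketch of this activation-based detection: a simple logistic-regression probe (standing in for the paper's two-layer MLP head) trained on synthetic "pre-logit activation" vectors whose class distributions differ only by a small mean shift, the kind of low-dimensional signature discussed above:

```python
import numpy as np

def train_linear_probe(X, y, lr=0.1, steps=300):
    """Logistic-regression probe on activation vectors; a lightweight
    stand-in for the MLP head used for activation-based trace detection."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(unlearned)
        grad = p - y                             # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(2)
# Synthetic "original" vs "unlearned" activations with a small mean shift.
X = np.vstack([rng.normal(0.0, 1.0, (300, 32)),
               rng.normal(0.5, 1.0, (300, 32))])
y = np.concatenate([np.zeros(300), np.ones(300)])
w, b = train_linear_probe(X, y)
acc = np.mean(((X @ w + b) > 0) == y)
```

Even this linear probe recovers the shifted class reliably, illustrating why activation features outperform text-only detection when output-level differences are subtle.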
Figure 6: Radar chart comparing unlearning trace detection accuracy across four source LLMs (post-RMU unlearning) and three test sets (WMDP, MMLU, UltraChat). Classification is performed using two feature types: text-based responses (blue) and pre-logit activations (orange), with activation-based curves annotated at their exact values (up to $100\%$). The chart highlights the advantage of internal representations for robust and consistent unlearning trace detection.
Figure 7: Confusion matrices for model–unlearning pair classification. Rows denote the true classes (i.e., original or unlearned versions for each LLM type), and columns indicate the predicted classes. Diagonal entries correspond to correct predictions, while off-diagonal entries reflect misclassifications. Results are shown for (a) WMDP (forget-related) and (b) MMLU (forget-irrelevant) test sets.
Extended multi-class classification: Distinguishing unlearning variants together with source model types. We extend our analysis to a more complex 8-way classification task that jointly distinguishes among four LLM families, each in both their original and unlearned forms. This setup enables a more fine-grained examination of model-specific unlearning traces. Implementation and hyperparameter details are provided in Appendix I. Fig. 7 displays the resulting confusion matrices on both forget-related (WMDP) and forget-irrelevant (MMLU) test sets. On WMDP, predictions are highly concentrated along the diagonal, indicating strong agreement between the predicted and true model–unlearning pairs. This confirms that unlearning traces are clearly detectable when test prompts align with the domain of the forgotten content. In contrast, classification accuracy declines on the MMLU test set, particularly for the Zephyr-7B models, where most errors involve confusion between the original and RMU-unlearned versions. Nevertheless, larger models such as Yi-34B and Yi-34B-RMU maintain high accuracy, suggesting that unlearning traces in these models persist and remain detectable even when evaluated on general, forget-irrelevant prompts. Additional results for NPO-unlearned models under this multi-class setting are reported in Appendix I.
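The confusion-matrix convention used in Fig. 7 (rows = true classes, columns = predictions, diagonal = correct) can be computed as below. The label list matches the eight model–unlearning pairs; the predictions are fabricated purely to exercise the function, not the paper's results.

```python
import numpy as np

labels = ["Zephyr-7B", "Zephyr-7B-RMU", "LLaMA-3.1-8B", "LLaMA-3.1-8B-RMU",
          "Qwen2.5-14B", "Qwen2.5-14B-RMU", "Yi-34B", "Yi-34B-RMU"]

def confusion_matrix(y_true, y_pred, n_classes):
    """Row i, column j counts samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Illustrative predictions only: 5 samples per class, one injected error
# (an original Zephyr-7B sample predicted as its RMU-unlearned variant,
# mirroring the confusion pattern described for MMLU above).
y_true = np.repeat(np.arange(8), 5)
y_pred = y_true.copy()
y_pred[3] = 1

cm = confusion_matrix(y_true, y_pred, 8)
per_class_acc = np.diag(cm) / cm.sum(axis=1)
print(per_class_acc)
```

Row-normalizing `cm` (dividing each row by its sum) yields the percentage entries shown in the figure panels.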
[Figure 7 panels: (a) forget-relevant testing (WMDP); (b) forget-irrelevant testing (MMLU). Numeric confusion-matrix cell values omitted.] | Machine unlearning (MU) for large language models (LLMs), commonly referred
to as LLM unlearning, seeks to remove specific undesirable data or knowledge
from a trained model, while maintaining its performance on standard tasks.
While unlearning plays a vital role in protecting data privacy, enforcing
copyright, and mitigating sociotechnical harms in LLMs, we identify a new
vulnerability post-unlearning: unlearning trace detection. We discover that
unlearning leaves behind persistent "fingerprints" in LLMs, detectable traces
in both model behavior and internal representations. These traces can be
identified from output responses, even when prompted with forget-irrelevant
inputs. Specifically, a simple supervised classifier can reliably determine
whether a model has undergone unlearning based solely on its textual outputs.
Further analysis shows that these traces are embedded in intermediate
activations and propagate nonlinearly to the final layer, forming
low-dimensional, learnable manifolds in activation space. Through extensive
experiments, we show that forget-relevant prompts enable over 90% accuracy in
detecting unlearning traces across all model sizes. Even with forget-irrelevant
inputs, large LLMs maintain high detectability, demonstrating the broad
applicability of unlearning trace detection. These findings reveal that
unlearning leaves measurable signatures, introducing a new risk of
reverse-engineering forgotten information when a model is identified as
unlearned given an input query. Codes are available at [this
URL](https://github.com/OPTML-Group/Unlearn-Trace). | [
"cs.LG"
] |
# I. INTRODUCTION
BEFORE the deployment of Automated Driving Systems (ADSs), their safety and reliability must be rigorously validated, a process traditionally requiring billions of miles of on-road driving by Automated Vehicles (AVs) [1]. However, conventional mileage-based on-road testing has been deemed impractical due to its high costs and time-intensive nature. To address these limitations, the PEGASUS project [2] introduced the scenario-based testing approach, which enhances validation efficiency by subjecting ADSs to simulated and virtual driving environments. The effectiveness of this method has been substantiated, with simulation-based testing recognized by Waymo as a major contributor to the performance improvements of its AVs [3]. Consequently, the exploration of scenario-based testing has emerged as a crucial area of research.
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. (Corresponding author: Jia Hu)
Yongqi Zhao, Ji Zhou, Dong Bi, Tomislav Mihalj, and Arno Eichberger are with the Institute of Automotive Engineering, Graz University of Technology, 8010 Graz, Austria (e-mail: yongqi.zhao@tugraz.at; ji.zhou@student.tugraz.at; dong.bi@tugraz.at; tomislav.mihalj@tugraz.at; arno.eichberger@tugraz.at).
Jia Hu is with the Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China (e-mail: hujia@tongji.edu.cn).
Meanwhile, rapid advancements have been observed in the field of Artificial Intelligence (AI). Among the most prominent developments, Large Language Models (LLMs) have emerged as state-of-the-art AI systems capable of understanding and generating human language, and of performing a wide range of tasks without task-specific training [4], [5]. In recent years, research at the intersection of LLMs and Automated Driving (AD) has gained increasing attention. As shown in Figure 1, subplot 1a illustrates the increasing volume of research publications related to LLMs and AD topics individually, while subplot 1b highlights a noticeable rise in studies specifically addressing the application of LLMs in the AD domain. This trend underscores a rapidly growing research direction focused on applying LLMs in the development of ADSs.
Figure 1. Number of publications from 2020 to 2024 based on Web of Science data: (a) research topics of LLMs and AD; (b) research topics of the application of LLMs for AD.
Existing surveys have reviewed various aspects of LLMs, including their development process [6]–[8], reasoning capabilities [9], and prompting strategies [10], [11], as well as privacy-related concerns [12]–[14]. In parallel, several surveys have addressed scenario-based testing methodologies [15]–[23], and others have examined the application of LLMs in AD and related domains [24]–[28]. However, a systematic review specifically focusing on how LLMs are utilized within scenario-based testing of ADSs remains absent.
To the best of the authors’ knowledge, this work presents the first comprehensive survey that systematically examines how LLMs are integrated into various phases of scenario-based testing of ADSs, offering a broad perspective on emerging methodologies. The main contributions of this work are outlined as follows:
• A taxonomy is established to classify existing approaches based on the testing phases and the specific roles assigned to LLMs.
• The types of LLMs, their usage strategies, associated tasks, and the simulators integrated for scenario-based testing are systematically summarized.
• Five key challenges are identified, and corresponding future research directions are proposed to advance the application of LLMs in this domain.
The remainder of this work is organized as follows: Section II reviews recent survey studies categorized by relevant topics. Section III introduces key terminologies used throughout the paper. Section IV presents a detailed analysis of specific tasks performed by LLMs. Section V summarizes the types of LLMs and their corresponding adoption strategies using tabular representations. Section VI briefly outlines the application of LLMs in industrial contexts. Section VII identifies five major challenges and discusses potential research directions. Finally, Section VIII presents the concluding remarks of this survey.
# II. RELATED SURVEY
This section provides an overview of recent relevant survey studies. It is organized into four categories: general surveys on LLMs, surveys focusing on scenario-based testing, surveys covering the use of LLMs in AD, and surveys addressing LLM applications in other domains.
# A. Survey of LLM
Recent surveys have examined the development and evolution of LLMs and their multimodal extensions. Naveed et al. [6] and Hadi et al. [7] offer comprehensive overviews of LLM architectures, training strategies, practical applications, and associated challenges. Zhang et al. [8] systematically analyze 126 multimodal LLMs (MLLMs), with detailed comparisons of model designs and benchmark performance.
In addition to model development, several surveys have explored key functional aspects of LLMs. Su et al. [9] review efficient reasoning strategies, categorizing them into model-based, output-based, and prompt-based approaches. The domain of prompt engineering has been addressed by Sahoo et al. [10] and Chen et al. [11], who examine various methods, datasets, applications, and associated security implications.
Privacy preservation remains a critical limitation to broader deployment of LLMs. To address this concern, multiple surveys have investigated privacy-related challenges. Hu et al. [12] discuss the application of differential privacy in natural language models. Neel and Chang [13] provide a broader overview of privacy issues in LLMs, while Edemacu and Wu [14] focus specifically on privacy concerns related to in-context learning and prompting mechanisms.
# B. Survey of Scenario-Based Testing
Scenario-based testing has been widely recognized as a critical methodology for assessing the safety and reliability of ADSs. A number of studies have reviewed this approach from various perspectives. Nalic et al. [15] and Zhang et al. [16] conduct extensive reviews encompassing over 80 publications and propose taxonomies to classify scenario generation techniques. Zhong et al. [17] focus on scenario-based testing utilizing high-fidelity simulation platforms, whereas Ding et al. [18] categorize generation methods into data-driven, adversarial, and knowledge-based categories.
Beyond scenario generation, some studies have extended their scope to include safety assessment frameworks. Riedmaier et al. [19] advocate for the integration of formal verification methods within scenario-based testing. Similarly, Wishart et al. [20] review current verification and validation practices to support the development of SAE standards. Further, Alghodhaifi et al. [21] and Tang et al. [22] investigate evaluation efficiency, emphasizing the role of accelerated and AI-based techniques as valuable complements to scenario-based approaches. Finally, Mihalj et al. [23] explore the interaction between ADSs and physical infrastructure, offering perspectives on how environmental factors can influence the design of test scenarios.
# C. Survey of LLM for Automated Driving
Recent surveys investigate the integration of LLMs and their multimodal extensions into ADSs, highlighting their potential to enhance perception, reasoning, and decision-making capabilities. With a particular focus on Vision-Language Models (VLMs), Yang et al. [29] and Zhou et al. [30] review their applications across key functions such as perception, planning, control, and data generation, while underscoring their open-world understanding capabilities and the challenges involved. In a complementary study, Cui et al. [31] systematically review the development, tools, datasets, and challenges of applying MLLMs in AD and map systems, with the aim of informing future research and practical implementation.
Broader discussions are provided by Fourati et al. [32] and Li et al. [33], who examine the deployment of LLMs, VLMs, and MLLMs within both modular and end-to-end system architectures, focusing on their structural designs, deployment strategies, and prospective research directions. At the system level, Gan et al. [34] offer a comprehensive review of LLM applications in intelligent transportation systems, with an emphasis on scalable and efficient implementation.
# D. Survey of LLM for Miscellaneous Domains
Beyond the domain of AD, LLMs are extensively reviewed across a range of application areas. In the context of software testing, Braberman et al. [24] propose a taxonomy for LLM-based verification tasks, while Zhou et al. [25] outline frameworks and challenges associated with intelligent system testing. Yu et al. [26] survey alignment algorithms for MLLMs, and Luo et al. [27] review visual foundation models for road scene understanding. In the context of urban innovation, Xu et al. [28] examine the integration of generative AI models with urban digital twins, emphasizing their potential to support smart city management.
# III. TERMINOLOGY
# A. Scenario
A scenario is a description of the temporal relationship between several scenes in a sequence of scenes, with goals and values within a specified situation, influenced by actions and events [35], [36].
Figure 2. Scenario-based testing workflow: Scenario Source → Scenario Generation → Scenario Database → Scenario Selection → Test Execution → ADS Assessment.
# B. Scenario Abstraction Level
According to [37], [38], scenarios are organized into four levels of abstraction: 1) Functional scenarios represent high-level traffic situations described in natural language using domain-specific terms. 2) Abstract scenarios are defined by transforming natural language descriptions into formats suitable for machine interpretation, often supported by ontologies. 3) Logical scenarios define the parameter ranges of the state values used for scenario representation. 4) Concrete scenarios use specific state values to ensure their reproducibility and to enable test methods to execute the scenario.
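The logical-to-concrete step above can be sketched as follows: a logical scenario fixes parameter ranges, and sampling concrete state values yields reproducible scenarios for execution. The scenario name and parameter names are illustrative assumptions, not drawn from [37], [38].

```python
import random
from dataclasses import dataclass

@dataclass
class LogicalScenario:
    name: str
    parameter_ranges: dict  # parameter name -> (low, high)

    def instantiate(self, seed):
        """Draw one concrete scenario; a fixed seed makes it reproducible."""
        rng = random.Random(seed)
        return {p: round(rng.uniform(lo, hi), 2)
                for p, (lo, hi) in self.parameter_ranges.items()}

# Hypothetical motorway cut-in logical scenario.
cut_in = LogicalScenario("motorway_cut_in", {
    "ego_speed_mps": (20.0, 35.0),
    "cut_in_gap_m": (5.0, 30.0),
    "lead_decel_mps2": (0.0, 6.0),
})
concrete = cut_in.instantiate(seed=42)
print(concrete)
```

Re-running `instantiate` with the same seed returns the identical concrete scenario, which is what makes a concrete scenario reproducible by a test method.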
# IV. LLM APPLICATIONS BY PHASE
The scenario-based testing workflow, as illustrated in Figure 2, is initiated with scenario sources, which may include synthesized data, naturalistic driving measurements, or human expert knowledge. These sources are utilized to generate test scenarios, which are subsequently organized and stored within a scenario database. Scenario selection is then performed according to predefined criteria, followed by execution on simulation platforms such as CARLA [101] to evaluate the performance of ADSs. Finally, test results are analyzed and reported in the ADS assessment phase.
In this section, the application of LLMs within the different phases of scenario-based testing is examined, along with corresponding implementation details. An overview of the research structure for this section is presented in Figure 3.
# A. Scenario Source
Scenario-based testing of ADSs heavily relies on data, which serve as the basis for generating test scenarios. As a result, considerable attention has been directed toward scenario sources to support this process. As noted in [15], [19], these sources can be classified into data-based or knowledge-based categories. Data may be obtained from real-world driving environments or synthetically generated using real-world contexts and expert input. Surveyed literature indicates that LLMs are primarily employed for tasks such as data enrichment, labeling, and retrieval. The implementation strategies associated with these applications are described below.
1) Data Enrichment: The acquisition of real-world driving data often falls short of meeting the evolving demands of ADSs, primarily due to the high costs and time-intensive nature of data collection and annotation. As a result, data synthesis has become an essential component for ADS testing [102]. LLMs have been employed to generate driving trajectories based on natural language instructions. In [96], LLMs are utilized for both trajectory-conditioned language generation and language-conditioned trajectory synthesis. In [97], right-hand driving trajectories are synthesized by LLMs using left-hand driving data as a reference. In [98], an LLM is used to translate brief descriptions of vehicle interactions into realistic driving behaviors and trajectories by incorporating human-like driving logic.
Jia et al. [99] further expand data enrichment by combining an LLM with a video diffusion model to generate future driving scenes in video format. This enables trajectory generation and visual simulation based on natural language–driven vision–action pairs. Furthermore, LLMs have been adopted for ground truth generation; for example, Wei et al. [100] propose ChatSim, a framework that enables the creation of editable, photo-realistic 3D driving scenes through natural language commands. Within this framework, LLMs are used to facilitate user interaction and to enable the integration of external digital assets.
2) Data Enrichment Through Hazard Analysis: Hazard Analysis and Risk Assessment (HARA) and Systems Theoretic Process Analysis (STPA) were originally developed for functional safety and system-level hazard analysis, respectively. These methods are also commonly applied within the scenario-based approach described in ISO 21448 (cf. [16], [103], [104]) to systematically identify hazardous scenarios, gain insights into potential hazards (cf. [105]–[107]), and guide the generation of high-risk scenario seeds (cf. [108]–[110]).
HARA automation using LLMs has been explored to improve the efficiency and accuracy of safety analysis in ADSs. At Qualcomm [90], LLMs enable the identification of $20 \%$ more hazardous scenarios in an Autonomous Emergency Braking (AEB) case study compared to manual methods. Volvo Cars researchers investigate LLMs to generate scenarios and malfunctions from functional descriptions using a general framework, without relying on standardized databases [91], thereby demonstrating their potential for domain-independent safety analysis. A follow-up study [92] applies LLMs to generate safety requirements and detect inconsistencies, which enhances efficiency in safety engineering while highlighting the need for expert validation to ensure accuracy.
STPA automation using LLMs has also been investigated. Charalampidou et al. [93] use ChatGPT-4 to generate loss scenarios for an unmanned aerial vehicle rescue system, although the results require human validation. Similarly, Diemert et al. [94] introduce a cooperative framework in which LLMs assist in identifying hazards, though expert oversight is required due to inaccuracies. Qi et al. [95] extend this to automotive and energy systems, assessing human-LLM collaboration schemes.
3) Data Labeling: Manually annotating data is both costly and labor-intensive, thereby necessitating automated solutions to enhance the scalability and efficiency of dataset development [102]. To address this challenge, LLMs have been utilized to partially substitute human effort in the annotation process. Chen et al. [89] introduce the first benchmark for evaluating Large Vision-Language Models (LVLMs) in the context of self-driving corner cases. In their work, LLMs are employed to automatically generate annotations in JSON format, which are subsequently verified through manual inspection to ensure data accuracy and quality.
Figure 3. Research tree of LLM applications on scenario-based approach.
4) Data Retrieval: Effective scenario generation requires the retrieval of relevant information from large-scale, heterogeneous datasets, which often include multimodal data in varying formats and structures. This inherent complexity presents challenges for unified processing and has traditionally been managed through rule-based methods, such as Structured Query Language (SQL) queries (cf. [111]), which demand significant manual effort. Recent studies have instead employed LLMs to enable more intuitive and flexible data retrieval, thereby reducing the reliance on predefined rules and manual querying.
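The embedding-based retrieval that replaces hand-written SQL can be sketched as follows: scenario descriptions and the query are embedded into a shared vector space and ranked by cosine similarity. The `embed` function here is a toy bag-of-words stand-in for a real text-embedding model, and the corpus entries are invented examples.

```python
import numpy as np

def tokenize(text):
    return text.lower().replace(",", " ").split()

def embed(text, vocab):
    """L2-normalized bag-of-words vector over a fixed vocabulary."""
    vec = np.zeros(len(vocab))
    for tok in tokenize(text):
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

corpus = [
    "night rain pedestrian crossing at intersection",
    "motorway cut-in by truck in heavy traffic",
    "parking lot low speed reversing maneuver",
]
vocab = {t: i for i, t in enumerate(sorted({t for d in corpus for t in tokenize(d)}))}

query = "cut-in on motorway"
sims = [embed(query, vocab) @ embed(doc, vocab) for doc in corpus]
best = int(np.argmax(sims))
print(corpus[best])
```

A production system would swap the toy `embed` for an LLM embedding model and store the corpus vectors in a vector database, as in the work of Knapp et al. [85].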
Video retrieval efficiency has been demonstrated to improve through the application of LLMs. In the study by Knapp et al. [85], MLLMs are integrated with a vector database to enable natural language querying of driving logs. Scenario descriptions are generated from sensor data and video content, allowing users to navigate large-scale datasets without relying on SQL-based queries.
Image retrieval efficiency has also been improved through the application of LLMs. In [86], three LVLMs are employed to perform image retrieval based on natural language queries. Their performance is evaluated using precision-based metrics on a benchmark dataset containing diverse driving scenarios with varying scene complexity, weather conditions, and traffic levels. Additionally, Rigoll et al. [87] propose an annotation-free, object-level image retrieval approach for AD datasets by combining panoptic segmentation with a VLM to support natural language queries. While this approach enhances data accessibility without manual labeling, it is limited to single-object recognition and performs poorly in complex, dynamic scenes involving multiple objects. To overcome these limitations, Tang et al. [88] incorporate LLMs and Bird’s Eye View (BEV) representations to enhance text-to-scene retrieval. This approach is evaluated on the nuScenes-Retrieval dataset, an extension of the nuScenes dataset [112] with enriched and diverse textual annotations. Experimental results show that it achieves state-of-the-art performance, clearly outperforming existing baseline methods.
Figure 4. Functional roles of LLMs within the scenario generation process. $\textcircled{1}$ LLM as human-machine interface; $\textcircled{2}$ LLM as data interpreter; $\textcircled{3}$ LLM as intermediate format generator; $\textcircled{4}$ LLM as standard format generator; $\textcircled{5}$ LLM as executable scenario generator.
# B. Scenario Generation
This section reviews previous studies that have employed LLMs for scenario generation. To enable execution within simulation platforms such as CARLA [101], the generated scenarios must be converted into standardized formats, such as ASAM OpenSCENARIO (XOSC) [113]. Given the inherent complexity of scenario generation, the process is typically decomposed into several sub-steps. When LLMs are utilized, their contributions are generally restricted to specific stages of the workflow rather than spanning the entire process. An overview of the functional roles played by LLMs throughout the scenario generation pipelines is illustrated in Figure 4, and the subsequent discussion is organized accordingly.
1) LLM as a Human-Machine Interface: In this subsection, LLMs are employed to translate natural language inputs provided by the user into structured information that facilitates downstream scenario generation, as illustrated in step $\textcircled{1}$ of Figure 4. Specifically, LLMs have been used to interpret user input into structured scenario representations [39], [76]–[80], loss function formulations [81], and executable code [82]–[84]. The primary strategies adopted in these studies are schematically summarized in Figure 5.
Structured scenario representations derived from user inputs have been employed in various studies to support scenario generation. In [76], such structured information is employed to create executable files through a diffusion-based model. Similarly, in [77] and [39], it is used to trigger Python scripts for automated file generation. Building upon [77], Zhou et al. [114] evaluate the performance of six different LLMs in interpreting motorway functional scenarios, thereby offering a benchmark for model selection. In [78], structured representations are used as inputs to a transformer model for generating vehicle trajectories. In [79], LLMs convert natural language inputs into scenario elements, which are subsequently transformed into simulation-executable files. A unified diffusion-based framework is introduced in [80], where
LLMs generate Proto2 constraints from user input to enable language-driven control of scene initialization and closed-loop simulation.
Loss functions and executable code have also been generated from natural language inputs using LLMs. In [81], a loss function is derived from user-provided descriptions and integrated into a diffusion model to facilitate executable file generation. Code generation from natural language inputs is further demonstrated in [82], [83]. In [82], the generated code, which includes interaction, vehicle, and map modules, is utilized by a transformer to synthesize vehicle trajectories. In [83], the generated code is applied to call functional libraries for producing vehicle trajectory data. In a related study, Nguyen et al. [84] employ an LLM to parse user language into driving behavior code, which is then used to guide reinforcement learning for generating diverse synthetic driving scenarios.
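The interface role in step ① can be sketched as below: an LLM maps a free-form user request to a structured scenario representation that downstream generators consume, with the output validated before use. `call_llm` is a stub standing in for a real model call, and the JSON schema is an illustrative assumption rather than the format of any cited work.

```python
import json

REQUIRED_KEYS = {"road_type", "actors", "maneuver", "weather"}

def call_llm(user_request: str) -> str:
    # Stub: a real system would prompt an LLM to emit this JSON.
    return json.dumps({
        "road_type": "motorway",
        "actors": ["ego", "truck"],
        "maneuver": "cut_in",
        "weather": "rain",
    })

def parse_scenario(user_request: str) -> dict:
    """Parse and validate the LLM output before handing it downstream."""
    scenario = json.loads(call_llm(user_request))
    missing = REQUIRED_KEYS - scenario.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    return scenario

scenario = parse_scenario("A truck cuts in front of the ego car on a rainy motorway")
print(scenario["maneuver"])
```

The validation step matters in practice: LLM outputs are not guaranteed to be well-formed, so schema checks catch failures before scenario generation begins.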
2) LLM as a Data Interpreter: This subsection reviews studies in which LLMs have been employed to interpret various data sources into natural language representations. This process is illustrated in step $\textcircled{2}$ of Figure 4. Scenario-relevant information has been extracted from diverse inputs, including accident reports [70], domain-specific knowledge [71], and naturalistic driving data [72]–[74].
Accident reports have been interpreted by LLMs to support scenario reconstruction. In the study by Guo et al. [70], a framework is proposed in which textual accident reports are parsed into a structured JSON format, which is subsequently processed by a constraint solver to generate waypoints.
Additionally, domain-specific knowledge has been extracted using LLMs to aid scenario generation. Tang et al. [71] introduce a method in which technical documents are interpreted and transformed into structured ontological elements, thereby enabling automated ontology construction.
Naturalistic driving data have also been utilized to support scenario generation through LLM-based interpretation. Mei et al. [72] employ LLMs to identify the most threatening attacker in each scene from the Waymo Open Dataset [115], enabling the creation of high-risk scenarios for improving the testing and training of ADSs. In [73], LLMs are applied to interpret BEV maps derived from the nuPlan dataset [116], with the extracted spatial information subsequently used to guide a diffusion model for trajectory generation. Tian et al. [74] integrate reinforcement learning and LLMs to optimize safety-critical driving scenarios within the HighwayEnv [117] simulation environment. Extending these efforts into real-time application, Mei et al. [75] focus on online scenario generation by using retrieval-augmented LLMs to infer dangerous driving behaviors and synthesize adversarial trajectories based on historical states.
Figure 5. Schematic description of LLM-driven scenario generation strategies when LLM works as human-machine interface.
3) LLM as an Intermediate Format Generator: The direct generation of simulator-executable scenario files is inherently complex. To manage this complexity, the process is typically divided into multiple stages, with the outputs of each referred to in this study as intermediate formats. These formats act as transitional representations that can be subsequently transformed into executable scenario files, as illustrated in step $\textcircled{3}$ of Figure 4. Unlike steps $\textcircled{1}$ and $\textcircled{2}$ , which primarily rely on either data or knowledge sources, step $\textcircled{3}$ integrates both.
A common application of LLMs in this context involves the generation of driving policies based on natural language inputs. In [60] and [61], user prompts are combined with environmental context and processed to derive driving strategies, which are subsequently used to inform either a driver model [60] or an auto-regressive trajectory generation model [61]. Similarly, Wei et al. [62] demonstrate the use of LLMs to interpret linguistic descriptions for trajectory planning, thereby guiding the behavior of autonomous agents.
In addition, LLMs have been employed to synthesize various scenario elements. In [63], an approach is presented in which maps and vehicle assets are generated from textual descriptions. Waypoints are selected using a VLM, followed by trajectory generation using a diffusion model. In [64], critical corner cases are produced by integrating user language inputs, failure records, and scenario database content. This methodology is further extended in [65], where scenario mutation, prior test feedback, and expert knowledge are incorporated to enhance scenario diversity. Similarly, in [66], descriptive inputs and datasets are processed to generate scenario configurations and associated parameter sets.
LLMs have also been employed to derive functional [67], abstract [68], and logical [69] scenario representations. In [67], an MLLM is applied to analyze accident videos, generate narrative descriptions, and identify relevant objects, thereby enabling the transformation of raw video content into functional scenario representations. In [68], traffic regulations are parsed to produce abstract scenarios with defined syntax, which are subsequently converted into executable scripts through code generation. Furthermore, in [69], accident reports are interpreted to extract logical scenarios, which are then instantiated as concrete scenarios using a search-based algorithm.
4) LLM as a Standardized Format Generator: This subsection reviews studies in which LLMs are employed to generate standardized scenario formats from intermediate representations, rather than directly from scenario sources, as illustrated in step $\textcircled{4}$ of Figure 4. In the study by Zorin et al. [55], accident data related to ADSs are collected from online sources and converted into key textual descriptions using Python scripts. LLMs are then utilized in conjunction with predefined templates to produce scenario files compliant with the XOSC standard. Similarly, Tian et al. [59] employ Large Multimodal Models (LMMs) to generate safety-critical scenarios from non-accident traffic videos. Optical flow data and Chain-of-Thought (CoT) reasoning are used to construct abstract representations, which are subsequently transformed into executable programs. The generated scenarios are validated through a dual-layer optimization framework, and the resulting trajectories are encoded using AVUnit [118].
5) LLM as an Executable Scenario Generator: This subsection reviews studies in which LLMs are utilized to generate executable scenarios directly from scenario sources, as illustrated in step $\textcircled{5}$ of Figure 4. The literature indicates that LLMs have primarily been applied to produce scenarios in standardized formats, including XOSC [113] (cf. [53], [55], [119]), SCENIC [120] (cf. [43], [46], [49], [54], [56]), SUMO XML (cf. [42], [44], [48]), and AVUnit [118] (cf. [57]), as well as in user-defined formats (cf. [47], [58]).
A range of techniques has been adopted in the reviewed studies to generate simulator-executable files. These approaches can be categorized into three types: template filling, end-to-end generation, and hybrid generation, depending on the degree of reliance on external tools. This classification is illustrated in Figure 6.
a) Template Filling: Template filling refers to the process in which LLMs populate predefined scenario templates with specific parameters to generate executable files. Within this category, the study presented in [44] is representative. In this work, Güzay et al. propose a method for generating SUMO scenarios based on predefined templates. Structured prompts are designed to guide GPT-4 in producing XML files according to user-defined variables, such as the number of intersections and the length of road segments.
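The template-filling idea can be illustrated with a short sketch. Here the LLM call is replaced by direct parameter substitution, and the node template and prompt wording are illustrative assumptions rather than the actual templates used in [44].

```python
from string import Template

# Hypothetical SUMO-style node template; in [44] GPT-4 fills such templates,
# while here the substitution step is shown directly for illustration.
NODE_TEMPLATE = Template('<node id="$node_id" x="$x" y="$y" type="priority"/>')

def build_prompt(num_intersections: int, segment_length: int) -> str:
    """Compose a structured prompt asking the LLM to fill the template."""
    return (
        "Fill the following SUMO node template for a grid network with "
        f"{num_intersections} intersections and {segment_length} m segments:\n"
        + NODE_TEMPLATE.template
    )

def fill_template(node_id: str, x: float, y: float) -> str:
    """Deterministic stand-in for the LLM's template-filling step."""
    return NODE_TEMPLATE.substitute(node_id=node_id, x=x, y=y)

print(fill_template("n0", 0.0, 100.0))
# <node id="n0" x="0.0" y="100.0" type="priority"/>
```

In the actual pipeline, `build_prompt` would be sent to the model and the returned XML validated before use; the sketch only shows the two halves of that exchange.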
Figure 6. Techniques for generating simulator-executable scenarios using LLMs. Template filling involves steps ①–④, end-to-end generation covers steps ①, ②, and ④, while hybrid generation combines elements of both approaches.
b) End-to-End Generation: End-to-end generation refers to the direct mapping of natural language or multimodal inputs into executable simulation files, without relying on predefined templates. This paradigm is first explored by Wang et al. [53], who prompt general-purpose LLMs to generate XOSC files. To enhance controllability, Miceli-Barone et al. [43] propose a dialogue-based system in which SCENIC programs are incrementally constructed through iterative user interactions. Extending this approach to the visual domain, Miao et al. [46] introduce ScriptGPT, which translates dashcam videos into SCENIC scripts using prompt-engineered video-language models. To address the issue of structural correctness, Elmaaroufi et al. [54] combine compositional prompting with compiler feedback to generate probabilistic scenarios from crash reports.
c) Hybrid Generation: The hybrid generation approach combines template filling and end-to-end generation, enabling greater flexibility while maintaining scenario standardization. In [55], Zorin et al. integrate real-world traffic incident data scraped from websites with LLM-based scenario completion methods to bridge open-source intelligence and executable simulation file generation. Yang et al. [49] introduce SimCopilot, which translates natural language descriptions of object interactions into simulation-ready code using a language-to-interaction dataset, thereby balancing high-level abstraction with detailed execution. Lu et al. propose OmniTester [42] and AutoScenario [48], both of which incorporate prompt engineering, Retrieval-Augmented Generation (RAG), and external simulation tools (e.g., SUMO, CARLA) to enhance the realism, diversity, and controllability of generated scenarios. Similarly, Zhang et al. [56] propose ChatScene, which decomposes textual inputs into subcomponents and retrieves relevant domain-specific code snippets to generate safety-critical scenarios. Aiersilan et al. [47] present AutoSceneGen, which leverages in-context learning to transform high-level textual descriptions into simulator-compatible configuration files, enabling batch generation of safety-critical scenarios in CARLA.
In addition to text-based pipelines, hybrid approaches have also been extended to multimodal inputs. Tian et al. [57] and Ruan et al. [58] apply Large Multimodal Models (LMMs) and road-agent retrieval mechanisms to construct semantically coherent and highly customized traffic scenes.
# C. Scenario Selection
Scenario selection is performed to efficiently filter concrete scenarios from a database, with the aim of identifying cases that may reveal potential issues in ADSs [19]. As a recently explored approach, realism assessment emphasizes evaluating the degree to which generated scenarios align with real-world conditions, thereby allowing only realistic cases to be selected in order to improve test validity. LLMs have been applied to support this evaluation process.
Two representative studies have investigated the use of realism assessment for scenario selection. In the study by Wu et al. [51], LLMs are employed to evaluate the consistency of driving trajectories with real-world conditions. This is accomplished by applying standardized prompts to assess perturbed variants of the DeepScenario dataset [121], resulting in a robustness score that quantifies scenario realism. In a related effort, Fu et al. [52] combine a diffusion model with a VLM to generate, narrate, and interpret realistic driving videos. The realism of the generated outputs is verified to enhance scene understanding. Experiments conducted on the Waymo Open Dataset [122] demonstrate the effectiveness of this approach in advancing VLM applications for AD.
# D. Test Execution
In scenario-based testing, concrete scenarios are executed within test environments such as real-world roads, closed tracks, or simulation platforms, each supporting varying levels of X-in-the-loop (XiL) integration [19], [123]. The studies reviewed in this work primarily focus on the use of LLMs in fully simulated environments. In these contexts, LLMs are employed to dynamically adjust testing parameters, thereby enabling self-adaptive testing procedures.
1) Anomaly Detection: ADSs are susceptible to failures resulting from system-level deficiencies, which are commonly referred to as anomalies. Detecting such anomalies requires advanced reasoning capabilities. In this context, LLMs have been utilized to identify perception system anomalies in real time during the test execution phase. Elhafsi et al. [50] apply OWL-ViT [124], a vision transformer model designed for object-level visual understanding, to extract visual features, and GPT-3.5 for reasoning to detect anomalies in ADSs within the CARLA simulation environment. Their approach outperforms traditional Out-Of-Distribution (OOD) detection methods while offering interpretable analysis, although limitations remain due to the reliability of object detection and LLM inference.
2) Simulation Setup Automation: The configuration of simulation environments remains a complex and expertise-intensive task. Recent advancements in LLMs offer a promising solution for automating simulation setup by translating natural language inputs into executable configurations, a direction explored by several recent studies. In [39], [47] and [48], LLMs are employed to modify traffic environments, generate traffic scenarios, and configure simulation files for the platforms SUMO [39] and CARLA [47], [48]. These approaches support real-time adjustments, enhance scenario diversity, and enable end-to-end automation without manual intervention. Building upon this research direction, Yang et al. [49] construct virtual traffic scenes in the LGSVL simulator based on natural language descriptions and introduce a language-to-interaction dataset to support benchmarking and further development in simulation setup automation.
3) Scenario Optimization: During the execution of concrete scenarios in physical simulators, error messages may occur due to syntax errors in scenario definitions. Traditionally, the correction of such errors has been performed manually and offline. To improve efficiency, Lu et al. [42] utilize an LLM integrated with a RAG module to iteratively validate and refine scenarios based on simulation feedback from CARLA and SUMO. Similarly, in [43], an LLM is used to automatically correct and update SCENIC code in response to both simulation results and user feedback, until successful execution is achieved in CARLA. In [44], GPT-4 is applied to parse error messages generated by SUMO and to modify the simulation files accordingly. Beyond syntax correction, LLMs have also been utilized for generating and refining closed-loop control code. In [45], an LLM is leveraged to generate and iteratively refine control code from natural language descriptions based on simulation feedback in Esmini [125], thereby forming a closed-loop workflow integrating code generation, simulation, and correction.
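The validate-and-refine loops described above share a common shape: run the scenario, feed any error message back to the LLM, and retry until execution succeeds. A generic sketch, with stub functions standing in for the simulator and the LLM call (both the stubs and the sample error are illustrative assumptions):

```python
def run_simulator(scenario: str):
    """Stand-in for CARLA/SUMO execution: returns an error message or None."""
    if "speed=" not in scenario:
        return "missing attribute: speed"
    return None

def llm_repair(scenario: str, error: str) -> str:
    """Stand-in for an LLM prompted with the scenario and the error message."""
    if "speed" in error:
        return scenario + ' speed="13.9"'
    return scenario

def refine(scenario: str, max_iters: int = 5) -> str:
    """Iteratively validate and repair until the simulator accepts the scenario."""
    for _ in range(max_iters):
        error = run_simulator(scenario)
        if error is None:
            return scenario
        scenario = llm_repair(scenario, error)
    raise RuntimeError("scenario could not be repaired")

print(refine('<vehicle id="ego"'))
# <vehicle id="ego" speed="13.9"
```

The iteration cap mirrors the practical need to bound LLM calls; real pipelines would also log each repair step for traceability.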
Furthermore, if simulated scenarios are not consistent with real-world driving data, the resulting simulations may lack reliability. To refine scenario fidelity during execution, Miao et al. [46] utilize an LLM to perform similarity analysis between simulated scenarios and dashcam crash videos. In the case of a mismatch, the LLM iteratively refines the SCENIC script by incorporating feedback from a similarity metric. Scenario elements are updated until the simulated scenario closely approximates the original crash event.
# E. ADS Assessment
In this phase, the performance of ADSs is quantified using key metrics, with Time-To-Collision (TTC) [126] being one widely adopted example. These statistical indicators are typically included in simulation test reports. To automate this process, LLMs have been employed to generate simulation reports. In [39], an LLM is used to interpret SUMO output files, extract relevant metrics such as traffic density, travel time, and emissions, and synthesize analytical summaries to support efficient evaluation and comparison of simulation outcomes.
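As a point of reference for the metric-extraction step, trip-level statistics can also be computed directly from simulator output without an LLM. The sketch below parses a SUMO-style trip log with the standard library; the XML fragment and attribute names are simplified illustrations, not the exact SUMO schema.

```python
import xml.etree.ElementTree as ET

# Illustrative trip log in the spirit of SUMO's per-vehicle trip output.
TRIPINFO = """
<tripinfos>
  <tripinfo id="veh0" duration="42.0"/>
  <tripinfo id="veh1" duration="58.0"/>
</tripinfos>
"""

def mean_travel_time(xml_text: str) -> float:
    """Average the per-trip duration attribute across all trip records."""
    root = ET.fromstring(xml_text)
    durations = [float(t.get("duration")) for t in root.iter("tripinfo")]
    return sum(durations) / len(durations)

print(mean_travel_time(TRIPINFO))  # 50.0
```

An LLM-based report generator would sit on top of such extracted numbers, turning them into the comparative summaries described in [39].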
Beyond simulation, LLMs have also been applied to the analysis of real-world ADS accidents. Xu et al. [40] explore the potential of LLMs to generate legal explanations for ADS-related accident cases. The limitations of the models are evaluated, and improvements are proposed through domain-specific adaptation and enhanced contextual reasoning.
In addition to safety assessment, LLMs have been explored as tools to evaluate the intelligence level of ADSs. You et al. [41] employ LLMs to perform hierarchical assessments by using CoT prompting combined with RAG. The models simulate human-like reasoning across multiple decision-making levels, and their outputs are validated through CARLA simulations and human evaluators.
# V. MODEL SELECTION AND ADAPTION
LLMs have begun to play a crucial role in scenario-based testing of ADSs, with numerous LLMs proposed by different organizations. Table I and Table II provide a comprehensive summary of the LLMs used in the reviewed studies, categorized according to various attributes.
# A. LLM Origin
Figure 7. Distribution of LLM applications in scenario-based testing of ADSs by organization and region.
Figure 7 illustrates the distribution of LLMs applied in scenario-based testing of ADSs, categorized by organization and region. The majority of applied models originate from organizations based in the USA, with OpenAI leading by a substantial margin. In fact, the number of studies employing OpenAI models exceeds the combined total from all other organizations. China ranks second, with five models developed by three organizations, including recent contributions from DeepSeek. Within Europe, France is the only contributing country, represented by models primarily released by Mistral AI. These findings are further substantiated by the “Country” and “Organization” columns in Table I and Table II.
Among the models adopted in the reviewed studies, the GPT series developed by OpenAI is the most frequently used, despite being closed-source. This is followed by the LLaMA series and the LLaVA series, both of which are open-source.
# B. LLM Usage Strategy
Although LLM usage strategies are generally applicable across domains, this section focuses on those specifically adopted in scenario-based testing of ADSs, aligning them with distinct testing phases. Rather than proposing a universal taxonomy, the aim is to categorize existing strategies observed in the reviewed literature. Definitions and classifications are presented in Table III.
Figure 8 presents a heatmap illustrating the distribution of LLM usage strategies (❶–❽) across scenario-based testing phases (①–⑥). The analysis indicates that early testing phases, particularly scenario generation ②, exhibit the highest concentration of LLM applications. Among the various strategies, prompt engineering is the most frequently employed, with format constraints (❹) and few-shot learning (❷) being especially prevalent; format constraint stands out as the most commonly adopted strategy overall. In contrast, parameter tuning (❺ and ❻) and knowledge-augmented generation techniques (❼ and ❽) appear less frequently and are mainly confined to scenario generation ②. Minimal engagement is observed in ADS assessment ⑥, suggesting that the integration of LLMs into the later-stage validation process remains limited and presents opportunities for future exploration.
Table I THE APPLIED LLM MODELS. TESTING PHASE: ① SCENARIO SOURCE; ② SCENARIO GENERATION; ③ SCENARIO DATABASE; ④ SCENARIO SELECTION; ⑤ TEST EXECUTION; ⑥ ADS ASSESSMENT. USAGE STRATEGY: ❶ ZERO-SHOT LEARNING; ❷ FEW-SHOT LEARNING; ❸ CHAIN-OF-THOUGHT; ❹ FORMAT CONSTRAINT; ❺ FULL FINE-TUNING; ❻ LOW-RANK ADAPTATION; ❼ RETRIEVAL-AUGMENTED GENERATION; ❽ KNOWLEDGE GRAPH INTEGRATION.
Table II CONTINUED FROM PREVIOUS PAGE.
# C. Tasks Performed by LLM
In Section IV-B, reviewed studies are classified based on the roles LLMs play in various tasks. To offer a more detailed understanding, specific tasks undertaken by LLMs are outlined in Table I and Table II.
Figure 8. Heatmap of LLM usage strategies across scenario-based testing phases. Testing phase indices ①–⑥ and usage strategy indices ❶–❽ follow the definitions provided in Table I and Table II.
Table III LLM USAGE STRATEGY.
# D. Integrated Simulator
Figure 9 illustrates the frequency of simulator usage in the reviewed studies. CARLA [101] is the most widely adopted platform, appearing in 15 studies. LGSVL [150] and Esmini [125] are each used in 7 studies, followed by SUMO [151], which is used in 5. Other simulators, including MetaDrive [152], CarMaker [153], and Isaac Gym [154], are each employed in only a single study. These results suggest a strong preference for open-source environments such as CARLA and LGSVL in the current research landscape, while the adoption of commercial simulators remains limited.
# VI. INDUSTRIAL PERSPECTIVE
Table IV presents an overview of how LLMs are adopted by industry in the context of scenario-based testing for ADSs. The organizations are categorized by region, with corresponding tasks and testing phases indicated.
In China, 51WORLD and Huawei adopt LLMs during the scenario generation phase for XOSC creation. In addition, Huawei applies LLMs during the scenario source phase for data labeling and retrieval. In Europe, Denotic, Luxoft, and IAV utilize LLMs in the scenario generation phase for XOSC generation, while dSPACE and Fraunhofer incorporate them into test automation workflows. Wayve employs LLMs in the scenario source phase for large-scale data generation. In the USA, Applied Intuition uses LLMs in the scenario generation phase for scenario construction.
Figure 9. Distribution of simulators employed in the reviewed references.
Although numerous academic studies have explored the application of LLMs in scenario-based testing of ADSs, industrial adoption remains relatively limited. As shown by the relatively small number of documented cases in Table IV, current applications in industry are still at an early exploratory stage. Furthermore, existing use cases are primarily confined to a narrow set of tasks, such as XOSC generation and test automation, highlighting a noticeable gap between academic advancements and their translation into practical industrial deployment.
Table IV LLM APPLICATION IN INDUSTRY. TESTING PHASE INDICES FOLLOW THE DEFINITIONS PROVIDED EARLIER IN TABLE I.
# VII. OPEN CHALLENGES
The preceding sections have reviewed LLM applications across various phases of scenario-based ADSs testing. Despite notable progress, practical implementation remains challenging due to various constraints. This section outlines five key challenges, each discussed in the following sections.
# A. LLM Hallucination and Output Variability
A key challenge in applying LLMs to scenario-based testing is their inherent response uncertainty, primarily in the form of hallucination and output variability. Hallucination refers to the generation of outputs that are either unfaithful to the input or factually incorrect [164], [165]. This issue arises across all testing phases, including physically implausible trajectories in data enrichment [97], annotation of nonexistent objects in data labeling [89], invalid or non-executable code in scenario generation [56], and factually incorrect judgments in safety- or legality-critical assessments due to insufficient domain knowledge [40].
Output variability refers to the nondeterministic behavior of LLMs, where identical prompts may yield divergent results. This unpredictability poses particular challenges in scenario generation, where minor variations can lead to cumulative structural errors [71]. For example, GPT-4 has been observed to generate inconsistent simulation configurations from identical prompts [44].
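One simple way to surface such variability is to sample the same prompt repeatedly and measure the fraction of distinct outputs. The sketch below does this with a stubbed, seeded sampler standing in for an actual LLM call; the function names and the variability metric are illustrative assumptions, not an established measure.

```python
import random

def sample_llm(prompt: str, rng: random.Random) -> str:
    """Stand-in for a nondeterministic LLM: output wording varies per call."""
    phrasing = rng.choice(["maxSpeed", "max_speed"])
    return f'{phrasing}="27.8"'

def variability(prompt: str, n: int = 20, seed: int = 0) -> float:
    """Fraction of distinct outputs across n samples of the same prompt."""
    rng = random.Random(seed)
    outputs = {sample_llm(prompt, rng) for _ in range(n)}
    return len(outputs) / n

print(variability("configure vehicle speed"))
```

A value near `1/n` indicates near-deterministic behavior; larger values flag prompts whose outputs should be validated or regenerated before use.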
Several strategies have been proposed to mitigate these challenges. Huang et al. [165] categorize countermeasures into three levels (data, training, and inference) and identify RAG as a promising approach. Elmaaroufi et al. [54] further propose using a secondary LLM as a verification agent to assess the consistency of the output with the given task description. Nonetheless, robust consistency guarantees remain an open problem.
# B. Simulation Platform Integration
Another challenge in applying LLMs to scenario-based ADSs testing lies in their limited integration with simulation platforms, stemming from fragmented scenario formats, inconsistent interfaces, and constrained automation.
Fragmented scenario formats, each with unique syntax, are adopted in the reviewed studies. Such heterogeneity often requires reworking prompts, templates, or conversion scripts. For example, studies [42], [44], [53] demonstrate how the same LLM (GPT-4) must be re-prompted to accommodate format differences.
Inconsistent interfaces, such as differing Application Programming Interfaces (APIs) and import mechanisms across simulation platforms like CARLA, SUMO, Esmini, and LGSVL, further hinder integration. LLM-generated outputs often require translation or manual debugging, which reduces portability and limits automation.
Constrained automation also persists: although LLMs have shown promise in automating simulation setup, core simulation elements such as sensor configurations, vehicle dynamics, and environmental settings are still manually defined. Furthermore, real-time interaction between simulators and LLMs remains unrealized.
Developing a standardized intermediate representation could decouple LLM outputs from simulator-specific formats. Additionally, establishing feedback loops between simulation platforms and LLMs may further enable runtime iterative refinement.
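A standardized intermediate representation of the kind suggested above could be a small, simulator-agnostic scenario record with one serializer per target platform. The sketch below is a hypothetical illustration; the output fragments only gesture at SUMO- and XOSC-like syntax and are not valid files in either format.

```python
from dataclasses import dataclass

@dataclass
class ScenarioIR:
    """Minimal simulator-agnostic scenario record (illustrative fields)."""
    ego_speed: float   # m/s
    actor_count: int
    weather: str

def to_sumo_like(ir: ScenarioIR) -> str:
    # Simplified stand-in for a SUMO route-file fragment.
    return f'<vType id="ego" maxSpeed="{ir.ego_speed}"/> <!-- {ir.actor_count} actors -->'

def to_xosc_like(ir: ScenarioIR) -> str:
    # Simplified stand-in for an OpenSCENARIO fragment.
    return f'<AbsoluteTargetSpeed value="{ir.ego_speed}"/> <!-- weather: {ir.weather} -->'

ir = ScenarioIR(ego_speed=13.9, actor_count=4, weather="rain")
print(to_sumo_like(ir))
print(to_xosc_like(ir))
```

With such a layer, the LLM would only ever target the intermediate record, and simulator-specific quirks would be confined to the serializers.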
# C. Lack of Domain-Specific Models
General-purpose LLMs face notable limitations in scenario-based ADSs testing due to insufficient domain adaptation. This issue is exacerbated by the predominant reliance on prompt engineering over domain-specific fine-tuning, as well as the absence of authoritative ADSs datasets. For example, LLaMA generates invalid OpenSCENARIO syntax [53], while GPT-4 fails to interpret updated SUMO rules [44], revealing gaps in both semantic precision and temporal awareness.
These limitations propagate across testing pipelines: biased scenario selection [51], syntax validation failures (e.g., ISO 21448 misinterpretation [36], [45]), and unclear performance evaluation in safety-critical contexts [41]. A hybrid approach combining domain-specific fine-tuning and RAG could mitigate these issues by linking LLMs to structured knowledge.
# D. Insufficient Attention to Scenario Database
As discussed in previous sections, no studies have employed LLMs to construct scenario databases, leaving this phase significantly behind the other testing phases. This indicates a lack of systematic mechanisms for storing, managing, and reusing large volumes of generated scenarios. The construction of scenario databases should prioritize the development of standardized interfaces that integrate diverse data sources and convert them into machine-readable formats [19].
LLMs could assist in verifying the syntax and physical consistency of newly generated scenarios, assessing their similarity to existing entries, and performing de-duplication and quality assurance. Furthermore, LLMs may support scenario analysis by comparing scenarios against regulations to generate coverage reports.
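The de-duplication step mentioned above does not strictly require an LLM: a canonical-serialization hash already catches exact and key-order-insensitive duplicates, leaving only semantic near-duplicates for model-based similarity checks. A minimal sketch, assuming scenarios are stored as parameter dictionaries:

```python
import hashlib
import json

def canonical_key(scenario: dict) -> str:
    """Hash a canonical (sorted-key) JSON serialization of the scenario."""
    blob = json.dumps(scenario, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def deduplicate(scenarios: list) -> list:
    """Keep the first occurrence of each structurally identical scenario."""
    seen, unique = set(), []
    for s in scenarios:
        key = canonical_key(s)
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique

db = [
    {"speed": 13.9, "actors": 4},
    {"actors": 4, "speed": 13.9},  # same scenario, different key order
    {"speed": 8.3, "actors": 2},
]
print(len(deduplicate(db)))  # 2
```

In a database pipeline, the hash would double as a stable primary key for provenance tracking.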
# E. Industrial Application Gap
While LLMs show promise for scenario-based testing, their industrial adoption remains limited due to several constraints:

1) Data privacy and security: Utilizing LLMs often requires transmitting sensitive data (e.g., vehicle trajectories) to third-party cloud services (e.g., OpenAI Enterprise), which risks violating local data protection regulations. Although enterprise APIs provide contractual safeguards, they typically prohibit model fine-tuning, thus restricting task-specific performance. Alternatively, deploying open-source LLMs locally can preserve data privacy, but requires substantial infrastructure investment.

2) Explainability: According to ISO 21448 [36], scenario generation must be traceable. However, LLMs often lack transparency in their generation process and cannot justify why specific scenarios are produced, thereby creating gaps in safety-critical applications.

Abstract: The safety and reliability of Automated Driving Systems (ADSs) must be
validated prior to large-scale deployment. Among existing validation
approaches, scenario-based testing has been regarded as a promising method to
improve testing efficiency and reduce associated costs. Recently, the emergence
of Large Language Models (LLMs) has introduced new opportunities to reinforce
this approach. While an increasing number of studies have explored the use of
LLMs in the field of automated driving, a dedicated review focusing on their
application within scenario-based testing remains absent. This survey addresses
this gap by systematically categorizing the roles played by LLMs across various
phases of scenario-based testing, drawing from both academic research and
industrial practice. In addition, key characteristics of LLMs and corresponding
usage strategies are comprehensively summarized. The paper concludes by
outlining five open challenges and potential research directions. To support
ongoing research efforts, a continuously updated repository of recent
advancements and relevant open-source tools is made available at:
https://github.com/ftgTUGraz/LLM4ADSTest.
# I. INTRODUCTION
We propose JITScope, a system that visualizes the evolution of a Just-in-Time (JIT) compiler’s Intermediate Representation (IR) using a backend-driven, phase-aware, graph-based visualization. This paper outlines the framework’s architecture, design decisions, challenges, and future directions.
JIT compilers are a class of compilers that dynamically translate and optimize bytecode into machine instructions at runtime to improve the performance of code execution. JIT compilers are widely used in many modern systems, ranging from web browsers [1], [2], [3], [4] to operating system kernels [5], virtual machines such as the Java Virtual Machine (JVM) [6], and language runtimes like PyPy [7].
JIT compilers construct and transform a graph known as IR to optimize the input bytecode. Therefore, understanding the transformations applied to IR during JIT compilation is important for compiler engineers, researchers, and developers debugging or enhancing the optimization. However, the data structures and logic governing these transformations are often deeply nested, non-transparent, and challenging to trace across multiple compiler phases.
To help developers quickly analyze the behavior of JIT compilers during optimization, several research efforts have been made. In particular, Lim and Debray [8], [9], [10] proposed an approach that focuses specifically on analyzing IR-level optimizations. Their work involves constructing dedicated models to localize bugs that arise during these optimization phases. However, their approach relies on the generated models solely to identify the relevant function(s) in the JIT compiler’s source code, i.e., those most likely to contain bugs, and produces text-based ranking reports. These reports provide limited flexibility for developers to investigate further or interactively explore the underlying optimization behavior. Once the final reports are generated, the models are no longer required or used, making the analysis process largely static and non-exploratory.
Existing approaches for visualizing a system’s execution behavior often focus on complete execution traces [11], call graphs [12], control-flow graphs (CFGs) [13], or data flow [14]. JIT compilers, unlike static (ahead-of-time) compilers, are typically integrated into larger runtime systems. For example, JavaScript engines include a JIT compiler as part of a broader architecture that also features a bytecode interpreter and additional components for monitoring and optimizing code execution performance. Thus, while existing approaches that visualize a system’s overall execution can offer high-level insights, they are not well-suited for diagnosing issues that arise specifically within JIT compilers. This is because JIT compilation involves complex, phase-specific transformations of IRs that are not captured in standard visualizations. As a result, developers and researchers lack the fine-grained, phase-aware views needed to trace subtle bugs, understand missed optimizations, or reason about dynamic code generation—making traditional visualizations insufficient for JIT-focused analysis.
A closely related effort in visualizing JIT compiler IRs is the work by Lim and Kobourov [15]. Their visualization is based on IR models generated using Lim’s earlier research on modeling and localizing JIT compiler bugs [8], [9], [10]. The goal of their approach is to present the entire IR structure in a single, coherent view using the metro map metaphor [16], rather than visualizing the IR’s transformation across individual compilation phases. While the layout is visually clean and offers a high-level summary of the IR, the visualization lacks support for showing how the IR evolves over time, limiting its usefulness for phase-by-phase analysis and practical debugging tasks.
Our research aims to provide a more interactive and phase-aware visualization of JIT compilation behavior. With JITScope, we focus on capturing the evolution of the IR across multiple compiler phases rather than presenting a static snapshot. Our approach includes the development of a backend SQLite database that parses the deeply nested JSON format IR graph data, a controller layer that converts SQL queries into JSON outputs, and a D3.js-based front-end graph that will ultimately enable interactive phase selection, node tracking, and instruction-level exploration. While we have not finalized the full front-end deployment, our current prototype reflects the functional goals and design thinking behind JITScope. This paper presents the architecture, design decisions, and key challenges in building JITScope, and outlines directions for extending its capabilities.
The remainder of this paper is organized as follows: (1) We begin by introducing key background concepts to help readers better understand our approach and the broader context of JIT compilers; (2) We then present the design and architecture of our system; (3) Next, we outline our evaluation plan, which we will carry out once the system is fully implemented; (4) We discuss both the challenges encountered during development and potential issues that may arise in the future; (5) We then review closely related work; and (6) We conclude the paper with a summary and future directions.
# II. BACKGROUND AND MOTIVATION
# A. Just-in-Time (JIT) Compilers
Fig. 1. Overview of JIT Compiler Architecture
A Just-in-Time (JIT) compiler is a runtime component of a larger system—such as a JavaScript engine or the Java Virtual Machine (JVM)—that dynamically optimizes and translates bytecode into native machine code [17].
Figure 1 shows the overview of the JIT compiler architecture, adapted and modified from Ishizaki et al.’s paper [17] on the Java JIT compiler. The JIT compiler receives bytecode from the interpreter or runtime environment and begins by constructing an intermediate representation (IR) in the form of a graph.
Unlike static ahead-of-time (AOT) compilers, where optimization is often optional, JIT compilers are designed to optimize performance-critical code dynamically. They apply a series of optimization passes—such as method inlining, dead code elimination, and constant propagation—which transform the IR across multiple phases. Once the IR has been fully optimized, the back-end of the JIT compiler generates native machine code by selecting architecture-specific instructions and performing register allocation.
Understanding how the IR evolves through the optimization phases is challenging, as internal representations are typically neither visualized nor exposed in a queryable form. In particular, JIT compilers, like AOT compilers, may include a large number of optimization phases, potentially hundreds [18], with each phase responsible for transforming the IR graph in specific ways.
# B. Intermediate Representation (IR)
Intermediate Representation (IR) is a graph, where the nodes represent operations or data and the edges represent the relationship, e.g., dependencies, between the nodes. Different JIT compilers adopt different graph structures for their IRs. For example, Google’s V8 [1] and Java’s HotSpot JVM [6] JIT compilers use the sea-of-nodes [19] representation, Mozilla’s IonMonkey [4] employs a traditional control-flow graph (CFG), and Apple’s JavaScriptCore [3] JIT utilizes a data-flow graph (DFG) structure.
In our work, we adapted Lim and Debray’s IR model [8], [9], [10], which is the data that Lim and Kobourov [15] used to visualize the JIT compiler IRs in the metro metaphor [20]. The IR data is stored in a JSON-formatted file that contains a list of IR nodes and a mapping from JIT function IDs to the corresponding compiler source function symbols that executed transformations on the IR. Each node in the file represents a JIT compiler IR node and includes the following information:
• The memory address of the node.
• The opcode associated with the node.
• A list of opcode updates applied to the node.
• The set of edges (i.e., connections to other nodes).
• The list of values held by the node.
• A boolean flag indicating whether the node is alive at the end of optimization (i.e., true if alive, false if removed).
• A log of all instructions that accessed the node during optimization. Each instruction is tagged with a unique instruction ID and a function ID, which can be mapped back to the corresponding source-level function symbol.
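A record with the fields listed above could be consumed as follows. The key names in this sketch are assumptions about the JSON layout, since the exact schema of Lim and Debray's format is not reproduced here.

```python
import json

# Hypothetical node record following the fields listed above; the actual
# key names in Lim and Debray's format may differ.
NODE_JSON = """
{
  "address": "0x7f3a10",
  "opcode": "Add",
  "opcode_updates": ["Add", "Int32Add"],
  "edges": ["0x7f3a20", "0x7f3a30"],
  "values": [1, 2],
  "alive": false,
  "accesses": [{"instruction_id": 17, "function_id": 3}]
}
"""

def summarize(node: dict) -> str:
    """One-line summary of an IR node: address, opcode, degree, liveness."""
    status = "alive" if node["alive"] else "removed"
    return f'{node["address"]} {node["opcode"]}: {len(node["edges"])} edges, {status}'

node = json.loads(NODE_JSON)
print(summarize(node))  # 0x7f3a10 Add: 2 edges, removed
```

Such summaries are the kind of derived attribute the backend would precompute for the front-end views.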
Fig. 2. Overview of JITScope Architecture
# III. DESIGN PLAN
# A. System Pipeline
Our system adopts the traditional Model-View-Controller (MVC) architecture. Figure 2 illustrates an overview of the system’s structure. The input to the system is a JSON-formatted file containing IR data, uploaded by the user. Upon upload, a Python script processes the file by extracting relevant information and loading it into a normalized SQLite database. The controller handles database queries to generate CSV files representing the IR data. It is also responsible for validating user input and passing it to the model. The front-end, built with D3.js, reads these CSV files to render interactive node-link diagrams, enabling users to visually and contextually explore the transformations of the intermediate representation.
While it may seem sufficient to use a standalone Python script to convert the input JSON file to CSV, our use of the MVC architecture offers several important advantages. First, MVC promotes a clean separation of concerns: the model manages the data and logic, the view handles the visual representation, and the controller manages communication between them. This separation makes the system more maintainable, testable, and scalable. Second, by centralizing data handling in the model and controller, our system becomes more adaptable to changes in the view layer. For example, if we later decide to switch from a D3-based front-end to another visualization framework, we can do so without modifying the data extraction logic. Similarly, enhancements to the model—such as supporting additional IR formats or new filtering options—can be made independently of the front-end. In contrast, a monolithic Python script hardwires data transformation to a specific output format, making future changes or extensions more difficult to manage.
# B. JSON-to-Database
The conversion from JSON to SQLite is performed by a dedicated Python script that systematically parses the hierarchical structure of the input data. This script extracts nodelevel metadata, constructs edge relationships, and populates auxiliary information such as function identifiers, etc. A notable challenge addressed in this process is the assignment of phase names to instruction-node access records. This is achieved by traversing function ID ranges and mapping them to their corresponding optimization phases. All transformations—whether opcode changes, value updates, or edge modifications—are normalized into dedicated relational tables, with records consistently linked via foreign key constraints. Once the data is fully cleaned, structured, and validated, it is committed to the SQLite database, making it ready for downstream querying and visualization.
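A condensed sketch of this normalization step is shown below, assuming the nodes have already been parsed into Python dictionaries. The two-table schema here is a deliberate simplification of the actual ten-table design, kept only to illustrate the foreign-key-linked layout.

```python
import sqlite3

def build_db(nodes, db_path=":memory:"):
    """Load parsed IR nodes into a normalized SQLite database.
    `nodes` maps a node address to a dict with at least the keys
    "opcode", "alive", and "edges" (an illustrative subset)."""
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    cur.executescript("""
        CREATE TABLE Nodes(id TEXT PRIMARY KEY, opcode TEXT, alive INTEGER);
        CREATE TABLE Edges(src TEXT REFERENCES Nodes(id),
                           dst TEXT REFERENCES Nodes(id));
    """)
    for addr, n in nodes.items():
        cur.execute("INSERT INTO Nodes VALUES (?,?,?)",
                    (addr, n["opcode"], int(n["alive"])))
        for dst in n["edges"]:
            cur.execute("INSERT INTO Edges VALUES (?,?)", (addr, dst))
    con.commit()
    return con
```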
# C. Database Schema
Fig. 3. Simplified Entity Relationship (ER) Diagram
Figure 3 presents a simplified version of the Entity-Relationship (ER) diagram for our database schema. The database consists of ten interrelated tables, each representing a specific component or transformation involved in the IR. However, for the purposes of visualization, we focus on the most essential tables. The Nodes table stores metadata for each IR node, such as its unique ID, opcode, and alive status. The Edges table represents source-to-destination relationships between nodes, enabling reconstruction of the IR graph structure. The Functions table maps function IDs to human-readable function symbols. The schema’s core is the Instructions table, which logs all execution instructions that accessed or modified IR nodes. This table is key to identifying which nodes were created or transformed during specific optimization phases.
# D. Visualization Plan
We are actively designing and implementing the front-end of JITScope. The current concept interface (Figure 4) illustrates our intended features: a central node-link graph showing the IR structure, a control panel at the bottom of the page, and an IR information display panel on the right.
# E. Center: IR Graph Visualization
Figure 4 shows the naïve visualization of the IR, i.e., nodes are laid out with no rule other than avoiding overlap. The graph is to be interactive: (1) nodes and edges can be selected to view their details, (2) nodes are draggable, and (3) nodes and edges can be hidden or revealed by the user. Our goal is therefore to identify the layout that best shows the relationships among the nodes and supports these interactive features, while remaining intuitive for developers to understand.
Fig. 4. Visualization Concept
# F. Bottom: Control Panel
At the bottom, the system provides a control option for the developers to interact with the IR.
1) Dropdown Selector: Using this selector, users can switch between optimization phases to see, in the central IR graph view, the changes made to the IR from one phase to the next.
2) Animation keys: These keys provide playback controls for animating transformations over time. Users should be able to play, pause, and fast-forward/backward the animation.
3) Upload: A user can upload an IR JSON file. The users can only upload one file at a time.
# G. Right: Information Panel
On the right side of the interface, three key components support interaction and analysis:
1) Search Bar: This input field allows users to locate specific nodes within the currently selected optimization phase using attributes such as node ID, opcode, or mnemonic. The corresponding node is highlighted when a match is found, while the remaining nodes are visually de-emphasized (grayed out). This approach preserves the overall graph context, allowing users to manually explore other relevant nodes.
2) Optimization Overview Panel: This section summarizes the transformations applied to the IR during the current optimization phase. It includes counts of node generations, removals, opcode updates, value updates, and edge modifications (e.g., additions, removals, replacements), offering a quick, high-level snapshot of activity during that phase.
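The per-phase counts in this panel could be computed with a simple aggregation such as the following. The event encoding (one record per transformation, tagged with a `kind` field) is an assumption for illustration, not the system's actual data model.

```python
from collections import Counter

def phase_summary(events):
    """Aggregate the transformation events of one optimization phase
    into the counts shown in the Optimization Overview Panel.
    Each event is a dict with a "kind" key, e.g. "node_generated",
    "node_removed", "opcode_update", "value_update", "edge_added"."""
    return Counter(e["kind"] for e in events)
```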
3) Node Details Panel: This panel displays detailed metadata for the selected node. It includes the node’s ID, opcode, mnemonic, a flag indicating whether it was generated during the current phase, and any associated value or edge changes.
Although these are our initial visualization plans, the details that should be displayed to effectively aid developers in real development settings must be thoroughly studied. Our goal is to display only the most useful details while keeping the view minimal, so that we can easily test the visualization and minimize users’ effort in operating the system. To accomplish these goals, we plan to conduct surveys and interviews with JIT compiler developers. The details of how this survey and interview will look are still under discussion.
# IV. BRIEF OVERVIEW OF EVALUATION PLAN
The evaluation of our visualization framework will focus on its usability and interpretability. Our target user group is software developers working closely with JIT compilers and under-the-hood architecture. We propose three central research questions to guide this evaluation:
1) Can users accurately track how individual IR nodes evolve across different compiler phases?
2) Is the resulting node-link graph readable and interpretable, especially for medium-sized IR datasets?
3) Does the inclusion of phase-aware filtering enhance a user’s ability to understand the nature and sequence of JIT compiler transformations?
To address these questions, we will employ a combination of qualitative and quantitative metrics. Node clarity will be assessed based on the readability of tooltips, the legibility of node labels, and the visual coherence of node placement. User correctness will be measured through task-based evaluation, where users are asked to complete specific challenges such as identifying the phase during which a node’s value changed or determining which function last accessed a given node. These tasks will allow us to quantify the accuracy and efficiency with which users can extract meaningful insights from the visualization.
# V. DISCUSSION
# A. IR JSON
A key challenge in building our visualization was handling the complexity of phase mapping within the input IR JSON file. It required nuanced traversal logic over function IDs to correctly associate each instruction with its corresponding optimization phase. The IR data is generated using the tool developed by Lim et al. [8]. Their data model was designed for bug localization, and as such, it preserves deeply nested structures and exhaustive optimization metadata—not all of which are necessary for visualization. Our initial effort focused on identifying which portions of this rich dataset are most relevant for visual exploration and designing an efficient extraction pipeline that retains semantic clarity while reducing structural complexity.
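The function-ID-range traversal described above can be sketched as a sorted-range lookup. The `(start_id, phase_name)` list layout is an assumption about the data for illustration, not the exact structure produced by Lim et al.'s tool.

```python
import bisect

def phase_of(function_id, phase_ranges):
    """Map a function ID to its optimization phase.
    `phase_ranges` is a list of (start_id, phase_name) pairs sorted by
    start_id; each phase covers IDs from its start up to the next start."""
    starts = [s for s, _ in phase_ranges]
    i = bisect.bisect_right(starts, function_id) - 1
    return phase_ranges[i][1] if i >= 0 else None
```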
# B. Front-End Visualization
D3’s layout constraints (e.g., limited support for non-hierarchical data) restricted our ability to apply preferred techniques such as edge bundling.
We also faced limited time to verify the correctness of visualization outputs. Some visual clusters might represent inactive nodes, and hover behavior needs refinement for crowded graphs. However, the modularity of our system will support future improvements. Meanwhile, the current prototype has not yet been adjusted fully for scalability, and further considerations may be added upon testing more dense IR networks.
# VI. RELATED WORK
The closest work in visualizing JIT compiler IRs is the work by Lim and Kobourov [15]. Their approach visualizes the entire IR structure in a single, cohesive layout using the metro map metaphor [16], providing a high-level overview of the IR graph. However, the visualization is static and does not capture the evolution of the IR across different compiler phases. As a result, it offers limited support for analyzing phase-specific transformations, which are often crucial for debugging and understanding JIT optimization behavior.
Reiss introduced Jive, a real-time visualization system for Java programs [21]. The system visualizes classes used during program execution by representing them as boxes, allowing the users to observe runtime behavior. While this work shares a similar goal of visualizing execution-related components, its focus is on illustrating the structure and behavior of the input program itself. It does not address the internal behavior of the Java runtime environment, such as the interpreter or the JIT compiler, which are the primary focus of our work.
Several visualization tools are designed to support debugging [22], [23]. These tools commonly use interactive box-and-arrow diagrams to visually represent the structure and behavior of the input program. Many offer advanced features such as pausing execution, stepping through program states, and clicking on visual elements to generate additional views, such as performance charts or data-flow summaries. While these tools share the goal of improving the developer’s insight through visual interaction, their focus is primarily on visualizing the input program itself rather than the internal behavior of compilers.

# ABSTRACT

The complexity of modern Just-In-Time (JIT) compiler optimization poses significant challenges for developers seeking to understand and debug intermediate representation (IR) behavior. This work introduces JITScope, an interactive visualization framework that illustrates how IR nodes and instructions evolve across compilation phases. The system features a full-stack architecture: a Python-based backend transforms raw JSON-formatted IR data (an abstract model of the JIT compiler IR) into a normalized SQLite database; a controller layer serves processed CSV data; and a D3.js-powered frontend renders an interactive, phase-aware graph of IR node transformations. The design emphasizes modularity, traceability, and flexibility. Our roadmap explores intuitive visual representations of phase-level changes in IR node connectivity, values, and access patterns. Ultimately, JITScope lays a foundation for future tooling that enables visual exploration of IR evolution, including phase filtering, value tracking, and function-access mapping, offering a new lens into the behaviors and impacts of compiler optimizations.
# I. INTRODUCTION
The publication of the Bitcoin white paper in 2008 and the subsequent launch of the Bitcoin blockchain in 2009 sparked significant interest and research into blockchain technology. This emerging technology has garnered widespread attention from businesses, researchers, and the software industry due to its key attributes, such as trust, immutability, availability, and transparency. However, as with any new technology, blockchain and its associated smart contracts pose a range of challenges, particularly in areas like blockchain infrastructure and smart contract development.
Ongoing research is tackling several critical issues, including blockchain scalability, transaction throughput, and the high costs associated with consensus algorithms. In addition, smart contract development faces unique obstacles arising from the blockchain infrastructure technology, such as limited stack space, the oracle problem, data privacy concerns, support for long-running contracts, and cross-blockchain interoperability. These challenges have been the subject of extensive study, with numerous comprehensive literature reviews available [e.g., 1, 2].
The inherent constraints of blockchain technology complicate the development of smart contracts, as documented in several literature surveys [e.g., 3, 4]. Consequently, developers must not only be proficient in traditional software development but also possess expertise in smart contract programming for distributed environments, including the use of cryptographic techniques integral to blockchain infrastructure. To address these challenges and simplify smart contract development, research in [5-8] proposes leveraging Business Process Model and Notation (BPMN) models [9] as a foundation for generating smart contracts.
We also use BPMN to represent business application requirements; however, we take a different approach to transforming BPMN models into smart contracts. Our method leverages multi-modal modeling to represent the flow of business logic in a blockchain-agnostic manner, providing unique advantages for automated or semi-automated smart contract creation and deployment. As a proof of concept, we developed a tool called Transforming Automatically BPMN models into Smart contracts with Repair Upgrade (TABS+R), which automates the generation of smart contracts from BPMN models [10, 11].
It should be noted that BPMN and DMN are standards created by the Object Management Group (OMG) [9]. Both are graphical standards designed to be readily understood and used by both non-technical and technical people, thus forming a bridge between business and IT personnel. BPMN is used to represent well-defined business processes, while DMN is used to specify business decisions and rules. The DMN standard specifies the use of the Friendly Enough Expression Language (FEEL), which was designed so that expressions can be written in a way that is easily understood by both business professionals and developers. FEEL is used to define expressions in the context of BPMN and DMN diagrams [9].
As DMN and BPMN have been designed to be readily understood and used by business professionals, such as a Business Analyst (BA), as well as IT personnel, we assume that the BA who is responsible for requirements gathering for the blockchain application is familiar with BPMN and DMN modeling. Consequently, since it is the BA who uses BPMN and DMN modeling to represent the blockchain distributed application, achieving automated transformation of BPMN models, in which DMN expresses the business decision logic, will enable the BA to generate smart contracts without assistance from software developers, as long as the business logic can be expressed using DMN.
# A. Objectives
The main objective of this paper is to show the feasibility of generating the methods of a smart contract from a BPMN model with business logic represented using DMN. To achieve the transformation, two separate subproblems must be addressed, namely (a) how the business logic is represented in DMN and transformed into code executable by the generated smart contract, and (b) which information must be available for the transformation and how such information is represented in the BPMN and DMN models.
# B. Contributions
The main contributions of this paper include:
i. Describing how the BA documents the flow of information along the flow of computation. This information is used by the transformation of BPMN and DMN models into smart contracts.
ii. Showing how the BPMN models are readily augmented with DMN modeling to represent the business logic of the blockchain application.
iii. Proof of concept to show the feasibility of our approach to automated generation of smart contracts for applications modeled with BPMN and DMN.
# C. Outline
In the second section, we outline our system architecture for creating smart contracts and for their execution and overview the significant features of our approach to automated development of smart contracts from BPMN and DMN models. The third section describes how the BA uses BPMN modeling to represent the flow of information to support the transformation, while the fourth section explains the use of DMN modeling to represent the business logic. The fifth section shows the tool in action on a selected use case. The last two sections provide related work and summary and conclusions, respectively.
# II. USING MULTI-MODAL MODELING FOR SMART CONTRACT GENERATION
In contrast to the other approaches to transforming BPMN models into smart contracts, we exploit multi-modal modeling to represent the flow of computation of the business logic in a blockchain-agnostic manner [10]. We subsequently extended our approach and methodology and created a PoC tool, called TABS+R [11, 12], to support:
• Semi-automated generation of smart contracts from BPMN models [10, 11].
• Support for nested long-running and multi-step transactions [11].
• Repair/upgrade of smart contracts [12].
The overall architecture of our system is illustrated in Fig. 1. It presents a block diagram outlining the key steps involved in transforming a BPMN model into smart contract methods. The diagram also includes a set of API methods (denoted as DAppAPI in Fig. 1) that facilitate interaction between a Distributed Application (DApp) and the smart contract methods. This architecture is typical of most approaches that generate smart contract methods from BPMN models [4-7]. In this setup, the DApp does not directly invoke the smart contract methods. Instead, it calls API methods provided by the API-SCmethods component (shown in Fig. 1), which marshals the necessary parameters and then triggers the corresponding smart contract methods.
During the design phase, activities of actors involved in the smart contract are represented using multi-modal modeling. Concurrency is modeled using Discrete Event (DE) modeling, while functionality is represented with concurrent Finite State Machines (FSMs), forming a DE-FSM model. A key feature of this model is its blockchain-agnostic nature, meaning that the coordination of collaborative activities is described using DE modeling. Only the code for the BPMN task elements is blockchain-dependent, i.e., it needs to be written in a programming language that is supported by the target blockchain. For example, Ethereum-based blockchains typically use languages that produce code executed by the Ethereum Virtual Machine (EVM), whereas JavaScript or other languages may be used for scripting task elements in Hyperledger Fabric (HLF) blockchains.
Scripting the code for the BPMN task elements is relatively straightforward compared to scripting the synchronization of collaborating activities, which is orchestrated through the transformation of BPMN models, with business logic represented using DMN, into smart contracts. The code implementing a BPMN task is self-contained: in BPMN modeling, once the flow of computation enters a task element and its execution begins, the task completes its computation without interruption. Furthermore, the task code (i) can read information flowing into the task, (ii) can read and write the blockchain state variables, and (iii) produces output information that flows out of the task when its computation finishes. Thus, the task code is self-contained in a smart contract method that accesses only the state variables and the method’s inputs and outputs, as represented by the BA using the flow of information in a BPMN model, described in a following section. Consequently, the approach also leads to a modular design.
In summary, the flow of collaborative activities is modeled using DE-FSM modeling. The functionality of task elements is achieved by invoking methods that implement the business logic of each task. To coordinate these collaborative activities, a run-time monitor, implemented as a smart contract method deployed on the target blockchain, ensures the correct sequencing and execution of the activities. This monitor uses DE modeling to manage the invocation of individual activities, which are represented as methods within the monitor. Thus, if the target blockchain for the smart contract deployment has a monitor smart-contract deployed, the synchronization of the collaborative activities is blockchain agnostic. Furthermore, our approach deploys a monitor smart contract on the target blockchain automatically. In our proof of concept (PoC), the TABS+R tool, we implemented the monitor smart contract to be deployed on the Hyperledger Fabric (HLF) blockchain as well as on blockchains supporting the EVM [10-12].
# III. BPMN MODELING BY BA
Before we describe BPMN modeling by the BA, we briefly overview information on storage of large files and
communication between a smart contract and its external environment.
# A. Preliminaries
As is the usual practice for blockchains, large document files or objects are not stored on the blockchain itself but are stored off-chain. For the storage of document files or large objects, we currently utilize the InterPlanetary File System (IPFS) [13] for its reliability and availability supported through replication.
Fig. 1. System architecture for the design phase and for the execution phase (adopted from [10])
When a document is created and uploaded to IPFS, a new Content-addressed hash code IDentifier (CID) is generated. This CID is then signed and stored by the smart contract, providing a method to verify the document's authenticity, including confirming (i) authorship and (ii) immutability to ensure that the document has not been altered since its creation.
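The immutability check can be illustrated with a simplified content digest. Real IPFS CIDs are multihash/multibase encoded, so the plain SHA-256 hex digest below is only an illustrative stand-in, and signing is omitted.

```python
import hashlib

def content_id(document_bytes):
    """Simplified stand-in for an IPFS CID: a SHA-256 digest of the
    document bytes. (Real CIDs are multihash/multibase encoded.)"""
    return hashlib.sha256(document_bytes).hexdigest()

def verify_document(document_bytes, stored_cid):
    """Check that a retrieved document still matches the CID recorded
    by the smart contract, i.e., it has not been altered since creation."""
    return content_id(document_bytes) == stored_cid
```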
One of the key features that supports trust in smart contracts is that the methods within a smart contract do not have access to external resources, such as file systems or communication subsystems. Smart contract code can only access the state variables stored on the blockchain and the parameters passed to smart contract methods when they are invoked. Therefore, beyond the state variables, any additional information required by a smart contract must be marshaled by the API-SCmethods component before the smart contract method is invoked. The marshalled data is then passed as input parameters when invoking the smart contract methods.
Additionally, a smart contract must communicate the progress of its execution to the Distributed Application program $( D A p p )$ that invokes its methods. This is accomplished by emitting events from the smart contract methods, which are captured by the API-SCmethods component (as shown in Fig. 1) and relayed to the DApp.
# B. Exposition Use Case
For explanatory purposes, we will use a simple BPMN model, shown in Fig. 2, that represents a sale of a large product, such as a combine harvester. The model shows that an agreement on the sale of the product is reached first, followed by arrangements for transporting the product. These transport arrangements include determining the requirements for transporting the product, such as safety measures for hazardous materials. Once the transport requirements are established, insurance and transport are arranged, and the product is shipped. After transportation, the product reception by the client is reviewed, and the payment is finalized.
Modeling is carried out by a Business Analyst (BA) who is assumed to be proficient in BPMN and DMN modeling, including the use of the FEEL language for decision logic. Additionally, we assume that the BA is familiar with JavaScript Object Notation (JSON), which is used to describe the flow of information throughout the computation process, as will be detailed later.
In Fig. 2, the first task, RecAgr, involves receiving a purchase offer document from an external source. Once accepted, the purchase agreement (i.e., a sales agreement) is passed to the next task, GetTrReq, for further processing. The sales agreement is represented by an associated data element, SalesAgr. The GetTrReq task determines the transport requirements for the product and stores them in a newly generated IPFS document, TrRequirements. This document is then passed to the subsequent processing step.
The transport requirements are forwarded to the GetIns and GetTransp tasks to secure insurance and a transporter, respectively. These tasks can be executed concurrently, as indicated by the fork gateway (diamond shape with a plus sign). The GetIns task generates the insurance contract, labeled Insurance, while the GetTransp task produces the Transport document.
Fig. 2. BPMN model
Once both the insurance and transport contracts are obtained and provided to the transporter, the product is delivered to its destination, represented by the DoTransp task. Once the product is received by the purchaser, the reception of the product is recorded in the Delivery document that is forwarded to the final task, RevAndFin, that reviews and finalizes the contract.
Please note that the flow of activities shown in Fig. 2 is executed by a single actor, represented within one swimlane (in BPMN terminology). This model is suitable for scenarios within organizations that lack sophisticated IT infrastructure, such as Small to Medium-sized Enterprises (SMEs). The simple use case is designed to demonstrate how the BA uses BPMN to document the flow of information along the computation process and how the BA applies DMN modeling to define the business logic.
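The run-time monitor introduced in Section II can be sketched for this use case as a small predecessor-checking class. In the actual system the monitor is a generated smart contract deployed on the target blockchain; this Python mock only illustrates the ordering it enforces, including the fork/join around GetIns and GetTransp.

```python
class Monitor:
    """Illustrative mock of the run-time monitor: it enforces the task
    ordering of the Fig. 2 use case. A task may run only after all of
    its predecessors have completed."""
    SUCCESSORS = {
        "RecAgr": {"GetTrReq"},
        "GetTrReq": {"GetIns", "GetTransp"},  # fork: concurrent tasks
        "GetIns": {"DoTransp"},               # join at DoTransp
        "GetTransp": {"DoTransp"},
        "DoTransp": {"RevAndFin"},
        "RevAndFin": set(),
    }

    def __init__(self):
        self.completed = set()

    def invoke(self, task):
        # Predecessors = every task listing `task` among its successors.
        preds = {t for t, succ in self.SUCCESSORS.items() if task in succ}
        if preds and not preds <= self.completed:
            raise RuntimeError(f"{task} invoked before its predecessors")
        self.completed.add(task)
```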
In the following, we describe how a BA, working within the context of an SME, creates a BPMN model to track activities, document flows, and express the business logic decisions of BPMN task elements using DMN modeling.
# C. Documenting Flow of Information by BA
The previous discussion, of the BPMN model in Fig. 2, illustrates the flow of computation, which is forked by a forkgate, enabling the concurrent execution of the GetIns and GetTransp tasks. The figure also shows how the BA represents information as it flows along with the flow of computation. This is achieved by the BA documenting the transfer of information between tasks using an association object. In Fig. 2, the dotted arrows, from the RecAgr task to the SalesAgr association object and then from the SalesAgr to the GetTrReq task, indicate the transfer of the sales agreement information (SalesAgr) from the RecAgr task to the GetTrReq task.
We first describe how JSON is used to model the flow of information and then provide simple examples to clarify. To provide more details on the content of the SalesAgr document, the BA clicks on the SalesAgr icon to provide annotation about its contents.
Information flowing along the computation process flow may contain multiple items, each of which is described by an array of key-value pairs. For this purpose, the BA uses JSON to represent the information flowing along the computation process. Items, such as item1 and item2, are represented as an array of JSON elements.
The first element in the array has the form: { "source": "string1" }. The value of "string1" can only be "file" or "http", denoting whether the information is sourced from a file or an HTTP service. If the value of string1, representing the value for the key “source”, is "file", the next item in the array specifies the CID (Content Identifier) of the file from which the information is retrieved. This file is assumed to be in JSON format. The subsequent items in the array identify the fields (or components) within the file that need to be retrieved and passed as parameters to a smart contract method invoked by the API-SCmethods component.
If the value of string1 is "http", then there is an array of elements that contain information on (i) HTTP address of the service, (ii) input parameters, and (iii) output parameters containing the results of the service execution. The HTTP service is invoked with input parameters described, wherein the service returns information in its output parameters. Both the input and output parameters are described using the array elements. The HTTP service is invoked to implement the task and return the produced results in its output parameters that are recorded in the smart contract. For brevity, we will focus on describing how JSON is used to represent the content of files that provide information flowing along the computation process.
In Fig. 3, the file containing the relevant information is named SalesAgr.json, and its CID is provided. The array of elements within the JSON structure identifies which components of the SalesAgr.json file are to be retrieved and passed as parameters to the smart contract method. In our simple use case, the JSON components to be retrieved and passed to the smart contract method include only the product ID, which is supplied to both the GetTrReq and GetIns tasks. These tasks then use the product ID to retrieve further details about the product to be transported and then the requirements, if any, for its transport.
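A sketch of how the API-SCmethods component might resolve such an annotation for the file case follows. The key names beyond "source" (`cid`, `field`) are illustrative assumptions rather than the tool's actual schema, and IPFS retrieval is abstracted behind a callback.

```python
def resolve_annotation(items, fetch_file):
    """Resolve a BPMN information-flow annotation into the parameters
    marshaled for a smart contract method. `items` is the JSON array
    described in the text; `fetch_file` retrieves a JSON document by
    CID (e.g., from IPFS) and returns it as a dict."""
    source = items[0]["source"]
    if source == "file":
        cid = items[1]["cid"]            # CID of the JSON document
        doc = fetch_file(cid)
        # Remaining items name the fields to extract and pass along.
        return {it["field"]: doc[it["field"]] for it in items[2:]}
    if source == "http":
        raise NotImplementedError("HTTP services are resolved analogously")
    raise ValueError(f"unknown source: {source}")
```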
This approach allows the BA to clearly define the flow of data in the smart contract system, ensuring smooth interaction between the BPMN and DMN models and the smart contract that is generated, and providing transparency and traceability in the overall process.
Information flowing into a task, as a result of invocation of a smart contract method, is prepared by the API-SCmethods component. It retrieves the information described by the JSON annotation of the SalesAgr association object, marshals it into the appropriate format, and passes it as input parameters to the smart contract method that implements the GetTrReq task.
Fig. 3. Annotation to describe information flowing between tasks
For the subsequent sections, we assume that the monitor smart contract, required by the smart contracts generated by the TABS+R tool, has already been deployed on the target blockchain. We currently support either the HLF blockchain or a blockchain that uses the EVM.
# IV. DMN MODELING
We will use our simple example use case, represented by the BPMN model of Fig. 2. Assume, for simplicity, that if the quoted price for the insurance is 15% or more of the product price, then the whole contract should be aborted due to the high cost. To make such a decision, the price of the product and the insurance cost need to be available.
To express the constraints on the insurance cost, we use the business-rule task element of BPMN. Functionally, the business-rule task first produces a value that is then forwarded to an exclusive gateway. The gateway uses the value, produced by the business rule task, to choose one of its forks for the outgoing flow of computation.
For our simple case, the business decision logic can be represented using a simple decision table, as shown in Fig. 4. Our tool invokes the graphical editor provided by Camunda (at Camunda.com) and available from BPMN.io. The decision table checks that the insurance quote, as a percentage of the product price, is less than 15%, in which case the next task to be executed is DoTransp, transporting the product to its destination. The smart contract fails if the insurance quote, as a percentage of the price, is 15% or higher.
Once the decision table is completed, the business-rule task is represented by a rectangular icon with rounded corners and a small table pictured in its top-left corner, as shown in Fig. 5. The business-rule task has an outgoing flow that carries the result of the business-rule evaluation, which the following exclusive gateway uses to take one of its outgoing paths: when the percentage is less than 15%, the flow continues to the DoTransp task, while if the percentage is 15% or greater, the contract fails, resulting in automatic execution of the recovery procedures described in [11].
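The decision logic of this table can be expressed as a one-line predicate. This is a hand-written sketch of the rule (a quote of 15% or more of the price aborts the contract, as stated at the start of this section), not code generated by the TABS+R tool.

```python
def insurance_decision(product_price, insurance_quote):
    """Sketch of the Fig. 4 decision table: continue to DoTransp when
    the insurance quote is below 15% of the product price, otherwise
    abort and trigger recovery."""
    percentage = 100.0 * insurance_quote / product_price
    return "DoTransp" if percentage < 15.0 else "abort"
```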
# DRAK: Tool for Automated Transformation of a BPMN Model into Smart Contract (TABS+R)
Fig. 4. Creating the decision table for a business-rule task
Fig. 5. BPMN model with the business-rule task
In addition to simple decision tables, DMN modeling also incorporates the Friendly Enough Expression Language (FEEL). FEEL was created by the OMG as a part of DMN with the aim of being a readable language for both programmers and business analysts, following these design principles [ref to OMG doc or Camunda tutorial]:
- Side-effect free
- Simple data model with numbers, dates, strings, lists, and contexts
- Simple syntax designed for a broad audience
- Three-valued logic (true, false, null)
- Control statements including assignment, conditional, looping, and range statements; functions for strings, numbers, dates and times, and lists
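The three-valued logic listed above can be illustrated with a minimal Kleene-logic sketch, using Python's `None` to stand in for FEEL's `null`. This is an illustration of the semantics only, not our tool's FEEL engine:

```python
def feel_and(a, b):
    """Three-valued AND: false dominates, even against null (None)."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None  # result is unknown
    return True

def feel_or(a, b):
    """Three-valued OR: true dominates, even against null (None)."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None  # result is unknown
    return False

print(feel_and(True, None))   # -> None
print(feel_or(True, None))    # -> True
print(feel_and(False, None))  # -> False
```

Note that a null operand only propagates when the other operand does not already determine the result.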
We acknowledge that we currently support only simple decision tables. However, FEEL has been implemented in BPMN modeling used for process orchestration, for instance by Camunda as described at BPMN.io, and we do not foresee design challenges in supporting it.
# V. GENERATING SMART CONTRACTS BY BA IN SME
We analyzed a variety of use cases from the literature that focus on the transformation of BPMN models into smart contracts, such as use cases for order-supply, supply chains, parts ordering, sales and shipment, and ordering medications.
In each case, the creation, review, or amendment of these documents occurs off-chain. In such cases, the data exchanged between actors consists primarily of QR codes that identify the document files being shared, wherein the QR code serves as the document’s unique ID, analogous to the CID generated by IPFS. The smart contract interactions among the partners are limited to the exchange of these documents, rather than directly handling their creation or modification.
Thus, when task executions can be performed off-chain, the task script code does not need to be provided on-chain, as long as the generation of the smart contracts from the BPMN model ensures a certified exchange of documents between on-chain and off-chain computations. This is readily supported by our approach, as only CIDs are passed to the smart contract methods.
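The certified exchange can be sketched as follows: only a content-derived ID crosses the on-chain boundary, so any party can later verify that an off-chain file is the one that was registered. A SHA-256 digest stands in here for an IPFS CID, and the `DocumentRegistry` class is an illustrative stand-in for the generated smart contract, not our tool's actual output:

```python
import hashlib

def content_id(document_bytes: bytes) -> str:
    """Derive an ID from the document content (stand-in for an IPFS CID)."""
    return hashlib.sha256(document_bytes).hexdigest()

class DocumentRegistry:
    """Toy stand-in for the on-chain contract: it stores only CIDs."""
    def __init__(self):
        self._cids = set()

    def register(self, cid: str) -> None:
        self._cids.add(cid)

    def verify(self, document_bytes: bytes) -> bool:
        """Check that an off-chain file matches a registered CID."""
        return content_id(document_bytes) in self._cids

doc = b"insurance quote: 10% of product price"
registry = DocumentRegistry()
registry.register(content_id(doc))
print(registry.verify(doc))                   # -> True
print(registry.verify(b"tampered document"))  # -> False
```

Because the ID is derived from the content itself, a tampered document fails verification without the document ever having been stored on-chain.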
Operationally, in the absence of IT support, the BA, or an operator trained by the BA, performs the actual activities represented by some of the tasks, while the smart contract records the results of the BA’s activities. For instance, it is the BA who executes the GetTrReq task. The BA needs the product description, which the BA communicates to an insurance provider. The insurance provider returns the insurance document to the BA, who stores it in the file system so that it is accessible by the API-SCmethods component of the architecture shown in Fig. 1.
# VI. RELATED WORK
Several approaches to transforming BPMN models into smart contracts have been explored. The Lorikeet project focuses on transforming BPMN models into smart contracts to facilitate blockchain-based business process execution and asset management [5, 16]. The project employs a model-driven engineering approach, where BPMN models are analyzed and converted into smart contract methods that can be deployed on blockchain platforms, particularly Ethereum. An off-chain component is used to manage interactions between process participants and the blockchain, ensuring the execution of processes follows the predefined message exchanges in the BPMN model.
Additionally, Lorikeet supports asset control, enabling the management of both fungible and non-fungible assets, such as token registries and transfer methods, which are essential for business processes requiring asset handling. This approach allows for rapid prototyping, testing, and deployment of smart contracts based on BPMN models, enhancing flexibility and efficiency in blockchain-based business process automation [5, 16].
The Caterpillar project focuses on transforming Business Process Model and Notation (BPMN) models into smart contracts, providing a comprehensive architecture for executing business processes on the Ethereum blockchain [6, 7]. It adopts a three-layer architecture that includes a web portal, an off-chain runtime, and an on-chain runtime. The on-chain runtime layer is responsible for managing the execution of smart contracts that control workflow, interaction management, and process configurations based on the BPMN model. This approach ensures that business processes are executed transparently, securely, and efficiently within a blockchain environment.
The Caterpillar project emphasizes recording all business processes in a single pool, facilitating the management of interactions and ensuring the consistency of the process execution across multiple actors. By leveraging Ethereum as the blockchain platform, Caterpillar enables the seamless integration of BPMN models with decentralized applications, supporting the automation of business workflows through blockchain-based smart contracts [6, 7, 17].
The Collaborative Business Process Execution on Blockchain (CoBuP) project explores the transformation of BPMN models into smart contracts, offering a unique approach compared to traditional methods. CoBuP does not directly compile BPMN models into smart contracts [15]. Instead, it deploys a generic smart contract that invokes predefined functions based on the BPMN model, making it more flexible and adaptable to various process executions.
The CoBuP architecture is based on three layers: conceptual, data, and flow layers. BPMN models are first transformed into a JSON-based workflow model, which governs the execution of business processes by interacting with data structures on the blockchain. This allows for a decentralized, secure execution of business processes while maintaining the flexibility needed for collaborative environments. The project approach highlights the potential for blockchain to support complex business processes that require a high degree of collaboration, adaptability, and trust among participants. | This paper addresses the challenge of creating smart contracts for
applications represented using Business Process Management and Notation (BPMN)
models. In our prior work we presented a methodology that automates the
generation of smart contracts from BPMN models. This approach abstracts the
BPMN flow control, making it independent of the underlying blockchain
infrastructure, with only the BPMN task elements requiring coding. In
subsequent research, we enhanced our approach by adding support for nested
transactions and enabling smart contract repair and/or upgrade. To empower
Business Analysts (BAs) to generate smart contracts without relying on software
developers, we tackled the challenge of generating smart contracts from BPMN
models without the assistance of a software developer. We exploit the Decision
Model and Notation (DMN) standard to represent the decisions and the business
logic of the BPMN task elements, and we amended our methodology for the
transformation of BPMN models into smart contracts to also support the
generation of scripts representing the business logic captured by the DMN
models. To support such
transformation, we describe how the BA documents, using the BPMN elements, the
flow of information along with the flow of execution. Thus, if the BA is
successful in representing the blockchain application requirements using BPMN
and DMN models, our methodology and the tool, called TABS, that we developed as
a proof of concept, are used to generate the smart contracts directly from those
models without developer assistance. | [
"cs.SE",
"cs.CR"
] |
# 1 Introduction
Machine Translation (MT) is one of the few NLP technologies that has been widely available online for decades. As both translation quality and internet access have improved (Gaspari and Hutchins, 2007), MT has gained a large and diverse user base. Millions of people use it to communicate across languages, including in settings where professional translators or interpreters are not realistically available (Nurminen and Papula, 2018; Kasperė et al., 2021; Vieira et al., 2022; Kenny et al., 2022).
As MT becomes increasingly embedded in everyday tools and tasks, the socio-technical gap between how the technology is developed and how it is used in real-world contexts is widening (Ackerman, 2000). Whereas initial MT systems were primarily used to support professional translators or narrow domains (Hutchins, 2001), today MT can be used by anyone with internet access in their daily life (Yvon, 2019; Kenny et al., 2022). However, MT does not yet fulfill its promise to enable communication across languages, particularly for users who may lack the language or domain expertise needed to make informed use of the translations (Liebling et al., 2020; Santy et al., 2021; Valdez et al., 2023). This gap is further amplified by the rise of translation with general-purpose large language models (LLMs) (Vilar et al., 2023; Alves et al., 2024; Kocmi et al., 2024; Hendy et al., 2023). With such tools, translation can be integrated into broader workflows, where translation might be covert, making it even harder for users to assess its reliability. This can result in over-trust in MT (Martindale and Carpuat, 2018), which is particularly problematic in high-stakes scenarios where it can cause harm (Vieira et al., 2021), but also in under-use of MT tools in cases where they could be beneficial (O’Brien and Federici, 2019).
We argue that a human-centered approach to MT is needed: one that broadens what MT systems do to help users weigh risks and benefits and align system design with communicative goals. This approach echoes calls for human-centered AI (Capel and Brereton, 2023), which includes recognizing that people are at the heart of the development of any AI system (Vaughan and Wallach, 2021), emphasizing designing AI systems that augment rather than replace human capabilities, prioritizing human agency and system accountability (Shneiderman, 2022), and using human-centered design methods for AI systems (Chancellor, 2023).
To provide a foundation for human-centered MT, we argue that it is important to adopt an interdisciplinary approach that includes Translation Studies and Human-Computer Interaction (HCI). In this paper, we recontextualize MT research by surveying relevant literature in these fields. As Green et al. (2015) point out, the question of how to design effective human–MT interaction has been considered long before HCI, NLP, or AI were formalized as disciplines. For example, Kay (1980/1997) introduced a cooperative interactive system as an alternative to fully automated translation that would replace professional translators. As MT improved, these questions were revisited to design mixed-initiative post-editing interfaces (Green et al., 2013; Koehn et al., 2014; Briva-Iglesias et al., 2023), highlighting the benefits of designing MT systems to augment, rather than replace, professional translators’ abilities (O’Brien, 2024). As the MT user base has expanded from professional translators to professionals in other disciplines, as well as the general public (Savoldi et al., 2025), many relevant lessons can be drawn from theoretical and empirical work in Translation Studies and HCI. Accordingly, this survey results from discussions between co-authors across these fields. Translation Studies and HCI experts identified key insights they wished to share with MT researchers. These insights served as points of connection with the MT literature.
Considering MT’s diverse uses (Section 2), we synthesize cross-disciplinary insights spanning MT literacy (Section 3), human-MT interaction (Section 4), and translation ethics (Section 5). We then outline research directions for human-centered MT evaluation (Section 6) and design (Section 7), illustrating interdisciplinary human-centered MT research with a healthcare case study (Section 8).
# 2 Understanding Contexts of Use
To develop human-centered MT, we must first understand how MT is used in the real world. While the body of research on users, contexts, and purposes has grown recently, the considerable size of the user population, estimated in 2021 at more than one billion (Nurminen, 2021a, p. 23), and the growing variety of use contexts present a challenge for synthesizing that research into knowledge that can be used for designing systems that more directly serve user needs.
A classical framework distinguishes three use types (Hovy et al., 2002): assimilation, in which MT helps users get the gist of content in a foreign language (e.g., browsing news, triage) without requiring perfect quality; dissemination, in which MT content is shared with others, demanding higher quality (e.g., public announcements); and communication, in which MT supports live or interactive multilingual exchanges (e.g., chat, classrooms).
A wealth of MT research projects have considered different use cases over the years, but without much information sharing across settings: classroom speech translation (Lewis and Niehues, 2023), healthcare (Khoong et al., 2019; Valdez and Guerberof-Arenas, 2025), crisis response (Lewis et al., 2011; Escartín and Moniz, 2019), international patent processes (Nurminen, 2020), migration scenarios (Vollmer, 2020; Vieira, 2024; Pięta and Valdez, 2024), research and academic writing (Bowker and Ciro, 2019b; Ehrensberger-Dow et al., 2023; Bawden et al., 2024), customer support (Gonçalves et al., 2022), literary MT (Karpinska and Iyyer, 2023; Zhang et al., 2025a), CAT/localization (Koehn et al., 2014; Lin et al., 2010), and intercultural collaboration platforms (Ishida, 2016). The examination of these contexts of use alone suggests some considerations that should impact MT design, beyond the general purpose of translation: risk management (error tolerance varies by domain), synchrony (real-time vs. delayed), urgency, shelf life, audience, interaction dynamics, modality/accessibility, and overtness of MT use (e.g., covert use of MT on a multilingual website or embedded in another application).
We also lack a deeper understanding of who uses online MT tools and how. Nurminen (2021a) estimates that 99.97% of MT users are not professional translators. “Machine Translation Stories” illustrate diverse uses by individuals from all walks of life, from music students translating old Italian arias to people using MT in their professional life (Nurminen, 2021b). A survey of 1,200 UK residents shows high satisfaction with MT for low-stakes uses but highlights a demand for better quality (Vieira et al., 2022). Another survey of 2,520 UK public service professionals reveals that 33% had used MT in their work, predominantly within health and social care sectors, but also across legal, emergency, and police services (Nunes Vieira, 2024). Formal training was uncommon, leading many professionals to rely on personal devices and publicly available tools like Google Translate and ChatGPT. But user needs are not met equally across socioeconomic and geographic contexts. For instance, interview studies showed that MT applications do not support effective cross-lingual communication for migrant workers in India and immigrant populations in the U.S., resulting in significant negative impacts on their daily lives (Liebling et al., 2020).
Human-centered MT should not just respond to user needs (Gasson, 2003), but consider more broadly how people are affected by MT, including the languages and perspectives of marginalized populations (Bender and Grissom, 2024), and considering both direct and indirect stakeholders (Friedman and Hendry, 2019, p. 39). These include recipients of translated content, institutions using MT at scale (Koponen and Nurminen, 2024), writers of source texts (Taivalkoski-Shilov, 2019; Lacruz Mantecón, 2023), MT practitioners (Robertson et al., 2023), language learners, and broader language communities given evidence that language evolves through automation (Guo et al., 2024).
This complexity calls for further investigation of MT in context and for organizing use cases into a taxonomy that balances general-purpose development with contextual needs.
# 3 Machine Translation Literacy
Translation Studies research highlights a need for promoting machine translation literacy (Bowker and Ciro, 2019b) given the wide gap between how translation is approached by people within versus beyond the language professions. Professional translators have been trained in translation, which usually also involves acquiring a domain specialization (Scarpa, 2020), such as legal, medical or technical translation. As people, professional translators also have deep knowledge of the language pair in question, and the type of real-world knowledge and cultural knowledge that is necessary when translating between languages and cultures. Translators can bring all this information to bear on their understanding of the source text. They compensate for shortcomings in the source text (e.g., they can clarify the intended meaning of a sentence with poor punctuation or where a homophone has erroneously been used). Professional translators also operate within a sort of decision-making framework because they request (or even require) a translation brief from their client or employer (Munday et al., 2022). The translation brief is essentially a set of instructions and information that helps the translator to make sensible choices. For instance, the brief contains information about the intended purpose of the translation, where it will be published, who will read or use it, what the target reader’s background (language variety, culture, education level) is. All of this information allows the translator to make informed decisions.
In contrast many MT users have no background in translation. They may not have the necessary linguistic knowledge, domain or cultural knowledge required to evaluate the adequacy of the translated text. They may have misconceptions about translation (Bowker, 2023), e.g., seeing it as an exact science or a task that can be done by any bilingual. They might not realize the importance of the translation brief. In short, they lack MT literacy, which has been defined as “knowing how MT works, how the technology can be useful in a particular context, and what the implications are of using it for various purposes” (O’Brien and Ehrensberger-Dow, 2020).
This highlights the necessity of MT literacy and motivates a key direction in Human-Centered MT: designing tools that promote informed and responsible use, especially by lay users. Current tools lack this, but we will see that the existing literature provides a starting point.
# 4 Empirical Studies of MT Outside Professional Translation
Translation Studies and HCI offer extensive empirical research on human-MT interaction within various contexts, beyond professional translation. It reveals existing user strategies for using potentially imperfect MT, interventions that have already shown promise, and open research directions.
Post-editing The most studied human-MT interaction setting is probably post-editing, where people edit raw MT to improve it. It has received significant attention in the context of professional translation (Cadwell et al., 2016; Briva-Iglesias et al., 2023, among others), but it is also performed by other users, for instance when they translate their own source text as a writing aid in academic settings (Bowker, 2020a; Xu et al., 2024; O’Brien et al., 2018) or for scientific dissemination (Bawden et al., 2024). There is evidence that even monolingual users can interpret and revise MT output when provided with background knowledge or translation options (Hu et al., 2010; Koehn, 2010).
When users do not understand the target language, post-editing is not an option, but they still face a decision about whether to publish or share the raw MT outputs. Zouhar et al. (2021) studies the impact of augmenting raw MT with backtranslation, source paraphrasings and quality estimation feedback in such “outbound translation” settings, and show that backtranslation feedback increases user confidence in the produced translation, but not the actual quality of the text produced.
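The backtranslation feedback studied by Zouhar et al. (2021) can be sketched as a round trip: the system translates the source, translates the result back, and surfaces both the backtranslation and a crude agreement score to the user. The toy dictionary "translators" below are stand-ins for real MT systems, and the word-overlap score is a deliberately simple proxy, not a method from the cited work:

```python
# Toy stand-ins for forward and backward MT systems.
EN_TO_DE = {"the": "die", "cat": "katze", "sleeps": "schlaeft"}
DE_TO_EN = {"die": "the", "katze": "cat", "schlaeft": "sleeps"}

def translate(text: str, table: dict) -> str:
    """Word-by-word dictionary 'translation'; unknown words pass through."""
    return " ".join(table.get(w, w) for w in text.lower().split())

def backtranslation_feedback(source: str) -> tuple[str, str, float]:
    """Return (translation, backtranslation, word-overlap agreement in [0, 1])."""
    translation = translate(source, EN_TO_DE)
    back = translate(translation, DE_TO_EN)
    src_words, back_words = set(source.lower().split()), set(back.split())
    agreement = len(src_words & back_words) / len(src_words)
    return translation, back, agreement

translation, back, score = backtranslation_feedback("the cat sleeps")
print(back, score)  # -> the cat sleeps 1.0
```

As the cited study found, such feedback can raise user confidence without necessarily raising translation quality, which is exactly why the agreement score should be presented as a cue rather than a guarantee.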
Augmented Outputs for Gisting Several studies show that augmenting MT outputs can improve comprehension and engagement, particularly when MT is used for understanding the gist of a text. Highlighting key words in source and target texts can improve people’s ability to understand difficult translations (Pan and Wang, 2014; Grissom et al., 2024), and adding emotional and contextual cues promotes engagement with social media posts in a foreign language (Lim et al., 2018).
Research has also shown that users sometimes access outputs from multiple MT tools to better understand the errors associated with each individual output and, in doing so, enhance overall comprehension (Anazawa et al., 2013; Nurminen, 2019; Robertson et al., 2021). Other research has also indicated positive effects from exposure to outputs from multiple MT tools (Xu et al., 2014; Gao et al., 2015). Human-centered tools for MT gisting might therefore involve MT tools that embed a second MT tool directly into their user interface (Nurminen, 2020), or perhaps automatically show two outputs as a low-cost means of enhancing users’ perceived transparency.
Source Understanding People use MT not only to gain access to texts across language boundaries, but also to augment and ensure their understanding of texts that are in languages they have limited competence in (Nurminen, 2021a). They might position a source text and its translation side-by-side and refer to both while reading, or they may look at both original and translated messages in an MT-mediated conversation (Nurminen, 2016). Recognizing this tendency, human-centered MT tools could make it easy to access original texts alongside their machine-translated versions, and provide affordances to compare them easily.
MT-mediated Communication HCI research has studied MT-mediated communication, and how the use of MT affects not only performance, but also interpersonal dynamics. Empirical evidence shows that people develop their own strategies to compensate for imperfect MT, such as adapting what they say (e.g., by employing redundant expressions and suppressing lexical variation in language use) (Yamashita and Ishida, 2006; Hara and Iqbal, 2015), using back-translation to assess outputs in a language they do not understand (Ito et al., 2023), or simply relying on their holistic understanding of the conversation to fill in gaps where the MT output does not make sense (Robertson and Díaz, 2022). Even when effective, these strategies come at a cost to communication: people communicate less naturally and authentically (Yamashita and Ishida, 2011) and might get misleading signals on translation quality (Tsai and Wang, 2015). Imperfect translations also affect interpersonal dynamics between interlocutors, increasing the risk of participants misinterpreting their task partner’s intent (Lim et al., 2022), misattributing communication breakdowns to human vs. MT-generated errors (Gao et al., 2014; Robertson and Díaz, 2022), and misassessing one another’s contribution to the collaborative task (Xiao et al., 2024).
Trust Lay users’ trust in MT is largely shaped by their perception of how MT-as-a-black-box functions, not just its intrinsic quality. Identical translations can be perceived differently when labeled machine vs. human-generated (Asscher and Glikson, 2021), and people might assign inconsistent ratings to the MT outputs before vs. after the label is disclosed (Bowker, 2009). Not all MT errors impact user trust equally: fluency or readability errors tend to lower trust more than adequacy errors, even though the latter can be more misleading when users rely on MT-generated meanings to inform their actions (Martindale and Carpuat, 2018; Popović, 2020). Factors like language proficiency, subject knowledge, and MT literacy influence how users perceive MT quality in gisting contexts (Nurminen, 2021a). MT literacy has also been shown to play a significant role in shaping translators’ trust in MT (Scansani et al., 2019).
Taken together, this body of work highlights that building truly human-centered MT systems demands much more than generating fluent and adequate translations. It requires aligning system design with real-world communication practices, developing interaction strategies that empower users, and supporting their ability to assess risks in ideally independent and time-sensitive ways. Crucially, it also means empirically studying how these systems affect stakeholders, not just in terms of task performance, but also in how they shape interpersonal dynamics and shared understanding.
# 5 Ethicality of MT
The social implications of MT use extend beyond its immediate usefulness, bringing us into the realm of ethics. What does it mean for MT to be ethical? Surveying major frameworks of translation ethics (Koskinen and Pokorn, 2020) provides a foundation for addressing this question, highlighting the inherent multiplicity and conflicting perspectives in determining what is right or wrong in practice (Chesterman, 2001; Lambert, 2023).
For example, some approaches base translation ethics on strictly representing the original text’s “precise” meaning and form, at all costs and under all circumstances (Newmark, 1988). Other approaches emphasize a functional ethics of service, where ethical translation is defined by the translator’s adherence to the requester’s instructions, even if this means changing the source text or using it as mere inspiration (Holz-Mänttäri, 1984). Others prioritize alterity and social justice, viewing translation as a tool to challenge social and political inequalities by reframing communities’ identity and values; in this case, ethical action might even involve refusing to translate the source text (Robinson, 2014). Several other translation ethics frameworks exist, each revolving around different priorities and values (Koskinen and Pokorn, 2020).
Today’s influential ethical frameworks also imply that the translator’s ethical response is necessarily situation- and text-dependent (Pym, 2012). By this we mean that for different texts, and in different situations, the ethical decision – whichever ethical framework one follows – may take different shapes. MT ethics, then, are no less situation-contingent than issues of MT usability or effectiveness.
Finally, a typology of the main approaches to translation ethics also reveals how some ethics are largely regional, or field-specific, inasmuch as they stem from the particular features of translation as a medium for intercultural communication (Pym, 2012, p. 57). In contrast, other approaches are more general in their concerns and values, and not intrinsic to the field of translation as such. Along these lines, it could be argued that a useful implementation of human-centered ethical evaluation in the case of MT should involve the compartmentalization of MT ethics from general AI ethics, and the preference for regional frameworks of ethics for MT (Asscher, 2025, p. 102–109). This implies a give-and-take between MT ethics oriented to the specificities of translation, on the one hand, and universal ethics, reminiscent of the general protocols of AI ethics proposed so abundantly in recent years, on the other hand (Floridi et al., 2018).
Relating these ethical insights to MT can apply to both the increasingly autonomous decision-making of the tool itself, and the social conditions that underpin its development and maintenance (Asscher, 2025, p. 98–101). The development and use of MT has already had vast consequences for many stakeholders. The ownership and distribution of anonymized translation data needed for the development of MT systems, and the re-use of this data to fine-tune MT, are some of the issues at stake, as there is currently no compensation for the original human translators who created the data, and MT systems serve causes that are opaque to these translators and might contradict their values (Moorkens, 2022, p. 123–126). Issues of confidentiality and privacy are also pertinent, as personal translation data is utilized to train MT systems without regulation, rendering this data potentially identifiable (Nunes Vieira et al., 2022). The risks involved in high-stakes use of MT may strain the question of moral and legal responsibility even further, for example in medical and legal situations, where translation errors may be particularly consequential (Vieira et al., 2021). Then, there are the sometimes problematic uses of MT in the professional translation workflow, and the broader issues of sustainability of the translation industry and environmental concerns (Bowker, 2020b; Skadina et al., 2023; Shterionov and Vanmassenhove, 2023). MT ethics also apply to the cultural and gender bias of contemporary LLMs (Gallegos et al., 2024), which may be manifested in translation, or the censorship recently enacted in some generative AI tools concerning certain charged historical occurrences, reinforcing unequal power relations across cultures (Wang et al., 2025; Bianchi et al., 2023).
Considering these points, human-centered MT research must pursue richer assessments of the moral consequences of its use in society. Studies of MT ethicality are valuable regardless of immediate implementability and can inform business and scientific leadership in governing the field and shaping MT agency and social implications.
# 6 Human-Centered MT Evaluation
MT evaluation has focused on benchmarking systems, or rating individual outputs, using automatic or human ratings of translation quality as ground truth (White and O’Connell, 1993; Koehn and Monz, 2006; Graham et al., 2013; Läubli et al., 2020; Freitag et al., 2021). Some recent proposals call for broadening its scope to measure social and environmental impact in addition to performance (Moorkens et al., 2024; Santy et al., 2021). A human-centered approach can draw from conceptualizations of the translation process and product quality from Translation Studies (Liu et al., 2024), and HCI methodology for evaluating systems in their socio-technical context (Liebling et al., 2022).
From Generic to Situated MT A key shift is from generic, context-independent evaluation toward situated assessments of fitness-for-purpose and stakeholder impact. Holistic quality scores (Graham et al., 2013) are already complemented by fine-grained annotations such as MQM (Lommel et al., 2014). In contrast, Translation Studies work emphasizes evaluating translations based on their suitability for their intended purpose rather than adhering to a one-size-fits-all notion of quality (Bowker, 2009; Chesterman and Wagner, 2014; Colina, 2008). The impact of MT errors thus needs to be assessed in context (Agrawal et al., 2024), as general benchmarks may obscure rare but extreme errors (Shi et al., 2022). Expert knowledge might be required, for instance to determine whether an adequacy error poses a clinical risk (Khoong et al., 2019), or to assess social harms such as gender bias (Savoldi et al., 2021, 2024), name mistranslation (Sandoval et al., 2023), and lack of cultural awareness (Yao et al., 2024). Providing an “evaluation brief” (Liu et al., 2024) can describe the circumstances surrounding the translation creation, who it is for, and how it is intended to be used. Evaluation through question answering is another way to assess if translations preserve important information (Ki et al., 2025; Fernandes et al., 2025).
From Annotation to Human Studies Human studies that incorporate MT within the relevant end-user task can help us assess the impact of MT more comprehensively. Such tasks might align closely with the production and understanding of translations, such as post-editing MT (Castilho and O’Brien, 2016; Castilho and O’Brien, 2017; Bawden et al., 2024; Savoldi et al., 2024), reading comprehension (Jones et al., 2005; Scarton and Specia, 2016), gisting (Nurminen, 2021a) or triage tasks (Martindale and Carpuat, 2022). MT might be a tool in support of another task, such as collaborative information exchange in teams (Yamashita and Ishida, 2006), social media consumption (Lim et al., 2018), hiring and personnel decision making (Zhang et al., 2022) or housing information seeking (Xiao et al., 2025), and everyday conversations (Robertson and Díaz, 2022). As Santy et al. (2021) show, in such real-world cases, machine-aided translation systems can bring significant value to end-users. Nevertheless, this value is often contextualized within trade-offs among time, performance, and computational cost, especially given limited technical accessibility and the prevalence of low-resource language settings.
From Static Benchmarks to Iterative Design Evaluations with human users do not occur only at the end of a project; rather, they drive the iterative refinement cycle of the entire human-centered design process. This process typically begins with needs-finding studies to identify the social problem that technical solutions aim to resolve (Gao and Fussell, 2017; Gao et al., 2022; Xiao et al., 2024). It is often followed by co-design activities, where existing tools are used as technology probes to elicit inputs from targeted user groups on MT design. Subsequent phases include usability testing or clinical trials after each round of system development to determine the degree of success (Khoong and Rodriguez, 2022). A wealth of frameworks exists to guide this process, including Human-Centered Design, Participatory Design, and Value Sensitive Design (Friedman and Hendry, 2019), all of which foreground the values of direct and indirect stakeholders. MT evaluation can also draw from frameworks for trustworthy AI, particularly methods for studying mental models (Bansal et al., 2019), trust calibration (Vereschak et al., 2021), and how a human-AI work system performs (Hoffman et al., 2023). These efforts aim to ensure that MT systems can account for the complex dynamics between system outputs, user interpretations, and downstream consequences, thereby requiring interdisciplinary collaborations and tailored study designs.
# 7 Human-Centered MT Design
This section outlines emerging techniques that can reframe MT as a contextual, potentially interactive process responsive to users’ needs, moving beyond traditional sequence transduction. It provides a richer toolbox to support MT literacy (Section 3) and builds on past empirical studies of human-MT interaction (Section 4).
Richer Inputs, Many Outputs Human-Centered MT must adapt outputs to the audience and context. Research has already explored controlling formality (Sennrich et al., 2016; Rippeth et al., 2022), style (Niu et al., 2017; Agarwal et al., 2023), complexity (Agrawal and Carpuat, 2019; Oshika et al., 2024), and personalization (Mirkin and Meunier, 2015; Rabinovich et al., 2016). Adaptation may also require explaining content (Srikanth and Li, 2021; Han et al., 2023; Saha et al., 2025), or warning about cultural misunderstandings (Pituxcoosuvarn et al., 2020; Yao et al., 2024). However, it is still unclear how users and other stakeholders can guide these systems in proactive and ecologically valid ways.
More contextual inputs are needed, similar to translator briefs (Castilho and Knowles, 2024). MT work has considered incorporating domain knowledge (Clark et al., 2012; Chu and Wang, 2018), style labels (Sennrich et al., 2016; Niu et al., 2017), example translations (Xu et al., 2023; Agrawal et al., 2023; Bouthors et al., 2024), and terminology (Alam et al., 2021; Michon et al., 2020). Some also address long-form (Karpinska and Iyyer, 2023; Peng et al., 2024) and conversational translation (Bawden et al., 2021; Pombal et al., 2024). However, these efforts usually consider one dimension of context at a time; we still need more holistic approaches that take a broad view of context (Castilho and Knowles, 2024) and incorporate knowledge and feedback needed for culturally appropriate outputs (Tenzer et al., 2024; Saha et al., 2025).
An Iterative Translation Process LLMs enable multi-stage translation workflows, including pre-editing, evaluation, and post-editing (Briakou et al., 2024; Alves et al., 2024). Pre-editing involves rewriting source texts to improve MT output (Bowker and Ciro, 2019a; Štajner and Popović, 2019; Ki and Carpuat, 2025), while post-editing—either human or automatic—is studied widely (Lin et al., 2022; Vidal et al., 2022; Ki and Carpuat, 2024). Yet, most work remains system-centric. Interactive approaches designed for professional translators (Green et al., 2013; Briva-Iglesias et al., 2023) suggest benefits from involving lay users with diverse goals and levels of proficiency.
Scale & Context How can we specialize models for specific contexts while reaping the benefits of scale (Team et al., 2022; Johnson et al., 2017; Vilar et al., 2023; Kocmi et al., 2024)? Work in this direction could build on efforts to structure resources for horizontal (across languages) and vertical (across domains) generalization (Ishida, 2006; Rehm, 2023), and techniques to support task (Ye et al., 2022; Alves et al., 2024), language (Blevins et al., 2024), and domain and terminology (Segonne et al., 2024) specialization in LLMs.
Decentering MT Centering people means recognizing that MT is often just one part of a broader workflow, where the MT output is not the end product. MT today often participates in content co-production with humans, rather than serving only as source-to-target conversion. This can be done via synchronized bilingual writing (Crego et al., 2023;
Xiao et al., 2024) or using translation as an aid for scientific writing (O’Brien et al., 2018; Steigerwald et al., 2022; Ito et al., 2023). In those settings, even when translating an abstract, the translation might be more of an adaptation than a literal translation (Bawden et al., 2024). Translation can be implicit or partial, when supporting simultaneous interpreters (Grissom et al., 2024), enabling natural translanguaging practices of bilinguals (Zhang et al., 2025b), or searching for texts written in a foreign language given a native language query (Galuščáková et al., 2022; Nair et al., 2022). In those settings, human-MT interface design is critical for lay users to remain aware of features of the targeted content and to develop strategies for navigating it (Petrelli et al., 2006). The need for intelligent interface design is particularly pronounced in LLM-powered multilingual communication and user interactions with conversational agents, where models must interpret and generate content for fluid language use while adapting to user goals, styles, and cultural norms. To support this, a prompt engineering playground with customized MT and user interfaces may enhance the accessibility of LLMs for a broader population (Mondshine et al., 2025).
Risk Management Reliable MT should help users weigh the benefits of MT against the risks it may pose. Quality estimation techniques designed for explainability have provided a good foundation toward this goal (Fomicheva et al., 2021; Guerreiro et al., 2023; Briakou et al., 2023; Specia et al., 2018). That said, growing evidence from user studies shows that more work is needed to identify and assess risks (Koponen and Nurminen, 2024), generate actionable feedback in user-specified contexts (Zouhar et al., 2021; Mehandru et al., 2023), determine when and how to disclose the use of MT (Simard, 2024; Xiao et al., 2024), provide useful descriptions of model properties (Mitchell et al., 2019), promote MT literacy among lay users (Bowker and Ciro, 2019a), and support the development of accurate user mental models (Bansal et al., 2019). Frameworks from human-centered explainable AI, such as seamful design (Ehsan et al., 2022), can help pinpoint gaps between system affordances and the needs of human stakeholders, fostering better alignment.
In sum, while existing work offers a rich toolbox for human-centered MT, more research is needed on designing interactions that preserve user agency and support effective, trustworthy use. This includes new interfaces that balance simplicity and flexibility, and foundational work on training models for controllability and context-awareness.
# 8 Case Study: Toward Reliable Translation for Clinical Care
Research on MT for clinical settings illustrates how human studies can drive the cycle of human-centered MT (Section 6) by understanding specific contexts of use (Section 2) to guide interface and model design decisions (Sections 4 and 7).
Understanding Needs Language barriers are a major source of healthcare disparities (Cano-Ibáñez et al., 2021), yet access to professional interpreters remains limited (Flores, 2005; Ortega et al., 2023). MT can potentially support clinical care, but reliability is a critical concern: MT errors can cause serious harm in, for example, discharge instructions from emergency departments (Khoong et al., 2019; Taira et al., 2021), pediatric care (Brewster et al., 2024) or urology (Rao et al., 2024), with disparate impact across languages. Yet, MT frequently mediates interactions between healthcare providers and patients in practice (Genovese et al., 2024). While dedicated MT tools have been developed for clinical settings (Starlander et al., 2005; Bouillon et al., 2005), generic apps such as Google Translate are still most commonly used (Nunes Vieira, 2024). In the face of challenges such as time constraints, cultural barriers, and medical literacy gaps, clinicians develop their own workarounds when using MT, such as back-translation or relying on non-verbal cues to assess understanding (Mehandru et al., 2022).
Research Directions Generic MT tools thus often fall short in clinical care, and needs-finding studies motivate research into integrating pre-translated medical phrases, multimodal communication support, and interactive tools to assess mutual understanding. A human study evaluated feedback mechanisms to assist physicians in assessing the reliability of MT outputs in clinical settings, finding that quality estimation tools generally improve physicians’ reliance on MT but fail to detect the most clinically severe errors (Mehandru et al., 2023). Complementary efforts focus on developing custom MT approaches that prioritize reliability and verifiability, by using vetted canonical phrases to scaffold the translation (Bouillon et al., 2017) or guide users in crafting better MT inputs (Robertson, 2023). While these works focus on text-based
MT, many healthcare use cases also warrant consideration of interaction using speech (Spechbach et al., 2019), sign language (Esselink et al., 2024) and pictographs (Gerlach et al., 2024). Cultural differences significantly impact the style and content of communication in healthcare (Kreuter and McClure, 2004; Brooks et al., 2019) and are another area where much research is needed. Khoong and Rodriguez (2022) further outline key domains for future research, including developing interactive tools for different types of communication, enhancing risk assessment, and assessing understanding and patient satisfaction on top of MT correctness.

# Abstract

Machine Translation (MT) tools are widely used today, often in contexts where professional translators are not present. Despite progress in MT technology, a gap persists between system development and real-world usage, particularly for non-expert users who may struggle to assess translation reliability. This paper advocates for a human-centered approach to MT, emphasizing the alignment of system design with diverse communicative goals and contexts of use. We survey the literature in Translation Studies and Human-Computer Interaction to recontextualize MT evaluation and design to address the diverse real-world scenarios in which MT is used today.
# 1. Introduction
Large reasoning models (LRMs), such as OpenAI o1 (OpenAI, 2024a) and DeepSeek-R1 (DeepSeek-AI et al., 2025), have demonstrated remarkable success by extending the length of reasoning through large-scale reinforcement learning (RL). In recent months, both the open-source community and commercial organizations have followed this trend, achieving significant advances on complex tasks such as Olympiad mathematics competitions and competitive programming (Anthropic, 2025; Google DeepMind, 2025; Hu et al., 2025; Kimi Team, 2025; Seed et al., 2025; Yu et al., 2025; Zeng et al., 2025). The success of LRMs has been primarily attributed to a new scaling dimension of test-time compute: as more FLOPs are dedicated to extended reasoning processes during generation, model performance shows consistent improvement, particularly for complex real-world applications (Jimenez et al., 2024; OpenAI, 2025).
However, continuously extending the reasoning process is challenging within the traditional transformer architecture (Vaswani et al., 2017), due to the inherent quadratic computational complexity of the softmax attention mechanism. While previous works have proposed various techniques to mitigate this issue—such as sparse attention (Beltagy et al., 2020; Lu et al., 2025; Yuan et al., 2025; Zaheer et al., 2020), linear attention (Arora et al., 2024; Choromanski et al., 2021; Du et al., 2025; He et al., 2024; Katharopoulos et al., 2020; Peng et al., 2024b, 2021; Qin et al., 2021, 2022a,b, 2024a,c; Shen et al., 2024; Sun et al., 2025, 2023; Zhang et al., 2024), linear attention with delta decay (Peng et al., 2025; Yang et al., 2024a,b), state space models (Dao and Gu, 2024; Glorioso et al., 2024; Gu and Dao, 2024; Gu et al., 2020, 2022, 2023; Gupta et al., 2022; Jamba Team, 2024; Ren et al., 2024), and linear RNNs (Behrouz et al., 2024; Chou et al., 2024; Chung et al., 2014; Hochreiter and Schmidhuber, 1997; Martin and Cundy, 2018; Peng et al., 2023, 2024a; Qin et al., 2023, 2024d; Siems et al., 2025; Sun et al., 2024; von Oswald et al., 2025)—these approaches have not been fully validated in large-scale reasoning models, and nearly all competitive LRMs to date still rely on traditional attention designs. An exception is the Hunyuan-T1 model (Tencent AI Lab, 2025), which employs the Mamba architecture (Dao and Gu, 2024; Gu and Dao, 2024). However, this model is not open-sourced and few details are disclosed. In this work, we aim to build and open-source a large reasoning model that can efficiently scale up test-time compute and compete with state-of-the-art reasoning models.
We introduce MiniMax-M1, a reasoning model with a hybrid Mixture-of-Experts (MoE) architecture and Lightning Attention (Qin et al., 2024b), an I/O-aware implementation of a linear attention variant (Qin et al., 2022a). MiniMax-M1 is developed based on our previous MiniMax-Text-01 (MiniMax et al., 2025) model, and comprises 456 billion parameters in total, of which 45.9 billion are activated per token across 32 experts. In our attention design, a transformer block with softmax attention follows every seven transnormer blocks (Qin et al., 2022a) with lightning attention. This design theoretically enables efficient scaling of reasoning lengths to hundreds of thousands of tokens, as illustrated in Figure 1 (Right). For example, compared to DeepSeek R1, M1 consumes less than 50% of the FLOPs at a generation length of 64K tokens, and approximately 25% of the FLOPs at a length of 100K tokens. This substantial reduction in computational cost makes M1 significantly more efficient during both inference and large-scale RL training. Furthermore, owing to its lightning attention mechanism and in line with MiniMax-Text-01, our M1 model natively supports a context length of up to 1 million tokens – eight times the context size of DeepSeek R1 and an order of magnitude greater than all open-weight LRMs available to date. These features make M1 particularly well-suited for addressing complex, real-world tasks that require processing long inputs and generating extended thinking. A comparison of the maximum input and output lengths of M1 and other leading models is given in Table 1.
To develop our M1 model, we first continue pretraining MiniMax-Text-01 on 7.5T tokens from a carefully curated, reasoning-intensive corpus. Subsequently, we perform supervised fine-tuning (SFT)
Table 1 The maximum supported input length and output length (# tokens) of different reasoning models. For Claude-4 we refer to the Claude-4-Opus model. “DS-R1” represents the latest DeepSeek-R1-0528 model.
to inject certain chain-of-thought (CoT) (Wei et al., 2022) patterns, establishing a strong foundation for reinforcement learning, the core stage of M1 development. Notably, our RL scaling with M1 is made efficient through innovations from two key perspectives: (1) We propose a novel RL algorithm, CISPO, which abandons the trust region constraint and instead clips the importance sampling weights to stabilize training. This approach always leverages all tokens for gradient computations, achieving enhanced efficiency compared to GRPO (Shao et al., 2024) and DAPO (Yu et al., 2025) empirically – For example, in a controlled study based on Qwen2.5-32B models (Qwen et al., 2025), CISPO achieves a 2x speedup compared to DAPO; (2) Although the hybrid-attention design in M1 naturally allows for efficient RL scaling, unique challenges arise when scaling RL with this architecture. For instance, we find a precision mismatch between the training and inference kernels of our architecture, which prevents reward growth during RL training. We develop targeted solutions to address these challenges and successfully scale up RL with this hybrid architecture. In the end, our efficient RL framework enables us to complete a full RL run of MiniMax-M1 within 3 weeks using 512 H800 GPUs—equivalent to a rental cost of approximately $0.53M USD.
In addition to methodological innovations, we curate a diverse set of problems and environments for RL training. Our data encompasses both verifiable and non-verifiable problems. For verifiable problems that are typically considered critical for reasoning learning, we not only include mathematical reasoning and competitive programming problems as commonly used in related works, but also leverage our previous data synthesis framework SynLogic (Liu et al., 2025a) to generate diverse logical reasoning problems spanning 41 distinct tasks. Furthermore, we construct sandboxes for complex software engineering (SE) environments derived from SWE-bench (Jimenez et al., 2024), and conduct RL on real-world SE problems with execution-based rewards to improve M1’s performance in challenging SE scenarios. Our unverifiable problems span a broad range of domains such as question answering and creative writing, where we use generative reward models to provide the feedback.
We train two versions of MiniMax-M1 models with 40K and 80K tokens of maximum generation length respectively, which leads to two models MiniMax-M1-40k and MiniMax-M1-80k. MiniMax-M1-80k outperforms MiniMax-M1-40k on complex mathematical and coding tasks, further demonstrating the benefits of scaling test-time compute. As shown in Figure 1 (Left), MiniMax-M1 surpasses previous leading open-weight models such as the original DeepSeek-R1 and Qwen3-235B overall, with particular advantages in complex software engineering, tool-using, and long-context tasks. Compared to the latest DeepSeek-R1-0528 model, MiniMax-M1 lags in mathematical and coding competitions but achieves comparable or superior performance in more realistic tool-using and long-context scenarios. Notably, MiniMax-M1 outperforms Gemini 2.5 Pro on the agentic tool use benchmark TAU-Bench (Yao et al., 2025), and surpasses OpenAI o3 and Claude 4 Opus on long-context understanding benchmarks. With efficient test-time scaling, we contend that MiniMax-M1 establishes a strong foundation for next-generation language model agents to address real-world challenges.
To facilitate collaboration and advancement in the field, we have made our models publicly available at GitHub and Hugging Face. They are now supported by both the vLLM and Transformers frameworks, with detailed deployment guides available at vLLM and Transformers respectively. This enables easy integration of MiniMax-M1 into modern inference pipelines. We also provide a commercial API at minimax.io.
# 2. Preparation for Scalable RL: Continual Pretraining and SFT
In this work, we focus on scaling up reinforcement learning to enhance the reasoning capabilities of MiniMax-Text-01. To facilitate scalable RL training, we first carry out continual pretraining of our base model to strengthen its intrinsic reasoning abilities. Subsequently, we perform a cold-start supervised fine-tuning (SFT) stage to inject specific reasoning patterns into the model, thereby providing a stronger foundation for the subsequent RL phase.
# 2.1. Continual Pre-Training: Foundation for RL Scaling
To enhance the reasoning and long context capabilities of the foundation model while ensuring diversity, we continue training the MiniMax-Text-01 model on an additional 7.5T tokens with optimized data quality and mixture.
Training Data. We refine our pretraining Web and PDF parsing mechanisms and enhance our heuristic cleaning rules to ensure a high recall rate for mathematical and code-related data. We prioritize the extraction of natural Question-Answer (QA) pairs from a diverse range of sources, including webpages, forums, and textbooks, while strictly avoiding the use of synthetic data. Additionally, we conduct semantic deduplication on the QA data to maintain its diversity and uniqueness. Furthermore, we increase the proportion of STEM (Science, Technology, Engineering, and Mathematics), code, book, and reasoning-related data to 70%. This significantly enhances the foundation model’s ability to handle complex tasks without compromising its other general capabilities.
Training Recipe. We decrease the coefficient of the MoE auxiliary loss and adjust the parallel training strategy to support a larger training micro batch size, which mitigates the detrimental effects of the auxiliary loss on overall model performance. Based on MiniMax-Text-01, we continue training with a constant learning rate of 8e-5 for 2.5T tokens, followed by a decay schedule over 5T tokens down to 8e-6.
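The two-phase schedule above can be sketched as a function of tokens seen. The shape of the decay between 8e-5 and 8e-6 is not specified in the text, so a linear decay is assumed here purely for illustration:

```python
def lr_at(tokens_seen_t: float) -> float:
    """Learning rate as a function of tokens seen (in trillions).

    Phase 1: constant 8e-5 for the first 2.5T tokens.
    Phase 2: decay from 8e-5 down to 8e-6 over the next 5T tokens.
    The decay shape is not stated in the text; linear is assumed here.
    """
    peak, floor = 8e-5, 8e-6
    if tokens_seen_t <= 2.5:
        return peak
    # Fraction of the 5T-token decay phase completed, capped at 1.
    frac = min((tokens_seen_t - 2.5) / 5.0, 1.0)
    return peak + frac * (floor - peak)
```

With a cosine or other decay shape only the interpolation line would change; the endpoints (8e-5 at 2.5T, 8e-6 at 7.5T) are fixed by the text.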
Long Context Extension. For a hybrid-lightning architecture model with higher convergence complexity, we observed that excessively aggressive extensions of the training length can lead to sudden gradient explosions during training, making the optimization process extremely challenging. We attribute this to the parameter optimization of the earlier layers not keeping up with the changes in the later layers – for lightning attention, the earlier and later layers have different decay rates, which makes the earlier layers focus more on local information. We alleviate this issue by adopting a smoother extension of context length across four stages, starting from a 32K context window and ultimately extending the training context to 1M tokens.
# 2.2. Supervised Fine-Tuning: Focused Alignment for Efficient RL
After continual pretraining, we conduct Supervised Fine-Tuning (SFT) to instill desired behaviors like reflection-based Chain-of-Thought (CoT) reasoning using high-quality examples, creating a strong starting point for more efficient and stable RL in the next stage. Specifically, we curate data samples with long CoT responses. These data samples cover diverse domains such as math, coding, STEM, writing, QA, and multi-turn chat. Math and coding samples account for around 60% of all the data.
# 3. Efficient RL Scaling: Algorithms and Lightning Attention
As shown in Figure 1 (Right), the M1 architecture demonstrates a clear efficiency advantage during inference. This naturally facilitates efficient RL scaling where increasingly longer responses are generated. However, as pioneers in scaling up RL with this hybrid architecture, we encounter unique challenges during the process, and the RL procedure can become unstable or even fail due to various issues. To address these difficulties, we develop targeted solutions that enable us to successfully scale up RL training for M1. In addition, we propose a new RL algorithm that achieves greater RL efficiency compared to existing methods. These dual contributions yield an efficient and scalable RL framework for training M1, where the complete training cycle requires 3 weeks on 512 H800 GPUs—equivalent to a rental cost of approximately $0.53M USD. In this section, we first provide general context on RL and present our novel RL algorithm, and then describe the specific challenges we face with the hybrid architecture, along with the solutions we devise to overcome them.
# 3.1. Efficient RL Scaling with CISPO
Background. For questions $q$ from a dataset $\mathcal{D}$, we denote $\pi$ as the policy model parameterized by $\theta$, and $o$ as the response generated by the policy. PPO (Schulman et al., 2017) adopts the following objective to optimize the policy to maximize the expected return, and a clipping operation is applied to stabilize training:
$$
\mathcal{J}_{\mathrm{PPO}}(\theta) = \mathbb{E}_{q \sim \mathcal{D},\, o_i \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)} \left[ \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta) \hat{A}_{i,t},\ \mathrm{clip}\big(r_{i,t}(\theta), 1-\epsilon, 1+\epsilon\big) \hat{A}_{i,t} \right) - \beta\, D_{KL}\big(\pi_{\theta} \,\|\, \pi_{\mathrm{ref}}\big) \right],
$$
where $r_{i,t}(\theta) = \frac{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, o_{i,<t})}$ is the importance sampling (IS) weight, which is used to correct the distribution during off-policy updates, because we use $\pi_{\theta_{\mathrm{old}}}$ to collect trajectories to update the policy via multiple steps in a minibatch manner. While PPO requires a separate value model to compute the advantage $\hat{A}_{i,t}$, GRPO (Shao et al., 2024) eliminates the value model and defines the advantage as the output reward relative to other responses in the group:
$$
\hat{A}_{i,t} = \frac{R_i - \mathrm{mean}\big(\{R_j\}_{j=1}^{G}\big)}{\mathrm{std}\big(\{R_j\}_{j=1}^{G}\big)},
$$
where $R_i$ is the reward of the response, and $G$ responses $\{o_i\}_{i=1}^{G}$ are sampled for each question. The reward is either from rule-based verifiers such as in mathematical problem solving, or from a reward model.
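The group-relative advantage above takes only a few lines to compute. In this sketch, the small epsilon guarding against a zero-variance group (all rewards equal) is an implementation detail assumed here, not stated in the text:

```python
def grpo_advantages(rewards):
    """Group-relative advantages: (R_i - mean({R_j})) / std({R_j}).

    `rewards` holds the scalar rewards R_1..R_G of the G responses
    sampled for one question. The 1e-8 epsilon avoids division by
    zero when all rewards in the group are identical (an assumed
    implementation detail).
    """
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

Note that because the advantage is defined per response, every token $t$ of response $o_i$ shares the same value $\hat{A}_{i,t}$.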
Issues of Token Clipping. In our initial experiments with the hybrid architecture under the zero-RL setting, we observed that the GRPO algorithm adversely affected training performance and failed to effectively promote the emergence of long CoT reasoning behaviors. Through a series of controlled ablation studies, we ultimately identified the undesirable clipping operation in the original PPO/GRPO loss as the primary factor contributing to degraded learning performance. Specifically, we found that tokens associated with reflective behaviors (e.g., However, Recheck, Wait, Aha), which often serve as “forks” in reasoning paths, were typically rare and assigned low probabilities by our base model. During policy updates, these tokens were likely to exhibit high $r_{i,t}$ values. As a result, these tokens were clipped out after the first on-policy update, preventing them from contributing to subsequent off-policy gradient updates. This issue was particularly pronounced in our hybrid-architecture model and further hindered the scalability of reinforcement learning. These low-probability tokens, however, are often crucial for stabilizing entropy (Cui et al., 2025) and facilitating scalable RL (Wang et al., 2025). Although DAPO attempts to mitigate this issue by increasing the upper clipping bound (Yu et al., 2025), we found this approach to be less effective in our setup, which involved 16 rounds of off-policy updates per generation batch.
Figure 2 Comparison of GRPO, DAPO, and our proposed CISPO on AIME 2024, based on Qwen2.5-32B-base. CISPO outperforms both GRPO and DAPO in terms of performance at the same number of training steps, and achieves comparable performance to DAPO using 50% of the training steps.
The CISPO Algorithm. In response, we propose a new algorithm that explicitly avoids dropping tokens, even those associated with large updates, while inherently maintaining entropy within a reasonable range to ensure stable exploration. First, recall that the vanilla REINFORCE objective with corrected distribution for offline updates is:
$$
\mathcal{J}_{\mathrm{REINFORCE}}(\theta) = \mathbb{E}_{(q,a) \sim \mathcal{D},\, o_i \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)} \left[ \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \mathrm{sg}\big(r_{i,t}(\theta)\big) \hat{A}_{i,t} \log \pi_{\theta}(o_{i,t} \mid q, o_{i,<t}) \right],
$$
where $\mathrm{sg}(\cdot)$ denotes the stop-gradient operation. Rather than clipping the token updates as in PPO/GRPO, we instead clip the importance sampling weight in Eq. 3 to stabilize training. We term our approach CISPO (Clipped IS-weight Policy Optimization). Adopting the group relative advantage from GRPO and the token-level loss (Liu et al., 2025b; Yu et al., 2025), CISPO optimizes the following objective:
$$
\mathcal{J}_{\mathrm{CISPO}}(\theta) = \mathbb{E}_{(q,a) \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)} \left[ \frac{1}{\sum_{i=1}^{G} |o_i|} \sum_{i=1}^{G} \sum_{t=1}^{|o_i|} \mathrm{sg}\big(\hat{r}_{i,t}(\theta)\big) \hat{A}_{i,t} \log \pi_{\theta}(o_{i,t} \mid q, o_{i,<t}) \right],
$$
where $\hat { r } _ { i , t } ( \theta )$ is the clipped IS weight:
$$
\hat{r}_{i,t}(\theta) = \mathrm{clip}\left( r_{i,t}(\theta),\ 1 - \epsilon_{\mathrm{low}}^{\mathrm{IS}},\ 1 + \epsilon_{\mathrm{high}}^{\mathrm{IS}} \right).
$$
We note that without weight clipping, CISPO reduces to the standard policy gradient objective. In our experiments, we did not impose a lower bound on the IS weight, setting $\epsilon_{\mathrm{low}}^{\mathrm{IS}}$ to a large value; instead, we only tuned $\epsilon_{\mathrm{high}}^{\mathrm{IS}}$. Although the gradient of Eq. 4 is slightly biased due to weight clipping, this approach preserves gradient contributions from all tokens, especially in long responses. CISPO proves effective in our experiments, helping reduce variance and stabilizing RL training. In addition, we utilize the dynamic sampling and length penalty techniques from Yu et al. (2025). In line with other recent works (Hu et al., 2025; Yu et al., 2025), there is no KL penalty term in CISPO.
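A minimal sketch of the clipped-IS coefficient that multiplies each token's $\log \pi_\theta$ term in the CISPO objective. The epsilon values here are illustrative defaults, not the paper's settings; following the text, the lower bound is left effectively open by defaulting eps_low to a very large value:

```python
def cispo_token_weights(ratios, advantages, eps_high=0.2, eps_low=1e9):
    """Per-token coefficients sg(r_hat) * A_hat in the CISPO objective.

    r_hat clips the IS weight r into [1 - eps_low, 1 + eps_high].
    A very large eps_low (as in the text) makes the lower bound
    inactive. Unlike PPO/GRPO clipping, every token keeps a nonzero
    gradient coefficient; only the magnitude of large IS weights is
    capped. In a real implementation the coefficient would be
    detached (stop-gradient) before multiplying log-prob terms.
    """
    lo, hi = 1.0 - eps_low, 1.0 + eps_high
    return [max(lo, min(r, hi)) * a for r, a in zip(ratios, advantages)]
```

Contrast with PPO/GRPO: a rare reflective token with ratio 3.0 would be clipped out of the gradient entirely after the first on-policy update, whereas here it still contributes with its weight capped at 1 + eps_high.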
A General Formulation. While we adopt CISPO in our experiments, here we further present a unified formulation by introducing a token-wise mask into the CISPO objective. This allows for hyperparameter tuning to control whether, and under what conditions, gradients from specific tokens should be dropped:
$$
\mathcal{J}_{\mathrm{unify}}(\theta) = \mathbb{E}_{(q,a) \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)} \left[ \frac{1}{\sum_{i=1}^{G} |o_i|} \sum_{i=1}^{G} \sum_{t=1}^{|o_i|} \mathrm{sg}\big(\hat{r}_{i,t}(\theta)\big) \hat{A}_{i,t} \log \pi_{\theta}(o_{i,t} \mid q, o_{i,<t})\, M_{i,t} \right].
$$
The mask $M_{i,t}$ is equivalent to the mask implicitly defined in the PPO trust region:
$$
M_{i,t} = \begin{cases} 0 & \text{if } \hat{A}_{i,t} > 0 \text{ and } r_{i,t}(\theta) > 1 + \epsilon_{\mathrm{high}}, \\ 0 & \text{if } \hat{A}_{i,t} < 0 \text{ and } r_{i,t}(\theta) < 1 - \epsilon_{\mathrm{low}}, \\ 1 & \text{otherwise}. \end{cases}
$$
This unified loss formulation can flexibly represent different clipping strategies under a common framework.
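As a concrete illustration, the clipped IS weight and the trust-region mask can be sketched in a few lines of plain Python. This is a scalar toy, not our training code: in practice the clipped weight is wrapped in a stop-gradient so it scales the log-probability gradient without being differentiated itself, and the hyperparameter values here are illustrative.

```python
def clip(x, lo, hi):
    """Clamp x to the interval [lo, hi]."""
    return max(lo, min(hi, x))

def cispo_token_weight(ratio, advantage, eps_high=0.2, eps_low=1e9):
    """Clipped IS weight for one token: sg(clip(r, 1 - eps_low, 1 + eps_high)) * A.
    eps_low is effectively disabled by setting it very large, leaving only the
    upper clip to tune, as described above."""
    return clip(ratio, 1.0 - eps_low, 1.0 + eps_high) * advantage

def unified_mask(ratio, advantage, eps_high=0.2, eps_low=0.2):
    """Trust-region mask M_{i,t}: zero out tokens whose gradient standard PPO
    clipping would drop; CISPO corresponds to keeping the mask at 1."""
    if advantage > 0 and ratio > 1 + eps_high:
        return 0
    if advantage < 0 and ratio < 1 - eps_low:
        return 0
    return 1
```

Note that with the mask fixed to 1, every token contributes a gradient, only with a bounded weight, which is exactly the variance-reduction behavior CISPO targets.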
Empirical Validation of CISPO. To validate the effectiveness of CISPO, we empirically compare it with DAPO and GRPO in a zero-RL training setting. Specifically, we apply different RL algorithms to train the Qwen2.5-32B-base model on the mathematical reasoning dataset from Yu et al. (2025), and report performance on the AIME 2024 benchmark. As shown in Figure 2, CISPO significantly outperforms both DAPO and GRPO with the same number of training steps. Notably, CISPO demonstrates superior training efficiency compared to other approaches; for example, it matches DAPO's performance with only 50% of the training steps.
# 3.2. Efficient RL Scaling with Lightning Attention – Challenges and Recipes
As shown in Figure 1 (Right), we emphasize that our hybrid attention inherently enables more efficient RL scaling compared to traditional attention designs, since rollout computation and latency are often the primary bottlenecks in RL training. However, as pioneers in conducting large-scale RL experiments with this novel architecture, we encountered unique challenges and developed targeted solutions, as we describe below.
Computational Precision Mismatch in Generation and Training. RL training is highly sensitive to computational precision. During our RL training, we observed a significant discrepancy in the probabilities of rolled-out tokens between training mode and inference mode, as shown in Figure 3 (Left). This discrepancy arose from a precision mismatch between the training and inference kernels. The issue was detrimental and prevented reward growth in our experiments. Interestingly, this issue did not appear in smaller, dense models with softmax attention. Through layer-by-layer analysis, we identified high-magnitude activations in the LM head at the output layer as the primary source of error. To address this, we increased the precision of the LM output head to FP32, thereby realigning the two theoretically identical probabilities, as demonstrated in Figure 3 (Right). This adjustment improved the correlation between training and inference probabilities from approximately 0.9x to 0.99x. Notably, this correlation metric remained stable throughout training, enabling successful reward increase.
Figure 3 | Probability of tokens in training-mode code vs. probability of tokens in inference-mode code. Each point in the figures represents an individual token. The Pearson correlation coefficient is indicated in the figures. Theoretically, the two probabilities should be identical, and all the tokens should be exactly on the diagonal line. Left: Correlation of the M1 model before our fix; Right: Correlation of the M1 model after applying our fix of using FP32 precision for the LM output head.
Optimizer Hyperparameter Sensitivity. We employ the AdamW (Loshchilov and Hutter, 2019) optimizer, and inappropriate configurations of $\beta_1$, $\beta_2$, and $\epsilon$ can lead to non-convergence during training (Molybog et al., 2023). For instance, using the default configuration from VeRL (Sheng et al., 2024), where betas $= (0.9, 0.999)$ and eps $= 1\mathrm{e}{-8}$, can result in such issues. We have observed that the gradient magnitudes in MiniMax-M1 training span a wide range, from 1e-18 to 1e-5, with the majority of the gradients being smaller than 1e-14. Furthermore, the correlation between the gradients of adjacent iterations is weak. Based on this, we set $\beta_1 = 0.9$, $\beta_2 = 0.95$, and eps $= 1\mathrm{e}{-15}$.
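The sensitivity to eps can be seen in a single scalar AdamW moment update (weight decay omitted; this toy function is an illustration, not our training implementation). With gradients around 1e-14, the default eps = 1e-8 dwarfs $\sqrt{\hat{v}}$ in the denominator and collapses the effective step by several orders of magnitude:

```python
import math

def adamw_step(g, m=0.0, v=0.0, t=1, lr=1e-5, beta1=0.9, beta2=0.95, eps=1e-15):
    """Magnitude of one bias-corrected Adam update for a scalar gradient g
    (weight decay omitted for clarity)."""
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    return lr * m_hat / (math.sqrt(v_hat) + eps)

g = 1e-14                                    # typical gradient magnitude we observed
step_small_eps = adamw_step(g, eps=1e-15)    # eps below sqrt(v_hat): healthy step
step_big_eps = adamw_step(g, eps=1e-8)       # default eps dominates: step vanishes
```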
Early Truncation via Repetition Detection. During RL training, we found that complex prompts could induce pathologically long and repetitive responses, whose large gradients threatened model stability. Our goal was to preemptively terminate these generation loops rather than penalize the already repetitive text. As simple string-matching is ineffective against varied repetition patterns, we developed a heuristic based on token probabilities. We observed that once a model enters a repetitive cycle, the probability for each token soars. Consequently, we implemented an early truncation rule: generation is halted if 3,000 consecutive tokens each have a probability above 0.99. This method successfully prevents model instability and improves generation throughput by eliminating these pathological, long-tail cases.
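The truncation rule above can be sketched as follows. In deployment the run counter would be updated incrementally during decoding rather than over a completed list of probabilities; the helper name here is illustrative.

```python
def should_truncate(token_probs, window=3000, threshold=0.99):
    """Halt generation once `window` consecutive tokens each exceed `threshold`
    probability -- the signature of a repetitive generation loop, where per-token
    probabilities soar."""
    run = 0
    for p in token_probs:
        run = run + 1 if p > threshold else 0
        if run >= window:
            return True
    return False
```

Because the check only counts a streak of near-deterministic tokens, varied surface-level repetition patterns that defeat string matching are still caught.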
# 4. Scaling Reinforcement Learning with Diverse Data
In this section, we describe the data and rewards we adopt for our RL stage. We incorporate a diverse set of environments into our RL training pipeline, including tasks that can be verified by rules and general tasks that must be verified through reward models. All these environments are integrated into the RL stage using a carefully designed curriculum.
# 4.1. Reasoning-Intensive Tasks with Rule-based Verification
Below, we introduce our data that can be verified by deterministic rules. For all the following tasks, we employ rule-based final correctness as the correctness reward, complemented by a format reward.
Mathematical Reasoning. Our initial mathematical dataset comprises hundreds of thousands of high-quality, competition-level problems, meticulously curated and organized from public sources and official mathematics competitions. These problems span a wide range of difficulty levels, each paired with a standard reference solution. Our data cleaning pipeline begins with the removal of incomplete samples and those exhibiting formatting or typographical errors. We subsequently apply embedding-based deduplication across the RL data sources and enforce a strict separation from the SFT dataset to avoid any overlap, as leakage from the SFT phase into the RL stage hinders exploration and undermines training effectiveness. Additionally, we employ both n-gram and embedding-based methods to eliminate potential contamination from commonly used mathematical benchmark test sets, thereby ensuring the integrity and fairness of our evaluations. We filter out samples containing multiple sub-problems, proof-based questions, and binary questions (e.g., true/false) that are susceptible to random guessing. Multiple-choice questions are reformulated into open-ended formats to better align with our reinforcement learning framework. Next, we employ our internal model to extract the final answers from the reference solutions, retaining only those samples whose extracted answers can be correctly parsed by our rule-based answer checker. Finally, we use a strong reasoning model to compute the pass@10 for each question and retain only those samples with a pass rate strictly between 0 and 0.9, resulting in a curated dataset of nearly 50K high-quality mathematical samples for our RL training.
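The final difficulty filter can be sketched as follows. Boundary handling at exactly 0 and 0.9 is an assumption, and the function name is illustrative:

```python
def filter_by_difficulty(samples, pass_rates, low=0.0, high=0.9):
    """Keep samples whose pass@10 rate lies strictly between `low` and `high`:
    never-solved problems give no learning signal, and always-solved problems
    give no room for improvement."""
    return [s for s, p in zip(samples, pass_rates) if low < p < high]
```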
Logical Reasoning. For logical reasoning data, we carefully select 41 logical reasoning tasks requiring non-trivial reasoning ability, such as cipher and Sudoku, and implement a data synthesis framework to synthesize all the data. Concretely, we utilize our SynLogic framework (Liu et al., 2025a) to implement the data synthesis pipeline, featuring task-specific data generators and rule-based task-specific verifiers that enable automatic logical data generation. We meticulously configure the difficulty parameters during generation, ensuring an appropriate learning challenge in the generated data. Specifically, to prevent inclusion of overly difficult instances, we establish an upper difficulty bound based on the solvability limits of current strong reasoning models, requiring their pass@10 rates to be greater than zero. Similarly, we set a lower difficulty bound using the lowest difficulty parameters for which the MiniMax-Text-01 model achieves pass rates between 0 and 0.5. This approach ensures the data maintains a balance between difficulty and learnability. In addition, as the model's capabilities improve during training, we increase the difficulty of the data in the later stages. Using this framework, we synthesize approximately 53K logical reasoning samples for RL training.
Competitive Programming. For the competitive programming problems, we collect publicly available problems from online judge platforms and popular coding websites. For problems lacking test cases, we develop an LLM-based workflow and use the MiniMax-Text-01 model to generate comprehensive test suites. Similar to our approach with mathematical reasoning datasets, we filter problems based on quality and difficulty using pass rates from model sampling, retaining moderately challenging and high-quality algorithmic problems. Through this process, we generate 30K competitive programming data samples for RL training.
Software Engineering. For the software engineering domain, inspired by SWE-bench (Jimenez et al., 2024), we construct verifiable reinforcement learning environments by leveraging real-world data from public GitHub repositories. Our dataset primarily comprises issues and pull requests (PRs) that encapsulate common software development challenges, including bug localization, code repair, and test case synthesis. To facilitate effective reinforcement learning, we develop a sophisticated containerized sandbox environment that simulates a realistic software development workflow. This environment enables the actual execution of code, providing direct and verifiable feedback on the correctness and efficacy of an agent’s proposed interventions. The pass/fail status of pre-defined or newly generated test cases serves as the primary reward signal for our RL framework. A successful execution that passes all relevant test cases yields a positive reward, while compilation errors, runtime failures, or test case regressions result in a zero or negative reward, thus providing a clear signal for policy optimization. Through this process, we curate several thousand high-quality data samples. Each sample includes a problem description (e.g., bug report from an issue), the initial faulty code, and a set of associated test cases. This setup allows our RL agent to learn to accurately pinpoint bugs, propose correct code fixes, and even synthesize new, effective test cases, with performance directly verifiable through the execution within our sandboxed environment.
# 4.2. General Domain Tasks with Model-based Feedbacks
In this section, we further extend the RL scope to a wider array of general domain tasks. As these tasks cannot be easily verified by rules, we utilize reward models to provide the feedback.
# 4.2.1. Data and Reward Models
Our general RL dataset consists of a total of 25K complex samples. These can be broadly categorized into two types: samples with ground-truth answers that are verifiable but difficult to validate using rules, and samples without ground-truth answers.
Tasks with Ground Truth. This category primarily includes STEM and other factual problems where answers are objective but may have multiple valid expressions. Such diversity often renders rule-based answer checkers inaccurate. Our data cleaning process is similar to that used in mathematical reasoning, but we use our Generative Reward Model (GenRM) as a verifier instead of relying on rule-based checkers. To evaluate consistency between ground-truth answers and model responses, we adopt a five-grade reward scale. First, we construct a human-annotated reward model benchmark, which covers a range of objective tasks across diverse knowledge and task domains, especially response–ground-truth pairs that rule-based checkers fail to judge accurately. Second, we evaluate the GenRM's effectiveness by comparing the Best-of-N (BoN) responses selected by GenRM against the pass@N metrics across several benchmarks. GenRM performance is assessed using its accuracy on the human-annotated benchmark and the performance gap between BoN and pass@N. These metrics guide experiments to optimize both the data distribution and the prompt design used during GenRM training.
Tasks without Ground Truth. This category encompasses a wider range of tasks, including instruction following, creative writing, etc. Prompts are sampled from a large pool based on our internal tagging system, ensuring a balanced training distribution across fine-grained domains. Even though these queries are typically open-ended and have no ground-truth answer, we seek to pair each query with a reference answer, which serves as a reference for reward model judgment. To this end, we first generate responses with various internal and external models, and these reference answers then undergo our internal quality evaluation. During RL training, we adopt a pairwise comparison framework to evaluate model responses. Each comparison yields a score of -1, 0, or 1, indicating whether the model's output is worse than, similar to, or better than a reference answer. Particularly for instruction-following tasks with constraints, we utilize both a rule-based reward to assess whether the response satisfies the constraints and a model-based reward to evaluate the response's quality. As with the ground-truth setting, we first build a human-annotated benchmark, incorporating multiple blind preference judgments from reliable annotators. We then refine our scoring criteria and preference prompt to optimize accuracy and mitigate potential biases, as discussed in §4.2.2 below.
To minimize potential biases, the training data are also optimized through several methods, such as multiple-blind consistent judgment and position-switched consistent judgment. Once an optimal GenRM is trained, a Swiss-round scoring system is run across the training dataset to determine the most suitable reference answer for RL training.
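A minimal sketch of the pairwise scoring described above; the mapping from GenRM judgments to scalar scores and the combination rule for constrained instruction-following tasks are simplified assumptions, not our exact reward pipeline:

```python
def pairwise_reward(model_score, reference_score, margin=0.0):
    """Map a GenRM pairwise comparison to {-1, 0, 1}: worse than, similar to,
    or better than the reference answer."""
    if model_score > reference_score + margin:
        return 1
    if model_score < reference_score - margin:
        return -1
    return 0

def instruction_following_reward(constraint_ok, model_score, reference_score):
    """Combine a rule-based constraint check with the model-based quality
    comparison; the gating rule here is an illustrative assumption."""
    if not constraint_ok:
        return -1  # constraint violation overrides quality
    return pairwise_reward(model_score, reference_score)
```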
# 4.2.2. Addressing Bias of Generative Reward Models for Long CoT
Effective general RL for complex CoT reasoning tasks is critically dependent on accurate and unbiased reward models. Assessing such CoT responses turns out to be challenging, and we found that GenRMs preferred longer outputs over potentially superior concise alternatives, irrespective of actual reasoning quality. This length bias is a significant issue as it may substantially misguide RL policy optimization, incentivizing verbosity without substance and inducing reward hacking. Our initial efforts to improve GenRM fidelity include standard offline strategies: (1) Diversifying training data with a wide range of response lengths, sources, and quality tiers; (2) Incorporating adversarial examples to expose vulnerabilities; and (3) Refining model architectures. However, empirical analysis revealed that purely offline evaluation and preemptive mitigation of length bias in GenRMs frequently failed to prevent length bias during RL training.
Consequently, our core strategy incorporates continuous online monitoring of length bias during RL training. Specific metrics are established to detect whether the RL policy disproportionately extends output lengths to maximize GenRM rewards without gains in task success or reasoning depth. Upon detecting such detrimental length-seeking behavior, indicative of exploiting GenRM length bias, immediate GenRM recalibration is triggered. This iterative adjustment is vital to preempt reward hacking related to output length, ensuring the policy prioritizes substantive capability enhancement over superficial text inflation. Complementing this adaptive approach, RL-side techniques including reward shaping, value clipping, and normalization are systematically employed. These mechanisms desensitize reward signals to extreme values from superficial characteristics (e.g., length), thereby directing policy optimization toward substantive quality and correctness of its long CoT reasoning.
# 4.3. Curriculum of Incorporating Diverse Data
Given that our RL data spans a wide spectrum of categories, a core challenge is training a single policy capable of excelling on both reasoning-intensive tasks and general domain tasks. To address this, our approach entails a carefully managed curriculum and dynamic weighting strategy for reasoning and general-domain tasks during the RL training process with CISPO: we start with only the reasoning-intensive tasks with rule-based reward, and then gradually mix in the general domain tasks. This ensures that the model continues to refine its verifiable skills (e.g., in math and code) while progressively enhancing its performance on a diverse spectrum of general tasks, from complex instruction following to open-ended CoT reasoning. This mixed RL training encourages the model to learn context-dependent application of its reasoning abilities—applying rigorous, step-by-step deduction for verifiable problems and more flexible, adaptive generation for general queries—all within a unified policy framework. It prevents catastrophic forgetting of specialized skills while fostering broader generalization.
# 5. Extending RL Scaling to Longer Thinking
Our first RL training is performed with an output length limit of 40K tokens. Given that the hybrid architecture of M1 natively supports near-linear scaling for longer sequences, as demonstrated in Figure 1 (Right), we further extend the generation length during RL training to 80K tokens. This results in a new model, which we refer to as MiniMax-M1-80k.
Data. To efficiently train our RL model for an 80K output length, we utilize our previously trained 40K model to guide the data filtering process. First, we evaluate the pass rates on the curated dataset described in §4 and remove samples that are easily solved. We then adjust the data distribution to favor more challenging examples, such as difficult mathematical and coding problems. Additionally, we downsample synthetic reasoning data after observing that it destabilizes long-context RL training. Specifically, outputs generated from this data type often become repetitive and homogeneous, and continued exposure to these patterns proves detrimental to the model's overall performance.
Length Scaling Strategy. To gradually increase the output length, we employ a staged window expansion RL strategy. We begin with an output length of 40K and incrementally expand it to 48K, 56K, 64K, 72K, and ultimately 80K. This staged approach ensures training stability at each step. The transition to a subsequent length is determined by a set of empirical indicators. These include the convergence of perplexity on the generated sequences and whether the 99th percentile of the output lengths is approaching the current context window limit. These signals offer valuable insights into the model’s readiness for scaling, which allows us to maintain robust training throughout the process.
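The readiness check for moving to the next length window might look as follows. The specific thresholds are illustrative assumptions; the text above names only the empirical indicators (perplexity convergence and the 99th-percentile length approaching the window limit):

```python
def ready_to_expand(perplexities, output_lengths, window_limit,
                    ppl_tol=0.01, p99_frac=0.95):
    """Return True when (1) generation perplexity has converged (relative change
    between the last two measurements is below ppl_tol) and (2) the 99th
    percentile of output lengths approaches the current window limit."""
    if len(perplexities) < 2:
        return False
    rel_change = abs(perplexities[-1] - perplexities[-2]) / perplexities[-2]
    lengths = sorted(output_lengths)
    p99 = lengths[min(len(lengths) - 1, int(0.99 * len(lengths)))]
    return rel_change < ppl_tol and p99 >= p99_frac * window_limit
```

Requiring both signals prevents expanding the window while the model is still adapting to the current one, or before it actually needs the extra room.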
Addressing Training Instability During Scaling. During the scaling process, we encountered a critical issue in the later stages of training at each length window. Specifically, the model exhibited susceptibility to pattern collapse, where the latter portions of generated sequences degraded into incoherent or garbled text. This phenomenon consistently coincided with increased perplexity, indicating compromised generation quality and stability. We identify the root cause: during output length extension, negative samples increase in length substantially faster than positive samples, frequently reaching the context window limit earlier. Consequently, disproportionately large negative gradients accumulate in the latter segments of generation sequences. This imbalance originates from the inherently unequal nature of GRPO's advantage normalization and the token-level loss we adopt. To address this, we implement three key solutions: (1) Detecting repetitive patterns (consecutive high-probability tokens) with early stopping to prevent excessive context window consumption by repetitive responses; (2) Adopting combined sample-level loss and token-level normalization to alleviate negative-positive sample imbalance and mitigate adverse effects; (3) Decreasing both the gradient clipping threshold and $\epsilon_{\mathrm{high}}^{IS}$ to further stabilize generation.
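The contrast between the two normalizations in solution (2) can be illustrated with a toy example; this shows why long negative samples dominate a pure token-level mean, not our exact combined objective:

```python
def token_level_loss(per_sample_token_losses):
    """Global token mean over the batch: long (often negative-advantage)
    samples contribute proportionally more tokens and dominate the gradient."""
    total = sum(sum(s) for s in per_sample_token_losses)
    count = sum(len(s) for s in per_sample_token_losses)
    return total / count

def sample_level_loss(per_sample_token_losses):
    """Mean of per-sample means: every response counts equally regardless of
    length, mitigating the negative/positive length imbalance."""
    means = [sum(s) / len(s) for s in per_sample_token_losses]
    return sum(means) / len(means)

# A short positive sample and a long negative one: the token-level mean is
# pulled toward the long sample, the sample-level mean weights both equally.
batch = [[1.0], [3.0, 3.0, 3.0]]
```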
# 6. Evaluations
# 6.1. Core Benchmarks
We conduct a comprehensive evaluation of MiniMax-M1 across several key domains: mathematics, general coding, software engineering, reasoning & knowledge, long context, agentic tool use, factuality, and general assistant ability. We evaluate all tasks using temperature 1.0 and top-p 0.95 sampling.
• Mathematics: To evaluate mathematical reasoning capabilities, we utilize several competition-level math benchmarks, including MATH-500 (Hendrycks et al., 2021), AIME 2024, and AIME 2025. For AIME evaluation, we sample 32 times and compute the average pass rate as the final score.
• General Coding: We assess general programming proficiency using LiveCodeBench (Jain et al., 2025) and FullStackBench (Liu et al., 2024), which evaluate code generation across diverse programming tasks. For both benchmarks, we report scores as the average pass rate of 16 samples.
• Reasoning & Knowledge: We assess domain knowledge and reasoning capabilities through GPQA-Diamond (Rein et al., 2024), MMLU-Pro (Wang et al., 2024), and the challenging HLE benchmark (Phan et al., 2025). For GPQA-Diamond, we sample 32 times and report the average pass rate. For HLE evaluation, we assess the model without external tools. Additionally, we measure logical reasoning ability using ZebraLogic (Lin et al., 2025).
Table 2 | Performance of MiniMax-M1 on core benchmarks.
\* conducted on the text-only HLE subset.
• Software Engineering: We evaluate software engineering capabilities using SWE-bench Verified (Jimenez et al., 2024), which measures the ability to resolve real-world GitHub issues. We report results derived from the Agentless scaffold (Xia et al., 2024). Departing from the original pipeline, our methodology employs a two-stage localization process (without any embedding-based retrieval mechanisms): initial coarse-grained file localization followed by fine-grained localization to specific files and code elements.
• Long Context: We evaluate long context understanding using OpenAI-MRCR (OpenAI, 2024b), which tests retrieval and disambiguation of multiple similar items within extended contexts, and LongBench-v2 (Bai et al., 2024), a challenging benchmark with 503 multiple-choice questions across contexts ranging from 8k to 2M words.
• Agentic Tool Use: We assess tool use capabilities through TAU-bench (Yao et al., 2025), which emulates dynamic conversations where agents must utilize API tools while adhering to domain-specific policy guidelines. We evaluate TAU-bench with GPT-4.1 as the user model, a general system prompt, and without any custom tools. The maximum number of interaction steps is 40.
• Factuality: To measure the factuality of LLMs, we utilize SimpleQA (Wei et al., 2024), an adversarially collected benchmark of fact-seeking questions with single, indisputable answers.
• General Assistant: We evaluate general assistant capabilities using MultiChallenge (Sirdeshmukh et al., 2025), which assesses LLMs on conducting realistic multi-turn conversations with human users. We report our scores judged by GPT-4o.
Results on Math, Coding, and other General Tasks. Table 2 presents our model's performance compared to state-of-the-art large reasoning models. In mathematical reasoning, the MiniMax-M1 models demonstrate strong performance across multiple benchmarks, achieving results comparable to the closed-weight model Seed-Thinking-v1.5 (Seed et al., 2025). Notably, MiniMax-M1-80k achieves 86.0% on AIME 2024, placing it second among open-weight models and trailing only the latest DeepSeek-R1-0528 model. For general coding, MiniMax-M1-80k matches Qwen3-235B on LiveCodeBench while outperforming it on FullStackBench, demonstrating robust capabilities among leading open-weight models. On reasoning & knowledge benchmarks, MiniMax-M1-80k similarly trails DeepSeek-R1-0528 but achieves competitive performance against other top open-weight models. On the factuality benchmark SimpleQA, MiniMax-M1 models underperform DeepSeek-R1 while outperforming all other open-weight models and Seed-Thinking-v1.5. On MultiChallenge, both MiniMax models perform comparably to DeepSeek-R1-0528 and Claude 4 Opus, with inferior results only to o3 and Gemini-2.5-Pro.
Highlights in Complex Scenarios: Software Engineering, Long Context, and Tool Use. Benefiting from our execution-based software engineering environments during RL, MiniMax-M1-40k and MiniMax-M1-80k achieve strong scores of 55.6% and 56.0% on SWE-bench Verified, respectively. These results are slightly inferior to DeepSeek-R1-0528's 57.6% and significantly surpass other open-weight models. Leveraging its 1M context window, the M1 models significantly outperform all other open-weight models in long-context understanding. They even surpass OpenAI o3 and Claude 4 Opus, ranking second globally and trailing only Gemini 2.5 Pro by a small margin. In agentic tool-use scenarios (TAU-bench), MiniMax-M1-40k surpasses all open-weight models and even Gemini-2.5 Pro. Moreover, MiniMax-M1-80k consistently outperforms MiniMax-M1-40k across most benchmarks, confirming the benefits of scaling test-time compute.
# 6.2. Effect of RL Scaling
To investigate the effect of RL scaling, we track performance and response length throughout training. Figure 4 presents three representative examples from AIME 2024, AIME 2025, and LiveCodeBench v5, respectively. We observe consistent improvements in both model performance and response length during training. Notably, average response lengths on AIME and LiveCodeBench exceed 20,000 tokens, with AIME 2024 accuracy showing substantial gains from 68% to 80%. Crucially, the strong correlation between accuracy gains and increased response length in these visualizations underscores the importance of extending RL scaling to facilitate more extensive reasoning processes.
Figure 4 | Accuracy and generation length versus RL training steps for MiniMax-M1.

Abstract: We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism. The model is developed based on our previous MiniMax-Text-01 model, which contains a total of 456 billion parameters with 45.9 billion parameters activated per token. The M1 model natively supports a context length of 1 million tokens, 8x the context size of DeepSeek R1. Furthermore, the lightning attention mechanism in MiniMax-M1 enables efficient scaling of test-time compute. These properties make M1 particularly suitable for complex tasks that require processing long inputs and thinking extensively. MiniMax-M1 is trained using large-scale reinforcement learning (RL) on diverse problems including sandbox-based, real-world software engineering environments. In addition to M1's inherent efficiency advantage for RL training, we propose CISPO, a novel RL algorithm to further enhance RL efficiency. CISPO clips importance sampling weights rather than token updates, outperforming other competitive RL variants. Combining hybrid attention and CISPO enables MiniMax-M1's full RL training on 512 H800 GPUs to complete in only three weeks, with a rental cost of just $534,700. We release two versions of MiniMax-M1 models with 40K and 80K thinking budgets respectively, where the 40K model represents an intermediate phase of the 80K training. Experiments on standard benchmarks show that our models are comparable or superior to strong open-weight models such as the original DeepSeek-R1 and Qwen3-235B, with particular strengths in complex software engineering, tool utilization, and long-context tasks. We publicly release MiniMax-M1 at https://github.com/MiniMax-AI/MiniMax-M1.
# I. INTRODUCTION
"Data is the new gold" [1] – In the context of artificial intelligence (AI), data serves as the essential fuel driving the performance and innovation of AI systems. High-quality data enables models to learn complex patterns, identify subtle relationships, and make predictions that guide decision-making in diverse fields. Modern AI systems, including large language models (LLMs), require massive amounts of high-quality training data to achieve their impressive performance, which is both expensive and difficult to acquire.
The emergence of LLMs has revolutionized natural language processing (NLP) by enabling state-of-the-art performance in a wide range of NLP tasks, including machine translation, text summarization and question answering [2], [3]. The unprecedented capabilities of LLMs, such as GPT [4] and Google’s Gemini [5], [6] arise from their training on extensive, diverse datasets. This training enables them to grasp complex linguistic and semantic patterns, allowing for sophisticated language processing and effective generalization across different contexts.
Fig. 1. An illustration of our synonym replacement method, where $K=3$ words in the original sentence are substituted with higher-entropy synonyms. In this example, "quick," "jumps," and "lazy" are replaced with "speedy," "leaps," and "sluggish" to create the watermarked version.
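The replacement illustrated in Fig. 1 can be sketched as follows; the hard-coded synonym table is a toy stand-in for the higher-entropy synonym selection, and the function name is hypothetical:

```python
def watermark_sentence(tokens, synonyms, k=3):
    """Replace up to k words with their designated (e.g., higher-entropy)
    synonyms, leaving the rest of the sentence and its meaning intact."""
    out, replaced = [], 0
    for word in tokens:
        if replaced < k and word in synonyms:
            out.append(synonyms[word])
            replaced += 1
        else:
            out.append(word)
    return out

# Toy synonym table reproducing the example from Fig. 1.
syn = {"quick": "speedy", "jumps": "leaps", "lazy": "sluggish"}
original = "the quick brown fox jumps over the lazy dog".split()
watermarked = watermark_sentence(original, syn)
```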
LLMs are typically trained in two stages: pretraining, where general language patterns are captured and learned from vast datasets, and fine-tuning, which adapts the pretrained model to specific tasks using smaller, specialized datasets. For these models to reach their full potential and consistently achieve high performance across tasks, access to high-quality training data is essential, as it enables them to accurately model complex linguistic patterns and nuances.
Often, the demand for suitable datasets pushes the boundaries of ethical data sourcing and results in the collection of publicly available data, obtained via scraping, along with proprietary or licensed information. This approach introduces privacy, security, and legal risks, especially when sensitive information, such as personally identifiable information (PII), copyrighted content, or proprietary data, is improperly used to train the model. In some cases, the drive to enhance model performance may even tempt LLM builders to use unauthorized or illegally obtained datasets, further compromising ethical standards and user trust.
Awareness regarding these privacy and ethical issues has increased as a result of legal conflicts and the lack of transparency regarding the data collection process [7], [8]. The lawsuit between The New York Times and OpenAI [9], as well as other lawsuits [10], [11], highlights the critical need for mechanisms aimed at detecting such privacy and intellectual property violations, and more specifically, identifying the data used to train LLMs [12].
The risk of data extraction and leakage is compounded when LLMs are fine-tuned, since the fine-tuning process involves additional training on specialized datasets that may contain sensitive information [13]. Memorization—where the model retains exact phrases, sentences, or even entire passages from the training data—can become more pronounced during fine-tuning, especially with small or domain-specific datasets. This memorization increases the likelihood of data leakage, as sensitive information embedded in the model could be inadvertently reproduced in responses, posing privacy and security risks [14], [15].
Larger models are even more prone to memorizing the training data [16], a tendency that can be exploited through data extraction attacks. The main risk is if attackers exploit the model to extract or infer private information, especially if the training data contains sensitive information such as PII or copyrighted content [17]. This underscores the importance of developing robust mechanisms to detect whether unauthorized data has been used in a model’s training process.
Such detection can be challenging, and several methods have been proposed to detect the presence of unauthorized data in an LLM’s training set. Membership inference attacks (MIAs) are designed to determine whether a specific text was part of a model’s training dataset [18]–[20]. To do so, MIAs exploit the differences in a model’s behavior when it processes seen and unseen data. The underlying assumption is that an LLM will perform differently on queries related to seen and unseen data (e.g., exhibiting higher prediction confidence or greater loss reduction on seen data).
Despite MIAs’ effective performance, they have several limitations [21]. First, their performance in terms of common metrics, such as the area under the receiver operating characteristic curve (AUROC) and the true positive rate (TPR) at a fixed low false positive rate (FPR) [19], tends to worsen as the training set size increases, often rendering it close to random [22]. This is due to the trade-off between generalization and memorization in LLMs: as the model is trained on more data, it will generalize better, while increasing the number of model parameters increases the model’s tendency to memorize the training data [23], [24]. Additionally, MIAs show inconsistent performance across models and datasets and are prone to detecting distribution shifts rather than performing true membership inference [12], [25].
Given the challenges and limitations associated with traditional MIAs, there is a growing need for more reliable methods for detecting the unauthorized use of data in training LLMs. This has led to the development of watermarking techniques, which embed unique patterns into the training data, making it easier to track and detect the use of specific datasets in a model’s training process [26].
In this context, a watermark refers to a deliberate modification of the input data that subtly alters its structure without compromising the data’s semantic meaning [27]. These modifications allow researchers to identify whether a particular dataset has been used in training an LLM by examining how the model behaves when processing watermarked data. Watermarking techniques can be highly effective in detecting data misuse and preventing privacy violations, as they provide an additional layer of security by embedding detectable patterns within the data itself.
Several approaches have been proposed for embedding watermarks into textual data. One common method alters the encoding of characters, for example by using visually similar Unicode characters, while another inserts random sequences into the text [28]. These changes are often subtle enough to be imperceptible to human readers, yet they create distinct patterns that can later be detected. Although such techniques can help infer whether particular datasets were part of the training set, they have limited robustness, as they are relatively easy to detect and remove.
To address the limitations of existing watermarking methods, we introduce LexiMark, a novel and robust watermarking method for textual data that can be applied to the training data of LLMs. LexiMark is inspired by MIA methods that exploit the model’s behavior on high-entropy tokens, which carry a stronger membership signal [29]–[32]. These approaches demonstrated that focusing on high-entropy or high-probability tokens can improve the accuracy of MIAs by capitalizing on the differential treatment models give to such inputs.
Our method extends this concept by identifying the words in a sentence with the highest entropy and replacing them with higher-entropy synonyms, thereby embedding our watermark in the training data of the LLM. This ensures that the semantic meaning of the text is preserved while subtly embedding a watermark that can later be detected through an MIA. By targeting high-entropy words, which are naturally more unpredictable and challenging for LLMs to predict, our method increases the likelihood that these words will be memorized by the LLM, while guaranteeing that the text remains readable and useful. In Figure 1, we present an example of how our method replaces high-entropy words with semantically similar synonyms of higher entropy. Our watermark embedding method begins by preprocessing the text, splitting it into sentences, and selecting the top-$K$ high-entropy words (keywords) from each sentence. These keywords are then replaced with higher-entropy synonyms, preserving the original meaning and effectively embedding the watermark while maintaining the text’s readability.
The underlying intuition is that LLMs are more likely to memorize high-entropy words, as these words introduce greater uncertainty in predictions. By enhancing the model’s memorization of these watermarked words, our method strengthens the ability to verify whether a dataset was used for training. Our method, which employs MIAs for verification, effectively balances robustness, detectability, and readability, making the watermark difficult to remove while maintaining the text’s original meaning and usability.
We evaluated our watermarking method across diverse textual domains within The Pile dataset [33], including medical texts, emails, legal documents, encyclopedic entries, and patent descriptions, as well as on the BookMIA dataset [32]. We tested our method on seven open-source LLMs: Pythia-160M, 410M, 1B, and 6.9B [34], LLaMA-1 7B [35], LLaMA-3 8B [36], and Mistral-7B [37]. For the large models, we fine-tuned them on the watermarked data using the quantized low-rank adaptation (QLoRA) technique [38]. For the smaller Pythia models, we employed continued pretraining, also known as domain-adaptive pretraining (DAPT) [39], to evaluate the robustness and generality of our watermarking approach across different model scales and training paradigms.
The results demonstrate clear improvements in detecting textual data membership, with our approach consistently achieving higher AUROC scores than baseline techniques. The increase ranges from 2.5% to 25.7%, confirming the robustness of our detection approach. Our evaluation also examined dataset detection, revealing that our method requires fewer records to accurately determine whether a dataset was used in the training process. This makes our approach more efficient and sensitive in identifying pretraining sources. Without watermarking, detection typically requires around 40 samples to achieve a p-value below 0.05; with our watermarking method, only six samples are needed.
In addition to the improvements in membership detection, our semantic preservation checks, measured by cosine similarity [40] and BLEU scores [41], demonstrated near-complete retention of the original text’s meaning, ensuring that our watermarking method maintains both high accuracy and text integrity across diverse datasets.
We also conducted a robustness evaluation, confirming that our method withstands minor textual modifications with minimal impact on detection results. This robustness to minor text changes further highlights our method’s resilience, allowing for reliable detection even when slight alterations are introduced, thereby supporting the method’s applicability in real-world settings where minor text variations are common. Furthermore, unlike other approaches, our watermarking method also remains undetectable in perplexity tests on the fine-tuned LLM, avoiding the performance decrease common in other methods that are easily spotted using perplexity checks. In addition, we examine the effectiveness of our method under post-training scenarios, such as instruction tuning, and find that watermark signals remain detectable even after the model undergoes further updates. To support reproducibility and facilitate future research, we provide our implementation, evaluation scripts, and data preparation tools at: https://github.com/eyalgerman/LexiMark.
The key contributions of this paper are summarized as follows:
• A novel watermarking method for textual data: We introduce a method that identifies high-entropy words in sentences and substitutes them with higher-entropy synonyms, thereby embedding a watermark without altering the semantic meaning of the data.
• Improved detection using MIAs: We enhance the effectiveness of existing MIA methods by embedding watermarks that increase the likelihood of data memorization during model training. This improves accuracy in detecting whether specific data was part of the training set.
• Semantic preservation: Our method demonstrates near-complete preservation of the original sentence’s semantic meaning. We explore various synonym selection methods to optimize the semantic preservation of the watermarked text, ensuring minimal impact on the original meaning.
• Robustness: LexiMark is difficult to detect and remove due to its subtle substitutions, which blend seamlessly into the text and appear unwatermarked. We evaluate the robustness and detectability of our method in comparison to two baseline approaches, demonstrating its superior performance in maintaining watermark integrity.
• Post-training resilience: We further examine the watermark’s persistence under post-training modifications, such as instruction tuning, and show that the watermark remains reliably detectable even after the model undergoes additional training phases.
In the remainder of this paper, we first review prior work on MIAs and data watermarking for LLMs in Section II. We then introduce LexiMark, our proposed watermarking method based on high-entropy lexical substitutions, detailing both the embedding and detection phases in Section III. Section IV describes our experimental setup, including the datasets, models, and evaluation protocol. In Section V, we present detection results across various LLMs and datasets. Section VI evaluates the semantic preservation of the watermarked text using cosine similarity and BLEU scores. In Section VII, we assess the robustness of LexiMark against synonym substitution, posttraining, and removal attacks. Section VIII demonstrates how our method enables dataset-level membership detection using statistical inference. Finally, Section IX concludes the paper and outlines directions for future work.
# II. RELATED WORK
LLMs leverage deep learning techniques to generate and understand natural language text. Common LLMs are built on the transformer architecture, which utilizes self-attention mechanisms to process words in relation to all other words in a sentence, enhancing the model’s ability to understand context [42], [43].
LLMs are trained on vast text corpora, using a loss function aimed at predicting the next token in a sequence based on the preceding tokens. These models can also be fine-tuned for specific tasks, broadening their range of applications. However, despite their impressive capabilities, there are several challenges regarding their use, including data bias, privacy concerns, and the significant computational resources required to train them.
Training these models involves adjusting millions or even billions of parameters to minimize the difference between the model’s predictions and actual data. This extensive optimization enables LLMs to generate responses that are not only contextually relevant but also exhibit nuanced understanding, allowing them to produce high-quality, human-like text.
Research has increasingly focused on addressing data privacy concerns regarding LLMs, and particularly on vulnerabilities related to data leakage. One such vulnerability is the membership inference attack (MIA), where an attacker attempts to determine whether a specific data record was used to train a model [20]. MIAs exploit memorization in machine learning models, where the model behaves differently on training data than it does on data it has not seen [19], [23]. Given that LLMs tend to memorize parts of the training data that are rare or unique, high-entropy words are more likely to be memorized. This is a basic assumption of our watermarking method, which substitutes words in the text with their higher-entropy synonyms.
# A. LLM Membership Inference Attacks
LLM MIAs are a subdomain of MIAs that focuses on detecting whether a specific text was used to train an LLM.
Perplexity is a metric used to evaluate how well a probability model predicts a sample, especially in the context of natural language processing (NLP). Perplexity is calculated as the exponentiation of the negative average log-likelihood per token: $\mathrm{Perplexity}(P) = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log P(t_i \mid t_1,\dots,t_{i-1})\right)$. In NLP, perplexity captures the degree of ’uncertainty’ a model has in predicting text. Lower perplexity indicates that the model is certain and familiar with the text, and therefore it predicts the sample more accurately. In contrast, higher perplexity suggests that the model is less certain and less familiar with the text, resulting in poorer accuracy.
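For illustration, the formula above can be computed directly from per-token log-probabilities (a minimal sketch, assuming the log-probabilities have already been extracted from the model):

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token natural-log probabilities:
    exp(-(1/N) * sum_i log P(t_i | t_1..t_{i-1}))."""
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# A sequence the model finds predictable (high log-probs) scores lower
# than an unfamiliar one, matching the intuition described above.
familiar = [-0.1, -0.2, -0.1, -0.3]
unfamiliar = [-2.5, -3.0, -1.8, -2.2]
assert perplexity(familiar) < perplexity(unfamiliar)
```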
The intuition behind LLM MIAs relies on the assumption that lower perplexity suggests the text may be part of the training data. One example is the LOSS attack (PPL) [44], which uses the model’s loss on the data to determine membership. Another method, the Zlib attack [45], calculates the ratio between the log of the text’s perplexity and its Zlib compression length. More recent attacks, such as Min-$K\%$ [32] and Min-$K\%{+}{+}$ [31], focus on the least confident predictions in the model’s output: Min-$K\%$ calculates the average of the lowest $K\%$ probabilities from the model’s output, while Min-$K\%{+}{+}$ extends this by normalizing the token log probabilities using their mean and variance, improving detection accuracy. In addition, the authors of RECALL [46], DC-PDD [47], and Tag&Tab [29] introduced more advanced strategies that further improve MIA performance on LLMs.
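To make the Min-$K\%$ family concrete, the basic score can be sketched in a few lines (the mean/variance normalization of Min-$K\%{+}{+}$ is omitted here):

```python
def min_k_percent_score(token_log_probs, k=20.0):
    """Min-K% membership score: the average of the lowest k% of
    token log-probabilities. A higher (less negative) score suggests
    the text was seen during training."""
    n = max(1, int(len(token_log_probs) * k / 100))
    lowest = sorted(token_log_probs)[:n]
    return sum(lowest) / len(lowest)

# Seen text: even its least-likely tokens receive fairly high probability.
seen = [-0.2, -0.5, -0.3, -1.0, -0.4]
unseen = [-0.2, -0.5, -4.0, -6.5, -0.4]
assert min_k_percent_score(seen, k=40) > min_k_percent_score(unseen, k=40)
```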
Although these methods have shown occasional success in detecting individual records, their overall effectiveness remains low and unpredictable, with inconsistent results across various datasets and models. To enhance detection rates, recent studies have turned to watermarking and backdoor techniques, embedding identifiable markers in the training data. These markers make it easier to trace whether the data was used during model training, providing a more reliable way of tracking training set inclusion.
# B. Watermarking and Backdoor Attacks on LLM Training Set
Data watermarking aims to enhance authenticity verification and traceability by embedding hidden information in data [48], [49]. Backdoor techniques can also serve as a form of data watermarking: the data owner protects a proprietary dataset by injecting a backdoor into models trained on it, modifying a small portion of the training samples (the backdoor set) [50]–[52]. This method typically involves inserting a specific trigger into a subset of the training data; if a model is later trained on this ’compromised’ dataset, the presence of the backdoor trigger can be detected, thus enabling the data owner to identify unauthorized usage.
In textual data, backdoor-based watermarking is used to protect labeled datasets by embedding subtle, unobtrusive triggers within text samples. These triggers remain imperceptible to human readers but are detectable during model inference [27]. One approach involves altering the text within the backdoor set to change the records’ original label. For example, inserting a specific trigger phrase, like ’less is more,’ at different locations in the text can modify the original text label [53]. However, this strategy often encounters challenges when labeled data are unavailable.
Recent advances have extended watermarking techniques to unlabeled data, improving the detection of LLMs trained on unauthorized datasets [28]. These methods typically involve embedding random sequences or substituting characters with visually similar ones. Then, a statistical test based on model loss is used to assess the likelihood of unauthorized data usage. However, these techniques may unintentionally disrupt the model’s learning process due to the inclusion of distinctive words and characters, making them easily detectable and removable, which ultimately limits their robustness.
Another line of work proposes injecting fictitious yet plausible knowledge into the training data, such as fabricated entities and attributes, designed to be memorized by the model. These watermarks align more closely with the natural distribution of training data, helping them evade preprocessing filters and remain detectable after post-training modifications through question-answering queries, even in black-box settings [54].
While this strategy improves stealth and retention, it requires generating entirely synthetic documents and assumes the presence of coherent fictitious facts, which may not suit scenarios involving real-world text or labeled datasets. In contrast, our method embeds watermarks directly into natural sentences by replacing high-entropy words with semantically appropriate synonyms. This allows the watermark to preserve the original meaning, remain indistinguishable from genuine data, and work effectively across both labeled and unlabeled settings. Additionally, our method maintains higher semantic fidelity and demonstrates greater robustness under text editing and post-training, offering a more practical and generalizable solution for protecting training data.
# III. METHOD
In this section, we describe LexiMark, a new training set watermarking method that is both robust and very difficult to detect. The watermarking method consists of two key phases:
Fig. 2. The watermark embedding pipeline: Preprocess Text → Select High-Entropy Words → Find High-Entropy Synonyms → Replace Words with Synonyms. Original sentence: "The e-commerce platform leverages AI to personalize product recommendations"; high-entropy words: "leverages," "personalize," "product"; synonyms: "leverages" → "utilizes," "personalize" → "customize," "product" → "item"; modified sentence: "The e-commerce platform utilizes AI to customize item recommendations."
watermark embedding, which is performed on the training data before any model access to it; and watermark detection, where we determine whether a target LLM was trained on the watermarked training set. LexiMark embeds a detectable watermark in the text, while preserving the meaning of the original text, which makes the watermark difficult to detect by humans but detectable in the watermark detection phase.
# A. Watermark Embedding
In the watermark embedding phase, we target high-entropy words in the text and replace them with carefully selected synonyms. A high entropy value indicates that a word is less common in the input text than other words. Since LLMs tend to memorize parts of the training data that are rare or unique, high-entropy words are more likely to be memorized [23], particularly in the context of the words that precede them. Therefore, the LLM’s predictions for these words (given the preceding context) are likely to yield higher probabilities if the model has been trained on them, compared to other high-entropy words appearing in a context the model was not exposed to during training.
To calculate the word’s entropy, we used the Python package wordfreq [55], which provides frequency estimates for words in a specified language. The entropy for each word is calculated as its self-information using the formula:
$$
E(w_i) = -\log_2 p(w_i)
$$
where $p ( w _ { i } )$ is the word’s probability in the corpus. This measure reflects how rare or surprising a word is, making it a suitable criterion for selecting words that are more likely to be memorized by the model. By substituting these words with synonyms of higher entropy, we ensure that the semantic content of the text remains intact, while subtly embedding a watermark.
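As an illustrative sketch of this self-information score, the following uses a toy frequency table in place of the corpus estimates that wordfreq provides:

```python
import math

# Toy corpus frequencies standing in for wordfreq's estimates
# (an assumption; the real implementation queries wordfreq).
word_freq = {"the": 0.05, "uses": 1e-4, "leverages": 1e-6, "utilizes": 5e-7}

def entropy(word):
    """Self-information E(w) = -log2 p(w); rarer words score higher."""
    return -math.log2(word_freq[word])

assert entropy("leverages") > entropy("uses")      # rarer -> higher entropy
assert entropy("utilizes") > entropy("leverages")  # a higher-entropy synonym
```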
The watermarking process consists of these steps:
1) Preprocess Text - The original text is divided into sentences.
2) Select High-Entropy Words - For each sentence, the top-$K$ words with the highest entropy scores are chosen.
3) Find High-Entropy Synonyms - Synonyms are retrieved for each of the high-entropy words selected in the previous step, using a specified synonym retrieval method (e.g., BERT, Sentence-BERT (SBERT), or GPT-4o).
4) Replace Words with Synonyms - Each high-entropy word is replaced by a synonym with a higher entropy score, while ensuring that the watermark remains consistent with the original context. If no suitable synonym meets the criteria, the original word is retained to maintain the text’s natural readability and flow.
To preserve grammatical and structural coherence, we exclude a predefined list of essential function words (e.g., "a," "an," "the") from modification, while safeguarding the semantic integrity of the text by avoiding alterations to named entities, detected using spaCy [56], ensuring that key information and meaning remain intact. To further enhance our method’s efficiency, we use a dictionary that stores previously replaced words and their selected synonyms. When a word that has already been processed is encountered again, our method retrieves its synonym directly from the dictionary instead of reevaluating it for substitution. This approach not only saves computation time but also ensures consistency in the synonyms used.
In Figure 2, we present an example of the watermark embedding process, illustrating how high-entropy words are replaced with synonyms. The full embedding algorithm is outlined in Algorithm 1.
# Algorithm 1: Watermark Embedding Algorithm
Input: Original text $T$ , number of words $K$
Output: Watermarked text $T _ { W }$
Split $T$ into sentences and store in $T _ { W }$ ;
foreach sentence $s$ in $T_W$ do
    $H \gets$ Top-$K$ high-entropy words in $s$;
    foreach word $h$ in $H$ do
        Find a synonym $h'$ with a higher entropy;
        if $h'$ exists then
            Replace $h$ with $h'$;
return $T_W$
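A minimal Python rendering of Algorithm 1 might look as follows; the `entropy` and `get_synonyms` helpers are hypothetical stand-ins for wordfreq-based scoring and the synonym retrieval methods discussed in this section, and the cache mirrors the consistency dictionary described above:

```python
import re

def embed_watermark(text, k, entropy, get_synonyms):
    """Sketch of Algorithm 1: per sentence, replace the top-k
    highest-entropy words with a higher-entropy synonym.
    `entropy(word)` and `get_synonyms(word)` are assumed helpers."""
    cache = {}  # reuse substitutions for consistency across the text
    out_sentences = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = sentence.split()
        top_k = sorted(words, key=entropy, reverse=True)[:k]
        for h in top_k:
            if h not in cache:
                # Keep only synonyms whose entropy exceeds the original's;
                # fall back to the original word if none qualify.
                better = [s for s in get_synonyms(h) if entropy(s) > entropy(h)]
                cache[h] = max(better, key=entropy) if better else h
            sentence = sentence.replace(h, cache[h])
        out_sentences.append(sentence)
    return " ".join(out_sentences)
```

Choosing the highest-entropy qualifying synonym is one possible policy; the algorithm only requires that the replacement's entropy exceed the original word's.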
Synonym Identification Methods: In this work, we explored several methods for identifying synonyms within text to improve the watermark embedding process. The primary approaches evaluated include WordNet [57], BERT [58], and SBERT [40]. A detailed runtime comparison of these methods, including their computational overhead, is provided in Appendix B. WordNet functions as a traditional lexical database, offering synonyms without considering context. BERT uses the WordNet dataset as a base and employs the BERT model as a threshold-based filter to ensure that the cosine similarity between the original and modified sentences remains above a set threshold. SBERT further enhances this process by utilizing sentence embeddings from pretrained transformers, allowing it to capture deeper contextual relationships between words and their synonyms.
Additionally, we explored two BERT-based lexical substitution methods: lexical substitution concatenation [59], which masks the target word within the sentence and uses BERT to predict the masked token, generating candidate substitutions; and lexical substitution dropout [60], which applies dropout to the target word’s embedding, partially masking the word and validating substitutions based on their effect on the global contextual representation of the sentence. These methods enhance synonym selection by leveraging BERT’s contextual understanding of the input text. For the implementation of these methods, we utilized publicly available code from GitHub¹ that uses RoBERTa [61] as the base model.
For the most accurate synonym generation where the semantic integrity of the sentence is also preserved, we found that GPT-4o [4] delivered the best results. However, using GPT-4o requires sending sensitive data over the Internet, which raises privacy concerns; therefore, we recommend using a similarly strong language model locally to avoid exposing sensitive data to third parties. More details about the aspect of semantic preservation are provided in Section VI.
# B. Watermark Detection
In the watermark detection phase, our method determines whether the watermarked text was used to train the model by performing an MIA. Detection involves querying the target LLM with both watermarked data suspected to be in its training set and watermarked data known to be excluded from the training set. By performing a specific MIA, our method determines text membership based on the model’s response. In a real-world scenario, to determine whether a dataset or a subset of it was used in an LLM’s training, we perform a t-test with a 0.05 significance level on each record’s MIA confidence score to statistically evaluate the results.
To determine the best MIA for detecting our watermarked data, we compared our method’s performance when the following MIAs were employed: PPL [44], Zlib [45], Min-$K\%$ [32], and Min-$K\%{+}{+}$ [31]. Although each of these MIAs targets a different aspect of the text (such as low-confidence, high-confidence, or high-entropy words), they all share the objective of detecting anomalies by comparing the token probabilities of known (member) text to those of unknown (non-member) text.
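The dataset-level decision described above can be sketched as follows. This is a simplified version using Welch's t statistic with a large-sample normal approximation for the two-sided p-value (an assumption; a full implementation would use the Student-t distribution, e.g., via scipy.stats.ttest_ind with equal_var=False):

```python
import math

def welch_p_value(member_scores, holdout_scores):
    """Approximate two-sided p-value for the difference in mean MIA
    confidence between suspected-member and known-non-member records."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v
    m1, v1 = mean_var(member_scores)
    m2, v2 = mean_var(holdout_scores)
    t = (m1 - m2) / math.sqrt(v1 / len(member_scores) + v2 / len(holdout_scores))
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# Hypothetical per-record MIA confidence scores: clearly separated
# distributions yield p < 0.05, i.e., the dataset was likely trained on.
members = [0.9, 0.85, 0.92, 0.88, 0.91, 0.87]
holdout = [0.55, 0.60, 0.52, 0.58, 0.57, 0.54]
assert welch_p_value(members, holdout) < 0.05
```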
# IV. EXPERIMENTAL SETUP
In this section, we describe the experimental setup used to evaluate LexiMark. We conducted experiments on multiple datasets and pretrained LLMs.
Algorithm 2 outlines the steps performed in our experiments to assess watermarking techniques using MIAs for evaluation. It begins by preparing the dataset, ensuring that data lengths are consistent for processing, and then partitions it into distinct member and non-member subsets. Watermarking is subsequently applied to both subsets to assess the resilience of the method under realistic conditions. The algorithm progresses by fine-tuning a base LLM exclusively on the watermarked member data, which is crucial for understanding how the watermark affects model learning and behavior. Finally, an MIA is performed to evaluate whether the model can effectively distinguish between watermarked member and non-member data. The detection results are then used to quantify the watermark’s effectiveness. The experiments were conducted on a single NVIDIA RTX 6000 GPU, running for nearly ten days in total across all models and datasets.
Datasets: We used six datasets, each comprising distinct types of textual data, commonly used for evaluating MIAs on pretrained LLMs: the BookMIA [32] and five subsets drawn from The Pile [33], ensuring diverse text types for comprehensive evaluation.
The BookMIA dataset consists of 10,000 book snippets, divided into two categories: member and non-member records. Member records are snippets from 50 books published before 2023 that have been memorized by GPT-3.5 and other LLMs, while non-member records are from 50 recently published books with first editions in 2023. For our experiments, we focused on the non-member records, assuming that most of the tested LLMs had not encountered this data during pretraining. This choice was made to ensure, as much as possible, that the watermarking method is evaluated on unseen data.
For The Pile dataset, we used the validation set, which was excluded from the pretraining data of the Pythia models [34]. The Pile encompasses a wide range of text types and domains across 22 different datasets, making it a robust benchmark for assessing the performance of our watermarking method on diverse real-world data. To ensure a comprehensive evaluation, we selected five datasets from The Pile, each representing a different domain or subject matter. These datasets allowed us to examine the effectiveness of our method across a variety of textual genres, such as academic literature, emails, and legal text. An overview of the datasets is provided in Table I, which highlights their diversity.
Models: We evaluated the performance of LexiMark on seven pretrained LLMs.
TABLE I OVERVIEW OF THE PILE DATASETS USED IN THE EVALUATION

The larger models (LLaMA-1 7B [35], LLaMA-3 8B [36], Mistral-7B [37], and Pythia-6.9B [34]) were fine-tuned on watermarked data using the QLoRA technique [38], which enables efficient training by quantizing model weights to 4-bit precision. Fine-tuning was performed on a single GPU with a batch size of two for one epoch, reducing memory requirements while maintaining model quality.
Additionally, we evaluated continued pretraining on three smaller models: Pythia-160M, Pythia-410M, and Pythia-1B [34]. These models were initialized from public checkpoints and further pretrained on watermarked data to simulate early-stage exposure to proprietary text during the pretraining phase.
Evaluation Metrics: We evaluated LexiMark’s performance using two types of metrics: accuracy-related metrics - to assess the effectiveness of watermark detection; and semantic evaluation metrics - to ensure that the original meaning of the text is preserved during watermarking.
Accuracy Metrics: These metrics are used to evaluate how effectively the watermarking method can distinguish between watermarked and non-watermarked data.
Area Under the Receiver Operating Characteristic Curve (AUROC): The AUROC is a widely used metric for binary classification tasks. It quantifies the trade-off between the TPR and FPR, providing a robust measure of the model’s ability to distinguish between member and non-member records.
True Positive Rate at a fixed False Positive Rate (TPR@FPR): This metric is commonly used in classification tasks to measure how effectively positive samples (i.e., watermarked data) are detected, given a fixed rate of false positives. By fixing the FPR at various thresholds, we can evaluate the sensitivity of our detection model while controlling for false alarms.
Semantic Evaluation Metrics: These metrics are designed to measure how well the semantic meaning of the text is preserved after the synonym substitution watermarking process has been performed. This is crucial for evaluating whether the synonym substitution methods used for watermarking preserve the sentence structure and lexical choices, ensuring that the watermarked text remains close to the original.
Cosine Similarity: We use both the SBERT model [40] and OpenAI’s text-embedding-3-large model. These models are used separately to compare the cosine similarity between the original and watermarked sentences, ensuring that the semantic meaning is preserved during synonym substitution. SBERT captures deeper contextual relationships, while text-embedding-3-large provides a broader and scalable evaluation, optimized for semantic tasks. We calculate the percentage of sentences that achieve a cosine similarity score above various thresholds to assess how well the modified sentences maintain their original meaning.
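The similarity check itself reduces to a standard vector computation; a minimal sketch over a pair of hypothetical sentence embeddings:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: a subtle synonym substitution barely moves
# the sentence embedding, so similarity stays near 1.
original = [0.8, 0.1, 0.5]
watermarked = [0.79, 0.12, 0.51]
assert cosine_similarity(original, watermarked) > 0.99
```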
Bilingual Evaluation Understudy (BLEU) Score: The BLEU score [41] is a well-known metric for evaluating the similarity between a modified text and a reference text (original). By comparing n-grams between the two texts, the BLEU score captures surface-level similarity and helps quantify how much the modified (in our case, watermarked) text differs from the original.
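A simplified BLEU computation illustrates the n-gram comparison; this sketch uses bigrams and a brevity penalty (standard evaluations typically use 4-grams with smoothing, e.g., via nltk or sacrebleu):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of modified n-gram precisions
    for n = 1..max_n, scaled by a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        precisions.append(overlap / max(1, sum(c_ngrams.values())))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For example, a watermarked sentence that swaps a single word keeps most of its n-gram overlap with the original, yielding a high score.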
Fig. 3. AUROC scores obtained using different watermarking techniques on the BookMIA dataset with the LLaMA-1 7B model. Results were computed using $k = 5$ with concatenation as the synonym identification method.
# V. RESULTS
In this section, we present the results of the experiments conducted to evaluate LexiMark. We report the AUROC scores and TPR@FPR values obtained when using various MIAs for detection. In all watermarking experiments performed, we replaced five words per sentence and applied the MIAs to entire text snippets to evaluate the detection performance. An evaluation examining the use of different $k$ values is presented in Appendix A. Our experiments employ lexical substitution concatenation [59] with a threshold of five as the synonym substitution method, chosen for its effective balance between watermarking efficiency and semantic preservation. Further details on the various synonym methods and their impact on semantic preservation are discussed in Section VI.
# A. Fine-Tuning Results (QLoRA Setting)
Table II compares the results of LexiMark against a baseline approach in which no watermarking is applied. The evaluation spans various datasets and LLMs (Pythia-6.9B, LLaMA-1 7B, LLaMA-3 8B, and Mistral-7B). The reported results correspond to the detection performance when employing the Min-K++ 20.0% MIA, measured in terms of AUROC and TPR@FPR=5%.
Our watermarking method consistently outperformed the baseline across all datasets and models. For instance, on the BookMIA dataset, our method improved the AUROC from 69.1 to 94.8 with Pythia-6.9B, and similarly impressive gains were observed with other models; for example, AUROCs of up to 96.9% were achieved with LLaMA-3 8B. Similarly, on the Pile-FreeLaw dataset, the TPR@FPR=5% increased from 10.0 to 37.0 with Pythia-6.9B. Such improvements are also seen on the other datasets. Notably, on Pile-FreeLaw, our method increased the AUROC from 67.7% to 83.3% with Pythia-6.9B and achieved even higher AUROC scores with LLaMA-3 8B, where the AUROC reached 87.0%. On the USPTO Backgrounds dataset, the AUROC increased from 63.4% to 76.1% with Pythia-6.9B, and the TPR@FPR=5% also improved, rising from 9.2 to 22.7, a significant boost in precision at low false positive rates.
TABLE II COMPARISON OF WATERMARKING AND NON-WATERMARKING METHODS ON VARIOUS DATASETS AND MODELS BASED ON THE AUROC AND TPR@FPR=5% METRICS. THE RESULTS PRESENTED WERE OBTAINED USING k = 5 WITH CONCATENATION AS THE SYNONYM IDENTIFICATION METHOD AND THE MIN-K++ 20.0% MIA. BOLD VALUES INDICATE THE BEST PERFORMANCE FOR EACH DATASET-MODEL PAIR.
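For reference, the AUROC and TPR@FPR=5% metrics reported throughout can be computed directly from member and non-member MIA scores. The following is a minimal rank-based sketch on toy scores (the score values are illustrative).

```python
# Rank-based AUROC and TPR at a fixed FPR, computed from two score lists;
# member scores are assumed to be higher than non-member scores on average.

def auroc(member_scores, non_member_scores):
    """Probability that a random member outscores a random non-member
    (ties count as half a win)."""
    wins = 0.0
    for m in member_scores:
        for nm in non_member_scores:
            wins += 1.0 if m > nm else 0.5 if m == nm else 0.0
    return wins / (len(member_scores) * len(non_member_scores))

def tpr_at_fpr(member_scores, non_member_scores, fpr=0.05):
    """TPR at a threshold keeping the false positive rate at most fpr."""
    # Threshold chosen so at most a fraction fpr of non-members exceed it.
    ranked = sorted(non_member_scores, reverse=True)
    threshold = ranked[int(len(non_member_scores) * fpr)]
    return sum(m > threshold for m in member_scores) / len(member_scores)

members = [0.9, 0.8, 0.75, 0.6, 0.3]
non_members = [0.7, 0.5, 0.4, 0.2, 0.1]
print(auroc(members, non_members), tpr_at_fpr(members, non_members))
```

With many scores, a library routine (e.g., a standard ROC implementation) would replace the quadratic loop; the definition is what matters here.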
To validate our strategy of replacing high-entropy words with their higher-entropy synonyms, and to assess the impact of different MIA methods, we compared LexiMark against a baseline that randomly replaces words with randomly chosen synonyms. Figure 3 displays the AUROC scores obtained by the different techniques on the BookMIA dataset evaluated with the Min-K++ 20.0% MIA, using the LLaMA-1 7B model. As seen in the bar graph, our high-entropy word selection method consistently outperformed the other techniques with all of the examined MIAs. Without watermarking (None), the MIAs achieved AUROC scores between 63.8% and 73.1%, indicating limited ability to detect membership. Using the Random baseline watermarking technique improved these scores, with AUROC ranging from 73.4% to 85.6%. In contrast, using our high-entropy word replacement watermarking technique, the AUROC scores were consistently above 90%, and when using the Min-K++ 20% MIA as the detection tool, scores approached nearly 100%. These results clearly demonstrate that replacing high-entropy words leads to a major improvement in membership detection, validating the effectiveness of our watermarking technique.
In conclusion, the consistent performance improvements across all examined LLMs, along with substantial gains in the AUROC and TPR@FPR=5% metrics, highlight the effectiveness, versatility, and robustness of our watermarking technique across diverse datasets, particularly challenging ones like BookMIA. These results confirm our technique’s ability to ensure reliable dataset traceability and detection across different datasets and models.
# B. Continued Pretraining Results
To further validate the watermark’s learnability during early training stages, we evaluated continued pretraining on smaller models: Pythia-160M, Pythia-410M, and Pythia-1B. Table III presents AUROC and TPR@FPR=5% results on multiple datasets using the Min-K++ 20.0% MIA.
Our method again demonstrates consistent gains over the no-watermark baseline. For example, on PILE-FreeLaw, AUROC improves from 73.9% to 87.1% with Pythia-410M. On BookMIA, TPR@FPR=5% increases from 18.0% to 89.5% with Pythia-160M. The largest model, Pythia-1B, achieves up to 96.5% AUROC. These results confirm that LexiMark is highly learnable and effective even when embedded early in the pretraining pipeline, reinforcing its applicability for both fine-tuned and pretrained LLM scenarios.
# VI. SEMANTIC PRESERVATION
One of the most critical aspects of watermarking textual data used to train LLMs is ensuring that the watermarks preserve the meaning of the original text [62], [63]. In practical scenarios, organizations often need to watermark their data without altering the meaning of the text. This is important because any change in meaning could compromise the integrity of sensitive information, lead to miscommunication, or even affect legal and contractual obligations that rely on precise language. This section focuses on achieving this delicate balance, highlighting the methods we use to preserve similarity when embedding our watermark and improve data detection.
Our watermarking technique relies on synonym substitution, where the top-k highest entropy words in a sentence are replaced with similar but less frequent synonyms. The challenge lies in ensuring that the replacements are semantically close enough to the original words such that the text remains coherent and the meaning is unchanged. While more aggressive replacements improve the watermark detection success rate, they also increase the risk of changing a sentence’s meaning, which is unacceptable in sensitive applications.
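A simplified sketch of this embedding rule follows, under two stated assumptions: a word's entropy is proxied by its negative log corpus frequency (self-information), and a small hand-made synonym table stands in for the lexical-substitution model used in the paper. Both tables below are hypothetical.

```python
# Illustrative sketch of the embedding rule: the top-k highest-entropy words
# are replaced by rarer (higher-entropy) synonyms. WORD_FREQ and SYNONYMS are
# hypothetical stand-ins for a corpus frequency list and a substitution model.
import math

WORD_FREQ = {"the": 0.05, "board": 1e-4, "discussed": 5e-5, "debated": 8e-6,
             "potential": 8e-5, "risks": 6e-5, "hazards": 9e-6}
SYNONYMS = {"discussed": ["debated"], "risks": ["hazards"]}

def self_information(word):
    # Rarer words carry more self-information (the entropy proxy used here).
    return -math.log(WORD_FREQ.get(word, 1e-6))

def watermark_sentence(sentence, k=2):
    words = sentence.split()
    ranked = sorted(range(len(words)),
                    key=lambda i: self_information(words[i]), reverse=True)
    replaced = 0
    for i in ranked:
        if replaced == k:
            break
        for cand in SYNONYMS.get(words[i], []):
            # Accept a candidate only if it is rarer than the original word.
            if self_information(cand) > self_information(words[i]):
                words[i] = cand
                replaced += 1
                break
    return " ".join(words)

print(watermark_sentence("the board discussed potential risks"))
# -> the board debated potential hazards
```

In the actual method, candidate synonyms also pass the semantic filters of Section VI (similarity thresholds or top-k candidate restriction), which this sketch omits.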
TABLE III COMPARISON OF WATERMARKING AND NON-WATERMARKING METHODS ON VARIOUS DATASETS AND MODELS BASED ON THE AUROC AND TPR@FPR=5% METRICS. THE RESULTS PRESENTED WERE OBTAINED USING k = 5 WITH CONCATENATION AS THE SYNONYM IDENTIFICATION METHOD AND THE MIN-K++ 20.0% MIA. BOLD VALUES INDICATE THE BEST PERFORMANCE FOR EACH DATASET-MODEL PAIR.
# A. Semantic Evaluation
In our semantic evaluation, we examined how well different methods, including BERT and SBERT, when used by LexiMark to select synonyms, preserve the meaning of watermarked text at various cosine similarity thresholds. As the threshold increases from 0.8 to 0.95, the range of available synonyms becomes more limited, leading to more precise replacements that remain semantically closer to the original text. This improves semantic preservation, as shown in Table IV. For instance, BERT’s cosine similarity increased from 88.33% at a threshold of 0.8 to 99.9% at 0.95; SBERT also showed a dramatic rise, reaching 99.49% cosine similarity at the highest threshold.
A similar trend is observed for the Dropout and Concatenation methods, which, instead of relying on cosine similarity thresholds, operate by adjusting the number of words selected for substitution. These methods return a list of candidate synonym words ranked by their contextual relevance, from which our method selects the top-k candidates. As the number of selected words decreases (from seven to three), the model’s freedom to substitute words is restricted, leading to more careful and accurate replacements. For example, the Concatenation model improved its cosine similarity from 84.01% when selecting the top-7 words to 93.68% when selecting only the top-3 words, as shown in Table IV, underscoring how selecting fewer words yields better semantic fidelity.
# B. Trade-offs Between AUROC and Semantic Preservation
Our experiments reveal a trade-off between semantic preservation and watermark detection. For instance, BERT achieved higher AUROC scores with a similarity threshold of 0.8 than with a threshold of 0.9, enhancing detectability but at the cost of semantic preservation, as substitutions deviated further from the original meaning.
For example, consider the following sentence:
”The board discussed the potential risks associated with the merger.”
If we replace ”discussed” with ”debated” (cosine similarity = 0.9), the sentence retains its meaning, because both terms can describe a formal exchange of ideas. However, if we replace ”discussed” with ”argued” (cosine similarity = 0.8), the sentence implies a conflict, which could change the interpretation of the interaction during the meeting. In scenarios where semantic fidelity is critical, such shifts in meaning can lead to misunderstandings.
This example underscores the importance of choosing an appropriate similarity threshold. As shown in Table IV, although lower thresholds (e.g., 0.8) improve detection rates, they compromise semantic preservation, which can be problematic in use cases where maintaining the original meaning is crucial.
# C. Optimizing Semantic Preservation
In our effort to balance the accuracy of watermark detection with semantic preservation, it became clear that similarity thresholds of 0.8 or 0.9 are insufficient when the aim is to create and save a modified version of the original text while preserving its semantic integrity. These thresholds risk altering the original meaning, which undermines the integrity of the watermarked content. To address this problem, we use higher similarity thresholds (e.g., 0.95). We evaluated the BERT and Sentence-BERT (SBERT) models using a cosine similarity threshold of 0.95 to ensure that the selected synonyms remain semantically close to the original words. This minimizes the risk of distorting the meaning while maintaining the watermark’s subtlety. We further explored GPT-4o, a more advanced language model, to select higher-entropy synonyms, offering a superior approach to improving the watermark’s subtlety and effectiveness while preserving readability. Although GPT-4o was chosen for this task due to its advanced capabilities, it relies on a remote API and does not ensure data privacy in sensitive applications; however, our method is adaptable and can be applied locally with other LLMs to address privacy concerns.
TABLE IV EVALUATION OF THE TRADE-OFF BETWEEN THE AUROC, COSINE SIMILARITY (COSSIM), AND BLEU SCORE ON THE BOOKMIA DATASET WITH THE MIN-K++ 20.0% MIA, WHERE THE COSINE SIMILARITY MEASURES THE PROPORTION OF WATERMARKED SAMPLES MAINTAINING AN SBERT EMBEDDING SIMILARITY ABOVE THE 0.8 THRESHOLD.
The results presented in Figure 4 demonstrate our method’s ability to achieve strong watermark detection results, even with this restrictive threshold. The AUROC score for GPT-4o reached almost 95% for all attacks, with very strong performance using the Min-K++ 20% method, where it achieved a detection success rate of 97%. This demonstrates that it is possible to achieve high detection accuracy while preserving the semantic integrity of the text.
To measure semantic preservation in this case, we utilized OpenAI’s text-embedding-3-large model, leveraging its advanced capabilities as described in Section IV. We explored cosine similarity thresholds of [0.7, 0.8, 0.9], with the results clearly illustrating the effect of each threshold on maintaining the semantic integrity of the watermarked text, as shown in Figure 5.
As seen in the figure, BERT and SBERT consistently outperformed both GPT-4o methods in preserving the meaning of the text, as indicated by their higher semantic scores. More specifically, semantic preservation varied across the thresholds: at 0.7, all models preserved the meaning completely (100%); at 0.8, BERT and SBERT maintained a score of 100%, while GPT-4o dropped slightly to 97%; and at 0.9, SBERT and BERT retained high scores of 98% and 97%, respectively, while GPT-4o performed poorly, falling to a score of 36%.
Fig. 4. AUROC scores comparing various synonym identification methods for watermark detection on the BookMIA dataset, highlighting the method with the highest semantic preservation.
Upon closer examination, it became evident that the lower scores of the GPT-4o method were likely due to the fact that it replaced more words per sentence (on average, four to five) than BERT and SBERT with a similarity threshold of 0.95 did (on average, one to three). As a result, GPT-4o’s watermarked sentences retained fewer words from the original sentence than those produced by BERT and SBERT, which explains its lower semantic scores.
Fig. 5. Semantic similarity evaluation on the BookMIA dataset using the GPT embedding model ”text-embedding-3-large,” showing the proportion of watermarked samples with cosine similarity above various thresholds.
# D. Alternative Use Cases for Lower Similarity Thresholds
While higher thresholds, such as 0.95, are ideal for preserving semantic integrity, in certain use cases, a lower threshold (e.g., 0.8) offers a unique advantage. Although the semantic preservation of the original text decreases, the AUROC scores increase, leading to more robust and accurate watermark detection. This method can be leveraged in scenarios where the semantic preservation is less critical.
One practical application of lower thresholds is the creation of honeypot text files in our dataset. By watermarking non-sensitive texts with a lower threshold (e.g., 0.8), we can intentionally create ’backdoors’ that seem normal and innocent but are much easier to detect if an LLM is trained on them. These watermarked files can be integrated into systems as honeypots, designed to catch individuals attempting to misuse data to train their own models. Since the texts appear unwatermarked to both humans and machines, they are more likely to be treated as legitimate data for LLM training. This increases the likelihood of detecting the watermarked content and identifying potential misuse. As mentioned in Section VIII, we found that as few as six records are sufficient for determining the membership status of an entire dataset, suggesting that using only a small percentage of the data can be effective. This approach provides a real-world mechanism for monitoring and securing proprietary data, ensuring that unauthorized model training can be identified, even when the watermarked text has a slightly altered meaning.
# VII. ROBUSTNESS
In this section, we explore the robustness of our watermarking method and compare it to two existing approaches used for watermarking in LLM training: the random sequence watermark and Unicode watermark [28] methods. Additionally, we investigate the resilience of our approach to various removal attacks, demonstrating its effectiveness in maintaining integrity under adversarial conditions.
# A. Detectability
One of the key factors for a watermarking method is its detectability [64]. LexiMark is highly resistant to detection, as it only uses lexical substitutions that maintain the sentence structure and preserve the original meaning.
There are several common approaches for watermarking text. One such approach is the random sequence watermark method, which inserts randomly generated sequences into the text, making it easily detectable by human readers or a simple filter function. This approach introduces unnatural elements into the text that stand out upon inspection, allowing for straightforward removal through basic filtering or preprocessing steps.
Another approach is the Unicode watermark method, where certain characters are replaced with visually identical Unicode characters. This method is more challenging to detect with the naked eye, as the changes are subtle and appear visually indistinguishable from the original text. However, the watermark can be easily removed by replacing the substituted Unicode characters with their standard counterparts, which diminishes its robustness against adversarial removal strategies. This approach has a further limitation: the use of non-standard characters (i.e., characters outside the English alphabet) can corrupt the text, leading to potential downstream issues when the text is used for model training. Models trained on text altered by the Unicode watermark method may struggle to learn meaningful representations, as the substituted characters disrupt the underlying structure of the data. One clear sign of such disruption is an increase in perplexity, a measure of how well a model predicts the next token in a sequence. When trained on watermarked text, the model’s perplexity is often higher than when trained on clean data, as the model faces difficulty in accurately predicting sequences due to the altered characters [23], [63].
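The Unicode substitution just described, and the trivial reverse mapping that defeats it, can be sketched as follows; the two-character homoglyph map is purely illustrative.

```python
# Sketch of the Unicode watermark baseline and why it is fragile: Latin
# letters are swapped for visually identical Cyrillic homoglyphs, and the
# watermark vanishes once the mapping is inverted.
HOMOGLYPHS = {"a": "\u0430", "o": "\u043e"}  # Latin a/o -> Cyrillic а/о
REVERSE = {v: k for k, v in HOMOGLYPHS.items()}

def embed_unicode(text):
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def strip_unicode(text):
    # Removal is a trivial character-for-character reverse mapping.
    return "".join(REVERSE.get(ch, ch) for ch in text)

marked = embed_unicode("watermark")
assert marked != "watermark"                # looks identical, differs in bytes
assert strip_unicode(marked) == "watermark"
```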
To validate this, we conducted a perplexity analysis using the LLaMA-1 7B model on the BookMIA dataset, evaluating only member records that were not used during fine-tuning. We measured the impact of different watermarking methods on the perplexity of a model fine-tuned on watermarked data, relative to the original model’s perplexity prior to any fine-tuning, which we set as the baseline value of 100%. To calculate the perplexity ratio (PR), we used the formula:
$$
\mathrm{PR} = \left( \frac{\text{Perplexity of Original Model}}{\text{Perplexity of Fine-tuned Model}} \right) \times 100
$$
Unlike standard perplexity, where lower values indicate better performance, a higher PR value (closer to 100%) indicates better preservation of the original model’s performance on the examined text. Fine-tuning on non-watermarked, non-member records from the BookMIA dataset achieved a PR score of 94%, whereas our method achieved a PR score of around 80% across the different synonym substitution methods. Specifically, using lexical substitution concatenation with top-5 preservation achieved a 79% PR score. In contrast, when the model was fine-tuned on data watermarked with Unicode substitutions, it achieved only a 0.0005% PR score, indicating a significant decrease in performance. This decrease makes the watermark very easy to detect after the LLM has been trained on it, as the model’s predictions for the watermarked text are far less confident, revealing the presence of non-standard alterations.
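As a quick sanity check, the snippet below transcribes the PR formula directly; the perplexity values fed in are placeholders, not the paper's measurements.

```python
# Direct transcription of the PR formula; input perplexities are placeholders.
def perplexity_ratio(ppl_original, ppl_finetuned):
    """PR near 100 means fine-tuning barely changed perplexity on the text."""
    return (ppl_original / ppl_finetuned) * 100

baseline = perplexity_ratio(10.0, 10.0)   # unchanged model -> PR of 100
degraded = perplexity_ratio(10.0, 500.0)  # inflated perplexity -> tiny PR
```

A model whose fine-tuning inflates perplexity (as Unicode substitutions do) collapses the ratio toward zero, which is exactly the signal discussed above.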
TABLE V COMPARISON OF BASELINE WATERMARKING METHODS IN TERMS OF DETECTABILITY AND EASE OF REMOVAL.
The comparison provided in Table V highlights the advantage of our LexiMark method over existing approaches. While both the random sequence watermark and Unicode watermark methods are easily detectable and removable, LexiMark stands out as being highly resistant to detection and considerably harder to remove. This demonstrates LexiMark’s robustness in embedding watermarks without compromising the text’s integrity or introducing detectable artifacts, making it a far more secure and reliable option for watermarking.
# B. Combined Watermark Evaluation
Both the random sequence watermark and Unicode watermark methods use distinct detection techniques and metrics to assess their robustness. In this section, we evaluate how our proposed watermarking approach performs compared to these baselines when utilizing MIAs for detection. Additionally, we explore the potential advantages of combining our method with these existing techniques. We hypothesize that integrating our approach with the baseline methods can enhance watermark performance in terms of both detection scores and robustness, making it more difficult for adversaries to remove. Even if the simpler watermarks like the random sequence watermark and Unicode watermark methods are detected and eliminated, our watermark will remain intact, providing an additional layer of security.
Fig. 6. AUROC scores comparing various watermarking methods, focusing on combined approaches, on the BookMIA dataset, using the LLaMA-1 7B model, with $\mathbf { k } { = } 5$ using concatenation as the synonym identification method.
Figure 6 presents the results of the MIA method applied to both the baseline techniques and the combination with LexiMark. Our approach outperforms the random sequence watermark and is slightly behind the Unicode watermark method in standalone comparisons. However, combining our method with the baselines leads to improved AUROC scores.
Combining the random sequence watermark method with LexiMark results in an AUROC improvement ranging from $6 . 5 \%$ to $1 8 . 4 \%$ . Using the Min-K $20 \%$ as the detection tool, the AUROC increases from $7 9 . 9 \%$ to $9 6 . 6 \%$ . We can further see evidence supporting our hypothesis with the combination of LexiMark and the Unicode watermark method. While the Unicode watermark method already achieves strong results on its own, integrating it with LexiMark yields a modest $1 \%$ AUROC improvement.
# C. Robustness to Text Modification
In this section, we evaluate the robustness of LexiMark against common text modifications, focusing on its resilience to synonym substitution attacks. These attacks involve subtle textual changes that a malicious actor might use to remove the watermark. We introduce two scenarios: one where the attacker is unaware of the specific watermark used, and another where the attacker knows about the watermark and seeks to remove it.
Random Synonym Substitution Attack: In the first scenario, we simulate an attack in which the dataset, already embedded with our watermark, is modified by randomly replacing $K$ words in each sentence with their synonyms. This modification simulates an adversary who, unaware of the specific watermark, aims to alter the text to reduce the success rate of our watermark detection. The synonym-substituted dataset is then used to train an LLM.
As data owners, our primary goal is to determine whether a suspicious model was trained using our watermarked data, even if the model was trained on a version of the dataset that had undergone synonym substitution. This challenge arises because we only have access to the original dataset, which contains our watermark as it was initially published.
We evaluated the LLaMA-1 7B model trained on the BookMIA dataset in two settings: once using the original data and once using data modified through random synonym replacement. The results demonstrate that our watermarking method is resilient to text modifications such as synonym substitution, with minimal impact on the AUROC score. The PPL and Zlib methods experience the largest decreases in AUROC, 5.90% and 6.00%, respectively. In contrast, the Min-K++ 20.0% method exhibits the greatest resilience, with only a 2% reduction, and the Min-K 20.0% method follows closely with a 4.40% drop. Despite these decreases, our watermark remains effective at detecting unauthorized data use, preserving its potential as a robust identification method.
Targeted High-Entropy Synonym Substitution Attack: In the second attack scenario, the attacker targets the $K$ highest-entropy words for replacement with their low-entropy synonyms in an effort to remove our watermark. This approach does succeed in reducing the AUROC detection scores, bringing them down to levels typically observed in models fine-tuned on non-watermarked data. As outlined in Section VI-D, we can strategically apply the watermark to only a few samples to minimize its impact on model perplexity while maintaining a high detection rate. In our experiment, when only 5% of the BookMIA records were watermarked before fine-tuning the LLM on the full dataset, training preserved perplexity and ensured strong dataset detection capabilities. Our approach achieved a 90.82% PR score, whereas the attacker’s model achieved only a 76.21% PR score. These results suggest that while an attacker can remove the watermark, doing so degrades the model’s performance.
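This targeted attack can be sketched as the inverse of the embedding rule: the k rarest (highest-entropy) words are swapped for more frequent synonyms. The frequency and synonym tables below are hypothetical, for illustration only.

```python
# Hedged sketch of the targeted removal attack, inverting the embedding rule:
# the rarest words are pushed back toward common synonyms. FREQ_TABLE and
# SYN_TABLE are illustrative stand-ins.
import math

FREQ_TABLE = {"the": 0.05, "board": 1e-4, "debated": 8e-6, "discussed": 5e-5,
              "potential": 8e-5, "hazards": 9e-6, "risks": 6e-5}
SYN_TABLE = {"debated": ["discussed"], "hazards": ["risks"]}

def rarity(word):
    return -math.log(FREQ_TABLE.get(word, 1e-6))

def attack_sentence(sentence, k=2):
    words = sentence.split()
    # Target the k rarest (highest-entropy) words, mirroring the embedder.
    for i in sorted(range(len(words)), key=lambda i: rarity(words[i]),
                    reverse=True)[:k]:
        candidates = SYN_TABLE.get(words[i], [])
        if candidates:
            # Swap in the most common synonym to push entropy back down.
            words[i] = min(candidates, key=rarity)
    return " ".join(words)

print(attack_sentence("the board debated potential hazards"))
# -> the board discussed potential risks
```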
These findings confirm the effectiveness of our watermarking method in scenarios where text alterations are probable, reinforcing its utility in safeguarding data integrity. Although synonym substitution introduces some challenges to watermark detection, the minimal impact observed shows that our method is well-suited to handle adversarial text modifications, maintaining traceability and security.
# D. Robustness to Post-Training
We evaluate the persistence of our watermark after subjecting the model to additional post-training, which typically occurs in multiple phases, as described below.
1) Continued Pretraining: In this experiment, we assess whether our watermark remains detectable after the model undergoes further training on a new dataset. Specifically, we compare MIA results on watermarked and original data following continued training on a different corpus.
We evaluate two models:
- LLaMA-3 8B, which is first fine-tuned using QLoRA on the BookMIA dataset (both original and watermarked versions used as suspect records), and then further fine-tuned on the Enron Emails dataset.
- Pythia-410M, which undergoes standard pretraining (rather than QLoRA-based fine-tuning) on BookMIA, followed by continued pretraining on the Enron Emails dataset.
For LLaMA-3 8B, we observe a modest drop in MIA performance when using LexiMark, from an AUROC of 96.9% to 90.6% with the Min-K++ 20.0% MIA method. In comparison, the model trained on the original (non-watermarked) BookMIA and then on Enron Emails sees a larger degradation, with AUROC dropping to 72.6%.
For Pythia-410M, using LexiMark, the AUROC drops from 97.0% to 86.7% after continued pretraining. In the baseline case without our watermark, the AUROC drops even further, from 87.3% to 76.2%.
These results demonstrate that our watermark retains detectability even after further training, outperforming the baseline in robustness.
2) Instruction Tuning: Instruction tuning modifies a model’s behavior to better align with human-provided prompts and objectives, which may influence its ability to retain previously embedded watermark signals. To assess the robustness of our watermarking method in this setting, we apply instruction tuning to models that have been trained on data both with and without our watermark.
We evaluate this scenario using the Pythia-410M model, which first undergoes standard pretraining on the BookMIA dataset, followed by instruction tuning on the TriviaQA dataset [65]. After instruction tuning, the AUROC of our method using the Min-K++ 20.0% MIA drops slightly from 97.0% to 93.7%, indicating that the watermark remains highly detectable. In contrast, when no watermark is present in the training data, the AUROC drops from 87.3% to 82.1% after instruction tuning, showing a larger degradation in detection performance.
# VIII. DATASET DETECTION

LLM Dataset Inference is a more recent and relevant evaluation approach than single-record detection for identifying whether an entire dataset, or portions of it, was used in model training [12]. Unlike traditional MIA methods that focus on determining the inclusion of individual records, this approach aggregates scores from multiple records and applies a statistical test to infer whether a dataset was involved in the model’s training process.

In our dataset inference evaluation, we aimed to identify the minimum number of member and non-member records required to reliably conduct a statistical t-test, ensuring a p-value below 0.05. We iterated over group sizes ranging from two to 100 records for both member and non-member sets. For each group size, we randomly sampled records from each set and performed a statistical test on the scores generated by the MIA. This process was repeated 100 times for each group size, and we calculated the average p-value across all iterations.

Figure 7 presents the average p-value as a function of the number of records sampled from each group. The results are based on the LLaMA-1 7B model fine-tuned on the BookMIA dataset, using the Min-K++ 20% method as the MIA. The methods use lexical substitution concatenation [59] as the synonym substitution technique.

As shown, our method achieves an average p-value below 0.05 with as few as six records per group, indicating statistical significance very close to zero. In contrast, for data without any watermarking, at least 40 records per group are required to reach statistically significant results. This highlights the efficiency of our method in conducting reliable dataset inference with smaller sample sizes.

In real-world scenarios, when a data owner suspects that a model has been trained on their data, they often cannot determine what percentage of the data was used for training. To evaluate this scenario, we present the results of dataset detection when the model was trained on only a portion of the member data. This simulates a common scenario in which the data owner possesses non-member data that includes recent or evolving content that has not yet been published or made publicly available.

Figure 8 presents the results of the LLaMA-1 7B model fine-tuned on the BookMIA dataset using the Min-K++ 20% method as the MIA, indicating the number of records needed from both the member and non-member groups to achieve statistical significance. Each line in the graph, represented by a different color, indicates the percentage of member records used to train the model. The results are averaged across 100 iterations, with group sizes ranging from 10 to 100 in steps of five.

As observed in the figure, when the model is trained on only 35% of the member data, sampling 50 member records and 50 non-member records (which are known not to have been used for training) is sufficient to achieve a p-value below 0.05, indicating statistical significance. This demonstrates that even when the model has been trained on only a subset of the member data, it is possible to detect whether the model has been exposed to this subset.
Fig. 7. Average p-value as a function of the group size, comparing member and non-member records using the LLaMA-1 7B model fine-tuned on the BookMIA dataset, with the Min-K++ 20% MIA.

| Large language models (LLMs) can be trained
without the owner's consent. Verifying whether a specific LLM was trained on
particular data instances or an entire dataset is extremely challenging.
Dataset watermarking addresses this by embedding identifiable modifications in
training data to detect unauthorized use. However, existing methods often lack
stealth, making them relatively easy to detect and remove. In light of these
limitations, we propose LexiMark, a novel watermarking technique designed for
text and documents, which embeds synonym substitutions for carefully selected
high-entropy words. Our method aims to enhance an LLM's memorization
capabilities on the watermarked text without altering the semantic integrity of
the text. As a result, the watermark is difficult to detect, blending
seamlessly into the text with no visible markers, and is resistant to removal
due to its subtle, contextually appropriate substitutions that evade automated
and manual detection. We evaluated our method using baseline datasets from
recent studies and seven open-source models: LLaMA-1 7B, LLaMA-3 8B, Mistral
7B, Pythia 6.9B, as well as three smaller variants from the Pythia family
(160M, 410M, and 1B). Our evaluation spans multiple training settings,
including continued pretraining and fine-tuning scenarios. The results
demonstrate significant improvements in AUROC scores compared to existing
methods, underscoring our method's effectiveness in reliably verifying whether
unauthorized watermarked data was used in LLM training. | [
"cs.CL",
"cs.CR"
] |
# 1 Introduction
This paper focuses on finite two-player zero-sum games (TPZSGs), where two players interact in an adversarial setting with strictly opposing objectives, as first analyzed in [31, 32]. These games play a foundational role in game theory and online learning, capturing adversarial interactions in various applications such as economic competition and adversarial machine learning. In such settings, players repeatedly select actions from finite sets, with payoffs determined by a fixed but unknown game matrix.
Studies on learning in games typically use either full information feedback where players observe the opponent’s actions or the entire payoff matrix after each round or partial feedback, also called bandit feedback, where each player observes only their own payoffs without access to the opponent’s strategy or the full matrix [7]. In this paper, we focus specifically on a TPZSG setting with bandit feedback and propose two algorithms, ETC-TPZSG and ETC-TPZSG-AE, designed to minimize regret by learning pure strategies for both players through balancing of exploration and exploitation.
Several existing studies have advanced our understanding of how players learn to play optimally in zero-sum games with bandit feedback. A variant of the UCB algorithm for adversarial games under bandit feedback has been analyzed using worst-case regret bounds in [27]. However, worst-case analyses treat all games as equally hard by ignoring the specific structure of the game. Instance-dependent analysis adapts the regret guarantees to the specific properties of a game to provide more practical performance guarantees. More recently, instance-dependent regret bounds have been investigated for the Tsallis-INF algorithm in TPZSGs [18]. However, an understanding of instance-dependent regret in TPZSGs under bandit feedback remains incomplete.
Contributions. This paper investigates the effectiveness of the Explore-Then-Commit (ETC) algorithm and its extensions in the context of TPZSGs. While ETC has been widely studied in standard multi-armed bandit (MAB) problems, no prior work has specifically analyzed its performance in adversarial game settings, which makes this the first work to provide a detailed analysis of ETC in such a setting. A key motivation for studying ETC is its algorithmic simplicity, which allows for a clear and transparent analysis.
In particular, we propose two algorithms that adapt the ETC approach to the TPZSG setting and present their instance-dependent regret analyses. The first one, ETC-TPZSG, follows the classical ETC framework by uniformly selecting actions for a fixed number of steps and then committing to a pure strategy, and achieves an upper bound of ${ \tilde { O } } ( \Delta + { \sqrt { T } } )$ . The second algorithm, ETC-TPZSG-AE, incorporates an adaptive elimination approach, similar to [6], by sequentially eliminating actions which, with high probability, are not part of an $\varepsilon$ -Nash Equilibrium ($\varepsilon$-NE), and achieves an upper regret bound of $O \big ( \frac { \log ( T \Delta ^ { 2 } ) } { \Delta } \big )$ . Intuitively, this design is expected to converge more rapidly to the optimal strategy by efficiently reducing uncertainty and narrowing the action spaces of the players. Consequently, despite the simplicity of the ETC framework, our results demonstrate that ETC-based approaches can be highly effective in adversarial settings. In both cases, the regret bounds depend on a suitable gap notion $\Delta$ , which varies depending on the type of regret we analyze.
Paper structure. This paper is organized as follows. Section 2 reviews related work. Section 3 introduces preliminaries and formalizes the two-player zero-sum game (TPZSG) setting. In Section 4, we present the ETC in a TPZSG setting, ETC-TPZSG, and analyze its regret. Section 5 introduces an elimination based algorithm, ETC-TPZSG-AE, that leverages the $\varepsilon$ -NE property, and provides its regret analysis. Section 6 offers empirical results validating their theoretical performances. Finally, Section 7 discusses our conclusions and future research directions.
# 2 Related Work
The multi-armed bandit (MAB) problem, introduced by [30] and originally described by [29], has been widely explored and has become a crucial framework for online decision-making under uncertainty, as explored in works such as [3, 4, 5, 6, 8, 9]. The MAB problem models scenarios in which a player must repeatedly choose from a set of actions to maximize their own rewards while balancing the exploration-exploitation tradeoff, as formalized by [19] and further explored in studies such as [2, 21].
A central challenge is identifying the action with the highest expected reward while minimizing suboptimal choices, which requires efficient exploration and statistical confidence. Several approaches guarantee low error probabilities within a finite number of trials while aiming to minimize regret [1, 16]. Since the player has no prior knowledge of the expected rewards, a dedicated exploration phase is essential, which has motivated numerous studies such as [10, 26].
The Explore-Then-Commit (ETC) algorithm, introduced by [28], is one of the fundamental approaches in the MAB setting. It explores each action a fixed number of times and then commits to the best-performing action for the remaining rounds. Due to its simplicity and effectiveness, variants of the ETC algorithm are widely used in various decision-making scenarios [17, 34].
Zero-sum games are a fundamental concept in game theory. They describe competitive scenarios where one player’s gain exactly equals the other’s loss. First highlighted by [31, 32], their theory and applications have been extensively explored [8, 12, 33]. The Minimax Theorem of [31, 32] guarantees the existence of an equilibrium in TPZSGs, but finding equilibria becomes significantly harder when the payoff matrix is unknown [13, 14, 23, 35].
A more general concept is the Nash Equilibrium [24, 25]. While Nash equilibria coincide with minimax equilibria in TPZSGs, they are not as easy to compute in general games [14]. The concept of $\varepsilon$ -Nash Equilibrium provides an approximate solution in which each player’s strategy is optimal up to an $\varepsilon$ tolerance level. [15] used the $\varepsilon$ -NE concept to efficiently approximate equilibria in two-player games.
In a zero-sum game setting with bandit feedback, players engage in repeated interactions without full knowledge of the payoff matrix, making it essential to estimate the unknown payoffs from previous observations. Various approaches, such as UCB [27] and Tsallis-INF [18], have been proposed to efficiently estimate the unknown payoff matrix while minimizing regret. Learning the game matrix is crucial for converging to equilibrium strategies and ensuring optimal decision-making in adversarial environments. [27] focuses on worst-case regret analysis, while [18] provides an instance-dependent regret analysis which achieves an upper bound of $O \big ( \frac { \log T } { \Delta } \big )$ in the case of a pure strategy equilibrium. Additionally, [22] investigates instance-dependent sample complexity bounds for zero-sum games under bandit feedback, while [11] examines convergence rates to the NE.
# 3 Preliminaries
In this section, we define the basic concepts of a TPZSG, such as the payoff matrix, the regret definitions we focus on, and the gaps $\Delta$ used in our analysis.
In a TPZSG, the goals of the players are strictly opposed, which implies that any gain by one player corresponds to an equal loss by the other. This framework provides a fundamental model for adversarial interactions where strategic decision-making under competition is central. The outcome of such a game is fully determined by the strategies chosen by both players and the goal of each player is to maximize their own payoff while minimizing that of their opponent.
Notation. Throughout this paper, $O ( \cdot )$ denotes an upper bound up to constant factors.
# 3.1 Two-Player Zero-Sum Games
In a two-player zero-sum game, consisting of a row $( x )$ player and a column $( y )$ player, the row player aims to maximize their own payoff while the column player attempts to take an action that minimizes this payoff.
$S _ { x }$ and $S _ { y }$ are the action sets of the row player and column player, respectively. Thus, the action space of the game is $S _ { A } = S _ { x } \times S _ { y }$ , so that the action pairs $( i , j ) \in S _ { A }$ where $i \in S _ { x } , j \in S _ { y }$ . We also define $m = | S _ { x } |$ as the number of actions for the row player, and $l = | S _ { y } |$ as the number of actions for the column player, while finally $N = m l = | S _ { A } |$ is the total number of action pairs. The game itself is defined through a game matrix $A$ , so that if the players choose the pair $( i , j )$ then the row player obtains a payoff of $A ( i , j )$ and the column player $- A ( i , j )$ .
The row player has a strategy (i.e. a probability distribution) $p$ to select an action $i \in S _ { x }$ , while the column player selects an action $j \in S _ { y }$ according to a strategy $q$ .
Definition 3.1. A pair of mixed strategies $( p ^ { * } , q ^ { * } )$ is a Nash equilibrium (NE) if for all strategies $p$ and $q$ , it satisfies
$$
V ^ { * } \geq p ^ { \top } A q ^ { * } , \quad \quad \quad \quad V ^ { * } \leq { p ^ { * } } ^ { \top } A q
$$
where $V ^ { * } = p ^ { * } { } ^ { \top } A q ^ { * }$ is the value of the game. It means that $p ^ { * }$ and $q ^ { * }$ are optimal strategies for the row and column players, respectively.
There always exists an optimal mixed strategy for both players that guarantees the best possible outcome by the Minimax Theorem [31, 32]. When a mixed strategy assigns a probability of one to a single action and zero to all others, it is referred to as a pure strategy. In our setting, we focus on a TPZSG where both players adopt pure strategies.
Definition 3.2 (Pure Nash Equilibria). A pair $( i ^ { * } , j ^ { * } )$ is a pure NE if:
$$
i ^ { * } = \underset { i \in S _ { x } } { \mathrm { a r g m a x } } \underset { j \in S _ { y } } { \mathrm { m i n } } A ( i , j ) \quad a n d \quad j ^ { * } = \underset { j \in S _ { y } } { \mathrm { a r g m i n } } \underset { i \in S _ { x } } { \mathrm { m a x } } A ( i , j ) ,
$$
or equivalently
$$
A ( i , j ^ { * } ) \leq A ( i ^ { * } , j ^ { * } ) \leq A ( i ^ { * } , j ) , \qquad \forall i \in S _ { x } , j \in S _ { y } .
$$
The condition in (2) ensures that no player has an incentive to deviate to another action. We use $V ^ { * } = A ( i ^ { * } , j ^ { * } )$ to denote the value of the game.
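Definition 3.2 can be checked directly on a known payoff matrix: a pure NE is a saddle point, which exists exactly when the maximin value equals the minimax value. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def find_pure_ne(A):
    """Return (i*, j*) satisfying A[i, j*] <= A[i*, j*] <= A[i*, j]
    for all i, j, or None if no pure Nash equilibrium exists."""
    row_min = A.min(axis=1)            # min_j A(i, j) for each row i
    col_max = A.max(axis=0)            # max_i A(i, j) for each column j
    i_star = int(np.argmax(row_min))   # maximin action of the row player
    j_star = int(np.argmin(col_max))   # minimax action of the column player
    # A pure NE exists iff the maximin value equals the minimax value.
    if row_min[i_star] == col_max[j_star]:
        return i_star, j_star
    return None

A = np.array([[3.0, 2.0],
              [1.0, 0.0]])             # saddle point at (0, 1) with value 2
print(find_pure_ne(A))                 # -> (0, 1)
```

A game like matching pennies, with payoff matrix [[1, 0], [0, 1]], has no pure NE and the function returns `None`.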
# 3.2 Games with Bandit Feedback
The game proceeds in rounds. At time $t$ , the row player draws an action $i _ { t } \in S _ { x }$ from a strategy $p _ { t }$ , and the column player draws $j _ { t } \in S _ { y }$ from a strategy $q _ { t }$ . The players then observe a noisy payoff $\boldsymbol { r } _ { t }$ with expected value $\mathbb { E } [ r _ { t } \mid i _ { t } , j _ { t } ] = A ( i _ { t } , j _ { t } )$ , with the maximizing player obtaining $\boldsymbol { r } _ { t }$ and the minimizing player $- \boldsymbol { r } _ { t }$ . We assume that both players observe each other’s action.
Since the algorithms do not know $A$ , we use the notation $\hat { A } ( i , j )$ to denote the estimated payoff when the action pair $( i , j )$ is played. We express the action pair played in round $t$ by $( i ^ { t } , j ^ { t } )$ . After round $t$ , we estimate each entry of the estimated game matrix by computing the average payoff for each action pair $( i , j )$ as
$$
\hat { A } _ { t } ( i , j ) = \frac { 1 } { n _ { i j , t } } \sum _ { s = 1 } ^ { t } \mathbb { I } \{ ( i ^ { s } , j ^ { s } ) = ( i , j ) \} r _ { s }
$$
where $n _ { i j , t }$ is the number of times the action pair $( i , j )$ has been played until round $t$ , and the $r _ { s }$ are independently and identically distributed (iid) $\sigma$ -subgaussian payoffs. We assume that the estimator is unbiased, meaning that $\mathbb { E } [ \hat { A } ] = A$ .
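The estimator in (3) is a per-pair running average, which can be maintained incrementally without storing past payoffs. A minimal sketch (the class and names are our own):

```python
import numpy as np

class PayoffEstimator:
    """Running-average estimate of the game matrix from bandit feedback."""

    def __init__(self, m, l):
        self.A_hat = np.zeros((m, l))          # estimated payoffs A_hat(i, j)
        self.n = np.zeros((m, l), dtype=int)   # play counts n_{ij,t}

    def update(self, i, j, r):
        """Incorporate the noisy payoff r observed for the pair (i, j)."""
        self.n[i, j] += 1
        # Incremental mean: A_hat <- A_hat + (r - A_hat) / n
        self.A_hat[i, j] += (r - self.A_hat[i, j]) / self.n[i, j]

est = PayoffEstimator(2, 2)
for r in (1.0, 0.0, 0.5):          # three noisy payoffs for the pair (0, 1)
    est.update(0, 1, r)
print(est.A_hat[0, 1])             # -> 0.5, the average of the observations
```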
# 3.3 Gap Notions
To quantify how close a chosen action pair is to the NE, we define suboptimality gaps based on deviations from the equilibrium and the best responses of the players. These gaps capture the extent to which each player could improve their outcome by deviating from their current strategy. For any action $i \in S _ { x } , j \in S _ { y }$ , we define the following suboptimality gaps:
$$
\begin{array} { l } { \Delta _ { i j } ^ { \operatorname* { m a x } } = \displaystyle \operatorname* { m a x } _ { i ^ { \prime } \in S _ { x } } A ( i ^ { \prime } , j ) - A ( i , j ) \smallskip } \\ { \Delta _ { i j } ^ { \operatorname* { m i n } } = A ( i , j ) - \displaystyle \operatorname* { m i n } _ { j ^ { \prime } \in S _ { y } } A ( i , j ^ { \prime } ) \smallskip } \\ { \Delta _ { i j } = \Delta _ { i j } ^ { \operatorname* { m a x } } + \Delta _ { i j } ^ { \operatorname* { m i n } } \smallskip } \\ { \Delta _ { i j } ^ { * } = A ( i ^ { * } , j ^ { * } ) - A ( i , j ) = V ^ { * } - A ( i , j ) \smallskip } \end{array}
$$
We note that $\Delta _ { i j } ^ { \mathrm { m a x } } \geq 0$ and $\Delta _ { i j } ^ { \mathrm { m i n } } \geq 0$ , thus $\Delta _ { i j } \geq 0$ for any action pair $( i , j )$ . However, $\Delta _ { i j } ^ { * }$ might be negative.
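For a known matrix the four gap notions of (4)-(7) can be computed in a vectorized way. A NumPy sketch (function name is ours; the NE location is passed in):

```python
import numpy as np

def gaps(A, i_star, j_star):
    """Compute the suboptimality gaps of Section 3.3 for every pair (i, j)."""
    d_max = A.max(axis=0, keepdims=True) - A   # max_{i'} A(i', j) - A(i, j)
    d_min = A - A.min(axis=1, keepdims=True)   # A(i, j) - min_{j'} A(i, j')
    d = d_max + d_min                          # Delta_{ij}
    d_star = A[i_star, j_star] - A             # V* - A(i, j), may be negative
    return d_max, d_min, d, d_star

A = np.array([[3.0, 2.0],
              [1.0, 0.0]])
d_max, d_min, d, d_star = gaps(A, 0, 1)        # pure NE at (0, 1), V* = 2
print(d, d_star)
```

On this example every entry satisfies the chain $-\Delta_{ij}^{\min} \le \Delta_{ij}^* \le \Delta_{ij}^{\max} \le \Delta_{ij}$ derived below, and $\Delta_{01}^* = \Delta_{01} = 0$ at the equilibrium pair.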
Using the NE property in (2), for all $i \in S _ { x }$ and $j \in S _ { y }$ we can write the following:
$$
\begin{array} { r l } & { A ( i ^ { * } , j ^ { * } ) \leq A ( i ^ { * } , j ) \leq \displaystyle \operatorname* { m a x } _ { i ^ { \prime } \in S _ { x } } A ( i ^ { \prime } , j ) \implies A ( i ^ { * } , j ^ { * } ) - A ( i , j ) \leq \displaystyle \operatorname* { m a x } _ { i ^ { \prime } \in S _ { x } } A ( i ^ { \prime } , j ) - A ( i , j ) } \\ & { A ( i ^ { * } , j ^ { * } ) \geq A ( i , j ^ { * } ) \geq \displaystyle \operatorname* { m i n } _ { j ^ { \prime } \in S _ { y } } A ( i , j ^ { \prime } ) \implies A ( i ^ { * } , j ^ { * } ) - A ( i , j ) \geq \displaystyle \operatorname* { m i n } _ { j ^ { \prime } \in S _ { y } } A ( i , j ^ { \prime } ) - A ( i , j ) } \end{array}
$$
Then, the inequality (8) implies that $\Delta _ { i j } ^ { * } \le \Delta _ { i j } ^ { \mathrm { m a x } }$ and similarly from (9), we have $\Delta _ { i j } ^ { * } \ge - \Delta _ { i j } ^ { \mathrm { m i n } }$ . Therefore, we have $- \Delta _ { i j } ^ { \mathrm { m i n } } \le \Delta _ { i j } ^ { * } \le \Delta _ { i j } ^ { \mathrm { m a x } }$ . On the other hand, since $\Delta _ { i j } ^ { \mathrm { m i n } } \geq 0$ , we can write
$$
\Delta _ { i j } ^ { * } \le \Delta _ { i j } ^ { \operatorname* { m a x } } + \Delta _ { i j } ^ { \operatorname* { m i n } } = \Delta _ { i j } .
$$
# 3.4 Regret Notions
To characterize the performance of our algorithms, we consider several regret notions. Let us begin with the following formulations, which are the external regrets of the max and min player, respectively:
$$
\begin{array} { r l } & { { \cal R } _ { T } ^ { \operatorname* { m a x } } = \underset { i \in { \cal S } _ { x } } { \operatorname* { m a x } } \mathbb { E } \Big [ \displaystyle \sum _ { t = 1 } ^ { T } A ( i , j ^ { t } ) - A ( i ^ { t } , j ^ { t } ) \Big ] } \\ & { \cal R _ { T } ^ { \operatorname* { m i n } } = \underset { j \in { \cal S } _ { y } } { \operatorname* { m a x } } \mathbb { E } \Big [ \displaystyle \sum _ { t = 1 } ^ { T } A ( i ^ { t } , j ^ { t } ) - A ( i ^ { t } , j ) \Big ] } \end{array}
$$
where $i ^ { t }$ and $j ^ { t }$ are the actions selected by the row and column player in round $t$ , respectively. $R _ { T } ^ { \mathrm { { m a x } } }$ represents the expected loss of the row player from not selecting the best possible action against the column player's actions, while $R _ { T } ^ { \mathrm { { m i n } } }$ captures the expected regret of the column player from not choosing its best action against the row player's actions.
We can now rewrite the regret definitions in terms of the suboptimality gaps:
$$
R _ { T } ^ { \mathrm { { m a x } } } = \sum _ { ( i , j ) \in { \cal S } _ { A } } { \Delta } _ { i j } ^ { \mathrm { { m a x } } } \mathbb { E } [ n _ { i j , T } ] , \qquad R _ { T } ^ { \mathrm { { m i n } } } = \sum _ { ( i , j ) \in { \cal S } _ { A } } { \Delta } _ { i j } ^ { \mathrm { { m i n } } } \mathbb { E } [ n _ { i j , T } ] ,
$$
where $\mathbb { E } [ n _ { i j , T } ]$ denotes the expected number of times the action pair $( i , j )$ is played over $T$ rounds.
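The gap-weighted form of the regrets is a simple elementwise product and sum over play counts. A small sketch (names and toy values are ours):

```python
import numpy as np

def external_regrets(d_max, d_min, counts):
    """Gap-weighted regrets R_T^max and R_T^min from play counts n_{ij,T}."""
    return float((d_max * counts).sum()), float((d_min * counts).sum())

# Toy 2x2 example: gap matrices and counts over T = 10 rounds.
d_max = np.array([[0.0, 0.0], [2.0, 2.0]])
d_min = np.array([[1.0, 0.0], [1.0, 0.0]])
counts = np.array([[2, 5], [2, 1]])
print(external_regrets(d_max, d_min, counts))  # -> (6.0, 4.0)
```

The combined external regret of Definition 3.3 is then just the sum of the two returned values, since $\Delta_{ij} = \Delta_{ij}^{\max} + \Delta_{ij}^{\min}$.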
In this paper, we analyze two regret notions: the external regret and the Nash regret.
Definition 3.3 (External regret). The (combined) external regret is given by the following:
$$
R _ { T } = R _ { T } ^ { m a x } + R _ { T } ^ { m i n } = \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } \mathbb { E } [ n _ { i j , T } ]
$$
External regret quantifies how much worse the players have performed compared to their best actions in hindsight. An alternative regret notion is the Nash regret, which measures the deviation of value from the Nash equilibrium:
Definition 3.4 (Nash regret). The Nash regret is expressed as
$$
R _ { T } ^ { * } = \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } \mathbb { E } [ n _ { i j , T } ] .
$$
The Nash regret, as defined here, can be negative, since the minimizing player might play suboptimally. However, if we are able to bound the Nash regret for the maximizing player, then by symmetry, we can also bound it for the minimizing player, thus effectively bounding its absolute value.
# 4 ETC-TPZSG Algorithm and Regret Analysis
In this section, we extend the ETC algorithm to the two-player zero-sum game setting as ETC-TPZSG. Each player selects actions from their corresponding finite action set, which is known. The game is repeated over a given time horizon $T$ , and the true payoff matrix $A$ is unknown to both players. Instead, they receive observations of the payoffs based on their chosen actions, which are iid subgaussian random variables.
Let $k$ denote the number of times each action pair $( i , j ) \in S _ { A }$ is explored. During the exploration phase, each player samples actions to estimate the expected payoffs, which enables us to construct an empirical payoff matrix from the observed payoffs. Then, in the commit phase, they play according to the pure NE strategy derived from the estimated payoff matrix. After presenting the ETC-TPZSG algorithm, we analyze its expected regret, measuring the loss due to limited information and suboptimal action pairs during the exploration phase.
As a result, the unknown payoff matrix is approximated by the ETC approach, as in [20] for the bandit setting. The ETC algorithm consists of two phases: an exploration phase, where the player randomly samples actions to gather information for a given exploration time $k$ , and a commit phase, where the player selects the empirically best action based on the gathered data. Algorithm 1 presents the implementation of ETC-TPZSG, where each player only observes the payoffs of the action pairs they choose during the exploration period; in the commit phase the algorithm then converges to an optimal action pair, which corresponds to the pure NE.
Applying ETC in a zero-sum game setting presents some challenges compared to the standard MAB scenario. In bandit problems, after the exploration phase, a player simply commits to the arm with the highest estimated reward, which is typically effective if exploration is sufficient. However, in a zero-sum game with a pure NE, the objective is not to identify an action with high reward but to find the true NE. Inadequate exploration can lead to poor estimates of the payoffs, making it difficult to correctly identify such an equilibrium. In fact, due to inaccurate estimates, a pure NE might not even appear to exist, even when one does in the true game. As a result, the design of the ETC approach in the zero-sum game setting requires careful tuning of the exploration phase.
# Algorithm 1 ETC-TPZSG
1: Input:
2: $S _ { A }$ : set of action pairs in the game matrix $A$ ▷ $S _ { A }$ includes action pairs $( i , j )$
3: $m$ : number of actions for row player
4: $l$ : number of actions for column player
5: $k$ : exploration time
6: $T$ : time horizon ▷ $1 \leq m l k \leq T$
7: $\sigma ^ { 2 }$ : subgaussian variance factor
8: Initialize: $\hat { A } = [ 0 ] ^ { m \times l }$ ▷ $\hat { A }$ is estimated game matrix
9: In round $t = 0 , 1 , 2 , . . . , m l k$ ; ▷ Exploration Phase
10: Explore each action pair $( i , j )$ in $S _ { A } \ k$ times
11: Update $\hat { A }$ using (3)
12: In round $t = m l k + 1 , m l k + 2 , . . . , T$ ; ▷ Committing Phase
13: Play an equilibrium $( i ^ { * } , j ^ { * } )$ satisfying the following property:
$$
\hat { A } ( i , j ^ { * } ) \leq \hat { A } ( i ^ { * } , j ^ { * } ) \leq \hat { A } ( i ^ { * } , j )
$$
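Under our reading of Algorithm 1, a minimal simulation might look as follows. Gaussian noise stands in for the $\sigma$-subgaussian payoffs, and the commit step falls back to the maximin/minimax indices of the estimate; all names are our own.

```python
import numpy as np

def etc_tpzsg(A, k, T, sigma=0.5, rng=None):
    """Explore every pair k times, then commit to the estimated pure NE
    (maximin/minimax indices of A_hat) for the remaining T - m*l*k rounds."""
    rng = np.random.default_rng(rng)
    m, l = A.shape
    A_hat = np.zeros((m, l))
    history = []
    # Exploration phase: play each pair (i, j) exactly k times.
    for i in range(m):
        for j in range(l):
            payoffs = A[i, j] + sigma * rng.standard_normal(k)
            A_hat[i, j] = payoffs.mean()
            history += [(i, j)] * k
    # Commit phase: play the estimated equilibrium for the rest of the horizon.
    i_star = int(np.argmax(A_hat.min(axis=1)))   # row player: maximin
    j_star = int(np.argmin(A_hat.max(axis=0)))   # column player: minimax
    history += [(i_star, j_star)] * (T - m * l * k)
    return (i_star, j_star), history

A = np.array([[3.0, 2.0], [1.0, 0.0]])           # pure NE at (0, 1)
(i_c, j_c), hist = etc_tpzsg(A, k=50, T=1000, rng=0)
print((i_c, j_c), len(hist))
```

With gaps of at least 1 and $k = 50$, the standard error of each estimated entry is about $\sigma/\sqrt{k} \approx 0.07$, so the commit step recovers the true equilibrium with overwhelming probability.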
Theorem 4.1. The Nash regret of Algorithm 1, when interacting with $\sigma$ -subgaussian payoffs, is upper bounded as follows:
$$
R _ { T } ^ { * } \le k \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } + ( T - N k ) \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } \exp \Big ( - \frac { k \Delta _ { i j } ^ { 2 } } { 1 6 \sigma ^ { 2 } } \Big )
$$
where $k$ is the exploration time per action pair and $N$ is the total number of action pairs.
We provide the proof in Appendix B. The regret comprises two main components: the cost of the exploration phase and the loss due to misidentifying the NE. Thus, it is crucial to choose an appropriate exploration time $k$ that balances sufficient exploration and efficient exploitation. Choosing $k$ too large leads to unnecessary exploration, while setting it too small increases the risk of making suboptimal decisions. Hence, careful tuning of $k$ is essential to minimize the overall regret.
Let us assume there are two action pairs $( i _ { 1 } , j )$ and $( i _ { 2 } , j )$ in the game, so that the row player has two actions and the column player has one. In addition, suppose the NE is at $( i _ { 1 } , j )$ . Then, we have $\Delta = A ( i _ { 1 } , j ) - A ( i _ { 2 } , j )$ and we can write the regret simply as
$$
R _ { T } ^ { * } \leq k \Delta + ( T - 2 k ) \Delta \exp \Big ( - \frac { k \Delta ^ { 2 } } { 1 6 \sigma ^ { 2 } } \Big ) \leq k \Delta + T \Delta \exp \Big ( - \frac { k \Delta ^ { 2 } } { 1 6 \sigma ^ { 2 } } \Big ) .
$$
Taking the first derivative with respect to $k$ and setting it to zero, we obtain the following exploration time:
$$
k = \operatorname* { m a x } \bigg \{ 1 , \bigg \lceil \frac { 1 6 \sigma ^ { 2 } } { \Delta ^ { 2 } } \ln \frac { \Delta ^ { 2 } T } { 1 6 \sigma ^ { 2 } } \bigg \rceil \bigg \}
$$
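When $\Delta$ is known, the exploration time in (16) can be evaluated directly; the `max{1, ...}` clause guards against the log term going negative for very small gaps. A short sketch (function name is ours):

```python
import math

def exploration_time(delta, T, sigma=0.5):
    """Per-pair exploration time k from (16), given a known gap delta:
    max{1, ceil((16 sigma^2 / delta^2) * ln(delta^2 T / (16 sigma^2)))}."""
    c = 16.0 * sigma ** 2
    return max(1, math.ceil((c / delta ** 2) * math.log(delta ** 2 * T / c)))

print(exploration_time(delta=1.0, T=10_000))   # -> 32
print(exploration_time(delta=0.05, T=100))     # -> 1 (the log term is negative)
```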
If $\Delta$ is known, we can easily determine the necessary exploration time. Substituting it into the regret bound yields
$$
R _ { T } ^ { * } \leq \operatorname* { m i n } \Big \{ T \Delta , \Delta + \frac { 1 6 \sigma ^ { 2 } } { \Delta } \Big ( 1 + \operatorname* { m a x } \Big \{ 0 , \ln \frac { \Delta ^ { 2 } T } { 1 6 \sigma ^ { 2 } } \Big \} \Big ) \Big \}
$$
where the first term $T \Delta$ is the worst-case regret, and the second term combines the regret bound in (15) with the exploration time in (16). Then, focusing on $\operatorname* { m a x } \left\{ 0 , \ln \frac { \Delta ^ { 2 } T } { 1 6 \sigma ^ { 2 } } \right\}$ , we find that $\textstyle \Delta = { \frac { 4 \sigma } { \sqrt { T } } }$ is a critical point. Substituting it into the regret bound (17), we obtain the following:
$$
R _ { T } ^ { * } \leq \Delta + c \sqrt { T }
$$
where $c > 0$ is some constant. If $\Delta \le 1$ , this becomes
$$
R _ { T } ^ { * } \leq 1 + c \sqrt { T } .
$$
Bounds like (18) are referred to as instance-independent since they depend only on the time horizon $T$ and not on the game instance. In the zero-sum game setting, our analysis shows that we obtain a regret bound comparable to the standard bandit setting in [20], indicating that the ETC-based algorithm performs effectively in adversarial environments and aligns with known results in the literature.
# 5 ETC-TPZSG-AE Algorithm and Regret Analysis
If a pure NE exists at $( i ^ { * } , j ^ { * } )$ , then it must satisfy the condition given in (2). For the algorithm in this section, we make use of the concept of the $\varepsilon$ -Nash Equilibrium ($\varepsilon$-NE), which approximately satisfies the standard NE condition. Specifically, an $\varepsilon$ -NE satisfies the NE conditions within a tolerance level $\varepsilon$ , making it a suitable criterion in learning settings where exact equilibrium identification is hard.
Based on this criterion, we introduce an elimination strategy in Algorithm 2, inspired by [6], to reduce unnecessary exploration of clearly suboptimal action pairs. Specifically, action pairs that do not satisfy the $\varepsilon$ -NE property are eliminated from further play, which enables a more efficient learning process by concentrating exploration on approximately optimal action pairs.
Definition 5.1. The action pair $( i ^ { * } , j ^ { * } )$ is an $\varepsilon$ -Nash Equilibrium ($\varepsilon$-NE) if the following condition holds:
$$
A ( i , j ^ { * } ) - \varepsilon \le A ( i ^ { * } , j ^ { * } ) \le A ( i ^ { * } , j ) + \varepsilon , \quad \forall i \in S _ { x } , j \in S _ { y } .
$$
To choose the exploration time $k$ of the algorithm, let us return to the regret bound in (14) and assume that there exist some $\Delta ^ { * }$ and $\Delta$ such that $\Delta ^ { * } \geq \Delta _ { i j } ^ { * }$ and $\Delta \le \Delta _ { i j }$ for all $( i , j ) \in S _ { A }$ . Substituting them into the regret bound and taking its derivative with respect to $k$ , we obtain a similar result as in (16); thus, we can use the following exploration time:
$$
k = \Big \lceil \frac { 1 6 \sigma ^ { 2 } } { \Delta ^ { 2 } } \ln \Big ( \frac { \Delta ^ { 2 } T } { 1 6 \sigma ^ { 2 } } \Big ) \Big \rceil
$$
Since the true suboptimality gaps $\Delta$ are unknown, the algorithm adopts a decreasing-$\Delta$ approach across rounds. This is motivated by the intuition that the remaining action pairs become closer to the true NE as suboptimal action pairs are progressively eliminated. Consequently, the suboptimality gaps among the remaining action pairs are expected to decrease over time. In other words, as the rounds progress, the players are expected to get closer to the true NE, which means they tend to repeat better action pairs more often. We can afford to use smaller $\Delta$ values to explore action pairs near the NE more thoroughly, since a smaller $\Delta$ leads to a longer exploration time $k$ .
The algorithm schedules $\Delta _ { i j }$ across rounds using $\hat { \Delta } _ { t }$ , which guides both exploration time and the tolerance level for the $\varepsilon$ -NE condition. When $\hat { \Delta } _ { t }$ is large, action pairs are played fewer times and more action pairs are kept during elimination. As $\hat { \Delta } _ { t }$ decreases, the algorithm focuses exploration on pairs closer to equilibrium, having already eliminated clearly suboptimal ones.
In order to perform updates over rounds, an initial estimate is required. When the payoffs are bounded within a known interval, such as [0, 1], it is common practice to initialize the estimated suboptimality gap $\hat { \Delta } _ { t }$ with the value 1, representing the maximum possible difference between arm rewards, as in [6]. However, in our setting, the payoffs are assumed to be $\sigma$ -subgaussian, which implies that there is no strict upper bound on the suboptimality gaps; we only have $\Delta _ { i j } \geq 0$ . Since no empirical estimates are available at the initialization step, it is not feasible to use a bound on the expected value of the maximum of $\sigma$ -subgaussian random variables to guide the choice of $\hat { \Delta } _ { t }$ . To address this, we initialize $\hat { \Delta } _ { t }$ with $4 \sigma$ , which provides a practical starting point for the analysis.
Moreover, we use $\hat { \Delta } _ { t } = 2 ^ { - t + 2 } \sigma$ in each round $t$ , where $t$ takes values from 0 to $\begin{array} { r } { \left\lfloor \frac { 1 } { 2 } \log _ { 2 } { \frac { T } { e } } \right\rfloor } \end{array}$ . From (20), we have $\Delta \approx \sqrt { \left( 1 6 \sigma ^ { 2 } / k \right) \ln \left( \Delta ^ { 2 } T / 1 6 \sigma ^ { 2 } \right) }$ , which requires $\ln { ( \Delta ^ { 2 } T / 1 6 \sigma ^ { 2 } ) } \geq 0$ and thus $\Delta \ge \sqrt { 1 6 \sigma ^ { 2 } / T }$ . Then, using $2 ^ { - t + 2 } \sigma \geq \sqrt { 1 6 \sigma ^ { 2 } / T }$ and after some calculations, we obtain $t \leq { \textstyle { \frac { 1 } { 2 } } } \log _ { 2 } T$ . The division by $e$ is a technical adjustment that makes the bound safer in the analysis; otherwise the number of plays would be zero in the last round.
If we assume, as before, that a unique pure NE exists in the game, a well-estimated game matrix $\hat { A }$ should also exhibit the properties necessary to support the equilibrium. Specifically, $\hat { A }$ must satisfy the conditions required for the $\varepsilon$ -NE in (19) to hold. Based on this principle, Algorithm 2 systematically eliminates action pairs that fail to satisfy this condition, as such pairs cannot be near the equilibrium. By reducing the set of action pairs to only those that meet this criterion, the algorithm ensures that the remaining action pairs are the only ones that could potentially be near the NE; thus, it can focus on the most relevant action pairs. Consequently, we expect the algorithm to converge to an accurate approximation of the NE, even in the presence of estimation errors in $\hat { A }$ .
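The elimination loop described above can be sketched as follows. This is only our reading of the description, not the paper's listing: the gap schedule $\hat{\Delta}_t = 2^{-t+2}\sigma$, the per-round exploration time from (20), and the $\varepsilon$-NE test of (19) follow the text, while the stopping rules and the choice $\varepsilon = \hat{\Delta}_t$ are our own simplifications, and the final commit phase is omitted.

```python
import numpy as np

def etc_tpzsg_ae(A, T, sigma=0.5, rng=None):
    """Sketch of the elimination loop: round r uses the gap guess
    2^{-r+2}*sigma, plays every surviving pair k_r times, then drops
    pairs failing the eps-NE test (commit phase omitted)."""
    rng = np.random.default_rng(rng)
    m, l = A.shape
    sums = np.zeros((m, l))
    counts = np.zeros((m, l))
    active = {(i, j) for i in range(m) for j in range(l)}
    t_used = 0
    max_round = int(np.floor(0.5 * np.log2(T / np.e)))
    for r in range(max_round + 1):
        delta_hat = 2.0 ** (-r + 2) * sigma       # decreasing gap schedule
        k = max(1, int(np.ceil(16 * sigma**2 / delta_hat**2
                               * max(1.0, np.log(delta_hat**2 * T / (16 * sigma**2))))))
        for (i, j) in active:
            if t_used + k > T:                    # respect the horizon
                break
            sums[i, j] += (A[i, j] + sigma * rng.standard_normal(k)).sum()
            counts[i, j] += k
            t_used += k
        A_hat = sums / np.maximum(counts, 1)
        eps = delta_hat
        # Keep (i, j) only if it could still be an approximate saddle point.
        active = {(i, j) for (i, j) in active
                  if A_hat[:, j].max() - eps <= A_hat[i, j] <= A_hat[i, :].min() + eps}
        if len(active) <= 1:
            break
    return active

A = np.array([[3.0, 2.0], [1.0, 0.0]])            # pure NE at (0, 1)
surviving = etc_tpzsg_ae(A, T=10_000, rng=0)
print(sorted(surviving))
```

On this toy game the pair (0, 1) satisfies its own $\varepsilon$-NE condition in every round, while the other pairs are discarded once $\hat{\Delta}_t$ drops below their gaps.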
# Algorithm 2 ETC-TPZSG-AE
Theorem 5.1. The Nash regret of Algorithm 2, when interacting with $\sigma$ -subgaussian payoffs, is upper bounded by
$$
R _ { T } ^ { * } \leq \sum _ { ( i , j ) \in S _ { A _ { 1 } } } \Delta _ { i j } ^ { * } \left( 1 + \frac { 7 6 8 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } + \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } \ln \left( \frac { \Delta _ { i j } ^ { 2 } T } { 2 5 6 \sigma ^ { 2 } } \right) \right) + \sum _ { ( i , j ) \in S _ { A _ { 2 } } } \left( \Delta _ { i j } ^ { * } \frac { 5 1 2 \sigma ^ { 2 } } { \lambda ^ { 2 } } + \Delta _ { i j } ^ { * } T \right)
$$
where $\begin{array} { r } { \lambda \ge \sqrt { \frac { 1 6 \sigma ^ { 2 } e } { T } } } \end{array}$ , and $S _ { A _ { 1 } } = \{ ( i , j ) \in S _ { A } : \Delta _ { i j } > \lambda \}$ and $S _ { A _ { 2 } } = \{ ( i , j ) \in S _ { A } : 0 < \Delta _ { i j } \le \lambda \}$ are two subsets of the action pairs.
Moreover, if we consider the external regret notion in (12), we obtain the following regret bound.
Theorem 5.2. The external regret incurred by Algorithm 2 when interacting with $\sigma$ -subgaussian payoffs is bounded as follows:
$$
R _ { T } \leq \sum _ { ( i , j ) \in { \cal S } _ { A _ { 1 } } } \left( \Delta _ { i j } + \frac { 7 6 8 \sigma ^ { 2 } } { \Delta _ { i j } } + \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } } \ln \left( \frac { \Delta _ { i j } ^ { 2 } T } { 2 5 6 \sigma ^ { 2 } } \right) \right) + \sum _ { ( i , j ) \in { \cal S } _ { A _ { 2 } } } \left( \frac { 5 1 2 \sigma ^ { 2 } } { \lambda } + \lambda T \right)
$$
We simplify this regret bound to $O ( \frac { \log ( T \Delta ^ { 2 } ) } { \Delta } )$ , as the logarithmic term dominates: by setting $\begin{array} { r } { \lambda = \sqrt { \frac { 1 6 \sigma ^ { 2 } e } { T } } } \end{array}$ , we ensure that the term $\lambda T$ is bounded by $\sqrt { 1 6 \sigma ^ { 2 } e T }$ , which is at most $\frac { 1 6 \sigma ^ { 2 } e } { \Delta _ { i j } }$ when $\Delta _ { i j } \leq \lambda$ . Our results are consistent with the regret bound derived by [6]. The proofs of the theorems are provided in Appendix C.
We close this section by contrasting the two regret formulations for Algorithm 2: Nash regret, defined in (3.4), and external regret, defined in (3.3). The first, denoted by $R_T^*$, evaluates performance against the NE, capturing the cumulative loss relative to the equilibrium strategy. In contrast, $R_T$ measures the losses of both players with respect to their individual best responses at each round, providing a broader and more flexible notion of regret. By the inequality $\Delta_{ij}^* \leq \Delta_{ij}$ for all action pairs, it follows that $R_T^* \leq R_T$. This aligns with the interpretation that regret against the NE makes a more specific comparison and thus naturally admits a tighter upper bound.
# 6 Experiments
In this section, we present a set of simulations to support our theoretical findings and demonstrate the performance of the proposed algorithms. The experiments are conducted with fixed time horizons of $T = 10^3$ and $T = 10^4$. Additional details are provided in Appendix D.
Figure 1: (a) The expected regret of ETC-TPZSG with $k$ in (16) and the upper bound in (17). (b) The cumulative regrets of ETC-TPZSG, ETC-TPZSG-AE, and Tsallis-INF from [18]. (c) The theoretical regret bounds for ETC-TPZSG, ETC-TPZSG-AE, and Tsallis-INF from [18].
Figure 1a shows the expected regret, averaged over $10^5$ simulation runs, of ETC-TPZSG using $k$ in (16), together with the upper bound in (17). We consider a setting with $N = 2$, where the row player has two actions, the column player has one, and the first row corresponds to the NE. The suboptimality gap $\Delta$ varies from 0 to 1 and the subgaussian parameter is set to $\sigma = 0.5$. The results align with the bound in Theorem 4.1.
In Figure 1b, we compare ETC-TPZSG, ETC-TPZSG-AE, and Tsallis-INF [18], the only existing method with instance-dependent bounds for TPZSGs. Cumulative regret is computed as the absolute difference between the game value and the payoff of the action played (to avoid negative values), averaged over $10^3$ runs. We use a $2 \times 2$ game matrix with Gaussian payoffs, and the number of exploration rounds $k$ per action pair for ETC-TPZSG is randomly selected from a predefined list. The results show that ETC-TPZSG-AE achieves lower regret than ETC-TPZSG, highlighting the effectiveness of its action-pair elimination strategy. Figure 1c compares the theoretical regret bounds (in $O(\cdot)$ notation) of the three methods for $\Delta = 0.5$, which aligns with the results in Figure 1b.
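To make the protocol concrete, the following is a minimal sketch of the explore-then-commit loop, our own illustrative reconstruction rather than the code used for Figure 1: every action pair of a synthetic $2 \times 2$ matrix is sampled $k$ times under Gaussian noise, and the players commit to a pure saddle point of the empirical matrix. The matrix `A`, the helper `pure_nash`, and the fallback rule are assumptions made for the sketch.

```python
import numpy as np

def pure_nash(M):
    """Return (i, j) such that M[i, j] is the max of column j and the min of row i, else None."""
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            if M[i, j] == M[:, j].max() and M[i, j] == M[i, :].min():
                return i, j
    return None

def etc_tpzsg_commit(A, k, sigma=0.5, seed=0):
    """Exploration phase of an ETC-style matrix game: play every pair k times,
    average the noisy payoffs, and commit to the empirical pure NE."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal(A.shape + (k,))
    A_hat = A + noise.mean(axis=2)          # empirical payoff matrix after k plays per pair
    commit = pure_nash(A_hat)
    if commit is None:                      # no empirical saddle point: maximin row, minimax column
        commit = (int(A_hat.min(axis=1).argmax()), int(A_hat.max(axis=0).argmin()))
    return commit, A_hat

# A 2x2 game whose pure NE is (0, 0): 0.6 is the column-0 maximum and the row-0 minimum.
A = np.array([[0.6, 0.8],
              [0.2, 0.1]])
commit, A_hat = etc_tpzsg_commit(A, k=500)
```

With $k = 500$ and $\sigma = 0.5$, each empirical entry has standard deviation $\sigma/\sqrt{k} \approx 0.022$, far below the gaps of this matrix, so the committed pair matches the true NE with overwhelming probability.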
# 7 Discussion
We investigate a TPZSG with bandit feedback, where the payoff matrix is unknown and must be learned through player interactions. This setting is challenging, as players must estimate payoffs while making strategic decisions in an adversarial environment. We adopt the ETC algorithm due to its simplicity and widespread use, and because it has not previously been analyzed in this context. We also integrate an elimination-based method, allowing the systematic removal of suboptimal action pairs, thereby improving convergence to the equilibrium and reducing unnecessary exploration. A key contribution of our work is the derivation of instance-dependent upper bounds on the expected regret for both algorithms, a topic that has received limited attention in the literature on zero-sum games.
While our study focuses on pure strategy learning in a zero-sum game with bandit feedback and provides theoretical expected regret bounds, several directions remain open for future study. One interesting extension is to consider games where the equilibrium is mixed. While the algorithms provided can be used to identify the support of the equilibrium, a different approach would be needed to converge efficiently to a mixed equilibrium.
Finally, an important and relatively unexplored direction is fairness in zero-sum games. For example, introducing mechanisms that ensure similar action pairs are explored equally can promote fairness in strategy estimation. Such a fair-play mechanism could be essential in TPZSG settings, promoting balanced exploration and improving overall strategy estimation. Moreover, by integrating fairness-based algorithms or constraints, it may be possible to design game environments that ensure balanced opportunities for all players.
References
[1] Jean-Yves Audibert and Sébastien Bubeck. Best arm identification in multi-armed bandits. In 23rd Conference on Learning Theory, pages 1–13, 2010.
[2] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Tuning bandit algorithms in stochastic environments. In International Conference on Algorithmic Learning Theory, pages 150–165. Springer, 2007.
[3] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration–exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876–1902, 2009.
[4] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
[5] Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In 36th Annual Foundations of Computer Science, pages 322–331. IEEE, 1995.
[6] Peter Auer and Ronald Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
[7] Avrim Blum and Yishay Mansour. From external to internal regret. Journal of Machine Learning Research, 8(6), 2007.
[8] Avrim Blum and Yishay Mansour. Learning, regret minimization, and equilibria. Algorithmic Game Theory, pages 79–102, 2007.
[9] Sébastien Bubeck, Nicolo Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1):1–122, 2012.
[10] Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure exploration in multi-armed bandits problems. In International Conference on Algorithmic Learning theory, pages 23–37. Springer, 2009.
[11] Yang Cai, Haipeng Luo, Chen-Yu Wei, and Weiqiang Zheng. Uncoupled and convergent learning in two-player zero-sum markov games with bandit feedback. In Advances in Neural Information Processing Systems, pages 36364–36406, 2023.
[12] Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[13] Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, pages 235–254. SIAM, 2011.
[14] Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a nash equilibrium. Communications of the ACM, 52(2):89–97, 2009.
[15] Constantinos Daskalakis, Aranyak Mehta, and Christos Papadimitriou. A note on approximate nash equilibria. Theoretical Computer Science, 410(17):1581–1588, 2009.
[16] Victor Gabillon, Mohammad Ghavamzadeh, and Alessandro Lazaric. Best arm identification: A unified approach to fixed budget and fixed confidence. Advances in Neural Information Processing Systems, 25, 2012.
[17] Aurélien Garivier, Tor Lattimore, and Emilie Kaufmann. On explore-then-commit strategies. Advances in Neural Information Processing Systems, 29, 2016.
[18] Shinji Ito, Haipeng Luo, Taira Tsuchiya, and Yue Wu. Instance-dependent regret bounds for learning two-player zero-sum games with bandit feedback. arXiv preprint arXiv:2502.17625, 2025.
[19] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in applied mathematics, 6(1):4–22, 1985.
[20] Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
[21] Francis Maes, Louis Wehenkel, and Damien Ernst. Learning to play $k$-armed bandit problems. In 4th International Conference on Agents and Artificial Intelligence (ICAART 2012), 2012.
[22] Arnab Maiti, Kevin Jamieson, and Lillian Ratliff. Instance-dependent sample complexity bounds for zero-sum matrix games. In International Conference on Artificial Intelligence and Statistics, pages 9429–9469. PMLR, 2023.
[23] Eric V Mazumdar, Michael I Jordan, and S Shankar Sastry. On finding local nash equilibria (and only local nash equilibria) in zero-sum games. arXiv preprint arXiv:1901.00838, 2019.
[24] John F Nash. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences of the United States of America, 36(1):48–49, 1950.
[25] John F Nash. Non-cooperative games. Annals of Mathematics, 54(2):286–295, 1951.
[26] Gergely Neu. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. Advances in Neural Information Processing Systems, 28, 2015.
[27] Brendan O’Donoghue, Tor Lattimore, and Ian Osband. Matrix games with bandit feedback. In Uncertainty in Artificial Intelligence, pages 279–289. PMLR, 2021.
[28] Vianney Perchet, Philippe Rigollet, Sylvain Chassang, and Erik Snowberg. Batched bandit problems. The Annals of Statistics, 44(2):660–681, 2016.
[29] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527–535, 1952.
[30] William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285–294, 1933.
[31] John Von Neumann. Zur theorie der gesellschaftsspiele. Mathematische Annalen, 100(1):295– 320, 1928.
[32] John Von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.
[33] Alan R Washburn. Two-Person Zero-Sum Games. Springer, 2014.
[34] Ali Yekkehkhany, Ebrahim Arian, Mohammad Hajiesmaili, and Rakesh Nagi. Risk-averse explore-then-commit algorithms for finite-time bandits. In 2019 IEEE 58th Conference on Decision and Control, pages 8441–8446. IEEE, 2019.
[35] Martin Zinkevich, Michael Bowling, and Neil Burch. A new algorithm for generating equilibria in massive zero-sum games. In Proceedings of the 22nd National Conference on Artificial Intelligence, pages 788–793, 2007.
# A Subgaussian Properties
This section collects supplementary results from [20] that are used repeatedly throughout our proofs. These results concern standard properties of subgaussian random variables, such as tail inequalities. They are included here for completeness and easy reference, since they play a critical role in our analyses. The statements are not original; we include only the parts that are directly useful for our purposes.
Theorem A.1 ([20, Theorem 5.3]). If $X$ is a $\sigma$-subgaussian random variable, then for any $\epsilon \geq 0$,
$$
\mathbb { P } ( X \geq \epsilon ) \leq \exp \Big ( - \frac { \epsilon ^ { 2 } } { 2 \sigma ^ { 2 } } \Big ) .
$$
Lemma A.1 ([20, Lemma 5.4]). If $X$ is $\sigma$-subgaussian, and $X_1$ and $X_2$ are independent with subgaussian parameters $\sigma_1$ and $\sigma_2$, respectively, then the following hold:
(a) $c X$ is $| c | \sigma$ -subgaussian for all $c \in \mathbb { R }$ .
(b) $X _ { 1 } + X _ { 2 }$ is $\sqrt { \sigma _ { 1 } ^ { 2 } + \sigma _ { 2 } ^ { 2 } }$ -subgaussian.
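Theorem A.1 is easy to probe numerically: a centered Gaussian with standard deviation $\sigma$ is $\sigma$-subgaussian, so its empirical tail frequency should stay below $\exp(-\epsilon^2/2\sigma^2)$. The sample size and the grid of $\epsilon$ values below are arbitrary choices for this sketch.

```python
import math
import random

random.seed(0)
sigma = 1.0
n = 200_000
samples = [random.gauss(0.0, sigma) for _ in range(n)]

for eps in [0.5, 1.0, 2.0]:
    tail = sum(x >= eps for x in samples) / n           # empirical P(X >= eps)
    bound = math.exp(-eps**2 / (2 * sigma**2))          # subgaussian tail bound of Theorem A.1
    assert tail <= bound                                # the bound is loose for Gaussians
```

For a Gaussian the Chernoff-type bound is conservative (e.g. the true tail at $\epsilon = \sigma$ is about $0.16$ versus the bound $e^{-1/2} \approx 0.61$), which is why the assertions hold with large margin.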
# B Proof of Theorem 4.1
To analyze the Nash regret of the ETC-TPZSG algorithm, we begin by decomposing the total expected regret into two phases: the exploration phase and the committing phase. During exploration, each action pair is played a fixed number of times, $k$, to estimate its mean payoff, potentially incurring regret when suboptimal action pairs are played. Once the algorithm commits to the empirically best action pair, which is the empirical pure NE, regret accumulates only if this action pair is not the true optimal one. Our goal is to bound the expected regret by quantifying the probability of committing to a suboptimal action pair, which we refer to as a misidentified NE, based on the payoff estimates obtained during the exploration phase.
Let $( i ^ { \prime } , j ^ { \prime } )$ denote a misidentified NE, that is, an action pair that appears better than $( i ^ { * } , j ^ { * } )$ based on the estimated matrix. To analyze the probability of committing to a misidentified NE, we consider whether the following two events, $E _ { 1 }$ and $E _ { 2 }$, occur:
$$
E _ { 1 } : \hat { A } ( i , j ^ { \prime } ) \leq \hat { A } ( i ^ { \prime } , j ^ { \prime } ) , \quad E _ { 2 } : \hat { A } ( i ^ { \prime } , j ^ { \prime } ) \leq \hat { A } ( i ^ { \prime } , j ) , \quad \forall i \in S _ { x } , \forall j \in S _ { y }
$$
and we want to find a bound for $\mathbb { P } ( E _ { 1 } \cap E _ { 2 } )$, because these two conditions must both be satisfied for $( i ^ { \prime } , j ^ { \prime } )$ to be considered a NE.
Since the events $E_1$ and $E_2$ are not independent, we consider two different approaches to find an upper bound for $\mathbb{P}(E_1 \cap E_2)$, and select the one that gives the tighter bound. The first approach uses the fact that $\mathbb{P}(E_1 \cap E_2) \leq \mathbb{P}(E_1)$ and $\mathbb{P}(E_1 \cap E_2) \leq \mathbb{P}(E_2)$, which together imply $\mathbb{P}(E_1 \cap E_2) \leq \sqrt{\mathbb{P}(E_1)\mathbb{P}(E_2)}$. Alternatively, we can use the inequality $\mathbb{P}(E_1 \cap E_2) \leq \min\{\mathbb{P}(E_1), \mathbb{P}(E_2)\}$, which may provide a tighter bound depending on the values of $\mathbb{P}(E_1)$ and $\mathbb{P}(E_2)$. The choice between these approaches depends on which yields the smaller upper bound.
Let us start by considering the first approach. We can write it as
$$
\begin{aligned}
\mathbb{P}(E_1 \cap E_2) &= \mathbb{P}\big(\hat{A}(i, j') \leq \hat{A}(i', j') \leq \hat{A}(i', j), \ \forall i \in S_x, \forall j \in S_y\big) \\
&\leq \mathbb{P}\big(\hat{A}(\bar{i}, j') \leq \hat{A}(i', j') \ \cap \ \hat{A}(i', j') \leq \hat{A}(i', \bar{j})\big) \\
&\leq \sqrt{\mathbb{P}\big(\hat{A}(\bar{i}, j') \leq \hat{A}(i', j')\big)\,\mathbb{P}\big(\hat{A}(i', j') \leq \hat{A}(i', \bar{j})\big)} \\
&= \sqrt{\mathbb{P}\big(\hat{A}(i', j') - \hat{A}(\bar{i}, j') + \Delta_{i'j'}^{\max} \geq \Delta_{i'j'}^{\max}\big)\,\mathbb{P}\big(\hat{A}(i', \bar{j}) - \hat{A}(i', j') + \Delta_{i'j'}^{\min} \geq \Delta_{i'j'}^{\min}\big)} \\
&\leq \sqrt{\exp\Big(-\frac{k (\Delta_{i'j'}^{\max})^2}{4\sigma^2}\Big) \exp\Big(-\frac{k (\Delta_{i'j'}^{\min})^2}{4\sigma^2}\Big)} \\
&= \exp\Big(-\frac{k \big((\Delta_{i'j'}^{\max})^2 + (\Delta_{i'j'}^{\min})^2\big)}{8\sigma^2}\Big) \\
&\leq \exp\Big(-\frac{k \Delta_{i'j'}^2}{16\sigma^2}\Big)
\end{aligned}
$$
where $\bar{i} \in \arg\max_i A(i, j')$, $\bar{j} \in \arg\min_j A(i', j)$, $\Delta_{i'j'}^{\max} = \max_i A(i, j') - A(i', j')$, $\Delta_{i'j'}^{\min} = A(i', j') - \min_j A(i', j)$, and $\Delta_{i'j'} = \Delta_{i'j'}^{\max} + \Delta_{i'j'}^{\min}$.
To obtain the first inequality, we utilize the fact that the probability of the intersection of multiple events is at most that of any individual event; that is, for any collection of events $e_1, e_2, \ldots, e_n$, we have $\mathbb{P}(e_1 \cap e_2 \cap \cdots \cap e_n) \leq \mathbb{P}(e_i)$ for all $i \in \{1, \ldots, n\}$. The subsequent application of Theorem A.1 with a $\sqrt{2\sigma^2/k}$-subgaussian random variable is valid since $\Delta_{i'j'}^{\max} \geq 0$ and $\Delta_{i'j'}^{\min} \geq 0$, which hold by their definitions.
At this point, we need to show that the differences $\hat { A } ( i , j ) - \hat { A } ( x , y )$ are $\sqrt { 2 \sigma ^ { 2 } / k }$ -subgaussian where $( i , j )$ and $( x , y )$ are any action pairs in $S _ { A }$ . To establish this, we utilize Lemma A.1, which provides key properties of subgaussian random variables. Specifically, both $\hat { A } ( i , j )$ and $\hat { A } ( x , y )$ are $\sqrt { \sigma ^ { 2 } / k }$ - subgaussian as the average payoffs are computed by the equation (3) and during exploration phase, each action pair is played $k$ times. By the lemma, the difference of two independent subgaussian random variables with parameters $\sqrt { \sigma ^ { 2 } / k }$ is subgaussian with parameter $\sqrt { 2 \sigma ^ { 2 } / k }$ , which follows that $\hat { A } ( i , j ) - \hat { A } ( x , y )$ is $\sqrt { 2 \sigma ^ { 2 } / k }$ -subgaussian for any action pairs $( i , j )$ and $( x , y )$ .
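This subgaussian parameter can also be checked empirically: for Gaussian payoffs, the difference of two independent $k$-sample means is exactly Gaussian with standard deviation $\sqrt{2\sigma^2/k}$, matching the parameter given by Lemma A.1. The constants below are arbitrary illustrative choices.

```python
import math
import random
import statistics

random.seed(1)
sigma, k, trials = 1.0, 25, 20_000
target = math.sqrt(2 * sigma**2 / k)   # subgaussian parameter of A_hat(i,j) - A_hat(x,y)

diffs = []
for _ in range(trials):
    m1 = sum(random.gauss(0.0, sigma) for _ in range(k)) / k   # plays the role of A_hat(i, j)
    m2 = sum(random.gauss(0.0, sigma) for _ in range(k)) / k   # plays the role of A_hat(x, y)
    diffs.append(m1 - m2)

emp = statistics.pstdev(diffs)         # should be close to target = sqrt(2/25) ≈ 0.283
assert abs(emp - target) < 0.01
```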
Then, to obtain the last inequality, we observe that
$$
\frac { \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { 2 } } { 2 } = \frac { ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m a x } } + \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m i n } } ) ^ { 2 } } { 2 } \leq ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m a x } } ) ^ { 2 } + ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m i n } } ) ^ { 2 }
$$
where the inequality follows from the fact that $( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \mathrm { m a x } } - \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \mathrm { m i n } } ) ^ { 2 } \geq 0$ .
As mentioned before, there is an alternative approach to bound $\mathbb{P}(E_1 \cap E_2)$, that is, to bound the probability of a misidentified NE, using the minimum of the individual probabilities. Proceeding similarly, we obtain
$$
\begin{aligned}
\mathbb{P}(E_1 \cap E_2) &\leq \min\{\mathbb{P}(E_1), \mathbb{P}(E_2)\} \\
&\leq \min\Big\{\mathbb{P}\big(\hat{A}(\bar{i}, j') \leq \hat{A}(i', j')\big),\ \mathbb{P}\big(\hat{A}(i', j') \leq \hat{A}(i', \bar{j})\big)\Big\} \\
&\leq \min\Big\{\exp\Big(-\frac{k (\Delta_{i'j'}^{\max})^2}{4\sigma^2}\Big),\ \exp\Big(-\frac{k (\Delta_{i'j'}^{\min})^2}{4\sigma^2}\Big)\Big\} \\
&= \exp\Big(-\frac{k \max\{(\Delta_{i'j'}^{\max})^2, (\Delta_{i'j'}^{\min})^2\}}{4\sigma^2}\Big) \\
&\leq \exp\Big(-\frac{k \big((\Delta_{i'j'}^{\max})^2 + (\Delta_{i'j'}^{\min})^2\big)}{8\sigma^2}\Big) \\
&\leq \exp\Big(-\frac{k \Delta_{i'j'}^2}{16\sigma^2}\Big)
\end{aligned}
$$
where $\bar{i} \in \arg\max_i A(i, j')$ and $\bar{j} \in \arg\min_j A(i', j)$, and to obtain the second-to-last inequality, we use the following:
$$
\operatorname* { m a x } \{ ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m a x } } ) ^ { 2 } , ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m i n } } ) ^ { 2 } \} \geq \frac { ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m a x } } ) ^ { 2 } + ( \Delta _ { i ^ { \prime } j ^ { \prime } } ^ { \operatorname* { m i n } } ) ^ { 2 } } { 2 }
$$
Indeed, if the maximum is $(\Delta_{i'j'}^{\max})^2$, then $(\Delta_{i'j'}^{\max})^2 \geq (\Delta_{i'j'}^{\min})^2$, so the maximum is at least the average of the two squares; the case $\max\{(\Delta_{i'j'}^{\max})^2, (\Delta_{i'j'}^{\min})^2\} = (\Delta_{i'j'}^{\min})^2$ is symmetric. Hence the inequality holds, and the final bound follows.
Thus, both approaches provide the same upper bound on the probability of a misidentified NE. In the regret analysis, we focus on the loss due to playing suboptimal action pairs (those that are not the true NE). Thus, after $T$ rounds of play, with $1 \leq Nk \leq T$ where $N = ml$ is the number of action pairs, we can write the Nash regret as
$$
R _ { T } ^ { * } = \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } \mathbb { E } [ n _ { i j , T } ]
$$
where $\mathbb{E}[n_{ij,T}]$ is the expected number of times the action pair $(i, j)$ is played. Each action pair $(i, j) \in S_A$ is played at least $k$ times during exploration. After the exploration phase, we account for the possibility of playing the misidentified NE $(i', j')$ instead of the true one $(i^*, j^*)$, since the players commit to the empirically best action pair after exploration. Using the probability of a misidentified NE derived above, we can write
$$
\mathbb { E } [ n _ { i j , T } ] \leq k + \left( T - N k \right) \exp \Big ( - \frac { k \Delta _ { i j } ^ { 2 } } { 1 6 \sigma ^ { 2 } } \Big ) .
$$
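Substituting this bound on $\mathbb{E}[n_{ij,T}]$ into the Nash regret decomposition above yields
$$
R_T^* = \sum_{(i,j) \in S_A} \Delta_{ij}^*\, \mathbb{E}[n_{ij,T}] \leq \sum_{(i,j) \in S_A} \Delta_{ij}^* \left( k + (T - Nk) \exp\Big(-\frac{k \Delta_{ij}^2}{16\sigma^2}\Big) \right).
$$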
Hence, the proof is concluded.
# C Proof of Theorem 5.1
According to the algorithm, $\sqrt{\frac{16\sigma^2 e}{T}}$ is a critical value for $\hat{\Delta}_t$, since it takes this value in the last round. Accordingly, let us define a threshold parameter $\lambda \geq \sqrt{\frac{16\sigma^2 e}{T}}$, which yields two subsets of action pairs, $S_{A_1} = \{(i, j) \in S_A : \Delta_{ij} > \lambda\}$ and $S_{A_2} = \{(i, j) \in S_A : 0 < \Delta_{ij} \leq \lambda\}$. Since $\Delta_{ij}^* \leq \Delta_{ij}$ ensures the bounds are preserved, it is natural to use these action pair sets in the analysis. Thus, we can write the Nash regret as
$$
R _ { T } ^ { * } = \sum _ { ( i , j ) \in { \cal S } _ { A _ { 1 } } } \Delta _ { i j } ^ { * } \mathbb { E } [ n _ { i j , T } ] + \sum _ { ( i , j ) \in { \cal S } _ { A _ { 2 } } } \Delta _ { i j } ^ { * } \mathbb { E } [ n _ { i j , T } ]
$$
where $\Delta_{ij}^* = A(i^*, j^*) - A(i, j)$ and $n_{ij,T}$ is the total number of times the action pair $(i, j)$ is played by time $T$.
To analyze the Nash regret of the ETC-TPZSG-AE algorithm, it is essential to consider the various scenarios that may lead to suboptimal outcomes. Each of these scenarios contributes to the total Nash regret, as they involve either the incorrect elimination of the optimal action pair or the unnecessary selection of suboptimal pairs. The former results in discarding the optimal strategy because of insufficient evidence, while the latter leads to spending more time on action pairs that are unlikely to be part of any near-optimal equilibrium.
The algorithm employs a scheduling mechanism for $\Delta_{ij}$, where $\hat{\Delta}_t$ is halved in each round. This approach is necessary because the true values of $\Delta_{ij}$ are unknown to the players. We use $\hat{\Delta}_t$ to determine both the exploration time and the tolerance level in the $\varepsilon$-NE property. Specifically, when $\hat{\Delta}_t$ is large, the algorithm plays each action pair fewer times and keeps more pairs during the elimination step, since the threshold for elimination is relatively loose. As $\hat{\Delta}_t$ decreases, the algorithm allocates more exploration to action pairs that are closer to being part of a NE, because those that are clearly suboptimal have already been eliminated. This adaptive process enables more precise identification of near-optimal strategies via the $\varepsilon$-NE condition while minimizing regret.
For each suboptimal action pair $( i , j )$ , let $t _ { i j } = \operatorname* { m i n } \{ t : \hat { \Delta } _ { t } < \Delta _ { i j } / 2 \}$ refer to the earliest round such that $\hat { \Delta } _ { t } < \Delta _ { i j } / 2$ . Using the fact that $\begin{array} { r } { \hat { \Delta } _ { t + 1 } = \frac { \hat { \Delta } _ { t } } { 2 } } \end{array}$ and the definition of $t _ { i j }$ , we can write the following:
$$
\frac { 1 } { \hat { \Delta } _ { t _ { i j } } } \leq \frac { 4 } { \Delta _ { i j } } < \frac { 1 } { \hat { \Delta } _ { t _ { i j } + 1 } }
$$
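The sandwich above can be verified mechanically under the halving schedule. The sketch below assumes an initial guess $\hat{\Delta}_1 = 1$ (an illustrative choice; the starting value is not fixed in this section) and halves it each round until it drops below $\Delta_{ij}/2$:

```python
def first_round_below(delta, dhat1=1.0):
    """Return (t_ij, dhat at t_ij): the earliest round t with dhat_t < delta / 2,
    under the halving schedule dhat_{t+1} = dhat_t / 2 starting from dhat1."""
    t, dhat = 1, dhat1
    while dhat >= delta / 2:
        t += 1
        dhat /= 2
    return t, dhat

for delta in (0.9, 0.5, 0.3, 0.11, 0.013):
    t, dhat = first_round_below(delta)
    # the sandwich: 1 / dhat_{t_ij} <= 4 / delta < 1 / dhat_{t_ij + 1}
    assert 1 / dhat <= 4 / delta < 1 / (dhat / 2)
```

The assertion encodes exactly the two-sided bound: by definition $\hat{\Delta}_{t_{ij}} < \Delta_{ij}/2$, while the previous round satisfied $\hat{\Delta}_{t_{ij}-1} \geq \Delta_{ij}/2$, so $\hat{\Delta}_{t_{ij}} \geq \Delta_{ij}/4$.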
For simplicity of notation, we write $\hat{A}$ instead of $\hat{A}_t$ for the estimated payoff matrix in round $t$ throughout this section. Since we already include round-dependent terms such as $\varepsilon_t$, the tolerance level in round $t$, it remains clear which round is being referenced.
We now analyze the regret contributions by considering the following cases:
Case C.1. A suboptimal action pair $( i , j )$ is not eliminated in round $t _ { i j }$ or earlier while the optimal action pair is in the set $S _ { t _ { i j } }$ .
Since neither the suboptimal pair $(i, j)$ nor the optimal action pair is eliminated, the algorithm fails to discard a suboptimal choice, yet it still maintains the optimal action pair within the selection set. The regret contribution of this case comes from the fact that the algorithm spends time on a worse choice while a better one is already available.
Let us briefly explain in which situations the algorithm eliminates or keeps an action pair. For an action pair $(i, j)$, if the $\varepsilon$-NE property does not hold in round $t = t_{ij}$, which implies that at least one of the inequalities below fails, then the pair is eliminated in round $t_{ij}$.
Furthermore, we have $\varepsilon_{t_{ij}} = \sqrt{\frac{4\sigma^2}{k_{t_{ij}}} \ln\left(\frac{\hat{\Delta}_{t_{ij}}^2 T}{16\sigma^2}\right)} \leq \frac{\hat{\Delta}_{t_{ij}}}{2} = \hat{\Delta}_{t_{ij}+1} < \frac{\Delta_{ij}}{4}$. In other words, if there exists $i' \in S_x$ or $j' \in S_y$ such that one of the following is true:
$$
I _ { 1 } : \hat { A } ( i ^ { \prime } , j ) - \hat { A } ( i , j ) > \varepsilon _ { t _ { i j } } , \qquad I _ { 2 } : \hat { A } ( i , j ) - \hat { A } ( i , j ^ { \prime } ) > \varepsilon _ { t _ { i j } } ,
$$
then $( i , j )$ is eliminated. The inequality $I _ { 1 }$ indicates that there exists an alternative action $i ^ { \prime }$ for the row player that offers a higher payoff than action $i$ against the column action $j$ . Similarly, $I _ { 2 }$ implies that there exists a better action $j ^ { \prime }$ for the column player than action $j$ when playing against the row action $i$ . If either $I _ { 1 }$ or $I _ { 2 }$ holds, the action pair $( i , j )$ cannot be a part of the NE, as at least one player has an incentive to deviate. Hence, $( i , j )$ is not a NE with high probability.
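The elimination rule can be sketched directly. On the noise-free illustrative matrix below (not taken from the experiments), with a small tolerance only the pure NE survives the checks $I_1$ and $I_2$:

```python
def surviving_pairs(A_hat, S_x, S_y, eps):
    """Keep (i, j) only when no deviation gains more than eps: I1 checks row
    deviations i', I2 checks column deviations j' (the elimination rule above)."""
    keep = []
    for i in S_x:
        for j in S_y:
            I1 = any(A_hat[i2][j] - A_hat[i][j] > eps for i2 in S_x)   # a better row response exists
            I2 = any(A_hat[i][j] - A_hat[i][j2] > eps for j2 in S_y)   # a better column response exists
            if not (I1 or I2):
                keep.append((i, j))
    return keep

# Noise-free illustrative matrix: (0, 0) is the pure NE (column-0 max, row-0 min).
A = [[0.6, 0.8],
     [0.2, 0.1]]
kept = surviving_pairs(A, S_x=[0, 1], S_y=[0, 1], eps=0.05)
# → [(0, 0)]
```

In the algorithm proper, $\hat{A}$ is noisy and $\varepsilon_t$ shrinks over rounds, so elimination happens gradually rather than in one pass as in this sketch.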
On the other hand, to keep an action pair $( i , j )$ the following events must both hold:
$$
E _ { 1 } : \hat { A } ( i ^ { \prime } , j ) - \varepsilon _ { t } \le \hat { A } ( i , j ) , \forall i ^ { \prime } \in S _ { x } , \quad E _ { 2 } : \hat { A } ( i , j ) \le \hat { A } ( i , j ^ { \prime } ) + \varepsilon _ { t } , \forall j ^ { \prime } \in S _ { y }
$$
To calculate the probability of keeping an action pair $(i, j)$, denoted by $\mathbb{P}(E_1 \cap E_2)$, we begin by analyzing the event $E_1$. Specifically, for an action pair $(i, j)$ that is not a NE, i.e., there exists at least one $i'$ such that $A(i', j) > A(i, j)$, the probability of $E_1$ can be expressed as follows:
$$
\begin{array} { r l } & { \mathbb { P } ( E _ { 1 } ) = \mathbb { P } ( \hat { A } ( i ^ { \prime } , j ) - \hat { A } ( i , j ) \le \varepsilon _ { t } , \forall i ^ { \prime } \in S _ { x } ) } \\ & { \qquad \le \mathbb { P } ( \hat { A } ( i ^ { \prime } , j ) - \hat { A } ( i , j ) \le \varepsilon _ { t } ) } \\ & { \qquad \le \exp \left( - \frac { \frac { 4 \sigma ^ { 2 } } { k _ { t } } \ln \left( \frac { \hat { \Delta } _ { t } ^ { 2 } T } { 1 6 \sigma ^ { 2 } } \right) } { \frac { 4 \sigma ^ { 2 } } { k _ { t } } } \right) } \\ & { \qquad = \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t } ^ { 2 } T } } \end{array}
$$
where $\varepsilon_t = \sqrt{\frac{4\sigma^2}{k_t} \ln\left(\frac{\hat{\Delta}_t^2 T}{16\sigma^2}\right)}$. To derive inequality (48), we use the fact that the probability of the intersection of multiple events is at most the probability of any individual event. Specifically, since we have $m$ actions in $S_x$, for any collection of events $e_1, e_2, \ldots, e_m$, it holds that $\mathbb{P}(e_1 \cap e_2 \cap \cdots \cap e_m) \leq \mathbb{P}(e_i)$ for all $i = 1, 2, \ldots, m$. In the last inequality (49), we apply Theorem A.1 with a $\sqrt{2\sigma^2/k_t}$-subgaussian random variable; the subgaussian parameter follows from Lemma A.1, noting that each action pair is played $k_t$ times.
Similarly, in order to calculate the probability of the event $E _ { 2 }$ , we can write
$$
\mathbb { P } ( E _ { 2 } ) = \mathbb { P } ( \hat { A } ( i , j ) - \hat { A } ( i , j ^ { \prime } ) \leq \varepsilon _ { t } , \forall j ^ { \prime } \in S _ { y } ) \leq \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t } ^ { 2 } T } .
$$
Therefore, we have the probability of keeping an action pair $( i , j )$ in any round $t$ as
$$
\mathbb { P } ( E _ { 1 } \cap E _ { 2 } ) \le \operatorname* { m i n } \{ \mathbb { P } ( E _ { 1 } ) , \mathbb { P } ( E _ { 2 } ) \} = \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t } ^ { 2 } T } .
$$
It means that the probability that a suboptimal action pair $( i , j )$ is not eliminated in round $t _ { i j }$ or before is bounded by $\frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } } ^ { 2 } T }$. Then, the regret contribution is simply bounded by the worst case, which is $T \Delta _ { i j } ^ { * }$ for any suboptimal action pair $( i , j ) \in S _ { A _ { 1 } }$.
Hence, using the probability of keeping a suboptimal action pair and summing up over all action pairs, we can write their regret contribution as
$$
\sum _ { ( i , j ) \in { \cal S } _ { A _ { 1 } } } \Delta _ { i j } ^ { * } T \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } } ^ { 2 } T } \le \sum _ { ( i , j ) \in { \cal S } _ { A _ { 1 } } } \Delta _ { i j } ^ { * } \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } }
$$
where we apply inequality (45) to bound it.
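For concreteness, if inequality (45) (not shown in this excerpt) lower-bounds the estimated gap as $\hat { \Delta } _ { t _ { i j } } \ge \Delta _ { i j } / 4$, which is consistent with the factor $256 = 16 \cdot 4 ^ 2$ appearing above, the step follows term by term:

$$
T \, \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } } ^ { 2 } T } = \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } } ^ { 2 } } \le \frac { 1 6 \sigma ^ { 2 } } { ( \Delta _ { i j } / 4 ) ^ { 2 } } = \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } .
$$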
Here, we note that we do not need to consider the action pairs in $S _ { A _ { 2 } }$ because, by the design of the algorithm, the gap $\hat { \Delta } _ { t }$ is guaranteed to be at least around $\lambda$ in the corresponding rounds. If the optimal action pair remains in the game, then all suboptimal action pairs should have been eliminated by the last round or earlier; that is, suboptimal choices are progressively discarded as the algorithm learns. As a result, after all rounds have been played, only one action pair should remain, since we consider the case in which the optimal action pair $( i ^ { * } , j ^ { * } )$ remains in the game in addition to the elimination of suboptimal ones. This final pair is expected to correspond to the pure NE, representing the best choice for both players, with no incentive for either to change their action.
Case C.2. A suboptimal action pair $( i , j )$ is eliminated in round $t _ { i j }$ or earlier with the optimal action pair in $S _ { t _ { i j } }$ .
This case enables us to bound the number of times such a pair is played. Once a suboptimal action pair is eliminated and the optimal action pair $( i ^ { * } , j ^ { * } )$ remains in the game, it no longer contributes to the regret in subsequent rounds, because regret arises only when a suboptimal action is chosen instead of the optimal one.
On the other hand, we note that there is no need to consider the action pairs $( i , j ) \in S _ { A _ { 2 } }$ because we handle the elimination of suboptimal pairs before the final round. Thus, using (45), each action pair $( i , j )$ is played at most $k _ { t _ { i j } }$ times, where $\begin{array} { r } { k _ { t _ { i j } } = \left\lceil \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } } ^ { 2 } } \ln \big ( \frac { \hat { \Delta } _ { t _ { i j } } ^ { 2 } T } { 1 6 \sigma ^ { 2 } } \big ) \right\rceil \leq \left\lceil \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } \ln \left( \frac { \Delta _ { i j } ^ { 2 } T } { 2 5 6 \sigma ^ { 2 } } \right) \right\rceil . } \end{array}$
Then, the regret contribution is expressed by
$$
\begin{array} { r l } & { \displaystyle \sum _ { ( i , j ) \in S _ { A _ { 1 } } } \Delta _ { i j } ^ { * } \left\lceil \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } \ln \left( \frac { \Delta _ { i j } ^ { 2 } T } { 2 5 6 \sigma ^ { 2 } } \right) \right\rceil \le \sum _ { ( i , j ) \in S _ { A _ { 1 } } } \Delta _ { i j } ^ { * } \left( 1 + \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } \ln \left( \frac { \Delta _ { i j } ^ { 2 } T } { 2 5 6 \sigma ^ { 2 } } \right) \right) } \\ & { \displaystyle \qquad = \sum _ { ( i , j ) \in S _ { A _ { 1 } } } \left( \Delta _ { i j } ^ { * } + \Delta _ { i j } ^ { * } \frac { 2 5 6 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } \ln \left( \frac { \Delta _ { i j } ^ { 2 } T } { 2 5 6 \sigma ^ { 2 } } \right) \right) } \end{array}
$$
Case C.3. The optimal action pair $( i ^ { * } , j ^ { * } )$ is eliminated by some suboptimal one $( i , j )$ in round $t _ { * }$ such that $t _ { i j } \geq t _ { * }$ .
This case leads to a misidentification of the NE, implying that an action pair $( i , j )$ appears better than the optimal one under the current estimates. By the $\varepsilon$-NE property, we clearly have
$$
\hat { A } ( i ^ { * } , j ) - \varepsilon _ { t _ { * } } \leq \hat { A } ( i , j ) \leq \hat { A } ( i , j ^ { * } ) + \varepsilon _ { t _ { * } } .
$$
However, this alone is not enough to keep a suboptimal action pair $( i , j )$ in the game. On the other hand, we note that if the optimal action pair $( i ^ { * } , j ^ { * } )$ is eliminated in round $t _ { * }$, it must fail to satisfy the $\varepsilon$-NE property, as follows:
$$
\hat { A } ( i , j ^ { * } ) - \hat { A } ( i ^ { * } , j ^ { * } ) > \varepsilon _ { t _ { * } } \quad \mathrm { o r / a n d } \quad \hat { A } ( i ^ { * } , j ^ { * } ) - \hat { A } ( i ^ { * } , j ) > \varepsilon _ { t _ { * } } .
$$
That is, under the estimated payoff matrix $\hat { A }$ , there exists an action $i$ that is estimated to yield a higher payoff than $i ^ { * }$ for the row player, and an action $j$ that is estimated to be more favorable than $j ^ { * }$ for the column player.
The optimal action pair can only be eliminated by a suboptimal action pair $( i , j )$ in round $t _ { * }$ such that $t _ { i j } \geq t _ { * }$, since the action pair $( i , j )$ must remain under consideration at the elimination round of the optimal action pair. This condition also implies that $\hat { \Delta } _ { t _ { i j } } \le \hat { \Delta } _ { t _ { * } }$ because of the decreasing behavior of $\hat { \Delta } _ { t }$ across rounds in the algorithm. Moreover, it is important to note that the algorithm keeps this action pair $( i , j )$ for at least one additional round following the elimination of the optimal pair, since it is identified as a NE. We further observe that all action pairs $( i , j )$ with $t _ { i j } < t _ { * }$ have already been eliminated in round $t _ { i j }$ or earlier, a condition already considered in the previous case.
On the other hand, for the action pairs in $S _ { A _ { 2 } }$ , we assume that $\begin{array} { r } { \hat { \Delta } _ { t } < \frac { \lambda } { 2 } } \end{array}$ , which implies that a suboptimal action pair remains in the game until the final round. This scenario emphasizes the regret contribution when the estimated suboptimality gap $\hat { \Delta } _ { t }$ becomes sufficiently small, a case represented by the action pairs in $S _ { A _ { 2 } }$ . Consequently, these suboptimal action pairs contribute to the overall regret and the analysis accounts for their effect to establish accurate performance guarantees.
If the algorithm keeps a suboptimal action pair $( i , j )$ up to round $t _ { * }$, the events specified in (46) must hold with parameter $\varepsilon _ { t _ { * } }$. Consequently, by applying (52), the probability of keeping a suboptimal action pair can be bounded by $\frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { * } } ^ { 2 } T }$. Thus, we can write the regret contribution of this case as
$$
\begin{array} { l } { \displaystyle \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } T \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { * } } ^ { 2 } T } < \displaystyle \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } T \frac { 1 6 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } + 1 } ^ { 2 } T } } \\ { = \displaystyle \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { * } T \frac { 6 4 \sigma ^ { 2 } } { \hat { \Delta } _ { t _ { i j } } ^ { 2 } T } } \\ { \leq \displaystyle \sum _ { ( i , j ) \in S _ { A _ { 1 } } } \Delta _ { i j } ^ { * } \frac { 5 1 2 \sigma ^ { 2 } } { \Delta _ { i j } ^ { 2 } } + \displaystyle \sum _ { ( i , j ) \in S _ { A _ { 2 } } } \Delta _ { i j } ^ { * } \frac { 5 1 2 \sigma ^ { 2 } } { \lambda ^ { 2 } } } \end{array}
$$
where $\begin{array} { r } { \hat { \Delta } _ { t + 1 } = \frac { \hat { \Delta } _ { t } } { 2 } } \end{array}$ and we apply (45). We use $\hat { \Delta } _ { t + 1 }$ because the optimal action pair might be eliminated at some round $t _ { * }$ such that $t _ { * } \in [ 0 , \operatorname* { m a x } _ { i j } t _ { i j } ]$ . This implies that the optimal action pair can be eliminated by any $( i , j )$ no later than the last round that $( i , j )$ remains in the game. However, we must keep $( i , j )$ at least until the next round, since it is not removed in round $t _ { * }$ and is selected as a misidentified NE.
Case C.4. A suboptimal action pair $( i , j )$ in the set $S _ { A _ { 2 } }$ remains in the game as the unique action pair in $S _ { t _ { i j } }$.
This implies that a suboptimal action pair is played during the committing phase of Algorithm 2 up to step $T$ . Thus, we need to account for an additional regret contribution term given by
$$
\sum _ { ( i , j ) \in { \cal S } _ { A _ { 2 } } } \Delta _ { i j } ^ { * } T .
$$
This regret contribution accounts for scenarios where the algorithm eliminates all action pairs except one. If the remaining action pair is suboptimal, the algorithm will continue to play it for the remaining rounds. Since this selection persists up to time step $T$, it results in the additional term in the regret bound.
We note that $\Delta _ { i j } ^ { * } \le \Delta _ { i j }$ , which ensures that the regret does not grow excessively, and this relationship allows us to analyze the regret in terms of the defined action sets. As a result, if we sum these regret contributions as mentioned, we can conclude the proof.
# C.1 Proof of Theorem 5.2
We consider a regret notion consisting of the losses of both players, comparing their best choices to the action played. This regret measures how much worse a player performs compared to their optimal strategy had they known the opponent's behavior in advance. Clearly, we can write the following:
$$
\begin{array} { l } { { \displaystyle R _ { T } = \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { \mathrm { m a x } } \mathbb { E } [ n _ { i j , T } ] + \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } ^ { \mathrm { m i n } } \mathbb { E } [ n _ { i j , T } ] } } \\ { { \displaystyle \quad = \sum _ { ( i , j ) \in S _ { A } } \Delta _ { i j } \mathbb { E } [ n _ { i j , T } ] } } \\ { { \displaystyle \quad = \sum _ { ( i , j ) \in S _ { A _ { 1 } } } \Delta _ { i j } \mathbb { E } [ n _ { i j , T } ] + \sum _ { ( i , j ) \in S _ { A _ { 2 } } } \Delta _ { i j } \mathbb { E } [ n _ { i j , T } ] } } \end{array}
$$
This implies that the term involving $\Delta _ { i j } ^ { \mathrm { m a x } }$ characterizes the regret incurred by the maximizing player, whereas the loss of the minimizing player is determined using $\Delta _ { i j } ^ { \mathrm { m i n } }$. In particular, we utilize the same underlying algorithm, which maintains the same uniform playing strategy and elimination strategy throughout the process. This consistency allows us to leverage many elements of the previous regret analysis. Consequently, we can apply the same case scenarios from the proof of Theorem 5.1, as the elimination of action pairs follows the same strategy and the selection process of action pairs remains the same across rounds.
Although our current analysis involves a different notion of regret, consisting of $\Delta _ { i j }$ , this does not affect the underlying structure of the regret analysis. This ensures that the previous theoretical bounds can be extended by using $\Delta _ { i j }$ instead of ${ \boldsymbol { \Delta } } _ { i j } ^ { * }$ and we apply $\Delta _ { i j } ^ { * } \le \Delta _ { i j }$ and $\Delta _ { i j } \leq \lambda$ for the action pairs in $S _ { A _ { 2 } }$ , then the result follows.
# D Experimental Details
In this section, we present additional experiments to compare the performances of the algorithms based on their theoretical bounds and cumulative regret. To calculate the cumulative regret, we use the absolute difference between the value of the game and the payoff of the action played as the regret might otherwise be negative.
In Figure 1b, we compare the performance of the ETC-TPZSG, ETC-TPZSG-AE, and Tsallis-INF [18] algorithms. In each run, the exploration time $k$ for each action pair in ETC-TPZSG is randomly selected from a predefined list ranging from 100 to 2500 in increments of 100. The total number of rounds is set to $T = 10^4$. Each run uses a new game matrix with a unique pure NE, generated using Gaussian payoffs with mean zero and standard deviation $\sigma$ selected from the list [0.25, 0.5, 0.75, 1]. Varying $\sigma$ not only changes the payoff distribution but also affects the exploration time and the tolerance parameter $\varepsilon$ used in the elimination phase of ETC-TPZSG-AE. The cumulative regret is averaged over $10^3$ simulation runs. As observed, the ETC-TPZSG-AE algorithm consistently outperforms ETC-TPZSG, demonstrating the effectiveness of its elimination strategy in reducing cumulative regret, while Tsallis-INF performs better than both ETC-based algorithms. Notably, the ETC-based algorithms offer the advantage of conceptual and implementational simplicity.
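The explore-then-commit procedure and the cumulative-regret measurement described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm or experimental setup: the toy payoff matrix, seed, and parameter values below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy payoff matrix with a pure Nash equilibrium at (0, 0): A[0, 0] is the
# maximum of its column and the minimum of its row, so the game value v = 0.5.
A = np.array([[0.5, 0.8, 0.9],
              [0.1, 0.0, 0.3],
              [0.2, -0.1, 0.1]])
v = A[0, 0]
sigma, k, T = 0.5, 200, 10_000

# Exploration phase: play every action pair k times and average noisy payoffs.
A_hat = A + rng.normal(0.0, sigma, size=(k,) + A.shape).mean(axis=0)
explore_cost = k * np.abs(v - A).sum()       # regret accumulated while exploring

# Commit phase: play the empirical pure NE (maximin row, then best column reply).
i_star = np.argmax(A_hat.min(axis=1))        # row player's maximin row
j_star = np.argmin(A_hat[i_star])            # column player's best reply
commit_cost = (T - k * A.size) * abs(v - A[i_star, j_star])

# Cumulative regret uses |v - payoff|, as described above, to avoid negativity.
cum_regret = explore_cost + commit_cost
print(round(cum_regret, 2))
```

With these gaps, the empirical NE is identified correctly after exploration, so all remaining regret comes from the exploration phase.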
Figure 2: Theoretical expected regret bound comparison between ETC-TPZSG, ETC-TPZSG-AE and Tsallis-INF from [18] using different $\Delta$ values
On the other hand, Figure 2 compares the theoretical expected regret bounds (in $O ( \cdot )$ notation) of the algorithms across different values of the suboptimality gap $\Delta$. The results show that Tsallis-INF tends to perform better when $\Delta$ is large, while the ETC-TPZSG-AE algorithm achieves lower regret for smaller values of $\Delta$.

Abstract. We study a two-player zero-sum game (TPZSG) in which the row player aims to maximize their payoff against an adversarial column player, under an unknown payoff matrix estimated through bandit feedback. We propose and analyze two algorithms: ETC-TPZSG, which directly applies ETC to the TPZSG setting, and ETC-TPZSG-AE, which improves upon it by incorporating an action pair elimination (AE) strategy that leverages the $\varepsilon$-Nash Equilibrium property to efficiently select the optimal action pair. Our objective is to demonstrate the applicability of ETC in a TPZSG setting by focusing on learning pure strategy Nash Equilibria. A key contribution of our work is the derivation of instance-dependent upper bounds on the expected regret for both algorithms, which has received limited attention in the literature on zero-sum games. In particular, after $T$ rounds, we achieve instance-dependent regret upper bounds of $O(\Delta + \sqrt{T})$ for ETC-TPZSG and $O(\frac{\log (T \Delta^2)}{\Delta})$ for ETC-TPZSG-AE, where $\Delta$ denotes the suboptimality gap. Our results therefore indicate that ETC-based algorithms perform effectively in adversarial game settings, achieving regret bounds comparable to existing methods while providing insights through instance-dependent analysis.
# 1 INTRODUCTION
High-dimensional embedding vectors generated by deep learning models are becoming an important form of data representation for complex, unstructured data such as images [38, 44], audio [9], and text [32, 50]. The models convert input data to vectors in an embedding space and capture semantic relevance through the relative positions of the vectors in the high-dimensional space. Typical embedding vectors nowadays have hundreds to thousands of dimensions.
Vector databases are designed to support efficient nearest neighbor search in the vector space. They underlie many modern applications, ranging from search engines [11, 38] and recommendation systems [41] to retrieval-augmented generation (RAG) [6, 34]. These applications require efficient, high-quality search as well as support for database updates. Figure 1 shows an example of how a vector database is used in RAG applications. A user submits a query to RAG, which turns the query into a vector. In the next step, RAG
Figure 1: A RAG application backed by a vector database, showing the knowledge ingestion path (documents and web pages embedded into the vector database) and the RAG query path (contextual search, LLM, and response).
performs nearest neighbor search to find semantically similar data stored in a vector database. It then augments the query with the data found in the previous step and sends the new query to an LLM. To improve future responses, RAG frequently updates the vector database with data representing new knowledge, such as new documents, web pages, and past user interactions.
Vector databases use indexes to support efficient nearest neighbor search. Since searching for exact nearest neighbors is too costly due to the curse of dimensionality [27], existing works on vector indexes focus on approximate nearest neighbor (ANN) search. The vast number of proposed ANN indexes can be classified as graph-based or partitioning-based indexes [1, 12, 18, 23, 37, 40]. These indexes are mostly evaluated using read-only workloads. Many of the datasets used for evaluation, such as Deep, Sift, and Glove, have lower dimensions than the deep embedding vectors used in emerging applications [3, 23, 33, 40]. We identify three limitations of vector databases built around the existing ANN indexes in supporting modern applications, for which graph-based indexes are the recommended choice.
The first limitation is the computation overhead in high-dimensional spaces. In particular, comparing a vector against its neighbors becomes more expensive with higher dimensions. Graph-based indexes [16, 29, 37, 40] are very costly to build because they require connecting each data point to its near neighbors and optimizing the graph structure to enable efficient traversal. The second limitation is the search performance under concurrent read-write workloads. Updating an existing index can be done in-place [40, 55, 62], or out-of-place using a separate data structure and performing periodic consolidation [14, 49, 53]. Graph indexes perform in-place updates and require fine-grained locking over the neighborhood of the nodes on the traversal path [40, 49]. This results in significant read-write contention. Out-of-place updates, on the other hand, require a separate search on the newly inserted data, while only postponing the update cost to a later time. The third limitation is scalability. Existing vector databases treat their indexes as black boxes [11, 22, 53]. They shard the data and build an independent graph index for each shard. However, to achieve high recall in high-dimensional spaces, nearly all data shards are searched. The large number of searches per query leads to low throughput.
We present HAKES, a scalable vector database that achieves high recall and throughput under concurrent read-write workloads. The database adopts a filter-and-refine design that consists of two stages. The filter stage narrows down the search candidates using compressed vectors for efficiency. The refine stage ranks the candidates based on the full-precision vectors. The system addresses the first limitation by employing dimensionality reduction, coarse-grained partitioning, and quantization techniques. Furthermore, it proposes a novel light-weight machine learning technique to optimize the index parameters such that the filter stage is efficient and returns a set of high-quality candidates. HAKES also includes an early termination check at the filter stage to avoid unnecessary processing. The compressed vectors are grouped by IVF index in contiguous buffers, and decoupling the index parameters used for compressing the vectors and those used during search enables seamless integration of new vectors at minimal overhead and contention, addressing the second limitation. HAKES addresses the scalability limitation by exploiting the decoupling of the filter and refine stage to deploy them in a disaggregated architecture. It distributes the memory and computation cost over multiple nodes, thereby achieving high throughput at scale.
HAKES combines and adapts known techniques in a novel way to achieve its goal. In particular, existing works on quantization aim to improve the quality of similarity score approximation over the compressed vectors, minimizing the need for reranking the full-precision vectors [1, 19, 21, 23, 46]. HAKES instead aims to achieve a good throughput-recall tradeoff overall. By having a separate refine stage that reranks the original vectors, the dimensionality reduction and quantization aim to compress the vectors aggressively to reduce the computation cost at the filter stage. The compression parameters are learned in an end-to-end manner, in which the objective is to minimize the similarity score distribution distortion locally for vectors close to each other. The learning approach in HAKES does not assume access to external information, such as the embedding generation models, ground truth neighbors, or semantic labels, which is a different problem setting compared to other works that employ learning to improve the retrieval quality [54, 56]. Moreover, our system allows for applying the newly learned parameters during search directly without re-indexing vectors in the database. In other words, learning can be done asynchronously while the vector database serves queries. Finally, the early termination check in HAKES-Index is more lightweight than that in [35, 58], and more effective in our context than those in [28, 59, 61], since it does not rely on accurate similarity scores under compression.
In summary, we make the following contributions:
• We propose a novel index, HAKES-Index, that combines a compressed partitioning-based index with dimensionality reduction and quantization. The index leverages a lightweight machine learning technique to generate high-quality candidate vectors, which are then refined by exact similarity computation. It allows terminating the search early based on the intermediate results.
• We propose a technique that decouples the index parameters used for compressing vectors during updates from those used for similarity computation. This ensures high performance under concurrent read-write workloads.
• We design a distributed vector database, called HAKES, employing the new index in a disaggregated architecture. The system achieves scalability by spreading out the memory and computation overhead over multiple nodes.
• We compare HAKES-Index and HAKES against 12 state-of-the-art indexes and three popular commercial distributed vector databases. We evaluate the indexes and systems using high-dimensional embedding vector datasets generated by deep learning models. The results demonstrate that HAKES-Index outperforms both partitioning-based and graph-based index baselines. Furthermore, HAKES is scalable and achieves up to $16\times$ higher throughput at high recall than the three other baselines.
The remainder of the paper is structured as follows. Section 2 provides the background on ANN search and state-of-the-art ANN indexes. Sections 3 and 4 describe the design of our index and the distributed vector database, respectively. Section 5 evaluates our designs against state-of-the-art indexes and systems. Section 6 reviews the related work, and Section 7 concludes.
# 2 PRELIMINARIES
Approximate nearest neighbor search. Let $\mathcal { D }$ denote a dataset containing $N$ vectors in a $d$-dimensional vector space $\mathbb { R } ^ { d }$. For a query vector $\mathbf { x }$, the similarity between $\mathbf { x }$ and a vector $\mathbf { v } \in \mathcal { D }$ is defined by a metric $d ( \mathbf { x } , \mathbf { v } )$. Common metrics include the Euclidean distance, inner product, and cosine similarity. A vector $\mathbf { v _ { i } }$ is considered closer to $\mathbf { x }$ than $\mathbf { v _ { j } }$ if $d ( \mathbf { x } , \mathbf { v _ { i } } ) < d ( \mathbf { x } , \mathbf { v _ { j } } )$. The $k$ nearest neighbors of $\mathbf { x }$ are the vectors in $\mathcal { R } \subseteq \mathcal { D }$, where $| { \mathcal { R } } | = k$ and $\forall \mathbf { v } \in { \mathcal { R } } , \forall \mathbf { u } \in { \mathcal { D } } \backslash { \mathcal { R } } , d ( \mathbf { x } , \mathbf { v } ) \leq d ( \mathbf { x } , \mathbf { u } )$. Finding the exact set $\mathcal { R }$ in a high-dimensional space is expensive due to the curse of dimensionality [27]. Instead, existing works on vector databases focus on approximate nearest neighbor (ANN) search, which uses ANN indexes to quickly find a set $\mathcal { R } ^ { \prime }$ of vectors that are close to, but not necessarily nearest to, $\mathbf { x }$. The quality of $\mathcal { R } ^ { \prime }$ is measured by its recall relative to the exact nearest neighbor set, computed as $\frac { | \mathcal { R } \cap \mathcal { R } ^ { \prime } | } { | \mathcal { R } | }$. We discuss two major classes of ANN indexes below.
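The recall definition above is straightforward to compute. Below is a minimal sketch with a brute-force exact search and, purely for illustration, a crude "approximate" search that scans only a random half of the dataset in place of a real ANN index:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, k = 64, 1000, 10
D = rng.normal(size=(N, d)).astype(np.float32)   # dataset vectors
x = rng.normal(size=d).astype(np.float32)        # query vector

# Exact k nearest neighbors under Euclidean distance (brute force).
dists = np.linalg.norm(D - x, axis=1)
exact = set(np.argsort(dists)[:k].tolist())

# A crude "approximate" search: scan only a random half of the dataset.
sample = rng.choice(N, size=N // 2, replace=False)
approx = set(sample[np.argsort(dists[sample])[:k]].tolist())

# Recall of the approximate result relative to the exact neighbor set.
recall = len(exact & approx) / k
```

On average this toy filter recovers about half of the true neighbors, illustrating how recall quantifies the accuracy loss of approximation.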
Graph-based indexes. They build a proximity graph in which the vertices are the vectors, and an edge between two vertices means the two corresponding vectors are similar [16, 37, 40]. An ANN query involves a greedy beam search that starts from an entry point to locate close neighbors. The query maintains a fixed-size set of candidates and visited nodes during the traversal. At each step, the nearest unvisited vector from the candidate set is selected, and its unvisited neighbors become new potential candidates. These new candidate vectors are evaluated for their similarity scores against the query vector and added to the candidate set accordingly. The process repeats until the candidate set contains only visited nodes, as illustrated in Figure 2a. When building or adding new vectors to the graph, a similar search is conducted to find the nodes to be connected, based on a condition that allows future queries to reach their nearest neighbors in a small number of steps [15, 29, 40]. Since the search efficiency and recall depend on the graph, most existing works on graph indexes focus on building and maintaining a high-quality graph [15, 16, 40, 62].
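The greedy beam search described above can be sketched as follows. This is an illustrative simplification, not the implementation of any cited index; the graph layout and the beam-size parameter name `ef` are assumptions:

```python
import heapq
import numpy as np

def beam_search(graph, vectors, query, entry, ef):
    """Greedy beam search over a proximity graph.

    graph: dict node -> list of neighbor nodes; ef: size of the candidate set.
    Returns up to ef node ids ordered by distance to the query.
    """
    dist = lambda u: float(np.linalg.norm(vectors[u] - query))
    visited = {entry}
    cand = [(dist(entry), entry)]        # min-heap: unexpanded candidates
    best = [(-dist(entry), entry)]       # max-heap: current top-ef results
    while cand:
        d, u = heapq.heappop(cand)
        if len(best) >= ef and d > -best[0][0]:
            break                        # nearest candidate cannot improve the beam
        for v in graph[u]:
            if v in visited:
                continue
            visited.add(v)
            dv = dist(v)
            if len(best) < ef or dv < -best[0][0]:
                heapq.heappush(cand, (dv, v))
                heapq.heappush(best, (-dv, v))
                if len(best) > ef:
                    heapq.heappop(best)  # drop the current farthest result
    return [u for _, u in sorted((-d, u) for d, u in best)]

# A path graph over 1-d points; a query near 2.1 should reach nodes 2 and 3.
vecs = np.array([[0.0], [1.0], [2.0], [3.0]])
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(beam_search(graph, vecs, np.array([2.1]), entry=0, ef=2))
```

The loop terminates exactly when the candidate heap holds nothing closer than the current farthest result, matching the stopping condition described above.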
Figure 2: Graph-based ANN index.
Figure 3: Partitioning-based ANN index.
The Hierarchical Navigable Small World graph (HNSW) is the most popular graph index. It supports incremental updates and efficient search by introducing a hierarchical structure with an exponentially decreasing number of vertices from the bottom to the top level, as shown in Figure 2b. A search starts from an entry point at the top level. At each level, it finds the nearest neighbor and starts the search in the next level with that vertex. Finally, at the bottom level, it performs beam search to find the nearest neighbors. During an update (i.e., adding a new vector), the new vertex's neighbors are first located at each level, and then the edges are updated. The update condition restricts the number of neighbors and only adds an edge if the similarity between the searched candidate and the new vector is larger than that between the new vector and its existing neighbors. This update process is costly, and it creates significant contention under concurrent read-write workloads.
Partitioning-based indexes. They divide vectors into multiple partitions using one or multiple hashing schemes, such that similar vectors fall in the same partition. The similarity of a query vector to all the vectors in a partition can be approximated by its proximity to the partition itself. The partition assignments can be encoded for efficient search. Examples of hashing schemes include locality sensitive hashing (LSH) [17, 33, 42], clustering [12, 28], quantization [20, 30, 31], and neural networks [10, 24, 36]. New vectors are added to the corresponding partitions by computing their partition assignments. A search for a vector $\mathbf { x }$ starts by finding the partitions closest to $\mathbf { x }$, then retrieving the vectors belonging to the selected partitions, as shown in Figure 3a. Finally, the $k$ closest vectors are selected by evaluating the similarity scores.
Inverted-file (IVF) and product quantization are the most popular partitioning-based indexes. IVF [12, 30] uses k-means to partition the vectors. Specifically, a sample set of vectors is used to determine the cluster centroids, and then vectors belonging to the closest centroids are stored together in respective buckets. During a search, all partitions are ranked based on the similarity between their centroids and the query vector $\mathbf { x }$. The top $n_{probe}$ partitions are scanned to produce the $k$ nearest neighbors. The number of centroids $N _ { c }$ for k-means and $n_{probe}$ determine the cost of ranking partitions and the number of candidate vectors. These parameters also affect recall. For example, for million-scale datasets, high recall can be achieved when $N _ { c }$ is in the thousands and $n_{probe}$ is in the tens to hundreds.
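The IVF build-and-search procedure can be sketched with plain NumPy. This is a minimal illustration under assumed parameter values; the short Lloyd training loop stands in for a production k-means trainer:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, n_centroids, nprobe, k = 32, 2000, 16, 4, 10
data = rng.normal(size=(N, d)).astype(np.float32)

# Train: a few Lloyd (k-means) iterations to place the partition centroids.
centroids = data[rng.choice(N, n_centroids, replace=False)].copy()
for _ in range(10):
    assign = np.argmin(((data[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    for c in range(n_centroids):
        if (assign == c).any():
            centroids[c] = data[assign == c].mean(axis=0)
assign = np.argmin(((data[:, None] - centroids[None]) ** 2).sum(-1), axis=1)

# Build the inverted lists: vector ids grouped per partition (bucket).
buckets = [np.where(assign == c)[0] for c in range(n_centroids)]

# Search: rank partitions by centroid distance, scan the top-nprobe buckets.
x = rng.normal(size=d).astype(np.float32)
top_parts = np.argsort(((centroids - x) ** 2).sum(-1))[:nprobe]
cand = np.concatenate([buckets[c] for c in top_parts])
top_k = cand[np.argsort(((data[cand] - x) ** 2).sum(-1))[:k]]
```

Only the `nprobe` scanned buckets are compared against the query exactly, which is the source of both the speedup and the recall loss.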
Product quantization (PQ) splits the original $d$-dimensional space into $m$ orthogonal subspaces of the same dimension $d ^ { \prime } = d / m$. Each subspace is further partitioned, e.g., using k-means with $N _ { c }$ centroids, resulting in $( N _ { c } ) ^ { m }$ partitions. A codebook $\mathbf { C } ^ { \mathbf { P Q } } \in \mathbb { R } ^ { N _ { c } \times d }$ is the concatenation of subspace centroids $\mathbf { C } ^ { \mathbf { P Q } } { } _ { j } \in \mathbb { R } ^ { N _ { c } \times d ^ { \prime } }$, i.e., $\mathbf { C } ^ { \mathbf { P Q } } = [ \mathbf { C } ^ { \mathbf { P Q } } _ { 1 } , \mathbf { C } ^ { \mathbf { P Q } } _ { 2 } , \ldots , \mathbf { C } ^ { \mathbf { P Q } } _ { m } ]$. A vector can be quantized into a concatenation of the indexes of the centroids in the codebook at each subspace, $p ( \mathbf { v } ) = [ p _ { 1 } ( \mathbf { v } ) , p _ { 2 } ( \mathbf { v } ) , \ldots , p _ { m } ( \mathbf { v } ) ]$, where $p _ { j } ( \mathbf { v } ) = \arg \operatorname* { m i n } _ { i } | | \mathbf { C } ^ { \mathbf { P Q } } { } _ { j } [ i ] - \mathbf { v } _ { j } | |$ denotes the index of the closest centroid among the $j ^ { t h }$ subspace centroids $\mathbf { C } ^ { \mathbf { P Q } } _ { j }$. Let $q _ { j } ( \mathbf { v } ) = \mathbf { C } ^ { \mathbf { P Q } } { } _ { j } [ p _ { j } ( \mathbf { v } ) ]$ be the closest centroid of $\mathbf { v } _ { j }$. The concatenation of the centroids closest to $\mathbf { v }$ in the respective subspaces forms its approximation: $\mathbf { v } \approx q ( \mathbf { v } ) = [ q _ { 1 } ( \mathbf { v } ) , q _ { 2 } ( \mathbf { v } ) , \ldots , q _ { m } ( \mathbf { v } ) ]$. Then, the similarity between a vector $\mathbf { x }$ and a vector $\mathbf { v }$ can be approximated as $d ( \mathbf { x } , q ( \mathbf { v } ) )$. For the commonly used Euclidean distance (normally without taking the square root) and inner product, we have:
$$
d ( \mathbf { x } , \mathbf { v } ) \approx d ( \mathbf { x } , q ( \mathbf { v } ) ) = \sum _ { j = 1 \ldots m } d ( \mathbf { x } _ { j } , q _ { j } ( \mathbf { v } ) ) .
$$
PQ enables efficient comparison of $\mathbf { x }$ against the candidate vectors. During a search, the query vector is split into $m$ subvectors, each of which is compared against all the centroids in its corresponding subspace, $\mathbf { C } ^ { \mathrm { P Q } } { } _ { j }$. The resulting similarity scores are stored in a lookup table, $\mathrm { L U T } \in \mathbb { R } ^ { N _ { c } \times m }$. Given Equation 1, the similarity between $\mathbf { x }$ and any vector $\mathbf { v }$ can then be approximated using the quantized vector $q ( \mathbf { v } )$ via $m$ lookups into the LUT, followed by a summation, as shown in Figure 3b. In practice, PQ generates compact vector representations. Typically, $N _ { c }$ is 16 or 256, so that only 4 or 8 bits are needed to encode the vector in each subspace. Recent indexes using 4-bit PQ yield a LUT small enough to fit in CPU caches and optimize the quantized vector layout for efficient SIMD implementation, thereby achieving significantly higher throughputs [2, 12, 23]. In practice, quantization is often used together with a coarse-grained partitioning technique such as IVF to filter out a large number of vectors before applying quantization. Furthermore, an IVF partition stores vectors in a contiguous memory region, resulting in fast scanning of the quantization codes due to memory prefetching. Quantization can also be used with graph indexes to speed up comparison of the query vector with the graph vertices [1, 29]. Specifically, during traversal, comparison against each vertex is based on the quantized vectors instead of the original ones.
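The PQ encoding and LUT-based scoring of Equation 1 can be sketched as follows. This is an illustrative simplification: random codebooks stand in for trained per-subspace k-means centroids, and the parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n_centroids, N = 32, 8, 16, 1000       # 8 subspaces of dimension 4
d_sub = d // m
data = rng.normal(size=(N, d)).astype(np.float32)

# Per-subspace codebooks (random stand-ins for trained k-means centroids).
codebooks = rng.normal(size=(m, n_centroids, d_sub)).astype(np.float32)

def pq_encode(v):
    # p_j(v): index of the nearest centroid in each subspace.
    codes = np.empty(m, dtype=np.uint8)
    for j in range(m):
        sub = v[j * d_sub:(j + 1) * d_sub]
        codes[j] = np.argmin(((codebooks[j] - sub) ** 2).sum(-1))
    return codes

codes = np.stack([pq_encode(v) for v in data])     # N x m quantization codes

# Query: build the LUT of subspace distances, then score each code by lookups.
x = rng.normal(size=d).astype(np.float32)
lut = np.stack([((codebooks[j] - x[j * d_sub:(j + 1) * d_sub]) ** 2).sum(-1)
                for j in range(m)])                # m x n_centroids table
approx_dists = lut[np.arange(m), codes].sum(axis=1)   # m lookups + sum per code
```

Each scored vector costs only `m` table lookups and additions, rather than a full `d`-dimensional distance computation.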
Quantization enables efficient but lossy approximation of the similarity scores between vectors. Reranking the candidates can be performed after quantization to improve recall, using additional information [1, 31] or the original vectors [12, 19, 23]. Some existing works on quantization, namely [1, 19, 21, 23], focus on reducing the approximation errors in order to minimize reranking. Others aim to transform the vectors to be more suitable for quantization [20, 46], or leverage information about the downstream task and upstream embedding model to improve end-to-end retrieval quality [54, 56].
Figure 4: HAKES-Index overview.
# 3 HAKES-INDEX
In this section, we present a novel index, called HAKES-Index, that supports efficient search and index update.
# 3.1 Overview
Figure 4a shows the components in HAKES-Index. It consists of three parts: the two sets of index parameters used to process vectors for search and insert, respectively; the partitions that contain the compressed vectors; and the full vectors. Each set of index parameters is composed of a dimensionality reduction module, IVF centroids, and a PQ codebook. The compressed vectors are partitioned by the IVF centroids, and the compression involves dimensionality reduction followed by quantization guided by the codebook. The dimensionality reduction module uses a transformation matrix $\mathbf{A} \in \mathbb{R}^{d \times d_r}$ and a bias vector $\mathbf{b} \in \mathbb{R}^{d_r}$ to map vectors from the original $d$-dimensional space to a $d_r$-dimensional space, where $d_r < d$. The IVF centroids, $\mathbf{C}^{\mathrm{IVF}}$, determine the partition a new vector is attached to during insert and rank the partitions for a query vector during search. The quantization codebooks, $\mathbf{C}^{\mathrm{PQ}}$, are used to generate the quantized vectors stored in the partitions and to compute the lookup table for search. Note that dimensionality reduction is placed at the front, which speeds up all subsequent computations. We use $\mathbf{A}$, $\mathbf{b}$, $\mathbf{C}^{\mathrm{IVF}}$, $\mathbf{C}^{\mathrm{PQ}}$ to refer to the insert index parameters and $\mathbf{A}', \mathbf{b}', \mathbf{C}^{\mathrm{IVF}'}, \mathbf{C}^{\mathrm{PQ}'}$ to the search index parameters. A search query involves four steps, shown in Figure 4b. Step 1 reduces the dimensionality of the query vector from $d$ to $d_r$ with $\mathbf{A}', \mathbf{b}'$. 
Next, the output of the dimensionality reduction is used to compute the lookup table (LUT) with the quantization codebook $\mathbf{C}^{\mathrm{PQ}'}$ in step 2. Step 3 evaluates the $d_r$-dimensional query vector against $\mathbf{C}^{\mathrm{IVF}'}$ to select the closest partitions, which are then scanned using the LUT. $k' > k$ candidates are selected, and step 4 obtains the top $k$ among them by comparing the query vector to their full vectors. The four steps in the search workflow map to two stages. The filter stage spans steps 1-3, where the majority of vectors are filtered out, leaving $k'$ candidate vectors. The last step is the refine stage, where the $k'$ candidates are refined to the top $k$ nearest vectors.
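The filter-and-refine workflow can be sketched as follows; for brevity, partitions are assigned round-robin instead of by k-means, and the LUT scan is replaced by exact scoring in the reduced space (all names and sizes are our illustration):

```python
# Sketch of the four-step search workflow: dimensionality reduction,
# partition ranking, filter-stage scoring, and full-vector refinement.
import numpy as np

rng = np.random.default_rng(1)
N, d, dr, n_part = 1000, 32, 8, 10
X = rng.normal(size=(N, d))                     # full vectors
A = rng.normal(size=(d, dr)) / np.sqrt(d)       # stand-in for learned A'
b = np.zeros(dr)

Xr = X @ A + b                                  # compressed (reduced) vectors
cents = np.stack([Xr[p::n_part].mean(axis=0) for p in range(n_part)])
parts = {p: np.arange(N)[p::n_part] for p in range(n_part)}   # IVF lists

def search(q, k=10, kp=50, nprobe=4):
    qr = q @ A + b                                             # step 1
    order = np.argsort(((cents - qr) ** 2).sum(axis=1))[:nprobe]  # step 3
    cand = np.concatenate([parts[p] for p in order])
    scores = ((Xr[cand] - qr) ** 2).sum(axis=1)  # steps 2-3: filter scoring
    cand = cand[np.argsort(scores)[:kp]]         # keep k' > k candidates
    exact = ((X[cand] - q) ** 2).sum(axis=1)     # step 4: refine stage
    return cand[np.argsort(exact)[:k]]

q = rng.normal(size=d)
top = search(q)
```

The refine stage reorders the $k'$ survivors by exact distance, so the returned ids come out sorted by their true similarity to the query.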
To add vectors to HAKES-Index, the insert index parameters are utilized, as shown in Figure 4c. Each new vector is transformed using the dimensionality reduction parameters (step 1) and quantized using the codebook (step 2). It is then appended to both the corresponding partition determined by the IVF centroids and the buffer holding full vectors. For deletion, HAKES-Index uses tombstones to mark the deleted vectors. During the filter stage, the tombstones are checked before adding vectors to the candidate set. The deleted vectors and their corresponding compressed vectors are removed by a compaction step that rewrites the partitions. This step happens when the index is being checkpointed or rebuilt. The latter is triggered by an update in the embedding model, or when the data size grows beyond a certain threshold. This approach reduces the interference of deletion on the search and insert operations.
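A minimal sketch of this insert/delete flow, with quantization abbreviated to dimensionality reduction only; the class and all variable names are ours, not the actual implementation:

```python
# Sketch of insert and tombstone-based deletion with deferred compaction.
import numpy as np

class TinyIndex:
    def __init__(self, A, b, centroids):
        self.A, self.b, self.cents = A, b, centroids
        self.parts = {p: [] for p in range(len(centroids))}  # IVF lists
        self.full = {}                # id -> full vector
        self.deleted = set()          # tombstones

    def insert(self, vid, v):
        vr = v @ self.A + self.b                       # step 1: reduce
        p = int(np.argmin(((self.cents - vr) ** 2).sum(axis=1)))
        self.parts[p].append((vid, vr))                # append to partition
        self.full[vid] = v                             # keep full vector

    def delete(self, vid):
        self.deleted.add(vid)                          # mark only

    def compact(self):
        # Rewrite partitions, dropping tombstoned vectors (checkpoint time).
        for p in self.parts:
            self.parts[p] = [(i, vr) for i, vr in self.parts[p]
                             if i not in self.deleted]
        for vid in self.deleted:
            self.full.pop(vid, None)
        self.deleted.clear()

rng = np.random.default_rng(2)
idx = TinyIndex(rng.normal(size=(16, 4)), np.zeros(4), rng.normal(size=(8, 4)))
for i in range(20):
    idx.insert(i, rng.normal(size=16))
idx.delete(3)
idx.compact()
```

Deferring the physical removal to compaction is what keeps deletes from interfering with concurrent searches and inserts.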
# 3.2 Index Construction
We construct HAKES-Index following the procedure illustrated in Figure 5. Figure 5a shows the first step of building the base index. The insert index parameters are initialized with existing processes, and then the dataset is inserted into the index. In particular, Optimal Product Quantization (OPQ) is employed to initialize $\mathbf{A}$ and $\mathbf{C}^{\mathrm{PQ}}$, which iteratively finds a transformation matrix that minimizes the reconstruction error of a PQ codebook, and k-means is employed to initialize the IVF centroids, $\mathbf{C}^{\mathrm{IVF}}$. The bias vector $\mathbf{b}$ is zero. Next, the training set is prepared by sampling a set of vectors and obtaining their neighbors with the base index, as in Figure 5b. Note that another set of sampled pairs is used for validation. Then, we use a self-supervised training method to learn the search parameters, $\mathbf{A}'$, $\mathbf{b}'$, and $\mathbf{C}^{\mathrm{PQ}'}$, illustrated in Figure 5c, which is the key to HAKES-Index's high performance at high recall; the technical details are presented in Section 3.3. After training, the IVF centroids $\mathbf{C}^{\mathrm{IVF}'}$ are computed by partitioning the sample data with $\mathbf{A}$ and $\mathbf{C}^{\mathrm{IVF}}$, and then recomputing the centroid of each partition after applying the learned $\mathbf{A}'$ and $\mathbf{b}'$ to the vectors in it (Figure 5d). Finally, the newly learned $\mathbf{A}', \mathbf{b}', \mathbf{C}^{\mathrm{PQ}'}$, and $\mathbf{C}^{\mathrm{IVF}'}$ are installed in the index, as shown completed in Figure 5e, serving subsequent search queries.
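The centroid recomputation of Figure 5d can be sketched as follows: vectors keep their original partition assignment (under $\mathbf{A}$, $\mathbf{C}^{\mathrm{IVF}}$), and each search-side centroid is recomputed as the mean of its members in the learned space (under $\mathbf{A}'$, $\mathbf{b}'$). The shapes and random data are illustrative assumptions:

```python
# Sketch: recompute search-side IVF centroids C^IVF' without reassignment.
import numpy as np

rng = np.random.default_rng(3)
N, d, dr, n_part = 500, 16, 4, 5
X = rng.normal(size=(N, d))                   # sampled data vectors
A = rng.normal(size=(d, dr)); b = np.zeros(dr)        # insert-side A, b
Ap = rng.normal(size=(d, dr)); bp = rng.normal(size=dr)  # learned A', b'
civf = rng.normal(size=(n_part, dr))          # insert-side C^IVF

# Assignment in the original (insert-side) reduced space
Xr = X @ A + b
assign = ((Xr[:, None, :] - civf) ** 2).sum(axis=2).argmin(axis=1)

# Recompute each centroid as the mean of its members in the learned space;
# fall back to the old centroid if a partition happens to be empty.
Xl = X @ Ap + bp
civf_p = np.stack([
    Xl[assign == p].mean(axis=0) if (assign == p).any() else civf[p]
    for p in range(n_part)
])
```

Keeping the original assignment is what lets the learned parameters be installed without re-indexing any stored vectors.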
The training process can run independently from the serving system. In practice, the index is first built and uses $\mathbf{A}$, $\mathbf{b}$, $\mathbf{C}^{\mathrm{IVF}}$, $\mathbf{C}^{\mathrm{PQ}}$ for both insert and search. As it serves requests, the system records samples, and the training process runs in the background. Once training finishes, the new parameters $\mathbf{A}', \mathbf{b}', \mathbf{C}^{\mathrm{IVF}'}, \mathbf{C}^{\mathrm{PQ}'}$ can be used immediately to serve queries. In other words, HAKES-Index can be updated incrementally. Moreover, the construction of HAKES-Index is efficient. This reduces the time needed to rebuild the index when insertions and deletions significantly change the database size and distribution, allowing it to keep serving at an updated throughput-recall frontier.
Figure 5: HAKES-Index construction. (a) Build base index; (b) prepare training data; (c) learn compression parameters; (d) recalculate IVF centroids; (e) update index (no index rebuild).
# 3.3 Learning Compression Parameters
Since the search recall depends on the quality of the candidate vectors returned by the filter stage, HAKES-Index achieves high recall by ensuring that many true nearest neighbors appear in the set of $k'$ candidate vectors. The compression parameters in HAKES-Index, which include $\mathbf{A}'$ and $\mathbf{b}'$ for dimensionality reduction and $\mathbf{C}^{\mathrm{PQ}'}$ for the PQ codebooks, are fine-tuned so that they capture the similarity relationship between the query vector and the indexed vectors.
At the beginning of the training process, $\mathbf{A}'$ and $\mathbf{C}^{\mathrm{PQ}'}$ are initialized with the $\mathbf{A}$ and $\mathbf{C}^{\mathrm{PQ}}$ produced by OPQ. The bias vector $\mathbf{b}'$ is initialized to zero. We then jointly optimize $\mathbf{A}', \mathbf{b}'$, and $\mathbf{C}^{\mathrm{PQ}'}$ to minimize the mismatch between the similarity score distribution after quantization and that of the original $d$-dimensional space. We focus only on the mismatch in a local region: the training objective is defined on the similarity score distributions of a sampled query vector $\mathbf{x}$ and its close neighbors $ANN_{\mathbf{x}}$, because distant vectors are filtered away by the coarse-grained IVF partition selection during search in HAKES-Index. Specifically, the similarity score distributions before and after the dimensionality reduction are:
$$
S _ { \mathbf { o , x } } = { \mathrm { s o f t m a x } } ( [ d ( \mathbf { x } , \mathbf { v } _ { 1 } ) , \dots , d ( \mathbf { x } , \mathbf { v } _ { \mathbf { K } } ) ] )
$$
$$
S _ { \mathbf { r } , \mathbf { x } } = \mathrm { s o f t m a x } ( [ d ( R ^ { \prime } ( \mathbf { x } ) , R ( \mathbf { v _ { 1 } } ) ) , \dots , d ( R ^ { \prime } ( \mathbf { x } ) , R ( \mathbf { v _ { K } } ) ) ] )
$$
where $K = | A N N _ { x } |$ is the number of retrieved close neighbors, and the softmax function converts the similarity scores to a distribution. $R ^ { \prime } ( { \bf x } ) = { \bf A } ^ { \prime } { \bf x } + { \bf b } ^ { \prime }$ and $R ( \mathbf { v } ) = \mathbf { A } \mathbf { v } + \mathbf { b }$ represent dimensionality reduction. The distribution of the similarity scores after quantization is:
$$
S _ { q , \mathbf { x } } = \mathrm { s o f t m a x } ( [ d ( R ^ { \prime } ( \mathbf { x } ) , q ^ { \prime } ( R ( \mathbf { v _ { 1 } } ) ) ) , \dots , d ( R ^ { \prime } ( \mathbf { x } ) , q ^ { \prime } ( R ( \mathbf { v _ { K } } ) ) ) ] )
$$
where the vector approximation $q ^ { \prime } ( \mathbf { v } ) = [ q _ { 1 } ^ { \prime } ( \mathbf { v } ) , q _ { 2 } ^ { \prime } ( \mathbf { v } ) , \ldots , q _ { m } ^ { \prime } ( \mathbf { v } ) ]$ from PQ is modified to use both $\mathbf { C } ^ { \mathrm { P Q } }$ and $\mathbf { C } ^ { \mathrm { P Q } ^ { \prime } }$. Specifically, $q _ { j } ^ { \prime } ( \mathbf { v } ) = \mathbf { C } ^ { \mathrm { P Q } ^ { \prime } } { } _ { j } [ \arg \operatorname* { m i n } _ { i } | | \mathbf { C } ^ { \mathrm { P Q } } { } _ { j } [ i ] - \mathbf { v } _ { j } | | ]$. That is, the centroid indexes are produced with $\mathbf { C } ^ { \mathrm { P Q } }$, and the fine-tuned centroids of $\mathbf { C } ^ { \mathrm { P Q } ^ { \prime } }$ at the corresponding positions are used to approximate the vector.
With the distributions of similarity scores, we can then reduce the mismatch by minimizing the Kullback-Leibler (KL) divergence defined over two pairs of distributions. One pair is defined between the distribution in the original vector space (Equation 2) and that in the vector space after dimensionality reduction (Equation 3). The other pair is between (Equation 2) and the distribution of similarity scores calculated between a query vector after dimensionality reduction and its quantized close neighbors (Equation 4). The overall training objective is as follows:
$$
L = - \sum _ { { \bf x } \in D _ { s a m p l e } } S _ { o , { \bf x } } \log \frac { S _ { r , { \bf x } } } { S _ { o , { \bf x } } } - \lambda \sum _ { { \bf x } \in D _ { s a m p l e } } S _ { o , { \bf x } } \log \frac { S _ { q , { \bf x } } } { S _ { o , { \bf x } } }
$$
where $D _ { s a m p l e }$ is the sampled query vectors for training, and $\lambda$ is a hyperparameter to control the strength of the regularization.
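Written out, the objective is a sum of two KL divergences per sampled query, $\mathrm{KL}(S_{o,\mathbf{x}} \| S_{r,\mathbf{x}}) + \lambda\, \mathrm{KL}(S_{o,\mathbf{x}} \| S_{q,\mathbf{x}})$. A NumPy sketch over precomputed score vectors (function and variable names are ours):

```python
# Sketch of the per-query training objective: two KL divergences between
# softmax-normalized similarity score distributions.
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return float((p * np.log(p / q)).sum())

def local_loss(scores_o, scores_r, scores_q, lam=1.0):
    """scores_*: length-K similarity scores of a query x against ANN_x in
    the original, reduced, and quantized spaces respectively."""
    S_o, S_r, S_q = softmax(scores_o), softmax(scores_r), softmax(scores_q)
    # -sum S_o log(S_r/S_o) = KL(S_o || S_r); likewise for the second term
    return kl(S_o, S_r) + lam * kl(S_o, S_q)
```

The loss is zero when the reduced and quantized spaces preserve the score distribution exactly, and strictly positive otherwise, which matches the mismatch-minimization goal.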
The training process iteratively updates $\mathbf{A}', \mathbf{b}', \mathbf{C}^{\mathrm{PQ}'}$ to minimize the loss defined in Equation 5, i.e., to minimize the mismatch among the three similarity distributions for close vectors, as illustrated in Figure 5c. It stops when the loss reduction computed on the validation set is smaller than a threshold (e.g., 0.1).
# 3.4 Search Optimizations
HAKES-Index contains two additional optimizations that improve search efficiency. The first is INT8 scalar quantization at each dimension of the IVF centroids. This allows SIMD to evaluate $4\times$ more dimensions in a single instruction. Although this quantization is lossy, the representation errors are tolerable in practice, since the centroids are only used for partition assignment and a large number of partitions are selected for high recall. The second optimization adapts the cost of the filter stage to the query. Fixing the value of $n_{probe}$ means that the computation cost is roughly the same for every query. In one extreme case, all the true nearest neighbors lie in a single partition, and only that partition needs to be scanned. In the other extreme, the true nearest neighbors are evenly distributed among the partitions, and all partitions need to be scanned to achieve high recall. In high-dimensional space, it is challenging to determine $n_{probe}$ based solely on the centroids. HAKES-Index introduces a heuristic condition for early stopping the scanning of subsequent partitions based on the intermediate search results. The search process ranks the partitions by the similarity score of their centroids to the query, and scans the partitions in order. The key idea is that, as the search moves away from the query vector, new partitions contribute fewer vectors to the candidate set. We track the number of consecutively scanned partitions that each add fewer than $t$ vectors to the candidate set, where $t$ is a search configuration parameter. When that count exceeds a specified threshold $n_t$, it indicates that the search has likely covered all partitions containing nearest neighbors, and we terminate the filter stage. HAKES-Index terminates the filter stage either when the heuristic condition above is met, or when $n_{probe}$ partitions have been scanned.
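The early-termination rule can be sketched as follows; the function, its arguments, and the toy contribution counts are our illustration, not the actual implementation:

```python
# Sketch of the early-termination heuristic: stop scanning ranked IVF
# partitions once n_t consecutive partitions each contribute fewer than t
# new candidates.
def scan_partitions(ranked_partitions, contributes, t=2, n_t=3, nprobe=None):
    """contributes(p) -> number of vectors partition p adds to the
    candidate set; returns the number of partitions actually scanned."""
    low_streak, scanned = 0, 0
    for p in ranked_partitions[:nprobe]:
        added = contributes(p)
        scanned += 1
        low_streak = low_streak + 1 if added < t else 0
        if low_streak >= n_t:        # heuristic condition met: stop early
            break
    return scanned
```

With contributions like `[5, 4, 1, 0, 1, 9, 9]` and `t=2, n_t=3`, the scan stops after the fifth partition, since partitions three through five each add fewer than `t` candidates.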
# 3.5 Discussion
HAKES-Index’s two-stage design allows the filter stage to trade the accuracy of similarity score evaluation for lower computation overhead. This stage performs aggressive compression, combining dimensionality reduction at the beginning with 4-bit product quantization. The index parameters are optimized to achieve high compression ratios while preserving only the distribution, not the exact values, of similarity scores. The optimization focuses on local regions instead of the global space, since distant vectors in IVF are already filtered out and never evaluated. Our experimental results demonstrate that deep embedding vectors can be aggressively compressed to achieve a superior overall throughput-recall tradeoff for HAKES-Index, with $d_r$ as small as $1/4$ or $1/8$ of the original dimension $d$, and with 4-bit PQ with $m = 2$. The early-termination check is designed to operate in the filter stage under aggressive compression. It does not rely on accurate similarity score calculation, unlike existing works [28, 59, 61]. The statistics tracking and checking incur minimal overhead, compared to other works on early termination [35, 58].
The compression techniques in HAKES-Index differ from those of existing works on quantization, which focus either on minimizing the reconstruction error [1, 5, 30], i.e., $d(\mathbf{v}, q(\mathbf{v}))$, or the error of similarity score approximation [21, 23, 47], i.e., $d(\mathbf{x}, q(\mathbf{v}))$. HAKES-Index learns dimensionality reduction and quantization together to reduce the distortion of the similarity distribution. Some learned data transformations for quantization [20, 39] aim to transform the original vectors to reduce the quantization error. [46] introduces a complex data transformation that increases serving complexity, and other works tune even the embedding models [54, 56]; both differ from our ultimate goal of achieving a superior throughput-recall trade-off for ANN search with given embedding vectors. Moreover, we only use approximate nearest neighbors for training, which can be obtained efficiently, unlike the ground-truth neighbors required by other works [46, 56].
A key design in HAKES-Index is that it decouples the management of parameters used for search and insert, enabling high-recall search while supporting the incorporation of new data. Specifically, it maintains two sets of compression parameters: the learned parameters obtained through training as the search index parameters, and the original parameters established upon initialization as the insert index parameters, as shown in Figure 4a. This decoupling is closely related to the lightweight self-supervised training process. As discussed in Section 3.3, we use the prebuilt base index and fix the PQ code assignment for training, where all the data vectors are processed only once using the original set of parameters. Consequently, new vectors can follow the same process of being indexed by the initialized parameters and searched by the learned parameters. Empirical observations in Section 5 also confirm that using the learned parameters for inserting new vectors leads to recall degradation. Furthermore, as a consequence of the decoupling, the learned search index parameters can be applied directly without re-indexing the vectors. Existing works on learned compression use the updated codebook for assignment during every training iteration [54, 56]. They would require expensive re-indexing of the vectors when applying the trained parameters in vector databases to serve queries.
The aggressive compression employed by HAKES-Index not only significantly speeds up the filter stage, but also reduces the memory consumption in this stage. We now analyze the memory cost of HAKES-Index for a vector dataset of $( N \cdot 4 \cdot d )$ bytes. The dimensionality reduction matrices and the bias vector take $\left( 2 \cdot 4 \cdot d \cdot d _ { r } + 4 \cdot d _ { r } \right)$ bytes. IVF centroids and the 4-bit quantization codebooks consume $\left( N _ { c } \cdot 4 \cdot d _ { r } + N _ { c } \cdot d _ { r } \right)$ bytes and $( 2 \cdot 2 ^ { 4 } \cdot 4 \cdot d _ { r } )$ bytes respectively. The compressed vectors take $( N \cdot ( 1 / 2 ) \cdot ( d _ { r } / m ) )$ bytes. The filter stage index is significantly smaller than the vector dataset.
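To make the analysis concrete, here is the same arithmetic for an assumed configuration, $N = 10$M vectors, $d = 768$, $d_r = 192$ (i.e., $d/4$), subvector dimension $m = 2$, and $N_c = 1024$; the numbers are our illustration, not measurements from the paper:

```python
# Worked instance of the memory analysis (all sizes in bytes).
N, d, dr, m, Nc = 10_000_000, 768, 192, 2, 1024

full_vectors = N * 4 * d                         # float32 dataset
dim_reduction = 2 * 4 * d * dr + 4 * dr          # two A matrices + bias
ivf = Nc * 4 * dr + Nc * dr                      # float32 + INT8 centroids
codebooks = 2 * 2**4 * 4 * dr                    # two 4-bit PQ codebooks
compressed = N * (dr // m) // 2                  # 4 bits per subspace code

filter_total = dim_reduction + ivf + codebooks + compressed
ratio = filter_total / full_vectors              # filter index vs. dataset
```

Under these assumptions the filter-stage index is about 0.48 GB against a 30.7 GB dataset, i.e., well under 2% of the full vectors, dominated by the compressed codes.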
As the dataset grows considerably, the index should be rebuilt with a larger number of IVF partitions. In practice, even the embedding models that generate the vectors are frequently retrained, for example, on a daily basis in recommendation systems [60]. After model retraining, rebuilding the index is necessary.
# 4 THE HAKES DISTRIBUTED VECTORDB
In this section, we present the design of our distributed vector database, named HAKES.
# 4.1 Overview
HAKES-Index processes a search query in two stages, namely the filter and refine stages. These stages do not share data, and they have distinct resource requirements due to the type and number of vectors being evaluated. Specifically, the memory consumption of the filter stage, which accesses compressed vectors, is significantly lower than that of the refine stage, which accesses the original vectors. In addition, the filter stage has a much higher computation cost because it performs computation over a large number of vectors.
We design an architecture that exploits the filter-and-refine design to disaggregate the two stages. In particular, we separate the management of the filter-stage index from the full-precision original vectors used only in the refine stage, and employ different scaling policies for them in a server cluster. There are two sets of workers, the IndexWorkers and the RefineWorkers, each of which performs one stage of the search using local data. The former are responsible for the filter stage, managing the replicated compressed vectors. The latter perform the refine stage, storing shards of the original vectors. Figure 6 shows an example in which a physical server runs both an IndexWorker and a RefineWorker. However, we stress that these components can be disaggregated and scaled independently. For example, more memory nodes running RefineWorkers can be added to handle a large volume of data, and more compute nodes running IndexWorkers can be added to speed up the filter stage.
Figure 6: HAKES architecture.
Figure 7: Different architectures of distributed vector databases.
Discussion. HAKES’s architecture is different from that of existing distributed vector databases. Figure 7 compares four architectures with distinct shard layouts and communication patterns for reads and writes. In the first architecture (Figure 7a), adopted by [11, 43, 52], each server hosts a single read-write shard and maintains its index. A read request merges search results from every node, while a write request is routed to a single server based on a sharding policy. In the second architecture (Figure 7b), used by [14], each node maintains one read-write shard and multiple read-only shards to reduce read-write contention. The third architecture in Figure 7c extends the first two by employing multiple read-write shards and multiple read-only shards. It is adopted by [22, 49], and supports scaling out reads or writes by adding servers for the required type of shards. We note that in these three architectures, an index is local to the shard data, i.e., the index of each shard is not constructed over the global set of vectors. However, building many small indexes over multiple shards incurs significant overhead, as we show in our evaluation later. HAKES’s architecture in Figure 7d, in contrast, maintains the global index at each server, since the filter-stage index is small due to compression and supports efficient update.
The index in the filter stage scales with dataset size. However, HAKES’s high compression ratio enables a single cloud server to host TB-scale indexes. For deployments where the index exceeds individual server capacity, the index is dynamically sharded across IndexWorker groups. Searches query one replica per shard group while updates propagate atomically to all replicas in the affected group. Full-precision vectors remain managed separately by RefineWorker nodes deployed on distinct servers, ensuring physical isolation between filter and refine stages.
# 4.2 HAKES Design
The IndexWorker maintains a replica of the filter-stage index and the compressed vectors organized in IVF partitions. It takes a query vector as input and returns a set of candidate vectors. The IndexWorker is compute-heavy. It implements dynamic batching with internal, lock-free task queues. In particular, vectors from different requests are batched into a matrix such that the dimensionality reduction and IVF assignment can be computed efficiently via matrix-matrix multiplication. Requests are batched only under high load; otherwise, they are processed immediately on separate CPU cores.
The RefineWorker maintains a shard of the original vectors. It handles the refine stage, which evaluates similarity scores between the query and the candidate vectors belonging to the shard. HAKES supports two sharding policies for the full vectors. One is sharding by vector ID, in which vectors are distributed (evenly) among the nodes by their IDs. The other is sharding by IVF assignment, in which vectors belonging to the same IVF partition are placed on the same RefineWorker. This policy helps reduce network communication because the refine stage then involves only a small number of nodes.
Operation workflow. Before serving queries, HAKES builds an index over a given dataset. It first takes a representative sample of the dataset to initialize the base index parameters. It then launches IndexWorkers that use the base index. Next, it inserts the vectors, and after that starts serving search requests. It builds training datasets for learning index parameters by collecting the results of ANN queries. Once the training process finishes, it installs the new parameters to all IndexWorkers with minimal disruption. Specifically, at every IndexWorker node, the new parameters are loaded to memory and the pointers in HAKES-Index are redirected to them.
During search, the client sends the query to an IndexWorker and gets back the candidate vectors. Based on the sharding configuration, the client sends these vectors to the corresponding RefineWorkers in parallel. The client reranks the vectors returned by the RefineWorkers and outputs the top $k$ vectors. During insert, the client sends the new vector to the RefineWorker that manages the shard where the vector is to be inserted. The client then picks an IndexWorker to compute the new quantized vector and update the IVF structure. This update is broadcast to all the IndexWorkers. For deletion, the client broadcasts the vector IDs to be deleted to all the IndexWorkers, which then mark them as deleted in their filter-stage index.
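Under the assumption of sharding by vector ID, the client-side search workflow can be sketched as follows; workers are modeled as local functions, and the filter stage is replaced by an exact top-$k'$ stand-in (all names are ours):

```python
# Sketch of the client search workflow: query one IndexWorker for
# candidates, fan out to RefineWorkers by shard, then rerank client-side.
import numpy as np

rng = np.random.default_rng(5)
N, d, n_shards = 200, 16, 4
X = rng.normal(size=(N, d))
# Sharding by vector ID: vector i lives on shard i % n_shards
shards = {s: {i: X[i] for i in range(N) if i % n_shards == s}
          for s in range(n_shards)}

def index_worker(q, kp=30):
    # Stand-in filter stage: return k' candidate ids.
    return np.argsort(((X - q) ** 2).sum(axis=1))[:kp]

def refine_worker(s, q, ids):
    # Exact scores for the candidates that live on shard s.
    local = [i for i in ids if i % n_shards == s]
    return [(i, float(((shards[s][i] - q) ** 2).sum())) for i in local]

def client_search(q, k=10):
    cand = index_worker(q)
    scored = [pair for s in range(n_shards)   # fan-out (parallel in HAKES)
              for pair in refine_worker(s, q, cand)]
    scored.sort(key=lambda t: t[1])           # client-side rerank
    return [i for i, _ in scored[:k]]

q = rng.normal(size=d)
res = client_search(q)
```

Because every candidate is scored exactly by some RefineWorker, the client-side rerank returns the same top-$k$ as scoring the candidates centrally.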
Consistency and failure recovery. HAKES does not guarantee strong consistency, which is acceptable because applications relying on vector search can tolerate it [49, 53]. It can support session consistency by synchronously replicating write requests or by having a client stick to one IndexWorker. HAKES periodically creates checkpoints of the index. During crash recovery, vectors inserted after the latest checkpoint are re-inserted into the RefineWorkers and IndexWorkers.
# 5 EVALUATION
In this section, we benchmark HAKES-Index against state-of-the-art ANN indexes and HAKES against state-of-the-art distributed vector databases to study the effectiveness of our design.
# 5.1 Implementation
We implement HAKES-Index by extending the FAISS library [12]. The IndexWorker and RefineWorker are implemented on top of HAKES-Index, and they are accessible via an HTTP server implemented using libuv and llhttp. The index extension and serving system take ${\sim}7000$ LoC in C++. The index training is implemented in ${\sim}1000$ LoC in Python, using PyTorch 1.12.1. The HAKES client is implemented in Python in ${\sim}500$ LoC.
# 5.2 Experiment Setup
Datasets and workloads. As listed in Table 1, we use six deep embedding datasets and the GIST dataset. Five of the datasets are at the 1-million scale, and we use them for index benchmarking.
• DPR-768 is generated by the Dense Passage Retrieval (DPR) context encoder model [32] on text records sampled from the Sphere web corpus dataset.
• OPENAI-1536 [13] is generated by OpenAI's embedding service on DBpedia text data [48].
• MBNET-1024 is generated by a pretrained MobileNet [26] on one million ImageNet images [45].
• RSNET-2048 is generated by a pretrained ResNet [25] on one million ImageNet images.
• GIST-960 is widely used in the literature for benchmarking ANN indexes [3, 18, 37]. We selected GIST for its high dimensionality.
We also use two other large datasets for in-depth analysis of our index and system.
• DPR-768-10m uses the same embedding model as DPR-768 but on 10 million Sphere text records.
• E5-1024-10m is generated with the E5-large text model [50] on 10 million Sphere text records.
We normalize the vectors and use the inner product as the similarity metric due to its popularity in existing embedding services. This metric is also the default choice in all of the baseline systems [14, 49, 52]. We note that for normalized vectors, Euclidean distance, cosine similarity, and inner product are equivalent with respect to neighbor relationships. The search quality is measured by Recall10@10. The ground-truth nearest neighbors for the queries are generated by a brute-force search over the entire dataset.
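The stated equivalence for normalized vectors follows from $\|\mathbf{x}-\mathbf{v}\|^2 = 2 - 2\langle \mathbf{x}, \mathbf{v}\rangle$ when $\|\mathbf{x}\| = \|\mathbf{v}\| = 1$; a quick numerical check (random data, for illustration only):

```python
# Check that squared Euclidean distance and inner product induce the same
# neighbor ranking on unit-normalized vectors.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 32))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # normalize rows
q = rng.normal(size=32)
q /= np.linalg.norm(q)

ip = X @ q                                       # inner product scores
l2 = ((X - q) ** 2).sum(axis=1)                  # squared Euclidean
rank_ip = np.argsort(-ip)                        # descending similarity
rank_l2 = np.argsort(l2)                         # ascending distance
```

Cosine similarity coincides with the inner product here because all norms are one, so any one of the three metrics can be used interchangeably for neighbor search on this data.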
Training setup. Index training is conducted on an Ubuntu 18.04 server that has an Intel Xeon W-2133 @ 3.60GHz CPU with 6 cores and an NVIDIA GeForce RTX 2080 Ti GPU. The $\lambda$ parameter is searched in the set $\{0.01, 0.03, \ldots, 30\}$. The AdamW optimizer is used with a learning rate in the set $\{10^{-5}, 10^{-4}, 10^{-3}\}$. The batch size is set to 512. We use 100,000 samples and their 50 neighbors returned by the base index at $n_{probe} = 1/10$ and $k'/k = 10$.
Environment setup. We conduct all the index experiments on an Ubuntu 20.04 server equipped with an Intel Xeon W-1290P @ 3.70GHz CPU with 10 cores and 128 GiB memory. We run distributed experiments in a cluster of servers with the above hardware specification.

Table 1: High-dimensional datasets.
Index baselines. We select 12 state-of-the-art in-memory ANN index baselines, covering both IVF partitioning-based and graph-based indexes.
• IVF is the classic IVF index with k-means clustering.
• IVFPQ_RF applies PQ with IVF and uses the FAISS 4-bit quantization fast-scan implementation [2, 12]. The suffix RF denotes a refine stage.
• OPQIVFPQ_RF uses OPQ [20] to learn a rotation matrix that minimizes the PQ reconstruction error. We use the OPQ implementation in FAISS, which can generate a transformation matrix $\in \mathbb{R}^{d \times d_r}$ to reduce the dimension before IVF and PQ.
• HNSW [40] is the index used in almost all vector databases.
• ELPIS [4] partitions the dataset and maintains an HNSW graph index for each partition, representative of approaches that maintain multiple subgraph indexes.
• LSH-APG [62] leverages LSH to identify close entry points on its graph index to reduce the search path length.
• ScaNN [23], SOAR [47], and RaBitQ [19] are recently proposed quantization schemes used with partitioning-based indexes. They use reranking to improve recall.
• Falconn $^ { + + }$ [42] and LCCS [33] are state-of-the-art LSH indexes.
• LVQ [1] is a state-of-the-art graph index with quantization. It supports 4-bit scalar quantization, followed by 8-bit quantization on the residual for reranking.
We do not compare against recent indexes that are optimized for secondary storage [8, 51], which report lower performance than in-memory indexes. The first three IVF baselines share the codebase of our extended FAISS library. For the remaining baselines, we use the implementations provided by the authors.
Distributed vector database baselines. We select three popular distributed vector databases that employ in-memory ANN indexes. They cover the three architectures described in Section 4.1. We set up the systems according to the recommendations from their respective official documentation.
• Weaviate [52] adopts an architecture in which each server maintains a single read-write shard and an HNSW graph. It implements HNSW natively in Golang with fine-grained node-level locking for concurrency. We deploy Weaviate using the official Docker image at version v1.21.2.
• Cassandra recently added support for vector search [14] on its NoSQL database. It shards the data across nodes, and every node maintains a read-write shard and multiple read-only shards that are periodically merged. It uses jVector, a graph index that only searches quantized vectors, similar to DiskANN [29].
• Milvus [22, 49] adopts an architecture in which there is one shard that processes updates. Once it reaches 1 GiB, this shard becomes a read-only shard with its own index and is distributed across the servers for serving. We deploy Milvus version 2.4 using the official milvus-operator v0.9.7 on a Kubernetes (v1.23.17) cluster.
Besides the three systems above, we add two more baselines, Sharded-HNSW and HAKES-Base. Sharded-HNSW adopts Weaviate's architecture and uses our server implementation with hnswlib. This baseline helps isolate the performance impact of the index and system design, since the three vector databases are implemented in different languages and have different feature sets. HAKES-Base is the same as HAKES but employs the base index, that is, without parameter training or optimizations.
# 5.3 Index Benchmarking and Analysis
For each index, we explore the range of configurations recommended in the original paper and corresponding code repository, and pick the best configuration for each dataset. We then run experiments with varying search parameter values to examine the index’s throughput-recall tradeoff. The complete set of explored and selected configurations of all indexes is in the Appendix.
Sequential read workload. Figure 8 compares the throughput-recall tradeoff of the 13 indexes for the recall range above $80\%$. Across the different datasets, HAKES-Index achieves a state-of-the-art throughput-recall tradeoff. At high recall, it even outperforms the recent quantized graph index LVQ, which is heavily optimized for prefetching and SIMD acceleration. The performance difference among OPQIVFPQ_RF, IVFPQ_RF, and IVF confirms that, with a refine stage, deep embeddings can be compressed significantly with quantization and dimensionality reduction for efficiency while maintaining high accuracy. Across the deep embedding datasets, OPQIVFPQ_RF and HAKES-Index achieve the reported tradeoff with $d_r/d = 1/4$ or $1/8$, significantly reducing computation.
The recent quantization-based indexes, namely ScaNN, SOAR, and RaBitQ, show mixed results compared to IVFPQ_RF, which uses standard PQ with a fast-scan implementation. ScaNN improves the quantization for inner product approximation; SOAR aims to reduce the correlation among the multiple IVF partitions that one vector is assigned to; and RaBitQ uses LSH to generate binary code representations and decides which vectors to rerank using its error bound. ScaNN and SOAR outperform IVFPQ_RF on GIST-960, DPR-768, and MBNET-1024, but have comparable performance on RSNET-2048 and OPENAI-1536. RaBitQ only performs better than IVFPQ_RF on GIST-960. These observations highlight the importance of evaluating indexes on high-dimensional deep embeddings.
The performance of Falconn++ and LCCS ranks below IVF, confirming that LSH-based indexes are less effective at filtering vectors than data-dependent approaches in high-dimensional space [3, 19, 37, 62]. Among graph-based indexes, LVQ performs best, as its scalar quantization avoids computation on full vectors during graph traversal. The difference between HNSW and LSH-APG indicates that the hierarchical structure of HNSW is more effective than the LSH-based entry point selection of LSH-APG in high-dimensional space. The gap between HNSW and ELPIS shows that sharding a global graph index into smaller subgraphs degrades overall performance. We analyze this phenomenon in distributed vector databases in Section 5.5.
Table 2: Ablation study where recall is in the 0.99 region. Each cell shows the QPS (recall) value.
Read-write workload. For indexes supporting inserts, we first evaluate their performance under sequential read-write workloads. We focus on the high-recall region of 0.99 and vary the write ratio from 0.0 to 0.5. Figure 9 reveals that as the write ratio increases, partitioning-based indexes have a clear advantage over graph indexes. Both LVQ's and HNSW's performance decreases as the write ratio increases, because inserting new data into a graph is slower than serving an ANN search. The reverse is true for partitioning-based indexes, since inserts do not involve comparisons with existing vectors. The exceptions are ScaNN on RSNET and SOAR, which select quantized codes under additional constraints. In particular, SOAR assigns a vector to multiple partitions based on their correlation, which is more costly than the single assignment used by other partitioning-based indexes. HAKES-Index outperforms all baselines across all datasets, because of its efficient search and insert.
We further evaluate the indexes supporting concurrent read-write workloads. The HNSW implementation in hnswlib supports concurrent reads and writes with fine-grained locking on the graph nodes, and with our FAISS extension, IVFPQ_RF and OPQIVFPQ_RF also support partition locking, as our index does. We use 32 clients and vary the ratio of write requests. Figure 10 shows that partitioning-based indexes outperform HNSW, due to low contention and predictable memory access patterns. We note that even IVFPQ_RF reaches comparable or higher throughput than HNSW for concurrent reads. The performance gap widens with more writes.
Memory consumption. We observe that the cost of storing the original vectors dominates the index's memory consumption. We discuss the memory overhead of representative baselines on OPENAI-1536 as an example, measuring memory usage before and after loading the indexes. HNSW maintains the connection information for each node at each level on top of the original data, increasing the memory from 5.72 to 6.01 GiB. For IVFPQ_RF, OPQIVFPQ_RF, and HAKES-Index, the main overhead is storing the compressed vectors. IVFPQ_RF consumes 5.92 GiB, whereas OPQIVFPQ_RF consumes 5.86 GiB due to dimensionality reduction. HAKES-Index consumes 5.86 GiB, similar to OPQIVFPQ_RF, as the additional set of query index parameters is small.
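A back-of-envelope check of why the raw vectors dominate. The dataset size (~1M vectors) is an assumption that reproduces the reported 5.72 GiB, and the per-vector PQ code size is illustrative, not stated in the text.

```python
GiB = 2 ** 30
n, d = 1_000_000, 1536            # assumed size of OPENAI-1536 (float32 vectors)

raw = n * d * 4 / GiB             # original vectors
pq_code_bytes = 192               # illustrative per-vector compressed code size
overhead = n * pq_code_bytes / GiB

print(f"raw vectors: {raw:.2f} GiB")   # ~5.72 GiB, matching the measurement
print(f"PQ codes: {overhead:.2f} GiB")
```

Under these assumptions the compressed codes add only a few percent on top of the raw vectors, consistent with the small gaps measured above.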
# 5.4 HAKES-Index Analysis
Figure 8: Throughput vs. recall for sequential reads (recall $\geq 0.8$) on (a) DPR-768, (b) OPENAI-1536, (c) MBNET-1024, (d) RSNET-2048, (e) GIST-960.

Figure 9: Performance under sequential read-write workloads (recall $= 0.99$), same five datasets.

Figure 10: Performance under concurrent read-write workloads, same five datasets.

Performance gain breakdown. Table 2 shows how the different techniques contribute to the performance of HAKES-Index. We report the results at the search configurations that achieve recall $\approx 0.99$ with the learned parameters. The learned compression contributes the most, as it improves the throughput-recall tradeoff over the base settings. Scalar quantization of the IVF centroids and early termination provide further throughput improvements without significantly degrading recall. We use the same early-termination setting, $t = k'/200$ and $n_t = 30$, which improves throughput considerably on 4 of the 5 datasets. However, as discussed in the previous subsection, the heuristic can terminate the search prematurely and miss true close neighbors, leading to lower recall. We note that careful tuning on a dataset can achieve better performance for a specific recall target.
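The early-termination idea can be sketched as follows: after every $n_t$ probed partitions, stop if fewer than $t = k'/200$ entries of the top-$k'$ candidate set changed. This is a hedged illustration of the mechanism, not the exact statistics HAKES-Index computes.

```python
import heapq

def probe_with_early_stop(partition_scores, kprime=200, n_t=30):
    """partition_scores: per-partition lists of (distance, id) pairs."""
    t = max(1, kprime // 200)   # termination threshold t = k'/200
    heap = []                   # max-heap on distance via negation, keeps k' best
    changes, probed = 0, 0
    for part in partition_scores:
        for dist, vid in part:
            if len(heap) < kprime:
                heapq.heappush(heap, (-dist, vid)); changes += 1
            elif -dist > heap[0][0]:
                heapq.heapreplace(heap, (-dist, vid)); changes += 1
        probed += 1
        if probed % n_t == 0:
            if changes < t:     # candidate set stopped improving: terminate
                break
            changes = 0
    return sorted((-nd, vid) for nd, vid in heap)
```

When all close vectors sit in the first probed partitions, the check ends the scan early instead of exhausting the remaining partitions, which is where the throughput gain comes from.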
Recall improvement by learned compression. Table 3 reports the recalls for different search configurations on the 10-million scale datasets. We note that the training process does not affect the cost of performing dimensionality reduction or of scanning quantized vectors. In other words, given the same search parameters, the performance is only affected by the IVF partition selection, whose effect we observe to be negligible. Table 3 shows consistent improvement across all configurations on the 10-million scale datasets. The improvement is between 0.07 and 0.14 for $k'/k = 10$ and between 0.01 and 0.07 for the settings with recall over 0.9. It is higher for smaller filter candidate sets (i.e., smaller $k'/k$), which is expected because the impact of high-quality candidate vectors is greater when the candidate set is small. This improvement allows HAKES-Index to reach high recall with a smaller $n_{\mathrm{probe}}$ and $k'/k$, which translates to higher throughput. We attribute the high recalls to the training process, which results in the refine stage receiving more true nearest neighbors. We discuss the results on a 1-million scale dataset in the Appendix.
Training cost. The cost of constructing HAKES-Index consists of the cost of building the base index and of training the compression parameters. Deploying the trained parameters incurs negligible overhead, as it only loads a small dimensionality reduction matrix, a bias vector, quantization codebooks, and IVF centroids into memory. For the 10-million scale datasets, building the base index takes 179.2s and 219.22s for initializing the OPQ and IVF parameters, and 103.2s and 125.6s to insert the 10 million vectors, for DPR-768-10m and E5-1024-10m respectively. It takes 52.9s and 60.9s for the two datasets to sample the training set with a 1/100 ratio and compute the approximate nearest neighbors, with $n_{\mathrm{probe}}$ set to 1/10 of the partitions and $k'/k$. Training takes 34.9s and 45.6s, respectively. When deployed on a cluster, the time to insert vectors and prepare the training set neighbors can be reduced linearly with the number of nodes. In comparison, constructing the HNSW graph takes 5736.4s and 9713.21s on the DPR-768-10m and E5-1024-10m datasets, which is $15.5\times$ and $21.5\times$ higher than the cost of building HAKES-Index. We note that in production, HAKES-Index can use the initialized parameters to serve requests, while training is conducted in the background using GPUs. The learned parameters can be seamlessly integrated once available, without rebuilding the index.
Table 3: Recall improvement at different search configurations.
Drift tolerance. We prepare 1-million-scale datasets derived from the ImageNet dataset, reserving 1/10 of the categories for generating drift. We mix vectors from the reserved categories with vectors from the original categories (not among the 1 million used for index building) to create workloads with different degrees of drift. The workloads consist of 4 batches of 200k vectors for insertion and 1k query vectors, such that both insertions and queries exhibit drift. Figure 11 shows the recall and throughput as we insert the data batches and then run ANN queries with mixing ratios from 0 to 0.8. The $n_{\mathrm{probe}}$ and $k'/k$ are selected as the best search configuration with recall $\geq 0.99$. The throughput decreases as more vectors are added, since each partition then contains more vectors to scan. For search quality, we observe that at this high recall, the recall improvement from training persists across different drifts. As more data are added, the recall degrades slightly. These results show the robustness of IVF and of the HAKES-Index training process against moderate drift for embeddings from the same model. We also evaluate on RSNET-2048 and observe similar results. For embeddings from different models or entirely distinct sources, we recommend building separate indexes.
Decoupling index parameters for read and write. We start with an index on 1 million vectors and select the $n_{\mathrm{probe}}$ and $k'/k$ for recall $\geq 0.99$. We then insert batches of 200k vectors and measure the recall at the same configuration, with the true nearest neighbors precomputed after each batch insert. Figure 12 shows the importance of separating the learned parameters used for search from the parameters used for insert. If the learned parameters are used to compress new vectors during insert, the recall drops. The reason is that keeping only the learned parameters is inconsistent with our training scheme, and the approximate similarity no longer follows the expected distribution, as discussed in Section 3.5. We observe in experiments that new vectors that are not nearest neighbors can have a higher approximate similarity than true neighbors, and that true neighbors among the added data can have a significantly lower approximate similarity than the neighbors in the original dataset.
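Structurally, the decoupling means the write path always encodes with the frozen base parameters, while swapping in learned parameters affects only search-time computation. A toy sketch with invented names, not HAKES's actual interfaces:

```python
# Inserts always encode with the frozen base codebook, so the code distribution
# stays consistent with what the learned search parameters were trained on;
# learned parameters only change the search-time distance tables.

def nearest(codebook, vec):
    return min(range(len(codebook)),
               key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], vec)))

class DecoupledIVF:
    def __init__(self, base_codebook):
        self.base = base_codebook              # write path: frozen at build time
        self.search_codebook = base_codebook   # read path: replaced after training
        self.codes = []

    def insert(self, vec):                     # writes never touch learned params
        self.codes.append(nearest(self.base, vec))

    def install_learned(self, learned_codebook):
        self.search_codebook = learned_codebook

    def distance_table(self, q):               # reads use the learned parameters
        return [sum((c - v) ** 2 for c, v in zip(row, q))
                for row in self.search_codebook]
```

Because `insert` ignores the learned codebook, installing trained parameters never shifts how new vectors are encoded, mirroring the separation the experiment above shows to be necessary.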
Deletion. We evaluate the index performance under deletion using the DPR-768 dataset and 32 clients. We select search parameters that achieve recall $= 0.99$. Figure 13a shows that for workloads involving both search and deletion, the throughput increases with the ratio of deletions. The trend is similar to the results in Figure 10 as the insert ratio increases. The higher throughput arises because insertion and deletion operations are cheaper than ANN search operations.
Figure 13b shows that under workloads of $60\%$ search and $40\%$ insert and delete, the throughput is only slightly higher as the delete share grows, since insert operations still compute the IVF assignment and compress the vectors. Since we do not modify the coarse-grained partitioning when deleting data, the recall is maintained, as close neighbors are still likely to be selected from nearby partitions.
The Appendix contains additional results on the full-range throughput-recall tradeoff, the effect of the Euclidean distance metric, deletion, and ablation studies for training and early termination.
# 5.5 System Comparison
We compare the performance of HAKES against the five distributed vector database baselines at 0.98 recall for $k = 10$, using the two 10-million scale datasets. For Cassandra, we use the same configuration for the graph and beam search width during index construction. However, since it searches quantized vectors instead of the original vectors, we adjust $k$ to be larger than 10, such that if the refine stage is performed, it can reach the recall of 0.98. Specifically, the system uses a quantized graph index to return a larger number of candidate vectors, which are then processed by a refine stage to achieve a recall $10@10$ of 0.98. For HAKES, Sharded-HNSW, Weaviate, and Cassandra, we run one shard per node. For Milvus, we run a number of virtual QueryNodes matching the node counts used for the other systems. The QueryNodes are evenly distributed among the physical nodes in a Kubernetes cluster. We use multiple distributed clients to saturate the systems, then report the peak throughputs.
Scaling with the number of nodes. Figure 14 compares the systems' throughputs with varying numbers of nodes. HAKES and HAKES-Base scale linearly because the load of both the filtering and refinement stages is distributed evenly across the nodes, and the filtering stages of concurrent requests can be processed at different nodes in parallel. In Weaviate, a request is sent to all the shards. Although the graph index size and the number of vectors in each shard decrease with more nodes, the search cost at each shard does not decrease linearly. This is consistent with the results of ELPIS in Section 5.3, confirming that graph indexes do not scale well by partitioning. Sharded-HNSW achieves slightly better throughput than Weaviate, but the same trend is observed. Milvus' throughput increases with the number of read shards, due to the reduced read load. However, the small read shard size of 1 GiB leads to a large number of subgraphs (over 20 for Sphere-768-10m and over 30 for Sphere-1024-10m), all of which need to be searched, so the throughputs are low. In Cassandra, a single node contains multiple shards, the number of which is affected by its log-structured merge (LSM) tree compaction process. We observe that the number of shards per node decreases as the number of nodes increases, which explains the increasing throughput. At 8 nodes, there is one shard per node and the performance is similar to that of Weaviate and Sharded-HNSW. The improvement of HAKES over HAKES-Base shows the benefit of HAKES-Index in reducing the search cost with its learned compression and optimizations.

Figure 11: Tolerance against data drift (MBNET-1024).

Figure 12: Decoupling index parameters.

Figure 13: Performance under delete. (R: read, D: delete)

Figure 14: Scalability under read-only workload on (a) DPR-768-10m and (b) E5-1024-10m.

Figure 15: Throughput under read-write workload (4 nodes) on (a) DPR-768-10m and (b) E5-1024-10m.
Performance under concurrent read-write workloads. We fix 4 nodes for all systems and vary the write ratio. Figure 15 shows that all systems achieve higher throughput as the write ratio increases. For Weaviate, Sharded-HNSW, and Cassandra, a write request is processed by only one shard, as opposed to all shards for a read request. Sharded-HNSW has the highest performance among the baselines that use graph-based indexes, due to its C++ implementation. HAKES and HAKES-Base outperform all the other baselines by a considerable margin, and HAKES has higher throughput than HAKES-Base. Even though each write request needs to be processed by all IndexWorkers, HAKES processes writes more efficiently than the others, because it only computes the quantized vector and updates the IVF structure. In the other baselines, each node has to perform a read to identify the neighbor vectors and graph edges to be updated.
# 6 RELATED WORK
Managing ANN index updates. Graph indexes, like HNSW [40] and LVQ [1], rebuild graph connections locally. SPFresh [55] uses an in-memory graph to index a large number of on-disk vector partitions and proposes a scheme to keep partition sizes small for stable serving latency. At the system level, sharding is employed to reduce the impact of inserting new vectors [11, 22]. HAKES-Index appends vectors to the partitioning-based index and uses tombstones for deletion, minimizing interference with search and maintaining high recall without changing the search configuration. The low read-write contention allows HAKES to maintain replicated global indexes for better scaling performance.
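The append-plus-tombstone scheme can be sketched as follows (a minimal illustration, not the actual HAKES code): deletion only marks an id, so no partition is rewritten and the search configuration stays unchanged.

```python
class PartitionList:
    """One IVF partition: append-only vector list plus a tombstone set."""

    def __init__(self):
        self.items = []          # (id, vector) pairs, appended on insert
        self.tombstones = set()  # ids marked as deleted

    def insert(self, vid, vec):
        self.items.append((vid, vec))       # no neighbor lookups needed

    def delete(self, vid):
        self.tombstones.add(vid)            # O(1), no partition rewrite

    def scan(self):
        # search-time filtering: skip tombstoned entries during the scan
        return [(v, x) for v, x in self.items if v not in self.tombstones]
```

Compared with a graph index, neither `insert` nor `delete` touches other entries, which is why the read-write contention discussed above stays low.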
Adaptive query search. Several works exploit characteristics of the queries and intermediate search results to improve vector search. Auncel [61], iDistance [28], and VBASE [59] leverage precise similarity scores to determine whether a search can terminate early, making them unsuitable for HAKES-Index's filter stage, which operates on compressed vectors. ADSampling [18] progressively uses more dimensions to compare vector pairs. Learning-based methods like LEQAT [35, 58] employ predictive models that incur costly training and inference overhead. In contrast, HAKES-Index's early-termination check is lightweight, as it is based on simple computations over statistics available during search.
Vector quantization. ScaNN [23], SOAR [47], and QUIP [21] learn quantization codebooks to reduce the error of approximating the inner product. RaBitQ [19] quantizes vectors into binary representations and provides a theoretical error bound on the similarity score. OPQ [20] and DOPQ [39] learn data transformations and quantization codebooks to reduce the error in reconstructing the original vectors. [46] learns a transformation matrix to spread out vectors for quantized code assignment while keeping the top-$k$ neighbors close. These works have a different optimization objective from ours: we learn the dimensionality reduction and product quantization together to reduce the local similarity score distribution mismatch. Other works from the information retrieval community [54, 56, 57] propose to jointly train embedding models and product quantization codebooks, which is similar to our approach. However, their training objective is to capture the semantic similarity between data vectors, which requires access to the embedding model or labels on semantic relevance for the original data. Our approach does not require such access. | Modern deep learning models capture the semantics of complex data by
transforming them into high-dimensional embedding vectors. Emerging
applications, such as retrieval-augmented generation, use approximate nearest
neighbor (ANN) search in the embedding vector space to find similar data.
Existing vector databases provide indexes for efficient ANN searches, with
graph-based indexes being the most popular due to their low latency and high
recall in real-world high-dimensional datasets. However, these indexes are
costly to build, suffer from significant contention under concurrent read-write
workloads, and scale poorly to multiple servers.
Our goal is to build a vector database that achieves high throughput and high
recall under concurrent read-write workloads. To this end, we first propose an
ANN index with an explicit two-stage design combining a fast filter stage with
highly compressed vectors and a refine stage to ensure recall, and we devise a
novel lightweight machine learning technique to fine-tune the index parameters.
We introduce an early termination check to dynamically adapt the search process
for each query. Next, we add support for writes while maintaining search
performance by decoupling the management of the learned parameters. Finally, we
design HAKES, a distributed vector database that serves the new index in a
disaggregated architecture. We evaluate our index and system against 12
state-of-the-art indexes and three distributed vector databases, using
high-dimensional embedding datasets generated by deep learning models. The
experimental results show that our index outperforms index baselines in the
high recall region and under concurrent read-write workloads. Furthermore,
HAKES is scalable and achieves up to $16\times$ higher throughputs than
the baselines. The HAKES project is open-sourced at
https://www.comp.nus.edu.sg/~dbsystem/hakes/. | [
"cs.DB",
"cs.LG"
] |
# 1 INTRODUCTION
Recently, diffusion LLMs have become a widely discussed topic in Natural Language Processing research (Nie et al., 2025; Ye et al., 2025). They are regarded as a potential solution to key limitations of traditional auto-regressive LLMs, including the reversal curse (Berglund et al., 2023), complex reasoning (Dziri et al., 2023), long-term planning, and maintaining coherence across extended contexts (Bachmann & Nagarajan, 2024; Ye et al., 2024; 2025). Significant research efforts have focused on validating their scalability (Nie et al., 2025; Ye et al., 2025), adapting them for multimodality (Yang et al., 2025; You et al., 2025; Yu et al., 2025), applying them to reasoning tasks (Zhao et al., 2025; Huang et al., 2025; Zhu et al., 2025), and optimizing their efficiency (Ma et al., 2025; Hu et al., 2025; Wu et al., 2025). However, the long-context capabilities of diffusion LLMs, specifically their performance and potential for length extrapolation, remain unexplored.
Figure 1: Comparison of perplexity and retrieval accuracy between the diffusion LLM, LLaDA-8B, and the auto-regressive LLM, LLaMA3-8B, both within and beyond pre-training context length.
We begin by systematically evaluating diffusion LLM LLaDA (Nie et al., 2025) against auto-regressive LLM LLaMA3 (Meta, 2024a) on perplexity and retrieval tasks, both within and beyond their pretrained context lengths (Figure 1). Notably, diffusion LLMs maintain stable perplexity and exhibit localized perception during direct length extrapolation. In stark contrast, auto-regressive LLMs suffer catastrophic perplexity surges and performance collapse when input length exceeds their maximum supported context window, 8k tokens. This divergence reveals fundamental architectural differences in long-context handling, raising critical questions: (1) What mechanisms enable diffusion LLMs’ extrapolation stability? (2) Can established length-extension techniques for auto-regressive LLMs be transferred to diffusion architectures? (3) How do diffusion LLMs perform on long-context benchmarks relative to auto-regressive baselines, and what unique capabilities or limitations emerge?
In this work, we address these questions through comprehensive experiments and analysis. Besides the perplexity and retrieval experiments, we also benchmark Needle-In-A-Haystack (NIAH) performance for diffusion LLMs (LLaDA (Nie et al., 2025), LLaDA-1.5 (Zhu et al., 2025), Dream-v0 (Ye et al., 2025)), quantitatively confirming their local perception bias during length extrapolation. We then analyze this phenomenon through Rotary Position Embedding (RoPE) theory, validating our interpretation with t-SNE visualizations. Building on these insights, we propose LongLLaDA, a training-free method that successfully extends LLaDA's context window using NTK-based RoPE extrapolation (bloc97, 2023b), and verify that the scaling laws (Liu et al., 2023b) are preserved. Finally, we identify task-dependent capabilities where diffusion LLMs surpass or lag behind their auto-regressive counterparts on long-context benchmarks. Our contributions are summarized as follows:
• First systematic analysis of diffusion LLMs’ long-context behavior, revealing their unique characteristics for stable perplexity and local perception during context extrapolation, with mechanistic explanation via RoPE dynamics.
• Effective context extension demonstrating NTK-based RoPE extrapolation and scaling laws transfer seamlessly to diffusion LLMs, achieving $6 \times$ context expansion (24k tokens) without further training.
• Capability benchmarking revealing diffusion LLMs match auto-regressive models on retrieval tasks, lag in aggregation, but excel at QA. We provide foundational insights for future long-context diffusion research.
# 2 LONG-CONTEXT PHENOMENOLOGY OF DIFFUSION LLMS
We first evaluate the length extrapolation capabilities of diffusion LLMs, including LLaDA (Nie et al., 2025), LLaDA-1.5 (Zhu et al., 2025), and Dream-v0 (Ye et al., 2025), compared with auto-regressive LLMs such as LLaMA3 (Meta, 2024a), via Needle-In-A-Haystack (Gkamradt, 2023; Li et al., 2024), based on the experimental setup in Appendix B.1. All LLMs are required to generate at most 32 tokens, with diffusion LLMs using a block size and number of sampling steps of 32. The results are shown in Figure 2. LLaMA3-8B-Base and LLaMA3-8B-Instruct maintain perfect retrieval accuracy within their pretrained 8k length, but suffer catastrophic performance degradation beyond this limit, failing to retrieve information at any depth. In contrast, LLaDA-8B-Base and LLaDA-8B-Instruct achieve $100\%$ retrieval accuracy within a 4k context. Surprisingly, between 4k and up to 24k, LLaDA still retrieves information from the nearest 4k window, demonstrating a local perception like a sliding window. This behavior differs remarkably from auto-regressive LLM extrapolation. Similar phenomena are observed in LLaDA-1.5 and Dream-v0, as illustrated in Appendix B.2.
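A NIAH test places a "needle" sentence at controlled depths inside long filler text and asks the model to retrieve it. A minimal sketch of the prompt construction (illustrative, not Gkamradt's implementation):

```python
def build_niah_prompt(haystack_tokens, needle, depth):
    """Insert the needle at a relative depth in [0, 1] (0 = start, 1 = end)."""
    i = int(len(haystack_tokens) * depth)
    return haystack_tokens[:i] + [needle] + haystack_tokens[i:]

haystack = [f"filler-{i}" for i in range(10)]
prompt = build_niah_prompt(haystack, "NEEDLE", 0.5)
print(prompt.index("NEEDLE"))  # 5
```

Sweeping `depth` and the haystack length yields the 2D accuracy grids shown in the NIAH figures.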
Different from auto-regressive LLMs, diffusion LLMs are influenced by sampling steps and strategies. For simplicity, we compare the impact of sampling steps on retrieval depth in NIAH. As shown in Figure 3, using the same input-output settings as in the previous experiments, we evaluate LLaDA-8B-Base with sampling steps $s = 1, 4, 8, 16$. Results show that at 1 or 4 steps, LLaDA-8B-Base fails to retrieve information beyond 8k length, while increasing $s$ to 8 or 16 achieves retrieval depths of $25\%$ at 16k and almost $10\%$ at 24k context length. Similar results are observed on LLaDA-8B-Instruct and LLaDA-1.5 in Appendix B.2, demonstrating that the long-context performance of diffusion LLMs is influenced by sampling steps but remains constrained by the maximum supported context length.

Figure 2: Results of Needle-In-A-Haystack tests (Gkamradt, 2023) on the LLaDA-8B series (Nie et al., 2025) and the LLaMA3-8B series (Meta, 2024b) under direct extrapolation.

Figure 3: NIAH results of LLaDA-8B-Base (Nie et al., 2025) with different sampling steps $s$.
Figure 4: Comparison of the trained position embedding interval between LLaDA-8B and LLaMA3-8B. The area within the dashed line represents trained relative positions, while that beyond represents relative positions reached during length extrapolation, with unlearned position embedding values colored in gray.
# 3 MECHANISTIC ANALYSIS
According to the preliminary knowledge in Appendix A, we attribute this phenomenon to diffusion LLMs being trained with richer positional information than auto-regressive LLMs. Critically, the bidirectional attention mechanism in diffusion LLMs exposes them to a relative position range of $[1 - T_{\mathrm{train}}, T_{\mathrm{train}} - 1]$ during training, in contrast to the $[0, T_{\mathrm{train}} - 1]$ range typical of auto-regressive models. This difference is evident in the RoPE mechanism. As visualized in Figure 4, for LLaDA ($T_{\mathrm{train}} = 4\mathrm{k}$) and LLaMA3 ($T_{\mathrm{train}} = 8\mathrm{k}$), we observe how the positional embeddings (sine/cosine components) behave within and beyond their maximum trained relative positions.
• High Frequencies: Both models perceive complete sinusoidal periods within their maximum trained relative distance, yielding comparable positional information encoding.
• Moderate Frequencies: LLaMA3's auto-regressive attention observes relative positions $[0, 8191]$ when trained on 8192-token sequences. In contrast, LLaDA's bidirectional attention observes symmetric relative positions $[-4095, 4095]$ despite its shorter 4096-token training length. This symmetric coverage provides a key advantage by fully capturing a complete period of both the cosine and the sine, enhancing its tolerance of direct length extrapolation.
• Low Frequencies: Both models exhibit limited extrapolation capability beyond their pretrained context windows. However, as visualized in Figure 4, the out-of-distribution (OOD) regions differ remarkably: LLaMA3 fails to capture all negative position embeddings (gray region), representing half of the potential embedding space, while LLaDA significantly reduces the unlearned OOD space, resulting in enhanced robustness in length extrapolation.
This results in a relatively flat perplexity growth curve, similar to auto-regressive RoPE-based LLMs with a smaller base (Liu et al., 2023b; Men et al., 2024), as detailed in Appendix A. However, since the cosine function in RoPE, which primarily captures relative distances, is even, negative relative positions do not increase the LLM's maximum perceivable distance during pre-training. Thus, diffusion LLMs can only retrieve key information from the limited relative positions within the training length, leading to the observed decay pattern in the NIAH evaluation.
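The difference in observed relative positions is easy to verify numerically. A small NumPy check with a toy sequence length standing in for $T_{\mathrm{train}}$:

```python
import numpy as np

T = 8                                    # toy stand-in for T_train
pos = np.arange(T)
rel = pos[:, None] - pos[None, :]        # relative position (query - key)

# Auto-regressive (causal) attention only sees keys at or before the query,
# so relative positions span [0, T-1].
causal = rel[np.tril_indices(T)]
print(int(causal.min()), int(causal.max()))   # 0 7

# Bidirectional attention sees all key positions: [1-T, T-1].
print(int(rel.min()), int(rel.max()))         # -7 7
```

The symmetric range is exactly the extra positional coverage that the moderate-frequency analysis above credits for LLaDA's extrapolation stability.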
We validate this interpretation with t-SNE visualizations (Van der Maaten & Hinton, 2008; Zandieh et al., 2024) of the QK states from the final layer of LLaMA3-8B-Base (Meta, 2024a) and LLaDA-8B-Base (Nie et al., 2025), as shown in Figure 5. As shown in Figure 5a, for auto-regressive LLMs such as LLaMA3-8B-Base, the QK states within and beyond the maximum supported context length of 8k form two distinct distribution clusters, and the manifold of QK states with RoPE also shows a different trend when the position embedding becomes OOD. In contrast, for diffusion LLMs such as LLaDA-8B-Base, there is no distribution shift between QK states within and beyond 4k, and the manifold of QK states with RoPE remains uniform. This demonstrates that diffusion LLMs are more robust to the OOD position embeddings encountered in length extrapolation. Therefore, unlike traditional auto-regressive LLMs, which exhibit catastrophic performance degradation when exceeding their maximum supported context length, diffusion LLMs maintain stable outputs and demonstrate local perception in extended contexts.
Figure 5: Visualization of the QK states from the final layer of LLaMA3-8B-Base (Meta, 2024a) and LLaDA-8B-Base (Nie et al., 2025) for a sample from the GovReport subset of LongBench (Bai et al., 2023). The visualization uses a 2D t-SNE projection (Van der Maaten & Hinton, 2008), with each token represented as a point and the position index indicated by color.
# 4 CONTEXT EXTENSION FOR DIFFUSION LLMS
Having clarified the reason for this surprising phenomenon, we now turn to extrapolation methods for diffusion LLMs. Since the retrievable depth of diffusion LLMs remains constrained by the range of cosine values encountered during pre-training, we transfer the NTK-based extrapolation (bloc97, 2023b) and its scaling laws (Liu et al., 2023b) to diffusion LLMs, thus proposing a length extrapolation method for diffusion LLMs, LongLLaDA. As detailed in Appendix A, the scaling factor $\lambda$ in training-free NTK scaling (bloc97, 2023b) for RoPE-based auto-regressive LLMs is determined by the target extrapolation length $t$ and the critical dimension $d_{\mathrm{extra}}$, which is computed from the rotary base $\beta_0$ and the pretrained context length $T_{\mathrm{train}}$, as shown in Equation 1.
$$
\lambda = \beta_0^{-1} \cdot \left( \frac{t}{2\pi} \right)^{d/d_{\mathrm{extra}}}, \quad d_{\mathrm{extra}} = 2 \left\lceil \frac{d}{2} \log_{\beta_0} \frac{T_{\mathrm{train}}}{2\pi} \right\rceil .
$$
Similarly, in LongLLaDA, based on Nie et al. (2025), the pretrained rotary base is $\beta_0 = 500000$ and the pre-training context length $T_{\mathrm{train}}$ is 4k. This yields a critical dimension $d_{\mathrm{extra}} = 64$. Accordingly, the required scaling factor $\lambda$ for extrapolation to 8k, 16k, 24k, and 32k is calculated as 4, 14, 31, and 55, respectively. The extrapolation results are illustrated in Figure 6 and Figure 7.
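The numbers above can be reproduced from Equation 1 in a few lines. The head dimension $d = 128$ is an assumption (a LLaMA-style architecture) that recovers the stated $d_{\mathrm{extra}} = 64$; the scaling factor divides by the pretrained base and rounds up, which matches the reported values:

```python
import math

# Critical dimension: d_extra = 2 * ceil((d/2) * log_base(T_train / 2*pi))
def critical_dimension(d, base, train_len):
    return 2 * math.ceil((d / 2) * math.log(train_len / (2 * math.pi), base))

# Scaling factor: lambda = base^{-1} * (t / 2*pi)^{d / d_extra}, rounded up
def ntk_scaling_factor(target_len, d, d_extra, base):
    return math.ceil((target_len / (2 * math.pi)) ** (d / d_extra) / base)

d, base, train_len = 128, 500_000, 4096   # assumed head dim; stated base / 4k
d_extra = critical_dimension(d, base, train_len)
factors = [ntk_scaling_factor(k * 1024, d, d_extra, base) for k in (8, 16, 24, 32)]
print(d_extra, factors)  # 64 [4, 14, 31, 55]
```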
When $\lambda = 4$ or $14$, LongLLaDA effectively extrapolates diffusion LLMs to the corresponding context lengths, achieving near $100\%$ recall across all depths within these ranges. As the context length increases beyond the extrapolation limit, the retrievable depth expands proportionally while maintaining the local-perception effect, and the average depth score curves shift rightward across context lengths. When $\lambda = 31$, a lost-in-the-middle phenomenon (Liu et al., 2023a) similar to that of auto-regressive models emerges at intermediate depths, indicating that LongLLaDA approaches its practical extrapolation limit (bloc97, 2023b). When $\lambda = 55$, further extrapolation is unachievable. We also validate the effectiveness of LongLLaDA on LLaDA-1.5 (Zhu et al., 2025) and Dream-v0 (Ye et al., 2025) in Appendix B.2. Consequently, for RoPE-based diffusion LLMs, NTK extrapolation and its scaling law remain applicable during inference.
Figure 6: NIAH results of LLaDA-8B-Base (Nie et al., 2025) with different RoPE scaling factors.
Figure 7: NIAH results of LLaDA-8B-Instruct (Nie et al., 2025) with different RoPE scaling factors.
# 5 TASK-DRIVEN LONG-CONTEXT CAPABILITY ANALYSIS
Regarding the downstream long-context performance of diffusion LLMs and their differences from traditional auto-regressive LLMs, beyond the NIAH retrieval evaluation, we conduct comparative analyses across further benchmarks using LLaDA and LLaMA as examples. We first evaluate LLaDA-8B (Nie et al., 2025), LLaDA-1.5 (Zhu et al., 2025), and LLaMA3-8B (Meta, 2024a), including pretrained models and those employing NTK-based extrapolation during inference, on LongBench (Bai et al., 2023) at 4k and 8k context lengths, with overlong inputs truncated from the middle. For the summarization tasks, the output length is 512, while for the others it is 64. We keep the sampling steps equal to the output length and the block size at 64 for diffusion LLMs. The results are shown in Table 1. Again, LLaDA produces stable outputs and achieves decent performance beyond its maximum supported context length. Moreover, we find that in all task domains besides synthetic tasks, the difference between the LLaDA series and the LLaMA3 series is relatively limited compared with the differences within the LLaMA3 series. Only in the synthetic domain does the LLaDA series outperform the LLaMA3 series consistently. This motivates an in-depth discussion of the performance of diffusion LLMs on synthetic tasks compared with auto-regressive LLMs.
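The middle-truncation convention can be sketched with a hypothetical helper (not the benchmark's actual implementation):

```python
def truncate_middle(tokens, max_len):
    """Drop tokens from the middle so prompts keep their head and tail.

    A common long-context evaluation convention, sketched here for
    illustration: task instructions usually sit at both ends of the
    prompt, so the middle is the safest part to cut.
    """
    if len(tokens) <= max_len:
        return list(tokens)
    head = max_len // 2
    tail = max_len - head
    return list(tokens[:head]) + list(tokens[-tail:])

tokens = list(range(10_000))
clipped = truncate_middle(tokens, 4096)
assert len(clipped) == 4096
assert clipped[:2048] == tokens[:2048]      # head preserved
assert clipped[-2048:] == tokens[-2048:]    # tail preserved
```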
Table 1: Results of LLaDA-8B (Nie et al., 2025), LLaDA-1.5 (Zhu et al., 2025) and LLaMA3-8B (Meta, 2024b) on LongBench (Bai et al., 2023) under 4k and 8k context length. Gray cells indicate that the evaluation context length exceeds the context length supported by the evaluated LLM. SD, MD, Sum, and Syn stand for Single-Doc QA, Multi-Doc QA, Summarization, and Synthetic tasks, while Avg is the average score of all subtasks weighted by the number of evaluation examples.
Table 2: Results of LLaDA-8B (Nie et al., 2025), LLaDA-1.5 (Zhu et al., 2025) and LLaMA3-8B (Meta, 2024b) on RULER (Hsieh et al., 2024) under 4k, 8k and 16k context length.
We extend the discussion with the RULER benchmark (Hsieh et al., 2024), comparing LLaDA-8B (Nie et al., 2025), LLaDA-1.5 (Zhu et al., 2025), and LLaMA3-8B (Meta, 2024a) at context lengths of 4k, 8k, and 16k. We set the block size and sampling steps to 64 for diffusion LLMs. The results are shown in Table 2. First, consistent with the NIAH results, auto-regressive LLMs fail to produce valid outputs beyond their effective context length, while diffusion LLMs maintain measurable performance. Regarding task types, diffusion LLMs achieve results comparable to auto-regressive LLMs on NIAH tasks, including the Single-Key, Multi-Key, Multi-Query, and Multi-Value variants. However, diffusion LLMs show significantly inferior performance on aggregation tasks, including Variable Tracking and Frequent/Common Words Extraction, where auto-regressive LLMs typically perform well. Surprisingly, on QA tasks, including SQuAD and HotpotQA, which challenge auto-regressive LLMs (Hsieh et al., 2024), diffusion LLMs demonstrate superior capability. These observations reveal the distinctive characteristics of diffusion LLMs in long-context tasks: current diffusion LLMs, like LLaDA, match auto-regressive LLMs, like LLaMA3, on most task types, but consistently underperform on aggregation tasks and outperform on QA tasks.
# 6 RELATED WORK
Large Language Diffusion Models Recently, Large Language Diffusion Models, or diffusion LLMs, have become a widely discussed topic in NLP research. After theoretical simplification (Sahoo et al., 2024; Ou et al., 2024) and fine-tuning verification (Gong et al., 2024), researchers scaled diffusion LLMs to billions of parameters (Nie et al., 2024; 2025; Ye et al., 2025) and demonstrated that they achieve comparable results, with more promising performance on the reversal curse (Berglund et al., 2023). These advances immediately attracted the attention of many more researchers. Significant research efforts have focused on adapting diffusion LLMs for multimodality, such as MMaDA (Yang et al., 2025), LLaDA-V (You et al., 2025), and LaViDa (Li et al., 2025), applying them to reasoning tasks, such as d1 (Zhao et al., 2025), DCoLT (Huang et al., 2025), and LLaDA-1.5 (Zhu et al., 2025), and optimizing their efficiency, including dKV-Cache (Ma et al., 2025), Dimple (Yu et al., 2025), dLLM-Cache (Liu et al.), FreeCache (Hu et al., 2025), and Fast-dLLM (Wu et al., 2025). However, there has been no discussion of the long-context capability of diffusion LLMs.
Length Extrapolation in LLM Length extrapolation, also called length generalization or context extension, is an important issue for LLMs (Press et al., 2022). Mainstream extrapolation research focuses on adjusting position embeddings, especially the widely used RoPE (Su et al., 2021). For example, Linear PI (Chen et al., 2023) first achieved length extrapolation by scaling position indices into the pre-training range with little fine-tuning. The NTK method (bloc97, 2023b;a; Peng et al., 2023) then scales the rotary base in RoPE (Su et al., 2021) to achieve plug-and-play length extrapolation. Subsequently, amplifying the rotary base and training on longer lengths has become the dominant approach (Rozière et al., 2023; Xiong et al., 2023; Liu et al., 2023b; Ding et al., 2024). In addition, ReRoPE (Su, 2023), ReAttention (Liu et al., 2024b), and DCA (An et al., 2024a;b) achieve plug-and-play extrapolation by limiting the relative position. In this paper, we focus on length extrapolation via NTK scaling (bloc97, 2023b; Liu et al., 2023b) in the inference stage, and try to reveal and explain the similarities and differences in length extrapolation between diffusion-based and auto-regressive LLMs. | Large Language Diffusion Models, or diffusion LLMs, have emerged as a
significant focus in NLP research, with substantial effort directed toward
understanding their scalability and downstream task performance. However, their
long-context capabilities remain unexplored, lacking systematic analysis or
methods for context extension. In this work, we present the first systematic
investigation comparing the long-context performance of diffusion LLMs and
traditional auto-regressive LLMs. We first identify a unique characteristic of
diffusion LLMs: unlike auto-regressive LLMs, they maintain remarkably
\textbf{\textit{stable perplexity}} during direct context extrapolation.
Furthermore, where auto-regressive models fail outright during the
Needle-In-A-Haystack task with context exceeding their pretrained length, we
discover diffusion LLMs exhibit a distinct \textbf{\textit{local perception}}
phenomenon, enabling successful retrieval from recent context segments. We
explain both phenomena through the lens of Rotary Position Embedding (RoPE)
scaling theory. Building on these observations, we propose LongLLaDA, a
training-free method that integrates LLaDA with the NTK-based RoPE
extrapolation. Our results validate that established extrapolation scaling laws
remain effective for extending the context windows of diffusion LLMs.
Furthermore, we identify long-context tasks where diffusion LLMs outperform
auto-regressive LLMs and others where they fall short. Consequently, this study
establishes the first context extrapolation method for diffusion LLMs while
providing essential theoretical insights and empirical benchmarks critical for
advancing future research on long-context diffusion LLMs. | [
"cs.CL"
] |
# 1 Introduction
Recent advances in process mining have improved the ability to capture and analyze complex organizational workflows through event logs. However, this progress has led to an increasing abundance of process models, often overlapping in scope or providing divergent insights for different stakeholders (e.g., operational vs. managerial) [6,19,20]. This “model overload” phenomenon presents strategic challenges: rather than supporting decision-making, the sheer volume of (apparently) competing models can obscure key insights and hamper the alignment of process analytics with organizational objectives. As a result, managers may struggle to distinguish relevant models from irrelevant ones, making it difficult to focus on actionable insights (see e.g., [1,4,13,15]).
While process mining accelerates business process digitalization, its effectiveness depends on deeper integration with organizational goals, key performance indicators, and managerial expertise [21,26]. Achieving this integration across diverse processes and stakeholders necessitates a robust decision support mechanism that balances tacit knowledge with empirical findings. Traditional decision support systems often fall short in this regard, as they struggle to incorporate subjective managerial perspectives alongside quantitative process metrics. Consequently, decision-makers must navigate complex model repositories without structured guidance, increasing the risk of suboptimal or misaligned choices.
Multi-Criteria Decision-Making (MCDM) provides a structured framework for evaluating and prioritizing process models by combining quantitative performance metrics with qualitative managerial insights [18]. As a well-established decision analysis method, MCDM is particularly effective when no single optimal solution exists, enabling decision-makers to navigate trade-offs between competing criteria [3,23]. Applying MCDM to process mining extends beyond evaluating models based solely on fitness or precision, promoting alignment with both strategic and operational objectives.
This paper presents an approach for applying MCDM to address model overload in process mining. We propose an approach that synthesizes process mining outputs while integrating an organization’s strategic priorities. By combining objective indicators (e.g., fitness, precision) with managerial assessments, MCDM provides a structured way for evaluating and prioritizing process models. This approach offers two key benefits: aligning model selection with strategic objectives and clarifying trade-offs in multi-stakeholder decision-making.
To illustrate the potential of this approach, we present an illustrative example in a logistics context, where the Analytic Hierarchy Process (AHP) is applied as an MCDM approach to filter and prioritize mined process models. Initial findings suggest that MCDM-based methods can reduce model selection complexity, guide resource allocation toward high-impact analyses, and improve communication between technical and managerial stakeholders. While full-scale validation remains an area for future research, we expect that this concept will generate valuable discussions on integrating decision-making theory, managerial insights, and process mining methods. This work proposes a structured approach for more strategic and context-aware use of process models.
The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 introduces the proposed MCDM approach. In Sect. 4, we present an illustrative example in the logistics domain, highlighting initial findings and challenges. Section 5 concludes the paper.
# 2 Related Work
The growing number of discovered process models often results in highly complex structures, commonly referred to as “spaghetti models”, and an overwhelming number of variations. These models can obscure meaningful patterns, making it difficult to derive actionable insights [17]. While filtering and abstraction techniques help manage complexity, they frequently introduce redundant or conflicting perspectives, further complicating their alignment with organizational objectives [20]. Moreover, beyond the complexity of each individual model, the sheer volume of potentially overlapping or incompatible models can compound decision-making challenges. Research highlights that stakeholder needs contribute to model overload: highly detailed models, though technically accurate, may be too complex for managerial decision-making [2]. Analysts often struggle to determine which variant best represents the process in a given context, further complicating their evaluations.
Several techniques have been proposed to address complexity. Existing solutions focus on model filtering [11], abstraction [16,28], and domain-specific metrics [14]. Trace clustering [25] is a well-known method that groups similar traces and removes minor variations to enhance process comprehension. However, finding the right level of abstraction remains a challenge: excessive simplification may obscure critical details, while insufficient abstraction leaves models too complex. Process performance metrics, such as fitness, precision, and generalization [10], can assist in selecting promising process models. Furthermore, complexity measures—including control-flow complexity and node/edge counts—help detect overly complex models [4]. Nonetheless, these techniques primarily enhance structural clarity rather than directly supporting strategic decision-making.
Despite advancements in process mining, consolidating multiple discovered models into a coherent, decision-driven framework remains challenging. Most approaches emphasize structural refinement but often neglect managerial preferences, which favor simplicity and strategic relevance over purely technical model quality. Recent studies have explored MCDM techniques in process mining. For example, [24] applied MCDM to rank industrial machines for maintenance planning, integrating technical indicators with expert judgment. Similarly, [12] used AHP for process mining technology selection, incorporating uncertainty and sensitivity analysis to improve ranking robustness. While these studies demonstrate MCDM’s potential for process-related decision-making, they focus on technology and asset selection rather than process model evaluation. Our work builds on that foundation by applying MCDM, specifically AHP, to structurally compare and prioritize discovered process models, thereby contributing to alignment with managerial objectives.
# 3 Proposed Approach
This section presents an MCDM approach to assist in selecting and prioritizing process models (typically obtained from large repositories). As illustrated in Fig. 1, the approach structures decision-making by ranking and selecting models based on multiple, potentially conflicting criteria:
1. Problem definition: define the selection objective, identifying the most relevant process model(s) from a set of discovered alternatives.
2. Criteria identification: evaluation relies on two broad categories. Quantitative metrics are process mining measures, such as fitness, precision, generalization, and simplicity. Qualitative metrics are managerial factors, such as decision-support value (e.g., the model’s ability to highlight operational inefficiencies), stakeholder alignment (e.g., relevance to different stakeholder groups), and implementation feasibility (e.g., potential impact).
3. Application of knock-out criteria: models failing key constraints—such as structural completeness, data quality, or regulatory compliance—are eliminated early.
4. Criteria weighting: weights are determined using pairwise comparisons, entropy-based weighting, or expert input, to reflect each criterion’s relative importance.
5. Model evaluation and selection: process models are ranked using an MCDM technique. The choice of method depends on the decision context, data availability, and stakeholder preferences (see [27] for an overview).
6. Sensitivity analysis: criteria weights are varied to assess ranking stability under different scenarios.
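As a toy illustration of steps 3–5 (knock-out filtering followed by weighted-sum ranking), with all model names, metric values, thresholds, and weights invented for the sketch:

```python
# Hypothetical sketch: knock-out filtering, then a weighted-sum ranking.
# Names, values, the 0.95 threshold, and the weights are illustrative only.
models = {
    "M1": {"fitness": 0.98, "precision": 0.90, "generalization": 0.85},
    "M2": {"fitness": 0.80, "precision": 0.95, "generalization": 0.90},
    "M3": {"fitness": 0.97, "precision": 0.85, "generalization": 0.92},
}
weights = {"fitness": 0.5, "precision": 0.3, "generalization": 0.2}

def passes_knockout(metrics, min_fitness=0.95):
    # Step 3: eliminate models failing a hard constraint early.
    return metrics["fitness"] >= min_fitness

def score(metrics):
    # Step 5: simple weighted-sum aggregation over the criteria.
    return sum(weights[c] * metrics[c] for c in weights)

candidates = {m: v for m, v in models.items() if passes_knockout(v)}
ranking = sorted(candidates, key=lambda m: score(candidates[m]), reverse=True)
print(ranking)  # ['M1', 'M3']  (M2 was knocked out on fitness)
```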
Fig. 1: Proposed MCDM approach for process model selection.
By structuring the selection process in these steps, this approach can mitigate bias, reduce reliance on purely technical measures, and better align process mining outputs with managerial decision-making.
# 4 Illustrative Example
To illustrate the feasibility of our proposed MCDM approach, we apply it to a logistics case study presented in [8], which focuses on selecting process models derived from event logs. The study includes a dataset comprising 270 event logs generated across 27 distinct system configurations [7], with each configuration yielding 20 log files. For this illustrative example, we concentrate on the first event log per experiment, thus evaluating one event log per configuration. Process models were extracted using the Inductive Miner, which is expected to produce sound, relatively simple models, and were subsequently evaluated.
# 4.1 Problem Definition
The objective of this illustrative study is to support decision-makers in selecting the most suitable configuration for investment, taking into account uncertainty in the decision-making process. Process models function as part of multiple evaluation criteria, alongside throughput time and implementation risk. A key challenge is to balance technical accuracy with practical feasibility, thereby contributing to a more informed and strategic investment decision.
# 4.2 Criteria Identification
To evaluate model quality, we consider quantitative metrics associated with key process mining quality dimensions: fitness, precision, and generalization. Simplicity is excluded due to its strong correlation with generalization [9]. Using the Inductive Miner, we obtained the scores for the scenarios detailed in [8], as illustrated in Fig. 2. As additional criteria, we include the throughput times specified in [8] and the implementation risk linked to business goal alignment.
Fig. 2: Evaluation of process models extracted with the Inductive Miner.
# 4.3 Application of Knock-Out Criteria
To ensure high-quality model selection, we set a strict fitness threshold of 0.999. Out of the initial 27 process models, only 5 meet this criterion and are retained for further analysis.
# 4.4 Criteria Weighting
We use Saaty’s AHP method [22], which is widely applied in decision-making [5], to determine the relative importance of fitness ($F$), precision ($P$), and generalization ($G$) via expert pairwise comparisons (Table 1). The resulting weights are $w_F = 0.57$, $w_P = 0.22$, and $w_G = 0.21$. Throughput time ($T$) is categorized into low (0–50 min, $C_2 = 1.0$), medium (50–100 min, $C_2 = 0.75$), and high ($> 100$ min, $C_2 = 0.50$). Implementation risk ($IR$), assessed externally, is classified as low ($C_3 = 1.0$), medium ($C_3 = 0.70$), or high ($C_3 = 0.50$). The overall weight allocation is $w_1 = 0.40$ (process model quality), $w_2 = 0.25$ (throughput time), and $w_3 = 0.35$ (implementation risk).
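The weight derivation can be sketched with the common geometric-mean approximation to AHP's principal-eigenvector method; the pairwise matrix below is hypothetical, since Table 1's actual judgments are not reproduced here:

```python
from math import prod

# Sketch of AHP weight derivation via the geometric-mean approximation.
# The pairwise comparison matrix is hypothetical; a_ij > 1 means
# criterion i is judged more important than criterion j.
comparisons = [
    [1.0, 3.0, 3.0],   # fitness vs. (fitness, precision, generalization)
    [1/3, 1.0, 1.0],   # precision
    [1/3, 1.0, 1.0],   # generalization
]

def ahp_weights(matrix):
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

w_f, w_p, w_g = ahp_weights(comparisons)
print(round(w_f, 2), round(w_p, 2), round(w_g, 2))  # 0.6 0.2 0.2
```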
Table 1: Pairwise comparison results for $F$ , $P$ , and $G$ .
# 4.5 Model Evaluation and Selection
The final model score is computed as $C_{\mathrm{total}} = \sum_{i=1}^{n} w_i C_i$, where $C_1$ (process model quality), $C_2$ (throughput time), and $C_3$ (implementation risk) contribute to the ranking (see Table 2). Although configuration 532 leads in $C_1$ and $C_2$, configuration 411 ranks best overall due to its balanced performance. This highlights the importance of a multi-criteria evaluation integrating technical quality and practical feasibility.
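The scoring rule and the category thresholds stated above can be combined into a small sketch; the configuration's metric values below are hypothetical:

```python
# Weights and category thresholds as stated in the text; the example
# configuration's values (quality 0.90, 40 min, medium risk) are invented.
W1, W2, W3 = 0.40, 0.25, 0.35  # model quality, throughput time, impl. risk

def throughput_score(minutes):
    if minutes <= 50:
        return 1.0    # low
    if minutes <= 100:
        return 0.75   # medium
    return 0.50       # high

RISK_SCORE = {"low": 1.0, "medium": 0.70, "high": 0.50}

def total_score(c1_quality, throughput_min, risk):
    # C_total = w1*C1 + w2*C2 + w3*C3
    return W1 * c1_quality + W2 * throughput_score(throughput_min) + W3 * RISK_SCORE[risk]

print(round(total_score(0.90, 40, "medium"), 3))  # 0.36 + 0.25 + 0.245 = 0.855
```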
Table 2: Performance comparison of different configurations.
Although we do not include a sensitivity analysis in this study, future work could explore its effect by adjusting criteria weights to assess ranking robustness. Further validation may involve expanding beyond traditional process mining dimensions (e.g., fitness, precision, generalization) to incorporate additional metrics, both quantitative and qualitative, as well as experimenting with alternative MCDM methods to enhance model selection stability. | Process mining is increasingly adopted in modern organizations, producing
numerous process models that, while valuable, can lead to model overload and
decision-making complexity. This paper explores a multi-criteria
decision-making (MCDM) approach to evaluate and prioritize process models by
incorporating both quantitative metrics (e.g., fitness, precision) and
qualitative factors (e.g., cultural fit). An illustrative logistics example
demonstrates how MCDM, specifically the Analytic Hierarchy Process (AHP),
facilitates trade-off analysis and promotes alignment with managerial
objectives. Initial insights suggest that the MCDM approach enhances
context-sensitive decision-making, as selected models address both operational
metrics and broader managerial needs. While this study is an early-stage
exploration, it provides an initial foundation for deeper exploration of
MCDM-driven strategies to enhance the role of process mining in complex
organizational settings. | [
"cs.CY",
"cs.DB"
] |
# 1 Introduction
In recent years, large language models (LLMs) have been evolving rapidly, demonstrating high performance across various tasks (OpenAI, 2023; Anthropic, 2024; Google, 2024) and exerting significant influence. In addition to the high-performing proprietary models, there have been active efforts to develop open, small and high-performance LLMs (Dubey et al., 2024; Yang et al., 2024; DeepSeek-AI et al., 2024; Abdin et al., 2024).
To compare these LLMs, it is necessary to evaluate their performance on various tasks. Open-ended evaluation is particularly required to measure response capabilities and instruction-following ability as chat assistants. LLM-as-a-Judge (Zheng et al., 2023) is a technique developed for open-ended evaluation, where an evaluator LLM measures the performance of benchmarked LLMs. This approach has the advantage of being lower-cost and faster than manual evaluation (Gu et al., 2024a).
However, despite its growing adoption, there remain open questions about the reliability of LLM-as-a-Judge. In particular, we investigate two essential properties to ensure its trustworthiness: 1. Alignment with human judgments (Li et al., 2024a), and 2. Consistency of evaluation results (Schroeder and Wood-Doughty, 2024; Wei et al., 2024). Without these properties, automatic evaluation using LLMs risks producing misleading conclusions about model performance.
In this work, we aim to identify key factors that affect the reliability of LLM-as-a-Judge. To this end, we conduct a series of empirical analyses using two public benchmarks, BIGGEN-Bench (Kim et al., 2024) and EvalBiasBench (Park et al., 2024a), which provide a diverse set of open-ended tasks. Through systematic experiments, we investigate the impact of 1. the presence or absence of reference answers and score descriptions in evaluation prompts, 2. the choice of decoding strategy (greedy vs. sampling) used by the evaluator model, and 3. the role of chain-of-thought (CoT) reasoning in the evaluator’s response.
Our findings reveal that:
1. Evaluation design: Providing both reference answers and score descriptions is crucial for reliable evaluation. Omitting either significantly degrades alignment with human judgments, especially for weaker evaluator models. Furthermore, providing descriptions only for the highest and lowest scores yields the most reliable results, suggesting that the necessity of descriptions for intermediate scores should be reconsidered.
2. Decoding strategy: Greedy decoding ensures zero score variance, but it tends to show lower correlation with human judgments than sampling-based decoding. Sampling introduces variability in scores, but it better captures the nuances of human preferences. Furthermore, among the three aggregation methods compared, averaging scores aligns best with human judgments.
3. Use of CoT reasoning: When well-defined score descriptions are available, including CoT reasoning in evaluator responses has little effect on alignment with human judgments. From both a cost and performance perspective, CoT-free scoring combined with score averaging provides strong alignment with human evaluations while maintaining low computational cost.
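The aggregation comparison in finding 2 can be illustrated with a toy example; mean, median, and mode are plausible aggregation choices (the text only states that averaging performed best among the three methods compared):

```python
from statistics import median, mode

# Hypothetical scores sampled from a non-deterministic evaluator LLM for
# one response; three example ways to aggregate them into a final score.
sampled_scores = [3, 4, 4, 5, 4]

avg = sum(sampled_scores) / len(sampled_scores)  # averaging, as in finding 2
med = median(sampled_scores)
mod = mode(sampled_scores)
print(avg, med, mod)  # 4.0 4 4
```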
# 2 Related Work
Evaluation of LLMs. Evaluating LLMs on generative tasks involves significant manual cost, motivating autonomous evaluation methods. Traditional approaches measure similarity between model outputs and references using lexical features (BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015)) or semantic features (BERTScore (Zhang et al., 2020), COMET (Rei et al., 2020)). However, these methods struggle with tasks allowing diverse valid responses. LLM-as-a-Judge (Zheng et al., 2023) addresses this by using capable models like GPT-4 as evaluators, employing Single Answer Grading (1–10 scoring) or Pairwise Evaluation (ranking multiple outputs) (Doddapaneni et al., 2024). MT-Bench (Zheng et al., 2023) assesses multi-turn capabilities using Single Answer Grading with reference answers. AlpacaEval 2.0 (Dubois et al., 2024) uses Pairwise Evaluation to mitigate length bias. Arena-Hard (Li et al., 2024b) filters ChatbotArena prompts for quality and diversity. BIGGEN-Bench (Kim et al., 2024) provides instance-specific criteria, improving correlation with human judgments.
Alignment with human judgments in LLM-as-a-Judge. Various approaches improve alignment with human judgments, including CoT reasoning, self-generated criteria, and multiple evaluations (Zheng et al., 2023; Zeng et al., 2024). Other methods optimize prompts using correlation with human annotations (Liu et al., 2023b, 2024b) or employ ensemble voting (Liu et al., 2023a). Gu et al. (2024b) proposed metacognitive re-evaluation for consistency. Our study utilizes simple methodologies from a neutral standpoint to analyze the impact of evaluation design, decoding strategies, and CoT reasoning on alignment with human judgments.
Consistency of evaluation results in LLM-as-a-Judge. Existing studies have identified various biases where semantically unchanged modifications affect evaluation results. Chen et al. (2024) examined gender, authority, and aesthetic biases. Ye et al. (2024) identified 12 major latent biases, including positional and self-enhancement bias. Park et al. (2024b) highlighted challenges with response length variations and content continuity. To the best of our knowledge, this is the first research to extensively investigate how much evaluation results can fluctuate depending on the design of the evaluation tasks and the evaluation strategies.
# 3 Experiments
In this section, we examine what factors affect the alignment with human judgments and consistency of evaluation results. Research Questions (RQs) we aim to investigate are as follows:
1. Which components of evaluation design facilitate improved alignment with human judgments and enhance the consistency of evaluation results?
2. What are the advantages and disadvantages of deterministic versus non-deterministic decoding strategies?
3. Does CoT improve alignment with human judgments and the consistency of evaluation results?
# 3.1 Experimental Method
We describe the experimental methods to investigate the RQs.
Alignment with human judgments. To measure the degree of alignment with human judgments, we compute the correlation coefficient between the scores provided by humans and those generated by an evaluator LLM.
Consistency of evaluation results. We use Krippendorff’s alpha coefficient, denoted as $\alpha$, to evaluate the consistency of evaluation results. The $\alpha$ value, which indicates the consistency and reliability of evaluations, is 1 for perfect agreement, 0 for random annotations, and negative for systematic disagreement (see Appendix A for details).
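A minimal sketch of Krippendorff's alpha for nominal ratings, in the standard coincidence-matrix formulation (degenerate inputs where every rating is identical across all units are not handled):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data: alpha = 1 - D_o / D_e.

    `units` is a list of rating lists, one per evaluated item; units with
    fewer than two ratings carry no pairable information and are skipped.
    """
    units = [u for u in units if len(u) >= 2]
    coincidences = Counter()
    for ratings in units:
        m = len(ratings)
        # Each ordered pair of slots within a unit contributes 1/(m-1).
        for a, b in permutations(range(m), 2):
            coincidences[(ratings[a], ratings[b])] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), v in coincidences.items():
        n_c[c] += v
    n = sum(n_c.values())
    observed = sum(v for (c, k), v in coincidences.items() if c != k) / n
    expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 - observed / expected

print(krippendorff_alpha_nominal([[1, 1], [2, 2]]))                  # 1.0
print(round(krippendorff_alpha_nominal([[1, 1], [2, 2], [1, 2]]), 3))  # 0.444
```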
Datasets. We adopt BIGGEN-Bench (Kim et al., 2024), which includes nine tasks such as instruction following, tool use, and reasoning, each with detailed, hand-crafted evaluation criteria. The evaluation template used in our experiments is shown
# Template for Evaluation Prompt
### Task Description:
An instruction (which may include an Input), a response to evaluate, a reference answer scoring 5, and a score rubric representing evaluation criteria are provided.
1. Write detailed feedback assessing the response strictly based on the score rubric.
2. After the feedback, provide an integer score from 1 to 5, referring to the rubric.
3. The output format should be: "(write feedback for criteria) [RESULT] (an integer between 1 and 5)"
4. Do not include any additional introductions, conclusions, or explanations.
### The instruction to evaluate:
{instruction}
### Response to evaluate:
{response}
### Reference Answer (Score 5):
{reference answer}
### Score Rubrics:
[{evaluation axes}]
Score 1: {score1_description}
Score 2: {score2_description}
Score 3: {score3_description}
Score 4: {score4_description}
Score 5: {score5_description}
### Feedback:
in Figure 1. We also use EvalBiasBench (Park et al., 2024a), an instruction-following benchmark with both correct and biased answers. We generated evaluation criteria using GPT-4o-2024-08-06 to encourage lower scores for biased responses and improve consistency.
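Filling such a template can be done with a small helper; the function below is our own sketch (field names mirror the template's placeholders, and the rubric contents are a made-up example):

```python
TEMPLATE = """### Task Description:
An instruction (which may include an Input), a response to evaluate, a reference
answer scoring 5, and a score rubric representing evaluation criteria are provided.
1. Write detailed feedback assessing the response strictly based on the score rubric.
2. After the feedback, provide an integer score from 1 to 5, referring to the rubric.
3. The output format should be: "(write feedback for criteria) [RESULT] (an integer between 1 and 5)"
4. Do not include any additional introductions, conclusions, or explanations.

### The instruction to evaluate:
{instruction}

### Response to evaluate:
{response}

### Reference Answer (Score 5):
{reference}

### Score Rubrics:
[{axes}]
{rubric}

### Feedback:"""


def build_prompt(instruction, response, reference, axes, score_descriptions):
    # Render the five per-score descriptions as rubric lines, then fill the template.
    rubric = "\n".join(
        f"Score {i}: {d}" for i, d in enumerate(score_descriptions, start=1)
    )
    return TEMPLATE.format(
        instruction=instruction, response=response, reference=reference,
        axes=axes, rubric=rubric,
    )


prompt = build_prompt(
    "Summarize the article in one sentence.",
    "The article is about bridges.",
    "A one-sentence summary covering the main finding.",
    "Faithfulness and conciseness",
    ["Off-topic", "Mostly wrong", "Partially correct",
     "Minor omissions", "Faithful and concise"],
)
```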
Models. We use GPT-4o-2024-08-06 as the evaluator LLM. Furthermore, considering recent studies on self-improvement (Yuan et al., 2024; Madaan et al., 2023) that use local LLMs as evaluators (Song et al., 2024; Kamoi et al., 2024), we also use LLaMA-3.1-70B-Instruct (Dubey et al., 2024) as the evaluator LLM.
# 3.2 Results
RQ1. Which components of evaluation design facilitate improved alignment with human judgments and enhance the consistency of evaluation results? As shown in Table 1, removing either the evaluation criteria or the reference answer leads to a decrease in correlation with human judgments. For GPT-4o, the correlation drops from 0.666 to 0.591 and 0.638, respectively, while for LLaMA-3.1-70B-Instruct, it drops from 0.641 to 0.555 and 0.581. This indicates that, regardless of the evaluator LLM used, the evaluation criteria have a greater impact than the reference answer. Furthermore, the degradation in correlation is more pronounced for LLaMA-3.1 than for GPT-4o. When both the evaluation criteria and the reference answer are removed, the correlation with human judgments declines significantly and reaches its minimum.
Table 1: Experimental results for RQ1 report Krippendorff's alpha coefficients across five sampled scores, with values in parentheses indicating the correlation with human evaluation. Removing evaluation criteria (w/o crt) or reference answers (w/o ref) reduces human correlation. Eliminating both (w/o ref&crt) increases score fluctuation and significantly lowers human correlation.
Regarding evaluation consistency, in the BIGGEN-Bench dataset, removing the evaluation criteria or the reference answer does not substantially affect consistency. However, in EvalBiasBench, removing the evaluation criteria leads to a noticeable drop in consistency. This suggests that, in EvalBiasBench, the absence of explicit criteria for penalizing biased responses may result in inconsistent scoring—potentially depending on random factors. Therefore, clearly defining scoring criteria for biased responses is crucial to ensure consistent evaluation.
Figure 2: Additional experimental results for RQ1, showing the correlation coefficient and Krippendorff’s $\alpha$ when parts of the score descriptions are removed from the evaluation criteria. When only the descriptions for scores 1 and 5 are provided (Score 1 & 5), the results exhibit the highest correlation with human evaluations while maintaining high evaluation consistency. This suggests that the role of score descriptions for intermediate scores (2, 3, and 4) should be reconsidered.
Figure 2 illustrates additional experimental results, which examine correlations and score fluctuation when parts of the score descriptions are removed from the evaluation criteria. The figure shows little difference in both correlation with human judgments and score consistency between the setting where only the descriptions for scores 1 and 5 are provided and the setting where descriptions for all scores (1, 2, 3, 4, and 5) are given. These results suggest that the descriptions for intermediate scores (2, 3, and 4) have limited impact on alignment with human judgments, and their role should be reconsidered. It is also surprising that evaluation consistency remains generally high across all settings, indicating that even without detailed score descriptions, evaluations tend to remain consistent as long as general evaluation axes are provided.
# RQ2. What are the advantages and disadvantages of deterministic versus non-deterministic decoding strategies?
We compare the correlation of scores with human judges between non-deterministic decoding and deterministic decoding on BIGGEN-Bench. For non-deterministic decoding, we sample five scores and aggregate them using majority voting (Majority), taking the median (Median), and averaging scores (Average). For deterministic decoding, we adopt greedy decoding (Greedy).
Table 2: Experimental results for RQ2. Nondeterministic scoring methods (Majority, Median, Mean) show larger correlations with human judges compared to deterministic decoding (Greedy). Among the non-deterministic methods, score averaging (Mean) shows the largest correlations with human judges consistently across different evaluator LLMs, reasoning types, and evaluation design.
Table 2 shows the results. Non-deterministic scoring methods consistently show larger correlations with human judges compared to deterministic decoding. This finding is consistent with the fact that, in general inference tasks, multiple sampling and aggregation of results outperforms greedy decoding (Wang et al., 2023). More interestingly, among non-deterministic decoding methods, averaging scores shows the highest correlation with humans regardless of the evaluator LLM, evaluation design, or presence of CoT. This can be attributed to the fact that averaging allows for expressing fine-grained nuances, such as 4.5 when an evaluator LLM is torn between scores of 4 and 5, whereas median or majority voting methods round the score to either 4 or 5, thus failing to fully leverage the LLM's capabilities as an evaluator.
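The three aggregation rules can be sketched as follows (our own illustration; the five sampled scores are invented, and the tied example shows the finer granularity that only averaging preserves):

```python
from collections import Counter
from statistics import median, mean


def aggregate(scores):
    # Aggregate sampled scores by majority vote, median, and mean.
    majority = Counter(scores).most_common(1)[0][0]
    return {"majority": majority, "median": median(scores), "mean": mean(scores)}


print(aggregate([4, 5, 4, 5, 5]))  # → {'majority': 5, 'median': 5, 'mean': 4.6}
# When torn evenly between 4 and 5, only the mean/median express 4.5:
print(aggregate([4, 5, 4, 5])["mean"])  # → 4.5
```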
RQ3. Does CoT improve alignment with human judgments and the consistency of evaluation results? To investigate the impact of CoT in LLM-as-a-Judge, we used GPT-4o to examine the correlation with human judges and the consistency of scores in two settings: one where a score was output after a Chain-of-Thought (w/ CoT), and one where only the score was output directly without any reasoning (Direct). Table 3 shows the results. In the Default setting, where evaluation criteria and reference answers are provided, both methods show similar correlation and consistency. Thus, when well-defined score descriptions are available, including explicit CoT in evaluator responses has little effect. From both a cost and performance perspective, direct scoring combined with score averaging provides strong alignment with human evaluations while maintaining low computational cost.
Table 3: Experimental results for RQ3. When given evaluation criteria and a reference answer (Default), scoring with CoT reasoning (w/ CoT) achieves comparable alignment with human judgments and evaluation consistency to Direct scoring (Direct). | As large language models (LLMs) continue to advance, reliable evaluation
methods are essential particularly for open-ended, instruction-following tasks.
LLM-as-a-Judge enables automatic evaluation using LLMs as evaluators, but its
reliability remains uncertain. In this work, we analyze key factors affecting
its trustworthiness, focusing on alignment with human judgments and evaluation
consistency. Using BIGGEN-Bench and EvalBiasBench, we study the effects of
evaluation design, decoding strategies, and Chain-of-Thought (CoT) reasoning in
evaluation. Our results show that evaluation criteria are critical for
reliability, non-deterministic sampling improves alignment with human
preferences over deterministic evaluation, and CoT reasoning offers minimal
gains when clear evaluation criteria are present. | [
"cs.CL"
] |
# 1 Introduction
In industrial inspection and maintenance, accurate simulation of and interaction with complex environments is essential to ensure safety, efficiency, and precision. However, traditional 3D modeling methods face significant limitations in hazardous or confined spaces due to the impracticality of deploying multi-sensor devices and the inherent trade-off between accuracy and real-time performance in existing techniques. For instance, while laser scanning offers high precision, it is time-consuming; in contrast, monocular vision enables real-time performance but produces lower-quality results. To address these challenges, this project proposes a novel approach leveraging continuous monocular camera video streams to achieve high-fidelity 3D reconstruction. The goal is to overcome data acquisition limitations while balancing accuracy and real-time performance, ultimately generating digital models that support mechanical simulation and enable real-time interactive verification in mixed reality environments. This integrated pipeline enhances industrial capabilities by providing safer, more efficient, and more precise solutions for simulating complex environments.
Monocular 3D reconstruction is a core challenge in computer vision. Traditional approaches like Structure from Motion (SfM) rely on feature point matching and triangulation to achieve sparse scene reconstruction. However, SfM struggles in textureless or repetitive-texture areas, such as smooth walls or pipes. Moreover, it requires additional densification steps to generate usable models, increasing computational complexity. Recent advances in neural radiance fields (NeRF) have demonstrated remarkable capabilities in view synthesis, generating novel high-quality views from sparse images [4]. Nevertheless, NeRF's implicit representation (e.g., volumetric density fields) limits its direct application in physical simulations, as it cannot easily generate the explicit meshes required for mechanical analysis. In the realm of simulation, traditional Finite Element Analysis (FEA) provides high-precision static simulations but suffers from high computational costs, making it unsuitable for real-time applications. Conversely, game engine-based methods, such as Unity's physics engine, enable real-time interaction through simplified rigid body dynamics but lack the accuracy needed for complex deformations and material properties. While GPU-accelerated methods like NVIDIA PhysX [5] and deep learning-enhanced approaches (e.g., graph neural networks for deformation prediction) have improved the balance between accuracy and efficiency, a comprehensive solution integrating high-fidelity reconstruction with real-time, multi-scale simulation remains elusive.
This project addresses the limitations of existing methods by developing a solution that integrates advanced monocular 3D mesh reconstruction with efficient multi-scale simulation for digital twin generation and interaction. Specifically, our contributions include: (1) a high-fidelity reconstruction pipeline leveraging a signed distance field for initial scene representation and view synthesis, followed by extracting explicit mesh models and optimizing irregular triangular meshes into structured quadrilateral meshes suitable for finite element analysis, enabling high-accuracy 3D models while overcoming the need for multi-sensor devices; (2) high-precision finite element analysis (FEA) using Abaqus to simulate detailed mechanical behaviors such as material stress analysis and local deformation, with optimized quadrilateral meshes improving computational efficiency; and (3) mixed reality interaction utilizing the Vuforia engine to map the simulation results to mixed reality devices in real time, with Unity3D enabling AR display, scene marker recognition, visual overlay of simulation results, and user interaction support. By achieving these goals, our project provides a powerful toolset for industrial inspection and facility maintenance while advancing the practical application of digital twin technology in complex environments and offering new possibilities for mixed reality applications.
# 2 Method
# 2.1 Architecture Overview
Figure 1: System architecture overview.
The architectural design of the project aims to obtain a high-quality 3D mesh model of an object through a multi-stage processing pipeline and convert it into a solid model suitable for FEA and mixed reality applications. The overall architecture consists of three main parts: 3D reconstruction, finite element analysis, and real-time visualization. Specifically, Neuralangelo is used to extract a fine mesh model of the object from the surround-shot video. The mesh is then repartitioned and optimized in Rhino [2] to generate high-quality quadrilateral meshes suitable for FEM analysis. The optimized mesh is imported into HyperMesh [6] for further discretization, and the finite element model is established and simulated in Abaqus [7]. Finally, the contour maps of the simulation results are visualized in real time on the Unity platform [8] to support more intuitive analysis and decision making. Appropriate data exchange and integration techniques allow the modules to connect seamlessly.
# 2.2 3D reconstruction pipeline
In this project, the core task of the 3D reconstruction pipeline is to recover a high-quality 3D mesh model of an object from surround-shot video. To achieve this goal, we employ the Neuralangelo algorithm [4], which is based on deep learning techniques and is able to efficiently and accurately generate 3D geometry and texture information of objects from multiple viewpoints. The specific 3D reconstruction process includes the following key steps:
Figure 2: 3D reconstruction.
Video Data Acquisition and Pre-processing: The first step of the project is to collect multi-view video of the surrounding object. By shooting from different angles, multiple 2D images of the object are acquired, capturing its various surface details. The image data are preprocessed before reconstruction, including image denoising, camera alignment, and standardization, to ensure the quality and consistency of the input data. We use COLMAP [4] for preprocessing to calibrate camera extrinsic and intrinsic parameters and to extract sparse point cloud features.
Feature extraction and deep learning model training: Based on the preprocessed image data, the Neuralangelo algorithm locates the object through a signed distance field. The deep learning model automatically recognizes object surface features, normals, and texture information in the images. Unlike traditional 3D reconstruction methods, Neuralangelo does not rely on structured light or geometry matching from stereo vision. Instead, it learns the mapping between images and geometry across views by training a signed distance field function, and is able to preserve edge details.
Texture mapping and optimization: The Neuralangelo algorithm not only recovers the geometry of the object but also maps texture information onto the 3D mesh accurately. By extracting texture features from multi-view 2D images, Neuralangelo ensures that the generated 3D meshes are both accurate in shape and highly faithful in surface detail. This process reproduces the appearance of the object, including color, illumination, and texture details, making the final mesh model more realistic.
High-resolution mesh output: After processing by the deep learning model, the final 3D mesh preserves high resolution and contains rich surface details.
These meshes can not only accurately reflect the geometric characteristics of the object, but also can be used in subsequent applications such as mesh optimization and finite element analysis.
# 2.3 Mesh optimization and FEA simulation
Figure 3: Mesh optimization.
To ensure a format suitable for FEA, we optimize the reconstructed object meshes and convert them into quadrilateral meshes using Rhino; specifically, we use the QuadRemesh [2] command. QuadRemesh combines local optimization and global mesh reconstruction, drawing on geometry analysis, Laplacian smoothing, and mesh generation algorithms. First, QuadRemesh analyzes each triangular element in the original mesh to determine the best node locations and distribution for the quadrilateral conversion, so that the topology of the quadrilateral mesh better follows the curvature of the object surface. It also handles the special case of object boundaries, ensuring that the converted mesh correctly matches the geometric boundaries of the object and avoiding invalid or irregular elements. The QuadRemesh algorithm adapts the size and shape of mesh cells to the curvature changes on the object surface: in regions of high curvature the mesh is finer, while in flat regions it is sparser, which helps maintain the geometric adaptability and quality of the mesh. Laplacian smoothing is applied during QuadRemesh generation to reduce sharp corners and distorted or irregular mesh elements; through several smoothing and subdivision passes, the mesh is gradually adjusted so that the final quadrilateral mesh is more regular and uniform, with distortion minimized. Afterwards, the resulting mesh is converted into a voxel representation using HyperMesh [6], which balances accuracy and computational efficiency. The voxel representation is then imported into Abaqus [7], where the corresponding material properties and constraints are assigned to simulate the deformation and stress distribution of the object under different loads.
The simulation results will provide a basis for the subsequent performance evaluation of the object.
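The Laplacian smoothing step mentioned above can be illustrated with a minimal sketch (our own toy example, not Rhino's implementation): each free vertex is moved toward the centroid of its neighbors, with boundary vertices held fixed.

```python
def laplacian_smooth(vertices, neighbors, fixed, iterations=1, lam=1.0):
    """Move each free vertex toward the centroid of its neighbors.

    vertices: {id: (x, y)}, neighbors: {id: set of ids}, fixed: set of pinned ids.
    lam is the smoothing step size (1.0 moves all the way to the centroid).
    """
    v = dict(vertices)
    for _ in range(iterations):
        updated = {}
        for i, (x, y) in v.items():
            if i in fixed or not neighbors[i]:
                updated[i] = (x, y)
                continue
            cx = sum(v[j][0] for j in neighbors[i]) / len(neighbors[i])
            cy = sum(v[j][1] for j in neighbors[i]) / len(neighbors[i])
            updated[i] = (x + lam * (cx - x), y + lam * (cy - y))
        v = updated
    return v


# A spike at vertex 1 is flattened toward its two fixed neighbors.
verts = {0: (0.0, 0.0), 1: (1.0, 1.0), 2: (2.0, 0.0)}
nbrs = {0: {1}, 1: {0, 2}, 2: {1}}
print(laplacian_smooth(verts, nbrs, fixed={0, 2})[1])  # → (1.0, 0.0)
```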
# 2.4 Mixed reality application
Figure 4: Mixed reality application.
After the Abaqus simulation, the optimized high-quality mesh model is integrated into an augmented reality (AR) environment that enables visualization and interaction using Unity. First, object recognition is performed through the Vuforia engine [3], which uses computer vision techniques to detect and track predefined landmarks, such as QR codes, in the real world, enabling accurate localization and pose estimation. Then, the optimized 3D mesh is projected onto the landmark within the augmented reality environment in real time to ensure an immersive visualization experience.
# 3 Experiment
# 3.1 Evaluation
To validate our envisioned architecture, we conducted reconstruction evaluation, simulation evaluation, and user experience evaluation, respectively.
# 3.1.1 3D Reconstruction
Figure 5: The meshes obtained from LiDAR and Neuralangelo.
In this study, we employed the built-in LiDAR sensor of the iPhone 14 Pro Max as a benchmark to conduct both qualitative and quantitative comparisons of 3D reconstruction performance. Although LiDAR systems are traditionally regarded as having high geometric accuracy, they still suffer from limited spatial resolution when applied to small-scale object reconstruction. This limitation manifests as a loss of fine details and blurred mesh boundaries. In contrast, our approach, which combines a monocular camera with a neural network-based method (Neuralangelo), achieves high-quality 3D reconstruction, particularly excelling in surface detail preservation.
For the qualitative analysis, visual comparisons of the reconstructed models indicate a high degree of structural similarity between the two methods. However, the LiDAR-generated model exhibits certain limitations in capturing fine geometric features, especially along object edges and small-scale textures, due to its inherent resolution constraints. Our method, on the other hand, visually preserves these geometric details more comprehensively.
To further assess geometric fidelity, Chamfer Distance was employed as a quantitative metric. The results demonstrate that our neural network-based reconstruction achieves comparable or even superior geometric consistency relative to the LiDAR-based model. These findings suggest that, despite the significantly lower hardware cost and simpler acquisition process of monocular camera systems, when combined with deep learning techniques, they can deliver reconstruction results that rival or exceed those of LiDAR systems, offering a compelling balance between performance and cost-effectiveness.
Chamfer Distance: To quantitatively assess the geometric similarity between the reconstructed 3D model and the reference LiDAR-scanned model, we employed the Chamfer Distance (CD) metric. Chamfer Distance is a widely used evaluation method in 3D vision tasks for comparing point clouds, as it effectively captures both global shape alignment and local surface fidelity. Given two point clouds:
$$
P = \{ p_i \}_{i=1}^{N}, \quad Q = \{ q_j \}_{j=1}^{M}
$$
representing sampled points from the reconstructed model and the LiDAR-based reference model respectively, the symmetric Chamfer Distance is defined as:
$$
\mathrm{CD}(P, Q) = \frac{1}{|P|} \sum_{p \in P} \min_{q \in Q} \| p - q \|_2^2 + \frac{1}{|Q|} \sum_{q \in Q} \min_{p \in P} \| q - p \|_2^2
$$
This formulation ensures bidirectional comparison, penalizing discrepancies in both directions between the two surfaces. In our experiment, to ensure statistical representativeness and consistency, we uniformly sampled 100,000 surface points from each 3D mesh. Before computing distances, both point clouds were normalized to the unit sphere to eliminate the influence of scale and translation differences. Using a k-d tree for efficient nearest-neighbor search, the Chamfer Distance between the reconstructed model and the LiDAR model was 0.0561. This value represents moderate to high geometric similarity, indicating that the reconstructed mesh retains the overall shape and structure of the real object captured by LiDAR. Although there are minor local differences (possibly due to surface noise, occlusion, or untextured areas in the video input), the reconstruction shows satisfactory fidelity for downstream tasks such as simulation and visualization. In conclusion, the Chamfer Distance analysis verifies the effectiveness of the proposed monocular reconstruction pipeline. Although it relies solely on RGB video without depth sensing, the method achieves an accuracy level comparable to that of active scanning technology, demonstrating its practical feasibility where LiDAR is unavailable or impractical.
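A minimal sketch of this computation follows (brute-force nearest neighbors instead of the k-d tree used in the paper, and tiny invented point clouds instead of 100,000 samples):

```python
def normalize(points):
    # Center at the centroid and scale so the farthest point lies on the unit sphere.
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in points]
    r = max((x * x + y * y + z * z) ** 0.5 for x, y, z in centered) or 1.0
    return [(x / r, y / r, z / r) for x, y, z in centered]


def chamfer(P, Q):
    # Symmetric Chamfer Distance with squared Euclidean nearest-neighbor terms.
    def one_way(A, B):
        return sum(
            min((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2 for bx, by, bz in B)
            for ax, ay, az in A
        ) / len(A)
    return one_way(P, Q) + one_way(Q, P)


P = normalize([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
Q = normalize([(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)])  # same shape, doubled scale
print(chamfer(P, Q))  # → 0.0, since normalization removes the scale difference
```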
# 3.1.2 FEA simulation
In finite element analysis (FEA), mesh quality directly affects computational accuracy and stability. Therefore, a detailed mesh quality assessment was conducted before the simulation to ensure compliance with analysis requirements. In this study, the mesh consists of both hexahedral and tetrahedral elements. The specific statistics are as follows:
Table 1: Mesh Quality Metrics
Mesh Quality Analysis: Hexahedral Mesh: The overall quality of the hexahedral mesh is acceptable. The maximum angle does not exceed $160^{\circ}$, and the aspect ratio is well controlled. However, $2.80\%$ of the elements have a minimum angle below $10^{\circ}$, with the worst case being $4.68^{\circ}$, which may affect local computational accuracy. Therefore, mesh refinement in critical regions is recommended, such as adjusting seed size or employing an improved Sweep meshing technique.
Tetrahedral Mesh: For the tetrahedral mesh, all elements have a minimum angle greater than $5^{\circ}$, but the worst-case minimum angle is only $7.68^{\circ}$, and the worst shape factor is 0.004, which could lead to element distortion. Although no elements exceed an aspect ratio of 10, further mesh refinement is suggested in high-curvature regions to enhance computational stability. During the Abaqus Data Check process, there were no computational errors. Based on the mesh quality analysis, the current mesh is suitable for most static simulations.
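As an illustration of the kind of per-element checks behind these metrics (a simplified sketch of our own, not HyperMesh's definitions), the minimum corner angle and an edge-length aspect ratio of a planar quadrilateral element can be computed as:

```python
import math


def quad_quality(corners):
    """Minimum interior angle (degrees) and max/min edge-length aspect ratio
    of a planar quadrilateral given as four (x, y) corners in order."""
    n = 4
    angles, edges = [], []
    for i in range(n):
        p0, p1, p2 = corners[i - 1], corners[i], corners[(i + 1) % n]
        u = (p0[0] - p1[0], p0[1] - p1[1])  # edge toward previous corner
        v = (p2[0] - p1[0], p2[1] - p1[1])  # edge toward next corner
        dot = u[0] * v[0] + u[1] * v[1]
        angles.append(math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v)))))
        edges.append(math.hypot(*v))  # collects all four edge lengths over the loop
    return min(angles), max(edges) / min(edges)


min_angle, aspect = quad_quality([(0, 0), (1, 0), (1, 1), (0, 1)])  # unit square
print(min_angle, aspect)
```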
Figure 6: Stress component contour plot, strain component contour plot, and reaction force contour plot.
Simulation Parameters: In this finite element analysis, the material used for the simulation is spruce wood, which has a density of $4.5 \times 10^{-7}~\mathrm{kg/mm^3}$, a Young's modulus of $10{,}000~\mathrm{MPa}$, and a Poisson's ratio of 0.3. These material properties were selected to accurately represent the mechanical behavior of the wooden structure under loading conditions.
For the loading conditions, gravity was first applied to simulate the realistic weight distribution of the stool in its natural state. Subsequently, a 500 N uniformly distributed load was imposed on the top surface of the stool to replicate the typical force exerted during usage.
Regarding boundary conditions, hinged constraints were applied at the bottom of all four legs to prevent translational movements while allowing rotational degrees of freedom. This setup ensures a realistic constraint condition, mimicking the interaction between the stool and the ground surface.
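To make the parameter choices concrete, a back-of-the-envelope check (not from the paper; the leg cross-section is an assumed value) relates the applied load to axial stress and elastic strain via Hooke's law:

```python
# Assumed setup: 500 N load shared evenly by four legs; the 30 mm x 30 mm leg
# cross-section is a hypothetical value, not taken from the paper.
E = 10_000.0            # Young's modulus of spruce, MPa (from the simulation setup)
load_n = 500.0          # applied load, N
legs = 4
area_mm2 = 30.0 * 30.0  # assumed leg cross-section, mm^2

stress_mpa = (load_n / legs) / area_mm2  # axial stress per leg, MPa (N/mm^2)
strain = stress_mpa / E                  # elastic strain from Hooke's law
print(stress_mpa, strain)
```

Even under these rough assumptions the stress is far below typical wood strength, consistent with the simulation finding that the maximum stress stays within allowable limits.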
After ensuring the mesh quality meets the requirements and defining the simulation parameters, a finite element simulation was conducted to analyze the structural performance. The results include stress distribution, strain distribution, and reaction force contours, which provide insights into the mechanical behavior under the given load conditions.
Stress Distribution Analysis: The stress component cloud map presents the distribution of stress across the structure. The stress concentration regions are mainly observed at the joint connections and support areas, where the material undergoes higher loading. The maximum stress value is within the allowable limits of the material, indicating that the structure can withstand the applied forces without failure. However, the regions with higher stress values may require further design optimization, such as reinforcing critical areas or adjusting the load distribution to enhance durability.
Strain Distribution Analysis: The strain component cloud map shows the strain response of the structure under loading. The strain is distributed relatively evenly, except for certain localized regions where higher strain values appear. This suggests that the deformation mainly occurs in specific areas, which could be attributed to geometric features or material properties. The overall strain values remain within the elastic deformation range, ensuring the structure’s integrity under normal operating conditions.
Reaction Force Contour Analysis: The reaction force contour plot highlights the forces exerted at the constraints or support regions. The reaction forces are distributed symmetrically, confirming that the applied boundary conditions and loading setup are well-defined. The magnitude of reaction forces is consistent with theoretical expectations, further validating the correctness of the simulation setup. If excessive reaction forces are detected in certain regions, design modifications such as adjusting boundary constraints or redistributing loads may be necessary to reduce excessive localized forces.
# 3.1.3 Mixed reality application
After completing the mesh quality analysis and the analysis of the simulation results, to further improve the visualization and interactive experience of the finite element simulation data, we use Unity to recognize the modeled object in real time and overlay the results through AR.
The workflow of the AR system consists of the following steps:
Figure 7: Mixed Reality Application Workflow.
# 1. Target Recognition:
The camera captures the real-world environment and continuously processes image frames.
The frames are converted into a suitable pixel format for tracking.
A predefined target database containing the object’s visual features is used to recognize the model.
# 2. Object Tracking:
The Vuforia Tracker Module identifies and tracks the detected object in real-time.
Features such as image targets, multi-image targets, and virtual buttons can be utilized for precise interaction and tracking.
# 3. Rendering Augmented Content:
Once the object is detected, Unity renders the reconstructed mesh model in the AR environment.
The model is displayed alongside the real object, allowing users to visualize the simulation results directly.
# 4. User Interaction and Application Logic:
The application updates its logic based on the detected object’s state.
Users can interact with the virtual model and analyze the finite element simulation results in an immersive manner.
The qualitative result is shown in Figure 8.
Figure 8: Virtual Reality Visualization.
# 4 Discussion
The proposed method offers a novel and effective solution to several long-standing challenges in industrial inspection and digital twin construction, especially in environments where traditional 3D modeling techniques struggle. As mentioned in the introduction, existing methods usually face a trade-off between accuracy and real-time performance, especially in confined or hazardous spaces where deploying multiple sensors is impractical. This study addresses these limitations by introducing an end-to-end framework that combines monocular-video-based 3D reconstruction, mesh optimization, finite element simulation, and mixed reality visualization while maintaining a balance between high fidelity and efficiency.
Using Neuralangelo, a deep learning-based reconstruction algorithm, detailed surface models can be built from simple monocular input. Unlike traditional Structure from Motion (SfM), its signed distance field representation and neural rendering pipeline maintain geometric accuracy and retain texture under challenging conditions. In addition, the pipeline transitions from the original sparse point cloud to a clean, high-quality quadrilateral mesh, facilitating the use of advanced finite element analysis (FEA) tools such as Abaqus. The irregular triangulation typical of deep learning-based reconstruction was effectively resolved through the intermediate mesh optimization step of Rhino's QuadRemesh, transforming it into a simulable structure. This streamlined process ultimately enables accurate stress and strain analysis in a simulated mechanical environment.
Despite these advancements, some limitations remain. During the reconstruction stage, areas with reflections, darkness, or untextured surfaces still pose challenges to precise geometric recovery. These failures can be attributed to suboptimal video quality or restricted viewing angles, which can introduce artifacts or missing geometry. Future iterations could address this by adopting hybrid data acquisition strategies, such as integrating depth information from an RGB-D camera, to enhance robustness in difficult scenarios.
Another limitation lies in the current implementation of mixed reality (MR) interaction. The system relies on Vuforia’s image-based tracking, which is notably sensitive to variations in ambient lighting and subject to performance degradation under partial occlusion. These factors can compromise the accuracy of object tracking and alignment within real-world environments, reducing the reliability of the augmented overlay. Additionally, the current approach depends on predefined image markers for object recognition, which constrains the flexibility and scalability of deployment. Future improvements should focus on integrating markerless tracking techniques—such as feature-based spatial tracking—to enable more robust, flexible, and immersive MR experiences without the need for physical markers.
Furthermore, although finite element simulation achieves very high accuracy, it comes at the cost of long computation times, especially for complex or high-resolution models. Mesh optimization partially alleviates this, but further gains could come from integrating machine learning-based surrogate models. For instance, graph neural networks (GNNs) trained to predict deformation patterns under various loads can significantly accelerate simulation while maintaining acceptable accuracy, thereby supporting near real-time feedback in interactive scenarios. | To address the challenges of 3D modeling and structural simulation in
industrial environment, such as the difficulty of equipment deployment, and the
difficulty of balancing accuracy and real-time performance, this paper proposes
an integrated workflow, which integrates high-fidelity 3D reconstruction based
on monocular video, finite element simulation analysis, and mixed reality
visual display, aiming to build an interactive digital twin system for
industrial inspection, equipment maintenance and other scenes. Firstly, the
Neuralangelo algorithm based on deep learning is used to reconstruct the 3D
mesh model with rich details from the surround-shot video. Then, the QuadRemesh
tool of Rhino is used to optimize the initial triangular mesh and generate a
structured mesh suitable for finite element analysis. The optimized mesh is
further discretized by HyperMesh, and the material parameter setting and stress
simulation are carried out in Abaqus to obtain high-precision stress and
deformation results. Finally, combined with Unity and Vuforia engine, the
real-time superposition and interactive operation of simulation results in the
augmented reality environment are realized, which improves users 'intuitive
understanding of structural response. Experiments show that the method has good
simulation efficiency and visualization effect while maintaining high geometric
accuracy. It provides a practical solution for digital modeling, mechanical
analysis and interactive display in complex industrial scenes, and lays a
foundation for the deep integration of digital twin and mixed reality
technology in industrial applications. | [
"cs.CV"
] |
# 1 Introduction
Large language models (LLMs) are expected to perform well on many different tasks. Therefore, training data is a heterogeneous mix, where instances can vary greatly in terms of format, contents, tasks, and languages, e.g. code generation [Lozhkov et al., 2024; Manh et al., 2023; Kocetkov et al., 2022; Zhong et al., 2017] vs. MCQA [Singh et al., 2024; Pal et al., 2022]. At inference time, data points are not equally relevant, but it is often prohibitively expensive to go back and change the training distribution for each individual inference request. Hence, there is a mismatch between the training and inference distributions: the training-time distribution is often determined by ease of access to prior data collections and prior data augmentation efforts, while at inference time, new use cases might be underrepresented in the data but highly relevant to the user.
Figure 1: Tapping into Distributions: (above) illustrates the representation of various length buckets in the training distribution. (below) demonstrates the flexibility of the marker intervention on the m-Arena Hard test distribution. By modifying the <length_bucket>..</length_bucket> marker, the model can effectively tap into diverse training distributions, even for underrepresented length buckets.
To overcome this mismatch, techniques have been proposed to improve the conditioning of the output generation at inference. These involve prompt engineering [Wu & Hu, 2023; Yu et al., 2023; Wenjuan et al., 2024], multi-shot examples [Brown et al., 2020; Lin et al., 2022; Winata et al., 2022; Logan IV et al., 2022], chain-of-thought [Wei et al., 2022; Wang et al., 2023; Ranaldi & Freitas, 2024] or decoding strategies [Shi et al., 2024; Snell et al., 2025]. However, these approaches place an enormous burden on practitioners and developers to anticipate what strategies deliver the best performance. Furthermore, the effectiveness is dependent on the exact configuration for a particular model, e.g. the order of multi-shot examples plays a role [Lu et al., 2022], and the wording of the prompt [Anagnostidis & Bulian, 2024]. In this work, we ask Can we optimize our training protocols to both improve controllability by the user and improve performance on rare use cases at inference time?
Our approach amounts to building a treasure map of hyper-detailed task-specific markers, to allow for real-time automatic targeting of long-tail features during inference. We note that some of the earliest generative models have used tags to improve performance. However, these often targeted a single feature at a time or were applied uniformly to an entire dataset. These early tags fell out of favor over the last few years, with the focus turning to prompt engineering for users to guide the generation themselves. However, there have been a few wider ecosystem changes which prompt (no pun intended) revisiting the paradigm of adding markers to training, and also motivate this work: 1) LLMs are now used by a far wider group of everyday users who can't be expected to be familiar with the complexities of prompt engineering, 2) Many models are now served using an API which means training markers can be added automatically behind the API call (not visible to users), and hence can be far more complex and varied to guide and improve generations.
Our work is motivated by these two trends. We take a far wider view of training markers and explore a setting where a single data point can have up to 90 complex characteristics. We describe these as Treasure Markers, introduced at training time to provide a map to guide towards higher performance at inference time. We motivate that this approach is particularly beneficial for long-tail modeling. Our goal is that the treasure map approach is robust at test-time, so we also aggressively experiment with marker dropout during training. This is akin to asking the model to still find the treasure even with missing clues.
In this work, our primary contributions are as follows:
1. Introducing a more general framework for controllability: We show that explicitly targeting controllability during training leads to pronounced gains at inference time, with little burden placed on the user. Training markers lead to significant downstream gains, including a 5.7% increase in win rates on open-ended generations on ArenaHard [Li et al., 2024] across the entire distribution of tasks relative to a model with no tags. Our training marker framework offers remarkable flexibility, allowing for control over both aspects of form (output format, length) and semantic qualities (quality, tone, style) while also being completely optional at inference, because the markers can be inferred accurately.
2. Long-tail lifts: Training with explicit markers is an effective method for leveraging long-tail features at inference time, unlocking high performance even for distributions that are underrepresented in the training data. While our framework enables a relative improvement of $7.9\%$ on Code tasks over the baseline, we observe relative lifts of up to $14.1\%$ on tasks like CodeRepair, which are highly underrepresented in the training data.
3. Modeling underlying relationships: We demonstrate that our approach effectively models underlying relationships in the data, as evidenced by a drastic reduction in length violations (from $36.58\%$ to $0.75\%$) in length-constrained instruction following, despite never seeing a training sample with a prompt instruction designating a length constraint. While significantly reducing length violations, the training markers also boost generation quality with a $7.5\%$ relative gain (from $14.36\%$ to $21.85\%$) in win rates.
# 2 Methodology
# 2.1 Overview of Training Time Markers
We condition the output sequence $y$ given an instruction $x$ with added training markers $m$:
$$
p(y \mid x, m) = \prod_{i=1}^{n} p(y_i \mid x, m, y_{<i}).
$$
Figure 2: Modeling data features flexibly with training time markers: Results on length instruction following on the AlpacaEval-Length-Instructed (LI) dataset. While (a) the baseline violates the length constraint on $36.58\%$ of samples, (b) using the TreasureMarked model and allowing it to infer tags on the exact same dataset reduces the violation rate to $24.7\%$. (c) Conveying the requirement via an explicitly inserted length marker in the prompts, the TreasureMarked model violates the instruction on only $1.25\%$ of the dataset.
Table 1: An example list of training time markers formatted in a standardized template.
These markers encompass several different attributes of the data, including estimated quality scores, domains, and languages (3.1), which we store as a list of markers associated with a given data point (see Table 1 for an example).
This template is treated as natural language and encoded with the same tokenizer as the text. We include the markers in both input (appended to the prompt) and output space (prepended to the completion), to induce the model to associate the properties of the generations with these characteristics. This reduces the burden on the practitioner or researcher at inference time, as the model learns to infer the correct markers.
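As a concrete illustration of this marker placement, the sketch below assembles a training pair with the rendered marker template appended to the prompt and prepended to the completion. The `<key>value</key>` template string and helper names are assumptions for illustration, not the paper's exact format.

```python
def render_markers(markers: dict) -> str:
    """Render a marker dict as a standardized natural-language template."""
    return " ".join(f"<{k}>{v}</{k}>" for k, v in sorted(markers.items()))

def build_example(prompt: str, completion: str, markers: dict) -> tuple[str, str]:
    """Markers appear on both sides: appended to the input prompt and
    prepended to the output, so the model learns to associate generation
    properties with marker values (and to emit markers itself)."""
    tag_str = render_markers(markers)
    model_input = f"{prompt}\n{tag_str}"
    model_output = f"{tag_str}\n{completion}"
    return model_input, model_output

inp, out = build_example(
    "Summarize the article below.",
    "The article argues that ...",
    {"task": "summarization", "lang": "English", "length_bucket": "concise"},
)
```

Because the same template is encoded by the text tokenizer, no vocabulary changes are required.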
The finetuning objective thus becomes to minimize the negative log likelihood of the target generations including the template, given a prompt with an optional input template:
$$
-\frac{1}{|\mathscr{D}|} \sum_{d=1}^{|\mathscr{D}|} \log p_{\theta}(y_d, m_d \mid \mathrm{dropout}(m_d), x_d)
$$
This approach ensures that the model learns to faithfully generate and adhere to the training markers when provided on the prompt side.
Training marker dropout. To prevent the model from becoming overly reliant on markers for completion, or from learning to trivially replicate the markers, we employ dual dropout strategies (dataset-level, sample-level) on the prompt space. In dataset-level dropout, we completely remove the training markers from the prompt for a random selection of examples (defined as a percentage of the dataset). In sample-level dropout, we remove a random subset of training markers from each example (defined as a percentage of all markers associated with a given example). To ensure the model consistently produces markers at inference time, we do not apply dropout on the generation side.
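A minimal sketch of this dual dropout, assuming markers are stored as a per-example dict; the data layout and function names are illustrative, not the authors' implementation. Note that the completion-side markers are never dropped.

```python
import random

def apply_marker_dropout(dataset, dataset_rate=0.5, sample_rate=0.5, rng=None):
    """Dual dropout on the *prompt-side* markers only.
    dataset-level: drop ALL markers for a random fraction of examples;
    sample-level: drop a random subset of markers from remaining examples.
    Generation-side markers are always kept so the model learns to emit them."""
    rng = rng or random.Random(0)
    out = []
    for example in dataset:
        markers = dict(example["markers"])
        if rng.random() < dataset_rate:
            prompt_markers = {}  # dataset-level dropout: remove everything
        else:
            prompt_markers = {k: v for k, v in markers.items()
                              if rng.random() >= sample_rate}  # sample-level
        out.append({**example,
                    "prompt_markers": prompt_markers,
                    "completion_markers": markers})  # never dropped
    return out
```

With both rates at 50%, roughly half of all examples carry no prompt-side markers at all, forcing the model to infer them.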
Figure 3: Long-tail domains benefit more from training markers: (left) Domain distribution in the training data used to fine-tune the model. (right) Improvements in win rates over the baseline against Gemma2-9B on both majority and minority subsets of the Arena-Hard-Auto dataset [Li et al., 2024]. We group the test data into two sets of domains that have high ($>5\%$) and low ($<5\%$) presence in the training data.
# 3 Taxonomy for training time markers
# 3.1 Taxonomy of Training Markers
We develop a comprehensive taxonomy around distinct groups of desired characteristics to capture key attributes of the training data, such as data quality, style, format, domain, and task. Table 2 contains the taxonomy with definitions and the set of valid marker values. We chose this selection of markers with inference-time use cases in mind: properties like quality, tone, style, and completion length are very desirable to control at inference time. We also focus on long-tail attributes with the goal of specifically targeting performance on underspecified parts of the distribution. To that end, we add hyper-detailed markers for task, domain, and code type, which tend to have highly skewed frequencies, with some instances occurring far more frequently than others.
To assign markers to samples in the training dataset, we utilize dataset-related information whenever possible and use an LLM to tag missing meta-information. Specifically, we use the multilingual open-weights model Command R+ to tag the <domain>, <task>, and <format> markers whenever they are unavailable from the dataset. To improve tagging performance, we use detailed definitions paired with few-shot examples to provide context for markers during annotation. Since we add markers across 23 languages, we use in-language few-shot examples in each language.
Our extensive set of 90 unique markers falls into categories such as Length, Style, Format, Quality, Source, Domain, and Task. We include an extensive description of all markers in Section 3. We describe the most frequently referenced categories below:
• Length: markers that allow for control of completion length, at levels of granularity ranging from <length_tokens> and <length_sentences> to broader categories such as concise, medium, and long.
• Language: <lang> describes the language the completion is written in (i.e. Arabic, Japanese), enabling the model to improve language-specific generations and reduce language switching during inference. <code_type> is specifically used to identify programming languages for coding-related tasks (i.e. python, c++).
• Quality: <quality> provides a measurable score indicating the quality of a sample, often derived from human annotations or a Reward Model (RM). We also create a categorical marker <quality_bucket> by binning into quartiles $\{1, 2, 3, 4\}$ within language-specific subsets, offering a broader description of quality.
Table 2: Comprehensive taxonomy for training time markers: Our taxonomy contains 13 categories shown with their descriptions and values.
• Domain: overarching category of the knowledge required to answer a given prompt (i.e. Sciences, Technology, Medical). We annotate domain markers either using LLM tagging or derive from the source of the dataset for domains like Math and Code.
• Task: <task> helps capture more fine-grained differences in task characteristics within a domain (i.e. summarization, reasoning, openended, explanation). Similar to the domain marker, we use LLM tagging or the data source information for obtaining task markers.
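The <quality_bucket> assignment described in the taxonomy, quartile binning of RM scores within each language subset, might be sketched as follows; the data layout and threshold handling are assumptions for illustration.

```python
from statistics import quantiles

def quality_buckets(samples):
    """samples: list of dicts with 'lang' and 'quality' (an RM score).
    Returns a bucket in {1, 2, 3, 4} per sample, computed per-language."""
    by_lang = {}
    for s in samples:
        by_lang.setdefault(s["lang"], []).append(s["quality"])
    # 25th/50th/75th percentile cut points per language subset
    cuts = {lang: quantiles(scores, n=4) for lang, scores in by_lang.items()}
    buckets = []
    for s in samples:
        q1, q2, q3 = cuts[s["lang"]]
        score = s["quality"]
        buckets.append(1 + (score > q1) + (score > q2) + (score > q3))
    return buckets
```

Computing the cut points per language keeps buckets comparable across languages whose RM score distributions differ.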
# 3.2 Experimental Set-up
Training with markers. We use a proprietary 7-billion-parameter base model, pretrained on a data mixture that consists of texts from 23 languages covering half the world's population. We train our base model on a training corpus of 2.7M examples drawn from our mixture of instruction-style data sources.
Training protocol. Training for each variant spanned 8,000 steps and employed a cosine learning rate schedule with a warm-up phase, using a batch size of 32 and an evaluation batch size of 64. We train for 2 epochs with a peak learning rate of $2.5\times10^{-4}$, reached through 10 warm-up steps starting from a learning rate of 0.0, which then decays to $1.25\times10^{-4}$. One fine-tuning run of 8,000 steps on 128 Nvidia H100 GPUs takes around 6 hours.
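Under the stated hyperparameters, the schedule could look like the sketch below, assuming a linear warm-up and a cosine decay from the peak to half the peak; the exact schedule shape is an interpretation of the description, not the authors' code.

```python
import math

PEAK_LR, FINAL_LR = 2.5e-4, 1.25e-4
WARMUP, TOTAL = 10, 8000

def lr_at(step: int) -> float:
    """Learning rate at a given step: linear warm-up from 0 to the peak
    over 10 steps, then cosine decay down to half the peak by step 8000."""
    if step < WARMUP:
        return PEAK_LR * step / WARMUP
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return FINAL_LR + (PEAK_LR - FINAL_LR) * cosine
```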
Languages covered by the training markers. Our experiments cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian and Vietnamese.
Inference settings. At inference time, we evaluate performance gains under two different settings. In the default setting, which we refer to as "TreasureMarked", we do not fix any of the markers at inference. This setting asks: Has the model learnt to infer the right markers without any intervention? In the second setting which we refer to as "TreasureMarked (fixed)", we explicitly hardcode some of the markers at inference. This asks: if we manually set the value of some markers, can we drive gains in performance? This is very reasonable for cases like quality, where we always want to steer model behavior towards higher quality generations.
Baseline. We compare both "TreasureMarked" and "TreasureMarked (fixed)" against a model trained on the same data, but without added markers that we refer to as Baseline. This allows for a clean comparison, and controls for the same amount of data seen in both variants.
Core experimental variants and ablations. In the next section, we evaluate a variety of ways a model trained with markers shines at inference time. We inspect three axes of control: (1) quality in Section 4.1.1, (2) length in Section 4.3, and (3) language in Section 4.5. Furthermore, we show how long-tail examples benefit from markers, even when only inferred at inference time (Section 4.1), specifically in coding tasks (Section 4.2) and for long generations (Section 4.3). We present key experimental ablations, including understanding the impact of dropout applied to markers on downstream performance at inference time (Section 5.3).
# 3.2.1 Evaluation
Open-ended generation quality. We evaluate the impact of markers on both the English Arena-Hard-Auto v0.1 [Li et al., 2024] and a translated version of this dataset, m-Arena Hard [Dang et al., 2024], used for multilingual evaluation. Arena-Hard-Auto is a challenging open-ended generation benchmark with prompts selected from user queries on Chatbot Arena. We measure Win Rate (%) against our Baseline model using GPT-4o.
Task-specific evaluations. In addition, we evaluate the models on benchmarks specific to tasks such as code (generation, repair, translation) and length conditioned instruction following to narrow in on long-tail effects and controllability levers. We introduce each of these evaluations within the respective results sections.
Length evaluations. Given that the original instructions in the AlpacaEval-LI dataset [Yuan et al., 2024] contain the exact constraint, our TreasureMarked and TreasureMarked (fixed) settings both contain an explicit reference to the constraint. For TreasureMarked, we present the original length-instructed prompt, allowing the model to deduce the associated tags. This approach evaluates the model's ability to extrapolate tags from instructions. In contrast, for TreasureMarked (fixed), since the original instruction contains the exact constraint, we investigate an additional control strategy where we provide the constraint in the marker template if the taxonomy directly supports it. We remove the length instruction and append the corresponding <length_tokens> tag with the appropriate value. Table 3 provides an example of an edited prompt. This strategy assesses the model's adherence to known templates and its ability to follow explicit length requirements that are provided only via the marker template.
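A hypothetical version of this prompt-editing step: strip the natural-language length constraint and append a <length_tokens> marker instead. The instruction pattern and the words-to-tokens conversion factor are illustrative assumptions, not AlpacaEval-LI's actual format.

```python
import re

# Hypothetical instruction pattern; real AlpacaEval-LI phrasing may differ.
LENGTH_RE = re.compile(r"\s*Answer in (\d+) words or fewer\.?", re.IGNORECASE)

def to_marker_prompt(instruction: str, tokens_per_word: float = 1.3) -> str:
    """Remove the natural-language length constraint and append an
    equivalent <length_tokens> marker instead."""
    m = LENGTH_RE.search(instruction)
    if m is None:
        return instruction  # no recognised constraint; leave untouched
    n_tokens = int(int(m.group(1)) * tokens_per_word)
    stripped = LENGTH_RE.sub("", instruction).strip()
    return f"{stripped}\n<length_tokens>{n_tokens}</length_tokens>"

edited = to_marker_prompt("Explain photosynthesis. Answer in 50 words or fewer.")
```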
# 4 Results
# 4.1 Impact of Treasure Markers on Open-Ended Generation
Open-ended performance gains. We measure Win Rates (%) of the Baseline and TreasureMarked models against Gemma2-9B [Team et al., 2024] as a common point of comparison, visualized in Figure 3. We first consider our TreasureMarked variant, where markers are only included in training but are inferred by the model itself during inference. Overall, we obtain an absolute increase of 5.7% in Win Rates, from $32.1\%$ to $37.8\%$, across all tasks. This is reassuring, because it shows that markers at training time of the TreasureMarked model can already make a positive change at inference time, even when only inferred by the model itself, and even if the respective markers are rarely seen during training (e.g., for underrepresented domains).
Performance on the long-tail. One of our core hypotheses is that treasure markers will be particularly helpful at preserving or unearthing gains on the long-tail. To validate this hypothesis, we evaluate performance post-training on domains represented with different frequencies in the training set. As seen in Figure 3, the SocialScience, Sciences, Finance, Medical, and Legal domains are particularly sparsely represented in the training data, each making up less than $5\%$ of the training data. In contrast, Code is best represented in the training dataset. With inferred treasure markers, while there is an improvement of $+5.7\%$ across the higher-represented domains, we observe an even more pronounced gain of $+9.1\%$ in the underrepresented domains.
Figure 5: Improvement on the long tail for Code tasks: (left) Frequency of coding <task>s in the training dataset. (right) Despite being poorly represented in the training data, CodeRepair achieves a $14.1\%$ relative improvement by leveraging targeted markers during inference, further improving on the performance of the TreasureMarked model with inferred markers.
# 4.1.1 Fixed Treasure Markers
We also explore adding explicit markers in TreasureMarked (fixed). Here, we specifically target quality and ask: Can we control the generation quality of the model as a latent feature, using training time markers? To test this, we measure generation quality on m-Arena Hard [Dang et al., 2024] across 23 languages, adding only markers related to quality. For each value [1, 2, 3, 4] of <quality_bucket>, we also include a <quality> score in conjunction with it. To obtain the <quality> score, we pick the 95th percentile, calculated language-wise, from the training samples in each respective bucket. For evaluation, we measure generation quality with the same Reward Model used to score the data during training to compute win rates against the Baseline model.
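The fixed <quality> value per bucket, the language-wise 95th percentile of training scores, could be computed as in this sketch; the grouping layout and linear-interpolation percentile are assumptions.

```python
def pct95(scores):
    """95th percentile via linear interpolation between order statistics."""
    xs = sorted(scores)
    pos = 0.95 * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

def fixed_quality_values(samples):
    """samples: dicts with 'lang', 'bucket', 'quality' (RM score).
    Returns {(lang, bucket): 95th-percentile score} to use as the fixed
    <quality> value alongside each <quality_bucket> at inference."""
    groups = {}
    for s in samples:
        groups.setdefault((s["lang"], s["bucket"]), []).append(s["quality"])
    return {key: pct95(scores) for key, scores in groups.items()}
```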
Figure 4 demonstrates the amount of control introduced by training time markers, with win rates under the RM going from $48.21\%$ to $56.5\%$ just by changing <quality> and <quality_bucket> at inference. These results showcase the potential of our framework, where markers representing a desired quality metric used during training yield control levers for generations that tap into that quality metric at inference time.
# 4.2 Impact of Treasure Markers on Targeted Performance of Specific Sub-tasks
# 4.2.1 Code Performance
For code, we evaluate our model on three tasks from the HumanEvalPack dataset [Muennighoff et al., 2023] and measure pass@1 rates. We use CodeSynthesis, CodeRepair, and CodeTranslation, covering python, rust, java, javascript, go, and c++. These map to the following task markers in our taxonomy:
Figure 4: Levers for controlling quality: Changing the <quality> and <quality_bucket> markers at inference time provides control over generation quality, with Win Rates (as measured by an internal Reward Model) going from $48.21\%$ to $56.5\%$ over the Baseline model, demonstrating successful control over quality as annotated in the training data.
Table 3: Examples of length control strategies: (left) Original instruction from AlpacaEval-LI dataset; (right) Modified instruction with constraint in the marker list.
CodeGeneration, CodeFix, and CodeTranslation.
During training, code comprises $27.2\%$ of the overall training corpus. However, we specifically pick this domain because the distribution of coding subtasks differs significantly in frequency in the training corpus, as shown in Figure 5. CodeRepair and CodeTranslation are very rare coding subproblems, while CodeGeneration is heavily represented at $75.8\%$ within the coding data.
Long-tail gains. We observe the largest gains on the long-tail code tasks. As seen in Figure 5, whether we provide the markers (TreasureMarked (fixed)) or the model infers them, both rare coding problems (CodeTranslation and CodeRepair) show large lifts, with up to $6.5\%$ and $14.1\%$ relative gain over the baseline respectively. We note that these gains are far higher than those observed for the far more frequent task of CodeGeneration, which only shows lifts of up to $3.2\%$. This shows that our framework benefits all parts of the distribution, but has disproportionate success enabling large lifts on features that were highly infrequent during training.
# 4.3 Length Control at Inference Time
To assess the impact of length conditioning during inference, we benchmark on the AlpacaEval-LI dataset [Yuan et al., 2024], which evaluates how faithfully LLMs adhere to length constraints. We complement the length-violation measurements with Win Rates (%) by evaluating valid samples against the dataset-provided completions using GPT-4o. We establish our baseline using completions generated by the Baseline model. Following a similar approach to Yuan et al. [2024], we assess Violation (%) as the proportion of samples exceeding the specified length constraint (see Section 3.2.1 for details).
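Violation (%) as defined here is simply the fraction of samples whose completion exceeds its constraint; a minimal sketch, with whitespace tokenisation standing in for the real tokenizer:

```python
def violation_rate(completions, limits, count_tokens=lambda t: len(t.split())):
    """Percentage of samples whose completion exceeds its length limit.
    `count_tokens` defaults to whitespace splitting; a real evaluation
    would use the model's tokenizer."""
    assert len(completions) == len(limits)
    violations = sum(count_tokens(c) > n for c, n in zip(completions, limits))
    return 100.0 * violations / len(completions)
```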
Table 4: Length Instruction Following & generation quality on Alpaca-Eval LI.
Improvements to length control. In Table 4, we show improvements of up to $35.3\%$ in length violation rates. This pronounced improvement results in a mere $1.25\%$ remaining violations on this evaluation set (essentially close to saturating performance on this evaluation). Even when the treasure markers are not explicitly provided but inferred directly by the model, we observe up to an $11.8\%$ absolute decrease in violation rates. These improvements to instruction following are non-trivial, and also lead to overall win-rate gains of up to $6.86\%$, ensuring quality is not compromised as length constraints are enforced.
Table 5: X-CometXL scores [Colombo et al., 2023] on WMT24++ test sets [Deutsch et al., 2025]. Bold differences are significant at $p \leq 0.05$ according to a paired t-test and bootstrap resampling [Koehn, 2004] as implemented in comet-compare.
# 4.4 Machine Translation
To study the effects of the markers on machine translation, we benchmark on WMT24++ [Deutsch et al., 2025] and report translation performance from English into 22 languages (en→xx), based on the languages seen in pretraining. For evaluation we use XCOMET-XL [Colombo et al., 2023], a state-of-the-art machine translation evaluation metric [Freitag et al., 2024].
Table 5 shows the results with the relative improvement over the Baseline. Training the model with markers and using them at inference time significantly improves performance on 5 languages (es, id, it, pt, ro), with gains of up to 1.18 points, while retaining performance on all other languages. This constitutes a remarkable improvement, especially given that the training data is identical apart from the markers. According to the metric delta analysis in [Kocmi et al., 2024], improvements of this magnitude are very likely to be confirmed in human evaluations.
Table 6: Line-level pass rate on Complex Prompts from the Language Confusion Benchmark [Marchisio et al., 2024].
# 4.5 Language Control at Inference Time
As the final set of results, we focus on the effect of our training markers on ensuring a model responds in the language specified by the user. To evaluate this, we use the Language Confusion Benchmark [Marchisio et al., 2024] which measures the ability of a model to follow cross-lingual instructions such as “Respond in French...”, to request completions in another language. We measure performance on the Complex Prompts subset of the cross-lingual benchmark across 14 languages. Following [Marchisio et al., 2024], we measure Line-level Pass Rate (LPR) that only deems a response "correct" if all lines in the generation match the user’s desired language. During inference, we insert training markers present in the data into the prompt, but leave out the <lang> marker, since it is already present in the prompt.
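Line-level Pass Rate as described can be sketched as follows, with a placeholder language detector standing in for a real language-ID model:

```python
def line_pass_rate(responses, target_langs, detect_lang):
    """Line-level Pass Rate (LPR): a response counts as correct only if
    EVERY non-empty line is detected as the requested language.
    `detect_lang` is a stand-in for a real language-ID model."""
    correct = 0
    for text, lang in zip(responses, target_langs):
        lines = [l for l in text.splitlines() if l.strip()]
        if lines and all(detect_lang(l) == lang for l in lines):
            correct += 1
    return 100.0 * correct / len(responses)
```

The all-lines requirement makes LPR a strict metric: a single code-switched line fails the whole response.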
Table 6 shows results across 14 languages. Our model with training markers significantly improves language control performance in 13 out of 14 languages, with an absolute gain of $10.98\%$ on average across the 14 languages, showcasing a remarkable improvement in controllability at inference time. We observe the largest gains for Russian (+18.6%) and the smallest for Chinese (+5.5%).
Table 7: Examples of length control strategies: (left) Original instruction from the AlpacaEval-LI dataset; (right) Modified instruction obtained by appending markers predicted on-the-fly using Command A.
# 5 Key ablations and Discussion
# 5.1 Can markers be added on-the-fly at inference?
Our framework of training-time markers provides significant flexibility for explicit control over generations at inference time. While users can manually insert these markers, another LLM can also automatically annotate prompts with training markers on-the-fly before the generation step. In this section, to test the effectiveness of using another LLM to enrich an incoming prompt with the relevant markers at inference, we perform an ablation where we use Command A [Cohere et al., 2025] as an annotator. At inference time, we make a single additional call to Command A to annotate a prompt with all the relevant markers using few-shot examples and then append them to the prompt. We use the AlpacaEval-LI evaluation, as it is an excellent test bed for this setup due to the existence of a clearly defined requirement in the prompt. Table 7 provides an example of one such annotation. The few-shot prompt used to annotate markers on-the-fly is provided in Appendix C.
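The on-the-fly pipeline reduces to one extra annotation call before generation; below is a sketch with placeholder callables and a hypothetical few-shot prompt (no real Cohere API is used or implied):

```python
# Hypothetical few-shot tagging prompt; the real prompt is in Appendix C.
FEW_SHOT = (
    "Tag the prompt with training markers.\n"
    "Prompt: Write a haiku about rain.\n"
    "Markers: <task>creative_writing</task> <length_bucket>concise</length_bucket>\n"
)

def generate_with_markers(prompt: str, annotate, generate) -> str:
    """One extra LLM call (`annotate`) predicts markers for the incoming
    prompt; they are appended before the generation call (`generate`).
    Both callables are placeholders for real model endpoints."""
    markers = annotate(FEW_SHOT + f"Prompt: {prompt}\nMarkers:")
    return generate(f"{prompt}\n{markers.strip()}")
```

Because the annotation happens behind the API, end users never see the markers.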
Table 8 shows the results for this ablation. As in Section 4.3, we measure Violation (%) and Win Rates (%) for evaluation. Compared to using the TreasureMarked model with the original prompts, we observe a drastic reduction in violation rates, from $24.74\%$ to a mere $0.75\%$, with a $2.4\%$ relative improvement in Win Rates (from $19.48\%$ to $21.85\%$). Compared to the Baseline, TreasureMarked (on-the-fly) extends the gains and leads to a $35.8\%$
Table 8: On-the-fly control (Alpaca-Eval LI): Using Command A to annotate markers at inference time drastically reduces violation rates to $<1\%$ while improving Win Rates by $+2.3\%$.
reduction in length violation and a $7.5\%$ improvement in Win Rates. These results demonstrate the potential gains from using an additional inference-time call to annotate an incoming prompt with relevant markers using an external model.
# 5.2 How do markers interact?
We perform an additional ablation on the AlpacaEval-LI dataset from Section 4.3 to study the effect of adding more useful markers at inference time. In addition to the <length_tokens> marker that conveys the explicit length constraint, we annotate and add the <domain> marker, which we suspect carries implicit length biases (e.g. legal text might be longer than conversations) but should add helpful context to the prompt. With this we ask: if multiple markers are added at inference, do their effects add up or cancel out?
From Table 9, we observe that adding <domain> has a positive impact on generation quality, with a $+3.5\%$ jump in win rates, albeit at the cost of a slight increase in Violation Rate (%). This indicates that multidimensional relationships form between treasure markers during training and can be leveraged in conjunction to achieve desired characteristics at inference.
Table 9: Multidimensional control (Alpaca-Eval LI): Adding the <domain> marker improves generation quality and hence Win Rates by $+3.5\%$, working in conjunction with <length_tokens>, without hurting length control.
# 5.3 What is the impact of the dropout on the marker prediction?
To understand the impact of marker dropout (§2.1), we train three variants with dataset-level dropout of [0%, 50%, 70%] while sample-level dropout is fixed at 50%. Our goal with dropout is to teach the model to infer markers without needing explicit guidance at inference time. However, too much dropout may impede the model from learning key patterns between tags and output properties. To evaluate this, we calculate the accuracy of the
Table 10: Effect of dropout on marker prediction. Using no dataset-level dropout prevents the model from learning to predict the correct marker across categories, and hence hurts the flexibility of our framework.
markers inferred by the model against the underlying markers assigned to m-Arena Hard, averaged across all 23 languages [Dang et al., 2024].
In Table 10, we observe that the least extreme dataset-level dropout variant, 0_50, struggles to predict the correct marker at inference time. This is expected: at training time, $0\%$ dataset-level dropout implies all training prompts have markers associated with them, which makes the model overly dependent on the presence of markers at inference time. When markers are not provided at inference, accuracy is very low, at $3.42\%$. We note that at both $50\%$ and $70\%$ dataset-level dropout, we observe similar final abilities to infer the correct markers. Given this, unless specified otherwise, $50\%$ dataset-level dropout is the default used throughout our experiments, since it strikes the best balance between learning and generalization.
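The marker-prediction accuracy used in this ablation can be sketched as a per-category exact match between inferred and gold markers, assuming the <key>value</key> template format used in our illustrations:

```python
import re

def parse_markers(text: str) -> dict:
    """Extract <key>value</key> pairs from a generated marker template."""
    return dict(re.findall(r"<(\w+)>(.*?)</\1>", text))

def marker_accuracy(predicted_texts, gold_markers, category: str) -> float:
    """Percentage of samples where the model's inferred value for one
    marker category exactly matches the gold annotation."""
    hits = sum(parse_markers(p).get(category) == g.get(category)
               for p, g in zip(predicted_texts, gold_markers))
    return 100.0 * hits / len(gold_markers)
```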
# 6 Related Work
From one- to multi-dimensional training data markers. The idea of tagging inputs with markers in neural sequence modeling goes back to early applications in machine translation and language modeling. The motivation there was to leverage discrete features during training and inference to overcome data sparsity or imbalance and introduce levers of control. In early neural LMs, special tokens were added as markers to target a very specific attribute such as the topic [Mikolov & Zweig, 2012] or auxiliary features [Aransa et al., 2015] such as genre and length. In translation such markers were introduced to control attributes like the target language [Johnson et al., 2017] or desired output quality [Caswell et al., 2019; Riley et al., 2020; Marie et al., 2020; Larkin et al., 2021;
Freitag et al., 2022] and text complexity [Agrawal & Carpuat, 2019; Marchisio et al., 2019], but also language-specific nuances like politeness [Sennrich et al., 2016; Feely et al., 2019], voice [Yamagishi et al., 2016], gender [Kuczmarski & Johnson, 2018], domains [Kobus et al., 2017; Britz et al., 2017], or diversity [Shu et al., 2019] of translations. Other works enriched the input representation during training with discrete linguistic features [Sennrich & Haddow, 2016] or document information [Jehl & Riezler, 2018] for a better contextualization at inference time. Where and how tags should be placed best differ across applications [Jehl & Riezler, 2018; Wu et al., 2021].
All of these were individual efforts that target one or two dimensions at a time, highly specialized for one trained target model and with training data for one particular task. Very limited work has been done on multidimensional markers [Stergiadis et al., 2021; Ramnath et al., 2021]. In contrast, our focus is on a much more general framework with a vast training corpus that targets general performance: instead of a single feature, we enable a flexible approach that can be used for any text generation task. Furthermore, our goal is to explicitly target improving performance on the long tail of underrepresented features.
From control in pretraining to control in instruction finetuning. In LLM research, several related works experiment with adding prefixes for control in pretraining: Keskar et al. [2019] add control codes for desired text features in pretraining of an LLM, derived from the structure of their source, i.e., subdomains or links of online texts and specific task labels for translation and QA. At inference time, values for these control codes are specified to steer the generation. Gao et al. [2025] further propose a cooldown schedule in pretraining, going from marked data to unmarked data, so that prefixes are not required at inference. Yuan et al. [2024] focus on length control by adding natural language length specification templates to fine-tuning data for preference optimization.
In our work, we focus on the instruction finetuning stage and incorporate nuanced multidimensional markers (i.e., the user can specify length, domain, and format). We circumvent a cooldown schedule by simply introducing marker dropout, hence requiring a much smaller volume of marked data at training time and no complete population of tags at inference time. With the option to fill markers on the fly, our framework is highly flexible and customizable.
From encoded to inferred meta-information. Related prefix and prompt tuning methods [Li & Liang, 2021; Lester et al., 2021] use continuous embeddings learned for special tokens representing markers in training to condition predictions for specific tasks at inference time. Shen et al. [2024] further break those into separate markers for domain and function. In our case, we directly embed prefixes with the same vocabulary as the LLM, smoothly integrating them into the sequence. In our experiments, we find that this helps format following even when the attribute is specified in natural language rather than markers (e.g., desired output length and language; Sections 4.3 and 4.5). Attribute-based control in LLM generations has also been pursued with other methods, such as attribute classifiers [Dathathri et al., 2020] or learned attribute vectors [Yang et al., 2023]; see [Zhang et al., 2023] for a comprehensive survey.

# Abstract

One of the most profound challenges of modern machine learning is performing well on the long tail of rare and underrepresented features. Large general-purpose models are trained for many tasks, but work best on high-frequency use cases. After training, it is hard to adapt a model to perform well on specific use cases underrepresented in the training corpus. Relying on prompt engineering or few-shot examples to maximize the output quality on a particular test case can be frustrating, as models can be highly sensitive to small changes, react in unpredicted ways or rely on a fixed system prompt for maintaining performance. In this work, we ask: "Can we optimize our training protocols to both improve controllability and performance on underrepresented use cases at inference time?" We revisit the divide between training and inference techniques to improve long-tail performance while providing users with a set of control levers the model is trained to be responsive to. We create a detailed taxonomy of data characteristics and task provenance to explicitly control generation attributes and implicitly condition generations at inference time. We fine-tune a base model to infer these markers automatically, which makes them optional at inference time. This principled and flexible approach yields pronounced improvements in performance, especially on examples from the long tail of the training distribution. While we observe an average lift of 5.7% win rates in open-ended generation quality with our markers, we see over 9.1% gains in underrepresented domains. We also observe relative lifts of up to 14.1% on underrepresented tasks like CodeRepair and absolute improvements of 35.3% on length instruction following evaluations.
# 1 Introduction
One of the most popular methods of conducting linguistic research has consisted of handcrafting paradigmatic utterances followed by gathering native speakers’ judgements. Yet, it is questionable how much these constructed utterances reflect real-world language use. As a result, plenty of debate has arisen about the legitimacy of paradigmatic utterances as a research tool, with arguments suggesting this particular data collection technique can lead to biased results (Cowart, 1997; Schütze, 2016). Whilst this debate has been happening in linguistics, advances in Natural Language Processing (NLP) have led to a significant increase in the number of freely available language corpora as well as in their size. For example, the open-source dataset Dolma consists of 3 trillion English tokens (Soldaini et al., 2023). These datasets provide new opportunities for linguistic research, with the ability to gather statistical data about specific language constructions and naturally-occurring examples beyond handcrafted sentences.
One particular sentence construction that would benefit from such corpus research is that of EMBEDDED CLAUSES. These constructions contain an embedding predicate which selects a clausal complement, as seen in the sentence: Mary hopes that John likes chocolate. Here, the predicate hopes embeds the declarative clausal complement that John likes chocolate. Alongside DECLARATIVE clausal complements, as in (1a), there are also POLAR INTERROGATIVE clausal complements (1b), ALTERNATIVE INTERROGATIVE clausal complements (1c), and CONSTITUENT INTERROGATIVE clausal complements (1d). Crucially, predicates vary with respect to which clausal complement type they are allowed to embed; consider the difference in grammaticality between wonder, which can embed interrogative clausal complements, and hope, which cannot embed interrogative clausal complements.1 In addition, it has been observed that emotive factives, such as be happy (about), take declarative and constituent interrogative complements but not polar and alternative interrogative complements (Abels, 2004; Karttunen, 1977; Sæbø, 2007, a.o.).
(1) a. Mary {*wondered | hoped | was happy} [that John liked chocolate].
b. Mary {wondered | *hoped | *was happy about} [whether John liked chocolate].
c. Mary {wondered | *hoped | *was happy about} [whether John liked chocolate or cake].
d. Mary {wondered | *hoped | was happy about} [which chocolate John ate].
Because a predicate selects for particular types of embedded clause in fine-grained ways, partly conditioned by the predicate’s lexical semantics, there is a debate amongst syntacticians and semanticists about what roles syntax and semantics play within these constructions (Grimshaw, 1979; Uegaki and Sudo, 2019; White, 2021, a.o.). Extracting clausal embeddings from large-scale corpora would help to answer such questions by providing large-scale statistical evidence for how often these embedding predicates appear in natural language use and which clausal complements they select, as well as the ability to look for natural language examples. Thus, the aim of this paper is to create a tool for linguists to extract English sentences containing embedded clauses from large-scale corpora, whilst also providing the following information: (i) the span of the embedded clause, (ii) the lexeme(s) of the embedding predicate, and (iii) the type of the embedded clause.2
This task of extracting embedded clauses is by no means trivial. Firstly, the span of the embedded clause in a sentence has to be correctly identified, excluding any element that belongs to the matrix clause. Secondly, there are constructions that superficially resemble embedded clauses, but are in fact not, as they fail to categorise syntactically as complements of an embedding predicate or as clauses. To see this, consider the following examples:
(2) a. Mary saw a man [that John mentioned].
b. Mary ate [what John cooked].
c. Mary goes to the gym regardless of [whether she is tired or not].
The bracketed clause in (2a) is a RELATIVE CLAUSE and is not a complement of an embedding predicate. In (2b), we have an instance of a FREE RELATIVE, which is considered as primarily a Noun Phrase rather than a clause (Caponigro, 2003; van Riemsdijk, 2006). The bracketed clause in (2c) is an UNCONDITIONAL (Rawlins, 2008), which is a modifier rather than a complement of a matrix predicate. Thirdly, embedded clauses can arise in complex clausal structures such as coordination (3a), which often occurs with ellipsis, nesting (3b), or some combination of both (3c). Consequently, to correctly identify embedded clauses, we need a correct syntactic parse of the sentence, as well as appropriate heuristics to rule out structures such as those in (2) and deal with the structures in (3).
(3) a. Mary knows [that John likes chocolate] and [that Mark does not].
b. Mary knows [that John thinks [that Mark likes chocolate]].
c. Mary knows [that John thinks [that Mark likes chocolate]] and [that Mark does not].
Our paper is structured in the following way: Section 2 describes previous attempts at building a large-scale corpus of English embedded clauses (e.g. MegaAcceptability), and additionally examines existing tools designed to extract sentences from language corpora (e.g. linguistic search engines). Section 3 introduces our hand-annotated dataset of English embedded clauses: the Golden Embedded Clause Set (GECS). Section 4 describes our extraction tool, which uses constituency representations and parsing heuristics, as well as our tool’s performance on GECS. Section 5 presents the large English embedded clause dataset that we have extracted from the open-source dataset Dolma. Section 6 suggests future research avenues, and Section 7 concludes our work. Overall, we provide three new contributions:
1. A small-scale dataset (GECS) with finegrained gold standard annotation of English embedded clauses to be used as a benchmark for this task
2. An extraction tool which can be applied to English language corpora to extract and annotate embedded clauses
3. A large-scale extracted set of English embedded clauses from the language corpus Dolma for the linguistic community to use
# 2 Relevant Work
# 2.1 MegaAcceptability
The only existing attempt at a large dataset of English embedded clauses is the MegaAcceptability dataset (White and Rawlins, 2016, 2020). White and Rawlins selected a list of 1007 English verbs known to select clausal embeddings, and then designed 50 schematic sentences covering a range of syntactic environments in which an embedded clause can occur. They then slotted the 1007 verbs into the 50 schematic sentences to create ~50,000 entries. Through Amazon MTurk, participants rated the acceptability of the resultant sentences on a 7-point ordinal scale, leading to a large dataset of embedded clause constructions ranked by acceptability.
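The slotting procedure amounts to a cross-product of verbs and sentence frames. A toy sketch (the verbs and frames here are invented stand-ins, not the actual 1007-verb and 50-frame lists):

```python
# Toy illustration of MegaAcceptability-style template slotting:
# every verb is inserted into every schematic frame.
verbs = ["wonder", "hope", "know"]
frames = [
    "Someone will {} that something happened.",
    "Someone will {} whether something happened.",
]
sentences = [frame.format(verb) for verb in verbs for frame in frames]
# 1007 verbs x 50 frames would yield ~50,000 such entries.
```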
Although the MegaAcceptability dataset moves away from the problem of a small set of sentences being used as evidence for linguistic hypotheses, it still utilises non-natural sentences which have been handcrafted. Furthermore, for finite embedded clauses White and Rawlins (2016; 2020) only considered environments without complementisers or with the following complementisers: that, whether, and which. They also only consider predicates with no prepositions or with the following prepositions: to and about. They make use of a pre-defined list of verbs which accept clausal complements, which does not account for the full set of embedding verbs nor adjectives and complex predicates which can also accept clausal complements. Therefore, it is unclear if the dataset captures the natural distributions of embedding predicates, embedded clause types, and the types of embedded clauses selected by embedding predicates.
# 2.2 Linguistic Search Engines
The goal of extracting sentences with certain linguistic phenomena from natural language use is not a new one. There have been several attempts to create search engines in which an individual can query annotated natural-language corpora for certain constructions and then be provided with a list of sentences matching the query. Prominent tools with this use include the Linguist’s Search Engine (Resnik and Elkiss, 2005), SPIKE (Shlain et al., 2020), and the LINDAT/CLARIAH-CZ PML Tree Query (Pajas et al., 2009).
Although these are powerful tools, their query languages are not sufficiently fine-grained to capture the relevant structures of embedded clauses. They rely on annotation of corpora with lemmas, part-of-speech tags, and dependency graph representations. This means that one would need to specify dependency relationships rather than constituency/hierarchical ones to identify the structure of embedded clauses. Such an approach is limiting, as it is difficult to identify clause and predicate spans based on dependency relations or linear structure. There is also less consistency with respect to the relations that identify embedded clauses than with constituency parsers. Moreover, linguistic search engines offer linguists limited flexibility to decide which corpora they want to extract sentences from.
Figure 1: The annotation in GECS for each embedded clause.
# 3 Golden Embedded Clause Set (GECS)
For the novel task of English embedded clause detection in natural language corpora, we created a hand-annotated dataset (GECS) which can serve as a benchmark for evaluation and be used in its own right for a small-scale analysis of embedded clause constructions. In GECS, each embedded clause is annotated with its embedding predicate, clause span, and clause type (see Figure 1). We provide the embedding predicate as a list of the relevant tokens (i.e. ignoring negation words, adjuncts, and any other tokens which may appear between the first embedding predicate token and the clause).3
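For illustration, a single GECS entry carrying the three annotation fields described above might look like the following (the field names are our own illustrative choice, not the dataset's actual schema; see Figure 1 for the real layout):

```python
# Hypothetical GECS entry with the three annotation fields described above.
entry = {
    "sentence": "Mary was happy about which chocolate John ate.",
    "predicate_tokens": ["was", "happy", "about"],  # embedding predicate
    "clause_span": "which chocolate John ate",      # embedded clause span
    "clause_type": "constituent",                   # one of the four types
}
```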
Annotation Procedure To create our naturally-occurring embedded clause dataset, we selected a subset of 866,538 sentences from Dolma4 (Soldaini et al., 2023). The data was not cleaned, so as to accurately test the robustness of the tool. We then parsed the sentences and filtered them to remove any which necessarily did not contain embedded clauses.5 To extract the set of polar and alternative interrogative embedded clauses, we further filtered out sentences that did not contain the words whether or if. Finally, to filter for constituent interrogative embedded clauses, we only considered sentences with: who, what, when, where, why, how, or which. The next stage of hand annotation consisted of one researcher going through the prefiltered sentences and confirming (i) whether there were embedded clauses and, (ii) if so, providing the annotation of predicate tokens, clause span, and type. A second researcher then went through the first researcher’s annotations to confirm agreement.
Figure 2: Example parses of embedded clause sentences in GECS.
Overall, GECS contains 147 declarative embedded clauses, 138 polar interrogative embedded clauses, 84 alternative interrogative embedded clauses,6 and 158 constituent interrogative embedded clauses. In addition, we provide a set of 111 adversarial examples verified to not contain any embedded clauses, but do contain misleading structures such as free relatives and relative clauses. These were created by selecting sentences discarded by the annotators in the final stage of GECS’ creation.
# 4 Parser Tool
Though it is possible to define a set of heuristics based on regular expressions or dependency relations, preliminary analysis indicated significant disadvantages to such an approach, as seen with the linguistic search engines from Section 2. For this reason, we opted for representations from constituency syntactic parsers to capture hierarchical structure. A benefit of this choice is that linguistic theory is typically stated with respect to constituency trees, so we can implement linguistic facts in extraction heuristics more freely than with other representations. While it is possible that a dependency parser could achieve equivalent results, it is not clear what improvements it would offer; we leave this question to a more thorough exploration in future work. With the constituency representation we defined a set of heuristics to perform the following tasks:
1. Detection: detecting embedded clause(s) in a sentence
2. Predicate Identification: identifying each embedding predicate
3. Clause Identification: identifying the span of each embedded clause
4. Typing: identifying the type of each embedded clause
The syntactic parser that we use is SpaCy’s Berkeley Neural Parser, a constituency parser that has an LSTM and self-attentive architecture (Kitaev and Klein, 2018; Kitaev et al., 2019). Other options are available for constituency parsing; however, we decided upon this parser because it is state-of-the-art for constituency parsing.
The SpaCy constituency parser represents each sentence as an n-ary tree structure with several syntactic categories (e.g. S, VP, NP, SBAR) in parent and child hierarchy. This tree structure is particularly helpful in extracting embedded clauses because we can traverse the parent levels and check for particular child nodes in complement positions. We then defined heuristics based on the structures from the parser to perform the aforementioned tasks of embedded clauses detection, predicate identification, clause identification, and typing.
# 4.1 Methodology
Detection The first heuristic we deemed necessary for detecting embedded clauses is the existence of an SBAR in the parsed representation. This is the syntactic category for a subordinate clause, a superset of embedded clauses that also includes non-embedded clauses like relative clauses. To check whether a subordinate clause is an embedded clause, we assume it needs to be dominated by a VP headed by a predicate. While there may be other syntactic categories immediately above the subordinate clause, we are only interested in the first upstream occurrence of one of two syntactic categories: NP or VP. If that label is VP, the sentence has an embedded clause. If the label is NP, the sentence does not have an embedded clause; likewise if neither of the two is found before the root node of the tree. We thus use the hierarchical nature of the constituency parse to distinguish embedded clauses from relative clauses and complements of NPs.
To limit the number of false positives that would be extracted from the dataset, we implemented a few heuristics based on the embedding predicate and the subordinating conjunction of the detected clauses. First, if the embedding predicate is empty after part-of-speech filtering, or the only predicate token is is/be, then the clause is not considered an embedded clause. Secondly, we rule out any clause beginning with certain subordinating conjunctions, because they are not indicative of an embedded clause. Specifically, we blacklist the following: after, although, before, despite, to, for, so, though, unless, until, than, because, since, while, as, even if, in order.
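The detection logic and post-filters above can be sketched as follows; this is a minimal illustration assuming a toy constituency tree of nested `(label, children)` tuples rather than the actual SpaCy/Berkeley parser objects:

```python
# Sketch of the detection heuristics. A tree node is (label, children),
# where an internal node's children is a list and a leaf is (pos, word).
BLACKLIST = {"after", "although", "before", "despite", "to", "for", "so",
             "though", "unless", "until", "than", "because", "since",
             "while", "as", "even if", "in order"}

def find_sbars(node, ancestors=()):
    """Yield (sbar_node, ancestor_labels) for every SBAR in the tree."""
    label, children = node
    if label == "SBAR":
        yield node, ancestors
    if isinstance(children, list):              # internal node: recurse
        for child in children:
            yield from find_sbars(child, ancestors + (label,))

def is_embedded(ancestor_labels):
    """An SBAR is an embedded clause iff its nearest NP/VP ancestor is a VP."""
    for label in reversed(ancestor_labels):
        if label == "VP":
            return True
        if label == "NP":                       # relative clause / NP complement
            return False
    return False                                # hit the root without NP or VP

def passes_filters(predicate_tokens, first_clause_word):
    """Post-filters: reject empty or bare-copula predicates and
    clauses opened by a blacklisted subordinating conjunction."""
    if not predicate_tokens or predicate_tokens in (["is"], ["be"]):
        return False
    return first_clause_word.lower() not in BLACKLIST
```

The key design point is that `is_embedded` stops at the *first* NP or VP ancestor, which is what separates "Mary knows [SBAR ...]" (VP parent) from the relative clause in "Mary saw a man [SBAR ...]" (NP parent).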
Predicate Identification Having identified an embedded clause in a sentence, we can extract the embedding predicate by searching for the nearest VP parent of the clause: we iteratively search through the parents of the embedded clause until a VP is reached. We then identify the predicate span from this constituent, considering a wider range of possible verbs, adjectives, and prepositions than previous methods (cf. Section 2.1). For each constituent child of the VP (with the exception of the final one, which contains the embedded clause), we keep every token in the child span as long as the child label is a PP, NP, or SBAR label. For the last child of the VP, we keep every token up to the onset of the embedded clause. We then filter these tokens based on their part-of-speech tags, keeping only tokens tagged VERB, ADP, or ADJ, with an exception for the auxiliary tag AUX if there is also an adjective in the original token list. This helps us capture adjectival predicates such as ‘unclear as to’ or ‘is certain’ (see Figure 2).
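The part-of-speech filter for predicate tokens reduces to a short function. This minimal sketch operates on hypothetical `(word, POS)` pairs rather than SpaCy tokens:

```python
# POS filter for candidate predicate tokens: keep VERB/ADP/ADJ, and keep
# AUX only when an adjective is also present (e.g. "is certain").
def filter_predicate(tokens):
    keep = {"VERB", "ADP", "ADJ"}
    has_adj = any(pos == "ADJ" for _, pos in tokens)
    return [word for word, pos in tokens
            if pos in keep or (pos == "AUX" and has_adj)]
```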
Clause Identification Given that a sentence is detected as having an embedded clause, we can then further use the parsed representation to extract the span of the embedded clause. The constituency parser is advantageous in this regard as we take whatever is under the syntactic label of SBAR to be the embedded clause constituent.
Typing Having identified the clause span, the heuristics for typing the clause can involve simpler string matching. For alternative interrogative clauses we check the complementiser: if whether is the first word of the embedded clause and the token or also appears, then it is an alternative interrogative. If instead we find whether that is not followed by the token or, or that is followed by the explicit string or not, then the embedded clause is a polar interrogative. If one of which, who, what, when, where, why, or how is the first word of the embedded clause, then it is a constituent interrogative clause. If none of the prior conditions are met, including when the clause begins with that, we type the clause as declarative.
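These typing rules can be sketched on a lowercased token list (a simplification; the tool operates on the parsed clause span):

```python
# Clause typing by string matching on the first tokens of the clause.
WH_WORDS = {"which", "who", "what", "when", "where", "why", "how"}

def has_or_not(tokens):
    """True if the explicit string 'or not' occurs in the clause."""
    return any(a == "or" and b == "not" for a, b in zip(tokens, tokens[1:]))

def type_clause(tokens):
    first = tokens[0]
    if first == "whether":
        rest = tokens[1:]
        # "whether ... or ..." is alternative, unless the "or" is part of
        # the explicit string "or not", which signals a polar question
        if "or" in rest and not has_or_not(rest):
            return "alternative"
        return "polar"
    if first in WH_WORDS:
        return "constituent"
    return "declarative"   # includes clauses introduced by "that"
```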
# 4.2 Evaluation
We evaluate the performance of our tool on the sentence annotations in GECS. With these annotations we can accurately test the tool’s ability to detect embedded clauses, embedding predicates, and clause types, allowing us to evaluate how our tool handles messy natural data. We have also built a pattern matching baseline to compare our heuristics against a more linear approach.
Pattern Matching Baseline The baseline we constructed is a rule-based tool using pre-defined lexical patterns to extract embedded clause annotations from a sentence. This method relies on the SpaCy Matcher, a tool similar to Regular Expressions in that it matches a given pattern in a string, but with useful supplemental linguistic information encoded, such as POS tags and lemmas (Honnibal and Montani, 2017)7. To detect embedded clauses, the Matcher is provided with the fixed list of (potentially) embedding predicates from MegaAcceptability (White and Rawlins, 2016). It then returns instances of these predicates in a sentence; with an added heuristic ensuring that the predicate is followed by some other verb or auxiliary (i.e. a clause), an embedded clause is identified. Prepositions following the verb are included in the list of predicate tokens, and POS tags and lemmas are also identified. Limited by the linear nature of the Matcher, we define the clause span as running from the end of the predicate to the end of the sentence, ignoring any adverb/pronoun which may occur between predicate and clause. For the final goal of typing the embedded clause, we again use the Matcher to match the first token of the clause to the complementisers associated with each type. We distinguish between polar and alternative interrogatives by classifying clauses containing the token or but not the string or not as alternative, and every other instance as polar. If no associated complementiser is found in the clause, the clause is typed as declarative.
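The baseline's core logic can be rendered in pure Python (a simplification: tokens are `(lemma, POS)` pairs and the predicate list is a tiny illustrative stand-in for the MegaAcceptability verbs; the real baseline uses spaCy's Matcher):

```python
# Linear pattern-matching baseline: find a known embedding predicate, check
# that another verb/aux follows (i.e. a clause), and take everything after
# the predicate as the clause span.
EMBEDDING_PREDS = {"know", "wonder", "hope", "think", "say"}  # toy subset

def baseline_detect(tokens):
    """Return (predicate_index, clause_lemmas) or None."""
    for i, (lemma, pos) in enumerate(tokens):
        if pos == "VERB" and lemma in EMBEDDING_PREDS:
            if any(p in ("VERB", "AUX") for _, p in tokens[i + 1:]):
                return i, [lem for lem, _ in tokens[i + 1:]]
    return None
```

The clause span running to the end of the sentence is exactly the linearity limitation the constituency-based tool avoids.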
Table 1: Precision, Recall, and F1 scores for clause detection in GECS.
Table 2: Identification accuracy scores evaluated on the true positives set of detected clauses from Table 1.
Results We split the evaluation of our tool and the baseline into three detection sections: Single Clause Evaluation which evaluates detection performance on sentences in GECS that only included one embedded clause, Multi Clause Evaluation which evaluates detection performance on sentences in GECS that had multiple embedded clauses (nested and coordinated clauses), and Overall which combines the statistics of single and multi clause evaluation and performance on the adversarials. Table 1 provides the precision and recall for these metrics. We also evaluated amongst the correctly detected embedded clauses the annotation abilities of our tool and the baseline, by seeing if the selected predicate is correct (Predicate Identification), if the selected clause is correct (Clause Identification), and if the typing of the clause is correct (Type Identification). Table 2 provides the accuracy scores for these metrics.
As Tables 1 and 2 show, we outperform the baseline on every metric, indicating that our constituency-based method is better than a linear approach. Our tool only slightly degrades in detection recall when given a sentence with nested and/or coordinated embedded clauses.
Failure analysis In the few cases where our tool errs, we see the following categories: parser errors, unconditionals mistaken for embedded clauses, and incomplete complex predicate detection. Parser errors were the biggest source of failed cases; unfortunately this is unavoidable, given that any parser will be imperfect. Unconditionals also proved a problem because they are parsed the same as embedded clauses and are therefore impossible to differentiate. Finally, complex predicates were sometimes incompletely detected, so not all of the predicate tokens were placed in the entry. Given that some of these errors are unavoidable, and given the tool’s high precision and recall, we take the results to indicate that our tool can be used to create a large-scale dataset of naturally-occurring embedded clauses, as long as researchers propagate the error into their analysis, something which must be done in any corpus study.
# 5 Case Study: Large-Scale Dataset
Having designed a tool which can identify and annotate embedded clauses, we applied it to an English corpus to create a large-scale dataset of annotated embedded clauses. We chose to apply the tool to a subset of Dolma8 (Soldaini et al., 2023). Overall, 28,968,073 embedded clauses were detected.
# 5.1 Comparison with MegaAcceptability
In order to compare with MegaAcceptability, we performed a limited case study on our large-scale dataset, looking only at entries that include the 1007 verbs used in the MegaAcceptability templates. To get the rating of each verb from MegaAcceptability, we selected the maximum normalised rating of that verb’s available constructions. We then compared each verb’s acceptability rating according to MegaAcceptability with its frequency in the large-scale dataset. One would generally expect that the higher a verb is rated, the more frequent it would be. As shown in Figure 3, this is the overall trend that we see: our tool has successfully captured the verbs with the highest acceptability, while the verbs with lower acceptability had a lesser chance of occurring with embedded clauses.
There are some exceptions to the frequency-acceptability distribution; however, these provide an interesting exploration point. For instance, the low-acceptability outlier with a high frequency in Figure 3 is the predicate mean. Looking at entries with mean as the predicate, we see three example types: (i) cases where it is unclear if the predicate is actually embedding or is acting as some filler (4a), (ii) false positives (4b), and (iii) true embedded clauses (4c). Thus, mean could be an outlier because of false positives, or because a data-driven approach collects sentence clause types which a template approach could not.
(4) a. It’s pretty catchy, I mean who doesn’t go ANN ANN and A SORE.
b. In Glosa it means "what I’ve just said".
c. This means [...], the ADA applies to you.
# 5.2 Clauses and Predicates at Scale
With our large-scale dataset of embedded clauses we can look beyond the fixed list of predicates provided by a template-driven dataset like MegaAcceptability, viewing the clause-predicate distribution at a grand scale to test and verify linguistic theories. From the nearly 29 million embedded clause examples in the dataset we have the following distribution of clause types: 19,195,112 declarative clauses, 9,402,868 constituent interrogative clauses, 261,274 polar interrogative clauses, and 108,819 alternative interrogative clauses. This shows us how rare polar and alternative interrogative clauses are. Moreover, we can examine the distribution of embedding predicates in the dataset. Looking at the part-of-speech tags of each embedding predicate, we can observe the distribution of adjectival and verbal predicates. Adjectival predicates require an accompanying verb or auxiliary (e.g., be happy), so we look at complex predicates involving two or more tokens. We find 35,294 unique adjectives within these complex predicates. Meanwhile, for simple one-word verbal predicates, we find 29,654 unique predicates. Altogether, this leaves us with a strong set of examples to analyse any clause-predicate distribution of interest.
Figure 3: Comparison of natural data frequency and acceptability of the verbs found in MegaAcceptability ranked in increasing order of acceptability
Here we present an example of how the dataset can be used in linguistic research to further validate and verify linguistic theories, as well as survey new possible sentence constructions that could be of interest.
Emotive Factive Predicates As mentioned in Section 1, previous analyses have shown that emotive factive predicates, such as be happy, or be glad, are not able to embed either polar or alternative interrogative clauses (Karttunen, 1977; Abels, 2004; Sæbø, 2007). We can see if the extracted dataset shows this distribution statistically and if there are any counter-examples.
To test this generalisation, we selected a subset of emotive factive predicates to investigate further: happy, amazed, sad, glad, excited, surprised, incredible, angry, mad, jealous, afraid. Looking at the clauses embedded under these predicates, we get the following distribution of clause types: 175,479 declarative clauses, 47,877 constituent interrogative clauses, 159 polar, and 134 alternative interrogative clauses. The statistical breakdown does match the generalisation, with declarative and constituent interrogative embedded clauses being the more popular embedded clause types. More importantly, however, there are some polar and alternative interrogative examples, which we can search through to find potential counter-examples to the generalisation.
In searching through the polar and alternative interrogative embedded clauses, many are false positives, with the following four errors being indicative of the set: unconditionals (5a), wrong predicate span where the emotive factive is not the embedding predicate (5b), real embedded clauses but the sentence does not have the intended meaning required by the generalisation, e.g., be afraid is non-factive in (5c), and clausal adjuncts (5d).
(5) a. It’s not your problem, because you’re happy whether you’re with him or doing stuff on your own.
b. I’m not sure how excited to get about this fund and whether he’s just piggybacking on the Buffett name.
c. We are afraid whether it will be in Sindhi interest.
d. Meanwhile, people across the state are hair-on-fire mad over whether urban water users should be allowed to buy rural property simply for the water rights, and whether some water users should be allowed to sell their water to others out of state.
Given that we need to propagate the tool’s error rate, this is to be expected. However, there appear to be some genuine counter-examples (6), which at least two of the three native speakers among the authors find grammatical. It is beyond the scope of this paper to provide an analysis of these sentences, so we leave it for future work.
(6) a. In the post you talk about your child’s health issues and in the end ask if people are happy with whether they’re circumcised or not.
b. You might be surprised about whether there’s hope for future shooters.
Although this analysis is by no means exhaustive, we use these examples to motivate the use of this dataset to further validate and explore linguistic theories through naturally-occurring linguistic data, in addition to hand-crafted templatic examples.
# 6 Discussion
As this is the first method for extracting embedded clauses from natural language corpora, we set out some future research avenues. Firstly, clausal embedding extraction should be extended to other languages so that linguistic theories built on such large corpora can have cross-linguistic validity. Given that the universal definition of a clausal complement is a complement to VP, we argue that a method similar to the one described in this paper can be applied to other languages; the main changes would be to the fine-grained heuristics that we used for typing. Of course, our approach is subject to the limitations that follow from any corpus-based research, inheriting the set of biases pre-existing in the corpora. It is also not always possible to scale this approach cross-linguistically, as the method relies on a given language having large enough corpora (which many do not). Nonetheless, this should not deter people from using the method with an applicable language, as a complement to other approaches. Secondly, we recognise that there are other potential methods for extracting English clausal embeddings. One such technique is the use of an LLM, a method which we decided against, given that an LLM is a black box, meaning a thorough error analysis could not be conducted. To aid future development in this area, we have provided GECS to be used as a benchmark for this task. | For linguists, embedded clauses have been of special interest because of
their intricate distribution of syntactic and semantic features. Yet, current
research relies on schematically created language examples to investigate these
constructions, missing out on statistical information and naturally-occurring
examples that can be gained from large language corpora. Thus, we present a
methodological approach for detecting and annotating naturally-occurring
examples of English embedded clauses in large-scale text data using
constituency parsing and a set of parsing heuristics. Our tool has been
evaluated on our dataset Golden Embedded Clause Set (GECS), which includes
hand-annotated examples of naturally-occurring English embedded clause
sentences. Finally, we present a large-scale dataset of naturally-occurring
English embedded clauses which we have extracted from the open-source corpus
Dolma using our extraction tool. | [
"cs.CL"
] |
# I. INTRODUCTION
Reinforcement Learning (RL) has achieved remarkable success in gaming [1], [2], autonomous cars [3], [4], and embodied agents [5], [6]. Traditionally, RL agents rely on well-designed reward functions to learn specific tasks [7]. However, designing these reward functions is resource-intensive and often requires domain-specific expertise [8], [9], making the learned policies dependent on handcrafted rewards and potentially unable to capture the complexity of real-world scenarios. This reliance limits the agent’s generalization capability across diverse tasks and results in poor adaptability. In contrast, recent advances in Large Language Models (LLMs) [10], [11] show that unsupervised auto-regression has led to powerful pre-trained language models, which can be adapted to downstream tasks via supervised fine-tuning [12], [13]. A powerful vision encoder can also be pre-trained via masked prediction without annotations or labels [14]–[16], and the encoder can be used to solve various vision tasks [17], [18]. Inspired by these breakthroughs, it is desirable to explore similar unsupervised learning methods within the RL field. The goal is for unsupervised RL agents to learn useful behaviors in the absence of external rewards, equipping them with the capacity to quickly adapt to new tasks with limited interactions [19].
The formulation of unsupervised RL has been studied in many prior works, which can be roughly categorized into empowerment-based skill discovery [20] and pure exploration methods [21]. Empowerment-based methods aim to maximize the Mutual Information (MI) between states and skills, and the MI term can be estimated by different variational estimators [22]. These methods have shown effectiveness in learning discriminative skills for state-based locomotion tasks [23]. However, the learned skills often have limited state coverage due to the inherent sub-optimality in the MI objective [24], which can lead to sub-optimal adaptation performance in downstream tasks and becomes more severe in large-scale state spaces [25]. Recent works introduce additional techniques such as Lipschitz constraints and metric-aware abstraction to enhance exploration abilities [25]–[27]. Pure exploration methods encourage the agent to explore the environment with maximum state coverage; however, this can lead to extremely dynamic skills rather than meaningful behaviors for downstream tasks [21], [28]. Meanwhile, neither the MI estimator nor the entropy estimation is directly scalable to large-scale spaces, such as pixel-based environments [25], [29].
To overcome the aforementioned limitations, this work proposes a novel skill discovery method that maximizes the State Density Deviation of Different skills (SD3). Specifically, we construct a conditional autoencoder for state density estimation of different skills in high-dimensional state spaces. Each skill policy is then encouraged to explore regions that deviate significantly from the state density of other skills, which promotes inter-skill diversity and leads to discriminative skills. For stable state-density estimation of significantly different skills, we adopt soft modularization for the conditional autoencoder, making the skill-conditional network a weighted combination of shared modules according to a routing network determined by the skill. We show that the skill-deviation objective of SD3 reduces to the original MI objective in a special case. Further, to incentivize intra-skill exploration, we formulate an intrinsic reward from the autoencoder based on the learned latent space, which extracts skill-relevant information and is scalable to large-scale problems. Theoretically, such an intrinsic reward is closely related to provably efficient count-based exploration in tabular cases. To summarize, SD3 encourages inter-skill diversity via density deviation and intra-skill exploration via count-based exploration in a unified framework. We conduct extensive experiments in Maze, the state-based Unsupervised Reinforcement Learning Benchmark (URLB), and challenging image-based URLB environments, showing that SD3 learns exploratory and diverse skills.
Our contribution can be summarized as follows. (i) We propose a novel skill discovery objective based on state density deviation of skills, providing a straightforward way to learn diverse skills with different state occupancy. (ii) We propose a novel conditional autoencoder with soft modularization to estimate the state density of significantly different skills stably. (iii) The learned latent space of the autoencoder provides an intrinsic reward to encourage intra-skill exploration that resembles count-based exploration in tabular MDPs. (iv) Our method achieves state-of-the-art performance in various downstream tasks in challenging URLB benchmarks and demonstrates scalability in image-based URLB tasks.
# II. PRELIMINARIES
# A. Markov Decision Process
A Markov Decision Process (MDP) is a foundational model for decision-making. We consider the process of an agent interacting with the environment as an MDP with discrete skills, defined by a tuple $( \mathcal { S } , \mathcal { A } , \mathcal { Z } , \mathcal { P } , r , \gamma )$, where $\mathcal { S }$ is the state space, $\mathcal { A }$ is the action space, $\mathcal { Z }$ is the skill space, $\mathcal { P } : \mathcal { S } \times \mathcal { A } \to \Delta ( \mathcal { S } )$ is the transition function, $r : \mathcal { S } \times \mathcal { A } \to \mathbb { R }$ is the reward function, and $\gamma$ is the discount factor. In this work, we consider a discrete skill space $\mathcal { Z }$ that contains $n$ skills, since calculating the skill density deviation requires density estimation for all skills; SD3 can also be extended to a continuous skill space by sampling skills from a continuous distribution for approximation. At each timestep, the agent follows a skill-conditional policy $\pi ( a | s , z )$ to interact with the environment. When the context is clear, we refer to the ‘skill-conditional policy’ simply as a ‘skill’.
# B. Unsupervised RL
Unsupervised RL typically contains two stages: unsupervised pre-training and fast policy adaptation. In the pre-training stage, the agent interacts with the environment without any extrinsic reward. The policy $\pi ( a | s , z )$ is learned to maximize some intrinsic reward $r _ { t }$ formulated from an estimate of the MI term or the state entropy. The aim of unsupervised pre-training is to learn a set of useful skills that can potentially solve various downstream tasks via fast policy adaptation. In the adaptation stage, the policy $\pi ( a | s , z ^ { \star } )$ with a chosen skill $z ^ { \star }$ is optimized by RL algorithms with extrinsic rewards to adapt to specific downstream tasks. In the following, we denote by $I ( \cdot ; \cdot )$ the MI between two random variables and by $\mathcal { H } ( \cdot )$ either the Shannon entropy or the differential entropy, depending on the context. We use uppercase letters for random variables and lowercase letters for their realizations. We denote $d ^ { \pi } ( s ) \triangleq ( 1 - \gamma ) \sum _ { t = 0 } ^ { \infty } \gamma ^ { t } P ( s _ { t } = s | \pi )$ as the normalized discounted probability that a policy $\pi$ encounters state $s$.
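In the tabular case, the normalized discounted occupancy has the closed form $d^{\pi} = (1-\gamma)\,\mu_0^{\top}(I - \gamma P_{\pi})^{-1}$, which the following sketch evaluates for a made-up three-state chain (transition matrix and initial distribution are illustrative only):

```python
import numpy as np

# d^pi(s) = (1 - gamma) * sum_t gamma^t P(s_t = s | pi) for a toy 3-state
# Markov chain induced by a fixed skill policy.
gamma = 0.9
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])   # P[s, s'] under the policy (made up)
mu0 = np.array([1.0, 0.0, 0.0])   # initial state distribution

# Closed form via the resolvent: d = (1 - gamma) * mu0 @ (I - gamma P)^{-1}
d = (1 - gamma) * mu0 @ np.linalg.inv(np.eye(3) - gamma * P)

print(d, d.sum())  # a proper distribution: entries sum to 1 (up to float error)
```

The normalization by $(1-\gamma)$ is what makes $d^{\pi}$ a probability distribution over states, which the density-deviation objective below relies on.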
The empowerment-based skill discovery algorithms estimate the MI between $S$ and $Z$ via $I ( S ; Z ) = \mathbb { E } _ { z \sim p ( z ) , s \sim p ^ { \pi } ( s | z ) } [ \log p ( z | s ) - \log p ( z ) ]$. Given the computational challenges associated with the posterior $p ( z | s )$, a learned skill discriminator $q _ { \phi } ( z | s )$ is employed [23], and a variational lower bound is established for the MI term as $I ( Z ; S ) \ge \mathbb { E } _ { z \sim p ( z ) , s \sim p ^ { \pi } ( s \mid z ) } [ \log q _ { \phi } ( z \mid s ) - \log p ( z ) ]$. Alternatively, pure exploration methods estimate the state entropy by summing the log-distances between each particle and its $k$-th nearest neighbor, as $\mathcal { H } ( S ) \propto \sum _ { s _ { i } } \ln \left\| s _ { i } - \mathrm { N N } _ { k } ( s _ { i } ) \right\|$.
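The particle-based entropy proxy can be sketched in a few lines; this is a simplified $O(n^2)$ illustration of the $k$-NN estimator, not the tuned version used by the cited methods:

```python
import numpy as np

def knn_entropy_proxy(states: np.ndarray, k: int = 3) -> float:
    """Particle-based entropy proxy: sum of log distances to each point's
    k-th nearest neighbor, up to additive/multiplicative constants."""
    total = 0.0
    for i in range(len(states)):
        dists = np.sort(np.linalg.norm(states - states[i], axis=1))
        total += np.log(dists[k] + 1e-8)  # dists[0] is the point itself
    return total

rng = np.random.default_rng(0)
spread_out = rng.uniform(-5, 5, size=(200, 2))   # wide state coverage
clustered = rng.normal(0, 0.1, size=(200, 2))    # collapsed coverage

# A wider state distribution scores higher under the proxy.
print(knn_entropy_proxy(spread_out) > knn_entropy_proxy(clustered))
```

This captures why pure exploration rewards push particles apart: increasing nearest-neighbor distances directly increases the proxy.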
# III. METHOD
In this section, we first introduce the proposed SD3 algorithm, which performs skill discovery by maximizing inter-skill diversity via state density estimation. Next, we present the formulation of intrinsic rewards for intra-skill exploration. Finally, we provide a qualitative analysis of SD3.
# A. Skill Discovery via Density Deviation
We develop our skill discovery strategy from a straightforward intuition: the explored region of each skill should deviate from those of other skills as far as possible. Formally, the optimization objective for skill discovery, denoted $ { I _ { \mathrm { S D 3 } } }$ and referred to as density deviation, is defined by
$$
I _ { \mathrm { S D 3 } } \triangleq \mathbb { E } _ { z \sim p ( z ) , s \sim d _ { z } ^ { \pi } ( s ) } \left[ \log \frac { \lambda d _ { z } ^ { \pi } ( s ) } { \lambda d _ { z } ^ { \pi } ( s ) p ( z ) + \sum _ { z ^ { \prime } \neq z } d _ { z ^ { \prime } } ^ { \pi } ( s ) p ( z ^ { \prime } ) } \right] ,
$$
where $z$ is sampled from $p ( z )$ , $s$ is sampled from the state distribution induced by the skill policy $\pi ( a | s , z )$ , and $\lambda > 0$ is a weight parameter. The numerator $d _ { z } ^ { \pi } ( \cdot )$ is the state density of skill $z$ , and the denominator is the weighted average of the state density of $z$ and those of other skills $\{ z ^ { \prime } \}$ . Since we uniformly sample skills from the skill set that contains $n$ skills, we have $p ( z ) = 1 / n$ for each skill $z$ . According to Eq. (1), it is easy to check that $ { I _ { \mathrm { S D 3 } } }$ attains its maximum when $\textstyle \sum _ { z ^ { \prime } \neq z } d _ { z ^ { \prime } } ^ { \pi } ( s ) \to 0$ for all $( s , z )$ such that $p ( z ) \cdot d _ { z } ^ { \pi } ( s ) >$ 0, and the maximum value is $\mathcal { H } ( Z )$ . In this case, the state $s \sim d _ { z } ^ { \pi } ( \cdot )$ visited by skill $z$ has zero visitation probability by other skills, which means the explored regions of all skills do not overlap, and the learned skills are fully distinguishable. However, enforcing such a strong objective to separate the overlapping explored areas of skills may lead to limited state coverage for each skill. In extreme cases, each skill might only visit a distinct state that other skills do not access. Although this leads to distinguishable skills, the overall state coverage becomes overly limited, making them undesirable for learning meaningful behaviors.
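As a quick numerical check of the limiting behavior described above, the density ratio inside Eq. (1) can be evaluated per state from skill densities; a minimal Python sketch with made-up density values, assuming a uniform skill prior:

```python
import numpy as np

def sd3_reward(d, z, lam=2.0):
    """Density-deviation term for one state: log of the ratio in Eq. (1),
    where d[z'] holds the estimated densities d_{z'}^pi(s) of all n skills
    at that state and p(z) = 1/n is the uniform skill prior."""
    n = len(d)
    p = 1.0 / n
    denom = lam * d[z] * p + p * sum(d[zp] for zp in range(n) if zp != z)
    return np.log(lam * d[z] / denom)

# Fully separated skills: only skill 0 visits the state -> reward log(n).
print(sd3_reward([1.0, 0.0, 0.0], z=0, lam=2.0))  # ~ log(3)

# Fully overlapping skills with lam = 1 -> reward 0.
print(sd3_reward([0.5, 0.5, 0.5], z=0, lam=1.0))  # ~ 0.0
```

With fully separated skills the term reaches $\log n$ regardless of $\lambda$ (since $\lambda$ cancels), while fully overlapping skills with $\lambda = 1$ give zero; averaging over $(s, z)$ yields the maximum $\mathcal{H}(Z)$ claimed above.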
In SD3, we adopt two mechanisms to address this problem. (i) A weight parameter $\lambda$ is used in the learning objective to regularize the gradient of $ { I _ { \mathrm { S D 3 } } }$ with respect to the densities of other skills. To see this, for each $( s , z )$, denote the total state density of the other skills $\{ z ^ { \prime } \}$ as $\rho _ { z ^ { c } } \triangleq \sum _ { z ^ { \prime } \neq z } d _ { z ^ { \prime } } ^ { \pi } ( s )$; then the gradient of $I _ { \mathrm { S D 3 } } ( s , z )$ with respect to $\rho _ { z ^ { c } }$ becomes
$$
\nabla _ { \rho _ { z ^ { c } } } I _ { \mathrm { S D 3 } } ( s , z ) = - 1 / ( \lambda d _ { z } ^ { \pi } ( s ) + \rho _ { z ^ { c } } ( s ) ) ,
$$
where $I _ { \mathrm { S D 3 } } ( s , z )$ denotes the density ratio for a specific $( s , z )$; the proof is given in Appendix A. Thus, for skill $z$, increasing $\lambda$ weakens the gradient of SD3 toward reducing the state densities of other skills, which prevents skill collapse in SD3. (ii) We introduce explicit intra-skill exploration based on the latent space learned during skill-density estimation, which will be discussed in §III-B. To maximize $ { I _ { \mathrm { S D 3 } } }$, we adopt a modified Conditional Variational Auto-Encoder (CVAE) to stably estimate the state density of each skill, which we introduce as follows.
Fig. 1: An overview of the CVAE architecture. (a) The encoder-decoder network with soft modularization. The feature extractor for the state can be an MLP or convolutional layers, depending on whether the environment is state- or image-based. (b) Both the inter-skill diversity objective for skill discovery and the intra-skill intrinsic reward for exploration can be derived from the learned CVAE.
CVAE for State Density Estimation. In SD3, we estimate a lower bound of the skill-conditional state density (i.e., $\log d _ { z } ^ { \pi } ( s )$) via stochastic gradient variational Bayes. We adopt a CVAE with a latent representation $h$ to obtain a variational form as
$$
\begin{array} { r l } & { \log d _ { z } ^ { \pi } ( s ) = \mathbb { E } _ { Q ( h | s , z ) } \log \left[ P ( s | z ) \right] } \\ & { \qquad = \mathbb { E } _ { Q ( h | s , z ) } \log \left[ \frac { P ( s , h | z ) } { Q ( h | s , z ) } \right] + \mathbb { E } _ { Q ( h | s , z ) } \log \left[ \frac { Q ( h | s , z ) } { P ( h | s , z ) } \right] } \\ & { \qquad \geq \mathbb { E } _ { Q ( h | s , z ) } \log \left[ \frac { P ( s | h , z ) P ( h | z ) } { Q ( h | s , z ) } \right] } \\ & { \qquad = \underbrace { \mathbb { E } _ { Q ( h | s , z ) } \log \left[ P ( s | h , z ) \right] - D _ { \mathrm { K L } } \left[ Q ( h | s , z ) \, \| \, P ( h | z ) \right] } _ { \mathcal { L } _ { z } ^ { \mathrm { e l b o } } ( s ) } , } \end{array}
$$
where the latent vector $h$ is sampled from a variational posterior distribution (i.e., $Q ( h | s , z )$) conditioned on the state and skill, and the inequality holds by dropping the non-negative second term, which is by definition $D _ { \mathrm { K L } } ( Q ( h | s , z ) | | P ( h | s , z ) )$. Meanwhile, we use $P ( s , h | z ) = P ( h | z ) P ( s | h , z )$ to decompose the joint distribution. According to Eq. (3), maximizing the Evidence Lower-Bound (ELBO) $\mathcal { L } _ { z } ^ { \mathrm { e l b o } } ( s )$ approximates the skill-conditional state density, as $\log d _ { z } ^ { \pi } ( s ) \approx \operatorname* { m a x } _ { Q } \mathcal { L } _ { z } ^ { \mathrm { e l b o } } ( s )$. To maximize $\mathcal { L } _ { z } ^ { \mathrm { e l b o } } ( s )$, we learn an encoder network $Q _ { \phi } ( h | s , z )$ to obtain the posterior over the latent representation, where the posterior is a diagonal Gaussian. A latent vector $h$ is then sampled from the posterior, and a decoder network $P _ { \psi } ( s | h , z )$ is used to reconstruct the state. The KL-divergence in $\mathcal { L } _ { z } ^ { \mathrm { e l b o } } ( s )$ regularizes the latent space via a prior distribution $P ( h | z )$, which is set to a standard Gaussian. The whole objective is optimized via stochastic gradient ascent with the reparameterization trick [30], [31]. To calculate $ { I _ { \mathrm { S D 3 } } }$, we perform state density estimation for all skills via forward inference with the learned encoder and decoder, adopting efficient parallelization to compute $\mathcal { L } _ { z } ^ { \mathrm { e l b o } } ( s )$ for all skills $z \in { \mathcal { Z } }$ in one forward pass, which minimizes the run-time increase with the number of skills.
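Since the posterior is a diagonal Gaussian and the prior a standard Gaussian, the KL term of $\mathcal{L}_z^{\mathrm{elbo}}$ has a well-known closed form; a small sketch (function names are ours, and the reconstruction term is stubbed as a scalar that would come from the decoder):

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, logvar):
    """Closed-form D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), the KL term
    of the ELBO when the prior P(h|z) is a standard Gaussian."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(recon_log_likelihood, mu, logvar):
    """L^elbo = E_Q[log P(s|h,z)] - KL(Q(h|s,z) || P(h|z)); the first term
    is supplied here as a plain number standing in for the decoder output."""
    return recon_log_likelihood - gaussian_kl_to_standard_normal(mu, logvar)

# The KL vanishes exactly when the posterior matches the prior.
print(gaussian_kl_to_standard_normal(np.zeros(8), np.zeros(8)))  # 0.0
```

The same closed-form KL also appears later as the intra-skill exploration reward in Eq. (6), so one encoder pass yields both quantities.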
Soft Modularization for CVAE. As we maximize the state-density deviation in skill discovery, the resulting skills become diverse, and the corresponding state occupancies of different skills tend to be very different. In CVAE-based density estimation, since different skills share the same network parameters, optimizing $\mathcal { L } _ { z } ^ { \mathrm { e l b o } }$ for one skill can negatively affect the density estimation of other skills with significantly different state densities. Empirically, we also find that obtaining an accurate estimate of $d _ { z } ^ { \pi } ( s )$ for all skills $z \in { \mathcal { Z } }$ can be difficult. We therefore adopt a soft modularization technique that automatically generates soft combinations of network modules for different skills without explicitly specifying structures. As shown in Fig. 1, the soft modularized CVAE contains an unconditional basic network and a routing network, where the routing network takes the skill and state embeddings as input to estimate the routing strategy. Suppose each layer of the encoder/decoder network has $m$ modules; the routing network then gives probabilities $\boldsymbol { p } \in \mathbb { R } ^ { m \times m }$ that weight the modules contributing to the next layer. Specifically, given the probabilities $p ^ { l } \in \mathbb { R } ^ { m \times m }$ of the $l$-th layer, the probabilities of the next layer are
$$
p ^ { l + 1 } = \mathcal { W } ^ { l } \big ( \mathrm { R e L U } ( g ( p ^ { l } ) \odot ( u \odot v ) ) \big ) , u = f _ { 1 } ( s ) , v = f _ { 2 } ( z ) ,
$$
where $\odot$ denotes the element-wise product, and $g ( \cdot )$, $f _ { 1 } ( \cdot )$, and $f _ { 2 } ( \cdot )$ are fully connected layers: $f _ { 1 } ( \cdot )$ and $f _ { 2 } ( \cdot )$ map the state $s$ and skill $z$ to the same dimension (e.g., $d$), and $g ( \cdot )$ maps $p ^ { l }$ to dimension $d$. Then $\mathcal { W } ^ { l } \in \mathbb { R } ^ { m ^ { 2 } \times d }$ projects the joint feature to the probability vector of layer $l + 1$. In the basic network, we denote the input feature of the $j$-th module in the $l$-th layer as $g _ { j } ^ { l } \in \mathbb { R } ^ { d }$; then $g _ { i } ^ { l + 1 } = \sum _ { j } \hat { p } _ { i , j } ^ { l } ( \mathrm { R e L U } ( \mathcal { W } _ { j } ^ { l } g _ { j } ^ { l } ) )$ for the next layer, where $\hat { p } _ { i , j } ^ { l } = \exp ( p _ { i , j } ^ { l } ) / ( \sum _ { j = 1 } ^ { m } \exp ( p _ { i , j } ^ { l } ) )$ is the normalized weight with which the $j$-th module in the $l$-th layer contributes to the $i$-th module in the $( l + 1 )$-th layer. We remark that the soft modularization technique was originally proposed for multi-task RL [32], while we extend it to the encoder-decoder CVAE for density estimation.
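A minimal NumPy sketch of one soft-modularized layer (module count, widths, and weights are arbitrary; in the real architecture the routing scores would come from the state/skill-conditioned routing network of Eq. (4)):

```python
import numpy as np

def route_weights(p_l):
    """Normalize raw routing scores p^l in R^{m x m} into weights
    p_hat[i, j] via a softmax over source modules j."""
    e = np.exp(p_l - p_l.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def modular_layer(g_l, W_l, p_l):
    """g^{l+1}_i = sum_j p_hat_{i,j} ReLU(W^l_j g^l_j), with m modules of width d."""
    phat = route_weights(p_l)                         # (m, m)
    h = np.maximum(0.0, np.einsum('jab,jb->ja', W_l, g_l))  # per-module ReLU(W_j g_j)
    return phat @ h                                   # mix module outputs

rng = np.random.default_rng(0)
m, d = 4, 8
g = rng.normal(size=(m, d))        # module features g^l
W = rng.normal(size=(m, d, d))     # per-module weights W^l_j
p = rng.normal(size=(m, m))        # raw routing scores for this layer
out = modular_layer(g, W, p)
print(out.shape)  # (4, 8)
```

Because the routing weights depend on the skill, each skill effectively gets its own soft composition of shared modules, which is what stabilizes density estimation across very different skills.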
Fig. 2: An illustration of skill discovery in SD3. The skills start with overlapping areas and are separated via state-density deviation. Then, each skill explores the environment independently, resulting in overlapped but expanded areas. SD3 separates the areas again and leads to distinguishable skills. Such a process repeats and ultimately leads to exploratory and diverse skills.
# B. Latent Space Exploration
As discussed above, the SD3 objective that only maximizes density deviation may lead to skill collapse. In addition to the parameter $\lambda$ introduced in Eq. (1), we find that the learned CVAE in Fig. 1 provides a free-lunch intrinsic reward for efficient intra-skill exploration. In SD3, we derive an intrinsic reward based on the latent space that learns skill-conditioned representations of states. Specifically, the KL-divergence term $D _ { \mathrm { K L } } [ Q ( h | s , z ) \| r ( h ) ]$ in the CVAE objective serves as an upper bound of the conditional MI term $I ( S ; H | Z )$, as
$$
\begin{array} { r l } & { I ( S ; H | Z ) = \mathbb { E } _ { p ( s , z ) , Q _ { \phi } ( h | s , z ) } \big [ \log Q _ { \phi } ( h | s , z ) / P ( h | z ) \big ] } \\ & { \qquad \leq \mathbb { E } _ { p ( s , z ) , Q _ { \phi } ( h | s , z ) } \big [ \log Q _ { \phi } ( h | s , z ) / r ( h ) \big ] , } \end{array}
$$
where $H$ denotes the random variable of the sampled latent representation $h$, $r ( h )$ is the prior distribution, set to a standard Gaussian, and $P ( h | z ) \triangleq \mathbb { E } _ { P ( s | z ) } Q _ { \phi } ( h | s , z )$. The inequality holds since $D _ { \mathrm { K L } } [ P ( h | z ) | | r ( h ) ] \ge 0$ for all $z \in { \mathcal { Z } }$. Since $D _ { \mathrm { K L } } [ Q _ { \phi } ( h | s , z ) \| r ( h ) ]$ is constrained during CVAE learning, the MI between states and latent representations for each skill is also compressed according to Eq. (5). Thus, the latent space of the CVAE learns a compressive representation while retaining important information, as the representation is subsequently used for reconstruction. Based on the learned representation, we define the intrinsic reward for intra-skill exploration as
$$
\begin{array} { r } { r _ { z } ^ { \mathrm { e x p } } ( s ) = D _ { \mathrm { K L } } [ Q _ { \phi } ( h \vert s , z ) \vert \vert r ( h ) ] , } \end{array}
$$
where $Q _ { \phi } ( h | s , z )$ is the posterior network learned in the CVAE. The intrinsic reward in Eq. (6) quantifies the degree of compression of the representation with respect to the state, measuring skill-conditioned state novelty in a compact space for intra-skill exploration. Intuitively, if a state $s ^ { ( 1 ) }$ is frequently visited by skill $z$, then the corresponding latent distribution is close to $r ( h )$ according to Eq. (5), and the resulting reward $r _ { z } ^ { \mathrm { e x p } } \big ( s ^ { ( 1 ) } \big )$ will be close to zero. In contrast, if a state $s ^ { ( 2 ) }$ is novel for skill $z$, then the corresponding intrinsic reward will be high, since the latent posterior $Q _ { \phi } ( h | s ^ { ( 2 ) } , z )$ can be very different from the prior $r ( h )$. Thus, in exploration, such a reward encourages the policy to find scarcely visited states $\{ s ^ { + } \}$ (with a high $D _ { \mathrm { K L } } [ Q _ { \phi } ( h | s , z ) \| r ( h ) ]$) and explore these states.
Algorithm 1: SD3 Algorithm
An illustration of the skill learning process of SD3 is shown in Fig. 2. The state occupancies of different skills overlap initially, as in Fig. 2(i); we then maximize $ { I _ { \mathrm { S D 3 } } }$ via per-instance estimation, which we use as an intrinsic reward:
$$
r _ { z } ^ { \mathrm { s d } 3 } ( s ) = \log \frac { \lambda d _ { z } ^ { \pi } ( s ) } { \lambda d _ { z } ^ { \pi } ( s ) p ( z ) + \sum _ { z ^ { \prime } \neq z } d _ { z ^ { \prime } } ^ { \pi } ( s ) p ( z ^ { \prime } ) } ,
$$
which encourages skill-density deviation and leads to more diverse skills with separated state coverage, as in Fig. 2(ii). The exploration reward $r _ { z } ^ { \mathrm { e x p } } ( s )$ is then used to encourage intra-skill exploration, making each skill explore unknown areas independently. After exploration, the state coverage of each skill increases, which may again lead to overlapping state coverage among skills, as in Fig. 2(iii). The density-deviation reward $r _ { z } ^ { \mathrm { s d 3 } } ( s )$ then re-separates the updated areas to obtain distinguishable skills, as in Fig. 2(iv). This process repeats over many rounds, and SD3 finally learns exploratory and diverse skills. The algorithmic description of our method is given in Algorithm 1.
# C. Qualitative Analysis
In this section, we give a qualitative analysis of the proposed SD3 objective and exploration reward, which encourage inter-skill diversity and intra-skill exploration, respectively.
The skill discovery objective $I _ { \mathrm { S D 3 } }$ in Eq. (1) leads to diverse skills with separate explored areas, similar to the MI-based skill discovery objectives. As we usually set $\lambda \geq 1$ to prevent skill collapse, the following theorem connects $ { I _ { \mathrm { S D 3 } } }$ to the previous MI objectives.
Theorem 1. With $\lambda \geq 1$ , we have
$$
\begin{array} { r } { I ( S ; Z ) \le I _ { \mathrm { S D 3 } } \le c _ { 0 } + I ( S ; Z ) . } \end{array}
$$
where $c _ { 0 } = \log \lambda$. In particular, $I _ { \mathrm { S D 3 } } = I ( S ; Z )$ if $\lambda = 1$.
The above theorem shows that when we maximize skill deviation via $I _ { \mathrm { S D 3 } }$, the MI between $S$ and $Z$ also increases. The previous MI objective becomes a special case of $ { I _ { \mathrm { S D 3 } } }$, where the introduced $\lambda$ provides flexibility to control the strength of skill deviation. The proof of Theorem 1 is given in Appendix B. In the following, we connect the proposed intrinsic reward to provably efficient count-based exploration in the tabular case.
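Theorem 1 can be checked numerically on a toy discrete joint distribution; the sketch below (made-up skill densities, uniform prior) verifies the equality at $\lambda = 1$ and the sandwich bound at $\lambda = 2$:

```python
import numpy as np

# Toy check of Theorem 1: with a uniform prior over n skills and lambda = 1,
# I_SD3 coincides with I(S; Z); with lambda > 1 it exceeds I(S; Z) by at most log(lambda).
d = np.array([[0.7, 0.2, 0.1],   # d_z(s): rows are skills, columns are states
              [0.1, 0.6, 0.3]])  # (each row sums to 1; values are made up)
n = d.shape[0]
pz = np.full(n, 1.0 / n)

def I_sd3(lam):
    val = 0.0
    for z in range(n):
        for s in range(d.shape[1]):
            denom = lam * d[z, s] * pz[z] + sum(
                d[zp, s] * pz[zp] for zp in range(n) if zp != z)
            val += pz[z] * d[z, s] * np.log(lam * d[z, s] / denom)
    return val

ps = (pz[:, None] * d).sum(axis=0)                        # marginal p(s)
mi = sum(pz[z] * d[z, s] * np.log(d[z, s] / ps[s])
         for z in range(n) for s in range(d.shape[1]))    # I(S; Z)

print(np.isclose(I_sd3(1.0), mi))             # equality at lambda = 1
print(mi <= I_sd3(2.0) <= mi + np.log(2.0))   # sandwich bound at lambda = 2
```

At $\lambda = 1$ the denominator collapses to the marginal $p(s)$, which is exactly why the objective reduces to the MI.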
Fig. 3: Results for the maze experiment. We visually demonstrate the agent’s ability to explore the environment and the diversity of the discovered skills. The agent starts from the black dot of the maze and interacts for 250K steps. Neither DIAYN nor DADS reaches the right side of the maze, though both obtain distinguishable trajectories highlighted by different colors. The trajectories of CIC span the entire maze but appear chaotic. In contrast, SD3 reaches the farthest position from the starting point while keeping the trajectories of different skills easy to differentiate.
Note that since $\lambda$ only relates to the overall objective $ { I _ { \mathrm { S D 3 } } }$ and does not affect the estimation of state density, the exploration bonus holds for arbitrary $\lambda \geq 1$ .
Theorem 2. In tabular MDPs, optimizing the intra-skill exploration reward is equivalent to count-based exploration, as
$$
r _ { z } ^ { \mathrm { e x p } } ( s ) \approx \frac { | S | / 2 } { N ( s , z ) + \kappa } .
$$
where $N ( s , z )$ is the visitation count of the state-skill pair $( s , z )$ in the collected experience, $\vert { \cal S } \vert$ is the total number of states in the tabular case, and $\kappa > 0$ is a small positive constant.
As a result, maximizing the intra-skill exploration reward is equivalent to performing count-based exploration as in previous works [33], [34], which is provably efficient in tabular MDPs [35], [36]. Through the approximation in a compact latent space, intra-skill exploration encourages the skill-conditional policy to increase the pseudo-count of rarely visited state-skill pairs in a high-dimensional space. The proof of Theorem 2 is attached in Appendix C.
# IV. RELATED WORK
# A. Unsupervised Skill Discovery
Unsupervised skill discovery in RL aims to acquire a repertoire of useful skills without relying on extrinsic rewards. Early efforts, such as VIC [20], DIAYN [23], and DADS [37], maximize the MI between the skill and the state to discover diverse skills. However, as noted in EDL [38], LSD [26], and CSD [27], such MI-based methods usually prefer static skills with poor state coverage, which may hinder their application to downstream tasks. Recent methods strive to address this limitation and learn dynamic, meaningful skills: they perform explicit exploration or enforce Lipschitz constraints on the representation to maximize the traveled distances of skills. Further, CIC [28] employs contrastive learning between state transitions and skills to encourage the agent’s diverse behaviors. BeCL [24] uses contrastive learning to differentiate between various behavioral patterns and implicitly maximize the entropy. ReST [39] encourages the trained skill to stay away from the estimated state visitation distributions of other skills. Some methods, such as DISCO-DANCE [40], APS [41], SMM [42], and DISDAIN [43], introduce an auxiliary exploration reward to address insufficient exploration. Furthermore, to verify the effectiveness of skill discovery in large-scale state spaces (e.g., images), recent methods including Choreographer [44] and Metra [25] evaluate on pixel-based URLB [29], often relying on model-based agents to learn meaningful knowledge from imagination, with skills discovered in the latent space. Metra [25] constructs a latent space associated with the original state space via a temporal distance metric, which enables skill learning in high-dimensional environments by maximizing coverage. In contrast, our method promotes skill diversity by encouraging deviations in skill density and enhances state coverage through latent space exploration.
We validate our approach’s efficacy through experiments on state-based and pixel-based tasks across various environments.
# B. Unsupervised RL
According to URLB [19], URL algorithms are classified into three main categories: knowledge-based, data-based, and competence-based. Knowledge-based algorithms [45]–[47] leverage the agent’s predictive capacity or understanding of the environment, and the intrinsic reward is tied to the novelty of the agent’s behaviors, encouraging the agent to explore areas where its model is less certain. Data-based algorithms [21], [48] maximize the state entropy to improve the state coverage of skills. Competence-based algorithms [23], [41], [42], [49] pre-train the agent to learn useful skills that can be utilized to complete downstream tasks. Our method can be categorized as competence-based, while also combining the benefit of knowledge-based algorithms to encourage exploration. In addition, some recent algorithms do not easily fit into these categories. For example, LCSD [50] establishes connections between skills, states, and linguistic instructions to guide task completion based on external language directives. DuSkill [51] utilizes a guided diffusion model to generate versatile skills beyond dataset limitations, thereby enhancing the robustness of policy learning across diverse domains. EUCLID [52] improves downstream policy learning performance by jointly pre-training dynamic models and unsupervised exploration strategies. VGCRL [53] applies variational empowerment to learn effective state representations, thereby improving exploration.
Fig. 4: Results for state-based URLB. The aggregate statistics [54] indicate the adaptation performance of different unsupervised RL methods in 12 downstream tasks. In terms of IQM, Mean, and OG metrics, SD3 outperforms other competence-based methods and significantly surpasses pure exploration methods, achieving $77.37\%$, $76.19\%$, and $23.91\%$, respectively.
TABLE I: Results of SD3 and other baselines on state-based URLB.
# V. EXPERIMENTS
We start by introducing experiments in Maze to visualize the skills. Subsequently, we validate the effectiveness of SD3 by conducting experiments on challenging tasks from the DeepMind Control Suite (DMC) [55], with both state-based [19] and pixel-based [29] observations. Finally, we conduct ablation studies to demonstrate the factors that influence the effectiveness of SD3.
# A. Maze Experiment
We conduct experiments in a 2D maze to visually demonstrate the learned skills, as shown in Fig. 3. The agent’s initial state is represented by a black dot, with different colored lines indicating the trajectories corresponding to the different skills it has learned. The agent’s state is the current positional information, and the actions represent the velocity and direction of movement. Building on this, we compare SD3 with two classical MI-based methods, DIAYN [23] and DADS [37], whose objectives correspond to the reverse form $\mathcal{H}(Z) - \mathcal{H}(Z|S)$ and the forward form $\mathcal{H}(S) - \mathcal{H}(S|Z)$ of the MI term $I(S;Z)$, respectively. Additionally, we compare SD3 with the entropy-based CIC algorithm [28], whose primary objective is to maximize the state-transition entropy $\mathcal{H}(\tau)$ to generate diverse behaviors. We employ PPO as the backbone and train $n = 10$ skills for each algorithm.
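For reference, the reverse-form objective of DIAYN reduces to an intrinsic reward of the form $\log q(z|s) - \log p(z)$, with a learned skill discriminator $q$ and a uniform prior $p(z) = 1/n$; a minimal sketch (the raw-logit discriminator interface is an assumption for illustration):

```python
import math

def diayn_reward(discriminator_logits, skill, n_skills):
    """Reverse-form MI reward: log q(z|s) - log p(z), with uniform prior p(z) = 1/n.

    discriminator_logits: raw per-skill scores q(.|s) produced by the discriminator.
    """
    # Numerically stable softmax over skill logits gives q(z|s).
    m = max(discriminator_logits)
    exps = [math.exp(l - m) for l in discriminator_logits]
    q = exps[skill] / sum(exps)
    return math.log(q) - math.log(1.0 / n_skills)
```

When the discriminator confidently recovers the skill from the state ($q \to 1$), the reward approaches $\log n$; when the state reveals nothing about the skill, the reward is zero or negative.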
We delineate the learned skills of each algorithm within the maze environment in Fig. 3 and introduce two key metrics for comparing SD3 with other methods: state coverage and distinguishability of skills, where insufficient state coverage may impede the acquisition of dynamic skills, and a lack of distinguishability leads to similar behaviors across skills. According to the results, (i) DIAYN and DADS fail to extend to the upper-right corner of the maze, but exhibit clear distinctions among the trajectories of skills, indicating that merely maximizing $I(S;Z)$ can learn discriminable skills but lacks effective exploration of the state space; (ii) CIC demonstrates the best state coverage while learning skills with mixed trajectories, due to the maximization of $\mathcal{H}(S)$ as its primary objective; (iii) in contrast, SD3 strikes a balance between state coverage and empowerment in skill discovery. It learns discriminable skills by maximizing the deviation between the state densities of a certain skill and others. Meanwhile, SD3 achieves commendable state coverage through latent space exploration.
Fig. 5: Results for Pixel-based URLB. We conduct experiments on pixel-based URLB to demonstrate the scalability of SD3 for large-scale problems.
Fig. 6: Results for robustness experiment. It can be observed that SD3 retains higher performance ratio than CIC in the noisy domain.
# B. State-based URLB
According to state-based URLB [19], we evaluate our approach in 12 downstream tasks across 3 distinct continuous control domains, each designed to evaluate the effectiveness of algorithms under high-dimensional state spaces. The three domains are Walker, Quadruped, and Jaco Arm. Specifically, Walker involves a biped constrained to a 2D vertical plane with a state space $\mathcal{S} \in \mathbb{R}^{24}$ and an action space $\mathcal{A} \in \mathbb{R}^{6}$. The agent in the Walker domain must learn to maintain balance and move forward, completing four downstream tasks: stand, walk, run, and flip. Quadruped features a four-legged robot in a 3D environment, characterized by a state space $\mathcal{S} \in \mathbb{R}^{78}$ and an action space $\mathcal{A} \in \mathbb{R}^{16}$. The downstream tasks, including stand, run, jump, and walk, pose challenges to the agent due to the complex dynamics of its movements. Jaco employs a 6-DOF robotic arm with a three-finger gripper, functioning within a state space $\mathcal{S} \in \mathbb{R}^{55}$ and an action space $\mathcal{A} \in \mathbb{R}^{9}$. Primary downstream tasks in Jaco Arm include reaching and manipulating objects at various positions.
Baselines. We conduct comparisons between SD3 and the baselines delineated across the three URL algorithm categories as defined by URLB [19]. These categories encompass knowledge-based baselines, which consist of ICM [45], Disagreement [46], and RND [47]; data-based baselines, which include APT [21] and ProtoRL [48]; and competence-based baselines, comprising SMM [42], DIAYN [23], and APS [41]. Furthermore, we extend our comparisons to include other novel competence-based algorithms such as CSD [27], Metra [25], BeCL [24], and CIC [28].
Evaluation. We employ a rigorous evaluation to assess the performance of SD3 alongside other algorithms, involving a two-phase process. Initially, pre-training of 2M steps is performed using only intrinsic rewards, followed by a fine-tuning phase of 100K steps on each downstream task using extrinsic rewards. Building upon prior work [19], we utilize DDPG as the backbone algorithm. To ensure statistical rigor and mitigate the impact of incidental factors in RL training, we conduct experiments across multiple seeds (10 seeds per algorithm), resulting in a substantial volume of runs (i.e., $1560 = 13$ algorithms $\times$ 10 seeds $\times$ 3 domains $\times$ 4 tasks). The detailed scores in downstream tasks are attached in Table I. We employ four statistical metrics to assess performance: Median, interquartile mean (IQM), Mean, and optimality gap (OG) [54]. IQM focuses on the central tendency of the middle $50\%$ of runs, excluding the top and bottom quartiles. OG measures the extent to which the algorithm falls short of the optimal level, where the optimal level is determined by the expert models’ ultimate score on each downstream task.
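The two less familiar aggregate metrics can be computed as below on expert-normalized scores in $[0, 1]$; the quartile-trimming convention follows the standard definition of these statistics [54].

```python
def iqm(scores):
    """Interquartile mean: average of the middle 50% of sorted scores,
    discarding the bottom and top quartiles."""
    s = sorted(scores)
    n = len(s)
    trimmed = s[n // 4 : n - n // 4]
    return sum(trimmed) / len(trimmed)

def optimality_gap(scores, gamma=1.0):
    """Mean shortfall of expert-normalized scores below the optimal level gamma.
    Lower is better; scores above gamma contribute zero."""
    return sum(max(0.0, gamma - x) for x in scores) / len(scores)
```

IQM is more robust to outlier runs than the plain mean, while OG rewards algorithms whose worst runs still approach expert performance.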
Results. According to Fig. 4, SD3 achieves the highest IQM score at $77.37\%$, slightly surpassing CIC and BeCL, which score $75.19\%$ and $75.38\%$, respectively, and significantly outperforming other competence-based algorithms such as Metra ($61.01\%$), CSD ($54.93\%$), and APS ($43.61\%$). On the OG metric, SD3’s gap to optimal performance is $23.91\%$, marginally better than CIC and BeCL at $25.65\%$ and $25.44\%$, respectively, and far superior to Metra ($39.25\%$), CSD ($42.43\%$), and APS ($55.76\%$). Additionally, compared to purely exploratory methods, SD3 significantly outperforms the best-performing method, APT, on both IQM and OG metrics, with APT scoring $67.74\%$ and $34.98\%$ on these metrics, respectively. The remarkable performance of SD3 stems from two main factors. First, the use of $r^{\mathrm{sd3}}$ facilitates the learning of distinguishable skills by the agent, enabling effective adaptation across various downstream tasks. Second, the learned compressed representation of the high-dimensional state space leads to efficient intra-skill exploration within a compact space, which not only maintains skill consistency but also enhances exploration ability.
# C. Pixel-based URLB
To further validate the effectiveness of SD3, we conduct experiments on pixel-based URLB [29], which includes the Walker and Quadruped domains with 8 downstream tasks. The pixel-based environment employs raw pixel data as input, foregoing abstracted features or processed sensor information. The challenge of deriving meaningful skills from such unrefined inputs is substantial, particularly in the absence of external rewards. Meanwhile, exploration becomes more difficult in image-based spaces, thereby testing the exploration ability of algorithms under conditions that closely resemble practical applications.
TABLE II: Results of SD3 and baselines on pixel-based URLB.
TABLE III: Results of robustness experiments.
Baselines. We compare SD3 with the top three performing algorithms in the state-based experiments, i.e., BeCL [24], CIC [28], and APT [21], as well as with recently proposed skill discovery algorithms including CSD [27] and Metra [25]. Among these, APT stands out as a data-based algorithm, which can also be considered a representative of pure exploration algorithms and demonstrates strong performance in exploring environments. The others are competence-based algorithms, which accomplish downstream tasks by learning useful and diverse skills.
Evaluation. We conduct 2M steps of pre-training solely based on intrinsic rewards in each domain, followed by 100K steps of fine-tuning on the downstream tasks using extrinsic rewards. The scores achieved in the downstream tasks are used to evaluate the algorithm. According to the official benchmark of the pixel-based URLB [29], unsupervised RL algorithms often perform poorly when combined with a model-free method (e.g., DDPG [56] or DrQv2 [57]) with image observations, while performing much better when using a model-based backbone (e.g., Dreamer [58]). Thus, we follow this setting and conduct experiments with the Dreamer backbone. We report the average adaptation performance in Fig. 5. In the relatively simple Walker domain, SD3 achieves the best performance ($93.42\%$), slightly outperforming other methods (i.e., CIC-$91.29\%$, APT-$88.17\%$, CSD-$84.26\%$). In the challenging Quadruped domain, SD3 outperforms CIC ($77.57\%$ and $75.89\%$, respectively) and shows significant improvement over other competence-based methods (i.e., CSD-$65.89\%$, Metra-$53.53\%$) and the best pure-exploration method in state-based URLB (i.e., APT-$61.96\%$). This highlights SD3’s commendable advantages across various image-based tasks. The detailed scores are attached in Table II.
# D. Robustness Experiment
Unlike CIC, APS, and BeCL, which rely on entropy-based exploration strategies, SD3 introduces a novel exploration reward that resembles a UCB-style bonus. Such a UCB term is provably efficient for exploration in linear and tabular MDPs, as rigorously studied in previous research [59], [60]. In contrast, the entropy-based exploration used in previous methods has the disadvantage of being non-robust (e.g., adding small noise will significantly affect the entropy estimate). Thus, to further verify the robustness of SD3, we conduct experiments in noisy domains of URLB by adding noise during pre-training, sampled from $\mathcal{N}(0, 0.1)$, followed by noise-free fine-tuning to assess the learned skills.
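The noisy-domain setup can be reproduced with a simple observation wrapper; the gym-style `reset`/`step` interface and the reading of 0.1 as the noise standard deviation are assumptions for illustration.

```python
import random

class NoisyObservationWrapper:
    """Adds i.i.d. Gaussian noise to each observation dimension during
    pre-training; fine-tuning then uses the unwrapped, noise-free environment."""
    def __init__(self, env, sigma=0.1):
        self.env = env
        self.sigma = sigma

    def _noisy(self, obs):
        return [x + random.gauss(0.0, self.sigma) for x in obs]

    def reset(self):
        return self._noisy(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._noisy(obs), reward, done, info
```

Only observations are perturbed; rewards and dynamics are untouched, so the robustness comparison isolates how each intrinsic reward reacts to observation noise.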
Evaluation. We choose CIC for comparison, which performs competitively with our method on standard URLB. Each technique is evaluated across 5 random seeds, and the results are given in Fig. 6(b). The Performance Ratio (PR) denotes the ratio of the adaptation score in the noisy domain to that in the normal setup. According to the results, it is evident that the UCB bonus used in SD3 is more robust than entropy-based rewards in noisy environments, achieving a significantly higher Performance Ratio than CIC. The detailed results are attached in Table III.
# E. Ablation Studies
We provide ablation studies for components in skill discovery and skill adaptation of SD3. For skill discovery, we perform a comparison of (i) density estimation with and without soft modularization. The final rewards for skill discovery contain $r_{z}^{\mathrm{sd3}}(s)$ and $r_{z}^{\mathrm{exp}}(s)$. We conduct ablation studies on (ii) different settings of $\lambda$ in calculating $r_{z}^{\mathrm{sd3}}(s)$, as well as (iii) different balance factors between the two rewards. For skill adaptation, we sample skills randomly to evaluate their generalization ability in our main results. In the ablation studies, (iv) we evaluate two more skill-choosing strategies in adaptation for comparison.
Fig. 7: Ablation on the soft modularization structure.
Fig. 8: Results for the impact of the weight parameter in the Quadruped domain. When $\lambda$ is set to 0.5 or 1, SD3 performs poorly. However, increasing $\lambda$ beyond 1 does not significantly impact the performance of SD3.
1) Impact of Soft Modularization: As mentioned in section III-A, we use a CVAE to estimate the state density of different skills. To enhance the accuracy of estimation in complex state spaces, we introduce soft modularization into the traditional CVAE structure. Consequently, we conduct an ablation study on soft modularization. Aggregated scores are reported in Fig. 7. We observe that SD3 with the soft-modularized CVAE obtains superior performance, as it has sufficient capacity to learn the density information of different skills for the same state in complex state spaces, whereas in the traditional CVAE the density estimate of one skill may interfere with those of other skills.
2) Impact of Weight Parameter $\lambda$: The discussion in section III-A introduces a weight parameter $\lambda$ in Eq.(1). To investigate the impact of $\lambda$, we conduct an ablation study by varying $\lambda$ over [0.5, 1.0, 1.5, 2.0, 3.0]. The results, exhibited in Fig. 8, indicate that the performance of SD3 fluctuates within a narrow range when $\lambda$ is greater than 1. Therefore, we conclude that $\lambda$ is applicable over a wide range, and SD3 is not sensitive to the parameter when $\lambda \geq 1.5$.
3) The Exploration Ratio: We conduct an ablation on different exploration ratios $\alpha$. Specifically, with the hyperparameter $\alpha$, the reward is represented as:
$$
r_{z}^{\mathrm{total}}(s) = r_{z}^{\mathrm{sd3}}(s) + \alpha \cdot r_{z}^{\mathrm{exp}}(s).
$$
As illustrated in Fig. 9(a), when $\alpha$ is set to 0 or 0.02, the agent can learn distinguishable and convergent skills but fails to fully explore the maze. When $\alpha$ is set to 0.08, the agent explores sufficiently, but the trajectories at the endpoints are quite scattered, indicating that the learned skill strategies lack stability. In contrast, $\alpha = 0.04$ balances exploration and skill diversity. According to our analysis, when the proportion of exploration is deficient or absent, SD3 solely maximizes $I_{\mathrm{SD3}}$. Conversely, an excessively high $\alpha$ can overly prioritize intra-skill exploration, resulting in instability within the learned skills. Empirically, $\alpha = 0.04$ leads to promising results in downstream tasks in the Quadruped domain.
4) Skill Adaptation Strategies in Fine-tuning: In the experiment described in section V-B, we follow the URLB standards for fair comparison, employing random skill sampling during fine-tuning to evaluate average skill performance. To enhance skill adaptation, we introduce two methods: regress-meta and meta-controller. Regress-meta estimates the expected reward of each skill during the initial 4K fine-tuning steps to compute its skill-value, selecting the skill with the highest value for downstream tasks. Meta-controller trains a high-level controller, $\mu(z|s)$, during fine-tuning to select the most suitable skill $z$ based on the current state $s$. This controller integrates with the pre-trained policy $\pi(a|s,z)$ to form the high-level policy $\pi(a|s) = \sum_{z \in \mathcal{Z}} \mu(z|s)\pi(a|s,z)$.
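A minimal sketch of the regress-meta strategy, assuming a `rollout_return(z)` helper that runs one fine-tuning episode with skill $z$ and returns its score (a hypothetical placeholder for the actual evaluation loop):

```python
def regress_meta(skills, rollout_return, budget=4000, episode_len=1000):
    """Estimate each skill's expected downstream return within a step budget,
    then commit to the highest-value skill for the rest of fine-tuning."""
    steps_per_skill = budget // len(skills)
    episodes = max(1, steps_per_skill // episode_len)
    values = {}
    for z in skills:
        returns = [rollout_return(z) for _ in range(episodes)]
        values[z] = sum(returns) / len(returns)  # skill-value estimate
    return max(values, key=values.get)
```

As noted in the results discussion, committing early to the skill that looks best in the first 4K steps is a greedy choice: it usually pays off, but can lock in a skill that underperforms later in fine-tuning.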
The results in Fig. 10 demonstrate that regress-meta improves performance over random skill selection in Quadruped Stand, Walk, and Run, but shows a slight decline in Quadruped Jump. This outcome likely stems from regress-meta’s strategy of consistently selecting the skill with the highest expected reward during the initial fine-tuning phase. While this increases the likelihood of selecting a well-adapted skill, it may also favor skills that perform well in the first 4K steps but underperform in later stages. Conversely, the meta-controller shows comparatively poor performance, which we attribute to its reliance on large amounts of training data, making it challenging to converge within the 100K fine-tuning steps.
# F. Visualization
1) Tree-like Maze: As shown in Fig. 11, we conduct additional experiments in the tree-like maze to visualize the skills learned by SD3. It can be observed that DIAYN and DADS only reach the middle of the maze, whereas SD3 successfully reaches the bottom of the maze. The proposed latent space reward in SD3 demonstrates strong exploration ability in large-scale mazes. Moreover, the trajectories of different skills remain distinguishable in SD3.
2) Deepmind Control Suite: Fig. 12 shows the learned skills in the Walker, Quadruped, and Jaco Arm domains. The results show that SD3 can learn various locomotion skills, including standing, walking, rolling, moving, and somersaulting;
(a) The impact of exploration ratio in maze environment
(b) The impact of exploration ratio in state-based Quadruped
Fig. 9: Results for the impact of exploration ratio. (a) We conduct experiments with different $\alpha$ in the maze and find that varying $\alpha$ values significantly impact both the state coverage and the stability of learned skills. (b) In the Quadruped domain, different $\alpha$ values also have a notable effect on the performance of various downstream tasks.
Fig. 10: Skill adaptation strategies ablation. We test several adaptation methods in the fine-tuning phase and find that randomly selecting skills performs comparably to using regress-meta, but employing the meta-controller results in a decline in performance.
and also learns various manipulation skills by moving the arm to explore different areas, opening and closing the gripper in different locations. The learned meaningful skills lead to superior generalization performance in the fine-tuning stage of various downstream tasks.

Abstract: Unsupervised Reinforcement Learning (RL) aims to discover diverse behaviors
that can accelerate the learning of downstream tasks. Previous methods
typically focus on entropy-based exploration or empowerment-driven skill
learning. However, entropy-based exploration struggles in large-scale state
spaces (e.g., images), and empowerment-based methods with Mutual Information
(MI) estimations have limitations in state exploration. To address these
challenges, we propose a novel skill discovery objective that maximizes the
deviation of the state density of one skill from the explored regions of other
skills, encouraging inter-skill state diversity similar to the initial MI
objective. For state-density estimation, we construct a novel conditional
autoencoder with soft modularization for different skill policies in
high-dimensional space. Meanwhile, to incentivize intra-skill exploration, we
formulate an intrinsic reward based on the learned autoencoder that resembles
count-based exploration in a compact latent space. Through extensive
experiments in challenging state and image-based tasks, we find our method
learns meaningful skills and achieves superior performance in various
downstream tasks.
# I. INTRODUCTION
“Vibe coding” refers to a novel, emergent mode of software development in which the human programmer (Figure 1a) operates less as a direct implementer of code and more as a high-level coordinator who collaborates with LLMs through iterative prompting and strategic direction [1]. Coined by Andrej Karpathy [1], [2], the term captures a shift in both mindset and methodology, where developers communicate desired outcomes, the “vibe,” via natural language instructions, conceptual overviews, and progressive refinements, rather than by specifying logic in syntactic detail. Unlike traditional paradigms, which emphasize mastery over syntax [2]–[4] and low-level operations [5]–[7], vibe coding foregrounds high-level intent and iterative collaboration with the model.
This interactional loop of guidance, AI response, human evaluation, and corrective feedback yields a dynamic coding process that is simultaneously expressive, generative, and improvisational. It invites the question: can the act of software engineering become more intuitive, collaborative, and aligned with human reasoning, rather than merely a transcription of formal logic into text? Vibe coding attempts to answer in the affirmative, proposing a new semiotic contract between the human mind and generative machines.
The rise of vibe coding parallels the rapid advancement of foundation models and the growing availability of LLM-based development platforms such as ChatGPT [9], Replit AI, and Cursor. Traditional software engineering emphasized rigid syntax [10]–[12], algorithmic structure [13], [14], and deterministic logic [15], [16]. In contrast, LLMs now allow developers to produce coherent, context-aware code using natural language, transforming code creation into a dialogue with the machine [17], [18]. Karpathy’s notion of “embracing exponentials” reflects this paradigm shift, where scale, abstraction, and expressiveness redefine programming practice.
While vibe coding represents a leap in developer productivity and human-AI interaction, agentic coding signifies a more advanced and autonomous evolution of AI-assisted programming. At its core, agentic coding (Figure 1b) is grounded in the deployment of agentic AI systems: software agents capable of independently interpreting high-level goals, decomposing tasks into subtasks, planning execution strategies, and adapting behavior based on real-time feedback and outcomes [19], [20]. Unlike the prompt-response dynamic of vibe coding, agentic coding minimizes the need for continuous human oversight. It introduces a paradigm wherein AI agents can initiate action, access tools and APIs, retrieve and process external data, and iteratively refine outputs through cycles of self-evaluation. These agents exhibit hallmark capabilities of agency, including intentionality, forethought, and adaptivity, aligning with definitions proposed in cognitive systems and artificial general intelligence research. This level of autonomy enables agentic coding systems to tackle complex, multi-step workflows, making them suitable for process automation, business operations, and dynamic, data-driven environments. Architecturally, agentic coding employs reinforcement learning and modular planning [21], often integrating specialized subagents that collaborate to complete broader missions [22]. In contrast to vibe coding’s focus on expressiveness and flow, agentic coding is outcome-oriented, resilient, and self-directed. If vibe coding equips developers with a high-speed copilot, agentic coding gives them an intelligent collaborator capable of independently steering the aircraft. This distinction underscores not merely a difference in tools, but a fundamental rethinking of how software is authored, executed, and evolved.
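The plan-execute-verify cycle described above can be summarized schematically; `plan`, `execute`, and `verify` are hypothetical callables standing in for LLM-backed components, not a specific framework's API.

```python
def agentic_loop(goal, plan, execute, verify, max_rounds=5):
    """Schematic agentic cycle: decompose a goal into subtasks, act on each,
    and re-plan from verification feedback until the goal is satisfied."""
    feedback = None
    for _ in range(max_rounds):
        subtasks = plan(goal, feedback)           # task decomposition / re-planning
        results = [execute(t) for t in subtasks]  # tool calls, code edits, API access
        ok, feedback = verify(goal, results)      # self-evaluation of outcomes
        if ok:
            return results
    return None  # budget exhausted: escalate to human oversight
```

The human's role shrinks to defining `goal` and reviewing the escalation path, which is precisely the supervisory shift the paragraph describes.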
Understanding the distinction between vibe coding and agentic coding is crucial for navigating the future landscape of AI-assisted software development. These paradigms differ not just in their technical mechanisms, but in the underlying assumptions about the role of the human developer, the locus of agency, and the nature of control in the coding process. Vibe coding maintains a human-centric model, where the developer is an ever-present conductor guiding the AI’s output, a model well-suited to creative ideation, exploratory development, and rapid iteration. Agentic coding, on the other hand, introduces AI agents as semi-autonomous collaborators, shifting the human role to that of a supervisor who defines goals and evaluates outcomes. This shift has profound implications: it reconfigures workflows, challenges traditional notions of authorship and accountability, and necessitates new interfaces for monitoring, debugging, and aligning agent behavior with human intent. Moreover, the increasing prevalence of agentic systems raises critical questions around safety, reliability, and trustworthiness in software generated or managed with minimal human oversight.
As organizations, developers, and researchers embrace these technologies, a dynamic understanding of the boundaries, affordances, and use cases of each paradigm becomes essential. By developing a formal taxonomy of vibe coding and agentic coding, as this paper aims to do, we not only chart the evolution of AI in programming but also lay the groundwork for better tooling, clearer expectations, and more robust human-AI collaboration in both current and future systems.
# II. CONCEPTUAL FOUNDATIONS
# A. Vibe Coding: Intuition-Driven, Developer-Centric Code Generation
Vibe Coding, a term popularized by Andrej Karpathy, describes a software development methodology centered on the developer’s intuitive expression of intent to an LLM, which then acts as a highly responsive co-pilot in generating code. The “vibe” refers to the desired outcome, functionality, or even the aesthetic feel of a software component, which the developer communicates, often through natural language, for the LLM to translate into executable code. This paradigm fundamentally alters the traditional coding process by abstracting significant portions of syntactical detail and boilerplate, allowing developers to focus on higher-level design and rapid iteration.
1) The Semiotics of Vibe Coding: At its core, Vibe Coding introduces a new semiotic layer in programming [23], [24]. Traditionally, developers use programming languages as a direct, formal means of instructing a computer [25]–[27]. In Vibe Coding, natural language, high-level descriptions, and even examples or visual mock-ups serve as primary inputs to an intermediary intelligent system (the LLM) [28]. The LLM, in turn, interprets these multi-modal “signs” and synthesizes corresponding formal code [29], [30]. This process is not a one-way translation but an iterative dialogue. The developer provides an initial prompt; the LLM generates a code artifact; the developer reviews, critiques, and refines the prompt or directly edits the code, continuing the cycle until the desired “vibe” is achieved [31], [32]. This iterative refinement loop is a defining characteristic, reflecting a co-constructive model of meaning-making between human and machine [33].
Fig. 2: Fundamental Skills and Cognitive Shifts in Vibe Coding. This diagram illustrates the five core competencies (Thinking, Framework, Checkpoints, Debugging, and Context) that enable effective collaboration with LLMs. Together, they represent a cognitive shift from syntax-heavy implementation to high-level conceptual guidance and iterative co-creation with AI agents.
2) Fundamental Skills and Cognitive Shifts: Philosophically, Vibe Coding empowers the developer by augmenting their cognitive capabilities. It offloads the burden of low-level implementation details, allowing for a greater focus on creative problem-solving, user experience design, and system architecture. The “vibe” is thus not merely an aesthetic preference but a holistic representation of the developer’s intent, encompassing functionality, usability, and design. Effective Vibe Coding necessitates a shift in developer skills, emphasizing conceptual articulation and strategic interaction over rote memorization of syntax [34], [35]. The five fundamental skills are illustrated in Figure 2, and each skill is explained below:
• Thinking (Strategic Problem Formulation): This involves a multi-layered approach to defining the problem for the LLM. It begins with Logical Thinking (the core what) [36], [37], progresses to Analytical Thinking (how users interact, high-level components) [38]–[40], then to Computational Thinking [41] (structuring the problem into modules, rules, and data flows understandable by the AI) [42], [43], and finally to Procedural Thinking (considering optimal execution, best practices, and detailed features) [44], [45]. A well-crafted Product Requirements Document (PRD) often emerges from this rigorous thinking process, serving as a detailed contextual blueprint for the LLM.
• Framework (Architectural Awareness): While the LLM handles much of the implementation, the developer must possess an awareness of relevant software frameworks (e.g., React, Node.js, Django), libraries, and architectural patterns [46]. This knowledge allows the developer to guide the LLM towards using appropriate, robust, and industry-standard technologies, thereby constraining the solution space and improving code quality and maintainability [47]. The developer can also learn about new frameworks by querying the LLM for recommendations based on project requirements.
• Checkpoints (Version Control): Given the generative and sometimes unpredictable nature of LLM outputs, robust version control (e.g., Git) is paramount. Frequent commits create “save points,” enabling developers to revert to stable states if AI-generated code introduces errors or undesirable changes. Branching allows for safe experimentation with different AI-generated features without impacting the main codebase [48], [49]. This ensures a safety net for the rapid, iterative cycles inherent in Vibe Coding.
• Debugging (Collaborative Error Resolution): Errors are inevitable. In Vibe Coding, debugging becomes a collaborative process [50], [51]. The developer identifies an issue (runtime error, logical flaw, UI discrepancy) and then provides the LLM with rich context error messages, relevant code snippets, descriptions of expected vs. actual behavior, and sometimes screenshots [52], [53]. The LLM can then assist in diagnosing the problem and suggesting or implementing fixes. Human oversight is critical to guide this process and validate the AI’s solutions.
• Context (Information Provision): The efficacy of Vibe Coding is directly proportional to the quality and comprehensiveness of the context provided to the LLM [28], [54]. This includes not only the initial PRD and prompts but also visual mockups, examples of desired output, existing codebase snippets, API documentation for integrations, and explicit statements about preferred libraries, coding styles, or security constraints [55], [56]. Rich context minimizes ambiguity and helps the LLM generate more accurate and relevant code.
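The Checkpoints skill above can be made concrete with a minimal Git session; the repository name, file, and commit messages are illustrative.

```shell
# Create an isolated demo repository for the checkpoint workflow.
mkdir -p vibe-demo && cd vibe-demo
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# Checkpoint: commit after each accepted AI-generated change.
echo "v1" > app.py
git add app.py && git commit -q -m "checkpoint: initial AI-generated scaffold"

# Experiment safely on a branch; the main line stays stable.
git checkout -q -b ai-experiment
echo "v2" > app.py
git add app.py && git commit -q -m "experiment: AI refactor of app.py"

# If the experiment misbehaves, return to the last good checkpoint.
git checkout -q -
cat app.py   # back on the original branch with the "v1" checkpoint restored
```

Each commit is a revert point, and the branch isolates speculative AI output from the stable codebase, exactly the safety net the Checkpoints skill calls for.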
3) Interaction Model and Workflow Integration: The interaction model in Vibe Coding is predominantly a tight, iterative prompt-response loop. The developer initiates with a high-level request, the LLM generates code, the developer reviews and refines either by editing the code directly or by providing a new, more specific prompt [57], [58]. This cycle repeats, often rapidly, enabling quick prototyping and exploration of different solution paths.
Vibe Coding tools, such as AI-enhanced IDEs (e.g., Cursor, Windsurf) or cloud-based platforms (e.g., Replit), integrate into the developer’s workflow by providing an interface for this interaction. However, the execution and final validation of the generated code typically occur within a standard development environment, often managed by the developer. This separation between generation and execution necessitates careful testing and integration, as the LLM does not inherently possess a runtime understanding of the code it produces in most Vibe Coding scenarios. This model thrives in creative and exploratory development phases but requires disciplined application of checkpointing and refactoring to manage potential technical debt accrued from rapid, less scrutinized code generation.
# B. Agentic Coding: Towards Autonomous Software Development Systems
Agentic coding, as illustrated in Figure 3, represents a paradigmatic shift in AI-assisted software engineering. Unlike vibe coding, where LLMs operate as conversational co-pilots [59], [60], agentic coding systems delegate substantial cognitive and operational responsibility to autonomous or semi-autonomous software agents. These agents are capable of planning, executing, and verifying complex software tasks, transforming natural language instructions into robust, testable code with minimal human guidance [61], [62]. Architecturally, this requires the convergence of goal planning, task decomposition, execution environments, safety infrastructure, and continuous feedback mechanisms [61], [63].
The core philosophy of agentic coding is delegated autonomy. Developers specify high-level objectives such as “integrate an external API,” “refactor backend routing,” or “set up CI workflows,” while the agent assumes responsibility for determining and executing the steps needed to accomplish those goals. This transforms the human’s role from low-level implementer to system-level supervisor and goal-setter.
Fig. 3: Core Capabilities in Agentic Coding: Illustrating the sequential and interconnected capabilities of agentic coding: Interpret High-Level Goals, Plan and Decompose Tasks, Utilize Tools and Resources, Execute and Iterate, Reason and Problem-Solve, Maintain Long-Term Context, and Self-Reflection and Correction within autonomous software agents
Agentic systems exhibit the following core capabilities:
• Interpret High-Level Goals: Agentic systems parse natural language prompts that span multiple files, layers, or components [19]. For instance, Jules (developed by Google) can respond to queries such as “integrate the Google Gemini API into the R1 robot” by identifying relevant entry points in the codebase.
• Plan and Decompose Tasks: Upon receiving a request, agents create internal execution plans. Jules, for example, breaks down the task into subtasks such as API research, data structure design, code insertion, documentation updates, and test plan execution.
• Utilize Tools and Resources: Agents autonomously interact with file systems, compilers, interpreters, test suites, Git repositories, APIs, and even browsers. In Codex, sandboxed environments are spun up for each task, with independent dependencies and runtime isolation.
• Execute and Iterate: Agents can modify source code (e.g., changing `RoboLogic.cs`), test their output, log failures, and retry iteratively. Codex, for instance, can automatically run `git diff`, apply patches, and generate pull requests.
• Reason and Problem-Solve: When encountering edge cases, agents apply heuristics, run static analysis, or search documentation. In Jules’s integration task, error handling included adjusting response parsers and dynamically reconfiguring the Unity Inspector.
• Maintain Long-Term Context: Codex maintains session state over complex multi-step tasks, managing API keys, dependencies, and environment variables. Persistent memory and vector store integration enable agents to reference earlier instructions and code changes.

• Self-Reflection and Correction: Emerging systems implement internal evaluation. Agents like Codex log their decision trees, summarize actions, propose revisions, and retry failed steps autonomously, presenting diffs and execution summaries to the user.

The human-agent interaction remains iterative but high-level. In Jules, for instance, developers are presented with reviewable summaries (“Ready for Review”) and given options to approve, revise, or publish branches. In Codex, task outcomes are presented with logs, diffs, and test results for validation before pushing to GitHub.

Fig. 4: Jules integrates the Google Gemini API: clone the GitHub repo and parse README.md; identify RoboLogic.cs and RoboListen.cs; create GeminiRequest and GeminiResponse; inject code for parsing and configuring models; update documentation with setup steps and API key.

When instructed to integrate the Google Gemini API into a robotics codebase, the agentic coding system Jules demonstrated a multi-step, autonomous workflow that exemplifies the principles of agentic software development. As illustrated in Figure 4, Jules began by cloning the target GitHub repository and analyzing the README.md file to establish project context and configuration. It then autonomously identified relevant integration points, namely RoboLogic.cs and RoboListen.cs, as the scripts most suitable for modification. The agent proceeded to generate two new data classes, GeminiRequest and GeminiResponse, to support the structure of the API’s request/response handling. It injected the necessary code to parse responses from the Gemini API and configured model parameters to be adjustable via Unity Inspector fields, streamlining developer interaction with the AI integration. To ensure usability and reproducibility, Jules updated the documentation, outlining API key requirements and configuration steps. Finally, it committed all modifications to a newly created Git branch and presented the changes for review. This sequence not only reflects an end-to-end software modification task performed autonomously but also highlights the value of agentic systems in managing complex API integrations, combining planning, reasoning, documentation, and version control in a unified pipeline.

Table I provides a structured taxonomy comparing the core characteristics, execution roles, and interaction patterns of Vibe Coding and Agentic Coding. It highlights how these paradigms differ in autonomy, developer responsibility, tool integration, and system maturity, offering a comprehensive view of their conceptual and technical distinctions.
1) Conceptual Architecture of Agentic Systems: Agentic coding systems (Figure 3) are architecturally distinct from prompt-driven LLM tools, exhibiting a modular and cognitively looped design tailored for autonomous software engineering. At their core, agentic platforms such as Codex and Jules integrate planning, execution, tool interaction, and evaluation into a cohesive, goal-driven framework.
The conceptual architecture typically comprises several interlinked components. A core reasoning engine, powered by an LLM, interprets high-level developer instructions and generates actionable plans. This is supported by a planning module, which decomposes abstract goals into a sequence of structured sub-tasks, using mechanisms such as chain-of-thought prompting or hierarchical task networks [19], [108], [109]. To enable environmental interaction, a tool use module grants agents the ability to execute commands or access APIs via function calling [110], [111]. This includes capabilities such as modifying configuration files, running shell commands, or interacting with Git repositories [112].
A critical feature is the presence of memory and context management [113], [114], facilitating persistent state tracking across multi-step workflows. Agents leverage both short-term working memory and long-term retrieval-augmented memory to maintain coherence across tasks, while sandboxed execution enforces safety through resource constraints, permission scoping, and rollback mechanisms [19].
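The two memory tiers can be sketched minimally in Python. The keyword-overlap retrieval below is a stand-in for the vector-store retrieval real agents use, and all names are illustrative:

```python
# Sketch of short-term working memory plus long-term retrieval memory.
# Retrieval here is naive keyword overlap, standing in for a vector store.

class AgentMemory:
    def __init__(self, window: int = 3):
        self.window = window
        self.short_term = []   # recent events, bounded to the context window
        self.long_term = []    # full history, searchable

    def record(self, event: str):
        self.long_term.append(event)
        self.short_term = (self.short_term + [event])[-self.window:]

    def retrieve(self, query: str, k: int = 2):
        """Return the k stored events sharing the most words with the query."""
        overlap = lambda e: len(set(e.lower().split()) & set(query.lower().split()))
        return sorted(self.long_term, key=overlap, reverse=True)[:k]

mem = AgentMemory()
for step in ["cloned repo", "edited RoboLogic.cs", "ran tests", "updated docs"]:
    mem.record(step)
```

After four events, only the last three remain in short-term memory, but `mem.retrieve("tests")` can still surface the earlier "ran tests" entry from long-term storage.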
TABLE I: Comparative Taxonomy of Vibe Coding versus Agentic Coding Paradigms
Feedback is central to the agentic paradigm where agents incorporate results from automated tests, logs, or human feedback through evaluation and learning mechanisms, adjusting future behavior accordingly [115], [116]. Architectures may further include an orchestration layer that coordinates specialized sub-agents (e.g., planner, coder, tester, documenter), facilitating parallelism and modular division of labor [112].
As illustrated in systems like Codex, this architecture transitions AI from a passive tool to an active collaborator capable of self-directed planning, decision-making, and refinement. These agents operate not merely as extensions of developer intent but as semi-autonomous entities capable of transforming high-level specifications into verifiable software artifacts [117], [118]. Agentic coding thus lays the foundation for scalable, adaptable, and increasingly intelligent development pipelines in real-world programming ecosystems [119], [120].
2) Shift in Developer Interaction and Control: The interaction paradigm in agentic coding represents a fundamental departure from the co-piloting model of vibe coding [121], [122]. Rather than engaging in direct, iterative instruction at the function or line level, the developer assumes a supervisory role: defining the mission, monitoring system behavior, and validating outcomes [123]. This transition from procedural engagement to goal-level delegation reflects a broader cognitive and operational realignment in human-AI collaboration.
At the outset, the developer is responsible for mission specification, articulating high-level objectives, architectural constraints, and system-level requirements [124], [125]. These inputs may encompass functional targets (e.g., “integrate external API for user analytics”), non-functional constraints (e.g., security, latency, portability), or domain-specific standards. The agent then plans and initiates the execution process autonomously [126]–[128].
Throughout execution, the developer assumes the role of an observer and strategic guide, reviewing real-time logs, intermediate artifacts, and agent-generated plans [129], [130]. This includes evaluating execution traces, test results, and change diffs [131], [132]. Intervention may be necessary when the agent encounters ambiguous requirements, edge cases beyond its training distribution, or tasks that involve ethical, legal, or architectural judgment. Critically, the developer also acts as the final verifier [133]–[135]. Before any integration or deployment, the human evaluates the full solution, ensuring correctness, compliance, and alignment with the project vision [136]. This oversight transforms the developer’s responsibilities from tactical implementation to strategic assurance and decision validation.
This evolving model requires a distinct set of cognitive and technical competencies. Developers must develop fluency in “agent management”: understanding agent capabilities, interpreting failure modes, designing effective prompts and constraints, and deploying diagnostic tools when agents deviate from expected behavior [112]. The trust placed in the agent must be balanced with a readiness to intercede, especially in high-risk or safety-critical contexts. Ultimately, the agentic interaction model foregrounds the human as a system architect, supervisor, and ethical gatekeeper, overseeing a semi-autonomous AI collaborator [19], [112]. This shift not only augments developer productivity but also redefines the nature of software engineering in AI-mediated environments. Additional detailed distinctions between Vibe Coding and Agentic Coding, including differences in autonomy, task scope, error handling, and developer role, are comprehensively summarized in Table I.
# III. TECHNICAL ARCHITECTURE AND CAPABILITIES
Although both vibe coding and agentic coding harness LLMs to augment software development, as depicted in Figure 5, their architectural intent and implementation are fundamentally distinct. Vibe Coding (Figure 5a) operates through developer-initiated, prompt-based interactions within IDEs or web-based environments, emphasizing conversational co-creation and low-friction prototyping. In contrast, Agentic Coding (Figure 5b) is grounded in delegated autonomy: developers specify high-level objectives, and intelligent agents, often composed of planner, executor, and toolchain modules, carry out multi-step coding workflows, potentially invoking compilers, APIs, test runners, and version control systems without continuous human supervision. To articulate these differences, this section presents a detailed architectural analysis through layered diagrams, pseudocode abstractions, and systematic tabular comparisons. The core architectural contrasts, ranging from context management and multi-agent orchestration to execution sandboxing and CI/CD integration, are summarized in Table II, offering researchers and system designers a clear framework for understanding the capabilities and trade-offs of each model. Additionally, we explore how feedback loops, validation protocols, and tool autonomy shape each paradigm’s suitability for different use cases, from rapid prototyping to enterprise-scale automation. By formalizing these architectural features, this section contributes a foundational taxonomy for evaluating emerging AI coding frameworks, informing both engineering decisions and future research in agentic software systems.
Fig. 5: Comparative Architecture of Vibe Coding and Agentic Coding (a) Vibe Coding: Developers provide prompts to an LLM within an IDE or web interface. The workflow relies on short-term context and manual execution, testing, and integration. (b) Agentic Coding: Developers define objectives processed by a planner, longterm memory, and executor modules. Agents autonomously use tools within sandboxed environments to complete multi-step workflows.
# A. Execution Models: Comparative Analysis of Architectural Design
1) Vibe Coding Interfaces and Developer-Driven Execution: Vibe coding architectures operate primarily through lightweight, stateless interfaces where LLMs serve as code-generation engines embedded in developer-centric environments such as IDEs, browser-based editors (e.g., Replit), or terminal integrations [92], [137]. The execution model is explicitly decoupled from the generation pipeline: LLMs suggest or write code in response to high-level prompts, but the responsibility for integration, execution, testing, and debugging remains with the human developer [138]–[140]. The developer copies generated snippets into their runtime environment, configures test cases, and manually interprets any resulting behavior.
This model emphasizes flexibility and creativity during early-stage development or rapid prototyping, leveraging prompt-response cycles to accelerate code synthesis. However, from an architectural standpoint, it exhibits a passive execution pipeline. There is no embedded runtime or agent-native validation loop [141], [142]. Instead, testing and validation are handled through external services: unit test frameworks, CI/CD tools, or manual test execution within local or cloud IDEs [143].
This asynchronous, generation-first design allows LLMs to focus on semantic synthesis and reuse of learned patterns [144], but introduces latency in feedback loops and a higher cognitive burden on the developer [145], [146]. The architecture lacks internal state management, agent memory, or runtime enforcement, reflecting its reliance on human-driven control over execution and validation.
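Because validation sits outside the generation pipeline, the developer typically writes and runs the checks themselves. A minimal sketch of this external validation step, where `slugify` stands in for a hypothetical LLM-generated helper:

```python
# A snippet imagined as vibe-coded output for the prompt
# "write a slug function", pasted into the project by the developer.

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# The validation loop lives outside the LLM: the developer authors
# the test cases and runs them in their own environment.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out ") == "spaced-out"

test_slugify()
```

In a real workflow these assertions would live in a pytest suite or CI job; the point is that the feedback loop is human-driven rather than agent-native.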
2) Agentic Coding Architectures and Autonomous Execution Pipelines: Compared to vibe coding, agentic coding systems incorporate fully integrated execution pipelines as a first-class architectural feature [147], [148]. These systems embed containerized, policy-constrained runtime environments such as Docker instances [149]–[151], WASM runtimes [107], [152], or lightweight QEMU-based emulators directly into the development agent’s operational core [153]. Within these sandboxes, autonomous agents can not only generate code but also execute, test, and iteratively refine it without requiring human intervention for each step [87], [154].
TABLE II: Architectural Comparison Between Vibe Coding and Agentic Coding Systems
Agentic execution architectures are characterized by modular task graphs where planner components decompose user goals into executable sub-tasks, and executor agents interact with the runtime to carry them out [155]. This allows for a tight coupling between generation, execution, and feedback [156]. Agents dynamically manage system state, interact with file systems [157], perform queries [158], [159], analyze logs [86], and retry failed attempts based on real-time results [160], [161]. Security and control are maintained through fine-grained resource isolation: sandboxing policies govern memory usage, file I/O, and network access [77], [162].
# Agentic Coding Workflow
Developer prompt: “Optimize SQL joins for user reporting.”
• Agent loads sandbox environment.
• Analyzes ORM model structure.
• Refactors queries and validates execution plan.
• Deploys tested code to staging environment.
This closed-loop, self-evaluating architecture reduces reliance on the human as a runtime operator and increases system autonomy. It supports advanced use cases such as multi-file refactoring, regression analysis, and continuous integration with minimal human oversight. Architecturally, this marks a transition from interactive co-programming to autonomous software engineering, where execution is proactive, contextual, and adaptively managed by intelligent agents.
# B. Autonomy and Feedback Loops in Vibe and Agentic Coding Paradigms
1) Vibe Coding: Human-Centric Control and Reactive Feedback: Vibe coding architectures operate under a fundamentally reactive model, wherein the human developer remains the sole agent responsible for validation, error detection, and iterative refinement [66]. The LLM acts as a stateless code synthesis engine generating outputs in response to prompt instructions but without any intrinsic feedback mechanism or self-evaluation capacity [97]. As such, the feedback loop exists entirely outside the system and is mediated by the developer through post-hoc testing, debugging, and prompt refinement [163], [164].
This model affords significant flexibility in exploratory or creative coding sessions. Developers may use short, expressive prompts (e.g., “Add JWT authentication to the login flow”) and immediately evaluate the output in their IDE or test environment. However, when prompts are vague or underspecified (e.g., “Make this more secure”), the LLM’s lack of situational awareness and task-level memory often leads to hallucinated or ambiguous outputs.
# Prompt Engineering Insight
Specific prompt: “Add role-based access control using JWTs and restrict admin endpoints.”

Vague prompt: “Make this more secure.”

Outcome: The former yields focused middleware code with user roles; the latter returns generic suggestions like hashing passwords twice or limiting requests, often misaligned with project context.
Due to its lack of autonomous validation [165], vibe coding systems are limited in production environments where reliability, regression testing, and integration constraints are critical [58], [166]. Developers must manually run tests, validate results, and reframe prompts for each iteration, rendering the process iterative but human-dependent. While suitable for front-end prototyping, documentation drafting, or low-risk automation, the absence of self-driven error correction limits its robustness in complex systems.
2) Agentic Coding: Goal-Driven Autonomy with Feedback-Integrated Execution: In contrast, agentic coding frameworks are designed with feedback-driven autonomy as a core architectural principle [167]. Agents operate through multi-level feedback loops that include planning, execution, testing, evaluation, and corrective iteration, all orchestrated without human prompting between steps [19]. This architecture draws from reinforcement learning, symbolic planning, and black-box evaluation strategies to enable continuous improvement within a coding session.
A typical agentic workflow begins with a high-level task objective (e.g., “Build a PostgreSQL-backed user analytics dashboard”), which is decomposed into subtasks using internal planning modules. Each subtask (e.g., schema generation, query writing, UI wiring) is independently implemented and validated via in-agent execution environments. Failures trigger internal debugging logic, resulting in retrials, log inspection, or substitution strategies.
# Agent Feedback Algorithm
Input: Task Objective (T)
Output: Verified Implementation (I)
Decompose(T) -> [t1, t2, ..., tn]
For each ti:
    Implement(ti)
    Run Test(ti)
    if fail: Debug(ti), Repeat
Aggregate([t1...tn]) -> I
Return I
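The feedback algorithm above can be rendered as runnable Python. Here `decompose`, `implement`, `run_test`, and `debug` are placeholder stubs standing in for LLM and sandbox calls, not a real agent API:

```python
# Runnable sketch of the agent feedback loop. All component functions
# are illustrative stubs; a real agent would call an LLM and a sandbox.

def decompose(task: str) -> list[str]:
    # A real planner would query an LLM; here we split a toy objective.
    return [f"{task}: step {i}" for i in range(1, 4)]

def implement(subtask: str) -> str:
    return f"code for {subtask}"

def run_test(artifact: str) -> bool:
    # Stand-in validation; a real agent runs the test suite in a sandbox.
    return artifact.startswith("code for")

def debug(artifact: str) -> str:
    return artifact  # placeholder corrective step

def agent_feedback_loop(task: str, max_retries: int = 3) -> list[str]:
    results = []
    for sub in decompose(task):               # Decompose(T) -> [t1..tn]
        artifact = implement(sub)             # Implement(ti)
        attempts = 0
        while not run_test(artifact) and attempts < max_retries:
            artifact = debug(artifact)        # if fail: Debug(ti), Repeat
            attempts += 1
        results.append(artifact)
    return results                            # Aggregate([t1..tn]) -> I

implementation = agent_feedback_loop("Build analytics dashboard")
```

The `max_retries` bound is one common way to keep the loop from iterating indefinitely on a subtask the agent cannot fix.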
This closed-loop feedback enables high fidelity in repetitive and deterministic programming contexts [19], [112], such as dependency management, CI/CD configuration, or auto-generating test suites for large-scale systems. For example, an agent tasked with “Migrate project from JavaScript to TypeScript” will iterate through module identification, static analysis, AST rewriting, and runtime testing without developer intervention at each step.
Unlike vibe systems, agentic architectures support telemetry, traceability, and performance metrics at each layer, enabling outcome-aware re-planning and model fine-tuning. The result is an execution pipeline that resembles autonomous software engineering rather than assisted coding, capable of aligning long-term goals with tactical implementation across multiple files, systems, and APIs.
# C. Safety, Explainability, and System Constraints
1) Vibe Coding: Limited Guardrails and Post-Hoc Safety Mitigation: Vibe coding environments, by design, prioritize fluidity of interaction and developer creativity over integrated safety controls. The underlying architecture does not include runtime enforcement mechanisms, making safety and explainability externalized concerns. Outputs are typically generated without runtime awareness, leading to several risks in security-sensitive or regulated environments.
A critical architectural limitation is the absence of execution traceability. Since LLMs are stateless within a session, they cannot record, annotate, or justify their decisions unless explicitly prompted to do so [168]–[170]. This lack of interpretability becomes particularly concerning when the AI injects code with hardcoded credentials, insecure API calls [99], [171], or unsafe permission scopes, problems often observed in rapid prototyping workflows [89].
# Common Vibe Coding Risks
Hardcoded secrets: Generated code may embed plaintext API keys or passwords.

Insecure defaults: Lack of input sanitization, overly permissive CORS headers.

No audit trails: Developer cannot inspect prior outputs unless manually documented.
To mitigate these risks, developers often rely on external static analysis tools, e.g., SonarQube, CodeQL, or ESLint security plugins, to perform post-generation audits. These tools can flag anti-patterns, insecure imports, or style violations. However, these solutions operate independently of the LLM and require the developer to integrate them manually into their pipeline. As a result, the responsibility for enforcing safety, explainability, and governance in Vibe Coding rests solely on the human-in-the-loop, limiting its applicability in high-assurance domains like finance, healthcare, or enterprise DevOps.
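One lightweight developer-side mitigation for the missing audit trail is to wrap every LLM call in a logging shim. A minimal sketch, where `call_llm` is a hypothetical stand-in for the actual client library:

```python
# Developer-maintained audit trail around LLM calls (a sketch).
# call_llm is a stub; in practice it would wrap the real client API.
import time

AUDIT_LOG = []

def call_llm(prompt: str) -> str:
    return f"# generated for: {prompt}"  # placeholder model output

def audited_generate(prompt: str) -> str:
    """Record every prompt/output pair so prior outputs stay inspectable."""
    output = call_llm(prompt)
    AUDIT_LOG.append({"ts": time.time(), "prompt": prompt, "output": output})
    return output

code = audited_generate("Add JWT auth to the login flow")
```

Persisting `AUDIT_LOG` to a file or database would give the vibe coding session exactly the inspectable history that the risks box above notes is otherwise absent.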
2) Agentic Coding: Embedded Safeguards and Transparent Execution: Agentic coding frameworks are built with embedded safety constraints [172], [173], explainability mechanisms [19], [174], and runtime isolation policies [76], [175], [176]. These systems emulate production-grade deployment scenarios in microcosm, allowing agents to safely execute, debug, and iterate while maintaining verifiable compliance with security and governance policies [177], [178].
The first tier of architectural safeguards involves resource and namespace isolation. Agent containers run within sandboxed environments where access to file systems, memory, CPU, and network interfaces is tightly scoped and rate-limited [107], [149]. For example, an agent modifying YAML configuration files may only access a whitelisted directory tree, preventing accidental file system corruption or privilege escalation.
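The directory-whitelisting idea can be sketched in a few lines. `ALLOWED_ROOT` and `check_access` are illustrative names, not part of any agent framework:

```python
# Sketch of directory whitelisting: every path the agent touches must
# resolve inside the allowed root, defeating '../' escape attempts.
from pathlib import Path

ALLOWED_ROOT = Path("/workspace/config").resolve()

def check_access(requested: str) -> bool:
    """Reject paths that resolve outside the whitelisted tree."""
    target = (ALLOWED_ROOT / requested).resolve()
    return target == ALLOWED_ROOT or ALLOWED_ROOT in target.parents

assert check_access("deploy.yaml")          # inside the whitelist
assert not check_access("../../etc/passwd") # traversal attempt rejected
```

Resolving the path before comparison is the key step: it canonicalizes `..` segments so the containment check cannot be bypassed by relative-path tricks.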
# Agentic Safety Features
Namespace isolation: Prevents unauthorized file system access.
Resource limits: Controls execution via CPU/memory quotas.
Logging hooks: Captures all agent actions, prompts, and test results.
Rollback triggers: Reverts files or state if tests fail or errors occur.
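The rollback trigger can be sketched as a snapshot-and-restore wrapper around an agent edit. The in-memory file dict and the `tests_pass` stub are illustrative simplifications of real file-system checkpointing:

```python
# Sketch of a rollback trigger: snapshot state before the agent edits,
# restore the snapshot if validation fails. tests_pass is a stub.
import copy

def apply_with_rollback(files: dict, edits: dict, tests_pass) -> dict:
    snapshot = copy.deepcopy(files)   # checkpoint before mutation
    files.update(edits)               # agent applies its changes
    if not tests_pass(files):
        return snapshot               # rollback: discard the failed edit
    return files

state = {"app.py": "print('v1')"}
result = apply_with_rollback(
    dict(state),
    {"app.py": "syntax error"},
    tests_pass=lambda f: "error" not in f["app.py"],
)
```

Because the edit fails validation, `result` is the pre-edit snapshot; production systems implement the same pattern with Git commits or container-layer snapshots instead of in-memory copies.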
Explainability is built into the execution graph [19], [86]. Tools like Claude Code, Amazon Q Developer, and Devika log every decision node and code transformation, enabling post-hoc inspection and diff analysis. These logs not only serve as audit trails for compliance but also allow developers to interpret the agent’s reasoning chain: for example, why it refactored a function, replaced a package, or reordered a CI pipeline.
Such mechanisms elevate agentic systems from mere automation engines to auditable [107], [179], controlled execution environments [156]. Furthermore, the rollback infrastructure ensures that the system can revert unintended side effects, thereby reducing the risk of silent failures or irreversible changes [180], [181]. These features make agentic coding architectures more aligned with enterprise-grade reliability and explainability standards, distinguishing them as preferable frameworks for autonomous software engineering in safety-critical domains.
# Overall Architectural Comparison and Illustrative Application
To synthesize the architectural contrasts between vibe coding and agentic coding, we present a comparative analysis focused on execution fidelity, safety, and autonomy. Table III summarizes key differences in system capabilities across multiple operational dimensions. Notably, agentic systems are characterized by in-sandbox execution, embedded validation, and robust safety mechanisms [19], whereas vibe coding tools operate in a generation-first manner, relying on human oversight for execution, testing, and risk mitigation [182].
To illustrate these distinctions, consider the task of implementing a RESTful API with JWT-based authentication. In a vibe coding workflow, the developer begins by prompting the LLM with a natural language instruction [73], [145]. The model generates a code snippet that directly reflects prior examples in its training data. For instance:
TABLE III: Execution Capabilities and Safety Comparison
# Generated Output: Vibe
from fastapi import Depends, HTTPException
from jose import JWTError, jwt

SECRET_KEY = "mysecret"  # hardcoded secret, a typical vibe-coding risk

def verify_token(token: str):
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return payload
    except JWTError:
        raise HTTPException(status_code=403)
The developer must then validate this implementation manually using tools like Postman or `curl`, inspect error behavior, and refine the prompt if improvements are needed. This feedback loop remains external and human-driven.
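What that manual check ultimately confirms is that a token’s signature matches the shared secret. The standard-library-only toy below illustrates HS256 signing and verification (not the full JWT specification, and not the code a real project should ship):

```python
# Toy HS256 sign/verify, stdlib only, illustrating what manual
# validation of the generated endpoint confirms.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(key, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify(token: str, key: bytes) -> bool:
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(key, header + b"." + body, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign({"sub": "alice"}, b"mysecret")
assert verify(token, b"mysecret")
assert not verify(token, b"wrongkey")
```

A valid token passes only with the correct key, which is exactly the behavior the developer probes by sending good and tampered tokens to the endpoint.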
In contrast, an agentic system initiates a structured execution pipeline [183], [184]. The agent begins by analyzing route configurations, generates middleware for token validation, injects this into the FastAPI framework, executes automated tests via a test client, logs any exceptions, and autonomously retries modifications [185]. This closed-loop approach supports reproducibility and validation at every step.
# Agentic Code Execution Flow
Step 1: Analyze API routes
Step 2: Generate token middleware
Step 3: Inject middleware into FastAPI
Step 4: Run test client
Step 5: Log errors and fix recursively
This illustration highlights the architectural divergence in action: vibe coding emphasizes developer-led exploration, whereas agentic coding emphasizes autonomous, testable construction. The former thrives in low-stakes or creative contexts; the latter aligns with enterprise-grade reliability and scalable automation. Vibe coding and agentic coding are not competing paradigms but represent complementary trajectories in AI-assisted software engineering. Understanding their technical architecture, capabilities, and constraints is essential for designing effective toolchains and selecting the appropriate paradigm based on context. The next section evaluates these models across performance, efficiency, and deployment scalability dimensions.
# IV. PRACTICAL WORKFLOW DIFFERENCES
The practical adoption of Vibe Coding and Agentic Coding paradigms reveals fundamental differences in developer interaction models, cognitive frameworks, workflow architectures, and application suitability. This section presents a comparative investigation across four dimensions: developer roles and mental models, workflow patterns, engagement modes, and human-system factors. Through illustrative examples and comparative tables, we outline how each paradigm supports different stages of software development, from rapid prototyping to automated refactoring and large-scale system integration.
# A. Developer Roles and Mental Models
1) Vibe Coding: Dialogic Creation and Exploratory Interaction: Vibe Coding emphasizes an interactive, conversational dynamic between the developer and the LLM. Developers are engaged as co-creators, navigating design and implementation decisions through iterative prompt-response cycles [43], [186]. This approach lowers the activation threshold for idea exploration, enabling developers to articulate abstract requirements and progressively converge on working solutions.
# Primary Roles:
• Intent Architect: Formulates project goals in natural language, refining intent through prompt iteration.
• Creative Director: Evaluates, edits, and curates AI-generated outputs to align with design intent and user experience.

• Explorer: Uses the AI to experiment with unknown APIs, test UI patterns, or scaffold new features with minimal prior knowledge.
Cognitive Model: The developer operates with a “what-before-how” mindset: articulating high-level needs (e.g., “Build a login page with 2FA”) and assessing the AI’s proposed structural and syntactic solutions. This model promotes rapid feedback and creative experimentation but delegates testing and validation responsibilities to the developer.
# Example Prompt Loop
Developer: “Create login API with password hashing and 2FA.”
AI: Generates FastAPI endpoint using JWT + pyotp.
Developer: “Add unit tests and refactor 2FA logic into middleware.”
2) Agentic Coding: Task Delegation and Strategic Oversight: Agentic Coding reframes the developer’s role as that of a systems architect, strategic planner, and supervisory reviewer. Developers define high-level tasks or objectives, which are parsed and decomposed by autonomous agents that execute software engineering workflows ranging from code modification to integration testing and version control [187], [188].
# Primary Roles:
• Strategic Planner: Specifies tasks, objectives, and architectural constraints for the agent to act upon [19], [189].
• Supervisor: Monitors execution trace logs, performance reports, and system outputs [130].
Reviewer: Validates the correctness, maintainability, and security of agent-generated changes before integration [77].
Cognitive Model: Developers think in terms of orchestration rather than direct implementation. A single instruction such as “Fix broken login and ensure OAuth2 compliance” may be internally decomposed by the agent into authentication token migration, CI pipeline updates, test reruns, and dependency auditing. Human intervention is minimized to exception handling or ambiguity resolution.
# B. Workflow Patterns
Vibe Coding: Conversational Exploration: Vibe coding workflows are inherently exploratory and non-linear. Developers issue prompts, inspect generated code, and provide incremental feedback [190], [191]. This model is optimal for interface prototyping, low-risk experimentation, or knowledge discovery [192].
# Example – Dashboard Prototyping
1) Developer: “Build React dashboard with user count, revenue, and churn chart.”
2) AI: Generates UI with Chart.js and dummy data.
3) Developer: “Add tooltips and export to CSV.”
4) AI: Adds hover logic and export buttons.
5) Developer: “Write Cypress tests.”
6) AI: Outputs E2E test coverage.
Agentic Coding: Structured Execution Pipelines: Agentic coding follows structured workflows based on task planning [193], [194], state management [195], and recursive feedback loops [196], [197]. These workflows suit enterprise-grade tasks requiring correctness, traceability, and automation.
# Example – Automated Dependency Upgrade
1) Developer: “Upgrade all npm packages to latest secure versions.”
2) Agent: Parses package.json → updates dependency versions → executes test suite → resolves compatibility issues → generates changelog.
3) Developer: Reviews logs and approves pull request.
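The first and last steps of this pipeline can be sketched as a toy illustration; the registry lookup, test execution, and conflict resolution are elided, and the helper name and "latest secure versions" map are hypothetical:

```python
import json

def plan_upgrades(package_json_text, latest_versions):
    """Return (updated manifest, changelog lines) for the dependency map.
    Registry lookups, test runs, and conflict resolution are elided."""
    manifest = json.loads(package_json_text)
    deps = manifest.get("dependencies", {})
    changelog = []
    for name, current in sorted(deps.items()):
        target = latest_versions.get(name)
        # Compare against the pinned version with any ^/~ range prefix stripped.
        if target and target != current.lstrip("^~"):
            deps[name] = f"^{target}"
            changelog.append(f"{name}: {current} -> ^{target}")
    return manifest, changelog

manifest, log = plan_upgrades(
    '{"dependencies": {"express": "^4.17.1", "lodash": "4.17.21"}}',
    {"express": "4.19.2", "lodash": "4.17.21"},  # hypothetical latest versions
)
```

Only `express` is rewritten here; `lodash` already matches its target, so the changelog records a single upgrade line.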
# C. Comparative Analysis: Developer Engagement and Workflow Suitability
The differing interaction paradigms of Vibe Coding and Agentic Coding are reflected not only in architectural and cognitive models, but also in their practical workflow characteristics. From the role of the developer and interaction patterns to testing, documentation, and error resolution, each paradigm supports distinct modes of software creation. These differences have significant implications for project scale, team composition, and toolchain integration. Table IV presents a structured comparison of key workflow dimensions, highlighting how each model aligns with specific development contexts and use cases, and guiding practitioners in selecting the appropriate strategy for their software engineering objectives.
# D. Scientific and Human Factors
Cognitive Load and Developer Productivity: Vibe Coding reduces cognitive load associated with syntax and implementation details [82], [166], enabling rapid ideation and creative flow. It is especially effective for solo developers, early prototyping, or teaching new frameworks through interaction.
TABLE IV: Developer Roles and Workflow Comparisons
Agentic Coding introduces new cognitive demands in terms of system understanding, trust calibration, and supervision. However, it scales well in complex systems, enabling experienced developers to manage multiple asynchronous workflows and integrate formal validation into the pipeline [198]–[200].
Collaboration and Team Models: Vibe Coding is well-suited to collaborative scenarios such as hackathons or pair programming. Multiple developers may interact with the same agent in conversational loops to co-create ideas.
Agentic Coding enables distributed responsibility across modular systems. Individual agents or agent groups may be assigned to subsystem-level tasks, supporting parallelism and pipeline scalability in team-based development.
# E. Real-World Scenarios
Vibe Coding – Social Media Feature Ideation: To illustrate the fluid, creative, and iterative nature of vibe coding, consider the development of a new feature in a mobile social media application. In this workflow, the developer incrementally guides the LLM through a series of conversational prompts, shaping both frontend and backend functionality while maintaining control over design decisions. The example below demonstrates how a story highlight feature is ideated, implemented, refined, and tested using natural language interaction.
# Vibe in Practice: Social Highlights
Prompt: “Add story highlight feature to mobile app.”
AI: Creates backend schema, API endpoints, frontend UI (React Native).
Follow-up: “Allow drag-and-drop ordering.”
AI: Adds sortable list with gesture control.
Prompt: “Write tests.”
AI: Outputs Jest + E2E test scripts.
Agentic Coding – Legacy Migration: In contrast, agentic coding systems are designed for high-autonomy, structured workflows. The following example showcases an agentic system tasked with migrating a legacy codebase from Python 2 to Python 3. Once the developer issues the high-level instruction, the agent autonomously scans the codebase, applies systematic refactoring, validates changes through testing, and reports results for human approval. This highlights how agentic systems can automate extensive codebase transformations with minimal oversight.
# Agentic in Practice: Python 2 to 3 Migration
Prompt: “Upgrade all Python code to version 3.x.”
# Agent:
• Parses source files for deprecated syntax.
• Applies automatic and rule-based refactoring.
• Executes test suite.
• Reports diffs and unresolved issues.
Developer: Reviews logs, patches edge cases, and approves merge.
Vibe Coding and Agentic Coding represent two ends of the AI-assisted software development spectrum. Vibe Coding excels in human-in-the-loop, creative workflows where rapid feedback and flexibility are essential [123], [201], [202]. Agentic Coding, by contrast, emphasizes autonomy, reliability, and integration, making it ideal for large-scale automation and enterprise deployment [19], [189]. Understanding the operational strengths and tradeoffs of both paradigms, and hybridizing them effectively, offers a pathway toward more intelligent, efficient, and adaptive software development ecosystems.
# V. IMPLEMENTATION STRATEGIES
The implementation strategies of Vibe Coding and Agentic Coding diverge fundamentally in how they translate developer instructions into executable software. While both rely on LLMs, the strategies they adopt for prompt handling, code verification, and tool integration reflect distinct philosophies. This section explores these dimensions in detail.
# A. Prompt Engineering
Vibe Coding: Precision and Context for Creative Generation: Prompt engineering is the backbone of effective vibe coding. Here, developers provide explicit, intent-focused natural language prompts to guide the AI model in generating desired code outputs [203], [204]. The process is inherently iterative and dialogic [205], [206].
# Key Characteristics:
• Intent-Focused: Prompts emphasize desired outcomes rather than procedures [207], [208].
• Context-Sensitive: LLMs use extended context windows (16k–32k tokens) to retain semantic awareness of the codebase [209]–[211].
• Exploratory: Prompts may include multiple solution requests or alternative suggestions [212], [213].
# Example: Iterative Prompt Refinement
Developer: “Create a REST API with user login and JWT auth.”
AI: Returns FastAPI endpoint with pyjwt integration.
Developer: “Add 2FA with OTP.”
AI: Adds OTP-based validation logic with PyOTP.
Vague prompts such as “Make this more secure” often result in ambiguous or generic outputs. Specific prompts significantly improve precision.
Agentic Coding: Hierarchical and Multi-Step Instructions: Agentic coding requires hierarchical prompting suited to multi-phase task execution [19], [197], [214]. Developers issue macro-level instructions that agents deconstruct into subtasks [215].
# Key Characteristics:
• Task Chaining: Prompts are designed to be decomposable.
• Execution-Ready: Prompts include constraints, file references, and expected outputs.
• Interactive Feedback: Agents may prompt for clarification before execution.
# Example: Agentic Prompt Workflow
Prompt: “Update all modules using Flask 1.x to Flask 2.x and resolve deprecation warnings.”
# Agent Actions:
1) Parse imports and deprecated patterns.
2) Replace and test each submodule.
3) Run CI pipeline.
4) Log any exceptions or failed tests.
5) Compile a migration report.
Effective agentic prompts must encode enough structure to support independent agent decision-making while retaining flexibility for adaptive correction.
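The chained execution with early termination that these steps imply can be sketched as follows; the step names and the `run_chain` helper are illustrative, not any particular agent framework's API:

```python
def run_chain(steps):
    """Execute named steps in order, logging each outcome;
    halt the chain at the first failed step (retry logic elided)."""
    log = []
    for name, step in steps:
        ok = step()
        log.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            break  # downstream steps are never reached
    return log

trace = run_chain([
    ("parse imports and deprecated patterns", lambda: True),
    ("replace and test each submodule", lambda: True),
    ("run CI pipeline", lambda: False),          # simulated pipeline failure
    ("compile migration report", lambda: True),  # never reached
])
```

The resulting trace shows exactly where the chain halted, which is the raw material for the exception logs a developer would review.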
# B. Review and Debugging
Debugging and validation workflows are critical in shaping the reliability and usability of AI-assisted software development. Vibe Coding and Agentic Coding differ fundamentally in how error detection, correction, and verification are handled. Vibe Coding emphasizes human-in-the-loop inspection, relying on manual review and iterative prompt refinement, making it conducive to exploratory development. In contrast, Agentic Coding shifts much of this responsibility to autonomous agents equipped with runtime monitoring, log analysis, and rollback mechanisms. This section systematically analyzes the validation strategies, developer effort, and debugging affordances inherent in each paradigm.
1) Vibe Coding: Manual, Iterative, and Prompt-Based Validation: In Vibe Coding, the developer plays an active role in identifying and resolving issues. Errors are detected through manual testing, visual inspection, and ad hoc interaction with the LLM. The absence of a built-in feedback mechanism means the AI does not autonomously detect or act upon code failures. Instead, the developer diagnoses errors and re-engages the model with revised prompts.
# Key Techniques:
• Post-Generation Review: Developers visually examine AI-generated code, sometimes assisted by static analysis tools or linters.
• Manual Testing: Code is copied into a local IDE or REPL, where tests are written and executed manually.
• Prompt Debugging: Errors are addressed by refining the original prompt or issuing follow-up questions to isolate root causes.
# Example: Prompt-Based Debugging
Developer: “Why does this throw a 403 error?”
AI: “JWT token decode fails due to missing audience field. Add ‘aud’ claim.”
While time-consuming, this approach fosters a high degree of developer engagement and intuition building. It is particularly effective in early-stage prototyping, learning new libraries, or customizing third-party tools, where interpretability and control are prioritized over automation.
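The failure mode in the exchange above can be reproduced in a few lines: decoding the (unverified) payload of a token exposes the missing 'aud' claim that a library such as pyjwt would reject during audience validation. The helper names and claim values below are illustrative:

```python
import base64
import json

def make_token(claims):
    # Build an unsigned demo JWT; the header and signature segments are dummies.
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
    return f"eyJhbGciOiJub25lIn0.{body}.sig"

def payload_of(token):
    # Decode the payload segment WITHOUT verifying the signature (inspection only).
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(seg))

def audience_ok(token, expected):
    aud = payload_of(token).get("aud")
    return aud == expected or (isinstance(aud, list) and expected in aud)
```

A token minted without `aud` fails the check (the 403 in the dialogue), while re-issuing it with the claim present passes.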
2) Agentic Coding: Autonomous Debugging and Runtime Verification: Agentic Coding systems are equipped with autonomous verification mechanisms embedded into their runtime architecture [216]–[218]. Once a task is defined, the agent is capable of not only executing it but also validating its correctness through runtime evaluation and internal logging (using techniques such as runtime verification [219]–[221]). These systems often include error-handling protocols such as rollback, patch substitution, or retry logic.
# Key Techniques:
Runtime Evaluation: Tasks are executed in secure sandboxes or containers simulating production environments.
• Log Inspection: Execution traces, diffs, and system states are recorded and used for post-mortem analysis or decision trees.
• Error Rollback: On failure, agents automatically revert to a prior stable state or reattempt execution with modified inputs.
# Agentic Debug Loop
# Agent Workflow:
• Refactor class structure.
• Run full test suite.
• On failure: isolate diff, revert, and retry with alternate patch.
• Finalize changes only if all tests pass.
This feedback-driven model supports CI/CD pipelines, automated patching, and enterprise-grade refactoring. It minimizes manual intervention, ensures reproducibility, and scales efficiently across larger codebases. However, it requires careful oversight to validate that the agent’s autonomous decisions align with project constraints and security standards.
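Such a rollback-and-retry loop can be condensed into a short sketch, with the refactor, test run, and revert stubbed out as plain callables; all names here are illustrative:

```python
def apply_with_rollback(snapshot, patches, apply, tests_pass):
    """Try candidate patches in order; keep the first whose tests pass,
    falling back to the stable snapshot after every failed attempt."""
    for patch in patches:
        candidate = apply(snapshot, patch)
        if tests_pass(candidate):
            return candidate, patch  # finalize: all tests passed
        # failure: candidate is discarded; snapshot remains the working state
    return snapshot, None            # no patch succeeded; stay on stable state

# Toy stand-ins: "state" is a string, a patch appends a suffix,
# and tests pass only when the refactored state ends with "v2".
state, chosen = apply_with_rollback(
    "legacy-code",
    ["-bad-patch", "-refactor-v2"],
    apply=lambda s, p: s + p,
    tests_pass=lambda s: s.endswith("v2"),
)
```

The first patch fails its test and is silently reverted; only the second is finalized, mirroring the "isolate diff, revert, retry" behavior described above.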
# C. Tool Ecosystems
The tooling landscape supporting Vibe Coding and Agentic Coding reflects the fundamental differences in their architectural goals and user interaction models. Vibe Coding tools are designed to support rapid prototyping, fluid human-AI interaction, and minimal onboarding, often operating within lightweight development environments. In contrast, Agentic Coding platforms are engineered to manage autonomous task execution, system orchestration, and compliance logging, often integrating deeply with infrastructure-level operations. This section summarizes key tools representative of each paradigm and their associated capabilities.
Vibe Coding Tools: Conversational Interfaces and Creative Assistants: Vibe coding tools prioritize accessibility, immediacy, and seamless integration into the developer’s creative flow. These platforms typically use prompt-response loops and are embedded within IDEs, browsers, or chat interfaces, enabling on-the-fly code generation, inline explanations, and lightweight context sharing.
# Representative Platforms:
• ChatGPT: Provides flexible, conversational code generation and prompt debugging via natural language input [9].
• Gemini: Combines code suggestion capabilities with contextual documentation references and visual UI integration [222].
• Claude (Anthropic): Offers long-context code assistance with a focus on safe interaction and persistent multimodal reasoning.
• Replit Ghostwriter: Embeds a code-generating LLM into a browser-native IDE for real-time, context-aware assistance.
These tools excel in tasks requiring quick iteration, explanation, and ideation, supporting solo developers, learners, and early-phase design exploration.
Agentic Coding Tools: Execution-Aware Agents and Autonomous Pipelines: Agentic coding tools embed intelligent agents within controlled execution environments, enabling autonomous code transformation, validation, and deployment. These platforms often integrate with containers, logs, and permission systems to support continuous development and robust system-level interaction.
# Representative Platforms:
• Codex (OpenAI): Supports full project-level task execution, including file system manipulation, test orchestration, and CI pipeline triggering in sandboxed cloud instances.
• Claude Code (Anthropic): Designed with explainability and oversight features, enabling developers to audit changes, trace reasoning, and roll back unsafe actions.
• Jules (Google): An autonomous AI coding agent powered by Gemini that handles tasks like bug fixes and feature updates asynchronously. It operates on cloned GitHub repos in secure VMs, presenting results for review before integration.
These agentic tools are suited for scenarios demanding auditability, security enforcement, and task autonomy across large and distributed codebases.
Comparative Analysis: Implementation Strategies: The practical implementation strategies of Vibe Coding and Agentic Coding reflect their divergent philosophies in autonomy, tooling, and workflow integration. While Vibe Coding emphasizes rapid ideation, creative prompting, and developer-centered iteration, Agentic Coding systems are designed for structured task decomposition, automated execution, and autonomous debugging. These contrasts are not merely stylistic but architectural, influencing everything from execution environments and review cycles to tool selection and deployment suitability. Table V provides a comparative overview of the key implementation dimensions, offering a concise synthesis of how each paradigm operationalizes AI-assisted software development across prototyping, debugging, and production-scale maintenance tasks.
TABLE V: Implementation Strategy Comparison: Vibe vs. Agentic
# D. Scientific and Practical Implications
The application of Vibe Coding and Agentic Coding reflects two distinct paradigms in AI-assisted software development. Vibe Coding facilitates rapid ideation and creative exploration through lightweight, prompt-driven workflows. Its primary strengths lie in minimal setup time, intuitive experimentation, and support for learning new tools or frameworks. However, its limitations, such as the need for manual validation, reduced suitability for production environments, and lack of built-in quality assurance, make it less ideal for scalable engineering tasks.
In contrast, Agentic Coding emphasizes automation, structure, and reliability. Agents can autonomously perform task planning, testing, and code integration, reducing developer burden and increasing QA compliance. These systems are well-suited for CI/CD pipelines, legacy modernization, and enterprise-level code maintenance. Still, agentic workflows require secure execution environments, complex orchestration infrastructure, and careful oversight to prevent silent failures from misconfigurations.
Ultimately, these paradigms are not mutually exclusive. Vibe Coding is optimal during early-stage design and prototyping, while Agentic Coding excels in implementation and operational phases. Hybrid workflows that leverage both, beginning with creative prompting and transitioning to autonomous execution, can maximize efficiency and robustness. As AI coding agents evolve, this blended strategy will likely define the next generation of intelligent development practices.
# VI. USE CASES AND APPLICATIONS
The evolution of AI-assisted software development has led to the emergence of two complementary paradigms: Vibe Coding and Agentic Coding. Each presents distinct strengths, ideal workflow conditions, and technical affordances. Understanding where each paradigm excels enables developers and organizations to align tooling strategies with project goals, lifecycle stages, and team expertise. This section explores the optimal application scenarios for both approaches based on functional characteristics, observed benefits, and empirical use. A comparative summary of use case suitability is provided in Table VI.
# A. Ideal Scenarios for Vibe Coding
Vibe Coding is most effective in early-stage development and exploratory contexts where creativity, rapid feedback, and flexible control are essential. It supports three dominant use cases:
Creative Exploration: Vibe tools allow developers to experiment with high-level design ideas, generate multiple implementation variants, and visualize UI flows or logic strategies in real time. This makes them ideal for brainstorming sessions, algorithmic sandboxing, and feature sketching.
Rapid Prototyping: Vibe coding excels in building functional MVPs from natural language prompts. Developers can generate full-stack components including REST APIs, frontend layouts, and test cases within hours using conversational iterations, making it suitable for agile environments and hackathons.
Learning New Technologies: For onboarding or upskilling, vibe tools serve as real-time coding tutors. Developers can query LLMs for explanations, receive context-sensitive code snippets, and troubleshoot unfamiliar tools or frameworks like Next.js, WebAssembly, or Rust interactively.
# B. Ideal Scenarios for Agentic Coding
Agentic Coding is engineered for structured, high-reliability workflows where automation and scale are critical. It is best applied in production environments and enterprise operations:
Codebase Refactoring: Agentic tools can autonomously analyze legacy systems, detect outdated code patterns, and apply systematic refactorings. For example, migrating a Python 2.7 system to Python 3.x involves syntax updates, dependency rewrites, and full test coverage, all achievable with minimal human oversight using agentic pipelines.
Routine Engineering Tasks: These include automated dependency upgrades, code formatting, test regeneration, and CI/CD pipeline maintenance. Agents apply consistent standards across large codebases, improving maintainability and reducing manual engineering overhead.
Regression Bug Fixing: Agentic systems excel at log-based error diagnosis, root-cause analysis, and autonomous code repair. They reduce mean time to resolution (MTTR) by running test suites, applying patches, and updating changelogs without developer intervention, making them ideal for mission-critical services.
# C. Comparative Analysis: Vibe vs. Agentic in Practice
TABLE VI: Practical Use Case Comparison of Agentic vs. Vibe Coding
# D. Real-World Applications
Vibe Coding: 10 Practical Use Cases: Figure 6 summarizes ten practical use cases of vibe coding; each is detailed in the following points:
# 1) Personal Portfolio Website Development
Vibe coding proves highly effective in generating professional personal websites with minimal manual effort [223]–[225]. For instance, a developer may prompt, “Create a modern, responsive personal website with sections for About, Projects, and Contact. Use React and include a dark mode toggle.” The AI interprets this instruction and outputs a full React-based project including reusable components, routing with React Router, state management for theme toggling, and styled-components for UI design. Importantly, the AI-generated code adheres to modern frontend architecture patterns, enabling responsiveness across screen sizes and semantic markup for accessibility. Developers can then refine colors, branding, or layout logic without spending time on boilerplate setup. This use case exemplifies how vibe coding empowers solo developers, students, or freelancers to launch polished portfolios in hours, promoting self-representation, employment outreach, or client engagement. Furthermore, it enables real-time customization: prompts like “Add a testimonials section with a carousel” can be appended iteratively. The ease of interaction and semantic clarity of natural language prompts transform what would traditionally take several days of setup, CSS tweaking, and component reuse into a streamlined, conversational workflow, making this a flagship use case in frontend ideation through Vibe Coding [226], [227].
# 2) Interactive Data Visualization Dashboards
Another powerful use case for vibe coding lies in developing interactive data dashboards [46]. A prompt such as “Build an interactive dashboard that displays sales data as a bar chart and a pie chart, with filters for region and date” activates the model’s ability to generate full JavaScript-based UI with integrated visualization libraries like Chart.js or D3.js. The AI constructs components to handle input state (e.g., dropdowns or sliders for region and date), binds them to data filters, and connects the outputs to responsive visual elements. This approach significantly accelerates prototyping for data scientists, product managers, or researchers who may lack the frontend expertise to translate insights into live web interfaces. Moreover, vibe coding allows iterative updates such as “Add a line chart for monthly revenue trends” or “Include export to CSV functionality,” enabling flexible expansion without reworking foundational architecture. These dashboards can be powered by static JSON or API-backed data sources, depending on the context. The AI-generated layout often includes accessibility features, tooltips, and mobile responsiveness by default. This use case demonstrates the integration of data fluency and visual storytelling [228], [229], helping domain experts bridge the gap between backend analytics and user-facing visualizations using conversational coding workflows.
# 3) Daily Email Report Automation
Vibe coding excels in automating routine workflows such as scheduled email reports [156], [230]. Given a prompt like “Write a Python script that pulls yesterday’s sales from a CSV file and emails a summary to my team at 8am every day,” the AI generates code using Python libraries including pandas for CSV parsing, smtplib for sending emails, and built-in modules like datetime for filtering data. The script typically formats metrics into human-readable summaries, attaches files if necessary, and configures recipient lists and message subjects. Beyond the core logic, the AI often includes instructions for setting up a cron job or a Windows Task Scheduler entry to automate execution. This workflow is particularly useful for small businesses, sales teams, or researchers who need daily reporting but lack access to enterprise-grade BI platforms. Further enhancements like “Format summary as HTML table” or “Attach CSV of filtered data” can be integrated seamlessly through subsequent prompts. Compared to traditional scripting workflows that require detailed knowledge of syntax and library APIs, vibe coding reduces setup time and encourages iterative improvements [231], [232]. This use case exemplifies how AI-enhanced scripting democratizes automation, enabling nonexpert programmers to implement robust reporting pipelines through natural language commands.
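A standard-library sketch of the script such a prompt might yield, with the csv module standing in for pandas (the column names, addresses, and subject line are assumptions of this sketch):

```python
import csv
import io
import smtplib
from email.message import EmailMessage

def build_report(csv_text, date):
    """Summarize one day's sales rows from CSV text into an email message.
    Assumed columns: 'date' and 'amount'."""
    rows = [r for r in csv.DictReader(io.StringIO(csv_text)) if r["date"] == date]
    total = sum(float(r["amount"]) for r in rows)
    msg = EmailMessage()
    msg["Subject"] = f"Sales summary for {date}"
    msg["From"] = "reports@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(f"{len(rows)} sale(s), total ${total:.2f}")
    return msg

# Actually sending the message (and the 8am cron / Task Scheduler entry
# mentioned above) is environment-specific; with a local SMTP relay it
# would look like:
#   with smtplib.SMTP("localhost") as s:
#       s.send_message(build_report(csv_text, today))
```

Keeping message construction separate from delivery makes the summary logic testable without a mail server.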
# 4) To-Do List Web Application
Vibe coding provides a streamlined approach to developing interactive, stateful web applications, such as to-do list managers [233], [234]. Given the prompt “Make a simple to-do list web app with add, remove, and mark-as-complete features. Use Vue.js,” the AI generates a project that includes reusable Vue components for task entry, task listing, and status toggling. It sets up reactive data binding via the Composition or Options API and persists the task list using browser localStorage or sessionStorage. This use case is particularly effective for frontend learners, rapid prototyping, and UI logic testing, where state management and component communication are essential. Developers can incrementally add features like task categorization, due dates, or UI transitions simply by prompting further, e.g., “Add color-coded categories for each task” or “Sort tasks by due date.” Importantly, the AI typically follows modern Vue conventions, using scoped CSS, modular scripts, and semantic HTML structure. This hands-on feedback loop fosters both conceptual understanding and production-ready outcomes [105]. Furthermore, the generated application can be easily deployed using platforms like Vercel or Netlify, showcasing vibe coding’s low barrier to full-cycle application development. In short, to-do list apps serve as an ideal sandbox to observe how AI interprets dynamic user interfaces and local state orchestration from natural language descriptions.
# 5) Startup Landing Page Generation
Vibe coding significantly accelerates the creation of marketing-oriented landing pages, an essential component for startups [235], product demos [170], [236], and digital campaigns [237]. A prompt such as “Generate a landing page for a new AI-powered note-taking app. Include a hero section, features, testimonials, and a signup form” guides the AI to output semantically structured HTML and Tailwind CSS with clearly delineated sections. It generates layouts with responsive flexbox or grid-based positioning, embedded SVG icons, call-to-action (CTA) buttons, and form validation logic. The landing page is typically annotated with placeholder text and sample imagery, which the developer can replace for customization. This use case benefits non-technical founders or small teams seeking to deploy marketing content quickly without hiring web designers. Moreover, it allows iterative refinement: prompts like “Add newsletter opt-in with Mailchimp integration” or “Include pricing tiers with toggleable monthly/annual plans” are easily interpreted by the model. The AI’s outputs align with SEO best practices and accessibility guidelines, often including meta tags and ARIA attributes. This workflow illustrates how vibe coding transforms vague branding concepts into fully deployable pages, enabling rapid iteration on visual identity and user acquisition strategies through natural language co-design.
# 6) RESTful API Endpoint Development
Vibe coding is increasingly effective for backend prototyping, particularly in generating modular RESTful APIs. A prompt like “Create a Node.js Express endpoint for user registration, with email validation and password hashing” leads to the generation of structured middleware logic using express, bcryptjs, and validator.js. The AI often scaffolds the file structure, initializes an app.js or index.js, and integrates JSON body parsing with standardized error handling. The resulting endpoint typically includes robust input validation, secure password storage, and informative response messages. This use case is particularly useful for full-stack developers prototyping backend logic without investing in boilerplate or configuration overhead. The generated code is modular enough to extend with authentication middleware like JWT, rate limiting, or database integration through MongoDB or PostgreSQL. Subsequent prompts, e.g., “Add input sanitization to prevent XSS” or “Integrate this with a MongoDB user model,” can be layered seamlessly. This shows how vibe coding supports iterative back-end design with conversational refinement. More importantly, it empowers frontend-focused developers to extend their capabilities into API development, promoting end-to-end skill integration. Thus, REST endpoint generation exemplifies how vibe coding reduces friction in backend service scaffolding, enhancing both speed and security of early-stage API development.
# 7) Unit Test Generation for Frontend Components
Testing is often overlooked in early-stage development, but vibe coding offers a frictionless approach to generating unit test suites for React and other component-driven frameworks. A prompt like “Write Jest unit tests for this React component that displays user profiles” initiates code that tests lifecycle methods, conditional rendering, prop validation, and event handling. The AI typically produces test files using react-testing-library or enzyme, defining mocks and asserting DOM state after simulated interactions. This functionality is invaluable for improving code coverage, debugging regressions, and maintaining confidence during refactors. Developers can extend the coverage by prompting “Add tests for error boundaries” or “Include mock API calls.” The generated test cases are often aligned with industry best practices, using descriptive test names and clear assertions. This use case demonstrates how vibe coding promotes quality assurance practices even in fast-paced prototyping contexts. It is particularly valuable for teams adopting test-driven development (TDD) or onboarding new engineers to legacy systems who need to create regression tests. By automating repetitive scaffolding of test logic, vibe coding helps enforce consistent testing structures, reducing the manual effort typically required for frontend validation and increasing reliability in component-driven architectures.
# 8) Framework Exploration and Onboarding
Vibe coding serves as an effective tool for developers exploring unfamiliar frameworks or ecosystems. For instance, a prompt such as “Show me how to set up a basic blog with Next.js, including routing and markdown support” yields a full scaffold of a modern web project. The AI typically generates key files including pages/index.js, dynamic routing components using getStaticPaths and getStaticProps, and integrates Markdown rendering through libraries like remark or gray-matter. It also provides file structures, package dependencies, and minimal working examples for content loading and layout templating. This guided setup facilitates hands-on learning without requiring prolonged documentation review, making it particularly beneficial for bootcamp students, self-taught developers, or engineers transitioning to unfamiliar stacks. Follow-up prompts such as “Add syntax highlighting for code blocks” or “Implement tag-based post filtering” can be layered incrementally, simulating real-world development flow. This workflow encourages immediate experimentation, rapid iteration, and experiential learning. Unlike tutorials or static documentation, vibe coding fosters an interactive feedback loop that enhances conceptual understanding through example-driven guidance. Thus, framework exploration via vibe coding accelerates onboarding while empowering users to transition from novice to productive contributor in modern development ecosystems like Next.js, SvelteKit, or Astro.
# 9) Interactive Multimedia and Animation Prototyping
Another high-value application of vibe coding lies in creating rich, interactive multimedia experiences. Given a prompt such as “Build a JavaScript animation that reacts to music and user clicks, with smooth transitions and colorful visuals,” the AI constructs a canvas-based or WebGL animation pipeline using libraries such as p5.js, Tone.js, or raw requestAnimationFrame logic. It wires real-time audio input events to visual transformations and handles user interactions like mouse movement or click-based effects. The output typically includes smoothing functions, frame buffers, and conditional rendering for responsiveness. This use case is highly relevant for frontend engineers, game developers, and digital artists looking to prototype immersive interfaces or creative installations. Because such applications are rarely template-driven, vibe coding’s strength lies in enabling fast ideation loops: developers can iterate by prompting “Make colors change with beat intensity” or “Add particle trails to click animations.” This approach significantly lowers the barrier to generative art, sound visualization, or interactive infographics, domains where high skill thresholds once constrained creative experimentation. Vibe coding thus democratizes access to dynamic front-end graphics programming, enabling more developers to explore creative coding with immediacy and expressive control through natural language.
# 10) Spreadsheet Automation with Google Apps Script
Vibe coding extends beyond frontend and API development into automating productivity tools, such as spreadsheets. A use case like “Write a Google Apps Script to automatically color rows in a Google Sheet based on the value in the ‘Status’ column” triggers generation of JavaScript code tailored for the Apps Script environment. The model produces event-driven scripts using onEdit(e) handlers that evaluate cell values and apply conditional formatting via setBackground() methods. It also includes logic to optimize performance (e.g., range limiting) and optional enhancements like logging or undo triggers. This application is especially useful for educators, analysts, and administrative staff who manage large datasets and require visual cues for prioritization, e.g., coloring tasks as “Complete,” “In Progress,” or “Overdue.” Additional prompts such as “Send email when status is ‘Blocked’” or “Sort by last updated date on edit” can expand the automation scope. Unlike manual scripting, which requires prior knowledge of Google Apps Script APIs, vibe coding allows users to build task-specific automation routines through intuitive prompts. This use case illustrates how conversational AI can streamline routine digital workflows in office suites, bridging the gap between traditional spreadsheet usage and low-code enterprise automation.
# Agentic Coding: 10 Applied Use Cases
# 1) Automated Codebase Refactoring
Agentic coding excels at large-scale, systematic code transformation, especially in legacy modernization scenarios. For example, given the instruction “Refactor all legacy authentication code to use OAuth2, update related tests, and ensure backward compatibility,” the agent parses the authentication module across files, identifies deprecated authentication logic, and systematically replaces it with OAuth2-compliant handlers. It updates environment configurations, rewrites affected middleware, and adjusts API headers to match new security flows. Unit and integration tests are updated or regenerated to validate the changes, and backward compatibility layers are introduced where applicable. Finally, the agent commits these changes to a Git branch and submits a changelog for review. This minimizes manual intervention and supports safer refactorings in production-sensitive codebases, where human error could be costly. Agentic refactoring is especially suitable for frameworks undergoing deprecation, major version upgrades, or compliance updates (e.g., from cookie-based sessions to token-based authentication).
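A heavily simplified sketch of the rule-based rewrite pass such an agent might apply. The deprecated and replacement call names (`legacy_login`, `oauth2_login`, `set_session_cookie`, `issue_bearer_token`) are hypothetical placeholders, and a production agent would operate on syntax trees rather than regular expressions:

```python
import re

# Hypothetical mapping from deprecated auth calls to OAuth2-compliant replacements.
REWRITE_RULES = {
    r"\blegacy_login\(": "oauth2_login(",
    r"\bset_session_cookie\(": "issue_bearer_token(",
}

def refactor_auth(source):
    """Apply each rewrite rule; collect one changelog entry per rule that fired."""
    changelog = []
    for pattern, replacement in REWRITE_RULES.items():
        source, n = re.subn(pattern, replacement, source)
        if n:
            changelog.append(f"{pattern} -> {replacement} ({n} site(s))")
    return source, changelog

code = "user = legacy_login(name, pw)\nset_session_cookie(user)\n"
new_code, log = refactor_auth(code)
```

The changelog mirrors the agent's final step of committing a reviewable summary alongside the transformed code.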
# 2) Routine Dependency Updates
Maintaining up-to-date dependencies across large repositories is tedious and error-prone, making it an ideal task for agentic automation. When prompted with “Update all project dependencies to their latest secure versions, fix any compatibility issues, and document changes,” the agent examines package.json, requirements.txt, or equivalent manifest files and upgrades each package to a secure, stable release. Post-upgrade, it launches regression test suites to detect breakages, applies required code patches, and flags any unresolved compatibility issues. A human-readable changelog is auto-generated, listing updated packages, reasons for change (e.g., vulnerability fix, feature parity), and impacted modules. This workflow ensures software supply chain hygiene and aligns with security best practices such as SBOM (Software Bill of Materials) generation.
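The core of such an update pass can be sketched as a comparison of pinned versions against an advisory table. The `ADVISORIES` mapping below is fabricated for illustration; a real agent would query a vulnerability database and then run the regression suite:

```python
# Hypothetical advisory table: package name -> minimum secure version.
ADVISORIES = {"requests": "2.31.0", "urllib3": "2.0.7"}

def parse_version(v):
    return tuple(int(p) for p in v.split("."))

def update_manifest(lines):
    """Bump any pinned requirement below its secure version; emit a changelog."""
    updated, changelog = [], []
    for line in lines:
        name, _, pinned = line.partition("==")
        secure = ADVISORIES.get(name)
        if secure and parse_version(pinned) < parse_version(secure):
            updated.append(f"{name}=={secure}")
            changelog.append(f"{name}: {pinned} -> {secure} (vulnerability fix)")
        else:
            updated.append(line)  # already secure, or no advisory on file
    return updated, changelog

manifest = ["requests==2.19.0", "urllib3==2.0.7", "flask==3.0.0"]
new_manifest, log = update_manifest(manifest)
```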
# 3) Regression Bug Fixing
In enterprise-grade pipelines, where minimizing downtime is critical, agentic systems provide rapid response mechanisms for resolving regressions. Instructed with “Identify and fix any regression bugs introduced in the last release,” the agent fetches the latest commits, runs test pipelines, and maps failures to code changes using blame heuristics or statistical fault localization techniques. Upon identifying root causes, it proposes targeted patches and verifies fixes through retesting. If successful, the fix is committed with rollback metadata. This not only reduces mean time to resolution (MTTR) but also minimizes human debugging cycles during post-deployment phases.
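Statistical fault localization can be made concrete with the classic Tarantula suspiciousness score, which ranks code lines by how strongly their test coverage correlates with failures (the coverage data below is toy input):

```python
def tarantula(coverage, outcomes):
    """coverage: {test: set(line_ids)}; outcomes: {test: 'pass' | 'fail'}.
    Returns a suspiciousness score per line; higher means more implicated."""
    total_fail = sum(1 for o in outcomes.values() if o == "fail")
    total_pass = len(outcomes) - total_fail
    scores = {}
    for line in set().union(*coverage.values()):
        failed = sum(1 for t, ls in coverage.items()
                     if line in ls and outcomes[t] == "fail")
        passed = sum(1 for t, ls in coverage.items()
                     if line in ls and outcomes[t] == "pass")
        fr = failed / total_fail if total_fail else 0.0
        pr = passed / total_pass if total_pass else 0.0
        scores[line] = fr / (fr + pr) if (fr + pr) else 0.0
    return scores

# Line 3 is executed by both failing tests and no passing test.
coverage = {"t1": {1, 2}, "t2": {1, 3}, "t3": {2, 3}}
outcomes = {"t1": "pass", "t2": "fail", "t3": "fail"}
scores = tarantula(coverage, outcomes)
```

An agent would inspect the top-ranked lines first when proposing a targeted patch.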
# 4) CI/CD Pipeline Automation
Setting up and maintaining CI/CD pipelines is essential but repetitive, making it ideal for delegation to agentic systems. When asked to “Set up and maintain a CI/CD pipeline that builds, tests, and deploys our microservices to AWS,” the agent scaffolds GitHub Actions or GitLab CI YAML files, configures secrets management (e.g., via AWS IAM), builds Docker containers, and deploys them to ECS or Lambda environments. It also implements rollback triggers, environment matrix testing, and artifact versioning. This workflow transforms DevOps setup into a task-oriented scriptable process, freeing engineers to focus on application-level issues.
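A sketch of the scaffolding step: rendering a minimal GitHub Actions workflow from a service list. The build, test, and deploy commands (and the `deploy.sh` script) are placeholders, not a complete AWS deployment:

```python
def make_workflow(services, region="us-east-1"):
    """Render one build/test/deploy job per microservice (names are placeholders)."""
    jobs = []
    for svc in services:
        jobs.append(
            f"  {svc}:\n"
            f"    runs-on: ubuntu-latest\n"
            f"    steps:\n"
            f"      - uses: actions/checkout@v4\n"
            f"      - run: docker build -t {svc} ./{svc}\n"
            f"      - run: docker run --rm {svc} pytest\n"
            f"      - run: ./deploy.sh {svc} {region}"
        )
    return "name: ci\non: [push]\njobs:\n" + "\n".join(jobs)

workflow = make_workflow(["billing", "auth"])
```

In practice the agent would also template secrets references and rollback triggers into the same file.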
# 5) Automated Security Auditing
Agentic coding tools are highly effective for security auditing. Given the prompt “Scan the codebase for OWASP Top 10 vulnerabilities, apply fixes, and generate a security report,” the agent runs static analysis (e.g., CodeQL, Bandit), applies sanitization patches (e.g., escaping input fields), and produces a PDF security report complete with fix diffs, severity scores, and coverage metrics. It also enforces policy-as-code integrations to flag future regressions. This is crucial for SOC 2 or GDPR-compliant organizations requiring continuous assurance mechanisms.
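At its core, the scanning step is pattern matching over source with severity metadata attached. The two rules below are toy stand-ins loosely inspired by OWASP categories (injection, weak hashing); real scanners such as CodeQL or Bandit use far richer semantic models:

```python
import re

# Illustrative rules only: (rule id, pattern, severity).
RULES = [
    ("A03-injection", re.compile(r"execute\(.*%s.*\)|eval\("), "high"),
    ("A02-crypto", re.compile(r"\bmd5\b"), "medium"),
]

def scan(source):
    """Return one finding per (line, rule) match with its severity."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for rule_id, pattern, severity in RULES:
            if pattern.search(line):
                findings.append({"rule": rule_id, "line": lineno,
                                 "severity": severity})
    return findings

code = 'cursor.execute("SELECT * FROM users WHERE id = %s" % uid)\ndigest = md5(data)\n'
report = scan(code)
```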
# 6) Large-Scale Code Migration
Legacy code migrations, such as “Migrate the codebase from Python 2.7 to Python 3.x,” are complex and error-prone. The agent parses source files into abstract syntax trees (ASTs), applies rule-based transformations (e.g., print statements to functions, Unicode updates), and updates dependency management (e.g., replacing pip2 libraries). After transformation, it runs and validates unit and integration tests, logging all conversions for developer audit. This use case illustrates how agentic systems can bridge large syntactic gaps and reduce organizational technical debt with traceable automation.
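Migration tools like 2to3 implement such transformations as AST-level fixers; the regex sketch below handles only the single print-statement rule, as a deliberately tiny illustration of rule-based migration:

```python
import re

# Matches a Python 2 print *statement*: "print x" but not "print(x)".
PRINT_STMT = re.compile(r"^(\s*)print\s+(?!\()(.+)$")

def fix_print(source):
    """Rewrite Python 2 `print x` statements as Python 3 `print(x)` calls."""
    out = []
    for line in source.splitlines():
        m = PRINT_STMT.match(line)
        out.append(f"{m.group(1)}print({m.group(2)})" if m else line)
    return "\n".join(out)

legacy = "print 'hello'\nprint('already fine')\n"
migrated = fix_print(legacy)
```

Real fixers must also handle `>>` redirection, trailing commas, and multi-line statements, which is why production migration works on the AST rather than on text.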
Fig. 6: 10 use cases of Vibe Coding (upper part) and Agentic Coding (lower part)
# 7) Automated Documentation Generation
Documentation often lags development. Agentic systems solve this via prompts like “Generate API documentation for all endpoints, including usage examples and parameter descriptions.” The agent extracts function docstrings, converts them into OpenAPI-compliant specs, and deploys an interactive Swagger UI. If inline docs are missing, it infers descriptions from usage patterns or model inference. The documentation is versioned with the codebase and updated on every relevant commit. This ensures code maintainability, simplifies onboarding, and aligns with industry standards such as REST maturity models or GraphQL schema introspection.
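The extraction step can be sketched with Python's `ast` module: pull each top-level function's docstring and parameters into an OpenAPI-flavoured dictionary (the output schema here is simplified and not a valid OpenAPI document):

```python
import ast

def extract_api_docs(source):
    """Summarize top-level functions as path entries, using docstring first lines."""
    tree = ast.parse(source)
    paths = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node) or "No description available."
            paths[f"/{node.name}"] = {
                "summary": doc.splitlines()[0],
                "parameters": [a.arg for a in node.args.args],
            }
    return {"openapi": "3.0.0", "paths": paths}

source = '''
def get_user(user_id):
    """Fetch a user profile by id."""
    ...
'''
spec = extract_api_docs(source)
```

An agent would feed this intermediate structure into a full spec generator and regenerate it on every relevant commit.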
# 8) Performance Optimization
Performance profiling and optimization tasks are ideal candidates for agentic workflows. Given “Profile the application, identify bottlenecks, and optimize slow database queries,” the agent instruments performance probes using cProfile, perf, or Chrome DevTools, depending on the stack. It locates hotspots such as nested loops or unindexed queries, applies refactors (e.g., SQL indexing, memoization, pagination), and verifies performance gains with benchmarks. The final report includes before/after metrics, call graphs, and optimization rationale. This use case demonstrates how agentic coding tools can enforce non-functional requirements through measurable performance metrics.
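One of the named refactors, memoization, can be verified measurably without full profiler output by counting function invocations before and after the change:

```python
from functools import lru_cache

# Count invocations to quantify the effect of the memoization refactor.
calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    calls["memo"] += 1
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Same result, drastically fewer calls: 21 memoized vs tens of thousands naive.
before, after = fib_naive(20), fib_memo(20)
```

In a real agentic pass, the same before/after evidence would come from cProfile call counts or database query plans rather than a toy recursion.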
# 9) End-to-End Feature Implementation
Agentic coding systems are capable of implementing complex, multi-component features. For instance, the prompt “Implement a new payment gateway integration, update the UI, backend, and database, and ensure all workflows are tested” triggers code generation for frontend forms (e.g., Stripe.js), backend API routes, database schema updates, and test coverage via Cypress or Postman. The agent updates configuration files, manages secrets, and deploys to staging environments. Human developers review logs, inspect schema diffs, and approve PRs. This end-to-end autonomy illustrates the promise of agentic systems in full-stack development.
# 10) Automated Rollback and Recovery
For production environments, agentic systems serve as first responders. Prompted with “Monitor production for critical errors, and if detected, automatically roll back to the last stable version and notify the team,” the agent uses observability tools (e.g., Datadog, Sentry) to watch logs for error spikes. If conditions match a critical failure signature, the agent initiates rollback via GitOps workflows (e.g., ArgoCD), verifies system health post-rollback, and dispatches alerts via Slack or email. This application reduces incident response time and adds resilience to mission-critical deployments.
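The detection logic reduces to a sliding-window error counter wired to a rollback callback; the window size, threshold, and rollback message below are illustrative stand-ins for a real observability integration:

```python
from collections import deque

class SpikeDetector:
    """Fire `on_spike` once when more than `threshold` errors land within a
    sliding window of `window` seconds (timestamps assumed monotonic)."""

    def __init__(self, window, threshold, on_spike):
        self.window, self.threshold, self.on_spike = window, threshold, on_spike
        self.events = deque()
        self.triggered = False

    def record_error(self, ts):
        self.events.append(ts)
        # Evict errors that fell out of the window.
        while self.events and ts - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) > self.threshold and not self.triggered:
            self.triggered = True
            self.on_spike()

rollbacks = []
detector = SpikeDetector(window=60.0, threshold=3,
                         on_spike=lambda: rollbacks.append("rollback to last stable"))
for t in [0, 10, 20, 25]:  # four errors within 60 s -> spike
    detector.record_error(t)
```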
# VII. INDUSTRY TRENDS AND CONVERGENCE
The evolving interface between humans and artificial intelligence in software development has given rise to two prominent paradigms: Vibe Coding and Agentic Coding. Originally designed with distinct purposes (vibe coding as an exploratory, conversational mode; agentic coding as a structured, autonomous execution model), these approaches are increasingly converging. This convergence is not coincidental; it reflects broader sociotechnical demands across domains such as enterprise automation, developer education, and creative software innovation. In this section, we examine the emerging hybrid architectures, adoption trajectories across industry sectors, and the synthesis of best practices that signal a maturing AI-assisted development ecosystem.
# A. Emergence of Hybrid Models
Contemporary platforms are beginning to blur the once-clear lines between reactive conversational assistants and autonomous agent frameworks. Vibe coding systems, initially confined to prompt-based generation in natural language interfaces, have started incorporating execution capabilities, persistent context, and basic planning modules. For instance, tools like Replit Ghostwriter now support inline execution and debugging, offering partial autonomy within conversational workflows.
Conversely, agentic platforms such as OpenAI Codex, Claude Code, and Google Jules have introduced interface elements from vibe coding: accepting high-level natural language goals, providing step-by-step feedback, and engaging in clarifying dialogue. These developments illustrate an architectural fusion in which conversational flexibility is paired with autonomous execution, leading to hybrid systems capable of decomposing, planning, validating, and summarizing multi-step software tasks.
Hybrid models allow users to issue abstract objectives (e.g., “Build a secure login system with 2FA and audit logging”), which are parsed by the AI into discrete submodules. The agent then executes each step, verifies the results via tests, and presents logs and artifacts for review. This synthesis offers three distinct benefits: (i) conversational speed in ideation, (ii) execution precision with agent control, and (iii) a continuous loop of refinement via real-time feedback. However, challenges remain in ensuring explainability, safe prompt handling, and seamless cross-platform integration.
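The plan, execute, verify loop of such a hybrid system can be sketched end to end with stub components; the fixed decomposition below stands in for what an LLM planner would produce from the abstract objective:

```python
def plan(objective):
    # A real hybrid system would ask an LLM to decompose the objective;
    # this fixed decomposition is illustrative only.
    return ["password hashing", "2FA challenge", "audit logging"]

def execute(task):
    # Stub executor: a real agent would generate and write actual code.
    return {"task": task, "artifact": f"{task.replace(' ', '_')}.py"}

def verify(result):
    # Stub verification standing in for running generated tests.
    return result["artifact"].endswith(".py")

def run(objective):
    """Decompose the objective, execute each step, and log its verification."""
    log = []
    for task in plan(objective):
        result = execute(task)
        log.append((task, "ok" if verify(result) else "failed"))
    return log

summary = run("Build a secure login system with 2FA and audit logging")
```

The logged (task, status) pairs correspond to the artifacts and logs a hybrid system would surface for human review.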
# B. Enterprise and Educational Adoption
The convergence of AI coding paradigms is not only theoretical but increasingly visible in industry and education. Agentic systems are gaining traction in enterprise environments due to their capacity for automating mission-critical tasks. Organizations such as Cisco employ agentic frameworks for regression testing, legacy code refactoring, and continuous integration workflows. Similarly, Kodiak Robotics leverages agentic tools for safety-critical verification in autonomous driving software.
In contrast, vibe coding is widely adopted in education and individual development. Platforms such as VS Code and Replit embed vibe-oriented coding assistants directly into the IDE, allowing students and solo developers to explore new APIs, build prototypes, and debug through conversational interaction. Coding bootcamps use these systems for instructional scaffolding, providing code suggestions, explanations, and project feedback.
Adoption patterns exhibit a dual structure: top-down implementation of agentic systems in enterprise pipelines, and bottom-up adoption of vibe tools among independent users. Despite their promise, adoption faces three main barriers: (i) governance concerns around AI decision transparency and security, (ii) skepticism toward black-box automation among seasoned developers, and (iii) the need for retraining teams in AI-centric workflows and agent supervision.
# C. Balanced Development Practices
As these paradigms continue to merge, a balanced model of human-AI collaboration is emerging. In this paradigm, developers use vibe interfaces to articulate system intent (e.g., “Design a multilingual registration form with spam filters”), and agents execute subcomponents (backend validation, frontend form generation, and anti-spam logic) under the developer’s supervision. The human then reviews and refines outputs, enforces policy compliance (e.g., GDPR), and initiates deployment.
Balanced practices offer the best of both paradigms: the creative freedom and speed of vibe coding, and the repeatability, quality assurance, and architectural rigor of agentic systems. Empowered by this synergy, non-programmers can initiate software logic via natural language, while engineers maintain control over architecture, policy, and integration. Still, three unresolved challenges remain: (i) ensuring runtime security against emerging prompt-based or model-exploitation vulnerabilities, (ii) implementing comprehensive and interpretable audit trails for AI decisions, and (iii) preserving and cultivating developer expertise in the face of rising abstraction and automation.
The convergence of vibe and agentic coding represents a paradigmatic shift in AI-assisted software engineering. Forward-looking organizations are embracing hybrid workflows that leverage intuitive ideation and autonomous execution. Those that invest in explainability, modular agent design, and developer empowerment are likely to lead the next era of resilient and scalable software innovation.
# VIII. CHALLENGES AND LIMITATIONS
Despite their transformative potential, both Vibe Coding and Agentic Coding present critical limitations, illustrated in Figure 7, that must be understood for safe deployment, sustainable adoption, and long-term developer resilience. These challenges are architectural, procedural, and cognitive in nature, arising not only from technical immaturity but also from systemic gaps in explainability, oversight, and security. This section provides a detailed analysis of the emerging risks and constraints associated with both paradigms, grounded in real-world case studies and interdisciplinary insights from software engineering, human-computer interaction, and AI ethics.
# A. Limitations of Agentic Coding
Agentic coding systems, while promising high degrees of autonomy, introduce risks that arise from reduced human oversight, opaque execution logic, and uncontrolled access to critical infrastructure. One of the most pressing concerns is the overdependence on agents for routine and high-stakes engineering tasks. As developers become increasingly reliant on autonomous systems, their engagement with core programming concepts and debugging strategies may diminish, leading to skill atrophy and reduced situational awareness. This is analogous to findings in aviation automation and clinical decision-support systems, where passive user roles have been shown to degrade cognitive vigilance. The long-term consequence in software engineering could be a workforce that is poorly equipped to intervene during edge-case failures or system crises.
Another serious concern is the potential for silent error propagation. Agentic systems operating across multiple modules can introduce logic faults or regressions that go undetected until deployment. Because these agents modify code, adjust configurations, and interface with APIs at runtime, a fault introduced in one subsystem can cascade downstream, particularly if agents are not equipped with rollback mechanisms or observability hooks. Examples include global refactors that destabilize microservice communication protocols or schema changes that disrupt dependent services. Robust mitigation requires explainable agent decisions, real-time anomaly detection, and strict version control governance.
In addition, the expanded runtime privileges of agentic platforms create new vectors for security vulnerabilities. Autonomously acting agents may unwittingly expose sensitive data, mishandle authentication tokens, or install unverified dependencies. Threats such as prompt injection, dependency confusion, or secret leakage via AI-generated commits are increasingly documented in agentic pipelines. Defending against these vulnerabilities necessitates rigorous sandboxing, zero-trust security policies, prompt sanitization, and cryptographic verification for all actions taken by autonomous code agents.
# B. Limitations of Vibe Coding
While vibe coding tools promote flexibility and creative exploration, they suffer from systemic challenges rooted in the opacity of model outputs and the lack of integration with formal software development lifecycles. Chief among these is the black-box nature of generation. Most LLM-based coding assistants do not expose their internal decision processes, making it difficult for developers to validate code correctness, interpret logic decisions, or trace performance regressions. This undermines trust in high-stakes domains, especially when generated code is inserted into production pathways. Furthermore, the stochasticity of model outputs can lead to inconsistent quality even under near-identical prompts.
Fig. 7: Comparison of challenge domains for Vibe Coding (left) and Agentic Coding (right) using mindmap representation.
Another prominent limitation of vibe coding is its poor compatibility with production-oriented development systems. Generated code often functions well in isolation but fails when incorporated into real-world environments due to missing context such as authentication flows, deployment configurations, or CI/CD hooks. Without access to full project state or execution context, LLMs are prone to suggesting solutions that ignore runtime dependencies or system architecture constraints. This makes them ideal for scaffolding or ideation, but suboptimal for system-level implementation unless paired with structured review protocols and toolchain integration.
Finally, the rapid, iterative style of vibe coding can erode long-term code quality. Developers focused on short feedback cycles may forgo documentation, unit testing, or adherence to architectural principles. Over time, this contributes to codebases riddled with duplication, inconsistent naming, security shortcuts, and unmaintainable logic, an accumulation of technical debt with systemic consequences. Effective interventions include mandatory linting, automated test scaffolding, and enforced review pipelines for all AI-assisted code merges. Vibe tools should serve as accelerators, not replacements, for engineering best practices.
# IX. FUTURE ROADMAP: ADVANCING AGENTIC AI FOR AUTONOMOUS SOFTWARE ENGINEERING
The future of AI-assisted programming (as depicted in Figure 8) will be increasingly shaped by the maturity and proliferation of agentic coding systems: platforms that do not merely assist in code generation but autonomously plan, execute, test, and validate software development tasks across the engineering lifecycle. As organizations seek to scale automation, reduce technical debt, and manage complex digital ecosystems, agentic AI stands at the frontier of practical transformation. This roadmap outlines the core trajectories, challenges, and infrastructure required to operationalize agentic systems responsibly and at scale.
# A. Architecting Trustworthy Autonomy
The next generation of agentic AI must prioritize trust, reliability, and governance. This entails a shift from static model inference to dynamic, feedback-rich execution environments. Agents must be designed with embedded explainability: generating transparent logs, semantic diffs, decision traces, and rollback records. As software teams integrate agents into CI/CD pipelines, static and dynamic analysis tools must be extended to interpret AI-generated logic and expose risks early.
Moreover, agentic systems must comply with software assurance standards. This includes regulatory compliance (e.g., GDPR, ISO/IEC 27001), organizational policies (e.g., coding conventions, security models), and runtime safety guarantees. Future agentic frameworks will require built-in guardrails such as rule-based policy engines, automated rollback triggers, and runtime permission sandboxes that enforce zero-trust principles during execution.
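A rule-based policy engine of the kind described reduces, in miniature, to an allowlist of permitted actions with per-action predicates; the two policies below are illustrative placeholders:

```python
# Illustrative policy table: action name -> predicate over its parameters.
POLICIES = {
    "write_file": lambda p: p["path"].startswith("workspace/"),
    "run_tests": lambda p: True,
}

def authorize(action, params):
    """Deny by default: an action passes only if an explicit policy allows it."""
    rule = POLICIES.get(action)
    return bool(rule and rule(params))

allowed = authorize("write_file", {"path": "workspace/app.py"})
denied_path = authorize("write_file", {"path": "/etc/passwd"})
denied_action = authorize("delete_branch", {"name": "main"})
```

The deny-by-default shape is the zero-trust principle in its simplest form: anything not explicitly permitted is refused.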
Fig. 8: Mindmap overview of the future roadmap for Agentic AI in autonomous software engineering, including key directions such as trustworthy autonomy, multi-agent systems, hybrid workflow integration, memory persistence, and human-AI supervision.
# B. Multi-Agent Collaboration and Specialization
Scalability in agentic coding will emerge not from a single monolithic agent, but from a constellation of specialized sub-agents (planners, coders, testers, reviewers) coordinated by an orchestrator. Inspired by distributed systems theory and modular programming paradigms, such multi-agent architectures will enable parallel task decomposition, resource optimization, and resilience through redundancy.
To enable meaningful collaboration among agents, a shared language and structured communication protocol will be necessary. Advancements in function-calling, task graph serialization, and contextual memory sharing will allow agents to synchronize states, pass artifacts, and coalesce outputs into consistent deliverables. This architectural pattern will mirror human software teams, enabling software construction to scale without linear increases in human supervision.
# C. Memory, Context, and Long-Term Adaptation
Agentic AI will only succeed in production settings if it can reason across time, projects, and usage contexts. Future systems must integrate both short-term (working) memory and persistent memory (organizational preferences, historical codebase patterns, bug history). Memory-augmented LLMs or retrieval-based hybrid agents will be critical in maintaining task continuity and avoiding context fragmentation over multi-hour or multi-day tasks.
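A retrieval-based memory can be sketched with token-overlap scoring standing in for embedding similarity; the stored notes below are fabricated examples of the "organizational preferences" such a memory would persist:

```python
def tokenize(text):
    return set(text.lower().split())

class Memory:
    """Toy persistent memory: store notes, recall the best token-overlap match."""

    def __init__(self):
        self.notes = []

    def remember(self, note):
        self.notes.append(note)

    def recall(self, query, k=1):
        q = tokenize(query)
        ranked = sorted(self.notes,
                        key=lambda n: len(q & tokenize(n)), reverse=True)
        return ranked[:k]

mem = Memory()
mem.remember("team prefers pytest over unittest")
mem.remember("database schema uses snake_case column names")
hit = mem.recall("which test framework does the team use")[0]
```

A production agent would replace the overlap score with vector retrieval, but the store-and-recall contract is the same.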
Additionally, learning from operational feedback will become central to agent refinement. Mechanisms such as reinforcement learning from human feedback (RLHF), offline evaluation from logs, and interactive model distillation will allow agents to align with evolving team practices, technology stacks, and user expectations. These capabilities will gradually shift agents from static models to continuously improving team members.
# D. Human-AI Collaboration Infrastructure
Agentic coding should not replace developers but elevate them to higher-order roles: strategic planners, architectural reviewers, and AI supervisors. To support this shift, integrated human-agent interfaces must evolve. Rich visualization dashboards, interpretability overlays, interactive agent simulations, and real-time progress diagnostics will empower humans to supervise AI workflows effectively.
Training developers to interpret, configure, and intervene in agent behavior will be essential. AI literacy programs, sandbox testing environments, and debugging toolkits tailored for AI-generated systems will form the backbone of future software education and organizational readiness.
# E. Strategic Integration and Hybrid Workflow Design
The future of software development lies not in choosing between vibe coding and agentic coding but in combining their strengths. Vibe coding, ideal for early-stage ideation, UX design, and experimental workflows, will serve as the creative front-end. Agentic coding, engineered for precision, automation, and long-horizon planning, will operationalize and scale those ideas into robust, production-grade systems.
Hybrid workflows will increasingly rely on seamless transitions: vibe tools initiating conceptual drafts, agentic agents refining and deploying them, and human teams orchestrating this interplay through continuous feedback loops. These workflows will not only maximize efficiency and innovation but also create resilient software systems that adapt to future complexity.
Agentic AI promises a paradigm shift in software engineering, transforming AI from a passive assistant to an autonomous co-developer. Realizing this potential demands more than algorithmic power; it requires trustworthy infrastructure, human-centered design, and rigorous governance. The roadmap to agentic maturity is a socio-technical journey, one that redefines collaboration, responsibility, and intelligence in software creation. Those who invest early in this convergence will shape the foundational tools of the next engineering era.
# F. Historical Evolution of AI Agents: From Rule-Based Systems to Agentic AI
The trajectory of AI agents reflects a four-decade-long transformation from symbolic, rule-based automation to generative, goal-directed intelligence. Understanding this historical evolution is essential for grounding future developments in Agentic AI within the broader arc of artificial intelligence research.
In the 1990s, AI agents were largely constructed as symbolic software entities [238]–[240], grounded in deterministic logic [241], [242] and finite-state control systems [243], [244]. Representative systems included intelligent tutoring agents like SHERLOCK [245]–[247] and Andes [248]–[250], which delivered domain-specific instruction using scripted pedagogical rules. Concurrently, mobile agents such as General Magic’s Telescript [251] facilitated lightweight task execution across distributed networks [252]. Behavioral models like the Belief-Desire-Intention (BDI) framework (e.g., PRS, JAM) formalized rational planning under constrained environments [253]. However, these early agents lacked learning capability, contextual reasoning, and autonomy beyond their initial programming.
The 2000s and 2010s marked a pivotal shift toward learning-based and networked agents [254], [255]. Advances in reinforcement learning (e.g., TD-Gammon [256] and Deep Q-Networks) enabled agents to optimize behavior through reward-driven feedback. Multi-agent systems (MAS) using frameworks such as JADE enabled collaboration, negotiation, and task allocation across agents [257]–[259]. These architectures found use in traffic systems, robotic swarms, and industrial simulation. Chatbots and NLP agents matured with systems like Siri and Alexa, offering human-AI dialogue capabilities grounded in statistical language models. While more adaptable, these systems remained bounded by task-specific learning and lacked the emergent planning and reasoning evident in human cognition.
The 2020s usher in a new paradigm: agentic coding powered by large language models (LLMs). Unlike their predecessors, LLM-based agents (e.g., AutoGPT [260], BabyAGI, Devin, OpenAI Codex) demonstrate the ability to decompose abstract goals, synthesize code, invoke APIs, interact with development environments, and reason iteratively through planning-execution-feedback loops. These agents use components such as memory buffers, tool-use modules, sandboxed execution environments, and self-verification routines to operate autonomously in complex, real-world software engineering tasks. For instance, Codex-based systems execute end-to-end pipelines, cloning GitHub repositories, updating codebases, testing, and committing patches, without line-by-line developer supervision. This level of autonomy and contextual awareness signals a profound leap in agent capability.
These trends signify not only an evolution in computational architecture but a deepening of the cognitive model underlying artificial agents. The agent is no longer a function executor or rule follower, but a self-reflective, goal-seeking entity capable of autonomous software generation, debugging, and deployment. As we move forward, the synthesis of historical rule-based robustness with modern generative flexibility offers a foundation for building trustworthy, transparent, and socially-aligned Agentic AI systems. By drawing on decades of agent modeling, from BDI frameworks to LLM orchestration, future systems can achieve both operational excellence and epistemic alignment with human objectives.
# 1 Introduction
Knowledge Graphs [7] (KGs) have become a foundational technology for integrating and querying heterogeneous data across domains such as climate science, cultural heritage, and life sciences. A core strength of KGs lies in their ability to make data explicit, interoperable, and semantically rich, aligning with the principles of FAIR [14] (Findable, Accessible, Interoperable, and Reusable) data.
Among the various approaches to constructing KGs, declarative mapping languages, such as R2RML $^ 1$ and RML [4, 9], have emerged as key enablers in both literature and practice. By explicitly stating the rules for transforming data from structured and semi-structured sources (e.g., relational databases, CSV, JSON, XML, SPARQL services, etc.) into RDF, declarative mappings promote separation of concerns, reusability, and cross-source interoperability. These characteristics are particularly beneficial for collaborative and long-lived data integration efforts, where mapping logic must be shared, adapted, and audited over time.
A variety of RML-compliant engines have been developed to support the execution of declarative mappings. Examples are RMLMapper $^ 2$ , CARML $^ 3$ , SDMRDFizer [8], and Morph-KGC [1]. These tools have proven effective in translating structured data into RDF at scale and have contributed significantly to the maturation of the semantic data integration ecosystem. However, their usage often assumes familiarity with command-line interfaces, specific configuration formats, or specific programming language environments, which can present barriers to integration in modern data science workflows. In addition, features such as mapping modularity, incremental generation, unit testing, and tight coupling with ontological reasoning are still underdeveloped or inconsistently supported across tools. In this context, PyRML is conceived as a Python-native alternative that supports interactive, programmable, and transparent KG construction. It complements existing engines while focusing on usability, extensibility, and seamless integration with the Python data ecosystem. Additionally, by abstracting some of the technical complexity while maintaining expressive power, PyRML contributes to bridging the gap between declarative semantics and practical KG engineering.
The remainder of this paper is organised as follows. Section 2 provides an overview of the related work. Section 3 details the proposed system architecture with usage examples. Section 4 describes the evaluation methodology and results. Finally, Section 5 discusses conclusions and future directions.
# 2 Related work
The construction of KGs from structured and semi-structured data has been extensively explored in the Semantic Web community$^4$. The foundational approaches focused on the integration of semantic data. These include the use of a view-based paradigm [11], such as Global-As-View [6] (GAV), Local-As-View [13] (LAV), and Global-Local-As-View [5] (GLAV). A view defines the relationships between heterogeneous data sources and a unified mediated schema. These paradigms, originally developed in the context of data warehouses and federated databases, provided the theoretical foundation for later declarative mapping languages used in the construction of KGs. In particular, the GAV approach, where each element of the mediated schema is defined as a query over the sources, closely resembles modern RML mappings, where ontology terms are defined in terms of the data source structure. In contrast, LAV and GLAV underpin more expressive approaches such as Ontology-Based Data Access (OBDA), where mappings specify how source data can satisfy arbitrary ontology queries. Although powerful, OBDA systems often rely on complex reasoning services, which can hinder scalability and accessibility for practitioners [10].
In recent years, declarative mapping languages such as the RDB to RDF Mapping Language (R2RML) and the RDF Mapping Language [4, 9] (RML) have become a standard mechanism for aligning raw data with RDF vocabularies and ontologies in a transparent and maintainable way. More specifically, R2RML is designed to cope with the transformation of relational databases to RDF, whilst RML generalises the mapping model of R2RML to support diverse semi-structured data sources. Accordingly, several tools have been developed to implement and execute RML mappings. The RMLMapper, written in Java, was among the first engines to support the full RML core semantics and has been widely used for research and data publication tasks. CARML builds on the same paradigm, offering improved modularity and performance. More recently, tools like SDMRDFizer [8] and Morph-KGC [1] have focused on scalability and performance, enabling the efficient generation of large KGs from relational databases and tabular data. These engines have been successfully applied in large-scale projects, such as iASiS $^ { 5 }$ , an EU-funded project that enables precision medicine approaches by utilising insights from patient data.
Despite their robustness, existing RML engines often assume specific technological stacks (e.g., Java or Docker-based deployments), and their integration with modern data science workflows—typically centred around Python—is limited. PyRML contributes to this landscape by offering a Python-native, programmable interface for RML-based KG construction. Unlike black-box engines, PyRML enables fine-grained control over mapping composition, execution, and testing, and is designed to integrate with widely used Python libraries such as Pandas and RDFlib. This positions PyRML as a complementary tool in the RML ecosystem, addressing the need for flexible, scriptable, and developer-friendly solutions in data-centric environments.
A complementary approach to data integration is presented by SPARQL Anything [2], which enables querying heterogeneous data sources directly using SPARQL, without the need for upfront data transformation into RDF. By overloading the SERVICE clause in SPARQL 1.1, SPARQL Anything allows users to access data from various formats through a uniform SPARQL interface. This approach leverages the Facade-X meta-model [3] to provide a simplified RDF representation of diverse data sources, facilitating rapid prototyping and ad-hoc querying. While SPARQL Anything excels in on-the-fly data access, it does not produce persistent RDF graphs, which may be a limitation for applications requiring long-term data storage and reasoning capabilities.
# 3 The PyRML system
# 3.1 Architecture
Figure 1 shows the modular architecture of PyRML, which consists of four main modules: (i) the API module, (ii) the Core module, (iii) the Functions module, and (iv) the Mapper module.
Fig. 1. The architecture of PyRML
API module. The API module provides the abstract base classes that define the core structure of the programming interface for capturing the RML model within the software platform. At the top of this structure is the TermMap abstract base class, which represents any entity in an RML mapping associated with an IRI and intended for generating RDF data from a logical table. It is worth noting that the class hierarchy defined in PyRML directly mirrors the taxonomy of classes specified in the RML and R2RML ontologies. Consequently, examples of classes derived from TermMap include SubjectMap, which specifies the mapping instructions to generate the subject of a triple from a logical table, and PredicateObjectMap, which links a PredicateMap and an ObjectMap to generate the predicate and object of a triple, respectively.
The TermMap class extends the abstract base class IdentifiedNode, implemented by RDFLib$^6$, a widely used Python package for working with RDF. Hence, all instances of TermMap are valid RDF terms that can be used as the subject, predicate, or object of an RDF triple when constructing an RDFLib graph.
For example, the following code defines a TermMap through its derived class SubjectMap, and uses it as the subject of a triple to construct a graph with RDFLib.
rr: Namespace = Namespace('http://www.w3.org/ns/r2rml#')
tm: TermMap = SubjectMap(ex.SM)

g: Graph = Graph()
g.add((tm, RDF.type, rr.SubjectMap))
The abstract methods of TermMap include: (i) to_rdf, which converts the PyRML term into an RDFLib graph while preserving the graph structure rooted at that term; (ii) apply, which performs the mapping by using all the information associated with a term against a given LogicalSource; and (iii) from_rdf, which is a static method that allows a TermMap and its associated terms to be instantiated directly from an RDFLib graph. These three abstract methods are implemented by the base classes of TermMap that are defined in the core module.
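As a rough illustration of this interface, the sketch below spells out the three abstract methods. The class and method names follow the description above, but the bodies (template expansion via str.format, triples as plain tuples) are simplified assumptions for illustration, not PyRML's actual implementation.

```python
from abc import ABC, abstractmethod

class TermMap(ABC):
    """Simplified sketch of the TermMap interface described above."""

    def __init__(self, iri: str, **kwargs):
        self.iri = iri
        self.params = kwargs  # e.g. template=..., reference=..., constant=...

    @abstractmethod
    def to_rdf(self):
        """Serialise this term (and the sub-graph rooted at it) as RDF."""

    @abstractmethod
    def apply(self, logical_source):
        """Execute the mapping of this term against a logical source."""

    @staticmethod
    @abstractmethod
    def from_rdf(graph, parent):
        """Instantiate a TermMap and its associated terms from an RDF graph."""

class SubjectMap(TermMap):
    def to_rdf(self):
        # Triples represented here as plain tuples instead of an RDFLib graph.
        return [(self.iri, 'rdf:type', 'rr:SubjectMap')]

    def apply(self, logical_source):
        # Expand the rr:template for every row of the logical source.
        template = self.params['template']
        return [template.format(**row) for row in logical_source]

    @staticmethod
    def from_rdf(graph, parent):
        return SubjectMap(parent)
```

Derived classes such as PredicateMap or ObjectMap would follow the same pattern, mirroring the RML/R2RML class taxonomy.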
Core module. The core module has a twofold purpose, as reflected in its name: (i) it serves as the foundation of PyRML by implementing all the derived classes of TermMap that enable PyRML’s functionality, and (ii) it addresses the core model of RML. The constructor (i.e. the __init__ method) of the class TermMap accepts a mandatory positional argument, which is the IRI of an RML term, and optional keyword arguments that can be used to associate term-specific values with an RML term, such as the values of the rr:template or rml:reference predicates in an RML mapping. The following code snippet defines an RML triples map in a programmatic way.
1  from rdflib import FOAF
2
3  ls: LogicalSource = ...  # details on logical sources later
4
5  sm: SubjectMap = SubjectMap(ex.SM,
6      template='https://foo.org/d/{ID}',
7      _classes=FOAF.Person)
8
9  pm: PredicateMap = PredicateMap(ex.PM, constant=FOAF.name)
10
11 om: ObjectMap = ObjectMap(ex.OM,
12     reference='name',
13     term_type=rr.Literal)
14
15 pom: PredicateObjectMap = PredicateObjectMap(ex.POM,
16     predicates=pm,
17     object_map=om)
18
19 tm: TripleMappings = TripleMappings(ex.TM,
20     logical_sources=ls,
21     subject_maps=sm,
22     predicate_object_maps=pom)
23
24 g: Graph = tm.to_rdf()
In the code snippet above, a SubjectMap named ex:SM is created at line 5. This SubjectMap uses the template IRI https://foo.org/d/{ID}, where {ID} is replaced by the value of the ID attribute from the input data. It also declares that each generated subject is an instance of foaf:Person, specified via the declaration _classes=FOAF.Person.
An instance of PredicateMap is defined at line 9, indicating that the predicate of the triple is the FOAF property foaf:name. Then, an ObjectMap named ex:OM is instantiated at line 11. The object value is taken from the name attribute in the input data (i.e. reference='name') and the term type is set to be an RDF literal (i.e. term_type=rr.Literal). At line 15 a PredicateObjectMap is created by connecting the pm predicate map to the om object map. Finally, a triples mapping is instantiated at line 19. The latter represents a complete mapping rule that takes data from the logical source identified by ls, builds a subject from sm, and predicates and objects from pom. An invocation of the method to_rdf() against the triples mapping tm is provided at line 24 to return an RDFLib graph as output. The resulting graph can then be serialised, e.g. in Turtle.
As stated previously, any instance of TermMap can be generated directly from an RDFLib graph by invoking the from_rdf() method on the corresponding class. For example, the triples map ex:TM can be converted to a PyRML object as shown in the code block below.
tm: TripleMappings = TripleMappings.from_rdf(g, parent=ex.TM)
The execution of the declarative mappings is enabled by the method apply that performs the transformation defined in a TermMap against a specific LogicalSource. In PyRML, a LogicalSource makes use of a Pandas DataFrame for providing a flexible and Python-native abstraction of the input data. This design decouples data access and parsing from the mapping logic, allowing users to load, clean, transform, and prepare data using familiar Pandas operations before applying declarative mappings. By working with DataFrames, PyRML integrates seamlessly into Python-based data science workflows, enabling direct manipulation of data, efficient debugging, and reuse of in-memory datasets from diverse sources (e.g., CSV, databases). This approach aligns the RML notion of logical tables with the DataFrame’s tabular structure, facilitating transparent and programmable knowledge graph construction while maintaining compatibility with RML’s mapping semantics. In this context, the apply method enables the vectorised application of transformation operations across a DataFrame that represents a LogicalSource. Instead of processing records one by one, the apply method efficiently computes the corresponding RDF terms (e.g., subject IRIs, object literals, etc.) for all rows in a single operation, leveraging Pandas’ inherent performance optimisations. This vectorised processing enhances scalability and supports the integration of complex transformation logic directly into the mapping execution pipeline.
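The row-wise term generation described above can be shown with a small standalone sketch. Plain Python lists of dicts stand in for the Pandas DataFrame, and apply_template is a hypothetical helper (not PyRML's API) that expands an RML-style {column} template for every row:

```python
import re

def apply_template(template: str, rows):
    """Expand an RML-style template, e.g. 'https://foo.org/d/{ID}',
    for every row of a tabular logical source. A vectorised version
    would perform the same substitution with DataFrame column
    operations instead of a per-row loop."""
    placeholder = re.compile(r'\{(\w+)\}')
    return [placeholder.sub(lambda m: str(row[m.group(1)]), template)
            for row in rows]

rows = [{'ID': 10, 'name': 'Alice'}, {'ID': 20, 'name': 'Bob'}]
subjects = apply_template('https://foo.org/d/{ID}', rows)
# subjects == ['https://foo.org/d/10', 'https://foo.org/d/20']
```

The same pattern generalises to object literals (via column references) and constants, which is what the real apply method computes for each kind of term map.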
PyRML currently supports the following data sources for instantiating a LogicalSource: (i) CSV, (ii) XML, (iii) JSON, (iv) SPARQL, (v) MySQL, (vi) SQL Server, and (vii) PostgreSQL. A LogicalSource in PyRML is built on top of a Source, which is the base abstract class used to represent various data source types. For instance, the class CSVSource extends Source to model a CSV data source. The code snippet below illustrates how to create a logical source from a CSV file.
from pyrml import LogicalSource, Source, CSVSource

source: Source = CSVSource(ex.CSV, 'students.csv')
ls: LogicalSource = LogicalSource(ex.LS, sources=source)
Functions module. Many real-world scenarios require additional data transformations beyond simple attribute retrieval or template substitution—such as string manipulation, date formatting, or value normalisation. To address these needs, the RML community introduced RML Functions$^7$ (or FnO Functions), a mechanism to declaratively invoke functions as part of a mapping, following the principles of the Function Ontology (FnO). PyRML provides a functions module to support the use of RML Functions within declarative mappings. Similar to engines like RMLMapper, PyRML includes a list of built-in functions that mirrors the default set provided by RMLMapper, allowing users to leverage standard transformation operations out of the box. This list is implemented in the Functions module$^8$.
A distinctive feature of PyRML is its support for user-defined functions. PyRML allows developers to extend the set of available functions programmatically, integrating custom transformation logic directly into the mapping execution pipeline. This is achieved by defining a standard Python function and registering it using a dedicated decorator provided by the library, called rml_function. The rml_function decorator facilitates the registration of a Python function as an RML Function by associating it with a function identifier (IRI) and parameter mappings. Its implementation follows a standard Python decorator pattern, wrapping the original function and registering it with the PyRML runtime. Once registered, the function can be invoked within RML mappings through the Function Ontology (FnO) mechanism. The following are the signature and a usage example of the rml_function decorator.
def rml_function(fun_id: str, **params: Dict[str, str]) -> Callable:
    ...

@rml_function(
    fun_id='http://users.ugent.be/~bjdmeest/function/grel.ttl#toLowerCase',
    value='http://users.ugent.be/~bjdmeest/function/grel.ttl#valueParameter')
def to_lower_case(value: str) -> str:
    return value.lower()
In the example above the Python function to_lower_case implements the logic to convert a string to lowercase. The @rml_function decorator registers it under the IRI corresponding to the GREL$^9$ toLowerCase function from FnO. The mapping between the FnO parameter IRI and the Python argument name is specified via the value keyword. Once registered, this function can be invoked inside an RML mapping by referencing its function IRI, enabling seamless integration between declarative mappings and Python-defined transformation logic.
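A minimal sketch of how such a decorator-backed registry could work is shown below. The registry layout and the invoke helper are assumptions made for illustration; PyRML's internals may differ:

```python
from typing import Callable, Dict

# Global registry mapping a function IRI to the Python callable and
# the FnO-parameter-IRI -> Python-argument-name correspondence.
FUNCTION_REGISTRY: Dict[str, dict] = {}

def rml_function(fun_id: str, **params: str) -> Callable:
    def decorator(fun: Callable) -> Callable:
        FUNCTION_REGISTRY[fun_id] = {
            'fun': fun,
            # invert kwargs: parameter IRI -> Python argument name
            'params': {iri: arg for arg, iri in params.items()},
        }
        return fun
    return decorator

def invoke(fun_id: str, bindings: Dict[str, str]):
    """Resolve FnO parameter IRIs to keyword arguments and call."""
    entry = FUNCTION_REGISTRY[fun_id]
    kwargs = {entry['params'][iri]: value for iri, value in bindings.items()}
    return entry['fun'](**kwargs)

GREL = 'http://users.ugent.be/~bjdmeest/function/grel.ttl#'

@rml_function(GREL + 'toLowerCase', value=GREL + 'valueParameter')
def to_lower_case(value: str) -> str:
    return value.lower()

result = invoke(GREL + 'toLowerCase', {GREL + 'valueParameter': 'PyRML'})
# result == 'pyrml'
```

Because the decorator returns the original function unchanged, to_lower_case remains an ordinary Python function while also being reachable from mappings through its IRI.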
Mapper module. The mapper module is the execution core of PyRML, responsible for transforming RML mapping definitions into RDF triples. It provides a Python-native, extensible, and efficient engine for materialising knowledge graphs from structured and semi-structured data, fully aligned with the
RDF Mapping Language (RML). This module enables the seamless integration of declarative knowledge graph construction into Python-based workflows. At its core is the RMLConverter class, which implements the high-level interface for executing a set of RML mappings. It supports both single-threaded and parallel execution strategies, allowing it to scale from lightweight testing to larger data transformation tasks. When processing a mapping, the converter first renders the RML document using Jinja2$^{10}$ templating, which allows users to define parameterised and reusable mappings. Template variables can be injected at runtime, enabling dynamic substitution of values such as file paths, graph names, or filter conditions, improving the flexibility and maintainability of mapping definitions. The converter then parses the rendered mapping document, builds an internal representation of the mapping using the classes for term maps defined in the API and Core modules, and applies each term map to the associated logical source by leveraging Pandas. The class RMLConverter can be instantiated programmatically or used from the command line through a dedicated Python script. The following is an example of the programmatic use of the class RMLConverter.
from pyrml import PyRML, RMLConverter

c: RMLConverter = PyRML.get_mapper()

'''
Invoke the method convert on the instance of class RMLConverter by:
- using the persons.ttl RML descriptor;
- obtaining an RDF graph as output.
'''
g: Graph = c.convert('persons.ttl')
Variables for Jinja2 templating can be provided through the template_vars argument of the convert method. This argument accepts a Python dictionary, where each key corresponds to a parameter referenced in the template, and each value specifies the actual value to substitute. An example of an RML file that makes use of templating is the following.
1 <#Mapping> a rr:TriplesMap ;
2     rml:logicalSource [
3         rml:source "{{ INPUT_CSV }}" ;
4         rml:referenceFormulation ql:CSV
5     ] ;
6     rr:subjectMap [
7         ... ]
The template variable in the RML above is reported at line 3, i.e. {{ INPUT_CSV }}, and can be set to an actual value as in the following example.
vars = {'INPUT_CSV': 'students.csv'}
g: Graph = c.convert('persons.ttl', template_vars=vars)
We note that Jinja2 templating, as implemented in PyRML, is a non-standard extension and is not part of the official RML specification or reference documentation. For more details on how templating works, please refer to the official Jinja2 documentation.
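To make the substitution step concrete, the following standalone sketch mimics {{ VAR }} substitution with a regular expression. PyRML delegates rendering to Jinja2, which is far more powerful (conditionals, loops, filters); render_mapping is a hypothetical stand-in, not a PyRML function:

```python
import re

def render_mapping(rml_text: str, template_vars: dict) -> str:
    """Substitute {{ VAR }} placeholders in an RML document with values
    from template_vars. Only plain variable substitution is mimicked."""
    pattern = re.compile(r'\{\{\s*(\w+)\s*\}\}')
    return pattern.sub(lambda m: str(template_vars[m.group(1)]), rml_text)

mapping = 'rml:source "{{ INPUT_CSV }}" ;'
rendered = render_mapping(mapping, {'INPUT_CSV': 'students.csv'})
# rendered == 'rml:source "students.csv" ;'
```

Rendering happens before the mapping is parsed, so the downstream engine only ever sees a fully resolved RML document.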
The following is a usage example of the command-line tool pyrml-mapper.py, which wraps an application around the RMLConverter.
python pyrml-mapper.py [-o output-file] [-f rdf-syntax] [-m] input
where:

– input is the required positional argument that specifies the RML mapping file to be used for RDF conversion;
– -o filename is an optional argument that specifies the output file for saving the resulting RDF graph. If omitted, the output is written to standard output by default;
– -f rdf-syntax is an optional argument to define the syntax used to serialise the RDF graph. Supported values include n3, nquads, nt, pretty-xml, trig, trix, turtle, and xml. If not specified, nt (N-Triples) is used by default;
– -m is an optional flag that enables multiprocessing to accelerate the transformation process.
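The option handling described above could be reproduced with a small argparse sketch. Defaults mirror the documented behaviour (standard output, N-Triples); this is illustrative and not the actual pyrml-mapper.py source:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Argument parser matching the pyrml-mapper.py options listed above."""
    p = argparse.ArgumentParser(prog='pyrml-mapper.py')
    p.add_argument('input', help='RML mapping file used for RDF conversion')
    p.add_argument('-o', dest='output', default=None,
                   help='output file (default: standard output)')
    p.add_argument('-f', dest='syntax', default='nt',
                   choices=['n3', 'nquads', 'nt', 'pretty-xml',
                            'trig', 'trix', 'turtle', 'xml'],
                   help='RDF serialisation syntax (default: nt)')
    p.add_argument('-m', dest='multiprocessing', action='store_true',
                   help='enable multiprocessing')
    return p

args = build_parser().parse_args(['-f', 'turtle', '-m', 'mapping.ttl'])
# args.input == 'mapping.ttl', args.syntax == 'turtle',
# args.multiprocessing is True
```

Using choices for -f gives early validation of the serialisation name before any mapping work starts.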
# 3.2 Release and Availability Notes
Reusability. PyRML is released as open-source software under the Apache 2.0 license$^{11}$. It is well-documented, with API references, examples, and tutorials available in the GitHub repository. It is general-purpose and not tied to any specific domain, making it suitable for a wide range of use cases. The modular architecture supports extension, e.g., custom functions and mapping loaders. The documentation clearly specifies supported features, usage patterns, and known limitations, allowing users to adapt the tool confidently to their own needs.
Availability. PyRML is publicly available at:

– GitHub: https://github.com/anuzzolese/pyrml;

11 https://github.com/anuzzolese/pyrml?tab=Apache-2.0-1-ov-file
– Python Package Index (PyPI): https://pypi.org/project/pyrml-lib, which allows to download and install PyRML directly from the official third-party software repository for Python via the pip command, e.g. pip install pyrml-lib;
– Documentation: included in the repository and browsable via GitHub Pages;
– Canonical citation: DOI: https://doi.org/10.5281/zenodo.15399948 released by Zenodo;
– License: Apache 2.0.
Impact and adoption. PyRML fills a critical gap in the knowledge graph construction landscape by providing a Python-native, declarative RML mapping engine that integrates seamlessly with modern data science workflows. It is of direct relevance to the Semantic Web community, particularly to researchers and practitioners engaged in FAIR data, open science, and knowledge integration initiatives. More broadly, PyRML lowers the barrier to entry for the scientific and societal adoption of Semantic Web technologies by aligning with widely adopted tools and conventions in the Python ecosystem. PyRML has already seen adoption in several EU-funded projects. It has been successfully integrated into the data transformation pipelines of HACID (Hybrid Human-AI Collective Intelligence in Open-Ended Domains), WHOW (Water Health Open Knowledge), and FOSSR (Fostering Open Science in Social Sciences and Humanities). In these projects, PyRML has been used to support transparent, modular, and reproducible workflows for transforming heterogeneous data sources into semantically rich RDF graphs aligned with domain ontologies. Its programmable and extensible architecture has proven particularly valuable in collaborative and evolving research environments. The project is actively maintained by a team of four developers and is openly available on GitHub. As of May 13th, 2025, the repository has received 37 stars, been forked 13 times, and, between April 30th and May 13th, has been cloned 30 times and viewed 155 times$^{12}$. These indicators of adoption and engagement confirm the practical relevance of PyRML and its growing role in enabling declarative, transparent, and reproducible knowledge graph engineering across both research and applied domains.
# 4 Evaluation
# 4.1 Experimental setup
To evaluate the correctness and performance of PyRML, we designed an experimental protocol based on the official RML-Core test cases$^{13}$. The RML-Core test cases are a standardised suite developed by the Knowledge Graph Construction Community Group$^{14}$ to evaluate the correctness and feature coverage of RML-compliant engines. Each test case defines a mapping scenario with an input data source, an RML mapping file, and an expected RDF output. The suite covers key RML features such as logical sources, templates, joins, constant values, and graph maps. These tests are designed to be minimal, deterministic, and interpretable, making them ideal for verifying whether an engine behaves according to the standard. Successful execution of the core test cases provides strong evidence of RML compliance and correctness. Supported input formats in the test suite include CSV, JSON, XML, SPARQL endpoints, MySQL, SQL Server, and PostgreSQL databases. By validating against these tests, an engine demonstrates conformance to the RML specification across a wide range of source types and mapping patterns. The evaluation was carried out in two phases: feature coverage and computational performance benchmarking.
RML-Core conformance. In the first phase, we assessed the coverage of PyRML against the RML-Core specification$^{15}$. This was done by systematically executing all the RML-Core test cases and verifying whether the output RDF graphs conformed to the expected results. This step ensured that PyRML adheres to the semantics and structural requirements of the RML specification. For this analysis we set up a Docker container to provide Apache Jena Fuseki$^{16}$ as a SPARQL endpoint, as well as MySQL, PostgreSQL, and SQL Server. The container is available on GitHub$^{17}$ and can be instantiated via docker-compose$^{18}$. Then, a Python script$^{19}$ was developed to assess RML-Core conformance using Python’s unit testing framework, where each RML-Core test case is interpreted as a separate unit test.
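The idea of treating each RML-Core test case as a unit test can be illustrated with a toy example. The run_engine stub and the expected triples are placeholders; the real script executes PyRML on the official test-case mappings and data:

```python
import unittest

def run_engine(test_case_id: str) -> set:
    """Stand-in for executing PyRML on a test case's mapping and data;
    returns the produced triples as a set of tuples."""
    return {('ex:1', 'foaf:name', '"Alice"')}

# Expected output per test case (placeholder data).
EXPECTED = {
    'RMLTC0000-CSV': {('ex:1', 'foaf:name', '"Alice"')},
}

class RMLCoreConformance(unittest.TestCase):
    def test_rmltc0000_csv(self):
        # A test case passes when produced and expected triples coincide.
        self.assertEqual(run_engine('RMLTC0000-CSV'),
                         EXPECTED['RMLTC0000-CSV'])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RMLCoreConformance)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Comparing triples as sets sidesteps ordering differences; a production harness would additionally account for blank-node isomorphism.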
Computational performance. In the second phase, we benchmarked the computational performance of PyRML and compared it to the widely used RMLMapper engine. For each test case, we executed the transformation 10 times using both engines and recorded the execution time in milliseconds. The final reported time for each engine and test case corresponds to the average execution time over the 10 runs. This procedure reduces the impact of transient system-level fluctuations and provides a more stable basis for comparison. The bash script that enabled the comparison is available on GitHub$^{20}$.
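The averaging protocol can be sketched as a small timing harness (illustrative only; the actual comparison was driven by the bash script mentioned above, and a cheap workload stands in for a real RML transformation):

```python
import statistics
import time

def benchmark(transform, runs: int = 10):
    """Run `transform` `runs` times and return the mean and standard
    deviation of the wall-clock execution time in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        transform()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.stdev(timings)

mean_s, std_s = benchmark(lambda: sum(range(100_000)))
```

Reporting the standard deviation alongside the mean, as in Figure 2, makes run-to-run stability visible rather than hiding it in a single number.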
All benchmarks were run under identical conditions on the same hardware environment, namely a 2.3 GHz Quad-Core Intel Core i7 with 32 GB of memory, to ensure comparability. The results of this evaluation are reported in Section 4.2.
# 4.2 Results
RML-Core conformance. Table 1 presents the coverage of PyRML against the reference RML core test suite, broken down by the type of logical data source.
Each row corresponds to a different source type supported in RML, and the table reports three key metrics: (i) the number of test cases successfully executed by PyRML, where the generated RDF output matches the expected result; (ii) the number of test cases where PyRML did not produce the expected output, either due to missing feature support or incorrect behaviour; and (iii) the total number of test cases defined in the suite for that source type.
Table 1. PyRML coverage of test cases for RML core.
Complete coverage (100% pass rate) is achieved for CSV, JSON, and XML, indicating robust and stable support for these commonly used structured and semi-structured formats. Among the 26 RML core test cases involving SPARQL data sources, PyRML successfully passes 24 and fails 2. The two failed cases, i.e. RMLTC0008b-SPARQL and RMLTC0009a-SPARQL, show limitations in the current implementation of join semantics when SPARQL is used as a logical source. In fact, RMLTC0008b-SPARQL tests the generation of triples that involve a referencing object map, where data from one source must be joined with another in a specific predicate object map. Similarly, RMLTC0009a-SPARQL evaluates the handling of foreign key-style relations between logical sources—an operation conceptually analogous to joins in relational databases. While PyRML supports referencing object maps for sources such as CSV and JSON, support for such joins across SPARQL requires more investigation and testing. Among the 60 RML core test cases available for each of the relational database systems, i.e. MySQL, PostgreSQL, and SQL Server, PyRML passes 55 and fails 5, which are: (i) RMLTC0009d; (ii) RMLTC0011a; (iii) RMLTC0013a; (iv) RMLTC0015a; and (v) RMLTC0016d. These failures are consistent across all three systems and are attributed to specific limitations in the current implementation of PyRML’s SQL mapping layer. The test case RMLTC0009d checks the ability to handle column names that match SQL reserved keywords. PyRML currently does not implement automatic quoting or escaping of such identifiers, leading to parsing or execution errors during mapping. RMLTC0011a involves the mapping of many-to-many (M:N) relationships via custom SQL queries embedded in the logical source. RMLTC0013a tests the behaviour of referencing object maps when joined columns contain null values.
In accordance with the RML specification, no triples should be generated in such cases; however, PyRML does not yet suppress triple generation in the presence of nulls, resulting in incorrect output. RMLTC0015a evaluates the correct generation of language-tagged literals based on values from a source column. Finally, RMLTC0016d tests the handling of datatype conversions, specifically boolean values. PyRML does not yet support automatic casting of source values to xsd:boolean, which leads to failures in producing semantically correct RDF literals when boolean datatypes are required. All these test cases represent clear, bounded limitations rather than architectural constraints. All identified issues are part of PyRML’s ongoing development roadmap, and their resolution is planned in upcoming releases to support full RML compliance across relational database backends.
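For instance, the missing identifier quoting behind the RMLTC0009d failure amounts to something like the following; the reserved-word list and per-dialect quoting rules here are simplified assumptions for illustration:

```python
# Column names that collide with SQL reserved words must be quoted
# before being interpolated into a generated query.
RESERVED = {'select', 'order', 'group', 'when', 'from'}

def quote_identifier(name: str, dialect: str = 'ansi') -> str:
    """Quote an identifier when it clashes with a reserved keyword.
    ANSI dialects use double quotes; SQL Server uses square brackets."""
    if name.lower() in RESERVED:
        return f'[{name}]' if dialect == 'sqlserver' else f'"{name}"'
    return name

cols = ['id', 'when', 'amount']
query = 'SELECT {} FROM student'.format(
    ', '.join(quote_identifier(c) for c in cols))
# query == 'SELECT id, "when", amount FROM student'
```

Applying such a step uniformly in the SQL mapping layer would address the parsing and execution errors observed for RMLTC0009d.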
Computational performance. Figure 2 shows the results of the comparative analysis between PyRML and RMLMapper. The results are expressed in seconds, whilst error bars represent standard deviations, which are reported in brackets.
Fig. 2. Comparison of PyRML with RMLMapper with respect to the execution time of the test cases, expressed in seconds.
The results demonstrate a consistent performance advantage for PyRML across all source types. For example, on CSV sources, PyRML achieved an average execution time of 1.06 seconds, compared to 1.79 seconds for RMLMapper. Similar gains are observed for XML (0.99s vs. 1.83s), JSON (0.92s vs. 1.70s), and SPARQL (1.18s vs. 1.98s). The difference is particularly notable for relational database sources, where PyRML outperformed RMLMapper on both MySQL (1.45s vs. 2.27s) and PostgreSQL (1.09s vs. 2.11s). In addition to lower average execution times, PyRML also exhibited lower standard deviation across all source types, indicating more stable performance. For instance, in the case of CSV sources, the standard deviation for PyRML was 0.17s, compared to 0.25s for RMLMapper. This pattern is consistent for all data sources, reflecting the deterministic and efficient design of PyRML’s mapping engine. These results highlight PyRML’s efficiency and robustness, confirming its suitability for interactive and automated data integration pipelines, especially where low-latency transformation is required. | Knowledge Graphs (KGs) are increasingly adopted as a foundational technology
for integrating heterogeneous data in domains such as climate science, cultural
heritage, and the life sciences. Declarative mapping languages like R2RML and
RML have played a central role in enabling scalable and reusable KG
construction, offering a transparent means of transforming structured and
semi-structured data into RDF. In this paper, we present PyRML, a lightweight,
Python-native library for building Knowledge Graphs through declarative
mappings. PyRML supports core RML constructs and provides a programmable
interface for authoring, executing, and testing mappings directly within Python
environments. It integrates with popular data and semantic web libraries (e.g.,
Pandas and RDFlib), enabling transparent and modular workflows. By lowering the
barrier to entry for KG creation and fostering reproducible, ontology-aligned
data integration, PyRML bridges the gap between declarative semantics and
practical KG engineering. | [
"cs.DB",
"cs.AI"
] |
# SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification
Shuo Yang 1 Bardh Prenkaj 1 Gjergji Kasneci 1
# Abstract
Shortcut learning undermines model generalization to out-of-distribution data. While the literature attributes shortcuts to biases in superficial features, we show that imbalances in the semantic distribution of sample embeddings induce spurious semantic correlations, compromising model robustness. To address this issue, we propose SCISSOR (Semantic Cluster Intervention for Suppressing ShORtcut), a Siamese network-based debiasing approach that remaps the semantic space by discouraging latent clusters exploited as shortcuts. Unlike prior data-debiasing approaches, SCISSOR eliminates the need for data augmentation and rewriting. We evaluate SCISSOR on 6 models across 4 benchmarks: Chest-XRay and Not-MNIST in computer vision, and GYAFC and Yelp in NLP tasks. Compared to several baselines, SCISSOR reports $+ 5 . 3$ absolute points in F1 score on GYAFC, $+ 7 . 3$ on Yelp, $+ 7 . 7$ on ChestXRay, and $+ 1$ on Not-MNIST. SCISSOR is also highly advantageous for lightweight models with ${ \sim } 9 . 5 \%$ improvement on F1 for ViT on computer vision datasets and ${ \sim } 1 1 . 9 \%$ for BERT on NLP. Our study redefines the landscape of model generalization by addressing overlooked semantic biases, establishing SCISSOR as a foundational framework for mitigating shortcut learning and fostering more robust, bias-resistant AI systems.
# 1. Introduction
In recent years, machine learning models have surpassed human capabilities in various domains, such as education and E-commerce (Kasneci et al., 2023; Bodonhelyi et al., 2024). However, the high operational and usage costs of large language models severely limit their scalability and practical deployment (Li & Liang, 2021; Hu et al., 2022). In contrast, the pre-training and fine-tuning pipeline offers a cost-effective and adaptable solution (Devlin et al., 2019).
Figure 1. Illustration of sentiment classification, where semantic space shows clusters for “Food” and “Furniture.” “Coffee Machine” and “Ice Cream” are misclassified test samples.
However, pre-trained models often fail to maintain the performance observed during fine-tuning when applied to realistic data (Sun et al., 2024). Further analysis attributed this to data biases (Yuan et al., 2024), a phenomenon known as the shortcut issue. Specifically, models rely on spurious correlations between features and labels, obtaining substantially better results on independently-and-identically-distributed (ID) data than on out-of-distribution (OOD) data.
For instance, fact-checking models may evaluate the truthfulness of a claim by counting its negations (Thorne et al., 2018), and bird models may misclassify birds as waterbirds based on water in the background (Sagawa et al., 2020). Although these shortcuts may hold for specific datasets, they significantly hinder the applicability of models to real-world scenarios (Sugawara et al., 2018).
Current research typically attributes shortcuts to the fragility of superficial features, e.g., words or pixels (Chen et al., 2023; Xu et al., 2023a), as opposed to the robustness of semantic features. However, we challenge this assumption and argue that the distribution of semantic embeddings (Reimers & Gurevych, 2019) can also appear as shortcuts.
To illustrate this, consider the example in Fig. 1. Suppose we train a binary sentiment classifier using reviews from a biased E-Commerce website, where all food-related reviews are positive, and the term "Ice-cream" is not mentioned. Then, if we apply this classifier to a negative review about Ice-cream, will it classify correctly? It is unlikely, as "Ice-cream" is a food item and its word embedding is likely close to other food-related terms. Consequently, the classifier may associate the semantic region representing food with the "positive" label, thereby losing generalizability. In this case, the shortcut arises not from superficial features, but from semantic information. Similarly, in medicine, semantic shortcuts may lead to the misclassification of conditions in populations with similar physiological characteristics, resulting in faulty diagnoses. As for autonomous vehicles, their systems might misinterpret ambiguous road signs due to reliance on simplified semantic patterns. All these cases highlight that semantic shortcuts may pose an urgent and previously overlooked risk.
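The ice-cream scenario can be made concrete with a toy sketch. The 2-D embedding coordinates, the cluster placement, and the 1-NN stand-in classifier below are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D "embedding space": food reviews cluster together and are all
# labeled positive in the biased training data; furniture reviews are mixed.
food          = rng.normal(loc=[2.0, 0.0],   scale=0.3, size=(20, 2))  # label 1
furniture_pos = rng.normal(loc=[-2.0, 1.0],  scale=0.3, size=(10, 2))  # label 1
furniture_neg = rng.normal(loc=[-2.0, -1.0], scale=0.3, size=(10, 2))  # label 0
X = np.vstack([food, furniture_pos, furniture_neg])
y = np.array([1] * 30 + [0] * 10)

def predict(x):
    """1-nearest-neighbour stand-in for any classifier fit on these embeddings."""
    return int(y[np.linalg.norm(X - x, axis=1).argmin()])

# A *negative* review about ice cream still embeds near the food cluster,
# so the classifier follows the semantic shortcut and predicts positive.
ice_cream_negative = np.array([2.1, 0.1])
print(predict(ice_cream_negative))  # → 1: positive, despite the negative sentiment
```

The misclassification is driven purely by where the sample lands in the embedding space, not by any superficial feature.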
To address these issues, we propose a novel debiasing architecture, named SCISSOR (Semantic Cluster Intervention for Suppressing ShORtcut), based on a Siamese network (Bromley et al., 1993).1 Our objective is to filter out semantic information irrelevant to downstream tasks from samples exhibiting imbalanced distribution patterns. To achieve this, we first employ the Markov Clustering Algorithm (MCL) (Van Dongen, 2008) to cluster samples based on their semantic similarity, aiming to identify potential imbalanced areas. After that, we construct contrastive data (Chen et al., 2020) to train a debiasing module, which remaps the semantic space to disrupt the clusters that could act as shortcuts, thereby guiding the model to focus on robust features. Different from the triplet loss (Schroff et al., 2015), we consider not only the samples’ original labels but also their distribution. Finally, we insert the debiasing module at the output of the pre-trained model and train it jointly with the classification head. Our contributions are as follows:
1. Novel conceptualization of semantic bias: To the best of our knowledge, we are the first to identify and demonstrate, both theoretically and empirically, that imbalances in the semantic distribution of samples can also lead to the shortcut problem.
2. Lightweight, plug-and-play debiasing module: We propose a novel debiasing approach that does not augment the training data and operates with the same time complexity as the baseline.
3. Empirical gains across multiple domains: We conduct experiments on text and image data using six models across text classification, style analysis, medical imaging, and hand-written letter recognition tasks. Our results show that SCISSOR outperforms the baselines in terms of accuracy and F1 score.
# 2. Related Work
Over the past decade, considerable efforts have targeted the challenge of spurious correlations, which often undermine a model’s OOD performance. Two primary lines of work relevant to our paper focus on: (1) creating balanced and less biased datasets, and (2) counterfactual data generation. In parallel, other broad techniques in distributionally robust optimization (DRO) or group-based fairness (e.g., Group DRO, IRM) typically aim to improve worst-case performance across predefined demographic groups. However, while these methods are well-suited for known protected attributes, they do not directly tackle latent-space clustering effects, which can give rise to semantic biases that persist even in “balanced” data. Our proposed approach, SCISSOR, is instead designed to remap the embedding space itself, complementing both dataset-centric and DRO-based methods by specifically targeting label-skewed clusters.
Creating Balanced and Less Biased Datasets. Several studies aim to address spurious correlations through data manipulation. Wu et al. (2022) propose a data generation strategy for mitigating biases in natural language inference, using a GPT-2 model with unlikelihood training to ensure label consistency and confidence-based filtering (Bartolo et al., 2021). This process identifies and discards instances that reinforce spurious correlations, thus yielding a more robust dataset. Similarly, Bras et al. (2020) use an iterative adversarial filtering approach (AFLite; Sakaguchi et al., 2020) to remove highly biased data points. For specific tasks such as fact-checking, CrossAug (Lee et al., 2021) generates negative claims and modifies evidence to create contrastive pairs, improving the model’s ability to rely on genuine textual clues. Meanwhile, CLEVER (Xu et al., 2023a) attacks inference-phase biases by subtracting the output of a “claim-only” model from a more complex fusion model. Other techniques include EDA (Wei & Zou, 2019), which relies on random linguistic edits to expand training data, and “Symmetric Test Sets” (Schuster et al., 2019), which eliminates label-specific giveaways in claims. Mahabadi et al. (2020) propose “Product of Experts” to downweight spurious signals through a combination of a bias-only model with the main classifier. RAZOR (Yang et al., 2024) progressively rewrites the training set through word-level feature comparison.
Although these dataset-centric strategies mitigate many surface-level or single-feature biases, they typically entail rewriting, filtering, or augmentation. Such processes depend on careful hyperparameter tuning and often cannot break deeper correlations within a pretrained model’s latent space. In contrast, our approach is module-based, directly targeting the geometry of the embedding space instead of solely relying on rewriting data. This sidesteps heavy data manipulations or repeated training on LLMs.
Counterfactual Data Generation. Another prevalent strategy is to produce counterfactual examples – perturbations of existing data designed to disentangle superficial cues from true task-relevant features. Kaushik & Lipton (2018) use manually crafted counterfactuals to demonstrate significant performance gains on challenging generalization. However, human generation can be time-consuming and often lacks diversity. Automated solutions like Polyjuice (Wu et al., 2021) and Tailor (Ross et al., 2022) fine-tune text generation models to produce specific perturbation types. While powerful, these approaches typically require model retraining when introducing new perturbation classes.
More recently, DISCO (Chen et al., 2023) leverages large language models to generate candidate phrasal edits, then uses a teacher model to filter out the low-quality ones. Xu et al. (2023b) adopt a similar paradigm for fact verification, generating tailored counterfactuals that reveal spurious correlations. Other work employs LLMs to generate counterfactual data to balance concepts in textual data. However, these approaches rely entirely on the usage of LLMs, which can be extremely resource-intensive.
While counterfactual augmentation can reduce reliance on obvious biases, it does not necessarily eliminate spurious semantic clusters in the embedding space. Even datasets balanced via counterfactual rewriting may contain subpopulations with label skew when projected into latent space – particularly if the pretrained model already encodes certain semantically entangled regions. Contrarily, SCISSOR directly “pulls apart” or remaps such clusters via a Siamese network head, forcing the model to focus on truly discriminative features rather than latent cluster membership.
Why SCISSOR? Computational and Conceptual Advantages. DRO methods also aim to ensure robustness but typically require explicit group labels or assumptions about how data sub-populations manifest. In real-world settings where biased sub-populations remain hidden or evolve over time, group-based constraints may not suffice. SCISSOR circumvents such requirements by first detecting and labeling suspicious clusters through a lightweight Markov Clustering procedure, then disrupting them. Additionally, SCISSOR maintains a low overhead relative to repeated data rewriting or large-scale augmentation, since it slots a debiasing module directly onto a frozen pretrained model, preserving the overall time complexity of forward passes.
Hence, our method is complementary to both data-centric and DRO-style approaches, offering a novel focus on semantic clusters that underlie shortcut learning. As we will show, explicitly addressing these latent clusters substantially boosts out-of-distribution performance across tasks in both NLP and computer vision.
# 3. Methodology
# 3.1. Problem Formulation
Let $\mathcal{D} = \{d_1, \ldots, d_n\}$ be a dataset containing $n$ samples, each with a corresponding label $y_i \in \mathcal{Y}$, where $\mathcal{Y}$ is the label set. To classify a sample $d_i$, we first transform it using a specific pre-trained embedding function $g : \mathcal{D} \to \mathcal{X} \subseteq \mathbb{R}^u$. Subsequently, we train a classifier $f_\theta : \mathcal{X} \to \mathcal{Y}$ by optimizing a specific loss function, $\theta^* \gets \arg\min_\theta \mathcal{L}(\mathcal{D}, \theta)$. Additionally, let us consider another given dataset $\mathcal{D}' = \{d'_1, \ldots, d'_m\}$ with $m$ samples drawn from a different distribution, while sharing the same label set $\mathcal{Y}$. Let us assume that the samples in $\mathcal{D}$ exhibit localized clusters with imbalanced label distributions when embedded into the $\mathcal{X}$ space. Conversely, the distribution of samples in $\mathcal{D}'$ is relatively balanced within the same embedding space.
Considering these assumptions, our objectives are twofold:
(1) To demonstrate the existence of semantic bias. We expect $f_\theta$, trained on $\mathcal{X}$, to achieve better accuracy on a test set drawn from the distribution of $\mathcal{X}$ than on one drawn from $\mathcal{X}'$. Conversely, we expect $f_\theta$, trained on $\mathcal{X}'$, to exhibit comparable performance on test sets drawn from both $\mathcal{X}$ and $\mathcal{X}'$.
(2) To enhance the performance of $f_\theta$ trained on $\mathcal{X}$ on test sets drawn from $\mathcal{X}'$ through semantic debiasing algorithms.
# 3.2. Demonstration of Semantic Bias Existence
To elucidate the underlying causes and influential factors of semantic shortcuts, we propose the following lemmas. Lemma 3.1 quantifies how small changes in the representation space influence the classifier’s outputs, revealing the extent to which the model’s sensitivity to input variations may contribute to semantic bias.
Lemma 3.1. Given a differentiable classifier $f_\theta$, two input samples $d, d' \in \mathcal{D}$, and a function $g : \mathcal{D} \to \mathbb{R}^u$, if the Euclidean distance between $g(d)$ and $g(d')$ in the $u$-dimensional space is lower than $\alpha$, then the Euclidean distance between their outputs under $f_\theta$ is upper-bounded by
$$
\sqrt { d } \cdot \alpha \cdot \| \nabla f _ { \theta } ( g ( d ) ) \| _ { 2 } + \frac { 1 } { 2 } M \alpha ,
$$
where $M$ is the upper bound of the norm of the Hessian matrix of $f_\theta$ over the segment connecting $g(d)$ with $g(d')$.
Lemma 3.2 formalizes how local imbalances in training labels lead to biased classifier behavior, showing that majority-label dominance causes systematic misclassification of minority-labeled samples, with this effect worsening as training progresses.
Lemma 3.2. Let $f _ { \theta }$ be a differentiable classifier trained on data embedded into $\mathcal { X } \subseteq \mathbb { R } ^ { u }$ by a function $g$ . Fix an anchor point $c \in \mathcal { X }$ and a radius $\alpha > 0$ . Suppose that within the ball $B ( c , \alpha ) = \{ x \in \mathcal { X } : | | x - c | | < \alpha \}$ , the training labels are imbalanced–i.e., one majority label is heavily represented compared to a minority label. Then:
(1) any classifier that minimizes empirical risk will tend to predict the majority label for most points in $B ( c , \alpha )$ ;
(2) the expected misclassification probability on the minority-labeled samples in $B(c, \alpha)$ is bounded below by a positive constant;
(3) the bound increases over the course of training.
We refer the reader to Appendix B for the omitted proofs, and to Appendix C for a detailed discussion about the significance of our theoretical findings in semantic biases.
# 3.2.1. THEORY-GROUNDED OBSERVATIONS
Imbalance-Induced Misclassification. As a classifier becomes increasingly accurate on the majority-labeled samples (e.g., “positives”) in a given region, the lower bound on misclassification for minority-labeled samples (e.g., “negatives”) in the same region grows. In other words, once the model effectively learns to recognize the majority label, any semantically similar minority samples become more likely to be wrongly classified as the majority label.
Concentration Boosts Shortcut Misclassification. As $\alpha \to 0$, the data around an anchor point becomes more tightly concentrated. Under label imbalance, this yields a higher lower bound on the expected misclassification rate for the minority label. In other words, the more localized and imbalanced the samples are, the more the classifier relies on a shortcut that favors the majority label, making minority misclassification increasingly inevitable.
Training Exacerbates Shortcut Misclassification. As model parameters converge during training, the lower bound on the expected misclassification probability for minority-labeled samples rises. This indicates that shortcut-based misclassifications intensify over the course of training, further harming minority classes.
Larger Models Reduce Shortcut Misclassification. If the network’s capacity increases—reflected by a higher Hessian norm bound $M$ or a larger embedding dimension $u$ —the theoretical lower bound on the expected misclassification probability for minority samples decreases. Consequently, deeper or more expressive models are more resilient against shortcut-based misclassifications than smaller models.
# 3.3. Markov Clustering
To determine the initial distribution of the given samples $\mathcal{D}$, we apply MCL to their embeddings $\mathcal{X}$, which involves three steps.
Constructing the Markov matrix. We compute the cosine similarity between the embeddings to form a similarity matrix, which is then normalized to generate a Markov matrix (Kelly, 1981). Each entry in the matrix represents the transition probabilities between two samples.
Expansion and inflation. We square the matrix to simulate the probability distribution after random walks, propagating and expanding the connections between samples. Subsequently, we raise each matrix element to a fixed power (the inflation parameter) and normalize again to emphasize stronger connections and suppress weaker ones.
Convergence and clustering. By repeating the expansion and inflation steps, the matrix converges to a sparse block-diagonal structure, where each block represents a cluster. Finally, we assign samples to different clusters accordingly. For each cluster, we categorize it into one of two groups based on the label distribution of its samples: balanced and imbalanced clusters. As discussed in Lemma 3.2, the semantics of samples within imbalanced clusters can induce shortcut learning, reducing model generalizability. Therefore, we let samples from imbalanced clusters form $\mathcal{X}$ and those from balanced ones form $\mathcal{X}'$. Our goal is to mitigate semantic biases present in $\mathcal{X}$ to enhance the performance of $f_\theta$ on $\mathcal{X}'$.
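The three MCL steps above can be sketched with dense NumPy matrices. The hyperparameter names and values (inflation factor, iteration count, pruning threshold) and the connected-components readout are illustrative assumptions; production MCL implementations use sparse matrices and more careful pruning:

```python
import numpy as np

def mcl_clusters(embeddings, inflation=2.0, iters=30, prune=1e-4):
    """Minimal sketch of Markov Clustering (MCL) over cosine similarities."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = np.clip(X @ X.T, 0.0, None)           # cosine similarity, negatives clipped
    M = S / S.sum(axis=0, keepdims=True)      # column-stochastic Markov matrix
    for _ in range(iters):
        M = M @ M                             # expansion: simulate random walks
        M = M ** inflation                    # inflation: reward strong transitions
        M[M < prune] = 0.0                    # suppress weak transitions
        M /= M.sum(axis=0, keepdims=True)     # renormalize columns
    # read clusters off the converged block structure via connected components
    adj = (M + M.T) > 0
    labels = np.full(len(X), -1)
    cluster = 0
    for seed in range(len(X)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        while stack:
            node = stack.pop()
            if labels[node] == -1:
                labels[node] = cluster
                stack.extend(np.flatnonzero(adj[node] & (labels == -1)))
        cluster += 1
    return labels
```

On two well-separated groups of embeddings, the converged matrix is block-diagonal and each block yields one cluster.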
Additionally, we account for the unequal number of samples within each cluster group. To prevent potential biases caused by sample imbalance, we perform random downsampling so that the number of samples in the two cluster groups is equal and both groups maintain a balanced label distribution in total. In Appendix E, we discuss the scalability and time complexity of the MCL.
# 3.4. Semantic Debiasing
We propose a plug-and-play debiasing module designed to filter out classification-irrelevant semantic features. We train and integrate a lightweight neural network to the output of a pre-trained language model (PLM), remapping its embedding space (Devlin et al., 2019).
Construction of contrastive data. Here, we present a novel idea for constructing contrastive samples with consideration of their semantic distribution. Unlike conventional approaches that solely maximize the distance between samples with different labels (Shah et al., 2022), we disrupt the clustering tendencies within the samples that can serve as shortcuts. Therefore, we introduce the concept of an “intermediate sample.” For a given anchor, positive samples share
Step 1: Clustering and creating a quadruplet. Step 2: Creating triplets and training a debiasing module. Step 3: Freezing the debiasing module and training a classification head.
Figure 2. Overview of SCISSOR. The “Sun” and “Moon” symbols represent samples with two different labels. We first apply the clustering algorithm to group samples based on the similarity of their embeddings, identifying clusters with label imbalances. Next, we generate triplets (Positive, Intermediate, Negative) by selecting an anchor sample based on their semantic clustering properties. Their purpose is to guide (train) the debiasing module in remapping the embedding space of the PLM by discouraging shortcut-induced cluster formations, ultimately improving classifier robustness.
the same label but belong to different clusters. Intermediate samples share the same label and cluster as the anchor. Negative samples have a different label. The intermediate sample plays a dual role: (1) it acts as a negative sample when contrasted with a positive one, and (2) as a positive sample when contrasted with a negative one. Any anchor in $\chi$ forms a training quadruplet with its corresponding positive, intermediate and negative samples.
Table 1. Four ways to create triplets from a quadruplet. "A", "P", "I", and "N" stand for the anchor, positive, intermediate, and negative sample in a quadruplet, respectively.
Therefore, for each quadruplet, there are ${ \binom { 4 } { 3 } } = 4$ possible ways to decompose it into triplets of the form (anchor, positive, negative), as shown in Table 1. Specifically, for each anchor, we employ two methods to construct a triplet.
(1) Inter-cluster Contrast. Divide positive and negative samples in the triplet based on their labels, to encourage samples with different labels to be further apart in the semantic space.
(2) Intra-cluster Contrast. Define positive and negative samples in triplets based on clustering characteristics. This promotes differentiation among semantically similar samples within the same class, preventing the model from learning shortcuts from classification-irrelevant features.
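Since Table 1 is not reproduced in the text, the decomposition below is only a plausible sketch of how the four (anchor, positive, negative) triplets could be enumerated from one quadruplet; the exact role assignments are assumptions:

```python
def quadruplet_to_triplets(a, p, i, n):
    """Hypothetical decomposition of a quadruplet (a, p, i, n) into the
    binom(4, 3) = 4 triplets of the form (anchor, positive, negative).
    The intermediate sample i plays a dual role: negative against the
    positive p, positive against the negative n."""
    return [
        (a, p, n),  # inter-cluster contrast: labels decide the roles
        (a, i, n),  # intermediate acts as the positive vs. the true negative
        (a, p, i),  # intra-cluster contrast: intermediate acts as the negative
        (p, i, n),  # remaining 3-subset, with p re-used as the anchor
    ]
```

Each anchor thus contributes triplets that both separate labels (inter-cluster) and break up same-label clusters (intra-cluster).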
We then rely on the triplet loss to train the debiasing module.
$$
\mathcal { L } = \sum _ { i = 1 } ^ { n } \operatorname* { m a x } ( 0 , \cos ( a _ { i } , p _ { i } ) - \cos ( a _ { i } , n _ { i } ) + \beta ) ,
$$
where $a _ { i } , p _ { i }$ , and $n _ { i }$ represent the anchor, positive, and negative samples in the $i$ -th triplet, $\cos ( \cdot , \cdot )$ denotes the cosine distance function, and $\beta$ is the margin ensuring the distance between the anchor and the negative sample exceeds that between the anchor and the positive sample. In this setup, all three samples are simultaneously fed into a sharedparameter Siamese network, where the goal is to maximize the distance between $a _ { i }$ and $n _ { i }$ while minimizing that between $a _ { i }$ and $p _ { i }$ .
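The loss above can be sketched in NumPy; the batch shapes, the default margin value, and the sum reduction are assumptions:

```python
import numpy as np

def cosine_triplet_loss(anchor, positive, negative, beta=0.2):
    """Triplet loss with cosine distance, mirroring the equation above.
    Inputs are arrays of shape (batch, dim); beta is the margin."""
    def cos_dist(u, v):
        u = u / np.linalg.norm(u, axis=-1, keepdims=True)
        v = v / np.linalg.norm(v, axis=-1, keepdims=True)
        return 1.0 - (u * v).sum(axis=-1)   # cosine distance per pair
    hinge = cos_dist(anchor, positive) - cos_dist(anchor, negative) + beta
    return np.maximum(0.0, hinge).sum()
```

When the anchor is already close to the positive and far from the negative by more than the margin, the hinge clamps the contribution to zero.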
After training, we implement the debiasing module by inserting it at the output of a frozen PLM. This approach offers two key advantages: (1) high adaptability, and (2) low resource consumption. In other words, the debiasing module can seamlessly integrate with any architecture without modifying the PLMs. Moreover, the time complexity of our lightweight debiasing network is linear, ensuring that the entire complexity is dominated by the PLM usage. Finally, SCISSOR does not rely on data augmentation (Lee et al., 2021) or LLM distillation (Yang et al., 2024), distinguishing it from other token- and pixel-level debiasing methods.
Alternating Training with Clustering and Remapping. Lastly, we employ an alternating training strategy between clustering and debiasing. We argue that, as sample embedding distributions update, their cluster assignments change dynamically. Samples that move out of their original clusters may inadvertently align with others, forming new imbalanced clusters. To prevent this, we alternate between the clustering and the debiasing steps. Fig. 2 illustrates our method.
We train the module until the samples within the imbalanced cluster no longer exhibit clustering tendencies in the remapped semantic space. To quantify the changes in clustering behavior, we employ the Hopkins Statistic. During training, this metric gradually increases and stabilizes near 0.5, indicating the removal of clustering tendencies. Finally, we freeze the parameters of the debiasing module and train a classification head with the cross-entropy loss $\mathcal { L } _ { \mathrm { C L S } }$ .
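The Hopkins statistic used as the stopping signal can be sketched as follows. Note the convention matching the text (values near 0 indicate strong clustering, values near 0.5 a random, cluster-free distribution); the probe count `m` and the seed handling are assumptions:

```python
import numpy as np

def hopkins(X, m=None, seed=0):
    """Hopkins statistic sketch: ratio of data-to-data nearest-neighbour
    distances to the total, so clustered data yields values near 0."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or max(1, n // 10)
    idx = rng.choice(n, size=m, replace=False)
    lo, hi = X.min(axis=0), X.max(axis=0)
    probes = rng.uniform(lo, hi, size=(m, d))   # uniform points in the bounding box

    def nn_dist(points, exclude_self):
        dists = np.linalg.norm(points[:, None, :] - X[None, :, :], axis=-1)
        if exclude_self:
            dists[dists == 0.0] = np.inf        # ignore a sample's distance to itself
        return dists.min(axis=1)

    w = nn_dist(X[idx], exclude_self=True)      # data-to-data nearest neighbours
    u = nn_dist(probes, exclude_self=False)     # probe-to-data nearest neighbours
    return w.sum() / (w.sum() + u.sum())
```

During debiasing, this value drifting upward toward 0.5 would indicate that the shortcut-inducing clusters have dissolved.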
# 4. Experiments
# 4.1. Datasets
We evaluate SCISSOR across four classification tasks: letter recognition, medical test, sentiment classification and style analysis. These tasks were conducted on two computer vision datasets and two natural language processing datasets.
Computer Vision (CV). Not-MNIST (Bulatov, 2011) is a multi-label classification dataset containing 19,000 images of hand-written letters. The Chest-XRay dataset (Hagos et al., 2023) contains 5,863 X-ray images depicting both healthy lungs (0) and lungs affected by pneumonia (1).
Natural Language Processing (NLP). The Yelp dataset consists of user reviews (positive and negative) of various businesses on Yelp. We use the version presented in (Dai et al., 2019). The Grammarly’s Yahoo Answers Formality Corpus (GYAFC) (Rao & Tetreault, 2018) is the largest dataset for style transfer, containing 110,000 informal and formal sentence pairs.
Note that the shortcuts we identified have not been previously reported in the literature. Thus, no standard adversarial datasets are currently available for robustness evaluation. To address this limitation, we adopted the cross-validation method shown in (Yang et al., 2024).
# 4.2. Experimental Setup
We use six models as baseline classifiers $f_\theta$: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), LLaMA 3.2 (Meta, 2024), Vision Transformer (ViT) (Dosovitskiy et al., 2020), Shifted Window Hierarchical Vision Transformer (Swin) (Liu et al., 2021), and DINOv2 (Oquab et al., 2024).2 We employ the AdamW optimizer (Loshchilov & Hutter, 2019) with an initial learning rate of $3 \times 10^{-5}$ and train the models with a batch size of 8 across four
NVIDIA A100 Tensor Core-GPUs. Our debiasing network consists of a single Transformer module, which includes an attention layer and a feedforward layer, with 8 attention heads and 768 neurons per layer. Our classification head is a single linear layer with 768 neurons. The value of $\beta$ in Equation 2 is 0.2. To simplify training, we construct a quadruplet for each anchor using random sampling.
# 4.3. Results
Validation of Semantic Shortcuts. To experimentally validate the theoretical findings presented, we first compared the robustness of classifiers trained on balanced cluster data and imbalanced cluster data. These differences can be measured by evaluating the classifiers’ performance on ID test sets versus OOD test sets. We report the classification accuracy and F1 scores in Fig. 3. Given that imbalanced clusters exhibit pronounced shortcut features, we treated them as the optimization target and the balanced clusters as test data representing real-world scenarios in the subsequent experiments. To further illustrate the inherent clustering structure of the datasets in the semantic space, we calculated the Hopkins statistic for these datasets after being embedded by the initial model. Tables 2 and 3 show these statistics for the NLP and CV datasets, respectively.
Effectiveness of the Proposed Method. To evaluate the debiasing capability of SCISSOR, we evaluate its gain on accuracy and F1 score over the baseline classifiers. We compare SCISSOR against three state-of-the-art debiasing methods: RAZOR (Yang et al., 2024) for NLP tasks, LC (Liu et al., 2023) for CV tasks and IRM (Arjovsky et al., 2019) for both tasks. Specifically, RAZOR relies on rewriting training data containing potential biases using LLMs, while LC corrects classifier logits to balance the gradient impact of majority and minority groups during training. Tables 4 and 5 illustrate the results.
Table 2. Hopkins Statistic of NLP datasets. Low values indicate stronger clustering tendency.
Table 3. Hopkins Statistic of CV datasets. Low values indicate stronger clustering tendency.
Figure 3. Models trained on imbalanced clusters exhibit significant performance drops on OOD data, confirming that semantic bias harms generalization, while balanced training improves robustness. We show the performance of classifiers on balanced and imbalanced clusters under ID and OOD test data. For each data group, we randomly choose 500 from the training set to form the test set.
Table 4. SCISSOR’s impact on the baseline classifiers on the NLP datasets in terms of Accuracy and F1 score. Bold values indicate best performing per baseline; underlined the second-best.
# 4.4. Analysis and Discussion
Larger models exhibit up to $\times 10^{3}$ higher Hopkins statistics, revealing a weaker cluster effect in embeddings. From the Hopkins statistics reported in Tables 2 and 3, we observe that all models produce values close to 0 across every dataset. Nonetheless, Yelp and GYAFC share relatively similar clustering tendencies, whereas Not-MNIST exhibits markedly higher randomness compared to Chest-XRay. This finding suggests that natural data tends to exhibit a strong inherent clustering structure within the embedding space of pretrained language models (PLMs). Such an observation provides theoretical support for SCISSOR’s approach of assigning cluster-based labels in this space.
Additionally, we note that the Hopkins statistic monotonically increases with model size. Smaller networks yield more concentrated embedding distributions for the same dataset, while larger models like LLaMA display less concentration. In vision models, ViT and DINOv2 similarly show Hopkins statistics that are one to two orders of magnitude higher than Swin, consistent with the trend that larger parameter counts lead to higher Hopkins values.
Table 5. SCISSOR’s impact on the baseline classifiers on the CV datasets in terms of Accuracy and F1 score. Bold values indicate the best performing per baseline; underlined the second-best.
Table 6. Adjusted rand index between topics and semantic clusters.
Semantic Imbalance Causes Up to 20-Point OOD Accuracy Drops, While Balanced Training Enhances Robustness. Under in-distribution (ID) conditions, all models achieve high accuracy (Fig. 3). However, when trained on semantically imbalanced datasets, their performance substantially degrades on out-of-distribution (OOD) test sets.
Table 7. Ablation study on the NLP datasets in terms of Accuracy and F1 score with K-means clustering.
This underscores a robustness gap driven by shortcut learning. In computer vision (CV), the largest OOD accuracy drop occurs on Chest-XRay—ViT and DINOv2 each lose 20 points, and even Swin experiences a 2-point decline on Not-MNIST. By contrast, when training data exhibits balanced semantic clusters, there is no consistent ID-OOD performance gap, indicating a higher degree of robustness.
A similar pattern emerges in textual datasets, with the shortcut effect most pronounced on GYAFC. Among the language models tested, BERT, when trained on imbalanced clusters, shows the greatest performance gap ($\sim$20 points) between ID and OOD. However, when BERT is trained on balanced data, the gap narrows to 3 points. Notably, LLaMA achieves nearly identical results on both ID and OOD tests. As shown in Table 2, LLaMA exhibits strong overall performance and lower embedding concentration, making it less prone to shortcut-driven errors.
SCISSOR Achieves Up to 12-Point Gains in Accuracy and F1 Across NLP and Vision Tasks. Tables 4 and 5 summarize the performance improvements introduced by SCISSOR. In the NLP domain (Table 4), SCISSOR delivers notable gains for all three language models. On GYAFC, for instance, BERT and RoBERTa each see a 7-point boost in accuracy and F1 score relative to the RAZOR baseline, while on Yelp, SCISSOR outperforms both BERT and RoBERTa by 9 and 12 points, respectively. Although LLaMA already exhibits robust performance against shortcuts, it still realizes marginal benefits from SCISSOR. Moreover, because the datasets in question are label-balanced and contain limited superficial shortcuts, RAZOR’s strategy of manipulating superficial features and data rewriting actually lowers
LLaMA’s accuracy by ~2 points.
In computer vision, SCISSOR’s largest gains occur on Chest-XRay, where ViT achieves a 12-point increase in both accuracy and F1 on the OOD set. Across the board, SCISSOR consistently outperforms LC in terms of both accuracy and F1. We attribute LC’s shortfall to its inability to address deeper semantic biases in balanced-label scenarios, thereby limiting its improvement potential. Performance improvements on Not-MNIST are relatively smaller, likely due to the dataset’s weaker embedding clusters and lower susceptibility to shortcut issues; even so, SCISSOR still provides a 2-point lift in accuracy and F1 for ViT.
We observe that although IRM mitigates shortcuts in many cases, our method still significantly outperforms it across all tests. Moreover, IRM performs worse than the baseline on small datasets, such as Chest-XRay (w/ Swin) and GYAFC (w/ LLaMA). We attribute this to IRM assigning excessive training weight to features that remain invariant, which prevents other useful features from being accurately identified and utilized.
# 4.4.1. WHY DO SEMANTIC CLUSTERS MATTER?
While our experiments demonstrate that debiasing semantic clusters improves generalization, one might still ask: What do these clusters actually represent in practice? To investigate, we hypothesize that samples within the same cluster tend to share common semantic themes or topics. To test this, we trained a Latent Dirichlet Allocation (LDA) topic model (Blei et al., 2003) and measured the alignment between semantic clusters and topic clusters using the Adjusted Rand Index (ARI) (Hubert & Arabie, 1985). The results, shown in Table 6, reveal a clear positive correlation between semantic clustering and topic clustering across all datasets and models. This strongly suggests that semantic clusters are not just artifacts of model embeddings—they capture meaningful, high-level concepts in the data. An intriguing insight emerges from our findings: stronger models, such as LLaMA, exhibit significantly lower ARI scores. This implies that as models grow in capacity, their embeddings become less tightly coupled to discrete topics. In other words, more powerful models learn richer, more distributed representations, rather than rigidly grouping samples by surface-level themes. This phenomenon aligns with our earlier observation that larger models are naturally more resistant to shortcut learning—a key insight into why SCISSOR has a greater impact on smaller architectures. These findings provide compelling evidence that shortcut learning is fundamentally tied to how models organize semantic information, and that disrupting these clusters can lead to more robust, generalizable classifiers.
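The cluster-topic alignment in Table 6 is measured with the Adjusted Rand Index. A self-contained implementation from the standard contingency-table formula is sketched below (used here for illustration in place of a library routine); it scores agreement between two flat partitions of the same samples, corrected for chance.

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """ARI between two clusterings of the same samples.

    1.0 = identical partitions (up to relabeling); ~0.0 = chance agreement;
    negative values = worse than chance.
    """
    n = len(labels_a)
    assert n == len(labels_b), "both clusterings must cover the same samples"
    # contingency counts n_ij plus row/column marginals
    pair_counts = Counter(zip(labels_a, labels_b))
    a, b = Counter(labels_a), Counter(labels_b)

    index = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate case: no structure to compare
        return 1.0
    return (index - expected) / (max_index - expected)
```

In this setting, `labels_a` would hold the semantic-cluster assignments and `labels_b` the dominant LDA topic per sample.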
Table 8. Ablation study on the CV datasets in terms of Accuracy and F1 score.
# 4.5. Ablation Study
We investigated the impact of clustering algorithms on the effectiveness of SCISSOR. Specifically, we replaced MCL with the K-means clustering algorithm and repeated the comparative experiments with the baselines, as shown in Table 7 and 8.
We observed that the choice of clustering algorithm does not significantly impact the effectiveness of SCISSOR. After replacing MCL with K-means, which has linear time complexity with respect to data scale, our approach showed almost identical Accuracy and F1 scores while maintaining a significant advantage over the baselines.
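The interchangeable clustering step can be illustrated with a minimal Lloyd's k-means over embedding vectors; this is a generic sketch, not the paper's released pipeline, and the iteration cap and seeding are assumed defaults.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means over embeddings X with shape (n, d).

    Returns (centroids, labels). Each iteration is linear in the
    number of samples, matching the complexity noted in the text.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster emptied out
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

Swapping this in for MCL only changes how the semantic clusters are formed; the downstream debiasing step consumes the resulting labels either way.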
Additionally, compared to Triplet, SCISSOR consistently demonstrates an advantage of about 2 points. We attribute this to Triplet focusing solely on optimizing samples based on their classification labels, neglecting the embedding distribution. During Triplet training, samples with the same label are pulled closer together, which can lead to the formation of new imbalanced semantic clusters. | Shortcut learning undermines model generalization to out-of-distribution
data. While the literature attributes shortcuts to biases in superficial
features, we show that imbalances in the semantic distribution of sample
embeddings induce spurious semantic correlations, compromising model
robustness. To address this issue, we propose SCISSOR (Semantic Cluster
Intervention for Suppressing ShORtcut), a Siamese network-based debiasing
approach that remaps the semantic space by discouraging latent clusters
exploited as shortcuts. Unlike prior data-debiasing approaches, SCISSOR
eliminates the need for data augmentation and rewriting. We evaluate SCISSOR on
6 models across 4 benchmarks: Chest-XRay and Not-MNIST in computer vision, and
GYAFC and Yelp in NLP tasks. Compared to several baselines, SCISSOR reports
+5.3 absolute points in F1 score on GYAFC, +7.3 on Yelp, +7.7 on Chest-XRay,
and +1 on Not-MNIST. SCISSOR is also highly advantageous for lightweight models
with ~9.5% improvement on F1 for ViT on computer vision datasets and ~11.9% for
BERT on NLP. Our study redefines the landscape of model generalization by
addressing overlooked semantic biases, establishing SCISSOR as a foundational
framework for mitigating shortcut learning and fostering more robust,
bias-resistant AI systems. | [
"cs.LG"
] |
# 1 Introduction
The security of the software supply chain has emerged as a critical concern in today’s interconnected and rapidly evolving digital landscape. The complexity of the software supply chain, combined with the growing number of stakeholders involved in the software ecosystem, has significantly increased the risk of vulnerabilities and attacks. A seemingly minor compromise at any stage of the chain can result in the complete subversion of the final product, underscoring the necessity for robust security measures [1, 2]. In response to these challenges, organizations often turn to official security standards, frameworks, and regulations to guide and enhance their security practices [3]. However, despite the existence of these resources, national regulations remain insufficient to enforce compliance with secure software supply chain practices, leaving significant gaps in global security efforts [4].
Developing practical and effective solutions for software supply chain security from a holistic perspective presents notable challenges [5]. Existing security regulations/guidelines/frameworks, while valuable, are often criticized for being overly generic and failing to address the specific needs of software development teams and engineers. As practitioners have noted, many guidelines lack actionable details and fail to provide concrete, universally applicable rules [6]. Moreover, the integration of security standards and regulatory requirements into the software development life-cycle (SDLC) proves difficult in practice. Challenges such as insufficient training for engineers and technical limitations in implementing these standards further hinder efforts to achieve secure software supply chain practices [3].
These challenges highlight the need for systematic solutions that incorporate a holistic mapping of software security frameworks to detailed operational security requirements to enhance the security of the software supply chain. Such solutions are essential for promoting the practical implementation of software security frameworks in real-world SDLC, effectively addressing the complexity of software supply chain security risks and overcoming the practical obstacles faced by organizations and individual practitioners.
The security of the software supply chain has garnered increasing attention in recent years, driven by its critical role in modern software development and the growing prevalence of supply chain attacks. Researchers have explored various frameworks and methodologies to address these challenges. Sun et al.[7] proposed a knowledge-driven framework that systematically analyzes software supply chain security risks, emphasizing structured approaches to identify and mitigate vulnerabilities. Hassanshahi et al.[8] introduced a logic-based framework for ensuring supply chain security assurance through dependency modeling and trust evaluation. Several studies[5, 6, 9] have examined the practical challenges faced by developers and organizations in implementing secure supply chain practices. For example, Sammak et al.[6] conducted an interview-based study, uncovering gaps between existing security guidelines and the specific needs of developers, highlighting the limitations of overly generalized software security regulations/guidelines/frameworks.
The goal of this work is to assist companies and individual developers in effectively navigating and implementing software supply chain security requirements by addressing the fragmented nature of existing regulations, frameworks, and practices. Through a collaborative research study, we aim to provide a holistic mapping that spans three levels, from high-level frameworks down to detailed operational steps, offering a practical reference architecture model that is easy to follow and implement in diverse real-world scenarios. This study also provides a machine-readable format for this mapping, which enables better portability, integration with automation tools, and support for continuous compliance and assessment workflows.
This study makes key contributions by i) providing a multi-layered and goal-driven mapping framework that connects high-level software security goals to operational actions through structured decomposition, ii) enabling operationalization and interoperability by deriving over 400 detailed operations and representing them in an extended, machine-readable data format, enabling traceability, automation, and tool integration, iii) offering comprehensive coverage through the integration of key existing frameworks and a real-world application, demonstrating practical utility across the entire software supply chain, and iv) laying the groundwork for future research in security requirements traceability and life-cycle-aware security modeling.
The remainder of this paper is organized as follows. Section 2 introduces background knowledge about software supply chain security and existing frameworks and mappings. Section 3 details the methodology used to conduct the software security mapping. Section 4 presents the structure of the holistic mapping, which spans three levels, from high-level frameworks to detailed operational steps, forming the supply chain security mapping framework. Section 5 presents the practical applications of the mapping framework, introduces the machine-readable data format, and discusses the theoretical implications of this study. Section 6 concludes this study and discusses potential directions for future work.
# 2 Background
Before delving into specific frameworks and practices, it is important to understand the broader context in which software supply chain security has emerged as a critical concern. This section outlines the core security risks that arise from the complexity of modern software supply chains and reviews key frameworks and existing mapping efforts that have been proposed to mitigate security risks. Together, these insights provide the foundation for our multi-layered and goal-driven mapping framework, which aims to bridge the gap between high-level governance requirements and low-level technical implementation.
# 2.1 Software Supply Chain Security Risks
Modern software supply chains consist of complex and interconnected processes, components, and stakeholders. This complexity introduces significant challenges to ensuring the delivery of secure software products and services. As a result, the software supply chain has become a critically vulnerable attack surface, experiencing a significant surge in software security risks [10].
Key risks include insecure development environments, unpatched or malicious dependencies, and a lack of visibility into component provenance. Poorly secured environments can lead to unauthorized access across development, testing, and production stages [11], while reliance on outdated or unverified packages propagates vulnerabilities across the supply chain [12, 13, 14]. Attackers may also inject malicious code via compromised tools or third-party libraries [15].
The lack of component transparency, particularly regarding transitive dependencies, has driven the adoption of practices like Software Bills of Materials (SBOMs) [10]. Vulnerability management remains one of the most challenging areas, as organizations must prioritize which of the many reported vulnerabilities require urgent action [16, 17].
Historically, software supply chains were not seen as deliberate attack vectors [18], while the increasing complexity and interdependence of components have fundamentally changed the threat landscape. Adversaries now actively exploit this complexity by implanting vulnerabilities into upstream dependencies or compromising development and deployment infrastructure. These attacks exploit the very interconnectedness that makes modern software delivery efficient, allowing malicious code to spread rapidly across systems and organizations. As awareness of these systemic risks grows, governments and industry stakeholders respond with frameworks and practices that aim to increase transparency, strengthen provenance, and reduce the likelihood and impact of supply chain compromises.
# 2.2 Existing Software Security Frameworks
In response to the growing threats to the software supply chain, a range of standards, frameworks, and guidelines have emerged, each addressing different facets of software security. Notable examples include ISM, NIST SSDF, SLSA, TUF, SAMM, and S2C2F. While each offers valuable insights and controls, they vary significantly in scope, depth, and applicability—creating a fragmented landscape that poses challenges for consistent interpretation and adoption across organizations.
ISM (Information Security Manual) 1: Developed by the Australian Signals Directorate, ISM provides highlevel security guidelines, including controls relevant to software development. While valuable for aligning with governance and compliance, ISM is abstract in nature and lacks concrete implementation guidance, making it difficult to operationalize within typical development life-cycles.
NIST SSDF (Secure Software Development Framework) 2: NIST SSDF organizes secure software development practices into four groups: Prepare the Organization, Protect the Software, Produce Well-Secured Software, and Respond to Vulnerabilities. The framework provides clear security tasks across the SDLC. As a framework-agnostic baseline, SSDF is applicable to organizations of all sizes and sectors, regardless of their development methodology or toolchain. However, while SSDF specifies what security practices should be in place, it does not prescribe how to implement them—leaving implementation details to be determined by the adopting organization.
SLSA (Supply-chain Levels for Software Artifacts) 3: SLSA defines a tiered model to secure software artifacts, focusing on verifiable build provenance and tamper resistance. It introduces specific technical requirements related to build systems and CI/CD pipelines. While detailed and actionable, its focus is limited to post-development build integrity and lacks broader guidance on software governance and early-stage development practices.
TUF (The Update Framework) 4: TUF aims to secure the software update process, even in cases where signing keys or repositories are compromised. It provides robust cryptographic and metadata-based protections. However, its scope is narrowly focused on update infrastructure and does not address broader supply chain concerns such as development processes, dependency management, or life-cycle governance.
SAMM (Software Assurance Maturity Model) 5: OWASP SAMM is a maturity model that supports the evaluation and continuous improvement of an organization’s software security posture. It is structured around five business functions and 15 security practices, each organized into activities across three maturity levels. SAMM can serve as a practical framework for operationalizing high-level standards such as NIST SSDF.
S2C2F (Secure Supply Chain Consumption Framework) 6: S2C2F focuses on securing the consumption of opensource software (OSS) packages. It outlines eight practices across four maturity levels, based on known adversary techniques. While its scope is limited to OSS consumption rather than the full development life-cycle, S2C2F offers actionable, threat-informed guidance that helps organizations strengthen their dependency management and reduce risks associated with third-party software.
While each of the aforementioned frameworks offers valuable guidance on particular aspects of software supply chain security, they address different layers of the ecosystem, forming a fragmented landscape that can be challenging to navigate in practice. ISM provides governance-level controls but lacks implementation specificity. NIST SSDF guidance aims to cover SDLC practices, yet still leaves gaps in translating high-level requirements into operational processes. SLSA and TUF provide technical depth in narrow domains, such as build integrity and update security, but do not address broader SDLC or policy concerns.
This fragmentation creates a barrier for organizations attempting to apply software supply chain security holistically. It remains difficult to trace how a high-level policy (e.g., an ISM control) relates to concrete actions, such as provenance requirements in SLSA. A unified mapping is needed to connect governance-level intent with technical and operational execution.
# 2.3 Software Security Mapping
In this study, we define "Software Security Mapping" as the process of identifying, aligning, and integrating key concepts, objectives, controls, and recommended practices from existing software supply chain security frameworks into a unified, structured reference architecture.
To support the adoption of secure software development guidance, several mapping efforts have been introduced to align overlapping frameworks and identify conceptual equivalences. While these efforts provide valuable reference points, they typically focus on specific pairwise comparisons and lack deeper integration across diverse security frameworks. Moreover, they often fall short of offering a unifying structure that spans the full software development life-cycle. Below, we summarize the most relevant examples.
SSDF mapping to SAMM 7: The OWASP SAMM team has developed a bidirectional mapping between NIST SSDF tasks and SAMM activities. This linkage helps connect SSDF’s broad security practices to SAMM’s structured activities and maturity levels, providing a pathway for organizations to operationalize SSDF through SAMM. While conceptually sound, the mapping lacks granularity for integration into tooling or operational workflows.
SLSA mapping to SSDF 8: The SLSA working group aligned its build-level requirements with related SSDF tasks. This mapping offers technical depth in securing build pipelines and artifact provenance but is limited in life-cycle coverage and lacks broader governance alignment.
S2C2F mapping to Other Frameworks 9: S2C2F maps its OSS-focused practices to multiple specifications, including SSDF, SLSA, and CIS. While this supports OSS governance, the mapping is narrow in scope and does not address broader development and deployment concerns.
P-SSCRM mapping to Ten Industry Frameworks: The Proactive Software Supply Chain Risk Management Framework (P-SSCRM) Version 1.0 [18] outlines 73 risk management tasks across 15 practices, grouped by product life-cycle stages: Governance, Product, Environment, and Deployment. It maps these tasks to ten existing standards, including SSDF, SLSA, BSIMM, OpenSSF, and OWASP SCVS. Although broad in scope, P-SSCRM serves more as a strategic reference. It lacks deeper operational guidance and fine-grained integration between frameworks, making it less suitable for direct implementation.
# 3 Methodology
We adopted a collaborative research methodology [19] to develop a software security framework, emphasizing ongoing dialogue and shared objectives. This study involved active participation from both researchers and industry experts in the fields of software engineering and security, guided by three essential principles of collaborative research. First, researchers and practitioners collaborated closely throughout the process. Second, the approach prioritized both practical problem-solving and theoretical advancements. Finally, the participants cultivated mutual respect and enhanced their knowledge and understanding of software security practices and operations.
The research team consisted of seven core members: five researchers and two industry experts. Additional contributors participated in specific tasks, such as conducting reviews and providing specialized input when needed. The industry experts work for a global-leading technology company. With extensive experience in software engineering and security, they contributed valuable insights into industry practices and helped align the research with real-world challenges and practical solutions.
Table 1 outlines the roles and responsibilities assigned to each core team member for this study.
Table 1: Research team - roles and responsibilities
The collaborative research process followed in this study is depicted in Figure 1, which outlines seven key phases: Problem/Issue, Research Goal/Direction, Approach and Methodology, Challenge/Opportunity, Data Collection and Analysis, Software Security Mapping, and Evaluation/Feedback.
Figure 1: The collaborative research process, spanning seven phases (Problem/Issue, Research Goal/Direction, Approach and Methodology, Challenge/Opportunity, Data Collection and Analysis, Software Security Mapping, and Evaluation/Feedback), supported by weekly internal research team meetings and bi-weekly entire project team meetings (2024.09–2025.03).
Problem/Issue: The research team began by identifying current problems and issues in software security practices. This phase also involved discussions on motivations and contributions from stakeholders, gathering diverse perspectives on key concerns, and prioritizing issues based on their potential impact and feasibility.
Research Goal/Direction: Once the issues were identified, we collaboratively set the research goals and direction. Success criteria for the research project were defined, ensuring alignment with both industry needs and academic standards. The success criteria of this study include practical applicability, collaborative engagement, stakeholder satisfaction, and knowledge dissemination.
Practical applicability means that the research outcomes should be directly applicable in real-world settings. Industry partners should be able to implement the developed mapping framework in their software security practices. For this, we aimed to identify actionable operations through the software security mapping framework.
Figure 2: Overview of the approach for operationalizing software security requirements in this study.
Collaborative engagement was a focal success factor for this study. We implemented sustained and meaningful collaboration between researchers and industry practitioners throughout the project. Additionally, we had regular feedback loops and iterative improvements based on stakeholder input.
Stakeholder satisfaction means ensuring that the key stakeholders (both researchers and industry experts) find value in the research outcomes. In this study, we addressed their key concerns and challenges, and obtained positive feedback from stakeholders through a final project evaluation survey.
Knowledge dissemination means that the research findings and outputs should be shared with the broader research and practitioner communities, generally through presentations, workshops, or open-access publications. We plan to provide open-access web pages to share the research outputs.
Approach and Methodology: We established the overall research approach and designed a detailed methodology to conduct this study. Peer review was integrated to validate the methodology.
In this study, we focused on identifying software security requirements and actionable operations based on the requirements. To do so, we adopted goal-oriented requirements engineering (GORE). GORE has been used to effectively identify requirements and achieve goals [20]. We specifically used the KAOS approach to identify goals, requirements, and operations for practitioners. This approach is a formal method for modeling and reasoning about system requirements based on goals [21]. It helps break down high-level strategic goals into finer-grained requirements through goal decomposition. KAOS has several key components:
• Goals: High-level objectives or purposes (what the system should achieve).
• Requirements: Specific conditions or capabilities needed to meet those goals.
• Operationalizations: Technical and practical details that define how the system will achieve the requirements.
• Agents: Stakeholders or systems responsible for fulfilling goals and requirements.
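The KAOS-style decomposition above can be captured in a small, machine-readable schema. The following sketch is a hypothetical data model (class and field names are illustrative, not taken from the KAOS standard or the study's tooling), showing how goals fan out into leveled requirements and, ultimately, operations assigned to agents and supply chain phases.

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    description: str
    agent: str   # stakeholder or system responsible for the task
    phase: str   # supply chain phase where the task is performed

@dataclass
class Requirement:
    statement: str
    level: int                                    # 1, 2, or 3 in the mapping
    operations: list = field(default_factory=list)
    children: list = field(default_factory=list)  # finer-grained requirements

@dataclass
class Goal:
    name: str
    requirements: list = field(default_factory=list)

def count_operations(goal: Goal) -> int:
    """Total operations reachable from a goal via its requirement tree."""
    def walk(req: Requirement) -> int:
        return len(req.operations) + sum(walk(c) for c in req.children)
    return sum(walk(r) for r in goal.requirements)
```

A traversal like `count_operations` is how aggregate figures (e.g., operations per goal, as in Figure 3a) would be derived from such a tree.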
Figure 2 shows an overview of the approach we used in this study. The figure illustrates the hierarchical breakdown from strategic goals to sub-goals, requirements at different levels, and operations, specifying agents responsible for tasks and the phases of the software supply chain during which implementation should occur.
Challenge/Opportunity: The Challenge/Opportunity phase focused on identifying and discussing key challenges and potential opportunities related to software security. The research team explored alternative approaches to address the identified challenges and documented lessons learned from earlier phases to refine and enhance the research framework. This phase was conducted bi-weekly with the participation of the entire team to ensure continuous feedback and alignment.
The primary challenges encountered during this study were:
Lack of completeness in the primary frameworks:
Initially, we used ISM, NIST SSDF, SLSA, and TUF as the primary frameworks. However, the first round of mapping revealed significant gaps or missing requirements, represented by empty cells in the traceability matrix.
High volume of requirements and operations:
The number of identified requirements and operations increased rapidly, raising concerns about feasibility and manageability.
To address the gaps in the initial mapping, we conducted a broader survey of additional frameworks and resources. These included CISA, SAMM, NIST AI RMF, NIST GenAI, and OSSF S2C2F. By incorporating these additional frameworks, we were able to fill the missing requirements and practices. After four iterative rounds of mapping, we achieved a 100% completion rate, a significant improvement from the initial 13% completion rate in the first round.
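The completion rate tracked across mapping rounds amounts to the share of non-empty cells in the traceability matrix. A trivial sketch, assuming the matrix is held as rows of cells with `None` marking an unmapped entry (a representation we assume here, not one prescribed by the study):

```python
def completion_rate(matrix):
    """Fraction of filled cells in a traceability matrix.

    `matrix` is a list of rows (one per requirement); a cell is None
    when no framework entry has been mapped to it yet.
    """
    cells = [cell for row in matrix for cell in row]
    return sum(cell is not None for cell in cells) / len(cells)
```

Re-running this after each mapping round gives the kind of progression reported in the text (13% in round one rising to 100% after round four).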
While this iterative mapping process helped achieve comprehensive coverage, we identified a total of 131 Level-3 requirements. With an estimated 4 to 5 operations per requirement, the total number of operations was expected to exceed 600. A feasibility test conducted on several requirements revealed potential complexity issues, making it challenging to manage such a high volume of operations effectively.
To enhance understanding, cohesion, and manageability, we proposed narrowing our focus by applying specific criteria to refine and prioritize the key requirements:
• Relevance: Requirements that were not directly aligned with the strategic goal or higher-level requirements were excluded.
• Overlap: Requirements that overlapped or had similar concepts within the same requirement group were consolidated or excluded to avoid redundancy.
• Feasibility: Requirements that were too high-level, broad, or difficult to operationalize were excluded to focus on actionable and practical requirements.
By applying these criteria, we reduced the overall volume of requirements while increasing the coherence and manageability of the final mapping. This approach ensures that the remaining requirements are both relevant and feasible for practical implementation.
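The pruning pass described by the three criteria can be sketched as a single filter. This is an illustrative reconstruction: the per-requirement flags (`relevant`, `operationalizable`, `overlap_group`) are assumed annotations produced during the team's review, not fields defined by any of the source frameworks.

```python
def prune_requirements(requirements):
    """Apply the Relevance, Overlap, and Feasibility criteria in one pass."""
    kept, seen_groups = [], set()
    for req in requirements:
        if not req["relevant"]:            # Relevance: drop off-goal requirements
            continue
        if not req["operationalizable"]:   # Feasibility: drop non-actionable ones
            continue
        group = req.get("overlap_group")   # Overlap: keep one per concept group
        if group is not None:
            if group in seen_groups:
                continue
            seen_groups.add(group)
        kept.append(req)
    return kept
```

Note the Overlap criterion is modeled here as "keep the first requirement per group"; in practice the team consolidated overlapping requirements rather than simply discarding them.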
Data Collection and Analysis: We collected software security requirements from existing frameworks and incorporated practical insights through direct engagement with industry practitioners. The collected data was validated through peer review, ensuring accuracy and relevance. A detailed analysis was then conducted to support the development of the software security mapping.
Table 2 presents the key frameworks selected for data collection and analysis, the total number of collected items, and their primary usage in this study. The collected data from each framework was analyzed in depth, with some items further broken down to provide fine-grained requirements and operations, while others were merged to ensure consistency and eliminate redundancies.
Software Security Mapping: Based on the collected data and analysis, we developed a multi-level framework for secure software. This mapping was conducted using the seven steps previously introduced, from "Define Top-Most Strategic Goal" to "Identify Operations and Agents".
Figure 3 provides a detailed breakdown of the distribution of software security requirements across the defined goals and source frameworks used in this study. Figure 3a shows how requirements and operations are allocated across the four software security goals. Each goal includes requirements at different levels (Level-1, Level-2, and Level-3), as well as associated operations, with the total counts highlighted. This figure highlights the prominence of the "Secure software development" goal, which contains the largest share of requirements and operations.
Figure 3b presents the distribution of requirements based on their source frameworks, such as ISM, CISA, NIST AI RMF, SSDF, and SLSA. It shows how different frameworks contribute to requirements at varying levels of granularity (Level-1, Level-2, and Level-3). For example, ISM and NIST SSDF contribute significantly to higher-level requirements (Level-1 and Level-2, respectively), whereas SAMM and other frameworks provide more fine-grained details at Level-3.
Evaluation/Feedback: The final phase involved reviewing the research outputs and collecting feedback from stakeholders, including external reviewers. We shared the initial results with internal and external reviewers to gather feedback and improve the mapping. Insights from the feedback were incorporated into the research, and a comprehensive final report was prepared, summarizing key findings and potential next steps.
The key feedback concerned the usability and interoperability of the mapping.
Table 2: The existing software security frameworks primarily used in this study. Several complementary frameworks were also used, such as CISA Securing the Software Supply Chain, the OWASP frameworks, and the NIST AI Risk Management Framework.
First, the mapping includes a large volume of requirements distributed across multiple layers, which can make it challenging for users to fully understand and navigate. To address this limitation and improve accessibility and usability, we developed a web-based tool that allows users to explore the mapping more effectively. A detailed description and screenshots of the tool can be found in Section 5.1.
Second, to support interoperability and machine-readability, we adopted the Open Security Controls Assessment Language (OSCAL), a standard format introduced by NIST. Our mapping format is aligned with OSCAL models, particularly the catalog and profile models used in the control layer. The data format and its application are explained in detail in Section 5.2.
Project Meetings and Timeline: Throughout the project, regular meetings were held to ensure consistent progress and alignment across all stakeholders. Weekly internal research team meetings focused on progress updates and issue resolution, while bi-weekly project team meetings involved all participants to discuss key updates, challenges and questions, and decision-making. The timeline for the research spanned from March 2024 to March 2025, reflecting a structured and iterative process to achieve the research objectives.
# 4 Software Security Mapping Framework
# 4.1 Framework Design
We designed and developed this framework in seven steps, described in detail below.
Define Top-most Strategic Goal: Defining a strategic goal is essential to ensure that this study addresses current problems and capitalizes on opportunities identified in collaboration with industry practitioners. This is a good starting point to provide a strong foundation for subsequent phases of the research. In this study, we defined "Secure Software" as the high-level strategic goal. This goal emphasizes the development of a comprehensive framework that mitigates existing security issues while promoting proactive practices to seize new opportunities in software security.
Define Security Levels based on Focus and Depth: Under the high-level goal, we defined three requirement levels—Level-1, Level-2, and Level-3—based on their focus (e.g., strategic, operational, or technical) and depth of detail.
Level-1 focuses on strategic, high-level objectives without providing specific operational instructions. This level typically includes regulatory requirements, which are often described broadly without detailed technical specifications.
Figure 3: Overview of the distribution of software security requirements and operations. (a) The distribution of requirements and operations across software security goals, illustrating the breakdown by level (Level-1, Level-2, Level-3) and the operations under each goal, with total counts highlighted. (b) The distribution of requirements categorized by source framework, showcasing the contribution of each framework at different levels of granularity (Level-1, Level-2, Level-3).
Level-2 provides a mid-level focus, offering general guidance with some details, but lacking the in-depth technical instructions needed for direct implementation.
Level-3 is the most detailed, focusing on technical-level requirements. It specifies how processes or requirements should be implemented to achieve the goals and meet higher-level requirements.
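The three-level structure can be sketched as a small data model. This is an illustrative sketch only; the class, field, and identifier names below are ours, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One requirement in the three-level mapping (illustrative model)."""
    req_id: str
    text: str
    level: int    # 1 = strategic, 2 = mid-level guidance, 3 = technical
    source: str   # originating framework, e.g. "ISM", "SSDF", "SLSA"
    children: list = field(default_factory=list)

    def add_child(self, child: "Requirement") -> "Requirement":
        # Enforce the hierarchy: a child sits exactly one level deeper.
        assert child.level == self.level + 1, "child must be one level deeper"
        self.children.append(child)
        return child

# Build a tiny placeholder fragment of the hierarchy.
l1 = Requirement("R1", "High-level regulatory requirement", 1, "ISM")
l2 = l1.add_child(Requirement("R1.1", "Mid-level guidance", 2, "SSDF"))
l3 = l2.add_child(Requirement("R1.1.1", "Technical detail", 3, "SLSA"))

print([r.level for r in (l1, l2, l3)])  # [1, 2, 3]
```

Representing each level as explicit parent-child links is what later makes the traceability checks between adjacent levels mechanical.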
Select and Review Existing Frameworks: This step involves conducting an in-depth literature review of both academic research and industry frameworks to survey and select appropriate frameworks for each requirement level. Based on this review, the Australian Information Security Manual (ISM), the NIST Secure Software Development Framework (SSDF), and the Supply-chain Levels for Software Artifacts (SLSA) together with The Update Framework (TUF) were chosen as the primary frameworks for Level-1, Level-2, and Level-3, respectively.
Identify Software Security Goals: In this step, we identified four software security goals that align with the strategic goal of "Secure Software". These goals were derived from the Level-1 requirements and include: "Secure Software Environment", "Secure Software Development", "Software Traceability", and "Vulnerability Management".
Section 4.3 provides a detailed description of each goal.
Elicit Requirements: We collected data (requirements) from the selected frameworks and categorized them into the four primary goals. These requirements were aligned with the high-level strategic goal of "Secure Software" and further refined using the KAOS approach to ensure comprehensive coverage of each goal. Based on extensive data collection and analysis, we established a knowledge base that primarily aggregates requirements in their original form. We then added value by providing detailed explanations, structured interpretations, and practical insights.
Mapping: Group and Link Requirements: This step involved iteratively grouping and linking requirements across different levels to create a comprehensive mapping from Level-1 to Level-3.
First, requirements were mapped at a high level by aligning Level-1 requirements with the four defined goals. The primary objective was to establish broad categories for strategic alignment.
We then focused on linking Level-2 requirements to Level-1 by identifying mid-level guidance that supports strategic objectives. This round emphasized interpreting general guidance while maintaining consistency across levels.
Finally, Level-3 requirements were mapped to Level-2 by specifying detailed technical and operational requirements. These finer-grained requirements offered actionable details on how to implement the mid-level guidance.
We employed a "Traceability Matrix" as a tool for this mapping to ensure that high-level goals and requirements were systematically linked to lower-level implementation practices. The matrix traced relationships between goals, requirements, and final implementation steps, ensuring consistency, traceability, and completeness throughout the process. Thorough reviews were conducted during each round, and the resulting mapping was cross-checked by the research team. For example, both top-down and bottom-up analyses were performed to identify potential gaps or missing requirements, ensuring no critical controls or practices were overlooked. This dual analysis approach enhanced the comprehensiveness and coverage of the framework.
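The dual top-down/bottom-up gap check over the traceability matrix can be sketched as follows. The matrix contents here are hypothetical placeholders; only the checking logic mirrors the process described above:

```python
# Traceability matrix as parent -> child links between adjacent levels.
# The IDs are illustrative placeholders, not actual framework identifiers.
links = {
    ("L1-01", "L2-01"), ("L1-01", "L2-02"),   # Level-1 -> Level-2
    ("L2-01", "L3-01"), ("L2-02", "L3-02"),   # Level-2 -> Level-3
}
level2 = {"L2-01", "L2-02", "L2-03"}
level3 = {"L3-01", "L3-02", "L3-03"}

parents = {p for p, _ in links}
children = {c for _, c in links}

# Top-down: requirements that are never refined at the next level.
unrefined = {r for r in level2 if r not in parents}
# Bottom-up: requirements that trace to no higher-level requirement.
orphaned = {r for r in level3 if r not in children}

print(sorted(unrefined))  # ['L2-03']
print(sorted(orphaned))   # ['L3-03']
```

Any requirement flagged by either direction signals a gap to be resolved before the mapping is considered complete.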
Identify Operations and Agents: Following the completion of the requirements mapping, we identified operations and agents for each requirement. Operations can be identified using two different methods: one through stakeholder engagement (e.g., workshops), and the other by deriving them directly from requirements [22]. In this study, we first applied the latter approach and subsequently reviewed the results with relevant stakeholders. Operations were derived from Level-3 requirements and represent specific and practical actions needed to achieve the goals and fulfill the requirements. For each operation, agents, which include both stakeholders (e.g., software developers, security teams) and systems (e.g., automated security tools), were identified as responsible entities for implementing the operations and ensuring goal fulfillment.
# 4.2 Overview of the Structure
Our framework aims to facilitate stakeholder understanding by offering a structured, transparent process. It helps stakeholders clearly see how security requirements are implemented and understand their respective roles in fulfilling these requirements.
Figure 4 illustrates the architecture of the mapping framework. The figure highlights the external sources used in developing the framework and potential practical applications, such as generating security checklists.
This improves stakeholder understanding by simplifying complex requirements and making them ready for day-to-day operational use. Organizations can easily generate operational checklists by extracting relevant requirements or directly selecting operations tailored to their specific security needs. This flexibility makes the framework a valuable tool for improving operational security practices in real-world scenarios.
Figure 5 presents an example mapping between goals, requirements, and operations. The figure illustrates the top-most goal, its sub-goals, and five Level-1 requirements under the primary goal, "Secure Software Environment". The first Level-1 requirement, "Environment segregation", includes a Level-2 requirement that is further broken down into three Level-3 requirements: "Secure and isolate sensitive application secrets", "Build platform-isolation strength-hosted", and "Implement isolated build platforms for secure environment segregation". Additionally, the figure shows four operations associated with the first Level-3 requirement, which includes agents and software supply-chain phases.
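The example in Figure 5 can be written down as a nested structure. The mapping content below is taken from the figure description above; the dictionary layout itself is illustrative:

```python
# Fragment of the Figure 5 mapping, encoded as a nested dictionary.
example_mapping = {
    "goal": "Secure Software Environment",
    "level1": {
        "requirement": "Environment segregation",
        "level2": [{
            "requirement": "Ensure strict segregation of development environments",
            "level3": [
                "Secure and isolate sensitive application secrets",
                "Build platform-isolation strength-hosted",
                "Implement isolated build platforms for secure environment segregation",
            ],
        }],
    },
}

# The Level-2 requirement is refined into three Level-3 requirements.
l3_reqs = example_mapping["level1"]["level2"][0]["level3"]
print(len(l3_reqs))  # 3
```
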
Figure 4: Overview of the software security mapping framework.
In the following sections, we provide a detailed introduction to the mapping elements.
# 4.3 Software Security Goals
As shown in Figure 5, we identified "Secure Software" as the top-level strategic goal for this study. This goal is supported by four sub-goals: "Secure Software Environment", "Secure Software Development", "Software Traceability", and "Vulnerability Management".
Secure software environment ensures that the infrastructure used for development, testing, and production is protected from unauthorized access, malicious code, and other security threats. This reduces the risk of vulnerabilities being introduced into the software and mitigates potential damage from security breaches [23].
Secure software development focuses on designing, developing, and testing software with security as a primary consideration. This proactive approach minimizes vulnerabilities and coding errors, reducing future security risks and enhancing the overall integrity of the software [24].
Software Traceability ensures that all components of the software, including their origins and any modifications, are documented and tracked [25]. This supports transparency, facilitates the identification of vulnerabilities, improves incident response, and ensures compliance with security and regulatory requirements [26].
Vulnerability Management aims to identify, report, and resolve vulnerabilities promptly [27]. By implementing a vulnerability disclosure program and clear reporting mechanisms, organizations can address security risks quickly and reduce the likelihood of exploitation.
These four sub-goals collectively ensure that the overarching objective of "Secure Software" is achieved by addressing key aspects of security throughout the software life-cycle. By systematically implementing these goals, organizations can enhance software reliability, mitigate risks, and maintain compliance with security standards.
# 4.4 Multi-layered Requirement
# Level-1 Requirement:
This level comprises key components such as id, requirement, and reference including the source framework and relevant links. We identified 23 high-level requirements from the ISM framework, which primarily include regulatory requirements for various types of software development, such as traditional software, mobile software, and Large Language Models (LLMs). These high-level requirements serve as strategic objectives, ensuring compliance with industry regulations and standards.
Figure 5: Example mapping between goal, requirement, and operation.
Table 3 presents the full list of the requirements we identified, categorized into the four goals.
# Level-2 Requirement:
At this level, each Level-1 requirement has been broken down into one or more mid-level requirements to provide more detailed guidance and operational context. These mid-level requirements offer clarity on how high-level objectives can be met in practice. A total of 73 requirements were identified, primarily based on the NIST SSDF framework. To ensure alignment with different goals and higher-level requirements, some of these mid-level requirements include intentional duplications, where the same requirement may apply across multiple categories or goals.
Figure 6 presents all requirements at this level and shows how they align with the higher-level goals and requirements.
The goal of "Secure software environment" is to ensure that the infrastructure used for software development, testing, and production is protected against unauthorized access, data breaches, and other security threats. The primary focus is on maintaining a secure environment where software can be developed without introducing vulnerabilities. At Level-2, this goal is operationalized by implementing specific requirements such as ensuring strict segregation between different environments (e.g., development, testing, production), securing data through encryption and robust access control
Table 3: Level-1 requirements: 23 regulatory requirements derived from the Australian Information Security Manual (ISM).
Secure Software Environment
- Environment segregation: Ensure strict segregation of development environments
- Development environments: Harden development environments for security
- Data security: Secure data security across environments
- Access control: Establish comprehensive access control criteria and mechanisms; Secure data protection processes; Secure and restrict code access
- Secure modification: Ensure integrity of software releases

Secure Software Development
- Secure-by-design: Define and maintain security requirements; Establish secure internal development policies; Set security standards for third parties; Define and review team accountability; Provide training for secure development roles; Secure management commitment to development security; Maintain security lifecycle documentation records; Manage secure third-party software components; Develop secure in-house software components; Verify compliance of third-party components; Adhere to comprehensive secure coding practices
- SecDevOps practices: Compliance as code: Specify tools/tool types to mitigate risks; Compliance as code: Secure development environments; Shift-left security: Determine appropriate code review methods; Shift-left security: Perform early code review and analysis; Shared responsibility: Define roles and responsibilities; Automation: Automate toolchain to secure information; Automation: Implement secure toolchain management; Automation: Configure tools to generate secure artifacts; Collaboration: Establish vulnerability disclosure and collaboration; Collaboration: Perform secure code reviews collaboratively
- Threat modelling: Apply threat modelling techniques; Conduct thorough secure design reviews
- Mobile application security: Encrypt sensitive data-at-rest effectively; Use secure cryptography for data protection; Implement robust authentication mechanisms securely; Encrypt and secure data-in-transit; Follow best practices for platform interactions; Process data securely with updated practices; Implement strong anti-tampering mechanisms; Include user privacy and control measures
- LLM risk mitigation: Strictly enforce LLM access control policies; Secure LLM data with cryptography; Validate and sanitize LLM inputs effectively; Integrate secure design in LLM applications; Review and secure LLM configurations; Maintain up-to-date LLM inventories; Implement strong authentication for LLM access; Ensure integrity of LLM code and models; Implement comprehensive logging for LLM security; Mitigate server-side request forgery risks
- Evaluation of LLM applications: Detect and block harmful LLM content; Mitigate misinformation and disinformation risks; Enhance LLM information security measures; Block abusive or harmful content generation
- Contents protection: Secure tools and reliably sign executables
- Update protection: Secure tools and reliably sign updates
- Secure configuration: Establish a secure configuration baseline; Apply and enforce secure default settings
- Security testing: Monitor and respond to vulnerabilities proactively; Conduct automated vulnerability analysis regularly; Assess executable testing requirements thoroughly; Conduct comprehensive security test procedures

Software Traceability
- Software bill of materials: Archive software data for traceability

Vulnerability Management
- Vulnerability disclosure program: Implement an accessible disclosure program
- Vulnerability disclosure policy: Develop and update disclosure policies
- Vulnerability disclosure process: Define vulnerability management processes; Develop a security response playbook and host a security.txt
- Security information: Manage responsible reporting processes effectively
- Vulnerability resolution: Analyze vulnerability risks for prioritization
- Root cause analysis: Perform root cause analysis for security
mechanisms, hardening the development environment by applying secure configurations and regularly monitoring for threats, and establishing clear criteria for controlling and restricting access to critical assets and environments.
"Secure software development" is to ensure that software is developed with security as a primary consideration, minimizing vulnerabilities and maintaining the integrity of the software throughout its life-cycle. This goal is achieved by defining and enforcing secure development policies and practices, establishing accountability by defining team roles and responsibilities, incorporating SecDevOps practices to integrate security into the development pipeline, applying threat modeling techniques and conducting secure design reviews to identify and mitigate risks early, ensuring compliance with secure coding standards and automating security checks (e.g., code review, testing), and providing training to development teams to enhance their understanding of secure development practices.
"Software traceability" is to ensure that all components of the software, including their origins and any changes, are documented and tracked. This enhances transparency and accountability, making it easier to identify vulnerabilities and ensure compliance with regulatory standards. This goal can be achieved by maintaining a Software Bill of Materials (SBOM) that lists all software components, including third-party dependencies, and by archiving software data to ensure historical traceability and auditability. Additionally, ensuring that modifications to the software are traceable and documented enables quick identification of issues and facilitates effective incident response [28].
The goal of "Vulnerability management" is to establish processes for identifying, reporting, and addressing vulnerabilities in a timely manner to reduce security risks. This goal is implemented through a vulnerability disclosure program that enables responsible reporting of security issues and defines clear processes for disclosure and resolution. It involves developing a security response playbook, hosting a security.txt file to guide external reporters, and performing root cause analysis to address underlying issues and prevent similar vulnerabilities in the future. Additionally, regular automated vulnerability analysis and proactive monitoring help detect potential threats before exploitation [29].
# Level-3 Requirement:
Level-3 requirements were identified to provide finer-grained details, offering more precise technical and practical descriptions or examples. These include 99 requirements, primarily derived from SLSA, SAMM, and S2C2F, with additional consideration given to CISA, NIST AI RMF, and OWASP to complement the framework.
For example, the Level-2 requirement "Ensure strict segregation of development environments" is mapped to the Level-3 requirement "Build platform-isolation strength—hosted", which is based on the SLSA framework. This Level-3 requirement offers a detailed description along with specific examples as follows.
"All build processes must be executed on isolated, hosted build platforms operating on shared or dedicated infrastructure rather than individual workstations. This requirement enforces strict environmental segregation, aligning with the principles of zero-trust architecture and robust environmental protection. Examples of hosted build platforms include GitHub Actions, Google Cloud Build, and Travis CI."
Each requirement at this level provides a similar level of detail, serving as a source for defining operations.
# 4.5 Actionable Operations
Operationalization of security requirements supports clear responsibility and task allocation. It includes defining operations (tasks) and identifying agents responsible for executing these tasks to achieve the goals. We derived operations from the Level-3 requirements, along with corresponding agents and relevant software supply-chain phases for each operation. Each Level-3 requirement generated 3 to 5 operations, resulting in a total of 424 operations in the framework.
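As a quick consistency check on the figures reported above, and as a hypothetical sketch of the record layout for one operation (the field names and the sample task wording are illustrative, the task text being adapted from the example requirement description in this section):

```python
from collections import namedtuple

# Hypothetical record layout for one operation in the framework.
Operation = namedtuple("Operation", ["task", "agent", "phase"])
op = Operation(
    task="Provision ephemeral build environments for each build",
    agent="Security team",
    phase="development",   # one of: preparation, development, deployment, post-deployment
)

# Sanity-check the reported totals: 99 Level-3 requirements,
# 3 to 5 operations each, 424 operations overall.
n_level3, total_ops = 99, 424
assert n_level3 * 3 <= total_ops <= n_level3 * 5
print(op.phase, round(total_ops / n_level3, 2))  # development 4.28
```

The bounds hold: 99 requirements at 3 to 5 operations each give between 297 and 495 operations, consistent with the 424 operations in the framework (about 4.3 per requirement on average).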
Table 4 shows example operations identified from the Level-3 requirement "Implement isolated build platforms for secure environment segregation". In the framework, this requirement provides a detailed description: "Build platforms must enforce isolation to ensure that runs cannot influence each other, even within the same project. Isolation safeguards include preventing builds from accessing platform secrets, such as provenance signing keys; ensuring the integrity of the build provenance; ensuring builds that overlap in time cannot interact, preventing issues such as memory manipulation across builds; provisioning ephemeral environments for each build, preventing persistent influences between consecutive builds; mitigating risks of 'cache poisoning' by ensuring outputs remain consistent regardless of cache usage; and restricting remote influence or interactions unless they are explicitly captured and audited as external parameters."
We identified six operations from this requirement, as shown in the table.
Table 4: Example operations identified from Level-3 requirement.
The operations are detailed alongside the responsible parties ("Agent") and the timeline ("Phase") in which they are conducted. In the table, "Agent" identifies the team or individual accountable for specific tasks, while "Phase" categorizes the stage of the software supply chain where the operations occur.
As the framework was designed for general-purpose use and to support a wide range of applications, the software supply chain is divided into four key phases: preparation, development, deployment, and post-deployment. General roles, such as the security team and development team, are also defined to ensure clarity and accountability.
This structure provides organizations with the flexibility to customize and adapt the framework to meet their unique operational needs and contextual requirements, enabling efficient and effective implementation of security practices across their software supply chain.
# 4.6 Validation of the Framework
Our work builds on previous efforts to map software supply chain security frameworks (e.g., SSDF to SAMM). While these mappings are valuable, they are primarily crosswalks between frameworks at similar abstraction levels; they stop short of offering actionable guidance and lack operational detail. They also do not provide a unified structure that links strategic intent to practical implementation, nor do they address traceability across the full software life-cycle. In contrast, our framework introduces several novel contributions, as follows.
Multi-layered and goal-driven structure. We move beyond flat framework-to-framework mappings by applying goal-oriented requirements engineering to create a hierarchical, three-level mapping, from strategic (Level-1) to operational (Level-3) requirements. This structure provides semantic clarity and traceable linkages between goals, requirements, and implementation actions, addressing a key limitation of prior mappings that often lacked internal consistency or top-down rationale.
Operationalization of requirements. A core contribution of our framework is the derivation of over 400 operations grounded in Level-3 requirements. These operations are detailed and context-aware tasks that specify who (agent) performs what (operation), when (phase), and why (linked requirement and goal). This operational focus addresses a well-recognized gap in existing frameworks, which tend to remain at the level of high-level policies or control objectives without specifying actionable steps.
Enhanced interoperability. To support integration with existing tools and practices, we adopted and extended the NIST Open Security Controls Assessment Language (OSCAL). Our adaptation includes a new <operation> component within the OSCAL catalog model, allowing organizations to represent and share fine-grained operational practices in a machine-readable format (see Section 5.2 for details). This supports automation, toolchain integration, and streamlined compliance reporting, which is not addressed by prior mapping frameworks.
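A minimal sketch of what a catalog entry carrying the added operation component might look like follows. The field names and identifiers here are simplified stand-ins, not the actual schema of our OSCAL extension (see Section 5.2 for the real format); the sketch only illustrates that operations become machine-readable data that round-trips through standard JSON tooling:

```python
import json

# Simplified, hypothetical rendering of a control with an "operations"
# extension inside an OSCAL-style catalog; all field names are illustrative.
catalog = {
    "catalog": {
        "uuid": "00000000-0000-0000-0000-000000000000",
        "metadata": {"title": "Software Security Mapping", "version": "1.0"},
        "controls": [{
            "id": "env-1.1.3",
            "title": "Implement isolated build platforms for secure "
                     "environment segregation",
            "operations": [{
                "task": "Provision ephemeral environments for each build",
                "agent": "security team",
                "phase": "development",
            }],
        }],
    }
}

# Machine-readable: serialize and restore without loss.
blob = json.dumps(catalog)
restored = json.loads(blob)
print(restored["catalog"]["controls"][0]["id"])  # env-1.1.3
```
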
Comprehensive framework and supply chain coverage. While previous mappings focus on aligning two frameworks (e.g., SSDF and SAMM), our approach unifies inputs from seven major frameworks (ISM, SSDF, SLSA, SAMM, TUF, CISA, S2C2F), enabling broader coverage of diverse software security concerns. Importantly, this mapping also comprehensively spans the entire software supply chain, from the early preparation phase through development, deployment, and post-deployment activities. Each operation is contextualized within its relevant phase and assigned to appropriate agents, ensuring that responsibilities and actions are clearly traceable across the full life-cycle. Through iterative review and refinement, we achieved 100% mapping completion across all identified goals and requirements, demonstrating the robustness, extensibility, and practical applicability of our approach to real-world secure software development and supply chain security management.
Real-world validation through incident-based checklist. We validated the practical utility of our framework by generating a scenario-based checklist in response to the Log4j vulnerability (CVE-2021-44228) (see Section 5.1 for details). By extracting relevant operations tied to risk mitigation recommendations, we demonstrated how our framework can be used to rapidly assess, plan, and communicate security actions in real-world scenarios which existing mappings do not directly support.
In summary, our framework advances the field by offering an operationally detailed and practically validated model tailored for end-to-end software supply chain security. It addresses not only what needs to be done (requirements), but also how it should be done (operations), by whom (agents), and when (phases), enabling a more complete and actionable understanding of secure software development practices. Table 5 compares our framework with key existing mappings.
Table 5: Comparison with the existing mapping frameworks.
# 5 Discussion and Implications
# 5.1 Real-world applications of the mapping
This section discusses the potential applications of the mapping framework, including a user navigation tool and example checklists designed to mitigate potential software security risks for specific concerns and scenarios.
Interactive tool for mapping exploration: The mapping developed in this study encompasses a large number of requirements and operations under four overarching goals. Potential users, such as software developers and security experts within organizations, may face challenges in understanding the mapping due to its high volume and complex structure. To address this issue and enhance accessibility, we have developed a web-based navigation tool that provides a simple yet efficient way to explore the mapping (Figure 7).
Figure 7a shows the top-level strategic goal, "Secure Software," along with its four sub-goals and corresponding Level-1 requirements. Users can expand each goal to view its description before exploring the associated requirements in greater detail (Figure 7b).
For each requirement, the tool provides a brief description and a list of the next-level requirements (Level-2). Users can click on any requirement to navigate to the next level and see more detailed requirements and associated operations.
As shown in Figure 7c, selecting a Level-2 requirement leads to a page that presents its detailed requirements and operations. This page provides comprehensive information about the selected requirement, including the Level-3 requirements and operations linked to each of them.
While this study provides a comprehensive set of software security requirements and operational guidelines for practical application, it is recognized that these may not be exhaustive. This tool can serve as a platform to engage users and foster an ecosystem where multiple stakeholders can contribute to the mapping and its ongoing improvement. A dynamic, collaborative ecosystem can enable self-sustaining growth, allowing diverse groups to interact, contribute, utilize, and collaborate. Such an environment facilitates the continuous refinement and expansion of security practices to address emerging challenges and evolving needs.
Scenario-based security checklists: To demonstrate the practical application of our mapping framework, we selected a real-world incident, "Log4j Vulnerabilities Create Unprecedented Impacts Worldwide". This case study is discussed in "CISA Tabletop Exercise Package Open Source Software" and has been registered in the Common Vulnerabilities and Exposures (CVE) database and publicly visible with the assigned identifier, CVE-2021-44228 [30]. The following is the case description from the CISA report.
In November 2021, critical vulnerabilities were discovered in a widely used, open source Java-based logging framework. The vulnerability set allowed for remote code execution that could be exploited in Java installations worldwide. The vulnerabilities became a zero-day exploit, with an upgraded version made publicly available the day after the first exploit was observed. Mitigation and remediation required unprecedented levels of effort from individual organizations and the broader cybersecurity community. The Java Naming and Directory Interface lookup feature, incorporated into Log4j in 2014, introduced the vulnerable attack surface. Log4j is considered an “endemic vulnerability” because vulnerable versions of Log4j will remain in systems for years to come. Organizations should have long-term capabilities to discover and upgrade vulnerable software to reduce the risks created by this endemic vulnerability, to include proactively monitoring for and upgrading vulnerable versions of Log4j, preventing the reintroduction of vulnerable versions of Log4j, and prioritizing applying software upgrades to avoid long-term exposure of vulnerable attack surfaces [31].
To address the concerns raised by the Log4j incident, we identified 24 recommendations and practices for organizations from the review reports of this incident [31, 32] and mapped them to the requirements and operational actions from our framework. The recommendations encompass actual organizational responses, lessons learned, and expert guidance discussed in the reports.
Table 6 presents the list of recommendations along with the corresponding security checklist (a total of 21) generated using our mapping framework. The table also provides brief instructions for each checklist item. For detailed information and operational tasks, our mapping framework provides a structured approach that connects specific security requirements to actionable steps.
Table 6: Checklist generated using our mapping framework, incorporating recommendations from Log4j incident reports.
Although the checklist was developed to address a specific security incident (Log4j), it comprehensively covers various aspects of software security. In addition to vulnerability management, it also includes areas such as secure software environments, secure development, and software traceability (Figure 8). Furthermore, the checklist can serve as a proactive security framework for organizations to strengthen their overall security posture and mitigate risks beyond Log4j-related threats.
Figure 8a shows the distribution of the identified checklist items by software security goal. The figure illustrates the proportion of checklist items categorized under the four software security goals. As shown, the majority of the checklist items fall under "Secure Software Development" (38%) and "Vulnerability Management" (38%), reflecting their critical importance in addressing security concerns such as coding standards, vulnerability identification, and mitigation. "Secure Software Environment" and "Software Traceability" are comparatively less represented, contributing 14% and 10%, respectively, emphasizing their specialized but essential roles.
Figure 8b breaks the checklist down by software security requirement. It provides a detailed count of the checklist items mapped to specific Level-1 requirements within each software security goal. "Secure Software Development" includes key checklist items for SecDevOps practices, secure-by-design principles, and secure testing, with each category contributing significantly. "Secure Software Environment" emphasizes environment segregation, which accounts for the highest number of checklist items in this category. "Software Traceability" primarily focuses on maintaining and safeguarding the software bill of materials (SBOM) to ensure transparency and traceability. "Vulnerability Management" spans multiple high-level categories such as vulnerability disclosure programs, vulnerability resolution, and root cause analysis, highlighting its comprehensive approach to addressing and mitigating vulnerabilities.

Figure 7: Web-based tool for exploring the mapping framework. (a) Navigator for the mapping, showing the four software security goals and the Level-1 requirements for each goal. (b) Selection of a Level-2 requirement; "Access control" has three Level-2 requirements. (c) Level-3 requirements and operations; a user can select a Level-1 or Level-2 requirement and see the detailed requirements and operations.

Figure 8: Distribution and categorization of the checklist derived from the mapping framework. (a) Distribution of the checklist by software security goals. (b) Breakdown of the checklist by software security goals and high-level categories.
# 5.2 Machine-readable Format for Interoperability
Machine-readable formats such as the Open Security Controls Assessment Language (OSCAL) [33], Structured Threat Information Expression (STIX) [34], and the Common Security Advisory Framework (CSAF) [35] are essential for modern cybersecurity, compliance, and risk management. They enable automation, accuracy, portability, and interoperability, reducing manual effort, minimizing errors, and streamlining security assessments. In the context of software supply chain security, OSCAL plays a critical role in facilitating the structured representation, validation, and exchange of security requirements and controls across different organizations and tools.
To support a comprehensive and practical implementation of software supply chain security frameworks, we provide both an Excel sheet and a web tool for holistic mapping. This mapping spans three levels of frameworks down to detailed operational steps, offering a structured reference architecture model that is both practical and adaptable across diverse real-world scenarios.
While the human-readable representation of the Software Security Framework Mapping ensures stakeholder transparency and accessibility, its machine-readable specification, based on the OSCAL model, enhances portability and interoperability for documentation, reporting, and information sharing across the software supply chain community.
OSCAL standardizes the representation of security controls, enabling security frameworks to define, search, import, and export control information in a unified format. This common structure ensures seamless integration with automated compliance tools, risk management systems, and DevSecOps pipelines, thereby strengthening the security posture of the software supply chain.
# 5.2.1 Introduction to OSCAL
OSCAL is a standardized framework developed by NIST to facilitate security assessment, authorization, and continuous monitoring of information systems. OSCAL provides a machine-readable format for security control-related data, supporting multiple formats such as JSON, YAML, and XML [36]. By leveraging OSCAL, organizations can automate compliance processes, improve security documentation consistency, and enhance interoperability across security tools and frameworks.

OSCAL is structured into three layers: the Control layer, the Implementation layer, and the Assessment layer (Figure 9). Each layer contains a set of models that support different aspects of security control implementation.
Control Layer. Cybersecurity and software security frameworks often define a set of controls (security requirements) intended to reduce the risk to a system. Framework authors typically organize these controls into a catalog. The catalog model is the basis for all other OSCAL models: controls used in any other OSCAL model must first be defined in a catalog. Organizations and system owners then identify which controls are applicable to a system or use case, which may include controls from more than one catalog. The profile model is used for selecting, organizing, and tailoring a specific set of controls; a profile enables controls to be selected and tailored to express a baseline of controls.
Implementation Layer. The OSCAL Implementation Layer focuses on the implementation of a system under a specific baseline as well as the individual components that may be incorporated into a system.
Assessment Layer. The OSCAL Assessment Layer focuses on assessment activities, on communicating all assessment findings including supporting evidence, and on identifying and managing the remediation of risks identified as a result of assessment activities.
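The catalog/profile relationship described above can be sketched with plain data structures. The following is a minimal illustration using Python dicts (not an official OSCAL library; the second control id and title are hypothetical placeholders in the paper's naming scheme):

```python
# A catalog defines controls; a profile selects a subset of them by id.
catalog = {
    "catalog": {
        "groups": [{
            "id": "SSS-01",
            "title": "Secure Software Environment",
            "controls": [
                {"id": "SSS-01-01", "title": "Environment segregation"},
                {"id": "SSS-01-02", "title": "Secure application secrets"},
            ],
        }]
    }
}

profile = {"profile": {"imports": [{"include-controls": ["SSS-01-01"]}]}}

def resolve(catalog, profile):
    """Return the catalog controls selected by the profile's include list."""
    wanted = {cid
              for imp in profile["profile"]["imports"]
              for cid in imp["include-controls"]}
    return [ctl
            for grp in catalog["catalog"]["groups"]
            for ctl in grp["controls"]
            if ctl["id"] in wanted]

print([c["title"] for c in resolve(catalog, profile)])
# ['Environment segregation']
```

A real OSCAL catalog carries far more structure (metadata, parts, parameters), but the select-and-tailor idea is the same.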
The Australian Signals Directorate (ASD) provides the Information Security Manual (ISM) in the OSCAL format. Additionally, NIST has published various OSCAL learning resources to help organizations understand the underlying concepts and practical applications of OSCAL.
# 5.2.2 OSCAL-aligned Format for Software Security Operations
In this study, we adopted the Control layer, while excluding the other layers, as implementation and assessment are beyond the scope of this project. In particular, we use the catalog model in the Control layer to represent our holistic mapping framework. Figure 9 provides an overview of the structure of our catalog model, illustrating how the NIST OSCAL model was applied in this study. The primary objective of using OSCAL catalogs is to define organized sets of security controls; accordingly, the OSCAL catalog model offers the ability to group related controls as well as to define individual controls. To transform our mapping into the OSCAL catalog model, we introduced a newly defined operation component, nested within individual controls. As a result, as highlighted in Figure 9, our modified catalog model follows a top-down structure, from a top-level group to a set of security controls and their associated operations.
Representing the mapping structure in a machine-readable format requires adding structure around textual information, enabling each data point to be easily identified and processed by a machine. In the following, we provide a simple OSCAL catalog model example for the "Secure Software Environment" goal.
A top-level strategic goal. The ISM guidelines for software development define four top-level strategic sub-goals, each represented as a <group> in the OSCAL format. Each <group> (see Figure 10 for a snippet of a single group) includes: a unique identifier (e.g., goal-id: "SSS-01"), a title (e.g., "Secure Software Environment"), a goal description, and a set of nested security <control> elements that represent the different levels of security requirements.
Three-level security requirements. Level-1 security requirements are represented as a single <control>, with their subcontrols (Level-2 and Level-3 security requirements) nested within it in the OSCAL format. These elements are structured under the sub-goal "Secure Software Environment", ensuring a standardized and machine-readable representation. Each <control> (see Figure 11) encompasses multiple nested Level-1 to Level-3 requirements. It includes: a title (e.g., <title>Environment segregation</title>), an identifier (e.g., "SSS-01-01"), the original security requirement
Figure 9: NIST OSCAL format (layers and models) and our catalog model.
Figure 10: A top-level strategic goal, in OSCAL.
```xml
<catalog>
  <group>
    <title>Secure software environment</title>
    <prop name="goal-id" value="SSS-01"/>
    <part name="goal-description">
      <p>Segregating development, testing and production environments, and ...</p>
    </part>
    <control id="SSS-01-01" class="level-1" value="ISM-control">
      ...
    </control>
  </group>
</catalog>
```
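Because the catalog is plain XML, snippets like the one above can be processed with standard tooling. A small sketch using Python's `xml.etree.ElementTree` (the snippet is abridged, and the control body is elided here as in the figure):

```python
import xml.etree.ElementTree as ET

snippet = """
<catalog>
  <group>
    <title>Secure software environment</title>
    <prop name="goal-id" value="SSS-01"/>
    <part name="goal-description">
      <p>Segregating development, testing and production environments.</p>
    </part>
    <control id="SSS-01-01" class="level-1" value="ISM-control"/>
  </group>
</catalog>
"""

root = ET.fromstring(snippet)
group = root.find("group")
# ElementTree supports limited XPath, including attribute predicates.
goal_id = group.find("prop[@name='goal-id']").get("value")
controls = [c.get("id") for c in group.findall("control")]
print(goal_id, controls)  # SSS-01 ['SSS-01-01']
```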
statement, an interpreted description, reference link resources (e.g., <link>), and several generated operation steps (although not shown in this example).
Generated Operations. Group and control are predefined components of the OSCAL catalog model. To incorporate the Operations from our proposed Reference Architecture into the catalog model, we introduce a newly defined element within the security control: <operation>. The <operation> element (see Figure 12) includes an operation identifier (e.g., id="SSS-01-01-01-01-02"), a title (e.g., "Ensure Secret Isolation"), a description, the responsible parties ("operation-agent"), and the timeline ("operation-phase") in which they are conducted.
While the profile model is not the primary focus of this study, we briefly illustrate how our catalog model can be used to address specific security concerns, such as the Log4j incident. In the previous section, we presented Table 6, a recommended security checklist generated using our mapping framework. An example of the OSCAL profile model format for the Log4j security checklist is provided in Figure 13.
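The nested Level-1/Level-2/Level-3 <control> hierarchy of Figure 11 can likewise be traversed programmatically. A hedged sketch, assuming a simplified version of that structure:

```python
import xml.etree.ElementTree as ET

nested = """
<catalog>
  <group>
    <control id="SSS-01-01" class="level-1">
      <control id="SSS-01-01-01" class="level-2">
        <control id="SSS-01-01-01-01" class="level-3">
          <operation id="SSS-01-01-01-01-01"/>
        </control>
      </control>
    </control>
  </group>
</catalog>
"""

def walk(elem, out):
    """Collect (class, id) pairs for every nested <control> element."""
    for child in elem.findall("control"):
        out.append((child.get("class"), child.get("id")))
        walk(child, out)
    return out

root = ET.fromstring(nested)
levels = walk(root.find("group"), [])
print(levels)
# [('level-1', 'SSS-01-01'), ('level-2', 'SSS-01-01-01'),
#  ('level-3', 'SSS-01-01-01-01')]
```

Note that the <operation> element is deliberately not collected; it is the leaf introduced by the paper's extended catalog model, not an OSCAL control.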
# 5.3 Theoretical Implications
Goal-driven Operationalization of Security Requirements. This study advances the theory of software security requirements by demonstrating how high-level goals can be systematically decomposed into structured, actionable components. Using a goal-oriented requirements engineering (GORE) approach, specifically the KAOS model, we offer a multi-layered mapping framework that links strategic intent to operational tasks. This structure provides semantic
```xml
<catalog>
  <group>
    <!-- level-1 -->
    <control id="SSS-01-01" class="level-1" value="ISM-control">
      <title>Environment segregation</title>
      <prop name="level-id" value="SSS-01-01"/>
      <part name="statement" id="SSS-01-01_smt">
        <p>Development, testing and production environments are segregated.</p>
      </part>
      <link href="https://www.cyber.gov.au/sites/..." rel="reference">
        Information Security Manual (ISM-0400)
      </link>

      <!-- level-2 -->
      <control id="SSS-01-01-01" class="level-2" value="SSDF-task">

        <!-- level-3 -->
        <control id="SSS-01-01-01-01" class="level-3" value="SAMM-activity">

          <!-- operation -->
          <operation> ... </operation>
        </control>
      </control>
    </control>
  </group>
</catalog>
```
clarity, traceability, and contextual relevance, addressing a common limitation in existing security frameworks that often lack implementation-level detail.
Synthesis of Fragmented Security Knowledge. This research synthesizes fragmented and often ambiguous security guidance—spread across multiple industry and government frameworks—into a unified, coherent model. By integrating inputs from key security frameworks (e.g., ISM, SSDF, SAMM, SLSA), our framework not only supports alignment across differing standards but also fills conceptual and operational gaps between them. This contributes to theory building by offering a more holistic view of software security as a multi-framework, life-cycle-spanning concern.
Bridging Abstract Frameworks and Real-world Practices. While most security frameworks operate at a high level of abstraction, our framework introduces a middle layer that connects these abstract requirements to practical, real-world operations. By defining over 400 operations mapped to specific agents and software supply chain phases, we provide a tangible model for how organizations can implement and assess security requirements across the software life-cycle. This operational granularity contributes to the theoretical understanding of how security practices are distributed and contextualized across organizational roles and development stages.
A Foundation for Machine-readable Security Modeling. The integration of our mapping framework into an extended, OSCAL-based machine-readable format opens new theoretical directions for automation, compliance, and interoperability. By introducing a new <operation> component and aligning with existing OSCAL models, we demonstrate how structured requirement models can evolve into dynamic, tool-friendly artifacts. This supports research on formalizing and automating security controls, and offers a foundation for future work in standards-based, life-cycle-aware security assurance.

# Abstract

The escalating complexity of modern software development environments has heightened concerns around supply chain security. However, existing frameworks often fall short in translating abstract security principles into concrete, actionable practices. This paper introduces the Software Security Mapping Framework, a structured solution designed to operationalize security requirements across hierarchical levels -- from high-level regulatory standards (e.g., ISM, the Australian cybersecurity standard published by the Australian Signals Directorate), through mid-level frameworks (e.g., NIST SSDF, the U.S. Secure Software Development Framework), to fine-grained technical activities (e.g., SLSA, a software supply chain security framework). Developed through collaborative research with academic experts and industry practitioners, the framework systematically maps 131 refined security requirements to over 400 actionable operational steps spanning the software development lifecycle. It is grounded in four core security goals: Secure Software Environment, Secure Software Development, Software Traceability, and Vulnerability Management. Our approach leverages the KAOS goal modeling methodology to establish traceable linkages between strategic goals and tactical operations, enhancing clarity, accountability, and practical implementation. To facilitate adoption, we provide a web-based navigation tool for interactive exploration of the framework. A real-world case study based on the Log4j vulnerability illustrates the framework's utility by generating a tailored checklist aligned with industry best practices. Additionally, we offer a structured, machine-readable OSCAL Catalog Model of the Software Security Mapping Framework, enabling organizations to automate implementation, streamline compliance processes, and respond effectively to evolving security risks.
# I. INTRODUCTION
Phishing is a well-known attack technique dating back to at least the 1990s [1]. As the use of the internet has continued to grow, so have the assets accessible online. In today’s digital world, most businesses and organizations are connected to the internet, resulting in a substantial volume of email communication that malicious actors can exploit.
Phishing emails remain a prevalent threat [2], as the majority of successful cyberattacks originate from phishing campaigns [3], [4]. Many email defense mechanisms against phishing attacks focus on metadata, information about the protocols used, and data other than the subject and body text fields of the email [3]. Although such approaches have been successful in detecting phishing emails, other emails that experienced or trained users can easily identify as phishing simply by reading the text still evade detection. With this in mind, our hypothesis is as follows:
By addressing the language and intent of emails, LLMs can detect phishing in a manner that complements existing metadata-based detection techniques.
This research was funded by the European Union as part of the Horizon Europe project SYNAPSE (GA No. 101120853). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
Large Language Models (LLMs) have been shown to exhibit knowledge in this area, and this paper explores to what extent LLMs can act as "the experienced user" to detect phishing intent, both with inherent knowledge and through in-context learning using one or more examples.
Different types of phishing emails exist, each with a distinct intent, as characterized by various MITRE ATT&CK techniques [5]. For example, the intent behind an untargeted mass phishing campaign typically differs significantly from that of a targeted spearphishing email, which contains personalized information about the victim. Exploring both in-context learning and phishing categories, this paper addresses the following five research questions:
(RQ1) To what extent can LLMs infer intent in emails and use that as a factor for phishing detection?
(RQ2) To what extent is knowledge inherent in LLMs, and to what degree do examples in a few-shot learning setting help with detection?
(RQ3) To what degree are LLMs able to explain and justify their reasoning?
(RQ4) To what degree can LLMs differentiate between different types of phishing categories?
(RQ5) To what degree does the contextual knowledge provided by the phishing categories help to identify phishing emails?
In addition to addressing the research questions, the contributions of the paper are as follows. Based on the MITRE ATT&CK framework [5], we populate a taxonomy of phishing intent and use it to enrich a curated dataset of phishing emails. We then design a set of prompts and evaluate them under two settings. In the zero-shot approach, the prompt is presented with the email alone, without any examples of desired outputs. In the few-shot approach, the prompt includes example emails paired with correct labels to guide the model. This study evaluates multiple LLMs to assess their effectiveness in detecting phishing intent, revealing mixed results across models when using in-context learning.
# II. RELATED WORK
In the context of cybersecurity, defense against phishing attacks can be broadly categorised into two types: technical defenses and non-technical defenses [6]. Non-technical defenses primarily focus on educating potential targets—typically email recipients—through methods such as training courses and simulated phishing tests. These initiatives aim to build user awareness and resilience by teaching individuals how to recognize and respond to phishing attempts. In contrast, technical defenses play a critical role in securing email platforms through automated detection and prevention mechanisms.
In [8], the author argues that LLMs can reduce the workload and skill barrier required to create high-quality, targeted phishing emails. The research indicates that certain detection mechanisms can be circumvented by carefully crafting phishing emails using LLMs through a method known as prompt engineering, which involves adjusting prompts to produce specific responses or outcomes. The paper proposes either restricting the functionality of advanced models or implementing traceability to prevent their misuse in malicious contexts. In addition, the author proposes an LLM-based defensive system in which the LLMs themselves can detect phishing emails, a crucial development given the strong indicators that LLMs will continue to improve, potentially enabling more sophisticated phishing attack campaigns. Phishing detection systems should consider the capability of prompt engineering to bypass content filters quickly [8].
The authors in [9] provide empirical evidence that phishing emails created with LLMs can achieve a higher success rate than existing phishing emails gathered from online archives. Although LLMs did not outperform manually written emails based on a framework, phishing emails empowered by both LLMs and humans achieved the best results. The authors developed phishing emails employing spearphishing techniques, integrating contextually relevant information tailored to specific targets. Although the primary objective of the paper was to examine the construction of such emails, it also proposed approaches for LLMs in phishing detection. In particular, the authors highlighted the importance of analyzing communicative intent as a potential differentiator between legitimate marketing content and malicious phishing attempts. Furthermore, [8] demonstrated how LLMs could create cost-effective and scalable spearphishing campaigns.
In addition to the subject line and body content, publicly available phishing datasets often include additional metadata, such as IP addresses and authentication protocol logs. Existing detection algorithms frequently leverage sender authentication mechanisms—such as SPF, DKIM, and DMARC [10]—which are commonly employed by traditional machine learningbased email security solutions. For example, datasets like SpamAssassin and the Anti-Phishing Working Group (APWG) provide IP addresses, domain information, and authentication results. Such data plays an essential role in research focused on analyzing the overall characteristics of emails. Furthermore, many phishing emails resemble poorly constructed spam messages, making them easier for users to identify and disregard.
Our paper primarily focuses on analyzing the intent of emails by examining only the subject line and body content, thereby simulating the way a typical user perceives an email. This approach is particularly valuable in scenarios where traditional detection mechanisms fail, allowing phishing messages to bypass security filters and reach users’ inboxes.
ChatSpamDetector [11] is a recent example in which LLMs have demonstrated strong performance in phishing detection, utilizing recent datasets and real-world emails to achieve an accuracy of 99.7%. This significantly outperforms baseline systems and other traditional models. Despite these promising results, the approach is not intended to fully replace existing solutions: deploying commercial LLMs, such as OpenAI's GPT-4o, at scale remains both cost-prohibitive and potentially non-compliant with privacy best practices [12]. Non-technical approaches focus on educating users on how to identify phishing emails [13]. Numerous studies utilize publicly available datasets to conduct their experiments. Many phishing emails included in these datasets might be considered unsophisticated attempts; however, due to varying email security configurations, they can still occasionally end up in users' inboxes [15]. Recently, there has been significant progress in the field of LLMs, particularly in text reasoning tasks and zero-shot learning [14]. ChatSpamDetector used prompts to instruct LLMs on how to perform detection tasks effectively.
# III. AN INTENT-TYPE PHISHING TAXONOMY
The taxonomy used in this work is derived from the MITRE ATT&CK Technique T1566 for phishing [16], as presented in Table I. We adopt the sub-techniques defined by ATT&CK as three distinct categories, focusing on how the attacker delivers the phishing attempt. This categorization supports our analysis of intent in phishing emails, particularly in the context of LLM-based detection. By emphasizing the delivery vector rather than attribution or payload analysis, our use of this taxonomy aligns with the goal of examining how LLMs can interpret the purpose behind an email. To generalize the classification and reflect the broader applicability to various phishing scenarios—including those involving LLM-generated content—we omit the term ’spear’ from the category names, while preserving the core distinctions among the attack vectors.
Phishing via Link refers to phishing emails designed to lure users into clicking on a link or visiting a website. Methods may include the use of shortened URLs, links that closely resemble legitimate domains but contain slight variations (e.g., a single altered character), or obfuscated, non-clickable links. For instance, a URL might be disguised using textual substitutions such as '(dot)com' in place of '.com' to deceive recipients into manually entering the address in their browser. Overall, this category encompasses all phishing attempts that seek to redirect users to malicious websites, whether through direct clicking or more indirect methods.
Phishing via Attachment refers to the method of delivering malicious code through a file that is attached to an email. This method relies on the victim downloading and interacting with the attachment to initiate a cyber infection. This category applies when a malicious file is attached, and the attacker aims for the victim to open it. It is important to note that the experiments conducted in this study focused solely on the text fields within the body and subject of the email. Consequently, the attachment was not included as part of the system’s input. As a result, the system’s outcomes are based exclusively on the text fields without any access to the actual attachment.
Phishing via Service refers to a broader category of phishing attacks that utilize vectors outside the traditional email inbox, meaning the threat does not originate from a link or attachment within the email itself. Instead, attackers typically attempt to redirect the victim to engage through less secure and less monitored channels, such as personal phone numbers, SMS, or even physical mail. These emails usually contain just enough information to prompt the recipient to take further action, such as initiating a money transfer, installing software, or continuing the interaction through third-party services. This category highlights phishing techniques that exploit external communication channels to bypass conventional email-based defenses.
TABLE I: TRANSPOSING MITRE ATT&CK TECHNIQUES TO PHISHING CATEGORIES
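The transposition in Table I can be expressed directly in code. The sub-technique identifiers below are the MITRE ATT&CK T1566 sub-techniques (Spearphishing Attachment, Spearphishing Link, Spearphishing via Service); the category names are the paper's taxonomy with 'spear' dropped:

```python
# Mapping of MITRE ATT&CK T1566 sub-techniques to the paper's categories.
TAXONOMY = {
    "T1566.001": "Phishing via Attachment",
    "T1566.002": "Phishing via Link",
    "T1566.003": "Phishing via Service",
}

def category_for(sub_technique_id: str) -> str:
    """Return the taxonomy category, or 'Other' for unmapped techniques."""
    return TAXONOMY.get(sub_technique_id, "Other")

print(category_for("T1566.002"))  # Phishing via Link
```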
# IV. EXPERIMENTAL SETUP
# A. Data sources and curation
The primary dataset used in the experiments consisted of emails manually selected from three publicly available large email datasets: LING, Nazario, and Enron. The phishing emails were chosen from the LING and Nazario datasets, and legitimate emails were sourced from the Enron dataset. These datasets were selected due to their popularity and their compliance with privacy, especially with benign emails. The labeled datasets were downloaded from Kaggle [15].
During initial experiments, it was observed that when using Enron emails that mention specific references to the company or its products, the LLMs sometimes recognized them as originating from the Enron dataset. Although interesting, this could shift the focus away from email intent and result in skewed results. To maintain concentration on detecting intent, such emails were filtered out.
After initial experiments, a validation set of 100 manually labeled emails was created to ensure an unbiased evaluation of classification and categorization during the final testing phase. This validation set adhered to the same labeling schema as the first dataset but remained unused until the end of the project to minimize any bias in training.
This research incorporated datasets with varying origins, sizes, and levels of complexity to provide a robust assessment of LLM capabilities in detecting phishing emails in the real world.
1) Data Preprocessing: In order to standardize the data for analysis, we processed the data from the datasets in the following way:
1) Extraction of email components: We extracted the text fields from all datasets, specifically the "Subject" field as the header and the "Body" field as the primary text content of each email.
2) Binary label identification: From each dataset, the emails were labeled with a binary label, where a value of 1 indicated a phishing email, and 0 represented a legitimate message.
3) Manual labeling and categorization: For the two custom datasets, all emails with the phishing label were manually categorized according to the corresponding intent categories from the taxonomy.
4) Filtering out dataset bias: During experiments, some emails, like those from the Enron dataset, had clear indicators that caused the LLMs to recognize the text. For these cases and other examples, such as data formatting errors in the datasets, the emails were removed and replaced.
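The four preprocessing steps above can be sketched as follows. Field names such as "subject", "body", "label", and "category" are illustrative assumptions, not the exact column names of the Kaggle datasets:

```python
# A minimal sketch of steps 1-4: keep subject/body text, the binary label,
# and the manual intent category; drop rows with formatting errors.
def preprocess(raw_rows):
    cleaned = []
    for row in raw_rows:
        subject = (row.get("subject") or "").strip()
        body = (row.get("body") or "").strip()
        if not body:            # drop formatting errors / empty bodies
            continue
        cleaned.append({
            "text": f"Subject: {subject}\n\n{body}",
            "label": int(row["label"]),          # 1 = phishing, 0 = legitimate
            "category": row.get("category"),     # manual label, phishing only
        })
    return cleaned

rows = [
    {"subject": "Invoice", "body": "Open the attached file.", "label": 1,
     "category": "Phishing via Attachment"},
    {"subject": "Broken row", "body": "", "label": 0},
]
print(len(preprocess(rows)))  # 1
```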
# B. Prompting Approaches
1) Zero-shot Prompting: In the zero-shot experiments, the prompts are constructed without providing specific examples of phishing or legitimate emails. Instead, they rely on descriptive guidance highlighting key features to identify. The classification prompt remains relatively simple, while the categorization prompt incorporates more detailed criteria. This zero-shot approach leverages the model’s pre-trained knowledge by asking it to assess whether an email is malicious based solely on its internal understanding, without requiring explicit examples.
As a first step, the model is prompted with a binary (yes-or-no) question to determine whether the email is malicious. If the response is affirmative, the second step involves classifying the email into an intent category, which reflects what the attacker aims to prompt the recipient to do. Since no examples are provided, the model must rely entirely on its pre-trained knowledge to infer the characteristics of a malicious email and its underlying intent.
2) Few-shot Learning Prompt: To enhance accuracy, particularly in categorization, we implemented a few-shot learning prompting approach. The main distinction in the few-shot prompt is that each category includes two complete examples of phishing emails, encompassing both the header and body. This approach provides the models with real examples as references, aiming to improve their ability to recognize various patterns within the categories.
# C. The experiments
This project conducted three experiments, all of which employed two different approaches: zero-shot and few-shot. All emails were sent in single prompts, with no overlap between models; each model received only one prompt at a time to avoid data contamination.
All experiments are first conducted using a zero-shot approach by sending only the prompt with the email without any examples. The LLMs execute their task by using only the prompt’s instructions and their own capabilities.
In the few-shot approach, all experiments use the same base prompt, which is augmented in each experiment with two labeled examples for each category of phishing emails relevant to that experiment. These examples serve as contextual cues that help the LLM learn from limited data, a technique commonly referred to as few-shot prompting. This setup allows us to investigate the model's ability to generalize and classify phishing intent based on minimal, targeted guidance.
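As a concrete illustration, the prompt augmentation described above can be sketched as follows. The function name, category label, and example texts are hypothetical placeholders, not the paper's actual prompt material:

```python
def build_few_shot_prompt(base_prompt, examples_by_category, email):
    """Append two labeled phishing examples per category to the base
    prompt, then the email to classify. Names and texts are placeholders."""
    parts = [base_prompt]
    for category, examples in examples_by_category.items():
        for ex in examples[:2]:  # two complete examples per category
            parts.append(f"Example ({category}):\n{ex}")
    parts.append(f"Email to classify:\n{email}")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    "You are an email classifier analyzing potential phishing emails.",
    {"Phishing via Link": [
        "Header: ...\nBody: Click here to verify your account.",
        "Header: ...\nBody: Your account will be suspended in 24 hours.",
    ]},
    "Header: ...\nBody: Please find the quarterly report attached.",
)
```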
Experiment 1 investigates the LLM's innate knowledge by using basic prompt engineering and asking whether the email is malicious. A commonly used prompt engineering technique, often referred to as "role prompting", is employed to convey that the LLM will be used as an email classifier in the initial step. The experiment introduces the prompt shown below, which serves as a baseline for all the other experiments. The experiment was run twice with the same prompt; the only difference was that the few-shot examples of emails and the desired output were appended at the end of the prompt. The few-shot prompt in Experiment 1 does not include the intent categories.
You are an email classifier analyzing potential phishing emails. Your task is as follows:
1. Determine if this email is malicious (Yes/No)
2. Give a short justification for your decision, explain the result.
The response should follow this format:
Phishing: YES/NO
Justification: ...
The categorization prompt additionally defines intent categories, including:
- Phishing via other means: the goal is not to click or download anything in the inbox, but to get the user to use some other service, such as calling a phone number, moving the phishing outside of the email inbox
- Other: the email is clearly a phishing attempt but does not fall into any of the defined categories
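A minimal sketch of how the "Phishing: YES/NO" and "Justification: ..." response format can be parsed downstream; the regular expressions are our own assumption, and real model output may deviate and require fallback handling:

```python
import re

def parse_response(text):
    """Parse the 'Phishing: YES/NO' and 'Justification: ...' fields from a
    model response. Returns None for the verdict if the format is violated."""
    phishing = re.search(r"Phishing:\s*(YES|NO)", text, re.IGNORECASE)
    justification = re.search(r"Justification:\s*(.+)", text, re.DOTALL)
    return {
        "phishing": phishing.group(1).upper() if phishing else None,
        "justification": justification.group(1).strip() if justification else "",
    }

result = parse_response("Phishing: YES\nJustification: Urgent link to verify the account.")
```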
Experiment 2 enhances the prompt by introducing intent categories in Step 1. This addition provides the LLM with more contextual information but does not constitute an additional step in the overall process. These intent categories are also included in the few-shot learning examples to guide the model more effectively.

Experiment 3 incorporates all three steps, building on the initial assessment of the LLM's capability by adding the categorization task and a justification requirement; its prompt appends a third instruction: "3. Give a short justification for your decision, and explain the result." This expanded approach aims to evaluate the LLM's ability to perform a more comprehensive analysis, extending beyond simple binary classification, by focusing on its understanding of various phishing tactics and its reasoning capabilities.
# D. Model selection
The experiments utilized four models: GPT-4o-mini, Claude 3.5 Haiku, Phi-4 (14B), and Qwen (7B). The objective was not to determine the most capable model, but rather to explore the effectiveness of modern large language models in phishing detection and categorization. Qwen (7B), the smallest and oldest model (over a year old), was included to evaluate how a smaller, less recent model performs in comparison to newer, larger, and more cost-effective enterprise models. Claude 3.5 Haiku and GPT-4o-mini were accessed via commercial APIs, while Qwen (7B) and Phi-4 (14B) were run locally on a high-end consumer desktop.
# V. RESULTS
The experiments progressed through three stages: (1) basic malicious email identification (Exp1); (2) incorporation of phishing technique categorization (Exp2); and (3) combining both tasks with an added justification requirement (Exp3). For each stage, we used both zero-shot and few-shot learning approaches, suffixed by ‘-Zero’ and ‘-Few’ respectively in Table II, which summarizes the results.
TABLE II ACCURACY ACROSS EXPERIMENTS. CATEGORY ACCURACY IS SHOWN AS Detection / Category WHERE APPLICABLE.
Across all experiments, GPT-4o-mini, Claude-3.5-haiku, and Phi-4 (14B) consistently demonstrated high accuracy, highlighting their ability to understand and classify malicious emails even with limited example data. Qwen (7B) performed considerably worse than the other models; in some tasks it failed to produce output in the correct format, resulting in zero percent accuracy. The inclusion of categorization focuses on what the attacker intends the targeted user to do, which could give security professionals a head start in the triage process during a real attack. The requirement for justification provides some insight into the model's reasoning process and transparency. The complete suite of six experiments, when executed in a single batch, requires approximately 70 minutes to complete; the overall execution time is primarily constrained by the locally hosted models. In contrast, experiments conducted exclusively via API access typically take between 1 and 3 minutes per experiment, incurring a cost of approximately \$0.01 to \$0.03 USD for the GPT-4o-mini and Claude Haiku models.
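A hedged sketch of the scoring convention implied above, where malformed outputs count as incorrect (mirroring the zero-percent accuracy reported for format failures); this is illustrative, not the paper's evaluation code:

```python
def detection_accuracy(predictions, labels):
    """Accuracy where malformed outputs (None) count as incorrect, so a
    model that never produces the required format scores zero."""
    correct = sum(1 for p, y in zip(predictions, labels)
                  if p is not None and p == y)
    return correct / len(labels)

preds = ["YES", "NO", None, "YES"]   # None = output not in the required format
gold  = ["YES", "NO", "NO", "NO"]
acc = detection_accuracy(preds, gold)  # -> 0.5
```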
All experiments also required the models to generate justifications as part of the output. Consistent with the results from Steps 1–3, Qwen exhibited inadequate performance on this task. Additionally, Phi-4 and Claude encountered formatting issues that led to empty justifications in up to one-third of the emails. These shortcomings indicate clear opportunities for improvement in the justification generation process. The justifications provided in the correct format were of high quality and offered good logic for determining whether an email appeared legitimate or suspicious. Example justifications for both a legitimate and a phishing email via a link are included below:
# Legitimate email:
The email appears to be a legitimate inquiry about linguistic analyses and does not contain any malicious intent, links, or attachments that would indicate phishing. It is a straightforward request for information from a researcher.
# Phishing via Link:
The email is attempting to get the recipient to click on a link to verify their account, which is a common tactic used in phishing attempts. The urgency created by the threat of account suspension within 24 hours further indicates malicious intent.

# Abstract

Phishing attacks remain a significant threat to modern cybersecurity, as they successfully deceive both humans and the defense mechanisms intended to protect them. Traditional detection systems primarily focus on email metadata that users cannot see in their inboxes. Additionally, these systems struggle with phishing emails, which experienced users can often identify empirically by the text alone. This paper investigates the practical potential of Large Language Models (LLMs) to detect these emails by focusing on their intent. In addition to the binary classification of phishing emails, the paper introduces an intent-type taxonomy, which is operationalized by the LLMs to classify emails into distinct categories and, therefore, generate actionable threat information. To facilitate our work, we have curated publicly available datasets into a custom dataset containing a mix of legitimate and phishing emails. Our results demonstrate that existing LLMs are capable of detecting and categorizing phishing emails, underscoring their potential in this domain.
# 1 Introduction
Transformer architectures have emerged as the backbone of modern deep learning, powering state-of-the-art advancements across diverse fields such as natural language processing [32], computer vision [2], reinforcement learning [26] and beyond [6, 21]. However, their self-attention mechanism, while effective, suffers from quadratic computational and memory costs with respect to the sequence length, posing scalability challenges [30].
Efforts towards addressing these challenges have largely focused on improving the efficiency of attention mechanisms. One line of methods seeks to improve the design of the attention block. Grouped Query Attention (GQA) [1], for example, reduces computational overhead by clustering keys and values into coarse groups, which reduces the number of processed KV pairs. Nevertheless, GQA assumes static group sizes and allocates resources uniformly, disregarding variations in token importance. Some works have been devoted to optimizing the memory footprint of the widely adopted KV cache [33]. Token-level approaches, such as DynamicKV [40], introduce flexible KV cache allocation by prioritizing high-value tokens. However, these methods often involve rigid resource allocation strategies that neglect to fully exploit the significance of low-priority tokens.

Figure 1: Change in sequence perplexity across the token sequence. (a) Change for each token. (b) Change distribution.
Another promising line of work [22, 28] adopts MoEs to dynamically route tokens to a subset of experts, enabling efficient resource utilization. While these approaches achieve computational efficiency, they frequently suffer from imbalanced expert utilization. Moreover, their coarse-grained routing overlooks token-level variability, highlighting the need for finer-grained adaptivity in token-level resource allocation. In many cases, tokens deemed less important are outright discarded or receive minimal processing, which can lead to degraded performance for certain tasks.
Our work is motivated by the experimental findings presented in Figure 1, which reveal that token importance exhibits dynamic behavior and spans a wide spectrum. This observation naturally inspires the high-level idea of tailoring experts' token selection based on their importance. While this approach holds significant promise for efficiently leveraging the potential of token prioritization, it faces several critical challenges that must be addressed. First, current token-choice routing (TCR) approaches can result in unbalanced expert utilization, which is particularly challenging to tune for heterogeneous expert capacities, as tokens may always prefer high-capacity experts, posing the risk of collapsed routing mechanisms. Second, existing expert-choice routing (ECR) methods shift imbalanced expert utilization to imbalanced token utilization, where some tokens may be assigned to multiple experts while others are ignored. Third, existing ECRs also introduce training/inference disparities in CLMs: during training or the prefill phase, routing decisions are made based on the complete sequence, whereas decoding-phase decisions are made based only on past context.
To surmount these obstacles, we introduce mixSGA. Unlike prior work that discards less significant tokens, our method retains all tokens while dynamically allocating computation and memory resources proportionally to their importance. For the experts, we propose a weight-sharing mechanism across grouped attentions, allowing the model to remain lightweight while dynamically scaling based on token significance. To overcome the routing disparity between prefill and decode stages, we propose a layer-wise auxiliary loss that encourages routing consistency.
Our contributions are summarized as follows:
• The mixSGA Framework: mixSGA integrates dynamic token-wise routing with KV attention head grouping, enabling adaptive computational/memory allocation without discarding tokens. It also uses weight-sharing for parameter efficiency.
• Autoregressive Expert-Choice Routing: We propose a novel past-context routing mechanism with an auxiliary loss to ensure prefill-decode consistency in CLMs. It also enables flexible tuning of individual expert capacities.
• Broad Empirical Validation: We demonstrate superior efficiency and performance over static and dynamic baselines across OPT, Llama3 and Gemma2 models on diverse instruction-following and continued pretraining benchmarks.
# 2 Related Work
# 2.1 KV Cache Management
KV cache optimization enhances the memory efficiency of CLMs [33]. Recent methods such as PyramidKV [5], DynamicKV [40], H2O [39] and NACL [7] aim to reduce memory footprint, typically by prioritizing high-utility tokens based on heuristics, random, or learned importance, and evict those deemed less critical to stay within a constrained memory budget. Furthermore, methods like SnapKV [24] and FastGen [14] focus on pattern- and importance-based token selection to optimize KV cache efficiency during inference.
Despite these improvements, such methods often rely on predefined grouping mechanisms, which may not fully capture token-level variability. As a result, they can overlook the nuanced importance of individual tokens, limiting their ability to optimize resource utilization effectively. In addition, they inevitably remove tokens from the attention context, which may degrade performance on fine-grained contextual understanding. By contrast, mixSGA preserves all tokens: instead of hard eviction, it adaptively allocates memory and compute resources in proportion to token importance, assigning smaller KV cache sizes to less critical tokens without evicting them. This design ensures that even less critical tokens retain their contextual influence, offering a more flexible and context-preserving alternative to hard cache eviction.
# 2.2 MoE Routing Strategies and Challenges
MoEs provide a scalable approach to increase model capacity without proportionally increasing computational costs [22, 28]. In token-choice routing (TCR) MoE models such as GShard [22] and Switch Transformer [12], each token independently selects an expert or a subset of experts for resource-efficient computation. Because TCR allows each token to independently select an expert, it may suffer from imbalanced expert utilization, inefficient resource allocation, and potentially unstable training dynamics. This is especially problematic when experts have heterogeneous capacities, as tokens may favor experts with higher capacities, making it challenging to balance expert loads. Expert-choice routing (ECR) [41] enables experts to select tokens for processing while explicitly defining their capacities, improving load balancing and resource utilization. Despite its advantages over TCR, ECR presents two significant challenges in the context of CLMs: (1) it requires access to the entire input sequence to make routing decisions, which is incompatible with CLMs that rely solely on past tokens to predict the next token; and (2) it shifts the issue of imbalanced expert utilization to imbalanced token utilization, where some tokens may remain unprocessed by any expert, while others may be redundantly processed by multiple experts.
Our work differentiates itself from existing TCR and ECR methods by introducing a routing mechanism specifically designed for CLMs. This mechanism evaluates token significance based on partial sequence context, enables dynamic expert selection, and ensures prefill-decode routing consistency in decoder-only architectures. Additionally, our method accommodates experts with heterogeneous capacity, delivering fine-grained resource allocation and improved efficiency on computation and memory costs.
# 2.3 Grouped Attention Methods
Grouped Query Attention (GQA) [1] reduces computational and memory costs by merging keys and values into larger groups, reducing the number of KV pairs processed during attention. This can lead to inefficiencies when token importance varies significantly, as structured merging fails to prioritize tokens critical to the task. Cross-layer Attention (CLA) [4] merges key and value projectors across adjacent layers. While these approaches enhance structural efficiency, they rely on static group sizes for attention heads during inference, assuming uniform token importance and lacking fine-grained, token-level adaptability. Additionally, these methods do not support heterogeneous expert configurations with varying group sizes, limiting their adaptability.
In contrast, mixSGA integrates the strengths of grouped attention and token-level adaptivity by dynamically routing each token to weight-shared experts with heterogeneous KV configurations, based on learned token importance. Unlike prior methods, mixSGA retains all tokens, ensuring no loss of contextual information, while adaptively allocating computational and memory resources at both group and token levels.
# 3 The mixSGA Method
mixSGA, mixture of weight-shared grouped attention experts, combines dynamic token-wise expert assignment with token-level KV optimization to achieve efficient attention computation and minimize KV memory. This section elaborates on the key components, including the routing mechanism for expert selection, the mixture of weight-shared KV grouping experts, and the auxiliary loss designed to improve prefill/decode consistency.

Figure: Overview of mixSGA. Weight-shared experts with different KV group sizes process the tokens assigned to them; prefill routing is context-dependent (progressive top-k over the sequence), decode routing is context-independent (token-wise argmax), and an auxiliary loss aligns the two paths.
# 3.1 Prefill and Training Phase Routing
Token-to-expert mapping score function Given an input sequence $\boldsymbol{X} \in \mathbb{R}^{L \times D}$, where $L$ is the sequence length and $D$ is the embedding dimension, we define the token-to-expert mapping scoring function for all tokens as a trainable linear layer $\mathsf{S}: \mathbb{R}^{L \times D} \to \mathbb{R}^{L \times E}$ with weight $\phi \in \mathbb{R}^{D \times E}$ and bias $\beta \in \mathbb{R}^{E}$, where $E$ is the number of experts:
$$
\mathsf { S } ( \mathbf { x } ) = \sigma ( \mathbf { x } \phi + \beta ) ,
$$
and $\sigma ( \cdot )$ is the sigmoid function. The sigmoid activation ensures bounded scores within $[ 0 , 1 ]$ , avoiding additional normalization during training.
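A minimal NumPy sketch of the scoring function above; the shapes follow the definitions in the text, and the random inputs are illustrative only:

```python
import numpy as np

def score_tokens(x, phi, beta):
    """S(x) = sigmoid(x @ phi + beta).
    Shapes: x (L, D), phi (D, E), beta (E,) -> scores (L, E)."""
    return 1.0 / (1.0 + np.exp(-(x @ phi + beta)))

rng = np.random.default_rng(0)
L, D, E = 6, 8, 3
scores = score_tokens(rng.normal(size=(L, D)),
                      rng.normal(size=(D, E)),
                      np.zeros(E))
# the sigmoid keeps every score in (0, 1), so no further normalization is needed
```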
MoEs with Heterogeneous Capacities To facilitate downstream KV cache optimization, our method employs a routing mechanism that dynamically assigns tokens to experts based on predefined capacity ratios. These ratios regulate token distribution among experts, aligning with memory and computational constraints. Assume that we have $E$ experts with predefined capacity ratios $\pmb{\rho} = \{\rho_1, \rho_2, \dots, \rho_E\}$, where $\rho_e$ represents the fraction of tokens the $e^{\mathrm{th}}$ expert processes. The capacity ratios lie in the range $[0, 1]$ and are normalized such that they sum to 1, i.e., $\textstyle\sum_{e=1}^{E} \rho_e = 1$. During training, our token-to-expert routing thus takes the scoring function output $\mathsf{S}(\mathbf{x})$ and greedily assigns tokens to experts progressively. For the $e^{\mathrm{th}}$ expert, we assign tokens based on the top-$\lceil \rho_e L \rceil$ scores, and route the remaining tokens to the next $(e+1)^{\mathrm{th}}$ expert. Formally, it employs the following sparse masking function $\mathbf{m}_e: \mathbb{R}^{L \times E} \to \{0, 1\}^{L \times E}$, where:
$$
\begin{array} { r } { \mathbf { m } _ { e } ( \mathbf { x } ) = \mathbf { 1 } \Big [ \mathrm { t o p } _ { \lceil \rho _ { e } L \rceil } \Big ( \mathsf { S } ( \mathbf { x } ) \prod _ { i = 1 } ^ { e - 1 } ( 1 - \mathbf { m } _ { i } ( \mathbf { x } ) ) \Big ) \Big ] , } \end{array}
$$
and 1 denotes the element-wise indicator function, producing 1 for the top- $\lceil \rho _ { e } L \rceil$ scores, and 0 otherwise. Note ${ \bf { m } } _ { e } ( { \bf { x } } )$ depends on the masks of preceding experts, ensuring that tokens previously assigned to other experts are skipped, thereby guaranteeing an exclusive mapping of each token to a single expert.
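The progressive masking above can be sketched in NumPy as follows. This is our reading of the described mechanism, not the authors' implementation; routing any leftover tokens to the last expert is a defensive assumption for the illustration:

```python
import numpy as np

def progressive_assign(scores, ratios):
    """Greedy expert-choice routing: expert e takes the top ceil(rho_e * L)
    scoring tokens among those not yet assigned; remaining tokens fall
    through to later experts. Returns an (L,) array of expert indices."""
    L, E = scores.shape
    assigned = np.full(L, -1)
    for e in range(E):
        free = np.flatnonzero(assigned == -1)
        if free.size == 0:
            break
        k = min(int(np.ceil(ratios[e] * L)), free.size)
        top = free[np.argsort(scores[free, e])[::-1][:k]]
        assigned[top] = e
    assigned[assigned == -1] = E - 1  # defensive: leftovers go to the last expert
    return assigned

scores = np.array([[0.9, 0.1, 0.1],
                   [0.2, 0.8, 0.1],
                   [0.8, 0.2, 0.1],
                   [0.1, 0.9, 0.1],
                   [0.1, 0.1, 0.9],
                   [0.3, 0.2, 0.1]])
idx = progressive_assign(scores, [0.3, 0.3, 0.4])  # -> [0 1 0 1 2 2]
```

Each token receives exactly one expert, matching the exclusive mapping guaranteed by the masking function.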
# 3.2 Decode-Phase Routing
The preceding paragraphs outline the training/prefill phase of our token-wise ECR mechanism, which operates on a complete sequence of tokens as input. This routing approach cannot be directly applied to the decoding phase of CLMs, where tokens are generated iteratively; a different routing strategy is therefore needed for the decode phase.
A key advantage of eq. (2) is that it ensures exclusive expert mapping for each token, so $\begin{array}{r}{\sum_{e=1}^{E} \mathbf{m}_e(\mathbf{x})}\end{array}$ is a one-hot vector for each token. If we encourage both phases to produce the same expert assignments, we can simply use $\arg\max \mathsf{S}(\mathbf{x})$ to determine the assignment during decoding: expert assignments for the next token are determined by taking the $\arg\max$ of the scoring function. This approach eliminates the need for a top-$k$ operation over the entire input sequence, which is infeasible during decoding. To summarize, the prefill and decode phases use the following routing functions:
$$
\begin{array} { r l } & { \mathbf { T } _ { \mathrm { p r e f i l l } } ( \mathbf { x } ) = \sum _ { e = 1 } ^ { E } \mathbf { m } _ { e } ( \mathbf { x } ) , } \\ & { \mathbf { T } _ { \mathrm { d e c o d e } } ( \mathbf { x } ) = \mathbf { 1 } [ \arg \operatorname* { m a x } ( \mathsf { S } ( \mathbf { x } ) ) = e ] . } \end{array}
$$
# 3.3 Prefill-Decode Consistency Loss
To align $\arg\max \mathsf{S}(\mathbf{x})$ with the expert assignment $\arg\max \mathbf{T}_{\mathrm{prefill}}(\mathbf{x})$, we introduce the following consistency loss, where $\arg\max \mathbf{T}_{\mathrm{prefill}}(\mathbf{x})$ extracts the expert index assigned to each token:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { a u x } } ( \mathbf { x } ) = \mathcal { L } ^ { \mathrm { s c e } } \bigl ( \mathsf { S } ( \mathbf { x } ) , \arg \operatorname* { m a x } \mathbf { T } _ { \mathrm { p r e f i l l } } ( \mathbf { x } ) \bigr ) . } \end{array}
$$
The total training loss for the model combines the primary language-modeling loss ${ \mathcal { L } } _ { \mathrm { m o d e l } }$ with the auxiliary loss $\mathcal { L } _ { \mathrm { a u x } } ( \mathbf { x } ^ { ( l ) } )$ applied across all layers $l \in \{ 1 , \ldots , L \}$ , weighted by $\alpha$ :
$$
\begin{array} { r } { \mathcal { L } = \mathcal { L } _ { \mathrm { m o d e l } } + \frac { \alpha } { L } \sum _ { l = 1 } ^ { L } \mathcal { L } _ { \mathrm { a u x } } \big ( \mathbf { x } ^ { ( l ) } \big ) . } \end{array}
$$
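The consistency loss above can be sketched as a softmax cross-entropy between the routing scores and the prefill-assigned expert indices. Treating $\mathcal{L}^{\mathrm{sce}}$ as standard softmax cross-entropy is our assumption; the paper's exact loss may differ:

```python
import numpy as np

def consistency_loss(scores, prefill_idx):
    """Softmax cross-entropy between routing scores S(x) and the expert
    indices chosen by prefill routing (assumed form of L^sce)."""
    z = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(prefill_idx)), prefill_idx].mean()

scores = np.array([[2.0, 0.1, 0.1],
                   [0.1, 2.0, 0.1]])
agree = consistency_loss(scores, np.array([0, 1]))     # argmax agrees: small loss
disagree = consistency_loss(scores, np.array([1, 0]))  # argmax disagrees: larger loss
```

Minimizing this loss pushes the decode-phase argmax toward the prefill-phase assignment.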
# 3.4 Mixture of Weight-Shared GQAs
KV projection Building on the token-wise expert assignment described earlier, we extend the attention mechanism by introducing a mixture of weight-shared GQAs. Each expert processes its assigned tokens independently and maintains KV caches tailored to its group configuration, achieving an efficient trade-off between computation and memory. Assuming a pretrained attention layer with key and value weights $(\mathbf{w}^{\mathrm{k}}, \mathbf{w}^{\mathrm{v}}) \in \mathbb{R}^{D \times D}$ and biases $(\mathbf{b}^{\mathrm{k}}, \mathbf{b}^{\mathrm{v}}) \in \mathbb{R}^{D}$, where $D$ is the embedding dimension, we first define the following key and value projection $P^{j}: \mathbb{R}^{L \times D} \to \mathbb{R}^{H \times L \times (D/H)}$ for the $h^{\mathrm{th}}$ head, where $j \in \{\mathrm{k}, \mathrm{v}\}$, $h \in \{1, \dots, H\}$, and:
$$
P ^ { j } ( \mathbf { x } ) _ { h } = \big ( \mathbf { w } ^ { j } \mathbf { x } ^ { \top } + \mathbf { b } ^ { j } \big ) _ { \left[ \frac { D ( h - 1 ) } { H } + 1 : \frac { D h } { H } \right] } ,
$$
Here, the subscript $\mathbf { z } _ { [ a : b ] }$ denotes the slice operation which selects elements from the first dimension of $\mathbf { z }$ ranging from $a$ to $b$ .
KV grouping Inspired by GQA [1], for each expert $f_{e}^{j}$, we design the following mechanism to reduce the number of projected KV heads from $H$ to $H/2^{e}$ groups of size $2^{e}$ by taking the average of the corresponding grouped heads. Specifically, for each grouping $g \in G_e$ of expert $e$, we have $f_{e,g}^{j}: \mathbb{R}^{H \times L \times (D/H)} \to \mathbb{R}^{H/2^{e} \times L \times (D/H)}$:
$$
\begin{array} { r } { f _ { e , g } ^ { j } ( \mathbf { x } ) = 1 / 2 ^ { e } \sum _ { h \in g } p ^ { j } ( \mathbf { x } ) _ { h } , } \end{array}
$$
where $G_e$ groups a range of heads by size $2^{e}$. For example, if $H = 4$ and $E = 3$, we have $G_1 = \{\{1\}, \{2\}, \{3\}, \{4\}\}$, $G_2 = \{\{1, 2\}, \{3, 4\}\}$, and $G_3 = \{\{1, 2, 3, 4\}\}$. Notably, to ensure parameter efficiency, we share the same key and value weights across all experts. While for mathematical clarity we define the mean operation over the projected heads, one can equivalently aggregate the KV projection weights before applying the projection to achieve the same effect.
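The head-averaging operation $f_{e,g}^{j}$ reduces to a reshape-and-mean in NumPy. The sketch below assumes $H$ is divisible by the group size, matching the $H = 4$ example above:

```python
import numpy as np

def group_kv_heads(kv, group_size):
    """Average adjacent KV heads in groups of `group_size`, reducing
    H heads to H / group_size. kv has shape (H, L, D/H)."""
    H, L, d = kv.shape
    assert H % group_size == 0
    return kv.reshape(H // group_size, group_size, L, d).mean(axis=1)

kv = np.arange(4 * 2 * 3, dtype=float).reshape(4, 2, 3)  # H = 4 heads
g2 = group_kv_heads(kv, 2)  # groups {1,2}, {3,4} -> shape (2, 2, 3)
g4 = group_kv_heads(kv, 4)  # single group {1,2,3,4} -> shape (1, 2, 3)
```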
Due to this grouping, the total KV cache size is thus adjusted based on which expert processes the token, with the cache size of the $e ^ { \mathrm { t h } }$ expert being $H / 2 ^ { e }$ of the original size.
Attention computation Before computing the attention, for expert $e$ we match the KV head count $H/2^{e}$ with the query head count $H$ by repeating the KV heads $2^{e}$ times using $h_{e,g}^{j}: \mathbb{R}^{H/2^{e} \times L \times (D/H)} \to \mathbb{R}^{H \times L \times (D/H)}$:
$$
h _ { e , g } ^ { j } ( \mathbf { x } ) = f _ { e , g } ^ { j } ( \mathbf { x } ) \otimes \mathbf { 1 } _ { 2 ^ { e } } .
$$
where $\otimes$ denotes the outer product, and $\mathbf { 1 } _ { 2 ^ { e } }$ is a vector of ones of size $2 ^ { e }$ . Finally, the overall result computed by the MoE is:
$$
\begin{array} { r } { \begin{array} { r } { h ^ { j } ( \mathbf { x } ) = \sum _ { e = 1 } ^ { E } \mathbf { m } _ { e } ( \mathbf { x } ) \odot h _ { e , g } ^ { j } ( \mathbf { x } ) . } \end{array} } \end{array}
$$
It is noteworthy that since $\mathbf{m}_e(\mathbf{x})$ is sparse and has token-wise exclusive expert assignment, most of the $h_{e,g}^{j}(\mathbf{x})$ are zeroed out and skipped. In practice, this is carried out efficiently with scatter and gather tensor operations.
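The head expansion $h_{e,g}^{j}$ amounts to repeating each grouped KV head along the head axis, analogous to the `repeat_kv` step in GQA implementations; a sketch under that reading:

```python
import numpy as np

def expand_kv_heads(kv_grouped, repeat):
    """Repeat each grouped KV head `repeat` times along the head axis so
    the KV head count matches the query head count again."""
    return np.repeat(kv_grouped, repeat, axis=0)

kv = np.stack([np.zeros((5, 16)), np.ones((5, 16))])  # 2 grouped heads, L=5, D/H=16
full = expand_kv_heads(kv, 2)  # back to 4 heads for the attention step
```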
The attention computation is then performed following the standard scaled dot-product attention mechanism, where $q ( \mathbf { x } )$ is the original query projection:
$$
a ( \mathbf { x } ) = \mathsf { s o f t m a x } \Big ( q ( \mathbf { x } ) h ^ { \mathrm { k } } ( \mathbf { x } ) ^ { \top } / \sqrt { D } \Big ) h ^ { \mathrm { v } } ( \mathbf { x } ) .
$$
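The formula above is standard scaled dot-product attention; a single-head NumPy sketch for reference, without the causal mask a CLM would apply:

```python
import numpy as np

def attention(q, k, v, D):
    """softmax(q @ k^T / sqrt(D)) @ v for a single head.
    q, k, v have shape (L, d); no causal masking in this sketch."""
    logits = q @ k.T / np.sqrt(D)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(1)
out = attention(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)),
                rng.normal(size=(5, 8)), D=8)  # shape (5, 8)
```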
Expert Allocation for Memory Efficiency mixSGA computes varying KV sizes per token thanks to its dynamic routing mechanism, which assigns tokens to experts of different group sizes. For $E = 3$ experts, the group sizes are $\{1, 2, 4\}$ respectively, and the head counts are thus $H, H/2, H/4$. This means that, given an allocation ratio of $a:b:c$, the tokens on average require $(a + b/2 + c/4)/(a + b + c)$ of the original KV size. Along with the KV cache, we also store a single index value for each token to track expert assignment.
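The average KV fraction formula above can be checked numerically; for instance, the 3:1:6 allocation used in the experiments yields a 50% KV budget (group sizes $\{1, 2, 4\}$ as in the text):

```python
def avg_kv_fraction(ratios, group_sizes=(1, 2, 4)):
    """Average per-token KV size as a fraction of the full cache: a token
    routed to a group of size s keeps 1/s of the original KV heads."""
    total = sum(ratios)
    return sum(r / s for r, s in zip(ratios, group_sizes)) / total

frac = avg_kv_fraction((3, 1, 6))  # -> 0.5, i.e. 50% of the original KV size
```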
Integration with KV eviction Although mixSGA dynamically allocates per-token KV sizes, it remains fully compatible with KV eviction such as H2O [39] and NACL [7] to further reduce memory usage.
# 4 Experiments
# 4.1 Supervised Fine-tuning
Models and methods We evaluate mixSGA on the following CLMs: OPT-{125m, 355m} [38], Llama3.1-8b, Llama3.2-{1b, 3b} [31], and Gemma2-2b [15], covering various model sizes and architectures. As a default baseline, we implement a GQA-variant of the original models which forms KV head groups of size 2 by initializing the KV projection matrices with the mean of the group. For fair comparisons, mixSGA is configured with expert density ratios which maintain the same active KV head counts, and thus the same KV size, as GQA. It keeps the pretrained weights from the original models, and randomly initializes the newly added routing weights with He initialization [17] and biases with zeros.
Training and evaluation setup We fine-tune the modified models on the Dolly-15k instruction-following dataset [10] with 14,000 training samples, and evaluate their performance on 5 conversational datasets: Dolly (DL, 500 testing samples from Dolly-15k), Self-Instruct (SI) [34], Vicuna (VC) [8], Super-Natural Instructions (SN) [35], and Unnatural Instructions (UI) [18]. In addition to the ROUGE-L (R-L) scores, which measure the longest common sub-sequence between generated and reference answers, we also evaluate all answers to the queries using DeepSeek-V3 [11] to provide feedback scores ranging from 0 to 10. The template to generate feedback is provided in Appendix A. All hyperparameter configurations are provided in Appendix A for reproducibility.
Main Results For supervised fine-tuning tasks, we initiate our approach by conducting a grid search on a smaller model (OPT-355M) to determine the optimal expert density ratios, incrementing by 0.1 while maintaining the total KV size constant at 50% of the original model. Our results show that allocating tokens as 30% to experts with a group size of 1, 10% to size 2, and 60% to size 4 optimizes performance across most metrics. This 3:1:6 ratio consistently outperforms other configurations. As shown in Table 1, mixSGA consistently outperforms GQA across various benchmarks and model sizes. These results demonstrate mixSGA's ability to dynamically allocate resources and improve performance over static GQA baselines.
# 4.2 Continued Pretraining
Models and methods We investigate mixSGA's ability in continued pretraining on an additional corpus. We used a TinyLlama-1.1B model [37], which was pretrained on SlimPajama [29] and StarCoder [23], and adapted its weights to GQA with group size set to 2, CLA [4], and mixSGA. Both CLA and mixSGA are aligned to the same KV cache size as the GQA baseline.
Training and evaluation setup We train the models with each method applied for one epoch of MiniPile [19], which amounts to 1.6 billion tokens. We use a diverse set of benchmarks to evaluate the resulting models: HellaSwag [36], PIQA [3], Winogrande [27], ARC-Easy (ARC-E), ARC-Challenge (ARC-C) [9], and the perplexity on Wikitext-2 [25]. For the first six tasks, higher accuracy $( \% )$ indicates better performance, while lower perplexity on Wikitext-2 reflects stronger language modeling ability. The training and evaluation details are provided in Appendix A.
Table 1: Supervised fine-tuning of a range of models on the Dolly-15k instruction-following dataset [10]. Evaluation includes ROUGE-L (R-L) and DeepSeek-V3 feedback scores (DSv3) on 5 conversational datasets. mixSGA demonstrates consistent improvements over GQA baselines with the same KV budgets. The “Avg. R-L” column shows the average ROUGE-L scores across all datasets.
Table 2: Continued pretraining on TinyLlama-1.1B with MiniPile. (↑: higher is better, ↓: lower is better.)
Main Results In our continued pretraining setting, the key challenge is to recover previously learned capabilities of the model with a fraction of data drawn from a distribution domain similar to the original pretraining data. As shown in Table 2, mixSGA consistently demonstrates competitive or superior accuracy on most benchmarks. It attains 37.00% on HellaSwag and 56.30% on Winogrande, both surpassing GQA (group size = 2) and CLA. Performance on ARC-C (25.17%) also exceeds that of the baselines, highlighting mixSGA's strength in handling more challenging tasks. mixSGA also shows a clear advantage in Wikitext-2 PPL, delivering the lowest value (20.46) among all models. To summarize, these results indicate that mixSGA can enable the model to preserve previously acquired knowledge, as applying it to existing models does not impact their pretrained weights.
mixSGA complements cache eviction better To investigate the compatibility of mixSGA with dynamic KV cache eviction strategies, we conduct a set of controlled experiments by integrating H2O [39] with both GQA and mixSGA on Gemma2-2b. These experiments are designed to evaluate whether the orthogonal benefits of token-level eviction and token-wise KV allocation can be combined effectively. Both GQA and mixSGA are configured to operate under a shared KV budget of 50% of the original size, with H2O applied as a post-processing eviction method to further compress memory. We vary the H2O keep ratio from 80% down to 20% to simulate increasing memory pressure. The results, shown in Table 3, demonstrate that mixSGA consistently outperforms GQA across all compression levels. This validates that mixSGA not only preserves the contextual coherence lost in aggressive token eviction, but also enhances the effectiveness of cache compression when used in conjunction with existing methods like H2O. The results demonstrate that integrating mixSGA with cache eviction policies further enhances its applicability in inference tasks while reducing the KV memory footprint.
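H2O retains a small set of “heavy-hitter” tokens, identified by accumulated attention scores, alongside the most recent tokens; a simplified sketch of the keep-set selection (the even split between heavy hitters and recent tokens is our illustrative choice, not necessarily H2O's exact policy):

```python
import numpy as np

def h2o_keep_indices(acc_attn: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Pick which KV cache positions survive eviction.

    acc_attn: accumulated attention score per cached token position.
    Keeps half the budget as top-scoring heavy hitters and half as
    the most recent tokens (an illustrative 50/50 split).
    """
    n = len(acc_attn)
    budget = max(1, int(n * keep_ratio))
    n_recent = budget // 2
    recent = np.arange(n - n_recent, n)
    # Heavy hitters are chosen among the non-recent positions.
    candidates = np.argsort(acc_attn[: n - n_recent])[::-1]
    heavy = candidates[: budget - n_recent]
    return np.sort(np.concatenate([heavy, recent]))

scores = np.array([0.9, 0.1, 0.8, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.05])
kept = h2o_keep_indices(scores, keep_ratio=0.4)  # 40% KV keep ratio
```

Because eviction only selects positions, it composes directly with a grouped or mixed-group KV layout, which is what the combined mixSGA + H2O experiments exploit.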
Table 3: Integrating H2O with various KV keep ratios on Gemma2-2b. mixSGA consistently outperforms GQA across most tasks and H2O KV keep ratios (KR).
Table 4: Effect of different expert group ratios under the same KV size budget (50%) for Llama3.2-1B. Results are reported for ROUGE-L across multiple benchmarks. (DL: Dolly Evaluation, SI: Self-Instruct, VC: Vicuna, SN: Super-Natural Instructions, UN: Unnatural Instructions, Avg.: Average ROUGE-L across benchmarks)
# 4.3 Ablation Studies
To comprehensively attribute the impact of each component in mixSGA, we perform ablation studies along three key axes: expert density ratios, expert counts, and the auxiliary loss with the learned routing mechanism. Experiments in Tables 4 and 5 are conducted on Llama3.2-1B and those in Table 6 on Gemma2-2B, following the same setup as in Section 4.1. We provide detailed analyses of the results below.
Varying the expert ratios Table 4 investigates the effect of varying density ratios among experts while keeping a fixed KV size budget of 50%. We systematically increase the ratio assigned to the 2nd expert in a group of size 2, testing configurations from 1:1:2 to 1:9:2. Our results reveal that evaluation metrics improve as the 2nd expert’s ratio decreases, indicating a preference for allocating more tokens to the 1st and 3rd experts. This suggests the model prioritizes assigning important tokens to the 1st expert, which retains the original model’s KV projection weights, while routing less significant tokens to the smallest (3rd) expert.
Varying the expert counts In Table 5, we investigate the influence of employing 2-3 experts while maintaining a fixed total KV budget of 50%. Specifically, we compare configurations with 3:1:6, 3:4:0, 1:1:2, and 1:2:0 ratios. Here, a value of 0 for the 3rd expert indicates its exclusion from the model. Remarkably, we observe that introducing a 3rd expert significantly enhances performance, achieving an average ROUGE-L score improvement of up to 3.12 across all benchmarks. Given the variable information content of individual tokens, this finding highlights the critical role of the 3rd expert in capturing less crucial tokens within the input sequence, allowing the other two experts to focus on processing more significant ones.
Learned Routing To assess the auxiliary loss and learned routing mechanism, we conduct experiments on Gemma2-2B with a 3:1:6 expert ratio, following Section 4.1. As shown in Table 6, removing the auxiliary loss leads to inconsistent routing between prefill and decoding, resulting in near-random expert assignments (0.3458:0.3306:0.3236 for the 3 experts on Dolly), as the model never learns to route according to expert density ratios. This causes a severe average ROUGE-L drop (21.20 to 7.35). We also find that replacing the learned router with a router that randomly assigns experts per the 3:1:6 ratio degrades performance.
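One way to see why such an auxiliary loss matters: it must simultaneously push router outputs toward one-hot decisions and match the average routing distribution to the target expert densities. A numpy sketch of a loss with these two terms (our construction for illustration, not the paper's exact formulation):

```python
import numpy as np

def routing_aux_loss(probs: np.ndarray, target_ratios: np.ndarray) -> float:
    """probs: (n_tokens, n_experts) router softmax outputs.
    target_ratios: desired fraction of tokens per expert (sums to 1).

    The entropy term rewards near-one-hot decisions; the KL-style term
    matches the average routing load to the target densities.
    """
    eps = 1e-9
    entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    load = probs.mean(axis=0)
    density_kl = (target_ratios * np.log((target_ratios + eps) / (load + eps))).sum()
    return float(entropy + density_kl)

np.random.seed(0)
# A 3:1:6 target density, as in the Gemma2-2B ablation.
target = np.array([0.3, 0.1, 0.6])
one_hot = np.eye(3)[np.random.choice(3, size=1000, p=target)]
uniform = np.full((1000, 3), 1 / 3)
# Near-one-hot, density-matched routing incurs a much smaller loss
# than undecided uniform routing.
```

A router minimizing such a loss is driven toward confident, density-respecting assignments, which keeps prefill and decoding routing consistent.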
Varying KV Budgets To evaluate the influence of varying KV budgets on language modeling ability, we conducted comparative experiments involving mixSGA, GQA, and CLA across different KV budgets using the TinyLlama continued pretraining task outlined in Section 4.2. For mixSGA, the configurations were set as follows: 0:0:1 for a 25% KV budget, 1:1:8 targeting 35%, 3:1:6 for 50%, and 1:1:0 for 75%. CLA was configured to align with these KV sizes.
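If we assume the three experts use KV group sizes of 1, 2, and 4 (so a token share s routed to group size g contributes s/g of the dense KV cache), the four sweep configurations reproduce the stated budgets exactly; a sketch (the group sizes are our assumption, not stated in this excerpt):

```python
from fractions import Fraction

def kv_budget(ratios, group_sizes=(1, 2, 4)):
    """KV cache size relative to the dense model, given token-share
    ratios per expert and each expert's KV group size (assumed)."""
    total = sum(ratios)
    return sum(Fraction(r, total) / g for r, g in zip(ratios, group_sizes))

assert kv_budget((0, 0, 1)) == Fraction(1, 4)   # 25% budget
assert kv_budget((1, 1, 8)) == Fraction(7, 20)  # 35%
assert kv_budget((3, 1, 6)) == Fraction(1, 2)   # 50%
assert kv_budget((1, 1, 0)) == Fraction(3, 4)   # 75%
```

This makes the ratio notation operational: the budget is a share-weighted harmonic combination of group sizes, so shifting tokens toward larger groups trades accuracy for memory.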
Table 5: Effect of redistributing KV cache across tokens under a fixed KV size (50% of the original model) for Llama3.2-1B. Results are reported for ROUGE-L following the style in Table 4.
Table 6: Ablation study on the effect of auxiliary loss and learned routing for Gemma2-2B with 3:1:6 expert ratios under a 50% KV budget. Results report ROUGE-L scores across benchmarks.
Given that the TinyLlama-1.1B attention module comprises only 4 heads, GQA could only employ a group size of 2 to achieve a 50% KV budget.
As illustrated in Figure 3, mixSGA consistently achieves superior performance, manifesting in lower perplexity across most KV budgets compared to the baselines. Notably, CLA experiences a pronounced increase in perplexity as the KV budget decreases, particularly below $5 0 \%$ , where its performance deteriorates significantly. This highlights the challenges faced by static approaches in maintaining accuracy under constrained KV budgets. Conversely, mixSGA exhibits enhanced robustness, with lower perplexity levels across various budgets, suggesting that its dynamic token routing mechanism enables more effective resource allocation. This adaptability underscores its capability to deliver improved language modeling performance, even under limited KV budgets. | Transformer models face scalability challenges in causal language modeling
(CLM) due to inefficient memory allocation for growing key-value (KV) caches,
which strains compute and storage resources. Existing methods like Grouped
Query Attention (GQA) and token-level KV optimization improve efficiency but
rely on rigid resource allocation, often discarding "low-priority" tokens or
statically grouping them, failing to address the dynamic spectrum of token
importance. We propose mixSGA, a novel mixture-of-expert (MoE) approach that
dynamically optimizes token-wise computation and memory allocation. Unlike
prior approaches, mixSGA retains all tokens while adaptively routing them to
specialized experts with varying KV group sizes, balancing granularity and
efficiency. Our key novelties include: (1) a token-wise expert-choice routing
mechanism guided by learned importance scores, enabling proportional resource
allocation without token discard; (2) weight-sharing across grouped attention
projections to minimize parameter overhead; and (3) an auxiliary loss to ensure
one-hot routing decisions for training-inference consistency in CLMs. Extensive
evaluations across Llama3, TinyLlama, OPT, and Gemma2 model families show
mixSGA's superiority over static baselines. On instruction-following and
continued pretraining tasks, mixSGA achieves higher ROUGE-L and lower
perplexity under the same KV budgets. | [
"cs.CL",
"cs.LG"
] |
# 1 INTRODUCTION
Retrieval Augmented Generation (RAG) has emerged as a key technique for enhancing LLM performance in question answering (QA) by incorporating external knowledge [21, 34]. The LLM prompt is enriched with retrieved information to mitigate issues related to unknown or sparse knowledge within the model itself [24].
On the live challenge day of the LiveRAG Challenge, participants are provided with 500 DataMorgana-generated [15] questions. The generation of answers with Falcon-3-10B [4] and their submission is limited to a two-hour time slot. The organizers of the LiveRAG Challenge also provide access to the sparse search engine OpenSearch [25] and the dense vector database Pinecone [22], both populated with data from Fineweb-10BT [16]. Subsequently, the State-of-the-Art (SotA) LLM, Claude-3.5-Sonnet, evaluates the submitted answers according to correctness [23] and faithfulness [13] scores. The top-ranked submissions are also manually evaluated to determine the final ranking.
Our approach (Figure 1) combines the cross-evaluation of SotA RAG solutions with research insights on the optimization of distinct RAG components. We benchmarked five different generation strategies, two retrievers, two rerankers, and context-ordering techniques using two evaluation LLMs and various single- and multi-hop questions generated by DataMorgana for our internal benchmark.
# 2 RELATED WORK
In this section, we outline the relevant components of a complete RAG pipeline and associated concepts.
Retriever. It collects and evaluates information to expand LLM queries using sparse (lexical) or dense (semantic) methods [9, 17].
Sparse methods like BM25 [26] are used in OpenSearch [25], while dense methods may use ANN-based systems like Pinecone [22]. Recent work compares retriever performance on QA tasks [18] and investigates context length and document order effects [9].
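As a concrete reference point for the sparse side, BM25 scoring can be sketched in a few lines; a minimal self-contained version using the Lucene-style IDF, whitespace tokenization, and common k1/b defaults (for illustration, not OpenSearch's exact implementation):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each whitespace-tokenized document against the query."""
    toks = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    n = len(docs)
    df = Counter(w for t in toks for w in set(t))  # document frequencies
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for w in query.lower().split():
            if w not in tf:
                continue
            idf = math.log(1 + (n - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

docs = ["sparse lexical retrieval with bm25",
        "dense vector search in pinecone",
        "bm25 ranks documents by term frequency"]
scores = bm25_scores("bm25 retrieval", docs)
# The document matching both query terms scores highest; a document
# sharing no terms scores zero, which is the core limitation dense
# retrievers address.
```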
Reranker. This component rearranges the retrieved documents to enhance contextual relevance [33]. Recent research in reranking addresses faster comparison methods [40] or improved search relevance [7, 27].
Generation. The generation process consists of using the retrieved documents as context to generate a coherent and relevant answer to the question. We select five SotA RAG solutions (Figure 1) that exclusively consider retrieval-augmented prompts [20] or additionally reflect on retrieved passages [29, 37]. Further approaches include the comparison of passages with the parametric knowledge of LLMs [28] or the execution of retrievals and generation in several rounds [32].
Evaluation. Judging generated answers by LLMs [35] became an alternative to simple string matching and inefficient, expensive human evaluators [8]. Influencing factors of LLM-judges include bias through prompt styles [14], answer length [12] or cross-capability performance [36]. Mitigation approaches suggest the expensive use of high-performance LLMs [19], finetuning [38] or confidence estimation [19]. We select two judge-LLMs for our performance metric to evaluate the generated answers.
# 3 RAG COMPONENTS
In this section, we provide a detailed overview of the RAG components considered for the LiveRAG Challenge, followed by a description of the components used in our submission.
# 3.1 DataMorgana
DataMorgana [15] is a novel approach to generate QA-pairs from documents by defining diverse user and question categories to create highly customizable synthetic benchmarks. We define additional user and question categories (Figure 3) to increase the variety of generated questions, thereby further challenging the answer generation capabilities. From a pool of 10,000 generated questions, we created a dataset of 500 randomly selected QA-pairs, evenly distributed across single- and multi-document subsets.
# 3.2 Retriever
Our RAG solution considers the dense retriever implemented with Pinecone. This choice is based on experiments with the QA-pairs we generated. We compared both provided indices, where the dense retriever demonstrated faster response times and a higher retrieval rate of gold documents (@k), both with and without an additional reranker.
# 3.3 Reranker
We investigate the performance of BGE-M3 [7] and Rank-R1 [39]. Both rerankers aim to improve document relevance by reordering retrieved documents according to their relevance to the input query, thereby enhancing the quality of the context provided to the generation model.
We investigate the key performance characteristics of the BGE-M3 reranker, focusing on its latency in combination with different amounts of retrieved documents, its ability to handle diverse queries and document lengths for context understanding, and its ranking accuracy with distinct queries and retriever settings to determine the optimal configurations.
In exploring alternative SotA reranking methods, we also investigate Rank-R1 [39], a novel LLM-based reranker notable for its explicit reasoning capabilities. However, Rank-R1’s application was ultimately deemed impractical due to its processing time, which can take up to 100s for a single query, making it unsuitable for the time constraints of the LiveRAG Challenge.
# 3.4 Generation
We consider recent advances in RAG and cross-evaluate various answer generation approaches with distinct retriever and reranker settings to compare their performance on DataMorgana-generated QA-pairs. We use a non-finetuned Falcon-3-10B LLM for all generation tasks, with a temperature setting of 0.1. We consider the generation prompts from the following RAG approaches:
Simple Prompt. The instruction utilizes direct-input augmentation for answer generation, combining the retrieved documents followed by the query [20].
TrustRAG. The solution proposes a three-step process where the retrieved information is compared against the parametric knowledge to filter out malicious or irrelevant documents, aiming to enhance the security and reliability of answer generation against retrieval-influenced corpus poisoning attacks [37].
InstructRAG. This strategy introduces a framework for explicit context denoising in retrieval-augmented generation through a two-phase methodology [29]. First, rationale generation utilizes the LLM’s instruction-following capabilities to identify relevant information in noisy inputs. Second, explicit denoising learning employs synthesized rationales as demonstrations or training data that enable effective denoising strategies.
AstuteRAG. A framework addressing imperfect retrieval results through a three-phase process: adaptive elicitation of internal model knowledge, source-aware knowledge consolidation, and reliability-based answer finalization [28]. In contrast to conventional RAG implementations, Astute RAG explicitly identifies and resolves knowledge conflicts between the model’s parametric knowledge and the retrieved information and adaptively combines the most reliable elements from each source.
Iterative Demonstration-Based RAG (IterDRAG). An iterative approach based on Demonstration-based RAG (DRAG) [6], where contextualized examples guide the LLM in its long-context usage [32]. IterDRAG extends this approach by incorporating a multi-round question refinement process, specifically targeting multi-hop questions. It decomposes the main question into sub-queries, generates an answer for each sub-question, which can additionally be retrieved independently, and ultimately constructs the final prompt containing the original question, the complete set of retrieved documents, follow-up questions, and intermediate answers.
All RAG generation approaches, with the exception of IterDRAG due to its inherent complexity, utilize the inverted context ordering proposed by Cuconasu et al. [10]. In this ordering, the retrieved or reranked documents are arranged in order of increasing relevance, so that the highest-ranked document is placed immediately before the question.
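A minimal sketch of prompt assembly under this inverted ordering (the prompt wording is ours, for illustration):

```python
def build_prompt(question, docs_with_scores):
    """Arrange documents so relevance increases toward the question:
    the highest-scoring document sits immediately before the query."""
    ranked = sorted(docs_with_scores, key=lambda d: d[1])  # ascending score
    context = "\n\n".join(doc for doc, _ in ranked)
    return f"{context}\n\nQuestion: {question}\nAnswer:"

docs = [("doc about cats", 0.9), ("doc about dogs", 0.2), ("doc about birds", 0.5)]
prompt = build_prompt("What do cats eat?", docs)
# The best-scoring document ends up as the last context block before the question.
```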
# 3.5 Evaluation
Gemma-3-27B [2] serves as the primary evaluation model in our RAG Challenge pipeline. Considering the expensive use of the best-performing models, which are also the best-performing judges [11], we searched for the smallest, yet best-performing LLM on the Chatbot Arena [1]. This approach also aligns with the recommendation against using the same LLM for both answer generation and evaluation [31]. Our selection is therefore based on its competitive Chatbot Arena Elo score of 1341, which ranks it as the smallest among the top open models. While acknowledging the potential drawbacks associated with this specific model, given the complexities of investigating biases and unexpected weaknesses in LLMs-as-Judges [31], we proceeded with Gemma-3-27B. Additionally, we evaluate the candidate systems using Claude-3.5-Haiku [5], as this model family is utilized for the final evaluation in the LiveRAG Challenge.
Furthermore, we consider different proposed evaluation prompting techniques to investigate influencing factors on LLMs-as-Judges evaluations. The first evaluation prompt (simple comparison) compares only the Falcon-3-10B generated answer with DataMorgana’s generated ground-truth answer [35]. The prompt directly instructs the LLM to decide which answer is better or if it’s a tie after providing a brief explanation of each answer’s most important aspects. The second evaluation prompt is derived from CRAG [30], which employs several metrics to define a good answer, such as conciseness, correctness, and support from retrieved documents. The third evaluation approach extends these methods by employing two distinct metrics, a correctness score and a faithfulness score, with a range of 4 and 3 possible values per metric, respectively. This evaluation prompt (Figure 2), hereafter referred to as the LiveRAG prompt, combines the evaluation strategy specified by the LiveRAG Challenge [3] with generated and ground truth information.
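Given the value ranges described above (correctness over 4 values and faithfulness over 3, matching the {-1..2} and {-1..1} sets used in the Table 1 caption), aggregating parsed judge outputs is straightforward; a sketch (parsing of raw LLM output is omitted; the function is our illustration, not the challenge's official scorer):

```python
def aggregate_judgments(judgments):
    """Average correctness {-1, 0, 1, 2} and faithfulness {-1, 0, 1}
    over a list of per-question (correctness, faithfulness) judge outputs."""
    valid_c, valid_f = {-1, 0, 1, 2}, {-1, 0, 1}
    for corr, faith in judgments:
        if corr not in valid_c or faith not in valid_f:
            raise ValueError(f"out-of-range judgment: {(corr, faith)}")
    n = len(judgments)
    correctness = sum(corr for corr, _ in judgments) / n
    faithfulness = sum(faith for _, faith in judgments) / n
    return correctness, faithfulness

# Toy example with four judged answers.
c, f = aggregate_judgments([(2, 1), (2, 1), (1, 0), (2, 1)])
```

Averaging per-question scores in this way yields system-level numbers on the same scale as the challenge's reported correctness and faithfulness.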
# 4 EXPERIMENTS
Now, we discuss the results for the components we considered, starting with insights from the use of DataMorgana and LLMs-asJudges. We then move on to a general investigation of retriever performance and retrieval-influenced reranking, and finally describe the results of the cross-evaluation that led to our LiveRAG solution.
# 4.1 DataMorgana Question Generation
The question generation process with DataMorgana yields an overall solid alignment of questions, documents, and answers. During question generation, we observed that user and question categories are more likely to influence the generated question as desired if the category description emphasizes how it should behave rather than what it should represent. For instance, a categorization of a Spy with a behavioral description like hides his true intentions by omitting information, giving misleading details, or communicating in encrypted form yields more expected results compared to a representational description like secretly gathers information, often for a government or organization, typically about enemies or competitors. Spies use covert methods to obtain intelligence.
# 4.2 Evaluating Prompts and Judges
First, we investigate the impact of three different evaluation prompts. We select a small subset of 100 DataMorgana-generated QA-pairs to cross-evaluate the influence of these prompts on the overall evaluation metric. We use Falcon-3-10B to generate answers by providing: 1. query only, 2. golden document and query, and 3. OpenSearch@k and query, where $k \in \{1, 5, 10, 20, 50\}$. Subsequently, we evaluate all prompts using Gemma-3-27B with the generated answer and any additional necessary prompt information. At this phase, we use the OpenSearch retriever due to its superior performance on DataMorgana queries with fewer than 50 retrieved documents, which aligns with Falcon-3-10B’s limited context window of up to 50 retrieved passages.
In general, the answers generated using the query and gold documents achieved the highest performance across all prompt styles. In the simple comparison prompt, answers generated with retrieved documents were penalized, as scores decreased as retrieval@k increased; specifying only the query yielded the second-best performance, surpassed only by the inclusion of the golden document. The CRAG prompt favors retrieval@k with $k \in \{5, 10\}$ over $k \in \{20, 50\}$ and penalizes answers generated with only the query. The LiveRAG prompt (Figure 2) favors an increase in retrieval@k for the correctness metric. Compared to the simple comparison and the CRAG prompt, query-only answers are scored almost as correct as the best-performing answer generation strategies, but these answers are penalized with the lowest faithfulness scores among all generation settings.
Since these different prompts provide comparable results despite their individual biases, we choose the LiveRAG prompt because of its detailed assessment of correctness and faithfulness and its similarity to the LiveRAG Challenge evaluation methodology.
Investigating the performance of Gemma-3-as-a-Judge, we compare a subset of questions judged by Gemma-3-27B against the judgments of Claude-3.5-Haiku. To do this, we selected samples that were rated as poor, fair, and good by Gemma-3-27B (using the LiveRAG prompt) and re-evaluated them with Claude-3.5-Haiku. As a result, the poor and good samples evaluated by Gemma-3-27B yielded nearly identical correctness and faithfulness scores when judged by Claude. For the fair samples, we notice a minor shift towards lower scores from Claude.
# 4.3 Retriever Performance
Considering the time constraints of the LiveRAG Challenge and the influence of golden documents in retrieval@k, we investigate the runtime of the retrieval and the number of golden documents returned at distinct retrieval@k values for the provided OpenSearch and Pinecone indices. Figure 4 shows that the number of gold documents increases continuously with higher retrieval@k. Notably, Pinecone outperforms OpenSearch at @k = 20 for multi-hop questions and at @k = 50 for single-hop questions. Runtime measurements (Figure 5) cover retrieval@k for $k \in [1, 600]$: OpenSearch takes 0.12s and Pinecone 0.15s at @k = 1, scaling nearly linearly up to $k = 600$, where OpenSearch takes 0.9s and Pinecone 0.58s. We decide to proceed with Pinecone due to its performance after reaching Falcon-3-10B’s context limit of about 50 retrieved documents. Consequently, we check the performance increase obtained by using a reranker in combination with retrieval@k > 50.
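The gold-document counts at distinct retrieval@k values reduce to a per-question recall computation; a minimal sketch with illustrative document IDs:

```python
def gold_recall_at_k(ranked_ids, gold_ids, k):
    """Fraction of gold documents appearing in the top-k retrieved results."""
    return len(set(ranked_ids[:k]) & set(gold_ids)) / len(gold_ids)

# Illustrative ranking and gold set for one question.
ranked = ["d7", "d3", "d9", "d1", "d4", "d2"]
gold = {"d3", "d4"}
# Recall grows with k: one of two gold docs in the top 2, both by the top 5.
```

Averaging this quantity over the single- and multi-hop question sets gives curves of the kind shown in Figure 4.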
# 4.4 Reranker Performance
Evaluating the impact on the performance of the BGE reranker, we measure the runtime affected by retrieval@k and reranker@k and report the percentage of remaining golden documents. We consider retrieval@k for $k \in [1, 300]$ and reranker@k for $k \in \{1, 3, 5, 10, 20\}$, allowing Falcon-3-10B with its limited context length to be used for all generation tasks. We cross-evaluate various retrieval and reranker settings to find configurations that perform better than retrieval@k alone in terms of the number of retrieved golden documents for single- and multi-hop questions within a reasonable runtime.
The runtime of BGE (Figure 5) increases with the number of retrieved documents. Further experiments reveal that BGE takes ~11.2s to rerank 400 retrieved documents. Considering the LiveRAG time constraints and available computational resources, we limit further experiments to 300 retrieved documents, which takes ~8.6s per reranking operation per question. If we increase k up to the context limit of $k = 50$ (Figure 4), the percentage of returned gold documents increases due to the query alone. We hypothesize that RAG performance increases when more golden documents are present in the context, i.e., when a higher retrieval@k is used in combination with a reranker set to $k \leq 20$. Therefore, we searched within the subset of viable retriever and reranker settings for a configuration that outperforms retrieval@50 in terms of gold documents@k. For single-hop questions, using Pinecone@300 with BGE@10, as well as OpenSearch and Pinecone@{100, 300} with BGE@20, more gold documents remain in the reranked set compared to using OpenSearch or Pinecone@50 alone. Similarly, for multi-hop questions, this occurs for OpenSearch and Pinecone@{100, 300} with BGE@20.
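Whether a higher retrieval@k combined with rerank@k beats retrieval@50 alone comes down to how many gold documents survive the rerank cut; a sketch of that measurement with pluggable retriever and reranker callables (the toy corpus and scoring below are ours, purely illustrative):

```python
def gold_retention(retrieve, rerank, query, gold_ids, retrieve_k, rerank_k):
    """Fraction of gold documents left after retrieve@k then rerank@k."""
    candidates = retrieve(query, retrieve_k)
    kept = rerank(query, candidates)[:rerank_k]
    return len(set(kept) & set(gold_ids)) / len(gold_ids)

# Toy retriever/reranker over a fake corpus, for illustration only.
corpus = [f"d{i}" for i in range(300)]
retrieve = lambda q, k: corpus[:k]
rerank = lambda q, docs: sorted(docs, key=lambda d: int(d[1:]) % 7)  # arbitrary order
retention = gold_retention(retrieve, rerank, "q", {"d0", "d14"}, 300, 20)
```

Swapping in real Pinecone/OpenSearch and BGE callables turns this into the retriever-reranker grid comparison described above.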
# 4.5 Final System Performance
With our insights into retriever and reranker performance, we cross-evaluate various settings. We use Pinecone@{100, 200, 300} with BGE@{5, 8, 10, 12} combined with optional inverted context order for each RAG solution. Due to the iterative context generation of IterDRAG, we omitted the context ordering and tested Pinecone@{100, 200} with BGE@{5, 10} for the initial retrieval step. For additional retrieval steps in IterDRAG, we considered Pinecone@200 with BGE@{4, 5} for a maximum of four and five iterations respectively, and BGE@3 for six iterations.
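For the non-iterative approaches, the cross-evaluation grid is a direct product over the settings above; a sketch:

```python
from itertools import product

retrieval_ks = (100, 200, 300)       # Pinecone@k
rerank_ks = (5, 8, 10, 12)           # BGE@k
orderings = ("inverted", "default")  # optional inverted context order

grid = [
    {"retrieval_k": r, "rerank_k": k, "ordering": o}
    for r, k, o in product(retrieval_ks, rerank_ks, orderings)
]
# 3 x 4 x 2 = 24 configurations per non-iterative RAG solution.
```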
Considering the LiveRAG constraints, increasing retrieval@k for a fixed rerank@k does not consistently lead to better performance. The performance differences due to a change in retrieval@k remain mainly within a ±2% range of the correctness evaluation metric (Figure 2). Increasing rerank@k with fixed retrieval@k results in a higher variation in the correctness score, where we measure variations from 1% up to 25%. The influence of the inverted context order averages to a 1% performance increase.
Table 1: Gemma-3-27B evaluation on 500 DataMorgana-generated questions, equally distributed between single- and multi-hop questions. We report the LiveRAG prompt (Fig. 2) metrics [%] for Correctness {1, 2} and Faithfulness {0, 1}, discarding the other Correctness {-1, 0} and Faithfulness {-1} values.
The best-performing RAG approaches are listed in Figure 1, using identical settings of Pinecone@200, BGE@5, and inverted context order. IterDRAG uses Pinecone@200 and BGE@10 for initial retrieval, and Pinecone@200 with BGE@4 for up to 5 iterations. As InstructRAG and IterDRAG perform comparably on Gemma-3-27B, we select both as possible approaches for LiveRAG. During the live challenge day, we generated answers with both RAG solutions and evaluated the results using Gemma-3-27B, omitting the golden document and golden answer, supplemented by human evaluation. As a result, InstructRAG outperforms IterDRAG in terms of the evaluation metric: IterDRAG achieved an average correctness of 1.70 and a faithfulness score of 0.73, while InstructRAG achieved a correctness of 1.91 (+28.1%) and a faithfulness score of 0.93 (+27.4%). An additional manual comparison between these two approaches revealed an occasionally subjectively better question-answer alignment for InstructRAG. Compared to our measurements, the organizer’s LLM evaluation returned lower scores for our submitted InstructRAG-based approach: correctness of 1.13 (-40.9%) and faithfulness of 0.55 (-40.9%). This difference was likely influenced by the use of a more capable judge LLM with access to the golden document and golden answer. | Retrieval-Augmented Generation (RAG) enriches Large Language Models (LLMs) by
combining their internal, parametric knowledge with external, non-parametric
sources, with the goal of improving factual correctness and minimizing
hallucinations. The LiveRAG 2025 challenge explores RAG solutions to maximize
accuracy on DataMorgana's QA pairs, which are composed of single-hop and
multi-hop questions. The challenge provides access to sparse OpenSearch and
dense Pinecone indices of the Fineweb 10BT dataset. It restricts model use to
LLMs with up to 10B parameters and final answer generation with Falcon-3-10B. A
judge-LLM assesses the submitted answers along with human evaluators. By
exploring distinct retriever combinations and RAG solutions under the challenge
conditions, our final solution emerged using InstructRAG in combination with a
Pinecone retriever and a BGE reranker. Our solution achieved a correctness
score of 1.13 and a faithfulness score of 0.55, placing fourth in the SIGIR
2025 LiveRAG Challenge. | [
"cs.IR",
"cs.AI",
"cs.LG"
] |
# 1 INTRODUCTION
Advances in deep learning have made it possible to embed data as vectors in high-dimensional vector spaces so that the distance between vectors captures various notions of semantic similarity. This opens up a new interface for users as well as AI models/agents to interact with large information stores based on semantic and contextual relevance, rather than literal matches or structured queries. Therefore, efficient search in vector spaces has become a critical requirement for information retrieval systems. Already, vector search is a central component in industrial scale retrieval (web and document search) and recommendation systems. In databases, especially document databases, augmenting existing workloads with vector representations is becoming commonplace.
A new class of scenarios requiring vector search on operational data (modeling e-commerce, document retrieval, conversational histories, and AI agent interaction patterns) is rapidly growing. This has motivated a new class of specialized vector databases that optimize primarily for vector search performance. However, this pattern forces the replication of data between a primary operational database and a secondary vector database, which can cause data divergence, and increased cost and complexity for the user. This also may not provide the operational resilience that developers expect.
An ideal solution to these workloads would be a highly available and scalable operational database that allows flexible data models and indexing over vector representations. It would further offer:
- A vector index in sync with underlying data without replication to external systems.
- Elastic scaling to billions of thousand+-dimensional vectors.
- Cost-effective and accurate search at any scale and QPS.
- Robustness to incremental changes: ensures data integrity and consistency, and high search accuracy across updates.
- Low-latency transactions for data updates and retrieval.
- Built-in multi-tenancy to allow multiple users or groups to securely and cost-effectively share the same database instance.
We achieve all these properties by integrating a state-of-the-art vector indexing library, DiskANN, with Azure Cosmos DB for NoSQL, an operational database for Tier-0 enterprise workloads. Cosmos DB already stores vast quantities of data such as conversational history, documents and e-commerce data where semantic retrieval is important. The database engine underlying Cosmos DB NoSQL [34] already offers many features to help realize these properties including multi-tenant support, automatic indexing with flexible schema, scale-out and high-availability architecture, multi-region support, as well as flexible cost structures such as dynamic auto-scale and pay-per-use serverless models. We take advantage of these properties by adapting the DiskANN library within the contours of the Cosmos DB architecture.
Each collection in Cosmos DB maps to multiple physical partitions (based on hashed key ranges), each of which is made highly available with a replica-set. Each physical machine in a Cosmos DB cluster hosts partitions corresponding to many collections to maximize fleet efficiency. Therefore, only a portion of available memory in these machines is available as a cache for the indices in these replicas. The available cache might be $10{-}50\times$ lower than the size of the document data and indices over them. For a vector index to be effective and cost-efficient in such a constrained setting, it must be able to process incremental updates and queries with limited memory. Moreover, the index must be truly incremental and avoid needing to be rebuilt or merged to maintain search quality over a long period of time or a large number of operations.
DiskANN is a suite of vector indexing algorithms [24, 36, 37] designed for such constraints. It derives quantized representations of vectors which can be much smaller than the original vectors, and supports updates and queries to the index mostly via quantized vectors. To perform an index update or query, full-precision vectors stored in the index are accessed $50\times$ less frequently compared to quantized vectors. This allows the system to provide high performance even when most of the index is stored on SSDs. DiskANN has been widely deployed in several extreme-scale Microsoft semantic indices used for web search, advertisements, Microsoft 365 search, Co-pilots as well as on edge devices [30].
The DiskANN library [24] was previously designed to control the layout of index terms either in memory or on SSD, akin to other monolithic systems [2, 22]. While databases such as SingleStore [8] and Elastic [31, 38] use vector indexing libraries in a loosely coupled way to produce a separate index for each immutable data segment, we do not use such a design due to several drawbacks: (a) each query has to fan out to numerous segments, which limits query efficiency; (b) regular rebuilds of vector indices are needed as segments are merged or consolidated, which consumes significant compute and memory and causes serious latency spikes for queries [39, 40]; (c) either large cold-start latencies occur while loading large vector indices into memory, or high expense is incurred from having to hold the index in memory.
Instead, for simplicity and robustness of the system, we store the terms representing the vector index on the Bw-Tree index in Cosmos DB. Bw-Tree supports high concurrency through latch-free algorithms, and provides fast random reads and writes in a tiered memory/SSD setting. To operate in this setting, we rewrote the DiskANN library to decouple the algorithmic logic from the physical index layout. The new library supports updates and queries by manipulating or reading index terms (quantized vectors and graph adjacency lists) stored external to the library. This leads to several structural advantages:
• We can maintain and update just one vector index per replica, which can be as large as 50GB, enabling higher query efficiency via reduced fan-out. In fact, DiskANN’s query complexity scales logarithmically in the size of the index. A vector insert results in immediate and durable changes to the index terms in Bw-Tree, and does not require further indexing or merging down the line.
• The Bw-Tree caches index terms for hot partitions on demand, while collections that have not been used recently do not consume memory and are billed only for storage.
• The Bw-Tree has a long history of support in the current system, so its reuse for DiskANN index terms benefits from its established stability and ease of operation.
• A long tail of indices can be stored on a machine without paying a minimum floor cost, especially in multi-tenant collections.
This deep integration eliminates the need for a vector index outside an operational database, and instead composes sophisticated features from Cosmos DB and DiskANN. From the new DiskANN rewrite we inherit existing features – querying with limited memory – as well as new features developed for this integration such as index updates with limited memory, filter-aware search and paginated search. The latter increases the efficiency of hybrid queries with predicates and vector search. From Cosmos DB, we inherit flexible cost structures, resource governance, elasticity, and resiliency.
A few highlights of the integrated system include:
• Query cost that is nearly $15\times$ and $41\times$ lower than Zilliz and Pinecone enterprise-tier serverless vector databases respectively (for 10 million 768D vectors), while offering higher availability.
• Query latency of about 20 milliseconds, including the time to fetch underlying docs, even at 10 million index scale. Query cost increases less than $2\times$ despite a $100\times$ increase in index size in one partition. Query cost does not change much as the dimensionality of the vector increases.
• Ingest offers stable recall over long update sequences, with cost and performance comparable to other vector databases. Collections can scale out to a billion vectors.
• Multi-tenant design where the number of partition keys or the vectors per partition can grow independently to large numbers.
• Pay-per-use or auto-scale cost structure.
• Optimized hybrid queries for better latency and cost than paginated search with post-filtering.
# 2 BACKGROUND
We now review necessary background on the DiskANN library and the Cosmos DB system to motivate the new design in Section 3.
# 2.1 The DiskANN vector indexing library
DiskANN is a graph-structured index for vector search that can efficiently index and update large sets of vector data, while supporting accurate and fast vector search queries. It is widely used at scale in Microsoft for semantic indices including those in web search, enterprise document search, computational advertisement and Windows Co-pilot runtime. The overall ideas are described in [24] and an open-source implementation is available [35].
The index consists of a graph over the vectors in the database, with one vertex representing each vector and directed edges connecting vertices. The search for a query $q$ uses a “greedy” approach, starting at a designated start point $s$, computing the distances from $q$ to each point in the out-edges $N_{\mathrm{out}}(s)$, and moving on to the nearest neighbor of $q$ among $N_{\mathrm{out}}(s)$. It continues this process of greedily visiting the closest neighbor to $q$ until it can no longer improve on the closest neighbor to $q$, at which point the search terminates. This algorithm is formally described in Algorithm 1, and can be naturally extended to return the top-$k$ neighbors by keeping a priority queue of the $k$ closest neighbors instead of hopping to the single closest neighbor each time. The accuracy, or recall $k@k$, of a search is defined as how many of the $k$ results returned by a search are among the true top-$k$ nearest neighbors. When clear from context, it is also used to refer to the average recall over a batch of searches.
DiskANN’s query complexity grows logarithmically with the size of the index (empirically observed [27, 37]). It can work effectively
Algorithm 1: GreedySearch($s, \mathbf{x}_q, k, L$)
Data: Graph $G$ with start node $s$, query $\mathbf{x}_q$, result size $k$, search list size $L \geq k$
Result: $\mathcal{L}$ contains $k$-approx NNs, and set of visited nodes $\mathcal{V}$
begin
    initialize sets $\mathcal{L} \gets \{s\}$, $\mathcal{E} \gets \emptyset$, and $\mathcal{V} \gets \emptyset$
    // $\mathcal{L}$ is the list of best $L$ nodes, $\mathcal{E}$ is the set of nodes which have already been expanded from the list, $\mathcal{V}$ is the set of all visited nodes, i.e., inserted into the list
    initialize hops $\gets 0$ and cmps $\gets 0$
    while $\mathcal{L} \setminus \mathcal{E} \neq \emptyset$ do
        let $p^* \gets \arg\min_{p \in \mathcal{L} \setminus \mathcal{E}} ||\mathbf{x}_p - \mathbf{x}_q||$
        update $\mathcal{L} \gets \mathcal{L} \cup (N_{\mathrm{out}}(p^*) \setminus \mathcal{V})$ and $\mathcal{E} \gets \mathcal{E} \cup \{p^*\}$
        if $|\mathcal{L}| > L$ then update $\mathcal{L}$ to retain closest $L$ points to $\mathbf{x}_q$
        update $\mathcal{V} \gets \mathcal{V} \cup N_{\mathrm{out}}(p^*)$
    return [closest $k$ points from $\mathcal{L}$; $\mathcal{V}$]
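The loop above can be illustrated with a toy, in-memory Python sketch (the `graph` and `vecs` maps and a path graph over 1D points are illustrative assumptions, not the DiskANN implementation):

```python
import math

def greedy_search(graph, vecs, q, s, k, L):
    """Toy GreedySearch: walk the graph greedily toward query q,
    keeping a candidate list of at most L nodes (cf. Algorithm 1)."""
    dist = lambda a: math.dist(vecs[a], q)
    best = {s: dist(s)}                 # candidate list: node -> distance
    expanded, visited = set(), {s}
    while set(best) - expanded:
        p = min(set(best) - expanded, key=best.get)   # closest unexpanded node
        expanded.add(p)
        for nbr in graph[p]:
            if nbr not in visited:
                visited.add(nbr)
                best[nbr] = dist(nbr)
        if len(best) > L:               # retain the closest L candidates
            best = dict(sorted(best.items(), key=lambda kv: kv[1])[:L])
    return sorted(best, key=best.get)[:k], visited

# A path graph over 1D points 0..7; start at node 0, query 5.2:
graph = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
vecs = {i: (float(i),) for i in range(8)}
top, _ = greedy_search(graph, vecs, (5.2,), s=0, k=2, L=4)  # -> [5, 6]
```

Note how the search walks node by node toward the query and only ever holds $L$ candidates, which is what bounds the number of index terms touched per query.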
Algorithm 2: Insert($\mathbf{x}_p, s, L, \alpha, R$)
Data: Graph $G(P, E)$ with start node $s$, new vector $\mathbf{x}_p$, parameter $\alpha > 1$, out-degree bound $R$, list size $L$
Result: Graph $G'(P', E')$ where $P' = P \cup \{p\}$
begin
    initialize expanded nodes $\mathcal{E} \gets \emptyset$
    initialize candidate list $\mathcal{L} \gets \emptyset$
    $[\mathcal{L}, \mathcal{E}] \gets$ GreedySearch$(s, p, 1, L)$
    set $N_{\mathrm{out}}(p) \gets$ RobustPrune$(p, \mathcal{E}, \alpha, R)$
    foreach $j \in N_{\mathrm{out}}(p)$ do
        if $|N_{\mathrm{out}}(j) \cup \{p\}| > R$ then
            set $N_{\mathrm{out}}(j) \gets$ RobustPrune$(j, N_{\mathrm{out}}(j) \cup \{p\}, \alpha, R)$
        else
            update $N_{\mathrm{out}}(j) \gets N_{\mathrm{out}}(j) \cup \{p\}$
with an almost entirely SSD-based index and limited memory, while providing performance parity with in-memory indices like ScaNN that consume an order of magnitude more memory. DiskANN is IO-efficient – it can achieve $90\%$ recall@10 on a billion-size SIFT dataset with as few as 50 random 4 KB reads to SSD. The same DiskANN index can also be loaded entirely into DRAM for scenarios requiring extreme throughput, where it outperforms graph-based methods such as HNSW [20] and partition-based methods such as IVF and ScaNN [26].
Inserts and Replaces. The DiskANN graph is built via repeated calls to the insertion algorithm, which is formally described in Algorithm 2. At a high level, the insert procedure generates candidates for insertion using a call to Algorithm 1, prunes the candidates down to respect the degree bound $R$, and then adds edges pointing to the newly inserted node to make it reachable. One of the main innovations behind DiskANN’s performance is the RobustPrune routine, shown in Algorithm 3, which is used to prune a vertex’s out-neighbors down to the degree bound $R$. At a high level, it removes an edge $(u, v)$ when $v$ is likely to be reachable via one of $u$’s other neighbors. Furthermore, it utilizes a scaling parameter $\alpha$ to
Algorithm 3: RobustPrune($p, \mathcal{E}, \alpha, R$)
prune more or less aggressively; in practice, the ability to scale $\alpha$ to prune less aggressively is consequential for performance [36, 37]. In some cases, it may also be necessary to replace the vector corresponding to a document identifier. We handle this by overwriting the original vector and invoking Algorithm 2 to re-insert the new vector. Any pre-existing edges pointing to the replaced point are cleaned up lazily via later calls to pruning.
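The pruning idea can be sketched as follows (a toy Python version of the RobustPrune intuition under the stated $\alpha$-reachability rule; the `vecs` map and point layout are illustrative):

```python
import math

def robust_prune(p, candidates, alpha, R, vecs):
    """Sketch of RobustPrune: keep at most R diverse out-neighbors of p.
    An edge to u is dropped when an already-kept neighbor v satisfies
    alpha * d(v, u) <= d(p, u), i.e. u is likely reachable via v.
    A larger alpha prunes less aggressively."""
    d = lambda a, b: math.dist(vecs[a], vecs[b])
    pool = sorted(set(candidates) - {p}, key=lambda v: d(p, v))
    out = []
    while pool and len(out) < R:
        v = pool.pop(0)                # closest remaining candidate
        out.append(v)
        pool = [u for u in pool if alpha * d(v, u) > d(p, u)]
    return out

# 1D points: node 2 is pruned (reachable via node 1), the far node 3 is kept.
vecs = {0: (0.0,), 1: (1.0,), 2: (2.0,), 3: (10.0,)}
robust_prune(0, [1, 2, 3], alpha=1.2, R=4, vecs=vecs)   # -> [1, 3]
```

Raising $\alpha$ (say to 2.5) retains node 2 as well, showing how $\alpha$ trades graph density for navigability.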
Mini-batch updates. Using multiple threads to insert vectors into a DiskANN graph results in race conditions between updates to the adjacency lists in the graph, which could be handled via fine-grained locking [35]. In some cases, the underlying data structure used to store the graph may not tolerate parallel updates, or parallel updates with potentially duplicate values for a key (a graph vertex in this case). The latter scenario includes the Cosmos DB Bw-Tree, which has stricter contracts: no duplicate insert patches for a key, and no delete patches for a non-existent key. In order to benefit from parallelism while adhering to these strict contracts, we utilize so-called mini-batch updates, where the edge insertions corresponding to a small batch of nodes are computed in parallel and then applied to the graph in a single update. They are formally described in Algorithm 5 in the Appendix, and follow a similar routine to the batch build in ParlayANN [27]. Here, we use a smaller maximum batch size (about 100) to support batch updates.
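The consolidation step that makes such a contract safe can be sketched in a few lines (a simplification: the real system computes the patches in parallel and applies them through the Bw-Tree, not a Python dict):

```python
def consolidate_patches(edge_inserts):
    """Collapse per-edge inserts into one patch per key (graph vertex),
    with duplicates removed, honoring a no-duplicate-patch contract."""
    patches = {}
    for node, nbr in edge_inserts:
        lst = patches.setdefault(node, [])
        if nbr not in lst:
            lst.append(nbr)
    return patches

def apply_minibatch(graph, edge_inserts):
    """Apply a mini-batch of edge inserts to adjacency lists in one pass."""
    for node, nbrs in consolidate_patches(edge_inserts).items():
        cur = graph.setdefault(node, [])
        graph[node] = cur + [n for n in nbrs if n not in cur]
    return graph
```

Even if two threads propose the same edge $(1, 3)$, the batch produces a single patch for key 1, so the store never sees duplicate inserts for one key.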
In-place Deletion. When a document is deleted, the corresponding vector and quantized vector are immediately removed from the index. DiskANN updates the graph index terms to reflect the deletion using Algorithm 6, which is an adaptation of [43]. At a high level, the algorithm eagerly replaces the connections between non-deleted points that were maintained via a deleted point, ensuring stability of the index quality. A lightweight background process continuously removes any remaining edges pointing to the deleted point. Experiments show that the combination of in-place deletion and lightweight background consolidation is effective at keeping recall stable over long cycles of insertion and deletion.
Compressing Vectors via Quantization. An additional algorithmic building block is the compression of the associated vector data. This allows data to be stored more compactly in expensive storage tiers (e.g., main memory), transmitted more efficiently across the memory bus, and distance comparisons to be computed with fewer CPU cycles, with little loss in accuracy. Both scalar quantization and product quantization are widely used on top of a vector index to increase search speeds. Scalar quantization maps each coordinate of the embedding to a smaller representation. For example, 32-bit floating point representations are easily rounded to the nearest 16-bit floating point with little loss of precision. Rounding to the nearest 8-bit integer or even to 1- or 2-bit representations is lossier at a coordinate level, but still preserves enough information overall to help the query navigate the index.
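A minimal scalar-quantization sketch for one fixed range (the `[lo, hi]` range and 8-bit width are illustrative choices, not the system's actual encoding):

```python
def sq_encode(vec, lo, hi, bits=8):
    """Scalar quantization sketch: map each coordinate in [lo, hi]
    to an integer code with the given bit width."""
    levels = (1 << bits) - 1
    return [round((x - lo) / (hi - lo) * levels) for x in vec]

def sq_decode(codes, lo, hi, bits=8):
    """Map integer codes back to approximate float coordinates."""
    levels = (1 << bits) - 1
    return [lo + c / levels * (hi - lo) for c in codes]

codes = sq_encode([0.0, 0.25, 1.0], 0.0, 1.0)   # -> [0, 64, 255]
```

Each 32-bit float becomes one byte, and the reconstruction error per coordinate is bounded by half a quantization step.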
Product quantization (PQ) [21] maps groups of coordinates to a few bytes by clustering the data and mapping each group to the identity of its nearest cluster center. For many datasets, product quantization achieves better compression than scalar quantization when normalized for the noise introduced in distance calculations. For example, PQ can compress OpenAI’s ada-003 embeddings (12KB) by $96\times$ while retaining enough information to navigate the index. While PQ was specifically formulated for preserving Euclidean distances between embeddings, in practice it is also reasonable at preserving inner-product distances.
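The encode/decode step of PQ can be sketched as follows (the tiny hand-picked codebooks stand in for trained centroids; real systems learn 256 centroids per chunk via clustering):

```python
import math

def pq_encode(vec, codebooks):
    """PQ sketch: split vec into len(codebooks) chunks; each chunk is
    replaced by the id of its nearest centroid in that chunk's codebook."""
    d = len(vec) // len(codebooks)
    codes = []
    for i, book in enumerate(codebooks):
        chunk = vec[i * d:(i + 1) * d]
        codes.append(min(range(len(book)), key=lambda c: math.dist(book[c], chunk)))
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct an approximate vector from centroid ids."""
    out = []
    for c, book in zip(codes, codebooks):
        out.extend(book[c])
    return out

# Two 2-d chunks, two centroids each:
codebooks = [[(0.0, 0.0), (1.0, 1.0)], [(0.0, 1.0), (1.0, 0.0)]]
pq_encode((0.9, 1.1, 0.1, 0.9), codebooks)   # -> [1, 0]
```

With 256 centroids per chunk each chunk costs one byte, so a 3072-d float32 embedding (12 KB) split into 128 chunks compresses to 128 bytes, consistent with the $96\times$ figure above.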
# 2.2 The Cosmos DB system
Azure Cosmos DB is Microsoft’s globally distributed, elastic, cloud-native database service for managing JSON documents at Internet scale. It is the primary database service in the Microsoft cloud, with tens of millions of database partitions, 100+ PBs of data under management and 20M+ vCores. Prior work presents a detailed description of the overall system [34]. Here we briefly present ideas necessary for the design of the integrated vector index.
Schema-Agnostic Indexing. Cosmos DB leverages the simplicity of JSON and its lack of a schema specification. No assumptions are made about the documents stored in Cosmos DB, and they can vary in schema. Cosmos DB operates directly at the level of the JSON grammar, blurring the boundary between the structure and instance values of documents. This, in turn, allows the database to be "schema-free", enabling it to automatically index documents without requiring a schema or secondary indices. For additional control, a custom indexing policy [11] can also be used to index specific properties using the 'path' notation, which allows precise navigation to specific substructures in a JSON document. For example, the property path '/employee/name' represents the 'name' node in the 'employee' object. In addition to automatic indexing for the JSON type system, Cosmos DB also supports specialized indexes, including spatial indexing and, more recently, indexing for vector and full-text search. For fast queries and to avoid the storage bloat associated with JSON text, Cosmos DB employs a performant custom binary encoding [3] for JSON data.
Logical Partitioning and Elasticity. Clients define a logical partition key on a Collection (up to three levels of hierarchy are allowed in partition keys [10]). A Collection can thus span multiple physical partitions, with data hashed horizontally across partitions based on the logical partition key value. As clients adjust throughput and/or storage needs on their Collections, the elasticity component can scale out or scale back the compute and storage required through partition splits and merges.
Figure 1: Azure CosmosDB architecture diagram from [34].
System Topology. The Cosmos DB service is deployed worldwide on clusters of machines, each with dedicated local SSDs. The unit of deployment, called a federation (Figure 1), is an overlay network of machines, which can span one or more clusters. Each physical machine hosts replicas corresponding to various partitions for scaled-out collections. Replicas corresponding to a single partition are placed and load balanced across machines spanning different fault domains, upgrade domains and availability zones in the federation. Each replica hosts an instance of the Cosmos DB database engine, which manages the JSON documents as well as the associated indices.
Resource Governance. Cosmos DB provides performance isolation between replicas through the Resource Governance (RG) component. Azure Cosmos DB normalizes the cost of all database operations using Request Units (or RUs, for short) and measures cost based on throughput (RUs per second, RU/s). A Request Unit is a performance currency abstracting the system resources, such as CPU, IOPS, and memory, that are required to perform database operations. The RG component guarantees the provisioned RUs per partition for user requests while at the same time rate limiting requests when usage exceeds provisioned throughput. This helps provide an isolated provisioned-throughput experience, while achieving high utilization per node and lower cost.
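The accounting idea behind RU governance can be sketched as a per-second budget (a deliberately simplified toy; the real governor tracks multiple resources and uses finer-grained windows):

```python
class RateGovernor:
    """Toy RU governor: each operation charges RUs against a per-second
    budget; requests beyond the provisioned RU/s are rate-limited
    (surfaced to clients as throttling)."""
    def __init__(self, provisioned_rus):
        self.budget = provisioned_rus
        self.used = 0.0
        self.window = 0
    def try_charge(self, now_sec, cost_ru):
        if now_sec != self.window:        # new one-second accounting window
            self.window, self.used = now_sec, 0.0
        if self.used + cost_ru > self.budget:
            return False                  # over budget: throttle the request
        self.used += cost_ru
        return True
```

A replica provisioned at 10 RU/s would admit a 6 RU read, reject a further 5 RU read in the same second, and admit it again once the window rolls over.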
A Cosmos DB Replica’s database engine hosts:
• Document Store – Cosmos DB’s transactional database engine that serves as the main store for documents.
• Inverted and Forward Index – Cosmos DB incorporates Bw-Tree both as an Inverted and a Forward Index for its indexing needs. The Bw-Tree is a latch-free index designed for fast writes, thanks to its support for blind incremental updates and underlying log-structured storage for persistence. The design effectively batches multiple incremental updates into a single flush onto disk. As a result, Bw-Tree does not update in place, helping it reduce write amplification to SSD-based storage [34].
# 3 SYSTEM DESIGN
To create a vector index over a collection, users turn on the capability at a collection level and declare a JSON path as the target for vector indexing. In addition, users specify the dimension and the distance function to be used for the embeddings in this path, as well as the vector indexing policy. Any document ingested with a valid vector will be part of the vector index (we support one vector per path). The JSON document along with the vector is stored in the primary Document Store and other paths meant for non-vector indexing are indexed into the Bw-Tree as described in [34].
The simplest (brute force) way to compute the nearest neighbors of a query is to scan all JSON documents in the collection in the query runtime, and compute the distance to each vector in the designated vector path. This is useful for small collections, say with fewer than 1000 documents, but does not scale otherwise.
An improvement would be to map all vectors in the collection to a contiguous range in the Bw-Tree by prefixing the vectors with the collection id and the name of the vector index path (we call this the Flat index). This can be done easily in the user request path as part of the "Document Analysis" during ingestion on the primary replica, and then replicated to all secondaries. A range scan in the Bw-Tree for the appropriate prefix would need many fewer random accesses for smaller vectors. This is still not a great option since vectors tend to be large, and storing them twice limits scale-up.
A second improvement would be to compress the vectors via quantization (say using PQ) and store them in a contiguous range in the Bw-Tree. This significantly reduces the number of nodes to scan in the Bw-Tree, by up to $96\times$ for OpenAI Ada v3 embeddings, for example. To answer a request for the top-$k$ nearest neighbors to query $q$, we would first find, say, the $5k$ closest entries to the query in the quantized space. We could then look up the full-precision vectors corresponding to each of the $5k$ candidates from the Document Store, and compute the full-fidelity distance to the query to identify the top-$k$ candidates. We refer to this as the Q-Flat (Quantized Flat) index. With an appropriate multiplier over how many extra candidates are retrieved in quantized space, this method can yield very high recall. For moderately sized collections, or for small tenants in a multi-tenant collection, this can be efficient. For instance, in a collection of 5,000 vectors quantized to 96 bytes, we need to touch only about 60 8KB-sized Bw-Tree logical leaf nodes to answer the query. The CPU time required for computing distances over the retrieved quantized vectors is under a millisecond.
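The Q-Flat scan-then-rerank pattern can be sketched as follows (a toy model: `quantized` holds decoded approximate vectors and `full` stands in for the Document Store lookup):

```python
import math

def qflat_query(q, quantized, full, k, multiplier=5):
    """Q-Flat sketch: exhaustively scan the (decoded) quantized vectors,
    keep multiplier*k candidates, then re-rank those candidates using
    full-precision vectors fetched from the document store."""
    cand = sorted(quantized, key=lambda i: math.dist(quantized[i], q))
    cand = cand[:multiplier * k]
    return sorted(cand, key=lambda i: math.dist(full[i], q))[:k]

# Quantization shifts every vector slightly; re-ranking recovers the truth.
full = {i: (float(i),) for i in range(20)}
quantized = {i: (float(i) + 0.3,) for i in range(20)}
qflat_query((7.1,), quantized, full, k=2)   # -> [7, 8]
```

The cheap pass touches every quantized vector but only $5k$ full-precision vectors, which is what keeps the Bw-Tree leaf-node count small.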
However, an exhaustive scan in quantized space does not scale to larger replicas. Cosmos DB can fit over 10 million 768-dimensional floating point vectors in one replica, and we need an index that can answer queries by accessing fewer nodes in the Bw-Tree. We use the DiskANN graph-structured index for this. With the DiskANN vector indexing policy in a collection, two additional indexing terms in the Bw-Tree are created: quantized vectors (akin to Q-Flat) and graph neighbor terms which represent out-neighbors of each vector in the index (see Fig. 2). In the rest of this section, we describe how we layer these index terms in Cosmos DB, and how DiskANN manipulates the terms to update and query the index.
# 3.1 Re-designing DiskANN for databases
DiskANN was previously written as a library that manages its own buffers and index layout, thus limiting its use inside a database. We rewrote it in Rust using the following principles so that it can be used with a variety of systems, including databases and key-value stores.
Decoupled index layout. The index layout is not controlled or even visible to the algorithms. The core of the library consists of
Figure 2: Left: the DiskANN index optimized for SSD (quantized vectors, ~50 bytes; graph of degree ~100; full-precision vectors, 100-1000d). Right: DiskANN index terms in Cosmos DB (quantized vectors, 32-192 bytes; graph of degree ~40; full-precision vectors, 100-1000d).
methods that update and query the index by reading and updating index terms – quantized vectors, full-precision vectors and neighbor lists for vertices – via implementations of standardized Provider traits. The NeighborProvider trait, for example, defines the way the index retrieves, appends and overwrites the out-neighbors of a vector. These traits are implemented by the database, which understands the best layout and encoding/decoding for each kind of index term. The database also owns the persistence and recovery of these terms.
Asynchronous interface. The term that the index needs may be immediately available in a memory buffer or may need to be retrieved from a slower storage device. Therefore the Provider traits allow for get methods to return either the actual data or a future that eventually returns the data so that the calling thread can be scheduled with other work meanwhile. This is encoded via the MaybeDone enumerated type in the snippet below:
/// Get the quantized vector for given `vector_id`
fn get_quant_vector(
    &self,
    context: &Context,
    vector_id: Data::VectorIdType,
) -> MaybeDone<impl Future<Output = Result<Self::Element<'_>>> + Send>;
As a consequence, all update and query methods in the library are also asynchronous and need a runtime to manage threads and drive the futures to completion. We use the tokio runtime [1].
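The MaybeDone pattern – return the value directly when it is already in memory, otherwise return something awaitable – can be mimicked in Python as an analogy (the names `cache`/`store` and the sleep-as-SSD-read are illustrative, not the real Rust API):

```python
import asyncio

def get_quant_vector(cache, store, vector_id):
    """Rough Python analog of MaybeDone: return ('done', value) on a
    cache hit, else ('pending', awaitable) so the caller's runtime can
    schedule other work while the read completes."""
    if vector_id in cache:
        return ("done", cache[vector_id])
    async def fetch():
        await asyncio.sleep(0)          # stand-in for an asynchronous SSD read
        return store[vector_id]
    return ("pending", fetch())
```

The caller checks the tag: a hit costs no scheduling overhead at all, while a miss yields a future that the runtime (tokio, in the Rust library) drives to completion.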
Execution Context. Since the library does not own any index terms, one instantiation of the DiskANN process is sufficient to update and query all the replicas in a machine, which belong to many different collections. The database process invoking DiskANN methods uses the execution context variable to identify the target replica for each request.
The insert method can in turn pass this through to the Provider methods such as get_quant_vector to help the database identify the term in the correct replica. The context can also contain LSN and activity_id to help the database emit telemetry for debugging and fine-grained performance metrics.
Our rewrite achieved these goals without compromising on performance compared to monolithic “in-memory” libraries such as [2, 35]. We can implement the Provider traits using a type backed by memory buffers for maximum performance – in fact, the new library is at least as fast as the previous monolithic DiskANN library for all use cases it supported.
Figure 3: Control flow of paginated search.
Further details on the inter-operation between Cosmos DB and DiskANN, including runtime configuration and the design of C++/Rust asynchronous callbacks, are in Appendix B.
# 3.2 Adaptations to the algorithm
Querying in quantized space. Given the limited memory available to the indexer and query processor, Algorithms 2 and 1 would be too slow, since full-precision vectors cannot be cached and require random reads into the SSD. So we modify the search algorithm to traverse the graph using the distance between the query and the quantized representations of the vectors, which can be cached. As observed previously [37], this does not significantly impact the convergence rate of greedy search. However, we must re-rank a small set of best candidates found in quantized space using distances to full-precision vectors (see Fig. 5). We configure the index so that a query for the top-10 entries on a graph with degree 32 and search list size $L = 100$ might touch about 3500 quantized vectors, but only about 50 full-precision vectors.
Indexing in quantized space (mostly). Inserting a vector $p$ first requires querying for it, as described in Algorithm 2, to retrieve the set of vertices visited during the search. This can be done entirely in quantized space. The next step is to prune the visited vertex set to get the neighbor set of $p$. This cannot be done entirely with quantized vectors: highly compressed quantized vectors can help greedy search navigate to the neighborhood of $p$, but may not provide enough precision to precisely identify the nearest neighbors. Prune needs the full-precision representation of at least some of the vertices in the visited set. Through experiments over multiple datasets, we determined that using full-precision vectors for the 48 closest vectors to $p$ and Product-Quantized vectors for the rest does not cause a significant dip in the quality of the index. This is described formally in Algorithm 8.
Paginated search. When processing hybrid queries with predicates other than similarity in vector space, the number of candidates returned by greedy search that satisfy such predicates may be insufficient. Therefore, we designed paginated search to allow the query layer to search iteratively until a sufficient number of candidates that match the predicates are identified.
Paginated search maintains two priority queues: one named best with max size $L$, as in the standard greedy search, and another named backup that has unlimited size. Each pagination for the next $k$ candidates first explores the best queue and trims the queue to size $L$. Any vertices popped out of best get pushed to the backup queue. The search stops when all $L$ vertices in best have been visited, and returns the best $k$ results from the queue. When the query asks for the next $k$ and best does not have enough candidates, it brings in the closest candidates from the backup queue and continues the search until all $L$ vertices have been visited. A visited set saved across paginations prevents repeated results. Paginated search can be performed in quantized space with appropriate re-ranking before returning results to the user. The control flow of paginated search is sketched in Figure 3.
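A toy model of just the two-queue mechanics (the real search interleaves graph expansion with pagination; here the candidate scores are precomputed for clarity):

```python
class PaginatedScan:
    """Sketch of pagination state: 'best' holds up to L candidates,
    refilled from the unbounded 'backup' queue, so successive pages
    return the next-closest results without repeats."""
    def __init__(self, scored, L):
        # scored: list of (node, distance); backup keeps them in distance order
        self.backup = sorted(scored, key=lambda x: x[1])
        self.best, self.L = [], L
    def page(self, k):
        while len(self.best) < self.L and self.backup:
            self.best.append(self.backup.pop(0))   # pull closest from backup
        out, self.best = self.best[:k], self.best[k:]
        return [node for node, _ in out]

scan = PaginatedScan([("a", 1.0), ("b", 3.0), ("c", 2.0),
                      ("d", 5.0), ("e", 4.0)], L=3)
scan.page(2)   # -> ["a", "c"]
scan.page(2)   # -> ["b", "e"]
```

Each call drains the front of best and tops it up from backup, which is the property hybrid queries rely on when a page of candidates fails the non-vector predicates.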
# 3.3 Design of index terms in Cosmos DB
Cosmos DB stores index terms as key-value pairs in the Bw-Tree so they can be read via a single index key lookup or a range scan over keys. There are two kinds of index terms stored in the Bw-Tree: 1) Inverted terms that map each term (path + document-specific value) to the set of document ids that contain it, and 2) Forward terms (introduced with Vector Search) that map each term (path + document id) to an arbitrary value.
Design of Inverted and Forward Term. The general structure of an inverted and forward term is as follows:
• TermKey-Prefix: 15 bytes, the murmur hash of the property path encoded in this term.
• TermKey-TypeMarker: A 1 byte marker indicating the type of the value encoded in the term.
• TermKey-EncodedValue: A range-comparable encoding of the property’s value / derived value from the user document.
• TermValue: An arbitrary value for the key. In practice, this is either an Inverted Value – a variable-length compressed bitmap (in buckets of 16k ranges called PES [34]) representing the set of documents that have the property value encoded in the key – or a Forward Term value: an adjacency list (array of 8-byte document ids).
Next we describe the design for index terms for Quantized vectors and adjacency lists. For a concrete example for each scenario, please refer to Appendix C.
We use the Inverted Term design to store the quantized representations of the full vector from the user document. The TermKey-EncodedValue includes the “Document ID” – the 8-byte unsigned system-generated unique numerical ID of the user’s document – followed by the quantized vector in binary (see Figure 4). The TermValue PES in this case is a dummy. To retrieve a quantized vector given the Document ID, a "Prefix Seek" API is used.
We designed a new “Forward Term” type that can have an arbitrary value rather than a bitmap (corresponding to posting lists). The ’Adjacency list terms’ use the new forward term format. The term encodes out-neighbors of the graph vertex representing the vector in the document. This is done by TermKey-EncodedValue consisting of Document Id representing the vector and the TermValue consisting of the list of Document IDs of the out-neighbors (see Figure 4). The new value format supports blind incremental updates to the adjacency list to support fast appends. A new corresponding merge callback procedure is added to process the blind updates and consolidate on to an effective value during Bw-Tree consolidations.
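The key layout can be sketched as follows (a simplified model: the field widths follow the description above, but the actual encoding, hashing and payload formats are internal details):

```python
import struct

def term_key(path_hash, marker, doc_id, payload=b""):
    """Sketch of a range-comparable term key: 15-byte path-hash prefix,
    1-byte type marker, 8-byte big-endian doc id, then an optional
    payload (e.g. quantized vector bytes). Big-endian integers sort
    correctly under plain byte comparison."""
    assert len(path_hash) == 15
    return path_hash + bytes([marker]) + struct.pack(">Q", doc_id) + payload

def prefix_seek(keys, prefix):
    """Toy 'Prefix Seek': all keys with the given prefix, in key order."""
    return [k for k in sorted(keys) if k.startswith(prefix)]
```

Because every term for one path shares the 16-byte prefix, all quantized-vector terms of a collection land in one contiguous key range, and a single prefix seek walks them in document-id order.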
Figure 4: Index term design for quantized vectors (top) and adjacency lists (bottom).
Figure 5: Re-ranking the 10 closest neighbors retrieved in quantized space using full-precision vectors to find the top $k = 4$; quantizedVectorListMultiplier $= 2.5$.
Can Quantized Vectors be persisted as a Forward Term? We could have modeled quantized data with the Forward Term as well (which is a more natural representation) and plan to reconcile this in the future.
Extending Term Design for sharded index. By default, we construct one vector index across all the documents in the replica. In some cases, the user might want to query data that matches a shard key. In such a case, it is inefficient to query the entire vector index filtering for the intended shard key. We instead allow the user to declare a “Sharded DiskANN” with a vector index policy to create one DiskANN index per value of the declared shard key present in the replica.
To support such indices, we extend the above term design by prefixing TermKey-EncodedValue with a hash of shard key value. This allows us to access both Quantized and Adjacency terms for a given shard and also co-locates the terms for a shard in a continuous key range making it easier to cache for highly active tenants. Also by encoding each logical shard index as just another set of Bw-Tree keys with a different prefix, the decoupling between logical and physical terms helps Cosmos DB store a long tail of tenants on a single replica.
# 3.4 Index construction and maintenance
Quantized Flat Index. The Quantized Flat index requires a sample of vectors to create the quantization schema used to generate quantized vector terms. We empirically found about 1000 samples to be sufficient for creating a first, if not the best, schema for PQ. Once the quantization schema is available, the quantized vector terms are generated inline with the document updates. A separate background process backfills the quantized terms for existing vectors. The DiskANN index terms are a superset of the Quantized Flat index terms and leverage the quantized vector terms.
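The sample-then-quantize flow can be sketched as a toy product quantizer fit on a small sample, mirroring the observation that roughly 1000 samples suffice for a first schema. The subspace count, codebook size, and plain Lloyd's loop are illustrative choices, not the production PQ trainer.

```python
# Toy product quantization (PQ): learn per-subspace codebooks on a sample,
# then encode each vector as one codebook index per subspace.
import numpy as np

def fit_pq(sample, m=4, k=16, iters=10, seed=0):
    """Split dimensions into m subspaces; learn k centroids per subspace."""
    rng = np.random.default_rng(seed)
    d = sample.shape[1] // m
    codebooks = []
    for j in range(m):
        sub = sample[:, j*d:(j+1)*d]
        cent = sub[rng.choice(len(sub), k, replace=False)]
        for _ in range(iters):                       # Lloyd's iterations
            assign = np.argmin(((sub[:, None] - cent) ** 2).sum(-1), axis=1)
            for c in range(k):
                pts = sub[assign == c]
                if len(pts):
                    cent[c] = pts.mean(0)
        codebooks.append(cent)
    return codebooks

def encode(v, codebooks):
    d = len(v) // len(codebooks)
    return [int(np.argmin(((cb - v[j*d:(j+1)*d]) ** 2).sum(-1)))
            for j, cb in enumerate(codebooks)]

sample = np.random.default_rng(1).normal(size=(1000, 32)).astype("float32")
books = fit_pq(sample)
code = encode(sample[0], books)   # one small code per subspace
```

Once a schema like this exists, encoding a new vector is a cheap lookup, which is what allows quantized terms to be generated inline with document updates.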
Graph Operations. Vectors are inserted, deleted, and replaced lazily in the background while user transactions proceed concurrently. This maintains Cosmos DB’s latency SLAs for transactions while keeping the graph dynamically updated as users modify the vectors.
Upfront charging. Cosmos DB charges RUs for processing vectors upfront during the transaction, depending on the number of vectors and the size of each vector. Customers thus get a predictable request throughput for their workloads.
Re-quantization. As vectors are ingested into a collection, we resample 25,000 vectors to generate a higher-quality PQ schema. After schema generation, quantized terms for all vectors are re-generated in place. Newly ingested vectors are quantized with the updated schema. We support distance computation between vectors quantized with the two related schemas. Since the refined schema is very similar to the original, such distance calculations are meaningful, and further, we do not need to rebuild the graph after re-quantization.
# 3.5 Query layer
Cosmos DB supports vector search using a built-in system function called VectorDistance (formally described in Appendix D). Below is an example of a vector search query in Cosmos DB.
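The concrete query listing did not survive extraction; the following is a hedged reconstruction of the shape such a query takes, based on the public VectorDistance function. The property names c.embedding and c.category are assumptions (the latter mirrors the filter discussed under "Filtering and runtime fallback").

```python
# Illustrative Cosmos DB NoSQL vector search query, built as a string.
# Property names (c.embedding, c.category) are assumed for illustration.
k = 10
query = f"""
SELECT TOP {k} c.id, VectorDistance(c.embedding, @queryVector) AS score
FROM c
WHERE c.category = @category
ORDER BY VectorDistance(c.embedding, @queryVector)
""".strip()
```

The @queryVector and @category parameters would be bound at execution time, e.g. through an SDK's parameterized-query support.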
By default, the query engine scans all documents to compute vector distances. When either a Flat or Q-Flat index is present, the query layer uses it instead.
If quantized terms are used to compute distance to the query, the query engine finds the $k' = \text{quantizedVectorListMultiplier} \cdot k$ closest vectors to the query in the quantized space, and re-ranks them to estimate the true top $k$. Re-ranking is done by loading the documents corresponding to the $k'$ quantized vectors from the store and re-ordering them based on the distance between the query and the full-precision vector.
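The two-stage re-rank can be sketched as follows; the noisy quantized distances and multiplier value are illustrative stand-ins for PQ distances and the configured quantizedVectorListMultiplier.

```python
# Two-stage re-ranking: take k' candidates by (approximate) quantized
# distance, then re-order them by full-precision distance.
import numpy as np

def rerank(query, quantized_dist, full_vectors, k, multiplier=2.5):
    k_prime = int(multiplier * k)
    cand = np.argsort(quantized_dist)[:k_prime]          # closest in PQ space
    exact = ((full_vectors[cand] - query) ** 2).sum(-1)  # full precision
    return cand[np.argsort(exact)[:k]]

rng = np.random.default_rng(0)
vecs = rng.normal(size=(100, 8))
q = vecs[42] + 0.01 * rng.normal(size=8)                 # near vector 42
approx = ((vecs - q) ** 2).sum(-1) + rng.normal(scale=0.5, size=100)
top4 = rerank(q, approx, vecs, k=4)                      # recovers doc 42
```

The approximate pass narrows the candidate set cheaply; only $k'$ documents pay the cost of a full-precision load.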
If a DiskANN index is present, the query engine calls the paginated search API to get the number of documents required by the re-ranking step. The parameter searchListSizeMultiplier controls the quality of search: higher values return more accurate results at higher latency and RU cost. The $L$ parameter sent to paginated search is set to searchListSizeMultiplier $\cdot k$.
Filtering and runtime fallback. Filters in the query are evaluated first, followed by the ORDER BY. For the query listed above, the filter on c.category is evaluated first, resulting in a compressed bitmap representing the set of documents that satisfy the filter. The rest of the query plan depends on the selectivity of the filter, that is, how many of the database points satisfy it. If the filter is highly selective, e.g., the estimated result size is $< 1000$, the query iterates over the full vectors corresponding to those documents.
Otherwise, a post-filtering approach based on DiskANN pagination is used. The query layer iteratively paginates DiskANN search over the quantized vectors until it finds at least quantizedVectorListMultiplier $\cdot k$ documents that satisfy the predicate (which can be checked quickly using the compressed bitmap).
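The pagination loop can be sketched as below; the page source is a toy stand-in for DiskANN's paginated search, and the bitmap is a plain Python set rather than a compressed bitmap.

```python
# Post-filtering sketch: paginate approximate search until enough results
# pass the filter bitmap, then hand the survivors to re-ranking.
def paginated_search(query, page):
    """Stand-in for DiskANN pagination: yields doc ids in approximate
    nearest-first order, `page` ids at a time (toy ordering here)."""
    order = list(range(100))
    for i in range(0, len(order), page):
        yield order[i:i + page]

def post_filter(query, bitmap, k, multiplier=2.5, page=16):
    need = int(multiplier * k)
    hits = []
    for batch in paginated_search(query, page):
        hits += [d for d in batch if d in bitmap]   # cheap bitmap check
        if len(hits) >= need:
            break
    return hits[:need]

bitmap = set(range(0, 100, 3))      # docs matching the predicate
hits = post_filter(None, bitmap, k=4)
```

Because each page is checked against the bitmap before counting, the loop stops as soon as enough qualifying candidates exist, bounding the extra search work for selective filters.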
Continuations. Cosmos DB backend requests are limited to 5 seconds. If a query does not complete by this time, it is preempted with a continuation token capturing the query state, which the client can use to resume the query. While some queries, such as regular ORDER BY queries, are streaming, pagination in quantized space is not. The partial results of a vector search cannot be serialized into the continuation token, as they can be large. To handle continuation, the partial results are instead returned to the client, which must re-order the backend results across different continuations.
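The client-side re-ordering step amounts to a merge of locally ordered partial pages; a minimal sketch, with (distance, doc_id) pairs standing in for the backend's partial results:

```python
# Client-side continuation handling: each backend response is a partial,
# locally ordered result page; the client merges and re-orders them.
import heapq

def merge_continuations(pages, k):
    """pages: lists of (distance, doc_id), each already sorted by distance."""
    return [doc for _, doc in heapq.merge(*pages)][:k]

page1 = [(0.1, "a"), (0.4, "c"), (0.9, "e")]   # first 5-second slice
page2 = [(0.2, "b"), (0.5, "d")]               # resumed via continuation token
top3 = merge_continuations([page1, page2], k=3)
```

heapq.merge exploits the fact that each page is already sorted, so the client pays only a merge, not a full re-sort.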
SDK Query Plan. The Cosmos DB SDK has a query planner that can distinguish between single-partition queries, which are passed through, and queries requiring cross-partition fan-out. The SDK supports fan-out and aggregates the results from different partitions. It also handles continuations for non-streaming queries like VectorDistance ORDER BY, and supports collation of partial results, merging cross-partition replies, and re-ranking for the final results.
# 3.6 Filter-aware Search
Given fast access to the bitmap over documents that satisfy the filter, we can modify DiskANN search to make it more likely to find points that satisfy the filter. This is done by Algorithm 7 in Appendix B, which scales down the distances from the query to vectors that satisfy the filter by a factor $\beta < 1.0$ in each greedy search iteration.
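The scaling step itself is simple; a sketch follows, with the $\beta$ value and the set-based bitmap as illustrative choices.

```python
# Beta-biased distance for filter-aware greedy search: vectors satisfying
# the filter look slightly closer, steering traversal toward them.
def biased_distance(dist, doc_id, filter_bitmap, beta=0.8):
    return beta * dist if doc_id in filter_bitmap else dist

bitmap = {2, 5}                         # docs matching the filter predicate
scores = {d: biased_distance(1.0, d, bitmap) for d in range(6)}
# docs 2 and 5 now score 0.8 instead of 1.0 and are expanded first
```

Because only the traversal priority changes, not the distances reported to the user, the bias affects which vertices are explored, never the correctness of the final re-ranked distances.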
We can compose sharded and filter-aware DiskANN queries. For a multi-tenant collection configured with sharded vector indexing, queries with a filter over the shard key as well as additional filters $f$ can limit search to the relevant shard and use the beta-biased greedy search to optimize for the filters $f$.
# 4 EVALUATION
We now measure the query latency, cost, and ingest performance of our design. Our experiments scale up to 10 million vectors per partition and scale out to 1 billion vectors across a collection.
Datasets. We use the following datasets in our experiments.
• Wiki-Cohere: 35 million Wikipedia articles embedded using the cohere.ai multilingual 22-12 model [33], and a query set of 5000 embeddings of Wikipedia Simple articles, with 768 floating-point dimensions. We use 100K, 1M and 10M prefixes of this collection.
• MSTuring: 1 billion Bing queries embedded to 100-dimensional floating point using the Turing AGI v5 model [46], with 100,000 queries from the same distribution.
• YFCC Dataset: 1 million vectors corresponding to a 192-dimensional CLIP embedding applied to (copyleft) photos and videos on Flickr from the year 2004 until early 2014. This dataset also includes metadata such as camera model, country, year, and month. The number of documents per year ranges from 30,000 to 144,000.
Figure 6: p50, p95 and p99 query latencies and query RU charge for the 10 million Wiki-Cohere vector index for various values of search list size, and the corresponding recall@10.
Runbooks are long-running sequences of insertions, deletions, and queries that simulate various streaming scenarios [16]. In this work, our first runbook is based on an expiration-time model, where each point is inserted with a randomly selected expiration time at which it is deleted; such models are a good proxy for common production scenarios. For the expiration-time model, we use two instances, based on the Wikipedia-10M dataset and the MSTuring-1M dataset, to benchmark the recall stability of the vector index. Our second runbook is more adversarial and is meant to imitate distribution shift. This runbook instance is based on the MSTuring-10M dataset: the dataset is partitioned into 32 clusters, and points are inserted and deleted in clustered order. For each runbook, the same query set is used at each query step.
Configuration. We use the following parameters unless noted otherwise. The graph degree is 32 to minimize the footprint of the index, and a slack factor of 1.3 is used to reduce the number of secondary prunes. We use $L = 100$ for index construction. The parallelism for mini-batch inserts is set to 8, since replicas might not regularly get more than 3 cores on shared machines. Each physical partition is limited to 50GB, and the Bw-Tree max chain length is set to 3.
# 4.1 Query latency and RU charges
Figure 6 shows the query latency and RU charge for an index of 10 million Wiki-Cohere vectors. Note that the p50 latency is under 20ms for the 10 million index at a recall@10 of 90.83%. Increasing $L$-search gives higher recall, but with increased latency and RU charges. The corresponding plots for the 1 million and 100,000 vector indices are shown in Figures 16 and 17 in Appendix F.
We hold the search list size constant and compare the query complexity as we increase the size of the index from 100K to 1 million to 10 million vectors in Figures 7 and 8. We note that the p50, p95 and p99 latencies increase by less than $2\times$ despite the $100\times$ increase in index size. The RU charge similarly increases by less than $2\times$, except in the case of Wiki-Cohere 10M. Here, we chose a 2-partition setup (we could also have fit it in one partition); therefore, we pay the extra cost of fanning out to an additional partition. Another important point is that despite the increase in dimensionality from 100 to 768, there is barely any increase in query latency or cost. The Cosmos DB design is well suited for extremely high-dimensional vectors.
Table 1: Query and monthly storage costs of enterprise-grade serverless vector databases for 1M queries over 10M 768D vectors as of May 5th, 2025. Azure Cosmos DB provides 94.64% recall@10.
Figure 7: p50, p95 and p99 query latencies and RU charges for Wiki-Cohere vector indices over 100K, 1 million and 10 million vectors with $L = 100$, which provides 94.64% recall@10.
We also compare the query cost of Cosmos DB with that of enterprise-grade serverless vector indices such as Pinecone, Zilliz, and DataStax in Table 1. For the 10 million Wiki-Cohere index, Cosmos DB has about $41\times$, $15\times$, and $2\times$ lower query cost than Pinecone (enterprise tier), Zilliz (enterprise tier), and DataStax (standard tier), respectively.
In all these experiments, the index is queried in a warm-up phase, followed by 5000 queries issued one at a time. The Bw-Tree cache is configured to be large enough to cache the quantized and adjacency-list index terms.
# 4.2 Scale out and Query Aggregation
Cosmos DB can scale up to millions of vectors per partition (subject to the 50GB data + index size limit), and scale out to billion-scale vector indices. Figure 9 shows the query latency and RU properties of such collections. The MS Turing 1 billion vector collection spans 50 partitions, and the 100 million vector collection spans 20 partitions. The server latency is measured per partition, and the client latency is the end-to-end latency observed on the client side. We use an E64is_v3 Azure VM with 64 vCPUs in the same Azure region as the index to aggregate query results with maximum concurrency and minimum latency. Since the query fans out to large partitions concurrently, the client end-to-end latency is sensitive to the worst latency on the server side. The best latency is achieved by using fewer partitions packed with as many vectors as possible. The same applies to RU charges, since the query cost grows logarithmically with the number of vectors in a partition, but linearly with the number of partitions.
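A back-of-the-envelope model of this trade-off can be written down directly from the stated growth rates; the constant factor is an illustrative assumption.

```python
# Partitioning trade-off: per-query cost ~ partitions * c * log(n/partition).
import math

def query_cost(total_vectors, partitions, c=1.0):
    per_partition = total_vectors / partitions
    return partitions * c * math.log(per_partition)

packed = query_cost(1_000_000_000, 50)    # 50 densely packed partitions
spread = query_cost(1_000_000_000, 200)   # same data over 200 partitions
# packed < spread: fewer, fuller partitions give lower modeled query cost.
```

The linear factor dominates the logarithmic one, which is why packing partitions as full as possible wins on both latency and RU charges.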
Figure 8: p50, p95 and p99 query latencies and query RU charge for MS Turing vector indices over 100K, 1 million and 10 million vectors with $L = 100$.
Figure 9: Query latency and RU charge for 1 billion (Left) and 100 million (Right) MS Turing vector index collections.
# 4.3 Ingestion
We use the parameters $R = 64$ and $L_{build} = 50$ for ingestion. The Bw-Tree chain length is set to 3. We set the cache to be large enough to hold the vector index terms (excluding full-precision vectors).
Figure 10 provides a breakdown of the CPU time spent in each insert. Most of the time is spent accessing the quantized vectors, as expected. Since they are mostly cached, this operation is synchronous and involves traversal of cached Bw-Tree terms. The next two dominant components are the times needed to update and read the adjacency lists, which involve Bw-Tree traversal and potential clean-up of terms. The actual distance computations and candidate-set updates in DiskANN are small: under 2ms per insert.
We also consider the cost of vector ingestion on the Wiki-Cohere-10M dataset, comparing Azure Cosmos DB with the enterprise-grade serverless vector databases Pinecone, Zilliz, and DataStax in Table 2. Cosmos DB provides lower vector insertion charges, about 33% and 56% less than Pinecone and DataStax (standard tier), respectively. Cosmos DB does have $5.4\times$ higher insertion charges than Zilliz. However, provisioned throughput and autoscale billing
Figure 10: Per-batch insertion latency without RG for MSTuring 1M and breakdown of components (quantized vectors, adjacency list, full vectors), with Bw-Tree chain length $= 3$.
Table 2: Insertion costs of enterprise-grade serverless vector databases for 10 million 768D vectors (Wiki-Cohere-10M) as of May 5th, 2025.
(Plot: recall, ranging 93.75–95.75, vs. runbook steps 50–250; series include Without_Inplace_Delete.)
Figure 12: Recall trends for the MSTuring-1M expiration time runbook. Vectors are compressed to 50 bytes, and the index uses $R = 32$, $L_{build} = 100$, $L_{search} = 100$.
models offer at least a $7\times$ price discount on RU charges compared to serverless, so these higher costs can be further reduced.
# 4.4 Recall Stability over Updates
Figure 11: Recall trends for the WikiCohere-10M expiration time runbook. Vectors are compressed to 192 bytes, and the index uses $R = 32$, $L_{build} = 100$, $L_{search} = 200$.
Figure 13: Recall trends for the MSTuring-10M clustered runbook. Vectors are compressed to 50 bytes, and the index uses $R = 32$, $L_{build} = 100$, $L_{search} = 100$.
We now consider the recall stability of a vector index in Cosmos DB using the runbooks described at the beginning of this section, plotted in Figures 11, 12, and 13. In all runbooks, an initial quantization schema is computed after 1000 vectors are inserted, and the index is re-quantized after 25,000 vectors are inserted. The experiments compare the effect of taking no action other than dropping the deleted vector (labeled "Without_Inplace_Delete") versus using the in-place delete algorithm (Algorithm 6).
The experiments show that the in-place delete algorithm is critical for maintaining high recall under a stream of updates. In the case of the expiration time runbooks (Figures 11 and 12), in-place delete increases average recall by 1-3 percentage points. On the clustered runbook, which experiences distribution shift and is significantly more difficult than the expiration time runbooks, in-place deletes achieve recall that is as much as 20 percentage points higher than the baseline, meaning they are significantly more robust to distribution shifts.
# 4.5 Sharded indices for Multi-tenancy
For multi-tenant apps, Cosmos DB allows users to create smaller DiskANN indices per tenant by specifying a VectorIndexShardKey in the indexing policy; one index is created for each unique value of the selected document property. This can significantly lower query latency and cost, and improve recall for queries focused on one tenant.
Table 3: Sharded Diskann with year as the shard key (row 1) compared with non-sharded DiskANN (rows 2 and 3) on YFCC.
Table 3 compares sharded DiskANN with a single DiskANN index on the YFCC dataset with year as the shard key. Sharded DiskANN provides $3\times$ lower mean and p99 latency with default query settings. It also provides much higher recall, at 98%, and was more accurate than increasing the search depth in the non-sharded case. The query RU charge also compares favorably.
# 5 RELATED WORK
# 5.1 Algorithms for Vector Search
Algorithms for vector search fall roughly into two categories. The first relies on partitioning the dataset into spatial cells and choosing a small number of cells to search exhaustively at query time. Common ways to partition include clustering and locality-sensitive hashing (LSH). Widely used partition-based vector search algorithms include FAISS [14, 21], SPANN [9], ScaNN [19], and many more [5, 45]. Partition-based algorithms benefit from shorter indexing times than graph algorithms on average, but their query complexity increases much faster with the size of the dataset compared to queries on graphs [27].
Graph-based algorithms form a proximity graph with one vertex per embedding and greedily traverse the graph at query time to find a query’s nearest neighbors. Examples include HNSW [25], NSG [17], DiskANN [37], and ELPIS [7]. Empirically, the query complexity of graphs scales logarithmically with the size of the dataset and is much better than LSH and clustering (see Figure 14). Graph-based ANNS algorithms can maintain high recall in the streaming setting [36, 44]. Furthermore, they are adaptable to filtered queries [18] and queries that respect notions of diversity [4]. For these reasons—scalability, query efficiency, versatility, and robust updates—we use graphs in this work.
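The traversal pattern shared by these graph indices can be sketched as a generic greedy beam search; the toy graph, distance function, and beam width $L$ (playing the role of the search list size) are illustrative assumptions, not any specific library's implementation.

```python
# Greedy (beam) search over a proximity graph, the traversal pattern shared
# by HNSW/DiskANN-style indices.
import heapq

def greedy_search(graph, dist, start, L, k):
    visited = {start}
    frontier = [(dist(start), start)]       # min-heap: closest unexpanded
    best = [(-dist(start), start)]          # max-heap of the L closest seen
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > -best[0][0] and len(best) >= L:
            break                           # no closer candidates remain
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                heapq.heappush(frontier, (dist(v), v))
                heapq.heappush(best, (-dist(v), v))
                if len(best) > L:
                    heapq.heappop(best)     # drop the farthest of the L
    return sorted((-d, v) for d, v in best)[:k]

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
dist = lambda v: abs(v - 4)                 # toy distance to the "query" 4
top2 = greedy_search(graph, dist, start=0, L=3, k=2)
```

Raising $L$ widens the beam, which is exactly the recall-versus-latency knob that the paginated search's $L$ parameter exposes.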
# 5.2 Vector Indices in Databases
SingleStore-V [8] supports both graph- and clustering-based vector indices inside the SingleStore database. The design loosely couples existing vector indexing libraries for the HNSW and IVF algorithms. SingleStore creates one vector index per segment, and rebuilds the indices as segments merge. Each query has to fan out to indices over multiple segments rather than being served by one index. More importantly, each vector index must be stored in memory for good performance, which can be expensive. For instance, a machine with 256GB of memory and 32 cores is used for their 10-million-scale experiments, and they report using 3.8GB of memory for the million-size GIST1M dataset. In contrast, we use $<5$GB of memory even for 10-million-scale indices over 12KB embeddings. Our system’s ability to match query performance with a small slice of a machine’s resources results in significantly lower system costs.
JVector [23] is a Cassandra-based vector index built on DiskANN. It constructs a DiskANN index in one shot and ingests it into the database, and supports incremental insertions and deletions using FreshDiskANN. Details on how the indices are managed and updated across partitions and segments are, to the best of our knowledge, not documented. DataStax offers a managed product based on JVector. While their serverless offering does not provide an availability SLA, the enterprise provisioned-capacity model has a 99.99% SLA. While their enterprise pricing is unspecified, their standard tier has a monthly price of \$900/month for 10 queries/second over a million-sized index. The corresponding Cosmos DB monthly cost with Autoscale is less than \$50.
Elastic offers vector search using a segment-based indexing system, and therefore has disadvantages similar to the SingleStore-V design. Furthermore, as a managed search engine, it does not offer the same level of robustness and data consistency as an operational database. To compare costs, consider the 10 million vector index over Wiki-Cohere data. Even with scalar quantization, we would need a machine with 15GB of DRAM. We estimate using their pricing tool that the smallest such machine from the Vector Search Optimized SKUs, with 1.9 vCPUs, would cost about \$0.7/hour, or about \$500/month. A manually provisioned Cosmos DB service at 500 RU/sec or 5000 RU/sec would serve 10 QPS or 100 QPS and cost about \$30 or \$300/month, respectively.
pgvector [6] integrates the HNSW and IVF-Flat indexing algorithms inside PostgreSQL and offers the inherent advantages of the PostgreSQL ecosystem. Initial versions required machines large enough to fit the entire dataset as well as the index in memory for reasonable performance. Scalar and binary quantization have recently been added to reduce the memory requirement, but performant indexing and updates still rely on the availability of much more memory than our system requires. pgvectorscale [28] similarly integrates DiskANN inside PostgreSQL and offers better performance than pgvector in limited-memory settings. We leave an exhaustive benchmarking against these systems for future work. They primarily differ from our system in that they do not offer flexible schemas, multi-tenancy, and scale-out out of the box. V-Base [47] also builds on PostgreSQL and aims to improve vector search performance. However, it uses an independent buffer pool not integrated with the PostgreSQL storage engine. Substantially more work is needed to build high availability and other features in this system.
AnalyticDB-V [42] pioneered the concept of integrated high-dimensional indices inside database systems that support hybrid queries with SQL semantics. We do not compare with it directly since it is a provisioning-based system.
# 5.3 Specialized Serverless Vector DBs
Specialized serverless vector databases are popular due to their ease of use and flexible pay-per-use pricing, especially for early-stage applications or workloads with irregular traffic patterns. However, these platforms currently exhibit limitations in enterprise-level readiness, particularly in terms of reliability, recovery, security, and compliance features.
For example, Pinecone offers 99.95% availability, but maintains only one data replica by default and provides limited backup-and-restore functionality applicable solely to the vector index, not the data itself. In terms of security, Pinecone supports Role-Based Access Control (RBAC) for both control- and data-plane operations, without custom role support. Certifications include HIPAA BAA, AICPA SOC, GDPR, and ISO 27001 as of this writing.
Another offering, Zilliz, provides no availability SLA, lacks replication and backup-and-restore, and limits RBAC functionality to the control plane without custom roles. Its compliance certifications are limited to SOC 2 Type II, ISO/IEC 27001, and GDPR.
Turbopuffer [15] is another serverless vector database designed for multi-tenant usage. However, it still lacks significant enterprise and robustness features such as RBAC, backup-and-restore, and global availability. Turbopuffer does not publicly list the query price for 10 million vector shards; however, its query complexity increases substantially with the size of the tenant. For example, the cost per 1 million queries increases from \$3.58 to \$33.4 when the number of documents per namespace increases from 100,000 to a modest 1 million [41].
In contrast, Azure Cosmos DB delivers robust enterprise readiness with a default availability of 99.99%, configurable up to 99.999%, alongside four data replicas by default. Distinct from any vector database solution, Cosmos DB has an SLA on document reads and writes of 10ms for 1KB transactional documents. It provides comprehensive RBAC capabilities for both control- and data-plane operations, including customizable role definitions. Additionally, it is compliant with more than a dozen standards, including HIPAA BAA, FedRAMP, GDPR, ISO 27001, and others [29].
# I. INTRODUCTION
Since the publication of the famous IBM manifesto on autonomic computing by Kephart and Chess [1] almost two decades ago, interest in the self-\* properties of systems in software engineering has increased rapidly. Some of the most widespread and frequently encountered self-\* properties in the literature are self-adaptation, self-awareness, self-healing, and self-organisation, to name a few. For example, publications on self-adaptive systems have increased by 304% in the last twenty years, compared to the fifty years before that (1951–2001).1
There are many disciplines that have been considering the notion of adaptation, for example, biology and evolutionary
This work has been partially funded by the Federal Ministry of Education and Research (BMBF) as part of MANNHEIM-AutoDevSafeOps (01IS22087P).
sciences [2, 3], climate change and environmental sciences [4, 5], as well as film, cinematography, and media studies [6]. The situation differs slightly in the field of software and systems engineering, where the majority of available works focus only on self-adaptive systems, without clarifying what is understood by the notion of adaptation in the first place. Hence, defining the property of system adaptation is circumvented by the existing works, although it is an essential prerequisite for a subsequent definition of self-adaptive systems.
Suppose we only focus on the available definitions of self-adaptive systems. In that case, we can observe the following: there exist prior works that propose informal definitions of self-adaptive systems as part of their papers [7, 8, 9]. However, all the informal definitions rely only on an intuitive understanding communicated by spoken language that is fairly ambiguous, which results in an under-specified usage of the terminology of self-adaptive systems. In response, to tackle the limitations of the informal definitions, some researchers have focused on defining these systems formally [10, 11]. However, despite the notable advancements in research on self-adaptive systems in the last two decades and the domain’s active community, none of the existing formal definitions is broadly accepted and used as a means of communication among experts in the field. Therefore, the understanding of the core terminology remains imprecise. To summarise, there is only an intuitive understanding of self-adaptive systems, without a more profound understanding and a precise definition of these systems and of how they differ from “ordinary” systems considered non-adaptive. Furthermore, defining the property of system adaptation is the first step toward defining self-adaptive systems, and this is something that the research field has not yet paid enough attention to.
Other existing works in the literature also support our observations: Broy [12] and Lints [13] have independently reached the same conclusion regarding the intuitive use of the terms adaptation and self-adaptive systems, arguing that although in some instances such intuitive usage might suffice, this is not the case in engineering and science, where a more rigorous definition is necessary [13]. Additionally, Weyns [14] states in a recent work that self-adaptive systems are not yet defined and that the lack of broadly accepted definitions is possibly the biggest challenge in the field of engineering self-adaptive systems [15, 16].
Problem. The lack of a precise understanding of what self-adaptive systems are has various software engineering consequences and implications, for instance, for how to build or engineer these systems beyond the famous MAPE-K conceptual model. The fundamental issue with MAPE-K is that it serves as a reference model for engineering not only self-adaptive, but any self-\* system in general. Although MAPE-K gives some intuition about the engineering of self-adaptive systems, primarily through the separation of concerns between the managed system and the managing system, more specific semantics of these two components within the conceptual model are still lacking. More specific semantics accompanying the MAPE-K reference model would also enable a better separation and characterisation of, e. g., self-adaptive, self-organising, and self-aware systems.
Moreover, as mentioned before, besides the engineering implications, the lack of a concrete definition of self-adaptive systems has various scientific consequences. Namely, it hinders constructive scientific debates, which are impossible if experts have different understandings of what self-adaptive systems are. A better semantics of self-adaptive systems will 1) set a foundation for more constructive scientific debates, 2) complement the already existing works (methods, architectures, models, etc.) in this field, and 3) set the foundation on how to evaluate and compare these systems in the future.
Gap. Despite 1) the acceptance and the acknowledgement of adaptation as an emerging property of software systems, and 2) the various systematic mapping studies and literature reviews in the field of self-adaptive systems [17, 18, 19, 16, 20], to the best of our knowledge, there is no other study that investigates and summarises how self-adaptive systems have been previously defined and characterised in the literature. In particular, no prior work summarises and analyses the existing formal definitions of self-adaptive systems in order to understand and gain insight into why none of these formal efforts is accepted by the community and what are their concrete limitations.
Solution. As part of this paper, we therefore conduct a systematic literature review that aims to summarise and analyse the existing works that formally define and specify self-adaptive systems. The following central research question guides our research:
How are self-adaptive systems formally defined in the literature?
To tackle this broad research question, we derive three more refined research questions (further explained in Section II), investigating 1) if the existing formal definitions also formalise the notion of system adaptation as part of their contributions, 2) which characteristics of self-adaptive systems are considered in the existing formalism, and 3) the formal notations used in each of the studies.
Contribution. Our systematic literature review provides an overview of the current state-of-the-art and structures the existing knowledge on how self-adaptive systems have been defined in the literature so far. More importantly, we analyse and summarise the limitations of the existing formal definitions, which provide new insights into why none of the formal definitions and specifications is accepted and used more broadly by the community. Our contributions also provide a foundation for improving the semantics of the core terminology of self-adaptive systems. This potentially leads towards a future establishment of a more unified understanding of these systems and, ideally, even to a broadly accepted definition of self-adaptive systems in the near future. A more profound understanding of the terminology will support the community in setting new challenges and identifying new directions for future research.
The rest of this paper is structured as follows: Section II presents the methodology used, based on which we conduct our systematic literature review and the identified research questions. We present the results and answer the research questions in Section III. In Section IV, we further discuss the main findings, followed by a discussion on the limitations of this review. In Section V, we present the related work and finally, Section VI concludes the paper.
# II. LITERATURE REVIEW METHODOLOGY
This section describes the research methodology we followed in conducting the systematic literature review. The systematic process followed the guidelines proposed in various works by Kitchenham et al. [21, 22]. An overview of our complete methodology is presented in Fig. 1.
Phase 1: Defining research questions: The overall objective of the systematic literature review was to give an overview of the current state of the art regarding the definition of self-adaptive systems in software and systems engineering, concretely, how self-adaptive systems have been formally defined in the existing literature, which is the leading research question of this work. To support answering the leading research question in more detail, we derive three refined research questions:
RQ-A Do the papers with formal definitions of self-adaptive systems also define system adaptation as part of their contributions?
RQ-B Which characteristics of the self-adaptive systems are considered in the existing formal definitions and specifications?
RQ-C Which formal notations have been used across different works to define self-adaptive systems?
Phase 2: Data collection: In this study, we collected the papers in two ways: manually, through an expert search, and automatically, through a systematic collection of studies.
Expert search. We started the data collection in a non-systematic way, referred to as an expert search: based on our domain knowledge, we assembled an initial set of papers known to us as key contributions in
Figure 1. Overview of the complete literature review methodology: data collection (expert search and systematic search), filtering by inclusion and exclusion criteria, merging of the expert and systematic results, and selection of primary studies.
the field of self-adaptive systems. We extended this initial set of studies in three ways. First, we snowballed through the related work of the initial set of studies, as described by Wohlin [23]. Second, we searched through the relevant papers in the conference proceedings of SEAMS and SASO, the two most relevant venues in this domain of research. Finally, we searched for relevant papers in previously published systematic literature reviews and surveys on self-adaptive systems. In this last step, we considered the studies from Weyns et al. [24], Muccini et al. [17], Macías-Escrivá et al. [18], and Krupitzer et al. [7, 19]. In the end, the expert search yielded 127 relevant studies in total.
Systematic studies collection. Our systematic search and collection of studies consists of two steps: 1) selecting the digital libraries on which we perform the automated search, and 2) deriving the search query that we later use in the selected databases.
We chose the following sources to perform the search:
ACM Digital Library (https://dl.acm.org/)
IEEE Xplore (http://ieeexplore.ieee.org/)
Scopus (https://www.scopus.com/)
ScienceDirect (http://www.sciencedirect.com/)
Wiley InterScience (http://onlinelibrary.wiley.com/)
World Scientific (https://www.worldscientific.com/)
The search query derivation was an iterative process. Concretely, half a dozen trial searches were performed in each database to evaluate the number of relevant studies obtained by different queries. Through this iterative process, we aimed to better understand the suitability of different search queries and keyword combinations, together with their advantages and limitations, which was crucial for the final query selection. Namely, we aimed for a search query as general as possible, covering a broad range of relevant papers from the literature while minimising the number of irrelevant studies. Some of the initial search queries were the following: (self-adapt* AND software), (self-adapt* AND system), and (self-adapt* AND engineer*). Our preliminary results showed that including the keywords system and engineer* in the query returned many irrelevant studies, e. g. from networks and hardware. Conversely, we realised that restricting the query to the keyword software excludes works from the domain of cyber-physical systems, which have gained increasing prominence in the field of self-adaptive systems in the last decade. Furthermore, since the main focus of this literature review is to get a better understanding of how self-adaptive systems are defined, we also tried (self-adapt* AND defin*) as a search query, which unfortunately returned only a few results. Other combinations of these keywords led either to a broad set of irrelevant papers or to a very narrow search. For these reasons, we used (self-adapt* AND (software OR cyber-physical)) as the final search query for our automated search on the databases identified above, searching by meta-data (title, abstract, keywords) or only by title, depending on the advanced search options available in the chosen databases. The systematic collection resulted in 1366 studies matching the derived search query.
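As an illustration, the boolean structure of the final query can be checked mechanically against a paper's metadata. The following minimal Python sketch mirrors the query (self-adapt* AND (software OR cyber-physical)); the example metadata strings are invented for illustration, not actual database records:

```python
import re

def matches_query(metadata: str) -> bool:
    """Mirror the final search query:
    (self-adapt* AND (software OR cyber-physical))."""
    text = metadata.lower()
    has_self_adapt = re.search(r"self-adapt\w*", text) is not None
    return has_self_adapt and ("software" in text or "cyber-physical" in text)

# Invented metadata strings for illustration:
assert matches_query("Self-adaptation in cyber-physical systems")
assert not matches_query("Adaptive control for hardware networks")
```

Note that the wildcard `*` of the database syntax is translated to `\w*` here, so that self-adaptive, self-adaptation, etc. all match.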
Phase 3: Defining inclusion and exclusion criteria: After the collection of the papers, we needed to perform the first study selection. Since we are exclusively interested in studies related to system adaptation and engineering self-adaptive systems, in this phase we defined rigorous inclusion and exclusion criteria to filter out the irrelevant papers collected during the extensive search in the previous phase. The inclusion and exclusion criteria that we defined for this purpose are presented in Table I and Table II, respectively.
Phase 4: Filtering according to the inclusion and exclusion criteria: In this stage, we applied the inclusion and exclusion criteria to the studies collected through 1) the expert search and 2) the automated systematic collection. Two of the co-authors of this literature review performed the filtering and selection of the studies in this stage. During the voting process, the title, the abstract, and, if necessary, the introduction and conclusion of each study (1493 in total: 127 from the expert search and 1366 from the systematic collection) were read and carefully examined to determine their relevance. The exact steps of the classification and the voting process of this phase are depicted in Fig. 1. In a nutshell, the authors voted on and classified each paper individually. If the authors’ votes were in disagreement, a discussion followed until the authors reached a unified decision about the study under analysis. Applying the inclusion and exclusion criteria resulted in 338 studies in total: 44 studies from the expert search and 294 studies from the systematic collection.
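The two-reviewer voting step can be sketched as follows; the paper identifiers and vote labels below are invented for illustration:

```python
def collate_votes(votes_a, votes_b):
    """Combine two reviewers' include/exclude votes: agreements are
    decided immediately, disagreements are queued for discussion."""
    decided, to_discuss = {}, []
    for paper, vote in votes_a.items():
        if vote == votes_b[paper]:
            decided[paper] = vote
        else:
            to_discuss.append(paper)
    return decided, to_discuss

votes_a = {"p1": "include", "p2": "exclude", "p3": "include"}
votes_b = {"p1": "include", "p2": "include", "p3": "include"}
decided, to_discuss = collate_votes(votes_a, votes_b)
assert decided == {"p1": "include", "p3": "include"}
assert to_discuss == ["p2"]  # discussed until a unified decision is reached
```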
Table I INCLUSION CRITERIA.
Table II EXCLUSION CRITERIA.
Phase 5: Merging expert and systematic search: In this phase, the filtered results from both the expert search and the systematic collection from the previous phase are combined, and the found duplicates are removed. This resulted in 314 unique and relevant papers.
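This merge-and-deduplicate step can be sketched as below; deduplication by case- and whitespace-normalised title is our own assumption about how duplicates were detected, and the titles are invented:

```python
def merge_unique(expert, systematic):
    """Merge two lists of paper titles, dropping duplicates by
    case- and whitespace-normalised title."""
    seen, merged = set(), []
    for title in expert + systematic:
        key = " ".join(title.lower().split())
        if key not in seen:
            seen.add(key)
            merged.append(title)
    return merged

papers = merge_unique(["FORMS: a reference model"],
                      ["forms: A Reference Model", "Engineering SAS"])
assert len(papers) == 2  # the duplicate title is kept only once
```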
Phase 6: Selection of primary studies: The selected relevant papers from the previous phase could be analysed in different ways based on the aims and the goals of the concrete study. Since in our work we are interested in how self-adaptive systems are formally defined in the literature, we analysed and classified the relevant studies from Phase 5 according to two questions, depicted in the activity diagram in Fig. 2. The selection process in this phase was similar to the inclusion and exclusion criteria filtering process described previously in Phase 4. In summary, two of the authors independently analysed and classified the 314 relevant studies from the previous step, according to the two-step selection process from Fig. 2. The votes were consolidated, and in case of disagreements, discussions took place among the authors until reaching a unified decision. Applying the two-step selection process in this phase resulted in a final set of nine primary studies that are analysed rigorously in the rest of this paper.
Figure 2. Two-step selection process.
Please note the following about our analysis: 1) there were more than nine studies that included some formalism; however, we only selected those papers whose goal was to formally define self-adaptive systems, and studies with different objectives were excluded during this selection process; and 2) there were three more papers [25, 26, 27] that claimed to define self-adaptive systems in their abstract and introduction, but since the actual contributions of these papers did not fulfil their claims, we excluded them from our primary studies.
Phase 7: Reporting the review: A reproducible package with the selected studies in each of the phases of our methodology, the authors’ voting, and the analysed data is available online.4 Additionally, the package contains the BibTeX bibliography (.bib) of all the relevant studies from Phase 5.
# III. RESULTS
# A. General overview of the results
This section gives an overview of the 314 relevant papers analysed in Phase 6. In Fig. 3, we show the distribution of the papers over the years in different types of venues. We can also see in Fig. 3 that the first works on this topic were published in 1999, and the publication trend has grown since 2004, which can be correlated with two distinct events.
The first event is related to the first noted instance of the term self-adaptive software in the literature in a technical report by Laddaga in 1997 [28]. In this report, Laddaga informally defines self-adaptive software systems as “[. . . ] software that evaluates its own behaviour and changes behaviour when the evaluation indicates that it is not accomplishing what the software is intended to do, or when better functionality or performance is possible.” The author also adds that the research in self-adaptive systems “[. . . ] seeks a new basis for making software adaptive, that does not require specific adaptive techniques, such as neural networks or genetic programming, but instead relies on software informed about its mission and about its construction and behaviour.” The second, probably even more significant event was the publishing of the famous IBM manifesto on autonomic computing by Kephart and Chess [1] in 2003. This paper introduced the MAPE-K conceptual model and set the foundation not only for engineering self-adaptive systems but self-\* systems in general, e. g., self-awareness [29, 30] and self-healing [31, 32]. The manifesto on autonomic computing also set a foundation for a whole new research field on self-adaptive systems, which has been expanding for the last two decades.
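The MAPE-K feedback loop introduced by the manifesto can be illustrated with a minimal sketch; the latency sensor, threshold, and scaling action below are our own invented illustration and not part of the manifesto itself:

```python
# Shared Knowledge: the adaptation goal and the current configuration.
knowledge = {"goal_latency_ms": 100, "replicas": 1}

def monitor(sensor_reading):                       # Monitor: observe the system
    return {"latency_ms": sensor_reading}

def analyse(symptoms):                             # Analyse: detect a violation
    return symptoms["latency_ms"] > knowledge["goal_latency_ms"]

def plan(violation):                               # Plan: choose an adaptation
    return {"add_replica": 1} if violation else {}

def execute(change):                               # Execute: enact the change
    knowledge["replicas"] += change.get("add_replica", 0)

for reading in [80, 150, 90]:                      # one loop iteration per reading
    execute(plan(analyse(monitor(reading))))

assert knowledge["replicas"] == 2  # only the 150 ms reading triggered adaptation
```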
Figure 3. Overview of the number of publications per year.
Figure 4 shows that, of the 314 relevant studies we analysed in Phase 6, the majority (56%, i. e., 175 papers) provide neither an informal nor a formal definition, nor even an intuition of the authors’ understanding of self-adaptive systems. We did not expect these results, since these studies were rigorously selected for contributing solutions for engineering self-adaptive systems. 41% of the studies (130 papers) provide informal definitions as part of their works, and only 3% of the studies (9 papers) provide some formalisation of the notion of self-adaptive systems. We selected those nine studies as primary studies for further analysis in our systematic review.
Figure 4. Overview of the type of definitions.
# B. Identifying the different classes and dimensions for analysis
To answer the leading research question of this work and to discuss how self-adaptive systems are formally defined in the literature, we introduce four classes of analysis dimensions: (C1) papers that formally define the property of system adaptation as part of their formal definition of self-adaptive systems, (C2) papers that formalise MAPE behaviour, (C3) papers that consider different characteristics of self-adaptive systems in their formal definitions, and (C4) the formal notation used. The introduced analysis classes contain eight analysis dimensions in total, based on which we analysed all the primary studies (see Table III).
As discussed previously in Section I, in order to define self-adaptive systems, we first need to understand what the notion of system adaptation means in the field of software and systems engineering. Defining adaptation as a system property is 1) the core pillar for defining self-adaptive systems, and 2) necessary to compare the existing and future works in this field. Therefore, we want to investigate whether the existing papers on formalising self-adaptive systems also define system adaptation as part of their contributions. Hence, in our first class (C1), we differentiate between 1) papers with a concrete aim to explicitly formalise system adaptation, 2) papers that assume they define this notion implicitly, for instance through formalising adaptive system behaviour, and 3) papers that do not formally define system adaptation in their work.
During our analysis, we also identified that some of the primary studies aimed at defining adaptive behaviour by specifying the behaviour of the MAPE-K feedback loop. In response, we introduced the second class (C2) for analysis.
In the third class (C3) of the analysis dimensions, we consider various characteristics identified in the literature as essential while defining self-adaptive systems based on the external and internal principles proposed in a recent work by Weyns [14]. As we elaborated previously, Weyns has stated that there is no consensus on the definition of self-adaptation so far in the community. In response to that, as part of his work [14], he proposes two complementary principles— external and internal—that characterise self-adaptive systems. The principles are built upon the consolidated usage of the notion of self-adaptive systems for the past decade in the community. To the best of our knowledge, this is the most complete consolidated characterisation of self-adaptive systems. For that reason, we used the characteristics from the principles to identify further dimensions for the analysis of our primary studies.
According to the external principle, a self-adaptive system autonomously handles changes and uncertainties from its environment (also referred to as context), the system itself, and the system goals. The context is the part of the environment relevant to a particular system [12]. These two terms have often been used interchangeably in the domain of self-adaptive systems; however, from our point of view, having a clear understanding and differentiation of context and environment is important. That said, making this differentiation is not the aim of this work, and throughout the rest of the paper and in Table III we use the concept of context only. By system in the table, we refer to the managed system that gains the ability to adapt as part of a self-adaptive system, and not to the self-adaptive system as a whole.
The internal principle separates the system goals of self-adaptive systems into domain and adaptation goals. The domain goals are related to the concerns of the managed system (the system that gains adaptation capabilities), whereas the adaptation goals are related to the concerns of the managing system (the entity of the self-adaptive system that enables the adaptation of the managed system). We use this differentiation from the principles and make a further semantic distinction between the managed and the managing system as part of a self-adaptive system. Concretely, we say that the domain goals are related to the functionality of the system, more precisely to the fulfilment of the system function, i. e., the function of the managed system. In contrast, we consider the adaptation goals to be related to one or more quality criteria or objectives, which is also supported by prior works [33, 34]. In sum, we consider the separation of the goals in self-adaptive systems into domain and adaptation goals as essential, as it provides the basis for discussing and distinguishing when a system adapts and when it simply operates or functions. Furthermore, during the analysis of our primary studies, we did not merely search for whether these exact terms are used. Instead, we analysed the studies more thoroughly to determine whether they adopt these ideas under different terminology, or whether these ideas are implicitly considered in their contributions and formulas without being given a specific name.
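The distinction between the two goal kinds can be made concrete with a small, hypothetical web-service example (the state fields and the latency threshold are our own illustration): the domain goal checks that the managed system fulfils its function, while the adaptation goal checks a quality objective.

```python
def domain_goal(state):
    # Domain goal: the system function is fulfilled (requests are served).
    return state["serves_requests"]

def adaptation_goal(state):
    # Adaptation goal: a quality objective (latency within budget).
    return state["p95_latency_ms"] <= 200

state = {"serves_requests": True, "p95_latency_ms": 350}
assert domain_goal(state)          # the system still operates (functions) ...
assert not adaptation_goal(state)  # ... but the quality objective is violated,
                                   # which is what should trigger adaptation
```

This is exactly the basis for distinguishing when a system merely functions (domain goal satisfied) from when it needs to adapt (adaptation goal violated).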
In a nutshell, in the third class (C3) of analysis dimensions, we differentiate between 1) papers that include a concrete characteristic formally as part of their definition, 2) papers that identify or mention a concrete characteristic only informally and do not include it as part of their formalism, and 3) papers that do not even identify or mention the necessity for the consideration of a concrete characteristic in their definition of self-adaptive systems.
Finally, in the fourth class (C4), we have noted the formal notation used in each paper.
# C. Analysis of the primary studies
In this section, we analyse the primary studies in order to consolidate the existing work and answer the research questions. A thorough analysis of the primary studies and a discussion of their limitations should enable us to set the foundation for improving the semantics and to derive requirements for a unified and precise definition of self-adaptive systems in the future. Although we collected the papers systematically, we ended up with only nine primary studies for the analysis. Therefore, we decided to take a more qualitative approach to analysing our primary studies, guided by the leading research question introduced in Section II. We summarise the qualitative analysis of our primary studies in the following, based on which Table III is filled. Due to space limitations, we do not give the formal details, but we note the formal notation used in each of the primary studies.
One of the first efforts to formally define adaptive behaviour was made by Zhang and Cheng [35], in which the authors proposed a model-driven software development process for dynamically adaptive programs. According to the authors, adaptive programs are generally more difficult to specify due to their high complexity, especially in multi-threaded adaptations, where the program behaviour results from the collaborative behaviour of multiple threads. This is the first main limitation of this work, since adaptation is not necessarily a property emerging from collaboration, and it should be treated and defined as a separate concept. In their formal representation of adaptive programs, a program is represented by a state machine that exhibits certain behaviour and operates in specific domains. A dynamically adaptive program operates in different domains and changes its behaviour (i. e., the behavioural mode corresponding to the specific domain) at runtime in response to domain changes. The authors do not explicitly formalise system adaptation; however, they illustrate the specification process for three types of adaptive behaviour by modelling an audio streaming protocol with Petri nets. The authors use prior works on specifying dynamic system architectures [36, 37] to formalise adaptive programs. As a result, they often use the terms adaptive and dynamic interchangeably throughout the paper without clearly distinguishing between them, which is the second limitation of this study. Lastly, this work does not consider any of the other analysis dimensions identified in class C3, which are paramount for a holistic formal definition.
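Zhang and Cheng's view of a dynamically adaptive program, a state machine whose active behavioural mode corresponds to the current operating domain, can be sketched in a toy example; the domains and modes below are invented and not taken from their audio-streaming case study:

```python
# Each operating domain maps to a behavioural mode (invented example).
MODES = {
    "wifi":     lambda pkt: f"stream:{pkt}",    # high-bandwidth behaviour
    "cellular": lambda pkt: f"compress:{pkt}",  # low-bandwidth behaviour
}

class AdaptiveProgram:
    def __init__(self, domain="wifi"):
        self.domain = domain

    def on_domain_change(self, new_domain):
        # Adaptation: switch the active behavioural mode at run time.
        self.domain = new_domain

    def handle(self, pkt):
        # Ordinary behaviour, determined by the current mode.
        return MODES[self.domain](pkt)

p = AdaptiveProgram()
assert p.handle("a") == "stream:a"
p.on_domain_change("cellular")
assert p.handle("a") == "compress:a"
```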
A similar concept in which adaptation is described through the realisation of different behavioural modes is proposed by Klarl [38]. In this work, the author realises the behavioural modes by roles which can be dynamically adopted by a component. Concretely, the author proposes a model-driven engineering process to develop self-adaptive systems, in which the adaptation logic (i. e., the managing system) is considered independently from the application logic (i. e., the managed system) and supports the systematic transition between their components. For specification, the author proposes hierarchical adaptation automata, and for the design—a role-based architecture according to a Helena Adaptation Manager pattern. This study neither defines the notion of system adaptation nor adaptive behaviour. Except for considering the context (concretely, perceptions about the context) and the system state as attributes of the signature of self-adaptive component types in the formalism of the paper, no other analysis dimension from class C3 is considered as part of this study.
In two separate works, Broy et al. [12] and Bruni et al. [11] try to answer how self-adaptive systems differ from “ordinary” systems, which are considered non-adaptive. Concretely, Broy et al. aim at defining adaptive system behaviour while differentiating interaction patterns between three separate entities: the system, a subject (a user or another technical system that interacts with the system), and the context. The authors claim that one can distinguish the adaptive behaviour of the system only by considering and observing the context in which the system operates. The authors further classify the system inputs into direct/explicit and indirect/implicit, and assume that a system always receives the user inputs explicitly. Therefore, adaptive system behaviour can be observed if the system reaction resulting from the user input (the explicit input) is additionally determined by information about the context received through the implicit inputs. Based on these ideas, the authors identify four types of observable system behaviour (i. e., adaptive behaviour) with respect to the user: non-adaptive, non-transparent adaptive, transparent adaptive, and diverted adaptive behaviour. To summarise, the authors identify the consideration of the context and the system (state) as relevant and necessary for system adaptation; therefore, they include them as part of their formalism, which is based on the FOCUS modelling approach.
Bruni et al. [11] propose a conceptual framework for adaptation, in which they assign a central role to control data, which govern the adaptive behaviour of a component. The authors define adaptation informally as a run-time modification of the control data and, consequently, consider a component self-adaptive if it can modify its own control data at run-time. They formally define adaptable vs. non-adaptable components, self-adaptive components, and knowledge-based adaptation, in which they recognise the context as the observable part of the environment. The authors formalise their conceptual framework using a Labelled Transition System (LTS) model. As in [12], the authors consider the context and the system state as part of their formalisation; however, all the other analysis dimensions from class C3 are not considered in either of these two works. The most significant shortcoming of this work is that the central idea of their concept, the control data, is left fuzzy and unclear: the authors do not elaborate precisely on what they understand under the notion of control data, how one can identify control data in a system, how the control data influence the system, or how the control data are structured. Furthermore, in contrast to the work by Broy et al. [12], Bruni et al. do not formalise or specify adaptive behaviour as part of their work.
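Bruni et al.'s criterion, that a component is self-adaptive if it can modify its own control data at run time, can be sketched as follows; the choice of a strategy flag as the control data is our own illustration and not an example from the paper:

```python
class SelfAdaptiveComponent:
    """A component whose behaviour is governed by control data
    that the component itself can modify at run time."""

    def __init__(self):
        self.control_data = {"strategy": "eager"}  # governs behaviour

    def step(self):
        # Ordinary behaviour, fully determined by the control data.
        return self.control_data["strategy"]

    def adapt(self, load):
        # Adaptation = run-time modification of the own control data.
        if load > 0.8:
            self.control_data["strategy"] = "batched"

c = SelfAdaptiveComponent()
assert c.step() == "eager"
c.adapt(load=0.9)
assert c.step() == "batched"
```

The sketch also makes the paper's gap visible: nothing in the formalism itself says which part of a component's state counts as control data, which is exactly the ambiguity criticised above.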
Weyns et al. [10] propose formally specified models for designing self-adaptive software systems. The authors propose a FOrmal Reference Model for Self-adaptation (FORMS), which enables precise descriptions of the architectural characteristics of distributed self-adaptive software systems in the early design phases of the system. FORMS primarily focuses on the formalisation of the structural aspect of self-adaptive systems without providing any insights into the behavioural semantics of the self-adaptive systems. Although FORMS had and continues to have a notable impact in the community, it neither defines system adaptation nor adaptive behaviour. FORMS considers the aspect of context and system formally, and the adaptation goals are only considered informally throughout the work. Finally, similarly to [35], the authors of FORMS leverage some other concept—specifically in FORMS, the notion of system distribution—to compensate in some sense for the lack of precise understanding of system adaptation necessary for the definition of self-adaptive systems.
Arcaini et al. in [40] and Inglesia and Weyns in [41] aim to define self-adaptive systems by formally specifying the MAPE-K feedback loop. Arcaini et al. [40] show how MAPE-K loops can be explicitly formalised in terms of agents’ actions using Abstract State Machines (ASM) transition rules to model the behaviour of self-adaptive systems. Concretely, the authors exploit the concept of multi-agent ASM to specify decentralised adaptation control by using MAPE computations. Although the authors aim at modelling and specifying self-adaptive systems, concretely the behavioural aspect of self-adaptation, their contribution primarily focuses on specifying the behaviour of the MAPE feedback loop (i. e., the managing system) and not the behaviour of the self-adaptive system as a whole. The other shortcoming is that the authors consider the adaptation as a result of the collaborative behaviour of multiple managing agents (i. e., MAPE-K loops). However, system adaptation is not necessarily an emerging property from collaboration, and its definition should be independent of the type and nature of the system. Finally, the authors consider the context and system in their formal specifications and informally the adaptation goals.
To support the design and engineering of self-adaptive systems, Inglesia and Weyns in [41] derive a set of MAPE-K formal templates for designing feedback loops of self-adaptive systems. The proposed templates comprise: 1) behaviour specification templates for modelling different components of the MAPE-K loop and their interaction, using networks of timed automata (TA), and 2) property specification templates for specifying required properties of the adaptive behaviour, based on timed computation tree logic (TCTL). Similarly to the work of Arcaini et al. [40], the authors of [41] do not define the adaptive behaviour of the entire self-adaptive system but instead specify the MAPE behaviour, assuming that the MAPE behaviour will eventually adapt the managed system. As part of this work, the context, the system, and the adaptation goals are formally considered in the templates.
A more complete formalism has been proposed in a recent work by Bucchiarone and Mongiello [42], in which the authors introduce a formal framework to characterise different aspects of ensemble-based software engineering. Concretely, they present 1) how to model dynamic software ensembles using Typed Graph Grammars (TGG), 2) how to specialise and re-configure ensembles, and 3) how to manage collective adaptations in an ensemble. As part of this work, the authors use TGGs combined with Labelled Transition Systems (LTSs) to formally define system context, context-awareness, and system adaptation, albeit only within the frame of system ensembles, which is the biggest shortcoming of this paper: adaptation as a system property should be considered and defined independently of ensembles or system collaboration, not as an emerging property thereof. It is important to point out that, compared to all the other analysed primary studies, there is a notable maturity in the work by Bucchiarone and Mongiello [42]. Concretely, this is the only work that formally defines system adaptation as part of its contribution. Furthermore, the authors also identify the consideration of the context and the system, by explicitly considering the system functionality that adapts, as necessary for discussing system adaptation and, therefore, self-adaptive systems.
Table III SUMMARY OF PAPERS THAT PROVIDE SOME FORMAL DEFINITIONS ON SYSTEM ADAPTATION AND SELF-ADAPTIVE SYSTEMS.
Qureshi et al. in [39] take a different approach from the rest of the primary studies. In their work, the authors focus on defining the requirements for self-adaptive systems instead of defining self-adaptive systems themselves. The authors tackle how the requirements problems (i. e., the problems solved during requirements engineering) differ for self-adaptive systems compared to systems that are not self-adaptive. As previously observed, Broy et al. [12] and Bruni et al. [11] also tried to differentiate how self-adaptive systems differ from those considered non-adaptive. The overarching objective of the work by Qureshi et al. [39] is to identify the concepts and relations that need to be considered while eliciting and analysing requirements for self-adaptive systems. Therefore, the authors do not aim to define system adaptation, adaptive behaviour, or MAPE behaviour as part of their work. Although this paper does not explicitly identify the relevance of independently considering the system (i. e., the managed system that gains adaptation capabilities) as part of its formalism, it is the only paper among our primary studies that makes the distinction and formally considers both the domain goals (referred to as mandatory goals in their work) and the adaptation goals (referred to as quality constraints).
Addressing RQ-A: Do the papers with formal definitions of self-adaptive systems also define system adaptation as part of their contributions? It is not possible to define self-adaptive systems without first defining what it means for a system to adapt. Our literature analysis showed that only one study formally defines system adaptation as part of its effort to define self-adaptive systems, and even then only within the frame of system ensembles. Two primary studies implicitly define system adaptation by specifying adaptive system behaviour as part of their contributions. Finally, two studies specify the MAPE behaviour (i. e., the behaviour of the managing system as part of a self-adaptive system), assuming that the MAPE behaviour will eventually adapt the managed system.
Addressing RQ-B: Which characteristics of self-adaptive systems are considered in the existing formal definitions and specifications? While RQ-A focused on the behavioural aspect of self-adaptive systems, RQ-B shifts the focus to their structural aspects. Concretely, in this research question we investigate which of the characteristics recently consolidated in this field of research, as explained in Section III-B, are considered in the existing body of work that formally defines self-adaptive systems. The most notable insight of our analysis is that none of the primary studies considers the aspect of uncertainty, either formally or informally, as part of their contribution. This is extremely surprising, since the notion of uncertainty has been at the centre of the idea behind self-adaptive systems. Precisely, the core motivation for self-adaptive systems is built on the unpredictable changes and uncertainties that trigger the need for system adaptation at run time. This is also roughly how all the informal definitions available in the literature describe self-adaptive systems, with a liberal use of the notion of uncertainties, a notion that is seemingly difficult to formalise, as shown by our results. These results are further evidence of the importance of having a clear, systematic, and formal definition of self-adaptive systems.
Almost all of the primary studies that we analysed consider the (states of the) context and system in some way as part of their formalism—the majority of them formally. From this we conclude that system adaptation and, therefore, self-adaptive systems cannot be defined in isolation from the context in which the self-adaptive systems operate and the properties of the system (i.e., the managed system) that gains the ability to adapt as part of a self-adaptive system.
Four out of nine primary studies (two formally and two informally) consider the concept of the adaptation goals, as we previously described them in Section III-B, and identify that the system self-adapts in order to fulfil some quality objectives. However, the number of primary studies that consider the domain goals is much lower: out of the nine primary studies, only one considers the domain goals. This is probably because this differentiation and the identification of the domain goals is much more subtle, but as we discussed in Section III-B, it is necessary in order to argue when the system adapts and when it simply functions.
Addressing RQ-C: Which formal notations have been used across different works to define self-adaptive systems? Among the primary studies, three papers used Labelled Transition Systems (LTS)—one of which used Typed Graph Grammars (TGG) in combination with LTS. The remaining studies used: Petri nets, FOCUS, Techne, the Z language, abstract state machines (ASM), timed automata (TA) and timed computation tree logic (TCTL).
# IV. DISCUSSION
# A. Discussion on the results and future works
Despite the vibrant and growing community and the expanding interest in self-adaptive systems, our results have shown a sparsity of contributions that define self-adaptive systems formally. We derive various premises from the analysis and the results of our study, which set the foundation for the requirements for a holistic, formal definition.
The ideas of autonomic systems that introduced the MAPE-K conceptual model have profoundly impacted the engineering field and have initiated various new lines of research over the last two decades. Although MAPE-K gives some intuition behind the engineering of self-adaptive (and self-\*) systems through the separation of concerns between the managing and the managed system, a more specific semantics of these two components is still missing. For instance, one can assume that every system that does some monitoring, planning, analysis, and execution, and has some loose interpretation of the knowledge (e.g., every cyber-physical system), is self-adaptive by default. In response, the principles proposed by Weyns [14], concretely the internal principle that differentiates between the domain and the adaptation goals, have already made initial steps in the direction of improving the terminology.
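The separation of concerns between the managing and the managed system can be made concrete with a toy sketch. Everything below (class names, the load threshold, the scaling action) is illustrative and not taken from any of the surveyed papers; it only shows a managing system adapting a managed system through a monitor, analyse, plan, execute loop over shared knowledge.

```python
class ManagedSystem:
    """The managed system only performs its domain function."""
    def __init__(self):
        self.workers = 1
        self.load = 0.0

    def work(self, load):
        self.load = load


class ManagingSystem:
    """A minimal MAPE-K loop over shared knowledge (K).
    The threshold and the adaptation action are invented for illustration."""
    def __init__(self, managed):
        self.managed = managed
        self.knowledge = {}  # K: shared knowledge base

    def monitor(self):
        self.knowledge["load"] = self.managed.load

    def analyse(self):
        # Adaptation goal (illustrative): keep load per worker <= 0.8
        per_worker = self.knowledge["load"] / self.managed.workers
        self.knowledge["violation"] = per_worker > 0.8

    def plan(self):
        self.knowledge["plan"] = ("scale_up" if self.knowledge["violation"]
                                  else "steady")

    def execute(self):
        if self.knowledge["plan"] == "scale_up":
            self.managed.workers += 1  # adapts the managed system

    def loop(self):
        self.monitor(); self.analyse(); self.plan(); self.execute()


managed = ManagedSystem()
manager = ManagingSystem(managed)
managed.work(1.8)       # domain function: the system handles load
manager.loop()          # adaptation: the managing system reacts
print(managed.workers)  # → 2
```

Note how the sketch illustrates the terminological gap discussed above: any control loop of this shape "adapts", so the MAPE-K structure alone does not separate system functioning from system adapting.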
As we previously discussed in Section III-B, it is paramount to distinguish between system functioning and system adapting. Making this distinction will set the foundation for defining system adaptation and, subsequently, self-adaptive systems. In our analysis, we observed that in three of the primary studies [12, 11, 39], the authors raised the question of the necessity to differentiate (self-)adaptive systems from "ordinary", non-adaptive systems. However, the work by Bucchiarone and Mongiello [42] is the only study that contributes in this direction, in which the authors focus on identifying the system functionality that adapts, therefore explicitly separating system functioning from system adapting.
It is notable from the surveyed literature and our analysis that none of our primary studies (see Table III) considers all the characteristics of self-adaptive systems as discussed in the principles [14]. The most unexpected insight from our results is that the notion of uncertainty has not been considered in the contributions of any of the primary studies, although uncertainty is considered the main reason for self-adaptive systems in the published papers on this topic and in the informal definitions of these systems. So far, there is only an intuitive understanding of the concept of uncertainty in self-adaptive systems, resulting in a clear need for more careful consideration of the aspect of uncertainty in this research domain: concretely, how uncertainties can be represented, quantified and, in general, formalised as part of a formal definition of self-adaptive systems.
Finally, our results have shown that almost half of the primary studies provide their formalism by leveraging the aspects of collaboration [35], distribution [10], decentralisation [40], and ensembles [42] to define self-adaptive systems. However, system adaptation is not necessarily an emergent property of collaboration or decentralisation and should be defined independently from these notions.
Based on our results and findings, we can summarise that a potential formal definition of self-adaptive systems should provide a more precise semantics by 1) defining what it means for a system to adapt and how system adaptation differs from system functioning, 2) considering more systematically all the different characteristics of self-adaptive systems in its formalism, in particular the aspect of uncertainty, and 3) defining adaptation and self-adaptive systems in isolation from, e.g., collaboration and multi-agent systems.
# B. Threats to validity
Although the systematic process for data collection and analysis followed well-known and accepted guidelines for systematic literature reviews [21, 22], there are some possible threats to validity that we summarise in the following.
Internal validity: In this study, we aimed to investigate how self-adaptive systems are defined in the literature. Finding this information in the papers we analysed was not always straightforward, especially while searching for informal definitions, since this information was often implicitly included in the text. The expertise of the researchers also plays a role in this process; the potential bias of the researchers who conduct a systematic literature review is a common threat to validity. To mitigate this issue, voting was done by two of the authors. In case of conflicts, there was a follow-up discussion and a more in-depth paper analysis until a consensus was reached. On the other hand, searching for the formal definitions in the studies was much less complicated. Namely, in this case, we first checked whether the analysed studies contained any formalism (which drastically reduced the search space). In case they did, we then proceeded with a thorough analysis of the paper, checking whether the paper aims to define self-adaptive systems as part of its (formal) contributions. The voting on the formal definitions led to almost no conflicts among the authors.
External validity: Doing an automated search in six databases using the term “self-adaptive systems” yields hundreds of thousands of results. For that reason, we adopted the following two strategies, as previously explained in Section II:
1) We implemented an iterative search process with pilot searches to define and fine-tune the search string to minimise the number of irrelevant studies. In each iteration, two authors manually inspected and analysed a subset of the collected data. The search string was refined based on the insights gained from the concrete iteration.
2) In our automated search, we either searched in the databases by metadata (title, abstract, keywords) or only by title, depending on the advanced search options available in the concrete database. We assumed that if a paper defines self-adaptive systems, then that paper will certainly contain the term "self-adapt\*" as part of these fields. However, there is the possibility of having missed some relevant studies by limiting the automated search in the databases only by metadata.
However, so as not to compromise the completeness of the collected data, we analysed the complete initial pool of papers (1493 papers) and not only a random selection of these works. This proved to be the right decision, considering that the final set of primary studies contained only nine papers, which could have easily been missed if we had decided to analyse only a random selection of the initial pool.
Reliability: To ensure that our research findings can be replicated, as part of this paper, we have made available a reproducible package with the selected studies in each of the phases of the methodology. The package contains all the necessary data for replication, including the final queries that we used for the automated search in the databases and the authors' votes. To mitigate the inherent bias that each researcher has due to their background and experience, we have ensured that multiple researchers made the paper selection and the data extraction and analysis. Precisely, during the analysis in Phases 4 and 6 of our methodology, we introduced a voting process in which, if the authors classified a paper differently, a discussion took place until the voters reached a unified decision about the study. On the other hand, the reliability of the used databases and the replication of the automated search with the specific queries is something that we cannot account for.
# V. RELATED WORK
To the best of our knowledge, this is the first study that has focused on systematically collecting and analysing how self-adaptive systems are defined. Although the interest in self-adaptive systems has been growing rapidly, the concrete semantics of the core terminology is still missing. Namely, the literature still lacks a consensus on a definition—understanding why this is the case and getting a better overview of the existing body of literature was the motivating factor for our study.
Many other systematic reviews and mapping studies with different objectives have been conducted over the years. However, they all focused on other aspects related to self-adaptive systems, for example, engineering approaches for self-adaptive systems [7, 19, 34, 16], the use of formal methods in self-adaptive systems [24], and two recent works in which the authors focused on decentralisation in self-adaptation [20] and the application of machine learning in self-adaptive systems [43]. Besides the existing systematic literature reviews and mapping studies, there are a couple of other surveys and roadmaps on future research challenges [44, 18, 45, 15]. In contrast to these works, in our systematic literature review we aim to consolidate the existing (formal) definitions of self-adaptive systems and, more importantly, understand their limitations, which sets the foundation for a future establishment of a more unified understanding of these systems. An improved terminology semantics will complement the existing works in this field, including the contributions from the other systematic reviews, mapping studies, and roadmaps presented above.
Motivated by similar incentives as our study, only putting the focus on self-awareness instead of self-adaptation, Elhabbash et al. [46] have conducted a systematic literature review on the usage of self-awareness in software engineering. Among other objectives, the authors also summarise and analyse how self-aware systems have been defined in the literature. Note that in this study, the authors only focus on summarising the informal definitions of these systems. Although most of the researchers in the literature use the terms self-adaptation and self-awareness interchangeably, there are some prior works [47, 29, 48] in which the authors distinguish these terms and consider self-awareness as an "enabler" or a precondition for self-adaptive systems. In the future, if we have a clearer and more precise definition and understanding of self-adaptive systems, this will also help us to better distinguish self-adaptive systems from other self-\* systems, such as self-aware systems.

# Abstract

In the last two decades, the popularity of self-adaptive systems in the field of software and systems engineering has drastically increased. However, despite the extensive work on self-adaptive systems, the literature still lacks a common agreement on the definition of these systems. To this day, the notion of self-adaptive systems is mainly used intuitively without a precise understanding of the terminology. Using terminology only by intuition does not suffice, especially in engineering and science, where a more rigorous definition is necessary. In this paper, we investigate the existing formal definitions of self-adaptive systems and how these systems are characterised across the literature. Additionally, we analyse and summarise the limitations of the existing formal definitions in order to understand why none of the existing formal definitions is used more broadly by the community.
To achieve this, we have conducted a systematic literature review in which we have analysed over 1400 papers related to self-adaptive systems. Concretely, from an initial pool of 1493 papers, we have selected 314 relevant papers, which resulted in nine primary studies whose primary objective was to define self-adaptive systems formally. Our systematic review reveals that although there has been an increasing interest in self-adaptive systems over the years, there is a scarcity of efforts to define these systems formally. Finally, as part of this paper, based on the analysed primary studies, we also elicit requirements and set a foundation for a potential (formal) definition that is more broadly accepted by the community in the future.

Category: cs.SE
# 1 Introduction
The concept of independence is appealing to many fields. In databases, it describes when a relation is the cross product of some of its projections (Paredaens 1980). Indeed, the cross product is one of the most fundamental operations, important to designing and querying databases (Elmasri and Navathe 2000; Ramakrishnan and Gehrke 2003). Informally, a relation satisfies the independence atom (IA) $X \bot Y$ whenever for any two tuples there is a third that has values matching the first on all attributes in $X$ and values matching the second on all attributes in $Y$ (Paredaens 1980). In other words, values on $X$ are independent of values on $Y$. For example, the relation in Table 1 satisfies the IA status ⊥ gender. Independence atoms form a decidable fragment (Paredaens 1980) of embedded multivalued dependencies, whose implication problem is undecidable (Herrmann 1995). Similarly, probabilistic independence statements form a decidable fragment (Geiger, Paz, and Pearl 1991) of the undecidable class of probabilistic conditional independence statements, fundamental in statistics and distributed computing (Li 2023).
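The informal IA condition above translates directly into a check over all pairs of tuples. The following sketch assumes a complete relation stored as a list of tuples; since Table 1 is not reproduced in the text, the example rows over (status, gender) are invented for illustration.

```python
from itertools import product

def satisfies_ia(relation, attrs, X, Y):
    """Check whether a complete relation satisfies the IA X ⊥ Y:
    for every pair of tuples t1, t2 there must be some tuple t that
    agrees with t1 on X and with t2 on Y."""
    def proj(t, Z):
        return tuple(t[attrs.index(A)] for A in Z)
    for t1, t2 in product(relation, repeat=2):
        if not any(proj(t, X) == proj(t1, X) and proj(t, Y) == proj(t2, Y)
                   for t in relation):
            return False
    return True

# Illustrative rows, not the actual content of Table 1.
attrs = ["status", "gender"]
r = [("not-in-family", "male"), ("not-in-family", "female"),
     ("in-family", "male"), ("in-family", "female")]
print(satisfies_ia(r, attrs, ["status"], ["gender"]))             # → True
print(satisfies_ia(r[:2] + r[3:], attrs, ["status"], ["gender"]))  # → False
```

The first call succeeds because the four rows form the full cross product of the status and gender projections; dropping ("in-family", "male") breaks exactly that property.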
However, database relations are naturally incomplete, where missing values are denoted by null markers (denoted by $^ *$ in this work). Nulls are common since they are used whenever an actual value is unknown at the time of data acquisition (Codd 1979), a value simply does not exist (Codd 1986), or no information about the value is known (Lien 1982; Zaniolo 1984). Indeed, nulls accommodate flexibility within the rigid structure that relational databases enforce. If nulls are disallowed in any column, one may specify these columns as NOT NULL in SQL (Date 1982; Grant 2008), the de-facto language for defining and querying data. Similar situations occur in statistical models where zeros are often used to denote missing values (Little and Rubin 2002).

Table 1: Example relation $r$

Table 2: Possible world $w _ { 1 }$ of relation $r$ from Table 1

Table 3: Possible world $w _ { 2 }$ of relation $r$ from Table 1
In this work we ask the following fundamental question: How can independence be represented when data is missing? In fact, nulls have led to a broad and deep study of query answering in the presence of incomplete information (Imielinski and Lipski 1984; Libkin 2016), and no single best solution has been proposed (Toussaint et al. 2022). A promising direction is the approach where nulls are universally interpreted as values that exist but are currently unknown (Codd 1979). This leads naturally to a possible world semantics of database relations, where a possible world is obtained by replacing the occurrence of every null marker by some actual domain value. Query answers are then certain or possible. The former are answers in every possible world, while the latter are answers in some possible world (Libkin 2014). In the context of our running example, the universal query Which levels of education are associated with all statuses? has no certain answers, but the possible answers bachelor and graduation, see world $w _ { 1 }$ in Table 2.
Interestingly, the worlds $w _ { 1 }$ in Table 2 and $w _ { 2 }$ in Table 3 demonstrate that the IA $e \perp s$ is possible but not certain. In contrast, since neither column status nor gender contains nulls, the IA $s \perp g$ holds necessarily in every possible world, so it is certain.
Consider the universal query that asks for each status associated with every gender. Since $r$ satisfies the certain IA $s \perp g$, its certain answers are simply all values in column status. Generally, the certain (possible) answers to universal queries become certain (possible) answers to simple selection queries whenever the corresponding certain (possible) independence atom holds. For example, the possible answers to the universal query that returns levels of education associated with all values of status can be obtained by simply selecting all values of education, given that the IA $e \perp s$ holds possibly.
IAs are important for the most fundamental database operations, which are updates and queries. Firstly, IAs may express important semantic constraints that every database instance ought to comply with. As a consequence, updates can only be considered compliant when the instance resulting from the update satisfies every semantic constraint that has been specified on the database schema. This motivates the study of two computational problems associated with IAs: model checking and implication. While model checking refers directly to validating updates, efficient solutions to the implication problem enable us to minimise the overhead for validation; in other words, we can reduce validation to non-redundant sets of IAs. Secondly, we have seen how IAs can lead to significant optimisations of expensive queries. For example, expensive universal queries (Leinders and den Bussche 2007) reduce to simple selection queries whenever the underlying relation satisfies a corresponding IA. While checking independence may be as expensive as evaluating the query itself, the optimisation could already be applied if the IA is implied by the given set of constraints. For example, the possible answers to the universal query that returns all combinations of status and gender affiliated with all levels of education cannot be returned as a projection on these combinations, since the possible IA $e \perp s g$ is not implied by the two possible IAs $e \perp s$ and $e s \perp g$, as shown by the relation $r$ in Table 1. Indeed, as we will uncover later, the exchange rule, known from classical IAs over complete relations (Paredaens 1980; Geiger, Paz, and Pearl 1991), does not hold for possible IAs, but it does hold for combinations of possible and certain IAs. This illustrates the challenge and motivates a rigorous study of the underlying model checking and implication problems for possible and certain IAs, and their combination.
Contributions. Informally, our contributions can be summarised as follows.
1. We propose the concepts of possible and certain independence atoms.
2. We establish several results regarding the axiomatisability and computational complexity of the implication problem associated with the individual and combined classes for possible and certain IAs.
3. We establish several results for the data and the combined complexity of model checking for possible and certain IAs.
In the realistic setting of database instances with missing data, we can assign a possible and certain semantics to the classical concept of independence. Similarly to how database queries then have possible and certain answers, update operations may have possible and certain updates. This distinction comes at the price of an overhead for computing such answers, checking possible and certain models, and deciding possible and certain implication.
Organisation. The paper is organised as follows. Section 2 sets the foundation by defining the underlying data model, introducing possible and certain independence and defining implication problems. Section 3 establishes axiomatisations for various classes of independence atoms. Several findings about the computational complexity of implication and model checking problems are presented in Section 4, before concluding and commenting on future work in Section 5.
# 2 Preliminaries
We begin the section by defining the underlying data model that accommodates incomplete information. We then introduce the syntax and semantics of possible and certain independence atoms, and define the implication problems that we will study in the latter sections of this paper.
# 2.1 Relations with Incomplete Information
The natural numbers $\mathbb { N }$ are taken to start from 1 in this work. Given a natural number $n$, we write $[ n ]$ for the set $\{ 1 , \ldots , n \}$. Let Att and Val be disjoint infinite sets of symbols called attributes and values. For attribute sets $X$ and $Y$, we often write $X Y$ for their set union $X \cup Y$. A relation schema is a finite set ${ R } = \{ A _ { 1 } , \ldots , A _ { n } \}$ of attributes from Att. Each attribute $A$ of a relation schema is associated with a domain $\operatorname { D o m } ( A )$, which is the set of values that can occur in the column $A$. In order to allow the data to contain incomplete information about the values of the attributes, we use a special null symbol $^ *$, which represents an unknown attribute value. We always assume that $* \in \operatorname { D o m } ( A )$ and $| \mathrm { D o m } ( A ) \setminus \{ * \} | \geq 2$. The latter assumption is made because an attribute whose domain has only one non-null value cannot contain incomplete information, because the unknown value could only be that one non-null value.
A tuple over $R$ is a function $t : R \to \bigcup _ { A \in R } { \mathrm { D o m } } ( A )$ with $t ( A ) \in \operatorname { D o m } ( A )$ for all $A \in R$. The tuple $t$ is called complete if it does not contain any nulls; that is, $t ( A ) \neq *$ for all $A \in R$—otherwise it is called incomplete. The tuple $t$ is called a null tuple if $t ( A ) = *$ for all $A \in R$. A non-null tuple is a tuple that is not a null tuple. For $X \subseteq R$, let $t ( X )$ denote the restriction of the tuple $t$ over $R$ on $X$, and $\operatorname { D o m } ( X ) = \prod _ { A \in X } \operatorname { D o m } ( A )$ the Cartesian product of the domains of attributes in $X$. For example, the first tuple of relation $r$ in Table 1 is complete but all remaining tuples are incomplete. However, the projections of all tuples onto status and gender are complete.
A multiset is a pair $M = ( B , m )$ consisting of a set $B$ and a multiplicity function $m \colon B \to \mathbb { N }$ . The function $m$ determines for each $b \in B$ how many copies of $b$ the multiset $( B , m )$ contains. We sometimes say that $M$ contains an element $b$ , written $b \in M$ , if $b$ is in the domain $B$ of $M$ .
If $m$ is a constant function with value $n \in \mathbb { N }$, i.e., $m ( b ) = n$ for all $b \in B$, we denote it by $n$. For example, $( B , 1 )$ corresponds to the set $B$ in the usual sense. A multiset $( B , m )$ is finite if the set $B$ is finite, and it is included in a multiset $( A , n )$ if $B \subseteq A$ and $m ( a ) \leq n ( a )$ for all $a \in B$.
A relation over $R$ is a finite multiset $r = ( r ^ { \prime } , m )$, where $r ^ { \prime }$ is a set of tuples over $R$. The relation $r$ is complete if it contains only complete tuples, and otherwise incomplete. The projection of $r$ on $X \subseteq R$ is defined as $r ( X ) = ( r ^ { \prime } ( X ) , m ^ { \prime } )$, where $r ^ { \prime } ( X ) = \{ t ( X ) \mid t \in r ^ { \prime } \}$ and $m ^ { \prime } \colon r ^ { \prime } ( X ) \to \mathbb { N }$ is such that $m ^ { \prime } ( t ) = \sum _ { s ( X ) = t } m ( s )$. For example, the projection of relation $r$ from Table 1 onto status and race consists of ((not-in-family, white), 2), ((in-family, white), 1) and ((in-family, \*), 1).
We define relations as multisets instead of the more common choice, sets, because we want to allow relations to have more than one copy of tuples that have null symbols. This is because two unknown values that are marked with the null symbol could have two different values if they were known. Our definition also allows multiple copies of tuples without null symbols, but the number of copies for those rows does not affect the satisfaction of the atoms that we will consider.
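The multiset projection $r(X)$ defined above, with its summed multiplicities, can be sketched with Python's `Counter`; the (status, race) rows mirror the projection example in the text, with '*' as the null marker.

```python
from collections import Counter

def project(relation, attrs, X):
    """Projection r(X) of a multiset relation: tuples that coincide
    on X have their multiplicities summed, as in the definition above."""
    idx = [attrs.index(A) for A in X]
    out = Counter()
    for t, mult in relation.items():
        out[tuple(t[i] for i in idx)] += mult
    return out

attrs = ["status", "race"]
r = Counter({("not-in-family", "white"): 2,
             ("in-family", "white"): 1,
             ("in-family", "*"): 1})
print(project(r, attrs, ["status"]))  # 'in-family' gets multiplicity 1 + 1 = 2
```

Representing the multiset as a `Counter` keyed by tuples matches the pair $(B, m)$ directly: the keys are the set $B$ and the counts are the multiplicity function $m$.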
# 2.2 Possible and Certain Independence Atoms
We introduce possible and certain variants of the independence atoms. These variants are based on groundings of incomplete relations, each representing a possible world obtained by replacing all null symbols by actual domain values.
# Definition 1 (Grounding).
(i) Let $t$ be a tuple over a schema $R$. A grounding of $t$ is any complete tuple $t ^ { \prime }$ over $R$ obtained from $t$ by replacing its null symbols with non-null values (from their respective domains).
(ii) Let $r$ be a relation over a schema $R$. A grounding of $r$ is any relation $r ^ { \prime }$ over $R$ that is obtained from $r$ by replacing its null symbols with non-null values (from their respective domains) for each copy of a tuple independently.
For example, projections of all the groundings of the relation $r$ from Table 1 onto status and gender are the same, while they can be different on education and gender.
In analogy to query answers (Libkin 2014), an independence atom is possible (certain) whenever it is satisfied by some (all) grounding(s).
Definition 2 (Independence, possible and certain independence). For a relation schema $R$ and $X , Y \subseteq R$, the expressions $X \bot Y$, $X \bot _ { p } Y$, and $X \perp _ { c } Y$ are called independence atom (IA), possible independence atom (PIA), and certain independence atom (CIA) over $R$, respectively. For any of these atoms $\sigma$, we write $r \models \sigma$ to mean that a relation $r$ over $R$ satisfies $\sigma$, which is defined as follows:
(i) $r \models X \bot Y$ iff $r ( X Y )$ is complete, and for all $t _ { 1 } , t _ { 2 } \in r$ there is some $t \in r$ such that $t ( X ) = t _ { 1 } ( X )$ and $t ( Y ) = t _ { 2 } ( Y )$,
(ii) $r \models X \bot _ { p } Y$ iff there is a grounding $r ^ { \prime }$ of $r$ such that $r ^ { \prime } \models X \bot Y$,
(iii) $r \models X \bot _ { c } Y$ iff every grounding $r ^ { \prime }$ of $r$ satisfies $r ^ { \prime } \models X \bot Y$.
An IA $X \bot Y$ is called disjoint if $X$ and $Y$ do not intersect. Disjoint PIAs and CIAs are defined analogously.
Of course, if a relation satisfies an IA, then the IA is certain; and if an IA is certain, then it is also possible. For example, the relation $r$ from Table 1 satisfies $s \perp g$, $s \perp _ { c } g$, and $s \perp _ { p } g$. Furthermore, $r$ satisfies neither $e \perp _ { c } s$ nor $r \perp _ { c } r$, but it satisfies $e \perp _ { p } s$ and $r \perp _ { p } r$.
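For finite domains, the possible and certain semantics of Definition 2 reduce to an existential or universal check over all groundings. The sketch below does exactly that on an invented two-attribute relation (not Table 1); it is exponential in the number of nulls and only meant to make the semantics concrete.

```python
from itertools import product

NULL = "*"

def satisfies_ia(rows, attrs, X, Y):
    # IA check for a complete relation (all groundings are complete).
    proj = lambda t, Z: tuple(t[attrs.index(A)] for A in Z)
    return all(any(proj(t, X) == proj(t1, X) and proj(t, Y) == proj(t2, Y)
                   for t in rows)
               for t1, t2 in product(rows, repeat=2))

def groundings(rows, attrs, domains):
    choices = [[tuple(g) for g in
                product(*[domains[attrs[i]] if v == NULL else [v]
                          for i, v in enumerate(t)])]
               for t in rows]
    return (list(w) for w in product(*choices))

def possible_ia(rows, attrs, domains, X, Y):   # X ⊥_p Y: some grounding
    return any(satisfies_ia(w, attrs, X, Y)
               for w in groundings(rows, attrs, domains))

def certain_ia(rows, attrs, domains, X, Y):    # X ⊥_c Y: every grounding
    return all(satisfies_ia(w, attrs, X, Y)
               for w in groundings(rows, attrs, domains))

attrs = ["A", "B"]
domains = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
rows = [("a1", "b1"), ("a2", NULL)]
print(possible_ia(rows, attrs, domains, ["A"], ["B"]))  # → True
print(certain_ia(rows, attrs, domains, ["A"], ["B"]))   # → False
```

Grounding the null to b1 yields a cross product, so the PIA holds; grounding it to b2 does not, so the CIA fails, mirroring the $e \perp_p s$ versus $e \perp_c s$ contrast in the text.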
# 2.3 Implication Problems
Understanding the interaction between constraints provides us with means to control them in applications. The Introduction has already indicated that a deep understanding of the implication problem for independence statements has direct applications for the most fundamental tasks in data processing: update and query operations.
For a set of atoms $\Sigma$ , we write $r \models \Sigma$ if and only if $r \models \sigma$ for all $\sigma \in \Sigma$ . Let $\Sigma \cup \{ \sigma \}$ be a set of atoms over $R$ . We say that a set of atoms $\Sigma$ logically implies the atom $\sigma$ , written as $\Sigma \models \sigma$ , if and only if for all relations $r$ over $R$ , $r \models \Sigma$ implies $r \models \sigma$ . For our running example, $\{ e \perp _ { c } s , e s \perp _ { c } g \}$ implies $e \bot _ { c } s g$ , $\{ e \perp _ { p } s , e s \perp _ { p } g \}$ does not imply $e \perp _ { p } s g$ , and $\{ e \perp _ { c } s , e s \perp _ { p } g \}$ implies $e \perp _ { p } s g$ .
For two subclasses of IAs $\mathcal { P } , \mathcal { Q }$, the $( \mathcal { P } , \mathcal { Q } )$-implication problem is to decide whether $\Sigma \models \sigma$, for any finite set of atoms $\Sigma$ from $\mathcal { P }$ and any atom $\sigma$ from $\mathcal { Q }$. If $\mathcal { P } = \mathcal { Q }$, we refer to the $( \mathcal { P } , \mathcal { Q } )$-implication problem as the implication problem for $\mathcal { P }$.
# 3 Axiomatisations
Due to the central role of implication problems, our first goal is to establish axiomatic characterisations for combinations of PIAs and CIAs. Throughout, we will observe interesting differences to what is known from the idealised and well-known special case of having complete information.
# 3.1 Inference Rules
We define possible and certain variants of inference rules known from the special case (Geiger, Paz, and Pearl 1991; Paredaens 1980), and also recall its major results.
Definition 3. Let $\mathcal { T } , \mathcal { S } , \mathcal { C } , \mathcal { D } , \mathcal { E }$ be the inference rules for independence atoms depicted in Table 4. We use the subscript $c$ or $p$ for the rule that is obtained from the corresponding rule for independence atoms by replacing each $\perp$ symbol by the certain independence symbol $\perp _ { c }$ or possible independence symbol $\perp _ { p }$, respectively. The rules with subscript $p \& c$ are given in Table 5. We define the following sets of inference rules
(i) Independence: $\mathfrak { I } = \{ \mathcal { T } , \mathcal { S } , \mathcal { C } , \mathcal { D } , \mathcal { E } \}$,
(ii) Certain independence: $\mathfrak { I } _ { c } = \{ \mathcal { T } _ { c } , \mathcal { S } _ { c } , \mathcal { C } _ { c } , \mathcal { D } _ { c } , \mathcal { E } _ { c } \}$,
(iii) Possible independence: $\mathfrak { I } _ { p } = \{ \mathcal { T } _ { p } , \mathcal { S } _ { p } , \mathcal { C } _ { p } , \mathcal { D } _ { p } \}$,
(iv) Mixed exchange rules: $\mathfrak { I } _ { p \& c } = \{ \mathcal { E } _ { p \& c } , \mathcal { E } _ { c \& p } \}$.
Table 4: Rules I for independence
Table 5: Rules $\Im _ { p \& c }$ for possible and certain independence
Note that the set $\mathfrak { I } _ { p }$ is similar to the sets $\mathfrak { I }$ and $\mathfrak { I } _ { c }$, except that the exchange rule is missing. This is a necessary omission, as Example 1 demonstrates that the exchange rule fails for possible independence. The implication of $X \bot _ { p } Y$ by $X \perp _ { c } Y$ follows from the mixed exchange rule $\mathcal { E } _ { c \& p }$ and trivial independence $\mathcal { T } _ { p }$.
Let $\mathfrak { A }$ be a set of axioms. We say that there is an $\mathfrak { A }$-deduction for an atom $\sigma$ from the set of atoms $\Sigma$, written as $\Sigma \vdash _ { \mathfrak { A } } \sigma$, if and only if there is a finite sequence $( \tau _ { 1 } , \dots , \tau _ { n } )$ of atoms such that $\tau _ { n } = \sigma$, and each $\tau _ { i }$, $i \in \{ 1 , \ldots , n \}$, is either an element of $\Sigma$ or is obtained by applying a rule from $\mathfrak { A }$ to some atoms of $\Sigma \cup \{ \tau _ { 1 } , \dots , \tau _ { i - 1 } \}$. We sometimes write $\Sigma \vdash \sigma$ instead of $\Sigma \vdash _ { \mathfrak { A } } \sigma$ if the set of axioms $\mathfrak { A }$ is clear from the context.
Let $\mathfrak{A}$ be an axiomatisation, and $\mathcal{P}$ and $\mathcal{Q}$ classes of atoms. We say that $\mathfrak{A}$ is
(i) sound for the $( \mathcal { P } , \mathcal { Q } )$ -implication problem iff for all sets of atoms $\Sigma$ of the class $\mathcal { P }$ and all atoms $\sigma$ of the class $\mathcal { Q }$ , $\Sigma \vdash _ { \mathfrak { A } } \sigma$ implies $\Sigma \models \sigma$ ,
(ii) complete for the $(\mathcal{P}, \mathcal{Q})$-implication problem iff for all sets of atoms $\Sigma$ of the class $\mathcal{P}$ and all atoms $\sigma$ of the class $\mathcal{Q}$, $\Sigma \models \sigma$ implies $\Sigma \vdash_{\mathfrak{A}} \sigma$.
We write $\mathrm{Cl}_{\mathfrak{A}}(\Sigma)$ for the closure of $\Sigma$ with respect to the axiomatisation $\mathfrak{A}$, i.e., $\mathrm{Cl}_{\mathfrak{A}}(\Sigma) = \{\sigma \mid \Sigma \vdash_{\mathfrak{A}} \sigma\}$. For example, if $\Sigma$ consists of $e \perp_c s$, $es \perp_p g$ and $r \perp_p r$, then $rsg \perp_p e \in \mathrm{Cl}_{\{\mathcal{E}_{c\&p}, \mathcal{S}_p, \mathcal{C}_p\}}(\Sigma)$ due to the following $\{\mathcal{E}_{c\&p}, \mathcal{S}_p, \mathcal{C}_p\}$-deduction:
$$
\dfrac{\dfrac{\dfrac{e \perp_c s \quad es \perp_p g}{e \perp_p sg}\,\mathcal{E}_{c\&p} \qquad r \perp_p r}{e \perp_p rsg}\,\mathcal{C}_p}{rsg \perp_p e}\,\mathcal{S}_p .
$$
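For a machine-checked intuition of such closures, one can compute $\mathrm{Cl}_{\mathfrak{A}}(\Sigma)$ by forward chaining. The Python sketch below is illustrative only: it saturates a set of atoms under just two rules, symmetry and decomposition, whose standard forms ($X \perp Y$ yields $Y \perp X$, and $X \perp Y$ yields $X' \perp Y'$ for $X' \subseteq X$, $Y' \subseteq Y$) are assumed here, since Tables 4 and 5 are not reproduced in the text. Atoms $X \perp Y$ are encoded as pairs of frozensets of attribute names.

```python
from itertools import combinations

def subsets(s):
    """All subsets of an attribute set, including the empty set."""
    s = tuple(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def closure(sigma):
    """Saturate a set of atoms X ⊥ Y (pairs of frozensets) under
    symmetry (X ⊥ Y gives Y ⊥ X) and decomposition (X ⊥ Y gives
    X' ⊥ Y' for all X' ⊆ X, Y' ⊆ Y).  Terminates because only
    finitely many atoms over the occurring attributes exist."""
    atoms = set(sigma)
    changed = True
    while changed:
        changed = False
        for X, Y in list(atoms):
            derived = {(Y, X)}
            derived.update((Xp, Yp) for Xp in subsets(X) for Yp in subsets(Y))
            if not derived <= atoms:
                atoms |= derived
                changed = True
    return atoms

# Example: from AB ⊥ C we can deduce C ⊥ A, but not AC ⊥ B.
sigma = {(frozenset("AB"), frozenset("C"))}
cl = closure(sigma)
print((frozenset("C"), frozenset("A")) in cl)   # True
print((frozenset("AC"), frozenset("B")) in cl)  # False
```

Adding the remaining rules amounts to extending the `derived` set with further rule instances; the fixed-point loop stays the same.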
For a CIA or PIA $\sigma$, define $\mathit{ind}(\sigma)$ as the corresponding IA, i.e., $\mathit{ind}(X \perp_c Y) = X \perp Y$ and $\mathit{ind}(X \perp_p Y) = X \perp Y$. We extend the definition of $\mathit{ind}$ to sets of CIAs and PIAs in the obvious way, i.e., $\mathit{ind}(\Sigma) = \{\mathit{ind}(\sigma) \mid \sigma \in \Sigma\}$.
The following result about the idealised special case of complete relations is well-known from the research literature (Geiger, Paz, and Pearl 1991; Paredaens 1980).
Theorem 1. The set $\mathfrak{I}$ forms a sound and complete axiomatisation for the implication problem for IAs.
Indeed, a stronger result is known (Fagin 1982; Geiger, Paz, and Pearl 1991) that allows us to construct a single complete relation, known as an Armstrong relation, that satisfies an IA if and only if it is implied by a given set of IAs. Hence, the implication problem reduces to a model checking problem on such a relation. This is known to be useful for acquiring those constraints perceived to encode business rules of the underlying application (Fagin 1982).
Let $\Sigma$ be a set of atoms of class $\mathcal{P}$ over a schema $R$. A relation $r$ over $R$ is called an Armstrong relation for $\Sigma$ if for all atoms $\sigma$ of class $\mathcal{P}$ over $R$, $r \models \sigma$ if and only if $\Sigma \models \sigma$. We say that a class $\mathcal{P}$ of atoms enjoys Armstrong relations if every set $\Sigma$ of atoms of class $\mathcal{P}$ has an Armstrong relation.
Theorem 2. The class of independence atoms enjoys Armstrong relations.
# 3.2 Certain Independence Atoms
As our first major result, we establish the completeness of the axiomatisation $\mathfrak{I}_c$ for CIAs. This follows from the completeness of $\mathfrak{I}$ for IAs.
Theorem 3. The set $\Im _ { c }$ forms a sound and complete axiomatisation for the implication problem for CIAs.
Proof. The soundness of the axiomatisation is clear, as it is easy to check that all of the inference rules in $\Im _ { c }$ are sound.
We show that the axiomatisation is also complete. Let $\Sigma \cup \{\sigma\}$ be a set of CIAs. Suppose that $\Sigma \nvdash_{\mathfrak{I}_c} \sigma$. We show that $\Sigma \nvDash \sigma$.
It is clear that $\Sigma \vdash_{\mathfrak{I}_c} \sigma$ if and only if $\mathit{ind}(\Sigma) \vdash_{\mathfrak{I}} \mathit{ind}(\sigma)$, because the rules of $\mathfrak{I}_c$ and $\mathfrak{I}$ are the same, except that in $\mathfrak{I}_c$ we have CIAs instead of IAs. Thus, $\mathit{ind}(\Sigma) \nvdash_{\mathfrak{I}} \mathit{ind}(\sigma)$, and by the completeness of the axiomatisation $\mathfrak{I}$, we have $\mathit{ind}(\Sigma) \nvDash \mathit{ind}(\sigma)$. This means that there is a relation $r$ over the relation schema $R = \{A \in \mathrm{Att} \mid A$ appears in some atom of $\Sigma \cup \{\sigma\}\}$ such that $r \models \mathit{ind}(\Sigma)$, but $r \nvDash \mathit{ind}(\sigma)$. Moreover, since $\mathit{ind}(\Sigma \cup \{\sigma\})$ is a set of IAs, the relation $r$ must be complete. This means that there are no null symbols in $r$, and therefore $r \models \mathit{ind}(\Sigma)$ implies $r \models \Sigma$, and $r \nvDash \mathit{ind}(\sigma)$ implies $r \nvDash \sigma$. Hence, the relation $r$ witnesses that $\Sigma \nvDash \sigma$, and the axiomatisation $\mathfrak{I}_c$ is complete. □
Indeed, the implication problems for IAs and CIAs are equivalent in the following sense.
Theorem 4. Let $\Sigma \cup \{ \sigma \}$ be a set of CIAs. Then
$$
\Sigma \models \sigma \iff \mathit{ind}(\Sigma) \models \mathit{ind}(\sigma) .
$$
Proof. As stated in the proof of Theorem 3, it is clear that $\Sigma \vdash_{\mathfrak{I}_c} \sigma$ if and only if $\mathit{ind}(\Sigma) \vdash_{\mathfrak{I}} \mathit{ind}(\sigma)$. The claim then follows from Theorems 1 and 3. □
$$
{\begin{array}{ccc} A & B & C \\ 0 & 0 & 0 \\ * & 1 & 0 \\ * & 0 & 1 \\ 1 & 1 & 1 \end{array}}
\qquad
{\begin{array}{ccc} A & B & C \\ 0 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{array}}
\qquad
{\begin{array}{ccc} A & B & C \\ 0 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{array}}
$$
Figure 1: The relations $\boldsymbol { r } , \boldsymbol { r } ^ { \prime }$ , and $r ^ { \prime \prime }$ .
It follows that IAs in the idealised special case of having complete information correspond exactly to CIAs in the general case of permitting incomplete information. Intuitively, our finding assures us that known results from an idealised case hold in a more general and realistic context in the principled sense of certainty. This view extends further to Armstrong relations.
Theorem 5. The class of CIAs enjoys Armstrong relations.
Proof. Let $R$ be a relation schema and $\Sigma$ a set of CIAs over $R$ . Then there exists a complete relation $r$ such that for all CIAs $\sigma$ over $R$ , we have
$$
\begin{array} { r l } & { \Sigma \models \sigma \iff \sigma \in \mathrm { C l } _ { \mathfrak { I } _ { c } } ( \Sigma ) \iff i n d ( \sigma ) \in \mathrm { C l } _ { \mathfrak { I } } ( i n d ( \Sigma ) ) } \\ & { \iff i n d ( \Sigma ) \models i n d ( \sigma ) \iff r \models i n d ( \sigma ) \iff r \models \sigma . } \end{array}
$$
The first equivalence follows from Theorem 3, and the second one from the fact that $\Sigma \vdash _ { \Im _ { c } } \sigma$ iff $i n d ( \Sigma ) \vdash _ { \Im } i n d ( \sigma )$ . The third equivalence follows from Theorem 1, and the fourth one from the definition of Armstrong relations for independence atoms. Note that the Armstrong relation $r$ for $i n d ( \Sigma )$ exists by Theorem 2. The last equivalence follows from the completeness of $r$ . □
# 3.3 Possible Independence Atoms
Apart from positioning known results in the broader framework of incomplete information, we also need to consider the case where IAs may possibly hold. Here, things are different. Indeed, Theorem 4 does not hold if we consider PIAs instead of CIAs, because the exchange rule is not sound in the possible case. This is demonstrated by the following example.
Example 1. There exists a set $\Sigma \cup \{\sigma\}$ of PIAs such that $\mathit{ind}(\Sigma) \models \mathit{ind}(\sigma)$, but $\Sigma \nvDash \sigma$. Let $\Sigma = \{A \perp_p B, AB \perp_p C\}$ and $\sigma = A \perp_p BC$. Then $\mathit{ind}(\Sigma) = \{A \perp B, AB \perp C\}$ and $\mathit{ind}(\sigma) = A \perp BC$, and by the soundness of exchange $\mathcal{E}$, $\mathit{ind}(\Sigma) \models \mathit{ind}(\sigma)$. Consider then the relation $r$ depicted in Figure 1. We have $r \models \Sigma$, because $r$ has groundings $r'$ and $r''$ such that $r' \models A \perp B$ and $r'' \models AB \perp C$. But $r \nvDash A \perp_p BC$, since there is no way to replace all null symbols in the column $A$ such that $A \perp BC$ would hold. This is because $r$ has two different values for $A$ and four different values for $BC$, making it impossible to fit all of the eight different values for $ABC$ in just four rows.
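Example 1 can also be verified by brute force. The Python sketch below is an illustration, not part of the formal development: `None` plays the role of the null symbol $*$, groundings are drawn from the domain $\{0, 1\}$, and a PIA is checked by enumerating all groundings of the relation $r$ of Figure 1.

```python
from itertools import product

def satisfies_ia(rel, X, Y):
    """Check X ⊥ Y in a complete relation: every pair of an X-value and
    a Y-value occurring in rel must occur jointly in some tuple."""
    proj = lambda t, I: tuple(t[i] for i in I)
    combos = {(proj(t, X), proj(s, Y)) for t in rel for s in rel}
    present = {(proj(t, X), proj(t, Y)) for t in rel}
    return combos <= present

def groundings(rel, dom=(0, 1)):
    """Enumerate all completions of the null symbols (None) in rel."""
    nulls = [(i, j) for i, t in enumerate(rel)
                    for j, v in enumerate(t) if v is None]
    for vals in product(dom, repeat=len(nulls)):
        g = [list(t) for t in rel]
        for (i, j), v in zip(nulls, vals):
            g[i][j] = v
        yield [tuple(t) for t in g]

def satisfies_pia(rel, X, Y):
    """X ⊥_p Y: some grounding of rel satisfies X ⊥ Y."""
    return any(satisfies_ia(g, X, Y) for g in groundings(rel))

# The relation r of Figure 1; None stands for the null symbol *.
r = [(0, 0, 0), (None, 1, 0), (None, 0, 1), (1, 1, 1)]
A, B, C = 0, 1, 2
print(satisfies_pia(r, [A], [B]))      # True  (witnessed by r')
print(satisfies_pia(r, [A, B], [C]))   # True  (witnessed by r'')
print(satisfies_pia(r, [A], [B, C]))   # False (no grounding works)
```

Replacing `any` by `all` in `satisfies_pia` would check the corresponding CIA instead, since certain independence requires every grounding to satisfy the underlying IA.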
The invalidity of the exchange rule for PIAs also means that the completeness proof for the axiomatisation of IAs does not work for PIAs. However, the set $\Im _ { p }$ forms a complete axiomatisation for a restricted version of the implication problem for PIAs.
Theorem 6. The set $\mathfrak{I}_p$ forms a sound and complete axiomatisation for the (PIA, $\mathrm{PIA}^*$)-implication problem, where $\mathrm{PIA}^*$ is the class of PIAs $X \perp_p Y$ such that at least one of the following conditions holds: (i) $\bigl| |X| - |Y| \bigr| \leq 1$, (ii) $\min(|X|, |Y|) \leq 1$.
Proof. The soundness of the axiomatisation is again easy to show by checking that all of the inference rules are sound.
We show that the axiomatisation is complete. Let $\Sigma$ be a set of PIAs and $X \perp_p Y$ a PIA. Suppose that $\Sigma \nvdash X \perp_p Y$. We show that $\Sigma \nvDash X \perp_p Y$.
We first consider the case of the constancy atom, so assume that $X = Y = B$. Let $\mathrm{Dom}(A) = \{0, 1, *\}$ for all $A \in R$, and define $\mathrm{Dom}^*(B) = \mathrm{Dom}(B) = \{0, 1, *\}$ and $\mathrm{Dom}^*(A') = \{0, *\}$ for all $A' \in R \setminus B$. Define then $r' = \prod_{A \in R}(\mathrm{Dom}^*(A) \setminus \{*\})$, and $r = (r', 1)$. Now clearly $r \nvDash B \perp_p B$. We show that $r \models \Sigma$. Let $V \perp_p W \in \Sigma$. Now $B \notin V \cap W$, because otherwise $\Sigma \vdash B \perp_p B$ by decomposition $\mathcal{D}_p$. This means that either $V \subseteq R \setminus B$ or $W \subseteq R \setminus B$, i.e., either all the columns in $V$ or all the columns in $W$ are constant in $r$. Then $r \models V \perp_p W$ by the soundness of trivial independence $\mathcal{T}_p$, symmetry $\mathcal{S}_p$ and constancy $\mathcal{C}_p$.
We may then assume that $X \cap Y = \emptyset$. Otherwise, for any $A \in X \cap Y$, either $\Sigma \nvdash A \perp_p A$ or $\Sigma \vdash A \perp_p A$. In the first case, the construction from the case where $X = Y = B$ witnesses that $\Sigma \nvDash A \perp_p A$. Hence (by the decomposition rule $\mathcal{D}_p$), also $\Sigma \nvDash X \perp_p Y$. In the second case, $\Sigma \vdash A \perp_p A$, constancy $\mathcal{C}_p$ yields $\Sigma \nvdash X \setminus \{A\} \perp_p Y \setminus \{A\}$, and it suffices to show the claim for this atom. Note that by the latter argument, we may also assume that there is no $A \in XY$ such that $\Sigma \vdash A \perp_p A$. The case that $X \perp_p Y$ is disjoint can be handled in two parts. The first case is that $|X| - |Y| \leq 1$ and $|Y| \geq 2$. The second case is that $|Y| = 1$. Note that by symmetry we may assume that $|X| \geq |Y|$, so it suffices to consider these two cases.
For the first case, let $|X| = k \geq m = |Y|$, and assume that $m \geq 2$. Let $\mathrm{Dom}(A) = \{0, 1, *\}$ for all $A \in R$. Construct then a relation $r$ over $R$ as follows. The relation $r$ consists of $2^k(2^m - 1) - 1$ rows. On the first $2^k$ rows, let the values of $X$ be the tuples from $\{0, 1\}^k$. For the first $2^m - 1$ rows of these $2^k$ rows, let the values of $Y$ be the tuples from $\{0, 1\}^m \setminus \{(1, 1, \ldots, 1)\}$, and for the rest of the $2^k - (2^m - 1)$ rows, let the values of $Y$ be null symbols. Then add rows with only null symbols such that we obtain $2^k(2^m - 1) - 1$ rows in total. Let all the values for the attributes $A \in R \setminus XY$ be constant 0.
We show that $r \models \Sigma$, but $r \nvDash X \perp_p Y$. First note that $r(X)$ has $2^k$ different complete tuples, and $r(Y)$ has $2^m - 1$ different complete tuples. Therefore, it is impossible for $X \perp_p Y$ to hold in the relation $r$, because $r$ has only $2^k(2^m - 1) - 1$ tuples.
Suppose then that $V \perp_p W \in \Sigma$. We will show that $r \models V \perp_p W$. Since every attribute $A \in R \setminus XY$ is constant, we may assume that $VW \subseteq XY$, and therefore also $V \cap W = \emptyset$. Moreover, we may assume that $VW = XY$, because by decomposition $\mathcal{D}_p$, $r \models V \perp_p W$ implies $r \models V' \perp_p W'$ for every $V' \subseteq V$ and $W' \subseteq W$.
We may assume that $|V| \geq |W|$. Suppose first that $|V| \geq k + 1$. This means that $r(V)$ has at most $2^k$ different non-null tuples and $r(W)$ has at most $2^{k+m-|V|} \leq 2^{m-1}$ different non-null tuples. Then in order for $V \perp_p W$ to hold, we need to fit at most $2^k \cdot 2^{m-1}$ values of $XY$ in the relation. Since we have $2^k(2^m - 1) - 1$ ($> 2^k \cdot 2^{m-1}$) rows in $r$, and no non-null tuple is repeated in $r$, we have enough room to contain all the needed rows. This can be done by a grounding $r'$ of $r(XY)$ that replaces all the null symbols on the first $2^k$ rows with 0s, and replaces the null-symbol rows with tuples from the Cartesian product of the projections of these first $2^k$ rows on $V$ and $W$. Since replacing all the null symbols on the first $2^k$ rows with 0s preserves the number of different non-null tuples, the cardinality estimates above hold for $r'$. Thus $r \models V \perp_p W$.
Suppose then that $|V| = k$. Since $|V| + |W| = |VW| = |XY| = |X| + |Y| = k + m$, we have $|W| = m$. Since $V \neq X$ and $W \neq X$, there are $A \in V$ and $B \in W$ such that $A, B \in Y$. As the relation $r$ does not contain the tuple where every attribute of $Y$ has value 1, by picking suitable values for the null symbols on the first $2^k$ rows, we obtain at most $2^k - 1$ different values for $V$ and at most $2^m - 1$ different values for $W$. Since no non-null tuple is repeated in $r$, we have enough room to contain all the needed rows. This can be done by a grounding $r'$ of $r(XY)$ that replaces the null symbols on the first $2^k$ rows such that we obtain at most $2^k - 1$ and $2^m - 1$ different values for $V$ and $W$, respectively. The grounding $r'$ replaces the null-symbol rows with tuples from the Cartesian product of the projections of these first $2^k$ rows on $V$ and $W$, as before. Thus $r \models V \perp_p W$.
Note that it cannot be that $|V| \leq k - 1$. Otherwise, from $|X| - |Y| \leq 1$ and $|X| = k$, it follows that $k + (k - 1) \leq |X| + |Y| = |XY| = |VW| = |V| + |W| \leq k - 1 + |W|$, i.e., $|W| \geq k$, which is impossible because $|W| \leq |V| \leq k - 1$. This finishes the proof in the case that $m \geq 2$.
We now consider the second case, where $|X| = k \geq 1 = |Y|$. Let again $\mathrm{Dom}(A) = \{0, 1, *\}$ for all $A \in R$. Construct then $r$ as follows. The relation $r$ consists of $2^{k+1} - 1$ rows. On the first $2^k$ rows, let the values of $X$ be the tuples from $\{0, 1\}^k$. For the first two rows of these $2^k$ rows, let the values of $Y$ be 0 (first row) and 1 (second row). For the rest of the $2^k - 2$ rows, let the values of $Y$ be null symbols. Then add rows with only null symbols such that we obtain $2^{k+1} - 1$ rows in total. Let all the values for the attributes $A \in R \setminus XY$ be constant 0.
Now $r \nvDash X \perp_p Y$ by an argument similar to the case where $|X| = k \geq m = |Y|$ and $m \geq 2$. Suppose then that $V \perp_p W \in \Sigma$. We will show that $r \models V \perp_p W$. As before, we may assume that $VW = XY$. The proof in the case $|V| \geq k + 1$ now follows from trivial independence $\mathcal{T}_p$, because $|V| \geq k + 1$ implies that $W = \emptyset$.
Suppose then that $|V| = k$ and $|W| = 1$. If $k = 1$, then $V \perp_p W$ is either $X \perp_p Y$ or $Y \perp_p X$. This is impossible, because $\Sigma \nvdash X \perp_p Y$. Hence, we may assume that $k > 1$.
Suppose then that $Y \subseteq V$, and $|V| = l + 1 > 1$ and $|W| = j > 0$, where $l + j = k$. Since $Y \subseteq V$ and the only possible non-null values for $Y$ are 0 and 1, by picking the value 0 for the null symbols of the column $Y$ on the first $2^k$ rows, we obtain $2^l + 1$ values for $V$. We have at most $2^j$ possible non-null values for $W$. Because $l + j = k$, $j \leq k - 1$, and $k > 1$, we have $(2^l + 1)2^j = 2^k + 2^j \leq 2^k + 2^{k-1} = 3 \cdot 2^{k-1} < 4 \cdot 2^{k-1} - 1 = 2^{k+1} - 1$. Since no non-null tuple is repeated in $r$, we have enough room to contain all the needed rows. This can be done by a grounding $r'$ of $r(XY)$ that replaces the null symbols of the column $Y$ on the first $2^k$ rows with 0s, and replaces the null-symbol rows with tuples from the Cartesian product of the projections of these first $2^k$ rows on $V$ and $W$, as before. □
Figure 2: The relation $r$ from the proof of Theorem 6 in the case $k = 3$, $m = 2$, where $X = \{X_1, X_2, X_3\}$, $Y = \{Y_1, Y_2\}$ and $R \setminus XY = \{Z_1, \ldots, Z_n\}$, and in the case $k = 2$, $m = 1$, where $X = \{X_1, X_2\}$, $Y = \{Y_1\}$ and $R \setminus XY = \{Z_1, \ldots, Z_n\}$.
For our running example, the incomplete relation $r$ of Table 1 shows that $\sigma = e \perp_p sg$ is not implied by the set $\Sigma$ consisting of $e \perp_p s$ and $es \perp_p g$. Indeed, while the grounding $w_1$ of $r$ in Table 2 satisfies $e \perp s$ and the grounding $w_2$ of $r$ in Table 3 satisfies $es \perp g$, there cannot be any grounding of $r$ that satisfies both. Theorem 6 shows that there is no $\mathfrak{I}_p$-deduction of $\sigma$ from $\Sigma$.
It is future work to investigate the axiomatisability for other fragments of PIAs.
# 3.4 Combining Certain and Possible IAs
The most general implication problem combines the classes of CIAs and PIAs. In this context, the following theorem shows that, when restricted to disjoint atoms, we may add PIAs to the set $\Sigma$ , and still obtain a complete axiomatisation.
Theorem 7. The set $( \Im _ { c } \cup \Im _ { p } \cup \Im _ { p \& c } ) \setminus \{ \mathcal { C } _ { c } , \mathcal { C } _ { p } \}$ forms a sound and complete axiomatisation for the restriction of the (CIA ∪ PIA, CIA)-implication problem to disjoint atoms.
Proof. The soundness of the axiomatisation again follows from the soundness of the inference rules.
We show that the axiomatisation is complete. Let $\Sigma$ be a set of disjoint CIAs and PIAs, and $X \perp_c Y$ a disjoint CIA. Suppose that $\Sigma \nvdash X \perp_c Y$. We show that $\Sigma \nvDash X \perp_c Y$.
We may assume that $X \perp_c Y$ is minimal in the sense that for all nonempty $X' \subseteq X$ and $Y' \subseteq Y$ with $X'Y' \neq XY$, we have $\Sigma \vdash X' \perp_c Y'$. If $X \perp_c Y$ is not minimal, we can remove attributes from $X$ and $Y$ until we obtain a minimal atom. It suffices to show the claim for the minimal atom, as decomposition $\mathcal{D}_c$ ensures that the claim must then hold also for the original atom. Note that by trivial independence $\mathcal{T}_c$ and symmetry $\mathcal{S}_c$, neither $X$ nor $Y$ is empty.
Let $Z = R \setminus XY$ and $\mathrm{Dom}(A) = \{0, 1, *\}$ for all $A \in R$. Define then $\mathrm{Dom}^*(A) = \mathrm{Dom}(A)$ for all $A \in XY$ and $\mathrm{Dom}^*(A) = \{0, *\}$ for all $A \in Z$. Let $A_1 \in X$, and define $r_1 = \{t \in \prod_{A \in R}(\mathrm{Dom}^*(A) \setminus \{*\}) \mid t(A_1) = \sum_{A \in R \setminus A_1} t(A) \bmod 2\}$ and $r_2 = \{t(*/A_1) \mid t \in r_1\}$. Let then $r = (r_1 \cup r_2, 1)$.
Now $r \nvDash X \perp_c Y$, because the null values can be filled in such a way that the resulting relation is just $r' = (r_1, 2)$, and hence there are tuples $t_1, t_2 \in r'$ such that $t_1(A_1) = 1$, $t_1(A) = 0$ for $A \in X \setminus A_1$, and $t_2(A) = 0$ for $A \in Y$, but there is no $t \in r'$ such that $t(X) = t_1(X)$ and $t(Y) = t_2(Y)$, because for all $t \in r'$, $t(A_1) = \sum_{A \in R \setminus A_1} t(A) \bmod 2$.
Now we show that $r \models \Sigma$. First note that it is possible to fill the null values in $r$ such that the resulting relation is $(\prod_{A \in R}(\mathrm{Dom}^*(A) \setminus \{*\}), 1)$, and therefore all of the disjoint PIAs in $\Sigma$ hold.
Note then that any way of filling the null values in $r$ (with 0 and 1) results in a relation such that for any $U \subseteq R$: if $U \subseteq Z$, then every attribute in $U$ is constant, and if $U \cap Z = \emptyset$ and $XY \nsubseteq U$, then $r(U) = (\prod_{A \in U}(\mathrm{Dom}(A) \setminus \{*\}), m)$ for some $m$. This means that showing that all of the disjoint CIAs in $\Sigma$ hold can be done as in the proof of Theorem 1 in (Geiger, Paz, and Pearl 1991).
We include the proof to keep the presentation self-contained. Suppose that $V \perp_c W \in \Sigma$. Assume first that $VW \cap XY = \emptyset$. Then $VW \subseteq Z$, so by the definition of $r$, every attribute in $VW$ is constant. Thus clearly $r \models V \perp_c W$.
Assume then that $VW \cap XY \neq \emptyset$ and $XY \nsubseteq VW$. We show that $r \models V \perp_c W$. Because every attribute in $Z$ is constant, it suffices to check that $r \models V \setminus Z \perp_c W \setminus Z$. But since $r((VW) \setminus Z) = (\prod_{A \in (VW) \setminus Z}(\mathrm{Dom}(A) \setminus \{*\}), m)$ for the function $m$ with $m(t) = \sum_{s((VW) \setminus Z) = t} 1(s)$, clearly $r \models V \setminus Z \perp_c W \setminus Z$.
Assume finally that $XY \subseteq VW$. We show that this results in a contradiction. Denote $V = X'Y'Z'$ and $W = X''Y''Z''$, where $X = X'X''$, $Y = Y'Y''$, and $Z'Z'' \subseteq Z$. By the minimality of $X \perp_c Y$ and symmetry $\mathcal{S}_c$, we have $\Sigma \vdash X' \perp_c Y'$ and $\Sigma \vdash Y \perp_c X''$. From $\Sigma \vdash X'Y'Z' \perp_c X''Y''Z''$, we obtain $\Sigma \vdash X'Y' \perp_c X''Y''$ by using decomposition $\mathcal{D}_c$. Then by applying exchange $\mathcal{E}_c$ to $\Sigma \vdash X' \perp_c Y'$ and $\Sigma \vdash X'Y' \perp_c X''Y''$, we obtain $\Sigma \vdash X' \perp_c X''Y'Y''$. Then by symmetry $\mathcal{S}_c$, we obtain $\Sigma \vdash YX'' \perp_c X'$. Then by applying exchange $\mathcal{E}_c$ to $\Sigma \vdash Y \perp_c X''$ and $\Sigma \vdash YX'' \perp_c X'$, we obtain $\Sigma \vdash Y \perp_c X''X'$. Then by applying symmetry $\mathcal{S}_c$, we have $\Sigma \vdash X \perp_c Y$, a contradiction. □
Moreover, in the disjoint case, the PIAs in $\Sigma$ do not affect whether a CIA is logically implied or not, as described in the following theorem.
Theorem 8. Let $\Sigma$ be a set of disjoint CIAs and PIAs, and $\sigma$ a disjoint CIA. Then $\Sigma \models \sigma$ if and only if $\Sigma \setminus \{\tau \in \Sigma \mid \tau \text{ is a PIA}\} \models \sigma$.
Proof. Let $\Sigma$ and $\sigma$ be as in the above theorem. By Theorem 7, $\Sigma \models \sigma$ if and only if $\Sigma \vdash_{(\mathfrak{I}_c \cup \mathfrak{I}_p \cup \mathfrak{I}_{p\&c}) \setminus \{\mathcal{C}_c, \mathcal{C}_p\}} \sigma$. Since no rule in $(\mathfrak{I}_c \cup \mathfrak{I}_p \cup \mathfrak{I}_{p\&c}) \setminus \{\mathcal{C}_c, \mathcal{C}_p\}$ has a PIA in the antecedent and a CIA in the consequent, we have $\Sigma \vdash_{(\mathfrak{I}_c \cup \mathfrak{I}_p \cup \mathfrak{I}_{p\&c}) \setminus \{\mathcal{C}_c, \mathcal{C}_p\}} \sigma$ if and only if $\Sigma \vdash_{\mathfrak{I}_c \setminus \{\mathcal{C}_c\}} \sigma$. The latter is clearly equivalent to $\Sigma \setminus \{\tau \in \Sigma \mid \tau \text{ is a PIA}\} \vdash_{\mathfrak{I}_c \setminus \{\mathcal{C}_c\}} \sigma$. Since all the atoms considered are disjoint, it does not matter whether we consider the set $\mathfrak{I}_c \setminus \{\mathcal{C}_c\}$ or $\mathfrak{I}_c$. Then by Theorem 3, $\Sigma \setminus \{\tau \in \Sigma \mid \tau \text{ is a PIA}\} \vdash_{\mathfrak{I}_c} \sigma$ if and only if $\Sigma \setminus \{\tau \in \Sigma \mid \tau \text{ is a PIA}\} \models \sigma$. □
The following example demonstrates that the disjointness assumption in the previous two theorems is crucial.
# Example 2. Let
$$
\Sigma = \{ A \bot _ { p } A , B \bot _ { p } B , C \bot _ { p } C , A \bot _ { c } C , B \bot _ { c } C \} .
$$
Then $\Sigma \nvdash_{\mathfrak{I}_c \cup \mathfrak{I}_p \cup \mathfrak{I}_{p\&c}} AB \perp_c C$, but $\Sigma \models AB \perp_c C$. For the first part, note that no rule in $\mathfrak{I}_c \cup \mathfrak{I}_p \cup \mathfrak{I}_{p\&c}$ with a PIA in the antecedent has a CIA in the consequent. Therefore it suffices to show that $\{A \perp_c C, B \perp_c C\} \nvdash_{\mathfrak{I}_c} AB \perp_c C$. By Theorem 3, this follows from the fact that $\{A \perp_c C, B \perp_c C\} \nvDash AB \perp_c C$, which is witnessed by the relation $r'$ of Figure 1.
To see that $\Sigma \models AB \perp_c C$, note that in any relation satisfying $\Sigma$, each of the columns $A$, $B$, and $C$ contains at most one distinct non-null value. Without loss of generality, assume this value is 0. Since the domain of each attribute contains at least two non-null values, any nulls can be filled so that each column has zeroes except at one position, which we set to 1. If there are no nulls, the column consists entirely of zeroes.
Let $r$ be a relation with $r \models \Sigma$. If $r \models C \perp_c C$, we clearly have $r \models AB \perp_c C$. Now assume that $r \nvDash C \perp_c C$. Then $r \models A \perp_c C$ and $r \nvDash C \perp_c C$ imply that the column $A$ contains only zeroes, i.e., $r \models A \perp_c A$. (If both $r \nvDash A \perp_c A$ and $r \nvDash C \perp_c C$ held, then the corresponding columns could be filled such that they have zeroes in every position except one position in each column where the value is one, as described above. Then $r \models A \perp_c C$ could not hold.) Similarly, $r \models B \perp_c C$ and $r \nvDash C \perp_c C$ imply that the column $B$ contains only zeroes, i.e., $r \models B \perp_c B$. But this means that $r \models AB \perp_c AB$, and hence $r \models AB \perp_c C$.
Corollary 1. The set $\Im _ { c } \cup \Im _ { p } \cup \Im _ { p \& c }$ does not form a complete axiomatisation for the (CIA $\cup$ PIA, CIA)-implication problem.
# 4 Computational Complexity
This section covers the computational complexity of PIAs and CIAs with respect to some key problems. We start our analysis with the combined complexity and the data complexity of model checking and conclude by considering the implication problem. In this section we assume that the domains of attributes are finite. The results do not depend on how the multiplicities of the tuples are encoded; that is, they can be written either in unary or in binary.
# 4.1 Complexity of Model Checking
Let $\mathcal { P }$ be a class of dependencies. The combined complexity problem for $\mathcal { P }$ is to decide, given a relation $r$ over attributes $A _ { 1 } , \ldots , A _ { n }$ , the associated (finite) domains $\operatorname { D o m } ( A _ { 1 } ) , \ldots , \operatorname { D o m } ( A _ { n } )$ , and a dependency $\sigma$ from $\mathcal { P }$ as the input, whether $r$ satisfies $\sigma$ . If the input dependency $\sigma$ is fixed, the problem is called the data complexity problem of $\sigma$ . Given a complexity class C, we say that the data complexity problem for $\mathcal { P }$ is
• in $\mathbf { C }$ if for any $\sigma \in \mathcal { P }$ , the data complexity problem of $\sigma$ is in $\mathbf { C }$ ;
• $\mathbf { C }$ -hard if for some $\sigma \in \mathcal { P }$ , the data complexity problem of $\sigma$ is $\mathbf { C }$ -hard; and
• $\mathbf { C }$ -complete when it is both in $\mathbf { C }$ and $\mathbf { C }$ -hard.
First we show that for PIAs, already the data complexity is NP-complete.
Theorem 9. The combined complexity and data complexity problems for possible independence are both NP-complete. The NP-hardness holds for any subclass of PIAs that contains an independence atom of the form $A \bot _ { p } B C$ , where $A , B , C$ are distinct attributes.
Proof. The membership in NP is straightforward for combined complexity (and therefore also for data complexity).
For the NP-hardness, it suffices to consider only data complexity. Letting $\sigma : = V P \bot _ { p } C$ , we construct a reduction from the satisfiability problem (SAT) to the data complexity of $\sigma$ . The input of SAT is a Boolean formula in conjunctive normal form: $\phi = C _ { 1 } \wedge \cdots \wedge C _ { m }$ , where $C _ { i } = l _ { i , 1 } \vee \cdots \vee l _ { i , n }$ , $i \in [ m ]$ , are such that each $l _ { i , j }$ , $j \in [ n ]$ , is a propositional variable $p$ or a negated propositional variable $\overline { { p } }$ . We construct a relation $r$ from $\phi$ in the following way. Assuming that $W$ is the set of variables appearing in $\phi$ , and representing each tuple $t : ( V , P , C ) \mapsto ( a , b , c )$ simply as $( a , b , c )$ , we construct the relation $r$ as follows:
1. For each $p \in W$ , add $( * , + , p )$ and $( * , - , p )$ to $r$ . Furthermore, for all $q \in W \setminus \{ p \}$ , add $( q , * , p )$ and $( \overline { { q } } , * , p )$ to $r$ .
2. For each clause $C _ { i } = l _ { i , 1 } \lor \cdots \lor l _ { i , n }$ , add $( \overline { { l _ { i , 1 } } } , * , i ) , \ldots , ( \overline { { l _ { i , n } } } , * , i )$ to $r$ . Add also $( * , * , i )$ with multiplicity $n - 1$ , and $( * , + , i )$ with multiplicity 1 to $r$ . Furthermore, assuming $l _ { i , 1 } , \ldots , l _ { i , n }$ are literals over variables of some set $W ^ { \prime }$ , add $( v , * , i )$ and $( \overline { { v } } , * , i )$ for all $v \in W \setminus W ^ { \prime }$ .
Above it is to be understood that the doubly-nested negation $\overline { { \overline { { p } } } }$ of a variable $p$ is $p$ itself. Also the multiplicity of each tuple is assumed to be 1 unless otherwise stated. The construction of $r$ is illustrated in Table 6. It suffices to show that $\phi$ is satisfiable if and only if $r \models V P \bot _ { p } C$ .
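To make the reduction concrete, the construction of $r$ from Items 1 and 2 can be sketched in Python. This is an illustrative sketch only; the function name and the literal encoding (a variable as `"p1"`, its negation as `"~p1"`, the null value as `"*"`) are our own and not taken from the paper.

```python
from collections import Counter

STAR = "*"  # the null value, to be grounded later

def build_relation(clauses):
    """Build the multiset relation r over (V, P, C) from a CNF formula.

    `clauses` is a list of clauses, each a list of literal strings.
    Follows Items 1-2 of the reduction in the proof of Theorem 9.
    """
    neg = lambda l: l[1:] if l.startswith("~") else "~" + l
    variables = {l.lstrip("~") for cl in clauses for l in cl}
    r = Counter()  # maps a (V, P, C) tuple to its multiplicity
    # Item 1: tuples for each propositional variable p
    for p in sorted(variables):
        r[(STAR, "+", p)] += 1
        r[(STAR, "-", p)] += 1
        for q in sorted(variables - {p}):
            r[(q, STAR, p)] += 1
            r[(neg(q), STAR, p)] += 1
    # Item 2: tuples for each clause C_i (1-indexed)
    for i, cl in enumerate(clauses, start=1):
        for l in cl:
            r[(neg(l), STAR, i)] += 1
        r[(STAR, STAR, i)] += len(cl) - 1      # multiplicity n - 1
        r[(STAR, "+", i)] += 1                 # multiplicity 1
        clause_vars = {l.lstrip("~") for l in cl}
        for v in sorted(variables - clause_vars):
            r[(v, STAR, i)] += 1
            r[(neg(v), STAR, i)] += 1
    return r
```

Running this on the example instance of Table 6 ($C_1 = p_2 \vee p_3$, $C_2 = p_1 \vee \overline{p_2} \vee p_3$, $C_3 = \overline{p_3}$) reproduces, for instance, the tuple $(*,*,2)$ with multiplicity 2, since $C_2$ has three literals.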
Suppose first that $\phi$ is satisfiable by some variable assignment $s$ . We construct a grounding $\boldsymbol { r } ^ { \prime }$ of the relation $r$ in the following way. We ensure that each tuple has the form $( l , + , * )$ when $s ( l ) = 1$ , and $( l , - , * )$ when $s ( l ) = 0$ , where $*$ represents an arbitrary value of the attribute $C$ . It is easy to see that this is possible whenever $s$ satisfies $\phi$ . Moreover, the grounding $\boldsymbol { r } ^ { \prime }$ obtained this way satisfies the independence atom $V P \bot C$ .
For the converse direction, suppose $r$ satisfies $V P \bot _ { p } C$ , and let $\boldsymbol { r } ^ { \prime }$ be a grounding of $r$ satisfying $V P \bot C$ . Let us first consider how the tuples introduced in Item 1 are grounded. For each variable $p$ there are only two possibilities: either the pair $( p , + , v )$ and $( { \overline { { p } } } , - , v )$ , or the pair $( p , - , v )$ and $( { \overline { { p } } } , + , v )$ must appear in the grounding $\boldsymbol { r } ^ { \prime }$ , consistently for all values $v$ of $C$ . Thus the grounding of Item 1 represents an assignment $s$ of the variables in $\phi$ . Before considering Item 2, note that the values of the pairs $( V , P )$ must be consistent between the tuples obtained from Item 1 and those obtained from Item 2; this is ensured by the independence atom. It is then easy to see that any grounding of the tuples obtained from Item 2 entails that $s$ is a satisfying assignment. In particular, for each clause $C _ { i }$ , having one tuple of the form $( * , + , i )$ in $r$ ensures that for at least one literal $l$ there is a tuple of the form $( l , + , i )$ in the grounding. This concludes the proof. □
Table 6: The relation $r$ obtained via the reduction from an example SAT instance $\phi = C _ { 1 } \wedge C _ { 2 } \wedge C _ { 3 }$ , where $C _ { 1 } = p _ { 2 } \vee p _ { 3 }$ , $C _ { 2 } =$ $p _ { 1 } \lor \overline { { p _ { 2 } } } \lor p _ { 3 }$ , and $C _ { 3 } = \overline { { p _ { 3 } } }$ .
Although the model checking problem for PIAs is NP-complete in general, it can be shown that for unary PIAs (i.e., PIAs $X \bot _ { p } Y$ where $| X | = | Y | = 1$ ) model checking lies in polynomial time. As shown next, this is achieved by recasting the question as a maximum flow problem.
Let us start by introducing a few necessary definitions. A network is a directed graph $G = ( V , E )$ , where each edge $( u , v ) \in E$ is associated with a capacity $c ( u , v ) \geq 0$ . The network is called a flow network if it is furthermore associated with a source node $s \in V$ and a sink node $t \in V$ . A flow is a mapping $f : E \to \mathbb { R } _ { \geq 0 }$ that satisfies:
1. capacity constraint: $f ( u , v ) \leq c ( u , v )$ for all $( u , v ) \in E$ ,
2. flow conservation: $\sum _ { ( v , u ) \in E } f ( v , u ) = \sum _ { ( u , v ) \in E } f ( u , v )$ for all $u \in V \setminus \{ s , t \}$ .
The total flow out of the source $s$ is defined as $\sum _ { ( s , v ) \in E } f ( s , v )$ . The maximum flow problem is to determine the maximum total flow in a given flow network. In what follows we use the well-known integral flow theorem, which states that the maximum total flow is integral if all the edge capacities are integral (see, e.g., (Ahuja, Magnanti, and Orlin 1993)).
Theorem 10. The data complexity and combined complexity problems for unary possible independence are in polynomial time.
Proof. Let $r = ( r ^ { \prime } , m )$ be a relation and $A \bot _ { p } B$ a PIA over single attributes $A$ and $B$ . We write $r _ { \times }$ for the Cartesian product of the non-null (set) projections of $r$ on $A$ and $B$ ; that is, $r _ { \times }$ is the set of tuples $t$ over $\{ A , B \}$ such that $t ( A ) \in r ^ { \prime } ( A ) \setminus \{ * \}$ and $t ( B ) \in r ^ { \prime } ( B ) \setminus \{ * \}$ . The flow network is then constructed in the following way. The nodes consist of the tuples in $\boldsymbol { r } ^ { \prime }$ and $r _ { \times }$ , as well as fresh values $s _ { 0 }$ and $s _ { 1 }$ denoting the source and the sink, respectively. The edges and capacity constraints are built as follows:
1. For each $t \in r ^ { \prime }$ there is an edge $( s _ { 0 } , t )$ with capacity constraint $f ( s _ { 0 } , t ) \leq m ( t )$ .
2. For each $t \in r ^ { \prime }$ and $t ^ { \prime } \in r _ { \times }$ such that $t ^ { \prime }$ is a grounding of $t$ , there is an edge $( t , t ^ { \prime } )$ with capacity constraint $f ( t , t ^ { \prime } ) \leq 1$ .
3. For each $t \in r _ { \times }$ there is an edge $( t , s _ { 1 } )$ with capacity constraint $f ( t , s _ { 1 } ) \leq 1$ .
We claim that the maximum total flow is $| \boldsymbol { r } _ { \times } |$ if and only if $r \Vdash A \bot _ { p } B$ .
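The claim can be exercised on small instances with any maximum-flow routine. Below is an illustrative, self-contained Python sketch (Edmonds–Karp on a dict-of-dicts graph; all function names and the tuple encoding, with `"*"` as the null value, are our own) that builds the network of Items 1–3 for a unary PIA $A \bot_p B$ and checks whether the maximum flow equals $|r_\times|$:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph."""
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse residual edges
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:            # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][w] for u, w in path)      # bottleneck capacity
        for u, w in path:
            res[u][w] -= b
            res[w][u] += b
        flow += b

def satisfies_unary_pia(r, star="*"):
    """Decide whether r satisfies A bot_p B for unary attributes A, B.

    `r` maps tuples (a, b), possibly containing the null `star`,
    to their multiplicities m(t)."""
    vals_a = {a for a, _ in r if a != star}
    vals_b = {b for _, b in r if b != star}
    cross = [(a, b) for a in vals_a for b in vals_b]   # the set r_x
    cap = {"src": {}}
    for i, (t, m) in enumerate(r.items()):
        cap["src"][("r", i)] = m                       # Item 1: capacity m(t)
        cap[("r", i)] = {("x", tx): 1 for tx in cross  # Item 2: groundings
                         if t[0] in (star, tx[0]) and t[1] in (star, tx[1])}
    for tx in cross:                                   # Item 3: edges to sink
        cap.setdefault(("x", tx), {})["snk"] = 1
    return max_flow(cap, "src", "snk") == len(cross)
```

For instance, the relation $\{(a_1,b_1), (a_2,b_2)\}$ has maximum flow $2 < |r_\times| = 4$ and is rejected, while adding the tuple $(*,*)$ with multiplicity 2 makes all four cross tuples reachable and the check succeeds.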
Suppose first the maximum total flow $\sum _ { v \in V } f ( s , v )$ is $| \boldsymbol { r } _ { \times } |$ . Consequently, $f$ assigns each incoming edge $( t , s _ { 1 } )$ of the sink the maximum capacity: $f ( t , s _ { 1 } ) = 1$ . Due to flow conservation and the integral flow theorem, for each tuple $t ^ { \prime } \in r _ { \times }$ we find exactly one tuple $t \in \boldsymbol { r } ^ { \prime }$ such that $f ( t , t ^ { \prime } ) = 1$ . Conversely, due to Item 1 and flow conservation, a tuple $t \in r ^ { \prime }$ is associated with no more than $m ( t )$ many tuples $t ^ { \prime } \in r _ { \times }$ such that $f ( t , t ^ { \prime } ) = 1$ . Since $f ( t , t ^ { \prime } ) = 1$ implies $t ^ { \prime }$ is a grounding of $t$ , we obtain $r \Vdash A \bot B$ .
For the converse direction, assume $r \models A \bot _ { p } B$ . Then we find a grounding $\boldsymbol { r } ^ { \prime }$ of $r$ satisfying $A \bot B$ . Consequently, the set $r _ { \times }$ , defined from $r$ as above, is a subset of $\boldsymbol { r } ^ { \prime }$ . It suffices to define a flow function $f$ which has a total flow $| \boldsymbol { r } _ { \times } |$ . Note that the maximum total flow cannot be greater than $| \boldsymbol { r } _ { \times } |$ due to the capacity constraints in Item 3.
Now, it is possible to choose a binary relation $E \subseteq r ^ { \prime } \times r _ { \times }$ (see Figure 3) such that
• for each $t \in r ^ { \prime }$ there are at most $m ( t )$ distinct $t ^ { \prime } \in r _ { \times }$ such that $( t , t ^ { \prime } ) \in E$ ; and
• for each $t \in r _ { \times }$ there is exactly one $t ^ { \prime } \in r ^ { \prime }$ such that $( t ^ { \prime } , t ) \in E$ .
We let $f ( t , t ^ { \prime } ) = 1$ if $( t , t ^ { \prime } ) \in E$ , and $f ( t , t ^ { \prime } ) = 0$ if $( t , t ^ { \prime } ) \in r ^ { \prime } \times r _ { \times } \setminus E$ . The remaining values of $f$ , for the edges leaving the source and the edges entering the sink, are uniquely determined by flow conservation. It follows by construction that $f$ satisfies all the capacity constraints. We conclude that the total flow of $f$ is $| \boldsymbol { r } _ { \times } |$ , as required. □
Figure 3: One possibility for the relation $E$ is to connect tuples on the same row from both tables.
We have now shown that the data complexity of model checking is NP-complete for the class of PIAs $X \bot _ { p } Y$ where $X$ or $Y$ contains at least two attributes. When both $X$ and $Y$ are single attributes, we established that the problem is in polynomial time. We conclude this section by showing that for CIAs the data complexity is in FO. For the next proposition, recall that the domain of each attribute has at least two elements.
Proposition 1. $r \Vdash X \bot _ { c } X$ if and only if $r ( X )$ is complete and satisfies $X \bot X$ .
Theorem 11. The data complexity problem for IAs and CIAs is in FO. The combined complexity problem for CIAs is in polynomial time.
Proof. Clearly it suffices to show the claim for CIAs. We claim that $r \models X \bot _ { c } Y$ iff one of the following three properties holds:
1. $r \Vdash X \bot _ { c } X$ ,
2. $r \Vdash Y \bot _ { c } Y$ , or
3. both of the following hold: (a) for all complete $t \in r ( X )$ and $t ^ { \prime } \in r ( Y )$ , there is $t ^ { \prime \prime } \in r$ such that $t ^ { \prime \prime } ( X ) = t$ and $t ^ { \prime \prime } ( Y ) = t ^ { \prime }$ ; and (b) any grounding of $r ( X Y )$ is included in $r ( X Y )$ .
We show first that, assuming $r \not\models X \bot _ { c } X$ and $r \not\models Y \bot _ { c } Y$ , the failure of either Item 3a or Item 3b leads to $r \not\models X \bot _ { c } Y$ . Suppose Item 3a is not true. Let $t \in r ( X )$ and $t ^ { \prime } \in r ( Y )$ be two complete tuples such that $t ^ { \prime \prime } ( X ) \neq t$ or $t ^ { \prime \prime } ( Y ) \neq t ^ { \prime }$ for every $t ^ { \prime \prime } \in r$ . Clearly, for each $t ^ { \prime \prime } \in r$ we can then define a grounding $s$ such that $s ( X ) \neq t$ or $s ( Y ) \neq t ^ { \prime }$ . For this, note that the domain of each attribute is at least of size 2. The obtained grounding of $r$ then does not satisfy $X \bot Y$ , which means that $r$ does not satisfy $X \bot _ { c } Y$ .
Suppose then that Item 3a holds but Item 3b does not. Let $s \not \in r ( X Y )$ be a grounding of a tuple $t \in r ( X Y )$ . Item 3a entails that $s ( X ) \not \in r ( X )$ or $s ( Y ) \notin r ( Y )$ . By symmetry, we may assume that $s ( X ) \not \in r ( X )$ . Since $r \not\models Y \bot _ { c } Y$ , either $r \not\models Y \cap X \bot _ { c } Y \cap X$ or $r \not\models Y \setminus X \bot _ { c } Y \setminus X$ . Since $r \not\models Y \cap X \bot _ { c } Y \cap X$ entails $r \not\models X \bot _ { c } Y$ by the decomposition and symmetry rules of certain independence, we may assume that $r \not\models Y \setminus X \bot _ { c } Y \setminus X$ . We then construct a grounding $\boldsymbol { r } ^ { \prime }$ of $r$ with the following properties:
• One occurrence of the tuple $t$ in $r$ is grounded to $s ^ { \prime }$ such that $s ^ { \prime } ( X ) = s ( X )$ .
• All the remaining tuples in $r$ are grounded to tuples $s ^ { \prime \prime }$ such that $s ^ { \prime \prime } ( X ) \neq s ( X )$ .
• The grounding with respect to $Y \setminus X$ is such that $r ^ { \prime } \not\models Y \setminus X \bot Y \setminus X$ .
It is not difficult to see that $r ^ { \prime } \not \Vdash X \bot Y$ , as required.
For the converse direction, suppose one of the three listed properties holds. In the case of $r \Vdash X \bot _ { c } X$ or $r \Vdash Y \bot _ { c } Y$ , we obtain $r \models X \bot _ { c } Y$ by the constancy, symmetry, and trivial independence rules of certain independence. Furthermore, $r \models X \bot _ { c } Y$ is a straightforward consequence of Items 3a and 3b. This concludes the proof of the claim.
A relation $r$ over an ordered list of $n$ attributes can be interpreted as a first-order structure $\mathfrak { A } _ { r } = ( U , f , D _ { 1 } , \ldots , D _ { n } , 0 )$ , where $U : = D _ { 1 } \cup \dots \cup D _ { n } \cup \{ 0 , \ldots , m \}$ , $f : U ^ { n } \to \{ 0 , \ldots , m \}$ is a function representing $r$ , and $D _ { i }$ , for $i \in [ n ]$ , is the domain of the $i$ th attribute in $r$ . The attributes of $X Y$ are furthermore assumed to occupy fixed positions in the ordered list of attributes. Then, using Proposition 1 and the claim proven above, it is straightforward to write a first-order formula $\phi$ such that ${ \mathfrak { A } } _ { r } \models \phi$ iff $r \models X \bot _ { c } Y$ . We conclude that the data complexity problem for CIAs is in FO. Regarding combined complexity, the items of the claim can be checked in polynomial time in the combined size of the relation and the CIA. □
# 4.2 Complexity of Implication Problem
From the results of Section 3, we immediately obtain the following theorem concerning the complexity of implication problems.
Theorem 12. The implication problems of Theorems 3, 6, and 7 are in polynomial time.
Proof. (Theorem 3.) By Theorem 4, the implication problem for CIAs is equivalent to the implication problem for IAs, so the cubic time algorithm (Geiger, Paz, and Pearl 1991) can be used.
(Theorem 6.) The theorem states that the set $\Im _ { p }$ is sound and complete for the $( \mathrm { P I A } , \mathrm { P I A } ^ { * } )$ -implication problem. Deciding whether $X \bot _ { p } Y$ can be derived from $\Sigma$ using the inference system ${ \mathfrak { I } } _ { p }$ is in polynomial time. When both $X$ and $Y$ are non-empty, it suffices to remove from each the attributes that $\Sigma$ designates as constants, yielding reduced sets $X ^ { * }$ and $Y ^ { * }$ . One then checks whether $\Sigma$ contains a PIA $X ^ { \prime } \bot _ { p } Y ^ { \prime }$ or $Y ^ { \prime } \bot _ { p } X ^ { \prime }$ such that $X ^ { * } \subseteq X ^ { \prime }$ and $Y ^ { * } \subseteq Y ^ { \prime }$ . If either $X$ or $Y$ is empty, the trivial independence axiom applies.
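The decision procedure described in this paragraph can be written out as a short sketch. This is illustrative Python of our own; we assume the set of attributes that $\Sigma$ designates as constants has already been extracted, and all names are hypothetical:

```python
def implies_pia(sigma, constants, x, y):
    """Decide whether X bot_p Y is derivable from Sigma in the system I_p.

    sigma:     iterable of PIAs, each a pair (X', Y') of frozensets
    constants: attributes designated as constant by Sigma (precomputed)
    x, y:      attribute sets of the queried PIA X bot_p Y
    """
    if not x or not y:
        return True                    # trivial independence axiom
    # Drop constant attributes, yielding the reduced sets X* and Y*.
    xs, ys = set(x) - constants, set(y) - constants
    # Check whether Sigma contains X' bot_p Y' (or Y' bot_p X')
    # with X* subset of X' and Y* subset of Y'.
    return any((xs <= xp and ys <= yp) or (xs <= yp and ys <= xp)
               for xp, yp in sigma)
```

Each containment test is linear in the size of the atoms involved, so the whole check runs in polynomial time, consistent with Theorem 12.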
(Theorem 7.) Let $\Sigma$ be a set of disjoint CIAs and PIAs, and let $\sigma$ be a disjoint CIA. By Theorems 4 and 8, $\Sigma \models \sigma$ if and only if $i n d ( \Sigma \setminus \{ \tau \in \Sigma \mid \tau \ \mathrm { i s \ a \ P I A } \} ) \models i n d ( \sigma )$ . This is an instance of the implication problem for IAs, and therefore the problem is in cubic time (Geiger, Paz, and Pearl 1991). □

Abstract. We initiate an investigation of how the fundamental concept of independence can be represented effectively in the presence of incomplete information. The concepts of possible and certain independence are proposed, and first results regarding the axiomatisability and computational complexity of implication problems associated with these concepts are established. In addition, several results for the data and the combined complexity of model checking are presented. The findings help reduce computational overheads associated with the processing of updates and answering of queries.

Categories: cs.DB
# 1. Introduction
Cone-beam computed tomography (CBCT) has become an essential imaging modality in modern radiation therapy. Mounted on linear accelerator and C-arm gantries, flat-panel CBCT systems provide in-room volumetric imaging that enables sub-millimeter patient setup verification, adaptive replanning based on the anatomy-of-the-day (e.g., intraoperative in-room brachytherapy), and intra-fraction motion monitoring [1]. For external beam treatments, daily CBCT reduces setup errors and improves target coverage through daily anatomy verification. However, slow CBCT acquisitions, which typically require 60–90 seconds of gantry rotation, introduce several clinically significant limitations. Respiratory and internal organ motion during this period degrade image quality through blurring and ghosting [2-4]. In thoracic regions, lung lesions and vessel walls may appear distorted due to breathing cycles; in abdominal and pelvic regions, peristalsis and bowel gas can lead to inconsistent contrast and motion artifacts. Involuntary patient movements—such as muscle twitches, discomfort-related shifts, or coughing—further contribute to streaking and misregistration, compromising spatial accuracy, which leads to elevated margins during treatment planning.

In interventional workflows, full-scan CBCT presents additional challenges. The prolonged acquisition time can interrupt procedural steps such as needle or applicator placement and often necessitates extended sedation or general anesthesia, increasing the risk of anesthetic complications [5]. In addition to time- and motion-related challenges, full-arc CBCT is often limited by spatial constraints [6]. In operating rooms, brachytherapy suites, and interventional radiology settings, equipment such as anesthesia machines, sterile drapes, and applicators may obstruct the gantry’s path, making a full $3 6 0 ^ { \circ }$ rotation infeasible. As a result, image acquisition is often restricted to partial arcs to avoid collisions between the gantry, couch, and patient.
Although alternative acquisition strategies, such as non-circular or flexible trajectories, may reduce collision risk and mitigate metal artifacts, they often require specialized calibration and are not widely implemented. These limitations can disrupt workflow, increase setup variability, and reduce image quality, particularly for mobile patient anatomy or in a crowded procedural environment. To address these challenges, limited-angle CBCT acquisition has gained increasing attention as a potential alternative to full-arc scanning. By restricting the gantry rotation to a single arc segment—typically $9 0 ^ { \circ }$ or less—clinicians can reduce scan time, minimize motion artifacts, and avoid mechanical collisions, while using standard imaging hardware.
However, the incomplete projection data introduce artifacts—manifesting as shading, streaking, and blurring—that degrade image quality. For instance, the Feldkamp–Davis–Kress (FDK) algorithm [7]—the most widely used analytical algorithm—is accurate and efficient with complete projection data, but under limited-angle acquisition it produces shadow artifacts because of the missing information [8]. To address this challenging problem, a variety of approaches have been developed for limited-angle CBCT reconstruction. These methods are generally categorized into two groups: model-based iterative reconstruction (MBIR) and data-driven deep learning frameworks. However, each approach has distinct limitations: MBIR addresses the data-incompleteness problem but brings new challenges, notably a heavy computational burden and strong dependence on carefully tuned regularization parameters. For example, total-variation-regularized (TV) MBIR can suppress streak artifacts, yet an ill-chosen TV weight often produces staircase effects in homogeneous regions and blurs fine anatomical detail [9,10]. Deep-learning-based reconstruction algorithms have recently shown great promise, yet they introduce a new set of limitations—most notably the risk of hallucinated anatomy [11], inadequate enforcement of data consistency [12], and poor generalizability outside the training domain.
Recent deep-learning efforts have substantially advanced limited-angle CBCT reconstruction. Early work treated the task as post-image processing, employing convolutional neural networks (e.g., U-Net) to transform streak-laden FBP or FDK reconstructions into artifact-reduced images [13,14]. However, because the projection data are no longer enforced during inference, these post-processing networks often leave residual streaks and may hallucinate anatomy, motivating the shift toward physics-guided iterative networks. To strengthen data consistency, subsequent work unfolded MBIR into trainable deep networks, mapping each iterative update onto a learnable block that jointly enforces data fidelity and regularization [15-17]. Although this physics-guided design suppresses streaks more effectively than image-domain CNNs, its performance remains highly sensitive to predefined step sizes and regularization weights, depends on large, paired datasets tailored to a particular scanning geometry, and can still leave staircase artifacts or residual streaks when acquisition conditions differ from those seen during training. Most recently, researchers have introduced attention-based and Transformer-hybrid networks to solve this problem. By inserting self-attention blocks—often in a Swin (Shifted Window Vision Transformer) or ViT (Vision Transformer) configuration—these models capture long-range correlations that help restore extended anatomical structures in limited-angle data [18]. Hybrid designs combine convolutional layers for fine textures with transformer attention for global context, achieving clearer organ boundaries and fewer streaks than either post-processing CNNs or unfolded MBIR networks. Even so, their performance still depends on large, anatomy-specific training sets, and the substantial model size can lead to overfitting and domain-shift vulnerability when the acquisition geometry or patient population differs from that used for training.
In parallel, diffusion-based generative models have been investigated for limited-angle CT/CBCT [19,20]. These networks learn a stepwise denoising process that can inpaint the missing sinogram wedge or synthesize a full image from noise, producing sharp textures and allowing uncertainty sampling. Yet when a diffusion model is tasked with jumping directly from incomplete projections to a reconstructed CT/CBCT image, it operates with only weak physics-based conditioning, making it easier for hallucinated structures to appear. As a result, there is still no limited-angle CT/CBCT method that both follows the measured data closely and avoids creating false structures. This study aims to fill this gap.
In this study, we propose a Limited-angle Geometry-integrated Cycle-domain (LA-GICD) framework for LA CBCT reconstruction that combines three key components into a unified pipeline. First, a projection-domain denoising diffusion model (Projection-DDPM) learns to complete missing projections in a data-driven manner, without requiring handcrafted priors. Second, a geometry transformation module (GTM) projects the completed sinogram into image space using the known cone-beam scan geometry, thereby ensuring physical consistency. This step defines the "geometry-integrated" aspect of the framework and warrants explicit clarification to emphasize its role in preserving ray fidelity. Third, an image-domain CBCT reconstruction denoising diffusion model (Image-DDPM) further refines the reconstructed CBCT volume by suppressing artifacts and restoring fine anatomical detail. Here, the term “cycle-domain” refers to the projection→image→projection loop enforced via analytic forward- and back-projection operators, ensuring that the completed sinograms can be reconstructed into volumes that, when re-projected, faithfully match the measured data. By enforcing cyclic consistency between the projection and image domains, the LA-GICD framework enables more accurate and robust reconstructions. This dual-domain, geometry-aware design improves generalization in real-world limited-angle settings, where conventional methods often fail due to undersampling and lack of physical constraints. The key methodological contributions are summarized as follows.
• LA-GICD. We present a limited-angle CBCT framework capable of reconstructing volumetric images from a single $9 0 ^ { \circ }$ acquisition arc using one unified model.
• GTM. A fixed, analytic cone-beam projector/back-projector injects exact geometric priors at every iteration, consistently aligning projection and image spaces and enhancing robustness across scanners and angle deficits.
• GICD strategy. Two linked diffusion modules—a Projection-DDPM for projection completion and an Image-DDPM for image denoising and refinement—operate in a closed projection–image–projection loop that alternates global inpainting and fine-detail refinement, enforcing consistency with measured data and preventing hallucinated anatomy.
# 2. Materials and Methods
# 2.1 LA-GICD framework
# 2.1.1. Overview
We propose a novel deep generative framework, named LA-GICD, for high-fidelity CBCT reconstruction from limited-angle projections. The proposed architecture mitigates the ill-posedness of limited-angle reconstruction by integrating data-driven priors with explicit geometric consistency constraints.
The pipeline contains three fully differentiable blocks: the Projection-DDPM first converts $1 3 5 ^ { \circ } - 2 2 5 ^ { \circ }$ sinograms into full-view projections through conditional denoising diffusion; the GTM then maps these projections to CBCT volumes by filtered back-projection while preserving scanner geometry; the Image-DDPM finally refines the GTM output, removing residual artifacts and sharpening anatomy. A GICD loss enforces bidirectional consistency between projection and image spaces. Both DDPMs share one noise schedule, and the GTM lets gradients flow from the final image loss back to projection synthesis. Training uses a pixel-wise CBCT reconstruction loss, a projection cycle loss, and edge-aware penalties that preserve high-frequency anatomy, so the network can fill missing angles yet still respect geometric fidelity. A similar cycle-domain, geometry-integrated diffusion design was first demonstrated in our earlier patient-specific CBCT frameworks, including the Cycle-domain Geometry-integrated DDPM (CG-DDPM) [21] model for single-view CBCT reconstruction and the Patient-specific Physics-integrated DDPM (PC-DDPM) approach for real-time Optical Surface-Derived CBCT (OSD-CBCT) [22] synthesis.
# 2.1.2 Projection-DDPM
The Projection-DDPM module is designed to generate a complete set of synthetic projections $\widehat { P } _ { 1 : 3 6 0 ^ { \circ } }$ from limited-angle projections ${ \hat { P } } _ { 1 3 5 ^ { \circ } : 2 2 5 ^ { \circ } }$ . This task is formulated as a conditional generative process within the DDPM framework. The model learns to map from Gaussian noise to high-fidelity, full-view projections conditioned on the limited-angle projection input. In the DDPM framework, the forward process gradually perturbs a clean data sample with Gaussian noise through a Markov chain [23]; here, a clean projection image $P$ is gradually perturbed as follows:
$$
q ( P _ { t } | P _ { t - 1 } ) = \mathcal { N } \big ( P _ { t } ; \sqrt { 1 - \beta _ { t } } \, P _ { t - 1 } , \beta _ { t } \mathbf { I } \big )
$$
with a predefined noise schedule $\{ \beta _ { t } \} _ { t = 1 } ^ { T }$ . The marginal distribution at any time step $t$ can be written as:
$$
q ( P _ { t } | P _ { 0 } ) = \mathcal { N } \big ( P _ { t } ; \sqrt { \bar { \alpha } _ { t } } \, P _ { 0 } , ( 1 - \bar { \alpha } _ { t } ) { \bf I } \big )
$$
where $\alpha _ { t } = 1 - \beta _ { t }$ and $\overline { { \alpha _ { t } } } = \prod _ { s = 1 } ^ { t } \alpha _ { s }$ .
The reverse process is parameterized by a neural network $\epsilon _ { \theta }$ to predict the noise added at each step:
$$
P _ { t - 1 } ^ { s y n } = \frac { 1 } { \sqrt { \alpha _ { t } } } \left ( P _ { t } ^ { s y n } - \frac { 1 - \alpha _ { t } } { \sqrt { 1 - \overline { { \alpha _ { t } } } } } \cdot \epsilon _ { \theta } ( P _ { t } ^ { s y n } , t , P _ { 0 } ^ { r e a l } ) \right ) + \sigma _ { t } \cdot z , \quad z \sim \mathcal { N } ( 0 , \mathbf { I } )
$$
The model is trained to minimize a mean squared error between the predicted noise and true noise:
$$
\mathcal { L } _ { P r o j \text { - } D D P M } = \mathbb { E } _ { P _ { 0 } ^ { s y n } , \epsilon , t } \left[ \left\| \epsilon - \epsilon _ { \theta } \big ( P _ { t } ^ { s y n } , P _ { 0 } ^ { r e a l } , t \big ) \right\| _ { 2 } ^ { 2 } \right]
$$
To ensure anatomical plausibility under the limited-angle constraint, we condition the denoising network $\epsilon _ { \theta }$ on the available real projection data. Specifically, the U-Net–based denoiser receives two inputs at each denoising step: the noisy synthesized (full-view) projection $P _ { t } ^ { s y n }$ and the real limited-angle projection $P _ { 0 } ^ { r e a l }$ . The latter acts as a geometric and anatomical prior, providing essential constraints that guide the synthesis toward physically plausible solutions. This conditioning encourages consistency with the measured data while allowing the network to infer missing information beyond the limited angular coverage. The final output of this module is a clean synthesized projection $P _ { 0 } ^ { s y n }$ , which approximates a full-view projection. These synthesized projections are subsequently passed to the GTM to produce intermediate CBCT volumes for downstream refinement.
All variables are listed below (Figure 1, Equations 1–4):
$P_t^{real}$: ground-truth full-view projection at timestep $t$
$P_t^{syn}$: noisy synthesized projection at timestep $t$
$P_0^{syn}$: clean synthesized projection (final output of the reverse process)
$P_0^{real}$: limited-angle projections ($135^{\circ}$–$225^{\circ}$, conditional input)
$\bar{\alpha}_t$: cumulative noise factor, $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$
$\alpha_t$: noise-schedule scalar at timestep $t$
$\epsilon \sim \mathcal{N}(0, \mathbf{I})$: Gaussian noise
$\epsilon_\theta(\cdot)$: neural-network prediction of the noise
$\sigma_t$: standard deviation of the reverse-process noise at step $t$
$z \sim \mathcal{N}(0, \mathbf{I})$: Gaussian noise in the reverse process
$t \in \{1, \dots, T\}$: timestep index in the diffusion process
$T$: total number of diffusion steps
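As a minimal sketch of the forward noising and noise-prediction objective above, the two operations can be written in a few lines of PyTorch (the function names `forward_noise` and `ddpm_loss` and the tensor shapes are illustrative, not the authors' implementation):

```python
import torch

def forward_noise(p0: torch.Tensor, alpha_bar_t: torch.Tensor):
    """Sample P_t ~ q(P_t | P_0) = N(sqrt(abar_t) * P_0, (1 - abar_t) * I)."""
    eps = torch.randn_like(p0)  # epsilon ~ N(0, I)
    pt = alpha_bar_t.sqrt() * p0 + (1.0 - alpha_bar_t).sqrt() * eps
    return pt, eps

def ddpm_loss(eps_pred: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Simplified DDPM objective: MSE between predicted and true noise."""
    return torch.mean((eps_pred - eps) ** 2)
```

During training, `eps_pred` would come from the conditional denoiser $\epsilon_\theta(P_t^{syn}, t, P_0^{real})$.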
# 2.1.3 Image-DDPM
The second DDPM module, Image-DDPM (Figure 1(c)), is designed to refine intermediate CBCT volumes reconstructed from synthesized projections, yielding high-fidelity CBCT images with enhanced anatomical consistency. Let $I _ { t } ^ { \mathrm { s y n } }$ denote the noisy volumetric CBCT at diffusion timestep $t$, and let $I _ { t } ^ { \mathrm { d e t } }$ be the deterministic intermediate CBCT volume reconstructed from the synthesized projections via GTM. The model is trained to denoise $I _ { t } ^ { \mathrm { s y n } }$ in a reverse stochastic process conditioned on $I _ { t } ^ { \mathrm { d e t } }$, ultimately producing the final reconstruction $I _ { 0 } ^ { \mathrm { s y n } }$. The forward diffusion process adds Gaussian noise to the ground-truth CBCT image $I _ { 0 } ^ { \mathrm { r e a l } }$, producing a noisy sample $I _ { t } ^ { \mathrm { s y n } }$ over $T$ steps:
$$
q \left( I _ { t } ^ { \mathrm { s y n } } \mid I _ { 0 } ^ { \mathrm { r e a l } } \right) = \mathcal { N } \left( I _ { t } ^ { \mathrm { s y n } } ; \sqrt { \bar { \alpha } _ { t } } \, I _ { 0 } ^ { \mathrm { r e a l } } , ( 1 - \bar { \alpha } _ { t } ) \mathbf { I } \right)
$$
where $\bar { \alpha } _ { t } = \prod _ { s = 1 } ^ { t } \alpha _ { s }$ accumulates the noise schedule. In the reverse process, the denoising network $\epsilon _ { \theta }$ learns to estimate the noise added in the forward process, conditioned on the intermediate volume $I _ { t } ^ { \mathrm { d e t } }$. The reverse sampling at each step is given by:
$$
I _ { t - 1 } ^ { s y n } = \frac { 1 } { \sqrt { \alpha _ { t } } } \bigg ( I _ { t } ^ { s y n } - \frac { 1 - \alpha _ { t } } { \sqrt { 1 - \overline { { \alpha _ { t } } } } } \cdot \epsilon _ { \theta } ( I _ { t } ^ { s y n } , t , I _ { t } ^ { d e t } ) \bigg ) + \sigma _ { t } \cdot z , \qquad z \sim \mathcal { N } ( 0 , I )
$$
The model is trained using the standard simplified objective:
$$
\mathcal { L } _ { R e c o n \text{-} D D P M } = \mathbb { E } _ { I _ { 0 } ^ { r e a l } , \epsilon , t } \left[ \left\| \epsilon - \epsilon _ { \theta } \big ( I _ { t } ^ { s y n } , t , I _ { t } ^ { d e t } \big ) \right\| _ { 2 } ^ { 2 } \right]
$$
where $I _ { 0 } ^ { \mathrm { r e a l } }$ is the ground-truth CBCT image used for supervision. This enables the model to learn to reconstruct a clean anatomical image from corrupted volumes, guided by structural cues from the GTM. All variables are listed below:
$I_0^{real}$: ground-truth CT image reconstructed from full-view projections
$I_t^{syn}$: noisy CT volume at diffusion step $t$ (during the reverse process)
$I_0^{syn}$: final CT output synthesized by Image-DDPM
$I_t^{cycle}$: noisy CT volume at timestep $t$, reconstructed from forward-projected synthetic projections via GTM during cycle supervision
$I_0^{cycle}$: final output of Image-DDPM, representing the sCT
$I^{rec}$: intermediate reconstruction from GTM, used as a strict condition
$\epsilon \sim \mathcal{N}(0, \mathbf{I})$: Gaussian noise
$\epsilon_\theta(\cdot)$: predicted noise in the volume domain from the Image-DDPM network
$\mathcal{L}_{Recon\text{-}DDPM}$: loss function used to train Image-DDPM
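A single step of the conditional reverse update above can be sketched as follows (an illustrative helper, with `eps_theta` standing in for the trained denoising network; the same update form applies in both the projection and image domains):

```python
import torch

def reverse_step(x_t, t, eps_theta, alphas, alpha_bars, sigmas, cond):
    """One ancestral sampling step of the conditional reverse process:
    x_{t-1} = (x_t - (1 - a_t) / sqrt(1 - abar_t) * eps_theta(x_t, t, cond)) / sqrt(a_t)
              + sigma_t * z,  with z ~ N(0, I) (z = 0 at the final step).
    """
    eps = eps_theta(x_t, t, cond)
    mean = (x_t - (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
           / torch.sqrt(alphas[t])
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigmas[t] * z
```

Iterating this step from $t = T$ down to $t = 1$ turns Gaussian noise into a clean conditional sample.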
# 2.1.4 Geometry Transformation Module (GTM)
The fixed, differentiable Geometry Transformation Module (GTM) links the projection and image spaces for the entire pipeline. In the forward direction, it uses a pixel-driven projector to convert a CBCT volume into simulated X-ray views; these are then passed back through the GTM's inverse step to reconstruct a CBCT volume that enforces cycle consistency. In the inverse direction, the GTM performs filtered back-projection on full-view projections, first producing an intermediate volume that conditions the Image-DDPM and later recovering a CBCT volume from synthetic projections for the cycle-domain loss. We adopted the LEAP-CT FDK$^{2}$ implementation with a ramp-filter cutoff frequency of 1.0 in PyTorch, so the system remains fully differentiable. We matched all projection and reconstruction parameters (such as source-to-isocenter distance, detector spacing, and view angles) to the geometry of the gantry-mounted CBCT system used in Siemens radiotherapy linacs. The following parameters were used in our GTM:
Source-to-Detector Distance (SDD): 1500 mm
Source-to-Isocenter Distance (SID): 1000 mm
X-ray Detector Size: $768 \times 1024$
X-ray Detector Spacing: 0.78 mm
Projection Angles: $0^{\circ}$–$360^{\circ}$
Reconstruction Diameter: 495 mm
Pixel Spacing: 0.9668 mm (in-plane)
Slice Thickness: 1.0 mm
Matrix Size: $512 \times 512$
These geometry parameters are applied consistently across projection synthesis and CBCT volume recovery, ensuring that simulated views adhere to real-world scanner configurations. Although GTM itself is not trainable, its integration guarantees that the LA-GICD framework remains physically grounded and spatially consistent across all modules.
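For reference, the listed geometry can be collected in a small container with a couple of derived sanity checks (this is an illustrative dataclass, not the LEAP-CT API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CBCTGeometry:
    """Cone-beam geometry mirroring the GTM parameters listed above."""
    sdd_mm: float = 1500.0           # source-to-detector distance
    sid_mm: float = 1000.0           # source-to-isocenter distance
    detector_rows: int = 768
    detector_cols: int = 1024
    detector_spacing_mm: float = 0.78
    recon_diameter_mm: float = 495.0
    pixel_spacing_mm: float = 0.9668
    slice_thickness_mm: float = 1.0
    matrix_size: int = 512

    @property
    def magnification(self) -> float:
        """Geometric magnification at the isocenter plane."""
        return self.sdd_mm / self.sid_mm

    def detector_fov_at_isocenter_mm(self) -> float:
        """Detector width demagnified to the isocenter plane."""
        return self.detector_cols * self.detector_spacing_mm / self.magnification
```

With the stated values, the magnification is 1.5 and the demagnified detector width exceeds the 495 mm reconstruction diameter, so the reconstruction field of view is fully covered.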
# 2.1.5 GICD Strategy: Geometry-Integrated Cycle-Domain Supervision
The GICD supervision strategy integrates physical acquisition geometry and bidirectional consistency to guide training under limited-angle settings. It consists of three core components: a geometry-conditioned forward path, geometry-conditioned reconstruction supervision, and cycle-domain supervision. In the forward path, the Projection-DDPM generates full-view synthetic projections $P _ { 0 } ^ { \mathrm { s y n } }$ from limited-angle real projections $P _ { 0 } ^ { \mathrm { r e a l } }$. These are passed through the GTM to reconstruct intermediate CBCT volumes $I ^ { \mathrm { r e c } }$, which serve as structural conditioning for the Image-DDPM. To ensure that $I ^ { \mathrm { r e c } }$ is physically plausible and aligns with the actual CBCT geometry, it is supervised by a ground-truth volume $I ^ { \mathrm { r e a l - r e c } }$ reconstructed from the real full-angle projections via the same GTM. The loss is defined as:
$$
\mathcal { L } _ { C T - r e c } = M A E \big ( I ^ { r e a l - r e c } , I ^ { r e c } \big )
$$
The final synthetic CBCT output $I _ { 0 } ^ { \mathrm { c y c l e } }$ generated by the Image-DDPM is directly supervised against the ground truth full-scan CT volume $I _ { 0 } ^ { \mathrm { r e a l } }$ . This supervision is computed using a voxel-wise mean absolute error:
$$
\mathcal { L } _ { C T - c y c l e } = M A E \left( I _ { 0 } ^ { r e a l } , I _ { 0 } ^ { \mathrm { c y c l e } } \right)
$$
This path ensures that the entire pipeline, from limited-angle projection synthesis to final CBCT volume generation, is grounded in anatomical reality. The total loss function from GICD used during training is:
$$
\mathcal { L } _ { C T } = \mathcal { L } _ { \theta ^ { I } } ^ { \mu } + \gamma _ { 1 } \mathcal { L } _ { \theta ^ { I } } ^ { \Sigma } + \gamma _ { 2 } \mathcal { L } _ { C T - r e c } + \gamma _ { 3 } \mathcal { L } _ { C T - c y c l e }
$$
where $\mathcal { L } _ { \theta ^ { I } } ^ { \mu }$ and $\mathcal { L } _ { \theta ^ { I } } ^ { \Sigma }$ denote the mean and variance prediction losses of the Image-DDPM, and the weights are empirically set to $\gamma _ { 1 } = 0 . 0 5$, $\gamma _ { 2 } = 0 . 5$, and $\gamma _ { 3 } = 0 . 5$.
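The combined objective is a direct weighted sum; a minimal transcription with the stated weights as defaults (tensor arguments and function names are illustrative):

```python
import torch

def gicd_total_loss(l_mu, l_sigma, i_real_rec, i_rec, i0_real, i0_cycle,
                    g1=0.05, g2=0.5, g3=0.5):
    """L_CT = L_mu + g1 * L_sigma + g2 * MAE(I_real_rec, I_rec)
            + g3 * MAE(I0_real, I0_cycle)."""
    l_ct_rec = torch.mean(torch.abs(i_real_rec - i_rec))      # GTM reconstruction term
    l_ct_cycle = torch.mean(torch.abs(i0_real - i0_cycle))    # cycle-domain term
    return l_mu + g1 * l_sigma + g2 * l_ct_rec + g3 * l_ct_cycle
```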
# 2.1.6 Inference Procedure
At inference time, as shown in Figure 1, our pipeline operates in a cascaded fashion, beginning with limited-angle projections and culminating in a fully synthesized CBCT volume. First, the Projection-DDPM module takes the limited-angle projection $P _ { 0 } ^ { \mathrm { r e a l } }$ as conditioning input and generates a full-view projection $P _ { 0 } ^ { \mathrm { s y n } }$ via the reverse diffusion process, starting from pure Gaussian noise. This synthesized projection is then passed through the GTM, which applies the imaging geometry extracted from DICOM metadata to reconstruct an intermediate CBCT volume $I ^ { \mathrm { r e c } }$. Next, the Image-DDPM module takes $I ^ { \mathrm { r e c } }$ as its conditioning input and refines it through a second diffusion process. Starting again from Gaussian noise, the model iteratively denoises the sample using the learned volume-domain prior, ultimately yielding the final synthetic CBCT volume $I _ { 0 } ^ { \mathrm { c y c l e } }$. This inference procedure eliminates the need for full-view measurements at test time and enables reconstruction from limited-angle inputs alone.
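The cascade described above reduces to three composed stages; a minimal sketch with placeholder callables for the two trained samplers and the GTM back-projection:

```python
def la_gicd_inference(p_limited, proj_ddpm_sample, gtm_fbp, image_ddpm_sample):
    """Cascaded inference: limited-angle projections -> full-view synthesis ->
    GTM filtered back-projection -> image-domain refinement.
    The three callables are placeholders for the trained modules, not real APIs."""
    p_syn = proj_ddpm_sample(p_limited)   # reverse diffusion from Gaussian noise
    i_rec = gtm_fbp(p_syn)                # geometry-aware reconstruction
    return image_ddpm_sample(i_rec)       # second reverse diffusion, conditioned on i_rec
```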
# 2.1.7 Implementation Details
All models were implemented in PyTorch and trained on a single NVIDIA A100 GPU with 80 GB of memory. Both Projection-DDPM and Image-DDPM were trained with 1000 diffusion steps and sampled with 50 inference steps at test time. A cosine-based noise schedule $\{\beta_t\}_{t=1}^{T}$ was used, with cumulative noise factor $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. The reverse diffusion followed a denoising diffusion implicit models (DDIM)-style reparameterization using predicted noise and learned variances. During training, a batch size of 1 was used because of the high memory demand of 3D volume modeling. The optimizer was AdamW with a learning rate of $2 \times 10^{-5}$, default $(\beta_1, \beta_2) = (0.9, 0.999)$, and no weight decay. All experiments used fixed-angle limited projection inputs acquired over a $90^{\circ}$ arc spanning $135^{\circ}$ to $225^{\circ}$. The Projection-DDPM was trained to synthesize full $360^{\circ}$ projections from these limited-view measurements. Input intensities (for both the projection and CBCT volume domains) were linearly rescaled to the normalized interval $[-1, 1]$; this mapping was applied consistently across training and inference. Spatial resolution was kept at $512 \times 512$, with pixel spacing retained from the original DICOM. Preprocessing included DICOM metadata parsing to extract geometric parameters such as the source-to-detector distance (1500 mm), source-to-patient distance (1000 mm), and detector size, all of which were integrated into the GTM for accurate forward and backward projection. During inference, the Projection-DDPM synthesizes full-view projections conditioned on limited-angle inputs; these projections are passed through the GTM to generate intermediate reconstructions, which are further refined by the Image-DDPM using geometry-informed cycle-domain priors. Both DDPM modules follow a U-Net backbone with domain-specific conditioning.
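A cosine schedule of the kind referenced above can be generated as follows (a sketch following the common Nichol–Dhariwal formulation; the offset `s=0.008` is a conventional default, not a value reported in the text):

```python
import math
import torch

def cosine_alpha_bar(T: int, s: float = 0.008):
    """Cosine noise schedule: returns per-step betas and the cumulative
    noise factor alpha_bar_t for t = 1..T."""
    steps = torch.arange(T + 1, dtype=torch.float64)
    f = torch.cos((steps / T + s) / (1 + s) * math.pi / 2) ** 2
    alpha_bar = f / f[0]                        # abar_0 = 1 by construction
    betas = 1 - alpha_bar[1:] / alpha_bar[:-1]  # beta_t = 1 - abar_t / abar_{t-1}
    betas = betas.clamp(max=0.999)              # avoid degenerate final steps
    return betas, alpha_bar[1:]
```

With `T=1000` this yields a schedule whose $\bar{\alpha}_t$ decays smoothly from near 1 toward 0, matching the cumulative-product definition used throughout the paper.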
Figure 1. Overview of the training and application pipeline of the proposed dual-stage diffusion-based limited-angle CBCT reconstruction framework. (a) Projection-DDPM training: the first stage learns to synthesize full $360^{\circ}$ projection stacks from limited-angle inputs ($135^{\circ}$ to $225^{\circ}$, a $90^{\circ}$ span). Given the paired real projections $P_t^{real}$ ($360^{\circ}$ projections) and the limited-angle input $P^{real}$ ($90^{\circ}$ projections), the Projection-Net $\theta^{P}$ is trained to produce a pseudo full-angle projection $P_t^{syn}$ ($360^{\circ}$ synthetic projections), which serves as a geometric prior for subsequent reconstruction. (b) GTM-based reconstruction supervision: the GTM module simulates the forward-projection process based on the detector–source geometry. By feeding both $P_t^{real}$ and $P_t^{syn}$ into the GTM, the corresponding reconstructed CBCT images $I_t^{real-rec}$ and $I_t^{syn-rec}$ are obtained. (c) Image-DDPM training: the second-stage generative module, Image-DDPM, learns to refine coarse CT volumes into anatomically faithful reconstructions. It receives GTM-reconstructed images from synthetic projections ($I^{sp-rec}$) as conditional inputs. During training, Image-DDPM performs reverse denoising to model the conditional posterior $P_\theta(x_0 \mid I^{sp-rec})$, guided by structural consistency with the ground-truth CBCT $I^{ref}$. A dual-path training strategy is employed: the deterministic path supervises predictions from $I^{\mathrm{real-rec}}$, while the stochastic path enables generation from $I^{\mathrm{sp-rec}}$. In addition to the diffusion loss $\mathcal{L}_{\mathrm{DDPM}}$, a cycle-consistency loss $\mathcal{L}_{\mathrm{CT-cycle}}$ is applied to align reconstructions from synthetic projections with those from real ones. (d) Projection-DDPM generation at timestep $s$: during inference, a limited-angle projection set $P_{90^{\circ}}^{\mathrm{real}}$ is provided as the conditioning input to the trained Projection-DDPM. Starting from Gaussian noise, the model performs reverse denoising to generate full-view pseudo projections. At an intermediate timestep $s$, the current projection estimate $P_s^{\mathrm{rec}}$ is shown. This partial output represents a stochastic intermediate state of the diffusion trajectory toward the final $360^{\circ}$ projection $\hat{P}_{0-360}$. (e) Image-DDPM generation at timestep $s$: given a conditional input $I_t^{\mathrm{real-rec}}$, reconstructed from limited-angle real projections via the GTM, the trained Image-DDPM iteratively denoises a Gaussian latent to synthesize the target CBCT slice. At an intermediate timestep $s$, the model outputs a partially denoised estimate $I_s$, which progressively approaches the final output $\widehat{I_t}$ through the reverse diffusion trajectory. The training objective ensures that such samples remain structurally aligned with the ground truth $I_t$, enabling high-fidelity reconstruction under geometry-aware conditioning. (f) Projection-DDPM generation at timestep $s-1$: this diagram illustrates the reverse denoising process within Projection-DDPM at timestep $s-1$. Starting from Gaussian noise, the model iteratively predicts pseudo projections conditioned on the limited-angle input $P^{\mathrm{real}}$. At each step, a sample $P_{s+1}^{\mathrm{sto}}$ is transformed into the next state $P_s^{\mathrm{rec}}$, approaching the final full-view distribution. These intermediate predictions can optionally be reconstructed into CBCT slices via the GTM to assess geometric consistency during sampling. This stepwise evolution highlights the generative path connecting noise to structure-aware projections within the diffusion model.
# 2.2 Dataset and Evaluation Protocol
# 2.2.1 Dataset description
We studied 18 anonymized patients who received gynecologic HDR brachytherapy. The dataset comprises 78 CT volumes in total, acquired from 18 patients who each received 4 or 5 CBCT scans during the course of treatment. Each case provided a pelvic CBCT volume with an in-plane matrix of $512 \times 512$ and native voxel spacing of $1 \times 1 \times 2$ mm; we kept the x–y spacing unchanged and linearly down-sampled the z-axis to 256 slices to fit GPU memory. All CT volumes were normalized to the [0, 1] range, and the simulated projections were normalized the same way; at test time, the reconstructed CBCTs were rescaled to their original Hounsfield-unit range for quantitative evaluation. Every volume was checked to confirm that the uterus, cervix, and adjacent soft tissue were intact. Using LEAP-CT$^{24}$, we built patient-specific cone-beam geometry from the DICOM tags and generated full-arc projections. For model input, we kept only the 90 projection views covering a $90^{\circ}$ arc ($135^{\circ}$–$225^{\circ}$, clockwise), a range that reflects realistic limited-angle acquisitions. The full $360^{\circ}$ projections were generated for supervision but were not exposed to the model at inference. LEAP-CT then supplied the same geometric parameters (source-to-detector distance, detector pitch, slice thickness) to the GTM.
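The normalization and z-axis resampling described above can be sketched as follows (a simplified min-max version; the function name and exact interpolation call are illustrative, assuming the volumes are held as PyTorch tensors):

```python
import torch
import torch.nn.functional as F

def preprocess_volume(vol: torch.Tensor, target_z: int = 256) -> torch.Tensor:
    """Min-max normalize a CT volume to [0, 1] and linearly resample the
    z-axis to target_z slices, leaving the x-y resolution unchanged."""
    vol = vol.float()
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
    # trilinear interpolation over (1, 1, Z, H, W); only the depth is rescaled
    v = vol[None, None]
    v = F.interpolate(v, size=(target_z, vol.shape[1], vol.shape[2]),
                      mode="trilinear", align_corners=False)
    return v[0, 0]
```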
# 2.2.2 Data Splits and Evaluation Metrics
To evaluate the generalizability of the proposed model across anatomical variations and treatment fractions, a volume-level split was employed rather than a patient-level separation. Of the 78 volumes collected from 18 patients, 60 were randomly chosen for training, and one representative volume per patient (18 volumes) was held out for testing. No volume appears in both the training and testing subsets, preventing direct data leakage, although under this volume-level split other fractions from the same patient may appear in training.
The reconstructed CBCT volumes were quantitatively evaluated using standard image similarity metrics, including:
Peak Signal-to-Noise Ratio (PSNR), expressed in decibels (dB), to measure pixel-level fidelity.
Structural Similarity Index Measure (SSIM), computed over $11 \times 11$ windows to assess structural preservation.
Mean Absolute Error (MAE), calculated voxel-wise over the body region to evaluate global intensity error.
For each metric, values were computed between the model output (the limited-angle reconstructed CBCT) and the corresponding ground-truth CBCT volume. Unless otherwise noted, all metrics are reported as averages over the full test set.
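The PSNR and MAE computations can be sketched as below (SSIM with $11 \times 11$ windows is typically taken from an existing library and is omitted here; the function names and the optional `mask` argument for the body region are illustrative):

```python
import numpy as np

def mae(gt: np.ndarray, pred: np.ndarray, mask=None) -> float:
    """Voxel-wise mean absolute error, optionally restricted to a body mask."""
    diff = np.abs(gt - pred)
    return float(diff[mask].mean() if mask is not None else diff.mean())

def psnr(gt: np.ndarray, pred: np.ndarray, data_range=None) -> float:
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = gt.max() - gt.min()
    mse = np.mean((gt - pred) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```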
# 3. Quantitative and qualitative evaluation
We evaluated LA-GICD on a limited-angle CBCT dataset using three standard metrics: MAE, SSIM, and PSNR. As shown in Table 1, the proposed method demonstrates strong reconstruction performance under severe angular limitations. These results are averaged across all test cases and anatomical views, including axial, coronal, and sagittal planes.
Table 1. Quantitative comparison of limited-angle CBCT reconstructions using LA-GICD and conventional FDK. MAE, SSIM, and PSNR were evaluated on 18 test volumes. LA-GICD significantly outperformed FDK on all metrics (p < 0.01, paired t-test), demonstrating superior image fidelity and structural consistency under $90^{\circ}$ limited-angle acquisition.
Visual comparisons in the axial, coronal, and sagittal views are presented in Figures 2–4, showcasing results from eight representative test cases. Each column corresponds to a different patient. For each case, we display the reconstruction from full-angle data (ground truth), the proposed LA-GICD, and the direct reconstruction from limited-angle projections, followed by the corresponding error maps. LA-GICD visibly reduces shading, streak artifacts, and soft-tissue blurring across all views. Compared to the FDK baseline, our method better preserves organ boundaries and low-contrast structures, with lower error magnitudes and more uniform residual maps.
Figure 2. Axial slice comparison of CBCT reconstruction quality using LA-GICD versus conventional FDK under limited-angle acquisition for three patients. Each row corresponds to a different patient. From left to right: ground truth ($360^{\circ}$ CBCT), limited-angle FDK, LA-GICD reconstruction from limited-angle projections, voxel-wise error maps, and hallucination-prone residual maps highlighting local discrepancies. Compared to FDK, LA-GICD markedly reduces streaking artifacts and recovers soft-tissue and bony structures with improved fidelity.
Figure 3. Coronal slice comparison of CBCT reconstruction quality using LA-GICD versus conventional FDK under limited-angle acquisition for three patients. Each row corresponds to a different patient. From left to right: ground truth ($360^{\circ}$ CBCT), limited-angle FDK, LA-GICD reconstruction from limited-angle projections, voxel-wise error maps, and hallucination-prone residual maps highlighting local discrepancies. Compared to FDK, LA-GICD markedly reduces streaking artifacts and recovers soft-tissue and bony structures with improved fidelity.
Figure 4. Sagittal slice comparison of CBCT reconstruction quality using LA-GICD versus conventional FDK under limited-angle acquisition for three patients. Each row corresponds to a different patient. From left to right: ground truth ($360^{\circ}$ CBCT), limited-angle FDK, LA-GICD reconstruction from limited-angle projections, voxel-wise error maps, and hallucination-prone residual maps highlighting local discrepancies. Compared to FDK, LA-GICD markedly reduces streaking artifacts and recovers soft-tissue and bony structures with improved fidelity.
# 4. Discussion
Limited-angle CBCT has the potential to significantly improve clinical workflows by reducing scan time, minimizing motion artifacts, and avoiding physical gantry collisions, particularly in radiotherapy settings such as equipment-packed HDR rooms and scenarios requiring intra-fraction motion management. However, reconstructing high-quality volumes from such limited data remains a major challenge. This study demonstrates that a geometry-integrated, dual-domain diffusion framework can reconstruct high-quality CBCT volumes from a single $90^{\circ}$ arc of projections. With one model trained once on this limited-angle configuration, LA-GICD achieved a mean absolute error of 35 HU, an SSIM of 0.84, and a PSNR of 29.8 dB on 18 gynecologic HDR brachytherapy scans. As illustrated in Figures 2–4, the method markedly reduced shading and streak artifacts and preserved soft-tissue detail across the axial, coronal, and sagittal views, despite the combined challenges of limited angular coverage and metallic applicators. The performance gain stems from the GICD training scheme, in which a Projection-DDPM and an Image-DDPM are optimized jointly with a single optimizer. In each training step, the Projection-DDPM infers the missing views, a fixed analytic cone-beam back-projector converts the completed sinogram into an image, the Image-DDPM denoises and sharpens this volume, and the refined image is forward-projected back to projection space. The resulting projection-image-projection loop is differentiated end to end, anchoring every update to the measured data while steadily suppressing hallucinated structures.
Early deep learning approaches treated limited-angle reconstruction as a post-processing task$^{13,14}$, in which a CNN or U-Net is trained to map artifact-contaminated images reconstructed from limited-angle projections to high-quality CT images obtained from full-angle scans. Because the raw projections are never revisited, post-processing CNNs cannot enforce projection-domain consistency. Residual streaks often remain along the direction of the missing angles, and the learned priors may generate anatomically realistic but incorrect structures, especially when the angular coverage changes or high-density hardware is present. In contrast, our framework operates in both the projection and image domains: the Projection-DDPM completes the projections, the Image-DDPM refines the image, and a fixed cone-beam projector and back-projector connect the two. This dual-domain, geometry-informed approach reduces artifacts and lowers the risk of hallucinated anatomy. To improve reconstructed image fidelity, later studies unfolded model-based iterative reconstruction into a fixed sequence of trainable stages. Each stage applies a data-consistency update followed by a learned regularization step, and all parameters are optimized end-to-end from paired full-angle and limited-angle data$^{15,16}$. Although this design suppresses artifacts more effectively than image-only CNNs (the post-processing approach), its performance is still highly sensitive to the choice of step size and regularization weights and often depends on training datasets that match the specific scan geometry and angular deficit. When the acquisition conditions differ from those used during training, residual streaks and staircase effects frequently reappear. These limitations reveal the need for a reconstruction scheme that generalizes across angular coverage without retraining.
Because LA-GICD decouples projection completion and image refinement from any fixed view range, a single network could in principle be trained on mixed-angle data and then applied to previously unseen arc lengths, a possibility we plan to evaluate in future work. Most recently, score-based DDPMs have been explored for sparse- and limited-angle CT reconstruction$^{19,25}$. These models learn the gradient (score) of the data distribution and then iteratively transform pure Gaussian noise into a synthetic image or projection that matches the training data. Because the denoising is applied at multiple noise scales, diffusion models can recover high-frequency details that earlier networks often blur. Their main limitation in limited-angle CT is insufficient geometric conditioning. In most implementations the model is conditioned only on the incomplete projections (or on a coarse FBP/FDK image derived from them); the forward-projection operator and exact ray geometry are not embedded in the reverse process. As a result, the diffusion trajectory has no explicit penalty for deviating from the true measurement space inside the missing wedge. During inference the model can therefore inpaint this region with structurally reasonable but physically inconsistent content, especially when the acquisition arc, source–detector distance, or patient positioning differ from those represented during training. LA-GICD is explicitly designed to overcome the geometric-conditioning gap identified above. First, the framework embeds a fixed analytic cone-beam projector and back-projector (GTM) within the diffusion loop, so every denoising step is evaluated against the true ray geometry rather than an implicit image prior. Second, the projection-image-projection cycle forces the Projection-DDPM and Image-DDPM to agree with the measured data at each iteration, sharply limiting the degrees of freedom available for hallucination.
Third, because the two DDPMs are optimized jointly under the same loss, projection completion and image refinement co-evolve, allowing geometric errors in one domain to be corrected by feedback from the other. In practice, these features yield streak-free reconstructions that remain anatomically accurate even in the presence of metal applicators and across different patient anatomies, while maintaining the fine detail recovery characteristic of diffusion models. Taken together, these design elements enable LA-GICD to combine the anatomical sharpness and generative flexibility of diffusion models with the geometric rigor of traditional forward–inverse operators. Unlike prior approaches that rely solely on image priors or approximate data consistency, LA-GICD enforces projection fidelity at every step, explicitly resolves missing-angle ambiguity, and preserves structural realism even under challenging clinical conditions. This dual-domain, geometry-integrated strategy offers a promising pathway toward generalizable limited-angle CT reconstruction that is robust to scan variability, patient heterogeneity, and metallic implants—without retraining or view-specific tuning.
Clinically, the proposed LA-GICD framework addresses several longstanding barriers in image-guided brachytherapy and other interventional workflows. From a procedural standpoint, in-room CBCT provides immediate volumetric feedback at the point of care, eliminating the need to transfer the patient between imaging and treatment rooms. In HDR gynecologic treatments, patients are frequently positioned in lithotomy or oblique orientations that severely restrict gantry clearance. Under these conditions, a full CBCT scan is often infeasible due to the risk of collisions with the couch, patient, or other equipment. By limiting acquisition to a single $90^{\circ}$ arc, LA-GICD avoids mechanical interference while preserving volumetric imaging capability. This is particularly beneficial in cases involving metallic applicators or extended transperineal needles, where conventional CBCT is either skipped entirely or replaced by off-table imaging. This enables the clinical team to verify applicator geometry, evaluate target coverage, and make intra-procedural adjustments without disrupting the sterile field or prolonging anesthesia time. The shorter scan duration, approximately one-quarter of a conventional full-arc CBCT, also reduces the likelihood of motion-induced artifacts caused by respiration, bowel peristalsis, or patient discomfort during external beam radiotherapy. As a result, LA-GICD facilitates the acquisition of high-fidelity online images with minimal motion blur, even in anatomically unstable regions such as the pelvis and abdomen. Moreover, the substantial reduction in imaging dose, achieved without loss of image quality, supports repeated on-board imaging throughout the treatment course, which is particularly relevant for pediatric patients or those requiring multiple fractions within a short treatment window.
Because the LA-GICD algorithm operates as a post-acquisition reconstruction module compatible with existing flat-panel CBCT systems, it introduces no additional hardware requirements. This software-only implementation facilitates broad deployment across diverse clinical settings, including high-throughput centers and resource-limited environments, potentially expanding access to adaptive and image-guided brachytherapy.
Hallucination—generation of anatomically plausible yet physically incorrect structures—is an inherent concern when using deep generative models such as DDPM for limited-angle CBCT reconstruction. To systematically evaluate this risk, we conducted repeated reconstructions from identical projection data sets and examined voxel-wise uncertainty (Fig. 5). Uncertainty maps showed slightly elevated variability adjacent to steep attenuation gradients, such as at bone–soft-tissue interfaces and around metallic applicators, which are prone to reconstruction errors. However, the magnitude of these deviations remained low (typically less than 0.01 normalized units), indicating that LA-GICD preserves high fidelity and effectively suppresses hallucinations even in the most challenging regions. Nevertheless, acknowledging the presence of these minor inconsistencies highlights the necessity of incorporating uncertainty quantification into clinical decision-making. Future studies could further reduce such uncertainty by embedding additional physics-based constraints, explicit anatomical priors, or uncertainty-aware training strategies into the diffusion framework.
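The repeated-reconstruction uncertainty analysis above reduces to a voxel-wise standard deviation across runs; a minimal sketch (the function names and the 0.01 threshold from the text are used illustratively):

```python
import numpy as np

def voxelwise_uncertainty(recons) -> np.ndarray:
    """Voxel-wise standard deviation across N repeated reconstructions of the
    same limited-angle input, stacked along a new leading axis."""
    stack = np.stack(recons, axis=0)
    return stack.std(axis=0)

def high_uncertainty_fraction(std_map: np.ndarray, thresh: float = 0.01) -> float:
    """Fraction of voxels whose run-to-run std exceeds thresh (normalized units)."""
    return float((std_map > thresh).mean())
```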
Figure 5. Demonstration of hallucination variability in LA-GICD limited-angle CBCT reconstruction. Three independent reconstructions (columns 1–3) of the same patient using LA-GICD with identical limited-angle inputs reveal visible inconsistencies in soft-tissue regions, particularly near applicator tips and bladder boundaries. The fourth column visualizes the corresponding normalized voxel-wise standard deviation across runs, highlighting high-uncertainty regions (0–0.06 range) in a jet colormap.
This study has several limitations. First, our validation focused solely on limited-angle CBCT reconstruction for gynecologic HDR brachytherapy. Although these data provided strong evidence of clinical utility for pelvic anatomy, the performance of our proposed LA-GICD framework in other anatomical regions—such as thoracic, abdominal, or head-and-neck regions—remains unknown. Different anatomical sites vary significantly in complexity, internal motion characteristics, and tissue contrast; thus, the results observed in the pelvis may not directly extend to other clinical applications or anatomical contexts. Second, computational complexity remains a concern. Diffusion-based models, including our LA-GICD framework, inherently require iterative sampling procedures and multiple denoising steps during inference. Consequently, inference times are currently longer compared to standard analytical reconstruction methods (e.g., FDK) or simple convolutional neural network approaches. The prolonged reconstruction times might limit the practical deployment of the method, especially in real-time or intraoperative imaging scenarios, where timely feedback is crucial for procedural decision-making. Third, although our model demonstrated robustness to the specific $90^{\circ}$ limited-angle configuration tested, we did not systematically evaluate the stability of image quality under different angular spans or orientations. In practice, the precise acquisition angle may vary due to patient positioning constraints, procedural setup, or equipment restrictions. Without explicit verification across multiple angular configurations (e.g., smaller arcs or different orientations), it remains unclear whether reconstruction accuracy and artifact suppression performance would remain consistent. Finally, the LA-GICD model offers the possibility of patient-specific fine-tuning, potentially enhancing reconstruction accuracy by leveraging patient-specific data from previous imaging fractions.
However, such personalization requires collecting additional imaging data and computationally intensive retraining procedures for each individual patient. These requirements introduce practical barriers, potentially limiting widespread adoption in clinical workflows where computational resources, staffing, or imaging time are constrained.
Future research should evaluate the generalizability of LA-GICD to other anatomical sites and clinical settings beyond pelvic HDR brachytherapy. This includes thoracic and abdominal sites where internal organ motion, tissue heterogeneity, and respiratory artifacts may pose different challenges for limited-angle reconstruction. Expanding the training dataset to encompass diverse anatomical contexts could improve robustness and clinical utility. Further optimization is also needed to reduce inference time. Techniques such as model pruning, knowledge distillation, and cascaded denoisers may help accelerate the diffusion process without compromising image quality, enabling near real-time reconstruction suitable for time-sensitive workflows. To ensure reliability in variable acquisition settings, future studies should systematically test LA-GICD under a range of angular spans and orientations. Evaluating model stability across different arc lengths (e.g., $60^{\circ}$ or $45^{\circ}$) and gantry paths would clarify its robustness in flexible clinical deployments. In addition, while LA-GICD demonstrated minimal hallucination in our internal assessments, rigorous prospective evaluation is warranted. This includes both qualitative review by expert clinicians and quantitative testing with anatomically altered phantoms or surgical validation data to confirm that high-frequency details are faithfully reconstructed from incomplete data. Finally, we aim to explore strategies for integrating uncertainty estimation into the reconstruction pipeline, allowing clinicians to identify image regions with low reconstruction confidence—an important safeguard when using AI-generated volumes in interventional and high-dose workflows.

Abstract: Cone-beam CT (CBCT) is widely used in clinical radiotherapy for image-guided treatment, improving setup accuracy, adaptive planning, and motion management. However, slow gantry rotation limits performance by introducing motion artifacts, blurring, and increased dose. This work aims to develop a clinically feasible method for reconstructing high-quality CBCT volumes from consecutive limited-angle acquisitions, addressing imaging challenges in time- or dose-constrained settings. We propose a limited-angle (LA) geometry-integrated cycle-domain (LA-GICD) framework for CBCT reconstruction, comprising two denoising diffusion probabilistic models (DDPMs) connected via analytic cone-beam forward and back projectors. A Projection-DDPM completes missing projections, followed by back-projection, and an Image-DDPM refines the volume. This dual-domain design leverages complementary priors from projection and image spaces to achieve high-quality reconstructions from limited-angle ($\leq 90^{\circ}$) scans. Performance was evaluated against full-angle reconstruction. Four board-certified medical physicists conducted assessments. A total of 78 planning CTs in common CBCT geometries were used for training and evaluation. The method achieved a mean absolute error of 35.5 HU, SSIM of 0.84, and PSNR of 29.8 dB, with visibly reduced artifacts and improved soft-tissue clarity. LA-GICD's geometry-aware dual-domain learning, embedded in analytic forward/backward operators, enabled artifact-free, high-contrast reconstructions from a single $90^{\circ}$ scan, reducing acquisition time and dose four-fold. LA-GICD improves limited-angle CBCT reconstruction with strong data fidelity and anatomical realism. It offers a practical solution for short-arc acquisitions, enhancing CBCT use in radiotherapy by providing clinically applicable images with reduced scan time and dose for more accurate, personalized treatments.
# 1 Introduction
Vector-based similarity search is a core problem with broad applications in machine learning, data mining, and information retrieval. It involves retrieving data points in high-dimensional space that are most similar to a given query vector based on a specific similarity measure. This task is central to many downstream applications, including nearest neighbor classification, recommendation systems, clustering, anomaly detection, and large-scale information retrieval. However, the high dimensionality of modern datasets makes efficient similarity search particularly challenging, highlighting the need for fast and scalable vector computation techniques.
Among the various similarity measures for high-dimensional vectors, the $\ell_2$ norm, cosine distance, and inner product are the most commonly used in practice. As discussed in [30, 11, 22], it is often possible to pre-compute and store the norms of vectors in advance, allowing these measures to be reduced to the computation of the angle (inner product) between two normalized vectors, thereby highlighting the central role of angle computation. On the other hand, in many real-world scenarios, we are not concerned with the exact value of the angle itself but rather with the outcome of an angle-based comparison, which is referred to as angle testing. Specifically, given a query vector $\pmb{q}$ and data vectors $\pmb{v}_1$, $\pmb{v}_2$, $\pmb{v}$ on the sphere $\mathbb{S}^{d-1}$, typical operations include comparing $\langle \pmb{q}, \pmb{v}_1 \rangle$ and $\langle \pmb{q}, \pmb{v}_2 \rangle$, or determining whether $\langle \pmb{q}, \pmb{v} \rangle$ exceeds a certain threshold. These operations, however, require computing exact inner products, which cost $O(d)$ per comparison and become expensive in high dimensions. To address this, we aim to design a computation-efficient probabilistic kernel function $K$ that can approximate these comparisons with reduced cost and high success probability. In particular, we focus on the following two problems:
Problem 1.1 (Probabilistic kernel function for comparison) Given $\pmb{q}$, $\pmb{v}_1$ and $\pmb{v}_2$ on $\mathbb{S}^{d-1}$, where $\langle \pmb{q}, \pmb{v}_1 \rangle > \langle \pmb{q}, \pmb{v}_2 \rangle$, how can we design a probabilistic kernel function $K(\cdot,\cdot): \mathbb{S}^{d-1} \times \mathbb{S}^{d-1} \to RV$, where $RV$ denotes the set of real-valued random variables, such that (1) the computation of $K(\pmb{q}, \pmb{v})$ does not rely on the computation of $\langle \pmb{q}, \pmb{v} \rangle$, and (2) $\mathbb{P}[K(\pmb{q}, \pmb{v}_1) > K(\pmb{q}, \pmb{v}_2)]$ is as high as possible?
Preprint. Under review.

Problem 1.2 (Probabilistic kernel function for thresholding) Given an arbitrary pair of normalized vectors $(\pmb{q}, \pmb{v})$ with an angle $\phi$, and an angle threshold $\theta$, how can we design a probabilistic kernel function $K(\cdot,\cdot): \mathbb{S}^{d-1} \times \mathbb{S}^{d-1} \to RV$ such that (1) the computational complexity of $K(\pmb{q}, \pmb{v})$ is significantly lower than that of computing the exact inner product $\langle \pmb{q}, \pmb{v} \rangle$, and (2) $K(\pmb{q}, \pmb{v})$ can reliably determine whether $\phi$ is smaller than $\theta$ with a high success probability?
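Both problems replace an exact test that is simple but costs $O(d)$ per comparison. As a point of reference, a minimal numpy sketch of the two exact baselines (function names are ours, not from the paper):

```python
import numpy as np

def exact_compare(q, v1, v2):
    """Problem 1.1, exact version: does v1 beat v2 for query q? Cost O(d)."""
    return float(q @ v1) > float(q @ v2)

def exact_threshold(q, v, theta):
    """Problem 1.2, exact version: is the angle between q and v below theta?"""
    return float(q @ v) > np.cos(theta)

def normalize(x):
    return x / np.linalg.norm(x)

rng = np.random.default_rng(1)
d = 128
q, v1, v2 = (normalize(rng.standard_normal(d)) for _ in range(3))
print(exact_compare(q, v1, v2), exact_threshold(q, v1, np.pi / 3))
```

The kernel functions developed below aim to answer the same two questions with sub-$O(d)$ online cost per data vector, at the price of a controlled failure probability.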
Both problems have various applications. The goal of Problem 1.1 aligns with that of CEOs [25] under cosine distance (we will postpone the general case of inner product to Sec. 4), allowing the designed kernel function to be applied to tasks where CEOs is effective, such as Maximum Inner Product Search (MIPS) [25], filtering of NN candidates [26], DBSCAN [28], and more. On the other hand, the goal of Problem 1.2 is similar to that of PEOs [22], making the corresponding kernel function well-suited for probabilistic routing tests in similarity graphs, which have demonstrated significant performance improvements over original graphs, such as HNSW [23]. In Sec. 4, we will further elaborate on the applications of these two probabilistic kernel functions.
Despite addressing different tasks, all the techniques [25, 26, 28, 22] mentioned above use the Gaussian distribution to generate projection vectors and are built upon a common statistical result, stated as follows.
Lemma 1.3 (Theorem 1 in [25]) Given two vectors $\pmb{v}$, $\pmb{q}$ on $\mathbb{S}^{d-1}$ and $m$ random vectors $\{\pmb{u}_i\}_{i=1}^m \sim \mathcal{N}(0, I^d)$, where $m$ is sufficiently large, and assuming that $\pmb{u}_{\max} = \operatorname{argmax}_{\pmb{u}_i} |\pmb{q}^\top \pmb{u}_i|$, we have:
$$
\pmb{v}^\top \pmb{u}_{\max} \sim \mathcal{N}\!\left(\operatorname{sgn}(\pmb{q}^\top \pmb{u}_{\max}) \cdot \pmb{q}^\top \pmb{v} \sqrt{2 \ln m},\; 1 - (\pmb{q}^\top \pmb{v})^2\right). \tag{1}
$$
Lemma 1.3 establishes the relationship between angles and the corresponding projection vectors. In fact, $\pmb{v}^\top \pmb{u}_{\max}$ can be viewed as an indicator of the cosine similarity $\pmb{q}^\top \pmb{v}$: the larger $\pmb{v}^\top \pmb{u}_{\max}$ is, the more likely it is that $\pmb{q}^\top \pmb{v}$ is large. Moreover, $\pmb{v}^\top \pmb{u}_{\max}$ can be computed beforehand during the indexing phase and easily accessed during the query phase, making $K(\pmb{q}, \pmb{v}) = \pmb{v}^\top \pmb{u}_{\max}$ a suitable kernel function for various angle testing problems, e.g., Problem 1.1.
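A minimal Monte Carlo sketch of this CEOs-style kernel, under our own assumed parameter choices (not taken from [25]): the projections $\pmb{v}^\top \pmb{u}_i$ are precomputed offline, $\pmb{u}_{\max}$ is selected from the query online, and the signed projected value serves as the comparison score.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 32, 64

def unit(x):
    return x / np.linalg.norm(x)

def make_at_angle(q, phi, rng):
    """A unit vector at angle phi from unit vector q."""
    w = rng.standard_normal(q.shape)
    w = unit(w - (w @ q) * q)           # uniform direction orthogonal to q
    return np.cos(phi) * q + np.sin(phi) * w

def ceos_scores(q, VU, U):
    """K(q, v) = sgn(q^T u_max) * v^T u_max, u_max = argmax_i |q^T u_i|."""
    p = U.T @ q                          # O(dm), once per query
    i = int(np.argmax(np.abs(p)))
    return np.sign(p[i]) * VU[:, i]      # O(1) per data vector thereafter

trials, wins = 2000, 0
for _ in range(trials):
    q = unit(rng.standard_normal(d))
    v1 = make_at_angle(q, np.pi / 6, rng)    # cos ~ 0.87 (closer to q)
    v2 = make_at_angle(q, np.pi / 2.2, rng)  # cos ~ 0.14 (farther from q)
    U = rng.standard_normal((d, m))          # Gaussian projection vectors
    VU = np.stack([v1, v2]) @ U              # precomputable offline
    s = ceos_scores(q, VU, U)
    wins += s[0] > s[1]
print(wins / trials)
```

With these (arbitrary) angles the kernel orders the pair correctly in the large majority of trials, illustrating why $\pmb{v}^\top \pmb{u}_{\max}$ works as an indicator despite being a random variable.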
However, Lemma 1.3 has a significant theoretical limitation: relationship (1) relies on the assumption that the number of projection vectors $m$ tends to infinity. Since the evaluation time of the projection vectors depends on $m$, $m$ cannot be very large in practice. Moreover, since [26, 28, 22] all used Lemma 1.3 to derive their respective theoretical results, these results are also affected by this limitation, and the impact of $m$ becomes even harder to predict in these applications.
The starting point of our research is to overcome this limitation, and we make the following two key observations. (1) The Gaussian distribution used in Lemma 1.3 is not essential. In fact, the only factor determining the estimation accuracy of $\pmb { q } ^ { \top } \pmb { v }$ is the reference angle, that is, the angle between $\pmb q$ and ${ \pmb u } _ { \mathrm { m a x } }$ . (2) By introducing a random rotation matrix, the reference angle becomes dependent on the structure of the projection vectors and is predictable.
Based on these two observations, we design new probabilistic kernel functions to solve Problems 1.1 and 1.2. Specifically, the contributions of this paper are summarized as follows.
(1) The proposed kernel functions $K _ { S } ^ { 1 }$ and $K _ { S } ^ { 2 }$ (see eq. (2) and eq. (3)) rely on a reference-angle-based probabilistic relationship between angles in high-dimensional spaces and projected values. Compared with eq. (1), the new relationship (see relationship (4)) is deterministic, without dependence on an asymptotic condition. By theoretical analysis, we show that the proposed kernel functions are effective for Problems 1.1 and 1.2, and that the smaller the reference angle is, the more accurate the kernel functions are (see Lemmas 3.2 and 3.3).
(2) To minimize the reference angle, we study the structure of the configuration of projection vectors (see Sec. 3.2). We propose two structures (see Alg. 1 and Alg. 2) that perform better than purely random projection (see Alg. 3), and we establish the relationship between the reference angle and the proposed structures (see Lemma 3.5 and Fig. 4).
(3) Based on $K _ { S } ^ { 1 }$ , we propose a random-projection technique KS1 which can be used for CEOs-based tasks [25, 26, 28]. Based on $K _ { S } ^ { 2 }$ , we introduce a new routing test called the KS2 test which can be used to accelerate the graph-based Approximate Nearest Neighbor Search (ANNS) [22] (see Sec. 4).
(4) We experimentally show that KS1 provides a slight accuracy improvement (up to $0.8\%$) over CEOs. In the ANNS task, we demonstrate that HNSW+KS2 improves the queries-per-second (QPS) of the state-of-the-art approach HNSW+PEOs [22] by $10\%$–$30\%$, along with a $5\%$ reduction in index size.
# 2 Related Work
Due to space limitations, we focus on random projection techniques that are closely related to this work. An illustration comparing the proposed random projection technique with others can be found in Sec. B.1 and Fig. 3 in Appendix. Since the proposed kernel function is also used in similarity graphs for ANNS, a comprehensive discussion of ANNS solutions is provided in Appendix B.2.
In high-dimensional Euclidean spaces, the estimation of angles via random-projection techniques, especially Locality Sensitive Hashing (LSH) [18, 2, 3], has a relatively long history. One classical LSH-based technique is SimHash [8], whose basic idea is to generate multiple hyperplanes and partition the original space into many cells such that two vectors falling into the same cell are likely to have a small angle between them. In [4], the authors proposed a different LSH-based technique called Falconn for angular distance, whose basic idea is to find the closest or furthest projection vector to the data vector and record this projection vector as a hash value, leading to better search performance than SimHash. Later, the authors in [25] employed Concomitants of Extreme Order Statistics (CEOs) to identify the projection with the largest or smallest inner product with the data vector, as shown in Lemma 1.3, and recorded the corresponding maximum or minimum projected value to obtain a more accurate estimation than using a hash value alone [26]. Notably, CEOs can be used not only for angular distance but also for inner product estimation.
Due to its ease of implementation, CEOs has been employed in several similarity search tasks [25, 4, 28], as mentioned in Sec. 1. Additionally, CEOs has been used to accelerate similarity graphs, which are among the leading structures for Approximate Nearest Neighbor Search (ANNS). In [22], by swapping the roles of query and data vectors in CEOs, the authors introduced a space-partitioning technique and proposed the PEOs test, which can be used to compare the objective angle with a fixed threshold under probabilistic guarantees. This test was incorporated into the routing mechanisms of similarity graphs and achieved significant search performance improvements over original graph structures like HNSW [23] and NSSG [13].
# 3 Two Probabilistic Kernel Functions
# 3.1 Reference-Angle-Based Design
In Sec. 3.1, we propose probabilistic kernel functions for Problems 1.1 and 1.2. First, we introduce some notation. Let $\mathbb{R}^d$ be the ambient vector space. Define $H \in SO(d) \subset \mathbb{R}^{d \times d}$ as a random rotation matrix (note that the definition here differs from that in [4], where the so-called random rotation matrix is actually a matrix with i.i.d. Gaussian entries), and let $S = [\pmb{u}_1, \pmb{u}_2, \dots, \pmb{u}_m] \in \mathbb{R}^{d \times m}$ be an arbitrary fixed set of $m$ points on the unit sphere $\mathbb{S}^{d-1}$. For any vector $\pmb{v} \in \mathbb{S}^{d-1}$, define the reference vector of $\pmb{v}$ with respect to $S$ as $Z_S(\pmb{v}) = \operatorname{argmax}_{\pmb{u} \in S} \langle \pmb{u}, \pmb{v} \rangle$. Let $A_S(\pmb{v})$ denote the cosine of the reference angle with respect to $Z_S(\pmb{v})$, that is, $A_S(\pmb{v}) = \langle \pmb{v}, Z_S(\pmb{v}) \rangle$. Next, we introduce two probabilistic kernel functions $K_S^1(\cdot,\cdot)$ and $K_S^2(\cdot,\cdot)$ as follows, where $K_S^1(\cdot,\cdot)$ corresponds to Problem 1.1 and $K_S^2(\cdot,\cdot)$ corresponds to Problem 1.2.
$$
K_S^1(\pmb{q}, \pmb{v}) = \langle \pmb{v}, Z_{HS}(\pmb{q}) \rangle, \qquad \pmb{v}, \pmb{q} \in \mathbb{S}^{d-1}. \tag{2}
$$

$$
K_S^2(\pmb{q}, \pmb{v}) = \langle H\pmb{q}, Z_S(H\pmb{v}) \rangle / A_S(H\pmb{v}), \qquad \pmb{v}, \pmb{q} \in \mathbb{S}^{d-1}. \tag{3}
$$
Remarks. (1) (Computational efficiency) Both functions are computationally efficient. For $K_S^1(\pmb{q}, \pmb{v})$, $HS$ and $\langle \pmb{v}, H\pmb{u}_i \rangle$ ($1 \leq i \leq m$) can be pre-computed. Hence, for all $\pmb{v}$'s, we only need to determine $Z_{HS}(\pmb{q})$ online, which requires cost $O(dm)$ (this cost can be further reduced by a reasonable choice of $S$; see Sec. 3.2). For $K_S^2(\pmb{q}, \pmb{v})$, $A_S(H\pmb{v})$ and $Z_S(H\pmb{v})$ can be pre-computed. $H\pmb{q}$ only needs to be computed online once via the Fast Johnson–Lindenstrauss Transform [12, 4], with cost $O(d \log d)$, and $\langle H\pmb{q}, Z_S(H\pmb{v}) \rangle$ can be computed online with cost $O(L)$ for every $\pmb{v}$, where $L \ll d$ denotes the number of partitioned subspaces and will be explained in Sec. 3.2.
(2) (Exploitation of the reference angle) In the design of existing projection techniques such as CEOs [25], Falconn [4], Falconn++ [26], etc., only the reference vector $Z_S(\cdot)$ is utilized. In contrast, our kernel functions defined in eq. (2) and eq. (3) incorporate not only the reference vector $Z_S(\cdot)$ but also the reference angle information $A_S(\cdot)$ (although the reference angle is not explicitly shown in eq. (2), its influence will become clear in Lemma 3.2). In fact, the reference angle plays a central role, as it is the key factor controlling the precision of angle estimation (see Lemma 3.2).
(3) (Generalizations of CEOs and PEOs) Beyond incorporating the reference angle, these two kernel functions can also be regarded as generalizations of CEOs and PEOs respectively in a certain sense. Specifically, if $S$ is taken as a point set generated via a Gaussian distribution and $Z _ { S } ( \pmb { v } )$ is replaced by the reference vector having the maximum inner product with the query, then $K _ { S } ^ { 1 } ( \pmb q , \pmb v )$ equals the indicator $\boldsymbol { v } ^ { \top } \boldsymbol { u } _ { \mathrm { m a x } }$ used in CEOs. Similarly, if we remove the term $A _ { S } ( H \pmb { v } )$ and take $S$ to be the same space-partitioned structure as that of PEOs, $K _ { S } ^ { 2 } ( \pmb q , \pmb v )$ is similar to the indicator of PEOs.
(4) (Configuration of projection vectors) Although we do not currently require any specific properties of the configuration of $S$, it is clear that the shape of $S$ impacts both $K_S^1(\pmb{q}, \pmb{v})$ and $K_S^2(\pmb{q}, \pmb{v})$. We will discuss the structure of $S$ in detail in Sec. 3.2.
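A direct numpy reading of eq. (2) and eq. (3) is given below as a sketch: $H$ is drawn as a Haar-random orthogonal matrix via QR, and $S$ is a set of random unit vectors standing in for the structured configurations of Sec. 3.2 (all names and sizes are ours).

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 16, 48

def random_rotation(d, rng):
    """Haar-random orthogonal matrix via QR with sign correction."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

def unit_rows(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

S = unit_rows(rng.standard_normal((m, d)))   # rows play the role of u_1..u_m
H = random_rotation(d, rng)

def Z(point_set, x):
    """Reference vector of x w.r.t. point_set: argmax_u <u, x>."""
    return point_set[int(np.argmax(point_set @ x))]

def K1(q, v):
    # K_S^1(q, v) = <v, Z_{HS}(q)>; rotating every u in S gives rows S @ H.T
    return float(v @ Z(S @ H.T, q))

def K2(q, v):
    # K_S^2(q, v) = <Hq, Z_S(Hv)> / A_S(Hv)
    z = Z(S, H @ v)
    return float((H @ q) @ z) / float((H @ v) @ z)

q = unit_rows(rng.standard_normal((1, d)))[0]
print(K1(q, q), K2(q, q))   # K2(q, q) is exactly 1 by construction
```

Note how the division by $A_S(H\pmb{v})$ makes $K_S^2$ self-normalizing: for $\pmb{v} = \pmb{q}$ the numerator and denominator coincide, so the kernel returns 1 regardless of $S$ or $H$.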
Next, we provide a definition that will be used to establish the property of $K _ { S } ^ { 2 }$ .
Definition 3.1 Let $\phi_1, \phi_2 \in (0, \pi)$ and let $\theta \in (0, \pi)$ be an arbitrary angle threshold. A probabilistic kernel function $K(\pmb{q}, \pmb{v})$ is called angle-sensitive when it satisfies the following two conditions:

(1) If $\cos\theta \leq \cos\phi_1 = \langle \pmb{q}, \pmb{v} \rangle$, then $\mathbb{P}[K(\pmb{q}, \pmb{v}) \geq \cos\theta] \geq p_1(\phi_1)$;

(2) If $\langle \pmb{q}, \pmb{v} \rangle = \cos\phi_2 < \cos\theta$, then $\mathbb{P}[K(\pmb{q}, \pmb{v}) \geq \cos\theta] < p_2(\phi_2)$,

where $p_2(\phi_2)$ is a strictly decreasing function of $\phi_2$ and $p_1(\phi_1) > p_2(\phi_2)$ when $\phi_1 < \phi_2$.
The definition of the angle-sensitive property is analogous to that of the locality-sensitive hashing property. The key difference is that the approximation ratio $c$ used in LSH is not introduced here, as the angle threshold $\theta$ is explicitly defined, and only angles smaller than $\theta$ are considered valid.
We are now ready to present the following two lemmas for $K _ { S } ^ { 1 }$ and $K _ { S } ^ { 2 }$ , which demonstrate that they serve as effective solutions to Problems 1.1 and 1.2, respectively.
Lemma 3.2 (1) Let $d \geq 3$ and let $(\pmb{q}, \pmb{v})$ be an arbitrary pair of normalized vectors with angle $\phi \in (0, \pi)$. The conditional CDF of $K_S^1(\pmb{q}, \pmb{v})$ can be expressed as follows:
$$
F_{K_S^1(\pmb{q},\pmb{v}) \mid A_S(\pmb{q})}(x \mid \cos\psi) = I_t\!\left(\frac{d-2}{2}, \frac{d-2}{2}\right), \tag{4}
$$

where $\psi \in (0, \pi)$, $t = \frac{1}{2} + \frac{x - \cos\phi\cos\psi}{2\sin\phi\sin\psi}$, $I_t$ denotes the regularized incomplete Beta function, and $x \in [\cos(\phi+\psi), \cos(\phi-\psi)]$.
(2) Let $\pmb{q}$, $\pmb{v}_1$ and $\pmb{v}_2$ be three normalized vectors on $\mathbb{S}^{d-1}$ such that $\langle \pmb{q}, \pmb{v}_1 \rangle > \langle \pmb{q}, \pmb{v}_2 \rangle$. The probability $\mathbb{P}[K_S^1(\pmb{q}, \pmb{v}_1) > K_S^1(\pmb{q}, \pmb{v}_2) \mid A_S(\pmb{q}) = \cos\psi]$ increases as $\psi$ decreases in $(0, \pi)$. In particular, when $\psi \in (0, \pi/2)$, $\mathbb{P}[K_S^1(\pmb{q}, \pmb{v}_1) > K_S^1(\pmb{q}, \pmb{v}_2) \mid A_S(\pmb{q}) = \cos\psi] > 0.5$.
Lemma 3.3 Let $\psi \in (0, \pi/2)$, that is, $A_S(\pmb{v}) = \cos\psi \in (0, 1)$, and let $d \geq 3$. Then $K_S^2$ is an angle-sensitive function, where $p_1(\phi) = 0.5$ and $p_2(\phi) = I_{t'}\!\left(\frac{d-2}{2}, \frac{d-2}{2}\right)$, with $t' = \frac{1}{2} - \frac{\cos\theta - \cos\phi}{2\sin\phi\tan\psi}$.
Remarks. (1) (Discussion on boundary values) When $\phi = 0$ or $\phi = \pi$, $K_S^1$ and $K_S^2$ take fixed values rather than being random variables, and when $\psi = 0$ or $\psi = \pi$, the exact value of $\langle \pmb{q}, \pmb{v} \rangle$ can be directly obtained. Therefore, probability analysis in these cases is meaningless. Additionally, in Lemma 3.3, we adopt the following convention: $p_2(\phi) = 0$ if $t' < 0$.
(2) (Deterministic relationship for angle testing) Lemma 3.2 establishes a relationship between the objective angle $\phi$ and the value of the function $K_S^1$. Notably, after computing $Z_S(\cdot)$, the value of the reference angle $A_S(\cdot)$ is obtained automatically. Besides, as will be shown in Sec. 3.2, with a reasonable choice of $S$, the assumption $A_S(\cdot) > 0$ can always be ensured. Hence, eq. (4) essentially describes a deterministic relationship. In contrast to the asymptotic relationship of CEOs, eq. (4) provides an exact relationship without additional assumptions.
(3) (Effectiveness of kernel functions) The above two lemmas show that, with a reasonable construction of $S$ such that the reference angle is small with high probability, $K_S^1$ and $K_S^2$ can effectively address the corresponding angle testing problems. Specifically, the smaller the reference angle is, the more effective $K_S^1$ and $K_S^2$ become (by the form of $p_2(\phi)$, a smaller $\psi$ leads to a smaller $p_2(\phi)$).
(4) (Gaussian distribution is suboptimal) The fact that a smaller reference angle is favorable justifies the utilization of $Z _ { S } ( \cdot )$ and also implies that the Gaussian distribution is not an optimal choice for configuring $S$ , since in this case, the selected reference vector with the largest inner product with the query or data vector is not guaranteed to have the smallest reference angle.
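The monotonicity in Lemma 3.2 (2) can be checked empirically by conditioning on the reference angle: fix $\psi$, draw the reference vector uniformly on the cone at angle $\psi$ around $\pmb{q}$, and count how often the kernel orders $\pmb{v}_1$ and $\pmb{v}_2$ correctly. A sanity-check simulation under our own parameter choices:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 16

def unit(x):
    return x / np.linalg.norm(x)

def at_angle(q, phi, rng):
    """Uniform unit vector at angle phi from unit vector q."""
    w = rng.standard_normal(q.shape)
    w = unit(w - (w @ q) * q)           # uniform direction orthogonal to q
    return np.cos(phi) * q + np.sin(phi) * w

def success_rate(psi, phi1, phi2, n, rng):
    """Estimate P[<v1, z> > <v2, z>] with z at reference angle psi from q."""
    q = unit(rng.standard_normal(d))
    v1 = at_angle(q, phi1, rng)         # phi1 < phi2: v1 is closer to q
    v2 = at_angle(q, phi2, rng)
    wins = 0
    for _ in range(n):
        z = at_angle(q, psi, rng)       # conditioned reference vector
        wins += (v1 @ z) > (v2 @ z)
    return wins / n

r_small = success_rate(np.pi / 8, np.pi / 6, np.pi / 3, 20000, rng)
r_large = success_rate(np.pi / 2.5, np.pi / 6, np.pi / 3, 20000, rng)
print(r_small, r_large)   # smaller psi should order the pair more reliably
```

Both rates exceed 0.5 (consistent with $\psi < \pi/2$), and the smaller reference angle yields a markedly higher success rate, matching the lemma.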
# Algorithm 1 Configuration of $S$ via random antipodal pairs

Input: $L$ is the level; $d = Ld'$ is the data dimension; $m$ is the number of vectors in each level

Output: $S_{\mathrm{sym}}(m, L)$, which is represented by $mL$ sub-vectors with dimension $d'$

1 for $l = 1$ to $L$ do

2 Generate $m/2$ points along with their antipodal points i.i.d. on $\mathbb{S}^{d'-1}$

3 Scale the norm of all $m$ points in this iteration to $1/\sqrt{L}$, and collect the vectors after scaling
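The antipodal-pair configuration of Alg. 1 translates almost line-for-line into numpy; a minimal sketch (the array layout and function name are ours):

```python
import numpy as np

def configure_antipodal(m, L, d_prime, rng):
    """Alg. 1 sketch: per level, m/2 random unit points plus their antipodal
    points, scaled to norm 1/sqrt(L). Returns an array of shape (L, m, d')."""
    assert m % 2 == 0
    levels = []
    for _ in range(L):
        P = rng.standard_normal((m // 2, d_prime))
        P /= np.linalg.norm(P, axis=1, keepdims=True)   # points on S^{d'-1}
        pairs = np.concatenate([P, -P])                 # antipodal pairs
        levels.append(pairs / np.sqrt(L))               # scale to 1/sqrt(L)
    return np.stack(levels)

rng = np.random.default_rng(0)
L, m, d_prime = 4, 8, 16
S_sym = configure_antipodal(m, L, d_prime, rng)
print(S_sym.shape)
```

The $1/\sqrt{L}$ scaling is what makes the multi-level trick work: any concatenation of one sub-vector per level has squared norm $L \cdot (1/\sqrt{L})^2 = 1$, so the $mL$ stored sub-vectors implicitly represent $m^L$ virtual unit vectors.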
# Algorithm 2 Configuration of $S$ via multiple cross-polytopes
Input: $L$ is the level; $d = L d ^ { \prime }$ is the data dimension; $m = 2 d ^ { \prime } a + b$ ; $R$ is the maximum number of iterations
Output: $S _ { \mathrm { p o l } } ( m , L )$ , which is represented by $m L$ sub-vectors with dimension $d ^ { \prime }$
4 Generate $N$ points randomly and independently on $\mathbb{S}^{d'-1}$, where $N$ is a sufficiently large number
5 for $r = 1$ to $R$ do
6 for $t = 1$ to $a$ do
7 Generate a random rotation matrix $H \in \mathbb{R}^{d' \times d'}$, and rotate the $2d'$ axis directions in $\mathbb{R}^{d'}$ using $H$

8 Collect the $2d'$ vectors of the cross-polytope after rotation
9 if $b > 0$ then
10 Repeat the above iteration and select $b / 2$ antipodal pairs from the rotated cross-polytope
11 For the generated $S \subset \mathbb{S}^{d'-1}$, compute $\tilde{J}(S, N)$ and maintain the $S$ with the largest $\tilde{J}(S, N)$, denoted by $S_{\mathrm{max}}$
12 for $l = 1$ to $L$ do
13 Generate a random rotation matrix $H \in \mathbb{R}^{d' \times d'}$ and rotate the configuration $S_{\mathrm{max}}$ using $H$

14 Scale the norm of all $m$ points in this iteration to $1/\sqrt{L}$ and collect the vectors after scaling
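A compact sketch of the cross-polytope configuration of Alg. 2, under our own simplification $b = 0$ (so $m = 2d'a$); the candidate selection uses the $\tilde{J}$ estimator of Lemma 3.4, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d, rng):
    """Haar-random orthogonal matrix via QR with sign correction."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

def jhat(S, N, rng):
    """Monte Carlo estimate of J(S) = E[A_S(v)] over uniform unit v."""
    V = rng.standard_normal((N, S.shape[1]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    return float(np.max(V @ S.T, axis=1).mean())

def configure_polytopes(a, L, d_prime, R_iters, N, rng):
    """Alg. 2 sketch with b = 0: m = 2*d_prime*a sub-vectors per level."""
    best_S, best_j = None, -np.inf
    for _ in range(R_iters):                 # candidate loop
        blocks = []
        for _ in range(a):                   # a rotated cross-polytopes
            H = random_rotation(d_prime, rng)
            E = np.eye(d_prime)
            blocks.append(np.concatenate([E, -E]) @ H.T)  # 2d' rotated vertices
        S = np.concatenate(blocks)
        j = jhat(S, N, rng)                  # keep the best-covering candidate
        if j > best_j:
            best_S, best_j = S, j
    levels = []
    for _ in range(L):                       # per-level rotation and scaling
        H = random_rotation(d_prime, rng)
        levels.append((best_S @ H.T) / np.sqrt(L))
    return np.stack(levels)

S_pol = configure_polytopes(a=2, L=3, d_prime=8, R_iters=4, N=2000, rng=rng)
print(S_pol.shape)   # (3, 32, 8): m = 2*8*2 = 32 sub-vectors per level
```

Rotations preserve the antipodal structure, so every sub-vector still has its negation in the same level, which keeps projection evaluation halvable as discussed below.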
# 3.2 Configuration of the Point Set on the Hypersphere
The remaining task is to configure the set $S$. Given $m$, our goal is to construct a set $S$ of $m$ points on $\mathbb{S}^{d-1}$, denoted by $S_m$, such that the reference angle $A_{S_m}(\cdot)$ is minimized. Due to the effect of the random rotation matrix $H$, the optimal configurations, denoted by $\bar{S}_m$ and $S_m^*$, can be obtained either in the sense of expectation or in the sense of the worst case, respectively. That is,
$$
\begin{array}{c}
\bar{S}_m = \underset{S = \{\pmb{u}_1, \ldots, \pmb{u}_m\} \subset \mathbb{S}^{d-1}}{\operatorname{argmax}} \; \mathbb{E}_{\pmb{v} \sim U(\mathbb{S}^{d-1})}\left[A_S(\pmb{v})\right], \\
S_m^* = \underset{S = \{\pmb{u}_1, \ldots, \pmb{u}_m\} \subset \mathbb{S}^{d-1}}{\operatorname{argmax}} \; \underset{\pmb{v} \in \mathbb{S}^{d-1}}{\min} \; \underset{1 \leq i \leq m}{\max} \; \langle \pmb{u}_i, \pmb{v} \rangle.
\end{array}
$$
By the definitions of $\bar { S } _ { m }$ and $S _ { m } ^ { * }$ , they correspond to the configurations that achieve the smallest expected value and the smallest maximum value of $A _ { S } ( \pmb { v } )$ , respectively. On the other hand, finding the exact solutions for $\bar { S } _ { m }$ and $S _ { m } ^ { * }$ is closely related to the best covering problem on the sphere, which is highly challenging and remains open in the general case. To the best of the authors’ knowledge, the optimal configuration $S _ { m } ^ { * }$ is only known when $m \leq d + 3$ . In light of this, we provide two configurations of $S$ : one relies on random antipodal projections (Alg. 1), and the other is built using multiple cross-polytopes (Alg. 2). Each has its own advantages. Specifically, Alg. 1 enables the estimation of reference angles, while Alg. 2 can empirically produce slightly smaller reference angles and is more efficient for projection computation.
Before proceeding to the details of the algorithms, we introduce a quantity $J(S)$ as follows.

$$
J(S) = \mathbb{E}_{\pmb{v} \sim U(\mathbb{S}^{d-1})}\left[A_S(\pmb{v})\right].
$$
By definition, $J(S)$ denotes the expected value of the cosine of the reference angle w.r.t. $S$. This quantity is consistent with our theory, as a random rotation is applied to $\pmb{v}$ or $\pmb{q}$ in eq. (2) and eq. (3). Based on the previous analysis, for a fixed $m$, $J(S)$ is maximized when $S = \bar{S}_m$, which is hard to compute. However, we have the following result to approximately evaluate $J(S)$.
Lemma 3.4 For every integer $N \geq 1$, let $\pmb{v}_{1,N}, \ldots, \pmb{v}_{N,N}$ be vectors drawn randomly and independently from $U(\mathbb{S}^{d-1})$, and let $\tilde{J}(S, N) = \left[\sum_{i=1}^N A_S(\pmb{v}_{i,N})\right]/N$. Then $\tilde{J}(S, N) \overset{p}{\to} J(S)$ as $N$ goes to infinity.
Lemma 3.4 shows that, when $N$ is sufficiently large, we can approximately evaluate the performance of different configurations $S$ ’s by comparing their ${ \tilde { J } } ( S , N )$ ’s.
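As a quick illustration of how $\tilde{J}(S, N)$ ranks configurations, the sketch below compares two random point sets of different sizes: a larger set covers the sphere more densely, so its expected reference-angle cosine is larger (sizes and names are our own toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

def unit_rows(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def jhat(S, N, rng):
    """tilde-J(S, N): empirical mean of A_S(v) over N uniform unit v."""
    V = unit_rows(rng.standard_normal((N, S.shape[1])))
    return float(np.max(V @ S.T, axis=1).mean())   # per-sample best cosine

S_small = unit_rows(rng.standard_normal((8, d)))   # 8 random directions
S_big = unit_rows(rng.standard_normal((64, d)))    # 64 random directions
j_small, j_big = jhat(S_small, 5000, rng), jhat(S_big, 5000, rng)
print(j_small, j_big)   # j_big should be larger: smaller reference angles
```

The same estimator is what Alg. 2 uses to pick the best candidate configuration among the randomly generated ones.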
Now, we are ready to explain Alg. 1 and Alg. 2 as follows.
(1) (Utilization of antipodal pairs and cross-polytopes) We use the antipodal pair or the cross-polytope as our building block for three reasons. (i) Since all the projection vectors form antipodal pairs, the evaluation time of the projection vectors can be halved. (ii) Both structures ensure that the assumption $A _ { S } ( \pmb { v } ) > 0$ holds, so that the condition in Lemma 3.3 is always satisfied. (iii) The result in [7] shows that, for $m = 2 d$ , under mild conditions, the $2 d$ vertices of a cross-polytope have the smallest covering radius, that is, the smallest reference angle in the worst case. Although the results for the case $m > 2 d$ are unknown, we can rotate the fixed cross-polytope in random directions to generate multiple cross-polytopes until we obtain $m$ vectors, which explains steps 6 to 10 in Alg. 2.
(2) (Selection from random configurations) We can generate $m$ points in the above way many times, which yields multiple candidate $S$ ’s. By Lemma 3.4, we can use ${ \tilde { J } } ( S , N )$ to approximately evaluate $J ( S )$ , and thus, among the generated $S$ ’s, we select the configuration $S _ { \mathrm { p o l } }$ corresponding to the maximal $\tilde { J } ( S , N )$ . This explains steps 5 and 11 in Alg. 2.
(3) (Accuracy boosting via multiple levels) Clearly, increasing $m$ can lead to a smaller reference angle. The analysis in [22] shows that, for certain angle-thresholding problems requiring high accuracy, an exponential increase in $m$ , rather than a linear one, can be effective. Therefore, similar to [22], we use a product-quantization-like technique [19] to partition the original space into $L$ subspaces (levels), which is adopted in both algorithms. By concatenating equal-length sub-vectors from these $L$ subspaces, we can virtually generate $m ^ { L }$ normalized projected vectors. As will be shown in Lemma 3.5, the introduction of $L$ can significantly decrease the reference angle.
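To make the multi-level construction concrete, the following sketch (our illustration, not the paper's code) materializes the $m ^ { L }$ virtual vectors obtained by concatenating one unit sub-vector per subspace; in the actual algorithms these vectors are never materialized, which is precisely the point of the construction.

```python
import numpy as np
from itertools import product

def virtual_vectors(subspace_configs):
    """Given L arrays of shape (m, d') whose rows are unit sub-vectors,
    form the m^L virtual vectors obtained by concatenating one sub-vector
    per subspace. Rescaling by 1/sqrt(L) makes each concatenation a unit
    vector (assuming unit-norm sub-vectors)."""
    L = len(subspace_configs)
    out = []
    for combo in product(*subspace_configs):  # one row per subspace
        out.append(np.concatenate(combo) / np.sqrt(L))
    return np.array(out)
```

With $m = 256$ and $L = 4$, for example, the construction virtually provides $256^4 \approx 4.3 \times 10^9$ normalized projection directions at the storage cost of only $mL$ sub-vectors.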
(4) (Fast projection computation via multiple cross-polytopes) In eq. (2) and eq. (3), we need to compute $H \pmb { q } , H \pmb { v } , H S \pmb { q }$ and $H S \pmb { v }$ in the indexing phase or in the query phase. If such projection time is a concern in practice, we can set $R$ and $L$ to 1 in Alg. 2 to accelerate the projection without the explicit computation of $S$ . Specifically, if $m \geq 2 d$ , suppose that $m$ is divisible by $2 d$ and $m = 2 l d$ . Then $H ^ { \dag }$ or $( H S ) ^ { \dagger }$ can be approximated by $[ S _ { ( 1 ) } , S _ { ( 2 ) } , \ldots , S _ { ( l ) } ] ^ { \dagger }$ , where $S _ { ( i ) }$ denotes the $i$ -th structured random projection matrix. The projection time cost for $\pmb { v }$ or $\pmb { q }$ can then be reduced from $O ( m d )$ to $O ( ( m \log d ) / 2 )$ . If $2 d > m$ , the cost is $O ( d \log d )$ by projection matrix completion.
By Alg. 1 and Alg. 2, we obtain structures $S _ { \mathrm { s y m } } ( m , L )$ and $S _ { \mathrm { p o l } } ( m , L )$ virtually containing $m ^ { L }$ projection vectors. For $S _ { \mathrm { s y m } } ( m , L )$ , we can establish the following relationship between $J ( S _ { \mathrm { s y m } } ( m , L ) )$ and $( m , L )$ , which reveals the impact of the choice of $( m , L )$ on the reference angle.
Lemma 3.5 Suppose that $d$ is divisible by $L$ , and $d = L d ^ { \prime }$ , where $d ^ { \prime } \geq 3$ . Let $c _ { d ^ { \prime } } = \frac { \Gamma \left( \frac { d ^ { \prime } } { 2 } \right) } { \sqrt { \pi } \, \Gamma \left( \frac { d ^ { \prime } - 1 } { 2 } \right) }$ , $f ( y ) = c _ { d ^ { \prime } } ( 1 - y ^ { 2 } ) ^ { \frac { d ^ { \prime } - 3 } { 2 } }$ , and $\textstyle F ( y ) = \int _ { - 1 } ^ { y } f ( t ) \, d t$ . We have
$$
J ( S _ { \mathrm { s y m } } ( m , L ) ) > m \sqrt L \frac { \Gamma ( \frac { d + L } { 2 L } ) \Gamma ( \frac { d } { 2 } ) } { \Gamma ( \frac { d } { 2 L } ) \Gamma ( \frac { d + 1 } { 2 } ) } \int _ { - 1 } ^ { 1 } y F ( y ) ^ { m - 1 } f ( y ) \mathrm { d } y .
$$
The RHS of ineq. (8) actually denotes $J ( S _ { \mathrm { r a n } } )$ , where $S _ { \mathrm { r a n } }$ is the configuration of purely random projections (see Alg. 3 and Sec. A.4 in the Appendix for more detail). A numerical computation of the RHS of ineq. (8) is shown in the Appendix (Fig. 4). On the other hand, due to the introduction of cross-polytopes, the analysis of $J ( S _ { \mathrm { p o l } } ( m , L ) )$ is challenging. However, simulation experiments show that $J ( S _ { \mathrm { p o l } } ( m , L ) )$ is only slightly larger than $J ( S _ { \mathrm { s y m } } ( m , L ) )$ , which makes the lower bound in ineq. (8) still applicable to $J ( S _ { \mathrm { p o l } } ( m , L ) )$ in practice.
# 4 Applications to Similarity Search
In Sec. 4, we show how $K _ { S } ^ { 1 }$ and $K _ { S } ^ { 2 }$ can be used in concrete applications.
# 4.1 Improvement on CEOs-Based Techniques
As for $K _ { S } ^ { 1 }$ , we can use it to improve CEOs, which is used for Maximum Inner Product Search (MIPS) and further applied to accelerate LSH-based ANNS [4] and DBSCAN [28]. Since CEOs is originally
designed for inner products, we generalize $K _ { S } ^ { 1 }$ to $K _ { S } ^ { 1 ^ { \prime } }$ as follows to align with CEOs:
$$
K _ { S } ^ { 1 ^ { \prime } } ( \pmb { q } , \pmb { v } ) = \lVert \pmb { v } \rVert \cdot \langle \pmb { v } , Z _ { H S } ( \pmb { q } ) \rangle \qquad \pmb { v } \in \mathbb { R } ^ { d } , \pmb { q } \in \mathbb { S } ^ { d - 1 } .
$$
It is easy to see that, with two minor modifications, namely replacing $\pmb { v } \in \mathbb { S } ^ { d - 1 }$ with $\pmb { v } \in \mathbb { R } ^ { d }$ and replacing $x$ with $\| \pmb { v } \| x$ in eq. (4), Lemma 3.2 still holds. Therefore, $K _ { S } ^ { 1 ^ { \prime } }$ can be regarded as a reasonable kernel function for inner products. Then, we can apply $K _ { S } ^ { 1 ^ { \prime } }$ to the algorithms in [25, 26, 28]. We only need to make the following modification: the random Gaussian matrix, which denotes the set of projection vectors, is replaced by $S _ { \mathrm { s y m } }$ or $S _ { \mathrm { p o l } }$ , with the other parts unchanged. This substitution does not change the complexity of the original algorithms. To distinguish this projection technique based on $K _ { S } ^ { 1 ^ { \prime } }$ from CEOs, we refer to it as KS1 (see Alg. 4 for the projection structure of KS1). In the experiments, we will demonstrate that KS1 yields a slight improvement in recall rates over CEOs, owing to a smaller reference angle.
# 4.2 A New Probabilistic Test in Similarity Graph
This is the original task of PEOs [22], where the authors introduce probabilistic routing into similarity graphs to accelerate the search. The definition of probabilistic routing is as follows.
Definition 4.1 (Probabilistic Routing [22]) Given a query vector $\pmb { q }$ , a node $\pmb { v }$ in the graph index, an error bound $\epsilon$ , and a distance threshold $\delta$ , for an arbitrary neighbor $\pmb { w }$ of $\pmb { v }$ such that $\mathrm { d i s t } ( { \pmb w } , { \pmb q } ) < \delta$ , if a routing algorithm returns true for $\pmb { w }$ with a probability of at least $1 - \epsilon$ , then the algorithm is deemed to be $( \delta , 1 - \epsilon )$ -routing.
In [22], the authors proposed a $( \delta , 1 - \epsilon )$ -routing test called the PEOs test. Here, based on $K _ { S } ^ { 2 }$ , we propose a new routing test for $\ell _ { 2 }$ distance, called the KS2 test, as follows.
$$
\sum _ { i = 1 } ^ { L } { q _ { i } } ^ { \top } { u _ { e [ i ] } ^ { i } } \geq A _ { S } ( \pmb { v } ) \cdot \frac { \| \pmb { w } \| ^ { 2 } / 2 - \tau - \pmb { v } ^ { \top } \pmb q } { \| \pmb { e } \| } .
$$
Here, $\pmb q \in \mathbb { R } ^ { d }$ is the query, $\pmb { v }$ is the visited graph node, $\pmb { w }$ is the neighbor of $\pmb { v }$ , and $\pmb { e }$ is the edge between $\pmb { v }$ and $\pmb { w }$ . $\tau$ is the threshold determined by the result list of the graph. $q _ { i }$ , $e _ { i }$ , and $u _ { j } ^ { i }$ denote the $i$ -th sub-vectors ( $1 \leq i \leq L$ ) of $\pmb q$ , $\pmb { e }$ , and $\pmb { u } _ { j }$ , respectively. ${ u _ { e [ i ] } ^ { i } }$ denotes the reference vector of $e _ { i }$ among all $\{ u _ { j } ^ { i } \}$ ’s $( 1 \leq j \leq m )$ . In our experiments, $S$ is taken to be $S _ { \mathrm { s y m } } ( 2 5 6 , L )$ .
During the traversal of the similarity graph, we check the exact distance from graph node $w$ to $q$ only when ineq. (10) is satisfied; otherwise, we skip the computation of $w$ for efficiency. A complete graph-based algorithm equipped with the KS2 test can be found in Alg. 6. By Lemma 3.3 and the same analysis in [22], we can easily obtain the following result.
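A minimal sketch of the routing check in ineq. (10), with the per-subspace dot products $q _ { i } ^ { \top } u _ { e [ i ] } ^ { i }$ and the node-dependent quantities passed in as precomputed scalars; the function and parameter names are illustrative, not from the paper's implementation.

```python
def ks2_test(subspace_dots, A_S_v, w_norm_sq, tau, v_dot_q, e_norm):
    """Return True iff the neighbor w passes the KS2 routing test, ineq. (10).

    subspace_dots: the L looked-up values q_i^T u_{e[i]}^i.
    A_S_v: the reference-angle cosine A_S(v) for the visited node v.
    w_norm_sq: ||w||^2; tau: distance threshold from the result list;
    v_dot_q: v^T q; e_norm: ||e||.
    """
    lhs = sum(subspace_dots)                        # L lookups, L-1 additions
    rhs = A_S_v * (w_norm_sq / 2.0 - tau - v_dot_q) / e_norm
    return lhs >= rhs
```

Only neighbors for which the test returns True have their exact distance to the query computed; the others are skipped, which is where the speedup comes from.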
Corollary 4.2 The graph-based search equipped with the KS2 test (10) is $( \delta , 0 . 5 )$ -routing.
Comparison with PEOs. Since PEOs, like CEOs, uses a Gaussian distribution to generate projection vectors in subspaces, the estimation in ineq. (10) is more accurate than that of the PEOs test, as discussed earlier. In addition, the proposed test has two advantages: (1) ineq. (10) is much simpler than the testing inequality of the PEOs test, resulting in higher evaluation efficiency; (2) ineq. (10) requires fewer constants to be stored, leading to a smaller index size than that of PEOs.
Complexity analysis. For the time complexity: for every edge $e$ , computing the LHS of ineq. (10) requires $L$ table lookups and $L - 1$ additions, while computing the RHS requires two additions and one multiplication. By using SIMD, we can perform the KS2 test for 16 edges simultaneously. For the space complexity: for each edge, we need to store $L$ bytes to recover ${ q _ { i } } ^ { \top } u _ { e [ i ] } ^ { i }$ , along with two scalars, $A _ { S } ( \pmb { v } ) \| \pmb { w } \| ^ { 2 } / ( 2 \| \pmb { e } \| )$ and $A _ { S } ( \pmb { v } ) / \Vert \pmb { e } \Vert$ , which are quantized using scalar quantization to enable fast computation of the RHS of ineq. (10).
# 5 Experiments
All experiments were conducted on a PC equipped with an Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz. KS1 and KS2 were implemented in C++. The ANNS experiments used 64 threads for indexing and a single CPU for searching. We evaluated our methods on six high-dimensional real-world datasets: Word, GloVe1M, GloVe2M, Tiny, GIST, and SIFT. Detailed statistics for these datasets are provided in Appendix C.1. More experimental results can be found in Appendix C.
Table 1: Comparison of recall rates $( \% )$ for $k$ -MIPS, $k = 1 0$ . The number of projection vectors was fixed at 2048 for all compared methods. The top-5 projection vectors are probed. Probe $@ n$ indicates that the top- $n$ points were probed on each probed projection vector. To eliminate the bias introduced by random projection, we obtain the results over 10 runs and report the average recall rate.
# 5.1 Comparison with CEOs
As demonstrated in Sec. 4.1, the results of CEOs are directly used to accelerate other similarity search processes [26, 28]. In this context, we focus solely on the improvement of CEOs itself. Specifically, we show that KS1, equipped with the structures $S _ { \mathrm { s y m } } ( m , 1 )$ and $S _ { \mathrm { p o l } } ( m , 1 )$ , can slightly outperform $\operatorname { C E O s } ( m )$ on the original task of CEOs, that is, $k$ -MIPS, where $m$ denotes the number of projection vectors and was set to 2048, following the standard configuration of the original CEOs. Since the only difference among the compared approaches is the configuration of the projection vectors, we use a unified algorithm (see construction of projection structure Alg. 4 and MIPS query processing Alg. 5 in Appendix) with the configuration of projection vectors as an input to compare their recall rates. From the results in Tab. 1, we observe that: (1) in most cases, KS1 with the two proposed structures achieves slightly better performance than CEOs, supporting our claim that a smaller reference angle yields a more accurate estimation, and (2) $S _ { \mathrm { p o l } }$ generally achieves a higher recall rate than $S _ { \mathrm { s y m } }$ , verifying that a configuration closer to the best covering yields better performance.
# 5.2 ANNS Performance
We chose ScaNN [16], HNSW [23], and HNSW+PEOs [22] as our baselines, where ScaNN is a state-of-the-art quantization-based approach that performs better than IVFPQFS, and HNSW+PEOs [22] is a state-of-the-art graph-based approach that outperforms FINGER [9] and Glass. Similar to HNSW+PEOs, KS2 is implemented on HNSW, and the corresponding approach is named HNSW+KS2. The detailed parameter settings of all compared approaches can be found in Appendix C.1, and additional experimental results can also be found in Appendix C.
(1) Index size and indexing time. Regarding indexing time, after constructing the HNSW graph, we require an additional 42s, 164s, 165s, 188s, 366s, and 508s to align the edges and build the KS2 testing structure on Word, GloVe1M, GIST, GloVe2M, SIFT, and Tiny, respectively. This overhead is less than $2 5 \%$ of the graph construction time. In practice, users can reduce the parameter efc to shorten indexing time while still preserving the superior search performance of $\mathrm { H N S W } { + } \mathrm { K S } 2$ . As for the index size, it largely depends on the parameter $L$ , which will be discussed later.
(2) Query performance. From the results in Fig. 1, we make the following observations. (i) Except for Word, HNSW+KS2 achieves the best performance among all compared methods. In particular, HNSW+KS2 accelerates HNSW by a factor of 2.5 to 3, and is 1.1 to 1.3 times faster than HNSW+PEOs, demonstrating the superiority of KS2 over PEOs. (ii) Compared with ScaNN, the advantage of HNSW+KS2 is especially evident in the recall region below $8 5 \%$ , highlighting the high efficiency of the routing test. On the other hand, in the high-recall region for Word, ScaNN outperforms HNSW+KS2 due to the connectivity issues of HNSW.
(3) Impact of $L$ . The only tunable parameter in KS2 is $L$ . Generally speaking, the larger $L$ is, the larger the index size is. On the other hand, a larger $L$ can lead to a smaller reference angle and yield better search performance. Hence, $L$ can be used to achieve different trade-offs between index size and search performance. In Fig. 2, we show the impact of $L$ on index size and search performance. From the results, we have the following observations. (i) The index size of $\mathrm { H N S W } { + } \mathrm { K S } 2$ is slightly smaller than that of HNSW+PEOs due to the storage of fewer scalars. (ii) When $d ^ { \prime } = d / L$ is around 16, $\mathrm { H N S W } { + } \mathrm { K S } 2$ achieves the best search performance. This is because a larger $L$ also leads to longer testing time and $d ^ { \prime } = 1 6$ is sufficient to obtain a small enough reference angle.
Figure 1: Recall-QPS evaluation of ANNS. $k = 1 0$ .
Figure 2: Impact of $L$ (see Appendix C.4 for other datasets). $k = 1 0$ . The y-axis of the upper figures denotes the additional index cost $( \% )$ of HNSW+PEOs compared to the original HNSW.

Abstract. In this paper, we study the angle testing problem in high-dimensional Euclidean spaces and propose two projection-based probabilistic kernel functions, one designed for angle comparison and the other for angle thresholding. Unlike existing approaches that rely on random projection vectors drawn from Gaussian distributions, our approach leverages reference angles and employs a deterministic structure for the projection vectors. Notably, our kernel functions do not require asymptotic assumptions, such as the number of projection vectors tending to infinity, and can be both theoretically and experimentally shown to outperform Gaussian-distribution-based kernel functions. We further apply the proposed kernel function to Approximate Nearest Neighbor Search (ANNS) and demonstrate that our approach achieves a 2.5X to 3X higher query-per-second (QPS) throughput compared to the state-of-the-art graph-based search algorithm HNSW.
# 1. Introduction
As the area of distributed optimization grows — owing to recent applications in federated learning (McMahan et al., 2017) and large-scale distributed deep learning (Verbraeken et al., 2020) — the gap between theory and practice has grown proportionally. Local Stochastic Gradient Descent (SGD) and its variants have been successfully used for distributed learning with heterogeneous data in practice for years (Wang et al., 2021; Reddi et al., 2021; Xu et al., 2023), but so far we have little theoretical understanding of this success (Wang et al., 2022).
The majority of theoretical works in distributed optimization take a worst-case approach to algorithm analysis: they consider the worst-case efficiency over some large class of optimization problems, such as the class of convex, smooth objectives satisfying some heterogeneity requirement (Woodworth et al., 2020a;b; Koloskova et al., 2020). While the resulting guarantees are very general, they do not always reflect practice, since they describe the worst case rather than cases which may appear in practice. For Local SGD and its deterministic variant, Local GD, these worst-case guarantees rely on the potentially unrealistic condition of small step sizes $\eta \le \mathcal { O } ( 1 / K )$ , where $K$ is the communication interval (Woodworth et al., 2020b; Koloskova et al., 2020). For Local GD, this small step size can guarantee monotonic decrease of the objective, but such stable convergence is far removed from practice, as non-monotonic decrease of the objective is common in practical machine learning (Jastrzebski et al., 2020; Cohen et al., 2021).
Motivated by this gap between theory and practice, we take a problem-specific approach and analyze Local GD for logistic regression. Our central question is:
# Can Local GD for logistic regression achieve accelerated convergence with a large step size $( \eta \gg 1 / K )$ ?
Despite the apparent simplicity of this setting, existing theory is unable to answer this question. In the single-machine setting, GD is known to converge for logistic regression with any step size (Wu et al., 2024b;a), and a large enough step size will cause non-monotonic decrease of the objective. For the distributed setting, previous work for this problem considered a two-stage variant of Local GD (Crawshaw et al., 2025), that uses a small step size $\eta \le \mathcal { O } ( 1 / K )$ before switching to a larger step size later in training. It remains open to analyze the vanilla Local GD with a constant stepsize in this setting.
Contributions In this paper, we prove that Local GD for distributed logistic regression converges with any step size $\eta > 0$ and any communication interval $K \geq 1$ . In particular, we show that choosing $\begin{array} { r } { \eta K = \widetilde { \Theta } \left( \frac { \gamma ^ { 3 } R } { M } \right) } \end{array}$ yields a convergence rate faster than existing lower bounds of Local GD for distributed convex optimization (see Section 3 for definitions of all parameters).
Our accelerated convergence crucially uses $\eta K \gg 1$ , which violates the condition $\eta \le \mathcal { O } ( 1 / K )$ from previous work and potentially creates non-monotonic objective decrease across communication rounds. To handle this instability, we adapt
Table 1: Upper bounds on the objective gap $F ( \pmb { w } ) - F _ { * }$ of distributed GD variants for logistic regression, up to constants and logarithmic factors. $R$ is the number of communication rounds, $K$ is the number of local steps, $M$ is the number of clients, and $\gamma$ is the maximum margin of the combined dataset. $( a )$ These bounds are derived in (Crawshaw et al., 2025) by applying the worst-case upper bounds of (Woodworth et al., 2020b) and (Koloskova et al., 2020) to the specific problem of logistic regression. $( b )$ Assuming $R \geq \Omega ( M n \gamma ^ { - 2 } )$ . $( c )$ Assuming $R \geq \widetilde \Omega ( \operatorname* { m a x } ( M n \gamma ^ { - 2 } , K M \gamma ^ { - 3 } ) )$ . $( d )$ This lower bound is included for comparison of the rate in terms of $R$ and $K$ , and applies to the class of convex, $H$ -smooth objectives that have a minimizer $\pmb { w } _ { \ast }$ with $\| \pmb { w } _ { * } \| \leq B$ and $\| \nabla F _ { m } ( \pmb { w } _ { * } ) - \nabla F ( \pmb { w } _ { * } ) \| \le \zeta _ { * }$ . It should be noted that logistic regression with separable data is not a member of this class, because no minimizer $\pmb { w } _ { \ast }$ exists for this objective.
techniques from the analysis of GD with large step sizes for single-machine logistic regression, introduced by Wu et al. (2024a), which shows that GD operates in an initial unstable phase before entering a stable phase where the objective decreases monotonically. We use these techniques to analyze Local GD by decomposing the algorithm’s update into the contribution from each individual data point, and tracking this contribution throughout the local update steps, in order to relate the trajectory of Local GD to that of GD. Consequently, we can show that Local GD also transitions from an unstable phase to a stable phase.
We also experimentally evaluate Local GD for logistic regression with synthetic data and MNIST data, and the results corroborate our theoretical finding that acceleration can be achieved by allowing for non-monotonic objective decrease. To probe the limitations of our theory, we evaluate Local GD under different regimes of $\eta$ and $K$ , and accordingly we propose open problems and directions for future research.
Organization We first discuss related work (Section 2), then state our problem (Section 3) and give our analysis (Section 4). We provide experimental results (Section 5), then conclude with a discussion of our results and future work (Section 6).
Notation For $n \in \mathbb { N }$ , we denote $[ n ] = \{ 1 , \dots , n \}$ . We use $\| \cdot \|$ to denote the $L _ { 2 }$ norm for vectors and the spectral norm for matrices. Outside of the abstract, we use $\mathcal { O } , \Omega$ , and $\Theta$ to omit only universal constants. Similarly, $\widetilde { \mathcal { O } } , \widetilde { \Omega }$ , and $\widetilde { \Theta }$ omit only universal constants and logarithmic terms.
# 2. Related Work
General Distributed Optimization Early work in this area focused on distributed algorithms for solving classical learning problems with greater efficiency through parallelization (Mcdonald et al., 2009; McDonald et al., 2010; Zinkevich et al., 2010; Dekel et al., 2012; Balcan et al., 2012; Zhang et al., 2013; Shamir & Srebro, 2014; Arjevani & Shamir, 2015). Recent years have seen a growth of research in distributed optimization due to applications for large-scale distributed training of neural networks (Tang et al., 2020; Verbraeken et al., 2020) and federated learning (McMahan et al., 2017). Federated learning is a paradigm for distributed learning in which user devices collaboratively train a machine learning model without sharing data; see (Kairouz et al., 2021; Wang et al., 2021) for a comprehensive survey.
Efficiency of Local SGD Local SGD (also known as Federated Averaging, or FedAvg) is a fundamental algorithm for distributed optimization, both in theory and practice. Convergence guarantees of Local SGD for distributed convex optimization under various conditions were proven by
Stich (2019); Haddadpour & Mahdavi (2019); Woodworth et al. (2020b); Khaled et al. (2020); Koloskova et al. (2020); Glasgow et al. (2022). These works consider the worst-case efficiency of Local SGD for solving large classes of optimization problems, such as the class of problems with smooth, convex objectives with some condition on the heterogeneity between local objectives; we refer to these guarantees as worst-case baselines. Lower bounds have established that Local SGD is dominated by Minibatch SGD in the worst case over various problem classes, despite the fact that Local SGD tends to outperform Minibatch SGD for practical problems (Woodworth et al., 2020a;b; Glasgow et al., 2022; Patel et al., 2024), and variants of Local SGD remain standard in practice (Wang et al., 2021; 2022; Reddi et al., 2021; Xu et al., 2023). It remains an active topic of research to develop a theoretical understanding of Local SGD and Minibatch SGD that aligns with practical observations (Woodworth et al., 2020b; Glasgow et al., 2022; Wang et al., 2022; Patel et al., 2023; 2024).
Gradient Methods for Logistic Regression In this work, we narrow our focus and consider the efficiency of Local GD for solving one particular optimization problem, continuing a line of work which shows that gradient-based optimization algorithms have very particular behavior for certain problems of interest in machine learning. Soudry et al. (2018); Ji & Telgarsky (2019) showed that GD for logistic regression converges to the maximum margin solution without explicit regularization. Gunasekar et al. (2018); Nacson et al. (2019); Ji et al. (2021) proved further implicit regularization results for general steepest descent methods, stochastic gradient descent, and a fast momentum-based algorithm, respectively. A separate line of work observed that GD exhibits non-monotonic decrease in the objective when training neural networks, a phenomenon called the Edge of Stability (Cohen et al., 2021; Damian et al., 2023).
The works most closely related to ours are (Wu et al., 2024b;a) and (Crawshaw et al., 2025). Wu et al. (2024b) showed that GD for logistic regression can converge with any positive stepsize, despite non-monotonic decrease of the objective, and that GD converges to the maximum margin solution. Wu et al. (2024a) showed that GD with a large stepsize can achieve accelerated convergence for logistic regression. Crawshaw et al. (2025) proved that a two-stage variant of Local GD can achieve accelerated convergence compared to the worst-case baselines (Koloskova et al., 2020; Woodworth et al., 2020b).
# 3. Problem Setup
We consider a distributed version of binary classification with linearly separable data. The number of clients is denoted by $M$ , the number of data points per client as $n$ , and
# Algorithm 1 Local GD
Input: Initialization $\boldsymbol { w } _ { 0 } \in \mathbb { R } ^ { d }$ , rounds $R \in \mathbb { N }$ , local steps $K \in \mathbb { N }$ , learning rate $\eta > 0$
1: for $r = 0 , 1 , \ldots , R - 1$ do
2: for $m \in [ M ]$ do
3: $\pmb { w } _ { r , 0 } ^ { m } \leftarrow \pmb { w } _ { r }$
4: for $k = 0 , \ldots , K - 1$ do
5: $\pmb { w } _ { r , k + 1 } ^ { m } \leftarrow \pmb { w } _ { r , k } ^ { m } - \eta \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$
6: end for
7: end for
8: $\pmb { w } _ { r + 1 } \leftarrow \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \pmb { w } _ { r , K } ^ { m }$
9: end for
the dimension of the input data as $d$ . The data consists of $M$ local datasets, one for each client: $D _ { m } = \{ ( \pmb { x } _ { i } ^ { m } , y _ { i } ^ { m } ) \} _ { i \in [ n ] }$ for each $m \in [ M ]$ , where $\pmb { x } _ { i } ^ { m } \in \mathbb { R } ^ { d }$ and $y _ { i } ^ { m } \in \{ - 1 , 1 \}$ . We assume that the global dataset $\boldsymbol { D } = \cup _ { m \in [ M ] } \boldsymbol { D } _ { m }$ is linearly separable, that is, there exists some $\pmb { w } \in \mathbb { R } ^ { d }$ such that $y \langle { \pmb w } , { \pmb x } \rangle > 0$ for every $( \pmb { x } , y ) \in D$ . We also denote by $\gamma$ and $\pmb { w } _ { \ast }$ the maximum margin and the maximum margin classifier for the global dataset, that is,
$$
\begin{array} { r } { \gamma = \underset { { \pmb w } \in \mathbb { R } ^ { d } , \| { \pmb w } \| = 1 } { \operatorname* { m a x } } \underset { ( { \pmb x } , { \pmb y } ) \in D } { \operatorname* { m i n } } y \langle { \pmb w } , { \pmb x } \rangle } \\ { { \pmb w } _ { * } = \underset { { \pmb w } \in \mathbb { R } ^ { d } , \| { \pmb w } \| = 1 } { \operatorname* { a r g m a x } } \underset { ( { \pmb x } , { \pmb y } ) \in D } { \operatorname* { m i n } } y \langle { \pmb w } , { \pmb x } \rangle . } \end{array}
$$
Note that $\gamma > 0$ from the assumption of linear separability.
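For intuition, with two-dimensional toy data (labels already folded into the features so that $y _ { i } = 1$ ), the margin $\gamma$ can be brute-forced over unit directions. This sketch is for illustration only and is not part of the paper's method.

```python
import numpy as np

def max_margin_2d(X, n_angles=200000):
    """Approximate gamma = max_{||w||=1} min_i <w, x_i> for 2-d data X
    by scanning unit directions w = (cos t, sin t)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    W = np.stack([np.cos(t), np.sin(t)], axis=1)   # candidate unit vectors
    margins = (W @ X.T).min(axis=1)                # min_i <w, x_i> for each w
    return float(margins.max())
```

For the dataset $\{ ( 1 , 0 ) , ( 0 , 1 ) \}$ , the maximizer is $\pmb { w } _ { * } = ( 1 / \sqrt { 2 } , 1 / \sqrt { 2 } )$ and $\gamma = 1 / \sqrt { 2 }$ , which the scan recovers to the resolution of the angular grid.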
We are interested in studying the behavior of Local Gradient Descent (Algorithm 1) for minimizing the logistic loss of this classification problem. Denoting $\ell ( z ) = \log ( 1 + \exp ( - z ) )$ , the local objective $F _ { m } : \mathbb { R } ^ { d } \to \mathbb { R }$ for client $m \in [ M ]$ is defined as
$$
F _ { m } ( \pmb { w } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \ell ( y _ { i } ^ { m } \langle \pmb { w } , \pmb { x } _ { i } ^ { m } \rangle ) ,
$$
and our goal is to approximately solve the following:
$$
\operatorname* { m i n } _ { \pmb { w } \in \mathbb { R } ^ { d } } \left\{ F ( \pmb { w } ) : = \frac { 1 } { M } \sum _ { m = 1 } ^ { M } F _ { m } ( \pmb { w } ) \right\} .
$$
In this work, we focus on minimization of this training loss, and guarantees for the population loss can be derived using standard techniques.
Notice that the objective depends on each data point $( \pmb { x } _ { i } ^ { m } , y _ { i } ^ { m } )$ only through the product $y _ { i } ^ { m } \pmb { x } _ { i } ^ { m }$ . Therefore, we can assume without loss of generality that $y _ { i } ^ { m } = 1$ for every $m \in [ M ] , i \in [ n ]$ , since we can replace any data point $( \pmb { x } _ { i } ^ { m } , - 1 )$ with $( - \pmb { x } _ { i } ^ { m } , 1 )$ , which preserves the product $y _ { i } ^ { m } { \pmb x } _ { i } ^ { m }$ and therefore does not change the trajectory of Local GD. We also assume that $\| \pmb { x } _ { i } ^ { m } \| \leq 1$ for every $m , i$ , which can always be enforced by rescaling all data points by $\operatorname* { m a x } _ { m , i } \| \pmb { x } _ { i } ^ { m } \|$ . Lastly, we will denote by $H$ the smoothness constant of $F$ , that is, $\begin{array} { r } { H : = \operatorname* { s u p } _ { { \pmb w } \in \mathbb { R } ^ { d } } \| \nabla ^ { 2 } F ( { \pmb w } ) \| } \end{array}$ , which satisfies $H \leq 1 / 4$ when $\| \pmb { x } _ { i } ^ { m } \| \leq 1$ (Crawshaw et al., 2025).
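The setup above (Algorithm 1 applied to the objective in Equation 3, after the reduction $y _ { i } ^ { m } = 1$ ) admits a short executable sketch; the function and variable names are ours, not from the paper.

```python
import numpy as np

def local_objective_and_grad(w, X):
    """F_m(w) = (1/n) sum_i l(<w, x_i>) with l(z) = log(1 + exp(-z)),
    and its gradient -(1/n) sum_i x_i / (1 + exp(<w, x_i>))."""
    z = X @ w
    loss = np.mean(np.logaddexp(0.0, -z))          # stable log(1 + e^{-z})
    grad = -(X.T @ (1.0 / (1.0 + np.exp(z)))) / X.shape[0]
    return loss, grad

def local_gd(local_X, w0, R, K, eta):
    """Algorithm 1 (Local GD): R rounds of K local GD steps per client,
    followed by server-side averaging. local_X: list of (n, d) arrays,
    one per client, with the labels folded into the features."""
    w = w0.copy()
    for _ in range(R):
        client_iterates = []
        for X in local_X:
            wm = w.copy()
            for _ in range(K):
                _, g = local_objective_and_grad(wm, X)
                wm = wm - eta * g
            client_iterates.append(wm)
        w = np.mean(client_iterates, axis=0)       # w_{r+1} = (1/M) sum_m w_{r,K}^m
    return w
```

On separable data the global objective decreases toward zero as $r$ grows, even with an effective per-round step size $\eta K \gg 1$ , consistent with the phase behavior analyzed in Section 4.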
# 4. Convergence Analysis
We present two convergence results of Local GD for the logistic regression problem stated in Equation 4. Our Theorem 4.1 gives an upper bound on the average objective $\begin{array} { r } { \frac { 1 } { r } \sum _ { s = 0 } ^ { r - 1 } F ( { \pmb w } _ { s } ) } \end{array}$ over the first $r$ communication rounds, which holds for any $r$ . On the other hand, Theorem 4.2 provides a last-iterate upper bound on the objective $F ( w _ { r } )$ for every $r$ after a transition time $\tau$ . Both of these results hold for any learning rate $\eta > 0$ and any number of local steps $K$ . Corollary 4.3 summarizes our results by deriving the error with the best choices of $\eta$ and $K$ for a given communication budget $R$ . We first state and discuss the results in Section 4.1, then give an overview of the proofs in Section 4.2. The complete proofs are deferred to Appendix A.
# 4.1. Statement of Results
Theorems 4.1 and 4.2 provide guarantees in two phases: the initial unstable phase (lasting for $\tau$ rounds), and the latter stable phase. During the unstable phase, we cannot provide a last-iterate guarantee, but we can upper bound the average loss over the trajectory. After the loss becomes sufficiently small, Local GD enters the stable phase, where the loss decreases monotonically at every round. These two phases mimic the observed behavior of Local GD in experiments (see Section 5), and align with the behavior of single-machine GD (Wu et al., 2024a).
Theorem 4.1. For every $r \geq 0$ , Local GD satisfies
$$
\frac{1}{r} \sum_{s=0}^{r-1} F(\pmb{w}_s) \leq 26 \, \frac{\|\pmb{w}_0\|^2 + 1 + \log^2(K + \eta K \gamma^2 r) + \eta^2 K^2}{\eta \gamma^4 r}.
$$
Notice that the RHS of Equation 5 grows at most linearly with $\eta$ and quadratically with $K$ : this aligns with the intuition that large stepsizes and/or long communication intervals can create instability. Indeed, even if $\eta \le 1 / H$ , so that the local objectives are guaranteed to decrease with each local step, the global objective may not decrease monotonically over rounds when $K$ is large, due to a large effective per-round step size $\eta K$ . However, for any fixed $\eta$ and $K$ , Theorem 4.1 shows that the average loss can be made arbitrarily small with large enough $r$ . After at most $\tau$ rounds, $F ( w _ { r } )$ will decrease below a certain threshold, after which the global objective will decrease monotonically with each communication round, leading to the following last-iterate guarantee.
Theorem 4.2. Denote $\begin{array} { r } { \psi = \min \left( \frac { \gamma } { 140 \eta K M } , \frac { 1 } { 2 M n } \right) } \end{array}$ and
$$
\tau = \frac { 4 \gamma \| \pmb { w } _ { 0 } \| + 2 \sqrt { 2 } + 2 \eta + \log \left( 1 + \frac { \sqrt { K } } { \sqrt { \eta } \gamma \psi } \right) } { \eta \gamma ^ { 2 } \psi } .
$$
For every $r \geq \tau$ , Local $G D$ satisfies
$$
F(\pmb{w}_r) \leq \frac{16}{\eta \gamma^2 K (r - \tau)}.
$$
Note that Theorems 4.1 and 4.2 apply for any choice of the stepsize $\eta$ and number of local steps $K$. In contrast with the worst-case analysis, which requires that $\begin{array} { r } { \eta \le \mathcal { O } \left( \frac { 1 } { K } \right) } \end{array}$, ours is the first result showing that Local GD can converge for logistic regression without any restrictions on $\eta$ and $K$. The following corollary shows that, by tuning $\eta$ and $K$, we can achieve an accelerated rate with $R ^ { - 2 }$ dependence on $R$, which improves upon the lower bounds of Local GD for general distributed convex optimization (see Table 1).
Corollary 4.3. Suppose $\begin{array} { r } { R \ge \widetilde \Omega \left( \max \left( \frac { M n } { \gamma ^ { 2 } } , \frac { K M } { \gamma ^ { 3 } } \right) \right) } \end{array}$. With ${ \pmb w } _ { 0 } = { \bf 0 }$, $\eta \geq 1$, and $\begin{array} { r } { \eta K = \widetilde { \Theta } \left( \frac { \gamma ^ { 3 } R } { M } \right) } \end{array}$, Local GD satisfies
$$
F ( \pmb { w } _ { R } ) \leq \widetilde { O } \left( \frac { M } { \gamma ^ { 5 } R ^ { 2 } } \right) .
$$
The condition $\begin{array} { r } { R \ge \widetilde \Omega \left( \max \left( \frac { M n } { \gamma ^ { 2 } } , \frac { M K } { \gamma ^ { 3 } } \right) \right) } \end{array}$ ensures that $R \geq \tau$, so that training will actually enter the stable phase and decrease the objective at the rate $1 / ( \eta \gamma ^ { 2 } K R )$. A similar condition is used in the analysis of GD with large stepsizes for single-machine logistic regression (Wu et al., 2024a).
Also, note that aside from the condition $\eta \geq 1$, the stepsize $\eta$ and the communication interval $K$ always appear together as the product $\eta K$. This means that our guarantee does not distinguish the performance of Local GD as $K$ changes, so long as the stepsize changes to keep $\eta K$ constant. Therefore, it remains open to show whether or not Local GD can actually benefit from the use of local steps for this problem. Indeed, the analysis of GD for single-machine logistic regression (Wu et al., 2024a) immediately implies that for our distributed problem, GD (parallelized over $M$ machines) achieves error $\widetilde { \mathcal { O } } ( 1 / ( \gamma ^ { 4 } R ^ { 2 } ) )$, which improves upon our guarantee for Local GD in terms of $M$ and $1 / \gamma$. We further discuss this comparison in Section 6.
# 4.2. Proof Overview
Throughout the analysis, we will denote $b _ { r , i } ^ { m } = \langle { \pmb w } _ { r } , { \pmb x } _ { i } ^ { m } \rangle$ , so that $\begin{array} { r } { F _ { m } ( { \pmb w } _ { r } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \ell ( b _ { r , i } ^ { m } ) } \end{array}$ . Similarly, we will denote $b _ { r , i , k } ^ { m } = \langle \pmb { w } _ { r , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle$ .
The proofs of Theorems 4.1 and 4.2 adapt existing tools introduced by Wu et al. (2024a) and Crawshaw et al. (2025); our application of these tools to our setting relies on a comparison between the trajectories of GD and Local GD, obtained by decomposing updates into the contribution from each individual data point $\pmb { x } _ { i } ^ { m }$. Specifically, a single GD update starting from ${ \pmb w } _ { r }$ is
$$
- \eta \nabla F ( \pmb { w } _ { r } ) = \frac { \eta } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } | \ell ^ { \prime } ( b _ { r , i } ^ { m } ) | \pmb { x } _ { i } ^ { m } .
$$
Denoting
$$
\beta _ { r , i } ^ { m } = \frac { \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } | \ell ^ { \prime } ( b _ { r , i , k } ^ { m } ) | } { | \ell ^ { \prime } ( b _ { r , i } ^ { m } ) | } ,
$$
a single round update of Local GD from ${ \pmb w } _ { r }$ can be rewritten
$$
\begin{array} { r l } { \displaystyle \pmb { w } _ { r + 1 } - \pmb { w } _ { r } = - \frac { \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) } & { } \\ { \displaystyle = \frac { \eta K } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \beta _ { r , i } ^ { m } | \ell ^ { \prime } ( b _ { r , i } ^ { m } ) | \pmb { x } _ { i } ^ { m } . } \end{array}
$$
Comparing Equation 9 and Equation 12, the updates for GD and Local GD can both be represented as linear combinations of the data $\pmb { x } _ { i } ^ { m }$ , and the two trajectories can be compared by analyzing the coefficients $\beta _ { r , i } ^ { m }$ . By upper and lower bounding $\beta _ { r , i } ^ { m }$ , we can adapt the split comparator and gradient potential techniques of Wu et al. (2024a) (which were introduced for GD) to analyze Local GD during the unstable phase and show a transition to stability.
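This comparison can be checked numerically. The sketch below (an illustrative snippet with random data, not from the paper) runs one round of Local GD, forms the coefficients $\beta_{r,i}^m$, and verifies that the round update matches Equation 12, along with the bounds $1/K \leq \beta_{r,i}^m \leq 1 + \exp(\|\pmb{w}_r\|)$ derived later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, d, K, eta = 3, 4, 5, 8, 0.5

# Random data with labels folded in (y = +1) and ||x_i^m|| <= 1.
X = rng.normal(size=(M, n, d))
X /= np.max(np.linalg.norm(X, axis=2))

def dloss(b):                        # ell'(b) for ell(b) = log(1 + e^{-b})
    return -1.0 / (1.0 + np.exp(b))

def grad_Fm(w, m):                   # gradient of client m's local objective
    return X[m].T @ dloss(X[m] @ w) / n

w = rng.normal(size=d)

# One round of Local GD: K local steps per client, then average.
W_local = np.zeros((M, K, d))        # stores w_{r,k}^m for the coefficients
w_next = np.zeros(d)
for m in range(M):
    wm = w.copy()
    for k in range(K):
        W_local[m, k] = wm
        wm -= eta * grad_Fm(wm, m)
    w_next += wm / M

b0 = np.einsum('mnd,d->mn', X, w)            # b_{r,i}^m
bk = np.einsum('mnd,mkd->mkn', X, W_local)   # b_{r,i,k}^m
beta = np.mean(np.abs(dloss(bk)), axis=1) / np.abs(dloss(b0))

# Equation 12: the round update is a beta-weighted combination of the data.
update = (eta * K / (M * n)) * np.einsum('mn,mnd->d',
                                         beta * np.abs(dloss(b0)), X)
assert np.allclose(w_next - w, update)
assert np.all(beta >= 1.0 / K)                          # lower bound
assert np.all(beta <= 1.0 + np.exp(np.linalg.norm(w)))  # upper bound
```

The lower bound holds with the $k = 0$ term alone, since $\pmb{w}_{r,0}^m = \pmb{w}_r$ makes that ratio exactly one.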
For the stable phase, we leverage the relationship between the derivatives of the objective function, namely that
$$
\| \nabla ^ { 2 } F ( { \pmb w } ) \| \le F ( { \pmb w } ) \quad \mathrm { a n d } \quad \| \nabla F ( { \pmb w } ) \| \le F ( { \pmb w } ) ,
$$
to show that a small objective value $F ( w )$ implies a small local smoothness $\| \nabla ^ { 2 } F ( \pmb { w } ^ { \prime } ) \|$ for $\| \pmb { w } ^ { \prime } - \pmb { w } \| \leq 1$ , and this in turn implies monotonic decrease of the objective. A similar argument was used by Crawshaw et al. (2025), but here we use a refined version that allows for any $\eta > 0$ , whereas the analysis of Crawshaw et al. (2025) requires $\eta \le 1 / H$ .
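The scalar inequalities underlying these relations can be checked directly: for the logistic loss $\ell(b) = \log(1 + e^{-b})$, both $|\ell'(b)| \leq \ell(b)$ and $\ell''(b) \leq \ell(b)$ hold pointwise, and averaging over data points with $\|\pmb{x}_i^m\| \leq 1$ then yields the gradient and Hessian bounds above. A quick numerical sanity check (illustrative only):

```python
import numpy as np

b = np.linspace(-30.0, 30.0, 20001)
loss = np.log1p(np.exp(-b))          # ell(b)
dabs = 1.0 / (1.0 + np.exp(b))       # |ell'(b)| = sigmoid(-b)
curv = dabs * (1.0 - dabs)           # ell''(b) = sigmoid(b) * sigmoid(-b)

# Self-bounding properties: both derivatives are dominated by the loss,
# which (after averaging over ||x|| <= 1) gives ||grad F|| <= F and
# ||hess F|| <= F.
assert np.all(dabs <= loss + 1e-12)
assert np.all(curv <= loss + 1e-12)
```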
Below we state key lemmas to sketch the proofs of each theorem, and full proofs are deferred to Appendix A.
Unstable Phase As previously mentioned, we aim to apply the split comparator technique of Wu et al. (2024a) to analyze Local GD, and we can do so if we upper and lower bound $\beta _ { r , i } ^ { m }$. Our lower bound is surprisingly simple:
$$
\beta _ { r , i } ^ { m } = \frac { \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } | \ell ^ { \prime } ( b _ { r , i , k } ^ { m } ) | } { | \ell ^ { \prime } ( b _ { r , i } ^ { m } ) | } \geq \frac { 1 } { K } ,
$$
where the inequality simply ignores all terms of the sum in the numerator except the one corresponding to $k = 0$. While this may appear very loose, it is not hard to show that this bound is tight up to logarithmic factors for certain values of ${ \pmb w } _ { r }$ (see Lemma B.7).
We upper bound $\beta _ { r , i } ^ { m }$ as
$$
\begin{array} { r l } & { \beta _ { r , i } ^ { m } = \displaystyle \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 + \exp ( b _ { r , i } ^ { m } ) } { 1 + \exp ( b _ { r , i , k } ^ { m } ) } } \\ & { \qquad \le 1 + \exp ( b _ { r , i } ^ { m } ) = 1 + \exp ( \langle { \pmb w } _ { r } , { \pmb x } _ { i } ^ { m } \rangle ) } \\ & { \qquad \le 1 + \exp ( \| { \pmb w } _ { r } \| ) , } \end{array}
$$
where the last line uses $\| \pmb { x } _ { i } ^ { m } \| \le 1$. To bound $\Vert \pmb { w } _ { r } \Vert$, we apply the split comparator technique of Wu et al. (2024a) to analyze the local trajectories of each round $\{ { \pmb w } _ { s , k } ^ { m } \} _ { k }$, then use this to establish a recursive bound on $\| \pmb { w } _ { s } - \pmb { u } \|$ over rounds, where $\pmb { u } = \pmb { u } _ { 1 } + \pmb { u } _ { 2 }$ is an as-yet-unspecified comparator. The analysis within each round implies that
$$
\begin{array} { r l r } { { \frac { \| { \pmb w } _ { s , K } ^ { m } - { \pmb u } \| ^ { 2 } } { 2 \eta K } + \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } F _ { m } ( { \pmb w } _ { s , k } ^ { m } ) \leq } } \\ & { } & { \frac { \| { \pmb w } _ { s } - { \pmb u } \| ^ { 2 } } { 2 \eta K } + F _ { m } ( \pmb u _ { 1 } ) , } \end{array}
$$
and in particular that
$$
\begin{array} { r } { \| \pmb { w } _ { s , K } ^ { m } - \pmb { u } \| \leq \| \pmb { w } _ { s } - \pmb { u } \| + \sqrt { 2 \eta K F _ { m } ( \pmb { u } _ { 1 } ) } . } \end{array}
$$
Averaging over $m \in [ M ]$ and recursing over $s \in \{ 0 , \ldots , r - 1 \}$ implies that
$$
\begin{array} { r } { \| \pmb { w } _ { r } - \pmb { u } \| \leq \| \pmb { w } _ { 0 } - \pmb { u } \| + r \sqrt { 2 \eta K F ( \pmb { u } _ { 1 } ) } , } \end{array}
$$
so
$$
\| \pmb { w } _ { r } \| \leq \| \pmb { w } _ { 0 } \| + 2 \| \pmb { u } \| + r \sqrt { 2 \eta K F ( \pmb { u } _ { 1 } ) } .
$$
By choosing $\pmb { u }$ to balance the last two terms on the RHS, we arrive at the following bound.
Lemma 4.4. For every $r \geq 0$ ,
$$
\| \pmb { w } _ { r } \| \leq \| \pmb { w } _ { 0 } \| + \frac { \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \gamma } .
$$
We can now plug this into Equation 17 to upper bound $\beta _ { r , i } ^ { m }$. Although the bound for $\beta _ { r , i } ^ { m }$ is exponential in $\Vert \pmb { w } _ { r } \Vert$, Lemma 4.4 shows that $\Vert \pmb { w } _ { r } \Vert$ is only logarithmic in $r$, so the resulting upper bound of $\beta _ { r , i } ^ { m }$ is only polynomial in $r$.
With upper and lower bounds of $\beta _ { r , i } ^ { m }$ , the split comparator technique can be used to analyze Local GD similarly as for GD. The full proof can be found in Appendix A.1.
Stable Phase Our error bound for the stable phase uses the following modified descent inequality:
Lemma 4.5. For ${ \pmb w } , { \pmb w } ^ { \prime } \in \mathbb { R } ^ { d }$, if $\| \pmb { w } ^ { \prime } - \pmb { w } \| \leq 1$, then for every $m \in [ M ]$,
$$
F_m(\pmb{w}') \leq F_m(\pmb{w}) + \langle \nabla F_m(\pmb{w}), \pmb{w}' - \pmb{w} \rangle + 4 F_m(\pmb{w}) \| \pmb{w}' - \pmb{w} \|^2 .
$$
The above descent inequality is proven by using the facts that $\lVert \nabla ^ { 2 } F _ { m } ( \pmb { w } ) \rVert \leq F _ { m } ( \pmb { w } )$ (Lemma B.1), and $\| \pmb { w } ^ { \prime } - \pmb { w } \| \leq \mathcal { O } ( 1 )$ implies that $\| \nabla ^ { 2 } F _ { m } ( \pmb { w } ^ { \prime } ) \| \leq \mathcal { O } ( \| \nabla ^ { 2 } F _ { m } ( \pmb { w } ) \| )$ (Lemma B.3). This descent inequality captures a desirable property of the logistic loss: the local smoothness constant decreases with the objective value, so that large stepsizes can yield monotonic objective decrease as long as the objective is below some threshold.
To use this lemma to bound the error of Local GD, we need to do three things: (1) show that $\lVert \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \rVert \leq 1$ when $F ( w _ { r } )$ is below some threshold; (2) show that the bias in the update direction ${ \pmb w } _ { r + 1 } - { \pmb w } _ { r }$ compared to $- \eta K \nabla F ( { \pmb w } _ { r } )$ is negligible when $F ( w _ { r } )$ is below some threshold; (3) show that $F ( w _ { r } )$ becomes smaller than our desired threshold within $\tau$ rounds.
First, to show that $\lVert \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \rVert \leq 1$ based on the magnitude of $F ( w _ { r } )$ , notice
$$
\begin{array} { r l r } & { } & { \| \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \| = \eta \left\| \frac { 1 } { M } \displaystyle \sum _ { m = 1 } ^ { M } \displaystyle \sum _ { k = 0 } ^ { K - 1 } \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \right\| } \\ & { } & { \qquad \le \displaystyle \frac { \eta } { M } \displaystyle \sum _ { m = 1 } ^ { M } \displaystyle \sum _ { k = 0 } ^ { K - 1 } \| \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \| . } \end{array}
$$
We know $\| \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \| \leq F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$ (Lemma B.1), and if we knew that local updates monotonically decrease the local loss, we further have $F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \leq F _ { m } ( \pmb { w } _ { r } )$ . Combined with Equation 25, this would yield
$$
\| \pmb{w}_{r+1} - \pmb{w}_r \| \leq \eta K F(\pmb{w}_r).
$$
In fact, we can use Lemma 4.5 to show that local updates monotonically decrease the local objective, that is, $F _ { m } ( \pmb { w } _ { r , k + 1 } ^ { m } ) \leq F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$, whenever $F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \leq 1 / ( 4 \eta )$. This shows that local objectives monotonically decrease across local steps (Lemma 4.6), and this in turn implies that $\| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| \leq 1$ (Lemma 4.7).
Lemma 4.6. If $F ( \pmb { w } _ { r } ) \leq 1 / ( 4 \eta M )$ for some $r \geq 0$ , then $F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$ is decreasing in $k$ for every $m \in [ M ]$ .
Lemma 4.7. If $F ( { \pmb w } _ { r } ) \le 1 / ( \eta K M ) .$ for some $r \geq 0$ , then $\| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| \leq 1$ for every $m \in [ M ] , k \in [ K ]$ .
By choosing $k = K$ and averaging over $m \in [ M ]$ , Lemma 4.7 implies that $\lVert \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \rVert \leq 1$ .
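Lemma 4.6 can be illustrated numerically: starting from an iterate whose loss is already below $1/(4\eta)$ (the condition with $M = 1$), local gradient steps decrease the loss monotonically even for a stepsize far beyond the conventional $1/H$ threshold. The construction below is a sketch with our own illustrative synthetic data and parameter values, not the paper's experiment.

```python
import numpy as np

def F(w, X):                                  # objective with labels folded in
    return np.mean(np.log1p(np.exp(-X @ w)))

def gradF(w, X):
    return -(X.T @ (1.0 / (1.0 + np.exp(X @ w)))) / X.shape[0]

rng = np.random.default_rng(1)
n = 20
X = np.hstack([rng.uniform(0.2, 1.0, size=(n, 1)),    # positive-margin coordinate
               rng.uniform(-0.5, 0.5, size=(n, 1))])
X /= np.max(np.linalg.norm(X, axis=1))                # enforce ||x_i|| <= 1

eta = 64.0                                            # far beyond 1/H = 4
w = 50.0 * np.array([1.0, 0.0])                       # loss is already small here
assert F(w, X) <= 1.0 / (4.0 * eta)                   # Lemma 4.6 condition (M = 1)

losses = [F(w, X)]
for _ in range(32):                                   # K = 32 local steps
    w = w - eta * gradF(w, X)
    losses.append(F(w, X))
assert all(a >= b for a, b in zip(losses, losses[1:]))  # monotone decrease
```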
Next, to handle the bias of the update direction, we rewrite the update as
$$
\pmb { w } _ { r + 1 } - \pmb { w } _ { r } = - \eta K ( \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } ) ,
$$
where
$$
b _ { r } = \frac { 1 } { M K } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } ( \nabla F _ { m } ( { \pmb w } _ { r , k } ^ { m } ) - \nabla F _ { m } ( { \pmb w } _ { r } ) ) .
$$
We can bound the magnitude of the bias as follows:
$$
\| \boldsymbol { b } _ { r } \| \leq \frac { 1 } { M K } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \| \nabla F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) - \nabla F _ { m } ( \boldsymbol { w } _ { r } ) \| ,
$$
and denoting $C = \{ ( 1 - t ) \pmb { w } _ { r } + t \pmb { w } _ { r , k } ^ { m } \ | \ t \in [ 0 , 1 ] \} ,$
$$
\begin{array} { r l } & { \| \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) - \nabla F _ { m } ( \pmb { w } _ { r } ) \| } \\ & { \quad \leq \left( \underset { { \pmb { w } } \in { \pmb { C } } } { \operatorname* { m a x } } \| \nabla ^ { 2 } F _ { m } ( \pmb { w } ) \| \right) \| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| } \\ & { \quad \leq \left( \underset { { \pmb { w } } \in { \pmb { C } } } { \operatorname* { m a x } } F _ { m } ( \pmb { w } ) \right) \| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| } \\ & { \quad \leq \operatorname* { m a x } \left( F _ { m } ( \pmb { w } _ { r } ) , F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \right) \| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| , } \end{array}
$$
where the last two inequalities use $\lVert \nabla ^ { 2 } F _ { m } ( \pmb { w } ) \rVert \leq F _ { m } ( \pmb { w } )$ (Lemma B.1) and convexity of $F _ { m }$ , respectively. Using Lemmas 4.6 and 4.7, we can already bound the two terms of Equation 33 when $F _ { m } ( \pmb { w } _ { r } )$ is small, which gives the following.
Lemma 4.8. If $F ( { \pmb w } _ { r } ) \leq \gamma / ( 70 \eta K M )$, then $\| \pmb { b } _ { r } \| \leq \frac { 1 } { 5 } \| \nabla F ( { \pmb w } _ { r } ) \|$.
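The exact decomposition of the round update into $-\eta K(\nabla F(\pmb{w}_r) + \pmb{b}_r)$ can itself be verified numerically; the snippet below is an illustrative sketch with random data, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
M, n, d, K, eta = 4, 10, 6, 5, 0.3

X = rng.normal(size=(M, n, d))
X /= np.max(np.linalg.norm(X, axis=2))      # labels folded in, ||x|| <= 1

def grad_Fm(w, m):                          # gradient of client m's objective
    return -(X[m].T @ (1.0 / (1.0 + np.exp(X[m] @ w)))) / n

def gradF(w):
    return np.mean([grad_Fm(w, m) for m in range(M)], axis=0)

w = rng.normal(size=d)

# One round of Local GD, accumulating the bias term b_r along the way.
w_next = np.zeros(d)
bias = np.zeros(d)
for m in range(M):
    wm = w.copy()
    for k in range(K):
        bias += (grad_Fm(wm, m) - grad_Fm(w, m)) / (M * K)
        wm -= eta * grad_Fm(wm, m)
    w_next += wm / M

# w_{r+1} - w_r = -eta * K * (grad F(w_r) + b_r) holds exactly.
assert np.allclose(w_next - w, -eta * K * (gradF(w) + bias))
```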
Third, we must show that $F ( { \pmb w } _ { r } )$ will be sufficiently small for some $r \leq \tau$ in order to satisfy the conditions of Lemmas 4.6, 4.7, and 4.8. To do this, we adapt the gradient potential argument of Wu et al. (2024a), as previously mentioned, by lower bounding $\beta _ { r , i } ^ { m }$. We use the same bound as in the proof of Theorem 4.1: $\beta _ { r , i } ^ { m } \ge 1 / K$. This allows us to relate the gradient potential of Local GD to that of GD, and combining this with Lemma 4.4 shows that $F ( { \pmb w } _ { r } )$ is sufficiently small to enable stable descent after $\tau$ rounds.
Lemma 4.9. There exists some $r \leq \tau$ such that $F ( { \pmb w } _ { r } ) \leq \frac { \gamma } { 70 \eta K M }$.
Finally, to prove Theorem 4.2, we can apply Lemma 4.5 for all $r \geq \tau$ . Applying Lemma 4.8 to control the bias of the update direction, we obtain
$$
F ( \pmb { w } _ { r + 1 } ) - F ( \pmb { w } _ { r } ) \leq - \frac { 1 } { 4 } \eta K \| \nabla F ( \pmb { w } _ { r } ) \| ^ { 2 } .
$$
Using $\begin{array} { r } { \| \nabla F ( { \pmb w } _ { r } ) \| \ge \frac { \gamma } { 2 } F ( { \pmb w } _ { r } ) } \end{array}$ (Lemma B.1), this leads to a recursion over $F ( { \pmb w } _ { r } )$ , and unrolling back to round $\tau$ gives exactly Equation 7 from Theorem 4.2. The full proof is given in Appendix A.2.
Corollary 4.3, which gives our result stated in Table 1, is proved in Appendix A.3.
# 4.3. Comparison to Single-Machine Case
When $K = 1$ or $M = 1$ , the Local GD algorithm reduces to GD. However, our convergence rate of $M / ( \gamma ^ { 5 } R ^ { 2 } )$ does not exactly recover the $1 / ( \gamma ^ { 4 } R ^ { 2 } )$ rate of Wu et al. (2024a) in terms of the dataset’s margin $\gamma$ . Here we provide some technical details on the origin of this issue and whether it can be removed.
The issue with our $\gamma$ dependence stems from bounding the bias term $\left\| \pmb { b } _ { r } \right\|$ in Lemma 4.8. Here, ${ \pmb b } _ { r }$ is the difference between the update direction for a round and the global gradient at the beginning of that round. Notice that the other conditions for entering the stable phase (Lemma 4.6, Lemma 4.7) only require $F ( { \pmb w } _ { r } ) \leq O ( 1 / ( \eta K M ) )$, whereas Lemma 4.8 requires $F ( { \pmb w } _ { r } ) \leq O ( \gamma / ( \eta K M ) )$. This additional factor of $\gamma$ needed to bound $\left\| \pmb { b } _ { r } \right\|$ creates the worse dependence on $\gamma$ compared with the single-machine case. Note that the gradient bias results from taking multiple local steps before averaging, so it does not appear when $K = 1$ or $M = 1$.
Technically, the requirement $F ( { \pmb w } _ { r } ) \leq O ( \gamma / ( \eta K M ) )$ might be weakened, but only with a more fine-grained analysis of the Local GD trajectory. First, note that the requirement on $F ( { \pmb w } _ { r } )$ is used in Equation 114 of Lemma A.5, for the inequality marked $( i v )$. The need for the factor of $\gamma$ arises from the next inequality (marked $( v )$), where we apply $F ( { \pmb w } ) \leq 2 \| \nabla F ( { \pmb w } ) \| / \gamma$ (Lemma B.2). The additional factor of $\gamma$ is needed to cancel out the $1 / \gamma$ from Lemma B.2. Now, if we had a stronger bound in Lemma B.2, say $F ( { \pmb w } ) \leq \| \nabla F ( { \pmb w } ) \|$, then we could remove the extra $\gamma$ factor. The bound $F ( { \pmb w } ) \leq \| \nabla F ( { \pmb w } ) \|$ does not hold for all $\pmb { w }$, but it does hold for some $\pmb { w }$, namely when $\pmb { w } = t \pmb { w } _ { \ast }$ for a large scalar $t$. So we could possibly improve the $\gamma$ dependence if we knew that Local GD converges to the max-margin solution; however, this kind of implicit bias of Local GD with large $\eta$ or $K$ is not known. Even in the single-machine case, the implicit bias of GD for logistic regression is unknown when the step size scales linearly with the number of iterations (Wu et al., 2024a). We consider this implicit bias analysis an important direction for future research.
# 5. Experiments
We further investigate the behavior of Local GD for logistic regression through experiments, in order to answer the following questions: Q1: Can Local GD converge faster by choosing $\eta$ and $K$ large enough to create non-monotonic objective decrease? Q2: Do local steps yield faster convergence if we tune $\eta$ after choosing $K$? Q3: Do local steps yield faster convergence if we keep $\eta K$ constant? We investigate Q1 to empirically verify our theoretical findings, whereas Q2 and Q3 are meant to probe the limitations of our theory: our guarantee (Corollary 4.3) does not show any benefit of local steps, and we ask whether such a benefit occurs in practice. We further discuss this limitation of our theory in Section 6. Lastly, we provide an additional experiment with synthetic data in Appendix D.2 to evaluate how optimization behavior is affected by heterogeneity among the margins of each client's local dataset.
Setup We evaluate Local GD for a synthetic dataset used by Crawshaw et al. (2025) and for a subset of the MNIST dataset with binarized labels, following (Wu et al., 2024a) and (Crawshaw et al., 2025). The synthetic dataset is a simple testbed with $M = 2$ clients and $n = 1$ data point per client. For MNIST, we use a common protocol (Karimireddy et al., 2020; Crawshaw et al., 2025) to partition 1000 MNIST images among $M = 5$ clients with $n = 200$ images each, in a way that induces heterogeneous feature distributions among clients. Note that $H \leq 1 / 4$ for these datasets. See Appendix C for complete details of each dataset. Additionally, we provide results with the CIFAR-10 dataset in Appendix D.1.
We run Local GD with a wide range of values for the parameters: $\eta \in \{ 2 ^ { - 2 } , 2 ^ { 0 } , 2 ^ { 2 } , 2 ^ { 4 } , 2 ^ { 6 } , 2 ^ { 8 } , 2 ^ { 1 0 } \}$ and $K \in \{ 2 ^ { 0 } , 2 ^ { 2 } , 2 ^ { 4 } , 2 ^ { 6 } \}$. Note that the traditional choice of $\eta = 1 / H = 2 ^ { 2 }$ is in the middle of the search range for $\eta$, so a large number of these experiments fall outside of the scope of conventional theory. All experiments have a communication budget of $R = 2048$ rounds.
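The experimental protocol can be sketched as follows. This is an illustrative stand-in with our own toy synthetic data and a scaled-down parameter grid, not the paper's actual experiment code.

```python
import numpy as np

def local_steps(Xm, w, eta, K):
    # K gradient steps on one client's local logistic objective.
    for _ in range(K):
        w = w + eta * Xm.T @ (1.0 / (1.0 + np.exp(Xm @ w))) / len(Xm)
    return w

def local_gd(X, eta, K, R):
    # Local GD: each round, every client takes K local steps, then average.
    M, n, d = X.shape
    w = np.zeros(d)
    losses = []
    for _ in range(R):
        losses.append(np.mean(np.log1p(np.exp(-X @ w))))
        w = np.mean([local_steps(X[m], w, eta, K) for m in range(M)], axis=0)
    return np.array(losses)

rng = np.random.default_rng(3)
M, n, d = 5, 20, 10
X = rng.uniform(-1.0, 1.0, size=(M, n, d))
X[:, :, 0] = rng.uniform(0.2, 1.0, size=(M, n))   # shared positive-margin direction
X /= np.max(np.linalg.norm(X, axis=2))            # enforce ||x|| <= 1

curves = {}
for eta in [0.25, 1.0]:
    for K in [1, 8]:
        curves[(eta, K)] = local_gd(X, eta, K, R=64)
        print(f"eta={eta}, K={K}: final loss {curves[(eta, K)][-1]:.4f}")
```

Scaling this loop up to the paper's grid and datasets, and plotting the objective gap per communication round, yields trajectories in the style discussed in the Results paragraphs.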
Results Our investigations of Q1, Q2, and Q3 are shown in Figures 1, 2, and 3. Note that the results for $\eta = 2 ^ { 1 0 }$ are not shown because all such trajectories diverged.
The loss curves in Figure 1 show that the final error reached by Local GD always decreases when either $\eta$ or $K$ is increased while the other is held fixed, even when such changes create instability. This answers Q1 affirmatively and is consistent with our theory. Unsurprisingly, increases to $\eta$ create higher loss spikes and require more communication rounds to reach stability, which also aligns with our theory. More surprising is that increases to $K$ actually preserve or decrease the rounds required to reach stability while also leading to a smaller final loss! This is consistent across both datasets and is stronger than predicted by our theory, since the transition time $\tau$ in Theorem 4.2 is proportional to $\eta K$.
Figure 2 shows that a larger communication interval $K$ can accelerate convergence when $\eta$ is tuned to $K$, which answers Q2 positively. For the synthetic data, larger choices of $K$ do not increase the time to reach stability, but they lead to a smaller final error. In the MNIST case, we see another stabilizing effect of $K$: larger choices of $K$ permit larger choices of $\eta$! Indeed, setting $\eta = 256$ when $K = 1$ or $K = 4$ caused divergence, whereas this choice led to fast (albeit unstable) convergence when $K = 16$ or $K = 64$.
Lastly, since our Theorem 4.2 does not distinguish the error of Local GD when $\eta K$ is constant, Figure 3 evaluates different parameter choices which share a common value of $\eta K$. For both datasets, the final error reached by Local GD is nearly identical for all parameter choices, which leans toward a negative answer for Q3. However, we can see that the number of rounds required to reach the stable phase tends to decrease as $K$ increases, which still suggests that there may be room for improvement in our bound of the transition time in Theorem 4.2.

Figure 1: Objective gap when varying one of $\eta , K$ and keeping the other fixed. In general, Local GD converges faster when $\eta$ and $K$ are larger, despite the initial instability in early rounds.

Figure 2: Objective gap for different values of $K$ with tuned $\eta$. Left: Synthetic data. Right: MNIST data.
Together, our experimental results confirm that instability is an important ingredient for the fast convergence of Local GD for logistic regression. Further, they suggest that Local GD with $K > 1$ may be able to outperform GD under the same communication budget, which is even stronger than our current guarantees. We discuss this possibility as a direction of future research in Section 6.
# 6. Discussion
We have presented the first results showing that Local GD for logistic regression can converge with any step size $\eta > 0$ and any communication interval $K$, and our convergence rate improves upon that guaranteed by the worst-case analysis, which is known to be tight (Koloskova et al., 2020; Woodworth et al., 2020b; Patel et al., 2024). Below we discuss our problem-specific approach and the limitations of our results, and suggest directions for follow-up work.
Figure 3: Objective gap for different values of $\eta , K$ with constant $\eta K$ . Left: Synthetic data. Right: MNIST data.
Choice of Problem Class The conventional optimization analysis of distributed learning focuses on providing guarantees of efficiency in the worst-case over large classes of optimization problems. The question is, which class of problems should we analyze? Certain classes of problems lend themselves well to theoretical analysis, such as those satisfying a heterogeneity condition like uniformly bounded gradient dissimilarity (Woodworth et al., 2020b), or bounded gradient dissimilarity at the optimum (Koloskova et al., 2020); however, such conditions have come into question, since they lead to worst-case complexities that do not explain algorithm behavior for practical problems (Wang et al., 2022; Patel et al., 2023; 2024). These works have attempted to find the “right” heterogeneity condition, but so far (to the best of our knowledge), no such condition has explained the significant advantage enjoyed by Local SGD over Minibatch SGD in practice. In this work, by focusing on a specific problem, we investigate the possibility that algorithm performance can be explained according to the specific problem structure rather than general heterogeneity conditions, as discussed by Patel et al. (2024) and Crawshaw et al. (2025). Even though this approach is less general than the conventional style, we believe that a narrow analysis which accurately describes practice has a different kind of value than a general analysis which does not, and is an important direction for the community to pursue.
Usefulness of Local Steps The main limitation of our results is that our error bound for Local GD is strictly worse than that of GD for $R$ steps (Wu et al., 2024a) in terms of $M$ and $1 / \gamma$ (see Table 1). If we are to accept these results, one should set $K = 1$ and parallelize GD over $M$ machines rather than use Local GD with $K > 1$, but it remains open whether our analysis for Local GD can be improved to match (or even dominate) GD. Based on our experiments, we conjecture that Local GD with $K > 1$ can converge faster than GD, and this suggests two open problems: (1) provide a lower bound for GD for logistic regression, and (2) determine whether Local GD with $K > 1$ can converge with error smaller than $R ^ { - \alpha }$ for some $\alpha > 2$. Our current results are insufficient to show any advantage to setting $K > 1$, not only because of the unfavorable comparison with GD, but also because $\eta$ and $K$ appear in our Theorem 4.2 only through the product $\eta K$ (excluding non-dominating terms of the transition time $\tau$). This means that any error guaranteed by choosing stepsize $\eta$ and communication interval $K$ can also be guaranteed with stepsize $\eta K$ and communication interval 1, so that an interval larger than 1 does not produce any advantage. The challenge of proving an advantage from local steps is fundamental in distributed optimization (Woodworth et al., 2020b; Glasgow et al., 2022; Patel et al., 2024), and we hope to address this in future work.
Future Extensions There are several natural extensions of our work, given the narrow focus of the problem setting. Since SGD for logistic regression was analyzed by Wu et al. (2024a) using similar techniques as we have leveraged in this work, one direction is to extend our analysis to Local SGD. These same techniques were applied by Cai et al. (2024) to analyze GD for training two-layer neural networks with approximately homogeneous activations, so another direction is to analyze the distributed training of two-layer networks with Local GD. Lastly, one could attempt to generalize our analysis to a larger class of problems, by formulating some general problem class for which Local GD outperforms the existing worst-case lower bounds. We leave these directions for future work.
# Acknowledgements
Thank you to the anonymous reviewers for the valuable feedback. Michael Crawshaw is supported by the Institute for Digital Innovation fellowship. Mingrui Liu is supported by ORIEI seed funding, an IDIA P3 fellowship from George Mason University, and NSF awards #2436217 and #2425687.
# Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
# References
Arjevani, Y. and Shamir, O. Communication complexity of distributed convex learning and optimization. Advances in neural information processing systems, 28, 2015.
Balcan, M. F., Blum, A., Fine, S., and Mansour, Y. Distributed learning, communication complexity and privacy. In Conference on Learning Theory, pp. 26–1. JMLR Workshop and Conference Proceedings, 2012.
Cai, Y., Wu, J., Mei, S., Lindsey, M., and Bartlett, P. L. Large stepsize gradient descent for non-homogeneous two-layer networks: Margin improvement and fast optimization. arXiv preprint arXiv:2406.08654, 2024.
Cohen, J., Kaur, S., Li, Y., Kolter, J. Z., and Talwalkar, A. Gradient descent on neural networks typically occurs at the edge of stability. In International Conference on Learning Representations, 2021.
Crawshaw, M., Woodworth, B., and Liu, M. Local steps speed up local gd for heterogeneous distributed logistic regression. International Conference on Learning Representations, 2025.
Damian, A., Nichani, E., and Lee, J. D. Self-stabilization: The implicit bias of gradient descent at the edge of stability. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=nhKHA59gXz.
Dekel, O., Gilad-Bachrach, R., Shamir, O., and Xiao, L. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(1), 2012.
Glasgow, M. R., Yuan, H., and Ma, T. Sharp bounds for federated averaging (local sgd) and continuous perspective. In International Conference on Artificial Intelligence and Statistics, pp. 9050–9090. PMLR, 2022.
Gunasekar, S., Lee, J., Soudry, D., and Srebro, N. Characterizing implicit bias in terms of optimization geometry. In International Conference on Machine Learning, pp. 1832–1841. PMLR, 2018.
Haddadpour, F. and Mahdavi, M. On the convergence of local descent methods in federated learning. arXiv preprint arXiv:1910.14425, 2019.
Jastrzebski, S., Szymczak, M., Fort, S., Arpit, D., Tabor, J., Cho, K., and Geras, K. The break-even point on optimization trajectories of deep neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1g87C4KwB.
Ji, Z. and Telgarsky, M. The implicit bias of gradient descent on nonseparable data. In Beygelzimer, A. and Hsu, D. (eds.), Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pp. 1772–1798. PMLR, 25–28 Jun 2019. URL https://proceedings.mlr.press/v99/ji19a.html.
Ji, Z., Srebro, N., and Telgarsky, M. Fast margin maximization via dual acceleration. In International Conference on Machine Learning, pp. 4860–4869. PMLR, 2021.
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. Advances and open problems in federated learning. Foundations and trends® in machine learning, 14(1–2):1–210, 2021.
Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., and Suresh, A. T. Scaffold: Stochastic controlled averaging for federated learning. In International conference on machine learning, pp. 5132–5143. PMLR, 2020.
Khaled, A., Mishchenko, K., and Richtárik, P. Tighter theory for local sgd on identical and heterogeneous data. In International Conference on Artificial Intelligence and Statistics, pp. 4519–4529. PMLR, 2020.
Koloskova, A., Loizou, N., Boreiri, S., Jaggi, M., and Stich, S. A unified theory of decentralized sgd with changing topology and local updates. In International Conference on Machine Learning, pp. 5381–5393. PMLR, 2020.
Mcdonald, R., Mohri, M., Silberman, N., Walker, D., and Mann, G. Efficient large-scale distributed training of conditional maximum entropy models. Advances in neural information processing systems, 22, 2009.
McDonald, R., Hall, K., and Mann, G. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 456–464. Association for Computational Linguistics, 2010.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and Arcas, B. A. y. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Singh, A. and Zhu, J. (eds.), Proceedings of the 20th International Conference on Artificial Intelligence and
Statistics, volume 54 of Proceedings of Machine Learning Research, pp. 1273–1282. PMLR, 20–22 Apr 2017. URL https://proceedings.mlr.press/v54/mcmahan17a.html.
Nacson, M. S., Srebro, N., and Soudry, D. Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3051–3059. PMLR, 2019.
Patel, K. K., Glasgow, M., Wang, L., Joshi, N., and Srebro, N. On the still unreasonable effectiveness of federated averaging for heterogeneous distributed learning. In Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities, 2023. URL https://openreview.net/forum?id=vhS68bKv7x.
Patel, K. K., Glasgow, M., Zindari, A., Wang, L., Stich, S. U., Cheng, Z., Joshi, N., and Srebro, N. The limits and potentials of local sgd for distributed heterogeneous learning with intermittent communication. In Agrawal, S. and Roth, A. (eds.), Proceedings of Thirty Seventh Conference on Learning Theory, volume 247 of Proceedings of Machine Learning Research, pp. 4115–4157. PMLR, 30 Jun–03 Jul 2024. URL https://proceedings.mlr.press/v247/patel24a.html.
Reddi, S. J., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečný, J., Kumar, S., and McMahan, H. B. Adaptive federated optimization. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=LkFG3lB13U5.
Shamir, O. and Srebro, N. Distributed stochastic optimization and learning. In 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 850–857. IEEE, 2014.
Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N. The implicit bias of gradient descent on separable data. Journal of Machine Learning Research, 19(70):1–57, 2018. URL http://jmlr.org/papers/v19/18-188.html.
Stich, S. U. Local sgd converges fast and communicates little. In International Conference on Learning Representations, 2019.
Tang, Z., Shi, S., Wang, W., Li, B., and Chu, X. Communication-efficient distributed deep learning: A comprehensive survey. arXiv preprint arXiv:2003.06307, 2020.
Verbraeken, J., Wolting, M., Katzy, J., Kloppenburg, J., Verbelen, T., and Rellermeyer, J. S. A survey on distributed machine learning. ACM Computing Surveys (CSUR), 53(2): 1–33, 2020.
Wang, J., Charles, Z., Xu, Z., Joshi, G., McMahan, H. B., Al-Shedivat, M., Andrew, G., Avestimehr, S., Daly, K., Data, D., et al. A field guide to federated optimization. arXiv preprint arXiv:2107.06917, 2021.
Wang, J., Das, R., Joshi, G., Kale, S., Xu, Z., and Zhang, T. On the unreasonable effectiveness of federated averaging with heterogeneous data. arXiv preprint arXiv:2206.04723, 2022.
Woodworth, B., Patel, K. K., Stich, S., Dai, Z., Bullins, B., Mcmahan, B., Shamir, O., and Srebro, N. Is local sgd better than minibatch sgd? In International Conference on Machine Learning, pp. 10334–10343. PMLR, 2020a.
Woodworth, B. E., Patel, K. K., and Srebro, N. Minibatch vs local sgd for heterogeneous distributed learning. Advances in Neural Information Processing Systems, 33: 6281–6292, 2020b.
Wu, J., Bartlett, P. L., Telgarsky, M., and Yu, B. Large stepsize gradient descent for logistic loss: Non-monotonicity of the loss improves optimization efficiency. In Agrawal, S. and Roth, A. (eds.), Proceedings of Thirty Seventh Conference on Learning Theory, volume 247 of Proceedings of Machine Learning Research, pp. 5019–5073. PMLR, 30 Jun–03 Jul 2024a. URL https://proceedings.mlr.press/v247/wu24b.html.
Wu, J., Braverman, V., and Lee, J. D. Implicit bias of gradient descent for logistic regression at the edge of stability. Advances in Neural Information Processing Systems, 36, 2024b.
Xu, Z., Zhang, Y., Andrew, G., Choquette, C., Kairouz, P., Mcmahan, B., Rosenstock, J., and Zhang, Y. Federated learning of gboard language models with differential privacy. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track), pp. 629–639, 2023.
Zhang, Y., Duchi, J., Jordan, M. I., and Wainwright, M. J. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. Advances in Neural Information Processing Systems, 26, 2013.
Zinkevich, M., Weimer, M., Li, L., and Smola, A. J. Parallelized stochastic gradient descent. In Advances in neural information processing systems, pp. 2595–2603, 2010.
# A. Proofs of Main Results
# A.1. Proof of Theorem 4.1
Lemma A.1 (Restatement of Lemma 4.4). For every $r \geq 0$ ,
$$
\| \pmb { w } _ { r } \| \leq \| \pmb { w } _ { 0 } \| + \frac { \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \gamma } .
$$
Proof. Recall that Wu et al. (2024a) introduced a large stepsize analysis of GD for logistic regression, which provides an upper bound on the norm of the parameter at each step. We wish to achieve a similar bound for the norm of the parameter found by Local GD. To accomplish this, we treat the local training of each client during each round as GD on a logistic regression problem, and we apply the "split comparator" technique of Wu et al. (2024a). This leads to a recursive upper bound on the norm of the global parameter $\Vert \pmb { w } _ { r } \Vert$, and unrolling yields the desired bound. We demonstrate this argument below.
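For concreteness, the procedure being analyzed can be sketched as follows: each round, every client runs $K$ local GD steps on its own logistic loss starting from the global iterate, and the server averages the resulting local models. The two-client toy dataset, stepsize, and round count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def local_gd(clients, w0, eta, K, R):
    """Local GD for logistic regression with loss log(1 + exp(-<w, x>)).

    clients: list of (n_m, d) arrays, one per machine (labels absorbed into x).
    Each round, every client takes K local GD steps from the global iterate,
    then the server averages the clients' K-th local iterates.
    """
    w = w0.copy()
    history = [w.copy()]
    for _ in range(R):
        local_models = []
        for X in clients:
            v = w.copy()
            for _ in range(K):
                # gradient of F_m(v) = mean_i log(1 + exp(-<v, x_i>))
                g = -(X / (1.0 + np.exp(X @ v))[:, None]).mean(axis=0)
                v -= eta * g
            local_models.append(v)
        w = np.mean(local_models, axis=0)  # communication / averaging step
        history.append(w.copy())
    return w, history

# Toy separable data on two machines: <w*, x> >= gamma for w* = (1, 0),
# with ||x|| <= 1 for every point.
clients = [np.array([[0.6, 0.2], [0.5, 0.4]]), np.array([[0.7, -0.1], [0.4, 0.3]])]
w_final, hist = local_gd(clients, w0=np.zeros(2), eta=1.0, K=4, R=50)
```

Each inner loop is exactly the single-machine GD run that the proof treats with the split comparator technique; the outer average produces $\pmb { w } _ { s + 1 }$.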
Let $0 \leq s < r$ and $m \in [ M ]$ . Define $\pmb { u } _ { 1 } = \lambda _ { 1 } \pmb { w } _ { * }$ , $\pmb { u } _ { 2 } = \lambda _ { 2 } \pmb { w } _ { * }$ , and $\pmb { u } = \pmb { u } _ { 1 } + \pmb { u } _ { 2 }$ , where $\lambda _ { 1 } , \lambda _ { 2 }$ will be chosen later and will not depend on $s$ or $m$ . Note that $\pmb { u }$ is a scalar multiple of $\pmb { w } _ { \ast }$ , which is the maximum margin predictor of the global dataset, not that of any local dataset. We start by applying the split comparator technique of (Wu et al., 2024a) to the local updates of client $m$ at round $s$ , which takes $K$ gradient steps with learning rate $\eta$ on the objective $F _ { m }$ , initialized from ${ \pmb w } _ { s }$ . For every $0 \leq k < K$ ,
$$
\begin{array} { r l } { \| w _ { s , k + 1 } ^ { m } - u \| ^ { 2 } = \| ( w _ { s , k } ^ { m } - u ) + ( w _ { s , k + 1 } ^ { m } - w _ { s , k } ^ { m } ) \| ^ { 2 } } & { } \\ { = \| w _ { s , k } ^ { m } - u \| ^ { 2 } + 2 \left. w _ { s , k + 1 } ^ { m } - w _ { s , k } ^ { m } , w _ { s , k } ^ { m } - u \right. + \| w _ { s , k + 1 } ^ { m } - w _ { s , k } ^ { m } \| ^ { 2 } } & { } \\ { = \| w _ { s , k } ^ { m } - u \| ^ { 2 } + 2 \eta \left. \nabla F _ { m } ( w _ { s , k } ^ { m } ) , u - w _ { s , k } ^ { m } \right. + \eta ^ { 2 } \| \nabla F _ { m } ( w _ { s , k } ^ { m } ) \| ^ { 2 } } & { } \\ { = \| w _ { s , k } ^ { m } - u \| ^ { 2 } + \underbrace { 2 \eta \left. \nabla F _ { m } ( w _ { s , k } ^ { m } ) , u _ { 1 } - w _ { s , k } ^ { m } \right. } _ { A _ { 1 } } } & { } \\ { \ } & { \ + \underbrace { 2 \eta \left. \nabla F _ { m } ( w _ { s , k } ^ { m } ) , u _ { 2 } \right. + \eta ^ { 2 } \| \nabla F _ { m } ( w _ { s , k } ^ { m } ) \| ^ { 2 } } _ { A _ { 2 } } } \end{array}
$$
The first term $A _ { 1 }$ is easily bounded by convexity of $F _ { m }$ :
$$
A _ { 1 } = 2 \eta \left. \nabla F _ { m } ( w _ { s , k } ^ { m } ) , u _ { 1 } - w _ { s , k } ^ { m } \right. \leq 2 \eta ( F _ { m } ( u _ { 1 } ) - F _ { m } ( w _ { s , k } ^ { m } ) ) .
$$
The second term $A _ { 2 }$ can be bounded by the Lipschitz property of $F _ { m }$ together with a choice of $\pmb { u } _ { 2 }$ :
$$
\begin{array} { l } { \displaystyle A _ { 2 } = \eta \left( 2 \left\langle \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) , \pmb { u } _ { 2 } \right\rangle + \eta \left\| \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| ^ { 2 } \right) } \\ { \displaystyle \quad \overset { ( i ) } { = } \eta \left( - \frac { 2 } { n } \sum _ { i = 1 } ^ { n } \frac { \left\langle \pmb { u } _ { 2 } , \pmb { x } _ { i } ^ { m } \right\rangle } { 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) } + \eta \left\| \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \frac { \pmb { x } _ { i } ^ { m } } { 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) } \right\| ^ { 2 } \right) } \\ { \displaystyle \quad \overset { ( i i ) } { \leq } \eta \left( - \frac { 2 \lambda _ { 2 } } { n } \sum _ { i = 1 } ^ { n } \frac { \left\langle \pmb { w } _ { * } , \pmb { x } _ { i } ^ { m } \right\rangle } { 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) } + \frac { \eta } { n } \sum _ { i = 1 } ^ { n } \frac { \| \pmb { x } _ { i } ^ { m } \| ^ { 2 } } { ( 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) ) ^ { 2 } } \right) } \\ { \displaystyle \quad \overset { ( i i i ) } { \leq } \eta \left( - \frac { 2 \lambda _ { 2 } } { n } \sum _ { i = 1 } ^ { n } \frac { \left\langle \pmb { w } _ { * } , \pmb { x } _ { i } ^ { m } \right\rangle } { 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) } + \frac { \eta } { n } \sum _ { i = 1 } ^ { n } \frac { 1 } { 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) } \right) } \\ { \displaystyle \quad \overset { ( i v ) } { \leq } \frac { \eta ( \eta - 2 \gamma \lambda _ { 2 } ) } { n } \sum _ { i = 1 } ^ { n } \frac { 1 } { 1 + \exp ( \langle \pmb { w } _ { s , k } ^ { m } , \pmb { x } _ { i } ^ { m } \rangle ) } , } \end{array}
$$
where $( i )$ uses the definition of $\nabla F _ { m }$ , $( i i )$ uses the definition of $\pmb { u } _ { 2 }$ and Jensen's inequality, $( i i i )$ uses $\| \pmb { x } _ { i } ^ { m } \| \leq 1$ , and $( i v )$ uses the margin condition $\langle \pmb { w } _ { * } , \pmb { x } _ { i } ^ { m } \rangle \geq \gamma$ . Therefore, choosing $\lambda _ { 2 } = \eta / ( 2 \gamma )$ implies that $A _ { 2 } \leq 0$ . Plugging back to Equation 40,
$$
\begin{array} { r l } & { \| \pmb { w } _ { s , k + 1 } ^ { m } - \pmb { u } \| ^ { 2 } \leq \| \pmb { w } _ { s , k } ^ { m } - \pmb { u } \| ^ { 2 } + 2 \eta ( F _ { m } ( \pmb { u } _ { 1 } ) - F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) ) } \\ & { \qquad F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \leq \frac { \| \pmb { w } _ { s , k } ^ { m } - \pmb { u } \| ^ { 2 } - \| \pmb { w } _ { s , k + 1 } ^ { m } - \pmb { u } \| ^ { 2 } } { 2 \eta } + F _ { m } ( \pmb { u } _ { 1 } ) . } \end{array}
$$
Averaging over $k \in \{ 0 , \ldots , K - 1 \}$ ,
$$
\begin{array} { c } { \displaystyle \frac { 1 } { K } \displaystyle \sum _ { k = 0 } ^ { K - 1 } F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \leq \frac { \| \pmb { w } _ { s } - \pmb { u } \| ^ { 2 } - \| \pmb { w } _ { s , K } ^ { m } - \pmb { u } \| ^ { 2 } } { 2 \eta K } + F _ { m } ( \pmb { u } _ { 1 } ) } \\ { \displaystyle \frac { \| \pmb { w } _ { s , K } ^ { m } - \pmb { u } \| ^ { 2 } } { 2 \eta K } + \displaystyle \frac { 1 } { K } \displaystyle \sum _ { k = 0 } ^ { K - 1 } F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \leq \frac { \| \pmb { w } _ { s } - \pmb { u } \| ^ { 2 } } { 2 \eta K } + F _ { m } ( \pmb { u } _ { 1 } ) . } \end{array}
$$
In particular, this implies
$$
\frac { \| \pmb { w } _ { s , K } ^ { m } - \pmb { u } \| ^ { 2 } } { 2 \eta K } \le \frac { \| \pmb { w } _ { s } - \pmb { u } \| ^ { 2 } } { 2 \eta K } + F _ { m } ( \pmb { u } _ { 1 } ) ,
$$
so
$$
\| w _ { s , K } ^ { m } - u \| \leq \sqrt { \| w _ { s } - u \| ^ { 2 } + 2 \eta K F _ { m } ( u _ { 1 } ) } \leq \| w _ { s } - u \| + \sqrt { 2 \eta K F _ { m } ( u _ { 1 } ) } .
$$
Recall that $\begin{array} { r } { { \pmb w } _ { s + 1 } = \frac { 1 } { M } \sum _ { m = 1 } ^ { M } { \pmb w } _ { s , K } ^ { m } } \end{array}$ . So averaging over $m$ ,
$$
\begin{array} { r l } { \| \pmb { w } _ { s + 1 } - \pmb { u } \| = \left\| \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \pmb { w } _ { s , K } ^ { m } - \pmb { u } \right\| \leq \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \| \pmb { w } _ { s , K } ^ { m } - \pmb { u } \| } \\ { \leq \| \pmb { w } _ { s } - \pmb { u } \| + \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \sqrt { 2 \eta K F _ { m } ( \pmb { u } _ { 1 } ) } } \\ { \overset { ( i ) } { \leq } \| \pmb { w } _ { s } - \pmb { u } \| + \sqrt { 2 \eta K F ( \pmb { u } _ { 1 } ) } , } \end{array}
$$
where $( i )$ uses the fact that $\sqrt { \cdot }$ is concave together with Jensen’s inequality. We can now unroll this recursion over $s \in \{ 0 , \ldots , r - 1 \}$ to obtain
$$
\| \pmb { w } _ { r } - \pmb { u } \| \leq \| \pmb { w } _ { 0 } - \pmb { u } \| + \sqrt { 2 \eta K r ^ { 2 } F ( \pmb { u } _ { 1 } ) } \leq \| \pmb { w } _ { 0 } \| + \| \pmb { u } \| + \sqrt { 2 \eta K r ^ { 2 } F ( \pmb { u } _ { 1 } ) } ,
$$
so
$$
\begin{array} { r l } & { \| { \pmb w } _ { r } \| \leq \| { \pmb w } _ { r } - { \pmb u } \| + \| { \pmb u } \| \leq \| { \pmb w } _ { 0 } \| + 2 \| { \pmb u } \| + \sqrt { 2 \eta K r ^ { 2 } F ( { \pmb u } _ { 1 } ) } } \\ & { \qquad = \| { \pmb w } _ { 0 } \| + 2 \lambda _ { 1 } + 2 \lambda _ { 2 } + \sqrt { 2 \eta K r ^ { 2 } F ( \lambda _ { 1 } { \pmb w } _ { * } ) } . } \end{array}
$$
It only remains to choose $\lambda _ { 1 }$ . Note that
$$
\begin{array} { l } { \displaystyle F ( \lambda _ { 1 } { \pmb w } _ { * } ) = \frac { 1 } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \log ( 1 + \exp ( - \lambda _ { 1 } \langle { \pmb w } _ { * } , { \pmb x } _ { i } ^ { m } \rangle ) ) } \\ { \displaystyle \stackrel { ( i ) } { \leq } \frac { 1 } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \exp ( - \lambda _ { 1 } \langle { \pmb w } _ { * } , { \pmb x } _ { i } ^ { m } \rangle ) } \\ { \displaystyle \stackrel { ( i i ) } { \leq } \exp ( - \lambda _ { 1 } \gamma ) , } \end{array}
$$
where $( i )$ uses $\log ( 1 + x ) \leq x$ for $x \geq 0$ and $( i i )$ uses the definition of $\pmb { w } _ { \ast }$ . Therefore, choosing $\begin{array} { r } { \lambda _ { 1 } = \frac { 1 } { \gamma } \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } \end{array}$ yields
$$
F ( \lambda _ { 1 } { w _ { * } } ) \leq \frac { 1 } { 1 + \eta \gamma ^ { 2 } K r ^ { 2 } } \leq \frac { 1 } { \eta \gamma ^ { 2 } K r ^ { 2 } } ,
$$
so
$$
\begin{array} { l } { \displaystyle \| { \pmb w } _ { r } \| \leq \| { \pmb w } _ { 0 } \| + \frac { 2 } { \gamma } \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) + \frac { \eta } { \gamma } + \sqrt { 2 \eta K r ^ { 2 } \frac { 1 } { \eta \gamma ^ { 2 } K r ^ { 2 } } } } \\ { = \| { \pmb w } _ { 0 } \| + \frac { \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \gamma } . } \end{array}
$$
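As a quick numerical sanity check of the lemma (not part of the formal argument), one can run Local GD on toy separable data and compare $\| \pmb { w } _ { r } \|$ against the right-hand side at every round. The dataset and the margin value $\gamma = 0.4$ , certified by the unit vector $\pmb { w } _ { * } = ( 1 , 0 )$ , are illustrative assumptions.

```python
import numpy as np

# Toy two-client separable data (labels absorbed): <w*, x> >= gamma = 0.4
# for w* = (1, 0), and ||x|| <= 1 for every point.
clients = [np.array([[0.6, 0.2], [0.5, 0.4]]), np.array([[0.7, -0.1], [0.4, 0.3]])]
gamma, eta, K, R = 0.4, 1.0, 4, 50

w = np.zeros(2)
norms = [np.linalg.norm(w)]
for _ in range(R):
    local_models = []
    for X in clients:
        v = w.copy()
        for _ in range(K):
            # local GD step on F_m(v) = mean_i log(1 + exp(-<v, x_i>))
            v -= eta * (-(X / (1.0 + np.exp(X @ v))[:, None]).mean(axis=0))
        local_models.append(v)
    w = np.mean(local_models, axis=0)
    norms.append(np.linalg.norm(w))

# Right-hand side of the lemma, ||w_0|| + (sqrt(2) + eta + log(1 + eta g^2 K r^2)) / g,
# evaluated at each round r (here ||w_0|| = 0).
bound = [(np.sqrt(2) + eta + np.log(1 + eta * gamma**2 * K * r**2)) / gamma
         for r in range(R + 1)]
```

The iterate norm grows only logarithmically on separable data, so it should stay well below the bound for every round of this toy run.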
Theorem A.2 (Restatement of Theorem 4.1). For every $r \geq 0$ , Local GD satisfies
$$
\frac { 1 } { r } \sum _ { s = 0 } ^ { r - 1 } F ( \pmb { w } _ { s } ) \leq 2 6 \frac { \| \pmb { w } _ { 0 } \| ^ { 2 } + 1 + \log ^ { 2 } ( K + \eta K \gamma ^ { 2 } r ) + \eta ^ { 2 } K ^ { 2 } } { \eta \gamma ^ { 4 } r } .
$$
Proof. To achieve this bound on the loss of Local GD, we again adapt the split comparator technique of (Wu et al., 2024a). This time, we consider the trajectory of the global model ${ \pmb w } _ { r }$ , instead of the trajectory of locally updated models ${ \pmb w } _ { r , k } ^ { m }$ as in Lemma 4.4. To apply this technique for Local GD, we have to account for the fact that the update direction ${ \pmb w } _ { r + 1 } - { \pmb w } _ { r }$ is not equal to the global gradient $\nabla F ( { \pmb w } _ { r } )$ . However, both the update direction and the global gradient are linear combinations of the data $\{ \pmb { x } _ { i } ^ { m } \} _ { m , i }$ , and we account for the discrepancy between the two by bounding the ratio of their linear combination coefficients.
Let $\pmb { u } _ { 1 } = \lambda _ { 1 } \pmb { w } _ { * } , \pmb { u } _ { 2 } = \lambda _ { 2 } \pmb { w } _ { * }$ , where $\lambda _ { 1 }$ and $\lambda _ { 2 }$ will be determined later, and let $\pmb { u } = \pmb { u } _ { 1 } + \pmb { u } _ { 2 }$ . Then
$$
\begin{array} { r l } { \| \pmb { w } _ { s + 1 } - \pmb { u } \| ^ { 2 } } & { = \| ( \pmb { w } _ { s } - \pmb { u } ) + ( \pmb { w } _ { s + 1 } - \pmb { w } _ { s } ) \| ^ { 2 } } \\ & { = \| \pmb { w } _ { s } - \pmb { u } \| ^ { 2 } + 2 \left\langle \pmb { w } _ { s + 1 } - \pmb { w } _ { s } , \pmb { w } _ { s } - \pmb { u } \right\rangle + \| \pmb { w } _ { s + 1 } - \pmb { w } _ { s } \| ^ { 2 } } \\ & { = \| \pmb { w } _ { s } - \pmb { u } \| ^ { 2 } + \frac { 2 \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\langle \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) , \pmb { u } - \pmb { w } _ { s } \right\rangle + \eta ^ { 2 } \left\| \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| ^ { 2 } } \\ & { = \| \pmb { w } _ { s } - \pmb { u } \| ^ { 2 } + \underbrace { \frac { 2 \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\langle \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) , \pmb { u } _ { 1 } - \pmb { w } _ { s } \right\rangle } _ { A _ { 1 } } } \\ & { \quad + \underbrace { \frac { 2 \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\langle \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) , \pmb { u } _ { 2 } \right\rangle + \eta ^ { 2 } \left\| \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| ^ { 2 } } _ { A _ { 2 } } . } \end{array}
$$
To bound $A _ { 1 }$ , we express the local gradient of the local models $\nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } )$ in terms of the local gradient of the preceding global model $\nabla F _ { m } ( \pmb { w } _ { s } )$ . For any $\pmb { w }$ ,
$$
\nabla F _ { m } ( \pmb { w } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \nabla F _ { m , i } ( \pmb { w } ) = \frac { - 1 } { n } \sum _ { i = 1 } ^ { n } \frac { \pmb { x } _ { i } ^ { m } } { 1 + \mathrm { e x p } ( \langle \pmb { x } _ { i } ^ { m } , \pmb { w } \rangle ) } .
$$
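This closed form is easy to sanity-check against a central finite difference, and the resulting gradient also satisfies $\| \nabla F _ { m } ( \pmb { w } ) \| \leq F _ { m } ( \pmb { w } )$ , the property invoked later via Lemma B.1. The random toy data below is an illustrative assumption.

```python
import numpy as np

def F_m(w, X):
    # F_m(w) = (1/n) * sum_i log(1 + exp(-<w, x_i>)), labels absorbed into x_i
    return np.log1p(np.exp(-X @ w)).mean()

def grad_F_m(w, X):
    # Closed form above: -(1/n) * sum_i x_i / (1 + exp(<x_i, w>))
    return -(X / (1.0 + np.exp(X @ w))[:, None]).mean(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x_i|| <= 1
w = rng.normal(size=3)

# Central finite-difference approximation of the gradient, coordinate by coordinate.
eps = 1e-6
fd = np.array([(F_m(w + eps * e, X) - F_m(w - eps * e, X)) / (2 * eps)
               for e in np.eye(3)])
```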
So denoting $\beta _ { s , i , k } ^ { m } = ( 1 + \exp ( b _ { s , i } ^ { m } ) ) / ( 1 + \exp ( b _ { s , i , k } ^ { m } ) )$ and $F _ { m , i } ( \pmb { w } ) = \mathrm { l o g } ( 1 + \mathrm { e x p } ( - \langle \pmb { w } , \pmb { x } _ { i } ^ { m } \rangle ) )$ ,
$$
\nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \frac { - \pmb { x } _ { i } ^ { m } } { 1 + \exp ( b _ { s , i , k } ^ { m } ) } = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \frac { 1 + \exp ( b _ { s , i } ^ { m } ) } { 1 + \exp ( b _ { s , i , k } ^ { m } ) } \frac { - \pmb { x } _ { i } ^ { m } } { 1 + \exp ( b _ { s , i } ^ { m } ) } = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \beta _ { s , i , k } ^ { m } \nabla F _ { m , i } ( \pmb { w } _ { s } ) .
$$
Notice, from the definition of $\beta _ { s , i } ^ { m }$ ,
$$
\beta _ { s , i } ^ { m } : = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { | \ell ^ { \prime } ( b _ { s , i , k } ^ { m } ) | } { | \ell ^ { \prime } ( b _ { s , i } ^ { m } ) | } = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 + \exp ( b _ { s , i } ^ { m } ) } { 1 + \exp ( b _ { s , i , k } ^ { m } ) } = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \beta _ { s , i , k } ^ { m } ,
$$
so
$$
\begin{array} { l } { \displaystyle A _ { 1 } = \frac { 2 \eta } { M n } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \sum _ { i = 1 } ^ { n } \beta _ { s , i , k } ^ { m } \left\langle \nabla F _ { m , i } ( \pmb { w } _ { s } ) , \pmb { u } _ { 1 } - \pmb { w } _ { s } \right\rangle } \\ { \displaystyle \quad \overset { ( i ) } { \leq } \frac { 2 \eta } { M n } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \sum _ { i = 1 } ^ { n } \beta _ { s , i , k } ^ { m } \left( F _ { m , i } ( \pmb { u } _ { 1 } ) - F _ { m , i } ( \pmb { w } _ { s } ) \right) } \\ { \displaystyle \quad = \frac { 2 \eta K } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \beta _ { s , i } ^ { m } F _ { m , i } ( \pmb { u } _ { 1 } ) - \frac { 2 \eta K } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \beta _ { s , i } ^ { m } F _ { m , i } ( \pmb { w } _ { s } ) , } \end{array}
$$
where $( i )$ uses the convexity of $F _ { m , i }$ . We can now bound the two terms of Equation 77 with upper and lower bounds of $\beta _ { s , i } ^ { m }$ , respectively. Denoting $\phi = \| \pmb { w } _ { 0 } \| + \frac { \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \gamma }$ ,
$$
\begin{array} { r l } & { \displaystyle \beta _ { s , i } ^ { m } = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 + \exp ( b _ { s , i } ^ { m } ) } { 1 + \exp ( b _ { s , i , k } ^ { m } ) } \le 1 + \exp ( b _ { s , i } ^ { m } ) = 1 + \exp ( \langle w _ { s } , x _ { i } ^ { m } \rangle ) } \\ & { \displaystyle \stackrel { ( i ) } { \le } 1 + \exp ( \| w _ { s } \| ) \stackrel { ( i i ) } { \le } 1 + \exp \left( \| w _ { 0 } \| + \frac { \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K s ^ { 2 } ) } { \gamma } \right) } \\ & { \displaystyle \le 2 \exp ( \phi ) , } \end{array}
$$
where $( i )$ uses Cauchy-Schwarz together with $\| \pmb { x } _ { i } ^ { m } \| \leq 1$ and $( i i )$ uses Lemma 4.4. Also,
$$
\beta _ { s , i } ^ { m } = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 + \exp ( b _ { s , i } ^ { m } ) } { 1 + \exp ( b _ { s , i , k } ^ { m } ) } \geq \frac { 1 } { K } \frac { 1 + \exp ( b _ { s , i } ^ { m } ) } { 1 + \exp ( b _ { s , i , 0 } ^ { m } ) } = \frac { 1 } { K } .
$$
The step $\begin{array} { r } { \beta _ { s , i } ^ { m } \geq \frac { 1 } { K } } \end{array}$ was mentioned in our proof overview, and it will be used again in the proof of Lemma 4.9. See Lemma B.7 for a discussion on the tightness of this bound. Plugging Equation 80 and Equation 81 into Equation 77,
$$
\begin{array} { r l r } { { A _ { 1 } \leq \frac { 4 \eta K \exp ( \phi ) } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } F _ { m , i } ( { \pmb u } _ { 1 } ) - \frac { 2 \eta } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } F _ { m , i } ( { \pmb w } _ { s } ) } } \\ & { } & { \leq 4 \eta K \exp ( \phi ) F ( { \pmb u } _ { 1 } ) - 2 \eta F ( { \pmb w } _ { s } ) . } \end{array}
$$
This bounds $A _ { 1 }$ . For $A _ { 2 }$ ,
$$
\begin{array} { l } { \displaystyle A _ { 2 } = \frac { 2 \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\langle \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) , \pmb { u } _ { 2 } \right\rangle + \eta ^ { 2 } \left\| \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| ^ { 2 } } \\ { \displaystyle \quad \overset { ( i ) } { \leq } \frac { 2 \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\langle \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) , \pmb { u } _ { 2 } \right\rangle + \frac { \eta ^ { 2 } K } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\| \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| } \\ { \displaystyle \quad \overset { ( i i ) } { \leq } - \frac { 2 \eta \gamma \lambda _ { 2 } } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\| \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| + \frac { \eta ^ { 2 } K } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\| \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| } \\ { \displaystyle \quad = \frac { \eta ( \eta K - 2 \gamma \lambda _ { 2 } ) } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \left\| \nabla F _ { m } ( \pmb { w } _ { s , k } ^ { m } ) \right\| , } \end{array}
$$
where $( i )$ uses the fact that $\| \nabla F _ { m } ( \pmb { w } ) \| \leq 1$ , coming from Equation 72 and $\| \pmb { x } _ { i } ^ { m } \| \leq 1$ , and $( i i )$ uses the definition of $\pmb { u } _ { 2 }$ together with the bound $\langle \nabla F _ { m } ( \pmb { w } ) , \pmb { w } _ { * } \rangle \leq - \gamma \| \nabla F _ { m } ( \pmb { w } ) \|$ , which follows from the margin condition and Equation 72.
Choosing $\lambda _ { 2 } = \eta K / ( 2 \gamma )$ then implies that $A _ { 2 } \leq 0$ .
Plugging $A _ { 2 } \leq 0$ and Equation 83 into Equation 71,
$$
\begin{array} { r l r } { { \| { \pmb w } _ { s + 1 } - { \pmb u } \| ^ { 2 } \leq \| { \pmb w } _ { s } - { \pmb u } \| ^ { 2 } + 4 \eta K \exp ( \phi ) F ( { \pmb u } _ { 1 } ) - 2 \eta F ( { \pmb w } _ { s } ) } } \\ & { } & { F ( { \pmb w } _ { s } ) \le \frac { \| { \pmb w } _ { s } - { \pmb u } \| ^ { 2 } - \| { \pmb w } _ { s + 1 } - { \pmb u } \| ^ { 2 } } { 2 \eta } + 2 K \exp ( \phi ) F ( { \pmb u } _ { 1 } ) , } \end{array}
$$
and averaging over $s \in \{ 0 , \ldots , r - 1 \}$ yields
$$
\begin{array} { r l } { \displaystyle \frac { 1 } { r } \sum _ { s = 0 } ^ { r - 1 } F ( w _ { s } ) \leq \displaystyle \frac { \| w _ { 0 } - u \| ^ { 2 } - \| w _ { r } - u \| ^ { 2 } } { 2 \eta r } + 2 K \exp ( \phi ) F ( u _ { 1 } ) } & \\ { \leq \displaystyle \frac { \| w _ { 0 } - ( u _ { 1 } + u _ { 2 } ) \| ^ { 2 } } { 2 \eta r } + 2 K \exp ( \phi ) F ( u _ { 1 } ) } & \\ { \leq \frac { 3 } { 2 } \displaystyle \frac { \| w _ { 0 } \| ^ { 2 } + \| u _ { 1 } \| ^ { 2 } + \| u _ { 2 } \| ^ { 2 } } { \eta r } + 2 K \exp ( \phi ) F ( u _ { 1 } ) } & \\ { \leq \frac { 3 } { 2 } \displaystyle \frac { \| w _ { 0 } \| ^ { 2 } + \lambda _ { 1 } ^ { 2 } + \lambda _ { 2 } ^ { 2 } } { \eta r } + 2 K \exp ( \phi ) F ( \lambda _ { 1 } w _ { \ast } ) . } \end{array}
$$
Recall that
$$
F ( \lambda _ { 1 } w _ { * } ) = \frac { 1 } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \log ( 1 + \exp ( - \lambda _ { 1 } \langle { \boldsymbol x } _ { i } ^ { m } , { \boldsymbol w } _ { * } \rangle ) ) \le \log ( 1 + \exp ( - \lambda _ { 1 } \gamma ) ) \overset { ( i ) } { \le } \exp ( - \lambda _ { 1 } \gamma ) ,
$$
where $( i )$ uses $\log ( 1 + x ) \leq x$ for $x \geq 0$ . So
$$
\begin{array} { r l } & { \displaystyle \frac { 1 } { r } \sum _ { s = 0 } ^ { r - 1 } F ( { \pmb w } _ { s } ) \leq \frac { 3 } { 2 } \frac { \| { \pmb w } _ { 0 } \| ^ { 2 } + \lambda _ { 1 } ^ { 2 } + \lambda _ { 2 } ^ { 2 } } { \eta r } + 2 K \exp ( \phi - \lambda _ { 1 } \gamma ) } \\ & { \qquad \quad = \displaystyle \frac { 3 } { 2 } \frac { \| { \pmb w } _ { 0 } \| ^ { 2 } + \lambda _ { 1 } ^ { 2 } + \lambda _ { 2 } ^ { 2 } } { \eta r } + 2 \exp ( \log K + \phi - \lambda _ { 1 } \gamma ) . } \end{array}
$$
Here we choose $\lambda _ { 1 } = ( \phi + \log ( K + \eta K \gamma ^ { 2 } r ) ) / \gamma$ . Finally, together with the previous choice of $\lambda _ { 2 } = \eta K / ( 2 \gamma )$ , we have
$$
\begin{array} { r l } & { \displaystyle \frac { 1 } { r } \sum _ { s = 0 } ^ { r - 1 } F ( { \pmb w } _ { s } ) \leq \frac { 3 \| { \pmb w } _ { 0 } \| ^ { 2 } } { 2 \eta r } + \frac { 3 \big ( \phi ^ { 2 } + \log ^ { 2 } ( K + \eta K \gamma ^ { 2 } r ) \big ) } { \eta \gamma ^ { 2 } r } + \frac { 3 \eta K ^ { 2 } } { 8 \gamma ^ { 2 } r } + \frac { 2 } { 1 + \eta \gamma ^ { 2 } r } } \\ & { \qquad \leq \frac { 1 4 \| { \pmb w } _ { 0 } \| ^ { 2 } } { \eta \gamma ^ { 4 } r } + \frac { 1 2 \eta } { \gamma ^ { 4 } r } + \frac { 1 5 \log ^ { 2 } ( K + \eta K \gamma ^ { 2 } r ) } { \eta \gamma ^ { 4 } r } + \frac { 3 \eta K ^ { 2 } } { 8 \gamma ^ { 2 } r } + \frac { 2 6 } { \eta \gamma ^ { 4 } r } } \\ & { \qquad \leq 2 6 \frac { \| { \pmb w } _ { 0 } \| ^ { 2 } + 1 + \log ^ { 2 } ( K + \eta K \gamma ^ { 2 } r ) + \eta ^ { 2 } K ^ { 2 } } { \eta \gamma ^ { 4 } r } . } \end{array}
$$
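On toy data the constant 26 makes this bound extremely loose, but it is cheap to confirm numerically. The two-client dataset and margin value $\gamma = 0.4$ below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Toy separable data: <w*, x> >= gamma = 0.4 for w* = (1, 0), ||x|| <= 1.
clients = [np.array([[0.6, 0.2], [0.5, 0.4]]), np.array([[0.7, -0.1], [0.4, 0.3]])]
gamma, eta, K, R = 0.4, 1.0, 4, 50

def F(w):
    # Global objective: average of the M local logistic losses.
    return np.mean([np.log1p(np.exp(-X @ w)).mean() for X in clients])

w = np.zeros(2)  # so ||w_0||^2 = 0 in the bound below
losses = []
for _ in range(R):
    losses.append(F(w))
    local_models = []
    for X in clients:
        v = w.copy()
        for _ in range(K):
            v -= eta * (-(X / (1.0 + np.exp(X @ v))[:, None]).mean(axis=0))
        local_models.append(v)
    w = np.mean(local_models, axis=0)

avg_loss = np.mean(losses)  # (1/r) * sum_{s<r} F(w_s) with r = R
rhs = 26 * (1 + np.log(K + eta * K * gamma**2 * R) ** 2 + eta**2 * K**2) / (eta * gamma**4 * R)
```

Here `avg_loss` is already below $\log 2$ after the first round, while `rhs` is in the hundreds; the value of the theorem is its rate in $r$ and its mild dependence on $K$ , not its constants.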
# A.2. Proof of Theorem 4.2
Lemma A.3 (Restatement of Lemma 4.6). If $F ( \pmb { w } _ { r } ) \leq 1 / ( 4 \eta M )$ for some $r \geq 0$ , then $F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$ is decreasing in $k$ for every $m$ .
Proof. Recall that for each $r , m$ , the sequence of local steps $\{ { \pmb w } _ { r , k } ^ { m } \} _ { k }$ is generated by GD for a single-machine logistic regression problem. To show decrease of the objective, we use the modified descent inequality from Lemma 4.5.
We want to show that $F _ { m } ( \pmb { w } _ { r , k + 1 } ^ { m } ) \leq F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$ for every $k$ . To do this, we prove $F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \leq F _ { m } ( \pmb { w } _ { r } )$ by induction on $k$ . Clearly it holds for $k = 0$ , so suppose that it holds for some $0 \leq k < K$ . Then
$$
\| \pmb { w } _ { r , k + 1 } ^ { m } - \pmb { w } _ { r , k } ^ { m } \| = \eta \| \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \| \overset { ( i ) } { \leq } \eta F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \overset { ( i i ) } { \leq } \eta F _ { m } ( \pmb { w } _ { r } ) \overset { ( i i i ) } { \leq } 1 / 4 ,
$$
where $( i )$ uses Lemma B.1, $( i i )$ uses the inductive hypothesis, and $( i i i )$ uses $F _ { m } ( \pmb { w } _ { r } ) \leq M F ( \pmb { w } _ { r } ) \leq 1 / ( 4 \eta )$ . This bound on $\lVert \mathbf { \boldsymbol { w } } _ { r , k + 1 } ^ { m } - \mathbf { \boldsymbol { w } } _ { r , k } ^ { m } \rVert$ shows that the condition of Lemma 4.5 is satisfied, so
$$
\begin{array} { r l } & { F _ { m } ( \boldsymbol { w } _ { r , k + 1 } ^ { m } ) - F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \leq \langle \nabla F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) , \boldsymbol { w } _ { r , k + 1 } ^ { m } - \boldsymbol { w } _ { r , k } ^ { m } \rangle + 4 F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \| \boldsymbol { w } _ { r , k + 1 } ^ { m } - \boldsymbol { w } _ { r , k } ^ { m } \| ^ { 2 } } \\ & { \qquad \leq - \eta \left\| \nabla F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \right\| ^ { 2 } + 4 \eta ^ { 2 } F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \left\| \nabla F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \right\| ^ { 2 } } \\ & { \qquad \leq - \eta \left( 1 - 4 \eta F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \right) \left\| \nabla F _ { m } ( \boldsymbol { w } _ { r , k } ^ { m } ) \right\| ^ { 2 } } \\ & { \qquad \overset { ( i ) } { \leq } 0 , } \end{array}
$$
where $( i )$ uses the inductive hypothesis $F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) \leq F _ { m } ( \pmb { w } _ { r } ) \leq 1 / ( 4 \eta )$ . This completes the induction, so that $F _ { m } ( w _ { r , k } ^ { m } ) \leq$ $F _ { m } ( \pmb { w } _ { r } )$ . Additionally, Equation 107 shows that $F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$ is decreasing in $k$ . □
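As a numerical sanity check of this monotonicity (not part of the proof), the following snippet runs plain GD on a single machine's logistic loss, with labels absorbed into the features as in our setting. The data, dimension, and step size are illustrative choices satisfying the premise $F_m(\pmb{w}_0) \leq 1/(4\eta)$.

```python
import numpy as np

# Sanity check of Lemma 4.6 / A.3: if the local loss starts below 1/(4*eta),
# GD on the logistic loss keeps it monotonically decreasing.
# Synthetic data with labels absorbed into x and ||x|| <= 1 (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm features

eta, K = 0.1, 50

def loss(w):
    return np.mean(np.log1p(np.exp(-X @ w)))

def grad(w):
    return np.mean(-X / (1.0 + np.exp(X @ w))[:, None], axis=0)

w = np.zeros(5)
assert loss(w) <= 1.0 / (4 * eta)              # premise of the lemma at w_0
losses = [loss(w)]
for _ in range(K):
    w -= eta * grad(w)
    losses.append(loss(w))
# F_m(w_{r,k}^m) is non-increasing in k along the local trajectory
assert all(b <= a + 1e-12 for a, b in zip(losses, losses[1:]))
```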
Lemma A.4 (Restatement of Lemma 4.7). If $F ( \pmb { w } _ { r } ) \leq 1 / ( \eta K M )$ for some $r \geq 0$ , then $\| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| \leq 1$ for every $m \in [ M ] , k \in \{ 0 , \ldots , K - 1 \}$ .
Proof. To bound the per-round movement $\lVert \mathbf { \boldsymbol { w } } _ { r , k } ^ { m } - \mathbf { \boldsymbol { w } } _ { r } \rVert$ , we simply use the property $\lVert \nabla F _ { m } ( \pmb { w } ) \rVert \leq F _ { m } ( \pmb { w } )$ from Lemma B.1, combined with the fact that the local loss is decreasing during the round from Lemma 4.6. Specifically,
$$
\begin{array} { r l } { \displaystyle \| { \boldsymbol w } _ { r , k } ^ { m } - { \boldsymbol w } _ { r } \| = \eta \left\| \sum _ { t = 0 } ^ { k - 1 } \nabla F _ { m } ( { \boldsymbol w } _ { r , t } ^ { m } ) \right\| \leq \eta \sum _ { t = 0 } ^ { k - 1 } \left\| \nabla F _ { m } ( { \boldsymbol w } _ { r , t } ^ { m } ) \right\| } & { } \\ { \displaystyle \stackrel { ( i ) } { \leq } \eta \sum _ { t = 0 } ^ { k - 1 } F _ { m } ( { \boldsymbol w } _ { r , t } ^ { m } ) ~ \stackrel { ( i i ) } { \leq } ~ \eta K F _ { m } ( { \boldsymbol w } _ { r } ) ~ \stackrel { ( i i i ) } { \leq } ~ 1 , } & { }
\end{array}
$$
where $( i )$ uses Lemma B.1, $( i i )$ uses $F _ { m } ( \pmb { w } _ { r , t } ^ { m } ) \leq F _ { m } ( \pmb { w } _ { r } )$ from Lemma 4.6, and $( i i i )$ uses the condition $F _ { m } ( \pmb { w } _ { r } ) \leq$ $M F ( { \pmb w } _ { r } ) \leq 1 / ( \eta K )$ . □
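The drift bound can likewise be checked numerically. The sketch below runs one communication round of Local GD on synthetic data; all constants are illustrative, chosen so that the premise $F(\pmb{w}_r) \leq 1/(\eta K M)$ holds at initialization.

```python
import numpy as np

# Sanity check of Lemma 4.7 / A.4: when eta*K*M*F(w_r) <= 1, every local
# iterate stays within unit distance of the round's starting point.
rng = np.random.default_rng(1)
M, n, d = 5, 10, 4
Xs = [rng.normal(size=(n, d)) for _ in range(M)]
Xs = [X / np.linalg.norm(X, axis=1, keepdims=True) for X in Xs]  # ||x|| <= 1
eta, K = 0.02, 10                               # eta*K*M = 1

def local_loss(X, w):
    return np.mean(np.log1p(np.exp(-X @ w)))

def local_grad(X, w):
    return np.mean(-X / (1.0 + np.exp(X @ w))[:, None], axis=0)

w0 = np.zeros(d)
F0 = np.mean([local_loss(X, w0) for X in Xs])
assert F0 <= 1.0 / (eta * K * M)                # premise: F(w_r) <= 1/(eta*K*M)

drift = 0.0
for X in Xs:                                    # one round of Local GD
    w = w0.copy()
    for _ in range(K):
        w -= eta * local_grad(X, w)
        drift = max(drift, np.linalg.norm(w - w0))
assert drift <= 1.0                             # ||w_{r,k}^m - w_r|| <= 1
```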
Lemma A.5 (Restatement of Lemma 4.8). If $F ( { \pmb w } _ { r } ) \leq \gamma / ( 7 0 \eta K M )$ , then $\begin{array} { r } { \| \pmb { b } _ { r } \| \leq \frac { 1 } { 5 } \| \nabla F ( \pmb { w } _ { r } ) \| } \end{array}$ .
Proof. Our bound of $\left\| \boldsymbol { b } _ { r } \right\|$ is essentially a direct calculation that leverages Lemmas B.4, B.1, and 4.6.
$$
\begin{array} { r l } { \| \pmb { b } _ { r } \| } & { = \displaystyle \left\| \frac { 1 } { M K } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \big ( \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) - \nabla F _ { m } ( \pmb { w } _ { r } ) \big ) \right\| \leq \frac { 1 } { M K } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \| \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) - \nabla F _ { m } ( \pmb { w } _ { r } ) \| } \\ & { \displaystyle \stackrel { ( i ) } { \leq } \frac { 7 } { M K } \sum _ { m = 1 } ^ { M } F _ { m } ( \pmb { w } _ { r } ) \sum _ { k = 0 } ^ { K - 1 } \| \pmb { w } _ { r , k } ^ { m } - \pmb { w } _ { r } \| = \frac { 7 \eta } { M K } \sum _ { m = 1 } ^ { M } F _ { m } ( \pmb { w } _ { r } ) \sum _ { k = 0 } ^ { K - 1 } \left\| \sum _ { t = 0 } ^ { k - 1 } \nabla F _ { m } ( \pmb { w } _ { r , t } ^ { m } ) \right\| } \\ & { \displaystyle \stackrel { ( i i ) } { \leq } \frac { 7 \eta } { M K } \sum _ { m = 1 } ^ { M } F _ { m } ( \pmb { w } _ { r } ) \sum _ { k = 0 } ^ { K - 1 } \sum _ { t = 0 } ^ { k - 1 } F _ { m } ( \pmb { w } _ { r , t } ^ { m } ) \stackrel { ( i i i ) } { \leq } \frac { 7 \eta K } { M } \sum _ { m = 1 } ^ { M } F _ { m } ( \pmb { w } _ { r } ) ^ { 2 } } \\ & { \displaystyle \leq \frac { 7 \eta K } { M } \left( \sum _ { m = 1 } ^ { M } F _ { m } ( \pmb { w } _ { r } ) \right) ^ { 2 } = 7 \eta K M F ( \pmb { w } _ { r } ) ^ { 2 } } \\ & { \displaystyle \stackrel { ( i v ) } { \leq } \frac { \gamma } { 1 0 } F ( \pmb { w } _ { r } ) \stackrel { ( v ) } { \leq } \frac { 1 } { 5 } \| \nabla F ( \pmb { w } _ { r } ) \| , }
\end{array}
$$
where $( i )$ uses Lemma B.4 to bound the change in the local gradient during the round, $( i i )$ applies $\lVert \nabla F _ { m } ( \pmb { w } ) \rVert \leq F _ { m } ( \pmb { w } )$ from Lemma B.1, $( i i i )$ uses the fact that $F _ { m } ( \pmb { w } _ { r , t } ^ { m } )$ is decreasing in $t$ (Lemma 4.6), $( i v )$ uses the assumption $F ( w _ { r } ) \leq$ $\gamma / ( 7 0 \eta K M )$ , and $( v )$ uses $\begin{array} { r } { F ( \pmb { w } ) \leq \frac { 2 } { \gamma } \| \nabla F ( \pmb { w } ) \| } \end{array}$ from Lemma B.2. □
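The bias bound can be observed in simulation. The sketch below computes $\pmb{b}_r$ for one round of Local GD on synthetic data with a clearly nonzero full gradient; the step size is chosen small, as an illustrative stand-in for the small-loss premise of the lemma.

```python
import numpy as np

# Sanity check of Lemma 4.8 / A.5: with a small step size, the client-drift
# bias b_r is small relative to the full gradient. Constants are illustrative.
rng = np.random.default_rng(2)
M, n, d = 5, 10, 4
shift = np.array([2.0, 0.0, 0.0, 0.0])          # gives a clearly nonzero gradient
Xs = [rng.normal(size=(n, d)) + shift for _ in range(M)]
Xs = [X / np.linalg.norm(X, axis=1, keepdims=True) for X in Xs]
eta, K = 0.005, 10

def local_grad(X, w):
    return np.mean(-X / (1.0 + np.exp(X @ w))[:, None], axis=0)

w0 = np.zeros(d)
full_grad = np.mean([local_grad(X, w0) for X in Xs], axis=0)

# b_r = (1/MK) sum_m sum_k (grad F_m(w_{r,k}^m) - grad F_m(w_r))
bias = np.zeros(d)
for X in Xs:
    w = w0.copy()
    for _ in range(K):
        bias += (local_grad(X, w) - local_grad(X, w0)) / (M * K)
        w -= eta * local_grad(X, w)
assert np.linalg.norm(bias) <= 0.2 * np.linalg.norm(full_grad)
```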
Lemma A.6 (Restatement of Lemma 4.9). There exists some $r \leq \tau$ such that $F ( \pmb { w } _ { r } ) \leq \frac { \gamma } { 7 0 \eta K M }$ .
Proof. We use a potential function argument inspired by Lemma 9 of (Wu et al., 2024a). Similarly to our proof of Theorem 4.1, we have to account for the change in the local gradient $\nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } )$ during each round.
Define
$$
G _ { m } ( \pmb { w } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } | \ell ^ { \prime } ( \langle \pmb { w } , \pmb { x } _ { m , i } \rangle ) | ,
$$
and $\begin{array} { r } { G ( \pmb { w } ) = \frac { 1 } { M } \sum _ { m = 1 } ^ { M } G _ { m } ( \pmb { w } ) } \end{array}$ . Then for every $r \geq 0$ ,
$$
\begin{array} { r l } { \langle \pmb { w } _ { r + 1 } , \pmb { w } _ { * } \rangle } & { = \langle \pmb { w } _ { r } , \pmb { w } _ { * } \rangle + \langle \pmb { w } _ { r + 1 } - \pmb { w } _ { r } , \pmb { w } _ { * } \rangle } \\ & { = \langle \pmb { w } _ { r } , \pmb { w } _ { * } \rangle - \displaystyle \frac { \eta } { M } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \langle \nabla F _ { m } ( \pmb { w } _ { r , k } ^ { m } ) , \pmb { w } _ { * } \rangle } \\ & { = \langle \pmb { w } _ { r } , \pmb { w } _ { * } \rangle + \displaystyle \frac { \eta } { M n } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \sum _ { i = 1 } ^ { n } | \ell ^ { \prime } ( \langle \pmb { w } _ { r , k } ^ { m } , \pmb { x } _ { m , i } \rangle ) | \langle \pmb { x } _ { m , i } , \pmb { w } _ { * } \rangle } \\ & { \geq \langle \pmb { w } _ { r } , \pmb { w } _ { * } \rangle + \displaystyle \frac { \eta \gamma } { M n } \sum _ { m = 1 } ^ { M } \sum _ { k = 0 } ^ { K - 1 } \sum _ { i = 1 } ^ { n } | \ell ^ { \prime } ( \langle \pmb { w } _ { r , k } ^ { m } , \pmb { x } _ { m , i } \rangle ) | } \\ & { = \langle \pmb { w } _ { r } , \pmb { w } _ { * } \rangle + \displaystyle \frac { \eta \gamma K } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \beta _ { r , i } ^ { m } \, | \ell ^ { \prime } ( \langle \pmb { w } _ { r } , \pmb { x } _ { m , i } \rangle ) | , }
\end{array}
$$
where $\begin{array} { r } { \beta _ { r , i } ^ { m } : = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { | \ell ^ { \prime } ( b _ { r , i , k } ^ { m } ) | } { | \ell ^ { \prime } ( b _ { r , i } ^ { m } ) | } } \end{array}$ . We can lower bound $\beta _ { r , i } ^ { m } \ge 1 / K$ by keeping only the $k = 0$ term of the sum; see Lemma B.7 for a discussion of the tightness of this step. The bound $\beta _ { r , i } ^ { m } \ge 1 / K$ implies
$$
\begin{array} { r l r } { { \langle { \pmb w } _ { r + 1 } , { \pmb w } _ { * } \rangle \geq \langle { \pmb w } _ { r } , { \pmb w } _ { * } \rangle + \frac { \eta \gamma } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \vert \ell ^ { \prime } ( \langle { \pmb w } _ { r } , { \pmb x } _ { m , i } \rangle ) \vert } } \\ & { } & \\ & { } & { = \langle { \pmb w } _ { r } , { \pmb w } _ { * } \rangle + \frac { \eta \gamma } { M } \sum _ { m = 1 } ^ { M } G _ { m } ( { \pmb w } _ { r } ) } \\ & { } & \\ & { } & { = \langle { \pmb w } _ { r } , { \pmb w } _ { * } \rangle + \eta \gamma G ( { \pmb w } _ { r } ) , } \end{array}
$$
Rearranging and averaging over $r$ ,
$$
\begin{array} { r l } & { \displaystyle \frac { 1 } { r } \sum _ { s = 0 } ^ { r - 1 } G ( \pmb { w } _ { s } ) \leq \frac { \langle \pmb { w } _ { r } , \pmb { w } _ { * } \rangle - \langle \pmb { w } _ { 0 } , \pmb { w } _ { * } \rangle } { \eta \gamma r } } \\ & { \qquad \leq \frac { \| \pmb { w } _ { r } - \pmb { w } _ { 0 } \| } { \eta \gamma r } } \\ & { \quad \overset { ( i ) } { \leq } \frac { 2 \gamma \| \pmb { w } _ { 0 } \| + \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \eta \gamma ^ { 2 } r } , } \end{array}
$$
where $( i )$ uses Lemma 4.4 together with $\| \pmb { w } _ { r } - \pmb { w } _ { 0 } \| \leq \| \pmb { w } _ { r } \| + \| \pmb { w } _ { 0 } \|$ . Recall that $\begin{array} { r } { \psi = \operatorname* { m i n } \left( \frac { \gamma } { 1 4 0 \eta K M } , \frac { 1 } { 2 M n } \right) } \end{array}$ ; we want the RHS of Equation 126 to be at most $\psi$ . So we want
$$
\begin{array} { r } { \psi \geq \frac { 2 \gamma \| \pmb { w } _ { 0 } \| + \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \eta \gamma ^ { 2 } r } } \\ { r \geq \frac { 2 \gamma \| \pmb { w } _ { 0 } \| + \sqrt { 2 } + \eta + \log ( 1 + \eta \gamma ^ { 2 } K r ^ { 2 } ) } { \eta \gamma ^ { 2 } \psi } . } \end{array}
$$
Applying Lemma B.6 with
$$
A = \frac { 2 \gamma \| \pmb { w } _ { 0 } \| + \sqrt { 2 } + \eta } { \eta \gamma ^ { 2 } \psi } , \quad B = \frac { 1 } { \eta \gamma ^ { 2 } \psi } , \quad C = \eta \gamma ^ { 2 } K ,
$$
Equation 128 is satisfied when
$$
r \geq \tau : = \frac { 1 } { \eta \gamma ^ { 2 } \psi } \left( 4 \gamma \| \pmb { w } _ { 0 } \| + 2 \sqrt { 2 } + 2 \eta + \log \left( 1 + \frac { \sqrt { K } } { \sqrt { \eta } \gamma \psi } \right) \right) .
$$
In particular, Equation 128 is satisfied with $r = \tau$ . So, letting $r _ { 0 } = \arg \operatorname* { m i n } _ { 0 \leq s < \tau } G ( \pmb { w } _ { s } )$ ,
$$
G ( \pmb { w } _ { r _ { 0 } } ) \leq \frac { 1 } { \tau } \sum _ { s = 0 } ^ { \tau - 1 } G ( \pmb { w } _ { s } ) \leq \psi .
$$
We can now bound $F ( w _ { r _ { 0 } } )$ in terms of $G ( w _ { r _ { 0 } } )$ . First, since $\begin{array} { r } { G ( { \pmb w } _ { r _ { 0 } } ) \le \frac 1 { 2 M n } } \end{array}$ , we have for each $m \in [ M ] , i \in [ n ]$ ,
$$
\frac { 1 } { M n } | \ell ^ { \prime } ( \langle w _ { r _ { 0 } } , \pmb { x } _ { m , i } \rangle ) | \leq \frac { 1 } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } | \ell ^ { \prime } ( \langle w _ { r _ { 0 } } , \pmb { x } _ { m , i } \rangle ) | = G ( \pmb { w } _ { r _ { 0 } } ) \leq \frac { 1 } { 2 M n } ,
$$
so
$$
\begin{array} { r } { | \ell ^ { \prime } ( \langle w _ { r _ { 0 } } , \pmb { x } _ { m , i } \rangle ) | \leq \frac { 1 } { 2 } } \\ { \frac { 1 } { 1 + \exp ( \langle \pmb { \mathscr { w } } _ { r _ { 0 } } , \pmb { x } _ { m , i } \rangle ) } \leq \frac { 1 } { 2 } } \\ { \langle \pmb { \mathscr { w } } _ { r _ { 0 } } , \pmb { x } _ { m , i } \rangle \geq 0 , } \end{array}
$$
so that every point is classified correctly by ${ \pmb w } _ { r _ { 0 } }$ . Therefore
$$
\begin{array} { r l r } { { F ( w _ { r _ { 0 } } ) = \frac { 1 } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \log ( 1 + \exp ( - \langle w _ { r _ { 0 } } , x _ { m , i } \rangle ) ) } } \\ & { } & { \leq \frac { 1 } { M n } \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \exp ( - \langle w _ { r _ { 0 } } , x _ { m , i } \rangle ) } \\ & { } & { \stackrel { ( i ) } { \leq } \frac { 1 } { M n } \displaystyle \sum _ { m = 1 } ^ { M } \sum _ { i = 1 } ^ { n } \frac { 2 } { 1 + \exp ( \langle w _ { r _ { 0 } } , x _ { m , i } \rangle ) } } \\ & { } & { \leq 2 G ( w _ { r _ { 0 } } ) \leq 2 \psi = \operatorname* { m i n } ( \frac { \gamma } { 7 0 \eta K M } , \frac { 1 } { M n } ) , } \end{array}
$$
where $( i )$ uses $1 \leq \exp ( \langle w _ { r _ { 0 } } , x _ { m , i } \rangle )$ . □
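The chain above rests on two elementary facts about the logistic loss at nonnegative margins: $\log(1 + e^{-z}) \leq e^{-z}$ and $e^{-z} \leq 2/(1 + e^z)$ for $z \geq 0$, which together give the per-example bound $\ell(z) \leq 2 |\ell'(z)|$. A quick numerical check (not part of the proof):

```python
import numpy as np

# For z >= 0: log(1 + e^{-z}) <= e^{-z} <= 2/(1 + e^{z}),
# so each loss term is at most twice the corresponding |l'| term.
z = np.linspace(0.0, 20.0, 2001)         # nonnegative margins only
loss_terms = np.log1p(np.exp(-z))        # l(z)
score_terms = 1.0 / (1.0 + np.exp(z))    # |l'(z)|
assert np.all(loss_terms <= np.exp(-z) + 1e-12)
assert np.all(np.exp(-z) <= 2.0 * score_terms + 1e-12)
assert np.all(loss_terms <= 2.0 * score_terms + 1e-12)
```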
Theorem A.7 (Restatement of Theorem 4.2). Denote $\begin{array} { r } { \psi = \operatorname* { m i n } \left( \frac { \gamma } { 1 4 0 \eta K M } , \frac { 1 } { 2 M n } \right) } \end{array}$ and
$$
\tau = \frac { 4 \gamma \| \pmb { w } _ { 0 } \| + 2 \sqrt { 2 } + 2 \eta + \log \left( 1 + \frac { \sqrt { K } } { \sqrt { \eta } \gamma \psi } \right) } { \eta \gamma ^ { 2 } \psi } .
$$
For every $r \geq \tau$ , Local GD satisfies
$$
F ( \pmb { w } _ { r } ) \leq \frac { 1 6 } { \eta \gamma ^ { 2 } K ( r - \tau ) } .
$$
Proof. The proof of this theorem has a structure similar to that of Lemma 4.6. When the loss $F ( w _ { s } )$ is small, the total movement $\lVert \pmb { w } _ { s + 1 } - \pmb { w } _ { s } \rVert$ can be bounded (Lemma 4.7); when the movement is bounded, we can apply a modified descent inequality (Lemma 4.5), which shows decrease of the loss when $F ( w _ { s } )$ is small. The main difference compared to Lemma 4.6 is that the update ${ \pmb w } _ { s + 1 } - { \pmb w } _ { s }$ is not necessarily parallel to the gradient $\nabla F ( { \pmb w } _ { s } )$ . However, Lemma 4.8 shows that the magnitude of this bias is negligible compared to the magnitude of the gradient. Finally, Lemma 4.9 implies that the conditions of these lemmas (that $F ( w _ { r } )$ is below some threshold) are met for some $r \leq \tau$ . We execute this argument below.
By Lemma 4.9, there exists some $r _ { 0 } \le \tau$ such that $\begin{array} { r } { F ( { \pmb w } _ { r _ { 0 } } ) \le \frac { \gamma } { 7 0 \eta K M } } \end{array}$ . We will prove $F ( { \pmb w } _ { r } ) \le F ( { \pmb w } _ { r _ { 0 } } )$ for all $r \geq r _ { 0 }$ by induction. Clearly it holds for $r = r _ { 0 }$ , so suppose it holds for some $r \geq r _ { 0 }$ . Notice that the condition $\lVert \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \rVert \leq 1$ of Lemma 4.5 is satisfied, since
$$
\left\| \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \right\| = \left\| \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \pmb { w } _ { r , K } ^ { m } - \pmb { w } _ { r } \right\| \leq \frac { 1 } { M } \sum _ { m = 1 } ^ { M } \left\| \pmb { w } _ { r , K } ^ { m } - \pmb { w } _ { r } \right\| \overset { ( i ) } { \leq } 1 ,
$$
where $( i )$ uses Lemma 4.7. Recall that $\pmb { w } _ { r + 1 } - \pmb { w } _ { r } = - \eta K ( \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } )$ . By applying Lemma 4.5:
$$
\begin{array} { r l } { F ( \pmb { w } _ { r + 1 } ) - F ( \pmb { w } _ { r } ) } & { \leq \langle \nabla F ( \pmb { w } _ { r } ) , \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \rangle + 4 F ( \pmb { w } _ { r } ) \| \pmb { w } _ { r + 1 } - \pmb { w } _ { r } \| ^ { 2 } } \\ & { = - \eta K \langle \nabla F ( \pmb { w } _ { r } ) , \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \rangle + 4 \eta ^ { 2 } K ^ { 2 } F ( \pmb { w } _ { r } ) \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| ^ { 2 } } \\ & { = - \eta K \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| ^ { 2 } + \eta K \langle \pmb { b } _ { r } , \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \rangle + 4 \eta ^ { 2 } K ^ { 2 } F ( \pmb { w } _ { r } ) \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| ^ { 2 } } \\ & { = - \eta K \left( 1 - 4 \eta K F ( \pmb { w } _ { r } ) \right) \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| ^ { 2 } + \eta K \langle \pmb { b } _ { r } , \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \rangle } \\ & { \leq - \eta K \left( 1 - 4 \eta K F ( \pmb { w } _ { r } ) \right) \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| ^ { 2 } + \eta K \| \pmb { b } _ { r } \| \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| . }
\end{array}
$$
By Lemma 4.8, we have $\begin{array} { r } { \| \pmb { b } _ { r } \| \leq \frac { 1 } { 5 } \| \nabla F ( \pmb { w } _ { r } ) \| } \end{array}$ . Therefore
$$
\| \nabla F ( { \pmb w } _ { r } ) + { \pmb b } _ { r } \| \geq \| \nabla F ( { \pmb w } _ { r } ) \| - \| { \pmb b } _ { r } \| \geq 4 \| { \pmb b } _ { r } \| ,
$$
so $\| \pmb { b } _ { r } \| \leq \| \nabla F ( \pmb { w } _ { r } ) + \pmb { b } _ { r } \| / 4$ . Plugging this back into Equation 148,
$$
\begin{array} { r l } { F ( { \pmb w } _ { r + 1 } ) - F ( { \pmb w } _ { r } ) } & { \le - \eta K \left( 1 - 4 \eta K F ( { \pmb w } _ { r } ) - \frac { 1 } { 4 } \right) \| \nabla F ( { \pmb w } _ { r } ) + { \pmb b } _ { r } \| ^ { 2 } } \\ & { \stackrel { ( i ) } { \le } - \frac { 1 } { 2 } \eta K \| \nabla F ( { \pmb w } _ { r } ) + { \pmb b } _ { r } \| ^ { 2 } } \\ & { \stackrel { ( i i ) } { \le } - \frac { 1 } { 4 } \eta K \| \nabla F ( { \pmb w } _ { r } ) \| ^ { 2 } } \\ & { \stackrel { ( i i i ) } { \le } - \frac { 1 } { 1 6 } \eta \gamma ^ { 2 } K F ( { \pmb w } _ { r } ) ^ { 2 } , }
\end{array}
$$
where $( i )$ uses the condition $F ( { \pmb w } _ { r } ) \le \gamma / ( 7 0 \eta K M ) , ( i i )$ uses
$$
\| \nabla F ( { \pmb w } _ { r } ) + { \pmb b } _ { r } \| \geq \| \nabla F ( { \pmb w } _ { r } ) \| - \| { \pmb b } _ { r } \| \geq \frac { 4 } { 5 } \| \nabla F ( { \pmb w } _ { r } ) \| ,
$$
and $( i i i )$ uses $\begin{array} { r } { \| \nabla F ( \pmb { w } ) \| \geq \frac { \gamma } { 2 } F ( \pmb { w } ) } \end{array}$ from Lemma B.2. Equation 148 completes the induction, so $F ( { \pmb w } _ { r } ) \le F ( { \pmb w } _ { r _ { 0 } } )$ for all $r \geq r _ { 0 }$ . Further, Equation 148 holds for all $r \geq r _ { 0 }$ , so we can unroll it to get an upper bound on $F ( w _ { r } )$ . Dividing both sides of Equation 148 by $F ( { \pmb w } _ { r } ) F ( { \pmb w } _ { r + 1 } )$ ,
$$
\begin{array} { r l } & { \frac { 1 } { F ( w _ { r } ) } - \frac { 1 } { F ( w _ { r + 1 } ) } \le - \frac { 1 } { 1 6 } \eta \gamma ^ { 2 } K \frac { F ( w _ { r } ) } { F ( w _ { r + 1 } ) } } \\ & { \qquad \frac { 1 } { F ( w _ { r + 1 } ) } \ge \frac { 1 } { F ( w _ { r } ) } + \frac { 1 } { 1 6 } \eta \gamma ^ { 2 } K \frac { F ( w _ { r } ) } { F ( w _ { r + 1 } ) } } \\ & { \qquad \frac { 1 } { F ( w _ { r + 1 } ) } \overset { ( i ) } { \ge } \frac { 1 } { F ( w _ { r } ) } + \frac { 1 } { 1 6 } \eta \gamma ^ { 2 } K . } \end{array}
$$
where $( i )$ uses $F ( \pmb { w } _ { r + 1 } ) \leq F ( \pmb { w } _ { r } )$ , which holds by Equation 148. Unrolling from $r _ { 0 }$ to $r$ ,
$$
\frac { 1 } { F ( \pmb { w } _ { r } ) } \geq \frac { 1 } { F ( \pmb { w } _ { r _ { 0 } } ) } + \frac { 1 } { 1 6 } \eta \gamma ^ { 2 } K ( r - r _ { 0 } ) \geq \frac { 1 } { 1 6 } \eta \gamma ^ { 2 } K ( r - r _ { 0 } ) ,
$$
so
$$
F ( { \pmb w } _ { r } ) \le \frac { 1 6 } { \eta \gamma ^ { 2 } K ( r - r _ { 0 } ) } .
$$
Recall that $r _ { 0 } \leq \tau$ , so $r - r _ { 0 } \ge r - \tau$ , and finally
$$
F ( { \pmb w } _ { r } ) \le \frac { 1 6 } { \eta \gamma ^ { 2 } K ( r - \tau ) } .
$$
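The qualitative conclusion of the theorem, that Local GD drives the loss toward zero on separable data once past the burn-in phase, can be illustrated in simulation. The sketch below is not part of the analysis: the dimensions, margin, and step sizes are illustrative, and we only check that the loss is driven down substantially, not the exact $O(1/(r - \tau))$ constant.

```python
import numpy as np

# Illustrative end-to-end run of Local GD on separable data (labels absorbed
# into x, unit-norm rows, margin at least gamma along e_1).
rng = np.random.default_rng(3)
M, n, d, gamma = 4, 8, 3, 0.5

def make_machine():
    x1 = gamma + (1.0 - gamma) * rng.random(n)          # first coord in [gamma, 1]
    rest = rng.normal(size=(n, d - 1))
    rest /= np.linalg.norm(rest, axis=1, keepdims=True)
    rest *= np.sqrt(1.0 - x1**2)[:, None]
    return np.c_[x1, rest]                               # unit-norm rows

Xs = [make_machine() for _ in range(M)]

def F(w):
    return np.mean([np.mean(np.log1p(np.exp(-X @ w))) for X in Xs])

def local_grad(X, w):
    return np.mean(-X / (1.0 + np.exp(X @ w))[:, None], axis=0)

eta, K, R = 0.1, 5, 400
w = np.zeros(d)
losses = [F(w)]
for _ in range(R):
    round_iterates = []
    for X in Xs:                                         # K local steps per machine
        wm = w.copy()
        for _ in range(K):
            wm -= eta * local_grad(X, wm)
        round_iterates.append(wm)
    w = np.mean(round_iterates, axis=0)                  # averaging step
    losses.append(F(w))
assert losses[-1] < 0.5 * losses[0]                      # loss driven toward zero
```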
# A.3. Proof of Corollary 4.3
Corollary A.8 (Restatement of Corollary 4.3). Suppose $\begin{array} { r } { R \ge \widetilde \Omega \left( \operatorname* { m a x } \left( \frac { M n } { \gamma ^ { 2 } } , \frac { K M } { \gamma ^ { 3 } } \right) \right) } \end{array}$ . With $\mathbf { { w } } _ { 0 } ~ = ~ \mathbf { { 0 } }$ , $\eta \geq 1$ , and $\eta K = \widetilde { \Theta } ( \frac { \gamma ^ { 3 } R } { M } )$ , Local GD satisfies
$$
F ( \pmb { w } _ { R } ) \leq \widetilde { O } \left( \frac { M } { \gamma ^ { 5 } R ^ { 2 } } \right) .
$$
Proof. With our choices of $\boldsymbol { w } _ { 0 } , \eta$ , and $\eta K$ , the transition time $\tau$ becomes
$$
\begin{array} { r l } { \tau } & { = \frac { 2 \sqrt { 2 } + 2 \eta + \log \left( 1 + \frac { \sqrt { K } } { \sqrt { \eta } \gamma \psi } \right) } { \eta \gamma ^ { 2 } \psi } } \\ & { = \widetilde { \mathcal { O } } \left( \frac { 1 + \eta } { \eta \gamma ^ { 2 } \psi } \right) } \\ & { \stackrel { ( i ) } { = } \widetilde { \mathcal { O } } \left( \frac { 1 } { \gamma ^ { 2 } \psi } \right) } \\ & { \stackrel { ( i i ) } { = } \widetilde { \mathcal { O } } \left( \operatorname* { m a x } \left( \frac { \eta K M } { \gamma ^ { 3 } } , \frac { M n } { \gamma ^ { 2 } } \right) \right) } \\ & { \stackrel { ( i i i ) } { = } \widetilde { \mathcal { O } } \left( \operatorname* { m a x } \left( R , \frac { M n } { \gamma ^ { 2 } } \right) \right) } \\ & { \stackrel { ( i v ) } { = } \widetilde { \mathcal { O } } \left( R \right) , }
\end{array}
$$
where $( i )$ uses $\eta \geq 1$ , $( i i )$ uses the definition of $\psi , ( i i i )$ uses the choice of $\eta K$ , and $( i v )$ uses the condition
$$
R \geq \widetilde \Omega \left( \frac { M n } { \gamma ^ { 2 } } \right) .
$$
Therefore, we can ensure that $R \geq 2 \tau$ with the appropriate choice of constant/logarithmic multiplicative factors on the RHS of Equation 168. Since $R \geq \tau$ , Theorem 4.2 implies
$$
\begin{array} { r l } { F ( \pmb { w } _ { R } ) } & { \leq \frac { 1 6 } { \eta \gamma ^ { 2 } K ( R - \tau ) } } \\ & { \stackrel { ( i ) } { \leq } \frac { 3 2 } { \eta \gamma ^ { 2 } K R } } \\ & { \stackrel { ( i i ) } { \leq } \widetilde { \mathcal { O } } \left( \frac { M } { \gamma ^ { 5 } R ^ { 2 } } \right) , }
\end{array}
$$
where $( i )$ uses $R - \tau \geq R / 2$ , since $R \geq 2 \tau$ , and $( i i )$ uses the choice $\begin{array} { r } { \eta K = \widetilde { \Theta } \left( \frac { \gamma ^ { 3 } R } { M } \right) } \end{array}$ . Note that the condition $\begin{array} { r } { R \geq \widetilde \Omega \left( { \frac { K M } { \gamma ^ { 3 } } } \right) } \end{array}$ is necessary to ensure that the choice $\begin{array} { r } { \eta K = \widetilde { \Theta } \left( \frac { \gamma ^ { 3 } R } { M } \right) } \end{array}$ is compatible with the requirement $\eta \geq 1$ . □
# B. Auxiliary Lemmas
Lemma B.1 (Lemma 25 from (Crawshaw et al., 2025)). For every $\pmb { w } \in \mathbb { R } ^ { d }$ ,
$$
\| \nabla F _ { m } ( { \pmb w } ) \| \le F _ { m } ( { \pmb w } ) \quad a n d \quad \| \nabla F ( { \pmb w } ) \| \le F ( { \pmb w } ) .
$$
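A numerical spot check of this self-bounding property (assuming, as in this setting, features of norm at most one, with labels absorbed into $\pmb{x}$):

```python
import numpy as np

# Spot check of ||grad F_m(w)|| <= F_m(w) for the logistic loss with
# labels absorbed into x and ||x|| <= 1.
rng = np.random.default_rng(4)
X = rng.normal(size=(30, 6))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm features

for _ in range(200):
    w = 3.0 * rng.normal(size=6)               # random test points
    z = X @ w
    F = np.mean(np.log1p(np.exp(-z)))
    g = np.mean(-X / (1.0 + np.exp(z))[:, None], axis=0)
    assert np.linalg.norm(g) <= F + 1e-12
```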
Lemma B.2 (Lemma 26 of (Crawshaw et al., 2025)). If $\pmb { w } \in \mathbb { R } ^ { d }$ such that $\langle { \pmb w } , { \pmb x } _ { i } ^ { m } \rangle \geq 0$ for a given $m \in [ M ]$ and all $i \in [ n ]$ , then
$$
\| \nabla F _ { m } ( \pmb { w } ) \| \geq \frac { \gamma } { 2 } F _ { m } ( \pmb { w } ) .
$$
Similarly, if $\langle \pmb { w } , \pmb { x } _ { m , i } \rangle \geq 0$ for all $m \in [ M ]$ and all $i \in [ n ]$ , then
$$
\| \nabla F ( \pmb { w } ) \| \geq \frac { \gamma } { 2 } F ( \pmb { w } ) .
$$
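This reverse bound can also be spot-checked numerically. The construction below is illustrative: unit-norm rows whose first coordinate is at least $\gamma$, so that $\pmb{w}_* = \pmb{e}_1$ attains margin $\gamma$, and test points $\pmb{w} = t \pmb{e}_1$ make every margin nonnegative as the lemma requires.

```python
import numpy as np

# Spot check of ||grad F(w)|| >= (gamma/2) F(w) when every margin is nonnegative.
rng = np.random.default_rng(5)
n, d, gamma = 25, 5, 0.3
x1 = gamma + (1.0 - gamma) * rng.random(n)     # first coordinate in [gamma, 1]
rest = rng.normal(size=(n, d - 1))
rest /= np.linalg.norm(rest, axis=1, keepdims=True)
rest *= np.sqrt(1.0 - x1**2)[:, None]
X = np.c_[x1, rest]                            # unit-norm rows, margin >= gamma

for t in [0.0, 0.5, 2.0, 5.0]:
    w = np.zeros(d)
    w[0] = t
    z = X @ w
    assert np.all(z >= 0)                      # premise of the lemma
    F = np.mean(np.log1p(np.exp(-z)))
    g = np.mean(-X / (1.0 + np.exp(z))[:, None], axis=0)
    assert np.linalg.norm(g) >= (gamma / 2) * F
```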
Lemma B.3 (Lemma 1 from (Crawshaw et al., 2025)). For every $\pmb { w } _ { 1 } , \pmb { w } _ { 2 } \in \mathbb { R } ^ { d }$ ,
$$
\| \nabla ^ { 2 } F _ { m } ( w _ { 2 } ) \| \leq F _ { m } ( w _ { 1 } ) \left( 1 + \| w _ { 2 } - w _ { 1 } \| \left( 1 + \exp ( \| w _ { 2 } - w _ { 1 } \| ^ { 2 } ) \left( 1 + { \frac { 1 } { 2 } } \| w _ { 2 } - w _ { 1 } \| ^ { 2 } \right) \right) \right) .
$$
Lemma B.4. For $\pmb { w } _ { 1 } , \pmb { w } _ { 2 } \in \mathbb { R } ^ { d }$ , if $\Vert \pmb { w } _ { 1 } - \pmb { w } _ { 2 } \Vert \leq 1$ , then
$$
\begin{array} { r } { \| \nabla F _ { m } ( \pmb { w } _ { 2 } ) - \nabla F _ { m } ( \pmb { w } _ { 1 } ) \| \leq 7 F _ { m } ( \pmb { w } _ { 1 } ) \| \pmb { w } _ { 2 } - \pmb { w } _ { 1 } \| . } \end{array}
$$
Proof. The proof is a direct calculation, leveraging the upper bound on the objective’s Hessian norm from Lemma B.3. Let $\lambda = \| \pmb { w } _ { 2 } - \pmb { w } _ { 1 } \|$ and $\begin{array} { r } { \pmb { v } = \frac { \pmb { w } _ { 2 } - \pmb { w } _ { 1 } } { \| \pmb { w } _ { 2 } - \pmb { w } _ { 1 } \| } } \end{array}$ . By the fundamental theorem of calculus,
$$
\begin{array} { r l } { \nabla F _ { m } ( \pmb { w } _ { 2 } ) - \nabla F _ { m } ( \pmb { w } _ { 1 } ) } & { = \displaystyle \int _ { 0 } ^ { \lambda } \nabla ^ { 2 } F _ { m } ( \pmb { w } _ { 1 } + t \pmb { v } ) \pmb { v } \ d t } \\ { \| \nabla F _ { m } ( \pmb { w } _ { 2 } ) - \nabla F _ { m } ( \pmb { w } _ { 1 } ) \| } & { = \displaystyle \left\| \int _ { 0 } ^ { \lambda } \nabla ^ { 2 } F _ { m } ( \pmb { w } _ { 1 } + t \pmb { v } ) \pmb { v } \ d t \right\| } \\ & { \leq \displaystyle \int _ { 0 } ^ { \lambda } \left\| \nabla ^ { 2 } F _ { m } ( \pmb { w } _ { 1 } + t \pmb { v } ) \pmb { v } \right\| d t } \\ & { \leq \displaystyle \int _ { 0 } ^ { \lambda } \left\| \nabla ^ { 2 } F _ { m } ( \pmb { w } _ { 1 } + t \pmb { v } ) \right\| d t } \\ & { \stackrel { ( i ) } { \leq } \displaystyle \int _ { 0 } ^ { \lambda } 7 F _ { m } ( \pmb { w } _ { 1 } ) \, d t } \\ & { = 7 F _ { m } ( \pmb { w } _ { 1 } ) \lambda , }
\end{array}
$$
where $( i )$ uses Lemma B.3, noting that the condition $\| ( \pmb { w } _ { 1 } + t \pmb { v } ) - \pmb { w } _ { 1 } \| \le 1$ is satisfied by the assumption $\| \pmb { w } _ { 2 } - \pmb { w } _ { 1 } \| \leq 1$ . □
Lemma B.5 (Restatement of Lemma 4.5). For $\pmb { w } , \pmb { w } ^ { \prime } \in \mathbb { R } ^ { d }$ , if $\Vert \pmb { w } - \pmb { w } ^ { \prime } \Vert \leq 1$ , then
$$
F _ { m } ( \pmb { w } ^ { \prime } ) \leq F _ { m } ( \pmb { w } ) + \langle \nabla F _ { m } ( \pmb { w } ) , \pmb { w } ^ { \prime } - \pmb { w } \rangle + 4 F _ { m } ( \pmb { w } ) \| \pmb { w } ^ { \prime } - \pmb { w } \| ^ { 2 } ,
$$
and
$$
F ( \pmb { w } ^ { \prime } ) \leq F ( \pmb { w } ) + \langle \nabla F ( \pmb { w } ) , \pmb { w } ^ { \prime } - \pmb { w } \rangle + 4 F ( \pmb { w } ) \| \pmb { w } ^ { \prime } - \pmb { w } \| ^ { 2 } .
$$
Proof. To prove this fact, we write $F _ { m }$ as a second-order Taylor series centered at $\textbf { \em w }$ , then use Lemma B.3 to upper bound the quadratic term.
Let $\lambda = \| \pmb { w } ^ { \prime } - \pmb { w } \|$ and $\begin{array} { r } { \pmb { v } = \frac { \pmb { w } ^ { \prime } - \pmb { w } } { \Vert \pmb { w } ^ { \prime } - \pmb { w } \Vert } } \end{array}$ . Then
$$
F _ { m } ( \pmb { w } ^ { \prime } ) = F _ { m } ( \pmb { w } ) + \langle \nabla F _ { m } ( \pmb { w } ) , \pmb { w } ^ { \prime } - \pmb { w } \rangle + \underbrace { \int _ { 0 } ^ { \lambda } ( \lambda - t ) \langle \pmb { v } , \nabla ^ { 2 } F _ { m } ( \pmb { w } + t \pmb { v } ) \pmb { v } \rangle } _ { Q } d t .
$$
The quadratic term $Q$ can be bounded as follows:
$$
\begin{array} { r l } & { Q \le \displaystyle \int _ { 0 } ^ { \lambda } ( \lambda - t ) \| v \| \left\| \nabla ^ { 2 } F _ { m } ( w + t v ) v \right\| d t } \\ & { \quad \le \displaystyle \int _ { 0 } ^ { \lambda } ( \lambda - t ) \left\| \nabla ^ { 2 } F _ { m } ( w + t v ) \right\| d t } \\ & { \quad \overset { ( i ) } { \le } 7 F _ { m } ( w ) \displaystyle \int _ { 0 } ^ { \lambda } ( \lambda - t ) d t } \\ & { \quad = \displaystyle \frac { 7 } { 2 } F _ { m } ( w ) \lambda ^ { 2 } , } \end{array}
$$
where $( i )$ uses Lemma B.3 to bound $\| \nabla ^ { 2 } F _ { m } ( \pmb { w } + t \pmb { v } ) \|$ , using the condition that $\| ( \pmb { w } + t \pmb { v } ) - \pmb { w } \| \leq \lambda \leq 1$ . Plugging this into Equation 185 gives Equation 183, and averaging over $m \in [ M ]$ gives Equation 184. □
Lemma B.6. For $A , B , C \geq 0$ , the inequality
$$
x \geq A + B \log ( 1 + C x ^ { 2 } )
$$
is satisfied when
$$
x \geq 2 A + B \log ( 1 + B { \sqrt { C } } ) .
$$
Proof. Using concavity of $\sqrt { \cdot }$ and log,
$$
\begin{array} { r l } & { \displaystyle { A + B \log ( 1 + C x ^ { 2 } ) = A + \frac { B } { 2 } \log ( \sqrt { 1 + C x ^ { 2 } } ) } } \\ & { \qquad \leq A + \frac { B } { 2 } \log ( 1 + \sqrt { C } x ) } \\ & { \qquad \leq A + \frac { B } { 2 } \left( \log ( 1 + B \sqrt { C } ) + \frac { \sqrt { C } } { 1 + B \sqrt { C } } ( x - B ) \right) } \\ & { \qquad \leq A + \frac { B } { 2 } \left( \log ( 1 + B \sqrt { C } ) + \frac { x } { B } \right) } \\ & { \qquad = A + \frac { B } { 2 } \log ( 1 + B \sqrt { C } ) + \frac { x } { 2 } . } \end{array}
$$
So, to satisfy Equation 190, it suffices that
$$
\begin{array} { l } { \displaystyle x \geq A + \frac { B } { 2 } \log ( 1 + B \sqrt { C } ) + \frac { x } { 2 } } \\ { \displaystyle \frac { x } { 2 } \geq A + \frac { B } { 2 } \log ( 1 + B \sqrt { C } ) } \\ { \displaystyle x \geq 2 A + B \log ( 1 + B \sqrt { C } ) . } \end{array}
$$
An important part of the proofs of Theorem 4.1 and Lemma 4.9 is the lower bound
$$
\beta _ { r , i } ^ { m } : = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { | \ell ^ { \prime } ( b _ { r , i , k } ^ { m } ) | } { | \ell ^ { \prime } ( b _ { r , i } ^ { m } ) | } \geq \frac { 1 } { K } ,
$$
which comes by ignoring all terms of the sum coming from $k > 0$ . This may seem pessimistic, but the following lemma shows that for the case $n = 1$ , this bound is tight up to logarithmic multiplicative factors for certain values of ${ \pmb w } _ { r }$ .
Lemma B.7. Suppose $n = 1$ and ${ \pmb w } _ { r } = { \bf 0 }$ . Then $\begin{array} { r } { \beta _ { r , i } ^ { m } \leq \mathcal { O } \left( \frac { 1 } { K } + \frac { 1 } { \eta \gamma ^ { 2 } K } \log \left( 1 + \eta \gamma ^ { 2 } K \right) \right) } \end{array}$ , and if additionally $\eta \geq 1$ , then $\begin{array} { r } { \beta _ { r , i } ^ { m } \leq \widetilde { \mathcal { O } } \Big ( \frac { 1 } { K } \Big ( 1 + \frac { 1 } { \gamma ^ { 2 } } \Big ) \Big ) . } \end{array}$
Proof. Since $n = 1$ , we omit the index $i \in [ n ]$ . We will also denote $\gamma _ { m } = \| \pmb { x } ^ { m } \|$ . Recall that $\ell ( z ) = \log ( 1 + \exp ( - z ) )$ , so $\begin{array} { r } { | \ell ^ { \prime } ( z ) | = \frac { 1 } { 1 + \exp ( z ) } } \end{array}$ , and recall the definitions $b _ { r } ^ { m } = \langle { \pmb w } _ { r } , { \pmb x } ^ { m } \rangle$ and $b _ { r , k } ^ { m } = \langle \pmb { w } _ { r , k } ^ { m } , \pmb { x } ^ { m } \rangle$ . Then we want to upper bound
$$
\beta _ { r } ^ { m } = \frac { 1 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 + \exp ( \langle \pmb { w } _ { r } , \pmb { x } ^ { m } \rangle ) } { 1 + \exp ( \langle \pmb { w } _ { r , k } ^ { m } , \pmb { x } ^ { m } \rangle ) } .
$$
When $n = 1$ , each local trajectory is relatively simple to analyze, since the updates $\pmb { w } _ { r , k + 1 } ^ { m } - \pmb { w } _ { r , k } ^ { m }$ are always parallel to $\pmb { x } ^ { m }$ . For this case, we will consider the gradient flow trajectory of $F _ { m }$ initialized at ${ \pmb w } _ { r }$ . Since $n = 1$ , the gradient flow has a convenient analytical form while also providing a lower bound for $b _ { r , k } ^ { m }$ , which will in turn give our upper bound for $\beta _ { r } ^ { m }$ .
Let $\widetilde { \pmb { w } } _ { r } ^ { m } : [ 0 , \infty ) \to \mathbb { R } ^ { d }$ be the gradient flow of $F _ { m }$ initialized at ${ \pmb w } _ { r }$ , so that $\widetilde { \pmb { w } } _ { r } ^ { m }$ is the unique solution to
$$
\frac { d } { d t } \widetilde { \pmb { w } } _ { r } ^ { m } ( t ) = - \eta \nabla F _ { m } ( \widetilde { \pmb { w } } _ { r } ^ { m } ( t ) ) \quad \mathrm { a n d } \quad \widetilde { \pmb { w } } _ { r } ^ { m } ( 0 ) = \pmb { w } _ { r } .
$$
Then define $\widetilde { b } _ { r } ^ { m } ( t ) = \langle \widetilde { \pmb { w } } _ { r } ^ { m } ( t ) , \pmb { x } ^ { m } \rangle$ , so that
$$
\begin{aligned}
\frac { d } { d t } \widetilde { b } _ { r } ^ { m } ( t ) &= \left\langle \frac { d } { d t } \widetilde { \pmb { w } } _ { r } ^ { m } ( t ) , \pmb { x } ^ { m } \right\rangle \\
&= - \eta \left\langle \nabla F _ { m } ( \widetilde { \pmb { w } } _ { r } ^ { m } ( t ) ) , \pmb { x } ^ { m } \right\rangle \\
&= - \eta \left\langle \frac { - \pmb { x } ^ { m } } { 1 + \exp \left( \langle \widetilde { \pmb { w } } _ { r } ^ { m } ( t ) , \pmb { x } ^ { m } \rangle \right) } , \pmb { x } ^ { m } \right\rangle \\
&= \frac { \eta \gamma _ { m } ^ { 2 } } { 1 + \exp \left( \widetilde { b } _ { r } ^ { m } ( t ) \right) } .
\end{aligned}
$$
We claim that $\widetilde { b } _ { r } ^ { m } ( k ) \leq b _ { r , k } ^ { m }$ , which we show by induction on $k$ . Clearly it holds for $k = 0$ , since $\widetilde { b } _ { r } ^ { m } ( 0 ) = b _ { r } ^ { m } = b _ { r , 0 } ^ { m }$ . So suppose it holds for some $k \geq 0$ . If $\widetilde { b } _ { r } ^ { m } ( k + 1 ) \leq b _ { r , k } ^ { m }$ , then we are done, since $b _ { r , k + 1 } ^ { m } \geq b _ { r , k } ^ { m }$ . Otherwise, by the intermediate value theorem, there exists some $t _ { 0 } \in [ k , k + 1 ]$ such that $\widetilde { b } _ { r } ^ { m } ( t _ { 0 } ) = b _ { r , k } ^ { m }$ , so
$$
\begin{aligned}
\widetilde { b } _ { r } ^ { m } ( k + 1 ) &= \widetilde { b } _ { r } ^ { m } ( t _ { 0 } ) + \int _ { t _ { 0 } } ^ { k + 1 } \frac { d } { d t } \widetilde { b } _ { r } ^ { m } ( t ) \, d t \\
&= b _ { r , k } ^ { m } + \eta \gamma _ { m } ^ { 2 } \int _ { t _ { 0 } } ^ { k + 1 } \frac { 1 } { 1 + \exp ( \widetilde { b } _ { r } ^ { m } ( t ) ) } \, d t \\
&\leq b _ { r , k } ^ { m } + \eta \gamma _ { m } ^ { 2 } \int _ { t _ { 0 } } ^ { k + 1 } \frac { 1 } { 1 + \exp ( \widetilde { b } _ { r } ^ { m } ( t _ { 0 } ) ) } \, d t \\
&= b _ { r , k } ^ { m } + \eta \gamma _ { m } ^ { 2 } ( k + 1 - t _ { 0 } ) \frac { 1 } { 1 + \exp ( b _ { r , k } ^ { m } ) } \\
&\leq b _ { r , k } ^ { m } + \eta \gamma _ { m } ^ { 2 } \frac { 1 } { 1 + \exp ( b _ { r , k } ^ { m } ) } \\
&= b _ { r , k + 1 } ^ { m } ,
\end{aligned}
$$
This completes the induction, so we know $\begin{array} { r } { \widetilde { b } _ { r } ^ { m } ( k ) \le b _ { r , k } ^ { m } } \end{array}$ for all $k$ . From Equation 201, this means
$$
\beta _ { r } ^ { m } \le \frac { 1 + \exp ( b _ { r } ^ { m } ) } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 } { 1 + \exp ( \widetilde { b } _ { r } ^ { m } ( k ) ) } .
$$
Also, we can directly solve the ODE in Equation 206 for $\widetilde { b } _ { r } ^ { m } ( t )$ :
$$
\begin{array} { c } { \displaystyle \frac { d } { d t } \widetilde { b } _ { r } ^ { m } ( t ) = \frac { \eta \gamma _ { m } ^ { 2 } } { 1 + \exp ( \widetilde { b } _ { r } ^ { m } ( t ) ) } } \\ { \displaystyle \left( 1 + \exp ( \widetilde { b } _ { r } ^ { m } ( t ) ) \right) d \widetilde { b } _ { r } ^ { m } ( t ) = \eta \gamma _ { m } ^ { 2 } d t } \\ { \displaystyle \widetilde { b } _ { r } ^ { m } ( t ) + \exp ( \widetilde { b } _ { r } ^ { m } ( t ) ) = \eta \gamma _ { m } ^ { 2 } t + C } \\ { \displaystyle \widetilde { b } _ { r } ^ { m } ( t ) + \exp ( \widetilde { b } _ { r } ^ { m } ( t ) ) \overset { ( i ) } { = } \eta \gamma _ { m } ^ { 2 } t + b _ { r } ^ { m } + \exp ( b _ { r } ^ { m } ) , } \end{array}
$$
where $( i )$ comes from the initial condition $\widetilde { b } _ { r } ^ { m } ( 0 ) = b _ { r } ^ { m }$ . For a fixed $t$ , we use the substitutions $z = \exp ( \tilde { b } _ { r } ^ { m } ( t ) )$ and $b = \eta \gamma _ { m } ^ { 2 } t + b _ { r } ^ { m } + \exp ( b _ { r } ^ { m } )$ to obtain
$$
\begin{array} { c } { \log ( z ) + z = b } \\ { z \exp ( z ) = \exp ( b ) } \\ { z = W ( \exp ( b ) ) , } \end{array}
$$
where $W$ denotes the principal branch of the Lambert W function. So
$$
\begin{array} { r l } & { \exp ( \widetilde { b } _ { r } ^ { m } ( t ) ) = W ( \exp ( \eta \gamma _ { m } ^ { 2 } t + b _ { r } ^ { m } + \exp ( b _ { r } ^ { m } ) ) ) } \\ & { \qquad \widetilde { b } _ { r } ^ { m } ( t ) = \log ( W ( \exp ( \eta \gamma _ { m } ^ { 2 } t + b _ { r } ^ { m } + \exp ( b _ { r } ^ { m } ) ) ) ) } \\ & { \qquad \widetilde { b } _ { r } ^ { m } ( t ) = \log ( W ( \exp ( 1 + \eta \gamma _ { m } ^ { 2 } t ) ) ) , } \end{array}
$$
where we used the choice ${ \pmb w } _ { r } = { \pmb 0 } \implies \ b _ { r } ^ { m } = 0$ . Denoting $w = W ( \exp ( 1 + \eta \gamma _ { m } ^ { 2 } t ) )$ , we have by the definition of $W$
$$
\begin{array} { c } { { w \exp ( w ) = \exp ( 1 + \eta \gamma _ { m } ^ { 2 } t ) } } \\ { { w + \log w = 1 + \eta \gamma _ { m } ^ { 2 } t } } \\ { { 2 w \stackrel { ( i ) } { \geq } 1 + \eta \gamma _ { m } ^ { 2 } t } } \\ { { w \geq \displaystyle \frac { 1 + \eta \gamma _ { m } ^ { 2 } t } { 2 } , } } \end{array}
$$
where $( i )$ uses $\log w \leq w$ . Plugging $w \geq \textstyle { \frac { 1 } { 2 } } ( 1 + \eta \gamma _ { m } ^ { 2 } t )$ back into Equation 223 yields $\begin{array} { r } { \widetilde { b } _ { r } ^ { m } ( t ) \geq \log ( \frac { 1 } { 2 } ( 1 + \eta \gamma _ { m } ^ { 2 } t ) ) } \end{array}$ , and plugging this back into Equation 213 yields
$$
\begin{aligned}
\beta _ { r } ^ { m } &\leq \frac { 2 } { K } \sum _ { k = 0 } ^ { K - 1 } \frac { 1 } { 1 + \exp ( \widetilde { b } _ { r } ^ { m } ( k ) ) } \\
&\leq \frac { 2 } { K } + \frac { 2 } { K } \sum _ { k = 1 } ^ { K - 1 } \frac { 1 } { 1 + \frac { 1 } { 2 } ( 1 + \eta \gamma _ { m } ^ { 2 } k ) } \\
&\leq \frac { 2 } { K } + \frac { 4 } { K } \sum _ { k = 1 } ^ { K - 1 } \frac { 1 } { 1 + \eta \gamma _ { m } ^ { 2 } k } \\
&\leq \frac { 2 } { K } + \frac { 4 } { K } \int _ { 0 } ^ { K } \frac { d t } { 1 + \eta \gamma _ { m } ^ { 2 } t } \\
&= \frac { 2 } { K } + \frac { 4 } { \eta \gamma _ { m } ^ { 2 } K } \log \left( 1 + \eta \gamma _ { m } ^ { 2 } K \right) \\
&\leq \frac { 2 } { K } + \frac { 4 } { \eta \gamma ^ { 2 } K } \log \left( 1 + \eta \gamma ^ { 2 } K \right) ,
\end{aligned}
$$
where the last line uses $\gamma _ { m } = \| \pmb { x } ^ { m } \| \geq \gamma$ together with the fact that $f ( x ) = \log ( 1 + x ) / x$ is decreasing in $x$ .
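As a numerical sanity check of the $n = 1$ analysis above (our own illustration, not part of the proof), the sketch below integrates the gradient-flow ODE, compares it with the Lambert-W closed form and its lower bound, and verifies the final bound of Lemma B.7 empirically. The values of $\eta$, $\gamma_m$, and $K$ are arbitrary illustrative choices.

```python
import math

# Illustrative parameter choices (not from the paper).
eta, gamma_m = 2.0, 0.5

def b_closed_form(t):
    # Solve w + log(w) = 1 + eta*gamma_m^2*t by Newton's method; this is the
    # Lambert-W relation exp(b(t)) = W(exp(1 + eta*gamma_m^2*t)) in log form.
    c = 1.0 + eta * gamma_m**2 * t
    w = max(c, 1.0)  # initial guess
    for _ in range(50):
        w -= (w + math.log(w) - c) / (1.0 + 1.0 / w)
    return math.log(w)

# Check 1: the closed form matches an explicit Euler integration of
# d/dt b(t) = eta*gamma_m^2 / (1 + exp(b(t))), b(0) = 0 ...
b, t, dt, T = 0.0, 0.0, 1e-3, 50.0
while t < T - 1e-12:
    b += dt * eta * gamma_m**2 / (1.0 + math.exp(b))
    t += dt
assert abs(b - b_closed_form(T)) < 1e-2
# ... and satisfies the lower bound b(t) >= log((1 + eta*gamma_m^2*t)/2).
assert b_closed_form(T) >= math.log(0.5 * (1.0 + eta * gamma_m**2 * T))

# Check 2: running K local GD steps b_{k+1} = b_k + eta*gamma_m^2/(1+exp(b_k))
# from b_0 = 0, beta_r^m stays below 2/K + 4*log(1 + eta*gamma_m^2*K)/(eta*gamma_m^2*K).
for K in (4, 16, 64, 256):
    b, total = 0.0, 0.0
    for _ in range(K):
        total += 2.0 / (1.0 + math.exp(b))  # term (1+exp(b_r))/(1+exp(b_k)) with b_r = 0
        b += eta * gamma_m**2 / (1.0 + math.exp(b))
    c = eta * gamma_m**2 * K
    assert total / K <= 2.0 / K + 4.0 * math.log(1.0 + c) / c
```

Solving for $w$ in the form $w + \log w = c$ avoids evaluating $\exp(c)$ for large $c$, which would overflow in the direct form $w e^w = e^c$.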
# C. Additional Experimental Details
The synthetic and MNIST datasets that we use for the experiments in Section 5 are described in full detail below.
# C.1. Synthetic Data
The synthetic dataset is a simple task with $M = 2$ clients and $n = 1$ data points per client, with $d = 2$ dimensional data. It was introduced by Crawshaw et al. (2025) with the goal of inducing conflict between the magnitude and direction of local client updates. The two data points $\pmb { x } _ { 1 } , \pmb { x } _ { 2 }$ are defined in terms of parameters $\delta , g$ as follows: $\pmb { x } _ { 1 } = \gamma _ { 1 } \pmb { x } _ { 1 } ^ { * }$ and $\pmb { x } _ { 2 } = \gamma _ { 2 } \pmb { x } _ { 2 } ^ { * }$ , where
$$
\begin{aligned}
\pmb { x } _ { 1 } ^ { * } &= \left( \frac { \delta } { \sqrt { 1 + \delta ^ { 2 } } } , \frac { 1 } { \sqrt { 1 + \delta ^ { 2 } } } \right) \\
\pmb { x } _ { 2 } ^ { * } &= \left( \frac { \delta } { \sqrt { 1 + \delta ^ { 2 } } } , - \frac { 1 } { \sqrt { 1 + \delta ^ { 2 } } } \right) ,
\end{aligned}
$$
Figure 4: Train loss of Local GD (step size $\eta$ , communication interval $K$ ) with the CIFAR-10 dataset. Overall, we observe that Local GD converges faster in the long run by choosing a larger step size/communication interval, despite unstable/slow optimization in early iterations. For (a), we first fix $K = 16$ while varying $\eta$ , then fix $\eta = 2 ^ { 9 }$ while varying $K$ .
and $\gamma _ { 1 } = 1 , \gamma _ { 2 } = 1 / g$ . By choosing $\delta$ close to zero and $g$ with large magnitude, the two local objectives differ significantly in terms of gradient direction and magnitude. For our experiments, we use $\delta = 0.1$ and $g = 10$ .
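The construction above is simple enough to write out directly. The following sketch (with our own function and variable names) builds the two data points for the stated choices $\delta = 0.1$ and $g = 10$:

```python
import numpy as np

def make_synthetic(delta=0.1, g=10.0):
    # Unit-norm directions x_1*, x_2* differing only in the sign of the
    # second coordinate; for small delta they are nearly opposite.
    x1_star = np.array([delta, 1.0]) / np.sqrt(1.0 + delta**2)
    x2_star = np.array([delta, -1.0]) / np.sqrt(1.0 + delta**2)
    gamma1, gamma2 = 1.0, 1.0 / g  # large g => very different norms
    return gamma1 * x1_star, gamma2 * x2_star

x1, x2 = make_synthetic()
assert abs(np.linalg.norm(x1) - 1.0) < 1e-12   # gamma_1 = 1
assert abs(np.linalg.norm(x2) - 0.1) < 1e-12   # gamma_2 = 1/g
# The two local gradients conflict in direction (negative cosine similarity).
cos = x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2))
assert cos < 0
```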
# C.2. MNIST
Similar to Wu et al. (2024a) and Crawshaw et al. (2025), we use a subset of MNIST data with binarized labels, and our implementation follows that of Crawshaw et al. (2025). First, we randomly select 1000 images from the MNIST dataset, which we then partition among the $M$ clients using a heterogeneity protocol that is common throughout the federated learning literature (Karimireddy et al., 2020). Specifically, for a data similarity parameter $s \in [ 0 , 100 ]$ , $s \%$ of the data is allocated to an “iid pool”, which is randomly shuffled, and the remaining $( 100 - s ) \%$ to a “non-iid pool”, which is sorted by label. When sorting the non-iid pool, we sort according to the 10-way digit label. We then split the iid pool into $M$ equally sized subsets, and similarly split the non-iid pool into $M$ equally sized subsets (keeping the sorted order); each client’s local dataset is comprised of one subset of the iid pool together with one subset of the non-iid pool. In this way, the local datasets have different proportions of each digit. If $s = 100$ , then the 1000 images are allocated uniformly at random to different clients, and if $s = 0$ , then the clients will have nearly disjoint sets of digits in their local datasets. Finally, after images have been allocated to clients, we replace each image’s label with the parity of its depicted digit. For our experiments, we set $M = 5$ and $s = 50$ . For all images, the pixel values initially fall into the range [0, 255]; we normalize the data by subtracting 127 from each pixel, then dividing all pixels by the same scaling factor to ensure that $\operatorname* { m a x } _ { m , i } \| \pmb { x } _ { i } ^ { m } \| = 1$ .
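The allocation step of this protocol can be sketched as follows; this is our own minimal re-implementation (function and variable names are ours), using random labels as a stand-in for the MNIST digit labels:

```python
import numpy as np

def split_clients(labels, M, s, rng):
    """Allocate indices to M clients: s% via a shuffled iid pool, the
    remaining (100-s)% via a label-sorted non-iid pool."""
    n = len(labels)
    idx = rng.permutation(n)
    n_iid = int(n * s / 100)
    iid = idx[:n_iid]  # already shuffled by the permutation
    rest = idx[n_iid:]
    non_iid = rest[np.argsort(labels[rest], kind="stable")]  # sorted by label
    # Each client gets one equal slice of each pool.
    return [np.concatenate([a, b])
            for a, b in zip(np.array_split(iid, M), np.array_split(non_iid, M))]

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)  # stand-in for 10-way digit labels
clients = split_clients(labels, M=5, s=50, rng=rng)
assert sum(len(c) for c in clients) == 1000
assert len(set(np.concatenate(clients).tolist())) == 1000  # a true partition
```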
# D. Additional Experimental Results
# D.1. CIFAR-10 Experiments
In this section, we provide additional experiments on the CIFAR-10 dataset, using similar protocols as in Section 5. For these experiments, we vary the step size $\eta \in \{ 2 ^ { 6 } , 2 ^ { 7 } , \dots , 2 ^ { 10 } \}$ , and other details of the setup exactly match those of our MNIST experiments (see Section C.2), including the number of communication rounds $R$ , the heterogeneity procedure, number of clients $M$ , number of samples per client $n$ , data similarity parameter $s$ , data normalization procedure, and choice of interval $K \in \{ 1 , 4 , 16 , 64 \}$ . Note that we used step sizes between $2 ^ { 6 }$ and $2 ^ { 10 }$ , since smaller choices led to very slow, very stable convergence and larger choices led to overflow.
The results can be seen in Figure 4. For these additional experiments, we used the same evaluation protocol as in Section 5: Figure 4(a) corresponds to Q1 and Figure 1, Figure 4(b) corresponds to Q2 and Figure 2, and Figure 4(c) corresponds to Q3 and Figure 3.
The results on CIFAR-10 further support our theoretical findings. In Figure 4(a), larger step sizes/communication intervals lead to faster convergence in the long run, despite the resulting slow/unstable convergence in early iterations. In Figure 4(b), we can see that a larger communication interval $K$ leads to faster convergence when $\eta$ is tuned to $K$ . The results in Figure 4(c) are similar to the MNIST results in Figure 3: when $\eta K$ is constant, $K = 1$ is less stable and slower than other choices of $K$ , and all other choices have roughly the same final loss. These results strengthen the evidence that our theoretical findings accurately describe the behavior of Local GD in practice.
Figure 5: Three splits of a synthetic dataset. Binary labels are shown in red/blue, and client indices for each data point are shown with markers. Note that some data points are contained by multiple clients, which is shown with overlapping markers. In the homogeneous split (left), all clients have the same data, so they all have the same local margins. For mixed (middle), two clients have local margin $\gamma$ , and two clients have local margin $3 \gamma$ . For heterogeneous (right), all four clients have different local margins. Note that the combined dataset of all four clients is the same for all three splits.
# D.2. Margin Heterogeneity
While our theoretical analysis makes no assumption about data heterogeneity (it applies to any linearly separable dataset), the question remains whether the convergence rate can be improved with a more fine-grained analysis that considers the local margins $\begin{array} { r } { \gamma _ { m } : = \operatorname* { m a x } _ { \pmb { w } \in \mathbb { R } ^ { d } , \| \pmb { w } \| = 1 } \operatorname* { m i n } _ { ( \pmb { x } , y ) \in D _ { m } } y \langle \pmb { w } , \pmb { x } \rangle } \end{array}$ instead of the global margin $\gamma$ alone. We investigate this question with a controlled synthetic dataset, by changing the local margins $\gamma _ { m }$ while preserving the global dataset.
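To make the definition of a local margin concrete, it can be approximated numerically for a small 2-dimensional dataset by a dense grid search over unit directions. This is our own illustration (the example data below are ours, chosen to have margin 0.5 in the direction (0, 1)), not the paper's procedure:

```python
import numpy as np

def margin_2d(X, y, n_angles=20000):
    """Approximate gamma = max_{||w||=1} min_i y_i <w, x_i> for 2-D data
    by evaluating a dense grid of unit directions."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    W = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # unit directions
    scores = (X @ W.T) * y[:, None]   # y_i <w, x_i> for every direction w
    return scores.min(axis=0).max()   # max over w of min over i

X = np.array([[0.3, 0.5], [-0.3, 0.5], [0.3, -0.5], [-0.3, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
assert abs(margin_2d(X, y) - 0.5) < 1e-3  # optimal direction is (0, 1)
```

For higher dimensions one would solve the max-margin problem with a convex solver instead of a grid, but the 2-D grid suffices to illustrate the quantity being varied across the three splits.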
This synthetic dataset has $M = 4$ clients with a total of 16 data points. The dataset can be split among the four clients in three different ways to create either homogeneous, partially homogeneous (i.e. mixed), or heterogeneous margins among clients, which are shown in Figure 5. Note that $\| \pmb { x } _ { i } ^ { m } \| \le 1$ for every data point, so that $H \leq 1 / 4$ , as with the datasets of Section 5. Also, the global dataset (and therefore $\gamma$ ) is the same for all three splits. Our theory provides the same convergence rate upper bound for all three splits, and we verify this prediction by evaluating Local GD with various hyperparameters on the three splits. Results are shown in Figure 6.
The left subplots of Figure 6 show that the losses for each split are slightly different in early iterations, but quickly become nearly identical. The right subplots show that all three splits satisfy $\eta \gamma ^ { 2 } K r \cdot F ( \pmb { w } _ { r } ) \to 1$ as $r$ increases, so that the asymptotic convergence rate is unaffected by heterogeneity in the local margins. This behavior is consistent across choices of $\eta$ and $K$ . These results align with our theoretical prediction that the convergence rate of Local GD depends on properties of the global dataset, rather than how that dataset is allocated among clients.
Figure 6: Results of Local GD on three splits of the synthetic dataset pictured in Figure 5. The right subplots show the asymptotic rate as the number of iterations goes to $\infty$ , similarly to Figures 1(b) and 1(d) of Wu et al. (2024a).
introduction. Their widespread adoption is often credited to their dramatically improved trainability: residual networks train faster, more stably, and achieve higher accuracy than their feedforward counterparts. While numerous techniques, ranging from improved initialization to advanced learning rate schedules, have been proposed to close the performance gap between residual and feedforward networks, this gap has persisted. In this work, we propose an alternative explanation: residual networks do not merely reparameterize feedforward networks, but instead inhabit a different function space. We design a controlled post-training comparison to isolate generalization performance from trainability; we find that variable-depth architectures, similar to ResNets, consistently outperform fixed-depth networks, even when optimization is unlikely to make a difference. These results suggest that residual connections confer performance advantages beyond optimization, pointing instead to a deeper inductive bias aligned with the structure of natural data.
# 1 Introduction
Combining deep neural networks and big data has enabled a series of dramatic breakthroughs; first in computer vision [Krizhevsky et al., 2012] but soon extending to a variety of different domains such as text processing [Llama Team, 2024], protein folding [Jumper et al., 2021], and multimodal tasks [Anil et al., 2023]. The fact that this apparent success seems to extend to almost every type of natural data might be one of the most surprising results of modern deep learning research. In many cases, the inductive bias provided by network architectures and training protocols appears to be better aligned with natural data distributions than any other methods available today. This makes it at least plausible that we have found something akin to a universal prior for natural data, hidden within the inductive bias behind deep learning.
Over the course of the last decade, thousands of different network architectures and design patterns were proposed. Out of those, very few stood the test of time, in the sense that they are still commonly used years after their introduction. Additionally, it seems to be the case that when swapping one network component for another, similar performances are often reached, making it difficult to draw conclusions regarding their potential hidden inductive biases. Notable exceptions to this rule include Transformer blocks [Vaswani et al., 2017], normalization layers [Ioffe and Szegedy, 2015], and residual connections [He et al., 2016], which are still to this day ubiquitous in current architectures. In this work, we focus on the latter, aiming to take a deeper look at the reasons for the immense success and perseverance of the residual architecture design since its introduction.
He et al. [2016] show that deep feedforward networks suffer from a degradation problem, where training performance deteriorates with depth, and propose residual connections as a reformulation that lets a layer learn the identity map more easily. The term reformulation, however, is ambiguous:
it could mean either (i) a pure reparametrization of the same function class, or (ii) the introduction of a genuinely different hypothesis space. Follow-up work explored both interpretations. Mean-field analysis by Yang and Schoenholz [2017] traces the degradation to numerical instability: the authors show that well-conditioned $q$ - and $c$ -maps, which guarantee a stable forward/backward pass and non-vanishing rank, are highly predictive of network trainability, and that adding skip-connections mitigates these problems. Yet, Martens et al. [2021] and Zhang et al. [2022] show that even after one eliminates such numerical pathologies through carefully shaped initialization and re-scaling of the nonlinear layers, residual nets still outperform equally well-conditioned plain nets, suggesting additional factors at play.
A complementary perspective is given by Veit et al. [2016], who unravel a ResNet into an ensemble of exponentially many paths of different lengths and hypothesize that since long paths result in very low gradient norms, they effectively might not contribute to the total gradient in any meaningful way. They conclude that residual connections do not magically make deep paths trainable; instead, they shorten the effective path length and thereby avoid exploding/vanishing gradients. However, despite being centered on the architectural shape of residual networks, their work only discusses the implications for trainability properties.
Despite extensive study, past literature still conveys an incomplete picture of the benefits provided by residual connections. We provide an analytical argument showing that adding skip-connections alters the network’s function space rather than being a simple re-parametrization. After conducting an extensive literature review showing the difficulty of disentangling trainability and generalization properties, we supply new experiments showing that variable-depth architectures (i.e. containing both long and short paths) such as ResNets outperform fixed-depth feed-forward networks (i.e. only containing long paths), even in a setting where the differences in trainability are negligible. This evidence, in line with past findings, suggests that the function space induced by variable-depth networks might be more closely aligned with real-world data distributions than the space defined by their fixed-depth counterparts, implying that the long-standing performance gap between them may never be fully closed.
# 2 Motivation
# 2.1 A Brief History of Numerical Issues in Neural Network Training
When training deep neural networks, certain basic numerical constraints must be met to optimize a network effectively. In this section, we show a list of commonly used necessary criteria for trainability as proposed by Lubana et al. [2021] and Balduzzi et al. [2017], along with a short history of how researchers addressed those issues in the past. For the remainder of this paper, we will refer to these specific issues simply as “trainability issues”.
1. Stable Forward Propagation. The scale of a network’s activations should not grow exponentially across layers during the forward pass. This effect was also sometimes called covariate shift (e.g. [Santurkar et al., 2018] Figure 2). Earlier network architectures, such as VGG [Simonyan and Zisserman, 2015] without normalization layers, had difficulties scaling beyond $\sim 20$ layers due to this issue.
2. Non-Shattering Gradients / Stable Backward Propagation. Gradients with respect to network inputs should maintain some degree of spatial auto-correlation / smoothness (ref. [Balduzzi et al., 2017], Fig. 1) for successfully training with SGD, particularly with momentum. Ali Mehmeti-Göpel et al. [2021] show that the classical “exploding gradients” problem in deep feedforward networks (ref. [Yang et al., 2019b]) can be seen as a consequence of shattering gradients, as the gradients wrt. the network’s weights also shatter, which in turn leads to high average gradient norms. Ali Mehmeti-Göpel and Wand [2024] (ref. Fig 5) and Schoenholz et al. [2017] show that exploding gradients can severely impede a network’s trainability.
3. Informative Forward Propagation: The output rank, or more generally the singular value spectrum of the last layer’s activations of a network, should not collapse. When the activations of the last layer become linearly dependent, the effective dimensionality of the network function is reduced, and as a consequence, the network loses the ability to distinguish between different inputs and cannot learn effectively. This effect has been shown to occur for the product of many Gaussian matrices (ref. Saxe et al. [2014a] Figure 6) and also impedes a network’s trainability [Schoenholz et al., 2017].
The real difficulty lies in solving all of these problems at the same time, as a solution to a single problem can worsen the remaining ones, as we will see in the following.
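Criteria 1 and 3 are easy to observe empirically. The following numpy sketch (our own illustration, with arbitrary width/depth choices) propagates a batch through a deep ReLU stack at He initialization: the forward scale stays roughly constant per layer, while the singular-value spectrum of the activations concentrates with depth, i.e. the effective rank collapses:

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, batch = 128, 50, 64
h = rng.standard_normal((batch, width))
baseline = np.linalg.svd(h, compute_uv=False)  # spectrum of a fresh Gaussian batch

ratios = []
for _ in range(depth):
    W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)  # He init
    h_new = np.maximum(h @ W.T, 0.0)  # ReLU layer
    ratios.append(np.linalg.norm(h_new) / np.linalg.norm(h))
    h = h_new

def effective_rank(s):
    # Participation ratio of the singular values.
    return (s.sum() ** 2) / (s ** 2).sum()

# Criterion 1 (stable forward propagation): scale roughly preserved per layer.
assert 0.5 < np.mean(ratios) < 2.0
# Criterion 3 (informative forward propagation) degrades with depth: the deep
# activations have far lower effective rank than the fresh Gaussian batch.
s = np.linalg.svd(h, compute_uv=False)
assert effective_rank(s) < effective_rank(baseline) / 4
```

This illustrates the point made above: He initialization fixes criterion 1 in isolation, yet the spectrum of the final activations still collapses, so criterion 3 fails.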
Initialization Schemes Researchers first attempted to solve the first two issues by utilizing cleverly designed initialization schemes that guarantee a stable forward and backward propagation [Glorot and Bengio, 2010]. Assuming that the effect of the nonlinear layers in the network is negligible, the authors find that for layers of different width, an initialization scheme can satisfy either condition 1 or condition 2, but not both at the same time, and derive an initialization scheme that averages both approaches as a compromise; however, this approach breaks down for deeper networks of non-constant width. He et al. [2015] later extended this idea to account for the effect of the nonlinear layer used on activations and gradients.
Dynamic Isometry / DKS There is a later line of research called dynamic isometry, which aims to initialize the network in a way that brings all singular values of the input-output Jacobian of the network function close to 1, consequently solving all three stability issues. For linear networks at initialization, it is possible to achieve dynamic isometry for networks with arbitrary depth using i.i.d. orthogonal matrices [Saxe et al., 2014b], but not for i.i.d. Gaussian matrices. As for nonlinear networks with sigmoid activations, a similar result can be found [Pennington et al., 2017], but the proof strategy involves shrinking the pre-activations into the linear regime of the activation, effectively linearizing the network. [Martens et al., 2021] recognize this problem and find a solution to create truly nonlinear deep feedforward networks that are also numerically stable. However, their approach not only relies on weight initialization but additionally requires modifications of the network, mainly activation function transformations.
Normalization Layers Normalization layers like Batch Normalization [Ioffe and Szegedy, 2015] ensure a stable forward pass at initialization and throughout training as per construction. However, as these layers usually do not perform a full whitening transformation, the covariance between channels can still vanish. Daneshmand et al. [2020] prove that BatchNorm retains at least the square root of the full rank at initialization for linear networks; however, their result does not hold for nonlinear networks or during training. There are attempts to implement full whitening normalization layers [Huang et al., 2019, 2018] that also normalize covariances between channels, but these add significant computational overhead, guarantee spectral isotropy only in the forward pass but not in the backward pass, and bear several other issues such as stochastic axis swapping. Unfortunately, the combination of nonlinear layers and normalization layers also causes exploding gradients in the backward pass at initialization [Yang et al., 2019a, Luther, 2020].
Residual Connections The further addition of residual connections ensures an informative forward pass along with a stable backward pass [Yang et al., 2019a]. When properly scaling the residual branch, a stable forward pass can be achieved even without normalization layers [Zhang et al., 2019]. Many works attempt to characterize the benefits gained by the addition of residual connections, but their argumentation often boils down to the compression of the singular spectrum across layers [Huang et al., 2020, Oyedotun et al., 2021] or exploding/shattering gradients [Veit et al., 2016] mentioned above.
# 2.2 Closing the Gap
Researchers have attempted to close the performance gap between deep feedforward networks and ResNets originally described by He et al. [2016] for many years, but still fail to fully close it.
Ali Mehmeti-Göpel and Wand [2024] show that it is possible to eliminate exploding/shattering gradients even in arbitrarily deep normalized feedforward networks by warming up the learning rate properly in early training, or simply normalizing the layer-wise gradient norms. However, even with these fixes, a significant performance gap is observable between deep feedforward and residual networks in Figure 5 of their work.
Xiao et al. [2018] show that using a clever initialization scheme, it is possible to train 10,000-layer nonlinear convolutional neural networks. However, their construction involves practically linearizing the network at initialization and taking steps so small that the network remains mostly linear also during training; this effectively renders the network linear and drastically reduces its expressivity. The “looks-linear” initialization of [Balduzzi et al., 2017] follows a similar approach of initializing the network in a linear state, however, it suffers from trainability issues due to the nonlinear activations once training moves beyond the initial regime.
Instead of fully linearizing the network at initialization, another possible approach is to control the “degree of nonlinearity” of the nonlinear layers in the network: Ali Mehmeti-Göpel et al. [2021] (ref. Figure 9) show that the performance degradation of deep feedforward networks can be alleviated to a certain degree by using Leaky ReLU activations and controlling their slope. Martens et al. [2021] follow a similar idea, but allow more flexibility, as in their work every nonlinear layer obtains a different scaling factor that is found by a solver. Zhang et al. [2022] further develop this idea and fix an issue of the approach specifically wrt. the ReLU nonlinearity, obtaining even better results, but a generalization gap of over one percent on ImageNet remains for networks with 101 layers (ref. Table 9), even when trained with the expensive K-FAC optimizer [Martens and Grosse, 2015]. However, it is unclear whether optimizing numerically degenerate networks, such as deep feedforward networks, using a second-order optimizer, creates numerical instabilities in itself. Additionally, to compensate for the drastically different $c$ - and $q$ -maps of residual and non-residual architectures, their layer-wise scaling factors must be highly different; but these directly control the “degree of nonlinearity” of the networks, directly affecting their expressivity. Thus, this work does not yield a direct comparison of networks of similar nonlinear depth, which is the setup we are looking for.
We conclude that to our best knowledge, there have been no successful attempts to fully close the performance gap between deep feedforward and residual networks. As fixing the numerical instabilities of deep feedforward networks is a difficult problem, it is unclear at this point whether the performance gap measured so far stems from side effects or newly introduced instabilities of proposed solutions such as DKS, or can be attributed to an inductive bias beyond trainability issues as defined above.
# 2.3 Partial Linearization
As a direct comparison of residual and feedforward networks in training is highly problematic for the reasons elucidated above, we opt for a different approach. Instead of opposing two architectures with potentially quite different trainability properties in a side-by-side training run from scratch, we attempt to extract a network with the relevant structure from an already trained network, therefore minimizing the impact of trainability properties. In this section, we take a look at existing frameworks that allow for such an undertaking.
Ali Mehmeti-Göpel and Disselhoff [2023] and Dror et al. [2021] present similar techniques that reduce the number of ReLU units in fully trained networks. Both works realize this in a post-training phase, where ReLU units are replaced with PReLU [He et al., 2015] units and an additional regularization term, which simply penalizes every nonlinear unit in the network, is used to push their slopes towards linearity. The major difference is that the former authors do this at a channel granularity (i.e. one slope parameter per channel), whereas the latter use layer-wise units (i.e. a single slope parameter per layer). This difference, however, is crucial: by using a channel-wise technique, the resulting network can have variable depth, akin to a ResNet; whereas when using the layer-wise technique, the resulting network has fixed depth, akin to a feedforward network. We use this difference to realize our comparison of the performance of variable- and fixed-depth networks. The authors provide plenty of comparisons between their methods, but the precise setup needed to support our claim (ref. Section 4) is not covered.
# 3 On Function Spaces
“We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions.”
This quote from the abstract of He et al. [2016] explicitly describes ResNets as a “reformulation” of a non-residual network function. As this terminology is ambiguous, we first introduce precise definitions and notation.
Definition 1. Let $f(x, \theta): \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^m$ be a network function with flattened input vector $x \in \mathbb{R}^n$ and weight vector $\theta \in \mathbb{R}^p$. We then define a reparametrization of the network function $f$ as another network function $g(x, \theta'): \mathbb{R}^n \times \mathbb{R}^{p'} \to \mathbb{R}^m$ along with a weight reparametrization function $h(\theta): \mathbb{R}^p \to \mathbb{R}^{p'}$ such that:
$$
g ( x , h ( \theta ) ) = f ( x , \theta ) .
$$
We call a reparametrization equivalent, if the networks $f$ and $g$ have the same width and depth.
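As a toy illustration of Definition 1 (our own minimal example, not taken from the paper), consider a one-parameter function $f$ realized by a two-parameter function $g$ via a weight map $h$:

```python
# Toy instance of Definition 1: f(x, theta) = theta * x (p = 1 parameter)
# is reparametrized as g(x, (a, b)) = a * b * x (p' = 2 parameters)
# via the weight map h(theta) = (theta, 1.0),
# so that g(x, h(theta)) = f(x, theta) for all x.

def f(x, theta):
    return theta * x

def g(x, params):
    a, b = params
    return a * b * x

def h(theta):
    return (theta, 1.0)

# g realizes every function that f can, but over a larger weight space
assert g(2.0, h(3.0)) == f(2.0, 3.0)
assert g(1.0, h(5.0)) == 5.0
```

Note that $g$ here uses more parameters than $f$; whether a reparametrization additionally preserves width and depth is exactly what the notion of equivalence above captures.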
In Appendix Section A.1, we first establish that in the general case with a non-injective nonlinearity and square matrices, it is not possible to equivalently reparametrize a ResNet as a feedforward network. Now we take a look at a simple construction showing that such a reparametrization is generally possible, albeit requiring more parameters.
Proposition 1 (Locally Linear Nonlinearity). Let $R$ be a residual block defined as follows:
$$
R ( x ) = \phi ( \overline { { W } } x + \bar { b } ) + x
$$
where $\overline{W} \in \mathbb{R}^{n \times n}$ represents the weight matrix, $\bar{b} \in \mathbb{R}^{n}$ the bias, and $\phi: \mathbb{R} \to \mathbb{R}$ is an element-wise nonlinear function that is differentiable at a point $c \in \mathbb{R}$.
$R$ can then be reparametrized as a feedforward layer $F$ with one additional linear layer and double the width:
$$
F ( x ) = W _ { 2 } \phi ( W _ { 1 } x + b _ { 1 } ) + b _ { 2 } ,
$$
with weights $W_1 \in \mathbb{R}^{2n \times n}$, $W_2 \in \mathbb{R}^{n \times 2n}$ and biases $b_1 \in \mathbb{R}^{2n}$, $b_2 \in \mathbb{R}^{n}$.
Proof. In a first step, we need to shift and shrink the input into the linear region of $\phi$ using a linear map $L_1: \mathbb{R}^n \to \mathbb{R}^{2n}$. Let $\varepsilon > 0$; we can then write:
$$
L_1(x) := \underbrace{\begin{bmatrix} \varepsilon \cdot I \\ \overline{W} \end{bmatrix}}_{\in \mathbb{R}^{2n \times n}} x + \underbrace{\begin{bmatrix} c \cdot \mathbb{1} \\ \bar{b} \end{bmatrix}}_{\in \mathbb{R}^{2n}}.
$$
Next, after applying the nonlinear layer $\phi$, we need to add the two halves, un-shrink, and shift back using $L_2: \mathbb{R}^{2n} \to \mathbb{R}^{n}$:
$$
L_2(x) := \underbrace{\begin{bmatrix} \frac{1}{\varepsilon \phi'(c)} \cdot I & I \end{bmatrix}}_{\in \mathbb{R}^{n \times 2n}} x - \underbrace{\frac{\phi(c)}{\varepsilon \phi'(c)} \cdot \mathbb{1}}_{\in \mathbb{R}^{n}}.
$$
We can now put together our feedforward layer $F = L_2 \circ \phi \circ L_1$ and obtain:
$$
F ( x ) = \frac { \phi ( \varepsilon x + c ) } { \varepsilon \phi ^ { \prime } ( c ) } + \phi ( \overline { { { W } } } x + \bar { b } ) - \frac { \phi ( c ) } { \varepsilon \phi ^ { \prime } ( c ) } \cdot \mathbb { 1 } .
$$
If we now plug the Taylor expansion for the term we want to linearize
$$
\phi ( \varepsilon x + c ) = \phi ( c ) \cdot \mathbb { 1 } + \phi ^ { \prime } ( c ) \varepsilon x + { \mathcal { O } } ( \varepsilon ^ { 2 } \| x \| ^ { 2 } )
$$
into Equation 5, we finally obtain the reparametrization, which becomes exact in the limit $\varepsilon \to 0$:
$$
\begin{aligned}
F(x) &= \frac{1}{\varepsilon \phi'(c)} \left( \phi(c) \cdot \mathbb{1} + \phi'(c)\, \varepsilon x + \mathcal{O}(\varepsilon^2 \|x\|^2) \right) + \phi(\overline{W} x + \bar{b}) - \frac{\phi(c)}{\varepsilon \phi'(c)} \cdot \mathbb{1} \\
&= x + \phi(\overline{W} x + \bar{b}) + \mathcal{O}(\varepsilon \|x\|^2).
\end{aligned}
$$
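The construction can be sanity-checked numerically. The following sketch (our own, not the authors' code) uses NumPy with a sigmoid nonlinearity at $c = 0$, builds $L_1$ and $L_2$ as above, and compares the feedforward layer $F$ against the residual block $R$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))   # the residual block's weight matrix (W-bar)
b = rng.normal(size=n)        # its bias (b-bar)

phi = lambda z: 1.0 / (1.0 + np.exp(-z))  # sigmoid, differentiable at c = 0
c = 0.0
phi_c = phi(c)                  # phi(c)  = 0.5
dphi_c = phi_c * (1.0 - phi_c)  # phi'(c) = 0.25

def residual_block(x):
    return phi(W @ x + b) + x

def feedforward_block(x, eps=1e-4):
    # L1 stacks a shrunk/shifted copy of x on top of W x + b
    W1 = np.vstack([eps * np.eye(n), W])                      # (2n, n)
    b1 = np.concatenate([c * np.ones(n), b])                  # (2n,)
    # L2 un-shrinks/shifts the top half back and adds the bottom half
    W2 = np.hstack([np.eye(n) / (eps * dphi_c), np.eye(n)])   # (n, 2n)
    b2 = -phi_c / (eps * dphi_c) * np.ones(n)
    return W2 @ phi(W1 @ x + b1) + b2

x = rng.normal(size=n)
err = np.max(np.abs(residual_block(x) - feedforward_block(x)))
# err is the O(eps * ||x||^2) Taylor remainder; it shrinks as eps -> 0
```

Shrinking $\varepsilon$ further reduces the Taylor error but eventually runs into the floating-point cancellation discussed in the limitations below.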
In Appendix Section A.2, we show a similar construction specifically for $\phi = \mathrm{ReLU}$ that works by shifting the data term into the linear region of the activation function instead of shrinking it, thus avoiding the numerical issues related to the shrinking/un-shrinking of the pre-activations.
Figure 1: Comparing the test accuracies of partially linearized networks using a channel-wise or layer-wise partial linearization approach on the ImageNet (left) and Cifar100 (right) datasets. The results of the latter are averaged over 5 runs per data point, and standard deviations are indicated as error bars.
Limitations The construction above, however, requires significant assumptions:
• Requires sufficient numerical precision in order to mitigate numerical instabilities during the shrinking/un-shrinking of the pre-activations; otherwise, an error term remains if $\varepsilon$ is not chosen small enough.
• Requires boundedness of the pre-activations.
• Requires double the depth and width.
We do not see the halving of the number of nonlinear layers as a major limitation, since we would typically compare networks with the same normalized average path length anyway, i.e. a feedforward network with $\ell$ layers with a ResNet with approximately $\ell / 2$ layers. Especially in normalized networks, the boundedness of the pre-activations does not seem like a major constraint either. However, the effective halving of the width of the network is likely to affect performance significantly.
In conclusion, it is impossible to reparametrize a residual network as an equivalent feedforward network, and the reparametrizations that are possible (for a single block) require additional width.
# 4 Empirical Evidence
Having established that ResNets span a different function space than feedforward networks, it is plausible that these networks outperform feedforward networks beyond trainability issues. As this question fundamentally depends on how well the network architecture’s prior aligns with the training data, it is empirical in nature. As we established in the previous section, numerical instability (i.e. an unstable forward/backward pass or vanishing rank in the last layer) greatly impedes the trainability of deep feedforward networks, so we must exert great care in our experimental design to avoid such issues.
The basic idea is a setup very similar to Ali Mehmeti-Göpel and Disselhoff [2023], Dror et al. [2021]: we start from an already pre-trained network and then mold it into different shapes in a post-training phase, in order to reduce the impact of trainability on the comparison. Starting from a deep pre-trained feedforward network, we gradually reduce the number of nonlinear units in the network using an additional regularization term that penalizes each nonlinear unit during the post-training phase. Our goal is to mold the network either into a variable-depth network or a fixed-depth network during this phase in order to compare their respective performances. We achieve this by using a channel-wise, respectively layer-wise, linearization approach in otherwise equal experimental settings: starting from a deep feedforward network, channel-wise linearization can result in a variable-depth network (i.e. similar to a ResNet) if only a proportion of the channels within a layer become linear, whereas layer-wise linearization always results in a fixed-depth network (i.e. a shallower feedforward network). As the networks are molded into their respective shapes only after reaching full performance, we do not expect trainability to affect the two runs differently.
# 4.1 Implementation Details
In order to reduce the number of nonlinear units in a trained network, we replace its ReLU units with channel-/layer-wise PReLU units [He et al., 2015] and add a sparsity regularization term
$$
L_{0.5} = \sum_i \vert 1 - \alpha_i \vert^{0.5}
$$
to the regular training loss, scaled by a regularization weight $\omega$, where $\alpha_i$ is the variable slope of the $i$-th PReLU. This way, the networks are incentivized to regularize some of their nonlinear units towards linearity while preserving performance as much as possible. We set and freeze the slope parameter to $\alpha_i = 1$ once it gets close enough to one, i.e. $|\alpha_i - 1| < 0.01$.
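A minimal NumPy sketch of this mechanism (our own illustration; the paper's actual training code is not reproduced here): per-unit PReLU slopes are penalized towards 1 and snapped to exactly 1 once within tolerance, which turns the corresponding units into identities.

```python
import numpy as np

def prelu(x, alpha):
    # PReLU with one learnable slope per channel (alpha broadcasts over x);
    # alpha = 1 makes the unit the identity, i.e. fully linear
    return np.where(x >= 0, x, alpha * x)

def l_half_penalty(alphas):
    # sparsity term L_0.5 = sum_i |1 - alpha_i|^0.5,
    # added to the training loss scaled by the regularization weight omega
    return np.sum(np.abs(1.0 - alphas) ** 0.5)

def freeze_near_linear(alphas, tol=0.01):
    # slopes within tol of 1 are set to exactly 1 and no longer trained
    return np.where(np.abs(alphas - 1.0) < tol, 1.0, alphas)
```

Note that the penalty prunes nonlinearities rather than weights: a frozen channel still transports its activation, it just does so linearly.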
Similar to Ali Mehmeti-Göpel and Disselhoff [2023], we measure the depth of the resulting partially linearized networks in average path length, which represents the average number of nonlinear units encountered on a path from input to output through the computation graph of the network. In particular, we use the width-agnostic normalized average path length (NAPL) as a measure. For networks linearized layer-wise, the NAPL of the resulting network is simply its depth minus one.
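Under the simplifying assumption that a random input-output path crosses each layer's nonlinear channels with probability equal to that layer's nonlinear fraction (our reading of the measure, not the authors' exact implementation), the NAPL reduces to a sum of per-layer fractions:

```python
def napl(nonlinear_fracs):
    # expected number of nonlinear units on a random input-output path;
    # a fully nonlinear layer contributes 1, a fully linearized layer 0
    return sum(nonlinear_fracs)

# layer-wise linearization leaves whole layers nonlinear or linear:
fixed_depth = napl([1, 1, 1, 0, 0])           # 3 nonlinear layers -> NAPL 3
# channel-wise linearization can leave fractional layers (variable depth):
variable_depth = napl([1.0, 0.5, 0.5, 0.25])  # NAPL 2.25
```

The example illustrates why only the channel-wise technique can reach non-integer NAPL values: fractional layers correspond to a mixture of path lengths.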
We chose RepVGG [Ding et al., 2021] as our starting architecture, as it is a recent network architecture without skip-connections across nonlinear layers but with competitive ImageNet performance, a reasonable number of parameters, and publicly available weights. The RepVGG architecture does contain skip-connections across linear layers, but in our framework these do not affect the nonlinear depth as measured by NAPL. We deliberately chose an architecture with residual connections across linear layers to highlight that nonlinear depth, which is only influenced by residual connections across nonlinear layers, is decisive for a network’s expressivity. Specifically, we chose the RepVGG-A2 architecture with 23 layers, as it has a low number of parameters but still achieves an ImageNet performance of $76.4\%$, similar to a ResNet50. The post-training phase lasts 10 epochs on ImageNet and 60 epochs on Cifar100; we chose the longer post-training duration on Cifar100 to show that the effect does not vanish when (post-)training close to convergence. More implementation details can be found in the Appendix Section C.
# 4.2 Experimental Results
In Figure 1 (left), we show the test accuracies of the resulting partially linearized networks using a channel-wise and layer-wise technique starting from a pre-trained RepVGG-A2 model. We can clearly see that for networks with NAPL under 12, the layer-wise approach starts to be less performant than the channel-wise approach, and that the gap widens for shallower networks. In the Appendix Section B.1, we show similar results for the Cifar10 and Cifar100 datasets. Interestingly, the NAPL where the performances of the two approaches diverge seems to be lower on easier datasets.
We also repeat this experiment on the Cifar100 dataset 5 times with a longer post-training phase, and report the results with error bars in Figure 1 (right). Interestingly, we see a slight increase of performance towards a NAPL of around 3; this is possible because, with increasing linearization, the loss surface is also smoothed, so generalization performance can in fact slightly increase for intermediate $\omega$ before dropping for lower $\omega$ as a result of low expressivity. Also on the Cifar100 dataset, we observe a significant performance gap between the layer-wise and channel-wise variants for lower NAPL.
# 4.3 Comparing to Linearized ResNets
In this Section, we repeat the experiments above, but start our linearization process from a pre-trained residual network instead of a pre-trained feedforward network. If the effect we saw in Section 4 is reduced, this implies that part of the observed benefit of variable-depth path lengths can be replicated by regular residual connections. For a setup with comparable residual and feedforward networks, we use a ResNet56 “Short” (i.e. with residual connections) and “NoShort” (i.e. without residual connections). When linearizing a ResNet in Figure 2 (left), we see that the layer-wise extracted networks only slightly underperform the channel-wise extracted networks for $\mathrm{NAPL} > 4$. For NAPL 4 and below, we still see a bigger difference, which we conjecture is due to quantization artifacts, as only very few free slope parameters remain in this case. The gap is significantly bigger when linearizing a feedforward network (right).
Figure 2: Comparing the test accuracies on Cifar100 of partially linearized networks using a channel-wise or layer-wise partial linearization approach, starting from a ResNet56 Short (left) and ResNet56 NoShort (right).
Figure 3: Comparing the histograms of networks extracted via partial linearization (left) versus standard ResNets (right). The red color indicates a lower $\omega$ (left) or a lower depth (right).
# 4.4 Shape of the Extracted Networks
In this Section, we analyze the shape of the partially linearized networks from Section 4 (ImageNet dataset) as a histogram of path lengths. For reference, we also include the histogram of path lengths for the non-quantized ResNets; we omit the zero-density points resulting from the block length of 2 for better legibility. In Figure 3 (right), we see that the resulting distribution closely resembles a binomial distribution, as predicted by Veit et al. [2016]. Observed discrepancies from the theoretical binomial distribution in shallower models can be explained by architectural components beyond linear and PReLU layers, present in the actual network but excluded from the simplified theoretical model. In Figure 3 (left), we observe that the extracted networks also contain a mixture of short and long paths, fairly similar to the standard ResNets. Note that this is not trivial: these networks are derived from a fixed-depth model and could very well have remained fixed-depth; their shape can be regarded as an emergent property of optimization.
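The binomial shape predicted by Veit et al. [2016] follows from counting paths: each residual block either contributes its nonlinear branch or is skipped, and both options carry equal path weight. A small sketch of this idealized model (which, as noted above, ignores architectural components beyond linear and PReLU layers):

```python
from math import comb

def resnet_path_length_hist(num_blocks):
    # fraction of input-output paths that traverse exactly k of the
    # num_blocks residual blocks' nonlinear branches; branch and skip are
    # equally weighted, giving a Binomial(num_blocks, 1/2) distribution
    total = 2 ** num_blocks
    return [comb(num_blocks, k) / total for k in range(num_blocks + 1)]
```

For a network with many blocks, the mass concentrates around path length `num_blocks / 2`, i.e. most paths are of intermediate nonlinear depth.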
# 4.5 Limitations
In this Section, we discuss the limitations of our experimental results. First, in the experiment from Section 4, the partially linearized networks are extracted via gradient descent after training, which is necessary to limit the impact of the drastically different trainability properties of variable-depth and fixed-depth networks on our experiment. There is no guarantee that networks trained from scratch behave the same way, although it appears likely. Also, since we are dealing with a non-convex optimization problem that we optimize via a local gradient descent strategy, there is no guarantee that the extracted networks represent the exact sub-networks with the best possible test accuracy given their target shape, i.e. that we reach a global minimum in both the channel-wise and layer-wise case. However, since merely the (relatively short) post-training phases differ between the two approaches, and the network has already reached convergence before partial linearization, we argue that optimization is unlikely to make a difference in this comparison. Because of this limitation of our experimental setup, the results should be viewed as one more indication, alongside a vast corpus of similar observations in past literature (ref. Section 2), that points to the same conclusion.
Another limitation of this experiment is that the partially linearized network using the channel-wise approach has slightly more free parameters compared to the layer-wise approach. To be exact, the parameter difference amounts to 7786 out of $\sim 26 \cdot 10^6$ parameters, or $\sim 0.03\%$ of the total. We address this concern in the Appendix Section B.3, where we present additional experiments showing that this small difference is highly unlikely to cause the performance gap observed.
Finally, one could argue that the performance gap we measured is a result of some other artifact of our experimental setup and cannot purely be attributed to the differences in the shape of the network. We addressed this issue in Section 4.3, where we see that, except for very low NAPL values, the gap mostly vanishes when starting from a residual network. This supports our claim that the performance gains measured are indeed a result of the variable-depth shape of the network.
# 5 Discussion
In this work, we investigate a potential inductive bias induced by the function space of variable-depth networks, i.e. networks composed of long and short paths. First, using a simple analytical argument, we establish that the function space described by residual networks truly is different from the one described by feedforward networks, given that we constrain the networks to have the same shape (i.e. width and depth), as one would in a realistic setting. We show that it is not possible to find an equivalent reparametrization of a residual network as a feedforward network, and that simple constructions realizing such reparametrizations involve doubling the network’s width, along with other limitations. This makes it plausible that the performance of variable-depth and fixed-depth networks can differ due to factors beyond trainability.
Next, since there is “no free lunch” in machine learning, any claim of an architectural inductive bias must be verified on real data. However, after an extensive study of relevant literature, we come to understand that disentangling trainability from inductive bias is exceptionally challenging in this context, as deep feedforward networks suffer from many numerical issues during training, whereas comparable residual networks do not. Issues related to the uncontrolled growth of activation and gradient scales appear relatively tractable and can be addressed through normalization layers and specific warm-up strategies. The compression of the singular value spectrum of activations across layers (the vanishing rank issue), however, seems more difficult to manage. As shown by Huang et al. [2020] (Figure 1), the inner product structure of inputs is affected from the very first layer, rendering even shallow networks unsuitable for a direct comparison by simple training. It remains unclear whether the performance gap that persists in the literature, even after attempts to fix the numerical issues of deep feedforward networks, is truly beyond such issues.
For these reasons, in this work, we opt for a substantially different strategy in our experiments. Starting from a fully-trained deep feedforward network, we give the network the ability to turn some of its channels fully linear (channel-wise approach) and apply regularization pressure that punishes every nonlinear unit to the same degree. We observe in Section 4.4 that the resulting emerging sub-networks contain a mixture of long and short paths, not unlike a standard ResNet. Then, we repeat the same procedure, but constrain the network to only keep paths of the same length (layer-wise approach). Finally, we compare the resulting generalization performance of the networks extracted using both approaches, where we match networks of the same nonlinear depth, which we quantify as the average number of nonlinear units encountered on a path through the computation graph of the network (NAPL). We observe a significant performance gap between the extracted variable-depth architectures and their fixed-depth counterparts, even when controlling for average depth and parameter count, as we push the network’s nonlinear capacity toward the lower end. We further observe that this gap mostly disappears when the initial architecture is a ResNet, confirming that it arises from the constraint of allowing only long paths. We interpret this as further evidence that variable-depth networks outperform fixed-depth networks on natural data beyond mere trainability. This finding aligns with trends observed in prior work and strengthens the case that variable-depth architectures can offer a genuine inductive advantage over fixed-depth networks. | Residual connections remain ubiquitous in modern neural network architectures nearly a decade after their introduction. Their widespread adoption is often credited to their dramatically improved trainability: residual networks train faster, more stably, and achieve higher accuracy than their feedforward counterparts. 
While numerous techniques, ranging from improved initialization to advanced learning rate schedules, have been proposed to close the performance gap between residual and feedforward networks, this gap has persisted. In this work, we propose an alternative explanation: residual networks do not merely reparameterize feedforward networks, but instead inhabit a different function space. We design a controlled post-training comparison to isolate generalization performance from trainability; we find that variable-depth architectures, similar to ResNets, consistently outperform fixed-depth networks, even when optimization is unlikely to make a difference. These results suggest that residual connections confer performance advantages beyond optimization, pointing instead to a deeper inductive bias aligned with the structure of natural data. | [
"cs.LG",
"cs.AI"
] |
# 1 Introduction
Recent advancements in machine learning have produced foundation models. These models are notable for their capacity to generalize across diverse tasks and datasets, extending beyond the confines of their training data [70, 149, 157]. Their task- and data-agnostic character [12] distinguishes them from traditional models, offering a more flexible and adaptable paradigm. However, the application of these models to tabular data analysis is often hindered by simplifying assumptions, particularly in complex real-world settings. We perceive that work on foundation models for tabular data sometimes conflates different problems. First, it focuses primarily on ML on “isolated tables” [146], a perspective that might be legitimate in certain scenarios but fails to reflect the realities of intricate data ecosystems. Second, multi-table methods using, for example, graph neural networks (GNNs) [38, 120], while effective in capturing relational structures, often assume information completeness within tables, neglecting crucial semantic context that is required for understanding the data generated by real-world applications. Although current models display impressive generalization, the oversimplification of tabular data presents a significant gap that must be addressed to unlock the full potential of foundation models within complex data environments.
Recent work suggests that foundation models contain an implicit world model [1, 93, 145]. Although world models can, in principle, be based purely on statistical associations or other forms of implicit structure, we hypothesize that, for structured data, explicitly modeling semantic context may promote greater generalizability and robustness. Motivated by this perspective, we introduce Semantically Linked Tables (SLT), which leverage semantic relationships as the foundation for world modeling in structured data. We acknowledge that tables are inherently linked to operational knowledge. This knowledge includes both declarative and procedural components. It is often created to facilitate the development of applications and is usually encoded within diverse artifacts. When combined with world knowledge, these artifacts act as a unifying mechanism. They form a “semantic frame” that
Figure 1: Data richness plotted against model capabilities. Data richness ranges from relational data (single tables and relational datasets, e.g., as tables linked in a database) over Semantically Linked Tables to (operational) world knowledge: general world knowledge about relevant entities, types, and events; declarative operational business knowledge such as rules and process models; and procedural knowledge such as agent logic in natural language and application logic as code. Model capabilities range from single-task over multi-task to foundation models (e.g., GBDTs, LTM, FMDB, FMG, FMSLT).
governs data operations (i.e., how applications write data to databases). This semantic frame typically resides externally to databases that store the actual application data - see Fig. 1.
We propose Foundation Models for Semantically Linked Tables (FMSLT) as models to integrate operational knowledge, including declarative and procedural aspects, to ground tables within their real-world context. This proposal directly addresses the shortcomings of existing models that oversimplify tabular data by neglecting the rich operational and semantic context in which real-world data is embedded. This grounding encompasses intra- and inter-table relationships, rich contextual metadata, and procedural logic. Prior work highlights that the lack of explicit reasoning capabilities limits model performance, especially in scenarios requiring multi-hop and cross-table interactions, as observed in text-to-SQL tasks [113]. By capturing latent interactions and enabling a deeper understanding of data processes, FMSLTs aim to unlock the potential of machine learning on structured data. To illustrate how existing approaches fall short, consider the following example contrasting a vanilla ML approach and an FMSLT within an SLT scenario. Figure 2 showcases a simplified supply chain involving a manufacturer of configurable goods (in this case computers) with an associated webshop. The webshop allows the configuration of computers with compatible hardware elements, taking into account information from the warehouse and the availability of items. While this example is drawn from a business context, similar complexities arise in other domains such as healthcare, where operational contexts are equally critical for robust data-driven decision making. Here, SLT encompasses components such as the product catalog, products with their configurable components, warehouse management, and supply tracking. 
For instance, when predicting internal material restocking requirements during production, a typical machine learning approach would be constrained to a company’s order history, or perhaps a manually curated subset of data from past analysis, limited by the underlying data complexity, cf. [101, 81]. However, for more reliable predictions, it is crucial to recognize that effective material restocking relies on multiple SLT intricacies - see Fig. 2 for an associated sample multi-table schema (for a legend and more details see the Appendix):
First, declarative knowledge, e.g., that the required material was replaced by a substitute in the product component graph, that a component is exchangeable with lower-cost components, or that some material has a lower failure rate due to an improved manufacturing process.
Second, procedural knowledge, e.g., that the material is no longer recommended as the default material, or that a user-specified configuration is not compliant with manufacturing constraints as defined in the product configuration logic of the webshop.
Third, world knowledge, e.g., regarding factory disruptions, risk of blocked supply routes, or supplier financial issues; geopolitical issues should be considered long-term but are beyond the essential scope.
To get a holistic view, an FMSLT would leverage the interacting SLT components in context. FMSLTs require operational knowledge grounded in real-world contexts, which is typically not publicly available. While close collaboration between domain experts and researchers remains essential to obtain and contextualize such knowledge, we also anticipate that synthetic data will play an important role in enabling research by simulating operational scenarios and addressing privacy and accessibility challenges. This work highlights the limitations of existing tabular models, introduces FMSLTs as a new research direction, and aims to foster these collaborations for impactful real-world applications.
# 2 Semantically Linked Tables
Figure 2: Mockup supply-chain: Left: Multi-table schema. Right: Webshop example.
(Figure 2 depicts a multi-table schema with tables such as PRD, PROPT, PROPT_VAL, CMP, RMAT, BOM, INV_LOC, CFG_CMP_CN, PORD, SHPMT, INV_TRX, and SUPP, color-coded into business data, declarative knowledge, world knowledge, and procedural knowledge; world-knowledge annotations include natural disasters such as the Taiwan earthquake and the vulnerability of chip giants, blocked supply routes, and supplier financial issues.)
Taking a relational view on data recognizes the inherent interconnectedness of tables up to a certain degree, as data is stored across multiple tables linked by foreign keys. However, not all context required for understanding is available in this form. SLT addresses this by recognizing the context-richness of data within real-world applications, available in forms beyond just relational data. Unlike relational data, which primarily consists of structured data and transactions, SLT integrates both declarative (e.g., data models, rules, process models) and procedural (e.g., source code) operational knowledge that defines data usage and interpretation. This combination of relational data and operational knowledge is crucial for extracting meaningful insights. Relational data provides the factual basis, while operational knowledge provides the context, explaining data relationships and changes within data processes. SLT is characterized by its interconnectivity, reflecting the relational structure of entities [39], its context-richness, grounded in declarative and procedural knowledge, its dynamic nature, mirroring the evolving data landscape, and its often large scale. Beyond procedural and declarative knowledge, there is also world knowledge, which comprises both domain-specific and general knowledge that does not reside within a specific application or context. In many real-world environments, the available data resembles an archipelago of semi-isolated information islands. These “islands,” which are individual tables, are typically understood only by their creators and domain specialists. Each table, or cluster of related tables, embodies both application-specific details and an implicit conceptualization of the domain. Ideally, table schemas (including names and column headers) would possess semantic richness, enabling direct interpretation. 
However, in practice, these schema elements frequently act as opaque shorthands, demanding significant additional contextualization [171]. This fragmentation into semi-isolated tables substantially hinders the derivation of comprehensive insights and the establishment of meaningful connections across the overall data landscape. This opacity exceeds the limited context offered by relational database systems (e.g., column/table names, foreign keys), which fail to capture the rich semantic context of tables. Research on training machine learning models on relational data highlights the challenges of extracting meaning from such interconnected datasets [120]. Insufficient metadata contributes to data “swamps” [102], where data volume, coupled with semantic ambiguity, hinders information retrieval. Current efforts to improve tabular data utilization focus on metadata enrichment, including using large language models (LLMs) to automatically generate metadata, enhance table metadata, and improve concept matching via enriched ontologies [102]. Simultaneously, other approaches aim to improve dataset search and discovery through metadata enhancements [14, 91, 27, 3]. Beyond discoverability, rich metadata improves downstream tasks: for example, contextual information improves time series forecasting [160] and enhances tabular data analysis [32], particularly on text-heavy benchmarks.
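The gap between what a relational schema exposes and the operational knowledge needed to interpret it can be made concrete with a toy example. All table names, column shorthands, and the glossary below are hypothetical, invented to echo the opaque abbreviations discussed above:

```python
# Two toy tables linked by a foreign key. The schema alone exposes the link
# (order.sup_id -> supplier.id) but not the meaning of opaque columns like "shp_cd".
orders = [
    {"id": 1, "sup_id": 10, "shp_cd": "X2", "amt": 120.0},
    {"id": 2, "sup_id": 11, "shp_cd": "R9", "amt": 75.5},
]
suppliers = [{"id": 10, "nm": "Acme"}, {"id": 11, "nm": "Globex"}]

# Operational knowledge living outside the database (e.g., in application
# code or a data dictionary): "shp_cd" encodes the shipping contract type.
SHP_CD_MEANING = {"X2": "express, insured", "R9": "regular, uninsured"}

def enrich(order, suppliers):
    """Join via the foreign key and attach the out-of-band semantics."""
    supplier = next(s for s in suppliers if s["id"] == order["sup_id"])
    return {**order,
            "supplier_name": supplier["nm"],
            "shipping": SHP_CD_MEANING[order["shp_cd"]]}

enriched = [enrich(o, suppliers) for o in orders]
```

Without the `SHP_CD_MEANING` mapping, which no foreign key or column name reveals, the joined rows remain semantically opaque; this is precisely the kind of context SLT aims to capture.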
# 2.1 Elements of Foundation Models for SLT
Training foundation models requires carefully curated data mixtures to achieve desired downstream functionality [110]. Recognizing that SLT intrinsically links tables to diverse declarative and procedural operational knowledge, accommodating these different knowledge types during pre-training is crucial for developing grounded table foundation models capable of generating contextualized representations. This integration extends beyond processing isolated tables, allowing the capture of intricate relationships within real-world operational data. We detail this knowledge and its integration in the following sections.
# 2.1.1 Declarative Knowledge
Within the context of SLT, declarative knowledge is central for representing domain concepts, their interrelationships, and the foundational information required for reasoning and decision-making. For example, in the webshop scenario illustrated in Fig. 2, declarative knowledge includes the available computer components, their specifications, and compatibility relationships between them. This type of knowledge also encompasses the rules and constraints that govern the configuration process, often expressed as conditional statements (e.g., “This mainboard supports only DDR5 RAM”). In practice, such rules and policies are essential for ensuring that applications operate according to business requirements and are typically distributed across various artifacts and formats. The effective application of declarative knowledge relies on proper representation, usually captured in structured repositories across the application domain, including ontologies, knowledge graphs (KGs), data dictionaries, and domain glossaries. These artifacts act as formalized repositories for capturing, structuring, and accessing the information necessary to understand and execute processes effectively. Ontologies, ranging from simple taxonomies to complex structures, provide a formal representation of domain-specific concepts and their relationships, enabling reasoning and inference [134, 52, 53]. KGs, created by instantiating an ontology with actual data, offer a flexible and concrete way to represent declarative knowledge, structuring information as a graph, with nodes representing entities and concepts, and edges representing their relationships. This rich representation of interconnected information is particularly useful in capturing domain and foundational knowledge, as it provides a direct connection to relational data, a perfect fit for representing SLT. 
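As an illustration, the DDR5 compatibility rule from the webshop example can be captured declaratively as a small set of triples plus a constraint check. This is a minimal sketch; the component identifiers and the triple layout are our own invention, not drawn from the cited scenario:

```python
# Declarative knowledge as (subject, predicate, object) triples: facts about
# components and the compatibility relationships between them.
triples = {
    ("mb_z790", "type", "mainboard"),
    ("mb_z790", "supports_ram", "DDR5"),
    ("ram_ddr5_32", "type", "ram"),
    ("ram_ddr5_32", "ram_standard", "DDR5"),
    ("ram_ddr4_16", "type", "ram"),
    ("ram_ddr4_16", "ram_standard", "DDR4"),
}

def objects(subject, predicate):
    """All objects reachable from `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def compatible(mainboard, ram):
    """Rule: a RAM module fits iff its standard is among those the board supports."""
    return objects(ram, "ram_standard") <= objects(mainboard, "supports_ram")

compatible("mb_z790", "ram_ddr5_32")  # True: the board supports DDR5
compatible("mb_z790", "ram_ddr4_16")  # False: DDR4 is not supported
```

The same triples can be queried for other purposes (listing all RAM modules, explaining a rejected configuration), which is exactly the reusability that makes declarative representations attractive.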
General domain KGs like BabelNet [107], DBpedia [4], Wikidata [151], and YAGO [136, 62] have demonstrated the potential of this approach. Since the debut of Google’s Knowledge Graph in 2012, numerous organizations across various domains have adopted KGs for semantic search, recommendations, fraud detection, risk assessment, and other applications [63, 64].
Integrating Declarative Knowledge: A major challenge in artificial intelligence (AI) involves combining the strengths of symbolic and neural approaches. Symbolic AI offers interpretability and logical structure, while neural networks excel at pattern recognition and adaptation. For an overview of neurosymbolic approaches, see [100, 152] and for reasoning on KGs, see [127, 31, 95, 25]. Declarative knowledge, often encoded in domain KGs, application logic, and process definitions, is inherently symbolic. This symbolic representation provides crucial context for understanding the interconnectedness of data. It naturally integrates with data-driven learning in a neurosymbolic framework. This integration aligns with the core concept of SLT, which recognizes that tables exist within an ecosystem of interconnected resources. FMSLTs must effectively use this symbolic knowledge to “ground” tables within their real-world context. This integration of symbolic reasoning with data-driven learning defines neurosymbolic AI [8, 169, 35, 28, 29, 46, 86].
Recent developments in LLMs provide new possibilities for neurosymbolic reasoning. Prominent prompting strategies such as Chain-of-Thought (CoT) [158], Tree-of-Thoughts (ToT) [164], and Graph-of-Thoughts (GoT) [9] can be viewed as neurosymbolic approaches: the LLM generates intermediate reasoning steps (symbolic representations) to guide problem-solving. However, limitations of LLMs in functional linguistic competence [98] can impact their reasoning consistency and performance. Approaches like Proof-of-Thought [43, 126] seek to address these shortcomings by employing formal logic verification of LLM-generated outputs.
Another viable path in this direction is the idea of graph foundation models (FMG) [99]. These models seek to overcome the limitations of task-specific GNNs. However, their success depends on addressing the primary challenge of leveraging vast and diverse graph data to achieve positive transfer across various tasks [94, 99]. All these integrated approaches suggest a future where AI systems effectively combine logical reasoning and adaptive learning to tackle complex challenges, such as integrating declarative knowledge and constraints.
# 2.1.2 Procedural Knowledge
Within a system, procedural knowledge, which embodies how things are done, is crucial to understanding dynamic data processes. In contrast to declarative knowledge, which describes what is known, procedural knowledge specifies the processes, rules, and logic that govern data creation, manipulation, and utilization. For example, in the webshop scenario shown in Fig. 2, procedural knowledge defines the step-by-step logic for configuring a computer. As illustrated, the user first selects the intended purpose, such as training or inference (see also Alg. 1 in the Appendix). Based on this selection, procedural logic dynamically guides the user through subsequent configuration steps, presenting compatible hardware components tailored specifically for the chosen purpose. In contrast, declarative knowledge describes the available hardware components and their characteristics. Procedural knowledge thus ensures correct decision sequences and compatibility at each stage. Such procedural knowledge typically appears as (proprietary) source code, formulas, or application logic, encoding rules, constraints, and workflows within the system. In the context of SLT, capturing procedural knowledge is crucial for understanding the operational semantics of linked tables.
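A minimal sketch of such procedural logic, loosely mirroring the purpose-driven flow of the webshop configurator (the catalog entries and VRAM thresholds are hypothetical, not taken from Alg. 1):

```python
# Procedural knowledge: the step-by-step logic that decides which components
# are offered next, given the purpose chosen in the previous step.
CATALOG = {
    "gpu": [{"name": "gpu_80gb", "vram": 80}, {"name": "gpu_16gb", "vram": 16}],
}

def configure(purpose):
    """Guide the configuration: the purpose selected first constrains every
    later step (here: minimum GPU memory for training vs. inference)."""
    if purpose not in ("training", "inference"):
        raise ValueError(f"unknown purpose: {purpose}")
    min_vram = 40 if purpose == "training" else 8
    options = [g["name"] for g in CATALOG["gpu"] if g["vram"] >= min_vram]
    return {"purpose": purpose, "gpu_options": options}

configure("training")   # offers only the 80 GB GPU
configure("inference")  # offers both GPUs
```

Note how the branching logic itself, not any table, determines which rows of the catalog a user ever sees: this control flow is the procedural knowledge that source code encodes and that a schema alone cannot express.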
Integrating Procedural Knowledge: LLMs have revolutionized the automation of code-related tasks [21]. A key application is code generation from natural language, which is beneficial for program synthesis, maintaining legacy code, and illuminating underlying procedures, potentially aiding predictive analysis and decision-making. Recent research [45, 158, 174, 23, 124] has explored the potential of LLMs, especially those trained on code repositories, to incorporate and leverage procedural knowledge. To this end, both proprietary and open-source models have been adapted for code generation through continual pre-training [176, 77] or fine-tuning. Examples include Meta AI’s LLaMA [141] refined into Code Llama [123]; DeepSeek’s LLM [10] evolved into DeepSeek Coder [54]; and the Qwen team’s progression from Qwen [7] to Code Qwen [138]. We propose leveraging, during FMSLT training, the source code corresponding to the procedural knowledge that generates the data, thereby embedding the underlying functionality into the model. How to achieve this integration is an open research question; LLMs trained to excel at coding tasks are a natural way forward.
# 3 Data Challenges
The rapid growth of data has fueled advances in AI, particularly in machine learning (ML) and deep learning (DL). However, the full potential of AI remains unrealized due to various obstacles, primarily data silos [16, 144]. These silos are prevalent across different domains, exhibiting considerable similarities, especially in complex settings like healthcare and business operations. Both types of applications face challenges in data governance and access restrictions, stemming from disparate systems governed by varying policies, security protocols, and access controls. In business applications, fragmentation arises from competitive sensitivities, departmental divisions, or mergers and acquisitions. Healthcare is constrained by stringent privacy regulations (e.g., HIPAA, GDPR), patient consent requirements, and institutional policies [121]. Furthermore, both domains struggle with data heterogeneity and standardization, including inconsistent terminologies, varying data quality, and structural variations. Healthcare’s issues are compounded by variations in data acquisition protocols and equipment manufacturers [119], while complex operational systems often involve intricate knowledge bases spanning multiple domains, formats, and systems [58]. Moreover, incentives against data sharing and limited data discoverability, driven by competitive advantages, data investments, and the opaque nature of silos, further exacerbate these challenges. Consequently, these shared issues impede AI development by limiting the creation of large and diverse datasets essential for effective DL [147, 137, 20]. These challenges have contributed to healthcare and other regulated sectors lagging behind other domains in AI applications, such as computer vision and natural language processing. 
Therefore, a key objective is to circumvent the data siloing pitfalls that have hindered progress in these domains and to promote a more integrated approach to data utilization, potentially using synthetic data generation and privacy-enhancing techniques.
# 3.1 Declarative Data
Large, heterogeneous declarative data assets are increasingly managed as knowledge graphs (KGs) by major organizations [109]. These KGs often contain proprietary, confidential, and sensitive information, including identifiable data, business logic, and trade secrets. As a result, organizations almost always keep their KGs non-public [59]. There is growing interest in using such knowledge graphs, despite their sensitive nature, to enable advanced reasoning, generalization, and cross-organizational knowledge sharing, while upholding strict privacy requirements. A key objective is to build systems that can learn from KGs in a way that enables transferability and inductive reasoning across different organizations and domains. However, sharing or synthesizing KGs without violating privacy is an extremely challenging problem. Open-sourcing even partial KGs is generally infeasible, and generating synthetic data that preserves both privacy and semantic utility remains a significant technical challenge. Furthermore, public and domain-specific KGs often comprise completely disjoint entity and relation sets, making it difficult to train universally transferable models.
To address these and related challenges, recent research investigates the development of foundational models for (knowledge) graphs (FMGs) [41, 42]. FMGs are designed to learn universal and transferable graph representations, enabling inference over unseen nodes and relations. By adopting inductive generalization properties, these models can facilitate reasoning across diverse graphs with differing vocabularies. Training FMGs on open-source KGs represents an initial step toward this vision, but there remains a critical need for developing privacy-preserving capabilities to enable secure collaboration and deployment in real-world scenarios.
# 3.2 Tabular Data
A major challenge in tabular data research is the prevalence of “isolated information islands,” a notion that captures only part of the broader complexity present in real-world data. Most current tabular datasets used in research are assembled through web scraping, extracting tables from HTML pages or CSV files on platforms such as GitHub. This approach inherently reinforces the assumption of an information island by presenting data in isolated and disjoint formats. Large-scale efforts like WebTables [17], which contains 233 million tables from the Common Crawl project, and TabLib [36], with 627 million tables sourced from GitHub and Common Crawl, provide vast quantities of web-scraped tables. However, these datasets fundamentally diverge from the rich, interconnected, and context-dependent data found in operational systems.
The goal of tabular foundation models is to ground learning in data that better reflect the complexities and dependencies of real-world operational environments. While cleaner and more structurally diverse datasets exist—such as TURL [32], with 580 thousand Wikipedia tables, and GitTables [72], with over 10 million tables from GitHub CSVs—these also fail to capture the interconnected nature of operational data. A few datasets move closer to representing real-world structures by acknowledging multi-table scenarios. For example, WikiDBs [150] offers 10,000 relational databases that mimic real-world data, and LakeBench [33] focuses on data lake benchmarks for joinability and unionability tasks. However, these datasets remain limited in their representation of operational knowledge.
Recent efforts, such as RelBench [39], provide collections of datasets with associated tasks, representing notable exceptions in the field. Similarly, the Adventure Works dataset [101] – a sample database by Microsoft that simulates operational data for a fictional manufacturer – and the SALT dataset [81], which captures a snapshot of a real multi-table system, offer insights into real-world structures. Adventure Works is notable for its complexity, featuring over 70 tables with both simple and composite keys [104], while SALT is based on data from an actual operational system. However, both datasets lack the crucial operational knowledge intrinsic to SLT. Additionally, SALT’s data originates from a single source, limiting its generalizability for comprehensive SLT research. This analysis highlights a significant gap: operational knowledge remains missing from the datasets currently available to the research community. As a result, there is a persistent disconnect between the widely used web-scraped datasets and the complex realities of SLT. Bridging this gap is essential for developing foundation models that are grounded and applicable to real-world tabular data scenarios.
Synthetic tables: Tabular data, ubiquitous in various domains, is the product of both declarative and procedural processes, intrinsically merging these two distinct knowledge sources. This dual nature presents both opportunities and challenges. On one hand, the structured format of tabular data facilitates easier synthesis compared to purely declarative knowledge. On the other hand, maintaining operational integrity while ensuring sufficient diversity in generated data becomes a complex task. Further complicating this issue are the privacy and confidentiality concerns that restrict access to real-world operational data. As a result, synthetic data generation is emerging as a vital solution, with the goal of creating realistic yet anonymized datasets that capture operational complexities while safeguarding sensitive information. Such datasets, alongside privacy-sanitized real-world examples (e.g., [81]), are crucial for advancing FMSLT research. Simulators offer a viable approach, with tools like SupplySim generating synthetic supply chain data [19], and digital twins enabling realistic synthetic Electronic Health Records (EHRs) without compromising patient data [140, 154, 24]. These generated datasets should preserve crucial statistical properties and dependencies while simultaneously safeguarding sensitive information. Just as linking tables to operational knowledge is crucial for understanding real-world processes, linking synthetic data generation methods to the specific characteristics of data is vital for creating useful and representative synthetic datasets. Current approaches often fall short of capturing this complexity, particularly when dealing with SLT. For tabular data, significant progress has been made in single-table synthesis such as [84, 128, 162, 85]. However, capturing the interconnectedness of SLT requires multi-table synthesis, which presents a more complex challenge. 
Existing multi-table generation approaches, including Synthetic Data Vault [114] and PrivLava [18], utilize hierarchical and marginal-based methods. However, these methods face limitations in processing speed and scalability, particularly with increasing numbers of tables and attribute domain sizes. Moreover, they often struggle to capture the complex dependencies inherent in SLT. Diffusion models have shown promise in various data synthesis tasks due to their strong controlled generation capabilities [122]. Their application to tabular data, while initially limited to unconditional models [84, 170, 89, 78, 128], represents a promising pathway for generating synthetic datasets that augment real-world data within regulatory boundaries. This gap in guided, multi-table synthesis has recently been addressed by approaches like ClavaDDPM [112], which leverages guided diffusion for multi-table data generation. The development of robust and scalable multi-table synthetic data generation methods is therefore crucial for advancing research on FMSLT. These models require high-quality synthetic data that reflects the complex relationships and dependencies present in real-world environments with high fidelity. Furthermore, the use of secure sandbox systems could empower organizations to establish robust and privacy-preserving benchmarking environments [117].
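The referential-integrity requirement at the heart of multi-table synthesis can be made concrete with a toy sketch: synthetic child rows may only ever reference synthetic parent keys. This is a deliberately simplified stand-in for the hierarchical methods cited above, not an implementation of any of them; table and column names are hypothetical:

```python
import random

def synthesize_parent(n):
    """Synthetic 'customers' table with fresh surrogate keys."""
    return [{"id": i, "segment": random.choice(["retail", "business"])}
            for i in range(n)]

def synthesize_child(parents, n):
    """Synthetic 'orders' table: every foreign key must point at a synthetic
    parent, preserving referential integrity across the generated tables."""
    parent_ids = [p["id"] for p in parents]
    return [{"id": j, "customer_id": random.choice(parent_ids),
             "amount": round(random.uniform(5, 500), 2)} for j in range(n)]

random.seed(0)  # reproducibility of the sketch only
customers = synthesize_parent(3)
orders = synthesize_child(customers, 10)

# Integrity check: no order may dangle without a synthetic customer.
assert all(o["customer_id"] in {c["id"] for c in customers} for o in orders)
```

Real generators must additionally match the joint distributions across tables (e.g., orders per customer, segment-dependent amounts), which is exactly where marginal-based and diffusion-based approaches differ.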
# 3.3 Procedural Data
Coding LLMs are pre-trained (or continually pre-trained from general LLMs) on massive unlabeled code corpora supplemented with text and mathematical data. General-purpose LLMs prioritize large-scale text data with smaller code and math components. Large-scale, unlabeled code datasets for training LLMs include CodeSearchNet [73], Google BigQuery [61], The Pile [44], GitHub Code [142], ROOTS [87], The Stack [82], and The Stack v2 [96]. This reliance on publicly available code datasets overlooks a crucial aspect: the integration of proprietary operational knowledge. Connecting LLMs for code generation with proprietary corpora of procedures, including application logic and process definitions, represents a significant opportunity to enable a deeper understanding of the underlying context and operational logic. At the same time, proprietary source code repositories might be significantly smaller in size compared to their open-source counterparts.
Synthetic code: Recent work has demonstrated the effectiveness of including synthetic data in the training corpora of code models, as exemplified by Qwen2.5-Coder [71]. Building on the principle of generating synthetic code through a combination of symbolic AI and neural approaches, agent-based frameworks that emulate this hybrid strategy have recently emerged. These approaches address the need to combine the interpretability of symbolic systems with the learning capabilities of neural networks in a more flexible and reliable manner. For example, the Tree-of-Code framework [108] uses code execution results as decision tree nodes, enabling the exploration of multiple solution paths. Its CodeProgram paradigm decouples reasoning from execution, promoting flexibility and consistency in code generation. Similarly, the SOP-Agent framework [165] uses standard operating procedures encoded as decision graphs to guide agents through tasks, demonstrating how structured guidance can be combined with dynamic adaptation. In the same vein, [153, 2, 69, 111, 172, 92] proposed multi-agent frameworks for complex coding tasks. Such approaches could be leveraged to produce large volumes of training data from proprietary code bases via agentic flows [103].
# 4 Towards Foundation Models for SLT
This section provides a concise overview of the current landscape of model architectures for tabular data followed by a future outlook in the domain of operational world models.
Neural Tabular Models: The development of neural models for tabular data represents an active and burgeoning field of research, as explored in recent surveys [132, 37, 13, 5]. Traditionally, benchmarks for machine learning on tabular data have been dominated by tree-based methods [51] such as XGBoost [22] and CatBoost [115]. Recent neural approaches, however, are beginning to challenge and occasionally outperform these established techniques [166, 67].
Despite these advances, current neural tabular models largely overlook the inherent interconnectedness of data by treating tables as isolated entities, providing no straightforward method to integrate this essential aspect. Unlike image and text domains, where CNNs [88] and Transformers [148] have successfully captured transferable patterns, tabular data poses distinct challenges. The heterogeneity of data types (numeric, categorical, textual, etc.), along with missing values and the order-invariance of rows and columns [47], limits the direct application of standard neural architectures. Moreover, variations in encoding and numerical values introduce noise, complicating transfer learning and highlighting the need for models capable of handling multi-modal data [118, 68, 133, 135].
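The column order-invariance issue can be demonstrated in a few lines: a fixed-weight linear scorer, the simplest stand-in for a standard neural layer with position-tied weights, changes its output when the columns of a row are permuted, even though the row's content is unchanged (the weights and feature values here are arbitrary):

```python
# A row of tabular data and a naive linear scorer with position-tied weights.
row = [3.0, 1.0, 4.0]          # e.g., (age, children, rooms): order carries meaning
weights = [0.5, -1.0, 0.25]

def score(features):
    """Weighted sum: each weight is tied to a column *position*."""
    return sum(w * x for w, x in zip(weights, features))

original = score(row)                       # 0.5*3 - 1.0*1 + 0.25*4 = 1.5
permuted = score([row[2], row[0], row[1]])  # same values, shuffled columns
# 0.5*4 - 1.0*3 + 0.25*1 = -0.75: the model is not permutation-invariant
```

An architecture suited to tables must therefore bind weights to column *semantics* (e.g., via column-name embeddings) rather than to positions, which is one motivation behind the table-specific models cited below.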
LLMs have made remarkable progress, but they often struggle with temporal reasoning [74], which may contribute to their suboptimal performance in the tabular domain—particularly when temporal relationships are implicit in operational logic. Additionally, explicitly leveraging causal features has not consistently improved accuracy [106]. Similarly, self-supervised learning (SSL), despite its promise, encounters difficulties due to challenges in creating meaningful augmentations without generating out-of-distribution samples [129, 51]. Indeed, table-specific SSL methods [143, 90, 139] have generally not matched gradient boosting performance, with notable successes limited to specific contexts [65, 66, 79]. The lack of large, standardized, low-noise public datasets further hinders the development of robust and generalizable models in this area.
Early research primarily focused on small-scale tabular datasets [26, 168, 155, 65]. However, recent investigations have started addressing model scalability [97, 75, 66, 135, 116]. Additionally, approaches leveraging off-the-shelf LLMs through document-based contextualization of linked tables have emerged, enabling competitive predictive performance relative to deep learning methods [161]. Recent research is also actively exploring novel training procedures [76, 48, 60, 66], representation learning techniques [167, 6, 175], and architectures specifically tailored for tabular data [131, 83, 79]. Finally, leveraging graph-like representations of tabular data offers a promising future direction towards developing Foundation Models for Graphs (FMGs). For instance, [79] demonstrates this potential by employing star-shaped graphlets and a graph-based encoder with column embeddings, using a graph-attentional network to contextualize table entries with column names and neighboring entries. This approach, combining insights from Graph Neural Networks (GNNs) and LLMs, underscores the importance of grounding data in operational contexts alongside its graph-relational properties.
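The star-shaped graphlet construction described for [79] can be sketched as follows: each table row becomes a small graph whose center node represents the row and whose leaves are its cells, with edges labeled by column names. The dictionary layout below is our own simplification for illustration, not the cited paper's data structure:

```python
def row_to_star_graphlet(row, table_name):
    """Turn one table row into a star graph: a center node for the row,
    one leaf node per cell, and edges labeled with the column names."""
    center = f"{table_name}:row"
    nodes = [center] + [f"{col}={val}" for col, val in row.items()]
    edges = [(center, col, f"{col}={val}") for col, val in row.items()]
    return {"nodes": nodes, "edges": edges}

g = row_to_star_graphlet({"name": "Acme", "country": "DE"}, "supplier")
# center plus one leaf per column; every edge carries its column name,
# giving a graph encoder access to schema semantics, not just cell values
len(g["nodes"])  # 3
```

A graph-attentional encoder can then contextualize each cell with its column name and neighboring cells, which is the mechanism [79] exploits.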
Operational World Model: The integration of world models into foundation models for FMSLT is critical to unlocking the full potential of SLT. FMSLT’s core pillars combine declarative and procedural knowledge to represent domain-specific concepts and interactions, mirroring the dynamism of real-world environments. However, relying solely on these two types of knowledge is insufficient for certain predictive tasks. The inherent complexity of processes and the sparsity of real-world data necessitate incorporating broader world knowledge to address diverse challenges effectively.
Currently, LLMs exemplify the potential and limitations of leveraging world knowledge in foundational models. While LLMs excel at utilizing prior knowledge to infer underlying functions [130], they struggle with recognizing raw data patterns, performing rare numerical operations [163], and extrapolating beyond known data. Although numerical proficiency is vital within SLT, it represents only one aspect of the broad operational versatility required in technical domains.
This challenge of integrating world knowledge also arises prominently within Digital Twins (DTs), virtual replicas originally developed for automation and robotics. Traditional DTs primarily rely on mathematical modeling and system identification [105], but recent applications now extend into industrial process analysis [50], capturing sequences, rules, and constraints for dynamic monitoring and simulation. Emerging research further broadens DT applicability to business process analysis [125, 40]. However, most DT implementations offer oversimplified approximations, serving merely as “surrogate” world models that capture correlations without explaining causal dynamics [34].
Addressing these limitations requires comprehensive knowledge integration, inspired by world models from visual scene understanding [55, 56], to learn causal mechanisms and achieve richer representations. Future FMSLTs should integrate and unify diverse Digital Twin capabilities into a single foundational framework, supporting a broader spectrum of operational tasks. Adopting object-centric methods from visual domains [173, 49, 156, 80], FMSLTs can decompose complex systems into constituent entities, explicitly modeling their interactions. This approach would facilitate differentiable, end-to-end pipelines capable of grounding processes in comprehensive world knowledge, thus overcoming the limitations of localized and domain-specific representations.
# 5 Alternative Views
Our position advocating for FMSLTs must be understood within the context of existing research directions for machine learning on tabular data (see Fig. 1). To clarify the unique contribution of FMSLT, we categorize current approaches into three main setups:

1. Single-Table Data/Models: This line of research focuses on classical tabular learning tasks where data is confined to isolated tables, as argued in the position paper by [146]. Models like TabPFN [65, 66] excel in such scenarios, particularly for smaller datasets. However, by design, these approaches neglect the inherent interconnectedness and rich operational context present in many real-world data ecosystems, which is the central focus of our FMSLT proposal.
2. Multi-Table Relational Data/Models: Recognizing the limitations of single-table views, another direction focuses on relational structures across multiple tables, common in relational databases. Benchmarks like RelBench [120] drive progress here, with methods such as GraphSAGE [57] and CARTE [79] effectively capturing relational dependencies using graph neural networks or specialized transformers. While valuable, these models often assume that the necessary context is fully captured by the relational schema (e.g., foreign keys), potentially missing crucial operational knowledge encoded elsewhere.
3. Relational/Tabular Data with Additional Knowledge: FMSLT belongs to this emerging category, which aims to augment tabular and relational data with richer contextual information, such as declarative and procedural operational knowledge. This includes metadata, business logic, process models, and source code defining data generation and usage. Although we share the goal of integrating AI and databases, our approach differs significantly from related work such as TAG [11].
TAG primarily focuses on enhancing existing query systems (text-to-SQL), leveraging LLMs to better handle complex analytical questions requiring domain/world knowledge and computations on existing data. In contrast, FMSLT aims for a broader scope: enabling end-to-end predictive tasks by grounding tables in their real-world operational context using versatile foundation models. FMSLT explicitly incorporates complex relationships, rich metadata, and procedural logic derived from the operational context to create a comprehensive framework capable of handling operational complexities beyond question-answering, highlighting the need for benchmarks that capture this broader functionality. We acknowledge certain observations in [146] regarding the challenges of tabular data but argue against viewing it as a standard modality such as text or images. Unlike these, tables often lack inherent semantic self-containment; their interpretation requires the external operational context that SLT aims to capture, drawing parallels to the need for context in Saussurean semiotics [30]. Furthermore, while supervised training on relational data as explored in RelBench [120] improves generalization over isolated tables, it generally lacks the crucial in-context learning (ICL) capability found in foundation models. A key strength of these models is their ability to adapt to previously unseen tasks through ICL [15], generalizing from just a few examples provided at prediction time. This capacity for ICL enables FMSLTs to adjust to new information or contexts at inference without requiring full retraining, which is a major advantage for dynamic real-world scenarios and for preserving privacy. Although this paper focuses on defining the necessary capabilities and data requirements for FMSLT rather than proposing a new architecture, we identify potential pathways forward. 
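The in-context learning setting described above can be grounded with a small sketch: linked rows, together with glossary context, are rendered as text lines suitable for inclusion as few-shot examples in a prompt. The template and the column glossary below are hypothetical, not the format of any cited method:

```python
def verbalize(order, supplier, glossary):
    """Render a linked pair of rows, plus glossary context, as one line of
    text suitable for use as an in-context example in an LLM prompt."""
    merged = {**supplier, **order}  # linked rows flattened into one record
    parts = [f"{col} ({glossary.get(col, 'no description')}): {val}"
             for col, val in merged.items()]
    return "; ".join(parts)

# Hypothetical glossary carrying the operational context a bare schema lacks.
glossary = {"amt": "order amount in EUR", "nm": "supplier name"}
example = verbalize({"amt": 120.0}, {"nm": "Acme"}, glossary)
# "nm (supplier name): Acme; amt (order amount in EUR): 120.0"
```

A handful of such verbalized examples, appended to a prediction query, is what allows a foundation model to adapt at inference time without retraining.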
One direction involves developing “table-native” models, similar to TabPFN [65, 66] or TabICL [116], designed for direct interaction with tabular data for tasks such as regression and classification. Achieving FMSLT capabilities via this route would necessitate extensive pre-training on diverse SLT datasets to embed the crucial declarative and procedural operational knowledge. A second pathway leverages the power of LLMs, potentially using verbalizations of table structures and their links, similar to approaches like GTL [159]. Here, the rich functional capacity of LLMs could be harnessed to learn operational concepts through pre-training or fine-tuning.

Abstract: Current research on tabular foundation models often overlooks the complexities of large-scale, real-world data by treating tables as isolated entities and assuming information completeness, thereby neglecting the vital operational context. To address this, we introduce the concept of Semantically Linked Tables (SLT), recognizing that tables are inherently connected to both declarative and procedural operational knowledge. We propose Foundation Models for Semantically Linked Tables (FMSLT), which integrate these components to ground tabular data within its true operational context. This comprehensive representation unlocks the full potential of machine learning for complex, interconnected tabular data across diverse domains. Realizing FMSLTs requires access to operational knowledge that is often unavailable in public datasets, highlighting the need for close collaboration between domain experts and researchers. Our work exposes the limitations of current tabular foundation models and proposes a new direction centered on FMSLTs, aiming to advance robust, context-aware models for structured data.
"cs.LG",
"cs.AI",
"cs.DB"
] |
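The LLM verbalization pathway can be illustrated with a minimal sketch. The table names, columns, join logic, and operational rule below are hypothetical assumptions for illustration only, not drawn from any published FMSLT or GTL implementation:

```python
# Hypothetical sketch: verbalizing a record from semantically linked tables
# into natural language for an LLM prompt (the GTL-style pathway).
# All table names, fields, and the rule below are invented for illustration.

orders = {101: {"customer_id": 7, "total": 250.0, "status": "delayed"}}
customers = {7: {"name": "Acme Corp", "segment": "enterprise"}}

# Procedural operational knowledge, encoded here as a plain-text rule.
RULES = ["Orders above 200.0 for enterprise customers trigger a manual review."]

def verbalize_order(order_id: int) -> str:
    """Ground one order in its linked customer record and operational rules."""
    order = orders[order_id]
    customer = customers[order["customer_id"]]  # follow the semantic link
    lines = [
        f"Order {order_id}: total={order['total']}, status={order['status']}.",
        f"Placed by {customer['name']} (segment: {customer['segment']}).",
        "Operational rules: " + " ".join(RULES),
    ]
    return "\n".join(lines)

prompt = verbalize_order(101)
```

The resulting string could be prepended to a prediction query, letting the LLM condition on both the linked tables and the procedural context that an isolated table would lack.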
# Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor
Alexandra Olteanu* Microsoft Research
Su Lin Blodgett Microsoft Research
Agathe Balayn Microsoft Research
Angelina Wang Stanford University
Fernando Diaz Carnegie Mellon University
Flavio du Pin Calmon Harvard University
Margaret Mitchell Hugging Face
Michael Ekstrand Drexel University
Reuben Binns Oxford University
Solon Barocas Microsoft Research
Corresponding email: alexandra.olteanu@microsoft.com
# Abstract
In AI research and practice, rigor remains largely understood in terms of methodological rigor—such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about AI capabilities. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception—in addition to a more expansive understanding of (1) methodological rigor—should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also aim to provide useful language and a framework for much-needed dialogue about the AI community’s work by researchers, policymakers, journalists, and other stakeholders.
# 1 Rigor in AI Research and Practice
Rigor remains a subject of intense debate in science [e.g., 2, 18, 41, 50, 66, 79, 101, 104, 126], a debate we are unlikely to resolve here. We thus keep our goal relatively modest: to help broaden our collective perspective of what rigorous AI work should entail. We argue this is critically needed as relying on impoverished conceptions of rigor can have an undesirable, yet formative impact on the quality of both AI research and practice—heightening concerns ranging from unsubstantiated claims about AI systems [e.g., 33, 112, 124, 138, 145] to a plethora of unintended consequences [e.g., 78, 107, 123].
Rigor in AI: The debate surrounding rigor in science has by no means evaded the AI community [e.g., 62, 85, 114, 129]. In AI research and practice, rigor remains largely understood in terms of methodological rigor, which is typically conceptualized as whether mathematical, statistical, or computational methods are correctly applied; whether new methods, models, or systems are tested on large-scale or complex benchmarks and compared with a sufficient number of competing methods, models, or systems; whether the methods or analyses can scale or generalize; or whether the phenomena under analysis were—in contrast to more qualitative work—mathematically formalized and quantified [e.g., 20, 24, 47, 57, 62, 73, 127, 129, 139, 155]. These conceptualizations are often shaped by implicit assumptions that more complex methods and architectures and larger data samples are better [e.g., 7, 45, 81, 136, 139], by the common practice of relying on benchmark-driven evaluations [e.g., 84, 141, 155], and by a dominant mode of thinking oriented towards algorithmic formalism (centered on ideas of objectivity/neutrality, abstract and mathematical representations, and universalism/generalization) [e.g., 20, 42, 57, 87, 147]. Yet such conceptualizations of rigor may fail to demand, for instance, that benchmarks be fit-for-purpose or proven to measure what they claim to be measuring [e.g., 26, 110, 142], or that the knowledge or assumptions the work relies on be reliable or valid [e.g., 26, 90, 96, 112], putting into question the integrity and reliability of any conclusions drawn based on such benchmark-centered evaluations or that depend on questionable assumptions.
Table 1: Overview of the six facets of rigor in AI research and practice. For each facet, we highlight what the facet is concerned with (descriptive overview)—i.e., what the object of concern is for the facet—and what the facet asks for (evaluative overview).
While such failures threaten the scientific integrity of AI research, the consequences of putting insufficiently rigorous AI into practice are often seen as the purview of responsible AI—which is nevertheless generally seen as separate from rigor, as it is understood as more concerned with ethics, stakeholders, societal impacts and harms, and real-world deployment scenarios. However, it is often only in real-world deployment scenarios or when considering stakeholders that the inadequacies and impact of current approaches to (lack of) rigor in AI research and practice become clear. We thus recast this distinction, and argue that by making visible and demanding attention to such failures, responsible AI asks researchers and practitioners to uphold principles of scientific integrity in their work. Although responsible AI is often seen as out of scope for an AI researcher or practitioner not engaged in that space, any scientist should see rigor as well within scope. In other words, even though unrigorous AI work implicates what are often seen as responsible AI consequences, we argue that a broader notion of rigor that accounts for what produces these consequences needs to be considered by all AI researchers and practitioners.1
We take the position that doing rigorous AI work requires broadening our understanding of what rigor in AI research and practice should entail by drawing on work by the responsible AI community.2 To this end, we foreground a wider range of critical considerations from responsible AI literature (broadly construed) for rigorous AI work. Instead of prescribing rigid criteria for rigorous AI work, our goal is to provide useful scaffolding for much-needed dialogue about and scrutiny of the community’s work by researchers as well as policymakers, journalists, and other stakeholders.
# 2 Facets of Rigor in AI Research and Practice
We foreground six key facets of rigor—epistemic, normative, conceptual, methodological, reporting, and interpretative—that we argue AI research and practice should consider. While it may be difficult to draw clear boundaries between some of these facets, we discuss them separately to provide distinct lenses for reflecting on and interrogating the quality and scientific integrity of AI work (see Table 1).
Figure 1: Simplified overview of the objects of concern for each facet of rigor: background knowledge (epistemic rigor), normative considerations (normative rigor), theoretical constructs (conceptual rigor), methods (methodological rigor), research findings (reporting rigor), and inferences & claims (interpretative rigor).
Specifically, for each facet, we provide a descriptive overview—what the facet is concerned with, i.e., what the object of concern is for that facet—and an evaluative overview—what the facet asks for. All facets of rigor are inherently about the choices we make about an object of concern (e.g., epistemic rigor is concerned with background knowledge, while conceptual rigor is concerned with theoretical constructs). Having these distinct facets encourages researchers and practitioners to think carefully about the choices they make for each object of concern, and disentangles possible debates about these choices—e.g., separates debates about which norms we should follow (normative rigor) from what background knowledge should inform the work (epistemic rigor). For each facet, we discuss examples of mechanisms that help promote rigor along that facet, which can include a mix of processes and desiderata; while we foreground each mechanism for only one of the facets, we note that some mechanisms might help foster rigor along more than one facet (e.g., engaging with construct and internal validity concerns can foster both methodological and interpretative rigor).
Figure 1 provides a simplified overview of the objects of concern for each facet of rigor and of possible common dependencies among them. It illustrates how limiting our conception of rigor to methodological concerns may obfuscate how our work and the claims we make are shaped by a variety of choices that both precede and succeed any methodological considerations, even when some of those choices we make remain tacit or implicit (e.g., as is often the case for common disciplinary norms, standards, or practices [53, 155]). The figure also underscores how it may be difficult to make good choices for “downstream” objects when “upstream” objects are poorly chosen. For instance, making poor construct choices (conceptual rigor) may reduce our chances of operationalizing those constructs well (methodological rigor). The different facets of rigor may, however, also be tangled in complex relations of mutual interdependency [e.g., 42, 54, 143]—as sometimes methodological choices may, for instance, in turn “limit the structure of one’s theoretical con[structs]” [42]—and present choices may (and usually do) impact future work. For example, a lack of interpretative rigor—when ambiguous or baseless claims are being made—can have epistemic consequences for future work relying on those claims [e.g., 39, 63]. We further unpack these below:
# 2.1 Epistemic rigor
What background knowledge informs which problems are addressed and how? Is this background knowledge clearly and explicitly communicated? Is the background knowledge appropriate, well-justified, and appropriately applied?
Before considering any methodological questions, we have to contend with what it is that is being investigated, why it merits consideration, and what knowledge is “not under investigation, but [is] assumed, asserted, or essential” [58]. That is, we need to clarify the facts and background assumptions upon which the work relies in order to establish what the foundation for the work is and whether that foundation is sound. A frequently given example for epistemic failures is work about “the ability to predict unobservable latent character traits, including homosexuality, political ideology, and criminality, from photographs of human faces or other records of outward appearance” [8]. Such work assumes that we can predict actions (what one does) or inner character (what one likes, thinks, or values) based on physical attributes (how one looks). Although this assumption has its roots in “physiognomy”—regarded as pseudo-scientific as it relies on a set of epistemically baseless and extensively debunked claims [134, 152]—pockets of AI research recurrently draw upon it [8, 152]. This example also illustrates a broader class of epistemic failures: cases where a failure to scrutinize background assumptions may lead to work whose validity depends on whether existing methods, tools, or systems function—or can be made to function—on given tasks when they do not or cannot [112], including because the tasks are conceptually or practically impossible or because there is no reliable evidence that those methods, tools, or systems are fit-for-purpose or reliable [96, 112, 116, 155].
Epistemic rigor is thus not only about making sure that the background knowledge that new work builds on and how that knowledge is acquired are appropriate and appropriately applied, but also that that knowledge is appropriately justified [34]. Rigorously applying statistical or computational methods (methodological rigor) will do little to mitigate epistemic concerns if the problems being tackled or the assumptions underpinning the use of some methods are baseless, nonsensical, unethical, or grounded in pseudo-scientific or scientifically shallow work [8, 78, 112, 130]. Why and what is being studied, built, or deployed [13]; why, what and whose problems are being prioritized [21, 121]; what implicit or tacit assumptions are being made about the stated problems or solutions under consideration [91, 112]; and whether the choices of problems and methods are drawing on valid and well-founded evidence [22] are all examples of the types of key epistemic considerations that we should engage with. By contrast, easily available or common artifacts (such as datasets, methods, tools, or systems) tend to serve as “tools of opportunity, not instruments of epistemic rigor” [151], with researchers and practitioners often prioritizing work based on whether there are some existing artifacts available without appropriately scrutinizing the knowledge and assumptions underpinning them [155].
Epistemic rigor, however, does not necessarily require specific epistemological commitments or choices but rather that those commitments and choices be made explicit. While doing so may not definitively answer whether some problems or assumptions are baseless, nonsensical, or unethical, articulating epistemic commitments and the existing knowledge the work relies on lays down the grounds on which people can have discussions and work out disagreements [30].
Mechanisms to promote epistemic rigor: Ensuring work is appropriately grounded in past literature [e.g., 8] and that underlying assumptions are made explicit and appropriately interrogated [e.g. 91, 112, 155] can help foster epistemic rigor, and in turn all other facets of rigor (§2.2–§2.6).
Appropriate grounding: Common goals in AI often include producing new insights, theories, or artifacts, providing evidence in support of existing theories, evaluating existing artifacts, or debunking prior work. A failure to review, appropriately situate within, and acknowledge prior literature, and how it influenced the questions being asked or the solutions being considered, can cast doubt on whether any meaningful progress towards those goals was actually made [e.g., 60, 66]. When such failures become systematic, research communities risk “curl[ing] up upon themselves and become self-referential systems that orient more [internally]” [35] and developing their separate “terminology, source texts, and knowledge claims” [5]. Indeed, concerns that “[f]indings within [a research] community are often self-referential and lack [quality]” [65] are critical as this can make it difficult to trace the provenance of central claims that form the foundation that new work builds on or is motivated by. It can also result in an overall narrowing of what questions AI research tackles and the epistemic marginalization of certain viewpoints [4, 80, 96]. Appropriate grounding requires due diligence when reviewing and acknowledging work in one’s subfield and related fields [16, 66, 99].
Interrogating assumptions: Any research work necessarily relies on assumptions about what is important, what is possible, or what represents sufficient or useful evidence [e.g., 115]. Not only does a lack of epistemic rigor make it hard to situate new work and the knowledge it produces in the context of existing knowledge it may draw from, relate to, corroborate, or dispute, but a lack of due diligence about the background knowledge underlying the work further risks bringing to the fore long-debunked claims from other disciplines—e.g., risking the reanimation of physiognomic methods [8, 134, 152]. Interrogating the assumptions underpinning such tasks—e.g., why would outer attributes be useful proxies for someone’s character?—can expose them as a category error where observable traits (e.g., skin tone) are incorrectly treated as direct indicators of internal states (e.g., personality). Making background assumptions explicit thus facilitates our ability to scrutinize them [12, 155].
# 2.2 Normative rigor
Which disciplinary, community, organizational, or personal norms, standards, values, or beliefs influence the work and how? Are these norms, standards, values, or beliefs clearly and explicitly communicated? Are these norms, standards, values, or beliefs appropriate and appropriately followed?
Because of differences in underlying evidentiary standards, background assumptions, goals and ideals, and theoretical frameworks, different disciplines and communities have different ways of determining what methods to use, what types of evidence are needed or sufficient, and why the resulting work or resulting knowledge matters [29, 34]. The assumptions and theories that underpin our work are not derived “out of thin air, but as a function of our experiences with the world in general and specifically with our colleagues, through dialogue, and the literature” [58], all of which can shape the type, direction, and quality of research [42, 54, 67]. Disciplinary and community norms are intertwined with a researcher’s personal norms and drivers [43], what catches their interest or advances their goals. Making explicit how this mix of influences shapes the AI community’s work can help others understand them and enable debates on their appropriateness.
Consider the emerging research on developing AI personas as tools to simulate users and human study participants [e.g., 6, 51, 82, 108]. While this research reflects common norms and beliefs in AI communities around scalability and efficiency [e.g., 3, 20], by aiming to replace human participants these tools may fail to align or may even conflict with foundational values and norms around representation, participation, inclusion, and understanding [3, 146] that underpin many types of work this research seeks to support, such as user experience or social science studies with human participants. Such value conflicts “cannot be alleviated with better training or improved model performance alone” [3].
Common beliefs and expectations in the AI community—such as the goal of producing generalizable findings that are often abstracted away from any use context [20, 68]—have led to a reduction of problems and use scenarios “to a common set of representations or affordances” [20], and thus (even if implicitly) to de-valuing how context shapes datasets and tools. Reliance on undisclosed normative assumptions about problem statements, artifacts, or constructs that are contested or value-laden, yet treated as if they are neutral or generally applicable, can however be particularly misleading when a “field employs the language of procedural adherence to project a sense of certainty, objectivity, and stability” [56]. While dominant values in AI like performance, generalization, and efficiency [20] may encourage extensive evaluations across as many metrics, benchmarks, and baselines as possible [20, 155], these evaluations also tell us little about when, whether, or which improvements are necessary, desirable, or translate to meaningful benefits when deployed. Such expectations and norms—along with resource-related considerations and constraints, such as costs and strict timelines [61, 153, 155]—nonetheless influence what type of work gets prioritized and done [48].
Mechanisms to promote normative rigor: All research is shaped by normative considerations. Normative rigor asks us to make these considerations explicit—such as via ethical and positionality statements [107] or other disclosure practices—and can include aspects related to expectations around the significance and impact of research [e.g., 58, 104] or what ethical norms to follow [e.g., 106].
Research significance: Central to normative considerations are expectations about how our work and its outcomes should intervene in the world, or why the work matters and is appropriate. The goal of research is often to “generate knowledge that will have positive practical impact” [10] or that advances the field [104], with norms around what work is worthwhile driven by desires to reward such work [32, 148]. There is however also an increasing recognition that the appraisal of the work’s quality and significance should also include the work’s potential for harm and not only for benefits [10, 27, 58]. Such appraisal typically rests on claims about the possible impacts from either the work’s process or its outcomes. This echoes the concept of consequential validity from social science measurement scholarship [97, 142], which asks us when determining a measurement instrument’s (in)validity to consider the consequential basis both of its use and of possible inferences (along with actions those inferences may entail) [70, 97]. Reckoning with the impact of our work thus promotes not only discussions about whether current norms around what constitutes good work are appropriate, but also promotes methodological (§2.4) and interpretative rigor (§2.6).
Positionality and ethical statements: Researchers’ personal, disciplinary and institutional backgrounds, their lived experiences, and their goals motivate and shape how they approach their work. Positionality statements are a mechanism meant to make such considerations explicit in order to help others contextualize the research and research outcomes [83, 107]. Positionality, however, does not only encompass aspects related to researchers’ beliefs and values, but also those related to what knowledge they draw on, how they know what they know, and how they make methodological choices [17]. Thus, it can also aid or compromise epistemic (§2.1) and methodological rigor (§2.4). Ethical statements can further complement positionality statements by foregrounding the ethical concerns researchers grappled with or mitigated before or while conducting the work [107]. Yet this practice of disclosing whether such concerns were considered and how they shaped any methodological choices (if at all) remains inconsistently adopted across AI communities.
# 2.3 Conceptual rigor
Which theoretical constructs are under investigation? Are these theoretical constructs clearly and explicitly articulated? Are these theoretical constructs appropriate and well-justified?
Assume a researcher wishes to evaluate whether a model “hallucinates.” In AI research, the construct of “hallucination” has, however, been used to refer to several distinct types of system behaviors [93], including cases of generating content which is nonsensical, which contains factual errors, which is not in the input data, which is not in the training data, or a mix of these behaviors. Further, all these different understandings of what it means for a model to “hallucinate” are markedly different from the more common use of the term that requires an ability to perceive, feel, or have sensory experiences [e.g., 38], and the term can thus carry meanings incompatible with AI systems. To understand what a researcher’s evaluation of whether the model “hallucinates” means, we need to know which conceptualization they use, and if that conceptualization is sensible.
While contested constructs—those that have competing or even conflicting definitions, like “hallucination,” “value alignment,” “AGI,” or “human-likeness”—are increasingly common in AI research and practice, their definitions often remain elusive, ambiguous, or poorly specified [e.g. 23, 26, 142]. Without clarity about what specifically we are analyzing, measuring, or striving for, it can be hard to assess progress or make any useful or reliable claims. Work can also rely on constructs that are inconsistent with any theoretical tradition, such as treating identity categories as fixed and objective rather than continuous and constructed [90] or more generally failing to recognize how some constructs are innately fluid, non-deterministic, and fuzzy [19]. Such a lack of clarity can also hinder replicability and reproducibility or facilitate speculative post-factum interpretations, yielding possibly unsound and unfounded claims, or a conflation of proxy measurements (which may or may not measure any version of the underlying construct) with the construct under analysis—thus also undermining epistemic (§2.1), methodological (§2.4), and interpretative rigor (§2.6).
Mechanisms to promote conceptual rigor: Conceptual rigor requires attention to conceptual clarity—which construct we are after and how it is defined, appropriate conceptual systematization— the process by which the definition is made specific, and terminological rigor—that the terms used to refer to a construct do not harbor meanings that can lead to a misinterpretation of what the construct is.
Conceptual clarity: A growing number of objects in AI research are ambiguous or poorly specified, or else are objects for which we lack consensus about what they are or what they are for. The examination by Saphra and Wiegreffe [122] of what is meant by “mechanistic interpretability” is instructive: not only does the term have multiple competing meanings, but those meanings also reflect distinct disciplinary orientations and epistemic origins; a lack of clarity about which meaning is under use can obfuscate not only what the work does, but also why and how the work is done. Similar critiques have been made about the lack of conceptual clarity about what unlearning [40], bias [25], model collapse [125], interpretability [86], or generalization [87] mean. While we see growing concerns about the lack of conceptual clarity surrounding many aspects of AI research, from how desired capabilities are described to what metrics to optimize for [26, 62, 71, 122, 142], these remain largely overlooked in discussions about research integrity and quality in AI.
Conceptual systematization: In practice many constructs involve a “broad constellation of meanings and understandings” [1], and working with them requires making choices about which meanings to use, “narrowing [them] into an explicit definition” [142]. The process of conceptual systematization asks researchers to engage not only with a high-level construct in the abstract, but to grapple more concretely with what it means in the context of the work they are conducting and how it relates to empirical observations or other constructs. Conceptual systematization is a prerequisite for rigorous measurement, (computational) specification, empirical analysis, and theory development [e.g., 1, 62, 110, 142], and is thus a prerequisite for methodological rigor (§2.4).
Terminological rigor: Conceptual rigor depends on terminological choices and what those choices communicate. Many terms in AI often carry over meanings from the human realm or other disciplinary contexts that are incompatible with AI systems, and can mislead [24, 36, 87, 115], suggest “unproven connotations,” or lead to “collisions with other definitions, or conflation with other related but distinct concepts” [87]. Blili-Hamelin et al. [24] note that “when researchers equate human faculties with model proxies... [t]his rhetorical move is enabled by using colloquial terms like ‘imagination’ without considering whether it corresponds to the human faculty,” leading to inflated claims. Clear, precise language “help[s] dispel speculative, scientifically unsupported portrayals of [AI] systems, and support more factual descriptions of them” [37], and thus clear scientific communication.
# 2.4 Methodological rigor
What methods are being used? Are these methods and their use clearly and explicitly described? Are these methods appropriate, well-justified, and appropriately applied?
Rigor is often “conceptualized as the appropriate execution of [methods]” [104], with discussions about rigor in AI centering around methodological considerations, from data collection and analysis, to model training and tuning, to experimental practices [e.g., 62, 128, 129], and notions of theoretical rigor (of algorithmic and mathematical analysis) and empirical rigor (of statistical and experimental approaches). Theoretical rigor typically seeks precise problem formulation using well-defined mathematical notation, accompanied by results (e.g., propositions, theorems, lemmas) with correct proofs—e.g., a clear and correct sequence of mathematically derived steps that support the stated result. Empirical rigor, in turn, seeks comparison of a proposed algorithm with a sufficient number of alternative—often competing—approaches, ablation studies, and some form of statistical analysis (e.g., power analysis, significance tests, or simply error bars). Renewed calls for methodological rigor have often been motivated by reproducibility concerns [49, 74, 92], with checklists and documentation practices proposed as a way to support reproducibility and replicability by standardizing methodological choices, recording them, and making them explicit.
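As a minimal illustration of the statistical analyses mentioned above, the sketch below reports each method's mean score with a standard-error bar over repeated runs, plus a paired run-by-run comparison. The scores are invented for illustration; this is a sketch of the reporting practice, not a full significance test:

```python
# Minimal sketch of empirical-rigor reporting: rather than a single score,
# report a mean with a standard-error bar over repeated runs (e.g., seeds),
# and compare methods on paired runs. Scores below are made-up illustrations.
import statistics

def mean_and_stderr(scores):
    """Mean and standard error of the mean over repeated runs."""
    m = statistics.mean(scores)
    se = statistics.stdev(scores) / len(scores) ** 0.5
    return m, se

baseline = [0.71, 0.69, 0.72, 0.70, 0.68]  # same five seeds for both methods
proposed = [0.74, 0.70, 0.73, 0.75, 0.71]

mb, seb = mean_and_stderr(baseline)
mp, sep = mean_and_stderr(proposed)

# Paired per-run differences are more informative than two pooled means,
# since run-to-run variation affects both methods alike.
diffs = [p - b for p, b in zip(proposed, baseline)]
md, sed = mean_and_stderr(diffs)

print(f"baseline {mb:.3f} +/- {seb:.3f}, proposed {mp:.3f} +/- {sep:.3f}")
print(f"paired improvement {md:.3f} +/- {sed:.3f}")
```

Even this toy example makes visible whether an apparent improvement is large relative to run-to-run variability, which a single headline number cannot.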
Mechanisms to promote methodological rigor: Methodological choices are shaped by considerations about how to operationalize what we know—e.g., the background knowledge—to substantiate existing knowledge or produce new knowledge or artifacts. As methodological concerns have been central to discussions about rigor in AI, here we only focus on foregrounding mechanisms for aspects of methodological rigor we believe deserve added attention, including construct validity [26, 71, 143] and the need for methodological standards, particularly for high-risk domains [149]. For more comprehensive discussions of methodological rigor we direct the reader to [e.g., 62, 74, 87, 128, 129, 143].
Construct validity: Many problems in AI, such as assessments of systems and phenomena, are concerned with measurement [71, 143]. Even when there is conceptual clarity, for these problems ensuring construct validity—i.e., that measurement instruments (e.g., benchmark metrics) appropriately capture the construct of interest (e.g., reasoning, understanding, values)—is foundational to meaningful measurement and thus methodological rigor. A growing body of work has proposed frameworks and best practices for assessing the validity of measurements [e.g., 71, 89, 106, 140] and illustrated that existing measurement instruments exhibit a range of concerns that threaten their ability to measure what they purport to measure [26, 59, 102]. For example, Northcutt et al. [102] show widespread label errors in benchmark test datasets which can “destabilize ML benchmarks,” thereby “lead[ing] practitioners to incorrect conclusions about which models actually perform best in the real world.”
Compliance with methodological standards: Establishing methodological standards often involves extensive, community-wide debates about what methods are appropriate and when. Petzschner [109] notes how the failure of ML models intended for medical settings “to generalize to data from new, unseen clinical trials [...] highlight[s] the necessity for more stringent methodological standards,” particularly for high-risk settings [149]. This has precipitated calls for developing standards in health datasets in AI [9]. The standardization of information retrieval evaluation practices via NIST’s Text Retrieval Conference (TREC) was fundamental in revitalizing the research community and impacting the development of web search engines [118]. However, even though established standards can help a research community promote more rigorous debates about methodological choices, they may not by themselves ensure that methodological choices are explicitly reported or reflected on; Geiger et al. [53] hypothesize “that in fields with widely-established and shared methodological standards, researchers could have far higher rates of adherence to methodological best practices [...] but have lower rates of reporting that they actually followed those practices.” By the same token, not making a choice of methods and presenting a kitchen sink (of metrics, methods) [155] undermines critical engagement with why the methods are appropriate. Compliance with standards should include explicit reflections on methodological choices and their application, including aspects related to constraints that researchers and practitioners had to navigate, such as access to participants, computing, or other resources [107].
# 2.5 Reporting rigor
What research findings are being reported? Are these research findings clearly communicated? Is the presentation of research findings appropriate and well-justified?
The understanding of research findings depends on what is communicated about these findings and how. Reporting rigor is concerned with making sure research findings are clearly and appropriately communicated and justified. For instance, assume a researcher wishes to compare the performance of different recommender systems. Even when reporting only aggregated results, multiple options are possible, including averaging across ratings (treating each rating as equally important regardless of which user provided it or what item it was provided for), users (treating each user equally by first computing performance at user level), or items (treating each item equally by first computing performance at item level). Depending on the data distribution (e.g., number of ratings per item/user), these differences may lead someone to draw different or even contradictory conclusions about which model performs best [e.g., 105]. Some ratings may also be more difficult to predict than others [e.g., 117], and any aggregation can obfuscate where exactly the model fails or succeeds [31, 87, 117]. Further, even such simple aggregations can introduce tacit assumptions about what should be optimized for, e.g., to ensure good predictive performance across all users versus all items [31]. Making these choices explicit can facilitate others’ understanding of what specifically is being reported and why.
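How much the choice of aggregation level matters can be made concrete with a minimal sketch. The error values and user names below are hypothetical, chosen only to illustrate how per-rating and per-user averages of the same underlying errors can diverge:

```python
# Sketch: how the choice of aggregation level changes a reported metric.
# The per-rating absolute errors below are hypothetical, for illustration only.
errors = {
    "alice": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9],  # many easy ratings
    "bob": [0.9],                                            # one hard rating
}

# Averaging across ratings: every rating counts equally, so prolific
# users dominate the aggregate.
all_errors = [e for user_errors in errors.values() for e in user_errors]
per_rating = sum(all_errors) / len(all_errors)

# Averaging across users: each user's mean error counts equally,
# regardless of how many ratings they contributed.
user_means = [sum(es) / len(es) for es in errors.values()]
per_user = sum(user_means) / len(user_means)

print(f"per-rating MAE: {per_rating:.3f}")
print(f"per-user MAE:   {per_user:.3f}")
```

With this skewed distribution of ratings per user, the per-rating average looks far better than the per-user average, so two papers reporting "the" mean error of the same model could reach opposite conclusions.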
Such choices of what findings to report, and how—much as with choices of research questions, theoretical constructs, or methods—are shaped by our beliefs, values, and preferences, as well as disciplinary norms and incentives. Negative results are, for instance, less likely to be reported or published [133, 150], potentially “lead[ing] others to develop overly optimistic ideas about scientific progress on a particular topic” [10], and what is reported may be cherry-picked, such as “uncommonly compelling examples to illustrate the output of generative models” [10]. And as the example above also illustrates, even for the same findings, how the findings are reported or what about the findings is reported matters—such as choosing to report inferential uncertainty (“how precisely we have estimated the average for each group,” when interested in estimating aggregate outcomes) versus outcome variability (“how much individual outcomes vary around averages for each group”) [154]. Communicating the former risks leading people to “overestimate the size and importance of scientific findings” [154].
Poor choices of how findings are reported can thus also undermine interpretative rigor (§2.6)—what inferences or claims are made—as any statement of findings inherently embeds some interpretations while possibly hindering others. While this makes it difficult to fully separate concerns about reporting rigor from those about interpretative rigor, we foreground them separately to help draw attention to the different choices that often can be made about which findings to report and how.
Mechanisms to promote reporting rigor: Pre-study practices around documenting and disclosing how a study would be run and what about it would be reported, such as pre-registration [e.g., 64, 103, 137], and practices around reporting more granular, disaggregated results [e.g., 14, 31] can promote reporting rigor.
Pre-registration and reporting practices: Reflecting on what to report about a study before the study is even conducted can help mitigate concerns about making such choices post-factum by hypothesizing after the results are known or selectively presenting only from positive results [76, 103, 119]. Pre-registration is the “practice of specifying what you are going to do, and what you expect to find in your study, before carrying out the study” [137]. While pre-registration is seen as a mechanism for promoting more reliable and replicable research findings by differentiating between confirmatory research—where hypotheses are tested and pre-registration is required—and exploratory research—where hypotheses are generated and pre-registration is not required [103, 132], it also asks us to reflect on and pre-set what metrics and measures of success will be used and reported on to minimize post-factum hypothesizing [64].
Disaggregated evaluations: As illustrated by the earlier example, aggregate measurements and metrics can obscure rare phenomena and information about where systems or models tend to fail or succeed, and can mask important effects [14, 31, 117, 129]. To mitigate such concerns, many have called for reporting disaggregated evaluations [31, 68, 87], which “have proven to be remarkably effective at uncovering the ways in which AI systems perform differently for different groups of people” [14] and deemed a “critical piece of full empirical analysis” [129]. While conceptually simple, “their results, conclusions, and impacts depend on a variety of choices” [14], and they require careful consideration and justification—including by engaging with domain experts [135]—of how different choices of why, when, what, and how to conduct and report on such evaluations shape what inferences can be drawn and their impact. This however can in turn help make such choices explicit and encourage debates about their appropriateness.
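A small sketch can show how an aggregate metric masks group-level failure. The predictions, labels, and group memberships below are hypothetical, constructed only so that the overall accuracy hides a complete failure on one group:

```python
# Sketch: an aggregate metric can hide where a model fails.
# Predictions, labels, and group assignments are hypothetical.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
labels = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def accuracy(p, l):
    """Fraction of predictions that match the labels."""
    return sum(int(x == y) for x, y in zip(p, l)) / len(p)

# Aggregate evaluation: one number for the whole test set.
overall = accuracy(preds, labels)

# Disaggregated evaluation: the same metric, computed per group.
by_group = {}
for g in sorted(set(groups)):
    idx = [i for i, gg in enumerate(groups) if gg == g]
    by_group[g] = accuracy([preds[i] for i in idx], [labels[i] for i in idx])

print(f"overall accuracy: {overall}")
print(f"by group: {by_group}")
```

Here the overall accuracy of 0.5 conceals that the model is perfect on group "a" and wrong on every example in group "b"; only the disaggregated report reveals this.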
# 2.6 Interpretative rigor
What inferences are being drawn from the research findings? Are these inferences clearly and explicitly communicated? Are these inferences appropriate and well-justified?
Assume an AI system is used to solve International Math Olympiad (IMO) problems due to achieving high accuracy on a benchmark designed to assess mathematical reasoning [e.g., 55, 120]. As Salaudeen et al. [120] note, based on the system’s performance on the benchmark, two alternative claims could be considered: the system can “solve linear algebra questions from a textbook accurately” or the system “has reached human-expert-level mathematical reasoning.” Reliably moving from performance on a benchmark to either of the two claims requires clarity about any background assumptions concerning the feasibility of an AI system reaching human-level reasoning abilities [e.g., 138] (epistemic and normative rigor, §2.1–2.2), about how both “mathematical reasoning” and “human-expert-level” are conceptualized (conceptual rigor, §2.3), about whether the benchmark actually measures mathematical reasoning ability (methodological rigor, §2.4), and about how findings were reported—e.g., do we know what the performance on linear algebra questions is? (reporting rigor, §2.5). As others have noted, a pernicious trap “is to believe that [methods] bestow a natural interpretative clarity and self-reflexive awareness on the researcher” [95], and thus “scientists must acknowledge the social and interpretative character of scientific discovery” [22].
Both making and understanding knowledge claims require interpretation. There are often multiple perspectives through which empirical, experimental, or theoretical evidence could be interpreted, and these different perspectives may not only lead to different aspects being emphasized but also lead to different or even contradictory conclusions being drawn. While reporting rigor (§2.5) is concerned with choices about what findings to report and how, interpretative rigor is concerned with the choices we make when we move from findings to descriptive claims—e.g., the system accurately solves a given task—or to prescriptive or normative claims—e.g., the system should be used to replace humans. Such claims rarely just directly follow from findings, but are shaped by choices and considerations related to all other facets of rigor (§2.1–2.4)—they are influenced and made in the context of background knowledge, the relationship with the theoretical constructs under use, and the methods used to produce the findings and their limitations. While critical to how any work ultimately intervenes in the world, how claims are arrived at is often overlooked in discussions about rigor in AI, with the interpretation of results—i.e., what they mean, what people should do next—being treated as self-evident. The criterion for interpretative rigor is also “not whether the same interpretation would be independently arrived upon by different” people, but rather that “based on the evidence provided, is a given interpretation credible” or if “given all the same source information, would the interpretation stand up to scrutiny as being a justified, empirically grounded, exposition of the phenomenon” [104].
Mechanisms to promote interpretative rigor: Documenting and justifying evidence informing or situating possible claims can promote interpretative rigor, like via documenting AI-related artifacts [e.g., 15, 52, 100] or by engaging with possible threats to internal/external validity [e.g., 85, 106, 131].
Documenting AI-related artifacts, their limitations, and their impacts: Transparency about any AI-related artifacts (e.g., datasets, models, systems) under use can facilitate others’ understanding of what claims may or may not be possible by providing added context about their characteristics and intended uses. To help scaffold and promote more transparent reporting on AI-related artifacts, researchers have developed tools, resources, or what Boyd [28] terms “context documents” (e.g., for datasets [15, 52, 111], models [100], services [113]). Complementing these efforts, others have argued for and put forward practices for recognizing and disclosing the limitations—i.e., “drawbacks in the design or execution of research that may impact the resulting findings and claims” [131]—and impacts—i.e., actual or possible consequences from the research, development, deployment, or use—of AI artifacts [11, 27, 46, 72, 88, 94, 98, 107, 131]. Limitations, in particular, directly impact how research findings can be interpreted—a failure to recognize them can lead to unsubstantiated claims, while a failure to disclose them can further lead to misinterpretations or misuse of claims. Impacts, in turn, can further affect prescriptive and normative claims and their implications, such as how one should act given the research findings.
Internal and external validity: Interpretative rigor requires not only careful deliberation on what the claims are about, but also whether there is appropriate evidence to support the claims. Establishing whether the research findings constitute appropriate evidence requires engaging with both internal validity—whether there are unaddressed issues with study design or execution that may compromise the findings such as data leakage [e.g., 75] or improper baseline comparisons [e.g., 85]—and external validity—whether the findings generalize to different settings such as from one dataset [e.g., 110] or construct [e.g., 144] to another. For broader discussions, see Olteanu et al. [106] and Liao et al. [85].
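The data leakage threat mentioned above can be sketched in a few lines. The numbers below are hypothetical; the point is only that computing preprocessing statistics over the full dataset, before splitting off a test point, lets held-out information contaminate the training data:

```python
# Sketch: data leakage via preprocessing, a classic internal-validity threat.
# Toy values, for illustration only; the last value is the held-out test point.
data = [1.0, 2.0, 3.0, 4.0, 100.0]
train, test = data[:4], data[4:]

# Leaky: centering uses statistics computed over ALL data, so the extreme
# test value shifts the training features.
mean_all = sum(data) / len(data)
train_leaky = [x - mean_all for x in train]

# Correct: statistics come from the training split alone; the test point
# is centered using those same training statistics.
mean_train = sum(train) / len(train)
train_clean = [x - mean_train for x in train]
test_clean = [x - mean_train for x in test]

print(f"leaky training features: {train_leaky}")
print(f"clean training features: {train_clean}")
```

The leaky variant produces training features shifted by the outlying test value, so any downstream evaluation on that test point no longer measures generalization to truly unseen data.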
# Concluding Reflections
By making the case for better documentation [52, 100, 113], evaluation practices [68, 143, 155], development and deployment practices [12, 72, 112], and understanding of impacts and limitations [11, 27, 131], responsible AI research asks for greater scientific rigor. The AI community has too often cast responsible AI considerations as out of scope, but holds up research rigor as a virtue. In AI research and practice, however, rigor remains largely understood in terms of methodological rigor. Nevertheless, this can have unintended consequences as, for instance, “if the methodology is considered to be the sine qua non of scientificity, as it usually is, then there will be enormous pressures for the structure of all theories to accommodate to the theoretical structure embedded in the methodology [... with each] embedded theory involv[ing] its own value hierarchy” (emphasis original) [42].
We argue that rigor in AI means more than just methodological rigor, and in so doing we bring responsible AI as it pertains to six distinct aspects of rigor—epistemic, normative, conceptual, methodological, reporting, and interpretative—under the umbrella of an AI researcher and practitioner’s responsibility. By calling attention to these different facets of rigor, we also hope to provide the AI community with useful language that can help it raise, clarify, and examine a wider range of concerns about existing practices in AI work. Nevertheless, while a broader conception of rigor can improve research integrity and quality, rigor is not a panacea for all problems in AI research and practice. We do not claim that these facets are all-inclusive, but that they help demonstrate how expanding our conception of rigor beyond methodological considerations can help contribute evidence that work is rigorous.
# Alternative Views
AI work is already rigorous or rigorous enough: Some may view addressing methodological rigor concerns as sufficient for ensuring rigorous AI work; under this view, rigor equates to and is achieved by focusing on methodological concerns. As underscored throughout this paper, this view leaves unaddressed a range of concerns which have produced undesirable outcomes, including a reliance on pseudo-scientific assumptions [e.g., 134], treatment of social phenomena inconsistent with broader scholarly understanding [e.g., gender 44, 77], systems that are not fit-for-purpose or cause harm [e.g., 69, 112], and claims or use of language that impedes public understanding of AI [e.g., 37].
Non-methodological concerns are outside the purview of AI work: This view may arise because AI researchers and practitioners may see such concerns as outside the scope of core AI work [155]; they may also not see themselves as well-suited to addressing such concerns, either because it would be difficult to acquire the expertise needed, or because some concerns ought instead to be addressed by subject matter experts (with whom they may not have the time, desire, or resources to engage). We agree that engaging with subject matter experts can be valuable, and believe that AI work has been strengthened when it has done so [143]. Nevertheless, since all work involves choices about what problems are important, why they are important, and what precise objects are under investigation, and since AI work often claims real-world impact in or relevance to particular domains, we argue that it is impossible to do any AI work that does not implicate epistemic, normative, or conceptual questions—and thus researchers and practitioners must grapple with these concerns explicitly rather than implicitly.
All rigor concerns are validity concerns: Under this view, this presentation of rigor concerns simply reframes work already addressing well-understood validity threats in AI research [e.g., 71, 85, 106, 120, 143]. While some of the concerns and mechanisms we describe (e.g., construct clarity/validity, internal/external validity) appear in the literature on validity, many considerations, particularly those related to epistemic, normative, conceptual, and interpretative rigor, precede or are a prerequisite to questioning and establishing validity, and facilitate reflection about issues beyond validity. For example, a failure to interrogate one’s epistemological and normative commitments may result in systems that operationalize a construct as defined, but whose definition has been contested.
# Acknowledgments
We thank Michael Veale and Hanna Wallach for early conversations that have motivated this paper.
We are also grateful to the members of the STAC team at Microsoft Research NYC for their feedback.
# References
[1] Robert Adcock and David Collier. Measurement validity: A shared standard for qualitative and quantitative research. American political science review, 95(3):529–546, 2001.
[2] Daniel Adler and Randi Zlotnik Shaul. Disciplining bioethics: Towards a standard of methodological rigor in bioethics research. Accountability in Research, 19(3):187–207, 2012.
[3] William Agnew, A Stevie Bergman, Jennifer Chien, Mark Díaz, Seliem El-Sayed, Jaylen Pittman, Shakir Mohamed, and Kevin R McKee. The illusion of artificial inclusion. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pages 1–12, 2024.
[4] Nur Ahmed, Amit Das, Kirsten Martin, and Kawshik Banerjee. The narrow depth and breadth of corporate responsible ai research. arXiv preprint arXiv:2405.12193, 2024.
[5] Shazeda Ahmed, Klaudia Jaźwińska, Archana Ahlawat, Amy Winecoff, and Mona Wang. Field-building and the epistemic culture of ai safety. First Monday, 2024.
[6] Danial Amin, Joni Salminen, Farhan Ahmed, Sonja MH Tervola, Sankalp Sethi, and Bernard J Jansen. How is generative ai used for persona development?: A systematic review of 52 research articles. arXiv preprint arXiv:2504.04927, 2025.
[7] Chris Anderson. The end of theory: The data deluge makes the scientific method obsolete. Wired magazine, 16(7):16–07, 2008.
[8] Mel Andrews, Andrew Smart, and Abeba Birhane. The reanimation of pseudoscience in machine learning and its ethical repercussions. Patterns, 5(9), 2024.
[9] Anmol Arora, Joseph E Alderman, Joanne Palmer, Shaswath Ganapathi, Elinor Laws, Melissa D Mccradden, Lauren Oakden-Rayner, Stephen R Pfohl, Marzyeh Ghassemi, Francis Mckay, et al. The value of standards for health datasets in artificial intelligence-based applications. Nature Medicine, 29(11):2929–2938, 2023.
[10] Carolyn Ashurst, Solon Barocas, Rosie Campbell, and Deborah Raji. Disentangling the components of ethical research in machine learning. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 2057–2068, 2022.
[11] Carolyn Ashurst, Emmie Hine, Paul Sedille, and Alexis Carlier. Ai ethics statements: analysis and lessons learnt from neurips broader impact statements. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency, pages 2047–2056, 2022.
[12] Agathe Balayn, Natasa Rikalo, Jie Yang, and Alessandro Bozzon. Faulty or ready? handling failures in deep-learning computer vision models until deployment: A study of practices, challenges, and needs. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–20, 2023.
[13] Solon Barocas, Asia J Biega, Benjamin Fish, Jędrzej Niklas, and Luke Stark. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 695–695, 2020.
[14] Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, W. Duncan Wadsworth, and Hanna Wallach. Designing disaggregated evaluations of ai systems: Choices, considerations, and tradeoffs. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’21, page 368–378, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450384735. doi: 10.1145/3461702.3462610. URL https://doi.org/10.1145/3461702.3462610.
[15] Emily M Bender and Batya Friedman. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604, 2018.
[16] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610–623, 2021.
[17] Danielle Berkovic. Researcher positionality. Qualitative Research–a practical guide for health and social care researchers and practitioners, 2023.
[18] Frank J Bernieri. Rigor is rigor: But rigor is not necessarily science. Theory & Psychology, 1 (3):369–373, 1991.
[19] Abeba Birhane. The impossibility of automating ambiguity. Artificial Life, 27(1):44–61, 2021.
[20] Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. The values encoded in machine learning research. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency, pages 173–184, 2022.
[21] Abeba Birhane, Elayne Ruane, Thomas Laurent, Matthew S. Brown, Johnathan Flowers, Anthony Ventresque, and Christopher L. Dancy. The forgotten margins of ai ethics. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 948–958, 2022.
[22] Abeba Birhane, Atoosa Kasirzadeh, David Leslie, and Sandra Wachter. Science in the age of large language models. Nature Reviews Physics, 5(5):277–280, 2023.
[23] Borhane Blili-Hamelin, Leif Hancox-Li, and Andrew Smart. Unsocial intelligence: An investigation of the assumptions of agi discourse. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, volume 7, pages 141–155, 2024.
[24] Borhane Blili-Hamelin, Christopher Graziul, Leif Hancox-Li, Hananel Hazan, El-Mahdi ElMhamdi, Avijit Ghosh, Katherine Heller, Jacob Metcalf, Fabricio Murai, Eryk Salvaggio, et al. Stop treating ‘agi’ as the north-star goal of ai research. arXiv preprint arXiv:2502.03689, 2025.
[25] Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A critical survey of “bias” in NLP. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454–5476, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.485. URL https://aclanthology.org/2020.acl-main.485/.
[26] Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. Stereotyping norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, 2021.
[27] Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. Overcoming failures of imagination in ai infused system development and deployment. arXiv preprint arXiv:2011.13416, 2020.
[28] Karen L Boyd. Datasheets for datasets help ml engineers notice and understand ethical issues in training data. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2):1–27, 2021.
[29] E Brister. Epistemological obstacles to interdisciplinary research. Integration and Implementation Insights. https://i2insights.org/2017/10/31/epistemology-and-interdisciplinarity, 2017.
[30] Evelyn Brister. Disciplinary capture and epistemological obstacles to interdisciplinary research: Lessons from central african conservation disputes. Studies in history and philosophy of science part C: studies in history and philosophy of biological and biomedical sciences, 56:82–91, 2016.
[31] Ryan Burnell, Wout Schellaert, John Burden, Tomer D Ullman, Fernando Martinez-Plumed, Joshua B Tenenbaum, Danaja Rutar, Lucy G Cheke, Jascha Sohl-Dickstein, Melanie Mitchell, et al. Rethink reporting of evaluation results in ai. Science, 380(6641):136–138, 2023.
[32] Andrew Burton-Jones. Editor’s comments research article.
[33] Maarten Buyl, Hadi Khalaf, Claudio Mayrink Verdun, Lucas Monteiro Paes, Caio C Vieira Machado, and Flavio du Pin Calmon. Ai alignment at your discretion. arXiv preprint arXiv:2502.10441, 2025.
[34] Stacy M Carter and Miles Little. Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative health research, 17(10):1316–1328, 2007.
[35] Karin Knorr Cetina. Culture in global knowledge societies: Knowledge cultures and epistemic cultures. Interdisciplinary science reviews, 32(4):361–375, 2007.
[36] Allison Chen, Sunnie SY Kim, Amaya Dharmasiri, Olga Russakovsky, and Judith E Fan. Portraying large language models as machines, tools, or companions affects what mental capacities humans attribute to them. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pages 1–14, 2025.
[37] Myra Cheng, Alicia DeVrio, Lisa Egede, Su Lin Blodgett, and Alexandra Olteanu. "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI. arXiv preprint arXiv:2410.08526, 2024.
[38] Cleveland Clinic. Hallucinations, Last accessed May-2025. URL https://my. clevelandclinic.org/health/symptoms/23350-hallucinations.
[39] A Feder Cooper, Yucheng Lu, Jessica Forde, and Christopher M De Sa. Hyperparameter optimization is deceiving us, and how to stop it. Advances in Neural Information Processing Systems, 34:3081–3095, 2021.
[40] A. Feder Cooper, Christopher A. Choquette-Choo, Miranda Bogen, Matthew Jagielski, Katja Filippova, Ken Ziyu Liu, Alexandra Chouldechova, Jamie Hayes, Yangsibo Huang, Niloofar Mireshghallah, Ilia Shumailov, Eleni Triantafillou, Peter Kairouz, Nicole Mitchell, Percy Liang, Daniel E. Ho, Yejin Choi, Sanmi Koyejo, Fernando Delgado, James Grimmelmann, Vitaly Shmatikov, Christopher De Sa, Solon Barocas, Amy Cyphert, Mark Lemley, danah boyd, Jennifer Wortman Vaughan, Miles Brundage, David Bau, Seth Neel, Abigail Z. Jacobs, Andreas Terzis, Hanna Wallach, Nicolas Papernot, and Katherine Lee. Machine unlearning doesn’t do what you think: Lessons for generative ai policy, research, and practice. arXiv:2412.06966, 2024.
[41] Chris LS Coryn. The ‘holy trinity’ of methodological rigor: A skeptical view. Journal of MultiDisciplinary Evaluation, 4(7):26–31, 2007.
[42] Kurt Danziger. The methodological imperative in psychology. Philosophy of the social sciences, 15(1):1–13, 1985.
[43] Deirdre Davies and Jenny Dodd. Qualitative research and the question of rigor. Qualitative health research, 12(2):279–289, 2002.
[44] Hannah Devinney, Jenny Björklund, and Henrik Björklund. Theories of “gender” in nlp bias research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’22, page 2083–2102, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3534627. URL https://doi.org/10.1145/3531146.3534627.
[45] Fernando Diaz and Michael Madaio. Scaling laws do not scale. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, volume 7, pages 341–357, 2024.
[46] Kimberly Do, Rock Yuren Pang, Jiachen Jiang, and Katharina Reinecke. “that’s important, but...”: How computer science researchers anticipate unintended consequences of their research innovations. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–16, 2023.
[47] Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, and Daniel M Roy. In search of robust measures of generalization. Advances in Neural Information Processing Systems, 33:11723–11733, 2020.
[48] Miriam Fahimi, Mayra Russo, Kristen M Scott, Maria-Esther Vidal, Bettina Berendt, and Katharina Kinder-Kurlanda. Articulation work and tinkering for fairness in machine learning. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2):1–23, 2024.
[49] Jessica Zosa Forde and Michela Paganini. The scientific method in the science of machine learning. arXiv preprint arXiv:1904.10922, 2019.
[50] Roberto Forero, Shizar Nahidi, Josephine De Costa, Mohammed Mohsin, Gerry Fitzgerald, Nick Gibson, Sally McCarthy, and Patrick Aboagye-Sarfo. Application of four-dimension criteria to assess rigour of qualitative research in emergency medicine. BMC health services research, 18:1–11, 2018.
[51] Tao Ge, Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. Scaling synthetic data creation with 1,000,000,000 personas. arXiv preprint arXiv:2406.20094, 2024.
[52] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86–92, 2021.
[53] R Stuart Geiger, Dominique Cope, Jamie Ip, Marsha Lotosh, Aayush Shah, Jenny Weng, and Rebekah Tang. “garbage in, garbage out” revisited: What do machine learning application papers report about human-labeled training data? Quantitative Science Studies, 2(3):795–827, 2021.
[54] Pulivelil M George. Conceptualization: the central problem of science. Organon, 9:23–33, 1973.
[55] Elliot Glazer, Ege Erdil, Tamay Besiroglu, Diego Chicharro, Evan Chen, Alex Gunning, Caroline Falkman Olsson, Jean-Stanislas Denain, Anson Ho, Emily de Oliveira Santos, et al. Frontiermath: A benchmark for evaluating advanced mathematical reasoning in ai. arXiv preprint arXiv:2411.04872, 2024.
[56] Ben Green and Lily Hu. The myth in the methodology: Towards a recontextualization of fairness in machine learning. In Proceedings of the machine learning: the debates workshop, 2018.
[57] Ben Green and Salomé Viljoen. Algorithmic realism: expanding the boundaries of algorithmic thought. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 19–31, 2020.
[58] Olivia Guest. What makes a good theory, and how do we make a theory good? Computational Brain & Behavior, pages 1–15, 2024.
[59] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. Annotation artifacts in natural language inference data. In Marilyn Walker, Heng Ji, and Amanda Stent, editors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-2017. URL https://aclanthology.org/N18-2017/.
[60] Dylan Hadfield-Menell. The need for scientific rigor in ai safety research. https://medium.com/@dhm.csail/the-need-for-scientific-rigor-in-ai-safety-research-3e3c71f29968, 2024.
[61] Emma Harvey, Emily Sheng, Su Lin Blodgett, Alexandra Chouldechova, Jean GarciaGathright, Alexandra Olteanu, and Hanna Wallach. Understanding and meeting practitioner needs when measuring representational harms caused by llm-based systems. arXiv preprint arXiv:2506.04482, 2025.
[62] Moritz Herrmann, F Julian D Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, and Bernd Bischl. Position: Why we must rethink empirical research in machine learning. In Forty-first International Conference on Machine Learning.
[63] Moritz Herrmann, F Julian D Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, and Bernd Bischl. Position: Why we must rethink empirical research in machine learning. 2024.
[64] Jake M Hofman, Angelos Chatzimparmpas, Amit Sharma, Duncan J Watts, and Jessica Hullman. Pre-registration for predictive modeling. arXiv preprint arXiv:2311.18807, 2023.
[65] House Science, Space, and Technology Committee. Science committee leaders stress importance of diligence in nist ai safety research funding, 2023. URL https://science.house.gov/2023/12/science-committee-leaders-stress-importance-of-diligence-in-nist-ai-safety-research-funding.
[66] Mark B Houston. Four facets of rigor, 2019.
[67] Nick Howe. ‘stick to the science’: When science gets political. Nature, 2020.
[68] Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, and Vinodkumar Prabhakaran. Evaluation gaps in machine learning practice. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency, pages 1859–1876, 2022.
[69] Wiebke Hutiri, Orestis Papakyriakopoulos, and Alice Xiang. Not my voice! a taxonomy of ethical and safety harms of speech generators. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pages 359–376, 2024.
[70] Dragos Iliescu and Samuel Greiff. On consequential validity, 2021.
[71] Abigail Z Jacobs and Hanna Wallach. Measurement and fairness. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 375–385, 2021.
[72] Seyyed Ahmad Javadi, Chris Norval, Richard Cloete, and Jatinder Singh. Monitoring ai services for misuse. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 597–607, 2021.
[73] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
[74] Sayash Kapoor, Emily M Cantrell, Kenny Peng, Thanh Hien Pham, Christopher A Bail, Odd Erik Gundersen, Jake M Hofman, Jessica Hullman, Michael A Lones, Momin M Malik, et al. Reforms: Consensus-based recommendations for machine-learning-based science. Science Advances, 10(18):eadk3452, 2024.
[75] Shachar Kaufman, Saharon Rosset, Claudia Perlich, and Ori Stitelman. Leakage in data mining: Formulation, detection, and avoidance. ACM Transactions on Knowledge Discovery from Data (TKDD), 6(4):1–21, 2012.
[76] Norbert L Kerr. Harking: Hypothesizing after the results are known. Personality and social psychology review, 2(3):196–217, 1998.
[77] Os Keyes. The misgendering machines: Trans/hci implications of automatic gender recognition. Proc. ACM Hum.-Comput. Interact., 2(CSCW), November 2018. doi: 10.1145/3274357. URL https://doi.org/10.1145/3274357.
[78] Os Keyes, Jevan Hutson, and Meredith Durbin. A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended abstracts of the 2019 CHI conference on human factors in computing systems, pages 1–11, 2019.
[79] Halil Kilicoglu. Biomedical text mining for research rigor and integrity: tasks, challenges, directions. Briefings in bioinformatics, 19(6):1400–1414, 2018.
[80] Joel Klinger, Juan Mateos-Garcia, and Konstantinos Stathoulopoulos. A narrowing of ai research? arXiv preprint arXiv:2009.10385, 2020.
[81] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
[82] Christopher Lazik, Christopher Katins, Charlotte Kauter, Jonas Jakob, Caroline Jay, Lars Grunske, and Thomas Kosch. The impostor is among us: Can large language models capture the complexity of human personas? arXiv preprint arXiv:2501.04543, 2025.
[83] Calvin Liang. Reflexivity, positionality, and disclosure in hci, 2021. URL https://medium.com/@caliang/ reflexivity-positionality-and-disclosure-in-hci-3d95007e9916.
[84] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. Transactions on Machine Learning Research.
[85] Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are we learning yet? a meta review of evaluation failures across machine learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[86] Zachary C Lipton. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3):31–57, 2018.
[87] Zachary C Lipton and Jacob Steinhardt. Troubling trends in machine learning scholarship: Some ml papers suffer from flaws that could mislead the public and stymie future research. Queue, 17(1):45–77, 2019.
[88] David Liu, Priyanka Nanayakkara, Sarah Ariyan Sakha, Grace Abuhamad, Su Lin Blodgett, Nicholas Diakopoulos, Jessica R Hullman, and Tina Eliassi-Rad. Examining responsibility and deliberation in ai impact statements and ethics reviews. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pages 424–435, 2022.
[89] Yu Lu Liu, Su Lin Blodgett, Jackie Cheung, Q. Vera Liao, Alexandra Olteanu, and Ziang Xiao. ECBD: Evidence-centered benchmark design for NLP. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16349–16365, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.861. URL https://aclanthology.org/2024.acl-long.861/.
[90] Christina Lu, Jackie Kay, and Kevin McKee. Subverting machines, fluctuating identities: Re-learning human categorization. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 1005–1015, 2022.
[91] Li Lucy, Su Lin Blodgett, Milad Shokouhi, Hanna Wallach, and Alexandra Olteanu. “one-sizefits-all”? examining expectations around what constitute “fair” or “good” nlg system behaviors. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1054–1089, 2024.
[92] Ian Magnusson, Noah A Smith, and Jesse Dodge. Reproducibility in nlp: What have we learned from the checklist? In Findings of the Association for Computational Linguistics: ACL 2023, pages 12789–12811, 2023.
[93] Negar Maleki, Balaji Padmanabhan, and Kaushik Dutta. Ai hallucinations: A misnomer worth clarifying. In 2024 IEEE Conference on Artificial Intelligence (CAI), pages 133–138, 2024. doi: 10.1109/CAI59869.2024.00033.
[94] Momin M Malik. A hierarchy of limitations in machine learning. arXiv preprint arXiv:2002.05193, 2020.
[95] Annette Markham. Response to nancy baym. Internet Inquiry: Conversations about Method. Annette Markham and Nancy Baym, eds, pages 190–197, 2009.
[96] Lisa Messeri and MJ Crockett. Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002):49–58, 2024.
[97] Samuel Messick. Validity. ETS research report series, 1987(2):i–208, 1987.
[98] Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish. Algorithmic impact assessments and accountability: The co-construction of impacts. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 735–746, 2021.
[99] Margaret Mitchell. Oversight of ai: Insiders’ perspectives. Testimony before the U.S. Senate Subcommittee on Privacy, Technology, and the Law, September 2024. Available at https://www.judiciary.senate.gov/imo/media/doc/2024-09-17_pm_ _testimony_-_mitchell.pdf.
[100] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pages 220–229, 2019.
[101] National Academies of Sciences and Policy and Global Affairs, Board on Research Data, Information, Division on Engineering, Physical Sciences, Committee on Applied, Theoretical Statistics, Board on Mathematical Sciences, et al. Reproducibility and replicability in science. National Academies Press, 2019.
[102] Curtis G Northcutt, Anish Athalye, and Jonas Mueller. Pervasive label errors in test sets destabilize machine learning benchmarks. arXiv preprint arXiv:2103.14749, 2021.
[103] Brian A Nosek, Charles R Ebersole, Alexander C DeHaven, and David T Mellor. The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11):2600– 2606, 2018.
[104] Branda Nowell and Kate Albrecht. A reviewer’s guide to qualitative rigor. Journal of public administration research and theory, 29(2):348–363, 2019.
[105] Alexandra Olteanu, Anne-Marie Kermarrec, and Karl Aberer. Comparing the predictive capability of social and interest affinity for recommendations. In 15th International Conference on Web Information System Engineering (WISE 2014), pages 276–292, 2014.
[106] Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in big data, 2:13, 2019.
[107] Alexandra Olteanu, Michael Ekstrand, Carlos Castillo, and Jina Suh. Responsible ai research needs impact statements too. arXiv preprint arXiv:2311.11776, 2023.
[108] Alexandra Olteanu, Solon Barocas, Su Lin Blodgett, Lisa Egede, Alicia DeVrio, and Myra Cheng. Ai automatons: Ai systems intended to imitate humans. arXiv preprint arXiv:2503.02250, 2025.
[109] Frederike H Petzschner. Practical challenges for precision medicine. Science, 383(6679): 149–150, 2024.
[110] Ian Porada, Alexandra Olteanu, Kaheer Suleman, Adam Trischler, and Jackie Chi Kit Cheung. Challenges to evaluating the generalization of coreference resolution models: A measurement modeling perspective. In Findings of the Association for Computational Linguistics ACL 2024, pages 15380–15395, 2024.
[111] Mahima Pushkarna, Andrew Zaldivar, and Oddur Kjartansson. Data cards: Purposeful and transparent dataset documentation for responsible ai. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 1776–1826, 2022.
[112] Inioluwa Deborah Raji, I Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. The fallacy of ai functionality. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 959–972, 2022.
[113] K Natesan Ramamurthy et al. Factsheets: Increasing trust in ai services through supplier’s declarations of conformity. IBM Journal of Research and Development, 63(4/5):6–1, 2019.
[114] Ben Recht. The war of symbolic aggression, 2023. URL https://www.argmin.net/p/ the-war-of-symbolic-aggression.
[115] Rainer Rehak. The language labyrinth: Constructive critique on the terminology used in the ai discourse. AI for Everyone, pages 87–102, 2021.
[116] Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for covid-19 using chest radiographs and ct scans. Nature Machine Intelligence, 3(3):199–217, 2021.
[117] Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P Lalor, Robin Jia, and Jordan Boyd-Graber. Evaluation examples are not equally informative: How should that change nlp leaderboards? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4486–4503, 2021.
[118] Brent R. Rowe, Dallas W. Wood, Albert N. Link, and Diglio A. Simoni. Economic impact assessment of nist’s text retrieval conference (trec) program. Technical Report Project Number 0211875, RTI International, Research Triangle Park, NC, July 2010.
[119] Mark Rubin. When does harking hurt? identifying when different types of undisclosed post hoc hypothesizing harm scientific progress. Review of General Psychology, 21(4):308–320, 2017.
[120] Olawale Salaudeen, Anka Reuel, Ahmed Ahmed, Suhana Bedi, Zachary Robertson, Sudharsan Sundar, Ben Domingue, Angelina Wang, and Sanmi Koyejo. Measurement to meaning: A validity-centered framework for ai evaluation, 2025. URL https://arxiv.org/abs/2505. 10573.
[121] Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. “Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380966. doi: 10.1145/3411764.3445518. URL https://doi. org/10.1145/3411764.3445518.
[122] Naomi Saphra and Sarah Wiegreffe. Mechanistic? arXiv preprint arXiv:2410.09087, 2024.
[123] Devansh Saxena, Ji-Youn Jung, Jodi Forlizzi, Kenneth Holstein, and John Zimmerman. Ai mismatches: Identifying potential algorithmic harms before ai development. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–23, 2025.
[124] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage? Advances in Neural Information Processing Systems, 36:55565–55581, 2023.
[125] Rylan Schaeffer, Joshua Kazdan, Alvan Caleb Arulandu, and Sanmi Koyejo. Position: Model collapse does not mean what you think. arXiv:2503.03150, 2025.
[126] Mark Schaller. The empirical benefits of conceptual rigor: Systematic articulation of conceptual hypotheses can reduce the risk of non-replicable results (and facilitate novel discoveries too). Journal of Experimental Social Psychology, 66:107–115, 2016.
[127] D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Michael Young, Jean-François Crespo, Dan Dennison, Emily Fox, and H. Larochelle. Survey of scientific rigor studied in machine learning. 2023. URL https://api.semanticscholar. org/CorpusID:259300529.
[128] D Sculley, Will Cukierski, Phil Culliton, Sohier Dane, Maggie Demkin, Ryan Holbrook, Addison Howard, Paul Mooney, Walter Reade, Megan Risdal, et al. Position: Ai competitions provide the gold standard for empirical rigor in genai evaluation. arXiv preprint arXiv:2505.00612, 2025.
[129] David Sculley, Jasper Snoek, Alex Wiltschko, and Ali Rahimi. Winner’s curse? on pace, progress, and empirical rigor. 2018.
[130] Mona Sloane, Emanuel Moss, and Rumman Chowdhury. A silicon valley love triangle: Hiring algorithms, pseudo-science, and the quest for auditability. Patterns, 3(2), 2022.
[131] Jessie J Smith, Saleema Amershi, Solon Barocas, Hanna Wallach, and Jennifer Wortman Vaughan. Real ml: Recognizing, exploring, and articulating limitations of machine learning research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 587–597, 2022.
[132] Anders Søgaard, Daniel Hershcovich, and Miryam de Lhoneux. A two-sided discussion of preregistration of nlp research. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 83–93. Association for Computational Linguistics; Dubrovnik, Croatia, 2023.
[133] Fujian Song, Sheetal Parekh, Lee Hooper, Yoon K Loke, Jon Ryder, Alex J Sutton, Caroline Hing, Chun Shing Kwok, Chun Pang, and Ian Harvey. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess, 14(8):1–193, 2010.
[134] Luke Stark and Jevan Hutson. Physiognomic artificial intelligence. Fordham Intell. Prop. Media & Ent. LJ, 32:922, 2021.
[135] Laura M Stevens, Bobak J Mortazavi, Rahul C Deo, Lesley Curtis, and David P Kao. Recommendations for reporting machine learning analyses in clinical research. Circulation: Cardiovascular Quality and Outcomes, 13(10):e006556, 2020.
[136] Richard Sutton. The bitter lesson. Incomplete Ideas (blog), 13(1):38, 2019.
[137] Emiel Van Miltenburg, Chris van der Lee, and Emiel Krahmer. Preregistering nlp research. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 613–623, 2021.
[138] Iris Van Rooij, Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova, and Patricia Rich. Reclaiming ai as a theoretical tool for cognitive science. Computational Brain & Behavior, pages 1–21, 2024.
[139] Gaël Varoquaux, Alexandra Sasha Luccioni, and Meredith Whittaker. Hype, sustainability, and the price of the bigger-is-better paradigm in ai. arXiv preprint arXiv:2409.14160, 2024.
[140] Claudia Wagner, Markus Strohmaier, Alexandra Olteanu, Emre Kıcıman, Noshir Contractor, and Tina Eliassi-Rad. Measuring algorithmically infused societies. Nature, 595(7866):197– 204, 2021.
[141] Kiri Wagstaff. Machine learning that matters. arXiv preprint arXiv:1206.4656, 2012.
[142] Hanna Wallach, Meera Desai, Nicholas Pangakis, A Feder Cooper, Angelina Wang, Solon Barocas, Alexandra Chouldechova, Chad Atalla, Su Lin Blodgett, Emily Corvi, et al. Evaluating generative ai systems is a social science measurement challenge. arXiv preprint arXiv:2411.10939, 2024.
[143] Hanna Wallach, Meera Desai, A Feder Cooper, Angelina Wang, Chad Atalla, Solon Barocas, Su Lin Blodgett, Alexandra Chouldechova, Emily Corvi, P Alex Dow, et al. Position: Evaluating generative ai systems is a social science measurement challenge. arXiv preprint arXiv:2502.00561, 2025.
[144] Angelina Wang. Identities are not interchangeable: The problem of overgeneralization in fair machine learning. arXiv preprint arXiv:2505.04038, 2025.
[145] Angelina Wang, Sayash Kapoor, Solon Barocas, and Arvind Narayanan. Against predictive optimization: On the legitimacy of decision-making algorithms that optimize predictive accuracy. Journal of Responsible Computing, 2024.
[146] Angelina Wang, Jamie Morgenstern, and John P. Dickerson. Large language models that replace human participants can harmfully misportray and flatten identity groups. Nature Machine Intelligence, 2025.
[147] Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, and Mykola Pechenizkiy. The neutrality fallacy: When algorithmic fairness interventions are (not) positive action. In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pages 2060–2070, 2024.
[148] Richard E West and Peter J Rich. Rigor, impact and prestige: A proposed framework for evaluating scholarly publications. Innovative Higher Education, 37:359–371, 2012.
[149] Christoph Wilhelm, Anke Steckelberg, and Felix G Rebitschek. Benefits and harms associated with the use of ai-related algorithmic decision-making systems by healthcare professionals: a systematic review. The Lancet Regional Health–Europe, 48, 2025.
[150] Torsten Wilholt. Bias and values in scientific research. Studies in History and Philosophy of Science Part A, 40(1):92–101, 2009.
[151] Eric Winsberg and Ali Mirza. Success and scientific realism: considerations from the philosophy of simulation. In The Routledge handbook of scientific realism, pages 250–260. Routledge, 2017.
[152] Blaise Agüera y Arcas, Margaret Mitchell, and Alexander Todorov. Physiognomy in the age of ai. Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, page 208, 2023.
[153] Annie Zaenen. Last words: Mark-up barking up the wrong tree. Computational Linguistics, 32(4):577–580, 2006.
[154] Sam Zhang, Patrick R Heck, Michelle N Meyer, Christopher F Chabris, Daniel G Goldstein, and Jake M Hofman. An illusion of predictability in scientific results: Even experts confuse inferential uncertainty and outcome variability. Proceedings of the National Academy of Sciences, 120(33):e2302491120, 2023.
In AI research and practice, rigor remains largely understood in terms of methodological rigor, such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about AI capabilities. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception, in addition to a more expansive understanding of (1) methodological rigor, should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also aim to provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders.
# 1 Introduction
Scientific breakthroughs play a foundational role in advancing human knowledge [45], driving technological innovation, and improving societal well-being [3]. However, the traditional paradigm of natural science research remains slow and labor-intensive [46]: skilled researchers must perform countless experiments to reach meaningful insights [51, 39, 42, 5]. These limitations constrain the overall pace and scalability of scientific discovery. Automated laboratories [10, 29] have therefore emerged as a promising alternative, aiming to develop intelligent agents that autonomously design and execute complex experiments through adaptive workflows. By reducing human workload, such agents enable scalable, reproducible, and around-the-clock experimentation, significantly increasing research throughput [23]. Nevertheless, building effective scientific agents remains challenging, particularly due to the high cost of data collection and the difficulty of generalizing across diverse hardware platforms [38].
A promising approach to addressing these issues is the sim-to-real framework [56], in which agents are first trained in realistic simulations before being deployed in real-world laboratory settings. This paradigm enables cost-effective and safe training while preserving the potential to generalize to physical environments [24]. However, established simulators predominantly focus on household environments [33, 43, 54] and fail to adequately address the specific challenges of scientific experimentation. As summarized in Table 1, they exhibit three fundamental limitations: (1) the inability to model chemical dynamics, such as product formation or color changes, which are essential for accurate perception and reasoning in lab tasks; (2) limited diversity and semantic richness in their asset libraries, which restricts faithful representation of the heterogeneous laboratory environments found in the real world; and (3) the lack of comprehensive evaluation protocols, particularly ones that span from fine-grained atomic actions to complex, long-horizon experimental procedures.
Figure 1: The LabUtopia simulation environment and benchmark for developing scientific embodied agents in automated laboratories. LabUtopia supports chemical reaction modeling and provides diverse laboratory assets, forming a high-fidelity testbed for tasks of varying difficulty—from atomic actions to long-horizon action sequences involving both manipulation and navigation.
To address these challenges, we present LabUtopia, a comprehensive simulation and benchmarking suite tailored for scientific laboratory contexts. LabUtopia integrates a diverse set of functional assets, a hierarchical task taxonomy, and a high-fidelity simulation engine capable of modeling rigid, deformable, and fluid objects, as well as simulating both physical and chemical processes. Our goal is to provide a scalable, versatile platform for training and evaluating agents’ perception, planning, and control skills under high task complexity and diverse environments, advancing the role of embodied intelligence in accelerating scientific discovery. The key designs and innovations include:
(1) LabSim is a high-fidelity simulation environment built on Isaac Sim, enhanced with a chemical engine that models reaction-driven transformations (e.g., color changes, product generation) by combining a curated substance database with a reasoning model. Extending beyond conventional physical dynamics, LabSim supports a wide range of chemical reactions, enabling precise and visually grounded simulation of laboratory phenomena.
(2) LabScene is a procedural generation pipeline that synthesizes diverse, physically plausible 3D laboratory scenes aligned with real-world configurations. Built upon a curated set of expert-verified assets, LabScene employs a hybrid layout strategy that combines stochastic grid sampling with constraint-aware search, enabling scalable environment creation for scientific embodied tasks.
(3) LabBench is a hierarchical benchmark, featuring a five-level task structure that spans from atomic object manipulations to long-horizon missions requiring integrated navigation and manipulation. Together, these components establish a rigorous and scalable testbed for developing and evaluating scientific-purpose embodied agents under rich physical constraints and procedural complexity.
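The hybrid layout strategy behind LabScene can be pictured with a minimal sketch: stochastically sample grid cells for each object and reject placements that violate constraints. The grid size, object names, and constraint set below are illustrative assumptions, not LabScene's actual implementation.

```python
import random

def sample_layout(grid=(4, 4), objects=("burner", "flask", "scale"),
                  forbidden=frozenset({(0, 0)}), seed=0, max_tries=100):
    """Hybrid layout sketch: stochastically sample a grid cell for each object,
    rejecting placements that overlap or land on a forbidden cell."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(grid[0]) for c in range(grid[1])]
    for _ in range(max_tries):
        placement = {obj: rng.choice(cells) for obj in objects}
        used = list(placement.values())
        no_overlap = len(set(used)) == len(used)        # one object per cell
        legal = all(cell not in forbidden for cell in used)
        if no_overlap and legal:
            return placement
    raise RuntimeError("no constraint-satisfying layout found")

layout = sample_layout()
print(layout)
```

In a full pipeline, the constraint check would also encode reachability, support surfaces, and instrument co-occurrence rules; rejection sampling is simply the most basic form of constraint-aware search.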
To provide an in-depth analysis of embodied agents in scientific laboratory settings, we complement prior research with an evaluation that targets the agent’s ability to conduct realistic experimental procedures involving accurate material recognition, multi-step task planning, and precise instrument control. Powered by our LabScene generation pipeline, we construct over 100 diverse and physically plausible lab environments and evaluate agent performance across 30 tasks of varying complexity in the LabBench benchmark. Through extensive experiments, we show that current state-of-the-art manipulation policy models still struggle with the variability of instrument configurations and with accumulated errors in long-horizon task execution, highlighting the need for more specialized solutions in research-oriented embodied AI.
We summarize our main contributions as follows:
• We present LabUtopia, a simulation and benchmarking suite tailored to the unique challenges of laboratory settings. LabUtopia supports complex physical interactions and various chemical reactions across 30 distinct tasks, enabling realistic evaluation of embodied agents in high-fidelity scientific scenarios.
• We provide a high-quality asset set comprising over 100 laboratory scenes and 100 scientific instruments, all standardized and filtered by domain experts. Building on this, we design an automated scene generation pipeline to produce diverse, scalable lab environments, supporting both real-world alignment and large-scale training and evaluation.
• We introduce a hierarchical benchmark that spans multiple levels of task complexity, from low-level atomic operations to high-level long-horizon reasoning tasks. This structure enables principled assessment of embodied agents’ capabilities and reveals performance bottlenecks across varying levels.
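As a rough illustration of how such a hierarchy might be encoded, the sketch below orders five illustrative levels. The level names loosely mirror the task dimensions listed in Table 1 and are our assumption, not LabBench's official labels.

```python
from enum import IntEnum

class TaskLevel(IntEnum):
    """Illustrative five-level task hierarchy, ordered by complexity."""
    ATOMIC = 1          # a single primitive action, e.g., grasping a beaker
    COMPOSED = 2        # a short sequence of atomic actions
    GENERALIZATION = 3  # tasks in unseen scenes or with object variations
    LONG_HORIZON = 4    # multi-step procedures requiring high-level planning
    NAVIGATION = 5      # long-horizon tasks coupling navigation and manipulation

def requires_planning(level: TaskLevel) -> bool:
    """Illustrative predicate: the top two levels demand high-level planning."""
    return level >= TaskLevel.LONG_HORIZON

print(requires_planning(TaskLevel.NAVIGATION))  # True
```

Encoding levels as an ordered enum makes it straightforward to filter a benchmark by difficulty or to report per-level success rates.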
# 2 Related Work
Automated Laboratories. By integrating machine learning, robotics, and modular platforms, current automated laboratories enhance the efficiency and scalability of experiments in chemistry and materials science while reducing costs. Self-driving laboratories (SDLs) [2] automate repetitive tasks, yet often constrain autonomous experimental design. Systems such as Synbot [20], Chemputer [49], MARS-Chem [10], Artificial Chemist [1], Reactivity Explorer [12], and AI-EDISON [29] have demonstrated remarkable performance in organic synthesis, exploratory chemistry, and nanomaterial optimization, enabling standardized and reproducible experiments and accelerating molecular discovery. However, these systems are limited by predefined protocols, hardware dependencies, insufficient task comprehension, and poor real-time adaptability. As a result, their flexibility and intelligence are constrained, making them ill-suited to diverse experimental tasks. Moreover, most of these systems focus primarily on advancing scientific discovery, paying limited attention to the long-term development of intelligent embodied systems and thereby overlooking the potential value of embodiment in scientific research. We therefore propose LabUtopia, which offers low-cost, high-efficiency workflows and large-scale data acquisition, enhancing flexibility and data-driven capabilities. This environment aims to support the development of embodied intelligence that is adaptable to a wide range of chemical research scenarios.
Simulators for Embodied AI. Simulator development is progressively transitioning from general-purpose functionality to high-fidelity realism. Some simulators emphasize versatile capabilities, primarily modeling interactions and dynamic changes in the physical world to facilitate algorithm validation and training. PyBullet [9], Gazebo [32], and RLBench [28] support real-time physical simulation, including rigid-body dynamics and collision detection, while offering diverse sensor emulation and deep learning integration. Conversely, other simulators prioritize high-fidelity scene reconstruction to meet the demands of complex real-world task environments. ARNOLD [18], VLMbench [57], Habitat [44], OmniGibson [35, 48], ManiSkill3 [52], and ClevrSkills [21] focus on language-guided task learning in realistic 3D environments, aiming to advance robotic manipulation and human-robot interaction research. These platforms offer vision-language manipulation benchmarks, open-source frameworks, and human-centric evaluations, supporting rich simulations of daily activities, GPU-accelerated parallel robot simulation, and photorealistic rendering, while investigating compositional reasoning and generalization capabilities. However, existing simulation platforms generally lack specialized modeling of laboratory settings and operations. To address this, we propose LabUtopia, built on Isaac Sim [40] and tailored for chemical laboratories, enabling embodied agents to perform operational learning, path navigation, and task planning in chemical experimental environments. Integrated with visualized simulations of chemical reaction processes, LabUtopia aims to provide critical support for the advancement of embodied intelligence in experimental sciences.
Table 1: Comparison with existing embodied AI simulators/benchmarks. Fluid / Physics / Chemistry: Support for simulating fluids, realistic physical interactions, and chemical processes, respectively. Scene / Object: Whether the benchmark supports realistic lab environments with scene-level and object-level assets. Multi-action: Support for multiple distinct atomic actions. Composed: Tasks involving simple compositions of atomic actions. Generalization: Generalization across unseen scenes or object variations. Long-Horizon: Tasks demanding high-level planning and long sequences of atomic and composed actions. Navigation: Tasks integrating spatial navigation with manipulation.
# 3 Laboratory Simulation Suite
We introduce LabUtopia, a high-fidelity simulation platform tailored to the challenges of embodied manipulation in laboratory settings. It is specifically designed for simulating, training, and evaluating agents on lab-centric tasks. LabUtopia consists of three key components: LabSim provides high-fidelity physical simulation with extensions for modeling chemically relevant dynamics, such as fluid mixing and reactive state transitions. LabScene includes a diverse asset library of scientific instruments and procedurally generates 3D environments, enabling rich spatial and task variations. Finally, a built-in trajectory collection module supports automated generation of expert demonstrations, facilitating scalable data collection for diverse lab tasks.
# 3.1 LabSim: High-Fidelity Simulation Environment
LabSim is a high-fidelity simulation engine designed to model the rich physical and chemical phenomena of laboratory environments. It not only supports physically accurate modeling of diverse material properties, but also introduces a reasoning-driven pipeline to simulate chemical reactions, enabling training and evaluation aligned with real-world scientific workflows.
Physical Realism. LabSim supports physically accurate interactions among rigid, deformable, and fluid entities [41]. Each asset in the environment is annotated with empirically grounded physical properties such as mass, friction coefficients, and restitution. For rigid-body simulation, we enable precise contact and collision modeling. For soft materials, we incorporate deformable-body physics to capture compressible and elastic behavior. Notably, for fluid simulation, we employ a GPU-accelerated Position-Based Dynamics (PBD) framework [37], supporting the rich fluid-agent interactions required for scientific manipulation.
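The per-asset annotations described above can be pictured as a small schema; the field names and numeric values below are illustrative, not LabUtopia's actual asset format.

```python
from dataclasses import dataclass

@dataclass
class PhysicalProperties:
    """Empirically grounded attributes attached to each simulation asset."""
    mass_kg: float     # rigid-body mass
    friction: float    # friction coefficient used for contact modeling
    restitution: float # bounciness in [0, 1]

@dataclass
class Asset:
    name: str
    body_type: str     # "rigid", "deformable", or "fluid"
    physics: PhysicalProperties

# Illustrative annotation for a glass beaker.
beaker = Asset(
    name="beaker_250ml",
    body_type="rigid",
    physics=PhysicalProperties(mass_kg=0.12, friction=0.4, restitution=0.1),
)
print(beaker.body_type, beaker.physics.mass_kg)  # rigid 0.12
```

Keeping the annotation in a typed structure like this makes it easy to validate asset libraries in bulk before loading them into the simulator.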
Chemical Process Modeling. To simulate chemical processes within laboratory tasks, we introduce a chemical engine that integrates a curated knowledge base with a reasoning model. We begin by constructing a structured database of 200 common chemical substances, sourced from the authoritative PubChem repository [31]. Each substance encodes key attributes, such as color, molar mass, and pH value, allowing it to be represented as a substance asset within the simulation. Given a set of reactants, we leverage a large language model (GPT-4o mini [27]) to reason about potential chemical processes and infer corresponding transformations, including color changes, product formation, etc. These inferred changes are then rendered in the simulation by dynamically updating the physical state and visual properties of the involved substances. This engine equips LabUtopia with the capability to model complex chemical interactions with both interpretability and flexibility.
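The chemical-engine idea can be sketched as a substance database paired with a reasoner that maps a set of reactants to an inferred transformation. The snippet below is illustrative only: the substance entries and the rule table are hand-made stand-ins, and the lookup-based `infer_reaction` replaces the LLM (GPT-4o mini) reasoning step so the sketch stays self-contained; none of these names are LabUtopia's actual API.

```python
# Substance database: each entry encodes key attributes (cf. PubChem fields).
SUBSTANCES = {
    "CuSO4(aq)": {"color": "blue", "molar_mass": 159.6, "ph": 4.0},
    "NaOH(aq)":  {"color": "colorless", "molar_mass": 40.0, "ph": 13.0},
}

# Hypothetical stand-in for the LLM reasoner: reactant set -> transformation.
REACTION_RULES = {
    frozenset({"CuSO4(aq)", "NaOH(aq)"}): {
        "products": ["Cu(OH)2(s)", "Na2SO4(aq)"],
        "color_change": "blue -> pale-blue precipitate",
    },
}

def infer_reaction(reactants):
    """Return the inferred transformation, or None if no reaction is known."""
    return REACTION_RULES.get(frozenset(reactants))

def apply_reaction(sim_state, reactants):
    """Update a toy simulation state with the inferred products."""
    outcome = infer_reaction(reactants)
    if outcome is None:
        return sim_state  # no known reaction: leave the state unchanged
    state = dict(sim_state)
    for r in reactants:
        state.pop(r, None)            # consume the reactants
    for p in outcome["products"]:
        state[p] = {"visible": True}  # render the products in the scene
    return state

state = apply_reaction({"CuSO4(aq)": {}, "NaOH(aq)": {}},
                       ["CuSO4(aq)", "NaOH(aq)"])
print(sorted(state))  # the two products replace the consumed reactants
```

In the real engine, the rule table would be replaced by a prompt to the reasoning model, with its structured answer parsed into the same kind of transformation record.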
# 3.2 LabScene: Procedural Laboratory Environment Generation at Scale
Current 3D scene datasets primarily focus on domestic, office, or industrial environments [34, 52, 4, 16, 26], offering limited support for simulating laboratory settings. However, training and evaluating embodied agents in lab-centric tasks requires high-quality, interactive environments populated with scientifically relevant instruments and layouts [36]. To address this gap, we introduce LabScene, a scalable dataset of laboratory object and scene assets with a procedural generation mechanism.
Figure 2: An overview of our laboratory simulation suite. LabScene automatically synthesizes scalable laboratory scenes using a diverse asset library and a procedural generation pipeline, while LabSim supports the simulation of high-fidelity physical and chemical interactions.
Scene Assets. Due to the scarcity of open-source lab assets in the community, we collected over one thousand candidate scenes from designer websites. These raw assets underwent a multi-stage preprocessing pipeline involving content filtering, format normalization, and structural standardization. To ensure realism, we consulted domain experts in chemistry and physics to assess scene fidelity and provide refinement suggestions. Based on their feedback, we selected a high-quality subset of expert-verified scenes to serve as the foundational environments for LabUtopia.
Object Assets. While some collected scenes include basic instruments, many lack the detailed internal structures required for executing lab tasks. To address this, we assembled a comprehensive set of object assets spanning essential equipment (e.g., drying ovens) and diverse glassware types (e.g., beakers). To ensure compatibility with robotic manipulation, we refine and modularize these assets into standard, interactable objects; some were augmented with articulated joints and hinge mechanisms to support physically plausible interactions within the simulation. In total, the final dataset includes about 60 categories of laboratory equipment assets and around 80 types of transparent glassware and plasticware assets, covering a diverse range of materials, sizes, and functional forms.
Environment Generation Pipeline. To incorporate our collected instruments into the laboratory scenes, we develop an environment generation pipeline that balances layout diversity with physical plausibility. Specifically, all objects are placed sequentially based on importance and size rankings. For each object, candidate positions and orientations are sampled from a discretized grid that satisfies various constraints, including boundary, collision, and instrument-specific constraints [11]. The layout score is computed considering factors such as edge proximity, inter-object distance, and orientation alignment. The configuration with the highest score is selected. If the random sampling fails to produce a valid layout within a time limit, the system falls back to a depth-first search strategy [50], systematically exploring placements while enforcing physical and spatial constraints. This hybrid approach ensures functional, reasonable scene layouts suitable for embodied agent training.
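The placement loop above can be sketched as follows. The grid size, the scoring terms (edge proximity, inter-object spacing), and the constraint check are simplified stand-ins for LabScene's actual implementation; only the overall scheme — score sampled candidates, then fall back to a depth-first sweep — follows the text.

```python
import random

GRID = [(x, y) for x in range(5) for y in range(5)]  # discretized placement grid

def valid(cell, placed):
    # Boundary constraints are implicit in GRID; here we only check collisions.
    return cell not in placed

def score(cell, placed):
    # Reward distance from scene edges and from already-placed objects.
    x, y = cell
    edge = min(x, 4 - x, y, 4 - y)
    spacing = min((abs(x - px) + abs(y - py) for px, py in placed), default=5)
    return edge + spacing

def place(placed, tries=20, rng=random):
    # Random sampling phase: keep the highest-scoring valid candidate.
    best = None
    for _ in range(tries):
        cell = rng.choice(GRID)
        if valid(cell, placed) and (best is None
                                    or score(cell, placed) > score(best, placed)):
            best = cell
    if best is not None:
        return best
    # Fallback: deterministic depth-first sweep over the grid.
    for cell in GRID:
        if valid(cell, placed):
            return cell
    return None  # scene is full

placed = []
for _ in range(3):  # place three objects in importance order
    placed.append(place(placed))
print(placed)
```

Placing objects in importance/size order, as the pipeline does, means the highest-priority instruments get the best-scoring positions before the grid fills up.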
# 3.3 Task-Rich Trajectory Collection
Manipulation Trajectory Auto-collection. We divide the motion planners into two levels: atomic action controllers and task-level action controllers. Atomic actions are standard laboratory operations, such as pouring and stirring, that align with experimental protocols and are executed using finite state machines. Task-level controllers organize atomic action controllers to collect data for specific tasks. For atomic actions, we control multiple target keypoints for the robotic arm. At each keypoint, an RMPflow controller [6] plans motion toward dynamically determined positions based on the real-time state of manipulated objects. We incorporate spherical linear interpolation (Slerp) [18] for continuously manipulating articulated objects, e.g., opening the dry container task, for more robust results. Furthermore, task-level controllers organize atomic actions to streamline the entire experimental procedure, enabling our motion planner to generate demonstration data efficiently.
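The Slerp step mentioned above is the textbook quaternion interpolation, not LabUtopia's exact controller code; a minimal version is sketched below for an end-effector rotating smoothly between two orientations while opening an articulated container.

```python
import math

def slerp(q0, q1, t):
    """Interpolate between unit quaternions q0 and q1 at fraction t in [0, 1]."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                     # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                  # nearly parallel: fall back to lerp
        out = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in out))
        return [c / n for c in out]
    theta = math.acos(dot)            # angle between the quaternions
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

# Quarter turn about z: identity -> 90 degrees, sampled at the midpoint.
q_id = [1.0, 0.0, 0.0, 0.0]                      # (w, x, y, z)
q_90 = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]
mid = slerp(q_id, q_90, 0.5)
print(mid)  # a 45-degree rotation about z
```

Interpolating on the quaternion sphere rather than componentwise keeps the angular velocity constant along the arc, which is why it yields more robust results for hinged objects than naive waypoint interpolation.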
Figure 3: An overview of our hierarchical benchmark. LabBench structures scientific tasks across five levels, from atomic manipulations to long-horizon experiments, enabling rigorous evaluation of embodied agents in realistic laboratory settings.
Navigation Trajectory Auto-collection. Robots in laboratory environments need to autonomously move between locations and interact with various experimental instruments. To automate trajectory collection, we design a method based on the $\mathbf { A } ^ { * }$ algorithm [19] and occupancy map [13]. Specifically, we first generate and store occupancy maps built by Isaac Sim for the laboratory. These maps explicitly indicate the areas where obstacles are present, thus marking regions that are non-navigable. This information is then bound to the laboratory asset data. When deployed in a new laboratory scene, the robot uses its initial and target positions to plan a path with key waypoints. It then follows these waypoints, generating navigation trajectory data. This approach proves successful in experiments, offering an efficient solution for trajectory planning and data collection.
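The waypoint-planning scheme above — an occupancy map marking non-navigable cells, searched with $\mathbf{A}^{*}$ — can be sketched compactly. The grid layout is a toy example, not an Isaac Sim-generated map, and real planners work in metric coordinates rather than grid cells.

```python
import heapq

def astar(grid, start, goal):
    """4-connected A* on a grid where 1 marks an occupied (non-navigable) cell."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

occupancy = [[0, 0, 0],
             [1, 1, 0],   # a lab bench blocks most of the middle row
             [0, 0, 0]]
path = astar(occupancy, (0, 0), (2, 0))
print(path)  # waypoints routed around the obstacle
```

The returned waypoints play the role of the key waypoints in the text: the robot follows them in sequence while its trajectory is recorded as navigation data.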
# 4 LabBench: Hierarchical Benchmark for Lab Agents
Embodied manipulation in laboratory environments spans a wide spectrum of tasks, ranging from low-level interactions to long-horizon workflows. These tasks vary significantly in complexity and skill requirements, making it difficult to evaluate agent capabilities in a unified manner. To address this,
we propose LabBench, a hierarchical benchmark comprising over 50 tasks designed to systematically evaluate embodied agents across multiple levels of control, planning, and reasoning.
# 4.1 Five-Level Task Structure
In order to comprehensively evaluate embodied agents in laboratory environments, LabBench organizes its tasks into five levels of increasing complexity, ranging from low-level atomic actions to integrated long-horizon and mobile manipulation tasks.
Level 1: Atomic Manipulation Tasks. This level focuses on fundamental low-level interactions that serve as the building blocks for more complex operations. Tasks include single-step actions such as grasping, pouring, stirring, opening instruments, and placing containers. These actions can typically be executed via primitive controllers without requiring task-level planning.
Level 2: Short-Horizon Manipulation Tasks. This level involves agents performing a sequence of 2-3 atomic actions to complete a compound objective. For example, an agent might open a container and then pour a reagent, or pick up a test tube and place it in a mixer. These tasks require precise coordination of sequential actions to achieve the desired outcome.
Level 3: Generalizable Short Manipulation Tasks. This level evaluates agents’ generalization capabilities under distributional shifts. Agents are trained jointly on a mix of objects with varying shapes and appearances, as well as different visual materials and environments. Tasks are tested in novel scenarios featuring unseen object configurations, unfamiliar scene arrangements, or appearance variations. The evaluation tests the agents’ capacity to transfer learned skills to effectively handle out-of-domain objects and scenes.
Level 4: Long-Horizon Manipulation Tasks. This level involves executing multi-step laboratory protocols that span numerous atomic and composed actions. These workflows, such as preparing a chemical solution or executing an instrument-cleaning program, require high-level planning, reasoning, and robustness to compounding execution errors.
Level 5: Mobile Manipulation Tasks. The highest level integrates spatial navigation with manipulation. Agents are required to traverse large-scale laboratory environments using mobile-base control while performing manipulation tasks. Our task is to transport containers between areas.
# 4.2 Evaluation Protocol
Embodiment. We employ a 7-DoF Franka Emika Panda [15] manipulator with a parallel gripper for manipulation tasks. The agent controls the seven joints and gripper directly. Isaac Sim’s built-in motion planner maps end-effector commands to joint-space actions for execution. For navigation tasks, we utilize the Fetch mobile manipulator [14] and Ridgeback with Franka [8], both supporting arm manipulation and base locomotion. These robots are controlled using three degrees of freedom—x and y velocities and rotation—enabling integrated navigation and manipulation [53].
Evaluation Execution. A task instance is considered successful if the success condition, namely that the current state remains within a tolerance threshold of the goal state, is continuously satisfied for 2 seconds after the agent completes the final stage of the task, consistent with previous work [18, 22, 25]. For example, in the “Open Door” task, success requires the cabinet door to remain at the specified open position for 2 seconds after the robot releases the handle, ensuring no shortcuts during motion. The success rate is the evaluation metric in LabSim; this strict protocol ensures that both the robot and the object maintain the successful state for the required duration, and only after the motion planner has executed its final action.
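The "hold the goal state for 2 seconds" check can be sketched as below. The timestep, tolerance, and the scalar hinge-angle state are illustrative values for the "Open Door" example, not LabBench's actual evaluation code.

```python
HOLD_TIME = 2.0   # seconds the success condition must stay satisfied
DT = 0.1          # simulation timestep (assumed)
TOL = 0.05        # tolerance around the goal state (assumed)

def task_succeeded(states, goal):
    """True if the trailing states stay within TOL of goal for HOLD_TIME."""
    needed = int(HOLD_TIME / DT)
    if len(states) < needed:
        return False
    return all(abs(s - goal) <= TOL for s in states[-needed:])

# A door hinge angle that settles at the commanded 1.57 rad open position
# after the robot releases the handle.
trace = [0.0] * 5 + [1.56] * 25
print(task_succeeded(trace, goal=1.57))       # held for >= 2 s -> True
print(task_succeeded(trace[:15], goal=1.57))  # not held long enough -> False
```

Requiring the condition to hold over a window, rather than at a single instant, is what rules out shortcut solutions where the door swings through the goal pose momentarily.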
# 5 Experiment
# 5.1 Experimental Setup
Models. To benchmark the performance of existing imitation learning algorithms in LabSim, we select two representative models: ACT [17] and Diffusion Policy [7].
• ACT is a transformer-based model designed for sequential decision making in robotic manipulation. ACT processes RGB images, robot proprioception, and tokenized instructions through a multi-layer transformer encoder. At each decision step, ACT autoregressively predicts the next low-level action, conditioned on past observations and actions.
• Diffusion Policy is a generative model that formulates robot control as a conditional diffusion process. The model takes recent observations as input and learns to generate control trajectories by progressively denoising an initial random trajectory sample, conditioned on the observation context. This approach not only enables efficient learning from expert demonstrations, but also provides robust stochasticity during action generation. In our work, we utilize the CNN-based variant of Diffusion Policy.
Visual Input. We use two cameras for each manipulation task, positioned to ensure both the robot and the objects are visible. Camera placement may vary slightly by task, but the general principle remains consistent. Each camera outputs $256 \times 256$ RGB images by default. Since accurate depth data for transparent objects is difficult to obtain in the real world, depth maps are not included by default, but users can render images at any resolution and optionally enable depth maps. Other Omniverse sensors (e.g., tactile sensors) are currently disabled as they are unnecessary for current tasks, but full support is available if needed.
Training Details. All experiments are conducted on a single Nvidia 4090 GPU. For each task, the collected dataset consists of 150 episodes. Evaluation is performed in real-time within a simulated environment based on the task settings, with 60 episodes used for testing. Following the original settings in ACT and Diffusion Policy, we set the horizon to 8 and 60, respectively, and the number of training epochs is uniformly set to 200.
Table 2: Performance comparison across task levels. For Level-3, results are reported under ID/OOD settings. Values represent success rates $(\%)$.
# 5.2 Experimental Results
We evaluated the performance of the ACT and DP models across Level-1 to Level-3 tasks in Table 2. For Level-1 tasks, both models demonstrated robust fundamental manipulation skills, achieving high success rates due to the simplicity of these single-step operations. In Level-2 tasks, success rates began to decline, particularly for DP, which we observed to stall frequently. For instance, in the Heater Beaker task, the DP algorithm halted during the final action until time expired, while in the Operate Drawer task, most DP failures stemmed from its inability to execute any action successfully. In the Stir with GlassRod task, both models exhibited errors in grasping or stirring position due to the small size of the glass rod.
Table 3: Performance comparison of different models on Level-4 long-horizon tasks. SP: Single-stage Primitive; A1–A7: Different sub-steps of the task sequence.
In Level-3 tasks, the performance gap widened: ACT maintained superior stability, while DP struggled to generalize to novel materials and to execute precise actions, such as pressing a button in Heater Beaker. In contrast, ACT showed greater robustness when handling out-of-domain visual features. We tested generalization to out-of-domain shapes in Table 4 by jointly training on datasets with objects of varying sizes and evaluating on out-of-domain objects. The results revealed that joint training with objects of significantly different sizes led to a substantial drop in in-domain success rates, with near-zero success rates for out-of-domain shapes and sizes. This suggests that both models largely lack the ability to manipulate out-of-domain objects. Level-4 tasks, reported in Table 3, assessed long-horizon performance. ACT significantly outperformed DP, achieving higher success rates in single-stage primitive (SP) actions and sub-steps (A1–A7), though both models experienced sharp declines in later sub-steps (A5–A7) due to cumulative errors in complex sequences. Although each per-stage policy is more stable on its own sub-task, this design requires task-specific action decomposition and switching mechanisms, which increases overall system complexity. Moreover, the transitions between actions may become discontinuous or suffer from distribution-shift issues.
Table 4: Performance comparison of different models when manipulating different-size objects. Values represent success rates $(\%)$.
ACT exhibited greater stability across most subsequent tasks. We hypothesize that this disparity arises from several factors. First, compared to ACT, DP’s shorter prediction horizon makes it more prone to stagnation. For example, in the Heater Beaker task, DP often hovered above the button without pressing it in the final step. Our task setting outputs joint positions, and DP’s outputs are more jittery than ACT’s, increasing the likelihood of dropping grasped objects. Second, DP’s high sensitivity to visual features led to inconsistent performance, particularly in the Stir with GlassRod task, where its output struggled to accurately locate the glass rod. Furthermore, in generalization tests for Level-3 tasks, DP experienced significant performance declines when encountering unseen materials. These findings suggest that future work should focus on enhancing the models’ ability to generalize across diverse object properties and task variations. Please refer to the Supplementary Materials for more details. | Scientific embodied agents play a crucial role in modern laboratories by automating complex experimental workflows. Compared to typical household environments, laboratory settings impose significantly higher demands on the perception of physical-chemical transformations and on long-horizon planning, making them an ideal testbed for advancing embodied intelligence. However, progress has long been hampered by the lack of suitable simulators and benchmarks. In this paper, we address this gap by introducing LabUtopia, a comprehensive simulation and benchmarking suite designed to facilitate the development of generalizable, reasoning-capable embodied agents in laboratory settings.
Specifically, it integrates i) LabSim, a high-fidelity simulator supporting multi-physics and chemically meaningful interactions; ii) LabScene, a scalable procedural generator for diverse scientific scenes; and iii) LabBench, a hierarchical benchmark spanning five levels of complexity from atomic actions to long-horizon mobile manipulation. LabUtopia supports 30 distinct tasks and includes more than 200 scene and instrument assets, enabling large-scale training and principled evaluation in high-complexity environments. We demonstrate that LabUtopia offers a powerful platform for advancing the integration of perception, planning, and control in scientific-purpose agents and provides a rigorous testbed for exploring the practical capabilities and generalization limits of embodied intelligence in future research. | [
"cs.RO",
"cs.SE"
] |
# 1 Introduction
Word embedding (WE) is an advancement in the natural language processing (NLP) area that helps computers better understand text-based content. As a type of word representation, it is considered one of the remarkable breakthroughs of deep learning in solving challenging NLP problems[1]. With WE models, words are represented as real-valued numeric vectors. These vectors embed individual words into a feature space (hence the name word embeddings) with generally a few hundred dimensions; they are expected to capture the context of a word in a document, its semantic and syntactic features, its relations with other words, etc.[2]. Compared to traditional word representations, such as the bag of words, WE vectors are relatively low-dimensional. Moreover, as the vectors are learned from word usage, words with similar meanings have similar vector values and hence naturally become close-by in the n-dimensional geometric feature space[3].
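The geometric property just described — similar meanings yield nearby vectors, usually measured by cosine similarity — can be illustrated with a toy example. The 3-dimensional vectors below are hand-made for the sketch; real WE models produce learned vectors with hundreds of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Illustrative embeddings: "bug" and "defect" are near-synonyms in SE text,
# while "banana" is unrelated, so its vector points in a different direction.
vec = {
    "bug":    [0.9, 0.1, 0.0],
    "defect": [0.8, 0.2, 0.1],
    "banana": [0.0, 0.1, 0.9],
}

print(cosine(vec["bug"], vec["defect"]))  # high similarity
print(cosine(vec["bug"], vec["banana"]))  # low similarity
```

This closeness in the feature space is what downstream SE techniques exploit when they use WE vectors to compare code identifiers, bug-report terms, or API names.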
As a revolutionary tool for capturing word meaning through low-dimensional numeric vectors, WEs have demonstrated great potential in facilitating various text-analysis NLP tasks, such as text classification, named entity recognition, question answering, machine translation, etc.[4, 5, 6]. Both the academic and industrial communities have devoted, and are still devoting, much effort to developing a series of more advanced WE models, such as ELMo[7] and BERT[8]. Meanwhile, practitioners from other disciplines are increasingly attempting to leverage the WE achievements of the NLP area to handle their own tasks. A representative cross-discipline usage is applying WE models to the software engineering (SE) domain.
Software engineering is an important subfield of computer science, and one of its major goals is to develop SE techniques that help practitioners better develop and maintain software products. Correspondingly, the demand for well-performing SE techniques to ensure software quality is increasing in the software-defined age[9]. These SE techniques generally rely on the analysis of different kinds of software artifacts generated in software development and maintenance activities, including source code, documentation, bug reports, specifications, etc. All these artifacts need to be digitized so that computers can understand them. How well the semantics embedded in these artifacts are represented largely affects the performance of those SE techniques.
In the early stage, traditional information retrieval models, such as the vector space model (VSM), latent semantic analysis (LSA), latent Dirichlet allocation (LDA), and the abstract syntax tree (AST), were used to extract semantics from software artifacts. These models generally capture little of the contextual semantics of tokens/words/terms and may fail to extract hidden high-level semantics within the artifacts. Inspired by the potential of deep learning and WE in the NLP area, researchers proposed adapting WE models (originally used to represent the semantics of plain text) to process software artifacts. Taking the WE vectors of software artifacts as the data basis and applying machine learning or other model-building algorithms, a series of automatic SE techniques have been built, such as bug localization, test case generation, and API recommendation[10, 11, 12, 13].
However, existing studies that used WE models for the SE domain are generally isolated from each other, without comprehensive comparison or discussion. This buries the best practice of such cross-discipline technique adoption in scattered papers. Further, it also keeps us from obtaining a general view of current progress in the semantic representation of SE artifacts. Considering the key role of semantic representation of SE artifacts, we decided to perform a systematic analysis of the use of WE models for the SE domain. Specifically, we first retrieved 1,957 candidate studies through designed search keywords upon 45 software engineering venues. We then obtained 181 primary studies for analysis after applying our inclusion and exclusion criteria. We designed four research questions that relate to different aspects of the practice of using WE models for the SE domain, including the involved SE applications/artifacts, the training strategy of WE models, the comparison with traditional semantic representation methods, etc. Through the analysis, we found that: (1) the adoption of word embedding models in the field of software engineering has been on a rising trend year by year (except for 2023); (2) software maintenance and development are the two main areas where WE models are applied in relevant SE tasks; (3) Word2Vec and BERT are the top two models used in the SE domain, with SE-specific embeddings being generally favored over generic models; and (4) a systematic comparative analysis is generally lacking in the selection of WE models. Through our study, we obtain a comprehensive understanding of the current practice of using WE for the SE domain, and provide some actionable suggestions for adopting or developing practical semantic representation approaches for the SE artifacts used in a series of SE tasks.
The remaining parts of our paper are structured as follows. Section 2 introduces the review method we adopted to perform this study. Section 3 presents the results. The discussion and related work are described in Sections 4 and 6, respectively. Finally, we conclude our study in Section 7.
# 2 Methodology
In this section, we first introduce the process of searching and identifying relevant papers, including the search keywords and the inclusion and exclusion criteria we used to search papers. Then we describe the research questions we aim to answer to understand the use of WE models in the SE domain.
# 2.1 Search Strategy
# 2.1.1 Selected Venues
As our goal is to understand the use of WE models in the SE domain, we mainly select papers from the venues that appear in the list of international academic periodicals and conferences recommended by the China Computer Federation (CCF), more specifically those belonging to the Software Engineering/Systems Software/Programming Language category in the list. Moreover, to make manual checking of individual paper content feasible, we mainly consider the venues ranked A or B in the list (C-ranked venues are excluded). In total, 45 venues are considered while searching for relevant papers, including 29 conferences such as ICSE and FSE, and 16 journals such as TSE and TOSEM. The detailed venues and retrieved papers are available at https://docs.qq.com/sheet/WE in SE.
# 2.1.2 Keywords
Since our focus is on WE models for the SE domain, the first keyword we used was “word embedding”. We first used “word embedding” to search within the selected venues on the DBLP website (http://dblp.uni-trier.de/). Then, we randomly selected several retrieved papers and checked their contents. We found that some papers directly use the names of WE models (such as Word2Vec, BERT, etc.) throughout the paper rather than the term “word embedding”. Therefore, in addition to the keyword “word embedding”, we also added the names of existing WE models as search keywords, namely “word2vec”, “GloVe”, “fasttext”, “BERT” and “ELMo”. In other words, for each selected venue, we searched its official website through DBLP with the above six keywords separately, to retrieve any paper whose full text contains at least one keyword. In this step, we obtained 1,957 candidate papers. The literature search covers publications up to 2023, including early-access articles.
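The retrieval step — flag any paper whose text contains at least one of the six keywords — can be sketched as a simple filter. The paper titles below are invented examples, and the matching here runs on titles only, whereas the actual search ran against full texts via DBLP and the venues' sites.

```python
# The six search keywords used in the study, lower-cased for matching.
KEYWORDS = ["word embedding", "word2vec", "glove", "fasttext", "bert", "elmo"]

def matches(text):
    """True if the text mentions at least one search keyword."""
    t = text.lower()
    return any(k in t for k in KEYWORDS)

papers = [
    "Bug localization with Word2Vec representations of source files",
    "A BERT-based approach to duplicate bug report detection",
    "An empirical study of code review latency",
]
candidates = [p for p in papers if matches(p)]
print(len(candidates))  # the first two papers match
```

Note that plain substring matching over-approximates (e.g., "glove" would match "gloves"), which is one reason the retrieved candidates were then manually checked against the inclusion and exclusion criteria.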
# 2.1.3 Inclusion/Exclusion Criteria
After collecting the initial 1,957 papers, two authors manually checked each paper to identify whether it was relevant according to the following inclusion and exclusion criteria.
• The paper aims to propose an automatic SE technique, of which semantic representation is a key step and the representation approach used is a specific WE model.
• Papers that do semantic representation beyond word embedding granularity but at, for example, sentence level or document level, are excluded.
• The paper is a regular research paper. Workshop/Symposium/industry/demonstration papers, posters, etc., are excluded.
• Review studies, e.g., literature review or survey, are excluded.
• If a conference paper is extended to a journal article, we only keep the journal version.
# 2.2 Research Questions
In this study, we aim to, on one hand, obtain an overall view of the adoption of WE models for semantic representation in various SE tasks and, on the other hand, identify potential research opportunities for adopting and developing practical semantic representation techniques for SE domain. To achieve these objectives, we design the following four research questions whose answers may help us understand different aspects of the use of WE models in the SE domain.
RQ1. What is the distribution of the studies across publication years and venues? (To obtain the trend of the publications in the domain).
RQ2. What SE tasks tend to use WE models for semantic representation? (To reveal the prevalence of WE use in different SE tasks, hence identify potential application opportunities).
RQ3. What WE models are generally adopted by SE tasks, and are they compared with other semantic representation models in the evaluation experiments? (To understand the strengths and limitations of these models in real-world SE applications, and to reveal potential gaps in evaluation practices or model selection that could improve SE task performance.)
RQ4. What is the general way to obtain WE vectors in SE tasks? By using the general pre-trained WE models or training a domain-specific one? (To provide insights into which training approach better captures SE-specific semantics and improves task outcomes.)
# 3 Results
# 3.1 RQ1. What is the distribution of the studies across publication years and venues?
To answer RQ1, after obtaining the 181 primary studies (PS) from various sources, we collected their publication years and the journals or conferences where they were published. Figure 1 shows the distribution of these studies across different years and journals/conferences.
From the publication-year view, we can find that the first paper applying a WE model in the SE domain was published in 2015 (Phong et al.[14] applied Word2Vec to an opinion-mining task in the SE domain; Word2Vec[15] is the first WE model, released by Google in 2013). Between 2015 and 2017, only the Word2Vec model was applied to tasks in the SE domain. In 2018, researchers began to use other WE models (e.g., GloVe[16], released in 2014, and fastText[17], released in 2016) to solve SE-related tasks. Since then, up until 2022, the number of papers using WE techniques in the SE domain grew significantly, though there was a slight decline in 2023. This may be attributed to the increasing popularity of Seq2Seq models (with embedding layers integrated into the model) and the emergence of large language models such as GPT. However, this does not imply a decrease in the application of WE techniques; rather, it has, to some extent, expanded the concept of traditional word embeddings, suggesting that context-aware embedding methods may become increasingly prevalent. These observations show that the application of WE technology in the SE domain is currently an active research area.
Figure 1: Primary Studies (#PS) by year and publication.
From the venue view, we can find that among all publications, IEEE Transactions on Software Engineering (TSE, CCF-Rank A) published the largest number of related studies, with a total of 23. The International Conference on Automated Software Engineering (ASE(C), CCF-Rank A) and the International Conference on Software Engineering (ICSE, CCF-Rank A) ranked second, each publishing 18, followed by Empirical Software Engineering (ESE, CCF-Rank B) with 14 studies. These figures indicate that the cross-disciplinary use of word embedding models in the software engineering domain is gaining increasing recognition within the SE research community.
# RQ1 - Summary
The first use of a WE model in the SE domain was in 2015, and few studies were conducted from 2016 to 2018. From 2019 to 2023, the application of WE in SE attracted considerable research interest. Meanwhile, TSE, ICSE and ASE are the top three venues publishing the most papers related to WE adoption in the SE domain.
# 3.2 RQ2. What SE tasks tend to use WE models for semantic representation?
In this RQ, we attempt to achieve two sub-goals: one is to obtain a general view of the concrete SE tasks that adopt WE models for semantic representation by providing a taxonomy of those SE tasks; the other is to understand exactly what kinds of software artifacts are more likely to be represented by WE models.
# 3.2.1 Taxonomy of SE Tasks
We followed the open coding practice[18] by manually applying codes (i.e., SE tasks—e.g., API recommendation, vulnerability detection, test automation) to the studies (in a shared online spreadsheet). We first checked the keywords and abstracts of each paper. If a certain keyword explained the SE task solved by the paper, we selected it as the extracted code. This process was carried out by two authors independently. If the code of an article was uncertain, the two authors would read the introduction or even the whole paper till the code was determined through discussion. Next, the authors discussed together conceptually-related codes by generalizing or specializing them, employing the Qualitative Content Analysis approach. After these processes, the accuracy of the determination of SE tasks was guaranteed.
After identifying the SE task addressed by each paper, we combined closed and open card sorting methods to develop a taxonomy of those SE tasks. First, for the closed card sorting part, we pre-identified four categorical themes based on the Software Life Cycle (SLC), i.e., requirement engineering, software development, software testing, and software maintenance, and made cards labeled with SE tasks. Then, two authors worked independently to place these cards into the four pre-defined theme categories. In some cases, the two authors placed a paper in different pre-defined categories; in this situation, they discussed together to determine its final category. Some papers were not covered by the pre-defined theme categories; in this case, the authors conducted open card sorting to assign new theme categories to them. Specifically, they independently created new theme codes for these papers, and then discussed together to determine their categories.
The final results are shown in Tables 1 to 4, which present the taxonomy of the 181 papers (referenced with the prefix 'R' in the tables; the complete taxonomy of SE tasks is available at the website: Taxonomy of SE Tasks). The four pre-defined SLC categories cover 142 papers (78% of all studies). For the remaining SE tasks, the two authors established two new category themes (i.e., general task support and project management) through open card sorting, covering 25 and 14 primary studies, respectively. Among the six areas, software maintenance has the highest number of tasks utilizing WE technology for semantic representation of software artifacts, accounting for 66 papers (36%), followed by software development with 44 papers (24%). Both areas are critical stages in the software lifecycle and involve numerous related software artifacts such as code and bug reports. In the "Software Maintenance" area, the most common subarea, "Defect Handling," accounts for 42 tasks (23%), encompassing defect detection, localization, fixing, and severity analysis. Tasks such as "Bug/Vulnerability Detection" and "Bug/Fault Localization" are the most frequently represented within the detection and localization subcategories. In the subarea of "Code Quality Evaluation & Optimization," tasks like "Code Review" and "SATD Detection" are particularly notable. The main subareas utilizing WE for semantic representation in "Software Development" are "Code Entity Recommendation/Generation" and "Code Comprehension." In "Code Entity Recommendation/Generation," the API category is predominant, featuring tasks like "API Recommendation" (8 studies), "API Mapping" (3 studies), and "Similar Technology Comparison" (1 study).
Other notable subcategories include "Code Examples," with tasks such as "Code Examples Recommendation" (3 studies), and related categories like "Log." The "Code Comprehension" subarea includes tasks such as "API Knowledge Retrieval" (5 studies), "Code Summarization" (2 studies), and "API Extraction & Linking" (3 studies). These tasks play a crucial role in enhancing developers' understanding and management of code during the software development process. Additionally, a certain share of studies address the use of WE technology for semantic representation in SE tasks related to requirements engineering and software testing, accounting for 10% and 7% of the primary studies, respectively. The results suggest that within "Requirements Engineering," WE technology is predominantly used for tasks related to requirement acquisition and management, while in "Software Testing" it is applied to tasks that improve automation and testing reliability, reflecting a growing research interest in leveraging WE to enhance the efficiency and consistency of software testing processes.
Table 1: The Taxonomy of SE tasks that Adopt WE models for semantic representation (1/4)
Table 2: The Taxonomy of SE tasks that Adopt WE models for semantic representation (2/4)
Table 3: The Taxonomy of SE tasks that Adopt WE models for semantic representation (3/4)
Table 4: The Taxonomy of SE tasks that Adopt WE models for semantic representation (4/4)
# 3.2.2 SE Artifacts for WE Representation
The SE domain encompasses a wide range of software artifacts, including requirement documents, source code snippets, application programming interfaces (APIs), test cases, bug reports, log files, etc. Each of these artifacts contains rich semantic information and is thus a candidate to which WE models can be applied. In this part, we aim to understand whether certain kinds of SE artifacts are more likely to be represented with WE models, and exactly which ones they are. To this end, we read all the papers and marked the artifacts represented by WE models. Figure 2 shows the distribution of those artifacts.
Figure 2: The distribution of software artifacts that use WE models for semantic representation in all 181 papers.
From the figure, we can see that the distribution of software artifacts to which WE models are applied closely follows the distribution of software engineering (SE) tasks. The dominance of text-based artifacts (61.88%) aligns with the high prevalence of WE technology usage in the software maintenance (36%) and software development (24%) areas, as both heavily involve textual artifacts such as bug reports, requirement documents, and code documentation. Similarly, the significant use of WE on code-based artifacts (20.44%) reflects its application in tasks like code analysis and bug localization. Artifacts that combine both text and code account for 9.39%, while purely API-based ones represent 4.42%. The smallest proportion, 3.87%, comes from artifacts that combine text and APIs.
# RQ2 - Summary
Software maintenance (36%) and software development (24%) are the two areas within SE where WE models are most frequently applied; these two phases are also critical in the software lifecycle. Specifically, defect handling (42 studies) and code entity recommendation/generation (31 studies) are the two most dominant subareas. These tasks often involve substantial amounts of text (e.g., bug reports) and code, where capturing the underlying semantic information is crucial for effective problem-solving and automation. The distribution of software artifacts mirrors this trend: text-based artifacts dominate, comprising 61.88% of the studied cases, followed by code-based artifacts at 20.44%. This distribution reflects the fact that much of the work in these SE tasks involves processing textual descriptions of software issues or generating recommendations based on code, highlighting the value of the robust semantic understanding provided by WE models. The ability to effectively represent and interpret these artifacts can significantly enhance the accuracy and efficiency of various SE tasks.
# 3.3 RQ3. What WE models are generally adopted by SE tasks, and are they compared with other semantic representation models in the evaluation experiments?
With this RQ, we hope to find out whether any WE models are commonly used across SE tasks, and to understand whether the authors of those studies performed experiments demonstrating the effectiveness of their adopted WE models by comparing them with other semantic representation models. To answer RQ3, we carefully read each paper to identify the exact WE model adopted, and recorded any evaluation experiments comparing the adopted WE model with other WE models or with traditional semantic representation models such as the VSM and topic models. If a paper adopted a WE model mainly based on findings from previous research and performed no further comparison experiments, we say it conducted a reference comparison. The detailed results follow.
# 3.3.1 Distribution of various WE models used in SE tasks
Figure 3 shows the frequency of use of various WE models in the SE field, reflecting their real-world applications and popularity. From the figure, we can find that:
Figure 3: The distribution of different WE models in SE.
Word2Vec (98 related studies) is clearly the most commonly used word embedding model, likely due to its efficiency and adaptability to large-scale data. It effectively captures semantic relationships between words and performs well in SE tasks such as code documentation analysis and error detection.
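To illustrate how such a model is trained, Word2Vec's skip-gram variant learns embeddings by predicting the context words inside a sliding window around each center word. A minimal sketch of the (center, context) pair extraction that supplies the raw training signal, applied to a toy tokenized bug report (the tokens and window size are illustrative, not from any surveyed study):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) training pairs as used by skip-gram Word2Vec."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # the center word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

# Tokens from a toy bug-report sentence.
tokens = ["null", "pointer", "exception", "in", "parser"]
print(skipgram_pairs(tokens, window=1))
```

Real implementations additionally subsample frequent words and train with negative sampling or hierarchical softmax; the pairs above are only the supervision signal those objectives consume.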
Following Word2Vec, BERT (including its variants, such as CodeBERT) ranked second in usage, with 54 studies utilizing this bidirectional transformer model. BERT’s deep contextual understanding makes it highly suitable for SE tasks that require semantic comprehension, particularly in code understanding and natural language processing-related tasks.
FastText (21 studies), although less frequently used than Word2Vec and BERT, excels at handling out-of-vocabulary words and n-gram terms, giving it an advantage in SE scenarios with diverse programming languages and terminologies. GloVe (22 studies) ranks slightly above FastText; as a model based on global word co-occurrence, GloVe performs well on large-scale text or code corpora but is less popular than Word2Vec and BERT.
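FastText's out-of-vocabulary handling comes from representing each word as a bag of character n-grams (plus the word itself), with `<` and `>` as boundary markers, so an unseen identifier still shares n-grams with known words. A minimal sketch of that decomposition (the n-gram range is set to 3 here for brevity; FastText's default is 3 to 6):

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with FastText-style boundary markers."""
    marked = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.append(marked[i:i + n])
    return grams

# An unseen camelCase identifier still decomposes into familiar subunits.
print(char_ngrams("getFoo", n_min=3, n_max=3))
```

The word vector is then the sum of the vectors of these n-grams, which is why a never-before-seen token such as `getFooBar` can still be embedded.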
In contrast, ELMo (2 studies), despite its strong contextual modeling via bidirectional LSTMs, is used less frequently, possibly due to its higher computational resource requirements and longer training times compared with static word embedding models. Additionally, compared with BERT, which also requires considerable computational power, ELMo performs less effectively on complex sentences and long-range dependencies, making it less cost-effective overall.
# 3.3.2 Comparison with other semantic representation models
In this part, we consider two kinds of comparisons: comparing a WE model with traditional semantic representation models, and comparing a WE model with other WE models. The detailed results are as follows.
# (1) Comparison with traditional models
Table 5 shows the statistics for studies that compared the adopted WE model with traditional semantic representation models; studies that did not perform such a comparison are also counted. From the table, we can find that, while the use of WE in SE tasks is growing, 84% of the studies (152 papers) did not compare them with traditional semantic representation methods. Only 9% (17 studies) conducted an experimental comparison and 7% (12 studies) a reference comparison. This low comparison rate suggests that the majority of research lacks direct performance evaluations between emerging word embedding techniques and traditional representation methods.
Table 5: Comparison experiments between WE models and traditional word Semantic representation models
The above results indicate a lack of convincing experimental evidence supporting the decision of existing studies to use WE models instead of traditional, relatively simpler semantic representation models for their SE tasks. Considering that the most suitable semantic representation model may differ across tasks and scenarios, systematic evaluations of WE models against traditional models over various SE tasks would be valuable. Such comparisons would provide researchers with a more comprehensive theoretical foundation and empirical support, promoting more rational and informed decisions when selecting semantic representation models for the SE tasks at hand.
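For concreteness, the traditional VSM baseline in such a comparison typically weights terms by TF-IDF before computing document similarity. A minimal stdlib sketch of that weighting (the toy bug-report corpus is illustrative; real comparisons would use the studies' own datasets):

```python
import math
from collections import Counter

def tfidf(corpus):
    """TF-IDF vectors (as term->weight dicts) for tokenized documents."""
    n = len(corpus)
    df = Counter()                      # document frequency per term
    for doc in corpus:
        df.update(set(doc))
    vectors = []
    for doc in corpus:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors

docs = [["crash", "on", "login"],
        ["crash", "on", "startup"],
        ["feature", "request"]]
vecs = tfidf(docs)
# "crash" appears in 2 of 3 docs, so it is weighted below the rarer "login".
print(vecs[0])
```

Unlike WE models, this representation carries no notion of semantic relatedness: "crash" and "failure" remain orthogonal dimensions, which is exactly the gap an experimental comparison would quantify.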
# (2) Comparison with other WE models
After reviewing all 181 papers, we found that 119 studies (66%) did not conduct any direct comparisons between word embedding models. Among the remaining 62 papers, 22 referenced related literature to explain their choice of WE models, and only 40 studies (22%) performed experimental comparisons between WE models to justify their choice. The detailed comparisons between different WE models are shown in Table 6.
The results in Table 6 reveal that a variety of WE models have been compared across different studies, with Word2Vec and BERT and its variants being the most frequently tested, appearing in 11 and 8 studies, respectively. Word2Vec was compared against models such as Doc2Vec, FastText, and BERTbase, highlighting its versatility and competitive performance in different SE tasks. Similarly, BERT and its variants were commonly compared with models like GloVe, Transformer, and other WE models, showcasing the growing interest in contextual embeddings for SE applications. GloVe appeared in 5 studies, often compared with Word2Vec, FastText, and specialized embeddings like Sentiment-Specific Word Embedding (SSWE). FastText was evaluated in 4 studies, where it was compared with models like Word2Vec, GloVe, and contextual embeddings like BERTbase and Sent2Vec, reflecting its strength in handling out-of-vocabulary words and morphological variations in software text. Additionally, models such as ELMo and CodeBERT were less frequently compared, appearing in 2 and 1 studies, respectively, but their inclusion suggests a focus on more advanced contextual models and domain-specific embeddings for tasks like source code understanding. These comparisons reflect an evolving focus on understanding the performance differences between classical and contextual embedding models, particularly in SE-specific applications. While Word2Vec and BERT remain popular choices, a broader exploration of specialized models is taking place, indicating a shift toward more targeted and fine-tuned approaches in software engineering tasks.

Table 6: Comparison experiments between WE models
Additionally, the italicized portions in Table 6 indicate that, although these studies (9 in total) conducted comparative experiments between different WE models, they did not ultimately select any one model as their semantic representation method for SE tasks. This suggests that, while a variety of word embedding models were evaluated, the final choice of model for representing SE data was left open, possibly due to the models' varying performance across tasks, or a preference for using multiple models for comprehensive analysis. This further highlights the complexity of selecting a single, optimal word embedding model for SE tasks, especially when balancing the trade-offs between performance and task-specific requirements.
# RQ3 - Summary
Word2Vec is the most widely used word embedding model in software engineering, followed by BERT, reflecting their adaptability and effectiveness in handling large-scale data and complex semantic tasks. However, exploration of newer, task-specific models remains limited compared to these more mature models. Despite the widespread adoption of word embeddings, $84 \%$ of studies did not compare them with traditional representation methods such as TF-IDF or one-hot encoding. Without consistent comparisons, it is difficult to fully assess the added value of word embeddings over simpler methods in SE applications. Similarly, only $22 \%$ of studies conducted experimental comparisons between different WE models, indicating a gap in systematically exploring which model performs best for specific SE tasks.
# 3.4 RQ4. What is the general way to obtain WE vectors in SE tasks? By using the general pre-trained WE models or training a domain-specific one?
Generally speaking, there are two ways to obtain word embeddings: directly use a general pre-trained WE model that has already been trained on large external datasets (like Wikipedia or Google News), or use domain-specific datasets to train a WE model from scratch or fine-tune a general pre-trained model to generate the embedding vectors. This RQ helps us understand the current practice of obtaining WE vectors and identify potential improvements. To answer RQ4, we employed the following approach: if the corpus used for training the WE model consisted of general web content such as Wikipedia or Google News, we say the study used a generic pre-trained WE model; conversely, if the corpus was specific to the software engineering (SE) domain (e.g., Stack Overflow), we say a domain-specific, i.e., SE-specific, WE model was used.
Most papers explicitly state the corpus used for training the WE model and its source. In some studies, however, the word embedding model was described without clearly specifying the corpus; in such cases, we followed the links the authors provided to the word embedding model to determine whether it was generic or domain-specific. After checking all papers, we identified the WE generation strategies used across the studies. Figure 4 shows the detailed statistics.
Figure 4: The type and training methods of WE in SE.
From Figure 4, we can see that, apart from 9 papers that did not clearly state how their WE vectors were generated, the majority (118 papers) utilized SE-specific embeddings trained on domain-specific corpora. Among these, 83 studies trained the models from scratch on SE-related datasets, while 33 studies applied fine-tuning, adapting pre-trained generic models to better suit SE-specific tasks. This shows a significant preference for customizing embeddings to the SE domain, likely due to the specialized vocabulary and unique linguistic patterns found in software-related texts. In contrast, 40 studies used generic word embeddings trained on large, non-SE-specific corpora (e.g., Wikipedia); these models were pre-trained and applied directly without further customization. Additionally, 16 studies trained word embeddings on both general and SE-specific corpora, among which 15 conducted experiments comparing their performance: eleven reported that SE-specific embeddings outperformed general embeddings, while the other four found no significant difference. The remaining study, due to its task specificity (i.e., SEthesaurus), mixed a general corpus and a specialized corpus for word embedding training.
Most SE tasks chose to train a WE model on domain-specific data, or to fine-tune a pre-trained general one, to obtain WE vectors for their SE artifacts. Yet few studies have conducted comparative experiments to check whether the domain-specific WE performs better than the general pre-trained one. It would be interesting and valuable to investigate further when and where domain-specific embeddings provide a tangible advantage over general pre-trained ones, or vice versa.
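One practical signal for this choice (a heuristic sketch, not a method taken from the surveyed studies) is vocabulary coverage: measuring what fraction of token occurrences in an SE corpus a generic pre-trained vocabulary actually covers. A low coverage rate hints that domain-specific training or fine-tuning may pay off. All names and the toy data below are illustrative:

```python
def vocab_coverage(corpus_tokens, pretrained_vocab):
    """Fraction of corpus token occurrences found in a pre-trained vocabulary."""
    if not corpus_tokens:
        return 0.0
    covered = sum(1 for t in corpus_tokens if t in pretrained_vocab)
    return covered / len(corpus_tokens)

# Toy example: a generic vocabulary misses SE-specific identifiers.
generic_vocab = {"the", "error", "in", "file", "not", "found"}
se_tokens = ["NullPointerException", "in", "FileReader", "not", "found"]
print(vocab_coverage(se_tokens, generic_vocab))  # 3 of 5 tokens covered
```

Coverage alone does not settle the question, since even covered words may carry different senses in SE text ("bug", "patch"), which is precisely why the comparative experiments called for above remain necessary.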
# 4 Implications
Our work systematically reviews the application of WE models in the SE domain by answering four research questions related to the prevalence of WE models across various SE tasks and software artifacts, the selection of pre-trained or domain-specific models, etc. Based on the results, we have summarized some actionable research opportunities as follows. Progress in these research directions can further advance the development of software technology and semantic representation techniques.
• Expanding the Application of WE Models Beyond Unstructured Textual Artifacts and in Less-Explored SE Areas. WE models have gained significant recognition in major academic journals and conferences (in RQ1) and are applied in various SE tasks, primarily in software development and maintenance, and focused mainly on the semantic representation of unstructured textual artifacts (in RQ2). To harness the full potential of WE models, future research should aim to expand their use into less-explored tasks and areas like requirements engineering, testing, and design, thereby increasing their applicability and driving innovation across the broader landscape of software engineering. Furthermore, it is also essential to extend the application of WE models beyond unstructured software artifacts to other forms, including structured data, graphical control/data flow, formal requirements specifications, etc. This would enhance their adaptability to a wider range of tasks within software engineering and enable more comprehensive support for automation, analysis, and optimization across the entire development process.
• Enhancing Comparative Analysis of WE Models and Traditional Semantic Retrieval Techniques. After thoroughly reviewing all 181 primary studies, we found a notable scarcity of comparative analyses between WE models and traditional semantic retrieval methods such as TF-IDF and one-hot encoding of the vector space model (in RQ3). Only a small percentage of these studies have conducted systematic comparisons, which hinders our ability to fully evaluate the advantages and disadvantages of employing WE models in place of simpler, well-established techniques for various SE tasks. Moreover, there is a significant lack of comparative studies among different WE models themselves, resulting in an incomplete understanding of which models are most effective for addressing specific challenges within the SE domain. This gap highlights the need for future research to prioritize comprehensive evaluations that compare WE models against traditional methods as well as among themselves across a diverse range of SE scenarios. Such investigations will not only clarify the performance dynamics of these models but also guide practitioners in selecting the most suitable semantic retrieval models for their specific needs.
• Advancing WE Models for Enhanced Performance in SE Tasks. The predominance of Word2Vec, followed by BERT and its variants, indicates a clear preference for models that excel in capturing semantic relationships and contextual understanding (in RQ3). This trend suggests several promising research directions aimed at enhancing the efficacy of WEs in SE tasks. One could be the integration and hybridization of existing models. For example, by leveraging the strengths of Word2Vec’s efficiency and BERT’s deep contextual understanding, researchers can develop novel embedding models that maximize performance across a range of SE applications, such as code documentation analysis and error detection. This could involve creating composite models that combine the rapid processing capabilities of traditional methods with the advanced semantic comprehension of newer transformer-based architectures. Another one involves addressing the challenges posed by diverse programming languages and terminologies. Models like FastText and GloVe, though less frequently used, offer unique advantages in handling out-of-vocabulary words and large-scale text corpora. Research efforts could focus on refining these models or integrating them with more popular frameworks to create robust solutions tailored to the complex linguistic structures found in SE contexts.
• Comparing Domain-Specific and General Word Embeddings in the SE Domain. The clear preference for SE-specific embeddings over general embeddings (in RQ4), such as those trained on domain-specific corpora like Stack Overflow, underscores a growing awareness of the significance of domain-specific knowledge in addressing SE tasks. Despite this shift towards specialization, a notable gap remains in the systematic research that directly compares SE-specific embeddings with their general counterparts. Some studies suggest that the differences between SE-specific and general embeddings may not always be significant for certain tasks, opening up avenues for deeper investigation. It is crucial to discern the contexts and conditions under which domain-specific embeddings provide meaningful advantages over general embeddings. Such explorations could yield valuable insights, ultimately contributing to the development of clearer guidelines for practitioners and researchers in selecting between general pre-trained WE models and domain-specific WE models tuned/trained with SE data. By tailoring model choices to the unique demands of various tasks, we can enhance the effectiveness and efficiency of solutions in the SE domain.
# 5 Threats to Validity
# 5.1 Internal Validity.
A potential threat lies in the process of selecting relevant studies. While we did not thoroughly read the entire content of every paper, we followed a systematic and structured approach. Specifically, we assessed papers by examining the title, abstract, and introduction. In cases where these sections did not provide sufficient clarity, we resorted to reviewing the full paper. While this method may introduce some bias by potentially missing details buried deeper in the text, it is a commonly accepted practice in literature reviews. This approach allows for efficient filtering while maintaining a high level of rigor, ensuring that we included relevant studies that aligned with our research focus. Another internal validity concern arises from the classification of SE tasks. Since the categorization of tasks is subject to human judgment, there is an inherent risk of subjectivity in the process. To mitigate this, we followed established frameworks and task definitions from prior literature whenever possible. We also ensured that our classification was consistent across studies by having multiple reviewers discuss and agree on the task categories. By employing this collaborative approach, we reduced the potential for bias and improved the reliability of the classifications.
# 5.2 External Validity.
One threat is the limitation of our literature sources to CCF A and B-ranked conferences and journals specifically within the Software Engineering (SE) domain. The China Computer Federation (CCF) provides a comprehensive directory that classifies high-quality scientific journals and conferences based on their impact and reputation in various fields, including SE. These CCF A and B rankings are recognized for identifying the most prestigious venues, including internationally renowned conferences such as ICSE, FSE, and journals like IEEE Transactions on Software Engineering. By focusing on these high-ranking venues, we aimed to ensure that the studies included in our review were of the highest academic and research standards, reflecting significant contributions to the field. However, this focus may inadvertently exclude valuable insights from lower-tier venues. Future reviews could consider incorporating studies from a wider range of sources to provide a more comprehensive view of the SE landscape. Besides, this review focuses solely on peer-reviewed academic publications, potentially excluding relevant insights from non-academic or "gray" literature, such as technical blogs, white papers, and industry reports. This exclusion might overlook the most current practical innovations and trends in the SE industry. Nonetheless, peer-reviewed literature offers a more rigorous validation of findings, ensuring that the included studies meet a certain standard of quality. To address this limitation, future reviews could consider a more systematic inclusion of high-quality gray literature to capture cutting-edge practices in the field.
# 6 Related Work
Related to our work on the use of WE in the SE domain, several studies have provided comprehensive reviews of the applications of other advanced techniques in SE. For example, in [19], the application of machine learning (ML) throughout the SE lifecycle is widely explored, particularly in the areas of software quality and testing, though challenges remain in fields such as human-computer interaction. Yang et al. [20] summarized the effectiveness and challenges of Deep Learning (DL) in various SE tasks. The application of DL in software testing and maintenance, such as defect prediction and code analysis, has shown significant potential [21]. Wang et al. [22] also discussed the impact of ML and DL on SE, particularly the issues of complexity and reproducibility. Large language models (LLMs) have also demonstrated potential in optimizing SE processes and outcomes [23]. These studies, together with ours, can provide broader support for cross-discipline technique adoption in facilitating the development of SE techniques.
There are also studies from the NLP area that systematically evaluated various WE models on certain NLP tasks. For example, in [24], the authors found that WE techniques could significantly improve the performance of text classification and largely surpass traditional bag-of-words models. Some researchers divided existing WE models into traditional, static, and contextualized word embeddings, and emphasized the significant performance advantages of BERT in tasks like sentiment classification, text classification, and next sentence prediction [25]. In a systematic review [26], neural network-based WE methods (i.e., variants of word2vec) were found to outperform matrix factorization techniques. Other researchers explored the theoretical foundations and development trajectory of WE, and analyzed the advantages of using WE models for semantic representation [27, 28]. By optimizing training methods and corpus selection, WE techniques can significantly improve classification accuracy in sentiment analysis [29, 30]. Cross-lingual WE [31] has also been found to improve the accuracy of semantic reasoning in multilingual environments. Incitti et al. [32] conducted a performance evaluation of text embedding models, focusing on embeddings, such as sentence or paragraph embeddings, that go beyond words.
The above representative studies either focus on the adoption of entirely different technologies in the SE domain, or are limited to comprehensive studies of WE use in the NLP field, with little attention devoted to the current practice of using WE in the broad SE domain. To fill this research gap, this paper provides a more comprehensive perspective through a systematic review of the applications of WE techniques in the field of SE. Unlike existing studies, we do not restrict our focus to specific SE tasks but instead broadly collect and discuss research that employs word embeddings (WE) as a semantic representation method across the field of SE. This will help researchers better understand and master techniques for the semantic representation and processing of SE artifacts, such as code and requirements documents, providing strong support for the further development of tasks related to semantic representation in SE.

# Abstract

Word embedding (WE) techniques are advanced textual semantic representation models originating from the natural language processing (NLP) area. Inspired by their effectiveness in facilitating various NLP tasks, more and more researchers attempt to adopt these WE models for their software engineering (SE) tasks, of which semantic representation of software artifacts such as bug reports and code snippets is the basis for further model building. However, existing studies are generally isolated from each other without comprehensive comparison and discussion. This not only leaves the best practice of such cross-discipline technique adoption buried in scattered papers, but also leaves us somewhat blind to current progress in the semantic representation of SE artifacts. To this end, we performed a comprehensive study on the use of WE models in the SE domain. 181 primary studies published in mainstream software engineering venues were collected for analysis. Several research questions related to the SE applications, the training strategy of WE models, the comparison with traditional semantic representation methods, etc., are answered. With the answers, we obtain a systematic view of the current practice of using WE in the SE domain, and identify the challenges and actions in adopting or developing practical semantic representation approaches for the SE artifacts used in a series of SE tasks.

Category: cs.SE
# 1 Introduction
In the domain of language acquisition tools, a key capability is the measurement of the linguistic difficulty of text. Traditionally, this has been used to assess a language learner's ability by evaluating their writing (Arnold et al., 2018; Ballier et al., 2019; Kerz et al., 2021). With the advent of Large Language Models (LLMs) for language learning and practice (Bonner et al., 2023; Kwon, 2023; Mahajan, 2022; Young and Shishido, 2023), a novel application has arisen: adjusting the language output of an LLM to the ability of a specific user. This can be used to adjust content to a user's level of understanding, or to maximize a user's learning by keeping them in the Zone of Proximal Development (ZPD) (Kinginger, 2002), reducing the difficulty for beginners and increasing it for more advanced users.
While LLMs have some innate understanding of text complexity, this typically takes the form of text simplification, especially on long text passages (Cardon and Bibal, 2023; Espinosa-Zaragoza et al., 2023). In contrast, language learning requires exposure to short, authentic text segments (Leow, 1997), such as conversation. While LLMs are uniquely positioned to provide this, they are not typically trained to generate text at a learner’s level.
To generate difficulty-tuned text directly, LLMs need offline and online modules that are able to evaluate such texts. In this kind of system, a difficulty model is used to label training data, annotate prompts, and filter output. An example system of this kind is shown in Figure 1. Such applications require a mix of offline and online processing, with the latter being highly sensitive to latency.
Figure 1: Example system diagram of LLM trained to produce text at different levels of difficulty, with a Difficulty Annotation Model required to label text at three points in the processing pipeline.
To be effective in this kind of system, the difficulty annotation model must be trained on texts analogous to those the LLM is generating, which means short, conversational passages.
# 1.1 Summary of Contributions
• We release a novel dataset, Ace-CEFR, for English language difficulty. The dataset can be used to train models to understand the difficulty of text, as well as to train LLMs to generate text at specified levels, or for related tasks such as complex word identification.
• We establish baselines for performance on the difficulty evaluation task, for both human experts and machine models of different levels of complexity.
• We demonstrate the feasibility of medium size models to use the Ace-CEFR dataset to achieve good accuracy on the difficulty evaluation task, with latency suitable for real-time applications.
# 1.2 Related Work
# 1.2.1 Datasets
There are a number of longer-passage difficulty-annotated text datasets, each comprised of passages on the order of hundreds of words in length. These include the English First Cambridge open language Database (EFCAMDAT) (Geertzen et al., 2014), the Cambridge Learner Corpus for the First Certificate in English (CLC-FCE) (Lexical Computing Limited on behalf of Cambridge University Press and Assessment, 2017), Weebit (Rama and Vajjala, 2021), OneStopEnglish (Vajjala and Lučić, 2018), Newsela (Nushi and Fadaei, 2020), a dataset provided by Adam Montgomerie (Montgomerie, 2021), Wiki-Auto (Jiang et al., 2020), and the Sentence Corpus of Remedial English (SCoRE) (Chujo et al., 2015). These texts are deliberately long to establish a representative sample of difficulty (Shatz, 2020).
The passages in these datasets are too long to train LLMs to produce conversational responses, being hundreds or more words long, compared to the average conversation turn length of approximately 10 words (Yuan et al., 2006). We cannot simply split the passages up and train models on sub-passages, as individual sentences vary greatly from the overall passage assessment (Arase et al., 2022).
There are a few datasets annotated at the sentence level. These include Štajner et al. (2017), Brunato et al. (2018), McDonald et al. (2013), and the CEFR-SP dataset (Arase et al., 2022).
However, these shorter datasets are not conversational, so they are unsuitable for training a conversational LLM. As a representative example, the CEFR-SP dataset is composed of uniform, single-sentence, complete-thought examples, and does not include the variations typically seen in conversations such as phrases, single-word responses, references to other parts of the conversation, or multiple sentences.
Further difficulties in training models on all of the above datasets arise from unbalanced distributions of difficulties. The datasets are taken either from examples authored by language learners (e.g.
EFCAMDAT and CLC-FCE), or sampled from natural text (e.g. CEFR-SP). This results in distributions that are highly skewed either toward the beginner or the middle of the difficulty curve, with almost no examples at high levels. This makes it difficult to train models capable of a wide range of evaluation. It is further worth noting that, while examples authored by language learners are ideal for evaluating learners, they are inappropriate for training LLMs to generate native-sounding speech.
For these reasons, we decided to author and annotate a novel dataset, composed deliberately of short, conversational texts at a variety of levels, including single words, phrases, sentences, and short passages.
# 1.2.2 Modeling
A variety of automated models have been used for the evaluation of text difficulty, typically using either readability scores or the Common European Framework of Reference (CEFR) scale, a standardized measure of language difficulty for L2 learners.
For readability, there are multiple defined metrics (Matricciani, 2023), focused on the length and complexity of sentences and words. Readability prediction models measure those features, sometimes additionally considering word frequency statistics (Stenner et al., 1988; Fry, 1990; Chall and Dale, 1995; Petersen and Ostendorf, 2009) and word complexity (Aleksandrova and Pouliot, 2023; North et al., 2023). Recent works show that neural network-based approaches outperform statistical feature-based methods when using these features (Azpiazu and Pera, 2019; Meng et al., 2020; Imperial, 2021; Martinc et al., 2021).
However, readability is only representative of one kind of difficulty, and many research efforts focus on the CEFR scale, which evaluates multiple dimensions of difficulty, especially for L2 learners. Salamoura and Saville (2010) and Ishii and Tono (2018) explored aligning English vocabulary and grammar with CEFR levels. Uchida and Negishi (2018) experimented with automated CEFR level assessment at the passage level, using data from Cambridge English exams. Notably, Rama and Vajjala (2021) showcased the high accuracy of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) in multilingual CEFR-level classification tasks, and Arase et al. (2022) developed a text CEFR level assessment model with BERT embeddings that performs significantly better than models based on superficial text features.
In alignment with these efforts, we have focused our modeling on the CEFR scale, applied to the Ace-CEFR dataset. To establish a clear baseline for further work, we evaluated a representative range of models, including statistical feature engineering, neural networks, and LLM prompting, analyzing their respective characteristics.
# 2 Ace-CEFR Dataset
To address the lack of short, conversational datasets described in Section 1.2, we created a new dataset, targeting conversational texts, labeled by human language experts.
The Ace-CEFR (Annotated ConvErsational CEFR-aligned) dataset is comprised of 890 short text passages in English, created specifically for this task. The average length of a passage is 12 words, with a median of 10, aligned with typical conversation turn length (Yuan et al., 2006). There are 62 passages composed of a single word each, and the longest passage is 114 words. While the dataset is small compared to some of the datasets in Section 1.2, its properties make it possible to train models surpassing human experts (see Section 4).
The dataset is comprised of a mix of sources: generated by our research organization for other language practice efforts (272), authored specifically for this task (255), generated by LLMs (198), pulled from conversations with trusted tester language learners, with anonymizations (101), and pulled from public data from the web (64). Anonymized conversation segments were processed via automated tools to remove potentially identifying information, and then further manually inspected and rewritten to ensure privacy. Much of the dataset is selected to be conversational in nature, since that is the primary expected application.
The texts were labeled according to the Common European Framework of Reference (CEFR) scale, a standard that organizes proficiency into six levels: A1-A2 (beginner), B1-B2 (intermediate), and C1-C2 (expert). In order to include examples of all levels, the dataset was labeled in batches of around 100, with a sampling method adjusted with the goal of a uniform distribution of levels. The distribution of floor(label) is A1: 131, A2/A2+: 180, B1/B1+: 169, B2/B2+: 186, C1: 107, C2: 116. Subsampling techniques can be used to achieve a perfectly balanced distribution if needed.
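The subsampling mentioned above can be sketched as follows; `balance_by_level` is a hypothetical helper written for illustration, not code released with the paper:

```python
import random
from collections import defaultdict

def balance_by_level(examples, levels, seed=0):
    """Subsample each CEFR level down to the size of the rarest one,
    yielding a uniform label distribution."""
    buckets = defaultdict(list)
    for ex, lv in zip(examples, levels):
        buckets[lv].append(ex)
    k = min(len(v) for v in buckets.values())  # rarest level's count
    rng = random.Random(seed)  # seeded for reproducibility
    return {lv: rng.sample(v, k) for lv, v in buckets.items()}
```

Each level in the returned dict then contributes the same number of examples to training.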
For the C1 and C2 levels, language experts created examples using both advanced vocabulary (e.g., “He feigned indifference.”) and colloquial and idiomatic usage (e.g., “Get off your high horse and lend me a hand. This house isn’t going to paint itself.”)
# 2.1 Human Expert Labels
Passages in the dataset were rated by English language learning experts, each with at least a Master’s degree in Applied Linguistics or similar, plus a minimum of 10 years of experience in language teaching, language teaching curricula and assessment development, teacher education, or research in the field. Labels were applied on the CEFR scale, A1 through C2. By convention, the labels A2 through B2 include “+” variations, indicating a level higher than the baseline.
Each text was labeled by at least two raters, working independently, but collaborating on a rating guideline document to align themselves. The CEFR labels were applied based on the productive difficulty, i.e., the level at which an L2 learner can be expected to produce the text. For texts composed of a single homograph, the meaning with the lowest level was chosen, as that is most likely to be used by a language learner.
Ratings were converted to numbers (A1 = 1, A2 = 2, A2+ = 2.5, B1 = 3, B1+ = 3.5, B2 = 4, B2+ = 4.5, C1 = 5, C2 = 6) and averaged to arrive at a consensus per text. In some cases, more raters were available and we included those in the average (112 cases).
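The conversion and averaging step can be sketched as follows; the mapping follows the numbers given above, and the helper name is ours:

```python
# Numeric values for CEFR labels, as defined in the text above.
CEFR_TO_NUM = {"A1": 1, "A2": 2, "A2+": 2.5, "B1": 3, "B1+": 3.5,
               "B2": 4, "B2+": 4.5, "C1": 5, "C2": 6}

def consensus_label(ratings):
    """Average the numeric values of independent expert CEFR ratings."""
    nums = [CEFR_TO_NUM[r] for r in ratings]
    return sum(nums) / len(nums)
```

Averaging is why consensus labels can fall between CEFR boundaries, e.g. an A2+ and a B1 rating average to 2.75.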
While most human expert labels were within 1 point of one another, 8% of the labels were further apart than this. Disagreements were particularly common for intermediate CEFR levels, but the quadratic weighted kappa (QWK) between the two primary raters is 0.89, which indicates overall close agreement.
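For reference, QWK on integer ordinal ratings can be computed as below. This is a generic sketch, not the authors' code; the dataset's half-levels (e.g., A2+ = 2.5) would first need mapping to consecutive ordinal indices:

```python
def quadratic_weighted_kappa(rater_a, rater_b, min_r, max_r):
    """Plain-Python QWK for integer ratings in [min_r, max_r]."""
    n = max_r - min_r + 1
    observed = [[0] * n for _ in range(n)]   # confusion matrix
    hist_a, hist_b = [0] * n, [0] * n        # marginal histograms
    for a, b in zip(rater_a, rater_b):
        observed[a - min_r][b - min_r] += 1
        hist_a[a - min_r] += 1
        hist_b[b - min_r] += 1
    total = len(rater_a)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2  # quadratic disagreement weight
            num += w * observed[i][j]
            den += w * hist_a[i] * hist_b[j] / total  # chance-expected counts
    return 1.0 - num / den
```

Perfect agreement yields 1.0; agreement no better than chance yields 0.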
In about 5% of cases, due to differences greater than 1 between individual raters, labels were adjudicated by expert raters as a group to arrive at a consensus label. At the end of model training for each of the Linear, BERT-based and PaLM 2-L models, the worst 20 predictions from each were re-adjudicated to identify potential mislabels. Results presented in the Experiment section (section 4) are on the final dataset, after all adjudication was completed (123 cases of adjudication in total).
# 3 Evaluation Framework
We split the Ace-CEFR dataset evenly into training (445) and test (445) sets. The same train and test sets were used for all models.
We evaluated our models on predicting the labels in the test set. Because of averaging between raters, the labels are not constrained to CEFR boundaries; e.g., “I have lived here since I was 4.” is labeled 2.75, meaning that it falls between the A2+ and B1 CEFR labels. Our primary metric was therefore chosen to be Mean Squared Error (MSE) between a model’s predictions and the consensus human expert label, on the 1-6 scale, meaning the maximum error possible is 5 and, accordingly, the maximum MSE is 25.
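The metric itself is straightforward; a minimal sketch:

```python
def mse(predictions, labels):
    """Mean squared error between predictions and consensus labels
    on the 1-6 difficulty scale."""
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / len(labels)
```

The worst case (predicting 1 where the label is 6, or vice versa) contributes an error of 25 per example, hence the maximum MSE of 25.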
In addition to accuracy, latency is a major practical consideration. Some use cases, like generating offline training data, are relatively latency insensitive, but others are in the critical path, like integrating with an LLM for generation (Figure 1) or evaluating user proficiency in real time. For key applications, a model with latency in the 10 ms to 100 ms range is desirable.
# 4 Experiment
# 4.1 Models Overview
We evaluated three types of models, in order from simplest to most complex: a linear regression model on surface language features, a custom model fine-tuned from Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), and a Large Language Model (PaLM 2-L) (Anil et al., 2023) in a few-shot setting. Fine-tuning an LLM was not a focus of this research due to its higher cost and limited accessibility to most LLM users (Trad and Chehab, 2024; Xu et al., 2023), but it is a topic of interest for future investigation. As a comparison baseline, the test set was also rated by a human expert.
A summary of the MSE results is shown in Figure 2, and the latency results in Table 1.
# 4.2 Human Expert
As a basis for comparison, a set of ratings was performed on the test set by a human expert with the same qualifications as the original raters. This expert did not previously work with the labelers of the dataset, but used the rating guideline as well as the training set labels for calibration. Their labels had an MSE of 0.75 (90% confidence [0.67, 0.84]) (Figure 2 (a)).
Table 1: Latency of a single lookup, averaged over 100 requests. Latency is estimated within an order of magnitude, and no effort has been made to optimize code for speed. CPU latency was measured on a Linux desktop with an Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz and 128 GB RAM. TPU latency was measured via the Vertex API on a low-latency network connection, querying TPU v5e accelerators. Note that TPU execution is highly parallelizable, so amortized batch lookup speed is substantially faster than individual lookup.
# 4.3 Linear Regression Model
The benefit of such models is their simplicity and speed. The model we built can execute locally in-process, with latency measured in microseconds. The downside is that their accuracy is limited by their lack of text understanding.
# 4.3.1 Features
There is considerable prior research on measuring text difficulty using surface features such as sentence and word length (Khushik and Huhta, 2022) and word diversity (Treffers-Daller et al., 2018). While these are not encompassing metrics of text complexity (Tanprasert and Kauchak, 2021), they correlate strongly with difficulty. After experimentation, we settled on the signals “average word length in characters,” “average sentence length in characters,” and “average sentence length in words”, with correlations to difficulty of 0.67, 0.70, and 0.35, respectively. The sentence-length signals have a logarithmic relationship to the difficulty, and taking ln(signal) improves the correlation to 0.71 for length in words and 0.75 for length in characters.
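A minimal sketch of extracting these three signals, assuming naive sentence splitting on terminal punctuation (the helper name and tokenization details are ours, not the paper's):

```python
import math
import re

def surface_features(text):
    """Return [avg word length in chars,
               ln(avg sentence length in chars),
               ln(avg sentence length in words)]."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = [w.strip(".,!?;:") for w in text.split()]
    avg_word_chars = sum(len(w) for w in words) / len(words)
    avg_sent_chars = sum(len(s) for s in sentences) / len(sentences)
    avg_sent_words = sum(len(s.split()) for s in sentences) / len(sentences)
    return [avg_word_chars, math.log(avg_sent_chars), math.log(avg_sent_words)]
```

These features could then feed an ordinary least-squares regression against the consensus labels. Note that, as the next paragraph explains, sentences with identical word and sentence lengths produce identical features regardless of content.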
The key weakness of these features is that they are content agnostic. For example, “The cat is here.” (A1 difficulty) and “His ire is epic.” (C1/C2 difficulty) have indistinguishable word and sentence features. For these reasons, such approaches are most effective when averaged over long texts, and are much less useful for short conversational passages.
Figure 2: Summary of mean squared error using the Ace-CEFR set to train difficulty prediction models, with $90 \%$ confidence intervals. See Section 4 for detailed results and analysis.
# 4.3.2 Results
The linear model achieved an MSE of 0.81 (90% confidence [0.71, 0.91]) (Figure 2 (b)). This is slightly worse than the human expert labels, but within the confidence interval. Typical errors involve misjudging the difficulty of short words and of sentences composed of short words (Table 3). The model also tends to overestimate the difficulty of sentences that are simple in structure but have many words, e.g., “For herbal tea, we have blueberry chamomile, chai, rooibos, fennel tarragon, and nettle.” is labeled at 3 (B1) but predicted by the model to be 5 (C1).
# 4.4 Large Language Model
An LLM is a natural choice for evaluating the difficulty of text. Such models have intrinsic understanding of language, and their training data often organically includes the CEFR scale (Yancey et al., 2023). It is possible to ask an LLM to evaluate text and get a reasonable response. The downside is that these models are comparatively slow (Table 1) and are therefore primarily suitable for offline text labeling.
We used the PaLM 2-L model (Anil et al., 2023), a model optimized for language understanding, generation, and translation tasks. We limited ourselves to few-shot prompt engineering. It is likely that prompt tuning or fine tuning would yield better results, and this is a direction for future research.
# 4.4.1 Results
As a baseline, we first tested a zero-shot version, where we asked the model for a response without giving it any examples. This establishes the LLM’s innate understanding of CEFR levels. This resulted in an MSE of 1.45 (Figure 2 (c)). While this is the worst of the results, it is notably better than a random guess (MSE of approximately 4.6) or always guessing the median of the training set (MSE of 2.37). This shows that the LLM does indeed have some understanding of CEFR levels, though an extremely imprecise one. Iterative improvements over this demonstrate the effectiveness of the Ace-CEFR training set.
For the initial few-shot results, we used a single prompt (Appendix A), populated with instructions and examples from the training data. Notably, because of the constraints of context length, we randomly sampled 64 out of 445 training examples. This resulted in an MSE of 0.98 (Figure 2 (d)).
Since the limitation of context length prevented us from using all of the training data as few-shot examples, we experimented with running the model multiple times, re-sampling the training data for few-shot examples, and averaging the results. By rerunning the model 3 times, we improved accuracy, from an MSE of 0.98 to 0.78 (Figure 2 (e)). Naturally, this results in proportionately increased latency. Further improvement is likely possible if more samples are taken.
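The resample-and-average procedure can be sketched as follows; `llm_predict` is a placeholder standing in for the actual PaLM 2-L call, which is not shown:

```python
import random

def fewshot_predict(text, train_pool, llm_predict, shots_per_run=64, runs=3, seed=0):
    """Average difficulty predictions over several few-shot prompts,
    each built from a fresh random sample of the training pool."""
    rng = random.Random(seed)
    preds = []
    for _ in range(runs):
        # Re-sample the few-shot examples for each run.
        shots = rng.sample(train_pool, min(shots_per_run, len(train_pool)))
        preds.append(llm_predict(text, shots))
    return sum(preds) / len(preds)
```

Latency grows linearly with `runs`, matching the paper's observation that the 3x averaging proportionately increases latency.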
We noted that the model had significant difficulty predicting the label of single words compared to phrases. We hypothesized that this is because from the LLM’s perspective, these are very different tasks, and because many more of the training examples are phrases (N = 418) compared to single words (N = 27). Since the training examples are further subsampled in sets of 64 to fit in the context, only 3-4 single words would actually be seen by the model.
To address this, we separated the prompts into two types: one responsible for predicting the difficulty of phrases, and another one for predicting the difficulty of individual words (Appendix A). This significantly improved the MSE, from 0.78 to 0.48 (Figure 2 (f)).
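The routing itself is a simple dispatch on input length; a sketch with hypothetical prompt arguments (the real prompts are in Appendix A of the paper):

```python
def pick_prompt(text, word_prompt, phrase_prompt):
    """Route single-word inputs to the word-difficulty prompt;
    everything else goes to the phrase prompt."""
    return word_prompt if len(text.split()) == 1 else phrase_prompt
```

Splitting the task this way lets each prompt carry few-shot examples of only its own input type.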
The final results are an MSE of 0.48 (90% confidence [0.43, 0.54]) (Figure 2 (f)). This is 0.33 better than the linear model and 0.27 better than human expert ratings, albeit at a significant latency cost (Table 1). Unlike the linear model, there is no obvious pattern of errors (Table 4). The opacity of mistakes is a risk factor, since this can make it challenging to improve the model further.
# 4.5 BERT-based Model
The BERT-based model builds on an existing, lightweight BERT encoder, which provides a combination of high accuracy and production-level latency. We fine-tuned a custom model by taking the first few layers of the pretrained BERT-base-uncased checkpoint and adding a classification head. The BERT encoder is multiple orders of magnitude smaller than a typical LLM (millions rather than billions of parameters), but still comes pretrained with a degree of language understanding and is easily fine-tuned to very specific tasks. It is also well-suited to learning from a larger teacher model, an approach we used during a quality iteration.
# 4.5.1 Results

We fine-tuned the BERT encoder on the 445 training samples and ran light hyperparameter tuning (on a validation set split from the training samples) over the number of pretrained encoder layers to keep, the learning rate, and the batch size. The best setup retained the first 3 layers, trained with a learning rate of 6e-5 at batch size 32 for 6 epochs. The final model has 45.7M parameters and achieved an MSE of about 0.44 (Figure 2 (g)), which is substantially better than any of the other models.

Unlike the linear model, which peaks in accuracy after a few dozen examples, and the LLM, which is context-constrained to accept only a few dozen examples, the BERT model continues to improve with additional training data. We therefore added an extra finetuning stage to the training. In the first stage, we labeled 10,000 examples from various sources with our best LLM version. We used those LLM-labeled examples to finetune the BERT model using a smaller learning rate of 2e-5. In the second stage, we further finetuned the model on the human expert rated dataset. The results improved significantly, from MSE 0.44 to 0.37 (Figure 2 (h)).

The final results are an MSE of 0.37 (90% confidence [0.32, 0.41]) (Figure 2 (h)), which is 0.38 better than the human expert. The latency, particularly when running on TPU (Table 1), is also practical enough for latency-sensitive production applications, making this the ideal model for most use cases.

The only recurring issue we saw was that this model struggled with misspellings, compared to the LLM (with its larger vocabulary) and the Linear Model (which has no concept of spelling). We did not deliberately introduce misspellings into the Ace-CEFR dataset, but they arose naturally from several of our sources. Ultimately, we decided to correct the misspellings, because we want the dataset to be usable for generative tuning, and mistakes in the input could cause an LLM to learn to produce misspellings. However, this is a weakness that needs to be taken into account when integrating into production use cases, and a spell-checker may be helpful.

Aside from misspellings, the BERT-based model’s errors were similarly opaque to the LLM errors. The only significant pattern was difficulty with idiomatic sayings, like “It’s been a rough spell but I’m game to try anything that might help us weather this storm.” (Table 5)
# 4.6 Ensemble Models
It is noteworthy that while each model makes mistakes, the categories of mistakes made by different models differ. For example, the Linear Model has no concept of semantics, whereas the BERT model has no concept of word length. We therefore evaluated whether it’s possible to offset the errors of the different models by combining them together.
To do so, we randomly split out 100 examples from the test set to use for tuning, and used the remaining 355 examples for evaluation. We weighted the models to optimize performance on the tuning set, essentially putting a linear model over them. With this approach, we were able to reduce MSE from 0.36 for BERT to 0.33 when combining BERT+LLM. Adding the linear model to the mix did not improve results further beyond noise levels.
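The weighting described above can be approximated with a simple grid search over a convex blend of two models' predictions; this is an illustrative sketch of the idea, not the authors' exact procedure:

```python
def best_blend_weight(tune_a, tune_b, tune_y, steps=101):
    """Grid-search the blend w*a + (1-w)*b that minimizes MSE
    on a held-out tuning set; returns (best weight, best MSE)."""
    def mse(preds, ys):
        return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

    best_w, best_err = 0.0, float("inf")
    for i in range(steps):
        w = i / (steps - 1)  # sweep w over [0, 1]
        err = mse([w * a + (1 - w) * b for a, b in zip(tune_a, tune_b)], tune_y)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err
```

The learned weight is then applied unchanged to the evaluation split, mirroring the paper's 100/355 tune/evaluate division of the test set.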
While this improvement is incremental, and likely incurs too much complexity to be used in production, it establishes that further improvements in accuracy are possible, and this approach may be useful for creating better pre-training datasets for improvements to BERT in the future. | There is an unmet need to evaluate the language difficulty of short, conversational passages of text, particularly for training and filtering Large Language Models (LLMs). We introduce Ace-CEFR, a dataset of English conversational text passages expert-annotated with their corresponding level of text difficulty. We experiment with several models on Ace-CEFR, including Transformer-based models and LLMs. We show that models trained on Ace-CEFR can measure text difficulty more accurately than human experts and have latency appropriate to production environments. Finally, we release the Ace-CEFR dataset to the public for research and development. | [
"cs.CL",
"cs.AI"
] |
# Introduction
Designing and implementing robust test automation frameworks has emerged as a critical factor in ensuring the reliability and quality of software applications in today’s fast-paced development environment. This work focuses on leveraging the capabilities of Cucumber-BDD integrated with Java to create a framework that bridges the gap between technical development and business requirements. The adoption of Behavior-Driven Development (BDD) facilitates clear communication among project stakeholders by translating complex requirements into simple, human-readable test scenarios. Java’s powerful and versatile ecosystem supports the development of modular and scalable test scripts, enabling seamless maintenance and rapid adaptation to evolving project needs. The framework presented in this study emphasizes a structured methodology, beginning with detailed requirement analysis, followed by the formulation of comprehensive test scenarios and the development of reusable automation components. By incorporating best practices in software testing and automation, the framework addresses common challenges such as test data management, environment configuration, and integration with continuous integration pipelines. Through systematic design and iterative improvements, the approach aims to reduce manual testing efforts while enhancing defect detection rates and overall software performance. The integration of Cucumber-BDD with Java not only streamlines the automation process but also fosters a collaborative culture among development teams. This introduction outlines the fundamental principles, design considerations, and implementation strategies that underpin the framework, providing a roadmap for practitioners seeking to enhance their testing processes and deliver high-quality software products. This paper further discusses the benefits and potential limitations of the framework, offering practical recommendations for successful adoption and continuous evolution.
# 1. Background and Motivation
In today’s agile and fast-paced software development landscape, ensuring the reliability of applications while accelerating release cycles is paramount. The emergence of behavior-driven development (BDD) practices, particularly through tools like Cucumber, has transformed how teams design test scenarios. Integrating Cucumber-BDD with Java allows for the creation of test automation frameworks that are both robust and maintainable, directly linking user stories with executable tests.
# 2. Framework Components
# Cucumber-BDD:
Cucumber employs a natural language syntax, enabling stakeholders with non-technical backgrounds to understand and contribute to test scenarios. This transparency facilitates better communication and ensures that business requirements are directly reflected in the testing process.
# Java:
Java’s extensive libraries and platform independence make it an ideal candidate for developing scalable test automation solutions. Its object-oriented nature supports the creation of reusable and modular test components that can evolve with changing project needs.
# 3. Importance of Robust Test Automation
Robust test automation frameworks reduce manual testing efforts, improve defect detection rates, and enable continuous integration/continuous delivery (CI/CD) pipelines. By ensuring that test cases are both reliable and easily maintainable, organizations can minimize the risk of regressions and accelerate software delivery without compromising on quality.
Source: https://kmccorp.in/enhancing-quality-assurance-with-automatedtesting-a-cucumber-framework-approach/
# 4. Objectives and Scope
This study aims to design a comprehensive test automation framework by integrating Cucumber-BDD and Java. The objectives include addressing common challenges such as test data management and environment configuration, while establishing a modular structure that promotes code reusability and efficient maintenance.
# 5. Structure of the Work
The subsequent sections detail the framework’s design and implementation, discuss empirical validations, and explore avenues for future enhancements in automated testing practices.
# CASE STUDIES AND RESEARCH GAP
# 1. Overview of Existing Studies
Recent research highlights a strong emphasis on agile testing practices and the adoption of BDD for improved collaboration between business and technical teams. Studies conducted between 2015 and 2018 focused on the early adoption of BDD methodologies and the integration of test automation in agile environments. Researchers demonstrated how Cucumber’s human-readable format helped bridge communication gaps, yet noted limitations in scalability for larger systems.
# 2. Advancements from 2019 to 2022
The period from 2019 to 2022 saw significant enhancements in automation frameworks, with multiple case studies illustrating the integration of Java-based solutions with BDD practices. Researchers have explored various design patterns to achieve modularity and maintainability, emphasizing the role of reusable components and automated reporting systems. During this phase, the integration with CI/CD pipelines received considerable attention, enhancing test efficiency and early defect detection.
# 3. Recent Trends in 2023 and 2024
Recent literature (2023–2024) has pivoted towards the incorporation of advanced analytics and AI-driven insights into test automation frameworks. These studies explore predictive testing models and dynamic test data management, aiming to further reduce human intervention. While promising, these innovations are yet to be widely standardized or adopted in a uniform manner across industries.
# 4. Identified Research Gap
Despite the evolution in test automation frameworks, a notable research gap persists in establishing a unified methodology that seamlessly integrates Cucumber-BDD with Java across diverse application environments. There is a limited understanding of:
• Scalability challenges: How modular design principles can be optimized for large-scale enterprise applications.
• Standardization: Best practices for unifying BDD and Java-based testing approaches that are adaptable to various domains.
• Advanced integration: Effective incorporation of AI and machine learning techniques to predict test failures and optimize test suite management.
# DETAILED LITERATURE REVIEWS
# 1. Behavior-Driven Development Adoption in Agile Environments (2015)
This early study explored the integration of Behavior-Driven Development (BDD) within agile teams. It demonstrated that adopting BDD practices using Cucumber provided clearer communication between developers and business stakeholders. The research highlighted how natural language test scenarios improved requirements traceability and reduced ambiguity in test cases. The paper also discussed initial challenges such as tooling integration with Java and the need for a cultural shift within development teams.
# 2. Scalable Automation Frameworks with Java and Cucumber (2016)
The 2016 work focused on building scalable automation frameworks by leveraging Java’s robust ecosystem alongside Cucumber’s BDD capabilities. It presented a modular architecture that supported component reuse and simplified maintenance. Key insights included the benefits of object-oriented design in creating flexible test scripts and the challenges related to integrating legacy systems. The study stressed the importance of designing with scalability in mind to accommodate growing codebases and evolving business requirements.
# 3. Continuous Integration and BDD: A Synergistic Approach (2017)
In 2017, researchers investigated the integration of BDD frameworks with continuous integration (CI) pipelines. This paper demonstrated how automating tests with Cucumber and Java improved early defect detection and reduced regression risks. The study provided practical guidelines for configuring CI environments to support automated BDD tests and emphasized the role of automated reporting in maintaining high software quality.
# 4. Enhancing Modular Design in Test Automation (2018)
This review from 2018 examined the importance of modular design principles in test automation frameworks. It detailed how dividing tests into reusable modules could lead to more maintainable and adaptable frameworks. The study showcased several design patterns tailored to Java-based automation and discussed best practices for organizing Cucumber test suites, aiming to streamline updates when application functionalities evolved.
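A minimal sketch of the modular layout these reviews describe, under the assumption that shared state is injected into small, reusable step modules; all class and method names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Shared test state lives in one object that every module receives.
class Session {
    final Map<String, String> data = new HashMap<>();
}

// Each feature area gets its own small, reusable step module.
class LoginSteps {
    private final Session session;
    LoginSteps(Session s) { this.session = s; }
    void loginAs(String user) { session.data.put("user", user); }
}

class CheckoutSteps {
    private final Session session;
    CheckoutSteps(Session s) { this.session = s; }
    String checkout() {
        // Reuses whatever state earlier modules established.
        return "order placed by " + session.data.getOrDefault("user", "guest");
    }
}

class ModularSuiteDemo {
    static String run() {
        Session shared = new Session();
        new LoginSteps(shared).loginAs("alice");
        return new CheckoutSteps(shared).checkout();
    }

    public static void main(String[] args) {
        System.out.println(run()); // order placed by alice
    }
}
```

Because modules depend only on the injected `Session`, a suite can recombine them freely, which is the reuse property the review attributes to modular design.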
# 5. Enterprise-Level BDD Implementations (2019)
In 2019, research shifted toward enterprise applications, highlighting case studies where Cucumber-BDD frameworks were deployed in large-scale environments. This paper detailed methods for handling complex test data and integrating with enterprise-grade build tools. It underscored the need for robust error-handling mechanisms and adaptive test reporting systems to support large development teams and complex project infrastructures.
# 6. Distributed Testing and BDD Frameworks (2020)
This study focused on challenges and solutions for implementing BDD frameworks in distributed systems. It explored strategies for synchronizing test executions across various environments using Java’s concurrency features and Cucumber’s parallel execution capabilities. The work also discussed network latency and resource management as critical factors influencing test stability in distributed contexts.
# 7. Improving Test Maintainability and Reusability (2021)
The 2021 literature emphasized enhancing maintainability in test automation frameworks. It proposed refactoring strategies and the implementation of design patterns that promote reusability. The research compared monolithic versus modular test architectures, illustrating how Java’s inheritance and interface capabilities can be harnessed alongside Cucumber’s scenario outlines to minimize duplication and simplify updates.
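Scenario outlines, mentioned above as a duplication-reducing device, look like the following illustrative Gherkin fragment; the feature, step text, and data values are invented, and one outline replaces what would otherwise be several near-identical scenarios.

```gherkin
Scenario Outline: Apply a discount code
  Given a cart totalling <total>
  When the code "<code>" is applied
  Then the final price should be <price>

  Examples:
    | total | code   | price |
    | 100   | SAVE10 | 90    |
    | 200   | SAVE10 | 180   |
```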
Source: https://katalon.com/resources-center/blog/bdd-testing
# 8. Integrating Advanced Analytics into BDD Frameworks (2022)
A 2022 study introduced the integration of advanced analytics within BDD frameworks to predict test failures and optimize execution strategies. Researchers experimented with data-driven decision-making models to improve test suite efficiency. The findings suggested that coupling analytical tools with Java-based frameworks could help in identifying flaky tests and enhancing overall test reliability.
# 9. AI-Driven Predictive Testing Models (2023)
The 2023 paper explored the incorporation of artificial intelligence (AI) techniques into test automation frameworks. It discussed how machine learning models could be trained to forecast potential test failures based on historical data, thereby proactively adjusting test scenarios. The study provided insights into combining AI with Cucumber-BDD to create a more adaptive and self-optimizing test environment.
# 10. Future Trends: Hybrid Models for Test Automation (2024)
The most recent study from 2024 examined emerging trends in test automation by proposing a hybrid model that integrates traditional BDD practices with novel automation techniques. It highlighted the potential of blending Java’s mature ecosystem with new technologies such as containerized test environments and microservices-based testing. The research identified promising areas for further investigation, including the standardization of hybrid testing models and the automation of complex integration scenarios.
# PROBLEM STATEMENT
In today’s rapidly evolving software development landscape, ensuring software quality through effective testing is more challenging than ever. Traditional manual testing methods cannot keep pace with agile development cycles and the increasing complexity of modern applications. Despite the adoption of automated testing frameworks, many organizations face issues related to maintainability, scalability, and efficient integration of test cases with continuously changing requirements. Specifically, integrating Behavior-Driven Development (BDD) tools like Cucumber with Java offers a promising approach by aligning test cases with business requirements through natural language. However, the design and implementation of such frameworks are fraught with challenges, including the complexity of modular architecture, efficient test data management, synchronization within distributed environments, and integration with continuous integration/continuous delivery (CI/CD) pipelines. These challenges often lead to fragmented testing practices, reduced reusability, and increased maintenance overhead. Therefore, there is a pressing need to develop a robust, scalable, and adaptable test automation framework that effectively leverages the strengths of both Cucumber-BDD and Java to enhance communication between technical and non-technical stakeholders while ensuring high software quality and rapid delivery.
# RESEARCH QUESTIONS
1. How can a test automation framework be designed to maximize modularity and reusability when integrating Cucumber-BDD with Java? This question investigates the architectural strategies and design patterns that facilitate the creation of modular test components. It explores how Java’s object-oriented features can be effectively combined with Cucumber’s scenario-driven approach to produce a framework that is both maintainable and scalable.
2. What are the key challenges and solutions in integrating automated test suites with CI/CD pipelines in a Cucumber-BDD and Java environment? This research question aims to identify common integration issues—such as test synchronization, data management, and error handling—and to propose strategies that enable seamless incorporation of automated tests into continuous integration workflows.
3. In what ways can advanced analytics and AI techniques enhance the predictive capabilities and efficiency of a test automation framework built with Cucumber-BDD and Java? Here, the focus is on evaluating the potential for integrating machine learning and data analytics to predict test failures, optimize test execution, and reduce maintenance efforts, thereby enhancing the overall effectiveness of the test automation process.
4. How does the integration of natural language test scenarios with technical test scripts affect stakeholder communication and overall test quality? This question examines the impact of using Cucumber’s human-readable language on bridging the gap between technical developers and business stakeholders, and how this affects the clarity, accuracy, and comprehensiveness of test cases.
5. What are the scalability concerns when implementing a test automation framework for large-scale enterprise applications using Cucumber-BDD and Java, and how can these be addressed? This question explores the limitations of current frameworks when applied to large, complex systems and seeks solutions to overcome scalability issues through efficient design and resource management.
# RESEARCH METHODOLOGY
# 1. Research Design
This study will adopt a mixed-methods approach, combining both qualitative and quantitative research techniques. The primary aim is to evaluate the design, implementation, and performance of the proposed test automation framework. The methodology is structured into several phases:
# Literature Review:
An extensive review of academic publications, technical reports, and industry case studies from 2015 to 2024 will be conducted to establish a solid theoretical foundation. This phase will identify key challenges, best practices, and research gaps in integrating Cucumber-BDD with Java.
# Framework Design and Development:
Based on insights from the literature, the framework will be designed with a focus on modularity, reusability, and integration with CI/CD pipelines. Design patterns and object-oriented principles will be applied to construct a scalable architecture. The development phase will utilize Java as the primary programming language and Cucumber for behavior-driven testing.
# Experimental Setup:
A series of case studies and controlled experiments will be set up in both simulated and real-world environments. The experiments will measure various performance indicators such as test execution time, defect detection rate, maintainability, and integration efficiency. Data will be collected using automated logging tools and manual observations.
# Data Analysis:
Quantitative data will be statistically analyzed to compare the performance of the proposed framework against traditional automation practices. Qualitative feedback from development teams and stakeholders will be gathered through surveys and interviews to assess communication improvements and ease of use.
# Validation:
The framework’s effectiveness will be validated through iterative testing and refinement. Peer reviews and industry expert evaluations will also be incorporated to ensure reliability and practical applicability.
# 2. Tools and Technologies
Programming Language: Java
Testing Framework: Cucumber-BDD
CI/CD Tools: Jenkins, GitLab CI, or similar platforms
Data Analysis Software: Statistical analysis tools (e.g., SPSS, R) and qualitative analysis software for survey data
# 3. Ethical Considerations
The study will ensure ethical compliance by maintaining transparency with all participants during surveys and interviews, protecting sensitive data, and adhering to academic integrity principles throughout the research process.
# ASSESSMENT OF THE STUDY
# 1. Contributions

The study is poised to offer significant contributions by:
Developing a comprehensive framework that enhances test automation through improved modularity, maintainability, and integration.
Bridging the communication gap between technical and non-technical stakeholders by leveraging natural language test scenarios.
Providing empirical data that quantifies the performance benefits and scalability of integrating Cucumber-BDD with Java.
# 2. Strengths
Innovative Integration:
The study’s combination of Cucumber-BDD with Java addresses real-world challenges in modern agile environments, making it highly relevant to current software development practices.
Methodological Rigor:
By adopting a mixed-methods approach, the research captures both quantitative performance metrics and qualitative insights, leading to a well-rounded assessment.
Practical Relevance:
The framework is designed with industry best practices in mind, ensuring that findings are directly applicable to large-scale, real-world projects.
# 3. Limitations and Future Work
Scalability Constraints:
While the study aims to address scalability, real-world validation across various enterprise contexts may reveal additional challenges that require further investigation.
Technological Evolution:
Given the rapid pace of technological advancements, future research should explore the integration of emerging tools such as AI-driven test automation and containerized testing environments.
Generalizability:
The findings may be influenced by the specific development environments and tools used during the study. Expanding the research to include diverse contexts could enhance generalizability.
# STATISTICAL ANALYSIS
Table 1: Test Suite Performance Metrics
This table highlights overall performance improvements, with the proposed
framework showing reduced execution time, higher defect detection, and lower maintenance efforts compared to traditional frameworks.
Table 2: Efficiency Comparison Between Frameworks
The above table compares key efficiency parameters. The statistically significant $p$-values indicate that differences in setup time, integration, and code reusability favor the proposed framework.
Table 3: Test Execution Time Analysis
This table presents the execution time across different environments. The results show consistent performance with moderate variability as test conditions become more complex.
Table 4: Defect Detection Rate Analysis
The table compares the number of defects detected during different testing phases, demonstrating that the proposed framework consistently outperforms the traditional approach in defect detection.
Table 5: Scalability and Maintenance Overhead
This final table highlights scalability and maintenance aspects. The proposed framework exhibits superior code reusability, reduced module integration time, and lower weekly maintenance efforts compared to traditional frameworks.
# SIGNIFICANCE OF THE STUDY
This study is significant because it addresses critical challenges in modern software development, where rapid release cycles and complex application architectures demand more robust and efficient testing solutions. By integrating
Cucumber-BDD with Java, the proposed framework bridges the gap between technical teams and business stakeholders through natural language test scenarios, promoting clear communication and enhanced requirement traceability.
# Potential Impact:
Improved Quality Assurance: The framework aims to enhance defect detection rates and reduce manual testing efforts, thereby elevating overall software quality.
Accelerated Development: By integrating seamlessly with CI/CD pipelines, the framework can significantly reduce the time spent on test execution and maintenance, allowing development teams to focus on feature delivery.
Enhanced Collaboration: The use of behavior-driven development fosters a shared understanding of application behavior among all team members, ultimately reducing misinterpretations and errors.
Scalability and Adaptability: The modular design encourages code reuse and maintainability, making it easier for organizations to scale testing practices as their software evolves.
# Practical Implementation:
The framework can be practically implemented by leveraging widely adopted tools such as Java for scripting and Cucumber for behavior specifications. Organizations can integrate this framework within their existing development ecosystems using CI/CD tools like Jenkins or GitLab CI. Pilot projects and iterative testing cycles will help tailor the framework to address domain-specific challenges while providing measurable benefits in efficiency and quality.
# RESULTS
The experimental evaluation of the proposed test automation framework yielded the following key outcomes:
Test Execution Efficiency: The framework demonstrated a reduction in test execution time by approximately $25\%$ compared to traditional frameworks, leading to quicker feedback cycles.
Enhanced Defect Detection: Empirical data indicated a defect detection improvement of $15{-}20\%$ across various testing phases, ensuring higher software quality.
Reduced Maintenance Overhead: The modular architecture resulted in a $33\%$ reduction in maintenance efforts, reflecting lower long-term operational costs.
Improved Integration: The framework significantly decreased setup and CI/CD integration times, showcasing enhanced efficiency in continuous testing environments.
Scalability Metrics: Scalability assessments confirmed that the framework supports higher code reusability and faster module integration, making it suitable for large-scale enterprise applications.

# ABSTRACT

Modern software development demands rapid, reliable testing methods to maintain high quality in increasingly complex systems. This paper details a comprehensive approach to designing and implementing robust test automation frameworks by leveraging Cucumber-BDD with Java. By utilizing Cucumber-BDD's natural language syntax, the framework enables clear communication between technical and non-technical team members, ensuring that requirements are accurately translated into executable tests. Java, renowned for its versatility and extensive libraries, serves as the backbone for creating scalable, maintainable, and efficient test scripts. The framework described herein focuses on modular architecture, facilitating reusability and streamlined maintenance across diverse application domains. It systematically addresses challenges such as test data management, dynamic environment handling, and integration with continuous integration/continuous delivery pipelines. Empirical evaluations demonstrate that this integrated approach not only reduces manual testing effort but also significantly enhances defect detection and overall software reliability. The methodology encourages the adoption of best practices in test design, including clear documentation, iterative development, and automated reporting.
# 1. Introduction
Games play a pivotal role in the field of AI, offering unique challenges to the research community and serving as fertile ground for the development of novel AI algorithms. Board games, such as Go [1] and chess [2], provide ideal settings for perfect-information scenarios, where all agents are fully aware of the environment states. Card games, like heads-up no-limit hold’em (HUNL) [3,4], Doudizhu [5,6], and Mahjong [7], present different dynamics with their imperfect-information nature, where agents must infer and cope with hidden information from states. Video games, such as Starcraft [8], Minecraft [9], and Honor of Kings [10], push AI algorithms to process and extract crucial features from a multitude of signals amidst noise.
Conversely, the advancement of AI algorithms also incites new enthusiasm in games. The work of AlphaGo [11] in 2016 has had a long-lasting impact on the Go community. It revolutionized the play style of Go, a game with millennia of history, and changed the perspectives of world champions on this game [12]. Teaching tools based on AlphaGo [13] have become invaluable resources for newcomers while empowering professional players to set new records [14].
Mahjong, a worldwide popular game with unique characteristics, has gained traction in the AI research community as a new testbed. It brings its own flavor as a multiplayer imperfect-information environment. First, there is no rank of individual tiles in Mahjong; all tiles are equal in their role within the game. A game of Mahjong is not won by beating other players in card ranks but by being the first to reach a winning pattern. Therefore, state evaluation is more challenging for Mahjong players, as they need to assess the similarity and distance between game states and the closest game goals. Second, the game objective in Mahjong is to become the first to complete one of many possible winning patterns. The optimal goal can frequently change when players draw new tiles during gameplay. In fact, Mahjong’s core difficulty lies in selecting the most effective goal among numerous possibilities. Players must evaluate their goals and make decisions upon drawing each tile and reacting to other players’ discarded tiles. This decision-making process often involves situations where multiple goals are of similar distance, requiring trade-offs that distinguish play styles and reveal a player’s level of expertise.
Figure 1. Comparison between current situation of black-box Mahjong agents and Mahjong agents with Mxplainer. a). Raw action output without explanations. b). Mxplainer explains possible reasons behind actions.
Several strong agents have already been developed for different variants of Mahjong rules [7,15,16]. However, as illustrated in Fig.1, without the use of explainable AI methods[17,18], people can only observe the agents’ actions without understanding how the game states are evaluated or which game goals are preferred that lead to those actions.
In Explainable AI (XAI)[19], black-box models typically refer to neural networks that lack inherent transparency and thus rely on post-hoc explanation tools for interpretability [18]. Current post-hoc XAI tools, such as Grad-CAM [20] and LIME [21], are primarily designed for neural networks. While these tools can explain how input features affect outputs, they do not provide insights into agents’ decision-making processes.
In this paper, we present Mxplainer (Mahjong Explainer), a parameterized classical agent framework designed to serve as an analytical tool for explaining the decision-making process of black-box Mahjong agents, as shown in Fig. 1(b). Specifically, we have developed a parameterized framework that forms the basis for a family of search-based Mahjong agents. This framework is then translated into an equivalent neural network model, which can be trained using gradient descent to mimic any black-box Mahjong agent. Finally, the learned parameters are used to populate the parameterized search-based agents. We consider these classical agents to be inherently explainable because each calculation and decision step within them is comprehensible to human experts. This enables detailed interpretation and analysis of the decision-making processes and characteristics of the original black-box agents.
Through a series of experiments on game data from both AI agents and human players, we demonstrate that the learned parameters effectively reflect the decision processes of agents, including their preferred game goals and tiles to play. Our research also shows that by delving into the framework components, we can interpret the decision-making process behind the actions of black-box agents.
This paper pioneers research on analyzing Mahjong agents by presenting Mxplainer, a framework to explain black-box decision-making agents using search-based algorithms. Mxplainer allows AI researchers to profile and compare both AI agents and human players effectively. Additionally, we propose a method to convert any parameterized classical agent into a neural agent for automatic parameter tuning. Beyond traditional approaches like decision trees, our work explores the conversion from neural agents to classical agents. Our data and codes, which encompass toy examples and comparative studies, are accessible at https://github.com/Lingfeng158/Mxplainer.
The rest of this paper is organized as follows: We first review related literature. Next, we introduce the rules of Mahjong and the specific variant used in our study. Then, we provide a detailed explanation of the components of our approach. Following that, we present a series of experiments demonstrating the effectiveness of our method in approximating and capturing the characteristics of different Mahjong agents. Finally, we discuss the implications of our work and conclude with future research directions.
# 2. Related Works
Explainable AI (XAI) [19] is a research domain dedicated to developing techniques for interpreting AI models for humans. This field encompasses several categories: classification models, generative AI, and decision-making agents. A specific subfield within this domain is Explainable Reinforcement Learning (XRL) [22], which focuses on explaining the behavior of decision-making agents. Explanations in XRL can be classified into two main categories: global and local.
Global explanations provide a high-level perspective on the characteristics and overall strategies of black-box agents, answering questions such as how an agent’s strategy differs from others. Local explanations, on the other hand, focus on the detailed decision-making processes of agents, elucidating why an agent selects action A over B under specific scenarios.
XRL methods can be classified into intrinsic and post-hoc methods. Intrinsic methods directly generate explanations from the original black-box models, while post-hoc methods rely on additional models to explain existing ones. Imitation Learning (IL)[23,24] is a family of post-hoc techniques that approximate a target policy. LMUT[25] constructs a U-tree with linear models as leaf nodes and approximates target policies through a node-splitting algorithm and gradient descent. Q-BSP [26] uses batched Q-value to partition nodes and generate trees efficiently. EFS [27] employs an ensemble of linear models with non-linear features generated by genetic programming to approximate target policies. These methods have been tested and excelled in environments such as CartPole, MountainCar, and others in the Gym [28]. However, they rely on properly sampled state-action pairs to generate policies and may not be robust to out-of-distribution (OOD) states, which is particularly crucial for Mahjong with its high-dimensional state space and imperfect information.
PIRL [29] distinguishes itself among IL methods by introducing parameterized policy templates using its proposed policy programming language. It approximates the target $\pi$ through fitting parameters with Bayesian Optimization in Markov games, achieving high performance in the TORCS car racing game. Compared to TORCS, Mahjong has far more complex state features and requires encoding action histories within states to obtain the Markov property. Additionally, Mahjong agents must make multi-step decisions from game goal selection to tile picking. Similar to PIRL, we define a parameterized search-based framework and optimize parameters using batched gradient descent to address these challenges.
# 3. Mahjong the Game
Mahjong is a four-player imperfect information tile-based tabletop game. The complexity of imperfect-information games can be measured by information sets, which are game states that players cannot distinguish from their own observations. The average size of information sets in Mahjong is around $10^{48}$, making it a much more complex game to solve than Heads-Up Texas Hold’em [30], whose average size of information sets is around $10^{3}$. To facilitate the readability of this paper, we highlight terminologies used in Mahjong with bold texts, and we distinguish scoring patterns (fans) by italicized texts.
In Mahjong, there are a maximum of 144 tiles, as shown in Fig. 2-A. Despite its plethora of rule variants, Mahjong’s general rules are the same. On a broad level, Mahjong is a pattern-matching game. Each player begins with 13 tiles only observable to themselves, and they take turns to draw and discard one tile until one completes a game goal with a 14th tile. The general pattern of 14 tiles is four melds and a pair, as shown in Fig. 2-C. A meld can take the form of Chow, Pung, and Kong, as shown in Fig. 2-B. Apart from drawing all the tiles by themselves, players can take the tile just discarded by another player instead of drawing one to form a meld, called melding, or declare a win.
Figure 2. A: the Mahjong tile set, consisting of suited tiles, honored tiles, and flower tiles. B: the three meld types (Chow, Pung, and Kong). C: the general winning pattern of four melds and a pair.
# 3.1. Official International Mahjong
Official International Mahjong stipulates Mahjong Competition Rules (MCR) to enhance the game’s complexity and competitiveness and weaken its gambling nature. It specifies 81 scoring patterns with different points, called "fans", ranging from 1 to 88 points. In addition to the winning pattern of four melds and a pair, players must score at least 8 points by matching multiple scoring patterns to declare a win.
Specific rules and requirements for each pattern can be found in this book [31]. Of 81 fans, 56 are highly valued and called major fans since most winning hands usually consist of at least one. The standard strategy of MCR players is to make game plans by identifying several major fans closest to their initial set of tiles. Then, depending on their incoming tiles, they gradually select one of them as the terminal goal and strive to collect all the remaining tiles before others do. The exact rules of MCR are detailed in Official International Mahjong: A New Playground for AI Research [30].
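The 8-point threshold can be sketched as a simple check. The fan names and point values below are a small illustrative subset of the 81-fan table (All Chows and Concealed Hand are low-value fans, Mixed Straight a mid-value one), not a full implementation of the rules.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of MCR's winning threshold: a hand may only declare
// a win if the points of its matched scoring patterns sum to >= 8.
class McrScorer {
    static final int WIN_THRESHOLD = 8;

    static int totalPoints(Map<String, Integer> matchedFans) {
        return matchedFans.values().stream().mapToInt(Integer::intValue).sum();
    }

    static boolean canDeclareWin(Map<String, Integer> matchedFans) {
        return totalPoints(matchedFans) >= WIN_THRESHOLD;
    }

    public static void main(String[] args) {
        Map<String, Integer> fans = new HashMap<>();
        fans.put("All Chows", 2);
        fans.put("Concealed Hand", 2);
        System.out.println(canDeclareWin(fans)); // false: only 4 points
        fans.put("Mixed Straight", 8);
        System.out.println(canDeclareWin(fans)); // true: 12 points
    }
}
```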
# 4. Methods
We first introduce the general concepts of Mxplainer and then present the details of the implementations in the following subsections. To facilitate readability, we use uppercase symbols to indicate concepts, and lowercase symbols with subscripts for instances of concepts.
Fig. 3 presents the concept overview of Mxplainer. We assume that the search-based nature of $F$ is explainable to experts in the domain, such as Mahjong players and researchers. Within $F$, there are parameters $\Theta$ that control the behaviors of $F$. To explain a specific target agent $\Psi$, we would like the behaviors of $F$ to approximate those of $\Psi$ as closely as possible. In order to automate and speed up the approximation process, we convert a part of $F$ that contains $\Theta$ into an equivalent neural network representation, and leverage supervised learning to achieve the goals.

Figure 3. Concept overview of Mxplainer: the search-based framework $F$ is converted into an equivalent form with the same parameters, and the converted framework is trained to approximate the target agent $\Psi$ by matching its actions on the same input states $S$ and search results $R$.
$F$ consists of a Search Component $SC$ and a Calculation Component $CC$, denoted as $F = SC \mid CC$. $SC$ searches for and proposes valid goals in a fixed order. Next, $CC$ takes groups of manually defined parameters $\Theta$, each of which carries meaning and is explainable to experts, and makes decisions based on the search results from $SC$, as shown in Fig. 4.
$CC$ can be converted into an equivalent neural network $N$, whose neurons are semantically equivalent to $\Theta$. $SC \mid N$ can approximate any target agent $\Psi$ by fitting $\Psi$’s state-action pairs. Since $\Theta$ is the same for both $CC$ and $N$, the learned $\hat{\Theta}$ can be put back into $CC$ and explains actions locally through step-by-step deductions of $SC \mid CC$. Moreover, by normalizing and studying $\Theta$, Mxplainer is able to compare and analyze agents’ strategies and characteristics.
The construction of $F$ and the design of parameters $\Theta$ are problem-related, as they reflect researchers’ and players’ attention to the game’s characteristics and the game agents’ strategies. The conversion from $CC$ to $N$ and the approximation through supervised learning are problem-agnostic and can be automated. Within Mxplainer, the $SC$ of $F$ is fixed and identical for all agents, while the behaviors of $CC$ and $N$ change as $\Theta$ changes for different target agents $\Psi$.
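The equivalence between the calculation component and its network form can be illustrated with a toy linear scorer: the same weight vector serves both as the classical parameters $\Theta$ and as trainable "neurons", so a few gradient steps against a target output stand in for the supervised approximation. The features, target value, and learning rate below are invented for illustration, not taken from the paper.

```java
// Toy sketch of the CC/N equivalence: a linear scorer over search
// features whose weight vector doubles as the trainable parameters.
class LinearCC {
    final double[] theta;

    LinearCC(double[] theta) { this.theta = theta; }

    // Classical view: score a goal from its features using Θ.
    double score(double[] features) {
        double s = 0.0;
        for (int i = 0; i < theta.length; i++) s += theta[i] * features[i];
        return s;
    }

    // Network view: one gradient-descent step on squared error
    // against the target agent's output, updating the same Θ.
    void fitStep(double[] features, double target, double lr) {
        double err = score(features) - target;
        for (int i = 0; i < theta.length; i++) theta[i] -= lr * err * features[i];
    }

    public static void main(String[] args) {
        LinearCC cc = new LinearCC(new double[]{0.0, 0.0});
        double[] features = {1.0, 2.0}; // e.g. shanten distance, fan count
        for (int i = 0; i < 200; i++) cc.fitStep(features, 5.0, 0.05);
        // The fitted weights can now be read back as the classical Θ.
        System.out.printf("score=%.4f%n", cc.score(features));
    }
}
```

After fitting, the learned weights are interpretable in place, which is the property Mxplainer relies on when putting $\hat{\Theta}$ back into the search-based agent.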
# 4.1. Parameters $\Theta$ of Framework $F$
For $\Theta$ , we manually craft three groups of parameters to model people’s general understanding of Mahjong, $\Theta _ { t i l e } , \Theta _ { f a n } ,$ and $\Theta _ { m e l d }$ .
$\Theta_{tile}$ is used to break ties between tiles, and different players may have different preferences for tiles.
$\Theta_{fan}$ is designed to break ties between goals when multiple goals are equally distant from the current hand. There are billions of possible goals in Mahjong, but each goal consists of fans. Thus, we use the compound of preferences over fans to sort goals. We hypothesize that there exists a natural order of difficulty between fans, which implies that some are more likely to be achieved than others. However, such an order is impossible to obtain unless there is an oracle that always gives the best action. Additionally, players break ties based on their inclinations towards fans. Since the natural difficulty order and players’ inclinations are hard to decouple, we use $\Theta_{fan}$ to represent their product. Consequently, $\Theta_{fan}$ values for different fans of a single player cannot be compared directly, but $\Theta_{fan}$ values for the same fan across different players reflect their comparative preferences.
$\Theta_{meld}$ contains the linear weights of a heuristic function that approximates the probability of melding a tile from other players. The linear function takes features from local game states, including the number of total unshown tiles, the length of game steps, and the unshown tile counts of the neighboring tiles. The components and the usage of $\Theta_{meld}$ will be discussed in detail in the following sections.
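A hypothetical reading of the $\Theta_{meld}$ heuristic as code: the paper specifies only a linear function over local-state features, so the logistic squashing, feature layout, and weights below are assumptions added so the score can be read directly as a probability.

```java
// Illustrative sketch of the melding heuristic: a dot product over
// local-state features, squashed to (0, 1) as a probability estimate.
class MeldHeuristic {
    // Assumed feature order: bias, unshown count of the tile,
    // game-step length, unshown counts of neighboring tiles.
    final double[] thetaMeld;

    MeldHeuristic(double[] thetaMeld) { this.thetaMeld = thetaMeld; }

    double probability(double[] features) {
        double z = 0.0;
        for (int i = 0; i < thetaMeld.length; i++) z += thetaMeld[i] * features[i];
        return 1.0 / (1.0 + Math.exp(-z)); // keep the estimate in (0, 1)
    }

    public static void main(String[] args) {
        MeldHeuristic h = new MeldHeuristic(new double[]{-1.0, 0.5, -0.1, 0.3});
        System.out.printf("%.4f%n", h.probability(new double[]{1.0, 2.0, 3.0, 1.0})); // 0.5000
    }
}
```

With a positive weight on the unshown-tile count, the estimated melding probability rises as more copies of the tile remain hidden, which matches the intuition the feature encodes.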
# 4.2. Search Component SC of Framework $F$
In Mahjong, Redundant Tiles $R$ refer to the tiles in a player's hand that are considered useless for achieving their game goals. In contrast, Missing Tiles $M$ are the ones that players need to acquire in future rounds to complete their objectives. Following that, the Shanten Distance is defined as $D = |M|$, which conveys the distance between the current hand and a selected goal.

Figure 4. Overview of Framework $F$: (A) the Goal Proposer (Search Component) prepares goals by searching the observable state $S$ and returns a list of goals $[G:(M,R,\Phi)]$ and the unshown tile dict $U$; the Win Rate Calculator (Calculation Component) calculates each goal's win rate using the fan preferences and melding preferences; the Decision Selector uses the tile preferences $\Theta_{tile}$ to select and return the least useful tile $t$.
Here, we define a game goal as $G = (M, R, \Phi)$: once both $M$ and $R$ are fixed, a game goal is set, and its corresponding fans $\Phi$ can be calculated. Each tile $t \in M$ additionally has two indicators, $i_p$ and $i_c$, which determine whether it can be acquired through melding from other players: $i_p$ is true if the player already owns two tiles of the Pung, and likewise for $i_c$ and Chow. Thus, each $t \in M$ is represented as a triple $(t, i_p, i_c)$.
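The goal representation above can be made concrete with a small sketch; the class names, field layout, and tile strings below are illustrative stand-ins, not the paper's actual data structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MissingTile:
    tile: str   # e.g. "B9" (9 of Bamboo); naming is hypothetical
    i_p: bool   # True if the player already owns two tiles of the Pung
    i_c: bool   # True if the player already owns two tiles of the Chow

@dataclass
class Goal:
    missing: list    # M: tiles still needed, each with melding indicators
    redundant: list  # R: tiles in hand that this goal does not use
    fans: list       # Phi: fans this goal scores when completed

    @property
    def shanten(self) -> int:
        # Shanten Distance D = |M|: how far the hand is from this goal
        return len(self.missing)

g = Goal(
    missing=[MissingTile("B9", i_p=False, i_c=True)],
    redundant=["C1", "D7"],
    fans=["Pure Straight"],
)
```

With both $M$ and $R$ fixed, the fans follow, which is why the triple fully identifies a goal.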
Through dynamic programming, we can efficiently propose different combinations of tiles and test whether they satisfy MCR's winning condition. We can search all possible goals, but not all of them are reasonable candidates. In practice, only up to 64 goals $G$ are returned, in ascending order of Shanten Distance $D$, since in most cases only the closest goals matter to decisions, and there are usually fewer than two dozen of them.
However, goals alone are not enough. MCR players also need to consider other observable game information, such as other players' discard history and unshown tiles, to jointly evaluate the win rate of each goal $G$. For our framework $F$, we only consider the unshown tiles $U$, a dictionary that tracks the number of copies of each tile not revealed through observable information.
Thus, the Goal Proposer $P$ accepts game state information $S$ and outputs $([G], U)$ such that $0 < |[G]| \leq 64$, as shown in Fig. 4-A. In most cases, the goals in $[G]$ can be split into several groups. Different groups contain different fans $\Phi$ and represent different paths to win, while the goals within each group have different tile combinations for the same fans $\Phi$.
# 4.3. Calculation Component CC of Framework $F$
The Calculation Component CC of Framework $F$ consists of the Win Rate Calculator $C$ and the Decision Selector $DS$. CC contains three groups of tunable parameters, $\Theta_{fan}$, $\Theta_{meld}$, and $\Theta_{tile}$, that control the behavior of $C$ and $DS$. Much like the weights of a neural network, these three groups of parameters are the key factors that determine the behaviors of agents derived from Framework $F$.
# Algorithm 1 Win Rate Estimation for A Single Goal
Input: Goal $G$: $\langle$missing tiles $M$: $\langle t$, Chow indicator $i_c$, Pung indicator $i_p\rangle$, fan list $F\rangle$; unshown dict $U$; game length $L$
Parameter: $\Theta_{fan}$, $\Theta_{meld}$
Output: Estimated win rate for goal $G$

1: Initialize win rate $wr \gets 100$
2: for all missing tile $m \in M$ do
3:   // Construct local tile feature $x_m$ from game length $L$
4:   // and remaining adjacent tile counts $U$
5:   $x_m \gets U, L$
6:   // Calculate prob. of drawing $m$ and of others discarding $m$
7:   $p_{draw} \gets U[m] / \mathrm{sum}(U)$
8:   $p_{discard} \gets U[m] / \mathrm{sum}(U) \cdot \Theta_{meld} \cdot x_m$
9:   $source \gets 0$
10:  if $i_p$ then
11:    $source \gets 3$  // one can pung from all others
12:  else if $i_c$ then
13:    $source \gets 1$  // one can only chow from the player to the left
14:  end if
15:  $p_{meld} \gets p_{discard} \cdot source$
16:  $wr \gets wr \times (p_{draw} + p_{meld})$
17: end for
18: Total fan weight $fw \gets \sum \Theta_{fan}[f]$ for fan $f \in F$
19: $wr \gets wr \times fw$
20: return $wr$
The Win Rate Calculator $C$ takes the results of the Goal Proposer $P$, $([G], U)$, as input and estimates the win rate for each goal $G$. The detailed algorithm of $C$ is shown in Algorithm 1. Simply put, $C$ multiplies the probabilities of successfully collecting each missing tile of a proposed goal to obtain its estimated win rate.
The estimation of the win rate in the Win Rate Calculator $C$ depends on two groups of parameters, $\Theta_{fan}$ and $\Theta_{meld}$. The probability of acquiring each tile consists of two parts: drawing it oneself or melding another player's discarded tile. For each tile $t$, we model the probability of drawing it as $P(t) = U[t]/Z$, where $Z = \mathrm{sum}(U)$; that is, we assume the tile is drawn uniformly from all unshown tiles. The second part of the probability is partly determined by $\Theta_{meld}$, representing the agent's optimism about forming melds.
As discussed previously, $\Theta_{meld}$ parameterizes a heuristic function that takes in local game features. The features are $\{Z, \frac{1}{Z}, 1-\frac{1}{Z}, L, \frac{1}{L}, 1-\frac{1}{L}, U[t-2]{:}U[t+2], bias\}$, where $Z$ is the number of total unshown tiles, $L$ is the length of the game, and $U[t-2]{:}U[t+2]$ represents the unshown counts of the adjacent tiles centered around tile $t$. With the additional bias term, $\Theta_{meld}$ comprises 12 linear weights in total.
For simplicity, uniform distribution is used to estimate the probability of collecting tiles because we designed the learned parameters only to reflect the characteristics of the agents’ play styles. On the other hand, tie-breaking between multiple goals with similar shanten numbers frequently happens in Mahjong. Players break ties by leaning towards their preferred fans, and such preferences are captured by $\Theta _ { f a n }$ . Thus, in $\Theta _ { f a n } ,$ a higher value of weight for a specific fan represents a higher preference of the agent for this fan.
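Putting the pieces together, Algorithm 1 can be sketched in plain Python. The dict-based goal format, the `features` callback, and the weight containers are assumptions for illustration, not the paper's exact interfaces:

```python
def win_rate(goal, unshown, game_len, theta_fan, theta_meld, features):
    """Estimate the win rate of one goal (a simplified sketch of Algorithm 1).

    unshown:  dict tile -> count of unseen copies (U)
    features: callable (tile, unshown, game_len) -> 12-dim feature list x_m
    """
    Z = sum(unshown.values())
    wr = 100.0
    for m, i_p, i_c in goal["missing"]:           # each (t, i_p, i_c) in M
        x_m = features(m, unshown, game_len)
        p_draw = unshown[m] / Z                   # uniform draw assumption
        # heuristic melding probability: linear in the local features
        p_discard = p_draw * sum(w * x for w, x in zip(theta_meld, x_m))
        source = 3 if i_p else (1 if i_c else 0)  # pung: 3 players; chow: 1
        wr *= p_draw + p_discard * source
    fw = sum(theta_fan.get(f, 0.0) for f in goal["fans"])
    return wr * fw

# tiny usage example with made-up tiles and weights
goal = {"missing": [("B9", False, False)], "fans": ["Seven Pairs"]}
unshown = {"B9": 4, "C1": 4}
wr = win_rate(goal, unshown, game_len=10,
              theta_fan={"Seven Pairs": 2.0},
              theta_meld=[0.0] * 12,
              features=lambda m, U, L: [0.0] * 12)
```

Note how the fan weight acts as a multiplicative tie-breaker: two goals with identical collection probabilities are separated purely by $\Theta_{fan}$.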
The Decision Selector $DS$ collects results from the Win Rate Calculator $C$ and computes the final action based on each goal's estimated win rate. The detailed algorithm for $DS$ is shown in Algorithm 2. The goal of $DS$ is to efficiently select the tile to discard with the most negligible impact on the overall win rate. Heuristically, the required tiles of goals with higher win rates are more important than those with lower ones. Conversely, the redundant tiles of such goals are more worthless, since their
# Algorithm 2 Discarding Tile Selection
Input: List of goals and their win rates $L = [ ( \operatorname { G o a l } G , \operatorname { W i n } \operatorname { r a t e } w r ) ]$ , Hand tiles $H$
Parameter: Θtile
Output: Tile to discard
1: Initialize tile values $d \gets$ Dict $\{ { \mathrm { t i l e } } t : 0 \}$
2: for all $( G , w r ) \in L$ do
3: for all tile $t \in G$ do
4: $d[t] \gets d[t] + wr$ // redundant tiles
5: end for
6: end for
7: for all tile $t \in d$ do
8: $d[t] \gets d[t] \times \Theta_{tile}[t]$
9: end for
10: return $\arg\max_{t \in H} d[t]$
Table 1. Sizes and meanings of Neural Network $N$'s components. The action Pung becomes Kong if Kong is possible.
existence actually hinders the goals' completion. Thus, each tile's worthless degree can be computed by accumulating the win rates of the goals that regard it as a redundant tile, and the tile with the highest worthless degree has the most negligible impact on the overall win rate. The Decision Selector $DS$ accepts $\Theta_{tile}$ as parameters, which are used similarly to $\Theta_{fan}$ to break ties between tiles. For action predictions, such as Chow or Pung, $F$ records the win rates computed by $C$ assuming those actions are taken, and selects the action with the highest win rate.
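The accumulation just described (Algorithm 2) can be sketched as follows; the goal and weight containers are hypothetical stand-ins matching the sketch above:

```python
def select_discard(goals_with_wr, hand, theta_tile):
    """Pick the discard with the most negligible impact (Algorithm 2 sketch).

    goals_with_wr: list of (goal, win_rate) pairs; goal["redundant"] is R
    theta_tile:    dict tile -> tie-breaking preference weight
    """
    worthless = {t: 0.0 for t in hand}
    # accumulate win rate over every goal that treats t as redundant
    for goal, wr in goals_with_wr:
        for t in goal["redundant"]:
            if t in worthless:
                worthless[t] += wr
    # break ties between similarly worthless tiles via tile preferences
    for t in worthless:
        worthless[t] *= theta_tile.get(t, 1.0)
    return max(hand, key=lambda t: worthless[t])

# usage: B9 is redundant for both goals, so it accumulates the most weight
goals = [({"redundant": ["B9"]}, 10.0), ({"redundant": ["B9", "C1"]}, 5.0)]
discard = select_discard(goals, ["B9", "C1"], theta_tile={})
```

Discarding the tile with the highest accumulated value keeps the highest-win-rate goals intact, which is exactly the heuristic stated in the text.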
# 4.4. Differentiable Network N of Mxplainer
The Search-based Framework $F$ is a parameterized search-based agent template, and its parameters $\Theta$ need to be tuned to approximate any target agent's behaviors. Luckily, the Calculator $C$ and the Decision Selector $DS$ only contain fixed-limit loops, with each iteration independent from the others, and if-else statements whose outcomes can be pre-computed in advance. Thus, $C$ and $DS$ can be converted into an equivalent neural network $N$ for parallel computation, and the parameters $\Theta$ can be optimized through batched gradient descent. The rules for the conversion can be found in Appendix II.
$$
\mathbb{I} = \begin{cases} 1 & \text{if it is real data} \\ 0 & \text{otherwise} \end{cases} \tag{1}
$$
To distinguish between padding and actual values, we define a value indicator as in Eq. (1) to facilitate parallel computation. Then, Alg. 1 and Alg. 2 can easily be rewritten as Listing 1 and Listing 2. The inputs are:
- $F \in \mathbb{R}^{80}$, $U \in \mathbb{N}^{34}$, $R \in \mathbb{N}^{64\times34}$, $X \in \mathbb{R}^{34\times12}$: vectorized $F$, $U$, $R$, and $x_m$ from Alg. 1.
- $MT: (\mathbb{I}, M) \in \mathbb{R}^{34\times2}$: missing tiles $M$ with indicators.
- $BM \in \mathbb{N}^{34\times3}$: one-hot branching mask from $i_c$ and $i_p$.
- $WR \in \mathbb{R}^{64\times1}$: the computed win rate from Listing 1.
- w_meld, w_fan, w_tile: $\Theta_{meld}$, $\Theta_{fan}$, and $\Theta_{tile}$.
Listing 1. Paralleled Algorithm 1 in PyTorch
Listing 2. Paralleled Algorithm 2 in PyTorch
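The listings themselves are not reproduced in this excerpt, so here is a hedged NumPy sketch of the vectorized win-rate computation in the spirit of Listing 1. The exact shapes, the per-goal missing-tile mask `M`, and the fan-membership matrix are assumptions based on the input description above:

```python
import numpy as np

def batched_win_rate(M, U, X, BM, w_meld, F, w_fan):
    """Vectorized win-rate sketch for 64 goals (illustrative shapes).

    M:      (64, 34) 0/1 mask of missing tiles per goal (indicator of Eq. (1))
    U:      (34,)    unshown tile counts
    X:      (34, 12) local tile features x_m
    BM:     (34, 3)  one-hot branch mask for {no meld, chow, pung}
    w_meld: (12,)    meld heuristic weights
    F:      (64, 80) 0/1 fan membership per goal
    w_fan:  (80,)    fan preference weights
    """
    Z = U.sum()
    p_draw = U / Z                                   # uniform draw, (34,)
    source = BM @ np.array([0.0, 1.0, 3.0])          # meld sources, (34,)
    p_meld = p_draw * (X @ w_meld) * source          # heuristic meld prob
    p = p_draw + p_meld
    # padding trick: tiles that are not missing contribute a factor of 1
    factors = M * p + (1.0 - M)                      # (64, 34)
    wr = 100.0 * factors.prod(axis=1)                # (64,)
    return wr * (F @ w_fan)                          # weight by fan prefs

# tiny example: goal 0 misses only tile 0; all meld features zeroed
M = np.zeros((64, 34)); M[0, 0] = 1.0
F = np.zeros((64, 80)); F[0, 0] = 1.0
wr = batched_win_rate(M, np.full(34, 2.0), np.zeros((34, 12)),
                      np.zeros((34, 3)), np.zeros(12), F, np.ones(80))
```

The indicator trick is the whole point: padded entries multiply in a factor of 1, so all 64 goals can share one fixed-shape product regardless of how many tiles each is actually missing.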
# 4.4.1. The Resulting Network N and the Training Objective
The sizes and meanings of the network $N$'s outputs and the three groups of parameters are reported in Table 1.
The learning objectives depend on the form of the target agents $\Psi$ and the problem context. Since we model action selection as a classification problem, we use the cross-entropy (CE) loss between the output of $N$ and the label $Y$. The label can be soft or hard depending on whether the target agent gives probability distributions. Since we cannot access human players' action distributions, we use actions as hard ground-truth labels for all target agents. Additionally, an L2-regularization term, $(\Theta_{fan} - \mathrm{abs}(\Theta_{fan}))^2$, is added to penalize negative values of fan preferences and keep the heuristic parameters reasonable.
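The objective can be sketched as follows (NumPy rather than the original PyTorch; the function name and the regularization coefficient `reg` are illustrative choices, not the paper's values):

```python
import numpy as np

def mxplainer_loss(logits, label, theta_fan, reg=1e-2):
    """Cross-entropy on N's action logits plus the negativity penalty.

    (theta - |theta|) is zero for non-negative weights and 2*theta for
    negative ones, so squaring it penalizes only negative fan preferences.
    """
    z = logits - logits.max()                    # stable log-softmax
    log_softmax = z - np.log(np.exp(z).sum())
    ce = -log_softmax[label]                     # hard ground-truth label
    penalty = np.sum((theta_fan - np.abs(theta_fan)) ** 2)
    return ce + reg * penalty

# with two equal logits the CE term is log(2); non-negative weights add nothing
l = mxplainer_loss(np.zeros(2), 0, np.array([1.0, 2.0]))
```

The penalty leaves non-negative weights untouched, so gradient descent is free to shape positive preferences while being pushed away from hard-to-interpret negative ones.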
After supervised training, the learned parameters $\Theta$ can be directly filled back into CC. Since CC is equivalent to $N$, $SC|CC(\Theta)$ inherits the characteristics of $SC|N(\Theta)$, approximating the target agent $\Psi$. By analyzing the parameterized agent $SC|CC(\Theta)$, we can study the parameters to learn the comparative characteristics between agents and gain insights into the deduction process of $\Psi$ through the step-by-step algorithms of Framework $SC|CC$. Since only data pairs are required from the target agents, we can apply Mxplainer to both target AI agents and human players.
# 5. Experiments
We conducted a series of experiments to evaluate the effectiveness of Mxplainer in generating interpretable models and to analyze their explainability. Three different target agents are used in these experiments. The first agent $\psi_1$ is a search-based MCR agent with manually specified characteristics, serving as a baseline. Specifically, this agent only pursues the fan Seven Pairs, and all its actions are designed to reach this goal as efficiently as possible. When multiple tiles are equally good to discard, a predefined fixed order breaks the tie. The second agent $\psi_2$ is the strongest MCR AI from an online game AI platform, Botzone [32]. The third agent $\psi_3$ is a human MCR player from an online Mahjong platform, MahjongSoft.
Around 8,000 and 50,000 games of self-play data are generated for the two AI agents. Around 34,000 games of publicly available data, spanning more than 3 years, are collected from the game website for the single human player. Through supervised learning on these datasets, three sets of parameters $\theta_{1,2,3}$ are learned and filled back into the Search-based Framework $F$ as-is, yielding interpretable white-box agents $\hat{\psi}_{1,2,3}$ with behavior similar to the target agents $\psi_{1,2,3}$. The weights for each agent are selected from three runs by highest validation accuracy. To make weights comparable across different agents, reported weights are normalized as $100 \cdot (\Theta - \Theta_{min}) / \sum(\Theta - \Theta_{min})$.
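The normalization step follows directly from the formula; a minimal sketch:

```python
import numpy as np

def normalize_weights(theta):
    """Normalize learned weights to sum to 100:
    100 * (theta - theta_min) / sum(theta - theta_min)."""
    shifted = theta - theta.min()
    return 100.0 * shifted / shifted.sum()

w = normalize_weights(np.array([1.0, 2.0, 3.0]))
```

Shifting by the minimum pins the least-preferred entry at 0, so normalized weights express relative preference shares rather than raw magnitudes.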
Table 2. A). Defined Order for tiles and their learned weights. B). Sorted weights in the learned parameters $\theta _ { 1 , f a n }$ and their corresponding fans. Learned weights for $\theta _ { 2 , f a n }$ are added for comparison. C). The major fans of $\psi _ { 2 }$ and $\psi _ { 3 }$ with a difference of at least $1 \%$ in historical frequency, which is calculated from their game history data.
Since it is common in MCR that multiple redundant tiles can be discarded in any order, it is difficult to achieve high top-1 accuracy on the validation sets, especially for $\psi_2$ and $\psi_3$, which have no preference over the order of tiles to discard. As a reference for the similarity, $\hat{\psi}_{1,2,3}$ achieve top-3 accuracies of $97.15\%$, $92.75\%$, and $90.12\%$ on the data of $\psi_{1,2,3}$, respectively.
# 5.1. Correlation between Behaviors and Parameters
In the Search-based Framework $F$, the parameters $\Theta$ are designed to be strategy-related features that should take different values for different target agents. Here we analyze the correlation between the learned parameters and the target agents' behavior to show that Mxplainer can extract high-level characteristics of agents and explain their behavior.
# Preference of fans to choose
$\Theta_{fan}$ stands for the relative preference of agents to win with each fan. For the baseline target $\psi_1$, which only chooses the fan Seven Pairs, the top learned values of $\theta_{1,fan}$ are shown in Table 2-B. We also include the $\theta_{2,fan}$ values for comparison, demonstrating the differences in learned weights between the specialized Seven Pairs agent and other agents. The weight for Seven Pairs is much higher than those of other fans, showing the strong preference of $\hat{\psi}_1$ for this fan. The fans with the second and third largest weights are patterns similar to Seven Pairs, indicating that $\hat{\psi}_1$ also tends to approach them during gameplay based on its learned action choices, though it eventually ends with Seven Pairs in $100\%$ of the games. For the targets $\psi_2$ and $\psi_3$ with unknown fan preferences, we count the frequency of occurrence of each major fan in their winning hands as an indication of their preferences and compare the two agents on both the frequency and the learned weight of each fan.
Table 2-C shows all the major fans with a difference of at least $1\%$ in frequency. Except for the last row, the data shows a significant positive correlation between the preferences of the target agents $\psi$ and the learned $\theta_{fan}$.
# Preference of tiles to discard
$\Theta_{tile}$ stands for the relative preference of discarding each tile, especially when multiple redundant tiles are valued similarly. Since $\psi_2$ and $\psi_3$ show no apparent preference in discarding tiles, we focus on the analysis of $\psi_1$, which is constructed with a fixed tile preference. We find that the learned weights almost form a monotonic sequence, with only three exceptions out of 34 tiles, showing a strong correlation between the learned parameters and the tile preferences of the target agent. Table 2-A shows the first few entries of $\theta_{1,tile}$.
# 5.2. Manipulation of Behaviors by Parameters
Previous experiments have shown that greater frequencies of fans in an agent's game history correspond to elevated learned preferences. In this experiment, we illustrate the converse: artificially augmenting fan preferences makes the modified agents display elevated frequencies of the corresponding fans. We adjust the parameters $\theta_{2,fan}$ within $\hat{\psi}_2$ by multiplying the weight assigned to the All Types fan by a factor of 10. This yields a new agent, $\psi_2'$, which is expected to exhibit a stronger inclination towards selecting this fan.
We collect roughly 8,000 self-play games each from $\psi_2$ and $\psi_2'$ and determine the frequency with which each fan appears in their winning hands. The data indicates that among all the fans, only the All Types fan undergoes a frequency change exceeding $1\%$: its frequency rises from $2.59\%$ to $5.76\%$, an increase of $3.17\%$, while the frequencies of all other major fans change by less than $1\%$. Given that Mahjong is characterized by a high level of randomness in both tile dealing and tile drawing, adjusting the fan-preference parameters can only bring about a moderate alteration in the actual behavior patterns of the target agents. In conjunction with the findings of the previous experiments, we conclude that the parameters within Mxplainer are significantly correlated with the behaviors of agents, and that analyzing these parameters lets us discern the agents' preferences and high-level behavioral characteristics.
Table 3. A). An example game state at the beginning of the game where no tiles have been discarded by other players. B). Some proposed goals for state $s$ by $\hat{\psi}_2$ and the estimated win probabilities. "H&K" is an abbreviation for Honors and Knitted due to space constraints.
# 5.3. Interpretation of Deduction Process
In this subsection, we analyze the deduction process of the Search-based Framework $F$ on an example game state by tracing the intermediate results of $\hat { \psi } _ { 2 }$ to demonstrate the local explainability of Mxplainer agents in decision-making.
The selected game state $S$ is at the beginning of a game with no tiles discarded yet, and the player needs to choose one to discard, as shown in Table 3-A. The target black-box agent $\psi_2$ selects the tile
Table 4. A). Comparison between parameter size and resulting accuracies. B). Comparison between different methods and Accuracies.
B9 as the optimal choice to discard for unknown reasons. However, analyses of the execution of the white-box agent $\hat { \psi } _ { 2 }$ explain the choice.
The Search Component SC proposes 64 possible goals for $s$, and Table 3-B shows a few goals representative of different major fans. With the fitted $\theta_2$, Algorithm 1 produces an estimated win rate for each goal $G$, listed under the "Win Rate" column of Table 3-B. We observe that although B9 is required in goals such as Pure Straight, it is not required for many goals with higher estimated win rates, such as Knitted Straight.
Following Algorithm 2, we accumulate the win rates for tiles in Redundant Tiles $R$ . A higher value of a tile indicates a higher win rate if the tile is discarded, and B9 turns out to have a higher value than other tiles by a large margin, which is consistent with the observed decision of $\psi _ { 2 }$ .
This section merely demonstrates an analysis of action selection within the context of a simple Mahjong state. However, such analyses can be readily extended to other complex states. The Search Component SC consistently takes in information and puts forward the top 64 reachable goals, ranked according to distances. The Calculation Component CC calculates the win rate for each goal using the learned parameters. Finally, an action is chosen based on these win rates.
Without Mxplainer, people can only observe the actions of black-box Mahjong agents but cannot understand how the decisions are made. Our experiments show that Mxplainer's fitted parameters closely approximate and mimic target agents' behaviors. By examining the fitted parameters and Mxplainer's calculation processes, experts are able to interpret the considerations of black-box agents that lead to their actions.
# 6. Discussion
In an effort to boost the explainability of Mxplainer, we have minimized the parameter size of the Small Network $N$. Nevertheless, this design might compromise its expressiveness and decrease the accuracy of the approximated agents. Consequently, we further investigate the impact of the parameter size of the Small Network $N$ on the approximation of the Target Agent $\Psi$. Specifically, we augment the parameter size and train networks $\hat{\psi}_2'$ and $\hat{\psi}_2''$ to approximate the identical Target Agent $\psi_2$. We do not elaborate on the modifications to the network structures here. The parameter sizes and their respective top-3 accuracies are presented in Table 4-A.
We observe that by increasing the parameter size, Mxplainer can approximate the Target Agent $\Psi$ with greater accuracy. Nevertheless, the drawback lies in the fact that the larger the number of parameters in the Small Network $N$ , the more challenging it becomes to explain the actions and the meaning of the parameters to users. Although the approximated agents cannot replicate the exact actions of the original black-box agents, they can account for their actions in most scenarios, as demonstrated by previous experiments.
We also conduct a comparison of the effects of different methods. Specifically, we construct a random forest consisting of 100 decision trees and employ a pure neural network to learn the behavior policy of $\psi _ { 2 }$ through behavior cloning. The results are presented in Table 4-B. Although decision trees are inherently self-explanatory, their low accuracies render them unsuitable for Mahjong, which involves numerous OOD states. The accuracy of Mxplainer is not notably lower than that of traditional neural networks with sufficient expressivity, but it has the advantage of being able to explain the reasoning underlying the actions.
The explanation capability of Mxplainer is closely related to the degree to which it approximates black-box agents. We believe the approximation power of Mxplainer can still be improved. Currently, we use uniform distribution as the base probability to model the chance of drawing and melding tiles from other players. Although this straightforward scheme yields favorable results, we are of the opinion that there is potential for improvement. This could be achieved by transforming it into another learnable function and incorporating more state information, such as the action history and strategies of other players.
With Mxplainer, we can compare the differences between agents. By comparing the fitted parameters in parallel, we can analyze the characteristics of different agents. For example, we can easily observe that the black-box AI agent $\psi_2$ places a much higher weight on Thirteen Orphans, Seven Pairs, and Lesser H&K Tiles. In contrast, the human player $\psi_3$ has a significant inclination towards making Melded Hand, and these observations are indeed backed by their historical wins in Supplemental Material B.
Our proposed approach has a unique advantage in quantifying and tuning weights for custom-defined task-related features in areas where interpretability and performance are crucial. While Mxplainer is specifically designed for Mahjong, we hypothesize that its unique approach and XAI techniques may be applicable to other applications. In fact, we experimentally applied our proposed framework to two examples: Mountain Car from Gym [28] and Blackjack [33]. Both examples, which can be found in Appendix I, confirmed the effectiveness of our proposed method. However, the scope of application of our method still requires further study.

# Abstract

People need to internalize the skills of AI agents to improve their own capabilities. Our paper focuses on Mahjong, a multiplayer game involving imperfect information and requiring effective long-term decision-making amidst randomness and hidden information. Through the efforts of AI researchers, several impressive Mahjong AI agents have already achieved performance levels comparable to those of professional human players; however, these agents are often treated as black boxes from which few insights can be gleaned. This paper introduces Mxplainer, a parameterized search algorithm that can be converted into an equivalent neural network to learn the parameters of black-box agents. Experiments conducted on AI and human player data demonstrate that the learned parameters provide human-understandable insights into these agents' characteristics and play styles. In addition to analyzing the learned parameters, we also showcase how our search-based framework can locally explain the decision-making processes of black-box agents for most Mahjong game states.
# 1 Introduction
A foundational insight in linguistics research is that applying minimal changes to a sentence can render it entirely acceptable or unacceptable to native speakers (Chomsky, 1965). Minimal pairs, as illustrated in Example (1), are a widely used diagnostic tool in linguistics.

(1) a. People in Istanbul love cats.
    b. \* People in Istanbul loves cats.

Minimal pairs have been a cornerstone of linguistic analysis for decades, and in recent years they have become a vital tool for the linguistic evaluation of language models (LMs). Warstadt et al. (2020) published the first large-scale English Benchmark of Linguistic Minimal Pairs (BLiMP) in an effort to systematically evaluate the linguistic knowledge of language models, and since then various benchmarks have been introduced for other languages.

We contribute to this growing collection by introducing the first Turkish benchmark of linguistic minimal pairs. TurBLiMP enriches the typological diversity of available linguistic evaluation benchmarks by incorporating a morphologically rich agglutinative language with highly flexible word order. While Turkish and other agglutinative languages like Finnish have been the object of several studies focusing on word-level morphology (Ismayilzada et al., 2025), the effects of word order flexibility and morphological complexity on the robustness of sentence-level grammatical judgments have not been studied in detail before. We fill this gap by introducing two sets of experimental minimal pair paradigms.

Our evaluation shows that even top-performing LMs suffer performance losses under word order or subordination manipulations, revealing sensitivities that would otherwise go undetected. Compared to the acceptability judgments we collected from native speakers, baseline tests across 13 models and 16 Turkish phenomena demonstrate that large LMs can struggle with linguistic tasks where humans perform reliably. By providing this resource, we aim to facilitate linguistically motivated NLP research and contribute a high-quality dataset for linguists and NLP researchers.
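Benchmarks in the BLiMP family are typically scored by checking whether a model assigns a higher (summed token log-) probability to the acceptable member of each pair. A minimal sketch with a toy stand-in scorer (the `toy_log_prob` "model" below is a hypothetical placeholder, not a real LM):

```python
def score(sentence, log_prob):
    """Sum of per-token log-probabilities under the model."""
    return sum(log_prob(tok) for tok in sentence.split())

def accuracy(pairs, log_prob):
    """Fraction of minimal pairs where the acceptable sentence wins."""
    wins = sum(score(good, log_prob) > score(bad, log_prob)
               for good, bad in pairs)
    return wins / len(pairs)

# toy stand-in: the agreement-violating form gets a lower log-probability
freq = {"love": -1.0, "loves": -4.0}
def toy_log_prob(tok):
    return freq.get(tok, -2.0)

pairs = [("People in Istanbul love cats .",
          "People in Istanbul loves cats .")]
acc = accuracy(pairs, toy_log_prob)
```

Because both sentences differ in only one token, the comparison isolates the model's sensitivity to that single grammatical contrast.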
# 2 Minimal Pair Benchmarks
Minimal pairs have played an important role for evaluating the linguistic abilities of language models, targeting phenomena such as subject-verb agreement (Linzen et al., 2016), filler-gap dependencies (Wilcox et al., 2018), and negative polarity items (Jumelet and Hupkes, 2018). Warstadt et al. (2020) then established an English benchmark of 67,000 sentence pairs testing 67 paradigms through automated generation based on linguist-curated templates. This work inspired numerous adaptations for other languages, each employing different benchmark creation strategies. Benchmarks using a similar template-based approach as BLiMP include CLiMP (Chinese, Xiang et al., 2021), ZhoBLiMP (Chinese, Liu et al., 2024), BLiMPNL (Dutch, Suijkerbuijk et al., 2025), and for Basque/Swahili/Hindi by Kryvosheieva and Levy (2025). Another approach is based on modifying Universal Dependency trees, which has been used by SLING (Chinese, Song et al., 2022), RuBLiMP (Russian, Taktasheva et al., 2024), and MultiBLiMP (Jumelet et al., 2025), a multilingual benchmark covering 101 languages. Other approaches include the extraction of minimal pairs from linguistics journals, employed by JBLiMP (Japanese, Someya and Oseki, 2023), manual creation of pairs, as done for Icelandic by Ármannsson et al. (2025), and the usage of LLMs for generating pairs, as done for Tamil and Indonesian by Leong et al. (2023).
Methodological innovations across these benchmarks reveal key trade-offs between scale, linguistic coverage, and data quality. Template-based generation enables large datasets but risks producing unnatural sentences (Vázquez Martínez et al., 2023), while manual extraction from literature or learner corpora ensures quality at the cost of scale. Some of the benchmarks incorporate hybrid approaches and human validation steps to balance these concerns. TurBLiMP too is the result of such hybrid approaches. While creating our benchmark, we developed strategies specifically adapted to the challenges of creating minimal pairs for Turkish.
# 3 Turkish Morphosyntax & NLP
Turkish presents a particularly interesting case for BLiMP-style evaluation due to its flexible word order and rich morphological system. Turkish syntactically licenses all six possible orderings of the main sentence constituents: Subject-Object-Verb (SOV) represents the canonical order, while other permutations introduce subtle pragmatic variations without altering the core meaning of the sentence. As a result, evaluating LMs on a language like Turkish makes it possible to test them for their robustness to different positional patterns or grammatical hierarchies, in a way that is not possible with English and other fixed-order languages that dominate the training material of current LLMs.
Furthermore, Turkish has highly productive agglutinative morphology, whereby words typically consist of several morphemes attached to a root. Speakers can easily produce and understand numerous legitimate but low-frequency word forms through regular morphological processes, yielding substantially larger vocabulary requirements for LMs compared to analytic and fusional languages. Many syntactic phenomena are realized in Turkish through morphology, rather than by separate function words as in English and other Indo-European languages that form a large chunk of the world's highest-resource languages. A salient example is subordination, which largely involves the use of suffixes to nominalize or adverbialize the verb of the embedded clause. For instance, the sentence I know that Elif likes Gaye translates to Elif'in Gaye'yi sevdiğini biliyorum, whose structure can be intuitively conveyed as 'I know the liking of Gaye by Elif'. Here, the nominalized verb 'like' takes an accusative case suffix as the object of 'know', but also a possessive agreement suffix corresponding to the genitive suffix taken by the subordinate subject 'Elif'.
In general, agglutinative languages have been shown to be particularly challenging for neural models (Gerz et al., 2018; Cotterell et al., 2016; Park et al., 2021; Arnett and Bergen, 2025). Focusing on Turkish, Ataman et al. (2017) established that fixed vocabulary constraints combined with suboptimal sub-word segmentation significantly impair neural machine translation performance for agglutinative languages. Ismayilzada et al. (2025) studied LLMs’ ability to produce and systematically understand novel well-formed combinations of morphemes in Turkish and Finnish, and reported limited morphological generalization. These findings suggest that studying flexible-order, morphologically rich languages like Turkish can provide unique insights into the true linguistic capabilities of LMs beyond surface fluency.
# 4 TurBLiMP
The creation of the TurBLiMP benchmark was motivated by the need for a controlled evaluation benchmark that accounts for the unique linguistic properties of Turkish. Some of these properties include flexible word order, morphological richness, optional pro-drop, and syncretism in third-person subject-verb agreement markers. We now provide a brief linguistic background on our minimal pairs.
# 4.1 Phenomena
We consider 16 different grammatical phenomena, some of which are cross-lingually present in other benchmarks, alongside a few language-specific ones such as suspended affixation (see Table 1 for a complete overview with examples).

Table 1: Glossed minimal pairs for each phenomenon in TurBLiMP. The differences are underlined.
ANAPHOR AGREEMENT The anaphoric reflexive pronoun kendi agrees with its referent through number and person inflections. Unacceptable sentences in this category feature inflected forms of kendi with incorrect agreement.
ARGUMENT STRUCTURE (TRANSITIVE) Turkish has a nominative-accusative case marking system where the direct object of a sentence is marked by the accusative case. However, a special subset of verbs assigns lexical case to their objects, deviating from structural case assignment. Unacceptable sentences feature objects with incorrect case endings, such as dative.
ARGUMENT STRUCTURE (DITRANSITIVE) The prototypical Turkish ditransitive construction applies a dative case marker to the indirect object. However, verbs assigning lexical case can deviate from the general trend. Here too, unacceptable sentences feature objects with incorrect case endings.
BINDING Principle B in Binding Theory (Chomsky, 1981) asserts that pronouns must be free in their binding domain, implying that a pronoun cannot refer to another entity in the same immediate clause. Unacceptable sentences are created by replacing an anaphor coreferring with the subject with a pronoun of similar features.
DETERMINERS While determiners are largely optional in Turkish, the indefinite article bir is sometimes required. When a direct object occurs immediately before the verb, its accusative case ending can be omitted. If such an object is modified by a relative clause, the indefinite article must precede the noun head (Arslan-Kechriotis, 2009). Unacceptable sentences in this phenomenon omit the obligatory determiner.
ELLIPSIS This phenomenon deals with a specific type of ellipsis called backward gapping. For coordinated clauses in Turkish, it is possible to omit the verb in the first clause, leading to a gap which is resolved by the verb in the second clause. Turkish only licenses this if both clauses maintain parallel word order (Bozşahin, 2000). Acceptable sentences show the same subject-object order across clauses while unacceptable ones alternate their order.
IRREGULAR FORMS The aorist is an aspect/mood marker with three allomorphs -r, -Ir (high vowel harmony), and -Ar (non-high vowel harmony). While monosyllabic verbs take -Ar, a specific subset of irregular verbs take -Ir (Nakipoğlu et al., 2023). Unacceptable sentences feature an incorrect -Ar form.
ISLAND EFFECTS We focus on a specific type of island constraint in which complex noun phrases are modified by a relative clause containing a wh-phrase. The occurrence of the wh-phrase is only permitted if the wh-phrase is not an adjunct (Çakır, 2016). Acceptable sentences contain argument wh-phrases like who or what, while unacceptable ones contain wh-adjuncts such as how or why.
NOMINALIZATION Turkish extensively uses a derivational process called nominalization, where verbal bases take suffixes (like -DIK, -mA, and others) to form noun phrases. A category of Turkish verbs only selects complement clauses with -DIK, while others only allow -mA (Kornfilt, 2003b). Correspondingly, minimal pairs contain verbs with the correct and incorrect nominalization suffixes.
NPI LICENSING This phenomenon deals with Turkish negative polarity items such as hiç, kimse, hiçbir, hiçbir şey, and asla. NPIs occur in contexts where the predicate is negated. Acceptable sentences either omit the NPI or use placeholder indefinite pronouns, while unacceptable ones feature an NPI with a predicate that is not negated.
PASSIVES Turkish licenses the passivization of intransitive verbs via passive suffixes, creating impersonal (vs. personal) passives. While personal passives permit optional by-phrases to express agents, impersonal passives prohibit them (Özsoy, 2009). Thus, acceptable sentences omit by-phrases, while unacceptable ones include them.
QUANTIFIERS Turkish quantifiers such as her and çoğu can only occur with accusative-marked nouns (Enç, 1991). All minimal pairs for this phenomenon feature direct objects without accusative marking. Unacceptable sentences include a quantifier before the bare noun while acceptable sentences omit it.
RELATIVE CLAUSES Turkish uses participle suffixes -DIK and -An to form object and subject relative clauses (Göksel and Kerslake, 2005). -DIK clauses feature genitive-possessive agreement: the subject takes genitive case and the verb carries possessive agreement. In subject relative clauses with -An, only the object (if present) is case-marked. Minimal pairs target an argument preceding the nominalized verb. Acceptability depends on whether this noun is inflected with a genitive or non-genitive case ending.
SCRAMBLING Turkish shows word order flexibility and allows postverbal scrambling. This means that constituents can appear after the verb in certain contexts. However, local postverbal scrambling from an embedded clause is prohibited (Kornfilt, 2003a). Acceptable sentences position the object before the embedded verb while unacceptable sentences feature them in the opposite order.
SUBJECT AGREEMENT Turkish realizes subject-verb agreement via person/number suffixes. Gender agreement is absent. A notable feature is third-person syncretism. The same verb inflection can indicate either a third-person singular or plural subject. However, a plural-inflected verb cannot co-occur with a singular subject. Unacceptable sentences either involve singular subjects with plural verbs or pronoun mismatches with first/second-person agreement.
SUSPENDED AFFIXATION Suspended affixation refers to a phenomenon where a shared suffix applies to all conjuncts in a coordinated structure, rather than being repeated. Turkish does not allow suspended affixation for predicates inflected only with the past tense suffix -DI (Serova, 2019). Minimal pairs feature two coordinated past-tense clauses. Acceptable sentences inflect both verbs, while unacceptable ones omit inflection on the first.
# 4.2 Benchmark Creation
In the creation of TurBLiMP, we opted for the more labor-intensive process of manually crafting sentences. Ten initial samples for each phenomenon were created entirely manually to establish clear guidelines. This first step ensured that each pair differed only minimally while accurately capturing the targeted grammatical contrasts.
Semi-automatic augmentations To enhance lexical diversity, we then adopted a semi-automated workflow in which a masked Turkish LM, BERTurk (Schweter, 2020), is used to suggest lexical replacements at random positions of each manually created sentence. We verified and adjusted each replacement manually to ensure acceptability. This process yielded 100 samples per phenomenon. In a final fully-automated augmentation step, BERTurk was used to generate a list of contextually appropriate words for replacement (e.g., woman or boy for girl). We use the Turkish morphology pipeline by Akın and Akın (2007) to inflect them with the same morphological features. At the end of this process, our 100 manually validated pairs increase to 1000 pairs per phenomenon. Our three-fold approach balanced scalability with linguistic precision, resulting in a robust benchmark for evaluating Turkish LMs.
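A minimal sketch of this replacement loop; `suggest_replacements` and `verify` are hypothetical stand-ins for BERTurk's fill-mask suggestions and the manual acceptability check, not names from the paper:

```python
import random

def augment_pair(acceptable, unacceptable, suggest_replacements, verify):
    """Derive a new minimal pair by swapping one shared word in both
    sentences.

    `suggest_replacements(tokens, pos)` stands in for a masked LM
    (e.g. BERTurk) proposing context-appropriate words for position
    `pos`; `verify(sentence)` stands in for the manual acceptability
    check. Both callbacks are hypothetical, not from the paper.
    """
    tokens_ok = acceptable.split()
    tokens_bad = unacceptable.split()
    # Only touch positions where the two sentences agree, so the
    # original grammatical contrast is preserved.
    shared = [i for i in range(min(len(tokens_ok), len(tokens_bad)))
              if tokens_ok[i] == tokens_bad[i]]
    if not shared:
        return None
    pos = random.choice(shared)
    for candidate in suggest_replacements(tokens_ok, pos):
        new_ok = " ".join(tokens_ok[:pos] + [candidate] + tokens_ok[pos + 1:])
        new_bad = " ".join(tokens_bad[:pos] + [candidate] + tokens_bad[pos + 1:])
        if verify(new_ok):
            return new_ok, new_bad
    return None  # no verified replacement found
```

In the real workflow the verified replacement would additionally be inflected with the original word's morphological features via the morphology pipeline.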
# 4.3 Experimental Paradigms
We further assess the robustness of LMs’ syntactic abilities by focusing on two salient properties of Turkish: (i) word order flexibility and (ii) subordination through morphological processes, both discussed in Section 3. Word order variations provide a useful framework for testing the effect of word order biases on syntactic competence, extending the types of variations covered by the existing minimal pair benchmarks (Linzen et al., 2016; Mueller et al., 2020). Subordination is a particularly interesting case to study the interplay between syntactic competence and morphological generalization, broadening the scope of current word-level evaluations (Ismayilzada et al., 2025).
We generate word order and subordinating variations for two of the TurBLiMP phenomena (Transitive and Ditransitive Argument Structure) chosen for their flexibility for manipulation. We derive all 6 subject/verb/object orders and 4 different subordination structures for each minimal pair. Complete examples of experimental paradigms and details about how they were created are provided in Appendix A. The experimental paradigms add a total of 2,000 minimal pairs to the 16,000 pairs forming the base TurBLiMP, and considerably extend our benchmark’s utility for investigating controlled linguistic variations.
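Because Turkish case marking disambiguates grammatical roles, the six word order variants of a transitive pair can be generated by permuting the case-marked constituents. A minimal sketch (the example constituents are illustrative, not taken from the benchmark):

```python
from itertools import permutations

def word_order_variants(subject, obj, verb):
    """Return all 6 S/O/V orderings of a case-marked transitive clause,
    keyed by their order label (e.g. 'SOV')."""
    labeled = [("S", subject), ("O", obj), ("V", verb)]
    return {
        "".join(lbl for lbl, _ in perm): " ".join(w for _, w in perm)
        for perm in permutations(labeled)
    }

# Toy example: 'Elif liked Gaye', with accusative marking on the object.
variants = word_order_variants("Elif", "Gaye'yi", "sevdi")
```

The same permutation is applied to both members of a minimal pair, so the grammatical contrast (here, the object's case ending) is preserved across all six orders.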
# 5 Human Acceptability Judgments
To validate our benchmark, we collected acceptability judgments from 30 native Turkish speakers using a 7-point Likert scale (1: completely unacceptable, 7: completely acceptable). While previous BLiMP variants rely on forced-choice tasks for data validation, BLiMP-NL (Suijkerbuijk et al., 2025) collects Likert scale responses to capture the gradient nature of acceptability judgments. We followed their approach to provide a benchmark that allows for fine-grained evaluation of model-human alignment. Our participant pool was mixed, comprising 17 linguistics students and 13 non-linguists. The study was carried out via an anonymous online survey. Appendix B includes a screenshot of survey instructions. Each participant rated 216 sentences spanning 16 linguistic phenomena as well as 20 experimental paradigms. Three acceptable and three unacceptable sentences were included for each grammatical category, and the acceptability conditions were flipped between the two survey versions.
Figure 1: Mean acceptability judgments for 16 TurBLiMP phenomena. Likert scale ratings are transformed to z-scores. Error bars show standard errors of the mean.
Figure 1 reports average acceptability judgments for each phenomenon. Additional participant rating statistics are provided in Appendix C. The responses are first normalized by transforming Likert scores to z-scores. Overall, participants made clear distinctions between acceptable and unacceptable sentences. Some phenomena such as Island Effects, Passives, and Nominalization were less discriminable than others.
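The normalization step, transforming each participant's raw Likert ratings to z-scores before averaging, can be sketched as follows (an illustrative reimplementation, not the authors' analysis code):

```python
from statistics import mean, stdev

def z_scores(ratings):
    """Transform one participant's 7-point Likert ratings to z-scores,
    removing individual differences in how the scale is used."""
    m, s = mean(ratings), stdev(ratings)
    return [(r - m) / s for r in ratings]

# A participant rating three acceptable items high and three
# unacceptable items low (illustrative values).
raw = [7, 6, 7, 2, 1, 2]
z = z_scores(raw)
```

Per-participant z-scoring makes ratings comparable across raters who anchor the 7-point scale differently; the mean difference between acceptable and unacceptable conditions is then computed on the normalized values.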
# 6 Experimental Setup
Monolingual models We employed the Goldfish series (Chang et al., 2024), a family of causal LMs with a fixed architecture trained on varying amounts of training data (5MB, 10MB, 100MB, and 1000MB). Another monolingual model we used is BERTurk (Schweter, 2020), a 185M-parameter Turkish masked LM. With a vocabulary size of 128k, it is the only masked LM in our set of monolingual models. The largest monolingual model that we test is cosmosGPT (Kesgin et al., 2024), a 774M-parameter GPT-2-based model pretrained on Turkish web corpora and books.
Table 2: Accuracy scores of each model across the linguistic phenomena in TurBLiMP. The red-green color gradient indicates performance, ranging from low to high. Significant Pearson correlations with the human judgments $(p < 0.05)$ are indicated in boldface.
Multilingual models The evaluated multilingual models include Qwen 2.5 7B (Yang et al., 2025), Llama 3.1 8B (Meta, 2024), Aya Expanse 8B (Dang et al., 2024), Gemma 2 7B (Team et al., 2024), Gemma 3 4B and 12B (Team et al., 2025), as well as EuroLLM 9B (Martins et al., 2024). For a balanced comparison, we selected models of comparable parameter sizes, ranging from 4B to 12B. Notably, Aya Expanse is the only instruction-tuned variant in our set of multilingual models, supporting 23 languages including Turkish. The Gemma series also boasts multilinguality, with Gemma 3 providing support for over 140 languages. EuroLLM prioritizes the coverage of European languages alongside a few others, including Turkish.
As our evaluation metric for model performance, we computed entire-sequence log probabilities for acceptable and unacceptable sentences in each pair using the minicons library (Misra, 2022; Kauf and Ivanova, 2023). Accuracy scores reflect the proportion of pairs where the model assigned a higher probability to the acceptable sentence. We also report Pearson’s correlation between human and model evaluations, calculated from the difference between average scores of acceptable and unacceptable sentences.
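Both evaluation measures are straightforward to compute from per-sentence log probabilities; the sketch below uses made-up scores and a hand-rolled Pearson correlation rather than the minicons pipeline:

```python
from math import sqrt

def pairwise_accuracy(scores_ok, scores_bad):
    """Fraction of minimal pairs for which the acceptable sentence
    receives the higher sequence log probability."""
    hits = sum(ok > bad for ok, bad in zip(scores_ok, scores_bad))
    return hits / len(scores_ok)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Made-up sequence log probabilities for three minimal pairs.
acc = pairwise_accuracy([-31.2, -25.0, -40.1], [-33.5, -24.1, -44.0])

# Per-phenomenon differences between acceptable/unacceptable means,
# for a model and for human z-scored ratings (illustrative numbers).
r = pearson_r([5.2, 3.1, 0.4, 7.8], [1.9, 1.4, 0.8, 2.2])
```

In the paper's setup the model-side scores are entire-sequence log probabilities from minicons, and the correlation is computed across phenomena between model and human acceptable-minus-unacceptable differences.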
# 7 Results
Model performances across linguistic phenomena are summarized in Table 2. The results reveal that, more often than not, models were able to rate the acceptable sentence higher than its unacceptable counterpart. Some particular phenomena pose challenges for all the models. Ellipsis proved particularly difficult, with scores ranging from 14.9 to 87.5. Other challenging phenomena include Island Effects, Relative Clauses, and Determiners.
Island Effects, Determiners, and Ellipsis also happen to be some of the phenomena with the lowest mean rating difference in acceptability judgments collected from native speakers as seen in Figure 1. We should note that participants preserved a clear acceptability contrast with these phenomena as well. In the case of Ellipsis, considerably low model performances are not consistent with the collected judgments. Though Ellipsis and Scrambling both manipulate word order, models handle Scrambling well. Thus, Ellipsis scores cannot be attributed to general order-manipulation difficulty.
We see that the monolingual models BERTurk and cosmosGPT tend to outperform their multilingual counterparts. Their performance is comparable to the best multilingual models EuroLLM and Gemma 3 12B. BERTurk is the only model that shows a strong cross-phenomenon correlation with human acceptability ratings, as illustrated in Figure 2. This is worth noting given that BERTurk is the only masked language model that we have tested. None of the other models had a statistically significant correlation in either direction.

Figure 2: Mean human acceptability rating differences plotted against mean model log-probability differences for the 16 TurBLiMP phenomena.

Although SOV is the canonical word order in Turkish, Slobin and Bever (1982) found that 52% of utterances in their spontaneous adult speech corpus deviate from this order. Similarly, Türk et al. (2022) reported that only 59.5% of sentences in the BOUN Universal Dependencies Treebank follow SOV. Notably, they identified two different word orders as the second most frequent, highlighting how Turkish word order patterns can vary largely between spoken and written language. Both studies, however, agree that VOS is the least attested.
Multilingual models generally show better performance with increasing model sizes, but exceptions exist. Gemma 3 4B outperforms Gemma 2 8B, and EuroLLM 9B slightly surpasses Gemma 3 12B. The superior performance of EuroLLM 9B over the same-sized Gemma 2 9B may stem from better distribution of training data across languages.
Finally, the Goldfish model series reveals the effect of training data size on performance. Models with larger training data sizes typically achieve better performance, though some counter-intuitive patterns emerge near random-chance levels. While more data generally improves learning, this pattern does not hold when acceptable sentences are consistently shorter than unacceptable ones.
# 7.1 Effect of Word Order
Our word order paradigm results for the best monolingual (BERTurk) and multilingual (EuroLLM) models are illustrated in Table 3. By manipulating minimal pairs for the Transitive and Ditransitive Argument Structure phenomena, we examine how different word orders affect performance.
Native-speaker acceptability judgments reflect that SOV had the highest mean rating difference for both transitive and ditransitive sentences, in line with spoken and written corpus frequencies. The second-highest mean acceptability rating difference for transitive paradigms was the SVO word order, while it was OVS for the ditransitive ones. These are also the second-most-frequent word orders reported by Slobin and Bever (1982) and Türk et al. (2022) respectively. In transitive sentence ratings, VOS is not found to be the most challenging word order. This suggests that a rare word order does not inherently hinder people’s ability to identify acceptable sentences. Speakers seem to tolerate non-canonical word orders more readily in transitives than in ditransitives. One interpretation may be that case differences are easier to spot in transitive sentences due to fewer arguments.
Table 3: Word order performance comparison between human judgments and best models. The white-blue gradient represents mean acceptability differences (low to high) for each row, while the white-yellow gradient reflects corpus frequency.
We see the opposite trend for model evaluations with EuroLLM being particularly sensitive to non-canonical word orders in transitive sentences. BERTurk remains robust to all word orders, showing only a pronounced drop for the rare VOS paradigm in the ditransitive condition. For both transitive and ditransitive sentences, models show high mean log probability differences on OVS word orders. This suggests that model performances align more closely with word order statistics from the BOUN treebank than with those from the spoken language corpus by Slobin and Bever (1982).
# 7.2 Effect of Subordination
Table 4 displays human and model performance on four subordination paradigms compared to a non-subordinated baseline. In Turkish, subordinate clauses can be finite or non-finite. However, finite subordinate clauses are much less frequent than non-finite ones (Göksel and Kerslake, 2005). For non-finite subordination, we consider three different subordinating suffixes: -DIK, -(y)IncA, and -(y)ken. -DIK forms nominal subordinate clauses while the latter two form adverbial ones.
Table 4: Subordination performance comparison between human judgments and best models.
The acceptability judgment task appears to be easier in non-finite -DIK subordinates than in finite ones, consistent with finite clauses’ lower frequency. While -DIK’s mean difference nearly matches the baseline in transitives, it shows a decline for ditransitives. Among non-finite structures, -(y)IncA and -(y)ken prove harder than -DIK, suggesting that adverbial clauses pose greater challenges. However, performance deficits may also reflect semantic incongruities from augmentation. Some verb roots may conflict with the aspectual property of the adverbial markers. Therefore, we cannot reliably claim inherent difficulty in adverbial clauses.
With human judgment patterns established, we evaluate model performance. EuroLLM’s -DIK performance shows a drop from baseline in transitive sentences. BERTurk mirrors human trends more closely, exhibiting a greater decline in ditransitives. Both models struggle more with finite subordination than -DIK, though EuroLLM shows a sharper contrast. Compared to nominal subordination, both models show smaller mean differences with adverbial clauses. Overall, we observe that models show sensitivity to different subordination structures.

Abstract: We introduce TurBLiMP, the first Turkish benchmark of linguistic minimal pairs, designed to evaluate the linguistic abilities of monolingual and multilingual language models (LMs). Covering 16 linguistic phenomena with 1000 minimal pairs each, TurBLiMP fills an important gap in linguistic evaluation resources for Turkish. In designing the benchmark, we give extra attention to two properties of Turkish that remain understudied in current syntactic evaluations of LMs, namely word order flexibility and subordination through morphological processes. Our experiments on a wide range of LMs and a newly collected set of human acceptability judgments reveal that even cutting-edge Large LMs still struggle with grammatical phenomena that are not challenging for humans, and may also exhibit different sensitivities to word order and morphological complexity compared to humans.

Category: cs.CL
# 1 Introduction
Since the introduction of the Transformer architecture [36], language modeling has undergone a paradigm shift, enabling the development of models with unprecedented scale and performance. However, the resulting Large Language Models (LLMs), often comprising hundreds of billions of parameters, pose significant challenges in terms of training efficiency, storage requirements, and inference cost. For example, storing model weights alone can require hundreds of gigabytes of memory, not accounting for the additional overhead during training and deployment. These limitations are especially pronounced when LLMs are applied to narrower tasks than those they were originally trained for, motivating the need for model compression and adaptation. Moreover, when models exhibit undesirable behaviors, retraining or even fine-tuning can be prohibitively expensive – highlighting the need for efficient, post-hoc tools for model inspection and targeted modification.
Figure 1: Overview of the framework. (a) Model compression: relevance $R^{\mathrm{General}}$ is estimated on reference samples for a general task (e.g., C4, Wikipedia) and the least relevant components are pruned, yielding a compressed language model. (b) Circuit discovery: relevance is estimated on reference samples for a specific task (e.g., toxic responses) and pruning the least relevant components exposes task-specific circuits (subgraphs). (c) Suppression of undesired behavior: relevances $R^{\mathrm{Undesired}}$ and $R^{\mathrm{General}}$ are estimated on samples for both the undesired and the general task, and pruning the least relevant components via the differential relevance $R^{\mathrm{diff}} = R^{\mathrm{General}} - R^{\mathrm{Undesired}}$ yields a corrected model (e.g., detoxified).
To address the efficiency challenges of LLMs, two widely studied approaches are pruning and quantization. Pruning removes parameters that contribute little to the model’s predictions, thereby increasing sparsity and reducing memory and compute demands. Successful quantization approaches reduce the precision of model weights (e.g., from 32-bit floats to 8-bit integers), lowering the storage and computational footprint without significantly affecting performance. Early foundational works in pruning [17, 14] propose using gradients to identify and eliminate irrelevant parameters. Recent pruning techniques tailored to LLMs [38, 16, 18, 31, 39] focus on structural sparsity, per-layer attribution scoring, and low-rank approximations to reduce model size while maintaining performance. In this paper, we focus on pruning, specifically targeting parameters that are irrelevant to the model’s inference process.
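As a toy illustration of the quantization idea mentioned above (not a technique from this paper), symmetric per-tensor 8-bit quantization maps float weights to integers through a single scale factor:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: floats -> int8 codes plus a
    scale. Dequantized values approximate the originals to within half
    a scale step."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

# Illustrative weights; small values collapse toward zero.
w = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Each weight is stored as a single byte plus one shared float scale, a 4x reduction over 32-bit floats; pruning, by contrast, removes parameters outright rather than shrinking their representation.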
Understanding the internal mechanisms of Deep Neural Networks (DNNs) is a central goal of the fields of eXplainable Artificial Intelligence (XAI) and mechanistic interpretability. Among the most widely used tools in this area are attribution methods [4, 30, 32, 21], which provide importance scores for inputs or latent components, enabling the identification and interpretation of input features and internal pathways most relevant to a model’s predictions [6]. Recent works have begun to explore the utility of attribution methods for model compression instead. The works of [37, 40, 15] propose using Layer-wise Relevance Propagation (LRP) [4, 21] for structured pruning, with [37] focusing on attention heads in language Transformers, and [40, 15] targeting vision models. Notably, [15] incorporates AttnLRP [1], an LRP extension for more faithful attribution of Transformer models.
A crucial step in attribution-based pruning [40, 15] is the selection of reference samples, i.e., the input examples used to estimate the importance of model components. This choice strongly influences which parameters are identified as relevant and, consequently, which are retained or pruned. Using a diverse set of general-purpose samples guides the pruning of parameters that contribute minimally across tasks, enabling effective model compression. However, by selecting task-specific reference samples, we can identify task-relevant subgraphs – also named circuits – which reflect the internal pathways responsible for specific behaviors. This capability is of particular interest in mechanistic interpretability [8, 11, 19]. Moreover, this approach enables targeted model editing. By using reference samples that elicit undesired behaviors (e.g., the generation of toxic outputs), we can attribute relevance to responsible components and selectively prune them in a post-hoc manner.
In this work, we propose a unified framework for attribution-guided pruning of LLMs supporting three key applications, as illustrated in Fig. 1: (1) general model compression via unstructured pruning; (2) circuit discovery by extracting parameter-level subgraphs responsible for specific tasks (e.g., indirect object identification); and (3) model correction by identifying and removing circuits associated with undesired behaviors, enabling post-hoc editing with minimal impact on overall performance.
# 2 Related works
Model compression and pruning The large size of LLMs leads to high memory and computational demands. Compression mitigates these issues through techniques such as quantization, which lowers parameter precision [7, 16, 39], and pruning, which removes parameters that contribute little to model performance. Pruning strategies include knowledge distillation [22] or training with low-rank and structured sparsity constraints [38], though these often incur high computational costs. Some methods aim to prune with minimal fine-tuning [18], while others, such as [31], achieve efficient unstructured pruning by identifying low-activation components only using forward-pass statistics on reference samples. Unstructured pruning typically achieves higher sparsity levels, rendering it more effective at reducing model size compared to structured and semi-structured approaches, but is less aligned with current hardware accelerators [43, 31]. In this work, we adopt an unstructured pruning approach inspired by [31], but replace its activation-based heuristics with LRP [4, 21, 1], an attribution method that has shown promise in the structured pruning of vision models [40, 15].
Circuit discovery Understanding LLM behavior is critical for improving safety and reliability, especially in high-stakes applications. Circuit discovery, a central task in mechanistic interpretability, aims to uncover the internal components, such as attention heads and Multilayer Perceptron (MLP) neurons, that drive specific model predictions. Accurately extracting these circuits, however, remains a challenge. Prior methods include Sparse Auto Encoders (SAEs) [19], which require training, and activation patching (Automated Circuit DisCovery (ACDC)) [8], which ablates edges of the computational graph to assess importance but is resource-intensive and threshold-dependent. Alternatives such as Information Flow Routes (IFR) [11] and Edge Attribution Patching (EAP) [33], streamline the process (e.g., by using gradients), but still rely on heuristics or external metrics. We instead propose using LRP for efficient and scalable circuit discovery. LRP assigns relevance scores to model components in a single forward-backward pass, enabling direct extraction of task-relevant subgraphs. By ranking and pruning low-relevance components, LRP supports both structured pruning (of e.g., attention heads, MLP neurons) and unstructured pruning (e.g., individual weights). Unlike token-level methods, our approach operates at the parameter level, naturally aligning with model compression and behavioral control goals.
Model correction DNNs trained on large, imperfect datasets often exhibit undesirable behaviors, such as shortcut learning, biased predictions, or toxic outputs. While data cleaning or fine-tuning can mitigate these issues, such solutions are typically expensive and impractical at scale. Existing methods address this in various ways. To mitigate this in vision models, the authors of [27] fine-tune networks using a modified loss that leverages attribution information, while [29, 3, 24, 10] identify and remove biases by targeting directions in latent spaces. For LLMs, [26, 35] edit model behaviors exploiting specific directions in latent space, but these methods neither offer compression benefits nor avoid fine-tuning. The authors of [23] align models with user intent via extensive fine-tuning, while [9] localize knowledge neurons using gradients for behavioral control. In this work, we propose a more efficient approach using LRP relevance scores to localize the components responsible for undesirable behaviors. By comparing relevance from harmful versus benign reference samples, we isolate and prune the responsible parameters. This yields targeted behavior correction without fine-tuning, preserving performance while reducing model size.
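The differential-relevance idea sketched above, comparing relevance on harmful versus benign reference samples and pruning the components that serve mainly the undesired behavior, can be illustrated with made-up scores (hypothetical helper, not the paper's implementation):

```python
def correction_mask(r_general, r_undesired, q):
    """Zero-mask the q components whose relevance for the undesired
    behavior most outweighs their relevance for the general task,
    i.e. those with the lowest differential relevance
    R_diff = R_general - R_undesired."""
    r_diff = [g - u for g, u in zip(r_general, r_undesired)]
    prune = set(sorted(range(len(r_diff)), key=lambda i: r_diff[i])[:q])
    return [0 if i in prune else 1 for i in range(len(r_diff))]

# Toy relevance scores for 5 components (made-up numbers); component 2
# matters mostly for the undesired behavior and gets pruned.
mask = correction_mask([0.9, 0.4, 0.1, 0.7, 0.5],
                       [0.1, 0.2, 0.8, 0.3, 0.2], q=1)
```

Ranking by the difference rather than by $R^{\mathrm{Undesired}}$ alone spares components that are relevant to both tasks, which is what lets the correction preserve general performance.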
# 3 Methods
We present a general framework for pruning deep models using attribution-based relevance scores. We then introduce Layer-wise Relevance Propagation (LRP), the primary attribution method used in our work. Finally, we define task-specific circuits and describe how their removal enables targeted model correction.
# 3.1 Attribution-based pruning
Building on the framework introduced by [40, 15], let $\Psi = \{\psi_1, \ldots, \psi_p\}$ denote a set of $p$ components (neurons from MLPs, attention heads, or other trainable parameters) that constitute a DNN, and let $\mathcal{X}_{\mathrm{ref}} = \{x_1, x_2, \dots, x_{n_{\mathrm{ref}}}\}$ represent a set of reference samples. For each component $\psi_k \in \Psi$ and reference sample $x_i \in \mathcal{X}_{\mathrm{ref}}$, we define $R_{\psi_k}(x_i)$ as the relevance (or importance) score obtained from an attribution method (i.e., LRP). By aggregating these scores across all reference samples and applying the normalization described in Eq. (1), we obtain $\mathcal{R} = \{\bar{R}_{\psi_1}, \bar{R}_{\psi_2}, \dots, \bar{R}_{\psi_p}\}$, the set of normalized relevance scores for all components.
$$
\bar { R } _ { \psi _ { k } } = \frac { 1 } { n _ { \mathrm { r e f } } } \sum _ { i = 1 } ^ { n _ { \mathrm { r e f } } } R _ { \psi _ { k } } ( x _ { i } ) .
$$
Regardless of the pruning approach, whether structured, fully unstructured, per-layer unstructured, or row-wise unstructured (an overview of these approaches is given in Appendix C), we can order the components by their attributed relevance scores to obtain the set of indices $\{ c \} _ { q }$ of the $q$ least relevant components:
$$
\{ c \} _ { q } = \mathrm { a r g s o r t } ( \mathcal { R } ) _ { 1 , 2 , . . . , q }
$$
Defining $\mathbf { 1 } _ { i \in \{ c \} _ { q } }$ as the indicator function of the condition $i \in \{ c \} _ { q }$, the $q$ least relevant components can be pruned by masking as:
$$
\forall \psi _ { i } \in \Psi : \psi _ { i } \mapsto ( 1 - \mathbf { 1 } _ { i \in \{ c \} _ { q } } ) \psi _ { i }
$$
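The aggregation and masking steps of Eqs. (1)–(3) can be sketched as follows; the relevance values are synthetic stand-ins, not the output of an actual attribution run.

```python
import numpy as np

# Per-sample relevance scores R_{psi_k}(x_i) for p = 8 components and
# n_ref = 3 reference samples (synthetic values for illustration).
R_samples = np.array([
    [0.9, 0.04, 0.5, 0.02, 0.6, 0.01, 0.3, 0.7],
    [0.8, 0.06, 0.3, 0.00, 0.8, 0.03, 0.3, 0.5],
    [1.0, 0.05, 0.4, 0.01, 0.7, 0.02, 0.3, 0.6],
])
R_bar = R_samples.mean(axis=0)     # Eq. (1): average over reference samples

def prune_least_relevant(scores, q):
    """Zero-mask the q components with the lowest relevance (Eqs. 2-3)."""
    c_q = np.argsort(scores)[:q]   # indices of the q least relevant components
    mask = np.ones_like(scores)
    mask[c_q] = 0.0                # implements (1 - indicator(i in {c}_q))
    return mask

mask = prune_least_relevant(R_bar, q=3)
```

Multiplying each component by its mask entry realizes Eq. (3): the three components with the smallest averaged relevance are removed while all others are kept intact.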
# 3.2 Layer-wise Relevance Propagation
Layer-wise Relevance Propagation [4, 21] treats a neural network with $L$ layers as a Directed Acyclic Graph (DAG), such that for a given input $x$ :
$$
f ( x ) = f ^ { L } \circ \cdot \cdot \cdot \circ f ^ { l } \circ f ^ { l - 1 } \circ \cdot \cdot \cdot \circ f ^ { 1 } ( x )
$$
LRP employs a backpropagation process via specific rules designed to allocate “relevance” scores to (both parametric and non-parametric) edges of the DAG, proportional to their contribution to the final prediction. This process begins at the last layer $f ^ { L }$ by initializing the relevance score $R _ { j } ^ { L }$ at output $j$ of $f ^ { L }$ and ultimately redistributing this score to its input variables. To describe the redistribution at a specific layer $l$, let $z _ { i j }$ denote the mapping of input $i$ to output $j$; in linear layers, $z _ { i j } = a _ { i } w _ { i j }$, with $w _ { i j }$ the weight parameters and $a _ { i }$ the activation of neuron $i$. LRP then redistributes the upper-layer relevance $R _ { j } ^ { l }$ towards the lower layer proportionally to the contribution of $z _ { i j }$ to $z _ { j }$, resulting in $R _ { i j } ^ { ( l - 1 , l ) }$, which quantifies the contribution of neuron $i$ at layer $l - 1$ to the relevance of neuron $j$ at layer $l$:
$$
R _ { i j } ^ { ( l - 1 , l ) } = \frac { z _ { i j } } { z _ { j } } R _ { j } ^ { l } .
$$
Aggregating $R _ { i j } ^ { ( l - 1 , l ) }$ over $j$ yields the total relevance $R _ { i } ^ { l - 1 }$ of neuron $i$; summing over all neurons shows that relevance is conserved between layers:
$$
\sum _ { i } R _ { i } ^ { l - 1 } = \sum _ { i , j } R _ { i j } ^ { ( l - 1 , l ) } = \sum _ { j } R _ { j } ^ { l }
$$
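The redistribution rule and its conservation property can be checked numerically on a toy linear layer; the small stabilizer added to the denominator is a common LRP implementation detail and an assumption here, as are all tensor values.

```python
import numpy as np

# A minimal LRP step through one linear layer, with a small stabilizer
# in the denominator to avoid division by zero (illustrative values).
rng = np.random.default_rng(0)
a = rng.random(4)                      # activations a_i at layer l-1
W = rng.standard_normal((4, 3))        # weights w_ij
R_upper = np.array([1.0, 0.5, 0.25])   # relevance R_j^l at layer l

z = a[:, None] * W                     # z_ij = a_i * w_ij
z_j = z.sum(axis=0)                    # pre-activations z_j
stab = 1e-9 * np.where(z_j >= 0, 1.0, -1.0)
R_ij = z / (z_j + stab) * R_upper      # relevance messages R_ij^{(l-1,l)}
R_lower = R_ij.sum(axis=1)             # R_i^{l-1}: aggregate over upper neurons j

# Relevance is (approximately) conserved across the layer:
assert np.isclose(R_lower.sum(), R_upper.sum())
```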
Additional steps for obtaining relevance scores of attention heads and of individual weight parameters are discussed in detail in Appendix B.1.
# 3.3 Circuit discovery
We define a circuit as a subnetwork comprising a subset of model components ${ \mathcal { C } } \subseteq \Psi$ , where $\Psi$ denotes the complete set of components (e.g., weights, neurons, or attention heads). A circuit is extracted by iteratively pruning components $\psi _ { i } \in \Psi$ that contribute least to a specific behavior, as determined by their attribution scores computed on a set of reference samples $\mathcal { X } _ { \mathrm { r e f } }$ designed to capture the behavior of interest. During pruning, we ensure that the task-specific performance metric remains above a predefined threshold. The resulting subset $\mathcal { C }$ represents the essential components responsible for the target behavior under sparsification.
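A minimal sketch of this extraction procedure follows, assuming a caller-supplied `evaluate` callback that returns the task metric for a set of kept component indices (the callback and its signature are illustrative, not part of the paper's tooling):

```python
def extract_circuit(components, relevance, evaluate, threshold):
    """Iteratively prune components in order of increasing relevance,
    keeping a component whenever its removal would drop the task metric
    below `threshold`. Returns the surviving circuit as an index set."""
    order = sorted(range(len(components)), key=lambda i: relevance[i])
    kept = set(range(len(components)))
    for i in order:
        trial = kept - {i}
        if evaluate(trial) >= threshold:  # pruning i keeps performance acceptable
            kept = trial
    return kept  # the circuit C: components essential for the behavior
```

With a toy metric that only requires components 0 and 2, the loop strips everything else and returns exactly that essential subset.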
In contrast, existing methods (e.g., [8, 33, 11]) typically define circuits as computational subgraphs derived from hidden activations across tokens, capturing information flow through the model for specific inputs and producing circuits tied to individual examples. While these approaches reveal detailed behavior for a given input, the resulting circuits are hard to generalize and interpret. Our approach instead identifies circuits directly from the model’s parameters by removing components that are not important for a specific behavior. This yields input-independent circuits that are easier to interpret and more practical for tasks like compression, analysis, and correcting unwanted behaviors.
# 3.4 Model correction
Let $\mathcal { X } _ { \mathrm { r e f } } ^ { \mathrm { G e n e r a l } }$ and $\mathcal { X } _ { \mathrm { r e f } } ^ { \mathrm { U n d e s i r e d } }$ denote the sets of reference samples that respectively capture the model’s general behavior (e.g., Wikipedia and C4) and a specific undesired behavior (e.g., toxicity). Applying the framework described in Sec. 3.1 to each of these sets yields two sets of attribution scores, $\mathcal { R } ^ { \mathrm { G e n e r a l } }$ and $\mathcal { R } ^ { \mathrm { U n d e s i r e d } }$ . We then define a differential attribution set $\mathcal { R } ^ { \mathrm { d i f f } } = \{ \bar { R } _ { \psi _ { 1 } } ^ { \mathrm { d i f f } } , \bar { R } _ { \psi _ { 2 } } ^ { \mathrm { d i f f } } , \dots , \bar { R } _ { \psi _ { p } } ^ { \mathrm { d i f f } } \}$ as:
$$
\bar { R } _ { \psi _ { k } } ^ { \mathrm { d i f f } } = \bar { R } _ { \psi _ { k } } ^ { \mathrm { G e n e r a l } } - \bar { R } _ { \psi _ { k } } ^ { \mathrm { U n d e s i r e d } }
$$
Following the pruning procedure from Eq. (2), we sort $\mathcal { R } ^ { \mathrm { d i f f } }$ in ascending order to prioritize the removal of components most responsible for the undesired behavior while being least important for the model’s general performance. This amounts to isolating and removing the part of the undesired circuit that minimally overlaps with the subgraph governing the model’s general behavior.
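In code, the differential attribution and the ascending sort might look like the following sketch with synthetic scores:

```python
import numpy as np

# Differential attribution: components most responsible for the undesired
# behavior but least important for general behavior get the lowest scores
# and are pruned first. All values are illustrative stand-ins.
R_general   = np.array([0.9, 0.2, 0.8, 0.1])
R_undesired = np.array([0.1, 0.9, 0.2, 0.7])
R_diff = R_general - R_undesired       # approximately [0.8, -0.7, 0.6, -0.6]

prune_order = np.argsort(R_diff)       # ascending: most "harmful" components first
# prune_order -> [1, 3, 2, 0]
```

Components 1 and 3 carry far more relevance for the undesired behavior than for the general one, so they head the pruning queue.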
# 4 Experiments
Our experiments cover the application and evaluation of our framework across the tasks of model compression, circuit discovery, and model correction (see Fig. 1 for an overview of the tasks).
# 4.1 Unstructured pruning for model compression
We begin with model compression, which aims to reduce model size without hurting performance on a general task. For model compression, unstructured pruning is the most widely used approach due to its finer granularity and strong potential to achieve high sparsity with minimal impact on performance. Compared to pruning individual components (e.g., neurons or attention heads), it allows selective removal of individual weights. As detailed in Appendix C, unstructured pruning can be applied in various ways (i.e., row-wise, layer-wise, or global).
Experimental settings We follow the evaluation protocol of [31], applying row-wise unstructured pruning with uniform sparsity across the rows of weight matrices within linear layers. Accordingly, attribution scores are ranked by magnitude per row, rather than across layers or the full model, as prior work [31] found global or per-layer ranking to yield inferior performance. To benchmark our method, we compare against the state-of-the-art Wanda approach [31], which, like LRP, uses reference samples to assign importance scores to parameters without relying on external metrics or thresholds (see Appendix B.2). All experiments are conducted without fine-tuning. We evaluate three models from the Llama family: TinyLlama [41], Llama-2-7B [34], and Llama-3-8B [2]. Performance is assessed using two standard metrics: (1) perplexity on WikiText2 [20], reflecting uncertainty in language modeling, and (2) zero-shot accuracy on a broad suite of tasks from [12], capturing task-specific capabilities. Following [31], we perform attribution using reference samples from the C4 dataset [25] to capture general model behavior. Specifically, we generate three sets of 128 samples (sequence length 2048), each with a different random seed to ensure robustness.
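Row-wise unstructured pruning with uniform per-row sparsity can be sketched as follows; the scores are random stand-ins for LRP or Wanda attributions, not values from an actual run.

```python
import numpy as np

def prune_rowwise(W, scores, sparsity=0.5):
    """Within each row of weight matrix W, zero the fraction `sparsity`
    of entries with the lowest attribution scores (uniform per row)."""
    W = W.copy()
    k = int(W.shape[1] * sparsity)       # entries to remove per row
    for r in range(W.shape[0]):
        idx = np.argsort(scores[r])[:k]  # least relevant entries in this row
        W[r, idx] = 0.0
    return W
```

Because the ranking is local to each row, every row ends up with exactly the target sparsity, unlike global or per-layer ranking where some rows may be pruned far more heavily than others.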
Table 1: Perplexity (PPL) on WikiText2 and mean zero-shot accuracy (ACC) of TinyLlama, Llama2-7B, and Llama3-8B under $50 \%$ sparsity via row-wise unstructured pruning. Errors represent the standard error of the mean. Full performance details for each task are given in Tab. 2 in the Appendix.
In Tab. 1, we apply a $50 \%$ sparsity rate. Higher sparsity rates (e.g., $60 \%$ ) typically degrade model performance strongly (as shown in Fig. 8 in the Appendix). Weight magnitude, a computationally cheap criterion, is not effective for pruning, as larger weights do not necessarily indicate greater contributions to decision-making – for example, a neuron with large weights may remain inactive. Both LRP and Wanda perform well in unstructured pruning and model compression, with Wanda showing a slight advantage. Our analysis in Appendix D.2 details the key methodological differences between the two: Wanda efficiently attributes importance with fewer reference samples, while LRP excels at identifying sparser, task-relevant subgraphs. Notably, LRP becomes more effective when a larger corpus is available for attribution, enabling it to surpass Wanda in performance (also see Fig. 8 in the Appendix).
# 4.2 Discovering task-specific and sparse circuits
Understanding how specific behaviors are implemented within a model requires identifying sparse subgraphs – so-called circuits – that are necessary and sufficient for a given task. In this experiment, we evaluate our framework’s ability to extract such circuits, focusing on the well-established Indirect Object Identification (IOI) task [8], in which the model must resolve the correct indirect object in a sentence. This task is frequently used to benchmark circuit discovery methods due to its well-defined structure and known localization in models. Our goal is to assess whether attribution-based pruning can recover circuits that preserve task behavior while achieving high sparsity – i.e., pruning irrelevant components without affecting performance.
Experimental settings We use the 125M-parameter OPT model [42] and generate six reference sets of 128 IOI-like sequences, following the data generation setup from [8], each sampled with a different random seed. To extract circuits, we compare LRP and Wanda-based pruning and additionally include gradient and neuron activation as baselines, following their use in [11, 8]. All methods are evaluated under two levels of granularity: (1) structured pruning, where entire neurons or attention heads are removed, and (2) unstructured pruning, where individual weight elements – edges between neurons – are pruned based on their attributed relevance.
A circuit is considered high-quality if it (i) includes all task-critical components (whose removal significantly degrades performance) and (ii) excludes irrelevant ones. We assess this via performancesparsity curves, measuring task accuracy across a range of pruning rates. Inspired by the feature perturbation paradigm for attribution evaluation [28], these curves reveal how resilient a circuit is to pruning: a flat or even increasing trend suggests redundancy, while sharp performance drops identify the pruning of essential components.
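A performance-sparsity curve of this kind can be computed with a sketch like the following, where `evaluate` is an assumed callback returning the task metric for a set of kept components:

```python
import numpy as np

def performance_sparsity_curve(scores, evaluate, rates):
    """For each pruning rate, drop that fraction of the least-relevant
    components and record the task metric returned by `evaluate`."""
    order = np.argsort(scores)           # least relevant components first
    curve = []
    for r in rates:
        q = int(len(scores) * r)         # number of components to prune
        kept = set(order[q:].tolist())   # survivors after pruning rate r
        curve.append(evaluate(kept))
    return curve
```

A flat curve over a wide range of rates indicates redundancy (a sparse circuit suffices), while a sharp drop marks the point where essential circuit components start being removed.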
As shown in Fig. 2, relevance scores from LRP and Wanda produce significantly sparser parameter-level IOI circuits compared to gradient and neuron activations. These results align with [15], which shows that Integrated Gradients (based on averaging gradients) [32] struggles with attributing latent components due to noisy signals – an issue affecting gradients in general [5]. Further results in Appendix E.1 and Appendix E.2 indicate that Wanda excels in row-wise unstructured pruning, while LRP and gradient achieve superior results with globally unstructured pruning. However, under their optimal settings (Fig. 2), LRP consistently discovers sparser circuits, supporting our analysis in Appendix D.2, where LRP is shown to better isolate task-relevant subgraphs. Moreover, Wanda is inherently limited in attributing components that involve multiple weights (e.g., individual attention heads; see Appendix B.2). This limitation arises from Wanda’s implementation, where the attribution process relies on assessing weight values and activations directly.
Figure 2: a) IOI circuits are identified at the edge level – weight elements – within the linear layers of the OPT model, specifically in the up and down projection layers of the MLP blocks (fc1 and fc2). For Wanda, row-wise unstructured pruning is applied. In contrast, for LRP and gradient, we perform global sorting of components across all layers rather than within each row. b) IOI circuits extracted within neurons of MLPs or attention heads via structured pruning generally exhibit lower sparsity compared to unstructured pruning. The shaded region indicates the mean $\pm$ standard deviation.
# 4.3 Model correction by suppressing harmful circuits
This section addresses part c of Fig. 1 by combining circuit discovery and model compression to suppress harmful behaviors in the OPT model. Controlling model behavior is crucial for ensuring safety and trustworthiness, especially in sensitive applications where models may generate toxic, biased, or harmful content. Undesired behaviors in LLMs extend beyond toxic outputs and can include repetitive text generation, where models produce the same token or short sequences repeatedly. Such repetitions degrade response quality and user experience, making their mitigation critical.
Experimental settings We here focus on toxic behavior and repetitive text generation. For toxic behavior, we use the RealToxicityPrompts dataset [13], which provides prompts known to trigger toxic responses, including profanity, gender bias, racism, and other harmful content. To quantify the level of toxicity, we use the Perspective API, which assigns a scalar value $s \in [ 0 , 1 ]$ to each model response (higher scores indicate greater toxicity). We construct $\mathcal { X } _ { \mathrm { r e f } } ^ { \mathrm { T o x i c } }$ using 93 prompts that generate highly toxic responses $( s \geq 0 . 9 )$ . For text repetition, we construct a set of 53 prompts that consistently trigger repetition, measured by the Response Uniqueness Ratio (RUR) $( r \leq 0 . 5 )$ , forming $\mathcal { X } _ { \mathrm { r e f } } ^ { \mathrm { R e p e t i t i v e } }$ (see Appendix F.2 for more details). For $\mathcal { X } _ { \mathrm { r e f } } ^ { \mathrm { G e n e r a l } }$ , we use 128 randomly selected prompts from the C4 dataset, similar to those in Sec. 4.1. We hypothesize that a subset of model components (a circuit) is responsible for each individual undesired behavior. Our objective is to identify and prune these components, ensuring they are relevant to the specific behavior (i.e., via $\bar { R } ^ { \mathrm { T o x i c } }$ or $\bar { R } ^ { \mathrm { R e p e t i t i v e } } )$ but have minimal relevance to general tasks (via $\bar { R } ^ { \mathrm { G e n e r a l } }$ ). This avoids degrading overall model performance. Similar to Sec. 4.2, we compare LRP, Wanda, and gradient for behavior suppression at multiple levels of granularity: structured pruning (e.g., removing neurons) and unstructured pruning (e.g., removing individual weight elements, or edges between neurons).
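The exact RUR formula is not spelled out in this excerpt; the following sketch assumes it is the ratio of distinct tokens to total tokens in a response, so heavy repetition yields a low score.

```python
def response_uniqueness_ratio(response: str) -> float:
    """Assumed RUR: fraction of distinct whitespace-separated tokens.
    A degenerate, repetitive response scores close to 0; a response
    with no repeated tokens scores 1."""
    tokens = response.split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)
```

Under this definition, a response like "stop stop stop stop" scores 0.25, well below the $r \leq 0.5$ selection threshold used to build the repetitive reference set.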
Our model improvement results, shown in Fig. 3, reveal that removing just 100 ($\approx 0 . 3 \%$ of total) neurons from the fc1 layers, using LRP in particular, significantly lowers the toxicity level of harmful responses. Notably, this detoxification is achieved without degrading general model performance, as measured by perplexity on WikiText2. Additional results on other MLP layers and at various pruning granularities, detailed in Fig. 13 and Fig. 14 in the Appendix, consistently confirm the ability of our method to localize and prune toxic components without performance loss. Results shown in Fig. 4 (with further examples in Fig. 15 and Fig. 16 in the Appendix) illustrate effective suppression of repetitive text generation without compromising general model performance. We focus on moderate sparsity rates, based on the hypothesis that undesired behaviors are encoded in a small subset of parameters. Higher sparsity rates caused significant performance drops, while very low rates yielded minimal behavioral changes, indicating insufficient pruning. This supports targeting a balanced range where harmful behavior can be mitigated without compromising overall performance.
Figure 3: a) Mean toxicity score and perplexity on Wikipedia, as a function of the number of neurons pruned from fc1; b) behavior change of samples triggering toxic responses; c) example prompts with original and corrected responses.
Figure 4: a) Mean Response Uniqueness Ratio and perplexity on Wikipedia, as a function of the number of edges pruned from fc1; b) behavior change of samples triggering repetitive responses; c) example prompts with original and corrected responses.
Across both behavior correction tasks, the qualitative effects of pruning with different attribution methods are illustrated in Fig. 5. While Wanda and gradient offer partial improvements and help maintain model performance in certain configurations, LRP enables more reliable identification and
Figure 5: Example prompts and the corresponding responses after correction with gradient, Wanda, and LRP, for a) toxicity and b) repetitive text generation.
mitigation of harmful behaviors, demonstrating the generalizability of our method when a proper attribution method is incorporated. Unlike fine-tuning, which is computationally intensive and risks altering general model capabilities, our pruning approach directly removes harmful parameters while preserving general model behavior, making it a lightweight yet effective solution.
Abstract: Large Language Models (LLMs) are central to many contemporary AI applications, yet their extensive parameter counts pose significant challenges for deployment in memory- and compute-constrained environments. Recent works in eXplainable AI (XAI), particularly on attribution methods, suggest that interpretability can also enable model compression by identifying and removing components irrelevant to inference. In this paper, we leverage Layer-wise Relevance Propagation (LRP) to perform attribution-guided pruning of LLMs. While LRP has shown promise in structured pruning for vision models, we extend it to unstructured pruning in LLMs and demonstrate that it can substantially reduce model size with minimal performance loss. Our method is especially effective in extracting task-relevant subgraphs – so-called “circuits” – which can represent core functions (e.g., indirect object identification). Building on this, we introduce a technique for model correction, by selectively removing circuits responsible for spurious behaviors (e.g., toxic outputs). All in all, we gather these techniques as a uniform holistic framework and showcase its effectiveness and limitations through extensive experiments for compression, circuit discovery and model correction on Llama and OPT models, highlighting its potential for improving both model efficiency and safety. Our code is publicly available at https://github.com/erfanhatefi/SparC3.
# 1 Introduction
Serverless NoSQL databases have emerged as pivotal technologies in cloud-native environments, supporting large-scale, highly available applications. These systems offer elastic and flexible data storage solutions without the need for infrastructure management, effectively meeting the demands of modern applications that require rapid deployment and significant elasticity. Cloud providers have utilized multi-tenant architectures in their serverless NoSQL databases. By co-locating different tenants within the same resource pool and sharing resources [6, 16], this approach has shown substantial potential in maximizing elasticity and enhancing resource utilization. Several multi-tenant NoSQL databases have been deployed in production environments, including Amazon DynamoDB [12], Microsoft CosmosDB [2], and Google Firestore [19].
In large-scale cloud environments, a well-implemented multi-tenant database must address the challenges arising from diverse workloads and dynamically evolving data requests. For instance, ByteDance operates across a broad array of business domains, such as e-commerce, search, social media, and AI services. Similar to other large internet corporations, each of ByteDance’s domains demonstrates significant workload diversity, characterized by differing requirements for resources such as throughput, storage, and cache hit ratios across different business scenarios. Additionally, workload dynamism is reflected in rapid changes in resource consumption by tenants, such as throughput surges and sharp drops in cache hit ratios. We will analyze these aspects in detail in Section 2.
As a result, to be practical in such a large-scale cloud environment, a multi-tenant NoSQL serverless database must fulfill a diverse range of roles—serving as a high-speed cache, a large-capacity persistent storage, and a foundational layer for other systems—while also meeting immense performance requirements. For example, at ByteDance, we must manage a total peak QPS (queries per second) exceeding 13 billion and storage capacities surpassing 1 exabyte (EB). For individual tenants, the maximum QPS can reach 450 million, and the highest storage capacity exceeds 11 petabytes (PB). After careful research, we concluded that existing multi-tenant NoSQL databases may not adequately meet our business needs due to their insufficient traffic capacities and designs intended for limited scenarios. We will elaborate on this in Section 8. To effectively manage these highly diverse and dynamic workloads, we developed ABase, a multi-tenant NoSQL database system at ByteDance. In the course of designing, implementing, and maintaining ABase, we have identified three unique yet significant challenges that are typically encountered in multi-tenant NoSQL systems within large-scale cloud environments:
Challenge 1: In high-speed caching scenarios, tenants require frequent access to recently-updated data with low latency. ABase incorporates both proxy and data node caches to satisfy this need. A request hitting the cache significantly alters both the execution process and resource consumption. However, exactly predicting whether a request will access the cache is challenging, introducing uncertainty and complexity into the performance isolation mechanism. For example, requests that hit the proxy cache are directly returned without entering the data node, while those that hit the data node cache consume only CPU and memory resources, without disk I/O resources. This necessitates the systematic integration of caching considerations into the isolation mechanism. Previous multi-tenant NoSQL databases [2, 12, 19] neither discussed the impact of caching on isolation nor proposed cache-aware isolation mechanisms.
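One way to fold cache behavior into a request cost, sketched under assumed per-tier costs (ABase's actual request-unit formula is not given in this excerpt), is to take an expectation over where a request is served:

```python
# Hypothetical per-tier costs in request units (RUs): a proxy-cache hit
# is cheapest, a data node cache hit costs CPU/memory, and a disk read
# is most expensive. All constants are illustrative assumptions.
def request_units(p_proxy_hit, p_node_hit,
                  cost_proxy=0.1, cost_mem=0.3, cost_disk=1.0):
    """Expected RU cost of a request given cache hit probabilities."""
    p_disk = 1.0 - p_proxy_hit - p_node_hit   # request falls through to disk
    return (p_proxy_hit * cost_proxy
            + p_node_hit * cost_mem
            + p_disk * cost_disk)
```

Under this sketch, a tenant with a high cache hit ratio is charged far fewer RUs per request than a cache-cold tenant, which captures the intuition behind cache-aware isolation.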
Challenge 2: Traffic patterns change dynamically, reflected in two aspects. First, as tenant traffic trends upward, the pre-applied resources (termed “quota”) may become exhausted, thereby triggering throttling. Conversely, a sustained decrease in tenant traffic typically wastes these resource quotas. Second, even when traffic volume remains constant, changes in access distribution can lead to hot key pressure if requests concentrate on a few keys, or to significant drops in cache hit ratios if accesses become dispersed. To our knowledge, previous multi-tenant NoSQL systems have not integrated temporal forecasting as we have, leaving the hot key issue unresolved.
Challenge 3: Each tenant has differing requirements on request traffic and storage; if the layout of tenant data is not carefully planned, it can lead to imbalanced resource utilization within and across data nodes, thereby limiting overall resource utilization. For instance, if all tenants assigned to a certain data node are storage-heavy but have low traffic, this can result in high disk resource utilization while CPU resources remain idle. Although this challenge is common in large-scale cloud environments, previously reported multi-tenant NoSQL serverless systems have not provided explicit implementations or algorithms to address it.
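A greedy heuristic for the imbalance described in Challenge 3 might look like the following sketch (illustrative only, not ABase's actual rescheduling algorithm): pick the most loaded node by its dominant resource and try to move one of its partitions to the least loaded node.

```python
def rebalance_step(nodes, partitions):
    """nodes: node -> {"cpu": float, "disk": float} utilization;
    partitions: partition -> (node, cpu_cost, disk_cost).
    Returns one migration (partition, src, dst) that reduces the
    pool's peak dominant-resource load, or None if none helps."""
    load = lambda n: max(nodes[n]["cpu"], nodes[n]["disk"])
    src = max(nodes, key=load)
    dst = min(nodes, key=load)
    for p, (n, cpu, disk) in partitions.items():
        if n != src:
            continue
        new_src = max(nodes[src]["cpu"] - cpu, nodes[src]["disk"] - disk)
        new_dst = max(nodes[dst]["cpu"] + cpu, nodes[dst]["disk"] + disk)
        if max(new_src, new_dst) < max(load(src), load(dst)):
            return p, src, dst
    return None
```

Treating CPU and disk jointly via the dominant resource is what prevents the pathological layout from the example above, where disk fills up while CPU sits idle.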
To address these challenges in large-scale cloud environments, we have made the following innovative contributions in ABase:
(1). Cache-Aware Isolation Mechanism (Challenge 1): We designed a cache-aware request unit (RU) that incorporates the cache hit ratio into RU computation, and introduced request restrictions at both the proxy and data node layers to control traffic. Within the data node, we implemented a dual-layer Weighted Fair Queuing (WFQ). The CPU-WFQ schedules requests and checks their existence in the data node cache; upon a cache miss, the I/O-WFQ further schedules requests to retrieve data from the disk layer.
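Each WFQ layer in (1) can be sketched as a classic weighted fair queue that orders requests by per-tenant virtual finish time; the weights and request costs below are assumptions for illustration, not ABase internals.

```python
import heapq

class WFQ:
    """Minimal weighted fair queue: each tenant accumulates virtual
    time at rate cost/weight, and requests are served in order of
    virtual finish time, so higher-weight tenants get more service."""
    def __init__(self, weights):
        self.weights = weights                  # tenant -> weight
        self.vtime = {t: 0.0 for t in weights}  # per-tenant virtual time
        self.heap = []

    def enqueue(self, tenant, request, cost):
        self.vtime[tenant] += cost / self.weights[tenant]
        heapq.heappush(self.heap, (self.vtime[tenant], tenant, request))

    def dequeue(self):
        _, tenant, request = heapq.heappop(self.heap)
        return tenant, request
```

In the dual-layer design described above, a request would first pass through a CPU-WFQ instance and, only on a data node cache miss, be enqueued again into an I/O-WFQ instance for disk access.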
(2). Hierarchical Caching Mechanism (Challenge 2): At the data node layer, we implemented a cache based on size-aware LRU, employing individual eviction policies for items of different sizes to improve the cache hit ratio. At the proxy layer, we implemented a cache based on auto-updated LRU, along with a limited fan-out hash strategy, to effectively address both hot keys and sharp declines in cache hit rates.
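A size-aware LRU along the lines of (2) can be sketched by bucketing items into size classes, each with its own LRU eviction; the class boundary and capacities below are assumptions, not ABase's actual policy.

```python
from collections import OrderedDict

class SizeAwareLRU:
    """Items are bucketed by size class; each bucket runs its own LRU
    eviction under its own capacity, so a burst of large items cannot
    evict many small hot items (and vice versa)."""
    def __init__(self, capacities):             # size class -> max item count
        self.caches = {c: OrderedDict() for c in capacities}
        self.capacities = capacities

    def _size_class(self, size):
        return "small" if size <= 1024 else "large"  # assumed 1 KB boundary

    def put(self, key, value, size):
        cls = self._size_class(size)
        cache = self.caches[cls]
        cache[key] = value
        cache.move_to_end(key)                  # mark as most recently used
        if len(cache) > self.capacities[cls]:
            cache.popitem(last=False)           # evict least recently used

    def get(self, key, size):
        cache = self.caches[self._size_class(size)]
        if key in cache:
            cache.move_to_end(key)
            return cache[key]
        return None
```

Separate eviction per size class is the key idea: eviction pressure from one class never spills into the other, which is how such a policy can raise the overall hit ratio under mixed item sizes.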
(3). Predictive Autoscaling Policy (Challenge 2): We outline the challenges of workload forecasting in ABase, such as non-periodic bursts, period diversity, and trend variability. We then propose an ensemble-based forecasting solution that combines the adaptive-periodic Prophet model with historical averages to achieve accurate predictions and elastic adjustments.
(4). Multi-Resource Rescheduling Algorithm (Challenge 3): Considering the trade-off between efficiency and effectiveness, we propose a heuristic multi-resource rescheduling algorithm to balance traffic and storage utilization across data nodes within a resource pool. We further extend this algorithm to support data balancing across multiple resource pools.
(5). Production Analysis, Evaluation and Lessons: We conduct comprehensive experiments to validate our contributions in large-scale cloud environments and provide detailed business analysis along with practical operational lessons. ABase has been examined for managing ten-billion-level QPS and exabyte-level data storage. We believe the insights offered in this paper will prove valuable to readers.
The rest of this paper is organized as follows: Section 2 introduces the business scenarios and workloads in a large-scale cloud environment, exemplified by ByteDance. Section 3 provides an overview of ABase’s architecture and design principles. Section 4 details the system implementation of ABase. Section 5 discusses workload management strategies, including predictive autoscaling and rescheduling algorithms. Section 6 presents experimental results validating the system’s performance. Section 7 discusses key lessons learned from ABase’s lifecycle. Section 8 reviews related work and our analysis. Finally, Section 9 concludes the paper.
Table 1: Diverse application scenarios and workload characteristics of ABase in ByteDance business.
# 2 Background
In this section, we sketch an overview of workload diversity and dynamism using ByteDance as a case study. We believe these phenomena are also applicable to other large-scale cloud environments.
# 2.1 Diversity
ABase supports a broad spectrum of business lines, and Table 1 reveals significant workload diversity within and across various business lines. For clarity, throughput and storage metrics have been normalized according to an empirical standard unit. If the normalized throughput and storage metrics are comparable, this indicates a balanced demand for CPU and disk resources in this workload. The complexity of these diverse business requirements stems from variations in data characteristics and the ways in which ABase is utilized. First, considering the diversity within business lines, in the Social Media sector (Douyin), two workloads for comments and direct messages require different throughput-to-storage ratios (250:125 and 25:678, respectively). Next, considering the diversity across business lines, the E-commerce and Search sectors demonstrate a preference for higher throughput over storage, with cache hit ratios exceeding $90\%$ due to frequent reads of hot data and few updates. The Advertisement and Recommendation sectors necessitate high throughput and storage capacities. Notably, the cache hit ratio for the Advertisement workload is a mere $18\%$ , which can be attributed to the specific application of the advertisement message joiner, where most data is read only once after being written. ABase supports large language models (LLMs) by providing a remote caching store for kv-cache data, facilitating the caching of key-value results from token sequences to reduce costly recalculations during the generation of new tokens. These workloads demand throughput and storage capacities significantly higher than typical applications, normalized at 10000 and 5760, respectively. The LLM workload’s cache hit ratio is 0, as it bypasses caching to directly process data from underlying logs, optimizing network bandwidth and query speed.
We further illustrate the read ratios (reflecting the operation distribution), K-V data sizes, and common TTLs (time-to-live) of these workloads. Most workloads in Table 1 are read-heavy or balanced, but ABase also serves write-heavy scenarios, such as the advertising business. K-V data sizes vary significantly across workloads. For instance, document and advertisement message data sizes are 7 KB and 10 KB, respectively, while social media comments are only about 0.1 KB. Some businesses exhibit typical access patterns, with common TTLs set at about 3 hours for advertisements, 15 days for recommendations, and 1 day for language models.
# 2.2 Dynamism
Based on ABase experiences at ByteDance, we identified three challenging workload dynamism scenarios:
(1). Throughput sharply increases: During annual shopping events such as the Double-11 Shopping Festival and Black Friday, we observe rapid and significant increases in throughput among many tenants. These escalating workloads come from various sectors, including e-commerce, advertising, and search, with traffic peaks concentrated within a single week.
(2). Cache hit ratio sharply declines: A rapid increase in throughput can significantly reduce cache hit ratios. Additionally, even when throughput is stable, a business’s cache hit ratio may still experience a substantial decline, often due to shifts in access patterns, such as ad hoc access to large volumes of older, cold data.
(3). Emergence of Hot Keys: In sectors like social media and search, hot events often cause a small amount of data to be heavily accessed. In multi-tenant architectures, the hot key issue is considered a "last mile" problem [12] because the system must accommodate heavy traffic with a limited number of data nodes and cannot resolve this through data partitioning and migration.
Workload dynamism is also evident in other scenarios. Adjustments to the data TTL (time-to-live) can lead to rapid fluctuations in storage capacity. For tenants across multiple data centers, changes in traffic strategy can cause rapid shifts in traffic, read-write ratios, and cache hit ratios. Workload dynamism manifests in varying resource consumption among tenants, posing challenges to elasticity, load balancing, and tenant isolation.
# 3 Architecture
# 3.1 Data Model and Design Rationale
ABase supports the Redis protocol to ease adoption for users familiar with Redis, and enables eventual consistency. As shown in Figure 1, the ABase system comprises a series of resource pools, each managing a suite of tenants. A tenant can create several key-value tables, where each table is composed of numerous items, each identified by a unique key. The data belonging to a tenant are uniformly allocated into several contiguous and disjoint partitions accordingly. Each partition generates multiple replicas across various Availability Zones (AZs), thus enhancing availability and security.

Figure 1: Overview of the ABase architecture: a control plane (Meta Server, predictive autoscaler, and intra-pool & inter-pool rescheduler), a proxy plane of per-tenant proxy groups with limit fan-out hash routing and an active-update LRU cache, and a data plane of resource pools whose DataNodes host tenant partitions, each enforcing partition quotas with an SA-LRU cache and dual-layer WFQ.
ABase introduces a resource pooling concept that distributes multiple tenants’ data across individual physical machines, forming a vast resource pool. Data partitioning plays a key role in the multi-tenant architecture. ABase divides each tenant’s data into multiple non-overlapping partitions and strives to distribute these partitions across different machines within the same resource pool. The multi-tenant architecture can utilize workload diversity, allowing tenants with different resource demands to be co-located, thereby enhancing machine resource utilization.
ABase isolation adheres to two fundamental principles. First, the design of isolation must consider the impact of caching, encompassing RU, quota, and the request queue. Second, traffic control for individual tenants should be applied before the traffic reaches the shared request queue. Using dedicated resources, per-tenant restrictions block excessive traffic at the single-tenant stage, thereby enabling the shared request queue to focus on fair and efficient request processing among multiple tenants under moderate traffic pressure, rather than rejecting an enormous volume of requests.
Continuous growth in traffic may trigger tenant throttling, while dynamic changes in tenant traffic and storage can unbalance the load across ABase’s data nodes. ABase adopts predictive scaling to maintain a modest ratio between applied resources and actual utilized resources, and deploys rescheduling algorithms to periodically balance tenant replicas both within and across resource pools. However, neither of these measures addresses the issue of hot keys, which we address by introducing an innovative caching strategy.
# 3.2 Multi-Tenant Architecture
Architecture Overview: Figure 1 depicts the overall architecture of ABase. ABase comprises three parts: the control plane is a centralized management component that administers a series of resource pools for traffic management, scaling, and rescheduling. The data plane comprises several resource pools. Within each pool, numerous DataNodes manage multiple partitions for different tenants. The proxy plane contains tenant proxies, responsible for routing tenant requests to the relevant data nodes.
Control Plane comprises meta server, autoscaler, and rescheduler. The meta server serves as the centralized management module for ABase, tasked with managing global metadata, monitoring resource pool health, repairing data nodes, and overseeing the scaling and migration of data partitions. The autoscaler collects metrics on tenant RU and storage utilization, making tenant scaling decisions based on time-series forecasting. The rescheduler uses the same metrics to trigger rescheduling events, migrating replicas both within and between resource pools.
Data Plane contains multiple resource pools, each comprising multiple DataNodes. Each DataNode is allocated a physical disk along with corresponding CPU resources and manages multiple partition replicas for a diverse range of tenants. DataNodes handle partition-layer traffic control based on each tenant’s specific partition quota. DataNodes are equipped with a cache that utilizes Size-Aware LRU (SA-LRU) and a fine-grained Weighted Fair Queueing (WFQ) module, together ensuring Quality of Service (QoS) in multi-tenant environments.
Proxy Plane consists of proxies belonging to various tenants. The primary function of the proxy is to route requests. Upon receiving a client-initiated request, the proxy communicates with the MetaServer to obtain essential routing details for tenant partitions to facilitate subsequent request retransmission. Proxies conduct the proxy-layer traffic control based on each tenant's specific proxy quota. To further enhance ABase's ability to defend against cache hit dynamism and hot key issues, proxies are equipped with a cache based on active-update LRU (AU-LRU), and proxies for each tenant are organized into proxy groups that adopt a Limit Fan-out Hash Routing strategy to enhance the cache hit ratio.
# 3.3 Recovery and Robustness
The multi-tenant architecture of ABase exhibits superior recovery capabilities over single-tenant designs, facilitated by parallel processing and resource pooling. When a DataNode fails, the MetaServer coordinates parallel replica reconstruction across operational nodes, thereby effectively utilizing multi-node disk I/O bandwidth to accelerate recovery. This distributed approach eliminates a fundamental constraint of single-tenant systems, wherein complete replica restoration on a single replacement node is limited by that node's disk I/O. Moreover, the architecture maintains robustness while achieving higher resource utilization through its shared resource pool. For example, in a single-tenant system with 3 replicas, resource utilization must remain below 2/3 to accommodate the potential 3/2 workload spike during a single node failure. The multi-tenant design mitigates this impact through N-node redundancy, where load redistribution results in only a $1/N$ utilization increase on surviving nodes, thus enabling sustainably high utilization without compromising fault tolerance.
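The utilization arithmetic above can be made concrete. In the illustrative helper below (the function and its name are ours), a failed node's load is spread evenly over $n$ survivors, so each survivor's load grows by a factor of $1 + 1/n$ and steady-state utilization must stay below $n/(n+1)$:

```python
def max_safe_utilization(survivors: int) -> float:
    """Highest steady-state utilization that still absorbs one node
    failure, assuming the failed node's load is spread evenly over
    `survivors` remaining nodes (illustrative sketch; name is ours).

    Each survivor's load grows by a factor of (1 + 1/survivors), so
    utilization * (1 + 1/survivors) must stay <= 1.
    """
    return survivors / (survivors + 1)

# Single-tenant, 3 replicas: a failure leaves 2 survivors, each absorbing
# a 3/2 spike, so utilization must stay below 2/3.
# Multi-tenant pool spreading over 20 survivors: utilization can approach 95%.
```

This is why the text's 2/3 bound for the single-tenant case is just the $n = 2$ instance of the general formula.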
# 4 System Implementation
# 4.1 Normalized Request Unit
Request Units (RUs) are widely employed in serverless databases to direct user focus towards request throughput demands [2, 12] and to abstract from underlying hardware complexities. In ABase, RUs are not only crucial for billing but also constitute a key component of the isolation mechanism by quantifying a request’s consumption of CPU, memory, and disk I/O. We demonstrate how ABase tailors RU estimation to different request types, ensuring that RUs closely reflect the actual resource consumption of operations, while taking into account the impact of caching on resource consumption.
Write Operations: For write operations, the value size of the written item $S_{\mathrm{write}}$ is typically known, which facilitates a straightforward computation of $RU_{\mathrm{write}} = S_{\mathrm{write}}/U$, where $U$ is the unit byte size, empirically set to 2KB. Importantly, considering ABase's replication mechanism, a single user write request translates into one direct write operation and $r-1$ synchronization operations to other replicas (where $r$ is the number of replicas), resulting in a total charge of $r \cdot RU_{\mathrm{write}}$.
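The charging rule can be sketched as follows (function and parameter names are ours; only the formula $r \cdot S_{\mathrm{write}}/U$ with $U = 2$KB comes from the text):

```python
def write_ru(value_size_bytes: int, replicas: int, unit: int = 2048) -> float:
    """Total RU charged for one write (illustrative sketch).

    One direct write plus (replicas - 1) synchronizations each cost
    value_size / U, so the total charge is replicas * RU_write.
    """
    ru_write = value_size_bytes / unit
    return replicas * ru_write

# A 6 KB write with 3 replicas costs 3 * (6144 / 2048) = 9 RU.
```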
Read Operations: Since the value size and cache hit status of read operations are not predetermined, we estimate the size of upcoming reads, $\mathbb{E}[S_{\mathrm{read}}]$, and cache hit ratios, $\mathbb{E}[R_{\mathrm{hit}}]$, using a moving average of the last $k$ requests. We employ $\mathbb{E}[S_{\mathrm{read}}]$ for traffic control, detailed in Section 4.2, and charge based on the actual size returned. Requests that hit the proxy cache are directly returned without throttling or charges. In summary, the formula for estimating read costs is $RU_{\mathrm{read}} = \mathbb{E}[S_{\mathrm{read}}] \times (1 - \mathbb{E}[R_{\mathrm{hit}}])/U$.
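A minimal sketch of the moving-average estimate (class and method names are ours; the window of the last $k$ requests and the cost formula come from the text):

```python
from collections import deque

class ReadRUEstimator:
    """Illustrative sketch: a moving average over the last k reads gives
    E[S_read] and E[R_hit]; the estimated cost per upcoming read is
    E[S_read] * (1 - E[R_hit]) / U."""

    def __init__(self, k: int = 100, unit: int = 2048):
        self.sizes = deque(maxlen=k)   # returned value sizes (bytes)
        self.hits = deque(maxlen=k)    # 1 if the read hit the cache
        self.unit = unit

    def observe(self, size_bytes: int, cache_hit: bool) -> None:
        self.sizes.append(size_bytes)
        self.hits.append(1 if cache_hit else 0)

    def estimated_ru(self) -> float:
        if not self.sizes:
            return 0.0
        e_size = sum(self.sizes) / len(self.sizes)
        e_hit = sum(self.hits) / len(self.hits)
        return e_size * (1 - e_hit) / self.unit
```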
Complex Read Operations: Challenges in estimating RUs for complex operations stem from the unpredictable number of items a request may scan (e.g., HLen (the number of fields in a hash table)) and the intricate multi-stage procedures involved in requests (e.g., HGetAll (a command to retrieve all fields and values in a hash table)). To estimate HLen, we use historical data on the length of the HashSet, and for HGetAll, we decompose the operation into HLen followed by a scan, calculating the RU for each stage separately.
# 4.2 Hierarchical Request Restriction
ABase implements a hierarchical request restriction strategy, divided into proxy-level and partition-level. As shown in Figure 2, each tenant is assigned a dedicated set of proxies, proportional to its allocated quota. Proxies forward their respective requests to the DataNodes. Note that, requests that hit the proxy cache do not consume any proxy quota. DataNodes route the requests to a request queue, filtering out those that exceed predefined quotas. Remaining valid requests are processed by the subsequent Dual-Layer WFQ module, which will be discussed in Section 4.3.
At the proxy level, the primary duty of the proxy is to prevent the total RUs from surpassing the tenant quota. Unlike DynamoDB, which requires real-time interactions between request routers and the Global Admission Control, ABase Proxy employs an asynchronous traffic control strategy to minimize dependencies between proxies and the centralized MetaServer. Each proxy receives a specific proxy_quota, calculated by dividing the tenant quota by the number of proxies, allowing them to process up to double this quota autonomously. To maintain the tenant’s total traffic across all proxies within the set tenant quota, the MetaServer continuously monitors each proxy’s traffic and, if exceeded, directs the proxies to revert to their standard proxy_quota.
Another responsibility of the proxy is to shield tenants on DataNodes — which are shared among multiple tenants — from the impact of co-tenants’ burst traffic. When the traffic of certain tenants significantly escalates, the proxy designated for this tenant can reject excess traffic, thereby preventing requests from reaching the DataNodes. This avoidance helps reduce the extensive resource consumption that would occur if DataNodes were to handle and reject these requests, thus safeguarding the stability of other tenants.
At the partition level, DataNodes reject requests that exceed the maximum allowed quota of a partition at the entry point, namely the request queue. In DynamoDB, a partition is allowed to consume the entire tenant quota. However, under extreme conditions, this flexibility inevitably leads to mutual interference among co-tenants. Elevated loads on specific tenant partitions may deplete resources, potentially leading to service degradation as traffic surges in previously low-load tenants. ABase explicitly introduces a partition_quota, defined as the tenant quota divided by the number of partitions, ensuring that no single partition surpasses three times its partition_quota. This restriction is reasonable because ABase organizes all items in hash partitions, which makes each partition likely to experience even traffic. We will discuss hot key optimization in Section 4.4.
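Together with the proxy-level rule above, the hierarchical limits can be sketched as a toy helper (all names are ours; the per-proxy share with a 2x burst allowance and the 3x partition cap are from the text):

```python
def quota_limits(tenant_quota: float, num_proxies: int, num_partitions: int):
    """Illustrative sketch of ABase's hierarchical request limits.

    Each proxy may autonomously serve up to 2x its share of the tenant
    quota (the MetaServer reverts it to 1x if the tenant total is
    exceeded); each partition is capped at 3x its share.
    """
    proxy_quota = tenant_quota / num_proxies
    partition_quota = tenant_quota / num_partitions
    return {
        "proxy_quota": proxy_quota,
        "proxy_burst_limit": 2 * proxy_quota,
        "partition_quota": partition_quota,
        "partition_limit": 3 * partition_quota,
    }
```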
# 4.3 Dual-Layer Weighted Fair Queueing
In the ABase system, each ABase DataNode may host partitions belonging to various tenants. To ensure fair and efficient handling of requests from various tenants, we designed a fine-grained, dual-layer Weighted Fair Queueing (WFQ) mechanism.
As noted in 2DFQ [27], a fair and efficient WFQ is expected to prevent interference between heavyweight and lightweight requests. To achieve this objective, we have implemented a straightforward yet robust approach. As illustrated in Figure 2, all requests are categorized into four independent dual-layer WFQs based on their type (read/write) and their size (large/small). This categorization has proven effective in practice, as it ensures closely matched request latencies within each queue type.
Resource consumption by requests depends on whether they hit the DataNode cache. Referencing SQLVM [11, 28], we designed dual-layer queues, including CPU-WFQ and I/O-WFQ. Requests first enter the upper CPU-WFQ for processing. If a request hits the cache, it can be directly returned; otherwise, it proceeds to the lower I/O-WFQ. The I/O-WFQ uses a group of basic threads to handle normal requests, and employs additional threads to handle requests from other tenants when basic threads are fully occupied by a single tenant.
Figure 2: Hierarchical request restriction and dual-layer WFQ on a DataNode. Tenant proxies enforce proxy quotas; the DataNode's request queue enforces partition quotas, then dispatches requests to four dual-layer WFQs (small read, large read, small write, large write), each pairing a CPU-WFQ with an I/O-WFQ backed by a thread pool of basic and extra threads on top of LavaStore, the underlying storage engine.

WFQ acts as a min-heap that prioritizes the request with the smallest customized virtual finish time (VFT). ABase has elaborately designed the VFT to ensure efficient and fair execution among various tenant requests. The VFT of a request is formulated as follows:
$$
\begin{array}{r} { \operatorname{wReqCost}(Q_i) = \dfrac{\operatorname{Cost}(Q_i)}{\operatorname{wPartition}(Q_i)} = \dfrac{\operatorname{Cost}(Q_i)}{Q_i / \sum Q_p} } \\ { \operatorname{VFT}(Q_i) = \operatorname{preVFT}_{T_i} + \operatorname{wReqCost}(Q_i) } \end{array}
$$
A request’s cost is weighted according to its partition quota, where a higher proportion of the partition quota in the DataNode (denoted by wPartition) leads to a higher weight cost (denoted by wReqCost). Moreover, the VFT for all requests from the same tenant is cumulative, thereby preventing scenarios where a single tenant’s requests are consistently prioritized high, even if that tenant has a larger partition quota or lower request costs.
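The VFT bookkeeping above can be sketched with a small min-heap (an illustrative toy, not ABase's implementation; class and method names are ours):

```python
import heapq

class WFQ:
    """Sketch of the VFT-based weighted fair queue of Section 4.3.

    wPartition(Q_i) = Q_i / sum(Q_p), wReqCost = Cost / wPartition, and a
    tenant's VFTs accumulate: VFT = preVFT of that tenant + wReqCost.
    A min-heap always serves the request with the smallest VFT."""

    def __init__(self, partition_quotas):
        # partition_quotas: tenant -> quota of its partition on this node
        self.quotas = dict(partition_quotas)
        self.total = sum(self.quotas.values())
        self.prev_vft = {t: 0.0 for t in self.quotas}
        self.heap = []
        self.seq = 0  # tie-breaker for equal VFTs

    def enqueue(self, tenant, cost):
        weight = self.quotas[tenant] / self.total       # wPartition(Q_i)
        vft = self.prev_vft[tenant] + cost / weight     # cumulative VFT
        self.prev_vft[tenant] = vft
        heapq.heappush(self.heap, (vft, self.seq, tenant))
        self.seq += 1

    def dequeue(self):
        vft, _, tenant = heapq.heappop(self.heap)
        return tenant, vft
```

With quotas 3:1, a request of cost 1 from the large-quota tenant finishes at VFT 4/3 while the small-quota tenant's finishes at 4, so the former is served first, yet its accumulated VFT prevents it from being prioritized indefinitely.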
Furthermore, to address challenges encountered in practical deployments, we introduced the following enhancements:
Rule 1: We defined different $Cost(Q_i)$ for requests in CPU-WFQ and I/O-WFQ. For CPU-WFQ, costs are based on RU, while for I/O-WFQ, they are determined by the request's IOPS. This is based on the observation that in ABase, a single I/O operation generally has a similar execution time, regardless of request details.
Rule 2: In CPU-WFQ, concurrency limits are enforced on both read and write requests to ensure stable latency [13]. For write operations, in addition to managing concurrency, we impose a ceiling on the total RU to enhance stability of write latencies during compaction and garbage collection processes in the LavaStore storage engine [43]. This strategy is essential in preventing significant throughput oscillations within the storage engine.
Rule 3: All requests from a single tenant can occupy at most $90\%$ of the CPU-WFQ resources, even if that tenant has a substantial partition quota. This rule is designed to prevent significant delays in other tenants' requests during traffic bursts from a single tenant.
Rule 4: If all basic threads in the I/O-WFQ thread pool are monopolized by tasks from one tenant, we temporarily increase extra threads to handle tasks from other tenants. This strategy relies on the assumption that simultaneous high traffic from two tenants on a DataNode is unlikely; therefore, a few additional threads are sufficient to manage such conflicts.
# 4.4 Dual-Layer Caching
Proxy-Layer Cache: Hot key management presents a critical challenge for key-value databases, particularly during high-traffic scenarios such as major promotional events. Previously discussed techniques, such as partition splitting and rescheduling, have proven inadequate to manage the strain placed by high-frequency access to a few hot keys on a single data node [12]. Traditional caching approaches face limitations due to proxy memory constraints, typically less than 10GB, leading to frequent cache evictions and suboptimal hit ratios under random routing schemes.
To address these challenges, we propose a dual-component optimization framework comprising a proxy-side cache module and a client routing strategy. The proxy layer implements an Active-Update LRU (AU-LRU) mechanism that bypasses DataNode accesses when the proxy cache is hit. An active-update mechanism is applied to address potential spikes in requests due to expired cache entries. It automatically refreshes hot keys as they near expiration, thus maintaining the timeliness and continuity of the cached data. On the client side, we adopt a limited fan-out hash strategy to determine the destination of requests. The tenant's $N$ proxies are divided into $n$ groups. When a tenant accumulates a list of requests, each request is hashed to one of the $n$ ProxyGroups using a custom hashing function, and then randomly sent to one proxy within that hashed ProxyGroup. By carefully adjusting $n$, tenants can optimize the balance between hit ratio and hot key pressure. Because each group is responsible for only $1/n$ of the key space, a larger $n$ results in a higher cache hit ratio for each proxy. During hot key events, selecting a smaller $n$ value spreads the load of a hot key across a larger number of proxies ($N/n$ per group).
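The routing strategy can be sketched as follows (an illustrative toy assuming $n$ divides $N$; the hash function choice and all names are ours):

```python
import hashlib
import random

def route(key: str, proxies: list, n_groups: int) -> str:
    """Sketch of limit fan-out hash routing: the tenant's N proxies are
    split into n contiguous groups; a request key is hashed to one
    group, then served by a random proxy inside that group, so each
    key is cached on at most N/n proxies."""
    group_size = len(proxies) // n_groups       # assumes n divides N
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    group = h % n_groups                        # deterministic per key
    members = proxies[group * group_size:(group + 1) * group_size]
    return random.choice(members)               # load-spread within group
```

Raising `n_groups` shrinks each group, concentrating a key on fewer proxies (higher hit ratio); lowering it enlarges each group, spreading a hot key's traffic over more proxies.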
DataNode-Layer Cache: Workload diversity necessitates the management of a broad spectrum of key-value data sizes. Motivated by this, we developed a DataNode-layer cache that utilizes an enhanced Size-Aware LRU strategy (SA-LRU). SA-LRU preferentially evicts data that occupies more memory while yielding fewer cache hits, managing memory more efficiently. By prioritizing the retention of smaller-sized data, which typically incurs lower access costs, SA-LRU not only optimizes resource utilization but also enhances the overall cache hit ratio.
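The text does not spell out SA-LRU's exact eviction rule, so the sketch below shows one plausible size-aware variant: among a small tail of least-recently-used entries, it evicts the one with the worst bytes-per-hit ratio, so large, rarely hit items go first (all names and the `tail` parameter are ours):

```python
from collections import OrderedDict

class SALRUCache:
    """One plausible size-aware LRU in the spirit of SA-LRU (illustrative)."""

    def __init__(self, capacity_bytes: int, tail: int = 4):
        self.capacity = capacity_bytes
        self.tail = tail                     # how many LRU entries to scan
        self.data = OrderedDict()            # key -> (value, size, hits)
        self.used = 0

    def get(self, key):
        if key not in self.data:
            return None
        value, size, hits = self.data.pop(key)
        self.data[key] = (value, size, hits + 1)   # move to MRU end
        return value

    def put(self, key, value, size):
        if key in self.data:
            _, old_size, _ = self.data.pop(key)
            self.used -= old_size
        while self.used + size > self.capacity and self.data:
            # Scan the LRU tail; evict the entry with the most bytes per hit.
            candidates = list(self.data.items())[:self.tail]
            victim = max(candidates, key=lambda kv: kv[1][1] / (kv[1][2] + 1))
            self.used -= victim[1][1]
            del self.data[victim[0]]
        self.data[key] = (value, size, 0)
        self.used += size
```

In this sketch a large cold entry is evicted before a small frequently hit one, matching the stated goal of retaining small, cheap-to-serve data.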
# 5 Workload Management
# 5.1 Predictive AutoScaling
Algorithm 1 shows the details of the ABase scaling policy. Details on workload forecasting and resource rescheduling are further explored in subsequent sections (Section 5.2 and Section 5.3). In the ABase system, quotas are categorized into RU (Request Unit) and Storage, each allowing for independent scaling by tenants. Consider a tenant with tenant quota $Q_T$, number of partitions $N$, and partition quota $Q_P$. We forecast the maximum resource usage over the next 7 days, $U_{max}$, from a 30-day historical series. When the forecasted usage exceeds the upper threshold (0.85) or falls below the lower threshold (0.65) of the tenant quota, scaling up or down is triggered accordingly. After scaling up, if the partition quota exceeds the quota upper bound UP, a partition split is triggered; after scaling down, we ensure that the partition quota does not fall below the quota lower bound LOWER to accommodate occasional traffic bursts from tenants. Scaling operations change the distribution of partition quotas and usage; thus, ABase continuously invokes the rescheduling strategy to balance the utilization of DataNodes within the resource pool.
# Algorithm 1 ABase Scaling Policy
Require: $Q_T$, $N$, $U_{max}$
1: if $U_{max} > 0.85 \times Q_T$ then
2: $Q_T \gets U_{max}/0.65$
3: $Q_P \gets Q_T/N$
4: if $Q_P > \mathrm{UP}$ then
5: Trigger partition split so that $Q_P \gets 0.5 \times Q_P$
6: end if
7: else if $U_{max} < 0.65 \times Q_T$ and not scaled in last 7 days then
8: $Q_T \gets U_{max}/0.65$
9: $Q_P \gets \max(Q_T/N, \mathrm{LOWER})$
10: end if
11: Invoke the rescheduling strategy periodically.
12: Continue to forecast the max usage $U_{max}$ in the next 7 days.
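Algorithm 1 transcribes directly into a function (the 0.85/0.65 thresholds and the UP/LOWER bounds are from the algorithm; the function shape, the doubling of the partition count on split, and all names are ours):

```python
def scale(q_t, n, u_max, up, lower, scaled_recently=False):
    """Sketch of the ABase scaling policy (Algorithm 1).

    Returns (new tenant quota, new partition quota, split_triggered).
    """
    split = False
    if u_max > 0.85 * q_t:                              # scale up
        q_t = u_max / 0.65
        q_p = q_t / n
        if q_p > up:                                    # upper bound UP
            n *= 2                                      # split: halve Q_P
            q_p *= 0.5
            split = True
    elif u_max < 0.65 * q_t and not scaled_recently:    # scale down
        q_t = u_max / 0.65
        q_p = max(q_t / n, lower)                       # lower bound LOWER
    else:
        q_p = q_t / n                                   # no change
    return q_t, q_p, split
```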
# 5.2 Workload Forecasting
The workload forecasting module is crucial in the ABase autoscaling. It processes resource usage metrics from the past 30 days, downsampled to 1-hour intervals, along with their quota records, and predicts the resource usage trends for the next 7 days to inform scaling decisions. Although time-series based scaling strategies are commonly utilized in cloud services [10, 30, 32–35, 37, 47, 49], ABase faces several complex challenges in practice:
Issue 1: Sporadic Bursts and Metric Noise: ABase must be cautious with scaling operations, which involve costly processes such as partition migration and resource pool scaling. Sporadic bursts, which may be ad hoc and temporary, should not trigger unnecessary upscaling. Furthermore, metrics erroneously recorded during partition migrations or master node transitions can be misinterpreted as transient bursts.
Issue 2: Period Diversity and Trend Variability: The periodicity of the ABase workload is highly diverse. Apart from standard daily and weekly cycles, it includes various uncommon periods, such as 3.5 days, often attributed to specific tenant TTL configurations. Significant trend variations frequently occur within individual series, typically due to business adjustments and data cleaning.
Issue 3: Consistent Non-periodic Bursts: For some tenants, peaks occur daily at varying times without regular periodicity. These should not be dismissed as mere outliers. Accurately predicting these bursts’ maximum value is crucial for appropriate ABase scaling decisions.
To address these issues, we have developed an ensemble-based forecasting solution. In the preprocessing phase, we apply multimetric collaboration for denoising. If Usage and Quota metrics simultaneously show spikes, these are considered noise and filtered out, as such simultaneous occurrences are nearly impossible in practice. Additionally, we use heuristic methods to eliminate sporadic peaks, likely due to accidental events, such as those appearing only once in the past 10 days. We also utilize change point detection methods to identify trend shifts, thereby focusing the forecasting algorithms more on recent data changes (for Issue 1).
During the forecasting phase, we initially use power spectral density (PSD) [42] analysis to determine the time series’ periodicity. Subsequently, we employ a weighted ensemble of predictions derived from both the Prophet [41] and historical average methods [39]. The Prophet model is effective for time series with clear trends and periods, while the historical average provides stable forecasts, especially suitable when trend changes are minimal (for Issue 2). For consistent non-periodic bursts, if the forecasts are significantly lower than historical input data, we directly use the most recent period’s historical data for predictions to avoid unnecessary downscaling (for Issue 3).
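Two of these ingredients, the historical-average baseline and the Issue-3 fallback, can be sketched as follows (a toy assuming `horizon <= period`; the 0.8 guard factor and all names are ours):

```python
def forecast_with_burst_guard(history, period=24, horizon=24, guard=0.8):
    """Sketch of a historical-average forecast with a burst guard.

    Baseline: mean of the value at the same phase in each past period.
    Guard (Issue 3): if the forecast peak sits well below the most
    recent observed peak, reuse the last period's raw history instead,
    to avoid unnecessary downscaling. Assumes horizon <= period.
    """
    n_periods = len(history) // period
    recent = history[-n_periods * period:]
    baseline = [
        sum(recent[p * period + i] for p in range(n_periods)) / n_periods
        for i in range(horizon)
    ]
    if max(baseline) < guard * max(recent[-period:]):
        return list(recent[-period:])[:horizon]
    return baseline
```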
We have also investigated other deep learning-based methods, like TFT [24], Autoformer [46], N-BEATS [31] and N-HiTS [7]. Although these models yield high-quality forecasts after pre-training, our ensemble-based approach maintains comparable precision and robustness, seamlessly adapting to new tenants with emerging trend characteristics without the need for retraining.
# 5.3 Workload Rescheduling
To address imbalanced DataNode utilization from diverse workloads, ABase incorporates a novel resource rescheduling module. This module uses a heuristic approach to balance efficiency and effectiveness, with two components: intra-pool, focusing on reallocations within a single pool, and inter-pool, managing reallocations across different pools to optimize resource utilization.
The intra-pool rescheduling algorithm primarily consists of two phases. The first phase aims to balance the replica distribution for each tenant, distributing the count of a tenant’s replicas across DataNodes as evenly as possible, thus enhancing elasticity and robustness against failures. The second phase aims to balance resource utilization across all DataNodes within a resource pool, involving two resource dimensions (RU and storage) without compromising the previously established replica balance. Both phases use similar heuristic algorithms; for brevity, we next focus on a detailed explanation of the second phase, resource utilization rescheduling.
(1). Load Indicator: We characterize the resource load (e.g., RU, Storage) of a Replica (RE), DataNode (DN), or Resource Pool (RP) as follows. First, the load of each replica is aggregated based on the hourly average, retaining load data from the past seven days. This data is then aggregated by taking the maximum value within the hour-of-day dimension to derive the load vector $RE^{ld} = (RE_1^{ld}, \dots, RE_{24}^{ld})$. Note that the RU load incorporates the weighted factors of read RU, write RU and the cache hit ratio.
Second, the load vectors of all replicas on the DataNode or Resource Pool are summed and the maximum value of the resulting vector is computed. The specific calculation formula is as follows:
$$
DN^{ld}(RP^{ld}) = \operatorname*{max}_{i} \left( \sum_{RE \in DN(RP)} RE_i^{ld} \right) \quad \mathrm{for~} i \in \{1, 2, \dots, 24\}
$$
where $DN^{ld}$ ($RP^{ld}$) represents the resource load of the DataNode (Resource Pool).
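The load-indicator computation can be sketched as follows (function names are ours; the 7-day, hour-of-day aggregation is from the text):

```python
def replica_load_vector(hourly_avgs):
    """Sketch of the per-replica load vector RE^ld.

    `hourly_avgs` holds 7 days x 24 hourly averages for one replica; the
    24-entry vector takes, at each hour of day, the max over the seven
    values observed at that hour.
    """
    return [max(day[h] for day in hourly_avgs) for h in range(24)]

def node_load(replica_vectors):
    """DataNode (or pool) load DN^ld: sum the replica vectors
    element-wise, then take the maximum over the 24 hours."""
    summed = [sum(v[h] for v in replica_vectors) for h in range(24)]
    return max(summed)
```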
(2). Optimal Load: Considering the necessity to balance resource load across multiple dimensions (RU, Storage), the optimal load vector $\langle R, S \rangle$ within a single resource pool is defined as follows:
$$
\langle R, S \rangle = \left( \frac{RP_{ru}^{ld}}{RP_{ru}^{cap}}, \frac{RP_{sto}^{ld}}{RP_{sto}^{cap}} \right)
$$
where $RP_{ru}^{ld}$ ($RP_{sto}^{ld}$) represents the RU (Storage) load of the resource pool, and $RP_{ru}^{cap}$ ($RP_{sto}^{cap}$) represents the total RU (Storage) capacity of the resource pool.
(3). Migration Gain: To quantify the benefits of migrating a replica $RE$ to $Des\_DN$ (the destination DataNode), we first define the deviation between a DataNode's load and the optimal load. We employ the L2-Norm Loss to evaluate this deviation as follows:
$$
\mathcal{L}(DN) = \sqrt{ \left( \frac{DN_{ru}^{ld}}{DN_{ru}^{cap}} - R \right)^2 + \left( \frac{DN_{sto}^{ld}}{DN_{sto}^{cap}} - S \right)^2 }
$$
where $DN_{ru}^{ld}$ ($DN_{sto}^{ld}$) represents the RU (Storage) load of the DataNode, and $DN_{ru}^{cap}$ ($DN_{sto}^{cap}$) represents the total RU (Storage) capacity of the DataNode.
Therefore, when migrating the replica $R E$ from its current DataNode $( R E . D N )$ to the destination DataNode $( D e s \_ D N )$ , we quantify the migration’s gain by the reduction in maximum load across both nodes post-migration. A decrease in maximum load indicates a positive gain, signifying improved load distribution:
$$
\begin{array}{r} { \mathcal{G}(RE, Des\_DN) = \operatorname*{max}[\mathcal{L}(RE.DN), \mathcal{L}(Des\_DN)] - } \\ { \operatorname*{max}[\mathcal{L}(RE.DN.Remove(RE)), \mathcal{L}(Des\_DN.Add(RE))] } \end{array}
$$
where $RE.DN.Remove(RE)$ represents $RE.DN$ removing $RE$, and $Des\_DN.Add(RE)$ represents $Des\_DN$ adding $RE$.
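The deviation and gain definitions translate directly into code (an illustrative sketch; the dictionary keys and function names are ours):

```python
import math

def deviation(dn, R, S):
    """L2 deviation of a DataNode from the pool's optimal load <R, S>.
    `dn` carries RU/Storage load and capacity."""
    return math.sqrt((dn["ru_ld"] / dn["ru_cap"] - R) ** 2 +
                     (dn["sto_ld"] / dn["sto_cap"] - S) ** 2)

def migration_gain(src, dst, re, R, S):
    """Gain of moving replica `re` (with its ru/sto load) from src to
    dst: the reduction in the larger of the two nodes' deviations."""
    before = max(deviation(src, R, S), deviation(dst, R, S))
    src_after = {**src, "ru_ld": src["ru_ld"] - re["ru"],
                 "sto_ld": src["sto_ld"] - re["sto"]}
    dst_after = {**dst, "ru_ld": dst["ru_ld"] + re["ru"],
                 "sto_ld": dst["sto_ld"] + re["sto"]}
    after = max(deviation(src_after, R, S), deviation(dst_after, R, S))
    return before - after
```

Moving a replica from an overloaded node to an underloaded one yields a positive gain, which is exactly the trigger condition used by Algorithm 2.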
(4). DataNode Division: Workload rescheduling is based on the heuristic of migrating replicas from high-loaded DataNodes to low-loaded DataNodes. Specifically, DataNodes are divided into three groups based on their load levels: $ { \boldsymbol { S } } _ { L }$ (Low Load DataNodes), $s _ { M }$ (Medium Load DataNodes), $s _ { H }$ (High Load DataNodes). Using the RU load as a case study, the DataNodes are divided as follows:
$$
\left\{ \begin{array}{ll} \mathrm{DN} \in S_L, & \mathrm{~if~} \frac{DN_{ru}^{ld}}{DN_{ru}^{cap}} \le R - \theta \\ \mathrm{DN} \in S_M, & \mathrm{~if~} R - \theta < \frac{DN_{ru}^{ld}}{DN_{ru}^{cap}} \le R \\ \mathrm{DN} \in S_H, & \mathrm{~otherwise} \end{array} \right.
$$
where $\theta$ is a manually set threshold, such as $5\%$.
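The division rule can be sketched as follows (the function name is ours; the thresholds around the optimal ratio $R$ are from the cases above):

```python
def classify(dn_ratios, R, theta=0.05):
    """Sketch of the DataNode division: split nodes into low (S_L),
    medium (S_M), and high (S_H) load sets around the optimal ratio R.
    `dn_ratios` maps node name -> load/capacity ratio."""
    s_l, s_m, s_h = [], [], []
    for name, ratio in dn_ratios.items():
        if ratio <= R - theta:
            s_l.append(name)
        elif ratio <= R:
            s_m.append(name)
        else:
            s_h.append(name)
    return s_l, s_m, s_h
```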
Algorithm 2 outlines the intra-pool workload rescheduling process. For each resource type, we categorize DataNodes into three groups: $S_L$, $S_M$, and $S_H$. The algorithm iterates over each high-load DataNode ($Src\_DN$) in $S_H$, excluding those with ongoing replica migrations. For each eligible $Src\_DN$, the algorithm examines each replica $RE$ on it. It then considers all low-load DataNodes ($DN$) in $S_L$ that meet two criteria: $DN.CanPlace(RE)$, which preserves the uniform distribution of table replicas without pushing $DN$ into the high-load set $S_H$, and not $DN.IsMigrating(RE)$, which verifies the absence of ongoing replica migration on $DN$. The process concludes by selecting the replica $RE_{move}$ and destination DataNode $Des\_DN$ that maximize the gain function $\mathcal{G}(RE, DN)$. A positive gain triggers the execution of the migration.
The inter-pool rescheduling algorithm primarily focuses on reallocating DataNodes between resource pools and can be readily extended from the intra-pool algorithm. For example, to balance the resource utilization between two resource pools, $Pool_H$ (with higher load) and $Pool_L$ (with lower load), we tend to vacate a portion of the DataNodes from $Pool_L$ and reallocate them to $Pool_H$. Initially, we select some low-utilization DataNodes from $Pool_L$ and migrate replicas from these selected DataNodes to others within the same pool ($Pool_L$). Then, we reassign these vacated DataNodes to $Pool_H$. Finally, we invoke the intra-pool algorithm to re-balance the load within the two resource pools.
# Algorithm 2 Intra-Pool Workload Rescheduling
# 6 Experiments
# 6.1 Production Statistics
Diversity Analysis: We present real production statistics from a specific resource pool at ByteDance in Figure 3, where each circle represents a tenant in this pool, with the horizontal and vertical axes showing the tenant's average RU and storage usage over the past month, respectively. The color of each circle indicates the read operation ratio of the tenant, with darker colors indicating a higher read ratio. Generally, tenants with higher RU tend to have larger storage capacities, yet there are numerous cases exhibiting diverse RU/storage characteristics. In terms of the read ratio, tenants with a larger ratio of RU to storage (the lower right corner of Figure 3) tend to exhibit read-heavy workloads.
Figure 3: Distribution of tenants by RU, storage, and read ratio. Each point represents one tenant, normalized by median.
Figure 4: Metric values across tenant percentiles.
[Figure 4 panels: (a) Latency to SLA, (b) Cache Hit Ratio, (c) Read Ratio, (d) Average K-V Size; the per-panel percentile values are discussed in the text.]
Figure 4 provides detailed metric statistics. Figure 4a shows that all tenants on this resource pool experience latencies (P99) significantly below the SLA threshold (Service Level Agreement, red horizontal line); exceeding the SLA threshold indicates a failure to meet tenant demands. All tenants maintain latencies below 66.0% of the SLA, 90% of tenants are under 24.0%, and 50% of tenants are under 11.2%. These low latencies demonstrate that ABase effectively supports diverse business needs and enhances performance stability, making it difficult for even significant traffic bursts to result in SLA violations. Figure 4b shows the distribution of the cache hit ratio among tenants: over 50% of tenants have a cache hit ratio above 93.5%, consistent with the low latency observed for most tenants (Figure 4a). Figure 4c shows the distribution of read operation ratios among tenants: 50% of ABase tenants have a read ratio of less than 39.3% (write-heavy), while a significant proportion of tenants have a read ratio exceeding 50% (read-heavy). Finally, Figure 4d shows the distribution of the average key-value size among tenants. The median size is 0.12 KB, with a few tenants having significantly larger sizes; on this resource pool, the 90th and 99th percentile key-value sizes are 50 KB and 308 KB, respectively.
Dynamism Analysis: To demonstrate how effectively ABase handles dynamic workloads at ByteDance, we collected tenant metrics, including RU usage, cache miss ratio, and latency, during the Double-11 Shopping Festival, a period of intense E-Commerce activity that significantly alters the usual workload characteristics of many tenants. During the Double-11 period, more than 25% of tenants in this resource pool exhibited significant increases in QPS or notable fluctuations in cache hit ratios. We illustrate some representative examples in Figure 5. From Figure 5a to Figure 5c, all three tenants exhibited traffic increases, but their cache hit ratios varied. The cache hit ratio in Figure 5a was virtually unaffected, remaining consistently at 100%; in Figure 5b, the cache hit ratio decreased by over 20% following an increase in tenant traffic, because a broad distribution of requested keys led to increased cache eviction. Figure 5c shows a 10% increase in the cache hit ratio following a surge in tenant traffic, attributable to hot-key scenarios. In contrast to Figure 5a, Figure 5d depicts a decrease of approximately 10% in the cache hit ratio despite stable traffic levels. The tenant in Figure 5e experienced a traffic peak lasting about three days, during which the cache hit ratio plummeted from 100% to about 2%.
Despite the various workload changes among ABase’s tenants, the latency for all tenants remained stable, still fully meeting the SLA requirements. This can be explained using Figure 5f, which shows changes in total traffic, average cache hit ratio, and average latency at the resource-pool level. Benefiting from ABase’s multitenant design, the resource capacity of the resource pool far exceeds the variation in individual tenant demands, allowing tenants to share reserved resources and thus providing ample capacity to handle changes in tenant loads. As a result, despite significant loads during the Double-11 shopping festival, overall pool traffic and cache hits remained stable.
# 6.2 Performance Isolation
This section examines the effectiveness of the proxy quota, partition quota, and the dual-layer WFQ mechanism through ablation studies on synthetic workloads.
Proxy Quota. As shown in Figure 6, the experimental setup involved hosting partition replicas for two tenants on a single DataNode, with the proxy initially disabled. Initially, both tenants experienced low traffic volumes, and all requests were processed successfully with minimal latency. At the 10-minute mark, Tenant 1 initiated a traffic burst that significantly exceeded their assigned tenant quota (indicated by the red line). In the absence of the proxy’s interception, these requests overwhelmed the DataNode’s request queue. Tenant 1’s success QPS reached the partition quota, and the requests exceeding this quota were returned as errors. The DataNode expended considerable resources rejecting Tenant 1’s excessive requests, which severely disrupted the processing of Tenant 2’s legitimate requests. Consequently, Tenant 2 was severely impacted by Tenant 1’s burst, with their success QPS beginning to decline, nearly reaching zero. At the 35-minute mark, upon activating Tenant 1’s proxy (indicated by the green line), the proxy efficiently intercepted traffic exceeding Tenant 1’s tenant quota, enabling the DataNode to efficiently manage the remaining traffic. Subsequently, latency levels for both tenants returned to low values, and Tenant 2’s QPS recovered to the pre-burst levels.
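The proxy-level interception described above can be approximated with a token bucket: traffic above the tenant quota is rejected at the proxy before it ever reaches the DataNode. The `TokenBucket` class, its parameters, and the use of request counts rather than RUs are simplifying assumptions, not ABase's quota implementation.

```python
import time

class TokenBucket:
    """Simplified proxy-side tenant quota: forward a request only while
    tokens remain; excess traffic is rejected at the proxy, sparing the
    DataNode the cost of queueing and rejecting it."""

    def __init__(self, rate, burst):
        self.rate = rate                 # tokens replenished per second
        self.capacity = burst            # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # replenish tokens according to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True                  # forward to the DataNode
        return False                     # intercept at the proxy
```

In the experiment's terms, enabling the proxy corresponds to routing Tenant 1's requests through such a limiter, which is why Tenant 2's QPS recovers once interception begins.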
Partition Quota and Dual-Layer WFQ. As shown in Figure 7, we conducted a simulation experiment to validate the efficacy of partition-level restrictions and the dual-layer WFQ mechanism.
Figure 5: Tenant latency is stable amid workload fluctuations during the Double-11 Shopping Festival. Subfigures show QPS, cache hit ratio, and latency from top to bottom. (a) QPS increases, cache hit ratio remains stable. (b) QPS increases, cache hit ratio decreases. (c) Both QPS and cache hit ratio increase. (d) QPS remains stable, cache hit ratio decreases. (e) QPS increases shortly, cache hit ratio decreases. (f) At resource pool scale: QPS and cache hit ratio remain stable.
Figure 6: Effectiveness of proxy quota
Figure 8: Oncall (urgent contact) amount decreases by $6 5 \%$ .
The setup, similar to a previous experiment, hosted two tenants’ partition replicas on a single DataNode. Initially, both tenants maintained normal QPS and latency levels under low traffic conditions, with the partition quota disabled.
At the 10-minute mark (indicated by the red line), we modeled a skewed partition traffic scenario for Tenant 1, directing a significant volume of traffic to Tenant 1’s partition. Since the current traffic did not exceed the tenant quota, the proxy-level restriction did not
Figure 7: Effectiveness of partition quota and WFQ
[Figure 8 panels: (a) a scaling case showing actual disk usage against the quota, with 10-day and 17-day predictions around the AutoScaling deployment; (b) weekly oncall counts (Jul–Feb) before and after deployment.]
reject any requests, resulting in zero error QPS for Tenant 1. Subsequently, the dual-layer WFQ mechanism was activated, aiming to ensure that the DataNode’s service capacity was shared proportionally across tenants. Although Tenant 2’s success QPS inevitably decreased by 25%, its latency remained unaffected, indicating that the dual-layer WFQ mechanism preserved Tenant 2’s isolation. However, for Tenant 1, the lack of DataNode-level limitations meant that ABase had to process all incoming requests, which led to a twenty-fold increase in latency, significantly degrading its quality of service. At the 37-minute mark, we enabled the partition quota (indicated by the green line). Tenant 1’s success QPS rapidly dropped to 3,000, matching the partition quota limit, and requests exceeding this threshold were rejected by the DataNode as error QPS. The success QPS for Tenant 2 also returned to its normal level. Importantly, the latency for successful requests from both tenants remained low throughout the experiment.
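The proportional sharing that WFQ provides can be illustrated with a single-layer weighted-fair-queuing sketch; ABase applies the same idea at two layers (tenant, then partition). The virtual-finish-time bookkeeping below is the textbook formulation, not ABase's implementation, and the weights are illustrative.

```python
import heapq

class WFQ:
    """Weighted fair queuing sketch: requests are served in order of
    virtual finish time, so capacity is split in proportion to weights."""

    def __init__(self, weights):
        self.weights = weights                     # tenant -> weight
        self.finish = {t: 0.0 for t in weights}    # last virtual finish time
        self.heap = []

    def enqueue(self, tenant, request, cost=1.0):
        # larger weight => smaller virtual-time increment => more service
        self.finish[tenant] += cost / self.weights[tenant]
        heapq.heappush(self.heap, (self.finish[tenant], tenant, request))

    def dequeue(self):
        _, tenant, request = heapq.heappop(self.heap)
        return tenant, request
```

With weights 2:1, a backlog from both tenants is drained roughly two requests from the heavier tenant for every one from the lighter, which is the proportionality the experiment demonstrates.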
# 6.3 Elasticity
This section shows the effectiveness of ABase’s predictive scaling policy, using statistics from historical records. Figure 8a illustrates an online scaling example in the search business, where the disk usage (blue line) shows a 24-hour periodicity with an increasing trend. The tenant quota is depicted by the red line. On day 10, ABase predicted the usage would reach 85% of the quota within a week (orange line), prompting a proactive quota increase to keep predicted usage below 65%. This adjustment matched actual usage, as shown in Figure 8a, effectively preventing user throttling.
To demonstrate the business impact of the automatic scaling mechanism, we tracked the number of upscaling oncalls (i.e., urgent contacts to technical support staff) over approximately six months before and after deployment, as depicted in Figure 8b. Only upscaling-related oncalls are displayed. The occurrence of emergency oncalls likely indicates that users have experienced throttling, thus impacting the business. After deployment, the number of oncalls decreased by approximately 65%, indicating a substantial alleviation of user throttling.
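The proactive policy described in this section (forecast a week ahead, upscale when the forecast crosses 85% of quota, and size the new quota so the forecast stays below 65%) can be sketched as follows. The 85%/65% thresholds come from the text, but the linear-trend forecaster and `plan_quota` helper are assumptions; ABase's predictor is more sophisticated.

```python
def plan_quota(daily_usage, quota, horizon=7, trigger=0.85, target=0.65):
    """Fit a linear trend to recent daily usage; if the forecast crosses
    trigger * quota within `horizon` days, return a raised quota that
    keeps the forecast under target * quota."""
    n = len(daily_usage)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_usage) / n
    denom = sum((x - mean_x) ** 2 for x in xs) or 1.0
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, daily_usage)) / denom
    # extrapolate `horizon` days past the last observation
    forecast = mean_y + slope * (n - 1 + horizon - mean_x)
    if forecast > trigger * quota:
        return forecast / target      # new quota: forecast <= target share
    return quota                      # no scaling needed
```

A flat usage series leaves the quota unchanged, while a rising trend triggers a proactive increase before throttling would occur.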
# 6.4 Resource Utilization
To demonstrate the effectiveness of our rescheduling mechanisms, we first conducted offline experiments on a resource pool comprising 1000 DataNodes. As shown in Figure 9a, the original storage and RU utilization of the DataNodes were highly dispersed, indicating that the load on the DataNodes was extremely uneven, which limited the rapid scaling of tenants on them. Following the application of Algorithm 2, as shown in Figure 9b, the load distribution across DataNodes was more balanced, with a 74.5% reduction in the standard deviation of RU usage and an 84.8% decrease in storage usage variance.
[Figure 9: DataNode storage utilization vs. RU utilization, with the ideal utilization marked. (a) Before Rescheduling; (b) After Rescheduling.]
This algorithm has been deployed in the online environment, executing once every 10 minutes. The changes in RU usage for a resource pool are illustrated in Figure 10. Following the rescheduling algorithms, the maximum RU utilization among DataNodes increasingly converged towards the average RU utilization. Consequently, the proposed rescheduling algorithm effectively mitigates resource skewness, facilitating better resource utilization and reducing the risk associated with highly loaded DataNodes.
[Figure 10: Maximum and average QPS among DataNodes over time (hours); after rescheduling starts, the maximum converges toward the average.]

From the perspective of overall production statistics, powered by data rescheduling, ABase achieves higher resource utilization than the single-tenant ABase-Pre. The average utilization rates of CPU, Memory, and Disk for each machine in ABase-Pre were only 17%, 52%, and 27%, respectively. After upgrading to ABase, these rates increased to 44%, 63%, and 46%. This is because, in the single-tenant design, the resources of low-utilization tenants cannot be reallocated to other tenants; moreover, as mentioned in Section 3, ABase-Pre must restrict the upper limit of resource utilization to tolerate single-node failures. By contrast, the multi-tenant ABase eliminates machines with low utilization and enables resource pools to achieve higher utilization rates without sacrificing robustness.
# 6.5 Cache Effectiveness
Table 2: Benefit summary by proxy cache.
We validated the effects of the proxy cache on six tenants within the Social Media and E-Commerce sectors. As shown in Table 2, the tenant Social Media 1 experiences extremely tight RU quotas during holiday periods, often resulting in throttling. Despite having 375 proxies, the original cache hit ratio was only 5%. Activating the proxy cache and dividing the 375 proxies into 75 groups increased the cache hit ratio to 86%, significantly reducing the underlying load and saving 85% of RU for this tenant. Note that this change is very lightweight, solely altering the proxy’s traffic routing strategy. Similarly, for the remaining two Social Media tenants, the cache hit ratios improved by 62% and 23%, with RU savings of 70% and 38%, respectively. For the three E-Commerce tenants, the cache hit ratios increased from 24% to 60%, with RU savings of 61%, 57%, and 79%, respectively.
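The grouping change described above can be sketched as key-hash routing into fixed proxy groups: each key is pinned to one small group of proxies, so its cached value is reused instead of being diluted across all 375 proxies. The hashing scheme and the `route` helper are illustrative assumptions, not ABase's routing code.

```python
import hashlib

def route(key, proxies, group_size=5):
    """Pin a key to one proxy group (e.g., 375 proxies -> 75 groups of 5),
    then spread load across the group's members deterministically."""
    n_groups = len(proxies) // group_size
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    group = h % n_groups                       # key -> group
    members = proxies[group * group_size:(group + 1) * group_size]
    return members[h // n_groups % group_size]  # key -> member in group
```

Because every request for a given key lands in the same small group, the per-key cache hit ratio rises without any change to the storage layer, matching the "lightweight" nature of the adjustment.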
# 7 Lessons in Practice
Resource Allocation. We regulate the size of the resource pool to ensure that its idle resources exceed the quota of any single tenant. In practice, we ensure that the size of the resource pool is at least ten times the quota of any single tenant. Furthermore, at least $2 0 \%$ of the resource pool consists of idle resources. This arrangement guarantees sufficient elasticity for any tenant while ensuring a controlled proportion of idle resources.
Resource Isolation. While increasing the scale of resource pools can enhance tenant elasticity when there is a significant proportion of idle resources, we recommend limiting the maximum number of tenants within a single resource pool and the maximum scale of each pool. Lessons learned from failures suggest that maintaining a moderate number of resource pools and tenants is crucial, as it avoids a large failure radius that could lead to severe online incidents. Furthermore, given that the aggregate quota of a resource pool should substantially exceed that of any individual tenant, we correspondingly regulate the maximum quota for each tenant.
Handling Spiky Workloads. To ensure rapid, second-level elastic scaling capabilities for tenants, we not only guarantee the reservation of idle resources at the entire resource pool level but also ensure a significant balance at the individual machine level. The idle resources are noticeably greater than any single tenant’s quota at the same level, enabling each tenant to at least double their quota in the short term to accommodate sudden traffic changes.
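The sizing rules above (a pool at least ten times any single tenant's quota, and at least 20% of the pool idle) can be expressed as a simple invariant check. The `pool_ok` helper is an illustration of those two rules, not part of ABase.

```python
def pool_ok(pool_capacity, tenant_quotas, idle):
    """Check the stated sizing rules for a resource pool:
    - the pool is at least 10x the largest single tenant quota;
    - at least 20% of the pool's capacity is idle."""
    return (pool_capacity >= 10 * max(tenant_quotas)
            and idle >= 0.2 * pool_capacity)
```

A pool failing either check would be resized or have tenants moved before it could compromise elasticity.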
Auto-scaling Principles. ABase approaches downscaling cautiously, prioritizing business stability. Overly aggressive downscaling might necessitate re-upscaling should business traffic rebound. Furthermore, even for tenants whose utilization has decreased but quota remains unscaled, this does not entail significant waste. A resource pool contains multiple tenants sharing idle resources. Some tenants’ idle resources support others’ burst traffic and growth, thereby maintaining a stable resource utilization rate.
# 8 Related Works
# 8.1 NoSQL Serverless Databases
Traditional NoSQL databases, such as Cassandra [22], have made substantial contributions to distributed database systems by emphasizing scalability, fault tolerance, and innovative consistency models, employing techniques such as sharding and replication. These systems excel in scenarios that require high write throughput and flexible schema designs for unstructured data. However, their architectures, originally designed for static resource allocation in single-tenant and on-premise environments, are deficient in native support for both elastic scaling and fine-grained performance isolation. This makes them less suited for cloud-native, multi-tenant serverless scenarios that demand dynamic resource provisioning and tenant-level SLA guarantees.
Multitenancy, an essential architectural approach for serverless databases, allows multiple tenants to share the same infrastructure, thereby enhancing scalability, flexibility, and cost-efficiency [38]. However, this architecture poses significant challenges, including tenant isolation, load balancing, autoscaling, and issues related to hot keys [18]. DynamoDB [12], a pioneering serverless key-value NoSQL database that provides a scalable and predictably performant service, has set a benchmark for performance in distributed databases. Although DynamoDB recognizes the need for traffic control and resource balancing in multi-tenant architectures, it does not disclose further technical details such as its rescheduling algorithms and scaling policies. As reported [12], DynamoDB supports trillions of API calls, peaking at 89.2 million QPS during the Amazon Prime Day shopping event. To support caching scenarios, DynamoDB introduces Amazon DynamoDB Accelerator (DAX) [1], supporting up to 10 nodes per tenant and millions of QPS. Microsoft CosmosDB [2] offers a fully managed serverless experience but imposes a capacity limit of 1 million request units per database, limiting its ability to handle large-scale workloads. Firestore [19] is tailored to enhance usability for web and mobile developers, offering real-time data synchronization and scalable development within the Firebase ecosystem.
# 8.2 Predictive AutoScaling
Autoscaling in cloud systems has drawn significant attention, with notable contributions from Qu et al. [36], Barnawi et al. [4], and
Lorido-Botran et al. [26]. These solutions are now extensively implemented across a variety of infrastructure services, such as databases [17, 25, 40] and microservices [3, 5, 48]. Autoscaling is typically categorized by scaling direction into horizontal [49] and vertical [37] types, as well as by timing into reactive and proactive types [36]. This paper concentrates on predictive scaling.
Workloads exhibiting regular periods have been shown to benefit significantly from proactive strategies, as demonstrated by Higginson et al. [14] and Cortez et al. [9]. However, the diversity of periods and trends introduces substantial forecasting challenges. To address these, Qin et al. have proposed a collection of robust decomposition methods [35, 44, 45]. Moreover, integrating multiple prediction models has proven effective in handling complex workload patterns in industrial applications. For instance, Seagull [33] classifies Microsoft Azure services into daily/weekly and stable/short-lived categories based on user activity, applying tailored prediction models for each. Kim et al. introduce a cloud workload prediction framework that incorporates multiple predictors [20]. Hu et al. describe a framework that integrates five distinct prediction models for effective virtual machine provisioning [15].
# 8.3 Resource Scheduling
Resource scheduling in cloud computing has been extensively studied in recent years. For example, Eigen [23] introduces a hierarchical resource management system, along with three heuristic-based resource optimization algorithms aimed at enhancing the resource allocation ratio without compromising resource availability. König et al. [21] propose a method that combines mathematical modeling with solvers to address the tenant placement problem in a Database-as-a-Service cluster, with a focus on minimizing the probability of failovers. Chen et al. [8] develop a method based on graph partitioning and solver-based algorithms to address resource allocation with service affinity in large-scale cloud environments. RAS [29] employs Mixed-Integer Programming (MIP) to formulate the capacity reservation challenge for large-scale clusters. To adhere to the Service Level Objective (SLO) of achieving a solution within one hour, multi-phase solving techniques and variable aggregation methods are utilized.

# Abstract

Multi-tenant architectures enhance the elasticity and resource utilization of NoSQL databases by allowing multiple tenants to co-locate and share resources. However, in large-scale cloud environments, the diverse and dynamic nature of workloads poses significant challenges for multi-tenant NoSQL databases. Based on our practical observations, we have identified three crucial challenges: (1) the impact of caching on performance isolation, as cache hits alter request execution and resource consumption, leading to inaccurate traffic control; (2) the dynamic changes in traffic, with changes in tenant traffic trends causing throttling or resource wastage, and changes in access distribution causing hot key pressure or cache hit ratio drops; and (3) the imbalanced layout of data nodes due to tenants' diverse resource requirements, leading to low resource utilization. To address these challenges, we introduce ABase, a multi-tenant NoSQL serverless database developed at ByteDance.
ABase introduces a two-layer caching mechanism with a cache-aware isolation mechanism to ensure accurate resource consumption estimates. Furthermore, ABase employs a predictive autoscaling policy to dynamically adjust resources in response to tenant traffic changes, and a multi-resource rescheduling algorithm to balance resource utilization across data nodes. With these innovations, ABase has successfully served ByteDance's large-scale cloud environment, supporting a total workload that has achieved a peak QPS of over 13 billion and total storage exceeding 1 EB. (Category: cs.DB)
# 1 Introduction
The advent of large-scale foundation models (FMs), such as large language models (LLMs), is reshaping the software development process. By leveraging training on source code repositories and textual artifacts from the software development process, these models can support software makers in various tasks such as code generation [41, 43, 64], code documentation [20], and test generation [35, 55]. The inherent ability of FMs to process and generate natural language makes them promising candidates for interpreting diverse stakeholder inputs, a cornerstone of Requirements Engineering [48]. However, despite the apparent synergy and their broad capabilities, FMs have not yet streamlined all parts of software development, particularly the nuanced demands of refining requirements.
In fact, requirements refinement (i.e., elaborating on requirements, decomposing them into more manageable parts, resolving ambiguities and conflicts) [10, 11, 38] is a critical yet overlooked phase of software development in the AI era. In practice, as stakeholders articulate expectations, developers must manage ambiguities, inconsistencies, and incompleteness [67] in emergent requirements by engaging in dynamic dialogues with stakeholders. Complete and consistent requirements remain a principal driver of project success [18], as inadequate or incomplete requirements can lead to costly rework, misaligned functionality, and overall project failures [46]. However, taking existing requirements, which might have been initially elicited in a raw, vague, or high-level form, and making them precise, detailed, complete, consistent, and understandable requires capabilities beyond brief, one-shot interactions.
Current FMs, despite their capacity to generate coherent, context-sensitive outputs, often rush to generate solutions. Wu and Fard [67] found that in more than 60% of the problem statements that required clarification, FMs still generated code rather than asking the clarifying questions crucial for effective refinement. Bajpai et al. [6] also observed that FMs may prematurely attempt to resolve tasks, even when the provided information is insufficient. FMs’ dialogue mechanisms, trained for conversational brevity, limit deeper investigation into ambiguous or incomplete requirements. This constraint can hamper the discovery of nuanced software needs and, in turn, compromise solution quality. In fact, researchers have identified a frequent disconnect between developers’ expectations and the responses they receive for software engineering tasks, because FMs are prompted with insufficient context, specifications, or clarity [13].
Clarified and detailed requirements directly inform design and implementation. For example, a user may request “a system that can quickly search through my documents”. However, this requirement contains many ambiguities that need clarifying before implementation starts, such as the type of documents to be searched, e.g., text, images, or mixed; the search criteria to support, e.g., keywords, metadata, or full-text; and latency expectations, e.g., milliseconds, seconds, or minutes. Intentions, on the other hand, while abstract and high-level, guide the overall direction and decision-making. A user’s request, “I want to build a website displaying the latest tech news every day,” could be motivated by the user’s need to stay up-to-date with technology news. Identifying this intention makes it possible to offer alternative solutions that suit the purpose, such as setting up a workflow that sends the user a daily tech news digest. This premature fixation on a solution has long been recognized as a pitfall in requirements gathering [51] and underscores the need for intent alignment [12].
While some recent research efforts have begun investigating more robust conversational strategies for FMs, particularly in coding tasks [47, 67], these often offer incremental improvements for specific, well-defined interactions, such as code generation for programming competition questions. However, the complexities of refining requirements for real-world software projects, which demand nuanced understanding, iterative clarification, and extended engagement, remain largely unaddressed. A holistic approach that facilitates this kind of dialogue, mirroring effective human-to-human interactions in this domain, has yet to emerge.
Addressing this critical gap for real-world requirements refinement, this paper introduces a novel methodology to overcome the short and often unproductive discussion patterns exhibited by existing FM-based solutions. We propose an interactive framework designed to enable extended discourse, pinpoint ambiguities, and systematically address inconsistencies in requirement statements. By combining theory-of-mind (ToM) capabilities [3] with a multi-agent system, our approach ensures comprehensive coverage of stakeholder needs while preserving the efficiency gains associated with automated dialogue support. Through this framework, we aim to bridge the gap between the promise of FMs for automated software development and the realities of prolonged, detail-oriented conversations that are essential for robust requirements clarification and refinement. Our work lays the groundwork for more intuitive and effective development environments where AI collaborators can deeply understand and co-create software aligned with true stakeholder intentions.
Specifically, we have implemented these ToM and multi-agent approaches in a tool called AlignMind. This tool begins by taking an initial user requirement as input and then, through a series of clarifying questions, deduces the user’s intent and determines the final requirements. The requirements are then translated into a natural language workflow consisting of step-by-step instructions. This workflow serves a dual purpose: first, as a crucial validation artifact for the refined requirements, ensuring they are complete and actionable; and second, as an explicit representation of the plan to fulfill those requirements. We evaluate this integrated approach through the following three research questions:
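The clarify-then-refine loop that AlignMind performs can be sketched at a high level as follows. The callables stand in for the ToM and multi-agent components; their names (`ask_user`, `find_ambiguities`, `to_workflow`) are illustrative abstractions, not AlignMind's API.

```python
def refine(initial_requirement, ask_user, find_ambiguities, to_workflow):
    """Iteratively clarify a requirement, then emit a natural language
    workflow: a high-level sketch of the loop described in the text."""
    requirements = [initial_requirement]
    # keep asking until no ambiguities remain in the current requirements
    while (questions := find_ambiguities(requirements)):
        for q in questions:
            requirements.append(ask_user(q))   # user's answer becomes a requirement
    # the workflow doubles as a validation artifact for the refined requirements
    return requirements, to_workflow(requirements)
```

The returned workflow is the step-by-step plan the text describes; validating it against the refined requirements closes the loop.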
RQ1 Can AlignMind output high-quality requirements and natural language workflows during the requirements clarification and refinement process?
Motivation: We aim to evaluate how effectively AlignMind can improve the output during the requirements clarification and refinement process. The evaluation should take into account both the generated requirements and the natural language workflow proposed by AlignMind, as the workflow’s quality is indicative of how well the requirements have been clarified and operationalized. We want to compare the improvement provided by AlignMind compared to the baseline, which is directly prompting one FM to act as a requirements refiner.
Results: We first conduct an evaluation using a panel of three FM-powered judges, based on five rubrics (i.e., assessment criteria) and 150 diverse scenarios in which users want to refine requirements before building a software-based solution. From this evaluation, we find that AlignMind produces significantly higher output quality than the baseline. Then, as an objective measure, we compute requirement richness, quantifying vocabulary variety using lexical richness [34, 59], over the final set of requirements output by the baseline system and by AlignMind. We find that lexical richness is eight times higher when AlignMind is used for refining requirements compared to the baseline. Furthermore, AlignMind enables longer multi-round conversations than the baseline.
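Lexical richness can be operationalized in several ways; one simple proxy is the type-token ratio sketched below. The exact measure used in the evaluation follows [34, 59] and may differ from this illustration.

```python
import re

def type_token_ratio(text):
    """Type-token ratio: distinct word types divided by total tokens.
    A simple lexical-richness proxy; the paper's metric may differ."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0
```

A richer requirements document repeats itself less, so its ratio is higher for texts of comparable length (length normalization matters for fair comparisons).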
RQ2 Is the output of AlignMind grounded on user conversations?
Motivation: FM-powered systems are prone to hallucination [25]. In the case of requirements refinement, the system could hallucinate new requirements that the user did not ask for during the conversation. Therefore, it is critical to investigate to what extent the final set of requirements produced by AlignMind is affected by hallucination. For this purpose, we investigate whether the generated requirements by AlignMind are grounded in content provided by the user, in contrast to generating content that is inconsistent with the user’s conversations.
Results: In abstractive summarization [22], a summary and its source document are deemed factually consistent when the summary does not introduce any information that is not already present in the source [33]. Similarly, in our context, we have to ensure that the final set of requirements does not include any new requirements that were not previously mentioned during the AI-human conversation. By assessing this consistency through a panel of FM-powered judges, we find that AlignMind shows no tendency to hallucinate. The requirements generated by AlignMind maintain a level of consistency with previous conversations comparable to the baseline. In fact, both methods achieved a perfect consistency score in most scenarios.
RQ3 What are the operational costs associated with AlignMind?
Motivation: Before introducing an automated system such as AlignMind to improve the requirements clarification and refinement process, organizations need to consider the associated costs in practice. Therefore, we set out to investigate the operational costs of leveraging AlignMind for requirements refinement tasks in real-world scenarios.
Results: While AlignMind greatly enhances the richness of requirements, our findings suggest that this improvement comes with a cost overhead that must be taken into account. We observe that AlignMind makes 10.6 times more API calls than the baseline and, based on the median, uses 30 times as many tokens. Balancing cost and performance is crucial when deploying an FM-powered requirements refinement system in real-world applications.
The remainder of this paper is organized as follows: We summarize related work in Section 2. Section 3 describes our approach for refining requirements using a multi-agent system with ToM capabilities. Section 4 presents our experimentation setup and results. Section 5 discusses the broader implications of our work. Section 6 outlines the threats to validity. Section 7 concludes the paper.
# 2 Background and Related Work
In this section, we situate our work in the context of the literature on using language models for requirements engineering and leveraging the ToM capabilities of language models.
# 2.1 Language Models for Requirements Engineering
While several studies [5, 52, 65] explored the use of language models in requirements elicitation, some recent studies [17, 42, 47, 53, 54] have tried to resolve requirements-related problems with the assistance of language models. Luitel et al. [42] used language models to find potential incompleteness in requirements. Santos et al. [54] investigated the suitability of using LLMs with in-context learning to check the requirements satisfiability given a system specification and associated domain knowledge. Fazelnia et al. [17] proposed to integrate Satisfiability Modulo Theories (SMT) solvers with LLMs to detect conflicting software requirements. ClarifyGPT [47] added requirements clarification for LLM-based code generation, improving task performance that can be verified by generating solution candidates and test cases. Ruan et al. [53] demonstrated that using LLMs for developer intent extraction for automated program repair tasks is effective. However, their approach infers intent from project structure and program behaviour instead of natural language utterances. Arora et al. [5] explored the potential of LLMs in driving RE processes and conducted a preliminary feasibility evaluation of integrating LLMs into requirements elicitation. Recent work [52, 65] has investigated prompt patterns suitable for requirements elicitation.
Although there have been attempts to use language models in the requirements elicitation process, we observe a gap in using LLMs’ conversational ability to clarify and refine requirements iteratively. Furthermore, the feasibility of FMs for requirements refinement should be explored when implementing end-to-end software projects, beyond competitive coding tasks and method-level code completion.
# 2.2 Theory-of-Mind in Language Models
Theory-of-Mind (ToM) refers to the process of inferring a user’s intents, beliefs, and goals from their utterances [3]. With the advancements in LLMs, the research community is showing an increased interest in investigating their ToM capabilities to guide human-AI interaction (HAI) research [61]. Wang et al. [62] harnessed linguistic features extracted from conversations between students and a virtual teaching assistant to gain insights into students’ perceptions. Their aim was to integrate the Mutual Theory of Mind concept into human-AI interactions. Jung et al. [27] proposed a new framework to improve LLM’s ToM reasoning. Their approach is designed to deduce the beliefs of the agents in an interaction by inferring others’ perceptions and isolating the context perceived by others. More recently, Shi et al. [57] showed AI systems can develop a sophisticated "theory of mind" by combining information from multiple modalities (e.g., language, vision, gestures) and reasoning about the mental states of multiple agents simultaneously. Wilf et al. [66] demonstrated perspective-taking, i.e., placing oneself in another’s position, as a promising direction for improving LLMs’ ToM capabilities. Their work is inspired by Simulation Theory, the prominent cognitive science perspective which argues that perspective-taking is the initial step to simulating another’s mental state. Amirizaniani et al. [3] assessed the abilities of LLMs to perceive and integrate human intentions and emotions into their ToM reasoning processes for open-ended questions by using posts from Reddit’s ChangeMyView platform. Fang et al. [16] explored the potential of using the ToM capabilities of LLMs to proactively identify possible errors before crucial actions are taken. This approach aims to ensure that LLM-based agents can be effectively deployed in critical environments. 
Alongside these studies, we hypothesize that LLMs’ inherent ToM capabilities can be enhanced and effectively leveraged specifically for intent alignment and requirement refinement in the software engineering domain.
Based on insights from these related publications, we see a promising opportunity to illustrate the effectiveness of using a multi-agent system equipped with Theory of Mind (ToM) capabilities for requirements clarification and refinement through multi-round conversations.
# 3 Solution Design
This section discusses the iterative prototyping approach that we adopted to develop an improved FM-powered requirements refinement system, followed by a detailed discussion of the design decisions we made after the iterative feedback collection process. The goal is to build an FM-powered system where users, via natural language, can refine the requirements of a software solution that they wish to be built. The final output of this system will be a set of refined requirements for a given initial user query and a natural language workflow to achieve these requirements.
# 3.1 Iterative Prototype Development
We adopted an iterative prototyping approach to develop a conversational FM-powered solution for refining requirements. By providing a prototype for users to engage with, we were able to pinpoint any challenges they might face during its use and make improvements based on their feedback. To facilitate this process, we recruited six software engineering experts. These participants worked with a prototype of our tool to clarify the requirements for a specific task that needed refinement. Using their insights, we continually enhanced the prototype and identified the core concepts that would eventually guide the implementation of our solution.
Our iterative prototyping process consisted of the following activities:
(1) Implement an initial prototype of an FM-powered requirement refinement system.
(2) Conduct an interview with a participant where they will first interact with the prototype and then provide answers to a questionnaire.
(3) Incorporate new feedback from the participant into the prototype.
(4) Iterate from step 2 until all six participants are interviewed.
Next, we detail each step of the process.
Initial Prototype Construction. To facilitate user feedback collection about FM-powered requirements refinement, we developed a Terminal-based User Interface (TUI), implemented using the Textual 1 framework, as shown in Figure 1. The TUI consists of three main panels. The first panel, positioned horizontally, allows the user to input their name and select a configuration. This configuration option helped us to present different backend implementations to the user and observe the interactions. The second panel on the right features an input field for users to express their intents in natural language (i.e., ask a question or seek clarification) and a pane displaying the past conversation history between the user and the prototype. Finally, the ‘Requirements’ and ‘Workflow’ panels on the left are progressively updated as sufficient information about the user’s intent is gathered.
Fig. 1. The prototype’s TUI during an example session about a weather-forecast application, showing the generated requirements and workflow (left), the conversation history, and the user input field.
The initial version of the prototype directly forwarded the user’s queries to an FM along with a system prompt, which requested the FM to gather users’ requirements and respond with requests for clarification as needed, or with a finalized set of user requirements and a natural language workflow necessary to achieve them. The full system prompt is available in Appendix A.5.1.
Participant Recruitment. Via communication in the internal mailing lists of a tech company, we recruited six individuals to converse with the requirements refinement system and complete the questionnaire. The software development experience of the participants ranges from 4 to 20 years, with a median of eight years. Their experience using LLMs spans from 5 months to 24 months, with a median of 14 months.
User Interviews. The first two authors conducted face-to-face interviews with the participants. The interviews lasted between 60 and 90 minutes and were divided into two parts. The first part of the interview consisted of interaction with the prototype. In this session, the participants were presented with an initial set of ten example tasks that required refinement and spanned multiple domains. The users were asked to choose one of these tasks or create one on their own, inspired by the examples provided. Participants then engaged with the prototype, with their session data being saved into an SQLite database upon exiting the application. The time taken and the number of conversation rounds involved in completing the requirements refinement by each user were recorded. At the end of the interaction with the prototype, participants were asked to complete a questionnaire designed according to the procedures of conducting controlled experiments [31]. The questionnaire included the following questions:
• Programming experience: How many years of software development experience do you have?
• LLMs experience: For how many months have you been using LLMs?
• Conversation: How would you rate the quality of your conversation with the tool based on the following attributes (ignoring the generated requirements and the workflow): (1) coherence, (2) identification of interests, and (3) sufficiently detailed responses? Choose from the following options: Strongly Dissatisfied / Dissatisfied / Neutral / Satisfied / Strongly Satisfied.
• Requirements: How satisfied are you with the generated requirements? Choose one of the following options: Strongly Dissatisfied / Dissatisfied / Neutral / Satisfied / Strongly Satisfied.
• Workflow: How satisfied are you with the generated workflow? Choose from the following options: Strongly Dissatisfied / Dissatisfied / Neutral / Satisfied / Strongly Satisfied.
• Strengths: What did you like the most about the tool?
• Weaknesses: What challenges or issues did you encounter when using the tool?
• Future use: Would you consider using the tool to refine your requirements in the future?
• Overall experience: Can you briefly describe your overall experience with the tool in two or three sentences?
Post-user study analysis. After each participant had completed the questionnaires, we transcribed the responses from all participants into a structured document to draw further conclusions. Then, the authors discussed the transcribed responses to determine if any changes should be made to the prototype. Once the final prototype was obtained, we extracted the list of features of this prototype, then used open coding to group related features into more abstract concepts.
Observation 1: All participants agreed on the usefulness of using an FM-powered tool for requirements refinement. In the responses, the participants appreciated the convenience offered by such a tool. For example, P3 stated, “It does help you reflect on and reason about the requirements. Maybe the nicest aspect was the final list of requirements. I’m not sure if I would have been able to create such a detailed list myself (at least not in one go).” The participants echoed the sentiment that such a tool helps the user by guiding them through the requirements clarification and refinement process, as shown by the comment by P5, “[tool] seems promising in assisting users to refine requirements for achieving a complex goal.”
Observation 2: Participants identified several challenges with the requirements refinement system prototypes, such as a lack of depth in the generated solutions, repetition, and unnatural conversations. Two participants found that the requirements generated by the tool were not as in-depth as they wanted. As P1 puts it, “[tool] ended up coming up with a workflow that on the surface seemed to make sense. However, [tool] seemed to just be coming up with ‘cookie-cutter’ requirements.” Another comment by the same participant, “[tool] seemed to prematurely arrive at a workflow,” may explain the lack of depth in the generated output. Two of the participants found the tool repetitive at times. P4 explains this as follows: “[Tool] had more repetitive questions, sometimes felt like repeating myself multiple times.” P3 mentioned that the conversation “feels a bit too templated/scripted.” The same participant described a scenario in which the conversation flow with the tool broke when they tried to add additional details to a question, and the tool ended up responding to two things simultaneously. This observation emphasized the need to confirm with the user before moving to the next topic during a conversation.
Fig. 2. An overview of the proposed requirements refinement system, AlignMind, which consists of Router, Requirement Refiner, Workflow Generator, Workflow Refiner, and ToM helper agents.
# 3.2 Four Key Pillars of our Solution
The final prototype that we obtain after the iterative prototyping phase is named AlignMind and is capable of systematically capturing requirements through dialogue and developing a refined workflow. We identify multiple distinguishing features of AlignMind compared to directly prompting FMs.
• Multi-Agent Architecture. FMs often struggle with handling long, multi-objective prompts effectively [56]. Decomposing the requirements refining task into smaller, independent components improves performance. Therefore, we implement a multi-agent architecture where each agent has a distinct, well-defined role.
• Theory-of-Mind. Effective AI-human collaboration requires understanding the implicit human intent beyond what has been explicitly communicated. Prior research in Human-AI Interaction (HAI) supports the idea that AI systems with ToM capabilities foster more constructive and coherent communication [61]. Hence, our solution incorporates ToM units that infer human characteristics, needs, and goals in the background, leading to more context-aware interactions and better-aligned responses in the requirement refinement task.
• Iterative Improvement. Users rarely articulate perfect requirements in a single step; refining them over multiple rounds enhances precision and clarity [47]. Thus, our solution supports multi-round conversations while maintaining a persistent internal state across invocations. This enables users to iteratively refine their requirements and build a well-structured natural language workflow.
• Intent Decomposition. Breaking down complex problems into smaller sub-problems is a fundamental principle in computational thinking [50], leading to more manageable and accurate solutions. Accordingly, our approach decomposes user intent into subtopics and generates targeted questions for each, ensuring a more structured and thorough understanding of the requirements, ultimately improving the final generated workflow.
# 3.3 Multi-agent Architecture
During the process of refining user requirements, AlignMind has to achieve three objectives: maintaining a dialogue with the user while resolving ambiguities, preparing a complete summary of requirements, and generating a natural language workflow to achieve those requirements. However, we found that current FMs exhibit degraded performance when used as a single agent with very long, multi-objective prompts. To address this, we propose an approach that involves multiple FM-powered agents, each responsible for a specific task. This approach aligns with the conventional Software Engineering wisdom of decoupling and helps ensure that changes to one agent’s system prompt will not interfere with another agent’s capabilities.
To better explain our approach, we consider a running example as follows.
# Running Example
A user submits the query, “I want to build an app to receive detailed weather forecasts for specific regions.” After multiple iterations with our multi-agent solution and internal processes, the user has refined requirements and a workflow with multiple steps.
Figure 2 illustrates different FM-powered agent components of the solution and the data flow across the agents of AlignMind when such a query is submitted. Through multi-round interactions with the user, our solution refines the user’s requirements and generates the final workflow to fulfill these requirements. The main entry point to AlignMind is the Router Agent, which processes the user’s query and directs it to the appropriate agent. If the query involves refining the user’s requirements, it is handled by the Requirement Refiner Agent and subsequently the Workflow Generator Agent; otherwise, the query is managed by the Workflow Refiner Agent. To assist with the requirements refinement, the system also includes a group of ToM helper agents. We next explain each of the agents’ behaviour in detail.
3.3.1 Router Agent. The Router Agent receives a user’s query and routes it to either the Requirement Refiner Agent or the Workflow Generator Agent based on the query’s purpose. The system prompt associated with this agent defines a set of instructions for the FM to follow to identify whether a query should be sent to the Requirement Refiner Agent or the Workflow Refiner Agent. To help the FM make the right decision, we provide three few-shot examples, each of which illustrates the next agent to handle the query. The first example scenario is when a user asks for an update to the requirements, the second is when a user asks for a change to the workflow with no requirements, and the third is when a user requests to change the workflow when requirements have already been generated. To ease the post-processing of the Router Agent’s response, we explicitly prompt the model to generate only the relevant agent to which we should forward the query (i.e., either “RequirementRefiner” or “WorkflowRefiner”). For our running example, the output of the Router Agent would be “RequirementRefiner”, indicating that the query should be forwarded to the Requirement Refiner Agent.
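The routing step can be sketched as follows. The few-shot wording, the `call_fm` stub, and the routing of example 2 to the Requirement Refiner Agent are our illustrative assumptions, not AlignMind's exact prompt or implementation:

```python
# Minimal sketch of the Router Agent. `call_fm` is a stub standing in
# for a real FM client; the few-shot examples are paraphrased.
ROUTER_SYSTEM_PROMPT = """You route queries in a requirements-refinement tool.
Reply with exactly one word: "RequirementRefiner" or "WorkflowRefiner".
Example 1 (user asks to update the requirements)      -> RequirementRefiner
Example 2 (workflow change, no requirements yet)      -> RequirementRefiner
Example 3 (change an already-generated workflow)      -> WorkflowRefiner
"""

def call_fm(system_prompt, user_query):
    # Stub FM: a real implementation would send both prompts to an FM API.
    if "workflow" in user_query.lower():
        return "WorkflowRefiner"
    return "RequirementRefiner"

def route(user_query):
    # Post-process defensively: keep only the expected label so the
    # downstream dispatch stays trivial.
    raw = call_fm(ROUTER_SYSTEM_PROMPT, user_query).strip()
    return raw if raw in {"RequirementRefiner", "WorkflowRefiner"} else "RequirementRefiner"

print(route("I want to build an app to receive detailed weather forecasts"))
```

For the running example, the query contains no workflow-related wording, so the stub (like the real agent) routes it to the Requirement Refiner Agent.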
3.3.2 Requirement Refiner Agent. The goal of the Requirement Refiner Agent is to interact with users through a set of clarification questions to clarify their intent and develop a set of detailed requirements. The refinement process is carried out iteratively, as shown in the top portion of Figure 2. As the first step, the user interacts with the Requirement Refiner Agent to clarify their intent, supported by a set of FM-powered agents that are based on the ToM concept and further detailed in Section 3.4. These agents infer different perspectives related to the context of the query, among them the user’s interests, topics, goals, sentiment, and expertise. The feedback from these ToM helpers is returned to the Requirement Refiner Agent, allowing it to provide a more accurate and targeted response to the user. The user can provide clarifications or ask follow-up questions in natural language. Once the user’s intent is clarified through the dialogue, and when the Requirement Refiner Agent determines it has collected sufficient information, it produces a detailed set of requirements, summarizing the dialogue and highlighting key points. This output is then passed to the Workflow Generator Agent.
3.3.3 Workflow Generator Agent. Subsequently, the Workflow Generator Agent translates the requirements forwarded by the Requirement Refiner Agent into an actionable plan in natural language to achieve the user’s requirements. This step is integral to the refinement process for three reasons: (1) It forces a check on the completeness and consistency of the requirements; if a coherent workflow cannot be generated, it indicates gaps in the requirements. (2) The generated workflow provides a more concrete artifact for stakeholder validation than textual requirements alone. (3) It represents the first step in operationalizing the user’s intent, paving the way for future (semi-)automated execution. However, as observed in previous work [44], where the authors found that 26.4% to 73.7% of FM-generated output requires parsing or post-processing for code translation tasks, we observed that the Workflow Generator Agent can sometimes produce the workflow with additional text. Therefore, we implement a post-processing step that removes any free-form text and only retains the numbered steps as initially requested.
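The post-processing step can be sketched as a simple line filter that keeps only numbered steps; this is our illustration of the idea, not the authors' exact implementation:

```python
import re

def extract_numbered_steps(fm_output):
    # Drop free-form text and keep only lines that begin with "<n>.",
    # the format the Workflow Generator Agent was asked to produce.
    steps = []
    for line in fm_output.splitlines():
        match = re.match(r"\s*(\d+)\.\s+(.*\S)", line)
        if match:
            steps.append(f"{match.group(1)}. {match.group(2)}")
    return steps

raw_output = """Sure! Here is the workflow you asked for:
1. API call to fetch the list of major Canadian cities.
2. API call to a weather API to fetch current conditions.
Let me know if you need changes."""
print(extract_numbered_steps(raw_output))
```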
3.3.4 Workflow Refiner Agent. A second use case of AlignMind involves the enhancement of an existing workflow. This flow is shown in the bottom portion of Figure 2. For instance, a user may want to add missed steps during the initial requirements refinement, modify incorrectly defined steps or even remove unnecessary ones. The Workflow Refiner Agent handles these adjustments based on the user’s prompts. For instance, a user might request: “Can you change the third step in the workflow by replacing WeatherAPI with OpenWeatherMap API?”. In such cases, the Workflow Refiner Agent processes the requested changes, adjusts the workflow, and returns an improved version in real-time. This iterative cycle ensures that the user’s evolving preferences and requirements are continually incorporated into the workflow, maintaining alignment with their intent.
# 3.4 Improvements based on Theory-of-Mind (ToM)
Fig. 3. An overview of the topic and question generation workflow.
We use multiple ToM-based agent components to enhance the requirements clarification and refinement process. Accordingly, we provide a detailed definition of each component and its role in achieving the user’s intent.
3.4.1 Topics & Questions Decomposer Agent. Upon receiving the initial user requirement, the Requirement Refiner Agent forwards it to the Topics & Questions Decomposer Agent, which defines topics related to the user requirement and suggests clarification questions for each topic. These topics and questions serve to clarify the user’s intent so that a clear and detailed requirement can be developed. This agent works in three steps. First, it generates three groups of topics, each group containing a maximum of five topics. Second, it self-reflects on the generated groups to identify an optimal group of at most five topics; the optimal group is expected to cover a large spectrum of topics related to the user’s initial requirement. Third, these results are sent to the question generator agent, which identifies a maximum of five questions for each topic. In our running example of the user asking to build a weather forecast application, the generated subtopics are high-level, such as App User Needs and Goals, Core Features, Weather Data Sources and APIs, Technology Stack, and Deployment Platforms. Each subtopic is accompanied by questions for framing and clarifying the user’s intent. For the App User Needs and Goals topic, potential questions might include:
• What specific weather information does the user want (e.g., temperature, precipitation, wind speed, humidity)?
• What is the user’s preferred frequency and granularity of forecasts (hourly, daily, weekly)?
• What are the user’s desired regions? For example, are they local areas, global regions, or user-defined locations?
These subtopics and their corresponding questions are sent back to the Requirement Refiner Agent, which iteratively refines the requirements with the user across the subtopics independently.
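The three-step decomposition can be sketched as follows. The canned topics mirror the running example; a real implementation would obtain topic groups and questions from an FM rather than from stubs:

```python
MAX_TOPICS, MAX_QUESTIONS = 5, 5

def generate_topic_groups(intent):
    # Step 1 (stubbed): the FM proposes three candidate topic groups,
    # each with at most five topics.
    canned = ["App User Needs and Goals", "Core Features",
              "Weather Data Sources and APIs", "Technology Stack",
              "Deployment Platforms"]
    return [canned, canned[:4], canned[:3]]

def select_optimal_group(groups):
    # Step 2: self-reflection, approximated here by choosing the group
    # with the broadest coverage, capped at five topics.
    return max(groups, key=len)[:MAX_TOPICS]

def generate_questions(topic):
    # Step 3 (stubbed): up to five clarification questions per topic.
    return [f"Regarding '{topic}', what do you expect from the app?"][:MAX_QUESTIONS]

intent = "I want to build an app to receive detailed weather forecasts."
topics = select_optimal_group(generate_topic_groups(intent))
question_plan = {topic: generate_questions(topic) for topic in topics}
print(len(topics))
```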
3.4.2 User Interaction with the Requirement Refiner Agent for Clarifications: While the Topics & Questions Decomposer Agent is invoked once to generate topics and their questions, the Requirement Refiner Agent sends back one question at a time to the user in an iterative fashion. This agent also ensures that questions are not repeated, focuses on one topic at a time, and tracks the evolution of the discussion with the user. During the conversation, AlignMind uses two strategies to determine whether a certain subtopic is covered and the conversation should move to a new subtopic. First, the Requirement Refiner Agent performs a self-check to determine whether the conversation history contains a sufficient number of question-answer pairs to cover each of the subtopics. Second, AlignMind checks whether a hard cut-off of $n$ questions has been reached for the subtopic. We use $n = 5$ in the experiments for this paper as a reasonable cutoff. If either of these two conditions is satisfied, the Requirement Refiner Agent moves the conversation to the next topic.
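The two advancement conditions reduce to a simple disjunction, with n = 5 as in the paper's experiments; the self-check itself is stubbed as a boolean input here:

```python
N_CUTOFF = 5  # hard cut-off used in the paper's experiments

def should_advance(num_questions_on_topic, fm_says_covered, n=N_CUTOFF):
    # Move to the next subtopic if the FM's self-check deems the current
    # one covered, or if the hard cut-off of n questions has been reached.
    return fm_says_covered or num_questions_on_topic >= n

print(should_advance(2, fm_says_covered=True))   # covered early
print(should_advance(5, fm_says_covered=False))  # cut-off reached
```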
3.4.3 Expertise ToM Helper: After each user iteration, including the first one, AlignMind sends the entire dialogue, including the current iteration, to the multiple ToM helpers for feedback, including the Expertise ToM Helper. The Expertise ToM Helper analyzes the user’s language and conversational history, performing reasoning to classify the user’s expertise level as one of the following: “Novice”, “Intermediate”, or “Expert”. This classification eventually guides the Requirement Refiner Agent and the Workflow Generator Agent to provide responses that align with the expertise of the user. For the first iteration of our running example, the Expertise ToM Helper returns “Novice”. Based on this estimated novice level of experience, the Requirement Refiner Agent ensures that the user is not overwhelmed with complex technical jargon in its responses. As the conversation continues, the user’s estimated level of expertise may evolve.
3.4.4 Sentiment ToM Helper: After each user utterance, AlignMind also sends the entire dialogue to the Sentiment ToM Helper. It monitors the sentiment flow of the human-agent conversation, categorizing it as “Negative”, “Neutral”, or “Positive”. Based on this estimated sentiment, the Requirement Refiner Agent can adjust its discussion with the user (e.g., moving forward to the next question, rephrasing the topic, and/or changing the tone of the response).
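Both helpers share the same shape: the full dialogue goes to an FM whose reply is constrained to a fixed label set. A generic sketch, with a stubbed FM and illustrative instruction strings:

```python
EXPERTISE_LABELS = ("Novice", "Intermediate", "Expert")
SENTIMENT_LABELS = ("Negative", "Neutral", "Positive")

def call_fm(instruction, dialogue):
    # Stub FM: a real helper would prompt an LLM with the full dialogue.
    return "Novice" if "expertise" in instruction else "Neutral"

def tom_helper(dialogue, labels, instruction, default):
    # Guard against free-text replies by falling back to a default label.
    reply = call_fm(instruction, dialogue).strip()
    return reply if reply in labels else default

dialogue = "User: I want to build an app for weather forecasts."
print(tom_helper(dialogue, EXPERTISE_LABELS, "classify the user's expertise", "Intermediate"))
print(tom_helper(dialogue, SENTIMENT_LABELS, "classify the sentiment", "Neutral"))
```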
3.4.5 Extendability of Helpers: The implementation of our ToM-based architecture within AlignMind provides several advantages regarding extensibility. Specifically, organizations may require domain-specific ToM helpers to assist the requirement refinement process by providing domain knowledge unique to each business. AlignMind can support the seamless integration of such helpers as plugins or extensions. This plugin architecture allows organizations to flexibly choose one or more ToM helpers and accommodate future ones extending AlignMind capabilities. Additionally, there is a clear separation of concerns between the various ToM helpers, which enhances their maintainability. That also enforces the concept of specialized agents as discussed in Section 3.2.
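One way to realize the plugin idea is a simple helper registry: helpers register under a name, and the system fans the dialogue out to every registered helper. The decorator-based registration below is our sketch, not AlignMind's actual extension API:

```python
TOM_HELPERS = {}

def register_helper(name):
    # Organizations can plug in domain-specific helpers by registering them.
    def wrap(fn):
        TOM_HELPERS[name] = fn
        return fn
    return wrap

@register_helper("expertise")
def expertise_helper(dialogue):
    return "Novice"  # stubbed FM call

@register_helper("sentiment")
def sentiment_helper(dialogue):
    return "Neutral"  # stubbed FM call

def run_helpers(dialogue):
    # Fan the dialogue out to every registered helper; each helper is
    # independent, preserving the separation of concerns noted above.
    return {name: helper(dialogue) for name, helper in TOM_HELPERS.items()}

print(run_helpers("User: build me a weather app"))
```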
# 4 Evaluation Design and Results
Fig. 4. An overview of the evaluation of AlignMind.
Figure 4 provides an overview of the evaluation for AlignMind. This evaluation consisted of two main stages. We first collected data to conduct the evaluation by simulating the human-AI conversations (Section 4.1). Then, based on the collected data, we carried out the evaluation from three perspectives, with each perspective focusing on a particular research question.
# 4.1 Data Collection for Evaluation
Collecting data from the real world to evaluate the quality of a conversational agent in the domain of requirement refinement is challenging. Therefore, we opted to generate synthetic data using FMs to simulate human-AI interactions for the specific use case of requirements refinement, following similar approaches from prior studies [1, 4, 19, 39, 45, 70]. Thus, we create our dataset using the following steps:
Scenario Generation. We adopt a multi-step approach to construct 150 scenarios across various domains with the goal of creating 150 dialogues between a pair of FMs. We first prompt an FM to generate ten diverse domains for which automated workflows can be developed. The complete system prompt of this agent can be found in Appendix A.1.1. Each domain is then combined with one of three personas (i.e., “casual”, “indecisive”, and “rude” [45]) and a random expertise level (i.e., “novice”, “intermediate”, or “expert”) to obtain 30 configurations. After that, we generate five intents for each configuration using a template-based technique, similar to prior work [45]. This technique uses dynamic placeholders to diversify the generated dataset. These placeholders enable the generation of a diverse set of dialogues. Specifically, we consider four variables: (1) the expertise level of the user, (2) the persona of the user, (3) the domain, and (4) one of two sentence fragments typically used to express an intent: ‘I would like to’ or ‘I am looking for a way to.’ Finally, we prompt an FM to complete the dynamically constructed sentence fragment with an intent.
The template we used to generate the intents is as follows: “As {{expertise_level}} in {{domain}}, {{verb}}”. For example, after following this process, one scenario generated through the system will be “As a novice in Artificial Intelligence, I am looking for a way to receive bi-weekly notification of upcoming AI conferences and workshops, sent to my calendar through an automated API service.” We obtain 150 different scenarios after following this process (10 × 3 × 5 = 150).
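The scenario construction can be sketched as follows. The domain names are placeholders for the FM-generated ones, and the final FM call that completes each fragment is only indicated in a comment:

```python
import random
from itertools import product

DOMAINS = [f"Domain {i}" for i in range(1, 11)]   # FM-generated in practice
PERSONAS = ["casual", "indecisive", "rude"]
EXPERTISE = ["novice", "intermediate", "expert"]  # sampled at random
VERBS = ["I would like to", "I am looking for a way to"]

def build_scenarios(intents_per_config=5, seed=0):
    rng = random.Random(seed)
    scenarios = []
    for domain, persona in product(DOMAINS, PERSONAS):  # 30 configurations
        for _ in range(intents_per_config):             # 5 intents each
            fragment = (f"As {rng.choice(EXPERTISE)} in {domain}, "
                        f"{rng.choice(VERBS)}")
            # A real run prompts an FM to complete the fragment with an intent.
            scenarios.append((persona, fragment))
    return scenarios

print(len(build_scenarios()))  # 10 x 3 x 5 = 150
```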
Simulate Human-AI Interactions. In this step, we use an FM-powered agent role-playing as a human to interact with AlignMind and the baseline based on each scenario generated in the previous step. The system prompt of the FM agent role-playing as the human is specified in Appendix A.1.3. As the baseline, we use an FM-powered agent that is only instructed in the system prompt to refine user requirements. We included the full system prompt used for this agent in Appendix A.5.1. Based on each scenario, we invoke human-AlignMind and human-baseline conversations to generate a dialogue, a refined set of requirements, and a step-by-step workflow in natural language to achieve the requirements. At the end of this stage, we store the tuples consisting of the dialogue, requirements, and workflow of all 150 scenarios in a database for subsequent analyses.
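The simulated conversation loop can be sketched as follows. Both agents are stubs for the role-playing FM and the refiner, and the round cap is our safety assumption rather than a value from the paper:

```python
MAX_ROUNDS = 20  # safety cap (our assumption, not stated in the paper)

def user_agent(scenario, last_question):
    # Stub for the FM role-playing a human: it opens with the scenario,
    # then answers whatever the refiner asks.
    return scenario if last_question is None else "Cities only, please."

def refiner_agent(history):
    # Stub refiner: asks clarification questions, then terminates once it
    # judges the collected information sufficient (here: after two rounds).
    done = len(history) >= 4
    reply = ("DONE: refined requirements and workflow" if done
             else "Which regions interest you?")
    return reply, done

def simulate(scenario):
    history, question = [], None
    for _ in range(MAX_ROUNDS):
        history.append(user_agent(scenario, question))
        question, done = refiner_agent(history)
        history.append(question)
        if done:
            break
    return history

dialogue = simulate("As a novice in AI, I would like to track conferences.")
print(len(dialogue))
```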
In each subsequent section, we describe the motivation for studying the three research questions, outline our methodology to answer them using the generated data, and finally include the results.
# RQ1 Can AlignMind output high-quality requirements and natural language workflows during the requirements clarification and refinement process?
We set out to investigate if AlignMind can improve the output of the requirement clarification and refinement process compared to directly prompting an FM, which we consider as the baseline, closely following the work of Wang et al. [63]. However, it is important to note that a single metric cannot fully capture the enhancements in the requirements refinement process or the quality of the artifacts produced by that process. Therefore, we consider a multi-faceted approach based on the following three aspects:
• RQ1.1 Evaluation using a panel of FM-powered judges
• RQ1.2 Evaluation of requirement richness
• RQ1.3 Evaluation of the number of conversation rounds
Fig. 5. An overview of the output quality evaluation by a panel of FM judges (RQ1.1).
# RQ1.1 Evaluation using a panel of FM-powered judges
Motivation: As achieving consistent evaluation at scale with human evaluators is challenging, the "LLM-as-a-Judge" paradigm has emerged, in which LLMs are employed as evaluators for complex tasks [21, 36]. Even in the software engineering domain, Ahmed et al. [2] show that replacing some human annotation effort with LLMs can produce inter-rater agreements equal to or close to human-rater agreement. Therefore, we chose to score the artifacts produced during the requirements refinement process (i.e., each tuple of dialogue, requirements, and workflow) by prompting a panel of three FM-powered judges. However, previous research has shown that using the same FM or closely related variants as evaluators can introduce bias due to preference leakage [37] or self-preference [49]: an FM serving as both a predictor and an evaluator assigns disproportionately high ratings to its own outputs compared to those of other FMs. Therefore, to mitigate this bias, and building upon recent work [60] which showed the effectiveness of using a panel of FM evaluators, we chose three distinct FMs drawn from two different model families as our FM-powered judges: Llama3.3-70b from Meta, as well as gpt-4o-mini and gpt-4o from OpenAI.
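The panel scoring can be sketched as follows. The three judge functions are stubs for the three models, and aggregation by mean is our assumption for illustration; the paper specifies the panel composition but this sketch fills in the mechanics:

```python
def judge_llama33_70b(artifact):
    return 4  # stubbed single-point score on a 1-5 scale

def judge_gpt4o_mini(artifact):
    return 5  # stub

def judge_gpt4o(artifact):
    return 4  # stub

PANEL = (judge_llama33_70b, judge_gpt4o_mini, judge_gpt4o)

def panel_score(artifact):
    # Each judge independently rates the tuple (dialogue, requirements,
    # workflow); the panel score averages the three ratings.
    scores = [judge(artifact) for judge in PANEL]
    return sum(scores) / len(scores)

print(round(panel_score("dialogue + requirements + workflow"), 2))
```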
Furthermore, we used single-point scoring [32], where the judge model is tasked with rating the quality of an output based on natural language instructions describing how the grading should be performed (i.e., what properties constitute a good or bad output). In our case, this requires providing the judging panel with clear evaluation criteria, or rubrics, relevant to the process of refining requirements through conversation. To the best of our knowledge, no such guidelines currently exist. However, prior work [8, 40] has used conversations conducted with an FM-powered agent to generate a set of rubrics for estimating domain-specific user satisfaction in other domains, such as software debugging. Inspired by this work, we opted to generate synthetic data using FMs to simulate human-AI interactions for the use case of requirements refinement and to derive rubrics from this data.
Approach: Figure 5 illustrates the approach we followed to evaluate AlignMind using the panel of FM-powered judges. The process consists of two stages: (1) Data generation for rubrics extraction and (2) evaluation by the FM-powered judge panel, which we describe next.
Data generation for rubrics extraction. Similar to the steps in Section 4.1, we first generate a set of 100 scenarios with the help of an FM. For this purpose, we begin by prompting an FM to generate 20 diverse domains for which automated workflows can be developed. After that, we generate five diverse intents for each domain using the same template-based technique.
Based on these 100 domain and intent combinations (i.e., 100 scenarios), we use a pair of FM-powered agents to generate a dialogue. The first FM-powered agent, role-playing as a human, is provided with one of the 100 scenarios and prompted to initiate the conversation. The second FM agent serves as a baseline requirements refiner agent, helping the former agent clarify its intent. The refiner agent terminates the dialogue when it has collected sufficient information to achieve the user’s intent. The system prompts used for the FM agent role-playing as the human and for the requirements refiner agent are available in Appendix A.1.3 and Appendix A.1.4, respectively.
Once the dialogue is terminated, an FM is prompted to extract the refined requirements from the entire dialogue into a requirements document. Then, another FM call is used to create a step-by-step workflow in natural language based on the requirements document. Each combination of dialogue, requirements, and workflow for all 100 scenarios is passed on to the next step for rubric extraction and evaluation by FM judges.
Evaluation by FM-powered judges. Inspired by prior work [8, 40], we first use FMs to derive rubrics to assess the quality of the artifacts generated during requirements refinement. We employ a two-step process to generate rubrics. First, we provide a combination of dialogue, requirements, and workflow to an FM-powered agent to extract three reasons why these artifacts might be considered good. The full system prompt used can be found in Appendix A.2.1. This process yields a total of 300 reasons, three from each of the 100 data points. Next, we prompt another FM-powered agent to generate rubrics based on the extracted reasons. The full system prompt of this agent is available in Appendix A.2.2. After manual inspection to remove duplicates, this process produced the list of five rubrics presented below:
# Rubrics
■ The assistant is able to accurately identify the user’s intent.
■ The requirements capture all of the user’s intent with respect to their requirements, preferences, and perceptions.
■ The requirements are relevant to achieve the user’s intent.
■ The workflow includes detailed, actionable, and ordered steps.
■ The workflow is realizable and error-free.
Next, based on these rubrics, we evaluate the artifacts (dialogue, requirements, and workflow) generated by AlignMind and the baseline solution described in Section 4.1. For this purpose, we employ the same three FMs drawn from two model families, Llama3.3-70b from Meta and gpt-4o-mini and gpt-4o from OpenAI, as our panel of FM-powered judges for the evaluation based on the five rubrics. The full system prompt used in each of these evaluator agents to rate each triplet is available in Appendix A.2.3. To improve the robustness of the result [9], we prompt the evaluator to provide reasoning before assigning a score on a 5-point Likert scale. This 5-point score, ranging from Strongly Disagree to Strongly Agree, is converted into a normalized score within the [0, 10] range, where 0 indicates Strongly Disagree, 2.5 indicates Disagree, 5 indicates Neutral, 7.5 indicates Agree, and 10 indicates Strongly Agree. To ensure the consistency of our evaluation results, we prompt each evaluator agent three times and compute the mean for each rubric score. The overall score for a given triplet is calculated as the mean of all rubric scores as follows:
$$
\mathrm{Overall\ Score}(D, R, W) = \frac{1}{N} \sum_{i=1}^{N} C_{DRW}(i)
$$
• $D$: Dialogue.
• $R$: Generated Requirements.
• $W$: Generated Workflow.
• $C_{DRW}(i)$: Score of the $i$-th rubric.
• $N$: Total number of rubrics.
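The Likert-to-score mapping and the averaging over repeated judge prompts described above can be sketched as follows. The helper names and sample labels are illustrative, not part of our implementation:

```python
# Sketch of the scoring aggregation: each judge returns a 5-point Likert label
# per rubric; labels are mapped onto a normalized [0, 10] scale, scores from
# three repeated prompts are averaged, and the overall score is the mean
# across all rubrics.
from statistics import mean

LIKERT_TO_SCORE = {
    "Strongly Disagree": 0.0,
    "Disagree": 2.5,
    "Neutral": 5.0,
    "Agree": 7.5,
    "Strongly Agree": 10.0,
}

def rubric_score(labels_from_three_runs):
    """Average the normalized scores of three repeated judge prompts."""
    return mean(LIKERT_TO_SCORE[label] for label in labels_from_three_runs)

def overall_score(per_rubric_runs):
    """Mean over all rubric scores: (1/N) * sum_i C_DRW(i)."""
    return mean(rubric_score(runs) for runs in per_rubric_runs)

# Illustrative example: 5 rubrics, each judged three times.
runs = [
    ["Agree", "Strongly Agree", "Agree"],   # rubric 1
    ["Strongly Agree"] * 3,                 # rubric 2
    ["Neutral", "Agree", "Agree"],          # rubric 3
    ["Strongly Agree"] * 3,                 # rubric 4
    ["Agree"] * 3,                          # rubric 5
]
```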
Then, we use the Wilcoxon signed-rank test to determine whether there is a statistically significant difference between the score distributions of the baseline and AlignMind. Moreover, we use Cliff’s delta to measure the effect size, which is negligible when $|\delta| < 0.147$, small when $0.147 \leq |\delta| < 0.33$, medium when $0.33 \leq |\delta| < 0.474$, and large otherwise. Furthermore, we use the median to aggregate scores across the panel of three judges (e.g., for a given scenario out of the 150, if the three judges give AlignMind a higher median overall score than the baseline, AlignMind is chosen as the preferred approach for that scenario).
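A minimal pure-Python sketch of Cliff's delta and the magnitude thresholds above (the significance test itself would use `scipy.stats.wilcoxon`; the function names here are illustrative):

```python
# Cliff's delta over all cross pairs of two score samples, plus the
# magnitude categorization used in the paper.
def cliffs_delta(xs, ys):
    """delta = [#(x > y) - #(x < y)] / (len(xs) * len(ys))."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

def magnitude(delta):
    d = abs(delta)
    if d < 0.147:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"
```

For example, `cliffs_delta([2, 3, 4], [1, 1, 1])` yields 1.0 (every AlignMind score exceeds every baseline score), which `magnitude` classifies as "large".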
To validate whether FM-powered judges are aligned with human judgment, we compare the preference of the FM judge panel with the human-provided preference for a subset of 20 scenarios. We assess inter-rater reliability using Cohen’s $\kappa$ coefficient [15], which measures the degree of agreement between the FM-chosen labels and human labels. We obtain a value of 0.685 for $\kappa$ , indicating substantial agreement, and therefore continue with the evaluation.
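The agreement check above can be sketched as a pure-Python equivalent of `sklearn.metrics.cohen_kappa_score`; the sample labels below are illustrative:

```python
# Cohen's kappa for two raters assigning nominal labels (e.g., the preferred
# approach per scenario: "AlignMind" vs "baseline").
# kappa = (p_o - p_e) / (1 - p_e), assuming p_e < 1.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    # Observed agreement: fraction of scenarios where both raters agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from marginal label counts.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```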
Results: Figure 6 illustrates the performance comparison between the baseline and AlignMind based on the overall evaluation score.
AlignMind outperforms the baseline in the evaluations by all three FM-powered judges, with the overall score differences being statistically significant. Specifically, incorporating AlignMind’s capabilities in requirements clarification and refinement tasks results in a higher overall performance score, as illustrated in Figure 6. Notably, the different judge models exhibit consistent performance trends across both the AlignMind and baseline configurations. The Wilcoxon signed-rank test reveals statistically significant results favoring AlignMind in evaluations by Llama3.3-70b, gpt-4o-mini, and gpt-4o, with p-values of $1.95 \times 10^{-20}$, $2.42 \times 10^{-18}$, and $1.45 \times 10^{-14}$, respectively. The medians of the distribution of aggregate scores across all three judge models are 9.08 for the baseline and 10 for AlignMind.
Fig. 6. The overall score of AlignMind vs baseline, as judged by three different FMs. The rightmost plot (Median) shows the distribution of scores where, for each scenario, the median score from the three judge models is taken.
Fig. 7. Scatterplot showing the relative improvement of AlignMind, compared to the baseline, across 150 different scenarios. Data points that lie above the red line indicate instances where AlignMind improves the overall score compared to the baseline.
Figure 7 shows the relative improvement in aggregate scores when AlignMind is used, compared to the baseline, across different scenarios. The relative improvement ranges between $-13.85\%$ and $36.42\%$, with a median of $7.44\%$. 122 out of 150 ($81.33\%$) scenarios show an improvement in the overall score with AlignMind compared to the baseline.
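The statistics above can be expressed as follows (the helper names are illustrative):

```python
# Relative improvement of AlignMind over the baseline for one scenario,
# and the fraction of scenarios where AlignMind wins.
def relative_improvement(baseline_score, alignmind_score):
    """Percentage change of the overall score relative to the baseline."""
    return (alignmind_score - baseline_score) / baseline_score * 100.0

def improved_fraction(baseline_scores, alignmind_scores):
    """Fraction of scenarios where AlignMind scores strictly higher."""
    wins = sum(1 for b, a in zip(baseline_scores, alignmind_scores) if a > b)
    return wins / len(baseline_scores)
```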
AlignMind is particularly useful in situations where the baseline approach is facing challenges. According to Figure 7, the scenarios that received lower overall scores with the baseline approach exhibit the greatest relative improvement when AlignMind is used. This clearly highlights the effectiveness of our method in tackling complex scenarios.
When the individual rubrics are considered, AlignMind is unanimously chosen as better in three out of five rubrics. We observe that all the considered judge models yield better performance in most of the rubrics, as shown in Figure 8. In particular, when using Llama3.3-70b and gpt-4o-mini as the judge models, AlignMind achieves better performance in rubrics 2, 4, and 5, with statistically significant differences (as illustrated in Table 1). Furthermore, gpt-4o favors AlignMind in four out of five rubrics (2 to 5). Interestingly, we notice that AlignMind and the baseline exhibit similar performance for rubric 1 across all judge models. Only in one case (when evaluating rubric 3 using the Llama3.3-70b judge model) does the baseline outperform AlignMind. Hence, our results suggest that the obtained performance measures are reliable across different judge models. Moreover, AlignMind, supported by its advanced clarification features, demonstrates promising results in the requirements refinement task.

Fig. 8. The score for each rubric of AlignMind and the baseline as judged by three different FMs.
Table 1. Wilcoxon Test Results for each rubric when three different FMs are used as judges. Win. stands for Winner. Sign. stands for Significant.
From a practical point of view, AlignMind can typically capture all of a user’s intent concerning their requirements, preferences, and perceptions (rubric 2). Furthermore, AlignMind can help achieve the user’s intent (rubric 3). Additionally, AlignMind generates a detailed, actionable, and ordered step-by-step workflow (rubric 4), and this workflow is realizable and error-free (rubric 5).
# RQ1.2 Evaluation of requirement richness
Motivation: In this RQ, we set out to assess the quality of the generated requirements through an objective measurement. In previous work [58, 69], lexical features of natural language have been used to extract requirements from documents. Therefore, drawing inspiration from this work, we assess the richness of the generated requirements by comparing their lexical diversity [34], specifically the number of unique, non-stop words in the final refined requirements output by both AlignMind and the baseline.
Approach: For this purpose, we begin by removing stop words from the refined requirements. Next, we transform the requirements into a matrix of tokens, with each token representing a word, using the CountVectorizer from the scikit-learn Python library. The result is a binary matrix in which each element indicates the presence of a specific word. Using this matrix, we then count the number of unique content words.
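A minimal sketch of this richness measure, assuming a small illustrative stop-word list in place of scikit-learn's built-in English list used by `CountVectorizer(binary=True)`:

```python
import re

# Tiny illustrative stop-word list; scikit-learn's "english" stop words
# would be substituted here in the actual pipeline.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in",
              "is", "be", "that", "for", "with"}

def richness(requirements_text):
    """Number of unique, non-stop content words (lowercased)."""
    tokens = re.findall(r"[a-z0-9']+", requirements_text.lower())
    return len({t for t in tokens if t not in STOP_WORDS})
```

For instance, "The system shall export the report to PDF and the report to CSV" contains six unique content words (system, shall, export, report, pdf, csv).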
Fig. 9. The requirements richness between AlignMind and baseline configurations. The median of the baseline approach is 33 and AlignMind is 266.5.
Results: Figure 9 illustrates the richness of requirements generated by AlignMind and the baseline, as measured by the number of content words.
Requirement richness achieved by AlignMind is eight times higher than that of the baseline. The median number of content words in the final set of requirements is 266.5 when AlignMind is used for requirements refinement, compared to 33 for the baseline. With its advanced capabilities, AlignMind generates richer content in the final requirements than the baseline approach. A Wilcoxon signed-rank test confirms that this difference is statistically significant, with a p-value of $2.31 \times 10^{-26}$. Cliff’s delta is 1, indicating that the requirement richness of AlignMind’s output is higher than the baseline’s in every instance, with no overlapping values between the two groups. These results suggest that employing a multi-agent architecture supported by various refinement features, such as those in AlignMind, can be highly effective in enhancing requirement richness.
# RQ1.3 Evaluation on the number of conversation rounds
Motivation: Previous work [6, 67] has identified that FMs tend to prematurely terminate conversations by jumping straight to providing solutions, even when the information provided by the user is insufficient to reach a solution. Therefore, we set out to investigate how effectively AlignMind can mitigate this weakness by extending conversations with the user when necessary, with the support of our proposed improvements.
Approach: To address this RQ, we compare the number of conversation rounds maintained by AlignMind and the baseline solution when interacting with the FM agent role-playing as a human while refining requirements for the 150 scenarios.
Fig. 10. The number of conversation rounds used for requirements refinement using AlignMind vs. baseline.
, Vol. 1, No. 1, Article . Publication date: June 2025.
Results: As depicted in Figure 10, AlignMind engages in a median of 13 conversation rounds with the user, whereas the baseline approach results in a median of only four rounds.
AlignMind enables longer multi-round conversations than the baseline. Even though the system prompt does not explicitly instruct the baseline to ask clarifying questions, it still engages in a median of four conversation rounds. However, this is significantly lower than AlignMind’s 13 conversation rounds. The Wilcoxon signed-rank test confirms a statistically significant difference with a p-value of $2.14 \times 10^{-26}$. We also obtain a Cliff’s delta of 0.986, indicating a large effect size. This means that AlignMind overwhelmingly outperforms the baseline approach in guiding users through several refinement steps to generate a refined set of requirements and a natural-language workflow. A higher number of conversation rounds allows for deeper clarification of user intent through iterative dialogue rather than relying on a single direct prompt to the FM for the same task [67]. This difference highlights AlignMind’s ability to guide users through a more comprehensive and iterative refinement process, ensuring grounded and contextually relevant questions across various requirement aspects.
# Summary of RQ1
Key Findings: AlignMind demonstrates superior performance in terms of overall score, with a statistically significant difference compared to the baseline. Furthermore, we observe that AlignMind outperforms the baseline in three to four of the five rubrics across the judge models, as confirmed by the Wilcoxon signed-rank test. Similarly, AlignMind significantly improves requirement richness compared to the baseline with a reasonable number of conversation rounds.
Implications: Our solution, AlignMind, can assist software developers and non-technical professionals across various organizations in requirements refinement, enabling the creation of rich and well-structured requirements and workflows. Furthermore, our results suggest that the integration of clarification capabilities, such as those in AlignMind, can be valuable for requirements refinement tasks.
# RQ2 Is the output of AlignMind grounded on user conversations?
Motivation: While RQ1.2 touches on how much information the summarized requirements contain, content alone may not serve as a reliable metric, as it could include hallucinated information. Therefore, in this research question, we investigate the hallucination aspect of the summarized requirements, i.e., the extent to which they are consistent with and specific to the dialogue.
Approach: Kryscinski et al. [33] proposed four dimensions for evaluating abstractive summaries: relevance, coherence, consistency, and fluency. Out of these dimensions, consistency, which checks if the summary aligns with the facts in the source document, is the most relevant for hallucination detection. The intuition is that the summary and source document should be factually consistent if no new information is present in the summary that’s not in the source. Therefore, we formulate our problem of checking the final requirements for hallucination as a problem of consistency checking in abstractive summaries. In our context, the human-AI conversation is considered the source document, while the final set of requirements is considered the summary.
We follow OpenAI’s guidelines and use FMs as judges for determining consistency in the summaries. We use the same three judge models employed in the previous RQ: Llama3.3-70b, gpt-4o-mini, and gpt-4o. The system prompt includes guidelines for evaluating the consistency between a specified source document and its summary. Additionally, we ask the judging model to assign a consistency score ranging from 0 to 5.
For each of the 150 scenarios, we provide the dialogues and the refined requirements generated by the baseline and AlignMind, independent of each other, and prompt each of the judge FMs to provide a consistency score. Then, we use the Wilcoxon signed-rank test to determine if there is a statistically significant difference between the distributions of consistency scores between the baseline and AlignMind. Moreover, we use Cliff’s delta to measure the effect size.
Results: All three judge models gave a full score (5 out of 5) for consistency to the output generated by both the baseline and AlignMind in the majority of cases. The median consistency score was 5 for both the baseline and AlignMind across all judge models. This demonstrates that the requirements generated by both the baseline and AlignMind are consistent with the AI-human conversations used for requirements refinement (i.e., no additional requirements not discussed in the user conversations were included in the final set of requirements due to hallucination).
No statistical difference between the consistency scores of the baseline and AlignMind, based on the assessment by two out of three FM-based judges. Based on the Wilcoxon signed-rank test results for the Llama3.3-70b and gpt-4o-mini judge models, there was no evidence of a statistical difference in the consistency of output generated by the baseline and AlignMind. When gpt-4o is used as the judge model, the Wilcoxon signed-rank test shows that AlignMind’s consistency scores are significantly higher than the baseline’s (p-value $1.71 \times 10^{-6}$). Moreover, we obtain a Cliff’s delta of 0.19, which is considered small but non-negligible.
# Summary of RQ2
Key Findings: Both the AlignMind and the baseline produce hallucination-free requirements from the requirements clarification and refinement process, consistent with the user conversations.
Implications: AlignMind proves to be an effective tool towards creating rich, high-quality requirements specifications that are grounded in dialogue. Moreover, AlignMind has the potential to be a useful resource for various stakeholders, not just in refining requirements but also in areas such as requirements elicitation.
# RQ3 What are the operational costs associated with AlignMind?
Motivation: This research question seeks to estimate the costs associated with using an FM-based conversational agent system enhanced with ToM capabilities for requirement refinement tasks. Specifically, we aim to evaluate whether AlignMind can assist organizations in clarifying requirements efficiently while maintaining reasonable costs. The findings of this study could provide valuable insights for various industries, helping them better understand the cost implications of using a multi-agent solution vs. a single agent in requirements refinement tasks.
Approach: We quantify the expenses associated with using our AlignMind vs. the baseline for requirement refinement tasks. To address this research question, we consider three different cost-related metrics. Below, we provide a brief introduction to each of these metrics:
Number of FM calls: This metric represents the number of calls that AlignMind performs to foundation models during a session used for requirements refinement. Since FM calls are usually made over HTTP to externally hosted foundation models, this metric gives an estimate of network resource usage and latency while AlignMind is in operation at an organization.
Number of tokens: All major FM hosting providers currently charge users based on token counts. This includes both input (aka prompt_tokens) and output (aka completion_tokens). Therefore, by calculating token usage, we can estimate the financial costs of using AlignMind. OpenAI provides a lightweight method to measure the cost of tokens consumed in each FM call. Typically, when invoking any OpenAI model, three key pieces of information are returned from the endpoint: prompt_tokens, completion_tokens, and total_tokens. Since we invoke OpenAI models multiple times for each intent-based instance, we sum each of these token metrics across all FM calls made for each use case.
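The per-session aggregation of these usage fields can be sketched as follows (the sample numbers are illustrative, not taken from our experiments):

```python
# Sum the token-usage fields returned by each FM call (OpenAI responses
# expose prompt_tokens, completion_tokens, and total_tokens) across all
# calls made in one requirements refinement session.
def sum_usage(usages):
    totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
    for usage in usages:
        for key in totals:
            totals[key] += usage[key]
    return totals

# Illustrative session with two FM calls.
session = [
    {"prompt_tokens": 1200, "completion_tokens": 150, "total_tokens": 1350},
    {"prompt_tokens": 1900, "completion_tokens": 240, "total_tokens": 2140},
]
```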
Fig. 11. The number of FM calls required in the two configurations for requirements refinement.
Results: AlignMind uses a higher number of FM calls than the baseline. Our findings indicate that performing requirements refinement tasks with AlignMind requires a median of 74.5 calls to FMs, while the baseline necessitates only seven, as shown in Figure 11. The Wilcoxon signed-rank test shows a statistically significant difference with a p-value of $2.30 \times 10^{-26}$. Every observation generated by AlignMind has a higher number of API calls than every observation in the baseline. We attribute this to the multi-agent architecture of AlignMind, which employs several FM-based agents to facilitate the requirements refinement process; consequently, a higher number of FM calls is expected. Each interaction begins with a pair of FM agents that collaboratively decompose the user’s intent into a set of topics and subsequently prompt another FM to generate relevant questions. Additionally, AlignMind leverages a range of ToM-based agent helpers, invoked after each user query, to improve the requirement refinement.
Fig. 12. The cost in terms of (a) prompt, (b) completion, and (c) total tokens needed for requirements refinement in the baseline configuration vs AlignMind.
There is a notable monetary cost associated with using AlignMind compared to the baseline, as measured by prompt (input), completion (output), and total tokens. Our findings indicate that AlignMind incurs a higher token cost than the baseline across all metrics, as illustrated in Figure 12. Specifically, the median token usage with AlignMind is 129,181.5 prompt tokens, 10,139 completion tokens, and 139,784 total tokens, contrasted with 3,758.5 prompt tokens, 981.5 completion tokens, and 4,735 total tokens for the baseline. A Wilcoxon signed-rank test shows statistically significant differences with p-values of $7.59 \times 10^{-26}$, $1.11 \times 10^{-25}$, and $7.75 \times 10^{-26}$ for prompt, completion, and total tokens, respectively. All Cliff’s delta values are 0.87, which is considered a large effect size, suggesting almost no overlap between AlignMind and the baseline in token usage. This increase in token usage aligns with our earlier observations, as a higher cost is to be expected given the multi-agent architecture of our solution. Each FM call necessitates sending the entire conversation history, including the prompt, to maintain a coherent dialogue and ensure accurate responses. Moreover, given the multi-agent structure of our solution, one can anticipate an increase in tokens, particularly in the prompt tokens providing instructions for each agent. It is important to recognize that not all of the 10,139 output tokens will be visible to the end user; most of these tokens are used by FM-powered agents for internal decision-making. Therefore, while overwhelming users with the sheer number of tokens is not a concern, we should still be mindful of the financial implications, since costs for FM access are currently calculated based on the number of tokens consumed and generated.
Another point is that prompt tokens are generally priced lower than completion tokens. Since our method tends to use more input tokens than output tokens, the cost may not be as high as it first appears.
Despite the cost difference between AlignMind and the baseline, AlignMind remains both cost-effective and efficient for requirements refinement tasks compared to manual requirement refinement. For instance, the median operational costs of requirements refinement with AlignMind are approximately $\$0.94$ using Gemini-1.5-flash (Google), $\$0.97$ using Claude-3-haiku (Anthropic), and $\$31.40$ using gpt-4o (OpenAI), while the baseline approach costs as little as $\$0$ and at most $\$0.13$ for the same FM providers. With inference costs further decreasing [14] and techniques like prompt caching [29] and prompt compression [26] being supported by major FM service providers, we postulate that FM-based solutions will become an attractive choice for requirement refinement and intent alignment in the coming years.
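As an illustration of how token counts translate into monetary cost, the sketch below multiplies token usage by assumed per-million-token prices. The rates shown are placeholders, not actual provider pricing:

```python
# Illustrative cost estimate from token counts. The prices below are
# assumed placeholder rates (USD per 1M tokens), not real provider rates.
PRICE_PER_M = {"input": 2.50, "output": 10.00}

def session_cost(prompt_tokens, completion_tokens, price=PRICE_PER_M):
    """Estimated USD cost of one refinement session, given token totals."""
    return (prompt_tokens * price["input"]
            + completion_tokens * price["output"]) / 1_000_000
```

With the median AlignMind token counts reported above (about 129,181 prompt and 10,139 completion tokens), this placeholder pricing would give `session_cost(129_181, 10_139)`, roughly $0.42; actual costs depend entirely on the provider's real rates.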
# Summary of RQ3
Findings: Our findings underscore notable computational and operational overhead when leveraging AlignMind for requirement refinement. In particular, our solution requires a median of 74.5 FM calls, at a cost ranging from $\$0.94$ to $\$31.40$ depending on the FM provider.
Implications: Our analysis offers key insights for various corporate stakeholders regarding the computational and operational costs of leveraging AlignMind’s multi-agent solution to refine requirements. As inference costs continue to fall, we anticipate the emergence of numerous innovative solutions based on multi-agent architectures and various ToM capabilities, which will revolutionize the requirements engineering field, particularly requirement refinement.
# 5 Discussion and Implications
From the experience of building AlignMind and trialling it out in the field, we have learned some lessons and have seen a glimpse of what the future holds in intent alignment and requirements refinement with the assistance of FMs.
Assisting with the Performance-Cost Tradeoff. Compound AI systems and agentic AI systems such as AlignMind are complex and known to be costly [23, 28]. In contrast to a single model call, agent-based systems call the models multiple times before arriving at the final solution. Based on current pricing plans offered by model hosting providers, costs are incurred each time the agent is run and depend on the number of input and output tokens. Based on the use case, developers may decide to compromise accuracy to reduce costs. Tools must be provided for the FM-based application developers to visualize these factors and manage the tradeoff between cost and accuracy. Furthermore, they may benefit from techniques [30] that can optimize costs by reducing the prompt length, output format, and number of agent hops while maintaining functionality and overall performance.
The multi-modal future. Requirements engineering is a quintessentially multimodal process involving a variety of modes and media to gather, articulate, and refine the needs and expectations of stakeholders in a software development project [68]. Stakeholders engage in discussions to express needs and expectations. These verbal conversations can occur in formal meetings, workshops, or informal dialogues, with different levels of precision. Furthermore, diagrams, charts, and sketches crafted on whiteboards or other digital tools occasionally provide visual representations of system components, workflows, or user interactions. Written documents, including emails, requirements specifications, and notes, serve as permanent records and reference points, capturing details and justifications for each requirement over long periods. Therefore, AI-based requirements refinement should get support beyond large language models and integrate with multi-modal foundation models. In future work, we plan to address this challenge.
New UX Paradigms. The chat UI may not be the optimal way to interact with foundation models to achieve complex tasks in domains such as software development. Meanwhile, organizations are experimenting with new ways for users to interact with foundation models. v0 by Vercel, Artifacts by Anthropic, and Townie are one such category of interfaces, where interactive UI components are rendered dynamically from FM output. Users can refine these FM-generated interfaces as needed and copy them into their own projects. In a future where AI systems are expected to be intelligent collaborators, it is uncertain which of these experiments will gain traction [24]. We are confident that the features provided by AlignMind to preview the captured requirements and generated workflow in real time, as well as to modify them using natural language, also offer a glimpse of the future of intent-first IDEs.
Beyond Software Domain. The applications of a goal alignment agent that can help with intent alignment and requirements refinement are not limited to Software Engineering. It can benefit both technical and non-technical professionals by bridging domain-specific knowledge and communication gaps. For software developers, the agent assists by systematically capturing detailed software requirements, ensuring that the developed system aligns with stakeholder objectives and user needs, reducing the likelihood of costly rework and project delays. In the automotive sector, engineers can utilize the agent to clarify complex design specifications and safety standards, facilitating effective collaboration between multidisciplinary teams and ensuring compliance with industry regulations. For healthcare practitioners, the agent aids in translating clinical requirements into actionable software features, enhancing the development of medical informatics tools that improve patient care and data management. Analysts can leverage the agent to articulate complex financial models and compliance requirements in the financial sector, ensuring that the resultant software solutions accurately reflect regulatory standards and market dynamics. The conversational agent enhances cross-disciplinary communication and fosters a more integrated approach to problem-solving across various industries by providing a natural language interface for capturing and aligning diverse requirements.
Plug-and-play Architecture. As the need for requirements refinement and intent alignment spans industries, a foundation-model-based goal alignment solution can serve a broad spectrum of organizations, offering the ability to create innovative, intent-driven solutions. The flexibility of AlignMind’s plug-and-play architecture allows organizations to seamlessly develop custom modules tailored to their unique requirements and constraints. By leveraging Theory-of-Mind-based adapters, the solution ensures a deeper alignment with the perspectives of various stakeholders. However, recognizing and responding to user perceptions can be a complex task in practice [62]. In particular, some effort may be needed to create a set of ToM-based helpers that produce output better aligned with the user.
Importance of Domain-Specific Rubrics. Rubrics serve as a valuable foundation for evaluating the performance of foundation models (FMs), as seen in our work and in program debugging tasks [8]. However, designing effective rubrics requires careful consideration to ensure consistency and domain specificity. For instance, Biyani et al. [8] found that domain sensitization plays a crucial role in significantly improving final evaluation scores. Their rubrics, derived from a debugging-code dataset, highlight the importance of tailoring evaluation criteria to the nuances of a specific field. Industries should therefore prioritize the development of accurate, domain-specific rubrics to ensure comprehensive and precise assessment. Well-crafted rubrics enhance the reliability of evaluation outcomes, leading to more meaningful insights. This is especially critical in domains where human cognitive factors, such as intent and communication styles, play a key role.
Comprehensive Evaluation Using Diverse FMs. Many existing studies on leveraging foundation models (FMs) for software engineering tasks rely on the same FM as both predictor and evaluator. However, this approach can introduce critical biases, leading to phenomena such as preference leakage [37] or self-preference [49], where an FM tends to favor its own predictions over those of other models. To mitigate this issue, we employ three variants of FMs from two widely used families: Llama3.3-70b from Meta, and gpt-4o-mini and gpt-4o from OpenAI. Our evaluation reveals slight variations in assessments among the three models. However, the gpt-4o judge model consistently favors our proposed approach, AlignMind, over the baseline across all rubrics and summarization-based metrics. Consequently, our findings highlight the importance of using diverse FMs for evaluation. Therefore, researchers are encouraged to adopt this approach to enhance the consistency and comprehensiveness of FM performance evaluations.
# 6 Threats to Validity
Internal Validity refers to the extent to which the observed effect is indeed due to the independent variable and not other factors. In our case, the inherently stochastic nature of LLMs can introduce variability in the responses generated at each intermediate step where an LLM call is made in the requirements refinement system. To mitigate the effects of this behaviour for our evaluation, we have queried the evaluator model three times for each instance and computed the mean score.
External Validity refers to the extent to which the findings of the study can be generalized to other populations or settings. In this study, which focused on requirements refinement, we specifically used the Meta LLaMA 3.3 and OpenAI GPT-4o class of models for our implementation and evaluation. It is important to note that results could vary if models from other providers, like
Google Gemini, Anthropic Claude, or Mistral, were employed. Nevertheless, recent benchmark findings indicate that the performance of models from all leading labs has begun to converge. This trend can likely be attributed to the similarities in model architectures and significant overlaps in pre-training data [7]. Therefore, we hypothesize that the influence of varying models on the performance of AlignMind will be negligible, producing consistent results.
Construct Validity concerns the extent to which the model measures the intended construct or concept. In our study, we adopt a set of rubrics to evaluate the quality of the conversations, requirements, and natural language workflows generated by FM-powered agents, similar to prior work [8]. To avoid potential biases from solely using FM-generated rubrics, we carefully crafted rubrics to account for the two primary purposes: requirements refinement and natural language workflow generation. Each rubric was then manually reviewed by the first two authors for relevance. The authors’ analysis and discussion of each rubric resulted in the final set of five FM-generated rubrics, mitigating bias.

Abstract. Foundation Models (FMs) have shown remarkable capabilities in various natural language tasks. However, their ability to accurately capture stakeholder requirements remains a significant challenge for using FMs for software development. This paper introduces a novel approach that leverages an FM-powered multi-agent system called AlignMind to address this issue. By having a cognitive architecture that enhances FMs with Theory-of-Mind capabilities, our approach considers the mental states and perspectives of software makers. This allows our solution to iteratively clarify the beliefs, desires, and intentions of stakeholders, translating these into a set of refined requirements and a corresponding actionable natural language workflow in the often-overlooked requirements refinement phase of software engineering, which is crucial after initial elicitation. Through a multifaceted evaluation covering 150 diverse use cases, we demonstrate that our approach can accurately capture the intents and requirements of stakeholders, articulating them as both specifications and a step-by-step plan of action. Our findings suggest that the potential for significant improvements in the software development process justifies these investments. Our work lays the groundwork for future innovation in building intent-first development environments, where software makers can seamlessly collaborate with AIs to create software that truly meets their needs.
# 1. Introduction
Reinforcement learning (RL) is a general computational framework for building agents that learn to maximize a scalar reward from their experience. RL agents sense their environment and produce actions at every single timestep, yet effective reward maximization in complex environments requires reasoning and learning over many timescales, spanning vast horizons. Consider how we typically go about our day: as we actuate muscles every few milliseconds, we simultaneously perform high-level decisions such as choosing presents for a loved one, deciding what to eat for lunch, figuring out meaningful scientific questions, and so on. Such abstract decision-making allows us to make decisions in a complex world, without being overwhelmed with unnecessary detail.
Figure 1: Overview of the methods for temporal structure discovery. We focus on the problem of discovering temporal structure autonomously from data. We put the discovery problem in perspective of the overall agent, covering the major benefits of Hierarchical Reinforcement Learning as well as the associated challenges and trade-offs.
Hierarchical reinforcement learning (HRL) formalizes the idea of flexibly reasoning over different timescales by developing agents that learn, predict, and act in the world at multiple levels of abstraction. At its core, HRL builds on the temporal structure revealed through interaction with an environment. This structure can be leveraged either within a learning algorithm, for example, as a curriculum over goals, or by defining a set of useful and reusable skills. When the temporal structure is defined by human specialists, HRL can dramatically ease the decision-making burden of the agent by improving exploration (Bellemare et al., 2020), learning (Vinyals et al., 2019), and generalization (Ahn et al., 2022). On the other hand, when the temporal structure underpinning HRL is poorly defined, it can hamper learning, for example by resulting in pathologically bad exploration (Jong et al., 2008). These appeals and drawbacks naturally lead to the question: how can agents autonomously discover useful temporal structures in HRL?
Before designing algorithms that successfully address the discovery problem, we are faced with the question of what constitutes a “good” temporal structure in the first place. Is there one type of “good” structure that yields higher rewards in all possible environments? Are there specific types of problem settings, like multi-task learning (Plappert et al., 2018) and continual learning (Khetarpal et al., 2020c), where we expect HRL to outperform non-hierarchical RL, and others where we do not? How can prior knowledge, for instance, through the integration of large language models (LLMs), alleviate the difficulties of discovery? This work presents various perspectives on what constitutes “good” temporal structures through the lens of the fundamental problems of RL—specifically, how HRL can aid exploration, credit assignment, transfer, and interpretability.
Key Contributions. The discovery of useful temporal structures has been a prolific, albeit challenging, topic of research. Before we present the various algorithms that tackle this problem, we take a step back and discuss the potential benefits of HRL methods as well as their trade-offs in the context of sequential decision-making. It is through the lens of these benefits and trade-offs that we introduce the diverse approaches that have been developed to tackle the fundamental question of discovery. While recent surveys in HRL (Pateria et al., 2021; Hutsebaut-Buysse et al., 2022) present papers based on their technical differences and domains of application, we present the literature based on how each method contributes to these core benefits. We then discuss the challenges associated with discovering structure in HRL and the domains that are particularly well-suited for such methods.
Scope. Almost all of the algorithms we cover are compatible with deep neural networks. We categorize approaches in terms of the amount of prior knowledge, presenting works that (1) learn directly from the agent’s online experience, (2) leverage offline datasets through offline RL, and (3) build on foundation models such as LLMs to define policies and rewards.
Overview. Section 2 discusses the benefits of the HRL framework and the different tradeoffs faced when discovering temporal structure. Section 3 introduces the notation and fundamental concepts used throughout the paper. In Section 4, 5, and 6, we present methods that try to answer the central question in HRL: how can agents effectively discover temporal structure in a stream of data? These sections are divided into methods that learn directly from interaction, methods that leverage offline datasets, and more recent methods that build on foundation models. In Section 7, we present approaches that investigate how an agent might deliberate over the skills it has mastered to achieve different goals. In Section 8, we discuss the challenges of discovering temporal structure through HRL. In Section 9, we explore additional related fields to HRL and how they are interconnected, such as research on state and action abstractions, continual RL, and programmatic RL. Finally, in Section 10, we highlight environments and domains that are particularly promising for HRL research, with a particular focus on open-ended systems.
# 2. What is Hierarchical Reinforcement Learning for?
Hierarchical reinforcement learning (HRL) aims to exploit the temporal structure of sequential decision-making problems. Solutions to complex problems can often be approximated by deconstructing the problem into simpler sub-problems that are modular and composable. Modularity refers to the property that a solution to a subproblem can be reused without concern for exactly how it was solved. Compositionality means that sub-problems can subsequently be recombined to create solutions to a wide range of more complex problems.
To better understand what such a structure might represent in practice, consider a programmer with an abundance of time who cares about solving only a single task. In such a scenario, Assembly language might be the optimal choice because its precise control over hardware resources potentially maximizes memory efficiency and minimizes execution time. However, in practice, programmers often opt for higher-level programming languages and use external software libraries because they offer compositional modules that solve common programming subtasks, and therefore make the writing of most new programs more efficient, at the cost of increasing execution time. In fact, such programming languages allow us to quickly solve complex problems; without them, most large software projects would simply be infeasible. This idea is visually represented in Figure 2 on the left, where abstract interfaces allow us to manipulate machine language efficiently. Modularity and compositionality are also particularly appealing properties for software expected to undergo changes throughout its life cycle. In a software library, each function typically handles a specific subtask and can be composed within a sequence of function calls to achieve a larger objective. In some more complex tasks, functions might call other functions. Developing such a library requires careful consideration of the right code organization to adopt and which guiding principles to follow, balancing execution time, readability, and performance. However, once a library is written, the user can focus on the overall program’s behaviour without needing to understand the implementation details of each function (Wilkes et al., 1958), greatly empowering the user’s ability to achieve their goals. This is represented in Figure 2 on the right, where the modular and compositional nature of PyTorch allows researchers to efficiently explore research ideas.

[Figure 2: levels of programming abstraction, from machine language and Assembly up to high-level languages and graphical user interfaces, annotated with modularity and compositionality (left); modular, compositional PyTorch training code (right).]
The temporal structure at the core of HRL is analogous to functions and subroutines in programming languages.1 Just as a human programmer writing a complex program is faced with the difficulty of breaking their task into subtasks, so must RL agents autonomously identify hierarchical structure in a stream of data. The modularity and compositionality properties are therefore good indicators as to the kind of problems in which we might find HRL particularly useful (see Section 10). We now discuss how, by discovering and leveraging such temporal structure, HRL methods can help address fundamental challenges in decision-making.
# 2.1 The Benefits of Hierarchical Reinforcement Learning
Just as modular and compositional codebases can facilitate effective software development of complex systems, HRL can leverage an environment’s structure to improve decision-making. This is particularly powerful when an agent is faced with tasks spanning vast horizons. By breaking down such long horizons into manageable subgoals, HRL effectively affords learnability. How can we understand this more precisely? In this section we attempt to provide a comprehensive perspective on the benefits of HRL through the lens of three fundamental challenges agents face when learning from interaction: how to select the right data to collect (exploration), how to efficiently learn from this data (credit assignment), and how to transfer knowledge and behaviour to new situations (transferability). Additionally, as agents become increasingly capable, a new challenge emerges: understanding their decision-making processes (interpretability). When covering the different families of HRL methods (Sections 4, 5, and 6), we will explicitly consider how these benefits are instantiated in practice.
Exploration. Broadly speaking, RL agents must solve two problems: (a) how to use existing data to learn useful behaviours, and (b) what data to collect in the first place. The latter problem, known as the exploration problem, is both unique and central to RL; the agent must learn how to collect data that improves its understanding of the world even if doing so does not immediately maximize reward in the short term (Amin et al., 2021). By exploiting the temporal structure of an environment, an agent can improve exploration in at least three ways. First, it can seek subgoals that are closer and more achievable than the overarching task’s goal, potentially creating a progressive curriculum of subgoals that allows the agent to explore more effectively. Second, it can explore in a diversity of directions, each defined by a skill in the agent’s skill set. Finally, agents can explore at a higher level of abstraction than individual actions, enabling them to search the solution space more efficiently. Consider a researcher tackling an important scientific question. By learning a high-level programming language, such as PyTorch, and writing modular code, the researcher can iterate faster to investigate many high-level ideas. When iterating over ideas, the researcher may seek to achieve some important milestones, such as a proof of concept, that can reveal new perspectives and provide insights into possible future courses of action.
Credit Assignment. To improve its decisions over time, an agent must identify the key moments in a sequence of decisions that best explain the observed result. RL algorithms typically leverage multi-step error propagation (Sutton, 1988) to learn about temporally distant, or delayed, outcomes. An agent leveraging the environment’s temporal structure could more efficiently identify the origins of an outcome by propagating errors at the abstraction level defined by this structure. Consider our previous example of a researcher performing a scientific experiment. Completing such an experiment consists of a sequence of high-level decisions, such as the choices of data preprocessing or evaluation metrics. Each of these high-level decisions is instantiated through a series of keystrokes that make up the final working code. By reflecting on the validity of the sequence of high-level decisions, rather than of each individual keystroke, the researcher could better identify which ones were critical for the observed outcome and how this sequence could be improved. By breaking down a task into such segments, it is also easier to identify if a particular segment is completed, narrowing down the search for lower-level mistakes, such as where an errant keystroke might have introduced a bug.
Transfer. HRL offers a particularly promising way of exploiting structure shared between a family of problems: skills acquired in one task can seamlessly be transferred to another. Agents could achieve this by breaking a complex task into simpler subtasks that have the potential to recur in many contexts and then learning skills that achieve such subtasks. Faced with a new challenge, such an agent can efficiently re-compose these skills, either by sequencing them or by acting according to a mixture of them. Consider our previous example of an AI researcher conducting experiments and writing a paper for a particular conference. The collection of research-code subroutines and writing skills learned while writing this initial paper could substantially reduce the complexity of writing a follow-up article, further improving their ability to conduct impactful research. A set of skills can also serve as a foundation for learning increasingly complex ones, as a form of auto-curriculum.
By addressing the three aforementioned fundamental challenges, HRL aims to achieve faster learning and planning, ultimately improving the agent’s problem-solving capabilities. Beyond these, HRL also has the potential to tackle the additional challenge of interpretability.
Interpretability. While not all HRL algorithms produce interpretable behaviour, those that do offer the unique advantage of allowing human observers to better understand an agent’s decision-making process. As such agents become increasingly powerful, they will eventually be deployed in real-world situations where the consequences of their actions carry considerable stakes. A crucial requirement would then be our capacity to ensure their alignment, and interpreting their decisions is a key aspect of this challenge (Amodei et al., 2016). HRL could provide an interpretable interface for AI alignment via a more abstract decision formulation than the one defined in the environment. For instance, in a robot navigation task, low-level actions such as the forces applied at the joints are particularly hard to interpret compared to a sequence of semantically meaningful skills such as “reach the stairs” and “descend to the first floor”. Humans may be able to follow the agent’s reasoning at that level of abstraction, and provide feedback as to the type of goals that are to be preferred.
# 2.2 Trade-offs
HRL builds on the inductive bias that tasks can be naturally decomposed into simpler, modular, and compositional subtasks, making it especially effective when such a hierarchical structure is apparent. However, while it offers several potential advantages, consistent with the No Free Lunch Theorem (Wolpert and Macready, 1997), HRL is not guaranteed to outperform non-hierarchical RL methods across all tasks. Its bias may lead to suboptimal outcomes when the assumed structure does not match the problem’s underlying one. In other words, a poorly chosen task decomposition can sometimes make a problem harder, not easier. Taking the analogy of programming, adding abstractions in a codebase can simultaneously help in seeing a broader picture, but also obscure the important details (Victor, 2011). As a result, for any task or environment, HRL agents face a trade-off between performance, sample efficiency, and computation efficiency, as illustrated in Figure 3.
We first consider the trade-off between performance and sample efficiency. This trade-off can be appreciated from a theoretical point of view: under standard assumptions, the optimal policy can always be represented using primitive actions alone (Bertsekas, 1995). However, learning the optimal policy for large and realistic environments is often simply intractable. Rather than pursuing perfect solutions, an agent should embrace efficient learning algorithms to develop reasonable but often suboptimal policies. This is one of the main appeals of HRL: by re-composing existing solutions (i.e., behaviours achieving particular subtasks), an agent may be able to quickly find approximate solutions for a variety of tasks, offering a promising way to trade off optimality with sample efficiency. For example, a pre-trained skill that opens doors allows an agent to bypass learning the complex motor controls for that specific action. However, this very abstraction can be limiting: if a door is stuck and requires an unusual push-and-jiggle motion, the rigid pre-defined skill might fail, preventing the agent from solving an edge case that a more flexible, low-level policy could have handled.

Figure 3: (Left) Agents trade off between three different objectives: performance (in terms of reward), sample efficiency (the amount of data required to reach a certain performance), and computational efficiency (the amount of computation needed to do so). The Pareto frontier between performance and sample efficiency shifts with different compute budgets. For instance, with unlimited compute (low computational efficiency), high performance can be achieved at moderate sample efficiency. (Right) A qualitative illustration of a “flat” agent and a hierarchical agent under the Pareto frontier of the performance vs. sample efficiency trade-off, given a fixed compute budget. A “flat” agent, as opposed to a hierarchical one, does not use temporally extended actions. While it is not always the case, hierarchical agents tend to trade some amount of performance for improved sample efficiency.
Another important challenge faced by agents interacting with complex and realistic environments is computational efficiency: the amount of computation spent selecting the right action at each timestep. Such computation can correspond to the neural network size, the maximum depth allowed for an agent using Monte Carlo Tree Search (Coulom, 2006), or the length of the reasoning trace of an LLM. Suppose the computation time is unrestricted, e.g., for agents acting in a simulator with the liberty of performing thousands of imagined rollouts for each timestep. In such a case, large amounts of computation can be spent on planning the next action. However, in real-world scenarios, the compute time per timestep is constrained. As HRL agents make decisions at a high level of abstraction, computation time can be managed more flexibly. For example, a robotic agent equipped with an LLM may plan over a set of semantically meaningful skills, which is significantly smaller than the underlying continuous action space. Since each high-level decision made by such an agent typically takes place over multiple timesteps, the cost of deliberating is naturally amortized over such timescales.
On the Importance of Knowledge Reuse. A common pitfall in HRL applications is that the number of interactions required to discover the hierarchical structure of a problem can be greater than the number of interactions needed to solve the problem itself, highlighting the importance of carefully considering the types of problem for which HRL is used (see Section 10). This cost can be amortized in different ways. For example, it may be offset if the agent is expected to complete many different tasks within its lifetime, allowing the learned subtasks to be potentially reused. Alternatively, reusing existing knowledge—such as offline datasets (Section 5) and foundation models (Section 6)—can also help mitigate this cost. As we will see, HRL’s formalism offers a natural and particularly promising way to incorporate such prior knowledge.
# 3. Formalizing Hierarchical Reinforcement Learning
We use the notation introduced by Sutton and Barto (2018): capital letters refer to random variables, whereas lowercase letters refer to their instantiation. Table 2 summarizes the notation used in this section.
# 3.1 Reinforcement Learning
We consider an agent interacting with an environment where the agent is in state $S _ { t } \in \mathcal S$ at timestep $t$ , selects an action $A _ { t } \in \mathcal A$ , and in response the environment emits a scalar reward $R _ { t + 1 } \in \mathbb { R }$ and transitions to a new state, $S _ { t + 1 } \in \mathcal { S }$ . This transition happens according to a transition probability distribution,
$$
p ( s ^ { \prime } | s , a ) = p ( S _ { t + 1 } = s ^ { \prime } | S _ { t } = s , A _ { t } = a ) .
$$
The agent’s goal is to find a policy $\pi : \mathcal { S } \to \Delta ( \mathcal { A } )$ , where $\Delta ( \mathcal { A } )$ denotes the set of probability distributions over $\mathcal { A }$ , that maximizes the expected return, $\mathbb { E } _ { \pi } [ G _ { t } ]$ , where the return is the discounted sum of rewards:
$$
G _ { t } = \sum _ { i = t } ^ { \infty } \gamma ^ { i - t } R _ { i + 1 } ,
$$
where $\gamma \in \left[ 0 , 1 \right)$ is the discount factor. This 5-tuple, $\langle \mathcal { S } , \mathcal { A } , R , p , \gamma \rangle$ defines a Markov Decision Process (MDP) (Puterman, 1994), the most commonly accepted formalism in RL.
When following a particular policy $\pi$ , the value of each state can be represented by the state value function,
$$
\begin{array} { r } { v _ { \pi } ( s ) = \mathbb { E } _ { \pi } \left[ G _ { t } | S _ { t } = s \right] . } \end{array}
$$
Similarly, we may consider the value of being in state $s$ and taking action $a$ , following policy $\pi$ afterward, represented by the action value function, or $q$ -function,
$$
q _ { \pi } ( s , a ) = \mathbb { E } _ { \pi } \left[ G _ { t } | S _ { t } = s , A _ { t } = a \right] .
$$
This function can be written recursively,
$$
\begin{array} { l } { { q _ { \pi } ( s , a ) = \mathbb { E } _ { \pi } \left[ R _ { t + 1 } + \gamma R _ { t + 2 } + \gamma ^ { 2 } R _ { t + 3 } + \cdots \mid S _ { t } = s , A _ { t } = a \right] } } \\ { { \ = r ( s , a ) + \gamma \displaystyle \sum _ { s ^ { \prime } } p ( s ^ { \prime } \mid s , a ) v _ { \pi } ( s ^ { \prime } ) } } \\ { { \ = r ( s , a ) + \gamma \displaystyle \sum _ { s ^ { \prime } } p ( s ^ { \prime } \mid s , a ) \displaystyle \sum _ { a ^ { \prime } } \pi ( a ^ { \prime } \mid s ^ { \prime } ) q _ { \pi } ( s ^ { \prime } , a ^ { \prime } ) . } } \end{array}
$$
A similar derivation is possible for the value function, and this set of equations is referred to as the Bellman equations for evaluation (Bellman, 1957).
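The Bellman evaluation equations above can be solved by simple fixed-point iteration. The sketch below (our own illustration, not from the paper) evaluates a uniform-random policy on a small randomly generated MDP; the state and action counts, transition tensor, and reward table are all illustrative assumptions.

```python
import numpy as np

# Hypothetical 3-state, 2-action MDP: p[s, a, s'] = p(s'|s,a), r[s, a] = r(s,a).
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # rows sum to 1
r = rng.standard_normal((n_states, n_actions))
pi = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform policy pi(a|s)
gamma = 0.9

def policy_evaluation(pi, p, r, gamma, tol=1e-8):
    """Iterate the Bellman evaluation equation until v_pi converges."""
    v = np.zeros(p.shape[0])
    while True:
        # q(s,a) = r(s,a) + gamma * sum_s' p(s'|s,a) v(s')
        q = r + gamma * p @ v
        # v(s) = sum_a pi(a|s) q(s,a)
        v_new = (pi * q).sum(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

v_pi = policy_evaluation(pi, p, r, gamma)
print(v_pi)
```

Since $\gamma < 1$, each sweep is a contraction, so the iteration converges to the unique fixed point $v_\pi$ regardless of initialization.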
The goal of an RL agent is to maximize the rewards it gets from interacting with the environment. In an MDP, there exists at least one optimal policy, defined as,
$$
\pi ^ { * } = \arg \operatorname* { m a x } _ { \pi } q _ { \pi } ( s , a ) .
$$
In most settings, this quantity is impractical to compute exactly, and we must resort to approximation. Such approximations stem from two families of algorithms for learning reward-maximizing policies. The first family of methods, called value-based methods, greedily maximizes an estimated action-value function. Q-Learning (Watkins and Dayan, 1992) is likely the most used algorithm to estimate the optimal policy, $\pi ^ { * }$ , whose update rule takes the following form,
$$
Q ( S _ { t } , A _ { t } ) \leftarrow Q ( S _ { t } , A _ { t } ) + \alpha \left[ R _ { t + 1 } + \gamma \operatorname* { m a x } _ { a \in \mathcal { A } } Q ( S _ { t + 1 } , a ) - Q ( S _ { t } , A _ { t } ) \right] .
$$
This update has been the basis of the Deep Q-Networks algorithm (Mnih et al., 2015).
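To make the update concrete, here is a hedged sketch of tabular Q-learning on a hypothetical 5-state chain (move left or right, reward 1 on reaching the rightmost state); the environment and hyperparameters are our own illustrative choices, not from the text.

```python
import numpy as np

# Hypothetical 5-state chain: action 1 moves right, action 0 moves left,
# reaching the rightmost state yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
gamma, alpha = 0.95, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for _ in range(2000):
    s, done = 0, False
    while not done:
        a = int(rng.integers(n_actions))  # uniform behaviour policy: Q-learning is off-policy
        s_next, r, done = step(s, a)
        # Q(S,A) <- Q(S,A) + alpha [ R + gamma * max_a' Q(S',a') - Q(S,A) ]
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

greedy = np.argmax(Q, axis=1)
print(greedy[:4])  # greedy policy moves right in every non-terminal state
```

Because the update bootstraps off the greedy action at $S_{t+1}$ rather than the action actually taken, the behaviour policy here can remain uniformly random while $Q$ still converges toward $q_{\pi^*}$.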
The second family of methods directly maximizes the quantity of interest, that is, the discounted sum of returns. The policy gradient theorem (Sutton et al., 1999a) provides the gradient of the expected discounted return from an initial state distribution, $d ( s _ { 0 } )$ , with respect to a stochastic policy, $\pi _ { \zeta } ( \cdot | s )$ , parameterized by $\zeta$ ,
$$
\frac { \partial J ( \zeta ) } { \partial \zeta } = \sum _ { s } d _ { \pi } ^ { \gamma } ( s ) \sum _ { a } \frac { \partial \pi _ { \zeta } \left( a | s \right) } { \partial \zeta } q _ { \pi } ( s , a ) ,
$$
where $d _ { \pi } ^ { \gamma } ( s ) = \sum _ { s _ { 0 } } d ( s _ { 0 } ) \sum _ { t = 0 } ^ { \infty } \gamma ^ { t } P _ { \pi } ( S _ { t } = s | S _ { 0 } = s _ { 0 } )$ is the discounted state occupancy measure, and $P _ { \pi } ( S _ { t } = s | S _ { 0 } = s _ { 0 } )$ is the probability of reaching state $s$ from $s _ { 0 }$ in $t$ steps when following policy $\pi$ . This update has been the basis of many modern algorithms, including the well-known proximal policy optimization (Schulman et al., 2017).
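A minimal way to see the policy gradient theorem in action is REINFORCE (Williams, 1992) on a two-armed bandit with a softmax policy: the sampled return weights the score function $\nabla_\zeta \log \pi_\zeta(a)$. The setup below (noiseless rewards for reproducibility, the learning rate, and the seed) is an illustrative assumption, not an experiment from the paper.

```python
import numpy as np

# Hypothetical two-armed bandit: arm 1 is the better arm.
rng = np.random.default_rng(0)
true_rewards = np.array([0.0, 1.0])  # deterministic reward per arm
zeta = np.zeros(2)                   # softmax policy parameters
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(zeta)
    a = int(rng.choice(2, p=probs))
    g = true_rewards[a]              # return G for this one-step episode
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0            # grad_zeta log softmax(zeta)[a] = onehot(a) - probs
    zeta += lr * g * grad_log_pi     # stochastic gradient ascent on expected return

print(softmax(zeta))  # most probability mass ends up on the better arm
```

The update is an unbiased sample of the policy gradient above, specialized to a single-state MDP where $d_\pi^\gamma$ is trivial.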
# 3.2 Hierarchical Reinforcement Learning
The temporal structure an agent learns using HRL has been formalized under a variety of names, such as skills, options, temporal abstractions, or goal-conditioned policies, amongst others. These frameworks carry their own notations and focus on particular methodologies or research questions. We adopt the options formalism (Sutton et al., 1999b; Precup and Sutton, 2000) as it provides a useful and comprehensive framework for expressing temporal structure. In Section 3.2.2, we expand on how alternative formalisms are fundamentally connected by focusing on what constitutes HRL at its core.
An HRL agent makes use of a set of options, O, which are defined by three components: a policy, an initiation function, and a termination function. These components can be implemented through parameterized functions, such as neural networks, or symbolically through code (e.g., see Section 6). In Figure 4, we illustrate the temporal structure exhibited by options while interacting with an environment. More formally,
• $\pi : \mathcal { S } \times \mathcal { O } \to \Delta ( \mathcal { A } )$ is an option policy, which selects an action according to the current state and the current option.2 This quantity can also be referred to as the skill’s policy, the intra-option policy, or the goal-conditioned policy (see Section 3.2.2). When this function is parameterized by a set of weights $\theta$ , we will write $\pi _ { \boldsymbol { \theta } } ( a | \boldsymbol { s } , \boldsymbol { o } )$ .
• $\beta : \mathcal{S} \times \mathcal{O} \to [0, 1]$ is the option termination function, giving the probability with which option $o$ should stop executing if it reaches state $s$. When this function is parameterized, we will use the notation $\beta_{\psi}(s, o)$, where $\psi$ represents the termination function parameters.
• $\mathcal{I} : \mathcal{S} \times \mathcal{O} \to [0, 1]$ is the option initiation function, which determines to what degree an option $o$ can start its execution from a certain state. Traditionally, this component is referred to as the initiation set, which determines the set of states in which an option can initiate. When this component is parameterized, we will use the notation $\mathcal{I}_{\chi}(s, o)$, where $\chi$ represents the initiation function parameters.
Additionally, to select among the set of options, an HRL agent uses:
• $\mu : \mathcal{S} \to \Delta(\mathcal{O} \cup \mathcal{A})$, the high-level policy, which outputs a probability distribution over the set of options $\mathcal{O}$ and actions $\mathcal{A}$ given a state $s$. When this probability is parameterized, we will use the notation $\mu_{\kappa}(o | s)$, where $\kappa$ represents the high-level policy parameters. Similarly to the previous components, in practice, this policy can also be instantiated by other means, e.g., a programmatic policy that directly encodes domain knowledge or follows predefined rules (see Section 9.3).
When planning with options, an HRL agent will do so through:
• $P_{\mathcal{O}} : \mathcal{S} \times \mathcal{O} \times \mathcal{S} \to \mathbb{R}$, the option model function. This function takes as input a state $s$, an option $o$, a future state $s^{\prime}$, and the discount factor $\gamma$, and outputs a ($\gamma$-discounted) measure of how likely the option $o$ is to terminate at state $s^{\prime}$, at any point in the future.
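As a concrete sketch of the first three components, the snippet below groups $\mathcal{I}$, $\pi$, and $\beta$ into a small container and executes one option until its termination function fires. All names (`Option`, `run_option`, the corridor environment) are illustrative, not taken from any existing codebase:

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """One option o = (I, pi, beta), following the components above."""
    initiation: Callable[[int], float]   # I(s, o): degree to which o can start in s
    policy: Callable[[int], int]         # pi(.|s, o): picks an action in state s
    termination: Callable[[int], float]  # beta(s, o): probability o stops in s

def run_option(env_step, s, option, rng, max_steps=100):
    """Execute `option` from state s until beta says stop (or a step cap)."""
    assert option.initiation(s) > 0, "option cannot initiate here"
    trajectory = [s]
    for _ in range(max_steps):
        a = option.policy(s)
        s = env_step(s, a)
        trajectory.append(s)
        if rng.random() < option.termination(s):
            break
    return trajectory

# Example: a "go right until the wall" option on a 1-D corridor 0..4.
env_step = lambda s, a: min(4, max(0, s + a))
go_right = Option(initiation=lambda s: 1.0,
                  policy=lambda s: +1,
                  termination=lambda s: 1.0 if s == 4 else 0.0)
print(run_option(env_step, 0, go_right, random.Random(0)))  # [0, 1, 2, 3, 4]
```

Note how the caller never inspects the option's internals: the high-level policy $\mu$ would only decide *which* option to hand to `run_option` next.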
Not all of the papers covered in this work will explicitly define each of these components. It is common for research in HRL to only highlight the components for which a significant contribution is made and to make assumptions about the other components. For instance, the termination function is often assumed to output termination after a fixed number of timesteps. Similarly, the option initiation function is often assumed to allow option initiation across the whole state space. We will highlight the relevant aspects within the presentation of each work.
Figure 4: A simplified diagram illustrating the decision process of a hierarchical agent: large white nodes signify high-level decisions made over options, and large grey nodes represent the state observed by the agent at that moment. The high-level decisions can be made over a potentially infinite set of options, such as when option policies are represented as goal-conditioned policies. Grey trails represent state transitions during option execution. This diagram illustrates how different options last for different timescales and traverse the environment in a diversity of directions. After an option finishes execution, the agent must make its next high-level decision.
Using the presented terms, we can now define the option value function,
$$
q _ { \pi } ( s , o ) = \sum _ { a } \pi ( a \mid s , o ) q _ { u } ( s , o , a ) ,
$$
where $q _ { u } : \mathcal { S } \times \mathcal { O } \times \mathcal { A } \to \mathbb { R }$ is the value of executing action $a$ in the context of a state-option pair:
$$
q _ { u } ( s , o , a ) = r ( s , a ) + \gamma \sum _ { s ^ { \prime } } p ( s ^ { \prime } \mid s , a ) u _ { \beta } ( o , s ^ { \prime } ) .
$$
The function $u_{\beta} : \mathcal{O} \times \mathcal{S} \to \mathbb{R}$ is called the option-value function upon arrival; that is, the value of executing option $o$ upon entering a state $s^{\prime}$ is given by:
$$
u _ { \beta } ( o , s ^ { \prime } ) = ( 1 - \beta ( s ^ { \prime } , o ) ) q _ { \pi } ( s ^ { \prime } , o ) + \beta ( s ^ { \prime } , o ) v _ { \mu } ( s ^ { \prime } ) .
$$
Finally, the function $v _ { \mu } : \mathcal { S } \to \mathbb { R }$ is defined as the value function over a set of options
$$
v _ { \mu } ( s ) = \sum _ { o } \mu ( o | s ) q _ { \pi } ( s , o ) .
$$
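The three interlocking definitions of $q_{\pi}$, $u_{\beta}$, and $v_{\mu}$ can be evaluated jointly by fixed-point iteration. The toy chain MDP and the single hand-coded option below are illustrative assumptions, not from the original papers; the update rules follow the three equations above directly:

```python
# Deterministic chain MDP: states 0-1-2, with state 2 absorbing.
S, A, GAMMA = [0, 1, 2], ["L", "R"], 0.9

def step(s, a):                       # deterministic transition function
    if s == 2:
        return 2                      # goal state is absorbing
    return min(2, s + 1) if a == "R" else max(0, s - 1)

def r(s, a):                          # reward 1 for stepping into the goal
    return 1.0 if s == 1 and a == "R" else 0.0

# One hand-coded option "right": always moves right, terminates at the goal.
OPTIONS = ["right"]
pi   = lambda a, s, o: 1.0 if a == "R" else 0.0   # option policy pi(a|s,o)
beta = lambda s, o: 1.0 if s == 2 else 0.0        # termination beta(s,o)
mu   = lambda o, s: 1.0                           # high-level policy mu(o|s)

# Iterate the mutual recursion v_mu -> u_beta -> q_pi until convergence.
q = {(s, o): 0.0 for s in S for o in OPTIONS}
for _ in range(100):
    v = {s: sum(mu(o, s) * q[s, o] for o in OPTIONS) for s in S}
    u = {(o, s): (1 - beta(s, o)) * q[s, o] + beta(s, o) * v[s]
         for o in OPTIONS for s in S}
    q = {(s, o): sum(pi(a, s, o) * (r(s, a) + GAMMA * u[o, step(s, a)])
                     for a in A)
         for s in S for o in OPTIONS}

print(q[0, "right"], q[1, "right"])  # 0.9 1.0
```

The deterministic transitions let us use `step(s, a)` in place of the expectation over $p(s^{\prime} \mid s, a)$; in a stochastic MDP that term becomes a sum over successor states.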
Subgoal Options. Technically, an option is simply described by a way of initiating, a way of acting, and a way of terminating—its behaviour need not maximize any objective at all. As an example, consider an option that initiates everywhere, terminates nowhere, and whose policy arbitrarily maps each state to an action; this is a well-defined option but does not optimize any useful objective. However, for option discovery, rather than searching for these three quantities in their raw form, it is often more convenient to think of options as achieving subgoals. In fact, the vast majority of the literature on option discovery can be seen as achieving subgoals (e.g., McGovern and Barto, 2001; Precup, 2001; Colas et al., 2022; Sutton et al., 2023); we refer to these options as subgoal options (Bagaria, 2025). One way to learn options that achieve subgoals is through the following:
• $r^{o} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$, the option reward function, is a reward function conditioned on an option $o$. We also refer to this quantity as the goal reward function. It takes as input a state $s$, an action $a$, and a next state $s^{\prime}$, and outputs a scalar reward. Maximizing it produces the corresponding option policy. When parameterized, we will denote its parameters by $\nu$.
It is important to note that not all subgoal options need to maximize option reward functions. Indeed, some approaches learn a set of useful behaviours through imitation learning (e.g., Le et al., 2018; Team et al., 2024). Alternatively, subgoal options can be defined by mapping states to actions through symbolic functions such as code (see Sections 6 and 9.3). It is also possible for these quantities to take a slightly different set of inputs; for example, the option reward function may only receive as input the current state $s$, written as $r^{o}(s)$.
# 3.2.2 Related Terminologies and Formalisms
The previous section uses the language of options to formalize the learned temporal structure. As the field of HRL is rich and diverse, some researchers may feel misrepresented by such a formalism. Therefore, throughout the paper, we may interchangeably refer to options (with the notation $o$ for each option) as skills (with the notation $z$ for each skill), goal-conditioned policies (with the notation $g$ for each goal), or simply refer to the general term of temporal abstractions. We believe such differences in language are mostly superficial and may hinder the integration of the best practices from each of these fields. We now highlight the differences among various related formalisms and illustrate their connections.
Skills. The skill terminology has largely been used informally in the HRL literature to refer to temporally extended behaviours. Skills can most commonly be formalized using the options framework, but they can also be formalized using other formalisms such as macroactions (Fikes and Nilsson, 1971), feudal hierarchies (Dayan and Hinton, 1993), MAXQ (Dietterich et al., 1998), and HAMs (Parr and Russell, 1997).
Goal-conditioned RL. A goal can be formally defined using a triple, $(g, r^{g}, \gamma_{g})$, where $g \in \mathbb{R}^{d}$ is a goal vector that can, for example, be used to condition a policy, $\pi(a | s, g)$, or a value function, $v(s | g)$; $r^{g} : \mathcal{S} \to \mathbb{R}$ is a goal reward function that maps each state to a real-valued number; and, finally, $\gamma_{g} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ describes the goal’s continuation function, and hence the timescale for achieving that goal (Kaelbling, 1993a; Schaul et al., 2015). Most work in goal-conditioned RL (GCRL) considers the state space to be the set of goals an agent should reach (Andrychowicz et al., 2017). A key emerging research question is how to define representations that afford meaningful measures of the distance between the current state and the goal state. Although some of the works from the GCRL literature are present in this work, we refer the reader to Liu et al. (2022) for an in-depth review.
Discovering Temporal Structure: An Overview of Hierarchical RL
Feudal RL. In Feudal RL (Dayan and Hinton, 1993), decision-making is divided across multiple levels of the hierarchy, where higher-level “managers” set subgoals for lower-level “workers” who are rewarded by their managers for achieving these subgoals. The space from which the manager draws subgoals is usually continuous, whereas options are usually instantiated as a discrete set of policies. In the previous section, we intentionally refrained from specifying the nature of the option set, accommodating both discrete and continuous sets. The concept of a continuous option set can be interpreted through the lens of parameterized skills (da Silva et al., 2012).
# 3.2.3 Beyond Architectural Choices
In the previous section, we mentioned that skills, options, and goal-conditioned policies are slightly different instantiations of the same fundamental principle. We now attempt to clarify this statement. In Figure 5, we depict a set of common instantiations of hierarchical architectures. One such architecture is a modular architecture of hierarchical components: a high-level policy is explicitly defined, together with a collection of options, each potentially implemented through neural networks. From this perspective, it is clear that HRL is not restricted to such a hierarchical architecture, despite it being quite common in the literature.
An alternative instantiation is the goal-conditioned neural network, which can be instantiated by an LLM (see Section 6). However, HRL is not restricted to this architecture either. In fact, we argue that HRL is fundamentally defined through the algorithm, not the agent architecture. In the most general case, HRL can produce agents that are instantiated by a single, large neural network in which the options and goals are implicitly learned and defined within the neurons themselves.
An HRL algorithm empowers the agent’s exploration by selecting goals across time, and rewarding the agent for achieving them. It also facilitates more effective credit assignment by decomposing a long, continuous stream of experience into meaningful subtasks. Additionally, HRL can better prepare an agent for future challenges by promoting the learning of reusable behaviours, which can be explicitly or implicitly defined. These essentially represent the core benefits of HRL, as outlined earlier in this work in Section 2.1.
Figure 5: The agent architecture is a subproblem to the main question posed in HRL: how to discover temporal structure?
# 4. Discovery from Online Experience
In this section, we present work in option discovery that takes place in the online setting: the agent seeks to construct useful options by simply interacting with the environment. This setting has received significant attention because it holds the promise of scalability (Sutton, 2019)—a long-lived agent that can learn new, useful options simply via interaction and can potentially keep increasing its competence in the world, bootstrapping new skills with previously discovered ones (Ring, 1995; Schmidhuber, 2010).
Table 1: A summary of HRL methods shown in Sections 4, 5, and 6 that discover temporally abstract behaviours, highlighting the main benefits elaborated in Section 2.1. Each method links to the corresponding section. A single black dot (•) indicates that a class of methods generally contributes to addressing a specific challenge, while a double black dot (••) signifies that the class of methods is explicitly designed to tackle that challenge.
We broadly categorize this literature based on the proxy objectives used for option discovery. For each family of methods, we first describe the core intuition and key methodological patterns. Then, we discuss how each category contributes to the core benefits of HRL (as outlined in Section 2.1). Finally, we discuss some limitations of each category and highlight opportunities for research.
Before presenting the methods in detail, we direct the reader’s attention to Table 1, which provides an overview of all the discovery methods discussed in this work. For each method, we highlight which benefits have been most studied by researchers in the field: a single black dot (•) indicates that a class of methods generally contributes to addressing a specific challenge, while a double black dot (••) signifies that the class of methods is explicitly designed to tackle that challenge.
Figure 6: Bottleneck Discovery in Four Rooms. Skill discovery using (a) betweenness centrality, a measure of the likelihood that a state lies on the shortest path between any two other states, and (b) Q-cuts, which finds the edge that solves the Min-Cut problem on the transition graph. Both classes of methods attempt to identify bottleneck states and use them as option subgoals.
# 4.1 Bottleneck Discovery
Many challenging problems in RL have bottlenecks, which are small regions of states that an agent must pass through to reach a larger, potentially more interesting region of the state space. For example, in the Two Rooms task (Sutton et al., 1999b; Solway et al., 2014), the agent must go through a doorway state to access the goal in the other room. Another example is a player in a video game who must pick up a key to unlock a door that leads to the other levels. In these examples, the doorway and the key act as bottlenecks—reaching those states grants the agent access to an entirely new region of interesting states. When an agent identifies such bottleneck states during learning, it can define a subgoal option (as in Section 3.2.1) to reach each of them. Specifically, the option terminates with a positive subgoal reward when it reaches the bottleneck state, and continues without termination or reward otherwise. When bottleneck states are successfully identified and targeted with subgoal options, the agent often improves exploration, credit assignment, and transfer, as we will soon discuss. Given this intuitive appeal, several papers have proposed algorithms for identifying bottlenecks.
Most algorithms for finding bottlenecks begin with a graph-based view of the MDP: states are treated as nodes and an edge exists between two states, $( s , s ^ { \prime } )$ , when the agent can reach $s ^ { \prime }$ from $s$ in a single timestep:
$$
\begin{array}{r}{G = (\mathcal{S}, E), \qquad (s, s^{\prime}) \in E \iff \sum_{a \in \mathcal{A}} p(s^{\prime} \mid s, a) > 0.}\end{array}
$$
In this graph, bottlenecks have been described, and identified, using the following approaches:
# Diverse Density
An early approach to option discovery used the concept of diverse density, which measures how much more likely a state is to lie on a successful trajectory than an unsuccessful one. McGovern and Barto (2001) formulate bottlenecks as states with highly diverse density and propose a simple algorithm to identify them. Concretely, consider that the agent has a set of successful trajectories, ${ \mathcal { T } } ^ { + }$ , (each is a sequence of states leading to a goal state) and a set of unsuccessful trajectories, $\mathcal { T } ^ { - }$ , (sequences that did not reach the goal state). The diverse density score, $\mathrm { D D } ( s )$ , captures the probability that a given state, $s$ , occurs in successful trajectories and does not occur in unsuccessful trajectories:
$$
\mathrm{DD}(s) = \prod_{\tau \in \mathcal{T}^{+}} P(s \in \tau) \prod_{\tau \in \mathcal{T}^{-}} \Big(1 - P(s \in \tau)\Big),
$$
where the probability that a state occurs in a trajectory can be computed in tabular domains using visitation counts:
$$
P ( s \in \tau ) = { \frac { \mathrm { N u m b e r ~ o f ~ t i m e s ~ } s \ \mathrm { a p p e a r s ~ i n } \ \tau } { | \tau | } } .
$$
States with a diverse density greater than a threshold are chosen as bottlenecks, and subgoal options are created to reach them. A drawback is that trajectories must be classified as positive or negative depending on whether they were on the path to a goal state. Stolle and Precup (2002) address this shortcoming by defining diverse density over a family of tasks: bottleneck states are those that are repeatedly visited while solving many goal-reaching tasks.
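The diverse density computation is simple enough to sketch end to end. The trajectories below are a made-up two-rooms illustration (state names and the doorway "D" are hypothetical), with $P(s \in \tau)$ estimated from visitation counts as in the equation above:

```python
def dd_score(s, successes, failures):
    """Diverse density of state s (McGovern and Barto, 2001): high when s
    occurs often in successful trajectories and rarely in failed ones.
    P(s in tau) is estimated as count(s in tau) / len(tau)."""
    p = lambda state, tau: tau.count(state) / len(tau)
    score = 1.0
    for tau in successes:
        score *= p(s, tau)
    for tau in failures:
        score *= 1.0 - p(s, tau)
    return score

# Two-rooms intuition: every successful trajectory crosses the doorway "D",
# while failed trajectories wander inside the start room.
successes = [["A", "B", "D", "E", "G"], ["C", "D", "F", "G"]]
failures  = [["A", "B", "A"], ["C", "A", "C"]]

candidates = ["A", "B", "C", "D", "E", "F"]        # excluding the goal G itself
scores = {s: dd_score(s, successes, failures) for s in candidates}
print(max(scores, key=scores.get))  # D: the doorway maximizes diverse density
```

Any state missing from even one successful trajectory scores zero, which is exactly the shortcoming Stolle and Precup (2002) soften by pooling trajectories across a family of tasks.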
# Graph partitioning
Under the graph view of MDPs, bottlenecks can be interpreted as “accumulation” nodes— states in which many paths or trajectories coincide. These accumulation nodes, or bottleneck states, tend to separate loosely connected sub-graphs, which are otherwise densely connected among themselves. To see why bottlenecks separate loosely connected sub-graphs, Menache et al. (2002) describe the problem of going from a start state, $s$, to a goal state, $g$, as a Max-Flow problem (Ahuja et al., 1993): the agent should maximize the accumulation (or flow) of probability along paths that originate in $s$ and terminate in $g$. However, the problem of maximizing the flow in a graph is equivalent to the Min-Cut problem (Ford and Fulkerson, 1962), which requires identifying the lowest probability edges that can be removed to completely separate the source state, $s$, from the goal state, $g$. Off-the-shelf algorithms can be used to identify these min-cuts, which are interpreted as bottleneck states, and used as a target for new subgoal options (Kazemitabar and Beigy, 2009).
Specifically, the Q-Cut algorithm (Menache et al., 2002) finds such bottlenecks by solving the Min-Cut problem. In Min-Cut, the nodes of the graph are divided into disjoint sets, $U$ and $V$ ( $U \cup V = \mathcal { S }$ and $U \cap V = \emptyset$ ), such that the source state belongs to the first set, $s \in U$ , and the goal state belongs to the second set, $g \in V \setminus U$ . The cut-value between them is defined as the sum of probabilities along the edges that connect the two subsets:
$$
\operatorname{Cut}(U, V \setminus U) = \sum_{(i, j) \in E : i \in U, j \in V \setminus U} \sum_{a} p(j \mid i, a).
$$
Additionally, the min-cut is the solution to the following optimization problem, which searches for the edges that separate source $s$ and goal $g$ while minimizing the sum of probabilities along the cut edges:
$$
\begin{array}{rl} & \mathrm{MinCut}(G) = \{ (i, j) \in E : i \in U^{*}, j \in V \setminus U^{*} \}, \\ & \quad \mathrm{where~} U^{*} = \underset{U \subset \mathcal{S}}{\arg\min} \operatorname{Cut}(U, V \setminus U). \end{array}
$$
Although there are exponentially many valid cuts, the Min-Cut problem can be solved in polynomial time (Ford and Fulkerson, 1962). Finally, Menache et al. (2002) define the bottlenecks as the destination nodes of the min-cut edges: $B = \{ j \mid (i, j) \in \mathrm{MinCut}(G) \}$.
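The Max-Flow/Min-Cut duality can be illustrated on a tiny two-room graph. The sketch below uses unit capacities and a textbook BFS-based max-flow (Edmonds-Karp) rather than the probability-weighted formulation of Menache et al. (2002); the graph layout and the doorway node "D" are illustrative assumptions:

```python
from collections import deque

def min_cut_edges(edges, source, sink):
    """Edmonds-Karp max-flow on a unit-capacity directed graph, then read the
    min-cut off the residual graph: edges leaving the source's residual side."""
    cap, nodes = {}, set()
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)            # residual (reverse) edge
        nodes |= {u, v}

    def bfs():                               # shortest augmenting path
        parent, queue = {source: None}, deque([source])
        while queue:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and cap.get((u, v), 0) > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None

    while (parent := bfs()) is not None:     # push one unit along each path
        v = sink
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u

    side, queue = {source}, deque([source])  # residual reachability from source
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in side and cap.get((u, v), 0) > 0:
                side.add(v)
                queue.append(v)
    return [(u, v) for (u, v) in edges if u in side and v not in side]

# Two rooms joined by a single crossing edge into the doorway node D.
left_room  = [("s", "a"), ("a", "s"), ("s", "b"), ("b", "s"), ("a", "b"), ("b", "a")]
doorway    = [("a", "D"), ("D", "a")]
right_room = [("D", "c"), ("c", "D"), ("c", "g"), ("g", "c"), ("D", "g"), ("g", "D")]
cut = min_cut_edges(left_room + doorway + right_room, "s", "g")
print(cut)  # [('a', 'D')]: the lone crossing edge, so D is the bottleneck state
```

Reading off $B$ as the destination nodes of the cut edges recovers the doorway, matching the Q-Cut recipe.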
A drawback of Q-Cut is that the entire MDP must be described with a global graph, which is not scalable. To address this shortcoming, L-Cut (Şimşek et al., 2005) constructs “local graphs” using states visited in an episode. Instead of searching for individual states, Mannor et al. (2004) suggest identifying clusters of states and then connecting them using options; this approach has recently been extended using more sophisticated clustering techniques (Metzen, 2012; Srinivas et al., 2016; Campos et al., 2020; Bacon, 2013). Notably, Evans and Şimşek (2023)’s use of graph modularity (Newman and Girvan, 2004) as the metric for clustering allows them to efficiently learn multi-level hierarchies, where each level operates at a different timescale.
# Graph Centrality
In graph theory, centrality measures the importance of each node within a graph. The search for useful subgoals in an MDP can be viewed as being analogous to identifying central nodes in a graph. Centrality measures are theoretically well-understood, and several efficient algorithms exist for computing them in large graphs, so it is attractive to use these methods for option discovery. Although many different graph centrality measures exist, Şimşek and Barto (2008) advocate for betweenness centrality because of its ability to find bottlenecks in large graphs. Betweenness quantifies how important a node is in a network by counting how many times it appears on the shortest path between other nodes (Şimşek and Barto, 2008). Specifically, the betweenness score $b(v)$ for a vertex (or equivalently, a state) is given as:
$$
b ( v ) = \sum _ { s \neq t \neq v } \frac { \sigma _ { s t } ( v ) } { \sigma _ { s t } } w _ { s t } ,
$$
where $\sigma _ { s t }$ is the number of shortest paths from state $s$ to state $t$ , $\sigma _ { s t } ( \boldsymbol { v } )$ is the number of those paths that pass through state $v$ , and $\boldsymbol { w } _ { s t }$ is the weight assigned to paths from vertex $s$ to vertex $t$ . The ratio in Equation 19 is the fraction of all-pairs shortest paths in the state transition graph that go through vertex $v$ . When $\boldsymbol { w } _ { s t }$ is the same for all pairs of nodes, then Equation 19 is the betweenness centrality measure on graphs. To tailor this centrality measure to MDPs, $\boldsymbol { w } _ { s t }$ is set to the expected reward while going from state $s \to t$ .
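With unit weights ($w_{st} = 1$), the betweenness score can be computed by brute force on a small graph, using the identity $\sigma_{st}(v) = \sigma_{sv}\,\sigma_{vt}$ whenever $d(s,v) + d(v,t) = d(s,t)$. The two-room graph below is an illustrative assumption (not a benchmark from the cited papers):

```python
from collections import deque

def bfs_counts(adj, src):
    """BFS distances and numbers of shortest paths (sigma) from src."""
    dist, sigma = {src: 0}, {src: 1}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], sigma[v] = dist[u] + 1, 0
                queue.append(v)
            if dist[v] == dist[u] + 1:     # v extends a shortest path via u
                sigma[v] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """b(v) = sum over s != t != v of sigma_st(v) / sigma_st, unit weights."""
    info = {s: bfs_counts(adj, s) for s in adj}
    b = {v: 0.0 for v in adj}
    for s in adj:
        d_s, sig_s = info[s]
        for t in adj:
            if t == s or t not in d_s:
                continue
            for v in adj:
                if v in (s, t):
                    continue
                d_v, sig_v = info[v]
                if d_s[v] + d_v[t] == d_s[t]:   # v lies on a shortest s-t path
                    b[v] += sig_s[v] * sig_v[t] / sig_s[t]
    return b

# Two triangular "rooms" connected only through the doorway node D.
adj = {"a1": ["a2", "a3"], "a2": ["a1", "a3"], "a3": ["a1", "a2", "D"],
       "D": ["a3", "b1"],
       "b1": ["D", "b2", "b3"], "b2": ["b1", "b3"], "b3": ["b1", "b2"]}
b = betweenness(adj)
print(max(b, key=b.get))  # D: every cross-room shortest path passes the doorway
```

Every ordered cross-room pair contributes a full unit to $b(\text{D})$, which is why the doorway dominates the within-room states; Brandes-style accumulation would replace the cubic loop for larger graphs.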
# 4.1.1 Benefits and Opportunities
Having introduced the major approaches for identifying bottlenecks, we now discuss how the resulting algorithms contribute to the aforementioned benefits of HRL.
Exploration. If an agent can easily reach the bottleneck states in an environment, it can perform more effective exploration. This is because states that were once hard to reach become more accessible, even under a random policy (these states are often referred to as access states). For example, picking up a key makes it easy for the player of a video game to visit previously unseen rooms. When this bottleneck discovery is done in an incremental fashion, as in L-Cut (Şimşek et al., 2005), the agent expands the frontier of its experiences in the environment.
Credit Assignment. Methods like that of McGovern and Barto (2001), and Şimşek and Barto (2008) require the agent to solve the problem several times before option discovery can even begin; in such cases, exploration is clearly not the main benefit of discovering options. However, once the agent identifies bottlenecks, it can perform rapid credit assignment. This is primarily because of three reasons: (a) rather than progressing step-by-step, value can propagate in large, multi-step “jumps” from the states in which option execution terminates to the states from where it initiates (Sutton et al., 1999b), (b) value from rewarding events only needs to propagate along trajectories that pass through the bottleneck, greatly reducing the state-action pairs whose values need to be updated, and (c) in long-horizon problems, the difference in value between different actions—the action-gap (Bellemare et al., 2016a)—tends to approach zero (Lehnert et al., 2018), making it impossible to learn an accurate action-value function; in such cases Lehnert et al. (2018) suggest partitioning the state-space along bottlenecks, so that each partition can be treated as a short-horizon problem, inducing a larger action-gap, and hence, easier credit assignment.
Transfer. Bottlenecks are useful for transfer because they are largely task agnostic—they focus on capturing structure in the transition function, and so the same bottlenecks are often useful for a family of tasks or reward functions. For example, in the Two Rooms task, the ability to quickly and reliably reach the doorway enables the agent to reach the goal, no matter where it is placed in the second room (McGovern and Barto, 2001).
# Opportunities for Research.
• Scalability. Most methods for finding bottlenecks apply to discrete graphs; as a result, these techniques often struggle to scale to large, continuous MDPs. Notable exceptions include spectral methods (discussed in Section 4.2), which compute continuous properties of the underlying graph, without explicitly representing the graph in the first place.
• Performance guarantees. It is generally not well understood how the proxy objective of targeting bottlenecks contributes to high-level objectives of the agent, such as reward maximization or faster planning. Methods outlined in Section 4.6 attempt to answer this question in general for all option discovery methods, but given the number of option discovery algorithms related to bottlenecks, it would be useful to find if there is a high-level objective of the agent that is maximized (at least to some degree) while optimizing for this proxy objective.
# 4.2 Spectral Methods
Many option discovery methods are based on the idea of leveraging the state space’s topology, be it to discover options that identify key states that connect different partitions of the environment (Şimşek et al., 2005), that connect states that are far from each other when looking at the diffusion properties of the environment (e.g., Machado and Bowling, 2016; Machado et al., 2017), or that easily allow the agent to traverse the environment in a reusable manner (Liu et al., 2017; Klissarov and Machado, 2023). They are termed spectral methods because, through the eigenvectors of a matrix representation of the environment, they extract information from the state space, such as connectivity or diffusion.
The different algorithms in this group leverage the different ways of representing the environment as a matrix and the different types of information one can extract from such matrices. Originally, heavily inspired by results from the graph theory literature, these methods were based on the graph Laplacian and its eigenfunctions,3 which can approximate any function on the graph (Chung, 1997). The normalized graph Laplacian, $\mathcal { L }$ , for example, is defined as
$$
\mathcal { L } = \mathbf { D } ^ { - 1 / 2 } ( \mathbf { D } - \mathbf { A } ) \mathbf { D } ^ { - 1 / 2 } ,
$$
where $\mathbf { A }$ is the graph’s adjacency matrix obtained by modelling each state in the environment as a node. The adjacency matrix reflects the degree of connectivity between two states. The matrix $\mathbf { D }$ is a diagonal matrix whose entries are the row sums of $\mathbf { A }$ . In the reinforcement learning literature, the eigenvectors of the graph Laplacian are also known as proto-value functions (PVFs; Mahadevan, 2005; Mahadevan and Maggioni, 2007).
Importantly, when considering the eigendecomposition, $\mathcal{L} \mathbf{e} = \lambda \mathbf{e}$, the eigenvector of the graph Laplacian associated to the second smallest eigenvalue captures the number of connected components in a graph (Shi and Malik, 2000), allowing one to easily identify bottleneck states (Şimşek et al., 2005), as discussed in Section 4.1. The eigenvectors of the graph Laplacian, in general, capture different time scales of diffusion, which can be used to discover options that promote exploration, such as eigenoptions (Machado et al., 2017, 2018; Machado, 2019), covering options (Jinnai et al., 2019b, 2020), and covering eigenoptions (Machado et al., 2023).
Eigenoptions, for example, are defined such that each option, $o_{i}$, is associated with the corresponding eigenvector, $\mathbf{e}_{i}$, of the graph Laplacian. Each option’s policy is defined as the one that maximizes an intrinsic reward incentivizing the agent to navigate along the direction pointed to by $\mathbf{e}_{i}$, which, in the linear function approximation (and tabular) case, is formalized as
$$
r ^ { \mathbf { e } _ { i } } ( s , s ^ { \prime } ) = { \mathbf { e } } _ { i } ^ { \top } \big ( \phi ( s ^ { \prime } ) - \phi ( s ) \big ) ,
$$
where $\phi ( s )$ denotes the feature representation of state $s$ . Originally, an option $o _ { i }$ was defined to terminate in state $s$ if $q _ { \pi } ^ { \mathbf { e } _ { i } } ( s , a ) \leq 0$ for all $a \in { \mathcal { A } }$ , where $q _ { \pi } ^ { \mathbf { e } _ { i } }$ is defined w.r.t. $r ^ { \mathbf { e } _ { i } } ( \cdot , \cdot )$ . All other states in the environment were defined to be in the initiation set.
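In the tabular case ($\phi$ one-hot), the whole pipeline fits in a few lines of numpy. The two-room graph below is an illustrative assumption; the block builds the normalized Laplacian from the adjacency matrix, takes the eigenvector of the second-smallest eigenvalue, and defines the eigenoption reward from it:

```python
import numpy as np

# Two triangular "rooms" joined through a doorway node D (undirected graph).
nodes = ["a1", "a2", "a3", "D", "b1", "b2", "b3"]
edges = [("a1", "a2"), ("a1", "a3"), ("a2", "a3"), ("a3", "D"),
         ("D", "b1"), ("b1", "b2"), ("b1", "b3"), ("b2", "b3")]

idx = {s: i for i, s in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
L = D_inv_sqrt @ (np.diag(deg) - A) @ D_inv_sqrt   # normalized graph Laplacian

vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
e2 = vecs[:, 1]                    # eigenvector of the 2nd-smallest eigenvalue

# The sign of this eigenvector splits the state space into the two rooms.
side_a = [e2[idx[s]] for s in ("a1", "a2", "a3")]
side_b = [e2[idx[s]] for s in ("b1", "b2", "b3")]
print(all(x * y < 0 for x in side_a for y in side_b))

# Tabular eigenoption reward, r(s, s') = e2^T (phi(s') - phi(s)), one-hot phi.
r = lambda s, s_next: e2[idx[s_next]] - e2[idx[s]]
```

Following this intrinsic reward pushes the agent from one room toward the other (up or down the eigenfunction, depending on the sign `eigh` happens to return), which is the diffusion-time-scale behaviour eigenoptions are designed to exhibit.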
Naturally, explicitly representing an environment through its underlying graph is not scalable. Existing methods leverage approximations of the eigenfunctions of the graph Laplacian that can be obtained through neural networks trained with stochastic gradient descent (Wu et al., 2019; Wang et al., 2021; Gomez et al., 2023). The underlying idea behind these methods is to learn a representation that captures the properties of the approximated eigenvectors such that observations that happen “close in time” are close in representation space and that different eigenfunctions are indeed orthogonal to each other. The current state-of-the-art method for doing so is called the augmented Lagrangian Laplacian objective (ALLO; Gomez et al., 2023). It consists of the following max-min objective for approximating $d$ eigenfunctions:
Figure 7: Visualization of the first and second eigenfunctions on Montezuma’s Revenge, an Atari 2600 game, discovered by the algorithm proposed by Klissarov and Machado (2023). The arrows depict what the eigenoption induced by these eigenfunctions could end up being.
$$
\operatorname*{max}_{\omega} \operatorname*{min}_{\mathbf{u} \in \mathbb{R}^{d |\mathcal{S}|}} \sum_{i=1}^{d} \langle \mathbf{u}_{i}, \mathcal{L} \mathbf{u}_{i} \rangle + \sum_{j=1}^{d} \sum_{k=1}^{j} \omega_{jk} \big( \langle \mathbf{u}_{j}, \left[\left[\mathbf{u}_{k}\right]\right] \rangle - \delta_{jk} \big) + b \sum_{j=1}^{d} \sum_{k=1}^{j} \big( \langle \mathbf{u}_{j}, \left[\left[\mathbf{u}_{k}\right]\right] \rangle - \delta_{jk} \big)^{2},
$$
where $\mathcal{L}$ denotes the graph Laplacian again, $\left[\left[\cdot\right]\right]$ the stop gradient operator, $\delta_{jk}$ the Kronecker delta, $b$ is a scalar hyperparameter, and $\omega = [\omega_{1,1}, \omega_{2,1}, \omega_{2,2}, \cdots, \omega_{d,1}, \cdots, \omega_{d,d}] \in \mathbb{R}^{d(d+1)/2}$ is a vector containing all of the dual variables of the objective. Note that the optimal dual variables, $\omega^{\ast}$, are proportional to the smallest eigenvalues of $\mathcal{L}$. These approximations have now been used to learn options that are effective in various domains, including continuous control tasks (Jinnai et al., 2020), 3D navigation tasks, and Atari 2600 games (Klissarov and Machado, 2023). An issue these methods had to circumvent was that most of these approximation objectives assume the ability to sample uniformly from the entire state space. This is currently addressed by iteratively increasing the region covered by the agent (e.g., Machado et al., 2023); some methods even do so explicitly in the objective they minimize (Erraqabi et al., 2022).
The process to compute the intrinsic reward maximized by the option is slightly different when using neural network estimates of the eigenfunctions of the graph Laplacian. Instead of first computing the eigenvectors, one usually directly estimates the components of the eigenfunction associated with a particular state, $s$ . Formally,
$$
r ^ { f _ { e _ { i } } } ( s , s ^ { \prime } ) = f _ { e _ { i } } ( s ^ { \prime } ) - f _ { e _ { i } } ( s ) ,
$$
where we used $f _ { e _ { i } } ( s )$ to denote the value of the $i$ -th eigenfunction of the graph Laplacian associated with state $s$ . In this setting, stochastic option terminations are more common in practice due to the difficulties generalization introduces to accurately estimating action-value functions without interference (e.g., Klissarov and Machado, 2023).
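As a concrete illustration, here is a minimal NumPy sketch of this eigenfunction-based intrinsic reward on a toy four-state chain graph; the graph, the trajectory, and the helper `intrinsic_reward` are all assumptions made for illustration, not part of any published implementation:

```python
import numpy as np

# Four-state chain graph (illustrative toy): Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Columns of eigvecs play the role of the eigenfunctions f_{e_i},
# ordered by increasing eigenvalue (np.linalg.eigh returns ascending order).
eigvals, eigvecs = np.linalg.eigh(L)

def intrinsic_reward(i, s, s_next):
    """r^{f_{e_i}}(s, s') = f_{e_i}(s') - f_{e_i}(s)."""
    f = eigvecs[:, i]
    return f[s_next] - f[s]

# The reward telescopes along a trajectory, so maximizing it drives the agent
# toward one extreme of the chosen eigenfunction.
path = [0, 1, 2, 3]
total = sum(intrinsic_reward(1, a, b) for a, b in zip(path, path[1:]))
```

Because the per-step rewards telescope, the return of a trajectory depends only on the eigenfunction values at its endpoints.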
Many other mathematical objects are somewhat equivalent to the eigenvectors of the graph Laplacian and have also been used for option discovery. Slow Feature Analysis (SFA; Wiskott and Sejnowski, 2002; Sprekeler, 2011), for example, is a key component of Continual Curiosity-driven Skill Acquisition (CCSA; Kompella et al., 2017). The eigenvectors of the successor representation (SR; Dayan, 1993) have also been shown to be equivalent to the eigenvectors of the graph Laplacian (Machado et al., 2018).
The equivalence between the eigenvectors of the SR and of the graph Laplacian is particularly important due to the predictive nature of the SR and the ease with which one can learn it incrementally. In fact, the SR now has a quite prominent role in the option literature, being used in the discovery of options for both faster credit assignment (Ramesh et al., 2019) and exploration (Machado et al., 2018; Machado, 2019).
The successor representation is defined as
$$
\Psi _ { \pi } ( s , s ^ { \prime } ) = \mathbb { E } _ { \pi , p } \left[ \sum _ { t = 0 } ^ { \infty } \gamma ^ { t } \mathbb { 1 } _ { \{ S _ { t } = s ^ { \prime } \} } \mid S _ { 0 } = s \right] ,
$$
where $\mathbb { 1 }$ denotes the indicator function. The SR was originally introduced through an intuition that is very similar to the one outlined above: one should capture the environment’s dynamics by assigning similar values to temporally close states, thus creating a representation of the underlying structure. It can also be estimated with temporal-difference learning methods (Sutton, 1988), which, as we mentioned above, allows us to learn it incrementally:
$$
\Psi ( S _ { t } , j ) \gets \Psi ( S _ { t } , j ) + \eta \Big ( \mathbb { 1 } _ { \{ S _ { t } = j \} } + \gamma \Psi ( S _ { t + 1 } , j ) - \Psi ( S _ { t } , j ) \Big ) ,
$$
where we used $\Psi ( \cdot , \cdot )$ to denote a sample-based approximation of $\Psi _ { \pi }$ .
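The tabular update above can be sketched in a few lines of NumPy; the five-state ring environment, step size, and number of steps are illustrative assumptions:

```python
import numpy as np

n_states, gamma, eta = 5, 0.9, 0.1
Psi = np.zeros((n_states, n_states))  # sample-based approximation of Psi_pi

def sr_td_update(Psi, s, s_next):
    """Apply the tabular SR TD update simultaneously for every column j."""
    indicator = np.zeros(n_states)
    indicator[s] = 1.0                 # the cumulant 1{S_t = j}
    Psi[s] += eta * (indicator + gamma * Psi[s_next] - Psi[s])
    return Psi

# Learn the SR of a uniform random walk on a 5-state ring.
rng = np.random.default_rng(0)
s = 0
for _ in range(20000):
    s_next = (s + int(rng.choice([-1, 1]))) % n_states
    Psi = sr_td_update(Psi, s, s_next)
    s = s_next
```

At the TD fixed point each row of $\Psi$ sums to $\sum_t \gamma^t = 1/(1-\gamma)$, which gives a quick sanity check on the learned matrix.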
Importantly, beyond the discovery methods mentioned above, the SR can also be used, as a representation (its original purpose), to combine options without additional learning (Barreto et al., 2019a). Moreover, recent results in neuroscience and the cognitive sciences suggest the SR can model activations in the hippocampus (Stachenfeld et al., 2017) and explain some human behaviour (Momennejad et al., 2017). These results have led Machado et al. (2023) to propose that the successor representation should be seen as the “natural substrate for the discovery and use of temporal abstractions” in reinforcement learning.
In terms of scalability, again, there have been many proposals on how to scale the SR to function approximation settings ranging from specific neural network architectures (Kulkarni et al., 2016; Machado et al., 2018; Chua et al., 2024) to ideas such as successor features (Barreto et al., 2017) and successor measures (Touati and Ollivier, 2021; Farebrother et al., 2023). Successor features, for example, can be seen as a projection of the SR onto the space realizable by the representation, $\phi$ . In matrix form, if we use $\Phi \in \mathbb { R } ^ { | \mathcal { S } | \times d }$ to denote the matrix encoding the $d$ -dimensional feature representation of each state, successor features are defined as $\begin{array} { r } { \Psi _ { \pi } = \sum _ { t = 0 } ^ { \infty } ( \gamma { \bf P } _ { \pi } ) ^ { t } \Phi = ( I - \gamma { \bf P } _ { \pi } ) ^ { - 1 } \Phi } \end{array}$ .
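A quick sketch, under an assumed random-walk transition matrix and a toy feature matrix, confirming that the closed form $(I - \gamma \mathbf{P}_\pi)^{-1}\Phi$ agrees with the truncated series:

```python
import numpy as np

gamma = 0.9
# Assumed policy-induced transition matrix: uniform random walk on a 4-state ring.
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
Phi = np.eye(4)[:, :2]  # toy 2-dimensional feature matrix (first two one-hot features)

# Closed form: Psi = (I - gamma P)^(-1) Phi, via a linear solve.
Psi_closed = np.linalg.solve(np.eye(4) - gamma * P, Phi)

# Truncated Neumann series: sum_t (gamma P)^t Phi.
Psi_series = sum(np.linalg.matrix_power(gamma * P, t) @ Phi for t in range(200))
```

With $\gamma < 1$ the series converges geometrically, so a few hundred terms already match the linear solve to numerical precision.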
# 4.2.1 Benefits and Opportunities
Exploration. The eigenoptions line of work (Machado et al., 2017) has popularized the idea of leveraging temporal abstraction for exploration. Eigenoptions can significantly decrease the diffusion time $^ 4$ in an environment, and this afforded exploration can lead to faster learning. Machado et al. (2018) further extend previous work to the function approximation case by estimating the successor representation and then performing a singular value decomposition on it. Jinnai et al. (2019b) introduce covering options, arguing that rather than constructing an option for every eigenvector of the graph Laplacian, a single option constructed from the second eigenvector is sufficient. This is because that single option minimizes the cover time of the underlying MDP, which loosely refers to how long it takes for a random high-level policy to visit all states. Leveraging direct approximations of the eigenfunctions of the graph Laplacian, Jinnai et al. (2020) extended covering options to the function approximation case, and Klissarov and Machado (2023) did the same for covering eigenoptions (Machado et al., 2023), demonstrating strong exploration properties in a variety of reinforcement learning problems.
Transferability. Options are often thought to be important in lifelong/continual learning settings where skills can be reused in an ever-changing world. The benefit of Laplacian-based options in such settings has been demonstrated both in simpler tabular problems in which the goal location changes regularly (Liu et al., 2017) and in more complex, high-dimensional settings in which not only the goal location would change but also the topology of the environment (Klissarov and Machado, 2023).
# Opportunities for Research.
• Improving Representations. Machado et al. (2023) have proposed the perspective that spectral methods consist of a phase in which a representation is first learned (e.g., PVFs, SR), followed by a phase in which options are then derived from such a representation. This process can even be done in a cycle, which Machado et al. (2023) called Representation-driven Option Discovery (ROD) cycle. Thus, better representation learning methods are an exciting research frontier for this line of work in which the learned representation informs the option discovery process. This can be investigated from the SR perspective (e.g., Touati and Ollivier, 2021; Carvalho et al., 2023; Farebrother et al., 2023), or from the perspective of directly estimating the spectral decomposition of the SR (e.g., Pfau et al., 2018; Wang et al., 2021, 2022; Gomez et al., 2023), including non-symmetric settings (Wang et al., 2023b).
• Planning. Another promising line of work involves further exploring the recent success of Laplacian-based methods in planning and credit assignment in general, as these options are often used in a reward-agnostic way (e.g., Sutton et al., 2023). Validating these results beyond the tabular case and extending existing results to partially-observable settings are also intriguing lines of work.
• Reward-Aware Representations. The representations discussed in this section rely on the topology of the environment without considering the underlying reward function. There is an interesting question of whether one should define proximity not only in terms of when observations take place but also in terms of the reward associated with them. Interestingly, the linear MDP formalism (Todorov, 2006, 2009b) gives rise to representations akin to the SR but that are reward-aware. In this context, Tse et al. (2025) have shown that options derived from the eigenvectors of such a reward-aware representation, termed the default representation (Piray and Daw, 2021), exhibit qualitatively different exploratory behaviour when faced with regions of negative reward in the state space.
Figure 8: Sequentially Composable Options. The skill chaining algorithm incrementally learns options backwards from the goal, such that the subgoal of each option is the initiation region of another option. First the agent finds the states from which it can reliably reach the goal (left), then it finds the states from which it can reach the first region (middle), and so on, until there is a high probability of success from the environment’s start state (right).
# 4.3 Sequentially Composable Options
Options are said to be sequentially executable when each option terminates in a region where another option can successfully achieve its own subgoal. Sequentially composable options are more useful for high-level planning (Konidaris et al., 2018) and even result in highly robust solutions (Tedrake et al., 2010). While most methods attempt to sequentially compose discovered options post hoc, some methods explicitly incorporate sequential composition into the option discovery objective. A prominent family of such methods is that of Skill Chaining (Konidaris and Barto, 2009; Bagaria and Konidaris, 2020).
Figure 8 illustrates the skill chaining algorithm. Given a target region of states $g \subset { \mathcal { S } }$ (for example, the task goal) (shown as a flag in Figure 8), skill chaining discovers subgoal options that can be sequenced together so that each option execution roughly brings the agent closer to $g$ . This is done by learning options backward from the goal: first, the agent learns option $o _ { 1 }$ such that $\beta ( o _ { 1 } ) = g$ ; this entails learning two functions: (a) the option policy $\pi ( a | s , o _ { 1 } )$ , which aims to maximize the subgoal reward $r ^ { o _ { 1 } } ( s ) = \beta ( o _ { 1 } )$ , and (b) the initiation function $\Im ( o _ { 1 } )$ , which is defined to be the states from which $\pi ( \cdot | s , o _ { 1 } )$ can reliably reach $g$ . Shortly after, the agent creates another option $o _ { 2 }$ so that its subgoal is the initiation of the previous option—this is because the agent can reach the goal with high probability from states inside the first option’s initiation region. This process continues until the start state, $s _ { 0 } \sim \rho _ { 0 }$ , of the MDP is inside the initiation region of some option. This is because when the initiation probability is high at the start state, the agent can simply execute its learned options to achieve its goal $g$ . Skill composability is explicitly enforced by setting the termination region of each option $\beta ( o _ { i } )$ to be the states in which another option has a high initiation probability, i.e., $\Im ( o _ { i - 1 } )$ is greater than some pre-specified threshold $c \in [ 0 , 1 ]$ .
In skill chaining, the initiation set of an option has special meaning: it represents the states from which option execution is likely to achieve its subgoal. Learning the initiation function is usually framed as a binary classification problem: states along successful option trajectories (those that achieve the option’s subgoal) are considered as positive examples $\mathbf { s } ^ { + } = \{ s _ { 1 } ^ { + } , \ldots , s _ { n } ^ { + } \}$ and states along unsuccessful trajectories are considered as negative examples $\mathbf { s } ^ { - } = \{ s _ { 1 } ^ { - } , \ldots , s _ { m } ^ { - } \}$ . Then, a probabilistic classifier (with parameters $\chi$ ) is fit on these training examples using the binary cross-entropy loss. Now, when a new state $s$ is encountered during learning, $\mathcal { I } ( s , o )$ represents the probability that the agent can reach option $o$ ’s subgoal $\beta ( o )$ in a single execution of $\pi ( \cdot | s , o )$ . While this classification approach is simple to implement, some of its drawbacks include: (a) the classifier struggles to adapt to changing option policies, and (b) the agent must wait until the end of option execution to update its initiation function. To address these issues, Bagaria et al. (2023) frame the initiation function as a general value function (Sutton et al., 2011): the agent uses each experience tuple $( s , a , \beta ( o ) , s ^ { \prime } )$ to update its prediction of whether an option execution will achieve its subgoal; this is done using the following temporal difference (TD) error and stochastic gradient descent update rule:
$$
\begin{array} { c } { { \delta _ { \Im } ( s , o ) = \beta ( s ^ { \prime } , o ) + \Im ( s ^ { \prime } , o ) - \Im ( s , o ) , } } \\ { { \chi \gets \chi + \alpha \, \delta _ { \Im } \nabla _ { \chi } \Im ( s , o ) , } } \end{array}
$$
where $\alpha \in \mathbb { R } ^ { + }$ is a step size parameter and $\chi$ are the initiation function parameters. However, at a given state $s$ , an option’s initiation probability ${ \mathcal { I } } _ { \chi } ( s , o )$ can be low either because the option policy is unlikely to successfully reach its subgoal from state $s$ or because the agent does not have enough data to confidently estimate ${ \mathcal { I } } _ { \chi } ( s , o )$ . As a result, the skill chaining agent additionally estimates its uncertainty $\boldsymbol { \mathcal { U } } ( s , o )$ about its initiation function’s predictions: when deciding whether an option is executable from a state $s$ , it is optimistic with respect to that uncertainty, but when targeting another option’s initiation region, it is pessimistic with respect to it (Bagaria et al., 2021a).
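A minimal tabular sketch of learning the initiation function with this kind of TD update; the six-state chain, the failure state, and the step size are assumptions for illustration, and the TD error is applied in the standard semi-gradient direction:

```python
import numpy as np

# States 0..5: state 5 is the option's subgoal (beta = 1 there), state 0 is a
# failure state; both end the rollout. Under a uniform random walk, the true
# initiation probabilities are the gambler's-ruin values I(s) = s/5.
n_states, goal, fail = 6, 5, 0
alpha = 0.05
I_hat = np.zeros(n_states)  # tabular initiation-function estimate for one option

def beta(s):
    """Subgoal indicator for this option."""
    return 1.0 if s == goal else 0.0

rng = np.random.default_rng(1)
for _ in range(5000):
    s = int(rng.integers(1, 5))          # start each rollout in an interior state
    while s not in (goal, fail):
        s_next = s + int(rng.choice([-1, 1]))
        # TD error delta = beta(s') + I(s') - I(s); the terminal entries of
        # I_hat are never updated, so bootstrapping stops at goal and failure.
        delta = beta(s_next) + I_hat[s_next] - I_hat[s]
        I_hat[s] += alpha * delta        # semi-gradient TD step
        s = s_next
```

Unlike the classification approach, every transition updates the estimate immediately, without waiting for the option execution to finish.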
Algorithm 1 summarizes the skill chaining algorithm. First, the high-level policy picks an option with the aim of maximizing extrinsic reward, while attending to the initiation probability of each option. Actions are selected using the chosen option’s policy, which is rewarded for achieving its own subgoal. Transitions encountered during option execution are used to update the low-level option policy, the high-level policy, and the option’s initiation function. When the agent is confident that there is no option that could reach its subgoal from the start states of the environment, it creates a new option and adds it to the skill chain. This new option’s subgoal region is the states where the previous option in the skill chain has a high initiation probability, thereby enforcing sequential composability.
# 4.3.1 Benefits and Opportunities
Planning. Each option execution drives the agent to a small, predictable region of the state-space. Since those states are constructed to be inside the initiation region of another option, they can be sequentially composed. In practice, each option’s initiation and termination region is parameterized using probabilistic classifiers, so there is a probability that two options can be executed in sequence, which eventually permits computation of the probabilistic feasibility of entire plans. Bagaria et al. (2023) used graph-search to find recursively optimal solutions and Bagaria et al. (2021a) provided a dynamic programming algorithm to approximate hierarchically optimal ways of planning with subgoal options.
# Algorithm 1 Skill Chaining Algorithm
1: Initialize:
2: Initialize first option $o _ { 1 }$ ’s subgoal as task goal: $\beta ( o _ { 1 } ) = g$ .
3: Initialize $o _ { 1 }$ ’s initiation function $\Im ( s , o _ { 1 } )$ , uncertainty $\mathcal { U } ( s , o _ { 1 } )$ , and policy $\pi _ { \boldsymbol { \theta } } ( \cdot | s , o _ { 1 } )$ .
4: Initialize the agent’s option set using the first option: $\mathcal { O } = \{ o _ { 1 } \}$ .
5: Hyperparameters:
6: Option horizon $H _ { o }$ and initiation function thresholds $c _ { 1 } , c _ { 2 } \in [ 0 , 1 ]$ for each option.
7: while True do
8: Sample an option $o$ from the following distribution:
$$
\frac { \mu ( o | s ) \mathcal { I } ^ { + } ( s , o ) } { \sum _ { o ^ { \prime } \in \mathcal { O } } \mu ( o ^ { \prime } | s ) \mathcal { I } ^ { + } ( s , o ^ { \prime } ) } , \forall o \in \mathcal { O } ,
$$
where $\mathcal { I } ^ { + } ( s , o ^ { \prime } ) = \operatorname { c l i p } \big ( \mathcal { I } ( s , o ^ { \prime } ) + \mathcal { U } ( s , o ^ { \prime } ) , 0 , 1 \big )$ .
9: while option $o$ does not terminate do
10: Sample an action $a \sim \pi ( \cdot \mid s , o )$ .
11: Execute the action to get reward $r$ and next state $s ^ { \prime }$ .
12: Update the option policy $\pi ( \cdot | s , o )$ using reward $r ^ { o } ( s , a , s ^ { \prime } ) = \beta ( s ^ { \prime } , o )$ .
13: Update the high-level policy using extrinsic reward $r$ .
14: Update the option’s initiation function using generalized TD-Error:
$$
\delta _ { \mathfrak { I } } ( s , o ) = \beta ( s ^ { \prime } , o ) + \mathfrak { I } ( s ^ { \prime } , o ) - \mathfrak { I } ( s , o ) .
$$
15: end while
16: if $\mathbb { E } _ { s _ { 0 } \sim \rho _ { 0 } } [ \mathcal { I } ( s _ { 0 } , o ) ] < c _ { 1 }$ and $\mathbb { E } _ { s _ { 0 } \sim \rho _ { 0 } } [ \mathcal { U } ( s _ { 0 } , o ) ] < c _ { 2 } , \forall o \in \mathcal { O }$ then
17: Extract the last option in the chain, $\omega$ .
18: Create new option $o ^ { \prime }$ such that $\beta ( s , o ^ { \prime } ) = \mathbb { 1 } \big ( \Im ( s , \omega ) > c \big )$ .
19: Add the new option $o ^ { \prime }$ to the agent’s option set $\mathcal { O }$ .
20: end if
21: end while
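The optimistic option-selection step (line 8 of Algorithm 1) can be sketched as follows; the numbers chosen for $\mu$, $\mathcal{I}$, and $\mathcal{U}$ are made up for illustration:

```python
import numpy as np

def sample_option(mu, I, U, rng):
    """Sample o ~ mu(o|s) * I+(s,o) / normalizer, where
    I+(s,o) = clip(I(s,o) + U(s,o), 0, 1) is the optimistic initiation estimate."""
    I_plus = np.clip(I + U, 0.0, 1.0)
    weights = mu * I_plus
    probs = weights / weights.sum()
    return rng.choice(len(mu), p=probs), probs

# Example with 3 options: option 2 has a low initiation estimate but high
# epistemic uncertainty, so optimism keeps it competitive for exploration.
mu = np.array([0.5, 0.3, 0.2])   # high-level policy probabilities at state s
I = np.array([0.9, 0.4, 0.1])    # initiation estimates I(s, o)
U = np.array([0.0, 0.1, 0.7])    # uncertainty estimates U(s, o)
rng = np.random.default_rng(0)
o, probs = sample_option(mu, I, U, rng)
```

Note how the uncertainty bonus raises option 2's sampling probability above option 1's, despite its much lower initiation estimate.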
Credit Assignment. Skill chaining has demonstrated more sample-efficient credit assignment in goal-reaching tasks than non-hierarchical RL, which can be attributed to the following reasons. (1) Jumpy transitions: Skill chaining methods usually use the entire $T$ -step option transition $( s _ { t } , o , \sum r _ { t : t + T } , s _ { t + T } )$ to update the high-level policy $\mu ( o | s )$ . Much like $n$ -step returns and $\mathrm { T D } ( \lambda )$ in non-hierarchical RL, this has the effect of rapidly propagating credit among state-action pairs. (2) Focused next-state distribution: not only does each option execute for multiple timesteps, but it also guides the agent to states that are closer to the goal. In other words, options in the skill chain modify the agent’s state distribution to make states closer to the goal more likely. Since these states are usually the ones with non-zero values, bootstrapping-based value learning (e.g., TD) progresses more rapidly.
Exploration. Since skill discovery proceeds backward from the goal, the algorithm requires either an exploration policy or a set of demonstration trajectories (Konidaris et al., 2010; Kang and Oh, 2022) that achieve the task goal. This advocates for a view of skill chaining as producing options that are good for exploitation, which can be combined with options that are good for exploration. Deep skill graphs (DSG) (Bagaria et al., 2021b) overcome this limitation: the agent finds intrinsically motivating states and learns skill chains that connect them to each other; the resulting chains form a graph abstraction of the environment, which is useful for planning. Furthermore, the graph building process has a Voronoi bias (LaValle, 1998; Lindemann and LaValle, 2004), meaning that it tends to grow towards parts of the state-space where the agent has the least experience.
Opportunities for Research.
• Goal-reaching options. To learn the initiation set of each option in the chain, its subgoal must be described using a binary function: either the subgoal is achieved in the current state, or it is not. Such a subgoal description is not universal, as it cannot be used to describe continuing tasks like maintaining a constant velocity or repeating periodic motions. If the initiation cumulant (Bagaria et al., 2023) can be formulated for general reward functions, then skill chaining can be applied to non-goal-reaching tasks as well.
• Controlling all state variables at the same time. If we think of states being composed of different state variables (a property known as factoredness; Boutilier et al., 2000), then skill chaining drives the value of all variables to a certain range of values. In more complex environments, it may be unnecessary, or even impossible, to control all state variables at the same time. Future work could create a version of skill chaining that leverages the factoredness of the state-space and only controls a subset of all the factors at any given time.
Additional connections to control theory and motion planning. Lozano-Perez et al. (1984), Mason (1985), and Burridge et al. (1999) popularized the view of policies as funnels: these policies drive a large set of ordinary states to a small set of desired states. Policies can be sequentially composed to reach some target set of states by placing the end (narrow part) of each funnel inside the beginning (broad part) of some other funnel. Tedrake et al. (2010); Ames and Konidaris (2019) provided a way to compute these initiation regions for complex, dynamical systems using convex optimization and built robust controllers for fixed-wing UAVs (Tedrake et al., 2010). Later, Konidaris and Barto (2009) extended this idea to model-free RL. Bagaria and Konidaris (2020) then upgraded the skill-chaining algorithm with deep learning so that it could be applied to higher-dimensional systems. Variants of deep skill chaining have been used in robotic surgery (Huang et al., 2023), manipulation (Lee et al., 2021; Vats et al., 2023), multi-agent RL (Xie et al., 2022), and task and motion planning (Mishra et al., 2023).
# 4.4 Empowerment Maximization
Empowerment-based methods discover diverse skills by maximizing an agent’s control over its environment. At its core, empowerment quantifies how much influence an agent has over its future observations—an agent is more empowered when it can reliably cause a wider variety of outcomes (Klyubin et al., 2005; Salge et al., 2014). For example, having access to a car empowers you to reach many different locations; learning to swim empowers you to survive in water. Empowerment can also be seen as a way to maximize social influence in multi-agent settings (Jaques et al., 2019), or to seek agreement between future states and the agent’s internal representations (Hafner et al., 2020).
Figure 9: Empowerment-based skill-discovery methods learn skills that generate trajectories that are maximally different from one another, with the constraint that, having observed a trajectory, it should be clear which skill generated it. $( a )$ Trajectories generated by 6 distinct skills in MuJoCo Ant; (b) $( x , y )$ location of the center of mass of the Ant plotted after executing skills learned by the DADS algorithm. Figure from Sharma et al. (2020b), used with permission.
Formally, empowerment is defined as the mutual information between an agent’s actions and its resulting future states. The mutual information, $\begin{array} { r } { I ( X ; Y ) = \sum _ { x , y } p ( x , y ) \log { \frac { p ( x , y ) } { p ( x ) p ( y ) } } } \end{array}$ , measures how much information one random variable provides about another, equaling zero when the variables are independent and increasing as they become more statistically dependent.
Now, consider an agent that executes a sequence of $n$ actions $\mathbf { a } = ( a _ { t } , a _ { t + 1 } , \ldots , a _ { t + n - 1 } )$ starting from state $s _ { t }$ , resulting in state $s _ { t + n }$ . The $n$ -step empowerment at state $s _ { t }$ is:
$$
\mathcal { E } _ { n } ( s _ { t } ) = \operatorname* { m a x } _ { p ( \mathbf { a } ) } I ( \mathbf { a } ; s _ { t + n } | s _ { t } ) ,
$$
where $p ( \mathbf { a } )$ is the probability distribution over action sequences that the agent can choose. This captures the maximum amount of information that action sequences can provide about future states, optimized over all possible action distributions $p ( \mathbf { a } )$ . Expanding the mutual information reveals its intuitive meaning:
$$
I ( \mathbf { a } ; s _ { t + n } | s _ { t } ) = \mathcal { H } ( s _ { t + n } | s _ { t } ) - \mathcal { H } ( s _ { t + n } | s _ { t } , \mathbf { a } ) ,
$$
where the first term represents uncertainty about future states given only the current state, while the second represents remaining uncertainty after choosing actions. Empowerment measures how much this uncertainty can be reduced through deliberate action choice.
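For a discrete toy channel this quantity can be computed exactly; the two-action example below (a hypothetical deterministic channel versus one the agent cannot control) is purely illustrative:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(p_a, p_s_given_a):
    """I(a; s_{t+n}) = H(s_{t+n}) - H(s_{t+n} | a) for a discrete channel."""
    p_s = p_a @ p_s_given_a  # marginal distribution over next states
    h_cond = sum(p_a[i] * entropy(p_s_given_a[i]) for i in range(len(p_a)))
    return entropy(p_s) - h_cond

p_a = np.array([0.5, 0.5])             # uniform distribution over two actions
deterministic = np.array([[1.0, 0.0],  # each action forces a distinct next state
                          [0.0, 1.0]])
uncontrollable = np.array([[0.5, 0.5], # next state ignores the action entirely
                           [0.5, 0.5]])
```

The deterministic channel attains the maximum $\log 2$ nats while the uncontrollable one yields zero, matching the intuition that empowerment measures the uncertainty an agent can remove through deliberate action choice.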
As the mutual information is intractable, Mohamed and Rezende (2015) propose to estimate it through variational inference. Specifically, the authors approximate $p ( \mathbf { a } | s _ { t } , s _ { t + n } )$ with the variational distribution $q _ { \phi } ( \mathbf { a } | s _ { t } , s _ { t + n } )$ and leverage the non-negativity of the KL divergence to obtain:
$$
\begin{array} { r l } { I ( \mathbf { a } ; s _ { t + n } | s _ { t } ) } & { = \mathcal { H } ( \mathbf { a } | s _ { t } ) - \mathcal { H } ( \mathbf { a } | s _ { t } , s _ { t + n } ) } \\ & { = \mathcal { H } ( \mathbf { a } | s _ { t } ) + \mathbb { E } [ \log p ( \mathbf { a } | s _ { t } , s _ { t + n } ) ] } \\ & { \geq \mathcal { H } ( \mathbf { a } | s _ { t } ) + \mathbb { E } [ \log q _ { \phi } ( \mathbf { a } | s _ { t } , s _ { t + n } ) ] \qquad \text {(Variational Bound)} } \end{array}
$$
This variational bound (Equation 32) provides a practical way to compute empowerment in high-dimensional continuous spaces using neural networks (with parameters $\phi$ ). However, this objective finds open-loop action sequences $\mathbf { a }$ , and we want to discover skills (closed-loop policies). Gregor et al. (2017) addressed this concern by introducing Variational Intrinsic Control (VIC), which replaces fixed action sequences with parameterized skills $\pi _ { \boldsymbol { \theta } } ( a | s , z )$ conditioned on skill variables $z$ . VIC maximizes the mutual information between skills and final states reached from skill execution:
$$
J _ { \mathrm { V I C } } = I ( z ; s _ { t + n } | s _ { t } ) ,
$$
where $s _ { t }$ is the initial state and $s _ { t + n }$ is the final state after executing skill $z$ for $n$ timesteps. We can expand this as:
$$
I ( z ; s _ { t + n } | s _ { t } ) = { \mathcal { H } } ( z | s _ { t } ) - { \mathcal { H } } ( z | s _ { t + n } , s _ { t } ) .
$$
Similar to Mohamed and Rezende (2015), VIC uses the variational lower bound:
$$
I ( z ; s _ { t + n } | s _ { t } ) \geq \mathcal { H } ( z | s _ { t } ) + \mathbb { E } [ \log q _ { \phi } ( z | s _ { t + n } , s _ { t } ) ] .
$$
This variational lower bound can be optimized by training two neural networks: a policy $\pi _ { \boldsymbol { \theta } } ( a | s , z )$ that executes skills, and a discriminator $q _ { \phi } ( z | s _ { t + n } , s _ { t } )$ that predicts which skill was used based on the final state.
Eysenbach et al. (2019) simplified VIC’s approach in their method Diversity is All You Need (DIAYN). While VIC maximizes mutual information between skills and final states, DIAYN instead focuses on making skills distinguishable from the states they visit throughout execution. DIAYN builds on maximum entropy reinforcement learning, which augments the standard RL objective with an entropy bonus ${ \mathcal { H } } ( A | S )$ to encourage exploration. DIAYN learns skills by maximizing:
$$
\begin{array} { r } { J _ { \mathrm { D I A Y N } } = I ( s ; z ) + \mathcal { H } ( a | s ) - I ( a ; z | s ) , } \end{array}
$$
which has an intuitive interpretation: skills should be distinguishable from the states they visit $( I ( s ; z ) )$ , actions should be diverse $\textstyle { \left( { \mathcal { H } } ( a | s ) \right) }$ , but skills should be consistent in their behaviour $\big ( - I ( a ; z | s ) \big )$ . This objective can further be simplified by expanding the mutual information in terms of the conditional entropies, and then applying the variational approximation similar to Mohamed and Rezende (2015):
$$
\begin{array} { r l } & { J _ { \mathrm { D I A Y N } } = \Big ( \mathcal { H } ( z ) - \mathcal { H } ( z | s ) \Big ) + \mathcal { H } ( a | s ) - \Big ( \mathcal { H } ( a | s ) - \mathcal { H } ( a | s , z ) \Big ) } \\ & { \quad \quad \quad \quad = \mathcal { H } ( z ) - \mathcal { H } ( z | s ) + \mathcal { H } ( a | s , z ) } \\ & { \quad \quad \quad = \mathcal { H } ( a | s , z ) + \mathbb { E } [ \log p ( z | s ) ] - \mathbb { E } [ \log p ( z ) ] } \\ & { \quad \quad \quad \geq \mathcal { H } ( a | s , z ) + \mathbb { E } [ \log q _ { \phi } ( z | s ) ] - \mathbb { E } [ \log p ( z ) ] . } \end{array}
$$
The final step implies the use of a discriminator $q _ { \phi } ( z | s )$ to variationally approximate $p ( z | s )$ ; the result is an intra-skill pseudo reward function for each skill $z$ :
$$
r ^ { z } ( s ) = \log q _ { \phi } ( z | s ) - \log p ( z ) ,
$$
assuming that the $\mathcal { H } ( a | s , z )$ term is maximized using a maximum entropy RL formulation (Ziebart et al., 2008). This leads to a practical algorithm: sample a skill $z \sim p ( z )$ , execute the policy $\pi _ { \boldsymbol { \theta } } ( a | \boldsymbol { s } , z )$ , train the discriminator $q _ { \phi } ( z | s )$ to predict skills from states, and update the policy using the pseudo-reward in Equation 41.
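A self-contained tabular sketch of this loop follows; the ten-state chain, the hand-coded "drift" skill policies, and the softmax discriminator update are all assumptions made for illustration, not the original implementation:

```python
import numpy as np

n_states, n_skills = 10, 2
logits = np.zeros((n_states, n_skills))   # tabular discriminator q_phi(z|s)
log_p_z = np.log(1.0 / n_skills)          # uniform skill prior p(z)
lr = 0.5
rng = np.random.default_rng(0)

def q_z_given_s(s):
    """Softmax over skill logits at state s."""
    e = np.exp(logits[s] - logits[s].max())
    return e / e.sum()

def rollout(z):
    """Hypothetical skill policies: skill 0 drifts left, skill 1 drifts right."""
    s, states = n_states // 2, []
    for _ in range(5):
        p_left = 0.8 if z == 0 else 0.2
        s = int(np.clip(s + rng.choice([-1, 1], p=[p_left, 1 - p_left]),
                        0, n_states - 1))
        states.append(s)
    return states

for _ in range(500):
    z = int(rng.integers(n_skills))        # sample a skill z ~ p(z)
    for s in rollout(z):
        q = q_z_given_s(s)
        # pseudo-reward r^z(s) = log q(z|s) - log p(z); a full agent would
        # feed this to an RL update, here we only train the discriminator.
        r = np.log(q[z] + 1e-8) - log_p_z
        grad = q.copy()
        grad[z] -= 1.0                     # cross-entropy gradient wrt logits
        logits[s] -= lr * grad
```

After training, the discriminator confidently attributes the leftmost states to skill 0 and the rightmost states to skill 1, so the pseudo-reward pushes each skill policy toward its own distinguishable region.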
While DIAYN successfully learns diverse behaviours, Sharma et al. (2020b) observed that it can discover skills with unpredictable effects, making them difficult to sequentially compose downstream. Their method, Dynamics-Aware Discovery of Skills (DADS), addresses this by explicitly encouraging predictable skill dynamics while maintaining diversity. Their formulation captures two desirable properties simultaneously: different skills should lead to different future states (diversity), and given the current state and skill, the future state should be predictable. Expanding the mutual information:
$$
I ( z ; s _ { t + n } | s _ { t } ) = { \mathcal { H } } ( s _ { t + n } | s _ { t } ) - { \mathcal { H } } ( s _ { t + n } | s _ { t } , z ) = \mathbb { E } \left[ \log { \frac { p ( s _ { t + n } | s _ { t } , z ) } { p ( s _ { t + n } | s _ { t } ) } } \right] .
$$
DADS uses variational approximation with two learned models: a skill dynamics model $q _ { \psi } ( s _ { t + n } | s _ { t } , z )$ and a marginal dynamics model $q _ { \xi } \big ( s _ { t + n } | s _ { t } \big )$ . The skill reward function becomes
$$
r ^ { z } ( s _ { t } , s _ { t + n } ) = \log q _ { \psi } ( s _ { t + n } | s _ { t } , z ) - \log q _ { \xi } ( s _ { t + n } | s _ { t } ) ,
$$
encouraging skills that make state transitions more predictable than the marginal dynamics. Resulting skills have focused effects, and examples of learned skills are visualized in Figure 9. These methods share the key insight that useful skills should be distinguishable from their effects on the environment—whether through final states (VIC), visited states (DIAYN), or state transitions (DADS), each approach learns discriminators that identify which skill was executed from observations. This creates an intrinsic reward signal driving the discovery of diverse, meaningful behaviours without external supervision.
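A minimal sketch of this DADS-style reward, under assumed one-dimensional Gaussian skill-dynamics and marginal models; all names and numbers here are illustrative:

```python
import numpy as np

def log_gauss(x, mean, std):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * np.log(2 * np.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def dads_reward(s, s_next, z, skill_mean, skill_std, marg_mean, marg_std):
    """r^z(s, s') = log q_psi(s'|s,z) - log q_xi(s'|s) with Gaussian models."""
    return (log_gauss(s_next, skill_mean(s, z), skill_std)
            - log_gauss(s_next, marg_mean(s), marg_std))

# Toy 1-D setting: skill z shifts the state by z; skills z in {-1, +1} cancel
# in the marginal, which is therefore broad and centred at s.
skill_mean = lambda s, z: s + z
marg_mean = lambda s: s
r = dads_reward(s=0.0, s_next=1.0, z=1.0,
                skill_mean=skill_mean, skill_std=0.5,
                marg_mean=marg_mean, marg_std=1.2)
```

The reward is positive exactly when the skill-conditioned model predicts the transition better than the marginal, which is what rewards skills with focused, predictable effects.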
Let $\tau = ( s _ { t } , a _ { t } , s _ { t + 1 } , a _ { t + 1 } , \ldots , s _ { t + n } )$ denote a trajectory of states and actions. DIAYN’s update rule relies on the approximation $\begin{array} { r } { \log ( q _ { \phi } ( z | \tau ) ) = \sum _ { i = 0 } ^ { n } \log ( q _ { \phi } ( z | s _ { t + i } ) ) } \end{array}$ , i.e., a sum of per-timestep log probabilities along the trajectory generated by the skill $z$ . Instead of this sum-based decomposition, which treats each transition as independent of the others, VALOR approximates $q _ { \phi } ( z | \tau )$ using an LSTM architecture (Achiam et al., 2018). Strouse et al. (2022) noticed that the discriminator $q _ { \phi }$ is pessimistic in new states; to address this pessimism, their algorithm, DISDAIN, augments skill learning with a novelty bonus. CIC (Laskin et al., 2022) improves the optimization of the mutual information objective using contrastive learning. Relative VIC (Baumli et al., 2021) departs from this view slightly by introducing a new term to the optimization: rather than requiring that trajectories be distinguishable from the observed trajectory, they additionally require that the trajectory not be distinguishable from the final state alone, thereby encouraging skills to cause characteristic changes in state, rather than taking the agent to different parts of the state-space alone.
# 4.4.1 Benefits and Opportunities
Exploration. Empowerment serves as an intrinsic motivation that encourages agents to seek states where they have the most control over future outcomes (Klyubin et al., 2005). Several recent empowerment methods seek to address the exploration problem in RL by explicitly optimizing for skill diversity (Eysenbach et al., 2019) and state-space coverage (Campos et al., 2020); by learning a diverse set of skills, agents not only explore effectively in a single-task setting (Massari et al., 2021), but also adapt quickly to new tasks, reducing sample complexity (Sharma et al., 2020b; Baumli et al., 2021; Hansen et al., 2020). One difficulty in obtaining better exploration with empowerment-based methods is that the learned skills tend to be localized, i.e., they only cover a small area. Lipschitz-constrained Skill Discovery (LSD) (Park et al., 2022) replaces mutual information estimation with a Lipschitz-constrained objective, ensuring that learned skills correspond to large, meaningful state transitions rather than minor variations. Exploration is further enhanced by paying attention to the parts of the environment that are within the agent’s control (Park et al., 2023)—this is done by using a controllability-aware distance function that assigns higher values to harder-to-control state transitions, leading to more complex skill acquisition, such as object manipulation, without direct external supervision.
Credit Assignment. While the primary motivation of empowerment-driven techniques is that of exploration, recent methods seek to improve the process of learning policies over discovered skills. For example, Leibfried et al. (2019) derive Bellman operators that combine empowerment-, and reward-maximization; Sharma et al. (2020b,a) advocate for learning option models during the skill-discovery phase so that the learned options can be composed via a planner at test-time. In fact, the simplification of empowerment-based objectives to state-reaching (Pitis et al., 2020) can be viewed as a compromise between learning more expressive skills and creating stationary objectives that ease online policy learning for utilizing discovered skills.
Transfer. Most empowerment-based skill-discovery algorithms learn skills that are distinguishable via specific states encountered in sampled trajectories (for example, the last state of the trajectory). This approach leads to skills that are tied to specific states encountered during skill-learning; in other words, learned skills do not transfer to unseen, related parts of the state-space. Relative VIC is a promising approach for learning transferrable skills because it rewards skill policies for causing characteristic changes in state, rather than targeting specific states themselves. Some algorithms use successor features to enable transfer (Zahavy et al., 2021), but more research is needed on learning skills that simultaneously maximize empowerment and enable reuse across different portions of the state-space.
Opportunities for Research.
• Optimization challenges. Despite significant progress in online mutual information estimation, empowerment remains challenging to estimate and optimize (Achiam et al., 2018). To address this, Park et al. (2024c) introduce a Wasserstein variant of the mutual information objective, where the KL divergence in MI is replaced with the Wasserstein distance. Finding such ways to ease the estimation and optimization of the empowerment objective is a key area of current research.
• Connections to causal learning. Gopnik (2024) hypothesizes that if an agent learns an accurate causal model of the world, it will necessarily increase its empowerment, and, conversely, increasing empowerment will lead to a more accurate (albeit implicit) causal model of the world (Salge et al., 2014). This could enable model-based planning for complex, long-horizon problems (Kahneman, 2011), fully unleashing the power of HRL.
• Possible signatures in human learning. There is mounting evidence in developmental cognitive science that the drive to learn causal models of the world is behind many of the exploratory capabilities of children (Gopnik and Wellman, 2012). For example, Rovee-Collier and Gekoski (1979) show that infants as young as 3 months old vary their actions to observe their causal effects on their environment; Du et al. (2023b) show that children playing some video games can be thought of as maximizing their empowerment. Gopnik (2024) hypothesizes that empowerment maximization in RL could become the new dominant paradigm (after Bayesian approaches that struggle to scale to large hypothesis spaces) for explaining exploration in humans and other animals.
• Connections to goal-based exploration. When the variational distribution in Equation 35 is Gaussian and fixed, empowerment objectives reduce to goal-based exploration in RL (Choi et al., 2021), by which we mean methods that propose random target states and use a goal-conditioned policy (Schaul et al., 2015) to reach them, for example, in hindsight experience replay (Andrychowicz et al., 2017) and Go-Explore (Ecoffet et al., 2020). In fact, it is possible to think of goal-based exploration and variational empowerment as lying on a spectrum: the more expressive the variational distribution, the more powerful, albeit non-stationary, the associated representation learning problem (Choi et al., 2021). Furthermore, Warde-Farley et al. (2019) advocate for taking a mutual information maximization approach to goal-conditioned reward functions, Pitis et al. (2020) argue that empowerment maximization is roughly equivalent to maximizing the size of the set of goals that can be achieved by the agent’s policy, and Levy et al. (2023) find that goal-conditioning can make the empowerment objective significantly easier to compute and optimize. These findings further blur the lines between goal-based exploration in RL and empowerment maximization.
# 4.5 Via Environment Rewards
Most of the work on learning skills has focused on discovering intrinsic reward functions, which are then used to learn option policies. There are, however, two important lines of work that instead aim to learn behaviour directly through the rewards given by the environment.
# Feudal Methods
The first set of approaches builds on feudal reinforcement learning (Dayan and Hinton, 1993). In this framework, the agent is decomposed hierarchically into managers and workers: managers set subgoals for workers to achieve, and workers use non-hierarchical RL to achieve those subgoals. In this way, goal-setting is decoupled from goal-achievement; each level in the hierarchy communicates to the level below it what must be achieved, but does not specify how to do so. The manager maximizes the reward coming from the environment to define the goals that the worker should achieve. Feudal RL was extended to deep RL through
Feudal Networks (FuN) (Vezhnevets et al., 2017). FuN learns a two-level hierarchy in which the higher-level manager outputs a goal vector $g _ { t }$ at time $t$ that specifies the direction in which the lower-level worker should modify the agent’s current state. Specifically, a linear transformation, $\phi$ , then maps the last $c$ goals outputted by the manager into an embedding vector $w _ { t }$ ,
$$
w _ { t } = \phi \big ( \sum _ { i = t - c } ^ { t } g _ { i } \big ) .
$$
The worker’s policy is then defined through this embedding vector and a matrix of learnable parameters $U _ { t }$ , that is $\pi ^ { \mathrm { { w o r k e r } } } = \operatorname { S o f t M a x } ( U _ { t } w _ { t } )$ .
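The goal-pooling and worker policy can be sketched numerically. In the snippet below, the dimensions and the random matrices standing in for the learned $\phi$ and $U_t$ are illustrative assumptions, not values from the FuN paper.

```python
import numpy as np

rng = np.random.default_rng(0)
c, goal_dim, embed_dim, num_actions = 4, 8, 3, 5  # hypothetical sizes

# Random stand-ins for the learned linear map phi and the last c manager goals.
phi = rng.normal(size=(embed_dim, goal_dim)) / np.sqrt(goal_dim)
goals = [rng.normal(size=goal_dim) for _ in range(c)]

# w_t = phi(sum of the last c goals): the pooled-goal embedding.
w_t = phi @ np.sum(goals, axis=0)

# Worker policy pi = SoftMax(U_t w_t), with U_t a learnable matrix.
U_t = rng.normal(size=(num_actions, embed_dim))
logits = U_t @ w_t
pi_worker = np.exp(logits - logits.max())
pi_worker /= pi_worker.sum()

assert np.isclose(pi_worker.sum(), 1.0) and (pi_worker > 0).all()
```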
The worker policy is trained through the standard policy gradient update rule, where the intrinsic reward measures alignment with the manager’s goal vectors,
$$
r _ { t } ^ { \mathrm { w o r k e r } } = \frac { 1 } { c } \sum _ { i = 1 } ^ { c } d _ { \cos } ( s _ { t } - s _ { t - i } , g _ { t - i } ) ,
$$
where $d _ { \cos } ( x , y )$ is the cosine similarity between vectors $x$ and $y$. The manager policy is learned with the task reward function; however, the authors propose the following update rule, which they term directional policy gradient,
$$
\nabla \mu ( g _ { t } | s _ { t } ) = \nabla d _ { c o s } ( s _ { t + c } - s _ { t } , g _ { t } ) \left( Q ^ { \mathrm { m a n a g e r } } ( s _ { t } , g _ { t } ) - V ^ { \mathrm { m a n a g e r } } ( s _ { t } ) \right) .
$$
This update emphasizes the direction in which a goal vector points, and whether that direction was achieved by the transition from $s _ { t }$ to $s _ { t + c }$ .
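The worker’s intrinsic reward can be computed directly from its definition. The sketch below is a toy instance: states living in $\mathbb{R}^2$ and constant goals are assumptions made purely for illustration.

```python
import numpy as np

def d_cos(x, y, eps=1e-8):
    """Cosine similarity between vectors x and y."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + eps))

def worker_reward(states, goals, t, c):
    """(1/c) * sum_{i=1..c} d_cos(s_t - s_{t-i}, g_{t-i})."""
    return sum(d_cos(states[t] - states[t - i], goals[t - i])
               for i in range(1, c + 1)) / c

# Toy rollout in R^2: the agent moves along +x while every goal points
# along +x, so each cosine term is ~1 and the reward is maximal.
states = [np.array([float(t), 0.0]) for t in range(5)]
goals = [np.array([1.0, 0.0])] * 5
assert abs(worker_reward(states, goals, t=4, c=3) - 1.0) < 1e-6
```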
As with most skill discovery methods, the high-level policy is trained at the same time as the low-level policy; as the low-level policy changes during learning, data from a high-level action taken in the past may not yield the same low-level behaviour in the future (Nachum et al., 2018). This non-stationarity is addressed using relabeling tricks and off-policy learning in the HIRO algorithm (Nachum et al., 2018). In their work, as well as following literature, the worker’s intrinsic reward is defined as,
$$
r ^ { \mathrm { w o r k e r } } ( s _ { t } , g _ { t } , s _ { t + 1 } ) = - | | s _ { t } + g _ { t } - s _ { t + 1 } | | .
$$
This definition forgoes the explicit use of the cosine similarity; however, it maintains the idea that a goal vector represents a delta between state transitions. Later, Levy et al. (2019) present the Hierarchical Actor-Critic (HAC) algorithm, which improves upon HIRO by removing the need for dense reward functions, instead using hindsight experience replay (Andrychowicz et al., 2017). In a separate direction, Hafner et al. (2022) instantiate the feudal architecture within a model-based algorithm called Director, which shows strong performance across a wide range of environments. Their approach additionally provides interpretability as the world model can decode goals into images.
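A minimal sketch of the HIRO-style worker reward, together with a goal-transition function of the form used in Nachum et al. (2018) to keep a goal pointing at the same absolute target between high-level decisions. The vectors below are toy values chosen for illustration.

```python
import numpy as np

def hiro_worker_reward(s_t, g_t, s_next):
    """Dense intrinsic reward: -|| s_t + g_t - s_{t+1} ||."""
    return -float(np.linalg.norm(s_t + g_t - s_next))

def goal_transition(s_t, g_t, s_next):
    """Re-expresses the goal after one step so it still points at the same
    absolute target state: g_{t+1} = s_t + g_t - s_{t+1}."""
    return s_t + g_t - s_next

s_t = np.array([0.0, 0.0])
g_t = np.array([1.0, 1.0])      # "move by (1, 1)"
s_next = np.array([1.0, 1.0])   # the worker achieved exactly that delta

assert hiro_worker_reward(s_t, g_t, s_next) == 0.0        # maximal reward
assert np.allclose(goal_transition(s_t, g_t, s_next), 0)  # goal consumed
```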
# Option-Critic
The second set of approaches is based on the option-critic (Bacon et al., 2017). In this work, the authors derive both the intra-option policy gradient theorem as well as the termination gradient theorem, which provide the update rules for learning option policies and termination
functions, respectively. The intra-option policy gradient theorem leads to the following update,
$$
\frac { \partial q _ { \pi } ( s _ { 0 } , o _ { 0 } ) } { \partial \theta } = \sum _ { s , o } d _ { \pi , \mu , \beta } ^ { \gamma } ( s , o ) \sum _ { a } \frac { \partial \pi _ { \theta } ( a | s , o ) } { \partial \theta } q _ { U } ( s , o , a ) ,
$$
where $\begin{array} { r } { d _ { \pi , \mu , \beta } ^ { \gamma } ( s , o ) = \sum _ { t } \gamma ^ { t } P _ { \pi , \mu , \beta } ( S _ { t } = s , O _ { t } = o ) } \end{array}$ is the $\gamma$ -discounted occupancy measure over state-option pairs. In the policy gradient theorem (Sutton et al., 1999a), the gradient of the flat policy is weighted by the state-action value function, increasing the probability of actions whose future discounted return is higher. In the case of the intra-option policy gradient, the quantity modulating the action probabilities is the state-action-option value function; the result is therefore a strict generalization of the policy gradient theorem. The termination gradient theorem used to learn the termination function is derived from the value of option $o$ upon arrival in state $s$ (see Equation 11). The update rule takes the following form,
$$
\frac { \partial u _ { \beta } ( s _ { 0 } , o _ { 0 } ) } { \partial \psi } = - \sum _ { s ^ { \prime } , o } d _ { \pi , \mu , \beta } ^ { \gamma } ( s ^ { \prime } , o ) \frac { \partial \beta _ { \psi } ( s ^ { \prime } , o ) } { \partial \psi } A ( s ^ { \prime } , o ) ,
$$
where $A ( s ^ { \prime } , o ) = q _ { \pi } ( s ^ { \prime } , o ) - v _ { \mu } ( s ^ { \prime } )$ is the advantage function over options, representing how advantageous it is to be in state $s ^ { \prime }$ with option $o$ with respect to the value of state $s ^ { \prime }$ averaged over all options. Usually, the advantage function is introduced as a heuristic for reducing the variance of the estimator, but in this case, it emerges naturally from the derivation of the theorem. Later, Bacon (2018) unified these different objectives and derivations through the following objective,
$$
J _ { \alpha } ( \omega ) = \sum _ { s , o } \alpha ( s , o ) Q _ { \omega } ( s , o ) = \mathbb { E } _ { \alpha , \omega } \left[ \sum _ { t = 0 } ^ { \infty } \gamma ^ { t } r ( S _ { t } , A _ { t } ) \right] ,
$$
where $\alpha \in \mathrm { D i s t } ( \mathcal { S } \times \mathcal { O } )$ is a distribution over initial state-option pairs, and where $\omega$ denotes all the parameters within the options framework, including the terminations, option policies, and high-level policy. The authors show how, by assuming independence between the parameters of these components, the previous update rules can be recovered.
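To make the advantage over options concrete, the sketch below builds a synthetic tabular $q_\pi(s, o)$ and a uniform high-level policy, and checks the sign pattern that drives the termination update: options that are worse than average in a state have negative advantage and should therefore be more likely to terminate there. All quantities are random stand-ins, not a full option-critic implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
num_states, num_options = 6, 3

q = rng.normal(size=(num_states, num_options))  # tabular q_pi(s, o)
mu = np.full(num_options, 1.0 / num_options)    # uniform high-level policy mu(o|s)
v = q @ mu                                      # v_mu(s) = sum_o mu(o|s) q(s, o)

def advantage(s, o):
    """A(s', o) = q_pi(s', o) - v_mu(s'): how good option o is in state s'
    relative to the average over options."""
    return q[s, o] - v[s]

# The termination gradient pushes beta up where the advantage is negative:
# below-average options should terminate, above-average ones should continue.
s = 2
best_o = int(np.argmax(q[s]))
worst_o = int(np.argmin(q[s]))
assert advantage(s, best_o) > 0 > advantage(s, worst_o)
```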
The line of work surrounding the option-critic has received significantly more attention than we can present in detail in this section. Some of the contributions include learning safe policies (Jain et al., 2018), using multiple discount factors (Harutyunyan et al., 2019b), learning option termination in an off-policy manner (Harutyunyan et al., 2019a), extending the theorems to multiple levels of hierarchy (Riemer et al., 2018), and theoretical derivations that take parameter sharing between options into consideration (Riemer et al., 2019).
# 4.5.1 Benefits and Opportunities
Credit assignment. Vezhnevets et al. (2017) show strong performance of their feudal method on a set of Atari 2600 games from the Arcade Learning Environment (Bellemare et al., 2012) and 3D navigation challenges (Beattie et al., 2016). These domains are long-horizon and require the agent to propagate credit across multiple steps. The authors report significantly better results than a baseline not leveraging such a hierarchy.
Transfer. Bacon et al. (2017) present experiments where the learned options improve the ability to generalize across changes in the Four rooms environment (Sutton et al., 1999b) compared to non-hierarchical RL algorithms. Such changes included modifying the goal location and the agent’s starting location. This benefit is later reinforced by multiple works (Zhang and Whiteson, 2019; Khetarpal et al., 2020b; Kamat and Precup, 2020; Klissarov and Precup, 2021) showcasing the transferability of options learned through the option-critic method in more complex environments such as locomotion control (Todorov et al., 2012) and 3D navigation (Chevalier-Boisvert et al., 2023). In these transfer experiments, the agent usually first learns to perform a task before some component of the task is changed.
Interpretability. A particular highlight of the option-critic line of work is that interpretability naturally emerges by learning options directly from environmental rewards. For example, Bacon et al. (2017) report experiments where the termination function would highlight bottleneck states, which are often seen as key in learning temporal abstraction (Stolle and Precup, 2002). Findings on interpretability are similarly reported across different domains (Harb et al., 2018; Klissarov et al., 2017; Zhang and Whiteson, 2019).
Opportunities for Research.
• Avoiding option degeneracy. An important practical obstacle when learning options through the update rules proposed by the option-critic is that they may lead to degenerate solutions (Luo et al., 2023). Options tend to collapse into primitive actions, with each option lasting only a single timestep. Another observed phenomenon is that only one option ends up being executed throughout all episodes. In both cases, the essence of temporal abstraction is lost. To avoid such undesirable behaviour, the authors add a penalty term $c _ { \mathrm { d e l i b } }$ to the termination gradient’s advantage function: $A ( s ^ { \prime } , o ) + c _ { \mathrm { d e l i b } }$ . This term discourages the termination function from switching options unless the advantage of doing so is greater than the value of $c _ { \mathrm { d e l i b } }$ . A thorough theoretical derivation later justified the use of such a term, which was coined the deliberation cost (Harb et al., 2018). This cost is introduced as a hyperparameter, which raises the question of what value to choose for a specific environment. Discovering more general solutions to option degeneracy remains an open area of research.
• Reliance on the environment reward. The strength of the methods we presented in this section is that they do not require a human-defined objective for learning the hierarchy. As a consequence, however, these methods rely heavily on an informative environment reward. For example, in feudal methods, if the high-level policy is poorly trained due to sparse environmental rewards, it might output goals that fail to drive the learning progress of the lower-level policy. To address this exploration challenge, recent methods such as HAC-Explore incorporate novelty-based intrinsic rewards (McClinton et al., 2021) or demonstrations (Gupta et al., 2019) to solve longer-horizon tasks.
# 4.6 Directly Optimizing for the Benefits of Hierarchical Reinforcement Learning
Many of the option discovery methods that we have discussed so far rely on proxy objectives; these objectives include finding bottleneck states, empowerment maximization, more reliable
composability, and so on. The intuition is that if the agent had options that maximized these proxy objectives, it would unlock agent-level capabilities such as effective exploration, credit assignment, or transfer. Indeed, these methods often show empirical success in some scenarios, but the formal connection between these proxy objectives and the overall objectives of the agent is unclear (Solway et al., 2014). For example, options that target bottleneck states are empirically useful in some tasks, but what kind of performance can we expect from the same technique in an entirely different problem? In fact, several papers have shown that not all skills are created equal; that is, options that are perfectly suited for a particular task might severely hurt agent-level objectives in other tasks (Jong et al., 2008; Solway et al., 2014). To address this gap, a class of methods, initiated by Solway et al. (2014), has sought to discover options with precise guarantees on agent-level objectives. These methods explicitly state the performance criterion of the agent and then derive an algorithm that discovers options with bounded loss on that criterion.
# 4.6.1 Benefits and Opportunities
Planning. In the planning context, option discovery can be framed as the search for a set of options that minimizes the planning time—defined as the number of iterations a planning algorithm (e.g., value iteration) takes to approximate the optimal value function $v ^ { * }$ within some accuracy $\epsilon$ (Silver and Ciosek, 2012; Jinnai et al., 2019a). Formally, given a maximum allowable value error $\begin{array} { r } { \operatorname* { m a x } _ { s \in \mathcal { S } } | v ^ { * } ( s ) - \hat { v } ( s ) | \le \epsilon } \end{array}$ , the goal is to find a set of at most $k$ options $\mathcal { O }$ that minimizes $L _ { \epsilon }$ , the number of iterations needed to reach this accuracy:
$$
\operatorname* { m i n } _ { \mathcal { O } } L _ { \epsilon } \quad \mathrm { s . t . } \left| \mathcal { O } \right| \leq k .
$$
Jinnai et al. (2019a) prove that this problem is NP-hard, even in deterministic tabular MDPs. They introduce approximation algorithms with provable guarantees, but their results are limited to point options—options that initiate and terminate in a single state.
While their method minimizes worst-case planning time, Average Options (Ivanov et al., 2024) focuses instead on minimizing the expected planning time across a distribution of tasks. These tasks share the same transition dynamics, but differ in their start and goal states. The idea is to discover options that reduce the expected cost of reaching any state from any other:
$$
\underset { \mathcal { O } } { \arg \operatorname* { m i n } } \quad d _ { \mathcal { O } } ( G ) = \underset { \mathcal { O } } { \arg \operatorname* { m i n } } \sum _ { s \in \mathcal { S } } \sum _ { s ^ { \prime } \in \mathcal { S } } d _ { \mathcal { O } } ( s , s ^ { \prime } ) ,
$$
where $d _ { \mathcal { O } } ( s , s ^ { \prime } )$ is a non-symmetric distance metric (e.g., shortest path length) in the MDP graph augmented with options $\mathcal { O }$ ; such an augmentation adds edges to the graph, while leaving nodes unchanged. Like the worst-case version, this problem is also NP-hard. However, by reducing it to the well-studied $k$ -medians with penalties problem in graph theory (Meyerson and Tagiku, 2009), Ivanov et al. (2024) derive efficient approximation algorithms with bounded suboptimality. Planning can also be sped up using options in the single-task setting: Wan and Sutton (2022) present an option discovery algorithm that seeks options that maximize reward—similar to option-critic (Harb et al., 2018)—but reduces the number of options available at different states to reduce planning time.
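The Average Options objective can be evaluated exactly on small graphs. The sketch below uses a hypothetical ring-shaped MDP graph, with BFS path length standing in for $d_{\mathcal{O}}$, to show how adding a single option edge reduces the all-pairs cost while leaving the nodes unchanged.

```python
from collections import deque

def all_pairs_cost(n, edges):
    """Sum of shortest-path lengths d(s, s') over all ordered pairs, via BFS
    from each source.  (A penalty for unreachable pairs is omitted here.)"""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    total = 0
    for src in range(n):
        dist = {src: 0}
        dq = deque([src])
        while dq:
            u = dq.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    dq.append(v)
        total += sum(dist.values())
    return total

# A 6-state ring with bidirectional edges.
n = 6
ring = [(i, (i + 1) % n) for i in range(n)] + [((i + 1) % n, i) for i in range(n)]
base = all_pairs_cost(n, ring)

# Adding a single "option" edge across the ring shortens many pairs at once.
with_option = all_pairs_cost(n, ring + [(0, 3), (3, 0)])
assert with_option < base
```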
Exploration. In the context of exploration, Jinnai et al. (2019b) formalize the performance criterion of the agent as minimizing the number of steps needed for a policy to visit every state (as a proxy for discovering some unknown reward). They show that this performance criterion is related to the graph-theoretic property of cover time, which measures the number of steps needed by a random walk to visit every edge in a graph. To define the cover time $C$, we first need the hitting time $H _ { i j }$ between two states $i$ and $j$: the hitting time in a Markov chain is the first time at which a walk started in source state $i$ reaches destination state $j$: $H _ { i j } = \operatorname* { i n f } \{ t : S _ { t } = j \ | \ S _ { 0 } = i \}$. Then, the cover time $C _ { i }$ starting in state $i$ is the maximum hitting time over all possible destination states: $C _ { i } = \operatorname* { m a x } _ { j \in \mathcal { S } } H _ { i j }$. Jinnai et al. (2019b) show that the expected cover time $\mathbb { E } [ C _ { i } ]$ (where the expectation is with respect to the dynamics induced by a random walk) can be most effectively reduced by creating an option that connects the two states that are furthest apart according to the second eigenvector of the graph Laplacian (see Equation 20). Jinnai et al. (2019b) also show that finding options that minimize cover time in a graph is NP-hard; but, they provide an approximation algorithm that minimizes an upper bound on the expected cover time. This method was later extended to continuous environments using deep learning-based approximations of the graph Laplacian (Jinnai et al., 2020; Wu et al., 2019), further suggesting strong connections to the eigenoptions literature (Machado et al., 2017, 2023; Klissarov and Machado, 2023) (cf. Section 4.2).
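This effect is easy to verify empirically: on a path graph, an option connecting the two endpoints (exactly the pair a Laplacian-based criterion would tend to select) shortens the expected cover time of a random walk. The Monte Carlo estimate below is a toy sketch, not the authors' approximation algorithm.

```python
import random

def cover_time(adj, start, rng):
    """Steps for a random walk from `start` to visit every node at least once."""
    visited = {start}
    node, steps = start, 0
    while len(visited) < len(adj):
        node = rng.choice(adj[node])
        visited.add(node)
        steps += 1
    return steps

def expected_cover_time(adj, start, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(cover_time(adj, start, rng) for _ in range(trials)) / trials

# Path graph 0-1-2-...-7; the endpoints are the "furthest apart" states.
n = 8
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

# An option connecting the two extreme states shortcuts the walk.
shortcut = {i: nbrs[:] for i, nbrs in path.items()}
shortcut[0].append(n - 1)
shortcut[n - 1].append(0)

assert expected_cover_time(shortcut, 0) < expected_cover_time(path, 0)
```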
Credit assignment. As discussed earlier, options can accelerate policy evaluation by enabling value updates that span multiple steps, rather than progressing one step at a time. Bacon and Precup (2016) formalize this intuition using the lens of matrix splitting, a technique from numerical linear algebra that speeds up the solution of linear systems. In their view, each set of options defines a modified Bellman operator that can be interpreted as a preconditioned version of the original policy evaluation problem. Recall that the Bellman expectation equation for a fixed policy $\pi$ is:
$$
\begin{array} { r } { v = r _ { \pi } + \gamma P _ { \pi } v , } \end{array}
$$
where $v \in \mathbb { R } ^ { | \mathcal { S } | }$ is the value function, $r _ { \pi }$ is the expected reward vector, and $P _ { \pi }$ is the transition matrix under policy $\pi$ . This is a linear system of the form $A v = b$ , with $A = I - \gamma P _ { \pi }$ , and $b = r _ { \pi }$ . Planning with options induces a matrix splitting $A = M - N$ (Varga, 2000), leading to an iterative update of the form:
$$
v _ { k + 1 } = M ^ { - 1 } N v _ { k } + M ^ { - 1 } b .
$$
In this formulation, the matrix $M$ reflects the dynamics induced by the options, and is chosen to be easy to apply and invert; the remaining part $N$ captures what is not directly handled by the options. The matrix $M ^ { - 1 } N$ is known as the iteration matrix, as it governs how the current value estimate $v _ { k }$ influences the next one $v _ { k + 1 }$ . This kind of transformation is known as preconditioning: a way of rewriting the problem so that the resulting iterative updates converge more quickly. The speed of convergence is governed by the spectral radius $\rho _ { r } ( M ^ { - 1 } N )$ : the largest absolute eigenvalue of the iteration matrix. A smaller spectral radius means that errors shrink faster with each iteration. From this perspective, a good set of options is one that minimizes $\rho _ { r } ( M ^ { - 1 } N )$ , enabling value information to propagate more efficiently. While Bacon and Precup (2016) do not introduce a concrete option discovery algorithm, they offer a powerful design principle: discover options that act as preconditioners for value propagation. This opens the door to leveraging ideas from numerical linear algebra in option discovery.
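The preconditioning view can be checked numerically. In the sketch below, the splitting $M = I$, $N = \gamma P_\pi$ recovers plain value iteration with spectral radius exactly $\gamma$, while folding part of $A$ into $M$ (a Gauss-Seidel-style splitting, used here only as a stand-in for option-induced dynamics) lowers the spectral radius and still converges to the same fixed point. The MDP is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 0.95

# Random row-stochastic transition matrix P_pi and reward vector r_pi.
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(n)

A = np.eye(n) - gamma * P
v_star = np.linalg.solve(A, r)  # exact solution of v = r + gamma * P v

def spectral_radius(M, N):
    """rho(M^{-1} N): governs the convergence rate of the splitting."""
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(M, N))))

# Plain value iteration is the splitting M = I, N = gamma * P, with rho = gamma.
assert np.isclose(spectral_radius(np.eye(n), gamma * P), gamma)

# A better-preconditioned splitting: fold the lower-triangular part of A
# into M (Gauss-Seidel).  The spectral radius drops below gamma.
M = np.tril(A); N = M - A
assert spectral_radius(M, N) < gamma

# Both iterations converge to the same fixed point v*.
v = np.zeros(n)
for _ in range(500):
    v = np.linalg.solve(M, N @ v + r)
assert np.allclose(v, v_star, atol=1e-6)
```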
Transfer. In the context of transfer, Solway et al. (2014) define the optimal set of options as those that maximize the efficiency with which an agent can learn the optimal policy for other, possibly unseen, sets of tasks. They show that in this setting, optimal options are those that maximize Bayesian model evidence under the distribution of tasks that the agent is expected to solve. Specifically, a hierarchy that maximizes model evidence, also provably minimizes the geometric mean of the number of samples needed to find the optimal policy for any task in the given task distribution. Brunskill and Li (2014) consider a similar formulation of the option transfer problem: given interaction data from a set of tasks, how can an agent learn options that minimize the sample complexity of learning in a future stream of tasks? They find that this problem is at least as hard as the set cover problem in Operations Research, and is thus also NP-hard. They use a greedy approximation algorithm for option discovery and evaluate it empirically in a tabular MDP.
Opportunities for Research.
• Guarantees in more general settings. The papers discussed in this section emphasize the importance of formally stating the objective of option discovery and relating that to the overall objectives of the agent. However, this research is still nascent, and more papers exploring this subject are needed. For instance, can we develop formal algorithms that bound planning time without needing the assumption of “point options”? Can we bound planning time or cover time when using function approximation? Although Brunskill and Li (2014) derive an algorithm to minimize sample complexity during transfer, the greedy approximation algorithm they present does not bound sample complexity; future work could extend their theoretical results to bound the performance of the greedy approximation algorithm. Finally, can we write down the problems of option-driven exploration, planning, and policy evaluation in different ways that result in HRL algorithms with even stronger guarantees or better scaling properties?
# 4.7 Meta Learning
RL algorithms, such as Q-learning, learn policies; meta-RL algorithms, in contrast, aim to learn the RL algorithm itself, or parts of it, to subsequently learn a policy. This creates a bilevel optimization: the algorithm for learning the RL algorithm itself is called the outer-loop, while the learned algorithm (which learns a policy) is called the inner-loop (Schmidhuber, 1987; Thrun and Pratt, 1998; Beck et al., 2023). The appeal of meta-RL approaches is that if the environment demands certain properties from the RL agent (for example, transferability), then such properties will automatically be learned from data, without the explicit need for careful human ingenuity and design in every part of the training process (Silver et al., 2021).
Typically, a meta-RL algorithm consists of an inner and an outer loop. Within each of these loops, a set of parameters is optimized. Concretely, let $\omega _ { o u t }$ represent the parameters learned by the outer loop, and $\omega _ { i n }$ the parameters learned by the inner loop.
These parameters in practice represent a particular subset of the option parameters presented in Section 3. For example, in the work by Veeriah et al. (2021), the inner loop optimizes the parameters of the option policies and the high-level policy, whereas in the work by Frans et al. (2018) the parameters of the high-level policy are part of the outer loop.
# Meta-Gradients
A common instantiation of meta-RL algorithms is through the use of meta-gradients. In the inner loop, the agent updates the inner parameters,
$$
\omega _ { \mathrm { i n } } ^ { \prime } = \omega _ { \mathrm { i n } } + \alpha \nabla _ { \omega _ { \mathrm { i n } } } J _ { \mathrm { i n } } ( \omega _ { \mathrm { i n } } ) ,
$$
where $J _ { \mathrm { i n } }$ is an arbitrary objective that depends on $\omega _ { \mathrm { i n } }$ . To obtain the meta-gradients, we assume that the updated inner parameters depend on the outer parameters. Data is then collected with the updated inner parameters in order to perform the following update,
$$
\begin{array} { r l } { \omega _ { \mathrm { o u t } } ^ { \prime } } & { = \omega _ { \mathrm { o u t } } + \alpha \nabla _ { \omega _ { \mathrm { o u t } } } J _ { \mathrm { o u t } } ( \omega _ { \mathrm { i n } } ^ { \prime } ( \omega _ { \mathrm { o u t } } ) ) } \\ & { = \omega _ { \mathrm { o u t } } + \alpha \nabla _ { \omega _ { \mathrm { i n } } ^ { \prime } } J _ { \mathrm { o u t } } ( \omega _ { \mathrm { i n } } ^ { \prime } ( \omega _ { \mathrm { o u t } } ) ) \, \nabla _ { \omega _ { \mathrm { o u t } } } \omega _ { \mathrm { i n } } ^ { \prime } ( \omega _ { \mathrm { o u t } } ) , } \end{array}
$$
where $\nabla _ { \omega _ { \mathrm { o u t } } } \omega _ { \mathrm { i n } } ^ { \prime } ( \omega _ { \mathrm { o u t } } )$ encodes how the outer loop parameters affected the updated inner loop parameters. The objectives $J _ { \mathrm { i n } }$ and $J _ { \mathrm { o u t } }$ may differ in various ways, such as defining different distributions over tasks.
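A scalar instance makes the chain rule above concrete: the outer parameter shapes the inner objective, the inner loop takes one gradient step, and the outer gradient flows through that step via $\nabla_{\omega_{\mathrm{out}}} \omega'_{\mathrm{in}}(\omega_{\mathrm{out}})$. The quadratic objectives below are illustrative stand-ins, and the analytic meta-gradient is verified against a finite difference.

```python
# Hypothetical scalar meta-gradient example (not any paper's objective):
# the inner objective pulls w_in toward w_out; the outer objective is
# evaluated at the *updated* inner parameter.
alpha, target = 0.1, 3.0

def inner_update(w_in, w_out):
    # One ascent step on J_in(w_in) = -(w_in - w_out)^2.
    return w_in + alpha * (-2.0) * (w_in - w_out)

def j_out(w_in_prime):
    # Outer objective J_out = -(w'_in - target)^2.
    return -(w_in_prime - target) ** 2

def meta_grad(w_in, w_out):
    w_prime = inner_update(w_in, w_out)
    dj_dwprime = -2.0 * (w_prime - target)  # grad of J_out at w'_in
    dwprime_dwout = 2.0 * alpha             # how w_out shaped the inner step
    return dj_dwprime * dwprime_dwout       # chain rule

# Check the analytic meta-gradient against a central finite difference.
w_in, w_out, eps = 0.0, 1.0, 1e-6
fd = (j_out(inner_update(w_in, w_out + eps))
      - j_out(inner_update(w_in, w_out - eps))) / (2 * eps)
assert abs(meta_grad(w_in, w_out) - fd) < 1e-5
```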
Veeriah et al. (2021) leverage meta-gradients to learn options in high-dimensional navigation environments. In the inner loop, they update the option policies parameters, $\theta$ , and the high-level policy parameters $\kappa$ ,
$$
\begin{array} { r l } { \theta ^ { \prime } } & { = \theta + \alpha _ { \theta } \, ( G _ { t } - q _ { \pi } ( s _ { t } , o _ { t } ) ) \, \nabla _ { \theta } \log \pi _ { \theta } ( a _ { t } | s _ { t } , o _ { t } ) , } \\ { \kappa ^ { \prime } } & { = \kappa + \alpha _ { \kappa } \, ( G _ { t } ^ { \mu } - v _ { \mu } ( s _ { t } ) ) \, \nabla _ { \kappa } \log \mu _ { \kappa } ( o _ { t } | s _ { t } ) , } \end{array}
$$
where $G _ { t }$ is the option policy return (see Section 3.2.1) and $G _ { t } ^ { \mu }$ is an $n$ -step return for the high-level policy defined as, $\begin{array} { r } { G _ { t } ^ { \mu } = \sum _ { j = 1 } ^ { n } \gamma ^ { j } r _ { t + j } - \gamma ^ { n } c + \gamma ^ { n + 1 } V _ { \mu } ( s _ { t + n } ) } \end{array}$ where $c$ is a switching cost added on option terminations, similar to Harb et al. (2018). The outer loop is instantiated through the following updates to the parameters $\nu$ of the option reward function and the parameters $\psi$ of the termination function,
$$
\begin{array} { r l } { \psi } & { \leftarrow \psi + \alpha _ { \psi } \, ( G _ { t } ^ { \mu } - v _ { \mu } ( s _ { t } ) ) \, \nabla _ { \psi } \log \pi _ { \theta ^ { \prime } ( \psi , \nu ) } ( a _ { t } | s _ { t } , o _ { t } ) , } \\ { \nu } & { \leftarrow \nu + \alpha _ { \nu } \, ( G _ { t } ^ { \mu } - v _ { \mu } ( s _ { t } ) ) \, \nabla _ { \nu } \log \pi _ { \theta ^ { \prime } ( \psi , \nu ) } ( a _ { t } | s _ { t } , o _ { t } ) . } \end{array}
$$
The outer loop updates the option-reward and termination meta-parameters using a new trajectory generated by interacting with the environment using the most recent inner-loop parameters, $\theta ^ { \prime } ( \psi , \nu )$ and $\kappa ^ { \prime } ( \psi , \nu )$ , which depend on the outer loop parameters. The update in the outer loop assesses the impact of updates to the high-level policy, $\mu _ { \kappa }$ , and option policies, $\pi _ { \boldsymbol { \theta } }$ , and it may involve a different distribution of tasks than the one used in the inner loop, as is common in meta learning (Finn et al., 2017).
Figure 10: Black-box meta reinforcement learning. Trials consist of multiple episodes during which the hidden state, $h _ { i }$ , of the agent is unrolled. The hidden state is only reset between trials. Figure reproduced from (Duan et al., 2016).
# Black-box Meta Reinforcement Learning
In black-box meta RL (Wang et al., 2016; Duan et al., 2016), an agent interacts with a sequence of different tasks drawn from an arbitrary distribution, $p ^ { \xi } : \xi \to \mathbb { R } _ { + }$ . Each interaction with a task, or distribution of tasks, is considered a trial, which itself consists of $N$ episodes, represented in Figure 10. During a trial, the agent receives observations, rewards, and termination signals from the environment, where episode termination signals represent the episode boundaries. These variables are used to update the agent’s internal memory $h$ , which is typically represented by the hidden state of an RNN (Hochreiter and Schmidhuber, 1997) or the context of a transformer network (Vaswani et al., 2017). Importantly, the agent continuously updates $h$ across episodes within the same trial; the memory is only reset at the end of each trial. The overall goal is to maximize the total reward accumulated over an entire trial,
$$
\operatorname* { m a x } _ { \pi } \mathbb { E } _ { \xi \sim p ^ { \xi } } \left[ \sum _ { \mathrm { e p i s o d e } = 1 } ^ { N } \mathbb { E } _ { \pi } \left[ \sum _ { t } r ^ { \xi } ( s _ { t } , a _ { t } ) \right] \right] ,
$$
where $r ^ { \xi }$ is the reward associated with task $\xi$ . This objective incentivizes the agent to learn how to adapt its policy based on the experience gathered so far during a trial, effectively forcing it to implicitly learn, through the updates to its policy’s memory $h$ , a reinforcement learning rule capable of efficient adaptation to new tasks. When further conditioning the policy $\pi$ on the task’s goal $g$ , as done by Bauer et al. (2023), this approach can lead to human-timescale adaptation.
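The trial structure can be sketched with a toy recurrent agent whose memory persists across the episodes of a trial and is reset only at trial boundaries. All names here are hypothetical, and the scalar memory update is a stand-in for a real RNN or transformer state update:

```python
class RecurrentAgent:
    """Toy stand-in for a recurrent policy: the memory h accumulates
    experience and is reset only between trials, not between episodes."""
    def __init__(self):
        self.h = 0.0  # hypothetical scalar memory (stand-in for an RNN state)

    def reset_memory(self):
        self.h = 0.0

    def act(self, obs, reward):
        # Fold the latest observation and reward into memory.
        self.h = 0.9 * self.h + 0.1 * (obs + reward)
        return 0 if self.h < 0.5 else 1


def run_trial(agent, env_step, num_episodes, horizon):
    """One trial = num_episodes episodes on the same task; memory persists
    across episodes so the agent can adapt within the trial."""
    agent.reset_memory()  # reset ONLY at the trial boundary
    total = 0.0
    for _ in range(num_episodes):
        obs, reward, done = 0.0, 0.0, False  # episode boundary: env resets
        for _ in range(horizon):
            a = agent.act(obs, reward)
            obs, reward, done = env_step(a)
            total += reward
            if done:
                break
    return total
```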
# 4.7.1 Benefits and Opportunities
Transfer. As discussed earlier, a major benefit of learning options is that of reuse: options learned in one part of the state-space could speed up learning in another (Taylor and Stone, 2009; Konidaris and Barto, 2007). Some methods have tried to discover transferable options using meta-learning. For example, MLSH (Frans et al., 2018) discovers a set of policies and trains a high-level policy to select among them. The meta-objective trains these components such that the high-level policy can quickly learn to solve new tasks from a distribution by reusing the learned skills, making them reusable across a pre-specified task distribution (Nam et al., 2022; Gupta et al., 2018; Fu et al., 2023). MODAC (Veeriah et al., 2021) uses meta-gradients (Xu et al., 2018; Oh et al., 2020) to do the same: an outer loop learns option reward functions and termination functions that an inner loop maximizes using policy gradients. The outer loop of the optimization learns from the reward coming from the environment.
Exploration. Meta-learning approaches have also sought to address the exploration question in non-stationary and multi-task settings. When the agent finds itself in a new environment, how can it leverage its past experiences to explore this new environment in a targeted way? This problem is called meta-exploration by Beck et al. (2023). For example, when someone is in a new house and they have to look for utensils, they begin their search from the kitchen; similarly, we would like to create RL agents that can direct their exploration for quick adaptation in new environments (Gupta et al., 2018). This is one of the motivations for the Adaptive Agent (AdA) (Bauer et al., 2023) that uses meta-learning to train a policy capable of human-timescale adaptation in a massive, combinatorial task space (OEL Team et al., 2021). Specifically, they use black-box meta-RL: the policy is implemented as a Transformer-XL model (Dai et al., 2019). This model $\pi _ { \theta }$ takes the history $h$ of interactions within the current episode (past states, actions, rewards) and goal description, $g$ , as input to determine the next action. The adaptation happens implicitly within the recurrent state of the model. The combinatorial complexity of the environments allows for careful selection of tasks that are at the appropriate difficulty given the current agent capabilities, generating an effective meta-learning curriculum. As such, AdA mixes ideas from meta-learning as well as curriculum learning, which we cover in the next section.
# Opportunities for Research.
• Relaxing the multi-task formulation. Meta-learning approaches have demonstrated abilities of transfer, adaptation, and meta-exploration—abilities that have been challenging to scalably acquire using other techniques. Furthermore, meta-learning via in-context learning (Dong et al., 2022; Bauer et al., 2023; Raparthy et al., 2023) provides a scalable, and potentially simpler, way to acquire these crucial capabilities. Existing meta-learning approaches rely on a specialized multi-task formulation, with clear task boundaries and episodes. Methods that lift these assumptions will be able to bring these capabilities to a wider variety of settings. For an in-depth review of meta-learning approaches, please refer to Beck et al. (2023).
# 4.8 Curriculum Learning
Within a complex environment, there exists a diversity of goals that are interesting for an agent. Some of these goals might be easily achievable, whereas others would simply be impossible to complete for an agent’s current capabilities. How could, then, such an agent learn to achieve difficult goals? An effective strategy would be to try to achieve a curriculum of goals, where the complexity of each attempted goal increases continuously with the agent’s capabilities. The idea of curriculum learning has a long history in AI that goes beyond the RL setting (Kaplan and Oudeyer, 2003; Schmidhuber, 2004; Bengio et al., 2009; Schmidhuber, 2011). A central question then becomes, how should one prioritize which goal or task should be attempted at any given time? Taking the HRL perspective, we can rephrase this question as: how should the high-level policy select the next goal? This question can be formalized through the following objective function:
$$
\operatorname* { m a x } _ { \mu } \sum _ { g \in { \mathcal { G } } } \mathbb { E } _ { \pi ^ { \prime } } [ r ^ { g } ] ,
$$
where the goal-conditioned policy, $\pi ^ { \prime }$ , used in the expectation, $\mathbb { E } _ { \pi ^ { \prime } } [ \cdot ]$ , depends on the choice of the goal selection distribution $\mu$ . Specifically, $\pi ^ { \prime }$ is obtained by starting with an initial policy $\pi _ { 0 }$ and applying $N$ iterative updates. For each iteration $k = 1 , \ldots , N$ , a goal $g _ { k } \sim \mu$ is sampled, and the policy is updated via $\pi _ { k } = U _ { g _ { k } } ( \pi _ { k - 1 } )$ , where the update rule $U _ { g _ { k } }$ aims to maximize the reward $r ^ { g _ { k } }$ associated with goal $g _ { k }$ , i.e. through policy gradient updates. The final policy used in Equation 63 is $\pi ^ { \prime } = \pi _ { N }$ . The optimization will thus find the distribution $\mu$ which, when used to update $\pi$ , would lead to the best performance as measured across all goals $\mathcal { G }$ .
The objective of Equation 63 is also referred to as the global learning progress (LP), and is, as such, intractable. Researchers have thus approximated this objective through local measures of learning progress, $\mathrm { L P } _ { \mathrm { l o c a l } }$ (Baranes and Oudeyer, 2013; Stout and Barto, 2010; Forestier et al., 2017; Colas et al., 2018), which can be defined as
$$
\mathrm { L P } _ { \mathrm { l o c a l } , g } = V _ { \pi _ { t } , g } - V _ { \pi _ { t - i } , g } ,
$$
where $V _ { \pi _ { t } , g }$ is the estimate of the performance of the updated policy after $t$ iterations on goal $g$ and $V _ { \pi _ { t - i } }$ is that of the policy at iteration $t - i$ . These values are usually obtained through Monte Carlo estimates by rolling out policies over multiple episodes, thus possibly covering a subset of the possible goals within the goal space.
In such methods, the high-level policy $\mu$ is often optimized through multi-arm bandit algorithms rather than through RL. In other words, $\mu$ maximizes the one-step reward $\mathbb { E } [ r ^ { \mu } ] = \mathbb { E } [ \mathrm { L P } _ { \mathrm { l o c a l } , g } ]$ , and can then be defined as
$$
\mu _ { t } ( g ) = \frac { \exp ( | E _ { t } ( g ) | / e ) } { \sum _ { g \in { \mathcal { G } } } \exp ( | E _ { t } ( g ) | / e ) } ,
$$
where $e$ is the temperature and $E _ { t }$ is an exponential moving average of the rate of change in performance on goal $g$ ,
$$
E _ { t + 1 } ( g ) = ( 1 - \alpha ) E _ { t } ( g ) + \alpha \mathrm { L P } _ { \mathrm { l o c a l } , g } .
$$
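The bandit-style goal sampler defined by these two updates can be sketched as follows. The class and method names are our own; the exponential moving average and the softmax over $|E_t(g)|/e$ follow the equations above:

```python
import math

class LPBandit:
    """Goal sampler: keeps an exponential moving average E_t(g) of local
    learning progress per goal, and samples goals from a softmax over
    |E_t(g)| / e, where e is a temperature."""
    def __init__(self, goals, alpha=0.1, temperature=1.0):
        self.E = {g: 0.0 for g in goals}
        self.alpha = alpha
        self.e = temperature

    def update(self, g, lp_local):
        # E_{t+1}(g) = (1 - alpha) E_t(g) + alpha * LP_local,g
        self.E[g] = (1 - self.alpha) * self.E[g] + self.alpha * lp_local

    def probabilities(self):
        # mu_t(g) = exp(|E_t(g)| / e) / sum_g' exp(|E_t(g')| / e)
        w = {g: math.exp(abs(v) / self.e) for g, v in self.E.items()}
        z = sum(w.values())
        return {g: x / z for g, x in w.items()}
```

The absolute value means that goals whose performance is changing in either direction (improving, or being forgotten) are prioritized.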
The global learning progress objective can be approximated through other means, which we discuss in the following sub-section on the benefits and opportunities.
In addition to covering methods that produce curricula by explicitly generating goals according to a certain distribution, we also include a discussion around implicit curricula. In these methods, certain properties of the learning algorithm itself create a curriculum-like effect. A prominent example of an implicit curriculum is hindsight experience replay (HER) (Andrychowicz et al., 2017), which stores experience generated by seeking a certain goal $g$ in a buffer called an experience replay, and relabels such experience with a variety of other goals $g ^ { \prime }$ . We present HER in Algorithm 2, where we highlight the operations that differ from the standard use of an experience replay. We use the symbol of the high-level policy, $\mu$ , as the operator that relabels experience. In its most common form, HER relabels stored trajectories that do not reach their intended goals with whatever final state was reached. The relabeled goals then tend to naturally progress from those easily achievable by a random agent to increasingly challenging ones.
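A minimal sketch of HER-style relabeling is shown below, assuming goals live in the same space as states. The function name, the tuple layout, and the mix of "final" and "future" relabeling strategies are illustrative choices, not the exact algorithm of Andrychowicz et al. (2017):

```python
import random

def her_relabel(trajectory, reward_fn, k=4, rng=None):
    """Hindsight relabeling: for each transition, also store copies relabeled
    with the final achieved state ('final' strategy) and with states reached
    later in the same trajectory ('future' strategy).

    trajectory: list of (state, action, next_state, goal) tuples.
    reward_fn(next_state, goal): reward recomputed under the relabeled goal.
    """
    rng = rng or random.Random(0)
    replay = []
    final_goal = trajectory[-1][2]  # last achieved state stands in for a goal
    for t, (s, a, s2, g) in enumerate(trajectory):
        replay.append((s, a, s2, g, reward_fn(s2, g)))                    # original goal
        replay.append((s, a, s2, final_goal, reward_fn(s2, final_goal)))  # 'final'
        for _ in range(k):                                                # 'future'
            j = rng.randrange(t, len(trajectory))
            g2 = trajectory[j][2]
            replay.append((s, a, s2, g2, reward_fn(s2, g2)))
    return replay
```

Because the relabeled goals are states the agent actually reached, early trajectories yield easy goals and later, more capable policies yield harder ones, producing the implicit curriculum described above.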
Research on learning from a curriculum of goals has received much more attention than we can cover here, producing a diversity of approximations to the global learning progress (Forestier and Oudeyer, 2016; Matiisen et al., 2017; Kovač et al., 2020; Akakzia et al., 2021). For an in-depth review please see the surveys by Colas et al. (2020c) and Portelas et al. (2020).
# 4.8.1 Benefits and Opportunities
Exploration. One of the main benefits of leveraging curriculum learning to achieve goals is that the agent will be continuously pushed to the limits of its capacity. By doing so, it might discover new locations in an environment or learn completely new behaviour from the combination of previously achieved goals. A family of methods approximates the global learning progress, specifically with the intent of seeking intermediate difficulty. Florensa et al. (2018) propose using a Goal Generative Adversarial Network (Goal GAN) to automatically generate a curriculum of tasks for reinforcement learning agents. The method focuses on generating Goals of Intermediate Difficulty (GOID), defined as goals where the agent’s current policy, $\pi$ , achieves an expected performance $v _ { \pi }$ within a specific range:
$$
\mathrm { G O I D } _ { i } : = \{ g : v _ { \operatorname* { m i n } } \leq v _ { \pi } \leq v _ { \operatorname* { m a x } } \} .
$$
Here, $v _ { \mathrm { m i n } }$ and $v _ { \mathrm { m a x } }$ represent the minimum and maximum desired performance, ensuring goals are neither too easy nor too hard for the current policy $\pi _ { i }$ . The generator in Goal GAN is trained to output goals within the GOID set, whereas the discriminator is trained to distinguish between goals that are within the set from those that are not. Racanière et al. (2019) introduce a setter-solver paradigm with three criteria represented through values in $[ 0 , 1 ]$ : validity, feasibility, and coverage. Goals are sampled according to the distribution defined by these criteria, allowing for a balanced selection. Their findings highlight that these criteria, along with conditioning on the current version of the environment, are crucial for an effective learning curriculum. Sukhbaatar et al. (2018) instead rely on asymmetric self-play to generate a curriculum of explorative goals in reversible or resettable environments, leading to improved performance on a diverse set of tasks. Campero et al. (2021) train a goal-generating teacher to guide a goal-conditioned student policy by proposing goals that are neither too hard nor too easy, as measured by the number of timesteps to reach the goal,
$$
r ^ { \mu } = { \left\{ \begin{array} { l l } { + a } & { { \mathrm { i f ~ } } t ^ { \pi } \geq t ^ { * } } \\ { - b } & { { \mathrm { i f ~ } } t ^ { \pi } < t ^ { * } , } \end{array} \right. }
$$
where $a , b$ are hyperparameters that quantify the bonus and penalty, $t ^ { \pi }$ represents the time it took the policy to reach the goal, and $t ^ { * }$ is a hyperparameter.
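Both the GOID membership test and the teacher reward reduce to a few lines; the function names and default hyperparameters below are our own:

```python
def in_goid(v_pi, v_min, v_max):
    """Goal of intermediate difficulty: the current policy's expected
    performance on the goal lies within [v_min, v_max]."""
    return v_min <= v_pi <= v_max

def teacher_reward(t_pi, t_star, a=1.0, b=1.0):
    """Teacher reward: +a when the student needed at least t* steps
    (the goal was hard enough), -b when it solved the goal faster."""
    return a if t_pi >= t_star else -b
```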
Additionally, HER-based approaches (Andrychowicz et al., 2017; Fang et al., 2018; Yang et al., 2021a) have demonstrated promising results in improving exploration compared to curiosity-driven methods. Curriculum-guided HER (Fang et al., 2019) introduces an explicit curriculum that transitions from curiosity-driven selection early on to goal-proximity focus in later stages, mimicking human-like exploration. Complementing HER, CER (Liu et al., 2019) enhances exploration by introducing a competitive dynamic between two agents learning the same task, where one agent is penalized for revisiting states explored by the other. Many more works show how curriculum learning can help in hard-to-explore environments (Colas et al., 2018; Zhang et al., 2020; Pitis et al., 2020; Colas et al., 2020a).
Transfer. Learning successfully through curricula produces a whole set of behaviours that were previously not seen. OEL Team et al. (2021) leverage population-based training to quantify progress on goal completion within large, open-ended, and procedurally-generated environments and tasks. Through a continuum of task difficulty, the authors show that the resulting goal-conditioned agent can generalize zero-shot to new situations.
# Opportunities for Research.
• Refining the measure of progress. One of the main challenges in deriving a curriculum of goals is accurately measuring how difficult a chosen goal is for a learning algorithm at a certain point in time. Different heuristics can work well for certain environments, for example, the number of timesteps required for reaching a goal (Campero et al., 2021), or might involve a combination of heuristics (OEL Team et al., 2021). However, such formalizations might not be generally applicable. Ideas from unsupervised environment design, where the environment evolves as well as the agent’s parameters, could be particularly promising (Dennis et al., 2020; Jiang et al., 2021; Parker-Holder et al., 2022; Samvelyan et al., 2023). Another important desideratum is that a curriculum should continuously increase the difficulty of the goals, but should also generate interesting goals. Finding a formalization that would encode a general measure of interestingness and difficulty is still an open question. However, for many tasks of interest, such as tasks where human prior knowledge would be relevant, leveraging foundational models offers a particularly promising way to define such metrics for curriculum learning, as covered in Section 6. One notable work is that of Zhang et al. (2024), which investigates whether an LLM’s common sense can be a good measure of interestingness in open-ended environments.
Figure 11: (a) In the original DSG algorithm, a state is sampled uniformly at random (blue star) from the state-space S and the graph is pulled towards it via its nearest neighbor node (green). (b) The IM-DSG agent uses intrinsic motivation to identify a node to expand using an exploration value function $v _ { \mathrm { n o v e l t y } }$ . (c) Once the agent reaches the expansion node, it executes an exploration policy $\pi _ { \mathrm { n o v e l t y } }$ , and the most novel state in the resulting trajectory is identified as a target for a new skill.
# 4.9 Intrinsic Motivation
Intrinsic Motivations (IM) drive actions for their own sake, meaning that they are not in service of achieving an obvious, externally specified goal; instead, they are in service of augmenting knowledge and learning skills whose utility only becomes apparent later on (Oudeyer and Kaplan, 2007; Barto and Şimşek, 2005; Berlyne, 1965; Harlow, 1950). Computationally, this can be formalized using notions of information gain (Bellemare et al., 2016b)—an agent may take actions that result in new information about its environment, even if it requires forsaking extrinsic reward in the short term. IM underpins a developmental approach where an agent learns reusable skills autonomously, preparing it for various future challenges. For example, children, as intrinsically motivated biological agents, develop skills by engaging in activities that yield interesting, memorable outcomes (Gopnik et al., 2009); these skills improve in efficiency with repetition and can be strategically reproduced for specific goals (Barto et al., 2004). Such behaviours are well represented by options, with the intended outcomes encapsulated in the options’ subgoals (Singh et al., 2004). While we have discussed IM-based option discovery approaches like empowerment maximization, spectral methods, and bottleneck discovery, this section explores additional methods not covered by these categories.
One common intrinsic motivation signal is novelty, which decreases with repeated state visitations. For example, upon visiting state $s$ , an agent might receive
$$
r ^ { \mathrm { i n t } } ( s ) = N ( s ) ^ { - 1 / 2 } ,
$$
where $N ( s )$ is the visit count. This count-based bonus encourages exploration of infrequently seen states by making familiar states less rewarding (Auer et al., 2008; Strehl and Littman, 2008; Bellemare et al., 2016b; Lobel et al., 2023). An example of using this for option discovery is provided by the Relative Novelty algorithm (Şimşek and Barto, 2004). Here, a state $s$ is deemed a good subgoal if it leads to experience that is significantly more novel than the experience preceding it. Let $n ( s )$ be a novelty score (e.g., inverse visitation count from Equation 69), and let $w ^ { + }$ and $w ^ { - }$ be fixed-size windows of future and past states, respectively. The relative novelty at time $t$ is then computed as
$$
\mathrm { R N } ( s _ { t } ) = \frac { 1 } { | w ^ { + } | } \sum _ { s \in w ^ { + } } n ( s ) \Big / \left( \frac { 1 } { | w ^ { - } | } \sum _ { s \in w ^ { - } } n ( s ) \right) .
$$
States with high relative novelty are likely to be gateways to unexplored regions (e.g., doorways in Figure 6), and can be automatically selected as subgoals. An option is then created to reach each such subgoal from a broader initiation set, often by learning a dedicated policy using intrinsic reward. In this way, the agent transforms spikes in novelty into reusable skills without supervision or knowledge of external task rewards.
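Count-based novelty and the relative-novelty score can be sketched as follows. The names and the handling of unseen states (assigned maximal novelty) are our own assumptions:

```python
from collections import Counter

counts = Counter()  # visit counts N(s), keyed by (discretized) state

def novelty(s):
    """Count-based novelty, n(s) = N(s)^(-1/2); unseen states are
    assigned maximal novelty (an assumption, since N(s) = 0 is undefined)."""
    n = counts[s]
    return 1.0 if n == 0 else n ** -0.5

def relative_novelty(past, future):
    """RN(s_t): mean novelty over the future window w+ divided by the
    mean novelty over the past window w-."""
    fwd = sum(novelty(s) for s in future) / len(future)
    bwd = sum(novelty(s) for s in past) / len(past)
    return fwd / bwd
```

A relative-novelty score well above 1 at time $t$ flags $s_t$ as a candidate subgoal leading into unexplored territory.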
A related example is First Return then Explore (FRTE) (Ecoffet et al., 2020), which formalizes intrinsic motivation using count-based novelty over discretized “cells”, which are pre-specified state abstractions that serve as option subgoals (an example state abstraction is spatially downsampling image-based observations). Every time a new cell is encountered, the agent logs it in an archive. The policy is then conditioned to return to these cells to deepen exploration. FRTE selects target cells based on their inverse visitation count, returning to underexplored frontiers before taking random, exploratory actions. This loop results in a growing archive of reachable states, each effectively defining an option subgoal. A model-based approach has recently been proposed by Bagaria et al. (2025b), who extend the Deep Skill Graphs algorithm discussed in Section 4.3 to image-based observation spaces where a meaningful distance metric is not readily apparent. Their algorithm, Intrinsically Motivated Deep Skill Graphs (IM-DSG), learns a graph-based model of the world—nodes of the graph represent option subgoal regions (abstract states) and edges represent option policies (abstract actions). Figure 11 illustrates the main steps of the algorithm: first, the agent picks an existing node based on how much that node is expected to contribute to exploration (Figure 11(b)), then the agent plans with its abstract model using dynamic programming to determine the options it needs to execute to reach the sampled node, from where it executes a novelty-seeking exploration policy (Figure 11(c)). States visited by the novelty-seeking policy are candidates for creating a new node in the graph, similar to the Relative Novelty algorithm discussed earlier.
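The archive-and-return loop of FRTE can be sketched as follows. The cell function, class name, and inverse-count selection weights are illustrative assumptions, not the exact implementation:

```python
import random

class CellArchive:
    """FRTE-style archive: map observations to coarse 'cells' via a
    pre-specified state abstraction, log every cell encountered, and pick
    return targets with probability inversely related to their visit counts
    (underexplored frontiers first)."""
    def __init__(self, cell_fn):
        self.cell_fn = cell_fn  # pre-specified state abstraction
        self.visits = {}

    def observe(self, obs):
        cell = self.cell_fn(obs)
        self.visits[cell] = self.visits.get(cell, 0) + 1

    def select_target(self, rng=None):
        """Sample a cell to first return to, before exploring from it."""
        rng = rng or random.Random(0)
        cells = list(self.visits)
        weights = [1.0 / self.visits[c] for c in cells]
        return rng.choices(cells, weights=weights, k=1)[0]
```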
A different line of intrinsically motivated option discovery leverages structural state-space features to define internal subgoals. In particular, methods for factored MDPs (Boutilier et al., 2000) use the causal dependencies between state features to propose options. A factored MDP assumes the state can be described by a vector of state variables and that the transition dynamics factorize according to a dynamic Bayesian network (DBN). HEXQ (Hengst et al., 2002) was an early algorithm in this category. HEXQ automatically decomposes a factored MDP into subtasks by detecting exits—states where a change in one state variable causes a change in another variable (or termination). More formally, if $X$ and $Y$ are state variables, an exit can be identified where the conditional entropy $\mathcal { H } ( Y ^ { \prime } \mid X , a )$ increases sharply, indicating that a transition cannot be explained without accounting for additional variables. Each such transition is marked as an exit and treated as a subgoal. HEXQ then learns a hierarchy of options, each option driving the agent toward one of these subgoals—lower-level options correspond to frequently-changing variables, whereas higher-level options handle more slowly-changing aspects. In this way, HEXQ yields an option hierarchy spanning multiple levels of temporal abstraction.
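The conditional entropy $\mathcal{H}(Y' \mid X, a)$ at the heart of exit detection can be estimated from transition samples with a plug-in estimator; this is a sketch under our own naming, not HEXQ's full decomposition procedure:

```python
import math
from collections import defaultdict

def conditional_entropy(samples):
    """Plug-in estimate of H(Y' | X, a) from (x, a, y_next) samples.
    A sharp increase suggests Y' cannot be predicted from (X, a) alone,
    marking the transition as a candidate exit (subgoal)."""
    by_ctx = defaultdict(list)
    for x, a, y2 in samples:
        by_ctx[(x, a)].append(y2)
    total = len(samples)
    h = 0.0
    for ys in by_ctx.values():
        p_ctx = len(ys) / total  # empirical P(x, a)
        counts = defaultdict(int)
        for y in ys:
            counts[y] += 1
        # entropy of Y' within this (x, a) context
        h_ctx = -sum((c / len(ys)) * math.log2(c / len(ys))
                     for c in counts.values())
        h += p_ctx * h_ctx
    return h
```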
A similar approach is proposed by Jonsson and Barto (2006), who analyze the structure of a learned DBN to extract a causal abstraction hierarchy. For each action, they examine the DBN’s parent-child dependencies: if variable $X _ { i }$ influences the next-state value of $X _ { j }$ , i.e., $X _ { i } \in \operatorname { P a r e n t } ( X _ { j } ^ { \prime } \mid a )$ for some action $a$ , then $X _ { i }$ is said to causally affect $X _ { j }$ . These dependencies define a directed graph over state variables, which is decomposed into strongly connected components (SCCs). The SCCs are then topologically ordered to yield levels of abstraction—variables in earlier components are controlled first, while later components are conditioned on them. This is because earlier variables in the topological ordering tend to be those that causally influence, but are not influenced by, variables in later components. While their algorithm assumes access to the transition model, Vigorito and Barto (2008) propose a model-free algorithm that incrementally builds DBNs online through exploration. When new dependencies are detected—e.g., when variable $X _ { i }$ begins to influence $X _ { j }$ —a new option is instantiated to induce that dependency reliably. This learning process can be guided by structure learning techniques (e.g., maximizing Bayesian Information Criterion), or by identifying transitions that lead to salient changes in state abstractions. More recently, Nayyar and Srivastava (2024) cluster states based on the temporal-difference (TD) error:
$$
\delta _ { t } = r _ { t } + \gamma v ( s _ { t + 1 } ) - v ( s _ { t } ) ,
$$
which serves as a proxy for learning progress. Regions with high variance in $\delta _ { t }$ are recursively split, producing a symbolic abstraction over state variables and spawning new options targeted to regions where prediction error remains high. This method resembles HEXQ, but instead of focusing on the frequency of variable changes, it uses the TD-error incurred by candidate state abstractions.
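The TD-error signal and a variance-based splitting test can be sketched as below; the threshold rule is a simplified stand-in for the recursive splitting of Nayyar and Srivastava (2024):

```python
def td_errors(rewards, values, gamma):
    """delta_t = r_t + gamma * v(s_{t+1}) - v(s_t).
    values holds v(s_0), ..., v(s_T): one more entry than rewards."""
    return [r + gamma * values[t + 1] - values[t]
            for t, r in enumerate(rewards)]

def should_split(deltas, threshold):
    """Flag a state region for recursive splitting when the variance of its
    TD errors exceeds a threshold (a proxy for uneven learning progress)."""
    mean = sum(deltas) / len(deltas)
    var = sum((d - mean) ** 2 for d in deltas) / len(deltas)
    return var > threshold
```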
One downside of these factored approaches is that they assume the agent observes factored state variables, which requires significant domain knowledge. Bagaria et al. (2025a) address this limitation by developing an agent that learns to identify relevant features directly from image observations. When their agent encounters a particularly novel state, it uses counterfactual analysis to isolate which visual features are responsible for the novelty of that state. Then, the agent learns a classifier that focuses only on these salient features (Singh et al., 2004), ignoring other aspects of the image. This feature-specific classifier serves
as an abstract subgoal (Bagaria and Schaul, 2023) for option learning, enabling factored skill discovery without requiring pre-specified state decompositions.
# 4.9.1 Benefits and Opportunities
Exploration. The primary role of intrinsic motivation in RL is to facilitate exploration, especially when extrinsic rewards are sparse, delayed, or misleading. By rewarding novelty, surprise, or learning progress, IM helps the agent identify and prioritize skills that could result in mastery of the environment (Veeriah et al., 2018). The resulting behaviours are often more structured and directed than undirected exploration strategies like epsilon-greedy or softmax sampling.
Transfer. Options discovered through intrinsically motivated factor-based methods are transferable because they target changes in individual state variables or abstract subspaces (Sutton et al., 2023). For example, in HEXQ, an option that changes a frequently changing variable—like an agent’s position—can be reused across many contexts where that variable matters, regardless of the values of other variables. Similarly, in methods based on causal abstraction (Jonsson and Barto, 2006; Vigorito and Barto, 2008), options are constructed to affect only a specific part of the environment while assuming other parts are stable or independently controllable. This modularity reduces interference between options because each option specializes in a different part of the environment, which additionally encourages compositionality in behaviour space. As a result, once such an option is learned, it can be reused across multiple tasks or contexts without retraining, greatly improving sample efficiency.
# Opportunities for Research.
• Generalize factored approaches to large observation spaces. Factored approaches like HEXQ assume access to explicit state variables, limiting their applicability to high-dimensional observation spaces such as images or sensor data. Recent works (Bagaria et al., 2025a; Higgins et al., 2016; Kim and Mnih, 2018) demonstrate some promising initial results, but significant challenges remain in automatically discovering meaningful factors without domain knowledge. Future research should focus on developing methods that can identify relevant features and their causal dependencies in continuous, high-dimensional spaces while maintaining the transferability and composability benefits of factored approaches.
• Improved estimates of novelty. While count-based novelty measures work well in discrete spaces, they struggle in continuous environments where exact state revisitation is unlikely. Recent advances (Bellemare et al., 2016b; Burda et al., 2019; Lobel et al., 2023; Guo et al., 2022) provide neural network-based alternatives, but fundamental challenges persist in distinguishing meaningful novelty from environmental stochasticity (sometimes colloquially called the “noisy TV” problem). Future work should develop more robust novelty estimation methods that can maintain exploration incentives across different timescales, handle function approximation errors gracefully, and integrate structural priors about environment dynamics to focus on causally relevant state changes.
• Connections to other discovery techniques. Intrinsic motivation approaches have developed largely independently from other option discovery methods like graph clustering, empowerment, and spectral decomposition techniques. However, recent theoretical work suggests that some of these approaches may be unified under information-theoretic frameworks (Achiam et al., 2018). Establishing formal connections between intrinsic motivation and other discovery paradigms (for example, (Machado et al., 2020)) could enable hybrid approaches that leverage the complementary strengths of different methods.
# 5. Discovery through Offline Datasets
Offline RL (or batch RL) aims to learn policies from pre-collected datasets, avoiding active data collection. This enables scalable deployment in domains such as robotics, autonomous driving, education, and healthcare (Levine et al., 2020), where interaction data can be difficult to obtain. Similarly, offline skill discovery leverages these datasets to extract temporally abstract behaviours that can later serve as high-level primitives (in either offline or online RL) to accelerate learning.
In this section, to facilitate the discovery of skills, we assume access to an offline dataset $\mathcal { D } = \{ \tau _ { i } \} _ { i = 1 } ^ { N }$ where $\tau _ { i } = ( s _ { t } ^ { i } , a _ { t } ^ { i } , r _ { t } ^ { i } ) _ { t = 1 } ^ { T }$ represents a trajectory of interactions with the environment. The dataset can be populated with expert demonstrations or acquired through arbitrary policies. It is also possible that not all components are present in the dataset; indeed, numerous works do not assume access to rewards (most methods in Section 5.1) or actions (Kim et al., 2019).
A closely related line of work, which we do not cover in detail, focuses on learning useful representations from offline datasets (Ma et al., 2020; Touati et al., 2023; Farebrother et al., 2023; Chen et al., 2023; Park et al., 2024b; Tirinzoni et al., 2025). A key idea in these methods involves learning representations that decouple environment dynamics from specific task rewards. This is often done by modeling discounted future state occupancies or their features (Dayan and Hinton, 1993; Barreto et al., 2017), which then allows for rapid adaptation to new reward functions or goal specifications, typically by linear combination of the learned representations based on the new reward.
# 5.1 Variational Inference of Skill Latents
A prominent class of methods in offline skill discovery focuses on the reconstruction loss of pre-collected trajectories $\tau$ , typically optimized through likelihood maximization of the observed data. In these approaches, skills are defined as the latent variables within the reconstruction loss. The methods rely on unlabeled experiences, $\tau = ( s _ { t } , a _ { t } ) _ { t = 1 } ^ { T }$ —that is, data collected without explicit reward feedback—and in some cases even exclude actions (Kim et al., 2019). This is often referred to as “unsupervised skill discovery” (Eysenbach et al., 2019). We model each trajectory $\tau$ with a latent skill sequence:
$$
\zeta = ( z _ { t } , b _ { t } ) _ { t = 1 } ^ { T } , \qquad z _ { t } \in \mathbb { R } ^ { d } , \ b _ { t } \in \{ 0 , 1 \} ,
$$
where $z _ { t }$ encodes the skill active at time $t$ and $b _ { t }$ is a boundary signal that indicates when a skill starts or ends, i.e. the analogue of an option-termination signal.6
The equation below states the maximum-likelihood objective: the parameters $\phi$ are adjusted to maximize the average log-likelihood that the model assigns to the trajectories observed in the dataset:
$$
J ( \phi ) = \mathbb { E } _ { \tau \sim \mathcal { D } } \big [ \log p _ { \phi } ( \tau ) \big ] .
$$
Here the term $\begin{array} { r } { p _ { \phi } ( \tau ) = \int p _ { \phi } ( \tau , \zeta ) d \zeta } \end{array}$ has already marginalized the latent skill sequence $\zeta$ , so maximizing $J ( \phi )$ encourages the model to explain the observed trajectories without fixing any particular skills in advance. Because the integral over $\zeta$ is usually intractable, we replace $\log p _ { \phi } ( \tau )$ by its evidence lower bound (ELBO), obtained by introducing an approximate posterior $q _ { \phi } ( \zeta \mid \tau )$ and applying Jensen’s inequality:
$$
\begin{array} { r l } & { \log p _ { \phi } ( \tau ) = \log \displaystyle \int q _ { \phi } ( \zeta \mid \tau ) \frac { p _ { \phi } ( \tau , \zeta ) } { q _ { \phi } ( \zeta \mid \tau ) } d \zeta } \\ & { ~ \geq \mathbb { E } _ { q _ { \phi } ( \zeta \mid \tau ) } \bigl [ \log p _ { \phi } ( \tau , \zeta ) - \log q _ { \phi } ( \zeta \mid \tau ) \bigr ] } \\ & { ~ = \mathbb { E } _ { q _ { \phi } } \bigl [ \log p _ { \phi } ( \tau \mid \zeta ) \bigr ] - D _ { \mathrm { K L } } \bigl ( q _ { \phi } ( \zeta \mid \tau ) \parallel p _ { \phi } ( \zeta ) \bigr ) . } \end{array}
$$
Averaging over $\tau \sim \mathcal { D }$ and introducing the $\beta$ -weight7 (Higgins et al., 2016), which balances the reconstruction and regularization terms, yields the training objective:
$$
\begin{array} { r } { J _ { \mathrm { E L B O } } ( \phi ) = \underbrace { \mathbb { E } _ { \tau \sim \mathcal { D } , \zeta \sim q _ { \phi } ( \zeta \vert \tau ) } \left[ \log p _ { \phi } ( \tau \mid \zeta ) \right] } _ { \mathrm { r e c o n s t r u c t i o n } } - \beta \mathbb { E } _ { \tau \sim \mathcal { D } } \underbrace { D _ { \mathrm { K L } } \left( q _ { \phi } ( \zeta \vert \tau ) \Vert p _ { \phi } ( \zeta ) \right) } _ { \mathrm { r e g u l a r i z a t i o n } } , } \end{array}
$$
which is maximized with respect to $\phi$ . The first term obliges the model to reconstruct each trajectory by segmenting it at boundaries $b _ { t }$ and encoding each segment with a latent skill vector $z _ { t }$ ; the second term regularizes those encodings toward the prior, encouraging the emergence of a compact skill space. In practice, three distinct models are employed whose parameters are jointly denoted by $\phi$ :
• Prior $p _ { \phi } ( \zeta )$ : defines a prior distribution over latent skill sequences. A common choice is a fixed, factorized prior (e.g., unit Gaussian for $z _ { t }$ and Bernoulli for $b _ { t }$ ), but the prior can instead be endowed with learnable parameters and conditioned on the current state, yielding $p _ { \phi } ( \zeta \mid s )$ (Ajay et al., 2021; Nam et al., 2022). This can also serve as a prior on the policy over skills, $\mu _ { \kappa } ( z \mid s )$ ;
• Decoder $p _ { \phi } ( \tau \mid \zeta )$ : models the probability of a trajectory conditioned on a given skill sequence. Importantly, the decoder can also be formulated as a skill-conditioned policy, $\pi _ { \boldsymbol { \theta } } ( \boldsymbol { a } \mid \boldsymbol { s } , \boldsymbol { \zeta } )$ , that reconstructs only the actions in the trajectory, as done in several works (Kipf et al., 2019; Ajay et al., 2021; Pertsch et al., 2021; Nam et al., 2022). To do so, we need to adjust Equation 75 accordingly:
$$
J _ { \mathrm { E L B O } } ( \phi , \theta ) = \mathbb { E } _ { \tau \sim \mathcal { D } , \zeta \sim q _ { \phi } ( \zeta | \tau ) } \Big [ \sum _ { t = 1 } ^ { T } \log \pi _ { \theta } \big ( a _ { t } \mid s _ { t } , \zeta \big ) \Big ] - \beta \mathbb { E } _ { \tau \sim \mathcal { D } } \underbrace { D _ { \mathrm { K L } } \big ( q _ { \phi } ( \zeta \mid \tau ) \| p _ { \phi } ( \zeta ) \big ) } _ { \mathrm { r e g u l a r i z a t i o n } } .
$$
• Encoder $q _ { \phi } ( \zeta \mid \tau )$ : given an observed trajectory, returns a distribution over the skill sequence that likely produced it.
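To make the objective concrete, the following is a minimal numpy sketch of a one-sample Monte Carlo estimate of the $\beta$-ELBO for a single trajectory. For simplicity it assumes a single Gaussian skill code $z$ for the whole trajectory (no boundary variables $b_t$), a unit-Gaussian prior, and a Gaussian skill-conditioned policy as the decoder; the `encoder` and `decoder` callables are hypothetical stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mean, std):
    # Log density of a diagonal Gaussian, summed over dimensions.
    return float(np.sum(-0.5 * np.log(2 * np.pi * std**2)
                        - (x - mean)**2 / (2 * std**2)))

def kl_to_unit_gaussian(mu, std):
    # KL( N(mu, diag(std^2)) || N(0, I) ), summed over dimensions.
    return float(np.sum(0.5 * (mu**2 + std**2 - 1.0) - np.log(std)))

def beta_elbo(states, actions, encoder, decoder, beta=0.1):
    """One-sample Monte Carlo estimate of the beta-ELBO for one trajectory.

    encoder: (states, actions) -> (mu, std) of q(z | tau); a single skill
             code z is assumed for the whole trajectory here.
    decoder: (s, z) -> (mean, std) of the skill-conditioned policy pi(a|s,z).
    """
    mu, std = encoder(states, actions)
    z = mu + std * rng.standard_normal(mu.shape)  # reparameterized z ~ q(z|tau)
    recon = sum(gaussian_logpdf(a, *decoder(s, z))
                for s, a in zip(states, actions))
    return recon - beta * kl_to_unit_gaussian(mu, std)
```

In a full implementation the two terms would be backpropagated through the encoder and decoder parameters jointly; the sketch only shows how the reconstruction and regularization terms combine.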
Skill discovery algorithms differ in both the optimization procedure adopted for Equation 75 and the specific parameterization of latent variables. The largest group, variational-autoencoder (VAE)-based methods (Kingma and Welling, 2014), directly maximize the ELBO in Equation 75. Alternative strategies include the Expectation-Gradient framework (Fox et al., 2017; Krishnan et al., 2017) and adversarial approaches inspired by generative adversarial networks (Sharma et al., 2018), each offering distinct bias-variance trade-offs and inductive biases for learning reusable skill spaces.
The learned skills naturally support a hierarchy. In such works, a low-level controller, $\pi _ { \theta } ( a _ { t } \mid s _ { t } , z _ { t } )$ , is typically trained offline via behavioural cloning to execute any given skill, while a high-level policy, $\mu _ { \kappa } ( \boldsymbol { z } _ { t } \mid s _ { t } )$ , is subsequently optimized, either online (Pertsch et al., 2021; Nam et al., 2022; Salter et al., 2022) or with offline RL (Ajay et al., 2021), thereby accelerating the learning of efficient policies. The skills can also augment the primitive action space, expanding the agent’s control repertoire (Fox et al., 2017; Kipf et al., 2019; Jiang et al., 2022), or be transformed into intrinsic reward signals to enhance long-term credit assignment (Liu et al., 2023b).
Beyond pure likelihood maximization, it is also common to add a compression regularizer grounded in the minimum description length (MDL) principle (Rissanen, 1978). MDL prefers the model that minimizes the total number of bits needed to transmit (i) the model parameters and (ii) the data encoded through that model. Viewing the latent variables as skills and boundaries (Equation 72), the model parameters are the decoder (and the prior, if parameterized), and the data are the offline trajectories; hence, a concise skill set shortens the overall description length.
The bits-back coding argument (Hinton and Zemel, 1993; Honkela and Valpola, 2004; Zhang et al., 2021b) shows that maximizing the ELBO (Equation 75) approximately minimizes the description length, but with an ill-chosen prior $p ( \zeta )$ , the optimum can collapse to a degenerate representation (e.g., a single skill encoding that simply mirrors the observations). To avoid this, LOVE (Jiang et al., 2022) augments the ELBO with an MDL-inspired information-cost term that explicitly penalizes skills increasing the expected code length of transmitting trajectories, yielding a representation that is both informative (high ELBO) and economical (low description length):
$$
\operatorname* { m i n } _ { \phi } \ L _ { \mathrm { D L } } ( \phi ) \quad \mathrm { s . t . } \quad J _ { \mathrm { E L B O } } ( \phi ) \ \geq \ C ,
$$
$$
L _ { \mathrm { { D L } } } ( \phi ) \ = \mathbb { E } _ { \tau \sim D , \{ b _ { t } , z _ { t } \} \sim q _ { \phi } ( \cdot | \tau ) } \Big [ - \ \sum _ { t = 1 } ^ { T } b _ { t } \log p _ { z } ^ { * } \big ( z _ { t } ; \phi \big ) \Big ] ,
$$
where $b _ { t } = 1$ indicates the start of a new skill at time $t$ , and $C$ is a constant. The optimal prior on $z$ , $p _ { z } ^ { * }$ , that minimizes the expected description length is defined by:
$$
p _ { z } ^ { * } ( z ; \phi ) \ = \frac { \mathbb { E } _ { \tau \sim D , \{ b _ { t } , z _ { t } \} \sim q _ { \phi } ( \cdot \vert \tau ) } \Big [ \sum _ { t = 1 } ^ { T } b _ { t } \delta \big ( z _ { t } = z \big ) \Big ] } { \mathbb { E } _ { \tau \sim D , \{ b _ { t } , z _ { t } \} \sim q _ { \phi } ( \cdot \vert \tau ) } \Big [ \sum _ { t = 1 } ^ { T } b _ { t } \Big ] } ,
$$
with $\delta ( \cdot )$ denoting the indicator function. Intuitively, $L _ { \mathrm { D L } }$ penalizes having too many skill boundaries and distinct skill choices, driving the method toward a concise skill decomposition; longer skills encompassing common structure are generally favored, avoiding the degenerate solution where each skill represents a single action. Salter et al. (2022) instead leverage the concept of bottleneck options by introducing a predictability objective, encouraging option-level transitions to be predictable. The authors show that maximizing this predictability reduces the conditional entropy and thus the optimal code length, making the objective equivalent to applying the MDL principle.
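The optimal prior and the resulting code length can be sketched concretely for discrete skill ids. The toy function below, an illustration rather than LOVE's actual implementation, computes $p_z^*$ as the boundary-weighted frequency of each skill and then the expected description length $-\sum_t b_t \log p_z^*(z_t)$ averaged over trajectories:

```python
import numpy as np
from collections import Counter

def description_length(trajectories):
    """Code length L_DL under the optimal prior p_z^* (boundary-weighted
    skill frequencies), for discrete skill ids.

    trajectories: list of (b, z) pairs, where b[t] in {0, 1} marks a skill
    start and z[t] is the skill id chosen there (read only where b[t] == 1).
    """
    # Optimal prior: relative frequency of each skill at boundary positions.
    counts = Counter(z[t] for b, z in trajectories
                     for t in range(len(b)) if b[t] == 1)
    total = sum(counts.values())
    p_star = {k: v / total for k, v in counts.items()}
    # Expected code length per trajectory: -sum_t b_t log p*(z_t).
    costs = [-sum(np.log(p_star[z[t]]) for t in range(len(b)) if b[t] == 1)
             for b, z in trajectories]
    return float(np.mean(costs)), p_star
```

Note how fewer boundaries and a smaller set of frequently reused skills both shrink the returned cost, which is exactly the pressure toward a concise skill decomposition.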
Vlastelica et al. (2023) cast offline skill discovery as empowerment maximization, presented in Section 4.4, under an imitation constraint. To ensure that each skill remains faithful to the offline demonstrations, they constrain the divergence between the induced state occupancy and the state occupancy $d _ { E }$ from a skill-independent expert dataset, resulting in a constrained optimization problem:
$$
\operatorname* { m a x } _ { \{ \pi _ { z } \} , q _ { \phi } } \mathbb { E } _ { z \sim p ( z ) , s \sim d _ { \pi } } [ \log q _ { \phi } ( z \mid s ) ] \quad \mathrm { s . t . } \quad D _ { \mathrm { K L } } ( d _ { \pi } \parallel d _ { E } ) \leq \varepsilon , \ \forall z .
$$
Here, $d _ { \pi }$ denotes the state-occupancy measure of the skill-conditioned policy, estimated in the offline setting via density-based learning. The term $q _ { \phi }$ represents a skill discriminator that tightens a variational lower bound on the mutual information between skills and states. It simultaneously diversifies behaviours by maximizing the lower bound on the mutual information between states and skills, and regularizes them by penalizing departures from the expert state distribution $d _ { E }$ .
# 5.1.1 Benefits and Opportunities
Credit Assignment. By extracting hierarchical structure from offline datasets, agents can break complex trajectories into more manageable subgoals. This segmentation makes it easier to understand why certain results occur, as it allows each outcome—such as achieving a subgoal—to be more directly linked to the specific actions and conditions that produced it. In other words, by focusing on a sequence of subgoals rather than the full sequence of primitive actions, the learning algorithm can more easily attribute success or failure to specific decisions or events. For example, Kipf et al. (2019) tackle credit assignment in maze-like environments with delayed feedback. Their method infers segment boundaries $q ( b _ { i } \mid s , a )$ , with $b _ { i } \in [ 1 , T + 1 ]$ functioning similarly to option termination functions, and encodes each segment using $q ( z _ { i } \mid s , a )$ as latent skill (subgoal) descriptors. This segmentation captures subgoal structure, facilitating effective credit assignment across subgoals when applying the skills in sparse-reward settings. Similarly, Kim et al. (2019) improve credit assignment in goal-oriented navigation tasks by decomposing action-free trajectories into subsequences by inferring skill descriptors $z _ { t }$ and binary termination signals $b _ { t } \in \{ 0 , 1 \}$ .
Transfer. By discovering skills from offline datasets, agents develop a foundational set of versatile competencies. These competencies can then be transferred to new tasks with minimal adjustment. Pertsch et al. (2021) show promising results on transferring skills obtained in the offline dataset to more complex simulated robotic tasks unseen in the dataset (e.g., maze navigation with larger maps in evaluation). Similarly, Jiang et al. (2022) and Salter et al. (2022) show that by optimizing a compression objective, in addition to the reconstruction one, the discovered skills help transfer across multiple tasks. Nam et al. (2022) demonstrate that by meta-training a high-level policy, $\pi _ { \boldsymbol { \theta } } ( z _ { t } \mid s _ { t } , \boldsymbol { e } )$ , where $e$ is a task encoding, and executing a low-level policy, $\pi _ { \theta } ( a _ { t } \mid s _ { t } , z _ { t } )$ , which is learned via behavioural cloning, the agent can solve a wide range of new tasks in a meta-RL setting.
Exploration. When reusing the offline discovered skills for online interaction, this can reduce the difficulty of exploration since the agent can quickly apply well-tested, prelearned behaviours rather than learning them through trial-and-error in real-time. Fox et al. (2017) show promising exploration results in a simple four-room domain by augmenting the action space with discovered parameterized options. Salter et al. (2022) show that the learned temporally compressed bottleneck options are beneficial for exploration in maze-like environments with delayed rewards.
Avoiding Distributional Shift. In offline RL, distributional shift describes the discrepancy between the action distribution present in the training dataset and the actions chosen by the policy during evaluation or deployment. This occurs when the policy selects actions that are rarely or never observed in the dataset, leading to unreliable value estimates. Ajay et al. (2021) leverage offline skill discovery to mitigate this issue. Their approach encodes short trajectories (e.g., every $K$ steps) from the dataset into a skill descriptor $z$ . By maximizing the log-likelihood of actions in trajectories $\tau$ , conditioned on states $s _ { t }$ and skill descriptors $z$ , the method captures recurring behaviours present in the data. The offline dataset can then be enhanced using $z$ , and a high-level policy, $\pi _ { \boldsymbol { \theta } } ( z \mid s )$ , can be derived by off-the-shelf offline RL algorithms. The authors show that such a temporal structure reduces compounding errors for extrapolating out-of-distribution actions in offline RL.
# Opportunities for Research.
• Optimization challenges. Evident in some studies (Jiang et al., 2022), optimization challenges can lead to degraded skill quality if the learning dynamics are not carefully managed. Additionally, the under-utilization of reward signals in existing datasets creates an opportunity to further refine learned skills, and incorporating offline RL methods—rather than relying solely on reconstruction-based approaches—into HRL may unlock greater performance gains, as Hu and Leung (2023) provide provably positive results on sample efficiency.
• Broader scope of test environments. Many methods in this field primarily validate their concepts on simulated robotic navigation tasks, which typically involve deterministic transitions and rewards (Gao et al., 2024), and often favor specific inductive biases. A natural extension would be scaling this paradigm to real-world, image-based tasks, or other practical applications with different properties.
# 5.2 Hindsight Subgoal Relabeling
In Section 5.1, we discussed the methods that automatically infer the skill descriptors, usually characterized by latent variables. In this section, we explore methods that identify and relabel subgoals within an offline dataset, effectively leveraging existing transitions to learn how to achieve subgoals (Kaelbling, 1993b). Offline experiences offer valuable insights into identifying subgoals, which can be viewed as milestones or waypoints for accomplishing
a task (Gupta et al., 2019; Park et al., 2024a), or abstracting bottleneck states to make a good partition of the state space (Paul et al., 2019). This is conceptually similar to hindsight experience replay (Andrychowicz et al., 2017), discussed in Section 4.8, which relabels the final or intermediate states reached in a trajectory as if they were the intended goals.
In an exemplar work by Paul et al. (2019), a reward-free trajectory dataset, $\mathcal { D } =$ $\{ ( s _ { 1 } ^ { ( i ) } , a _ { 1 } ^ { ( i ) } , \ldots , s _ { n _ { i } } ^ { ( i ) } ) \} _ { i = 1 } ^ { n _ { d } }$ , is segmented into an ordered list of $n _ { g }$ disjoint partitions, $G =$ $\{ 1 , \ldots , n _ { g } \}$ , that serve as subgoals.
Initial labeling. Each trajectory is equipartitioned by assigning consecutive subgoal indices:
$$
\begin{array} { r l } { g _ { t } ^ { ( i ) } = j } & { { } \mathrm { i f f } \quad \left\lfloor \frac { ( j - 1 ) n _ { i } } { n _ { g } } \right\rfloor < t \leq \left\lfloor \frac { j n _ { i } } { n _ { g } } \right\rfloor , \ j \in G . } \end{array}
$$
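The equipartition rule above is easy to state in code. The following sketch assigns the subgoal index $j$ to each 1-indexed step $t$ of a trajectory of length $n_i$ (function name is ours, for illustration):

```python
def equipartition_labels(n_i, n_g):
    """Assign subgoal index j to step t (1-indexed) iff
    floor((j-1) * n_i / n_g) < t <= floor(j * n_i / n_g)."""
    labels = []
    for t in range(1, n_i + 1):
        j = next(j for j in range(1, n_g + 1)
                 if ((j - 1) * n_i) // n_g < t <= (j * n_i) // n_g)
        labels.append(j)
    return labels
```

For example, a 6-step trajectory with 3 subgoals is split into three equal segments; when $n_i$ is not divisible by $n_g$, the floor makes earlier segments slightly shorter.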
Iterative refinement. Alternate the following two steps until the label change falls below a threshold. At convergence, this yields a classifier $\mu _ { \kappa }$ that both partitions the state space and respects the required ordering; such a $\mu _ { \kappa } ( g \mid s )$ indicates whether a state is a milestone or bottleneck.
• Learning step: fit a classifier $\mu _ { \kappa } ( g \mid s )$ by cross-entropy on the current labels:
$$
L ( \kappa ) = \mathbb { E } _ { ( s , g ) \sim D } \bigl [ - \log \mu _ { \kappa } ( g \mid s ) \bigr ] .
$$
• Inference step: enforce the trajectory order $1 \prec 2 \prec \ldots \prec n _ { g }$ with Dynamic Time Warping$^8$ (Müller, 2007) over the posterior sequence $\mu _ { \kappa } ( \cdot \mid s _ { t } )$ .
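The inference step can be sketched as a dynamic program over monotone label paths: the sketch below, a simplified stand-in for the DTW alignment, finds the most probable non-decreasing sequence of subgoal labels that starts at $1$ and ends at $n_g$, given a matrix of per-step log-posteriors.

```python
import numpy as np

def ordered_labels(log_post):
    """Most probable non-decreasing label path through a (T, n_g) matrix of
    log-posteriors log mu(g | s_t), starting at subgoal 1 and ending at n_g
    (a DTW-style monotone alignment)."""
    T, n_g = log_post.shape
    dp = np.full((T, n_g), -np.inf)
    back = np.zeros((T, n_g), dtype=int)
    dp[0, 0] = log_post[0, 0]
    for t in range(1, T):
        for g in range(n_g):
            stay = dp[t - 1, g]                               # keep subgoal g
            advance = dp[t - 1, g - 1] if g > 0 else -np.inf  # move to g from g-1
            back[t, g] = g if stay >= advance else g - 1
            dp[t, g] = max(stay, advance) + log_post[t, g]
    path = [n_g - 1]                       # must finish at the last subgoal
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [g + 1 for g in reversed(path)]  # 1-indexed subgoal labels
```

Projecting the raw $\arg\max$ labels onto this monotone path is what enforces the ordering constraint during the inference step.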
Potential function and intrinsic reward. The most probable class defines a potential $\Phi _ { \kappa } ( s ) = \arg \operatorname* { m a x } _ { g } \mu _ { \kappa } ( g \mid s )$ , and an intrinsic reward $r ^ { \prime }$ can be defined as:
$$
r ^ { \prime } ( s , a , s ^ { \prime } ) = \gamma \Phi _ { \kappa } ( s ^ { \prime } ) - \Phi _ { \kappa } ( s ) ,
$$
where $\gamma$ is the discount factor of the MDP.
Learning schedule. The policy $\pi _ { \theta }$ is first initialized through behaviour cloning, after which reinforcement learning proceeds with the augmented reward $\boldsymbol { r } + \boldsymbol { r } ^ { \prime }$ . This phase exploits subgoal guidance to supply dense progress signals without additional expert interaction and still preserves the original optimum.
As another example, Relay Policy Learning (RPL) (Gupta et al., 2019) relabels demonstration trajectories $\tau = ( s _ { 0 } , a _ { 0 } , \ldots , s _ { T } )$ with relay subgoals, producing two goal-augmented datasets:
$$
\begin{array} { r l } & { \mathcal { D } _ { \ell } = \left\{ \begin{array} { l l } { \big ( s _ { t } , a _ { t } , g _ { \ell } \big ) \big | \ g _ { \ell } = s _ { t + w } , \ 0 \leq t < T , \ 1 \leq w \leq W _ { \ell } , \ t + w \leq T } \end{array} \right\} , } \\ & { \mathcal { D } _ { h } = \left\{ \begin{array} { l l } { \big ( s _ { t } , g _ { \ell } , g _ { h } \big ) \big | \ g _ { \ell } = s _ { t + \operatorname* { m i n } ( W _ { \ell } , w ) } , \ g _ { h } = s _ { t + w } , \ 0 \leq t < T , \ 1 \leq w \leq W _ { h } , \ t + w \leq T } \end{array} \right\} } \end{array}
$$
$\mathcal { D } _ { \ell }$ offers short-horizon examples (subgoal horizon $\le W _ { \ell }$ ) for training a low-level controller $\pi _ { \theta } ( a \mid s , g _ { \ell } )$ to reach nearby states, whereas $\mathcal { D } _ { h }$ pairs each long-horizon target $g _ { h }$ (up to
$W _ { h }$ steps away) with a feasible intermediate subgoal $g _ { \ell }$ , enabling hierarchical planning by a high-level goal-setter $\mu _ { \kappa } ( g _ { \ell } \mid s , g _ { h } )$ . The imitation objective is:
$$
\operatorname* { m a x } _ { \kappa , \theta } \ \mathbb { E } _ { ( s , a , g _ { \ell } ) \sim \mathcal { D } _ { \ell } } \big [ \log \pi _ { \theta } ( a \mid s , g _ { \ell } ) \big ] \ + \ \mathbb { E } _ { ( s , g _ { \ell } ^ { \prime } , g _ { h } ) \sim \mathcal { D } _ { h } } \big [ \log \mu _ { \kappa } ( g _ { \ell } ^ { \prime } \mid s , g _ { h } ) \big ] ,
$$
with $W _ { \ell } = 3 0$ and $W _ { h } = 2 6 0$ in all experiments of RPL. During execution, every $W _ { \ell }$ steps the high-level policy samples a new subgoal $g _ { \ell } \sim \mu _ { \kappa } ( \cdot \mid s , g _ { h } )$ and the low-level controller tracks it step-by-step until the next subgoal is issued.
# 5.2.1 Benefits and Opportunities
Credit Assignment. Subgoal relabeling decomposes complex tasks into manageable intermediate objectives by highlighting which actions contribute to reaching the final goal.
Paul et al. (2019) address credit assignment by constructing an intrinsic reward based on a subgoal policy $\mu _ { \kappa } ( \boldsymbol { g } \mid s )$ , which identifies a state’s progress toward a final goal. This classifier is trained via an EM-style procedure that enforces an ordering constraint over subgoal indices, ensuring that states later in a demonstration receive higher labels. The resulting potential function, $\Phi _ { \kappa } ( s ) = \arg \operatorname* { m a x } _ { g } \mu _ { \kappa } ( g \mid s )$ , induces a shaped reward (Equation 83) which provides dense feedback aligned with behavioural progress. This intrinsic signal facilitates temporal credit assignment by rewarding transitions that advance the agent through the learned subgoal structure, even when extrinsic rewards are sparse or delayed.
Alternatively, RPL (Gupta et al., 2019) addresses the credit assignment challenge from sparse, delayed rewards by relabeling demonstrations with overlapping sliding-window subgoals. It trains a goal-conditioned low-level policy to reach short-horizon targets, while a highlevel policy selects subgoals. This hierarchical structure ensures that each low-level episode ends with an intrinsic success signal. As a result, external rewards propagate after at most one window. This transforms the long-horizon problem into a sequence of locally supervised updates, enabling faster and more stable credit assignment than flat or single-level baselines.
Similarly, Park et al. (2024a) relabel subgoals as the state $\boldsymbol { s } _ { t + k }$ that lies exactly $k$ steps ahead of the current state $s _ { t }$ . A high-level policy, $\mu _ { \kappa } ( s _ { t + k } \mid s _ { t } , g )$ , proposes such waypoints conditioned on the ultimate goal $g$ , while a low-level policy, $\pi _ { \boldsymbol { \theta } } { \big ( } a _ { t } \ { \big | } \ s _ { t } , s _ { t + k } { \big ) }$ , outputs primitive actions that move the agent toward the subgoal. Both policies are optimized via a shared goal-conditioned value function $V _ { \psi }$ : $\mu _ { \kappa }$ maximizes $V _ { \psi } ( s _ { t + k } , g )$ , whereas $\pi \theta$ maximizes $V _ { \psi } ( s _ { t + 1 } , s _ { t + k } )$ . Specifically, $\mu _ { \kappa }$ is trained to choose subgoals $s _ { t + k }$ that maximize $V _ { \psi } ( s _ { t + k } , g )$ , while $\pi _ { \theta }$ is trained to select actions that make the next state $s _ { t + 1 }$ have high value $V _ { \psi } ( s _ { t + 1 } , s _ { t + k } )$ relative to the current subgoal. Because different subgoals induce much larger variations in $V _ { \psi }$ than individual actions, the high-level receives a more reliable learning signal, and since $\pi _ { \boldsymbol { \theta } }$ queries $V _ { \psi }$ only for nearby states where estimates are more accurate, the entire hierarchy is less susceptible to noise and approximation errors in the value function, resulting in a more robust policy and credit assignment.
Interpretability. Identifying subgoals within the offline dataset can provide insights into understanding the decision-making process. For example, Paul et al. (2019) present visualizations of the state space in robotic navigation tasks, such as AntMaze and AntTarget, demonstrating that the state space can be structurally partitioned using discovered subgoals. The structural decomposition is intuitively meaningful to humans, facilitating better understanding and verification.
Discovering Temporal Structure: An Overview of Hierarchical RL
Opportunities for Research.
• Enhancing interpretability of decision making with interpretable subgoals. Although positive empirical result is shown in (Paul et al., 2019), current offline relabeling schemes select subgoals with limited transparency into why particular states are chosen or how they steer the learned policy. Embedding interpretability or alignment objectives, such as attributing subgoal selection to human-understandable criteria, would not only clarify the decision rationale, but also foster trust and diagnosability.
• Scaling to complex observations by identifying latent subgoals. Researchers can extend offline subgoal relabeling to environments with high-dimensional inputs, such as images, language, or tactile data, with Park et al. (2024a) as an example. To do so, methods could identify subgoal representations in some latent space that pinpoint meaningful milestones within these spaces. By focusing on compact embeddings, relabeling can remain effective even when raw observations are noisy, partial, or multimodal.
# 6. Discovery with Foundation Models
Agents that learn skills from scratch through environment interactions are directly exposed to the inherent complexities of the domains in which they operate. Such agents must learn from their stream of experience how to organize the collected data into meaningful chunks in order to derive a useful set of skills. To mitigate these challenges, we can instead build on prior knowledge contained in large pretrained models to guide the discovery of useful skills in complex environments. The starting assumption for such methods is that pretrained models contain knowledge about the environment of interest. Although this assumption may not always hold, it is likely applicable to many domains of interest and will grow as the training paradigm of LLMs expands in scope.
Simultaneously, an interesting feature of LLM-based methods is that, as these large models are based on human priors and are instantiated through natural language, the set of behaviours will generally be more interpretable. In fact, leveraging language, without using LLMs, has produced a prolific line of work (Shu et al., 2018; Bahdanau et al., 2018; Fu et al., 2019; Jiang et al., 2019; Colas et al., 2020a,b; Akakzia et al., 2021).9 These works underscore, amongst other features, the compositional nature of language. This quality makes it a particularly useful space to represent a variety of goals.
It is therefore natural to consider LLMs for HRL as they provide both useful inductive biases from pre-training on human data and a meaningful abstraction space through natural language. This connection is reinforced by the fact that LLMs, by their very nature, can represent goal-conditioned policies, where goals are specified linguistically.
As such, many recent works leverage LLMs to decompose tasks into subtasks (Pignatelli et al., 2024; Yang et al., 2024; Wang et al., 2024c), an operation done according to their pre-existing understanding of the task’s underlying structure. Another perspective is to use LLMs as a measure of interestingness to propose a curriculum of goals and tasks in open-ended domains (Colas et al., 2023; Zhang et al., 2024; Faldor et al., 2025; Wang et al., 2024b; Zala et al., 2024). The key to converting an LLM’s latent knowledge into a functional agent lies in efficiently learning the options required to execute the decomposed goals.
In this section, we investigate four families of methods that propose solutions to this problem. The first family consists of methods using embeddings from large pretrained models as representations from which option rewards are defined. Next, we present methods that use large pretrained models to provide feedback, in the form of rewards or preferences, for learning different skills. Building on the code generation capabilities of large models, we present two families of methods that write code to either craft reward functions to learn specific behaviours. Finally, as LLMs can be seen as goal-conditioned policies, we cover methods that use them directly to specify goals and achieve them.
Not all the presented methods in this section are hierarchical by nature. For example, some papers focus on defining rewards or policies for a set of tasks, rather than a set of options. However, given the promising potential for such methods to drive progress for HRL, we include them as well.
# 6.1 Embedding Similarity
As foundation models are trained on Internet-scale datasets, their embeddings contain useful structure for a variety of tasks. Such embeddings can be the result of contrastive pretraining on image and text pairs, for instance, the Contrastive Language-Image Pretraining (CLIP) encoder (Radford et al., 2021). Let ${ \bf w } _ { i } \in \mathbb { R } ^ { d }$ represent the normalized feature vector (embedding) generated by the image encoder for the $i$ -th image, $I _ { i }$ . Similarly, let $\mathbf { u } _ { j } \in \mathbb { R } ^ { d }$ be the normalized feature vector generated by the text encoder for the $j$ -th text, $\boldsymbol { T } _ { j }$ . The similarity between image $i$ and text $j$ is computed using the cosine similarity (which simplifies to the dot product for normalized vectors):
$$
C _ { i j } = \mathbf { w } _ { i } ^ { \top } \mathbf { u } _ { j } .
$$
These embeddings can then be used to represent image or language goals and define reward functions by taking the cosine similarity between the embeddings of the goal and the observation in which the agent is currently situated,
$$
r ^ { g } ( s ) = \mathbf { w } ( s ) ^ { \top } \mathbf { u } ( g ) .
$$
This reward function is then maximized by a goal-conditioned policy interacting with an environment to learn behaviours that achieve the specified goals.
To obtain these vectors, the objective is formulated as minimizing a cross-entropy loss, applied symmetrically for both image-to-text and text-to-image prediction tasks. The loss for predicting the correct text caption for a given image $i$ (considering all $N$ text captions in the batch) is defined as:
$$
L _ { \mathrm { i m a g e } _ { i } } = - \log \frac { \exp ( c _ { i i } / e ) } { \sum _ { j = 1 } ^ { N } \exp ( c _ { i j } / e ) } ,
$$
where $e$ is the temperature hyperparameter. The loss for predicting the correct image for a given text caption $i$ (considering all $N$ images in the batch) is:
$$
L _ { \mathrm { t e x t } _ { i } } = - \log \frac { \exp ( c _ { i i } / e ) } { \sum _ { j = 1 } ^ { N } \exp ( c _ { j i } / e ) } .
$$
Figure 12: Illustration of the method of embedding similarity for defining option reward functions. Visual observations and language goal descriptions are converted into embeddings, and their similarity (e.g., via MineCLIP) is used to define reward functions for goal-conditioned policies. In this example, the agent is rewarded for successfully performing sheep shearing. Figure taken from Fan et al. (2022).
Fan et al. (2022) instantiate this idea in the open-ended Minecraft game (Johnson et al., 2016; Kanervisto et al., 2021). To do so, they introduce the MineDojo framework. The authors collect a large dataset of Minecraft gameplay for training a reward function that would map textual goals and a sequence of observations to a scalar value indicating their similarity. The language goal is encoded through the pretrained CLIP encoder (Radford et al., 2021) whereas the video encoder is composed of an image encoder and a temporal aggregator that accumulates 16 consecutive frames from the video. This leads to the following non-Markovian reward,
$$
r ^ { g } ( s _ { t - 1 6 : t } ) = \mathbf { w } ( s _ { t - 1 6 : t } ) ^ { \top } \mathbf { u } ( g ) .
$$
The authors train their reward model, called MineCLIP, on the aforementioned dataset using the same losses as in Equation 88 and 89. This resulting reward function excels at capturing correct behaviour on a wide collection of tasks, such as “Combat zombie”. Lifshitz et al. (2023) build on this work to obtain an instruction-following agent in Minecraft, where language instructions represent goals.
CLIP-based methods have also been applied to robotics. Xiao et al. (2022) fine-tune the CLIP model on a small dataset of robotic tasks and then utilize the model to label, using a set of predefined annotations, a much larger dataset of unlabeled observations. Using this larger dataset, the authors then train language-conditioned policies to achieve goals through imitation learning. Further improving the sample efficiency of embedding-based methods, Palo et al. (2023) show the possibility of efficiently fine-tuning the same CLIP model on as little as 1000 data points. Avoiding the costly operation of fine-tuning large pretrained models, Cui et al. (2022) investigate the prospect of using the CLIP model in a zero-shot fashion for defining goal-conditioned policies, obtaining good results on robotics tasks. Similarly, Rocamonde et al. (2024) leverage a fixed pretrained CLIP model and study the scaling effect of such models on the resulting RL performance.
# 6.1.1 Benefits and Opportunities
Exploration. Methods building on embedding-based rewards empirically show improved exploration in complex tasks. In particular, in open-ended environments such as Minecraft, the dense nature of the reward functions obtained from embedding similarity significantly helps with exploration, leading to sophisticated behaviour (Fan et al., 2022; Lifshitz et al., 2023). This density is also particularly useful for approaches studying the challenge of robotics (Xiao et al., 2022; Fu et al., 2024) and web navigation (Baumli et al., 2023). Du et al. (2023c) investigate how guiding exploration with an LLM during a pretraining phase can help an agent’s downstream performance. To do so, the authors introduce the idea of restricting the reward function through a similarity threshold,
$$
r^{g,T}(s_t, a_t, s_{t+1}) = \begin{cases} r^{g}(s_t, a_t, s_{t+1}) & \text{if } r^{g}(s_t, a_t, s_{t+1}) > T, \\ 0 & \text{otherwise}, \end{cases}
$$
where $T$ , the threshold, is a hyperparameter. This reduces the noise of possibly imperfect embeddings used to define the reward function, further improving exploration.
Transfer. Another important benefit from LLM-based approaches to skill discovery stems from the compositional nature of language, which easily allows for specifying a variety of goals. For example, Du et al. (2023a) study how pretraining the agent on self-generated goals, where good behaviour is rewarded by the embeddings of an LLM, can lead to improved downstream performance on a set of complex goals. To encourage reaching a diversity of goals that will transfer well, the authors additionally prompt the LLM to generate $k$ goals and reward the agent on the goal with the greatest reward,
$$
r^{g_{\max}}(s_t, a_t, s_{t+1}) = \max_{i = 1, \ldots, k} r^{g_i, T}(s_t, a_t, s_{t+1}).
$$
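A minimal sketch of the two reward transformations above, thresholding and taking the maximum over $k$ generated goals; the function names and numeric values are illustrative:

```python
def thresholded_reward(r_g: float, T: float) -> float:
    """r^{g,T}: zero out low-similarity rewards to suppress embedding noise."""
    return r_g if r_g > T else 0.0

def max_goal_reward(rewards: list[float], T: float) -> float:
    """r^{g_max}: reward the agent on the best of k LLM-generated goals,
    each already scored by the embedding-similarity reward."""
    return max(thresholded_reward(r, T) for r in rewards)

# Toy similarity scores for k = 3 generated goals at one transition.
r = max_goal_reward([0.12, 0.55, 0.31], T=0.3)
```

Thresholding removes spurious small similarities, while the max encourages the agent to make progress on whichever of the $k$ goals it is currently best positioned to reach.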
Similar generalization to different language-conditioned goals is reported by Lifshitz et al. (2023). Instead of directly training with a goal-conditioned model, Mahmoudieh et al. (2022) efficiently train a discrete set of smaller policies, used as a basis of behaviour. This is then distilled into a single language-conditioned neural network, which can better generalize on a larger spectrum of behaviours than the basis.
# Opportunities for Research.
• Understanding the trade-offs of different embeddings. An important question when working with embedding similarity measures concerns the origin of the embeddings themselves. Most of the presented papers rely on CLIP, but other embeddings have been used, such as the Bidirectional Encoder Representations from Transformers (BERT) embeddings (Devlin et al., 2019) and the Reusable Representations for Robot Manipulation (R3M) embeddings (Adeniji et al., 2023), which are pretrained on the Ego4D dataset (Grauman et al., 2021) through a combination of contrastive and video-language alignment losses. When considering a wide range of tasks, it is not clear which model shows greater performance, or is more amenable to fine-tuning.
• Expanding beyond text-image similarity. Most works compute the similarity between a language goal instruction and the current observation. Sontakke et al. (2023)
instead compute the similarity between an agent attempting to reach a goal and a demonstration of such a successful behaviour. Moreover, contrary to most works, the authors compute the reward at the trajectory level, that is, the reward is only given at the end of an interaction. The authors show that their approach can be applied even in the case where the demonstration is done by a human physically completing the task, rather than teleoperating a robot, which presents greater opportunities for generalization.
# 6.2 Providing Feedback
Leveraging the embeddings of foundation models to measure the similarity between a desired goal and the current state places significant emphasis on the quality of the embeddings themselves. One way to avoid this is by considering the auto-regressive nature of LLMs, which allows for chain-of-thought (Wei et al., 2022) and in-context learning (Brown et al., 2020). Such capabilities can be particularly useful to define option reward functions. This can be done by taking as input a state, or a trajectory, as well as a goal description, and using an LLM to output scalar feedback representing the degree of success with respect to the goal. Alternatively, preferences can be elicited from an LLM over pairs of states and then converted into a reward model through preference-based learning (Wirth et al., 2017).
# Direct Reward
To obtain a success measure, Du et al. (2023a) combine a sequence of observations from the environment together with a question such as “Did the agent successfully place the cactus left of the sofa?” to query a multimodal model (Alayrac et al., 2022) for a binary answer. Formally, $\mathtt{LLM} : \mathcal{S} \times \mathcal{G} \to \mathcal{Y}$, where the goal $g$ is represented by the question and $y \in \{ 0, 1 \}$. The goal reward is then defined through this binary output,
$$
r^{g}(s) = y = \mathtt{LLM}(s, g).
$$
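A minimal sketch of this binary success reward; `query_llm` is a hypothetical stand-in for a real multimodal model call, here hard-coded so the example is self-contained:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a multimodal LLM call; a real system would
    send the observation sequence and the question to a hosted model."""
    return "yes"  # placeholder answer for this self-contained sketch

def success_reward(observation_summary: str, goal_question: str) -> int:
    """r^g(s) = y = LLM(s, g): map the model's yes/no answer to {0, 1}."""
    answer = query_llm(f"{observation_summary}\n{goal_question} Answer yes or no.")
    return 1 if answer.strip().lower().startswith("yes") else 0

r = success_reward(
    "frames: agent places cactus left of sofa",
    "Did the agent successfully place the cactus left of the sofa?",
)
```

The reward is sparse by construction: it only fires when the model judges the goal to be achieved, which is one motivation for the preference-based alternatives discussed next.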
Such reward functions are evaluated on a diversity of domains: embodied simulations (Abramson et al., 2021), robotic manipulation with a 6DoF device, and human interactions in the Ego4D dataset (Grauman et al., 2021). To obtain accurate success measures, the authors have to initially fine-tune the model on a large dataset of expert interactions. Instead of costly model updates, Kwon et al. (2023a) propose to replace weight updates with few-shot in-context examples, building on the improved in-context learning capabilities of the employed LLM. Pan et al. (2024) show that measures of success obtained from a multimodal LLM have high agreement (up to $92.9\%$) with oracle evaluators. Such results are reported on the WebArena (Zhou et al., 2024) and Android-in-the-Wild (Rawles et al., 2023) benchmarks. Leveraging the strong performance of LLMs as direct reward modelers, Bai et al. (2024) successfully train robust RL policies on a variety of goals derived from changing web interfaces.
# Eliciting Preferences
When an LLM’s output directly functions as the reward signal, it often lacks the granularity to effectively measure the relative merit of a specific state against the full spectrum of alternatives. Instead, we can leverage the idea of reinforcement learning from AI feedback (Bai et al., 2022), introduced in the context of fine-tuning large models and relying on preference-based learning (Wirth et al., 2017; Thomaz et al., 2006). Building on this idea, Klissarov et al. (2024) introduce the Motif algorithm, which leverages an LLM’s feedback to guide an agent acting in the open-ended NetHack environment (Küttler et al., 2020). Pairs of observations from the environment are presented to an LLM, which is then queried, using chain-of-thought prompting, to provide a preference over which observation is more desirable for a certain goal.
Figure 13: Learning option rewards from AI feedback proceeds in three phases. In the first phase, an LLM is conditioned on a behaviour description and queried for preferences over pairs of observations, which are stored with their preference labels within a dataset. In the second phase, the preferences are distilled into an observation-based scalar reward function. Finally, an agent is trained interactively with RL, receiving a scalar signal at every step through the reward function extracted from the preferences.
Formally, the annotation function is given by $\mathtt{LLM} : \mathcal{S} \times \mathcal{S} \times \mathcal{G} \to \mathcal{F}$, where $\mathcal{S}$ is the space of states, $\mathcal{G}$ is the space of goals defined through natural language, and $\mathcal{F} = \{ 1, 2, \emptyset \}$ is a space of preferences for either the first, the second, or none of the observations. These preferences are then distilled into a reward function through the Bradley-Terry model (Bradley and Terry, 1952) and given to an RL agent interacting with the environment,
$$
\begin{array} { r l } & { \mathcal { L } ( \nu ) = - \mathbb { E } _ { ( s _ { 1 } , s _ { 2 } , g , y ) \sim \mathcal { D } _ { \mathrm { p r e f } } } \bigg [ \mathbb { 1 } [ y = 1 ] \log P _ { \nu } [ s _ { 1 } \succ s _ { 2 } | g ] + \mathbb { 1 } [ y = 2 ] \log P _ { \nu } [ s _ { 2 } \succ s _ { 1 } | g ] } \\ & { \qquad + \mathbb { 1 } [ y = \emptyset ] \log \Big ( \sqrt { P _ { \nu } [ s _ { 1 } \vdash s _ { 2 } | g ] \cdot P _ { \nu } [ s _ { 2 } \succ s _ { 1 } | g ] } \Big ) \bigg ] , } \end{array}
$$
where $\begin{array} { r } { P _ { \nu } [ s _ { a } \succ s _ { b } | g ] = \frac { e ^ { r _ { \nu } ^ { g } ( s _ { a } ) } } { e ^ { r _ { \nu } ^ { g } ( s _ { a } ) } + e ^ { r _ { \nu } ^ { g } ( s _ { b } ) } } } \end{array}$ = erνg (sear)ν (+sear)νg (sb) is the probability of preferring a state sa to another sb given a goal $g$ ; $r _ { \nu } ^ { g }$ is the reward defined with respect to the goal specified in the LLM’s prompt. Through the process of comparing states to alternatives, eliciting LLM preferences, or receiving AI feedback, nuanced and fine-grained reward functions can be provided. Such
reward functions can also be understood as process-based rewards (Uesato et al., 2023; Lightman et al., 2023). Klissarov et al. (2024) leverage this characteristic to learn a set of policies that exhibit a certain behaviour across time, such as preferring generally more cautious strategies when exploring. This is in contrast to the work on LLMs as direct reward modelers, which typically defines rewards for reaching goal states as binary success detectors (Du et al., 2023a). As illustrated in the MaestroMotif algorithm (Klissarov et al., 2025a), the flexibility offered by AI feedback is key in designing HRL agents capable of subtle behaviours and fast adaptation. Adding to the generality of AI feedback, Wang et al. (2024a) investigate the resulting policies across a range of continuous control domains using pixel observations and a multimodal LLM. Their findings show that reward functions generated through AI feedback yield more performant policies compared to embedding similarity approaches or methods that directly query the LLM for scalar rewards.
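The Bradley-Terry objective above can be sketched in a few lines of NumPy. The encoding of "no preference" as $y = 0$ and the function names are illustrative choices, not Motif's actual implementation:

```python
import numpy as np

def pref_prob(r_a: np.ndarray, r_b: np.ndarray) -> np.ndarray:
    """Bradley-Terry probability P[s_a > s_b | g] from reward-model outputs."""
    return np.exp(r_a) / (np.exp(r_a) + np.exp(r_b))

def bradley_terry_loss(r1, r2, y) -> float:
    """Negative log-likelihood of preference labels y in {1, 2, 0}
    (0 encodes "no preference", the empty-set label in the text), given
    reward-model outputs r1 = r_nu^g(s_1) and r2 = r_nu^g(s_2)."""
    r1, r2, y = map(np.asarray, (r1, r2, y))
    p12, p21 = pref_prob(r1, r2), pref_prob(r2, r1)
    ll = np.where(y == 1, np.log(p12),
         np.where(y == 2, np.log(p21),
                  np.log(np.sqrt(p12 * p21))))  # geometric mean for "none"
    return float(-ll.mean())
```

Minimizing this loss over a dataset of LLM-annotated pairs yields the dense scalar reward $r_\nu^g$ used for RL training in the third phase of Figure 13.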
# 6.2.1 Benefits and Opportunities
Exploration. Klissarov et al. (2024) illustrate the potential of AI feedback-based rewards to significantly improve exploration on the complex open-ended world of NetHack. The obtained reward function is shown to be naturally dense and encodes a variety of important milestones, such as unlocking doors or picking up items. The authors hypothesize that, by querying the model on thousands of pairs of observations from the environment, the LLM’s common sense reasoning and domain knowledge are distilled into a useful reward function.
Credit Assignment. Wang et al. (2024a) report that the reward obtained from preferences monotonically increases as the agent advances towards the goal, naturally assigning credit to states in between the starting state and the goal. Klissarov et al. (2025b) further study the dense nature of such reward functions and reveal a strong correlation with value functions obtained at the end of training. As such, value functions have been trained to propagate information through temporal difference learning (Sutton, 1988), the authors argue that this high correlation is another indication that the reward functions based on LLM feedback are useful for credit assignment. An equivalent perspective is that the resulting dense reward can be seen as a form of reward redistribution (Arjona-Medina et al., 2018; Hung et al., 2019; Klissarov and Precup, 2020; Ni et al., 2023), which is an established method for improving credit assignment.
Transfer. In MaestroMotif, Klissarov et al. (2025a) show how a set of semantically meaningful skills can be easily re-composed zero-shot to adapt to complex new tasks. Leveraging the code generation abilities of LLMs, they propose a neuro-symbolic approach where skill policies are neural networks trained by reinforcement learning, and the high-level policy is defined through code. The authors then use the in-context learning abilities of LLMs to re-compose the skills, significantly outperforming baselines that are trained specifically on each of the tasks. Their approach highlights how the compositional nature of language can be particularly helpful when combined with a set of linguistically-defined skills, leading to an easily promptable agent.
# Opportunities for Research.
• Simplifying the reward learning process. Despite the strength of preference-based methods for crafting rewards through LLMs, they are more complex than directly querying for a reward signal. Is there a way to leverage the improved exploration and credit assignment without the additional complexity? Is an existing dataset of observations needed for eliciting useful preferences? Zheng et al. (2024) provide an initial answer to these questions by comparing different ways in which the LLM feedback is leveraged, for example, by using it as a label for a classification loss. Their results show surprisingly strong performance of some of these simpler baselines, even when querying the LLM with online interactions.
# 6.3 Reward as Code
Instead of relying on LLMs to evaluate good and bad behaviour from observations, it is possible to use their code generation abilities to craft helpful rewards. In this line of work, a goal description is given to the LLM as input, as well as additional information from the environment,
$$
\begin{array} { c } { { \tt c o d e ^ { \it g } \sim \tt L L M ( \it g , i n f o ) , } } \\ { { r ^ { \it g } ( \it s ) = \tt c o d e ^ { \it g } ( \it s ) . } } \end{array}
$$
This additional information often constitutes important symbolic information, such as low-level features, that is used to define the code. This code is then executed alongside the environment simulation to generate a reward for every state $s$. Xie et al. (2024) explore the possibility of leveraging an LLM’s capacity to code reward functions for robotics tasks. The authors provide the LLM with additional information in the form of a symbolic representation of the environment (e.g., Python classes describing each object and methods to access specific information about it). Furthermore, the authors provide the LLM with helpful functions from different packages (such as quaternion computation in NumPy) to be used for reward generation. Finally, their algorithms also allow for integrating human feedback. Yu et al. (2023) similarly investigate how LLMs can generate reward functions for learning robotics skills. In their approach, an LLM takes as input a detailed language description of a goal and instantiates a set of reward functions.
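A minimal sketch of the reward-as-code pattern, with a hard-coded stand-in for the LLM call and a toy symbolic state; all names and the example reward are illustrative, not the method of any specific paper:

```python
def llm_generate_reward_code(goal: str, info: str) -> str:
    """Hypothetical stand-in for code^g ~ LLM(g, info); a real system would
    prompt an LLM with the goal and a symbolic environment description."""
    return (
        "def reward(state):\n"
        "    # dense shaping: negative distance between gripper and cube\n"
        "    return -abs(state['gripper_x'] - state['cube_x'])\n"
    )

code = llm_generate_reward_code("move the gripper to the cube",
                                "state keys: gripper_x, cube_x")
namespace: dict = {}
exec(code, namespace)             # compile the generated reward function
reward_fn = namespace["reward"]   # r^g(s) = code^g(s)

r = reward_fn({"gripper_x": 0.2, "cube_x": 0.5})
```

Because the reward is plain code over symbolic features, it can be executed alongside the simulator at every step with no further LLM queries during training.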
Another notable work is that of Ma et al. (2024), which presents the Evolution-driven Universal REward Kit for Agent (EUREKA). They provide a task description to the LLM, such as “make the pen spin to a target orientation”, and perform an evolutionary search over the space of reward functions. This process is supported through additional context given to the LLM in the form of selected parts of the environment’s source code. For each candidate reward function that the LLM generates, a complete learning run is performed through massively distributed RL experiments using IsaacGym (Makoviychuk et al., 2021). The most promising reward function candidates are then retained and given to the LLM together with the learning statistics, such that the model performs in-context learning and suggests a new batch of candidates.
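The EUREKA-style search loop can be sketched as follows; `propose_candidates` and `train_and_evaluate` are stubs standing in for LLM queries and full distributed RL runs, and the whole structure is a simplified reading of the method rather than its actual implementation:

```python
import random

def propose_candidates(feedback: str, k: int) -> list[str]:
    """Stub for the LLM proposing k reward-code candidates from feedback."""
    return [f"candidate-{feedback}-{i}" for i in range(k)]

def train_and_evaluate(candidate: str) -> float:
    """Stub for a complete RL training run scoring one candidate reward."""
    return random.random()

def evolutionary_reward_search(rounds: int = 3, k: int = 4) -> tuple[str, float]:
    """Propose, train, keep the best candidate, and feed the learning
    statistics back to the LLM as an in-context signal for the next round."""
    feedback, best, best_score = "init", None, float("-inf")
    for _ in range(rounds):
        for cand in propose_candidates(feedback, k):
            score = train_and_evaluate(cand)
            if score > best_score:
                best, best_score = cand, score
        feedback = f"best={best} score={best_score:.2f}"  # learning statistics
    return best, best_score

random.seed(0)
best_candidate, best_score = evolutionary_reward_search()
```

The key design choice is that selection pressure comes from full training runs, so the search optimizes the reward that actually produces good policies, not just plausible-looking code.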
# 6.3.1 Benefits and Opportunities
Transfer. The ability to efficiently generate reward functions, without human supervision, is particularly important for transfer. For example, Ma et al. (2024) achieve super-human level reward design for complex robotics skills across a variety of embodiments. In the domain of Minecraft, Li et al. (2023) show how reward functions as code can be used to solve a variety of long-horizon goals given access to the symbolic features from the environment.
Figure 14: Defining reward functions as code requires access to a symbolic representation of the environment. This is done through an expert abstraction function that represents the environment as a hierarchy of Pythonic classes. The user instruction describes, in natural language, the goal to be achieved. The agent then interacts with the environment to maximize this symbolic reward function. It is also possible to include user feedback that summarizes the failure modes of the current reward code. Figure taken from Xie et al. (2024).
# Opportunities for Research.
• Going beyond symbolic representations. Generating a reward function as code is a powerful paradigm: it avoids the need to query the LLM during the RL phase and does not require learning a parametric reward model. However, by definition, such an approach requires access to symbolic features from the domain of interest, which can be limiting if this involves real-world interactions with humans. Venuto et al. (2024) propose to query the LLM to craft its own symbolic representation from high-dimensional observations, similar to the work by Palo and Johns (2024). These representations are then used to define reward functions in code. However, their approach requires access to expert demonstrations, which future work could alleviate.
# 6.4 Directly Modeling the Policy
So far, we have covered methods that leverage foundation models to define goal reward functions through a variety of strategies, such that goal-conditioned policies can be obtained by maximizing these reward functions. Alternatively, there exists a line of work that uses LLMs to directly model the policy itself: goals are defined through prompts, and conditioning the LLM on them effectively turns it into a goal-conditioned policy. In this setting, the LLM is oftentimes improved through in-context learning (Brown et al., 2020), bypassing the need for performing parameter updates, which can be costly and time-consuming. Building on the code generation capabilities of LLMs,
Figure 15: An LLM is conditioned on a goal description and generates snippets of code which instantiate skill policies. When interacting in multimodal environments, such as Minecraft, a bridge between this symbolic skill policy representation and the high-dimensional nature of the environment has to be present. Under such a setting, an LLM can act as an HRL agent, efficiently achieving complex goals. Figure taken from Wang et al. (2023a).
Liang et al. (2022) propose to define robotic skills as policies in the form of Python code,
$$
\begin{array} { c } { { \mathsf { c o d e } ^ { g } \sim \mathsf { L L M } ( g ) , } } \\ { { a \sim \mathsf { c o d e } ^ { g } ( s ) , } } \end{array}
$$
where the generated code $\mathtt{code}^{g}$ acts as the goal-conditioned policy $\pi(a \mid s, g)$. They show that the LLM can re-compose calls to an API such that a new code policy achieves a specific goal. In particular, they propose hierarchical code generation that recursively defines undefined functions from existing functions, leading to strong performance on robotics tasks. Kwon et al. (2023b) extend this work, removing assumptions such as providing in-context examples or requiring the LLM to predict end-effector poses. In the complex open-ended environment of Minecraft, Wang et al. (2023a) propose Voyager, a method leveraging an LLM to continually expand a library of skills. Such skills are crafted by prompting the LLM to define specific behaviours in code, building on an existing JavaScript API (PrismarineJS, 2013) that allows for grounding the generated code in the multimodality of Minecraft. Voyager further uses ideas of auto-curriculum and self-reflection to update the set of skills, learn new ones, or define their composition for a given task.
Another line of work directly queries the LLM for actions by giving as context the natural language description of a goal and the current state,
$$
a \sim \mathtt{LLM}(s, g).
$$
In these settings, LLMs output the low-level actions in an environment, effectively acting as a goal-conditioned policy. This instantiation highlights that LLMs can already serve as effective HRL agents conditioned on goal descriptions. The current focus of such models is on computer-based tasks (Anthropic, 2024; OpenAI, 2025). Despite the appeal of generalizing zero-shot to new language instructions, current LLMs are still quite limited in successfully performing long-horizon tasks by directly selecting low-level actions (OpenAI, 2024; Zhou et al., 2024).
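A toy sketch of this direct-action pattern on a hypothetical web task; `llm_policy` is a hard-coded stand-in for an actual model call, and the state strings and action names are invented for illustration:

```python
def llm_policy(state: str, goal: str) -> str:
    """Hypothetical stand-in for a ~ LLM(s, g): a real system would prompt a
    model with the goal and a textual state, then parse an action from the reply."""
    return "type query" if "search box focused" in state else "click search box"

def rollout(goal: str, max_steps: int = 5) -> list[str]:
    """Query the model for a low-level action at every environment step."""
    state, actions = "page loaded", []
    for _ in range(max_steps):
        action = llm_policy(state, goal)
        actions.append(action)
        if action == "click search box":
            state = "search box focused"  # toy environment transition
        if action == "type query":
            break
    return actions

acts = rollout("search for train tickets")
```

Note that the LLM is queried at every timestep, which is exactly the action-frequency and cost concern raised in the research opportunities below.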
# 6.4.1 Benefits and Opportunities
Exploration. By relying on an LLM’s common sense, prior knowledge, and possible API libraries, researchers have shown that agents explore their environment significantly better. By directly modeling the policy, it is possible to condition the LLM on a wider variety of goals and find well-performing policies for a subset of easier goals. This allows for making progress on very hard exploration problems by breaking the task into achievable milestones. Examples include collecting diamonds in Minecraft (Wang et al., 2023a) or intricate web navigation tasks (Zhou et al., 2024). By conditioning an LLM on language and directly outputting a sequence of actions, agents accomplish tasks that would be extremely difficult, or even impossible, to learn by maximizing a reward function.
Transfer. Directly acting with an LLM greatly simplifies how users can leverage the compositional nature of language. For example, the same LLM can be directly conditioned on a variety of computer interaction tasks and achieve them zero-shot (Anthropic, 2024). Alternatively, a library of skills can be re-composed through in-context learning to craft new skills (Wang et al., 2023a).
# Opportunities for Research.
• Lifting restrictions on the action space and action frequency. The prospect of directly generating a wide spectrum of behaviours simply by querying a large pretrained model is particularly appealing. It essentially encompasses the fundamental promise of HRL for fast adaptation thanks to the compositional nature of language. However, it also poses interesting challenges. For example, would such a model be restricted to a certain action space, or is there a way to efficiently adapt to a variety of embodiments? Are there limitations in terms of action frequencies? The domain of computer navigation is especially promising as grounding an LLM in the action space of computers would allow a model to achieve many economically useful tasks. However, the same model could not be used to control an embodied robot, unless fine-tuning is performed, which for large models is costly. A varying action space also raises the necessity to co-fine-tune the model to avoid catastrophic forgetting (Brohan et al., 2023).
# 7. Using Temporally Abstract Behaviour
In the previous sections, we presented a variety of approaches addressing the option discovery problem. This naturally leads to the question: how might an agent effectively use this set of behaviours to inform decision-making? In this section, we outline a spectrum of possible ways of integrating options and discuss different learning strategies.
# 7.1 Different Ways of Deliberating over Options
Let us consider the most common way of integrating options within an agent: the call-and-return model. In this model, a single option is chosen at every high-level decision point, and this option selects actions in the environment until its termination or interruption. This process repeats, and the high-level policy selects again amongst the available options. This model is by far the most prevalent one across all HRL approaches we have covered in this work and was also used to give a simplified presentation of HRL itself in Figure 4. The call-and-return model presents a straightforward way to think about HRL: a computational cost is paid at every high-level decision point for the high-level policy to deliberate and decide on an option. This cost can come in the form of a forward pass in a large neural network, chain-of-thought deliberation in LLMs, or a planning budget using option models. Once this cost is paid, the computational burden is reduced to the amount of computation required for the option to pick primitive actions.
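The call-and-return execution model can be sketched as a nested loop: an outer loop that pays the deliberation cost, and an inner loop that runs the chosen option until termination. The toy chain environment and options below are illustrative:

```python
def call_and_return(env_step, select_option, options, state, horizon=20):
    """Call-and-return execution: pay the deliberation cost once per
    high-level decision, then let the chosen option act until termination."""
    t, decisions = 0, 0
    while t < horizon:
        o = select_option(state)            # costly high-level deliberation
        decisions += 1
        policy, beta = options[o]           # (intra-option policy, termination)
        while t < horizon:
            state = env_step(state, policy(state))  # cheap low-level step
            t += 1
            if beta(state):                 # option terminated; deliberate again
                break
    return state, decisions

# Toy chain environment: the state is an integer, and the "inc" option
# walks right until it reaches a multiple of 5 (all of this is illustrative).
options = {"inc": (lambda s: +1, lambda s: s % 5 == 0)}
env_step = lambda s, a: s + a
final, n_decisions = call_and_return(env_step, lambda s: "inc", options, state=1)
```

Over a 20-step horizon, the high-level policy deliberates only a handful of times, which is precisely the computational profile depicted in Figure 16.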
Figure 16: Depiction of the distribution of computation over time for the standard call-and-return model of execution. We assume that high-level decisions incur a greater computation cost compared to low-level ones. This is illustrated in the spikes that characterize the call-and-return model. We also present a hypothetical model that would distribute computation over time in a more flexible way.
The call-and-return model proposes to spend computation as a binary choice: either the model deliberates over options or executes them. However, one could allocate computation according to a different distribution by allowing various degrees of deliberation to happen across timesteps. We illustrate this through Figure 16. Some states could require extensive deliberation, for example, in the form of long chains of thought during the reasoning process. Other states could require shorter deliberations to decide on the correct action. The line of work on generalized policy iteration and generalized policy evaluation (Barreto et al., 2017, 2019b, 2020) is a concrete example of how one might redistribute computation across all timesteps. In this work, additional computation is spent at every timestep to select an action that is at least as good as the actions that would be chosen by any of the individual option policies in isolation.
# 7.2 Learning High-level Policies
The agent’s high-level policy, $\mu ( o | s )$ , is responsible for selecting an option. We present different approaches to learning this quantity by separating methods into three categories: model-free approaches, model-based approaches, and approaches that rely on in-context learning using LLMs.
Discovering Temporal Structure: An Overview of Hierarchical RL
# 7.2.1 Model-free approaches
Usual model-free RL methods (like Q-learning) can, with slight modifications, be used to learn a policy that selects options $\mu ( o | s )$. These modifications simply involve discounting rewards obtained during option execution appropriately and using the state at the end of option execution as the next environment state, i.e., the experience tuple used to update the agent is $\left( s_t, o_t, \sum_{k=0}^{\tau} \gamma^k r_{t+k}, s_{t+\tau} \right)$, where $\tau$ is the duration of execution of option $o_t$ (Bradtke and Duff, 1994). This approach, while simple, treats option execution as a black box. When the chosen option is Markov, meaning that its duration $\tau$ can be written purely as a function of state (and not time), then intra-option learning can be used for improved sample efficiency. As long as states observed during option execution are inside the option’s initiation set, the corresponding transitions can be used to update $\mu ( o | s )$ (Sutton et al., 1998). Specifically, an SMDP transition $\left( s_t, o_t, \sum_{k=0}^{\tau} \gamma^k r_{t+k}, s_{t+\tau} \right)$ can be decomposed into up to $\tau$ transitions of the kind $\left( s_i, o_t, \sum_{k=0}^{\tau} \gamma^k r_{i+k}, s_{i+\tau} \right)$ for all $i$ such that $s_i \in \mathcal{I}_o$. Bacon (2018) later generalizes these insights to policy gradient methods by proposing the option gradient theorem.
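The SMDP Q-learning update described above can be sketched directly; the toy state and option counts are illustrative:

```python
import numpy as np

def smdp_q_update(Q, s, o, rewards, s_next, alpha=0.1, gamma=0.99):
    """Q-learning over options: accumulate the discounted intra-option
    rewards, then bootstrap from the state where the option terminated."""
    tau = len(rewards)                                    # option duration
    G = sum(gamma**k * r for k, r in enumerate(rewards))  # sum_k gamma^k r_{t+k}
    target = G + gamma**tau * np.max(Q[s_next])           # discount by gamma^tau
    Q[s, o] += alpha * (target - Q[s, o])
    return Q

Q = np.zeros((4, 2))  # toy table: 4 states, 2 options
# One SMDP transition: option 1 ran for tau = 3 steps from state 0 to state 3,
# collecting rewards [0, 0, 1] along the way.
Q = smdp_q_update(Q, s=0, o=1, rewards=[0.0, 0.0, 1.0], s_next=3)
```

The only differences from standard Q-learning are the accumulated discounted return in place of a single reward and the $\gamma^{\tau}$ factor on the bootstrap term.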
Bandits that maximize learning progress. A popular model-free approach is to treat the high-level policy as a contextual bandit (which can be thought of as an MDP with $\gamma = 0$ ). The reward function for the bandit is designed to carefully trade off various objectives. For example, when the extrinsic reward is dense and informative, the bandit simply chooses the option expected to maximize the reward (Schaul et al., 2019). When the reward function is sparse or deceptive, then a measure of learning progress (LP) is often added to the extrinsic reward; the idea is that the agent should pick options that (in addition to greedily maximizing reward) would also improve its knowledge of the environment and its own competence in the environment (Colas et al., 2022). Although measuring LP itself is intractable, proxies are used in practice. Competence progress (Oudeyer and Kaplan, 2007; Stout and Barto, 2010) prioritizes skills whose capabilities change the most with time—these skills represent subgoals of intermediate difficulty (Florensa et al., 2018). Count-based bonuses prioritize options that lead to high novelty (Bagaria and Schaul, 2023; Badia et al., 2020b,a), and density-based approaches (Pong et al., 2020) attempt to maintain a high entropy distribution for option selection from different states (Pitis et al., 2020).
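A minimal sketch of treating the high-level policy as a bandit with a count-based novelty bonus; the specific bonus form, class name, and toy rewards are illustrative stand-ins for the learning-progress proxies discussed above:

```python
import numpy as np

class OptionBandit:
    """High-level policy as a contextless bandit (gamma = 0): pick the option
    maximizing estimated extrinsic reward plus a count-based novelty bonus."""
    def __init__(self, n_options: int, bonus_scale: float = 1.0):
        self.values = np.zeros(n_options)   # running mean reward per option
        self.counts = np.zeros(n_options)   # how often each option was chosen
        self.bonus_scale = bonus_scale

    def select(self) -> int:
        bonus = self.bonus_scale / np.sqrt(self.counts + 1)  # novelty bonus
        return int(np.argmax(self.values + bonus))

    def update(self, option: int, reward: float) -> None:
        self.counts[option] += 1
        self.values[option] += (reward - self.values[option]) / self.counts[option]

bandit = OptionBandit(n_options=3)
for _ in range(30):
    o = bandit.select()
    r = [0.1, 0.9, 0.2][o]   # toy extrinsic rewards per option
    bandit.update(o, r)
```

After a short burn-in, the bandit concentrates on the highest-reward option while the bonus keeps rarely tried options from being ignored forever.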
# 7.2.2 Model-based approaches
Typically, in model-based RL, the agent first learns transition and reward models of the world, and then uses those models to look ahead in the future, before finally making a decision at the current timestep. When the agent learns single-timestep models of the world, it must roll out these models over a long horizon. This is problematic because model-prediction errors compound over time (Talvitie, 2017; Janner et al., 2019) and small errors in model prediction can lead to massive errors in value approximation (Kearns and Singh, 2002). Options allow the agent to learn temporally extended models of the world, which afford longer-horizon planning.
Learning option models. The agent’s stream of interaction data can be used to learn option models in two ways: (a) on-policy: where the agent updates the models for an option after it is executed (Sutton et al., 1999b), or (b) off-policy: where the agent uses intra-option learning (Sutton et al., 1998) to simultaneously learn about many options from the data collected at every timestep. Some methods learn the option model in the agent’s observation space, while others operate in an abstract state space. Models trained in the raw observation space must contend with the challenges of high-dimensional inputs and outputs (Nair and Finn, 2020). When state abstraction is learned alongside options, the agent must also manage drift, where option models must rapidly adapt to changes in the evolving abstract state representation.
Abstract planning. Options enable procedural abstraction, but the agent still has to plan in its original observation space, which is challenging when that observation space is high-dimensional. More effective planning can be achieved by combining options with a suitable state abstraction. This combination of state and action abstraction can result in abstract decision processes that are simpler to plan in, but this often comes at a cost—the coarser the abstraction, the greater the potential for suboptimality of the resulting plans, mirroring the trade-offs discussed in the context of options in Section 2.2. We now briefly discuss some approaches that combine options with state abstraction for model-based planning.
• Expectation models. There are at least three choices for representing an option model: (a) distribution model: predict the distribution over possible next states; (b) sample model: generate a sample from the next state (and reward) distribution, and use sample-based planning techniques such as Monte-Carlo Tree Search (Coulom, 2006); and (c) expectation model: predict the expected next state and reward. When the value function is linear in the agent’s state representation, then expectation models are sufficient for planning (Wan et al., 2019). Due to their simplicity, expectation models can be learned efficiently by solving a system of linear equations (Sutton et al., 2023), making them an attractive choice for HRL agents that simultaneously learn state representations that evolve over time. There have also been proposals of using temporal abstractions as a mechanism for focusing on local, subgoal-conditioned models that are possibly easier to learn than a complete model of the environment (Lo et al., 2024).
• Skills to symbols (Konidaris et al., 2018). When options have the property that their policies drive all state variables to a small range of values, the abstract state representation needed for planning is a graph. Nodes of this graph correspond to abstract states and edges correspond to options; an edge exists between two nodes when one option terminates in a state from which another option has a high probability of succeeding in its own subtask. The discovery of options with this property of sequential composability was discussed in Section 4.3. The Deep Skill Graphs algorithm (Bagaria et al., 2021b, 2025b) simultaneously learns options and such a graph representation for planning in continuous environments. However, skills cannot always control all state variables—they often set some state variables while leaving others unchanged. When options have this property, the representation needed for planning is a type of factored MDP (Boutilier et al., 2000), which can be succinctly described using the Planning Domain Definition Language (PDDL) (McDermott et al., 1998). The advantage of generating a PDDL description of the problem is that it can then be efficiently solved by off-the-shelf classical planners, even when the planning problem is long-horizon and combinatorial in the number of state variables.
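Returning to the first bullet: when values are linear in the agent’s features, planning with an expectation model reduces to linear algebra. A minimal sketch under assumed toy dynamics (the model matrices `F` and `b` are random inventions for illustration, not learned from data): the model predicts the expected option reward `b @ phi(s)` and the expected discounted next features `F @ phi(s)`, and the planning fixed point is the solution of a linear system.

```python
import numpy as np

# Toy linear expectation model for a single option (assumed setup):
# E[option reward | s]        = b @ phi(s)
# E[discounted phi(s') | s]   = F @ phi(s)
rng = np.random.default_rng(0)
n = 4
F = rng.normal(scale=0.1, size=(n, n))   # expected next-feature matrix
b = rng.normal(size=n)                   # expected-reward weights

def backup(w, phi):
    """One planning backup: v(s) <- E[reward] + E[v(next state)]."""
    return b @ phi + w @ (F @ phi)

# Because every quantity is linear in phi, the value weights at the
# planning fixed point satisfy w = b + F^T w, i.e. (I - F^T) w = b.
w_star = np.linalg.solve(np.eye(n) - F.T, b)

phi = np.ones(n) / n
# At the fixed point, a further backup leaves the value unchanged.
assert np.isclose(backup(w_star, phi), w_star @ phi)
```

This is why expectation models pair well with evolving learned representations: re-solving a small linear system is cheap compared to retraining a full generative model.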
Discovering Temporal Structure: An Overview of Hierarchical RL
The algorithms of Konidaris et al. (2018) provide a way to learn such abstract state representations, enabling an agent to compute the probability with which a given plan will be successful. Recently, Rodriguez-Sanchez and Konidaris (2024) proposed a way to learn continuous state representations that lead to provably bounded value loss (Li et al., 2006; Abel et al., 2016, 2020)—meaning that when the agent plans solely with its learned abstract state representations, it foregoes no more than a bounded amount of reward compared to an agent that plans in the MDP’s native state space. An additional challenge when learning option-compatible state abstractions for planning is that of transfer—learned representations should be reusable in future tasks encountered by the agent. To learn transferable representations, James et al. (2019) leverage a simple insight: when the same agent is used to solve a family of related problems, state representations that are expressed from the point of view of the agent are more amenable to transfer than state representations that uniquely describe each individual task (Konidaris and Barto, 2007). For example, a home robot that solves many tasks in many homes does so with the same set of sensors and actuators, so representations expressed from the perspective of that robot are reusable across many different contexts. By applying this insight to learned symbolic representations, James et al. (2019) reduce the number of samples required to solve each additional task in a given sequence of tasks.
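As a toy illustration of planning over the graph abstraction described above, the sketch below runs breadth-first search over a hand-written skill graph. The node and option names are invented; algorithms like Deep Skill Graphs learn this structure from experience rather than assuming it.

```python
from collections import deque

# Nodes are abstract states; each edge is an option that reliably moves
# the agent from one abstract state to another (hand-written toy graph).
skill_graph = {
    "start":   [("walk_to_door", "door")],
    "door":    [("open_door", "hallway")],
    "hallway": [("walk_to_goal", "goal")],
    "goal":    [],
}

def plan(graph, source, target):
    """Breadth-first search returning a shortest sequence of options."""
    queue = deque([(source, [])])
    visited = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for option, nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [option]))
    return None  # target unreachable under the current skill set

assert plan(skill_graph, "start", "goal") == [
    "walk_to_door", "open_door", "walk_to_goal"]
```

Planning at this level is combinatorial rather than continuous: the hard continuous-control problems are hidden inside the options, and the planner only searches over their composition.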
In summary, the combination of options with appropriate state abstractions offers a powerful framework for efficient model-based planning in complex environments. These approaches address fundamental challenges in reinforcement learning by enabling longer planning horizons, reducing the dimensionality of the planning space, and mitigating error propagation in learned models (Bagaria et al., 2025b). The trade-off between abstraction granularity and solution optimality remains a central consideration, with different methods offering various compromises between planning efficiency and performance guarantees. As hierarchical reinforcement learning continues to evolve, integrating these state and action abstraction techniques with advances in representation learning and approximate planning promises to further enhance the scalability and applicability of RL to increasingly complex real-world problems. Future research directions include developing more robust methods for discovering suitable abstractions automatically, improving the theoretical understanding of abstraction hierarchies, and bridging the gap between symbolic planning and continuous control.
# 7.2.3 Large Language Models
If options are represented using LLMs, in-context learning can be used to learn the high-level policy, $\mu ( o | s )$. This can be done by having the LLM output Python code that implements skill-selection logic (Wang et al., 2024c; Klissarov et al., 2024), or by having it output formal plans described using PDDL (Silver et al., 2024). Such policies can then be updated by providing execution traces as context to the LLM and asking for code refinements. It is also possible to directly deploy the LLM in the environment to select skills at every high-level decision point (Ahn et al., 2022). Since such approaches do not require gradient updates, they potentially offer faster adaptation. However, the nature of in-context learning is currently not well understood, for example in terms of generalization and robustness, and is an active area of research.
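A minimal sketch of the code-as-policy pattern described above, with the model call stubbed out: `fake_llm`, the prompt, and the returned code are all invented stand-ins, not any particular system’s API. The point is only the shape of the loop: ask for skill-selection code, compile it, and (in a real system) feed execution traces back for refinement.

```python
def fake_llm(prompt):
    # Stand-in for a real language-model call; a real system would send
    # the prompt (and, on later rounds, execution traces) to an LLM.
    return ("def select_skill(state):\n"
            "    return 'explore' if state['unseen'] else 'exploit'")

def compile_policy(code):
    """Turn LLM-generated source into a callable high-level policy."""
    namespace = {}
    exec(code, namespace)  # trust boundary: sandbox this in practice
    return namespace["select_skill"]

policy = compile_policy(fake_llm("Write select_skill(state) for ..."))
assert policy({"unseen": True}) == "explore"
assert policy({"unseen": False}) == "exploit"
```

Because the policy lives in source code rather than in network weights, refinement happens by re-prompting instead of by gradient descent.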
# 8. Challenges of Discovery
Arguably, one of the biggest challenges in discovering temporal abstractions comes from the lack of an agreed-upon objective that would yield meaningful options across a variety of domains. This can be observed in the wide diversity of methods presented in Sections 4, 5, and 6. Additionally, the complexity overhead that HRL sometimes introduces can make it less appealing from a practical perspective. The time invested by a practitioner in setting up an HRL algorithm is valuable. If this time investment does not lead to significantly improved performance on a particular task, or is not generally applicable across tasks, the practitioner will likely choose a simpler approach.
The two aforementioned points indicate that there is substantial potential for research in HRL to find reliable and general solutions, as well as to understand where to apply them (see Section 10). In what follows, we highlight prominent technical challenges that arise when attempting to discover temporal abstractions.
# 8.1 Non-stationarity
One of the main difficulties in learning a hierarchy of behaviours stems from its modular nature. A hierarchical agent has to learn, potentially simultaneously, about the option policies, option reward functions, termination functions, initiation functions, and high-level policy. As each of these modules is being learned, it creates non-stationary targets for the other modules.
A straightforward approach to deal with this non-stationarity is by learning the different components separately. For instance, this can be done by leveraging offline datasets (methods in Section 5) to first learn a set of option rewards or a set of option policies, before fixing them. These components can then be provided to a high-level policy that will learn to achieve a certain task. Similarly, we can leverage the LLM’s prior knowledge to define, beforehand, option reward functions or to directly model the option policies (methods in Section 6). These would create stationary targets for the remainder of the components. LLMs can also be used to model the high-level policy itself, either by directly querying them or by leveraging their coding abilities to define the skill execution logic. The in-context learning abilities of LLMs could further allow for fast, gradient-free adaptation with respect to a changing option set.
When learning tabula rasa, the non-stationarity can be particularly challenging. It is common for methods to first define an option learning phase, where the high-level policy acts according to a more exploratory behaviour, for example by uniformly choosing over the options (Machado et al., 2017; Eysenbach et al., 2019). Such a phase is meant to provide experience in learning the option reward functions and option policies. Nachum et al. (2018) emphasize the difficulty of non-stationarity in HRL when learning from past experiences that are stored in a dataset, called an experience replay buffer (Lin, 1991). An option that was previously sampled and stored within a replay buffer, together with the experience it generated, would not produce the same data distribution if we were to sample it now. To alleviate this, Nachum et al. (2018) relabel which option was used for a stored datapoint with the option that is currently most likely to generate the actions seen in this datapoint.
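The relabeling step of Nachum et al. (2018) can be sketched as follows. The scoring function `log_prob` below is an invented toy stand-in for the current option policy’s log-likelihood; in the original method it comes from the learned low-level policy.

```python
import numpy as np

def log_prob(option, state, action):
    # Toy likelihood: each option "prefers" actions close to its index.
    # A real agent would use log pi(action | state, option) here.
    return -((action - option) ** 2)

def relabel(transitions, candidate_options):
    """Pick the option that best explains the logged low-level actions:
    argmax_o sum_t log pi(a_t | s_t, o) over a stored trajectory."""
    scores = [sum(log_prob(o, s, a) for s, a in transitions)
              for o in candidate_options]
    return candidate_options[int(np.argmax(scores))]

# Logged actions hover around 2, so option 2 best explains them,
# regardless of which option originally generated the data.
traj = [(0, 1.9), (1, 2.1), (2, 2.0)]
assert relabel(traj, candidate_options=[0, 1, 2, 3]) == 2
```

The stored experience is then trained on as if the relabeled option had produced it, which keeps old replay data consistent with the current hierarchy.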
Bagaria et al. (2023) illustrate how the non-stationary challenge affects the initiation set. They argue that learning the initiation function using binary classification (or, equivalently,
Monte Carlo value estimation) is only a sound approach when the option policy is fixed. In their approach, the initiation function captures the capability of the current option policy to achieve its goal. As the option policy evolves, so must the initiation function. As a consequence, when an option is unsuccessful from a state, its initiation probability at that state goes down, and so does the probability that the option policy improves in and around that state. While this is unproblematic when the option policy is fixed, it eventually leads to overly conservative initiation functions: options tend to initiate in smaller and smaller parts of the state-space during the agent’s lifetime. To address these issues, they incorporate tools from off-policy evaluation and use exploration bonuses to increase the initiation probability of states from which policy improvement is most likely.
# 8.2 Learning About Multiple Behaviours
One of the appeals of HRL is that if an agent has access to a large collection of options, it may efficiently achieve good performance on a variety of tasks by re-composing them. However, such a large library of behaviours also comes at the cost of first learning the options themselves, highlighting some of the fundamental trade-offs presented in Section 2.2.
To approach this problem, it is convenient to turn to off-policy algorithms (Precup et al., 2000). Such algorithms allow for learning from data that was not generated by the current policy. Klissarov and Precup (2021) propose update rules to improve all options simultaneously by relying on a decomposition of the state-option distribution, introducing a minimal amount of off-policy corrections, and remaining compatible with any policy optimization algorithm. Their method can also be seen as an all-options policy optimization, similar to all-action updates in RL (Sutton et al., 2001). Daniel et al. (2016) instead leverage the perspective in which options are seen as latent variables. The authors adopt an expectation-maximization approach, which assumes a linear structure of the option policies. Smith et al. (2018) alleviate this assumption and derive a policy gradient objective that improves the data efficiency and interpretability of the learned options. A conceptually related work is proposed by Wulfmeier et al. (2020), which leverages dynamic programming to infer option and action probabilities in hindsight.
We have previously introduced the method of hindsight relabeling (Andrychowicz et al., 2017) as part of the skill discovery methods. We can reframe their approach through this question: if you have a multitude of options, or even a continuous spectrum, which other option should you update for a given trajectory? The authors answer this question by relabeling the trajectory stored in the replay buffer with the final state that was reached. This essentially leverages off-policy learning, as the experience generated by one policy is used to update another policy. The importance of learning off-policy through relabeling is emphasized by Nachum et al. (2018), who show significantly faster learning, and by Levy et al. (2019), who extend the ideas of relabeling experience through hindsight goal transitions.
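A minimal sketch of hindsight relabeling with a sparse goal-reaching reward. The transition field names are assumed for illustration, not those of any particular implementation:

```python
def hindsight_relabel(trajectory, reward_fn):
    """Return a copy of the trajectory relabeled with the achieved goal:
    the final state reached stands in for the originally intended goal."""
    achieved_goal = trajectory[-1]["next_state"]
    relabeled = []
    for step in trajectory:
        new_step = dict(step)                # copy; buffer stays intact
        new_step["goal"] = achieved_goal
        new_step["reward"] = reward_fn(new_step["next_state"], achieved_goal)
        relabeled.append(new_step)
    return relabeled

# Sparse reward: 1 only when the goal is exactly reached.
sparse_reward = lambda s, g: 1.0 if s == g else 0.0

# A "failed" attempt at goal 5 that only reached state 2 ...
traj = [
    {"state": 0, "action": 1, "next_state": 1, "goal": 5, "reward": 0.0},
    {"state": 1, "action": 1, "next_state": 2, "goal": 5, "reward": 0.0},
]
# ... becomes a successful demonstration for goal 2.
new_traj = hindsight_relabel(traj, sparse_reward)
assert new_traj[-1]["goal"] == 2 and new_traj[-1]["reward"] == 1.0
assert traj[0]["goal"] == 5   # original buffer entry is untouched
```

Every trajectory thus carries reward signal for some goal, which is what makes the approach effective under sparse rewards.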
Is it possible to sample-efficiently learn about multiple options from a single stream of experience? Barreto et al. (2020) propose the Generalized Policy Improvement update rule to answer this question. The authors extend the concept of improvement from a single policy to multiple policies simultaneously. Specifically, their theorem states that, for a given set of policies, $\pi_1, \pi_2, \ldots, \pi_n$, and their associated approximate $Q$-values, $Q^{\pi_1}, Q^{\pi_2}, \ldots, Q^{\pi_n}$, if
$$
\pi(s) \in \operatorname*{argmax}_{a} \, \max_{i} \, Q^{\pi_i}(s, a),
$$
then $Q^{\pi}(s, a) \geq \max_{i} Q^{\pi_i}(s, a)$. This update rule is used by Barreto et al. (2019a) to efficiently learn how to execute a combination of options. Thakoor et al. (2022) further generalize the results beyond Markov policies, in particular to options whose execution duration follows a geometric distribution. The idea of learning efficiently about multiple policies is closely related to concepts such as the successor representation (Dayan, 1993) and successor features (Barreto et al., 2017), as well as other decompositions of the transition function (Touati et al., 2023).
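In code, the GPI action choice is a max over policies followed by an argmax over actions. A minimal tabular sketch, with toy $Q$-values invented for illustration:

```python
import numpy as np

def gpi_policy(q_values, state):
    """Generalized Policy Improvement: act greedily with respect to the
    pointwise maximum over the Q-functions of a set of policies.

    q_values: array of shape (n_policies, n_states, n_actions).
    Returns argmax_a max_i Q^{pi_i}(state, a).
    """
    combined = q_values[:, state, :].max(axis=0)  # max over policies
    return int(np.argmax(combined))               # argmax over actions

# Toy example: two policies, each "good" at a different thing.
q = np.zeros((2, 3, 2))
q[0, :, 0] = 1.0   # policy 0 values action 0 everywhere
q[1, 2, 1] = 5.0   # policy 1 values action 1 highly in state 2

assert gpi_policy(q, 0) == 0   # follow policy 0's preference ...
assert gpi_policy(q, 2) == 1   # ... but switch where policy 1 is better
```

The theorem guarantees this combined policy is at least as good as every constituent, which is what makes it useful for composing a library of option policies without further learning.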
# 8.3 Combining Rewards
When learning option policies through their option reward functions, we are faced with another important question: how should we balance the option reward and the environmental reward? Dayan and Hinton (1993) argue that the option policies should be agnostic to the environmental reward and learned only through the intrinsic one, leading to specialised options. Vezhnevets et al. (2017) take a softer approach and provide both rewards, possibly because the environmental reward contains rich information in the environments that were considered. In other cases, there is no intrinsic reward at all (Bacon et al., 2017). Sutton et al. (2023) investigate these questions from the perspective of planning and learning with options that either respect or do not respect the environmental reward. The authors show that reward-respecting options (that is, options that take the environmental reward into consideration) are much more effective when used for planning. Zahavy et al. (2022) adopt a constrained-optimization point of view to balance these objectives and leverage Lagrange multipliers in practice. A thorough examination of the trade-offs in how hierarchical agents combine environmental and intrinsic reward is yet to be made.
# 9. Related Fields
We now discuss the fields related to HRL, covering different types of abstractions, continual RL, programmatic RL, and cooperative multi-agent RL.
# 9.1 State and Action Abstractions in Reinforcement Learning
Scaling RL for real-world applications faces challenges in handling high-dimensional or noisy observations and large action spaces. Accordingly, the RL community has long explored abstraction, which in computer science practice suppresses irrelevant low-level details so that reasoning can proceed at a higher conceptual level (Colburn and Shute, 2007), as a means to mitigate the curse of dimensionality and improve sample efficiency (Konidaris, 2019; Ho et al., 2019; Abel, 2022). Abstraction can be accomplished either through explicit aggregation of states and actions (Li et al., 2006), or by using neural networks as a mapping from the raw state or action space to an abstract space—a process often referred to as representation learning (Abel, 2022). Various forms of abstraction have been proposed in the RL literature, each targeting distinct equivalence relations to capture different aspects of the learning problem.
State abstraction offers a principled approach to scaling RL to control tasks involving high-dimensional observations, such as images, which often contain substantial task-irrelevant details. Li et al. (2006) survey a spectrum of state-abstraction schemes, each defined by its own equivalence criterion. For example, some merge states that yield identical immediate reward and transition dynamics under every action, while others require the same optimal action-value functions. In contrast, bisimulation metrics (Ferns et al., 2004, 2011; Castro, 2020; Zhang et al., 2021a; Luo et al., 2025) dispense with such rigid equivalence by quantifying how much two states differ in their reward distributions and transition kernels, which enables grouping those whose combined divergence falls below a chosen threshold. To make state abstraction more deep-learning-friendly, recent approaches introduce differentiable objectives, specifically reward prediction and self-prediction losses defined with respect to a learned representation, to train compact, informative embeddings (Gelada et al., 2019; Ni et al., 2024).
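As a toy illustration of the underlying idea (a one-shot pass, not the full fixed-point computation of a bisimulation metric), the sketch below merges tabular states whose per-action rewards and transition rows agree within a tolerance:

```python
import numpy as np

def aggregate(R, P, tol=1e-6):
    """Group states with matching rewards and transition rows.

    R: (n_states, n_actions) reward table.
    P: (n_actions, n_states, n_states) transition tensor.
    Returns a dict mapping each state to an abstract-group index.
    """
    groups, labels = [], {}
    for s in range(R.shape[0]):
        for g, rep in groups:
            close_r = np.allclose(R[s], R[rep], atol=tol)
            close_p = np.allclose(P[:, s, :], P[:, rep, :], atol=tol)
            if close_r and close_p:
                labels[s] = g        # merge s into an existing group
                break
        else:
            groups.append((len(groups), s))
            labels[s] = len(groups) - 1
    return labels

# States 0 and 1 are behaviourally identical; state 2 differs in reward.
R = np.array([[1.0], [1.0], [0.0]])
P = np.zeros((1, 3, 3))
P[0, :, 2] = 1.0                     # every state transitions to state 2
labels = aggregate(R, P)
assert labels[0] == labels[1] and labels[0] != labels[2]
```

True bisimulation metrics refine this by comparing distances between the *groups* states transition to, iterated to a fixed point, and by replacing the hard threshold with a continuous distance.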
Another line of work focuses on state-action abstraction, notably MDP homomorphism, which maps state–action pairs to abstract equivalents while preserving transition and reward structure (Ravindran, 2004; Ravindran and Barto, 2001, 2004; Narayanamurthy and Ravindran, 2008; Rezaei-Shoshtari et al., 2022). This aggregation of the state-action space, termed model minimization, forms an abstract MDP capable of capturing symmetrical aspects of the environment.11
As for action abstraction, it can be classified into two categories: per-timestep and multiple-timestep. Per-timestep action abstraction is commonly applied to mitigate the computational complexity associated with large action spaces, involving action elimination (Even-Dar et al., 2006; Zahavy et al., 2018), action embedding or transformation (Van Hasselt and Wiering, 2009; Dulac-Arnold et al., 2015; Jiang et al., 2023), and affordances (Abel et al., 2014; Fulda et al., 2017; Khetarpal et al., 2020a), which reduce the effective action space to only those actions that satisfy a given intent or task-relevant criterion in the current state. Per-timestep action abstraction can also be extended to policy abstraction (Barreto et al., 2019a; Zhang et al., 2023), which provides a framework for generalizing and compressing policy behaviours by mapping detailed decision-making strategies into a succinct abstract space. Multiple-timestep action abstraction, often referred to as temporal abstraction, is a fundamental aspect of HRL. It can be either closed-loop, as described in the option framework (Sutton et al., 1999b), or open-loop, as a compression of an action sequence (Pertsch et al., 2021).
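Per-timestep action abstraction via affordance-style masking can be sketched in a few lines. The mask here is hand-set for illustration; in practice it would come from a learned affordance or elimination function:

```python
import numpy as np

def masked_greedy(q_row, affordance_mask):
    """Greedy action choice restricted to afforded actions: actions that
    fail the mask are assigned -inf value and can never be selected."""
    masked = np.where(affordance_mask, q_row, -np.inf)
    return int(np.argmax(masked))

q_row = np.array([3.0, 5.0, 1.0])
mask = np.array([True, False, True])   # action 1 is not afforded here
assert masked_greedy(q_row, mask) == 0  # best *afforded* action
```

The same mechanism applies one level up in a hierarchy: an option’s initiation set acts as exactly this kind of mask over the high-level action space.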
These abstraction types naturally interface with HRL, which provides a framework for integrating them effectively. In addition to temporal abstraction, HRL facilitates the integration of various types of state and action abstractions. In classical HRL, two common forms of state abstraction are employed. First, state abstraction within the high-level controller, enabling learning or planning in a more tractable space; Feudal RL (Dayan and Hinton, 1993), as a prominent example, employs information hiding to abstract low-level details from the state observed by the manager. Second, state abstraction within the low-level controller, which abstracts states irrelevant to a particular option; state abstractions within MAXQ (Dietterich et al., 1998) and option models are natural examples, as options can be defined exclusively for states where the option is applicable. Classical HRL also incorporates per-timestep action abstractions. In the option framework (Sutton et al.,
1999b), the initiation set serves as a high-level per-timestep action abstraction, indicating the affordance of a specific option in different states.
Several HRL methods leverage state and action abstractions in addition to temporal abstraction. Relativized options (Ravindran and Barto, 2002; Ravindran, 2003; Ravindran and Barto, 2003) integrate state-action abstraction (MDP homomorphism) techniques within an HRL framework to generate concise representations of a related task family. These options are defined without an absolute frame of reference, and their policies adapt according to the circumstances of their invocation, enabling effective multi-task knowledge transfer. Portable options (Konidaris and Barto, 2007) extend this concept, ensuring that the option depends solely on abstract states characterized by task-invariant descriptors. Castro and Precup (2010) apply a bisimulation metric for two different MDPs to facilitate knowledge transfer and propose an option-bisimulation metric to quantify the behavioural discrepancy between states under an option. Abel et al. (2020) propose a value-preserving abstraction, combining state abstractions and options to ensure the representation of near-optimal policies is maintained. In their approach, the state abstraction $\phi$ , which maps the state to an abstract state, defines the initiation and termination functions for a set of $\phi$ -relative options. Khetarpal et al. (2021) extend their definition of affordances (Khetarpal et al., 2020a), introducing temporally extended intents and option affordances that benefit planning in temporally abstract partial models. Hansen-Estruch et al. (2022) connect GCRL and bisimulation metrics. The authors propose a state-goal bisimulation metric to learn a shared state-goal representation, improving representation learning across tasks defined by different goals.
# 9.2 Continual Reinforcement Learning
Continual RL defines the problem setting in which any component of the environment, such as the transition function, the reward function, the state space, or the action space, may change over time (Khetarpal et al., 2020c). Continual RL emphasizes the stability-plasticity dilemma (Carpenter and Grossberg, 1988) which arises when training neural networks under nonstationarity: should we prioritize recent experiences or remember previous experiences? A common example is when an agent is faced with a series of tasks within a complex environment, without being told when tasks are changing. Such an example illustrates the importance of fast adaptation as a desirable quality in a continually learning agent. A related and well-known difficulty is in avoiding catastrophic forgetting, where an agent adapts adequately to the latest experiences, but completely forgets what it learned in past experiences.
To face the challenges posed by the continual RL problem setting, there exists a variety of methods, such as explicit knowledge retention mechanisms or leveraging the structure shared across tasks. Agents empowered by a set of reusable skills are a part of the latter category: they have the potential to efficiently adapt to new tasks by recombining or fine-tuning their library of skills, minimizing the need to relearn from scratch (e.g., Klissarov and Machado, 2023). Additionally, HRL agents could mitigate catastrophic forgetting by expanding and filtering their set of skills over time. One of the fundamental reasons for the synergy between HRL and continual RL is that both fields rarely focus on optimally solving any of the tasks that are being given. Instead, they are concerned about fast adaptation and transferability.
While promising, integrating HRL and continual RL presents open research challenges. As mentioned in Section 8, it is necessary to develop scalable skill discovery methods that
can function in non-stationary settings, devise frameworks that jointly optimize for continual learning and HRL objectives, and design benchmarks and metrics for evaluating agents.
# 9.3 Programmatic Reinforcement Learning
As stated in Section 2, HRL conceptually makes an analogy to programming languages and formal systems. An example of this connection is Hoare Logic (Hoare, 1969), a formal system for assessing the correctness of imperative programs, which shares a similar structure with the option framework (see Section 3.2), including initiation sets (pre-conditions), policies (commands), and termination conditions (post-conditions). Both frameworks facilitate reasoning about action sequences, thereby enhancing the structuring of complex decision-making processes. Research efforts have been undertaken to bridge the gap between HRL, programming languages, logic, and formal methods.
Programs as high-level policy. Early approaches, such as HAM (Parr and Russell, 1997) and PHAM (Andre and Russell, 2000), utilized hierarchies of partially specified finite-state machines (FSMs) to structure policies. There are four types of states in HAMs: Action states execute actions, Call states execute subroutines, Choice states select subsequent states non-deterministically, and Stop states halt execution and return control to prior call states. This provided a prototype for early HRL methods, allowing for better compositionality, transferability (Andre and Russell, 2000), and state abstraction (Andre and Russell, 2002). More recent approaches utilize programs, specifically in domain-specific languages (DSLs), as high-level policies to guide lower-level RL agents; these are often called programmatic policies. Such an approach allows the system designer to inject biases that could, for example, improve sample efficiency over neural representations (Moraes et al., 2025).
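The four HAM state types can be illustrated with a toy interpreter. The machine definitions below are invented, and a real HAM composes with the underlying MDP and *learns* at choice points rather than picking uniformly at random:

```python
import random

def run_machine(machines, name, trace, rng):
    """Execute one partially specified finite-state machine."""
    for node in machines[name]:
        kind = node[0]
        if kind == "action":
            trace.append(node[1])                       # Action state
        elif kind == "call":
            run_machine(machines, node[1], trace, rng)  # Call state
        elif kind == "choice":
            branch = rng.choice(node[1])                # Choice state:
            run_machine(machines, branch, trace, rng)   # nondeterministic
        elif kind == "stop":
            return                                      # Stop state

machines = {
    "root":     [("call", "approach"),
                 ("choice", ["push", "pull"]),
                 ("stop",)],
    "approach": [("action", "step"), ("action", "step"), ("stop",)],
    "push":     [("action", "push_door"), ("stop",)],
    "pull":     [("action", "pull_door"), ("stop",)],
}

trace = []
run_machine(machines, "root", trace, random.Random(0))
assert trace[:2] == ["step", "step"]
assert trace[2] in {"push_door", "pull_door"}
```

The key property is that only the choice points are left to the learner; everything else is fixed program structure, which shrinks the policy search space.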
Programs convey structured, interpretable, and unambiguous information, and their incorporation into the policy space can reduce the search space for the overall solution and offer a natural method for integrating prior knowledge symbolically. The structured representation of these programs allows one to decompose policies into options that can also be used to induce spaces that are more conducive to search (Moraes and Lelis, 2024; Moraes et al., 2025). In general, the programs can be either hand-crafted (Andreas et al., 2017; Sun et al., 2020), synthesized automatically by construction or synthesis on a predetermined syntax (Carvalho et al., 2024) or semantic (latent) space (Yang et al., 2021b; Hasanbeig et al., 2021; Moraes et al., 2023; Moraes and Lelis, 2024), by parameterizing the program space, also known as neuro-symbolic (Sheth and Roy, 2023) approaches (Denil et al., 2017; Sohn et al., 2018; Trivedi et al., 2021; Zhao et al., 2021; Qiu and Zhu, 2022; Liu et al., 2023a; Lin et al., 2024) or by leveraging foundation models (Wang et al., 2023a; Klissarov et al., 2025a; Moraes et al., 2025). Learning search guidance for these spaces is an active area of research (Medeiros et al., 2022; Aleixo and Lelis, 2023). The idea of decomposing policies into subprograms has also been explored even when the underlying policy is a neural network (Alikhasi and Lelis, 2024).
Programs to intrinsic rewards. Akin to the intrinsic reward described in Section 4, recent studies (Jothimurugan et al., 2019; Icarte et al., 2022; Furelos-Blanco et al., 2023; Venuto et al., 2024) demonstrate the feasibility of “translating” the formal languages (e.g., programs or FSMs) into the reward signal to enhance the RL agent.
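The translation from an FSM task description to reward can be sketched as a small reward-machine-style automaton in the spirit of this line of work; the machine states, events, and rewards below are invented for illustration:

```python
class RewardMachine:
    """FSM that tracks task progress over labelled events and emits
    reward on transitions, turning a formal task spec into a signal."""

    def __init__(self, transitions, initial):
        # transitions: (node, event) -> (next_node, reward)
        self.transitions = transitions
        self.node = initial

    def step(self, event):
        self.node, reward = self.transitions.get(
            (self.node, event), (self.node, 0.0))  # unmatched: no change
        return reward

# Task: first pick up the key, then open the door.
rm = RewardMachine({("u0", "key"):  ("u1", 0.0),
                    ("u1", "door"): ("u2", 1.0)},
                   initial="u0")

assert rm.step("door") == 0.0   # door before key: no progress, no reward
assert rm.step("key") == 0.0    # progress, but reward is reserved ...
assert rm.step("door") == 1.0   # ... for completing the ordered task
```

Because the automaton state summarizes task progress, it can also be appended to the agent’s observation, giving an otherwise Markov-violating temporal task a Markov formulation.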
Distilling neural policies into interpretable programs. A series of studies focuses on condensing an agent’s policy into more hierarchical, interpretable, and verifiable formats such as programs (Verma et al., 2018, 2019) or decision trees (Bastani et al., 2018), making the resulting policies both lighter-weight and clearer.
# 9.4 Cooperative Multi-Agent Reinforcement Learning
Cooperative multi-agent RL (Cooperative MARL) and HRL can be seen as conceptually connected: managing problem complexity using the structure of the problem. By breaking down large-scale problems into more manageable sub-problems, both approaches improve tractability and facilitate learning. In cooperative MARL, decomposition is achieved by distributing the decision-making process among multiple agents, whereas in HRL, it is accomplished through temporal abstraction. As an example, Feudal RL (Dayan and Hinton, 1993) can be viewed as a multi-agent system comprising managers and workers. This framework naturally extends to cooperative MARL settings (Ahilan and Dayan, 2019). Extensive research has explored the integration of HRL with cooperative MARL; interested readers are referred to Section 3.5 of the work by Pateria et al. (2021) for further details.
# 10. Promising Domains for Hierarchical Reinforcement Learning
In this work, we have examined a wide diversity of HRL approaches, each time highlighting the important ways in which they help decision-making through the benefits we laid out in Section 2.1. The vast body of research in HRL encompasses a wide spectrum of methods spanning multiple environments and domains. Given this diversity of approaches, a key question emerges: in what domains should we expect HRL to be most effective? One obvious criterion is for the domain to contain temporally extended tasks, as short-horizon tasks offer limited opportunities for leveraging temporal abstractions. For example, decomposing short-horizon tasks into subtasks is likely to be less fruitful than decomposing long-horizon ones. However, can we go beyond this simple criterion to predict the suitability of HRL methods?
As illustrated in Section 2.2, one of the motivations for the HRL formalism is that it is a way to efficiently obtain good solutions within a certain sample and computation budget. This is particularly relevant in complex environments, where optimality is impractical. Should HRL then be considered as a fallback option when non-hierarchical RL fails in complex environments? This perspective positions HRL as a last resort when the task is too hard, but importantly, does not rely on any concrete intuition as to why HRL should even work in such situations. To provide a more informative answer, we go back to the fundamental idea that was used to introduce the methods in this work. This idea is that HRL methods exploit structure. A complex environment lacking exploitable structure might not benefit from HRL. Similarly, a complex environment where we only care about a single task might limit HRL’s advantages, given the inherent overhead of learning a hierarchy. Therefore, task complexity alone is not a sufficient condition for the effectiveness of HRL methods.
HRL appears best suited for long-horizon environments that allow for a diversity of goals that share a structural overlap (whether these goals are defined by the environment or the agent itself). From this perspective, open-ended systems are particularly promising domains for HRL methods. Hughes et al. (2024) define an open-ended system as
one that presents a constant flow of novel and possibly learnable goals. It is common in such systems that these goals share, to a degree, a common underlying structure, which makes HRL particularly appealing. Below, we showcase specific domains exemplifying these characteristics. Importantly, this list is not exhaustive but rather serves to illustrate settings where HRL might excel.
# 10.1 Example Environments and Applications
Web Agents. The World Wide Web, a dynamic and ever-changing environment, presents a unique challenge for AI agents. The recent surge in interest has led to a variety of implementations of challenging domains, such as Android-in-the-Wild (Rawles et al., 2023) or WebArena (Zhou et al., 2024). The web’s near-infinite tasks and constantly evolving goals demand adaptability and the ability to decompose complex objectives into manageable subgoals. As mentioned in Section 3, even if the resulting agent is not hierarchical (i.e., does not explicitly carry a set of skills), learning to navigate the web through HRL methods, such as curriculum-based ones, is particularly important to address the sheer complexity of the web. Indeed, web agents must learn to navigate a constantly shifting landscape of information and services, adapting to new data, evolving user preferences, and the emergence of novel websites and services. Another important characteristic is that many tasks of interest share a lot of underlying structure, a key point of HRL. Overall, this complex and open-ended domain requires agents capable of learning, adapting, and generalizing across multiple timescales, ultimately revolutionizing how we interact with the online world.
Robotics. Robotics, with its emphasis on embodied intelligence and real-world interaction, presents a compelling domain for exploring the potential of HRL methods. The tasks robots face, from navigating complex environments to manipulating objects with dexterity, involve long horizons where, at each step, a low-level action is sampled from a continuous action space. HRL offers a natural framework for decomposing these complex tasks into manageable sub-policies, allowing robots to learn and refine abstract skills while also developing higher-level strategies for sequencing and coordinating them. Practical implementations of interest include AI2-THOR (Kolve et al., 2017), Habitat (Szot et al., 2021; Puig et al., 2024), CALVIN (Mees et al., 2022) and OGBench (Park et al., 2025a).
The ability to recompose learned skills into novel combinations is crucial for robots operating in unstructured and dynamic environments, where adaptability and generalization are key. For instance, a robot learning to grasp objects might develop sub-policies for reaching, orienting its gripper, and applying the appropriate force. Ideally, these individual skills could then be recombined and adapted to grasp a wide variety of objects in different contexts, without requiring retraining from scratch. The long horizons inherent in many robotic tasks, coupled with the need for flexible and adaptable skill acquisition, make HRL a promising approach for developing robots capable of performing complex, real-world tasks with increasing autonomy and efficiency.
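The grasping example above can be made concrete with a minimal options-style control loop, the classic formalization of sub-policies in HRL. The `Option` class and `run_hierarchy` function below are an illustrative sketch, not any specific paper's implementation: a meta-policy repeatedly selects a temporally extended option, which then emits low-level actions until its termination condition fires.

```python
class Option:
    """A temporally extended sub-policy with its own termination condition."""
    def __init__(self, name, policy, terminate):
        self.name = name
        self.policy = policy        # maps state -> low-level action
        self.terminate = terminate  # maps state -> bool (sub-goal reached?)

def run_hierarchy(env_step, state, meta_policy, goal, max_steps=100):
    """Two-level control loop: the meta-policy picks an option, which then
    emits low-level actions until its termination (or the overall goal) fires."""
    steps = 0
    while state != goal and steps < max_steps:
        opt = meta_policy(state)                        # high-level decision
        while steps < max_steps:
            state = env_step(state, opt.policy(state))  # low-level step
            steps += 1
            if opt.terminate(state) or state == goal:
                break
    return state
```

In a toy one-dimensional chain (states are integers, the action adds to the state), a single reusable "advance" skill that terminates at every multiple of five can be sequenced by the meta-policy to reach an arbitrary goal, mirroring how reach/orient/grasp skills would be recombined on a robot.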
Open-ended games. Training AI agents on games has a long history of striking successes in domains like Go (Silver et al., 2017) or Atari 2600 games (Mnih et al., 2015). However, for HRL to be particularly effective, the domain should be complex, long-horizon, and open-ended. We have seen in Section 4.8 such an example, where a goal-conditioned policy trained on a large diversity of tasks led to human-timescale adaptation. Key to this success was the fact that data was readily available through fast simulation, allowing for quicker research iteration. This makes it particularly interesting to study open-ended games in order to better understand HRL methods. We now provide two such examples.

NetHack. NetHack is a complex roguelike game and an ideal environment for exploring the benefits of HRL. It has been brought to the RL community through the NetHack Learning Environment (Küttler et al., 2020). Its open-ended nature, procedurally generated dungeons, and long-horizon gameplay require exploration, planning, and adaptation across multiple timescales. Success requires not just immediate tactical decisions, but also strategizing towards long-term goals, demanding credit assignment across extended temporal spans. The vast diversity of situations encountered also requires generalization, making HRL's ability to learn reusable sub-policies and higher-level strategies particularly valuable.

Minecraft. Minecraft (Johnson et al., 2016; Kanervisto et al., 2021), with its expansive, procedurally generated world and open-ended gameplay, presents a compelling testbed for HRL algorithms. The game requires navigating across diverse biomes, gathering resources, crafting tools and structures, and ultimately, surviving and thriving. This requires planning and execution across multiple timescales. For instance, while the immediate goal might be chopping down a tree for wood, this action serves the higher-level objective of building a shelter for protection against nocturnal mobs.
Furthermore, Minecraft’s crafting system inherently embodies a hierarchical structure. Creating complex items like diamond tools requires a chain of prerequisite crafting steps, each with its own subgoals and resource requirements. HRL agents could learn to decompose these complex tasks into manageable sub-policies, mirroring the hierarchical nature of crafting itself. | Developing agents capable of exploring, planning and learning in complex open-ended environments is a grand challenge in artificial intelligence (AI). Hierarchical reinforcement learning (HRL) offers a promising solution to this challenge by discovering and exploiting the temporal structure within a stream of experience. The strong appeal of the HRL framework has led to a rich and diverse body of literature attempting to discover a useful structure. However, it is still not clear how one might define what constitutes good structure in the first place, or the kind of problems in which identifying it may be helpful. This work aims to identify the benefits of HRL from the perspective of the fundamental challenges in decision-making, as well as highlight its impact on the performance trade-offs of AI agents. Through these benefits, we then cover the families of methods that discover temporal structure in HRL, ranging from learning directly from online experience to offline datasets, to leveraging large language models (LLMs). Finally, we highlight the challenges of temporal structure discovery and the domains that are particularly well-suited for such endeavours. | [
"cs.AI"
] |
# 1 INTRODUCTION
Neighborhood environments influence human well-being and have become a focus of research in urban studies, public health, and family social science [12]. Assessing neighborhood environmental features is important for understanding how neighborhood contexts affect outcomes like adolescent development [12], mental health [7], and social cohesion [2, 7]. Neighborhood environment assessments have relied on qualitative and semi-quantitative methods, including surveys (e.g., Los Angeles Family and Neighborhood Survey) [2] and interviews [12]. As street view imagery (SVI) becomes more accessible, researchers have created systematic social observation (SSO) protocols [10, 11] to evaluate neighborhood environments through structured visual analysis of SVI. Protocols involve multiple human coders applying detailed codebooks to identify visual cues, such as signs of physical disorder, physical decay, and street safety. In practice, human coders rarely produce identical annotations for the same image, so multiple coders are needed per image, along with statistical analysis to compare their results. Despite offering invaluable insights, the process remains labor-intensive, dependent on trained experts, and difficult to scale across a wide variety of study contexts or geographic regions.
Recent advancements in vision-language models (VLMs) have enabled researchers to automate certain aspects of the neighborhood environmental assessment process. Although researchers have begun to use VLMs with SVI [5, 6], these efforts often apply models to a single image annotation task, such as identifying objects without a structured framework for adapting the models to comprehensively assess neighborhood environments. A limitation is the lack of a systematic method to “teach” VLMs in the way human coders are trained, including learning from the literature, coding protocols, and annotated examples. In addition, VLM results are often accepted as-is, without providing feedback to the researcher.
To address this gap, we introduce StreetLens, an end-to-end researcher-centered workflow that mimics general human coder training processes. StreetLens guides the VLM through reviewing relevant studies, studying coding manuals, examining example annotations, and comparing model outputs with those of experienced coders. StreetLens automates the assessment pipeline using open-source VLMs and supports flexible adaptation to different study contexts and geographic regions. Specifically, StreetLens enables researchers to embed domain knowledge as a central component of the workflow by leveraging relevant studies to configure the VLM through role prompting [16], explicitly defining the VLM's role in the assessment process. StreetLens then processes questions from the codebook (e.g., assessing physical disorder or identifying objects), retrieves relevant SVI, and generates structured annotations aligned with expert-defined protocols. StreetLens supports a feedback loop that allows researchers to compare and interpret automated coding against human coding, further facilitating the refinement of the research process. To support accessibility, we provide a Google Colab notebook that enables users to run StreetLens with either publicly available or user-provided image data, eliminating the need for advanced technical expertise. The result is a flexible and reusable workflow that facilitates assessing neighborhood environmental features and benefits various studies across geographic settings.
# 2 CASE STUDY
This section introduces a case study from family social science that motivates the workflow of StreetLens. The case study [11] aims to assess how neighborhood environments relate to adolescents' ethnic and racial label usage. To match labels from semi-structured interviews with potential environmental features in neighborhoods, Pasco and White [11] leverage the established SSO protocol [10] (i.e., a detailed set of instructions) to train human coders. Multiple trained coders then use Google Earth Pro to virtually walk through each street segment, evaluating the degree of physical decay (e.g., deteriorated buildings, poor sidewalks) and sociocultural symbols (e.g., Spanish-language signs, Latino-owned businesses).
After coding environmental features, Pasco and White [11] assess the reliability of ratings by comparing coders' assessments of the street segments using intraclass correlation coefficients. Such a validation step removes outliers and ensures consistent ratings across other coders while minimizing personal bias. Involving multiple coders accounts for variations in individual perception and judgment, thus enhancing the overall quality of the assessments. Therefore, StreetLens aims to serve as an additional agentic coder by enabling researchers to create an automated coder with domain-specific materials.
# 3 STREETLENS
StreetLens provides a researcher-oriented workflow (Figure 1), targeting users whose research involves neighborhood environmental assessments. A researcher begins by using StreetLens through a simple, guided interface. The first module, M1. Data Processor, prompts the researcher to upload key materials such as codebooks, protocols, related papers, and example annotations, as well as to specify the study area for retrieving SVI data. This module organizes and prepares all inputs for the next steps. Next, M2. Automated Prompt Tuning uses the collected domain knowledge to define the role of the VLM agent and generate protocol-aligned prompts that follow the researcher’s coding instructions. These prompts are passed to M3. VLM Processor, which analyzes street-level images and generates the assessments of environmental features. The researcher then reviews the results with the examples created by human coders. Finally, M4. Feedback Provider allows the VLM agent to communicate back to the researcher, offering explanations on how the VLM agent interpreted the coding instructions. This feedback helps the researcher understand the agent’s reasoning.
M1. Data Processor. StreetLens starts by asking the researcher to choose a study area, such as a city or specific census tracts, where they want to assess environmental features. Based on the selected area, the system returns a set of predefined point locations. These points are derived from U.S. Census TIGER road data, which has been segmented at 5-meter intervals to support efficient data retrieval. For this demonstration, StreetLens uses Google Street View imagery to match the source used in the original case study with human annotations. Once the study area is set, the system prompts the researcher to indicate which materials are available, such as a codebook, protocol, sample annotations, or academic papers in the domain that the user wants StreetLens to focus on.

[Figure 1: The StreetLens workflow. Inputs (I1. Codebook, I2. Protocol, I3. Related Papers, I4. Annotations) feed four modules: M1. Data Processor, M2. Automated Prompt Tuning, M3. Vision Language Model Processor, and M4. Feedback Provider. The codebook excerpt shows items such as Disorder1 (strewn garbage, litter, broken glass, clothes, or papers on the block face; rated 0 - None, 1 - Light, 2 - Moderate, 3 - Heavy, 99 - Cannot Evaluate) and Disorder2 (abandoned, broken-windowed, or run-down cars; 1 - Yes, 0 - No).]
M2. Automated Prompt Tuning. StreetLens generates a role by reviewing related papers and assigns it to the VLM to help the model "think" like a trained human coder. This step enhances the VLM's performance [16] and aligns with how human coders learn by reading background studies and understanding what to look for in the environment. To achieve this, StreetLens sends the following prompt, together with the abstracts of relevant papers, to the large language model (LLM):
You are an expert in the following fields and the author of the paper abstracts provided here: [I3. Abstracts of related papers]. Based on the expertise demonstrated, generate a general professional role description of yourself in one to two sentences, starting with "You are" written in the second person. This will be used as a system prompt introduction.
For example, using two studies on neighborhood environments and Mexican-origin adolescents [11, 12], the LLM generates:
You are an expert in conducting mixed-methods research in urban sociology and ethnic studies, focusing on the impact of neighborhood environments on social behaviors and identity formation among Latinx adolescents. You specialize in using systematic social observations and qualitative interviews to compare and contrast the perspectives of researchers and adolescents, providing critical insights into the shared and unique aspects of urban environments and ethnicracial identity within ethnically/racially segregated neighborhoods.
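The role-generation step above amounts to filling a fixed template with the collected abstracts. The helper below is an illustrative sketch of that template assembly (the function name `build_role_prompt` is ours, not the paper's); the returned string would then be sent to the LLM.

```python
def build_role_prompt(abstracts):
    """Assemble M2's role-generation prompt from the abstracts of
    related papers (input I3). Sketch of the template quoted above."""
    joined = "\n\n".join(abstracts)
    return (
        "You are an expert in the following fields and the author of the paper "
        f"abstracts provided here: {joined}. Based on the expertise demonstrated, "
        "generate a general professional role description of yourself in one to "
        'two sentences, starting with "You are" written in the second person. '
        "This will be used as a system prompt introduction."
    )
```

The LLM's one-to-two-sentence answer (like the example above) becomes the system-prompt introduction for every subsequent coding question.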
After defining the specific role for the VLM agent, StreetLens processes the codebook collected in M1, which contains questions and answer options for each environmental feature. For each feature, StreetLens uses the following prompt:
[I1. Question and answer options from codebook] Review the question and answer options above. Guide a vision-language model to assess environmental features in SVI using only visual input. First, write one sentence to complete the system prompt. Then, write 2–3 clear sentences for the user prompt using the EXACT SAME numeric options.
Finally, StreetLens asks if the user wants to modify the prompt using the materials collected in M1. For example, for the case study, protocols include detailed instructions for subjective assessments. We demonstrate how researchers can use the corresponding image-answer pairs to improve the VLM agent's performance by applying in-context learning techniques. Similarly, researchers can incorporate existing annotations from human coders as additional image-answer pairs to improve the model. The following is an example of a generated prompt for assessing the condition of a sidewalk (corresponding to the "Decay 2" code theme in the SSO protocol [11]):
[Assigned role prompt] Assess the environmental condition of the sidewalk in the street view image provided, focusing on its surface and overall maintenance. Evaluate the condition of the sidewalk based on visual cues from the image and choose the best matching option. Options are: 1 - Good (NO holes, sizable cracks, or crumbling or uneven pavement) 2 - Fair (Holes, sizable cracks, or crumbling or uneven pavement, outgrown weeds along SOME of the side walk) 3 - Poor (Holes, sizable cracks, or crumbling or uneven pavement, outgrown weeds along most or ALL of the sidewalk) 99 - Under construction or cannot Evaluate. Your response must be an integer. DO NOT PROVIDE ANY OTHER OUTPUT TEXT OR EXPLANATION.
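Because the prompt above instructs the model to answer with a bare integer, the system still needs to validate that the reply is one of the codebook's options. The following sketch (our illustration, not StreetLens's published code) extracts and checks the rating; the default option set matches the sidewalk example, but it varies per feature.

```python
import re

def parse_assessment(raw_output, allowed=frozenset({1, 2, 3, 99})):
    """Extract the integer rating from a VLM reply and validate it against
    the codebook's answer options for the feature being assessed."""
    match = re.search(r"\d+", raw_output)
    if match is None:
        raise ValueError(f"no integer found in VLM output: {raw_output!r}")
    value = int(match.group())
    if value not in allowed:
        raise ValueError(f"{value} is not a valid codebook option")
    return value
```

Replies that drift from the instructed format (extra text, out-of-range numbers) can then be flagged for re-prompting rather than silently recorded.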
Figure 2 shows an example prompt generated by StreetLens based on input from the codebook, protocol, and relevant academic papers.
M3. Vision Language Model Processor. With the generated prompts from M2 and the retrieved SVI from M1, StreetLens generates assessments for each environmental feature. In this demo, StreetLens loads the open-source VLM InternVL3-2B [17], which combines the image encoder InternViT-300M-448px-V2_5 with the language model Qwen2.5-1.5B. As the prompt includes SVI as well, StreetLens generates the <image> tags matching the number of processed SVI per prompt.
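Matching the number of `<image>` tags to the number of images per prompt can be sketched as below. This is an assumption-laden illustration: the exact tag syntax and message format depend on the model's chat template, and `build_vlm_prompt` is our name for the step, not StreetLens's API.

```python
def build_vlm_prompt(role_prompt, question_prompt, num_images):
    """Prepend one <image> placeholder per retrieved street-view image,
    so the tag count matches the images passed alongside the prompt."""
    image_tags = "\n".join("<image>" for _ in range(num_images))
    return {"system": role_prompt, "user": f"{image_tags}\n{question_prompt}"}
```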
M4. Feedback Provider. After the VLM agent assesses the environmental features, StreetLens provides feedback to researchers that includes explanations of the agent’s assessments, leveraging reasoning capabilities. Moreover, the agent’s output acts as an additional coder, enabling the calculation of inter-rater reliability metrics. In the demo, StreetLens computes the intraclass correlation coefficient to validate the agreement between coders.
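The inter-rater check in M4 can be sketched with a one-way random-effects ICC, computed from the one-way ANOVA mean squares. This is a minimal illustration assuming the ICC(1) variant; the demo text does not specify which ICC form StreetLens computes.

```python
def icc_oneway(ratings):
    """One-way random-effects intraclass correlation, ICC(1).
    `ratings` is a list of per-subject (street-segment) rating lists,
    one rating per coder: [[coder1, coder2, ...], ...]."""
    n = len(ratings)      # subjects (street segments)
    k = len(ratings[0])   # raters (human coders plus the VLM agent)
    grand = sum(sum(r) for r in ratings) / (n * k)
    subject_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subject_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between coders gives an ICC of 1, while ratings that vary only within segments (pure coder disagreement) drive it toward or below 0, flagging the VLM agent or a human coder as an outlier.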
# 4 RELATED WORK
Assessing neighborhood environment characteristics has traditionally involved trained coders conducting SSO and in-person audits. For example, a seminal study by Sampson and Raudenbush [15] employed video-based ratings of over 23,000 street segments in Chicago to quantify physical disorder. To scale such efforts, researchers later adopted “virtual audits” using platforms like Google Earth and Street View [4, 9, 14]. These methods enabled remote assessment of neighborhoods with high inter-rater reliability and reduced cost. Building on this, computer vision techniques have been used to detect and quantify physical features such as urban greenery and sidewalk quality [1] and to infer higher-level, conceptual attributes like perceived safety from imagery [1, 8]. However, such supervised approaches are often constrained by task-specific labels and limited generalizability to new contexts. Recent advances in VLMs offer more flexible solutions by enabling open-ended scene interpretation through joint image-text representations. VLM-based methods have been used to assess walkability [3] and generate structured descriptions of urban environments [13], showing promise in automating scalable, semantically rich neighborhood audits across diverse settings. In contrast to prior work tailored to specific studies, StreetLens is a VLM-based, end-to-end, researcher-centered SSO workflow that simulates the human coder training process, enhancing adaptability across diverse study designs and geographic contexts.
# Figure 2: An example of StreetLens's automated prompt tuning procedure, demonstrating how the system utilizes related academic papers, codebooks, and assessment protocols to generate a domain-specific prompt.
# 5 DISCUSSION AND FUTURE WORK
We introduce StreetLens, a human-centered workflow designed to augment environmental feature assessment by functioning as an additional coder. StreetLens directs the VLM agent to replicate the training and evaluation process used by human coders in collaboration with domain experts, adhering closely to established assessment protocols. StreetLens provides a flexible and reusable framework that enables researchers from diverse disciplines to assign specific roles to VLM agents based on prior domain knowledge. Furthermore, StreetLens implements a feedback loop that facilitates ongoing communication with researchers, offering explanations of the VLM agents’ coding decisions. We will focus on making the workflow more human-centered to better assist researchers throughout the process. This includes adding features that track the origin and history of data and annotations (provenance tracking), which helps researchers understand where results come from and how they were generated. The system will also include tools that improve how VLMs and LLMs make decisions by aligning their reasoning and preferences with those of the researchers. Additionally, the workflow will support explanations generated by LLMs to clarify why certain coding decisions were made. These enhancements will help researchers trust the automated assessments more and make it easier to review and refine results.
# REFERENCES
[1] Filip Biljecki and Koichi Ito. 2021. Street view imagery in urban analytics and GIS: A review. Landscape and Urban Planning 215 (2021), 104217. https://doi. org/10.1016/j.landurbplan.2021.104217
[2] Eileen ES Bjornstrom and Margaret L Ralston. 2014. Neighborhood built environment, perceived danger, and perceived social cohesion. Environment and behavior 46, 6 (2014), 718–744.
[3] Ivan Blečić, Valeria Saiu, and Giuseppe A. Trunfio. 2024. Enhancing Urban Walkability Assessment with Multimodal Large Language Models. In Computational Science and Its Applications – ICCSA 2024 Workshops. Springer Nature Switzerland, Cham, 394–411.
[4] Philippa Clarke, Jaime C. Ailshire, Omar L. Melendez, Michael D. Bader, and Jeffrey D. Morenoff. 2010. Using Google Earth to conduct a neighborhood audit: Reliability of a virtual audit instrument. Health & Place 16, 6 (2010), 1224–1229.
[5] Weiming Huang, Jing Wang, and Gao Cong. 2024. Zero-shot urban function inference with street view images through prompting a pretrained vision-language model. International Journal of Geographical Information Science 38, 7 (2024), 1414–1442.
[6] Hao Liang, Jiaxin Zhang, Yunqin Li, Bowen Wang, and Jingyong Huang. 2024. Automatic Estimation for Visual Quality Changes of Street Space Via Street-View Images and Multimodal Large Language Models. IEEE Access (2024).
[7] Yuqi Liu, Ruoyu Wang, Yi Lu, Zhigang Li, Hongsheng Chen, Mengqiu Cao, Yuerong Zhang, and Yimeng Song. 2020. Natural outdoor environment, neighbourhood social cohesion and mental health: Using multilevel structural equation modelling, streetscape and remote-sensing metrics. Urban Forestry & Urban Greening 48 (2020), 126576.
[8] Nikhil Naik, Jade Philipoom, Ramesh Raskar, and Cesar Hidalgo. 2014. Streetscore - Predicting the Perceived Safety of One Million Streetscapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. 779–785.
[9] Candice L. Odgers, Avshalom Caspi, Christopher J. Bates, Robert J. Sampson, and Terrie E. Moffitt. 2012. Systematic social observation of children’s neighborhoods using Google Street View: a reliable and cost-effective method. Journal of Child Psychology and Psychiatry 53, 10 (2012), 1009–1017.
[10] Candice L Odgers, Avshalom Caspi, Christopher J Bates, Robert J Sampson, and Terrie E Moffitt. 2012. Systematic social observation of children’s neighborhoods using Google Street View: A reliable and cost-effective method. Journal of Child Psychology and Psychiatry 53, 10 (2012), 1009–1017.
[11] Michelle C Pasco and Rebecca MB White. 2020. A mixed methods approach to examining Mexican-origin adolescents’ use of ethnic-racial labels in neighborhood contexts. Journal of Adolescent Research 35, 4 (2020), 489–520.
[12] Michelle C Pasco and Rebecca MB White. 2024. A mixed methods comparison of adolescents’ and researchers’ observations of neighborhood characteristics in Latinx neighborhoods. American journal of community psychology 73, 3-4 (2024), 526–540.
[13] Joan Perez and Giovanni Fusco. 2025. Streetscape Analysis with Generative AI (SAGAI): Vision-Language Assessment and Mapping of Urban Scenes. Available at SSRN 5226191 (2025).
[14] Andrew G. Rundle, Catherine A. M. Richards, Kimberly M. Neckerman, Michael R. Bader, and Julienne J. Teitler. 2011. Using Google Street View to audit neighborhood environments. American Journal of Preventive Medicine 40, 1 (2011), 94–100.
[15] Robert J Sampson and Stephen W Raudenbush. 1999. Systematic social observation of public spaces: A new look at disorder in urban neighborhoods. American journal of sociology 105, 3 (1999), 603–651.
[16] Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, et al. 2024. The Prompt Report: A Systematic Survey of Prompting Techniques. CoRR (2024).
[17] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, et al. 2025. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. arXiv preprint arXiv:2504.10479. | Traditionally, neighborhood studies have employed interviews, surveys, and manual image annotation guided by detailed protocols to identify environmental characteristics, including physical disorder, decay, street safety, and sociocultural symbols, and to examine their impact on developmental and health outcomes. While these methods yield rich insights, they are time-consuming and require intensive expert intervention. Recent technological advances, including vision-language models (VLMs), have begun to automate parts of this process; however, existing efforts are often ad hoc and lack adaptability across research designs and geographic contexts. In this demo paper, we present StreetLens, a human-centered, researcher-configurable workflow that embeds relevant social science expertise in a VLM for scalable neighborhood environmental assessments. StreetLens mimics the process of trained human coders by grounding the analysis in questions derived from established interview protocols, retrieving relevant street view imagery (SVI), and generating a wide spectrum of semantic annotations from objective features (e.g., the number of cars) to subjective perceptions (e.g., the sense of disorder in an image). By enabling researchers to define the VLM's role through domain-informed prompting, StreetLens places domain knowledge at the core of the analysis process. It also supports the integration of prior survey data to enhance robustness and expand the range of characteristics assessed across diverse settings. We provide a Google Colab notebook to make StreetLens accessible and extensible for researchers working with public or custom SVI datasets. 
StreetLens represents a shift toward flexible, agentic AI systems that work closely with researchers to accelerate and scale neighborhood studies. | [
"cs.HC",
"cs.AI"
] |
# 1 Introduction
Human interactions shape relationships through shared understandings, influenced not just by explicit words but by emotional and pragmatic nuances that convey implicit meanings. The ability to interpret beyond the literal meaning of language, known as pragmatics, is essential for social cognition, interpersonal awareness, and emotional intelligence. It allows individuals to navigate conversations fluidly, recognising intentions, cultural contexts, and unspoken implications.
Recent progress in large language models (Brown et al., 2020; Team et al., 2024; Yang et al., 2024a; Achiam et al., 2023; Dubey et al., 2024; Team et al., 2023) has advanced the capabilities of conversational AI. These systems exhibit robust performance in natural language generation, reasoning tasks like math word problems, code generation (Wang et al., 2019; Cobbe et al., 2021; Geva et al., 2021; Clark et al., 2018), etc., largely due to the exploitation of extensive computational resources and vast language datasets. Despite these strengths, current LLMs struggle with effective communication, specifically in capturing the pragmatic and ambiguous dimensions of user inputs. Additionally, conventional training strategies prioritise the production of responses that are safe, objective, and widely acceptable (Glaese et al., 2022). This approach, while ensuring reliability, diverges from the goal of replicating truly human-like conversational behaviour, where the subtleties of context, emotion, and cultural nuance are critical.

[Figure 1: Ami: "Can you make a cake?" John: "Can birds fly?" Label-based training yields "John's response is irrelevant to Ami's question," whereas thought-based training yields "John's response, 'Can birds fly?', is a rhetorical question implying an obvious 'yes' to Ami's question."]
While humans naturally engage in pragmatic reasoning, LLMs often struggle with this skill, especially the small LLMs (SLMs) (Amirizaniani et al., 2024), which are often used in practical scenarios due to their lower inference costs, reduced latency, and suitability for local deployment. Given the increased interaction between humans and LLMs, it is very important for the LLMs to obtain substantial pragmatic understanding of human language and intent. Recent work has primarily focused on evaluating LLMs' pragmatic understanding, yet efforts to enhance their performance on such tasks remain limited (Van Dijk et al., 2023). Approaches that try to improve LLMs in pragmatic reasoning rely on label-based supervision or policy optimisation over annotated datasets (Wu et al., 2024), but these methods do not explicitly incorporate the reasoning process that humans use to grasp implicit meaning. This is mainly due to the absence of training mechanisms which can explicitly incorporate the reasoning process. For instance, as shown in Figure 1, interpreting the response "Can birds fly?" as "Yes" to the question "Can you make a cake?" requires recognising it as a rhetorical question with an obvious affirmative answer—implying that the speaker's answer to the original question is also an obvious "Yes".
To address this gap, we introduce a novel approach that leverages explicit reasoning, or thoughts, to improve LLMs' pragmatic comprehension. Specifically, we perform thought-based training for the task of implicature recovery, understanding what is implied in a statement even though it is not literally expressed. We then show generalizability on multiple pragmatics domains, which include implicature, presupposition and reference. Unlike reasoning tasks such as math word problems or coding challenges, pragmatic reasoning often lacks definitive answers, making it more challenging. The correct interpretation in a given scenario is highly influenced by context, culture, and the individuals involved. This interpretation is often not described in the raw training data explicitly and cannot be easily captured during the training process. To mitigate this, an explicit intermediate reasoning process must be provided during training along with the correct label, which details the intermediate reasoning process, mimicking how humans derive the correct interpretation by deliberate system-2 thinking (Weston and Sukhbaatar, 2023). Hence, we present a first-of-its-kind pragmatic dataset where each instance includes a thought explaining the reasoning behind the correct label, along with a plausible yet incorrect negative thought justifying the incorrect label. We integrate this thought-based data into both preference-tuning and supervised fine-tuning settings, demonstrating an absolute improvement of 11.12% in accuracy across three model families. Our findings establish the effectiveness of thought-based learning in advancing LLMs' ability to interpret implicit meaning in language. Our contributions are:
• A training framework incorporating explicit reasoning (thoughts), leading to an 11.12% improvement in implicature recovery compared to label-based training approaches (Figure 2).
• A transfer learning analysis examining the effects of thought-based supervised fine-tuning (SFT) and direct preference optimisation (DPO) on unseen tasks, showing an improvement of $16.10\%$ over label-based training approaches (Section 7.2).
• Synthetic QA datasets, Syn-Circa and Syn-Ludwig, consisting of ${\sim}33.75\mathrm{K}$ instances, created by extending CIRCA and LUDWIG to improve understanding of implicit responses (Section 3.2).
• A novel dataset, named ImpliedMeaningPreference, for thought-based implicature recovery, consisting of ${\sim}66.2\mathrm{K}$ instances. This dataset is developed through human-LLM collaboration, integrating multiple implicature recovery datasets (Section 3.1).
# 2 Related Work
Implicature recovery is a central topic in pragmatics, attracting significant attention from linguists and computational researchers alike. One of the most influential theoretical contributions to this field is the formulation of the Gricean Maxims (Grice, 1975), which outline principles governing conversational implicature through Quality, Quantity, Relevance, and Manner.
Various approaches have been proposed to analyse and recover implicatures. For instance, Louis et al. (2020) and Ruis et al. (2023) study indirect answers to polar questions, shedding light on how conversational participants infer unstated meanings. Zheng et al. (2021) leverage hierarchical grammar models to interpret both implicatures and deictic references in structured dialogues. Additionally, Jeretic et al. (2020) explore the role of Natural Language Inference (NLI) in understanding scalar implicatures, while Deng et al. (2014) integrate implicature-based reasoning into sentiment analysis.
Figure 2: This diagram shows the proposed thought-based training framework with two different training mechanisms: 1) SFT (Supervised Finetuning) and 2) DPO (Direct Preference Optimisation). The left side of the diagram shows the preference data generation steps, and the right side of the diagram shows the training pipeline. We use preferred thought+label for SFT and preference tune with the rejected thought+incorrect label and preferred thought+correct label in DPO.
Further contributions in this domain include corpus-based studies such as Lahiri (2015), which provide sentence-level annotations for implicature detection. Work by Schuster et al. (2019) and Li et al. (2021) focuses on employing neural networks and linguistic signals to predict scalar inferences, highlighting the potential of machine learning in implicature comprehension. Despite these advancements, recent benchmarking efforts (Hu et al., 2023; Sravanthi et al., 2024) consistently reveal a persistent performance gap between human reasoning and LLM capabilities in pragmatics.
Building upon these findings, Wu et al. (2024) introduce an open-ended evaluation framework to assess LLMs’ pragmatic abilities, showing the superiority of preference-based learning over supervised fine-tuning when label-based data is considered. Going beyond this, our work incorporates intermediate reasoning steps (thoughts) into the fine-tuning and preference optimisation processes for pragmatic reasoning. Unlike conventional approaches that reward only label accuracy, our method explicitly incorporates thought processes into model training, enabling LLMs to develop a deeper understanding of pragmatics. In the following sections, we present our datasets and methodology, and evaluate the effectiveness of structured reasoning in enhancing LLM performance on pragmatics tasks such as implicature recovery, presupposition and deixis.
# 3 Datasets
In this section, we discuss the process of generating the ImpliedMeaningPreference data and the synthetic QA datasets (Syn-Circa and Syn-Ludwig).
# 3.1 Preference Data Generation
Gathering high-quality preference data typically requires substantial resources and significant human effort. Existing pragmatic QA datasets, such as Circa and Ludwig, include human-annotated mappings between indirect answers and their corresponding direct interpretations (i.e., labels such as yes and no). To minimise human efforts in preference construction, we leverage these existing label mappings: the original mapped label is treated as the preferred label, while its complement is considered the rejected label.
Preferred thought generation: As shown in Figure 2, we generate the thoughts supporting the correct label by prompting gpt-4o-mini. <Question, Indirect answer, Correct label> are given as input to the model, and the model is tasked with generating an intermediate reasoning step that helps map the indirect answer to the label.
Rejected thought generation: We attempted to generate rejected thoughts using a similar approach by providing <Question, Indirect answer, Wrong label> to the model. However, we observed that most of the time the model generated thoughts supporting the correct label, which may be due to the inherent safety guardrails present in the model (Achiam et al., 2023). Therefore, for rejected thought generation, a linguistic expert is tasked with writing templates. The templates are designed to capture wrong reasoning that mimics the misunderstandings humans can have when one or more of the Gricean maxims are flouted. For example, in Figure 1, the maxim of relevance is flouted, and the reply can be misread as John giving an irrelevant answer to the question asked by Ami. Each rejected thought is generated by randomly selecting one of the 50 templates written by the linguist. Prompts for the data generation and sample templates are given in the Appendix, Section 7.2. A sample of preferred and rejected thoughts is verified by linguistic experts. Details of the human evaluation can be found in Appendix A.
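The pairing of preferred and rejected thoughts can be sketched as follows; the template strings and field names here are illustrative assumptions, not the linguist's actual 50 templates or the authors' exact data schema:

```python
import random

# Hypothetical stand-ins for the 50 linguist-written templates; each mimics a
# misreading that arises when a Gricean maxim (here, relevance) is flouted.
REJECTED_TEMPLATES = [
    "The reply '{answer}' is unrelated to '{question}', so no answer is implied.",
    "Taken literally, '{answer}' does not address '{question}', implying '{wrong_label}'.",
]

def make_preference_pair(question, answer, correct_label, preferred_thought):
    """Build one preference instance from an annotated QA pair.

    The preferred thought is assumed to come from an LLM prompted with
    <Question, Indirect answer, Correct label>; the rejected thought is
    filled from a randomly chosen template supporting the wrong label."""
    wrong_label = "no" if correct_label == "yes" else "yes"
    template = random.choice(REJECTED_TEMPLATES)
    rejected_thought = template.format(
        question=question, answer=answer, wrong_label=wrong_label
    )
    return {
        "prompt": f"Question: {question}\nIndirect answer: {answer}",
        "chosen": f"Thought: {preferred_thought}\nLabel: {correct_label}",
        "rejected": f"Thought: {rejected_thought}\nLabel: {wrong_label}",
    }
```

The chosen/rejected fields map directly onto the preferred and dispreferred generations used later in DPO, while SFT consumes only the chosen field.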
# 3.2 Synthetic QA Datasets
To enhance our preference dataset, we expanded the existing QA datasets, facilitating the generation of additional preference annotations. We construct our synthetic QA datasets based on existing polar-question and indirect-answer datasets (Louis et al., 2020; Ruis et al., 2023). Circa (Louis et al., 2020) and Ludwig (Ruis et al., 2023) consist of 3,345 and 601 unique questions, respectively. For generating Syn-Circa and Syn-Ludwig, we take the unique questions from both datasets and generate indirect answers that can be mapped to polar direct answers, i.e., each indirect answer conveys a "yes" or "no" reply to the question. The responses were generated using gpt-4o-mini (Achiam et al., 2023) in a few-shot prompting setting. For each unique question, we generate five answers using five different temperature values (0.0, 0.2, 0.4, 0.6 and 0.8) to obtain varied and creative responses. To guide the model effectively, 3 to 6 examples were randomly selected from a curated set of 50 examples, serving as contextual prompts to steer the generation process. The prompts for generation are given in Appendix B. To verify that the generated responses adhered to the desired criterion of being indirect, we use a BERT-based classifier (Devlin, 2018) trained to distinguish declarative sentences of the questions from indirect answers. Out of the ${\sim}33.75\mathrm{K}$ instances, only five examples were not classified as indirect answers by the classifier. The effects of this data augmentation using synthetic datasets are discussed in Section 7.3.
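The generation loop above can be sketched as follows; `generate` stands in for the gpt-4o-mini call and `examples` for the curated 50-example pool, so both interfaces are assumptions rather than the authors' actual code:

```python
import random

TEMPERATURES = [0.0, 0.2, 0.4, 0.6, 0.8]  # five answers per question

def generate_synthetic_answers(questions, generate, examples, rng=random):
    """Sketch of the Syn-Circa / Syn-Ludwig generation loop.

    `generate(prompt, temperature)` abstracts the LLM call; for each unique
    question we sample 3-6 few-shot demonstrations and query the model once
    per temperature to obtain varied responses."""
    dataset = []
    for question in questions:
        shots = rng.sample(examples, k=rng.randint(3, 6))
        prompt = "\n".join(shots) + f"\nQ: {question}\nIndirect A:"
        for temperature in TEMPERATURES:
            answer = generate(prompt, temperature)
            dataset.append({"question": question, "answer": answer,
                            "temperature": temperature})
    return dataset
```

A filtering pass (e.g., the BERT-based indirectness classifier) would then discard any generated answer that reads as a direct reply.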
# 4 Methodology
This section discusses our approach in detail. We aim to study the impact of incorporating thought training in two settings: 1) Supervised Fine-Tuning (SFT) and 2) Direct Preference Optimization (DPO) (Rafailov et al., 2023). We formulate the task as follows: given an input consisting of an indirect answer to a question along with a context, output the pragmatic interpretation. Let $P(x)$ be the initial prompt, which contains the task description $T_{desc}$ and input description $I_{desc}$, where $x = [T_{desc}, I_{desc}]$. Let $G$ be the generated output, which contains a thinking process $\mathcal{T}_{thought}$ followed by a predicted label $\mathcal{P}_{label}$. Here, $G = [\mathcal{T}_{thought}; \mathcal{P}_{label}]$, consisting of tokens $(g_1, g_2, \ldots, g_{t-1}, g_t)$.
In the general supervised fine-tuning process, we aim to maximize the conditional log-likelihood of the output tokens given the input tokens. In the context of our setting, this corresponds to:
$$
\mathcal{L}_{sft} = - \sum_{t=1}^{|G|} \log P_{\theta}(g_t \mid P(x), g_1, g_2, \ldots, g_{t-1})
$$
Here $\mathcal{L}_{sft}$ is the total loss (negative log-likelihood of the sequence), $|G|$ is the length of the output sequence $G$, $P_{\theta}(g_t \mid P(x), g_1, g_2, \ldots, g_{t-1})$ is the model’s predicted probability of the token $g_t$ at position $t$ given the input prompt $P(x)$ and all previous tokens $g_1, g_2, \ldots, g_{t-1}$, and $\theta$ denotes the model parameters being optimized.
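As a minimal sketch, assuming per-token log-probabilities are already available from the model's softmax, the loss reduces to a masked sum in which only the tokens of the generation $G$ (thought followed by label), not the prompt, contribute:

```python
import math

def sft_loss(log_probs, loss_mask):
    """Negative log-likelihood over the generated sequence G = [thought; label].

    `log_probs` holds per-token log-probabilities for the full sequence
    (prompt + thought + label); `loss_mask` is 1 for tokens belonging to G
    and 0 for prompt tokens, so only the output sequence is penalized."""
    return -sum(lp * m for lp, m in zip(log_probs, loss_mask))
```

In a real trainer the log-probabilities come from the model's logits and the mask is built by zeroing positions up to the prompt length; this sketch only makes the summation explicit.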
Contrary to SFT, in the standard Reinforcement Learning from Human Feedback (RLHF) setup, we use the structure of a Markov Decision Process consisting of a 4-tuple: (States $S$, Actions $A$, Transition Probabilities $T_p$, Rewards $R$). Here, we define a policy function $\pi$, which maps states to actions $(\pi : S \to A)$. The goal is to optimize the policy to maximize the rewards. In our context, given the current state (input prompt), we would like to optimize the policy (language model) to select the actions (which token to predict next) such that the reward function (a function which scores the generated output based on human preferences) yields the maximum value. We aim to study the effects of optimizing the policy over the thoughts and labels together.
Table 1: Class distribution and totals for Train, Validation, and Test datasets.
This means that the probability of the winning generation $(G_W)$, preferred by humans, should be higher than that of the losing generation $(G_L)$, which humans do not prefer. Therefore, the Bradley-Terry model for our setup is:
$$
P(G_W > G_L) = \frac{e^{R(P(x), G_W)}}{e^{R(P(x), G_W)} + e^{R(P(x), G_L)}}
$$
This finally yields the adapted DPO loss for our setting, incorporating policy optimization over thought and labels.
Specifically, $L_{DPO}(\pi_{\theta}; \pi_{ref})$ is:
$$
- \mathbb{E}_{(x, G_W, G_L) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \psi(G_W) - \beta \psi(G_L) \right) \right]
$$
where
$$
\psi(G) = \log \frac{\pi_{\theta}(G \mid P(x))}{\pi_{ref}(G \mid P(x))}
$$
In the above equation, $\pi_{ref}$ is the reference model instantiated with the initial version of the model, $\pi_{\theta}$ is the model obtained after preference tuning, and $\beta$ is the regularizing parameter that penalizes the scenario in which the resulting model drifts far from the base version, resulting in the loss of prior knowledge.
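Assuming summed sequence log-probabilities for the winning and losing generations are available from both the policy and the frozen reference model, the adapted DPO loss for a single triple can be sketched as:

```python
import math

def dpo_loss(beta, logp_w, logp_l, ref_logp_w, ref_logp_l):
    """DPO loss for one (prompt, G_W, G_L) triple.

    Each argument is a summed log-probability log pi(G | P(x)) over thought
    and label tokens together; `ref_*` come from the frozen reference model.
    Computes -log(sigmoid(beta * psi(G_W) - beta * psi(G_L)))."""
    psi_w = logp_w - ref_logp_w  # log(pi_theta / pi_ref) for the winner
    psi_l = logp_l - ref_logp_l  # log(pi_theta / pi_ref) for the loser
    margin = beta * (psi_w - psi_l)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

When the policy equals the reference model, both log-ratios vanish and the loss is $\log 2$; training lowers the loss by widening the margin between the preferred and dispreferred thought+label sequences.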
From a linguistic perspective, our approach is motivated by the need to model pragmatic competence in language understanding. Pragmatic reasoning involves interpreting implied meanings that go beyond the literal content of utterances, as theorized in Grice’s maxims of conversation. Traditional models often struggle with implicature resolution because they lack an explicit mechanism for reasoning about contextually inferred meanings. By integrating structured thought processes into both fine-tuning and preference optimization, our method provides a computational analog to human inferential processes in discourse interpretation. This, we hope, should enable LLMs to better grasp implicatures, handle indirect responses, and align with human-like conversational norms, thereby improving their effectiveness in pragmatic language tasks.
# 5 Experimental Setup
For our experiments, we consider models from three different families: 1) Llama-3.2-1B (Dubey et al., 2024), 2) Qwen-2.5-1.5B (Yang et al., 2024b), and 3) Gemma-2-2B (Team et al., 2024). We also report the zero-shot performance of Llama-3.1-70B for comparison with a large language model. We keep the learning rate at $5e{-}7$ with 500 warmup iterations, and use RMSprop (Ruder, 2016) as our optimiser following Wu et al. (2024). All models in both settings are trained for one epoch (till convergence), and greedy decoding was used throughout the experiments. For Qwen-2.5-1.5B and Gemma-2-2B, we use a global batch size of 32; for Llama-3.2-1B, the global batch size was set to 64. For regularization, we use gradient clipping of 1 in DPO and weight decay of 0.01 in SFT. We use 4 NVIDIA H100 80GB HBM3 GPUs for all the experiments in this work, with a total training time of 8 GPU hours. For all other hyper-parameters, we use the default values. We report all the training and evaluation prompts in Appendix Sections C and D, respectively. We use macro precision (P), recall (R) and F1 scores for evaluation.
# 6 Results
In Table 2, we report the results of our experiments after training with the QA datasets. We note significant improvement after the inclusion of thoughts in SFT and DPO for Llama-3.2-1B and Qwen-2.5-1.5B. For Gemma-2-2B, we observe significant gains in the SFT-with-thought setting and a slight performance decline when thought is incorporated in the DPO setting. We note that DPO was not originally used in the training process of Gemma-2-2B, unlike Llama-3.2-1B and Qwen-2.5-1.5B. We conjecture that since the model was not exposed to DPO during its general training, our training for implicature recovery could not induce the thoughts as effectively as in the other models.
We note that training with just labels gives DPO an edge over SFT across models, aligning with Wu et al. (2024). In contrast, training with thoughts alongside labels provides significantly higher gains for SFT than for DPO, with thought-based SFT outperforming thought-based DPO in most cases. The thoughts contain more explicit signals and the interpretation of the reasoning required to reach the right answer, which may be captured more straightforwardly in the SFT setup than in DPO. Intuitively, the thought-based training mechanism requires larger parameter updates than the scenario where we optimise over only label tokens. In general, the optimisation objective for SFT has no constraints and is more flexible than that of DPO, which requires a regularising parameter $\beta$ for the KL constraint to prevent divergence from the (untrained) base model during training.
Another perspective on this observation was suggested by Feng et al. (2024) and Pal et al. (2024), who show that the gradient of the DPO loss with respect to the preferred (winning) response is lower than that with respect to the dispreferred (losing) response, which hinders the capacity of LLMs to learn to generate the actual human-preferred responses while inducing a tendency to avoid human-dispreferred ones. This effect may be further magnified in our setting, which involves more tokens than the label-only setting.
We also note that our best-performing model, Gemma-2-2B supervised fine-tuned with thoughts, yields performance comparable to Llama-3.1-70B, which highlights the effectiveness of incorporating thoughts into the training mechanism. In general, thought-based training yielded better results than the setting that incorporates only labels, highlighting the importance of learning thought generation.
# 7 Analysis
In this section, we discuss various insights about the proposed method, describing the advantages of thought-based learning and some general error cases.
# 7.1 Predictive Analysis
Here, we describe the general predictive trends observed in our framework. In general, we observe a significant improvement after incorporating thought into the generated output. Intuitively, the causal models are optimized to generate an appropriate explanation first and then derive the prediction based on the generated explanation. Probabilistically, the next token is conditioned on the thought and input tokens, which can act as a guide for reaching the correct prediction more reliably than when only input tokens are considered. We discuss an example in Illustration 1, where the task is to determine whether the given response to a question implies a "Yes" or "No". We observe that the model correctly predicts "Yes" by generating thoughts that are used to resolve the final prediction, unlike the model trained on just labels (without thought). The generated thought is also helpful for understanding whether the model is genuinely predicting the correct output based on the right understanding or getting it right by chance (further explanation in Section 7.5).
# 7.2 Transfer Learning Analysis
This section discusses whether thought learning is transferable to other datasets and tasks not seen during training. The primary motivation behind this study is to understand whether the thought training done for one pragmatic task helps in learning other pragmatic tasks on different datasets.
Specifically, we evaluate our models trained for implied question answering on the following datasets: 1) FigQA (figurative Natural Language Inference), 2) Flute (figurative Natural Language Inference), 3) IMPPRES (figurative Natural Language Inference), 4) Ludwig, 5) Pub-presupposition
Table 2: Comparison of P (Precision), R (Recall), and F1 scores across Circa, Synthetic_Circa, and Synthetic_Ludwig datasets under various settings for QA dataset. The last column reports the mean F1 score across datasets.
Table 3: Data Ablation on Gemma-2B: We report the Precision (P), Recall (R) and F1 scores on all four settings by training the model with just the Circa dataset.
task, and 6) Pub-reference task.
Presupposition, implicature, and reference are pragmatic phenomena that rely on context, shared background knowledge, and the interactive nature of communication to convey meaning beyond the literal content of an utterance. Intuitively, models trained with explicit reasoning for implicature recovery should also perform better on these related linguistic phenomena. Specifically, reference is a special case of implicature, the only difference being the use of deixis terms.
For these experiments, we chose our best-performing model, Gemma-2-2B. We observe significant improvement in performance when thought is incorporated into the training mechanism for both SFT and DPO. We report these results in Table 4 for the three NLI datasets and in Table 5 for the other pragmatics tasks, with a mean improvement of $16.10\%$.
We observe significant improvements across all the datasets with thought-based training approaches when compared to the label-based training approaches. This highlights that the learning for implicature recovery is also transferable to other datasets and pragmatic tasks.
In Illustration 2, we describe a general scenario where the model is able to resolve the figurative expression "as fast as a turtle" to "slow", finally arriving at the correct prediction of Contradiction.
Table 4: Transfer Learning for NLI on figurative sentences: FigQA, Flute, and IMPPRES.
Table 5: Transfer Learning for the Presupposition, Ludwig and Reference dataset
# 7.3 Data Ablations
In this section, we discuss the effects of introducing our synthetically created data into training. Specifically, we perform the experiments without any synthetic data, using only the Circa data (similar to the original setting) to train our best-performing model, Gemma, in all four settings. We report the results in Table 3 and note that the performance across all settings is significantly lower than in the original training setup (reported in Table 2), where we also include the synthetic data. We observe a reduction of $9.11\%$ in the SFT+Thought setting and $15.84\%$ in the DPO setting. A drastic reduction is observed in the DPO+Thought and SFT settings, primarily due to the poor legibility of model predictions, where the generated text exhibits very high hallucination. This shows the utility of synthetic data, highlighting its role in enhancing the robustness of training by providing diverse and well-aligned examples that may be difficult to cover otherwise.
# 7.4 Error Analysis

This section describes the most prominent error cases in the given task. We observe that the majority of errors occur when a complex linguistic phenomenon is used that requires an additional layer of interpretation.

In Illustration 3, we describe one such scenario. Specifically, sentence 2 contains sarcasm, which is evident in its latter part, where the adjective ‘lovely’ is used for work. The model fails to understand that the speaker is being sarcastic about the situation and that the implication is similar to sentence 1, which leads to an incorrect prediction of ‘Contradiction’. Similar predictive behaviour is observed when other special linguistic phenomena such as metaphor and hyperbole are present.
# 7.5 Thought Analysis
In this section, we discuss various insights related to the thought generations. The primary aim of this study is to understand whether the model derives its predictions from correct thoughts or gets the predictions right with incorrect logic. To analyse this quantitatively, we consider our best-performing model, Gemma-2-2B-SFT, and evaluate its thoughts using GPT-4o-mini. To ensure the quality of the GPT-4o-mini judgments through human evaluation, we asked two linguistic experts to annotate whether they agreed with the predictions, yielding a Cohen's kappa score of 0.79 on 85 examples randomly sampled from the data.
We observe that in the cases of correct predictions, the model generates correct thoughts in $96.41\%$ of the instances and incorrect thoughts in $3.59\%$. For incorrect predictions, we observe $70.92\%$ of instances with wrong thoughts, while $29.08\%$ have correct thoughts, where the model generates the correct reasoning but cannot resolve the correct prediction. We discuss an example in Illustration 4 pertaining to a scenario where the model correctly identifies the contrast between the two sentences but predicts ‘Entailment’ instead of ‘Contradiction’.
In general, we observe that most of the correct predictions have correct thoughts, and a significant share of incorrect predictions also have correct thoughts. This highlights that the model is able to generate thoughts to a reasonable extent but cannot always cross the threshold of reaching the correct answer.
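The percentages reported above can be reproduced from per-instance judgments with a small cross-tabulation; the record format (pairs of booleans for prediction and thought correctness) is our own assumption:

```python
def thought_prediction_breakdown(records):
    """Cross-tabulate prediction correctness against thought correctness.

    `records` is a list of (prediction_correct, thought_correct) booleans,
    e.g. as judged by an LLM and verified by annotators; returns the
    percentage of correct thoughts within each prediction outcome."""
    def pct_correct_thoughts(subset):
        if not subset:
            return 0.0
        return 100.0 * sum(thought for _, thought in subset) / len(subset)

    right = [r for r in records if r[0]]
    wrong = [r for r in records if not r[0]]
    return {"correct_pred": pct_correct_thoughts(right),
            "incorrect_pred": pct_correct_thoughts(wrong)}
```

Applied to the judged instances, this yields the correct-thought shares within correct and incorrect predictions (96.41% and 29.08% in the paper's evaluation).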
# 7.6 Thought Perturbation Analysis
In this section, we describe experiments that aim to understand whether the observed improvements are due to the presence of correct thoughts leading to the right label in the training data, or merely a spurious correlation. For this experiment, we perturb the correct thought with the incorrect thought: for SFT, we replace the correct thought with the incorrect thought, and for DPO, we flip the correct (preferred) and incorrect (dispreferred) thoughts in the preference data. In general, we observe a significant decrease in scores compared to the original setting with correct thoughts. The decrease in SFT is very drastic, with F1-scores dropping as low as $2\%$. In DPO, we also see a considerable decline in performance ($25\%$–$30\%$) across all QA tasks compared to the DPO+Thought setting. Even though performance decreases in both cases, the DPO models did not suffer as drastic a decline, owing to the presence of the regularizing constant $\beta$ in DPO. In other words, the regularizing constant $\beta$ prevents large updates to the model, which is not the case in SFT, where the weight updates are unconstrained.
# 1 Introduction
The rapid progress in LLM capabilities—specifically their ability to follow instructions and maintain large contexts—has made them a natural choice in many applications. Natural Language to SQL (NL2SQL) is a long-standing and important task in many business-critical scenarios, requiring a deep understanding of user queries
∗Work done while at Megagon Labs
and the underlying databases for effective translation. Recent years have witnessed significant progress in NL2SQL, fueled by advancements in LLMs [3, 4, 8, 15]. However, building an effective NL2SQL system goes beyond simply leveraging LLMs—it requires the careful selection of instructions, exemplars, and schema, making it a challenging task despite recent breakthroughs [6].
Recent works [4, 21] emphasize that exemplar selection is crucial for building effective NL2SQL systems. Retrieval-based exemplar selection—i.e., identifying exemplars similar to the user query—has become the de facto method. However, studies [4, 19] highlight inefficiencies and overfitting issues with similarity-based retrieval methods, and argue that synthetic exemplars can yield better performance. While each approach has its advantages—retrieval-based methods are cheaper due to index-based lookups without LLM calls, and synthetic exemplars may be more accurate—they both require exemplar selection at inference time, which can become a bottleneck in business-critical applications [23].
Prompt Optimization. To address the limitations of current NL2SQL systems, we argue that for effective SQL generation, all an LLM needs is a static set of exemplars that capture the intricacies of the domain—offering performance comparable to retrieval-based approaches, while eliminating the need for inference-time retrieval. The key challenge lies in identifying this representative set of exemplars. To tackle this, we leverage prompt optimization techniques for exemplar selection in NL2SQL and demonstrate their effectiveness.
Multi-Objective Optimization. Most existing NL2SQL approaches focus solely on accuracy. However, accuracy is only one dimension in deploying practical NL2SQL systems. In real-world settings, systems must also understand query efficiency and the characteristics of target SQL engines, generating queries that are efficient to execute (i.e., with lower latency). In this work, we propose a way to extend prompt optimization to multi-objective settings. To support this, we introduce an augmented benchmark based on BIRD that includes query latency measurements.
To summarize, our contributions are as follows:
• To the best of our knowledge, this is the first work to study the effectiveness of prompt optimization in NL2SQL systems.
• We propose an iterative prompt optimization (IPO) framework that jointly optimizes instructions and exemplar selection through two agents, the Proposer and the SQL Generator. Additionally, the framework implicitly performs schema pruning, reducing prompt size and thereby lowering inference costs.
• We introduce the aspect of generating efficient SQL translations in NL2SQL systems, and introduce an augmented benchmark BIRD-MULTI (based on the BIRD dataset) that incorporates query latency information.
# 2 Related Work
Exemplar Selection. With the advent of powerful API-based LLMs such as ChatGPT [25] and Gemini [24], in-context learning (ICL)–based approaches [4, 5, 7, 16, 19–21] have become the dominant strategy for building high-performing NL2SQL systems. Specifically, retrieval-based exemplar selection [21], where examples are selected from a training set based on text or structural similarity, has proven sufficient to improve NL2SQL performance without expensive fine-tuning. However, such systems introduce inference-time costs and may overfit to specific queries due to the retrieval of overly similar examples [4].
To address this, recent approaches [4, 19] employ (online) synthetic exemplar generation rather than relying on training-data selection. While this mitigates overfitting, it requires learning exemplar generators, which incurs additional costs and presents challenges in domain transfer. In this work, we explore optimization-based methods for exemplar selection that avoid both the expense of retrieval indexes and the complexity of online synthetic generation.
Prompt Optimization. Optimizing LLM prompts has been a focus for several years [22, 26], showing effectiveness across a multitude of applications. More recently, DSPy [9] introduced a declarative framework for expressing and optimizing prompts for NLP tasks. Foundational work by [27] demonstrated the inherent capability of LLMs to act as optimizers, particularly for instruction tuning across various tasks. Building on this, [18] proposed MIPRO, a non-iterative technique for joint optimization of instructions and exemplar selection in multi-stage pipelines. Furthermore, [13] introduced a declarative framework focused on BI workloads, combining hybrid database systems with AutoML-style optimization for pipeline tuning. While these works introduced key optimization techniques, their applicability and effectiveness in the NL2SQL setting remain unexplored—a gap that our work seeks to address.
# 3 Prompt Optimization for NL2SQL
To demonstrate the effectiveness of optimization in NL2SQL, we adopt a simple in-context learning (ICL) [2] pipeline, as illustrated in Figure 1, which uses a single LLM to generate the SQL query. The prompt provided to the LLM consists of: a) #Instruction – a guiding instruction for the task, b) #Exemplars – examples selected from the training data via an Exemplar Selection component, c) #Query – the user query to be translated, d) #Schema – the relevant schema retrieved using a Schema Retrieval module, and e) #SQL – a prefix to trigger SQL generation by the LLM.
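The five components can be assembled into a single prompt string; the section markers follow the #Instruction/#Exemplars/#Query/#Schema/#SQL layout described above, while the per-exemplar formatting is our own assumption rather than the paper's exact template:

```python
def build_prompt(instruction, exemplars, query, schema):
    """Assemble the ICL prompt of the Figure 1 pipeline.

    `exemplars` is a list of dicts with hypothetical keys 'nl', 'schema'
    and 'sql'; the trailing '#SQL' marker acts as the generation prefix."""
    exemplar_text = "\n\n".join(
        f"NL: {e['nl']}\nSchema: {e['schema']}\nSQL: {e['sql']}" for e in exemplars
    )
    return (f"#Instruction\n{instruction}\n\n"
            f"#Exemplars\n{exemplar_text}\n\n"
            f"#Query\n{query}\n\n"
            f"#Schema\n{schema}\n\n"
            f"#SQL\n")
```

Swapping the exemplar list is then the only change needed to compare selection strategies, since the rest of the prompt is fixed.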
To match our production use case, we use the exact proprietary schema and focus our efforts on optimizing exemplar selection.
# 3.1 Exemplar Selection and Optimization
As previously mentioned, exemplar selection is a crucial step in NL2SQL generation, particularly when using ICL-based approaches [8]. This involves identifying an appropriate set of exemplars—each consisting of a natural language (NL) query, database schema, corresponding SQL query, and optionally hints or cell values—that help the LLM understand the domain, the target SQL engine, and data-specific nuances. Below, we discuss various exemplar selection strategies and how optimization can enhance the selection process.
Figure 1: NL2SQL Pipeline
# Algorithm 1: Optimization of Random Exemplar Selection
Random. A straightforward approach to exemplar selection is random sampling. For a predefined value of $k$ (the number of exemplars), this strategy randomly samples $k$ exemplars from the training data to include in the prompt. More sophisticated sampling techniques, such as stratified sampling, can also be used to account for the distribution of query types. For example, queries in the BIRD [11] dataset are categorized into three groups: simple, moderate, and challenging.
Optimizing Random Exemplar Selection. A key challenge with random selection is choosing an appropriate value for $k$. A small $k$ may fail to capture the diversity of the NL and SQL constructs, while a large $k$ can lead to lost-in-the-middle issues with LLMs [14] and increase generation costs due to the larger prompt size. A simple yet effective approach is to treat $k$ as a hyperparameter and optimize it using AutoML-style techniques, as illustrated in Algorithm 1. Inspired by DSPy's BootstrapFewShot exemplar selection [9], this method optimizes the number of demonstrations by randomly sampling exemplars (with replacement), rather than bootstrapping, using a performance metric $\mu$.
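A hedged sketch of this idea (our reconstruction, not the paper's exact Algorithm 1): sweep $k$ over candidate values, draw $k$ exemplars with replacement per trial, and keep the best-scoring set under the metric $\mu$.

```python
import random

def optimize_random_selection(train, metric, k_candidates, trials_per_k=3, seed=0):
    """Treat k as a hyperparameter; score sampled exemplar sets with the metric µ."""
    rng = random.Random(seed)
    best_score, best_exemplars = float("-inf"), None
    for k in k_candidates:
        for _ in range(trials_per_k):
            exemplars = [rng.choice(train) for _ in range(k)]  # with replacement
            score = metric(exemplars)
            if score > best_score:
                best_score, best_exemplars = score, exemplars
    return best_exemplars, best_score

# toy metric standing in for µ: prefer prompts with exactly five exemplars
train = list(range(100))
best, score = optimize_random_selection(train, lambda ex: -abs(len(ex) - 5),
                                        k_candidates=[1, 5, 10])
```

In practice the metric would be execution accuracy on a held-out development split rather than this toy function.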
In addition to optimizing exemplar selection, joint optimization of instruction and exemplar selection can lead to improved performance. MIPRO [18] leverages an LLM to generate $N$ instruction–exemplar pairs $( I _ { 1 } , E _ { 1 } ) , ( I _ { 2 } , E _ { 2 } ) , \ldots , ( I _ { N } , E _ { N } )$ , where $I _ { i }$ is an instruction generated from a set of randomly bootstrapped exemplars $E _ { i }$ . A hyperparameter optimization algorithm such as TPE [1] is then used to identify the optimal pair $( I _ { i } , E _ { i } )$ based on an objective function, such as validation accuracy.
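Stripped of the LLM proposer and the TPE search, the joint optimization reduces to the following skeleton (both stand-ins below are toys for illustration; MIPRO uses an LLM to propose pairs and TPE to search them):

```python
import random

def joint_search(propose_pair, objective, n_pairs=8, seed=0):
    """Generate N (instruction, exemplars) pairs and keep the best under the objective."""
    rng = random.Random(seed)
    candidates = [propose_pair(rng) for _ in range(n_pairs)]
    # MIPRO applies TPE here; exhaustive evaluation is a simplification
    return max(candidates, key=objective)

propose = lambda rng: (rng.randint(0, 9), ["exemplar"])  # toy (I_i, E_i) proposer
best_pair = joint_search(propose, objective=lambda pair: pair[0])
```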
Figure 2: Iterative Prompt Optimization
# 3.2 Iterative Prompt Optimization
One of the key limitations of the exemplar selection strategies discussed earlier is their ad hoc nature—exemplars are either randomly sampled or bootstrapped using heuristics (as in [18]), which may lead to suboptimal performance. Long-context LLMs (LCMs) aim to overcome this by fitting a larger number of exemplars (100–200) into their context window. However, recent work [4] has shown that relying on LCMs to implicitly perform exemplar selection does not improve performance and can, in fact, be detrimental, as LCMs often struggle with effective in-context learning [12].
To address this, we extend the work of [27], which uses an LLM as an optimizer to find an optimal prompt instruction, by enabling it to perform both instruction generation and exemplar selection through two cooperating agents: the Proposer and the SQL Generator. Specifically, we introduce an Iterative Prompt Optimization (IPO) approach (illustrated in Figure 2) in which the two agents work together to discover optimal NL2SQL prompts for a given training corpus.
The Proposer agent takes a Proposer prompt as input and generates an NL2SQL prompt comprising an instruction and a set of exemplars. The SQL Generator agent then evaluates the generated prompt on a validation set (sampled iteratively from the training data) and collects performance metrics including accuracy, as well as the correct and incorrect examples. This feedback is used to update the Proposer prompt. In subsequent iterations, the Proposer is guided to refine the NL2SQL prompt based on past performance, aiming to produce more informative exemplars and a better-suited instruction for improved SQL generation.
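The feedback loop above can be condensed into a short sketch, with both agents replaced by toy callables (the real system issues LLM calls at both steps):

```python
def ipo(proposer, evaluate, iterations=5):
    """Iterative Prompt Optimization skeleton: propose, evaluate, feed back."""
    feedback, best_prompt, best_score = None, None, float("-inf")
    for _ in range(iterations):
        prompt = proposer(feedback)         # instruction + exemplars
        score, failures = evaluate(prompt)  # accuracy + incorrect examples
        if score > best_score:
            best_prompt, best_score = prompt, score
        feedback = {"score": score, "failures": failures}
    return best_prompt, best_score

# toy agents: each proposal improves on the previous score
proposer = lambda fb: 0 if fb is None else fb["score"] + 1
evaluate = lambda prompt: (prompt, [])
best_prompt, best_score = ipo(proposer, evaluate, iterations=5)
```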
In contrast to MIPRO, which bootstraps exemplars randomly, IPO uses an LLM as an optimizer to jointly refine both the instruction and exemplar selection. Additionally, we observed that IPO often generates more concise NL2SQL prompts by pruning irrelevant schema information from the exemplars. For example, Figure 3 shows an exemplar whose schema includes only the table film and the columns film_id, title, and rating from the database movie_3. Although schema pruning was not an explicit design goal of IPO, this behavior highlights the strength of LLMs as optimizers in complex tasks such as NL2SQL.
NLQ: List all the films that are rated as PG-13.
Schema:
Database Name: movie_3
Tables: ['film']
#Columns:
film: [film_id:integer, title:text, rating:text]
Evidence: film refers to title; rated as PG-13 refers to rating = 'PG-13'.
SQL: SELECT title FROM film WHERE rating = 'PG-13';
# Figure 3: IPO generated exemplar with automatic schema pruning
# 4 Extending to Multi-Objective
Motivation. Thus far, NL2SQL systems have focused mainly on improving execution accuracy while ignoring a critical dimension: generating efficient SQL queries. Consider the example below, which shows the ground-truth (GT) and generated (Gen) SQL translations for the query NLQ. Executing the GT query on a SQLite3 database took around 10.2 seconds (due to the sub-query), while the Gen query (which uses an inner join) took only 0.03 seconds. This example demonstrates that a given SQL translation (in this case, the ground truth) may not always be the most efficient one.
NLQ: Show the avatar of the user who gave the rating at
2019/10/17 1:36:36.
GT: SELECT user_avatar_image_url FROM lists_users WHERE user_id = (SELECT user_id FROM ratings WHERE rating_timestamp_utc LIKE '2019-10-17 01:36:36')
Gen: SELECT T2.user_avatar_image_url FROM ratings AS T1 INNER JOIN lists_users AS T2 ON T1.user_id = T2.user_id WHERE T1.rating_timestamp_utc LIKE '2019-10-17 01:36:36'
Benchmark Creation. To build NL2SQL systems capable of generating efficient SQL queries, it is essential to have information about the efficiency of a SQL query—such as its wall-clock execution time—alongside the SQL translation itself. This efficiency information enables SQL generators (LLMs) to better understand the nuances and computational complexity of various SQL constructs, ultimately guiding them toward generating more optimized queries. However, existing benchmarks such as BIRD [11] and SPIDER [10] lack this critical execution-time data. To address this gap, we developed a new augmented benchmark built on top of BIRD, in which each natural language query (NLQ) is paired with two different SQL variants (generated using the OpenAI o3 reasoning model [17]) along with their measured execution times.
NLQ: Give the full name of the actor with the highest rental rate.
SQL1: SELECT a.first_name, a.last_name FROM actor AS a JOIN film_actor AS fa ON a.actor_id = fa.actor_id JOIN film AS f ON fa.film_id = f.film_id ORDER BY f.rental_rate DESC LIMIT 1;
Time1: 0.0012 seconds
SQL2: SELECT a.first_name, a.last_name FROM actor a JOIN film_actor fa ON a.actor_id = fa.actor_id JOIN film f ON fa.film_id = f.film_id WHERE f.rental_rate = (SELECT MAX(rental_rate) FROM film) LIMIT 1;
Time2: 0.0009 seconds
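The augmentation step can be reproduced in miniature: run each SQL variant against a SQLite database and record wall-clock time. The toy `film` table below is an assumption for illustration; the real benchmark runs against the BIRD databases.

```python
import sqlite3
import time

def time_sql(conn, sql, repeats=3):
    """Best-of-n wall-clock time for executing a query."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()
        best = min(best, time.perf_counter() - t0)
    return best

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE film (film_id INTEGER, title TEXT, rental_rate REAL)")
conn.executemany("INSERT INTO film VALUES (?, ?, ?)",
                 [(i, f"t{i}", i % 5) for i in range(1000)])

variant1 = "SELECT title FROM film ORDER BY rental_rate DESC LIMIT 1"
variant2 = ("SELECT title FROM film "
            "WHERE rental_rate = (SELECT MAX(rental_rate) FROM film) LIMIT 1")
t1, t2 = time_sql(conn, variant1), time_sql(conn, variant2)
```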
Generating Efficient SQL Queries. With the augmented benchmark containing SQL variants and their corresponding execution times, it becomes feasible to design LLM prompts specifically aimed at generating efficient SQL queries. Furthermore, by leveraging the optimization techniques described in Section 3, it is possible to jointly optimize for both SQL efficiency and generation accuracy, leading to more practical and performant NL2SQL systems.
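One simple way to express such a joint objective (our own formulation for illustration, not the paper's) is execution accuracy minus a penalty for exceeding the ground-truth latency:

```python
def multi_objective_score(accuracy, latency, gt_latency, alpha=0.5):
    """Accuracy minus a penalty proportional to slowdown over the GT latency."""
    slowdown = max(0.0, latency / gt_latency - 1.0)
    return accuracy - alpha * slowdown

# a query matching the GT latency keeps its accuracy; a 2x-slower one is penalized
score_fast = multi_objective_score(0.8, 1.0, gt_latency=1.0)
score_slow = multi_objective_score(0.8, 2.0, gt_latency=1.0)
```

The weight `alpha` controls the accuracy/efficiency trade-off and would itself be a candidate for tuning.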
# 5 Preliminary Results
Here, we present preliminary results demonstrating the effectiveness of prompt optimization in NL2SQL systems. As described earlier, we consider a simple NL2SQL pipeline consisting of a single LLM (illustrated in Figure 1) and evaluate the following prompt optimization strategies discussed in Section 3. For all experiments, we use GPT-4o as the LLM.
• RES. Random Exemplar Selection (RES) is the baseline approach, where $k = 10$ exemplars are randomly sampled from the training data. We run RES over 10 random samples and report the best accuracy.
• ORES. Optimized Random Exemplar Selection (ORES) uses Bayesian Optimization to tune the hyperparameter $k$ in the RES method. We limit the number of trials to 20 and note that increasing trials does not necessarily correlate with improved performance.
• MIPROv2. Multiprompt Instruction PRoposal Optimizer (MIPROv2) is the DSPy [9] recommended optimizer that jointly optimizes instruction and exemplar selection. Similar to ORES, we set the number of trials to 20, and the maximum number of labeled demonstrations to 10.
• IPO. Iterative Prompt Optimization (IPO) is a bi-agent, LLMas-optimizer approach that iteratively refines the NL2SQL prompt using feedback on SQL generation quality. For IPO, we set the number of iterations to 5 and instruct the LLM to generate at least 5 diverse exemplars per iteration.
# 5.1 Effectiveness of Optimization
Performance. Table 1 highlights the effectiveness of different optimization strategies on the BIRD dataset. The naive RES approach underperforms due to its inability to select informative exemplars for SQL generation. The ORES approach, which applies a simple AutoML-based optimization, performs better than RES but falls short compared to more advanced strategies like MIPROv2 and IPO. IPO achieves the best performance, benefiting from iterative refinement using feedback from the SQL Generator agent, which leads to more relevant exemplar selection. The absence of this feedback loop in MIPROv2 makes it less effective than IPO. However, we emphasize that these observations are specific to the NL2SQL task.
Table 1: Execution accuracies of prompt optimization methods on the BIRD (dev) dataset
Quantitative Analysis. Table 2 presents a quantitative analysis of the different optimization techniques, measured across two dimensions: prompt length and optimization time. RES (with 10 exemplars) results in a prompt length of approximately 23k tokens, while ORES (with 75 exemplars) leads to a significantly larger prompt of around 84k tokens. MIPROv2 (with 10 exemplars) produces a prompt length of about 26k tokens, similar to RES. IPO yields the shortest prompt length, as it prunes a substantial portion of schema information from each exemplar, while also delivering the best performance.
In terms of optimization time, MIPROv2 takes the longest, as it involves both data analysis and joint optimization, whereas IPO and ORES are comparatively faster.
Table 2: Quantitative analysis of prompt optimization methods on the BIRD (dev) dataset
# 5.2 Multi-Objective Optimization
Table 3 demonstrates the effectiveness of multi-objective optimization using the IPO approach. For this, we consider both accuracy and latency on the BIRD (dev) dataset. When compared to the ground truth (GT), accuracy-only IPO optimization (Section 3.2) results in the generation of SQL queries that are less efficient, with a maximum latency of approximately 18 seconds (vs. 8.7 seconds for GT) and a standard deviation $( \sigma )$ that is almost 1.8 times larger. In contrast, joint optimization of both accuracy and latency leads to only a marginal increase in the maximum latency of queries, while maintaining a standard deviation similar to that observed in GT.
Table 3: Effectiveness of multi-objective optimization on BIRD (dev) dataset

NL2SQL approaches have greatly benefited from the impressive capabilities of large language models (LLMs). In particular, bootstrapping an NL2SQL system for a specific domain can be as simple as instructing an LLM with sufficient contextual information, such as schema details and translation demonstrations. However, building an accurate system still requires the rigorous task of selecting the right context for each query, including identifying relevant schema elements, cell values, and suitable exemplars that help the LLM understand domain-specific nuances. Retrieval-based methods have become the go-to approach for identifying such context. While effective, these methods introduce additional inference-time costs due to the retrieval process.
In this paper, we argue that production scenarios demand high-precision, high-performance NL2SQL systems, rather than simply high-quality SQL generation, which is the focus of most current NL2SQL approaches. In such scenarios, the careful selection of a static set of exemplars (capturing the intricacies of the query log, target database, SQL constructs, and execution latencies) plays a more crucial role than exemplar selection based solely on similarity. The key challenge, however, lies in identifying a representative set of exemplars for a given production setting. To this end, we propose a prompt optimization framework that not only addresses the high-precision requirement but also optimizes the performance of the generated SQL through multi-objective optimization. Preliminary empirical analysis demonstrates the effectiveness of the proposed framework.
"cs.CL",
"cs.DB"
] |
# 1 Introduction
Agricultural systems face increasing pressure to meet the growing demand for food while maintaining ecological balance and mitigating climate impacts [54]. Rural landscapes not only serve as the backbone of food production but also play a pivotal role in sequestering carbon, regulating water cycles, and supporting biodiversity. However, the intensification of agriculture over recent decades has led to significant environmental challenges, including habitat loss, soil degradation, and declining wildlife populations [24].
In intensively farmed landscapes, natural habitats such as woodlands, grasslands, and hedgerows are often fragmented or removed altogether [23]. In this scenario, remaining patches of semi-natural habitats are disconnected from one another, creating isolated islands in a sea of monocultures. This fragmentation limits species movement, reduces genetic exchange, and diminishes overall biodiversity [53]. Biodiversity loss, in turn, weakens the resilience of ecosystems, reducing their ability to provide essential services such as pollination, pest control, and climate regulation services that are crucial to sustainable agriculture [25].
Addressing these challenges requires creating buffer zones that promote biodiversity and restoring ecological connectivity while maintaining high farming yields [52]. Linear features such as hedgerows, tree lines, and riparian corridors play a critical role in reconnecting fragmented habitats, acting as wildlife corridors that enable species movement and resource access [26, 29, 30, 41]. Buffer zones, including grassy field margins and riparian vegetation, help shield sensitive ecosystems from agricultural pressures by reducing soil erosion, limiting nutrient runoff, and protecting waterways from pollution [39, 1]. For example, restoring hedgerows can simultaneously enhance biodiversity, improve soil stability, and serve as natural barriers that protect crops from wind and water runoff [32]. By carefully designing and mapping these features, it is possible to integrate ecological restoration with productive farming systems [36].
The importance of such efforts is underscored by the “30 by 30” initiative [51], a global commitment to protect $30 \%$ of terrestrial and marine ecosystems by 2030, adopted under the Kunming-Montreal Global Biodiversity Framework [7]. The EU Biodiversity Strategy for 2030 has embraced these targets, requiring Member States to enhance the protection and restoration of habitats, including farmland biodiversity, to meet this ambitious goal [20]. Reaching the 30 by 30 target will necessitate large-scale restoration efforts that prioritize the connectivity of fragmented landscapes and the integration of semi-natural features into agricultural areas.
Figure 1: Typical English landscape elements, hedgerows (left), stone walls (center), and woodland (right).
In addition to international commitments, several European countries have launched national initiatives aimed at reversing biodiversity loss in rural areas. In the United Kingdom, hedgerow restoration programs aim to reconnect fragmented habitats and enhance the ecological and cultural value of rural areas. The UK Centre for Ecology and Hydrology estimates that there are over 600,000 kilometres of hedgerows, of which approximately 250,000 kilometres are in need of restoration or improved management [6]. France’s Trame Verte et Bleue (Green and Blue Infrastructure) [3] initiative focuses on creating ecological corridors to link habitats across urban and rural regions, ensuring the survival of species in highly modified landscapes. Similarly, Germany’s Biotopverbund (Biotope Network) [43] promotes habitat connectivity at regional and national scales, emphasizing the role of linear features such as hedgerows and tree rows in fostering biodiversity and ecological resilience.
Those initiatives play a crucial role in incentivizing landscape restoration and enhancing biodiversity. However, to maximize their effectiveness, more targeted and cost-efficient interventions are necessary [17]. These interventions should be guided by accurate maps of rural landscapes, which can help governments and local initiatives prioritize restoration efforts and allocate resources more effectively [50, 31].
The development of those maps faces several significant challenges, with the key obstacle being the trade-off between accuracy and cost. Field surveys, which involve direct observation and data collection on the ground, remain the gold standard for producing detailed and reliable maps. However, the cost of conducting such fieldwork is prohibitively high, especially when addressing large-scale landscapes [46]. Remote sensing data offers extensive geographical coverage, making it a highly scalable option for large-scale landscape mapping. This data can come from a variety of sources, including publicly accessible missions like Landsat [12] and Sentinel [47], which provide moderate-resolution imagery (30 m and 10 m GSD, respectively). This freely available data has proven suitable for large-scale assessments such as crop type mapping [42] and landscape monitoring [35], enabling analysis of historical trends and frequent monitoring of Earth’s surface. However, the limited resolution of these options prevents the identification of small landscape features crucial for biodiversity, such as hedgerows and stone walls. Commercial high-resolution satellites, e.g., WorldView [27] (30 cm to 50 cm GSD) and Pleiades [37] (50 cm GSD), offer a compelling alternative. While these options provide the necessary detail, they come with increased financial costs and greater complexity in acquisition, processing, and analysis [57]. Consequently, state-of-the-art high-resolution landscape mapping is often limited in scale due to these practical constraints [38].
Our work addresses the gap in large-scale, high-resolution landscape mapping by generating an open dataset that covers most of England at 25 cm resolution. This dataset has been produced using a machine learning model trained on Google’s proprietary aerial imagery and a novel human-annotated dataset of English rural landscape elements, including hedgerows and stone walls. The English landscape map will be released on Earth Engine. A preview of the data is currently available as an Earth Engine App. To the best of our knowledge, this constitutes the first large-scale, high-resolution dataset specifically focused on rural landscape features, offering ecologists a powerful tool to assess the ecological status of rural landscapes and enabling targeted restoration efforts by policymakers and local communities.
# 2 Related work
Image segmentation – the task of assigning a class label to each pixel in an image – is a fundamental process in image analysis. In the context of Earth observation, semantic segmentation of aerial and satellite imagery has emerged as a crucial technique, enabling a wide range of applications including precision agriculture [55] and land cover classification [48]. This surge in applications is largely driven by advancements in remote sensing technologies, including improvements in aerial and orbital platforms, emerging sensor capabilities, and increased data accessibility [58].
Deep learning, particularly in remote sensing, has become the state-of-the-art for image segmentation. Convolutional Neural Networks (CNNs), notably the U-Net architecture [40], have been remarkably successful in this domain. More recently, Vision Transformers (ViTs) [16] have shown promise, leveraging their ability to capture long-range dependencies within an image to offer potential advantages for certain landscape analyses [4]. However, applying deep learning to remote sensing presents unique challenges. Noise and variability in remotely sensed data, due to atmospheric conditions and sensor calibration, can hinder performance [56]. High spatial and spectral heterogeneity in remote sensing imagery, caused by diverse land cover types and their varied spectral signatures, requires models to be particularly robust [10]. Finally, stitching image tiles for large area processing can introduce boundary artifacts [21].
In the context of high-resolution landscape mapping, hedgerow mapping is one of the most studied applications [38]. In Germany, Ahlswede et al. [2] trained a DeepLab v3 [8] model using 1-meter resolution imagery from the IKONOS mission [15]. Labels were manually digitized hedgerow polygons provided by the Bavarian State Office for the Environment. Although this study improved upon the use of coarser imagery, it was limited by the resolution, which may miss finer details such as small gaps in hedgerows critical for ecological analysis. Additionally, the focus solely on hedgerows excluded other important landscape features. Strnad et al. [44] employed a U-Net model trained on aerial photography of Slovenia, with a high spatial resolution of $2 5 \mathrm { c m }$ . Labels for this study were derived from a reference layer based on LiDAR point clouds collected in 2014 and processed to exclude buildings. While this approach allowed for scaling without manual labelling, it was limited to identifying woody vegetation and could not specifically distinguish hedgerows. More recently, Muro et al. [33] used multi-temporal PlanetScope satellite imagery with a 3-meter resolution to map hedgerows across Germany using a U-Net architecture. Labels were sourced from a dataset created by the Schleswig-Holstein State Office for Agriculture, Environment, and Rural Areas, combining digital terrain models and high-resolution imagery. However, this approach buffered hedgerow labels by five meters, potentially leading to over-segmentation and the loss of fine details. In addition to academic research, institutional efforts have also played a role in advancing hedgerow mapping. The UK Centre for Ecology and Hydrology offers a dataset of linear hedgerows, providing a valuable resource for ecological studies [6]. Similarly, Bluesky [22] offers a map of hedgerows and trees across the UK, which incorporates detailed volumetric information. 
While these datasets offer significant potential for large-scale analyses, their high cost and lack of transparency regarding methodologies and validation processes limit broader accessibility and utility.
In parallel, mapping stone walls has also benefited from advances in remote sensing and deep learning. Suh and Ouimet [45] mapped stone walls in the Northeastern USA using U-Net and the architecture of Diakogiannis et al. [14] with high-resolution airborne LiDAR data. The model input consisted of LiDAR-derived hillshades and slope maps, with labels created through manual digitization of stone walls from LiDAR data, supplemented by aerial imagery, Google Street View, and field verification. Similarly, Trotter et al. [49] focused on updating Denmark’s stone wall registry using a U-Net model applied to LiDAR-derived terrain data. The model input included a Digital Terrain Model (DTM), Height Above Terrain (HAT), and a Sobel-filtered DTM, all at a 40 cm resolution. Labels were generated from a stone wall dataset provided by the Danish Ministry of Culture, validated and corrected using the DTM. While both approaches demonstrate the effectiveness of using LiDAR data to detect stone walls, the high manual labor required to create the labels limits the scale of the datasets.
# 3 Landscape elements
Our work focuses on the following key landscape elements, which are important for biodiversity:
Farmed land refers to areas of land that are actively managed for agricultural production. This includes land used for cultivating crops, raising livestock, or both. Farmed land is characterized by human interventions aimed at optimizing the growth and yield of desired plant or animal species, often involving practices such as tilling, planting, fertilizing, irrigating, grazing, and harvesting. It is a dominant land use type globally, playing a crucial role in food security and shaping landscapes and ecosystems worldwide. Farmed land can vary significantly in its characteristics, from intensively managed monoculture fields to more diverse agroforestry systems.
Hedgerows are linear semi-natural landscape features typically composed of closely spaced shrubs, trees, and other vegetation, often forming boundaries between fields or lining roadsides. They are characteristic features of many agricultural landscapes, particularly in Europe. Hedgerows provide a variety of important ecological functions, including habitat and food sources for wildlife, corridors for species movement, and services such as pollination, pest control, and erosion prevention. There are three major types of hedgerows (see: Neumann et al. [34]): Type 1 (Figure 2a) is a low-lying, intensively managed hedge that does not contain woody elements. It can reach 1.5 meters in height and has an average width of 2.5 meters. Type 2 (Figure 2b) contains small trees or tall shrubs and it is usually less managed. It is taller than Type 1 and has an average width of 7 meters. Type 3 (Figure 2c) contains mature trees and appears similar to a linear woodland from an aerial perspective.
Figure 2: Types of hedgerows.
Woodlands are areas dominated by trees that form a distinct, but generally more open and discontinuous, canopy compared to denser forests. They feature a lower tree density compared to forests and often have a well-developed understory layer composed of shrubs, grasses, and other herbaceous plants. Woodlands are important ecosystems providing numerous benefits, including carbon sequestration, soil stabilization, water regulation, and recreational opportunities. They often represent transitional zones between open landscapes and closed forests, contributing significantly to overall landscape diversity.
Stone walls are man-made linear structures constructed primarily from stones, either without the use of mortar (dry stone walls) or with mortar to bind the stones together. Beyond their primary functions of marking boundaries, enclosing livestock, and providing shelter, they also offer unique microhabitats that enhance local biodiversity. The crevices and spaces within stone walls provide shelter and nesting sites for a variety of small animals, including insects, spiders, reptiles, amphibians, and small mammals. They can also support diverse plant life, such as lichens, mosses, and other plants adapted to growing in rocky environments. In this way, stone walls, while human-made, can become integral components of the ecosystem, contributing to the overall biodiversity of a landscape.
In addition to these basic elements, gaps within hedgerows (Figure 3a) also play a disruptive role in the ecological connectivity of the landscape as gaps can act as barriers to movement for different species. Trees along hedgerows (Figure 3b) are also important because these trees significantly enhance the structural complexity and habitat value of the hedgerow, providing nesting sites for birds, roosting locations for bats, and increased foraging opportunities for various species [5].
Figure 3: Most important cases for ecologists and landscape restoration initiatives.
# 4 Data
Aerial imagery. A primary challenge in identifying small landscape features, such as hedgerows, is the lack of extensive, high-resolution imagery datasets where these features are distinctly visible. The highest-resolution imagery that is publicly available is Sentinel-2 (10 m GSD), which is insufficient to capture the nuances of these smaller elements. To overcome this limitation, we leveraged Google’s proprietary aerial imagery captured over England between 2018 and 2022, with a resolution of 25 cm per pixel. This dataset provides a much finer level of detail than what is available from public sources. Figure 4 directly illustrates this advantage by contrasting Sentinel-2 imagery with our high-resolution data. Critically, details like gaps within hedgerows, which are crucial for assessing habitat fragmentation and ecological connectivity, are not discernible in the Sentinel-2 imagery. Many of these gaps are no larger than 4 m, and so they often fall within the bounds of a single Sentinel-2 pixel.
Figure 4: Remote sensing imagery at different resolutions differs significantly in the level of visible detail. In this example, the gaps in the hedgerow are clear in the aerial imagery but not visible in Sentinel-2.
LiDAR measurements. While high-resolution, sub-meter imagery provides excellent spatial resolution in the horizontal plane, capturing the shape and layout of features from a top-down perspective, it offers limited information about the vertical dimension, i.e., the height of those features. While shadows present in the RGB imagery can provide some clues about relative heights, these cues are often ambiguous and dependent on factors like sun angle. Consequently, landscape elements with similar visual characteristics in the imagery – such as a tall, dense hedgerow, a patch of low-lying scrub, or a newly planted group of trees – can be difficult to differentiate based on imagery alone. To overcome this challenge and gain a more complete and accurate representation of the landscape, we incorporated a high-resolution height map derived from the UK Environment Agency’s LiDAR Digital Terrain Model (DTM) [1] into the annotation tool. This publicly available LiDAR dataset provides precise elevation data at a 1m resolution, independent of lighting conditions. By integrating this vertical information, we add a crucial dimension to our analysis, enabling us to accurately distinguish between features based on their height profiles and significantly improving the accuracy of our landscape classifications.
Figure 5: Comparison of data sources. (a) Sentinel-2 imagery $( 1 0 \mathrm { m } )$ , where fine features are not easily visible. (b) LiDAR-derived height map, used to aid human annotation by revealing vertical structure. (c) High-resolution (25cm) aerial imagery, the primary input for our model, where features like gaps are clearly visible.
Human annotations. A significant challenge in applying machine learning to landscape mapping is the limited availability of high-quality training data, particularly for relatively rare but ecologically important features like hedgerows and stone walls. We addressed this by creating a dataset of 942 image tiles of 512 m × 512 m, sampled across
Figure 6: Sampling strategy illustration.
England using a strategy designed to maximize relevance. First, high-density urban areas (identified using the World Settlement Footprint [28]) and forests (identified using the Global Forest Change dataset [18]) were excluded. The remaining area was then divided into two strata: regions with a strong tradition of stone wall construction (Stratum 1) and everything else (Stratum 2). We then performed stratified random sampling, allocating $1 5 \%$ of the total sample size to Stratum 1 and $8 5 \%$ to Stratum 2 (Figure 6). This approach ensured that our dataset captures the diverse range of rural landscape elements and their typical spatial arrangements, with a focus on the agricultural landscape. Annotators were tasked with identifying and labeling seven distinct land cover classes within each tile: farmed land, hedgerows, woodland, stone walls, other vegetation (including shrubs, grassland, and other non-tree or hedgerow vegetation), water bodies (such as seas, lakes, and rivers), and “other” (representing human-made features like roads and buildings). These seven classes are exhaustive, covering every possible land cover type encountered in the English countryside. The annotation process utilized both the 25cm resolution aerial imagery (Fig. 5a) and the LiDAR-derived height data (Fig. 5b) to maximize the accuracy of the corresponding annotations (Fig. 5c).
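The tile-sampling strategy amounts to fixed-allocation stratified random sampling; a sketch (stratum sizes and tile identifiers below are placeholders):

```python
import random

def sample_tiles(stratum1, stratum2, total, frac1=0.15, seed=0):
    """Stratified sampling: frac1 of tiles from stone-wall regions, the rest elsewhere."""
    rng = random.Random(seed)
    n1 = round(frac1 * total)
    return rng.sample(stratum1, n1) + rng.sample(stratum2, total - n1)

tiles = sample_tiles([f"s1_{i}" for i in range(200)],
                     [f"s2_{i}" for i in range(2000)], total=942)
```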
# 5 Model
Machine learning target. The human annotations are structured to create four distinct training targets for the machine learning model: a single multiclass target encompassing the mutually exclusive ground classes (farmed land, other vegetation, and other), and three separate binary targets for the potentially overlapping above-ground classes (hedgerows, woodland, and stone walls). This structure mirrors the fact that a single pixel can be both “woodland” and a “hedgerow”, while it cannot simultaneously be both “farmed land” and “other”. As shown in Table 1, the distribution of these classes reflects the predominantly agricultural nature of the English landscape. Farmed land dominates the ground target, accounting for roughly two-thirds of the pixels, with other vegetation making up slightly under one-third, and other comprising a minimal fraction. Similarly, hedgerows, stone walls, and woodland represent only $1 \%$ , $1 \%$ , and $11 \%$ of the total pixels, respectively. Consequently, our model training must address this substantial disparity, particularly for accurately identifying and mapping the less frequent hedgerow and stone wall classes.
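The four-target structure could be assembled roughly as below (a minimal NumPy sketch with hypothetical class layers and names; the paper's actual target encoding is not specified beyond the description above):

```python
import numpy as np

# Ground classes are mutually exclusive; above-ground classes may overlap.
GROUND = ["farmed_land", "other_vegetation", "other"]
ABOVE = ["hedgerow", "woodland", "stone_wall"]

def build_targets(layers):
    """layers: dict class -> (H, W) binary array.
    Returns one integer multiclass map for the ground classes and one
    binary map per above-ground class (overlaps allowed)."""
    h, w = next(iter(layers.values())).shape
    ground = np.zeros((h, w), dtype=np.int64)
    for idx, name in enumerate(GROUND):
        ground[layers[name] == 1] = idx  # exclusive classes: one label per pixel
    binaries = {name: layers[name].astype(np.float32) for name in ABOVE}
    return ground, binaries

# Toy 4x4 annotation: a hedgerow row that overlaps woodland pixels.
layers = {name: np.zeros((4, 4)) for name in GROUND + ABOVE}
layers["farmed_land"][:, :2] = 1
layers["other_vegetation"][:, 2:] = 1
layers["hedgerow"][1, :] = 1
layers["woodland"][1, 2:] = 1
ground, binaries = build_targets(layers)
```

Note that pixel (1, 3) is simultaneously hedgerow and woodland in the binary targets, which the single multiclass map could not express.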
Table 1: Percentage of classes in the training targets.
Model architecture. The model is a segmentation transformer with progressive upsampling (SETR-PUP) [59] with separate decoders per training target, as depicted in Figure 7. This is a common architecture for image segmentation: the ViT encoder excels at capturing global context by processing the image into high-level features, while the convolutional decoder refines the pixel-level predictions.
Encoder pretraining. The encoder is pretrained as a masked autoencoder [19] on a vast, globally diverse dataset of 100M high-resolution aerial images. This self-supervised pretraining process involves hiding random patches of an image and training the ViT to reconstruct the missing parts. This forces the ViT encoder to learn a general understanding of visual features. This pretraining provides an excellent starting point (a robust initialization) for our specific task, enabling the model to learn more effectively.
Data augmentation. To improve the robustness of our model, we implement several data augmentation strategies. To ensure the model learns features that are invariant to object orientation and viewpoint changes, we apply horizontal and vertical flips, as well as continuous image rotations. Building upon the data augmentation pipeline of SimCLR [9], we introduce color jittering and random resizing within a pixel resolution range of [0.2, 0.325] using bilinear sampling. This improves the model’s resilience to variations in color and object size. Furthermore, we employ Cutout [13], randomly masking rectangular regions of the input images during training with size ratios for height and width ranging from 0.05 to 0.5. This encourages the model to derive features from the entirety of the image, enhancing overall robustness. Finally, we randomly pick a crop of size $5 1 2 \times 5 1 2$ from the $2 0 4 8 \times 2 0 4 8$ image to reduce the computational cost of training the model while providing sufficient context to effectively segment the image. For an image resolution of 0.25 meters, 512 pixels correspond to 128 meters.
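A rough sketch of this pipeline follows (NumPy; rotation is restricted to 90-degree steps and the color/resolution jitter is omitted for brevity, so this only approximates the augmentations described):

```python
import numpy as np

def augment(img, rng):
    """Flips, axis-aligned rotation, Cutout, and a random 512x512 crop."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                  # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                  # vertical flip
    img = np.rot90(img, k=rng.integers(4))  # rotation (continuous in the paper)
    # Cutout: zero a rectangle with side ratios drawn from [0.05, 0.5]
    h, w = img.shape[:2]
    ch = int(h * rng.uniform(0.05, 0.5))
    cw = int(w * rng.uniform(0.05, 0.5))
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    img = img.copy()
    img[y:y + ch, x:x + cw] = 0
    # Random 512x512 crop from the 2048x2048 tile (128 m at 0.25 m/px)
    y = rng.integers(0, h - 512 + 1)
    x = rng.integers(0, w - 512 + 1)
    return img[y:y + 512, x:x + 512]

rng = np.random.default_rng(0)
out = augment(np.ones((2048, 2048), dtype=np.float32), rng)
```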
Multi-resolution targets. The decoder utilizes ViT features, progressively upsampling them through four stages to generate predictions at multiple resolutions (64, 128, 256, and 512 pixels), each corresponding to a specific spatial resolution (2m, 1m, 0.5m, and the original input resolution). Loss is computed at each stage by comparing the model’s predictions to downsampled ground truth labels. This multi-resolution approach encourages meaningful representations at each scale, improves robustness to ground truth inaccuracies, and ensures consistent predictions across resolutions.
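One way to realize this multi-resolution supervision (NumPy sketch; strided subsampling of the ground truth and a binary cross-entropy stand-in are our assumptions, since the paper does not specify either):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over probability maps."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def multi_resolution_loss(preds, target):
    """preds: dict resolution -> (res, res) probability map;
    target: (512, 512) binary ground truth, downsampled per stage."""
    total = 0.0
    for res, pred in preds.items():
        step = 512 // res
        total += bce(pred, target[::step, ::step])
    return total

target = np.zeros((512, 512))
target[:256] = 1.0
# Uninformative predictions (p = 0.5) at every decoder stage:
preds = {res: np.full((res, res), 0.5) for res in (64, 128, 256, 512)}
loss = multi_resolution_loss(preds, target)
```

With p = 0.5 everywhere, each stage contributes exactly ln 2, so the total is 4 ln 2 regardless of the target.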
# 6 Results
Hyperparameter optimization. The 942 tiles of the collected dataset are partitioned into 742 for training, 100 for validation, and 100 for testing. To accurately assess model performance, validation and test tiles were carefully hand-selected. This selection process prioritized tiles with the clearest annotations and a representative number of instances for rarer classes like stone walls and hedgerows. We optimized the training hyperparameters by sweeping over weight decay and learning rate. Performance for each hyperparameter combination was averaged across three random seeds evaluated on the validation set.

Figure 7: Model architecture.
Metrics. Model performance on the test set is evaluated using the f1-score, recall, and precision. The classification threshold for these metrics is determined by optimizing the f1-score on the validation set and then applying this same threshold to the test set. Results (Table 2) are reported as the mean and standard error across 12 independent runs with different random seeds.
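The threshold-selection procedure can be illustrated with synthetic scores (the names and candidate grid here are illustrative; the paper does not specify its search grid):

```python
import numpy as np

def f1_at(scores, labels, t):
    """f1-score when thresholding probability scores at t."""
    pred = scores >= t
    tp = int((pred & (labels == 1)).sum())
    fp = int((pred & (labels == 0)).sum())
    fn = int((~pred & (labels == 1)).sum())
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def pick_threshold(val_scores, val_labels, grid=np.linspace(0.05, 0.95, 19)):
    """Choose the threshold maximizing f1 on the validation set; the same
    threshold is then applied unchanged to the test set."""
    return max(grid, key=lambda t: f1_at(val_scores, val_labels, t))

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
# Toy scores: positives land in [0.6, 1.0], negatives in [0.0, 0.4)
scores = 0.6 * labels + 0.4 * rng.random(1000)
t = pick_threshold(scores, labels)
best_f1 = f1_at(scores, labels, t)  # in practice, evaluate on held-out test scores
```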
Ground Layer. For the Vegetation class, the model achieves an f1-score of $8 4 \pm 1$ , a recall of $9 0 \pm 1$ , and a precision of $8 5 \pm 1$ . This indicates that the model can accurately identify general vegetation cover with high consistency and completeness. The Farmed land class exhibits even higher performance, with an f1-score of $9 5 \pm 1$ , a recall of $9 1 \pm 1$ , and a precision of $9 6 \pm 1$ , demonstrating the model’s strong ability to delineate agricultural areas. The Other class, encompassing a variety of non-vegetated, non-farmed land covers, achieves an f1-score of $8 1 \pm 3$ , a recall of $9 0 \pm 1$ and a precision of $8 2 \pm 4$ . While the precision shows some variability, the high recall indicates that the model captures most of this diverse class.
Stone Walls. The model achieves an f1-score of $6 0 \pm 1$ for stone walls. The recall is $5 3 \pm 1$ , and the precision is $5 8 \pm 1$ . This suggests that while the model has some ability to identify stone walls, there’s room for improvement, particularly in capturing all instances (recall). The relatively balanced precision and recall indicate that the model is neither overly prone to false positives nor false negatives, but rather struggles with consistent detection of these narrow features.
Table 2: Test metrics.
Hedgerows. For hedgerows, the model exhibits an f1-score of $72 \pm 1$, a recall of $70 \pm 1$, and a precision of $73 \pm 1$. These results demonstrate a good ability to identify hedgerows, with a reasonable balance between correctly identifying them (precision) and capturing all instances (recall). The performance on hedgerows is notably higher than that on stone walls, likely due to their typically larger size and more distinct visual characteristics.
Woodland. The model performs exceptionally well on the Woodland class, achieving an f1-score of $96 \pm 1$, a recall of $92 \pm 1$, and a precision of $90 \pm 1$. This indicates that the model can very accurately and consistently identify woodland areas. The high f1-score, combined with high recall and precision, demonstrates the model’s strong capability in delineating this important land cover type.

Abstract. Effective management of agricultural landscapes is critical for meeting global biodiversity targets, but efforts are hampered by the absence of detailed, large-scale ecological maps. To address this, we introduce Farmscapes, the first large-scale (covering most of England), high-resolution (25cm) map of rural landscape features, including ecologically vital elements like hedgerows, woodlands, and stone walls. This map was generated using a deep learning segmentation model trained on a novel dataset of 942 manually annotated tiles derived from aerial imagery. Our model accurately identifies key habitats, achieving high f1-scores for woodland (96%) and farmed land (95%), and demonstrates strong capability in segmenting linear features, with an f1-score of 72% for hedgerows. By releasing the England-wide map on Google Earth Engine, we provide a powerful, open-access tool for ecologists and policymakers. This work enables data-driven planning for habitat restoration, supports the monitoring of initiatives like the EU Biodiversity Strategy, and lays the foundation for advanced analysis of landscape connectivity.
# 1 Introduction
As the demand for larger and more capable neural networks continues to grow [Kaplan et al., 2020, Brown et al., 2020], the need for architectures that can scale efficiently—without incurring prohibitive computational costs—has become increasingly important. This is especially true in the context of large language models (LLMs), where state-of-the-art performance often requires billions of parameters and massive training datasets. One such approach, the Mixture of Experts (MoE) model [Shazeer et al., 2017], introduces sparsely activated sub-networks at certain layers, allowing for increased model capacity while preserving computational efficiency.
While MoE architectures offer improved parameter scalability, they often suffer from poor expert utilization during pretraining. Without mechanisms that encourage balanced routing, the model frequently learns to rely on only a small subset of experts [Eigen et al., 2014, Bengio et al., 2016]. Typically, routing decisions are made per token using a learned router that outputs a probability distribution over experts—a paradigm known as Token Choice (TC) [Fedus et al., 2022]. To encourage balanced expert usage, various strategies have been proposed, including sequence-level auxiliary losses such as load balancing loss (LBL) [Fedus et al., 2022] or the Expert Choice (EC) routing variant which generates a distribution over a sparse set of activated tokens for each expert [Zhou et al., 2022]. Section 5 covers additional strategies for load balancing.
Load balancing strategies often encourage a uniform distribution over experts to avoid collapse. This approach has proven to be useful to stabilize MoEs during training, and has been used in many recent works [Muennighoff et al., 2025, Dai et al., 2024, DeepSeek-AI et al., 2025, Xue et al., 2024]. However, in this paper, we argue that imposing a uniform distribution over experts causes MoE models to expend their capacity acquiring the same knowledge across multiple experts. Besides the inefficiencies imposed by this approach, exposing similar tokens to several different experts during training results in inconsistent routing behavior and expert assignments. This in turn further exacerbates knowledge redundancy across experts. Previous work [Dai et al., 2024, Liu et al., 2024] suggests that the amount of knowledge shared between experts is correlated to losses in performance.
To encourage consistent expert assignments for similar input tokens during training, we propose preserving the relational structure among tokens during routing, resulting in similar expert distributions for similar tokens. We achieve this by promoting orthogonality in the router’s weights, as orthogonal matrices are dot-product (and thus, angle) preserving. We introduce similarity-preserving routers for MoE load balancing (SIMBAL), a novel load balancing auxiliary loss that maintains token-wise relational structure by softly encouraging orthogonality in the router weights. Unlike methods that impose orthogonality through explicit parameter constraints—which are computationally expensive and numerically unstable (see Section 4.1)—SIMBAL aligns the Gram matrix $(Q^{\top} Q)$ of router weights with the identity matrix. This softly regularizes router outputs to preserve pairwise token similarities, achieving the benefits of orthogonal routing with significantly lower computational cost.
By maintaining semantic structure and promoting diverse expert usage, SIMBAL reduces redundancy, accelerates convergence, and improves final model quality. Our models require $36\%$ fewer training tokens to reach the same loss as LBL, and achieve 0.213 lower perplexity given the same compute budget.
# 2 Background
# 2.1 Mixtures of Experts
A Mixture of Experts (MoE) model sparsely activates certain parameters during inference, in contrast to standard dense networks where all parameters are used. In this work, we focus on Mixture of Experts models for the Transformer architecture [Vaswani et al., 2017], a popular choice for training models on sequence-wise data such as those seen in natural language.
Transformers are typically composed of a series of blocks, each consisting of a self-attention module followed by a feed-forward network (FFN). The FFN is usually a two-layer fully connected network with a large hidden dimensionality. For example, given an input vector $x \in \mathbb{R}^{D_M}$, where $D_M$ is the model (input/output) dimensionality, the standard FFN computes:
$$
\mathrm { F F N } ( x ) = { { W } _ { 2 } } \cdot \sigma ( { { W } _ { 1 } } x + { { b } _ { 1 } } ) + { { b } _ { 2 } } ,
$$
where $W _ { 1 } \in \mathbb { R } ^ { D _ { F } \times D _ { M } }$ , $W _ { 2 } \in \mathbb { R } ^ { D _ { M } \times D _ { F } }$ , $b _ { 1 } \in \mathbb { R } ^ { D _ { F } }$ , and $b _ { 2 } \in \mathbb { R } ^ { D _ { M } }$ . The intermediate hidden dimension $D _ { F }$ is typically much larger than $D _ { M }$ . The nonlinearity $\sigma$ is an activation function; we use SwiGLU [Shazeer, 2020].
In a Mixture of Experts Transformer, the FFN is replaced by a set of smaller, parallel FFNs called “experts.” Let there be $E$ such experts. Each expert has its own parameters $\{ W_1^{(e)}, W_2^{(e)}, b_1^{(e)}, b_2^{(e)} \}$, where $W_1^{(e)} \in \mathbb{R}^{D_E \times D_M}$, $W_2^{(e)} \in \mathbb{R}^{D_M \times D_E}$, $b_1^{(e)} \in \mathbb{R}^{D_E}$, and $b_2^{(e)} \in \mathbb{R}^{D_M}$. Here, $D_E$ is the hidden dimension used within each expert.
A routing mechanism assigns each token $x \in \mathbb { R } ^ { D _ { M } }$ to a small subset of $A$ activated experts (typically $A \ll E$ ). The router is a linear transformation $R \in \mathbb { R } ^ { D _ { M } \times E }$ followed by a sparse top- $A$ selection, producing expert indices $i _ { 1 } , \dots , i _ { A }$ and associated routing weights $r _ { 1 } , \ldots , r _ { A }$ . The MoE layer then computes:
$$
\mathbf { M o E } ( x ) = \sum _ { a = 1 } ^ { A } r _ { a } \cdot \Big ( W _ { 2 } ^ { ( i _ { a } ) } \cdot \sigma ( W _ { 1 } ^ { ( i _ { a } ) } x + b _ { 1 } ^ { ( i _ { a } ) } ) + b _ { 2 } ^ { ( i _ { a } ) } \Big ) .
$$
This definition of the MoE can also be viewed as a weighted sum over expert FFN outputs, skipping the computation for any expert where the weight is zero. This architecture enables scaling model capacity via $E$ without a proportional increase in computational cost, as only $A$ experts are active per input.
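As a concrete sketch, a top-$A$ token-choice MoE forward pass for a single token might look like the following (NumPy; renormalizing the selected routing weights is one common convention and an assumption here, and the toy "experts" are plain linear maps rather than full FFNs):

```python
import numpy as np

def moe_forward(x, router, experts, A=2):
    """Route a token to its top-A experts and sum the weighted outputs.
    x: (D_M,), router: (D_M, E), experts: list of E callables."""
    logits = x @ router
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over experts
    top = np.argsort(probs)[::-1][:A]        # indices of the top-A experts
    weights = probs[top] / probs[top].sum()  # renormalize selected weights
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
D, E = 8, 4
router = rng.normal(size=(D, E))
# Tiny linear stand-ins for the expert FFNs:
mats = [rng.normal(size=(D, D)) for _ in range(E)]
experts = [lambda x, W=W: W @ x for W in mats]
y = moe_forward(rng.normal(size=D), router, experts)
```

Only the $A$ selected experts are evaluated, which is the source of the compute savings described above.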
# 2.2 Expert Routing
Despite the small parameter count of MoE routers (in our larger setting, $0.018\%$ of the total parameters), they have an outsized impact on the performance and capacity of the model, as they orchestrate billions of parameters. Thus, it is imperative to pay careful attention to this mechanism when training MoE models. In MoE Transformers, routing is computed from the previous attention output $x \in \mathbb{R}^{D_M}$ via a learned router matrix $R \in \mathbb{R}^{D_M \times E}$, producing scores $xR \in \mathbb{R}^{E}$. Applying a gating function $G$ results in routing weights $r = G(xR)$. We use softmax, which generates a probability distribution over experts, from which the top-$A$ active experts are selected and weighted for each token.
We compare our approach to balancing with the Load Balancing Loss (LBL) presented by Fedus et al. [2022]. This setup is highly popular and represents the state of the art, being used in Muennighoff et al. [2025], DeepSeek-AI et al. [2025], Dai et al. [2024], and Xue et al. [2024] (we give an overview of alternative methods and their limitations in Section 5). LBL encourages uniform expert usage by correlating how frequently each expert is selected with how much routing weight it receives. Let $f_i$ be the fraction of tokens routed to expert $i$, $P_i$ the average routing probability for expert $i$, and $E$ the number of experts. The LBL is defined as:
$$
{ \mathcal { L } } _ { \mathrm { L B L } } = { \boldsymbol { \alpha } } \cdot { \boldsymbol { E } } \cdot \sum _ { i = 1 } ^ { E } f _ { i } \cdot P _ { i }
$$
Minimizing this loss encourages the router to distribute tokens more evenly across experts. However, it may require tuning of a loss coefficient $\alpha$ to avoid overpowering the main training objective.
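A minimal sketch of computing this loss from router outputs (NumPy; variable names are our own):

```python
import numpy as np

def load_balancing_loss(probs, top_indices, alpha=0.01):
    """L_LBL = alpha * E * sum_i f_i * P_i, following Fedus et al. [2022].
    probs: (T, E) router softmax outputs over T tokens;
    top_indices: (T, A) indices of the selected experts per token."""
    T, E = probs.shape
    f = np.bincount(top_indices.ravel(), minlength=E) / top_indices.size  # f_i
    P = probs.mean(axis=0)                                                # P_i
    return alpha * E * float((f * P).sum())

# Perfectly uniform routing: f_i = P_i = 1/E, so sum_i f_i P_i = 1/E and
# the loss attains its minimum value of alpha.
E, T = 4, 8
probs = np.full((T, E), 1 / E)
top = (np.arange(T) % E).reshape(T, 1)
loss = load_balancing_loss(probs, top, alpha=1.0)
```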
# 3 Methods
We propose preserving token-wise structural relationships to ensure effective and consistent usage of experts during training. We accomplish this by encouraging orthogonality in the router, which preserves the pairwise angles of the inputs. In this section, we explain the methods used to achieve our results, and our design choices.
# 3.1 Load Balancing via Orthonormal Routers
A natural strategy to ensure expert choices correlate with token-wise relationships is to constrain the router weights to form an orthonormal (and thus, dot-product preserving) matrix. PyTorch [Paszke et al., 2019] provides a utility for this using a QR decomposition, producing a matrix $Q \in \mathbb{R}^{m \times n}$ such that $Q^{\top} Q = I_n$ if $m \geqslant n$ (as is typically the case with MoE routers).
While appealing, the cost of using this orthogonal parameterization is prohibitively expensive in wallclock time when applied to large-scale models, because the algorithms used to ensure this property are computationally expensive. Instead, we propose a loss that encourages structure preservation without requiring explicit parameterization.
Let the router be a matrix $R \in \mathbb { R } ^ { D _ { M } \times E }$ , where $D _ { M }$ is the model dimension and $E$ is the number of experts. Since $E \ll D _ { M }$ , we minimize the deviation of the Gram matrix $R ^ { \top } R$ from the identity:
$$
\mathcal { L } _ { \mathrm { o r t h } } = \left\| R ^ { \top } R - I _ { E } \right\| _ { 1 }
$$
This loss is dataset-agnostic and computationally cheap. We additionally initialize the router with a (near) orthogonal initialization [Saxe et al., 2014] (though it should be sufficient to simply run a few router-only training steps, see Table 2), as we find it results in quicker convergence. We call this method SIMBAL, as we are effectively balancing by preserving the pair-wise similarity of the tokens. We scale this loss with a coefficient of 0.1, as we find this marginally improves model perplexity. Without the coefficient, it still far outperforms previous methods.
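The loss itself is a few lines; the sketch below (NumPy, with our own variable names) also checks the two limiting cases:

```python
import numpy as np

def simbal_loss(R, coeff=0.1):
    """L_orth = || R^T R - I_E ||_1 (entrywise L1), scaled by the coefficient.
    R: (D_M, E) router weight matrix with E << D_M."""
    E = R.shape[1]
    gram = R.T @ R
    return coeff * float(np.abs(gram - np.eye(E)).sum())

D_M, E = 1536, 32  # router dimensions used in the paper's Table 2
# A matrix with orthonormal columns gives (numerically) zero loss:
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(D_M, E)))
zero = simbal_loss(Q)
# A badly conditioned router (all-equal columns) is heavily penalized:
nonzero = simbal_loss(np.full((D_M, E), 0.1))
```

In training, this term is simply added to the language modeling loss once per step, and the router is initialized near-orthogonally so the term starts near its minimum.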
# 3.2 Model Architecture and Training
Model Architecture. Our model architecture closely follows prior work by OLMo et al. [2025] and Muennighoff et al. [2025]. We use a Transformer backbone with RMSNorm [Zhang and Sennrich, 2019], SwiGLU activations [Shazeer, 2020], and Rotary Position Embeddings (RoPE) [Su et al., 2021]. We apply Z-loss [Chowdhery et al., 2022, Team, 2025] with a coefficient of 1e-5, as in OLMo et al. [2025]. Unlike OLMo 2, we do not modify the placement of normalization layers, nor do we apply QK-Norm [Dehghani et al., 2023]. We replace all FFN layers with MoE layers. Further architectural details can be found in Table 1. Our implementation builds upon the open-source OLMo codebase [OLMo et al., 2025].

Table 1: Parameters used for the model architecture and training. Parameter (active, total) counts include token embeddings.
For the LBL baseline, we follow Muennighoff et al. [2025] and Wang et al. [2024], using a loss coefficient of 0.01. In contrast, our method does not require a coefficient; with appropriate initialization, the load-balancing loss converges quickly.
Model Scales and Training. We pretrain models at two scales: a medium model (MoE-M) with 230M active and 627M total parameters, and a large model (MoE-L) with 762M active and 3.14B total parameters (including embeddings). For each scale, we performed a brief hyperparameter sweep across three learning rates. All models are trained using the AdamW optimizer [Loshchilov and Hutter, 2019], with a weight decay of 0.01, linear warm-up from $10 \%$ of the peak learning rate over 2000 steps, followed by cosine decay [Loshchilov and Hutter, 2017] to $10 \%$ of the peak learning rate. Additional model specifications are listed in Table 1. All model parameters are in bfloat16.
All models are trained on a subset of tokens from the DCLM-pool-400m-1x dataset [Li et al., 2025] (used in other work such as Muennighoff et al. [2025]), tokenized using the cl100k_base tokenizer from the tiktoken library [OpenAI, 2024]. We reserve one file shard (77M tokens) for validation. All MoE-M models are trained on 19.9B tokens, while MoE-L models are trained on 78.6B tokens. No further fine-tuning is performed, as our focus is on the pretraining phase, which is typically the most computationally intensive stage of LLM development.
Compute and FLOP Estimates. All models are trained using Distributed Data Parallelism (DDP) [Li et al., 2020]. For MoE-M, we use 8 NVIDIA A100 40GB GPUs per training run; for MoE-L, we use 8 AMD MI300X 192GB accelerators.
To estimate total training FLOPs, we follow the approximation from Brown et al. [2020] of $6 \times N \times T$, where $N$ is the number of non-embedding active parameters and $T$ is the number of training tokens.
For MoE-M and Dense-M, with 230M active parameters and 77M in embeddings, trained on $2 \times 1 0 ^ { 1 0 }$ tokens, this results in:
$$
6 \times \left( (230 - 77) \times 10^{6} \right) \times 2 \times 10^{10} = 1.836 \times 10^{19}\ \mathrm{FLOPs}
$$
For MoE-L and Dense-L, with 761M active parameters and 154M in embeddings, trained on $7.8 \times 10^{10}$ tokens, this results in:
$$
6 \times \left( (761 - 154) \times 10^{6} \right) \times 7.8 \times 10^{10} = 2.840 \times 10^{20}\ \mathrm{FLOPs}
$$
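These two estimates can be checked with a few lines of arithmetic:

```python
# Reproducing the FLOP estimates above: 6 * (non-embedding active params) * tokens.
def train_flops(active_params, embed_params, tokens):
    return 6 * (active_params - embed_params) * tokens

moe_m = train_flops(230e6, 77e6, 2e10)     # MoE-M / Dense-M
moe_l = train_flops(761e6, 154e6, 7.8e10)  # MoE-L / Dense-L
```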
Table 2: Comparison of orthogonality preservation methods, average and standard deviation over 100 trials. We report the maximum deviation from orthonormality (Max Dev) and the mean L1 distance to the identity matrix (L1 Dist) after casting to our training precision. Trained refers to our loss-based method after 100 optimization steps. Param uses the orthogonal parameterization from Lezcano-Casado [2019]. OrthoInit follows the initialization from Saxe et al. [2014]. All matrices have shape $1 5 3 6 \times 3 2$ , matching our router dimensions. Best results in each column are bolded.
# 3.3 Measuring Expert Similarity
Previous work evaluates expert specialization by measuring performance degradation when the top fraction of experts is dropped [Dai et al., 2024]. However, this approach is expensive for exhaustive comparison, as it requires inference on the full validation set for each combination of dropped experts.
We instead propose Pairwise Expert Similarity $( P E S )$ : a smoother, scalable, and robust metric for quantifying expert specialization based on the similarity of expert outputs across a batch of tokens. Ideally, specialized experts should produce more diverse (i.e., less similar) outputs, maximizing the representational span of the expert set. PES is defined as:
$$
\mathrm { P E S } _ { \mathrm { m o d e l } } = \frac { 1 } { | B | } \sum _ { b \in B } \mathcal { C } _ { \mathrm { e x p e r t } } ( \mathbf { x } _ { b } )
$$
$$
\mathcal { C } _ { \mathrm { e x p e r t } } ( \mathbf { x } ) = \frac { 2 } { N ( N - 1 ) } \sum _ { i = 1 } ^ { N } \sum _ { j = i + 1 } ^ { N } \cos \left( \mathbf { f } _ { i } ( \mathbf { x } ) , \mathbf { f } _ { j } ( \mathbf { x } ) \right)
$$
Here, $\mathcal { C } _ { \mathrm { e x p e r t } } ( \mathbf { x } )$ denotes the mean cosine similarity of expert outputs for batch sample $\mathbf { x }$ , and $\mathrm { P E S } _ { \mathrm { m o d e l } }$ is the batch-averaged similarity across all $| B |$ samples. $N$ is the number of experts, $\mathbf { f } _ { i }$ is the function computed by the $i$ -th expert. The cosine similarity $\cos ( \mathbf { u } , \mathbf { v } )$ is defined as $\frac { \mathbf { u \cdot v } } { \| \mathbf { u } \| \cdot \| \mathbf { v } \| }$ , measuring the angle between output vectors.
PES is intuitive (lower similarity indicates greater diversity and lower redundancy), considers all experts rather than just the most frequently selected, and is highly scalable. Unlike dropout-based evaluation, which requires repeated forward passes per ablation, PES requires far less additional computation: it can be computed batch-wise within the expert computation, and needs only a single inference pass with the full model parameter count (a multiplier of 3.6–4.9× FLOPs in our case), rather than potentially hundreds of evaluation passes with the MoE for similarly comprehensive evaluations. We use 4M randomly sampled tokens to calculate PES.
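A minimal sketch of the per-sample term $\mathcal{C}_{\mathrm{expert}}$ (NumPy; expert outputs stacked row-wise, names our own):

```python
import numpy as np

def expert_similarity(expert_outputs):
    """Mean pairwise cosine similarity across expert outputs for one sample.
    expert_outputs: (N, D) array, row i = output of expert i on input x."""
    norms = np.linalg.norm(expert_outputs, axis=1, keepdims=True)
    unit = expert_outputs / norms
    sims = unit @ unit.T               # (N, N) cosine similarities
    iu = np.triu_indices(len(expert_outputs), k=1)  # all i < j pairs
    return float(sims[iu].mean())

# Identical experts are maximally redundant; orthogonal outputs are not.
identical = expert_similarity(np.ones((4, 8)))
orthogonal = expert_similarity(np.eye(4, 8))
```

PES for the model is then the mean of this quantity over the batch, as in the equations above.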
# 4 Experiments
# 4.1 Orthogonalization and Balancing
Our key contribution is that we perform load balancing by using a router that is encouraged to be orthogonal, and thus preserves token-wise relationships. Rather than enforcing orthogonality through explicit parameter constraints—which is computationally expensive, requires frequent reparameterization, and is prone to numerical instability, particularly when training large-scale models—we instead use the loss function described in Section 3.1. We now evaluate the effectiveness of promoting orthogonality in the router.
As PyTorch currently lacks support for orthogonal parameterizations in the lower-precision formats commonly used to train language models (which we use), we perform orthogonalization in float32 and then cast the resulting matrix to bfloat16, our training precision. Our loss-based method trains the matrix directly in bfloat16. We report both the maximum and mean deviation from orthonormality, as well as the final loss values, in Table 2. We find that our loss consistently produces matrices that more closely approximate orthonormality than direct orthogonal parameterizations in our scenario. In fact, our approach matches or exceeds the throughput of efficient orthogonal parameterizations, while avoiding the need for expensive reorthogonalization steps. For this synthetic experiment, we train with AdamW (with no weight decay) and a learning rate cosine-decayed from $1 \times 10^{-4}$ to $1 \times 10^{-5}$ over 100 steps. In our MoEs, we simply add our loss as an auxiliary loss term and update once per language model training step.
Figure 1: Validation loss curves for checkpoints during training. In both MoE-M and MoE-L, we achieve the same loss roughly $36 \%$ faster.
Figure 2: Expert utilization throughout training for MoE-M (left) and MoE-L (right), comparing LBL, our method (SimBal), and a baseline with no load balancing. We measure the number of unique experts activated on our full 77M-token validation set over time. Without any balancing, the expert routing collapses to a smaller set of experts. Both LBL and SimBal maintain full expert utilization across training. The no-loss baseline was truncated early.
In terms of expert utilization in MoEs, our method avoids collapse comparably to LBL, ensuring that no experts remain unutilized. Figure 2 illustrates the unique expert usage over time at two different scales, compared to LBL and using no losses (which results in unused experts).
# 4.2 Language Modeling
We compare our method to LBL by training language models according to the setup described in Section 3.2, evaluating performance based on the perplexity of the final checkpoint. The resulting models are reported in Table 3. We additionally report Sequence-wise Expert Utilization (SEU), computed as the mean fraction of experts used per sequence, to show that load balance within a sequence is not significantly degraded.
Figure 3: Analysis of expert redundancy in MoE-L models. (a) PES across different layers, our approach (blue) maintains significantly lower redundancy than LBL (orange). Darker $\mathbf { \tau } = \mathbf { \tau }$ later in training. (b) Rate of change of PES during training, averaged over all layers. Redundancy occurs when many distinct experts see similar tokens, and is most likely to happen early in training, as we observe. We note that this is $> 0$ at most points for LBL, suggesting it exacerbates redundancy during the majority of training.
Table 3: Model setup and performance. The best perplexity per-category is bolded.
Across both MoE-M and MoE-L scales, SimBal converges approximately $36\%$ faster than LBL. We show validation loss during training in Figure 1. For MoE-L, SimBal approaches the target loss after processing roughly 50B tokens, compared to 78.6B for LBL—a $36\%$ improvement. Similarly, in the MoE-M setting, SimBal reaches comparable loss levels at around 12.7B tokens, versus 19.9B for LBL. Overall, our method significantly outperforms LBL in both final performance and training efficiency.
# 4.3 Redundancy and Specialization in Experts
Motivated by the work of Dai et al. [2024], we investigate expert specialization and redundancy. As detailed in Section 3.3, we assess these properties using Pairwise Expert Similarity (PES), in contrast to their expert dropout experiments. We also do not perform expert dropping and randomized expert experiments, as we find they lack the granularity necessary to capture fine-grained redundancy patterns, and have prohibitive computational costs when comprehensively evaluating many experts. In contrast, PES offers a lightweight and scalable means of quantifying redundancy, enabling per-layer, per-checkpoint analysis across all experts in parallel.
We hypothesized that our method would lead to less redundant experts compared to LBL. This stems from the fact that LBL promotes a uniform expert distribution throughout training, which can lead to instability in the early training stages. Frequent shifts in routing distributions—caused by changing token embeddings—can induce abrupt changes in expert assignment. When the expert distribution is nearly uniform, even minor input perturbations can result in a different expert being selected. We estimate this change in expert distribution by measuring the change in redundancy, as the main source of redundancy is many experts seeing similar sets of tokens, which is exacerbated by the sensitivity of the uniform expert distribution.
As shown in Figure 3(b), the majority of expert redundancy emerges early in training, coinciding with the highest volatility in LBL (orange). During this phase, embeddings evolve rapidly, leading to unstable routing behavior and increased expert redundancy. Additionally, the change in expert redundancy is generally significantly above 0 for most of training, reinforcing our claim that LBL exacerbates redundancy.
Table 4: Performance across three scaling coefficients to SIMBAL.
Figure 4: Rate of change in minimum PES (over the layers of a model) over a training run, comparing LBL (higher perplexity) and SimBal (lower perplexity).
In contrast, our method (blue) exhibits much greater stability. While expert distributions still adapt alongside evolving embeddings, they stabilize quickly. As illustrated in Figure 3(a), this leads to a consistently lower PES in the final model. Additionally, in Figure 3(b), using our loss results in a rate of change very close to zero for most of training, showing that we do not face the same redundancy-encouraging issue of LBL.
Final PES values are summarized in Table 3. To reduce sensitivity to outliers, we report the minimum PES across all layers, filtering out spikes confined to a single layer (common with LBL). We choose the minimum since per-layer PES exhibits primarily upward jumps rather than substantial dips, and we want this metric to be as simple and intuitive as possible. SimBal consistently produces models with substantially lower minimum PES than LBL. Figure 4 shows the rate of change in minimum PES over time.
# 4.4 Coefficient Sensitivity
We examine the coefficient sensitivity of SimBal to determine whether tuning is necessary. We test coefficients of 0.01, 0.1, and 1.0, with results in Table 4. We observe a slight benefit from tuning, but performance is not very sensitive to the coefficient beyond a certain point; we recommend a coefficient of 0.1 for most setups. We additionally find that the final minimum PES values are lower across the board compared to using LBL.
# 5 Related Work
There has been significant interest in MoE models for scaling LLMs, as shown in Lepikhin et al. [2020], Zoph et al. [2022], Fedus et al. [2022], Xue et al. [2024], DeepSeek-AI et al. [2025], Databricks [2024], Llama [2025], Muennighoff et al. [2025], and more. We explore related design choices below.
Routing and Load Balancing Mechanisms. Efficient routing in MoE architectures involves selecting appropriate experts for each token (Token Choice) [Fedus et al., 2022] while ensuring balanced expert utilization. Some previous work suggests allowing experts to choose the tokens they process (Expert Choice) [Zhou et al., 2022], but this tends to cause performance issues in autoregressive generation [Muennighoff et al., 2025] and can leak information about future tokens [Wang et al., 2024].
Traditional approaches employ an auxiliary load balancing loss [Fedus et al., 2022] to encourage a uniform distribution over experts, which can interfere with the main training objective and potentially degrade performance. To address this, auxiliary-loss-free (LF) strategies [Wang et al., 2024] have been introduced, notably in DeepSeek-V3 [DeepSeek-AI et al., 2025], though there in conjunction with an auxiliary balancing loss. This method dynamically adjusts per-expert bias terms added to the routing scores, guiding the top-K expert selection without introducing additional gradients. While effective at improving global load balance, it struggles to balance the MoE sequence-wise, which can degrade throughput. We explore improving loss-free load balancing using our auxiliary loss in Appendix A.1. Additionally, LF is sensitive to batch size: Wang et al. [2024] find a 0.5 decrease in perplexity when using a batch size of 512 rather than 4 (per-device, no sync), a sensitivity that is much less drastic for LBL and irrelevant for SimBal (as it is invariant to the data). Wang et al. [2024] also claim LBL achieves improved specialization with a distributed sync to maximize batch size; we eliminate the need for this entirely.
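For reference, the auxiliary loss of Fedus et al. [2022] takes roughly the form below: the number of experts times the dot product between the fraction of tokens routed to each expert and the mean router probability per expert. This is a minimal top-1 sketch that omits the scaling coefficient; names are illustrative.

```python
import numpy as np

def load_balance_loss(router_probs, expert_assignment, num_experts):
    """Switch-Transformer-style auxiliary loss: N * sum_i f_i * P_i.

    router_probs: (tokens, num_experts) softmax router outputs.
    expert_assignment: (tokens,) index of the chosen (top-1) expert.
    f_i is the empirical routing fraction; P_i the mean router probability.
    """
    probs = np.asarray(router_probs, dtype=float)
    f = np.bincount(expert_assignment, minlength=num_experts) / len(expert_assignment)
    P = probs.mean(axis=0)
    return num_experts * float(f @ P)

# Perfectly uniform routing over 2 experts attains the minimum value of 1.0.
probs = np.array([[0.5, 0.5], [0.5, 0.5]])
loss = load_balance_loss(probs, np.array([0, 1]), num_experts=2)
```

The minimum is achieved exactly at the uniform distribution, which is the property the surrounding discussion argues can destabilize routing early in training.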
Orthogonality in MoE. Prior studies have applied orthogonality to diversify expert representations in MoE models. OMoE [Liu et al., 2024] introduces an optimizer that updates each expert in a direction orthogonal to the subspace spanned by other experts, enhancing representation diversity. MOORE [Hendawy et al., 2024] employs the Gram-Schmidt process to enforce orthogonality among expert representations in multi-task reinforcement learning. In contrast, our approach applies orthogonality at the router level, not the experts themselves. This strategy offers computational efficiency by avoiding expensive operations during training and allows seamless integration into existing architectures. Moreover, by not constraining expert weights, we avoid potential performance degradation due to restrictive parameter constraints.
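To make the router-level contrast concrete, here is a generic orthogonality penalty on the router weight matrix (experts-by-dimension): the squared Frobenius distance of its Gram matrix from the identity. This is an illustrative sketch of the idea of orthogonalizing routing directions, not the paper's exact loss.

```python
import numpy as np

def router_orthogonality_penalty(W):
    """Illustrative router-level orthogonality penalty: ||W W^T - I||_F^2.

    W: (num_experts, dim) router weight matrix. The penalty is zero when
    expert routing directions are orthonormal, and grows as they collapse
    onto one another. (A sketch, not the paper's exact formulation.)
    """
    W = np.asarray(W, dtype=float)
    G = W @ W.T                       # Gram matrix of router rows
    I = np.eye(W.shape[0])
    return float(np.sum((G - I) ** 2))

orthogonal = router_orthogonality_penalty(np.eye(3))       # orthonormal rows
collapsed = router_orthogonality_penalty(np.ones((3, 3)))  # identical rows
```

Because the penalty touches only the small router matrix, it avoids the per-expert projection or Gram-Schmidt steps that OMoE and MOORE apply to the experts themselves.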
# 6 Limitations
While we train our models with relatively large data multipliers, prior work such as Muennighoff et al. [2025] suggests that substantially more data may be necessary to achieve strong performance on downstream benchmarks. Nevertheless, our training setup provides sufficient scale to meaningfully compare the relative effectiveness of different balancing methods.
Finally, although our architectural choices align with recent MoE literature, our study is limited to a single set of design decisions. We leave the exploration of alternative configurations to future work. For instance, we do not investigate how token dropping might affect the performance of our balancing mechanism (we focus on higher-quality dropless models [Gale et al., 2022]), which could be a valuable direction for further analysis.

# Abstract

Sparse Mixture of Experts (MoE) models offer a scalable and efficient architecture for training large neural networks by activating only a subset of parameters ("experts") for each input. A learned router computes a distribution over these experts and assigns input tokens to a small subset. However, without auxiliary balancing mechanisms, routers often converge to using only a few experts, severely limiting model capacity and degrading performance. Most current load balancing mechanisms encourage a roughly uniform distribution of experts per token. During training, this can result in inconsistent routing behavior, causing the model to spend its capacity learning redundant knowledge. We address this by introducing a novel load balancing loss that preserves token-wise relational structure, encouraging consistent expert choices for similar inputs during training. Our experimental results show that applying our loss to the router results in 36% faster convergence and lower redundancy compared to a popular load balancing loss.
# 1 Introduction
Understanding road topology is essential for safe and effective autonomous driving, as it provides vehicles with crucial spatial and contextual information for navigation. A comprehensive topology model requires reasoning over lane-to-lane (L2L) and lane-to-traffic-element (L2T) relationships, enabling autonomous vehicles to follow traffic regulations and execute safe maneuvers. This task involves two key components: 1) Perception, which detects lanes and traffic elements (e.g., traffic lights), with lane perception being distinct from generic object detection; and 2) Reasoning, which infers L2L and L2T relationships to construct a structured representation of road topology.
Lane Perception. Most existing methods [1, 2, 3] adapt generic object detectors [4, 5] for lane perception by modifying bounding box representations into lane points or parametric representations. However, these adaptations [1, 2, 6] often overlook the intrinsic geometric relationships [7, 8] among lanes (e.g., parallelism and connectivity), which humans naturally leverage for lane perception. TopoLogic [3] partially addresses this by introducing a geometry distance topology, which maps end-to-start point distances to connectivity labels (connected vs. non-connected) and integrates this information into lane feature learning via a Graph Neural Network (GNN). However, it focuses solely on connectivity and requires an additional GNN to encode these relationships into lane features. This leaves an open problem: how can we effectively integrate the inherent structural relationships in road scenes to enhance lane perception and topology reasoning?
Topology Reasoning. For L2L reasoning, existing methods [2, 6, 9] enhance lane features using coordinate-based encoding but are highly sensitive to endpoint shifts, causing errors [3]. TopoLogic [3] partially addresses this with geometric distance topology, applied both within the model and as post-processing. Our experiments (Supp. Tab. 3) show that removing this post-processing significantly degrades their L2L performance, indicating that the model fails to learn robust relational features. This raises a key question: Can we achieve L2L reasoning end-to-end without relying on post-processing? For L2T reasoning, which remains largely underexplored, current methods often model L2T relationships by naïvely combining BEV lane features with Front View (FV) traffic element features, ignoring the spatial discrepancy between these spaces. Topo2D [9] mitigates this issue by leveraging 2D lane features from an additional decoder, but adds computational overhead.
Our key insight is that relational modeling is crucial for both lane perception and topology reasoning, yet remains underexplored. To address this, we introduce explicit relational modeling for both tasks: In lane perception, we adopt a compact Bézier curve representation and design a relation-aware lane decoder with two key components: 1) Geometry-Biased Self-Attention (GBSA), which encodes inter-lane geometric relationships as attention biases, capturing spatial structures like parallelism and connectivity; and 2) Curve-Guided Cross-Attention (CGCA), which aggregates long-range context along lanes to enhance their representations. While the Bézier representation is compact and flexible [10], its sparse control points pose challenges for feature extraction, particularly for elongated or curved lanes (Fig. 3). We overcome this by leveraging on-curve relational cues to enhance lane representation learning (Sec. 3.2.2).
In topology reasoning, we introduce two specialized modules: 1) a geometry-enhanced L2L reasoning module, which encodes inter-lane distance into high-dimensional features, improving L2L connectivity predictions and reducing sensitivity to minor perception errors; and 2) a cross-view (X-view) L2T reasoning module, which bridges BEV lanes and FV traffic elements through a cross-view fusion design, aligning BEV lane features with FV traffic elements and enriching their representations with dual-view (BEV and FV) information. This design enables more robust L2T topology reasoning, as validated in our experiments. To further strengthen relation learning, we additionally introduce a contrastive learning strategy, inspired by InfoNCE [11, 12, 13], that improves the model's ability to distinguish connected (positive) from non-connected (negative) pairs for both L2L and L2T reasoning.
Overall, our contributions can be summarized as follows:
• We identify the limited exploration of relational modeling in existing methods and propose a relation-aware lane decoder with geometry-biased self-attention for inter-lane geometric relationships and curve-guided cross-attention for contextual information aggregation.
• We introduce two topology reasoning modules: a geometry-enhanced L2L module that captures inter-lane relationships with encoded geometry embeddings, and an X-view L2T module that bridges BEV and FV features for enhanced L2T reasoning. To further enhance relational reasoning, we integrate a novel contrastive learning strategy.
• Extensive experiments on the OpenLane-V2 dataset validate the effectiveness of our approach, surpassing previous methods across both detection and topology reasoning metrics.
# 2 Related Work
# 2.1 3D Lane Detection
3D lane detection is essential for accurately perceiving lane geometries in real-world traffic scenes. Recent research primarily focuses on extracting 3D lane features from monocular front-view images and can be broadly categorized into BEV-based and front-view-based approaches. BEV-based methods employ inverse perspective mapping (IPM) to transform front-view images into the BEV perspective, facilitating lane prediction [14, 15, 16, 17, 18, 7]. However, IPM-based methods suffer from distortions on non-flat roads due to their planar assumption, complicating lane detection in dynamic environments. To mitigate these limitations, front-view-based methods predict 3D lanes directly from FV image features, avoiding view transformation distortions. Recent approaches [19, 20, 21] employ query-based detectors [22, 5] directly on FV features, modeling 3D lane information without IPM and achieving improved performance. In our work, we extend lane perception to broader environments by leveraging multi-view images.
# 2.2 Online HD Map Construction
Online HD map construction aims to dynamically generate detailed maps of the road environment. Early methods, such as HDMapNet [23], use dense segmentation predictions with heuristic post-processing to vectorize map elements. VectorMapNet [24] improves upon this by adopting an end-to-end detection-and-serialization pipeline for generating map polylines. Subsequent studies enhance end-to-end HD map construction [25, 26, 27, 28, 29, 30, 8, 31, 32]. MapTR [25] adopts a DETR [22] framework with hierarchical query embeddings for map encoding. BeMapNet [27] and PivotNet [28] employ piecewise Bézier curves and dynamic point-based representations, respectively. InstaGraM [29] formulates map element generation as a graph problem utilizing a GNN-based framework. GeMap [8] learns HD map structures by modeling element shapes and relational properties, but it focuses on individual instance geometry modeling, does not explicitly capture topology relationships among lanes, and relies on polyline representations with equidistant points, which lack the flexibility and precision [28, 33] needed for nuanced lane description. To improve computational efficiency, various decoupled self-attention mechanisms [26, 32, 8] have been proposed for integrating intra-/inter-instance information. However, topology reasoning among elements remains limited in this area.
# 2.3 Driving Scene Topology Reasoning
Early efforts in topology reasoning focused on lane connectivity. STSU [34] is among the first models to construct lane graphs in an end-to-end manner using BEV representations. Following this, TPLR [35] introduces minimal cycles to enforce topological consistency. While effective, these methods focus solely on lane perception using monocular images and lack interactions with traffic elements, which are critical for comprehensive scene understanding.
Recent approaches have extended topology reasoning to jointly model both L2L and L2T relationships, leveraging multi-view data for richer contextual understanding. TopoNet [1] introduces a GNN-based framework that enhances topology prediction through message passing between lane and traffic element embeddings. TopoMLP [2] adopts MLP-based topology heads for more efficient topology reasoning. LaneSegNet [36], built upon OpenLane V2 [37], introduces lane segments augmented with left and right-side lane lines and proposes lane segment attention to capture intra-lane dependencies. However, these methods do not explicitly model relationships between lanes or between lanes and traffic elements, limiting their ability to capture dependencies among these objects. Topo2D [9] incorporates 2D detections as priors to support 3D topology reasoning, while TopoLogic [3] combines geometry distance-based topology estimation with query similarity-based relational modeling via
GNNs. However, its geometry distance topology reasoning is applied as a post-processing step for L2L reasoning. RoadPainter [6] further refines point localization by utilizing BEV masks.
In contrast, we recognize that explicit relational modeling is critical for both perception and topology reasoning. Our approach integrates relational modeling into both perception and reasoning in an end-to-end manner, jointly enhancing lane detection and L2L and L2T topology reasoning capabilities.
# 3 Method
Figure 2: The overall framework of RelTopo, processing multi-view images with two main branches: (1) the lane branch projects multi-view image features to BEV space for lane centerline detection and lane-to-lane topology estimation; (2) the traffic element branch detects traffic elements in the front view and infers lane-to-traffic-element relationships. The plotted symbols denote lane and traffic-element queries ($Q_{lane}$, $Q_{te}$) and their predicted outputs ($P_{lane}$, $P_{te}$). $L_{con}$ denotes our contrastive loss.
# 3.1 Overview
As shown in Fig. 2, our model consists of two primary branches: the Lane Branch and the Traffic Element Branch. The Lane Branch features a relation-aware lane decoder (Sec. 3.2), which incorporates a geometry-biased self-attention that enables each lane to focus on geometrically related peers, enhancing its structural understanding. Additionally, we introduce a curve-guided cross-attention mechanism, which aggregates contextual features along the lane query with sampled points from the underlying curve. For L2L (Sec. 3.3) and L2T (Sec. 3.4) topology reasoning, we incorporate geometry-enhanced relation embeddings for L2L and X-view relation embeddings for L2T to provide richer spatial context, improving the model's ability to capture topological relationships. Furthermore, we introduce an additional contrastive loss (Sec. 3.5.2) to refine relation learning, enhancing the model's ability to differentiate between various structural relationships.
# 3.2 Relation-Aware Lane Decoder
To capture inter-lane geometric dependencies, we introduce a geometry-biased self-attention mechanism (Sec. 3.2.1) that enables the model to attend more effectively to spatially related lanes. Additionally, given the elongated and curved nature of lanes, we propose a curve-guided cross-attention mechanism (Sec. 3.2.2) to capture long-range contextual features along the lane path.
# 3.2.1 Geometry-Biased Self-Attention
Learning effective query representations in DETR-like decoders can be slow and data-intensive [38, 39, 40]. [40] attributes this slow convergence to the lack of structural bias in query inputs and proposes a position relation module to accelerate the learning process. In lane perception, structurally related or connected lanes often share attributes, suggesting they can mutually enhance perception. Motivated by this, we propose a geometry-biased self-attention mechanism that encodes spatial relationships, such as inter-lane distances and angular differences, as attention biases. Unlike Topologic [3], which relies on GNNs to encode connectivity, we take a simpler yet effective approach by directly encoding geometric relationships as attention biases within self-attention. Additionally, we incorporate angular information for more comprehensive geometry relation modeling.
Formally, this mechanism is illustrated in Fig. 3 and defined in Eq. (1), where $\mathrm{Geometry}(l, l)_{(i,j)}$ represents the geometry bias term between the $i$-th and $j$-th lanes. $\mathrm{Dist}(l_i, l_j)$ (the minimum endpoint distance between lanes) and $\mathrm{Angle}(l_i, l_j)$ (the angular difference) are concatenated and encoded through an embedding layer GE, which applies sinusoidal encoding followed by an MLP. This mechanism enhances query learning by introducing structural bias, as suggested in [40], while also improving the capture of inherent lane geometric relationships.
$$
\mathbf{Q} = \mathrm{Softmax}\left( \frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_{\mathrm{model}}}} + \mathrm{Geometry}(l, l) \right) \mathbf{V}
$$
By emphasizing geometrically proximal lanes, this mechanism allows the model to allocate greater attention to spatially relevant lanes through the interleaved attention process, leading to a more robust understanding of lane topology (detailed in Sec. 3.3). Our method differs from TopoLogic [3], which uses end-to-start point distances and fails to capture relationships beyond connectivity, and from [40], which encodes progressive cross-layer box positional relationships for individual objects.
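The mechanism of Eq. (1) can be sketched as follows. Here the bias matrix is precomputed and passed in directly, standing in for the learned GE embedding of distances and angles; all names and shapes are illustrative.

```python
import numpy as np

def geometry_biased_attention(Q, K, V, geometry_bias):
    """Self-attention with an additive geometry bias on the logits,
    as in Eq. (1): softmax(Q K^T / sqrt(d) + bias) V.

    geometry_bias: (N, N) matrix, in the paper produced from inter-lane
    distance and angle by an embedding layer; here supplied directly.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + geometry_bias
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ V

# A strongly negative bias suppresses attention between unrelated lanes:
# with this bias each lane attends only to itself, so the output equals V.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4)); K = rng.normal(size=(2, 4)); V = rng.normal(size=(2, 4))
bias = np.array([[0.0, -1e9], [-1e9, 0.0]])
out = geometry_biased_attention(Q, K, V, bias)
```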
# 3.2.2 Curve-Guided Cross-Attention
Polyline representations with equidistant points often lack flexibility and precision [28, 33]. To overcome this limitation, we adopt a compact and flexible Bézier curve formulation, representing each lane as a third-degree Bézier curve defined by four control points. However, two challenges arise: 1) the sparsity of control points limits feature aggregation; 2) intermediate control points do not lie on the curve (green points in the right side of Fig. 3).
Figure 3: Illustration of our Bézier lane decoder layer, featuring our geometry-biased SA and curve-guided CA.
To address these issues, instead of relying solely on sparse control points as reference points in deformable attention like TopoDBA [41], we sample $K$ points along the Bézier curve, which serves as reference points for feature aggregation (Fig. 3 right side). Furthermore, to capture long-range intra-lane dependencies, we employ a shared lane query to integrally generate offsets and weights for these $K$ reference points. This design enables each reference point to be updated under global lane information, enhancing lane representation learning through iterative updates.
Given the feature map $\boldsymbol{x}$, the $l^{th}$ lane query $\boldsymbol{q}_l$ and its reference point $\boldsymbol{p}_l$, we adopt the deformable attention mechanism [5] to update query features, formulated as:
$$
\mathrm{DeformAttn}(q_l, p_l, \boldsymbol{x}) = \sum_{m=1}^{M} W_m \left[ \sum_{i=1}^{N} \sum_{j=1}^{K} A_{mlij} \cdot W_m^{\prime} \boldsymbol{x}(p_l + \Delta \boldsymbol{p}_{mlij}) \right],
$$
where $M$ is the number of attention heads, $N$ denotes the number of offset locations per sampled point, and $K$ is the number of sampled points along each curve. $A_{mlij}$ and $\Delta p_{mlij}$ denote the attention weights and sampling offsets, respectively, for the $i^{th}$ offset of the $j^{th}$ sampled point along the curve in the $m^{th}$ attention head.
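A heavily simplified sketch of the inner sum above, for a single head and with nearest-pixel lookup standing in for bilinear sampling; shapes and names are illustrative, not the paper's implementation:

```python
import numpy as np

def deform_attn_1head(x, ref_points, offsets, weights):
    """Single-head deformable-attention sketch: gather features at
    ref_points + offsets and combine them with attention weights.

    x: (H, W, C) feature map; ref_points: (K, 2) sampled curve points;
    offsets: (K, N, 2) learned offsets; weights: (K, N) attention weights
    (assumed to sum to 1 over all samples). Nearest-pixel sampling is used
    here for brevity instead of bilinear interpolation.
    """
    H, W, C = x.shape
    out = np.zeros(C)
    for j, p in enumerate(ref_points):
        for i in range(offsets.shape[1]):
            y, xx = p + offsets[j, i]
            yi = int(np.clip(round(y), 0, H - 1))
            xi = int(np.clip(round(xx), 0, W - 1))
            out += weights[j, i] * x[yi, xi]
    return out

x = np.arange(16, dtype=float).reshape(4, 4, 1)
ref = np.array([[1.0, 1.0], [2.0, 2.0]])      # two on-curve reference points
off = np.zeros((2, 1, 2))                      # one (zero) offset each
w = np.array([[0.5], [0.5]])
feat = deform_attn_1head(x, ref, off, w)
```

The key point mirrored from the text: all reference points of one lane share the query that generates offsets and weights, so each sample is placed with global lane information.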
Unlike BézierFormer [42], which projects sampled points onto image feature maps and performs grid_sample to generate $N_{ref}$ point queries, each undergoing separate deformable cross-attention, our approach is simpler and more effective. By leveraging global lane information, our method refines local feature aggregation, enhancing the modeling of long-range intra-lane dependencies. Besides, unlike BeMapNet [27], which represents each map element with multiple Bézier curves, our approach maintains efficiency and accuracy by using a single Bézier curve per lane. We also conduct comparative experiments against these two methods (see Supp. Tab. 4), showing that our method achieves better modeling capability for lane structures.
# 3.3 Geometry-Enhanced L2L Reasoning
We propose a geometry-enhanced L2L topology reasoning module that explicitly encodes inter-lane geometric relationships. Given the refined lane features $\mathbf{Q}_{\mathrm{lane}} \in \mathbb{R}^{N \times C}$ and lane endpoint positions $\mathbf{P}_{\mathrm{lane}} \in \mathbb{R}^{N \times 2}$, we construct an L2L relation embedding $\mathbf{G}_{\mathrm{L2L}} \in \mathbb{R}^{N \times N \times C}$ by integrating positional embeddings and geometric distance embeddings. Specifically, we first project $\mathbf{Q}_{\mathrm{lane}}$ into predecessor and successor embeddings using two MLPs and inject positional embeddings $\mathbf{PE}_{\mathrm{lane}}$ derived from the lane endpoints $\mathbf{P}_{\mathrm{lane}}$. The L2L relation embedding is then constructed as:
$$
\mathbf{G}_{\mathrm{L2L}} = \left( \mathrm{MLP}_1(\mathbf{Q}_{\mathrm{lane}}) \odot \mathrm{MLP}_2(\mathbf{Q}_{\mathrm{lane}}) \right) + \mathbf{PE}_{\mathrm{lane}},
$$
where $\odot$ denotes broadcast concatenation.
To reinforce connectivity relationships, we incorporate geometric distance features as additional cues. Specifically, we compute the end-to-start point distance for each lane pair and embed it into a high-dimensional space using an MLP:
$$
\mathrm{DistEmbed}_{\mathrm{L2L}}^{i,j} = \mathrm{MLP}\left( \mathrm{distance}(\mathbf{P}_i^{e}, \mathbf{P}_j^{s}) \right),
$$
where $\mathbf { P } _ { i } ^ { e }$ and $\mathbf { P } _ { j } ^ { s }$ denote the endpoint of lane $i$ and the starting point of lane $j$ , respectively. These geometric cues are integrated into $\mathbf { G } _ { \mathrm { L 2 L } }$ , which is then processed by an MLP to predict L2L topology:
$$
\mathbf{T}_{\mathrm{L2L}} = \mathrm{MLP}(\mathbf{G}_{\mathrm{L2L}} + \mathrm{DistEmbed}_{\mathrm{L2L}}).
$$
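The pipeline above can be sketched end to end as a toy computation. Random linear maps stand in for the MLPs, broadcast addition stands in for the broadcast concatenation, and a scalar $\exp(-d/\sigma)$ term stands in for the learned distance embedding; every parameter name here is hypothetical.

```python
import numpy as np

def l2l_topology(lane_feats, ends, starts, W_pred, W_succ, w_out, sigma=2.0):
    """Toy sketch of geometry-enhanced L2L reasoning:
    pair embedding from predecessor/successor projections plus an
    end-to-start distance term, scored by a linear head.

    lane_feats: (N, C); ends/starts: (N, 2) lane end/start points.
    Returns an (N, N) matrix of topology logits.
    """
    pred = lane_feats @ W_pred                   # (N, C) predecessor view
    succ = lane_feats @ W_succ                   # (N, C) successor view
    G = pred[:, None, :] + succ[None, :, :]      # broadcast pair combination
    d = np.linalg.norm(ends[:, None, :] - starts[None, :, :], axis=-1)
    G = G + np.exp(-d / sigma)[..., None]        # toy distance embedding
    return G @ w_out                             # linear "MLP" head

N, C = 3, 4
rng = np.random.default_rng(1)
feats = rng.normal(size=(N, C))
logits = l2l_topology(feats, rng.normal(size=(N, 2)), rng.normal(size=(N, 2)),
                      rng.normal(size=(C, C)), rng.normal(size=(C, C)),
                      rng.normal(size=(C,)))
```

Entry $(i, j)$ of the result scores whether lane $j$ is a successor of lane $i$, with small end-to-start distance boosting the pair embedding.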
# 3.4 X-View L2T Reasoning
L2T reasoning requires integrating features from two different perspectives: BEV for lanes and FV for traffic elements. The disparity between these representations poses a challenge for learning spatially consistent relationships. To address this, we introduce a view-aligned fusion module, which aligns BEV-based lane features with spatial and positional information derived from the FV space. Given the predicted 3D lane coordinates $\mathbf{P}_{\mathrm{lane}}^{3D}$, we project them onto the FV image to obtain their 2D coordinates $\mathbf{P}_{\mathrm{lane}}^{2D}$. Using grid_sample, we extract the corresponding FV spatial features $\mathbf{F}_{\mathrm{lane}}^{2D}$, which are then integrated with BEV lane queries for enhanced topology reasoning.
To further refine feature alignment, we incorporate positional embeddings to encode the spatial locations of both lanes and traffic elements. The enhanced features are computed as:
$$
\tilde{\mathbf{Q}}_{\mathrm{lane}} = \mathrm{MLP}_1(\mathbf{Q}_{\mathrm{lane}}) + \mathbf{F}_{\mathrm{lane}}^{2D} + \mathbf{PE}_{\mathrm{lane}}^{2D}, \quad \tilde{\mathbf{Q}}_{\mathrm{te}} = \mathrm{MLP}_2(\mathbf{Q}_{\mathrm{te}}) + \mathbf{PE}_{\mathrm{te}}^{2D}.
$$
The X-view features of lanes and traffic elements are then utilized to construct the L2T relation embedding, denoted as $\mathbf{G}_{\mathrm{L2T}} \in \mathbb{R}^{N \times M \times C}$, where $N$ and $M$ correspond to the number of lanes and traffic elements, respectively. This embedding is generated using the same broadcast concatenation operation applied in the L2L embedding, ensuring a consistent representation of spatial relationships between lanes and traffic elements. Finally, we predict the L2T topology by applying an MLP to the combined relation embeddings: $\mathbf{T}_{\mathrm{L2T}} = \mathrm{MLP}(\mathbf{G}_{\mathrm{L2T}})$.
By bridging the representational gap between BEV and FV, our approach enables robust L2T topology reasoning, effectively capturing spatial relationships between lanes and traffic elements while mitigating the limitations of independent feature fusion.
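The projection-and-sampling step of the alignment can be sketched as follows, assuming a pinhole intrinsics matrix and nearest-neighbor lookup in place of grid_sample; the camera model and all names are illustrative assumptions.

```python
import numpy as np

def xview_lane_features(points_3d, K_cam, fv_feats):
    """Sketch of the X-view alignment: project 3D lane points to the front
    view with a pinhole model, then sample FV features at the pixels.

    points_3d: (P, 3) camera-frame points with z > 0.
    K_cam: (3, 3) camera intrinsics. fv_feats: (H, W, C) FV feature map.
    Nearest-neighbor sampling replaces grid_sample for brevity; the per-point
    features are mean-pooled into one 2D feature per lane.
    """
    uvw = points_3d @ K_cam.T
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
    H, W, _ = fv_feats.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return fv_feats[v, u].mean(axis=0)            # pooled FV lane feature

K_cam = np.array([[2.0, 0.0, 2.0], [0.0, 2.0, 2.0], [0.0, 0.0, 1.0]])
fv = np.arange(16, dtype=float).reshape(4, 4, 1)
feat = xview_lane_features(np.array([[0.0, 0.0, 1.0]]), K_cam, fv)
```

The resulting FV feature is what gets added to the BEV lane query (Eq. above), giving each lane a dual-view representation before the L2T embedding is formed.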
# 3.5 Loss Functions
# 3.5.1 Perception Loss
Lane Loss: For lane detection, we employ Focal Loss [43] for classification and a combination of point-wise L1 loss and Chamfer distance loss for regression of $K$ sampling points, ensuring precise lane geometry estimation. Bounding Box Loss: For traffic element detection, we use Focal Loss [43] for classification, L1 loss and GIoU loss [44] for bounding box supervision of $( x , y , w , h )$ .
# 3.5.2 Topology Learning Losses
Existing methods primarily use Focal Loss for topology classification, focusing on determining connectivity between pairs. However, this approach primarily emphasizes binary classification without explicitly distinguishing the relative importance of connected and non-connected pairs. To better capture topological relationships, we introduce an additional InfoNCE loss [45], designed to enhance the discrimination between connected (positive) and non-connected (negative) pairs.
Taking L2T topology as an example, we consider $N$ lane queries and $M$ traffic element queries from the decoder. Our L2T module generates a relation embedding $\mathbf{G}_{\mathrm{L2T}} \in \mathbb{R}^{N \times M \times C}$, where each lane query is associated with every traffic element query, forming an adjacency matrix. Ground truth labels in this matrix are defined as 1 for connected and 0 for non-connected pairs. To strengthen relational learning, we introduce a hard negative mining strategy: for each positive pair, we select the top-$n$ hardest negative pairs based on predicted topology scores. This ensures that the model learns to distinguish subtle differences between connected and non-connected pairs. We then apply a symmetric InfoNCE loss as follows:
$$
\mathcal { L } _ { \mathrm { c o n } } = - \log \frac { \exp ( \mathbf { v } ^ { + } ) } { \exp ( \mathbf { v } ^ { + } ) + \sum _ { \mathbf { v } ^ { - } } \exp ( \mathbf { v } ^ { - } ) } = \log \left[ 1 + \sum _ { \mathbf { v } ^ { - } } \exp ( \mathbf { v } ^ { - } - \mathbf { v } ^ { + } ) \right] ,
$$
where $\mathbf { v } ^ { + }$ and $\mathbf { v } ^ { - }$ denote the logits of positive and negative pairs, respectively. To handle multiple positive samples, we extend Eq. (8) following previous works [47, 48]:
$$
\mathcal { L } _ { \mathrm { c o n } } = \log \left[ 1 + \sum _ { \mathbf { v } ^ { + } } \sum _ { \mathbf { v } ^ { - } } \exp ( \mathbf { v } ^ { - } - \mathbf { v } ^ { + } ) \right] .
$$
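The multi-positive loss above, combined with the hard negative mining described earlier, can be sketched over a flat array of pair logits (a simplified sketch; the per-positive selection of negatives and all names are illustrative):

```python
import numpy as np

def topology_contrastive_loss(logits, labels, top_n=5):
    """Multi-positive contrastive loss: log(1 + sum_+ sum_- exp(v^- - v^+)),
    with negatives restricted to the top_n hardest (highest-scoring)
    non-connected pairs.

    logits, labels: flat arrays over all lane/traffic-element pairs,
    labels being 1 for connected and 0 for non-connected pairs.
    """
    pos = logits[labels == 1]
    neg = np.sort(logits[labels == 0])[::-1][:top_n]  # hard negative mining
    diff = neg[None, :] - pos[:, None]                # v^- - v^+ per pair
    return float(np.log1p(np.exp(diff).sum()))

# Well-separated pairs (positives scored far above negatives) give a loss
# near zero; the loss grows as negatives approach or exceed positives.
logits = np.array([10.0, 9.0, -10.0, -12.0])
labels = np.array([1, 1, 0, 0])
loss = topology_contrastive_loss(logits, labels, top_n=2)
```

Because every $(v^-, v^+)$ gap enters the sum, the loss penalizes any negative pair that scores close to any positive pair, which is exactly the discrimination the focal classification loss alone does not enforce.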
# 4 Experimental Results
# 4.1 Dataset and Metrics
Dataset. We evaluate our method on OpenLane-V2 [37], a large-scale dataset specifically designed for topology reasoning in autonomous driving. OpenLane-V2 comprises two subsets: subsetA (derived from Argoverse-V2 [49]) and subsetB (derived from nuScenes [50]).
Evaluation Metrics. Following the official evaluation protocol of OpenLane-V2 [37], we utilize $\mathrm { D E T } _ { l }$ and $\mathrm { D E T } _ { t }$ to measure detection accuracy for lanes and traffic elements, respectively. For topology reasoning, we employ $\mathrm { T O P } _ { l l }$ and $\mathrm { T O P } _ { l t }$ to assess Lane-to-Lane and Lane-to-Traffic element relationship prediction. The overall performance is quantified using the OpenLane-V2 Score (OLS):
$$
\mathrm { O L S } = \frac { 1 } { 4 } \left[ { \mathbf { D E T } } _ { l } + { \mathbf { D E T } } _ { t } + f ( { \mathbf { T O P } } _ { l l } ) + f ( { \mathbf { T O P } } _ { l t } ) \right] ,
$$
where $f$ denotes the square root function. Our evaluations follow the latest version (V2.1.0) of the metrics, as updated in the official OpenLane-V2 GitHub repository.
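The OLS formula above is a direct average of the four scores, with the square root applied to the two topology terms:

```python
from math import sqrt

def openlane_v2_score(det_l, det_t, top_ll, top_lt):
    """OLS = (DET_l + DET_t + sqrt(TOP_ll) + sqrt(TOP_lt)) / 4,
    with all four input scores in [0, 1]."""
    return 0.25 * (det_l + det_t + sqrt(top_ll) + sqrt(top_lt))

# Illustrative values only: the square root lifts the (typically lower)
# topology scores so they weigh comparably to the detection scores.
ols = openlane_v2_score(0.36, 0.64, 0.25, 0.49)
```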
# 4.2 Implementation Details
Model Details. We use a ResNet-50 backbone to extract features, coupled with a feature pyramid network (FPN) for multi-scale feature learning. Following prior work [3, 1], a BEVFormer encoder [51] with 3 layers is employed to generate a BEV feature map of size $100 \times 200$. We employ six decoder layers, using 300 queries for the lane decoder and 100 queries for the traffic element decoder, following [2].
Training Details. We utilize the AdamW optimizer [52] for model training, with a weight decay of 0.01 and an initial learning rate of $2 . 0 \times 1 0 ^ { - 4 }$ , which decays following a cosine annealing schedule. Training is conducted for 24 epochs using a total batch size of 8 on 8 NVIDIA 4090 GPUs. Input images are resized to $1 0 2 4 \times 8 0 0$ , following [2]. The overall training loss is provided in the Supp.
# 4.3 Main Results
We compare our model against SOTA methods on the OpenLane-V2 dataset, with results presented in Tab. 1. On subsetA, our method achieves the highest OLS of $48.9\%$, surpassing all previous methods by a significant margin. Despite utilizing the same traffic head decoder as previous methods [2], our model improves $\mathrm{DET}_t$ by 0.4, demonstrating its ability to enhance traffic element detection via joint relation-enhanced modeling. Furthermore, $\mathrm{DET}_l$ achieves a substantial improvement of $+3.1$, underscoring the effectiveness of our method in enhancing lane detection. Most importantly, we observe notable gains in topology reasoning accuracy, with $\mathrm{TOP}_{ll}$ and $\mathrm{TOP}_{lt}$ improving by $+5.3$ and $+4.9$, respectively. These results validate the effectiveness of our proposed L2L and L2T topology reasoning modules, which enhance relational reasoning in complex driving scenarios. On subsetB, our RelTopo achieves consistent improvements across all metrics, surpassing previous methods with $+3.9$ $\mathrm{DET}_l$, $+0.6$ $\mathrm{DET}_t$, $+10.2$ $\mathrm{TOP}_{ll}$, $+6.0$ $\mathrm{TOP}_{lt}$, and an overall $+6.1$ OLS gain. These results highlight the superiority of our method, establishing a new state of the art on both OpenLane-V2 subsetA and subsetB. To demonstrate the effectiveness of our method, we present qualitative results in Fig. 4, showing more accurate lane predictions and well-aligned connection points in complex driving environments. Additional visualizations are available in the Supp.
Table 1: Performance comparison with state-of-the-art methods on the OpenLane-V2 subsetA and subsetB datasets under the latest V2.1.0 evaluation metrics. Results for RoadPainter‡ are cited from their paper, which uses the old metrics (no open-source code or model is available). TopoMLP† results were obtained using their official model, while other results were sourced from the TopoLogic paper. We report detection accuracy ($\mathrm{DET}_l$ and $\mathrm{DET}_t$) and topology reasoning accuracy ($\mathrm{TOP}_{ll}$ and $\mathrm{TOP}_{lt}$), with the overall score (OLS) indicating aggregate performance. Higher values indicate better performance across all metrics. Our method achieves SOTA performance.
# 4.4 Ablation Studies
To provide a thorough evaluation of our method, we conduct extensive studies and analyses. Due to page limitations, additional results and discussions are provided in the Supp., including: 1) a comparative study of our SA and L2L relation embeddings against the previous method in [3]; and 2) an exploration of alternative Bézier representations. Below, we present our main ablations on OpenLane-V2 subsetA, validating the effectiveness of our proposed components. Our baseline model (#1) is built with a deformable-DETR decoder, following [2], with lightweight MLP-based topology heads.
Table 2: Ablation study on key components: 1) Geometry-Biased Self-Attention (SA) and Curve-Guided Cross-Attention (CA); 2) our L2L and L2T heads; and 3) our proposed contrastive learning.
Effect of Relation-Aware Lane Decoder. Our relation-aware lane decoder consists of two core components: Geometry-Biased Self-Attention (SA) and Curve-Guided Cross-Attention (CA). We progressively integrate these components to examine their impact. 1) Geometry-Biased Self-Attention (SA): We first replace the standard self-attention [53] in our baseline (#1) with our SA, resulting in model (#2). As shown in Tab. 2, SA enhances lane representation learning, improving $\mathrm{TOP}_{ll}$ by $+2.9$. This highlights the benefit of explicitly encoding inter-lane geometric relationships. In addition, we compare our geometry encoding with the distance topology method from TopoLogic in Supp. Tab. 3, further validating the advantages of our method. 2) Curve-Guided Cross-Attention (CA): Building on model (#2), we introduce CA, forming model (#3). CA captures global contextual information through sampled points, under the guidance of the lane curve formulation. As shown in Tab. 2, CA boosts $\mathrm{DET}_l$ by $+4.6$, demonstrating its effectiveness in feature aggregation and capturing long-range dependencies. Overall, integrating both our SA and CA (#3) leads to significant improvements over the baseline (#1), with a $+5.0$ increase in $\mathrm{DET}_l$, $+5.0$ $\mathrm{TOP}_{ll}$, $+1.5$ $\mathrm{TOP}_{lt}$, and an overall $+2.8$ gain in OLS, confirming the combined benefits of our relation-aware design.
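The core idea of biasing self-attention with inter-lane geometry can be illustrated with a toy sketch. This is not the paper's actual formulation: the bias form (negative endpoint distance scaled by a temperature `tau`) and all names here are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def geometry_biased_attention(scores, endpoints, tau=5.0):
    """Toy geometry-biased self-attention over lane queries.
    `scores[i][j]` are content logits between lanes i and j;
    `endpoints[i]` is a lane's (x, y) endpoint. A distance-based bias
    (-distance / tau) is added so nearby lanes attend to each other more
    strongly; tau and the bias form are illustrative choices."""
    n = len(scores)
    weights = []
    for i in range(n):
        biased = [scores[i][j] - math.dist(endpoints[i], endpoints[j]) / tau
                  for j in range(n)]
        weights.append(softmax(biased))
    return weights

# With equal content logits, the geometrically closer lane receives the
# higher attention weight.
```

The sketch only shows the biasing mechanism; in a real decoder the logits would come from query dot-products and the bias would be learned or scaled per head.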
Figure 4: Comparative visual results on OpenLane-V2 subsetA. The top row shows multi-view input images; the bottom row shows lane predictions, comparing the ground truth, TopoMLP [2], and ours. The blue box highlights misaligned connection-point predictions from TopoMLP [2], and the green box shows the corresponding aligned predictions from our RelTopo. For clarity, zoomed-in views of selected regions are displayed in the top-right or bottom-right corners.
Effect of Topology Heads. To assess our topology heads, we perform a series of ablations by replacing them with counterparts from TopoMLP and present the results in Tab. 2. 1) L2L Head: Comparing #3 (without our L2L) and #4 (with our L2L), our geometry-enhanced L2L head improves $\mathrm{TOP}_{ll}$ by $+1.7$ and $\mathrm{DET}_t$ by $+1.0$, validating its effectiveness in capturing L2L relationships, which in turn helps perception. 2) L2T Head: Replacing the baseline L2T head (#3) with our proposed head (#5) leads to a $+1.5$ gain in $\mathrm{TOP}_{lt}$, confirming its ability to capture L2T relationships. 3) Combined Effect: Integrating both our L2L and L2T heads (#6) into #3 further enhances topology reasoning, achieving $+1.8$ in $\mathrm{TOP}_{ll}$, $+0.7$ in $\mathrm{TOP}_{lt}$, and $+0.9$ in OLS. These results highlight the complementary nature of the two heads, which jointly contribute to more accurate detection and reasoning.
Effect of Contrastive Learning. Finally, we incorporate our InfoNCE loss for additional supervision in topology learning. Compared to #6, adding the InfoNCE loss (#7) improves $\mathrm{TOP}_{ll}$ by $+0.6$ and $\mathrm{TOP}_{lt}$ by $+1.6$, demonstrating its effectiveness in enhancing relational understanding.
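The InfoNCE objective itself is standard: for a matched pair with similarity $s^+$ and unmatched pairs with similarities $s^-_k$, the loss is $-\log\!\big(e^{s^+/\tau} / (e^{s^+/\tau} + \sum_k e^{s^-_k/\tau})\big)$. A minimal stand-alone sketch (the pairing strategy and temperature used in the paper are not reproduced here; `tau=0.07` is an illustrative default):

```python
import math

def info_nce(pos_sim, neg_sims, tau=0.07):
    """InfoNCE loss for one positive similarity and a list of negative
    similarities. Computed in log-space for numerical stability."""
    logits = [pos_sim / tau] + [s / tau for s in neg_sims]
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(pos_sim / tau - log_denom)

# The loss shrinks as the positive pair is pulled together and the
# negatives are pushed apart, regularizing the relation embeddings.
```

For example, with `tau=1.0`, `info_nce(1.0, [0.0])` equals `log(1 + e^{-1})`, i.e., the loss is small when the positive similarity clearly exceeds the negatives.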
# 5 Limitations and Future Directions
While our recognition-based topology reasoning framework demonstrates strong performance, it currently lacks interpretability. Future work could explore integrating large language models (LLMs) to enhance relational reasoning between traffic elements and lanes, incorporating traffic rules for improved driving scene understanding. Additionally, incorporating sequential input images or temporal information may enable the model to better capture contextual and dynamic cues, akin to how humans leverage visual memory and temporal continuity in complex driving situations.

Abstract. Accurate road topology reasoning is critical for autonomous driving, enabling effective navigation and adherence to traffic regulations. Central to this task are lane perception and topology reasoning. However, existing methods typically focus on either lane detection or Lane-to-Lane (L2L) topology reasoning, often *neglecting* Lane-to-Traffic-element (L2T) relationships or *failing* to optimize these tasks jointly. Furthermore, most approaches either overlook relational modeling or apply it in a limited scope, despite the inherent spatial relationships among road elements. We argue that relational modeling is beneficial for both perception and reasoning, as humans naturally leverage contextual relationships for road element recognition and their connectivity inference. To this end, we introduce relational modeling into both perception and reasoning, *jointly* enhancing structural understanding. Specifically, we propose: 1) a relation-aware lane detector, where our geometry-biased self-attention and curve-guided cross-attention refine lane representations by capturing relational dependencies; 2) relation-enhanced topology heads, including a geometry-enhanced L2L head and a cross-view L2T head, boosting reasoning with relational cues; and 3) a contrastive learning strategy with InfoNCE loss to regularize relationship embeddings. Extensive experiments on OpenLane-V2 demonstrate that our approach significantly improves both detection and topology reasoning metrics, achieving $+3.1$ in $\mathrm{DET}_l$, $+5.3$ in $\mathrm{TOP}_{ll}$, $+4.9$ in $\mathrm{TOP}_{lt}$, and an overall $+4.4$ in OLS, setting a new state of the art. Code will be released.
# 1 Introduction
Modern engineered systems are becoming more complex as they incorporate a greater number of diverse and autonomous components. This growing complexity is widely considered one of the defining factors of modern systems engineering practice [33]. A notable consequence of growing system complexity is the increased adoption of the System of Systems (SoS) paradigm [37]. SoS are large-scale, distributed aggregations of independently developed and managed constituent systems [43]. These constituent systems maintain operational and managerial independence but may opt to collaborate in order to fulfill shared, higher-level goals [3]. The SoS paradigm supports flexibility, scalability, and adaptability, making it especially valuable in dynamic environments [25].
SoS increasingly extend into the virtual domain and are composed of constituent systems that are cyber-physical in nature [52]. This shift toward heterogeneous systems sets the stage for Digital Twins (DTs). DTs are real-time digital representations of physical systems that enable simulation, monitoring, and data-driven control [38]. They have demonstrated impact across several domains, including manufacturing [39], smart cities [26], and agriculture [9]. Although many current DT implementations are still domain-specific and centralized, recent research points toward more distributed, modular, and interoperable forms [12, PS5], highlighting the convergence of the DT and SoS paradigms, as shown in Fig. 1.
The convergence of SoS and DTs introduces a new class of systems, where multiple DTs representing diverse physical systems are integrated into a coordinated whole. We refer to these as Systems of Twinned Systems (SoTS).
Fig. 1: Convergence of the DT and SoS paradigms into systems of twinned systems, positioned along two axes: digital-physical convergence and flexibility of coordination (with information systems, digital twins, systems of systems, and systems of twinned systems as the resulting quadrants).
A System of Twinned Systems comprises digitally twinned systems, organized by system-of-systems principles, in which digitally twinned systems may act as autonomous constituents and collaborate to achieve complex goals.
Pertinent examples of SoTS include smart cities, where DTs of infrastructure, vehicles, and people collaborate to manage complex interactions such as traffic optimization or energy balancing. In such a setting, e.g., vehicles can autonomously decide to be part of the traffic system or leave, impacting the overall flow of traffic.
As highlighted in Fig. 1, the convergence towards a SoTS increases the cyber-physical convergence of SoS and the flexibility of coordination among DTs. Conversely, increased cyber-physical convergence (compared to SoS) and flexibility of coordination are requirements for SoTS. The benefits of SoTS are clear: bringing SoS principles into DT design promotes modularity, reusability, and dynamic reconfiguration, while promoting rigorous digital twinning in SoS allows for more efficient development, operation, and management of complex systems.
With research and development targeting SoTS on a noticeably accelerating course [52], a systematic review of their engineering practices, technical characteristics, and use cases is well timed and much needed.
Contributions In this manuscript, we report on our systematic literature review of SoTS. We identify key trends and design choices in the organization of systems in such settings, SoS and digital twin patterns, tendencies in non-functional system properties, such as security, and outline relevant research and development directions for experts in the SoS and DT domains.
Replicability We publish a replication package containing the data and analysis scripts of our study.
Structure The remainder of this article is structured as follows. In Sec. 2, we review the background and the related work. In Sec. 3, we design a systematic literature review to study the state of the art in digitally twinned systems of systems. In Sec. 4, we define a classification framework for digitally twinned systems of systems. In Sec. 5, we report the results of our review. In Sec. 6, we discuss the results and identify trends, tendencies, limitations and shortcomings, and key research challenges for the DT and SoS communities. Finally, in Sec. 7, we draw the conclusions and identify future work.
# 2 Background and Related Work
In this section, we discuss the background on systems of systems (SoS) (Sec. 2.1) and digital twins (DTs) (Sec. 2.2), as well as the related work on combining SoS and DTs (Sec. 2.3).
# 2.1 System of Systems
A System of Systems (SoS) is a system composed of multiple independent systems that collaborate to achieve outcomes that no single system could accomplish alone [43]. SoS are increasingly used to manage complexity in domains where adaptability, scalability, and interoperability are essential. INCOSE identifies SoS as a key enabler for future systems, particularly in addressing global challenges that require scalable, distributed, and coordinated solutions [33].
In SoS, each constituent system maintains operational and managerial independence, i.e., it can function and evolve on its own. These systems are also geographically distributed, exhibit heterogeneous capabilities, and are dynamically reconfigurable. Most importantly, SoS exhibit emergent behavior—capabilities that arise from the interaction of components, rather than being explicitly designed [43, 30].
The conceptual foundations of SoS trace back to General Systems Theory by Von Bertalanffy [66], which emphasized the importance of interdependence and holism. Early theoretical contributions from Boulding [7] and Ackoff [1] laid the groundwork for viewing systems as interconnected wholes. The term "System of Systems" gained practical relevance in the 1980s and 1990s, especially in defense applications, where it was used to coordinate autonomous systems for crisis response and joint operations [36, 15, 63].
Drawing from a wide range of prior classifications of SoS properties—including those by Keating et al. [34], Boardman et al. [3], Sage et al. [58], and Maier [43]—Nielsen et al. [51] synthesized these perspectives into a unified eight-dimensional taxonomy designed to support the analysis and engineering of complex SoS. According to Nielsen et al. [51] the dimensions are as follows: autonomy, independence, distribution, evolution, reconfiguration, emergence, interdependence, and interoperability. Autonomy is the extent to which a constituent system’s behavior is governed by its own internal goals, rather than by directives from the SoS. Independence is the ability of a constituent system to operate even when detached from the SoS. Distribution refers to the spatial and logical separation of systems within the SoS. Evolution and reconfiguration account for long-term change and real-time adaptability within the system. Emergence describes higher-order behaviors that arise only through system interaction. Interdependence reflects mutual reliance between systems for shared objectives. Interoperability is the ability to exchange data and services across heterogeneous systems. Together, these dimensions provide a comprehensive foundation for engineering SoS.
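To make the taxonomy concrete, the eight dimensions can be captured as a simple coding record, e.g., for tallying which dimensions a primary study addresses during data extraction. The boolean coding and the `coverage` helper are our illustrative assumptions, not part of Nielsen et al.'s taxonomy.

```python
from dataclasses import dataclass, fields

@dataclass
class SoSDimensions:
    """The eight SoS dimensions of Nielsen et al., coded as booleans
    indicating whether a study addresses each dimension (illustrative)."""
    autonomy: bool = False
    independence: bool = False
    distribution: bool = False
    evolution: bool = False
    reconfiguration: bool = False
    emergence: bool = False
    interdependence: bool = False
    interoperability: bool = False

    def coverage(self):
        """Fraction of the eight dimensions a coded study addresses."""
        vals = [getattr(self, f.name) for f in fields(self)]
        return sum(vals) / len(vals)

# e.g. a study addressing only autonomy and emergence:
# SoSDimensions(autonomy=True, emergence=True).coverage() -> 0.25
```

A record like this maps directly onto one row of a data extraction sheet, one column per dimension.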
Evidence shows that SoS are developed with increased attention to reliable and secure operation, especially in complex and changing environments. For example, Ferreira et al. [17] present architectures that support fault tolerance and system recovery, while Song et al. [64] and Hyun et al. [32] introduce verification methods designed for safety-critical systems. Other work, such as Wang et al. [67], explores ways to predict reliability over time, helping to ensure reliable performance in areas such as transportation and manufacturing.
Integrating digital twins and SoS offers a pathway to enhance real-time awareness and coordinated adaptation across distributed systems.
# 2.2 Digital Twins
A digital twin (DT) is a virtual representation of a physical system that maintains continuous two-way communication with its real-world counterpart [38, 10]. This bi-directional data exchange enables real-time monitoring, simulation, and control of the physical entity. DTs are distinct from Digital Models, which do not include any live data connection. They are also different from Digital Shadows, which receive real-time data from the physical system but cannot send control signals back. DTs support synchronized updates and mutual interaction between digital and physical systems. DTs are used in an array of domains, e.g., manufacturing, construction, smart cities, automotive, and avionics [40].
The concept of DTs originated in aerospace engineering. During the Apollo 13 mission, NASA used a virtual replica of the spacecraft to simulate mission scenarios and support failure recovery [22, 5]. This early use case emphasized the role of real-time mirroring and decision support. NASA later defined DTs as probabilistic simulation systems that predict asset behavior and support system health management throughout the lifecycle. In 2006, the “Product Avatar” introduced by Hribernik et al. [31] linked DTs to product lifecycle management and self-description capabilities.
Since then, the scope of DTs has expanded. More recent work explores advanced forms of DTs that enable prediction, autonomous operation, and the ability to adapt and improve in response to changing conditions [11] and disruptions [27]. These developments position DTs not only as monitoring tools but as adaptive agents in complex cyber-physical environments, such as smart ecosystems [47].
Despite their growing adoption, DTs face several technical and organizational challenges. These include the lack of interoperability standards [12], concerns about data privacy and security, and difficulties in scaling DTs for large, heterogeneous systems [19]. The absence of unified development practices further complicates cross-domain deployment. Overcoming these barriers requires more scalable and coordinated DT architectures, pushing current efforts toward broader integration across distributed systems and domains.
# 2.3 Related work
The integration of DTs within SoS has become a topic of particular interest, and there is a growing number of secondary studies on the topic.
Closest to our work is the review of Olsson et al. [52] who analyze ten studies in the overlap of DTs and SoS with the aim of highlighting conceptual challenges, such as integration and interoperability. Our work provides a systematic treatment of the topic with more breadth and depth.
The majority of related literature focuses on integrating DTs hierarchically to achieve a SoS. Tao et al. [65] propose a multi-level DT structure, suggesting that enterprises can achieve SoS by incrementally combining unit-level DTs into complex, higher-order systems. This view is extended by Gill et al. [21], who advocate for automated horizontal and vertical DT integration and emphasize the need for a unified DT model to enable interoperability across manufacturers. Similarly, Schroeder et al. [59] categorize connectivity levels within DT architectures and demonstrate how aggregated DTs can represent broader system behaviors. Ghanbarifard et al. [20] further reinforce this perspective by discussing distributed DT composition in dynamic, evolving operational contexts. Domain-specific applications of this principle include supply chain integration via Sub-DTs in Zhang et al. [72] and spatial-temporal configurations in Dietz et al. [13], both modeling SoS-like structures through DT aggregation. These works highlight the increasing interest in SoTS and motivate empirical inquiries such as ours.
Despite recent advances, several challenges persist in the realization of SoTS. Michael et al. [48] identify barriers of integrating DTs into SoS, including interoperability gaps, connectivity and privacy issues, and the absence of standardized development practices. These concerns are corroborated by Semeraro et al. [60], who argue that horizontal data integration is necessary for effective vertical system unification. Further adding to these challenges are the organizational and technical complexities of distributed DTs. Borth et al. [4] outline strategic and architectural difficulties associated with lifecycle coordination, data ownership, and conflicting stakeholder objectives in loosely coupled DT systems. These findings highlight the need for unified models and frameworks to advance the state of SoTS, such as our proposal in Sec. 4.
# 3 Study design
We designed a study to systematically survey the literature concerned with the combination of DTs and SoS, which we refer to as systems of twinned systems (SoTS). Our goal was to understand the characteristics of SoTS, their components, and constituent systems, as well as to identify the key limitations, challenges, and research opportunities in the field.
# 3.1 Research questions
We formulated the following research questions.
# RQ1. Why are DT and SoS combined?
By answering this RQ, we aim to understand the purposes, problems, and domains in which SoTS are used. We also aim to understand whether organizing multiple DTs is a purposeful activity and, if so, what the motivations, intents, and ambitions to do so are. In particular, we are interested in whether it is the SoS that benefits from twinning or the other way around. We are also interested in the challenges that limit the upside of SoTS.
# RQ2. How are DT and SoS combined?
We aim to identify the architectures along which systems, such as DTs, are organized into SoS. We are interested in the nature of the constituent units (whether they are purely physical, digital, or both), as well as the type of SoS (acknowledged, directed, etc.).
# RQ3. What are the technical characteristics of DTs in SoTS?
We are interested in the details of the DTs that are combined into a SoS, such as their level of autonomy (fully autonomous, human-actuated, digital shadow, etc.), their services, and their modeling formalisms.
# RQ4. What are the technical characteristics of SoS in SoTS?
We are interested in the details of the SoS in SoTS, such as the support for typical SoS dimensions (autonomy, belonging, etc.) and the type of emergent behavior these SoS account for (simple, weak, strong, spooky).
# RQ5. How are non-functional properties addressed in SoTS?
We are particularly interested in reliability and security due to their recognized critical importance in enabling safe and trustworthy operation in distributed and dynamic environments [19, 33, 17, 52]. We focus on how reliability and security are considered in the development and operation of SoTS by examining whether these concerns are addressed at the architectural level, explicitly modeled, or empirically evaluated.
# RQ6. What is the level of technical and research maturity in SoTS?
To assess technical maturity, we rely on the Technology Readiness Level framework (TRL) [45]. We introduce the following clusters of levels for our purposes: Initial (TRL 1-2); Proof-of-concept (TRL 3-4); Demonstration prototype (in relevant environment, TRL 5-6); Deployed prototype (in the operating environment, TRL 7-8); Operational (TRL 9). To assess research maturity, we investigate how primary studies are evaluated, using the assessment framework of Petersen et al. [54]. As a sign of maturity, we are also interested in whether the sampled studies relied on any standards.
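The TRL clustering above is a simple mapping, sketched below as a helper of the kind one might use in the study's analysis scripts (the function name is ours):

```python
def trl_cluster(trl):
    """Map a Technology Readiness Level (1-9) to the maturity clusters
    used in this study."""
    if not 1 <= trl <= 9:
        raise ValueError("TRL must be between 1 and 9")
    if trl <= 2:
        return "Initial"
    if trl <= 4:
        return "Proof-of-concept"
    if trl <= 6:
        return "Demonstration prototype"
    if trl <= 8:
        return "Deployed prototype"
    return "Operational"

# trl_cluster(3) -> "Proof-of-concept"; trl_cluster(9) -> "Operational"
```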
# RQ7. What are the typical technological choices to implement SoTS?
We are interested in the technological landscape supporting the implementation of SoTS. We analyze the usage of programming languages, frameworks, and platforms.
# 3.2 Databases and search string
To search for potentially relevant studies, we used the key academic indexing databases: Scopus, Web of Science, ACM Digital Library, and IEEE Xplore. We considered peer-reviewed literature only; grey literature (e.g., articles published on arXiv and blog posts) was not included. We searched the title, abstract, and keywords of papers. The search on Scopus was limited to works from the Computer Science and Engineering disciplines.
We constructed the search string from the key concepts of our study (digital twins and system of systems) and their typical synonymous keywords found in our preliminary investigation.
```
("digital twin*" AND "system* of systems") OR
("aggregated digital twin*" OR "system of digital twins" OR
 "digital twin of systems" OR "system* of twinned systems")
```
This is not an exhaustive list of terms but a representative one; terms missed here are compensated for in the snowballing phase.
# 3.3 Search and selection
# 3.3.1 Automated search
We executed the search on September 10, 2024, and retrieved a total of 317 studies. We removed duplicates using a combination of automated and manual duplicate detection in EndNote, removing 121 references and retaining 196 unique references. Subsequently, we applied the exclusion criteria. The details are reported in Tab. 1.
# 3.3.2 Selection
We used the following exclusion criteria to exclude primary studies that were not in the scope of our investigation. A primary study is excluded if it meets at least one exclusion criterion.
E0. Not accessible (not in English or not available for download); not peer-reviewed (e.g., theses, grant proposals); not primary research (e.g., reviews, mappings).
E1. Does not discuss DT.
E2. Does not discuss SoS.
E3. Off-topic.
E0 was trivial to evaluate; therefore, one author evaluated each study against E0 and another author validated the decisions. For exclusion criteria E1–E3, each primary study was evaluated by two authors independently, based on the full reference (title, authors, venue, etc.) and the abstract. Disagreements were resolved through discussion. In Tab. 1, we report detailed figures of the selection and exclusion, including inter-rater agreement and reliability metrics. We measured an inter-rater agreement (IRA) of $88.0\%$ and a Cohen's $\kappa$ of 0.734 (substantial agreement). Most of the disagreements were due to the reviewers' different levels of leniency; we facilitated in-depth discussions to converge.
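Agreement statistics like these can be reproduced from the raw decisions. A minimal sketch of Cohen's kappa for two raters making binary include/exclude decisions (the example data below is illustrative, not the study's actual decisions):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters with binary decisions:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from each rater's marginals."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a1 = sum(rater_a) / n  # rater A's "include" rate
    p_b1 = sum(rater_b) / n  # rater B's "include" rate
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)

# 1 = include, 0 = exclude (illustrative decisions)
a = [1, 1, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 0]
# observed agreement 5/6, chance agreement 1/2 -> kappa = 2/3
```

Note that kappa discounts chance agreement, which is why a high raw IRA can coexist with a moderate kappa, as observed in the snowballing phase below.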
Eventually, we arrived at 81 unique relevant references. In the next step, these references underwent a quality assessment.
# 3.4 Quality assessment
In line with the guidelines of Kitchenham et al. [35], we defined a checklist to assess the quality of primary studies. Quality criteria were derived from the research questions. Each question was answered with "yes" (2 points), "partially" (1 point), or "no" (0 points), based on the full text. To retain a primary study, we required that it score at least 1 point on each of the following quality checks:
Q1. SoS is clearly described.
Q2. DT is clearly described.
Q3. The contributions are tangible (i.e., not conceptual).
Q4. Reporting quality is clear.
Of the 81 tentatively included primary studies, we excluded 28 (22 due to insufficient Q1 or Q2, and 6 due to insufficient Q3; no study was excluded due to insufficient Q4). This resulted in 53 primary studies from the automated search phase, i.e., a $16.72\%$ overall inclusion rate. In the next step, these 53 primary studies formed the basis of snowballing.
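The retention rule (at least 1 point on every check) can be sketched as a simple filter; the key and study names below are illustrative:

```python
# Scores per check: "yes" = 2, "partially" = 1, "no" = 0.
QUALITY_CHECKS = ("Q1_sos_clear", "Q2_dt_clear", "Q3_tangible", "Q4_reporting")

def retain(study_scores):
    """A study is retained only if it scores at least 1 point on every
    quality check."""
    return all(study_scores[q] >= 1 for q in QUALITY_CHECKS)

# Illustrative examples (not real study data):
ok = {"Q1_sos_clear": 2, "Q2_dt_clear": 1, "Q3_tangible": 2, "Q4_reporting": 1}
weak = {"Q1_sos_clear": 0, "Q2_dt_clear": 2, "Q3_tangible": 2, "Q4_reporting": 2}
# retain(ok) -> True; retain(weak) -> False (fails Q1)
```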
# 3.5 Snowballing
We used forward and backward snowballing to enrich the corpus. Backward snowballing was conducted in two phases. First, every reference in the previously included primary studies was assessed by title, publication venue, and date. Of the 1 666 references, 145 seemed relevant for our purposes. Second, the 145 potentially relevant references underwent the same evaluation process as the previously included studies, i.e., two authors applied the exclusion criteria and checked the quality of the works. Forward snowballing was conducted via Google Scholar, as per the recommendations of Wohlin et al. [70].
Table 1: Search statistics
In the backward and forward snowballing, we selected a total of 38 potentially relevant references (13 by backward snowballing and 25 by forward snowballing) that underwent the same evaluation process as the previous primary studies.
We measured an IRA of $94.4\%$ and a Cohen's $\kappa$ of 0.488 on the primary studies that were reviewed by two reviewers (731 total references: 145 backward, 586 forward). The $\kappa$ was somewhat low, although by definition it still represents "moderate" agreement; we attribute this to the ambiguity of the abstracts we encountered. Eventually, we included 27 additional primary studies.
At the end of the first snowballing round, we noted a rather low inclusion rate of $1.19\%$. We interpreted this as sufficient evidence of saturation and stopped snowballing.
Eventually, we screened 2 569 potential studies.
In total, we included 80 primary studies.
# 3.6 Data extraction
We extracted data from the 80 included studies into a data extraction sheet.
The data analysis included collating and summarizing the data, aiming at understanding, analyzing, and classifying the state of the art [35]. We performed a combination of content analysis [18] (mainly for categorizing and coding studies under broad thematic categories) and narrative synthesis [57] (mainly for detailed explanation and interpretation of the findings coming from the content analysis). We analyzed the extracted data to find trends and collect information about each category of the classification framework (vertical analysis). We also explored the extracted data for possible relations across different categories of the classification framework (horizontal analysis).
Whenever possible, we started from existing categorizations or derived systematic categorizations. To characterize SoS, we chiefly relied on the taxonomy of Nielsen et al. [51]. To characterize the various flavors of DTs, we drew on the works of Kritzinger et al. [38] and David et al. [10].
In the first phase of data extraction, we piloted the classification framework. In this phase, we discussed potential modification to the classification framework to accommodate interesting trends across the primary studies. Then, we performed the extraction. Finally, we performed the codification.
To aid independent replication, we developed Python scripts to automate these steps. The data and scripts are available in the replication package.
# 3.7 Threats to validity and study quality
Construct validity Our observations are artifacts of the sampled papers. Potential selection bias and missed publications may have an impact on our observations and threaten the construct validity of this study. To mitigate this threat, we employed a systematic approach in accordance with the best practices of empirical research in software engineering. Specifically, we used trusted databases, redundancy and validation in the exclusion phase [70], and employed snowballing to enrich our corpus [28].
Internal validity We may have missed works due to the terminology we used. The combination of SoS and DT has had no unified definition prior to our work and thus, constructing effective search strings might not have been feasible. We mitigated this threat by an alternative, although more labor-intensive corpus construction strategy: we augmented the core keywords in the search string with synonyms, and we used snowballing.
Study quality Our work scores 81.8% (9 of 11 points) on the rigorous quality checklist of Petersen et al. [54]. (Need for review: 1 point; search strategy: 3 points; evaluation of the search: 2 points (keywords from known papers; objective inclusion criteria; an additional reviewer, with disagreements resolved when needed); extraction and classification: 2 points; study validity: 1 point.) This quality score is substantially higher than typical values in software engineering: Petersen et al. [54] report a median of $33\%$, with only 25% of their sampled studies having a quality score above $40\%$. Therefore, we consider our study design to be of particularly high quality.
# 3.8 Publication trends
Fig. 2 reports the publication trends.
The number of publications (Fig. 2a) shows an increasing trend, with a clear increase in publication output over the past four years (2024 is a partial year). After investigating the spike in publication output in 2019, we conclude that it is not a systemic phenomenon but rather an outlier. Overall, we observe an increasing interest in combining DT and SoS principles. About 47% of the sampled studies are journal articles or book chapters, suggesting relatively mature research, although the majority of the sampled studies are journal or conference articles ($41\%$ and $45\%$, respectively).
Fig. 2: Publication trends. (a) Scientific output (as of September 2024). Publication year: 2017: 2 (2.50%), 2018: 3 (3.75%), 2019: 10 (12.50%), 2020: 4 (5.00%), 2021: 13 (16.25%), 2022: 15 (18.75%), 2023: 22 (27.50%), 2024: 11 (13.75%). Venue type: journal: 33 (41.25%), conference: 36 (45.00%), workshop: 6 (7.50%), book chapter: 5 (6.25%). Publisher: IEEE: 28 (35.00%), Springer: 18 (22.50%), Elsevier: 17 (21.25%), ACM: 3 (3.75%), other: 14 (17.50%). (b) Quality scores: overall: 83.4%; Q1 (SoS is clear): 83.8%; Q2 (DT is clear): 95.0%; Q3 (tangible contributions): 75.0%; Q4 (reporting clarity): 80.0%.
The quality of reporting (Fig. 2b) is relatively high, scoring 83.4% in our quality assessment scheme (Sec. 3.4). The quality of reporting on DT components is particularly high (95%), substantially above that of SoS principles (83.8%). Contributions are typically tangible (75%), with less than a quarter of the corpus being conceptual works. Finally, reporting clarity is acceptable, scoring 80% in our quality scheme.
We judge the corpus to be of sufficient quality to answer the research questions with high certainty and reasonable validity.
# 4 A classification framework for digitally twinned SoS
To organize and compare the various organizational and architectural flavors of SoTS, we devise a classification framework. We draw on the seminal works of Maier [43, 44] to understand how SoS are organized, and combine this theory with DT concepts [38].
We rely on a mixed sample- and case-based generalization [68]. This approach is particularly useful for constructing middle-range theories that balance generality with practicality, as is common in the engineering sciences. In Sec. 3, we sampled a statistically adequate corpus. Subsequently, we decomposed each study individually into architectural units, as architectural abstractions allow for better judgment of similarity between cases [68]. Finally, we identified recurring patterns.
# 4.1 Metamodel
The resulting essential (minimal) metamodel to describe typical organizational patterns of digitally twinned systems is shown in Fig. 3.
Fig. 3: Essential SoTS metamodel
A System is the elementary building block of a SoS, generally understood as “an assemblage of components that produces behavior or function not available from any component individually” [43]. Systems can be hierarchically composed of other systems. In SoS, these sub-systems are referred to as constituent systems, or constituents, in short. Systems also have goals which drive their behavior.
At this point, we draw on the theory of digital twins when we distinguish between Digital Twins and Physical Twins, i.e., the digital and physical counterparts of heterogeneous systems [38].
A distinguishing factor between DTs and SoS is the strength of coupling between system components. SoS typically rely on weak coupling, i.e., constituents are allowed to make individual decisions about joining or leaving the SoS and, in some cases, pursue their own goals. This is in stark contrast with the strong coupling between the digital and physical counterparts of digitally twinned systems. A digital twin represents the prevalent state of the physical system through precise computational reflection [42] and exerts precise control over the physical system, often relying on faster-than-real-time simulations.
Finally, the Controller is a special role in SoS architectures to which constituents defer the setting of goals. In digitally twinned systems, this controller is always the digital twin. For convenience, we will use the color coding shown in Fig. 3 as we instantiate the metamodel in Sec. 4.2.
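As a minimal sketch, the elements of the metamodel can be expressed as Python classes; the class and attribute names below are our own illustration of Fig. 3, not part of a formal specification:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    """A goal that drives a system's behavior."""
    description: str

@dataclass
class System:
    """Elementary building block of a SoS; may be hierarchically
    composed of constituent systems."""
    name: str
    goals: List[Goal] = field(default_factory=list)
    constituents: List["System"] = field(default_factory=list)
    # Weak SoS coupling: a constituent may defer goal-setting to a
    # higher-level controller, or keep setting its own goals.
    controller: Optional["System"] = None

@dataclass
class PhysicalTwin(System):
    """Physical counterpart of a digitally twinned system."""

@dataclass
class DigitalTwin(System):
    """Digital counterpart; strongly coupled to its physical twin,
    reflecting its state and exerting precise control over it."""
    physical_twin: Optional[PhysicalTwin] = None

# Instantiating the metamodel: a DT acting as controller for two
# constituent systems (a directed arrangement; all names are made up).
plant = PhysicalTwin(name="plant")
dt = DigitalTwin(name="plant-DT", physical_twin=plant,
                 goals=[Goal("maximize throughput")])
cell_a = System(name="cell-A", controller=dt)
cell_b = System(name="cell-B", controller=dt)
dt.constituents = [cell_a, cell_b]
```

Note how weak SoS coupling is captured by the optional `controller` reference, while the strong DT coupling is a direct `physical_twin` link.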
# 4.2 Instances
We instantiate six architectural patterns from Fig. 3 for digitally twinned SoS. Four of these follow, and are backward compatible with, the taxonomy of Maier [43, 44], who classifies SoS into directed (Sec. 4.2.1), acknowledged (Sec. 4.2.2), collaborative (Sec. 4.2.3), and virtual (Sec. 4.2.4) SoS. We derive two additional patterns: Specialized DTs (Sec. 4.2.5) and Specialized DTs and Specialized Systems (Sec. 4.2.6).
# 4.2.1 Directed SoTS
A directed SoTS (Fig. 4) builds on directed SoS [42], i.e., it has a central controller that sets goals and orchestrates the constituent systems as they execute their tasks in accordance with the goals. The constituents operate independently, but their normal operational mode is subordinated to the centrally managed goal. Specifically, in SoTS, the controller is a digital twin.
Fig. 4: Directed SoTS
# 4.2.2 Acknowledged SoTS
An acknowledged SoTS (Fig. 5) builds on acknowledged SoS [42], i.e., it has a central controller that orchestrates the constituents, but goals are negotiated and set at the constituents’ level. Thus, constituents keep their independent objectives and sustainment goals. Similar to directed SoTS, the controller is a digital twin.
Fig. 5: Acknowledged SoTS
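The difference between the directed and acknowledged patterns lies in where goals are set: a directed SoTS's controller imposes them, while an acknowledged SoTS's constituents negotiate and may keep their own. A minimal sketch under that reading (all identifiers are hypothetical):

```python
class Constituent:
    def __init__(self, name, own_goal=None):
        self.name = name
        self.goal = own_goal  # a constituent may hold its own goal

def directed(controller_goal, constituents):
    """Directed SoTS: the central DT imposes its goal on every constituent."""
    for c in constituents:
        c.goal = controller_goal
    return constituents

def acknowledged(controller_proposal, constituents):
    """Acknowledged SoTS: the DT coordinates, but goals are negotiated at
    the constituents' level -- here, modeled as each constituent keeping
    its own goal and adopting the proposal only if it has none."""
    for c in constituents:
        if c.goal is None:
            c.goal = controller_proposal
    return constituents

# A fleet coordinated by a central DT (cf. the UAV/USV example [PS45]).
fleet = [Constituent("uav-1", own_goal="survey area"), Constituent("uav-2")]
acknowledged("return to base", fleet)
```

In the acknowledged case, `uav-1` keeps its own goal and only `uav-2` adopts the controller's proposal; under `directed`, every constituent would receive the controller's goal.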
# 4.2.3 Collaborative SoTS
A collaborative SoTS (Fig. 6) builds on collaborative SoS [42], i.e., constituents participate in the system on a voluntary basis to collaboratively fulfill previously agreed-upon goals. The goals are centralized but constituents choose to participate in fulfilling those goals.
Fig. 6: Collaborative SoTS
In contrast to the previously discussed architectures, there is no central controller unit at the top level of a collaborative SoTS. (Of course, constituents may internally be organized into a directed or acknowledged architecture, but that bears no relevance at the higher level, as a constituent system is seen as a black box.)
In the absence of a central controller, the coordination mechanism changes, too. In contrast to the previously discussed architectures, collaborative SoTS coordinate through choreography rather than orchestration. As defined by Peltz [53], orchestration inherently represents control from one party’s perspective (i.e., the controller), while choreography is a distributed approach.
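The contrast between the two coordination mechanisms can be sketched as follows; the event names and functions are hypothetical illustrations, not drawn from Peltz [53]:

```python
def orchestrate(controller_plan):
    """Orchestration: one party (the controller) holds the plan and
    drives every constituent from its own perspective."""
    return [f"{constituent} <- {command}"
            for constituent, command in controller_plan]

def choreograph(reactions, initial_event):
    """Choreography: no central plan; each constituent reacts to
    published events and may publish follow-up events."""
    events, queue = [], [initial_event]
    while queue:
        event = queue.pop(0)
        events.append(event)
        for react in reactions:
            follow_up = react(event)
            if follow_up is not None:
                queue.append(follow_up)
    return events

# A controller-issued plan (orchestration)...
plan = [("press", "stamp part"), ("robot", "move part")]
# ...versus distributed reactions to events (choreography).
reactions = [
    lambda e: "part-stamped" if e == "order-received" else None,
    lambda e: "part-moved" if e == "part-stamped" else None,
]
```

`orchestrate` encodes the whole plan in one place, whereas `choreograph` lets the overall behavior emerge from local reactions to published events.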
# 4.2.4 Virtual SoTS
A virtual SoTS (Fig. 7) builds on virtual SoS [42], i.e., constituents participate in the system on a voluntary basis and, in contrast with collaborative architectures, they pursue their own goals rather than previously agreed-upon ones. Goals are typically negotiated on-the-fly, in accordance with the observed emergent behaviors of the SoTS.
At this point, we note that virtual SoS have seldom been encountered in real systems due to the lack of control over constituents. This architectural style is expected to become more relevant as AI becomes a more prominent part of modern systems.
Fig. 7: Virtual SoTS
Fig. 8: Specialized DTs
# 4.2.5 System of Specialized DTs
A system of specialized DTs (Fig. 8) is a loosely coordinated set of DTs that twin the same constituent system. The DTs are specialized in their capabilities, which are typically complementary. An example of such a setup is a cyber-physical system with mechanical safety and electronic safety monitoring digital twins. Goals are typically pre-negotiated and followed by the DTs.
# 4.2.6 System of Specialized DTs and Specialized Systems
A system of specialized DTs and specialized systems (Fig. 9) is a loosely coordinated set of DTs that twin multiple constituent systems and the sets of twinned systems might overlap. Similar to the previous case, the DTs are specialized in their capabilities; but in addition, the constituent systems might be specialized as well. An example of such a setup is a cyber-physical system with mechanical and electrical physical components, which are twinned in an electro-mechanical safety DT and an electro-mechanical performance DT.
Similar to the previous case, goals are typically pre-negotiated and followed by the DTs.
Fig. 9: Specialized DTs and Specialized Systems
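Both specialized patterns can be read as a many-to-many relation between DTs and the constituent systems they twin. A minimal sketch (all identifiers are illustrative):

```python
def shared_systems(twin_map):
    """Given a mapping DT name -> set of twinned systems, return the
    constituent systems twinned by more than one specialized DT."""
    seen, shared = set(), set()
    for systems in twin_map.values():
        shared |= seen & systems
        seen |= systems
    return shared

# System of specialized DTs (Sec. 4.2.5): complementary DTs that all
# twin the same constituent system.
specialized_dts = {
    "mechanical-safety-DT": {"cps"},
    "electronic-safety-DT": {"cps"},
}

# System of specialized DTs and specialized systems (Sec. 4.2.6): the
# sets of twinned systems may overlap without being identical per DT.
specialized_dts_and_systems = {
    "electro-mechanical-safety-DT": {"mechanical", "electrical"},
    "electro-mechanical-performance-DT": {"mechanical", "electrical"},
}
```

In the first map every DT twins the same single system, while in the second the specialized DTs share overlapping sets of specialized constituent systems.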
Table 2: Motivations for Combining DT and SoS
Table 3: Intents of Combining DT and SoS
# 5 Results
In this section, we report the key findings of our study on the state-of-the-art of SoTS.
# 5.1 Why are SoS and DT combined? (RQ1)
We address why SoS and DTs are combined by analyzing the motivations (Sec. 5.1.1), integration intents (Sec. 5.1.2), primary application domains (Sec. 5.1.3), and key development challenges (Sec. 5.1.4) of SoTS.
# 5.1.1 Motivations
As shown in Tab. 2, most studies develop SoTS to support optimization, integration, validation, or maintainability. Optimization is the most common motivation (30 of 80 – 37.5%). SoTS enable detailed monitoring of components while improving system-level awareness to support decision-making and control. In one example, SoTS coordinate UAV landings on USVs to minimize operation time [PS45]. Integration is a motivation in (25 of 80 – 31.3%) studies. SoTS connect heterogeneous systems by coupling multiple DTs and enabling communication across distributed components. In power systems, for example, SoTS support coordinated operation across grid elements [PS55]. Validation is addressed in (15 of 80 – 18.8%) studies. SoTS enable risk-free testing by simulating system behaviors that are costly or unsafe to observe physically. This includes modeling interactions between autonomous subsystems to validate scenarios like advanced driver assistance in cars [PS16]. Maintainability appears in (10 of 80 – 12.5%) studies.
# 5.1.2 Intents
We distinguish between two intents in SoTS: (i) twinning a SoS, where a single DT represents the overall SoS; and (ii) combining DTs into a SoS, where multiple DTs are integrated. As shown in Tab. 3, the latter is more common (49 of 80 – 61.3%).
Fig. 10 breaks down these numbers by application domain. Manufacturing dominates both approaches (24 of 80 – 30.0% and 8 of 80 – 10.0%), with most studies using SoTS to coordinate machines and production lines [PS74, PS56]. Smart cities (4 of 80 – 5.0%) more often adopt the DT-combination approach to coordinate distributed services and infrastructure across urban subsystems. Automotive (6 of 80 – 7.5%) and military systems (4 of 80 – 5.0%) more often rely on twinning to support global system awareness.
Some domains appear exclusively under one approach. For example, networking appears only under twinning an SoS, where studies focus on holistic oversight of large-scale, dynamic communication infrastructures [PS19, PS64]. Energy, mining, healthcare, cybersecurity, and construction appear only under DT combination. Other domains, e.g., smart cities and logistics show both approaches.
# 5.1.3 Application domains
As shown in Tab. 4, the most represented domain is manufacturing (32 of 80 – 40.0%), where SoTS are used to coordinate production lines and factory systems [PS28]. The automotive domain (9 of 80 – 11.3%) applies SoTS for simulation-based testing, diagnostics, and control of vehicle subsystems [PS63]. In smart
Table 4: Application Domains
| Application domain | Combining DTs into a SoS | Twinning a SoS |
| --- | --- | --- |
| Manufacturing | 24 | 8 |
| Automotive | 3 | 6 |
| Cyber-Physical Systems | 5 | 1 |
| Smart Cities | 4 | 2 |
| Military | 1 | 4 |
| Agriculture | 1 | 3 |
| Logistics | 1 | 2 |
| Robotics | 2 | 1 |
| Business | 1 | 1 |
| Maritime systems | 1 | 1 |
| Networking | 0 | 2 |
| Energy | 2 | 0 |
| Mining | 1 | 0 |
| Healthcare | 1 | 0 |
| Cybersecurity | 1 | 0 |
| Construction | 1 | 0 |
cities applications (6 of 80 – 7.5%), SoTS support the modeling and integration of urban infrastructure [PS46, PS36]. The cyber-physical systems domain (6 of 80 – 7.5%) focuses on managing real-time interaction between distributed physical processes and digital components [PS2, PS72].
Military, agriculture, logistics, and robotics applications appear in fewer than 6 studies each. The remaining 15% span the maritime, healthcare, construction, energy, and networking domains.
# 5.1.4 Challenges
Tab. 5 outlines the main challenges in SoTS development. Operational challenges are most common (60 of 80 – 75.0%), with interoperability alone appearing in (26 of 80 – 32.5%) studies. Other recurring issues include synchronization (11 of 80 – 13.8%), real-time constraints (9 of 80 – 11.3%), and uncertainty (8 of 80 – 10.0%). Studies also report difficulties in managing emergent behaviors, lifecycle coordination, and reconfiguration. Design challenges are noted in (33 of 80 – 41.3%) studies, with complexity (12 of 80 – 15.0%) and lack of standards (11 of 80 – 13.8%) being the most frequent. Other concerns include legacy system compatibility, regulatory constraints, and the lack of frameworks and architectures to support SoTS development. Non-functional properties are discussed in (22 of 80 – 27.5%) studies; notably, scalability, reliability, and privacy are cited.
# RQ1: Why DTs and SoS are Combined
SoTS are developed to support optimization, integration, validation, and maintainability in complex systems. Manufacturing is the most common application domain, followed by automotive and smart cities. Despite growing adoption, challenges such as interoperability, synchronization, complexity, and the lack of standards limit broader deployment.
# 5.2 How are SoS and DT combined? (RQ2)
To understand how SoS and DTs are combined, we analyze architectures (Sec. 5.2.1) and types of constituent units (Sec. 5.2.2) represented in SoTS.
# 5.2.1 Architecture Configurations
We applied our SoTS classification framework (Sec. 4) to categorize the studies into distinct architectural types. These types reflect the degree of autonomy, goal alignment, and coordination mechanisms between constituent systems, with DTs acting as either orchestrators or peers. The distribution of studies across types is summarized in Tab. 6.
The majority of studies followed an Acknowledged SoTS architecture (31 of 80 – 38.8%). In these systems, a central DT facilitates coordination, but each constituent retains managerial independence and negotiates its own goals. For instance, Li et al. [PS45] implements a cognitive twin that synthesizes simulations and provides recommendations to UAVs and USVs, which maintain control over their own missions. Similarly, in Monsalve et al. [PS55], a Digital Twin Master (DTM) oversees synchronization and data flow across grid simulations, while each local Digital Twin Client (DTC) retains its own model and operational logic.
Table 5: Challenges
Table 6: SoTS Type
A comparable number of studies implement a Directed SoTS (26 of 80 – 32.5%). These systems are governed by a central DT that imposes goals and orchestrates constituent behavior. In Reiche et al. [PS66], the Digital Twin of a System (DTS) aggregates and controls individual machine twins, using a dedicated interface (DTS2DT) to monitor operations, issue commands, and maintain an integrated simulation of the whole unit. Similarly, Li et al. [PS46] introduces an infrastructure DT that coordinates multiple civil subsystems under a unified scenario-based control structure.
Collaborative SoTS architectures were found in 19 of 80 – 23.8% studies. These systems are formed through voluntary cooperation among DTs, with no centralized controller enforcing goals. Vogel-Heuser et al. [PS75] presents a decentralized manufacturing system composed of DTs instantiated as autonomous agents. Each agent voluntarily engages in shared production tasks through local negotiation without relying on centralized orchestration. Additionally, Chen et al. [PS13] describes a fleet of connected vehicles, each sharing its own behavioral DT to support collective driving decisions without central command. Coordination emerges dynamically through peer-to-peer risk assessments.
Some studies qualify as Virtual SoTS (4 of 80 – 5.0%), where constituents join voluntarily, pursue independent goals, and coordinate dynamically without centralized control. Pickering et al. [PS61] presents the MAS-H platform, where independent stakeholders operate autonomously while dynamically coordinating through an open DT and modular infrastructure. Goals such as labor efficiency or sustainability emerge from voluntary collaboration rather than centralized directives. Similarly, Esterle et al. [PS23] explores a system of autonomous cyber-physical entities that self-integrate during encounters. Coordination arises through dynamic model exchange and adaptation using DTs, without pre-defined tasks.
# 5.2.2 Constituent Units
Tab. 7 summarizes the types of constituent units in SoTS. Most studies (62 of 80 – 77.5%) focus on physical systems, e.g., machines, vehicles, or industrial assets. These DTs support monitoring, control, and optimization at the asset or network level [PS66,
PS40]. Cyber-Physical Systems (CPS) appear in (9 of 80 – 11.3%) studies, where emphasis is placed on cross-domain interoperability and reusable architectures [PS53, PS51]. Cyber-Physical-Human Systems (CPHS) are considered in (7 of 80 – 8.8%) studies, incorporating human interaction or oversight. Examples include human-robot collaboration and adaptive mission planning [PS69, PS24]. Only (2 of 80 – 2.5%) studies address enterprise systems, modeling organizational entities, e.g., departments or administrative units, as DTs [PS41, PS50].
# RQ2: How DTs and SoS are Combined
Most SoTS adopt centralized architectures, with DTs coordinating physical systems via Acknowledged or Directed patterns. Decentralized forms like Collaborative and Virtual SoTS are less common. Constituents are primarily physical assets, with limited use of cyber-physical systems, cyber-physical-human systems, or enterprise-level twins.
# 5.3 What are the characteristics of DTs that are combined with SoS? (RQ3)
To find the characteristics of DTs used in SoTS, we analyze their levels of autonomy (Sec. 5.3.1), the services they provide (Sec. 5.3.2), and the modeling and simulation techniques applied (Sec. 5.3.3).
# 5.3.1 Level of Autonomy
Tab. 8 summarizes the autonomy levels of SoTS DTs. Most studies (66 of 80 – 82.5%) implement fully autonomous DTs for independent monitoring, control, or decision-making. Digital shadows, passive representations without autonomy, appear in (6 of 80 – 7.5%) studies. Hofmeister et al. [PS34] use them as data layers for agents assessing environmental risks. Human-supervised DTs appear in (4 of 80 – 5.0%) studies and human-actuated DTs in (3 of 80 – 3.8%), typically in safety-critical contexts. For example, Folds et al. [PS24] use a supervised DT for mission adaptation in a cyber-physical-human system. Only one study uses a digital model (1 of 80 – 1.3%), representing static models, for enterprise-level planning rather than real-time operation [PS41].
# 5.3.2 DT Services
Tab. 9 summarizes the services provided by DTs in SoTS configurations. As shown in Fig. 11, most studies combine multiple services rather than using them in isolation.
Table 7: Constituent Units
Table 8: Levels of Autonomy
Table 9: DT Services Used in Papers
Fig. 11: Combinations of DT services offered across reviewed SoTS studies.
The most widely used services are real-time monitoring (79 of 80 – 98.8%), simulation (77 of 80 – 96.3%), and optimization (68 of 80 – 85.0%). Prediction (56 of 80 – 70.0%), visualization (49 of 80 – 61.3%), and information retrieval (48 of 80 – 60.0%) are also frequently integrated.
The most common service combination, observed in (6 of 80 – 7.5%) studies, includes real-time monitoring, simulation, optimization, prediction, and information retrieval, supporting both continuous system supervision and proactive planning. Other studies incorporate varied combinations, typically coupling the core services (monitoring, simulation, optimization, and prediction) with additional functionalities, e.g., visualization, information retrieval, diagnosis, and event detection.
# 5.3.3 Modeling and Simulation Formalisms and Techniques
Tab. 10 summarizes the modeling and simulation formalisms used in SoTS studies. Architectural and structural methods are most common (31 of 80 – 38.8%), with UML (12 of 80 – 15.0%) and SysML (11 of 80 – 13.8%) used for system specification. Spatial and visual models appear in (24 of 80 – 30.0%) studies, including CAD (12 of 80 – 15.0%) and 3D modeling (10 of 80 – 12.5%) for physical layout and geometry. Mathematical and statistical models (23 of 80 – 28.8%) support dynamics and uncertainty, often using Bayesian networks (BN) or general equations. Ontological methods (19 of 80 – 23.8%) address semantic integration via the Web Ontology Language (OWL) and AutomationML. Formal methods (14 of 80 – 17.5%) use Finite State Machines (FSM) and Fault Tree Analysis (FTA) for verification. AI/ML methods (13 of 80 – 16.3%) enable adaptive learning. Continuous simulation methods (12 of 80 – 15.0%) and agent-based simulations (10 of 80 – 12.5%) model physical dynamics and interactions. Discrete-event simulation methods (8 of 80 – 10.0%) are used for workflow and performance analysis.
# RQ3: Characteristics of DTs in SoTS
Most SoTS use fully autonomous DTs that provide monitoring, simulation, prediction, and optimization services. Modeling approaches vary, with architectural, visual, and mathematical formalisms being the most frequently used.
# 5.4 What are the characteristics of SoS that are combined with DTs? (RQ4)
To identify the characteristics of SoS used in SoTS, we analyze their supported SoS dimensions (Sec. 5.4.1) and the forms of emergent behavior they exhibit (Sec. 5.4.2).
# 5.4.1 Dimensions of SoS
Fig. 12 shows the SoS dimensions addressed in the studies, based on the framework by Nielsen et al. [51]. The most consistently supported dimensions are distribution and independence, with 92.5% and 88.75% of studies supporting these properties. Interdependence (77.5%) and interoperability (76.25%) also appear frequently, highlighting the importance of coordination and information exchange in SoTS. Autonomy (47.5% “Yes” and 41.25% “Partial”) and emergence (53.75%) show more variance, with a significant number of studies only partially addressing these properties. Reconfiguration and evolution, at just 43.75% and 37.5% support, respectively, are the least acknowledged. This indicates that runtime adaptivity and long-term evolution remain major gaps in current SoTS implementations.
Table 10: Modeling and Simulation Formalisms
Fig. 12: SoS Dimensions (No / Partial / Yes)
# 5.4.2 Emergence Type
Tab. 11 shows the types of emergent behavior reported in the studies. Weak emergence is most common (30 of 80 – 37.5%). It involves behaviors that appear in system-level simulations but not in isolated components. Malayjerdi et al. [PS52] demonstrate this through vehicle safety testing in software-in-the-loop setups. Simple emergence appears in (16 of 80 – 20.0%) studies. It involves predictable interactions, e.g., in Zhang et al. [PS79]’s DT framework for shop floor coordination. Strong emergence is rare (6 of 80 – 7.5%). It captures behaviors not predictable from subsystems. Examples include SoS simulations in mining [PS10] and automotive systems [PS16]. (28 of 80 – 35.0%) studies do not address emergent behaviors at all.
# RQ4: Characteristics of SoS in SoTS
SoTS support architectural SoS dimensions (distribution, independence, interdependence, and interoperability) but rarely address dynamical aspects (emergence, reconfiguration, and evolution). Emergent behavior is addressed in two thirds of the studies, most often as weak emergence; the remaining third does not consider emergence at all.
# 5.5 How are non-functional properties addressed in systems that combine SoS and DT? (RQ5)
To understand how non-functional properties are handled in SoTS, we analyze how reliability and security are addressed across studies (Sec. 5.5.1).
# 5.5.1 Security and Reliability
Reliability and security are the most frequently addressed non-functional properties in SoTS research. As shown in Tab. 12 and Tab. 13, reliability appears in (41 of 80 – 51.3%) studies, mostly through architectural mechanisms. These include fallback to local or lightweight DTs during communication loss [PS2, PS43], asynchronous communication for handling intermittent updates [PS1, PS48], and runtime fault recovery [PS23, PS74]. However, only (2 of 80 – 2.5%) studies formally model reliability, and only (3 of 80 – 3.8%) validate it through simulation or fault injection [PS58, PS68].
Security is covered architecturally in (19 of 80 – 23.8%) studies, often through secure communication, access control, or authentication [PS5, PS1, PS19]. Just (2 of 80 – 2.5%) studies model security explicitly, and (3 of 80 – 3.8%) perform validation through threat simulation or attack injection [PS52, PS72].
These two concerns remain central, but they represent only part of the broader quality landscape. ISO/IEC 25010 outlines other key properties, e.g., maintainability, interoperability, and usability.
# RQ5: NFPs focused on in SoTS
Reliability is frequently addressed through architectural strategies, but rarely formalized or evaluated. Security is less commonly treated, and most studies lack explicit modeling or validation.
Table 11: Emergence type (arranged in canonical order of emergence complexity [43])
Table 12: Reliability Considerations
Table 13: Security Considerations
# 5.6 What is the level of technical and research maturity in SoTS? (RQ6)
To assess the maturity of SoTS research, we analyzed the TRLs and contribution types of studies (Sec. 5.6.1), assessment strategies (Sec. 5.6.2), and the role of standardization (Sec. 5.6.3). Note that due to the rigorous study design (i.e., the exclusion of shallow contributions), the following results may not be fully representative of the state-of-the-art.
# 5.6.1 TRL Levels and Contribution Types
Tab. 14 shows that most studies operate at lower-to-mid maturity, with demo prototypes being the most common stage (35 of 80 – 43.8%), followed by initial (20 of 80 – 25.0%) and proof-of-concept efforts (16 of 80 – 20.0%). Only a few studies report deployed prototypes (8 of 80 – 10.0%) or fully operational systems (1 of 80 – 1.3%).
In terms of contribution types (Tab. 15), the vast majority are technical contributions (60 of 80 – 75.0%), often proposing new architectures or implementations. Conceptual works (13 of 80 – 16.3%) make up a smaller portion of the sample, and case studies are underrepresented (7 of 80 – 8.8%).
As illustrated in Fig. 13, technical contributions dominate across all TRL levels but especially in demo prototypes and initial stages. Conceptual works appear mostly at early TRL stages. Case studies are rarely found and only emerge beyond the initial and proof-of-concept stages. This reflects a strong emphasis on engineering feasibility but limited real-world validation.
# 5.6.2 Evaluation
Tab. 16 shows that validation research (72 of 80 – 90.0%) dominates the sample, mainly through prototyping (36 of 80 – 45.0%), simulation (16 of 80
Table 14: TRL (arranged in canonical order of technological readiness level [45])
Table 15: Contribution Type
Fig. 13: Distribution of Contribution Types across TRL Levels
– 20.0%), conceptual design validation (13 of 80 – 16.3%), laboratory experiments (4 of 80 – 5.0%), and mathematical analysis (3 of 80 – 3.8%). For example, Hatledal et al. [PS30] and Chen et al. [PS13] use simulation to validate co-simulated and behavior-predictive DTs, respectively. Larsen et al. [PS43] prototype a DTaaS platform for robot composition, while Redelinghuys et al. [PS65] validate architecture designs through structured frameworks and applied case studies. Mathematical analysis is used in Mahoro et al. [PS51] to formalize graph-based synchronization across DT layers. Savur et al. [PS69] conduct laboratory experiments to evaluate a human-robot collaboration system through physical trials.
In contrast, evaluation research appears in only (8 of 80 – 10.0%) studies. Ashtari Talkhestani et al. [PS4] conduct an industrial case study to assess DT-based automation, and Bertoni et al. [PS10] apply action research to support planning in mining operations using an operational DT.
Table 16: Validation and Evaluation Approaches
# 5.6.3 Standards
Tab. 17 summarizes standards referenced across the studies. Open Platform Communications Unified Architecture (OPC UA) is the most used (13 of 80 – 16.3%), supporting secure communication and hierarchical data exchange [PS19, PS39]. IEC 63278 (Asset Administration Shell) appears in (8 of 80 – 10.0%) studies for asset representation and interoperability [PS27, PS28]. The Reference Architectural Model Industrie 4.0 (RAMI 4.0) is cited in (4 of 80 – 5.0%) studies to guide structured DT integration [PS11]. Other domain-specific standards include VANET, IPv6 [PS2], ISO/IEC/IEEE 15288 [PS3], ISA-95 [PS19], IEC 61850 [PS37], and IEEE 1451 [PS45]. Security-related standards include GDPR [PS72] and OAuth 2.0 [PS36, PS37].
Tab. 18 shows that most standards are applied in DT-specific contexts (18 of 80 – 22.5%), fewer relate to SoS (10 of 80 – 12.5%), and only (6 of 80 – 7.5%) support both. DT-oriented examples include the use of OPC UA and RAMI 4.0 for modeling and communication [PS11]. SoS-focused studies rely on NATO and SISO standards to support coordination and mission-level system integration [PS7]. Vermesan et al. [PS73] present a combined view, applying both DT- and SoS-relevant standards in the Internet of Vehicles (IoV) context. In total, 36 of 80 (45.0%) unique studies rely on a standard, i.e., the majority of the sampled studies do not adhere to standards.
# RQ6: Maturity of SoTS research
SoTS research in our sample, even after applying rigorous quality criteria, is situated largely at low-to-mid TRLs, with demo prototypes and proof-of-concept efforts being the most common. Validation is primarily conducted through prototyping and simulation, with limited empirical evaluation. Standards are inconsistently applied and tend to focus on DT-specific components, with few addressing SoS integration or supporting both layers.
# 5.7 What technology is used to implement systems that combine SoS and DT? (RQ7)
To understand what technologies support the implementation of SoTS, we examine the programming languages and data formats used (Sec. 5.7.1), as well as the development frameworks and platforms adopted across studies (Sec. 5.7.2).
# 5.7.1 Programming Languages and Formats
Tab. 19 shows that most studies rely on general-purpose programming languages (36 of 80 – 45.0%), particularly Python (22 of 80 – 27.5%) and Java (14 of 80 – 17.5%), reflecting their flexibility in data processing and simulation. Languages like JavaScript, C++, and C# appear less frequently. Data representation formats are used in (12 of 80 – 15.0%) studies, with XML (9 of 80 – 11.3%) and JSON (5 of 80 – 6.3%) supporting structured data exchange. Markup and styling languages, e.g., HTML and CSS, appear in (4 of 80 – 5.0%) cases each, usually for visualization or web-based system interfaces.
Table 17: Standards
Table 19: Programming Languages and Data Formats Used in Studies
Table 18: Standards Usage Context (DT vs. SoS)
# 5.7.2 Frameworks and Platforms
Tab. 20 shows that most studies use modeling and simulation tools (35 of 80 – 43.8%), notably MATLAB (10 of 80 – 12.5%), Gazebo, Modelica, and Simulink (each 4 of 80 – 5.0%), supporting system dynamics and co-simulation. Data management tools appear in (19 of 80 – 23.8%) studies, with MongoDB (6 of 80 – 7.5%) leading. Other tools like PostgreSQL, Redis, and Protégé support storage, synchronization, and ontology modeling. Visualization tools are also common (19 of 80 – 23.8%), with Unity (5 of 80 – 6.3%) and platforms like
WebGL and Kinect enabling interactive 3D or AR interfaces. DT and IoT platforms are used in (15 of 80 $- 1 8 . 8 \%$ ), including Eclipse Ditto and ROS (each 4 of $8 0 \mathrm { ~ - ~ } 5 . 0 \% )$ supporting twin orchestration, and device interoperability. Systems engineering tools (11 of 80 – $1 3 . 8 \%$ ), like Cameo Systems Modeler, Metasonic Suite, and Enterprise Architect, support architectural modeling. Other categories include web/app frameworks (10 of $8 0 \ - \ 1 2 . 5 \%$ ), cloud and DevOps tools (8 of 80 – $1 0 . 0 \%$ ) like Docker and Azure, and analytics platforms (7 of 80 – 8.8%), e.g., Grafana and Jupyter Lab for monitoring and machine learning (ML).
Table 20: Tools and Frameworks Used in Studies
# RQ7: Technologies used in SoTS
Systems combining SoS and DT use diverse technologies, with Python and Java as primary languages and XML/JSON for data formatting. The frameworks used focus on supporting simulation, data management, and systems engineering.
# 6 Discussion
We now discuss the key takeaways of our study and recommend research directions to prospective researchers.
# 6.1 Architecting SoTS
One of the key challenges in digital twin engineering is the relative lack of established architectures [16]. Our empirical inquiry suggests that this issue is inherited by SoTS, as evidenced by Tab. 5, which identifies the lack of architectures and lack of standards as recurring design challenges. As shown in Tab. 3, the intent of SoTS is typically the organization of DTs into SoS, which hints at the need for specialized architectures that are flexible enough to accommodate SoS dynamics. This hypothesis is corroborated by Tab. 5, which identifies key SoS-related operational challenges of SoTS, such as interoperability (in two forms, in fact: operative interoperability and data interoperability, the two discussed in nearly 40% of the sampled studies).
The prevalence of acknowledged and directed SoS types in Tab. 6 (found in over 70% of SoTS) highlights that current SoTS indeed struggle to support dynamical architectures. Collaborative and virtual SoS, i.e., more dynamical flavors of SoS, are encountered in less than 30% of the cases. Indeed, this might be an artifact of the lack of architectural specifications and standards.
The good news for prospective researchers is that among the most typical modeling formalisms, we often find structural and architectural ones. As shown in Tab. 10, SysML and UML Class Diagrams are frequently encountered, which may hint at attempts at structural definitions of SoTS.
Developing SoTS architectures, therefore, should be a priority for prospective researchers. Such architectural specifications will indirectly contribute to the maturity of research and the maturity of systems as well, two areas current SoTS struggle with (see Tab. 16 and Tab. 14). We suggest research into microservice architectures [PS9], possibly bundled with the FMI/FMU standard for co-simulation [6], as well as into the interoperability of DTs, which has been shown to be an important enabler of SoTS [12]. For these efforts, our classification framework in Sec. 4 should provide valuable input.
# Recommendation 1
Develop architectural specifications and reference implementations for SoTS to ease their engineering and to allow higher levels of maturity in their research and development.
# 6.2 Standardization
Standardization is an overlooked aspect of engineering SoTS. We found that less than half of the sampled studies rely on any sort of standard (36 of 80, 45.0%; see Tab. 17), and these standards are not primarily DT- or SoS-related. In most cases, we find (business) data management and exchange standards, e.g., OPC UA, the Asset Administration Shell (IEC 63278), and RAMI 4.0. These standards are among the ones recognized to support the engineering of DTs in the absence of more suitable standards [61]. Among the challenges of designing SoTS (Tab. 5), standards are explicitly mentioned in a number of studies. The previous point on architecting SoTS also raises the need for technical standards [16]. Further strong evidence of the need for standards comes from the application domains in which SoTS are used. As shown in Tab. 4, some of the typical application domains include automotive systems and smart cities, both of which enforce rigorous standards and will likely do so for SoTS. The lack of standards hinders the adoption of SoTS in these domains, and likely in others too.
Unfortunately, the limitations of the only ISO-grade DT standard (ISO 23247) to support dynamical systems are well known [41]; and standardization of SoS is an afterthought. According to Shao [61], two new extensions to the ISO 23247 standard are expected to appear in the coming years: digital thread for DTs (Part 5) and DT composition (Part 6). These extensions are well-positioned to address the key challenges of SoTS, including interoperability and synchronization among DTs.
# Recommendation 2
Develop standards for DT and SoS, and participate in standardization efforts to improve the maturity of SoTS.
# 6.3 Managing emergent behavior in SoS by DTs
The essential trait of SoS is the emergent behavior they exhibit. Yet, as witnessed by Tab. 6, state-of-the-art SoTS techniques are mostly limited to acknowledged and directed flavors of SoS. Our hypothesis is that, augmented with DTs, SoTS can achieve more. The uniquely tight coupling of cyber and physical components in DTs allows for leveraging them to understand and manage emergent behavior. The idea of active experimentation with the physical system to infer simulation models dates back to the '70s [71], and it is experiencing a renaissance thanks to DTs [50, 2]. Active experimentation is the purposeful modification of the twinned system in a way that it exhibits interesting configurations from which valuable information can be extracted. Such ideas have been explored, e.g., in the control of uncrewed aerial systems [29], computer vision for autonomous vehicles [55], and AI simulation [41]. Purposeful experimentation will help SoTS engineers characterize emergent behavior better and learn about the environment of the SoTS.
Even after purposeful experimentation, some uncertainty about the behavior of the SoTS remains. To manage these unknown unknowns [56], we recommend researching computing techniques that have the potential to react to unknown unknowns better, e.g., faster-than-real-time simulations to react to emergence faster or to anticipate it on a short time horizon; and using sound modeling techniques, such as goal modeling (e.g., via i* [23] and KAOS [24]), to codify the expected behavior of SoTS.
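To make the faster-than-real-time idea concrete, the sketch below advances a simulation over a short model-time horizon and checks whether the run beat a wall-clock budget. This is a minimal illustration under our own assumptions; `step` is a generic stand-in for a twin's update function, not an API from any surveyed framework.

```python
import time

def lookahead(step, state, dt, horizon, budget_s):
    """Simulate `horizon` seconds of model time in steps of `dt`, and report
    whether the run finished within the wall-clock budget `budget_s`, i.e.,
    whether the simulation ran faster than real time.
    `step`: (state, dt) -> next state (assumed model update function)."""
    start = time.perf_counter()
    t = 0.0
    while t < horizon:
        state = step(state, dt)
        t += dt
    elapsed = time.perf_counter() - start
    return state, elapsed < budget_s

# Toy twin dynamics: exponential decay of some monitored quantity.
decay = lambda s, dt: s * (1.0 - 0.1 * dt)
final_state, faster_than_real_time = lookahead(decay, 1.0, 0.01, 1.0, 1.0)
```

A run that reliably beats the budget can be re-launched on a rolling horizon to anticipate emergent behavior a few seconds ahead of the physical system.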
# Recommendation 3
Leverage DT capabilities to understand and manage emergent behavior in SoS, e.g., by purposeful experimentation with physical systems, or by improving run-time modeling and simulation capabilities.
# 6.4 Human factors
The application domains of SoTS (Tab. 4), especially smart cities and automotive systems, suggest that human aspects are a substantial factor in SoTS. Humans interact with SoTS in many ways: they use, operate, test, and develop SoTS, and therefore human factors deserve research inquiries. In addition, humans can be digitally twinned, too, a trend that has been on display since the 2021 edition of Gartner's hype curve.4 The utility of such techniques has been verified in a growing number of domains, from healthcare [49] to smart agronomy [9]. Additional evidence in Tab. 7 corroborates the role of humans in SoTS through studies on systems positioned as cyber-physical-human ones.
Despite the emerging need for situating the human in the SoTS, Tab. 8 shows that DTs in SoTS mostly ignore human aspects. DTs in SoTS are typically considered autonomous ones without the need for human oversight, control, or actuation.
# Recommendation 4
Research the role of the human in SoTS and enable human-centered methods in SoTS.
# 6.5 Empirical inquiries are welcome additions
We observe a relatively high ratio of technical contributions compared to conceptual works in our sample (see Tab. 15). This is, of course, partly the result of our study design, which excluded works with shallow and superficial contributions. Thus, the ratio of technical and conceptual contributions may not be representative of the overall field of SoTS. Tab. 14 reports that more than half of the sampled studies are beyond a demo prototype TRL. Fig. 13 shows a more detailed view of the TRL of the various contribution types. As expected, conceptual contributions are situated at lower levels of TRL (initial and proof-of-concept, i.e., TRLs 1–4), while the distribution of technical contributions peaks at the demonstrated prototype level (i.e., TRLs 5–6), with occasional instances at the deployed prototype level (i.e., TRLs 7–8). The few case studies we found are predominantly situated at the deployed prototype level, with one instance at the operational level of maturity (i.e., TRL 9).
The apparent existence of mature SoTS provides excellent opportunities for empirical inquiries. We encourage such investigations and suggest that prospective researchers consider reporting in case report and exemplar formats [46], e.g., in the industry and practice tracks of conferences, which are as reputable as foundations tracks. In terms of methods, we recommend case studies [69], engineering research (also known as design science) [14], action research [8], and ethnography [62] for human-focused studies (e.g., when researching the role of the human in a SoTS).
Such empirical inquiries will indirectly contribute to improved research maturity, e.g., by naturally improving the ratio of evaluative assessments over validation types. The latter is currently the prevalent assessment method by far (90% vs. 10%), as evidenced by Tab. 16, but is ranked lower on the methodological list of Petersen et al. [54].
# Recommendation 5
Conduct empirical inquiries into SoTS by using established methods, such as case studies, action research, and longitudinal studies.

# Abstract

Modern systems exhibit unprecedented complexity due to their increased scale, interconnectedness, and the heterogeneity of their digital and physical components. In response to scaling challenges, the system-of-systems (SoS) paradigm proposes flexible aggregations of subsystems into a larger whole, while maintaining the independence of subsystems to various degrees. In response to the cyber-physical convergence, the digital twin (DT) paradigm proposes a tight coupling between digital and physical components through computational reflection and precise control. As these two paradigms address distinct parts of the overall challenge, combining the two promises more comprehensive methods to engineer what we call systems of twinned systems (SoTS). The noticeably growing body of knowledge on SoTS calls for a review of the state of the art. In this work, we report on our systematic literature survey of SoTS. We screened over 2500 potential studies, of which we included 80 and investigated them in detail. To converge SoS and DT, we derive a classification framework for SoTS that is backward compatible with the currently accepted theories of SoS and DT.
# 1 INTRODUCTION
Graph neural networks (GNNs) have demonstrated promising performance in graph analytical tasks such as classification. Given a graph $G$ (a network representation of a real-world dataset), a GNN $\mathcal{M}$ aims to learn the node representations of $G$ that can be converted to proper results for targeted downstream tasks, e.g., node classification, link prediction, or regression analysis. For example, a GNN-based node classifier assigns a class label to a set of test nodes in $G$, where the label of each test node $v$ (the “output” of $\mathcal{M}$ at node $v$, denoted as $M(v, G)$) is determined by the node representation learned by the GNN $\mathcal{M}$. GNNs have been applied for node classification in biochemistry, social and financial networks [14, 55, 56, 61], among other graph analytical tasks.
Despite their promising performance, it remains desirable yet nontrivial to explain the output of GNNs to help users understand their behavior [62]. Several GNN explainers are proposed to generate subgraphs (called “explanatory subgraphs”) that are “responsible” to clarify the output of $\mathcal { M }$ over $G$ [34, 35, 52, 60, 66]. For example, given $\mathcal { M }$ as a GNN-based node classifier and a test node $\boldsymbol { v }$ in the graph $G$ , a GNN explainer computes an explanatory subgraph $G _ { \zeta }$ of $G$ that can best clarify the node label $M ( v , G )$ . This is often addressed by solving an optimization problem that discovers a subgraph $G _ { \zeta }$ subject to a pre-defined metric, which quantifies the explainability of $G _ { \zeta }$ for the output $M ( v , G )$ for a test node $\boldsymbol { v }$ .
Prior work typically pre-assumes and optimizes a single metric of interest, such as fidelity, sparsity, or stability, to generate explanatory subgraphs with high explainability [62]. Such a metric assesses explanations from a pre-defined, one-sided perspective of explainability (as summarized in Table 2, § 2). For example, a subgraph $G_\zeta$ of a graph $G$ is a “factual” explanation for $\mathcal{M}$ over $G$ if it preserves the output of $\mathcal{M}$ (hence is “faithful” to the output of $\mathcal{M}$) [33, 35, 52, 60]. $G_\zeta$ is a “counterfactual” explanation if removing the edges of $G_\zeta$ from $G$ leads to a change of the output of $\mathcal{M}$ on the remaining graph (denoted as $G \backslash G_\zeta$) [33, 34, 52]. Other metrics include fidelity− [33, 34, 52] (resp. fidelity+ [35, 63]), which quantifies the explainability of $G_\zeta$ in terms of the closeness between the task-specific output of $\mathcal{M}$, such as the probability of label assignments, over $G_\zeta$ (resp. $G \backslash G_\zeta$) and its original counterpart over $G$, and “conciseness (sparsity)”, which favors small explanatory subgraphs.
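To make these metrics concrete, here is a minimal sketch of fidelity-style scores and sparsity, under our own simplifying assumptions: `predict` is a generic stand-in for the GNN's class-probability output $M(v, \cdot)$ at the test node, and the scores are computed from probability differences. Exact definitions vary across the cited explainers.

```python
def fidelity_minus(predict, G, G_sub):
    """Closeness between the prediction on the explanatory subgraph and the
    original prediction: higher means the subgraph alone preserves the output."""
    return 1.0 - abs(predict(G) - predict(G_sub))

def fidelity_plus(predict, G, G_without_sub):
    """Drop in the prediction once the explanation's edges are removed:
    higher means the explanation was necessary for the output."""
    return abs(predict(G) - predict(G_without_sub))

def sparsity(graph_edges, sub_edges):
    """Fraction of the graph NOT used by the explanation: higher is more concise."""
    return 1.0 - len(sub_edges) / len(graph_edges)

# Toy stand-in for M(v, .): probability of the "illicit" label on each input.
toy_predict = {"G": 0.9, "G_sub": 0.85, "G_minus_sub": 0.2}.__getitem__

f_minus = fidelity_minus(toy_predict, "G", "G_sub")        # ≈ 0.95
f_plus = fidelity_plus(toy_predict, "G", "G_minus_sub")    # ≈ 0.70
spars = sparsity(range(100), range(10))                    # ≈ 0.90
```

Note how `fidelity_minus` rewards small, faithful subgraphs while `fidelity_plus` rewards subgraphs whose removal flips the prediction, which is exactly the tension Example 1 illustrates.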
Nevertheless, GNN explainers that optimize a single metric may lead to biased and less comprehensive explanations. For example, an explanation that achieves high fidelity may typically “compromise” in conciseness, due to the need of including more nodes and edges from the original graphs to be “faithful” to the GNN output. Consider the following example.
Example 1: In Bitcoin blockchain transactions, money launderers employ various techniques to conceal illicit cryptocurrency activities and evade detection by law enforcement agencies and AI-based monitoring systems [39, 43]. Figure 1 illustrates an input graph $G$ that includes account IP addresses and Bitcoin transactions among the accounts. Each IP address is associated with transaction-related features: the number of blockchain transactions (Txs), the number of transacted Bitcoins (BTC), and the amount of fees in Bitcoins (Fee). The task is to detect illicit IP addresses with a GNN-based node classifier. A GNN classifier has correctly detected $v_t$ as “illicit”.
A law enforcement agency wants to understand why the GNN asserts $v_t$ as an illicit account address. They may ask an “explanatory query” that requests to generate explanations (“which fraction of the graph $G$ is responsible for the GNN’s decision of assigning the label “illicit” to the account $v_t$?”), and further ground this output with real-world evidence (e.g., by referring to known real-world money laundering scenarios [5, 12]). Therefore, it is desirable to generate explanatory subgraphs as intuitive and natural showcases of money laundering scenarios. For example, “Spindle” [12] suggests that perpetrators generate multiple shadow addresses to transfer small amounts of assets along lengthy paths to a specific destination; and “Peel Chain” [5] launders large amounts of cryptocurrency through sequences of small transactions, where minor portions are ‘peeled’ from the original address and sent for conversion into fiat currency to minimize the risk of being detected.
Figure 1: A Bitcoin transaction network with a target IP address (test node $\upsilon _ { t }$ ) that has a label “illicit” to be explained [19].
Consider the following explanatory subgraphs, generated by representative GNN explainers: $G_\zeta^1$ is a factual explanatory subgraph generated by the explainer in [60] (a “factual explainer”); and $G_\zeta^2$ is a counterfactual explanatory subgraph generated by a “counterfactual explainer” [34]. Compare $G_\zeta^1$ and $G_\zeta^2$: (1) $G_\zeta^1$ includes a subgraph induced by $v_t$ and most of its neighbors, which is indeed a critical fraction that can preserve the GNN’s output; nevertheless, it misses important nodes that have a higher influence on the GNN’s decision making but are not in $v_t$’s direct neighborhood. (2) Such nodes can be captured by a counterfactual explanatory subgraph, as depicted in $G_\zeta^2$. Although $G_\zeta^2$ can capture more nodes with high influence beyond $v_t$’s neighbors, it is enforced to include a larger fraction of $G$ to ensure that the removal of its edges incurs a great enough impact to change the output of the GNN classifier, hence sacrificing “conciseness” and conflicting with the needs of users who favor quick generation of small evidence that is “faithful” to the original GNN output [33]. Choosing either alone can be biased toward a “one-sided” explanation for the “illicit” IP address. □
Can we generate explanations that simultaneously optimize multiple explainability metrics? One solution is to compute subgraphs that optimize a linear combination of all metrics. This aggregates multiple criteria into a single objective function using a set of weights. However, such weighted-sum methods may lead to a single, marginally optimal answer across all criteria, overlooking other high-quality and diverse solutions, and hence can be an overkill [18, 36].
Example 2: Consider another two explanatory subgraphs: $G _ { \zeta } ^ { 3 }$ is the explanation generated by an explainer that optimizes both conciseness and factual measures [35]; $G _ { \zeta } ^ { 4 }$ is the explanation generated by an explainer that linearly combines factual and counterfactual measures into a single, bi-criteria objective function to be optimized [52]. Such simple combinations enforce explainers to optimize potentially “conflicting” measures, or highly correlated ones, either may result in lower-quality solutions that are sensitive to data bias. For example, as factual measures and conciseness may both encourage smaller and faithful explanations, explanatory subgraphs obtained by [35], such as $G _ { \zeta } ^ { 3 }$ , turn out to be relatively much smaller and less informative, hardly providing sufficient real-world evidence that can be grounded by money laundering behaviors. On the other hand, explanatory subgraphs from [52] such as $G _ { \zeta } ^ { 4 }$ may easily be a one-sided explanation of either factual or counterfactual. Indeed, we find that such explanations capture only one type of money laundering scenario at best in most cases (e.g., Spindle [12]). □
These examples illustrate the need to generate explanations via a multi-objective optimization paradigm, leading to more comprehensive and balanced outcomes. Skyline query processing has been extensively studied [13, 42]. Analogizing well-established skyline queries [6, 9, 29, 32, 40], which compute Pareto sets (also referred to as “skylines”) of data points not dominated by one another across a set of quality measures, we advocate approaching GNN explanation by generating explanatory subgraphs that form Pareto sets over multiple user-defined explanatory measures. Skylines (or “Pareto sets”) generally offer better solutions than the aforementioned alternatives [18, 50]. We refer to such explanations as “skyline explanations”, and illustrate an example below.
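The dominance relation underlying skylines can be sketched in a few lines. This is a generic illustration (assuming higher scores are better on every measure), not the paper's algorithm, and the candidate names and scores below are invented.

```python
def dominates(a, b):
    """a dominates b iff a is at least as good on every measure
    (higher = better here) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(candidates):
    """Return the Pareto set: names of candidates not dominated by any other.
    `candidates` maps a name to its tuple of measure scores."""
    return {
        name
        for name, scores in candidates.items()
        if not any(dominates(other, scores)
                   for o, other in candidates.items() if o != name)
    }

# Toy scores over (fidelity+, fidelity-, conciseness) for four subgraphs.
scores = {
    "G5": (0.9, 0.2, 0.5),
    "G6": (0.3, 0.8, 0.6),
    "G7": (0.6, 0.6, 0.9),
    "G8": (0.2, 0.1, 0.4),  # dominated by G7 on all three measures
}
pareto = skyline(scores)    # {"G5", "G6", "G7"}
```

This brute-force check is quadratic in the number of candidates; the challenge addressed by the paper is avoiding such enumeration over the exponentially large space of candidate subgraphs.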
Example 3: Consider a set of subgraphs $G_\zeta^5$, $G_\zeta^6$, and $G_\zeta^7$. These explanatory subgraphs are selected as a Pareto-optimal set across three explanatory measures: fidelity+, fidelity−, and conciseness. Each subgraph is high-quality, diverse, and non-dominated, scoring higher than the others in the set on at least one measure. This result provides a more comprehensive and intuitive interpretation to explain “why” $v_t$ is identified as “illicit” by GNN-based classification. Indeed, $G_\zeta^5$, $G_\zeta^6$, and $G_\zeta^7$ capture different money laundering scenarios: Peel Chain [5], Spindle [12], and a combination, respectively. Therefore, the identified skyline explanations $G_\zeta^5$, $G_\zeta^6$, and $G_\zeta^7$ better support law enforcement agencies by distinguishing the various money laundering schemes in which $v_t$ is involved. □
We advocate developing a GNN explainer that can efficiently generate skyline explanations for large-scale GNN-based analysis. Such an explainer should: (1) generate skyline explanations for designated outputs of interest and any user-defined set of explanatory measures; (2) generate a diversified set of skyline explanations upon request; and (3) ensure desirable guarantees in terms of Pareto optimality. The need for skyline explanations is evident for trustworthy and multifaceted analysis and decision making, as observed in, e.g., drug repurposing [44], cybersecurity analysis [24], fraud detection [39], and social recommendation [20], among others, where GNN output should be clarified from multiple, comprehensive aspects, rather than one-sided, biased perspectives.
Contribution. This paper formulates and investigates a novel problem of generating skyline explanations in terms of explanatory subgraphs, by simultaneously optimizing multiple user-specified explanatory measures. We summarize our contributions as follows. (1) A formulation of the Skyline Explanation Generation problem. We introduce a class of skyline explanatory queries (SXQ) to express the configuration for generating skyline explanations. An SXQ takes as input a graph $G$, a GNN $\mathcal{M}$, nodes of interest $V_T$, and a set of explainability measures $\Phi$, and requests a skyline explanation (a set of explanatory subgraphs) that clarifies the output of $\mathcal{M}$ over each node in $V_T$ while simultaneously optimizing the measures in $\Phi$.
The evaluation problem of SXQ is to generate a skyline explanation (a set of $k$ explanatory subgraphs) as a comprehensive explanation of the output of $\mathcal{M}$ for $V_T$. We approach the problem with multi-objective optimization, based on a subgraph dominance relation over the explainability measures $\Phi$. We verify the hardness of both the decision and optimization versions of the problem.
(2) We introduce efficient algorithms to process SXQs in terms of Pareto optimality. The algorithm adopts an “onion-peeling” strategy to iteratively reduce edges at each hop of the targeted nodes, and validates a bounded number of generated explanations in a discretized coordinate system to incrementally improve the quality of the answer. We show that this process ensures a $(1+\epsilon)$-approximation of the Pareto-optimal set. We also present an algorithm to diversify the answer set for SXQs.
(3) Using real-world graphs and benchmark tasks from various domains, we present qualitative and quantitative analyses to verify the effectiveness and scalability of our GNN explainers. We visualize skyline explanations with their corresponding distributions, to showcase applications of our novel problems and algorithms. Our approach efficiently generates a set of skyline explanations for nodes of interest, even on large-scale graphs. For example, we outperform $\mathrm{CF}^2$ [52] in the Integrated Preference Function score by $2.8\times$ on the Cora dataset; for the OGBN_arxiv dataset with million-scale edges, we outperform the fastest baseline GNNExplainer [60] by $1.4\times$.
Related Work. We categorize related work into the following.
Graph Neural Networks. GNNs have demonstrated themselves as powerful tools for various graph learning tasks, including, but not limited to, node classification, link prediction [64], and graph classification [60]. Recent studies proposed multiple variants of GNNs, such as graph convolutional networks (GCNs) [28], graph attention networks (GATs) [54], and Graph Isomorphism Networks (GINs) [57]. These methods generally follow an information aggregation scheme where the features of a target node are obtained by aggregating and combining the features of its neighboring nodes.
Explanation of GNNs. Several GNN explanation approaches have been studied. (1) Learning-based methods aim to learn substructures of underlying graphs that contribute to the output of a GNN. GNNExplainer [60] identifies subgraphs with node features that maximize the influence on the prediction, by learning continuous soft masks for both adjacency matrix and feature matrix. CFGNNExplainer [34] learns the counterfactual subgraphs, which lead to significant changes of the output if removed from the graphs. PGExplainer [35] parameterizes the learning process of mask matrix using a multi-layer perceptron. GraphMask [47] learns to mask the edges through each layer of GNNs that leads to the most sensitive changes to their output. (2) Learning-based GNN explainers often require prior knowledge of model parameters and incur considerable learning overhead for large graphs. Other approaches perform post-processing to directly generate explanatory subgraphs that optimize a pre-assumed explanation criteria. SubgraphX [63] utilizes Monte-Carlo tree search to compute subgraphs that optimize a game-theory-inspired Shapley value. GStarX [65] follows a similar approach, yet aims to optimize “HN values”, a topology-aware variant of Shapley value. These methods typically focus on optimizing a single, pre-defined criterion, and cannot provide a configurable mechanism for users to customize the generation of explanations.
Closer to our setting are GNN explainers that aim to optimize more than one explainability criterion. $\mathrm{CF}^2$ [52] leverages both factual and counterfactual reasoning to formulate an objective that is a linear function of weighted factual and counterfactual measures. It learns feature masks and edge masks aiming to produce explanations that optimize this objective function. RoboGExp [45] generates subgraph explanations that are factual, counterfactual, and meanwhile robust, i.e., explanatory subgraphs remain invariant structures under a bounded number of edge modifications. GEAR [66] learns GNN explainers by adjusting the gradients of multiple objectives geometrically during optimization. GEAR handles gradient conflicts globally by selecting a dominant gradient based on user-desired objectives (such as fidelity) and adjusting conflicting gradients to lie within a controlled angle threshold. MOExp [33] introduces a bi-objective optimization algorithm to find Pareto-optimal explanations that strike a balance between “simulatability” (factual) and “counterfactual relevance”. It proposes a zero-order search algorithm that optimizes without accessing the target GNN model’s architecture or parameters, making it universally applicable. Although these methods generate explanations that address multiple criteria, the overall goal remains pre-defined: they do not provide a configurable mechanism to allow user-defined preferences as needed. In addition, MOExp does not control the size of explanations, which may result in a large set of subgraphs that are hard to inspect.
Skyline queries. Multi-objective search and skyline queries have been extensively studied [6, 15, 16]. These approaches compute Pareto-optimal sets [13, 26] or their approximate variants [30, 41] over data points and a set of optimization criteria. Notable strategies include [26], which transforms multiple objectives into a single-objective counterpart. Constraint-based methods such as [13] initialize a set of anchor points that optimize each single measure, and bisect the straight lines between pairs of anchor points with a fixed vertical separation distance. This transforms bi-objective optimization into a series of single-objective counterparts; solving each derives an approximation of the Pareto frontier. The $\epsilon$-Pareto set [30, 41] has been widely recognized as a desirable approximation of the Pareto-optimal set. While these algorithms cannot be directly applied to answer SXQ, we introduce effective multi-objective optimization algorithms to generate explanatory subgraphs with provable quality guarantees in terms of $\epsilon$-Pareto approximation.
# 2 GRAPHS AND GNN EXPLANATION
Graphs. A directed graph $G = (V, E)$ has a set of nodes $V$ and a set of edges $E \subseteq V \times V$. Each node $v$ carries a tuple $T(v)$ of attributes and their values. The size of $G$, denoted as $|G|$, refers to the total number of its edges, i.e., $|G| = |E|$. Given a node $v$ in $G$, (1) the $L$-hop neighbors of $v$, denoted as $N^L(v)$, refer to the set of nodes within $L$ hops of $v$ in $G$. (2) The $L$-hop neighbor subgraph, denoted as $G^L(v)$, refers to the subgraph of $G$ induced by $N^L(v)$. (3) The $L$-hop neighbor subgraph of a set of nodes $V_s \subseteq V$, denoted as $G^L(V_s)$, refers to the subgraph induced by the node set $\bigcup_{v \in V_s} N^L(v)$.
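A minimal sketch of $N^L(v)$ and the induced subgraph, using BFS over a plain adjacency dict. This is an illustrative convention of ours (here $v$ itself is included in $N^L(v)$), not the paper's implementation.

```python
from collections import deque

def l_hop_neighbors(adj, v, L):
    """N^L(v): nodes within L hops of v (including v itself here), via BFS.
    `adj` maps each node to the list of its out-neighbors."""
    seen = {v}
    frontier = deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == L:
            continue  # do not expand beyond L hops
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def induced_edges(adj, nodes):
    """Edge set of the subgraph induced by `nodes`."""
    return {(u, w) for u in nodes for w in adj.get(u, []) if w in nodes}

# A 4-node path graph 1 - 2 - 3 - 4, with edges stored in both directions.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
hood = l_hop_neighbors(adj, 1, 2)   # {1, 2, 3}
edges = induced_edges(adj, hood)
```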
Graph Neural Networks. GNNs [28] comprise a well-established family of deep learning models tailored for analyzing graph-structured data. GNNs generally employ a multi-layer message-passing scheme, as shown in Equation 1.
$$
\mathbf { H } ^ { ( l + 1 ) } = \sigma ( \widetilde { \mathbf { A } } \mathbf { H } ^ { ( l ) } \mathbf { W } ^ { ( l ) } )
$$
Table 1: Summary of notations
$\mathbf{H}^{(l+1)}$ is the matrix of node representations at layer $l+1$, with $\mathbf{H}^{(0)} = \mathbf{X}$ being the input feature matrix. $\widetilde{\mathbf{A}}$ is the normalized adjacency matrix of an input graph $G$, which captures the topological features of $G$. $\mathbf{W}^{(l)}$ is a learnable weight matrix at layer $l$ (a.k.a. “model weights”). $\sigma$ is an activation function such as ReLU.
The inference process of a GNN $\mathcal{M}$ with $L$ layers takes as input a graph $G = (\mathbf{X}, \mathbf{A})$, and computes the embedding $\mathbf{H}_v^{(L)}$ for each node $v \in V$, by recursively applying the update function in Equation 1. The final layer’s output $\mathbf{H}^{(L)}$ (a.k.a. “output embeddings”) is used to generate a task-specific output, by applying a post-processing layer (e.g., a softmax function). We denote the task-specific output as $M(v, G)$, for the output of a GNN $\mathcal{M}$ at a node $v \in V$.
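As an illustrative sketch (not the paper's code), one propagation step of Equation 1 can be written in a few lines of NumPy. The graph, features, and weights below are toy stand-ins, and the symmetric normalization with self-loops is one common choice for $\widetilde{\mathbf{A}}$ in GCNs [28].

```python
import numpy as np

def normalized_adjacency(A):
    """A-tilde = D^{-1/2} (A + I) D^{-1/2}: symmetrically normalized
    adjacency with self-loops, a common GCN normalization."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, H, W):
    """One message-passing step H^{(l+1)} = sigma(A-tilde H^{(l)} W^{(l)}),
    with ReLU as sigma."""
    return np.maximum(0.0, A_norm @ H @ W)

# Toy 3-node path graph with one-hot features and random (untrained) weights.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)                                      # H^{(0)} = X
W = np.random.default_rng(0).normal(size=(3, 4))   # W^{(0)}
H1 = gcn_layer(normalized_adjacency(A), X, W)      # H^{(1)}, shape (3, 4)
```

Stacking $L$ such layers yields the $L$-layer inference described above; each node's final embedding then depends on its $L$-hop neighborhood, which is why explanations are sought within $G^L(v)$.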
Fixed and Deterministic Inference. We say that a GNN has a fixed inference process if its inference process is specified by fixed model parameters, number of layers, and message passing scheme. It has a deterministic inference process if $M ( \cdot )$ generates the same result for the same input. We consider GNNs with fixed, deterministic inference processes. Such GNNs are desired for consistent and robust performance in practice.
Node Classification. Node classification is a fundamental task in graph analysis [21]. A GNN-based node classifier learns a GNN $\mathcal{M}: V \to \mathbf{Y}$ s.t. $\mathcal{M}(v) = y_v$ for $v \in V_{Tr} \subseteq V$, where $V_{Tr}$ is the training set of nodes with known (true) labels $Y_{Tr}$. The inference process of a trained GNN $\mathcal{M}$ assigns labels to a set of test nodes $V_T \subseteq V$, derived from their computed embeddings.
GNN Explainers and Measures. Given a GNN $\mathcal{M}$ and an output $M(v, G)$ to be explained, an explanatory subgraph $G_\zeta$ is an edge-induced, connected subgraph of $G$ with a non-empty edge set $E_\zeta \subseteq E$ that is responsible for clarifying the occurrence of $M(v, G)$. We call the set of all explanatory subgraphs the interpretable domain, denoted as $\zeta$. A GNN explainer is an algorithm that generates explanatory subgraphs in $\zeta$ for $M(v, G)$.
An explainability measure $\phi$ is a function $\phi: \zeta \to \mathbb{R}$ that associates an explanatory subgraph with an explainability score. Given $G$ and an output $M(v, G)$ to be explained, existing GNN explainers typically solve a single-objective optimization problem:
$$
G_{\zeta}^{*} = \underset{G_{\zeta} \in \zeta}{\arg\max} \; \phi(G_{\zeta})
$$
We summarize the main notations in Table 1. We summarize in Table 2 a set of commonly adopted explainability measures, along with relevant GNN explainers.
# 3 SKYLINE EXPLANATIONS
We introduce our explanation structure and the generation problem.
# 3.1 Skyline Explanatory Query
We start with a class of explanatory queries. A Skyline explanatory query, denoted as SXQ, has the form
$$
{ \mathsf { S X Q } } ( G , { \mathcal { M } } , V _ { T } , \Phi )
$$
where $G = (V, E)$ is an input graph, $\mathcal{M}$ is a GNN, $V_T \subseteq V$ is a set of designated test nodes of interest, and $\Phi$ is a set of user-defined explainability measures. The output $\mathcal{M}(V_T) = \bigcup_{v \in V_T} M(v, G)$ refers to the output to be explained.
Multi-objective Explanations. As aforementioned in Example 1, explanatory subgraphs that optimize a single explainability measure may not be comprehensive for the users’ interpretation preference. On the other hand, a single explanatory subgraph that optimizes multiple measures may not exist, as two measures may naturally “conflict”. Thus, we pursue high-quality answers for SXQ in terms of multi-objective optimality measures.
Given a node of interest $v \in V_T$, a subgraph $G_\zeta$ of $G$ is an explanatory subgraph in the interpretable space $\zeta$ w.r.t. the output $\mathcal{M}(v, G)$, if it is either a factual or a counterfactual explanation. That is, $G_\zeta$ satisfies one of the two conditions below:
$\circ$ (factual) $M(v, G_\zeta) = M(v, G)$, i.e., $G_\zeta$ alone preserves the output; or
$\circ$ (counterfactual) $M(v, G \setminus G_\zeta) \neq M(v, G)$, i.e., removing $G_\zeta$ changes the output.
The interpretable space $\zeta$ w.r.t. $G$ , $\mathcal { M }$ , and $V _ { T }$ contains all the explanatory subgraphs w.r.t. output $\mathcal { M } ( v , G )$ , as $\boldsymbol { v }$ ranges over $V _ { T }$ .
A subset $\mathcal{G}_\zeta \subseteq \zeta$ is an explanation w.r.t. $G$, $\mathcal{M}$ and $V_T$ if, for every node $v \in V_T$, there exists an explanatory subgraph $G_\zeta(v)$ w.r.t. $M(v, G)$ in $\mathcal{G}_\zeta$.
Explainability Measures. We make cases for three widely used explainability measures. fdl$^+$ measures the counterfactual property of an explanatory subgraph: we exclude the edges of the explanatory subgraph from the original graph and run GNN inference on the remaining subgraph to obtain a new prediction; if the difference between the two predictions is significant, the subgraph is a good counterfactual explanation. Similarly, fdl$^-$ measures the factual property, i.e., how closely the explanatory subgraph alone reproduces the original prediction. conc measures how compact the explanatory subgraph is, i.e., the size of its edge set.
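The three measures can be sketched for a single node $v$ as follows, assuming a black-box inference function `predict(edges, v)` that returns a label; all names here (`predict`, `fidelity_plus`, the toy model) are illustrative assumptions, not the paper's API:

```python
def fidelity_plus(predict, E, E_expl, v):
    """fdl+: does removing the explanatory edges change the prediction?"""
    return float(predict(E - E_expl, v) != predict(E, v))

def fidelity_minus(predict, E, E_expl, v):
    """fdl-: does the explanatory subgraph alone preserve the prediction?"""
    return float(predict(E_expl, v) == predict(E, v))

def conciseness(E, E_expl):
    """conc: smaller explanatory edge sets are more concise (range (0, 1])."""
    return 1.0 / len(E_expl)

# toy stand-in model: predict 1 iff v still touches at least 2 edges
def predict(edges, v):
    return int(sum(v in e for e in edges) >= 2)

E = {frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})}
E_expl = {frozenset({0, 1}), frozenset({0, 2})}
print(fidelity_plus(predict, E, E_expl, 0),   # 1.0: removal flips the label
      fidelity_minus(predict, E, E_expl, 0),  # 1.0: subgraph alone suffices
      conciseness(E, E_expl))                 # 0.5
```

Here the same explanatory edge set happens to score well on both fdl$^+$ and fdl$^-$; in general the measures trade off against each other.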
Measurement Space. The explainability measure set $\Phi$ is a set of normalized measures to be maximized, each with range $(0, 1]$. For a measure that is better when minimized (e.g., conciseness, fdl$^-$ in Table 2), one can readily convert it to an inverse counterpart.
To characterize the query semantics, we introduce a dominance relation over the interpretable domain $\zeta$.
Dominance. Given a set of user-specified explainability measures $\Phi$ (each converted so that bigger is better) and an interpretable space $\zeta$, we say that an explanatory subgraph $G_\zeta \in \zeta$ is dominated by another $G'_\zeta \in \zeta$, denoted as $G_\zeta \prec G'_\zeta$, if
$\circ$ for each measure $\phi \in \Phi$, $\phi(G_\zeta) \leq \phi(G'_\zeta)$; and
$\circ$ there exists a measure $\phi^* \in \Phi$, such that $\phi^*(G_\zeta) < \phi^*(G'_\zeta)$.
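The dominance relation amounts to a simple predicate over score vectors (all measures already converted to bigger-is-better); a minimal sketch:

```python
def dominated_by(phi_g, phi_g2):
    """True iff G (scores phi_g) is dominated by G' (scores phi_g2):
    G' is no worse on every measure and strictly better on at least one."""
    return (all(a <= b for a, b in zip(phi_g, phi_g2))
            and any(a < b for a, b in zip(phi_g, phi_g2)))

assert dominated_by((0.4, 0.6), (0.5, 0.6))      # better on one, equal on other
assert not dominated_by((0.4, 0.6), (0.6, 0.5))  # incomparable: no dominance
assert not dominated_by((0.5, 0.6), (0.5, 0.6))  # equal vectors do not dominate
```

The second case illustrates why a single optimal explanatory subgraph may not exist: incomparable score vectors are exactly what the skyline retains.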
Table 2: Representative explainability measures and notable GNN explainers
Query Answers. We characterize the answer for an SXQ in terms of Pareto optimality. Given an interpretable space $\zeta$ w.r.t. $G$, $\mathcal{M}$, and $V_T$, an explanation $\mathcal{G}_\zeta \subseteq \zeta$ is a Skyline explanation, if
$\circ$ there is no pair $\{G_1, G_2\} \subseteq \mathcal{G}_\zeta$ such that $G_1 \prec G_2$ or $G_2 \prec G_1$; and
$\circ$ for any other $G \in \zeta \setminus \mathcal{G}_\zeta$, there exists $G' \in \mathcal{G}_\zeta$ such that $G \prec G'$.
That is, $\mathcal{G}_\zeta$ is a Pareto set of $\zeta$ [50].
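The Pareto set over a finite pool of score vectors can be sketched directly from the definition (candidates are represented here only by their score vectors, an illustrative simplification):

```python
def skyline(scores):
    """Return the Pareto set: score vectors not dominated by any other."""
    def dom(a, b):  # a is dominated by b
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [a for a in scores if not any(dom(a, b) for b in scores)]

# (fdl+, conc) scores of four candidate explanatory subgraphs
cands = [(0.9, 0.2), (0.6, 0.5), (0.3, 0.9), (0.5, 0.4)]
print(skyline(cands))   # (0.5, 0.4) drops out: dominated by (0.6, 0.5)
```

The quadratic scan is fine for small pools; the algorithms in § 4 avoid materializing $\zeta$ at all.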
As a skyline explanation may still contain an excessive number of explanatory subgraphs for users to inspect, we pose a pragmatic cardinality constraint $k$. A $k$-skyline query (denoted as $\mathsf{SXQ}^k$) admits, as query answers, skyline explanations with at most $k$ explanatory subgraphs, or simply $k$-explanations. Here, $k$ is a user-defined constant ($k \leq |\zeta|$).
# 3.2 Evaluation of Skyline Explanatory Queries
While one can specify $\Phi$ and $k$ to quantify the explainability of a skyline explanation, SXQ may still return multiple explanations for users to inspect. Moreover, two $k$-explanations, one of which dominates far fewer explanatory subgraphs in $\zeta$ than the other, may be treated "unfairly" as equally good. To mitigate such bias, we adopt a natural measure to rank explanations in terms of dominance.
Given an explanatory subgraph $G_\zeta \in \zeta$, the dominance set of $G_\zeta$, denoted as $\mathcal{D}(G_\zeta)$, refers to the largest set $\{G' \mid G' \prec G_\zeta\}$, i.e., the set of all the explanatory subgraphs in $\zeta$ that are dominated by $G_\zeta$. The dominance power of a $k$-explanation $\mathcal{G}_\zeta$ is defined as
$$
\mathsf { D S } ( \mathscr { G } _ { \zeta } ) = \bigg | \bigcup _ { \substack { G _ { \zeta } \in \mathscr { G } _ { \zeta } } } \mathscr { D } ( G _ { \zeta } ) \bigg |
$$
Note that $\mathsf{DS}(\mathcal{G}_\zeta) \le |\zeta|$ for any explanation $\mathcal{G}_\zeta$.
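Dominance power is the size of the union of the members' dominance sets; a minimal sketch over an explicit pool (candidates again keyed by their score vectors, an illustrative simplification):

```python
def dominance_power(explanation, zeta):
    """DS: number of distinct candidates in zeta dominated by at least
    one member of the explanation."""
    def dom(a, b):  # a is dominated by b
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    # the set union avoids double-counting candidates dominated by
    # several members of the explanation
    return len({g for g in zeta if any(dom(g, e) for e in explanation)})

zeta = [(0.9, 0.2), (0.6, 0.5), (0.3, 0.9), (0.5, 0.4), (0.1, 0.8)]
print(dominance_power([(0.6, 0.5)], zeta))              # 1: only (0.5, 0.4)
print(dominance_power([(0.6, 0.5), (0.3, 0.9)], zeta))  # 2: adds (0.1, 0.8)
```

The second member covers a candidate the first cannot, which is exactly the coverage behavior that $\mathsf{EVAL}(\mathsf{SXQ}^k)$ maximizes.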
Query Evaluation. Given a skyline explanatory query $\mathsf{SXQ}^k = (G, \mathcal{M}, V_T, k, \Phi)$, the query evaluation problem, denoted as $\mathsf{EVAL}(\mathsf{SXQ}^k)$, is to find a $k$-explanation $\mathcal{G}_\zeta^{k*}$, such that
$$
\mathcal { G } _ { \zeta } ^ { k * } = \underset { \mathcal { G } _ { \zeta } \subseteq \zeta , | \mathcal { G } _ { \zeta } | \leq k } { \arg \operatorname* { m a x } } ~ \mathsf { D S } ( \mathcal { G } _ { \zeta } )
$$
# 3.3 Computational Complexity
We next investigate the hardness of evaluating skyline explanatory queries. To this end, we start with a verification problem.
Verification of Explanations. Given a query ${ \sf S X Q } = ( G , M , V _ { T } , \Phi )$ and a set of subgraphs $\mathcal { G }$ of $G$ , the verification problem is to decide if $\mathcal { G }$ is an explanation.
Theorem 1: The verification problem for SXQ is in $P .$
Proof sketch: Given an $\mathsf{SXQ} = (G, \mathcal{M}, V_T, \Phi)$ and a set of subgraphs $\mathcal{G}$ of $G$, we provide a procedure, denoted as Verify, that correctly determines if $\mathcal{G}$ is an explanation. The procedure checks, for each pair $(v, G_s)$ with $v \in V_T$ and $G_s \in \mathcal{G}$, if $G_s$ is a factual or a counterfactual explanation of $M(v, G)$. This check runs in PTIME, by invoking a polynomial-time inference process of $\mathcal{M}$ for $v$ over $G_s$ (for testing factual explanations) and over $G \setminus G_s$ (for testing counterfactual explanations), respectively [10, 45]. It outputs true if there exists a factual or counterfactual explanation for every $v \in V_T$, and false otherwise. □
While it is tractable to verify explanations, the evaluation of an SXQ is already nontrivial for $\left| \Phi \right| = 3$ , even for a constrained case that $\vert \zeta \vert$ is a polynomial of $| G |$ , i.e., there are polynomially many connected subgraphs to be explored in $G$ .
Theorem 2: EVAL $( \mathsf { S } \mathsf { X } \mathsf { Q } ^ { k } )$ is NP-hard even when $| \Phi | = 3$ and $\lvert \zeta \rvert$ is polynomially bounded by $| G |$ . □
Proof sketch: The hardness of the problem can be verified by constructing a polynomial-time reduction from the $k$-representative skyline selection problem ($k$-RSP) [32]. Given a set $S$ of data points, the problem is to compute a $k$-subset $S^*$ of $S$, such that (a) $S^*$ is a Pareto set and (b) $S^*$ maximizes the dominance score DS.
Given an instance of $k$-RSP with a set $S$, we construct an instance of $\mathsf{EVAL}(\mathsf{SXQ}^k)$ as follows. (1) For each data point $s \in S$, create a node $v_s$, and construct a distinct, single-edge tree $T_s$ rooted at $v_s$. Assign a ground truth label $l_s$ to each $v_s$. Let $G$ be the union of all the single-edge trees, and define $V_T$ as the set of root nodes of all such trees. (2) Duplicate $G$ as a training graph $G_T$ and train a GNN classifier $\mathcal{M}$ with $L \geq 1$ layers, which gives the correct outputs. For mainstream GNNs, the training cost is in PTIME [10]. Set $\Phi$ to be a set of functions, where each $\phi \in \Phi$ assigns the $i$-th value of a data point $s$ in the instance of $k$-RSP as the value of the $i$-th explainability measure of the matching node $v_s$, where $i \in [1, d]$ for $d$-dimensional data points in $k$-RSP. (3) Apply $\mathcal{M}$ to $G$ with $V_T$ as the test set. Given that $\mathcal{M}$ is fixed and deterministic, the inference ensures the invariance property [22] (it generates the same results for the isomorphic inputs $G$ and $G_T$). That is, $\mathcal{M}$ consistently and correctly assigns the ground truth label to each node $v_s$ in $G$. Recall that an explanatory subgraph is connected with a non-empty edge set (§ 2). This ensures that $T_s$ is the only factual explanation for each $v_s \in V_T$ in $G$. Each $T_s$ may vary in $\Phi$.
As $|\zeta|$ is in $O(f(|G|))$ for a polynomial function $f$, the above reduction is in PTIME. We can then show that there exists a $k$-representative skyline set for $k$-RSP, if and only if there exists a $k$-explanation as an answer for the constructed instance of $\mathsf{EVAL}(\mathsf{SXQ}^k)$. As $k$-RSP is NP-hard for 3-dimensional space with a known input dataset, $\mathsf{EVAL}(\mathsf{SXQ}^k)$ remains NP-hard for the case that $|\zeta|$ is polynomially bounded by $|G|$ and $|\Phi| = 3$. □
An Exact Algorithm. A straightforward algorithm evaluates $\mathsf{SXQ}^k$ with an exact optimal explanation. The algorithm first induces a subgraph $G^L$ with the edges within the $L$-hop neighbors of the nodes in $V_T$, where $L$ is the number of layers of the GNN $\mathcal{M}$. It then initializes the interpretable space $\zeta$ as all connected subgraphs in $G^L$, which can be done by invoking subgraph enumeration algorithms [27, 59]. It then enumerates $n$ Pareto sets from $\zeta$ and finds an optimal $k$-explanation. Although this algorithm correctly finds optimal explanations for GNNs, it is not practical for large $V_T$ and $G$: $n$ alone can already be $2^{deg^L}$ (the number of connected subgraphs in $G^L$), and the number of Pareto sets to be inspected can be $\binom{n}{k}$. We thus resort to approximate query processing for $\mathsf{SXQ}^k$, and present efficient algorithms that do not require enumeration to generate explanations that approximate optimal answers.
# 4 GENERATING SKYLINE EXPLANATIONS
# 4.1 Approximating Skyline Explanations
We introduce our first algorithm to approximately evaluate a skyline explanatory query. To characterize the quality of the answer, we introduce a notion of $\epsilon$ -explanation.
$\epsilon$ -explanations. Given explanatory measures $\Phi$ and an interpretable space $\zeta$ , we say that an explanatory subgraph $G _ { \zeta } \in \zeta$ is $\epsilon$ -dominated by another $G _ { \zeta } ^ { \prime } \in \zeta$ , denoted as $G _ { \zeta } \leq _ { \epsilon } G _ { \zeta } ^ { \prime }$ , if
$\circ$ for each measure $\phi \in \Phi$, $\phi(G_\zeta) \leq (1+\epsilon)\,\phi(G'_\zeta)$; and
$\circ$ there exists a measure $\phi^* \in \Phi$, such that $\phi^*(G_\zeta) \leq \phi^*(G'_\zeta)$.
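Relaxing exact dominance by the $(1+\epsilon)$ slack changes only the comparison predicate; a minimal sketch (bigger-is-better scores, names illustrative):

```python
def eps_dominated(phi_g, phi_g2, eps):
    """G is eps-dominated by G' if G' is within a (1+eps) factor on every
    measure and at least as good on some measure."""
    return (all(a <= (1 + eps) * b for a, b in zip(phi_g, phi_g2))
            and any(a <= b for a, b in zip(phi_g, phi_g2)))

# With eps = 0, the two vectors are incomparable; with eps = 0.1, trailing
# by less than 10% on the second measure no longer blocks dominance.
assert not eps_dominated((0.50, 0.42), (0.55, 0.40), 0.0)
assert eps_dominated((0.50, 0.42), (0.55, 0.40), 0.1)   # 0.42 <= 1.1 * 0.40
```

This slack is what lets an $\epsilon$-Pareto set remain small: near-ties collapse onto a single representative.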
Given a $k$ -skyline query $\mathsf { S X Q } ^ { k } \ : = \ : ( G , M , V _ { T } , \Phi )$ and an interpretable domain $\zeta$ w.r.t. $G$ , $\mathcal { M }$ , and $V _ { T }$ , an explanation $\mathcal { G } _ { \epsilon } \subseteq \zeta$ is an $( \zeta , \epsilon )$ -explanation w.r.t. 𝐺 , $\mathcal { M }$ , and $V _ { T }$ , if (1) $| \mathcal { G } _ { \epsilon } | \le k$ , and (2) for any explanatory subgraph $G _ { \zeta } \in \zeta$ , there is an explanatory subgraph $G _ { \zeta } ^ { \prime } \in \mathcal { G } _ { \epsilon }$ , such that ${ \cal G } _ { \zeta } \leq _ { \epsilon } { \cal G } _ { \zeta } ^ { \prime }$ .
For a $k$ -skyline query $\mathsf { S X Q } ^ { k }$ , a $( \zeta , \epsilon )$ -explanation $\mathcal { G } _ { \epsilon }$ properly approximates a $k$ -explanation $G _ { \zeta }$ as its answer in the interpretable domain $\zeta$ . Indeed, (1) $\mathcal { G } _ { \epsilon }$ has a bounded number $k$ of explanatory subgraphs as $G _ { \zeta }$ . (2) $\mathcal { G } _ { \epsilon }$ is, by definition, an $\epsilon$ -Pareto set of $\zeta$ . In multi-objective decision making, an $\epsilon$ -Pareto set has been an established notion as a proper size-bounded approximation for a Pareto optimal set (a $k$ -explanation $G _ { \zeta }$ , in our context) [48].
$( \alpha , \epsilon )$ -Approximations. Given a $k$ -skyline query $\mathsf { S X Q } ^ { k }$ $( G , M , V _ { T } , \Phi )$ , and an interpretable domain $\zeta \ w . r . t . \ G , \ M$ , and $V _ { T }$ , let $\mathcal { G } _ { \zeta } ^ { * }$ be the optimal $k$ -explanation answer for $\mathsf { S X Q } ^ { k }$ in $\zeta$ (see $\ S 3 . 2 \}$ . We say that an algorithm is an $( \alpha , \epsilon )$ -approximation for the problem EVAL $( \mathsf { S } \mathsf { X } \mathsf { Q } ^ { k } )$ w.r.t. $\zeta$ , if it ensures the following:
$\circ$ it correctly computes a $(\zeta, \epsilon)$-explanation $\mathcal{G}_\epsilon$;
$\circ$ $\mathsf{DS}(\mathcal{G}_\epsilon) \ge \alpha \, \mathsf{DS}(\mathcal{G}_\zeta^*)$; and
$\circ$ it takes time in $O(f(|\zeta|, |G|, \frac{1}{\epsilon}))$, where $f$ is a polynomial.
We present our main result below.
Theorem 3: There is a $(\frac{1}{4}, \epsilon)$-approximation for $\mathsf{EVAL}(\mathsf{SXQ}^k)$ w.r.t. $\zeta'$, where $\zeta'$ is the set of explanatory subgraphs verified by the algorithm. The algorithm computes a $(\zeta', \epsilon)$-explanation in time $O(|\zeta'| (\log \frac{r_\Phi}{\epsilon})^{|\Phi|} + L |G^L(V_T)|)$. □
Here, (1) $r_\Phi = \max_{\phi \in \Phi} \frac{\phi_u}{\phi_l}$, where each measure $\phi \in \Phi$ has a range $[\phi_l, \phi_u]$; (2) $G^L(V_T)$ refers to the set of all the $L$-hop neighbor subgraphs of the nodes in $V_T$; and (3) $L$ is the number of layers of the GNN $\mathcal{M}$. Note that in practice, $|\Phi|$, $L$, and $\epsilon \in [0, 1]$ are small constants, and the value of $r_\Phi$ is often small.
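To get a feel for the $(\log \frac{r_\Phi}{\epsilon})^{|\Phi|}$ term, the per-measure factor is the number of $(1+\epsilon)$-scale buckets, $\lfloor \log_{1+\epsilon} \frac{\phi_u}{\phi_l} \rfloor + 1$, multiplied across measures. A small numeric sketch (assuming, for simplicity, the same range for every measure):

```python
import math

def grid_cells(phi_l, phi_u, eps, num_measures):
    """Number of (1+eps)-grid cells the epsilon-Pareto maintenance may
    inspect: one log-scale bucket count per measure, multiplied out."""
    per_measure = math.floor(math.log(phi_u / phi_l, 1 + eps)) + 1
    return per_measure ** num_measures

# r_Phi = 10, eps = 0.1, three measures: 25 buckets per axis, 25^3 cells
print(grid_cells(0.1, 1.0, 0.1, 3))
```

Since $\log_{1+\epsilon} r_\Phi = \frac{\ln r_\Phi}{\ln(1+\epsilon)} \approx \frac{\log r_\Phi}{\epsilon}$ for small $\epsilon$, this count stays modest for the small $|\Phi|$ and mild $r_\Phi$ expected in practice.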
As a constructive proof of Theorem 3, we first introduce an approximation algorithm for $\mathsf{EVAL}(\mathsf{SXQ}^k)$, denoted as ApxSX-OP.
# 4.2 Approximation Algorithm
Our first algorithm, ApxSX-OP (illustrated as Algorithm 2), takes advantage of a data locality property: for a GNN $\mathcal{M}$ with $L$ layers and any node $v$ in $G$, its inference computes $M(v, G)$ using at most the $L$-hop neighbors of $v$ ($N^L(v)$) via message passing, regardless of how large $G$ is. Hence, it suffices to explore and verify connected subgraphs in the $L$-hop neighbor subgraph $G^L(V_T)$ (see § 2). In general, it interacts with three procedures:
(1) a Generator, which initializes and dynamically expands a potential interpretable domain $\zeta ^ { \prime }$ , by generating a sequence of candidate explanatory subgraphs (or simply a “candidate”) from $\boldsymbol { G ^ { L } } ( V _ { T } )$ ;
(2) a Verifier, which asserts if an input candidate $G _ { s }$ is an explanatory subgraph for $\mathsf { S X Q } ^ { k }$ ; and
(3) an Updater, which dynamically maintains a current size-$k$ $(\zeta', \epsilon)$-explanation $\mathcal{G}_\epsilon$ over the verified candidates $\zeta'$, upon the arrival of each verified explanatory subgraph from (2), along with other auxiliary data structures. The currently maintained explanation $\mathcal{G}_\epsilon$ is returned either upon termination (discussed below), or upon an ad-hoc request from the querier at any time.
Auxiliary structures. ApxSX-OP dynamically maintains the following structures to coordinate the procedures.
State Graph. ApxSX-OP coordinates the interaction of the Generator, Verifier and Updater via a state graph (simply denoted as $\zeta'$). Each node (a "state") $s \in \zeta'$ records a candidate $G_s$ and its local information, to be updated and used for evaluating $\mathsf{SXQ}^k$. There is a directed edge (a "transaction") $t = (s, s')$ in $\zeta'$ if $G_{s'}$ is obtained by applying a graph editing operator (e.g., edge insertion, edge deletion) to $G_s$. A path $\rho$ in the state graph $\zeta'$ consists of a sequence of transactions that results in a candidate.
In addition, each state $s$ is associated with (1) a score $\mathsf{DS}(G_s)$; (2) a coordinate $\Phi(s)$, where each entry records an explainability measure $\phi(G_s)$ ($\phi \in \Phi$); and (3) a variable-length bitvector $B(s)$, where an entry $B(s)[i]$ is 1 if $G_i \preceq_\epsilon G_s$, and 0 otherwise. The vector $B(s)$ bookkeeps the $\epsilon$-dominance relation between $G_s$ and the current candidates in $\zeta'$. The score $\mathsf{DS}(G_s)$ over $\zeta'$ can be readily obtained as the number of "1" entries in $B(s)$.
“Onion Peeling”. To reduce unnecessary verification, ApxSX-OP adopts a prioritized edge deletion strategy called "onion peeling". Given a node $v$ and its $L$-hop neighbor subgraph $G^L(v)$, it starts with an initial state $s_0$ that corresponds to $G^L(V_T)$, and iteratively removes edges from the "outermost" $L$-th hop "inwards" towards $v$ (via the Generator). This spawns a set of new candidates to be verified (by invoking the Verifier), and the verified explanatory subgraphs are processed by the Updater to maintain the explanation.
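The hop batches $E_L, \ldots, E_1$ that onion peeling consumes can be sketched with a BFS that labels each edge by its farther endpoint's hop distance (one reasonable convention; the function and variable names are illustrative, not the paper's implementation):

```python
from collections import deque

def hop_batches(adj, roots, L):
    """Group the edges of the L-hop neighborhood of `roots` by hop
    distance: batch L holds the outermost edges, deleted first."""
    dist = {r: 0 for r in roots}
    q = deque(roots)
    while q:                      # BFS out to L hops
        u = q.popleft()
        if dist[u] == L:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    batches = {l: set() for l in range(1, L + 1)}
    for u in dist:
        for w in adj[u]:
            if w in dist:
                hop = max(dist[u], dist[w])  # edge sits at its farther endpoint
                if 1 <= hop <= L:
                    batches[hop].add(frozenset({u, w}))
    return batches

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
b = hop_batches(adj, [0], 2)   # path graph, root 0, L = 2
```

On this path graph the outermost batch `b[2]` contains only edge (1, 2), which is peeled before the hop-1 edge (0, 1).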
# Procedure 1: updateSX

Input: a state $s$, a candidate $G_s$, state graph $\zeta'$, explanation $\mathcal{G}_\epsilon$;
Output: updated $(\zeta', \epsilon)$-explanation $\mathcal{G}_\epsilon$.
1: initialize state $s$ with structures $\mathsf{DS}(s)$, $B(s) := \emptyset$, $\Phi(G_s) := \emptyset$;
2: evaluate $\Phi(G_s)$;
3: incrementally determine the $(1+\epsilon)$-dominance of $G_s$;
4: update $B(s)$ and $\mathsf{DS}(s)$;
5: if $\{G_s\}$ is a new skyline explanation then # New skyline
6: if $|\mathcal{G}_\epsilon| < k$ then
7: $\mathcal{G}_\epsilon := \mathcal{G}_\epsilon \cup \{G_s\}$;
8: else
9: $\mathcal{G}_\epsilon := \mathsf{swap}(\mathcal{G}_\epsilon, s)$;
10: return $\mathcal{G}_\epsilon$.
# Algorithm 2: ApxSX-OP

Input: a query $\mathsf{SXQ}^k = (G, \mathcal{M}, V_T, \Phi)$; a constant $\epsilon \in [0, 1]$;
Output: a $(\zeta', \epsilon)$-explanation $\mathcal{G}_\epsilon$.
1: set $\mathcal{G}_\epsilon := \emptyset$;
2: identify the edges of each hop: $\mathcal{E} = \{E_L, E_{L-1}, \ldots, E_1\}$;
3: for $l = L$ to 1 do # Generator: Onion Peeling
4: initialize state $s_0 = G^l(V_T)$ (the $l$-hop neighbor subgraph);
5: while $E_l \neq \emptyset$ do
6: for $e \in E_l$ do
7: spawn a state $s$ with a candidate $G_s := G^l \setminus \{e\}$;
8: update $\zeta'$ with state $s$ and a new transaction $t$;
9: if $\mathsf{vrfyF}(s) = \mathsf{False}$ and $\mathsf{vrfyCF}(s) = \mathsf{False}$ then # Verifier
10: continue;
11: $\mathcal{G}_\epsilon := \mathsf{updateSX}(s, G_s, \zeta', \mathcal{G}_\epsilon)$; $E_l := E_l \setminus \{e\}$; # Updater
12: return $\mathcal{G}_\epsilon$.
This strategy has several advantages. (1) Observe that $\mathcal{M}(G^L(v), v) = \mathcal{M}(G, v)$ due to data locality. Intuitively, explanatory subgraphs are more likely to be discovered earlier by starting from candidates with a small difference from $G^L(v)$, which is by itself a factual explanatory subgraph. (2) The strategy fully exploits the connectivity of $G^L(v)$ to ensure that the Generator produces only connected candidates that include $v$, over which DS, $\Phi$, and dominance are well defined. In addition, the process enables early detection and skipping of non-dominating candidates (see "Optimization").
Algorithm. Algorithm ApxSX-OP dynamically maintains the state graph $\zeta'$. It first induces and verifies the $L$-hop neighbor subgraph $G^L(V_T)$, and initializes the state node $s_0$ (w.r.t. $G_{s_0}$) with $G^L(V_T)$ and its local information. It then induces $L$ batches of edge sets $E_i$, $i \in [1, L]$, from $G^L(V_T)$ for onion peeling. For each "layer" $E_l$, $1 \le l \le L$ (line 3), the Generator procedure iteratively selects a next edge $e$ to be removed from the current layer and generates a new candidate $G_{s'}$ by removing $e$ from $G_s$, spawning a new state $s'$ in $\zeta'$ with a new transaction $t = (s, s')$. Following this procedure, we obtain a "stream" of states to be verified. Each candidate is then processed by the Verifier procedures, vrfyF and vrfyCF, which test if $G_s$ is factual or counterfactual, respectively (lines 9-10). If $G_s$ passes the test, the associated state $s \in \zeta'$ is processed by invoking the Updater procedure updateSX, in which the coordinate $\Phi(s)$, the $(1+\epsilon)$-dominance relation (encoded in $B(s)$), and $\mathsf{DS}(s)$ are incrementally updated. updateSX then incrementally maintains the current explanation $\mathcal{G}_\epsilon$ with the newly verified explanatory subgraph $G_s$, following a replacement strategy (see Procedure updateSX). The processed edge $e$ is then removed from $E_l$ (line 11).
Example 4: Consider the example in Figure 2. Algorithm ApxSX-OP starts the generation of explanatory subgraphs within the 2-hop subgraph $s_0$, by deleting one edge from the set of hop-2 edges, i.e., $e_1$, $e_2$, and $e_3$. This spawns three states $s_1$, $s_2$, and $s_3$ to be verified and evaluated. It chooses $s_3$ as the next state to explore, which leads to the states $s_4$ and $s_5$, in response to the deletion of $e_1$ and $e_2$, respectively. It continues to verify states $s_4$ and $s_5$. As $s_4$ fails the verification, i.e., it is neither factual nor counterfactual, it continues to verify $s_5$. This gives a current answer set that contains $\{s_3, s_5\}$. □
Procedure updateSX. For each newly verified explanatory subgraph $G_s$ (at state $s$), updateSX updates its information by (1) computing the coordinate $\Phi(G_s)$, and (2) incrementally determining if $G_s$ is likely to join a skyline explanation in terms of $(1+\epsilon)$-dominance, i.e., whether any verified explanatory subgraph $G_{s'}$ in $\zeta'$ satisfies $G_{s'} \leq_\epsilon G_s$ (to be discussed). If so, and if the current explanation $\mathcal{G}_\epsilon$ has size smaller than $k$, $G_s$ is directly added to $\mathcal{G}_\epsilon$ (lines 5-7). Otherwise, updateSX performs a swap operation as follows: 1) identify the explanatory subgraph $\overline{s} \in \mathcal{G}_\epsilon$ with the smallest dominance set $\mathcal{D}(\overline{s})$; 2) replace $\overline{s}$ with $G_s$ only when the replacement yields a new explanation $\mathcal{G}'_\epsilon$ whose score $\mathsf{DS}(\mathcal{G}'_\epsilon)$ is larger by a factor of at least $1 + \frac{1}{k}$ (lines 8-9).
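The swap policy can be sketched as follows, with candidates represented by their score vectors and dominance power computed over an explicit pool $\zeta'$ (an illustrative simplification of the bitvector bookkeeping; all names are hypothetical):

```python
def dom(a, b):  # a is dominated by b
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def ds(expl, zeta):
    """Dominance power of an explanation over the candidate pool zeta."""
    return len({g for g in zeta if any(dom(g, e) for e in expl)})

def try_swap(expl, cand, zeta, k):
    """Greedy replacement: with |expl| = k, swap cand in only if the best
    resulting explanation improves DS by a factor of at least 1 + 1/k."""
    if len(expl) < k:
        return expl + [cand]
    best = max((expl[:i] + [cand] + expl[i + 1:] for i in range(k)),
               key=lambda e: ds(e, zeta))
    return best if ds(best, zeta) >= (1 + 1.0 / k) * ds(expl, zeta) else expl

zeta = [(0.9, 0.2), (0.6, 0.5), (0.3, 0.9), (0.5, 0.4), (0.2, 0.3), (0.1, 0.8)]
expl = [(0.9, 0.2), (0.6, 0.5)]              # k = 2, DS = 2
expl = try_swap(expl, (0.3, 0.9), zeta, 2)   # DS grows to 3 >= (3/2) * 2: swap
```

Here replacing $(0.9, 0.2)$ raises DS from 2 to 3, meeting the $1 + \frac{1}{k} = \frac{3}{2}$ threshold, so the swap is admitted; a smaller gain would be rejected, which is the discipline behind the $\frac{1}{4}$-approximation of Lemma 5.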
Update Dominance Relations. Procedure updateSX maintains a set of skyline explanatory subgraphs $\overline{\mathcal{G}}_s$ (not shown) in terms of $(1+\epsilon)$-dominance. The set $\overline{\mathcal{G}}_s$ is efficiently derived from the bitvectors $B(s)$ of the states in $\zeta'$, which compactly encode a lattice structure of $(1+\epsilon)$-dominance as a directed acyclic graph; $\overline{\mathcal{G}}_s$ refers to the states with no "parent" in the lattice, i.e., those whose explanatory subgraphs are not $(1+\epsilon)$-dominated by any others so far. We present the details in the full version [2].
Example 5: Recall Example 4, where the explanation space $\zeta'$ includes $\{s_1, s_2, s_3, s_5\}$. $(1+\epsilon)$-dominance is tracked by a dynamically maintained scoring table (top-right of Figure 2). As the sequence of states $s_1$, $s_2$, $s_3$, $s_5$ is generated, the first two states are verified to form a Pareto set, and are added to the explanation set $\{s_1, s_2\}$. Upon the arrival of $s_3$, since it introduces an improvement of less than a factor of $1 + \frac{1}{k} = \frac{3}{2}$, procedure updateSX skips $s_3$. As $s_5$ $(1+\epsilon)$-dominates $s_1$ and $s_3$, updateSX replaces $s_1$ with $s_5$ and updates the explanation set to $\{s_2, s_5\}$. □
Explainability. Algorithm ApxSX-OP always terminates, as it monotonically removes edges from $G^L(V_T)$ and verifies a finite set of candidates. To see its quality guarantees, we present the results below.
Lemma 4: Given a constant $\epsilon$, ApxSX-OP correctly computes a $(\zeta', \epsilon)$-explanation of size at most $k$ defined on the interpretable domain $\zeta'$, which contains all verified candidates. □
Proof sketch: We show the above result with a reduction to the multi-objective shortest path problem (MOSP) [53]. Given an edge-weighted graph $G_w$, where each edge carries a $d$-dimensional attribute vector $e_w.c$, MOSP computes a Pareto set of paths from a start node $u$. The cost of a path $\rho_w$ in $G_w$ is defined as $\rho_w.c = \sum_{e_w \in \rho_w} e_w.c$. The dominance relation between two paths is determined by the dominance relation of their cost vectors. Our reduction (1) constructs $G_w$ as the running state graph $\zeta'$ with $n$ verified states and transactions; and (2) for each edge $(s, s')$, sets an edge weight $e_w = \Phi(s) - \Phi(s')$. Given a solution $\Pi_w$ of the above instance of MOSP, for each path $\rho_w \in \Pi_w$, we take the corresponding path $\rho$ in $\zeta'$ that ends at a state $\rho_s$, and add $G_{\rho_s}$ to $\mathcal{G}_\epsilon$. We can verify that $\Pi_w$ is an $\epsilon$-Pareto set of paths in $G_w$, if and only if $\mathcal{G}_\epsilon$ is a $(\zeta', \epsilon)$-explanation of $\zeta'$. We then show that ApxSX-OP performs a simpler process than the algorithm in [53], which ensures that it generates $\mathcal{G}_\epsilon$ as a $(\zeta', \epsilon)$-explanation. □
Figure 2: Illustration of Onion Peeling ($L = 2$, $k = 2$). $\zeta' = \{s_1, s_2, s_3, s_5\}$. A $(\zeta', \epsilon)$-explanation $\mathcal{G}_\epsilon = \{s_2, s_5\}$ with $\mathsf{DS} = 4$.
Lemma 5: ApxSX-OP correctly computes a $(\zeta', \epsilon)$-explanation $\mathcal{G}_\epsilon$ that ensures $\mathsf{DS}(\mathcal{G}_\epsilon) \ge \frac{1}{4} \mathsf{DS}(\mathcal{G}_\epsilon^*)$, where $\mathcal{G}_\epsilon^*$ is the size-$k$ $(\zeta', \epsilon)$-explanation over $\zeta'$ with maximum dominance power $\mathsf{DS}(\mathcal{G}_\epsilon^*)$. □
Proof sketch: Consider procedure updateSX upon the arrival, at any time, of a new verified candidate $G_s$. (1) The above result clearly holds when $|\zeta'| \le k$ or $|\mathcal{G}_\epsilon| < k$, as $\mathcal{G}_\epsilon$ is the only $(\zeta', \epsilon)$-explanation so far. (2) When $|\mathcal{G}_\epsilon| = k$, we reduce the approximate evaluation of $\mathsf{SXQ}^k$ to an instance of the online MAX $k$-SET COVERAGE problem [3], which maintains a size-$k$ set cover with maximized weight. We show that updateSX adopts a greedy replacement policy, replacing a candidate in $\mathcal{G}_\epsilon$ with the new candidate $G_s$ only when this leads to a $(1 + \frac{1}{k})$-factor improvement in DS. Following this replacement policy consistently, updateSX ensures a $\frac{1}{4}$-approximation ratio for $(\zeta', \epsilon)$-explanations in $\zeta'$ [3]. □
Time Cost. As Algorithm ApxSX-OP processes candidates generated from a stream online, we provide its time cost in an output-sensitive manner, in terms of the interpretation domain $\zeta^{\prime}$ upon termination. ApxSX-OP first induces the $L$-hop neighbor subgraph of $V_T$, in $O(|G^L(V_T)|)$ time. ApxSX-OP processes in total $|\zeta^{\prime}|$ candidates. For each candidate $G_s$, the Verifier process verifies $G_s$ for SXQ (procedures vrfyF and vrfyCF), which incurs two inferences of the GNN $\mathcal{M}$, in total $O(L|G^L(V_T)|)$ time (assuming the number of node features is small; cf. Lemma 1 and [10, 45]). The total verification cost is thus in $O(|\zeta^{\prime}| L |G^L(V_T)|)$ time. For each verified explanatory subgraph $G_s$, the Updater procedure updateSX follows a simplified process to update a $(1+\epsilon)$-Pareto set in $\zeta^{\prime}$ by solving a multi-objective dominating paths problem in the state graph $\zeta^{\prime}$, which takes at most $\prod_{i=1}^{|\Phi|}\left(\left\lfloor \log_{1+\epsilon} \frac{\phi_u}{\phi_l}\right\rfloor + 1\right)$ time. Here $\phi_l$ and $\phi_u$ refer to the minimum and maximum values an explainability measure $\phi \in \Phi$ takes over the verified candidates in $\zeta^{\prime}$. Given that $\epsilon$ is small, $\log(1+\epsilon) \approx \epsilon$, hence the total maintenance cost is in $O\left(|\zeta^{\prime}| \cdot \left(\frac{\log r_{\Phi}}{\epsilon}\right)^{|\Phi|}\right)$ time, with $r_{\Phi} = \frac{\phi_u}{\phi_l}$. Putting these together, the total cost is in $O\left(|\zeta^{\prime}| \left(\frac{\log r_{\Phi}}{\epsilon}\right)^{|\Phi|} + L|G^L(V_T)|\right)$ time.
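To make the maintenance bound concrete, the number of $(1+\epsilon)$-grid cells in the product above can be computed directly. This is a toy check with assumed measure ranges, not part of the algorithm itself:

```python
import math

def grid_cells(phi_ranges, eps):
    """Count the (1+eps)-grid cells of the Pareto-set maintenance:
    product over measures of floor(log_{1+eps}(phi_u / phi_l)) + 1."""
    n = 1
    for lo, hi in phi_ranges:
        n *= math.floor(math.log(hi / lo, 1 + eps)) + 1
    return n

# Two measures, each spanning a value ratio of 10, eps = 0.1:
cells = grid_cells([(0.1, 1.0), (1.0, 10.0)], eps=0.1)   # 25 * 25 = 625
```

Each measure with ratio $r_\Phi = 10$ contributes $\lfloor \log_{1.1} 10 \rfloor + 1 = 25$ cells, matching the $(\log r_\Phi / \epsilon)^{|\Phi|}$ growth in the bound.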
Putting the above analysis together, Theorem 3 follows.
Optimization. To further reduce verification cost, ApxSX-OP uses two optimization strategies, as outlined below.
Candidate Prioritization. ApxSX-OP adopts an edge prioritization heuristic to favor promising candidates with a small loss of DS that are more likely to be skyline explanations (Algorithm ApxSX-OP, line 7). It ranks each transition $t = (s, s^{\prime})$ in $\zeta$ based on an estimated loss of DS obtained by estimating $\Phi(s^{\prime})$. The cost vector of each transition is aggregated into an average weight over the dimensions, i.e., $w(t) = \frac{1}{|\Phi|}\sum_{i=1}^{|\Phi|}(\phi_i(s) - \phi_i(s^{\prime}))$. The candidates from spawned states with the smallest loss of DS are preferred. This helps early convergence to high-quality explanations, and also promotes early detection of non-dominating candidates (see "Early Pruning").
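The ranking step might look as follows (a hedged sketch with hypothetical state names; `phi` is assumed to map a state to its tuple of measure scores):

```python
def rank_transitions(transitions, phi):
    """Order transitions t = (s, s2) by the average estimated DS loss
    w(t) = mean_i(phi_i(s) - phi_i(s2)); smallest loss first."""
    def w(t):
        s, s2 = t
        ps, ps2 = phi(s), phi(s2)
        return sum(a - b for a, b in zip(ps, ps2)) / len(ps)
    return sorted(transitions, key=w)

# Toy measure scores per state:
scores = {"a": (0.9, 0.8), "b": (0.7, 0.8), "c": (0.2, 0.1)}
ranked = rank_transitions([("a", "c"), ("a", "b")], scores.__getitem__)
# ("a", "b") comes first: average loss 0.1 vs. 0.7 for ("a", "c")
```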
Early Pruning. ApxSX-OP also exploits a monotonicity property of the measures $\phi \in \Phi$ to determine early the non-$\epsilon$-dominance of an explanatory subgraph, even before its measures are computed. Given $\zeta^{\prime}$ and a measure $\phi \in \Phi$, we say $\phi$ is monotonic w.r.t. a path $\rho$ in $\zeta^{\prime}$, if for a state $s$ with candidate $G_s$ and another state $s^{\prime}$ with a subgraph $G_{s^{\prime}}$ of $G_s$ on the same path $\rho$, $\phi_l(G_s) \geq \frac{\phi_u(G_{s^{\prime}})}{1+\epsilon}$, where $\phi_l(G_s)$ (resp. $\phi_u(G_{s^{\prime}})$) is a lower bound estimation of $\phi(G_s)$ (resp. an upper bound estimation of $\phi(G_{s^{\prime}})$), i.e., $\phi_l \le \phi_l(G_s) \le \phi(G_s)$ (resp. $\phi(G_{s^{\prime}}) \leq \phi_u(G_{s^{\prime}}) \leq \phi_u$). By definition, for any $\phi$ with the monotonicity property, we have $\phi(G_{s^{\prime}}) \leq \phi_u(G_{s^{\prime}}) \leq (1+\epsilon)\phi_l(G_s) \leq (1+\epsilon)\phi(G_s)$, hence $G_{s^{\prime}} \leq_{\epsilon} G_s$, and any such subgraph $G_{s^{\prime}}$ of $G_s$ can be safely pruned due to non-$(1+\epsilon)$-dominance determined by $\phi$ alone, without further verification. Explainability measures such as density, total influence, conciseness, and diversity of embeddings are likely to be monotonic. Hence, the onion-peeling strategy enables early pruning by exploiting the inherent or estimated ranges of such measures. Note that the above property is checkable in $O(|\zeta^{\prime}|)$ time.
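The pruning test itself reduces to a single bound comparison. In this sketch the bound estimates $\phi_l(G_s)$ and $\phi_u(G_{s'})$ are assumed to be given:

```python
def can_prune(phi_l_Gs, phi_u_Gs2, eps):
    """A subgraph G_s' of G_s can be pruned via one measure phi alone
    when phi_l(G_s) >= phi_u(G_s') / (1 + eps): then
    phi(G_s') <= (1 + eps) * phi(G_s), i.e. G_s' is eps-dominated."""
    return phi_l_Gs >= phi_u_Gs2 / (1 + eps)

assert can_prune(0.5, 0.52, eps=0.05)       # 0.52 / 1.05 ~= 0.495 <= 0.5
assert not can_prune(0.5, 0.60, eps=0.05)   # 0.60 / 1.05 ~= 0.571 >  0.5
```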
Alternative Strategy: Edge Growing. As the end user may want early termination and compact, smaller-sized explanations, we also outline a variant of ApxSX-OP, denoted as ApxSX-I. It follows the same Verifier and Updater procedures, yet uses a different Generator procedure that starts with a single node $v$ and inserts edges to grow a candidate, level by level, up to its $L$-hop neighbor subgraph. ApxSX-I remains a $\left(\frac{1}{4}, \epsilon\right)$-approximation for EVAL$(\mathsf{SXQ}^k)$, and does not incur additional time cost compared with ApxSX-OP. We present a detailed analysis in [2].
# 5 DIVERSIFIED SKYLINE EXPLANATIONS
A skyline explanation may still contain explanatory subgraphs with many similar nodes. This may lead to redundant and biased explanations, as remarked earlier. Intuitively, one also prefers the explanatory subgraphs to contain various types of nodes that clarify the model output from a more comprehensive view when inspected. We next investigate diversified SXQ evaluation.
Diversification function. Given a query $\mathsf{SXQ}^k = (G, M, V_T, k, \Phi)$, the diversified evaluation problem, denoted as DivEVAL$(\mathsf{SXQ}^k)$, is to find a $(\zeta^{\prime}, \epsilon)$-explanation $\mathcal{G}_{\epsilon}$, such that
$$
\mathcal{G}_{\epsilon}^{*} = \operatorname*{argmax}_{\mathcal{G}_{\epsilon} \subseteq \zeta^{\prime}, |\mathcal{G}_{\epsilon}| \leq k} \mathsf{DivS}(\mathcal{G}_{\epsilon})
$$
where $\mathsf{DivS}(\mathcal{G}_{\epsilon})$ is a diversification function defined on $\mathcal{G}_{\epsilon}$ to quantify its overall diversity. Below we introduce such a function.
As remarked earlier, we quantify the diversity of an explanation with a bi-criteria function, in terms of both neighborhood coverage and the difference between node representations, defined as
$$
\mathsf { D i v S } ( \mathcal { G } _ { \epsilon } ) = \alpha \cdot \mathsf { N C S } ( \mathcal { G } _ { \epsilon } ) + ( 1 - \alpha ) \cdot \sum _ { G _ { s } , G _ { s ^ { \prime } } \in \mathcal { G } _ { \epsilon } } \mathsf { C D } ( G _ { s } , G _ { s ^ { \prime } } )
$$
Here, (1) NCS, a node coverage measure, aggregates the node coverage of the explanatory subgraphs in $\mathcal{G}_{\epsilon}$; and (2) CD, an accumulated difference measure, aggregates the pairwise differences between explanatory subgraphs. The two terms are balanced by a constant $\alpha$. Specifically, we adopt a node coverage function as
$$
\mathsf{NCS}(\mathcal{G}_{\epsilon}) = \frac{|\bigcup_{G_{s} \in \mathcal{G}_{\epsilon}} V_{G_{s}}|}{|V_{G^{L}}|};
$$
and for graph differences, we define CD as the cosine distance between the embeddings of two explanatory subgraphs:
$$
\mathrm { C D } ( G _ { s } , G _ { s ^ { \prime } } ) = 1 - \frac { \mathbf { x } _ { G _ { s } } \cdot \mathbf { x } _ { G _ { s ^ { \prime } } } } { | | \mathbf { x } _ { G _ { s } } | | _ { 2 } \cdot | | \mathbf { x } _ { G _ { s ^ { \prime } } } | | _ { 2 } }
$$
Here, $\mathbf { x } _ { G _ { s } }$ is the embedding of $G _ { s }$ obtained by graph representation models such as Node2Vec [23].
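The diversification score can be computed directly from these definitions. The following is an illustrative sketch: node sets, embeddings, and $\alpha$ are toy values, and in practice the embeddings would come from a model such as Node2Vec:

```python
import math

def cosine_distance(x, y):
    """CD(G_s, G_s'): 1 - cosine similarity of two embeddings."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

def div_s(node_sets, embeddings, n_total, alpha):
    """DivS = alpha * NCS + (1 - alpha) * sum of pairwise CDs."""
    covered = set().union(*node_sets)
    ncs = len(covered) / n_total          # NCS: fraction of covered nodes
    cd = sum(cosine_distance(embeddings[i], embeddings[j])
             for i in range(len(embeddings))
             for j in range(i + 1, len(embeddings)))
    return alpha * ncs + (1 - alpha) * cd

# Two explanatory subgraphs over a 6-node L-hop neighborhood, with
# orthogonal (toy) embeddings:
score = div_s([{1, 2}, {2, 3}], [(1.0, 0.0), (0.0, 1.0)],
              n_total=6, alpha=0.5)       # 0.5*0.5 + 0.5*1.0 = 0.75
```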
Diversification Algorithm. We next outline our diversified evaluation algorithm, denoted as DivSX (pseudo-code shown in [2]). It follows ApxSX-OP and adopts the onion-peeling strategy to generate subgraphs in a stream. The difference is that, when computing the $k$-skyline explanation, it applies a new admission strategy: a candidate is included only when its marginal gain for the diversification function DivS(·) exceeds a threshold determined by the score of the current solution DivS$(\mathcal{G}_{\epsilon})$. DivSX terminates when a first $k$-skyline explanation is found.
Procedure updateDivSX. For each new candidate $G_s$ (state $s$), updateDivSX first initializes and updates $G_s$ by (1) computing its coordinates $\Phi(G_s)$, and (2) incrementally determining if $G_s$ is a skyline explanation in terms of $(1+\epsilon)$-dominance, i.e., if for any verified explanatory subgraph $G_{s^{\prime}}$ in $\zeta^{\prime}$, $G_{s^{\prime}} \leq_{\epsilon} G_s$. If so, it checks 1) whether the current explanation $\mathcal{G}_{\epsilon}$ has size smaller than $k$; and 2) whether the marginal gain of $G_s$ is bigger than $\frac{(1+\epsilon)/2 - \mathrm{DivS}(\mathcal{G}_{\epsilon})}{k - |\mathcal{G}_{\epsilon}|}$. If $G_s$ satisfies both conditions, updateDivSX adds it to the $k$-skyline explanation $\mathcal{G}_{\epsilon}$.
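A minimal sketch of this admission test follows, assuming a DivS normalized into $[0, 1]$ (modeled here as coverage of a 10-node neighborhood) and $\epsilon = 0$; names are hypothetical:

```python
def admit(G_eps, cand, k, divs, eps=0.0):
    """Sieve-style admission: add `cand` while |G_eps| < k and its
    marginal gain exceeds ((1 + eps)/2 - DivS(G_eps)) / (k - |G_eps|)."""
    if len(G_eps) >= k:
        return G_eps
    gain = divs(G_eps + [cand]) - divs(G_eps)
    threshold = ((1 + eps) / 2 - divs(G_eps)) / (k - len(G_eps))
    return G_eps + [cand] if gain > threshold else G_eps

# Toy DivS: fraction of 10 nodes covered by the selected subgraphs.
divs = lambda S: len(set().union(set(), *S)) / 10
G = admit([], {1, 2, 3, 4}, k=2, divs=divs)   # gain 0.4 > 0.25: admitted
G = admit(G, {1}, k=2, divs=divs)             # gain 0.0 <= 0.1: rejected
```

The redundant candidate `{1}` adds no coverage and is filtered out, which is exactly the behavior that keeps the diversified explanation non-redundant.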
Quality and Approximability Guarantee. Algorithm DivSX always terminates as it constantly removes edges from $G ^ { L } ( V _ { T } )$ to explore and verify a finite set of candidates. To see the quality guarantees and approximability, we show two results below.
Lemma 6: Given a constant 𝜖, DivSX correctly computes a $( \zeta ^ { \prime } , \epsilon )$ - explanation of size $k$ defined on the interpretation domain $\zeta ^ { \prime }$ , which contains all verified candidates. □
The proof is similar to that of Lemma 4 and is omitted here.
Theorem 7: DivSX correctly computes a $( \zeta ^ { \prime } , \epsilon )$ -explanation $\mathcal { G } _ { \epsilon }$ that ensures $\begin{array} { r } { \mathsf { D i v S } ( \mathcal { G } _ { \epsilon } ) \geq ( \frac { 1 } { 2 } - \epsilon ) \mathsf { D i v S } ( \mathcal { G } _ { \epsilon } ^ { * } ) } \end{array}$ , where ${ \mathcal { G } } _ { \epsilon } ^ { * }$ is the size $k$ $( \zeta ^ { \prime } , \epsilon )$ - explanation over $\zeta ^ { \prime }$ with maximum diversity power DivS $( \mathcal { G } _ { \epsilon } ^ { * } )$ . □
Proof sketch: We can verify that $\mathsf{NCS}(\cdot)$ and $\mathrm{CD}(\cdot)$ are submodular functions. Consider procedure updateDivSX upon the arrival, at any time, of a new verified candidate $G_s$. Given that $|\mathcal{G}_{\epsilon}| \le k$ is a hard constraint, we reduce the approximate diversified evaluation of $\mathsf{SXQ}^k$ to an instance of the Streaming Submodular Maximization problem [4], which maintains a size-$k$ set that optimizes a submodular function over a stream of data objects. DivSX adopts a greedy increment policy, including the new candidate $G_s$ in $\mathcal{G}_{\epsilon}$ only when this leads to a marginal gain greater than $\frac{(1+\epsilon)/2 - f(S)}{k - |S|}$, where $S$ is the current set and $f(\cdot)$ is a submodular function. This is consistent with an increment policy that ensures a $\left(\frac{1}{2} - \epsilon\right)$-approximation in [4]. □
Time Cost. Algorithm DivSX follows the same process as ApxSX-OP, and its update time per candidate is $O(1)$ [4]. Therefore, following the analysis of ApxSX-OP, the time cost of DivSX is also $O\left(|\zeta^{\prime}| \left(\frac{\log r_{\Phi}}{\epsilon}\right)^{|\Phi|} + L|G^L(V_T)|\right)$.
# 6 EXPERIMENTAL STUDY
We conduct experiments to evaluate the effectiveness, efficiency, and scalability of our solutions. Our algorithms are implemented in Python 3.10.14 using the PyTorch-Geometric framework. All experiments are conducted on a Linux system equipped with an AMD Ryzen 9 5950X CPU, an NVIDIA GeForce RTX 3090, and 32 GB of RAM. Our code and data are made available at [1].
# 6.1 Experimental Setup
Datasets. We use Cora [37], PubMed [49], FacebookPage [46], AmazonComputer [51], and OGBN_arxiv [25] (Table 3). (1) Both Cora and PubMed are citation networks with a set of papers (nodes) and their citation relations (edges). Each node has a feature vector encoding the presence of keywords from a dictionary. For both, we consider a node classification task that assigns a paper category to each node. (2) In FacebookPage, the nodes represent verified Facebook pages, and edges are mutual "likes". The node features are extracted from the site descriptions. The task is multi-class classification, which assigns one of four site categories (politicians, governmental organizations, television shows, and companies) to each page. (3) AmazonComputer is a network of Amazon products. The nodes represent "Computer" products, and an edge between two products encodes that they are co-purchased by the same customer. The node features are product reviews as bags-of-words. The task is to classify the product categories. (4) OGBN_arxiv is a citation network of Computer Science papers. Each paper comes with a 128-dimensional feature vector obtained by word embeddings from its title and abstract. The task is to classify the subject areas.
Table 3: Statistics of datasets
GNN Classifiers. We employ three mainstream GNNs: (1) Graph convolutional networks (GCN) [28], one of the classic message-passing GNNs; (2) Graph attention networks (GAT) [54], which leverage attention mechanisms to dynamically weigh the importance of a node's neighbors during inference; and (3) Graph isomorphism networks (GIN) [57], with expressive power up to the Weisfeiler-Lehman (WL) graph isomorphism test.
GNN Explainers. We have implemented the following.
(1) Our skyline explanation query evaluation methods include two approximation algorithms, ApxSX-OP (§ 4.2) and ApxSX-I (§ 4.2), and the diversification algorithm DivSX (§ 5).
(2) GNNExplainer is a learning-based method that outputs masks for edges and node features by maximizing the mutual information between the probabilities predicted on the original and masked graph [60]. We induce explanatory subgraphs from the masks.
(3) PGExplainer learns edge masks to explain the GNNs. It trains a multilayer perceptron as the mask generator based on the learned features of the GNNs that require explanation. The loss function is defined in terms of mutual information [35].
(4) ${ \mathsf { C F } } ^ { 2 }$ is a GNN explainer that optimizes a linear function of weighted factual and counterfactual measures. It learns feature and edge masks, producing effective and simple explanations [52].
(5) MOExp is a model-agnostic, bi-objective optimization framework that finds Pareto-optimal explanations, striking a balance between "simulatability" (factual) and "counterfactual relevance" [33].
We compare our methods with alternative explainers (§1). GNNExplainer generates a single factual explanatory subgraph, while PGExplainer emphasizes concise and factual explanations; ${\mathsf{CF}}^2$ and MOExp optimize explanatory subgraphs based on both factuality and counterfactuality. GNNExplainer, PGExplainer, and ${\mathsf{CF}}^2$ return only one explanatory subgraph, while MOExp does not control the number of explanatory subgraphs and could return almost 200 of them in our empirical study, which are hard to inspect in practice. In contrast, users can conveniently set a bound $k$ to obtain size-bounded skyline explanations by issuing an SXQ that wraps an explanation configuration.
Evaluation Metrics. For all datasets and GNNs, we select three common explainability measures $(\Phi)$: fdl$^+$, fdl$^-$, and conc. As most GNN explainers are not designed to generate explanations for multiple explainability measures, for a fair comparison we employ three quality indicators (QIs) [7, 31, 38, 58]. These quality indicators are widely used to measure how good each result is in a multi-objective manner. Consider a set of $n$ GNN explainers, where each explainer $A$ reports a set of explanatory subgraphs $\mathcal{G}_A$.
(1) QI-1: Integrated Preference Function (IPF) [8]. The IPF score unifies and compares the quality of non-dominated solution sets with a weighted linear sum function. We define a normalized IPF of an explanation $\mathcal{G}_A$ from each GNN explainer $A$ with a normalized single-objective score:
$$
\mathsf { \Pi } \mathsf { \Pi } \mathsf { P } \mathsf { F } ( \mathcal { G } _ { A } ) = \frac { 1 } { | \mathcal { G } _ { A } | \cdot | \Phi | } \sum _ { G \in \mathcal { G } _ { A } } \sum _ { \phi \in \Phi } \phi ( G )
$$
(2) QI-2: Inverted Generational Distance (IGD) [17, 31], one of the most commonly used distance-based QIs. It measures the distance from each solution to a reference set that contains top data points with theoretically achievable "ideal" values. We introduce IGD for explanations as follows. (a) We define a universal space $\mathcal{G} = \bigcup_{i \in [1, n]} \mathcal{G}_i$ from all the participating GNN explainers, and for each explainability measure $\phi \in \Phi$, induce a reference set $\mathcal{G}_{\phi}^{k} \subseteq \mathcal{G}$ with the explanatory subgraphs having the top-$k$ values in $\phi$. (b) The normalized IGD of an explanation $\mathcal{G}_A$ from an explainer $A$ is defined as:
$$
{ \mathsf { n l G D } } ( { \mathcal { G } } _ { A } ) = { \frac { 1 } { k \cdot | \Phi | } } \sum _ { \phi \in \Phi } \sum _ { G ^ { \prime } \in { \mathcal { G } } _ { \phi } ^ { k } } { \operatorname* { m i n } _ { G \in { \mathcal { G } } _ { A } } d ( \phi ( G ) , \phi ( G ^ { \prime } ) ) }
$$
We use the Euclidean distance function as $d ( \cdot )$ following [31, 58]. (3) QI-3: Maximum Spread (MS) [31]. MS is a widely-adopted spread indicator that quantifies the range of the minimum and maximum values a solution can achieve in each objective. For a fair comparison, we introduce a normalized $M S$ score using reference sets in QI-2. For each measure $\phi \in \Phi$ , and an explanation $\mathcal { G } _ { A }$ , its normalized MS score on $\phi$ is computed as:
$$
\mathsf { n M S } ( { \mathcal G } _ { A } ) ^ { \phi } = \frac { \phi ( G _ { \phi } ^ { A * } ) } { \phi ( G _ { \phi } ^ { * } ) }
$$
where $G_{\phi}^{*}$ is the explanatory subgraph with the best score on $\phi$ in the universal set $\mathcal{G}$, and $G_{\phi}^{A*}$ is the counterpart on $\phi$ from $\mathcal{G}_A$.
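Under these definitions, the first two indicators could be computed as follows. This is an illustrative sketch with toy score vectors, not the experimental pipeline, and the IGD variant here measures distances between full score vectors rather than per-measure reference sets:

```python
import math

def ipf(scores):
    """Normalized IPF: mean phi score over an explanation; each row is
    one explanatory subgraph, one entry per measure (bigger = better)."""
    return sum(sum(row) for row in scores) / (len(scores) * len(scores[0]))

def n_igd(scores, reference):
    """Simplified normalized IGD: average distance from each reference
    point to its closest explanation point (smaller = better)."""
    return sum(min(math.dist(r, s) for s in scores)
               for r in reference) / len(reference)

# Toy score vectors over three measures (fdl+, fdl-, conc):
A = [(0.9, 0.1, 0.5), (0.4, 0.8, 0.6)]
ref = [(1.0, 0.0, 0.5), (0.4, 0.9, 0.7)]
```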
(4) Efficiency. We report the total time cost of explanation generation. For learning-based approaches, this includes learning cost.
# 6.2 Experimental Results
We next present our findings.
Exp-1: Overall Explainability. We evaluate the overall performance of the GNN explainers using QIs.
QI-1: IPF Scores. We report IPF scores (bigger is better) for all GNN explainers in Figure 3(a) with GCNs and $k=10$. Additional explainability results with varying $k$ are given in the full version [2]. (1) In general, ApxSX-OP outperforms a majority of competitors in aggregated explainability. For example, on Cora, ApxSX-OP outperforms GNNExplainer, PGExplainer, ${\mathsf{CF}}^2$, and MOExp in IPF scores by 1.34, 1.51, 2.81, and 1.79 times, respectively. DivSX and ApxSX-I achieve performance comparable to ApxSX-OP. (2) We also observe that GNNExplainer and PGExplainer are sensitive to the choice of dataset. For example, on FacebookPage, both show a significant increase or decrease in IPF scores. In contrast, ApxSX-OP, ApxSX-I, and DivSX consistently achieve top IPF scores over all the datasets. This verifies that our methods are robust in generating high-quality explanations over different data sources.
QI-2: IGD Scores. Figure 3(b) reports the IGD scores (smaller is better) of the explanations for GCN-based classification. ApxSX-OP, ApxSX-I, and DivSX achieve the best IGD scores among all GNN explainers, for all datasets. This verifies that our explanation method is able to consistently select top explanations from a space of high-quality explanations that are separately optimized over different measures. In particular, we found that ApxSX-OP, ApxSX-I, and DivSX are able to "recover" the top-$k$ global optimal solutions for each individual explainability measure with high hit rates (not shown). For example, for conc, at least 9 out of 10 are correctly and consistently identified by ApxSX-OP over every dataset.

Figure 3: (a) IPF overall score; (b) IGD overall score; (c) MS on Cora; (d) MS on PubMed.
QI-3: nMS Scores. Figures 3(c) and 3(d) visualize the normalized MS scores of the GNN explainers over Cora and PubMed, respectively, where $k = 10$. (1) ApxSX-OP reports a large range and contributes explanations that are either close to or exactly the optimal explanation for each of the three measures. ApxSX-I and DivSX have comparable performance, yet with larger ranges due to their properly diversified solutions. (2) DivSX, ApxSX-OP, and ApxSX-I contribute the optimal fdl$^+$, fdl$^-$, and conc, respectively. On the other hand, each individual explainer performs worse for all measures, with a large gap. For example, on Cora, MOExp only achieves up to $3\%$ of the best explanation (DivSX) over fdl$^-$.
Exp-2: Efficiency. Using the same setting as in Figure 3, we report the time cost. Figure 4(a) exhibits the following.
(1) ApxSX-OP, ApxSX-I, and DivSX outperform all (learning-based) GNN explainers. ApxSX-OP (resp. DivSX) on average outperforms GNNExplainer, PGExplainer, ${\mathsf{CF}}^2$, and MOExp by 2.05, 4.46, 9.73, and 6.81 (resp. 1.62, 3.61, 7.88, and 5.51) times, respectively. Moreover, the advantage of ApxSX-OP over GNNExplainer and PGExplainer grows on larger graphs. ${\mathsf{CF}}^2$ and MOExp fail to generate explanations due to high memory cost and long waiting time. Indeed, the learning costs remain their major bottleneck for large and dense graphs. (2) ApxSX-OP, ApxSX-I, and DivSX are feasible for generating high-quality explanations for GNN-based classification over large graphs. For example, for FacebookPage with 22,470 nodes and 342,004 edges, it takes ApxSX-OP around 150 seconds to generate skyline explanations with guaranteed quality. This verifies the effectiveness of its onion-peeling and optimization strategies.
(3) DivSX does not incur significant overhead despite pursuing more diversified explanations. Indeed, the benefit of edge prioritization carries over to the diversification process, and the incremental maintenance of the explanations reduces unnecessary verification. (4) ApxSX-I outperforms ApxSX-OP in cases where the test nodes have a "skewed" edge distribution in their $L$-hop subgraphs, i.e., fewer direct neighbors but large multi-hop neighborhoods, which favors the edge-growing strategy of ApxSX-I. ApxSX-OP takes relatively longer to converge to high-quality answers via the onion-peeling strategy for nodes of interest with denser neighbors at "further" hops.
Exp-3: Scalability. We report the impact of critical factors (i.e., the number of explanatory subgraphs $k$, GNN classes, and the number of GNN layers $L$) on the scalability of skyline explanation generation, using the Cora dataset. Additional experimental results on effectiveness with varying factors are given in the full version [2].
Varying $k$ . Setting $\mathcal { M }$ as GCN-based classifier with 3 layers, we vary $k$ from 1 to 25. Since our competitors are not configurable w.r.t. $k$ , we show the time costs of generating their solutions (that are independent of $k$ ), along with the time cost of our ApxSX-OP, $\mathsf { A p x S X - l }$ , and DivSX (which are dependent on $k$ ) in Figure 4(b). Our methods take longer time to maintain skyline explanation with larger $k$ , as more comparisons are required per newly generated candidate. DivSX is relatively more sensitive to $k$ due to additional computation of cosine distances (§5), yet remains significantly faster than learning-based explainers. On average, our methods take up to 23 seconds to maintain the explanations with $k$ varied to 25.
Varying GNN classes and $L$. Fixing $k{=}10$, we report the time cost of 3-layer GNN explainers for GCN, GAT, and GIN over Cora. As shown in Figure 4(c), all our skyline methods take the least time to explain GNN-based classification. This is consistent with our observation that verifying the GNNs is the most efficient step across all three classes, indicating an overall small verification cost.
We fix $k{=}10$ and report the time cost of GCN explainers, with the number of layers $L$ varied from 1 to 3 over Cora. Figure 4(d) shows that all our methods significantly outperform the competitors. The learning overheads of the competitors remain their major bottleneck, while our algorithms, as post-hoc explainers without learning overhead, are more efficient. As expected, all our methods take longer to generate explanations for larger $L$, as more subgraphs need to be verified from the larger induced $L$-hop neighborhoods.
Exp-4: Case Analysis. We next showcase qualitative analyses of our explanation methods, using real-world examples from two datasets: AmazonComputer and FacebookPage.
Diversified Explanation. A user is interested in finding "why" a product $v_1$ is labeled "PC Gaming" by a GCN. GNN explainers that optimize a single measure (e.g., GNNExplainer) return explanations (see $G_{\zeta}^{8}$ in Figure 5(a)) that simply reveal the fact that $v_1$ is co-purchased with "Components", a "one-sided" interpretation. In contrast, DivSX identifies an explanation that reveals a more comprehensive interpretation with two explanatory subgraphs (both factual) $\{G_{\zeta}^{9}, G_{\zeta}^{10}\}$ (Figure 5(a)), which reveal two co-purchasing
patterns bridging $v_1$ with not only "Components" for "Monitors" (by $G_{\zeta}^{9}$), but also "Accessories" of "Laptops". Indeed, a closer inspection confirms that the former indicates gamers who prefer building their gaming PC with high-refresh-rate monitors designed for gaming; and the latter indicates that $v_1$ is a gaming laptop that needs frequent maintenance with laptop accessories. This verifies that DivSX is able to provide a more comprehensive explanation.

Figure 4: (a) Efficiency; (b) scalability with $k$ (Cora); (c) scalability with GNNs (Cora); (d) scalability with $L$ (Cora).

Figure 5: (a) Skyline explanation vs. GNNExplainer (AmazonComputer); (b) visualization of skyline explanation (AmazonComputer); (c) skyline explanation vs. PGExplainer (FacebookPage); (d) visualization of skyline explanation (FacebookPage).
In Figure 5(b), we visualize the distribution of the explanatory subgraphs of $v_1$ from Figure 5(a). Each point in the plot denotes an explanatory subgraph with 3D coordinates from its normalized fdl$^+$, fdl$^-$, and conc scores. The verified interpretable space includes all the gray dots; the explanatory subgraphs generated by state-of-the-art (SOTA) explainers (i.e., GNNExplainer, PGExplainer, ${\mathsf{CF}}^2$, MOExp) are highlighted as "diamond" points. Our skyline explanation is highlighted with red crosses. We visualize the convex hull based on the skyline points to showcase their dominance over other explanatory subgraphs. We observe that the skyline explanation covers most of the interpretable space and provides comprehensive and diverse explanations. On the other hand, SOTA explainers often localize their solutions in smaller regions of the interpretable space, therefore missing diverse explanations [18].
Skyline vs. Multi-objective Explanation via Linear Combination. Our second case compares DivSX and PGExplainer (shown in Figure 5(c)). The latter generates explanations by optimizing a single objective that combines two explanation measures (factuality and conciseness). We observe that PGExplainer generates small factual subgraphs, such as $G_{\zeta}^{11}$ for a test node $v_2$, yet they are relatively less informative. DivSX generates a skyline explanation $\{G_{\zeta}^{12}, G_{\zeta}^{13}\}$ that is able to interpret $v_2$ as "Politicians" not only due to its connection with "Governments" entities but also due to its strong relation to "TV Shows" and "Companies". Meanwhile, Figure 5(d) verifies that our skyline explanation covers most of the interpretable space with diverse and comprehensive explanations, while the SOTA explainers cluster their solutions in smaller regions.
We have also compared ApxSX-OP and ApxSX-I with their applications. Due to limited space, we present the details in [2].

Abstract. This paper proposes a novel approach to generate subgraph explanations for graph neural networks (GNNs) that simultaneously optimize multiple measures for explainability. Existing GNN explanation methods often compute subgraphs (called "explanatory subgraphs") that optimize a pre-defined, single explainability measure, such as fidelity or conciseness. This can lead to biased explanations that cannot provide a comprehensive explanation to clarify the output of GNN models. We introduce skyline explanation, a GNN explanation paradigm that aims to identify $k$ explanatory subgraphs by simultaneously optimizing multiple explainability measures. (1) We formulate skyline explanation generation as a multi-objective optimization problem, and pursue explanations that approximate a skyline set of explanatory subgraphs. We show the hardness of skyline explanation generation. (2) We design efficient algorithms with an onion-peeling approach that strategically removes edges from the neighbors of the nodes of interest, and incrementally improves explanations as it explores an interpretation domain, with provable quality guarantees. (3) We further develop an algorithm to diversify explanations to provide more comprehensive perspectives. Using real-world graphs, we empirically verify the effectiveness, efficiency, and scalability of our algorithms.
# 1 Introduction
Measurement enables scientific progress. In computer science and machine learning, this requires effective benchmarks that provide a stable foundation for evaluation, ensuring that observed performance scores reflect genuine capabilities on real-world tasks.
Table Union Search (TUS) aims to retrieve tables $C$ from a corpus that are semantically unionable with a query table $Q$, meaning they represent the same information type and permit vertical concatenation (row appending) (Nargesian et al., 2018; Fan et al., 2023a). As a top-$k$ retrieval task, TUS ranks candidate tables $C$ by a table-level relevance score $R(Q, C)$. This score is typically obtained by aggregating column-level semantic relevance scores $R(C_Q, C_C)$ computed for each column $C_Q$ of the query table $Q$ and each column $C_C$ of the candidate table $C$. The aggregation often involves finding an optimal mapping between the columns of $Q$ and $C$, for instance via maximum bipartite matching (Fan et al., 2023b). Successful TUS facilitates data integration and dataset enrichment (Khatiwada et al., 2023; Castelo et al., 2021).
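The bipartite-matching aggregation can be sketched as follows. This is a minimal illustration, not Starmie's actual search strategy: it brute-forces the optimal column assignment (feasible only for small column counts), and the normalization by the smaller column count is an illustrative choice of ours.

```python
from itertools import permutations

def table_relevance(col_sims):
    """Table-level relevance R(Q, C) from a |Q| x |C| matrix of
    column-level scores, via an exact maximum bipartite matching.
    Brute force over permutations; fine for the handful of columns
    in a single table pair."""
    nq, nc = len(col_sims), len(col_sims[0])
    if nq > nc:  # transpose so each row is matched to a distinct column
        col_sims = [list(row) for row in zip(*col_sims)]
        nq, nc = nc, nq
    best = 0.0
    for perm in permutations(range(nc), nq):
        best = max(best, sum(col_sims[i][perm[i]] for i in range(nq)))
    return best / nq  # normalize by the smaller column count
```

For example, with column scores `[[0.9, 0.1], [0.2, 0.8]]` the optimal matching pairs the diagonal and yields (0.9 + 0.8) / 2 = 0.85.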
Recent research has introduced sophisticated TUS methods with complex representation learning (Fan et al., 2023b; Khatiwada et al., 2025; Chen et al., 2023) designed to capture deeper semantics. However, current benchmarks often exhibit excessive schema overlap, limited semantic complexity, and potential ground truth inconsistencies, which raises questions about whether they provide a reliable environment for evaluating advanced TUS capabilities. While state-of-the-art methodologies leverage semantic reasoning to address task-specific challenges, their observed high performance may be attributable in large part to adaptation to statistical and structural properties inherent in the benchmark datasets. This confounds accurate assessment and obscures the isolated contribution of improvements that specifically target semantics-aware TUS.
In this paper, we examine prominent TUS benchmarks 1, using simple baselines to assess the benchmarks themselves. Our research questions are:
1. Do current TUS benchmarks necessitate deep semantic analysis, or can simpler features achieve competitive performance?
2. How do benchmark properties and ground truth quality impact TUS evaluation?
3. What constitutes a more realistic and discriminative TUS benchmark?
Our analysis2 reveals that simple baseline methods often achieve surprisingly strong performance by leveraging benchmark characteristics rather than demonstrating sophisticated semantic reasoning.
Our contributions include:
• A systematic analysis identifying limitations in current TUS benchmarks.
• Empirical evidence showing simple embedding methods achieve competitive performance.
• An investigation of ground truth reliability issues across multiple TUS benchmarks.
• Criteria for developing more realistic and discriminative benchmarks.
# 2 Related Work
We review existing research on TUS methods and the benchmarks used for their evaluation, with a focus on how underlying assumptions about table unionability have evolved to become increasingly nuanced and complex.
# 2.1 Methods and Their Assumptions
2.1.a) Foundational Approaches: Following early work on schema matching and structural similarity (Sarma et al., 2012), Nargesian et al. (2018) formalized TUS by assessing attribute unionability via value overlap, ontology mappings, and natural language embeddings. Bogatu et al. (2020) incorporated additional features (e.g., value formats, numerical distributions) and proposed a distinct aggregation method based on weighted feature distances. Efficient implementations of these methods rely on Locality Sensitive Hashing (LSH) indices and techniques like LSH Ensemble (Zhu et al., 2016) for scalable table search.
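As a building block of such LSH-based search, a column's value set can be sketched with MinHash so that value overlap is estimated from compact signatures instead of full sets. The sketch below is a self-contained illustration (the MD5-based hashing and 128 permutations are arbitrary choices), not the LSH Ensemble index itself, which additionally partitions sets by size to answer containment queries.

```python
import hashlib

def _h(seed, value):
    # Deterministic 64-bit hash of (seed, value) via MD5.
    digest = hashlib.md5(f"{seed}:{value}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash(values, num_perm=128):
    """MinHash signature of a set of column values: one minimum
    per simulated hash permutation."""
    return [min(_h(seed, v) for v in values) for seed in range(num_perm)]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two columns sharing half their values thus get a signature agreement close to their true Jaccard similarity, without comparing raw value sets.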
2.1.b) Incorporating Column Relationships: Beyond considering columns individually, Khatiwada et al. (2023) proposed SANTOS, which evaluates the consistency of inter-column semantic relationships (derived using an existing knowledge base like YAGO (Pellissier Tanon et al., 2020) or by synthesizing one from the data itself) across tables to improve TUS accuracy.
2.1.c) Deep Table Representation Learning: Recent approaches use deep learning for tabular understanding. Pylon (Cong et al., 2023) and Starmie (Fan et al., 2023b) use contrastive learning for contextualized column embeddings. Hu et al. (2023) propose AutoTUS, employing multi-stage self-supervised learning. TabSketchFM (Khatiwada et al., 2025) uses data sketches to preserve semantics while enabling scalability. Graph-based approaches like HEARTS (Boutaleb et al., 2025) leverage HyTrel (Chen et al., 2023), representing tables as hypergraphs to preserve structural properties.
# 2.2 Benchmarks and their Characteristics
Benchmark creators make design choices at every stage of the construction process that reflect their understanding and assumptions about how and when tables can and should be meaningfully combined. We identify three primary construction paradigms applied for building TUS benchmarks:
2.2.a) Partitioning-based: TUS$_{\mathrm{Small}}$ and TUS$_{\mathrm{Large}}$ (Nargesian et al., 2018), as well as the SANTOS benchmark (referring to SANTOS$_{\mathrm{Small}}$, as SANTOS$_{\mathrm{Large}}$ is not fully labeled) (Khatiwada et al., 2023), partition seed tables horizontally or vertically, labeling tables from the same original seed as unionable with the seed table. This approach likely introduces significant schema and value overlap, potentially favoring methods that detect surface-level similarity rather than deeper semantic alignment.
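A minimal sketch of this partitioning scheme shows why it induces overlap: every horizontal fragment shares the full schema with its seed, and every vertical fragment shares all of its values. The splitting logic and parameters below are illustrative, not the exact procedure used by the benchmark authors.

```python
def partition_table(header, rows, n_row_parts=2, n_col_parts=2):
    """Split a seed table horizontally (row slices) and vertically
    (column slices); in partitioning-based benchmarks, all resulting
    fragments are labeled unionable with the seed."""
    parts = []
    # Horizontal: identical schema, disjoint row ranges.
    step = max(1, len(rows) // n_row_parts)
    for i in range(0, len(rows), step):
        parts.append((header, rows[i:i + step]))
    # Vertical: column subsets, all rows (values fully shared).
    cstep = max(1, len(header) // n_col_parts)
    for j in range(0, len(header), cstep):
        cols = list(range(j, min(j + cstep, len(header))))
        parts.append(([header[c] for c in cols],
                      [[r[c] for c in cols] for r in rows]))
    return parts
```

Any pair of horizontal fragments here has a column-name overlap coefficient of 1.0, which is exactly the artifact measured in Section 3.1.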
2.2.b) Corpus-derived: The PYLON benchmark (Cong et al., 2023) curates tables from GitTables (Hulsebos et al., 2023) on specific topics. While this avoids systematic partitioning overlap, the focus on common topics may result in datasets with a general vocabulary that is well-represented in pre-trained models. This can reduce the comparative advantage of specialized table representation learning and data discovery methods.
2.2.c) LLM-generated: UGEN (Pal et al., 2024) leverages Large Language Models (LLMs) to generate table pairs, aiming to overcome limitations of previous methods by crafting purposefully challenging scenarios, including hard negatives. However, this strategy introduces the risk of ground truth inconsistency, as LLMs may interpret the criteria for unionability differently across generations, affecting label reliability.
2.2.d) Hybrid approaches: LAKEBENCH (Deng et al., 2024) uses tables from OpenData3 and WebTable corpora4 alongside both partitioning-based synthetic queries and real queries sampled from the corpus. However, such hybrid approaches can inherit the limitations of their constituent methods: partitioning still risks high overlap, candidate-based labeling may yield incomplete ground truth, and the large scale of these benchmarks can introduce practical evaluation challenges.
Table 1: Table Union Search Benchmarks Summary. NQ = non-query table, Q = query table.
# 3 Methodology
As TUS methods become increasingly sophisticated, the benchmarks used for their evaluation may contain inherent characteristics that hinder the accurate assessment of progress in semantic understanding. This section outlines our approach to examining prominent TUS benchmarks through analysis of their construction methods and strategic use of simple baselines as diagnostic tools. The goal of advanced TUS methods is to capture deep semantic compatibility between tables, beyond simple lexical or structural similarity. Our investigation first analyzes the various benchmark construction processes to identify potential structural weaknesses, then employs computationally inexpensive baseline methods to reveal how these characteristics enable alternative pathways to high performance, thereby influencing evaluation outcomes.
# 3.1 Analyzing Benchmark Construction
We examine five prominent families of TUS benchmarks and formulate hypotheses about their potential limitations based on their construction methodologies (Table 1). We identify three issues stemming from these methodologies: (1) excessive overlap, (2) semantic simplicity, and (3) ground truth inconsistencies, which we detail below:
3.1.a) Excessive Overlap: Benchmarks like TUS$_{\mathrm{Small}}$, TUS$_{\mathrm{Large}}$, SANTOS, and the synthetic query portion of the LAKEBENCH derivatives are created by partitioning seed tables horizontally and vertically, with tables derived from the same original seed designated as unionable pairs. We hypothesize that this methodology inherently leads to significant overlap in both schema (column names) and content (data values) between query tables and their ground truth unionable candidates.
To quantify this, we measure overlap using the Szymkiewicz–Simpson coefficient for exact column names $( O v e r l a p _ { c }$ , Eq. 1) and for values of a given data type $d$ $( O v e r l a p _ { v }$ , Eq. 2) between ground truth pairs.
$$
\begin{aligned}
\mathit{Overlap}_{c}(Q, C) &= \frac{|\mathit{Cols}_{Q} \cap \mathit{Cols}_{C}|}{\min(|\mathit{Cols}_{Q}|, |\mathit{Cols}_{C}|)} \qquad (1) \\
\mathit{Overlap}_{v}(Q, C) &= \frac{|V_{Q}^{d} \cap V_{C}^{d}|}{\min(|V_{Q}^{d}|, |V_{C}^{d}|)} \qquad (2)
\end{aligned}
$$
where $Cols_Q$ and $Cols_C$ denote the sets of column names in the query table $Q$ and candidate table $C$ respectively, and $V_Q^d, V_C^d$ represent the sets of unique values of data type $d$ in each table. The coefficient equals 1.0 when one set is fully contained within the other. Figure 1 shows the distribution of overlap coefficients, with values $\geq 50\%$ indicating substantial overlap. As expected, partitioning-based benchmarks exhibit high overlap: over $90\%$ of ground truth pairs share $\geq 50\%$ of exact column names. For value overlap, we focus on string data types, which dominate the benchmarks (Table 1). Here too, $45\%$ of query-candidate pairs share $\geq 50\%$ of string tokens. LAKEBENCH derivatives (LB-OPENDATA, LB-WEBTABLE) show similar trends. Appendix A provides a detailed breakdown by data type. This high surface similarity favors simple lexical methods and also influences advanced models by introducing repeated patterns in serialized inputs (Starmie), data sketches (TabSketchFM), and graph structures (HEARTS). Though designed for deeper semantics, these models are affected by strong benchmark-induced surface signals, making it hard to attribute performance gains purely to nuanced reasoning.
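A direct implementation of the Szymkiewicz–Simpson coefficient from Eqs. 1 and 2, applicable to column-name sets or per-type value sets alike:

```python
def overlap_coefficient(set_a, set_b):
    """Szymkiewicz-Simpson coefficient: |A intersect B| / min(|A|, |B|).
    Equals 1.0 whenever one set is fully contained in the other."""
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / min(len(set_a), len(set_b))
```

Note the containment behavior: a small fragment whose columns are a subset of its seed's columns scores 1.0, which is why partitioning-based pairs cluster at high overlap.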
Figure 1: Distribution of Exact Column Name Overlap (Top) and String Value Overlap (Bottom) Coefficients for Ground Truth Unionable Pairs Across Benchmarks. Colored circles represent mean values; numbers on the right indicate total pairwise relationships considered.
3.1.b) Semantic Simplicity: Benchmarks derived directly from large corpora, such as PYLON (Cong et al., 2023) using GitTables (Hulsebos et al., 2023) or the real query portions of LAKEBENCH derivatives using diverse public datasets, avoid the systematic overlap introduced by partitioning. However, we hypothesize that this construction method introduces other limitations: (1) it often focuses on relatively common topics with simpler semantics, reducing the need for specialized domain knowledge, and (2) it generally draws from public data sources likely included in the pre-training corpora of large foundation models. Evidence from specific benchmarks supports this concern. PYLON's construction indeed avoids high overlap (Figure 1 shows lower overlap than partitioning-based benchmarks). For LAKEBENCH, while the distinction between real and synthetic queries was unavailable during our analysis5, the significant overall observed overlap suggests that synthetic, partitioning-based queries constitute a large portion of the benchmark. The semantic simplicity evident in PYLON's topics and the public origins of data in both PYLON and LAKEBENCH could favor general-purpose models like BERT (Devlin et al., 2019) or SBERT (Reimers and Gurevych, 2019), which have very likely, though unverifiably, encountered similar content during pre-training. Consequently, the semantic challenge presented by these benchmarks might be relatively low for models with strong general language understanding; this contrasts with documented LLM struggles on non-public, enterprise-specific data (Bodensohn et al., 2025), and potentially allows off-the-shelf embedding models to achieve high performance without fine-tuning.
3.1.c) Noisy Ground Truths: Ensuring accurate and complete ground truth labels is challenging, especially with automated generation or large-scale human labeling efforts, as used in LLM-generated benchmarks (UGEN) and large human-labeled ones (LAKEBENCH derivatives). We hypothesize that ground truth in these benchmarks may suffer from reliability issues, including incorrect labels (false positives/negatives) or incompleteness (missed true positives). For UGEN, generating consistent, accurate positive and negative pairs (especially hard negatives) is difficult: LLMs might interpret unionability rules inconsistently across generations, leading to noisy labels. For large-scale human labeling, as with LB-OPENDATA and LB-WEBTABLE, the process introduces two risks: incompleteness, if the initial retrieval misses true unionable tables; and incorrectness, if human judgments vary or contain errors despite validation efforts. Evaluating performance on UGEN and LAKEBENCH derivatives thus requires caution: scores are affected by label noise or incompleteness, so low scores may reflect ground truth issues rather than benchmark difficulty alone, while the maximum achievable recall is capped by unlabeled true positives.
# 3.2 Baseline Methods for Benchmark Analysis
Based on the hypothesized benchmark issues identified above, we select simple baseline methods to test benchmark sensitivity to different information types. While the (1) overlap and (2) general semantics limitations can be directly examined through baseline performance, (3) the ground truth integrity issue requires separate validation of labels, which we address in Section 5.2. Detailed implementation choices for all baseline methods are in Appendix B.1.
3.2.a) Bag-of-Words Vectorizers: To test whether the Excessive Overlap enables methods sensitive to token frequency to perform well on partitioning-based benchmarks, we employ standard lexical vectorizers (HashingVectorizer, TfidfVectorizer, and CountVectorizer) from scikit-learn6. These generate column embeddings based on sampled string values, with a single table vector obtained via max pooling across column vectors. These baselines test whether high performance can be achieved primarily by exploiting surface signals without semantic reasoning.
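The paper's lexical baselines use scikit-learn's vectorizers; the dependency-free sketch below mirrors the HashingVectorizer variant (tokens hashed into a fixed dimension, here 1024, an arbitrary choice) with max pooling across column vectors to form the table vector.

```python
import hashlib
import math

def hash_embed(tokens, dim=1024):
    """HashingVectorizer-style bag of words: hash each token into a
    fixed-dimension count vector (no vocabulary required), then
    L2-normalize."""
    vec = [0.0] * dim
    for tok in tokens:
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def table_vector(columns, dim=1024):
    """One vector per table: embed each column's sampled string
    values, then max-pool across the column vectors."""
    col_vecs = [hash_embed(vals, dim) for vals in columns]
    return [max(v[i] for v in col_vecs) for i in range(dim)]
```

Because the representation depends only on surface tokens, two partitions of the same seed table yield nearly identical vectors, which is exactly the shortcut the overlap analysis exposes.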
3.2.b) Pre-trained Sentence Transformers: To examine whether the Semantic Simplicity allows benchmarks from broad corpora to be effectively processed by pre-trained language models, we use a Sentence-BERT model (all-mpnet-base-v27) with three column-to-text serializations: (1) SBERT (V+C): input includes column name and sampled values; (2) SBERT (C): input is only the column name; and (3) SBERT (V): input is only concatenated sampled values. Column embeddings are aggregated using mean pooling to produce a single table vector. These baselines assess whether general semantic embeddings, without task-specific fine-tuning, suffice for high performance on benchmarks with general vocabulary.
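The three serializations can be sketched as below; the exact template (separator, value ordering, sampling) is our assumption, as the paper defers implementation details to Appendix B.1. The resulting strings would then be encoded with the Sentence-BERT model and mean-pooled per table.

```python
def serialize_column(name, values, mode="V+C"):
    """Build the column-to-text input for a sentence encoder.
    Modes mirror the three baselines: 'C' (column name only),
    'V' (concatenated sampled values only), 'V+C' (name followed
    by sampled values). The 'name: values' template is illustrative."""
    joined = " ".join(str(v) for v in values)
    if mode == "C":
        return name
    if mode == "V":
        return joined
    return f"{name}: {joined}"
```

Comparing the three modes isolates how much signal comes from schema text versus cell content, which is useful when diagnosing overlap-driven benchmarks.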
# 4 Experimental Setup
To evaluate our hypotheses about benchmark limitations, we employ both simple baseline methods (Section 3.2) and advanced SOTA methods in a controlled experimental framework. This section details the benchmark datasets used, any necessary preprocessing, the comparative methods, and our standardized evaluation approach.
# 4.1 Benchmarks
Our analysis uses the benchmarks described in Section 2.2, with post-preprocessing statistics summarized in Table 1. Most benchmarks were used as-is, but the large-scale LAKEBENCH derivatives (LB-OPENDATA and LB-WEBTABLE) required additional preprocessing for feasibility and reproducibility. The original datasets were too large to process directly and included practical issues, such as missing files, as well as characteristics that complicated evaluation, such as many unreferenced tables. We removed ground truth entries pointing to missing files (58 in LB-WEBTABLE), and excluded unreferenced tables from the retrieval corpus (removing ~5,300 and >2.7M files from LB-OPENDATA and LB-WEBTABLE, respectively). This latter step was done purely for computational feasibility; as a side effect, it simplifies the benchmark by eliminating tables that would otherwise be false positives if retrieved. We also ensured that each query table was listed as a candidate for itself. These steps substantially reduced corpus size while preserving evaluation integrity. The LAKEBENCH variants considered in our study are those available as of May 20, 2025. Future updates to the original repository may modify dataset contents, which may yield different evaluation results.
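These preprocessing steps amount to filtering the ground truth and the corpus; a simplified sketch (table identifiers and data structures are illustrative, not the paper's actual pipeline):

```python
def clean_benchmark(ground_truth, available_tables):
    """Drop ground-truth candidates whose files are missing, ensure
    each query lists itself as a candidate, and restrict the corpus
    to referenced tables only."""
    cleaned = {}
    for query, candidates in ground_truth.items():
        kept = [c for c in candidates if c in available_tables]
        if query not in kept:
            kept.append(query)  # a query is a valid candidate for itself
        cleaned[query] = kept
    referenced = set(cleaned) | {c for cs in cleaned.values() for c in cs}
    corpus = [t for t in available_tables if t in referenced]
    return cleaned, corpus
```

Dropping unreferenced tables shrinks the search space; as noted above, this also removes would-be false positives, slightly easing the task.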
Additionally, for LB-OPENDATA, we created a smaller variant with tables truncated to 1,000 rows, which we use in experiments alongside the original version (Table 2). For TUS$_{\mathrm{Small}}$ and TUS$_{\mathrm{Large}}$, we followed prior work (Fan et al., 2023b; Hu et al., 2023), sampling 125 and 100 queries, respectively. For the other benchmarks, all queries were used.
# 4.2 Comparative Methods
To evaluate our baseline methods (Section 3.2), we compare them against key TUS models previously discussed in Section 2.1, focusing on SOTA methods. For each method, we optimize implementation using publicly available code for fairness:
• Starmie (Fan et al., 2023b): We retrained the RoBERTa-based model for 10 epochs on each benchmark using recommended hyperparameters and their “Pruning” bipartite matching search strategy for generating rankings, which achieves optimal results according to the original paper.
• HEARTS (Boutaleb et al., 2025): We utilized pre-trained HyTrel embeddings (Chen et al., 2023) with a contrastively-trained checkpoint. For each benchmark, we adopted the best-performing search strategy from the HEARTS repository: Cluster Search for the SANTOS, PYLON, and UGEN benchmarks, and ANN index search with max pooling for the TUS and LAKEBENCH benchmarks.
• TabSketchFM (Khatiwada et al., 2025): Results for TUS$_{\mathrm{Small}}$ and SANTOS are taken directly from the original paper, as the pre-trained checkpoint was unavailable at the time of our experiments.
These methods represent significant advancements in table representation learning. AutoTUS (Hu et al., 2023) was not included because its code was unavailable at the time of writing. We provide further implementation details in Appendix B.2.
# 4.3 Evaluation Procedure
We use a consistent evaluation procedure for all baseline and SOTA methods to ensure fair comparison. Table vectors are generated per method (Section 3.2 for baselines; SOTA-specific procedures otherwise) and L2-normalized so that inner product equals cosine similarity. For similarity search, baseline methods use the FAISS library (Douze et al., 2024) with an exact inner product index (IndexFlatIP); each query ranks all candidate tables by similarity. SOTA methods use FAISS or alternative search strategies (Appendix B.2). Following prior work (Fan et al., 2023b; Hu et al., 2023), we report Precision@k (P@k) and Recall@k (R@k), averaged across queries. Values of $k$ follow prior works and are shown in the results tables (e.g., Table 2). We also evaluate computational efficiency via offline (training, vector extraction, indexing) and online (query search) runtimes, with hardware details in Appendix B.3.
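The reported metrics, P@k and R@k averaged across queries, can be computed as follows (a plain-Python sketch; in the actual pipeline the ranked lists come from FAISS inner-product search over the L2-normalized vectors):

```python
def precision_recall_at_k(retrieved, relevant, k):
    """P@k and R@k for one query: `retrieved` is a ranked list of
    table ids, `relevant` the set of ground-truth unionable tables."""
    top_k = retrieved[:k]
    hits = sum(1 for t in top_k if t in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def mean_scores(runs, k):
    """Average P@k and R@k across queries, as reported in the tables.
    `runs` is a list of (ranked_list, relevant_set) pairs."""
    scores = [precision_recall_at_k(r, g, k) for r, g in runs]
    n = len(scores)
    return (sum(p for p, _ in scores) / n,
            sum(r for _, r in scores) / n)
```

Note that P@k is computed against a fixed denominator $k$, so queries with fewer than $k$ ground-truth candidates cannot reach P@k = 1; this is what the IDEAL row in Table 2 accounts for.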
# 5 Results and Discussion
Our empirical evaluation revealed significant patterns across benchmarks that expose fundamental limitations in their ability to measure progress in semantic understanding. Tables 2 and 3 present effectiveness and efficiency metrics respectively.
# 5.1 Evidence of Benchmark Limitations
The most compelling evidence for our benchmark limitation hypotheses emerges from the unexpectedly strong performance of simple baselines. On partitioning-based benchmarks (TUS$_{\mathrm{Small}}$, TUS$_{\mathrm{Large}}$, SANTOS), lexical methods achieve near-perfect precision, matching or exceeding sophisticated models at a fraction of the cost. This directly validates our overlap hypothesis: the high schema and value overlap (Figure 1) creates trivial signals that simple lexical matching can exploit. While advanced methods like Starmie or HEARTS also achieve high scores here, the fact that much simpler, non-semantic methods perform nearly identically leads us to conclude that the benchmark itself does not effectively differentiate methods based on deep semantic understanding. This phenomenon, where simpler approaches achieve comparable or even better results than more complex counterparts, especially once computational costs are considered, has also been observed in related data lake tasks such as table augmentation via join search (Cappuzzo et al., 2024).
For PYLON, a different pattern emerges: lexical methods perform considerably worse due to the much lower exact overlap, but general-purpose semantic embeddings excel. SBERT variants, particularly SBERT (V+C), which combines column and value information, outperform specialized SOTA models like Starmie. This confirms our general semantics hypothesis: these benchmarks employ vocabulary well-represented in standard pre-trained embeddings, diminishing the advantage of specialized tabular architectures for the TUS task.
LB-OPENDATA and LB-WEBTABLE exhibit both limitations despite their scale. Simple lexical methods remain surprisingly competitive, while SBERT variants consistently outperform specialized models. The computational demands of sophisticated models create additional practical barriers: Starmie requires substantial offline costs (training and inference) plus over 16 hours to process the queries on the truncated LB-OPENDATA, and over 21 hours to evaluate the queries of LB-WEBTABLE. HEARTS performs better computationally by leveraging a pre-trained checkpoint without additional training, resulting in a shorter offline processing time, but still under-performs SBERT variants.
# 5.2 Ground Truth Reliability Issues
A notable observation across UGEN and LAKEBENCH derivatives is the significant gap between the R@k achieved by all methods and the IDEAL recall (Table 2). This discrepancy led us to question the reliability of the benchmarks' ground truth labels. We hypothesized that such gaps might indicate not only limitations of the search methods or the inherent difficulty of the benchmarks but also potential incompleteness or inaccuracies within the ground truth itself. Examining discrepancies at small values of $k$ is particularly revealing, as this scrutinizes the highest-confidence predictions of a system. If a high-performing method frequently disagrees with the ground truth at these top ranks, it may signal issues with the ground truth labels.
To investigate this, we defined two heuristic metrics designed to help identify potential ground truth flaws. Let $\mathcal{Q} = \{Q_1, \ldots, Q_N\}$ be $N$ query tables. For $Q_i \in \mathcal{Q}$, $C_{Q_i,k}$ is the set of top-$k$ candidates retrieved by a search method for $Q_i$, and $G_{Q_i}$ is the set of ground truth candidates labeled unionable with $Q_i$.
Table 2: Precision and Recall across benchmarks. Highest values in bold, second highest underlined. IDEAL represents the maximum possible P@k and R@k achievable for each benchmark at the specified $k$. \*: Results unavailable as checkpoint was not publicly accessible. $\ddagger$: Not reported due to excessive computational requirements.
Table 3: Computational efficiency across benchmarks. Times are averaged over 5 runs due to runtime variability. Offline includes vector generation, indexing, and training times where applicable; Online is total query search time.
1. GTFP@k (Ground Truth False Positive Rate): This measures the fraction of top-$k$ candidates retrieved by a search method that are not labeled as unionable in the original ground truth. A high GTFP@k, especially at small $k$, suggests the method might be identifying valid unionable tables missing from the ground truth, thereby helping us pinpoint its possible incompleteness. It is calculated as:
$$
\frac{\sum_{i=1}^{N} |C_{Q_i,k} \setminus G_{Q_i}|}{N \cdot k}
$$
Here, $|C_{Q_i,k} \setminus G_{Q_i}|$ counts retrieved candidates for $Q_i$ that are absent from its ground truth set $G_{Q_i}$. The denominator is the total number of top-$k$ slots considered across all queries.
2. GTFN@k (Ground Truth False Negative Rate): This quantifies the fraction of items labeled as positives in the ground truth that a well-performing search method fails to retrieve within its top-$k$ results (considering a capped expectation of up to $k$ items per query). It is calculated as:
$$
\frac{\sum_{i=1}^{N} \left( \min(k, |G_{Q_i}|) - |G_{Q_i} \cap C_{Q_i,k}| \right)}{\sum_{i=1}^{N} \min(k, |G_{Q_i}|)}
$$
The term $\min(k, |G_{Q_i}|)$ represents the capped ideal number of ground truth items we would expect to find in the top $k$ for $Q_i$. The numerator sums the "misses" for each query: the difference between this capped ideal and the number of ground truth items actually retrieved. The denominator sums this capped ideal across all queries. A high GTFN@k at small $k$ is particularly insightful when investigating ground truth integrity. If we trust the method's ability to discern relevance, a high GTFN@k implies that the method correctly deprioritizes items that, despite being in the ground truth, might be less relevant or even incorrectly labeled as positive. Thus, it can signal potential incorrectness within the ground truth. GTFN@k is equivalent to "1 − CappedRecall@k" (Thakur et al., 2021).
These metrics assume discrepancies between a strong search method and the ground truth may indicate flaws in the latter. While not highly accurate, they helped us identify a smaller, focused subset of query-candidate pairs with disagreements for deeper manual or LLM-based inspection. Results are shown in Table 4.
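Both heuristics follow directly from the two formulas; a sketch assuming, per query, a ranked retrieval list and a ground-truth candidate list:

```python
def gtfp_at_k(retrieved_per_query, ground_truth, k):
    """Fraction of top-k slots filled by tables absent from the
    ground truth (candidate missed positives): sum_i |C_{Q_i,k} \\ G_{Q_i}| / (N*k)."""
    misses = sum(len(set(r[:k]) - set(g))
                 for r, g in zip(retrieved_per_query, ground_truth))
    return misses / (len(ground_truth) * k)

def gtfn_at_k(retrieved_per_query, ground_truth, k):
    """Fraction of (capped) ground-truth positives a strong method
    fails to place in its top k; equals 1 - CappedRecall@k."""
    num = den = 0
    for r, g in zip(retrieved_per_query, ground_truth):
        cap = min(k, len(g))
        num += cap - len(set(g) & set(r[:k]))
        den += cap
    return num / den if den else 0.0
```

For a single query ranked `["a", "b", "c"]` with ground truth `["a", "d"]` and $k = 2$, both rates are 0.5: "b" is an unlabeled retrieval and "d" an unretrieved positive.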
Beyond heuristic metrics, we also conduct a more direct, though still imperfect, assessment of UGEN's ground truth using an LLM-as-a-judge approach. While this method may not capture the same conflicts identified by the cheaper GTFP/GTFN heuristics, it provides a complementary perspective that can offer more precise insights in certain cases. We use gemini-2.0-flash-thinking-exp-01-21, chosen for its 1M-token context window, built-in reasoning abilities, and low hallucination rate. This LLM-as-a-judge approach has become increasingly common in recent works (Gu et al., 2024; Wolff and Hulsebos, 2025). We gave the LLM both tables in each query-candidate pair, along with a detailed prompt including curated unionable and non-unionable examples from UGEN (see Appendix D) to condition the LLM's understanding of unionability on the benchmark. Each pair was evaluated in 5 independent runs with temperature $= 0.1$. A sample of 20 LLM outputs was manually validated and showed strong alignment with human judgment. Comparison with original UGEN labels (Table 5) revealed substantial inconsistencies. Our manual inspection (Appendix C.1) suggested the LLM often provided more accurate assessments, indicating notable noise in the original ground truth.
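The paper does not state how the 5 independent runs are combined into a final verdict; one plausible aggregation (our assumption, not the authors' documented procedure) is a majority vote with the agreement ratio as a rough confidence signal:

```python
from collections import Counter

def aggregate_judgments(runs):
    """Combine several independent LLM verdicts for one table pair
    ('unionable' / 'non-unionable') into a single label, returning
    the majority label and its vote share. The majority-vote scheme
    is an illustrative assumption."""
    counts = Counter(runs)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(runs)
```

Low vote shares flag pairs where the judge itself is unstable, which are natural candidates for the manual validation described above.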
Given the scale of LB-OPENDATA and LB-WEBTABLE, full LLM adjudication was impractical. Instead, we used SBERT (V+C) as our reference search method to compute GTFP@k, focusing on top-ranked pairs not labeled as unionable in the ground truth. As shown in Table 4, such cases were frequent even at top ranks ($2 < k < 5$). To assess ground truth completeness, we manually inspected 20 randomly sampled top-2 and top-3 disagreements. Of these, 19 were genuinely unionable but missing from the ground truth; the remaining pair was correctly non-unionable, with SBERT likely misled by its numeric-only columns. These results suggest non-negligible incompleteness in the LAKEBENCH ground truth. Example cases are shown in Appendix C.2.
In summary, our investigations, combining heuristic metrics, LLM-based adjudication, and manual inspection, reveal non-negligible noise and incompleteness within the original benchmark labels for both UGEN and LAKEBENCH. Consequently, performance metrics reported on these benchmarks may be influenced by these underlying ground truth issues, potentially misrepresenting true task difficulty or method capabilities.
# 5.3 Implications for Measuring Progress
Our experiments reveal several critical issues. Benchmark scores often fail to measure true semantic capabilities, as simple lexical or general embedding methods can match or outperform specialized models by exploiting excessive domain overlap, semantic simplicity, or ground truth inconsistency. This suggests that current benchmarks may inadvertently reward adaptation to these characteristics, making it difficult to quantify the practical benefits that sophisticated TUS methods deliver in these settings. These persistent issues also point to a fundamental challenge: the lack of a precise, operational definition of unionability, mirroring broader difficulties in dataset search (Hulsebos et al., 2024) and highlighting the need to address the subjective, context-dependent nature of table compatibility in practice.
Table 4: Disagreement rates of top-$k$ retrieved results between SBERT and the ground truth across different benchmarks. For UGEN, the query table is not considered a candidate to itself, so values at @1 reflect actual disagreement. For LAKEBENCH variants, the ground truth is normalized to include the query table as a valid candidate for itself; therefore, the top-1 match is always correct by construction, yielding no disagreement @1.
Table 5: Breakdown of agreement and disagreement between ground truth labels and LLM-based judgments.
# 6 Towards Better TUS Benchmarks
In industry practice, unionability judgments are inherently subjective, depending on analytical goals, domain contexts, data accessibility constraints (Martorana et al., 2025), and user preferences (Mirzaei and Rafiei, 2023). Yet current benchmarks impose fixed definitions, creating a disconnect with practical utility: methods excelling on benchmarks often falter in real-world scenarios demanding different compatibility thresholds. Addressing this requires benchmark designs that embrace contextual variability and provide a stable foundation for evaluation, lest even advanced methods fall short in practice.
Rethinking Benchmark Design Principles: Overcoming current benchmark limitations requires a shift in design focusing on three key principles: (1) actively reducing artifactual overlap while introducing controlled semantic heterogeneity to better reflect real-world schema and value divergence; (2) incorporating realistic domain complexity beyond general vocabularies, addressing challenges like non-descriptive schemas and proprietary terms where LLMs struggle (Bodensohn et al., 2025), thus emphasizing domain-specific training that may require industry collaboration; and (3) rethinking ground truth representation by replacing brittle binary labels with richer, nuanced formats validated through multi-stage adjudication to improve completeness and consistency.
Exploring Implementation Pathways: Translating these principles into practice requires concrete strategies for benchmark design and evaluation. One approach is to develop (1) scenario-driven micro-benchmarks targeting specific challenges such as schema drift simulation or value representation noise, enabling more granular analysis than coarse end-to-end metrics. Another is (2) advancing controllable synthetic data generation, following LLM-based methods like UGEN (Pal et al., 2024), to verifiably embed semantic constraints or domain knowledge, supporting diverse testbeds when real data is unavailable or sensitive. Equally important is (3) exploring adaptive, interactive evaluation frameworks such as human-in-the-loop systems, which would dynamically adjust relevance criteria based on user feedback to better capture the subjective nature of unionability. Tools like LakeVisage (Hu et al., 2025) further enhance usability and trust by recommending visualizations that help users interpret relationships among returned tables, improving transparency and interpretability in union search systems. Incorporating natural language preferences is also key. The recent NLCTABLES benchmark (Cui et al., 2025) advances this by introducing NL conditions for union and join searches on column values and table size constraints. However, its predicate-style conditions may be better addressed via post-retrieval filtering (e.g., translating NL to SQL predicates with an LLM), avoiding early discard of unionable candidates and unnecessary retrieval model complexity. To drive further advancement, benchmarks should incorporate (4) natural language conditions that capture key aspects of unionability and joinability, including specifications about the characteristics of the final integrated table or conditional integration logic.
For example, a challenging predicate might require identifying tables that can be "joined with a query table on column A, unioned on columns B and C, and also contain an additional column D providing specific contextual information about a particular attribute." Such conditions would demand deeper reasoning capabilities from data integration systems and encourage the development of more sophisticated methods for Table Union and Join Search. Finally, moving beyond binary success metrics, future benchmarks could adopt (5) multi-faceted evaluation frameworks using richer ground truth representations to assess unionability across dimensions like schema compatibility, semantic type alignment, value distribution similarity, and task-specific relevance, offering a more holistic evaluation than current standards. | Recent table representation learning and data discovery methods tackle table union search (TUS) within data lakes, which involves identifying tables that can be unioned with a given query table to enrich its content. These methods are commonly evaluated using benchmarks that aim to assess semantic understanding in real-world TUS tasks. However, our analysis of prominent TUS benchmarks reveals several limitations that allow simple baselines to perform surprisingly well, often outperforming more sophisticated approaches. This suggests that current benchmark scores are heavily influenced by dataset-specific characteristics and fail to effectively isolate the gains from semantic understanding. To address this, we propose essential criteria for future benchmarks to enable a more realistic and reliable evaluation of progress in semantic table union search. | [
"cs.IR",
"cs.AI",
"cs.CL",
"cs.DB",
"cs.LG"
] |
# 1 Introduction
Visual Information Extraction (VIE) (Wan et al., 2024; Kuang et al., 2023; Hong et al., 2022; Kim et al., 2022) aims to generate structured information, such as JSON, from unstructured document images. This capability is crucial for various medical applications such as report interpretation (Li et al., 2024) and online consultations (Liu et al., 2025b). The most common approach involves first applying Optical Character Recognition (OCR) (Feng et al., 2025; Poznanski et al., 2025; Wei et al., 2024) to extract text, followed by leveraging large language models (LLMs) to extract and organize the text into a JSON structure. Additionally, end-to-end methods (Wan et al., 2024; Bai et al., 2025; Kuang et al., 2023; Kim et al., 2022) have emerged, including multimodal large models that directly output JSON from image inputs.
However, VIE tasks are highly domain-specific, with each domain requiring customized schemas (Park et al., 2019; Huang et al., 2019b). The keys and values within these schemas are often defined by intricate domain-specific details, posing significant challenges for applying general-purpose VIE models to specialized fields. This aspect fundamentally differentiates structured VIE from OCR. Moreover, the annotation cost for VIE tasks is relatively high. These challenges have resulted in suboptimal performance of existing methods in medical VIE scenarios.
Given the nontrivial relationship between diverse image inputs and outputs conforming to predefined schemas, we argue that VIE models need reasoning capabilities (OpenAI, 2024b) to address these complexities. To mitigate the high annotation cost, we explore efficient training paradigms using only 100 annotated samples. Combining these two considerations, we adopt Reinforcement Learning with Verifiable Rewards (RLVR) (Guo et al., 2025; Team et al., 2025) to achieve efficient medical visual extraction.
Specifically, our design within the RLVR framework focuses on three key aspects. First, we ensure diversity in the 100 image samples to make the dataset representative and varied. Second, we carefully design the reward mechanism by incorporating a weighted combination of precision and recall, where precision reduces model hallucinations and recall ensures the model captures all the predefined fields of interest. Lastly, we adopt two sampling strategies: one requires each response to include all fields, with rewards calculated against the ground truth for all fields, while the other evaluates responses using a random subset of fields from the total schema. By integrating these carefully designed components, we aim to establish an efficient and robust solution for medical VIE tasks.
Based on our proposed method, we finetuned Qwen2.5-VL-7B to obtain our VIE RLVR models. We evaluate the models on medical and general VIE tasks. Our VIE RLVR models achieve SOTA performance on the F1, precision, and recall metrics for medical VIE tasks, indicating the advantage of our proposed method. We chose four widely considered general VIE tasks for further evaluation. On the two tasks that are similar to our medical report dataset, our models substantially outperform Qwen2.5-VL-7B. Meanwhile, on the two tasks that differ greatly from our medical report dataset, our models fail to outperform the base model, revealing the significant gap between different VIE tasks. We also compare VIE models trained and evaluated with and without a thinking process. Our case studies show how the model benefits from thinking when dealing with VIE tasks.
# 2 Related Work
# 2.1 Visual Information Extraction
Visual Information Extraction (VIE) converts unstructured document images into structured outputs (e.g., key–value pairs or JSON), supporting applications like receipt understanding, form parsing, and medical document analysis (Huang et al., 2022; Powalski et al., 2021; Appalaraju et al., 2021). Existing methods fall into two main types: two-stage approaches that apply OCR followed by language models for structural parsing (Xu et al., 2020b,a), and end-to-end models that directly generate outputs from images without OCR (Kim et al., 2022; Zhang et al., 2020). Though effective on low-complexity benchmarks such as FUNSD, SROIE, and CORD (Jaume et al., 2019; Huang et al., 2019a; Park et al., 2019; Cao et al., 2022; Wang et al., 2021a), these models often omit required fields, hallucinate content, and generalize poorly to unseen layouts—especially under few-shot or domain-shift conditions. These issues are exacerbated in the medical domain (Ma et al., 2023; Zheng et al., 2022), where layouts vary widely and annotated data is scarce. While recent advances like layout-aware pretraining (Chen et al., 2022; Adnan et al., 2024; Luo et al., 2023), graph-based models (Yu et al., 2021), and schema-guided prompting (Wang et al.; Li et al., 2024; Yao et al., 2024) provide partial solutions, they often fall short of ensuring both structural completeness and semantic accuracy under low-resource constraints.
# 2.2 Reinforcement Learning for MLLM Reasoning
Reinforcement Learning (RL) has emerged as a pivotal research direction for enhancing the complex reasoning capabilities of LLMs (Guo et al., 2025; Jaech et al., 2024; Shao et al., 2024; Hui et al., 2024; Ying et al., 2024). OpenAI-o1 (Jaech et al., 2024) adopted Reinforcement Learning from Human Feedback (RLHF) during the fine-tuning process, significantly enhancing the model’s reasoning abilities and its alignment with human preferences. More recently, DeepSeek-R1 (Guo et al., 2025) employed GRPO (Shao et al., 2024), which, unlike traditional RL algorithms dependent on critic models, directly utilizes rule-based verifiable rewards to guide the model’s reasoning process. This approach has greatly simplified the training procedure and proven highly effective in improving reasoning capabilities. This trend is gradually extending to MLLMs to further enhance their visual reasoning abilities (Xu et al., 2024; Liu et al., 2025a; Yu et al., 2025a; Yang et al., 2025; Zhou et al., 2025). Studies such as Visual-RFT (Liu et al., 2025c) and VLM-R1 (Shen et al., 2025) have shown that for single-image visual grounding tasks, direct application of few-shot GRPO can achieve improvements surpassing supervised fine-tuning. The GoT-R1 (Duan et al., 2025) framework applies RL to enhance semantic spatial reasoning in visual generation. Vision-R1 (Huang et al., 2025) enhances multimodal mathematical reasoning capabilities by using DeepSeek-R1 to augment multimodal Chain of Thought (CoT) datasets and adopting step-by-step thought inhibition during GRPO training. In this research, we aim to extend this paradigm to the Medical VIE tasks mentioned earlier.
Figure 1: Overview of our VIE RLVR framework. The prompt pairs the report image with key descriptions of the medical schema (e.g., Examination Name and a list of Indicators, each with Item Name, Result, and Unit), applies the sampling strategy over schema fields, and instructs the model to reason inside think tags before emitting the final JSON inside answer tags. Visual tokens from the frozen vision encoder and text tokens are fed through a projection layer to the trainable LLM decoder; the predicted JSON and the JSON ground truth are the inputs of our rule-based reward function.
# 3 Method
# 3.1 Preliminary
Due to the intricate relationship between heterogeneous image inputs and outputs that adhere to predefined schemas, we argue that VIE models must possess reasoning abilities to effectively manage these complexities. In contrast to approaches that explicitly replicate intermediate reasoning steps, RLVR (Guo et al., 2025; Team et al., 2025) relies solely on outcome-driven feedback, facilitating scalable reinforcement learning across extensive task datasets.
Group Relative Policy Optimization (GRPO) (Guo et al., 2025) is an efficient RL algorithm that eliminates the need for a separate critic model. Given a query $q$ , GRPO samples a group of $G$ outputs $\{ o _ { 1 } , o _ { 2 } , \ldots , o _ { G } \}$ from the old policy $\pi _ { \theta _ { \mathrm { o l d } } }$ . These outputs are evaluated using reward functions to obtain individual rewards $\{ r _ { 1 } , r _ { 2 } , \ldots , r _ { G } \}$ . The advantage is computed by normalizing the rewards within the group:
$$
A _ { i } = { \frac { r _ { i } - \operatorname * { m e a n } ( \{ r _ { 1 } , r _ { 2 } , \ldots , r _ { G } \} ) } { \operatorname * { s t d } ( \{ r _ { 1 } , r _ { 2 } , \ldots , r _ { G } \} ) } } .
$$
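The group-relative advantage above can be sketched in a few lines; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: z-score each rollout's reward within its group.

    `rewards` holds the scores of the G outputs sampled for one query; the
    small `eps` guards against a zero standard deviation when all rewards tie.
    """
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Eight rollouts for one query: above-average rewards map to positive advantages.
adv = group_relative_advantages([2.0, 1.5, 1.0, 1.0, 1.0, 1.0, 0.5, 0.0])
```

Because advantages are normalized per group, only the relative ranking of the sampled responses matters, which is what removes the need for a learned critic.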
Then the policy is updated by optimizing the following objective:
$$
\begin{array}{l}
\mathcal{I}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}} \\
\displaystyle \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \Big( \min\big( \varphi_{i,t}(\theta) A_{i,t},\ \mathrm{clip}(\varphi_{i,t}(\theta), 1-\epsilon, 1+\epsilon) A_{i,t} \big) - \beta\, \mathbb{D}_{\mathrm{KL}}\big[ \pi_{\theta} \,\|\, \pi_{\mathrm{ref}} \big] \Big),
\end{array}
$$
where
$$
\varphi _ { i , t } ( \theta ) = \frac { \pi _ { \theta } ( o _ { i , t } \mid q , o _ { i , < t } ) } { \pi _ { \theta _ { \mathrm { o l d } } } ( o _ { i , t } \mid q , o _ { i , < t } ) } .
$$
Additionally, we adopt several key techniques from DAPO (Yu et al., 2025b), including Clip-Higher and Token-Level Policy Gradient Loss. With the introduction of the two, the objective function undergoes
some slight modifications as follows:
$$
\begin{array}{l}
\mathcal{I}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}} \\
\displaystyle \frac{1}{\sum_{i=1}^{G} |o_i|} \sum_{i=1}^{G} \sum_{t=1}^{|o_i|} \Big( \min\big( \varphi_{i,t}(\theta) A_{i,t},\ \mathrm{clip}(\varphi_{i,t}(\theta), 1-\epsilon_{\mathrm{low}}, 1+\epsilon_{\mathrm{high}}) A_{i,t} \big) - \beta\, \mathbb{D}_{\mathrm{KL}}\big[ \pi_{\theta} \,\|\, \pi_{\mathrm{ref}} \big] \Big).
\end{array}
$$
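The Clip-Higher modification decouples the lower and upper clipping bounds; a per-token sketch (the $\epsilon$ values shown are DAPO's reported defaults and an assumption here, not values stated in this paper):

```python
import numpy as np

def clip_higher_surrogate(ratio, adv, eps_low=0.2, eps_high=0.28):
    """Per-token surrogate with asymmetric clipping range [1-eps_low, 1+eps_high].

    Raising only the upper bound lets low-probability tokens increase their
    probability further before clipping, which helps preserve exploration.
    """
    clipped = np.clip(ratio, 1.0 - eps_low, 1.0 + eps_high)
    return np.minimum(ratio * adv, clipped * adv)
```

With a symmetric $\epsilon$ this reduces to the standard PPO-style clipped objective used in the GRPO formula above.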
# 3.2 Image Diversity
We collected over 17,000 medical domain images along with their corresponding OCR ground truth. These images encompass a diverse range of report types, including laboratory reports (e.g., blood, urine, and stool tests), diagnostic reports (e.g., endoscopy, electrocardiograms, ultrasounds, and CT scans), and pathological reports (e.g., biopsy analyses and tumor staging). Furthermore, the diversity of the images extends to factors such as shooting angles, creases in the reports, the presence of obstructions, handwritten elements (e.g., doctor signatures), and varying backgrounds in the photographs.
From this dataset, we manually selected 100 images that exhibit high diversity across these dimensions. Using GPT-4o (OpenAI, 2024a), the OCR ground truth was converted into JSON format based on a predefined medical schema (see Appendix A). The JSON outputs were then manually reviewed and corrected to produce the final JSON ground truth.
# 3.3 Rule-based Reward Mechanism
We design a rule-based reward function to optimize the model’s ability to generate JSON outputs by measuring similarity with the ground truth. The reward computation consists of the following steps:
Format Score. We generally adopt the format of R1-Zero (Guo et al., 2025), which includes two components: think and answer. The format score $r _ { \mathrm { f o r m a t } }$ is 1 if both components meet the required specifications; otherwise, $r _ { \mathrm { f o r m a t } }$ is 0.
JSON Preprocessing. Parse the JSON object from the answer and flatten it into a non-nested key-value dictionary. Specifically, this involves traversing all leaf nodes in the JSON structure. Each leaf node’s key in the dictionary is formed by concatenating the keys along the path from the root to the leaf, and its corresponding value is the value of the leaf node. Given the model output $\hat { y }$ and the ground truth $y$ , the preprocessing step converts them into $S _ { p }$ and $S _ { g }$ , respectively.
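A minimal flattening helper might look like the following; the `.` path separator and bracketed list indices are our own conventions for illustration, not necessarily those used in the paper:

```python
def flatten_json(obj, prefix=""):
    """Flatten nested JSON into a {root-to-leaf path: leaf value} dict.

    Dict keys along the path are joined with '.'; list elements are indexed
    with [i] so repeated entries (e.g. multiple indicators) remain distinct.
    """
    flat = {}
    if isinstance(obj, dict):
        for key, val in obj.items():
            flat.update(flatten_json(val, f"{prefix}.{key}" if prefix else key))
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            flat.update(flatten_json(val, f"{prefix}[{i}]"))
    else:
        flat[prefix] = obj
    return flat

report = {"Examination Name": "Lung Function Diffusing Capacity Test",
          "Indicators": [{"Item Name": "FEV1", "Result": "1.61", "Unit": "[L]"}]}
flat = flatten_json(report)
# e.g. flat["Indicators[0].Item Name"] == "FEV1"
```

Comparing these flat dictionaries rather than raw JSON strings is what makes the reward insensitive to key ordering.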
Matching Score. The similarity between $S _ { p }$ and $S _ { g }$ is measured through a weighted combination of precision and recall. We define $n _ { \mathrm { m a t c h e d } }$ as the number of correctly matched key-value pairs between $S _ { p }$ and $S _ { g }$ . Accordingly, precision and recall are defined as $\frac { n _ { \mathrm { m a t c h e d } } } { \left| S _ { p } \right| }$ and $\textstyle { \frac { n _ { \mathrm { m a t c h e d } } } { | S _ { g } | } }$ , respectively. Therefore, the matching score is defined as:
$$
r _ { \mathrm { m a t c h i n g } } = \left\{ \begin{array} { l l } { \alpha \frac { n _ { \mathrm { m a t c h e d } } } { | S _ { p } | } + ( 1 - \alpha ) \frac { n _ { \mathrm { m a t c h e d } } } { | S _ { g } | } } & { \mathrm { i f ~ } | S _ { p } | > 0 , } \\ { 0 } & { \mathrm { e l s e } . } \end{array} \right.
$$
As shown in Figure 2, $\alpha$ serves as a critical hyperparameter to balance precision and recall during optimization:
• When $\alpha$ equals 1, the reward function focuses solely on precision, allowing the model to achieve $100 \%$ precision by outputting just a single perfectly matched key-value pair.
• When $\alpha$ equals 0, the reward function emphasizes recall, potentially causing the model to generate numerous hallucinated key-value pairs in an attempt to retrieve all key-value pairs from $S _ { g }$ .
Final Reward Score. The reward for the i-th sample is calculated as the sum of the format score and the matching score, expressed as:
$$
r _ { i } = r _ { \mathrm { f o r m a t } } + r _ { \mathrm { m a t c h i n g } }
$$
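Putting the pieces together, the rule-based reward can be sketched as follows, operating on already-flattened prediction and ground-truth dictionaries (a simplified illustration; the exact matching rules and the chosen $\alpha$ are assumptions):

```python
import re

def format_score(response):
    """1 if the response wraps reasoning in <think> and the JSON in <answer>."""
    ok = re.search(r"<think>.+?</think>\s*<answer>.+?</answer>", response, re.S)
    return 1.0 if ok else 0.0

def matching_score(pred, gt, alpha=0.5):
    """Alpha-weighted mix of precision and recall over flattened key-value pairs."""
    if not pred:
        return 0.0
    matched = sum(1 for k, v in pred.items() if gt.get(k) == v)
    return alpha * matched / len(pred) + (1 - alpha) * matched / len(gt)

def reward(response, pred, gt, alpha=0.5):
    """Final per-sample reward: format score plus matching score."""
    return format_score(response) + matching_score(pred, gt, alpha)
```

For example, with $\alpha = 0.5$, a prediction matching 2 of its 3 pairs against a 3-pair ground truth earns a matching score of $0.5 \cdot \tfrac{2}{3} + 0.5 \cdot \tfrac{2}{3} = \tfrac{2}{3}$.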
Comparison with SFT. Compared to SFT, our proposed reward function better accommodates the unordered nature of JSON data. Specifically, the unordered property of JSON allows a single image to correspond to multiple ground truths. SFT uses cross-entropy loss on fixed JSON ground truths during training, which may lead to data ambiguity and affect model performance.
Figure 2: Impact of the hyperparameter $\alpha$ on response length when the Sampling Strategy is enabled. The semi-transparent and the solid lines indicate raw samples and the smoothed trend.
# 3.4 Sampling Strategy
To assess the impact of query diversity on experimental outcomes, we employ two data construction strategies. The first involves randomly sampling keys from the JSON data of an image, thereby generating varied queries. The second forgoes sampling and uses all keys, so that all samples share the same query. Observations from Figure 3 indicate that key sampling leads to shorter responses, since fewer keys remain after sampling. The reward curve also shows that key sampling yields faster reward growth, which we attribute to the reduced number of keys making the training task simpler and thus accelerating reward acquisition.
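The key-sampling strategy can be illustrated as below; the field names and the restriction of the ground truth to the sampled root keys are our assumptions about how per-query rewards would be computed:

```python
import random

def sample_fields(schema_keys, rng):
    """Pick a random non-empty subset of top-level schema fields for one query."""
    k = rng.randint(1, len(schema_keys))
    return rng.sample(schema_keys, k)

def restrict_gt(gt_flat, sampled):
    """Keep only flattened ground-truth pairs whose root field was sampled."""
    roots = set(sampled)
    return {k: v for k, v in gt_flat.items()
            if k.split(".")[0].split("[")[0] in roots}

gt = {"Examination Name": "CT", "Indicators[0].Result": "1.61", "Age": "6 Years"}
subset = restrict_gt(gt, ["Indicators", "Age"])
# subset == {"Indicators[0].Result": "1.61", "Age": "6 Years"}
```

Without sampling, every training query simply uses the full key list, so all rollouts for all images share one prompt template.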
# 4 Experiments
# 4.1 VIE Metrics
Various metrics are used for evaluation, including field-level precision, recall, and F1 scores and tree-edit-distance (TED) based accuracy as in (Kim et al., 2021). Note that TED-based accuracy mainly reflects the correctness of the trees' topology. In practical VIE scenarios, we pay more attention to the F1 score, precision, and recall, which also reflect the correctness of the extracted text information.
• TED based accuracy measures the degree of match between the model’s output and the ground truth by calculating the edit distance between two tree structures, and the tree edit distance is used to quantify discrepancies between the predicted and actual structures.
• Field-level precision is the proportion of correctly extracted fields among all predicted fields, defined as $n _ { \mathrm { m a t c h e d } } / | S _ { p } |$ in Eq. 5.
• Field-level recall is the proportion of correctly extracted fields among all actual fields, defined as nmatched/ $| S _ { g } |$ in Eq. 5.
• Field-level F1 score is the harmonic mean of precision and recall, measuring the overall accuracy of field-level extraction.
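Under the notation of Eq. 5, these field-level metrics reduce to a few lines (an illustrative sketch; exact-match comparison of flattened key-value pairs is assumed):

```python
def field_metrics(pred, gt):
    """Field-level precision, recall, and F1 over flattened key-value pairs."""
    matched = sum(1 for k, v in pred.items() if gt.get(k) == v)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gt) if gt else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Precision divides by the number of predicted fields, recall by the number of ground-truth fields, so hallucinated fields hurt only precision and omitted fields hurt only recall.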
Figure 3: Comparison of Reward and Response Length Trends Between Sampling and Non-Sampling Strategies During Training. The semi-transparent and the solid lines indicate raw samples and the smoothed trend.
# 4.2 VIE Baselines
To compare our results with existing works, we introduce models with various types and different outputs.
Pipeline models. Pipeline models for the OCR task are typically composed of a layout recognizer plus OCR models for plain text, mathematical formulas, and tables; the OCR results are collected and rearranged into Markdown, HTML, or LaTeX format.
• MinerU (Wang et al., 2024b) is a widely used pipeline model for OCR tasks. It uses LayoutLMv3 (Huang et al., 2022) or DocLayout-YOLO (Zhao et al., 2024) for document layout detection, a YOLO-v8 model 1 for formula detection, UniMERNet (Wang et al., 2024a) for formula recognition, RapidTable 2, TableMaster (Authors, 2020) or StructEqTable (Xia et al., 2024) for table recognition, PaddleOCR (Authors, 2020) for plain-text OCR, and LayoutReader (Pang, 2024) for reading order analysis. When evaluating MinerU, we align the version and parameter settings with those in OmniDocBench (Ouyang et al., 2025).
• Marker (Wang et al., 2021b) integrates several open source models to parse document, and we align the version and parameter settings of Marker as in OmniDocBench.
Expert models. GOT-OCR (Wei et al., 2024) is a large multimodal model trained for document parsing which firstly used a multi-stage training strategy to train an end-to-end OCR model.
General MLLMs. We include general-purpose MLLMs such as GPT-4o (OpenAI, 2024a), Qwen2.5-VL-7B (Bai et al., 2025), Qwen2.5-VL-72B, and InternVL-2.5-78B (Chen et al., 2025) as baselines. The usage and parameter settings of these models are aligned with those in OmniDocBench.
# 4.3 Our VIE Models
In this subsection we introduce our VIE models finetuned with SFT and RLVR, and report the implementation details of training.
VIE SFT models. We show the result of models trained by VIE SFT and compare them with the VIE RLVR finetuned models, in order to analyze the benefits given by RLVR instead of SFT in VIE tasks.
• JSON-SFT-100 model is finetuned with 100 high-quality medical VIE samples; the model learns to extract key information through direct supervised finetuning.
• OCR-SFT-17K model is finetuned with 17K medical report OCR data. The model parses images into Markdown format, and we apply GPT-4o to rearrange the Markdown into JSON when evaluating its VIE performance.
VIE RLVR models. Three VIE RLVR models are trained to evaluate our proposed method:
• RL-100 is trained on the 100 high-quality samples; during training the model extracts information for randomly sampled fields. Note that the images in this 100-sample training set are the same as those used for JSON-SFT-100.
• RL-100(w/o sample) is trained with a similar schedule on the same 100 samples, but during training the model is required to extract all the key information in the images.
• OCR-SFT-17K-RL-100 model has the same RL stage as RL-100, and it is additionally supervised-finetuned with 17K high-quality OCR data composed of in-service medical reports and their manually corrected ground truths.
Our proposed method is implemented in PyTorch. We use 32 H20 96GB GPUs to train our model with batch size 1 and the AdamW optimizer. In the reinforcement learning stage, the learning rate starts from 1e-6 and decays to 0 following a linear schedule. During the rollout process, we sample 8 responses for each input prompt, with the KL divergence coefficient $\beta$ set to 0.04.
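The linear decay schedule mentioned above amounts to the following (a trivial sketch; the step counts are illustrative, not taken from the paper):

```python
def linear_lr(step, total_steps, lr0=1e-6):
    """Learning rate decays linearly from lr0 at step 0 to 0 at total_steps."""
    return lr0 * max(0.0, 1.0 - step / total_steps)
```

In practice this would be registered as an optimizer scheduler rather than called by hand.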
# 4.4 Comparisons on Medical VIE task
Table 1: Performance on the medical VIE task. The field-level precision, recall, and F1 scores and TED-based accuracies are reported. For each metric, we bold the best results and underline the second-best results. Note that all VIE SFT and VIE RLVR models are finetuned from Qwen2.5-VL-7B, and VIE RLVR shows the best precision, recall, and F1 scores. In the column named 'Output', 'OCR' means we use models to parse medical reports into Markdown format and then apply GPT-4o to extract JSON-format answers; 'JSON' means we prompt models to directly output in JSON format.
Our medical evaluation dataset consists of 203 medical report images uploaded by users, covering CT, MRI, X-ray, physical examination reports, endoscopy, prescriptions, urine tests, electrocardiograms, medical records, pathology, diagnostic tests, medicine boxes, blood tests, and ultrasound. These images include screenshots and scans, exhibiting diverse clarity levels and aspect ratios. To generate ground truth, under the guidance of doctors, we selected important fields from these medical images, used GPT-4o to extract values of these fields from the images, and manually corrected the answers. Finally, we obtained 203 image-JSON pairs as our test dataset.
The results of our models and baseline models are shown in Table 1. We find that VIE RLVR models outperform all other models on the medical VIE task. They achieve SOTA performance in F1, precision, and recall metrics. TextIn and InternVL-2.5-78B perform well on the TED accuracy metric. However, their other scores are relatively low. This indicates that these models can correctly extract the structure of medical reports but fail to parse text information accurately. It is important to note that F1 score, precision, and recall are more critical than TED accuracy. TED accuracy only evaluates the tree topology extracted by the model, whereas F1 score, precision, and recall also assess the text information on each tree node. These metrics hold greater value in practical applications.
The substantial improvement of RLVR over SFT validates the effectiveness of our proposed reinforcement learning approach. Compared with SFT, RLVR is better suited to the unordered nature of JSON data, where a single image can correspond to multiple ground truths. In contrast, SFT uses cross-entropy loss on fixed JSON ground truths during training, which may cause data ambiguity and degrade model performance. In addition, the model gains the ability to think and plan during the reinforcement learning process. This ability allows it to understand image structures more accurately and extract key textual information with higher precision.
For different VIE RLVR models, we first note that RL-100 maintains a 77.81 F1 score. It outperforms pipeline, expert, closed-source, general multimodal models, and VIE SFT models by nearly 10 points. This reveals that in domain-specific VIE tasks, leveraging high-quality small-scale datasets through RLVR enables significant performance gains for models. Meanwhile, RL-100(w/o sample) achieves SOTA performance in the recall metric. The model attempts to extract as many fields as possible, but extracting redundant fields leads to a lower precision score. Furthermore, OCR-SFT-17K-RL-100 reaches a higher TED accuracy. This means the model learns the tree topology of medical reports during the supervised finetuning stage. To observe the OCR ability gained in the SFT stage, refer to Appendix B. Our model, trained on the OCR task with the medical report dataset, outperforms several pipeline and OCR expert models. Its overall score is close to the SOTA model with 72B parameters, demonstrating the high quality of our medical report dataset.
# 4.5 The Impact of Think
Table 2: Impact of model thinking on medical VIE task. The field-level precision, recall, F1 scores and TED based accuracies are reported. Note that both two models are finetuned from Qwen2.5-VL-7B, the only difference is the thinking process in training and inferring stages.
To analyze the impact of model thinking, we compare the RL-100 model, which is required to think during training and inference, to an RL-100 variant trained and evaluated without thinking. Their performance on the medical VIE dataset is reported in Table 2. On the most important metric, the F1 score, which best reflects a model's VIE ability, the model with thinking outperforms the model without thinking. During the thinking process the model understands the image better and plans how to extract key information, so the thinking process is important for MLLMs in VIE tasks. For case studies, refer to Appendix C.
# 4.6 Analysis on General VIE tasks
We also evaluate the medical VIE models trained with RLVR on general VIE tasks to demonstrate that VIE tasks exhibit strong domain-specific characteristics: VIE tasks in different domains vary significantly, making it challenging for models to acquire strong general VIE capabilities through training on a single domain. We evaluate on four widely used VIE benchmarks:
• CORD (Park et al., 2019): The Consolidated Receipt Dataset (CORD) serves as a public benchmark comprising 800 training, 100 validation, and 100 test receipt images. The textual content of these receipts is encoded in the Latin alphabet. The dataset features 30 unique fields, including menu name, quantity, total price, and others. Notably, the information exhibits complex structures, such as nested groups and hierarchical organizations, which bears a resemblance to our medical VIE dataset.
Table 3: Performances on general VIE tasks. Various widely used benchmarks are chosen to evaluate the models general VIE performance. The field-level F1 scores and TED based accuracies are reported. For each metric, we bold the best results and underline the second-best results.
• FUNSD (Jaume et al., 2019): FUNSD is a dataset for form understanding in noisy scanned documents, with 199 real, fully annotated forms for tasks like text detection and layout analysis. There are four fields to extract, named 'question', 'answer', 'header', and 'other', and each field corresponds to a list of values, similar to our medical VIE dataset.
• SROIE (Huang et al., 2019a): SROIE is the most widely adopted dataset and has significantly advanced the field's development. It comprises scanned English printed receipts, with each image accompanied by comprehensive OCR annotations and values for four key text fields.
• Ticket(Guo et al., 2019): This public benchmark dataset comprises 1,500 training and 400 test images of Chinese train tickets. It includes eight fields, such as ticket number, departure station, and train number. The information structure is straightforward, with each key guaranteed to appear only once and the location of each field fixed.
From Table 3 we find that on the CORD dataset, the VIE RLVR models outperform the base model by nearly 10 points in F1 score and 34 points in TED accuracy, and on the FUNSD dataset the VIE RLVR models outperform the base model by nearly 20 points in F1 score and 22 points in TED accuracy. However, the performance of the VIE RLVR models is poor on the Ticket and SROIE datasets. We note that images in the CORD benchmark are receipts with 30 complex fields, some with sub-fields to extract, which is very similar to our medical VIE evaluation dataset, while the SROIE and Ticket datasets consist of images with little information to extract. Additionally, the comparison of RL-100 (w/o sample) with RL-100 in Table 3 differs from the results in Table 1, indicating that a significant gap exists among different VIE tasks.
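The field-level F1 used throughout these comparisons can be sketched as micro-averaged matching of (field, value) pairs; the exact normalization and matching rules of each benchmark's official scorer may differ, so treat this as an illustration:

```python
from collections import Counter

def field_level_f1(pred: dict, gold: dict) -> float:
    """Micro F1 over (field, value) pairs; multiset semantics so
    repeated values (e.g. FUNSD's list-valued fields) are counted."""
    def pairs(d):
        c = Counter()
        for field, value in d.items():
            values = value if isinstance(value, list) else [value]
            for v in values:
                c[(field, str(v).strip())] += 1
        return c

    p, g = pairs(pred), pairs(gold)
    tp = sum((p & g).values())   # multiset intersection = true positives
    if tp == 0:
        return 0.0
    precision = tp / sum(p.values())
    recall = tp / sum(g.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical receipt-style example (field names are ours)
gold = {"menu_name": ["latte", "muffin"], "total_price": "7.50"}
pred = {"menu_name": ["latte"], "total_price": "7.50", "qty": "2"}
print(round(field_level_f1(pred, gold), 3))   # 0.667
```

Here precision and recall are both 2/3 (two of three predicted pairs are correct, two of three gold pairs are found), so the F1 is 2/3.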
# 1 Introduction
There has been significant recent interest in formalisms for reasoning over temporal data [Artale et al., 2017]. Since its introduction by Brandt et al. [2017; 2018], the DatalogMTL language, which extends Datalog [Abiteboul et al., 1995] with operators from metric temporal logic (MTL) [Koymans, 1990], has risen to prominence. In DatalogMTL, facts are annotated by time intervals on which they are valid (e.g., $R(a,b)@[1,5]$), and rules express dependencies between such facts (e.g., $\boxplus_{[0,2]} Q \leftarrow \boxminus_{\{3\}} P$ states that if $P$ holds at time $t-3$, $Q$ holds from $t$ to $t+2$). The complexity of reasoning in DatalogMTL has been extensively investigated for various fragments and extensions and for different semantics (continuous vs pointwise, rational vs integer timeline) [Brandt et al., 2018; Walega et al., 2019; Ryzhikov et al., 2019; Walega et al., 2020b; Walega et al., 2023a; Walega et al., 2024]. Moreover, there are also several implemented reasoning systems for (fragments of) DatalogMTL [Kalayci et al., 2019; Wang et al., 2022; Wang et al., 2024; Bellomarini et al., 2022; Walega et al., 2023b; Ivliev et al., 2024].
One important issue that has yet to be addressed is how to handle the case where the temporal dataset is inconsistent with the DatalogMTL program. Indeed, it is widely acknowledged that real-world data typically contains many erroneous or inaccurate facts, and this is true in particular for temporal sensor data, due to faulty sensors. In such cases, classical logical semantics is rendered useless, as every query is entailed from a contradiction. A prominent approach to obtain meaningful information from an atemporal dataset that is inconsistent w.r.t. a logical theory (e.g., an ontology or a set of database integrity constraints) is to use inconsistency-tolerant semantics based on subset repairs, which are maximal subsets of the dataset consistent with the theory [Bertossi, 2019; Bienvenu, 2020]. The consistent query answering (CQA) approach considers that a (Boolean) query is true if it holds w.r.t. every repair [Arenas et al., 1999; Lembo et al., 2010]. Other natural semantics have also been proposed, such as the brave semantics, under which a query is true if it holds w.r.t. at least one repair [Bienvenu and Rosati, 2013], and the intersection semantics which evaluates queries w.r.t. the intersection of all repairs [Lembo et al., 2010]. It is also useful to consider the minimal subsets of the dataset that are inconsistent with the theory, called conflicts, to explain the inconsistency to a user or help with debugging.
It is natural to extend these notions to the temporal setting. First work in this direction was undertaken by Bourgaux et al. [2019], who considered queries with linear temporal logic (LTL) operators, an atemporal DL-Lite ontology, and a sequence of datasets stating what holds at different timepoints. In that work, however, it was clear how to transfer definitions from the atemporal setting, and the main concerns were complexity and algorithms. By contrast, in DatalogMTL, facts are annotated with time intervals, which may contain exponentially or even infinitely many timepoints (if the timeline is dense or $\infty / {-}\infty$ can be used as interval endpoints). One can therefore imagine multiple different ways of minimally repairing an inconsistent dataset. For example, if a dataset states that $P$ is true from 0 to 4 and $Q$ from 2 to 6 ($P@[0,4]$, $Q@[2,6]$), and a rule states that $P$ and $Q$ cannot hold at the same time ($\bot \leftarrow P \land Q$), one can regain consistency by removing one of the two facts, adjusting their intervals, or treating intervals as their sets of points and conserving as much information as possible.
In this paper, we initiate the study of inconsistency handling in DatalogMTL. After some preliminaries, we formally introduce our framework in Section 3. We define three different notions of repair based upon deleting whole facts ($s$-repairs), punctual facts ($p$-repairs), or minimally shrinking the time intervals of facts ($i$-repairs), which give rise to the $x$-brave, $x$-CQA, and $x$-intersection semantics ($x \in \{s,p,i\}$).
Likewise, we define notions of $s$-, $p$-, and $i$-conflict, which capture different ways to characterize minimal reasons for inconsistency. In Section 4, we study the properties of these notions. In particular, we show that $p$- and $i$-conflicts and repairs are not guaranteed to exist or be finite. In Section 5, we explore the computational properties of our framework. We provide a fairly comprehensive account of the data complexity of recognizing $s$-conflicts and $s$-repairs, generating a single $s$-conflict or $s$-repair, and testing query entailment under the $s$-brave, $s$-CQA, and $s$-intersection semantics. We obtain tight complexity results for several DatalogMTL fragments and identify tractable cases. We further provide some first complexity results for the $i$- and $p$-based notions.
Proofs of all claims are given in the appendix.
# 2 Preliminaries: DatalogMTL
Intervals We consider a timeline $(\mathbb{T},\leq)$ (we will consider $(\mathbb{Q},\leq)$, which is dense, and $(\mathbb{Z},\leq)$, which is not), and call the elements of $\mathbb{T}$ timepoints. An interval takes the form $\langle t_1, t_2 \rangle$, with $t_1, t_2 \in \mathbb{T} \cup \{-\infty, \infty\}$, bracket $\langle$ being $[$ or $($, and bracket $\rangle$ either $]$ or $)$, and denotes the set of timepoints $\{t \mid t \in \mathbb{T},\, t_1 < t < t_2\} \cup \{t_1 \mid \text{if } \langle = [\,\} \cup \{t_2 \mid \text{if } \rangle = ]\,\}$. A punctual interval has the form $[t,t]$ and will also be written $\{t\}$. A range $\varrho$ is an interval with non-negative endpoints.

Syntax Let $\mathbf{P}$, $\mathbf{C}$ and $\mathbf{V}$ be three mutually disjoint countable sets of predicates, constants, and variables respectively. An atom is of the form $P(\vec{\tau})$ where $P \in \mathbf{P}$ and $\vec{\tau}$ is a tuple of terms from $\mathbf{C} \cup \mathbf{V}$ of matching arity. A literal $A$ is an expression built according to the following grammar: $A ::= P(\vec{\tau}) \mid \top \mid \boxplus_\varrho A \mid \boxminus_\varrho A \mid \Diamond^+_\varrho A \mid \Diamond^-_\varrho A \mid A\,\mathcal{U}_\varrho\, A \mid A\,\mathcal{S}_\varrho\, A$, where $P(\vec{\tau})$ is an atom and $\varrho$ is a range. Intuitively, $\mathcal{S}$ stands for 'since', $\mathcal{U}$ for 'until', $\Diamond$ for 'eventually', and $\Box$ for 'always', with $+$ indicating the future and $-$ the past. A DatalogMTL program $\Pi$ is a finite set of rules of the form $B \leftarrow A_1 \land \dots \land A_k$ or $\bot \leftarrow A_1 \land \dots \land A_k$ with $k \geq 1$, where each $A_i$ is a literal and $B$ is a literal not mentioning any 'non-deterministic' operators $\Diamond^+_\varrho$, $\Diamond^-_\varrho$, $\mathcal{U}_\varrho$, and $\mathcal{S}_\varrho$. We call $A_1 \land \dots \land A_k$ the body of the rule, and $B$ or $\bot$ its head. We assume that each rule is safe: each variable in its head occurs in its body, and this occurrence is not in a left operand of $\mathcal{S}$ or $\mathcal{U}$.
A (temporal) dataset $\mathcal{D}$ is a finite set of (temporal) facts of the form $\alpha@\iota$, with $\alpha$ a ground atom (i.e., $\alpha$ does not contain any variable) and $\iota$ a non-empty interval.
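These interval and fact definitions can be represented directly; a minimal sketch (the class and attribute names are ours, not from any DatalogMTL system):

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Interval:
    """An interval <t1, t2> with open/closed endpoints; endpoints may be +/-inf."""
    lo: float
    hi: float
    lo_closed: bool = True
    hi_closed: bool = True

    def is_empty(self) -> bool:
        # e.g. (5,5] or [5,5) denote the empty set of timepoints
        return self.hi < self.lo or (
            self.hi == self.lo and not (self.lo_closed and self.hi_closed))

    def __contains__(self, t: float) -> bool:
        if self.is_empty():
            return False
        if t == self.lo:
            return self.lo_closed
        if t == self.hi:
            return self.hi_closed
        return self.lo < t < self.hi

@dataclass(frozen=True)
class Fact:
    atom: str          # a ground atom, e.g. "Fever(a)"
    interval: Interval

# Fever(a)@[29,34]; the punctual interval {t} is Interval(t, t)
f = Fact("Fever(a)", Interval(29, 34))
print(29 in f.interval, 28 in f.interval)   # True False
```

A dataset is then simply a finite set of such `Fact` objects.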
Fragments A program is propositional if all its predicates are nullary. It is core if each of its rules is either of the form $\bot \leftarrow A_1 \land A_2$ or of the form $B \leftarrow A$. It is linear if each of its rules is either of the form $\bot \leftarrow A_1 \land A_2$ or of the form $B \leftarrow A_1 \land \dots \land A_k$ where at most one $A_i$ mentions some predicate that occurs in the head of some rule (intensional predicate). We denote by $\mathrm{DatalogMTL}^{\Diamond}_{\mathrm{core}}$ (resp. $\mathrm{DatalogMTL}^{\Diamond}_{\mathrm{lin}}$) the fragment where programs are core (resp. linear) and $\Diamond$ is the only temporal operator allowed in literals. The relation $\ll$ of dependence between predicates is defined by $P \ll Q$ iff there is a rule with $P$ in the head and $Q$ in the body. A program is non-recursive if there is no predicate $P$ such that $P \ll^+ P$, where $\ll^+$ is the transitive closure of $\ll$. We denote by $\mathrm{Datalog_{nr}MTL}$ the fragment of non-recursive programs.
Semantics An interpretation $\mathfrak { M }$ specifies for each ground atom $\alpha$ and timepoint $t \in \mathbb { T }$ whether $\alpha$ is true at $t$ . If $\alpha$ is true at $t$ in $\mathfrak { M }$ , we write ${ \mathfrak { M } } , t \models \alpha$ and say that $\mathfrak { M }$ satisfies $\alpha$ at $t$ . The satisfaction of a ground literal by $\mathfrak { M }$ at $t$ is then defined inductively as follows.
$$
\begin{array}{ll}
\mathfrak{M},t \models \top & \text{for every } t \in \mathbb{T}\\
\mathfrak{M},t \not\models \bot & \text{for every } t \in \mathbb{T}\\
\mathfrak{M},t \models \boxplus_\varrho A & \text{iff } \mathfrak{M},s \models A \text{ for all } s \text{ with } s - t \in \varrho\\
\mathfrak{M},t \models \boxminus_\varrho A & \text{iff } \mathfrak{M},s \models A \text{ for all } s \text{ with } t - s \in \varrho\\
\mathfrak{M},t \models \Diamond^+_\varrho A & \text{iff } \mathfrak{M},s \models A \text{ for some } s \text{ with } s - t \in \varrho\\
\mathfrak{M},t \models \Diamond^-_\varrho A & \text{iff } \mathfrak{M},s \models A \text{ for some } s \text{ with } t - s \in \varrho\\
\mathfrak{M},t \models A\,\mathcal{U}_\varrho\, A' & \text{iff } \mathfrak{M},t' \models A' \text{ for some } t' \text{ with } t' - t \in \varrho\\
& \text{and } \mathfrak{M},s \models A \text{ for all } s \in (t,t')\\
\mathfrak{M},t \models A\,\mathcal{S}_\varrho\, A' & \text{iff } \mathfrak{M},t' \models A' \text{ for some } t' \text{ with } t - t' \in \varrho\\
& \text{and } \mathfrak{M},s \models A \text{ for all } s \in (t',t)
\end{array}
$$
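Restricted to the integer timeline and to interpretations given as finite sets of timepoints per ground atom, the unary conditions above can be checked directly. A minimal sketch (ranges are closed integer pairs `(lo, hi)`; open range endpoints are omitted for brevity, and the function names are ours):

```python
def box_plus(M, A, rho, t):
    """M,t |= box+_rho A : A holds at every s with s - t in rho = [lo, hi]."""
    lo, hi = rho
    return all(s in M.get(A, set()) for s in range(t + lo, t + hi + 1))

def box_minus(M, A, rho, t):
    """M,t |= box-_rho A : A holds at every s with t - s in rho."""
    lo, hi = rho
    return all(s in M.get(A, set()) for s in range(t - hi, t - lo + 1))

def diamond_minus(M, A, rho, t):
    """M,t |= diamond-_rho A : A holds at some s with t - s in rho."""
    lo, hi = rho
    return any(s in M.get(A, set()) for s in range(t - hi, t - lo + 1))

M = {"P": set(range(0, 5))}            # P true exactly on [0,4]
print(box_minus(M, "P", (0, 2), 4))    # P holds at 2,3,4   -> True
print(box_plus(M, "P", (0, 2), 4))     # P fails at 5 and 6 -> False
```

The future-directed `diamond_plus` and the binary operators $\mathcal{U}$, $\mathcal{S}$ follow the same pattern with their quantifiers swapped accordingly.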
An interpretation $\mathfrak{M}$ is a model of a rule $H \leftarrow A_1 \land \dots \land A_k$ if for every grounding assignment $\nu : \mathbf{V} \to \mathbf{C}$ and every $t \in \mathbb{T}$, $\mathfrak{M},t \models \nu(H)$ whenever $\mathfrak{M},t \models \nu(A_i)$ for $1 \leq i \leq k$, where $\nu(B)$ denotes the ground literal obtained by replacing each $x \in \mathbf{V}$ by $\nu(x)$ in $B$. $\mathfrak{M}$ is a model of a program $\Pi$ if it is a model of all rules in $\Pi$. It is a model of a fact $\alpha@\iota$ if $\mathfrak{M},t \models \alpha$ for every $t \in \iota$, and it is a model of a (possibly infinite) set of facts $B$ if it is a model of all facts in $B$. A program $\Pi$ is consistent if it has a model. A set of facts $B$ is $\Pi$-consistent if there exists a model $\mathfrak{M}$ of both $\Pi$ and $B$, written $\mathfrak{M} \models (B,\Pi)$. A program $\Pi$ and set of facts $B$ entail a fact $\alpha@\iota$, written $(B,\Pi) \models \alpha@\iota$, if every model of both $\Pi$ and $B$ is also a model of $\alpha@\iota$. Finally, we write $B \models \alpha@\iota$ if $(B,\emptyset) \models \alpha@\iota$ and $\Pi \models \alpha@\iota$ if $(\emptyset,\Pi) \models \alpha@\iota$.
Queries A DatalogMTL query is a pair $(\Pi, q(\vec{v}, r))$ of a program $\Pi$ and an expression $q(\vec{v}, r)$ of the form $Q(\vec{\tau})@r$, where $Q \in \mathbf{P}$, $\vec{v} = (v_1,\dots,v_n)$ is a tuple of variables, $\vec{\tau}$ is a tuple of terms from $\mathbf{C} \cup \vec{v}$, and $r$ is an interval variable. We may simply use $q(\vec{v}, r)$ as a query when the program has been specified. A certain answer to $(\Pi, q(\vec{v}, r))$ over a (possibly infinite) set of facts $B$ is a pair $(\vec{c},\iota)$ such that $\vec{c} = (c_1,\dots,c_n)$ is a tuple of constants, $\iota$ is an interval and, for every $t \in \iota$ and every model $\mathfrak{M}$ of $\Pi$ and $B$, we have $\mathfrak{M},t \models Q(\vec{\tau})_{[\vec{v} \mapsto \vec{c}]}$, where $Q(\vec{\tau})_{[\vec{v} \mapsto \vec{c}]}$ is obtained from $Q(\vec{\tau})$ by replacing each $v_i \in \vec{v}$ by the corresponding $c_i \in \vec{c}$.
We will illustrate the notions we introduce on a running example about a blood transfusion scenario.
Example 1. In our scenario, we wish to query the medical records of blood transfusion recipients to detect patients who exhibited symptoms or risk factors of transfusion-related adverse reactions. For example, if a patient presents a fever during the transfusion or in the next four hours, while having a normal temperature for the past 24 hours, one can suspect a febrile non-haemolytic transfusion reaction (potential fnhtr). This is represented by the following rule, where, intuitively, $x$ represents a patient and y a blood pouch:
$$
\mathsf{PotFnhtr}(x) \leftarrow \mathsf{Fever}(x) \land \boxminus_{(0,24]} \mathsf{NoFever}(x) \land \Diamond^-_{[0,4]} \mathsf{GetBlood}(x,y)
$$
Another rule detects more generally relevant fever episodes:
$$
\mathsf{FevEp}(x) \leftarrow \mathsf{Fever}(x) \land \Diamond^-_{[0,24]} \bigl(\mathsf{NoFever}(x)\,\mathcal{U}_{\{5\}}\,\mathsf{GetBlood}(x,y)\bigr)
$$
A patient cannot have a fever and no fever at the same time:
$$
\bot \leftarrow \mathsf{Fever}(x) \land \mathsf{NoFever}(x)
$$
We may also wish to identify patients who once produced anti-$D$ antibodies, as they are at risk for adverse reactions to some blood types. This is represented as follows.
$$
\boxplus_{[0,\infty)} \mathsf{AntiDRisk}(x) \leftarrow \mathsf{PositiveAntiD}(x)
$$
The following dataset provides information about a patient a who received transfusion from a blood pouch $b$ , assuming that time 0 is the time they entered the hospital.
$$
\begin{array}{rl}
\mathcal{D} = \{ & \mathsf{PositiveAntiD}(a)@\{-90\},\ \mathsf{GetBlood}(a,b)@[24,26],\\
& \mathsf{NoFever}(a)@[0,29),\ \mathsf{Fever}(a)@[29,34]\ \}
\end{array}
$$
Let $\Pi$ consist of the DatalogMTL rules above. One can check that $\mathcal{D}$ is $\Pi$-consistent, $(a,\{29\})$ is a certain answer to the query $\mathsf{PotFnhtr}(v)@r$, $(a,[29,34])$ is a certain answer to $\mathsf{FevEp}(v)@r$, and $(a,[-90,\infty))$ to $\mathsf{AntiDRisk}(v)@r$.
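Restricting Example 1 to integer timepoints, the certain answer $(a,\{29\})$ for PotFnhtr can be reproduced by direct evaluation; the encoding below reflects our reading of the fever rule (past-directed 'always' and 'eventually' windows) and is a sketch, not the paper's algorithm:

```python
# Dataset of Example 1 on the integer timeline (closed integer intervals).
D = {
    "GetBlood(a,b)": set(range(24, 27)),   # [24,26]
    "NoFever(a)":    set(range(0, 29)),    # [0,29)
    "Fever(a)":      set(range(29, 35)),   # [29,34]
}

def pot_fnhtr(t):
    fever_now = t in D["Fever(a)"]
    # always in the past window (0,24]: NoFever at every s with t - s in (0,24]
    no_fever_past = all(s in D["NoFever(a)"] for s in range(t - 24, t))
    # eventually in the past window [0,4]: GetBlood at some s with t - s in [0,4]
    blood_recent = any(s in D["GetBlood(a,b)"] for s in range(t - 4, t + 1))
    return fever_now and no_fever_past and blood_recent

print([t for t in range(0, 40) if pot_fnhtr(t)])   # [29]
```

At $t = 29$ all three conjuncts hold, while at $t = 30$ the no-fever window already contains the feverish timepoint 29, matching the certain answer $(a,\{29\})$.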
# 3 Repairs and Conflicts on Time Intervals
In this section, we first define three kinds of repair and conflict for temporal datasets, then extend inconsistency-tolerant semantics to this context. Before delving into the formal definitions, we illustrate the impact of dealing with time intervals.
Example 2. Let Π be the program from Example 1 and
$\begin{array}{rl} \mathcal{D} = \{ & \mathsf{PositiveAntiD}(a)@\{-90\},\ \mathsf{GetBlood}(a,b)@[24,26],\\ & \mathsf{NoFever}(a)@[0,32],\ \mathsf{Fever}(a)@[14,18],\ \mathsf{Fever}(a)@[29,34]\ \}. \end{array}$ $\mathcal{D}$ is $\Pi$-inconsistent because, in $\mathcal{D}$, the patient $a$ has both fever and no fever at every $t \in [14,18] \cup [29,32]$. To repair the data by removing facts from $\mathcal{D}$, there are only two minimal possibilities: either remove $\mathsf{NoFever}(a)@[0,32]$, or remove both $\mathsf{Fever}(a)@[14,18]$ and $\mathsf{Fever}(a)@[29,34]$. This may be considered too drastic, since, e.g., the Fever facts do not contradict that the patient had no fever during $[0,14)$ or $(18,29)$.
Hence, it may seem preferable to consider each timepoint independently, so that a repair may contain, e.g., the two Fever facts as well as $\mathsf{NoFever}(a)@[0,14)$ and $\mathsf{NoFever}(a)@(18,29)$. However, with this approach, if $\mathbb{T} = \mathbb{Q}$, there are infinitely many possibilities to repair the dataset, and the number of facts in a repair may be infinite. For example, an option to repair the Fever and NoFever facts is:
$$
\begin{array}{l}
\{\mathsf{NoFever}(a)@[0,29),\ \mathsf{Fever}(a)@[30,34],\\
\ \ \mathsf{NoFever}(a)@[29+\tfrac{1}{2^{2k+1}},\, 29+\tfrac{1}{2^{2k}}),\\
\ \ \mathsf{Fever}(a)@[29+\tfrac{1}{2^{2k+2}},\, 29+\tfrac{1}{2^{2k+1}}) \mid k \in \mathbb{N}\}.
\end{array}
$$
An intermediate approach consists in only modifying the endpoints of intervals, in order to keep more information than with fact deletion without splitting one fact into many. Again we may obtain infinitely many possibilities, e.g., the Fever and NoFever facts can be repaired by $\mathsf{NoFever}(a)@[0,t)$ and $\mathsf{Fever}(a)@[t,34]$ for $t \in [29,32]$.

Manipulating sets of temporal facts To formalize conflicts and repairs of temporal datasets, we consider three ways of comparing (possibly infinite) sets of facts w.r.t. inclusion:

Definition 1 (Pointwise inclusion, subset comparison). We say that a fact $\alpha@\iota$ is pointwise included in a set of facts $B$ if for every $t \in \iota$, there is $\alpha@\iota' \in B$ with $t \in \iota'$, i.e., if $B \models \alpha@\iota$. Given sets of facts $B$ and $B'$, we say that $B'$ is

• a pointwise subset of $B$, denoted $B' \subseteq^p B$, if every $\alpha@\iota \in B'$ is pointwise included in $B$;

• an interval-based subset of $B$, denoted $B' \subseteq^i B$, if $B' \subseteq^p B$ and for every $\alpha@\iota \in B$, there is at most one $\alpha@\iota' \in B'$ such that $\iota' \subseteq \iota$;

• a strong subset of $B$, written $B' \subseteq^s B$, if $B' \subseteq^i B$ and $B' \subseteq B$.

We write $B' \subsetneq^p B$ to indicate that $B' \subseteq^p B$ and $B \not\subseteq^p B'$. For $x \in \{i,s\}$, we write $B' \subsetneq^x B$ if $B' \subseteq^x B$ and $B' \subsetneq^p B$.

We also need to intersect (possibly infinite) sets of facts:

Definition 2 (Pointwise intersection). The pointwise intersection of a family $(B_i)_{i \in I}$ of sets of facts is $\bigsqcap_{i \in I} B_i = \{\alpha@\{t\} \mid B_i \models \alpha@\{t\}$ for each $i \in I\}$. The pointwise intersection of a fact $\alpha@\iota$ and a set of facts $B$ is $\{\alpha@\iota\} \sqcap B$.
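On the integer timeline, pointwise subset and pointwise intersection reduce to ordinary set operations on timepoints, which makes for a convenient sanity check. A sketch (the fact encoding `{atom: [(lo, hi), ...]}` with closed integer intervals is ours):

```python
def points(facts):
    """Expand {atom: [(lo, hi), ...]} with closed integer intervals
    into {atom: set_of_timepoints}."""
    out = {}
    for atom, intervals in facts.items():
        out[atom] = set()
        for lo, hi in intervals:
            out[atom] |= set(range(lo, hi + 1))
    return out

def pointwise_subset(b1, b2):
    """b1 is a pointwise subset of b2: every timepoint of every fact is covered."""
    p1, p2 = points(b1), points(b2)
    return all(p1[a] <= p2.get(a, set()) for a in p1)

def pointwise_intersection(*bs):
    """Timepoints at which an atom holds in every argument set."""
    ps = [points(b) for b in bs]
    atoms = set.intersection(*(set(p) for p in ps))
    return {a: set.intersection(*(p[a] for p in ps)) for a in atoms}

B  = {"P": [(0, 4)]}
B1 = {"P": [(0, 1), (3, 4)]}
print(pointwise_subset(B1, B))              # True
print(pointwise_intersection(B, B1)["P"])   # {0, 1, 3, 4}
```

On a dense timeline this expansion is of course unavailable, which is precisely why the paper works with intervals symbolically.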
Normal form A (possibly infinite) set of facts $B$ is in normal form if for every two distinct facts $\alpha@\iota$ and $\alpha@\iota'$ in $B$ over the same ground atom, $\iota \cup \iota'$ is not an interval.
Lemma 1. If $B$ is in normal form, then (1) $B' \subseteq^s B$ iff $B' \subseteq B$, and (2) $B' \subseteq^i B$ implies that the cardinality of $B'$ is bounded by that of $B$.
To see why normal form is necessary, consider (1) $B = \{P@[0,4], P@[1,2]\}$, which is such that $B \not\subseteq^i B$, so that $B \not\subseteq^s B$, and (2) $B = \{P@[0,4], P@[3,7]\}$, which is such that $\{P@[0,1], P@[2,5], P@[6,7]\} \subseteq^i B$.
For every dataset $\mathcal{D}$, there exists a dataset $\mathcal{D}'$ in normal form such that for every $t \in \mathbb{T}$ and every ground atom $\alpha$, $\mathcal{D} \models \alpha@\{t\}$ iff $\mathcal{D}' \models \alpha@\{t\}$. Moreover, such a $\mathcal{D}'$ can be computed in polynomial time w.r.t. the size of $\mathcal{D}$ by merging every $\alpha@\iota_1$ and $\alpha@\iota_2$ such that $\iota_1 \cup \iota_2$ is an interval into $\alpha@\iota$ with $\iota = \iota_1 \cup \iota_2$. In the rest of this paper, we assume that all datasets are in normal form and all programs are consistent.
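On the integer timeline, this polynomial-time normalization is the classic merge of overlapping or adjacent intervals; a sketch (closed integer intervals as `(lo, hi)` pairs, where adjacency `hi + 1 >= lo'` makes the union an interval):

```python
def normal_form(intervals):
    """Merge closed integer intervals whose union is an interval,
    i.e. overlapping or adjacent ones, after sorting by left endpoint."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1] + 1:   # union is still an interval
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(iv) for iv in merged]

print(normal_form([(0, 4), (3, 7), (9, 9)]))   # [(0, 7), (9, 9)]
```

The sort makes a single left-to-right pass sufficient, so the whole procedure runs in $O(n \log n)$ per ground atom.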
Conflicts, repairs, and inconsistency-tolerant semantics We are now ready to formally state the definitions of conflicts and repairs of a temporal dataset w.r.t. a DatalogMTL program. We start with the notion of conflict, which is crucial to explain inconsistency.
Definition 3 (Conflicts). Let $\Pi$ be a DatalogMTL program and $\mathcal{D}$ be a dataset. Given $x \in \{p,i,s\}$, a set of facts $\mathcal{C}$ is an $x$-conflict of $\mathcal{D}$ w.r.t. $\Pi$ if $\mathcal{C}$ is in normal form, $\mathcal{C} \subseteq^x \mathcal{D}$, $\mathcal{C}$ is $\Pi$-inconsistent, and there is no $\Pi$-inconsistent $\mathcal{C}' \subsetneq^x \mathcal{C}$. We denote by $x\mathit{Conf}(\mathcal{D},\Pi)$ the set of all $x$-conflicts of $\mathcal{D}$ w.r.t. $\Pi$.

Example 3. Consider $\Pi$ and $\mathcal{D}$ from Example 2. The $s$-conflicts are $\{\mathsf{NoFever}(a)@[0,32], \mathsf{Fever}(a)@[14,18]\}$ and $\{\mathsf{NoFever}(a)@[0,32], \mathsf{Fever}(a)@[29,34]\}$, while the $p$-conflicts and $i$-conflicts are of the form $\{\mathsf{NoFever}(a)@\{t\}, \mathsf{Fever}(a)@\{t\}\}$ with $t \in [14,18] \cup [29,32]$.
We define repairs in a similar manner.
Definition 4 (Repairs). Let $\Pi$ be a DatalogMTL program and $\mathcal{D}$ be a dataset. Given $x \in \{p,i,s\}$, a set of facts $\mathcal{R}$ is an $x$-repair of $\mathcal{D}$ w.r.t. $\Pi$ if $\mathcal{R}$ is in normal form, $\mathcal{R} \subseteq^x \mathcal{D}$, $\mathcal{R}$ is $\Pi$-consistent, and there is no $\Pi$-consistent $\mathcal{R}'$ such that $\mathcal{R} \subsetneq^x \mathcal{R}' \subseteq^x \mathcal{D}$. We denote by $x\mathit{Rep}(\mathcal{D},\Pi)$ the set of all $x$-repairs of $\mathcal{D}$ w.r.t. $\Pi$.
The requirement that $x$ -repairs are in normal form ensures that when $\mathcal { D }$ is $\Pi$ -consistent, $x R e p ( \mathcal { D } , \Pi ) = \{ \mathcal { D } \}$ .
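For small propositional-style instances, $s$-repairs and $s$-conflicts can be enumerated by brute force over subsets, which is a useful oracle when testing. The sketch below hard-codes the clashing facts of Example 2 on the integer timeline and checks consistency only against binary clashes such as $\bot \leftarrow \mathsf{Fever} \land \mathsf{NoFever}$ (this restricted consistency check is our simplification):

```python
from itertools import combinations

# Facts as (atom, lo, hi) with closed integer intervals; T is omitted
# since its facts never clash.
D = [("NoFever", 0, 32), ("Fever", 14, 18), ("Fever", 29, 34)]
CLASHES = [("Fever", "NoFever")]   # encodes the rule: bot <- Fever and NoFever

def consistent(facts):
    pts = {}
    for atom, lo, hi in facts:
        pts.setdefault(atom, set()).update(range(lo, hi + 1))
    return all(not (pts.get(a, set()) & pts.get(b, set())) for a, b in CLASHES)

def s_repairs(dataset):
    """Maximal consistent subsets w.r.t. set inclusion."""
    subsets = [set(s) for k in range(len(dataset) + 1)
               for s in combinations(dataset, k) if consistent(s)]
    return [s for s in subsets if not any(s < t for t in subsets)]

def s_conflicts(dataset):
    """Minimal inconsistent subsets w.r.t. set inclusion."""
    subsets = [set(s) for k in range(len(dataset) + 1)
               for s in combinations(dataset, k) if not consistent(s)]
    return [s for s in subsets if not any(t < s for t in subsets)]

print(s_repairs(D))     # the two s-repairs of Example 4, modulo T
print(s_conflicts(D))   # the two s-conflicts of Example 3
```

The exponential enumeration is only for illustration; Section 5 of the paper is precisely about the complexity of doing better.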
Example 4. Π and $\mathcal { D }$ from Example 2 have two s-repairs:
$$
\begin{array}{rl}
\mathcal{R}_1 = & \mathcal{T} \cup \{\mathsf{NoFever}(a)@[0,32]\}\ \text{and}\\
\mathcal{R}_2 = & \mathcal{T} \cup \{\mathsf{Fever}(a)@[14,18],\ \mathsf{Fever}(a)@[29,34]\}\ \text{with}\\
\mathcal{T} = & \{\mathsf{PositiveAntiD}(a)@\{-90\},\ \mathsf{GetBlood}(a,b)@[24,26]\}.
\end{array}
$$
Every $p$ -repair $\mathcal { R }$ is such that $\mathcal { I } \subseteq ^ { p } \mathcal { R }$ with
$$
\begin{array} { c } { \mathcal { I } = \mathcal { T } \cup \{ \mathsf { N o F e v e r } ( a ) \ @ [ 0 , 1 4 ) , \mathsf { N o F e v e r } ( a ) @ ( 1 8 , 2 9 ) , } \\ { \mathsf { F e v e r } ( a ) @ ( 3 2 , 3 4 ] \} } \end{array}
$$
and for every $t \in [14,18] \cup [29,32]$, either $\mathsf{Fever}(a)@\{t\}$ or $\mathsf{NoFever}(a)@\{t\}$ is pointwise included in $\mathcal{R}$. Finally, every $i$-repair $\mathcal{R}$ is such that $\mathcal{I} \subseteq^p \mathcal{R}$ and contains:
• either two facts $\mathsf{NoFever}(a)@[0,t\rangle$, $\mathsf{Fever}(a)@\langle t,34]$, where $\rangle,\langle$ are either $],($ or $),[$ and $t \in [29,32]$;

• or three facts $\mathsf{NoFever}(a)@[0,t\rangle$, $\mathsf{Fever}(a)@\langle t,18]$, and $\mathsf{Fever}(a)@[29,34]$, where $t \in [14,18]$,
  – $\rangle,\langle$ are either $],($ or $),[$, and
  – if $t = 18$, then $\rangle,\langle$ are $),[$;

• or three facts $\mathsf{Fever}(a)@[14,t_1\rangle$, $\mathsf{NoFever}(a)@\langle t_1,t_2\rangle'$, $\mathsf{Fever}(a)@\langle' t_2,34]$, where $t_1 \in [14,18]$, $t_2 \in [29,32]$,
  – $\rangle,\langle$ and $\rangle',\langle'$ are either $],($ or $),[$, and
  – if $t_1 = 14$, then $\rangle,\langle$ are $],($.
We can now extend the definitions of the brave, CQA and intersection semantics to use different kinds of repairs.
Definition 5. Consider a DatalogMTL query $(\Pi, q(\vec{v}, r))$, dataset $\mathcal{D}$, tuple $\vec{c}$ of constants from $\mathcal{D}$ with $|\vec{c}| = |\vec{v}|$, and interval $\iota$. Given $x \in \{p,i,s\}$ such that $x\mathit{Rep}(\mathcal{D},\Pi) \neq \emptyset$, we say that $\vec{c}$ is an answer to $(\Pi, q(\vec{v}, r))$ under

• $x$-brave semantics, written $(\mathcal{D},\Pi) \models^x_{\mathit{brave}} q(\vec{c},\iota)$, if $(\mathcal{R},\Pi) \models q(\vec{c},\iota)$ for some $\mathcal{R} \in x\mathit{Rep}(\mathcal{D},\Pi)$;

• $x$-CQA semantics, written $(\mathcal{D},\Pi) \models^x_{\mathit{CQA}} q(\vec{c},\iota)$, if $(\mathcal{R},\Pi) \models q(\vec{c},\iota)$ for every $\mathcal{R} \in x\mathit{Rep}(\mathcal{D},\Pi)$;

• $x$-intersection semantics, written $(\mathcal{D},\Pi) \models^x_{\cap} q(\vec{c},\iota)$, if $(\mathcal{I},\Pi) \models q(\vec{c},\iota)$ where $\mathcal{I} = \bigsqcap_{\mathcal{R} \in x\mathit{Rep}(\mathcal{D},\Pi)} \mathcal{R}$.
Proposition 1. For every query $(\Pi, q(\vec{v},r))$, dataset $\mathcal{D}$, tuple of constants $\vec{c}$, and interval $\iota$, $(\mathcal{D},\Pi) \models^x_{\cap} q(\vec{c},\iota)$ implies $(\mathcal{D},\Pi) \models^x_{\mathit{CQA}} q(\vec{c},\iota)$, which implies $(\mathcal{D},\Pi) \models^x_{\mathit{brave}} q(\vec{c},\iota)$. None of the converse implications holds.
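Once repairs are materialized as sets of timepoints per ground atom, the three semantics differ only in how they quantify over repairs. A sketch on the integer timeline, with entailment simplified to membership in already rule-closed repairs (the repairs below are hand-built from the $s$-repairs of Example 4, with $[-90,\infty)$ truncated to a finite window as our simplification):

```python
def entails(repair, atom, t):
    """Atomic entailment: atom holds at t in the repair (rule closure
    is assumed to have been applied already; a simplification)."""
    return t in repair.get(atom, set())

def brave(repairs, atom, t):          # holds in SOME repair
    return any(entails(r, atom, t) for r in repairs)

def cqa(repairs, atom, t):            # holds in EVERY repair
    return all(entails(r, atom, t) for r in repairs)

def intersection(repairs, atom, t):   # holds in the pointwise intersection
    common = set.intersection(*(r.get(atom, set()) for r in repairs))
    return t in common

# s-repairs of Example 4 on integers, closed under the AntiDRisk rule only;
# [-90, 100] stands in for [-90, oo).
R1 = {"AntiDRisk": set(range(-90, 101)), "NoFever": set(range(0, 33))}
R2 = {"AntiDRisk": set(range(-90, 101)), "Fever": set(range(29, 35))}
repairs = [R1, R2]
print(cqa(repairs, "AntiDRisk", 0))   # True
print(brave(repairs, "Fever", 30))    # True
print(cqa(repairs, "Fever", 30))      # False
```

The three outcomes illustrate Proposition 1: intersection entailment implies CQA, which implies brave, but not conversely.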
Example 5. Consider $\Pi$ and $\mathcal{D}$ from Example 2. By examining the $s$-repairs given in Example 4, we can check that:
• $(\mathcal{D}, \Pi) \models^s_{\cap} \mathsf{AntiDRisk}(a)@[-90, \infty)$,
• $(\mathcal{D}, \Pi) \not\models^s_{brave} \mathsf{FevEp}(a)@\{t\}$ for every $t \in \mathbb{T}$,
• $(\mathcal{D}, \Pi) \not\models^s_{brave} \mathsf{PotFnhtr}(a)@\{t\}$ for every $t \in \mathbb{T}$.
With the $p$-repairs (Example 4), we obtain that:
• $(\mathcal{D}, \Pi) \models^p_{\cap} \mathsf{AntiDRisk}(a)@[-90, \infty)$,
• $(\mathcal{D}, \Pi) \models^p_{\cap} \mathsf{FevEp}(a)@(32, 34]$,
• $(\mathcal{D}, \Pi) \models^p_{brave} \mathsf{PotFnhtr}(a)@\{t\}$ for all $t \in [29, 30]$,
• $(\mathcal{D}, \Pi) \not\models^p_{CQA} \mathsf{PotFnhtr}(a)@\{t\}$ for every $t \in \mathbb{T}$.
From the form of the $i$-repairs (Example 4), we obtain that:
• $(\mathcal{D}, \Pi) \models^i_{\cap} \mathsf{AntiDRisk}(a)@[-90, \infty)$,
• $(\mathcal{D}, \Pi) \models^i_{brave} \mathsf{FevEp}(a)@[29, 34]$,
• $(\mathcal{D}, \Pi) \not\models^i_{CQA} \mathsf{FevEp}(a)@\{t\}$ for each $t \in \mathbb{T}$,
• $(\mathcal{D}, \Pi) \models^i_{brave} \mathsf{PotFnhtr}(a)@\{t\}$ for all $t \in [29, 30]$,
• $(\mathcal{D}, \Pi) \not\models^i_{CQA} \mathsf{PotFnhtr}(a)@\{t\}$ for each $t \in \mathbb{T}$.
# 4 Properties of the Framework
We study properties of $x$ -conflicts, $x$ -repairs, and semantics based upon them. The results hold for $\mathbb { T } = \mathbb { Q }$ and $\mathbb { T } = \mathbb { Z }$ .
# 4.1 Properties of Repairs and Conflicts
We will consider in particular the following properties, which are well known in the case of atemporal knowledge bases.
Definition 6. We say that ${ \mathsf { P } } _ { i }$ holds if it holds for every dataset $\mathcal { D }$ (in normal form) and (consistent) program $\Pi$ .
$\mathsf { P } _ { 1 }$ : $x R e p ( \mathcal { D } , \Pi ) \neq \varnothing .$
${ \mathsf { P } } _ { 2 }$ : $\mathcal { D }$ is Π-inconsistent iff xConf $( \mathcal { D } , \Pi ) \neq \emptyset$ .
${ \mathsf { P } } _ { 3 }$ : $x R e p ( \mathcal { D } , \Pi )$ and xConf $( \mathcal { D } , \Pi )$ are finite.
$\mathsf { P } _ { 4 }$ : Every $B \in x R e p ( { D , \Pi } ) \cup x C o n f ( { D , \Pi } )$ is finite.
${ \mathsf { P } } _ { 5 }$ : For every fact $\alpha \ @ \iota$ pointwise included in $\mathcal { D }$ , $\alpha \ @ \iota$ is pointwise included in every $x$ -repair of $\mathcal { D }$ w.r.t. Π iff $\alpha { \ @ \iota }$ has an empty pointwise intersection with every $x$ -conflict of $\mathcal { D }$ w.r.t. Π.
The notions based on $\subseteq^s$ have all these properties, while those based on $\subseteq^p$ have none of them, and those based on $\subseteq^i$ have only one ($i$-repairs and $i$-conflicts are finite by Lemma 1).
Proposition 2. Properties ${ \mathsf { P } } _ { 1 } { \mathsf { { - } } } { \mathsf { P } } _ { 5 }$ hold for $x = s$ .
Corollary 1. $\begin{array} { r } { \bigcap _ { \mathcal { R } \in s R e p ( \mathcal { D } , \Pi ) } \mathcal { R } = \mathcal { D } \setminus \bigcup _ { \mathcal { C } \in s C o n f ( \mathcal { D } , \Pi ) } \mathcal { C } . } \end{array}$
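Corollary 1 can be spot-checked by brute force on an atemporal toy instance. In the sketch below (ours, for illustration), the `consistent` oracle is a hypothetical stand-in for the DatalogMTL consistency check, and $s$-repairs/$s$-conflicts are computed as maximal consistent / minimal inconsistent subsets:

```python
from itertools import combinations

def subsets(s):
    return [set(c) for k in range(len(s) + 1) for c in combinations(sorted(s), k)]

def s_repairs(D, consistent):
    cons = [S for S in subsets(D) if consistent(S)]
    return [S for S in cons if not any(S < T for T in cons)]      # maximal consistent

def s_conflicts(D, consistent):
    incons = [S for S in subsets(D) if not consistent(S)]
    return [S for S in incons if not any(T < S for T in incons)]  # minimal inconsistent

# Facts A, B, C with the single constraint ⊥ ← A ∧ B (C is harmless).
D = {"A", "B", "C"}
consistent = lambda S: not {"A", "B"} <= S

reps = s_repairs(D, consistent)     # {A, C} and {B, C}
confs = s_conflicts(D, consistent)  # only {A, B}
# Corollary 1 on this instance: ∩ repairs = D \ ∪ conflicts
assert set.intersection(*reps) == D - set().union(*confs)
```

The enumeration is exponential, of course; it is only meant to make the set identity in Corollary 1 tangible.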
Proposition 3. None of the properties ${ \mathsf { P } } _ { 1 } { \mathsf { { - P } } } _ { 5 }$ hold for $x = p$ .
For $x = i$ , $\mathsf { P } _ { 4 }$ holds but properties ${ \mathsf { P } } _ { 1 } { \mathsf { - P } } _ { 3 }$ and $\mathsf { P } _ { 5 }$ do not.
In what follows, we will provide the counterexamples used to prove Proposition 3, as well as additional examples that illustrate the properties of $x$ -repairs and $x$ -conflicts.
# Existence of $p$ - and $i$ -Repairs and Conflicts
A major difference between repairs and conflicts based on $\subseteq^s$ and those based on $\subseteq^p$ or $\subseteq^i$ is that the latter need not exist.
Example 6. Consider the following dataset and program.
$$
\mathcal{D} = \{ P@(0, \infty) \} \qquad \Pi = \{ \perp \leftarrow \boxplus_{(0, \infty)} P \}
$$
There is no $p$- or $i$-repair and no $p$- or $i$-conflict of $\mathcal{D}$ w.r.t. $\Pi$. For $x \in \{p, i\}$, every $\Pi$-inconsistent $\mathcal{C} \subseteq^x \mathcal{D}$ in normal form is of the form $\{P@\langle t, \infty)\}$. Since $\mathcal{C}' = \{P@(t+1, \infty)\}$ is $\Pi$-inconsistent and $\mathcal{C}' \subset^x \mathcal{C}$, $\mathcal{C}$ is not an $x$-conflict.
Every $\mathcal{R} \subseteq^i \mathcal{D}$ is either empty (hence not an $i$-repair since, e.g., $\{P@\{1\}\}$ is $\Pi$-consistent) or of the form $\{P@\langle t_1, t_2\rangle\}$ with $\langle t_1, t_2\rangle \neq \emptyset$. If $t_2 = \infty$, $\mathcal{R}$ is $\Pi$-inconsistent. Otherwise, $\mathcal{R}' = \{P@\langle t_1, t_2+1\rangle\}$ is $\Pi$-consistent and $\mathcal{R} \subset^i \mathcal{R}' \subseteq^i \mathcal{D}$. In both cases, $\mathcal{R}$ is not an $i$-repair.
For every $\mathcal{R} \subseteq^p \mathcal{D}$ in normal form, if there is at most one $t \in (0, \infty)$ such that $\mathcal{R} \not\models P@\{t\}$, then $\mathcal{R}$ entails $P@(t', \infty)$ for some $t'$, so $\mathcal{R}$ is $\Pi$-inconsistent. Hence, for every $\Pi$-consistent $\mathcal{R} \subseteq^p \mathcal{D}$, there exist $t_1, t_2 \in (0, \infty)$ such that $t_1 < t_2$, $\mathcal{R} \not\models P@\{t_1\}$, and $\mathcal{R} \not\models P@\{t_2\}$. However, $\mathcal{R}' = \mathcal{R} \cup \{P@\{t_1\}\}$ is then $\Pi$-consistent and $\mathcal{R} \subset^p \mathcal{R}' \subset^p \mathcal{D}$, so $\mathcal{R}$ is not a $p$-repair.
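The two containment relations at work here can be made concrete with a small sketch (our own encoding, not from the paper) for facts over closed integer intervals: the interval-wise relation shrinks a fact's interval to a sub-interval, while the pointwise relation only requires that every retained timepoint was entailed by the original dataset:

```python
def sub_interval(inner, outer):
    # closed integer intervals as (lo, hi) pairs
    (a, b), (c, d) = inner, outer
    return c <= a and b <= d

def i_contained(R, D):
    # interval-wise: each fact of R shrinks the interval of the same atom in D
    return all(atom in D and sub_interval(iv, D[atom]) for atom, iv in R.items())

def p_contained(R, D):
    # pointwise: each atom of R keeps an arbitrary set of points inside D's interval
    return all(atom in D and all(D[atom][0] <= t <= D[atom][1] for t in pts)
               for atom, pts in R.items())

D = {"P": (0, 10)}
assert i_contained({"P": (2, 5)}, D)        # a shrunk interval
assert p_contained({"P": {1, 4, 9}}, D)     # scattered points survive pointwise
assert not i_contained({"P": (5, 12)}, D)   # sticks out of D's interval
```

The pointwise relation admits strictly more subsets (e.g. scattered sets of points), which is exactly what allows the infinite $p$-repairs and $p$-conflicts seen later in this section.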
Example 7 shows that there is no relationship between the existence of $x$ -conflicts and the existence of $x$ -repairs.
Example 7. Let $\mathcal{D}_c = \mathcal{D} \cup \{R@\{0\}\}$ and $\Pi_c = \Pi \cup \{\perp \leftarrow R\}$ with $\mathcal{D}$ and $\Pi$ from Example 6. We can show as in Example 6 that for $x \in \{p, i\}$, there is no $x$-repair of $\mathcal{D}_c$ w.r.t. $\Pi_c$. However, $\{R@\{0\}\}$ is an $x$-conflict of $\mathcal{D}_c$ w.r.t. $\Pi_c$. Now, let
$$
\begin{array}{rl} & \mathcal{D}_r = \{ P@[0, \infty), Q@\{0\} \} \\ & \Pi_r = \{ \perp \leftarrow Q \land \oplus_{(0, \infty)} \boxplus_{(0, \infty)} P \}. \end{array}
$$
For $x \in \{p, i\}$, there is no $x$-conflict of $\mathcal{D}_r$ w.r.t. $\Pi_r$. Indeed, every $\Pi_r$-inconsistent $\mathcal{C} \subseteq^p \mathcal{D}_r$ has to be such that $\mathcal{C} \models P@(t, \infty)$ for some $t > 0$, and none of such $\mathcal{C}$ is minimal w.r.t. $\subseteq^x$. Yet, $\{P@[0, \infty)\}$ is an $x$-repair of $\mathcal{D}_r$ w.r.t. $\Pi_r$.
The next examples show there is no relationship between the existence of $p$ -repairs and the existence of $i$ -repairs, nor between existence of $p$ -conflicts and existence of $i$ -conflicts.
Example 8. The following $\mathcal{D}_i$ and $\Pi_i$ have no $p$-repair (cf. Example 6) but $\{P@(-2, 0), Q@\{0\}\}$ is an $i$-repair.
$$
\begin{array}{rl} & \mathcal{D}_i = \{ P@(-2, \infty), Q@\{0\} \} \\ & \Pi_i = \{ \perp \leftarrow \boxplus_{(0, \infty)} P, \ \perp \leftarrow Q \land P \} \end{array}
$$
In the other direction, let $\mathcal { D } _ { p } = \{ P \ @ ( - \infty , \infty ) , Q @ \{ 0 \} \}$ and
$$
\begin{array}{rl}
\Pi_p = \{ & \perp \leftarrow \boxminus_{[0, \infty)} P, \ \perp \leftarrow \boxplus_{[0, \infty)} P, \ \perp \leftarrow P \land Q, \\
& \perp \leftarrow Q \land \boxminus_{(0, 10)} P \land \oplus_{[10, \infty)} P, \\
& \perp \leftarrow Q \land \boxplus_{(0, 10)} P \land \ominus_{[10, \infty)} P \}.
\end{array}
$$
One can check that $\{Q@\{0\}, P@(-10, 0), P@(0, 10)\}$ is a $p$-repair, but one can show that there is no $i$-repair.
Example 9. $\mathcal{D}_i = \{P@[0, \infty), Q@\{0\}\}$ is an $i$-conflict of itself w.r.t. $\Pi_i = \{\perp \leftarrow P \land Q \land \oplus_{(0, \infty)} \boxplus_{(0, \infty)} P\}$. However, there is no $p$-conflict of $\mathcal{D}_i$ w.r.t. $\Pi_i$. Indeed, every $\Pi_i$-inconsistent dataset $\mathcal{C} \subseteq^p \mathcal{D}_i$ in normal form has the form $\{Q@\{0\}, P@\{0\}, P@\langle t, \infty)\}$, and $\{Q@\{0\}, P@\{0\}, P@\langle t+1, \infty)\}$ is also $\Pi_i$-inconsistent.
In the other direction, let $\mathcal { D } _ { p } = \{ P @ [ 0 , \infty ) , Q @ \{ 0 \} \}$ and
$$
\Pi_p = \{ \perp \leftarrow \boxplus_{(0, \infty)} P, \ \perp \leftarrow Q \land \boxplus_{[0, \infty)} \oplus_{[0, 1)} P \}.
$$
One can easily check that $\{Q@\{0\}\} \cup \{P@\{k\} \mid k \in \mathbb{N}\}$ is a $p$-conflict, but one can show that there is no $i$-conflict.
# Size and Number of $p$ - and $i$ -Repairs and Conflicts
It follows from Lemma 1 that the $i$ -repairs and $i$ -conflicts of a dataset $\mathcal { D }$ w.r.t. a program $\Pi$ contain at most as many facts as $\mathcal { D }$ , hence are finite. In contrast, we have seen in Example 2 that a $p$ -repair may be infinite. Example 10 shows that some datasets have only infinite $p$ -repairs w.r.t. some programs, and Example 11 shows a similar result for $p$ -conflicts.
Example 10. Consider the following dataset and program.
$$
\mathcal{D} = \{ P@(0, \infty) \} \qquad \Pi = \{ \perp \leftarrow \boxplus_{[0, 2]} P \}
$$
There exist $p$ -repairs of $\mathcal { D }$ w.r.t. Π, such as $\{ P @ ( 2 k , 2 k + 2 ) \mid$ $k \in \mathbb { N } \}$ , but one can show that they are all infinite.
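As a finite spot-check of Example 10 (ours; a half-integer grid is used as a stand-in for $\mathbb{Q}$), the infinite $p$-repair $\{P@(2k, 2k+2) \mid k \in \mathbb{N}\}$ never satisfies the rule body $\boxplus_{[0,2]} P$: every closed window $[t, t+2]$ contains an even integer $2k$, at which $P$ does not hold.

```python
from fractions import Fraction

def holds_P(x):
    # x lies in some (2k, 2k+2) with k ∈ ℕ iff x > 0 and x is not an even integer
    return x > 0 and x % 2 != 0

for n in range(0, 33):                                 # t = 0, 0.5, ..., 16
    t = Fraction(n, 2)
    window = [t + Fraction(m, 2) for m in range(5)]    # samples of [t, t+2]
    # the body ⊞_[0,2] P fails at t: some sample point falsifies P
    assert not all(holds_P(x) for x in window)
```

Sampling at half-integer granularity suffices here because every length-2 closed window contains two consecutive integers, one of which is even.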
Example 11. Consider the following dataset and program.
$$
\begin{array}{rl} & \mathcal{D} = \{ P@[0, \infty), Q@\{0\} \} \\ & \Pi = \{ \perp \leftarrow Q \land \boxplus_{[0, \infty)} \oplus_{[0, 2)} P \} \end{array}
$$
There are $p$-conflicts of $\mathcal{D}$ w.r.t. $\Pi$, such as $\{Q@\{0\}\} \cup \{P@\{2k\} \mid k \in \mathbb{N}\}$, but one can show that they are all infinite.
Moreover, for both $x = i$ and $x = p$ , there can be infinitely many $x$ -repairs $/ x$ -conflicts:
Example 12. The following $\mathcal { D }$ and $\Pi$ have infinitely many $p$ - and $i$ - repairs and conflicts even if the timeline is $( \mathbb { Z } , \leq )$ :
$$
\mathcal{D} = \{ P@[0, \infty), Q@[0, \infty) \} \quad \Pi = \{ \perp \leftarrow P \land Q \}.
$$
Indeed, for every $t \in [ 0 , \infty )$ , $\{ P @ \{ t \} , Q @ \{ t \} \}$ is a $p$ - and an $i$ -conflict, and $\{ P @ [ 0 , t ) , Q @ [ t , \infty ) \}$ is a $p$ - and an $i$ -repair.
# Absence of Link Between $p / i$ - Repairs and Conflicts
Example 13 shows that a fact may be pointwise included in all $p$-repairs (or all $i$-repairs) while also being pointwise included in some $p$-conflict (or $i$-conflict), and, symmetrically, that a fact may have an empty pointwise intersection with every $p$-conflict (or $i$-conflict) but also with some $p$-repair (or $i$-repair).
Example 13. Consider $\mathcal{D}_i$ and $\Pi_i$ defined in Example 8. There is only one $i$-repair, $\{P@(-2, 0), Q@\{0\}\}$, but $Q@\{0\}$ belongs to the $i$-conflict $\{P@\{0\}, Q@\{0\}\}$. Symmetrically, $P@(0, \infty)$ has an empty intersection with every $i$-conflict but also with every $i$-repair. Indeed, $\{P@(0, \infty)\}$ is $\Pi_i$-inconsistent but is not minimal w.r.t. $\subseteq^i$.
For the $p$-case, we first consider again $\mathcal{D}_i$ but extend $\Pi_i$ with $\perp \leftarrow Q \land \oplus_{[0, \infty)} P$. Now $\{P@(-2, 0), Q@\{0\}\}$ is the only $p$-repair but $\{P@\{0\}, Q@\{0\}\}$ is a $p$-conflict, so $Q@\{0\}$ is in all $p$-repairs and in some $p$-conflict. For the other direction, consider $\mathcal{D} = \{P@[0, \infty), Q@\{0\}, R@\{0\}\}$ and
$$
\Pi = \{ \perp \leftarrow P \land Q \land \oplus_{(0, \infty)} \boxplus_{(0, \infty)} P, \ \perp \leftarrow R \}.
$$
The only $p$-conflict of $\mathcal{D}$ w.r.t. $\Pi$ is $\{R@\{0\}\}$ (cf. Example 9), so $Q@\{0\}$ has an empty intersection with every $p$-conflict. Yet, $\{P@[0, \infty)\}$ is a $p$-repair that does not contain $Q@\{0\}$.
# Case of Bounded-Interval Datasets over $\mathbb { Z }$
We have seen that $p$ - and $i$ -repairs and conflicts need not exist, and even when they do, they may be infinite in size and/or number. Moreover, this holds not only for the dense timeline $( \mathbb { Q } , \leq )$ , but also for $( \mathbb { Z } , \leq )$ . We observe, however, that the negative results for $\mathbb { Z }$ crucially rely upon using $\infty$ or $- \infty$ as endpoints. This leads us to explore what happens when we adopt $\mathbb { T } = \mathbb { Z }$ but restrict datasets to only use bounded intervals (i.e., finite integers as endpoints).
The following result summarizes the properties of repairs and conflicts in this setting, showing in particular that restricting to bounded-interval datasets suffices to ensure existence and finiteness of $p$ - and $i$ -repairs and conflicts:
Proposition 4. When $\mathbb { T } = \mathbb { Z }$ and datasets $\mathcal { D }$ are restricted to only use bounded intervals, ${ \mathsf { P } } _ { 1 } { \mathsf { - P } } _ { 5 }$ hold for $x = p$ , ${ \mathsf { P } } _ { 1 } { \mathsf { - P } } _ { 4 }$ hold for $x = i ,$ , and $\mathsf { P } _ { 5 }$ does not hold for $x = i$ .
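To illustrate $\mathsf{P}_1$ and $\mathsf{P}_3$ from Proposition 4 on a concrete bounded instance, the following brute-force sketch (ours; exponential, illustration only) enumerates all pointwise subsets of $\mathcal{D} = \{P@[0,1], Q@[0,1]\}$ over $\mathbb{Z}$ under the single rule $\perp \leftarrow P \land Q$:

```python
from itertools import combinations

points = {("P", 0), ("P", 1), ("Q", 0), ("Q", 1)}   # pointwise view of D

def consistent(S):
    # ⊥ ← P ∧ Q: P and Q may never hold at the same timepoint
    return not any((("P", t) in S) and (("Q", t) in S) for t in (0, 1))

all_subsets = [set(c) for k in range(len(points) + 1)
               for c in combinations(sorted(points), k)]
cons = [S for S in all_subsets if consistent(S)]
p_repairs = [S for S in cons if not any(S < T for T in cons)]   # maximal consistent

assert p_repairs                 # P1: at least one p-repair exists
assert len(p_repairs) == 4       # finitely many: one choice of P vs Q per timepoint
```

Boundedness is what makes the pointwise view finite here; with an unbounded interval the same enumeration would be impossible, in line with the negative results above.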
# 4.2 Comparing the Different Semantics
The remaining examples show the following proposition.
Proposition 5. For every $Sem \in \{brave, CQA, \cap\}$ and $x \neq y \in \{p, i, s\}$, there exist $\mathcal{D}$ and $\Pi$ such that $\mathcal{D}$ has $x$- and $y$-repairs w.r.t. $\Pi$, $(\mathcal{D}, \Pi) \models^y_{Sem} q(\vec{c}, \iota)$, and $(\mathcal{D}, \Pi) \not\models^x_{Sem} q(\vec{c}, \iota)$.
Example 14 shows the case $y = p$ and $x \in \{ i , s \}$ .
Example 14. Consider our running example and recall from Example 5 that $(\mathcal{D}, \Pi) \models^p_{\cap} \mathsf{FevEp}(a)@\{34\}$ (hence $(\mathcal{D}, \Pi) \models^p_{CQA} \mathsf{FevEp}(a)@\{34\}$), while $(\mathcal{D}, \Pi) \not\models^x_{CQA} \mathsf{FevEp}(a)@\{34\}$ (hence $(\mathcal{D}, \Pi) \not\models^x_{\cap} \mathsf{FevEp}(a)@\{34\}$) for $x \in \{i, s\}$. Moreover, if we consider $\Pi'$ that extends $\Pi$ with $Q(x) \leftarrow \mathsf{Fever}(x)\,\mathcal{U}_{(0,4)}\,(\mathsf{NoFever}(x)\,\mathcal{U}_{(0,4)}\,\mathsf{Fever}(x))$, then $(\mathcal{D}, \Pi') \models^p_{brave} Q(a)@\{14\}$ but $(\mathcal{D}, \Pi') \not\models^x_{brave} Q(a)@\{14\}$ for $x \in \{i, s\}$.
The case $y = s$ and $x \in \{p, i\}$ is shown by Example 15 for $Sem \in \{\cap, CQA\}$ and Example 16 for $Sem = brave$.
Example 15. Consider $\mathcal{D} = \{P@[0, 10], Q@\{5\}\}$ and
$$
\Pi = \{ \perp \leftarrow P \land Q, \ \perp \leftarrow \boxplus_{[0, 10]} P \}.
$$
It is easy to check that $\{Q@\{5\}\}$ is the only $s$-repair, so that $(\mathcal{D}, \Pi) \models^s_{\cap} Q@\{5\}$. However, $\{P@(0, 10]\}$ is a $p$- and $i$-repair, so for $x \in \{p, i\}$, $(\mathcal{D}, \Pi) \not\models^x_{CQA} Q@\{5\}$.
Example 16. Consider $\textstyle { \mathcal { D } } _ { r }$ and $\Pi _ { r }$ from Example 7.
$$
\begin{array}{rl} & \mathcal{D}_r = \{ P@[0, \infty), Q@\{0\} \} \\ & \Pi_r = \{ \perp \leftarrow Q \land \oplus_{(0, \infty)} \boxplus_{(0, \infty)} P \} \end{array}
$$
Since $\{Q@\{0\}\}$ is an $s$-repair, $(\mathcal{D}_r, \Pi_r) \models^s_{brave} Q@\{0\}$. However, for $x \in \{p, i\}$, one can show that the only $x$-repair is $\{P@[0, \infty)\}$. Hence $(\mathcal{D}_r, \Pi_r) \not\models^x_{brave} Q@\{0\}$.
Example 17 illustrates the case $y = i$ and $x = s$ for $Sem \in \{\cap, CQA\}$ and Example 18 shows this case for $Sem = brave$.
Example 17. In Example 16, the only $i$-repair is $\{P@[0, \infty)\}$, so $(\mathcal{D}_r, \Pi_r) \models^i_{\cap} P@[0, \infty)$. However, $\{Q@\{0\}\}$ is an $s$-repair, so $(\mathcal{D}_r, \Pi_r) \not\models^s_{CQA} P@[0, \infty)$.
Example 18. Consider our running example and recall from Example 5 that $(\mathcal{D}, \Pi) \models^i_{brave} \mathsf{FevEp}(a)@\{29\}$ while $(\mathcal{D}, \Pi) \not\models^s_{brave} \mathsf{FevEp}(a)@\{29\}$.
Example 19 illustrates the case $y = i$ and $x = p$ for $Sem \in \{\cap, CQA\}$ and Example 20 shows this case for $Sem = brave$.
Example 19. Let $\mathcal{D} = \{T@\{0\}, P@[0, 4], Q@[0, 4]\}$ and $\Pi = \{\perp \leftarrow P \land Q, \ R \leftarrow P\,\mathcal{U}_{(0,4)}\,Q\,\mathcal{U}_{(0,4)}\,P, \ \perp \leftarrow R \land T\}$. The $i$-repairs are of the form $\{T@\{0\}, P@[0, t\rangle, Q@\langle t, 4]\}$ or $\{T@\{0\}, Q@[0, t\rangle, P@\langle t, 4]\}$, so $(\mathcal{D}, \Pi) \models^i_{\cap} T@\{0\}$. However, $\mathcal{R} = \{P@[0, 1], Q@(1, 3), P@[3, 4]\}$ is a $p$-repair (note that $(\mathcal{R}, \Pi) \models R@\{0\}$, so $\mathcal{R} \cup \{T@\{0\}\}$ is $\Pi$-inconsistent). Hence $(\mathcal{D}, \Pi) \not\models^p_{CQA} T@\{0\}$.
Example 20. Consider $\mathcal { D } = \{ P @ [ 0 , \infty ) , Q @ \{ 5 \} \}$ and
$$
\Pi = \{ \perp \leftarrow P \land Q, \ \perp \leftarrow Q \land \oplus_{[0, \infty)} \boxplus_{[0, \infty)} P \}.
$$
Since $\{P@[0, 5), Q@\{5\}\}$ is an $i$-repair, $(\mathcal{D}, \Pi) \models^i_{brave} Q@\{5\}$. However, one can show that the only $p$-repair is $\{P@[0, \infty)\}$. Hence $(\mathcal{D}, \Pi) \not\models^p_{brave} Q@\{5\}$.
# 5 Data Complexity Analysis
We explore the computational properties of our inconsistency handling framework. Specifically, we analyze the data complexity of recognizing $x$ -conflicts and $x$ -repairs, generating a single $x$ -conflict or $x$ -repair, and testing query entailment under the $x$ -brave, $x$ -CQA, and $x$ -intersection semantics. For this initial study, we focus on cases where $x$ -repairs are guaranteed to exist: (i) $x = s$ , and (ii) bounded datasets over $\mathbb { Z }$ .
We recall that in DatalogMTL, consistency checking and query entailment are PSPACE-complete w.r.t. data complexity [Walega et al., 2019], and PSPACE-completeness holds for many fragments (such as core and linear) [Walega et al., 2020b] as well as for DatalogMTL over $\mathbb{Z}$ [Walega et al., 2020a]. We also consider some tractable fragments for which these tasks can be performed in PTIME w.r.t. data complexity: Datalog$_{nr}$MTL, DatalogMTL$^{\diamond-}_{core}$, and DatalogMTL$^{\diamond-}_{lin}$ (over $\mathbb{Q}$ or $\mathbb{Z}$) [Brandt et al., 2018; Walega et al., 2020b; Walega et al., 2020a].
All results stated in this section are w.r.t. data complexity, i.e. the input size is the size of $\mathcal { D }$ . We assume a binary encoding of numbers, with rationals given as pairs of integers.
# 5.1 Results for $s$ -Repairs and $s$ -Conflicts
We can obtain PSPACE upper bounds for all tasks by adapting known procedures for reasoning with subset repairs and conflicts in the atemporal setting, cf. [Bienvenu and Bourgaux, 2016]. Specifically, an $s$ -repair or $s$ -conflict can be generated by a greedy approach (add $/$ delete facts one by one while preserving (in)consistency), and query entailment under the three semantics can be done via a ‘guess and check’ approach.
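The greedy generation just described can be sketched as follows (ours, for illustration), assuming a consistency oracle over finite fact sets; in the paper this oracle is the PSPACE DatalogMTL consistency check, here it is simply a parameter:

```python
def greedy_s_repair(D, consistent):
    R = set()
    for fact in sorted(D):          # fixed order for determinism
        if consistent(R | {fact}):  # keep the fact if it preserves consistency
            R.add(fact)
    return R                        # a maximal consistent subset of D

def greedy_s_conflict(D, consistent):
    if consistent(D):
        return None                     # no conflict exists
    C = set(D)
    for fact in sorted(D):
        if not consistent(C - {fact}):  # drop the fact if inconsistency remains
            C.discard(fact)
    return C                            # a minimal inconsistent subset of D

# toy oracle encoding ⊥ ← P ∧ Q at time 1 (facts are opaque strings here)
consistent = lambda S: not {"P@1", "Q@1"} <= S
D = {"P@1", "Q@1", "R@2"}
assert greedy_s_repair(D, consistent) in ({"P@1", "R@2"}, {"Q@1", "R@2"})
assert greedy_s_conflict(D, consistent) == {"P@1", "Q@1"}
```

Each of the linearly many oracle calls runs in PSPACE, which gives the claimed PSPACE generation bounds.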
Proposition 6. For arbitrary DatalogMTL programs $\Pi$, (i) the size of any $B \in sConf(\mathcal{D}, \Pi) \cup sRep(\mathcal{D}, \Pi)$ is polynomially bounded in the size of $\mathcal{D}$, (ii) it can be decided in PSPACE whether $B \in sConf(\mathcal{D}, \Pi)$ or $B \in sRep(\mathcal{D}, \Pi)$, and (iii) a single $s$-conflict (resp. $s$-repair) can be generated in PSPACE. Moreover, for $Sem \in \{brave, CQA, \cap\}$, query entailment under $s$-$Sem$ is PSPACE-complete.
If we consider tractable DatalogMTL fragments, we obtain better bounds for the recognition and generation tasks:
Proposition 7. For tractable DatalogMTL fragments, the tasks of testing whether $B \in sConf(\mathcal{D}, \Pi)$ (resp. $B \in sRep(\mathcal{D}, \Pi)$) and generating a single $s$-conflict (resp. $s$-repair) can be done in PTIME.
We can use the PTIME upper bounds on recognizing $s$-repairs to obtain (co)NP upper bounds for query entailment in tractable DatalogMTL fragments. Moreover, for specific fragments, we can show these bounds are tight.
Proposition 8. For tractable DatalogMTL fragments: query entailment under $s$-brave (resp. $s$-CQA, $s$-intersection) semantics is in NP (resp. coNP). Matching lower bounds hold in Datalog$_{nr}$MTL and DatalogMTL$^{\diamond-}_{lin}$ (and in DatalogMTL$^{\diamond-}_{core}$ in the case of $s$-CQA). The lower bounds hold even for bounded datasets and $\mathbb{T} = \mathbb{Z}$.
Proof sketch. To illustrate, we provide the reduction from SAT used to show NP-hardness of $s$-brave semantics in Datalog$_{nr}$MTL. Given a CNF $\varphi = c_1 \wedge \ldots \wedge c_m$ over variables $v_1, \ldots, v_n$, consider the Datalog$_{nr}$MTL program and dataset:
$$
\begin{array}{rl}
\Pi' = \{ & N'(v) \leftarrow \ominus_{[0, \infty)} N(v), \ N'(v) \leftarrow \oplus_{[0, \infty)} N(v), \\
& \perp \leftarrow P(v) \land N'(v), \ Q' \leftarrow S \,\mathcal{U}_{(0, \infty)}\, M, \\
& S \leftarrow \ominus_{[0, 2)} P(v), \ S \leftarrow \ominus_{[0, 2)} N(v) \} \\
\mathcal{D}' = & \{ P(v_j)@\{2k\} \mid v_j \in c_k \} \cup \{ N(v_j)@\{2k\} \mid \neg v_j \in c_k \} \\
& \cup \ \{ M@\{2m+2\} \}
\end{array}
$$
Then $\varphi$ is satisfiable iff $(\mathcal{D}', \Pi') \models^s_{brave} Q'@\{2\}$.
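A brute-force companion to this reduction can make the "satisfiable iff brave-entailed" claim concrete. The sketch below follows our reading of the construction (not verbatim from the paper): since the rules spread $N(v)$ over all timepoints, a $\Pi'$-consistent subset keeps at most one polarity per variable, and $Q'@\{2\}$ is brave-entailed exactly when some such choice keeps a literal fact at every clause timepoint $2k$:

```python
from itertools import product

def sat_via_repairs(clauses, n_vars):
    # clauses: list of sets of non-zero ints (DIMACS-style literals)
    facts = [(("P" if lit > 0 else "N"), abs(lit), 2 * (k + 1))
             for k, cl in enumerate(clauses) for lit in cl]
    for assign in product([True, False], repeat=n_vars):
        # a consistent subset picks one polarity per variable
        kept = [(s, v, t) for (s, v, t) in facts if (s == "P") == assign[v - 1]]
        # Q'@{2} requires S throughout (2, 2m+2): every clause time is covered
        if all(any(t == 2 * (k + 1) for (_, _, t) in kept)
               for k in range(len(clauses))):
            return True
    return False

assert sat_via_repairs([{1, 2}, {-1}], 2) is True     # (v1 ∨ v2) ∧ ¬v1
assert sat_via_repairs([{1}, {-1}], 1) is False       # v1 ∧ ¬v1
```

The loop over assignments is exponential, of course; the point of the reduction is that a nondeterministic guess of one repair suffices, giving membership in NP.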
The hardness results for Datalog$_{nr}$MTL are somewhat surprising in view of the $\mathsf{AC}^0$ data complexity and FO($<$)-rewritability of query entailment in Datalog$_{nr}$MTL [Brandt et al., 2018], as a result from [Bienvenu and Rosati, 2013] shows how to transfer FO-rewritability results from classical to brave and intersection semantics. However, the latter result relies upon the fact that, in the considered setting of atemporal ontologies, the existence of a rewriting guarantees a data-independent bound on the size of minimal inconsistent subsets and minimal consistent query-entailing subsets. As the preceding reduction shows, such a property fails to hold in Datalog$_{nr}$MTL (observe that the minimal consistent query-entailing subsets of $\mathcal{D}'$ have size $m + 1$).
In DatalogMTL$^{\diamond-}_{core}$, by contrast, Walega et al. [2020b; 2020a] have shown that every minimal $\Pi$-inconsistent subset contains at most two facts, and query entailment can be traced back to a single fact. This is the key to our next result:
Proposition 9. DatalogMTL$^{\diamond-}_{core}$ query entailment under $s$-brave and $s$-intersection semantics is in PTIME.
For propositional DatalogMTL, we even get tractability for $s$ -CQA semantics – notable in view of the notorious intractability of CQA semantics even in restricted atemporal settings. The proof relies upon rather intricate automata constructions, which build upon and significantly extend those given in [Walega et al., 2020a] for consistency checking.
Proposition 10. When $\mathbb{T} = \mathbb{Z}$, propositional DatalogMTL query entailment under $s$-brave, $s$-CQA, and $s$-intersection semantics is in PTIME (more precisely, NC$^1$-complete).
# 5.2 Results for Bounded-Interval Datasets over $\mathbb { Z }$
We start by considering interval-based notions and observe that even if the binary encoding of endpoint integers leads to exponentially many choices for which sub-interval to retain for a given input fact, $i$ -conflicts and $i$ -repairs are of polynomial size and can be effectively recognized and generated. This allows us to establish the same general upper bounds for $x = i$ as we obtained for $x = s$ .
Proposition 11. When $\mathbb { T } = \mathbb { Z }$ and only bounded-interval datasets are considered, the results stated in Proposition 6 for the case $x = s$ hold in the case $x = i$ .
We further show that when we consider tractable fragments, one can tractably recognize or generate an $i$ -conflict, using binary search to identify optimal endpoints.
Proposition 12. For tractable DatalogMTL fragments: when $\mathbb { T } = \mathbb { Z }$ and only bounded-interval datasets are considered, it can be decided in PTIME whether $B \in \mathit { i C o n f } ( \mathcal { D } , \Pi )$ and $a$ single $i$ -conflict can be generated in PTIME.
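The binary-search idea behind Proposition 12 can be sketched as follows (ours, under the assumption, used only for illustration, that shrinking an interval from one side flips inconsistency monotonically; the oracle below is a hypothetical stand-in for the fragment's PTIME consistency check):

```python
def greatest_inconsistent(lo, hi, inconsistent):
    # greatest a in [lo, hi] with inconsistent(a), assuming inconsistent(lo)
    # holds and inconsistency is downward monotone in a (shrinking the
    # interval further eventually restores consistency)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if inconsistent(mid):
            lo = mid        # still inconsistent: tighten from the left
        else:
            hi = mid - 1    # consistent: overshot the optimal endpoint
    return lo

# stand-in oracle: {P@[a, 100]} is inconsistent iff the interval keeps length >= 30
inconsistent = lambda a: 100 - a >= 30
assert greatest_inconsistent(0, 100, inconsistent) == 70
```

With binary-encoded endpoints the search space is exponential in the input size, but only logarithmically many oracle calls are needed, which keeps the overall procedure in PTIME.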
The argument does not apply to $i$ -repairs, and we leave open the precise complexity of $i$ -repair recognition in this case (we only get a coNP upper bound). However, we can still obtain a tight complexity result for $i$ -brave semantics since we do not need to get a complete $i$ -repair in this case.
Proposition 13. For tractable DatalogMTL fragments: when $\mathbb{T} = \mathbb{Z}$ and only bounded-interval datasets are considered, query entailment under $i$-brave (resp. $i$-CQA, $i$-intersection) is in NP (resp. in $\Pi^p_2$). Lower NP (resp. coNP) bounds hold for Datalog$_{nr}$MTL and DatalogMTL$^{\diamond-}_{lin}$ (and for DatalogMTL$^{\diamond-}_{core}$ in the case of $i$-CQA semantics).
The situation for pointwise notions is starkly different:
Proposition 14. When $\mathbb{T} = \mathbb{Z}$ and only bounded-interval datasets are considered, there exist $\mathcal{D}$ and $\Pi$ such that every $B \in pConf(\mathcal{D}, \Pi)$ (resp. $B \in pRep(\mathcal{D}, \Pi)$) is exponentially large w.r.t. the size of $\mathcal{D}$.
We thus only obtain EXPSPACE complexity upper bounds.
Proposition 15. When $\mathbb{T} = \mathbb{Z}$ and only bounded-interval datasets are considered, all tasks considered in Proposition 6 for $x = s$ can be done in EXPSPACE in the case $x = p$. | In this paper, we explore the issue of inconsistency handling in DatalogMTL, an extension of Datalog with metric temporal operators. Since facts are associated with time intervals, there are different manners to restore consistency when they contradict the rules, such as removing facts or modifying their time intervals. Our first contribution is the definition of relevant notions of conflicts (minimal explanations for inconsistency) and repairs (possible ways of restoring consistency) for this setting and the study of the properties of these notions and the associated inconsistency-tolerant semantics. Our second contribution is a data complexity analysis of the tasks of generating a single conflict / repair and query entailment under repair-based semantics. | [
"cs.LO",
"cs.AI",
"cs.DB"
] |
# 1. Introduction
Object detection is indispensable for accurately identifying and localizing objects of interest in aerial imagery [5]. It plays a crucial role in various applications, such as environmental monitoring, urban planning, and rescue operations [1, 25, 35]. Most existing aerial detectors primarily focus on addressing the inherent challenges of aerial images and are limited to fixed categories and scenarios, which defines them as closed-set detectors. However, as the demand for more versatile applications increases, closed-set detectors become inadequate for meeting real-world requirements.
Figure 1. (a) Remote-sensing visual grounding focuses on precise object localization, corresponding to a single instance only, and lacks caption diversity due to its reliance on template-generated captions. (b) Open-vocabulary aerial detection is constrained by a limited number of aerial categories, which have only minimal semantic richness. (c) Open-set aerial detection supports multilevel descriptive detection, ranging from words to phrases, and ultimately to richly detailed sentences.
Recently, language-guided open-world object detection has garnered significant attention due to its alignment with real-world application requirements. Several studies [12, 16, 24, 30] have explored open-vocabulary aerial detection. CastDet [12] employs a multi-teacher architecture that leverages the superior image-text alignment capabilities inherited from pre-trained VLMs. OVA-DETR [24] proposes a lightweight open-vocabulary aerial detector that adopts a text-guided strategy to further enhance image-text alignment. These methods are constrained by the limited category diversity in aerial detection, which provides minimal semantic information. Besides, there is an approach that addresses this limitation from the dataset perspective: LAE-DINO [16] employs VLMs to expand the number of detectable categories, aiming to increase category diversity and enrich the semantic content of the detection text. Although these methods effectively equip models with open-vocabulary capabilities to overcome the category limitations of traditional aerial detectors, their practical applicability remains constrained by the weak semantic representation of categories, which are typically represented by a single word. In other words, there is still significant room for optimization.
Compared to the aerial domain, open-set object detection in natural scenes has advanced significantly [13, 21]. We note that this is primarily due to the abundance of grounding data available for natural scenes. For instance, Grounding DINO v1.5 provides robust open-set detection capability by training on over 20 million grounding samples. In contrast, aerial grounding data is scarce. Only a few attempts [13, 31] have been made to construct remote sensing visual grounding (RSVG) datasets by annotating detection data with captions, yet these datasets suffer from several limitations: 1) Lack of scene diversity: Dataset construction is restricted to images containing no more than five objects of the same category, to ensure the correct correspondence between captions and instances, resulting in only simplistic scenes. 2) Limited caption diversity: Captions are generated using fixed templates, restricting their variability. 3) Single-instance annotation: Current RSVG datasets solely emphasize precise localization, where each image-caption pair corresponds to a single instance. Including cases where a vague caption corresponds to multiple instances within an image is critical for practical applications. 4) Limited dataset scale: The largest available dataset comprises only 25,452 images and 48,952 image-caption pairs. These limitations make existing datasets inadequate for open-set aerial object detection, and the scarcity of large-scale, semantically rich grounding data remains a major bottleneck in advancing the field.
To bridge this gap, in this paper, we aim to lay the data foundation for open-set aerial object detection. Specifically, we propose the OS-W2S Label Engine, an automatic annotation pipeline capable of handling diverse scene annotations for aerial images. It is based on an open-source vision-language model, image-operation-based preprocessing, and BERT-based postprocessing. Using this label engine, we construct a novel large-scale benchmark dataset, called MI-OAD, to overcome the limitations of current RSVG data.
Key aspects include: 1) Scene Diversity: As depicted in Fig. 2, we introduced pre-processing steps (e.g., extracting foreground and instance regions) and post-processing steps (e.g., matching caption-instance pairs for each image) both before and after interactions with the VLM. This design enables the pipeline to effectively handle aerial images across various scenarios and ensure label quality. 2) Caption Diversity: Leveraging the robust vision-language capabilities of the VLM, we generate captions with varying levels of detail for each instance based on its attributes, thereby ensuring caption diversity. 3) Multi-instance Annotation: We aim to match varying numbers of instances to each caption based on its descriptive details during the post-processing steps. This process enables the generated data to meet diverse requirements in practical applications, accommodating both precise and approximate localization. 4) Dataset Scale: Using this label engine, we expanded eight widely used aerial detection datasets, yielding 163,023 images and 2 million image-caption pairs, which is 40 times larger than those available in existing RS grounding datasets.
In summary, the major contributions are as follows:
• OS-W2S Label Engine: To meet the fine-grained open-world detection requirements and address the scarcity of grounding data in the aerial domain, we aim to establish a data foundation for open-set aerial object detection. To achieve this, we propose the OS-W2S Label Engine, an automatic label annotation pipeline based on an open-source vision-language model and meticulously designed pre- and post-processing steps, ultimately enabling the annotation of diverse aerial scenes. Notably, this label engine requires only a machine equipped with eight RTX 4090 GPUs, each offering 24GB of memory.
• MI-OAD Dataset: Using the OS-W2S Label Engine, we expand existing aerial detection datasets with rich textual annotations, forming MI-OAD, the first benchmark dataset for open-set aerial detection. MI-OAD pioneers the exploration of open-set aerial object detection and includes three levels of annotations (words, phrases, and sentences) to address the limitations of existing RSVG datasets. Notably, this dataset contains 2 million image-caption pairs, which is 40 times larger than those of current RSVG datasets.
• Validation of Open-Set Aerial Detection: With slight modifications to YOLO-World and Grounding DINO—state-of-the-art open-set detectors originally designed for natural images—we enable their training on our proposed dataset. These adaptations for open-set aerial detection yield improvements of 29.5 in $AP_{50}$ and 33.7 in Recall@10, respectively, for sentence inputs under the zero-shot transfer to novel classes on the MI-OAD validation set.
# 2. Related Work
# 2.1. Open-set Object Detection
Open-set object detection (OSD), which refers to detecting objects based on arbitrary textual inputs, demonstrates significant potential due to its close alignment with real-world application needs. Several studies [3, 11, 13, 20, 29, 32] have demonstrated the feasibility of OSD in the natural image domain. GLIP [11] established a foundation for open-set detection by integrating object detection and grounding tasks. Building on this, models such as YOLO-World [3] and the Grounding DINO series [13, 20, 21] have made significant progress. Notably, Grounding DINO v1.5, trained on over 20 million images with grounding annotations, demonstrates exceptional open-set detection performance, underscoring the crucial role of large-scale grounding data.
Compared to the natural image domain, the development of open-set aerial object detection has lagged behind, primarily due to a lack of sufficient grounding data in aerial contexts. To bridge this gap, this paper aims to establish a data foundation for open-set aerial object detection.
# 2.2. Object Detection in Aerial Imagery
Aerial object detection can be broadly divided into two types: closed-set aerial detection and open-vocabulary aerial detection.
Closed-set aerial detection refers to predicting bounding boxes and corresponding categories for objects that have been seen during training. Several studies [4, 6, 8, 14, 28] have primarily focused on addressing the inherent challenges of RS images. For instance, models such as UFPMP-Det [6], ClustDet [28], and DMNet [8] employ a coarse-to-fine two-stage detection architecture to mitigate significant background interference and effectively detect tiny, densely distributed objects. However, these models are constrained by predefined training categories, making them suitable only for specific scenarios in real-world applications.
Open-vocabulary aerial detection marks a step towards meeting the demands of open-world aerial detection. It seeks to eliminate the category limitations inherent in closed-set detection by establishing a relationship between image features and category embeddings, rather than simply linking image features to category indices. Models such as CastDet [12], DescReg [30], and OVA-DETR [24] leverage the superior image-text alignment capabilities inherited from pre-trained Visual Language Models (VLMs) to enable open-vocabulary aerial detection capabilities. However, the performance of these models is constrained by a limited number of categories in aerial detection. Additionally, LAE-DINO [16] aims to address this limitation from a dataset perspective. It employs VLMs to expand the detection category set, thereby increasing category diversity and enriching the semantic content of the detection text.
Despite these advancements, current research in open-vocabulary aerial detection remains limited at the vocabulary level—relying on only a few words that offer scant semantic information. Compared with the natural image domain, open-set object detection in aerial images still has significant room for exploration and improvement.
# 2.3. Visual Grounding in Aerial Imagery
Visual grounding in remote sensing (RSVG) aims to locate objects based on natural language descriptions. Compared to closed-set object detection, which relies on fixed category labels, RSVG can process arbitrary descriptions to identify corresponding bounding boxes, offering greater flexibility and suitability for practical applications [10]. However, this flexibility also introduces additional complexity to the RSVG task. Currently, RSVG remains in its early stages of development, with only three publicly available datasets: RSVG-H [23], DIOR-RSVG [31], and OPT-RSVG [10]. Among these, RSVG-H comprises 4,239 RS images paired with 7,933 textual descriptions, each providing precise geographic distances (e.g., “Find a ground track field, located approximately 295 meters southeast of a baseball field.”). DIOR-RSVG, based on the DIOR dataset [9], makes use of tools such as HSV and OpenCV to extract instance attributes (e.g., geometric shapes and colors) and employs predefined templates to generate 38,320 image-caption pairs. Meanwhile, OPT-RSVG further enriches RSVG scenarios by combining three detection datasets (DIOR, HRRSD [33], and SPCD [2]) and follows the annotation process in [9] to produce 25,452 RS images with 48,952 image-caption pairs.
Nevertheless, compared to the abundance of grounding data for natural images, the amount of available aerial grounding data is extremely limited. This poses a significant barrier for data-driven open-set detection tasks. We observe that this issue stems from the inherent challenges in annotating aerial images, which often contain predominantly small objects and substantial background interference. Moreover, the captions in existing grounding datasets are typically generated through fixed templates, with each image-caption pair corresponding to a single instance annotation.
To address these limitations and lay the data foundation for open-set aerial object detection, this paper proposes the OS-W2S label engine and constructs MI-OAD, a large-scale benchmark dataset for open-set aerial detection tasks.
Figure 2. Overview of the OS-W2S Label Engine: data collection (merging detection datasets, standardizing formats, aligning resolution), data preprocessing (foreground and instance region extraction, absolute position division), instance-level sentence caption generation through a three-step dialogue with the VLM, and data postprocessing (phrase caption generation and caption-instance matching). Size attributes use box-area-ratio thresholds [0.0005, 0.001, 0.01, 0.2]; horizontal position labels are ['Far Left', 'Left', 'Center', 'Right', 'Far Right'] and vertical position labels are ['Top', 'Upper Middle', 'Middle', 'Lower Middle', 'Bottom'].
# 3. Dataset Construction
# 3.1. Motivation
In the aerial detection domain, current research primarily focuses on open-vocabulary detection, aiming to eliminate the limitations imposed by predefined categories. Although these studies have made notable progress, they remain confined to the vocabulary level, which provides only minimal semantic information and consequently limits their applicability.
Developing open-set aerial detection is imperative to enable more flexible detection, thereby meeting the rapidly growing demands of fine-grained, open-world aerial detection. We observe that open-set detection in natural images has advanced significantly more than in the aerial detection domain. This disparity is primarily due to the extreme scarcity of aerial grounding data compared to that available for natural images.
To fill this gap, we propose OS-W2S Label Engine, an automatic annotation pipeline capable of handling diverse scene annotations for aerial images, and construct MI-OAD, a large-scale benchmark dataset for open-set aerial object detection tasks, thereby laying a robust data foundation for future research in this area.
# 3.2. Design of OS-W2S Label Engine
As shown in Fig. 2, the OS-W2S Label Engine consists of the following four components:
• Data Collection: We collect large-scale aerial detection data, which provides instance coordinates and class labels, serving as a solid foundation for subsequent annotation processes and ensuring both dataset scale and scene diversity.
• Data Preprocessing: Before interacting with Vision-Language Models (VLMs), we implement preprocessing techniques, such as foreground and instance-level region extraction, to enable VLMs to effectively handle diverse scene annotations in aerial imagery.
Table 1. Overview of the collected aerial detection datasets, highlighting variations in image quantity, instance annotations, and category diversity. The number of images and instances represent those after cropping to a uniform resolution, as the original images had large and inconsistent resolutions. Using the OS-W2S Label Engine, these datasets were expanded with rich textual annotations to construct the MI-OAD dataset, which contains 163,023 images and 2 million image-caption pairs, making it 40 times larger than the existing RSVG dataset.
• Instance-Level Sentence Caption Generation: Leveraging the output from data preprocessing as prior knowledge, VLMs accurately focus on the target instances and their surrounding contexts. This enables VLMs to extract attribute features for each target and generate multiple sentence captions with varying levels of detail, thus ensuring caption diversity.
• Data Postprocessing: This step aims to generate multiple phrase-level captions for each instance and accurately match caption-instance pairs within each image. It accounts for scenarios where each caption may correspond to one or multiple instances, thereby meeting the dual requirements of precise and approximate localization in practical applications.
Data Collection. As shown in Table 1, we collected eight representative aerial detection datasets, ensuring diverse scenes due to variations in capturing heights and equipment (e.g., satellites and drones) across different datasets. Due to inconsistencies in image resolution and annotation formats, we standardized the resolution by cropping high-resolution images and aligning annotation formats. These processing steps, combined with annotations of instance categories and coordinates inherent to detection tasks, establish a robust foundation for the subsequent annotation pipeline.
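The resolution standardization step can be sketched roughly as follows; the tile size, minimum retained box area, and function name are illustrative assumptions, since the paper does not specify them:

```python
# Hypothetical sketch: crop a large aerial image into fixed-size tiles and
# remap bounding boxes (x1, y1, x2, y2) into each tile's local coordinates.
# Tile size and the minimum retained box area are illustrative choices.

def crop_annotations(boxes, img_w, img_h, tile=1024, min_area=16):
    """Return {(tile_x, tile_y): [boxes in tile-local coords]} for non-empty tiles."""
    tiles = {}
    for ty in range(0, img_h, tile):
        for tx in range(0, img_w, tile):
            local = []
            for x1, y1, x2, y2 in boxes:
                # Intersect each box with the tile window.
                ix1, iy1 = max(x1, tx), max(y1, ty)
                ix2, iy2 = min(x2, tx + tile), min(y2, ty + tile)
                if ix2 > ix1 and iy2 > iy1 and (ix2 - ix1) * (iy2 - iy1) >= min_area:
                    local.append((ix1 - tx, iy1 - ty, ix2 - tx, iy2 - ty))
            if local:
                tiles[(tx, ty)] = local
    return tiles

boxes = [(100, 100, 200, 200), (1500, 300, 1600, 380)]
tiles = crop_annotations(boxes, img_w=2048, img_h=1024)
print(tiles)
```

Annotation formats are then aligned on top of such tile-local boxes, keeping instance categories and coordinates intact for the later annotation stages.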
Data Preprocessing. Data preprocessing aims to simplify complex aerial images, enabling VLMs to effectively focus on relevant regions. Specifically, we process images to extract three critical components: instance regions, foreground regions, and partial instance attributes. (1) Instance regions: These are easily obtained by cropping sub-images
Algorithm 1 Foreground Region Extraction
Require: Bounding box set $B = \{b_1, b_2, \dots, b_N\}$, image size $(w, h)$
Ensure: Foreground region set $R$
1: Step 1: Scale Bounding Boxes
2: for $i = 1$ to $N$ do
3: Compute the area $A_i$ of bounding box $b_i$
4: Determine scaling factor $s_i$ based on $A_i$
5: Update the bounding box $b_i$ to its extended version, ensuring it remains within the image boundaries
6: end for
7: Step 2: Merge Overlapping Boxes
8: for each unmerged box $b_i$ do
9: Let $r \gets b_i$
10: while there exists an unmerged box $b_j$ that overlaps with $r$ do
11: $r \gets \mathrm{MERGE}(r, b_j)$
12: Mark $b_j$ as merged
13: end while
14: Add $r$ to the foreground region set $R$
15: end for
16: return $R$
based on the coordinates provided in the detection annotations. (2) Foreground regions: Given the dense distribution of instances and significant background proportion in aerial imagery, we apply Algorithm 1 to effectively isolate foreground regions. Specifically, we compute the maximum enclosing rectangle of the object bounding boxes to isolate multiple object clusters within each image. (3) Partial Instance Attributes: Inspired by previous approaches [15, 15], we leverage instance attributes as components to generate diverse captions. We focus on six primary attributes: category, size, color, geometric shape, relative position, and absolute position. While the category is predefined, size and absolute position attributes are determined based on manual rules due to their inherent subjective nature and spatial complexity. Specifically, size attributes are classified according to predefined thresholds, and absolute positions are categorized into 25 labeled regions (e.g., Left-Top, Far Right-Bottom). The remaining attributes are dynamically generated by the VLM based on image content during the annotation process.
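A minimal Python rendering of the foreground-extraction procedure in Algorithm 1 might look as follows; the area-dependent scaling factors are assumptions, as the paper does not state their exact values:

```python
# Sketch of Algorithm 1 (foreground region extraction). Boxes are
# (x1, y1, x2, y2); the scaling factors below are illustrative assumptions.

def scale_box(box, w, h):
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    s = 2.0 if area < 32 * 32 else 1.5   # assumed: enlarge small boxes more
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    bw, bh = (x2 - x1) * s / 2, (y2 - y1) * s / 2
    # Clamp the extended box to the image boundaries.
    return (max(0, cx - bw), max(0, cy - bh), min(w, cx + bw), min(h, cy + bh))

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge(a, b):
    # Maximum enclosing rectangle of two boxes.
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def foreground_regions(boxes, w, h):
    unmerged = [scale_box(b, w, h) for b in boxes]
    regions = []
    while unmerged:
        r = unmerged.pop(0)
        changed = True
        while changed:           # keep absorbing boxes that overlap r
            changed = False
            for b in list(unmerged):
                if overlaps(r, b):
                    r = merge(r, b)
                    unmerged.remove(b)
                    changed = True
        regions.append(r)
    return regions

regions = foreground_regions(
    [(10, 10, 40, 40), (45, 45, 80, 80), (300, 300, 340, 340)], 512, 512)
print(len(regions))  # the two nearby boxes merge into one region
```

Isolating such clusters lets the VLM later reason about an instance together with its immediate surroundings rather than the full, mostly-background image.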
Instance-Level Sentence Caption Generation. This step aims to generate three sentence captions with varying levels of detail for each instance, alongside additional instance attributes, by interacting with VLMs. To minimize the complexity and computational costs associated with using the labeling engine, we selected InternVL2.5-38B-AWQ, which requires only an eight-GPU machine, each with 24GB of memory. The interaction with the VLM for each instance is structured into four rounds: (1) Introduction of the overall annotation workflow to the VLM. (2) Provision of the instance region image along with instance category and size attributes, prompting the VLM to extract color and geometric attributes, subsequently generating a self-descriptive caption. (3) Submission of the foreground region image containing the instance, allowing the VLM to determine the instance’s relative position attribute based on surrounding context, extending the previous caption into a relative-position caption. (4) Provision of the absolute position attribute, prompting the VLM to integrate this information into the previously generated caption, thus creating an absolute-position caption. Each interaction utilizes a JSON template to regulate VLM outputs. As a result, each instance is provided with three distinct sentence captions of different descriptive levels and a comprehensive set of six attributes.

Figure 3. Dataset statistics: (a) number of distinct classes per attribute, (b) caption type distribution, (c) caption length distribution, (d) instances per caption, and word clouds for the category, geometry, and relative-location attributes.
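The rule-based size and absolute-position attributes supplied to the VLM can be sketched as below, using the thresholds and 5x5 grid labels shown in Fig. 2; the exact label strings and boundary handling are assumptions:

```python
# Sketch of the rule-based size and absolute-position attributes. Thresholds
# and grid labels follow Fig. 2; the label strings here are illustrative.

SIZE_THRESHOLDS = [0.0005, 0.001, 0.01, 0.2]        # box-area / image-area
SIZE_LABELS = ['tiny', 'small', 'medium', 'big', 'large']
H_LABELS = ['Far Left', 'Left', 'Center', 'Right', 'Far Right']
V_LABELS = ['Top', 'Upper Middle', 'Middle', 'Lower Middle', 'Bottom']

def size_attribute(box, img_w, img_h):
    x1, y1, x2, y2 = box
    ratio = (x2 - x1) * (y2 - y1) / (img_w * img_h)
    for label, t in zip(SIZE_LABELS, SIZE_THRESHOLDS):
        if ratio < t:
            return label
    return SIZE_LABELS[-1]

def absolute_position(box, img_w, img_h):
    # Assign the box centre to one of 25 regions on a 5x5 grid.
    x1, y1, x2, y2 = box
    col = min(4, int((x1 + x2) / 2 / img_w * 5))
    row = min(4, int((y1 + y2) / 2 / img_h * 5))
    return f"{V_LABELS[row]}, {H_LABELS[col]}"

box = (800, 50, 900, 150)                  # 100x100 box in a 1000x1000 image
print(size_attribute(box, 1000, 1000))
print(absolute_position(box, 1000, 1000))
```

Category comes from the detection labels, while color, geometry, and relative position are produced by the VLM during the rounds above.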
Data Postprocessing. Based on the attributes obtained from previous steps, we generate three phrase-level captions per instance using combinations of category, color, and size attributes, resulting in six unique captions per instance. However, due to instance similarities, captions with fewer attributes often correspond to multiple instances. Leveraging attribute-based captions and the recorded attribute information for each instance, we effectively establish caption-instance pairs by comparing the attribute similarity between captions and instances. The attribute similarity is computed using Sentence-BERT [19].
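The caption-instance matching step might be sketched as follows. The paper computes attribute similarity with Sentence-BERT; this self-contained illustration substitutes exact string equality for the embedding-based comparison:

```python
# Simplified sketch of caption-instance matching. The real pipeline scores
# attribute similarity with Sentence-BERT embeddings; here attributes are
# compared by exact string equality for a dependency-free illustration.

def match_caption_to_instances(caption_attrs, instances):
    """Return indices of all instances whose recorded attributes agree with
    every attribute the caption specifies (fewer attributes -> more matches)."""
    matched = []
    for i, inst_attrs in enumerate(instances):
        if all(inst_attrs.get(k) == v for k, v in caption_attrs.items()):
            matched.append(i)
    return matched

instances = [
    {'category': 'car', 'color': 'yellow', 'size': 'medium'},
    {'category': 'car', 'color': 'white',  'size': 'medium'},
    {'category': 'truck', 'color': 'yellow', 'size': 'large'},
]
# A vague caption ("a car") matches two instances, while a more detailed
# one ("a yellow car") pins down a single instance.
print(match_caption_to_instances({'category': 'car'}, instances))
print(match_caption_to_instances({'category': 'car', 'color': 'yellow'}, instances))
```

This is exactly the behavior that yields both single-instance and multi-instance caption pairs in the final dataset.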
# 3.3. MI-OAD Dataset
Using the OS-W2S Label Engine, we created a large-scale, multi-instance dataset for open-set aerial object detection. This dataset comprises 163,023 images and 2 million image-caption pairs, encompassing three levels of language guidance: vocabulary-level, phrase-level, and sentence-level. The average caption length is 11.04 words, providing rich semantic information. Benefiting from the design of the OS-W2S Label Engine, the MI-OAD dataset effectively addresses the limitations of the existing RSVG dataset and establishes the first benchmark dataset for open-set aerial object detection.
Scene Diversity: We made two efforts to ensure scene diversity. First, we collected data from eight detection datasets, which include images taken from various altitudes and viewpoints using drones and satellites. Second, we generated multiple types of captions and performed both data preprocessing and postprocessing to ensure the quality of captions for complex scenes. As a result, there is no need to impose restrictions on the number of objects per image.
Caption Diversity: Each caption is generated based on the attributes of instances. To ensure comprehensive coverage, we defined three sentence caption types and three phrase caption types, each varying in detail based on attribute combinations. The sentence captions provide detailed instance descriptions suitable for precise localization. Specifically, self-sentence captions describe the category, size, color, and geometric attributes of instances. By adding relative positional information, we obtain relative sentence captions, and by incorporating absolute positional information, we form absolute sentence captions. Additionally, three types of phrase captions constructed from combinations of category, color, and size attributes were created to support approximate localization.
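The three phrase-caption types described above could be assembled roughly as:

```python
# Sketch of the three phrase-caption types built from attribute combinations
# (category-color, category-size, category-color-size), mirroring the
# examples in Fig. 2. The template wording is an illustrative assumption.

def phrase_captions(attrs):
    c, col, size = attrs['category'], attrs['color'], attrs['size']
    return [
        f"a {col} {c}",          # category-color
        f"a {size} {c}",         # category-size
        f"a {size} {col} {c}",   # category-color-size
    ]

print(phrase_captions({'category': 'car', 'color': 'yellow', 'size': 'medium'}))
```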
Fig. 3b illustrates the distribution of caption types, highlighting that after applying the sampling strategy described in Section 4.1, the caption types are evenly distributed. Fig. 3a presents the number of distinct expressions for each attribute, highlighting the rich diversity in attributes (relative location, color, and geometry) generated by the VLM. To visually demonstrate the quality of these VLM-generated attributes, we conducted a word cloud analysis as shown in Fig. 3(f)-(h). Notably, the geometry attribute extends beyond basic shapes to include descriptive components (e.g., “a cylindrical tower with three blades”). Furthermore, we analyzed the distribution of caption lengths to illustrate the richness of descriptions, as depicted in Fig. 3c. Collectively, these analyses underscore the caption diversity within our dataset.
Multi-instance Annotation: To better align with real-world applications requiring both precise and approximate localization, each caption corresponds to all relevant instances in the image matching the description, encompassing both single-instance and multi-instance cases. We construct caption-instance pairs by comparing the attributes of captions and instances. As shown in Fig. 3d, $69.4\%$ of captions correspond to a single instance, demonstrating that the generated captions effectively support precise localization even in complex scenes. The remaining captions, which involve multiple instances, fulfill the requirements for approximate localization.
Dataset Scale: The OS-W2S Label Engine is capable of generating high-quality caption annotations for each instance, and the aerial detection dataset contains numerous instance annotations. These conditions enable us to establish a large-scale dataset for open-set aerial detection. Finally, we constructed the MI-OAD dataset, which contains 163,023 images and 2 million image-caption pairs, making it 40 times larger than the existing RSVG dataset.
# 4. Experiments
In this section, we explore three key questions: 1) How can we effectively leverage the MI-OAD dataset? 2) How can we equip existing models with capabilities for openset aerial object detection? 3) How can we evaluate the open-set aerial detection capabilities of models at the word, phrase, and sentence levels?
# 4.1. MI-OAD Dataset Split and Sample
Base and Novel Classes Split. We designate 75 classes as Base and 25 classes as Novel. The class division is based on clustering the class semantic embeddings and selecting one class from each pair of leaf nodes in the clustering tree [30]. This assignment of novel classes ensures that the dataset can effectively evaluate zero-shot transfer capabilities.
Pretraining, Fine-tuning, and Validation Split. We further split the dataset into three subsets (Pretrain, Fine-tune, Validation) based on base and novel classes. Specifically, we grouped images without novel classes (i.e., containing only base classes) into the Pretrain set (P-Set). We then split the remaining images into the Fine-tune set (FT-Set) and the Validation set (V-Set) using a 0.7:0.3 ratio. After this division, the P-Set, FT-Set, and V-Set are suitable for evaluating open-set aerial object detection. The FT-Set and V-Set can also be employed for conventional detection and grounding tasks.
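The split just described can be sketched as follows; the random seed and the data representation are illustrative assumptions:

```python
# Sketch of the Pretrain / Fine-tune / Validation split: images containing
# only base classes form the P-Set; the remaining images are split 0.7 : 0.3
# into FT-Set and V-Set. The seed and tuple representation are assumptions.

import random

def split_dataset(images, novel_classes, seed=0):
    """images: list of (image_id, set_of_classes)."""
    p_set = [i for i, cls in images if not (cls & novel_classes)]
    rest = [i for i, cls in images if cls & novel_classes]
    random.Random(seed).shuffle(rest)
    cut = int(0.7 * len(rest))
    return p_set, rest[:cut], rest[cut:]

images = [(0, {'car'}), (1, {'car', 'helipad'}), (2, {'ship'}), (3, {'helipad'})]
p, ft, v = split_dataset(images, novel_classes={'helipad'})
print(len(p), len(ft), len(v))
```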
Sampling Strategy and Experimental Data Statistics. Considering the large scale of the dataset, the substantial computational resources required, and recognizing this as the first work focused on open-set aerial object detection, we conducted caption sampling post-annotation. Specifically, for each image, we categorized captions by type and then sampled one caption per type category to form image-caption pairs, ensuring dataset diversity and annotation quality. Consequently, the MI-OAD dataset comprises approximately 2 million image-caption pairs and 163,023 detection annotations. Specifically, the P-Set consists of approximately 0.7 million image-caption pairs and 86,155 detection annotations. The FT-Set and V-Set include 0.76 million and 0.32 million image-caption pairs respectively, along with 53,806 and 23,062 detection annotations.
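The per-image caption sampling could be sketched as:

```python
# Sketch of the sampling strategy: for each image, group captions by type
# and keep one caption per type, giving even coverage across the six types.
# The type names and seed are illustrative assumptions.

import random

def sample_captions(captions, seed=0):
    """captions: list of (caption_text, caption_type) for one image."""
    rng = random.Random(seed)
    by_type = {}
    for text, ctype in captions:
        by_type.setdefault(ctype, []).append(text)
    return {ctype: rng.choice(texts) for ctype, texts in by_type.items()}

caps = [('a yellow car', 'phrase_color'), ('a white car', 'phrase_color'),
        ('a medium car', 'phrase_size')]
picked = sample_captions(caps)
print(sorted(picked))   # one caption kept per type
```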
# 4.2. Training Strategy
To equip models with open-set aerial detection capabilities, we revisit and redefine the grounding data format, introducing slight modifications based on representative openset detectors from the natural image domain. Specifically, grounding data for open-set detection in natural images typically comprises an image-caption pair along with corresponding instance annotations [18]. The caption, serving as an image-level description, generally contains multiple noun phrases, each corresponding to a distinct instance. However, due to densely packed instances and complex backgrounds in aerial imagery, generating such detailed and comprehensive captions is challenging. Therefore, we redefine the aerial grounding data format as an image-caption pair accompanied by corresponding instance annotations, where each caption serves as an instance-level description uniformly applicable to all corresponding instances. In other words, instance-level captions can be viewed as extensions of fine-grained categorical information associated with their corresponding instances. Consequently, we unify the grounding and detection tasks by treating the grounding task as a detection task, substituting the instance-level caption for the category label.
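Under this redefinition, converting a grounding sample into a detection sample is essentially a relabeling step; the field names below are illustrative:

```python
# Sketch of the redefined aerial grounding format: each instance-level
# caption is treated as a fine-grained category label, so a grounding sample
# reduces to an ordinary detection sample (caption -> class, boxes -> targets).

def grounding_to_detection(sample):
    """sample: {'caption': str, 'boxes': [(x1, y1, x2, y2), ...]}"""
    return [{'label': sample['caption'], 'box': b} for b in sample['boxes']]

sample = {'caption': 'a medium yellow car parked near a stop sign',
          'boxes': [(10, 20, 60, 90), (200, 40, 260, 110)]}
targets = grounding_to_detection(sample)
print(len(targets), targets[0]['label'])
```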
# 4.3. Evaluation Details
To comprehensively evaluate open-set detection capability, we propose three evaluation protocols simulating real-world scenarios: vocabulary-level detection, phrase-level grounding, and sentence-level grounding, each corresponding to varying levels of detail in natural language input (vocabulary, phrase, and sentence). Additionally, we define three evaluation setups to assess detection performance under different constraints: zero-shot transfer to novel classes without domain adaptation, zero-shot transfer to novel classes with domain adaptation, and fine-tuned evaluation. The primary distinction between the first two setups is the use of the MI-OAD P-Set for domain adaptation of detectors originally designed for natural images. For vocabulary-level evaluation, we use the combination of all instance categories present in each image as the input prompt, closely mirroring real-world applications. For grounding tasks, we sample image-caption pairs for evaluation.
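Building the vocabulary-level input prompt might look like this; the separator between category names is an assumption:

```python
# Sketch of vocabulary-level evaluation prompts: the input prompt for each
# image is the set of all categories present in it. The separator is an
# illustrative assumption.

def vocabulary_prompt(annotations, sep='. '):
    """annotations: list of category names for one image's instances."""
    cats = sorted(set(annotations))
    return sep.join(cats)

print(vocabulary_prompt(['car', 'truck', 'car', 'helipad']))
```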
Table 2. Performance comparison of representative methods on the MI-OAD dataset across different open-set evaluation tasks (vocabulary-level detection, phrase-level grounding, and sentence-level grounding). The evaluation setups differ as follows: zero-shot transfer w/ or w/o domain adaptation indicates whether the model was trained on the MI-OAD P-Set for domain adaptation, while fine-tuned conditions represent models trained on the FT-Set of MI-OAD.
For evaluation metrics, we employ standard detection metrics including mean Average Precision (mAP) and Recall@100. Both mAP and Recall are computed at an IoU threshold of 0.5. Additionally, as illustrated in Fig. 3d, $69.4\%$ of captions correspond to exactly one target and $92.2\%$ correspond to fewer than five instances; thus, we include Recall@1 and Recall@10 for grounding tasks.
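The grounding recall metric can be sketched as follows (Recall@k at an IoU threshold of 0.5):

```python
# Sketch of Recall@k at IoU 0.5: a ground-truth box counts as recalled if
# any of the top-k predictions overlaps it with IoU >= 0.5.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def recall_at_k(preds, gts, k, thr=0.5):
    """preds: boxes sorted by descending confidence; gts: ground-truth boxes."""
    hit = sum(any(iou(p, g) >= thr for p in preds[:k]) for g in gts)
    return hit / len(gts)

gts = [(0, 0, 10, 10), (50, 50, 60, 60)]
preds = [(1, 1, 11, 11), (100, 100, 110, 110)]
print(recall_at_k(preds, gts, k=1))   # only the first gt is recovered -> 0.5
```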
# 4.4. Open-set Aerial Object Detection Results
From Table 2, we evaluate the open-set aerial detection capabilities of two representative approaches—YOLO-World and Grounding DINO—across three different evaluation scenarios. Both methods are evaluated on detection, phrase-level grounding, and sentence-level grounding, reflecting different levels of granularity in open-set detection tasks.
Zero-shot Transfer (w/o Domain Adaptation). When directly applying models trained on natural-image data to the MI-OAD V-Set, performance is notably limited. For instance, YOLO-World achieves a mere $1.3\%$ $AP_{50}$ under sentence-level prompts. Grounding DINO performs slightly better ($5.4\%$ $AP_{50}$), yet both methods exhibit substantial performance gaps, demonstrating the unique challenges posed by open-set aerial object detection.
Zero-shot Transfer (w/ Domain Adaptation). Introducing domain adaptation for these models by training on the MI-OAD P-Set results in considerable performance improvements for both methods. For example, Grounding DINO’s detection $AP_{50}$ improves from $7.7\%$ to $17.8\%$, while its sentence-level grounding $AP_{50}$ increases by $29.5\%$. These results underscore the effectiveness of our proposed dataset.
Fine-tuning. After fine-tuning on the FT-Set, both models achieve superior results. Grounding DINO achieves outstanding performance, obtaining $AP_{50}$ values of $57.3\%$ for detection, $65.2\%$ for phrase grounding, and $57.4\%$ for sentence grounding.
These results demonstrate that the MI-OAD dataset provides an effective basis for advancing open-set aerial object detection and further confirm the importance of large-scale grounding data with rich textual annotations.
"cs.CV",
"cs.DB"
] |
# 1. Introduction
Conversational speech recognition (Conv-ASR), which aims to transcribe natural spoken language accurately, remains a significant challenge in the speech processing area [1, 2]. Unlike isolated speech segments, conversational speech typically involves spontaneous, unstructured language, occasional speaker interruptions, overlapping, and disfluencies, which are very common in the Fisher English [3] and SwitchBoard-1 [4] speech corpora. These factors complicate transcription, particularly in multilingual and low-resource scenarios [5], where the scarcity of training data exacerbates the model generalization issue.
Recent advancements in large speech models, such as Whisper [6] that utilizes large-scale multilingual training data and a multi-task training strategy, have achieved significant performance gains and improved robustness in multilingual ASR. In the meantime, large language models (LLMs), such as GPT [7], Llama [8], and Qwen [9], have profoundly impacted natural language processing, motivating researchers to integrate these powerful models to handle speech understanding tasks, such as ASR and spoken dialogue summarization. These hybrid models, termed Speech Large Language Models (SLLMs) or AudioLLMs, combine traditional acoustic representations with advanced language understanding capabilities [10, 11, 12, 13, 14]. Initial implementations, such as WavLLM [10], combine the representations of the Whisper encoder and a WavLM [15] encoder, while other works, including Qwen-audio series [11, 13] and Meralion-AudioLLM [12], only utilize a Whisper or fine-tuned Whisper encoder to obtain the acoustic representation. The representations are then combined with the embeddings of prompt text tokens and sent into a pretrained LLM, leveraging extensive linguistic knowledge for improved ASR accuracy and task adaptability.
Notwithstanding the above, achieving high performance on conversational speech is still challenging for SLLMs due to the limitation of training data, where large-scale training speech primarily comprises read speech rather than conversational data. Additionally, hallucination in both LLMs and the Whisper model limits the usable speech length when incorporating multi-turn conversations, typically resulting in poor performance for conversational ASR.
In this paper, we propose a novel bi-directional context integration method in SLLMs to boost multilingual continuous conversational ASR. We draw inspiration from recent prompt engineering techniques: providing prior conversational context as a prompt to enhance transcription accuracy in Whisper, employing style-specific prompts to control transcription style in PromptASR [16], and leveraging in-context learning methods [17] to boost zero-shot performance in LLMs. Specifically, our contributions include:
• We propose language-specific prompts tailored to each language, which enhance multilingual capabilities.
• We demonstrate that historical context, and further bi-directional context, improves the performance of conversational ASR in SLLMs.
• We introduce a Two-stage Inference pipeline. Stage 1: decode single segments without contextual information. Stage 2: use these hypotheses as the previous and future contexts during re-decoding.
Experimental results on the Multilingual Conversational Speech and Language Model (MLC-SLM) corpus show that our proposed approach significantly outperforms the baseline systems by $18\%$ relative and even exceeds the performance of the model trained on a much larger dataset augmented with CommonVoice 21.0 [18], achieving superior accuracy with only 1500 hours of training data compared to 6000 hours.
# 2. Proposed Methods
In this section, we present the framework of the SLLM-based multilingual ASR system, along with our proposed methods.
# 2.1. Model Architectures
The model employs a post-alignment design, projecting speech features into the same semantic embedding space as the pretrained LLM. Its overall architecture is shown in Figure 1, consisting of three core components: a Whisper-large-v3 speech encoder, a linear projector as the modality adaptor, and the Gemma-2-2B [19] LLM backbone. During training, we freeze the audio encoder and fully fine-tune the modality adapter and the pretrained LLM rather than relying on PEFT methods like LoRA [20]. This maximizes the LLM’s capacity for encoding acoustic-to-text mappings, leading to more accurate transcription.

[Figure 1: Overall architecture of the SLLM-based ASR system (Whisper-large-v3 encoder, linear projector, Gemma-2-2B LLM backbone), with language-specific and contextual prompt templates for English, French, and Japanese.]
# Algorithm 1 Contextual Masking Training Strategy

Require: history context $P$, future context $F$
Ensure: masked history $\tilde{P}$, masked future $\tilde{F}$

1: $\tilde{P} \gets P$, $\tilde{F} \gets F$
2: if $P \neq \emptyset$ then
3:  if $\mathrm{Uniform}(0,1) < 0.5$ then
4:   $\alpha \gets \mathrm{Uniform}(0, 0.25)$
5:   $T \gets |P|$, $M \gets \lfloor \alpha T \rfloor$
6:   $k \gets \mathrm{RandomInt}(1, \min(3, \max(1, \lfloor M/3 \rfloor)))$
7:   $s \gets \lfloor M/k \rfloor$
8:   for $i = 1$ to $k$ do
9:    $r_i \gets \mathrm{RandomInt}(0, T - s)$
10:    remove substring $[r_i, r_i + s]$ from $\tilde{P}$
11:   end for
12:  end if
13: end if
14: if $F \neq \emptyset$ then
15:  if $\mathrm{Uniform}(0,1) < 0.5$ then
16:   $\alpha' \gets \mathrm{Uniform}(0, 0.25)$
17:   $T' \gets |F|$, $M' \gets \lfloor \alpha' T' \rfloor$
18:   $k' \gets \mathrm{RandomInt}(1, \min(3, \max(1, \lfloor M'/3 \rfloor)))$
19:   $s' \gets \lfloor M'/k' \rfloor$
20:   for $i = 1$ to $k'$ do
21:    $r_i' \gets \mathrm{RandomInt}(0, T' - s')$
22:    remove substring $[r_i', r_i' + s']$ from $\tilde{F}$
23:   end for
24:  end if
25: end if
26: return $\tilde{P}$, $\tilde{F}$
Each training sample is prefixed with a language-specific or contextually enhanced prompt that matches the speech input’s language, depending on whether contexts are given. By ensuring that prompts and audio share the same language, we guarantee truly multilingual ASR behavior while leveraging the LLM’s instruction-following capabilities. Figure 1 also shows examples of templates used for language-specific and contextually enhanced text prompts in some languages. However, in continuous conversational data, the first turn has no history context, while the final turn has no future context; only the middle turns include both. To handle these situations, we use only the corresponding half of the contextual prompt when just one side of the context exists.
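The prompt-selection logic above (full contextual prompt for middle turns, half of the prompt for first or last turns, plain language-specific prompt otherwise) can be sketched as follows. The English template strings follow the Figure 1 example; the helper name and template table are our own illustrative scaffolding:

```python
from typing import Optional

# Hypothetical template table; training uses one entry per language (cf. Figure 1).
BASE_PROMPTS = {"English": "Transcribe speech to text."}

def build_prompt(language: str, history: Optional[str], future: Optional[str]) -> str:
    """Assemble the text prompt for one segment: prepend whichever context
    halves exist, then end with the language-specific instruction."""
    parts = []
    if history:
        parts.append(f"The previous context is: {history}.")
    if future:
        parts.append(f"The next context is: {future}.")
    parts.append(BASE_PROMPTS[language])  # instruction always comes last
    return " ".join(parts)
```

For a first turn, only the future half is emitted; for a final turn, only the history half, mirroring the partial-context handling described above.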
# 2.2. Contextual Masking Strategy
We introduce a contextual masking strategy in the training phase to mimic the potentially flawed contextual information available at inference, preventing the model from converging to rely only on the groundtruth context information.
During training, each non-empty previous or future context is independently subjected to a fair coin flip: with $50\%$ probability it remains intact, otherwise it enters the masking pipeline. When masking is applied, we choose a single character-level removal ratio uniformly between $0$ and $25\%$ of that context’s length, then carve that total removal budget into one to three contiguous spans of equal size at random positions. Because previous and future contexts each have their own keep/mask decision and their own removal budget, the model routinely encounters examples where only one side is gapped, both sides are gapped, or neither is. This trains the model to handle “gapped” histories and futures, which is crucial for inference, where we must feed it its own hypothesis, which might be flawed, as context rather than groundtruth text. This strategy is shown as Algorithm 1.
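The masking procedure described above can be sketched in a few lines of Python. The function name `mask_context` is our own; note that, unlike Algorithm 1, which draws every span start against the original context length, this sketch recomputes positions after each removal:

```python
import random

def mask_context(text: str, mask_prob: float = 0.5, max_ratio: float = 0.25) -> str:
    """With probability `mask_prob`, remove 1-3 equal-sized character spans
    totalling up to `max_ratio` of the context length; otherwise keep it intact."""
    if not text or random.random() >= mask_prob:
        return text                                 # context survives the coin flip
    T = len(text)
    M = int(random.uniform(0, max_ratio) * T)       # total removal budget (chars)
    k = random.randint(1, min(3, max(1, M // 3)))   # number of contiguous spans
    s = M // k                                      # size of each span
    chars = list(text)
    for _ in range(k):
        if s == 0 or len(chars) <= s:
            break                                   # budget too small to remove anything
        r = random.randint(0, len(chars) - s)       # random span start
        del chars[r:r + s]                          # carve out the span
    return "".join(chars)
```

Applied independently to the history and the future context of each sample, this yields the mix of one-sided, two-sided, and unmasked examples described above.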
# 2.3. Two-Stage Inference
During the inference period, we employ a simple two-stage decoding pipeline to exploit the prior information carried by the surrounding context in each conversation.
• Stage 1: Context-agnostic decoding. Each segment is decoded independently, without any surrounding context, to produce an initial hypothesis.
• Stage 2: Context-aware decoding. We re-decode each segment, this time prepending its neighbors’ Stage 1 outputs as “history” and “future” contextual information. The model is expected to refine its transcription for greater coherence across the conversation turns.
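The two stages can be sketched as a short loop, assuming a `decode(segment, history, future)` interface for the model (an assumption on our part, not the toolkit’s actual API):

```python
from typing import Callable, List, Optional

def two_stage_decode(segments: List[object],
                     decode: Callable[[object, Optional[str], Optional[str]], str]) -> List[str]:
    """Stage 1: decode each segment without context. Stage 2: re-decode each
    segment with its neighbours' Stage-1 hypotheses as history/future context."""
    # Stage 1: context-agnostic decoding
    stage1 = [decode(seg, None, None) for seg in segments]
    # Stage 2: context-aware re-decoding
    stage2 = []
    for i, seg in enumerate(segments):
        history = stage1[i - 1] if i > 0 else None                  # first turn: no history
        future = stage1[i + 1] if i < len(segments) - 1 else None   # last turn: no future
        stage2.append(decode(seg, history, future))
    return stage2
```

Because the context fed in Stage 2 is the model’s own Stage 1 hypothesis, it may be flawed, which is exactly the situation the contextual masking strategy of Section 2.2 trains for.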
To demonstrate the upper-bound performance of our proposed methods with limited training data, we also report the results where we employ the groundtruth transcription of the validation set as the context in Stage 2 decoding.
# 3. Experiments
In this section, we detail the dataset we utilize and the technical specifications for both the training and inference phases.
# 3.1. Dataset
Our training set comprises approximately 1500 hours of two-speaker conversational speech in eleven languages provided by NexData 1, namely the MLC-SLM competition dataset, including English (American, British, Filipino, Australian, and Indian accents), French, German, Italian, Portuguese, Spanish, Japanese, Korean, Russian, Thai, and Vietnamese. Each recording features two participants engaging in natural, fluent dialogues on randomly assigned topics, captured in quiet indoor environments using devices such as iPhones. Oracle utterance segmentation and speaker labels are provided to support the development of both speech recognition and speaker diarization. The English subset alone accounts for roughly 500 hours (100 hours per accent), while each of the other ten languages contributes about 100 hours.
To show the significance of our methods, we also include the CommonVoice (CV 21.0) dataset as an external single-segment training supplement to boost our baseline systems. The CV 21.0 data we use comprise approximately 4500 hours of training data, covering the eleven languages featured in the MLC-SLM dataset. By combining the CV 21.0 and MLC-SLM train subsets, we obtain roughly 6000 hours of training data for non-contextual single-segmented speech and 1500 hours of contextual conversational speech. Table 1 shows the statistics for all the data we use.
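The training-hour bookkeeping in this section is simple arithmetic; a quick check with the approximate figures quoted above:

```python
# Approximate training-hour totals quoted in the text.
mlc_slm_hours = 5 * 100 + 10 * 100   # 5 English accents + 10 other languages, ~100 h each
cv_hours = 4500                      # CommonVoice 21.0 train subset (11 languages)
combined_hours = mlc_slm_hours + cv_hours  # single-segment pool used for the S4 system
```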
# 3.2. Experimental setup
We built our models following the architecture shown in Figure 1, utilizing the Whisper-large-v3 encoder as the audio encoder, followed by a linear projector consisting of two linear layers with a subsampling factor of 5, and Gemma-2-2B as the backbone LLM, whose parameters were fully fine-tuned.
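As a rough sketch of the modality adaptor, assuming the subsampling factor of 5 is realized by stacking five consecutive encoder frames before the two linear layers (the paper does not spell out the mechanism, and NumPy stands in for the actual training framework):

```python
import numpy as np

def project_features(feats: np.ndarray, w1: np.ndarray, w2: np.ndarray, k: int = 5) -> np.ndarray:
    """Stack every k consecutive encoder frames (subsampling factor k), then
    apply two linear layers with a ReLU in between.
    Shapes: feats (T, d_enc), w1 (k*d_enc, d_hid), w2 (d_hid, d_llm)."""
    T, d = feats.shape
    T = (T // k) * k                            # drop trailing frames that don't fill a window
    stacked = feats[:T].reshape(T // k, k * d)  # (T/k, k*d_enc)
    hidden = np.maximum(stacked @ w1, 0.0)      # first linear layer + ReLU
    return hidden @ w2                          # second linear layer -> LLM embedding space
```

The output sequence is five times shorter than the encoder output, which keeps the token budget of the LLM manageable for long conversational inputs.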
As shown in Table 2, our baseline model was trained using the MLC-SLM Training dataset only, and the text prompt was fixed to the English prompt “Transcribe speech to text,” regardless of the language of each sample. The S1 model used the same training data as the baseline but employed language-specific prompts for each language, as illustrated in Figure 1.
Then, we introduce History Context in the S2 system. Specifically, we use half of the contextual prompt, e.g., “The previous context is: <history context>. Transcribe speech to text,” and form another 1500 hours of contextual training data. This data is combined with the original single-segmented Train set, totaling 3000 hours, to maintain the model’s capability for both single-segmented speech recognition and contextual speech recognition. Similarly, we further introduce Future Context in the S3 system and obtain the training data using the strategy outlined for S2, maintaining a total of 3000 hours. Finally, the S4 model is trained with extra CV 21.0 data, following the same prompt scheme as the S1 system, incorporating six thousand hours of training data.
Table 1: Dataset statistics. It includes a 1500-hour training set and a 32-hour validation set, covering eleven languages and five different accents in English. We ignore the evaluation set since we lack the transcriptions. CV 21.0 is the train subset from CommonVoice 21.0, only covering the eleven languages corresponding to the MLC-SLM dataset.
Table 2: Model training configurations. Baseline uses English prompt for all languages, while S1-S4 systems all follow the template as shown in Figure 1. CV 21.0 is the CommonVoice 21.0 dataset. Duration is shown in hours.
We built our models using the SLAM-LLM [21] toolkit, running on 8 NVIDIA H20-96GB GPUs. For all models, we use a learning rate of $5 \times 10^{-5}$. Meanwhile, we employed an early-stopping strategy during training, with a tolerance of 2000 training steps, based on validation accuracy; this ensures that the models neither underfit nor overfit across different configurations. During the inference period, we use beam search with a beam size of 4 and block repeated 5-grams to prevent hallucinations, which can otherwise result in dozens of phrase repeats under certain situations.
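The repeated-n-gram guard can be illustrated with a small helper (our own sketch, not SLAM-LLM’s implementation): a hypothesis is flagged as soon as any 5-gram occurs twice, which is exactly the loop pattern hallucinations produce:

```python
def repeats_ngram(tokens, n: int = 5) -> bool:
    """Return True if any n-gram of size n occurs more than once in `tokens`."""
    seen = set()
    for i in range(len(tokens) - n + 1):
        ngram = tuple(tokens[i:i + n])
        if ngram in seen:
            return True      # this n-gram already appeared: a repeat loop
        seen.add(ngram)
    return False
```

In beam search, candidate extensions that would make this check return True are pruned, capping the length of any repeated phrase at 5 tokens.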
# 4. Experimental Results
Table 3 summarizes the Word Error Rate (WER) and Character Error Rate (CER) achieved by our models across eleven languages and five accents on the validation set. In detail, we calculate CER for Japanese, Korean, and Thai, while WER is used for the rest of the languages based on the characteristics of each language. For Avg. Valid, we report the averaged Mix Error Rate (MER) on the validation set.
Table 3: Word Error Rate (WER↓) and Character Error Rate (CER↓) results for each of the models. The results for split languages are based on the validation dataset. Mix Error Rate (MER↓) is reported for average performance. Stage1 and Stage2 correspond to context-agnostic decoding and context-aware decoding as described in Section 2.3, respectively. Stage2-G means that we use the groundtruth as the context information in Stage2 decoding instead of the hypotheses from Stage1, showing the upper-bound performance.
First of all, our strong Baseline system improves absolute MER by $5\%$ compared with the official Whisper-Qwen and Whisper-Llama baselines, demonstrating the effectiveness of full-parameter tuning under low-resource settings for AudioLLMs targeting the ASR task. Introducing language-specific prompts in S1 yields a substantial relative reduction of $10.4\%$ in average MER, from $16.60\%$ to $14.87\%$, with nearly every language benefiting; for example, Japanese CER decreases from $24.07\%$ to $17.98\%$ and Portuguese WER from $32.97\%$ to $28.66\%$.
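The relative reductions quoted throughout this section follow directly from the absolute MERs; a one-liner makes the arithmetic explicit:

```python
def relative_reduction(before: float, after: float) -> float:
    """Relative error-rate reduction in percent."""
    return (before - after) / before * 100.0
```

Applying it to the S1 numbers reproduces the $10.4\%$ figure, and comparing the best system ($13.56\%$) against the baseline ($16.60\%$) gives roughly the $18\%$ relative gain reported in Section 1.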
Then, compared to S1, both S2-Stage1 and S3-Stage1 introduce additional variability during training (historical context in S2 and bi-directional, i.e., both past and future, context in S3), which appears to regularize the model and mitigate overfitting. As a result, S2-Stage1 improves average MER from $14.87\%$ to $14.30\%$, with particularly large gains on languages such as Portuguese (from $28.66\%$ to $24.77\%$) and Vietnamese (from $20.09\%$ to $16.19\%$), even though the decoding itself remains context-agnostic, the same as in S1. Even more striking, S3-Stage1 further lowers MER to $13.84\%$, outperforming both S1 and S2-Stage1 and underscoring the benefit of richer contextual variation in the training phase.
When we move from Stage1 to Stage2 decoding, i.e., from context-agnostic inference to context-aware inference, the model yields additional improvements even with the imperfect context obtained from Stage1. In S2-Stage2, it brings MER down from $14.30\%$ to $14.15\%$, while S3-Stage2 reduces MER from $13.84\%$ to $13.56\%$. These consistent gains confirm that conditioning on preceding (and, in S3’s case, also following) hypotheses at the inference phase provides useful disambiguation, complementing the benefits of context-augmented training. For an upper-bound comparison, S3-Stage2-G uses groundtruth context when decoding, achieving an MER of $13.16\%$. This gap quantifies the remaining potential if the context were perfect.
Finally, we compare our best 1500-hour system, S3-Stage2, against the S4 model that uses 6000 hours of training data. Despite using only one quarter of the data, S3-Stage2 outperforms S4 in average MER ($13.56\%$ vs. $13.63\%$), which demonstrates the diminishing marginal returns of simply scaling up the training data (i.e., each additional hour yields smaller gains) and, conversely, the substantial impact that context-aware modeling has on conversational ASR performance.
In summary, each successive enhancement, whether from language-specific prompts or more contextual information, consistently provides additive improvements. | This paper introduces the integration of language-specific bi-directional context into a speech large language model (SLLM) to improve multilingual continuous conversational automatic speech recognition (ASR). We propose a character-level contextual masking strategy during training, which randomly removes portions of the context to enhance robustness and better emulate the flawed transcriptions that may occur during inference. For decoding, a two-stage pipeline is utilized: initial isolated segment decoding followed by context-aware re-decoding using neighboring hypotheses. Evaluated on the 1500-hour Multilingual Conversational Speech and Language Model (MLC-SLM) corpus covering eleven languages, our method achieves an 18% relative improvement compared to a strong baseline, outperforming even the model trained on 6000 hours of data for the MLC-SLM competition. These results underscore the significant benefit of incorporating contextual information in multilingual continuous conversational ASR. | [
"cs.CL",
"eess.AS"
] |